All AFNI program -help files
This page auto-generated on Tue May 6 10:05:35 PM EDT 2025
AFNI program: 1dApar2mat
Usage: 1dApar2mat dx dy dz a1 a2 a3 sx sy sz hx hy hz
* This program computes the affine transformation matrix
from the set of 3dAllineate parameters.
* The result is printed to stdout, and can be captured
by Unix shell redirection or piping (e.g., '>', '>>', '|').
See the EXAMPLE, far below.
* One use for 1dApar2mat is to take a set of parameters
from '3dAllineate -1Dparam_save', alter them in some way,
and re-compute the corresponding matrix. For example,
compute the full affine transform with 12 parameters,
but then omit the final 6 parameters to see what the
'pure' shift+rotation matrix looks like (cf. the EXTRA SKETCH below).
* The 12 parameters are, in the order used on the 1dApar2mat command line
(the same order as output by 3dAllineate):
x-shift in mm
y-shift in mm
z-shift in mm
z-angle (roll) in degrees (not radians!)
x-angle (pitch) in degrees
y-angle (yaw) in degrees
x-scale unitless factor, in [0.10,10.0]
y-scale unitless factor, in [0.10,10.0]
z-scale unitless factor, in [0.10,10.0]
y/x-shear unitless factor, in [-0.3333,0.3333]
z/x-shear unitless factor, in [-0.3333,0.3333]
z/y-shear unitless factor, in [-0.3333,0.3333]
* Parameters omitted from the end of the command line get their
default values (0 except for scales, which default to 1).
* At least 1 parameter must be given, or you get this help message :)
The minimum command line is
1dApar2mat 0
which will output the identity matrix.
* Legal scale and shear factors have limited ranges, as
described above. An input value outside the given range
will be reset to the default value for that factor (1 or 0).
* UNUSUAL SPECIAL CASES:
If you used 3dAllineate with any of the options described
under 'CHANGING THE ORDER OF MATRIX APPLICATION' or you
used the '-EPI' option, then the order of parameters inside
3dAllineate will no longer be the same as the parameter order
in 1dApar2mat. In such a situation, the matrix output by
this program will NOT agree with that output by 3dAllineate
for the same set of parameter numbers :(
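* EXTRA SKETCH (the 12 parameter values below are invented purely to
illustrate the parameter-trimming idea described above):
1dApar2mat 0 1 2 3 4 5 1.1 0.9 1.0 0.1 0 0
gives the full affine matrix, while
1dApar2mat 0 1 2 3 4 5
gives the matrix built from the same shifts and rotations only; comparing
the two outputs shows what the scale and shear factors contribute.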
* EXAMPLE:
1dApar2mat 0 1 2 3 4 5
to get a rotation matrix with some shifts; the output is:
# mat44 1dApar2mat 0 1 2 3 4 5 :
0.994511 0.058208 -0.086943 0.000000
-0.052208 0.996197 0.069756 1.000000
0.090673 -0.064834 0.993768 2.000000
If you wish to capture this matrix all on one line, you can
combine various Unix shell and command tricks/tools, as in
echo `1dApar2mat 0 1 2 3 4 5 | tail -3` > Fred.aff12.1D
This 12-numbers-in-one-line is the format output by '-1Dmatrix_save'
in 3dAllineate and 3dvolreg.
* FANCY EXAMPLE:
Tricksy command line stuff to compute the inverse of a matrix
set fred = `1dApar2mat 0 0 0 3 4 5 1 1 1 0.2 0.1 0.2 | tail -3`
cat_matvec `echo $fred | sed -e 's/ /,/g' -e 's/^/MATRIX('/`')' -I
* ALSO SEE: Programs cat_matvec and 1dmatcalc for doing
simple matrix arithmetic on such files.
* OPTIONS: This program has no options. Love it or leave it :)
* AUTHOR: Zhark the Most Affine and Sublime - April 2019
AFNI program: 1dAstrip
Usage: 1dAstrip < input > output
This very simple program strips non-numeric characters
from a file, so that it can be processed by other AFNI
1d programs. For example, if your input is
x=3.6 y=21.6 z=14.2
then your output would be
3.6 21.6 14.2
* Non-numeric characters are replaced with blanks.
* The letter 'e' is preserved if it is preceded
or followed by a numeric character. This is
to allow for numbers like '1.2e-3'.
* Numeric characters, for the purpose of this
program, are defined as the digits '0'..'9',
and '.', '+', '-'.
* The program is simple and can easily end up leaving
undesired junk characters in the output. Sorry.
* This help string is longer than the rest of the
source code to this program!
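Example (a small sketch; the file name and values are arbitrary):
echo "x=3.6 y=21.6 z=14.2" | 1dAstrip > vals.1D
leaves only the numbers (separated by blanks) in vals.1D, ready for
the other AFNI 1d programs.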
AFNI program: 1dBandpass
Usage: 1dBandpass [options] fbot ftop infile ~1~
* infile is an AFNI *.1D file; each column is processed
* fbot = lowest frequency in the passband, in Hz
[can be 0 if you want to do a lowpass filter only,
but the mean and Nyquist freq are always removed]
* ftop = highest frequency in the passband (must be > fbot)
[if ftop > Nyquist freq, then we have a highpass filter only]
* You cannot construct a 'notch' filter with this program!
* Output vectors appear on stdout; redirect as desired
* Program will fail if fbot and ftop are too close for comfort
* The actual FFT length used will be printed, and may be larger
than the input time series length for the sake of efficiency.
Options: ~1~
-dt dd = set time step to 'dd' sec [default = 1.0]
-ort f.1D = Also orthogonalize input to columns in f.1D
[only one '-ort' option is allowed]
-nodetrend = Skip the quadratic detrending of the input
-norm = Make output time series have L2 norm = 1
Example: ~1~
1deval -num 1000 -expr 'gran(0,1)' > r1000.1D
1dBandpass 0.025 0.20 r1000.1D > f1000.1D
1dfft f1000.1D - | 1dplot -del 0.000977 -stdin -plabel 'Filtered |FFT|'
Goal: ~1~
* Mostly to test the functions in thd_bandpass.c -- RWCox -- May 2009
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dBport
Usage: 1dBport [options]
Creates a set of columns of sines and cosines for the purpose of
bandpassing via regression (e.g., in 3dDeconvolve). Various options
are given to specify the duration and structure of the time series
to be created. Results are written to stdout, and usually should be
redirected appropriately (cf. EXAMPLES, infra). The file produced
could be used with the '-ortvec' option to 3dDeconvolve, for example.
OPTIONS
-------
-band fbot ftop = Specify lowest and highest frequencies in the passband.
fbot can be 0 if you want to do a highpass filter only;
on the other hand, if ftop > Nyquist frequency, then
it's a lowpass filter only.
** This 'option' is actually mandatory! (At least once.)
* For the un-enlightened, the Nyquist frequency is the
highest frequency supported on the given grid, and
is equal to 0.5/TR (units are Hz if TR is in s).
* The lowest nonzero frequency supported on the grid
is equal to 1/(N*TR), where N=number of time points.
** Multiple -band options can be used, if needed.
If the bands overlap, regressors will NOT be duplicated.
* That is, '-band 0.01 0.05 -band 0.03 0.08' is the same
as using '-band 0.01 0.08'.
** Note that if fbot==0 and ftop>=Nyquist frequency, you
get a 'complete' set of trig functions, meaning that
using these in regression is effectively a 'no-pass'
filter -- probably not what you want!
** It is legitimate to set fbot = ftop.
** The 0 frequency (fbot = 0) component is all 1, of course.
But unless you use the '-quad' option, nothing generated
herein will deal well with linear-ish or quadratic-ish
trends, which fall below the lowest nonzero frequency
representable in a full cycle on the grid:
f_low = 1 / ( NT * TR )
where NT = number of time points.
** See the fourth EXAMPLE to learn how to use 3dDeconvolve
to generate a file of polynomials for regression fun.
-invert = After computing which frequency indexes correspond to the
input band(s), invert the selection -- that is, output
all those frequencies NOT selected by the -band option(s).
See the fifth EXAMPLE.
-nozero } Do NOT generate the 0 frequency (constant) component
*OR } when fbot = 0; this has the effect of setting fbot to
-noconst } 1/(N*TR), and is essentially a convenient way to say
'eliminate all oscillations below the ftop frequency'.
-quad = Add regressors for linear and quadratic trends.
(These will be the last columns in the output.)
-input dataset } One of these options is used to specify the number of
*OR* } time points to be created, as in 3dDeconvolve.
-input1D 1Dfile } ** '-input' allows catenated datasets, as in 3dDeconvolve.
*OR* } ** '-input1D' assumes TR=1 unless you use the '-TR' option.
-nodata NT [TR] } ** One of these options is mandatory, to specify the length
of the time series file to generate.
-TR del = Set the time step to 'del' rather than use the one
given in the input dataset (if any).
** If TR is not specified by the -input dataset or by
-nodata or by -TR, the program will assume it is 1.0 s.
-concat rname = As in 3dDeconvolve, used to specify the list of start
indexes for concatenated runs.
** Also as in 3dDeconvolve, if the -input dataset is auto-
catenated (by providing a list of more than one dataset),
the run start list is automatically generated. Otherwise,
this option is needed if more than one run is involved.
EXAMPLES
--------
The first example provides basis functions to filter out all frequency
components from 0 to 0.25 Hz:
1dBport -nodata 100 1 -band 0 0.25 > highpass.1D
The second example provides basis functions to filter out all frequency
components from 0.25 Hz up to the Nyquist frequency:
1dBport -nodata 100 1 -band 0.25 666 > lowpass.1D
The third example shows how to examine the results visually, for fun:
1dBport -nodata 100 1 -band 0.41 0.43 | 1dplot -stdin -thick
The fourth example shows how to use 3dDeconvolve to generate a file of
polynomial 'orts', in case you find yourself needing this ability someday
(e.g., when stranded on a desert isle, with Gilligan, the Skipper, et al.):
3dDeconvolve -nodata 100 1 -polort 2 -x1D_stop -x1D stdout: | 1dcat stdin: > pol3.1D
The fifth example shows how to use 1dBport to generate a set of regressors to
eliminate all frequencies EXCEPT those in the selected range:
1dBport -nodata 100 1 -band 0.03 0.13 -nozero -invert | 1dplot -stdin
In this example, the '-nozero' flag is used because the next step will be to
run 3dDeconvolve with '-polort 2' and '-ortvec' to get rid of the undesirable stuff.
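A further sketch (not one of the examples above): for two concatenated
100-point runs, the run-start list can be supplied inline, assuming the
'1D: ...' syntax is accepted here just as it is in 3dDeconvolve:
1dBport -nodata 200 1 -band 0 0.25 -concat '1D: 0 100' > highpass2run.1D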
ETYMOLOGICAL NOTES
------------------
* The word 'ort' was coined by Andrzej Jesmanowicz, as a shorthand name for
a timeseries to which you want to 'orthogonalize' your data.
* 'Ort' actually IS an English word, and means 'a scrap of food left from a meal'.
As far as I know, its only usage in modern English is in crossword puzzles,
and in Scrabble.
* For other meanings of 'ort', see http://en.wikipedia.org/wiki/Ort
* Do not confuse 'ort' with 'Oort': http://en.wikipedia.org/wiki/Oort_cloud
AUTHOR -- RWCox -- Jan 2012
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dcat
Usage: 1dcat [options] a.1D b.1D ...
where each file a.1D, b.1D, etc. is a 1D file.
In the simplest form, a 1D file is an ASCII file of numbers
arranged in rows and columns.
1dcat takes as input one or more 1D files, and writes out a 1D file
containing the side-by-side concatenation of all or a subset of the
columns from the input files.
* Output goes to stdout (the screen); redirect (e.g., '>') to save elsewhere.
* All files MUST have the same number of rows!
* Any header lines (i.e., lines that start with '#') will be lost.
* For generic 1D file usage help and information, see '1dplot -help'
-----------
TSV files: [Sep 2018]
-----------
* 1dcat can now also read .tsv files, which are columns of values separated
by tab characters (tsv = tab separated values). The first row of a .tsv
file is a set of column labels. After the header row, each column is either
all numbers, or is a column of strings. For example
Col 1 Col 2 Col 3
3.2 7.2 Elvis
8.2 -1.2 Sinatra
6.66 33.3 20892
In this example, the column labels contain spaces, which are NOT separators;
the only column separator used in a .tsv file is the tab character.
The first and second columns are converted to number columns, since every
value (after the label/header row) is a numeric string. The third column
is stored as strings, since some of the entries are not valid numbers.
* 1dcat can deal with a mix of .1D and .tsv files. The .tsv file header
rows are NOT output by default, since .1D files don't have such headers.
* The usual output from 1dcat is NOT a .tsv file - blanks are used for
separators. You can use the '-tsvout' option to get TSV formatted output.
* If you mix .1D and .tsv files, the number of data rows in each file
must be the same. Since the header row in a .tsv file is NOT used here,
the total number of lines in a .tsv file must be 1 more than the number
of lines in a .1D file for the two files to match in this program.
* The purpose of supporting .tsv files is for eventual compatibility with
the BIDS format http://bids.neuroimaging.io - which uses .tsv files
extensively to provide auxiliary information for (F)MRI datasets.
* Column selectors (like '[0,3]') can be used on .tsv files, but row selectors
(like '{0,3..5}') cannot be used on .tsv files - at this time :(
* You can also select a column in a .tsv file by using the label at the top
of the column. A BIDS-related example:
1dcat sub-666_task-XXX_events.tsv'[onset,duration,trial_type,reaction_time]'
A similar example, which outputs a list of the trial types in an imaging run:
1dcat sub-666_task-XXX_events.tsv'[trial_type]' | sort | uniq
* Since .1D files don't have headers, the label method of column selection
doesn't work with such inputs; you must use integer column selectors
on .1D files.
* NOTE WELL: The string 'N/A' or 'n/a' in a column that is otherwise numeric
will be considered to be a number, and will be replaced on input
with the mean of the "true" numbers in the column -- there is
no concept of missing data in an AFNI .1D file.
++ If you don't like this, well ... too bad for you.
* NOTE WELL: 1dcat now also allows comma separated value (.csv) files. These
are treated the same as .tsv files, with a header line, et cetera.
--------
OPTIONS:
--------
-tsvout = Output in a TSV (.tsv) format, where the values in each row
are separated by tabs, not blanks. Also, a header line will
be provided, as TSV files require.
-csvout = Output in a CSV (.csv) format, where the values in each row
are separated by commas, not blanks. Also, a header line will
be provided, as CSV files require.
-nonconst = Columns that are identically constant will be omitted
from the output.
-nonfixed = Keep only columns that are marked as 'free' in the
3dAllineate header from '-1Dparam_save'.
If there is no such header, all columns are kept.
* NOTE: -nonconst and -nonfixed don't have any effect on
.tsv/.csv files, and the use of these options
has NOT been tested at all when the inputs
are a mixture of .tsv/.csv and .1D files.
-form FORM = Format of the numbers to be output.
You can also substitute -form FORM with shortcuts such
as -i, -f, or -c.
For help on -form's usage, and its shortcut versions
see ccalc's help for the option of the same name.
-stack = Stack the columns of the resultant matrix in the output.
You can't use '-stack' with .tsv/.csv files :(
-sel SEL = Apply the same column/row selection string to all filenames
on the command line.
For example:
1dcat -sel '[0,2]' f1.1D f2.1D
is the same as: 1dcat f1.1D'[0,2]' f2.1D'[0,2]'
The advantage of the option is that it allows wildcard use
in file specification so that you can run something like:
1dcat -sel '[0,2]' f?.1D
-OKempty: Exit quietly when encountering an empty file on disk.
Note that if the file is poorly formatted, it might be
considered empty.
EXAMPLE:
--------
Input file 1:
1
2
3
4
Input file 2:
5
6
7
8
1dcat data1.1D data2.1D > catout.1D
Output file:
1 5
2 6
3 7
4 8
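A hedged variation on the same example: to write the concatenated columns
as a CSV file with a header line (the exact header labels chosen for plain
.1D inputs are not shown here), one could use
1dcat -csvout data1.1D data2.1D > catout.csv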
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dCorrelate
Usage: 1dCorrelate [options] 1Dfile 1Dfile ...
------
* Each input 1D column is a collection of data points.
* The correlation coefficient between each column pair is computed, along
with its confidence interval (via a bias-corrected bootstrap procedure).
* The minimum sensible column length is 7.
* At least 2 columns are needed [in 1 or more .1D files].
* If there are N input columns, there will be N*(N-1)/2 output rows.
* Output appears on stdout; redirect ('>' or '>>') as needed.
* Only one correlation method can be used in one run of this program.
* This program is basically the bastard offspring of program 1ddot.
* Also see http://en.wikipedia.org/wiki/Confidence_interval
-------
Methods [actually, only the first letter is needed to choose a method]
------- [and the case doesn't matter: '-P' and '-p' both = '-Pearson']
-Pearson = Pearson correlation [the default method]
-Spearman = Spearman (rank) correlation [more robust vs. outliers]
-Quadrant = Quadrant (binarized) correlation [most robust, but weaker]
-Ktaub = Kendall's tau_b 'correlation' [popular somewhere, maybe]
-------------
Other Options [these options cannot be abbreviated!]
-------------
-nboot B = Set the number of bootstrap replicates to 'B'.
* The default value of B is 4000.
* A larger number will give somewhat more accurate
confidence intervals, at the cost of more CPU time.
-alpha A = Set the 2-sided confidence interval width to '100-A' percent.
* The default value of A is 5, giving the 2.5..97.5% interval.
* The smallest allowed A is 1 (0.5%..99.5%) and the largest
allowed value of A is 20 (10%..90%).
* If you are interested in assessing whether the 'p-value' of a
correlation is smaller than 5% (say), then you should use
'-alpha 10' and see if the confidence interval includes 0.
-block = Attempt to allow for serial correlation in the data by doing
*OR* variable-length block resampling, rather than completely
-blk random resampling as in the usual bootstrap.
* You should NOT do this unless you believe that serial
correlation (along each column) is present and significant.
* Block resampling requires at least 20 data points in each
input column. Fewer than 20 will turn off this option.
-----
Notes
-----
* For each pair of columns, the output includes the correlation value
as directly calculated, plus the bias-corrected bootstrap value, and
the desired (100-A)% confidence interval [also via bootstrap].
* The primary purpose of this program is to provide an easy way to get
the bootstrap confidence intervals, since people almost always seem to use
the asymptotic normal theory to decide if a correlation is 'significant',
and this often seems misleading to me [especially for short columns].
* Bootstrapping confidence intervals for the inverse correlations matrix
(i.e., partial correlations) would be interesting -- anyone out there
need this ability?
-------------
Sample output [command was '1dCorrelate -alpha 10 A2.1D B2.1D']
-------------
# Pearson correlation [n=12 #col=2]
# Name Name Value BiasCorr 5.00% 95.00% N: 5.00% N:95.00%
# -------- -------- -------- -------- -------- -------- -------- --------
A2.1D[0] B2.1D[0] +0.57254 +0.57225 -0.03826 +0.86306 +0.10265 +0.83353
* Bias correction of the correlation had little effect; this is very common.
++ To be clear, the bootstrap bias correction is to allow for potential bias
in the statistical estimate of correlation when the sample size is small.
++ It cannot correct for biases that result from faulty data (or faulty
assumptions about the data).
* The correlation is NOT significant at this level, since the CI (confidence
interval) includes 0 in its range.
* For the Pearson method ONLY, the last two columns ('N:', as above) also
show the widely used asymptotic normal theory confidence interval. As in
the example, the bootstrap interval is often (but not always) wider than
the theoretical interval.
* In the example, the normal theory might indicate that the correlation is
significant (less than a 5% chance that the CI includes 0), but the
bootstrap CI shows that this is not a reasonable statistical conclusion.
++ The principal reason that I wrote this program was to make it easy
to check if the normal (Gaussian) theory for correlation significance
testing is reasonable in any given case -- for small samples, it often
is NOT reasonable!
* Using the same data with the '-S' option gives the table below, again
indicating that there is no significant correlation between the columns
(note also the lack of the 'N:' results for Spearman correlation):
# Spearman correlation [n=12 #col=2]
# Name Name Value BiasCorr 5.00% 95.00%
# -------- -------- -------- -------- -------- --------
A2.1D[0] B2.1D[0] +0.46154 +0.42756 -0.23063 +0.86078
-------------
SAMPLE SCRIPT
-------------
This script generates random data and correlates it until it is
statistically significant at some level (default=2%). Then it
plots the data that looks correlated. The point is to show what
purely random stuff that appears correlated can look like.
(Like most AFNI scripts, this is written in tcsh, not bash.)
#!/bin/tcsh
set npt = 20
set alp = 2
foreach fred ( `count_afni -dig 1 1 1000` )
1dcat jrandom1D:${npt},2 > qqq.1D
set aabb = ( `1dCorrelate -spearman -alpha $alp qqq.1D | grep qqq.1D | colrm 1 42` )
set ab = `ccalc -form rint "1000 * $aabb[1] * $aabb[2]"`
echo $fred $ab
if( $ab > 1 )then
1dplot -one -noline -x qqq.1D'[0]' -xaxis -1:1:20:5 -yaxis -1:1:20:5 \
-DAFNI_1DPLOT_BOXSIZE=0.012 \
-plabel "N=$npt trial#=$fred \alpha=${alp}% => r\in[$aabb[1],$aabb[2]]" \
qqq.1D'[1]'
break
endif
end
\rm qqq.1D
----------------------------------------------------------------------
*** Written by RWCox (AKA Zhark the Mad Correlator) -- 19 May 2011 ***
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: @1dDiffMag
Usage: @1dDiffMag file.1D
* Computes a magnitude estimate of the first differences of a 1D file.
* Differences are computed down each column.
* The result -- a single number -- is on stdout.
* But (I hear you say), what IS the result?
* For each column, the standard deviation of the first differences is computed.
* The final result is the square-root of the sum of the squares of these stdev values.
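Example (a sketch; 'mot.1D' is assumed to be a 6-column motion parameter
file, e.g. from '3dvolreg -1Dfile'):
set dmag = `@1dDiffMag mot.1D`
echo "motion first-difference magnitude = $dmag"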
AFNI program: 1ddot
Usage: 1ddot [options] 1Dfile 1Dfile ...
* Prints out correlation matrix of the 1D files and
their inverse correlation matrix.
* Output appears on stdout.
* Program 1dCorrelate does something similar-ish.
Options:
-one = Make 1st vector be all 1's.
-dem = Remove mean from all vectors (conflicts with '-one')
-cov = Compute with covariance matrix instead of correlation
-inn = Compute with inner product matrix instead
-rank = Compute Spearman rank correlation instead
(also implies '-terse')
-terse= Output only the correlation or covariance matrix,
without any of the garnish.
-okzero= Do not quit if a vector is all zeros.
The correlation matrix will have 0 where NaNs ought to go.
Expect rubbish in the inverse matrices if all-zero
vectors exist.
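Example (a sketch; the file names are arbitrary, and each file is assumed
to hold a single column):
1ddot -dem -terse a.1D b.1D c.1D
prints only the 3x3 correlation matrix of the de-meaned columns.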
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dDW_Grad_o_Mat++
++ Program version: 2.2
Simple function to manipulate DW gradient vector files, b-value
files, and b- or g-matrices. Let: g_i be one of Ng spatial gradients
in three dimensions; |g_i| = 1, and the g-matrix is G_{ij} = g_i * g_j
(i.e., dyad of gradients, without b-value included); and the DW-scaled
b-matrix is B_{ij} = b * g_i * g_j.
**This new version of the function** will replace the original/older
version (1dDW_Grad_o_Mat). The new version has similar functionality, but
improved defaults:
+ it does not average b=0 volumes together by default;
+ it does not remove the top b=0 line by default;
+ output has same scaling as input by default (i.e., by bval or not);
and a switch is used to turn *off* scaling, for unit magn output
(which is cleverly concealed under the name '-unit_mag_out').
Wherefore, you ask? Well, times change, and people change.
The above functionality is still available, but each just requires
selection with command line switches.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
As of right now, one can input:
+ 3 rows of gradients (as output from dcm2nii, for example);
+ 3 columns of gradients;
+ 6 columns of g- or b-matrices, in `diagonal-first' (-> matA) order:
Bxx, Byy, Bzz, Bxy, Bxz, Byz,
which is used in 3dDWItoDT, for example;
+ 6 columns of g- or b-matrices, in `row-first' (-> matT) order:
Bxx, 2*Bxy, 2*Bxz, Byy, 2*Byz, Bzz,
which is output by TORTOISE, for example;
+ when specifying input file, one can use the brackets '{ }'
in order to specify a subset of rows to keep (NB: probably
can't use this grad-filter when reading in row-data right
now).
During processing, one can:
+ flip the sign of any of the x-, y- or z-components, which
may be necessary to make the scanned data and tracking
work happily together;
+ filter out all `zero' rows of recorded reference images,
THOUGH this is not really recommended.
One can then output:
+ 3 columns of gradients;
+ 6 columns of g- or b-matrices, in 'diagonal-first' order;
+ 6 columns of g- or b-matrices, in 'row-first' order;
+ as well as including a column of b-values (such as used in, e.g.,
DSI-Studio);
+ as well as explicitly including a row of zeros at the top.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ RUNNING:
1dDW_Grad_o_Mat++ \
{ -in_row_vec | -in_col_vec | \
-in_col_matA | -in_col_matT } INFILE \
{ -flip_x | -flip_y | -flip_z | -no_flip } \
{ -out_row_vec | -out_col_vec | \
-out_col_matA | -out_col_matT } OUTFILE \
{ -in_bvals BVAL_FILE } \
{ -out_col_bval } \
{ -out_row_bval_sep BB | -out_col_bval_sep BB } \
{ -unit_mag_out } \
{ -bref_mean_top } \
{ -bmax_ref THRESH } \
{ -put_zeros_top } \
where:
(one of the following formats of input must be given):
-in_row_vec INFILE :input file of 3 rows of gradients (e.g.,
dcm2nii-format output).
-in_col_vec INFILE :input file of 3 columns of gradients.
-in_col_matA INFILE :input file of 6 columns of b- or g-matrix in
'A(FNI)' `diagonal first'-format. (See above.)
-in_col_matT INFILE :input file of 6 columns of b- or g-matrix in
'T(ORTOISE)' `row first'-format. (See above.)
(one of the following formats of output must be given):
-out_row_vec OUTFILE :output file of 3 rows of gradients.
-out_col_vec OUTFILE :output file of 3 columns of gradients.
-out_col_matA OUTFILE :output file of 6 columns of b- or g-matrix in
'A(FNI)' `diagonal first'-format. (See above.)
-out_col_matT OUTFILE :output file of 6 cols of b- or g-matrix in
'T(ORTOISE)' `row first'-format. (See above.)
(and any of the following options may be used):
-in_bvals BVAL_FILE :BVAL_FILE is a file of b-values, either a single
row (such as the 'bval' file generated by
dcm2nii) or a single column of numbers. Must
have the same number of entries as the number
of grad vectors or matrices.
-out_col_bval :switch to put a column of the bvalues as the
first column in the output data.
-out_row_bval_sep BB :output a file BB of bvalues in a single row.
-out_col_bval_sep BB :output a file BB of bvalues in a single column.
-unit_mag_out :switch so that each vector/matrix from the INFILE
is scaled to either unit or zero magnitude.
(Supplementary input bvalues would be ignored
in the output matrix/vector, but not in the
output bvalues themselves.) The default
behavior of the function is to leave the output
scaled however it is input (while also applying
any input BVAL_FILE).
-flip_x :change sign of first column of gradients (or of
the x-component parts of the matrix)
-flip_y :change sign of second column of gradients (or of
the y-component parts of the matrix)
-flip_z :change sign of third column of gradients (or of
the z-component parts of the matrix)
-no_flip :don't change any gradient/matrix signs. This
is an extraneous switch, as the default is to
not flip any signs (this is mainly used for
some scripting convenience).
-check_abs_min VVV :By default, this program checks input matrix
formats for consistency (having positive semi-
definite diagonal matrix elements). It will fail
if those don't occur. However, sometimes there is
just a tiny value <0, like a rounding error;
you can specify to push through for negative
diagonal elements with magnitude <VVV, with those
values getting replaced by zero. Be judicious
with this power! (E.g., maybe VVV ~ 0.0001 might
be OK... but if you get looots of negatives, then
you really, really need to check your data for
badness.)
(and the following options are probably mainly extraneous, nowadays)
-bref_mean_top :when averaging the reference X 'b0' values (the
default behavior), have the mean of the X
values be represented in the top row; default
behavior is to have nothing representing the b0
information in the top row (for historical
functionality reasons). NB: if your reference
'b0' actually has b>0, you might not want to
average the b0 refs together, because their
images could have differing contrast if the
same reference vector wasn't used for each.
-put_zeros_top :whatever the output format is, add a row at the
top with all zeros.
-bmax_ref THRESH :THRESH is a scalar number below which b-values
(in BVAL_IN) are considered `zero' or reference.
Sometimes, for the reference images, the scanner
has a value like b=5 s/mm^2, instead of strictly
b=0. One can still flag such values as
being associated with a reference image and
trim it out, using, for the example case here,
'-bmax_ref 5.1'.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
EXAMPLES
# An example of type-conversion from a TORTOISE-style matrix to column
# gradients (if the matT file has bweights, so will the grad values):
1dDW_Grad_o_Mat++ \
-in_col_matT BMTXT_TORT.txt \
-out_col_vec GRAD.dat
# An example of filtering (note the different styles of parentheses
# for the column- and row-type files) and type-conversion (to an
# AFNI-style matrix that should have the bvalue weights afterwards):
1dDW_Grad_o_Mat++ \
-in_col_vec GRADS_col.dat'{0..10,12..30}' \
-in_bvals BVALS_row.dat'[0..10,12..30]' \
-out_col_matA FILT_matA.dat
# An example of filtering *without* type-conversion. Here, note
# the '-unit_mag_out' flag is used so that the output row-vec does
# not carry the bvalue weight with it; it does not affect the output
# bval file. As Levon might say, the '-unit_mag_out' option acts to
# 'Take a load off bvecs, take a load for free;
# Take a load off bvecs, and you put the load right on bvals only.'
# This example might be useful for working with dcm2nii* output:
1dDW_Grad_o_Mat++ \
-in_row_vec ap.bvec'[0..10,12..30]' \
-in_bvals ap.bval'[0..10,12..30]' \
-out_row_vec FILT_ap.bvec \
-out_row_bval_sep FILT_ap.bval \
-unit_mag_out
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
___________________________________________________________________________
AFNI program: 1deval
Usage: 1deval [options] -expr 'expression'
Evaluates an expression that may include columns of data
from one or more text files and writes the result to stdout.
** Only a single column can be used for each input 1D file. **
* Simple multiple column operations (e.g., addition, scaling)
can be done with program 1dmatcalc.
* Any single letter from a-z can be used as the independent
variable in the expression.
* Unless specified using the '[]' notation (cf. 1dplot -help),
only the first column of an input 1D file is used, and other
columns are ignored.
* Only one column of output will be produced -- if you want to
calculate a multi-column output file, you'll have to run 1deval
separately for each column, and then glue the results together
using program 1dcat. [However, see the 1dcat example combined
with the '-1D:' option, infra.]
Options:
--------
-del d = Use 'd' as the step for a single undetermined variable
in the expression [default = 1.0]
SYNONYMS: '-dx' and '-dt'
-start s = Start at value 's' for a single undetermined variable
in the expression [default = 0.0]
That is, for the indeterminate variable in the expression
(if any), the i-th value will be s+i*d for i=0, 1, ....
SYNONYMS: '-xzero' and '-tzero'
-num n = Evaluate the expression 'n' times.
If -num is not used, then the length of an
input time series is used. If there are no
time series input, then -num is required.
-a q.1D = Read time series file q.1D and assign it
to the symbol 'a' (as in 3dcalc).
* Letters 'a' to 'z' may be used as symbols.
* You can use the filename 'stdin:' to indicate that
the data for 1 symbol comes from standard input:
1dTsort q.1D stdout: | 1deval -a stdin: -expr 'sqrt(a)' | 1dplot stdin:
-a=NUMBER = set the symbol 'a' to a fixed numerical value
rather than a variable value from a 1D file.
* Letters 'a' to 'z' may be used as symbols.
* You can't assign the same symbol twice!
-index i.1D = Read index column from file i.1D and
write it out as 1st column of output.
This option is useful when working with
surface data.
-1D: = Write output in the form of a single '1D:'
string suitable for input on the command
line of another program.
[-1D: is incompatible with the -index option!]
[This won't work if the output string is very long,]
[since the maximum command line length is limited. ]
Examples:
---------
* 't' is the indeterminate variable in the expression below:
1deval -expr 'sin(2*PI*t)' -del 0.01 -num 101 > sin.1D
* Multiply two columns of data (no indeterminate variable):
1deval -expr 'a*b' -a fred.1D -b ethel.1D > ab.1D
* Compute and plot the F-statistic corresponding to p=0.001 for
varying degrees of freedom given by the indeterminate variable 'n':
1deval -start 10 -num 90 -expr 'fift_p2t(0.001,n,2*n)' | 1dplot -xzero 10 -stdin
* Compute the square root of some numbers given in '1D:' form
directly on the command line:
1deval -x '1D: 1 4 9 16' -expr 'sqrt(x)'
Examples using '-1D:' as the output format:
-------------------------------------------
The examples use the shell backquote `xxx` operation, where the
command inside the backquotes is executed, its stdout is captured
into a string, and placed back on the command line. When you have
mastered this idea, you have taken another step towards becoming
a Jedi AFNI Master!
1dplot `1deval -1D: -num 71 -expr 'cos(t/2)*exp(-t/19)'`
1dcat `1deval -1D: -num 100 -expr 'cos(t/5)'` \
`1deval -1D: -num 100 -expr 'sin(t/5)'` > sincos.1D
3dTfitter -quiet -prefix - \
-RHS `1deval -1D: -num 30 -expr 'cos(t)*exp(-t/7)'` \
-LHS `1deval -1D: -num 30 -expr 'cos(t)'` \
`1deval -1D: -num 30 -expr 'sin(t)'`
Notes:
------
* Program 3dcalc operates on 3D and 3D+time datasets in a similar way.
* Program ccalc can be used to evaluate a single numeric expression.
* If I had any sense, THIS program would have been called 1dcalc!
* For generic 1D file usage help, see '1dplot -help'
* For help with expression format, see '3dcalc -help', or type
'help' when using ccalc in interactive mode.
* 1deval only produces a single column of output. 3dcalc can be
tricked into doing multi-column 1D format output by treating
a 1D file as a 3D dataset and auto-transposing it with \'
For example:
3dcalc -a '1D: 3 4 5 | 1 2 3'\' -expr 'cbrt(a)' -prefix -
The input has 2 'columns' and so does the output.
Note that the 1D 'file' is transposed on input to 3dcalc!
This is essential, or 3dcalc will not treat the 1D file as
a dataset, and the results will be very different. Recall that
when a 1D file is read as an 3D AFNI dataset, the row direction
corresponds to the sub-brick (e.g., time) direction, and the
column direction corresponds to the voxel direction.
A Dastardly Trick:
------------------
If you use some other letter than 'z' as the indeterminate variable
in the calculation, and if 'z' is not assigned to any input 1D file,
then 'z' in the expression will be the previous value computed.
This trick can be used to create 1 point recursions, as in the
following command for creating an AR(1) noise time series:
1deval -num 500 -expr 'gran(0,1)+(i-i)+0.7*z' > g07.1D
Note the use of '(i-i)' to introduce the variable 'i' so that 'z'
would be used as the previous output value, rather than as the
indeterminate variable generated by '-del' and '-start'.
The initial value of 'z' is 0 (for the first evaluation).
* [02 Apr 2010] You can set the initial value of 'z' to a nonzero
value by using the environment variable AFNI_1DEVAL_ZZERO, as in
1deval -DAFNI_1DEVAL_ZZERO=1 -num 10 -expr 'i+z'
-- RW Cox --
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dfft
Usage: 1dfft [options] infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, with the absolute
value of the FFT of the input columns. The length of the file
will be 1+(FFT length)/2.
Options:
-ignore sss = Skip the first 'sss' lines in the input file.
[default = no skipping]
-use uuu = Use only 'uuu' lines of the input file.
[default = use them all, Frank]
-nfft nnn = Set FFT length to 'nnn'.
[default = length of data (# of lines used)]
-tocx = Save Re and Im parts of transform in 2 columns.
-fromcx = Convert 2 column complex input into 1 column
real output.
[-fromcx will not work if the original]
[data FFT length was an odd number! :(]
-hilbert = When -fromcx is used, the inverse FFT will
do the Hilbert transform instead.
-nodetrend = Skip the detrending of the input.
Nota Bene:
* Each input time series has any quadratic trend of the
form 'a+b*t+c*t*t' removed before the FFT, where 't'
is the line number.
* The FFT length can be any positive even integer, but
the Fast Fourier Transform algorithm will be slower if
any prime factors of the FFT length are large (say > 997).
Unless you are applying this program to VERY long files,
this slowdown will probably not be appreciable.
* If the FFT length is longer than the file length, the
data is zero-padded to make up the difference.
* Do NOT call the output of this program the Power Spectrum!
That is something else entirely.
* If 'outfile' is '-' (or missing), the output appears on stdout.
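Example (a sketch of chaining the complex-valued options; the file names
are arbitrary):
1dfft -tocx signal.1D signal_cx.1D
1dfft -fromcx -hilbert signal_cx.1D signal_hilbert.1D
The first command saves the Re and Im parts of the transform in 2 columns;
the second inverts that transform, applying the Hilbert transform instead
of the plain inverse FFT.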
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dFlagMotion
Usage: 1dFlagMotion [options] MotionParamsFile
Produces a list of time points that have more than a
user-specified amount of motion relative to the previous
time point.
Options:
-MaxTrans maximum translation allowed in any direction
[defaults to 1.5mm]
-MaxRot maximum rotation allowed in any direction
[defaults to 1.25 degrees]
** The input file must have EXACTLY 6 columns of input, in the order:
roll pitch yaw delta-SI delta-LR delta-AP
(angles in degrees first, then translations in mm)
** The program does NOT accept column '[...]' selectors on the input
file name, or comments in the file itself. As a palliative, if the
input file name is '-', then the input numbers are read from stdin,
so you could do something like the following:
1dcat mfile.1D'[1..6]' | 1dFlagMotion -
e.g., to work with the output from 3dvolreg's '-dfile' option
(where the first column is just the time index).
** The output is in a 1D format, with comments on '#' comment lines,
and the list of time points exceeding the motion bounds appearing
on normal (non-comment) lines.
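Example (a sketch; the thresholds and file name are arbitrary):
1dcat dfile.1D'[1..6]' | 1dFlagMotion -MaxTrans 1.0 -MaxRot 1.0 - > flagged.1D
This pulls the 6 motion columns out of a 3dvolreg '-dfile' output and writes
the list of over-threshold time points to flagged.1D.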
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dgenARMA11
Program to generate an ARMA(1,1) time series, for simulation studies.
Results are written to stdout.
Usage: 1dgenARMA11 [options]
Options:
========
-num N } These equivalent options specify the length of the time
-len N } series vector to generate.
-nvec M = The number of time series vectors to generate;
if this option is not given, defaults to 1.
-a a = Specify ARMA(1,1) parameter 'a'.
-b b = Specify ARMA(1,1) parameter 'b' directly.
-lam lam = Specify ARMA(1,1) parameter 'b' indirectly.
-sig ss = Set standard deviation of results [default=1].
-norm = Normalize time series so sum of squares is 1.
-seed dd = Set random number seed.
* The correlation coefficient r(k) of noise samples k units apart in time,
for k >= 1, is given by r(k) = lam * a^(k-1)
where lam = (b+a)(1+a*b)/(1+2*a*b+b*b)
(N.B.: lam=a when b=0 -- AR(1) noise has r(k)=a^k for k >= 0)
(N.B.: lam=b when a=0 -- MA(1) noise has r(k)=b for k=1, r(k)=0 for k>1)
* lam can be bigger or smaller than a, depending on the sign of b:
b > 0 means lam > a; b < 0 means lam < a.
* What I call (a,b) here is sometimes called (p,q) in the ARMA literature.
* For a noise model which is the sum of AR(1) and white noise, 0 < lam < a
(i.e., a > 0 and -a < b < 0 ).
-CORcut cc = The exact ARMA(1,1) correlation matrix (for a != 0)
has no zero entries. The calculations in this
program set correlations below a cutoff to zero.
The default cutoff is 0.00010, but can be altered with
this option. The usual reason to use this option is
to test the sensitivity of the results to the cutoff.
-----------------------------
A restricted ARMA(3,1) model:
-----------------------------
Skip the '-a', '-b', and '-lam' options, and use a model with 3 roots
-arma31 a r theta vrat
where the roots are z = a, z = r*exp(I*theta), z = r*exp(-I*theta)
and vrat = s^2/(s^2+w^2) [so 0 < vrat < 1], where s = variance
of the pure AR(3) component and w = variance of extra white noise
added to the AR(3) process -- this is the 'restricted' ARMA(3,1).
If the data has a given TR, and you want a frequency of f Hz in
the noise model, then theta = 2 * PI * TR * f. If theta > PI,
then you are modeling noise beyond the Nyquist frequency and
the gods (and this program) won't be happy.
# csh syntax for 'set' variable assignment commands
set nt = 500
set tr = 1
set df = `ccalc "1/($nt*$tr)"`
set f1 = 0.10
set t1 = `ccalc "2*PI*$tr*$f1"`
1dgenARMA11 -nvec 500 -len $nt -arma31 0.8 0.9 $t1 0.9 -CORcut 0.0001 \
| 1dfft -nodetrend stdin: > qqq.1D
3dTstat -mean -prefix stdout: qqq.1D \
| 1dplot -stdin -num 201 -dt $df -xlabel 'frequency' -ylabel '|FFT|'
---------------------------------------------------------------------------
A similar option is now available for a restricted ARMA(5,1) model:
-arma51 a r1 theta1 r2 theta2 vrat
where now the roots are
z = a z = r1*exp(I*theta1) z = r1*exp(-I*theta1)
z = r2*exp(I*theta2) z = r2*exp(-I*theta2)
This model allows the simulation of two separate frequencies in the 'noise'.
---------------------------------------------------------------------------
Author: RWCox [for his own demented and deranged purposes]
Examples:
1dgenARMA11 -num 200 -a .8 -lam 0.7 | 1dplot -stdin
1dgenARMA11 -num 2000 -a .8 -lam 0.7 | 1dfft -nodetrend stdin: stdout: | 1dplot -stdin
AFNI program: 1dgrayplot
Usage: 1dgrayplot [options] tsfile
Graphs the columns of a *.1D type time series file to the screen,
sort of like 1dplot, but in grayscale.
Options:
-install = Install a new X11 colormap (for X11 PseudoColor)
-ignore nn = Skip first 'nn' rows in the input file
[default = 0]
-flip = Plot x and y axes interchanged.
[default: data columns plotted DOWN the screen]
-sep = Separate scales for each column.
-use mm = Plot 'mm' points
[default: all of them]
-ps = Don't draw plot in a window; instead, write it
to stdout in PostScript format.
N.B.: If you view this result in 'gv', you should
turn 'anti-alias' off, and switch to
landscape mode.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dMarry
Usage: 1dMarry [options] file1 file2 ...
Joins together 2 (or more) ragged-right .1D files, for use with
3dDeconvolve -stim_times_AM2.
**_OR_**
Breaks up 1 married file into 2 (or more) single-valued files.
OPTIONS:
=======
-sep abc == Use the first character (e.g., 'a') as the separator
between values 1 and 2, the second character (e.g., 'b')
as the separator between values 2 and 3, etc.
* These characters CANNOT be a blank, a tab, a digit,
or a non-printable control character!
* Default separator string is '*,' which will result
in output similar to '3*4,5,6'
-divorce == Instead of marrying the files, assume that file1
is already a married file: split time*value*value... tuples
into separate files, and name them in the pattern
'file2_A.1D' 'file2_B.1D' et cetera.
If not divorcing, the 'married' file is written to stdout, and
probably should be captured using a redirection such as '>'.
NOTES:
=====
* You cannot use column [...] or row {...} selectors on
ragged-right .1D files, so don't even think about trying!
* The maximum number of values that can be married is 26.
(No polygamy or polyandry jokes here, please.)
* For debugging purposes, with '-divorce', if 'file2' is '-',
then all the divorcees are written directly to stdout.
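EXAMPLE (a sketch; the file names are arbitrary):
1dMarry times.1D amplitudes.1D > married.1D
pairs corresponding entries of the two files with the default '*'
separator (producing entries like '3*4'), suitable for
'3dDeconvolve -stim_times_AM2', while
1dMarry -divorce married.1D single
splits such a file back into 'single_A.1D' and 'single_B.1D'.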
-- RWCox -- written hastily in March 2007 -- hope I don't repent
-- modified to deal with multiple marriages -- December 2008
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dmatcalc
Usage: 1dmatcalc [-verb] expression
Evaluate a space delimited RPN matrix-valued expression:
* The operations are on a stack, each element of which is a
real-valued matrix.
* N.B.: This is a computer-science stack of separate matrices.
If you want to join two matrices in separate files
into one 'stacked' matrix, then you must use program
1dcat to join them as columns, or the system program
cat to join them as rows.
* You can also save matrices by name in an internal buffer
using the '=NAME' operation and then retrieve them later
using just the same NAME.
* You can read and write matrices from files stored in ASCII
columns (.1D format) using the &read and &write operations.
* The following 5 operations, input as a single string,
'&read(V.1D) &read(U.1D) &transp * &write(VUT.1D)'
- reads matrices V and U from disk (separately),
- transposes U (on top of the stack) into U',
- multiplies V and U' (the two matrices on top of the stack),
- and writes matrix VU' out (the matrix left on the stack by '*').
* Calculations are carried out in single precision ('float').
* Operations mostly contain characters such as '&' and '*' that
are special to Unix shells, so you'll probably need to put
the arguments to this program in 'single quotes'.
* You can use '%' or '@' in place of the '&' character, if you wish.
STACK OPERATIONS
-----------------
number == push scalar value (1x1 matrix) on stack;
a number starts with a digit or a minus sign
=NAME == save a copy of the matrix on top of the stack as 'NAME'
NAME == push a copy of NAME-ed matrix onto top of stack;
names start with an alphabetic character
&clear == erase all named matrices (to save memory);
does not affect the stack at all
&purge == erase the stack;
does not affect named matrices
&read(FF) == read ASCII (.1D) file onto top of stack from file 'FF'
&read4x4Xform(FF)
== Similar to &read(FF), except that it expects data
for a 12-parameter spatial affine transform.
FF can contain 12x1, 1x12, 16x1, 1x16, 3x4, or
4x4 values.
The read operation loads the data into a 4x4 matrix
r11 r12 r13 r14
r21 r22 r23 r24
r31 r32 r33 r34
0.0 0.0 0.0 1.0
This option was added to simplify the combination of
linear spatial transformations. However, you are better
off using cat_matvec for that purpose.
&write(FF) == write top matrix to ASCII file to file 'FF';
if 'FF' == '-', writes to stdout
&transp == replace top matrix with its transpose
&ident(N) == push square identity matrix of order N onto stack
N is a fixed integer, OR
&R to indicate the row dimension of the
current top matrix, OR
&C to indicate the column dimension of the
current top matrix, OR
=X to indicate the (1,1) element of the
matrix named X
&Psinv == replace top matrix with its pseudo-inverse
[computed via SVD, not via inv(A'*A)*A']
&Sqrt == replace top matrix with its square root
[computed via Denman & Beavers iteration]
N.B.: not all real matrices have real square
roots, and &Sqrt will fail if you push it
N.B.: the matrix must be square!
&Pproj == replace top matrix with the projection onto
its column space; Input=A; Output = A*Psinv(A)
N.B.: result P is symmetric and P*P=P
&Qproj == replace top matrix with the projection onto
the orthogonal complement of its column space
Input=A; Output=I-Pproj(A)
* == replace top 2 matrices with their product;
OR stack = [ ... C A B ] (where B = top) goes to
&mult stack = [ ... C AB ]
if either of the top matrices is a 1x1 scalar,
then the result is the scalar multiplication of
the other matrix; otherwise, matrices must conform
+ OR &add == replace top 2 matrices with sum A+B
- OR &sub == replace top 2 matrices with difference A-B
&dup == push duplicate of top matrix onto stack
&pop == discard top matrix
&swap == swap top two matrices (A <-> B)
&Hglue == glue top two matrices together horizontally:
stack = [ ... C A B ] goes to
stack = [ ... C A|B ]
this is like what program 1dcat does.
&Vglue == glue top two matrices together vertically:
stack = [ ... C A B ] goes to
stack = [ ... C D ] where D = A stacked on top of B;
this is like what program cat does.
SIMPLE EXAMPLES
---------------
* Multiply each element of an input 1D file
by a constant factor and write to disk.
1dmatcalc "&read(in.1D) 3.1416 * &write(out.1D)"
* Subtract two 1D files
1dmatcalc "&read(a.1D) &read(b.1D) - &write(stdout:)"
AFNI program: 1dNLfit
Program to fit a model to a vector of data. The model is given by a
symbolic expression, with parameters to be estimated.
Usage: 1dNLfit OPTIONS
Options: [all but '-meth' are actually mandatory]
--------
-expr eee = The expression for the fit. It must contain one symbol from
'a' to 'z' which is marked as the independent variable by
option '-indvar', and at least one more symbol which is
a parameter to be estimated.
++ Expressions use the same syntax as 3dcalc, ccalc, and 1deval.
++ Note: expressions and symbols are not case sensitive.
-indvar c d = Indicates which variable in '-expr' is the independent
variable. All other symbols are parameters, which are
either fixed (constants) or variables to be estimated.
++ Then, read the values of the independent variable from
1D file 'd' (only the first column will be used).
++ If the independent variable has a constant step size,
you can input it with 'd' replaced by a string like
'1D: 100%0:2.1'
which creates an array with 100 values, starting at 0,
then adding 2.1 for each step:
0 2.1 4.2 6.3 8.4 ...
-param ppp = Set fixed value or estimating range for a particular
symbol.
++ For a fixed value, 'ppp' takes the form 'a=3.14', where the
first letter is the symbol name, which must be followed by
an '=', then followed by a constant expression. This
expression can be symbolic, as in 'a=cbrt(3)'.
++ For a parameter to be estimated, 'ppp' takes the form of
two constant expressions separated by a ':', as in
'q=-sqrt(2):sqrt(2)'.
++ All symbols in '-expr' must have a corresponding '-param'
option, EXCEPT for the '-indvar' symbol (which will be set
by its data file).
-depdata v = Read the values of the dependent variable (to be fitted to
'-expr') from 1D file 'v'.
++ File 'v' must have the same number of rows as file 'd'
from the '-indvar' option!
++ File 'v' can have more than one column; each will be fitted
separately to the expression.
-meth m = Set the method for fitting: '1' for L1, '2' for L2.
(The default method is L2, which is usually better.)
Example:
--------
Create a sin wave corrupted by logistic noise, to file ss.1D.
Create a cos wave similarly, to file cc.1D.
Put these files together into a 2 column file sc.1D.
Fit both columns to a 3 parameter model and write the fits to file ff.1D.
Plot the data and the fit together, for fun and profit(?).
1deval -expr 'sin(2*x)+lran(0.3)' -del 0.1 -num 100 > ss.1D
1deval -expr 'cos(2*x)+lran(0.3)' -del 0.1 -num 100 > cc.1D
1dcat ss.1D cc.1D > sc.1D ; \rm ss.1D cc.1D
1dNLfit -depdata sc.1D -indvar x '1D: 100%0:0.1' -expr 'a*sin(b*x)+c*cos(b*x)' \
-param a=-2:2 -param b=1:3 -param c=-2:2 > ff.1D
1dplot -one -del 0.1 -ynames sin:data cos:data sin:fit cos:fit - sc.1D ff.1D
Notes:
------
* PLOT YOUR RESULTS! There is no guarantee that you'll get a good fit.
* This program is not particularly efficient, so using it on a large
scale (e.g., for lots of columns, or in a shell loop) will be slow.
* The results (fitted time series models) are written to stdout,
and should be saved by '>' redirection (as in the example).
The first few lines of the output from the example are:
# 1dNLfit output (meth=L2)
# expr = a*sin(b*x)+c*cos(b*x)
# Fitted parameters:
# A = 1.0828 0.12786
# B = 1.9681 2.0208
# C = 0.16905 1.0102
# ----------- -----------
0.16905 1.0102
0.37753 1.0153
0.57142 0.97907
* Coded by Zhark the Well-Fitted - during Snowzilla 2016.
AFNI program: 1dnorm
Usage: 1dnorm [options] infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, with each column being
L_2 normalized (sum of squares = 1).
* If 'infile' is '-', it will be read from stdin.
* If 'outfile' is '-', it will be written to stdout.
Options:
--------
-norm1 = Normalize so sum of absolute values is 1 (L_1 norm)
-normx = So that max absolute value is 1 (L_infinity norm)
-demean = Subtract each column's mean before normalizing
-demed = Subtract each column's median before normalizing
[-demean and -demed are mutually exclusive!]
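Example (a sketch; the file name is arbitrary):
1dnorm -demean raw.1D - | 1dplot -stdin
removes each column's mean, scales each column to unit L_2 norm, and pipes
the result to 1dplot.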
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dplot
++ 1dplot: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: RWC et al.
Usage: 1dplot [options] tsfile ...
Graphs the columns of a *.1D time series file to the X11 screen,
or to an image file (.jpg or .png).
** This is the original C-language plotting program in AFNI, first created **
** in 1999 (by RW Cox), built on routines he first wrote in the 1980s. **
** Also see the much newer and similar Python-language program 1dplot.py **
** (created by PA Taylor in 2018), which can produce nicer looking graphs. **
-------
OPTIONS
-------
-install = Install a new X11 colormap.
-sep = Plot each column in a separate sub-graph.
-one = Plot all columns together in one big graph.
[default = -sep]
-sepscl = Plot each column in a separate sub-graph
and allow each sub-graph to have a different
y-scale. -sepscl is meaningless with -one!
-noline = Don't plot the connecting lines (also implies '-box').
-NOLINE = Same as '-noline', but will not try to plot values outside
the rectangular box that contains the graph axes.
-box = Plot a small 'box' at each data point, in addition
to the lines connecting the points.
* The box size can be set via the environment variable
AFNI_1DPLOT_BOXSIZE; the value is a fraction of the
overall plot size. The standard box size is 0.006.
Example with a bigger box:
1dplot -DAFNI_1DPLOT_BOXSIZE=0.01 -box A.1D
* The box shapes are different for different time
series columns. At present, there is no way to
control which shape is used for what column
(unless you modify the source code, that is).
* If you want some data columns plotted with boxes
and some with lines, don't use '-box'. Instead, use
option '-dashed'.
* You can set environment variable AFNI_1DPLOT_RANBOX
to YES to get the '-noline' boxes plotted in a
pseudo-random order, so that one particular color
doesn't dominate just because it is last in the
plotting order; for example:
1dplot -DAFNI_1DPLOT_RANBOX=YES -one -x X.1D -noline Y1.1D Y2.1D Y3.1D
-hist = Plot graphs in histogram style (i.e., vertical boxes).
* Histograms can be generated from 3D or 1D files using
program 3dhistog; for example
3dhistog -nbin 50 -notitle -min 0 -max .04 err.1D > eh.1D
1dplot -hist -x eh.1D'[0]' -xlabel err -ylabel hist eh.1D'[1]'
or, for something a little more fun looking:
1dplot -one -hist -dashed 1:2 -x eh.1D'[0]' \
-xlabel err -ylabel hist eh.1D'[1]' eh.1D'[1]'
** The '-norm' options below can be useful for plotting data
with different value ranges on top of each other via '-one':
-norm2 = Independently scale each time series plotted to
have L_2 norm = 1 (sum of squares).
-normx = Independently scale each time series plotted to
have max absolute value = 1 (L_infinity norm).
-norm1 = Independently scale each time series plotted to
have sum of absolute values = 1 (L_1 norm).
-demean = This option will remove the mean from each time series
(before normalizing). The combination '-demean -normx -one'
can be useful when plotting disparate data together.
* If you use '-demean' twice, you will get linear detrending.
* Et cetera (e.g., 4 times gives you cubic detrending.)
-x X.1D = Use for X axis the data in X.1D.
Note that X.1D should have one column
of the same length as the columns in tsfile.
** Coupled with '-box -noline', you can use '-x' to make
a scatter plot, as in graphing file A1.1D along the
x-axis and file A2.1D along the y-axis:
1dplot -box -noline -x A1.1D -xlabel A1 -ylabel A2 A2.1D
** '-x' will override -dx and -xzero; -xaxis still works
-xl10 X.1D = Use log10(X.1D) as the X axis.
-xmulti X1.1D X2.1D ...
This new [Oct 2013] option allows you to plot different
columns from the data with different values along the
x-axis. You can supply one or more 1D files after the
'-xmulti' option. The columns from these files are
catenated, and then the first xmulti column is used as
x-axis values for the first data column plotted, the
second xmulti column gives the x-axis values for the
second data column plotted, and so on.
** The command line arguments after '-xmulti' are taken
as 1D filenames to read, until an argument starts with
a '-' character -- this would either be another option,
or just a single '-' to separate the xmulti 1D files
from the data files to be plotted.
** If you don't provide enough xmulti columns for all the
data files, the last xmulti column will be reused.
** Useless but fun example:
1deval -num 100 -expr '(i-i)+z+gran(0,6)' > X1.1D
1deval -num 100 -expr '(i-i)+z+gran(0,6)' > X2.1D
1dplot -one -box -xmulti X1.1D X2.1D - X2.1D X1.1D
-dx xx = Spacing between points on the x-axis is 'xx'
[default = 1] SYNONYMS: '-dt' and '-del'
-xzero zz = Initial x coordinate is 'zz' [default = 0]
SYNONYMS: '-tzero' and '-start'
-nopush = Don't 'push' axes ranges outwards.
-ignore nn = Skip first 'nn' rows in the input file
[default = 0]
-use mm = Plot 'mm' points [default = all of them]
-xlabel aa = Put string 'aa' below the x-axis
[default = no axis label]
-ylabel aa = Put string 'aa' to the left of the y-axis
[default = no axis label]
-plabel pp = Put string 'pp' atop the plot.
Some characters, such as '_', have
special formatting effects. You
can escape them with '\'. For example:
echo 2 4.5 -1 | 1dplot -plabel 'test_underscore' -stdin
versus
echo 2 4.5 -1 | 1dplot -plabel 'test\_underscore' -stdin
-title pp = Same as -plabel, but only works with -ps/-png/-jpg/-pnm options.
-wintitle pp = Set string 'pp' as the title of the frame
containing the plot. Default is based on input.
-naked = Do NOT plot axes or labels, just the graph(s).
You might want to use '-nopush' with '-naked'.
-aspect A = Set the width-to-height ratio of the plot region to 'A'.
Default value is 1.3. Larger 'A' means a wider graph.
-stdin = Don't read from tsfile; instead, read from
stdin and plot it. You cannot combine input
from stdin and tsfile(s). If you want to do so,
use program 1dcat first.
-ps = Don't draw plot in a window; instead, write it
to stdout in PostScript format.
* If you view the result in 'gv', you should turn
'anti-alias' off, and switch to landscape mode.
* You can use the 'gs' program to convert PostScript
to other formats; for example, a .bmp file:
1dplot -ps ~/data/verbal/cosall.1D |
gs -r100 -sOutputFile=fred.bmp -sDEVICE=bmp256 -q -dBATCH -
* 1dplot is built on some line drawing software written
a long time ago in a galaxy far away, which is why PostScript
output was a natural thing to do -- I doubt that anyone uses
this feature in these decadent modern times.
-jpg fname } = Render plot to an image and save to a file named
-jpeg fname } = 'fname', in JPEG mode or in PNG mode or in PNM mode.
-png fname } = The default image width is 1024 pixels; to change
-pnm fname } = this value to 2048 pixels (say), do
setenv AFNI_1DPLOT_IMSIZE 2048
before running 1dplot, or add
-DAFNI_1DPLOT_IMSIZE=2048
to the 1dplot command line. Widths over 4096 might
start to look odd in some cases. The largest allowed
size is 8192 pixels.
* PNG files created by 1dplot will be smaller than JPEG,
and are compressed without loss.
* PNG output requires that the netpbm program
pnmtopng be installed somewhere in your PATH.
This program is NOT supplied with AFNI, but must
be installed separately:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/index.html
* PNM output files are not compressed, and are manipulable
by the netpbm package: http://netpbm.sourceforge.net/
Otherwise, this format isn't very useful anymore.
* There will be small drawing differences between the
X11 (interactive) plotting window and the images saved
by these options -- or by the interactive button.
These differences arise from the use of different line
drawing functions for X11 windows and for off-screen
bitmap images.
-pngs size fname } = convenience options equivalent to
-jpgs size fname } = -DAFNI_1DPLOT_IMSIZE=size followed by
-jpegs size fname} = -png fname (or -jpg or -jpeg or -pnm)
-pnms size fname } = The largest allowed size is 8192 pixels.
-ytran 'expr' = Transform the data along the y-axis by
applying the expression to each input value.
For example:
-ytran 'log10(z)'
will take log10 of each input time series value
before plotting it.
* The expression should have one variable (any letter
from a-z will do), which stands for the time series
data to be transformed.
* An expression such as 'sqrt(x*x+i)' will use 'x'
for the time series value and use 'i' for the time
index (starting at 0) -- in this way, you can use
time-dependent transformations, if needed.
* This transformation applies to all input time series
(at present, there is no way to transform different
time series in distinct ways inside 1dplot).
* '-ytran' is applied BEFORE the various '-norm' options.
-xtran 'expr' = Similar, but for the x-axis.
** Applies to '-xmulti' , '-x' , or the default x-axis.
-xaxis b:t:n:m = Set the x-axis to run from value 'b' to
value 't', with 'n' major divisions and
'm' minor tic marks per major division.
For example:
-xaxis 0:100:5:20
Setting 'n' to 0 means no tic marks or labels.
* You can set 'b' to be greater than 't', to
have the x-coordinate decrease from left-to-right.
* This is the only way to have this effect in 1dplot.
* In particular, '-dx' with a negative value will not work!
-yaxis b:t:n:m = Similar to above, for the y-axis. These
options override the normal autoscaling
of their respective axes.
-ynames a b ... = Use the strings 'a', 'b', etc., as
labels to the right of the graphs,
corresponding to each input column.
These strings CANNOT start with the
'-' character.
N.B.: Each separate string after '-ynames'
is taken to be a new label, until the
end of the command line or until some
string starts with a '-'. In particular,
this means you CANNOT do something like
1dplot -ynames a b c file.1D
since the input filename 'file.1D' will
be used as a label string, not a filename.
Instead, you must put another option between
the end of the '-ynames' label list, OR you
can put a single '-' at the end of the label
list to signal its end:
1dplot -ynames a b c - file.1D
TSV files: When plotting a TSV file, where the first row
is the set of column labels, you can use this
Unix trick to put the column labels here:
-ynames `head -1 file.tsv`
The 'head' command copies just the first line
of the file to stdout, and the backquotes `...`
capture stdout and put it onto the command line.
* You might need to put a single '-' after this
option to prevent the problem alluded to above.
In any case, it can't hurt to use '-' as an option
after '-ynames'.
* If any of the TSV labels start with the '-' character,
peculiar and unpleasant things might transpire.
-volreg = Makes the 'ynames' be the same as the
6 labels used in plug_volreg for
Roll, Pitch, Yaw, I-S, R-L, and A-P
movements, in that order.
-thick = Each time you give this, it makes the line
thickness used for plotting a little larger.
[An alternative to using '-DAFNI_1DPLOT_THIK=...']
-THICK = Twice the power of '-thick' at no extra cost!!
-dashed codes = Plot dashed lines between data points. The 'codes'
are a colon-separated list of dash values, which
can be 1 (solid), 2 (longer dashes), or 3 (shorter dashes).
0 can be used to indicate that a time series is to be
plotted without lines but with boxes instead.
** Example: '-dashed 1:2:3' means to plot the first time
series with solid lines, the second with long dashes,
and the third with short dashes.
-Dname=val = Set environment variable 'name' to 'val'
for this run of the program only:
1dplot -DAFNI_1DPLOT_THIK=0.01 -DAFNI_1DPLOT_COLOR_01=blue '1D:3 4 5 3 1 0'
You may also select a subset of columns to display using
a tsfile specification like 'fred.1D[0,3,5]', indicating
that columns #0, #3, and #5 will be the only ones plotted.
For more details on this selection scheme, see the output
of '3dcalc -help'.
Example: graphing a 'dfile' output by 3dvolreg, when TR=5:
1dplot -volreg -dx 5 -xlabel Time 'dfile[1..6]'
You can also input more than one tsfile, in which case the files
will all be plotted. However, if the files have different column
lengths, the shortest one will rule.
The colors for the line graphs cycle between black, red, green, and
blue. You can alter these colors by setting Unix environment
variables of the form AFNI_1DPLOT_COLOR_xx -- cf. README.environment.
You can alter the thickness of the lines by setting the variable
AFNI_1DPLOT_THIK to a value between 0.00 and 0.05 -- the units are
fractions of the page size; of course, you can also use the options
'-thick' or '-THICK' if you prefer.
----------------
RENDERING METHOD
----------------
On 30 Apr 2012, a new method of rendering the 1dplot graph into an X11
window was introduced -- this method uses 'anti-aliasing' to produce
smoother-looking lines and characters. If you want the old coarser-looking
rendering method, set environment variable AFNI_1DPLOT_RENDEROLD to YES.
The program always uses the new rendering method when drawing to a JPEG
or PNG or PNM file (which is not and never has been just a screen capture).
There is no way to disable the new rendering method for image-file saves.
------
LABELS
------
Besides normal alphabetic text, the various labels can include some
special characters, using TeX-like escapes starting with '\'.
Also, the '^' and '_' characters denote super- and sub-scripts,
respectively. The following command shows many of the escapes:
1deval -num 100 -expr 'J0(t/4)' | 1dplot -stdin -thick \
-xlabel '\alpha\beta\gamma\delta\epsilon\zeta\eta^{\oplus\dagger}\times c' \
-ylabel 'Bessel Function \green J_0(t/4)' \
-plabel '\Upsilon\Phi\Chi\Psi\Omega\red\leftrightarrow\blue\partial^{2}f/\partial x^2'
TIMESERIES (1D) INPUT
---------------------
A timeseries file is in the form of a 1D or 2D table of ASCII numbers;
for example: 3 5 7
2 4 6
0 3 3
7 2 9
This example has 4 rows and 3 columns. Each column is considered as
a timeseries in AFNI. The convention is to store this type of data
in a filename ending in '.1D'.
** COLUMN SELECTION WITH [] **
When specifying a timeseries file to an command-line AFNI program, you
can select a subset of columns using the '[...]' notation:
'fred.1D[5]' ==> use only column #5
'fred.1D[5,9,17]' ==> use columns #5, #9, and #17
'fred.1D[5..8]' ==> use columns #5, #6, #7, and #8
'fred.1D[5..13(2)]' ==> use columns #5, #7, #9, #11, and #13
Column indices start at 0. You can use the character '$'
to indicate the last column in a 1D file; for example, you
can select every third column in a 1D file by using the selection list
'fred.1D[0..$(3)]' ==> use columns #0, #3, #6, #9, ....
** ROW SELECTION WITH {} **
Similarly, you select a subset of the rows using the '{...}' notation:
'fred.1D{0..$(2)}' ==> use rows #0, #2, #4, ....
You can also use both notations together, as in
'fred.1D[1,3]{1..$(2)}' ==> columns #1 and #3; rows #1, #3, #5, ....
** DIRECT INPUT OF DATA ON THE COMMAND LINE WITH 1D: **
You can also input a 1D time series 'dataset' directly on the command
line, without an external file. The 'filename' for such input has the
general format
'1D:n_1@val_1,n_2@val_2,n_3@val_3,...'
where each 'n_i' is an integer and each 'val_i' is a float. For
example
-a '1D:5@0,10@1,5@0,10@1,5@0'
specifies that variable 'a' be assigned to a 1D time series of 35 values,
alternating in blocks between value 0 and value 1.
* Spaces or commas can be used to separate values.
* A '|' character can be used to start a new input "line":
Try 1dplot '1D: 3 4 3 5 | 3 5 4 3'
** TRANSPOSITION WITH \' **
Finally, you can force most AFNI programs to transpose a 1D file on
input by appending a single ' character at the end of the filename.
N.B.: Since the ' character is also special to the shell, you'll
probably have to put a \ character before it. Examples:
1dplot '1D: 3 2 3 4 | 2 3 4 3' and
1dplot '1D: 3 2 3 4 | 2 3 4 3'\'
When you have reached this level of understanding, you are ready to
take the AFNI Jedi Master test. I won't insult you by telling you
where to find this examination.
TAB SEPARATED VALUE (.tsv) FILES [Sep 2018]
-------------------------------------------
These files are used in BIDS http://bids.neuroimaging.io and AFNI
programs can read these in a few places.
The format of a .tsv file is a set of columns, where the values in
each row are separated by tab characters -- spaces are NOT separators.
Each element is a string, some of which are numeric (e.g., 3.1416).
The first row of a .tsv file is a set of strings which are column
descriptors (separated by tabs, of course). For the most part, the
following data in each column are exclusively numeric or exclusively
strings. Strings can contain blanks/spaces since only tabs are used
to separate values.
A .tsv file can be read in most places where a .1D file is read.
However, columns (after the header row) that are not purely numeric
will be ignored, since the internal usage of .1D data in AFNI is numeric.
Thus, you can do something like
1dplot -nopush -sepscl sub-10506_task-pamenc_events.tsv
and you will get a plot of all the numeric columns in this BIDS file.
Column selection '[]' can be done, using numbers to specify columns
or using the column labels in the .tsv file.
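For instance, something like the command below should plot just the numeric
'onset' and 'duration' columns by label (re-using the BIDS file name from
the example above):
    1dplot -nopush sub-10506_task-pamenc_events.tsv'[onset,duration]'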
N.B.: The string 'N/A' or 'n/a' in a column that is otherwise numeric
will be considered to be a number, and will be replaced on input
with the mean of the "true" numbers in the column -- there is
no concept of missing data in an AFNI .1D file.
++ If you don't like this, well ... too bad for you.
Program 1dcat has special knowledge of .tsv files, and will cat
(sideways - along rows) .tsv and .1D files together. It also has an
option to write the output in .tsv format.
For example, to get the 'onset', 'duration', and 'trial_type' columns
out of a BIDS task .tsv file, a command like this could be used:
1dcat sub-10506_task-pamenc_events.tsv'[onset,duration,trial_type]'
Note that the column headers are lost in this output, but could be kept
if the 1dcat '-tsvout' option were used. In reverse, a numeric .1D file
can be converted to .tsv format by a command like:
1dcat -tsvout Fred.1D
In this case, since the data in a .1D file doesn't have headers for its
columns, 1dcat will invent some column names.
At this time, other programs don't 'know' much about .tsv files, and will
ignore the header row and non-numeric columns when reading a .tsv file
in place of a .1D file.
--------------
MARKING BLOCKS (e.g., censored time points)
--------------
The following options let you mark blocks along the x-axis, by drawing
colored vertical boxes over the standard white background.
* The intended use is to mark blocks of time points that are censored
out of an analysis, which is why the options are the same as those
in 3dDeconvolve -- but you can mark blocks for any reason, of course.
* These options don't do anything when the '-x' option is used to
alter the x-axis spacings.
* To see what the various color markings look like, try this silly example:
1deval -num 100 -expr 'lran(2)' > zz.1D
1dplot -thick -censor_RGB red -CENSORTR 3-8 \
-censor_RGB green -CENSORTR 11-16 \
-censor_RGB blue -CENSORTR 22-27 \
-censor_RGB yellow -CENSORTR 34-39 \
-censor_RGB violet -CENSORTR 45-50 \
-censor_RGB pink -CENSORTR 55-60 \
-censor_RGB gray -CENSORTR 65-70 \
-censor_RGB #2cf -CENSORTR 75-80 \
-plabel 'red green blue yellow violet pink gray #2cf' zz.1D &
-censor_RGB clr = set the color used for the marking to 'clr', which
can be one of the strings below:
red green blue yellow violet pink gray (OR grey)
* OR 'clr' can be in the form '#xyz' or '#xxyyzz', where
'x', 'y', and 'z' are hexadecimal digits -- for example,
'#2cf' is sort of a cyan color.
* OR 'clr' can be in the form 'rgbi:rf/gf/bf' where
each color intensity (rf, gf, bf) is a number between
0.0 and 1.0 -- e.g., white is 'rgbi:1.0/1.0/1.0'.
Since the background is white, dark colors don't look
good here, and will obscure the graphs; for example,
pink is defined here as 'rgbi:1.0/0.5/0.5'.
* The default color is (a rather pale) yellow.
* You can use '-censor_RGB' more than once. The color
most recently specified on the command line
is what will be used with the '-censor' and '-CENSORTR'
options. This allows you to mark different blocks
with different colors (e.g., if they were censored
for different reasons).
* The feature of allowing multiple '-censor_RGB' options
means that you must put this option BEFORE the
relevant '-censor' and/or '-CENSORTR' options.
Otherwise, you'll get the default yellow color!
-censor cname = cname is the filename of censor .1D time series
* This is a file of 1s and 0s, indicating which
time points are to be un-marked (1) and which are
to be marked (0).
* Please note that only one '-censor' option can be
used, for compatibility with 3dDeconvolve.
* The option below may be simpler to use!
(And can be used multiple times.)
-CENSORTR clist = clist is a list of strings that specify time indexes
to be marked in the graph(s). Each string is of
one of the following forms:
37 => mark global time index #37
2:37 => mark time index #37 in run #2
37..47 => mark global time indexes #37-47
37-47 => same as above
*:0-2 => mark time indexes #0-2 in all runs
2:37..47 => mark time indexes #37-47 in run #2
* Time indexes within each run start at 0.
* Run indexes start at 1 (just to be confusing).
* Multiple -CENSORTR options may be used, or
multiple -CENSORTR strings can be given at
once, separated by spaces or commas.
* Each argument on the command line after
'-CENSORTR' is treated as a censoring string,
until an argument starts with a '-' or an
alphabetic character, or it contains the substring
'1D'. This means that if you want to plot a file
named '9zork.xyz', you may have to do this:
1dplot -CENSORTR 3-7 18-22 - 9zork.xyz
The stand-alone '-' will stop the processing
of censor strings; otherwise, the '9zork.xyz'
string, since it doesn't start with a letter,
would be treated as a censoring string, which
you would find confusing.
** N.B.: 2:37,47 means index #37 in run #2 and
global time index 47; it does NOT mean
index #37 in run #2 AND index #47 in run #2.
-concat rname = rname is the filename for list of concatenated runs
* 'rname' can be in the format
'1D: 0 100 200 300'
which indicates 4 runs, the first of which
starts at time index=0, second at index=100,
and so on.
* The ONLY function of '-concat' is for use with
'-CENSORTR', to be compatible with 3dDeconvolve
[e.g., for plotting motion parameters from]
[3dvolreg -1Dfile, where you've cat-enated]
[the 1D files from separate runs into one ]
[long file for plotting with this program.]
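* As a sketch (file name hypothetical), marking time indexes #37-47 of
run #2 in a motion file catenated from 3 runs of 100 points each:
    1dplot -concat '1D: 0 100 200' -CENSORTR '2:37..47' dfile_rall.1D'[1..6]'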
-rbox x1 y1 x2 y2 color1 color2
= Draw a rectangular box with corners (x1,y1) to
(x2,y2), in color1, with an outline in color2.
Colors are names, such as 'green'.
[This option lets you make bar]
[charts, *if* you care enough.]
-Rbox x1 y1 x2 y2 y3 color1 color2
= As above, with an extra horizontal line at y3.
-line x1 y1 x2 y2 color dashcode
= Draw one line segment.
Another fun fun example:
1dplot -censor_RGB #ffa -CENSORTR '0-99' \
`1deval -1D: -num 61 -dx 0.3 -expr 'J0(x)'`
which illustrates the use of 'censoring' to mark the entire graph
background in pale yellow '#ffa', and also illustrates the use
of the '-1D:' option in 1deval to produce output that can be
used directly on the command line, via the backquote `...` operator.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dplot.py
OVERVIEW ~1~
This program is for making images to visualize columns of numbers from
"1D" text files. It is based heavily on RWCox's 1dplot program, just
using Python (particularly matplotlib). To use this program, Python
version >=2.7 is required, as well as matplotlib modules (someday numpy
might be needed, as well).
This program takes very few required options-- mainly, file names and
an output prefix-- but it allows the user to control/add many
features, such as axis labels, titles, colors, adding in censor
information, plotting summary boxplots and more.
++ constructed by PA Taylor (NIMH, NIH, USA).
# =========================================================================
COMMAND OPTIONS ~1~
-help, -h :see helpfile
-infiles II :(req) one or more file names of text files. Each column
in this file will be treated as a separate time series
for plotting (i.e., as 'y-values'). One can use
AFNI-style column '{ }' and row '[ ]' selectors. One
or more files may be entered, but they must all be of
equal length.
-yfiles YY :exactly the same behavior as "-infiles ..", just another
option name for it that might be more consistent with
other options.
-prefix PP :output filename or prefix; if no file extension for an
image is included in 'PP', one will be added from a
list. At present, OK file types to output should include:
.jpg, .png, .tif, .pdf, .svg
... but note that the kinds of image files you may output
may be limited by packages (or lack thereof) installed on
your own computer. Default output image type is .jpg
-boxplot_on :a fun feature to show a small, additional boxplot
adjacent to each time series. The plot is a standard
Python boxplot of that time series's values. The box
shows the 25-75%ile range (interquartile range, IQR);
the median value highlighted by a white line; whiskers
stretch to 1.5*IQR; circles show outliers.
When using this option and censoring, by default both a
boxplot of data "before censoring" (BC) and "after
censoring" (AC) will be added. See '-bplot_view ...'
about current opts to change that, if desired.
-bplot_view BC_ONLY | AC_ONLY
:when using '-boxplot_on' and censoring, by default the
plotter will put one boxplot of data "before censoring"
(BC) and one "after censoring" (AC). To show *only* one
of these, use this option with the corresponding keyword.
-margin_off :use this option to have the plot frame fill the figure
window completely; thus, no labels, frame, titles or
other parts of the 'normal' image outside the plot
window will be visible. Tick lines will still be
present, living their best lives.
This is probably only useful/recommended/tested for
plots with a single panel.
-scale SCA1 SCA2 SCA3 ...
:provide a list of scales to apply to the y-values.
These will be applied multiplicatively to the y-values;
there should either be 1 (applied to all time series)
or the same number as the time series (in the same
order as those were entered). The scale values are
also applied to the censor_hline values, but *not* to
the "-yaxis ..." range(s).
Note that there are a couple keywords that can be used
instead of SCA* values:
SCALE_TO_HLINE: each input time series is
vertically scaled so that its censor_hline -> 1.
That is, each time point is divided by the
censor_hline value. When using this, a visually
pleasing yaxis range might be 0:3.
SCALE_TO_MAX: each input time series is
vertically scaled so that its max value -> 1.
That is, each time point is divided by the
max value. When using this, a visually
pleasing yaxis range might be 0:1.1.
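For example, a small sketch (the file name and the 0.2 limit are just
placeholders) that scales a motion trace so its censor limit maps to 1:
    1dplot.py -infiles motion_enorm.1D -censor_hline 0.2 \
              -scale SCALE_TO_HLINE -yaxis 0:3 -prefix enorm_scaled.jpg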
-yfiles_pm YP :one or more file names of text files. Each column in
this file will be treated as a separate time series for
plotting a plus/minus colorized range for an associated
yfile/infile line. The number of files input with YP
must exactly match that of either '-infiles ..' or
'-yfiles ..'. The color will match the line color, but at
greatly reduced opacity.
-ylim_use_pm :by default, if no '-yaxis ..' opt is used, the ylim
range of each subplot comes from the (slightly expanded)
bounds of the min and max y-value in each. But if
'-yfiles_pm ..' is used, you can use this option to expand
those limits by the min and max of the extra error-bounded
space.
-xfile XX :one way to input x-values explicitly: as a "1D" file XX
containing a single column of numbers. If no xfile is
entered, then a list of integers is created, 0..N-1, based
on the length of the "-infiles ..".
-xvals START STOP STEP
:an alternative means for entering abscissa values: one
can provide exactly 3 numbers, the start (inclusive)
the stop (exclusive) and the steps to take, following
Python conventions-- that is, numbers are generated
[START,STOP) in stepsizes of STEP.
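For example, a sketch (file name and TR are placeholders) that puts a
150-point series on a time axis in seconds with TR = 2 s; note that
[0,300) in steps of 2 yields exactly 150 values:
    1dplot.py -infiles motion_enorm.1D -xvals 0 300 2 -prefix enorm_time.jpg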
-yaxis YMIN1:YMAX1 YMIN2:YMAX2 YMIN3:YMAX3 ...
:optional range for each "infile" y-axis; note the use
of a colon to designate the min/max of the range. One
can also specify just the min (e.g., "YMIN:") or just
the max (e.g., ":YMAX"). The final number of y-axis
values or pairs *must* match the total number of columns
of data from infiles; a placeholder could just be
":". Without specifying a range, one is calculated
automatically from the min and max of the dsets
themselves. The order of y-axis ranges should match the order
of infiles.
-ylabels YL1 YL2 YL3 ...
:optional text labels for each "infile" column; the
final number of ylabels *must* match the total number
of columns of data from infiles. The order of ylabels
should match the order of infiles. These labels are
plotted vertically along the y-axis of the plot.
* For 1D files output by 3dvolreg, one can
automatically provide the 6 associated ylabels by
providing the keyword 'VOLREG' (and this counts as 6
labels).
* For 1D files output by '3dAllineate -1Dparam_save ..',
if you are using just the 6 rigid body parameters, you
can automatically provide the 6 associated ylabels by
providing the keyword 'ALLINPAR6' (and this counts as
6 labels). If using the 6 rigid body parameters and 3
scaling, you can use the keyword 'ALLINPAR9' (which counts
as 9 labels). If using all 12 affine parameters, you can use
the keyword 'ALLINPAR12' (which counts as 12 labels).
-ylabels_maxlen MM
:y-axis labels can get long; this opt allows you to have
them wrap into multiple rows, each of length <=MM. At the
moment, this wrapping is done with some "logic" that tries
to be helpful (e.g., split at underscores where possible),
as long as that helpfulness doesn't increase line numbers
a lot. The value entered here will apply to all y-axis
labels in the plot.
-legend_on :turn on the plotting of a legend in the plot(s). Legend
will not be shown in the boxplot panels, if using.
-legend_labels LL1 LL2 LL3 ...
:optional legend labels, if using '-legend_on' to show a
legend. If no arguments are provided for this option,
then the labels will be the arguments to '-infiles ..'
(or '-yfiles ..'). If arguments ARE input, then they must
match the number of '-infiles ..' (or '-yfiles ..').
-legend_locs LOC1 LOC2 LOC3 ...
:optional legend locations, if using '-legend_on' to
show a legend. If no arguments are provided for this
option, then the locations will be the ones picked by
Python (a reasonable starting point). If arguments ARE
input, then they must match the number of '-infiles ..'
(or '-yfiles ..'). Valid entries are strings
recognizable by matplotlib's plt.legend()'s "loc" opt;
this includes: 'best', 'right', 'upper right', 'lower
right', 'center right', etc. Note that if you use a
two-word argument here, you MUST put it in quotes (or,
as a special treat, you can combine it with an
underscore, and it will be parsed correctly). So, valid
values of LOC* could be:
left
'lower left'
upper_center
-xlabel XL :optional text labels for the abscissa/x-axis. Only one may
be entered, and it will *only* be displayed on the bottom
panel of the output plot. Using labels is good practice!!
-title TT :optional title for the set of plots, placed above the top-
most subplot.
-reverse_order :optional switch; by default, the entered time series
are plotted top to bottom according to the order they
were entered (i.e., first-listed plot at the top).
This option reverses that order (to first-listed plot
at the bottom), in order to match with 1dplot's
behavior.
-sepscl :make each graph have its own y-range, determined by
slightly padding its min and max values. By default,
the separate plots all have the same y-range, which
is determined by taking the min-of-mins and max-of-
maxes, and padding slightly outward.
-one_graph :plot multiple infiles in a single subplot (default is to put
each one in a new subplot).
-dpi DDD :choose the output image's DPI. The default value is
150.
-figsize FX FY :choose the output image's dimensions (units are inches).
The default width is 10; the default height
is 0.5 + N*0.75, where 'N' is the number of
infile columns.
-fontsize FS :change image fontsize; default is 10.
-fontfamily FF :change font-family used; default is the luvly
monospace.
-fontstyles FSS :add in a fontname; should match with chosen
font-family; default is whatever Python has on your
system for the given family. Whether your prescribed
font gets used depends on what is installed on your
comp.
-colors C1 C2 C3 ...
:you can decide what color(s) to cycle through in plots
(enter one or more); if there are more infile columns
than entered colors, the program just keeps cycling
through the list. By default, if only 1 infile column is
given, the plotline will be black; when more than one
infile column is given, a default palette of 10
colors, chosen for their mutual-distinguishable-ness,
will be cycled through.
One of the colors can also be a decimal in range [0.0, 1.0],
which will correspond to grayscale in range [black, white],
respectively.
-patches RL1 RL2 RL3 ...
:when viewing data from multiple runs that have been
processed+concatenated, knowing where they start/stop
can be useful. This option helps with that, by
alternating patches of the background slightly between
white and light gray. The user enters any appropriate
number of run lengths, and the background patch for
the duration of the first is white, then light gray,
etc. (to *start* with light gray, one can have '0' be
the first RL value).
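For example, a sketch (file name and run lengths are placeholders) for
three concatenated runs of 150 volumes each:
    1dplot.py -infiles dfile_rall.1D -patches 150 150 150 \
              -prefix runs_marked.jpg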
-censor_trs CS1 CS2 CS3 ...
:specify time points where censoring has occurred (e.g.,
due to a motion or outlier criterion). With this
option, the values are entered using AFNI index
notation, such as '0..3,8,25,99..$'. Note that if you
use special characters like the '$', then the given
string must be enclosed in quotes.
One or more strings can be entered, and results are
simply combined (as well as if censor files are
entered-- see the '-censor_files ..' opt).
In order to highlight censored points, a translucent
background color will be added to all plots of width 1.
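For example, a sketch (file name and indices are placeholders) marking a
few censored volumes with AFNI index notation (quotes are needed here
because of the '$'):
    1dplot.py -infiles motion_enorm.1D -censor_trs '0..2,45,98..$' \
              -prefix enorm_cen.jpg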
-censor_files CF1 CF2 CF3 ...
:specify time points where censoring has occurred (e.g.,
due to a motion or outlier criterion). With this
option, the values are entered as 1D files, columns
where 0 indicates censoring at that [i]th time point,
and 1 indicates *no* censoring there.
One or more files can be entered, and results are
simply combined (as well as if censor strings are
entered-- see the '-censor_trs ..' opt).
In order to highlight censored points, a translucent
background color will be added to all plots of width 1.
-censor_hline CH1 CH2 CH3 ...
:one can add a dotted horizontal line to the plot, with
the intention that it represents the relevant threshold
(for example, motion limit or outlier fraction limit).
One can specify more than one hline: if one line
is entered, it will be applied to each plot; if more
than one hline is entered, there must be the same number
of values as infile columns.
Ummm, it is also assumed that all censor hline values
are >=0; if negative, it will be a problem-- ask if this
is a problem!
A value of 'NONE' can also be input, to be a placeholder
in a list, when some subplots have censor_hline values
and others don't.
-censor_RGB COL :choose the color of the censoring background; from the
command line, users enter a string, which could be:
+ 3 space-separated floats in range [0, 1], of RGB values
+ 4 space-separated floats in range [0, 1], of RGBA values
+ 1 string of a valid matplotlib color
+ 1 string of a valid matplotlib color and 1 float in
range [0, 1], which is an alpha opacity value.
(default is: '1 0.7 0.7').
-bkgd_color BC :change the background color outside of the plot
windows. Default is the Python color: 0.9.
EXAMPLES ~1~
1) Plot Euclidean norm (enorm) profile, with the censor limit and
related file of censoring:
1dplot.py \
-sepscl \
-boxplot_on \
-infiles motion_sub-10506_enorm.1D \
-censor_files motion_sub-10506_censor.1D \
-censor_hline 0.2 \
-title "Motion censoring" \
-ylabels enorm \
-xlabel "vols" \
-title "Motion censoring" \
-prefix mot_cen_plot.jpg
2) Plot the 6 rigid-body motion parameters from 3dvolreg, along with
the useful composite 'enorm' and outlier time series:
1dplot.py \
-sepscl \
-boxplot_on \
-reverse_order \
-infiles dfile_rall.1D \
motion_sub-10506_enorm.1D \
outcount_rall.1D \
-ylabels VOLREG enorm outliers \
-xlabel "vols" \
-title "Motion and outlier plots" \
-prefix mot_outlier_plot.png
3) Use labels and locations to plot 3dhistog output (there will
be some minor whining about failing to process comment label
*.1D files, but this won't cause any problems for the plot); here,
legend labels will be the args after '-yfiles ..' (with the
part in square brackets, but without the quotes):
1dplot.py \
-xfile HOUT_A.1D'[0]' \
-yfiles HOUT_A.1D'[1]' HOUT_B.1D'[1]' \
-prefix img_histog.png \
-colors blue 0.6 \
-boxplot_on \
-legend_on
4) Same as #3, but using some additional opts to control legends.
Here, I am using 2 different formats of providing the legend
locations in each separate subplot, just for fun:
1dplot.py \
-xfile HOUT_A.1D'[0]' \
-yfiles HOUT_A.1D'[1]' HOUT_B.1D'[1]' \
-prefix img_histog.png \
-colors blue 0.6 \
-boxplot_on \
-legend_on \
-legend_locs upper_right "lower left" \
-legend_labels A B
AFNI program: 1dRplot
Usage:
------
1dRplot is a program for plotting a 1D file
Options in alphabetical order:
------------------------------
-addavg: Add line at average of column
-col.color COL1 [COL2 ...]: Colors for each column in -input.
COL? are integers for now.
-col.grp 1Dfile or Rexp: integer labels defining column grouping
-col.line.type LT1 [LT2 ...]: Line type for each column in -input.
LT? are integers for now.
-col.name NAME1 [NAME2 ...]: Name of each column in -input.
Special flags:
VOLREG: --> 'Roll Pitch Yaw I-S R-L A-P'
-col.name.show : Show names of column in -input.
-col.nozeros: Do not plot all zeros columns
-col.plot.char CHAR1 [CHAR2 ...] : Symbols for each column in -input.
CHAR? are integers (usually 0-127), or
characters + - I etc.
See the following link for what CHAR? values you can use:
http://stat.ethz.ch/R-manual/R-patched/library/graphics/html/points.html
-col.plot.type PLOT_TYPE: Column plot type.
'l' for line, 'p' for points, 'b' for both
-col.text.lym LYM_TEXT: Text to be placed at left Y margin.
You need one string per column.
Special Flags: You can also use COL.NAME to use column
names for the margin text, or you can use
COL.IND to use the column's index in the file
-col.text.rym RYM_TEXT: Text to be placed at right Y margin.
You need one string per column.
See also Special Flags section under -col.text.lym
-col.ystack: Scale each column and offset it based on its
column index. This is useful for stacking
a large number of columns on one plot.
It is only carried out when graphing more
than one series with the -one option.
-grid.show : Show grid.
-grp.label GROUP1 [GROUP2 ...]: Labels assigned to each group.
Default is no labeling
-help: this help message
-i 1D_INPUT: file to plot. This field can have multiple
formats. See Data Strings section below.
1dRplot will automatically detect certain
1D files output by some programs such as 3dhistog
or 3ddot and adjust parameters accordingly.
-input 1D_INPUT: Same as -i
-input_delta 1D_INPUT: file containing value for error bars
-input_type 1D_TYPE: Type of data in 1D file.
Choose from 'VOLREG', or 'XMAT'
-leg.fontsize : fontsize for legend text.
-leg.line.color : Color to use for items in legend.
Default is taken from column line color.
-leg.line.type : Line type to use for items in legend.
Default is taken from column line types.
If you want no line, set -leg.line.type = 0
-leg.names : Names to use for items in legend.
Default is taken from column names.
-leg.ncol : Number of columns in legend.
-leg.plot.char : plot characters to use for items in legend.
Default is taken from column plot character (-col.plot.char).
-leg.position : Legend position. Choose from:
bottomright, bottom, bottomleft
left, topleft, top, topright, right,
and center
-leg.show : Show legend.
-load.Rdat RDAT: load data list from save.Rdat for reproducing plot.
Note that you cannot override the settings in RDAT,
unless you run in the interactive R mode. For example,
say you have dice.Rdat saved from a previous command
and you want to change P$nodisp to TRUE:
load('dice.Rdat'); P$nodisp <- TRUE; plot.1D.eng(P)
-mat: Display as matrix
-matplot: Display as matrix
-msg.trace: Output trace information along with errors and notices
-multi: Put columns in separate graphs
-multiplot: Put columns in separate graphs
-nozeros: Do not plot all zeros time series
-one: Put all columns on one graph
-oneplot: Put all columns on one graph
-prefix PREFIX: Output prefix. See also -save.
-row.name NAME1 [NAME2 ...]: Name of each row in -input.
For the moment, this is only used with -matplot
-rowcol.name NAME1 [NAME2 ...]: Names of rows, same as name of columns.
For the moment, this is only used with -matplot.
-run_examples: Run all examples, one after the other.
-save PREFIX: Save plot and quit
No need for -prefix with this option
-save.Rdat : Save data list for reproducing plot in R.
You need to specify -prefix or -save
along with this option to set the prefix.
See also -load.Rdat
-save.size width height: Save figure size in pixels
Default is 2000 2000
-show_allowed_options: list of allowed options
-title TITLE: Graph title. File name is used by default.
Use NONE to be sure no title is used.
-TR TR: Sampling period, in seconds.
-verb VERB: VERB is an integer specifying verbosity level.
0 for quiet (Default). 1 or more: talkative.
-x 1D_INPUT: x axis. You can also use the string 'ENUM'
to indicate that the x axis should go from
1 to N, the number of samples in -input
-xax.label XLABEL: Label of X axis
-xax.lim MIN MAX [STEP]: Range of X axis, STEP is optional
-xax.tic.text XTTEXT: X tics text
-yax.label YLABEL: Label of Y axis
-yax.lim MIN MAX [STEP]: Range of Y axis, STEP is optional
-yax.tic.text YTTEXT: Y tics text
-zeros: Do plot all zeros time series
Data Strings:
-------------
You can specify input matrices and vectors in a variety of
ways. The simplest is by specifying a .1D file with all
the trimmings of column and row selectors. You can also
specify a string that gets evaluated on the fly.
For example: '1D: 1 4 8' evaluates to a vector of values 1 4 and 8.
Also, you can use R expressions such as: 'R: seq(0,10,3)'
To download demo data from AFNI's website run this command:
-----------------------------------------------------------
curl -o demo.X.xmat.1D afni.nimh.nih.gov/pub/dist/edu/data/samples/X.xmat.1D
curl -o demo.motion.1D afni.nimh.nih.gov/pub/dist/edu/data/samples/motion.1D
Example 1 --- :
--------------------------------
1dRplot -input demo.X.xmat.1D'[5..10]'
Example 2 --- :
--------------------------------
1dRplot -input demo.X.xmat.1D'[5..10]' \
-input_type XMAT
Example 3 --- :
--------------------------------
1dRplot -input demo.motion.1D \
-input_type VOLREG
Example 4 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)'
Example 5 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 5)' \
-one
Example 6 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack
Example 7 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack \
-col.grp '1D:1 1 1 2 2 2 3 3 3 3' \
-grp.label slow medium fast \
-prefix ta.jpg \
-yax.lim 0 18 \
-leg.show \
-leg.position top
Example 8 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack \
-col.grp '1D:1 1 1 2 2 2 3 3 3 3' \
-grp.label slow medium fast \
-prefix tb.jpg \
-yax.lim 0 18 \
-leg.show \
-leg.position top \
-nozeros \
-addavg
Example 9 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack \
-col.grp '1D:1 1 1 2 2 2 3 3 3 3' \
-grp.label slow medium fast \
-prefix tb.jpg \
-yax.lim 0 18 \
-leg.show \
-leg.position top \
-nozeros \
-addavg \
-col.text.lym Tutti mi chiedono tutti mi vogliono \
Donne ragazzi vecchi fanciulle \
-col.text.rym "R:paste('Col',seq(1,10), sep='')"
Example 10 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.plot.char 2 \
-col.plot.type p
Example 11 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.line.type 3 \
-col.plot.type l
Example 12 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.plot.char 2 \
-col.line.type 3 \
-col.plot.type b
Example 13 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.plot.char 2 5\
-col.line.type 3 4\
-col.plot.type b \
-TR 2
Example 14 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one -col.plot.char 2 -col.line.type 3 \
-col.plot.type b -TR 2 \
-yax.tic.text 'numa numa numa numaei' \
-xax.tic.text 'Alo' 'Salut' 'sunt eu' 'un haiduc'
AFNI program: 1dSEM
Usage: 1dSEM [options] -theta 1dfile -C 1dfile -psi 1dfile -DF nn.n
Computes path coefficients for connection matrix in Structural Equation
Modeling (SEM)
The program takes as input :
1. A 1D file with an initial representation of the connection matrix
with a 1 for each interaction component to be modeled and a 0 if
it is not to be modeled. This matrix should be PxP (P rows, P columns)
2. A 1D file of the C, correlation matrix, also with dimensions PxP
3. A 1D file of the residual variance vector, psi
4. The degrees of freedom, DF
Output is printed to the terminal and may be redirected to a 1D file
The path coefficient matrix is printed for each matrix computed
Options:
-theta file.1D = connection matrix 1D file with initial representation
-C file.1D = correlation matrix 1D file
-psi file.1D = residual variance vector 1D file
-DF nn.n = degrees of freedom
-max_iter n = maximum number of iterations for convergence (Default=10000).
Values can range from 1 to any positive integer less than 10000.
-nrand n = number of random trials before optimization (Default = 100)
-limits m.mmm n.nnn = lower and upper limits for connection coefficients
(Default = -1.0 to 1.0)
-calccost = no modeling at all, just calculate the cost function for the
coefficients as given in the theta file. This may be useful for verifying
published results
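For example, a sketch (file names are placeholders) that only evaluates the
cost of an already-published set of path coefficients, with no fitting:
    1dSEM -theta published_theta.1D -C SEMCorr.1D -psi SEMvar.1D -DF 30 -calccost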
-verbose nnnnn = print info every nnnnn steps
Model search options:
Look for best model. The initial connection matrix file must follow these
specifications. Each entry must be 0 for entries excluded from the model,
1 for each required entry in the minimum model, 2 for each possible path
to try.
-tree_growth or
-model_search = search for best model by growing a model for one additional
coefficient from the previous model for n-1 coefficients. If the initial
theta matrix has no required coefficients, the initial model will grow from
the best model for a single coefficient
-max_paths n = maximum number of paths to include (Default = 1000)
-stop_cost n.nnn = stop searching for paths when cost function is below
this value (Default = 0.1)
-forest_growth or
-grow_all = search over all possible models by comparing models at
incrementally increasing numbers of path coefficients. This
algorithm searches all possible combinations, so it can be
exceptionally slow, especially as the number of
coefficients gets larger, for example at n>=9.
-leafpicker = relevant only for forest growth searches. Expands the search
optimization to look at multiple paths to avoid local minima. This method
is the default technique for tree growth and standard coefficient searches.
This program uses a Powell optimization algorithm to find the connection
coefficients for any particular model.
References:
Powell, MJD, "The NEWUOA software for unconstrained optimization without
derivatives", Technical report DAMTP 2004/NA08, Cambridge University
Numerical Analysis Group:
See: http://www.ii.uib.no/~lennart/drgrad/Powell2004.pdf
Bullmore, ET, Horwitz, B, Honey, GD, Brammer, MJ, Williams, SCR, Sharma, T,
How Good is Good Enough in Path Analysis of fMRI Data?
NeuroImage 11, 289-301 (2000)
Stein, JL, et al., A validated network of effective amygdala connectivity,
NeuroImage (2007), doi:10.1016/j.neuroimage.2007.03.022
The initial representation in the theta file is non-zero for each element
to be modeled. The 1D file can have leading columns for labels that will
be used in the output. Label rows must be commented with the # symbol.
If using any of the model search options, the theta file should have a '1' for
each required coefficient, '0' for each excluded coefficient, '2' for an
optional coefficient. Excluded coefficients are not modeled. Required
coefficients are included in every computed model.
N.B. - Connection directionality in the path connection matrices is from
column to row of the output connection coefficient matrices.
Be very careful when interpreting those path coefficients.
First of all, they are not correlation coefficients. Suppose we have a
network with a path connecting from region A to region B. The meaning
of the coefficient theta (e.g., 0.81) is this: if region A increases by
one standard deviation from its mean, region B would be expected to increase
by 0.81 its own standard deviations from its own mean while holding all other
relevant regional connections constant. With a path coefficient of -0.16,
when region A increases by one standard deviation from its mean, region B
would be expected to decrease by 0.16 its own standard deviations from its
own mean while holding all other relevant regional connections constant.
So theoretically speaking the range of the path coefficients can be anything,
but most of the time they range from -1 to 1. To save running time, the
default values for -limits are set with -1 and 1, but if the result hits
the boundary, increase them and re-run the analysis.
Examples:
To confirm a specific model:
1dSEM -theta inittheta.1D -C SEMCorr.1D -psi SEMvar.1D -DF 30
To search models by growing from the best single coefficient model
up to 12 coefficients
1dSEM -theta testthetas_ms.1D -C testcorr.1D -psi testpsi.1D \
-limits -2 2 -nrand 100 -DF 30 -model_search -max_paths 12
To search all possible models up to 8 coefficients:
1dSEM -theta testthetas_ms.1D -C testcorr.1D -psi testpsi.1D \
-nrand 10 -DF 30 -stop_cost 0.1 -grow_all -max_paths 8 | & tee testgrow.txt
For more information, see https://afni.nimh.nih.gov/sscc/gangc/PathAna.html
and our HBM 2007 poster at
https://sscc.nimh.nih.gov/sscc/posters/file.2007-06-07.0771819246
If you find this program useful, please cite:
G Chen, DR Glen, JL Stein, AS Meyer-Lindenberg, ZS Saad, RW Cox,
Model Validation and Automated Search in FMRI Path Analysis:
A Fast Open-Source Tool for Structural Equation Modeling,
Human Brain Mapping Conference, 2007
AFNI program: 1dsound
Usage: 1dsound [options] tsfile
Program to create a sound file from a 1D file (column of numbers).
Is this program useful? Probably not, but it can be fun.
-------
OPTIONS
-------
===== output filename =====
-prefix ppp = Output filename will be ppp.au
[Sun audio format https://en.wikipedia.org/wiki/Au_file_format]
+ If you don't use '-prefix', the output is file 'sound.au'.
+ If 'ppp' ends in '.au', this program won't add another '.au'.
===== encoding details =====
-16PCM = Output in 16-bit linear PCM encoding (uncompressed)
+ Less quantization noise (audible hiss) :)
+ Takes twice as much disk space for output as 8-bit output :(
+++ This is the default method now!
+ https://en.wikipedia.org/wiki/Pulse-code_modulation
-8PCM = Output in 8-bit linear PCM encoding
+ There is no good reason to use this option.
-8ulaw = Output in 8-bit mu-law encoding.
+ Provides a little better quality than -8PCM,
but still has audible quantization noise hiss.
+ https://en.wikipedia.org/wiki/M-law_algorithm
-tper X = X seconds of sound per time point in 'tsfile'.
-TR X Allowed range for 'X' is 0.01 to 1.0 (inclusive).
-dt X [default time step is 0.2 s]
You can use '-tper', '-dt', or '-TR', as you like.
===== how the sound timeseries is produced from the data timeseries =====
-FM = Output sound is frequency modulated between 110 and 1760 Hz
from min to max in the input 1D file.
+ Usually 'sounds terrible'.
+ The only reason this is here is that it was the first method
I implemented, and I kept it for the sake of nostalgia.
-notes = Output sound is a sequence of notes, low to high pitch
based on min to max in the input 1D file.
+++ This is the default method of operation.
+ A pentatonic scale is used, which usually 'sounds nice':
https://en.wikipedia.org/wiki/Pentatonic_scale
-notewave W = Selects the shape of the notes used. 'W' is one of these:
-waveform W sine = pure sine wave (sounds simplistic)
sqsine = square root of sine wave (a little harsh and loud)
square = square wave (a lot harsh and loud)
triangle = triangle wave [the default waveform]
-despike = apply a simple despiking algorithm, to avoid the artifact
of one very large or small value making all the other notes
end up being the same.
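For example, a sketch (the file name is a placeholder) that renders a motion
trace as despiked notes, so one large spike doesn't flatten the melody:
    1dsound -notes -despike -prefix enorm_song motion_enorm.1D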
===== Notes about notes =====
** At this time, the default production method is '-notes', **
** using the triangle waveform (I like this best). **
** With '-notes', up to 6 columns of the input file will be used **
** to produce a polyphonic sound (in a single channel). **
** (Any columns past the 6th in the input 'tsfile' are ignored.) **
===== hear the sound right away! =====
-play = Plays the sound file after it is written.
On this computer: uses program /usr/bin/aplay
===>> Playing sound on a remote computer is
annoying, pointless, and likely to get you punched.
--------
EXAMPLES
--------
The first 2 examples are purely synthetic, using 'data' files created
on the command line. The third example uses a data file that was written
out of an AFNI graph viewer using the 'w' keystroke.
1dsound -prefix A1 '1D: 0 1 2 1 0 1 2 0 1 2'
1deval -num 100 -expr 'sin(x+0.01*x*x)' | 1dsound -tper 0.1 -prefix A2 1D:stdin
1dsound -prefix A3 -tper 0.1 028_044_003.1D
-----
NOTES
-----
* File can be played with the 'sox' audio package command
play A1.au gain -5
+ Here 'gain -5' turns the volume down :)
+ sox is not provided with AFNI :(
+ To see if sox is on your system, type the command 'which sox'
+ If you have sox, you can add 'reverb 99' at the end of the
'play' command line, and have some extra fun.
+ Many other effects are available with sox 'play',
and they can also be used to produce edited sound files:
http://sox.sourceforge.net/sox.html#EFFECTS
+ You can convert the .au file produced from here to other
formats using sox; for example:
sox Bob.au Cox.au BobCox.aiff
combines the 2 .au input files to a 2-channel (stereo)
Apple .aiff output file. See this for more information:
http://sox.sourceforge.net/soxformat.html
* Creation of the file does not depend on sox, so if you have
another way to play .au files, you can use that.
* Mac OS X: Quicktime (GUI) or afplay (command line) programs.
+ sox can be installed by first installing 'brew'
-- see https://brew.sh/ -- and then using command
'brew install sox'.
* Linux: Getting sox is probably the simplest thing to do.
+ Or install the mplayer package (which also does videos).
+ Another possibility is the aplay program.
* The audio output file is sampled at 16K samples per second.
For example, a 30 second file will be 960K bytes in size,
at 16 bits per sample.
* The auditory effect varies significantly with the '-tper'
parameter X; '-tper 0.02' is very different than '-tper 0.4'.
--- Quick hack for experimentation and fun - RWCox - Aug 2018 ---
AFNI program: 1dsum
Usage: 1dsum [options] a.1D b.1D ...
where each file a.1D, b.1D, etc. is an ASCII file of numbers arranged
in rows and columns. The sum of each column is written to stdout.
Options:
-ignore nn = skip the first nn rows of each file
-use mm = use only mm rows from each file
-mean = compute the average instead of the sum
-nocomment = by default, the # comments from the header of the
first input file are reproduced in the output;
use the '-nocomment' option if you do NOT want
this to happen.
-OKempty = If you encounter an empty 1D file, print 0
and exit quietly instead of exiting with an
error message
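For example, a small sketch (the file name is a placeholder) that prints the
mean of each column, skipping the first 2 rows:
    1dsum -mean -ignore 2 motion.1D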
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dsvd
Usage: 1dsvd [options] 1Dfile 1Dfile ...
- Computes SVD of the matrix formed by the 1D file(s).
- Output appears on stdout; to save it, use '>' redirection.
OPTIONS:
-one = Make 1st vector be all 1's.
-vmean = Remove mean from each vector (can't be used with -one).
-vnorm = Make L2-norm of each vector = 1 before SVD.
* The above 2 options mirror those in 3dpc.
-cond = Only print condition number (ratio of extremes)
-sing = Only print singular values
* To compare the singular values from 1dsvd with those from
3dDeconvolve you must use the -vnorm option with 1dsvd.
For example, try
3dDeconvolve -nodata 200 1 -polort 5 -num_stimts 1 \
-stim_times 1 '1D: 30 130' 'BLOCK(50,1)' -singvals
1dsvd -sing -vnorm nodata.xmat.1D
-sort = Sort singular values (descending) [the default]
-nosort = Don't bother to sort the singular values
-asort = Sort singular values (ascending)
-1Dleft = Only output left eigenvectors, in a .1D format
This might be useful for reducing the number of
columns in a design matrix. The singular values
are printed at the top of each vector column,
as a '#...' comment line.
-nev n = If -1Dleft is used, '-nev' specifies to output only
the first 'n' eigenvectors, rather than all of them.
* If you are a tricky person, such as Souheil, you can
put a '%' after the value, and then you are saying
keep eigenvectors until at least n% of the sum of
singular values is accounted for. In this usage,
'n' must be a number less than 100; for example, to
reduce a matrix down to a smaller set of columns that
capture most of its column space, try something like
1dsvd -1Dleft -nev 99% Xorig.1D > X99.1D
EXAMPLE:
1dsvd -vmean -vnorm -1Dleft fred.1D'[1..6]' | 1dplot -stdin
NOTES:
* Call the input n X m matrix [A] (n rows, m columns). The SVD
is the factorization [A] = [U] [S] [V]' ('=transpose), where
- [U] is an n x m matrix (whose columns are the 'Left vectors')
- [S] is a diagonal m x m matrix (the 'singular values')
- [V] is an m x m matrix (whose columns are the 'Right vectors')
* The default output of the program is
- An echo of the input [A]
- The [U] matrix, each column headed by its singular value
- The [V] matrix, each column headed by its singular value
(please note that [V] is output, not [V]')
- The pseudo-inverse of [A]
* This program was written simply for some testing purposes,
but is distributed with AFNI because it might be useful-ish.
* Recall that you can transpose a .1D file on input by putting
an escaped ' character after the filename. For example,
1dsvd fred.1D\'
You can use this feature to get around the fact that there
is no '-1Dright' option. If you understand.
* For more information on the SVD, you can start at
http://en.wikipedia.org/wiki/Singular_value_decomposition
* Author: Zhark the Algebraical (Linear).
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1d_tool.py
=============================================================================
1d_tool.py - for manipulating and evaluating 1D files
---------------------------------------------------------------------------
purpose: ~1~
This program is meant to read/manipulate/write/diagnose 1D datasets.
Input can be specified using AFNI sub-brick[]/time{} selectors.
---------------------------------------------------------------------------
examples (very basic for now): ~1~
Example 1. Select by rows and columns, akin to 1dcat. ~2~
Note: columns can be X-matrix labels.
1d_tool.py -infile 'data/X.xmat.1D[0..3]{0..5}' -write t1.1D
or using column labels:
1d_tool.py -infile 'data/X.xmat.1D[Run#1Pol#0..Run#1Pol#3]' \
-write run0_polorts.1D
Example 2. Compare with selection by separate options. ~2~
1d_tool.py -infile data/X.xmat.1D \
-select_cols '0..3' -select_rows '0..5' \
-write t2.1D
diff t1.1D t2.1D
Example 2b. Select or remove columns by label prefixes. ~2~
Keep only bandpass columns:
1d_tool.py -infile X.xmat.1D -write X.bandpass.1D \
-label_prefix_keep bandpass
Remove only bandpass columns (maybe for 3dRSFC):
1d_tool.py -infile X.xmat.1D -write X.no.bandpass.1D \
-label_prefix_drop bandpass
Keep polort columns (start with 'Run') motion shifts ('d') and labels
starting with 'a' and 'b'. But drop 'bandpass' columns:
1d_tool.py -infile X.xmat.1D -write X.weird.1D \
-label_prefix_keep Run d a b \
-label_prefix_drop bandpass
Example 2c. Select columns by group values, 3 examples. ~2~
First be sure of what the group labels represent.
1d_tool.py -infile X.xmat.1D -show_group_labels
i) Select polort (group -1) and other baseline (group 0) terms.
1d_tool.py -infile X.xmat.1D -select_groups -1 0 -write baseline.1D
ii) Select everything but baseline groups (anything positive).
1d_tool.py -infile X.xmat.1D -select_groups POS -write regs.of.int.1D
iii) Reorder to have regressors of interest, then motion, then polort.
1d_tool.py -infile X.xmat.1D -select_groups POS 0, -1 -write order.1D
iv) Create stim-only X-matrix file: select non-baseline columns of
X-matrix and write with header comment.
1d_tool.py -infile X.xmat.1D -select_groups POS \
-write_with_header yes -write X.stim.xmat.1D
Or, using a convenience option:
1d_tool.py -infile X.xmat.1D -write_xstim X.stim.xmat.1D
Example 2d. Select specific runs from the input. ~2~
Note that X.xmat.1D may have runs defined automatically, but for an
arbitrary input, they may need to be specified via -set_run_lengths.
i) .... apparently I forgot to do this...
Example 2e. Select tedana mixing columns by accept or reject. ~2~
ME-ICA tedana outputs component metrics in desc-tedana_metrics.tsv
and the actual components in desc-ICA_mixing.tsv. Write the rejected
components to tedana.rejected.1D.
1d_tool.py -infile desc-ICA_mixing.tsv \
-select_cols_via_TSV_table desc-tedana_metrics.tsv \
Component classification=rejected \
-write tedana.rejected.1D -verb 2
Example 2f. Select fMRIPrep confounds. ~2~
fMRIPrep outputs many time series to optionally use for regression.
Assuming this is in a file fmriprep_confounds.tsv:
select AROMA motion time series:
1d_tool.py -infile fmriprep_confounds.tsv'[aroma_mot*]' \
-write aroma_motion.1D
select standard motion parameters (3 rotations, 3 shifts):
1d_tool.py -infile fmriprep_confounds.tsv'[rot_?,trans_?]' \
-write fmriprep_motion.1D
verify the labels chosen by selector:
1d_tool.py -infile fmriprep_confounds.tsv'[rot_?,trans_?]' \
-show_group_labels
Example 3. Transpose a dataset, akin to 1dtranspose. ~2~
1d_tool.py -infile t3.1D -transpose -write ttr.1D
Example 4a. Zero-pad a single-run 1D file across many runs. ~2~
Given a file of regressors (for example) across a single run (run 2),
create a new file that is padded with zeros, so that it now spans
many (7) runs. Runs are 1-based here.
1d_tool.py -infile ricor_r02.1D -pad_into_many_runs 2 7 \
-write ricor_r02_all.1D
Example 4b. Similar to 4a, but specify varying TRs per run. ~2~
The number of runs must match the number of run_lengths parameters.
1d_tool.py -infile ricor_r02.1D -pad_into_many_runs 2 7 \
-set_run_lengths 64 61 67 61 67 61 67 \
-write ricor_r02_all.1D
Example 5. Display small details about a 1D dataset: ~2~
a. Display number of rows and columns for a 1D dataset.
Note: to display them "quietly" (only the numbers), add -verb 0.
This is useful for setting a script variable.
1d_tool.py -infile X.xmat.1D -show_rows_cols
1d_tool.py -infile X.xmat.1D -show_rows_cols -verb 0
b. Display indices of regressors of interest from an X-matrix.
1d_tool.py -infile X.xmat.1D -show_indices_interest
c. Display X-matrix labels by group.
1d_tool.py -infile X.xmat.1D -show_group_labels
d. Display "degree of freedom" information:
1d_tool.py -infile X.xmat.1D -show_df_info
e. Display X-matrix stimulus class information (for one class or ALL).
1d_tool.py -infile X.xmat.1D -show_xmat_stim_info aud
1d_tool.py -infile X.xmat.1D -show_xmat_stim_info ALL
f. Display X-matrix column index list for those of the given classes.
Display regressor labels or in encoded column index format.
1d_tool.py -infile X.xmat.1D -show_xmat_stype_cols AM IM
1d_tool.py -infile X.xmat.1D -show_xmat_stype_cols ALL \
-show_regs_style encoded
g. Display X-matrix column index list for all-zero regressors.
Display regressor labels or in encoded column index format.
1d_tool.py -infile X.xmat.1D -show_regs allzero
1d_tool.py -infile X.xmat.1D -show_regs allzero -show_regs_style encoded
Example 6a. Show correlation matrix warnings for this matrix. ~2~
This option does not include warnings from baseline regressors,
which are common (from polort 0, from similar motion, etc).
1d_tool.py -infile X.xmat.1D -show_cormat_warnings
Example 6b. Show entire correlation matrix. ~2~
1d_tool.py -infile X.xmat.1D -show_cormat
Example 6c. Like 6a, but include warnings for baseline regressors. ~2~
1d_tool.py -infile X.xmat.1D -show_cormat_warnings_full
Example 7a. Output temporal derivative of motion regressors. ~2~
There are 9 runs in dfile_rall.1D, and derivatives are applied per run.
1d_tool.py -infile dfile_rall.1D -set_nruns 9 \
-derivative -write motion.deriv.1D
Example 7b. Similar to 7a, but let the run lengths vary. ~2~
The sum of run lengths should equal the number of time points.
1d_tool.py -infile dfile_rall.1D \
-set_run_lengths 64 64 64 64 64 64 64 64 64 \
-derivative -write motion.deriv.rlens.1D
Example 7c. Use forward differences. ~2~
instead of the default backward differences...
1d_tool.py -infile dfile_rall.1D \
-set_run_lengths 64 64 64 64 64 64 64 64 64 \
-forward_diff -write motion.deriv.rlens.1D
Example 8. Verify whether labels show slice-major ordering. ~2~
This is where all slice0 regressors come first, then all slice1
regressors, etc. Either show the labels and verify visually, or
print whether it is true.
1d_tool.py -infile scan_2.slibase.1D'[0..12]' -show_labels
1d_tool.py -infile scan_2.slibase.1D -show_labels
1d_tool.py -infile scan_2.slibase.1D -show_label_ordering
Example 9a. Given motion.1D, create an Enorm time series. ~2~
Take the derivative (ignoring run breaks) and the Euclidean Norm,
and write as e.norm.1D. This might be plotted to show sudden
motion as a single time series.
1d_tool.py -infile motion.1D -set_nruns 9 \
-derivative -collapse_cols euclidean_norm \
-write e.norm.1D
Example 9b. Like 9a, but supposing the run lengths vary (still 576 TRs). ~2~
1d_tool.py -infile motion.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-derivative -collapse_cols euclidean_norm \
-write e.norm.rlens.1D
Example 9c. Like 9b, but weight the rotations as 0.9 mm. ~2~
1d_tool.py -infile motion.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-derivative -collapse_cols weighted_enorm \
-weight_vec .9 .9 .9 1 1 1 \
-write e.norm.weighted.1D
Example 10. Given motion.1D, create censor files to use in 3dDeconvolve. ~2~
Here a TR is censored if the derivative values have a Euclidean Norm
above 1.2. It is common to also censor each previous TR, as motion may
span both (previous because "derivative" is actually a backward
difference).
The file created by -write_censor can be used with 3dD's -censor option.
The file created by -write_CENSORTR can be used with -CENSORTR. They
should have the same effect in 3dDeconvolve. The CENSORTR file is more
readable, but the censor file is better for plotting against the data.
a. general example ~3~
1d_tool.py -infile motion.1D -set_nruns 9 \
-derivative -censor_prev_TR \
-collapse_cols euclidean_norm \
-moderate_mask -1.2 1.2 \
-show_censor_count \
-write_censor subjA_censor.1D \
-write_CENSORTR subjA_CENSORTR.txt
b. using -censor_motion ~3~
The -censor_motion option is available, which implies '-derivative',
'-collapse_cols euclidean_norm', '-moderate_mask -LIMIT LIMIT', and the
prefix for '-write_censor' and '-write_CENSORTR' output files. This
option will also result in subjA_enorm.1D being written, which is the
euclidean norm of the derivative, before the extreme mask is applied.
1d_tool.py -infile motion.1D -set_nruns 9 \
-show_censor_count \
-censor_motion 1.2 subjA \
-censor_prev_TR
c. allow the run lengths to vary ~3~
1d_tool.py -infile motion.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-show_censor_count \
-censor_motion 1.2 subjA_rlens \
-censor_prev_TR
Consider also '-censor_prev_TR' and '-censor_first_trs'.
Example 11. Demean the data. Use motion parameters as an example. ~2~
The demean operation is done per run (the default is 1 run when
1d_tool.py does not otherwise know the run structure).
a. across all runs (if runs are not known from input file)
1d_tool.py -infile dfile_rall.1D -demean -write motion.demean.a.1D
b. per run, over 9 runs of equal length
1d_tool.py -infile dfile_rall.1D -set_nruns 9 \
-demean -write motion.demean.b.1D
c. per run, over 9 runs of varying length
1d_tool.py -infile dfile_rall.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-demean -write motion.demean.c.1D
Example 12. "Uncensor" the data, zero-padding previously censored TRs. ~2~
Note that an X-matrix output by 3dDeconvolve contains censor
information in GoodList, which is the list of uncensored TRs.
a. if the input dataset has censor information
1d_tool.py -infile X.xmat.1D -censor_fill -write X.uncensored.1D
b. if censor information needs to come from a parent
1d_tool.py -infile sum.ideal.1D -censor_fill_parent X.xmat.1D \
-write sum.ideal.uncensored.1D
c. if censor information needs to come from a simple 1D time series
1d_tool.py -censor_fill_parent motion_FT_censor.1D \
-infile cdata.1D -write cdata.zeropad.1D
Example 13. Show whether the input file is valid as a numeric data file. ~2~
a. as any generic 1D file
1d_tool.py -infile data.txt -looks_like_1D
b. as a 1D stim_file, of 3 runs of 64 TRs (TR is irrelevant)
1d_tool.py -infile data.txt -looks_like_1D \
-set_run_lengths 64 64 64
c. as a stim_times file with local times
1d_tool.py -infile data.txt -looks_like_local_times \
-set_run_lengths 64 64 64 -set_tr 2
d. as a 1D or stim_times file with global times
1d_tool.py -infile data.txt -looks_like_global_times \
-set_run_lengths 64 64 64 -set_tr 2
e. report modulation type (amplitude and/or duration)
1d_tool.py -infile data.txt -looks_like_AM
f. perform all tests, reporting all errors
1d_tool.py -infile data.txt -looks_like_test_all \
-set_run_lengths 64 64 64 -set_tr 2
Example 14. Split motion parameters across runs. ~2~
Split, but keep them at the original length so they apply to the same
multi-run regression. Each file will be the same as the original for
the run it applies to, but zero across all other runs.
Note that -split_into_pad_runs takes the output prefix as a parameter.
1d_tool.py -infile motion.1D \
-set_run_lengths 64 64 64 \
-split_into_pad_runs mot.padded
The output files are:
mot.padded.r01.1D mot.padded.r02.1D mot.padded.r03.1D
If the run lengths are all the same, using -set_nruns is shorter...
1d_tool.py -infile motion.1D \
-set_nruns 3 \
-split_into_pad_runs mot.padded
Example 15a. Show the maximum pairwise displacement. ~2~
Show the max pairwise displacement in the motion parameter file.
So over all TR pairs, find the biggest displacement.
In one direction it is easy (AP say). If the minimum AP shift is -0.8
and the maximum is 1.5, then the maximum displacement is 2.3 mm. It
is less clear in 6-D space, and instead of trying to find an enveloping
set of "coordinates", distances between all N choose 2 pairs are
evaluated (brute force).
1d_tool.py -infile dfile_rall.1D -show_max_displace
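The brute-force idea can be sketched in a few lines of numpy (illustrative
only, not part of 1d_tool.py; assumes dfile_rall.1D holds one motion-parameter
vector per TR, and will be slow for long time series):
      import numpy as np
      from itertools import combinations
      mot = np.loadtxt('dfile_rall.1D')         # NT x 6 motion parameters
      maxd = max(np.linalg.norm(mot[i] - mot[j])
                 for i, j in combinations(range(len(mot)), 2))
      print(maxd)                               # maximum pairwise displacement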
Example 15b. Like 15a, but do not include displacement from censored TRs. ~2~
1d_tool.py -infile dfile_rall.1D -show_max_displace \
-censor_infile motion_censor.1D
Example 15c. Show the entire distance/displacement matrix. ~2~
Show all pairwise displacements (vector distances) in a (motion param?)
row vector file. Note that the maximum element of this matrix should
be the one output by -show_max_displace.
1d_tool.py -infile coords.1D -show_distmat
Example 16. Randomize a list of numbers, say, those from 1..40. ~2~
The numbers can come from 1deval, with the result piped to
'1d_tool.py -infile stdin -randomize_trs ...'.
1deval -num 40 -expr t+1 | \
1d_tool.py -infile stdin -randomize_trs -write stdout
See also -seed.
Example 17. Display min, mean, max, stdev of 1D file. ~2~
1d_tool.py -show_mmms -infile data.1D
To be more detailed, get stats for each of x, y, and z directional
blur estimates for all subjects. Cat(enate) all of the subject files
and pipe that to 1d_tool.py with infile - (meaning stdin).
cat subject_results/group.*/sub*/*.results/blur.errts.1D \
| 1d_tool.py -show_mmms -infile -
Example 18. Just output censor count for default method. ~2~
This will output nothing but the number of TRs that would be censored,
akin to using -censor_motion and -censor_prev_TR.
1d_tool.py -infile dfile_rall.1D -set_nruns 3 -quick_censor_count 0.3
1d_tool.py -infile dfile_rall.1D -set_run_lengths 100 80 120 \
-quick_censor_count 0.3
Example 19. Compute GCOR from some 1D file. ~2~
* Note, time should be in the vertical direction of the file
(else use -transpose).
1d_tool.py -infile data.1D -show_gcor
Or get some GCOR documentation and many values.
1d_tool.py -infile data.1D -show_gcor_doc
1d_tool.py -infile data.1D -show_gcor_all
Example 20. Display censored or uncensored TRs lists (for use in 3dTcat). ~2~
TRs which were censored:
1d_tool.py -infile X.xmat.1D -show_trs_censored encoded
TRs which were applied in analysis (those NOT censored):
1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded
Only those applied in run #2 (1-based).
1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
-show_trs_run 2
Example 21. Convert to rank order. ~2~
a. show rank order of slice times from a 1D file
1d_tool.py -infile slice_times.1D -rank -write -
b. show rank order of slice times piped directly from 3dinfo
Note: input should be space separated, not '|' separated.
3dinfo -slice_timing -sb_delim ' ' epi+orig \
| 1d_tool.py -infile - -rank -write -
c. show rank order using 'competition' rank, instead of default 'dense'
3dinfo -slice_timing -sb_delim ' ' epi+orig \
| 1d_tool.py -infile - -rank_style competition -write -
Example 22. Guess volreg base index from motion parameters. ~2~
1d_tool.py -infile dfile_rall.1D -collapse_cols enorm -show_argmin
Example 23. Convert volreg parameters to those suitable for 3dAllineate. ~2~
1d_tool.py -infile dfile_rall.1D -volreg2allineate \
-write allin_rall_aff12.1D
Example 24. Show TR counts per run. ~2~
a. list the number of TRs in each run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts trs
b. list the number of TRs censored in each run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts trs_cen
c. list the number of TRs prior to censoring in each run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts trs_no_cen
d. list the fraction of TRs censored per run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts frac_cen
e. list the fraction of TRs censored in run 3
1d_tool.py -infile X.xmat.1D -show_tr_run_counts frac_cen \
-show_trs_run 3
Example 25. Show number of runs. ~2~
1d_tool.py -infile X.xmat.1D -show_num_runs
Example 26. Convert global index to run and TR index. ~2~
Note that run indices are 1-based, while TR indices are 0-based,
as usual. Confusion is key.
a. explicitly, given run lengths
1d_tool.py -set_run_lengths 100 80 120 -index_to_run_tr 217
b. implicitly, given an X-matrix (** be careful about censoring **)
1d_tool.py -infile X.nocensor.xmat.1D -index_to_run_tr 217
Example 27. Display length of response curve. ~2~
1d_tool.py -show_trs_to_zero -infile data.1D
Print out the length of the input (in TRs, say) until the data
values become a constant zero. Zeros that are followed by non-zero
values are irrelevant.
Example 28. Convert slice order to slice times. ~2~
A slice order might be the sequence in which slices were acquired.
For example, with 33 slices, perhaps the order is:
set slice_order = ( 0 6 12 18 24 30 1 7 13 19 25 31 2 8 14 20 \
26 32 3 9 15 21 27 4 10 16 22 28 5 11 17 23 29 )
Put this in a file:
echo $slice_order > slice_order.1D
1d_tool.py -set_tr 2 -slice_order_to_times \
-infile slice_order.1D -write slice_times.1D
Or as a filter:
echo $slice_order | 1d_tool.py -set_tr 2 -slice_order_to_times \
-infile - -write -
Example 29. Display minimum cluster size from 3dClustSim output. ~2~
Given a text file output by 3dClustSim, e.g. ClustSim.ACF.NN1_1sided.1D,
and given both an uncorrected (pthr) and a corrected (alpha) p-value,
look up the entry that specifies the minimum cluster size needed for
corrected p-value significance.
If requested in afni_proc.py, they are under files_ClustSim.
a. with modestly verbose output (default is -verb 1)
1d_tool.py -infile ClustSim.ACF.NN1_1sided.1D -csim_show_clustsize
b. quiet, to see just the output value
1d_tool.py -infile ClustSim.ACF.NN1_1sided.1D -csim_show_clustsize \
-verb 0
c. quiet, and capture the output value (tcsh syntax)
set clustsize = `1d_tool.py -infile ClustSim.ACF.NN1_1sided.1D \
-csim_show_clustsize -verb 0`
Example 30. Display columns that are all-zero (e.g. censored out) ~2~
Given a regression matrix, list columns that are entirely zero, such
as those for which there were no events, or those for which event
responses were censored out.
a. basic output
Show the number of such columns and a list of labels
1d_tool.py -show_regs allzero -infile zerocols.X.xmat.1D
b. quiet output (do not include the number of such columns)
1d_tool.py -show_regs allzero -infile zerocols.X.xmat.1D -verb 0
c. quiet encoded index list
1d_tool.py -show_regs allzero -infile zerocols.X.xmat.1D \
-show_regs_style encoded -verb 0
d. list all labels of regressors of interest (with no initial count)
1d_tool.py -show_regs set -infile zerocols.X.xmat.1D \
-select_groups POS -verb 0
Example 31. Determine slice timing pattern (for EPI data) ~2~
Determine the slice timing pattern from a list of slice times.
The output is :
- multiband level (usually 1)
- tpattern, one such pattern from those in 'to3d -help'
a. where slice times are in a file
1d_tool.py -show_slice_timing_pattern -infile slice_times.1D
b. or as a filter
3dinfo -slice_timing -sb_delim ' ' FT_epi_r1+orig \
| 1d_tool.py -show_slice_timing_pattern -infile -
c. or if it fails, be gentle and verbose
1d_tool.py -infile slice_times.1D \
-show_slice_timing_gentle -verb 3
---
d. Related, show slice timing resolution, the accuracy of the slice
times, assuming they should be multiples of a constant
(the slice duration).
1d_tool.py -infile slice_times.1D -show_slice_timing_resolution
e. or as a filter
3dinfo -slice_timing -sb_delim ' ' FT_epi_r1+orig \
| 1d_tool.py -show_slice_timing_resolution -infile -
Example 32. Display slice timing ~2~
Display slice timing given a to3d timing pattern, the number of
slices, the multiband level, and optionally the TR.
a. pattern alt+z, 40 slices, multiband 1, TR 2s
(40 slices in 2s means slices are acquired every 0.05 s)
1d_tool.py -slice_pattern_to_times alt+z 40 1 -set_tr 2
b. same, but multiband 2
(so slices are acquired every 0.1 s, and there are 2 such sets)
1d_tool.py -slice_pattern_to_times alt+z 40 2 -set_tr 2
c. test this by feeding the output to -show_slice_timing_pattern
1d_tool.py -slice_pattern_to_times alt+z 40 2 -set_tr 2 \
| 1d_tool.py -show_slice_timing_pattern -infile -
---------------------------------------------------------------------------
command-line options: ~1~
---------------------------------------------------------------------------
basic informational options: ~2~
-help : show this help
-hist : show the module history
-show_valid_opts : show all valid options
-ver : show the version number
----------------------------------------
required input: ~2~
-infile DATASET.1D : specify input 1D file
----------------------------------------
general options: ~2~
-add_cols NEW_DSET.1D : extend dset to include these columns
-backward_diff : take derivative as first backward difference
Take the backward differences at each time point. For each index > 0,
value[index] = value[index] - value[index-1], and value[0] = 0.
This option is identical to -derivative.
See also -forward_diff, -derivative, -set_nruns, -set_run_lens.
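As a small illustration of the backward difference just described (a numpy
sketch, not part of 1d_tool.py):
      import numpy as np
      x = np.array([2.0, 3.5, 3.0, 6.0])
      d = np.zeros_like(x)
      d[1:] = x[1:] - x[:-1]     # value[index] = value[index] - value[index-1]
      print(d)                   # 0.0, 1.5, -0.5, 3.0   (and value[0] = 0)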
-collapse_cols METHOD : collapse multiple columns into one, where
METHOD is one of: min, max, minabs, maxabs, euclidean_norm,
weighted_enorm.
Consideration of the euclidean_norm method:
For censoring, the euclidean_norm method is used (sqrt(sum squares)).
This combines rotations (in degrees) with shifts (in mm) as if they
had the same weight.
Note that assuming rotations are about the center of mass (which
should produce a minimum average distance), then the average arc
length (averaged over the brain mask) of a voxel rotated by 1 degree
(about the CM) is the following (for the given datasets):
TT_N27+tlrc: 0.967 mm (average radius = 55.43 mm)
MNIa_caez_N27+tlrc: 1.042 mm (average radius = 59.69 mm)
MNI_avg152T1+tlrc: 1.088 mm (average radius = 62.32 mm)
The point of these numbers is to suggest that equating degrees and
mm should be fine. The average distance caused by a 1 degree
rotation is very close to 1 mm (in an adult human).
* 'enorm' is short for 'euclidean_norm'.
* Use of weighted_enorm requires the -weight_vec option.
e.g. -collapse_cols weighted_enorm -weight_vec .9 .9 .9 1 1 1
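For instance, the 0.967 mm figure above is just the arc length swept by a
1 degree rotation at an average radius of 55.43 mm (a quick check, not part
of the program):
      import math
      print(55.43 * math.pi / 180)   # ~0.967 mm per degree at radius 55.43 mm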
-censor_motion LIMIT PREFIX : create censor files
This option implies '-derivative', '-collapse_cols euclidean_norm',
'-moderate_mask -LIMIT LIMIT', and applies PREFIX for '-write_censor'
and '-write_CENSORTR' output files. It also outputs the derivative
of the euclidean norm, before the limit is applied.
The temporal derivative is taken with run breaks applied (derivative
of the first run of a TR is 0), then the columns are collapsed into
one via each TR's vector length (Euclidean Norm: sqrt(sum of squares)).
After that, a mask time series is made from TRs with values outside
(-LIMIT,LIMIT), i.e. if >= LIMIT or <= -LIMIT, the result is 1.
This binary time series is then written out in -CENSORTR format, with
the moderate TRs written in -censor format (either can be applied in
3dDeconvolve). The output files will be named PREFIX_censor.1D,
PREFIX_CENSORTR.txt and PREFIX_enorm.1D (e.g. subj123_censor.1D,
subj123_CENSORTR.txt and subj123_enorm.1D).
Besides an input motion file (-infile), the number of runs is needed
(-set_nruns or -set_run_lengths).
Consider also '-censor_prev_TR' and '-censor_first_trs'.
See example 10.
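A rough numpy sketch of this computation (illustrative only, not part of
1d_tool.py; it assumes a plain 6-column motion file, hard-coded run lengths,
and omits -censor_prev_TR and header handling):
      import numpy as np
      mot = np.loadtxt('motion.1D')             # NT x 6 motion parameters
      run_lens = [64, 64, 64]                   # assumed run lengths (sum = NT)
      LIMIT = 1.2
      deriv = np.zeros_like(mot)
      start = 0
      for rl in run_lens:                       # backward diff, reset at run breaks
          deriv[start+1:start+rl] = mot[start+1:start+rl] - mot[start:start+rl-1]
          start += rl
      enorm = np.sqrt((deriv**2).sum(axis=1))   # collapse the 6 columns per TR
      censor = (enorm < LIMIT).astype(int)      # 1 = keep TR, 0 = censor
      print(len(censor) - censor.sum(), 'TRs would be censored')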
-censor_fill : expand data, filling censored TRs with zeros
-censor_fill_parent PARENT : similar, but get censor info from a parent
The output of these operations is a longer dataset. Each TR that had
previously been censored is re-inserted as a zero.
The purpose of this is to make 1D time series data properly align
with the all_runs dataset, for example. Otherwise, the ideal 1D data
might have missing TRs, and so will align worse with responses over
the duration of all runs (it might start aligned, but drift earlier
and earlier as more TRs are censored).
See example 12.
-censor_infile CENSOR_FILE : apply censoring to -infile dataset
This removes TRs from the -infile dataset where the CENSOR_FILE is 0.
The censor file is akin to what is used with "3dDeconvolve -censor",
where TRs with 1 are kept and those with 0 are excluded from analysis.
See example 15b.
-censor_first_trs N : when censoring motion, also censor the first
N TRs of each run
-censor_next_TR : for each censored TR, also censor next one
(probably for use with -forward_diff)
-censor_prev_TR : for each censored TR, also censor previous
-cormat_cutoff CUTOFF : set cutoff for cormat warnings (in [0,1])
-csim_show_clustsize : for 3dClustSim input, show min clust size
Given a 3dClustSim table output (e.g. ClustSim.ACF.NN1_1sided.1D),
along with uncorrected (pthr) and corrected (alpha) p-values, show the
minimum cluster size to achieve significance.
The pthr and alpha values can be controlled via the options -csim_pthr
and -csim_alpha (with defaults of 0.001 and 0.05, respectively).
The -verb option can be used to provide additional or no details
about the clustering method.
See Example 29, along with options -csim_pthr, -csim_alpha and -verb.
-csim_pthr THRESH : specify uncorrected threshold for csim output
e.g. -csim_pthr 0.0001
This option implies -csim_show_clustsize, and is used to specify the
uncorrected p-value of the 3dClustSim output.
See also -csim_show_clustsize.
-csim_alpha THRESH : specify corrected threshold for csim output
e.g. -csim_alpha 0.01
This option implies -csim_show_clustsize, and is used to specify the
corrected, cluster-wise p-value of the 3dClustSim output.
See also -csim_show_clustsize.
-demean : demean each run (new mean of each run = 0.0)
-derivative : take the temporal derivative of each vector
(done as first backward difference)
Take the backward differences at each time point. For each index > 0,
value[index] = value[index] - value[index-1], and value[0] = 0.
This option is identical to -backward_diff.
See also -backward_diff, -forward_diff, -set_nruns, -set_run_lens.
-extreme_mask MIN MAX : make mask of extreme values
Convert to a 0/1 mask, where 1 means the given value is extreme
(outside the (MIN, MAX) range), and 0 means otherwise. This is the
opposite of -moderate_mask (not exactly, both are inclusive).
Note: values = MIN or MAX will be in both extreme and moderate masks.
Note: this was originally described incorrectly in the help.
-forward_diff : take first forward difference of each vector
Take the first forward differences at each time point. For index<last,
value[index] = value[index+1] - value[index], and value[last] = 0.
The difference between -forward_diff and -backward_diff is a time shift
by one index.
See also -backward_diff, -derivative, -set_nruns, -set_run_lens.
-index_to_run_tr INDEX : convert global INDEX to run and TR indices
Given a list of run lengths, convert INDEX to a run and TR index pair.
This option requires -set_run_lens or maybe an Xmat.
See also -set_run_lens example 26.
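A minimal sketch of the conversion (1-based run, 0-based TR, assuming the
global index is also 0-based; not part of 1d_tool.py):
      def index_to_run_tr(index, run_lens):
          # walk the runs until the 0-based global index falls inside one
          for run, length in enumerate(run_lens, start=1):
              if index < length:
                  return run, index
              index -= length
          raise ValueError('index is past the end of the runs')
      print(index_to_run_tr(217, [100, 80, 120]))   # -> (3, 37)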
-moderate_mask MIN MAX : make mask of moderate values
Convert to a 0/1 mask, where 1 means the given value is moderate
(within [MIN, MAX]), and 0 means otherwise. This is useful for
censoring motion (in the -censor case, not -CENSORTR), where the
-censor file should be a time series of TRs to apply.
See also -extreme_mask.
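A small illustration of the two masks (numpy sketch, not part of 1d_tool.py):
      import numpy as np
      x = np.array([-2.0, -1.2, 0.3, 1.2, 3.0])
      MIN, MAX = -1.2, 1.2
      moderate = ((x >= MIN) & (x <= MAX)).astype(int)   # 1 inside [MIN, MAX]
      extreme  = ((x <= MIN) | (x >= MAX)).astype(int)   # 1 outside (MIN, MAX)
      print(moderate)   # [0 1 1 1 0]
      print(extreme)    # [1 1 0 1 1]  (values equal to MIN or MAX land in both)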
-label_prefix_drop prefix1 prefix2 ... : remove labels matching prefix list
e.g. to remove motion shift (starting with 'd') and bandpass labels:
-label_prefix_drop d bandpass
This is a type of column selection.
Use this option to remove columns from a matrix that have labels
starting with any from the given prefix list.
This option can be applied along with -label_prefix_keep.
See also -label_prefix_keep and example 2b.
-label_prefix_keep prefix1 prefix2 ... : keep labels matching prefix list
e.g. to keep only motion shift (starting with 'd') and bandpass labels:
-label_prefix_keep d bandpass
This is a type of column selection.
Use this option to keep columns from a matrix that have labels starting
with any from the given prefix list.
This option can be applied along with -label_prefix_drop.
See also -label_prefix_drop and example 2b.
"Looks like" options:
These are terminal options that check whether the input file seems to
be of type 1D, local stim_times or global stim_times formats. The only
associated options are currently -infile, -set_run_lens, -set_tr and
-verb.
They are terminal in that no other 1D-style actions are performed.
See 'timing_tool.py -help' for details on stim_times operations.
-looks_like_1D : is the file in 1D format
Does the input data file seem to be in 1D format?
- must be rectangular (same number of columns per row)
- duration must match number of rows (if run lengths are given)
-looks_like_AM : does the file have modulators?
Does the file seem to be in local or global times format, and
do the times have modulators?
- amplitude modulators should use '*' format (e.g. 127.3*5.1)
- duration modulators should use trailing ':' format (12*5.1:3.4)
- number of amplitude modulators should be constant
-looks_like_local_times : is the file in local stim_times format
Does the input data file seem to be in the -stim_times format used by
3dDeconvolve (and timing_tool.py)? More specifically, is it the local
format, with one scanning run per row.
- number of rows must match number of runs
- times cannot be negative
- times must be unique per run (per row)
- times cannot exceed the current run time
-looks_like_global_times : is the file in global stim_times format
Does the input data file seem to be in the -stim_times format used by
3dDeconvolve (and timing_tool.py)? More specifically, is it the global
format, either as one long row or one long line?
- must be one dimensional (either a single row or column)
- times cannot be negative
- times must be unique
- times cannot exceed total duration of all runs
-looks_like_test_all : run all -looks_like tests
Applies all "looks like" test options: -looks_like_1D, -looks_like_AM,
-looks_like_local_times and -looks_like_global_times.
-overwrite : allow overwriting of any output dataset
-pad_into_many_runs RUN NRUNS : pad as current run of num_runs
e.g. -pad_into_many_runs 2 7
This option is used to create a longer time series dataset where the
input is considered one particular run out of many. The output is
padded with zeros for all run TRs before and after this run.
Given the example, there would be 1 run of zeros, then the input would
be treated as run 2, and there would be 5 more runs of zeros.
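A minimal numpy sketch of the padding (illustrative only; assumes all runs
have the same length as the input run):
      import numpy as np
      x = np.loadtxt('ricor_r02.1D')                   # data for run 2 only
      run, nruns = 2, 7
      before = np.zeros((len(x) * (run - 1),) + x.shape[1:])
      after  = np.zeros((len(x) * (nruns - run),) + x.shape[1:])
      out = np.concatenate([before, x, after])         # zeros, run-2 data, zeros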
-quick_censor_count LIMIT : output # TRs that would be censored
e.g. -quick_censor_count 0.3
This is akin to -censor_motion, but it only outputs the number of TRs
that would be censored. It does not actually create a censor file.
This option essentially replaces these:
-derivative -demean -collapse_cols euclidean_norm
-censor_prev_TR -verb 0 -show_censor_count
-moderate_mask 0 LIMIT
-rank : convert data to rank order
0-based index order of small to large values
The default rank STYLE is 'dense'.
See also -rank_style.
-rank_style STYLE : convert to rank using the given style
The STYLE refers to what to do in the case of repeated values.
Assuming inputs 4 5 5 9...
dense - repeats get same rank, no gaps in rank
- same as "3dmerge -1rank"
- result: 0 1 1 2
competition - repeats get same rank, leading to gaps in rank
- result: 0 1 1 3
(rank '2' is counted, though no such rank occurs)
Option '-rank' uses style 'dense'.
See also -rank.
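A tiny sketch of the two styles, using the example input above (not part of
1d_tool.py):
      import numpy as np
      x = np.array([4, 5, 5, 9])
      competition = np.searchsorted(np.sort(x), x)     # 0 1 1 3
      dense       = np.searchsorted(np.unique(x), x)   # 0 1 1 2
      print(dense, competition)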
-reverse_rank : convert data to reverse rank order
(large values come first)
-reverse : reverse data over time
-randomize_trs : randomize the data over time
-seed SEED : set random number seed (integer)
-select_groups g0 g1 ... : select columns by group numbers
e.g. -select_groups 0
e.g. -select_groups POS 0
An X-matrix dataset (e.g. X.xmat.1D) often has columns partitioned by
groups, such as:
-1 : polort regressors
0 : motion regressors and other (non-polort) baseline terms
N>0: regressors of interest
This option can be used to select columns by integer groups, with
special cases of POS (regs of interest), NEG (probably polort).
Note that NONNEG is unneeded as it is the pair POS 0.
See also -show_group_labels.
-select_cols SELECTOR : apply AFNI column selectors, [] is optional
e.g. '[5,0,7..21(2)]'
e.g. '[aroma_mot*]' # aroma_motion
e.g. '[rot_?,trans_?]' # 6 motion params
-select_cols_via_TSV_table TABLE FIELD WHERE
: use tsv TABLE to select FIELD elements where
WHERE is true; resulting values are then
taken as column headers to select from any
-input tsv data file
-select_rows SELECTOR : apply AFNI row selectors, {} is optional
e.g. '{5,0,7..21(2)}'
-select_runs r1 r2 ... : extract the given runs from the dataset
(these are 1-based run indices)
e.g. 2
e.g. 2 3 1 1 1 1 1 4
-set_nruns NRUNS : treat the input data as if it has nruns
(e.g. applies to -derivative and -demean)
See examples 7a, 10a and b, and 14.
-set_run_lengths N1 N2 ... : treat as if data has run lengths N1, N2, etc.
(applies to -derivative, for example)
Notes: o option -set_nruns is not allowed with -set_run_lengths
o the sum of run lengths must equal NT
See examples 7b, 10c and 14.
-set_tr TR : set the TR (in seconds) for the data
-show_argmin : display the index of min arg (of first column)
-show_censor_count : display the total number of censored TRs
Note : if input is a valid xmat.1D dataset, then the
count will come from the header. Otherwise
the input is assumed to be a binary censor
file, and zeros are simply counted.
-show_cormat : display correlation matrix
-show_cormat_warnings : display correlation matrix warnings
(this does not include baseline terms)
-show_cormat_warnings_full : display correlation matrix warnings
(this DOES include baseline terms)
-show_distmat : display distance matrix
Expect input as one coordinate vector per row.
Output NROWxNROW matrix of vector distances.
See Example 15c.
-show_df_info : display info about degrees of freedom
(found in xmat.1D formatted files)
-show_df_protect yes/no : protection flag (def=yes)
-show_gcor : display GCOR: the average correlation
-show_gcor_all : display many ways of computing (a) GCOR
-show_gcor_doc : display descriptions of those ways
-show_group_labels : display group and label, per column
-show_indices_baseline : display column indices for baseline
-show_indices_interest : display column indices for regs of interest
-show_indices_motion : display column indices for motion regressors
-show_indices_zero : display column indices for all-zero columns
-show_label_ordering : display the labels
-show_labels : display the labels
-show_max_displace : display max displacement (from motion params)
- the maximum pairwise distance (enorm)
-show_mmms : display min, mean, max, stdev of columns
-show_num_runs : display number of runs found
-show_regs PROPERTY : display regressors with the given property
Show column indices or labels for those columns where PROPERTY holds:
allzero : the entire column is exactly 0
set : (NOT allzero) the column has some set (non-zero) value
How the columns are displayed is controlled by -show_regs_style
(label, encoded, comma, space) and -verb (0, 1 or 2).
With -verb > 0, the number of matching columns is also output.
See also -show_regs_style, -verb.
See example 30.
-show_regs_style STYLE : use STYLE for how to -show_regs
This only applies when using -show_regs, and specifies the style for
how to show matching columns.
space : show indices as a space-separated list
comma : show indices as a comma-separated list
encoded : succinct selector list (like sub-brick selectors)
label : if xmat.1D has them, show space separated labels
-show_rows_cols : display the number of rows and columns
-show_slice_timing_pattern : display the to3d tpattern for the data
e.g. -show_slice_timing_pattern -infile slice_times.txt
The output will be 2 values, the multiband level (the number of
sets of unique slice times) and the tpattern for those slice times.
The tpattern will be one of those from 'to3d -help', such as alt+z.
This operation is the reverse of -slice_pattern_to_times.
See also -slice_pattern_to_times.
See example 31 and example 32
-show_slice_timing_resolution : display the to3d tpattern for the data
e.g. -show_slice_timing_resolution -infile slice_times.txt
Display the apparent resolution of values expected to be on a grid,
where zero is good.
The slice times are supposed to be multiples of some constant C, such
that the sorted list of unique values should be:
{0*C, 1*C, 2*C, ..., (N-1)*C}.
In such a case, the first diffs would all be C, and the second diffs
would be zero. The displayed resolution would be zero.
If the first diffs are not all exactly some constant C, the largest
difference between those diffs should indicate the numerical
resolution, like a truncation error. So display the largest first diff
minus the smallest first diff.
For Siemens data, this might be 0.025 (2.5 ms), as reported by D Glen.
See also -show_slice_timing_pattern.
See example 31.
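The computation amounts to a couple of lines (a numpy sketch, not part of
1d_tool.py; slice_times.1D is assumed to hold the slice times):
      import numpy as np
      t = np.loadtxt('slice_times.1D').ravel()
      d = np.diff(np.unique(t))     # first diffs of the sorted unique times
      print(d.max() - d.min())      # 0 if the times sit exactly on a regular grid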
-show_tr_run_counts STYLE : display TR counts per run, according to STYLE
STYLE can be one of:
trs : TR counts
trs_cen : censored TR counts
trs_no_cen : TR counts, as if no censoring
frac_cen : fractions of TRs censored
See example 24.
-show_trs_censored STYLE : display a list of TRs which were censored
-show_trs_uncensored STYLE : display a list of TRs which were not censored
STYLE can be one of:
comma : comma delimited
space : space delimited
encoded : succinct selector list
verbose : chatty
See example 20.
-show_trs_run RUN : restrict -show_trs_[un]censored to the given
1-based run
-show_trs_to_zero : display number of TRs before final zero value
(e.g. length of response curve)
-show_xmat_stype_cols T1 ... : display columns of the given class types
Display the columns (labels, indices or encoded) of the given stimulus
types. These types refer specifically to those with basis functions,
and correspond with 3dDeconvolve -stim_* options as follows:
times : -stim_times
AM : -stim_times_AM1 or -stim_times_AM2
AM1 : -stim_times_AM1
AM2 : -stim_times_AM2
IM : -stim_times_IM
Multiple types can be provided.
See example 5f.
See also -show_regs_style.
-show_xmat_stim_info CLASS : display information for the given stim class
(CLASS can be a specific one, or 'ALL')
Display information for a specific (3dDeconvolve -stim_*) stim class.
This includes the class Name, the 3dDeconvolve Option, the basis
Function, and the relevant Columns of the X-matrix.
See example 5e.
See also -show_regs_style.
-slice_order_to_times : convert a list of slice indices to times
Programs like to3d, 3drefit, 3dTcat and 3dTshift expect slice timing
to be a list of slice times over the sequential slices. But in some
cases, people have an ordered list of slices. So the sorting needs
to change.
input: a file with TIME-SORTED slice indices
output: a SLICE-SORTED list of slice times
* Note, this is a list of slice indices over time (one TR interval).
Across one TR, this lists each slice index as acquired.
It IS a per-slice-time index of acquired slices.
It IS **NOT** a per-slice index of its acquisition position.
(this latter case could be output by -slice_pattern_to_times)
If TR=2 and the slice order is alt+z: 0 2 4 6 8 1 3 5 7 9
Then the slices/times ordered by time (as input) are:
times: 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8
input-> slices: 0 2 4 6 8 1 3 5 7 9
(slices across time)
And the slices/times ordered instead by slice index are:
slices: 0 1 2 3 4 5 6 7 8 9
output-> times: 0.0 1.0 0.2 1.2 0.4 1.4 0.6 1.6 0.8 1.8
(timing across slices)
It is this final list of times that is output.
For kicks, note that one can convert from per-time slice indices to
per-slice acquisition indices by setting TR=nslices.
See example 28.
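A minimal sketch of the conversion, using the alt+z example above (not part
of 1d_tool.py):
      TR = 2.0
      slice_order = [0, 2, 4, 6, 8, 1, 3, 5, 7, 9]   # slice acquired at each step
      dt = TR / len(slice_order)
      times = [0.0] * len(slice_order)
      for step, sl in enumerate(slice_order):        # acquisition step -> time
          times[sl] = step * dt
      print(' '.join('%.1f' % t for t in times))
      # -> 0.0 1.0 0.2 1.2 0.4 1.4 0.6 1.6 0.8 1.8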
-slice_pattern_to_times PAT NS MB : output slice timing, given:
slice pattern, nslices, MBlevel
(TR is optionally set via -set_tr)
e.g. -slice_pattern_to_times alt+z 30 1
-set_tr 2.0
Input description:
PAT : a valid to3d-style slice timing pattern, one of:
zero simult
seq+z seqplus seq-z seqminus
alt+z altplus alt+z2
alt-z altminus alt-z2
NS : the total number of slices (MB * nunique_times)
MB : the multiband level
For a volume with NS slices and multiband MB and a
slice timing pattern PAT with NST unique slice times,
we must have: NS = MB * NST
TR : (optional) the volume repetition time
TR is specified via -set_tr.
Output the appropriate slice times for the timing pattern, also given
the number of slices, multiband level and TR. If TR is not specified,
the output will be as if TR=NST (number of unique slice times), which
means the output is the order index of each slice.
This operation is the reverse of -show_slice_timing_pattern.
See also -show_slice_timing_pattern, -show_slice_timing_resolution.
See example 32.
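For the alt+z case, one plausible sketch of the idea (illustrative only, not
part of 1d_tool.py; it assumes the MB sets are simply stacked, so slice s and
slice s+NST share an acquisition time, matching the '2 such sets' wording of
example 32b):
      def alt_z_times(nslices, mb, tr):
          # hypothetical helper, not the program's own code
          nst = nslices // mb                  # NS = MB * NST
          dt = tr / nst
          order = list(range(0, nst, 2)) + list(range(1, nst, 2))  # alt+z order
          base = [0.0] * nst
          for step, sl in enumerate(order):
              base[sl] = step * dt
          return base * mb                     # repeat the pattern for each MB set
      print(' '.join('%.2f' % t for t in alt_z_times(10, 1, 2.0)))
      # -> 0.00 1.00 0.20 1.20 0.40 1.40 0.60 1.60 0.80 1.80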
-sort : sort data over time (smallest to largest)
- sorts EVERY vector
- consider the -reverse option
-split_into_pad_runs PREFIX : split input into one padded file per run
e.g. -split_into_pad_runs motion.pad
This option is used for breaking a set of regressors up by run. The
output would be one file per run, where each file is the same as the
input for the run it corresponds to, and is padded with 0 across all
other runs.
Assuming the 300 row input dataset spans 3 100-TR runs, then there
would be 3 output datasets created, each still 300 rows long:
motion.pad.r01.1D : 100 rows as input, 200 rows of 0
motion.pad.r02.1D : 100 rows of 0, 100 rows as input, 100 of 0
motion.pad.r03.1D : 200 rows of 0, 100 rows as input
This option requires either -set_nruns or -set_run_lengths.
See example 14.
-transpose : transpose the input matrix (rows for columns)
-transpose_write : transpose the output matrix before writing
-volreg2allineate : convert 3dvolreg parameters to 3dAllineate
This option should be used when the -infile file is a 6 column file
of motion parameters (roll, pitch, yaw, dS, dL, dP). The output would
be converted to a 12 parameter file, suitable for input to 3dAllineate
via the -1Dparam_apply option.
volreg: roll, pitch, yaw, dS, dL, dP
3dAllineate: -dL, -dP, -dS, roll, pitch, yaw, 0,0,0, 0,0,0
These parameters would be to correct the motion, akin to what 3dvolreg
did (i.e. they are the negative estimates of how the subject moved).
See example 23.
-write FILE : write the current 1D data to FILE
-write_sep SEP : use SEP for column separators
-write_style STYLE : write using one of the given styles
basic: the default, don't work too hard
ljust: left-justified columns of the same width
rjust: right-justified columns of the same width
tsv: tab-separated (use <tab> as in -write_sep '\t')
-weight_vec v1 v2 ... : supply weighting vector
e.g. -weight_vec 0.9 0.9 0.9 1 1 1
This vector currently works only with the weighted_enorm method for
the -collapse_cols option. If supplied (as with the example), it will
weight the angles at 0.9 times the weights of the shifts in the motion
parameters output by 3dvolreg.
See also -collapse_cols.
-write_censor FILE : write as boolean censor.1D
e.g. -write_censor subjA_censor.1D
This file can be given to 3dDeconvolve to censor TRs with excessive
motion, applied with the -censor option.
e.g. 3dDeconvolve -censor subjA_censor.1D
This file works well for plotting against the data, where the 0 entries
are removed from the regression of 3dDeconvolve. Alternatively, the
file created with -write_CENSORTR is probably more human readable.
-write_CENSORTR FILE : write censor times as CENSORTR string
e.g. -write_CENSORTR subjA_CENSORTR.txt
This file can be given to 3dDeconvolve to censor TRs with excessive
motion, applied with the -CENSORTR option.
e.g. 3dDeconvolve -CENSORTR `cat subjA_CENSORTR.txt`
Which might expand to:
3dDeconvolve -CENSORTR '1:16..19,44 3:28 4:19,37..39'
Note that the -CENSORTR option requires the text on the command line.
This file is in the easily readable format applied with -CENSORTR.
It has the same effect on 3dDeconvolve as the sister file from
-write_censor, above.
-verb LEVEL : set the verbosity level
-----------------------------------------------------------------------------
R Reynolds March 2009
=============================================================================
AFNI program: 1dtranspose
Usage: 1dtranspose infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, but transposed.
You can use a column subvector selector list on infile, as in
1dtranspose 'fred.1D[0,3,7]' ethel.1D
* This program may produce files with lines longer than a
text editor can handle.
* If 'outfile' is '-' (or missing entirely), output goes to stdout.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dTsort
Usage: 1dTsort [options] file.1D
Sorts each column of the input 1D file and writes result to stdout.
Options
-------
-inc = sort into increasing order [default]
-dec = sort into decreasing order
-flip = transpose the file before OUTPUT
* the INPUT can be transposed using file.1D\'
* thus, to sort each ROW, do something like
1dTsort -flip file.1D\' > sfile.1D
-col j = sort only on column #j (counting starts at 0),
and carry the rest of the columns with it.
-imode = typecast all values to integers, return the mode of
the input, then exit. No sorting results are returned.
N.B.: Data will be read from standard input if the filename IS stdin,
and will also be row/column transposed if the filename is stdin\'
For example:
1deval -num 100 -expr 'uran(1)' | 1dTsort stdin | 1dplot stdin
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 1dUpsample
Program 1dUpsample:
Upsamples a 1D time series (along the column direction)
to a finer time grid.
Usage: 1dUpsample [options] n fred.1D > ethel.1D
Where 'n' is the upsample factor (integer from 2..32)
NOTES:
------
* Interpolation is done with 7th order polynomials.
(Why 7? It's a nice number, and the code already existed.)
* The only option is '-1' or '-one', to use 1st order
polynomials instead (i.e., linear interpolation).
* Output is written to stdout.
* If you want to interpolate along the row direction,
transpose before input, then transpose the output.
* Example:
1dUpsample 5 '1D: 4 5 4 3 4' | 1dplot -stdin -dx 0.2
* If the input has M time points, the output will
have n*M time points. The last n-1 of them
will be past the end of the original time series.
* This program is a quick hack for Gang Chen.
Where are my Twizzlers?
AFNI program: 24swap
Usage: 24swap [options] file ...
Swaps bytes pairs and/or quadruples on the files listed.
Options:
-q Operate quietly
-pattern pat 'pat' determines the pattern of 2 and 4
byte swaps. Each element is of the form
2xN or 4xN, where N is the number of
bytes to swap as pairs (for 2x) or
as quadruples (for 4x). For 2x, N must
be divisible by 2; for 4x, N must be
divisible by 4. The whole pattern is
made up of elements separated by colons,
as in '-pattern 4x39984:2x0'. If bytes
are left over after the pattern is used
up, the pattern starts over. However,
if a byte count N is zero, as in the
example below, then it means to continue
until the end of file.
N.B.: You can also use 1xN as a pattern, indicating to
skip N bytes without any swapping.
N.B.: A default pattern can be stored in the Unix
environment variable AFNI_24SWAP_PATTERN.
If no -pattern option is given, the default
will be used. If there is no default, then
nothing will be done.
N.B.: If there are bytes 'left over' at the end of the file,
they are written out unswapped. This will happen
if the file is an odd number of bytes long.
N.B.: If you just want to swap pairs, see program 2swap.
For quadruples only, see program 4swap.
N.B.: This program will overwrite the input file!
You might want to test it first.
Example: 24swap -pat 4x8:2x0 fred
If fred contains 'abcdabcdabcdabcdabcd' on input,
then fred has 'dcbadcbabadcbadcbadc' on output.
AFNI program: 2dcat
Usage: 2dcat [options] fname1 fname2 etc.
Puts a set of images into an image matrix (IM)
montage of NX by NY images.
The input is a set of N images (N >= 1).
If need be, the default is to reuse images until the desired
NX by NY size is achieved.
See options -zero_wrap and -image_wrap for more detail.
OPTIONS:
++ Options for editing, coloring input images:
-scale_image SCALE_IMG: Multiply each image IM(i,j) in output
image matrix IM by the color or intensity
of the pixel (i,j) in SCALE_IMG.
-scale_pixels SCALE_PIX: Multiply each pixel (i,j) in output image
by the color or intensity
of the pixel (i,j) in SCALE_IMG.
SCALE_IMG is automatically resized to the
resolution of the output image.
-scale_intensity: Instead of multiplying by the color of
pixel (i,j), use its intensity
(average color)
-gscale FAC: Apply FAC in addition to scaling of -scale_* options
-rgb_out: Force output to be in rgb, even if input is bytes.
This option is turned on automatically in certain cases.
-res_in RX RY: Set resolution of all input images to RX by RY pixels.
Default is to make all input have the same
resolution as the first image.
-respad_in RPX RPY: Like -res_in, but resample to the max while respecting
the aspect ratio, and then pad to achieve desired
pixel count.
-pad_val VAL: Set the padding value, should it be needed by -respad_in
to VAL. VAL is typecast to byte, default is 0, max is 255.
-crop L R T B: Crop images by L (Left), R (Right), T (Top), B (Bottom)
pixels. Cutting is performed after any resolution change,
if any, is to be done.
-autocrop_ctol CTOL: A line is eliminated if none of its R G B values
differ by more than CTOL% from those of the corner
pixel.
-autocrop_atol ATOL: A line is eliminated if none of its R G B values
differ by more than ATOL% from those of line
average.
-autocrop: This option is the same as using both of -autocrop_atol 20
and -autocrop_ctol 20
NOTE: Do not mix -autocrop* options with -crop
Cropping is determined from the 1st input image and applied
to all remaining ones.
++ Options for output:
-zero_wrap: If number of images is not enough to fill matrix
solid black images are used.
-white_wrap: If number of images is not enough to fill matrix
solid white images are used.
-gray_wrap GRAY: If number of images is not enough to fill matrix
solid gray images are used. GRAY must be between 0 and 1.0
-image_wrap: If number of images is not enough to fill matrix
images on command line are reused (default)
-rand_wrap: When reusing images to fill matrix, randomize the order
in refill section only.
-prefix ppp = Prefix the output files with string 'ppp'
Note: If the prefix ends with .1D, then a 1D file containing
the average of RGB values is written. You can view the output with
1dgrayplot.
-matrix NX NY: Specify number of images in each row and column
of IM at the same time.
-nx NX: Number of images in each row (3 for example below)
-ny NY: Number of images in each column (4 for example below)
Example: If 12 images appearing on the command line
are to be assembled into a 3x4 IM matrix they
would appear in this order:
0 1 2
3 4 5
6 7 8
9 10 11
NOTE: The program will try to guess if neither NX nor NY
are specified.
-matrix_from_scale: Set NX and NY to be the same as the
SCALE_IMG's dimensions. (needs -scale_image)
-gap G: Put a line G pixels wide between images.
-gap_col R G B: Set color of line to R G B values.
Values range between 0 and 255.
Example 0 (assuming afni is in ~/abin directory):
Resizing an image:
2dcat -prefix big -res_in 1024 1024 \
~/abin/funstuff/face_zzzsunbrain.jpg
2dcat -prefix small -res_in 64 64 \
~/abin/funstuff/face_zzzsunbrain.jpg
aiv small.ppm big.ppm
Example 1:
Stitching together images:
(Can be used to make very high resolution SUMA images.
Read about 'Ctrl+r' in SUMA's GUI help.)
2dcat -prefix cat -matrix 14 12 \
~/abin/funstuff/face_*.jpg
aiv cat.ppm
Example 2:
Stitching together 3 images getting rid of annoying white boundary:
2dcat -prefix surfview_pry3b.jpg -ny 1 -autocrop surfview.000[789].jpg
Example 20 (assuming afni is in ~/abin directory):
2dcat -prefix bigcat.jpg -scale_image ~/abin/afnigui_logo.jpg \
-matrix_from_scale -rand_wrap -rgb_out -respad_in 128 128 \
-pad_val 128 ~/abin/funstuff/face_*.jpg
aiv bigcat.jpg bigcat.jpg
Crop/Zoom in to see what was done. In practice, you want to use
a faster image viewer to examine the result. Zooming on such
a large image is not fast in aiv.
Be careful with this toy. Images get real big, real quick.
You can look at the output image file with
afni -im ppp.ppm [then open the Sagittal image window]
Deprecation warning: The imcat program will be replaced by 2dcat in the future.
AFNI program: 2dImReg
++ 2dImReg: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
This program performs 2d image registration. Image alignment is
performed on a slice-by-slice basis for the input 3d+time dataset,
relative to a user specified base image.
** Note that the script @2dwarper.Allin can do similar things, **
** with nonlinear (polynomial) warping on a slice-wise basis. **
Usage:
2dImReg
-input fname Filename of input 3d+time dataset to process
-basefile fname Filename of 3d+time dataset for base image
(default = current input dataset)
-base num Time index for base image (0 <= num)
(default: num = 3)
-nofine Deactivate fine fit phase of image registration
(default: fine fit is active)
-fine blur dxy dphi Set fine fit parameters
where:
blur = FWHM of blurring prior to registration (in pixels)
(default: blur = 1.0)
dxy = Convergence tolerance for translations (in pixels)
(default: dxy = 0.07)
dphi = Convergence tolerance for rotations (in degrees)
(default: dphi = 0.21)
-prefix pname Prefix name for output 3d+time dataset
-dprefix dname Write files 'dname'.dx, 'dname'.dy, 'dname'.psi
containing the registration parameters for each
slice in chronological order.
File formats:
'dname'.dx: time(sec) dx(pixels)
'dname'.dy: time(sec) dy(pixels)
'dname'.psi: time(sec) psi(degrees)
-dmm Change dx and dy output format from pixels to mm
-rprefix rname Write files 'rname'.oldrms and 'rname'.newrms
containing the volume RMS error for the original
and the registered datasets, respectively.
File formats:
'rname'.oldrms: volume(number) rms_error
'rname'.newrms: volume(number) rms_error
-debug Lots of additional output to screen
AFNI program: @2dwarper.Allin
script to do 2D registration on each slice of a 3D+time
dataset, and glue the results back together at the end
This script is structured to operate only on an AFNI
+orig.HEAD dataset. The one input on the command line
is the prefix for the dataset.
Modified 07 Dec 2010 by RWC to use 3dAllineate instead
of 3dWarpDrive, with nonlinear slice-wise warping.
Set prefix of input 3D+time dataset here.
In this example with 'wilma' as the command line
argument, the output dataset will be 'wilma_reg+orig'.
The output registration parameters files will
be 'wilma_param_ssss.1D', where 'ssss' is the slice number.
usage: @2dwarper.Allin [options] INPUT_PREFIX
example: @2dwarper.Allin epi_run1
example: @2dwarper.Allin -mask my_mask epi_run1
options:
-mask MSET : provide the prefix of an existing mask dataset
-prefix PREFIX : provide the prefix for output datasets
AFNI program: 2perm
Usage: 2perm [-prefix PPP] [-comma] bot top [n1 n2]
This program creates 2 random non-overlapping subsets of the set of
integers from 'bot' to 'top' (inclusive). The first subset is of
length 'n1' and the second of length 'n2'. If those values are not
given, then equal size subsets of length (top-bot+1)/2 are used.
This program is intended for use in various simulation and/or
randomization scripts, or for amusement/hilarity.
OPTIONS:
========
-prefix PPP == Two output files are created, with names PPP_A and PPP_B,
where 'PPP' is the given prefix. If no '-prefix' option
is given, then the string 'AFNIroolz' will be used.
++ Each file is a single column of numbers.
++ Note that the filenames do NOT end in '.1D'.
-comma == Write each file as a single row of comma-separated numbers.
EXAMPLE:
========
This illustration shows the purpose of 2perm -- for use in permutation
and/or randomization tests of statistical significance and power.
Given a dataset with 100 sub-bricks (indexed 0..99), split it into two
random halves and do a 2-sample t-test between them.
2perm -prefix Q50 0 99
3dttest++ -setA dataset+orig"[1dcat Q50_A]" \
-setB dataset+orig"[1dcat Q50_B]" \
-no1sam -prefix Q50
\rm -f Q50_?
Alternatively:
2perm -prefix Q50 -comma 0 99
3dttest++ -setA dataset+orig"[`cat Q50_A`]" \
-setB dataset+orig"[`cat Q50_B`]" \
-no1sam -prefix Q50
\rm -f Q50_?
Note the combined use of the double quote " and backward quote `
shell operators in this second approach.
AUTHOR: (no one wants to admit they wrote this trivial code).
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 2swap
Usage: 2swap [-q] file ...
-- Swaps byte pairs on the files listed.
The -q option means to work quietly.
AFNI program: 3dABoverlap
Usage: 3dABoverlap [options] A B
Output (to screen) is a count of various things about how
the automasks of datasets A and B overlap or don't overlap.
* Dataset B will be resampled to match dataset A, if necessary,
which will be slow if A is high resolution. In such a case,
you should only use one sub-brick from dataset B.
++ The resampling of B is done before the automask is generated.
* The values output are labeled thusly:
#A = number of voxels in the A mask
#B = number of voxels in the B mask
#(A uni B) = number of voxels in either or both masks (set union)
#(A int B) = number of voxels present in BOTH masks (set intersection)
#(A \ B) = number of voxels in A mask that aren't in B mask
#(B \ A) = number of voxels in B mask that aren't in A mask
%(A \ B) = percentage of voxels from A mask that aren't in B mask
%(B \ A) = percentage of voxels from B mask that aren't in A mask
Rx(B/A) = radius of gyration of B mask / A mask, in x direction
Ry(B/A) = radius of gyration of B mask / A mask, in y direction
Rz(B/A) = radius of gyration of B mask / A mask, in z direction
* If B is an EPI dataset sub-brick, and A is a skull stripped anatomical
dataset, then %(B \ A) might be useful for assessing if the EPI
brick B is grossly misaligned with respect to the anatomical brick A.
* The radius of gyration ratios might be useful for determining if one
dataset is grossly larger or smaller than the other.
OPTIONS
-------
-no_automask = consider input datasets as masks
(automask does not work on mask datasets)
-quiet = be as quiet as possible (without being entirely mute)
-verb = print out some progress reports (to stderr)
NOTES
-----
* If an input dataset is comprised of bytes and contains only one
sub-brick, then this program assumes it is already an automask-
generated dataset and the automask operation will be skipped.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAFNIto3D
*+ WARNING: This program (3dAFNIto3D) is old, not maintained, and probably useless!
Usage: 3dAFNIto3D [options] dataset
Reads in an AFNI dataset, and writes it out as a 3D file.
OPTIONS:
-prefix ppp = Write result into file ppp.3D;
default prefix is same as AFNI dataset's.
-bin = Write data in binary format, not text.
-txt = Write data in text format, not binary.
NOTES:
* At present, all bricks are written out in float format.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAFNItoANALYZE
*+ WARNING: This program (3dAFNItoANALYZE) is old, not maintained, and probably useless!
Usage: 3dAFNItoANALYZE [-4D] [-orient code] aname dset
Writes AFNI dataset 'dset' to 1 or more ANALYZE 7.5 format
.hdr/.img file pairs (one pair for each sub-brick in the
AFNI dataset). The ANALYZE files will be named
aname_0000.hdr aname_0000.img for sub-brick #0
aname_0001.hdr aname_0001.img for sub-brick #1
and so forth. Each file pair will contain a single 3D array.
* If the AFNI dataset does not include sub-brick scale
factors, then the ANALYZE files will be written in the
datum type of the AFNI dataset.
* If the AFNI dataset does have sub-brick scale factors,
then each sub-brick will be scaled to floating format
and the ANALYZE files will be written as floats.
* The .hdr and .img files are written in the native byte
order of the computer on which this program is executed.
Options
-------
-4D [30 Sep 2002]:
If you use this option, then all the data will be written to
one big ANALYZE file pair named aname.hdr/aname.img, rather
than a series of 3D files. Even if you only have 1 sub-brick,
you may prefer this option, since the filenames won't have
the '_0000' appended to 'aname'.
-orient code [19 Mar 2003]:
This option lets you flip the dataset to a different orientation
when it is written to the ANALYZE files. The orientation code is
formed as follows:
The code must be 3 letters, one each from the
pairs {R,L} {A,P} {I,S}. The first letter gives
the orientation of the x-axis, the second the
orientation of the y-axis, the third the z-axis:
R = Right-to-Left L = Left-to-Right
A = Anterior-to-Posterior P = Posterior-to-Anterior
I = Inferior-to-Superior S = Superior-to-Inferior
For example, 'LPI' means
-x = Left +x = Right
-y = Posterior +y = Anterior
-z = Inferior +z = Superior
* For display in SPM, 'LPI' or 'RPI' seem to work OK.
Be careful with this: you don't want to confuse L and R
in the SPM display!
* If you DON'T use this option, the dataset will be written
out in the orientation in which it is stored in AFNI
(e.g., the output of '3dinfo dset' will tell you this.)
* The dataset orientation is NOT stored in the .hdr file.
* AFNI and ANALYZE data are stored in files with the x-axis
varying most rapidly and the z-axis most slowly.
* Note that if you read an ANALYZE dataset into AFNI for
display, AFNI assumes the LPI orientation, unless you
set environment variable AFNI_ANALYZE_ORIENT.
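For illustration (filenames are hypothetical), the command below would write one
4D ANALYZE pair, flipped to LPI orientation:
   3dAFNItoANALYZE -4D -orient LPI anat anat+orig
This would produce anat.hdr/anat.img rather than a series of 3D file pairs.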
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAFNItoNIFTI
Usage: 3dAFNItoNIFTI [options] dataset
Reads an AFNI dataset, writes it out as a NIfTI-1.1 file.
NOTES:
* The nifti_tool program can be used to manipulate
the contents of a NIfTI-1.1 file.
* The input dataset can actually be in any input format
that AFNI can read directly (e.g., MINC-1).
* There is no 3dNIFTItoAFNI program, since AFNI programs
can directly read .nii files. If you wish to make such
a conversion anyway, one way to do so is:
3dcalc -a ppp.nii -prefix ppp -expr 'a'
OPTIONS:
-prefix ppp = Write the NIfTI-1.1 file as 'ppp.nii'.
Default: the dataset's prefix is used.
* You can use 'ppp.hdr' to output a 2-file
NIfTI-1.1 file pair 'ppp.hdr' & 'ppp.img'.
* If you want a compressed file, try
using a prefix like 'ppp.nii.gz'.
* Setting the Unix environment variable
AFNI_AUTOGZIP to YES will result in
all output .nii files being gzip-ed.
-verb = Be verbose = print progress messages.
Repeating this increases the verbosity
(maximum setting is 3 '-verb' options).
-float = Force the output dataset to be 32-bit
floats. This option should be used when
the input AFNI dataset has different
float scale factors for different sub-bricks,
a feature that NIfTI-1.1 does not support.
The following options affect the contents of the AFNI extension
field that is written by default into the NIfTI-1.1 header:
-pure = Do NOT write an AFNI extension field into
the output file. Only use this option if
needed. You can also use the 'nifti_tool'
program to strip extensions from a file.
-denote = When writing the AFNI extension field, remove
text notes that might contain subject
identifying information.
-oldid = Give the new dataset the input dataset's
AFNI ID code.
-newid = Give the new dataset a new AFNI ID code, to
distinguish it from the input dataset.
**** N.B.: -newid is now the default action.
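For illustration (dataset name hypothetical), a command that writes a compressed
NIfTI file while removing potentially identifying text notes:
   3dAFNItoNIFTI -denote -prefix anat.nii.gz anat+orig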
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAFNItoNIML
Usage: 3dAFNItoNIML [options] dset
Dumps AFNI dataset header information to stdout in NIML format.
Mostly for debugging and testing purposes!
OPTIONS:
-data == Also put the data into the output (will be huge).
-ascii == Format in ASCII, not binary (even huger).
-tcp:host:port == Instead of stdout, send the dataset to a socket.
(implies '-data' as well)
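For illustration (dataset name hypothetical), the header plus data can be
captured to a file with ordinary shell redirection:
   3dAFNItoNIML -data anat+orig > anat.niml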
-- RWCox - Mar 2005
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAFNItoRaw
*+ WARNING: This program (3dAFNItoRaw) is old, not maintained, and probably useless!
Usage: 3dAFNItoRaw [options] dataset
Convert an AFNI BRIK file with multiple sub-bricks to a raw file in which
the sub-brick values are interleaved voxel by voxel.
For example, a dataset with 3 sub-briks X,Y,Z with elements x1,x2,x3,...,xn,
y1,y2,y3,...,yn and z1,z2,z3,...,zn will be converted to a raw dataset with
elements x1,y1,z1, x2,y2,z2, x3,y3,z3, ..., xn,yn,zn
The dataset is kept in the original data format (float/short/int)
Options:
-output / -prefix = name of the output file (not an AFNI dataset prefix)
the default output name will be rawxyz.dat
-datum float = force floating point output. Floating point is forced if any
sub-brick scale factors are not equal to 1.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
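For illustration (filenames hypothetical), the command below would write the
first 3 sub-bricks of a dataset, interleaved voxel-wise, as floats:
   3dAFNItoRaw -datum float -output first3.dat 'dset+orig[0..2]'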
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAllineate
Usage: 3dAllineate [options] sourcedataset
--------------------------------------------------------------------------
Program to align one dataset (the 'source') to a 'base'
dataset, using an affine (matrix) transformation of space.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
***** Please check your results visually, or at some point *****
***** in time you will have bad results and not know it :-( *****
***** *****
***** No method for 3D image alignment, however tested it *****
***** was, can be relied upon 100% of the time, and anyone *****
***** who tells you otherwise is a madman or is a liar!!!! *****
***** *****
***** In particular, if you are aligning two datasets with *****
***** significantly different spatial coverage (e.g., *****
***** -source = whole head T1w and -base = MNI template), *****
***** then be careful to check the results. In such a case, *****
***** using '-twobest MAX' should increase the chance of *****
***** getting a good alignment (at the cost of CPU time). *****
***** *****
***** Furthermore, don't EVER think that "I have so much *****
***** data that a few errors will not matter"!!!! *****
--------------------------------------------------------------------------
* Options (lots of them!) are available to control:
++ How the matching between the source and the base is computed
(i.e., the 'cost functional' measuring image mismatch).
++ How the resliced source is interpolated to the base space.
++ The complexity of the spatial transformation ('warp') used.
++ And many many technical options to control the process in detail,
if you know what you are doing (or just like to fool around).
* This program is a generalization of and improvement on the older
software 3dWarpDrive.
* For nonlinear transformations, see program 3dQwarp.
* 3dAllineate can also be used to apply a pre-computed matrix to a dataset
to produce the transformed output. In this mode of operation, it just
skips the alignment process, whose function is to compute the matrix,
and instead it reads the matrix in, computes the output dataset,
writes it out, and stops.
* If you are curious about the stepwise process used, see the section below
titled: SUMMARY of the Default Allineation Process.
=====----------------------------------------------------------------------
NOTES: For most 3D image registration purposes, we now recommend that you
===== use Daniel Glen's script align_epi_anat.py (which, despite its name,
can do many more registration problems than EPI-to-T1-weighted).
-->> In particular, using 3dAllineate with the 'lpc' cost functional
(to align EPI and T1-weighted volumes) requires using a '-weight'
volume to get good results, and the align_epi_anat.py script will
automagically generate such a weight dataset that works well for
EPI-to-structural alignment.
-->> This script can also be used for other alignment purposes, such
as T1-weighted alignment between field strengths using the
'-lpa' cost functional. Investigate align_epi_anat.py to
see if it will do what you need -- you might make your life
a little easier and nicer and happier and more tranquil.
-->> Also, if/when you ask for registration help on the AFNI
message board, we'll probably start by recommending that you
try align_epi_anat.py if you haven't already done so.
-->> For aligning EPI and T1-weighted volumes, we have found that
using a flip angle of 50-60 degrees for the EPI works better than
a flip angle of 90 degrees. The reason is that there is more
internal contrast in the EPI data when the flip angle is smaller,
so the registration has some image structure to work with. With
the 90 degree flip angle, there is so little internal contrast in
the EPI dataset that the alignment process ends up being just
trying to match brain outlines -- which doesn't always give accurate
results: see http://dx.doi.org/10.1016/j.neuroimage.2008.09.037
-->> Although the total MRI signal is reduced at a smaller flip angle,
there is little or no loss in FMRI/BOLD information, since the bulk
of the time series 'noise' is from physiological fluctuation signals,
which are also reduced by the lower flip angle -- for more details,
see http://dx.doi.org/10.1016/j.neuroimage.2010.11.020
---------------------------------------------------------------------------
**** New (Summer 2013) program 3dQwarp is available to do nonlinear ****
*** alignment between a base and source dataset, including the use ***
** of 3dAllineate for the preliminary affine alignment. If you are **
* interested, see the output of '3dQwarp -help' for the details. *
---------------------------------------------------------------------------
COMMAND LINE OPTIONS:
====================
-base bbb = Set the base dataset to be the #0 sub-brick of 'bbb'.
If no -base option is given, then the base volume is
taken to be the #0 sub-brick of the source dataset.
(Base must be stored as floats, shorts, or bytes.)
** -base is not needed if you are just applying a given
transformation to the -source dataset to produce
the output, using -1Dmatrix_apply or -1Dparam_apply
** Unless you use the -master option, the aligned
output dataset will be stored on the same 3D grid
as the -base dataset.
-source ttt = Read the source dataset from 'ttt'. If no -source
*OR* (or -input) option is given, then the source dataset
-input ttt is the last argument on the command line.
(Source must be stored as floats, shorts, or bytes.)
** This is the dataset to be transformed, to match the
-base dataset, or directly with one of the options
-1Dmatrix_apply or -1Dparam_apply
** 3dAllineate can register 2D datasets (single slice),
but both the base and source must be 2D -- you cannot
use this program to register a 2D slice into a 3D volume!
-- However, the 'lpc' and 'lpa' cost functionals do not
work properly with 2D images, as they are designed
around local 3D neighborhoods and that code has not
been patched to work with 2D neighborhoods :(
-- You can input .jpg files as 2D 'datasets', register
them with 3dAllineate, and write the result back out
using a prefix that ends in '.jpg'; HOWEVER, the color
information will not be used in the registration, as
this program was written to deal with monochrome medical
datasets. At the end, if the source was RGB (color), then
the output will also be RGB, and then a color .jpg
can be output.
-- The above remarks also apply to aligning 3D RGB datasets:
it will be done using only the 3D volumes converted to
grayscale, but the final output will be the source
RGB dataset transformed to the (hopefully) aligned grid.
* However, I've never tested aligning 3D color datasets;
you can be the first one ever!
** See the script @2dwarper.Allin for an example of using
3dAllineate to do slice-by-slice nonlinear warping to
align 3D volumes distorted by time-dependent magnetic
field inhomogeneities.
** NOTA BENE: The base and source dataset do NOT have to be defined **
** [that's] on the same 3D grids; the alignment process uses the **
** [Latin ] coordinate systems defined in the dataset headers to **
** [ for ] make the match between spatial locations, rather than **
** [ NOTE ] matching the 2 datasets on a voxel-by-voxel basis **
** [ WELL ] (as 3dvolreg and 3dWarpDrive do). **
** -->> However, this coordinate-based matching requires that **
** image volumes be defined on roughly the same patch of **
** (x,y,z) space, in order to find a decent starting **
** point for the transformation. You might need to use **
** the script @Align_Centers to do this, if the 3D **
** spaces occupied by the images do not overlap much. **
** -->> Or the '-cmass' option to this program might be **
** sufficient to solve this problem, maybe, with luck. **
** (Another reason why you should use align_epi_anat.py) **
** -->> If the coordinate system in the dataset headers is **
** WRONG, then 3dAllineate will probably not work well! **
** And I say this because we have seen this in several **
** datasets downloaded from online archives. **
-prefix ppp = Output the resulting dataset to file 'ppp'. If this
*OR* option is NOT given, no dataset will be output! The
-out ppp transformation matrix to align the source to the base will
be estimated, but not applied. You can save the matrix
for later use using the '-1Dmatrix_save' option.
*N.B.: By default, the new dataset is computed on the grid of the
base dataset; see the '-master' and/or the '-mast_dxyz'
options to change this grid.
*N.B.: If 'ppp' is 'NULL', then no output dataset will be produced.
This option is for compatibility with 3dvolreg.
-floatize = Write result dataset as floats. Internal calculations
-float are all done on float copies of the input datasets.
[Default=convert output dataset to data format of ]
[ source dataset; if the source dataset was ]
[ shorts with a scale factor, then the new ]
[ dataset will get a scale factor as well; ]
[ if the source dataset was shorts with no ]
[ scale factor, the result will be unscaled.]
-1Dparam_save ff = Save the warp parameters in ASCII (.1D) format into
file 'ff' (1 row per sub-brick in source).
* A historical synonym for this option is '-1Dfile'.
* At the top of the saved 1D file is a #comment line
listing the names of the parameters; those parameters
that are fixed (e.g., via '-parfix') will be marked
by having their symbolic names end in the '$' character.
You can use '1dcat -nonfixed' to remove these columns
from the 1D file if you just want to further process the
varying parameters somehow (e.g., 1dsvd).
* However, the '-1Dparam_apply' option requires the
full list of parameters, including those that were
fixed, in order to work properly!
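As a small illustrative sketch (dataset and file names are hypothetical), one
might save the parameters during registration and then extract only the
varying (non-fixed) columns for inspection:
   3dAllineate -base anat+orig -source epi+orig -prefix epi_al \
               -1Dparam_save epi_param.1D
   1dcat -nonfixed epi_param.1D > epi_param_varying.1D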
-1Dparam_apply aa = Read warp parameters from file 'aa', apply them to
the source dataset, and produce a new dataset.
(Must also use the '-prefix' option for this to work! )
(In this mode of operation, there is no optimization of)
(the cost functional by changing the warp parameters; )
(previously computed parameters are applied directly. )
*N.B.: If you use -1Dparam_apply, you may also want to use
-master to control the grid on which the new
dataset is written -- the base dataset from the
original 3dAllineate run would be a good possibility.
Otherwise, the new dataset will be written out on the
3D grid coverage of the source dataset, and this
might result in clipping off part of the image.
*N.B.: Each row in the 'aa' file contains the parameters for
transforming one sub-brick in the source dataset.
If there are more sub-bricks in the source dataset
than there are rows in the 'aa' file, then the last
row is used repeatedly.
*N.B.: A trick to use 3dAllineate to resample a dataset to
a finer grid spacing:
3dAllineate -input dataset+orig \
-master template+orig \
-prefix newdataset \
-final wsinc5 \
-1Dparam_apply '1D: 12@0'\'
Here, the identity transformation is specified
by giving all 12 affine parameters as 0 (note
the extra \' at the end of the '1D: 12@0' input!).
** You can also use the word 'IDENTITY' in place of
'1D: 12@0'\' (to indicate the identity transformation).
**N.B.: Some expert options for modifying how the wsinc5
method works are described far below, if you use
'-HELP' instead of '-help'.
****N.B.: The interpolation method used to produce a dataset
is always given via the '-final' option, NOT via
'-interp'. If you forget this and use '-interp'
along with one of the 'apply' options, this program
will chastise you (gently) and change '-final'
to match the '-interp' input.
-1Dmatrix_save ff = Save the transformation matrix for each sub-brick into
file 'ff' (1 row per sub-brick in the source dataset).
If 'ff' does NOT end in '.1D', then the program will
append '.aff12.1D' to 'ff' to make the output filename.
*N.B.: This matrix is the coordinate transformation from base
to source DICOM coordinates. In other terms:
Xin = Xsource = M Xout = M Xbase
or
Xout = Xbase = inv(M) Xin = inv(M) Xsource
where Xin or Xsource is the 4x1 coordinates of a
location in the input volume. Xout is the
coordinate of that same location in the output volume.
Xbase is the coordinate of the corresponding location
in the base dataset. M is the matrix from 'ff' augmented by a 4th row of
[0 0 0 1], and each X above is an augmented column vector [x,y,z,1]'.
To get the inverse matrix inv(M)
(source to base), use the cat_matvec program, as in
cat_matvec fred.aff12.1D -I
-1Dmatrix_apply aa = Use the matrices in file 'aa' to define the spatial
transformations to be applied. Also see program
cat_matvec for ways to manipulate these matrix files.
*N.B.: You probably want to use either -base or -master
with either *_apply option, so that the coordinate
system that the matrix refers to is correctly loaded.
** You can also use the word 'IDENTITY' in place of a
filename to indicate the identity transformation --
presumably for the purpose of resampling the source
dataset to a new grid.
* The -1Dmatrix_* options can be used to save and reuse the transformation *
* matrices. In combination with the program cat_matvec, which can multiply *
* saved transformation matrices, you can also adjust these matrices to *
* other alignments. These matrices can also be combined with nonlinear *
* warps (from 3dQwarp) using programs 3dNwarpApply or 3dNwarpCat. *
* The script 'align_epi_anat.py' uses 3dAllineate and 3dvolreg to align EPI *
* datasets to T1-weighted anatomical datasets, using saved matrices between *
* the two programs. This script is our currently recommended method for *
* doing such intra-subject alignments. *
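As an illustrative sketch (all dataset and file names here are hypothetical),
a matrix can be computed once and later re-applied to another dataset:
   3dAllineate -base anat+orig -source epi_run1+orig \
               -1Dmatrix_save epi_mat -prefix epi_run1_al
   3dAllineate -1Dmatrix_apply epi_mat.aff12.1D -source epi_run2+orig \
               -master anat+orig -final wsinc5 -prefix epi_run2_al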
-cost ccc = Defines the 'cost' function that defines the matching
between the source and the base; 'ccc' is one of
ls *OR* leastsq = Least Squares [Pearson Correlation]
mi *OR* mutualinfo = Mutual Information [H(b)+H(s)-H(b,s)]
crM *OR* corratio_mul = Correlation Ratio (Symmetrized*)
nmi *OR* norm_mutualinfo = Normalized MI [H(b,s)/(H(b)+H(s))]
hel *OR* hellinger = Hellinger metric
crA *OR* corratio_add = Correlation Ratio (Symmetrized+)
crU *OR* corratio_uns = Correlation Ratio (Unsym)
lpc *OR* localPcorSigned = Local Pearson Correlation Signed
lpa *OR* localPcorAbs = Local Pearson Correlation Abs
lpc+ *OR* localPcor+Others= Local Pearson Signed + Others
lpa+ *OR* localPcorAbs+Others= Local Pearson Abs + Others
You can also specify the cost functional using an option
of the form '-mi' rather than '-cost mi', if you like
to keep things terse and cryptic (as I do).
[Default == '-hel' (for no good reason, but it sounds nice).]
**NB** See more below about lpa and lpc, which are typically
what we would recommend as first-choice cost functions
now:
lpa if you have similar contrast vols to align;
lpc if you have *non*similar contrast vols to align!
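For example, these two (hypothetical) command lines request the same cost
functional, in long and terse forms respectively:
   3dAllineate -cost lpa -base anat1+orig -source anat2+orig -prefix a2_al
   3dAllineate -lpa      -base anat1+orig -source anat2+orig -prefix a2_al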
-interp iii = Defines interpolation method to use during matching
process, where 'iii' is one of
NN *OR* nearestneighbour *OR* nearestneighbor
linear *OR* trilinear
cubic *OR* tricubic
quintic *OR* triquintic
Using '-NN' instead of '-interp NN' is allowed (e.g.).
Note that using cubic or quintic interpolation during
the matching process will slow the program down a lot.
Use '-final' to affect the interpolation method used
to produce the output dataset, once the final registration
parameters are determined. [Default method == 'linear'.]
** N.B.: Linear interpolation is used during the coarse
alignment pass; the selection here only affects
the interpolation method used during the second
(fine) alignment pass.
** N.B.: '-interp' does NOT define the final method used
to produce the output dataset as warped from the
input dataset. If you want to do that, use '-final'.
-final iii = Defines the interpolation mode used to create the
output dataset. [Default == 'cubic']
** N.B.: If you are applying a transformation to an
integer-valued dataset (such as an atlas),
then you should use '-final NN' to avoid
interpolation of the integer labels.
** N.B.: For '-final' ONLY, you can use 'wsinc5' to specify
that the final interpolation be done using a
weighted sinc interpolation method. This method
is so SLOW that you aren't allowed to use it for
the registration itself.
++ wsinc5 interpolation is highly accurate and should
reduce the smoothing artifacts from lower
order interpolation methods (which are most
visible if you interpolate an EPI time series
to high resolution and then make an image of
the voxel-wise variance).
++ On my Intel-based Mac, it takes about 2.5 s to do
wsinc5 interpolation, per 1 million voxels output.
For comparison, quintic interpolation takes about
0.3 s per 1 million voxels: 8 times faster than wsinc5.
++ The '5' refers to the width of the sinc interpolation
weights: plus/minus 5 grid points in each direction;
this is a tensor product interpolation, for speed.
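As a small illustration (names hypothetical), applying a saved matrix to an
integer-valued atlas would use NN interpolation to preserve the labels:
   3dAllineate -1Dmatrix_apply anat_mat.aff12.1D -source atlas+tlrc \
               -master epi+orig -final NN -prefix atlas_in_epi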
TECHNICAL OPTIONS (used for fine control of the program):
=================
-nmatch nnn = Use at most 'nnn' scattered points to match the
datasets. The smaller nnn is, the faster the matching
algorithm will run; however, accuracy may be bad if
nnn is too small. If you end the 'nnn' value with the
'%' character, then that percentage of the base's
voxels will be used.
[Default == 47% of voxels in the weight mask]
-nopad = Do not use zero-padding on the base image.
(I cannot think of a good reason to use this option.)
[Default == zero-pad, if needed; -verb shows how much]
-zclip = Replace negative values in the input datasets (source & base)
-noneg with zero. The intent is to clip off a small set of negative
values that may arise when using 3dresample (say) with
cubic interpolation.
-conv mmm = Convergence test is set to 'mmm' millimeters.
This doesn't mean that the results will be accurate
to 'mmm' millimeters! It just means that the program
stops trying to improve the alignment when the optimizer
(NEWUOA) reports it has narrowed the search radius
down to this level.
* To set this value to the smallest allowable, use '-conv 0'.
* A coarser value for 'quick-and-dirty' alignment is 0.05.
-verb = Print out verbose progress reports.
[Using '-VERB' will give even more prolix reports :]
-quiet = Don't print out verbose stuff. (But WHY?)
-usetemp = Write intermediate stuff to disk, to economize on RAM.
Using this will slow the program down, but may make it
possible to register datasets that need lots of space.
**N.B.: Temporary files are written to the directory given
in environment variable TMPDIR, or in /tmp, or in ./
(preference in that order). If the program crashes,
these files are named TIM_somethingrandom, and you
may have to delete them manually. (TIM=Temporary IMage)
**N.B.: If the program fails with a 'malloc failure' type of
message, then try '-usetemp' (malloc=memory allocator).
* If the program just stops with a message 'killed', that
means the operating system (Unix/Linux) stopped the
program, which almost always is due to the system running
low on memory -- so it starts killing programs to save itself.
-nousetemp = Don't use temporary workspace on disk [the default].
-check hhh = After cost functional optimization is done, start at the
final parameters and RE-optimize using the new cost
function 'hhh'. If the results are too different, a
warning message will be printed. However, the final
parameters from the original optimization will be
used to create the output dataset. Using '-check'
increases the CPU time, but can help you feel sure
that the alignment process did not go wild and crazy.
[Default == no check == don't worry, be happy!]
**N.B.: You can put more than one function after '-check', as in
-nmi -check mi hel crU crM
to register with Normalized Mutual Information, and
then check the results against 4 other cost functionals.
**N.B.: On the other hand, some cost functionals give better
results than others for specific problems, and so
a warning that 'mi' was significantly different than
'hel' might not actually mean anything useful (e.g.).
** PARAMETERS THAT AFFECT THE COST OPTIMIZATION STRATEGY **
-onepass = Use only the refining pass -- do not try a coarse
resolution pass first. Useful if you know that only
SMALL amounts of image alignment are needed.
[The default is to use both passes.]
-twopass = Use a two pass alignment strategy, first searching for
a large rotation+shift and then refining the alignment.
[Two passes are used by default for the first sub-brick]
[in the source dataset, and then one pass for the others.]
['-twopass' will do two passes for ALL source sub-bricks.]
*** The first (coarse) pass is relatively slow, as it tries
to search a large volume of parameter (rotations+shifts)
space for initial guesses at the alignment transformation.
* A lot of these initial guesses are kept and checked to
see which ones lead to good starting points for the
further refinement.
* The winners of this competition are then passed to the
'-twobest' (infra) successive optimization passes.
* The ultimate winner of THAT stage is what starts
the second (fine) pass alignment. Usually, this starting
point is so good that the fine pass optimization does
not provide a lot of improvement; that is, most of the
run time ends up in coarse pass with its multiple stages.
* All of these stages are intended to help the program avoid
stopping at a 'false' minimum in the cost functional.
They were added to the software as we gathered experience
with difficult 3D alignment problems. The combination of
multiple stages of partial optimization of multiple
parameter candidates makes the coarse pass slow, but also
makes it (usually) work well.
-twoblur rr = Set the blurring radius for the first pass to 'rr'
millimeters. [Default == 11 mm]
**N.B.: You may want to change this from the default if
your voxels are unusually small or unusually large
(e.g., outside the range 1-4 mm along each axis).
-twofirst = Use -twopass on the first image to be registered, and
then on all subsequent images from the source dataset,
use results from the first image's coarse pass to start
the fine pass.
(Useful when there may be large motions between the )
(source and the base, but only small motions within )
(the source dataset itself; since the coarse pass can )
(be slow, doing it only once makes sense in this case.)
**N.B.: [-twofirst is on by default; '-twopass' turns it off.]
-twobest bb = In the coarse pass, use the best 'bb' set of initial
points to search for the starting point for the fine
pass. If bb==0, then no search is made for the best
starting point, and the identity transformation is
used as the starting point. [Default=5; min=0 max=29]
**N.B.: Setting bb=0 will make things run faster, but less reliably.
Setting bb = 'MAX' will set it to the maximum allowed value.
-fineblur x = Set the blurring radius to use in the fine resolution
pass to 'x' mm. A small amount (1-2 mm?) of blurring at
the fine step may help with convergence, if there is
some problem, especially if the base volume is very noisy.
[Default == 0 mm = no blurring at the final alignment pass]
**NOTES ON
**STRATEGY: * If you expect only small-ish (< 2 voxels?) image movement,
then using '-onepass' or '-twobest 0' makes sense.
* If you expect large-ish image movements, then do not
use '-onepass' or '-twobest 0'; the purpose of the
'-twobest' parameter is to search for large initial
rotations/shifts with which to start the coarse
optimization round.
* If you have multiple sub-bricks in the source dataset,
then the default '-twofirst' makes sense if you don't expect
large movements WITHIN the source, but expect large motions
between the source and base.
* '-twopass' re-starts the alignment process for each sub-brick
in the source dataset -- this option can be time consuming,
and is really intended to be used when you might expect large
movements between sub-bricks; for example, when the different
volumes are gathered on different days. For most purposes,
'-twofirst' (the default process) will be adequate and faster,
when operating on multi-volume source datasets.
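For instance (a hypothetical case), aligning a whole-head T1w volume to a
template with rather different spatial coverage might use a wider initial search:
   3dAllineate -base template+tlrc -source anat+orig -twobest MAX \
               -cost lpa -prefix anat_al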
-cmass = Use the center-of-mass calculation to determine an initial shift
[This option is OFF by default]
This option can be given as cmass+a, cmass+xy, cmass+yz, or cmass+xz,
where '+a' means to try to determine automatically in which
direction the data coverage is partial, by looking for an overly large shift.
If given in the form '-cmass+xy' (for example), the CoM
calculation is done only in the x- and y-directions, but
not in the z-direction.
* MY OPINION: This option is REALLY useful in most cases.
However, if you only have partial coverage in
the -source dataset, you will need to use
one of the '+' additions to restrict the
use of the CoM limits.
-nocmass = Don't use the center-of-mass calculation. [The default]
(You would not want to use the C-o-M calculation if the )
(source sub-bricks have very different spatial locations,)
(since the source C-o-M is calculated from all sub-bricks)
**EXAMPLE: You have a limited coverage set of axial EPI slices you want to
register into a larger head volume (after 3dSkullStrip, of course).
In this case, '-cmass+xy' makes sense, allowing CoM adjustment
along the x = R-L and y = A-P directions, but not along the
z = I-S direction, since the EPI doesn't cover the whole brain
along that axis.
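A corresponding (hypothetical) command line for that case might be:
   3dAllineate -base anat_ns+orig -source epi_partial+orig -cmass+xy \
               -cost lpc -prefix epi_partial_al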
-autoweight = Compute a weight function using the 3dAutomask
algorithm plus some blurring of the base image.
**N.B.: '-autoweight+100' means to zero out all voxels
with values below 100 before computing the weight.
'-autoweight**1.5' means to compute the autoweight
and then raise it to the 1.5-th power (e.g., to
increase the weight of high-intensity regions).
These two processing steps can be combined, as in
'-autoweight+100**1.5'
** Note that '**' must be enclosed in quotes;
otherwise, the shell will treat it as a wildcard
and you will get an error message before 3dAllineate
even starts!!
** UPDATE: one can now use '^' for power notation, to
avoid needing to enclose the string in quotes.
**N.B.: Some cost functionals do not allow -autoweight, and
will use -automask instead. A warning message
will be printed if you run into this situation.
If a clip level '+xxx' is appended to '-autoweight',
then the conversion into '-automask' will NOT happen.
Thus, a small positive '+xxx' can be used to trick
-autoweight into working with any cost functional.
-automask = Compute a mask function, which is like -autoweight,
but the weight for a voxel is set to either 0 or 1.
**N.B.: '-automask+3' means to compute the mask function, and
then dilate it outwards by 3 voxels (e.g.).
** Note that '+' means something very different
for '-automask' and '-autoweight'!!
-autobox = Expand the -automask function to enclose a rectangular
box that holds the irregular mask.
**N.B.: This is the default mode of operation!
For intra-modality registration, '-autoweight' may be better!
* If the cost functional is 'ls', then '-autoweight' will be
the default, instead of '-autobox'.
-nomask = Don't compute the autoweight/mask; if -weight is not
also used, then every voxel will be counted equally.
-weight www = Set the weighting for each voxel in the base dataset;
larger weights mean that voxel counts more in the cost
function.
**N.B.: The weight dataset must be defined on the same grid as
the base dataset.
**N.B.: Even if a method does not allow -autoweight, you CAN
use a weight dataset that is not 0/1 valued. The
risk is yours, of course (!*! as always in AFNI !*!).
-wtprefix p = Write the weight volume to disk as a dataset with
prefix name 'p'. Used with '-autoweight/mask', this option
lets you see what voxels were important in the algorithm.
-emask ee = This option lets you specify a mask of voxels to EXCLUDE from
the analysis. The voxels where the dataset 'ee' is nonzero
will not be included (i.e., their weights will be set to zero).
* Like all the weight options, it applies in the base image
coordinate system.
** Like all the weight options, it means nothing if you are using
one of the 'apply' options.
Method Allows -autoweight
------ ------------------
ls YES
mi NO
crM YES
nmi NO
hel NO
crA YES
crU YES
lpc YES
lpa YES
lpc+ YES
lpa+ YES
-source_mask sss = Mask the source (input) dataset, using 'sss'.
-source_automask = Automatically mask the source dataset.
[By default, all voxels in the source]
[dataset are used in the matching. ]
**N.B.: You can also use '-source_automask+3' to dilate
the default source automask outward by 3 voxels.
-warp xxx = Set the warp type to 'xxx', which is one of
shift_only *OR* sho = 3 parameters
shift_rotate *OR* shr = 6 parameters
shift_rotate_scale *OR* srs = 9 parameters
affine_general *OR* aff = 12 parameters
[Default = affine_general, which includes image]
[ shifts, rotations, scaling, and shearing]
* MY OPINION: Shearing is usually unimportant, so
you can omit it if you want: '-warp srs'.
But it doesn't hurt to keep shearing,
except for a little extra CPU time.
On the other hand, scaling is often
important, so should not be omitted.
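For example (hypothetical names), a 9-parameter fit with shearing omitted:
   3dAllineate -base anat1+orig -source anat2+orig \
               -warp shift_rotate_scale -prefix anat2_al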
-warpfreeze = Freeze the non-rigid body parameters (those past #6)
after doing the first sub-brick. Subsequent volumes
will have the same spatial distortions as sub-brick #0,
plus rigid body motions only.
* MY OPINION: This option is almost useless.
-replacebase = If the source has more than one sub-brick, and this
option is turned on, then after the #0 sub-brick is
aligned to the base, the aligned #0 sub-brick is used
as the base image for subsequent source sub-bricks.
* MY OPINION: This option is almost useless.
-replacemeth m = After sub-brick #0 is aligned, switch to method 'm'
for later sub-bricks. For use with '-replacebase'.
* MY OPINION: This option is almost useless.
-EPI = Treat the source dataset as being composed of warped
EPI slices, and the base as comprising anatomically
'true' images. Only phase-encoding direction image
shearing and scaling will be allowed with this option.
**N.B.: For most people, the base dataset will be a 3dSkullStrip-ed
T1-weighted anatomy (MPRAGE or SPGR). If you don't remove
the skull first, the EPI images (which have little skull
visible due to fat-suppression) might expand to fit EPI
brain over T1-weighted skull.
**N.B.: Usually, EPI datasets don't have as complete slice coverage
of the brain as do T1-weighted datasets. If you don't use
some option (like '-EPI') to suppress scaling in the slice-
direction, the EPI dataset is likely to stretch the slice
thickness to better 'match' the T1-weighted brain coverage.
**N.B.: '-EPI' turns on '-warpfreeze -replacebase'.
You can use '-nowarpfreeze' and/or '-noreplacebase' AFTER the
'-EPI' on the command line if you do not want these options used.
** OPTIONS to change search ranges for alignment parameters **
-smallrange = Set all the parameter ranges to be smaller (about half) than
the default ranges, which are rather large for many purposes.
* Default angle range is plus/minus 30 degrees
* Default shift range is plus/minus 32% of grid size
* Default scaling range is plus/minus 20% of grid size
* Default shearing range is plus/minus 0.1111
-parfix n v = Fix parameter #n to be exactly at value 'v'.
-parang n b t = Allow parameter #n to range only between 'b' and 't'.
If not given, default ranges are used.
-parini n v = Initialize parameter #n to value 'v', but then
allow the algorithm to adjust it.
**N.B.: Multiple '-par...' options can be used, to constrain
multiple parameters.
**N.B.: -parini has no effect if -twopass is used, since
the -twopass algorithm carries out its own search
for initial parameters.
-maxrot dd = Allow maximum rotation of 'dd' degrees. Equivalent
to '-parang 4 -dd dd -parang 5 -dd dd -parang 6 -dd dd'
[Default=30 degrees]
-maxshf dd = Allow maximum shift of 'dd' millimeters. Equivalent
to '-parang 1 -dd dd -parang 2 -dd dd -parang 3 -dd dd'
[Default=32% of the size of the base image]
**N.B.: This max shift setting is relative to the center-of-mass
shift, if the '-cmass' option is used.
-maxscl dd = Allow maximum scaling factor to be 'dd'. Equivalent
to '-parang 7 1/dd dd -parang 8 1/dd dd -parang 9 1/dd dd'
[Default=1.4=image can go up or down 40% in size]
-maxshr dd = Allow maximum shearing factor to be 'dd'. Equivalent
to '-parang 10 -dd dd -parang 11 -dd dd -parang 12 -dd dd'
[Default=0.1111 for no good reason]
NOTE: If the datasets being registered have only 1 slice, 3dAllineate
will automatically fix the 6 out-of-plane motion parameters to
their 'do nothing' values, so you don't have to specify '-parfix'.
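As a sketch (names hypothetical), restricting the search when only small
motions are expected:
   3dAllineate -base anat+orig -source epi+orig -maxrot 10 -maxshf 15 \
               -prefix epi_al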
-master mmm = Write the output dataset on the same grid as dataset
'mmm'. If this option is NOT given, the base dataset
is the master.
**N.B.: 3dAllineate transforms the source dataset to be 'similar'
to the base image. Therefore, the coordinate system
of the master dataset is interpreted as being in the
reference system of the base image. It is thus vital
that these finite 3D volumes overlap, or you will lose data!
**N.B.: If 'mmm' is the string 'SOURCE', then the source dataset
is used as the master for the output dataset grid.
You can also use 'BASE', which is of course the default.
-mast_dxyz del = Write the output dataset using grid spacings of
*OR* 'del' mm. If this option is NOT given, then the
-newgrid del grid spacings in the master dataset will be used.
This option is useful when registering low resolution
data (e.g., EPI time series) to high resolution
datasets (e.g., MPRAGE) where you don't want to
consume vast amounts of disk space interpolating
the low resolution data to some artificially fine
(and meaningless) spatial grid.
----------------------------------------------
DEFINITION OF AFFINE TRANSFORMATION PARAMETERS
----------------------------------------------
The 3x3 spatial transformation matrix is calculated as [S][D][U],
where [S] is the shear matrix,
[D] is the scaling matrix, and
[U] is the rotation (proper orthogonal) matrix.
These matrices are specified in DICOM-ordered (x=-R+L,y=-A+P,z=-I+S)
coordinates as:
[U] = [Rotate_y(param#6)] [Rotate_x(param#5)] [Rotate_z(param #4)]
(angles are in degrees)
[D] = diag( param#7 , param#8 , param#9 )
[ 1 0 0 ] [ 1 param#10 param#11 ]
[S] = [ param#10 1 0 ] OR [ 0 1 param#12 ]
[ param#11 param#12 1 ] [ 0 0 1 ]
The shift vector comprises parameters #1, #2, and #3.
The goal of the program is to find the warp parameters such that
I([x]_warped) 'is similar to' J([x]_in)
as closely as possible in some sense of 'similar', where J(x) is the
base image, and I(x) is the source image.
Using '-parfix', you can specify that some of these parameters
are fixed. For example, '-shift_rotate_scale' is equivalent to
'-affine_general -parfix 10 0 -parfix 11 0 -parfix 12 0'.
Don't even think of using the '-parfix' option unless you grok
this example!
----------- Special Note for the '-EPI' Option's Coordinates -----------
In this case, the parameters above are with reference to coordinates
x = frequency encoding direction (by default, first axis of dataset)
y = phase encoding direction (by default, second axis of dataset)
z = slice encoding direction (by default, third axis of dataset)
This option lets you freeze some of the warping parameters in ways that
make physical sense, considering how echo-planar images are acquired.
The x- and z-scaling parameters are disabled, and shears will only affect
the y-axis. Thus, there will be only 9 free parameters when '-EPI' is
used. If desired, you can use a '-parang' option to allow the fixed
scaling parameters to vary (put these after the '-EPI' option):
-parang 7 0.833 1.20 to allow x-scaling
-parang 9 0.833 1.20 to allow z-scaling
You could also fix some of the other parameters, if that makes sense
in your situation; for example, to disable out-of-slice rotations:
-parfix 5 0 -parfix 6 0
and to disable out of slice translation:
-parfix 3 0
NOTE WELL: If you use '-EPI', then the output warp parameters (e.g., in
'-1Dparam_save') apply to the (freq,phase,slice) xyz coordinates,
NOT to the DICOM xyz coordinates, so equivalent transformations
will be expressed with different sets of parameters entirely
than if you don't use '-EPI'! This comment does NOT apply
to the output of '-1Dmatrix_save', since that matrix is
defined relative to the RAI (DICOM) spatial coordinates.
*********** CHANGING THE ORDER OF MATRIX APPLICATION ***********
{{{ There is no good reason to ever use these options! }}}
-SDU or -SUD }= Set the order of the matrix multiplication
-DSU or -DUS }= for the affine transformations:
-USD or -UDS }= S = triangular shear (params #10-12)
D = diagonal scaling matrix (params #7-9)
U = rotation matrix (params #4-6)
Default order is '-SDU', which means that
the U matrix is applied first, then the
D matrix, then the S matrix.
-Supper }= Set the S matrix to be upper or lower
-Slower }= triangular [Default=lower triangular]
NOTE: There is no '-Lunch' option.
There is no '-Faster' option.
-ashift OR }= Apply the shift parameters (#1-3) after OR
-bshift }= before the matrix transformation. [Default=after]
==================================================
===== RWCox - September 2006 - Live Long and Prosper =====
==================================================
********************************************************
*** From Webster's Dictionary: Allineate == 'to align' ***
********************************************************
===========================================================================
FORMERLY SECRET HIDDEN OPTIONS
---------------------------------------------------------------------------
** N.B.: Most of these are experimental! [permanent beta] **
===========================================================================
-num_rtb n = At the beginning of the fine pass, the best set of results
from the coarse pass are 'refined' a little by further
optimization, before the single best one is chosen
for the final fine optimization.
* This option sets the maximum number of cost functional
evaluations to be used (for each set of parameters)
in this step.
* The default is 99; a larger value will take more CPU
time but may give more robust results.
* If you want to skip this step entirely, use '-num_rtb 0';
then the best of the coarse pass results is taken
straight to the final optimization passes.
**N.B.: If you use '-VERB', you will see that one extra case
is involved in this initial fine refinement step; that
case is starting with the identity transformation, which
helps insure against the chance that the coarse pass
optimizations ran totally amok.
* MY OPINION: This option is mostly useless - but not always!
* Every step in the multi-step alignment process
was added at some point to solve a difficult
alignment problem.
* Since you usually don't know if YOUR problem
is difficult, you should not reduce the default
process without good reason.
-nocast = By default, parameter vectors that are too close to the
best one are cast out at the end of the coarse pass
refinement process. Use this option if you want to keep
them all for the fine resolution pass.
* MY OPINION: This option is nearly useless.
-norefinal = Do NOT re-start the fine iteration step after it
has converged. The default is to re-start it, which
usually results in a small improvement to the result
(at the cost of CPU time). This re-start step is
an attempt to avoid a local minimum trap. It is usually
not necessary, but sometimes helps.
-realaxes = Use the 'real' axes stored in the dataset headers, if they
conflict with the default axes. [For Jedi AFNI Masters only!]
-savehist sss = Save start and final 2D histograms as PGM
files, with prefix 'sss' (cost: cr mi nmi hel).
* if the filename contains 'FF', floats are written
* these are the weighted histograms!
* -savehist will also save histogram files when
the -allcost evaluations take place
* this option is mostly useless unless '-histbin' is
also used
* MY OPINION: This option is mostly for debugging.
-median = Smooth with median filter instead of Gaussian blur.
(Somewhat slower, and not obviously useful.)
* MY OPINION: This option is nearly useless.
-powell m a = Set the Powell NEWUOA dimensional parameters to
'm' and 'a' (cf. source code in powell_int.c).
The number of points used for approximating the
cost functional is m*N+a, where N is the number
of parameters being optimized. The default values
are m=2 and a=3. Larger values will probably slow
the program down for no good reason. The smallest
allowed values are 1.
* MY OPINION: This option is nearly useless.
-target ttt = Same as '-source ttt'. In the earliest versions,
what I now call the 'source' dataset was called the
'target' dataset:
Try to remember the kind of September (2006)
When life was slow and oh so mellow
Try to remember the kind of September
When grass was green and source was target.
-Xwarp =} Change the warp/matrix setup so that only the x-, y-, or z-
-Ywarp =} axis is stretched & sheared. Useful for EPI, where 'X',
-Zwarp =} 'Y', or 'Z' corresponds to the phase encoding direction.
-FPS fps = Generalizes -EPI to arbitrary permutation of directions.
-histpow pp = By default, the number of bins in the histogram used
for calculating the Hellinger, Mutual Information, and
Correlation Ratio statistics is n^(1/3), where n is
the number of data points. You can change that exponent
to 'pp' with this option.
-histbin nn = Or you can just set the number of bins directly to 'nn'.
-eqbin nn = Use equalized marginal histograms with 'nn' bins.
-clbin nn = Use 'nn' equal-spaced bins except for the bot and top,
which will be clipped (thus the 'cl'). If nn is 0, the
program will pick the number of bins for you.
**N.B.: '-clbin 0' is now the default [25 Jul 2007];
if you want the old all-equal-spaced bins, use
'-histbin 0'.
**N.B.: '-clbin' only works when the datasets are
non-negative; any negative voxels in either
the input or source volumes will force a switch
to all equal-spaced bins.
* MY OPINION: The above histogram-altering options are useless.
-wtmrad mm = Set autoweight/mask median filter radius to 'mm' voxels.
-wtgrad gg = Set autoweight/mask Gaussian filter radius to 'gg' voxels.
-nmsetup nn = Use 'nn' points for the setup matching [default=98756]
-ignout = Ignore voxels outside the warped source dataset.
-blok bbb = Blok definition for the 'lp?' (Local Pearson) cost
functions: 'bbb' is one of
'BALL(r)' or 'CUBE(r)' or 'RHDD(r)' or 'TOHD(r)'
corresponding to
spheres or cubes or rhombic dodecahedra or
truncated octahedra
where 'r' is the size parameter in mm.
[Default is 'TOHD(r)' = truncated octahedron]
[with 'radius' r chosen to include about 500]
[voxels in the base dataset 3D grid. ]
* Changing the 'blok' definition/radius should only be
needed in unusual situations, as when you are trying
to have fun fun fun.
* You can change the blok shape but leave the program
to set the radius, using (say) 'RHDD(0)'.
* The old default blok shape/size was 'RHDD(6.54321)',
so if you want to maintain backward compatibility,
you should use option '-blok "RHDD(6.54321)"'
* Only voxels in the weight mask will be used
inside a blok.
* HISTORICAL NOTES:
* CUBE, RHDD, and TOHD are space filling polyhedra.
That is, they are shapes that fit together without
overlaps or gaps to fill up 3D space.
* To even approximately fill space, BALLs must overlap,
unlike the other blok shapes. Which means that BALL
bloks will use some voxels more than once.
* Kepler discovered/invented the RHDD (honeybees also did).
* The TOHD is the 'most compact' or 'most ball-like'
of the known convex space filling polyhedra.
[Which is why TOHD is the default blok shape.]
-PearSave sss = Save the final local Pearson correlations into a dataset
*OR* with prefix 'sss'. These are the correlations from
-SavePear sss which the lpc and lpa cost functionals are calculated.
* The values will be between -1 and 1 in each blok.
See the 'Too Much Detail' section below for how
these correlations are used to compute lpc and lpa.
* Locations not used in the matching will get 0.
** Unless you use '-nmatch 100%', there will be holes
of 0s in the bloks, as not all voxels are used in
the matching algorithm (speedup attempt).
* All the matching points in a given blok will get
the same value, which makes the resulting dataset
look jauntily blocky, especially in color.
* This saved dataset will be on the grid of the base
dataset, and may be zero padded if the program
chose to do so in its wisdom. This padding means
that the voxels in this output dataset may not
match one-to-one with the voxels in the base
dataset; however, AFNI displays things using
coordinates, so overlaying this dataset on the
base dataset (say) should work OK.
* If you really want this saved dataset to be on the same
grid as the base dataset, you'll have to use
3dZeropad -master {Base Dataset} ....
* Option '-PearSave' works even if you don't use the
'lpc' or 'lpa' cost functionals.
* If you use this option combined with '-allcostX', then
the local correlations will be saved from the INITIAL
alignment parameters, rather than from the FINAL
optimized parameters.
(Of course, with '-allcostX', there IS no final result.)
* This option does NOT work with '-allcost' or '-allcostX1D'.
-allcost = Compute ALL available cost functionals and print them
at various points in the optimization progress.
-allcostX = Compute and print ALL available cost functionals for the
un-warped inputs, and then quit.
* This option is for testing purposes (AKA 'fun').
-allcostX1D p q = Compute ALL available cost functionals for the set of
parameters given in the 1D file 'p' (12 values per row),
write them to the 1D file 'q', then exit. (For you, Zman)
* N.B.: If -fineblur is used, that amount of smoothing
will be applied prior to the -allcostX evaluations.
The parameters are the rotation, shift, scale,
and shear values, not the affine transformation
matrix. An identity matrix could be provided as
"0 0 0 0 0 0 1 1 1 0 0 0" for instance or by
using the word "IDENTITY"
* This option is for testing purposes (even more 'fun').
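For example (file names hypothetical), to evaluate all cost functionals for
the un-warped (identity) alignment of two datasets:
   3dAllineate -base anat+orig -source epi+orig -allcostX1D IDENTITY costs.1D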
===========================================================================
Too Much Detail -- How Local Pearson Correlations Are Computed and Used
-----------------------------------------------------------------------
* The automask region of the base dataset is divided into a discrete
set of 'bloks'. Usually there are several thousand bloks.
* In each blok, the voxel values from the base and the source (after
the alignment transformation is applied) are extracted and the
correlation coefficient is computed -- either weighted or unweighted,
depending on the options used in 3dAllineate (usually weighted).
* Let p[i] = correlation coefficient in blok #i,
w[i] = sum of weights used in blok #i, or = 1 if unweighted.
** The values of p[i] are what get output via the '-PearSave' option.
* Define pc[i] = arctanh(p[i]) = 0.5 * log( (1+p[i]) / (1-p[i]) )
This expression is designed to 'stretch' out larger correlations,
giving them more emphasis in psum below. The same reasoning
is why pc[i]*abs(pc[i]) is used below, to make bigger correlations
have a bigger impact in the final result.
* psum = SUM_OVER_i { w[i]*pc[i]*abs(pc[i]) }
wsum = SUM_OVER_i { w[i] }
lpc = psum / wsum ==> negative correlations are good (smaller lpc)
lpa = 1 - abs(lpc) ==> positive correlations are good (smaller lpa)
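As a made-up numeric illustration with just two equally weighted bloks,
suppose p[1] = 0.5 and p[2] = -0.8 (w[1] = w[2] = 1):
   pc[1] = arctanh(0.5)  =  0.549   pc[1]*abs(pc[1]) =  0.302
   pc[2] = arctanh(-0.8) = -1.099   pc[2]*abs(pc[2]) = -1.207
   psum  = 0.302 - 1.207 = -0.905   wsum = 2
   lpc   = -0.905 / 2    = -0.453   lpa  = 1 - 0.453 =  0.547
The strongly negative blok dominates, pulling lpc down (which counts as
'good' for the lpc functional).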
===========================================================================
Modifying '-final wsinc5' -- for the truly crazy people out there
-----------------------------------------------------------------
* The windowed (tapered) sinc function interpolation can be modified
by several environment variables. This is expert-level stuff, and
you should understand what you are doing if you use these options.
The simplest way to use these would be on the command line, as in
-DAFNI_WSINC5_RADIUS=9 -DAFNI_WSINC5_TAPERFUN=Hamming
* AFNI_WSINC5_TAPERFUN lets you choose the taper function.
The default taper function is the minimum sidelobe 3-term cosine:
0.4243801 + 0.4973406*cos(PI*x) + 0.0782793*cos(2*PI*x)
If you set this environment variable to 'Hamming', then the
minimum sidelobe 2-term cosine will be used instead:
0.53836 + 0.46164*cos(PI*x)
Here, 'x' is between 0 and 1, where x=0 is the center of the
interpolation mask and x=1 is the outer edge.
++ Unfortunately, the 3-term cosine doesn't have a catchy name; you can
find it (and many other taper functions) described in the paper
AH Nuttall, Some Windows with Very Good Sidelobe Behavior.
IEEE Trans. ASSP, 29:84-91 (1981).
In particular, see Fig.14 and Eq.36 in this paper.
* AFNI_WSINC5_TAPERCUT lets you choose the start 'x' point for tapering:
This value should be between 0 and 0.8; for example, 0 means to taper
all the way from x=0 to x=1 (maximum tapering). The default value
is 0. Setting TAPERCUT to 0.5 (say) means only to taper from x=0.5
to x=1; thus, a larger value means that fewer points are tapered
inside the interpolation mask.
* AFNI_WSINC5_RADIUS lets you choose the radius of the tapering window
(i.e., the interpolation mask region). This value is an integer
between 3 and 21. The default value is 5 (which used to be the
ONLY value, thus 'wsinc5'). RADIUS is measured in voxels, not mm.
* AFNI_WSINC5_SPHERICAL lets you choose the shape of the mask region.
If you set this value to 'Yes', then the interpolation mask will be
spherical; otherwise, it defaults to cubical.
* The Hamming taper function is a little faster than the 3-term function,
but will have a little more Gibbs phenomenon.
* A larger TAPERCUT will give a little more Gibbs phenomenon; compute
speed won't change much with this parameter.
* Compute time goes up with (at least) the 3rd power of the RADIUS; setting
RADIUS to 21 will be VERY slow.
* Visually, RADIUS=3 is similar to quintic interpolation. Increasing
RADIUS makes the interpolated images look sharper and more well-
defined. However, values of RADIUS greater than or equal to 7 appear
(to Zhark's eagle eye) to be almost identical. If you really care,
you'll have to experiment with this parameter yourself.
* A spherical mask is also VERY slow, since the cubical mask allows
evaluation as a tensor product. There is really no good reason
to use a spherical mask; I only put it in for fun/experimental purposes.
** For most users, there is NO reason to ever use these environment variables
to modify wsinc5. You should only do this kind of thing if you have a
good and articulable reason! (Or if you really like to screw around.)
** The wsinc5 interpolation function is parallelized using OpenMP, which
makes its usage moderately tolerable.
===========================================================================
Hidden experimental cost functionals:
-------------------------------------
sp *OR* spearman = Spearman [rank] Correlation
je *OR* jointentropy = Joint Entropy [H(b,s)]
lss *OR* signedPcor = Signed Pearson Correlation
Notes for the new [Feb 2010] lpc+ cost functional:
--------------------------------------------------
* The cost functional named 'lpc+' is a combination of several others:
lpc + hel*0.4 + crA*0.4 + nmi*0.2 + mi*0.2 + ov*0.4
++ 'hel', 'crA', 'nmi', and 'mi' are the histogram-based cost
functionals also available as standalone options.
++ 'ov' is a measure of the overlap of the automasks of the base and
source volumes; ov is not available as a standalone option.
* The purpose of lpc+ is to avoid situations where the pure lpc cost
goes wild; this especially happens if '-source_automask' isn't used.
++ Even with lpc+, you should use '-source_automask+2' (say) to be safe.
* You can alter the weighting of the extra functionals by giving the
option in the form (for example)
'-lpc+hel*0.5+nmi*0+mi*0+crA*1.0+ov*0.5'
* The quotes are needed to prevent the shell from wild-card expanding
the '*' character.
--> You can now use ':' in place of '*' to avoid this wildcard problem:
-lpc+hel:0.5+nmi:0+mi:0+crA:1+ov:0.5+ZZ
* Notice the weight factors FOLLOW the name of the extra functionals.
++ If you want a weight to be 0 or 1, you have to provide for that
explicitly -- if you leave a weight off, then it will get its
default value!
++ The order of the weight factor names is unimportant here:
'-lpc+hel*0.5+nmi*0.8' == '-lpc+nmi*0.8+hel*0.5'
* Only the 5 functionals listed (hel,crA,nmi,mi,ov) can be used in '-lpc+'.
* In addition, if you want the initial alignments to be with '-lpc+' and
then finish the Final alignment with pure '-lpc', you can indicate this
by putting 'ZZ' somewhere in the option string, as in '-lpc+ZZ'.
***** '-cost lpc+ZZ' is very useful for aligning EPI to T1w volumes *****
* [28 Nov 2018]
All of the above now applies to the 'lpa+' cost functional,
which can be used as a robust method for like-to-like alignment.
For example, aligning 3T and 7T T1-weighted datasets from the same person.
* [28 Sep 2021]
However, the default multiplier constants for cost 'lpa+' are now
different from the 'lpc+' multipliers -- to make 'lpa+' more
robust. The new default for 'lpa+' is
lpa + hel*0.4 + crA*0.4 + nmi*0.2 + mi*0.0 + ov*0.4
***** '-cost lpa+ZZ' is very useful for T1w to T1w volumes (or any *****
***** similar-contrast datasets). *****
*** Note that in trial runs, we have found that lpc+ZZ and lpa+ZZ are ***
*** more robust than lpc+ and lpa+ -- which is why the '+ZZ' amendment ***
*** was created. ***
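 * Illustrative sketch [not from the original examples; the dataset and file
   names are hypothetical] of an EPI-to-T1w alignment using '-lpc+ZZ' with a
   source automask, as recommended above:
       3dAllineate -base anat_T1w+orig -source epi_vreg+orig      \
                   -cost lpc+ZZ -source_automask+2                \
                   -1Dparam_save epi2anat_params -prefix epi2anat
   In practice, the align_epi_anat.py script (see further below) wraps this
   kind of command and also sets up a proper weight volume for you.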
Cost functional descriptions (for use with -allcost output):
------------------------------------------------------------
ls :: 1 - abs(Pearson correlation coefficient)
sp :: 1 - abs(Spearman correlation coefficient)
mi :: - Mutual Information = H(base,source)-H(base)-H(source)
crM :: 1 - abs[ CR(base,source) * CR(source,base) ]
nmi :: 1/Normalized MI = H(base,source)/[H(base)+H(source)]
je :: H(base,source) = joint entropy of image pair
hel :: - Hellinger distance(base,source)
crA :: 1 - abs[ CR(base,source) + CR(source,base) ]
crU :: CR(source,base) = Var(source|base) / Var(source)
lss :: Pearson correlation coefficient between image pair
lpc :: nonlinear average of Pearson cc over local neighborhoods
lpa :: 1 - abs(lpc)
lpc+:: lpc + hel + mi + nmi + crA + overlap
lpa+:: lpa + hel + nmi + crA + overlap
* N.B.: Some cost functional values (as printed out above)
are negated from their theoretical descriptions (e.g., 'hel')
so that the best image alignment will be found when the cost
is minimized. See the descriptions above and the references
below for more details for each functional.
* MY OPINIONS:
* Some of these cost functionals were implemented only for
the purposes of fun and/or comparison and/or experimentation
and/or special circumstances. These are
sp je lss crM crA crU hel mi nmi
* For many purposes, lpc+ZZ and lpa+ZZ are the most robust
cost functionals, but usually the slowest to evaluate.
* HOWEVER, just because some method is best MOST of the
time does not mean it is best ALL of the time.
Please check your results visually, or at some point
in time you will have bad results and not know it!
* For speed and for 'like-to-like' alignment, '-cost ls'
can work well.
* For more information about the 'lpc' functional, see
ZS Saad, DR Glen, G Chen, MS Beauchamp, R Desai, RW Cox.
A new method for improving functional-to-structural
MRI alignment using local Pearson correlation.
NeuroImage 44: 839-848, 2009.
http://dx.doi.org/10.1016/j.neuroimage.2008.09.037
https://pubmed.ncbi.nlm.nih.gov/18976717
The '-blok' option can be used to control the regions
(size and shape) used to compute the local correlations.
*** Using the 'lpc' functional wisely requires the use of
a proper weight volume. We HIGHLY recommend you use
the align_epi_anat.py script if you want to use this
cost functional! Otherwise, you are likely to get
less than optimal results (and then swear at us unjustly).
* For more information about the 'cr' functionals, see
http://en.wikipedia.org/wiki/Correlation_ratio
Note that CR(x,y) is not the same as CR(y,x), which
is why there are symmetrized versions of it available.
* For more information about the 'mi', 'nmi', and 'je'
cost functionals, see
http://en.wikipedia.org/wiki/Mutual_information
http://en.wikipedia.org/wiki/Joint_entropy
http://www.cs.jhu.edu/~cis/cista/746/papers/mutual_info_survey.pdf
* For more information about the 'hel' functional, see
http://en.wikipedia.org/wiki/Hellinger_distance
* Some cost functionals (e.g., 'mi', 'cr', 'hel') are
computed by creating a 2D joint histogram of the
base and source image pair. Various options above
(e.g., '-histbin', etc.) can be used to control the
number of bins used in the histogram on each axis.
(If you care to control the program in such detail!)
* Minimization of the chosen cost functional is done via
the NEWUOA software, described in detail in
MJD Powell. 'The NEWUOA software for unconstrained
optimization without derivatives.' In: GD Pillo,
M Roma (Eds), Large-Scale Nonlinear Optimization.
Springer, 2006.
http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2004_08.pdf
===========================================================================
SUMMARY of the Default Allineation Process
------------------------------------------
As mentioned earlier, each of these steps was added to deal with a problem
that came up over the years. The resulting process is reasonably robust :),
but then tends to be slow :(. If you use the '-verb' or '-VERB' option, you
will get a lot of fun fun fun progress messages that show the results from
this sequence of steps.
Below, I refer to different scales of effort in the optimizations at each
step. Easier/faster optimization is done using: matching with fewer points
from the datasets; more smoothing of the base and source datasets; and by
putting a smaller upper limit on the number of trials the optimizer is
allowed to take. The Coarse phase starts with the easiest optimization,
and increases the difficulty a little at each refinement. The Fine phase
starts with the most difficult optimization setup: the most points for
matching, little or no smoothing, and a large limit on the number of
optimizer trials.
0. Preliminary Setup [Goal: create the basis for the following steps]
a. Create the automask and/or autoweight from the '-base' dataset.
The cost functional will only be computed from voxels inside the
automask, and only a fraction of those voxels will actually be used
for evaluating the cost functional (unless '-nmatch 100%' is used).
b. If the automask is 'too close' to the outside of the base 3D volume,
zeropad the base dataset to avoid edge effects.
c. Determine the 3D (x,y,z) shifts for the '-cmass' center-of-mass
crude alignment, if ordered by the user.
d. Set ranges of transformation parameters and which parameters are to
be frozen at fixed values.
1. Coarse Phase [Goal: explore the vastness of 6-12D parameter space]
a. The first step uses only the first 6 parameters (shifts + rotations),
and evaluates thousands of potential starting points -- selected from
a 6D grid in parameter space and also from random points in 6D
parameter space. This step is fairly slow. The best 45 parameter
sets (in the sense of the cost functional) are kept for the next step.
b. Still using only the first 6 parameters, the best 45 sets of parameters
undergo a little optimization. The best 6 parameter sets after this
refinement are kept for the next step. (The number of sets chosen
to go on to the next step can be set by the '-twobest' option.)
The optimizations in this step use the blurring radius that is
given by option '-twoblur', which defaults to 7.77 mm, and use
relatively few points in each dataset for computing the cost functional.
c. These 6 best parameter sets undergo further, more costly, optimization,
now using all 12 parameters. This optimization runs in 3 passes, each
more costly (less smoothing, more matching points) than the previous.
(If 2 sets get too close in parameter space, 1 of them will be cast out
-- this does not happen often.) Output parameter sets from the 3rd pass
of successive refinement are inputs to the fine refinement phase.
2. Fine Phase [Goal: use more expensive optimization on good starting points]
a. The 6 outputs from step 1c have the null parameter set (all 0, except
for the '-cmass' shifts) appended. Then a small amount of optimization
is applied to each of these 7 parameter sets ('-num_rtb'). The null
parameter set is added here to insure against the possibility that the
coarse optimizations 'ran away' to some unpleasant locations in the 12D
parameter space. These optimizations use the full set of points specified
by '-nmatch', and the smoothing specified by '-fineblur' (default = 0),
but the number of functional evaluations is small, to make this step fast.
b. The best (smallest cost) set from step 2a is chosen for the final
optimization, which is run until the '-conv' limit is reached.
These are the 'Finalish' parameters (shown using '-verb').
c. The set of parameters from step 2b is used as the starting point
for a new optimization, in an attempt to avoid a false minimum.
The results of this optimization are the final parameter set.
3. The final set of parameters is used to produce the output volume,
using the '-final' interpolation method.
In practice, the output from the Coarse phase successive refinements is
usually so good that the Fine phase runs quickly and makes only small
adjustments. The quality resulting from the Coarse phase steps is mostly
due, in my opinion, to the large number of initial trials (1ab), followed
by the successive refinements of several parameter sets (1c) to help usher
'good' candidates to the starting line for the Fine phase.
For some 'easy' registration problems -- such as T1w-to-T1w alignment, high
quality images, a lot of overlap to start with -- the process can be sped
up by reducing the number of steps. For example, '-num_rtb 0 -twobest 0'
would eliminate step 2a and speed up step 1c. Even more extreme, '-onepass'
could be used to skip all of the Coarse phase. But be careful out there!
For 'hard' registration problems, cleverness is usually needed. Choice
of cost functional matters. Preprocessing the datasets may be necessary.
Using '-twobest 29' could help by providing more candidates for the
Fine phase -- at the cost of CPU time. If you run into trouble -- which
happens sooner or later -- try the AFNI Message Board -- and please
give details, including the exact command line(s) you used.
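As a concrete sketch of the 'easy' case described above -- like-to-like T1w
alignment with the Coarse phase skipped -- the command might look like this
(dataset names are placeholders, not from the original examples):
    3dAllineate -base T1w_scan1+orig -source T1w_scan2+orig \
                -cost lpa+ZZ -onepass                        \
                -final wsinc5 -prefix T1w_scan2_aligned
Note that '-onepass' trusts the initial overlap to be reasonable; if the
starting alignment is poor, keep the Coarse phase and consider a larger
'-twobest' value instead.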
=========================================================================
* This binary version of 3dAllineate is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
* OpenMP may or may not speed up the program significantly. Limited
tests show that it provides some benefit, particularly when using
the more complicated interpolation methods (e.g., '-cubic' and/or
'-final wsinc5'), for up to 3-4 CPU threads.
* But the speedup is definitely not linear in the number of threads, alas.
Probably because my parallelization efforts were pretty limited.
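* As a usage sketch (the thread count of 4 is just an illustrative choice),
  set the variable in the shell before launching the program:
      # tcsh
      setenv OMP_NUM_THREADS 4
      3dAllineate ... other options ...
      # bash
      export OMP_NUM_THREADS=4
      3dAllineate ... other options ...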
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAmpToRSFC
This program is for converting spectral amplitudes into standard RSFC
parameters. This function is made to work directly with the outputs of
3dLombScargle, but you could use other inputs that have similar
formatting. (3dLombScargle's main algorithm is special because it
calculates spectra from time series with nonconstant sampling, such as if
some time points have been censored during processing -- check it out!)
At present, 6 RSFC parameters get returned in separate volumes:
ALFF, mALFF, fALFF, RSFA, mRSFA and fRSFA.
For more information about each RSFC parameter, see, e.g.:
ALFF/mALFF -- Zang et al. (2007),
fALFF -- Zou et al. (2008),
RSFA -- Kannurpatti & Biswal (2008).
You can also see the help of 3dRSFC, as well as the Appendix of
Taylor, Gohel, Di, Walter and Biswal (2012) for a mathematical
description and set of relations.
NB: *if* you want to input an unbandpassed time series and do some
filtering/other processing at the same time as estimating RSFC parameters,
then you would want to use 3dRSFC, instead.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMAND:
3dAmpToRSFC { -in_amp AMPS | -in_pow POWS } -prefix PREFIX \
-band FBOT FTOP { -mask MASK } { -nifti }
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ RUNNING:
-in_amp AMPS :input file of one-sided spectral amplitudes, such as
output by 3dLombScargle. It is also assumed that the
frequencies are uniformly spaced with a single DF
('delta f'), and that the zeroth brick is at 1*DF (i.e.,
that the zeroth/baseline frequency is not present in
the spectrum).
-in_pow POWS :input file of a one-sided power spectrum, such as
output by 3dLombScargle. Similar freq assumptions
as in '-in_amp ...'.
-band FBOT FTOP :lower and upper boundaries, respectively, of the low
frequency fluctuations (LFFs), which will be in the
inclusive interval [FBOT, FTOP], within the provided
input file's frequency range.
-prefix PREFIX :output file prefix; file names will be: PREFIX_ALFF*,
PREFIX_FALFF*, etc.
-mask MASK :volume mask of voxels to include for calculations; if
no mask is included, values are calculated for voxels
whose values are not identically zero across time.
-nifti :output files as *.nii.gz (default is BRIK/HEAD).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ OUTPUT:
Currently, 6 volumes of common RSFC parameters, briefly:
PREFIX_ALFF+orig :amplitude of low freq fluctuations
(L1 sum).
PREFIX_MALFF+orig :ALFF divided by the mean value within
the input/estimated whole brain mask
(a.k.a. 'mean-scaled ALFF').
PREFIX_FALFF+orig :ALFF divided by sum of full amplitude
spectrum (-> 'fractional ALFF').
PREFIX_RSFA+orig :square-root of summed square of low freq
fluctuations (L2 sum).
PREFIX_MRSFA+orig :RSFA divided by the mean value within
the input/estimated whole brain mask
(a.k.a. 'mean-scaled RSFA').
PREFIX_FRSFA+orig :RSFA divided by sum of full amplitude
spectrum (a.k.a. 'fractional RSFA').
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3dAmpToRSFC \
-in_amp SUBJ_01_amp.nii.gz \
-prefix SUBJ_01 \
-mask mask_WB.nii.gz \
-band 0.01 0.1 \
-nifti
___________________________________________________________________________
AFNI program: 3dAnhist
Usage: 3dAnhist [options] dataset
Input dataset is a T1-weighted high-res of the brain (shorts only).
Output is a list of peaks in the histogram, to stdout, in the form
( datasetname #peaks peak1 peak2 ... )
In the C-shell, for example, you could do
set anhist = `3dAnhist -q -w1 dset+orig`
Then the number of peaks found is in the shell variable $anhist[2].
Options:
-q = be quiet (don't print progress reports)
-h = dump histogram data to Anhist.1D and plot to Anhist.ps
-F = DON'T fit histogram with stupid curves.
-w = apply a Winsorizing filter prior to histogram scan
(or -w7 to Winsorize 7 times, etc.)
-2 = Analyze top 2 peaks only, for overlap etc.
-label xxx = Use 'xxx' for a label on the Anhist.ps plot file
instead of the input dataset filename.
-fname fff = Use 'fff' for the filename instead of 'Anhist'.
If the '-2' option is used, AND if 2 peaks are detected, AND if
the -h option is also given, then stdout will be of the form
( datasetname 2 peak1 peak2 thresh CER CJV count1 count2 count1/count2)
where 2 = number of peaks
thresh = threshold between peak1 and peak2 for decision-making
CER = classification error rate of thresh
CJV = coefficient of joint variation
count1 = area under fitted PDF for peak1
count2 = area under fitted PDF for peak2
count1/count2 = ratio of the above quantities
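As an illustrative sketch (assuming the same field indexing convention as
the basic C-shell example above, and a hypothetical dataset name), the
extended output could be captured and parsed like this:
    set anhist = `3dAnhist -q -2 -h -w1 dset+orig`
    echo "threshold between peaks = $anhist[5]"
    echo "CER = $anhist[6]   CJV = $anhist[7]"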
NOTA BENE
---------
* If the input is a T1-weighted MRI dataset (the usual case), then
peak 1 should be the gray matter (GM) peak and peak 2 the white
matter (WM) peak.
* For the definitions of CER and CJV, see the paper
Method for Bias Field Correction of Brain T1-Weighted Magnetic
Resonance Images Minimizing Segmentation Error
JD Gispert, S Reig, J Pascau, JJ Vaquero, P Garcia-Barreno,
and M Desco, Human Brain Mapping 22:133-144 (2004).
* Roughly speaking, CER is the ratio of the overlapping area of the
2 peak fitted PDFs to the total area of the fitted PDFs. CJV is
(sigma_GM+sigma_WM)/(mean_WM-mean_GM), and is a different, ad hoc,
measurement of how much the two PDFs overlap.
* The fitted PDFs are NOT Gaussians. They are of the form
f(x) = b((x-p)/w,a), where p=location of peak, w=width, 'a' is
a skewness parameter between -1 and 1; the basic distribution
is defined by b(x)=(1-x^2)^2*(1+a*x*abs(x)) for -1 < x < 1.
-- RWCox - November 2004
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3danisosmooth
Usage: 3danisosmooth [options] dataset
Smooths a dataset using an anisotropic smoothing technique.
The output dataset is preferentially smoothed to preserve edges.
Options :
-prefix pname = Use 'pname' for output dataset prefix name.
-iters nnn = compute nnn iterations (default=10)
-2D = smooth a slice at a time (default)
-3D = smooth through slices. Cannot be combined with the -2D option
-mask dset = use dset as mask to include/exclude voxels
-automask = automatically compute mask for dataset
Cannot be combined with -mask
-viewer = show central axial slice image every iteration.
Starts aiv program internally.
-nosmooth = do not do intermediate smoothing of gradients
-sigma1 n.nnn = assign Gaussian smoothing sigma before
gradient computation for calculation of structure tensor.
Default = 0.5
-sigma2 n.nnn = assign Gaussian smoothing sigma after
gradient matrix computation for calculation of structure tensor.
Default = 1.0
-deltat n.nnn = assign pseudotime step. Default = 0.25
-savetempdata = save temporary datasets each iteration.
Dataset prefixes are Gradient, Eigens, phi, Dtensor.
Ematrix, Flux and Gmatrix are also stored for the first sub-brick.
Where appropriate, the filename is suffixed by .ITER where
ITER is the iteration number. Existing datasets will get overwritten.
-save_temp_with_diff_measures: Like -savetempdata, but with
a dataset named Diff_measures.ITER containing FA, MD, Cl, Cp,
and Cs values.
-phiding = use Ding method for computing phi (default)
-phiexp = use exponential method for computing phi
-noneg = set negative voxels to 0
-setneg NEGVAL = set negative voxels to NEGVAL
-edgefraction n.nnn = adjust the fraction of the anisotropic
component to be added to the original image. Can vary between
0 and 1. Default =0.5
-datum type = Coerce the output data to be stored as the given type
which may be byte, short or float. [default=float]
-matchorig = match datum type and clip min and max to match input data
-help = print this help screen
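Example (a minimal sketch; the dataset and prefix names are placeholders):
    3danisosmooth -prefix aniso_smoo -iters 10 -3D -automask \
                  -edgefraction 0.5 dset+orig
This smooths through slices within the automatically computed mask, using
the default Ding method for phi and an even mix of the anisotropic
component and the original image.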
References:
Z Ding, JC Gore, AW Anderson, Reduction of Noise in Diffusion
Tensor Images Using Anisotropic Smoothing, Mag. Res. Med.,
53:485-490, 2005
J Weickert, H Scharr, A Scheme for Coherence-Enhancing
Diffusion Filtering with Optimized Rotation Invariance,
CVGPR Group Technical Report at the Department of Mathematics
and Computer Science, University of Mannheim, Germany, TR 4/2000.
J. Weickert, H. Scharr. A scheme for coherence-enhancing diffusion
filtering with optimized rotation invariance. J. Visual
Communication and Image Representation, Special Issue On
Partial Differential Equations In Image Processing, Comp. Vision
Computer Graphics, pages 103-118, 2002.
Gerig, G., Kubler, O., Kikinis, R., Jolesz, F., Nonlinear
anisotropic filtering of MRI data, IEEE Trans. Med. Imaging 11
(2), 221-232, 1992.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dANOVA
++ 3dANOVA: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
This program performs single factor Analysis of Variance (ANOVA)
on 3D datasets
---------------------------------------------------------------
Usage:
-----
3dANOVA
-levels r : r = number of factor levels
-dset 1 filename : data set for factor level 1
 . . .                . . .
-dset 1 filename       data set for factor level 1
 . . .                . . .
-dset r filename       data set for factor level r
 . . .                . . .
-dset r filename       data set for factor level r
[-voxel num] : screen output for voxel # num
[-diskspace] : print out disk space required for
program execution
[-mask mset] : use sub-brick #0 of dataset 'mset'
to define which voxels to process
[-debug level] : request extra output
The following commands generate individual AFNI 2-sub-brick datasets:
(In each case, output is written to the file with the specified
prefix file name.)
[-ftr prefix] : F-statistic for treatment effect
[-mean i prefix] : estimate of factor level i mean
[-diff i j prefix] : difference between factor levels
[-contr c1...cr prefix] : contrast in factor levels
Modified ANOVA computation options: (December, 2005)
** For details, see https://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html
[-old_method] request to perform ANOVA using the previous
functionality (requires -OK, also)
[-OK] confirm you understand that contrasts that
do not sum to zero have inflated t-stats, and
contrasts that do sum to zero assume sphericity
(to be used with -old_method)
[-assume_sph] assume sphericity (zero-sum contrasts, only)
This allows use of the old_method for
computing contrasts which sum to zero (this
includes diffs, for instance). Any contrast
that does not sum to zero is invalid, and
cannot be used with this option (such as
ameans).
The following command generates one AFNI 'bucket' type dataset:
[-bucket prefix] : create one AFNI 'bucket' dataset whose
sub-bricks are obtained by
concatenating the above output files;
the output 'bucket' is written to file
with prefix file name
N.B.: For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used,
e.g., -dset 2 'fred+orig[3]'
Example of 3dANOVA:
------------------
Example is based on a study with one factor (independent variable)
called 'Pictures', with 3 levels:
(1) Faces, (2) Houses, and (3) Donuts
The ANOVA is being conducted on the data of subjects Fred and Ethel:
3dANOVA -levels 3 \
-dset 1 fred_Faces+tlrc \
-dset 1 ethel_Faces+tlrc \
\
-dset 2 fred_Houses+tlrc \
-dset 2 ethel_Houses+tlrc \
\
-dset 3 fred_Donuts+tlrc \
-dset 3 ethel_Donuts+tlrc \
\
-ftr Pictures \
-mean 1 Faces \
-mean 2 Houses \
-mean 3 Donuts \
-diff 1 2 FvsH \
-diff 2 3 HvsD \
-diff 1 3 FvsD \
-contr 1 1 -1 FHvsD \
-contr -1 1 1 FvsHD \
-contr 1 -1 1 FDvsH \
-bucket fred_n_ethel_ANOVA
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
---------------------------------------------------
Also see HowTo#5 - Group Analysis on the AFNI website:
https://afni.nimh.nih.gov/pub/dist/HOWTO/howto/ht05_group/html/index.shtml
-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers. This truncation might cause significant errors.
If you receive warnings that look like this:
*+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program. For convenience, you can do this
on the command line, as in
3dANOVA -DAFNI_FLOATIZE=YES ... other options ...
Also see the following links:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dANOVA2
++ 3dANOVA: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
This program performs a two-factor Analysis of Variance (ANOVA)
on 3D datasets.
Please also see (and consider using) AFNI's gen_group_command.py program
to construct your 3dANOVA2 command. That program helps simplify the
process of specifying your command.
-----------------------------------------------------------
Usage ~1~
3dANOVA2
-type k : type of ANOVA model to be used:
k=1 fixed effects model (A and B fixed)
k=2 random effects model (A and B random)
k=3 mixed effects model (A fixed, B random)
-alevels a : a = number of levels of factor A
-blevels b : b = number of levels of factor B
-dset 1 1 filename : data set for level 1 of factor A
and level 1 of factor B
. . . . . .
-dset i j filename : data set for level i of factor A
and level j of factor B
. . . . . .
-dset a b filename : data set for level a of factor A
and level b of factor B
[-voxel num] : screen output for voxel # num
[-diskspace] : print out disk space required for
program execution
[-mask mset] : use sub-brick #0 of dataset 'mset'
to define which voxels to process
The following commands generate individual AFNI 2-sub-brick datasets:
(In each case, output is written to the file with the specified
prefix file name.)
[-ftr prefix] : F-statistic for treatment effect
[-fa prefix] : F-statistic for factor A effect
[-fb prefix] : F-statistic for factor B effect
[-fab prefix] : F-statistic for interaction
[-amean i prefix] : estimate mean of factor A level i
[-bmean j prefix] : estimate mean of factor B level j
[-xmean i j prefix] : estimate mean of cell at level i of factor A,
level j of factor B
[-adiff i j prefix] : difference between levels i and j of factor A
[-bdiff i j prefix] : difference between levels i and j of factor B
[-xdiff i j k l prefix] : difference between cell mean at A=i,B=j
and cell mean at A=k,B=l
[-acontr c1 ... ca prefix] : contrast in factor A levels
[-bcontr c1 ... cb prefix] : contrast in factor B levels
[-xcontr c11 ... c1b c21 ... c2b ... ca1 ... cab prefix]
: contrast in cell means
The following command generates one AFNI 'bucket' type dataset:
[-bucket prefix] : create one AFNI 'bucket' dataset whose
sub-bricks are obtained by concatenating
the above output files; the output 'bucket'
is written to file with prefix file name
Modified ANOVA computation options: (December, 2005) ~1~
** These options apply to model type 3, only.
For details, see https://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html
[-old_method] : request to perform ANOVA using the previous
functionality (requires -OK, also)
[-OK] : confirm you understand that contrasts that
do not sum to zero have inflated t-stats, and
contrasts that do sum to zero assume sphericity
(to be used with -old_method)
[-assume_sph] : assume sphericity (zero-sum contrasts, only)
This allows use of the old_method for
computing contrasts which sum to zero (this
includes diffs, for instance). Any contrast
that does not sum to zero is invalid, and
cannot be used with this option (such as
ameans).
----------------------------------------------------------
Examples of 3dANOVA2 ~1~
(And see also AFNI's gen_group_command.py for what is likely a
simpler method for constructing these commands.)
1) This example is based on a study with a 3 x 4 mixed factorial
design:
Factor 1 - DONUTS has 3 levels:
(1) chocolate, (2) glazed, (3) sugar
Factor 2 - SUBJECTS, of which there are 4 in this analysis:
(1) fred, (2) ethel, (3) lucy, (4) ricky
3dANOVA2 \
-type 3 -alevels 3 -blevels 4 \
-dset 1 1 fred_choc+tlrc \
-dset 2 1 fred_glaz+tlrc \
-dset 3 1 fred_sugr+tlrc \
-dset 1 2 ethel_choc+tlrc \
-dset 2 2 ethel_glaz+tlrc \
-dset 3 2 ethel_sugr+tlrc \
-dset 1 3 lucy_choc+tlrc \
-dset 2 3 lucy_glaz+tlrc \
-dset 3 3 lucy_sugr+tlrc \
-dset 1 4 ricky_choc+tlrc \
-dset 2 4 ricky_glaz+tlrc \
-dset 3 4 ricky_sugr+tlrc \
-amean 1 Chocolate \
-amean 2 Glazed \
-amean 3 Sugar \
-adiff 1 2 CvsG \
-adiff 2 3 GvsS \
-adiff 1 3 CvsS \
-acontr 1 1 -2 CGvsS \
-acontr -2 1 1 CvsGS \
-acontr 1 -2 1 CSvsG \
-fa Donuts \
-bucket ANOVA_results
The -bucket option will place all of the 3dANOVA2 results (i.e., main
effect of DONUTS, means for each of the 3 levels of DONUTS, and
contrasts between the 3 levels of DONUTS) into one big dataset with
multiple sub-bricks called ANOVA_results+tlrc.
-----------------------------------------------------------
Notes ~1~
For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used, e.g.:
-dset 2 4 'fred+orig[3]'
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
Also see HowTo #5: Group Analysis on the AFNI website:
https://afni.nimh.nih.gov/pub/dist/HOWTO/howto/ht05_group/html/index.shtml
-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers. This truncation might cause significant errors.
If you receive warnings that look like this:
*+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program. For convenience, you can do this
on the command line, as in
3dANOVA2 -DAFNI_FLOATIZE=YES ... other options ...
Also see the following links:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dANOVA3
This program performs three-factor ANOVA on 3D data sets.
Please also see (and consider using) AFNI's gen_group_command.py program
to construct your 3dANOVA3 command. That program helps simplify the
process of specifying your command.
-----------------------------------------------------------
Usage ~1~
3dANOVA3
-type k type of ANOVA model to be used:
k = 1 A,B,C fixed; AxBxC
k = 2 A,B,C random; AxBxC
k = 3 A fixed; B,C random; AxBxC
k = 4 A,B fixed; C random; AxBxC
k = 5 A,B fixed; C random; AxB,BxC,C(A)
-alevels a a = number of levels of factor A
-blevels b b = number of levels of factor B
-clevels c c = number of levels of factor C
-dset 1 1 1 filename data set for level 1 of factor A
and level 1 of factor B
and level 1 of factor C
. . . . . .
-dset i j k filename data set for level i of factor A
and level j of factor B
and level k of factor C
. . . . . .
-dset a b c filename data set for level a of factor A
and level b of factor B
and level c of factor C
[-voxel num] screen output for voxel # num
[-diskspace] print out disk space required for
program execution
[-mask mset] use sub-brick #0 of dataset 'mset'
to define which voxels to process
The following commands generate individual AFNI 2 sub-brick datasets:
(In each case, output is written to the file with the specified
prefix file name.)
[-fa prefix] F-statistic for factor A effect
[-fb prefix] F-statistic for factor B effect
[-fc prefix] F-statistic for factor C effect
[-fab prefix] F-statistic for A*B interaction
[-fac prefix] F-statistic for A*C interaction
[-fbc prefix] F-statistic for B*C interaction
[-fabc prefix] F-statistic for A*B*C interaction
[-amean i prefix] estimate of factor A level i mean
[-bmean i prefix] estimate of factor B level i mean
[-cmean i prefix] estimate of factor C level i mean
[-xmean i j k prefix] estimate mean of cell at factor A level i,
factor B level j, factor C level k
[-adiff i j prefix] difference between factor A levels i and j
(with factors B and C collapsed)
[-bdiff i j prefix] difference between factor B levels i and j
(with factors A and C collapsed)
[-cdiff i j prefix] difference between factor C levels i and j
(with factors A and B collapsed)
[-xdiff i j k l m n prefix] difference between cell mean at A=i,B=j,
C=k, and cell mean at A=l,B=m,C=n
[-acontr c1...ca prefix] contrast in factor A levels
(with factors B and C collapsed)
[-bcontr c1...cb prefix] contrast in factor B levels
(with factors A and C collapsed)
[-ccontr c1...cc prefix] contrast in factor C levels
(with factors A and B collapsed)
[-aBcontr c1 ... ca : j prefix] 2nd order contrast in A, at fixed
B level j (collapsed across C)
[-Abcontr i : c1 ... cb prefix] 2nd order contrast in B, at fixed
A level i (collapsed across C)
[-aBdiff i_1 i_2 : j prefix] difference between levels i_1 and i_2 of
factor A, with factor B fixed at level j
[-Abdiff i : j_1 j_2 prefix] difference between levels j_1 and j_2 of
factor B, with factor A fixed at level i
[-abmean i j prefix] mean effect at factor A level i and
factor B level j
The following command generates one AFNI 'bucket' type dataset:
[-bucket prefix] create one AFNI 'bucket' dataset whose
sub-bricks are obtained by concatenating
the above output files; the output 'bucket'
is written to file with prefix file name
Modified ANOVA computation options: (December, 2005) ~1~
** These options apply to model types 4 and 5, only.
For details, see: https://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html
https://afni.nimh.nih.gov/afni/doc/manual/ANOVAm.pdf
[-old_method] request to perform ANOVA using the previous
functionality (requires -OK, also)
[-OK] confirm you understand that contrasts that
do not sum to zero have inflated t-stats, and
contrasts that do sum to zero assume sphericity
(to be used with -old_method)
[-assume_sph] assume sphericity (zero-sum contrasts, only)
This allows use of the old_method for
computing contrasts which sum to zero (this
includes diffs, for instance). Any contrast
that does not sum to zero is invalid, and
cannot be used with this option (such as
ameans).
-----------------------------------------------------------------
Examples ~1~
(And see also AFNI's gen_group_command.py for what is likely a
simpler method for constructing these commands.)
1) The "classic" houses/faces/donuts for 4 subjects (2 genders)
(level sets are gender (M/W), image (H/F/D), and subject)
Note: factor C is really subject within gender (since it is
nested). There are 4 subjects in this example, and 2
subjects per gender. So clevels is 2.
3dANOVA3 -type 5 \
-alevels 2 \
-blevels 3 \
-clevels 2 \
-dset 1 1 1 man1_houses+tlrc \
-dset 1 2 1 man1_faces+tlrc \
-dset 1 3 1 man1_donuts+tlrc \
-dset 1 1 2 man2_houses+tlrc \
-dset 1 2 2 man2_faces+tlrc \
-dset 1 3 2 man2_donuts+tlrc \
-dset 2 1 1 woman1_houses+tlrc \
-dset 2 2 1 woman1_faces+tlrc \
-dset 2 3 1 woman1_donuts+tlrc \
-dset 2 1 2 woman2_houses+tlrc \
-dset 2 2 2 woman2_faces+tlrc \
-dset 2 3 2 woman2_donuts+tlrc \
-adiff 1 2 MvsW \
-bdiff 2 3 FvsD \
-bcontr -0.5 1 -0.5 FvsHD \
-aBcontr 1 -1 : 1 MHvsWH \
-aBdiff 1 2 : 1 same_as_MHvsWH \
-Abcontr 2 : 0 1 -1 WFvsWD \
-Abdiff 2 : 2 3 same_as_WFvsWD \
-Abcontr 2 : 1 7 -4.2 goofy_example \
-bucket donut_anova
Notes ~1~
For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used, e.g.:
-dset 2 4 5 'fred+orig[3]'
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers. This truncation might cause significant errors.
If you receive warnings that look like this:
*+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program. For convenience, you can do this
on the command line, as in
3dANOVA3 -DAFNI_FLOATIZE=YES ... other options ...
Also see the following links:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAttribute
Usage ~1~
3dAttribute [options] aname dset
Prints (to stdout) the value of the attribute 'aname' from
the header of dataset 'dset'. If the attribute doesn't exist,
prints nothing and sets the exit status to 1.
See the full list of attributes in README.attributes here:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.attributes.html
Options ~1~
-name = Include attribute name in printout
-all = Print all attributes [don't put aname on command line]
Also implies '-name'. Attributes print in whatever order
they are in the .HEAD file, one per line. You may want
to do '3dAttribute -all elvis+orig | sort' to get them
in alphabetical order.
-center = Center of volume in RAI coordinates.
Note that center is not itself an attribute in the
.HEAD file. It is calculated from other attributes.
Special options for string attributes:
-ssep SSEP Use string SSEP as a separator between strings for
multiple sub-bricks. The default is '~', which is what
is used internally in AFNI's .HEAD file. For tcsh,
I recommend ' ' which makes parsing easy, assuming each
individual string contains no spaces to begin with.
Try -ssep 'NUM'
-sprep SPREP Use string SPREP to replace blank space in string
attributes.
-quote Use single quote around each string.
Examples ~1~
3dAttribute -quote -ssep ' ' BRICK_LABS SomeStatDset+tlrc.HEAD
3dAttribute -quote -ssep 'NUM' -sprep '+' BRICK_LABS SomeStatDset+tlrc.HEAD
3dAttribute BRICK_STATAUX SomeStatDset+tlrc.HEAD'[0]'
# ... which outputs information for just the [0]th brick of a dset.
# If that dset were an F-stat, then the output might look like:
# 0 4 2 2 430
# ... which, in order, translate to:
# 0 --> the index of the brick in question
# 4 --> the brick's statistical code, findable in README.attributes:
# '#define FUNC_FT_TYPE 4 /* fift: F-statistic */'
# to be an F-statistic
# 2 --> the number of parameters for that stat (shown subsequently)
# 2 --> here, the 1st parameter for the F-stat: 'Numerator DOF'
# 430 --> here, the 2nd parameter for the F-stat: 'Denominator DOF'
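As a further sketch (the dataset name is the same placeholder as above), the
output can be captured into a shell variable for scripting, e.g. in tcsh:
    set labs = `3dAttribute -ssep ' ' BRICK_LABS SomeStatDset+tlrc.HEAD`
    echo "label of sub-brick 0 = $labs[1]"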
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAutobox
++ 3dAutobox: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
Usage: 3dAutobox [options] DATASET
Computes size of a box that fits around the volume.
Also can be used to crop the volume to that box.
The default 'info message'-based terminal text is a set of IJK coords.
See below for options to display coordinates in other ways, as well as
to save them in a text file. Please note in particular the difference
between *ijk* and *ijkord* outputs, for scripting.
OPTIONS: ~1~
-prefix PREFIX :Crop the input dataset to the size of the box, and
write an output dataset with PREFIX for the name.
*If -prefix is not used, no new volume is written out,
just the (x,y,z) extents of the voxels to be kept.
-input DATASET :An alternate way to specify the input dataset.
The default method is to pass DATASET as
the last parameter on the command line.
-noclust :Don't do any clustering to find box. Any non-zero
voxel will be preserved in the cropped volume.
The default method uses some clustering to find the
cropping box, and will clip off small isolated blobs.
-extent :Write to standard out the spatial extent of the box
-extent_xyz_quiet :The same numbers as '-extent', but only numbers and
no string content. Ordering is RLAPIS.
-extent_ijk :Write out the 6 auto bbox ijk slice numbers to
screen:
imin imax jmin jmax kmin kmax
Note that resampling would affect the ijk vals (but
not necessarily the xyz ones).
-extent_ijk_to_file FF
:Write out the 6 auto bbox ijk slice numbers to
a simple-formatted text file FF (single row file):
imin imax jmin jmax kmin kmax
(same notes as above apply).
-extent_ijk_midslice :Write out the 3 ijk midslices of the autobox to
the screen:
imid jmid kmid
These are obtained via: (imin + imax)/2, etc.
-extent_ijkord :Write out the 6 auto bbox ijk slice numbers to screen
but in a particular order and format (see 'NOTE on
*ijkord* format', below).
NB: This ordering is useful if you want to use
the output indices in 3dcalc expressions.
-extent_ijkord_to_file FFORRD
:Write out the 6 auto bbox ijk slice numbers to a file
but in a particular order and format (see 'NOTE on
*ijkord* format', below).
NB: This option is quite useful if you want to use
the output indices in 3dcalc expressions.
-extent_xyz_to_file GG
:Write out the 6 auto bbox xyz coords to
a simple-formatted text file GG (single row file):
xmin xmax ymin ymax zmin zmax
(same values as '-extent').
-extent_xyz_midslice :Write out the 3 xyz midslices of the autobox to
the screen:
xmid ymid zmid
These are obtained via: (xmin + xmax)/2, etc.
These follow the same meaning as '-extent'.
-npad NNN :Number of extra voxels to pad on each side of box,
since some troublesome people (that's you, LRF) want
this feature for no apparent reason.
** With this option, it is possible to get a dataset
that is actually bigger than the input.
** You can input a negative value for NNN, which will
crop the dataset even more than the automatic method.
-npad_safety_on :Constrain npad-ded extents to be within dset. So,
each index is bounded to be in range [0, L-1], where L
is matrix length along that dimension.
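Example (a minimal sketch; the dataset and file names are placeholders):
    3dAutobox -input anat+orig -npad 2 -prefix anat_crop \
              -extent_ijkord_to_file anat_crop_ijkord.txt
This writes a cropped copy of the input (padded by 2 voxels on each side)
and saves the ordered ijk extents to a text file for later scripting (see
the NOTE below).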
NOTE on *ijkord* format ~1~
Using any of the '-*ijkord*' options above will output pairs of ijk
indices just like the regular ijk options, **but** they will be ordered
in a way that you can associate each of the i, j, and k indices with
a standard x, y and z coordinate direction. Without this ordering,
resampling a dataset could change what index is associated with which
coordinate axis. That situation can be confusing for scripting (and
by confusing, we mean 'bad').
The output format for any '-*ijkord*' options is a 3x3 table, where
the first column is the index value (i, j or k), and the next two
columns are the min and max interval boundaries for the autobox.
Importantly, the rows are placed in order so that the top corresponds
to the x-axis, the middle to the y-axis and the bottom to the z-axis.
So, if you had the following table output for a dset:
k 10 170
i 35 254
j 21 199
... you would look at the third row for the min/max slice values
along the z-axis, and you would use the index 'j' to refer to it in,
say, a 3dcalc expression.
Note that the above example table output came from a dataset with ASL
orientation. We can see how that fits, recalling that the first,
second and third rows tell us about x, y and z info, respectively; and
that i, j and k refer to the first, second and third characters in the
orientation string. So, the third (z-like) row contains a j, which
points us at the middle character in the orientation, which is S, which
is along the z-axis---all consistent! Similarly, the top (x-like) row
contains a k, which points us at the last char in the orientation,
which is L and that is along the x-axis---phew!
The main point of this would be to extract this information and use it
in a script. If you knew that you wanted the z-slice range to use
in a 3dcalc 'within()' expression, then you could extract the 3rd row
to get the correct index and slice ranges, e.g., in tcsh:
set vvv = `sed -n 3p FILE_ijkord.txt`
... where now ${vvv} will have 3 values, the first of which is the
relevant index letter, then the min and max slice range values.
So an example 3dcalc expression to keep values only within
that slice range:
3dcalc \
-a DSET \
-expr "a*within(${vvv[1]},${vvv[2]},${vvv[3]})" \
-prefix DSET_SUBSET
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAutomask
Usage: 3dAutomask [options] dataset
Input dataset is EPI 3D+time, or a skull-stripped anatomical.
Output dataset is a brain-only mask dataset.
This program by itself does NOT do 'skull-stripping'. Use
program 3dSkullStrip for that purpose!
Method:
+ Uses 3dClipLevel algorithm to find clipping level.
+ Keeps only the largest connected component of the
supra-threshold voxels, after an erosion/dilation step.
+ Writes result as a 'fim' type of functional dataset,
which will be 1 inside the mask and 0 outside the mask.
Options:
--------
-prefix ppp = Write mask into dataset with prefix 'ppp'.
[Default == 'automask']
-apply_prefix ppp = Apply mask to input dataset and save
masked dataset. If an apply_prefix is given
and not the usual prefix, the only output
will be the applied dataset
-clfrac cc = Set the 'clip level fraction' to 'cc', which
must be a number between 0.1 and 0.9.
A small 'cc' means to make the initial threshold
for clipping (a la 3dClipLevel) smaller, which
will tend to make the mask larger. [default=0.5]
-nograd = The program uses a 'gradual' clip level by default.
To use a fixed clip level, use '-nograd'.
[Change to gradual clip level made 24 Oct 2006.]
-peels pp = Peel (erode) the mask 'pp' times,
then unpeel (dilate). Using NN2 neighborhoods,
clips off protuberances less than 2*pp voxels
thick. Turn off by setting to 0. [Default == 1]
-NN1 -NN2 -NN3 = Erode and dilate using different neighbor definitions
NN1=faces, NN2=edges, NN3= corners [Default=NN2]
Applies to erode and dilate options, if present.
Note the default peeling processes still use NN2
unless the peels are set to 0
-nbhrs nn = Define the number of neighbors needed for a voxel
NOT to be eroded. The 18 nearest neighbors in
the 3D lattice are used, so 'nn' should be between
6 and 26. [Default == 17]
-q = Don't write progress messages (i.e., be quiet).
-eclip = After creating the mask, remove exterior
voxels below the clip threshold.
-dilate nd = Dilate the mask outwards 'nd' times.
-erode ne = Erode the mask inwards 'ne' times.
-SI hh = After creating the mask, find the most superior
voxel, then zero out everything more than 'hh'
millimeters inferior to that. hh=130 seems to
be decent (i.e., for Homo Sapiens brains).
-depth DEP = Produce a dataset (DEP) that shows how many peel
operations it takes to get to a voxel in the mask.
The higher the number, the deeper a voxel is located
in the mask. Note this uses the NN1,2,3 neighborhoods
above with a default of 2 for edge-sharing neighbors
None of -peels, -dilate, or -erode affect this option.
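Example (a minimal sketch; the dataset and prefix names are placeholders):
    3dAutomask -prefix epi_mask -clfrac 0.4 -dilate 1 \
               -apply_prefix epi_masked epi_r1+orig
Per the option descriptions above, this should write both the binary mask
and a copy of the input with everything outside the mask set to zero.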
--------------------------------------------------------------------
How to make an edge-of-brain mask from an anatomical volume:
* 3dSkullStrip to create a brain-only dataset; say, Astrip+orig
* 3dAutomask -prefix Amask Astrip+orig
* Create a mask of edge-only voxels via
3dcalc -a Amask+orig -b a+i -c a-i -d a+j -e a-j -f a+k -g a-k \
-expr 'ispositive(a)*amongst(0,b,c,d,e,f,g)' -prefix Aedge
which will be 1 at all voxels in the brain mask that have a
nearest neighbor that is NOT in the brain mask.
* cf. '3dcalc -help' DIFFERENTIAL SUBSCRIPTS for information
on the 'a+i' et cetera inputs used above.
* In regions where the brain mask is 'stair-stepping', then the
voxels buried inside the corner of the steps probably won't
show up in this edge mask:
...00000000...
...aaa00000...
...bbbaa000...
...bbbbbaa0...
Only the 'a' voxels are in this edge mask, and the 'b' voxels
down in the corners won't show up, because they only touch a
0 voxel on a corner, not face-on. Depending on your use for
the edge mask, this effect may or may not be a problem.
--------------------------------------------------------------------
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dAutoTcorrelate
Usage: 3dAutoTcorrelate [options] dset
Computes the correlation coefficient between the time series of each
pair of voxels in the input dataset, and stores the output into a
new anatomical bucket dataset [scaled to shorts to save memory space].
*** Also see program 3dTcorrMap ***
Options:
-pearson = Correlation is the normal Pearson (product moment)
correlation coefficient [default].
-eta2 = Output is eta^2 measure from Cohen et al., NeuroImage, 2008:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2705206/
http://dx.doi.org/10.1016/j.neuroimage.2008.01.066
** '-eta2' is intended to be used to measure the similarity
between 2 correlation maps; therefore, this option is
to be used in a second stage analysis, where the input
dataset is the output of running 3dAutoTcorrelate with
the '-pearson' option -- the voxel 'time series' from
that first stage run is the correlation map of that
voxel with all other voxels.
** '-polort -1' is recommended with this option!
** Odds are you do not want to use this option if the dataset
on which eta^2 is to be computed was generated with
options -mask_only_targets or -mask_source.
In this program, the eta^2 is computed between pseudo-
timeseries (the 4th dimension of the dataset).
If you want to compute eta^2 between sub-bricks then use
3ddot -eta2 instead.
-spearman AND -quadrant are disabled at this time :-(
-polort m = Remove polynomial trend of order 'm', for m=-1..3.
[default is m=1; removal is by least squares].
Using m=-1 means no detrending; this is only useful
for data/information that has been pre-processed.
-autoclip = Clip off low-intensity regions in the dataset,
-automask = so that the correlation is only computed between
high-intensity (presumably brain) voxels. The
mask is determined the same way that 3dAutomask works.
-mask mmm = Mask of both 'source' and 'target' voxels.
** Restricts computations to those in the mask. Output
volumes are restricted to masked voxels. Also, only
masked voxels will have non-zero output.
** A dataset with 1000 voxels would lead to output of
1000 sub-bricks. With a '-mask' of 50 voxels, the
output dataset would have 50 sub-bricks, where the 950
unmasked voxels would be all zero in all 50 sub-bricks
(unless option '-mask_only_targets' is also used).
** The mask is encoded in the output dataset header in the
attribute named 'AFNI_AUTOTCORR_MASK' (cf. 3dMaskToASCII).
-mask_only_targets = Provide output for all voxels.
** Used with '-mask': every voxel is correlated with each
of the mask voxels. In the example above, there would
be 50 output sub-bricks; the n-th output sub-brick
would contain the correlations of the n-th voxel in
the mask with ALL 1000 voxels in the dataset (rather
than with just the 50 voxels in the mask).
-mask_source sss = Provide output for voxels only in mask sss
** For each seed in mask mm, compute correlations only with
non-zero voxels in sss. If you have 250 non-zero voxels
in sss, then the output will still have 50 sub-bricks, but
each n-th sub-brick will have non-zero values at the 250
non-zero voxels in sss
Do not use this option along with -mask_only_targets
-prefix p = Save output into dataset with prefix 'p'
[default prefix is 'ATcorr'].
-out1D FILE.1D = Save output in a text file formatted thusly:
Row 1 contains the 1D indices of non zero voxels in the
mask from option -mask.
Column 1 contains the 1D indices of non zero voxels in the
mask from option -mask_source
The rest of the matrix contains the correlation/eta2
values. Each column k corresponds to sub-brick k in
the output volume p.
To see 1D indices in AFNI, right click on the top left
corner of the AFNI controller - where coordinates are
shown - and choose voxel indices.
A 1D index (ijk) is computed from the 3D (i,j,k) indices:
ijk = i + j*Ni + k*Ni*Nj , with Ni and Nj being the
number of voxels in the slice orientation and given by:
3dinfo -ni -nj YOUR_VOLUME_HERE
This option can only be used in conjunction with
options -mask and -mask_source. Otherwise it makes little
sense to write a potentially enormous text file.
-time = Mark output as a 3D+time dataset instead of an anat bucket.
-mmap = Write .BRIK results to disk directly using Unix mmap().
This trick can speed the program up when the amount
of memory required to hold the output is very large.
** In many cases, the amount of time needed to write
the results to disk is longer than the CPU time.
This option can shorten the disk write time.
** If the program crashes, you'll have to manually
remove the .BRIK file, which will have been created
before the loop over voxels and written into during
that loop, rather than being written all at once
at the end of the analysis, as is usually the case.
** If the amount of memory needed is bigger than the
RAM on your system, this program will be very slow
with or without '-mmap'.
** This option won't work with NIfTI-1 (.nii) output!
Example: correlate every voxel in mask_in+tlrc with only those voxels in
mask_out+tlrc (the rest of each volume is zero, for speed).
Assume detrending was already done along with other pre-processing.
The output will have one volume per masked voxel in mask_in+tlrc.
Volumes will be labeled by the ijk index triples of mask_in+tlrc.
3dAutoTcorrelate -mask_source mask_out+tlrc -mask mask_in+tlrc \
-polort -1 -prefix test_corr clean_epi+tlrc
Notes:
* The output dataset is anatomical bucket type of shorts
(unless '-time' is used).
* Values are scaled so that a correlation (or eta-squared)
of 1 corresponds to a value of 10000.
* The output file might be gigantic and you might run out
of memory running this program. Use at your own risk!
++ If you get an error message like
*** malloc error for dataset sub-brick
this means that the program ran out of memory when making
the output dataset.
++ If this happens, you can try to use the '-mmap' option,
and if you are lucky, the program may actually run.
* The program prints out an estimate of its memory usage
when it starts. It also prints out a progress 'meter'
to keep you pacified.
* This is a quick hack for Peter Bandettini. Now pay up.
* OpenMP-ized for Hang Joon Jo. Where's my baem-sul?
-- RWCox - 31 Jan 2002 and 16 Jul 2010
=========================================================================
* This binary version of 3dAutoTcorrelate is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
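For example, to limit a (hypothetical) tcsh run of this program to 4
threads, you might use something like:
    setenv OMP_NUM_THREADS 4
    3dAutoTcorrelate -mask mask_in+tlrc -prefix test_corr clean_epi+tlrc
(bash/zsh users would use 'export OMP_NUM_THREADS=4' instead; the
dataset names above are just placeholders.)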
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3daxialize
*+ WARNING: This program (3daxialize) is old, not maintained, and probably useless!
Usage: 3daxialize [options] dataset
Purpose: Read in a dataset and write it out as a new dataset
with the data brick oriented as axial slices.
The input dataset must have a .BRIK file.
One application is to create a dataset that can
be used with the AFNI volume rendering plugin.
Options:
-prefix ppp = Use 'ppp' as the prefix for the new dataset.
[default = 'axialize']
-verb = Print out a progress report.
The following options determine the order/orientation
in which the slices will be written to the dataset:
-sagittal = Do sagittal slice order [-orient ASL]
-coronal = Do coronal slice order [-orient RSA]
-axial = Do axial slice order [-orient RAI]
This is the default AFNI axial order, and
is the one currently required by the
volume rendering plugin; this is also
the default orientation output by this
program (hence the program's name).
-orient code = Orientation code for output.
The code must be 3 letters, one each from the
pairs {R,L} {A,P} {I,S}. The first letter gives
the orientation of the x-axis, the second the
orientation of the y-axis, the third the z-axis:
R = Right-to-left L = Left-to-right
A = Anterior-to-posterior P = Posterior-to-anterior
I = Inferior-to-superior S = Superior-to-inferior
If you give an illegal code (e.g., 'LPR'), then
the program will print a message and stop.
N.B.: 'Neurological order' is -orient LPI
-frugal = Write out data as it is rotated, a sub-brick at
a time. This saves a little memory and was the
previous behavior.
Note that the '-frugal' option is not available with NIFTI
datasets.
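For example, a (hypothetical) command to rewrite an anatomical dataset
into the default RAI axial order would look something like:
    3daxialize -prefix anat_axial -orient RAI anat+orig
(here 'anat+orig' stands in for your own dataset name.)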
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dBallMatch
--------------------------------------
Usage #1: 3dBallMatch dataset [radius]
--------------------------------------
-----------------------------------------------------------------------
Usage #2: 3dBallMatch [options]
where the pitifully few options are:
-input dataset = read this dataset
-ball radius = set the radius of the 3D ball to match (mm)
-spheroid a b = match with a spheroid of revolution, with principal
axis radius of 'a' and secondary axes' radii of 'b'
++ this option is considerably slower
-----------------------------------------------------------------------
-------------------
WHAT IT IS GOOD FOR
-------------------
* This program tries to find a good match between a ball (filled sphere)
of the given radius (in mm) and a dataset. The goal is to find a crude
approximate center of the brain quickly.
* The output can be used to re-center a dataset so that its coordinate
origin is inside the brain and/or as a starting point for more refined
3D alignment. Sample scripts are given below.
* The reason for this program is that not all brain images are even
crudely centered by using the center-of-mass ('3dAllineate -cmass')
as a starting point -- if the volume covered by the image includes
a lot of neck or even shoulders, then the center-of-mass may be
far from the brain.
* If you don't give a radius, the default is 72 mm, which is about the
radius of an adult human brain/cranium. A larger value would be needed
for elephant brain images. A smaller value for marmosets.
* For advanced use, you could try a prolate spheroid, using something like
3dBallMatch -input Fred.nii -spheroid 90 70
for a human head image (that was not skull stripped). This option is
several times slower than the 'ball' option, as multiple spheroids have
to be correlated with the input dataset.
* This program does NOT work well with datasets containing large amounts
of negative values or background junk -- such as I've seen with animal
MRI scans and CT scans. Such datasets will likely require some repair
first, such as cropping (cf. 3dZeropad), to make this program useful.
* Frankly, this program may not be that useful for any purpose :(
* The output is text to stdout containing 3 triples of numbers, all on
one line:
i j k xs ys zs xd yd zd
where
i j k = index triple of the central voxel
xs ys zs = values to use in '3drefit -dxorigin' (etc.)
to make (i,j,k) be at coordinates (x,y,z)=(0,0,0)
xd yd zd = DICOM-order (x,y,z) coordinates of (i,j,k) in the
input dataset
* The intention is that this output line be captured and then the
appropriate pieces be used for some higher purpose.
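For example, a minimal (hypothetical) csh sketch that captures the output
and re-centers a dataset so that voxel (i,j,k) lands at (x,y,z)=(0,0,0),
using the 'xs ys zs' values described above, might be:
    set bm = ( `3dBallMatch anat.nii` )
    3drefit -dxorigin $bm[4] -dyorigin $bm[5] -dzorigin $bm[6] anat.nii
(here 'anat.nii' is a placeholder dataset name, and -dyorigin/-dzorigin
are the y- and z-axis analogs of 3drefit's -dxorigin option.)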
--------------------------------------------------------------
SAMPLE SCRIPT - VISUALIZING THE MATCHED LOCATION (csh syntax)
--------------------------------------------------------------
Below is a script to process all the entries in a directory.
#!/bin/tcsh
# optional: start a virtual X11 server
set xdisplay = `count_afni -dig 1 3 999 R1`
echo " -- trying to start Xvfb :${xdisplay}"
Xvfb :${xdisplay} -screen 0 1024x768x24 >& /dev/null &
sleep 1
set display_old = $DISPLAY
setenv DISPLAY :${xdisplay}
# loop over all subjects
foreach sss ( sub-?????_T1w.nii.gz )
# extract subject ID code
set sub = `echo $sss | sed -e 's/sub-//' -e 's/_T1w.nii.gz//'`
# skip if already finished
if ( -f $sub.match ) continue
if ( -f $sub.sag.jpg ) continue
if ( -f $sub.cor.jpg ) continue
# run the program, save output to a file
3dBallMatch $sss > $sub.match
# capture the output for use below
set ijk = ( `cat $sub.match` )
echo $sub $ijk
# run afni to make some QC images
afni -DAFNI_NOSPLASH=YES \
-DAFNI_NOPLUGINS=YES \
-com "OPEN_WINDOW A.sagittalimage" \
-com "OPEN_WINDOW A.coronalimage" \
-com "SET_IJK $ijk[1-3]" \
-com "SAVE_JPEG A.sagittalimage $sub.sag.jpg" \
-com "SAVE_JPEG A.coronalimage $sub.cor.jpg" \
-com "QUITT" \
$sss
# end of loop over subject
end
# kill the virtual X11 server (if it was started above)
sleep 1
killall Xvfb
# make a movie of the sagittal slices
im_to_mov -resize -prefix Bsag -npure 4 -nfade 0 *.sag.jpg
# make a movie of the coronal slices
im_to_mov -resize -prefix Bcor -npure 4 -nfade 0 *.cor.jpg
exit 0
------------------------------------------------------------
SAMPLE SCRIPT - IMPROVING THE MATCHED LOCATION (csh syntax)
------------------------------------------------------------
This script is an extension of the one above, where it uses
3dAllineate to align the human brain image to the MNI template,
guided by the initial point computed by 3dBallMatch. The output
of 3dAllineate is the coordinate of the center of the original
volume, given by the first 3 values stored in the '*Aparam.1D' file.
* Note that the 3dAllineate step presumes that the input
dataset is a T1-weighted volume. A different set of options would
have to be used for an EPI (T2*-weighted) or T2-weighted volume.
* This script worked pretty well for putting the crosshairs at
the 'origin' of the brain -- near the anterior commissure.
Of course, you will need to evaluate its performance yourself.
#!/bin/tcsh
# optional: start Xvfb to avoid the AFNI GUI starting visibly
set xdisplay = `count_afni -dig 1 3 999 R1`
echo " -- trying to start Xvfb :${xdisplay}"
Xvfb :${xdisplay} -screen 0 1024x768x24 >& /dev/null &
sleep 1
set display_old = $DISPLAY
setenv DISPLAY :${xdisplay}
# loop over datasets in the current directory
foreach sss ( anat_sub?????.nii.gz )
# extract the subject identifier code (the '?????')
set sub = `echo $sss | sed -e 's/anat_sub//' -e 's/.nii.gz//'`
# if 3dAllineate was already run on this, skip to next dataset
if ( -f $sub.Aparam.1D ) continue
# find the 'center' voxel location with 3dBallMatch
if ( ! -f $sub.match ) then
echo "Running 3dBallMatch $sss"
3dBallMatch $sss | tee $sub.match
endif
# extract results from 3dBallMatch output
# in this case, we want the final triplet of coordinates
set ijk = ( `cat $sub.match` )
# set shift range to be 55 mm about 3dBallMatch coordinates
set xd = $ijk[7] ; set xbot = `ccalc "${xd}-55"` ; set xtop = `ccalc "${xd}+55"`
set yd = $ijk[8] ; set ybot = `ccalc "${yd}-55"` ; set ytop = `ccalc "${yd}+55"`
set zd = $ijk[9] ; set zbot = `ccalc "${zd}-55"` ; set ztop = `ccalc "${zd}+55"`
# Align the brain image volume with 3dAllineate:
# match to 'skull on' part of MNI template = sub-brick [1]
# only save the parameters, not the final aligned dataset
3dAllineate \
-base ~/abin/MNI152_2009_template_SSW.nii.gz'[1]' \
-source $sss \
-parang 1 $xbot $xtop \
-parang 2 $ybot $ytop \
-parang 3 $zbot $ztop \
-prefix NULL -lpa \
-1Dparam_save $sub.Aparam.1D \
-conv 3.666 -fineblur 3 -num_rtb 0 -norefinal -verb
# 1dcat (instead of cat) to strip off the comments at the top of the file
# the first 3 values in 'param' are the (x,y,z) shifts
# Those values could be used in 3drefit to re-center the dataset
set param = ( `1dcat $sub.Aparam.1D` )
# run AFNI to produce the snapshots with crosshairs at
# the 3dBallMatch center and the 3dAllineate center
# - B.*.jpg = 3dBallMatch result in crosshairs
# - A.*.jpg = 3dAllineate result in crosshairs
afni -DAFNI_NOSPLASH=YES \
-DAFNI_NOPLUGINS=YES \
-com "OPEN_WINDOW A.sagittalimage" \
-com "SET_IJK $ijk[1-3]" \
-com "SAVE_JPEG A.sagittalimage B.$sub.sag.jpg" \
-com "SET_DICOM_XYZ $param[1-3]" \
-com "SAVE_JPEG A.sagittalimage A.$sub.sag.jpg" \
-com "QUITT" \
$sss
# End of loop over datasets
end
# stop Xvfb (only needed if it was started above)
sleep 1
killall Xvfb
# make movies from the resulting images
im_to_mov -resize -prefix Bsag -npure 4 -nfade 0 B.[1-9]*.sag.jpg
im_to_mov -resize -prefix Asag -npure 4 -nfade 0 A.[1-9]*.sag.jpg
exit 0
----------------------------
HOW IT WORKS (approximately)
----------------------------
1] Create the automask of the input dataset (as in 3dAutomask).
+ This is a 0/1 binary marking of outside/inside voxels.
+ Then convert it to a -1/+1 mask instead.
2] Create a -1/+1 mask for the ball [-1=outside, +1=inside],
inside a rectangular box.
3] Convolve these 2 masks (using FFTs for speed).
+ Basically, this is moving the ball around, then adding up
the voxel counts where the masks match sign (both positive
means ball and dataset are both 'inside'; both negative
means ball and dataset are both 'outside'), and subtracting
off the voxel counts where the masks differ in sign
(one is 'inside' and one is 'outside' == not matched).
+ That is, the convolution value is the sum of matched voxels
minus the sum of mismatched voxels, at every location of
offset (i,j,k) of the corner of the ball mask.
+ The ball mask is in a cube of side 2*radius, which has volume
8*radius^3. The volume of the ball is 4*pi/3*radius^3, so the
inside of the ball is about 4*pi/(3*8) = 52% of the volume of the cube
-- that is, inside and outside voxels are (roughly) matched, so they
have (approximately) equal weight.
+ Most of the CPU time is in the 3D FFTs required.
4] Find the centroid of the locations where the convolution
is positive (matches win over non-matches) and at least 5%
of the maximum convolution. This centroid gives (i,j,k).
Why the centroid? I found that the peak convolution location
is not very stable, as a lot of locations have results barely less
than the peak value -- it was more stable to average them together.
------------------------
WHY 'ball' NOT 'sphere'?
------------------------
* Because a 'sphere' is a 2D object, the surface of the 3D object 'ball'.
* Because my training was in mathematics, where precise terminology has
been developed and honed for centuries.
* Because I'm yanking your chain. Any other questions? No? Good.
-------
CREDITS
-------
By RWCox, September 2020 (the year it all fell apart).
Delenda est. Never forget.
AFNI program: 3dBandpass
--------------------------------------------------------------------------
** NOTA BENE: For the purpose of preparing resting-state FMRI datasets **
** for analysis (e.g., with 3dGroupInCorr), this program is now mostly **
** superseded by the afni_proc.py script. See the 'afni_proc.py -help' **
** section 'Resting state analysis (modern)' to get our current rs-FMRI **
** pre-processing recommended sequence of steps. -- RW Cox, et alii. **
--------------------------------------------------------------------------
** If you insist on doing your own bandpassing, I now recommend using **
** program 3dTproject instead of this program. 3dTproject also can do **
** censoring and other nuisance regression at the same time -- RW Cox. **
--------------------------------------------------------------------------
Usage: 3dBandpass [options] fbot ftop dataset
* One function of this program is to prepare datasets for input
to 3dSetupGroupInCorr. Other uses are left to your imagination.
* 'dataset' is a 3D+time sequence of volumes
++ This must be a single imaging run -- that is, no discontinuities
in time from 3dTcat-ing multiple datasets together.
* fbot = lowest frequency in the passband, in Hz
++ fbot can be 0 if you want to do a lowpass filter only;
HOWEVER, the mean and Nyquist freq are always removed.
* ftop = highest frequency in the passband (must be > fbot)
++ if ftop > Nyquist freq, then it's a highpass filter only.
* Set fbot=0 and ftop=99999 to do an 'allpass' filter.
++ Except for removal of the 0 and Nyquist frequencies, that is.
* You cannot construct a 'notch' filter with this program!
++ You could use 3dBandpass followed by 3dcalc to get the same effect.
++ If you understand what you are doing, that is.
++ Of course, that is the AFNI way -- if you don't want to
understand what you are doing, use Some other PrograM, and
you can still get Fine StatisticaL maps.
* 3dBandpass will fail if fbot and ftop are too close for comfort.
++ Which means closer than one frequency grid step df,
where df = 1 / (nfft * dt) [of course]
* The actual FFT length used will be printed, and may be larger
than the input time series length for the sake of efficiency.
++ The program will use a power-of-2, possibly multiplied by
a power of 3 and/or 5 (up to and including the 3rd power of
each of these: 3, 9, 27, and 5, 25, 125).
* Note that the results of combining 3dDetrend and 3dBandpass will
depend on the order in which you run these programs. That's why
3dBandpass has the '-ort' and '-dsort' options, so that the
time series filtering can be done properly, in one place.
* The output dataset is stored in float format.
* The order of processing steps is the following (most are optional):
(0) Check time series for initial transients [does not alter data]
(1) Despiking of each time series
(2) Removal of a constant+linear+quadratic trend in each time series
(3) Bandpass of data time series
(4) Bandpass of -ort time series, then detrending of data
with respect to the -ort time series
(5) Bandpass and de-orting of the -dsort dataset,
then detrending of the data with respect to -dsort
(6) Blurring inside the mask [might be slow]
(7) Local PV calculation [WILL be slow!]
(8) L2 normalization [will be fast.]
--------
OPTIONS:
--------
-despike = Despike each time series before other processing.
++ Hopefully, you don't actually need to do this,
which is why it is optional.
-ort f.1D = Also orthogonalize input to columns in f.1D
++ Multiple '-ort' options are allowed.
-dsort fset = Orthogonalize each voxel to the corresponding
voxel time series in dataset 'fset', which must
have the same spatial and temporal grid structure
as the main input dataset.
++ At present, only one '-dsort' option is allowed.
-nodetrend = Skip the quadratic detrending of the input that
occurs before the FFT-based bandpassing.
++ You would only want to do this if the dataset
had been detrended already in some other program.
-dt dd = set time step to 'dd' sec [default=from dataset header]
-nfft N = set the FFT length to 'N' [must be a legal value]
-norm = Make all output time series have L2 norm = 1
++ i.e., sum of squares = 1
-mask mset = Mask dataset
-automask = Create a mask from the input dataset
-blur fff = Blur (inside the mask only) with a filter
width (FWHM) of 'fff' millimeters.
-localPV rrr = Replace each vector by the local Principal Vector
(AKA first singular vector) from a neighborhood
of radius 'rrr' millimeters.
++ Note that the PV time series is L2 normalized.
++ This option is mostly for Bob Cox to have fun with.
-input dataset = Alternative way to specify input dataset.
-band fbot ftop = Alternative way to specify passband frequencies.
-prefix ppp = Set prefix name of output dataset.
-quiet = Turn off the fun and informative messages. (Why?)
-notrans = Don't check for initial positive transients in the data:
*OR* ++ The test is a little slow, so skipping it is OK,
-nosat if you KNOW the data time series are transient-free.
++ Or set AFNI_SKIP_SATCHECK to YES.
++ Initial transients won't be handled well by the
bandpassing algorithm, and in addition may seriously
contaminate any further processing, such as inter-voxel
correlations via InstaCorr.
++ No other tests are made [yet] for non-stationary behavior
in the time series data.
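For example, a (hypothetical) 0.01-0.10 Hz bandpass inside a mask might
look like:
    3dBandpass -mask mask+orig -prefix epi_bp 0.01 0.10 epi_clean+orig
As a reminder about the frequency grid: with dt = 2 s and an FFT length
of 256, df = 1/(256*2) is about 0.002 Hz, so fbot and ftop must differ
by at least that much. (Dataset names above are placeholders.)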
=========================================================================
* This binary version of 3dBandpass is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
* At present, the only part of 3dBandpass that is parallelized is the
'-blur' option, which processes each sub-brick independently.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dBlurInMask
Usage: ~1~
3dBlurInMask [options]
Blurs a dataset spatially inside a mask. That's all. Experimental.
OPTIONS ~1~
-------
-input ddd = This required 'option' specifies the dataset
that will be smoothed and output.
-FWHM f = Add 'f' amount of smoothness to the dataset (in mm).
**N.B.: This is also a required 'option'.
-FWHMdset d = Read in dataset 'd' and add the amount of smoothness
given at each voxel -- spatially variable blurring.
** EXPERIMENTAL EXPERIMENTAL EXPERIMENTAL **
-mask mmm = Mask dataset, if desired. Blurring will
occur only within the mask. Voxels NOT in
the mask will be set to zero in the output.
-Mmask mmm = Multi-mask dataset -- each distinct nonzero
value in dataset 'mmm' will be treated as
a separate mask for blurring purposes.
**N.B.: 'mmm' must be byte- or short-valued!
-automask = Create an automask from the input dataset.
**N.B.: only 1 masking option can be used!
-preserve = Normally, voxels not in the mask will be
set to zero in the output. If you want the
original values in the dataset to be preserved
in the output, use this option.
-prefix ppp = Prefix for output dataset will be 'ppp'.
**N.B.: Output dataset is always in float format.
-quiet = Don't be verbose with the progress reports.
-float = Save dataset as floats, no matter what the
input data type is.
**N.B.: If the input dataset is unscaled shorts, then
the default is to save the output in short
format as well. In EVERY other case, the
program saves the output as floats. Thus,
the ONLY purpose of the '-float' option is to
force an all-shorts input dataset to be saved
as all-floats after blurring.
** NEW IN 2021 **
-FWHMxyz fx fy fz = Add different amounts of smoothness in the 3
spatial directions.
** If one of the 'f' values is 0, no smoothing is done
in that direction.
** Here, the axes names ('x', 'y', 'z') refer to the
order of storage in the dataset, as can be seen
in the output of 3dinfo; for example, from a dataset
that I happen to have lying around:
Data Axes Orientation:
first (x) = Anterior-to-Posterior
second (y) = Superior-to-Inferior
third (z) = Left-to-Right
In this example, 'fx' is the FWHM blurring along the
A-P direction, et cetera.
** In other words, x-y-z does not necessarily refer
to the DICOM order of coordinates (R-L, A-P, I-S)!
NOTES ~1~
-----
* If you don't provide a mask, then all voxels will be included
in the blurring. (But then why are you using this program?)
* Note that voxels inside the mask that are not contiguous with
any other voxels inside the mask will not be modified at all!
* Works iteratively, similarly to 3dBlurToFWHM, but without
the extensive overhead of monitoring the smoothness.
* But this program will be faster than 3dBlurToFWHM, and probably
slower than 3dmerge.
* Since the blurring is done iteratively, rather than all-at-once as
in 3dmerge, the results will be slightly different than 3dmerge's,
even if no mask is used here (3dmerge, of course, doesn't take a mask).
* If the original FWHM of the dataset was 'S' and you input a value
'F' with the '-FWHM' option, then the output dataset's smoothness
will be about sqrt(S*S+F*F). The number of iterations will be
about F*F/(d*d), where d = grid spacing; this means that a large value
of F might take a lot of CPU time!
* The spatial smoothness of a 3D+time dataset can be estimated with a
command similar to the following:
3dFWHMx -detrend -mask mmm+orig -input ddd+orig
* The minimum number of voxels in the mask is 9
* Isolated voxels will be removed from the mask!
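For example, a (hypothetical) command to add about 6 mm of smoothness
inside a brain mask might be:
    3dBlurInMask -input epi+orig -FWHM 6 -mask brainmask+orig -prefix epi_blur6
Per the sqrt(S*S+F*F) note above, if the input already had about 4 mm of
intrinsic smoothness, the output would end up near sqrt(4*4+6*6), or
roughly 7.2 mm. (Dataset names above are placeholders.)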
=========================================================================
* This binary version of 3dBlurInMask is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dBlurToFWHM
Usage: 3dBlurToFWHM [options]
Blurs a 'master' dataset until it reaches a specified FWHM
smoothness (approximately). The same blurring schedule is
applied to the input dataset to produce the output. The goal
is to make the output dataset have the given smoothness, no
matter what smoothness it had on input (however, the program
cannot 'unsmooth' a dataset!). See below for the METHOD used.
OPTIONS
-------
-input ddd = This required 'option' specifies the dataset
that will be smoothed and output.
-blurmaster bbb = This option specifies the dataset whose
smoothness controls the process.
**N.B.: If not given, the input dataset is used.
**N.B.: This should be one continuous run.
Do not input catenated runs!
-prefix ppp = Prefix for output dataset will be 'ppp'.
**N.B.: Output dataset is always in float format.
-mask mmm = Mask dataset, if desired. Blurring will
occur only within the mask. Voxels NOT in
the mask will be set to zero in the output.
-automask = Create an automask from the input dataset.
**N.B.: Not useful if the input dataset has been
detrended or otherwise regressed before input!
-FWHM f = Blur until the 3D FWHM is 'f'.
-FWHMxy f = Blur until the 2D (x,y)-plane FWHM is 'f'.
No blurring is done along the z-axis.
**N.B.: Note that you can't REDUCE the smoothness
of a dataset.
**N.B.: Here, 'x', 'y', and 'z' refer to the
grid/slice order as stored in the dataset,
not DICOM ordered coordinates!
**N.B.: With -FWHMxy, smoothing is done only in the
dataset xy-plane. With -FWHM, smoothing
is done in 3D.
**N.B.: The actual goal is reached when
-FWHM : cbrt(FWHMx*FWHMy*FWHMz) >= f
-FWHMxy: sqrt(FWHMx*FWHMy) >= f
That is, when the area or volume of a
'resolution element' goes past a threshold.
-quiet = Shut up the verbose progress reports.
**N.B.: This should be the first option, to stifle
any verbosity from the option processing code.
FILE RECOMMENDATIONS for -blurmaster:
For FMRI statistical purposes, you DO NOT want the FWHM to reflect
the spatial structure of the underlying anatomy. Rather, you want
the FWHM to reflect the spatial structure of the noise. This means
that the -blurmaster dataset should not have anatomical structure. One
good form of input is the output of '3dDeconvolve -errts', which is
the residuals left over after the GLM fitted signal model is subtracted
out from each voxel's time series. You can also use the output of
'3dREMLfit -Rerrts' or '3dREMLfit -Rwherr' for this purpose.
You CAN give a multi-brick EPI dataset as the -blurmaster dataset; the
dataset will be detrended in time (like the -detrend option in 3dFWHMx)
which will tend to remove the spatial structure. This makes it
practicable to make the input and blurmaster datasets be the same,
without having to create a detrended or residual dataset beforehand.
Considering the accuracy of blurring estimates, this is probably good
enough for government work [that is an insider's joke :-].
N.B.: Do not use catenated runs as blurmasters. There should
be no discontinuities in the time axis of blurmaster, which would
make the simple regression detrending do peculiar things.
ALSO SEE:
* 3dFWHMx, which estimates smoothness globally
* 3dLocalstat -stat FWHM, which estimates smoothness locally
* This paper, which discusses the need for a fixed level of smoothness
when combining FMRI datasets from different scanner platforms:
Friedman L, Glover GH, Krenz D, Magnotta V; The FIRST BIRN.
Reducing inter-scanner variability of activation in a multicenter
fMRI study: role of smoothness equalization.
Neuroimage. 2006 Oct 1;32(4):1656-68.
METHOD:
The blurring is done by a conservative finite difference approximation
to the diffusion equation:
du/dt = d/dx[ D_x(x,y,z) du/dx ] + d/dy[ D_y(x,y,z) du/dy ]
+ d/dz[ D_z(x,y,z) du/dz ]
= div[ D(x,y,z) grad[u(x,y,z)] ]
where diffusion tensor D() is diagonal, Euler time-stepping is used, and
with Neumann (reflecting) boundary conditions at the edges of the mask
(which ensures that voxel data inside and outside the mask don't mix).
* At each pseudo-time step, the FWHM is estimated globally (like '3dFWHMx')
and locally (like '3dLocalstat -stat FWHM'). Voxels where the local FWHM
goes past the goal will not be smoothed any more (D gets set to zero).
* When the global smoothness estimate gets close to the goal, the blurring
rate (pseudo-time step) will be reduced, to avoid over-smoothing.
* When an individual direction's smoothness (e.g., FWHMz) goes past the goal,
all smoothing in that direction stops, but the other directions continue
to be smoothed until the overall resolution element goal is achieved.
* When the global FWHM estimate reaches the goal, the program is done.
It will also stop if progress stalls for some reason, or if the maximum
iteration count is reached (infinite loops being unpopular).
* The output dataset will NOT have exactly the smoothness you ask for, but
it will be close (fondly we do hope). In our Imperial experiments, the
results (measured via 3dFWHMx) are within 10% of the goal (usually better).
* 2D blurring via -FWHMxy may increase the smoothness in the z-direction
reported by 3dFWHMx, even though there is no inter-slice processing.
At this moment, I'm not sure why. It may be an estimation artifact due
to increased correlation in the xy-plane that biases the variance estimates
used to calculate FWHMz.
ADVANCED OPTIONS:
-maxite ccc = Set maximum number of iterations to 'ccc' [Default=variable].
-rate rrr = The value of 'rrr' should be a number between
0.05 and 3.5, inclusive. It is a factor to change
the overall blurring rate (slower for rrr < 1) and thus
require more or less blurring steps. This option should only
be needed to slow down the program if it over-smooths
significantly (e.g., it overshoots the desired FWHM in
Iteration #1 or #2). You can increase the speed by using
rrr > 1, but be careful and examine the output.
-nbhd nnn = As in 3dLocalstat, specifies the neighborhood
used to compute local smoothness.
[Default = 'SPHERE(-4)' in 3D, 'SPHERE(-6)' in 2D]
** N.B.: For the 2D -FWHMxy, a 'SPHERE()' nbhd
is really a circle in the xy-plane.
** N.B.: If you do NOT want to estimate local
smoothness, use '-nbhd NULL'.
-ACF or -acf = Use the 'ACF' method (from 3dFWHMx) to estimate
the global smoothness, rather than the 'classic'
Forman 1995 method. This option will be somewhat
slower. It will also set '-nbhd NULL', since there
is no local ACF estimation method implemented.
-bsave bbb = Save the local smoothness estimates at each iteration
with dataset prefix 'bbb' [for debugging purposes].
-bmall = Use all blurmaster sub-bricks.
[Default: a subset will be chosen, for speed]
-unif = Uniformize the voxel-wise MAD in the blurmaster AND
input datasets prior to blurring. Will be restored
in the output dataset.
-detrend = Detrend blurmaster dataset to order NT/30 before starting.
-nodetrend = Turn off detrending of blurmaster.
** N.B.: '-detrend' is the new default [05 Jun 2007]!
-detin = Also detrend input before blurring it, then retrend
it afterwards. [Off by default]
-temper = Try harder to make the smoothness spatially uniform.
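For example, a (hypothetical) command to blur GLM residuals to a uniform
8 mm smoothness, using the residuals themselves as the blurmaster, might be:
    3dBlurToFWHM -input errts+orig -blurmaster errts+orig \
                 -mask mask+orig -FWHM 8 -prefix errts_blur8
(Dataset names above are placeholders; you could then check the result
with 3dFWHMx.)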
-- Author: The Dreaded Emperor Zhark - Nov 2006
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dBrainSync
Usage: 3dBrainSync [options]
This program 'synchronizes' the -inset2 dataset to match the -inset1
dataset, as much as possible (average voxel-wise correlation), using the
same transformation on each input time series from -inset2:
++ With the -Qprefix option, the transformation is an orthogonal matrix,
computed as described in Joshi's original OHBM 2017 presentations,
and in the corresponding NeuroImage 2018 paper.
-->> Anand Joshi's presentation at OHBM was the genesis of this program.
++ With the -Pprefix option, the transformation is simply a
permutation of the time order of -inset2 (a very special case
of an orthogonal matrix).
++ The algorithms and a little discussion of the different features of
these two techniques are discussed in the METHODS section, infra.
++ At least one of '-Qprefix' or '-Pprefix' must be given, or
this program does not do anything! You can use both methods,
if you want to compare them.
++ 'Harmonize' might be a better name for what this program does,
but calling it 3dBrainHarm would probably not be good marketing
(except for Traumatic Brain Injury researchers?).
One possible application of this program is to correlate resting state
FMRI datasets between subjects, voxel-by-voxel, as is sometimes done
with naturalistic stimuli (e.g., movie viewing).
It would be amusing to see if within-subject resting state FMRI
runs can be BrainSync-ed better than between-subject runs.
--------
OPTIONS:
--------
-inset1 dataset1 = Reference dataset
-inset2 dataset2 = Dataset to be matched to the reference dataset,
as much as possible.
++ These 2 datasets must be on the same spatial grid,
and must have the same number of time points!
++ There must be at least twice as many voxels being
processed as there are time points (see '-mask', below).
++ These are both MANDATORY 'options'.
++ As usual in AFNI, since the computations herein are
voxel-wise, it is possible to input plain text .1D
files as datasets. When doing so, remember that
a ROW in the .1D file is interpreted as a time series
(single voxel's data). If your .1D files are oriented
so that time runs down the COLUMNS, you will have to
transpose the inputs, which can be done on the command
line with the \' operator, or externally using the
1dtranspose program.
-->>++ These input datasets should be pre-processed first
to remove undesirable components (motions, baseline,
spikes, breathing, etc). Otherwise, you will be trying
to match artifacts between the datasets, which is not
likely to be interesting or useful. 3dTproject would be
one way to do this. Even better: afni_proc.py!
++ In particular, the mean of each time series should have
been removed! Otherwise, the calculations are fairly
meaningless.
-Qprefix qqq = Specifies the output dataset to be used for
the orthogonal matrix transformation.
++ This will be the -inset2 dataset transformed
to be as correlated as possible (in time)
with the -inset1 dataset, given the constraint
that the transformation applied to each time
series is an orthogonal matrix.
-Pprefix ppp = Specifies the output dataset to be used for
the permutation transformation.
++ The output dataset is the -inset2 dataset
re-ordered in time, again to make the result
as correlated as possible with the -inset1
dataset.
-normalize = Normalize the output dataset(s) so that each
time series has sum-of-squares = 1.
++ This option is not usually needed in AFNI
(e.g., 3dTcorrelate does not care).
-mask mset = Only operate on nonzero voxels in the mset dataset.
++ Voxels outside the mask will not be used in computing
the transformation, but WILL be transformed for
your application and/or edification later.
++ For FMRI purposes, a gray matter mask would make
sense here, or at least a brain mask.
++ If no masking option is given, then all voxels
will be processed in computing the transformation.
This set will include all non-brain voxels (if any).
++ Any voxel which is all constant in time
(in either input) will be removed from the mask.
++ This mask dataset must be on the same spatial grid
as the other input datasets!
-verb = Print some progress reports and auxiliary information.
++ Use this option twice to get LOTS of progress
reports; mostly useful for debugging, or for fun.
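For example, a (hypothetical) command to synchronize one pre-processed
resting-state run to another inside a gray matter mask, using both
transformation methods, might be:
    3dBrainSync -inset1 subj1.errts+tlrc -inset2 subj2.errts+tlrc \
                -mask graymask+tlrc                               \
                -Qprefix subj2.Qsync -Pprefix subj2.Psync
(All dataset names above are placeholders.)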
------
NOTES:
------
* Is this program useful? Not even The Shadow knows!
(But do NOT call it BS.)
* The output dataset is in floating point format.
* Although the goal of 3dBrainSync is to make the transformed
-inset2 as correlated (voxel-by-voxel) as possible with -inset1,
it does not actually compute or output that correlation dataset.
You can do that computation with program 3dTcorrelate, as in
3dBrainSync -inset1 dataset1 -inset2 dataset2 \
-Qprefix transformed-dataset2
3dTcorrelate -polort -1 -prefix AB.pcor.nii \
dataset1 transformed-dataset2
* Besides the transformed dataset(s), if the '-verb' option is used,
some other (text formatted) files are written out:
{Qprefix}.sval.1D = singular values from the BC' decomposition
{Qprefix}.qmat.1D = Q matrix
{Pprefix}.perm.1D = permutation indexes p(i)
You probably do not have any use for these files; they are mostly
present to diagnose any problems.
--------
METHODS:
--------
* Notation used in the explanations below:
M = Number of time points
N = Number of voxels > M (N = size of mask)
B = MxN matrix of time series from -inset1
C = MxN matrix of time series from -inset2
Both matrices will have each column normalized to
have sum-of-squares = 1 (L2 normalized) --
The program does this operation internally; you do not have
to ensure that the input datasets are so normalized.
Q = Desired orthogonal MxM matrix to transform C such that B-QC
is as small as possible (sum-of-squares = Frobenius norm).
That is, Q transforms dataset C to be as close as possible
to dataset B, given that Q is an orthogonal matrix.
normF(A) = sum_{ij} A_{ij}^2 = trace(AA') = trace(A'A).
NOTE: This norm is different from the matrix L2 norm.
NOTE: A' denotes the transpose of A.
NOTE: trace(A) = sum of diagonal elements of square matrix A.
https://en.wikipedia.org/wiki/Matrix_norm
* The expansion below shows why the matrix BC' is crucial to the analysis:
normF(B-QC) = trace( [B-QC][B'-C'Q'] )
= trace(BB') + trace(QCC'Q') - trace(BC'Q') - trace(QCB')
= trace(BB') + trace(C'C) - 2 trace(BC'Q')
The second term collapses because trace(AA') = trace(A'A), so
trace([QC][QC]') = trace([QC]'[QC]) = trace(C'Q'QC) = trace(C'C)
because Q is orthogonal. So the first 2 terms in the expansion of
normF(B-QC) do not depend on Q at all. Thus, to minimize normF(B-QC),
we have to maximize trace(BC'Q') = trace([B][QC]') = trace([QC][B]').
Since the columns of B and C are the (normalized) time series,
each row represents the image at a particular time. So the (i,j)
element of BC' is the (spatial) dot product of the i-th TR image from
-inset1 with the j-th TR image from -inset2. Furthermore,
trace(BC') = trace(C'B) = sum of dot products (correlations)
of all time series. So maximizing trace(BC'Q') will maximize the
summed correlations of B (time series from -inset1) and QC
(transformed time series from -inset2).
Note again that the sum of correlations (dot products) of all the time
series is equal to the sum of dot products of all the spatial images.
So the algorithm to find the transformation Q is to maximize the sum of
dot products of spatial images from B with Q-transformed spatial images
from C -- since there are fewer time points than voxels, this is more
efficient and elegant than trying to maximize the sum over voxels of dot
products of time series.
If you use the '-verb' option, these summed correlations ('scores')
are printed to stderr during the analysis, for your fun and profit(?).
*******************************************************************************
* Joshi method [-Qprefix]:
(a) compute MxM matrix B C'
(b) compute SVD of B C' = U S V' (U, S, V are MxM matrices)
(c) Q = U V'
[note: if B=C, then U=V, so Q=I, as it should]
(d) transform each time series from -inset2 using Q
This matrix Q is the solution to the restricted least squares
problem (i.e., restricted to have Q be an orthogonal matrix).
NOTE: The sum of the singular values in S is equal to the sum
of the time series dot products (correlations) in B and QC,
when Q is calculated as above.
An article describing this method is available as:
AA Joshi, M Chong, RM Leahy.
Are you thinking what I'm thinking? Synchronization of resting fMRI
time-series across subjects.
NeuroImage v172:740-752 (2018).
https://doi.org/10.1016/j.neuroimage.2018.01.058
https://pubmed.ncbi.nlm.nih.gov/29428580/
https://www.google.com/search?q=joshi+brainsync
*******************************************************************************
* Permutation method [-Pprefix]:
(a) Compute B C' (same as above)
(b) Find a permutation p(i) of the integers {0..M-1} such
that sum_i { (BC')[i,p(i)] } is as large as possible
(i.e., p() is used as a permutation of the COLUMNS of BC').
This permutation is equivalent to post-multiplying BC'
by an orthogonal matrix P representing the permutation;
such a P is full of 0s except for a single 1 in each row
and each column.
(c) Permute the ROWS (time direction) of the time series matrix
from -inset2 using p().
Only an approximate (greedy) algorithm is used to find this
permutation; that is, the BEST permutation is not guaranteed to be found
(just a 'good' permutation -- it is the best thing I could code quickly :).
Algorithm currently implemented (let D=BC' for notational simplicity):
1) Find the largest element D(i,j) in the matrix.
Then the permutation at row i is p(i)=j.
Strike row i and column j out of the matrix D.
2) Repeat, finding the largest element left, say at D(f,g).
Then p(f) = g. Strike row f and column g from the matrix.
Repeat until done.
(Choosing the largest possible element at each step is what makes this
method 'greedy'.) This permutation is not optimal but is pretty good,
and another step is used to improve it:
3) For all pairs (i,j), p(i) and p(j) are swapped and that permutation
is tested to see if the trace gets bigger.
4) This pair-wise swapping is repeated until it does not improve things
any more (typically, it improves the trace about 1-2% -- not much).
The purpose of the pair swapping is to deal with situations where D looks
something like this: [ 1 70 ]
[ 70 99 ]
Step 1 would pick out 99, and Step 2 would pick out 1; that is,
p(2)=2 and then p(1)=1, for a total trace/score of 100. But swapping
1 and 2 would give a total trace/score of 140. In practice, extreme versions
of this situation do not seem common with real FMRI data, probably because
the subject's brain isn't actively conspiring against this algorithm :)
[Something called the 'Hungarian algorithm' can solve for the optimal]
[permutation exactly, but I've not had the inclination to program it.]
This whole permutation optimization procedure is very fast: about 1 second.
In the RS-FMRI data I've tried this on, the average time series correlation
resulting from this optimization is 30-60% of that which comes from
optimizing over ALL orthogonal matrices (Joshi method). If you use '-verb',
the stderr output line that looks like this
+ corr scores: original=-722.5 Q matrix=22366.0 permutation=12918.7 57.8%
shows trace(BC') before any transforms, with the Q matrix transform,
and with the permutation transform. As explained above, trace(BC') is
the summed correlations of the time series (since the columns of B and C
are normalized prior to the optimizations); in this example, the ratio of
the average time series correlation between the permutation method and the
Joshi method is about 58% (in a gray matter mask with 72221 voxels).
* Results from the permutation method MUST be less correlated (on average)
with -inset1 than the Joshi method's results: the permutation can be
thought of as an orthogonal matrix containing only 1s and 0s, and the BEST
possible orthogonal matrix, from Joshi's method, has more general entries.
++ However, the permutation method has an obvious interpretation
(re-ordering time points), while the general method linearly combines
different time points (perhaps far apart); the interpretation of this
combination in terms of synchronizing brain activity is harder to intuit
(at least for me).
++ Another feature of a permutation-only transformation is that it cannot
change the sign of data, unlike a general orthogonal matrix; e.g.,
[ 0 -1 0 ]
[-1 0 0 ]
[ 0 0 1 ], which swaps the first 2 time points AND negates them,
and leaves the 3rd time point unchanged, is a valid orthogonal
matrix. For rs-FMRI datasets, this consideration might not be important,
since rs-FMRI correlations are generally positive, so don't often need
sign-flipping to make them so.
*******************************************************************************
* This program is NOT multi-threaded. Typically, I/O is the biggest part of
the run time (at least, for the cases I've tested). The '-verb' option
will give progress reports with elapsed-time stamps, making it easy to
see which parts of the program take the most time.
* Author: RWCox, servant of the ChronoSynclastic Infundibulum - July 2017
* Thanks go to Anand Joshi for his clear exposition of BrainSync at OHBM 2017,
and his encouragement about the development of this program.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dBRAIN_VOYAGERtoAFNI
Usage: 3dBRAIN_VOYAGERtoAFNI <-input BV_VOLUME.vmr>
[-bs] [-qx] [-tlrc|-acpc|-orig] [<-prefix PREFIX>]
Converts a BrainVoyager vmr dataset to AFNI's BRIK format
The conversion is based on information from BrainVoyager's
website: www.brainvoyager.com.
Sample data and information provided by
Adam Greenberg and Nikolaus Kriegeskorte.
If you get error messages about the number of
voxels and file size, try the options below.
I hope to automate these options once I have
a better description of the BrainVoyager QX format.
Optional Parameters:
-bs: Force byte swapping.
-qx: .vmr file is from BrainVoyager QX
-tlrc: dset in tlrc space
-acpc: dset in acpc-aligned space
-orig: dset in orig space
If unspecified, the program attempts to guess the view from
the name of the input.
[-novolreg]: Ignore any Rotate, Volreg, Tagalign,
or WarpDrive transformations present in
the Surface Volume.
[-noxform]: Same as -novolreg
[-setenv "'ENVname=ENVvalue'"]: Set environment variable ENVname
to be ENVvalue. Quotes are necessary.
Example: suma -setenv "'SUMA_BackgroundColor = 1 0 1'"
See also options -update_env, -environment, etc
in the output of 'suma -help'
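For example, a (hypothetical) conversion of a BrainVoyager QX .vmr file
that is in Talairach space might look like:
    3dBRAIN_VOYAGERtoAFNI -input subject1.vmr -qx -tlrc -prefix subject1_anat
(the file and prefix names here are placeholders.)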
Common Debugging Options:
[-trace]: Turns on In/Out debug and Memory tracing.
For speeding up the tracing log, I recommend
you redirect stdout to a file when using this option.
For example, if you were running suma you would use:
suma -spec lh.spec -sv ... > TraceFile
This option replaces the old -iodbg and -memdbg.
[-TRACE]: Turns on extreme tracing.
[-nomall]: Turn off memory tracing.
[-yesmall]: Turn on memory tracing (default).
NOTE: For programs that output results to stdout
(that is to your shell/screen), the debugging info
might get mixed up with your results.
Global Options (available to all AFNI/SUMA programs)
-h: Mini help; in many cases the same as -help.
-help: The entire help output
-HELP: Extreme help, same as -help in majority of cases.
-h_view: Open help in text editor. AFNI will try to find a GUI editor
-hview : on your machine. You can control which it should use by
setting environment variable AFNI_GUI_EDITOR.
-h_web: Open help in web browser. AFNI will try to find a browser.
-hweb : on your machine. You can control which it should use by
setting environment variable AFNI_GUI_EDITOR.
-h_find WORD: Look for lines in this program's -help output that match
(approximately) WORD.
-h_raw: Help string unedited
-h_spx: Help string in sphinx loveliness, but do not try to autoformat
-h_aspx: Help string in sphinx with autoformatting of options, etc.
-all_opts: Try to identify all options for the program from the
output of its -help option. Some options might be missed
and others misidentified. Use this output for hints only.
Compile Date:
May 6 2025
Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov
AFNI program: 3dBrickStat
Usage: 3dBrickStat [options] dataset
Compute maximum and/or minimum voxel values of an input dataset
The output is a number to the console. The input dataset
may use a sub-brick selection list, as in program 3dcalc.
Note that this program computes ONE number as the output; e.g.,
the mean over all voxels and time points. If you want (say) the
mean over all voxels but for each time point individually, see
program 3dmaskave.
Note: If you don't specify one sub-brick, the parameter you get
----- back is computed from all the sub-bricks in dataset.
Options :
-quick = get the information from the header only (default)
-slow = read the whole dataset to find the min and max values
all other options except min and max imply slow
-min = print the minimum value in dataset
-max = print the maximum value in dataset (default)
-mean = print the mean value in dataset
-sum = print the sum of values in the dataset
-var = print the variance in the dataset
-stdev = print the standard deviation in the dataset
-stdev and -var are mutually exclusive
-count = print the number of voxels included
-volume = print the volume of voxels included in microliters
-positive = include only positive voxel values
-negative = include only negative voxel values
-zero = include only zero voxel values
-non-positive = include only voxel values 0 or negative
-non-negative = include only voxel values 0 or greater
-non-zero = include only voxel values not equal to 0
-absolute = use absolute value of voxel values for all calculations
can be combined with restrictive non-positive, non-negative,
etc. even if not practical. Ignored for percentile and
median computations.
-nan = include only voxel values that are not numbers (e.g., NaN or inf).
This is basically meant for counting bad numbers in a dataset.
-nan forces -slow mode.
-nonan = exclude voxel values that are not numbers
(exclude any NaN or inf values from computations).
-mask dset = use dset as mask to include/exclude voxels
-mrange MIN MAX = Only accept values between MIN and MAX (inclusive)
from the mask. The default is to accept all non-zero
voxels.
-mvalue VAL = Only accept values equal to VAL from the mask.
-automask = automatically compute mask for dataset
Can not be combined with -mask
-percentile p0 ps p1 write the percentile values starting
at p0% and ending at p1% at a step of ps%
Output is of the form p% value p% value ...
Percentile values are output first.
Only one sub-brick is accepted as input with this option.
Write the author if you REALLY need this option
to work with multiple sub-bricks.
-perclist NUM_PERC PERC1 PERC2 ...
Like -percentile, but output the given percentiles, rather
than a list on an evenly spaced grid using 'ps'.
-median a shortcut for -percentile 50 1 50 (or -perclist 1 50)
-perc_quiet = only print percentile results, not input percentile cutoffs
-ver = print author and version info
-help = print this help screen
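For example, a (hypothetical) command to print the mean and standard
deviation of the positive values in sub-brick #0, restricted to a mask,
might be:
    3dBrickStat -slow -mean -stdev -positive -mask mask+orig 'func+orig[0]'
(Dataset names above are placeholders.)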
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dbucket
++ 3dbucket: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
Concatenate sub-bricks from input datasets into one big 'bucket' dataset. ~1~
Usage: 3dbucket options
where the options are: ~1~
-prefix pname = Use 'pname' for the output dataset prefix name.
OR -output pname [default='buck']
-session dir = Use 'dir' for the output dataset session directory.
[default='./'=current working directory]
-glueto fname = Append bricks to the end of the 'fname' dataset.
This command is an alternative to the -prefix
and -session commands.
* Note that fname should include the view, as in
3dbucket -glueto newset+orig oldset+orig'[7]'
-aglueto fname= If fname dset does not exist, create it (like -prefix).
Otherwise append to fname (like -glueto).
This option is useful when appending in a loop.
* As with -glueto, fname should include the view, e.g.
3dbucket -aglueto newset+orig oldset+orig'[7]'
-dry = Execute a 'dry run'; that is, only print out
what would be done. This is useful when
combining sub-bricks from multiple inputs.
-verb = Print out some verbose output as the program
proceeds (-dry implies -verb).
-fbuc = Create a functional bucket.
-abuc = Create an anatomical bucket. If neither of
these options is given, the output type is
determined from the first input type.
Command line arguments after the above are taken as input datasets.
A dataset is specified using one of these forms:
'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
You can also add a sub-brick selection list after the end of the
dataset name. This allows only a subset of the sub-bricks to be
included into the output (by default, all of the input dataset
is copied into the output). A sub-brick selection list looks like
one of the following forms:
fred+orig[5] ==> use only sub-brick #5
fred+orig[5,9,17] ==> use #5, #9, and #17
fred+orig[5..8] or [5-8] ==> use #5, #6, #7, and #8
fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
Sub-brick indexes start at 0. You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
fred+orig[0..$(3)]
Notes: ~1~
N.B.: The sub-bricks are output in the order specified, which may
not be the order in the original datasets. For example, using
fred+orig[0..$(2),1..$(2)]
will cause the sub-bricks in fred+orig to be output into the
new dataset in an interleaved fashion. Using
fred+orig[$..0]
will reverse the order of the sub-bricks in the output.
N.B.: Bucket datasets have multiple sub-bricks, but do NOT have
a time dimension. You can input sub-bricks from a 3D+time dataset
into a bucket dataset. You can use the '3dinfo' program to see
how many sub-bricks a 3D+time or a bucket dataset contains.
N.B.: The '$', '(', ')', '[', and ']' characters are special to
the shell, so you will have to escape them. This is most easily
done by putting the entire dataset plus selection list inside
single quotes, as in 'fred+orig[5..7,9]'.
N.B.: In non-bucket functional datasets (like the 'fico' datasets
output by FIM, or the 'fitt' datasets output by 3dttest), sub-brick
[0] is the 'intensity' and sub-brick [1] is the statistical parameter
used as a threshold. Thus, to create a bucket dataset using the
intensity from dataset A and the threshold from dataset B, and
calling the output dataset C, you would type
3dbucket -prefix C -fbuc 'A+orig[0]' -fbuc 'B+orig[1]'
WARNING: ~1~
Using this program, it is possible to create a dataset that
has different basic datum types for different sub-bricks
(e.g., shorts for brick 0, floats for brick 1).
Do NOT do this! Very few AFNI programs will work correctly
with such datasets!
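For example, a (hypothetical) command to gather the odd-numbered
sub-bricks from two runs into a single functional bucket might be:
    3dbucket -prefix both_runs -fbuc 'stats_r1+orig[1..$(2)]' 'stats_r2+orig[1..$(2)]'
(Dataset names above are placeholders.)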
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dcalc
++ 3dcalc: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: A cast of thousands
Program: 3dcalc
Author: RW Cox et al
3dcalc - AFNI's calculator program ~1~
This program does voxel-by-voxel arithmetic on 3D datasets
(only limited inter-voxel computations are possible).
The program assumes that the voxel-by-voxel computations are being
performed on datasets that occupy the same space and have the same
orientations.
3dcalc has a lot of input options, as its capabilities have grown
over the years. So this 'help' output has gotten kind of long.
For simple voxel-wise averaging of datasets: cf. 3dMean
For averaging along the time axis: cf. 3dTstat
For smoothing in time: cf. 3dTsmooth
For statistics from a region around each voxel: cf. 3dLocalstat
------------------------------------------------------------------------
Usage: ~1~
-----
3dcalc -a dsetA [-b dsetB...] \
-expr EXPRESSION \
[options]
Examples: ~1~
--------
1. Average datasets together, on a voxel-by-voxel basis:
3dcalc -a fred+tlrc -b ethel+tlrc -c lucy+tlrc \
-expr '(a+b+c)/3' -prefix subjects_mean
Averaging datasets can also be done by programs 3dMean and 3dmerge.
Use 3dTstat to average across sub-bricks in a single dataset.
2. Perform arithmetic calculations between the sub-bricks of a single
dataset by noting the sub-brick number on the command line:
3dcalc -a 'func+orig[2]' -b 'func+orig[4]' -expr 'sqrt(a*b)'
3. Create a simple mask that consists only of values in sub-brick #0
that are greater than 3.14159:
3dcalc -a 'func+orig[0]' -expr 'ispositive(a-3.14159)' \
-prefix mask
4. Normalize subjects' time series datasets to percent change values in
preparation for group analysis:
Voxel-by-voxel, the example below divides each intensity value in
the time series (epi_r1+orig) with the voxel's mean value (mean+orig)
to get a percent change value. The 'ispositive' command will ignore
voxels with mean values less than 167 (i.e., they are labeled as
'zero' in the output file 'percent_change+orig') and are most likely
background/noncortical voxels.
3dcalc -a epi_run1+orig -b mean+orig \
-expr '100 * a/b * ispositive(b-167)' -prefix percent_chng
5. Create a compound mask from a statistical dataset, where 3 stimuli
show activation.
NOTE: 'step' and 'ispositive' are identical expressions that can
be used interchangeably:
3dcalc -a 'func+orig[12]' -b 'func+orig[15]' -c 'func+orig[18]' \
-expr 'step(a-4.2)*step(b-2.9)*step(c-3.1)' \
-prefix compound_mask
In this example, all 3 statistical criteria must be met at once for
a voxel to be selected (value of 1) in this mask.
6. Same as example #5, but this time create a mask of 8 different values
showing all combinations of activations (i.e., not only where
everything is active, but also each stimulus individually, and all
combinations). The output mask dataset labels voxel values as such:
0 = none active 1 = A only active 2 = B only active
3 = A and B only 4 = C only active 5 = A and C only
6 = B and C only 7 = all A, B, and C active
3dcalc -a 'func+orig[12]' -b 'func+orig[15]' -c 'func+orig[18]' \
-expr 'step(a-4.2)+2*step(b-2.9)+4*step(c-3.1)' \
-prefix mask_8
In displaying such a binary-encoded mask in AFNI, you would probably
set the color display to have 8 discrete levels (the '#' menu).
7. Create a region-of-interest mask comprised of a 3-dimensional sphere.
Values within the ROI sphere will be labeled as '1' while values
outside the mask will be labeled as '0'. Statistical analyses can
then be done on the voxels within the ROI sphere.
The example below puts a solid ball (sphere) of radius 3=sqrt(9)
about the point with coordinates (x,y,z)=(20,30,70):
3dcalc -a anat+tlrc \
-expr 'step(9-(x-20)*(x-20)-(y-30)*(y-30)-(z-70)*(z-70))' \
-prefix ball
The spatial meaning of (x,y,z) is discussed in the 'COORDINATES'
section of this help listing (far below).
8. Some datasets are 'short' (16 bit) integers with a scalar attached,
which allow them to be smaller than float datasets and to contain
fractional values.
Dataset 'a' is always used as a template for the output dataset. For
the examples below, assume that datasets d1+orig and d2+orig consist
of small integers.
a) When dividing 'a' by 'b', the result should be scaled, so that a
value of 2.4 is not truncated to '2'. To avoid this truncation,
force scaling with the -fscale option:
3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot -fscale
b) If it is preferable that the result is of type 'float', then set
the output data type (datum) to float:
3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot \
-datum float
c) Perhaps an integral division is desired, so that 9/4=2, not 2.25.
Force the results not to be scaled (opposite of example 8a) using
the -nscale option:
3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot -nscale
9. Compare the left and right amygdala between the Talairach atlas,
and the CA_N27_ML atlas. The result will be 1 if TT only, 2 if CA
only, and 3 where they overlap.
3dcalc -a 'TT_Daemon::amygdala' -b 'CA_N27_ML::amygdala' \
-expr 'step(a)+2*step(b)' -prefix compare.maps
(see 'whereami_afni -help' for more information on atlases)
10. Convert a dataset from AFNI short format storage to NIfTI-1 floating
point (perhaps for input to a non-AFNI program that requires this):
3dcalc -a zork+orig -prefix zfloat.nii -datum float -expr 'a'
This operation could also be performed with program 3dAFNItoNIFTI.
11. Compute the edge voxels of a mask dataset. An edge voxel is one
that shares some face with a non-masked voxel. This computation
assumes 'a' is a binary mask (particularly for 'amongst').
3dcalc -a mask+orig -prefix edge \
-b a+i -c a-i -d a+j -e a-j -f a+k -g a-k \
-expr 'a*amongst(0,b,c,d,e,f,g)'
consider similar erode or dilate operations:
erosion: -expr 'a*(1-amongst(0,b,c,d,e,f,g))'
dilation: -expr 'amongst(1,a,b,c,d,e,f,g)'
------------------------------------------------------------------------
ARGUMENTS for 3dcalc (must be included on command line): ~1~
---------
-a dname = Read dataset 'dname' and call the voxel values 'a' in the
expression (-expr) that is input below. Up to 26 dnames
(-a, -b, -c, ... -z) can be included in a single 3dcalc
calculation/expression.
** If some letter name is used in the expression, but
not present in one of the dataset options here, then
that variable is set to 0.
** You can use the subscript '[]' method
to select sub-bricks of datasets, as in
-b dname+orig'[3]'
** If you just want to test some 3dcalc expression,
you can supply a dataset 'name' of the form
jRandomDataset:64,64,16,40
to have the program create and use a dataset
with a 3D 64x64x16 grid, with 40 time points,
filled with random numbers (uniform on [-1,1]).
-expr = Apply the expression - within quotes - to the input
datasets (dnames), one voxel at time, to produce the
output dataset.
** You must use 1 and only 1 '-expr' option!
NOTE: If you want to average or sum up a lot of datasets, programs
3dTstat and/or 3dMean and/or 3dmerge are better suited for these
purposes. A common request is to increase the number of input
datasets beyond 26, but in almost all cases such users simply
want to do simple addition!
NOTE: If you want to include shell variables in the expression (or in
the dataset sub-brick selection), then you should use double
"quotes" and the '$' notation for the shell variables; this
example uses csh notation to set the shell variable 'z':
set z = 3.5
3dcalc -a moose.nii -prefix goose.nii -expr "a*$z"
The shell will not expand variables inside single 'quotes',
and 3dcalc's parser will not understand the '$' character.
NOTE: You can use the ccalc program to play with the expression
evaluator, in order to get a feel for how it works and
what it accepts.
------------------------------------------------------------------------
OPTIONS for 3dcalc: ~1~
-------
-help = Show this help.
-verbose = Makes the program print out various information as it
progresses.
-datum type= Coerce the output data to be stored as the given type,
which may be byte, short, or float.
[default = datum of first input dataset]
-float }
-short } = Alternative options to specify output data format.
-byte }
-fscale = Force scaling of the output to the maximum integer
range. This only has effect if the output datum is byte
or short (either forced or defaulted). This option is
often necessary to eliminate unpleasant truncation
artifacts.
[The default is to scale only if the computed values
seem to need it -- are all <= 1.0 or there is at
least one value beyond the integer upper limit.]
** In earlier versions of 3dcalc, scaling (if used) was
applied to all sub-bricks equally -- a common scale
factor was used. This would cause trouble if the
values in different sub-bricks were in vastly
different scales. In this version, each sub-brick
gets its own scale factor. To override this behavior,
use the '-gscale' option.
-gscale = Same as '-fscale', but also forces each output sub-brick
to get the same scaling factor. This may be desirable
for 3D+time datasets, for example.
** N.B.: -usetemp and -gscale are incompatible!!
-nscale = Don't do any scaling on output to byte or short datasets.
This may be especially useful when operating on mask
datasets whose output values are only 0's and 1's.
** Only use this option if you are sure you
want the output dataset to be integer-valued!
-prefix pname = Use 'pname' for the output dataset prefix name.
[default='calc']
-session dir = Use 'dir' for the output dataset session directory.
[default='./'=current working directory]
You can also include the output directory in the
'pname' parameter to the -prefix option.
-usetemp = With this option, a temporary file will be created to
hold intermediate results. This will make the program
run slower, but can be useful when creating huge
datasets that won't all fit in memory at once.
* The program prints out the name of the temporary
file; if 3dcalc crashes, you might have to delete
this file manually.
** N.B.: -usetemp and -gscale are incompatible!!
-dt tstep = Use 'tstep' as the TR for "manufactured" 3D+time
*OR* datasets.
-TR tstep = If not given, defaults to 1 second.
-taxis N = If only 3D datasets are input (no 3D+time or .1D files),
*OR* then normally only a 3D dataset is calculated. With
-taxis N:tstep: this option, you can force the creation of a time axis
of length 'N', optionally using time step 'tstep'. In
such a case, you will probably want to use the pre-
defined time variables 't' and/or 'k' in your
expression, or each resulting sub-brick will be
identical. For example:
'-taxis 121:0.1' will produce 121 points in time,
spaced with TR 0.1.
N.B.: You can also specify the TR using the -dt option.
N.B.: You can specify 1D input datasets using the
'1D:n@val,n@val' notation to get a similar effect.
For example:
-dt 0.1 -w '1D:121@0'
will have pretty much the same effect as
-taxis 121:0.1
N.B.: For both '-dt' and '-taxis', the 'tstep' value is in
seconds.
-rgbfac A B C = For RGB input datasets, the 3 channels (r,g,b) are
collapsed to one for the purposes of 3dcalc, using the
formula value = A*r + B*g + C*b
The default values are A=0.299 B=0.587 C=0.114, which
gives the grayscale intensity. To pick out the Green
channel only, use '-rgbfac 0 1 0', for example. Note
that each channel in an RGB dataset is a byte in the
range 0..255. Thus, '-rgbfac 0.001173 0.002302 0.000447'
will compute the intensity rescaled to the range 0..1.0
(i.e., 0.001173=0.299/255, etc.)
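A full command might look like this minimal sketch (dataset and
prefix names are hypothetical), which extracts just the Green
channel of an RGB dataset:
  3dcalc -a photo_rgb+orig -rgbfac 0 1 0 -expr 'a' -prefix green_only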
-cx2r METHOD = For complex input datasets, the 2 channels must be
converted to 1 real number for calculation. The
methods available are: REAL IMAG ABS PHASE
* The default method is ABS = sqrt(REAL^2+IMAG^2)
* PHASE = atan2(IMAG,REAL)
* Multiple '-cx2r' options can be given:
when a complex dataset is given on the command line,
the most recent previous method will govern.
This also means that for -cx2r to affect a variable
it must precede it. For example, to compute the
phase of data in 'a' you should use
3dcalc -cx2r PHASE -a dft.lh.TS.niml.dset -expr 'a'
However, the -cx2r option will have no effect in
3dcalc -a dft.lh.TS.niml.dset -cx2r PHASE -expr 'a'
which will produce the default ABS of 'a'
The -cx2r option in the latter example only applies
to variables that will be defined after it.
When in doubt, check your output.
* If a complex dataset is used in a differential
subscript, then the most recent previous -cx2r
method applies to the extraction; for example
-cx2r REAL -a cx+orig -cx2r IMAG -b 'a[0,0,0,0]'
means that variable 'a' refers to the real part
of the input dataset and variable 'b' to the
imaginary part of the input dataset.
* 3dcalc cannot be used to CREATE a complex dataset!
[See program 3dTwotoComplex for that purpose.]
-sort = Sort each output brick separately, before output:
-SORT 'sort' ==> increasing order, 'SORT' ==> decreasing.
[This is useful only under unusual circumstances!]
[Sorting is done in spatial indexes, not in time.]
[Program 3dTsort will sort voxels along time axis]
-isola = After computation, remove isolated non-zero voxels.
This option can be repeated to iterate the process;
each copy of '-isola' will cause the isola removal
process to be repeated one more time.
------------------------------------------------------------------------
DATASET TYPES: ~1~
-------------
The most common AFNI dataset types are 'byte', 'short', and 'float'.
A byte value is an 8-bit unsigned integer (0..255), a short value is a
16-bit signed integer (-32768..32767), and a float value is a 32-bit
real number. A byte value has almost 3 decimals of accuracy, a short
has almost 5, and a float has approximately 7 (from a 23+1 bit
mantissa).
Datasets can also have a scalar attached to each sub-brick. The main
use of this is allowing a short type dataset to take on non-integral
values, while being half the size of a float dataset.
As an example, consider a short dataset with a scalar of 0.001. This
could represent values between -32.768 and +32.767, at a resolution of
0.001. One could represent the difference between 4.916 and 4.917, for
instance, but not 4.9165. Each number has 15 bits of accuracy, plus a
sign bit, which gives 4-5 decimal places of accuracy. If this is not
enough, then it makes sense to use the larger type, float.
------------------------------------------------------------------------
3D+TIME DATASETS: ~1~
----------------
This version of 3dcalc can operate on 3D+time datasets. Each input
dataset will be in one of these conditions:
(A) Is a regular 3D (no time) dataset; or
(B) Is a 3D+time dataset with a sub-brick index specified ('[3]'); or
(C) Is a 3D+time dataset with no sub-brick index specified ('-b').
If there is at least one case (C) dataset, then the output dataset will
also be 3D+time; otherwise it will be a 3D dataset with one sub-brick.
When producing a 3D+time dataset, datasets in case (A) or (B) will be
treated as if the particular brick being used has the same value at each
point in time.
Multi-brick 'bucket' datasets may also be used. Note that if multi-brick
(bucket or 3D+time) datasets are used, the lowest letter dataset will
serve as the template for the output; that is, '-b fred+tlrc' takes
precedence over '-c wilma+tlrc'. (The program 3drefit can be used to
alter the .HEAD parameters of the output dataset, if desired.)
------------------------------------------------------------------------
INPUT DATASET NAMES
-------------------
An input dataset is specified using one of these forms:
'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
You can also add a sub-brick selection list after the end of the
dataset name. This allows only a subset of the sub-bricks to be
read in (by default, all of a dataset's sub-bricks are input).
A sub-brick selection list looks like one of the following forms:
fred+orig[5] ==> use only sub-brick #5
fred+orig[5,9,17] ==> use #5, #9, and #17
fred+orig[5..8] or [5-8] ==> use #5, #6, #7, and #8
fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
Sub-brick indexes start at 0. You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
fred+orig[0..$(3)]
N.B.: The sub-bricks are read in the order specified, which may
not be the order in the original dataset. For example, using
fred+orig[0..$(2),1..$(2)]
will cause the sub-bricks in fred+orig to be input into memory
in an interleaved fashion. Using
fred+orig[$..0]
will reverse the order of the sub-bricks.
N.B.: You may also use the syntax <a..b> after the name of an input
dataset to restrict the range of values read in to the numerical
values in a..b, inclusive. For example,
fred+orig[5..7]<100..200>
creates a 3 sub-brick dataset in which any values from the original
that are less than 100 or greater than 200 are set to zero.
If you use the <> sub-range selection without the [] sub-brick
selection, it is the same as if you had put [0..$] in front of
the sub-range selection.
N.B.: Datasets using sub-brick/sub-range selectors are treated as:
- 3D+time if the dataset is 3D+time and more than 1 brick is chosen
- otherwise, as bucket datasets (-abuc or -fbuc)
(in particular, fico, fitt, etc datasets are converted to fbuc!)
N.B.: The characters '$ ( ) [ ] < >' are special to the shell,
so you will have to escape them. This is most easily done by
putting the entire dataset plus selection list inside forward
single quotes, as in 'fred+orig[5..7,9]', or double quotes "x".
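As a minimal sketch combining these selectors (dataset and prefix names are
hypothetical), the command
  3dcalc -a 'anat+orig[0]<50..150>' -expr 'a' -prefix windowed
copies sub-brick #0 of anat+orig, with every voxel value outside the range
50..150 set to zero.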
CATENATED AND WILDCARD DATASET NAMES
------------------------------------
Datasets may also be catenated or combined in memory, as if one first
ran 3dTcat or 3dbucket.
An input with space-separated elements will be read as a concatenated
dataset, as with 'dset1+tlrc dset2+tlrc dset3+tlrc', or with paths,
'dir/dset1+tlrc dir/dset2+tlrc dir/dset3+tlrc'.
The datasets will be combined (as if by 3dTcat) and then treated as a
single input dataset. Note that the quotes are required to specify
them as a single argument.
Sub-brick selection using '[]' works with space separated dataset
names. If the selector is at the end, it is considered global and
applies to all inputs. Otherwise, it applies to the adjacent input.
For example:
local: 'dset1+tlrc[2,3] dset2+tlrc[7,0,1] dset3+tlrc[5,0,$]'
global: 'dset1+tlrc dset2+tlrc dset3+tlrc[5,6]'
N.B. If AFNI_PATH_SPACES_OK is set to Yes, spaces will be considered part
of the dataset name, and not as separators between datasets.
Similar treatment applies when specifying datasets using a wildcard
pattern, using '*' or '?', as in: 'dset*+tlrc.HEAD'. Any sub-brick
selectors would apply to all matching datasets, as with:
'dset*+tlrc.HEAD[2,5,3]'
N.B.: complete filenames are required when using wildcard matching,
or no files will exist to match, e.g. 'dset*+tlrc' would not work.
N.B.: '[]' are processed as sub-brick or time point selectors. They
are therefore not allowed as wildcard characters in this context.
Space and wildcard catenation can be put together. In such a case,
spaces divide the input into wildcard pieces, which are processed
individually.
Examples (each is processed as a single, combined dataset):
'dset1+tlrc dset2+tlrc dset3+tlrc'
'dset1+tlrc dset2+tlrc dset3+tlrc[2,5,3]'
'dset1+tlrc[3] dset2+tlrc[0,1] dset3+tlrc[3,0,1]'
'dset*+tlrc.HEAD'
'dset*+tlrc.HEAD[2,5,3]'
'dset1*+tlrc.HEAD[0,1] dset2*+tlrc.HEAD[7,8]'
'group.*/subj.*/stats*+tlrc.HEAD[7]'
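As a minimal sketch of using such a catenated input (dataset and prefix names
are hypothetical), the command below treats two runs as one long time series,
equivalent to running 3dTcat first:
  3dcalc -a 'rest_run1+orig rest_run2+orig' -expr 'a' -prefix both_runs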
------------------------------------------------------------------------
1D TIME SERIES: ~1~
--------------
You can also input a '*.1D' time series file in place of a dataset.
In this case, the value at each spatial voxel at time index n will be
the same, and will be the n-th value from the time series file.
At least one true dataset must be input. If all the input datasets
are 3D (single sub-brick) or are single sub-bricks from multi-brick
datasets, then the output will be a 'manufactured' 3D+time dataset.
For example, suppose that 'a3D+orig' is a 3D dataset:
3dcalc -a a3D+orig -b b.1D -expr "a*b"
The output dataset will be 3D+time with the value at (x,y,z,t) being
computed by a3D(x,y,z)*b(t). The TR for this dataset will be set
to 'tstep' seconds -- this could be altered later with program 3drefit.
Another method to set up the correct timing would be to input an
unused 3D+time dataset -- 3dcalc will then copy that dataset's time
information, but simply do not use that dataset's letter in -expr.
If the *.1D file has multiple columns, only the first column will be
used in this program. You can select a column to be the first by
using a sub-vector selection of the form 'b.1D[3]', which will
choose the 4th column (since counting starts at 0).
'{...}' row selectors can also be used - see the output of '1dcat -help'
for more details on these. Note that if multiple timeseries or 3D+time
or 3D bucket datasets are input, they must all have the same number of
points along the 'time' dimension.
N.B.: To perform calculations ONLY on .1D files, use program 1deval.
3dcalc takes .1D files for use in combination with 3D datasets!
N.B.: If you auto-transpose a .1D file on the command line, (by ending
the filename with \'), then 3dcalc will NOT treat it as the
special case described above, but instead will treat it as
a normal dataset, where each row in the transposed input is a
'voxel' time series. This would allow you to do differential
subscripts on 1D time series, which program 1deval does not
implement. For example:
3dcalc -a '1D: 3 4 5 6'\' -b a+l -expr 'sqrt(a+b)' -prefix -
This technique allows expression evaluation on multi-column
.1D files, which 1deval also does not implement. For example:
3dcalc -a '1D: 3 4 5 | 1 2 3'\' -expr 'cbrt(a)' -prefix -
------------------------------------------------------------------------
'1D:' INPUT: ~1~
-----------
You can input a 1D time series 'dataset' directly on the command line,
without an external file. The 'filename' for such input takes the
general format
'1D:n_1@val_1,n_2@val_2,n_3@val_3,...'
where each 'n_i' is an integer and each 'val_i' is a float. For
example
-a '1D:5@0,10@1,5@0,10@1,5@0'
specifies that variable 'a' be assigned a 1D time series of 35 values,
alternating in blocks between the values 0 and 1.
You can combine 3dUndump with 3dcalc to create an all zero 3D+time
dataset from 'thin air', as in the commands
3dUndump -dimen 128 128 32 -prefix AllZero_A -datum float
3dcalc -a AllZero_A+orig -b '1D: 100@0' -expr 0 -prefix AllZero_B
If you replace the '0' expression with 'gran(0,1)', you'd get a
random 3D+time dataset, which might be useful for testing purposes.
------------------------------------------------------------------------
'I:*.1D' and 'J:*.1D' and 'K:*.1D' INPUT: ~1~
----------------------------------------
You can input a 1D time series 'dataset' to be defined as spatially
dependent instead of time dependent using a syntax like:
-c I:fred.1D
This indicates that the n-th value from file fred.1D is to be associated
with the spatial voxel index i=n (respectively j=n and k=n for 'J: and
K: input dataset names). This technique can be useful if you want to
scale each slice by a fixed constant; for example:
-a dset+orig -b K:slicefactor.1D -expr 'a*b'
In this example, the '-b' value only varies in the k-index spatial
direction.
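A more complete sketch (assuming dset+orig has 32 slices and slicefac.1D
contains 32 values, one scale factor per slice; all names are hypothetical):
  3dcalc -a dset+orig -b K:slicefac.1D -expr 'a*b' -prefix slice_scaled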
------------------------------------------------------------------------
COORDINATES and PREDEFINED VALUES: ~1~
---------------------------------
If you don't use '-x', '-y', or '-z' for a dataset, then the voxel
spatial coordinates will be loaded into those variables. For example,
the expression 'a*step(x*x+y*y+z*z-100)' will zero out all the voxels
inside a 10 mm radius of the origin x=y=z=0.
Similarly, the '-t' value, if not otherwise used by a dataset or *.1D
input, will be loaded with the voxel time coordinate, as determined
from the header file created for the OUTPUT. Please note that the units
of this are variable; they might be in milliseconds, seconds, or Hertz.
In addition, slices of the dataset might be offset in time from one
another, and this is allowed for in the computation of 't'. Use program
3dinfo to find out the structure of your datasets, if you are not sure.
If no input datasets are 3D+time, then the effective value of TR is
tstep in the output dataset, with t=0 at the first sub-brick.
Similarly, the '-i', '-j', and '-k' values, if not otherwise used,
will be loaded with the voxel spatial index coordinates. The '-l'
(letter 'ell') value will be loaded with the temporal index coordinate.
The '-n' value, if not otherwise used, will be loaded with the overall
voxel 1D index. For a 3D dataset, n = i + j*NX + k*NX*NY, where
NX, NY, NZ are the array dimensions of the 3D grid. [29 Jul 2010]
Otherwise undefined letters will be set to zero. In the future, new
default values for other letters may be added.
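For example, a minimal sketch (dataset and prefix names hypothetical) that
zeroes out every odd-numbered slice using the 'k' index and the mod() function:
  3dcalc -a dset+orig -expr 'a*(1-mod(k,2))' -prefix even_slices_only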
NOTE WELL: By default, the coordinate order of (x,y,z) is the order in
********* which the data array is stored on disk; this order is output
by 3dinfo. The options below can change this order:
-dicom }= Sets the coordinates to appear in DICOM standard (RAI) order,
-RAI }= (the AFNI standard), so that -x=Right, -y=Anterior , -z=Inferior,
+x=Left , +y=Posterior, +z=Superior.
-SPM }= Sets the coordinates to appear in SPM (LPI) order,
-LPI }= so that -x=Left , -y=Posterior, -z=Inferior,
+x=Right, +y=Anterior , +z=Superior.
The -LPI/-RAI behavior can also be achieved via the AFNI_ORIENT
environment variable (27 Aug, 2014).
------------------------------------------------------------------------
DIFFERENTIAL SUBSCRIPTS [22 Nov 1999]: ~1~
-----------------------
Normal calculations with 3dcalc are strictly on a per-voxel basis:
there is no 'cross-talk' between spatial or temporal locations.
The differential subscript feature allows you to specify variables
that refer to different locations, relative to the base voxel.
For example,
-a fred+orig -b 'a[1,0,0,0]' -c 'a[0,-1,0,0]' -d 'a[0,0,2,0]'
means: symbol 'a' refers to a voxel in dataset fred+orig,
symbol 'b' refers to the following voxel in the x-direction,
symbol 'c' refers to the previous voxel in the y-direction
symbol 'd' refers to the 2nd following voxel in the z-direction
To use this feature, you must define the base dataset (e.g., 'a')
first. Then the differentially subscripted symbols are defined
using the base dataset symbol followed by 4 integer subscripts,
which are the shifts in the x-, y-, z-, and t- (or sub-brick index)
directions. For example,
-a fred+orig -b 'a[0,0,0,1]' -c 'a[0,0,0,-1]' -expr 'median(a,b,c)'
will produce a temporal median smoothing of a 3D+time dataset (this
can be done more efficiently with program 3dTsmooth).
Note that the physical directions of the x-, y-, and z-axes depend
on how the dataset was acquired or constructed. See the output of
program 3dinfo to determine what direction corresponds to what axis.
For convenience, the following abbreviations may be used in place of
some common subscript combinations:
[1,0,0,0] == +i [-1, 0, 0, 0] == -i
[0,1,0,0] == +j [ 0,-1, 0, 0] == -j
[0,0,1,0] == +k [ 0, 0,-1, 0] == -k
[0,0,0,1] == +l [ 0, 0, 0,-1] == -l
The median smoothing example can thus be abbreviated as
-a fred+orig -b a+l -c a-l -expr 'median(a,b,c)'
When a shift calls for a voxel that is outside of the dataset range,
one of three things can happen:
STOP => shifting stops at the edge of the dataset
WRAP => shifting wraps back to the opposite edge of the dataset
ZERO => the voxel value is returned as zero
Which one applies depends on the setting of the shifting mode at the
time the symbol using differential subscripting is defined. The mode
is set by one of the switches '-dsSTOP', '-dsWRAP', or '-dsZERO'. The
default mode is STOP. Suppose that a dataset has range 0..99 in the
x-direction. Then when voxel 101 is called for, the value returned is
STOP => value from voxel 99 [didn't shift past edge of dataset]
WRAP => value from voxel 1 [wrapped back through opposite edge]
ZERO => the number 0.0
You can set the shifting mode more than once - the most recent setting
on the command line applies when a differential subscript symbol is
encountered.
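As a minimal sketch (dataset and prefix names hypothetical), the following
computes a central difference in the x-direction, with out-of-range neighbors
taken as zero:
  3dcalc -dsZERO -a anat+orig -b a+i -c a-i -expr '(b-c)/2' -prefix xdiff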
N.B.: You can also use program 3dLocalstat to process data from a
spatial neighborhood of each voxel; for example, to compute
the maximum over a sphere of radius 9 mm placed around
each voxel:
3dLocalstat -nbhd 'SPHERE(9)' -stat max -prefix Amax9 A+orig
------------------------------------------------------------------------
ISSUES: ~1~
------
* Complex-valued datasets cannot be processed, except via '-cx2r'.
* This program is not very efficient (but is faster than it once was).
* Differential subscripts slow the program down even more.
------------------------------------------------------------------------
------------------------------------------------------------------------
EXPRESSIONS: ~1~
-----------
As noted above, datasets are referred to by single letter variable names.
Arithmetic expressions are allowed, using + - * / ** ^ and parentheses.
C relational, boolean, and conditional expressions are NOT implemented!
* Note that the expression evaluator is designed not to fail; illegal *
* operations like 'sqrt(-1)' are changed to legal ones to avoid crashes.*
Built in functions include:
sin , cos , tan , asin , acos , atan , atan2,
sinh , cosh , tanh , asinh , acosh , atanh , exp ,
log , log10, abs , int , sqrt , max , min ,
J0 , J1 , Y0 , Y1 , erf , erfc , qginv, qg ,
rect , step , astep, bool , and , or , mofn ,
sind , cosd , tand , median, lmode , hmode , mad ,
gran , uran , iran , eran , lran , orstat, mod ,
mean , stdev, sem , Pleg , cbrt , rhddc2, hrfbk4,hrfbk5
minabove, maxbelow, extreme, absextreme , acfwxm
gamp , gampq
where some of the less obvious functions are:
* qg(x) = reversed cdf of a standard normal distribution
* qginv(x) = inverse function to qg
* min, max, atan2 each take 2 arguments ONLY
* J0, J1, Y0, Y1 are Bessel functions (see the holy book: Watson)
* Pleg(m,x) is the m'th Legendre polynomial evaluated at x
* erf, erfc are the error and complementary error functions
* sind, cosd, tand take arguments in degrees (vs. radians)
* median(a,b,c,...) computes the median of its arguments
* mad(a,b,c,...) computes the MAD of its arguments
* mean(a,b,c,...) computes the mean of its arguments
* stdev(a,b,c,...) computes the standard deviation of its arguments
* sem(a,b,c,...) computes standard error of the mean of its arguments,
where sem(n arguments) = stdev(same)/sqrt(n)
* orstat(n,a,b,c,...) computes the n-th order statistic of
{a,b,c,...} - that is, the n-th value in size, starting
at the bottom (e.g., orstat(1,a,b,c) is the minimum)
* minabove(X,a,b,c,...) computes the smallest value amongst {a,b,c,...}
that is LARGER than the first argument X; if all values are smaller
than X, then X will be returned
* maxbelow(X,a,b,c,...) similarly returns the largest value amongst
{a,b,c,...} that is SMALLER than the first argument X.
* extreme(a,b,c,...) finds the largest absolute value amongst
{a,b,c,...} returning one of the original a,b,c,... values.
* absextreme(a,b,c,...) finds the largest absolute value amongst
{a,b,c,...} returning the maximum absolute value of a,b,c,... values.
* lmode(a,b,c,...) and hmode(a,b,c,...) compute the mode
of their arguments - lmode breaks ties by choosing the
smallest value with the maximal count, hmode breaks ties by
choosing the largest value with the maximal count
["a,b,c,..." indicates a variable number of arguments]
* gran(m,s) returns a Gaussian deviate with mean=m, stdev=s
* uran(r) returns a uniform deviate in the range [0,r]
* iran(t) returns a random integer in the range [0..t]
* eran(s) returns an exponentially distributed deviate
with parameter s; mean=s
* lran(t) returns a logistically distributed deviate
with parameter t; mean=0, stdev=t*1.814
* mod(a,b) returns (a modulo b) = a - b*int(a/b)
* hrfbk4(t,L) and hrfbk5(t,L) are the BLOCK4 and BLOCK5 hemodynamic
response functions from 3dDeconvolve (L=stimulus duration in sec,
and t is the time in sec since start of stimulus); for example:
1deval -del 0.1 -num 400 -expr 'hrfbk5(t-2,20)' | 1dplot -stdin -del 0.1
These HRF functions are scaled to return values in the range [0..1]
* ACFWXM(a,b,c,x) returns the Full Width at X Maximum for the mixed
model ACF function
f(r) = a*exp(-r*r/(2*b*b))+(1-a)*exp(-r/c)
for X between 0 and 1 (not inclusive). This is the model function
estimated in program 3dFWHMx.
* gamp(peak,fwhm) returns the parameter p in the formula
g(t) = (t/(p*q))^p * exp(p-t/q)
that gives the peak value of g(t) occurring at t=peak when the
FWHM of g(t) is given by fwhm; gamq(peak,fwhm) gives the q parameter.
These functions are largely used for creating FMRI hemodynamic shapes.
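For example, a minimal sketch (dataset and prefix names hypothetical) that
computes a voxelwise median across three runs:
  3dcalc -a run1+orig -b run2+orig -c run3+orig \
         -expr 'median(a,b,c)' -prefix run_median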
You may use the symbol 'PI' to refer to the constant of that name.
This is the only 2-letter symbol defined; all variables are
referred to by 1 letter symbols. The case of the expression is
ignored (in fact, it is converted to uppercase as the first step
in the parsing algorithm).
The following functions are designed to help implement logical
functions, such as masking of 3D volumes against some criterion:
step(x) = {1 if x>0 , 0 if x<=0},
posval(x) = {x if x>0 , 0 if x<=0},
astep(x,y) = {1 if abs(x) > y , 0 otherwise} = step(abs(x)-y)
within(x,MI,MX) = {1 if MI <= x <= MX , 0 otherwise},
rect(x) = {1 if abs(x)<=0.5, 0 if abs(x)>0.5},
bool(x) = {1 if x != 0.0 , 0 if x == 0.0},
notzero(x) = bool(x),
iszero(x) = 1-bool(x) = { 0 if x != 0.0, 1 if x == 0.0 },
not(x) = same as iszero(x)
equals(x,y) = 1-bool(x-y) = { 1 if x == y , 0 if x != y },
ispositive(x) = { 1 if x > 0; 0 if x <= 0 },
isnegative(x) = { 1 if x < 0; 0 if x >= 0 },
ifelse(x,t,f) = { t if x != 0; f if x == 0 },
and(a,b,...,c) = {1 if all arguments are nonzero, 0 if any are zero}
or(a,b,...,c) = {1 if any arguments are nonzero, 0 if all are zero}
mofn(m,a,...,c) = {1 if at least 'm' arguments are nonzero, else 0 }
argmax(a,b,...) = index of largest argument; = 0 if all args are 0
argnum(a,b,...) = number of nonzero arguments
pairmax(a,b,...)= finds the 'paired' argument that corresponds to the
maximum of the first half of the input arguments;
for example, pairmax(a,b,c,p,q,r) determines which
of {a,b,c} is the max, then returns corresponding
value from {p,q,r}; requires even number of args.
pairmin(a,b,...)= Similar to pairmax, but for minimum; for example,
pairmin(a,b,c,p,q,r) finds the minimum of {a,b,c}
and returns the corresponding value from {p,q,r};
pairmin(3,2,7,5,-1,-2,-3,-4) = -2
(The 'pair' functions are Lukas Pezawas specials!)
amongst(a,b,...)= Return value is 1 if any of the b,c,... values
equals the a value; otherwise, return value is 0.
choose(n,a,b,...)= chooses the n-th value from the a,b,... values.
(e.g., choose(2,a,b,c) is b)
isprime(n) = 1 if n is a positive integer and a prime number
0 if n is a positive integer and not a prime number
-1 if n is not a positive integer
or if n is bigger than 2^31-1
[These last 9 functions take a variable number of arguments.]
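For example, a minimal sketch (dataset and prefix names, and the label value
3, are hypothetical) that extracts a single ROI from an integer-labeled
segmentation dataset:
  3dcalc -a seg+orig -expr 'equals(a,3)' -prefix roi_3_mask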
The following 27 functions are used for statistical conversions,
as in the program 'cdf':
fico_t2p(t,a,b,c), fico_p2t(p,a,b,c), fico_t2z(t,a,b,c),
fitt_t2p(t,a) , fitt_p2t(p,a) , fitt_t2z(t,a) ,
fift_t2p(t,a,b) , fift_p2t(p,a,b) , fift_t2z(t,a,b) ,
fizt_t2p(t) , fizt_p2t(p) , fizt_t2z(t) ,
fict_t2p(t,a) , fict_p2t(p,a) , fict_t2z(t,a) ,
fibt_t2p(t,a,b) , fibt_p2t(p,a,b) , fibt_t2z(t,a,b) ,
fibn_t2p(t,a,b) , fibn_p2t(p,a,b) , fibn_t2z(t,a,b) ,
figt_t2p(t,a,b) , figt_p2t(p,a,b) , figt_t2z(t,a,b) ,
fipt_t2p(t,a) , fipt_p2t(p,a) , fipt_t2z(t,a) .
See the output of 'cdf -help' for documentation on the meanings of
and arguments to these functions. The two functions below use the
NIfTI-1 statistical codes to map between statistical values and
cumulative distribution values:
cdf2stat(val,code,p1,p2,p3) -- val is between 0 and 1
stat2cdf(val,code,p1,p2,p3) -- val is legal for the given distribution
where code is
2 = correlation statistic p1 = DOF
3 = t statistic (central) p1 = DOF
4 = F statistic (central) p1 = num DOF, p2 = den DOF
5 = N(0,1) statistic no parameters (p1=p2=p3=0)
6 = Chi-squared (central) p1 = DOF
7 = Beta variable (central) p1 = a , p2 = b
8 = Binomial variable p1 = #trials, p2 = prob per trial
9 = Gamma distribution p1 = shape, p2 = scale
10 = Poisson distribution p1 = mean
11 = N(mu,variance) normal p1 = mean, p2 = scale
12 = noncentral F statistic p1 = num DOF, p2 = den DOF, p3 = noncen
13 = noncentral chi-squared p1 = DOF, p2 = noncentrality parameter
14 = Logistic distribution p1 = mean, p2 = scale
15 = Laplace distribution p1 = mean, p2 = scale
16 = Uniform distribution p1 = min, p2 = max
17 = noncentral t statistic p1 = DOF, p2 = noncentrality parameter
18 = Weibull distribution p1 = location, p2 = scale, p3 = power
19 = Chi statistic (central) p1 = DOF
20 = inverse Gaussian variable p1 = mu, p2 = lambda
21 = Extreme value type I p1 = location, p2 = scale
22 = 'p-value' no parameters
23 = -ln(p) no parameters
24 = -log10(p) no parameters
When fewer than 3 parameters are needed, the values for later parameters
are still required, but will be ignored. An extreme case is code=5,
where the correct call is (e.g.) cdf2stat(p,5,0,0,0)
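For example, a minimal sketch (dataset and prefix names, sub-brick index, and
the 28 degrees of freedom are all hypothetical) that converts a t-statistic
sub-brick to z-scores:
  3dcalc -a 'stats+tlrc[2]' -expr 'fitt_t2z(a,28)' -prefix zstat
(Check 'cdf -help' for the meaning of each function's extra arguments.)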
Finally, note that the expression evaluator is designed not to crash, or
to return NaN or Infinity. Illegal operations, such as division by 0,
logarithm of negative value, etc., are intercepted and something else
(usually 0) will be returned. To find out what that 'something else'
is in any specific case, you should play with the ccalc program.
** If you modify a statistical sub-brick, you may want to use program
'3drefit' to modify the dataset statistical auxiliary parameters.
** Computations are carried out in double precision before being
truncated to the final output 'datum'.
** Note that the quotes around the expression are needed so the shell
doesn't try to expand * characters, or interpret parentheses.
** Try the 'ccalc' program to see how the expression evaluator works.
The arithmetic parser and evaluator is written in Fortran-77 and
is derived from a program written long ago by RW Cox to facilitate
compiling on an array processor hooked up to a VAX. (It's a mess, but
it works - somewhat slowly - but hey, computers are fast these days.)
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dClipLevel
Usage: 3dClipLevel [options] dataset
Estimates the value at which to clip the anatomical dataset so
that background regions are set to zero.
The program's output is a single number sent to stdout. This
value can be 'captured' to a shell variable using the backward
single quote operator; a trivial csh/tcsh example is
set ccc = `3dClipLevel -mfrac 0.333 Elvis+orig`
3dcalc -a Elvis+orig -expr "step(a-$ccc)" -prefix Presley
Algorithm:
(a) Set some initial clip value using wizardry (AKA 'variance').
(b) Find the median of all positive values >= clip value.
(c) Set the clip value to 0.50 of this median.
(d) Loop back to (b) until the clip value doesn't change.
This method was made up out of nothing, based on histogram gazing.
Options:
--------
-mfrac ff = Use the number ff instead of 0.50 in the algorithm.
-doall = Apply the algorithm to each sub-brick separately.
[Cannot be combined with '-grad'!]
-grad ppp = In addition to using the 'one size fits all routine',
also compute a 'gradual' clip level as a function
of voxel position, and output that to a dataset with
prefix 'ppp'.
[This is the same 'gradual' clip level that is now the
default in 3dAutomask - as of 24 Oct 2006.
You can use this option to see how 3dAutomask clips
the dataset as its first step. The algorithm above is
used in each octant of the dataset, and then these
8 values are interpolated to cover the whole volume.]
Notes:
------
* Use at your own risk! You might want to use the AFNI Histogram
plugin to see if the results are reasonable. This program is
likely to produce bad results on images gathered with local
RF coils, or with pulse sequences with unusual contrasts.
* For brain images, most brain voxels seem to be in the range from
the clip level (mfrac=0.5) to about 3-3.5 times the clip level.
- In T1-weighted images, voxels above that level are usually
blood vessels (e.g., inflow artifact brightens them).
* If the input dataset has more than 1 sub-brick, the data is
analyzed on the median volume -- at each voxel, the median
of all sub-bricks at that voxel is computed, and then this
median volume is used in the histogram algorithm.
* If the input dataset is short- or byte-valued, the output will
be an integer; otherwise, the output is a float value.
* Example -- Scaling a sequence of sub-bricks from a collection of
anatomicals from different sites to have about the
same numerical range (from 0 to 255):
3dTcat -prefix input anat_*+tlrc.HEAD
3dClipLevel -doall input+tlrc > clip.1D
3dcalc -datum byte -nscale -a input+tlrc -b clip.1D \
-expr '255*max(0,min(1,a/(3.2*b)))' -verb -prefix scaled
----------------------------------------------------------------------
* Author: Emperor Zhark -- Sadistic Galactic Domination since 1994!
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dclust
Program: 3dclust
Author: RW Cox et alii
Date: 12 Jul 2017
3dclust - performs simple-minded cluster detection in 3D datasets
*** PLEASE NOTE THAT THE NEWER PROGRAM 3dClusterize ***
*** IS BETTER AND YOU SHOULD USE THAT FROM NOW ON!! ***
This program can be used to find clusters of 'active' voxels and
print out a report about them.
* 'Active' refers to nonzero voxels that survive the threshold
that you (the user) have specified
* Clusters are defined by a connectivity radius parameter 'rmm'
*OR*
Clusters are defined by how close neighboring voxels must
be in the 3D grid:
first nearest neighbors (-NN1)
second nearest neighbors (-NN2)
third nearest neighbors (-NN3)
Note: by default, this program clusters on the absolute values
of the voxels
-----------------------------------------------------------------------
Usage:
3dclust [editing options] [other options] rmm vmul dset ...
*OR*
3dclust [editing options] -NNx dset ...
where '-NNx' is one of '-NN1' or '-NN2' or '-NN3':
-NN1 == 1st nearest-neighbor (faces touching) clustering
-NN2 == 2nd nearest-neighbor (edges touching) clustering
-NN3 == 3rd nearest-neighbor (corners touching) clustering
Optionally, you can put an integer after the '-NNx' option, to
indicate the minimum number of voxels to allow in a cluster;
for example: -NN2 60
-----------------------------------------------------------------------
Examples:
---------
3dclust -1clip 0.3 5 2000 func+orig'[1]'
3dclust -1noneg -1thresh 0.3 5 2000 func+orig'[1]'
3dclust -1noneg -1thresh 0.3 5 2000 func+orig'[1]' func+orig'[3]'
3dclust -noabs -1clip 0.5 -dxyz=1 1 10 func+orig'[1]'
3dclust -noabs -1clip 0.5 5 700 func+orig'[1]'
3dclust -noabs -2clip 0 999 -dxyz=1 1 10 func+orig'[1]'
3dclust -1clip 0.3 5 3000 func+orig'[1]'
3dclust -quiet -1clip 0.3 5 3000 func+orig'[1]'
3dclust -summarize -quiet -1clip 0.3 5 3000 func+orig'[1]'
3dclust -1Dformat -1clip 0.3 5 3000 func+orig'[1]' > out.1D
-----------------------------------------------------------------------
Arguments (must be included on command line):
---------
THE OLD WAY TO SPECIFY THE TYPE OF CLUSTERING
rmm : cluster connection radius (in millimeters).
All nonzero voxels closer than rmm millimeters
(center-to-center distance) to the given voxel are
included in the cluster.
* If rmm = 0, then clusters are defined by nearest-
neighbor connectivity
vmul : minimum cluster volume (micro-liters)
i.e., determines the size of the volume cluster.
* If vmul = 0, then all clusters are kept.
* If vmul < 0, then the absolute vmul is the minimum
number of voxels allowed in a cluster.
If you do not use one of the '-NNx' options, you must give the
numbers for rmm and vmul just before the input dataset name(s)
THE NEW WAY TO SPECIFY TYPE OF CLUSTERING [13 Jul 2017]
-NN1 or -NN2 or -NN3
If you use one of these '-NNx' options, you do NOT give the rmm
and vmul values. Instead, after all the options that start with '-',
you just give the input dataset name(s).
If you want to set a minimum cluster size using '-NNx', put the minimum
voxel count immediately after, as in '-NN3 100'.
FOLLOWED BY ONE (or more) DATASETS
dset : input dataset (more than one allowed, but only the
first sub-brick of the dataset)
The results are sent to standard output (i.e., the screen):
if you want to save them in a file, then use redirection, as in
3dclust -1thresh 0.4 -NN2 Elvis.nii'[1]' > Elvis.clust.txt
-----------------------------------------------------------------------
Options:
-------
Editing options are as in 3dmerge (see 3dmerge -help)
(including -1thresh, -1dindex, -1tindex, -dxyz=1 options)
-NN1 => described earlier;
-NN2 => replaces the use of 'rmm' to specify the
-NN3 => clustering method (vmul is set to 2 voxels)
-noabs => Use the signed voxel intensities (not the absolute
value) for calculation of the mean and Standard
Error of the Mean (SEM)
-summarize => Write out only the total nonzero voxel
count and volume for each dataset
-nosum => Suppress printout of the totals
-verb => Print out a progress report (to stderr)
as the computations proceed
-1Dformat => Write output in 1D format (now default). You can
redirect the output to a .1D file and use the file
as input to whereami_afni for obtaining Atlas-based
information on cluster locations.
See whereami_afni -help for more info.
-no_1Dformat => Do not write output in 1D format.
-quiet => Suppress all non-essential output
-mni => If the input dataset has the +tlrc view, this option
will transform the output xyz-coordinates from TLRC to
MNI space.
N.B.0: Only use this option if the dataset is in Talairach
space, NOT when it is already in MNI space.
N.B.1: The MNI template brain is about 5 mm higher (in S),
10 mm lower (in I), 5 mm longer (in PA), and tilted
about 3 degrees backwards, relative to the Talairach-
Tournoux Atlas brain. For more details, see, e.g.:
https://imaging.mrc-cbu.cam.ac.uk/imaging/MniTalairach
N.B.2: If the input dataset does not have the +tlrc view,
then the only effect is to flip the output coordinates
to the 'LPI' (neuroscience) orientation, as if you
gave the '-orient LPI' option.)
-isovalue => Clusters will be formed only from contiguous (in the
rmm sense) voxels that also have the same value.
N.B.: The normal method is to cluster all contiguous
nonzero voxels together.
-isomerge => Clusters will be formed from each distinct value
in the dataset; spatial contiguity will not be
used (but you still have to supply rmm and vmul
on the command line).
N.B.: 'Clusters' formed this way may well have components
that are widely separated!
-inmask => If 3dClustSim put an internal attribute into the
input dataset that describes a mask, 3dclust will
use this mask to eliminate voxels before clustering,
if you give this option. '-inmask' is how the AFNI
Clusterize GUI works by default.
[If there is no internal mask in the dataset]
[header, then '-inmask' doesn't do anything.]
N.B.: The usual way for 3dClustSim to have put this internal
mask into a functional dataset is via afni_proc.py.
-prefix ppp => Write a new dataset that is a copy of the
input, but with all voxels not in a cluster
set to zero; the new dataset's prefix is 'ppp'
N.B.: Use of the -prefix option only affects the
first input dataset.
-savemask q => Write a new dataset that is an ordered mask, such
that the largest cluster is labeled '1', the next
largest '2' and so forth. Should be the same as
'3dmerge -1clust_order' or Clusterize 'SaveMsk'.
-binary => This turns the output of '-savemask' into a binary
(0 or 1) mask, rather than a cluster-index mask.
**-->> If no clusters are found, the mask is not written!
-----------------------------------------------------------------------
N.B.: 'N.B.' is short for 'Nota Bene', Latin for 'Note Well';
also see http://en.wikipedia.org/wiki/Nota_bene
-----------------------------------------------------------------------
E.g., 3dclust -1clip 0.3 5 3000 func+orig'[1]'
The above command tells 3dclust to find potential cluster volumes for
dataset func+orig, sub-brick #1, where the threshold has been set
to 0.3 (i.e., only voxels with values >0.3 or <-0.3 are considered).
Voxels must be no more than 5 mm apart, and the cluster volume
must be at least 3000 micro-liters in size.
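Similarly, a minimal sketch (dataset and prefix names and threshold values
are hypothetical) using the newer '-NNx' style, saving an ordered cluster
mask and capturing the 1D-format report in a file:
  3dclust -1thresh 3.1 -savemask clust_ord -NN2 40 stats+tlrc'[2]' > clust.1D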
Explanation of 3dclust Output:
-----------------------------
Volume : Volume that makes up the cluster, in microliters (mm^3)
(or the number of voxels, if -dxyz=1 is given)
CM RL : Center of mass (CM) for the cluster in the Right-Left
direction (i.e., the coordinates for the CM)
CM AP : Center of mass for the cluster in the
Anterior-Posterior direction
CM IS : Center of mass for the cluster in the
Inferior-Superior direction
minRL, maxRL : Bounding box for the cluster, min and max
coordinates in the Right-Left direction
minAP, maxAP : Min and max coordinates in the Anterior-Posterior
direction of the volume cluster
minIS, maxIS : Min and max coordinates in the Inferior-Superior
direction of the volume cluster
Mean : Mean value for the volume cluster
SEM : Standard Error of the Mean for the volume cluster
Max Int : Maximum Intensity value for the volume cluster
MI RL : Coordinate of the Maximum Intensity value in the
Right-Left direction of the volume cluster
MI AP : Coordinate of the Maximum Intensity value in the
Anterior-Posterior direction of the volume cluster
MI IS : Coordinate of the Maximum Intensity value in the
Inferior-Superior direction of the volume cluster
-----------------------------------------------------------------------
Nota Bene:
* The program does not work on complex- or rgb-valued datasets!
* Using the -1noneg option is strongly recommended!
* 3D+time datasets are allowed, but only if you use the
-1tindex and -1dindex options.
* Bucket datasets are allowed, but you will almost certainly
want to use the -1tindex and -1dindex options with these.
* SEM values are not realistic for interpolated data sets!
A ROUGH correction is to multiply the SEM of the interpolated
data set by the square root of the number of interpolated
voxels per original voxel.
* If you use -dxyz=1, then rmm should be given in terms of
voxel edges (not mm) and vmul should be given in terms of
voxel counts (not microliters). Thus, to connect to only
3D nearest neighbors and keep clusters of 10 voxels or more,
use something like '3dclust -dxyz=1 1.01 10 dset+orig'.
In the report, 'Volume' will be voxel count, but the rest of
the coordinate dependent information will be in actual xyz
millimeters.
* The default coordinate output order is DICOM. If you prefer
the SPM coordinate order, use the option '-orient LPI' or
set the environment variable AFNI_ORIENT to 'LPI'. For more
information, see file README.environment.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dClustCount
Usage: 3dClustCount [options] dataset1 ...
This program takes as input 1 or more datasets, thresholds them at various
levels, and counts up the number of clusters of various sizes. It is
adapted from 3dClustSim, but only does the cluster counting functions --
where the datasets come from is the user's business. It is intended for
use in a simulation script.
-------
OPTIONS
-------
-prefix sss = Use string 'sss' as the prefix of the filename into which
results will be summed. The actual filename will be
'sss.clustcount.niml'. If this file already exists, then
the results from the current run will be summed into the
existing results, and the file then re-written.
-final = If this option is given, then the results will be output
in a format like that output by 3dClustSim -- as 1D and
NIML formatted files with probabilities of various
cluster sizes.
++ You can use '-final' without any input datasets if
you want to create the final output files from the
saved '.clustcount.niml' output file from earlier runs.
-quiet = Don't print out the progress reports, etc.
++ Put this option first to quiet most informational messages.
--------
EXAMPLE:
-------
The steps here are
(a) Create a set of 250 3dGroupInCorr results from a set of 190 subjects,
using 250 randomly located seed locations. Note the use of '-sendall'
to get the individual subject results -- these are used in the next
step, and are in sub-bricks 2..191 -- the collective 3dGroupInCorr
results (in sub-bricks 0..1) are not actually used here.
(b) For each of these 250 output datasets, create 80 random splittings
into 2 groups of 95 subjects each, and carry out a 2-sample t-test
between these groups.
++ Note the use of program 2perm to create the random splittings into
files QQ_A and QQ_B, drawn from sub-bricks 2..191 of the ${fred}
datasets.
++ Note the use of the '[1dcat filename]' construction to specify
which sub-bricks of the ${fred} dataset are used for input to
the '-setX' options of 3dttest++.
(c) Count clusters from the '[1]' sub-brick of the 80 t-test outputs --
the t-statistic sub-brick.
++ Note the use of a wildcard filename with a sub-brick selector:
'QQ*.HEAD[1]' -- 3dClustCount will do the wildcard expansion
internally, then add the sub-brick selector '[1]' to each expanded
dataset filename.
(d) Produce the final report files for empirical cluster-size thresholds
for 3dGroupInCorr analyses -- rather than rely on 3dClustSim's assumption
of Gaussian-shaped spatial correlation structure.
The syntax is C-shell (tcsh), naturally.
\rm -f ABscat*
3dGroupInCorr -setA A.errts.grpincorr.niml \
-setB B.errts.grpincorr.niml \
-labelA A -labelB B -seedrad 5 -nosix -sendall \
-batchRAND 250 ABscat
foreach fred ( ABscat*.HEAD )
foreach nnn ( `count_afni -dig 2 0 79` )
2perm -prefix QQ 2 191
3dttest++ -setA ${fred}'[1dcat QQ_A]' \
-setB ${fred}'[1dcat QQ_B]' \
-no1sam -prefix QQ${nnn}
end
3dClustCount -prefix ABcount 'QQ*.HEAD[1]'
\rm -f QQ*
end
3dClustCount -final -prefix ABcount
\rm -f ABscat*
--------------------------------
---- RW Cox -- August 2012 -----
--------------------------------
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dClusterize
PURPOSE ~1~
This program is for performing clusterizing: one can perform voxelwise
thresholding on a dataset (such as a statistic), and then make a map
of remaining clusters of voxels larger than a certain volume. The
main output of this program is a single volume dataset showing a map
of the cluster ROIs.
As of Apr 24, 2020, this program now behaves less (unnecessarily)
guardedly when thresholding non-stat volumes. About time, right?
This program is specifically meant to reproduce behavior of the muuuch
older 3dclust, but this new program:
+ uses simpler syntax (hopefully);
+ includes additional clustering behavior such as the '-bisided ...'
variety (essentially, two-sided testing where all voxels in a
given cluster come from either the left- or right- tail, but not
mixed);
+ a mask (such as the whole brain) can be entered in;
+ voxelwise thresholds can be input as statistic values or p-values.
This program was also written to have simpler/more direct syntax of
usage than 3dclust. Some minor options have been carried over for
similar behavior, but many of the major option names have been
altered. Please read the helps for those below carefully.
This program was cobbled together by PA Taylor (NIMH, NIH), but it
predominantly uses code written by many legends: RW Cox, BD Ward, MS
Beauchamp, ZS Saad, and more.
USAGE ~1~
Input: ~2~
+ A dataset of one or more bricks
+ Specify an index of the volume to threshold
+ Declare a voxelwise threshold, and optionally a cluster-volume
threshold
+ Optionally specify the index of an additional 'data' brick
+ Optionally specify a mask
Output: ~2~
+ A report about the clusters (center of mass, extent, volume,
etc.) that can be dumped into a text file.
+ Optional: A dataset volume containing a map of cluster ROIs
(sorted by size) after thresholding (and clusterizing, if
specified).
That is, a data set where the voxels in the largest cluster all
have a value 1, those in the next largest are all 2, etc.
+ Optional: a cluster-masked version of an input data set. That is,
the values of a selected data set (e.g., effect estimate) that fall
within a cluster are output unchanged, and those outside a cluster
are zeroed.
+ Optional: a mask.
Explanation of 3dClusterize text report: ~2~
The following columns of cluster summary information are output
for quick reference (and please see the asterisked notes below
for some important details on the quantities displayed):
Nvoxel : Number of voxels in the cluster
CM RL : Center of mass (CM) for the cluster in the Right-Left
direction (i.e., the coordinates for the CM)
CM AP : Center of mass for the cluster in the
Anterior-Posterior direction
CM IS : Center of mass for the cluster in the
Inferior-Superior direction
minRL, maxRL : Bounding box for the cluster, min and max
coordinates in the Right-Left direction
minAP, maxAP : Min and max coordinates in the Anterior-Posterior
direction of the volume cluster
minIS, maxIS : Min and max coordinates in the Inferior-Superior
direction of the volume cluster
Mean : Mean value for the volume cluster
SEM : Standard Error of the Mean for the volume cluster
Max Int : Maximum Intensity value for the volume cluster
MI RL : Coordinate of the Maximum Intensity value in the
Right-Left direction of the volume cluster
MI AP : Coordinate of the Maximum Intensity value in the
Anterior-Posterior direction of the volume cluster
MI IS : Coordinate of the Maximum Intensity value in the
Inferior-Superior direction of the volume cluster
* The CM, Mean, SEM, Max Int and MI values are all calculated using
the '-idat ..' subvolume/dataset. In general, those peaks
and weighted centers of mass will be different than those of the
'-ithr ..' dset (if those are different subvolumes).
* CM values use the absolute value of the voxel values as weights.
* The program does not work on complex- or rgb-valued datasets!
* SEM values are not realistic for interpolated data sets! A
ROUGH correction is to multiply the SEM of the interpolated data
set by the square root of the number of interpolated voxels per
original voxel.
* Some summary or 'global' values are placed at the bottoms of
report columns, by default. These include the 'global' volume,
CM of the combined cluster ROIs, and the mean+SEM of that
Pangaea.
COMMAND OPTIONS ~1~
-inset III :Load in a dataset III of one or more bricks for
thresholding and clusterizing; one can choose to use
either just a single sub-brick within it for all
operations (e.g., a 'statistics' brick), or to specify
an additional sub-brick within it for the actual
clusterizing+reporting (after the mask from the
thresholding dataset has been applied to it).
-mask MMM :Load in a dataset MMM to use as a mask, within which
to look for clusters.
-mask_from_hdr :If 3dClustSim put an internal attribute into the
input dataset that describes a mask, 3dClusterize will
use this mask to eliminate voxels before clustering,
if you give this option (this is how the AFNI
Clusterize GUI works by default). If there is no
internal mask in the dataset header, then this
doesn't do anything.
-out_mask OM :specify that you want the utilized mask dumped out
as a single volume dataset OM. This is probably only
really useful if you are using '-mask_from_hdr'. If
no mask option is specified, there will be no output.
-ithr j :(required) Uses sub-brick [j] as the threshold source;
'j' can be either an integer *or* a brick_label string.
-idat k :Uses sub-brick [k] as the data source (optional);
'k' can be either an integer *or* a brick_label string.
If this option is used, thresholding is still done by
the 'threshold' dataset, but that threshold map is
applied to this 'data' set, which is in turn used for
clusterizing and the 'data' set values are used to
make the report. If a 'data' dataset is NOT input
with '-idat ..', then thresholding, clustering and
reporting are all done using the 'threshold' dataset.
-1sided SSS TT :Perform one-sided testing. Two arguments are required:
SSS -> either 'RIGHT_TAIL' (or 'RIGHT') or 'LEFT_TAIL'
(or 'LEFT') to specify which side of the
distribution to test.
TT -> the threshold value itself.
See 'NOTES' below to use a p-value as threshold.
-2sided LL RR :Perform two-sided testing. Two arguments are required:
LL -> the upper bound of the left tail.
RR -> lower bound of the right tail.
*NOTE* that in this case, potentially a cluster could
be made of both left- and right-tail survivors (e.g.,
both positive and negative values). For this reason,
'-bisided ...' is probably a preferable choice.
See 'NOTES' below to use a p-value as threshold.
-bisided LL RR :Same as '-2sided ...', except that the tails are tested
independently, so a cluster cannot be made of both.
See 'NOTES' below to use a p-value as threshold.
-within_range AA BB
:Perform a kind of clustering where a different kind of
thresholding is first performed, compared to the above
cases; here, one keeps values within the range [AA, BB],
INSTEAD of keeping values on the tails. Is this useful?
Who knows, but it exists.
See 'NOTES' below to use a p-value as threshold.
-NN {1|2|3} :Necessary option to specify how many neighbors a voxel
has; one MUST put one of 1, 2 or 3 after it:
1 -> 6 facewise neighbors
2 -> 18 face+edgewise neighbors
3 -> 26 face+edge+cornerwise neighbors
If using 3dClustSim (or any other method), make sure
that this NN value matches what was used there. (In
many AFNI programs, NN=1 is a default choice, but BE
SURE YOURSELF!)
-clust_nvox M :specify the minimum cluster size in terms of number
of voxels M (such as output by 3dClustSim).
-clust_vol V :specify the minimum cluster size in terms of volume V,
in microliters (requires knowing the voxel
size). Probably '-clust_nvox ...' is more useful.
-pref_map PPP :The prefix/filename of the output map of cluster ROIs.
The 'map' shows each cluster as a set of voxels with the
same integer. The clusters are ordered by size, so the
largest cluster is made up of 1s, the next largest of 2s,
etc.
(def: no map of clusters output).
-pref_dat DDD :Including this option instructs the program to output
a cluster-masked version of the 'data' volume
specified by the '-idat ..' index. That is, only data
values within the cluster ROIs are included in the
output volume. Requires specifying '-idat ..'.
(def: no cluster-masked dataset output).
-1Dformat :Write output in 1D format (now default). You can
redirect the output to a .1D file and use the file
as input to whereami_afni for obtaining Atlas-based
information on cluster locations.
See whereami_afni -help for more info.
-no_1Dformat :Do not write output in 1D format.
-summarize :Write out only the total nonzero voxel count and
volume for each dataset
-nosum :Suppress printout of the totals
-quiet :Suppress all non-essential output
-outvol_if_no_clust: flag to still output an (empty) vol if no
clusters are found. Even in this case, no report is
produced if no clusters are found. This option is
likely used for some scripting scenarios; also, the
user would still need to specify '-pref_* ...' options
as above in order to output any volumes with this opt.
(def: no volumes output if no clusters found).
-orient OOO :in the output report table, make the coordinate
order be 'OOO' (def: RAI, the DICOM standard);
alternatively, one could set the environment variable
AFNI_ORIENT (see the file README.environment).
NB: this only affects the coordinate orientation in the
*text table*; the dset orientation of the output
cluster maps and other volumetric data will match that
of the input dataset.
-abs_table_data :(new, from Apr 29, 2021) Use the absolute value of voxel
intensities (not the raw values) for calculation of the
mean and Standard Error of the Mean (SEM) in the report
table. Prior to the cited date, this was default behavior
(with '-noabs' switching out of it) but no longer.
### -noabs :(as of Apr 29, 2021, this option is no longer needed)
Previously this option switched off the default use of absolute
values of voxel intensities when calculating the mean
and Standard Error of the Mean (SEM). But this has now
changed, and the default is to just use the signed values
themselves; this option will not cause an error, but is not
needed. See '-abs_table_data' for reporting abs values.
-binary :This turns the output map of cluster ROIs into a binary
(0 or 1) mask, rather than a cluster-index mask.
If no clusters are found, the mask is not written!
(def: each cluster has separate values)
NOTES ~1~
Saving the text report ~2~
To save the text file report, use the redirect '>' after the
3dClusterize command and dump the text into a separate file of
your own naming.
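For example, a minimal sketch (the report filename 'MyClusters.1D' is
an arbitrary choice; the other values follow Example 1, far below):
3dClusterize \
-inset stats.FT+tlrc. \
-ithr 2 \
-NN 1 \
-1sided RIGHT_TAIL 3.313 \
-clust_nvox 157 > MyClusters.1D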
Using p-values as thresholds for statistic volumes ~2~
By default, numbers entered as voxelwise thresholds are assumed to
be appropriate statistic values that you have calculated for your
desired significance (e.g., using p2dsetstat). HOWEVER, if you
just want to enter p-values and have the program do the conversion
work for you, then do as follows: prepend 'p=' to your threshold
number.
- For one-sided tests, the *_TAIL specification is still used, so
in either case the p-value just represents the area in the
statistical distribution's tail (i.e., you don't have to worry
about doing '1-p'). Examples:
-1sided RIGHT_TAIL p=0.005
-1sided LEFT_TAIL p=0.001
- For the two-sided/bi-sided tests, a single p-value is
entered to represent the total area under both tails in the
statistical distribution, which are assumed to be symmetric.
Examples:
-bisided p=0.001
-2sided p=0.005
If you want asymmetric tails, you will have to enter both
threshold values as statistic values (NB: you could use
p2dsetstat to convert each desired p-value to a statistic, and
then put in those stat values to this program).
You will probably NEED to have negative signs for the cases of
'-1sided LEFT_TAIL ..', and for the first entries of '-bisided ..'
or '-2sided ..'.
You cannot mix p-values and statistic values (for two-sided
things, enter either the single p-value or both stats).
You cannot use this internal p-to-stat conversion if the volume
you are thresholding is not recognized as a stat.
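As a rough sketch of the p2dsetstat approach mentioned above for
asymmetric tails (the p-values and sub-brick index here are
hypothetical; check 'p2dsetstat -help' for its exact option names):
# convert each desired tail p-value to a statistic value
p2dsetstat -inset stats.FT+tlrc."[2]" -pval 0.002 -1sided -quiet
p2dsetstat -inset stats.FT+tlrc."[2]" -pval 0.001 -1sided -quiet
# then pass the two resulting stat values (negative sign on the
# first one) to '-bisided ..' or '-2sided ..'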
Performing appropriate testing ~2~
Don't use a pair of one-sided tests when you *should* be using a
two-sided test!
EXAMPLES ~1~
1. Take an output of FMRI testing (e.g., from afni_proc.py), whose
[1] brick contains the effect estimate from a statistical model and
whose [2] brick contains the associated statistic; use the results
of 3dClustSim run with NN=1 (here, a cluster threshold volume of 157
voxels) and perform one-sided testing with a threshold at an
appropriate value (here, 3.313).
3dClusterize \
-inset stats.FT+tlrc. \
-ithr 2 \
-idat 1 \
-mask mask_group+tlrc. \
-NN 1 \
-1sided RIGHT_TAIL 3.313 \
-clust_nvox 157 \
-pref_map ClusterMap
2. The same as Ex. 1, but using bisided testing (two sided testing
where the results of each tail can't be joined into the same
cluster). Note, the tail thresholds do NOT have to be symmetric (but
often they are). Also, here we output the cluster-masked 'data'
volume.
3dClusterize \
-inset stats.FT+tlrc. \
-ithr 2 \
-idat 1 \
-mask mask_group+tlrc. \
-NN 1 \
-bisided -3.313 3.313 \
-clust_nvox 157 \
-pref_map ClusterMap \
-pref_dat ClusterEffEst
3. The same as Ex. 2, but specifying a p-value to set the voxelwise
thresholds (in this case, tails DO have to be symmetric).
3dClusterize \
-inset stats.FT+tlrc. \
-ithr 2 \
-idat 1 \
-mask mask_group+tlrc. \
-NN 1 \
-bisided p=0.001 \
-clust_nvox 157 \
-pref_map ClusterMap \
-pref_dat ClusterEffEst
4. Threshold a non-stat dset.
3dClusterize \
-inset anat+orig \
-ithr 0 \
-idat 0 \
-NN 1 \
-within_range 500 1000 \
-clust_nvox 100 \
-pref_map ClusterMap \
-pref_dat ClusterEffEst
# ------------------------------------------------------------------------
AFNI program: 3dClustSim
Usage: 3dClustSim [options]
Program to estimate the probability of false positive (noise-only) clusters.
An adaptation of Doug Ward's AlphaSim, streamlined for various purposes.
-----------------------------------------------------------------------------
This program has several different modes of operation, each one involving
simulating noise-only random volumes, thresholding and clustering them,
and counting statistics of how often data 'survives' these processes at
various threshold combinations (per-voxel and cluster-size).
OLDEST method = simulate noise volume assuming the spatial auto-correlation
function (ACF) is given by a Gaussian-shaped function, where
this shape is specified using the FWHM parameter. The FWHM
parameter can be estimated by program 3dFWHMx.
** THIS METHOD IS NO LONGER RECOMMENDED **
NEWER method = simulate noise volume assuming the ACF is given by a mixed-model
of the form a*exp(-r*r/(2*b*b))+(1-a)*exp(-r/c), where a,b,c
are 3 parameters giving the shape, and can also be estimated
by program 3dFWHMx.
** THIS METHOD IS ACCEPTABLE **
NEWEST method = program 3dttest++ simulates the noise volumes by randomizing
and permuting input datasets, and sending those volumes into
3dClustSim directly. There is no built-in math model for the
spatial ACF.
** THIS METHOD IS MOST ACCURATE AT CONTROLLING FALSE POSITIVE RATE **
** You invoke this method with the '-Clustsim' option in 3dttest++ **
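For instance, a minimal sketch of that route (the dataset names here
are hypothetical; see '3dttest++ -help' for full usage):
3dttest++ \
-setA grpA.sub*.nii.gz \
-mask mask_group+tlrc \
-prefix TTgrpA \
-Clustsim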
3dClustSim computes a cluster-size threshold for a given voxel-wise p-value
threshold, such that the probability of anything surviving the dual thresholds
is at some given level (specified by the '-athr' option).
Note that this cluster-size threshold is the same for all brain regions.
There is an implicit assumption that the noise spatial statistics are
the same everywhere.
Program 3dXClustSim introduces the idea of spatially variable cluster-size
thresholds, which may be more useful in some cases. 3dXClustSim's method is
invoked by using the '-ETAC' option in 3dttest++.
-----------------------------------------------------------------------------
**** NOTICE ****
You should use the -acf method, NOT the -fwhm method, when determining
cluster-size thresholds for FMRI data. The -acf method will give more
accurate false positive rate (FPR) control.
****************
In particular, this program lets you run with multiple p-value thresholds
(the '-pthr' option) and only outputs the cluster size threshold at chosen
values of the alpha significance level (the '-athr' option).
In addition, the program allows the output to be formatted for inclusion
into an AFNI dataset's header, whence it can be used in the AFNI Clusterize
interface to show approximate alpha values for the displayed clusters, where
the per-voxel p-value is taken from the interactive threshold slider in the
AFNI 'Define Overlay' control panel, and then the per-cluster alpha value
is interpolated in this table from 3dClustSim. As you change the threshold
slider, the per-voxel p-value (shown below the slider) changes, and then
the interpolated alpha values are updated.
************* IMPORTANT NOTE [Dec 2015] ***************************************
A completely new method for estimating and using noise smoothness values is
now available in 3dFWHMx and 3dClustSim. This method is implemented in the
'-acf' options to both programs. 'ACF' stands for (spatial) AutoCorrelation
Function, and it is estimated by calculating moments of differences out to
a larger radius than before.
Notably, real FMRI data does not actually have a Gaussian-shaped ACF, so the
estimated ACF is then fit (in 3dFWHMx) to a mixed model (Gaussian plus
mono-exponential) of the form
ACF(r) = a * exp(-r*r/(2*b*b)) + (1-a)*exp(-r/c)
where 'r' is the radius, and 'a', 'b', 'c' are the fitted parameters.
The apparent FWHM from this model is usually somewhat larger in real data
than the FWHM estimated from just the nearest-neighbor differences used
in the 'classic' analysis.
The longer tails provided by the mono-exponential are also significant.
3dClustSim has also been modified to use the ACF model given above to generate
noise random fields.
**----------------------------------------------------------------------------**
** The take-away (TL;DR or summary) message is that the 'classic' 3dFWHMx and **
** 3dClustSim analysis, using a pure Gaussian ACF, is not very correct for **
** FMRI data -- I cannot speak for PET or MEG data. **
**----------------------------------------------------------------------------**
** ---------------------------------------------------------------------------**
** IMPORTANT CHANGES -- February 2015 ******************************************
** ---------------------------------------------------------------------------**
** In the past, 3dClustSim did '1-sided' testing; that is, the random dataset
** of Gaussian noise-only values is generated, and then it is thresholded on
** the positive side so that the N(0,1) upper tail probability is pthr.
**
** NOW, 3dClustSim does 3 different types of thresholding:
** 1-sided: as above
** 2-sided: where positive and negative values above the threshold
** are included, and then clustered together
** (in this case, the threshold on the Gaussian values is
** fixed so that the 1-sided tail probability is pthr/2.)
** bi-sided: where positive values and negative values above the
** threshold are clustered SEPARATELY (with the 2-sided threshold)
** For high levels of smoothness, the results from bi-sided and 2-sided are
** very similar -- since for smooth data, it is unlikely that large clusters of
** positive and negative values will be next to each other. With high smoothness,
** it is also true that the 2-sided results for 2*pthr will be similar to the
** 1-sided results for pthr, for the same reason. Since 3dClustSim is meant to be
** useful when the noise is NOT very smooth, we provide tables for all 3 cases.
**
** In particular, note that when the AFNI GUI threshold is set to a t-statistic,
** 2-sided testing is what is usually appropriate -- in that case, the cluster
** size thresholds tend to be smaller than the 1-sided case, which means that
** more clusters tend to be significant than in the past.
**
** In addition, the 3 different NN approaches (NN=1, NN=2, NN=3) are ALL
** always computed now. That is, 9 different tables are produced, each
** of which has its proper place when combined with the AFNI Clusterize GUI.
** The 3 different NN methods are:
** 1 = Use first-nearest neighbor clustering
** * above threshold voxels cluster together if faces touch
** 2 = Use second-nearest neighbor clustering
** * voxels cluster together if faces OR edges touch
** 3 = Use third-nearest neighbor clustering
** * voxels cluster together if faces OR edges OR corners touch
** The clustering method only makes a difference at higher (less significant)
** values of pthr. At small values of pthr (more significant), all three
** clustering methods will give very similar results.
**
**** PLEASE NOTE that the NIML outputs from this new version are not named the
**** same as those from the older version. Thus, any script that takes the NIML
**** format tables and inserts them into an AFNI dataset header must be modified
**** to match the new names. The 3drefit command fragment output at the end of
**** this program (and echoed into file '3dClustSim.cmd') shows the new form
**** of the names involved.
**** -------------------------------------------------------------------------**
**** SMOOTHING CHANGE -- May 2015 **********************************************
** ---------------------------------------------------------------------------**
** It was pointed out to me (by Anders Eklund and Tom Nichols) that smoothing
** the simulated data over a finite volume introduces 2 artifacts, which might
** be called 'edge effects'. To minimize these problems, this program now makes
** extra-large (padded) simulated volumes before blurring, and then trims those
** back down to the desired size, before continuing with the thresholding and
** cluster-counting steps. To run 3dClustSim without this padding added, use
** the new '-nopad' option.
**** -------------------------------------------------------------------------**
-------
OPTIONS [at least 1 option is required, or you'll get this help message!]
-------
******* Specify the volume over which the simulation will occur *******
-----** (a) Directly give the spatial domain that will be used **-----
-nxyz n1 n2 n3 = Size of 3D grid to use for simulation
[default values = 64 64 32]
-dxyz d1 d2 d3 = give all 3 voxel sizes at once
[default values = 3.5 3.5 3.5]
-BALL = inside the 3D grid, mask off points outside a ball
at the center of the grid and touching the edges;
this will keep about 1/2 the points in the 3D grid.
[default = use all voxels in the 3D grid]
-----** OR: (b) Specify the spatial domain using a dataset mask **-----
-mask mset = Use the 0 sub-brick of dataset 'mset' as a mask
to indicate which voxels to analyze (a sub-brick
selector '[]' is allowed)
-OKsmallmask = Allow small masks. Normally, a mask volume must have
128 or more nonzero voxels. However, IF you know what
you are doing, and IF you are willing to live life on
the edge of statistical catastrophe, then you can use
this option to allow smaller masks -- in a sense, this
is the 'consent form' for such strange shenanigans.
* If you use this option, it must come BEFORE '-mask'.
* Also read the 'CAUTION and CAVEAT' section, far below.
-->>** This option is really only recommended for users who
understand what they are doing. Misuse of this option
could easily be construed as 'p-hacking'; for example,
finding results, but your favorite cluster is too small
to survive thresholding, so you post-hoc put a small mask
down in that region. DON'T DO THIS!
** '-mask' means that '-nxyz' & '-dxyz' & '-BALL' will be ignored. **
-----** OR: (c) Specify the spatial domain by directly giving simulated volumes **-----
-inset iset [iset ...] = Read the 'iset' dataset(s) and use THESE volumes
as the simulations to threshold and clusterize,
[Feb 2016] rather than create the simulations internally.
* For example, these datasets could come from
3dttest++ -toz -randomsign 1000 -setA ...
* This can be combined with '-mask'.
* Using '-inset' means that '-fwhm', '-acf', '-nopad',
'-niter', and '-ssave' are ignored as meaningless.
---** the remaining options control how the simulation is done **---
-fwhm s = Gaussian filter width (all 3 dimensions) in mm (non-negative)
[default = 0.0 = no smoothing]
* If you wish to set different smoothing amounts for each
axis, you can instead use option
-fwhmxyz sx sy sz
to specify the three values separately.
**** This option is no longer recommended, since FMRI data ****
**** does not have a Gaussian-shaped spatial autocorrelation. ****
**** Consider using '-acf' or '3dttest++ -Clustsim' instead. ****
-acf a b c = Alternative to Gaussian filtering: use the spherical
autocorrelation function parameters output by 3dFWHMx
to do non-Gaussian (long-tailed) filtering.
* Using '-acf' will make '-fwhm' pointless!
* The 'a' parameter must be between 0 and 1.
* The 'b' and 'c' parameters (scale radii) must be positive.
* The spatial autocorrelation function is given by
ACF(r) = a * exp(-r*r/(2*b*b)) + (1-a)*exp(-r/c)
>>---------->>*** Combined with 3dFWHMx, the '-acf' method is now a
recommended way to generate clustering statistics in AFNI!
*** Alternative methods we also recommend:
3dttest++ with the -Clustsim and/or -ETAC options.
-nopad = The program now [12 May 2015] adds 'padding' slices along
each face to allow for edge effects of the smoothing process.
If you want to turn this feature off, use the '-nopad' option.
* For example, if you want to compare the 'old' (un-padded)
results with the 'new' (padded) results.
* '-nopad' has no effect when '-acf' is used, since that option
automatically pads the volume when creating it (via FFTs) and
then truncates it back to the desired size for clustering.
-pthr p1 .. pn = list of uncorrected (per voxel) p-values at which to
threshold the simulated images prior to clustering.
[default = 0.05 0.02 0.01 0.005 0.002 0.001 0.0005 0.0002 0.0001]
-athr a1 .. an = list of corrected (whole volume) alpha-values at which
the simulation will print out the cluster size
thresholds. For each 'p' and 'a', the smallest cluster
size C(p,a) for which the probability of the 'p'-thresholded
image having a noise-only cluster of size C is less than 'a'
is the output (cf. the sample output, below)
[default = 0.10 0.05 0.02 0.01]
** It is possible to use only ONE value in each of '-pthr' and **
** '-athr', and then you will get exactly one line of output **
** for each sided-ness and NN case. For example: **
** -pthr 0.001 -athr 0.05 **
** Both lists '-pthr' and '-athr' (of values between 0 and 0.2) **
** should be given in DESCENDING order. They will be sorted to be **
** that way in any case, and such is how the output will be given. **
** The list of values following '-pthr' or '-athr' can be replaced **
** with the single word 'LOTS', which will tell the program to use **
** a longer list of values for these probabilities [try it & see!] **
** (i.e., '-pthr LOTS' and/or '-athr LOTS' are legal options) **
-LOTS = the same as using '-pthr LOTS -athr LOTS'
-MEGA = adds even MORE values to the '-pthr' and '-athr' grids.
* NOTE: you can also invoke '-MEGA' by setting environment
variable AFNI_CLUSTSIM_MEGA to YES.
* Doing this will over-ride any use of other options to set
the '-pthr' and '-athr' lists!
-iter n = number of Monte Carlo simulations [default = 10000]
-nodec = normally, the program prints the cluster size threshold to
1 decimal place (e.g., 27.2). Of course, clusters only come
with an integer number of voxels -- this fractional value
is interpolated to give the desired alpha level. If you
want no decimal places (so that 27.2 becomes 28), use '-nodec'.
-seed S = random number seed [default seed = 123456789]
* if seed=0, then program will quasi-randomize it
-niml = Output the table in an XML/NIML format, rather than a .1D format.
* This option is for use with other software programs;
see the NOTES section below for details.
* '-niml' also implicitly means '-LOTS'.
-both = Output the table in XML/NIML format AND in .1D format.
* You probably want to use '-prefix' with this option!
Otherwise, everything is mixed together on stdout.
* '-both' implies '-niml' which implies '-LOTS' (unless '-MEGA').
So '-pthr' (if desired) should follow '-both'/'-niml'
-prefix ppp = Write output for NN method #k to file 'ppp.NNk_Xsided.1D',
for k=1, 2, 3, and for X=1sided, 2sided, bisided.
* If '-prefix' is not used, all results go to standard output.
You will probably find this confusing.
* If '-niml' is used, the filename is 'ppp.NNk_Xsided.niml'.
To be clear, the 9 files will be named
ppp.NN1_1sided.niml ppp.NN1_2sided.niml ppp.NN1_bisided.niml
ppp.NN2_1sided.niml ppp.NN2_2sided.niml ppp.NN2_bisided.niml
ppp.NN3_1sided.niml ppp.NN3_2sided.niml ppp.NN3_bisided.niml
* If '-niml' AND '-mask' are both used, then a compressed ASCII
encoding of the mask volume is stored into file 'ppp.mask'.
This string can be stored into a dataset header as an attribute
with name AFNI_CLUSTSIM_MASK, and will be used in the AFNI
Clusterize GUI, if present, to mask out above-threshold voxels
before the clusterizing is done (which is how the mask is used
here in 3dClustSim).
* If the ASCII mask string is NOT stored into the statistics dataset
header, then the Clusterize GUI will try to find the original
mask dataset and use that instead. If that fails, then masking
won't be done in the Clusterize process.
-cmd ccc = Write command for putting results into a file's header to a file
named 'ccc' instead of '3dClustSim.cmd'. This option is mostly
to help with scripting, as in
3dClustSim -cmd XXX.cmd -prefix XXX.nii ...
`cat XXX.cmd` XXX.nii
-quiet = Don't print out the progress reports, etc.
* Put this option first to silence most informational messages.
-ssave:TYPE ssprefix = Save the un-thresholded generated random volumes into
datasets ('-iter' of them). Here, 'TYPE' is one of these:
* blurred == save the blurred 3D volume before masking
* masked == save the blurred volume after masking
The output datasets will actually get prefixes generated
with the string 'ssprefix' being appended by a 6 digit
integer (the iteration index), starting at 000000.
(You can use SOMETHING.nii as a prefix; it will work OK.)
N.B.: This option will slow the program down a lot,
and was intended to help just one specific user.
------
NOTES:
------
* This program is like running AlphaSim once for each '-pthr' value and then
extracting the relevant information from its 'Alpha' output column.
++ One reason for 3dClustSim to be used in place of AlphaSim is that it will
be much faster than running AlphaSim multiple times.
++ Another reason is that the resulting table can be stored in an AFNI
dataset's header, and used in the AFNI Clusterize GUI to see estimated
cluster significance (alpha) levels.
* To be clear, the C(p,alpha) thresholds that are calculated are for
alpha = probability of a noise-only smooth random field, after masking
and then thresholding at the given per-voxel p value, producing a cluster
of voxels at least this big.
++ So if your cluster is larger than the C(p,0.01) threshold in size (say),
then it is very unlikely that noise BY ITSELF produced this result.
++ This statement does not mean that ALL the voxels in the cluster are
'truly' active -- it means that at least SOME of them are (very probably)
active. The statement of low probability (0.01 in this example) of a
false positive result applies to the cluster as a whole, not to each
voxel within the cluster.
* To add the cluster simulation C(p,alpha) table to the header of an AFNI
dataset, something like the following can be done [tcsh syntax]:
set fx = ( `3dFWHMx -detrend time_series_dataset+orig` )
3dClustSim -mask mask+orig -acf $fx[5] $fx[6] $fx[7] -niml -prefix CStemp
3drefit -atrstring AFNI_CLUSTSIM_NN1_1sided file:CStemp.NN1_1sided.niml \
-atrstring AFNI_CLUSTSIM_MASK file:CStemp.mask \
statistics_dataset+orig
rm -f CStemp.*
AFNI's Clusterize GUI makes use of these attributes, if stored in a
statistics dataset (e.g., something from 3dDeconvolve, 3dREMLfit, etc.).
** Nota Bene: afni_proc.py will automatically run 3dClustSim, and **
*** put the results into the statistical results dataset for you. ***
**** Another reason to use afni_proc.py for single-subject analyses! ****
* 3dClustSim will print (to stderr) a 3drefit command fragment, similar
to the one above, that you can use to add cluster tables to any
relevant statistical datasets you have lolling about.
* The C(p,alpha) table will be used in Clusterize to provide the cluster
level alpha value when the AFNI GUI is set so that the Overlay threshold
sub-brick is a statistical parameter (e.g., a t- or F-statistic), from which
a per-voxel p-value can be calculated, so that Clusterize can interpolate
in the C(p,alpha) table.
++ To be clear, the per-voxel p-value is taken from the AFNI GUI threshold
slider (the p-value is shown beneath the slider), and then the C(p,alpha)
table is inverse-interpolated to find the per-cluster alpha value for
each different cluster size.
++ As you move the AFNI threshold slider, the per-voxel (uncorrected for
multiple comparisons) p-value changes, the cluster sizes change (as fewer
or more voxels are included), and so the reported per-cluster alpha
values change for both reasons -- different p and different cluster size.
++ The alpha values reported are 'per-cluster', and are not themselves
corrected for multiple comparisons ACROSS clusters. These alpha values
are corrected for multiple comparisons WITHIN a cluster.
* AFNI will use the NN1, NN2, NN3 tables as needed in its Clusterize
interface if they are all stored in the statistics dataset header,
depending on the NN level chosen in the Clusterize controller.
* The blur estimates (provided to 3dClustSim via -acf) come from using
program 3dFWHMx.
-------------------
CAUTION and CAVEAT: [January 2011]
-------------------
* If you use a small ROI mask and also have a large blur, then it might happen
that it is impossible to find a cluster size threshold C that works for a
given (p,alpha) combination.
* Generally speaking, C(p,alpha) gets smaller as p gets smaller and C(p,alpha)
gets smaller as alpha gets larger. As a result, in a small mask with small p
and large alpha, C(p,alpha) might shrink below 1. But clusters of size C
less than 1 don't make any sense!
* For example, suppose that for p=0.0005 only 6% of the simulations
have ANY above-threshold voxels inside the ROI mask. In that case,
C(p=0.0005,alpha=0.06) = 1. There is no value of C for which 10%
of the simulations have a cluster of size C or larger. Thus, it is
impossible to find the cluster size threshold for the combination of
p=0.0005 and alpha=0.10 in this case.
* 3dClustSim will report a cluster size threshold of C=1 for such cases.
It will also print (to stderr) a warning message for all the (p,alpha)
combinations that had this problem.
-----------------------------
---- RW Cox -- July 2010 ----
-------------
SAMPLE OUTPUT from the command '3dClustSim -fwhm 7' [only the NN=1 1-sided results]
-------------
# 3dClustSim -fwhm 7
# 1-sided thresholding
# Grid: 64x64x32 3.50x3.50x3.50 mm^3 (131072 voxels)
#
# CLUSTER SIZE THRESHOLD(pthr,alpha) in Voxels
# -NN 1 | alpha = Prob(Cluster >= given size)
# pthr | 0.100 0.050 0.020 0.010
# ------ | ------ ------ ------ ------
0.050000 162.5 182.2 207.8 225.7
0.020000 64.3 71.0 80.5 88.5
0.010000 40.3 44.7 50.7 55.1
0.005000 28.0 31.2 34.9 38.1
0.002000 19.0 21.2 24.2 26.1
0.001000 14.6 16.3 18.9 20.5
0.000500 11.5 13.0 15.1 16.7
0.000200 8.7 10.0 11.6 12.8
0.000100 7.1 8.3 9.7 10.9
e.g., for this sample volume, if the per-voxel p-value threshold is set
at 0.005, then to keep the probability of getting a single noise-only
cluster at 0.05 or less, the cluster size threshold should be 32 voxels
(the next integer above 31.2).
If you ran the same simulation with the '-nodec' option, then the last
line above would be
0.000100 8 9 10 11
If you set the per voxel p-value to 0.0001 (1e-4), and want the chance
of a noise-only false-positive cluster to be 5% or less, then the cluster
size threshold would be 9 -- that is, you would keep all NN clusters with
9 or more voxels.
The header lines start with the '#' (commenting) character so that the result
is a correctly formatted AFNI .1D file -- it can be used in 1dplot, etc.
=========================================================================
* This binary version of 3dClustSim is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
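For example, to limit a run to 8 threads (tcsh syntax; in bash use
'export OMP_NUM_THREADS=8'; the ACF values below are placeholders):
setenv OMP_NUM_THREADS 8
3dClustSim -mask mask+orig -acf 0.7 2.5 12.0 -prefix CStemp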
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dCM
Usage: 3dCM [options] dset
Output = center of mass of dataset, to stdout.
Note: by default, the output is (x,y,z) values in RAI-DICOM
coordinates. But as of Dec, 2016, there are now
command line switches for other options (see -local*
below).
-mask mset :Means to use the dataset 'mset' as a mask:
Only voxels with nonzero values in 'mset'
will be averaged from 'dataset'. Note
that the mask dataset and the input dataset
must have the same number of voxels.
-automask :Generate the mask automatically.
-set x y z :After computing the CM of the dataset, set the
origin fields in the header so that the CM
will be at (x,y,z) in DICOM coords.
-local_ijk :Output values as (i,j,k) in local orientation.
-roi_vals v0 v1 v2 ... :Compute center of mass for each blob
with voxel value of v0, v1, v2, etc.
This option is handy for getting ROI
centers of mass.
-all_rois :Don't bother listing the values of the ROIs you want;
the program will find all of them and produce a
full list.
-Icent :Compute Internal Center. For some shapes, the center can
lie outside the shape. This option finds the location
of the center of the voxel closest to the center of mass.
It will be the same as or similar to the center of mass
if the CM lies within the volume. It will necessarily lie
on an edge voxel if the CMass lies outside the volume.
-Dcent :Compute Distance Center, i.e. the center of the voxel
that has the shortest average distance to all the other
voxels. This is much more computationally expensive than
the Cmass or Icent centers.
-rep_xyz_orient RRR :when reporting (x,y,z) coordinates, use the
specified RRR orientation (def: RAI).
NB: this does not apply when using '-local_ijk',
and will not change the orientation of the dset
when using '-set ..'.
NOTE: Masking options are ignored with -roi_vals and -all_rois
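Some minimal usage sketches (the dataset names are hypothetical):
3dCM -automask anat+orig
3dCM -roi_vals 2 41 -rep_xyz_orient LPI atlas_rois+tlrc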
AFNI program: 3dCompareAffine
Usage: 3dCompareAffine [options] ~1~
This program compares two (or more) affine spatial transformations
on a dataset, and outputs various measurements of how much these
transformations differ in spatial displacements.
One use for this program is to compare affine alignment matrices
from different methods for aligning 3D brain images.
Transformation matrices are specified in a few different ways:
* ASCII filename containing 12 numbers arranged in 3 lines:
u11 u12 u13 v1
u21 u22 u23 v2
u31 u32 u33 v3
* ASCII filename containing 12 numbers in a single line:
u11 u12 u13 v1 u21 u22 u23 v2 u31 u32 u33 v3
This is the '.aff12.1D' format output by 3dAllineate,
and this is the only format that can contain more than
one matrix in one file.
* Directly on the command line:
'MATRIX(u11,u12,u13,v1,u21,u22,u23,v2,u31,u32,u33,v3)'
-------
Options
-------
-mask mmm = Read in dataset 'mmm' and use non-zero voxels
as the region over which to compare the two
affine transformations.
* You can specify the use of the MNI152 built-in template
mask by '-mask MNI152'.
* In the future, perhaps other built-in masks will be created?
*OR*
-dset ddd = Read in dataset 'ddd', compute an automask from
it (via program 3dAutomask), and use that mask
as the spatial region for comparison.
* If you don't give EITHER '-mask' or '-dset', then
this program will use an internal mask derived from
the MNI152 template (skull off).
-affine aaa = Input an affine transformation (file or 'MATRIX').
*OR* * You can give more than one '-affine' option to
-matrix aaa input multiple files.
* You can also put multiple filenames after the
'-affine' option, as in '-affine aaa.aff12.1D bbb.aff12.1D'
* The first matrix found in the first '-affine' option
is the base transformation to which all following
transformations will be compared.
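For instance, a minimal sketch comparing two alignment results (the
matrix filenames here are hypothetical):
3dCompareAffine -mask MNI152 \
-affine anat_methodA.aff12.1D anat_methodB.aff12.1D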
------
Method
------
1) The input mask is hollowed out -- that is, all nonzero mask voxels that
do NOT neighbor a zero voxel are turned to zero. Thus, only the 'edge'
voxels are used in the computations below. For example, the default
MNI152 mask has 1818562 nonzero voxels before hollowing out, and
has 74668 after hollowing out. The hollowing out algorithm is described
in the help for program 3dAutomask.
2) For each surviving voxel, the xyz coordinates are calculated and then
transformed by the pair of matrices being compared. Then the Euclidean
distance between these two sets of transformed xyz vectors is calculated.
The outputs for each comparison are the maximum distance and the
root-mean-square (RMS) distance, over the set of hollowed out mask voxels.
The purpose of this program is to compare the results from 3dAllineate
and other registration programs, run under different conditions.
-- Author: RWCox - Mar 2020 at the Tulsa bootcamp
AFNI program: 3dConformist
** Program 3dConformist reads in a collection of datasets and
zero pads them to the same size.
** The output volume size is the smallest region that includes
all datasets (i.e., the minimal covering box).
** If the datasets cannot be processed (e.g., different grid
spacings), then nothing will happen except for error messages.
** The purpose of this program is to be used in scripts that
process lots of datasets and need to make them all conform
to the same size for collective voxel-wise analyses.
** The input datasets ARE ALTERED (embiggened)! <<<<<<------******
Therefore, don't use this program casually.
AFNI program: 3dConvolve
** :( :( :( :( :( :( :( :( :( :( :( :( :( :( :( :( :( :( :( **
** **
** This program, 3dConvolve, is no longer supported in AFNI **
** **
** :( :( :( :( :( :( :( :( :( :( :( :( :( :( :( :( :( :( :( **
** Program compile date = May 6 2025
AFNI program: 3dcopy
Usage 1: 3dcopy [-verb] [-denote] old_prefix new_prefix ~1~
Will copy all datasets using the old_prefix to use the new_prefix;
3dcopy fred ethel
will copy fred+orig.HEAD to ethel+orig.HEAD
fred+orig.BRIK to ethel+orig.BRIK
fred+tlrc.HEAD to ethel+tlrc.HEAD
fred+tlrc.BRIK.gz to ethel+tlrc.BRIK.gz
Usage 2: 3dcopy old_prefix+view new_prefix ~1~
Will copy only the dataset with the given view (orig, acpc, tlrc).
Usage 3: 3dcopy old_dataset new_prefix ~1~
Will copy the non-AFNI formatted dataset (e.g., MINC, ANALYZE, CTF)
to the AFNI formatted dataset with the given new prefix.
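For example (hypothetical dataset names): the first command copies
only the +tlrc view of 'fred', and the second converts a MINC file
to AFNI format with the prefix 'wilma'.
3dcopy fred+tlrc ethel
3dcopy anat.mnc wilma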
Notes: ~1~
* This is to copy entire datasets, possibly with multiple views.
So sub-brick selection is not allowed. Please use 3dbucket or
3dTcat for that purpose.
* The new datasets have new ID codes. If you are renaming
multiple datasets (as in Usage 1), then if the old +orig
dataset is the warp parent of the old +acpc and/or +tlrc
datasets, then the new +orig dataset will be the warp
parent of the new +acpc and +tlrc datasets. If any other
datasets point to the old datasets as anat or warp parents,
they will still point to the old datasets, not these new ones.
* The BRIK files are copied if they exist, keeping the compression
suffix unchanged (if any).
* The old_prefix may have a directory name attached in front,
as in 'gerard/manley/hopkins'.
* If the new_prefix does not have a directory name attached
(i.e., does NOT look like 'homer/simpson'), then the new
datasets will be written in the current directory ('./').
* The new_prefix can JUST be a directory now (like the Unix
utility 'cp'); in this case the output has the same prefix
as the input.
* The '-verb' option will print progress reports; otherwise, the
program operates silently (unless an error is detected).
* The '-denote' option will remove any Notes from the file.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dCRUISEtoAFNI
Usage: 3dCRUISEtoAFNI -input CRUISE_HEADER.dx
Converts a CRUISE dataset defined by a header in OpenDX format
The conversion is based on sample data and information
provided by Aaron Carass from JHU's IACL iacl.ece.jhu.edu
[-novolreg]: Ignore any Rotate, Volreg, Tagalign,
or WarpDrive transformations present in
the Surface Volume.
[-noxform]: Same as -novolreg
[-setenv "'ENVname=ENVvalue'"]: Set environment variable ENVname
to be ENVvalue. Quotes are necessary.
Example: suma -setenv "'SUMA_BackgroundColor = 1 0 1'"
See also options -update_env, -environment, etc
in the output of 'suma -help'
Common Debugging Options:
[-trace]: Turns on In/Out debug and Memory tracing.
For speeding up the tracing log, I recommend
you redirect stdout to a file when using this option.
For example, if you were running suma you would use:
suma -spec lh.spec -sv ... > TraceFile
This option replaces the old -iodbg and -memdbg.
[-TRACE]: Turns on extreme tracing.
[-nomall]: Turn off memory tracing.
[-yesmall]: Turn on memory tracing (default).
NOTE: For programs that output results to stdout
(that is to your shell/screen), the debugging info
might get mixed up with your results.
Global Options (available to all AFNI/SUMA programs)
-h: Mini help, at times same as -help in many cases.
-help: The entire help output
-HELP: Extreme help, same as -help in majority of cases.
-h_view: Open help in text editor. AFNI will try to find a GUI editor
-hview : on your machine. You can control which it should use by
setting environment variable AFNI_GUI_EDITOR.
-h_web: Open help in web browser. AFNI will try to find a browser.
-hweb : on your machine. You can control which it should use by
setting environment variable AFNI_WEB_BROWSER.
-h_find WORD: Look for lines in this program's -help output that match
(approximately) WORD.
-h_raw: Help string unedited
-h_spx: Help string in sphinx loveliness, but do not try to autoformat
-h_aspx: Help string in sphinx with autoformatting of options, etc.
-all_opts: Try to identify all options for the program from the
output of its -help option. Some options might be missed
and others misidentified. Use this output for hints only.
Compile Date:
May 6 2025
Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov
AFNI program: 3dDeconvolve
++ 3dDeconvolve: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward, et al.
------------------------------------------------------------------------
----- DESCRIPTION and PROLEGOMENON -----
------------------------------------------------------------------------
Program to calculate the deconvolution of a measurement 3D+time dataset
with a specified input stimulus time series. This program can also
perform multiple linear regression using multiple input stimulus time
series. Output consists of an AFNI 'bucket' type dataset containing
(for each voxel)
* the least squares estimates of the linear regression coefficients
* t-statistics for significance of the coefficients
* partial F-statistics for significance of individual input stimuli
* the F-statistic for significance of the overall regression model
The program can optionally output extra datasets containing
* the estimated impulse response function
* the fitted model and error (residual) time series
------------------------------------------------------------------------
* Program 3dDeconvolve does Ordinary Least Squares (OLSQ) regression.
* Program 3dREMLfit can be used to do Generalized Least Squares (GLSQ)
regression (AKA 'pre-whitened' least squares) combined with REML
estimation of an ARMA(1,1) temporal correlation structure:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dREMLfit.html
* The input to 3dREMLfit is the .xmat.1D matrix file output by
3dDeconvolve, which also writes a 3dREMLfit command line to a file
to make it relatively easy to use the latter program.
* 3dREMLfit also allows for voxel-specific regressors, unlike
3dDeconvolve. This feature is used with the '-fanaticor' option
to afni_proc.py, for example.
* Nonlinear time series model fitting can be done with program 3dNLfim:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dNLfim.html
* Preprocessing of the time series input can be done with various AFNI
programs, or with the 'uber-script' afni_proc.py:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/afni_proc.py.html
------------------------------------------------------------------------
------------------------------------------------------------------------
**** The recommended way to use 3dDeconvolve is via afni_proc.py, ****
**** which will pre-process the data, and also provide some useful ****
**** diagnostic tools/outputs for assessing the data's quality. ****
**** It can also run 3dREMLfit for you 'at no extra charge'. ****
**** [However, it will not wax your car or wash your windows.] ****
------------------------------------------------------------------------
------------------------------------------------------------------------
Consider the time series model Z(t) = K(t)*S(t) + baseline + noise,
where Z(t) = data
K(t) = kernel (e.g., hemodynamic response function or HRF)
S(t) = stimulus time series
baseline = constant, drift, etc. [regressors of no interest]
and * = convolution
Then 3dDeconvolve solves for K(t) given S(t). If you want to process
the reverse problem and solve for S(t) given the kernel K(t), use the
program 3dTfitter with the '-FALTUNG' option. The difference between
the two cases is that K(t) is presumed to be causal and have limited
support, whereas S(t) is a full-length time series. Note that program
3dTfitter does not have all the capabilities of 3dDeconvolve for
calculating output statistics; on the other hand, 3dTfitter can solve
a deconvolution problem (in either direction) with L1 or L2 regression,
and with sign constraints on the computed values (e.g., requiring that
the output S(t) or K(t) be non-negative):
https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dTfitter.html
------------------------------------------------------------------------
The 'baseline model' in 3dDeconvolve (and 3dREMLfit) does not mean just
a constant (mean) level of the signal, or even just the slow drifts that
happen in FMRI time series. 'Baseline' here also means the model that
forms the null hypothesis. The Full_Fstat result is the F-statistic
of the full model (all regressors) vs. the baseline model. Thus, it
is common to include irregular time series, such as estimated motion
parameters, in the baseline model via the -stim_file/-stim_base options,
or by using the -ortvec option (to include multiple regressors at once).
Thus, the 'baseline model' is really the 'null hypothesis model'.
------------------------------------------------------------------------
It is VERY important to realize that statistics (F, t, R^2) computed in
3dDeconvolve are MARGINAL (or partial) statistics. For example, the
t-statistic for a single beta coefficient measures the significance of
that beta value against the regression model where ONLY that one column
of the matrix is removed; that is, the null hypothesis for that
t-statistic is the full regression model minus just that single
regressor. Similarly, the F-statistic for a set of regressors measures
the significance of that set of regressors (eg, a set of TENT functions)
against the full model with just that set of regressors removed. If
this explanation or its consequences are unclear, you need to consult
with a statistician, or with the AFNI message board guru entities
(when they can be lured down from the peak of Mt Taniquetil or Kailash).
------------------------------------------------------------------------
Regression Programs in the AFNI Package:
* At its core, 3dDeconvolve solves a linear regression problem z = X b
for the parameter vector b, given the data vector z in each voxel, and
given the SAME matrix X in each voxel. The solution is calculated in
the Ordinary Least Squares (OLSQ) sense.
* Program 3dREMLfit does something similar, but allows for ARMA(1,1)
serial correlation in the data, so the solution method is called
Generalized Least Squares (GLSQ).
* If you want to solve a problem where some of the matrix columns in X
(the regressors) are different in different voxels (spatially variable),
then use program 3dTfitter, which uses OLSQ, or use 3dREMLfit.
* 3dTfitter can also use L1 and LASSO regression, instead of OLSQ; if you
want to use such 'robust' fitting methods, this program is your friend.
It can also impose sign constraints (positivity or negativity) on the
parameters b, and can (as mentioned above) do deconvolution.
* 3dBandpass and 3dTproject can do a sequence of 'time series cleanup'
operations, including 'regressing out' (via OLSQ) a set of nuisance
vectors (columns).
* 3dLSS can be used to solve -stim_times_IM systems using an alternative
linear technique that gives biased results, but with smaller variance.
------------------------------------------------------------------------
Usage Details:
3dDeconvolve command-line-arguments ...
**** Input data and control options ****
-input fname fname = filename of 3D+time input dataset
[more than one filename can be given]
[here, and these datasets will be]
[auto-catenated in time; if you do this,]
['-concat' is not needed and is ignored.]
**** You can input a 1D time series file here,
but the time axis should run along the
ROW direction, not the COLUMN direction as
in the -input1D option. You can automatically
transpose a 1D file on input using the \'
operator at the end of the filename, as in
-input fred.1D\'
** This is the only way to use 3dDeconvolve
with a multi-column 1D time series file.
* The output datasets by default will then
be in 1D format themselves. To have them
formatted as AFNI datasets instead, use
-DAFNI_WRITE_1D_AS_PREFIX=YES
on the command line.
* You should use '-force_TR' to set the TR of
the 1D 'dataset' if you use '-input' rather
than '-input1D' [the default is 1.0 sec].
-sat OR -trans * 3dDeconvolve can check the dataset time series
for initial saturation transients, which should
normally have been excised before data analysis.
(Or should be censored out: see '-censor' below.)
If you want to have it do this somewhat time
consuming check, use the option '-sat'.
* Or set environment variable AFNI_SKIP_SATCHECK to NO.
* Program 3dSatCheck does this check, also.
[-noblock] Normally, if you input multiple datasets with
'-input', then the separate datasets are taken to
be separate image runs that get separate baseline
models. If you want to have the program consider
these to be all one big run, use -noblock.
* If any of the input datasets has only 1 sub-brick,
then this option is automatically invoked!
* If the auto-catenation feature isn't used, then
this option has no effect, no how, no way.
[-force_TR TR] Use this value of TR instead of the one in
the -input dataset.
(It's better to fix the input using 3drefit.)
[-input1D dname] dname = filename of single (fMRI) .1D time series
where time runs down the column.
* If you want to analyze multiple columns from a
.1D file, see the '-input' option above for
the technique.
[-TR_1D tr1d] tr1d = TR for .1D time series [default 1.0 sec].
This option has no effect without -input1D
[-nodata [NT [TR]] Evaluate experimental design only (no input data)
* Optional, but highly recommended: follow the
'-nodata' with two numbers, NT=number of time
points, and TR=time spacing between points (sec)
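A minimal sketch of such a design-only check (the number of time
points, TR, and stimulus filename here are hypothetical):
3dDeconvolve -nodata 300 2.0 -polort A \
-num_stimts 1 \
-stim_file 1 ideal_task.1D -stim_label 1 task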
[-mask mname] mname = filename of 3D mask dataset
Only data time series from within the mask
will be analyzed; results for voxels outside
the mask will be set to zero.
[-automask] Build a mask automatically from input data
(will be slow for long time series datasets)
** If you don't specify ANY mask, the program will
build one automatically (from each voxel's RMS)
and use this mask solely for the purpose of
reporting truncation-to-short errors (if '-short'
is used) AND for computing the FDR curves in the
bucket dataset's header (unless '-noFDR' is used,
of course).
* If you don't want the FDR curves to be computed
inside this automatically generated mask, then
use '-noFDR' and later run '3drefit -addFDR' on
the bucket dataset.
* To be precise, the above default masking only
happens when you use '-input' to run the program
with a 3D+time dataset; not with '-input1D'.
[-STATmask sname] Build a mask from file 'sname', and use this
mask for the purpose of reporting truncation-to-float
issues AND for computing the FDR curves.
The actual results ARE not masked with this
option (only with '-mask' or '-automask' options)
* If you don't use '-STATmask', then the mask
from '-mask' or '-automask' is used for these
purposes. If neither of those is given, then
the automatically generated mask described
just above is used for these purposes.
[-censor cname] cname = filename of censor .1D time series
* This is a file of 1s and 0s, indicating which
time points are to be included (1) and which are
to be excluded (0).
* Option '-censor' can only be used once!
* The option below may be simpler to use!
[-CENSORTR clist] clist = list of strings that specify time indexes
to be removed from the analysis. Each string is
of one of the following forms:
37 => remove global time index #37
2:37 => remove time index #37 in run #2
37..47 => remove global time indexes #37-47
37-47 => same as above
2:37..47 => remove time indexes #37-47 in run #2
*:0-2 => remove time indexes #0-2 in all runs
+Time indexes within each run start at 0.
+Run indexes start at 1 (just to be confusing).
+Multiple -CENSORTR options may be used, or
multiple -CENSORTR strings can be given at
once, separated by spaces or commas.
+N.B.: 2:37,47 means index #37 in run #2 and
global time index 47; it does NOT mean
index #37 in run #2 AND index #47 in run #2.
[-concat rname] rname = filename for list of concatenated runs
* 'rname' can be in the format
'1D: 0 100 200 300'
which indicates 4 runs, the first of which
starts at time index=0, second at index=100,
and so on.
[-nfirst fnum] fnum = number of first dataset image to use in the
deconvolution procedure. [default = max maxlag]
[-nlast lnum] lnum = number of last dataset image to use in the
deconvolution procedure. [default = last point]
[-polort pnum] pnum = degree of polynomial corresponding to the
null hypothesis [default: pnum = 1]
** For pnum > 2, this type of baseline detrending
is roughly equivalent to a highpass filter
with a cutoff of (p-2)/D Hz, where 'D' is the
duration of the imaging run: D = N*TR
** If you use 'A' for pnum, the program will
automatically choose a value based on the
time duration D of the longest run:
pnum = 1 + int(D/150)
==>>** 3dDeconvolve is the ONLY AFNI program with the
-polort option that allows the use of 'A' to
set the polynomial order automatically!!!
** Use '-1' for pnum to specifically NOT include
any polynomials in the baseline model. Only
do this if you know what this means!
[-legendre] use Legendre polynomials for null hypothesis
(baseline model)
[-nolegendre] use power polynomials for null hypotheses
[default is -legendre]
** Don't do this unless you are crazy!
[-nodmbase] don't de-mean baseline time series
(i.e., polort>0 and -stim_base inputs)
[-dmbase] de-mean baseline time series [default if polort>=0]
[-svd] Use SVD instead of Gaussian elimination [default]
[-nosvd] Use Gaussian elimination instead of SVD
(only use for testing + backwards compatibility)
[-rmsmin r] r = minimum rms error to reject reduced model
(default = 0; don't use this option normally!)
[-nocond] DON'T calculate matrix condition number
** This value is NOT the same as Matlab!
[-singvals] Print out the matrix singular values
(useful for some testing/debugging purposes)
Also see program 1dsvd.
[-GOFORIT [g]] Use this to proceed even if the matrix has
bad problems (e.g., duplicate columns, large
condition number, etc.).
*N.B.: Warnings that you should particularly heed have
the string '!!' somewhere in their text.
*N.B.: Error and Warning messages go to stderr and
also to file 3dDeconvolve.err.
++ You can disable the creation of this .err
file by setting environment variable
AFNI_USE_ERROR_FILE to NO before running
this program.
*N.B.: The optional number 'g' that appears is the
number of warnings that can be ignored.
That is, if you use -GOFORIT 7 and 9 '!!'
matrix warnings appear, then the program will
not run. If 'g' is not present, 1 is used.
[-allzero_OK] Don't consider all zero matrix columns to be
the type of error that -GOFORIT is needed to
ignore.
* Please know what you are doing when you use
this option!
[-Dname=val] = Set environment variable 'name' to 'val' for this
run of the program only.
******* Input stimulus options *******
-num_stimts num num = number of input stimulus time series
(0 <= num) [default: num = 0]
*N.B.: '-num_stimts' must come before any of the
following '-stim' options!
*N.B.: Most '-stim' options have as their first argument
an integer 'k', ranging from 1..num, indicating
which stimulus class the argument is defining.
*N.B.: The purpose of requiring this option is to make
sure your model is complete -- that is, you say
you are giving 5 '-stim' options, and then the
program makes sure that all of them are given
-- that is, that you don't forget something.
-stim_file k sname sname = filename of kth time series input stimulus
*N.B.: This option directly inserts a column into the
regression matrix; unless you are using the 'old'
method of deconvolution (cf below), you would
normally only use '-stim_file' to insert baseline
model components such as motion parameters.
[-stim_label k slabel] slabel = label for kth input stimulus
*N.B.: This option is highly recommended, so that
output sub-bricks will be labeled for ease of
recognition when you view them in the AFNI GUI.
[-stim_base k] kth input stimulus is part of the baseline model
*N.B.: 'Baseline model' == Null Hypothesis model
*N.B.: The most common baseline components to add are
the 6 estimated motion parameters from 3dvolreg.
-ortvec fff lll This option lets you input a rectangular array
of 1 or more baseline vectors from file 'fff',
which will get the label 'lll'. Functionally,
it is the same as using '-stim_file' on each
column of 'fff' separately (plus '-stim_base').
This method is just a faster and simpler way to
include a lot of baseline regressors in one step.
-->>**N.B.: This file is NOT included in the '-num_stimts'
count that you provide.
*N.B.: These regression matrix columns appear LAST
in the matrix, after everything else.
*N.B.: You can use column '[..]' and/or row '{..}'
selectors on the filename 'fff' to pick out
a subset of the numbers in that file.
*N.B.: The q-th column of 'fff' will get a label
like 'lll[q]' in the 3dDeconvolve results.
*N.B.: This option is known as the 'Inati Option'.
*N.B.: Unlike the original 'Inati' (who is unique), it
is allowed to have more than one '-ortvec' option.
*N.B.: Program 1dBport is one place to generate a file
for use with '-ortvec'; 1deval might be another.
**N.B.: You must have -num_stimts > 0 AND/OR
You must use -ortvec AND/OR
You must have -polort >= 0
Otherwise, there is no regression model!
An example using -polort only:
3dDeconvolve -x1D_stop -polort A -nodata 300 2 -x1D stdout: | 1dplot -one -stdin
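An illustrative example using '-ortvec' (the file name is
hypothetical) to add 6 motion parameters from 3dvolreg as
baseline regressors in one step:
-ortvec motion.1D mot
The columns of motion.1D would then get labels of the form
'mot[q]' in the 3dDeconvolve results.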
**N.B.: The following 3 options are for the 'old' style of explicit
deconvolution. For most purposes, their usage is no longer
recommended. Instead, you should use the '-stim_times' options
to directly input the stimulus times, rather than code the
stimuli as a sequence of 0s and 1s in this 'old' method!
[-stim_minlag k m] m = minimum time lag for kth input stimulus
[default: m = 0]
[-stim_maxlag k n] n = maximum time lag for kth input stimulus
[default: n = 0]
[-stim_nptr k p] p = number of stimulus function points per TR
Note: This option requires 0 slice offset times
[default: p = 1]
**N.B.: The '-stim_times' options below are the recommended way of
analyzing FMRI time series data now. The options directly
above are only maintained for the sake of backwards
compatibility! For most FMRI users, the 'BLOCK' and 'TENT'
(or 'CSPLIN') response models will serve their needs. The
other models are for users with specific needs who understand
clearly what they are doing.
[-stim_times k tname Rmodel]
Generate the k-th response model from a set of stimulus times
given in file 'tname'.
*** The format of file 'tname' is one line per imaging run
(cf. '-concat' above), and each line contains the list of START
times (in seconds) for the stimuli in class 'k' for its
corresponding run of data; times are relative to the start of
the run (i.e., sub-brick #0 occurring at time=0).
*** The DURATION of the stimulus is encoded in the 'Rmodel'
argument, described below. Units are in seconds, not TRs!
-- If different stimuli in the same class 'k' have different
durations, you'll have to use the dmBLOCK response model
and '-stim_times_AM1' or '-stim_times_AM2', described below.
*** Different lines in the 'tname' file can contain different
numbers of start times. Each line must contain at least 1 time.
*** If there is no stimulus in class 'k' in a particular imaging
run, there are two ways to indicate that:
(a) put a single '*' on the line, or
(b) put a very large number or a negative number
(e.g., 99999, or -1) on the line
-- times outside the range of the imaging run will cause
a warning message, but the program will soldier on.
*** In the case where the stimulus doesn't actually exist in the
data model (e.g., every line in 'tname' is a '*'), you will
also have to use the '-allzero_OK' option to force 3dDeconvolve
to run with regressor matrix columns that are filled with zeros.
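*** An illustrative 'tname' file (times are hypothetical) for 3
imaging runs, where run #2 has no stimuli of this class:
12.3 35.0 71.8
*
5.5 40.2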
The response model is specified by the third argument after
'-stim_times' ('Rmodel'), and can be one of the following:
*** In the descriptions below, a '1 parameter' model has a fixed
shape, and only the estimated amplitude ('Coef') varies:
BLOCK GAM TWOGAM SPMG1 WAV MION
*** Models with more than 1 parameter have multiple basis
functions, and the estimated parameters ('Coef') are their
amplitudes. The estimated shape of the response to a stimulus
will be different in different voxels:
TENT CSPLIN SPMG2 SPMG3 POLY SIN EXPR
*** Many models require the input of the start and stop times for
the response, 'b' and 'c'. Normally, 'b' would be zero, but
in some cases, 'b' could be negative -- for example, if you
are concerned about anticipatory effects. The stop time 'c'
should be based on how long you realistically expect the
hemodynamic response to last after the onset of the stimulus;
e.g., the duration of the stimulus plus 14 seconds for BOLD.
*** If you use '-tout', each parameter will get a separate
t-statistic. As mentioned far above, this is a marginal
statistic, measuring the impact of that model component on the
regression fit, relative to the fit with that one component
(matrix column) removed.
*** If you use '-fout', each stimulus will also get an F-statistic,
which is the collective impact of all the model components
it contains, relative to the regression fit with the entire
stimulus removed. (If there is only 1 parameter, then F = t*t.)
*** Some models below are described in terms of a simple response
function that is then convolved with a square wave whose
duration is a parameter you give (duration is NOT a parameter
that will be estimated). Read the descriptions below carefully:
not all functions are (or can be) convolved in this way:
* ALWAYS convolved: BLOCK dmBLOCK MION MIONN
* OPTIONALLY convolved: GAM TWOGAM SPMGx WAV
* NEVER convolved: TENT CSPLIN POLY SIN EXPR
Convolution is specified by providing the duration parameter
as described below for each particular model function.
'BLOCK(d,p)' = 1 parameter block stimulus of duration 'd'
** There are 2 variants of BLOCK:
BLOCK4 [the default] and BLOCK5
which have slightly different delays:
HRF(t) = int( g(t-s) , s=0..min(t,d) )
where g(t) = t^q * exp(-t) /(q^q*exp(-q))
and q = 4 or 5. The case q=5 is delayed by
about 1 second from the case q=4.
==> ** Despite the name, you can use 'BLOCK' for event-
related analyses just by setting the duration to
a small value; e.g., 'BLOCK5(1,1)'
** The 'p' parameter is the amplitude of the
basis function, and should usually be set to 1.
If 'p' is omitted, the amplitude will depend on
the duration 'd', which is useful only in
special circumstances!!
** For bad historical reasons, the peak amplitude of
'BLOCK' without the 'p' parameter does not go to
1 as the duration 'd' gets large. Correcting
this oversight would break some people's lives,
so that's just the way it is.
** The 'UBLOCK' function (U for Unit) is the same
as the 'BLOCK' function except that when the
'p' parameter is missing (or 0), the peak
amplitude goes to 1 as the duration gets large.
If p > 0, 'UBLOCK(d,p)' and 'BLOCK(d,p)' are
identical.
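** An illustrative way to see the BLOCK vs UBLOCK amplitude
difference, using the '-nodata' plotting approach shown
elsewhere in this help (times and durations are arbitrary):
3dDeconvolve -nodata 200 1.0 -polort -1 -num_stimts 2 \
-stim_times 1 '1D: 10' 'BLOCK(60)' \
-stim_times 2 '1D: 10' 'UBLOCK(60)' \
-x1D stdout: | 1dplot -stdin -one -thick
You should see the UBLOCK curve approach a peak of 1, while
the BLOCK curve peaks at its (historical) different scale.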
'TENT(b,c,n)' = n parameter tent function expansion from times
b..c after stimulus time [piecewise linear]
[n must be at least 2; time step is (c-b)/(n-1)]
'CSPLIN(b,c,n)'= n parameter cubic spline function expansion
from times b..c after stimulus time
[n must be at least 4]
** CSPLIN is a drop-in upgrade of TENT to a
differentiable set of functions.
** TENT and CSPLIN are 'cardinal' interpolation
functions: their parameters are the values
of the HRF model at the n 'knot' points
b , b+dt , b+2*dt , ... [dt = (c-b)/(n-1)]
In contrast, in a model such as POLY or SIN,
the parameters output are not directly the
hemodynamic response function values at any
particular point.
==> ** You can also use 'TENTzero' and 'CSPLINzero',
which means to eliminate the first and last
basis functions from each set. The effect
of these omissions is to force the deconvolved
HRF to be zero at t=b and t=c (to start
and end at zero response). With these 'zero'
response models, there are n-2 parameters
(thus for 'TENTzero', n must be at least 3).
** These 'zero' functions will force the HRF to
be continuous, since they will now be unable
to suddenly rise up from 0 at t=b and/or drop
down to 0 at t=c.
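** An illustrative plot of a TENTzero regressor set (parameters
arbitrary), to see the forced zero response at t=b and t=c:
3dDeconvolve -nodata 60 1.0 -polort -1 -num_stimts 1 \
-stim_times 1 '1D: 10' 'TENTzero(0,16,9)' \
-x1D stdout: | 1dplot -stdin -one -thick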
'GAM(p,q)' = 1 parameter gamma variate
(t/(p*q))^p * exp(p-t/q)
Defaults: p=8.6 q=0.547 if only 'GAM' is used
** The peak of 'GAM(p,q)' is at time p*q after
the stimulus. The FWHM is about 2.35*sqrt(p)*q;
this approximation is accurate for p > 0.3*q.
** To check this approximation, try the command
1deval -num 100 -del 0.02 -xzero 0.02 \
-expr 'sqrt(gamp(x,1))/2.35/x' | \
1dplot -stdin -del 0.02 -xzero 0.02 -yaxis 1:1.4:4:10
If the two functions gamp(x,1) and 2.35*x
were equal, the plot would be constant y=1.
==> ** If you add a third argument 'd', then the GAM
function is convolved with a square wave of
duration 'd' seconds; for example:
'GAM(8.6,.547,17)'
for a 17 second stimulus. [09 Aug 2010]
'GAMpw(K,W)' = Same as 'GAM(p,q)' but where the shape parameters
are specified at time to peak 'K' and full
width at half max (FWHM) 'W'. You can also
add a third argument as the duration. The (K,W)
parameters are converted to (p,q) values for
the actual computations; the (p,q) parameters
are printed to the text (stderr) output.
** Note that if you give weird values for K and W,
weird things will happen: (tcsh syntax)
set pp = `ccalc 'gamp(2,8)'`
set qq = `ccalc 'gamq(2,8)'`
1deval -p=$pp -q=$qq -num 200 -del 0.1 \
-expr '(t/p/q)^p*exp(p-t/q)' | \
1dplot -stdin -del 0.1
Here, K is significantly smaller than W,
so a gamma variate that fits peak=2 width=8
must be weirdly shaped. [Also note use of the
'calc' functions gamp(K,W) and gamq(K,W) to
calculate p and q from K and W in the script.]
'TWOGAM(p1,q1,r,p2,q2)'
= 1 parameter (amplitude) model:
= A combination of two 'GAM' functions:
GAM(p1,q1) - r*GAM(p2,q2)
This model is intended to let you use a HRF
similar to BrainVoyager (e.g.). You can
add a sixth argument as the duration.
** Note that a positive 'r' parameter means to
subtract the second GAM function (undershoot).
'TWOGAMpw(K1,W1,r,K2,W2)'
= Same as above, but where the peaks and widths
of the 2 component gamma variates are given
instead of the less intuitive p and q.
For FMRI work, K2 > K1 is usual, as the
second (subtracted) function is intended
to model the 'undershoot' after the main
positive part of the model. You can also
add a sixth argument as the duration.
** Example (no duration given):
3dDeconvolve -num_stimts 1 -polort -1 -nodata 81 0.5 \
-stim_times 1 '1D: 0' 'TWOGAMpw(3,6,0.2,10,12)' \
-x1D stdout: | 1dplot -stdin -THICK -del 0.5
'SPMG1' = 1 parameter SPM gamma variate basis function
exp(-t)*(A1*t^P1-A2*t^P2) where
A1 = 0.0083333333 P1 = 5 (main positive lobe)
A2 = 1.274527e-13 P2 = 15 (undershoot part)
This function is NOT normalized to have peak=1!
'SPMG2' = 2 parameter SPM: gamma variate + d/dt derivative
[For backward compatibility: 'SPMG' == 'SPMG2']
'SPMG3' = 3 parameter SPM basis function set
==> ** The SPMGx functions now can take an optional
(duration) argument, specifying that the primal
SPM basis functions should be convolved with
a square wave 'duration' seconds long and then
be normalized to have peak absolute value = 1;
e.g., 'SPMG3(20)' for a 20 second duration with
three basis functions. [28 Apr 2009]
** Note that 'SPMG1(0)' will produce the usual
'SPMG1' wavefunction shape, but normalized to
have peak value = 1 (for example).
'POLY(b,c,n)' = n parameter Legendre polynomial expansion
from times b..c after stimulus time
[n can range from 1 (constant) to 20]
'SIN(b,c,n)' = n parameter sine series expansion
from times b..c after stimulus time
[n must be at least 1]
'WAV(d)' = 1 parameter block stimulus of duration 'd'.
* This is the '-WAV' function from program waver!
* If you wish to set the shape parameters of the
WAV function, you can do that by adding extra
arguments, in the order
delay time , rise time , fall time ,
undershoot fraction, undershoot restore time
* The default values are 'WAV(d,2,4,6,0.2,2)'
* Omitted parameters get the default values.
* 'WAV(d,,,,0)' (setting undershoot=0) is
very similar to 'BLOCK5(d,1)', for d > 0.
* Setting duration d to 0 (or just using 'WAV')
gives the pure '-WAV' impulse response function
from waver.
* If d > 0, the WAV(0) function is convolved with
a square wave of duration d to make the HRF,
and the amplitude is scaled back down to 1.
'EXPR(b,c) exp1 ... expn'
= n parameter; arbitrary expressions from times
b..c after stimulus time
* Expressions are separated by spaces, so
each expression must be a contiguous block
of non-whitespace characters
* The entire model, from 'EXPR' to the final
expression must be enclosed in one set of
quotes. The individual component expressions
are separated by blanks. Example:
'EXPR(0,20) sin(PI*t/20)^2'
* Expressions use the same format as 3dcalc
* Symbols that can be used in an expression:
t = time in sec since stimulus time
x = time scaled to be x= 0..1 for t=bot..top
z = time scaled to be z=-1..1 for t=bot..top
* Spatially dependent regressors are not allowed!
* Other symbols are set to 0 (silently).
==> ** There is no convolution of the 'EXPR' functions
with a square wave implied. The expressions
you input are what you get, evaluated over
times b..c after each stimulus time. To be
sure of what your response model is, you should
plot the relevant columns from the matrix
.xmat.1D output file.
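** An illustrative plot of an EXPR response model (the
expression and times are arbitrary), following the advice
above to graph the matrix columns:
3dDeconvolve -nodata 100 1.0 -polort -1 -num_stimts 1 \
-stim_times 1 '1D: 10 60' 'EXPR(0,20) sin(PI*t/20)^2' \
-x1D stdout: | 1dplot -stdin -thick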
'MION(d)' = 1 parameter block stimulus of duration 'd',
intended to model the response of MION.
The zero-duration impulse response 'MION(0)' is
h(t) = 16.4486 * ( -0.184/ 1.5 * exp(-t/ 1.5)
+0.330/ 4.5 * exp(-t/ 4.5)
+0.670/13.5 * exp(-t/13.5) )
which is adapted from the paper
FP Leite, et al. NeuroImage 16:283-294 (2002)
http://dx.doi.org/10.1006/nimg.2002.1110
** Note that this is a positive function, but MION
produces a negative response to activation, so the
beta and t-statistic for MION are usually negative.
***** If you want a negative MION function (so you get
a positive beta), use the name 'MIONN' instead.
** After convolution with a square wave 'd' seconds
long, the resulting single-trial waveform is
scaled to have magnitude 1. For example, try
this fun command to compare BLOCK and MION:
3dDeconvolve -nodata 300 1 -polort -1 -num_stimts 2 \
-stim_times 1 '1D: 10 150' 'MION(70)' \
-stim_times 2 '1D: 10 150' 'BLOCK(70,1)' \
-x1D stdout: | 1dplot -stdin -one -thick
You will see that the MION curve rises and falls
much more slowly than the BLOCK curve.
==> ** Note that 'MION(d)' is already convolved with a
square wave of duration 'd' seconds. Do not
convolve it again by putting in multiple closely
spaced stimulus times (this mistake has been made)!
** Scaling the single-trial waveform to have magnitude
1 means that trials with different durations 'd'
will have the same magnitude for their regression
models.
* 3dDeconvolve does LINEAR regression, so the model parameters are
amplitudes of the basis functions; 1 parameter models are 'simple'
regression, where the shape of the impulse response function is
fixed and only the magnitude/amplitude varies. Models with more
free parameters have 'variable' shape impulse response functions.
* LINEAR regression means that each data time series (thought of as
a single column of numbers = a vector) is fitted to a sum of the
matrix columns, each one multiplied by an amplitude parameter to
be calculated ('Coef'). The purpose of the various options
'-stim_times', '-polort', '-ortvec', and/or '-stim_file'
is to build the columns of the regression matrix.
* If you want NONLINEAR regression, see program 3dNLfim.
* If you want LINEAR regression with allowance for non-white noise,
use program 3dREMLfit, after using 3dDeconvolve to set up the
regression model (in the form of a matrix file).
** When in any doubt about the shape of the response model you are **
* asking for, you should plot the relevant columns from the X matrix *
* to help develop some understanding of the analysis. The 'MION' *
* example above can be used as a starting point for how to easily *
* setup a quick command pipeline to graph response models. In that *
* example, '-polort -1' is used to suppress the usual baseline model *
* since graphing that part of the matrix would just be confusing. *
* Another example, comparing the similar models *
** 'WAV(10)', 'BLOCK4(10,1)', and 'SPMG1(10)': **
3dDeconvolve -nodata 100 1.0 -num_stimts 3 -polort -1 \
-local_times -x1D stdout: \
-stim_times 1 '1D: 10 60' 'WAV(10)' \
-stim_times 2 '1D: 10 60' 'BLOCK4(10,1)' \
-stim_times 3 '1D: 10 60' 'SPMG1(10)' \
| 1dplot -thick -one -stdin -xlabel Time -ynames WAV BLOCK4 SPMG1
* For the format of the 'tname' file, see the last part of
https://afni.nimh.nih.gov/pub/dist/doc/misc/Decon/DeconSummer2004.html
and also see the other documents stored in the directory below:
https://afni.nimh.nih.gov/pub/dist/doc/misc/Decon/
and also read the presentation below:
https://afni.nimh.nih.gov/pub/dist/edu/latest/afni_handouts/afni05_regression.pdf
** Note Well:
* The contents of the 'tname' file are NOT just 0s and 1s,
but are the actual times of the stimulus events IN SECONDS.
* You can give the times on the command line by using a string
of the form '1D: 3.2 7.9 | 8.2 16.2 23.7' in place of 'tname',
where the '|' character indicates the start of a new line
(so this example is for a case with 2 catenated runs).
=> * You CANNOT USE the '1D:' form of input for any of the more
complicated '-stim_times_*' options below!!
* The '1D:' form of input is mostly useful for quick tests, as
in the examples above, rather than for production analyses with
lots of different stimulus times and multiple imaging runs.
[-stim_times_AM1 k tname Rmodel]
Similar, but generates an amplitude modulated response model.
The 'tname' file should consist of 'time*amplitude' pairs.
As in '-stim_times', the '*' character can be used as a placeholder
when an imaging run doesn't have any stimulus of a given class.
*N.B.: What I call 'amplitude' modulation is called 'parametric'
modulation in Some other PrograM.
***N.B.: If NO run at all has a stimulus of a given class, then you
must have at least 1 time that is not '*' for -stim_times_*
to work (so that the proper number of regressors can be set
up). You can use a negative time for this purpose, which
will produce a warning message but otherwise will be
ignored, as in:
-1*37
*
for a 2 run 'tname' file to be used with -stim_times_*.
** In such a case, you will also need the -allzero_OK option,
and probably -GOFORIT as well.
** It is possible to combine '-stim_times_AM1' with the Rmodel
being TENT. If you have an amplitude parameter at each TR,
and you want to try to deconvolve its impact on the data,
you can try the following:
a) create a 1D column file with the amplitude parameter,
one value per TR, matching the length of the data;
say this file is called Akk.1D
b) create a 1D column file with the actual TR time in
each row; for example, if you have 150 time points
and TR=2 s, then
1deval -num 150 -expr '2*i' > Att.1D
c) glue these files together for use with -stim_times_AM1:
echo `1dMarry Att.1D Akk.1D` > Atk.1D
d) Use option
-stim_times_AM1 1 Atk.1D 'TENT(0,20,11)' -stim_label 1 TENT
which gives a TENT response lasting 20s with 11 parameters
-- one every TR.
e) Use all the other clever options you need in 3dDeconvolve,
such as censoring, baseline, motion parameters, ....
Variations on the options chosen here can be made to
constrain the deconvolution; e.g., use CSPLIN vs. TENT, or
CSPLINzero; use fewer parameters in the TENT/CSPLIN to force
a smoother deconvolution, etc.
Graphing the regression matrix is useful in this type of
analysis, to be sure you are getting the analysis you want;
for example:
1dplot -sep_scl prefix.xmat.1D
[-stim_times_AM2 k tname Rmodel]
Similar, but generates 2 response models: one with the mean
amplitude and one with the differences from the mean.
*** Please note that 'AM2' is the option you should probably use!
*** 'AM1' is for special cases, and normally should not be used
for FMRI task activation analyses!!
*** 'AM2' will give you the ability to detect voxels that activate
but do not change proportional to the amplitude factor, as well
as provide a direct measure of the proportionality of the
activation to changes in the input amplitude factors. 'AM1'
will do neither of these things.
*** Normally, 3dDeconvolve removes the mean of the auxiliary
parameter(s) from the modulated regressor(s). However, if you
set environment variable AFNI_3dDeconvolve_rawAM2 to YES, then
the mean will NOT be removed from the auxiliary parameter(s).
This ability is provided for users who want to center their
parameters using their own method.
*** [12 Jul 2012] You can now specify the value to subtract from
each modulation parameter -- this value will replace the
subtraction of the average parameter value that usually happens.
To do this, add an extra parameter after the option, as in
-stim_times_AM2 1 timesAM.1D 'BLOCK(2,1)' :5.2:x:2.0
The extra argument must start with the colon ':' character, and
there should be as many different values (separated by ':') as
there are parameters in the timing file (timesAM.1D above).
==> In the example above, ':5.2:x:2.0' means
subtract 5.2 from each value of the first parameter in timesAM.1D
subtract the MEAN from each value of the second parameter
(since 'x' doesn't translate to a number)
subtract 2.0 from each value of the third parameter
==> What is this option for, anyway? The purpose is to facilitate
GROUP analysis of the results from a collection of subjects, where
you want to treat each subject's analysis exactly the same
way -- and thus, the subtraction value for a parameter (e.g.,
reaction time) should then be the mean over all the reaction
times from all trials in all subjects.
** NOTE [04 Dec 2008] **
-stim_times_AM1 and -stim_times_AM2 now take files with more
than 1 amplitude attached to each time; for example,
33.7*9,-2,3
indicates a stimulus at time 33.7 seconds with 3 amplitudes
attached (9 and -2 and 3). In this example, -stim_times_AM2 would
generate 4 response models: 1 for the constant response case
and 1 scaled by each of the amplitude sets.
** Please don't get carried away and use too many parameters!! **
For more information on modulated regression, see
https://afni.nimh.nih.gov/pub/dist/doc/misc/Decon/AMregression.pdf
** NOTE [08 Dec 2008] **
-stim_times_AM1 and -stim_times_AM2 now have 1 extra response model
function available:
dmBLOCK (or dmBLOCK4 or dmBLOCK5)
where 'dm' means 'duration modulated'. If you use this response
model, then the LAST married parameter in the timing file will
be used to modulate the duration of the block stimulus. Any
earlier parameters will be used to modulate the amplitude,
and should be separated from the duration parameter by a ':'
character, as in '30*5,3:12' which means (for dmBLOCK):
a block starting at 30 s,
with amplitude modulation parameters 5 and 3,
and with duration 12 s.
The unmodulated peak response of dmBLOCK depends on the duration
of the stimulus, as the BOLD response accumulates.
If you want the peak response to be set to a fixed value, use
dmBLOCK(p)
where p = the desired peak value (e.g., 1).
*** Understand what you are doing when you use dmBLOCK, and look at ***
*** the regression matrix! Otherwise, you will end up confused. ***
*N.B.: The maximum allowed dmBLOCK duration is 999 s.
*N.B.: You cannot use '-iresp' or '-sresp' with dmBLOCK!
*N.B.: If you are NOT doing amplitude modulation at the same time
(and so you only have 1 'married' parameter per time), use
'-stim_times_AM1' with dmBLOCK. If you also want to do
amplitude modulation at the same time as duration modulation
(and so you have 2 or more parameters with each time), use
'-stim_times_AM2' instead. If you use '-stim_times_AM2' and
there is only 1 'married' parameter, the program will print
a warning message, then convert to '-stim_times_AM1', and
continue -- so nothing bad will happen to your analysis!
(But you will be embarrassed in front of your friends.)
*N.B.: If you are using AM2 (amplitude modulation) with dmBLOCK, you
might want to use 'dmBLOCK(1)' to make each block have native
amplitude 1 before it is scaled by the amplitude parameter.
Or maybe not -- this is a matter for fine judgment.
*N.B.: You can also use dmBLOCK with -stim_times_IM, in which case
each time in the 'tname' file should have just ONE extra
parameter -- the duration -- married to it, as in '30:15',
meaning a block of duration 15 seconds starting at t=30 s.
*N.B.: For bad historical reasons, the peak amplitude of dmBLOCK without
the 'p' parameter does not go to 1 as the duration gets large.
Correcting this oversight would break some people's lives, so
that's just the way it is.
*N.B.: The 'dmUBLOCK' function (U for Unit) is the same as the
'dmBLOCK' function except that when the 'p' parameter is
missing (or 0), the peak amplitude goes to 1 as the duration
gets large. If p > 0, 'dmUBLOCK(p)' and 'dmBLOCK(p)' are
identical
For some graphs of what dmBLOCK regressors look like, see
https://afni.nimh.nih.gov/pub/dist/doc/misc/Decon/AMregression.pdf
and/or try the following command:
3dDeconvolve -nodata 350 1 -polort -1 -num_stimts 1 \
-stim_times_AM1 1 q.1D 'dmBLOCK' \
-x1D stdout: | 1dplot -stdin -thick -thick
where file q.1D contains the single line
10:1 40:2 70:3 100:4 130:5 160:6 190:7 220:8 250:9 280:30
Change 'dmBLOCK' to 'dmBLOCK(1)' and see how the matrix plot changes.
**************** Further notes on dmBLOCK [Nov 2013] ****************
Basically (IMHO), there are 2 rational choices to use:
(a) 'dmUBLOCK' = allow the amplitude of the response model to
vary with the duration of the stimulus; getting
larger with larger durations; for durations longer
than about 15s, the amplitude will become 1.
-->> This choice is equivalent to 'dmUBLOCK(0)', but
is NOT equivalent to 'dmBLOCK(0)' due to the
historical scaling issue alluded to above.
(b) 'dmUBLOCK(1)' = all response models will get amplitude 1,
no matter what the duration of the stimulus.
-->> This choice is equivalent to 'dmBLOCK(1)'.
Some users have expressed the desire to allow the amplitude to
vary with duration, as in case (a), BUT to specify the duration
at which the amplitude goes to 1. This desideratum has now been
implemented, and provides the case below:
(a1) 'dmUBLOCK(-X)' = set the amplitude to be 1 for a duration
of 'X' seconds; e.g., 'dmUBLOCK(-5)' means
that a stimulus with duration 5 gets
amplitude 1, shorter durations get amplitudes
smaller than 1, and longer durations get
amplitudes larger than 1.
-->> Please note that 'dmBLOCK(-X)' is NOT the
same as this case (a1), and in fact it
has no meaning.
I hope this clarifies things and makes your life simpler, happier,
and more carefree. (If not, please blame Gang Chen, not me.)
An example to clarify the difference between these cases:
3dDeconvolve -nodata 350 1 -polort -1 -num_stimts 3 \
-stim_times_AM1 1 q.1D 'dmUBLOCK' \
-stim_times_AM1 2 q.1D 'dmUBLOCK(1)' \
-stim_times_AM1 3 q.1D 'dmUBLOCK(-4)' \
-x1D stdout: | \
1dplot -stdin -thick \
-ynames 'dmUBLOCK' 'dmUB(1)' 'dmUB(-4)'
where file q.1D contains the single line
10:1 60:2 110:4 160:10 210:20 260:30
Note how the 'dmUBLOCK(-4)' curve (green) peaks at 1 for the 3rd
stimulus, and peaks at larger values for the later (longer) blocks.
Whereas the 'dmUBLOCK' curve (black) peaks at 1 only for the longest
blocks, and the 'dmUBLOCK(1)' curve (red) peaks at 1 for ALL blocks.
*********************************************************************
[-stim_times_FSL k tname Rmodel]
This option allows you to input FSL-style 3-column timing files,
where each line corresponds to one stimulus event/block; the
line '40 20 1' means 'stimulus starts at 40 seconds, lasts for
20 seconds, and is given amplitude 1'. Since in this format,
each stimulus can have a different duration and get a different
response amplitude, the 'Rmodel' must be one of the 'dm'
duration-modulated options above ['dmUBLOCK(1)' is probably the
most useful]. The amplitude modulation is taken to be like
'-stim_times_AM1', where the given amplitude in the 'tname' file
multiplies the basic response shape.
*** We DO NOT advocate the use of this '_FSL' option, but it's here
to make some scripting easier for some (unfortunate) people.
*** The results of 3dDeconvolve (or 3dREMLfit) cannot be expected
to be exactly the same as FSL FEAT, since the response model
shapes are different, among myriad other details.
*** You can also use '-stim_times_FS1' to indicate that the
amplitude factor in the 'tname' file should be ignored and
replaced with '1' in all cases.
*** FSL FEAT only analyzes contiguous time series -- nothing like
'-concat' allowing for multiple EPI runs is possible in FSL
(AFAIK). So the FSL stimulus time format doesn't allow for
this possibility. In 3dDeconvolve, you can get around this
problem by using a line consisting of '* * *' to indicate the
break between runs, as in the example below:
1 2 3
4 5 6
* * *
7 8 9
that indicates 2 runs, the first of which has 2 stimuli and
the second of which has just 1 stimulus. If there is a run
that has NO copies of this type of stimulus, then you would
use two '* * *' lines in succession.
Of course, a file using the '* * *' construction will NOT be
compatible with FSL!
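*** An illustrative FSL-style timing file (numbers hypothetical)
for 2 runs, where the second run has 1 stimulus:
40 20 1
160 30 1
* * *
55 25 1
which might be used as (file name hypothetical):
-stim_times_FSL 1 taskA_fsl.txt 'dmUBLOCK(1)' -stim_label 1 taskA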
[-stim_times_IM k tname Rmodel]
Similar, but each separate time in 'tname' will get a separate
regressor; 'IM' means 'Individually Modulated' -- that is, each
event will get its own amplitude estimated. Presumably you will
collect these many amplitudes afterwards and do some sort of
statistics or analysis on them.
*N.B.: Each time in the 'tname' file will get a separate regressor.
If some time is outside the duration of the imaging run(s),
or if the response model for that time happens to hit only
censored-out data values, then the corresponding regressor
will be all zeros. Normally, 3dDeconvolve will not run
if the matrix has any all zero columns. To carry out the
analysis, use the '-allzero_OK' option. Amplitude estimates
for all zero columns will be zero, and should be excluded
from any subsequent analysis. (Probably you should fix the
times in the 'tname' file instead of using '-allzero_OK'.)
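*N.B.: An illustrative usage (file name hypothetical), giving each
event a 2 s block response and its own amplitude estimate:
-stim_times_IM 1 trial_times.1D 'BLOCK(2,1)' -stim_label 1 trialwise
The individual 'Coef' sub-bricks can then be collected for
later single-trial analyses.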
[-global_times]
[-local_times]
By default, 3dDeconvolve guesses whether the times in the 'tname'
files for the various '-stim_times' options are global times
(relative to the start of run #1) or local times (relative to
the start of each run). With one of these options, you can force
the times to be considered as global or local for '-stim_times'
options that are AFTER the '-local_times' or '-global_times'.
** Using one of these options (most commonly, '-local_times') is
VERY highly recommended.
[-stim_times_millisec]
This option scales all the times in any '-stim_times_*' option by
0.001; the purpose is to allow you to input the times in ms instead
of in s. This factor will be applied to ALL '-stim_times' inputs,
before or after this option on the command line. This factor will
be applied before -stim_times_subtract, so the subtraction value
(if present) must be given in seconds, NOT milliseconds!
[-stim_times_subtract SS]
This option means to subtract 'SS' seconds from each time encountered
in any '-stim_times*' option. The purpose of this option is to make
it simple to adjust timing files for the removal of images from the
start of each imaging run. Note that this option will be useful
only if both of the following are true:
(a) each imaging run has exactly the same number of images removed
(b) the times in the 'tname' files were not already adjusted for
this image removal (i.e., the times refer to the imaging runs
as acquired, not as input to 3dDeconvolve).
In other words, use this option with understanding and care!
** Note that the subtraction of 'SS' applies to ALL '-stim_times'
inputs, before or after this option on the command line!
** And it applies to global times and local times alike!
** Any time (thus subtracted) below 0 will be ignored, as falling
before the start of the imaging run.
** This option, and the previous one, are simply for convenience, to
help you in setting up your '-stim_times*' timing files from
whatever source you get them.
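** A worked example (numbers hypothetical): if 4 images were removed
from the start of each run and TR = 2 s, and the timing files still
refer to the runs as acquired, then adding
-stim_times_subtract 8
shifts every stimulus time earlier by 4*2 = 8 seconds.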
[-basis_normall a]
Normalize all basis functions for '-stim_times' to have
amplitude 'a' (must have a > 0). The peak absolute value
of each basis function will be scaled to be 'a'.
NOTES:
* -basis_normall only affects -stim_times options that
appear LATER on the command line
* The main use for this option is for use with the
'EXPR' basis functions.
******* General linear test (GLT) options *******
-num_glt num num = number of general linear tests (GLTs)
(0 <= num) [default: num = 0]
**N.B.: You only need this option if you have
more than 10 GLTs specified; the program
has built-in space for 10 GLTs, and
this option is used to expand that space.
If you use this option, you should place
it on the command line BEFORE any of the
other GLT options.
[-glt s gltname] Perform s simultaneous linear tests, as specified
by the matrix contained in file 'gltname'
[-glt_label k glabel] glabel = label for kth general linear test
[-gltsym gltname] Read the GLT with symbolic names from the file
'gltname'; see the document below for details:
https://afni.nimh.nih.gov/pub/dist/doc/misc/Decon/DeconSummer2004.html
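An illustrative example (the stimulus labels are hypothetical):
if a file 'AvsB.txt' contains the single line
+Arel -Brel
then
-gltsym AvsB.txt -glt_label 1 AvsB
tests the difference of the 'Arel' and 'Brel' coefficients;
see the document above for the full symbolic syntax.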
******* Options to create 3D+time datasets *******
[-iresp k iprefix] iprefix = prefix of 3D+time output dataset which
will contain the kth estimated impulse response
[-tshift] Use cubic spline interpolation to time shift the
estimated impulse response function, in order to
correct for differences in slice acquisition
times. Note that this affects only the 3D+time
output dataset generated by the -iresp option.
**N.B.: This option only applies to the 'old' style of
deconvolution analysis. Do not use this with
-stim_times analyses!
[-sresp k sprefix] sprefix = prefix of 3D+time output dataset which
will contain the standard deviations of the
kth impulse response function parameters
[-fitts fprefix] fprefix = prefix of 3D+time output dataset which
will contain the (full model) time series fit
to the input data
[-errts eprefix] eprefix = prefix of 3D+time output dataset which
will contain the residual error time series
from the full model fit to the input data
[-TR_times dt]
Use 'dt' as the stepsize for output of -iresp and -sresp file
for response models generated by '-stim_times' options.
Default is same as time spacing in the '-input' 3D+time dataset.
The units here are in seconds!
**** Options to control the contents of the output bucket dataset ****
[-fout] Flag to output the F-statistics for each stimulus
** F tests the null hypothesis that each and every
beta coefficient in the stimulus set is zero
** If there is only 1 stimulus class, then its
'-fout' value is redundant with the Full_Fstat
computed for all stimulus coefficients together.
[-rout] Flag to output the R^2 statistics
[-tout] Flag to output the t-statistics
** t tests a single beta coefficient against zero
** If a stimulus class has only one regressor, then
F = t^2 and the F statistic is redundant with t.
[-vout] Flag to output the sample variance (MSE) map
[-nobout] Flag to suppress output of baseline coefficients
(and associated statistics) [** DEFAULT **]
[-bout] Flag to turn on output of baseline coefs and stats.
** Will make the output dataset larger.
[-nocout] Flag to suppress output of regression coefficients
(and associated statistics)
** Useful if you just want GLT results.
[-full_first] Flag to specify that the full model statistics will
be first in the bucket dataset [** DEFAULT **]
[-nofull_first] Flag to specify that full model statistics go last
[-nofullf_atall] Flag to turn off the full model F statistic
** DEFAULT: the full F is always computed, even if
sub-model partial F's are not ordered with -fout.
[-bucket bprefix] Create one AFNI 'bucket' dataset containing various
parameters of interest, such as the estimated IRF
coefficients, and full model fit statistics.
Output 'bucket' dataset is written to bprefix.
[-nobucket] Don't output a bucket dataset. By default, the
program uses '-bucket Decon' if you don't give
either -bucket or -nobucket on the command line.
[-noFDR] Don't compute the statistic-vs-FDR curves for the
bucket dataset.
[same as 'setenv AFNI_AUTOMATIC_FDR NO']
[-xsave] Flag to save X matrix into file bprefix.xsave
(only works if -bucket option is also given)
[-noxsave] Don't save X matrix [this is the default]
[-cbucket cprefix] Save the regression coefficients (no statistics)
into a dataset named 'cprefix'. This dataset
will be used in a -xrestore run instead of the
bucket dataset, if possible.
** Also, the -cbucket and -x1D output can be combined
in 3dSynthesize to produce 3D+time datasets that
are derived from subsets of the regression model
[generalizing the -fitts option, which produces]
[a 3D+time dataset derived from the full model].
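** An illustrative sketch of that combination (consult
3dSynthesize's own help for its exact options; file names
here are hypothetical):
3dSynthesize -cbucket cprefix+orig -matrix X.xmat.1D \
-select baseline -prefix baseline_fit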
[-xrestore f.xsave] Restore the X matrix, etc. from a previous run
that was saved into file 'f.xsave'. You can
then carry out new -glt tests. When -xrestore
is used, most other command line options are
ignored.
[-float] Write output datasets in float format, instead of
as scaled shorts [** now the default **]
[-short] Write output as scaled shorts [no longer default]
***** The following options control miscellaneous outputs *****
[-quiet] Flag to suppress most screen output
[-xout] Flag to write X and inv(X'X) matrices to screen
[-xjpeg filename] Write a JPEG file graphing the X matrix
* If filename ends in '.png', a PNG file is output
[-x1D filename] Save X matrix to a .xmat.1D (ASCII) file [default]
** If 'filename' is 'stdout:', the file is written
to standard output, and could be piped into
1dplot (some examples are given earlier).
* This can be used for quick checks to see if your
inputs are setting up a 'reasonable' matrix.
[-nox1D] Don't save X matrix [a very bad idea]
[-x1D_uncensored ff] Save X matrix to a .xmat.1D file, but WITHOUT
ANY CENSORING. Might be useful in 3dSynthesize.
[-x1D_regcensored f] Save X matrix to a .xmat.1D file with the
censoring imposed by adding 0-1 columns instead of
excising the censored rows.
[-x1D_stop] Stop running after writing .xmat.1D files.
* Useful for testing, or if you are going to
run 3dREMLfit instead -- that is, you are just
using 3dDeconvolve to set up the matrix file.
[-progress n] Write statistical results for every nth voxel
* To let you know that something is happening!
[-fdisp fval] Write statistical results to the screen, for those
voxels whose full model F-statistic is > fval
[-help] Oh go ahead, try it!
**** Multiple CPU option (local CPUs only, no networking) ****
-jobs J Run the program with 'J' jobs (sub-processes).
On a multi-CPU machine, this can speed the
program up considerably. On a single CPU
machine, using this option would be silly.
* J should be a number from 1 up to the
number of CPUs sharing memory on the system.
* J=1 is normal (single process) operation.
* The maximum allowed value of J is 32.
* Unlike other parallelized AFNI programs, this one
does not use OpenMP; it directly uses fork()
and shared memory to run multiple processes.
* For more information on parallelizing, see
https://afni.nimh.nih.gov/afni/doc/misc/afni_parallelize
* Also use -mask or -automask to get more speed; cf. 3dAutomask.
-virtvec To save memory, write the input dataset to a temporary file
and then read data vectors from it only as needed. This option
is for Javier and will probably not be useful for anyone else.
And it only takes effect if -jobs is greater than 1.
** NOTE **
This version of the program has been compiled to use
double precision arithmetic for most internal calculations.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dDegreeCentrality
Usage: 3dDegreeCentrality [options] dset
Computes voxelwise weighted and binary degree centrality and
stores the result in a new 3D bucket dataset as floats to
preserve their values. Degree centrality reflects the strength and
extent of the correlation of a voxel with every other voxel in
the brain.
Conceptually the process involves:
1. Calculating the correlation between voxel time series for
every pair of voxels in the brain (as determined by masking)
2. Applying a threshold to the resulting correlations to exclude
those that might have arisen by chance, or to sparsify the
connectivity graph.
3. At each voxel, summarizing its correlation with other voxels
in the brain, by either counting the number of voxels correlated
with the seed voxel (binary) or by summing the correlation
coefficients (weighted).
Practically the algorithm is ordered differently to optimize for
computational time and memory usage.
The threshold can be supplied as a correlation coefficient,
or a sparsity threshold. The sparsity threshold reflects the fraction
of connections that should be retained after the threshold has been
applied. To minimize resource consumption, using a sparsity threshold
involves a two-step procedure. In the first step, a correlation
coefficient threshold is applied to substantially reduce the number
of correlations. Next, the remaining correlations are sorted and a
threshold is calculated so that only the specified fraction of
possible correlations are above threshold. Due to ties between
correlations, the fraction of correlations that pass the sparsity
threshold might be slightly more than the number specified.
Regardless of the thresholding procedure employed, negative
correlations are excluded from the calculations.
Options:
-pearson = Correlation is the normal Pearson (product moment)
correlation coefficient [default].
-spearman AND -quadrant are disabled at this time :-(
-thresh r = exclude correlations <= r from calculations
-sparsity s = only use top s percent of correlations in calculations
s should be an integer between 0 and 100. Uses an
adaptive thresholding procedure to reduce memory.
The speed of determining the adaptive threshold can
be improved by specifying an initial threshold with
the -thresh flag.
-polort m = Remove polynomial trend of order 'm', for m=-1..3.
[default is m=1; removal is by least squares].
Using m=-1 means no detrending; this is only useful
for data/information that has been pre-processed.
-autoclip = Clip off low-intensity regions in the dataset,
-automask = so that the correlation is only computed between
high-intensity (presumably brain) voxels. The
mask is determined the same way that 3dAutomask works.
-mask mmm = Mask to define 'in-brain' voxels. Reducing the number
of voxels included in the calculation will
significantly speedup the calculation. Consider using
a mask to constrain the calculations to the grey matter
rather than the whole brain. This is also preferable
to using -autoclip or -automask.
-prefix p = Save output into dataset with prefix 'p'; this file will
contain bricks for both 'weighted' and 'binary' degree centrality
[default prefix is 'deg_centrality'].
-out1D f = Save information about the above threshold correlations to
1D file 'f'. Each row of this file will contain:
Voxel1 Voxel2 i1 j1 k1 i2 j2 k2 Corr
Where voxel1 and voxel2 are the 1D indices of the pair of
voxels, i j k correspond to their 3D coordinates, and Corr
is the value of the correlation between the voxel time courses.
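Example (illustrative; dataset and mask names are hypothetical):
3dDegreeCentrality -sparsity 10 -mask gm_mask+tlrc \
-prefix DC_subj01 errts_subj01+tlrc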
Notes:
* The output dataset is a bucket type of floats.
* The program prints out an estimate of its memory used
when it ends. It also prints out a progress 'meter'
to keep you pacified.
-- RWCox - 31 Jan 2002 and 16 Jul 2010
-- Cameron Craddock - 26 Sept 2015
=========================================================================
* This binary version of 3dDegreeCentrality is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3ddelay
++ 3ddelay: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: Ziad Saad (with help from B Douglas Ward)
The program estimates the time delay between each voxel time series
in a 3D+time dataset and a reference time series[1][2].
The estimated delays are relative to the reference time series.
For example, a delay of 4 seconds means that the voxel time series
is delayed by 4 seconds with respect to the reference time series.
Usage:
3ddelay
-input fname fname = filename of input 3d+time dataset
DO NOT USE CATENATED timeseries! Time axis is assumed
to be continuous and not evil.
-ideal_file rname rname = input ideal time series file name
The length of the reference time series should be equal to
that of the 3d+time data set.
The reference time series vector is stored in an ascii file.
The program assumes that there is one value per line and that all
values in the file are part of the reference vector.
PS: Unlike with 3dfim, and FIM in AFNI, values over 33333 are treated
as part of the time series.
-fs fs Sampling frequency in Hz. of data time series (1/TR).
-T Tstim Stimulus period in seconds.
If the stimulus is not periodic, you can set Tstim to 0.
[-prefix bucket] The prefix for the results Brick.
The first subbrick is for Delay.
The second subbrick is for Covariance, which is an
estimate of the power in voxel time series at the
frequencies present in the reference time series.
The third subbrick is for the Cross Correlation
Coefficients between FMRI time series and reference time
series. The fourth subbrick contains estimates of the
Variance of voxel time series.
The default prefix is the prefix of the input dset
with a '.DEL' extension appended to it.
[-polort order] Detrend input time series with polynomial of order
'order'. If you use -1 for order then the program will
suggest an order for you (about 1 for each 150 seconds)
The minimum recommended is 1. The default is -1 for auto
selection. This is the same as option Nort in the plugin
version.
[-nodtrnd] Equivalent to polort 0, whereby only the mean is removed.
NOTE: Regardless of these detrending options, No detrending is
done to the reference time series.
[-uS/-uD/-uR] Units for delay estimates. (Seconds/Degrees/Radians)
You can't use Degrees or Radians as units unless
you specify a value for Tstim > 0.
[-phzwrp] Delay (or phase) wrap.
This switch maps delays from:
(Seconds) 0->T/2 to 0->T/2 and T/2->T to -T/2->0
(Degrees) 0->180 to 0->180 and 180->360 to -180->0
(Radians) 0->pi to 0->pi and pi->2pi to -pi->0
You can't use this option unless you specify a
value for Tstim > 0.
[-nophzwrp] Do not wrap phase (default).
[-phzreverse] Reverse phase such that phase -> (T-phase)
[-phzscale SC] Scale phase: phase -> phase*SC (default no scaling)
[-bias] Do not correct for the bias in the estimates [1][2]
[-nobias | -correct_bias] Do correct for the bias in the estimates
(default).
[-dsamp] Correct for slice timing differences (default).
[-nodsamp] Do not correct for slice timing differences.
[-mask mname] mname = filename of 3d mask dataset
only voxels with non-zero values in the mask will be
considered.
[-nfirst fnum] fnum = number of first dataset image to use in
the delay estimate. (default = 0)
[-nlast lnum] lnum = number of last dataset image to use in
the delay estimate. (default = last)
[-co CCT] Cross Correlation Coefficient threshold value.
This is only used to limit the ascii output (see below).
[-asc [out]] Write the results to an ascii file for voxels with
[-ascts [out]] cross correlation coefficients larger than CCT.
If 'out' is not specified, a default name similar
to the default output prefix is used.
With -asc, only files 'out' and 'out.log' are written to disk
(see ahead)
With -ascts, an additional file, 'out.ts', is written to disk
(see ahead)
There are 9 columns in 'out' which hold the following
values:
1- Voxel Index (VI) : Each voxel in an AFNI brick has a
unique index.
Indices map directly to XYZ coordinates.
See AFNI plugin documentations for more info.
2..4- Voxel coordinates (X Y Z): Those are the voxel
slice coordinates. You can see these coordinates
in the upper left side of the AFNI window.
To do so, you must first switch the voxel
coordinate units from mm to slice coordinates.
Define Datamode -> Misc -> Voxel Coords ?
PS: The coords that show up in the graph window
may be different from those in the upper left
side of AFNI's main window.
5- Duff : A value of no interest to you. It is preserved
for backward compatibility.
6- Delay (Del) : The estimated voxel delay.
7- Covariance (Cov) : Covariance estimate.
8- Cross Correlation Coefficient (xCorCoef) :
Cross Correlation Coefficient.
9- Variance (VTS) : Variance of voxel's time series.
The file 'out' can be used as an input to two plugins:
'4Ddump' and '3D+t Extract'
The log file 'out.log' contains all parameter settings
used for generating the output brick.
It also holds any warnings generated by the plugin.
Some warnings, such as 'null time series ...' , or
'Could not find zero crossing ...' are harmless.
I might remove them in future versions.
A line (L) in the file 'out.ts' contains the time series
of the voxel whose results are written on line (L) in the
file 'out'.
The time series written to 'out.ts' do not contain the
ignored samples, they are detrended and have zero mean.
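An illustrative command (file names, TR, and stimulus period are
hypothetical); for TR = 2 s, -fs is 1/TR = 0.5 Hz:
3ddelay -input epi_run1+orig -ideal_file ideal_ref.1D \
-fs 0.5 -T 30 -prefix epi_run1.DEL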
Random Comments/Advice:
The longer your time series, the better. It is generally recommended that
the largest delay be less than N/10, N being the time series' length.
The algorithm does go all the way to N/2.
If you have/find questions/comments/bugs about the plugin,
send me an E-mail: saadz@mail.nih.gov
Ziad Saad Dec 8 00.
[1] : Bendat, J. S. (1985). The Hilbert transform and applications
to correlation measurements, Bruel and Kjaer Instruments Inc.
[2] : Bendat, J. S. and G. A. Piersol (1986). Random Data analysis and
measurement procedures, John Wiley & Sons.
Author's publications on delay estimation using the Hilbert Transform:
[3] : Saad, Z.S., et al., Analysis and use of FMRI response delays.
Hum Brain Mapp, 2001. 13(2): p. 74-93.
[4] : Saad, Z.S., E.A. DeYoe, and K.M. Ropella, Estimation of FMRI
Response Delays. Neuroimage, 2003. 18(2): p. 494-504.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dDepthMap
Overview ~1~
This program calculates the depth of ROIs, masks and 'background', using
the fun Euclidean Distance Transform (EDT).
Basically, this means calculating the Euclidean distance of each
voxel's centroid to the nearest boundary with a separate ROI (well, to be
brutally technical, to the centroid of the nearest voxel in a neighboring ROI).
The input dataset should be a map of ROIs (so, integer-valued). The
EDT values are calculated throughout the entire FOV by default,
even in the zero/background regions (there is an option to control this).
written by: PA Taylor and P Lauren (SSCC, NIMH, NIH)
Description ~2~
This code calculates the Euclidean Distance Transform (EDT) for 3D
volumes following this nice, efficient algorithm, by Felzenszwalb
and Huttenlocher (2012; FH2012):
Felzenszwalb PF, Huttenlocher DP (2012). Distance Transforms of
Sampled Functions. Theory of Computing 8:415-428.
https://cs.brown.edu/people/pfelzens/papers/dt-final.pdf
Thanks to C. Rorden for pointing this paper out and discussing it.
The current code here extends/tweaks the FH2012 algorithm to a more
general case of having several different ROIs present, for running
in 3D (trivial extension), and for having voxels of non-unity and
non-isotropic lengths. It does this by utilizing the fact that at
its very heart, the FH2012 algorithm works line by line and can even
be thought of as working boundary-by-boundary.
Here, the zero-valued 'background' is also just treated like an ROI,
with one difference. At a FOV boundary, the zero-valued
ROI/background is treated as open, so that the EDT value at each
'zero' voxel is always to one of the shapes within the FOV. For
nonzero ROIs, one can treat the FOV boundary *either* as an ROI edge
(EDT value there will be 1 edge length) *or* as being open.
==========================================================================
Command usage and option list ~1~
3dDepthMap [options] -prefix PREF -input DSET
where:
-input DSET :(req) input dataset
-prefix PREF :(req) output prefix name
-mask MASK :mask dataset. NB: this mask is only applied *after*
the EDT has been calculated. Therefore, the boundaries
of this mask have no effect on the calculated distance
values, except for potentially zeroing some out at the
end.
-dist_sq :by default, the output EDT volume contains distance
values. By using this option, the output values are
distance**2.
-ignore_voxdims :this EDT algorithm works in terms of physical distance
and uses the voxel dimension info in each direction, by
default. However, using this option will ignore voxel
size, producing outputs as if each voxel dimension was
unity.
-rimify RIM :instead of outputting a depthmap for each ROI, output
a map of each ROI's 'rim' voxels---that is, the boundary
layer or periphery up to thickness RIM---if RIM>0.
+ Note that RIM is applied to whatever kind of depth
information you are calculating: if you use '-dist_sq'
then the voxel's distance-squared value to the ROI edge
is compared with RIM; if using '-ignore_voxdims', then
the number-of-voxels to the edge is compared with RIM.
The depthmap thresholding is applied as:
abs(DEPTH)<=RIM.
+ When using this opt, any labeltable/atlastable
from the original should be passed along, as well.
+ A negative RIM value inverts the check, and the
output is kept if the depth info is:
abs(DEPTH)>=abs(RIM).
NB: with a negative RIM value, it is possible an ROI
could disappear!
-zeros_are_zero :by default, EDT values are output for the full FOV,
even zero-valued regions. If this option is used, EDT
values are only reported within the nonzero locations
of the input dataset.
-zeros_are_neg :if this option is used, EDT in the zero/background
of the input will be negative (def: they are positive).
This opt cannot be used if '-zeros_are_zero' is.
-nz_are_neg :if this option is used, EDT in the nonzero ROI regions
of the input will be negative (def: they are positive).
-bounds_are_not_zero :this flag affects how FOV boundaries are treated for
nonzero ROIs: by default, they are viewed as ROI
boundaries (so the FOV is a closed boundary for an ROI,
as if the FOV were padded by an extra layer of zeros);
but when this option is used, the ROI behaves as if it
continued 'infinitely' at the FOV boundary (so it is
an open boundary). Zero-valued ROIs (= background)
are not affected by this option.
-only2D SLI :instead of running the full 3D EDT, run it just in 2D, per
plane. Provide the slice plane you want to run along
as the single argument SLI:
"axi" -> for axial slice
"cor" -> for coronal slice
"sag" -> for sagittal slice
-binary_only :if the input is a binary mask or should be treated as
one (all nonzero voxels -> 1; all zeros stay 0), then
using this option will speed up the calculation. See
Notes below for more explanation of this. NOT ON YET!
-verb V :manage verbosity when running code (def: 1).
Providing a V of 0 means to run quietly.
==========================================================================
Notes ~1~
Depth and the Euclidean Distance Transform ~2~
The original EDT algorithm of FH2012 was developed for a simple binary
mask input (and actually for homogeneous data grids of spacing=1). This
program, however, was built to handle more generalized cases of inputs,
namely ROI maps (and arbitrary voxel dimensions).
The tradeoff of the expansion to handling ROI maps is an increase in
processing time---the original binary-mask algorithm is *very* efficient,
and the generalized one is still pretty quick but less so.
So, if you know that your input should be treated as a binary mask, then
you can use the '-binary_only' option to utilize the more efficient
(and less generalized) algorithm. The output dataset should be the same
in either case---this option flag is purely about speed of computation.
All other options about outputting dist**2 or negative values/etc. can be
used in conjunction with the '-binary_only', too.
==========================================================================
Examples ~1~
1) Basic case:
3dDepthMap \
-input roi_map.nii.gz \
-prefix roi_map_EDT.nii.gz
2) Same as above, but only output distances within nonzero regions/ROIs:
3dDepthMap \
-zeros_are_zero \
-input roi_map.nii.gz \
-prefix roi_map_EDT_ZZ.nii.gz
3) Output distance-squared at each voxel:
3dDepthMap \
-dist_sq \
-input mask.nii.gz \
-prefix mask_EDT_SQ.nii.gz
4) Distinguish ROIs from nonzero background by making the former have
negative distance values in output:
3dDepthMap \
-nz_are_neg \
-input roi_map.nii.gz \
-prefix roi_map_EDT_NZNEG.nii.gz
5) Have output voxel values represent (number of vox)**2 from a boundary;
voxel dimensions are ignored here:
3dDepthMap \
-ignore_voxdims \
-dist_sq \
-input roi_map.nii.gz \
-prefix roi_map_EDT_SQ_VOX.nii.gz
6) Basic case, with option for speed-up because the input is a binary mask
(i.e., only ones and zeros); any of the other above options can
be combined with this, too:
3dDepthMap \
-binary_only \
-input roi_mask.nii.gz \
-prefix roi_mask_EDT.nii.gz
7) Instead of outputting ROI depth, output a map of the ROI rims, keeping
each ROI's boundary layer out to a depth of 1.6mm:
3dDepthMap \
-input roi_map.nii.gz \
-rimify 1.6 \
-prefix roi_map_rim.nii.gz
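8) Run the EDT in 2D only, separately within each axial slice (a minimal
sketch; the output prefix here is hypothetical):
3dDepthMap \
-input roi_map.nii.gz \
-only2D axi \
-prefix roi_map_EDT_2D.nii.gz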
==========================================================================
AFNI program: 3dDespike
Usage: 3dDespike [options] dataset
Removes 'spikes' from the 3D+time input dataset and writes
a new dataset with the spike values replaced by something
more pleasing to the eye.
------------------
Outline of Method:
------------------
* L1 fit a smooth-ish curve to each voxel time series
[see -corder option for description of the curve]
[see -NEW option for a different & faster fitting method]
* Compute the MAD of the difference between the curve and
the data time series (the residuals).
* Estimate the standard deviation 'sigma' of the residuals
from the MAD.
* For each voxel value, define s = (value-curve)/sigma.
* Values with s > c1 are replaced with a value that yields
a modified s' = c1+(c2-c1)*tanh((s-c1)/(c2-c1)).
* c1 is the threshold value of s for a 'spike' [default c1=2.5].
* c2 is the upper range of the allowed deviation from the curve:
s=[c1..infinity) is mapped to s'=[c1..c2) [default c2=4].
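* For example (an illustrative calculation with the default cut values):
a raw deviation of s=6 maps to s' = 2.5 + 1.5*tanh(3.5/1.5) =~ 3.97,
which stays just below c2=4.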
An alternative method for replacing the spike value is provided
by the '-localedit' option, and that method is preferred by
many users.
The input dataset can be stored in short or float formats.
The output dataset will always be stored in floats. [Feb 2017]
--------
Options:
--------
-ignore I = Ignore the first I points in the time series:
these values will just be copied to the
output dataset [default I=0].
-corder L = Set the curve fit order to L:
the curve that is fit to voxel data v(t) is
f(t) = a + b*t + c*t*t + SUM_{k=1..L} [ d_k*sin(2*PI*k*t/T) + e_k*cos(2*PI*k*t/T) ]
where T = duration of time series;
the a,b,c,d,e parameters are chosen to minimize
the sum over t of |v(t)-f(t)| (L1 regression);
this type of fitting is insensitive to large
spikes in the data. The default value of L is
NT/30, where NT = number of time points.
-cut c1 c2 = Alter default values for the spike cut values
[default c1=2.5, c2=4.0].
-prefix pp = Save de-spiked dataset with prefix 'pp'
[default pp='despike']
-ssave ttt = Save 'spikiness' measure s for each voxel into a
3D+time dataset with prefix 'ttt' [default=no save]
-nomask = Process all voxels
[default=use a mask of high-intensity voxels, ]
[as created via '3dAutomask -dilate 4 dataset'].
-dilate nd = Dilate 'nd' times (as in 3dAutomask). The default
value of 'nd' is 4.
-q[uiet] = Don't print '++' informational messages.
-localedit = Change the editing process to the following:
If a voxel |s| value is >= c2, then replace
the voxel value with the average of the two
nearest non-spike (|s| < c2) values; the first
one previous and the first one after.
Note that the c1 cut value is not used here.
-NEW = Use the 'new' method for computing the fit, which
should be faster than the L1 method for long time
series (200+ time points); however, the results
are similar but NOT identical. [29 Nov 2013]
* You can also make the program use the 'new'
method by setting the environment variable
AFNI_3dDespike_NEW
to the value YES; as in
setenv AFNI_3dDespike_NEW YES (csh)
export AFNI_3dDespike_NEW=YES (bash)
* If this variable is set to YES, you can turn off
the '-NEW' processing by using the '-OLD' option.
-->>* For time series more than 500 points long, the
'-OLD' algorithm is tremendously slow. You should
use the '-NEW' algorithm in such cases.
** At some indeterminate point in the future, the '-NEW'
method will become the default!
-->>* As of 29 Sep 2016, '-NEW' is the default if there
are more than 500 points in the time series dataset.
-NEW25 = A slightly more aggressive despiking approach than
the '-NEW' method.
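--------
Example:
--------
A minimal usage sketch (the dataset name 'rest+orig' is hypothetical):
3dDespike -NEW -corder 8 -prefix rest_despike rest+orig
This uses the faster '-NEW' fitting with a curve order of 8, and writes
the despiked result to a dataset with prefix 'rest_despike'.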
--------
Caveats:
--------
* Despiking may interfere with image registration, since head
movement may produce 'spikes' at the edge of the brain, and
this information would be used in the registration process.
This possibility has not been explored or calibrated.
* [LATER] Actually, it seems like the registration problem
does NOT happen, and in fact, despiking seems to help!
* Check your data visually before and after despiking and
registration!
=========================================================================
* This binary version of 3dDespike is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dDetrend
Usage: 3dDetrend [options] dataset
* This program removes components from voxel time series using
linear least squares. Each voxel is treated independently.
* Note that least squares detrending is equivalent to orthogonalizing
the input dataset time series with respect to the basis time series
provided by the '-vector', '-polort', et cetera options.
* The input dataset may have a sub-brick selector string; otherwise,
all sub-bricks will be used.
*** You might also want to consider using program 3dBandpass ***
General Options:
-prefix pname = Use 'pname' for the output dataset prefix name.
[default='detrend']
-session dir = Use 'dir' for the output dataset session directory.
[default='./'=current working directory]
-verb = Print out some verbose output as the program runs.
-replace = Instead of subtracting the fit from each voxel,
replace the voxel data with the time series fit.
-normalize = Normalize each output voxel time series; that is,
make the sum-of-squares equal to 1.
N.B.: This option is only valid if the input dataset is
stored as floats! (1D files are always floats.)
-byslice = Treat each input vector (infra) as describing a set of
time series interlaced across slices. If NZ is the
number of slices and NT is the number of time points,
then each input vector should have NZ*NT values when
this option is used (usually, they only need NT values).
The values must be arranged in slice order, then time
order, in each vector column, as shown here:
f(z=0,t=0) // first slice, first time
f(z=1,t=0) // second slice, first time
...
f(z=NZ-1,t=0) // last slice, first time
f(z=0,t=1) // first slice, second time
f(z=1,t=1) // second slice, second time
...
f(z=NZ-1,t=NT-1) // last slice, last time
Component Options:
These options determine the components that will be removed from
each dataset voxel time series. They may be repeated to specify
multiple regression. At least one component must be specified.
-vector vvv = Remove components proportional to the columns vectors
of the ASCII *.1D file 'vvv'. You may use a
sub-vector selector string to specify which columns
to use; otherwise, all columns will be used.
For example:
-vector 'xyzzy.1D[3,5]'
will remove the 4th and 6th columns of file xyzzy.1D
from the dataset (sub-vector indexes start at 0).
You can use multiple -vector instances to specify
components from different files.
-expr eee = Remove components proportional to the function
specified in the expression string 'eee'.
Any single letter from a-z may be used as the
independent variable in 'eee'. For example:
-expr 'cos(2*PI*t/40)' -expr 'sin(2*PI*t/40)'
will remove sine and cosine waves of period 40
from the dataset.
-polort ppp = Add Legendre polynomials of order up to and
including 'ppp' in the list of vectors to remove.
-del ddd = Use the numerical value 'ddd' for the stepsize
in subsequent -expr options. If no -del option
is ever given, then the TR given in the dataset
header is used for 'ddd'; if that isn't available,
then 'ddd'=1.0 is assumed. The j-th time point
will have independent variable = j * ddd, starting
at j=0. For example:
-expr 'sin(x)' -del 2.0 -expr 'z**3'
means that the stepsize in 'sin(x)' is delta-x=TR,
but the stepsize in 'z**3' is delta-z = 2.
N.B.: expressions are NOT calculated on a per-slice basis when the
-byslice option is used. If you have to do this, you could
compute vectors with the required time series using 1deval.
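Example:
A minimal usage sketch (the file names 'motion.1D' and 'epi+orig' are
hypothetical): remove a quadratic trend plus components proportional to
the columns of a motion parameter file:
3dDetrend -prefix epi_det -polort 2 -vector motion.1D epi+orig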
Detrending 1D files
-------------------
As far as '3d' programs are concerned, you can input a 1D file as
a 'dataset'. Each row is a separate voxel, and each column is a
separate time point. If you want to detrend a single column, then
you need to transpose it on input. For example:
3dDetrend -prefix - -vector G1.1D -polort 3 G5.1D\' | 1dplot -stdin
Note that the '-vector' file is NOT transposed with \', but that
the input dataset file IS transposed. This is because in the first
case the program expects a 1D file, and so knows that the column
direction is time. In the second case, the program expects a 3D
dataset, and when given a 1D file, knows that the row direction is
time -- so it must be transposed. I'm sorry if this is confusing,
but that's the way it is.
NOTE: to have the output file appear so that time is in the column
direction, you'll have to add the option '-DAFNI_1D_TRANOUT=YES'
to the command line, as in
3dDetrend -DAFNI_1D_TRANOUT=YES -prefix - -vector G1.1D -polort 3 G5.1D\' > Q.1D
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dDFT
++ Authored by: Kevin Murphy & Zhark the Transformer
Usage: 3dDFT [options] dataset
where 'dataset' is complex- or float-valued.
* Carries out the DFT along the time axis.
* To do the DFT along the spatial axes, use program 3dFFT.
* The input dataset can be complex-valued or float-valued.
If it is any other data type, it will be converted to floats
before processing.
* [June 2018] The FFT length used is NOT rounded up to a convenient
FFT radix; instead, the FFT size is the actual value supplied in option
'-nfft' or the number of time points (if '-nfft' isn't used).
* However, if the FFT length has large prime factors (say > 97), the
Fast Fourier Transform algorithm will be relatively slow. This slowdown
is probably only noticeable for very long files, since reading and
writing datasets seems to take most of the elapsed time in 'normal' cases.
OPTIONS:
--------
-prefix PP == use 'PP' as the prefix of the output file
-abs == output float dataset = abs(DFT)
* Otherwise, the output file is complex-valued.
You can then use 3dcalc to extract the real part, the
imaginary part, the phase, etc.; see its '-cx2r' option:
3dcalc -cx2r REAL -a cxset+orig -expr a -prefix rset+orig
* Please note that if you view a complex dataset in AFNI,
the default operation is that you are looking at the
absolute value of the dataset.
++ You can control the way a complex IMAGE appears via
the 'Disp' control panel (ABS, PHASE, REAL, IMAGE).
++ You can control the way a complex TIME SERIES graph appears
via environment variable AFNI_GRAPH_CX2R (in 'EditEnv').
-nfft N == use 'N' for DFT length (must be >= #time points)
-detrend == least-squares remove linear drift before DFT
[for more intricate detrending, use 3dDetrend first]
-taper f == taper 'f' fraction of data at ends (0 <= f <= 1).
[Hamming 'raised cosine' taper of f/2 of the ]
[data length at each end; default is no taper]
[cf. 3dPeriodogram -help for tapering details!]
-inverse == Do the inverse DFT:
SUM{ data[j] * exp(+2*PI*i*j/nfft) } * 1/nfft
instead of the forward transform
SUM{ data[j] * exp(-2*PI*i*j/nfft) }
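EXAMPLE:
--------
A minimal usage sketch (the dataset name 'epi+orig' is hypothetical):
detrend, taper 10% of the data at the ends, and output the magnitude
of the DFT:
3dDFT -abs -detrend -taper 0.1 -prefix epi_dft epi+orig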
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dDiff
This is a program to examine element-wise differences between two images.
Usage ~1~
3dDiff [display opt] [-tol TOLERANCE] [-mask MASK] <DSET_1> <DSET_2>
where:
-tol TOLERANCE :(opt) the floating-point tolerance/epsilon
-mask MASK :(opt) the mask to use when comparing
-a DSET_1 :(req) input dataset a
-b DSET_2 :(req) input dataset b
... and there are the following (mutually exclusive) display options:
-q :(opt) quiet mode, indicate 0 for no differences and
1 for differences. -1 indicates that an error has
occurred (aka "Rick Mode").
-tabular :(opt) display only a table of differences, plus
a summary line (the same one as -brutalist).
Mostly for use with 4D data.
-brutalist :(opt) display one-liner. The first number indicates
whether there is a difference, the second number
indicates how many elements (3D) or volumes (4D)
were different, and the last number indicates the
total number of elements/volumes compared.
if there is a dataset dimension mismatch or an
error, then this will be a line of all -1s.
See examples below for sample output.
-long_report :(opt) print a large report with lots of information.
If no display options are used, a short message with a summary will print.
===========================================================================
Examples ~1~
1) Basic Example: comparing two images
A) In the 3D case, you get a short message indicating if there is no
difference:
$ 3dDiff -a image.nii -b image.nii
++ Images do NOT differ
... or a bit more information if there is a difference:
$ 3dDiff -a mine.nii -b yours.nii
++ Images differ: 126976 of 126976 elements differ (100.00%)
B) In the 4D case, the total number of elements AND total number of
volumes which differ are reported:
$ 3dDiff -a mine.nii -b yours.nii
++ Images differ: 10 of 10 volumes differ (100.00%) and 5965461 of 6082560 elements (98.07%)
2) A tolerance can be used to be more permissive of differences. In this
example, any voxel difference of 100 or less is considered equal:
$ 3dDiff -tol 100 -a mine.nii -b yours.nii
++ Images differ: 234529 of 608256 elements differ (38.56%)
3) A mask can be used to limit which regions are being compared:
$ 3dDiff -mask roi.nii -a mine.nii -b yours.nii
++ Images differ: 5 of 10 volumes differ (50.00%) and 675225 of 1350450 elements (50.00%)
NB: The mask is assumed to have a single time point; volumes in the mask
beyond the [0]th are ignored.
===========================================================================
Modes of output/reporting ~1~
There are a variety of reporting modes for 3dDiff, with varying levels
of verbosity. They can be used to view the image comparison in both human
and machine-readable formats. The default mode is the version shown in the
above examples, where a short statement is made summarizing the differences.
Reporting modes are mutually exclusive, but may be used with any of the
other program options without restriction.
1) Quiet Mode (-q) ~2~
Returns a single integer value in the range [-1, 1]:
-1 indicates a program error (e.g., grids do not match)
0 indicates that the images have no differences
1 indicates that the images have differences
Examples:
$ 3dDiff -q -a image.nii # no image b supplied
-1
$ 3dDiff -q -a image.nii -b image.nii # an image agrees with itself
0
$ 3dDiff -q -a mine.nii -b yours.nii # two different images
1
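A small scripting sketch (tcsh; the dataset names are hypothetical),
capturing the -q output in a variable so a script can branch on it:
$ set dd = `3dDiff -q -a mine.nii -b yours.nii`
$ if ( $dd == 1 ) echo "datasets differ"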
2) Tabular Mode (-tabular) ~2~
Prints out a table of values. Useful for 4D data, but not recommended
for 3D data.
Each row of the table will indicate the volume index and number of
differing elements. At the end of the table, a summary line will
appear (see -brutalist).
Example (just using the first 10 volumes of two datasets):
$ 3dDiff -tabular -a "mine.nii[0..9]" -b "yours.nii[0..9]"
0: 596431
1: 596465
2: 596576
3: 596644
4: 596638
5: 596658
6: 596517
7: 596512
8: 596500
9: 596520
1 10 10 1.00000
3) Brutalist Mode (-brutalist) ~2~
Creates a one-line summary of the differences. The numbers appear in the
following order:
Summary [-1, 1], -1 failure, 1 differences, 0 agreement
Differences [0, NV/NT], the number of differing elements (3D) or
volumes (4D)
Total Compared NV/NT, the number of elements/volumes compared
Fraction Diff [0, 1.0], the fraction of differing elements/volumes
Examples:
$ 3dDiff -brutalist -a "mine.nii[0]" -b "yours.nii[0]" # 3D
1 596431 608256 0.98056
... which means: There is a difference, 596431 elements differed, and
608256 elements were compared. The fraction of differing elements is
0.98056.
$ 3dDiff -brutalist -a "mine.nii[0..9]" -b "yours.nii[0..9]" # 4D
1 10 10 1.00000
... which means: There is a difference, 10 volumes differed, and 10 volumes
were compared. The fraction of differing volumes is 1.0.
If the program fails for some reason, brutalist output will be an array
of all -1s, like this:
$ 3dDiff -brutalist -a image.nii # no dataset b to compare to
-1 -1 -1 -1
4) Long Report Mode (-long_report) ~2~
Prints a very large report with lots of information.
**WARNING:** this report is intended for use with humans, not machines!
The author makes no guarantee of backwards compatibility for this mode,
and will add or remove report outputs at his own (shockingly whimsical)
discretion.
===========================================================================
Note on unhappy comparisons ~1~
If this program reports that the images cannot be element-wise compared,
you can examine the header information with 3dinfo. In particular, check out
the section, "Options requiring dataset pairing at input", most notably
options starting with "same", for example, -same_grid.
===========================================================================
Author note: ~1~
Written by JB Teves, who notes:
"Perfection is achieved not when there is no data left to
add, but when there is no data left to throw away."
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3ddot
Usage: 3ddot [options] dset1 [dset2 dset3 ...]
Output = correlation coefficient between sub-brick pairs
All datasets on the command line will get catenated
at loading time and should all be on the same grid.
- you can use sub-brick selectors on the dsets
- the result is a number printed to stdout
Options:
-mask mset Means to use the dataset 'mset' as a mask:
Only voxels with nonzero values in 'mset'
will be used from 'dataset'. Note
that the mask dataset and the input dataset
must have the same number of voxels.
-mrange a b Means to further restrict the voxels from
'mset' so that only those mask values
between 'a' and 'b' (inclusive) will
be used. If this option is not given,
all nonzero values from 'mset' are used.
Note that if a voxel is zero in 'mset', then
it won't be included, even if a < 0 < b.
-demean Means to remove the mean from each volume
prior to computing the correlation.
-docor Return the correlation coefficient (default).
-dodot Return the dot product (unscaled).
-docoef Return the least squares fit coefficients
{a,b} so that dset2 is approximately a + b*dset1
-dosums Return the 6 numbers xbar=<x> ybar=<y>
<(x-xbar)^2> <(y-ybar)^2> <(x-xbar)(y-ybar)>
and the correlation coefficient.
-doeta2 Return eta-squared (Cohen, NeuroImage 2008).
-dodice Return the Dice coefficient (the Sorensen-Dice index).
-show_labels Print sub-brick labels to help identify what
is being correlated. This option is useful when
you have more than 2 sub-bricks at input.
-upper Compute upper triangular matrix
-full Compute the whole matrix. A waste of time, but handy
for parsing.
-1D Comment out the headings so the output can be read in 1D format.
This is only useful with -full.
-NIML Write output in NIML 1D format. Nicer for plotting.
-full and -show_labels are automatically turned on with -NIML.
For example:
3ddot -NIML anat.001.sc7z.sigset+orig"[0,1,2,3,4]" \
> corrmat.1D
1dRplot corrmat.1D
or
1dRplot -save somecorr.jpg -i corrmat.1D
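Another minimal sketch (the dataset names are hypothetical): print the
correlation coefficient between two single-brick datasets after removing
their means:
3ddot -demean -docor anat1+orig anat2+orig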
Note: This program is not efficient when more than two subbricks are input.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3ddot_beta
Beta version of updating 3ddot. Right now, *only* doing eta2 tests,
and only outputting a full matrix to a text file.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMAND: 3ddot_beta -input FILE -doeta2 \
{-mask MASK } -prefix PREFIX
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ OUTPUT:
1) A single text file with the correlation-like matrix. If the input
data set has N bricks, then the matrix will be NxN.
+ RUNNING:
-input FILE :file with N bricks.
-prefix PREFIX :output text file will be called PREFIX_eta2.dat.
-doeta2 :right now, required switch (more tests might be
present in the future, if demand calls for it).
-mask MASK :can include a mask within which to take values.
Otherwise, data should be masked already.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3ddot_beta \
-input RSFC_MAPS_cat+orig \
-mask mask.nii.gz \
-doeta2 \
-prefix My_Matrix_File
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
___________________________________________________________________________
AFNI program: 3dDTeig
Usage: 3dDTeig [options] dataset
Computes eigenvalues and eigenvectors for an input dataset of
6 sub-bricks Dxx,Dxy,Dyy,Dxz,Dyz,Dzz (lower diagonal order).
The results are stored in a 14-subbrick bucket dataset.
The resulting 14-subbricks are
lambda_1,lambda_2,lambda_3,
eigvec_1[1-3],eigvec_2[1-3],eigvec_3[1-3],
FA,MD.
The output is a bucket dataset. The input dataset
may use a sub-brick selection list, as in program 3dcalc.
Options:
-prefix pname = Use 'pname' for the output dataset prefix name.
[default='eig']
-datum type = Coerce the output data to be stored as the given type
which may be byte, short or float. [default=float]
-sep_dsets = save eigenvalues,vectors,FA,MD in separate datasets
-uddata = tensor data is stored as upper diagonal
instead of lower diagonal
Mean diffusivity (MD) calculated as simple average of eigenvalues.
Fractional Anisotropy (FA) calculated according to Pierpaoli C, Basser PJ.
Microstructural and physiological features of tissues elucidated by
quantitative-diffusion tensor MRI, J Magn Reson B 1996; 111:209-19
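Example:
A minimal usage sketch (the tensor dataset name 'DT+orig' is hypothetical):
3dDTeig -prefix DTeig -sep_dsets DT+orig
With '-sep_dsets', the eigenvalues, eigenvectors, FA and MD are written
to separate datasets whose names start with 'DTeig'.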
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dDTtoDWI
Usage: 3dDTtoDWI [options] gradient-file I0-dataset DT-dataset
Computes multiple gradient images from 6 principle direction tensors and
corresponding gradient vector coordinates applied to the I0-dataset.
The program takes three parameters as input :
a 1D file of the gradient vectors with lines of ASCII floats Gxi,Gyi,Gzi.
Only the non-zero gradient vectors are included in this file (no G0 line).
The I0 dataset is a volume without any gradient applied.
The DT dataset is the 6-sub-brick dataset containing the diffusion tensor data,
Dxx, Dxy, Dyy, Dxz, Dyz, Dzz (lower triangular row-wise order)
Options:
-prefix pname = Use 'pname' for the output dataset prefix name.
[default='DWI']
-automask = mask dataset so that the gradient images
are computed only for high-intensity (presumably
brain) voxels. The intensity level is determined
the same way that 3dClipLevel works.
-datum type = output dataset type [float/short/byte]
(default is float).
-help = show this help screen.
-scale_out_1000 = matches with 3dDWItoDT's '-scale_out_1000'
functionality. If the option was used
there, then use it here, too.
Example:
3dDTtoDWI -prefix DWI -automask tensor25.1D 'DT+orig[26]' DT+orig.
The output is an n sub-brick bucket dataset containing computed DWI images,
where n is the number of vectors in the gradient file + 1.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
AFNI program: 3dDTtoNoisyDWI
Take an AFNI-style DT file as input, such as might be output by 3dDWItoDT
(which means that the DT elements are ordered: Dxx,Dxy,Dyy,Dxz,Dyz,Dzz),
as well as a set of gradients, and then generate a synthetic set of DWI
measures with a given SNR. Might be useful for simulations/testing.
Part of FATCAT (Taylor & Saad, 2013) in AFNI.
It is similar in premise to 3dDTtoDWI; however, this allows for the modeled
inclusion of Rician noise (such as appears in MRI magnitude images).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMAND: 3dDTtoNoisyDWI -dt_in DTFILE -grads GRADFILE -noise_DWI FF \
{-bval BB} {-S0 SS} {-mask MASK } -prefix PREFIX
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ OUTPUT:
1) If N gradients are input, then the output is a file with N+1 bricks
that mimics a set of B0+DWI data (0th brick is the B0 reference).
+ RUNNING:
-dt_in DTFILE :diffusion tensor file, which should have six bricks
of DT components ordered in the AFNI (i.e., 3dDWItoDT)
manner:
Dxx,Dxy,Dyy,Dxz,Dyz,Dzz.
-grads GRADFILE :text file of gradients arranged in three columns.
It is assumed that there is no row of all zeros in the
GRADFILE (i.e., representing the b=0 line).
If there are N rows in GRADFILE, then the output DWI
file will have N+1 bricks (0th will be the b=0
reference set of noise S0 measures).
-noise_DWI FF :fractional value of noise in DWIs. The magnitude will
be set by the b=0 reference signal, S0. Rician noise
is used, which is characterized by a standard
deviation, sigma, so that FF = sigma/S0 = 1/SNR0.
For example, FF=0.05 roughly corresponds to an
SNR0=20 'measurement'.
-noise_B0 FF2 :optional switch to use a different fraction of Rician
noise in the b=0 reference image; one might consider
it realistic to have a much lower level of noise in
the reference signal, S0, mirroring the fact that
generally multiple averages of b=0 acquisitions are
averaged together. If no fraction is entered here,
then the simulation will run with FF2=FF.
-prefix PREFIX :output file name prefix. Will have N+1 bricks when
GRADFILE has N rows of gradients.
-mask MASK :can include a mask within which to calculate uncert.
Otherwise, data should be masked already.
-bval BB :optional DW factor to use if one has DT values scaled
to something physical (NB: AFNI 3dDWItoDT works in a
world of b=1, so the default setting here is BB=1; one
probably doesn't need to change this if using DTs made
by 3dDWItoDT).
-S0 SS :optional reference b=0 signal strength. Default value
SS=1000. This just sets scale of output.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3dDTtoNoisyDWI \
-dt_in DTI/DT_DT+orig \
-grads GRADS.dat \
-noise_DWI 0.1 \
-noise_B0 0 \
-prefix NEW_DWIs_SNR10
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
____________________________________________________________________________
AFNI program: 3dDWItoDT
Usage: 3dDWItoDT [options] gradient-file dataset
Computes 6 principal direction tensors from multiple gradient vectors
and corresponding DTI image volumes.
The program takes two parameters as input :
a 1D file of the gradient vectors with lines of ASCII floats:
Gxi, Gyi, Gzi.
Only the non-zero gradient vectors are included in this file (no G0
line).
** Now, a '1D' file of b-matrix elements can alternatively be input,
and *all* the gradient values are included!**
A 3D bucket dataset with Np+1 sub-briks where the first sub-brik is the
volume acquired with no diffusion weighting.
OUTPUTS:
+ you can output all 6 of the independent tensor values (Dxx, Dyy,
etc.), as well as all three eigenvalues (L1, L2, L3) and
eigenvectors (V1, V2, V3), and useful DTI parameters FA, MD and
RD.
+ 'Debugging bricks' can also be output, see below.
Options:
-prefix pname = Use 'pname' for the output dataset prefix name.
[default='DT']
-automask = mask dataset so that the tensors are computed only for
high-intensity (presumably brain) voxels. The intensity
level is determined the same way that 3dClipLevel works.
-mask dset = use dset as mask to include/exclude voxels
-bmatrix_NZ FF = switch to note that the input dataset is b-matrix,
not gradient directions, and there is *no* row of zeros
at the top of the file, similar to the format for the grad
input: N-1 rows in this file for N vols in matched data set.
There must be 6 columns of data, representing either elements
of G_{ij} = g_i*g_j (i.e., dyad of gradients, without b-value
included) or of the DW scaled version, B_{ij} = b*g_i*g_j.
The order of components is: G_xx G_yy G_zz G_xy G_xz G_yz.
-bmatrix_Z FF = similar to '-bmatrix_NZ' above, but assumes that first
row of the file is all zeros (or whatever the b-value for
the reference volume was!), i.e. there are N rows to the
text file and N volumes in the matched data set.
-bmatrix_FULL FF = exact same as '-bmatrix_Z FF' above (i.e. there are N
rows to the text file and N volumes in the matched data set)
with a much more commonsensical name. Definitely would
be the preferred way to go, for ease of usage!
-scale_out_1000 = increase output parameters that have physical units
(DT, MD, RD, L1, L2 and L3) by multiplying them by 1000. This
might be convenient, as the input bmatrix/gradient values
can have their physical magnitudes of ~1000 s/mm^2, for
which typical adult WM has diffusion values of MD~0.0007
(in physical units of mm^2/s), and people might not like so
many decimal points output; using this option rescales the
input b-values and would lead to having a typical MD~0.7
(now in units of x10^{-3} mm^2/s). If you are not using
bmatrix/gradient values that have their physical scalings,
then using this switch probably wouldn't make much sense.
FA, V1, V2 and V3 are unchanged.
-bmax_ref THR = if the 'reference' bvalue is actually >0, you can flag
that here. Otherwise, it is assumed to be zero.
At present, this is probably only useful/meaningful if
using the '-bmatrix_Z ...' or '-bmatrix_FULL ...'
option, where the reference bvalue must be found and
identified from the input info alone.
-nonlinear = compute iterative solution to avoid negative eigenvalues.
This is the default method.
-linear = compute simple linear solution.
-reweight = recompute weight factors at end of iterations and restart
-max_iter n = maximum number of iterations for convergence (Default=10).
Values can range from -1 to any positive integer less than
101. A value of -1 is equivalent to the linear solution.
A value of 0 results in only the initial estimate of the
diffusion tensor solution adjusted to avoid negative
eigenvalues.
-max_iter_rw n = max number of iterations after reweighting (Default=5)
values can range from 1 to any positive integer less
than 101.
-eigs = compute eigenvalues, eigenvectors, fractional anisotropy and mean
diffusivity in sub-briks 6-19. Computed as in 3dDTeig
-debug_briks = add sub-briks with Ed (error functional), Ed0 (orig.
error), number of steps to convergence and I0 (modeled B0
volume).
[May, 2017] This also now calculates two goodness-of-fit
measures and outputs a new PREFIX_CHI* dset that has two
briks:
brik [0]: chi^2_p,
brik [1]: chi^2_c.
These values are essentially calculated according to
Papadakis et al. (2003, JMRI), Eqs. 4 and 3,
respectively (in chi^2_c, the sigma value is the
variance of measured DWIs *per voxel*). Note for both
chi* values, only DWI signal values are used in the
calculation (i.e., where b>THR; by default,
THR=0.01, which can be changed using '-bmax_ref ...').
In general, chi^2_p values seem to be <<1, consistent
with Papadakis et al.'s Fig. 4; the chi^2_c values are
also pretty consistent with the same fig and seem to
be best viewed with the upper limit being roughly =Ndwi
or =Ndwi-7 (with the latter being the given degrees
of freedom value by Papadakis et al.)
-cumulative_wts = show overall weight factors for each gradient level
May be useful as a quality control
-verbose nnnnn = print convergence steps every nnnnn voxels that survive
to convergence loops (can be quite lengthy).
-drive_afni nnnnn = show convergence graphs every nnnnn voxels that
survive to convergence loops. AFNI must have NIML
communications on (afni -niml)
-sep_dsets = save tensor, eigenvalues, vectors, FA, MD in separate
datasets
-csf_val n.nnn = assign this diffusivity value to DWI data where the mean
value of the b=0 volumes is less than the mean of the
remaining volumes at each voxel. The default value is
'1.0 divided by the max bvalue in the grads/bmatrices'.
The assumption is that there are flow artifacts in CSF
and blood vessels that give rise to lower b=0 voxels.
NB: MD, RD L1, L2, L3, Dxx, Dyy, etc. values are all
scaled in the same way.
-min_bad_md N = change the min MD value used as a 'badness check' for
tensor fits that have veeery (-> unreasonably) large MD
values. Voxels where MD > N*(csf_val) will be treated
like CSF and turned into spheres with radius csf_val
(default N=100).
-csf_fa n.nnn = assign a specific FA value to those voxels described
above. The default is 0.012345678, for use in tractography
programs that may make special use of these voxels.
-opt mname = if mname is 'powell', use Powell's 2004 method for
optimization. If mname is 'gradient' use gradient descent
method. If mname is 'hybrid', use combination of methods.
MJD Powell, "The NEWUOA software for unconstrained
optimization without derivatives", Technical report DAMTP
2004/NA08, Cambridge University Numerical Analysis Group:
See: http://www.ii.uib.no/~lennart/drgrad/Powell2004.pdf
-mean_b0 = use the mean of all b=0 volumes for the linear computation and
as the initial linear estimate for the nonlinear method
Example:
3dDWItoDT -prefix rw01 -automask -reweight -max_iter 10 \
-max_iter_rw 10 tensor25.1D grad02+orig.
The output is a 6 sub-brick bucket dataset containing
Dxx, Dxy, Dyy, Dxz, Dyz, Dzz
(the lower triangular, row-wise elements of the tensor in symmetric matrix
form). Additional sub-briks may be appended with the -eigs and -debug_briks
options. These results are appropriate as the input to 3dDTeig.
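Another minimal sketch (the DWI dataset name 'dwi_all+orig' is
hypothetical), adding eigenvalue/eigenvector output and saving the
pieces as separate datasets:
3dDWItoDT -prefix DT -automask -eigs -sep_dsets tensor25.1D dwi_all+orig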
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
AFNI program: 3dDWUncert
OVERVIEW ~1~
Use jackknifing to estimate the uncertainty of DTI parameters which are
important for probabilistic tractography, on a per-voxel basis.
Produces useful input for 3dTrackID, which does both mini- and full
probabilistic tractography for GM ROIs in networks, part of
FATCAT (Taylor & Saad, 2013) in AFNI.
This version has been reprogrammed to include parallelized running via
OpenMP (as of Oct, 2016). So, it has the potential to run a lot more
quickly, assuming you have an OpenMPable setup for AFNI. The types/formats
of inputs and outputs have not changed from before.
****************************************************************************
OUTPUT ~1~
1) AFNI-format file with 6 subbricks, containing uncertainty
information. The bricks are in the following order:
[0] bias of e1 in direction of e2
[1] stdev of e1 in direction of e2
[2] bias of e1 in direction of e3
[3] stdev of e1 in direction of e3
[4] bias of FA
[5] stdev of FA
RUNNING ~1~
3dDWUncert -inset FILE -input [base of FA/MD/etc.] \
{-grads | -bmatrix_FULL} FILE -prefix NAME -iters NUMBER
... where:
-inset FILE :file with b0 and DWI subbricks
(e.g., input to 3dDWtoDTI)
-prefix PREFIX :output file name part.
-input INPREF :basename of DTI volumes output by,
e.g., 3dDWItoDT or TORTOISE. Assumes format of name
is, e.g.: INPREF_FA+orig.HEAD or INPREF_FA.nii.gz .
Files needed with same prefix are:
*FA*, *L1*, *V1*, *V2*, *V3* .
-input_list FILE :an alternative way to specify DTI input files, where
FILE is a NIML-formatted text file that lists the
explicit/specific files for DTI input. This option is
used in place of '-input INPREF'.
See below for the 'DTI LIST FILE EXAMPLE'.
-grads FF :file with 3 columns for x-, y-, and z-comps
of DW-gradients (which have unit magnitude).
NB: this option also assumes that only 1st DWI
subbrick has a b=0 image (i.e., all averaging of
multiple b=0 images has been done already); if such
is not the case, then you should convert your grads to
the bmatrix format and use `-bmatrix_FULL'.
OR
-bmatrix_Z FF :using this means that the file with gradient info
is in b-matrix format, with 6 columns representing:
b_xx b_yy b_zz b_xy b_xz b_yz.
NB: here, bvalue per image is the trace of the bmatr,
bval = b_xx+b_yy+b_zz, such as 1000 s/mm^2. This
option might be used, for example, if multiple
b-values were used to measure DWI data; this is an
AFNI-style bmatrix that needs to be input.
-bmatrix_FULL FF :exact same as '-bmatrix_Z FF' above (i.e. there are N
rows to the text file and N volumes in the matched
data set) with a much more commonsensical name.
Definitely would be the preferred way to go, for ease of
usage!
-iters NUMBER :number of jackknife resample iterations,
e.g. 300.
-mask MASK :can include a mask within which to calculate uncert.
Otherwise, data should be masked already.
-calc_thr_FA FF :set a threshold for the minimum FA value above which
one calculates uncertainty; useful if one doesn't want
to waste time calculating uncertainty in very low-FA
voxels that are likely GM/CSF. For example, in adult
subjects one might set FF=0.1 or 0.15, depending on
SNR and user's whims (default: FF=-1, i.e., do all).
-csf_fa NUMBER :number marking FA value of `bad' voxels, such as
those with S0 value <=mean(S_i), which breaks DT
assumptions due to, e.g., bulk/flow motion.
Default value of this matches 3dDWItoDT value of
csf_fa=0.012345678.
* * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * **
DTI LIST FILE EXAMPLE ~1~
Consider, for example, if you hadn't used the '-sep_dsets' option when
outputting all the tensor information from 3dDWItoDT. Then one could
specify the DTI inputs for this program with a file called, e.g.,
FILE_DTI_IN.niml.opts (the name *must* end with '.niml.opts'):
<DTIFILE_opts
dti_V1="SINGLEDT+orig[9..11]"
dti_V2="SINGLEDT+orig[12..14]"
dti_V3="SINGLEDT+orig[15..17]"
dti_FA="SINGLEDT+orig[18]"
dti_L1="SINGLEDT+orig[6]" />
This represents the *minimum* set of input files needed when running
3dDWUncert. (Note that MD isn't needed here.) You can also recycle a
NIMLly formatted file from '3dTrackID -dti_list'-- the extra inputs
needed for the latter are a superset of those needed here, and won't
affect anything detrimentally (I hope).
****************************************************************************
COMMENTS (mainly about running speedily)~1~
+ This program can be slow if you have looots of voxels and/or looots
of grads. *But*, it is written with OpenMP parallelization, so you
can make use of having multiple CPUs. The system environment variable
to specify the number of CPUs to use is OMP_NUM_THREADS.
You can specify OMP_NUM_THREADS in your ~/.bashrc, ~/.cshrc or other
shell RC file. Or, you can set it in the script you are using.
To verify that your OMP_NUM_THREAD variable has been set as you want,
you can use command line program 'afni_check_omp', and see what number
is output.
+ If your input DWI dataset has not been masked, you probably should input a
mask with '-mask ..', because otherwise the program will waste a looot
of time calculating DWI uncertainty of air and skull and other things
of no practical consequence.
EXAMPLES ~1~
1) Basic example (probably assuming data has been masked):
3dDWUncert \
-inset TEST_FILES/DTI/fin2_DTI_3mm_1+orig \
-prefix TEST_FILES/DTI/o.UNCERT \
-input TEST_FILES/DTI/DT \
-grads TEST_FILES/Siemens_d30_GRADS.dat \
-iters 300
2) Same as above, with a mask included as an option:
3dDWUncert \
-inset TEST_FILES/DTI/fin2_DTI_3mm_1+orig \
-prefix TEST_FILES/DTI/o.UNCERT \
-input TEST_FILES/DTI/DT \
-grads TEST_FILES/Siemens_d30_GRADS.dat \
-mask TEST_FILES/dwi_mask.nii.gz \
-iters 300
CITING ~1~
If you use this program, please reference the jackknifing algorithm done
with nonlinear fitting described in:
Taylor PA, Biswal BB (2011). Geometric analysis of the b-dependent
effects of Rician signal noise on diffusion tensor imaging
estimates and determining an optimal b value. MRI 29:777-788.
and the introductory/description paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
____________________________________________________________________________
AFNI program: 3dECM
Usage: 3dECM [options] dset
Computes voxelwise eigenvector centrality (ECM) and
stores the result in a new 3D bucket dataset as floats to
preserve their values. ECM of a voxel reflects the strength
and extent of a voxel's global connectivity as well as the
importance of the voxels that it is directly connected to.
Conceptually the process involves:
1. Calculating the correlation between voxel time series for
every pair of voxels in the brain (as determined by masking)
2. Calculating the eigenvector corresponding to the largest
eigenvalue of the similarity matrix.
Guaranteeing that the largest eigenvector is unique and therefore,
that an ECM solution exists, requires that the similarity matrix
is strictly positive. This is enforced by either adding one to
the correlations as in (Lohmann et. al. 2010), or by adding one
and dividing by two (Wink et al. 2012).
Calculating the first eigenvector of a whole-brain similarity matrix
requires a lot of system memory and time. 3dECM uses the optimizations
described in (Wink et al 2012) to improve performance. It additionally
provides a mechanism for limiting the amount of system memory used to
avoid memory related crashes.
The performance can also be improved by reducing the number of
connections in the similarity matrix using either a correlation
or sparsity threshold. The correlation threshold simply removes
all connections with a correlation less than the threshold. The
sparsity threshold is a percentage and reflects the fraction of
the strongest connections that should be retained for analysis.
Sparsity thresholding uses a histogram approach to 'learn' a
correlation threshold that would result in the desired level
of sparsity. Due to ties and virtual ties due to poor precision
for differentiating connections, the desired level of sparsity
will not be met exactly; 3dECM will retain more connections than
requested.
Whole brain ECM results in very small voxel values and small
differences between cortical areas. Reducing the number of
connections in the analysis improves the voxel values and
provides greater contrast between cortical areas.
Lohmann G, Margulies DS, Horstmann A, Pleger B, Lepsien J, et al.
(2010) Eigenvector Centrality Mapping for Analyzing
Connectivity Patterns in fMRI Data of the Human Brain. PLoS
ONE 5(4): e10232. doi: 10.1371/journal.pone.0010232
Wink, A. M., de Munck, J. C., van der Werf, Y. D., van den Heuvel,
O. A., & Barkhof, F. (2012). Fast Eigenvector Centrality
Mapping of Voxel-Wise Connectivity in Functional Magnetic
Resonance Imaging: Implementation, Validation, and
Interpretation. Brain Connectivity, 2(5), 265-274.
doi:10.1089/brain.2012.0087
Options:
-full = uses the full power method (Lohmann et. al. 2010).
Enables the use of thresholding and calculating
thresholded centrality. Uses sparse array to reduce
memory requirement. Automatically selected if
-thresh, or -sparsity are used.
-fecm = uses a shortcut that substantially speeds up
computation, but is less flexible in what can be
done to the similarity matrix; i.e., it does not allow
thresholding correlation coefficients. Based on
fast eigenvector centrality mapping (Wink et al.
2012). Default when -thresh or -sparsity
are NOT used.
-thresh r = exclude connections with correlation < r. Cannot be
used with FECM.
-sparsity p = only include the top p% (0 < p <= 100) of connections in
the calculation. Cannot be used with the FECM method. (default)
-do_binary = perform the ECM calculation on a binarized version of the
connectivity matrix; this requires a connectivity or
sparsity threshold.
-shift s = value that should be added to correlation coeffs to
enforce non-negativity, s >= 0. [default = 0.0, unless
-fecm is specified in which case the default is 1.0
(e.g. Wink et al 2012)].
-scale x = value that correlation coeffs should be multiplied by
after shifting, x >= 0 [default = 1.0, unless -fecm is
specified in which case the default is 0.5 (e.g. Wink et
al 2012)].
-eps p = sets the stopping criterion for the power iteration
l2|v_old - v_new| < eps*|v_old|. default = .001 (0.1%)
-max_iter i = sets the maximum number of iterations to use in
in the power iteration. default = 1000
-polort m = Remove polynomial trend of order 'm', for m=0..3.
[default is m=1; removal is by least squares].
Using m=0 means that just the mean is removed.
-autoclip = Clip off low-intensity regions in the dataset,
-automask = so that the correlation is only computed between
high-intensity (presumably brain) voxels. The
mask is determined the same way that 3dAutomask works.
-mask mmm = Mask to define 'in-brain' voxels. Reducing the number
of voxels included in the calculation will
significantly speed up the calculation. Consider using
a mask to constrain the calculations to the grey matter
rather than the whole brain. This is also preferable
to using -autoclip or -automask.
-prefix p = Save output into dataset with prefix 'p'
[default prefix is 'ecm'].
-memory G = Calculating eigenvector centrality can consume a lot
of memory. If unchecked this can crash a computer
or cause it to hang. If the memory hits this limit
the tool will error out, rather than affecting the
system [default is 2G].
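Example:
A minimal usage sketch (the dataset and mask names are hypothetical):
compute sparsity-thresholded ECM within a grey-matter mask, keeping the
top 10% of connections:
3dECM -mask gm_mask+orig -sparsity 10 -prefix ecm_sp10 rest_preproc+orig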
Notes:
* The output dataset is a bucket type of floats.
* The program prints out an estimate of its memory used
when it ends. It also prints out a progress 'meter'
to keep you pacified.
-- RWCox - 31 Jan 2002 and 16 Jul 2010
-- Cameron Craddock - 13 Nov 2015
-- Daniel Clark - 14 March 2016
=========================================================================
* This binary version of 3dECM is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dedge3
Usage: 3dedge3 [options] dset dset ...
Does 3D Edge detection using the library 3DEdge
by Gregoire Malandain (gregoire.malandain@sophia.inria.fr)
Options :
-input iii = Input dataset
-verbose = Print out some information along the way.
-prefix ppp = Sets the prefix of the output dataset.
-datum ddd = Sets the datum of the output dataset.
-fscale = Force scaling of the output to the maximum integer range.
-gscale = Same as '-fscale', but also forces each output sub-brick to
get the same scaling factor.
-nscale = Don't do any scaling on output to byte or short datasets.
-scale_floats VAL = Multiply input by VAL, but only if the input datum is
float. This is needed when the input dataset
has a small range, like 0 to 2.0 for instance.
With such a range, very few edges are detected due to
what I suspect to be truncation problems.
Multiplying such a dataset by 10000 fixes the problem
and the scaling is undone at the output.
-automask = For automatic, internal calculation of a mask in the usual
AFNI way. Again, this mask is only applied after all calcs
(so using this does not speed up the calc or affect
distance values).
** Special note: you can also write '-automask+X', where
X is some integer; this will dilate the initial automask
X number of times (as in 3dAllineate); must have X>0.
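Example:
A minimal usage sketch (the dataset name 'anat+orig' is hypothetical):
3dedge3 -input anat+orig -automask -prefix anat_edge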
References for the algorithms:
- Optimal edge detection using recursive filtering
R. Deriche, International Journal of Computer Vision,
pp 167-187, 1987.
- Recursive filtering and edge tracking: two primary tools
for 3-D edge detection, O. Monga, R. Deriche, G. Malandain
and J.-P. Cocquerez, Image and Vision Computing 4:9,
pp 203-214, August 1991.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dedgedog
Overview ~1~
This program calculates edges in an image using the Difference of Gaussians
(DOG) method by Wilson and Giese (1977) and later combined with work by
Marr and Hildreth (1980) to provide a computationally efficient
approximation to their Laplacian of Gaussian (LOG) method for calculating
edges in an image. This is a fascinating set of papers to read. But you
don't have to take *my* word for it!...
Generating edges in this way has some interesting properties, such as
numerical efficiency and edges that are closed loops/surfaces. The edges
can be tuned to focus on structures of a particular size, too, which can be
particularly useful in some applications.
written by: PA Taylor and DR Glen (SSCC, NIMH, NIH)
Description ~2~
The primary papers for learning more about the DOG and LOG methods are:
Wilson HR, Giese SC (1977). Threshold visibility of frequency
gradient patterns. Vision Res. 17(10):1177-90.
doi: 10.1016/0042-6989(77)90152-3. PMID: 595381.
Marr D, Hildreth E (1980). Theory of edge detection. Proc R Soc
Lond B Biol Sci. 207(1167):187-217.
doi: 10.1098/rspb.1980.0020. PMID: 6102765.
Thanks to C. Rorden for pointing these papers out and discussing them.
The current code here extends/tweaks the MH1980 algorithm a bit. It runs
in 3D by default (a straightforward extension), and it also employs the
Euclidean Distance Transform (EDT) to pick out the actual edges from the
DOG step---see 3dDepthMap for more information about the EDT.
The DOG-based edges require specifying a couple parameters, the main
one being interpretable as a minimal 'scale size' for structures. In this
code, this is the 'sigma_rad' (or 'sigma_nvox', if you want to specify it
in terms of the number of voxels along a given axis), which is the 'inner
Gaussian' sigma value, if you are following MH1980. The default for this
sigma_rad parameter is set based on the expected average thickness of adult
human GM, but it can easily be changed at the command line to any other
value.
==========================================================================
Command usage and option list ~1~
3dedgedog [options] -prefix PREF -input DSET
where:
-input DSET :(req) input dataset
-prefix PREF :(req) output prefix name
-mask MASK :mask dataset. NB: this mask is only applied *after*
the EDT has been calculated. Therefore, the boundaries
of this mask have no effect on the calculated distance
values, except for potentially zeroing some out at the
end. Mask only gets made from [0]th vol.
-automask :alternative to '-mask ..', for automatic internal
calculation of a mask in the usual AFNI way. Again, this
mask is only applied after all calcs (so using this does
not speed up the calc or affect distance values).
** Special note: you can also write '-automask+X', where
X is some integer; this will dilate the initial automask
X number of times (as in 3dAllineate); must have X>0.
-sigma_rad RRR :radius for 'inner' Gaussian, in units of mm; RRR must
be greater than zero (def: 1.40). Default is chosen to
capture useful features in typical adult, human GM,
which has a typical thickness of 2-2.5 mm. So, if you are
analyzing some other kind of data, you might want to
adapt this value appropriately.
-sigma_nvox NNN :define radius for 'inner' Gaussian by providing a
multiplicative factor for voxel edge length, which will
be applied in each direction; NNN can be any float
greater than zero. This is an alternative to the
'-sigma_rad ..' opt (def: use '-sigma_rad' and its
default value).
-ratio_sigma RS :the ratio of inner and outer Gaussian sigma values.
That is, RS defines the size of the outer Gaussian,
by scaling up the inner value. RS can be any float
greater than 1 (def: 1.40). See 'Notes' for more about
this parameter.
-output_intermed :use this option flag if you would like to output some
intermediate dataset(s):
+ DOG (difference of Gaussian)
+ EDT2 (Euclidean Distance Transform, dist**2 vals),
[0]th vol only
+ BLURS (inner- and outer-Gaussian blurred dsets),
[0]th vol only
(def: not output). Output names will be user-entered
prefix with a representative suffix appended.
-edge_bnd_NN EBN :specify the 'nearest neighbor' (NN) value for the
connectedness of the drawn boundaries. EBN must be
one of the following integer values:
1 -> for face only
2 -> for face+edge
3 -> for face+edge+node
(def: 1).
-edge_bnd_side EBS :specify which boundary layer around the zero-layer
to use in the algorithm. EBS must be one of the
following keywords:
"NEG" -> for negative (inner) boundary
"POS" -> for positive (outer) boundary
"BOTH" -> for both (inner+outer) boundary
"BOTH_SIGN" -> for both (inner+outer) boundary,
with pos/neg sides keeping sign
(def: "NEG").
-edge_bnd_scale :by default, this program outputs a mask of edges, so
edge locations have value=1, and everything else is 0.
Using this option means the edges will have values
scaled to have a relative magnitude between 0 and 100
(NB: the output dset will still be datum=short)
depending on the gradient value at the edge.
When using this opt, likely setting the colorbar scale
to 25 will provide nice images (in N=1 cases tested,
at least!).
-only2D SLI :instead of estimating edges in full 3D volume, calculate
edges just in 2D, per plane. Provide the slice plane
you want to run along as the single argument SLI:
"axi" -> for axial slice
"cor" -> for coronal slice
"sag" -> for sagittal slice
==========================================================================
Notes ~1~
The value of ratio_sigma ~2~
(... which sounds like the title of a great story, no? Anyways...)
This parameter represents the ratio of the width of the two Gaussians that
are blurred in the first stage of the DOG estimation. In the limit that
ratio_sigma approaches 1, the DOG -> LOG. So, we want to keep the value of
this parameter in the general vicinity of 1 (and it can't be less than 1,
because the ratio is of the outer-to-the-inner Gaussian). MH1980 suggested
that ratio_sigma=1.6 was optimal 'on engineering grounds' of bandwidth
sensitivity of filters. This is *very approximate* reasoning, but provides
another reference datum for selection.
Because the DOG approximation used here is for visual purposes of MRI
datasets, often even more specifically for alignment purposes, we have
chosen a default value that seemed visually appropriate to real data.
Values of ratio_sigma close to one show much noisier, scattered images---that
is, they pick up *lots* of contrast differences, probably too many for most
visualization purposes. Edge images smoothen as ratio_sigma increases, but
as it gets larger, it can also blend together edges of features---such as
gyri of the brain with dura. So, long story short, the default value here
tries to pick a reasonable middle ground.
==========================================================================
Examples ~1~
1) Basic case:
3dedgedog \
-input anat+orig.HEAD \
-prefix anat_EDGE.nii.gz
2) Same as above, but output both edges from the DOG+EDT steps, keeping
the sign of each side:
3dedgedog \
-edge_bnd_side BOTH_SIGN \
-input anat+orig.HEAD \
-prefix anat_EDGE_BOTHS.nii.gz
3) Output both sides of edges, and scale the edge values (by DOG value):
3dedgedog \
-edge_bnd_side BOTH_SIGN \
-edge_bnd_scale \
-input anat+orig.HEAD \
-prefix anat_EDGE_BOTHS_SCALE.nii.gz
4) Increase scale size of edged shapes to 2.7mm:
3dedgedog \
-sigma_rad 2.7 \
-edge_bnd_scale \
-input anat+orig.HEAD \
-prefix anat_EDGE_BOTHS_SCALE.nii.gz
5) Apply automasking, with a bit of mask dilation so outer boundary is
included:
3dedgedog \
-automask+2 \
-input anat+orig.HEAD \
-prefix anat_EDGE_AMASK.nii.gz
==========================================================================
AFNI program: 3dEdu_01_scale
Overview ~1~
This is an example starting program for those who want to create a new
AFNI program to see some examples of possible I/O and internal calcs.
Please see the source code file in the main afni/src/3dEdu_01_scale.c
for more information.
This program is intended purely for educational and code-development
purposes.
written by: PA Taylor
Description ~2~
This program will take one dataset as input, and output a copy of its [0]th
volume. A mask can be provided, as well as two multiplicative factors to
mask and scale the output, respectively.
==========================================================================
Command usage and option list ~1~
3dEdu_01_scale [something]
where:
-input DSET :(req) input dataset
-mask DSET_MASK :(opt) mask dataset on same grid/data structure
as the input dset
-some_opt :(opt) option flag to do something
-mult_facs A B :(opt) numerical factors for multiplying each voxel;
that is, each voxel is multiplied by both A and B.
==========================================================================
Examples ~1~
1) Output a copy of the [0]th volume of the input:
3dEdu_01_scale \
-input epi_r1+orig.HEAD \
-prefix OUT_edu_01
2) Output a masked copy of the [0]th volume of the input:
3dEdu_01_scale \
-input epi_r1+orig.HEAD \
-mask mask.auto.nii.gz \
-prefix OUT_edu_02
3) Output a masked+scaled copy of the [0]th volume of the input:
3dEdu_01_scale \
-mult_facs 3 5.5 \
-input epi_r1+orig.HEAD \
-mask mask.auto.nii.gz \
-prefix OUT_edu_03
==========================================================================
AFNI program: 3dEigsToDT
Convert set of DTI eigenvectors and eigenvalues to a diffusion tensor,
while also allowing for some potentially useful value-scaling and vector-
flipping.
May be helpful in converting output from different software packages.
Part of FATCAT (Taylor & Saad, 2013) in AFNI.
It is essentially the inverse of the existing AFNI command: 3dDTeig.
Minor note and caveat:
This program has been checked for consistency with 3dDWItoDT outputs (that
is using its output eigenvalues and eigenvectors to estimate a DT, which
was then compared with that of the original 3dDWItoDT fit).
This program will *mostly* return the same DTs that one would get from
using the eigenvalues and eigenvectors of 3dDWItoDT, to very high agreement.
The values generally match to <10**-5 or so, except in CSF where there can
be small/medium differences, apparently due to the noisiness or non-
tensor-fittability of the original DWI data in those voxels.
However, these discrepancies *shouldn't* really affect most cases of using
DTI data. This is probably generally true for reconstructing DTs of most
software program output: the results match well for most WM and GM, but
there might be trouble in partial-volumed and CSF regions, where the DT
model likely did not fit well anyways. Caveat emptor.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMAND: 3dEigsToDT -eig_vals NAME1 -eig_vecs NAME2 {-mask MASK } \
{-flip_x | -flip_y | -flip_z} {-scale_eigs X} -prefix PREFIX
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ OUTPUT:
1) AFNI-format DT file with 6 subbricks in the same format as output
by, for example, 3dDWItoDT (the lower triangular, row-wise
elements of the tensor in symmetric matrix form)
[0] Dxx
[1] Dxy
[2] Dyy
[3] Dxz
[4] Dyz
[5] Dzz
+ RUNNING:
-eig_vals NAME1 :Should be a searchable descriptor for finding all
three required eigenvalue files. Thus, on a Linux
commandline, one would expect:
$ ls NAME1
to list all three eigenvalue files in descending order
of magnitude. This program will also only take
the first three matches (not including doubling of
BRIK/HEAD files in AFNI-format).
-eig_vecs NAME2 :Should be a searchable descriptor for finding all
three required eigenvector files. Thus, on a Linux
commandline, one would expect:
$ ls NAME2
to list all three eigenvector files in order matching
the eigenvalue files. This program will also only take
the first three matches (not including doubling of
BRIK/HEAD files in AFNI-format).
-> Try to make NAME1 and NAME2 as specific as possible, so
that the search&load gets everything as right as possible.
Also, if using the wildcard character, '*', then make sure
to enclose the option value with apostrophes (see EXAMPLE,
below).
-prefix PREFIX :output file name prefix. Would suggest putting a 'DT'
label in it.
-mask MASK :can include a mask within which to perform the calculations.
Otherwise, data should be masked already.
-flip_x :change sign of first element of eigenvectors.
-flip_y :change sign of second element of eigenvectors.
-flip_z :change sign of third element of eigenvectors.
-> Only a single flip would ever be necessary; the combination
of any two flips is mathematically equivalent to the sole
application of the remaining one.
-scale_eigs X :rescale the eigenvalues, dividing by a number that is
X>0. Could be used to reintroduce the DW scale of the
original b-values, if some other program has
remorselessly scaled it away.
* * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * **
+ EXAMPLE:
3dEigsToDT \
-eig_vals 'DTI/DT_L*' \
-eig_vecs 'DTI/DT_V*' \
-prefix DTI/NEW_DT \
-scale_eigs 1000 \
-flip_y
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional And
Tractographic Connectivity Analysis Toolbox. Brain Connectivity.
AFNI program: 3dEmpty
Usage: 3dEmpty [options]
Makes an 'empty' dataset .HEAD file.
Options:
=======
-prefix p = Prefix name for output file (default = 'Empty')
-nxyz x y z = Set number of voxels to be 'x', 'y', and 'z'
along the 3 axes [defaults=64]
*OR*
-geometry m = Set the 3D geometry of the grid using a
string 'm' of the form
'MATRIX(a11,a12,a13,a14,a21,a22,a23,a24,a31,a32,a33,a34):nx,ny,nz'
which defines the number of grid points, as well as
relationship between grid indexes (voxel centers)
and the 3D xyz coordinates.
* Sample 'MATRIX()' entries can be found by using
program 3dinfo on an existing dataset.
* Each .niml file used by 3dGroupInCorr has a
'geometry="MATRIX(...)"' entry.
-nt = Number of time points [default=1]
* Other dataset parameters can be changed with 3drefit.
* The purpose of this program (combined with 3drefit) is to
allow you to make up an AFNI header for an existing data file.
* This program does NOT create data to fill up the dataset.
* If you want to create a dataset of a given size with random
values attached, a command like
3dcalc -a jRandomDataset:32,32,16,10 -expr a -prefix Something
would work. In this example, nx=ny=32 nz=16 nt=10.
(Changing '-expr a' to '-expr 0' would fill the dataset with zeros.)
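* A small illustrative example (the prefix here is arbitrary; only options
documented above are used):
     3dEmpty -prefix Fred -nxyz 64 64 32 -nt 100
should create an empty header file (e.g., Fred+orig.HEAD) describing a
64x64x32 grid with 100 time points; other header fields can then be
adjusted with 3drefit, as noted above.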
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dEntropy
Usage: 3dEntropy [-zskip] dataset ...
* Datasets must be stored as 16 bit shorts.
* -zskip option means to skip 0 values in the computation.
* This program is not very useful :) :(
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dExchange
Usage: 3dExchange [-prefix PREFIX] <-input DATASET>
Replaces voxel values using a mapping file with two columns of numbers,
where the first column holds the input value and the second the output value
-input DATASET : Input dataset
Acceptable data types are:
byte, short, and floats.
-map MAPCOLS.1D : Mapping columns - input is first column
output is second column
-prefix PREFIX: Output prefix
-ver = print author and version info
-help = print this help screen
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
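EXAMPLE (an illustrative sketch; the dataset and 1D file names are hypothetical):
If 'relabel.1D' contains the two mapping columns
     1 10
     2 20
     3 30
then
     3dExchange -input roi_labels+orig -map relabel.1D -prefix roi_relabeled
should replace voxel values 1, 2, and 3 in the input with 10, 20, and 30,
respectively, in the output dataset.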
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dExtractGroupInCorr
++ 3dExtractGroupInCorr: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: RW Cox
Usage: 3dExtractGroupInCorr [options] AAA.grpincorr.niml
This program breaks the collection of images from a GroupInCorr
file back into individual AFNI 3D+time datasets.
Of course, only the data inside the mask used in 3dSetupGroupInCorr
is stored in the .data file, so only those portions of the input
files can be reconstructed :)
The output datasets will be stored in float format, no matter what
the storage type of the original datasets or of the .data file.
OPTION:
-------
-prefix PPP The actual dataset prefix will be the internal dataset
label with the string 'PPP_' pre-pended.
++ Use NULL to skip the use of the prefix.
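Illustrative example (the .niml file name is a hypothetical placeholder):
     3dExtractGroupInCorr -prefix EX AAA.grpincorr.niml
which should write out one float-valued 3D+time dataset per subject stored
in the GroupInCorr file, each named 'EX_' followed by its internal label.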
Author -- RWCox -- May 2012
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dExtrema
++ 3dExtrema: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
This program finds local extrema (minima or maxima) of the input
dataset values for each sub-brick of the input dataset. The extrema
may be determined either for each volume, or for each individual slice.
Only those voxels whose corresponding intensity value is greater than
the user specified data threshold will be considered.
Usage: 3dExtrema options datasets
where the options are:
-prefix pname = Use 'pname' for the output dataset prefix name.
OR [default = NONE; only screen output]
-output pname
-session dir = Use 'dir' for the output dataset session directory.
[default='./'=current working directory]
-quiet = Flag to suppress screen output
-mask_file mname = Use mask statistic from file mname.
Note: If file mname contains more than 1 sub-brick,
the mask sub-brick must be specified!
-mask_thr m Only voxels whose mask statistic is >= m
in absolute value will be considered.
A default value of 1 is assumed.
-data_thr d Only voxels whose value (intensity) is greater
than d in absolute value will be considered.
-nbest N Only print the first N extrema.
-sep_dist d Min. separation distance [mm] for distinct extrema
Choose type of extrema (one and only one choice):
-minima Find local minima.
-maxima [default] Find local maxima.
Choose form of binary relation (one and only one choice):
-strict [default] > for maxima, < for minima
-partial >= for maxima, <= for minima
Choose boundary criteria (one and only one choice):
-interior [default] Extrema must be interior points (not on boundary)
-closure Extrema may be boundary points
Choose domain for finding extrema (one and only one choice):
-slice [default] Each slice is considered separately
-volume The volume is considered as a whole
Choose option for merging of extrema (one and only one choice):
-remove [default] Remove all but strongest of neighboring extrema
-average Replace neighboring extrema by average
-weight Replace neighboring extrema by weighted average
Command line arguments after the above are taken to be input datasets.
Examples:
Compute maximum value in amygdala region of Talairach-transformed dataset
3dExtrema -volume -closure -sep_dist 512 \
-mask_file 'TT_Daemon::amygdala' func_slim+tlrc'[0]'
Show minimum voxel values not on edge of mask, where the mask >= 0.95
3dExtrema -minima -volume -mask_file 'statmask+orig' \
-mask_thr 0.95 func_slim+tlrc'[0]'
Get the maximum 3 values across the given ROI.
3dExtrema -volume -closure -mask_file MY_ROI+tlrc \
-nbest 3 func_slim+tlrc'[0]'
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dFDR
++ 3dFDR: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
This program implements the False Discovery Rate (FDR) algorithm for
thresholding of voxelwise statistics.
Program input consists of a functional dataset containing one (or more)
statistical sub-bricks. Output consists of a bucket dataset with one
sub-brick for each input sub-brick. For non-statistical input sub-bricks,
the output is a copy of the input. However, statistical input sub-bricks
are replaced by their corresponding FDR values, as follows:
For each voxel, the minimum value of q is determined such that
E(FDR) <= q
leads to rejection of the null hypothesis in that voxel. Only voxels inside
the user specified mask will be considered. These q-values are then mapped
to z-scores for compatibility with the AFNI statistical threshold display:
stat ==> p-value ==> FDR q-value ==> FDR z-score
The reason for the final conversion from q to z is so that larger values
are more 'significant', which is how the usual thresholding procedure
in the AFNI GUI works.
Usage:
3dFDR
-input fname fname = filename of input 3d functional dataset
OR
-input1D dname dname = .1D file containing column of p-values
-mask_file mname Use mask values from file mname.
*OR* Note: If file mname contains more than 1 sub-brick,
-mask mname the mask sub-brick must be specified!
Default: No mask
** Generally speaking, you really should use a mask
to avoid counting non-brain voxels. However, with
the changes described below, the program will
automatically ignore voxels where the statistics
are set to 0, so if the program that created the
dataset used a mask, then you don't need one here.
-mask_thr m Only voxels whose corresponding mask value is
greater than or equal to m in absolute value will
be considered. Default: m=1
Constant c(N) depends on assumption about p-values:
-cind c(N) = 1 p-values are independent across N voxels
-cdep c(N) = sum(1/i), i=1,...,N any joint distribution
Default: c(N) = 1
-quiet Flag to suppress screen output
-list Write sorted list of voxel q-values to screen
-prefix pname Use 'pname' for the output dataset prefix name.
OR
-output pname
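Illustrative example (a hedged sketch; the dataset names are hypothetical):
     3dFDR -input func+orig -mask_file mask+orig -prefix func_FDR
This should copy any non-statistical sub-bricks of 'func+orig' and replace
each statistical sub-brick with its FDR z-scores, computed only over voxels
inside the mask.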
===========================================================================
January 2008: Changes to 3dFDR
------------------------------
The default mode of operation of 3dFDR has altered somewhat:
* Voxel p-values of exactly 1 (e.g., from t=0 or F=0 or correlation=0)
are ignored by default; in the old mode of operation, they were
included in the count which goes into the FDR algorithm. The old
process tends to increase the q-values and so decrease the z-scores.
* The array of voxel p-values are now sorted via Quicksort, rather than
by binning, as in the old mode. This (by itself) probably has no
discernible effect on the results, but should be faster.
New Options:
------------
-old = Use the old mode of operation (for compatibility/nostalgia)
-new = Use the new mode of operation [now the default]
N.B.: '-list' does not work in the new mode!
-pmask = Instruct the program to ignore p=1 voxels
[the default in the new mode, but not in the old mode]
N.B.: voxels that were masked in 3dDeconvolve (etc.)
will have their statistics set to 0, which means p=1,
which means that such voxels are implicitly masked
with '-new', and so don't need to be explicitly
masked with the '-mask' option.
-nopmask = Instruct the program to count p=1 voxels
[the default in the old mode, but NOT in the new mode]
-force = Force the conversion of all sub-bricks, even if they
are not marked as with a statistical code; such
sub-bricks are treated as though they were p-values.
-float = Force the output of z-scores in floating point format.
-qval = Force the output of q-values rather than z-scores.
N.B.: A smaller q-value is more significant!
[-float is strongly recommended when -qval is used]
* To be clear, you can use '-new -nopmask' to have the new mode of computing
carried out, but with p=1 voxels included (which should give results
nearly identical to '-old').
* Or you can use '-old -pmask' to use the old mode of computing but where
p=1 voxels are not counted (which should give results virtually
identical to '-new').
* However, the combination of '-new', '-nopmask' and '-mask_file' does not
work -- if you try it, '-pmask' will be turned back on and a warning
message printed to aid your path towards elucidation and enlightenment.
Other Notes:
------------
* '3drefit -addFDR' can be used to add FDR curves of z(q) as a function
of threshold for all statistic sub-bricks in a dataset; in turn, these
curves let you see the (estimated) q-value as you move the threshold
slider in AFNI.
- Since 3drefit doesn't have a '-mask' option, you will have to mask
statistical sub-bricks yourself via 3dcalc (if desired):
3dcalc -a stat+orig -b mask+orig -expr 'a*step(b)' -prefix statmm
- '-addFDR' runs as if '-new -pmask' were given to 3dFDR, so that
stat values == 0 are ignored in the FDR calculations.
- most AFNI statistical programs now automatically add FDR curves to
the output dataset header, so you can see the q-value as you adjust
the threshold slider.
* q-values are estimates of the False Discovery Rate at a given threshold;
that is, about 5% of all voxels with q <= 0.05 (z >= 1.96) are
(presumably) 'false positive' detections, and the other 95% are
(presumably) 'true positives'. Of course, there is no way to tell
which above-threshold voxels are 'true' detections and which are 'false'.
* Note the use of the words 'estimate' and 'about' in the above statement!
In particular, the accuracy of the q-value calculation depends on the
assumption that the p-values calculated from the input statistics are
correctly distributed (e.g., that the DOF parameters are correct).
* The z-score is the conversion of the q-value to a double-sided tail
probability of the unit Gaussian N(0,1) distribution; that is, z(q)
is the value such that if x is a N(0,1) random variable, then
Prob[|x|>z] = q: for example, z(0.05) = 1.95996.
The reason for using z-scores here is simply that their range is
highly compressed relative to the range of q-values
(e.g., z(1e-9) = 6.10941), so z-scores are easily stored as shorts,
whereas q-values are much better stored as floats.
* Changes above by RWCox -- 18 Jan 2008 == Cary Grant's Birthday!
26 Mar 2009 -- Yet Another Change [RWCox]
-----------------------------------------
* FDR calculations in AFNI now 'adjust' the q-values downwards by
estimating the number of true negatives [m0 in the statistics
literature], and then reporting
q_new = q_old * m0 / m, where m = number of voxels being tested.
If you do NOT want this adjustment, then set environment variable
AFNI_DONT_ADJUST_FDR to YES. You can do this on the 3dFDR command
line with the option '-DAFNI_DONT_ADJUST_FDR=YES'
For Further Reading and Amusement
---------------------------------
* cf. http://en.wikipedia.org/wiki/False_discovery_rate [Easy overview of FDR]
* cf. http://dx.doi.org/10.1093/bioinformatics/bti448 [False Negative Rate]
* cf. http://dx.doi.org/10.1093/biomet/93.3.491 [m0 adjustment idea]
* cf. C implementation in mri_fdrize.c [trust in the Source]
* cf. https://afni.nimh.nih.gov/pub/dist/doc/misc/FDR/FDR_Jan2008.pdf
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dFFT
Usage: 3dFFT [options] dataset
* Does the FFT of the input dataset in 3 directions (x,y,z) and
produces the output dataset.
* Why you'd want to do this is an interesting question.
* Program 3dcalc can operate on complex-valued datasets, but
only on one component at a time (cf. the '-cx2r' option).
* Most other AFNI programs can only operate on real-valued
datasets.
* You could use 3dcalc (twice) to split a complex-valued dataset
into two real-valued datasets, do your will on those with other
AFNI programs, then merge the results back into a complex-valued
dataset with 3dTwotoComplex.
Options
=======
-abs = Outputs the magnitude of the FFT [default]
-phase = Outputs the phase of the FFT (-PI..PI == no unwrapping!)
-complex = Outputs the complex-valued FFT
-inverse = Does the inverse FFT instead of the forward FFT
-Lx xx = Use FFT of length 'xx' in the x-direction
-Ly yy = Use FFT of length 'yy' in the y-direction
-Lz zz = Use FFT of length 'zz' in the z-direction
* Set a length to 0 to skip the FFT in that direction
-altIN = Alternate signs of input data before FFT, to bring
zero frequency from edge of FFT-space to center of grid
for cosmetic purposes.
-altOUT = Alternate signs of output data after FFT. If you
use '-altI' on the forward transform, then you should
use '-altO' on the inverse transform, to get the
signs of the recovered image correct.
**N.B.: You cannot use '-altIN' and '-altOUT' in the same run!
-input dd = Read the input dataset from 'dd', instead of
from the last argument on the command line.
-prefix pp = Use 'pp' for the output dataset prefix.
Notes
=====
* The program can only do FFT lengths that are positive
even integers.
* The 'x', 'y', and 'z' axes here refer to the order the
data is stored, not DICOM coordinates; cf. 3dinfo.
* If you force (via '-Lx' etc.) an FFT length that is not
allowed, the program will stop with an error message.
* If you force an FFT length that is shorter than a dataset
axis dimension, the program will stop with an error message.
* If you don't force an FFT length along a particular axis,
the program will pick the smallest legal value that is
greater than or equal to the corresponding dataset dimension.
+ e.g., 123 would be increased to 124.
* If an FFT length is longer than an axis length, then the
input data in that direction is zero-padded at the end.
* For -abs and -phase, the output dataset is in float format.
* If you do the forward and inverse FFT, then you should get back
the original dataset, except for roundoff error and except that
the new dataset axis dimensions may be longer than the original.
* Forward FFT = sum_{k=0..N-1} [ exp(-2*PI*i*k/N) * data(k) ]
* Inverse FFT = sum_{k=0..N-1} [ exp(+2*PI*i*k/N) * data(k) ] / N
* Started a long time ago, but only finished in Aug 2009 at the
request of John Butman, because he asked so nicely. (Now pay up!)
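Illustrative example (the dataset name 'data+orig' is hypothetical): a forward
complex-valued FFT followed by the inverse, which should recover the original
data up to roundoff error and any zero-padding of the axes:
     3dFFT -complex -prefix data_fft data+orig
     3dFFT -complex -inverse -prefix data_back data_fft+orig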
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dfim+
++ 3dfim+: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
*+ WARNING: This program (3dfim+) is very old, may not be useful, and will not be maintained.
Program to calculate the cross-correlation of an ideal reference waveform
with the measured FMRI time series for each voxel.
Usage:
3dfim+
-input fname fname = filename of input 3d+time dataset
[-input1D dname] dname = filename of single (fMRI) .1D time series
[-mask mname] mname = filename of 3d mask dataset
[-nfirst fnum] fnum = number of first dataset image to use in
the cross-correlation procedure. (default = 0)
[-nlast lnum] lnum = number of last dataset image to use in
the cross-correlation procedure. (default = last)
[-polort pnum] pnum = degree of polynomial corresponding to the
baseline model (pnum = 0, 1, etc.)
(default: pnum = 1). Use -1 for no baseline model.
[-fim_thr p] p = fim internal mask threshold value (0 <= p <= 1)
to get rid of low intensity voxels.
(default: p = 0.0999), set p = 0.0 for no masking.
[-cdisp cval] Write (to screen) results for those voxels
whose correlation stat. > cval (0 <= cval <= 1)
(default: disabled)
[-ort_file sname] sname = input ort time series file name
-ideal_file rname rname = input ideal time series file name
Note: The -ort_file and -ideal_file commands may be used
more than once.
Note: If files sname or rname contain multiple columns,
then ALL columns will be used as ort or ideal
time series. However, individual columns or
a subset of columns may be selected using a file
name specification like 'fred.1D[0,3,5]', which
indicates that only columns #0, #3, and #5 will
be used for input.
[-out param] Flag to output the specified parameter, where
the string 'param' may be any one of the following:
Fit Coef L.S. fit coefficient for Best Ideal
Best Index Index number for Best Ideal (count starts at 1)
% Change P-P amplitude of signal response / Baseline
Baseline Average of baseline model response
Correlation Best Ideal product-moment correlation coefficient
% From Ave P-P amplitude of signal response / Average
Average Baseline + average of signal response
% From Top P-P amplitude of signal response / Topline
Topline Baseline + P-P amplitude of signal response
Sigma Resid Std. Dev. of residuals from best fit
All This specifies all of the above parameters
Spearman CC Spearman correlation coefficient
Quadrant CC Quadrant correlation coefficient
Note: Multiple '-out' commands may be used.
Note: If a parameter name contains embedded spaces, the
entire parameter name must be enclosed by quotes,
e.g., -out 'Fit Coef'
[-bucket bprefix] Create one AFNI 'bucket' dataset containing the
parameters of interest, as specified by the above
'-out' commands.
The output 'bucket' dataset is written to a file
with the prefix name bprefix.
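Illustrative example (a hedged sketch; dataset and .1D file names are
hypothetical, and only options documented above are used):
     3dfim+ -input epi_r1+orig        \
            -polort 1                 \
            -ideal_file ideal.1D      \
            -out Correlation          \
            -out 'Fit Coef'           \
            -bucket fim_results
This should correlate each voxel time series with the ideal waveform in
ideal.1D and write the requested parameters into the bucket dataset
'fim_results'.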
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dfractionize
Usage: 3dfractionize [options]
* For each voxel in the output dataset, computes the fraction
of it that is occupied by nonzero voxels from the input.
* The fraction is stored as a short in the range 0..10000,
indicating fractions running from 0..1.
* The template dataset is used only to define the output grid;
its brick(s) will not be read into memory. (The same is
true of the warp dataset, if it is used.)
* The actual values stored in the input dataset are irrelevant,
except in that they are zero or nonzero (UNLESS the -preserve
option is used).
The purpose of this program is to allow the resampling of a mask
dataset (the input) from a fine grid to a coarse grid (defined by
the template). When you are using the output, you will probably
want to threshold the mask so that voxels with a tiny occupancy
fraction aren't used. This can be done in 3dmaskave, by using
3dcalc, or with the '-clip' option below.
Options are [the first 2 are 'mandatory options']:
-template tset = Use dataset 'tset' as a template for the output.
The output dataset will be on the same grid as
this dataset.
-input iset = Use dataset 'iset' for the input.
Only the sub-brick #0 of the input is used.
You can use the sub-brick selection technique
described in '3dcalc -help' to choose the
desired sub-brick from a multi-brick dataset.
-prefix ppp = Use 'ppp' for the prefix of the output.
[default prefix = 'fractionize']
-clip fff = Clip off voxels that are less than 'fff' occupied.
'fff' can be a number between 0.0 and 1.0, meaning
the fraction occupied, can be a number between 1.0
and 100.0, meaning the percent occupied, or can be
a number between 100.0 and 10000.0, meaning the
direct output value to use as a clip level.
** Some sort of clipping is desirable; otherwise,
an output voxel that is barely overlapped by a
single nonzero input voxel will enter the mask.
[default clip = 0.0]
-warp wset = If this option is used, 'wset' is a dataset that
provides a transformation (warp) from +orig
coordinates to the coordinates of 'iset'.
In this case, the output dataset will be in
+orig coordinates rather than the coordinates
of 'iset'. With this option:
** 'tset' must be in +orig coordinates
** 'iset' must be in +acpc or +tlrc coordinates
** 'wset' must be in the same coordinates as 'iset'
-preserve = When this option is used, the program will copy
*OR*        the nonzero values of input voxels to the output
-vote       dataset, rather than create a fractional mask.
Since each output voxel might be overlapped
by more than one input voxel, the program 'votes'
for which input value to preserve. For example,
if input voxels with value=1 occupy 10% of an
output voxel, and inputs with value=2 occupy 20%
of the same voxel, then the output value in that
voxel will be set to 2 (provided that 20% is >=
to the clip fraction).
** Voting can only be done on short-valued datasets,
or on byte-valued datasets.
** Voting is a relatively time-consuming option,
since a separate loop is made through the
input dataset for each distinct value found.
** Combining this with the -warp option does NOT
make a general +tlrc to +orig transformer!
This is because for any value to survive the
vote, its fraction in the output voxel must be
>= clip fraction, regardless of other values
present in the output voxel.
Sample usage:
1. Compute the fraction of each voxel occupied by the warped input.
3dfractionize -template grid+orig -input data+tlrc \
-warp anat+tlrc -clip 0.2
2. Apply the (inverse) -warp transformation to transform the -input
from +tlrc space to +orig space, storing it according to the grid
of the -template.
A voxel in the output dataset gets the value that occupies most of
its volume, providing that value occupies 20% of the voxel.
Note that the essential difference from above is '-preserve'.
3dfractionize -template grid+orig -input data+tlrc \
-warp anat+tlrc -preserve -clip 0.2 \
-prefix new_data
Note that 3dAllineate can also be used to warp from +tlrc to +orig
space. In this case, data is computed through interpolation, rather
than voting based on the fraction of a voxel occupied by each data
value. The transformation comes from the WARP_DATA attribute directly.
Nearest neighbor interpolation is used in this 'mask' example.
cat_matvec -ONELINE anat+tlrc::WARP_DATA > tlrc.aff12.1D
3dAllineate -1Dmatrix_apply tlrc.aff12.1D -source group_mask+tlrc \
-master subj_epi+orig -prefix subj_mask -final NN
This program will also work in going from a coarse grid to a fine grid,
but it isn't clear that this capability has any purpose.
-- RWCox - February 1999
- October 1999: added -warp and -preserve options
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dFriedman
++ 3dFriedman: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
This program performs nonparametric Friedman test for
randomized complete block design experiments.
Usage:
3dFriedman
-levels s s = number of treatments
-dset 1 filename data set for treatment #1
. . . . . .
-dset 1 filename data set for treatment #1
. . . . . .
-dset s filename data set for treatment #s
. . . . . .
-dset s filename data set for treatment #s
[-workmem mega] number of megabytes of RAM to use
for statistical workspace
[-voxel num] screen output for voxel # num
-out prefixname Friedman statistics are written
to file prefixname
N.B.: For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used, e.g.:
-dset 2 'fred+orig[3]'
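An illustrative command (dataset names are hypothetical), for 3 treatments
with 2 blocks (e.g., subjects) per treatment:
     3dFriedman -levels 3                                                        \
                -dset 1 'subj1_placebo+orig[0]' -dset 1 'subj2_placebo+orig[0]'  \
                -dset 2 'subj1_drugA+orig[0]'   -dset 2 'subj2_drugA+orig[0]'    \
                -dset 3 'subj1_drugB+orig[0]'   -dset 3 'subj2_drugB+orig[0]'    \
                -out Friedman_stats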
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dFWHMx
Usage: 3dFWHMx [options] dataset
**** NOTICE ****
You should use the '-acf' option (which is what afni_proc.py uses now).
The 'Classic' method giving just a Gaussian FWHM can no longer be
considered reliable for FMRI statistical analyses!
****************
>>>>> 20 July 2017: Results from the 'Classic' method are no longer output!
>>>>> If you want to see these values, you must give the
>>>>> command line option '-ShowMeClassicFWHM'.
>>>>> You no longer need to give the '-acf' option, as it
>>>>> is now the default method of calculation (and
>>>>> cannot be turned off). Note that if you need the
>>>>> FWHM estimate, the '-acf' method gives a value
>>>>> for that as its fourth output.
>>>>> Options and comments that only apply to the 'Classic' FWHM estimation
>>>>> method are now marked below with this '>>>>>' marker, to indicate that
>>>>> they are obsolete, archaic, and endangered (as well as fattening).
>>>>> Unlike the older 3dFWHM, this program computes FWHMs for all sub-bricks
>>>>> in the input dataset, each one separately. The output for each one is
>>>>> written to the file specified by '-out'. The mean (arithmetic or geometric)
>>>>> of all the FWHMs along each axis is written to stdout. (A non-positive
>>>>> output value indicates something bad happened; e.g., FWHM in z is meaningless
>>>>> for a 2D dataset; the estimation method computed incoherent intermediate results.)
(Classic) METHOD: <<<<< NO LONGER OUTPUT -- SEE ABOVE >>>>>
- Calculate ratio of variance of first differences to data variance.
- Should be the same as 3dFWHM for a 1-brick dataset.
(But the output format is simpler to use in a script.)
**----------------------------------------------------------------------------**
************* IMPORTANT NOTE [Dec 2015] ****************************************
**----------------------------------------------------------------------------**
A completely new method for estimating and using noise smoothness values is
now available in 3dFWHMx and 3dClustSim. This method is implemented in the
'-acf' options to both programs. 'ACF' stands for (spatial) AutoCorrelation
Function, and it is estimated by calculating moments of differences out to
a larger radius than before.
Notably, real FMRI data does not actually have a Gaussian-shaped ACF, so the
estimated ACF is then fit (in 3dFWHMx) to a mixed model (Gaussian plus
mono-exponential) of the form
ACF(r) = a * exp(-r*r/(2*b*b)) + (1-a)*exp(-r/c)
where 'r' is the radius, and 'a', 'b', 'c' are the fitted parameters.
The apparent FWHM from this model is usually somewhat larger in real data
than the FWHM estimated from just the nearest-neighbor differences used
in the 'classic' analysis.
The longer tails provided by the mono-exponential are also significant.
3dClustSim has also been modified to use the ACF model given above to generate
noise random fields.
**----------------------------------------------------------------------------**
** The take-away (TL;DR or summary) message is that the 'classic' 3dFWHMx and **
** 3dClustSim analysis, using a pure Gaussian ACF, is not very correct for **
** FMRI data -- I cannot speak for PET or MEG data. **
**----------------------------------------------------------------------------**
OPTIONS:
-mask mmm = Use only voxels that are nonzero in dataset 'mmm'.
-automask = Compute a mask from THIS dataset, a la 3dAutomask.
[Default = use all voxels]
-input ddd }=
*OR* }= Use dataset 'ddd' as the input.
-dset ddd }=
-demed = If the input dataset has more than one sub-brick
(e.g., has a time axis), then subtract the median
of each voxel's time series before processing FWHM.
This will tend to remove intrinsic spatial structure
and leave behind the noise.
[Default = don't do this]
-unif = If the input dataset has more than one sub-brick,
then normalize each voxel's time series to have
the same MAD before processing FWHM. Implies -demed.
[Default = don't do this]
-detrend [q]= Instead of demed (0th order detrending), detrend to
order 'q'. If q is not given, the program picks q=NT/30.
-detrend disables -demed, and includes -unif.
**N.B.: I recommend this option IF you are running 3dFWHMx on
functional MRI time series that have NOT been processed
to remove any activation and/or physiological artifacts.
**** If you are running 3dFWHMx on the residual (errts) time
series from afni_proc.py, you don't need -detrend.
**N.B.: This is the same detrending as done in 3dDespike;
using 2*q+3 basis functions for q > 0.
******* If you don't use '-detrend', the program checks
if a large number of voxels have significant
nonzero means. If so, the program will print a warning
message suggesting the use of '-detrend', since inherent
spatial structure in the image will bias the estimation
of the FWHM of the image time series NOISE (which is usually
the point of using 3dFWHMx).
-detprefix d= Save the detrended file into a dataset with prefix 'd'.
Used mostly to figure out what the hell is going on,
when strange results transpire.
>>>>>
-geom }= If the input dataset has more than one sub-brick,
*OR* }= compute the final estimate as the geometric mean
-arith }= or the arithmetic mean of the individual sub-brick
FWHM estimates. [Default = -geom, for no good reason]
>>>>>
-combine = combine the final measurements along each axis into
one result
>>>>>
-out ttt = Write output to file 'ttt' (3 columns of numbers).
If not given, the sub-brick outputs are not written.
Use '-out -' to write to stdout, if desired.
Note that this option outputs the 'Classic' (which
means simply Gaussian, *not* ACF) parameters for each
sub-brick.
>>>>>
-compat = Be compatible with the older 3dFWHM, where if a
voxel is in the mask, then its neighbors are used
for differencing, even if they are not themselves in
the mask. This was an error; now, neighbors must also
be in the mask to be used in the differencing.
Use '-compat' to use the older method.
** NOT RECOMMENDED except for comparison purposes! **
-ACF [anam] = ** new option Nov 2015 **
*or* The '-ACF' option computes the spatial autocorrelation
-acf [anam] of the data as a function of radius, then fits that
to a model of the form
ACF(r) = a * exp(-r*r/(2*b*b)) + (1-a)*exp(-r/c)
and outputs the 3 model parameters (a,b,c) to stdout.
* The model fit assumes spherical symmetry in the ACF.
* The results shown on stdout are in the format
>>>>> The first 2 lines below will only be output <<<<<
>>>>> if you use the option '-ShowMeClassicFWHM'. <<<<<
>>>>> Otherwise, the 'old-style' FWHM values will <<<<<
>>>>> show up as all zeros (0 0 0 0). <<<<<
# old-style FWHM parameters
10.4069 10.3441 9.87341 10.2053
# ACF model parameters for a*exp(-r*r/(2*b*b))+(1-a)*exp(-r/c) plus effective FWHM
0.578615 6.37267 14.402 16.1453
The lines that start with '#' are comments.
>>>>> The first numeric line contains the 'old style' FWHM estimates,
>>>>> FWHM_x FWHM_y FWHM_z FWHM_combined
The second numeric line contains the a,b,c parameters, plus the
combined estimated FWHM from those parameters. In this example,
the fit was about 58% Gaussian shape, 42% exponential shape,
and the effective FWHM from this fit was 16.14mm, versus 10.21mm
estimated in the 'old way'.
* If you use '-acf' instead of '-ACF', then the comment #lines
in the stdout information will be omitted. This might help
in parsing the output inside a script.
* The empirical ACF results are also written to the file
'anam' in 4 columns:
radius ACF(r) model(r) gaussian_NEWmodel(r)
where 'gaussian_NEWmodel' is the Gaussian with the FWHM estimated
from the ACF, NOT via the 'classic' (Forman 1995) method.
* If 'anam' is not given (that is, another option starting
with '-' immediately follows '-acf'), then '3dFWHMx.1D' will
be used for this filename. If 'anam' is set to 'NULL', then
the corresponding output files will not be saved.
* By default, the ACF is computed out to a radius based on
a multiple of the 'classic' FWHM estimate. If you want to
specify that radius (in mm), you can put that value after
the 'anam' parameter, as in '-acf something.1D 40.0'.
* In addition, a graph of these functions will be saved
into file 'anam'.png, for your pleasure and elucidation.
* Note that the ACF calculations are slower than the
'classic' FWHM calculations.
To reduce this sloth, 3dFWHMx now uses OpenMP to speed things up.
* The ACF modeling is intended to enhance 3dClustSim, and
may or may not be useful for any other purpose!
>>>>> SAMPLE USAGE: (tcsh)
>>>>> set zork = ( `3dFWHMx -automask -input junque+orig` )
>>>>> Captures the FWHM-x, FWHM-y, FWHM-z values into shell variable 'zork'.
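A similar sketch for the ACF workflow (tcsh; the file names are hypothetical,
and this assumes the a,b,c parameters appear on the last numeric line of
stdout, as in the sample output above):
     set acf = ( `3dFWHMx -mask mask+orig -acf NULL errts+orig | tail -1` )
     3dClustSim -mask mask+orig -acf $acf[1] $acf[2] $acf[3]
which captures the fitted ACF parameters and passes them to 3dClustSim for
cluster-size thresholding.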
INPUT FILE RECOMMENDATIONS:
* For FMRI statistical purposes, you DO NOT want the FWHM or ACF to reflect
any spatial structure of the underlying anatomy. Rather, you want
the FWHM/ACF to reflect the spatial structure of the NOISE. This means
that the input dataset should not have anatomical (spatial) structure.
* One good form of input is the output of '3dDeconvolve -errts', which is
the dataset of residuals left over after the GLM fitted signal model is
subtracted out from each voxel's time series.
* If you don't want to go to that much trouble, use '-detrend' to approximately
subtract out the anatomical spatial structure, OR use the output of 3dDetrend
for the same purpose.
* If you do not use '-detrend', the program attempts to find non-zero spatial
structure in the input, and will print a warning message if it is detected.
*** Do NOT use 3dFWHMx on the statistical results (e.g., '-bucket') from ***
*** 3dDeconvolve or 3dREMLfit!!! The function of 3dFWHMx is to estimate ***
*** the smoothness of the time series NOISE, not of the statistics. This ***
*** proscription is especially true if you plan to use 3dClustSim next!! ***
*** ------------------- ***
*** NOTE FOR SPM USERS: ***
*** ------------------- ***
*** If you are using SPM for your analyses, and wish to use 3dFWHMx plus ***
*** 3dClustSim for cluster-level thresholds, you need to understand the ***
*** process that AFNI uses. Otherwise, you will likely make some simple ***
*** mistake (such as using 3dFWHMx on the statistical maps from SPM) ***
*** that will render your cluster-level thresholding completely wrong! ***
>>>>>
IF YOUR DATA HAS SMOOTH-ISH SPATIAL STRUCTURE YOU CAN'T GET RID OF:
For example, you only have 1 volume, say from PET imaging. In this case,
the standard estimate of the noise smoothness will be mixed in with the
structure of the background. An approximate way to avoid this problem
is provided with the semi-secret '-2difMAD' option, which uses a combination of
first-neighbor and second-neighbor differences to estimate the smoothness,
rather than just first-neighbor differences, and uses the MAD of the differences
rather than the standard deviation. (If you must know the details, read the
source code in mri_fwhm.c!) [For Jatin Vaidya, March 2010]
ALSO SEE:
* The older program 3dFWHM is now completely superseded by 3dFWHMx.
* The program 3dClustSim takes as input the ACF estimates and then
estimates the cluster sizes thresholds to help you get 'corrected'
(for multiple comparisons) p-values.
>>>>>
* 3dLocalstat -stat FWHM will estimate the FWHM values at each voxel,
using the same first-difference algorithm as this program, but applied
only to a local neighborhood of each voxel in turn.
* 3dLocalACF will estimate the 3 ACF parameters in a local neighborhood
around each voxel.
>>>>>
* 3dBlurToFWHM will iteratively blur a dataset (inside a mask) to have
a given global FWHM. This program may or may not be useful :)
* 3dBlurInMask will blur a dataset inside a mask, but doesn't measure FWHM or ACF.
-- Zhark, Ruler of the (Galactic) Cluster!
=========================================================================
* This binary version of 3dFWHMx is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dGenFeatureDist
3dGenFeatureDist produces hives.
-classes 'CLASS_STRING': CLASS_STRING is a semicolon delimited
string of class labels. For example
-classes 'CSF; WM; GM'
-OTHER: Add histograms for an 'OTHER' class that has a uniform pdf.
-no_OTHER: Opposite of -OTHER.
-features 'FEATURES_STRING': FEATURES_STRING is a semicolon delimited
string of features. For example
-features 'MEAN.00_mm; median.19_mm; ...'
-sig 'FEATURE_VOL1 FEATURE_VOL2 ...': Specify volumes that define
the features. Each sub-brick is a feature
and the sub-brick's name is used to name the
feature. Multiple volumes get catenated.
Each occurrence of -sig option must be paired with
a -samp option. Think of each pair of '-sig, -samp'
options as describing data on the same voxel grid;
Think from the same subject. When specifying
training data from K subjects, you will end up using
K pairs of '-sig, -samp'.
All volumes from the kth -sig instance should have
the same voxel grid as each other and as that of
the kth -samp datasets.
-samp 'SAMPLE_VOX1 SAMPLE_VOX2 ...': Specify which voxels belong to
each class of interest. Each of the volumes
should contain voxel values (keys) that are
defined in -labeltable. You can specify multiple
volumes, they all get catenated. Any volume can
contain voxels from 1 or more classes.
Each occurrence of -samp option must be paired with
a -sig option. Think of each pair of '-sig, -samp'
options as describing data on the same voxel grid;
Think from the same subject. When specifying
training data from K subjects, you will end up using
K pairs of '-sig, -samp'.
All volumes from the kth -samp instance should have
the same voxel grid as each other and as that of
the kth -sig datasets.
-hspec FEATURE MIN MAX NBINS: Set histogram parameters for feature FEATURE
FEATURE: String label of feature
MIN, MAX: Range of histogram
NBINS: Number of bins
Use this option to set the histogram parameters for features for which
the automatic parameter selection was lousy. You can specify parameters
for multiple features by using multiple -hspec instances. The only
condition is that all feature labels (FEATURE) must be part of the
set named in -features.
-prefix PREF: PREF is the prefix for all output volumes that are not
debugging related.
default: GenFeatDist
-ShowTheseHists HISTNAMES: Show histograms specified by HISTNAMES and quit.
HISTNAMES can specify just one .niml.hist file or a bunch of
them using a space, or comma separated list.
List multiple names between quotes.
-overwrite: An option common to almost all AFNI programs. It is
automatically turned on if you provide no PREF.
-debug: Debugging level
default: 1
-labeltable LT: Specify the label table
default: 1
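Illustrative example for a single training subject (a hedged sketch: all file
names are hypothetical, the feature names are assumed to match the sub-brick
labels of the -sig dataset, and only options documented above are used):
     3dGenFeatureDist                              \
         -classes 'CSF; GM; WM'                    \
         -features 'MEAN.00_mm; median.19_mm'      \
         -sig  subj1_features+orig                 \
         -samp subj1_classes+orig                  \
         -labeltable subj1.niml.lt                 \
         -prefix GenFeatDist_subj1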
AFNI program: 3dGenPriors
3dGenPriors produces classification priors based on voxel signatures.
At this stage, its main purpose is to speed up the performance of
3dSignatures when using the probabilistic method as opposed to SVM.
Example:
3dGenPriors -sig sigs+orig \
-tdist train.niml.td \
-pprefix anat.p \
-cprefix anat.c \
-labeltable DSC.niml.lt \
-do pc
Options:
-sig SIGS: Signatures dataset. A dataset with F features per voxel.
-tdist TDIST: Training results. This file is generated by 3dSignatures.
ONLY training files generated by 3dSignatures' method 'prob'
can be used by this program. The number of features in this
file should match the number of features (F) in SIGS.
This file also contains the names of the K classes that
will be referenced in the output datasets.
-prefix PREF: Specify root prefix and let program suffix it for output
Volumes. This way you need not use the -*prefix options
below.
-pprefix PPREF: Prefix for probability dset
-cprefix CPREF: Prefix for class dset
If you use -regroup_classes then you can also specify:
-pgprefix PGPREF, and -cgprefix CGPREF
-labeltable LTFILE: Labeltable to attach to output dset
This labeltable should contain all the classes
in TDIST
-cmask CMASK: Provide cmask expression. Voxels where expression is 0
are excluded from computations
-mask MASK: Provide mask dset
To run the program on one voxel only, you can set MASK to
the key word VOX_DEBUG. In this mode a mask is created
with only the one voxel specified in -vox_debug set to 1.
-mrange M0 M1: Consider MASK only for values between M0 and M1, inclusive
-do WHAT: Specify the output that this program should create.
Each character in WHAT specifies an output.
a 'c' produces the most likely class
a 'p' produces probability of belonging to a class
a 'pc' produces both of the above and that is the default.
You'd be deranged to use anything else at the moment.
-debug DBG: Set debug level
-vox_debug 1D_DBG_INDEX: 1D index of voxel to debug.
OR
-vox_debug I J K: where I, J, K are the 3D voxel indices
(not RAI coordinates in mm)
-vox_debug_file DBG_OUTPUT_FILE: File in which debug information is output
use '-' for stdout, '+' for stderr.
-uid UID : User identifier string. It is used to generate names for
temporary files to speed up the process.
You must use different UID for different subjects otherwise
you will run the risk of using bad temporary files.
By default, uid is set to a random string.
-use_tmp: Use temporary storage to speed up the program (see -uid )
This is the default
-no_tmp: Opposite of use_tmp
-pset PSET: Reuse probability output from an earlier run.
-cset CSET: Reuse classification output from an earlier run.
-regroup_classes 'C1 C2 C3': Regroup classes into parent classes C1 C2 C3
For this to work, the original classes must
be named something like C1.*, C2.*, etc.
This option can be used to replace @RegroupLabels script.
For example:
3dGenPriors -sig sigs+orig \
-tdist train.niml.td \
-pprefix anat.p \
-cprefix anat.c \
-labeltable DSC.niml.lt \
-do pc \
-regroup_classes 'CSF GM WM Out'
or if you have the output already, you can do:
3dGenPriors -sig sigs+orig \
-tdist train.niml.td \
-pset anat.p \
-cset anat.c \
-labeltable DSC.niml.lt \
-do pc \
-regroup_classes 'CSF GM WM Out'
-classes 'C1 C2 C3': Classify into these classes only. Alternative is
to classify from all the classes in the training data
-features 'F1 F2 F3 ...': Use these features only. Otherwise all
features in the signature file will be used.
Note that partial matching is used to resolve
which features to keep from training set. If you
want exact feature name matching, use
option -strict_feature_match
-strict_feature_match: Use strict feature name matching when resolving
which feature to keep from the training set.
-featgroups 'G1 G2 G3 ...': TO BE WRITTEN
Example: -featgroups 'MEDI MAD. P2S'
-ShowThisDist DIST: Show information obtained from the training data about
the distribution of DIST. For example:
-ShowThisDist 'd(mean.20_mm|PER02)'
Set DIST to ALL to see them all.
-fast: Use OpenMPized routines (default).
Considerably faster than alternative.
-slow: Not -fast.
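For illustration, here is a minimal single-voxel debugging sketch combining several
of the options above; the dataset and file names follow the example near the top of
this help, and the voxel indices and UID are hypothetical placeholders:
      3dGenPriors -sig sigs+orig \
                  -tdist train.niml.td \
                  -prefix anat \
                  -labeltable DSC.niml.lt \
                  -do pc \
                  -mask VOX_DEBUG \
                  -vox_debug 32 28 14 \
                  -vox_debug_file - \
                  -uid subj01 -use_tmp
The VOX_DEBUG mask restricts the computation to the single voxel given by
-vox_debug, and the debugging output goes to stdout ('-').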
=========================================================================
* This binary version of 3dGenPriors is NOT compiled using OpenMP, a
semi-automatic parallelizer software toolkit, which splits the work
across multiple CPUs/cores on the same shared memory computer.
* However, the source code is compatible with OpenMP, and can be compiled
with an OpenMP-capable compiler, such as gcc 8.x+, Intel's icc, and
Oracle Developer Studio.
* If you wish to compile this program with OpenMP, see the man page for
your C compiler, and (if needed) consult the AFNI message board, and
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* However, it would probably be simplest to download a pre-compiled AFNI
binary set that uses OpenMP!
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/index.html
AFNI program: 3dGetrow
Program to extract 1 row from a dataset and write it as a .1D file
Usage: 3dGetrow [options] dataset
OPTIONS:
-------
Exactly ONE of the following three options is required:
-xrow j k = extract row along the x-direction at fixed y-index of j
and fixed z-index of k.
-yrow i k = similar for a row along the y-direction
-zrow i j = similar for a row along the z-direction
-input ddd = read input from dataset 'ddd'
(instead of putting dataset name at end of command line)
-output ff = filename for output .1D ASCII file will be 'ff'
(if 'ff' is '-', then output is to stdout, the default)
N.B.: if the input dataset has more than one sub-brick, each
sub-brick will appear as a separate column in the output file.
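For example, a minimal sketch (the dataset name and row indices here are
hypothetical placeholders):
   3dGetrow -input anat+orig -xrow 21 15 -output row_y21_z15.1D
This writes the voxel values along the x-direction at y-index 21 and z-index 15
to the file row_y21_z15.1D; with a multi-sub-brick input, each sub-brick becomes
a separate column.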
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dGLMM
================== Welcome to 3dGLMM ==================
Program for Voxelwise Generalized Linear Mixed-Models (GLMMs)
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Version 0.0.3, Feb 18, 2025
Author: Gang Chen (gangchen@mail.nih.gov)
SSCC/NIMH, National Institutes of Health, Bethesda MD 20892, USA
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
------
### Generalized Linear Mixed-Models (GLMM) Overview
Generalized Linear Mixed-Models (GLMMs) extend Linear Mixed-Models (LMMs) to
handle non-normal response variables, such as binary, count, or categorical data.
The response variable in GLMMs can follow distributions like binomial, Poisson,
or other members of the exponential family.
### 3dGLMM: Extension of 3dLMEr
The program **3dGLMM** builds on **3dLMEr**, adding support for Student's
*t*-distribution for model residuals in addition to the standard normal
distribution. This functionality requires the R packages **glmmTMB**,
**car**, and **emmeans**.
Like **3dLMEr**, 3dGLMM automatically provides outputs for all main effects
and interactions. However, users must explicitly request marginal effects
and their comparisons through the options `-level` or `-slope` in 3dGLMM
instead of `-gltCode` or `-glfCode` in 3dLMEr.
1. **Random-Effects Specification**:
Random-effects components must be directly incorporated into the model
specification via the `-model` option. The `-ranEff` option used in
3dLMEr is no longer needed. Users are responsible for formulating
appropriate model structures. For detailed guidance, refer to the blog post:
[How to specify individual-level random effects in hierarchical modeling]
(https://discuss.afni.nimh.nih.gov/t/how-to-specify-individual-level-random-effects-in-hierarchical-modeling/6462).
2. **Marginal Effects and Pairwise Comparisons**:
Users can specify marginal effects and pairwise comparisons through the
options `-level` and `-slope`.
### Input and Output Formats
3dGLMM accepts input files in various formats, including AFNI, NIfTI,
surface (`niml.dset`), or 1D. To match the output format with the
input, append an appropriate suffix to the output option `-prefix`
(e.g., `.nii` for NIfTI, `.niml.dset` for surface, or `.1D` for 1D).
### Incorporation of Explanatory Variables
3dGLMM supports various types of explanatory variables and covariates:
- **Categorical variables**: Between- and within-subject factors.
- **Quantitative variables**: Continuous predictors like age or
behavioral data.
#### Declaring Quantitative Variables
When including quantitative variables, you must explicitly declare
them using the `-qVars` option. Additionally, consider the
**centering** of these variables:
- Global centering.
- Within-group (or within-condition) centering.
For further guidance on centering, see: [AFNI documentation on centering]
(https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/statistics/center.html).
### Installation of Required R Packages
Before running 3dGLMM, ensure the following R packages are installed:
- `glmmTMB`
- `car`
- `emmeans`
- `snow`
You can install them via AFNI’s R installation script:
rPkgsInstall -pkgs "glmmTMB,car,emmeans,snow"
Alternatively, install them directly in R:
```
install.packages("glmmTMB")
install.packages("car")
install.packages("emmeans")
install.packages("snow")
```
### Example Scripts
The following example scripts demonstrate 3dGLMM applications. More
examples will be added as scenarios are crowdsourced from users. If
one of the examples matches your data structure, use it as a template
to build your own script.
### Running 3dGLMM
Once you’ve constructed your command script, run it in the terminal.
Save the script as a text file (e.g., `GLMM.txt`) and execute it with:
```
nohup tcsh -x GLMM.txt &
```
Alternatively, for progress tracking, redirect output to a log file:
```
nohup tcsh -x GLMM.txt > diary.txt &
nohup tcsh -x GLMM.txt |& tee diary.txt &
```
This method saves output in `diary.txt`, allowing you to review
progress and troubleshoot if needed.
---
### Example 1: one within-individual factor and a quantitative predictor
-------------------------------------------------------------------------
3dGLMM -prefix glmm.student -jobs 12 \
-family student.t \
-model 'task*age+(1|Subj)' \
-qVars 'age' \
-qVarCenters 0 \
-level LAB task CAT task \
-level LAB pos.slp2 CAT 1 FIX task=pos,age=2 \
-slope LAB pos.age CAT 1 FIX task=pos QUANT age \
-slope LAB task.by.age CAT task QUANT age \
-dataTable \
Subj age task InputFile \
s1 3.03 pos data/pos_s1+tlrc. \
s1 0.82 neg data/neg_s1+tlrc. \
s2 2.67 pos data/pos_s2+tlrc. \
s2 0.24 neg data/neg_s2+tlrc. \
...
#### Data Structure Overview
This example involves a **within-individual factor** (task with two levels:
*pos* and *neg*) and a **between-individual quantitative variable** (*age*).
The GLMM analysis is conducted using a **Student's t-distribution** for the
model residuals.
#### Reserved Keywords for Post-Hoc Estimations
The following four reserved keywords are used in custom specifications for
post-hoc estimations (and should not be used as variable names):
- **LAB**: Used to define a label for the estimated effect.
- **CAT**: Specifies a categorical variable for which effects are estimated at
each level, along with all possible pairwise comparisons. Use *1* for
the intercept or overall mean of the model.
- **FIX**: Indicates variables fixed at specific levels or values.
- **QUANT**: Specifies the estimation of a slope for a quantitative variable.
---
#### Explanations for Post-Hoc Estimations
1. **`-level LAB task CAT task`**
- Estimates the effects for both levels of the task (*pos* and *neg*) and
their contrast (evaluated at *age = 0*).
2. **`-level LAB pos.slp2 CAT 1 FIX task=pos,age=2`**
- Estimates the effect of the *pos* task at *age = 2* (relative to the
centered value of age). The number *1* represents the intercept or grand
mean of the model.
3. **`-slope LAB pos.age CAT 1 FIX task=pos QUANT age`**
- Estimates the slope effect of *age* for the *pos* task. The number *1*
represents the intercept or grand mean of the model.
4. **`-slope LAB task.by.age CAT task QUANT age`**
- Estimates the slope effect of *age* for both *pos* and *neg* tasks and
their contrast.
Options in alphabetical order:
------------------------------
-bounds lb ub: This option is for outlier removal. Two numbers are expected from
the user: the lower bound (lb) and the upper bound (ub). The input data will
be confined within [lb, ub]: any values in the input data that are beyond
the bounds will be removed and treated as missing. Make sure the first number
is less than the second. The default (the absence of this option) is no
outlier removal.
**NOTE**: Using the -bounds option to remove outliers should be approached
with caution due to its arbitrariness. A more principled alternative is to
use the -family option with a Student's t-distribution.
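A hedged sketch of how -bounds might be combined with a Student's t residual
distribution (the bounds -5 and 5, the output prefix, and `table.txt` are
arbitrary illustrative placeholders, following Example 1 above):
```
# bounds and file names below are hypothetical placeholders
3dGLMM -prefix glmm.bounded -jobs 12 \
       -family student.t \
       -model 'task*age+(1|Subj)' \
       -qVars 'age' \
       -bounds -5 5 \
       -dataTable @table.txt
```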
-cio: Use AFNI's C io functions, which is the default. Alternatively, -Rio
can be used.
-dataTable TABLE: List the data structure with a header as the first line.
NOTE:
1) This option has to occur last in the script; that is, no other
options are allowed thereafter. Each line should end with a backslash
except for the last line.
2) The order of the columns should not matter except that the last
column has to be the one for input files, 'InputFile'. Unlike some other AFNI
group-analysis programs, the subject column (Subj) does not have to be the first
column, and in some situations the table does not need a subject ID column at all.
Each row should contain only one input file in the table of long format
(cf. wide format) as defined in R. Input files can be in AFNI, NIfTI or
surface format. AFNI files can be specified with sub-brick selector (square
brackets [] within quotes) specified with a number or label.
3) It is fine to have variables (or columns) in the table that are
not modeled in the analysis.
4) When the table is part of the script, a backslash is needed at the end
of each line (except for the last line) to indicate the continuation to the
next line. Alternatively, one can save the content of the table as a separate
file, e.g., calling it table.txt, and then in the script specify the data
with '-dataTable @table.txt'. However, when the table is provided as a
separate file, do NOT put any quotes around the square brackets for each
sub-brick; otherwise the program will not read the files properly. (Quotes ARE
required around the selectors when the table is included as part of the script.)
A backslash is also not needed at the end of each line, but it causes no problem
if present. This option of separating the table from
the script is useful: (a) when there are many input files so that the program
complains with an 'Arg list too long' error; (b) when you want to try
different models with the same dataset.
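A hedged sketch of the separate-file form (the file name and its rows are
hypothetical, mirroring Example 1 above):
```
# contents of table.txt (no quotes around sub-brick selectors here)
Subj age  task InputFile
s1   3.03 pos  data/pos_s1+tlrc.
s1   0.82 neg  data/neg_s1+tlrc.
...
```
and then, in the 3dGLMM script, end the command with `-dataTable @table.txt`.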
-dbgArgs: This option will enable R to save the parameters in a
file called .3dGLMM.dbg.AFNI.args in the current directory
so that debugging can be performed.
-family: This option specifies the distribution of model residuals. Currently
two families are supported: "Gaussian" (default) and "student.t".
-help: this help message
-IF var_name: var_name specifies the column name that is designated for
input files of effect estimates. The default (when this option is not invoked)
is 'InputFile', in which case the column header has to be exactly 'InputFile'.
This input-file column for effect estimates has to be the last column.
-jobs NJOBS: On a multi-processor machine, parallel computing will speed
up the program significantly.
Choose 1 for a single-processor computer.
-level LAB ... CAT ... BY ... FIX ...: Specify the label, categorical variable
....
-mask MASK: Process voxels inside this mask only.
Default is no masking.
-model FORMULA: Specify the model structure for all the variables. The
expression FORMULA with more than one variable has to be surrounded
within (single or double) quotes. Variable names in the formula
should be consistent with the ones used in the header of -dataTable.
In the GLMM context the simplest model is "1+(1|Subj)", in
which a random intercept is estimated for each subject.
Each random-effects factor is
specified within parentheses per formula convention in R. Any
effects of interest and confounding variables (quantitative or
categorical variables) can be added as fixed effects without parentheses.
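As a hedged illustration (the variable names are hypothetical placeholders), a
model with two categorical predictors, one quantitative covariate, and a
subject-level random intercept could be written as
```
-model 'task*group+age+(1|Subj)'
```
where task, group, and age would need to appear as column headers in -dataTable
(with age declared via -qVars).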
-prefix PREFIX: Output file name. For AFNI format, provide prefix only,
with no view+suffix needed. Filename for NIfTI format should have
.nii attached (otherwise the output would be saved in AFNI format).
-qVarCenters VALUES: Specify centering values for quantitative variables
identified under -qVars. Multiple centers are separated by
commas (,) without any other characters such as spaces and should
be surrounded within (single or double) quotes. The order of the
values should match that of the quantitative variables in -qVars.
Default (absence of option -qVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable YOURSELF first before the values are fed into -dataTable.
-qVars variable_list: Identify quantitative variables (or covariates) with
this option. The list with more than one variable has to be
separated with comma (,) without any other characters such as
spaces and should be surrounded within (single or double) quotes.
For example, -qVars "Age,IQ"
WARNINGS:
1) Centering a quantitative variable through -qVarCenters is
very critical when other fixed effects are of interest.
2) Between-subjects covariates are generally acceptable.
However EXTREME caution should be taken when the groups
differ substantially in the average value of the covariate.
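A hedged sketch combining the two options (the variable names and centering
values are illustrative only):
```
-qVars 'Age,IQ' -qVarCenters '25,100'
```
This would center Age at 25 and IQ at 100 before the model is fit.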
-R2: Enabling this option will prompt the program to provide both
conditional and marginal coefficient of determination (R^2)
values associated with the adopted model. Marginal R^2 indicates
the proportion of variance explained by the fixed effects in the
model, while conditional R^2 represents the proportion of variance
explained by the entire model, encompassing both fixed and random
effects. Two sub-bricks labeled 'R2m' and 'R2c' will be provided
in the output.
-resid PREFIX: Output file name for the residuals. For AFNI format, provide
prefix only without view+suffix. Filename for NIfTI format should
have .nii attached, while file name for surface data is expected
to end with .niml.dset. The sub-brick labeled with the '(Intercept)',
if present, should be interpreted as the effect with each factor
at the reference level (alphabetically the lowest level) for each
factor and with each quantitative covariate at the center value.
-Rio: Use R's io functions. The alternative is -cio.
-show_allowed_options: list of allowed options
-slope LAB ... CAT ... BY ... FIX ... QUANT ...: Specify the label, categorical variable
....
-SS_type NUMBER: Specify the type for sums of squares in the F-statistics.
Three options are: sequential (1), hierarchical (2), and marginal (3).
When this option is absent (default), marginal (3) is automatically set.
Some discussion regarding their differences can be found here:
https://sscc.nimh.nih.gov/sscc/gangc/SS.html
-vVarCenters VALUES: Specify centering values for voxel-wise covariates
identified under -vVars. Multiple centers are separated by
commas (,) within (single or double) quotes. The order of the
values should match that of the covariates in -vVars.
Default (absence of option -vVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable yourself first before the files are fed under -dataTable.
-vVars variable_list: Identify voxel-wise covariates with this option.
Currently only one voxel-wise covariate is allowed. By default
mean centering is performed voxel-wise across all subjects.
Alternatively centering can be specified through a global value
under -vVarCenters. If the voxel-wise covariates have already
been centered, set the centers at 0 with -vVarCenters.
AFNI program: 3dGrayplot
Make a grayplot from a 3D+time dataset, sort of like Jonathan Power:
https://www.ncbi.nlm.nih.gov/pubmed/27510328
https://www.jonathanpower.net/2017-ni-the-plot.html
Result is saved to a PNG image for your viewing delight.
* This style of plot is also called a carpet plot,
but REAL carpets are much more attractive, IMHO.
* The horizontal axis of the grayplot is time, and the
vertical axis is all 3 spatial dimensions collapsed into 1.
* Also see AFNI script @grayplot, as well as the QC output
generated by afni_proc.py.
Usage:
3dGrayplot [options] inputdataset
OPTIONS: [lots of them]
--------
-mask mset = Name of mask dataset
* Voxels that are 0 in mset will not be processed.
* Dataset must be byte-valued (8 bits: 0..255);
shorts (16 bits) are also acceptable, but only
values from 1..255 will be processed.
* Each distinct value from 1..255 will be processed
separately, and the output image will be ordered
with the mask=1 voxels on top, mask=2 voxels next,
and so on down the image.
* A partition (e.g., mask=3) with fewer than 9 voxels
will not be processed at all.
* If there is more than one partition, horizontal dashed
lines will be drawn between them.
* If '-mask' is not given, then all voxels will be used,
except those at the very edge of a volume. Doing this is
usually not a good idea, as the non-brain tissue will
take up a lot of useless space in the output grayplot.
-input dataset = Alternative way to input the dataset to process.
-prefix ppp.png = Name for output file.
* Default is Grayplot.png (if you don't use this option)
* If the filename ends in '.jpg', a JPEG file is output.
* If the filename ends in '.pgm', a PGM file is output.
[PGM files can be manipulated with the NETPBM package.]
* If the filename does not end in '.jpg' OR in '.png'
OR in '.pgm', then '.png' will be added at the end.
-dimen X Y = Output size of image in pixels.
* X = width = time axis direction
* Y = height = voxel/space dimensions
* Defaults are X=1024 Y=512 -- suitable for screen display.
* For publication, you might want more pixels, as in
-dimen 1800 1200
which would be 6 inches wide by 4 inches high, at the usual
300 dots-per-inch (dpi) of high resolution image printing.
** Note that there are usually many more voxels in the Y direction
than there are pixels in the output image. This fact requires
coarsening the Y output grid and resampling the data to match.
See the next option for a little more information about
how this resampling is implemented.
-oldresam = The method for resampling the processed dataset to the final
grayscale image size was changed/improved in a major way.
If you want to use the original method, then give this option.
* The only reason for using this option is for
comparison with the new method.
* The new resampling method uses minimum-sidelobe local averaging
when coarsening the grid (vertical direction Y = voxels/space)
-- whose purpose is to reduce aliasing artifacts
* And uses cubic interpolation when refining the grid
(usually horizontal direction = time) -- whose purpose
is purely beauty -- compared to the older linear interpolation.
* Note that the collapsing of multiple voxels into one pixel in
the Y direction will tend to cancel out signals that change sign
between neighboring voxels in whichever voxel ordering you choose.
(See the 'order' options below.)
-polort p = Order of polynomials for detrending.
* Default value is 2 (mean, slope, quadratic curve).
* Use '-1' if data is already detrended and de-meaned.
(e.g., is an AFNI errts.* file or other residual dataset)
-fwhm f = FWHM of blurring radius to use in the dataset before
making the image.
* Each partition (i.e., mask=1, mask=2, ...) is blurred
independently, as in program 3dBlurInMask.
* Default value is 0 mm = no blurring.
[In the past, the default value was 6.]
* If the dataset was NOT previously blurred, a little
spatial blurring here will help bring out larger scale
features in the times series, which might otherwise
look very noisy.
** The following four options control the ordering of **
** voxels in the grayplot, in the vertical direction. **
-pvorder = Within each mask partition, order the voxels (top to
bottom) by how well they match the two leading principal
components of that partition. The result is to make the
top part of each partition be made up of voxels with
similar time series, and the bottom part will be more
'random looking'.
++ The presence of a lot of temporal structure in a
grayplot of a 'errts' residual dataset indicates
that the 'removal' of unwanted time series components
did not work well.
++ Using '-pvorder' to put all the structured time series
close together will make such problems more visible.
++ IMHO, this is the most useful ordering.
-LJorder = Within each mask partition, order the voxels (top to
bottom) by their Ljung-Box statistics, which is a measure
of temporal correlation.
++ Experimental; probably not useful.
-peelorder = Within each mask partition, order the voxels by how
many 'peel' steps are needed to get from the partition
boundary to a given voxel.
++ This ordering puts voxels in 'similar' geometrical
positions sort-of close together in the image.
And is usually not very interesting, IMHO.
-ijkorder = Set the intra-partition ordering to the default, by
dataset 3D index ('ijk').
++ In AFNI's +tlrc ordering, this ordering primarily will
be from Inferior to Superior in the brain (from top to
bottom in the grayplot image).
++ This is the default ordering method, but not the best.
** These options control the scaling from voxel value to gray level **
-range X = Set the range of the data to be plotted to be 'X'.
Each time series is first normalized by its values to:
Z[i] = (t[i] - mean_t)/stdev_t.
When this option is used, then:
* a value of 0 will be plotted as middle-gray
* a value of +X (or above) will be plotted as white
* a value of -X (or below) will be plotted as black
Thus, this option should be used with data that is centered
around zero -- or will be so after '-polort' detrending.
* For example, if you are applying this option to an
afni_proc.py 'errts' (residuals) dataset, a good value
of X to use is 3 or 4, since those values are in percents.
* The @grayplot script uses '-range 3.89' since that is the
value at which a standard normal N(0,1) deviate has a 1e-4
two-sided tail probability. (If nothing else, this sounds cool.)
If you do NOT use '-range', then the data will be automatically
normalized so each voxel time series has RMS value 1, and then
the grayscale plot will be black-to-white being the min-to-max,
where the min and max are computed over the entire detrended
and normalized dataset.
* This default automatic normalizing and scaling makes it
almost impossible to directly compare grayplots from
different datasets. This difficulty is why the '-range'
and '-percent' options were added.
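For instance, a minimal sketch for an afni_proc.py residual dataset
(the dataset and mask names are hypothetical placeholders):
   3dGrayplot -mask mask_anat+tlrc -pvorder -range 4 \
              -prefix errts_grayplot.png -input errts.subj+tlrc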
-percent = Use this option on 'raw' time series datasets, to compute
the mean of each voxel timeseries and then use that value
to scale the values to percent differences from the mean.
* NOT suitable for use with a residual 'errts' dataset!
* Should be combined with '-range'.
* Detrending will be applied while calculating the mean.
By default, that will be quadratic detrending of each
voxel time series, but that can be changed with the
'-polort' option.
-raw_with_bounds A B
= Use this option on 'raw' time series datasets, map values
<= A to black, those >= B to white, and intermediate values
to grays.
* Can be used with any kind of dataset, but probably makes
most sense to use with scaled ones (errts, fitts or
all_runs).
* Should NOT be combined with '-range' or '-percent'.
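A hedged sketch for a scaled all_runs dataset (the dataset name and the
bounds 96 and 104 are arbitrary illustrative placeholders):
   3dGrayplot -mask mask_anat+tlrc -pvorder -raw_with_bounds 96 104 \
              -prefix allruns_grayplot.png -input all_runs.subj+tlrc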
** Quick hack for Cesar Caballero-Gaudes, April 2018, by @AFNIman.
As such, this program may be modified in the future to be more useful,
or at least more beautifully gorgeous.
** Applied to 'raw' EPI data, the results may not be very informative.
It seems to be more useful to look at the grayplot calculated from
pre-processed data (e.g., time series registered, filtered, etc.).
** See also the script @grayplot, which can process the results from
afni_proc.py and produce an image with the grayplot combined with
a graph of the motion magnitude, and with the GM, WM, and CSF in
different partitions.
** afni_proc.py uses this program to create grayplots of the residuals
from regression analysis, as part of its Quality Control (QC) output.
--------
EXAMPLE:
--------
The following commands first generate a time series dataset,
then create grayplots using each of the ordering methods
(so you can compare them). No mask is given.
3dcalc -a jRandomDataset:64:64:30:256 -datum float \
-prefix Qsc.nii -expr 'abs(.3+cos(0.1*i))*sin(0.1*t+0.1*i)+gran(0,3)'
3dGrayplot -pvorder -prefix QscPV.png -input Qsc.nii -fwhm 8
3dGrayplot -ijkorder -prefix QscIJK.png -input Qsc.nii -fwhm 8
3dGrayplot -peelorder -prefix QscPEEL.png -input Qsc.nii -fwhm 8
AFNI program: 3dGroupInCorr
Usage: 3dGroupInCorr [options]
* Also see
https://afni.nimh.nih.gov/pub/dist/edu/latest/afni_handouts/afni20_instastuff.pdf
* This program operates as a server for AFNI or SUMA. It reads in dataset
collections that have been prepared by 3dSetupGroupInCorr, and then
connects to the AFNI or SUMA GUI program (via TCP/IP). Then it waits
for a command to be sent from AFNI/SUMA before it actually does anything.
* The command from AFNI is sent when the user (you) clicks the 'InstaCorr Set' *
* button in the [A] controller image viewer right-mouse-click popup menu; or, *
* when you hold down the Shift and Control (Ctrl) keys on the keyboard at the *
* same time you left-mouse-click in the image viewer. *
(-: However, the new [Feb 2011] '-batch' option, described far below, :-)
(-: lets you run 3dGroupInCorr by itself, without AFNI or SUMA, writing :-)
(-: results to disk instead of transmitting them to the client program. :-)
* At the same time as you run 3dGroupInCorr, you also have to run the
AFNI GUI program, with a command like 'afni -niml'. 3dGroupInCorr
by itself will only do something when AFNI sends it a command, which
you do by using the 'InstaCorr Set' button on the [A] image viewer
right-click popup menu, after 3dGroupInCorr has connected to AFNI.
* When AFNI sends a seed voxel command, 3dGroupInCorr will extract
that voxel times series from each input dataset, will compute the
correlation map of each dataset with the corresponding seed time
series, then will compute the voxel-wise collection of t-tests of
that bunch of correlation maps, and return the resulting 3D volumes
to AFNI for display.
++ A lot of computing can be required if there are a lot of datasets
in the input collections. 3dGroupInCorr is carefully written to
be fast. For example, on a Mac Pro with 8 3GHz CPUs, running
with 1.2 GBytes of data (100 datasets each with 69K voxels), each
group correlation map takes about 0.3 seconds to calculate and
transmit to AFNI -- this speed is why it's called 'Insta'.
* You must start AFNI with the '-niml' option to allow it to accept
incoming TCP/IP socket connections.
++ Or you can press the 'NIML+PO' button in the GUI, if you forgot
to type the AFNI command line correctly.
++ If you are running 3dGroupInCorr and AFNI on separate computers,
you also have to setup 'host trusting' correctly -- for details,
see the description of the '-ah' option, far below.
* In the AFNI 'A' controller, once 3dGroupInCorr is connected to AFNI,
you don't have to switch to 'GrpInCorr' on the 'InstaCorr' menu to
use the 'InstaCorr Set' controls -- unlike the individual subject
InstaCorr, which requires setup inside AFNI. For Group InstaCorr,
the setup is already done in 3dSetupGroupInCorr. The ONLY reason
for using the 'GrpInCorr' setup controls in AFNI is to change the
value of the '-seedrad' option's radius interactively.
* More detailed outline of processing in 3dGroupInCorr:
++ For each 3D+time dataset in the input dataset collections:
-- Extract the seed voxel time series (averaging locally per 'seedrad')
[you could do this manually with 3dmaskave]
-- Correlate it with all other voxel time series in the same dataset
[you could do this manually with 3dDeconvolve or 3dfim]
-- Result is one 3D correlation map per input dataset
-- The standard processing uses Pearson correlation between time series
vectors. You can also pre-process the data to use Spearman (rank)
correlation instead. This alteration must be done in program
3dSetupGroupInCorr, or with program 3dTransformGroupInCorr.
++ Then carry out the t-test between/among these 3D correlation maps,
possibly allowing for dataset-level covariates.
-- Actually, between the arctanh() of these maps:
cf. RA Fisher:
https://en.wikipedia.org/wiki/Fisher_transformation
[you could do the arctanh() conversion manually via 3dcalc;]
[then do the t-tests manually with 3dttest++; then convert]
[the t-statistics to Z-scores using yet another 3dcalc run.]
-- To be overly precise, if the correlation is larger than 0.999329,
then the arctanh is clipped to 4.0, to avoid singularities.
If you consider this clipping to be a problem, please go away.
++ The dataset returned to AFNI converts the t-statistic maps
to Z-scores, for various reasons of convenience.
-- Conversion is done via the same mechanism used in program
cdf -t2z fitt TSTAT DOF
-- The individual correlation maps that were t-test-ed are discarded.
-- Unless you use the new [Jan 2011] '-sendall' option :-)
* When 3dGroupInCorr starts up, it has to 'page fault' all the data
into memory. This can take several minutes, if it is reading (say)
10 Gbytes of data from a slow disk. After that, if your computer
has enough RAM, then the program should run pretty quickly.
++ If your computer DOESN'T have enough RAM to hold all the data,
then this program will be painfully slow -- buy more memory!
++ Note that the .data file(s) are mapped directly into memory (mmap),
rather than being read with standard file input methods (read function).
++ This memory-mapping operation may not work well on network-mounted
drives, in which case you will have to run 3dGroupInCorr on the same
computer with the data files [Feb 2016 -- but see the new '-read' option].
++ However, 3dGroupInCorr does NOT need to be run on the same computer
as AFNI or SUMA: see the '-ah' option (described far below).
* Once 3dGroupInCorr is connected to AFNI, you can 'drive' the selection
of seed points via the AFNI driver commands (e.g., via the plugout_drive
program). For details, see the README.driver document.
* One reason this program is a server (rather than being built in
to AFNI) is that it is compiled to use OpenMP, which will let
it make use of multiple CPU cores on the computer system :-)
++ For more information, see the very end of this '-help' output.
* If you have only the .niml and .data files, and not original datasets,
you can partially reconstruct the datasets by using the program
3dExtractGroupInCorr.
===================================================================
COMMAND LINE OPTIONS
[Most options are not case sensitive -- e.g., '-apair' == '-Apair']
===================================================================
-----------------------*** Input Files ***-------------------------
-setA AAA.grpincorr.niml
= Give the setup file (from 3dSetupGroupInCorr) that describes
the first dataset collection:
++ This 'option' is MANDATORY (you have to input SOMETHING).
++ Of course, 'AAA' should be replaced with the correct name of
your input dataset collection file!
++ 3dGroupInCorr can use byte-valued or short-valued data as
produced by the '-byte' or '-short' options to 3dSetupGroupInCorr.
++ You can also put the '.data' filename here, or leave off the '.niml';
the program will look for these cases and patch the filename as needed.
-setB BBB.grpincorr.niml
= Give the setup file that describes the second dataset collection:
++ This option IS optional.
++ If you use only -setA, then the program computes a one-sample t-test.
++ If you use also -setB, then the program computes a two-sample t-test.
-- The exact form of the 2-sample t-test used is controlled by one of the
three options described below (which are mutually exclusive).
++ The sign of a two sample t-test is 'A-B'; that is, a positive result
means that the A set of correlations average larger than the B set.
++ The output t-statistics are converted to Z-scores for transmission to AFNI,
using the same code as the 'fitt_t2z(t,d)' function in 3dcalc:
-- e.g., the output of the command
ccalc 'fitt_t2z(4,15)'
is 3.248705, showing that a t-statistic of 4 with 15 degrees-of-freedom
(DOF) has the same p-value as a Z-score [N(0,1) deviate] of 3.248705.
-- One reason for using Z-scores is that the DOF parameter varies between
voxels when you choose the -unpooled option for a 2-sample t-test.
-Apair = Instead of using '-setB', this option tells the program to use
the '-setA' collection in its place; however, the seed location
for this second copy of setA is a different voxel/node. The result
is to contrast (via a paired t-test) the correlation maps from the
different seeds.
++ For Alex Martin and his horde of myrmidons.
-->> You cannot use '-Apair' with '-setB' or with '-batch'.
++ To use this in the AFNI GUI, you first have to set the Apair seed
using the 'GIC: Apair Set' button on the image viewer right-click
popup menu. After that, the standard 'InstaCorr Set' button will
pick the new seed to contrast with the Apair seed.
++ Or you can select 'GIC: Apair MirrorOFF' to switch it to 'MirrorON*'.
In that case, selecting 'InstaCorr Set' will automatically also set
the Apair seed to the left-right mirror image location (+x -> -x).
++ The resulting correlation maps will have a positive (red) hotspot
near the InstaCorr seed and a negative (blue) hotspot near the
Apair seed. If you don't understand why, then your understanding
of resting state FMRI correlation analyses needs some work.
-->> It is regions AWAY from the positive and negative seeds that are
potentially interesting -- significant results at region Q indicate
a difference in 'connectivity' between Q and the two seeds.
++ In the case of mirroring, Q is asymmetrically 'connected' to one
side of brain vs. the other; e.g., I've found that the left Broca's
area (BA 45) makes a good seed -- much of the left temporal lobe is
asymmetrically connected with respect to this seed and its mirror,
but not so much of the right temporal lobe.
-labelA aaa = Label to attach (in AFNI) to sub-bricks corresponding to setA.
If you don't give this option, the label used will be the prefix
from the -setA filename.
-labelB bbb = Label to attach (in AFNI) to sub-bricks corresponding to setB.
++ At most the first 11 characters of each label will be used!
++ In the case of '-Apair', you can still use '-labelB' to indicate
the label for the negative (Apair) seed; otherwise, the -setA
filename will be used with 'AP:' prepended.
-----------------------*** Two-Sample Options ***-----------------------
-pooled = For a two-sample un-paired t-test, use a pooled variance estimator
-unpooled = For a two-sample un-paired t-test, use an unpooled variance estimator
++ Statistical power declines a little, and in return,
the test becomes a little more robust.
-paired = Use a two-sample paired t-test
++ Which is the same as subtracting the two sets of 3D correlation
maps, then doing a one-sample t-test.
++ To use '-paired', the number of datasets in each collection
must be the same, and the datasets must have been input to
3dSetupGroupInCorr in the same relative order when each
collection was created. (Duh.)
++ '-paired' is automatically turned on when '-Apair' is used.
-nosix = For a 2-sample situation, the program by default computes
not only the t-test for the difference between the samples,
but also the individual (setA and setB) 1-sample t-tests, giving
6 sub-bricks that are sent to AFNI. If you don't want
these 4 extra 1-sample sub-bricks, use the '-nosix' option.
++ See the Covariates discussion, below, for an example of how
'-nosix' affects which covariate sub-bricks are computed.
++ In the case of '-Apair', you may want to keep these extra
sub-bricks so you can see the separate maps from the positive
and negative seeds, to make sure your results make sense.
**-->> None of these 'two-sample' options means anything for a 1-sample
t-test (i.e., where you don't use -setB or -Apair).
-----------------*** Dataset-Level Covariates [May 2010] ***-----------------
-covariates cf = Read file 'cf' that contains covariates values for each dataset
input (in both -setA and -setB; there can be at most one
-covariates option). Format of the file
FIRST LINE --> subject IQ age
LATER LINES --> Elvis 143 42
Fred 85 59
Ethel 109 49
Lucy 133 32
This file format should be compatible with 3dMEMA.
++ The first column contains the labels that must match the dataset
labels stored in the input *.grpincorr.niml files, which are
either the dataset prefixes or whatever you supplied in the
3dSetupGroupInCorr program via '-labels'.
-- If you ran 3dSetupGroupInCorr before this update, its output
.grpincorr.niml file will NOT have dataset labels included.
Such a file cannot be used with -covariates -- Sorry.
++ The later columns contain numbers: the covariate values for each
input dataset.
-- 3dGroupInCorr does not allow voxel-level covariates. If you
need these, you will have to use 3dttest++ on the '-sendall'
output (of individual dataset correlations), which might best
be done using '-batch' mode (cf. far below).
++ The first line contains column headers. The header label for the
first column isn't used for anything. The later header labels are
used in the sub-brick labels sent to AFNI.
++ If you want to omit some columns in file 'cf' from the analysis,
you can do so with the standard AFNI column selector '[...]'.
However, you MUST include column #0 first (the dataset labels) and
at least one more numeric column. For example:
-covariates Cov.table'[0,2..4]'
to skip column #1 but keep columns #2, #3, and #4.
++ At this time, only the -paired and -pooled options can be used with
covariates. If you use -unpooled, it will be changed to -pooled.
-unpooled still works with a pure t-test (no -covariates option).
-- This restriction might be lifted in the future. Or it mightn't.
++ If you use -paired, then the covariates for -setB will be the same
as those for -setA, even if the dataset labels are different!
-- This also applies to the '-Apair' case, of course.
++ By default, each covariate column in the regression matrix will have
its mean removed (centered). If there are 2 sets of subjects, each
set's matrix will be centered separately.
-- See the '-center' option (below) to alter this default.
++ For each covariate, 2 sub-bricks are produced:
-- The estimated slope of arctanh(correlation) vs covariate
-- The Z-score of the t-statistic of this slope
++ If there are 2 sets of subjects, then each pair of sub-bricks is
produced for the setA-setB, setA, and setB cases, so that you'll
get 6 sub-bricks per covariate (plus 6 more for the mean, which
is treated as a special covariate whose values are all 1).
-- At present, there is no way to tell 3dGroupInCorr not to send
all this information back to AFNI/SUMA.
++ The '-donocov' option, described later, lets you get the results
calculated without covariates in addition to the results with
covariate regression included, for comparison fun.
-- Thus adding to the number of output bricks, of course.
++ EXAMPLE:
If there are 2 groups of datasets (with setA labeled 'Pat', and setB
labeled 'Ctr'), and one covariate (labeled IQ), then the following
sub-bricks will be produced:
# 0: Pat-Ctr_mean = mean difference in arctanh(correlation)
# 1: Pat-Ctr_Zscr = Z score of t-statistic for above difference
# 2: Pat-Ctr_IQ = difference in slope of arctanh(correlation) vs IQ
# 3: Pat-Ctr_IQ_Zscr = Z score of t-statistic for above difference
# 4: Pat_mean = mean of arctanh(correlation) for setA
# 5: Pat_Zscr = Z score of t-statistic for above mean
# 6: Pat_IQ = slope of arctanh(correlation) vs IQ for setA
# 7: Pat_IQ_Zscr = Z score of t-statistic for above slope
# 8: Ctr_mean = mean of arctanh(correlation) for setB
# 9: Ctr_Zscr = Z score of t-statistic for above mean
#10: Ctr_IQ = slope of arctanh(correlation) vs IQ for setB
#11: Ctr_IQ_Zscr = Z score of t-statistic for above slope
++ However, the single-set results (sub-bricks #4-11) will NOT be
computed if the '-nosix' option is used.
++ If '-sendall' is used, the individual dataset arctanh(correlation)
maps (labeled with '_zcorr' at the end) will be appended to this
list. These setA sub-brick labels will start with 'A_' and these
setB labels with 'B_'.
++ If you are having trouble getting the program to read your covariates
table file, then set the environment variable AFNI_DEBUG_TABLE to YES
and run the program -- the messages may help figure out the problem.
For example:
3dGroupInCorr -DAFNI_DEBUG_TABLE=YES -covariates cfile.txt |& more
-->>**++ A maximum of 31 covariates are allowed. If you need more, then please
consider the possibility that you are completely deranged or demented.
*** CENTERING ***
Covariates are processed using linear regression. There is one column in the
regression matrix for each covariate, plus a column of all 1s for the mean
value. 'Centering' refers to the process of subtracting some value from each
number in a covariate's column, so that the fitted model for the covariate's
effect on the data is zero at this subtracted value; the model (1 covariate) is:
data[i] = mean + slope * ( covariate[i] - value )
where i is the dataset index. The standard (default) operation is that 'value'
is the mean of the covariate[i] numbers.
-center NONE = Do not remove the mean of any covariate.
-center DIFF = Each set will have the means removed separately [default].
-center SAME = The means across both sets will be computed and subtracted.
* This option only applies to a 2-sample unpaired test.
* You can attach '_MEDIAN' after 'DIFF' or 'SAME' to have the
centering be done at the median of covariate values, rather
than the mean, as in 'DIFF_MEDIAN' or 'SAME_MEDIAN'.
(Why you would do this is up to you, as always.)
-center VALS A.1D [B.1D]
This option (for Gang Chen) allows you to specify the
values that will be subtracted from each covariate before
the regression analysis. If you use this option, then
you must supply a 1D file that gives the values to be
subtracted from the covariates; if there are 3 covariates,
then the 1D file for the setA datasets should have 3 numbers,
and the 1D file for the setB datasets (if present) should
also have 3 numbers.
* For example, to put these values directly on the command line,
you could do something like this:
-center VALS '1D: 3 7 9' '1D: 3.14159 2.71828 0.91597'
* As a special case, if you want the same values used for
the B.1D file as in the A.1D file, you can use the word
'DITTO' in place of repeating the A.1D filename.
* Of course, you only have to give the B.1D filename if there
is a setB collection of datasets, and you are not doing a
paired t-test.
Please see the discussion of CENTERING in the 3dttest++ help output. If
you change away from the default 'DIFF', you should really understand what
you are doing, or an elephant may sit on your head, which no one wants.
---------------------------*** Other Options ***---------------------------
-seedrad r = Before performing the correlations, average the seed voxel time
series for a radius of 'r' millimeters. This is in addition
to any blurring done prior to 3dSetupGroupInCorr. The default
radius is 0, but the AFNI user can change this interactively.
-sendall = Send all individual subject results to AFNI, as well as the
various group statistics.
++ These extra sub-bricks will be labeled like 'xxx_zcorr', where
'xxx' indicates which dataset the results came from; 'zcorr'
denotes that the values are the arctanh of the correlations.
++ If there are a lot of datasets, then the results will be VERY
large and take up a lot of memory in AFNI.
**++ Use this option with some judgment and wisdom, or bad things
might happen! (e.g., your computer runs out of memory.)
++ This option is also known as the 'Tim Ellmore special'.
-donocov = If covariates are used, this option tells 3dGroupInCorr to also
compute the results without using covariates, and attach those
to the output dataset -- presumably to facilitate comparison.
++ These extra output sub-bricks have 'NC' attached to their labels.
++ If covariates are NOT used, this option has no effect at all.
-dospcov = If covariates are used, compute the Spearman (rank) correlation
coefficient of the subject correlation results vs. each covariate.
++ These extra sub-bricks are in addition to the standard
regression analysis with covariates, and are added here at
the request of the IMoM (PK).
++ These sub-bricks will be labeled as 'lll_ccc_SP', where
'lll' is the group label (from -labelA or -labelB)
'ccc' is the covariate label (from the -covariates file)
'_SP' is the signal that this is a Spearman correlation
++ There will be one sub-brick produced for each covariate,
for each group (1 or 2 groups).
-clust PP = This option lets you input the results from a 3dClustSim run,
to be transmitted to AFNI to aid with the interactive Clusterize.
3dGroupInCorr will look for files named
PP.NN1_1sided.niml PP.NN1_2sided.niml PP.NN1_bisided.niml
(and similarly for NN2 and NN3 clustering), plus PP.mask
and if at least one of these .niml files is found, will send
it to AFNI to be incorporated into the dataset. For example,
if the datasets' average smoothness is 8 mm, you could do
3dClustSim -fwhm 8 -mask Amask+orig -niml -prefix Gclus
3dGroupInCorr ... -clust Gclus
-->> Presumably the mask would be the same as used when you ran
3dSetupGroupInCorr, and the smoothness you would have estimated
via 3dFWHMx, via sacred divination, or via random guesswork.
It is your responsibility to make sure that the 3dClustSim files
correspond properly to the 3dGroupInCorr setup!
-->>++ This option only applies to AFNI usage, not to SUMA.
++ See the Clusterize notes, far below, for more information on
using the interactive clustering GUI in AFNI with 3dGroupInCorr.
-read = Normally, the '.data' files are 'memory mapped' rather than read
into memory. However, if your files are on a remotely mounted
server (e.g., a remote RAID), then memory mapping may not work.
Or worse, it may seem to work, but return 'data' that is all zero.
Use this '-read' option to force the program to read the data into
allocated memory.
++ Using read-only memory mapping is a way to avoid over-filling
the system's swap file, when the .data files are huge.
++ You must give '-read' BEFORE '-setA' or '-setB', so that the
program knows what to do when it reaches those options!
-ztest = Test the input to see if it is all zero. This option is for
debugging, not for general use all the time.
-ah host = Connect to AFNI/SUMA on the computer named 'host', rather than
on the current computer system 'localhost'.
++ This allows 3dGroupInCorr to run on a separate system than
the AFNI GUI.
-- e.g., If your desktop is weak and pitiful, but you have access
to a strong and muscular multi-CPU server (and the network
connection is fast).
++ Note that AFNI must be setup with the appropriate
'AFNI_TRUSTHOST_xx' environment variable, so that it will
allow the external socket connection (for the sake of security):
-- Example: AFNI running on computer 137.168.0.3 and 3dGroupInCorr
running on computer 137.168.0.7
-- Start AFNI with a command like
afni -DAFNI_TRUSTHOST_01=137.168.0.7 -niml ...
-- Start 3dGroupInCorr with a command like
3dGroupInCorr -ah 137.168.0.3 ...
-- You may use hostnames in place of IP addresses, but numerical
IP addresses will work more reliably.
-- If you are very trusting, you can set NIML_COMPLETE_TRUST to YES
to allow NIML socket connections from anybody. (This only affects
AFNI programs, not any other software on your computer.)
-- You might also need to adjust your firewall settings to allow
the reception of TCP/IP socket connections from outside computers.
Firewalls are a separate issue from setting up AFNI host 'trusting',
and the mechanics of how you can setup your firewall permissions is
not something about which we can give you advice.
-np PORT_OFFSET: Provide a port offset to allow multiple instances of
AFNI <--> SUMA, AFNI <--> 3dGroupIncorr, or any other
programs that communicate together to operate on the same
machine.
All ports are assigned numbers relative to PORT_OFFSET.
The same PORT_OFFSET value must be used on all programs
that are to talk together. PORT_OFFSET is an integer in
the inclusive range [1025 to 65500].
When you want to use multiple instances of communicating programs,
be sure the PORT_OFFSETS you use differ by about 50 or you may
still have port conflicts. A BETTER approach is to use -npb below.
-npq PORT_OFFSET: Like -np, but more quiet in the face of adversity.
-npb PORT_OFFSET_BLOC: Similar to -np, except it is easier to use.
PORT_OFFSET_BLOC is an integer between 0 and
MAX_BLOC. MAX_BLOC is around 4000 for now, but
it might decrease as we use up more ports in AFNI.
You should be safe for the next 10 years if you
stay under 2000.
Using this function reduces your chances of causing
port conflicts.
See also afni and suma options: -list_ports and -port_number for
information about port number assignments.
You can also provide a port offset with the environment variable
AFNI_PORT_OFFSET. Using -np overrides AFNI_PORT_OFFSET.
-max_port_bloc: Print the current value of MAX_BLOC and exit.
Remember this value can get smaller with future releases.
Stay under 2000.
-max_port_bloc_quiet: Spit MAX_BLOC value only and exit.
-num_assigned_ports: Print the number of assigned ports used by AFNI
then quit.
-num_assigned_ports_quiet: Do it quietly.
Port Handling Examples:
-----------------------
Say you want to run three instances of AFNI <--> SUMA.
For the first you just do:
suma -niml -spec ... -sv ... &
afni -niml &
Then for the second instance pick an offset bloc, say 1 and run
suma -niml -npb 1 -spec ... -sv ... &
afni -niml -npb 1 &
And for yet another instance:
suma -niml -npb 2 -spec ... -sv ... &
afni -niml -npb 2 &
etc.
Since you can launch many instances of communicating programs now,
you need to know which SUMA window, say, is talking to which AFNI.
To sort this out, the titlebars now show the number of the bloc
of ports they are using. When the bloc is set either via
environment variables AFNI_PORT_OFFSET or AFNI_PORT_BLOC, or
with one of the -np* options, window title bars change from
[A] to [A#] with # being the resultant bloc number.
In the examples above, both AFNI and SUMA windows will show [A2]
when -npb is 2.
-NOshm = Do NOT reconnect to AFNI using shared memory, rather than TCP/IP,
when using 'localhost' (i.e., AFNI and 3dGroupInCorr are running
on the same system).
++ The default is to use shared memory for communication when
possible, since this method of transferring large amounts of
data between programs on the same computer is much faster.
++ If you have a problem with the shared memory communication,
use '-NOshm' to use TCP/IP for all communications.
++ If you use '-VERB', you will get a very detailed progress report
from 3dGroupInCorr as it computes, including elapsed times for
each stage of the process, including transmit time to AFNI.
-suma = Talk to suma instead of afni, using surface-based i/o data.
-sdset_TYPE = Set the output format in surface-based batch mode to
TYPE. For allowed values of TYPE, search for option
called -o_TYPE in ConvertDset -help.
Typical values would be:
-sdset_niml, -sdset_1D, or -sdset_gii
-quiet = Turn off the 'fun fun fun in the sun sun sun' informational messages.
-verb = Print out extra informational messages for more fun!
-VERB = Print out even more informational messages for even more fun fun!!
-debug = Do some internal testing (slows things down a little)
---------------*** Talairach (+tlrc) vs. Original (+orig) ***---------------
Normally, AFNI assigns the dataset sent by 3dGroupInCorr to the +tlrc view.
However, you can tell AFNI to assign it to the +orig view instead.
To do this, set environment variable AFNI_GROUPINCORR_ORIG to YES when
starting AFNI; for example:
afni -DAFNI_GROUPINCORR_ORIG=YES -niml
This feature might be useful to you if you are doing a longitudinal study on
some subject, comparing resting state maps before and after some treatment.
-----------*** Group InstaCorr and AFNI's Clusterize function ***-----------
In the past, you could not use Clusterize in the AFNI A controller at the
same time that 3dGroupInCorr was actively connected.
***** This situation is no longer the case: *****
****** Clusterize is available with InstaCorr! ******
In particular, the 'Rpt' (report) button is very useful with 3dGroupInCorr.
If you use '-covariates' AND '-sendall', 3dGroupInCorr will send to AFNI
a set of 1D files containing the covariates. You can use one of these
as a 'Scat.1D' file in the Clusterize GUI to plot the individual subject
correlations (averaged across a cluster) vs. the covariate values -- this
graph can be amusing and even useful.
-- If you don't know how to use this feature in Clusterize, then learn!
---------------*** Dataset-Level Scale Factors [Sep 2012] ***---------------
-scale sf = Read file 'sf' that contains a scale factor value for each dataset
The file format is essentially the same as that for covariates:
* first line contains labels (which are ignored)
* each later line contains a dataset identifying label and a number
FIRST LINE --> subject factor
LATER LINES --> Elvis 42.1
Fred 37.2
Ethel 2.71828
Lucy 3.14159
* The arctanh(correlation) values from dataset Elvis will be
multiplied by 42.1 before being put into the t-test analysis.
* All values reported and computed by 3dGroupInCorr will reflect
this scaling (e.g., the results from '-sendall').
* This option is for the International Man Of Mystery, PK.
-- And just for PK, if you use this option in the form '-SCALE',
then each value X in the 'sf' file is replaced by sqrt(X-3).
--------------------------*** BATCH MODE [Feb 2011] ***-----------------------
* In batch mode, instead of connecting AFNI or SUMA to get commands on
what to compute, 3dGroupInCorr computes correlations (etc.) based on
commands from an input file.
++ Batch mode works to produce 3D (AFNI, or NIfTI) or 2D surface-based
(SUMA or GIFTI format) datasets.
* Each line in the command file specifies the prefix for the output dataset
to create, and then the set of seed vectors to use.
++ Each command line produces a distinct dataset.
++ If you want to put results from multiple commands into one big dataset,
you will have to do that with something like 3dbucket or 3dTcat after
running this program.
++ If an error occurs with one command line (e.g., a bad seed location is
given), the program will not produce an output dataset, but will try
to continue with the next line in the command file.
++ Note that I say 'seed vectors', since a distinct one is needed for
each dataset comprising the inputs -setA (and -setB, if used).
* Batch mode is invoked with the following option:
-batch METHOD COMMANDFILENAME
where METHOD specifies how the seed vectors are to be computed, and
where COMMANDFILENAME specifies the file with the commands.
++ As a special case, if COMMANDFILENAME contains a space character,
then instead of being interpreted as a filename, it will be used
as the contents of a single line command file; for example:
-batch IJK 'something.nii 33 44 55'
could be used to produce a single output dataset named 'something.nii'.
++ Only one METHOD can be used per batch mode run of 3dGroupInCorr!
You can't mix up 'IJK' and 'XYZ' modes, for example.
++ Note that this program WILL overwrite existing datasets, unlike most
AFNI programs, so be careful.
* METHOD must be one of the following strings (not case sensitive):
++ IJK or IJKAVE ==> the 3D voxel grid index triple (i,j,k) is given in FILENAME,
which tells the program to extract the time series from
each input dataset at that voxel and use that as the seed
vector for that dataset (if '-seedrad' is given, then the
seed vector will be averaged as done in interactive mode).
** This is the same mode of operation as the interactive seed
picking via AFNI's 'InstaCorr Set' menu item.
-- FILE line format: prefix i j k
++ XYZ ==> very similar to 'IJK', but instead of voxel indexes being
or XYZAVE given to specify the seed vectors, the RAI (DICOM) (x,y,z)
coordinates are given ('-seedrad' also applies).
** If you insist on using LPI (neurological) coordinates, as
Some other PrograMs (which are Fine Software tooLs) do,
set environment variable AFNI_INSTACORR_XYZ_LPI to YES,
before running this program.
-- FILE line format: prefix x y z
++ NODE ==> the index of the surface node where the seed is located.
A simple line would contain a prefix and a node number.
The prefix sets the output name and the file format,
                   if you include the extension. See also the -sdset_TYPE option
                   for controlling the output format.
The node number specifies the seed node. Because you might
have two surfaces (-LRpairs option in 3dSetupGroupInCorr)
you can add 'L', or 'R' to the node index to specify its
hemisphere.
For example:
OccipSeed1 L720
OccipSeed2 R2033
If you don't specify the side in instances where you are
working with two hemispheres, the default is 'L'.
++ MASKAVE ==> each line on the command file specifies a mask dataset;
the nonzero voxels in that dataset are used to define
the list of seed voxels that will be averaged to give
the set of seed vectors.
** You can use the usual '[..]' and '<..>' sub-brick and value
range selectors to modify the dataset on input. Do not
put these selectors inside quotes in the command file!
-- FILE line format: prefix maskdatasetname
++ IJKPV ==> very similar to IJKAVE, XYZAVE, and MASKAVE (in that order),
++ XYZPV but instead of extracting the average over the region
++ MASKPV indicated, extracts the Principal Vector (in the SVD sense;
cf. program 3dLocalPV).
** Note that IJKPV and XYZPV modes only work if seedrad > 0.
** In my limited tests, the differences between the AVE and PV
methods are very small. YMMV.
++ VECTORS ==> each line on the command file specifies an ASCII .1D
file which contains the set of seed vectors to use.
There must be as many columns in the .1D file as there
are input datasets in -setA and -setB combined. Each
column must be as long as the maximum number of time
points in the longest dataset in -setA and -setB.
** This mode is for those who want to construct their own
set of reference vectors in some clever way.
** N.B.: This method has not yet been tested!
-- FILE line format: prefix 1Dfilename
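     To make batch mode concrete, here is a hedged sketch using the 'IJK'
     method (the file names and seed indexes below are hypothetical); each
     line of the command file follows the 'prefix i j k' format above:
        cat > cmd.txt << EOF
        seedA.nii 30 40 20
        seedB.nii 33 44 55
        EOF
        3dGroupInCorr -setA AAA.grpincorr.niml -seedrad 6 -batch IJK cmd.txt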
-----------------------*** NEW BATCH MODES [Aug 2012] ***--------------------
* These new modes allow you to specify a LOT of output datasets directly on the
command line with a single option. They are:
-batchRAND n prefix ==> scatter n seeds around in space and compute the
output dataset for each of these seed points, where
'n' is an integer greater than 1.
-batchGRID d prefix ==> for every d-th point along each of the x,y,z axes,
create an output dataset, where 'd' is an integer
in the range 1..9. Note that setting d=1 will use
every voxel as a seed, and presumably produce a vast
armada of datasets through which you'll have to churn.
* Each output dataset gets a filename of the form 'prefix_xxx_yyy_zzz', where
'prefix' is the second argument after the '-batchXXXX' option, and 'xxx'
is the x-axis index of the seed voxel, 'yyy' is the y-axis index of the
seed voxel, and 'zzz' is the z-axis index of the seed voxel.
* These options are like using the 'IJK' batch mode of operation at each seed
voxel. The only difference is that the set of seed points is generated by
the program rather than being given by the user (i.e., you). These two options
differ only in the way the seed points are chosen (pseudo-randomly or regularly).
** You should be prepared for a LONG run and filling up a **
** LOT of disk space when you use either of these options! **
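  For example, a hedged sketch of the grid-based mode (the input name is
  hypothetical); with d=5, one output dataset is produced for every 5th voxel
  along each axis, named GridSeed_xxx_yyy_zzz as described above:
     3dGroupInCorr -setA AAA.grpincorr.niml -batchGRID 5 GridSeed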
=========================================================================
* This binary version of 3dGroupInCorr is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
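  For example, to limit a run to 8 threads under tcsh (the input and command
  file names are hypothetical; in bash, use 'export OMP_NUM_THREADS=8'):
     setenv OMP_NUM_THREADS 8
     3dGroupInCorr -setA AAA.grpincorr.niml -batch IJK cmd.txt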
=========================================================================
++ Authors: Bob Cox and Ziad Saad
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dHist
3dHist computes histograms using functions for generating priors.
If you are not sure you need this particular program, use 3dhistog instead.
Example:
3dHist -input sigs+orig
Options:
-input DSET: Dset providing values for histogram. Exact 0s are not counted
-dind SB: Use sub-brick SB from the input rather than 0
-mask MSET: Provide mask dataset to select subset of input.
-mask_range BOT TOP: Specify the range of values to consider from MSET.
Default is anything non-zero
-cmask CMASK: Provide cmask expression. Voxels where expression is 0
are excluded from computations. For example:
-cmask '-a T1.div.r+orig -b T1.uni.r+orig -expr step(a/b-10)'
-thishist HIST.niml.hist: Read this previously created histogram instead
of forming one from DSET.
Obviously, the -input DSET and -mask options are not needed in this case
-prefix PREF: Write histogram to niml file called PREF.niml.hist
-equalized PREF: Write a histogram equalized version of the input dataset
Histogram Creation Parameters:
By default, the program will select bin number, bin width,
and range automatically. You can also set the parameters manually with
the following options.
-nbin K: Use K bins.
-min MIN: Minimum intensity.
-max MAX: Maximum intensity.
-binwidth BW: Bin width
-ignore_out: Do not count samples outside the user specified range.
-rhist RHIST.niml.hist: Use previously created histogram to set range
and binwidth parameters.
-showhist: Display histogram to stdout
You can also graph it with: 1dRplot HistOut.niml.hist
Histogram Queries:
-at VAL: Set the value at which you want histogram values
-get 'PAR1,PAR2,PAR3..': Return the following PAR* properties at VAL
Choose from:
freq: Frequency (normalized count)
count: Count
bin: Continuous bin location estimate
cdf: Cumulative count
rcdf: Reverse cumulative count (from the top)
ncdf: The normalized version of cdf
nrcdf: The reverse version of ncdf
outl: 1.0-(2*smallest tail area)
0 means VAL splits area in the middle
1 means VAL is at either end of the histogram
ALL: All the above.
You can select multiple ones with something like:
-get 'freq, count, bin'
You can also set one of the PAR* to 'upvol' to get
the volume (liters) of voxels with values exceeding VAL
The use of upvol usually requires option -voxvol too.
-voxvol VOL_MM3: A voxel's volume in mm^3. To be used with upvol if
no dataset is available or if you want to override
it.
-val_at PAR PARVAL: Return the value (magnitude) where histogram property
PAR is equal to PARVAL
PAR can only be one of: cdf, rcdf, ncdf, nrcdf, upvol
For upvol, PARVAL is in Liters
-quiet: Return a concise output to simplify parsing. For the moment, this
option only affects output of option -val_at
Examples:
#A histogram a la 3dhistog:
3dHist -input T1+orig.
#Getting parameters from previously created histogram:
3dHist -thishist HistOut.niml.hist -at 144.142700
#Or the reverse query:
3dHist -thishist HistOut.niml.hist -val_at ncdf 0.132564
#Compute histogram and find dataset threshold (approximate)
#such that 1.5 liters of voxels remain above it.
3dHist -prefix toy -input flair_axial.nii.gz -val_at upvol 1.5
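A further illustrative sketch (the dataset name and the value 100 are hypothetical):
query several histogram properties at once with -get.
#Frequency, count, and cdf at value 100:
3dHist -input flair_axial.nii.gz -at 100 -get 'freq,count,cdf'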
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dhistog
++ 3dhistog: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
Compute histogram of 3D Dataset
Usage: 3dhistog [editing options] [histogram options] dataset
The editing options are the same as in 3dmerge
(i.e., the options starting with '-1').
The histogram options are:
-nbin # Means to use '#' bins [default=100]
-dind i Means to take data from sub-brick #i, rather than #0
-omit x Means to omit the value 'x' from the count;
-omit can be used more than once to skip multiple values.
-mask m Means to use dataset 'm' to determine which voxels to use
-roi_mask r Means to create a histogram for each non-zero value in
dataset 'r'. If -mask option is also used, dataset 'r' is
masked by 'm' before creating the histograms.
-doall Means to include all sub-bricks in the calculation;
otherwise, only sub-brick #0 (or that from -dind) is used.
-noempty Only output bins that are not empty.
This does not apply to NIML output via -prefix.
-notitle Means to leave the title line off the output.
-log10 Output log10() of the counts, instead of the count values.
This option cannot be used with -pdf or with -prefix
-pdf Output the counts divided by the number of samples.
This option is only valid with -prefix
-min x Means specify minimum (inclusive) of histogram.
-max x Means specify maximum (inclusive) of histogram.
-igfac Means to ignore sub-brick scale factors and histogram-ize
the 'raw' data in each volume.
Output options for integer and floating point data
By default, the program will determine if the data is integer or float
even if the data is stored as shorts with a scale factor.
By default, integer data will be binned into 100 bins or into as many bins as
there are integers in the range, whichever is less. For example, data with the
range (0..20) gives 21 bins, one for each integer, and non-integral bin
boundaries are raised to the next integer (2.3 becomes 3, for instance).
If the number of bins is higher than the number of integers in the range,
the bins will be labeled with floating point values, and multiple bins
may have zero counts between the integer values.
Float data will be binned by default to 100 bins with absolute limits for
the min and max if these are specified as inclusive. For example,
float data ranging from (0.0 to 20.0) will be binned into bins that
are 0.2 large (0..0.199999, 0.2..0.399999,...,19.8..20.0)
To have bins divided at 1.0 instead, specify the number of bins as 20:
Bin 0 is 0..0.9999, Bin 1 is 1.0 to 1.9999, ..., Bin 19 is 19 to 20.0000,
giving a slight bias to the last bin.
-int Treat data and output as integers
-float Treat data and output as floats
-unq U.1D Writes out the sorted unique values to file U.1D.
This option is not allowed for float data
If you have a problem with this, write
Ziad S. Saad (saadz@mail.nih.gov)
-prefix HOUT: Write a copy of the histogram into file HOUT.1D
you can plot the file with:
1dplot -hist -sepscl -x HOUT.1D'[0]' HOUT.1D'[1,2]'
or
1dRplot -input HOUT.1D
Without -prefix, the histogram is written to stdout.
Use redirection '>' if you want to save it to a file.
The format is a title line, then three numbers printed per line:
bottom-of-interval count-in-interval cumulative-count
There is no 1dhistog program, for the simple reason that you can use
this program for the same purpose, as in this example:
3dhistog -nbin 50 -notitle -min 0 -max .01 err.1D > ehist.1D
1dplot -hist -x ehist.1D'[0]' -xlabel 'err.1D' -ylabel 'histo' ehist.1D'[1]'
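Another hedged sketch (the dataset name is hypothetical): write a normalized
histogram with -pdf (which requires -prefix) and plot it:
  3dhistog -nbin 50 -pdf -prefix hpdf anat+orig
  1dplot -hist -x hpdf.1D'[0]' hpdf.1D'[1]'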
-- by RW Cox, V Roopchansingh, and ZS Saad
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dICC
================== Welcome to 3dICC ==================
AFNI Program for IntraClass Correlation (ICC) Analysis
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Version 1.0, Oct 4, 2023
Author: Gang Chen (gangchen@mail.nih.gov)
Website - ATM
SSCC/NIMH, National Institutes of Health, Bethesda MD 20892, USA
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Usage:
------
Intraclass correlation (ICC) measures the extent of consistency, agreement, or
reliability of an effect (e.g., BOLD response) across two or more measures.
3dICC is a program that computes whole-brain voxel-wise ICC when each subject
has two or more effect estimates (e.g., sessions, scanners, etc.). All three
typical types of ICC are available through proper model specification:
ICC(1,1), ICC(2,1) and ICC(3,1). The latter two types are popular in
neuroimaging because ICC(1,1) is usually applicable only to scenarios such as
twin studies.
The program can be applied to even wider situations (e.g., incorporation of
confounding effects or more than two random-effects variables). The modeling
approaches are laid out in the following paper:
Chen, G., Taylor, P.A., Haller, S.P., Kircanski, K., Stoddard, J., Pine, D.S.,
Leibenluft, E., Brotman, M.A., Cox, R.W., 2018. Intraclass correlation:
Improved modeling approaches and applications for neuroimaging. Human Brain
Mapping 39, 1187–1206. https://doi.org/10.1002/hbm.23909
Currently the output provides the ICC value and the corresponding
F-statistic at each voxel. In the future, inferences for the intercept and
covariates may be added.
Input files for 3dICC can be in AFNI, NIfTI, or surface (niml.dset) format.
Two input scenarios are considered: 1) effect estimates only, and 2) effect
estimates plus their t-statistic values which are used for weighting based
on the precision contained in the t-statistic.
In addition to installing R itself, the following R packages must be installed
before running 3dICC: "lme4", "blme" and "metafor". Furthermore,
the "snow" package is needed if one wants to take advantage of parallel
computing. To install these packages, run the following command at the terminal:
rPkgsInstall -pkgs "blme,lme4,metafor,snow"
Alternatively you may install them in R:
install.packages("blme")
install.packages("lme4")
install.packages("metafor")
install.packages("snow")
Once the 3dICC command script is constructed, it can be run by copying and
pasting it into the terminal. Alternatively (and probably better), save the
script as a text file, for example called ICC.txt, and execute it with the
following (assuming the tcsh shell):
nohup tcsh -x ICC.txt &
or,
nohup tcsh -x ICC.txt > diary.txt &
or
nohup tcsh -x ICC.txt |& tee diary.txt &
The advantage of the latter commands is that the progression is saved into
the text file diary.txt and, if anything goes awry, can be examined later.
Example 1 --- Compute ICC(2,1) values between two sessions. With the option
-bounds, values beyond [-2, 2] will be treated as outliers and considered
as missing. If you want to set a range, choose the bounds that make sense
with your input data.
-------------------------------------------------------------------------
3dICC -prefix ICC2 -jobs 12 \
-mask myMask+tlrc \
-model '1+(1|session)+(1|Subj)' \
-bounds -2 2 \
-dataTable \
Subj session InputFile \
s1 one s1_1+tlrc'[pos#0_Coef]' \
s1 two s1_2+tlrc'[pos#0_Coef]' \
...
s21 two s21_2+tlrc'[pos#0_Coef]' \
...
Example 2 --- Compute ICC(3,1) values between two sessions. With the option
-bounds, values beyond [-2, 2] will be treated as outliers and considered
as missing. If you want to set a range, choose the bounds that make sense
with your input data.
-------------------------------------------------------------------------
3dICC -prefix ICC3 -jobs 12 \
-mask myMask+tlrc \
-model '1+session+(1|Subj)' \
-bounds -2 2 \
-dataTable \
Subj session InputFile \
s1 one s1_1+tlrc'[pos#0_Coef]' \
s1 two s1_2+tlrc'[pos#0_Coef]' \
...
s21 two s21_2+tlrc'[pos#0_Coef]' \
...
Example 3 --- Compute ICC(3,1) values between two sessions with both effect
estimates and their t-statistics as input. The subject column is explicitly
declared because it is named differently from the default ('Subj').
-------------------------------------------------------------------------
3dICC -prefix ICC3 -jobs 12 \
-mask myMask+tlrc \
-model '1+age+session+(1|Subj)' \
-bounds -2 2 \
-Subj 'subject' \
-tStat 'tFile' \
-dataTable \
subject age session tFile InputFile \
s1 21 one s1_1+tlrc'[pos#0_tstat]' s1_1+tlrc'[pos#0_Coef]' \
s1 21 two s1_2+tlrc'[pos#0_tstat]' s1_2+tlrc'[pos#0_Coef]' \
...
s21 28 two s21_2+tlrc'[pos#0_tstat]' s21_2+tlrc'[pos#0_Coef]' \
...
Example 4 --- Compute ICC(2,1) values between two sessions while controlling
for age effect. With the option -bounds, values beyond [-2, 2] will be
treated as outliers and considered as missing. If you want to set a range,
choose the bounds that make sense with your input data.
-------------------------------------------------------------------------
3dICC -prefix ICC2a -jobs 12 \
-mask myMask+tlrc \
-model '1+age+(1|session)+(1|Subj)' \
-bounds -2 2 \
          -Subj    'subject'        \
          -IF      'inputfile'      \
-dataTable \
subject age session inputfile \
s1 21 one s1_1+tlrc'[pos#0_Coef]' \
s1 21 two s1_2+tlrc'[pos#0_Coef]' \
...
s21 28 two s21_2+tlrc'[pos#0_Coef]' \
...
Options in alphabetical order:
------------------------------
-bounds lb ub: This option is for outlier removal. Two numbers are expected from
the user: the lower bound (lb) and the upper bound (ub). The input data will
be confined within [lb, ub]: any values in the input data that are beyond
the bounds will be removed and treated as missing. Make sure the first number
is less than the second. You do not have to use this option to censor your data!
-cio: Use AFNI's C io functions, which is the default. Alternatively -Rio
can be used.
-dataTable TABLE: List the data structure with a header as the first line.
NOTE:
1) This option has to occur last; that is, no other options are
allowed thereafter. Each line should end with a backslash except for
the last line.
2) The first column is fixed and reserved with label 'Subj', and the
last is reserved for 'InputFile'. Each row should contain only one
effect estimate in the table of long format (cf. wide format) as
defined in R. The level labels of a factor should contain at least
one character. Input files can be in AFNI, NIfTI or surface format.
AFNI files can be specified with sub-brick selector (square brackets
[] within quotes) specified with a number or label.
3) It is fine to have variables (or columns) in the table that are
not modeled in the analysis.
4) The contents of the table can be saved as a separate file, e.g.,
called table.txt. In the 3dICC script, specify the data with
'-dataTable @table.txt'. Do NOT put any quotes around the square
brackets for each sub-brick; otherwise, the program cannot properly
read the files. This option is useful: (a) when there are many input
files so that the program complains with an 'Arg list too long' error;
(b) when you want to try different models with the same dataset.
-dbgArgs: This option will enable R to save the parameters in a
file called .3dICC.dbg.AFNI.args in the current directory
so that debugging can be performed.
-help: this help message
-IF var_name: var_name is used to specify the name of the last column, which is
designated for the input files of effect estimates. The default (when this option
is not invoked) is 'InputFile', in which case the column header has to be exactly 'InputFile'.
-jobs NJOBS: On a multi-processor machine, parallel computing will speed
up the program significantly.
Choose 1 for a single-processor computer.
-mask MASK: Process voxels inside this mask only.
Default is no masking.
-model FORMULA: Specify the model structure for all the variables. The
expression FORMULA with more than one variable has to be surrounded
within (single or double) quotes. Variable names in the formula
should be consistent with the ones used in the header of -dataTable.
Suppose that each subject ('subj') has two sessions ('ses'), a model
ICC(2,1) without any covariate is "1+(1|ses)+(1|subj)" while one
for ICC(3,1) is "1+ses+(1|subj)". Each random-effects factor is
specified within parentheses per formula convention in R. Any
confounding effects (quantitative or categorical variables) can be
added as fixed effects without parentheses.
-prefix PREFIX: Output file name. For AFNI format, provide prefix only,
with no view+suffix needed. Filename for NIfTI format should have
.nii attached, while file name for surface data is expected
to end with .niml.dset. The sub-brick labeled with the '(Intercept)',
if present, should be interpreted as the effect with each factor
at the reference level (alphabetically the lowest level) for each
factor and with each quantitative covariate at the center value.
-qVarCenters VALUES: Specify centering values for quantitative variables
identified under -qVars. Multiple centers are separated by
commas (,) without any other characters such as spaces and should
be surrounded within (single or double) quotes. The order of the
values should match that of the quantitative variables in -qVars.
Default (absence of option -qVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable YOURSELF first before the values are fed into -dataTable.
-qVars variable_list: Identify quantitative variables (or covariates) with
this option. The list with more than one variable has to be
separated with comma (,) without any other characters such as
spaces and should be surrounded within (single or double) quotes.
For example, -qVars "Age,IQ"
WARNINGS:
1) Centering a quantitative variable through -qVarCenters is
very critical when other fixed effects are of interest.
2) Between-subjects covariates are generally acceptable.
However EXTREME caution should be taken when the groups
differ significantly in the average value of the covariate.
3) Within-subject covariates are better modeled with 3dICC.
-Rio: Use R's io functions. The alternative is -cio.
-show_allowed_options: list of allowed options
-Subj var_name: var_name is used to specify the column name that is designated
as the measuring entity variable (usually subject). The default (when this
option is not invoked) is 'Subj', in which case the column header has to be
exactly as 'Subj'.
-tStat col_name: col_name is used to specify the column name that is designated
as the t-statistic. The default (when this option is not invoked) is 'NA',
in which case no t-stat is provided as part of the input; otherwise declare
the t-stat column name with this option.
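As a minimal sketch of the '-dataTable @table.txt' usage described above (file
names are hypothetical), where table.txt holds the same header and rows shown
in Example 1, with no quotes around the sub-brick selectors:
   3dICC -prefix ICC2 -jobs 12         \
         -mask myMask+tlrc             \
         -model '1+(1|session)+(1|Subj)' \
         -bounds -2 2                  \
         -dataTable @table.txt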
AFNI program: 3dinfill
A program to fill holes in a volume.
3dinfill <-input DSET>
Options:
-input DSET: Fill volume DSET
-prefix PREF: Use PREF for output prefix.
-Niter NITER: Do not allow the fill function to do more than NITER
passes. A -1 (default) lets the function go to a maximum
of 500 iterations. You will be warned if you run out of
iterations and holes persist.
-blend METH: Sets method for assigning a value to a hole.
MODE: Fill with most frequent neighbor value. Use MODE when
filling integral valued data such as ROIs or atlases.
AVG: Fill with average of neighboring values.
AUTO: Use MODE if DSET is integral, AVG otherwise.
SOLID: No blending, brutish fill. See also -minhits
SOLID_CLEAN: SOLID, followed by removal of dangling chunks
Dangling chunks are defined as non-zero regions
that surround lesser holes, i.e. holes that have
less than MH. The cleanup step is not iterative
though, and you are most likely better off using
option -ed to do the cleanup.
-minhits MH: Criterion for considering a zero voxel to be a hole.
MH refers to the total number of directions along which a
zero voxel is considered surrounded by non-zero values.
A value of 1 is the least strict criterion, and a value of 3
is the strictest.
This parameter can only be used with -blend SOLID
-ed N V: Erode N times then dilate N times to get rid of hanging chunks.
Values filled in by this process get value V.
-mask MSET: Provide mask dataset to select subset of input.
-mask_range BOT TOP: Specify the range of values to consider from MSET.
Default is anything non-zero.
-mrange BOT TOP: Same as option -mask_range
-cmask CMASK: Provide cmask expression. Voxels where expression is 0
are excluded from computations. For example:
-cmask '-a T1.div.r+orig -b T1.uni.r+orig -expr step(a/b-10)'
NOTE: For the moment, masking is only implemented for the SOLID* fill
method.
Example 1:
Start from a whole-head mask that has some big holes in it where CSF and
cavities are. Fill the inside of the mask and remove dangling chunks at the
end with -ed:
3dinfill -blend SOLID -ed 3 1 -prefix filledmask \
-minhits 2 -input holymask+orig.
This program will be slow for high res datasets with large holes.
If you are trying to fill holes in masks, consider also:
3dmask_tool -fill_holes
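A second hedged sketch (the dataset names are hypothetical): fill small holes
in an integer-valued ROI/atlas dataset using MODE blending, which assigns the
most frequent neighboring value:
   3dinfill -blend MODE -input atlas_rois+tlrc -prefix atlas_rois_filled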
AFNI program: 3dinfo
Prints out sort-of-useful information from a 3D dataset's header
Usage: 3dinfo [-verb OR -short] dataset [dataset ...] ~1~
-verb means to print out lots of stuff
-VERB means even more stuff [including slice time offsets]
-short means to print out less stuff [now the default]
-no_hist means to omit the HISTORY text
-h: Mini help; at times, same as -help in many cases.
-help: The entire help output
-HELP: Extreme help, same as -help in majority of cases.
-h_view: Open help in text editor. AFNI will try to find a GUI editor
-hview : on your machine. You can control which it should use by
setting environment variable AFNI_GUI_EDITOR.
-h_web: Open help in web browser. AFNI will try to find a browser.
-hweb : on your machine. You can control which it should use by
setting environment variable AFNI_WEB_BROWSER.
-h_find WORD: Look for lines in this program's -help output that match
(approximately) WORD.
-h_raw: Help string unedited
-h_spx: Help string in sphinx loveliness, but do not try to autoformat
-h_aspx: Help string in sphinx with autoformatting of options, etc.
-all_opts: Try to identify all options for the program from the
output of its -help option. Some options might be missed
and others misidentified. Use this output for hints only.
----------------------------------------------------------------------
Alternative Usage 1 (without either of the above options): ~1~
Output a large block of text per dataset. This has multiple options:
-label2index label dataset : output index corresponding to label ~2~
example: 3dinfo -label2index aud#0_Coef stats.FT+tlrc
Prints to stdout the index corresponding to the sub-brick with
the name label, or a blank line if label not found.
The ONLY output is this sub-brick index.
This is intended for use in a script, as in this tcsh fragment:
set face = `3dinfo -label2index Face#0 AA_Decon+orig`
set hous = `3dinfo -label2index House#0 AA_Decon+orig`
3dcalc -a AA_Decon+orig"[$face]" -b AA_Decon+orig"[$hous]" ...
* Added per the request and efforts of Colm Connolly.
-niml_hdr dataset : output entire NIML-formatted header ~2~
example: 3dinfo -niml_hdr stats.FT+tlrc
Prints to stdout the NIML-formatted equivalent of the .HEAD file.
-subbrick_info dataset : output only sub-brick part of info ~2~
example: 3dinfo -subbrick_info stats.FT+tlrc
Prints to stdout only the part of the full '3dinfo -VERB' output
that includes sub-brick info. The first such line might look like:
-- At sub-brick #0 'Full_Fstat' datum type is float: 0 to 971.2
----------------------------------------------------------------------
Alternate Usage 2: ~1~
3dinfo <OPTION> [OPTION ..] dataset [dataset ...]
Outputs a specific piece of information depending on OPTION.
This can form a table of outputs per dataset.
==============================================================
Options producing one value (string) ~2~
==============================================================
-exists: 1 if dset is loadable, 0 otherwise
This works on prefix also.
-id: Idcode string of dset
-is_labeltable: 1 if dset has a labeltable attached.
-is_atlas: 1 if dset is an atlas.
-is_atlas_or_labeltable: 1 if dset has an atlas or has a labeltable.
-is_nifti: 1 if dset is NIFTI format, 0 otherwise
-is_slice_timing_nz: is there slice timing, and is it not uniformly 0
-dset_extension: show filename extension for valid dataset (e.g. .nii.gz)
-storage_mode: show internal storage mode of dataset (e.g. NIFTI)
-space: dataset's space
-gen_space: dataset's generic space
-av_space: AFNI format's view extension for the space
-nifti_code: what AFNI would use for an output NIFTI (q)sform_code
-is_oblique: 1 if dset is oblique
-handedness: L if orientation is Left handed, R if it is right handed
-obliquity: Angle from plumb direction.
Angles of 0 (or close) are for cardinal orientations
-prefix: Return the prefix
-prefix_noext: Return the prefix without extensions
-ni: Return the number of voxels in i dimension
-nj: Return the number of voxels in j dimension
-nk: Return the number of voxels in k dimension
-nijk: Return ni*nj*nk
-nv: Return number of points in time or the number of sub-bricks
-nt: same as -nv
-n3: same as -ni -nj -nk
-n4: same as -ni -nj -nk -nv
-nvi: The maximum sub-brick index (= nv -1 )
-nti: same as -nvi
-ntimes: Return number of sub-bricks (points in time).
This is an option for debugging use; stay away from it.
-max_node: For a surface-based dset, return the maximum node index
-di: Signed displacement per voxel along i direction, aka dx
-dj: Signed displacement per voxel along j direction, aka dy
-dk: Signed displacement per voxel along k direction, aka dz
-d3: same as -di -dj -dk
-adi: Voxel size along i direction (abs(di))
-adj: Voxel size along j direction (abs(dj))
-adk: Voxel size along k direction (abs(dk))
-ad3: same as -adi -adj -adk
-voxvol: Voxel volume in cubic millimeters
-oi: Volume origin along the i direction
-oj: Volume origin along the j direction
-ok: Volume origin along the k direction
-o3: same as -oi -oj -ok
-dcx: volumetric center in x direction (DICOM coordinates)
-dcy: volumetric center in y direction (DICOM coordinates)
-dcz: volumetric center in z direction (DICOM coordinates)
-dc3: same as -dcx -dcy -dcz
-tr: The TR value in seconds.
-dmin: The dataset's minimum value, scaled by fac
-dmax: The dataset's maximum value, scaled by fac
-dminus: The dataset's minimum value, unscaled.
-dmaxus: The dataset's maximum value, unscaled.
-smode: Dset storage mode string.
-header_name: Value of dset structure (sub)field 'header_name'
-brick_name: Value of dset structure (sub)field 'brick_name'
-iname: Name of dset as input on the command line
-orient: Value of orientation string.
For example, LPI means:
i direction grows from Left(negative) to Right(positive).
j direction grows from Posterior (neg.) to Anterior (pos.)
k direction grows from Inferior (neg.) to Superior (pos.)
-extent: The spatial extent of the dataset along R, L, A, P, I and S
-Rextent: Extent along R
-Lextent: Extent along L
-Aextent: Extent along A
-Pextent: Extent along P
-Iextent: Extent along I
-Sextent: Extent along S
-all_names: Value of various dset structures handling filenames.
==============================================================
Options producing one value per sub-brick ~2~
==============================================================
-fac: Return the float scaling factor
-label: The label of each sub-brick
-datum: The data storage type
-min: The minimum value, scaled by fac
-max: The maximum value, scaled by fac
-minus: The minimum value, unscaled.
-maxus: The maximum value, unscaled.
==============================================================
Options producing multiple values (strings of multiple lines) ~2~
==============================================================
You can specify the delimiter between sub-brick parameters with
-sb_delim DELIM. Default DELIM is "|"
-labeltable: Show label table, if any
-labeltable_as_atlas_points: Show label table in atlas point format.
-atlas_points: Show atlas points list, if any
-history: History note.
-slice_timing: Show slice timing.
==============================================================
Options affecting output format ~2~
==============================================================
-header_line: Output as the first line the names of attributes
in each field (column)
-hdr: Same as -header_line
-sb_delim SB_DELIM: Delimiter string between sub-brick values
Default SB_DELIM is "|"
-NA_flag NAFLAG: String to use when a field is not found or not
applicable. Default is "NA"
-atr_delim ATR_DELIM: Delimiter string between attributes
Default ATR_DELIM is the tab character.
==============================================================
Options for displaying ijk_to_xyz matrices ~2~
==============================================================
A set of functions for displaying the matrices that tell us where
the data actually is in space! These 4x4 matrices---well 3x4, in practice,
because the bottom row of the matrix *must* be (0, 0, 0, 1)---
can be related to the NIFTI sform and qform matrices (which are LPI
native), but these aform_* matrices are RAI (DICOM) native.
There are several types of matrices. Linear affine are the most general
(containing translation, rotation, shear and scaling info), followed by
orthogonal (no shear info; only translation, rotation and scale),
followed by cardinal (no rotation info; only translation and scale).
The 'scale' info is the voxel sizes. The 'translation' determines the
origin location in space. The 'rotation' describes a, well, rotation
relative to the scanner coords---this is the dreaded 'obliquity'. The
'shear'... well, that could also be present, but it is not common, at
least to describe just-acquired data: it would tilt the axes away from
being mutually 90 deg to each other (i.e., they wouldn't be
orthogonal); this would likely just result from an alignment process.
Note: the NIFTI sform can be linear affine, in general; in practice, it
is often just orthogonal. The NIFTI qform is a quaternion representation
of the orthogonalized sform; if sform is orthogonal, then they contain
the same information (common, but not required).
The aform_real matrix is AFNI's equivalent of the NIFTI sform; it *can*
encode general linear affine mappings. (In practice, it rarely does so.)
The aform_orth is the orthogonalized aform_real, and thus equivalent
to the NIFTI qform. If aform_real is orthogonal (no shear info), then
these two matrices are equal. The aform_card is the cardinalized form of
the aform_orth; NIFTI does not have an equivalent. AFNI typically uses
this matrix to display your data on a rectangle that is parallel to your
computer screen, without any need to regrid/resample the data (hence, no
blurring introduced). This can be thought of as displaying your dataset in
a way that you *wish* your subject had been oriented. Note that if
there is no obliquity in the acquired data (that is, aform_orth does not
contain any rotation relative to the scanner coords), then
aform_card == aform_orth.
The aform_card is an AFNI convenience (ha!) matrix; it does not have an
equivalent in the NIFTI stable of matrices.
-aform_real: Display full 3x4 'aform_real' matrix (AFNI's RAI equivalent
of the sform matrix in NIFTI, may contain obliquity info),
with comment line first.
-aform_real_oneline: Display full 'aform_real' matrix (see '-aform_real')
as 1 row of 12 numbers. No additional comment.
-aform_real_refit_ori XXX: Display full 3x4 'aform_real' matrix (see
'-aform_real')
*if* the dset were reoriented (via 3drefit) to
new orient XXX. Includes comment line first.
-is_aform_real_orth: if true, aform_real == aform_orth, which should be
a very common occurrence.
-aform_orth: Display full 3x4 'aform_orth' matrix (AFNI's RAI matrix
equivalent of the NIFTI quaternion, which may contain
obliquity info), with comment line first.
This matrix is the orthogonalized form of aform_real,
and veeery often, for AFNI-produced dsets, we will have:
aform_orth == aform_real.
-perm_to_orient YYY: Display 3x3 permutation matrix to go from the
dset's current orientation to the YYY orient.
==============================================================
Options requiring dataset pairing at input ~2~
==============================================================
3dinfo allows you to make some comparisons between dataset pairs.
The comparison is always done in both directions whether or not
the answer can be different. For example:
3dinfo -same_grid dset1 dset2
will output two values, one comparing dset1 to dset2 and the second
comparing dset2 to dset1. With -same_grid, the answers will always
be identical, but this might be different for other queries.
This behaviour allows you to mix options requiring dataset pairs
with those that do not. For example:
3dinfo -header_line -prefix -n4 -same_grid \
DSET1+orig DSET2.nii DSET3.nii DSET4.nii
-same_grid: Output 1 if the grid is identical between two dsets
0 otherwise.
For -same_grid to be 1, all of -same_dim, -same_delta,
-same_orient, -same_center, and -same_obl must return 1
-same_dim: 1 if dimensions (nx,ny,nz) are the same between dset pairs
-same_delta: 1 if voxels sizes are the same between dset pairs
-same_orient: 1 if orientation is the same between dset pairs
-same_center: 1 if geometric center is the same between dset pairs
-same_obl: 1 if obliquity is the same between dset pairs
-same_all_grid: Equivalent to listing all of -same_dim -same_delta
-same_orient, -same_center, and -same_obl on the
command line.
-val_diff: Output the sum of absolute differences of all voxels in the
dataset pair. A -1.0 value indicates a grid mismatch between
volume pairs.
-sval_diff: Same as -val_diff, but the sum is divided (scaled) by the
total number of voxels that are not zero in at least one
of the two datasets.
-monog_pairs: Instead of pairing each dset with the first, pair each
couple separately. This requires you to have an even
number of dsets on the command line
Examples with csh syntax using datasets in your afni binaries directory ~1~
0- First get some datasets with which we'll play
set dsets = ( `apsearch -list_all_afni_P_dsets` )
1- The classic
3dinfo $dsets[1]
2- Produce a table of results using 1-value-options for two datasets
3dinfo -echo_edu -prefix_noext -prefix -space -ni -nj -nk -nt \
$dsets[1-2]
3- Use some of the options that operate on pairs, mix with other options
3dinfo -echo_edu -header_line -prefix -n4 -same_grid $dsets[1-4]
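4- A hedged sketch of capturing single values for use inside a csh script
(the dataset name epi_run1+orig is hypothetical)
set tr = `3dinfo -tr epi_run1+orig`
set nt = `3dinfo -nt epi_run1+orig`
echo "TR = $tr s over $nt time points"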
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dIntracranial
++ 3dIntracranial: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. D. Ward
*+ WARNING: This program (3dIntracranial) is old, obsolete, and not maintained!
++ 3dSkullStrip is almost always superior to 3dIntracranial :)
3dIntracranial - performs automatic segmentation of intracranial region.
This program will strip the scalp and other non-brain tissue from a
high-resolution T1 weighted anatomical dataset.
** Nota Bene: the newer program 3dSkullStrip should also be considered
** for this functionality -- it usually works better.
-----------------------------------------------------------------------
Usage:
-----
3dIntracranial
-anat filename => Filename of anat dataset to be segmented
[-min_val a] => Minimum voxel intensity limit
Default: Internal PDF estimate for lower bound
[-max_val b] => Maximum voxel intensity limit
Default: Internal PDF estimate for upper bound
[-min_conn m] => Minimum voxel connectivity to enter
Default: m=4
[-max_conn n] => Maximum voxel connectivity to leave
Default: n=2
[-nosmooth] => Suppress spatial smoothing of segmentation mask
[-mask] => Generate functional image mask (complement)
Default: Generate anatomical image
[-quiet] => Suppress output to screen
-prefix pname => Prefix name for file to contain segmented image
** NOTE **: The newer program 3dSkullStrip will probably give
better segmentation results than 3dIntracranial!
-----------------------------------------------------------------------
Examples:
--------
3dIntracranial -anat elvis+orig -prefix elvis_strip
3dIntracranial -min_val 30 -max_val 350 -anat elvis+orig -prefix strip
3dIntracranial -nosmooth -quiet -anat elvis+orig -prefix elvis_strip
-----------------------------------------------------------------------
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dInvFMRI
Usage: 3dInvFMRI [options]
Program to compute stimulus time series, given a 3D+time dataset
and an activation map (the inverse of the usual FMRI analysis problem).
-------------------------------------------------------------------
OPTIONS:
-data yyy =
*OR* = Defines input 3D+time dataset [a non-optional option].
-input yyy =
-map aaa = Defines activation map; 'aaa' should be a bucket dataset,
each sub-brick of which defines the beta weight map for
an unknown stimulus time series [also non-optional].
-mapwt www = Defines a weighting factor to use for each element of
the map. The dataset 'www' can have either 1 sub-brick,
or the same number as in the -map dataset. In the
first case, in each voxel, each sub-brick of the map
gets the same weight in the least squares equations.
[default: all weights are 1]
-mask mmm = Defines a mask dataset, to restrict input voxels from
-data and -map. [default: all voxels are used]
-base fff = Each column of the 1D file 'fff' defines a baseline time
series; these columns should be the same length as
number of time points in 'yyy'. Multiple -base options
can be given.
-polort pp = Adds polynomials of order 'pp' to the baseline collection.
The default baseline model is '-polort 0' (constant).
To specify no baseline model at all, use '-polort -1'.
-out vvv = Name of 1D output file will be 'vvv'.
[default = '-', which is stdout; probably not good]
-method M = Determines the method to use. 'M' is a single letter:
-method C = least squares fit to data matrix Y [default]
-method K = least squares fit to activation matrix A
-alpha aa = Set the 'alpha' factor to 'aa'; alpha is used to penalize
large values of the output vectors. Default is 0.
A large-ish value for alpha would be 0.1.
-fir5 = Smooth the results with a 5 point lowpass FIR filter.
-median5 = Smooth the results with a 5 point median filter.
[default: no smoothing; only 1 of these can be used]
-------------------------------------------------------------------
METHODS:
Formulate the problem as
Y = V A' + F C' + errors
where Y = data matrix (N x M) [from -data]
V = stimulus (N x p) [to -out]
A = map matrix (M x p) [from -map]
F = baseline matrix (N x q) [from -base and -polort]
C = baseline weights (M x q) [not computed]
N = time series length = length of -data file
M = number of voxels in mask
p = number of stimulus time series to estimate
= number of parameters in -map file
q = number of baseline parameters
and ' = matrix transpose operator
Next, define matrix Z (Y detrended relative to columns of F) by
     Z = [I - F (F'F)^(-1) F'] Y
-------------------------------------------------------------------
The method C solution is given by
     V0 = Z A (A'A)^(-1)
This solution minimizes the sum of squares over the N*M elements
of the matrix Y - V A' - F C' (N.B.: A' means A-transpose).
-------------------------------------------------------------------
The method K solution is given by
     W = (Z Z')^(-1) Z A   and then   V = W (W'W)^(-1)
This solution minimizes the sum of squares of the difference between
the A(V) predicted from V and the input A, where A(V) is given by
     A(V) = Z' V (V'V)^(-1) = Z' W
-------------------------------------------------------------------
Technically, the solution is unidentifiable up to an arbitrary
multiple of the columns of F (i.e., V = V0 + F G, where G is
an arbitrary q x p matrix); the solution above is the solution
that is orthogonal to the columns of F.
-- RWCox - March 2006 - purely for experimental purposes!
===================== EXAMPLE USAGE =====================================
** Step 1: From a training dataset, generate activation map.
The input dataset has 4 runs, each 108 time points long. 3dDeconvolve
is used on the first 3 runs (time points 0..323) to generate the
activation map. There are two visual stimuli (Complex and Simple).
3dDeconvolve -x1D xout_short_two.1D -input rall_vr+orig'[0..323]' \
-num_stimts 2 \
-stim_file 1 hrf_complex.1D -stim_label 1 Complex \
-stim_file 2 hrf_simple.1D -stim_label 2 Simple \
-concat '1D:0,108,216' \
-full_first -fout -tout \
-bucket func_ht2_short_two -cbucket cbuc_ht2_short_two
N.B.: You may want to de-spike, smooth, and register the 3D+time
dataset prior to the analysis (as usual). These steps are not
shown here -- I'm presuming you know how to use AFNI already.
** Step 2: Create a mask of highly activated voxels.
The F statistic threshold is set to 30, corresponding to a voxel-wise
p = 1e-12 = very significant. The mask is also lightly clustered, and
restricted to brain voxels.
3dAutomask -prefix Amask rall_vr+orig
3dcalc -a 'func_ht2_short+orig[0]' -b Amask+orig -datum byte \
-nscale -expr 'step(a-30)*b' -prefix STmask300
3dmerge -dxyz=1 -1clust 1.1 5 -prefix STmask300c STmask300+orig
** Step 3: Run 3dInvFMRI to estimate the stimulus functions in run #4.
Run #4 is time points 324..431 of the 3D+time dataset (the -data
input below). The -map input is the beta weights extracted from
the -cbucket output of 3dDeconvolve.
3dInvFMRI -mask STmask300c+orig \
-data rall_vr+orig'[324..431]' \
-map cbuc_ht2_short_two+orig'[6..7]' \
-polort 1 -alpha 0.01 -median5 -method K \
-out ii300K_short_two.1D
3dInvFMRI -mask STmask300c+orig \
-data rall_vr+orig'[324..431]' \
-map cbuc_ht2_short_two+orig'[6..7]' \
-polort 1 -alpha 0.01 -median5 -method C \
-out ii300C_short_two.1D
** Step 4: Plot the results, and get confused.
1dplot -ynames VV KK CC -xlabel Run#4 -ylabel ComplexStim \
hrf_complex.1D'{324..432}' \
ii300K_short_two.1D'[0]' \
ii300C_short_two.1D'[0]'
1dplot -ynames VV KK CC -xlabel Run#4 -ylabel SimpleStim \
hrf_simple.1D'{324..432}' \
ii300K_short_two.1D'[1]' \
ii300C_short_two.1D'[1]'
N.B.: I've found that method K works better if MORE voxels are
included in the mask (lower threshold) and method C if
FEWER voxels are included. The above threshold gave 945
voxels being used to determine the 2 output time series.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dISC
================== Welcome to 3dISC ==================
Program for Voxelwise Inter-Subject Correlation (ISC) Analysis
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Version 1.0.8, Feb 14, 2025
Author: Gang Chen (gangchen@mail.nih.gov)
SSCC/NIMH, National Institutes of Health, Bethesda MD 20892, USA
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
------
Intersubject correlation (ISC) quantifies the similarity or synchronization of
BOLD responses between two subjects experiencing the same stimulus, such as
watching a movie or listening to music. The analysis is performed voxelwise
using linear mixed-effects modeling, as detailed in the following paper:
Chen, G., Taylor, P.A., Shin, Y.W., Reynolds, R.C., Cox, R.W., 2017. *Untangling
the Relatedness among Correlations, Part II: Inter-Subject Correlation Group
Analysis through Linear Mixed-Effects Modeling.* NeuroImage, 147, 825-840.
**Input Requirements:**
The input files for 3dISC consist of voxelwise correlation values from all
subject pairs. If these correlations have not been Fisher-transformed, the
`-r2z` option in 3dISC should be used to apply the transformation. When
analyzing multiple groups, ISC values across groups must also be provided
unless the groups are analyzed separately. Input files can be in AFNI, NIfTI,
or surface (niml.dset) format. For *n* subjects, a total of *n(n-1)/2* input
files should be supplied, ensuring no duplicate pairs.
**Output:**
3dISC generates voxelwise effect estimates (e.g., ISC values) along with the
corresponding t-statistics.
**Preprocessing Recommendations:**
For data preprocessing guidelines, refer to Appendix B of the above paper. To
compute voxelwise ISC of time series between any two subjects, AFNI’s
`3dTcorrelate` can be used.
The LME platform supports a wide range of explanatory variables, including
categorical variables (both between- and within-subject factors) and
quantitative variables (e.g., age, behavioral data). However, the responsibility
of correctly specifying the weights for each effect (e.g., contrasts) falls on
the user. Determining the appropriate number and order of predictors can be
particularly challenging, especially when dealing with more than two factor
levels or interaction effects.
To navigate this complexity, it is essential to understand two common factor
coding strategies: **dummy coding** and **deviation coding**. A helpful
resource on these coding systems can be found here:
https://stats.idre.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/
### Example Scripts
The four example scripts provided below demonstrate various modeling scenarios.
If any of them resemble your data structure, you can use them as templates to
build your own script. More examples may be added in the future, and user-
contributed scenarios (including yours) are welcome.
### Required R Packages
Before running 3dISC, ensure that the following R packages are installed:
To install via the AFNI command line:
rPkgsInstall -pkgs "lme4,snow"
Alternatively, you can install them directly in R:
install.packages("lme4")
install.packages("snow")
Once the 3dISC command script is prepared, you can run it by copying and
pasting it into the terminal. However, a more practical approach is to
save the script as a text file (e.g., `ISC.txt`) and execute it using the
following command (assuming you are using the **tcsh** shell):
nohup tcsh -x ISC.txt &
Alternatively, to capture the output for later review, use one of the following
commands:
nohup tcsh -x ISC.txt > diary.txt &
or
nohup tcsh -x ISC.txt |& tee diary.txt &
The advantage of these latter commands is that they log the execution
progress into diary.txt, allowing you to review the output and
troubleshoot any issues if something goes wrong.
Example 1 --- Simplest case: ISC analysis for one group of subjects without
any explanatory variables. In other words, the effect of interest is the ISC
at the population level. The output is the group ISC plus its t-statistic.
The components within parentheses in the -model specifications are R
notations for random effects.
-------------------------------------------------------------------------
3dISC -prefix ISC -jobs 12 \
-mask myMask+tlrc \
          -model  '1+(1|Subj1)+(1|Subj2)' \
-dataTable \
Subj1 Subj2 InputFile \
s1 s2 s1_s2+tlrc \
s1 s3 s1_s3+tlrc \
s1 s4 s1_s4+tlrc \
s1 s5 s1_s5+tlrc \
s1 s6 s1_s6+tlrc \
s1 s7 s1_s7+tlrc \
...
s2 s3 s2_s3+tlrc \
s2 s4 s2_s4+tlrc \
s2 s5 s2_s5+tlrc \
...
Example 2 --- ISC analysis with two groups (G1 and G2). Three ISCs can be
inferred at the population level, G11 (ISC among subjects within the first
group G1), G22 (ISC among subjects within the second group G2), and G12 (ISC
between subjects in the first group G1 and those in the second group G2). The
research interest can be various comparisons among G11, G22 and G12, and this
is the reason the group column 'grp' is coded with three types of population
ISC: G11, G22 and G12. By default each factor (categorical variable) is
internally quantified in the model using deviation coding with alphabetically
the last level as the reference. Notice the semi-esoteric weights for those
comparisons with -gltCode: the first weight corresponds to the intercept in
the model, which is the average effect across all the factor levels (and
corresponds to the zero value of a quantitative variable if present). If dummy
coding is preferred, check out the next script below. The components within
parentheses in the -model specifications are R notations for random effects.
Here is a good reference about factor coding strategies:
https://stats.idre.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/
-------------------------------------------------------------------------
3dISC -prefix ISC2a -jobs 12 \
-mask myMask+tlrc \
-model 'grp+(1|Subj1)+(1|Subj2)' \
-gltCode ave '1 0 -0.5' \
-gltCode G11 '1 1 0' \
-gltCode G12 '1 0 1' \
-gltCode G22 '1 -1 -1' \
-gltCode G11vG22 '0 2 1' \
-gltCode G11vG12 '0 1 -2' \
-gltCode G12vG22 '0 1 2' \
-gltCode ave-G12 '0 0 -1.5' \
-dataTable \
Subj1 Subj2 grp InputFile \
s1 s2 G11 s1_2+tlrc \
s1 s3 G11 s1_3+tlrc \
s1 s4 G11 s1_4+tlrc \
...
          s1   s25  G12  s1_25+tlrc  \
          s1   s26  G12  s1_26+tlrc  \
          s1   s27  G12  s1_27+tlrc  \
          ...
          s25  s26  G22  s25_26+tlrc \
          s25  s27  G22  s25_27+tlrc \
          s25  s48  G22  s25_48+tlrc \
...
The above script is equivalent to the one below. The only difference is that
we force 3dISC to adopt dummy coding by adding a zero in the -model
specification, which makes the weight coding much more intuitive. In this
particular case, the three weights are associated with the three
categories, G11, G12 and G22 (no intercept is assumed in the model as
requested with the zero (0) in the model specifications).
** Alert ** This coding strategy, using no intercept, only works when
there is a single explanatory variable (e.g., 'group' in this example).
For cases with more than one explanatory variable, consider adopting
other coding methods.
-------------------------------------------------------------------------
3dISC -prefix ISC2b -jobs 12 \
-model '0+grp+(1|Subj1)+(1|Subj2)' \
-gltCode ave '0.5 0 0.5' \
-gltCode G11 '1 0 0' \
-gltCode G12 '0 1 0' \
-gltCode G22 '0 0 1' \
-gltCode G11vG22 '1 0 -1' \
-gltCode G11vG12 '1 -1 0' \
-gltCode G12vG22 '0 1 -1' \
-gltCode ave-G12 '0.5 -1 0.5' \
-dataTable \
Subj1 Subj2 grp InputFile \
s1 s2 G11 s1_2+tlrc \
s1 s3 G11 s1_3+tlrc \
s1 s4 G11 s1_4+tlrc \
...
          s1   s25  G12  s1_25+tlrc  \
          s1   s26  G12  s1_26+tlrc  \
          s1   s27  G12  s1_27+tlrc  \
          ...
          s25  s26  G22  s25_26+tlrc \
          s25  s27  G22  s25_27+tlrc \
          s25  s48  G22  s25_48+tlrc \
...
There is a third way to analyze this same dataset if we are NOT
interested in the between-group ISC, G12. First, we adopt deviation
coding for the two groups by replacing the group labels G1 and G2 with 0.5 and
-0.5. Then add up the two values for each row (each subject pair),
resulting in three possible values of 1, -1 and 0. Put those three values
in the group column in the data table.
-------------------------------------------------------------------------
3dISC -prefix ISC2c -jobs 12 \
-model 'grp+(1|Subj1)+(1|Subj2)' \
-qVars grp \
-gltCode ave '1 0' \
-gltCode G11vG22 '0 1' \
-gltCode G11 '1 0.5' \
-gltCode G22 '1 -0.5' \
-dataTable \
Subj1 Subj2 grp InputFile \
s1 s2 1 s1_2+tlrc \
s1 s3 1 s1_3+tlrc \
s1 s4 1 s1_4+tlrc \
...
          s1   s25   0  s1_25+tlrc  \
          s1   s26   0  s1_26+tlrc  \
          s1   s27   0  s1_27+tlrc  \
          ...
          s25  s26  -1  s25_26+tlrc \
          s25  s27  -1  s25_27+tlrc \
          s25  s48  -1  s25_48+tlrc \
...
Example 3 --- ISC analysis for one group of subjects. The only difference
from Example 1 is that we want to add an explanatory variable 'Age'.
Before the age values are incorporated in the data table, do two things:
1) center the age by subtracting the center (e.g., overall mean) from each
subject's age, and 2) for each subject pair (each row in the data table)
add up the two ages (after centering). The components within parentheses
in the -model specifications are R notations for random effects.
-------------------------------------------------------------------------
3dISC -prefix ISC3 -jobs 12 \
-mask myMask+tlrc \
-model 'Age+(1|Subj1)+(1|Subj2)' \
-qVars Age \
-gltCode ave '1 0' \
-gltCode Age '0 1' \
-dataTable \
Subj1 Subj2 Age InputFile \
s1 s2 2 s1_s2+tlrc \
s1 s3 5 s1_s3+tlrc \
s1 s4 -4 s1_s4+tlrc \
s1 s5 3 s1_s5+tlrc \
s1 s6 -2 s1_s6+tlrc \
s1 s7 -1 s1_s7+tlrc \
...
s2 s3 2 s2_s3+tlrc \
s2 s4 4 s2_s4+tlrc \
s2 s5 -5 s2_s5+tlrc \
...
Example 4 --- ISC analysis with two groups of subjects (Sex: females and males)
plus a quantitative explanatory variable (Age). We are going to combine the
modeling strategy in the third analysis of Example 2 with Example 3. In
addition, we consider the interaction between Sex and Age by adding their
product as another column (called 'SA' in the data table). The components
within parentheses in the -model specifications are R notations for random
effects.
-------------------------------------------------------------------------
3dISC -prefix ISC4 -jobs 12 \
-mask myMask+tlrc \
-model 'Sex+Age+SA+(1|Subj1)+(1|Subj2)' \
-qVars 'Sex,Age,SA' \
-gltCode ave '1 0 0 0' \
-gltCode G11vG22 '0 1 0 0' \
-gltCode G11 '1 0.5 0 0' \
-gltCode G22 '1 -0.5 0 0' \
-gltCode Age '0 0 1 0' \
-gltCode Age1vAge2 '0 0 0 1' \
-gltCode Age1 '0 0 1 0.5' \
-gltCode Age2 '0 0 1 -0.5' \
-dataTable \
Subj1 Subj2 Sex Age SA InputFile \
s1 s2 1 2 2 s1_2+tlrc \
s1 s3 1 5 5 s1_3+tlrc \
s1 s4 1 -4 -4 s1_4+tlrc \
...
s1 s25 0 -2 0 s1_25+tlrc \
s1 s26 0 -1 0 s1_26+tlrc \
s1 s27 0 3 0 s1_27+tlrc \
...
s25 s26 -1 4 -4 s25_26+tlrc \
s25 s27 -1 -5 5 s25_27+tlrc \
s25 s48 -1 2 -2 s25_48+tlrc \
...
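The 'SA' column above is simply the row-wise product of the (coded) Sex and
(centered, pair-summed) Age columns; a hypothetical R illustration (not part of
3dISC; values are made up):
    tab    <- data.frame(Sex = c(1, 0, -1), Age = c(2, -2, 4))
    tab$SA <- tab$Sex * tab$Age          # interaction column fed to -qVars 'Sex,Age,SA'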
Example 5 --- ISC analysis with two conditions (C1 and C2). The research interest
is the contrast of ISC between the two conditions. The basic strategy
is to convert the data to the contrast between the conditions. In other words,
obtain the contrast of the Fisher-transformed ISC between the two
conditions for each subject pair, with a command like the following:
3dcalc -a subj1_subj2_cond1 -b subj1_subj2_cond2 -expr 'atanh(a)-atanh(b)' \
       -prefix subj1_subj2
The inverse hyperbolic tangent function 'atanh' is the same as the Fisher
z-transform. Then follow Example 1 with the contrasts from the above 3dcalc
output as input.
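A quick R check of the relationship used above (not part of 3dISC; r1 and r2 are
made-up ISC values for conditions C1 and C2):
    r1 <- 0.62; r2 <- 0.48
    all.equal(atanh(r1), 0.5 * log((1 + r1) / (1 - r1)))   # TRUE: atanh() is the Fisher z-transform
    contrast <- atanh(r1) - atanh(r2)                      # what 'atanh(a)-atanh(b)' computes per voxel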
Options in alphabetical order:
------------------------------
-cio: Use AFNI's C io functions, which is the default. Alternatively -Rio
can be used.
-dataTable TABLE: List the data structure with a header as the first line.
NOTE:
1) This option has to occur last in the script; that is, no other
options are allowed thereafter. Each line should end with a backslash
except for the last line.
2) The table should contain at least three columns, two of which are
for the two subjects in each pair, 'Subj1' and 'Subj2'. These two columns
code the labels of the two subjects involved
for each ISC file that is listed in the column 'InputFile'. The order of
the columns does not matter. Any subject-level explanatory variables
(e.g., age, sex, etc.) can be
specified as columns in the table. Each row should contain only one
ISC file in the table of long format (cf. wide format) as defined in R.
The level labels of a factor should contain at least
one character. Input files can be in AFNI, NIfTI or surface format.
AFNI files can be specified with a sub-brick selector (square brackets
[] within quotes) using a number or label.
3) It is fine to have variables (or columns) in the table that are
not modeled in the analysis.
4) The content of the table can be saved as a separate file, e.g.,
called table.txt. Do not forget to include a backslash at the end of
each row. In the script, specify the data with '-dataTable @table.txt'.
This option is useful: (a) when there are many input files so that
the program complains with an 'Arg list too long' error; (b) when
you want to try different models with the same dataset.
-dbgArgs: This option will enable R to save the parameters in a
file called .3dISC.dbg.AFNI.args in the current directory
so that debugging can be performed.
-gltCode label weights: Specify the label and weights of interest. The
weights should be surrounded with quotes.
-help: this help message
-IF var_name: var_name is used to specify the column name that is designated for
input files of effect estimates. The default (when this option is not invoked)
is 'InputFile', in which case the column header has to be exactly 'InputFile'.
This input file for effect estimates has to be the last column.
-jobs NJOBS: On a multi-processor machine, parallel computing will speed
up the program significantly.
Choose 1 for a single-processor computer.
-mask MASK: Process voxels inside this mask only.
Default is no masking.
-model FORMULA: Specify the model structure for all the variables. The
expression FORMULA with more than one variable has to be surrounded
within (single or double) quotes. Variable names in the formula
should be consistent with the ones used in the header of -dataTable.
In the ISC context the simplest model is "1+(1|Subj1)+(1|Subj2)", in
which the random effect from each of the two subjects in a pair is
symmetrically incorporated in the model. Each random-effects factor is
specified within parentheses per formula convention in R. Any
effects of interest and confounding variables (quantitative or
categorical variables) can be added as fixed effects without parentheses.
-prefix PREFIX: Output file name. For AFNI format, provide prefix only,
with no view+suffix needed. Filename for NIfTI format should have
.nii attached (otherwise the output would be saved in AFNI format).
-qVarCenters VALUES: Specify centering values for quantitative variables
identified under -qVars. Multiple centers are separated by
commas (,) without any other characters such as spaces and should
be surrounded within (single or double) quotes. The order of the
values should match that of the quantitative variables in -qVars.
Default (absence of option -qVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable YOURSELF first before the values are fed into -dataTable.
-qVars variable_list: Identify quantitative variables (or covariates) with
this option. The list with more than one variable has to be
separated with comma (,) without any other characters such as
spaces and should be surrounded within (single or double) quotes.
For example, -qVars "Age,IQ"
WARNINGS:
1) Centering a quantitative variable through -qVarCenters is
very critical when other fixed effects are of interest.
2) Between-subjects covariates are generally acceptable.
However EXTREME caution should be taken when the groups
differ substantially in the average value of the covariate.
-r2z: This option performs the Fisher transformation on the response variable
(input files) if it is a correlation value. Do not invoke the option
if the transformation has already been applied.
-Rio: Use R's io functions. The alternative is -cio.
-show_allowed_options: list of allowed options
-Subj1 var_name: var_name is used to specify the column name that is designated
as the first measuring-entity variable (usually subject). This option,
combined with the other option '-Subj2', forms a pair of two subjects;
the order between the two subjects does not matter. The default (when
the option is not invoked) is 'Subj1', in which case the column header has
to be exactly 'Subj1'.
-Subj2 var_name: var_name is used to specify the column name that is designated
as the second measuring-entity variable (usually subject). This option,
combined with the other option '-Subj1', forms a pair of two subjects;
the order between the two subjects does not matter. The default (when
the option is not invoked) is 'Subj2', in which case the column header has
to be exactly 'Subj2'.
AFNI program: 3dkmeans
++ 3dkmeans: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: avovk
3d+t Clustering segmentation, command-line version.
Based on The C clustering library.
Copyright (C) 2002 Michiel Jan Laurens de Hoon.
USAGE: 3dkmeans [options]
options:
-v, --version Version information
-f filename: Input data to be clustered.
You can specify multiple filenames in sequence
and they will be catenated internally.
e.g: -f F1+orig F2+orig F3+orig ...
or -f F1+orig -f F2+orig -f F3+orig ...
-input filename: Same as -f
-mask mset Means to use the dataset 'mset' as a mask:
Only voxels with nonzero values in 'mset'
will be printed from 'dataset'. Note
that the mask dataset and the input dataset
must have the same number of voxels.
-mrange a b Means to further restrict the voxels from
'mset' so that only those mask values
between 'a' and 'b' (inclusive) will
be used. If this option is not given,
all nonzero values from 'mset' are used.
Note that if a voxel is zero in 'mset', then
it won't be included, even if a < 0 < b.
-cmask 'opts' Means to execute the options enclosed in single
quotes as a 3dcalc-like program, and produce
a mask from the resulting 3D brick.
Examples:
-cmask '-a fred+orig[7] -b zork+orig[3] -expr step(a-b)'
produces a mask that is nonzero only where
the 7th sub-brick of fred+orig is larger than
the 3rd sub-brick of zork+orig.
-cmask '-a fred+orig -expr 1-bool(k-7)'
produces a mask that is nonzero only in the
7th slice (k=7); combined with -mask, you
could use this to extract just selected voxels
from particular slice(s).
Notes: * You can use both -mask and -cmask in the same
run - in this case, only voxels present in
both masks will be dumped.
* Only single sub-brick calculations can be
used in the 3dcalc-like calculations -
if you input a multi-brick dataset here,
without using a sub-brick index, then only
its 0th sub-brick will be used.
* Do not use quotes inside the 'opts' string!
-u jobname Allows you to specify a different name for the
output files.
(default is derived from the input file name)
-prefix PREFIX Allows you to specify a prefix for the output
volumes. Default is the same as jobname.
There are two output volumes, one for the cluster
membership and one with distance measures.
The distance dataset, mostly for debugging purposes,
is formatted as follows:
Sub-brick 0: Dc = 100*(1-Ci)+100*Di/(Dmax)
with Ci the cluster number for voxel i, Di the
distance of voxel i to the centroid of its
assigned cluster, Dmax is the maximum distance in
cluster Ci.
Sub-bricks 1..k: Dc0k contains the distance of a
voxel's data to the centroid of cluster k.
Sub-brick k+1: Dc_norm = (1.0-Di/Ei)*100.0, where
Ei is the smallest distance of voxel i to
the remaining clusters that is larger than Di.
-g [0..8] Specifies distance measure for clustering.
Note: Weight is a vector as long as the signatures
and is used when computing distances. However, for
the moment, all weights are set to 1.
0: No clustering
1: Uncentered correlation distance
Same as Pearson distance, except
the means of v and s are not removed
when computing correlation.
2: Pearson distance
= (1-Weighted_Pearson_Correlation(v,s))
3: Uncentered correlation distance, absolute value
Same as abs(Pearson distance), except
the means of v and s are not removed
when computing correlation.
4: Pearson distance, absolute value
= (1-abs(Weighted_Pearson_Correlation(v,s)))
5: Spearman's rank distance
= (1-Spearman_Rank_Correlation(v,s))
No weighting is used
6: Kendall's distance
= (1-Kendall_Tau(v,s))
No weighting is used
7: Euclidean distance between v and s
= 1/sum(weight) * sum(weight[i]*(v[i]-s[i])^2)
8: City-block distance
= 1/sum(weight) * sum(weight[i]*abs(v[i]-s[i]))
(default for -g is 1, 7 if input has one value per voxel)
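The distance formulas above can be sanity-checked in plain R; a minimal sketch
(not 3dkmeans's own code), assuming unit weights and made-up signatures v and s:
    v <- c(1.0, 2.0, 3.5, 2.2, 0.7)
    s <- c(0.9, 2.4, 3.0, 2.5, 1.1)
    w <- rep(1, length(v))
    d2 <- 1 - cor(v, s)                              # 2: Pearson distance
    d4 <- 1 - abs(cor(v, s))                         # 4: Pearson distance, absolute value
    d5 <- 1 - cor(v, s, method = "spearman")         # 5: Spearman's rank distance
    d6 <- 1 - cor(v, s, method = "kendall")          # 6: Kendall's distance
    d7 <- sum(w * (v - s)^2) / sum(w)                # 7: Euclidean distance (as defined above)
    d8 <- sum(w * abs(v - s)) / sum(w)               # 8: city-block distance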
-k number Specify number of clusters
-remap METH Reassign cluster numbers based on METH:
NONE: No remapping (default)
COUNT: based on cluster size ascending
iCOUNT: COUNT, descending
MAG: based on ascending magnitude of centroid
iMAG: MAG, descending
-labeltable LTFILE: Attach labeltable LTFILE to clustering
output. This labeltable will overwrite
a table that is taken from CLUST_INIT
should you use the -clust_init option.
-clabels LAB1 LAB2 ...: Provide a label for each cluster.
Labels cannot start with '-'.
-clust_init CLUST_INIT: Specify a dataset to initialize
clustering. This option sets '-r 0'.
If CLUST_INIT has a labeltable and
you do not specify one, then CLUST_INIT's
table is used for the output.
-r number For k-means clustering, the number of times the
k-means clustering algorithm is run
(default: 0 with -clust_init, 1 otherwise)
-rsigs SIGS Calculate distances from each voxel's signature
to the signatures in SIGS.
SIGS is a multi-column 1D file with each column
being a signature.
The output is a dset the same size as the input
with as many sub-bricks as there are columns in
SIGS.
With this option, no clustering is done.
-verb verbose
-write_dists Output text files containing various measures.
FILE.kgg.1D : Cluster assignments
FILE.dis.1D : Distance between clusters
FILE.cen.1D : Cluster centroids
FILE.info1.1D: Within cluster sum of distances
FILE.info2.1D: Maximum distance within each cluster
FILE.vcd.1D: Distance from voxel to its centroid
-voxdbg I J K Output debugging info for voxel I J K
-seed SEED Seed for the random number generator.
Default is 1234567
AFNI program: 3dKruskalWallis
++ 3dKruskalWallis: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
This program performs the nonparametric Kruskal-Wallis test for
comparison of multiple treatments.
Usage:
3dKruskalWallis
-levels s s = number of treatments
-dset 1 filename data set for treatment #1
. . . . . .
-dset 1 filename data set for treatment #1
. . . . . .
-dset s filename data set for treatment #s
. . . . . .
-dset s filename data set for treatment #s
[-workmem mega] number of megabytes of RAM to use
for statistical workspace
[-voxel num] screen output for voxel # num
-out prefixname Kruskal-Wallis statistics are written
to file prefixname
N.B.: For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used, e.g.:
-dset 2 'fred+orig[3]'
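For orientation only (this is not what 3dKruskalWallis runs internally), the test
applied at each voxel is the standard Kruskal-Wallis rank test; a single-voxel R
illustration with made-up values:
    y   <- c(1.2, 0.8, 1.5,  2.0, 2.4, 1.9,  0.3, 0.5, 0.1)   # one value per input dataset
    trt <- factor(rep(c("t1", "t2", "t3"), each = 3))          # s = 3 treatments
    kruskal.test(y ~ trt)                                      # rank-based test across treatments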
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dLFCD
Usage: 3dLFCD [options] dset
Computes voxelwise local functional connectivity density as defined in:
Tomasi, D. and Volkow, N.D., PNAS, May 2010, 107 (21) 9885-9890;
DOI: 10.1073/pnas.1001414107
The results are stored in a new 3D bucket dataset as floats to preserve
their values. Local functional connectivity density (LFCD; as opposed to global
functional connectivity density, see 3dDegreeCentrality) reflects
the extent of the correlation of a voxel within its locally connected cluster.
Conceptually the process involves:
1. Calculating the correlation between voxel time series for
every pair of voxels in the brain (as determined by masking)
2. Applying a threshold to the resulting correlations to exclude
those that might have arisen by chance
3. Finding the cluster of above-threshold voxels that are spatially
connected to the target voxel.
4. Counting the number of voxels in the local cluster.
Practically the algorithm is ordered differently to optimize for
computational time and memory usage.
The procedure described in the paper defines a voxel's
neighborhood to be the 6 voxels with which it shares a face.
This definition can be changed to include edge and corner
voxels using the -neighborhood flags below.
LFCD is a localized variant of binarized degree centrality;
the weighted alternative is calculated by changing step 4
above to the sum of the correlation coefficients
between the seed region and the neighbors. 3dLFCD outputs
both of these values (in separate sub-bricks), since they are
so easy to calculate in tandem.
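A toy R sketch of conceptual steps 1-4 (NOT 3dLFCD's optimized implementation),
using a tiny synthetic 4x4x4 grid, face adjacency, and an assumed threshold of 0.6:
    set.seed(1)
    dims <- c(4, 4, 4); nvox <- prod(dims); ntime <- 50
    ts   <- matrix(rnorm(nvox * ntime), nrow = ntime)       # one time series per voxel (columns)
    idx  <- function(v) (v[3] - 1) * dims[1] * dims[2] + (v[2] - 1) * dims[1] + v[1]
    sig  <- rnorm(ntime)                                     # plant a shared signal near the seed
    for (v in list(c(2, 2, 2), c(3, 2, 2), c(2, 3, 2))) ts[, idx(v)] <- sig + 0.3 * rnorm(ntime)
    seed  <- c(2, 2, 2)
    r     <- cor(ts[, idx(seed)], ts)[1, ]                   # step 1: correlations with the seed
    above <- r > 0.6                                         # step 2: threshold
    offsets <- rbind(c(1,0,0), c(-1,0,0), c(0,1,0), c(0,-1,0), c(0,0,1), c(0,0,-1))
    cluster <- idx(seed); frontier <- list(seed)             # step 3: grow the face-connected cluster
    while (length(frontier) > 0) {
      v <- frontier[[1]]; frontier <- frontier[-1]
      for (m in 1:6) {
        nb <- v + offsets[m, ]
        if (all(nb >= 1) && all(nb <= dims) && above[idx(nb)] && !(idx(nb) %in% cluster)) {
          cluster  <- c(cluster, idx(nb))
          frontier <- c(frontier, list(nb))
        }
      }
    }
    lfcd_binarized <- length(cluster)                        # step 4: count voxels in the cluster ...
    lfcd_weighted  <- sum(r[cluster])                        # ... or sum their correlations (weighted)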
You might prefer to calculate this on your data after
spatial normalization, so that the range of values is
consistent between datasets. Similarly, the same brain mask
should be used for all datasets that will be directly compared.
The original paper used a correlation threshold of 0.6 and
excluded all voxels with tSNR < 50. 3dLFCD does not discard
voxels based on tSNR; this would need to be done beforehand.
Options:
-pearson = Correlation is the normal Pearson (product moment)
correlation coefficient [default].
-spearman AND -quadrant are disabled at this time :-(
-thresh r = exclude correlations <= r from calculations
-faces = define neighborhood to include face touching
voxels (default)
-faces_edges = define neighborhood to include face and
edge touching voxels
-faces_edges_corners = define neighborhood to include face,
edge, and corner touching voxels
-polort m = Remove polynomial trend of order 'm', for m=-1..3.
[default is m=1; removal is by least squares].
Using m=-1 means no detrending; this is only useful
for data/information that has been pre-processed.
-autoclip = Clip off low-intensity regions in the dataset,
-automask = so that the correlation is only computed between
high-intensity (presumably brain) voxels. The
mask is determined the same way that 3dAutomask works.
This is done automatically if no mask is provided.
-mask mmm = Mask to define 'in-brain' voxels. Reducing the number
of voxels included in the calculation will
significantly speed up the calculation. Consider using
a mask to constrain the calculations to the grey matter
rather than the whole brain. This is also preferable
to using -autoclip or -automask.
-prefix p = Save output into dataset with prefix 'p', this file will
contain bricks for both 'weighted' and 'binarized' lFCD
[default prefix is 'LFCD'].
Notes:
* The output dataset is a bucket type of floats.
* The program prints out an estimate of its memory used
when it ends. It also prints out a progress 'meter'
to keep you pacified.
-- RWCox - 31 Jan 2002 and 16 Jul 2010
-- Cameron Craddock - 13 Nov 2015
=========================================================================
* This binary version of 3dLFCD is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dLME
================== Welcome to 3dLME ==================
AFNI Group Analysis Program with Linear Mixed-Effects Modeling Approach
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Version 2.1.5, March 15, 2024
Author: Gang Chen (gangchen@mail.nih.gov)
Website - https://afni.nimh.nih.gov/sscc/gangc/lme.html
SSCC/NIMH, National Institutes of Health, Bethesda MD 20892
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Usage:
------
3dLME is a group-analysis program that performs linear mixed-effects (LME)
modeling analysis. One simple criterion to decide whether 3dLME is appropriate
is that each subject has to have two or more measurements at each spatial
location (except for a small portion of subjects with missing data). In other
words, at least one within-subject (or repeated-measures) factor serves as
explanatory variable. For complex random-effects structures, use 3dLMEr.
F-statistics for main effects and interactions are automatically included in
the output for all variables. In addition, Student t-tests for quantitative
variables are also in the output, and general linear tests (GLTs) can
be requested via symbolic coding.
If you want to cite the analysis approach, use the following:
Chen, G., Saad, Z.S., Britton, J.C., Pine, D.S., Cox, R.W. (2013). Linear
Mixed-Effects Modeling Approach to FMRI Group Analysis. NeuroImage 73:176-190.
http://dx.doi.org/10.1016/j.neuroimage.2013.01.047
Input files for 3dLME can be in AFNI, NIfTI, or surface (niml.dset) format.
In addition to R installation, the following R packages need to be installed
in R first before running 3dLME: "nlme", "lme4" and "phia". In addition, the "snow"
package is also needed if one wants to take advantage of parallel computing.
To install these packages, run the following command at the terminal:
rPkgsInstall -pkgs ALL
Alternatively, you may install them in R:
install.packages("nlme")
install.packages("lme4")
install.packages("phia")
install.packages("snow")
More details about 3dLME can be found at
https://afni.nimh.nih.gov/sscc/gangc/LME.html
Once the 3dLME command script is constructed, it can be run by copying and
pasting to the terminal. Alternatively (and probably better) you can save the
script as a text file, for example, called LME.txt, and execute it with the
following (assuming the tcsh shell),
tcsh -x LME.txt &
or,
tcsh -x LME.txt > diary.txt &
or,
tcsh -x LME.txt |& tee diary.txt &
The advantage of the latter commands is that the progress is saved into
the text file diary.txt and, if anything goes awry, can be examined later.
Thanks to the R community, Henrik Singmann and Helios de Rosario for the strong
technical support.
Example 1 --- one condition modeled with 8 basis functions (e.g., TENT or TENTzero)
for one group of 13 subjects. With the option -bounds, values beyond the range will
be treated as outliers and considered as missing. If you want to set a range, choose
the bounds that make sense with your input data.
--------------------------------
3dLME -prefix myOutput -jobs 4 \
-mask myMask+tlrc \
-model '0+Time' \
-bounds -2 2 \
-qVars order \
-qVarCenters 0 \
-ranEff '~1' \
-corStr 'order : AR1' \
-SS_type 3 \
-num_glf 1 \
-glfLabel 1 4TimePoints -glfCode 1 'Time : 1*Diff2 & 1*Diff3 & 1*Diff4 & 1*Diff5' \
-dataTable \
Subj Time order InputFile \
c101 Diff0 0 testData/c101time0+tlrc \
c101 Diff1 1 testData/c101time1+tlrc \
c101 Diff2 2 testData/c101time2+tlrc \
c101 Diff3 3 testData/c101time3+tlrc \
c101 Diff4 4 testData/c101time4+tlrc \
c101 Diff5 5 testData/c101time5+tlrc \
c101 Diff6 6 testData/c101time6+tlrc \
c101 Diff7 7 testData/c101time7+tlrc \
c103 Diff0 0 testData/c103time0+tlrc \
c103 Diff1 1 testData/c103time1+tlrc \
...
Example 2 --- one within-subject factor (conditions: House and Face), one
within-subject quantitative variable (reaction time, RT) and one between-
subjects covariate (age). RT values don't differ significantly between the
two conditions, and thus are centered via grand mean. Random effects are
intercept and RT effect whose correlation is estimated from the data. With
the option -bounds, values beyond [-2, 2] will be treated as outliers and
considered as missing.
-------------------------------------------------------------------------
3dLME -prefix Example2 -jobs 24 \
-model "cond*RT+age" \
-bounds -2 2 \
-qVars "RT,age" \
-qVarCenters "105.35,34.7" \
-ranEff '~1+RT' \
-SS_type 3 \
-num_glt 4 \
-gltLabel 1 'House' -gltCode 1 'cond : 1*House' \
-gltLabel 2 'Face-House' -gltCode 2 'cond : 1*Face -1*House' \
-gltLabel 3 'House-AgeEff' -gltCode 3 'cond : 1*House age :' \
-gltLabel 4 'House-Age2' -gltCode 4 'cond : 1*House age : 5.3' \
-num_glf 1 \
-glfLabel 1 'cond_age' -glfCode 1 'cond : 1*House & 1*Face age :' \
-dataTable \
Subj cond RT age InputFile \
s1 House 124 35 s1+tlrc'[House#0_Coef]' \
s2 House 97 51 s2+tlrc'[House#0_Coef]' \
s3 House 107 25 s3+tlrc'[House#0_Coef]' \
...
s1 Face 110 35 s1+tlrc'[Face#0_Coef]' \
s2 Face 95 51 s2+tlrc'[Face#0_Coef]' \
s3 Face 120 25 s3+tlrc'[Face#0_Coef]' \
...
Example 3 --- one within-subject factor (conditions: positive, negative,
and neutral), and one between-subjects factor (groups: control and patients).
Effect estimates for a few subjects are available for only one or two
conditions. These subjects with missing data would have to be abandoned in
the traditional ANOVA approach. All subjects can be included with 3dLME, and
a random intercept is considered.
-------------------------------------------------------------------------
3dLME -prefix Example3 -jobs 24 \
-mask myMask+tlrc \
-model "cond*group" \
-bounds -2 2 \
-ranEff '~1' \
-SS_type 3 \
-num_glt 6 \
-gltLabel 1 'pos-neu' -gltCode 1 'cond : 1*pos -1*neu' \
-gltLabel 2 'neg' -gltCode 2 'cond : 1*neg ' \
-gltLabel 3 'pos+neu-neg' -gltCode 3 'cond : 1*pos +1*neu -1*neg' \
-gltLabel 4 'pat_pos-neu' -gltCode 4 'cond : 1*pos -1*neu group : 1*pat' \
-gltLabel 5 'pat_neg-neu' -gltCode 5 'cond : 1*neg -1*neu group : 1*pat' \
-gltLabel 6 'pat_pos-neg' -gltCode 6 'cond : 1*pos -1*neg group : 1*pat' \
-num_glf 1 \
-glfLabel 1 'pos-neu' -glfCode 1 'Group : 1*ctr & 1*pat cond : 1*pos -1*neu & 1*pos -1*neg' \
-dataTable \
Subj cond group InputFile \
s1 pos ctr s1+tlrc'[pos#0_Coef]' \
s1 neg ctr s1+tlrc'[neg#0_Coef]' \
s1 neu ctr s1+tlrc'[neu#0_Coef]' \
...
s21 pos pat s21+tlrc'[pos#0_Coef]' \
s21 neg pat s21+tlrc'[neg#0_Coef]' \
s21 neu pat s21+tlrc'[neu#0_Coef]' \
...
Example 4 --- Computing ICC values for two within-subject factors (Cond:
positive, negative, and neutral; Scanner: one and two) plus subjects (factor
Subj).
-------------------------------------------------------------------------
3dLME -prefix Example4 -jobs 12 \
-mask myMask+tlrc \
-model "1" \
-bounds -2 2 \
-ranEff 'Cond+Scanner+Subj' \
-ICCb \
-dataTable \
Subj Cond Scanner InputFile \
s1 pos one s1_1+tlrc'[pos#0_Coef]' \
s1 neg one s1_1+tlrc'[neg#0_Coef]' \
s1 neu one s1_1+tlrc'[neu#0_Coef]' \
s1 pos two s1_2+tlrc'[pos#0_Coef]' \
s1 neg two s1_2+tlrc'[neg#0_Coef]' \
s1 neu two s1_2+tlrc'[neu#0_Coef]' \
...
s21 pos two s21_2+tlrc'[pos#0_Coef]' \
s21 neg two s21_2+tlrc'[neg#0_Coef]' \
s21 neu two s21_2+tlrc'[neu#0_Coef]' \
...
Options in alphabetical order:
------------------------------
-bounds lb ub: This option is for outlier removal. Two numbers are expected from
the user: the lower bound (lb) and the upper bound (ub). The input data will
be confined within [lb, ub]: any values in the input data that are beyond
the bounds will be removed and treated as missing. Make sure the first number
is less than the second. The default (the absence of this option) is no
outlier removal.
-cio: Use AFNI's C io functions, which is the default. Alternatively -Rio
can be used.
-corStr FORMULA: Specify the correlation structure of the residuals. For example,
when analyzing the effect estimates from multiple basis functions,
one may consider accounting for the temporal structure of the residuals
with AR or ARMA.
-cutoff threshold: Specify the cutoff value to obtain voxel-wise accuracy
in logistic regression analysis. Default is 0 (no accuracy will
be estimated).
-dataTable TABLE: List the data structure with a header as the first line.
NOTE:
1) This option has to occur last; that is, no other options are
allowed thereafter. Each line should end with a backslash except for
the last line.
2) The first column is fixed and reserved with label 'Subj', and the
last is reserved for 'InputFile'. Each row should contain only one
effect estimate in the table of long format (cf. wide format) as
defined in R. The level labels of a factor should contain at least
one character. Input files can be in AFNI, NIfTI or surface format.
AFNI files can be specified with a sub-brick selector (square brackets
[] within quotes) using a number or label.
3) It is fine to have variables (or columns) in the table that are
not modeled in the analysis.
4) The content of the table can be saved as a separate file, e.g.,
called table.txt. In the script, specify the information with '-dataTable
@table.txt'. This option is useful: (a) when there are many input
files so that the program complains with an 'Arg list too long' error;
(b) when you want to try different models with the same dataset.
When the table is a stand-alone file, quotes should NOT be added around
the sub-brick selector -- square brackets [...]. Also, there is no need
to add a backslash at the end of each line.
-dbgArgs: This option will enable R to save the parameters in a
file called .3dLME.dbg.AFNI.args in the current directory
so that debugging can be performed.
-glfCode k CODING: Specify the k-th general linear F-test (GLF) through a
weighted combination among factor levels. The symbolic coding has
to be within (single or double) quotes. For example, the coding
'Condition : 1*A -1*B & 1*A -1*C Emotion : 1*pos' tests the main
effect of Condition at the positive Emotion. Similarly, the coding
'Condition : 1*A -1*B & 1*A -1*C Emotion : 1*pos -1*neg' shows
the interaction between the three levels of Condition and the two
levels of Emotion.
NOTE:
1) The weights for a variable do not have to add up to 0.
2) When a quantitative variable is present, other effects are
tested at the center value of the covariate unless the covariate
value is specified as, for example, 'Group : 1*Old Age : 2', where
the Old Group is tested at the Age of 2 above the center.
3) The absence of a categorical variable in a coding means the
levels of that factor are averaged (or collapsed) for the GLF.
4) The appearance of a categorical variable has to be followed
by the linear combination of its levels.
-glfLabel k label: Specify the label for the k-th general linear F-test
(GLF). A symbolic coding for the GLF is assumed to follow with
each -glfLabel.
-gltCode k CODING: Specify the k-th general linear test (GLT) through a
weighted combination among factor levels. The symbolic coding has
to be within (single or double) quotes. For example, the following
'Condition : 2*House -3*Face Emotion : 1*positive '
requests a test comparing 2 times the House condition
with 3 times the Face condition while Emotion is held at the positive
valence.
NOTE:
1) The weights for a variable do not have to add up to 0.
2) When a quantitative variable is present, other effects are
tested at the center value of the covariate unless the covariate
value is specified as, for example, 'Group : 1*Old Age : 2', where
the Old Group is tested at the Age of 2 above the center.
3) The effect for a quantitative variable can be specified with,
for example, 'Group : 1*Old Age : ', or
'Group : 1*Old - 1*Young Age : '
4) The absence of a categorical variable in a coding means the
levels of that factor are averaged (or collapsed) for the GLT.
5) The appearance of a categorical variable has to be followed
by the linear combination of its levels. Only a quantitative variable
is allowed to have a dangling coding, as seen in 'Age :'
-gltLabel k label: Specify the label for the k-th general linear test
(GLT). A symbolic coding for the GLT is assumed to follow with
each -gltLabel.
-help: this help message
-ICC: This option allows 3dLME to compute voxel-wise intra-class correlation
for the variables specified through option -ranEff. See Example 4 in
the help. Consider using the more flexible program 3dICC. If trial-
level data are available, a more accurate approach is to use the
program TRR at the region level or the program 3dLMEr at the voxel
level. Refer to the following paper for more detail:
Chen, G., Pine, D.S., Brotman, M.A., Smith, A.R., Cox, R.W., Haller,
S.P., 2021. Trial and error: A hierarchical modeling approach to
test-retest reliability. NeuroImage 245, 118647.
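For orientation, the quantity computed for a random factor is the usual
variance-component ratio (that factor's variance divided by the total variance).
A minimal R sketch of that ratio on made-up data (lme4 is used here only for
illustration; 3dLME itself relies on nlme/blme internally):
    library(lme4)
    set.seed(1)
    subj_eff <- rnorm(20, sd = 1)                            # between-subject variability
    d   <- data.frame(Subj = rep(paste0("s", 1:20), each = 3),
                      y    = rep(subj_eff, each = 3) + rnorm(60, sd = 0.5))
    fit <- lmer(y ~ 1 + (1 | Subj), data = d)
    vc  <- as.data.frame(VarCorr(fit))                       # variance components
    icc <- vc$vcov[vc$grp == "Subj"] / sum(vc$vcov)          # between-subject / total variance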
-ICCb: This option allows 3dLME to compute voxel-wise intra-class correlation
through a Bayesian approach with Gamma priors for the variables
specified through option -ranEff. The computation will take much
longer due to the sophistication involved. However, the Bayesian method is
preferred to the old approach with -ICC for typical FMRI data. The R
package 'blme' is required for this option. Consider using the more
flexible program 3dICC.
-jobs NJOBS: On a multi-processor machine, parallel computing will speed
up the program significantly.
Choose 1 for a single-processor computer.
-LOGIT: This option allows 3dLME to perform voxel-wise logistic modeling.
Currently no random effects are allowed ('-ranEff NA'), but this
limitation can be removed later if demand occurs. The InputFile
column is expected to list subjects' responses in 0s and 1s. In
addition, one voxel-wise covariate is currently allowed. Each
regression coefficient (including the intercept) and its z-statistic
are saved in the output.
-logLik: Add this option if the voxel-wise log likelihood is wanted in the output.
This option currently cannot be combined with -ICC, -ICCb, -LOGIT.
-mask MASK: Process voxels inside this mask only.
Default is no masking.
-ML: Add this option if Maximum Likelihood is wanted instead of the default
method, Restricted Maximum Likelihood (REML).
-model FORMULA: Specify the terms of fixed effects for all explanatory,
including quantitative, variables. The expression FORMULA with more
than one variable has to be surrounded within (single or double)
quotes. Variable names in the formula should be consistent with
the ones used in the header of -dataTable. A+B represents the
additive effects of A and B, A:B is the interaction between A
and B, and A*B = A+B+A:B. Subject should not occur in the model
specification here.
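A tiny R illustration (not 3dLME code) of the formula shorthand described above:
    d <- expand.grid(A = c("a1", "a2"), B = c("b1", "b2"))
    colnames(model.matrix(~ A * B, d))         # "(Intercept)" "Aa2" "Bb2" "Aa2:Bb2"
    colnames(model.matrix(~ A + B + A:B, d))   # identical columns: A*B = A+B+A:B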
-num_glf NUMBER: Specify the number of general linear F-tests (GLFs). A glf
involves the union of two or more simple tests. See details in
-glfCode.
-num_glt NUMBER: Specify the number of general linear t-tests (GLTs). A glt
is a linear combination of factor levels. See details in
-gltCode.
-prefix PREFIX: Output file name. For AFNI format, provide prefix only,
with no view+suffix needed. Filename for NIfTI format should have
.nii attached, while file name for surface data is expected
to end with .niml.dset. The sub-brick labeled with the '(Intercept)',
if present, should be interpreted as the effect with each factor
at the reference level (alphabetically the lowest level) for each
factor and with each quantitative covariate at the center value.
-qVarCenters VALUES: Specify centering values for quantitative variables
identified under -qVars. Multiple centers are separated by
commas (,) without any other characters such as spaces and should
be surrounded within (single or double) quotes. The order of the
values should match that of the quantitative variables in -qVars.
Default (absence of option -qVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable YOURSELF first before the values are fed into -dataTable.
-qVars variable_list: Identify quantitative variables (or covariates) with
this option. The list with more than one variable has to be
separated with comma (,) without any other characters such as
spaces and should be surrounded within (single or double) quotes.
For example, -qVars "Age,IQ"
WARNINGS:
1) Centering a quantitative variable through -qVarCenters is
very critical when other fixed effects are of interest.
2) Between-subjects covariates are generally acceptable.
However EXTREME caution should be taken when the groups
differ significantly in the average value of the covariate.
3) Within-subject covariates are better modeled with 3dLME.
-ranEff FORMULA: Specify the random effects. The simplest and most common
one is random intercept, "~1", meaning that each subject deviates some
amount (called random effect) from the group average. "~RT" or "~1+RT"
means that each subject has a unique intercept as well as a slope,
and the correlation between the two random effects are estimated, not
assumed, from the data. "~0+RT" indicates that only a random effect
of slope is desired. Compound symmetry for a variance-covariance matrix
across the levels of factor A can be specified through pdCompSymm(~0+A).
The list of random terms should be separated by space within (single or
double) quotes.
Notice: In the case of computing ICC values, list all the factors with
which the ICC is to be obtained. For example, with two factors "Scanner"
and "Subj", set it as -ranEff "Scanner+Subj". See Example 4 in the
the help.
-RE: Specify the list of variables whose random effects are saved in the output.
For example, '-RE "Intercept"' requests saving the random
intercept for all subjects, while '-RE "Intercept,time"' asks for
saving both the random intercept and the random slope of time for all subjects.
The output filename is specified through -REprefix. All random effects are
stored in the same file with each sub-brick named by the variable name plus
the subject label.
-REprefix: Specify the output filename for random effects. All random effects are
stored in the same file with each sub-brick named by the variable name plus
the subject label.
-resid PREFIX: Output file name for the residuals. For AFNI format, provide
prefix only without view+suffix. Filename for NIfTI format should
have .nii attached, while file name for surface data is expected
to end with .niml.dset. The sub-brick labeled with the '(Intercept)',
if present, should be interpreted as the effect with each factor
at the reference level (alphabetically the lowest level) for each
factor and with each quantitative covariate at the center value.
-Rio: Use R's io functions. The alternative is -cio.
-show_allowed_options: list of allowed options
-SS_type NUMBER: Specify the type for sums of squares in the F-statistics.
Two options are currently supported: sequential (1) and marginal (3).
-vVarCenters VALUES: Specify centering values for voxel-wise covariates
identified under -vVars. Multiple centers are separated by
commas (,) within (single or double) quotes. The order of the
values should match that of the voxel-wise covariates in -vVars.
Default (absence of option -vVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable YOURSELF first before the files are fed into -dataTable.
-vVars variable_list: Identify voxel-wise covariates with this option.
Currently one voxel-wise covariate is allowed only, but this
may change if demand occurs...
By default mean centering is performed voxel-wise across all
subjects. Alternatively centering can be specified through a
global value under -vVarCenters. If the voxel-wise covariates
have already been centered, set the centers at 0 with -vVarCenters.
AFNI program: 3dLME2
================== Welcome to 3dLME2 ==================
Program for Voxelwise Linear Mixed-Effects (LME) Analysis
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Version 1.0.0, Apr 23, 2024
Author: Gang Chen (gangchen@mail.nih.gov)
SSCC/NIMH, National Institutes of Health, Bethesda MD 20892, USA
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
------
Linear Mixed-Effects (LME) analysis adopts the traditional approach that
differentiates two types of effects: fixed effects capture the population-
level components while random effects characterize the lower-level components
such as individuals, families, scanning sites, etc.
3dLME2 is a revised version of its older counterpart 3dLME in the sense that
3dLME2 is more flexible in specifying the random-effects components and
the variance-covariance structure than the latter.
Like 3dLME, all main effects and interactions are automatically available in
the output while simple effects that tease apart those main effects and
interactions would have to be requested through options -gltCode or -glfCode.
Input files can be in AFNI, NIfTI, surface (niml.dset) or 1D format. To obtain
the output in the same format as the input, append a proper suffix to the
output specification option -prefix (e.g., .nii, .niml.dset or .1D for NIfTI,
surface or 1D).
3dLME2 allows for the incorporation of various types of explanatory variables
including categorical (between- and within-subject factors) and
quantitative variables (e.g., age, behavioral data). The burden of properly
specifying the structure of lower-level effects is placed on the user's
shoulders, so familiarize yourself with the following FAQ in case you want some
clarification: https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html
Whenever a quantitative variable is involved, it is required to explicitly
declare the variable through option -qVars. In addition, be mindful about the
centering issue of each quantitative variable: you have to decide
which makes more sense in the research context - global centering or within-
condition (or within-group) centering? Here is some background and discussion
about the issue:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/statistics/center.html
The following exemplifying scripts are good demonstrations. More examples will
be added in the future if I can crowdsource more scenarios from the users
(including you, the reader). If you find an example like your data
structure, use the example(s) as a template and then build up your own script.
In addition to R installation, the following R packages need to be installed
first before running 3dLME2: "nlme", "phia" and "snow". To install these R
packages, run the following command at the terminal:
rPkgsInstall -pkgs "nlme,phia,snow"
Alternatively, you may install them in R:
install.packages("nlme")
install.packages("phia")
install.packages("snow")
Once the 3dLME2 command script is constructed, it can be run by copying and
pasting to the terminal. Alternatively (and probably better) you save the
script as a text file, for example, called LME.txt, and execute it with the
following (assuming the tcsh shell),
nohup tcsh -x LME.txt &
or,
nohup tcsh -x LME.txt > diary.txt &
or,
nohup tcsh -x LME.txt |& tee diary.txt &
The advantage of the latter commands is that the progress is saved into
the text file diary.txt and, if anything goes awry, can be examined later.
Example 1 --- Simplest case: LME analysis for one group of subjects each of
which has three effects associated with three emotions (pos, neg and neu),
and the effects of interest are the comparisons among the three emotions
at the population level (missing data allowed). This data structure is usually
considered as one-way repeated-measures (or within-subject) ANOVA if no
missing data occurred. The LME model is typically formulated with a random
intercept in this case. With the option -bounds, values beyond [-2, 2] will
be treated as outliers and considered as missing. If you want to set a range,
choose the bounds that make sense with your input data.
-------------------------------------------------------------------------
3dLME2 -prefix LME -jobs 12 \
-mask myMask+tlrc \
-fixef 'emotion' \
-ranef '~1|Subj' \
-SS_type 3 \
-bounds -2 2 \
-gltCode pos 'emotion : 1*pos' \
-gltCode neg 'emotion : 1*neg' \
-gltCode neu 'emotion : 1*neu' \
-gltCode pos-neg 'emotion : 1*pos -1*neg' \
-gltCode pos-neu 'emotion : 1*pos -1*neu' \
-gltCode neg-neu 'emotion : 1*neg -1*neu' \
-gltCode em-eff1 'emotion : 0.5*pos +0.5*neg -1*neu' \
-glfCode em-eff2 'emotion : 1*pos -1*neg & 1*pos -1*neu' \
-dataTable \
Subj emotion InputFile \
s1 pos s1_pos+tlrc \
s1 neg s1_neg+tlrc \
s1 neu s1_neu+tlrc \
s2 pos s2_pos+tlrc \
s2 neg s2_neg+tlrc \
s2 neu s2_neu+tlrc \
...
s20 pos s20_pos+tlrc \
s20 neg s20_neg+tlrc \
s20 neu s20_neu+tlrc \
...
Example 2 --- LME analysis for one group of subjects each of which has
three effects associated with three emotions (pos, neg and neu), and the
effects of interest are the comparisons among the three emotions at the
population level. In addition, reaction time (RT) is available per emotion
from each subject. An LME model can be formulated to include both random
intercept and random slope. Be careful about the centering issue about any
quantitative variable: you have to decide which makes more sense - global
centering or within-condition (or within-group) centering?
-------------------------------------------------------------------------
3dLME2 -prefix LME -jobs 12 \
-mask myMask+tlrc \
-fixef 'emotion*RT' \
-ranef '~RT|Subj' \
-corr corSymm '~1|Subj' \
-SS_type 3 \
-bounds -2 2 \
-qVars 'RT' \
-qVarCenters 0 \
-gltCode pos 'emotion : 1*pos' \
-gltCode neg 'emotion : 1*neg' \
-gltCode neu 'emotion : 1*neu' \
-gltCode pos-neg 'emotion : 1*pos -1*neg' \
-gltCode pos-neu 'emotion : 1*pos -1*neu' \
-gltCode neg-neu 'emotion : 1*neg -1*neu' \
-gltCode em-eff1 'emotion : 0.5*pos +0.5*neg -1*neu' \
-glfCode em-eff2 'emotion : 1*pos -1*neg & 1*pos -1*neu' \
-dataTable \
Subj emotion RT InputFile \
s1 pos 23 s1_pos+tlrc \
s1 neg 34 s1_neg+tlrc \
s1 neu 28 s1_neu+tlrc \
s2 pos 31 s2_pos+tlrc \
s2 neg 22 s2_neg+tlrc \
s2 neu 29 s2_neu+tlrc \
...
s20 pos 12 s20_pos+tlrc \
s20 neg 20 s20_neg+tlrc \
s20 neu 30 s20_neu+tlrc \
...
Options in alphabetical order:
------------------------------
-bounds lb ub: This option is for outlier removal. Two numbers are expected from
the user: the lower bound (lb) and the upper bound (ub). The input data will
be confined within [lb, ub]: any values in the input data that are beyond
the bounds will be removed and treated as missing. Make sure the first number
is less than the second. The default (the absence of this option) is no
outlier removal.
-cio: Use AFNI's C io functions, which is the default. Alternatively, -Rio
can be used.
-corr class FORMULA: correlation structure.
-dataTable TABLE: List the data structure with a header as the first line.
NOTE:
1) This option has to occur last in the script; that is, no other
options are allowed thereafter. Each line should end with a backslash
except for the last line.
2) The order of the columns should not matter except that the last
column has to be the one for input files, 'InputFile'. Unlike 3dLME, the
subject column (Subj in 3dLME) does not have to be the first column;
and the table does not have to include a subject ID column in some situations.
Each row should contain only one input file in the table of long format
(cf. wide format) as defined in R. Input files can be in AFNI, NIfTI or
surface format. AFNI files can be specified with a sub-brick selector (square
brackets [] within quotes) using a number or label.
3) It is fine to have variables (or columns) in the table that are
not modeled in the analysis.
4) When the table is part of the script, a backslash is needed at the end
of each line (except for the last line) to indicate the continuation to the
next line. Alternatively, one can save the content of the table as a separate
file, e.g., calling it table.txt, and then in the script specify the data
with '-dataTable @table.txt'. However, when the table is provided as a
separate file, do NOT put any quotes around the square brackets for each
sub-brick, otherwise the program would not properly read the files, unlike the
situation when quotes are required if the table is included as part of the
script. Backslash is also not needed at the end of each line, but it would
not cause any problem if present. This option of separating the table from
the script is useful: (a) when there are many input files so that the program
complains with an 'Arg list too long' error; (b) when you want to try
different models with the same dataset.
-dbgArgs: This option will enable R to save the parameters in a
file called .3dLME2.dbg.AFNI.args in the current directory
so that debugging can be performed.
-fixef FORMULA: Specify the model structure for all the variables. The
expression FORMULA with more than one variable has to be surrounded
within (single or double) quotes. Variable names in the formula
should be consistent with the ones used in the header of -dataTable.
In the LME context the simplest model is "1+(1|Subj)", in
which the random effect from each subject is
incorporated in the model. Each random-effects factor is
specified within parentheses per formula convention in R. Any
effects of interest and confounding variables (quantitative or
categorical variables) can be added as fixed effects without parentheses.
-glfCode label CODING: Specify a general linear F-style (GLF) formulation
with the weights among factor levels in which two or more null
relationships (e.g., A-B=0 and B-C=0) are involved. The symbolic
coding has to be within (single or double) quotes. For example, the
coding -glfCode AvBvC 'Condition : 1*A -1*B & 1*A -1*C Emotion : 1*pos'
examines the main effect of Condition at the positive Emotion with
the output labeled as AvBvC. Similarly the coding -glfCode CondByEmo
'Condition : 1*A -1*B & 1*A -1*C Emotion : 1*pos -1*neg' looks
for the interaction between the three levels of Condition and the
two levels of Emotion and the resulting sub-brick is labeled as
'CondByEmo'.
NOTE:
1) The weights for a variable do not have to add up to 0.
2) When a quantitative variable is present, other effects are
tested at the center value of the covariate unless the covariate
value is specified as, for example, 'Group : 1*Old Age : 2', where
the Old Group is tested at the Age of 2 above the center.
3) The absence of a categorical variable in a coding means the
levels of that factor are averaged (or collapsed) for the GLF.
4) The appearance of a categorical variable has to be followed
by the linear combination of its levels.
-gltCode label weights: Specify the label and weights of interest in a general
linear t-style (GLT) formulation in which only one null relationship is
involved (cf. -glfCode). The weights should be surrounded with quotes. For
example, the specification -gltCode AvB 'Condition : 1*A -1*B' compares A
and B with the label 'AvB' for the output sub-bricks.
-help: this help message
-IF var_name: var_name is used to specify the column name that is designated for
input files of effect estimates. The default (when this option is not invoked)
is 'InputFile', in which case the column header has to be exactly 'InputFile'.
This input file for effect estimates has to be the last column.
-jobs NJOBS: On a multi-processor machine, parallel computing will speed
up the program significantly.
Choose 1 for a single-processor computer.
-mask MASK: Process voxels inside this mask only.
Default is no masking.
-prefix PREFIX: Output file name. For AFNI format, provide prefix only,
with no view+suffix needed. Filename for NIfTI format should have
.nii attached (otherwise the output would be saved in AFNI format).
-qVarCenters VALUES: Specify centering values for quantitative variables
identified under -qVars. Multiple centers are separated by
commas (,) without any other characters such as spaces and should
be surrounded within (single or double) quotes. The order of the
values should match that of the quantitative variables in -qVars.
Default (absence of option -qVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable YOURSELF first before the values are fed into -dataTable.
-qVars variable_list: Identify quantitative variables (or covariates) with
this option. The list with more than one variable has to be
separated with comma (,) without any other characters such as
spaces and should be surrounded within (single or double) quotes.
For example, -qVars "Age,IQ"
WARNINGS:
1) Centering a quantitative variable through -qVarCenters is
very critical when other fixed effects are of interest.
2) Between-subjects covariates are generally acceptable.
However EXTREME caution should be taken when the groups
differ substantially in the average value of the covariate.
-ranef FORMULA: Specify random effects.
-resid PREFIX: Output file name for the residuals. For AFNI format, provide
prefix only without view+suffix. Filename for NIfTI format should
have .nii attached, while file name for surface data is expected
to end with .niml.dset. The sub-brick labeled with the '(Intercept)',
if present, should be interpreted as the effect with each factor
at the reference level (alphabetically the lowest level) for each
factor and with each quantitative covariate at the center value.
-Rio: Use R's io functions. The alternative is -cio.
-show_allowed_options: list of allowed options
-SS_type NUMBER: Specify the type for sums of squares in the F-statistics.
Three options are: sequential (1), hierarchical (2), and marginal (3).
When this option is absent (default), marginal (3) is automatically set.
Some discussion regarding their differences can be found here:
https://sscc.nimh.nih.gov/sscc/gangc/SS.html
-vVarCenters VALUES: Specify centering values for voxel-wise covariates
identified under -vVars. Multiple centers are separated by
commas (,) within (single or double) quotes. The order of the
values should match that of the voxel-wise covariates in -vVars.
Default (absence of option -vVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable yourself first before the files are fed under -dataTable.
-vVars variable_list: Identify voxel-wise covariates with this option.
Currently one voxel-wise covariate is allowed only. By default
mean centering is performed voxel-wise across all subjects.
Alternatively centering can be specified through a global value
under -vVarCenters. If the voxel-wise covariates have already
been centered, set the centers at 0 with -vVarCenters.
-wt class FORMULA: variance (weights) structure.
AFNI program: 3dLMEr
================== Welcome to 3dLMEr ==================
Program for Voxelwise Linear Mixed-Effects (LME) Analysis
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Version 1.1.1, Feb 18, 2025
Author: Gang Chen (gangchen@mail.nih.gov)
SSCC/NIMH, National Institutes of Health, Bethesda MD 20892, USA
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
------
### Overview of 3dLMEr
Linear Mixed-Effects (LME) analysis follows a traditional framework that
distinguishes between two types of effects:
- Fixed effects capture population-level components.
- Random effects account for lower-level variability, such as subjects, families,
or scanning sites.
3dLMEr is an advanced and more flexible successor to 3dLME. It enhances model
specification, particularly in handling random-effects components. While 3dLME was
built on the nlme R package, 3dLMEr leverages lme4, allowing for greater flexibility.
Additionally, statistical values for main effects and interactions are approximated
using Satterthwaite’s method.
### Key Differences Between 3dLMEr and 3dLME
1. Random-effects specification:
- In 3dLMEr, random effects are fully integrated into the model formula (via `-model ...`).
- The `-ranEff` option from 3dLME is no longer needed.
- Users must explicitly specify the model structure. See this blogpost for details:
How to Specify Individual-Level Random Effects in Hierarchical Modeling
https://discuss.afni.nimh.nih.gov/t/how-to-specify-individual-level-random-effects-in-hierarchical-modeling/6462
2. Simplified effect specification:
- Labels for simple and composite effects are now part of `-gltCode` and `-glfCode`,
eliminating the need for `-gltLabel`.
3. Output format for statistical values:
- Main effects, interactions, and composite effects (generated automatically by 3dLMEr)
are stored as chi-square statistics (with 2 degrees of freedom).
- Simple effects (specified by the user) are stored as Z-statistics.
- The fixed 2 degrees of freedom for chi-square statistics simplifies interpretation,
as the Satterthwaite method produces varying degrees of freedom.
### Citing 3dLMEr
If you use 3dLMEr in your analysis, cite:
- General LME approach:
Chen, G., Saad, Z.S., Britton, J.C., Pine, D.S., Cox, R.W. (2013).
Linear Mixed-Effects Modeling Approach to FMRI Group Analysis. *NeuroImage, 73*, 176-190.
[DOI: 10.1016/j.neuroimage.2013.01.047](http://dx.doi.org/10.1016/j.neuroimage.2013.01.047)
- Test-retest reliability using trial-level effect estimates (`-TRR` option):
Chen, G., Pine, D.S., Brotman, M.A., Smith, A.R., Cox, R.W., Haller, S.P. (2021).
Trial and error: A hierarchical modeling approach to test-retest reliability.
*NeuroImage, 245*, 118647.
[DOI: 10.1016/j.neuroimage.2021.118647](https://doi.org/10.1016/j.neuroimage.2021.118647)
### Input & Output Formats
Supported input formats:
- AFNI
- NIfTI
- Surface (`niml.dset`)
- 1D text files
To match the output format to the input, append an appropriate suffix to `-prefix`
(e.g., `.nii`, `.niml.dset`, or `.1D`).
### Model Specification & Considerations
Explanatory variables:
3dLMEr supports:
- Categorical variables (e.g., between- and within-subject factors)
- Quantitative variables (e.g., age, behavioral measures)
User responsibility:
- The burden of specifying lower-level effects is on the user.
- For clarifications, refer to this FAQ: [Mixed Models FAQ]
(https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html).
Handling quantitative variables:
- Declare them explicitly using `-qVars`.
- Consider centering options:
- Global centering (across all subjects)
- Within-condition/group centering (depends on research context)
- More details on centering: https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/statistics/center.html
### Example Scripts
Check out example scripts below that demonstrate different data structures. If one
matches your study, use it as a template. More examples will be added over time—
contributions are welcome!
### Installation Requirements
Before running 3dLMEr, install the following R packages:
```
install.packages("lmerTest")
install.packages("phia")
install.packages("snow")
```
Alternatively, use AFNI’s installer:
```
rPkgsInstall -pkgs "lmerTest,phia,snow"
```
### Running 3dLMEr
Once your script is ready, run it in the terminal:
```
nohup tcsh -x LME.txt &
```
or, to save the output log:
```
nohup tcsh -x LME.txt > diary.txt &
```
or, to display output live while saving it:
```
nohup tcsh -x LME.txt |& tee diary.txt &
```
Saving logs allows you to review output later if issues arise.
Example 1 --- Simplest case: LME analysis for one group of subjects each of
which has three effects associated with three emotions (pos, neg and neu),
and the effects of interest are the comparisons among the three emotions
at the population level (missing data allowed). This data structure is usually
considered as one-way repeated-measures (or within-subject) ANOVA if no
missing data occurred. The LME model is typically formulated with a random
intercept in this case. With the option -bounds, values beyond [-2, 2] will
be treated as outliers and considered as missing. If you want to set a range,
choose the bounds that make sense with your input data.
**NOTE**: Using the -bounds option to remove outliers should be approached
with caution due to its arbitrariness. A more principled alternative is
to use 3dGLMM with a Student's t-distribution.
-------------------------------------------------------------------------
3dLMEr -prefix LME -jobs 12 \
-mask myMask+tlrc \
-model 'emotion+(1|Subj)' \
-SS_type 3 \
-bounds -2 2 \
-gltCode mean 'emotion : 0.333*pos +0.333*neg + 0.333*neu' \
-gltCode pos 'emotion : 1*pos' \
-gltCode neg 'emotion : 1*neg' \
-gltCode neu 'emotion : 1*neu' \
-gltCode pos-neg 'emotion : 1*pos -1*neg' \
-gltCode pos-neu 'emotion : 1*pos -1*neu' \
-gltCode neg-neu 'emotion : 1*neg -1*neu' \
-gltCode em-eff1 'emotion : 0.5*pos +0.5*neg -1*neu' \
-glfCode em-eff2 'emotion : 1*pos -1*neg & 1*pos -1*neu' \
-dataTable \
Subj emotion InputFile \
s1 pos s1_pos+tlrc \
s1 neg s1_neg+tlrc \
s1 neu s1_neu+tlrc \
s2 pos s2_pos+tlrc \
s2 neg s2_neg+tlrc \
s2 neu s2_neu+tlrc \
...
s20 pos s20_pos+tlrc \
s20 neg s20_neg+tlrc \
s20 neu s20_neu+tlrc \
...
**Note:** `3dLMEr` does not explicitly output the model intercept (overall mean).
However, you can extract it using the `-gltCode` option, as shown in the script above:
-gltCode mean 'emotion : 0.333*pos +0.333*neg +0.333*neu'
Example 2 --- LME analysis for one group of subjects each of which has
three effects associated with three emotions (pos, neg and neu), and the
effects of interest are the comparisons among the three emotions at the
population level. In addition, reaction time (RT) is available per emotion
from each subject. An LME model can be formulated to include both random
intercept and random slope. Be careful about centering any
quantitative variable: you have to decide which makes more sense - global
centering or within-condition (or within-group) centering.
**NOTE**: Using the -bounds option to remove outliers should be approached
with caution due to its arbitrariness. A more principled alternative is
to use 3dGLMM with a Student's t-distribution.
-------------------------------------------------------------------------
3dLMEr -prefix LME -jobs 12 \
-mask myMask+tlrc \
-model 'emotion*RT+(RT|Subj)' \
-SS_type 3 \
-bounds -2 2 \
-qVars 'RT' \
-qVarCenters 0 \
-gltCode pos 'emotion : 1*pos' \
-gltCode neg 'emotion : 1*neg' \
-gltCode neu 'emotion : 1*neu' \
-gltCode pos-neg 'emotion : 1*pos -1*neg' \
-gltCode pos-neu 'emotion : 1*pos -1*neu' \
-gltCode neg-neu 'emotion : 1*neg -1*neu' \
-gltCode em-eff1 'emotion : 0.5*pos +0.5*neg -1*neu' \
-glfCode em-eff2 'emotion : 1*pos -1*neg & 1*pos -1*neu' \
-dataTable \
Subj emotion RT InputFile \
s1 pos 23 s1_pos+tlrc \
s1 neg 34 s1_neg+tlrc \
s1 neu 28 s1_neu+tlrc \
s2 pos 31 s2_pos+tlrc \
s2 neg 22 s2_neg+tlrc \
s2 neu 29 s2_neu+tlrc \
...
s20 pos 12 s20_pos+tlrc \
s20 neg 20 s20_neg+tlrc \
s20 neu 30 s20_neu+tlrc \
...
Example 3 --- LME analysis for one group of subjects each of which has three
effects associated with three emotions (pos, neg and neu), and the effects
of interest are the comparisons among the three emotions at the population
level. As the data were acquired across 12 scanning sites, we set up an LME
model with a crossed random-effects structure, one term for cross-subject
variability and one for cross-site variability.
**NOTE**: Using the -bounds option to remove outliers should be approached
with caution due to its arbitrariness. A more principled alternative is
to use 3dGLMM with a Student's t-distribution.
-------------------------------------------------------------------------
3dLMEr -prefix LME -jobs 12 \
-mask myMask+tlrc \
-model 'emotion+(1|Subj)+(1|site)' \
-SS_type 3 \
-bounds -2 2 \
-gltCode pos 'emotion : 1*pos' \
-gltCode neg 'emotion : 1*neg' \
-gltCode neu 'emotion : 1*neu' \
-gltCode pos-neg 'emotion : 1*pos -1*neg' \
-gltCode pos-neu 'emotion : 1*pos -1*neu' \
-gltCode neg-neu 'emotion : 1*neg -1*neu' \
-gltCode em-eff1 'emotion : 0.5*pos +0.5*neg -1*neu' \
-glfCode em-eff2 'emotion : 1*pos -1*neg & 1*pos -1*neu' \
-dataTable \
Subj emotion site InputFile \
s1 pos site1 s1_pos+tlrc \
s1 neg site1 s1_neg+tlrc \
s1 neu site2 s1_neu+tlrc \
s2 pos site1 s2_pos+tlrc \
s2 neg site2 s2_neg+tlrc \
s2 neu site3 s2_neu+tlrc \
...
s80 pos site12 s80_pos+tlrc \
s80 neg site12 s80_neg+tlrc \
s80 neu site10 s80_neu+tlrc \
...
Example 4 --- LME analysis with a between-subject factor (group: two groups of
subjects -- control, patient), two within-subject factors (emotion: 3 levels
-- pos, neg, neu; type: 2 levels -- face, word), one quantitative variable (age).
**NOTE**: Using the -bounds option to remove outliers should be approached
with caution due to its arbitrariness. A more principled alternative is to
use 3dGLMM with a Student's t-distribution.
-------------------------------------------------------------------------
3dLMEr -prefix LME -jobs 12 \
-mask myMask+tlrc \
-model 'group*emotion*type+age+(1|Subj)+(1|Subj:emotion)+(1|Subj:type)' \
-SS_type 3 \
-bounds -2 2 \
-gltCode pat.pos 'group : 1*patient emotion : 1*pos' \
-gltCode pat.neg 'group : 1*patient emotion : 1*neg' \
-gltCode ctr.pos.age 'group : 1*control emotion : 1*pos age :' \
-dataTable \
Subj group emotion type age InputFile \
s1 control pos face 35 s1_pos+tlrc \
s1 control neg face 35 s1_neg+tlrc \
s1 control neu face 35 s1_neu+tlrc \
s2 control pos face 23 s2_pos+tlrc \
s2 control neg face 23 s2_neg+tlrc \
s2 control neu face 23 s2_neu+tlrc \
...
s80 patient pos word 28 s80_pos+tlrc \
s80 patient neg word 28 s80_neg+tlrc \
s80 patient neu word 28 s80_neu+tlrc \
...
Example 5 --- Test-retest reliability. An LME model can be adopted for test-
retest reliability analysis if trial-level effect estimates (e.g., using
option -stim_times_IM in 3dDeconvolve/3dREMLfit) are available from each
subject. The following script demonstrates a situation where each subject
performed the same two tasks across two sessions. The goal is to obtain the
test-retest reliability at the whole-brain voxel level for the contrast
between the two tasks, with the reliability for the average
effect of the two tasks as a byproduct.
WARNING: numerical failures may occur, especially for a contrast between
two conditions. The failures manifest as a large proportion of 0, 1 and -1
values in the output. In that case, use the program TRR to conduct
region-level test-retest reliability analysis.
**NOTE**: Using the -bounds option to remove outliers should be approached
with caution due to its arbitrariness. A more principled alternative is
to use 3dGLMM with a Student's t-distribution.
-------------------------------------------------------------------------
3dLMEr -prefix output -TRR -jobs 16 \
-qVars 'cond' \
-bounds -2 2 \
-model '0+sess+cond:sess+(0+sess|Subj)+(0+cond:sess|Subj)' \
-dataTable @data.tbl
With many trials per condition, it is recommended that the data table
be saved as a separate pure-text file in long format, with the condition
(variable 'cond' in the script above) dummy coded as -0.5 and
0.5 and declared through the option -qVars 'cond'. Code subject and
session as factors with labels. Below is an example of the data table.
There is no need to add a backslash at the end of each line. If a
sub-brick selector is used, do NOT use gzipped files (otherwise the file
reading time would be too long) and do NOT add quotes around the square
brackets [] for the sub-brick selector.
Subj sess cond InputFile
Subj1 s1 -0.5 Subj1s1c1_trial1.nii
Subj1 s1 -0.5 Subj1s1c1_trial2.nii
...
Subj1 s1 -0.5 Subj1s1c1_trial40.nii
Subj1 s1 0.5 Subj1s1c2_trial1.nii
Subj1 s1 0.5 Subj1s1c2_trial2.nii
...
Subj1 s1 0.5 Subj1s1c2_trial40.nii
Subj1 s2 -0.5 Subj1s2c1_trial1.nii
Subj1 s2 -0.5 Subj1s2c1_trial2.nii
...
Subj1 s2 -0.5 Subj1s2c1_trial40.nii
Subj1 s2 0.5 Subj1s2c2_trial1.nii
Subj1 s2 0.5 Subj1s2c2_trial2.nii
...
Subj1 s2 0.5 Subj1s2c2_trial40.nii
...
Options in alphabetical order:
------------------------------
-bounds lb ub: This option is for outlier removal. Two numbers are expected from
the user: the lower bound (lb) and the upper bound (ub). The input data will
be confined within [lb, ub]: any values in the input data that are beyond
the bounds will be removed and treated as missing. Make sure the first number
is less than the second. The default (the absence of this option) is no
outlier removal.
**NOTE**: Using the -bounds option to remove outliers should be approached
with caution due to its arbitrariness. A more principled alternative is
to use 3dGLMM with a Student's t-distribution.
-cio: Use AFNI's C io functions, which is the default. Alternatively, -Rio
can be used.
-dataTable TABLE: List the data structure with a header as the first line.
NOTE:
1) This option has to occur last in the script; that is, no other
options are allowed thereafter. Each line should end with a backslash
except for the last line.
2) The order of the columns should not matter except that the last
column has to be the one for input files, 'InputFile'. Unlike 3dLME, the
subject column (Subj in 3dLME) does not have to be the first column;
and in some situations the table does not even have to include a subject
ID column. Each row should contain only one input file, in the long format
(cf. wide format) as defined in R. Input files can be in AFNI, NIfTI or
surface format. AFNI files can be given with a sub-brick selector (square
brackets [] within quotes) using a number or label.
3) It is fine to have variables (or columns) in the table that are
not modeled in the analysis.
4) When the table is part of the script, a backslash is needed at the end
of each line (except for the last line) to indicate the continuation to the
next line. Alternatively, one can save the content of the table as a separate
file, e.g., calling it table.txt, and then in the script specify the data
with '-dataTable @table.txt' (as sketched below). However, when the table is provided as a
separate file, do NOT put any quotes around the square brackets for each
sub-brick, otherwise the program would not properly read the files, unlike the
situation when quotes are required if the table is included as part of the
script. Backslash is also not needed at the end of each line, but it would
not cause any problem if present. This option of separating the table from
the script is useful: (a) when there are many input files so that the program
complains with an 'Arg list too long' error; (b) when you want to try
different models with the same dataset.
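For instance (a hypothetical layout; variable and file names are
illustrative only), a separate file table.txt might contain:
Subj emotion InputFile
s1 pos stats.s1+tlrc[pos#0_Coef]
s1 neg stats.s1+tlrc[neg#0_Coef]
s1 neu stats.s1+tlrc[neu#0_Coef]
...
with no quotes around the sub-brick selectors, and the script would
then point to it with '-dataTable @table.txt'.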
-dbgArgs: This option will enable R to save the parameters in a
file called .3dLMEr.dbg.AFNI.args in the current directory
so that debugging can be performed.
-glfCode label CODING: Specify a general linear F-style (GLF) formulation
with the weights among factor levels in which two or more null
relationships (e.g., A-B=0 and B-C=0) are involved. The symbolic
coding has to be within (single or double) quotes. For example, the
coding -glfCode AvBvC 'Condition : 1*A -1*B & 1*A -1*C Emotion : 1*pos'
examines the main effect of Condition at the positive Emotion, with
the output labeled as AvBvC. Similarly the coding -glfCode CondByEmo
'Condition : 1*A -1*B & 1*A -1*C Emotion : 1*pos -1*neg' looks
for the interaction between the three levels of Condition and the
two levels of Emotion, and the resulting sub-brick is labeled as
'CondByEmo'.
NOTE:
1) The weights for a variable do not have to add up to 0.
2) When a quantitative variable is present, other effects are
tested at the center value of the covariate unless the covariate
value is specified as, for example, 'Group : 1*Old Age : 2', where
the Old Group is tested at the Age of 2 above the center.
3) The absence of a categorical variable in a coding means the
levels of that factor are averaged (or collapsed) for the GLF.
4) The appearance of a categorical variable has to be followed
by the linear combination of its levels.
-gltCode label weights: Specify the label and weights of interest in a general
linear t-style (GLT) formulation in which only one null relationship is
involved (cf. -glfCode). The weights should be surrounded with quotes. For
example, the specification -gltCode AvB 'Condition : 1*A -1*B' compares A
and B, with the label 'AvB' used for the output sub-bricks.
-help: this help message
-IF var_name: var_name is used to specify the column name that is designated for
input files of effect estimates. The default (when this option is not invoked)
is 'InputFile', in which case the column header has to be exactly 'InputFile'.
This input-file column for effect estimates has to be the last column.
-jobs NJOBS: On a multi-processor machine, parallel computing will speed
up the program significantly.
Choose 1 for a single-processor computer.
-mask MASK: Process voxels inside this mask only.
Default is no masking.
-model FORMULA: Specify the model structure for all the variables. The
expression FORMULA with more than one variable has to be surrounded
within (single or double) quotes. Variable names in the formula
should be consistent with the ones used in the header of -dataTable.
In the LME context the simplest model is "1+(1|Subj)", in
which each subject contributes a random intercept (a random effect)
to the model. Each random-effects factor is
specified within parentheses per formula convention in R. Any
effects of interest and confounding variables (quantitative or
categorical variables) can be added as fixed effects without parentheses.
-prefix PREFIX: Output file name. For AFNI format, provide prefix only,
with no view+suffix needed. Filename for NIfTI format should have
.nii attached (otherwise the output would be saved in AFNI format).
-qVarCenters VALUES: Specify centering values for quantitative variables
identified under -qVars. Multiple centers are separated by
commas (,) without any other characters such as spaces and should
be surrounded within (single or double) quotes. The order of the
values should match that of the quantitative variables in -qVars.
Default (absence of option -qVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable YOURSELF first before the values are fed into -dataTable.
-qVars variable_list: Identify quantitative variables (or covariates) with
this option. The list with more than one variable has to be
separated with comma (,) without any other characters such as
spaces and should be surrounded within (single or double) quotes.
For example, -qVars "Age,IQ"
WARNINGS:
1) Centering a quantitative variable through -qVarCenters is
very critical when other fixed effects are of interest.
2) Between-subjects covariates are generally acceptable.
However EXTREME caution should be taken when the groups
differ substantially in the average value of the covariate.
-R2: Enabling this option will prompt the program to provide both
conditional and marginal coefficient of determination (R^2)
values associated with the adopted model. Marginal R^2 indicates
the proportion of variance explained by the fixed effects in the
model, while conditional R^2 represents the proportion of variance
explained by the entire model, encompassing both fixed and random
effects. Two sub-bricks labeled 'R2m' and 'R2c' will be provided
in the output.
-resid PREFIX: Output file name for the residuals. For AFNI format, provide
prefix only without view+suffix. Filename for NIfTI format should
have .nii attached, while file name for surface data is expected
to end with .niml.dset. The sub-brick labeled with the '(Intercept)',
if present, should be interpreted as the effect with each factor
at the reference level (alphabetically the lowest level) for each
factor and with each quantitative covariate at the center value.
-Rio: Use R's io functions. The alternative is -cio.
-show_allowed_options: list of allowed options
-SS_type NUMBER: Specify the type for sums of squares in the F-statistics.
Three options are: sequential (1), hierarchical (2), and marginal (3).
When this option is absent (default), marginal (3) is automatically set.
Some discussion regarding their differences can be found here:
https://sscc.nimh.nih.gov/sscc/gangc/SS.html
-TRR: This option will allow the analyst to perform test-retest reliability analysis
at the whole-brain voxel level. To be able to adopt this modeling approach,
trial-level effect estimates have to be provided from each subject (e.g.,
using option -stim_times_IM in 3dDeconvolve/3dREMLfit). Currently it works
for the situation with two conditions for a group of subjects that went
through two sessions. The analytical goal is to assess test-retest reliability
across the two sessions for the contrast between the two conditions. Check out
Example 5 for model specification. Numerical failures may occur for a contrast
between two conditions, manifesting as values of 0, 1 or -1 in the output.
In that case, use program TRR for ROI-level test-retest reliability analysis.
-vVarCenters VALUES: Specify centering values for voxel-wise covariates
identified under -vVars. Multiple centers are separated by
commas (,) within (single or double) quotes. The order of the
values should match that of the voxel-wise covariates in -vVars.
Default (absence of option -vVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable yourself first before the files are fed under -dataTable.
-vVars variable_list: Identify voxel-wise covariates with this option.
Currently only one voxel-wise covariate is allowed. By default
mean centering is performed voxel-wise across all subjects.
Alternatively centering can be specified through a global value
under -vVarCenters. If the voxel-wise covariates have already
been centered, set the centers at 0 with -vVarCenters.
AFNI program: 3dLocalACF
Usage: 3dLocalACF [options] inputdataset
Options:
--------
-prefix ppp
-input inputdataset
-nbhd nnn
-mask maskdataset
-automask
Notes:
------
* This program estimates the spatial AutoCorrelation Function (ACF)
locally in a neighborhood around each voxel, unlike '3dFWHMx -acf',
which produces an average over the whole volume.
* The input dataset must be a time series dataset, and must have
been detrended, despiked, etc. already. The 'errts' output from
afni_proc.py is recommended!
* A brain mask is highly recommended as well.
* I typically use 'SPHERE(25)' for the neighborhood. YMMV.
* This program is very slow.
This copy of it uses multiple threads (OpenMP), so it is
somewhat tolerable to use.
***** This program is experimental *****
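As no example is given above, here is a minimal sketch of a typical
command (file names are illustrative only), using the 'SPHERE(25)'
neighborhood suggested above:
3dLocalACF -input errts.subj01+tlrc \
-mask mask_brain+tlrc \
-nbhd 'SPHERE(25)' \
-prefix LocalACF_subj01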
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dLocalBistat
Usage: 3dLocalBistat [options] dataset1 dataset2
This program computes statistics between 2 datasets,
at each voxel, based on a local neighborhood of that voxel.
- The neighborhood is defined by the '-nbhd' option.
- Statistics to be calculated are defined by the '-stat' option(s).
- The 2 input datasets should have the same number of sub-bricks.
- OR dataset1 should have 1 sub-brick and dataset2 can have more than 1:
- In which case, the statistics of dataset2 against dataset1 are
calculated for the #0 sub-brick of dataset1 against each sub-brick
of dataset2.
OPTIONS
-------
-nbhd 'nnn' = The string 'nnn' defines the region around each
voxel that will be extracted for the statistics
calculation. The format of the 'nnn' string is:
* 'SPHERE(r)' where 'r' is the radius in mm;
the neighborhood is all voxels whose center-to-
center distance is less than or equal to 'r'.
** A negative value for 'r' means that the region
is calculated using voxel indexes rather than
voxel dimensions; that is, the neighborhood
region is a "sphere" in voxel indexes of
"radius" abs(r).
* 'RECT(a,b,c)' is a rectangular block which
proceeds plus-or-minus 'a' mm in the x-direction,
'b' mm in the y-direction, and 'c' mm in the
z-direction. The correspondence between the
dataset xyz axes and the actual spatial orientation
can be determined by using program 3dinfo.
** A negative value for 'a' means that the region
extends plus-and-minus abs(a) voxels in the
x-direction, rather than plus-and-minus a mm.
Mutatis mutandum for negative 'b' and/or 'c'.
* 'RHDD(r)' is a rhombic dodecahedron of 'radius' r.
* 'TOHD(r)' is a truncated octahedron of 'radius' r.
-stat sss = Compute the statistic named 'sss' on the values
extracted from the region around each voxel:
* pearson = Pearson correlation coefficient
* spearman = Spearman correlation coefficient
* quadrant = Quadrant correlation coefficient
* mutinfo = Mutual Information
* normuti = Normalized Mutual Information
* jointent = Joint entropy
* hellinger= Hellinger metric
* crU = Correlation ratio (Unsymmetric)
* crM = Correlation ratio (symmetrized by Multiplication)
* crA = Correlation ratio (symmetrized by Addition)
* L2slope = slope of least-squares (L2) linear regression of
the data from dataset1 vs. the dataset2
(i.e., d2 = a + b*d1 ==> this is 'b')
* L1slope = slope of least-absolute-sum (L1) linear regression
of the data from dataset1 vs. the dataset2
* num = number of the values in the region:
with the use of -mask or -automask,
the size of the region around any given
voxel will vary; this option lets you
map that size.
* ALL = all of the above, in that order
More than one '-stat' option can be used.
-mask mset = Read in dataset 'mset' and use the nonzero voxels
therein as a mask. Voxels NOT in the mask will
not be used in the neighborhood of any voxel. Also,
a voxel NOT in the mask will have its statistic(s)
computed as zero (0).
-automask = Compute the mask as in program 3dAutomask.
-mask and -automask are mutually exclusive: that is,
you can only specify one mask.
-weight ws = Use dataset 'ws' as a weight. Only applies to 'pearson'.
-prefix ppp = Use string 'ppp' as the prefix for the output dataset.
The output dataset is always stored as floats.
ADVANCED OPTIONS
----------------
-histpow pp = By default, the number of bins in the histogram used
for calculating the Hellinger, Mutual Information,
and Correlation Ratio statistics is n^(1/3), where n
is the number of data points in the -nbhd mask. You
can change that exponent to 'pp' with this option.
-histbin nn = Or you can just set the number of bins directly to 'nn'.
-hclip1 a b = Clip dataset1 to lie between values 'a' and 'b'. If 'a'
and 'b' end in '%', then these values are percentage
points on the cumulative histogram.
-hclip2 a b = Similar to '-hclip1' for dataset2.
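For example, a minimal sketch of a typical command (dataset names are
illustrative only), computing the local Pearson correlation within a
6 mm radius sphere:
3dLocalBistat -nbhd 'SPHERE(6)' -stat pearson \
-mask mask+orig -prefix LocalPearson \
dset1+orig dset2+orig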
-----------------------------
Author: RWCox - October 2006.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dLocalHistog
++ 3dLocalHistog: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: Thorin Oakenshield
Usage: 3dLocalHistog [options] dataset ...
This program computes, at each voxel, a count of how many times each
unique value occurs in a neighborhood of that voxel, across all the input
datasets.
* The neighborhood is defined by the '-nbhd' option.
* The input datasets should be in short or byte format, without
scaling factors attached.
* You can input float format datasets, but the values will be rounded
to an integer between -32767 and 32767 before being used.
* You can also output the overall histogram of the dataset collection,
via the '-hsave' option (as a 1D file). This is simply the count of how
many times each value occurs.
* For histograms of continuously valued datasets see program 3dLocalstat
with option -stat hist*
OPTIONS
-------
-nbhd 'nnn' = The string 'nnn' defines the region around each
voxel that will be extracted for the statistics
calculation. The format of the 'nnn' string is
the same as in 3dLocalstat:
* 'SPHERE(r)'
* 'RECT(a,b,c)'
* 'RHDD(a)'
* 'TOHD(a)'
* If no '-nbhd' option is given, then just the voxel
itself is used -- in which case, the input dataset(s)
must comprise a total of at least 2 sub-bricks!
-prefix ppp = Use string 'ppp' as the prefix for the output dataset.
-hsave sss = Save the overall histogram into file 'sss'. This file will
have 2 columns: value count
Values with zero count will not be shown in this file.
-lab_file LL = Use file 'LL' as a label file. The first column contains
the numbers, the second column the corresponding labels.
* You can use a column selector to choose the columns you
want. For example, if the first column has the labels
and the second the values, use 'filename[1,0]'.
-exclude a..b = Exclude values from 'a' to 'b' from the counting.
* Zero (0) will never be excluded.
* You can use '-exclude' more than once.
-excNONLAB = If '-lab_file' is used, then exclude all values that are NOT
in the label file (except for 0, of course).
-mincount mm = Exclude values which appear in the overall histogram
fewer than 'mm' times.
* Excluded values will be treated as if they are zero
(and so appear in the '0:Other' output sub-brick).
* The overall histogram output by '-hsave' is NOT altered
by the use of '-mincount' or '-exclude' or '-excNONLAB'.
-prob = Normally, the output dataset is a set of counts. This
option converts each count to a 'probability' by dividing
by the total number of counts at each voxel.
* The resulting dataset is stored as bytes, in units of
0.01, so that p=1 corresponds to 1/0.01=100.
-quiet = Stop the highly informative progress reports.
OUTPUT DATASET
--------------
* For each distinct value a sub-brick is produced.
* The zero value will be first; after that, the values will appear in
increasing order.
* If '-lab_file' is used, then the sub-brick label for a given value's count
will be of the form 'value:label'; for example, '2013:rh.lingual'.
* For values NOT in the '-lab_file', the label will just be of the form 'value:'.
* For the first (value=0) sub-brick, the label will be '0:Other'.
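For example, a minimal sketch of a typical command (file names are
illustrative only), counting parcellation labels in a small sphere and
saving the overall histogram:
3dLocalHistog -nbhd 'SPHERE(3)' -lab_file labels.txt \
-hsave overall_histog.1D -prefix LocalHist \
parcellation+tlrc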
Author: RWCox - April 2013
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dLocalPV
Usage: 3dLocalPV [options] inputdataset
* You may want to use 3dDetrend before running this program,
or at least use the '-polort' option.
* This program is highly experimental. And slowish. Real slowish.
* Computes the SVD of the time series from a neighborhood of each
voxel. An intricate way of 'smoothing' 3D+time datasets, kind of, sort of.
* This is like 3dLocalSVD, except that the '-vproj' option doesn't
allow anything but 1 and 2 dimensional projection. This is because
3dLocalPV uses a special method to compute JUST the first 1 or 2
principal vectors -- faster than 3dLocalSVD, but less general.
Options:
-mask mset = restrict operations to this mask
-automask = create a mask from time series dataset
-prefix ppp = save SVD vector result into this new dataset
[default = 'LocalPV']
-prefix2 qqq = save second principal vector into this new dataset
[default = don't save it]
-evprefix ppp = save singular value at each voxel into this dataset
[default = don't save]
-input inputdataset = input time series dataset
-nbhd nnn = e.g., 'SPHERE(5)' 'TOHD(7)' etc.
-despike = remove time series spikes from input dataset
-polort p = detrending
-vnorm = normalize data vectors [strongly recommended]
-vproj [2] = project central data time series onto local SVD vector;
if followed by '2', then the central data time series
will be projected on the 2-dimensional subspace
spanned by the first 2 principal SVD vectors.
[default: just output principal singular vector]
[for 'smoothing' purposes, '-vnorm -vproj' is an idea]
Notes:
* On my Mac Pro, about 30% faster than 3dLocalSVD computing the same thing.
* If you're curious, the 'special method' used for the eigensolution is
a variant of matrix power iteration, called 'simultaneous iteration'.
* This method uses pseudo-random numbers to initialize the vector iterations.
If you wish to control that seed, set environment variable
AFNI_RANDOM_SEEDVAL to some nonzero number. Otherwise, a random seed will
be selected from the time, which means otherwise identical runs will give
slightly different results.
* By contrast, 3dLocalSVD uses EISPACK functions for eigensolution-izing.
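For example, a minimal sketch of a typical command (dataset names are
illustrative only), following the '-vnorm -vproj' suggestion above:
3dLocalPV -input errts+tlrc -automask \
-nbhd 'SPHERE(5)' -vnorm -vproj \
-prefix LocalPV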
=========================================================================
* This binary version of 3dLocalPV is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dLocalstat
++ 3dLocalstat: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: Emperor Zhark
Usage: 3dLocalstat [options] dataset
This program computes statistics at each voxel, based on a
local neighborhood of that voxel.
- The neighborhood is defined by the '-nbhd' option.
- Statistics to be calculated are defined by the '-stat' option(s).
OPTIONS
-------
-nbhd 'nnn' = The string 'nnn' defines the region around each
voxel that will be extracted for the statistics
calculation. The format of the 'nnn' string is:
* 'SPHERE(r)' where 'r' is the radius in mm;
the neighborhood is all voxels whose center-to-
center distance is less than or equal to 'r'.
** The distances are computed in 3 dimensions,
so a SPHERE(1) on a 1mm3 grid gives a 7 voxel-
neighborhood - the center voxel and the six
facing voxels, 4 in plane and 2 above and below.
A SPHERE(1.42) contains 19 voxels, the center voxel
with the 8 others in plane, and the 5 above and
below (all voxels sharing an edge with the center)
A SPHERE(1.74) contains 27 voxels, all voxels
sharing a face, edge or corner with the center
** A negative value for 'r' means that the region
is calculated using voxel indexes rather than
voxel dimensions; that is, the neighborhood
region is a "sphere" in voxel indexes of
"radius" abs(r).
* 'RECT(a,b,c)' is a rectangular block which
proceeds plus-or-minus 'a' mm in the x-direction,
'b' mm in the y-direction, and 'c' mm in the
z-direction. The correspondence between the
dataset xyz axes and the actual spatial orientation
can be determined by using program 3dinfo.
** Note that a,b,c are not the full dimensions
of the block. They are used as radii - effectively
half the dimension of a side. So if one wanted to
compute a 5-slice projection on a 1mm3 volume,
then a RECT(0,0,2) would be appropriate, and
the program would report 5 voxels used in the mask
Any dimension less than a voxel will avoid
voxels in that direction.
** A negative value for 'a' means that the region
extends plus-and-minus abs(a) voxels in the
x-direction, rather than plus-and-minus a mm.
Mutatis mutandum for negative 'b' and/or 'c'.
* 'RHDD(a)' where 'a' is the size parameter in mm;
this is Kepler's rhombic dodecahedron [volume=2*a^3].
* 'TOHD(a)' where 'a' is the size parameter in mm;
this is a truncated octahedron. [volume=4*a^3]
** This is the polyhedral shape that tiles space
and is the most 'sphere-like'.
* If no '-nbhd' option is given, the region extracted
will just be the voxel and its 6 nearest neighbors.
* Voxels not in the mask (if any) or outside the
dataset volume will not be used. This means that
different output voxels will have different numbers
of input voxels that went into calculating their
statistics. The 'num' statistic can be used to
get this count on a per-voxel basis, if you need it.
-stat sss = Compute the statistic named 'sss' on the values
extracted from the region around each voxel:
* mean = average of the values
* stdev = standard deviation
* var = variance (stdev*stdev)
* cvar = coefficient of variation = stdev/fabs(mean)
* median = median of the values
* osfilt = order statistics filter; similar to mean or median
(also in AFNI GUI Image window -> Disp -> Project)
* MAD = median absolute deviation
* min = minimum
* max = maximum
* absmax = maximum of the absolute values
* mconex = Michelson contrast of extrema:
|A-B|/(|A|+|B|), where A=max and B=min
* mode = mode
* nzmode = non-zero mode
* num = number of the values in the region:
with the use of -mask or -automask,
the size of the region around any given
voxel will vary; this option lets you
map that size. It may be useful if you
plan to compute a t-statistic (say) from
the mean and stdev outputs.
* filled = 1 or fillvalue if all voxels in neighborhood
are within mask
* unfilled = 1 or unfillvalue if not all voxels in neighborhood
are within mask
* hasmask = unfillvalue if neighborhood contains a specified
mask value
* hasmask2 = unfillvalue if neighborhood contains an alternate
mask value
* sum = sum of the values in the region
* FWHM = compute (like 3dFWHM) image smoothness
inside each voxel's neighborhood. Results
are in 3 sub-bricks: FWHMx, FWHMy, and FWHMz.
Places where an output is -1 are locations
where the FWHM value could not be computed
(e.g., outside the mask).
* FWHMbar= Compute just the average of the 3 FWHM values
(normally would NOT do this with FWHM also).
* perc:P0:P1:Pstep =
Compute percentiles between P0 and P1 with a
step of Pstep.
Default P1 is equal to P0 and default Pstep = 1
* rank = rank of the voxel's intensity
* frank = rank / number of voxels in neighborhood
* P2skew = Pearson's second skewness coefficient
3 * (mean - median) / stdev
* ALL = all of the above, in that order
(except for FWHMbar and perc).
* mMP2s = Exactly the same output as:
-stat median -stat MAD -stat P2skew
but it is a little faster
* mmMP2s = Exactly the same output as:
-stat mean -stat median -stat MAD -stat P2skew
* diffs = Compute differences between central voxel
and all neighbors. Values output are the
average difference, followed by the min and max
differences.
* list = Just output the voxel values in the neighborhood
The order in which the neighbors are listed
depends on the neighborhood selected. Only
SPHERE results in a neighborhood list sorted by
the distance from the center.
Regardless of the neighborhood however, the first
value should always be that of the central voxel.
* hist:MIN:MAX:N[:IGN] = Compute the histogram in the voxel's
neighborhood. You must specify the min, max, and
the number of bins in the histogram. You can also
ignore values outside the [min max] range by
setting IGN to 1. IGN = 0 by default.
The histograms are scaled by the number
of values that went into the histogram.
That would be the number of non-masked voxels
in the neighborhood if outliers are NOT
ignored (default).
For histograms of labeled datasets, use 3dLocalHistog
More than one '-stat' option can be used.
-mask mset = Read in dataset 'mset' and use the nonzero voxels
therein as a mask. Voxels NOT in the mask will
not be used in the neighborhood of any voxel. Also,
a voxel NOT in the mask will have its statistic(s)
computed as zero (0) -- usually (cf. supra).
-automask = Compute the mask as in program 3dAutomask.
-mask and -automask are mutually exclusive: that is,
you can only specify one mask.
-use_nonmask = Just above, I said that voxels NOT in the mask will
not have their local statistics computed. This option
will make it so that voxels not in the mask WILL have
their local statistics computed from all voxels in
their neighborhood that ARE in the mask.
* You could use '-use_nonmask' to compute the average
local white matter time series, for example, even at
non-WM voxels.
-prefix ppp = Use string 'ppp' as the prefix for the output dataset.
The output dataset is normally stored as floats.
-datum type = Coerce the output data to be stored as the given type,
which may be byte, short, or float.
Default is float
-label_ext LABEXT = Append '.LABEXT' to each sub-brick label
-reduce_grid Rx [Ry Rz] = Compute output on a grid that is
reduced by a factor of Rx Ry Rz in
the X, Y, and Z directions of the
input dset. This option speeds up
computations at the expense of
resolution. You should only use it
when the nbhd is quite large with
respect to the input's resolution,
and the resultant stats are expected
to be smooth.
You can either set Rx, or Rx Ry and Rz.
If you only specify Rx the same value
is applied to Ry and Rz.
-reduce_restore_grid Rx [Ry Rz] = Like reduce_grid, but also resample
output back to input grid.
-reduce_max_vox MAX_VOX = Like -reduce_restore_grid, but automatically
set Rx Ry Rz so that the computation grid is
at a resolution of nbhd/MAX_VOX voxels.
-grid_rmode RESAM = Interpolant to use when resampling the output with
reduce_restore_grid option. The resampling method
string RESAM should come from the set
{'NN', 'Li', 'Cu', 'Bk'}. These stand for
'Nearest Neighbor', 'Linear', 'Cubic'
and 'Blocky' interpolation, respectively.
Default is Linear
-quiet = Stop the highly informative progress reports.
-verb = a little more verbose.
-proceed_small_N = Do not crash if neighborhood is too small for
certain estimates.
-fillvalue x.xx = value used for filled statistic, default=1
-unfillvalue x.xx = value used for unfilled statistic, default=1
-maskvalue x.xx = value searched for with has_mask option
-maskvalue2 x.xx = alternate value for has_mask2 option
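For example, a minimal sketch of a typical command (dataset names are
illustrative only), computing the local mean and standard deviation
within a 6 mm radius sphere:
3dLocalstat -nbhd 'SPHERE(6)' -stat mean -stat stdev \
-mask mask+orig -prefix LocalMeanStdev \
anat+orig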
Author: RWCox - August 2005. Instigator: ZSSaad.
=========================================================================
* This binary version of 3dLocalstat is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dLocalSVD
Usage: 3dLocalSVD [options] inputdataset
* You may want to use 3dDetrend before running this program,
or at least use the '-polort' option.
* This program is highly experimental. And slowish.
* Computes the SVD of the time series from a neighborhood of each
voxel. An intricate way of 'smoothing' 3D+time datasets,
in some sense, maybe.
* For most purposes, program 3dLocalPV does the same thing, but faster.
The only reason to use 3dLocalSVD is if you are using -vproj
with the projection dimension ndim > 2.
Options:
-mask mset = restrict operations to this mask
-automask = create a mask from time series dataset
-prefix ppp = save SVD vector result into this new dataset
-input inputdataset = input time series dataset
-nbhd nnn = e.g., 'SPHERE(5)' 'TOHD(7)' etc.
-polort p [+] = detrending ['+' means to add trend back]
-vnorm = normalize data vectors
[strongly recommended]
-vproj [ndim] = project central data time series onto local SVD subspace
of dimension 'ndim'
[default: just output principal singular vector]
[for 'smoothing' purposes, '-vnorm -vproj 2' is a good idea]
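For example, a minimal sketch of a typical command (dataset names are
illustrative only), following the '-vnorm -vproj 2' suggestion above:
3dLocalSVD -input errts+tlrc -automask \
-nbhd 'SPHERE(5)' -vnorm -vproj 2 \
-prefix LocalSVD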
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dLocalUnifize
-------------------------------------------------------------------------
OVERVIEW ~1~
This program takes an input and generates a simple "unifized" output
volume. It estimates the median in the local neighborhood of each
voxel, and uses that to scale each voxel's brightness. The result is
a new dataset of brightness of order 1, which still has the
interesting structure(s) present in the original.
This program's output looks very useful to help with dataset alignment
(esp. EPI-to-anatomical) in a wide array of cases.
ver : 1.2
date : Jan 29, 2024
auth : PA Taylor (SSCC, NIMH, NIH)
USAGE ~1~
This program is generally run as:
3dLocalUnifize [options] -prefix DSET_OUT -input DSET_IN
where the following options exist:
-input DSET_IN :(req) input dataset
-prefix DSET_OUT :(req) output dataset name, including path
-wdir_name WD :name of temporary working directory, which
should not contain any path information---it will be
created in the same directory as the final dataset
is created
(def: __wdir_LocalUni_, plus a random alphanumeric str)
-echo :run this program very verbosely (def: don't do so)
-no_clean :do not remove the working directory (def: remove it)
... and the following are 'tinkering' options, likely not needed in
most cases:
-local_rad LR :the spherical neighborhood's radius for the
3dLocalStat step (def: -3)
-local_perc LP :the percentile used in the 3dLocalStat step,
generating the scaling volume
(def: 50)
-local_mask LM :provide the masking option to be used in the
3dLocalStat step, which should be enclosed in
quotes for passing along to the internal
program call. So, to use a pre-existing mask,
you might call this option like:
-local_mask "-mask my_mask.nii.gz"
To remove any masking, put the special keyword
"None" as the option value.
(def: "-automask")
-filter_thr FT :put a ceiling on values in the final, scaled dataset,
whose values should be of order 1; setting FT to be a
value <=0 turns off this final filtering
(def: 1.5)
NOTES ~1~
This program is designed to not need a lot of tinkering with
options, such as the '-local_* ..' ones. In most cases, the default
scaling will be useful.
EXAMPLES ~1~
1. Basic local unifizing:
3dLocalUnifize \
-prefix vr_base_LU \
-input vr_base_min_outlier+orig.HEAD
2. Same as above, without masking:
3dLocalUnifize \
-prefix vr_base_LU_FOV \
-input vr_base_min_outlier+orig.HEAD \
-local_mask None
AFNI program: 3dLombScargle
++ Reading in options.
Make a periodogram or amplitude-spectrum of a time series that has a
non-constant sampling rate. The spectra output by this program are
'one-sided', so that they represent the half-amplitude or power
associated with a frequency, and they would require a factor of 2 to
account for both the right- and left-traveling frequency solutions
of the Fourier transform (see below 'OUTPUT' and 'NOTE').
Of particular interest is the application of this functionality to
resting state time series that may have been censored. The theory behind
the mathematics and algorithms of this is due to separate groups, mainly
in the realm of astrophysical applications: Vaníček (1969, 1971),
Lomb (1976), Scargle (1982), and Press & Rybicki (1989). Shoutout to them.
This particular implementation is due to Press & Rybicki (1989), by
essentially translating their published Fortran implementation into C,
while using GSL for the FFT, instead of NR's realft(), and making
several adjustments based on that.
The Lomb-Scargle adaption was done with fairly minimal changes here by
PA Taylor (v1.4, June, 2016).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ USAGE:
Input a 4D volumetric time series (BRIK/HEAD or NIFTI data set)
as well as an optional 1D file of 0s and 1s that defines which points
to censor out (i.e., each 0 represents a point/volume to censor out);
if no 1D file is input, the program will check for volumes that are
uniformly zero and consider those to be censored.
The output is a LS periodogram, describing spectral magnitudes
up to some 'maximum frequency'-- the default max here is what
the Nyquist frequency of the time series *would have been* without
any censoring. (Interestingly, this analysis can actually be
legitimately applied in cases to estimate frequency content >Nyquist.
Wow!)
The frequency spectrum will be in the range [df, f_N], where:
df = 1/T, and T is the total duration of the uncensored time series;
f_N = 1/(2*dt), and dt is the sampling time (i.e., the TR);
and the interval of frequencies is also df.
These ranges and step sizes should be *independent* of the censoring
which is a nice property of the Lomb-Scargle-iness.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ OUTPUT:
1) PREFIX_time.1D :a 1D file of the sampled time points (in units of
seconds) of the analyzed (and possibly censored)
data set.
2) PREFIX_freq.1D :a 1D file of the frequency sample points (in units
of 1/seconds) of the output periodogram/spectrum
data set.
3) PREFIX_amp+orig or PREFIX_pow+orig :volumetric data set containing an
LS-derived amplitude spectrum (by default, named 'amp') or a
power spectrum (see '-out_pow_spec', named 'pow'),
one per voxel.
Please note that the output amplitude and power
spectra are 'one-sided', to represent the
*half* amplitude or power of a given frequency
(see the following note).
+ A NOTE ABOUT Fourier+Parseval matters (please forgive the awkward
formatting):
In the formulation used here, for a time series x[n] of length N,
the periodogram value S[k] is related to the amplitude value |X[k]|:
(1) S[k] = (|X[k]|)**2,
for each k-th harmonic.
Parseval's theorem relates time fluctuations to spectral amplitudes,
stating that (for real time series with zero mean):
(2) sum_n{ x[n]**2 } = (1/N) * sum_k{ |X[k]|**2 },
= (1/N) * sum_k{ S[k] },
where n=0,1,..,N-1 and k=0,1,..,N-1 (NB: X[0]=0, for zero-mean
series). The LHS is essentially the variance of the time series
(times N-1). The above is derived from Fourier transform maths, and
the Lomb-Scargle spectra are approximations to Fourier, so the above
can be expected to approximately hold, if all goes well.
Another Fourier-related result is that for real, discrete time series,
the spectral amplitudes/power values are symmetric and periodic in N.
Therefore, |X[k]| = |X[-k]| = |X[N-k-1]| (in zero-base array
counting);
the distinction between positive- and negative-indexed frequencies
can be thought of as signifying right- and left-traveling waves, which
both contribute to the total power of a specific frequency.
The upshot is that one could write the Parseval formula as:
(3) sum_n{ x[n]**2 } = (2/N) * sum_l{ |X[l]|**2 },
= (2/N) * sum_l{ S[l] },
where n=0,1,..,N-1 and l=0,1,..,(N/2)-1 (note the factor of 2 now
appearing on the RHS relations). These symmetries/considerations
are the reason why ~N/2 frequency values are output here (we assume
that only real-valued time series are input), without any loss of
information.
Additionally, with a view toward expressing the overall amplitude
or power of a given frequency, which many people might want to use to
estimate spectral 'functional connectivity' parameters such as ALFF,
fALFF, RSFA, etc. (using, for example, 3dAmptoRSFC), we therefore
note that the *total* amplitude or power of a given frequency would
be:
A[k] = 2*|X[k]|
P[k] = 2*S[k] = 2*|X[k]|**2 = 0.5*A[k]**2
instead of just that of the left/right traveling part. These types of
quantities (A and P) are also referred to as 'two-sided' spectra. The
resulting Parseval relation could then be written:
(4) sum_n{ x[n]**2 } = (1/(2N)) * sum_l{ A[l]**2 },
= (1/N) * sum_l{ P[l] },
where n=0,1,..,N-1 and l=0,1,..,(N/2)-1. Somehow, it just seems easier
to output the one-sided values, X and S, so that the Parsevalian
summation rules look more similar.
With all of that in mind, the 3dLombScargle results are output as
follows. For amplitudes, the following approx. Parsevalian relation
should hold between the 'holey' time series x[m] of M points and
the frequency series Y[l] of L~M/2 points (where {|Y[l]|} approaches
the Fourier amplitudes {|X[l]|} as the number of censored points
decreases and M->N):
(5) sum_m{ x[m]**2 } = (1/L) * sum_l{ Y[l]**2 },
where m=0,1,..,M-1 and l=0,1,..,L-1. For the power spectrum T[l]
of L~M/2 values, then:
(6) sum_m{ x[m]**2 } = (1/L) * sum_l{ T[l] }
for the same ranges of summations.
So, please consider that when using the outputs here. 3dAmpToRSFC
is prepared for this when calculating spectral parameters (from
amplitudes).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMAND: 3dLombScargle -prefix PREFIX -inset FILE \
{-censor_1D C1D} {-censor_str CSTR} \
{-mask MASK} {-out_pow_spec} \
{-nyq_mult N2} {-nifti}
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ RUNNING:
-prefix PREFIX :output prefix name for data volume, time point 1D file
and frequency 1D file.
-inset FILE :time series of volumes, a 4D volumetric data set.
-censor_1D C1D :single row or column of 1s (keep) and 0s (censored)
describing which volumes of FILE are kept in the
sampling and which are censored out, respectively. The
length of the list of numbers must be of the
same length as the number of volumes in FILE.
If not entered, then the program will look for subbricks
of all-zeros and assume those are censored out.
-censor_str CSTR :AFNI-style selector string of volumes to *keep* in
the analysis. Such as:
'[0..4,7,10..$]'
Why we refer to it as a 'censor string' when it is
really the list of volumes to keep... well, it made
sense at the time. Future historians can duel with
ink about it.
-mask MASK :optional, mask of volume to analyze; additionally, any
voxel with uniformly zero values across time will
produce a zero-spectrum.
-out_pow_spec :switch to output the power spectrum (periodogram) of the freqs
instead of the amplitude spectrum. In the formulation used
here, for a time series of length N, the power spectral
value S is related to the amplitude value X as:
S = (X)**2. (Without this opt, default output is the
amplitude spectrum.)
-nyq_mult N2 :L-S periodograms can include frequencies above what
would typically be considered Nyquist (here defined
as:
f_N = 0.5*(number of samples)/(total time interval)
By default, the maximum frequency will be what
f_N *would* have been if no censoring of points had
occurred. (This makes it easier to compare L-S spectra
across a group with the same scan protocol, even if
there are slight differences in censoring, per subject.)
Acceptable values are >0. (For those reading the
algorithm papers, this sets the 'hifac' parameter.)
If you don't have a good reason for changing this,
dooon't change it!
-nifti :switch to output *.nii.gz volume file
(default format is BRIK/HEAD).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3dLombScargle -prefix LSout -inset TimeSeries.nii.gz \
-mask mask.nii.gz -censor_1D censor_list.txt
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
____________________________________________________________________________
AFNI program: 3dLRflip
Usage: 3dLRflip [-LR|-AP|-IS|-X|-Y|-Z] [-prefix ppp] dset dset dset ...
Flips the rows of a dataset along one of the three axes.
* This program is intended to be used in the case where you
(or some other loser) constructed a dataset with one of the
directions incorrectly labeled.
* That is, it is to help you patch up a mistake in the dataset.
It has no other purpose.
Optional options:
-----------------
-LR | -AP | -IS: Axis about which to flip the data
Default is -LR.
or
-X | -Y | -Z: Flip about 1st, 2nd or 3rd directions,
respectively.
Note: Only one of these 6 options can be used at a time.
-prefix ppp: Prefix to use for output. If you have
multiple datasets as input, you are better
off letting the program choose a prefix for
each output.
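For example, a minimal sketch (dataset name is illustrative only) that
flips a dataset about the left-right axis:
3dLRflip -LR -prefix anat_LRflip anat+orig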
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dLSS
Usage: 3dLSS [options]
** Least-Squares-Sum (LSS) estimation from a -stim_times_IM matrix, as **
* described in the paper: *
* JA Mumford et al. Deconvolving BOLD activation in event-related *
* designs for multivoxel pattern classification analyses. *
* NeuroImage (2011) http://dx.doi.org/10.1016/j.neuroimage.2011.08.076 *
* LSS regression was first mentioned in this poster: *
* B Turner. A comparison of methods for the use of pattern classification *
* on rapid event-related fMRI data. Annual Meeting of the Society for *
** Neuroscience, San Diego, CA (2010). **
The method implemented here can be described (by me) as a 'pull one out'
approach. That is, for a single trial in the list of trials, its individual
regressor is pulled out and kept separate, and all the other trials are
combined to give another regressor - so that if there are N trials, only
2 regressors (instead of N) are used for the response model. This 'pull out'
approach is repeated for each single trial separately (thus doing N separate
regressions), which gives a separate response amplitude (beta coefficient)
for each trial. See the 'Caveats' section below for more information.
----------------------------------------
Options (the first 'option' is mandatory)
----------------------------------------
-matrix mmm = Read the matrix 'mmm', which should have been
output from 3dDeconvolve via the '-x1D' option, and
should have included exactly one '-stim_times_IM' option.
-->> The 3dLSS algorithm requires that at least 2 different
stimulus times be given in the -stim_times_IM option.
If you have only 1 stim time, this program will not run.
In such a case, the normal '-bucket' output from 3dDeconvolve
(or '-Rbuck' output from 3dREMLfit) will have the single
beta for the single stim time.
-input ddd = Read time series dataset 'ddd'
** OR **
-nodata = Just compute the estimator matrix -- to be saved with '-save1D'.
* The number of time points is taken from the matrix header.
* If neither '-input' nor '-nodata' is given, '-nodata' is used.
* If '-input' is used, the number of time points in the dataset
must match the number of time points in the matrix.
-mask MMM = Read dataset 'MMM' as a mask for the input; voxels outside
the mask will not be fit by the regression model.
-automask = If you don't know what this does by now, please don't use
this program.
* Neither of these options has any meaning for '-nodata'.
* If '-input' is used and neither of these options is given,
then all voxels will be processed.
-prefix ppp = Prefix name for the output dataset;
this dataset will contain ONLY the LSS estimates of the
beta weights for the '-stim_times_IM' stimuli.
* If you don't use '-prefix', then the prefix is 'LSSout'.
-save1D qqq = Save the estimator vectors (cf. infra) to a 1D formatted
file named 'qqq'. Each column of this file will be
one estimator vector, the same length as the input
dataset timeseries (after censoring, if any).
* The j-th LSS beta estimate is the dot product of the j-th
column of this file with the data time series (duly censored).
* If you don't use '-save1D', then this file is not saved.
-verb = Write out progress reports, for fun fun fun in the sun sun sun.
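For instance, a typical run on real data might look something like the
following (all file names here are hypothetical):
3dLSS -matrix X.xmat.1D -input epi_run1+orig \
-mask mask+orig -prefix LSSbetas -save1D LSS_estimators.1D
where X.xmat.1D came from a 3dDeconvolve run that used exactly one
'-stim_times_IM' option (plus the '-x1D' option).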
-------------------
Method == EQUATIONS
-------------------
3dLSS is fast, since it uses a rank-1 bordering technique to pre-compute
the estimator for each separate stimulus regressor from the fixed part of
the matrix, then applies these estimators to each time series in the input
dataset by a simple dot product. If you wish to peruse the equations, see
https://afni.nimh.nih.gov/pub/dist/doc/misc/3dLSS/3dLSS_mathnotes.pdf
The estimator for each separate beta (as described at '-save1D') is the
N-vector which, when dotted into the N-vector of a voxel's time series,
gives the LSS beta estimate for that voxel.
---------------------
Caveats == READ THIS!
---------------------
The LSS method produces estimates that tend to have smaller variance than the
LSA method that 3dDeconvolve would produce, but the LSS estimates have greater
bias -- in principle, the LSA method is unbiased if the noise is symmetrically
distributed. For the purpose of using the beta estimates for MVPA (e.g., 3dsvm),
the bias may not matter much and the variance reduction may help improve the
classification, as illustrated in the Mumford paper. For other purposes, the
trade-off might well go the other way -- for ANY application of LSS vs. LSA,
you need to assess the situation before deciding -- probably by the judicious
use of simulation (as in the Mumford paper).
The bias in the estimate of any given beta is essentially due to the fact
that for any given beta, LSS doesn't use an estimator vector that is orthogonal
to the regressors for other coefficients -- that is what LSA does, using the
pseudo-inverse. Typically, any given LSS-estimated beta will include a mixture
of the betas from neighboring stimuli -- for example,
beta8{LSS} = beta8{LSA} + 0.3*beta7{LSA} - 0.1*beta9{LSA} + smaller stuff
where the weights of the neighbors are larger if the corresponding stimuli
are closer (so the regressors overlap more).
The LSS betas are NOT biased by including any betas that aren't from the
-stim_times_IM regressors -- the LSS estimator vectors (what '-save1D' gives)
are orthogonal to those 'nuisance' regression matrix columns.
To investigate these weighting and orthogonality issues yourself, you can
multiply the LSS estimator vectors into the 3dDeconvolve regression matrix
and examine the result -- in the ideal world, the matrix would be all 0
except for 1s on the diagonal corresponding to the -stim_times_IM betas. This
calculation can be done in AFNI with commands something like the 'toy' example
below, which has only 6 stimulus times:
3dDeconvolve -nodata 50 1.0 -polort 1 -x1D R.xmat.1D -x1D_stop -num_stimts 1 \
-stim_times_IM 1 '1D: 12.7 16.6 20.1 26.9 30.5 36.5' 'BLOCK(0.5,1)'
3dLSS -verb -nodata -matrix R.xmat.1D -save1D R.LSS.1D
1dmatcalc '&read(R.xmat.1D) &transp &read(R.LSS.1D) &mult &write(R.mult.1D)'
1dplot R.mult.1D &
1dgrayplot R.mult.1D &
* 3dDeconvolve is used to set up the matrix into file R.xmat.1D
* 3dLSS is used to compute the LSS estimator vectors into file R.LSS.1D
* 1dmatcalc is used to multiply the '-save1D' matrix into the regression matrix:
[R.mult.1D] = [R.xmat.1D]' [R.LSS.1D]
where [x] = matrix made from columns of numbers in file x, and ' = transpose.
* 1dplot and 1dgrayplot are used to display the results.
* The j-th column in the R.mult.1D file is the set of weights of the true betas
that influence the estimated j-th LSS beta.
* e.g., Note that the 4th and 5th stimuli are close in time (3.6 s), and that
the result is that the LSS estimator for the 4th and 5th beta weights mix up
the 'true' 4th, 5th, and 6th betas. For example, looking at the 4th column
of R.mult.1D, we see that
beta4{LSS} = beta4{LSA} + 0.33*beta5{LSA} - 0.27*beta6{LSA} + small stuff
* The sum of each column of R.mult.1D is 1 (e.g., run '1dsum R.mult.1D'),
and the diagonal elements are also 1, showing that the j-th LSS beta is
equal to the j-th LSA beta plus a weighted sum of the other LSA betas, where
those other weights add up to zero.
--------------------------------------------------------------------------
-- RWCox - Dec 2011 - Compute fast, abend early, leave a pretty dataset --
--------------------------------------------------------------------------
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dMannWhitney
++ 3dMannWhitney: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
This program performs the nonparametric Mann-Whitney two-sample test.
Usage:
3dMannWhitney
-dset 1 filename data set for X observations
. . . . . .
-dset 1 filename data set for X observations
-dset 2 filename data set for Y observations
. . . . . .
-dset 2 filename data set for Y observations
[-workmem mega] number of megabytes of RAM to use
for statistical workspace
[-voxel num] screen output for voxel # num
-out prefixname estimated population delta and
Wilcoxon-Mann-Whitney statistics
written to file prefixname
N.B.: For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used, e.g.:
-dset 2 'fred+orig[3]'
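For instance, a two-sample comparison could be set up as below (the
dataset names are only illustrative):
3dMannWhitney \
-dset 1 subjA+orig'[3]' \
-dset 1 subjB+orig'[3]' \
-dset 2 subjC+orig'[3]' \
-dset 2 subjD+orig'[3]' \
-out MannWhit_result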
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dmaskave
++ 3dmaskave: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
Usage: 3dmaskave [options] inputdataset
Computes average of all voxels in the input dataset
which satisfy the criterion in the options list.
If no options are given, then all voxels are included.
----------------------------------------------------------------
Examples:
1. compute the average timeseries in epi_r1+orig, over voxels
that are set (any non-zero value) in the dataset, ROI+orig:
3dmaskave -mask ROI+orig epi_r1+orig
2. restrict the ROI to values of 3 or 4, and save (redirect)
the output to the text file run1_roi_34.txt:
3dmaskave -mask ROI+orig -quiet -mrange 3 4 \
epi_r1+orig > run1_roi_34.txt
3. Extract the time series from a single voxel with given
spatial indexes (e.g., for use with 3dTcorr1D):
3dmaskave -quiet -ibox 40 30 20 epi_r1+orig > r1_40_30_20.1D
----------------------------------------------------------------
Options:
-mask mset Means to use the dataset 'mset' as a mask:
Only voxels with nonzero values in 'mset'
will be averaged from 'dataset'. Note
that the mask dataset and the input dataset
must have the same number of voxels.
SPECIAL CASE: If 'mset' is the string 'SELF',
then the input dataset will be
used to mask itself. That is,
only nonzero voxels from the
#miv sub-brick will be used.
-mindex miv Means to use sub-brick #'miv' from the mask
dataset. If not given, miv=0.
-mrange a b Means to further restrict the voxels from
'mset' so that only those mask values
between 'a' and 'b' (inclusive) will
be used. If this option is not given,
all nonzero values from 'mset' are used.
Note that if a voxel is zero in 'mset', then
it won't be included, even if a < 0 < b.
[-mindex and -mrange are old options that predate]
[the introduction of the sub-brick selector '[]' ]
[and the sub-range value selector '<>' to AFNI. ]
-xbox x y z } These options are the same as in
-dbox x y z } program 3dmaskdump:
-nbox x y z } They create a mask by putting down boxes
-ibox x y z } or balls (filled spheres) at the specified
-xball x y z r } locations. See the output of
-dball x y z r } 3dmaskdump -help
-nball x y z r } for the gruesome and tedious details.
https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dmaskdump.html
-dindex div Means to use sub-brick #'div' from the inputdataset.
If not given, all sub-bricks will be processed.
-drange a b Means to only include voxels from the inputdataset whose
values fall in the range 'a' to 'b' (inclusive).
Otherwise, all voxel values are included.
[-dindex and -drange are old options that predate]
[the introduction of the sub-brick selector '[]' ]
[and the sub-range value selector '<>' to AFNI. ]
-slices p q Means to include only voxels from the inputdataset
whose slice numbers are in the range 'p' to 'q'
(inclusive). Slice numbers range from 0 to
NZ-1, where NZ can be determined from the output
of program 3dinfo. The default is to include
data from all slices.
[There is no provision for geometrical voxel]
[selection except in the slice (z) direction]
-sigma Means to compute the standard deviation in addition
to the mean.
-sum Means to compute the sum instead of the mean.
-sumsq Means to compute the sum of squares instead of the mean.
-enorm Means to compute the Euclidean norm instead of the mean.
This is the sqrt() of the sumsq result.
-median Means to compute the median instead of the mean.
-max Means to compute the max instead of the mean.
-min Means to compute the min instead of the mean.
[-sigma is ignored with -sum, -median, -max, or -min.]
[the last given of -sum, -median, -max, or -min wins.]
-perc XX Means to compute the XX-th percentile value (min=0 max=100).
XX should be an integer from 0 to 100.
-dump Means to print out all the voxel values that
go into the result.
-udump Means to print out all the voxel values that
go into the average, UNSCALED by any internal
factors.
N.B.: the scale factors for a sub-brick
can be found using program 3dinfo.
-indump Means to print out the voxel indexes (i,j,k) for
each dumped voxel. Has no effect if -dump
or -udump is not also used.
N.B.: if nx,ny,nz are the number of voxels in
each direction, then the array offset
in the brick corresponding to (i,j,k)
is i+j*nx+k*nx*ny.
-q or
-quiet Means to print only the minimal numerical result(s).
This is useful if you want to create a *.1D file,
without any extra text; for example:
533.814 [18908 voxels] == 'normal' output
533.814 == 'quiet' output
The output is printed to stdout (the terminal), and can be
saved to a file using the usual redirection operation '>'.
Or you can do fun stuff like
3dmaskave -q -mask Mfile+orig timefile+orig | 1dplot -stdin -nopush
to pipe the output of 3dmaskave into 1dplot for graphing.
-- Author: RWCox
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dmaskdump
Usage: 3dmaskdump [options] dataset dataset ...
Writes to an ASCII file values from the input datasets
which satisfy the mask criteria given in the options.
If no options are given, then all voxels are included.
This might result in a GIGANTIC output file.
Options:
-mask mset Means to use the dataset 'mset' as a mask:
Only voxels with nonzero values in 'mset'
will be printed from 'dataset'. Note
that the mask dataset and the input dataset
must have the same number of voxels.
-mrange a b Means to further restrict the voxels from
'mset' so that only those mask values
between 'a' and 'b' (inclusive) will
be used. If this option is not given,
all nonzero values from 'mset' are used.
Note that if a voxel is zero in 'mset', then
it won't be included, even if a < 0 < b.
-index Means to write out the dataset index values.
-noijk Means not to write out the i,j,k values.
-xyz Means to write the x,y,z coordinates from
the 1st input dataset at the start of each
output line. These coordinates are in
the 'RAI' (DICOM) order.
-o fname Means to write output to file 'fname'.
[default = stdout, which you won't like]
-cmask 'opts' Means to execute the options enclosed in single
quotes as a 3dcalc-like program, and produce
a mask from the resulting 3D brick.
Examples:
-cmask '-a fred+orig[7] -b zork+orig[3] -expr step(a-b)'
produces a mask that is nonzero only where
the 7th sub-brick of fred+orig is larger than
the 3rd sub-brick of zork+orig.
-cmask '-a fred+orig -expr 1-bool(k-7)'
produces a mask that is nonzero only in the
7th slice (k=7); combined with -mask, you
could use this to extract just selected voxels
from particular slice(s).
Notes: * You can use both -mask and -cmask in the same
run - in this case, only voxels present in
both masks will be dumped.
* Only single sub-brick calculations can be
used in the 3dcalc-like calculations -
if you input a multi-brick dataset here,
without using a sub-brick index, then only
its 0th sub-brick will be used.
* Do not use quotes inside the 'opts' string!
-xbox x y z Means to put a 'mask' down at the dataset (not DICOM)
coordinates of 'x y z' mm.
Notes: * By default, this box is 1 voxel wide in each direction,
rounding to the closest voxel center to the given single
coordinate.
Alternatively, one can specify a range of coordinates
using colon ':' as a separator; for example:
-xbox 22:27 31:33 44
means a box from (x,y,z)=(22,31,44) to (27,33,44).
Use of the colon makes the range strict, meaning voxels
outside the exact range will be omitted. Since 44 is
not specified with a range, the closest z coordinate
to 44 is used, while the x and y coordinates are strict.
* Dataset coordinates are NOT the coordinates you
typically see in AFNI's main controller top left corner.
Those coordinates are typically in either RAI/DICOM order
or in LPI/SPM order and should be used with -dbox and
-nbox, respectively.
-dbox x y z Means the same as -xbox, but the coordinates are in
RAI/DICOM order (+x=Left, +y=Posterior, +z=Superior).
If your AFNI environment variable AFNI_ORIENT is set to
RAI, these coordinates correspond to those you'd enter
into the 'Jump to (xyz)' control in AFNI, and to
those output by 3dclust.
NOTE: It is possible to make AFNI and/or 3dclust output
coordinates in an order different from the one specified
by AFNI_ORIENT, but you'd have to work hard on that.
In any case, the order is almost always specified along
with the coordinates. If you see RAI/DICOM, then use
-dbox. If you see LPI/SPM then use -nbox.
-nbox x y z Means the same as -xbox, but the coordinates are in
LPI/SPM or 'neuroscience' order where the signs of the
x and y coordinates are reversed relative to RAI/DICOM.
(+x=Right, +y=Anterior, +z=Superior)
-ibox i j k Means to put a 'mask' down at the voxel indexes
given by 'i j k'. By default, this picks out
just 1 voxel. Again, you can use a ':' to specify
a range (now in voxels) of locations.
Notes: * Boxes are cumulative; that is, if you specify more
than 1 box, you'll get more than one region.
* If a -mask and/or -cmask option is used, then
the INTERSECTION of the boxes with these masks
determines which voxels are output; that is,
a voxel must be inside some box AND inside the
mask in order to be selected for output.
* If boxes select more than 1 voxel, the output lines
are NOT necessarily in the order of the options on
the command line.
* Coordinates (for -xbox, -dbox, and -nbox) are relative
to the first dataset on the command line.
* It may be helpful to slightly pad boxes, to be sure they
contain the desired voxel centers.
-xball x y z r Means to put a ball (sphere) mask down at dataset
coordinates (x,y,z) with radius r.
-dball x y z r Same, but (x,y,z) are in RAI/DICOM order.
-nball x y z r Same, but (x,y,z) are in LPI/SPM order.
Notes: * The combined (set UNION) of all ball and/or box masks
is created first. Then, if a -mask and/or -cmask
option was used, then the ball+box mask will be
INTERSECTED with the existing mask.
* Balls not centered over voxels, or those applied to
anisotropic volumes, may not appear symmetric.
* Consider slight padding to handle truncation.
-nozero Means to skip output of any voxel where all the
data values are zero.
-n_rand N_RAND Means to keep only N_RAND randomly selected
voxels from what would have been the output.
-n_randseed SEED Seed the random number generator with SEED,
instead of the default seed of 1234
-niml name Means to output data in the XML/NIML format that
is compatible with input back to AFNI via
the READ_NIML_FILE command.
* 'name' is the 'target_name' for the NIML header
field, which is the name that will be assigned
to the dataset when it is sent into AFNI.
* Also implies '-noijk' and '-xyz' and '-nozero'.
-quiet Means not to print progress messages to stderr.
Inputs after the last option are datasets whose values you
want to be dumped out. These datasets (and the mask) can
use the sub-brick selection mechanism (described in the
output of '3dcalc -help') to choose which values you get.
Each selected voxel gets one line of output:
i j k val val val ....
where (i,j,k) = 3D index of voxel in the dataset arrays,
and val = the actual voxel value. Note that if you want
the mask value to be output, you have to include that
dataset in the dataset input list again, after you use
it in the '-mask' option.
* To eliminate the 'i j k' columns, use the '-noijk' option.
* To add spatial coordinate columns, use the '-xyz' option.
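As a (hypothetical) illustration of the mask-value note above, the command
3dmaskdump -mask ROI+orig -o roi_dump.txt ROI+orig epi_r1+orig
writes one line per masked voxel containing the (i,j,k) indexes, the ROI
value, and then the EPI value(s) at that voxel.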
N.B.: This program doesn't work with complex-valued datasets!
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dmaskSVD
Usage: 3dmaskSVD [options] inputdataset
Author: Zhark the Gloriously Singular
* Computes the principal singular vector of the time series
vectors extracted from the input dataset over the input mask.
++ You can use the '-sval' option to change which singular
vectors are output.
* The sign of the output vector is chosen so that the average
of arctanh(correlation coefficient) over all input data
vectors (from the mask) is positive.
* The output vector is normalized: the sum of its components
squared is 1.
* You probably want to use 3dDetrend (or something similar) first,
to get rid of annoying artifacts, such as motion, breathing,
dark matter interactions with the brain, etc.
++ If you are lazy scum like Zhark, you might be able to get
away with using the '-polort' option.
++ In particular, if your data time series has a nonzero mean,
then you probably want at least '-polort 0' to remove the
mean, otherwise you'll pretty much just get a constant
time series as the principal singular vector!
* An alternative to this program would be 3dmaskdump followed
by 1dsvd, which could give you all the singular vectors you
could ever want, and much more -- enough to confuse you for days.
++ In particular, although you COULD input a 1D file into
3dmaskSVD, the 1dsvd program would make much more sense.
* This program will be pretty slow if there are over about 2000
voxels in the mask. It could be made more efficient for
such cases, but you'll have to give Zhark some 'incentive'.
* Result vector goes to stdout. Redirect per your pleasures and needs.
* Also see program 3dLocalSVD if you want to compute the principal
singular time series vector from a neighborhood of EACH voxel.
++ (Which is a pretty slow operation!)
* http://en.wikipedia.org/wiki/Singular_value_decomposition
-------
Options:
-------
-vnorm = L2 normalize all time series before SVD [recommended!]
-sval a = output singular vectors 0 .. a [default a=0 = first one only]
-mask mset = define the mask [default is entire dataset == slow!]
-automask = you'll have to guess what this option does
-polort p = if you are lazy and didn't run 3dDetrend (like Zhark)
-bpass L H = bandpass [mutually exclusive with -polort]
-ort xx.1D = time series to remove from the data before SVD-ization
++ You can give more than 1 '-ort' option
++ 'xx.1D' can contain more than 1 column
-input ddd = alternative way to give the input dataset name
-------
Example:
-------
You have a mask dataset with discrete values 1, 2, ... 77 indicating
some ROIs; you want to get the SVD from each ROI's time series separately,
and then put these into 1 big 77 column .1D file. You can do this using
a csh shell script like the one below:
# Compute the individual SVD vectors
foreach mm ( `count_afni 1 77` )
3dmaskSVD -vnorm -mask mymask+orig"<${mm}..${mm}>" epi+orig > qvec${mm}.1D
end
# Glue them together into 1 big file, then delete the individual files
1dcat qvec*.1D > allvec.1D
/bin/rm -f qvec*.1D
# Plot the results to a JPEG file, then compute their correlation matrix
1dplot -one -nopush -jpg allvec.jpg allvec.1D
1ddot -terse allvec.1D > allvec_COR.1D
[[ If you use the bash shell, you'll have to figure out the syntax ]]
[[ yourself. Zhark has no sympathy for you bash shell infidels, and ]]
[[ considers you only slightly better than those lowly Emacs users. ]]
[[ And do NOT ever even mention 'nedit' in Zhark's august presence! ]]
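[[ However, purely as a hedge for the heathens, a rough (untested) bash ]]
[[ sketch of the same loop might look like: ]]
for mm in $(count_afni 1 77) ; do
3dmaskSVD -vnorm -mask mymask+orig"<${mm}..${mm}>" epi+orig > qvec${mm}.1D
done
1dcat qvec*.1D > allvec.1D
/bin/rm -f qvec*.1D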
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dMaskToASCII
Usage: 3dMaskToASCII [-tobin] dataset > outputfile
This program reads as input a byte-valued 0/1 dataset, such as
produced by 3dAutomask, and turns it into an ASCII string.
This string can be used to specify a mask in a few places
in AFNI, and will be allowed in more as time goes on.
the only OPTION:
----------------
-tobin = read 'dataset' as an ASCII string mask, expand it,
and write the byte-valued mask to stdout. This file
corresponds to the .BRIK file of an AFNI dataset.
The information needed to create a .HEAD file isn't
stored in the ASCII string.
* Jul 2010: -STATmask options in 3dREMLfit and 3dDeconvolve
accept a dataset mask or an ASCII string mask.
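For example, a round trip might look like the following (file names are
hypothetical):
3dAutomask -prefix mask_auto epi_r1+orig
3dMaskToASCII mask_auto+orig > mask_string.txt
3dMaskToASCII -tobin mask_string.txt > mask_bytes.BRIK
The first command makes a 0/1 mask, the second encodes it as an ASCII
string, and the third expands that string back into raw mask bytes.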
SAMPLE OUTPUT:
--------------
eNrlmU+u0zAQh21cySxQzZIFwld4S9gQjsJBEM7RepQeIcssrATp5WfHHnucRIBoSjefXtr8
ef5mxvZEiAf+vAe/LujnhXdwAEe30OPvKVK+cp41oUrZr3z9/W2laNPhsbqMIhLPNbn8OQfw
Bvb4vfgi/u/PT4xL9CzheeEIenD1K4lHDU+BhqFebrOcl1Aut51xe0cYj1/Ad8t57orzs/v3
hDEOJ9CD4f+LcQGKz0/q28CzI/nMeJ6iZ0nyVaXjntDAF0e93C5SgRLE4zjC+QKaGsN1B+Z5
Qvz1oKAM8TCgToXxEYEv59beB+8dV7+zvBalb5nmaZKvinjUy2WXca1Qp5xw3oTrJQzfmxq5
61fiwqRxsBkPHv7HWAdJHLw9mXcN7xbeQd/l8yTyrjIfU99ZnQ756sGKR0WomeP0e0to9nAr
DgYmDpJ5Q2XrmZGsf+L8ENYPHx7b/80Q7+Bks3VTX663uDyXqe/Ee8YZdXvlTKlAA9qdNCn3
+m/Ega76n4n/UAeKeaE7iX9DvNts/Ry831cqpr7TfCXeOf8Ze/jr4bU/4N8y9cEejANN/Gf7
kTgPeuK/2D88jX9ZW5dT/56v27Kd/4V/y/jvNrjl3+I57RH/Sd4z/t05/Q9mb92v1nsu//1K
WasDE+t/3sr/Xf636oFfydWBbL9Q8Z/3NYL/UP9vZ/Ef1n1hvdft9Z9xLONAtub/hn8J6iQO
WvW+O7gOsDv3BXrX/B/Wx97l+6fgv3/0+g//Q3do3X9n4mEk5P1nngtfyXFF2PRcOV+n+wZP
9p+N/SDtV+k0H4o+Yhi3gfgX9sH3fzaP26G97z+w/+PmA0X291l+VjxKhtw+T9fof9P/2id0
9byn3sO4nqUfEONgZ99vu/+jyDpBk/5es++TxIeszRt+5QXHr63r+LKv2PRe+ndv6t7dufJ9
8/Pxj/T7G/1fTeLBMP1eSuqdsMs4Ri7exvK+XB94n/c73d9fn+w9wDdwAot4yPsfZTwoEg/V
+bQSH4qpH+T9T/4eYIDvLd4Jb9x7Qm5dJz6do6/31z7fwR+0TpB4IOMX9knzXF1X9mW80Dqi
auvOtR/lmn55z13e/wz9EKH/3RD/AmrpJfk====65536
[The input binary mask is compressed (like 'gzip -9'), then the result]
[is encoded in Base64, and the number of voxels is appended at the end.]
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dmask_tool
-------------------------------------------------------------------------
3dmask_tool - for combining/dilating/eroding/filling masks
This program can be used to:
1. combine masks, with a specified overlap fraction
2. dilate and/or erode a mask or combination of masks
3. fill holes in masks
The outline of operations is as follows.
- read all input volumes
- optionally dilate/erode inputs (with any needed zero-padding)
- restrict voxels to the fraction of overlap
- optionally dilate/erode combination (with zero-padding)
- optionally fill any holes
- write result
Note : all volumes across inputs are combined into a single output volume
Note : a hole is defined as a fully connected set of zero voxels that
does not contain an edge voxel. For any voxel in such a set, it
is not possible to find a path of voxels to reach an edge.
Such paths are evaluated using 6 face neighbors, no diagonals.
----------------------------------------
examples:
a. dilate a mask by 5 levels
3dmask_tool -input mask_anat.FT+tlrc -prefix ma.dilate \
-dilate_input 5
b. dilate and then erode, which connects areas that are close
3dmask_tool -input mask_anat.FT+tlrc -prefix ma.close.edges \
-dilate_input 5 -5
b2. dilate and erode after combining many masks
3dmask_tool -input mask_anat.*+tlrc.HEAD -prefix ma.close.result \
-dilate_result 5 -5
c1. compute an intersection mask, this time with EPI masks
3dmask_tool -input mask_epi_anat.*+tlrc.HEAD -prefix mask_inter \
-frac 1.0
c2. compute a mask of 70% overlap
3dmask_tool -input mask_epi_anat.*+tlrc.HEAD \
-prefix group_mask_olap.7 -frac 0.7
c3. simply count the voxels that overlap
3dmask_tool -input mask_epi_anat.*+tlrc.HEAD \
-prefix mask.counts -count
d. fill holes
3dmask_tool -input mask_anat.FT+tlrc -prefix ma.filled \
-fill_holes
e. fill holes per slice
3dmask_tool -input mask_anat.FT+tlrc -prefix ma.filled.xy \
-fill_holes -fill_dirs xy
f. read many masks, dilate and erode, restrict to 70%, and fill holes
3dmask_tool -input mask_anat.*+tlrc.HEAD -prefix ma.fill.7 \
-dilate_input 5 -5 -frac 0.7 -fill_holes
----------------------------------------
informational command arguments (execute option and quit):
-help : show this help
-hist : show program history
-ver : show program version
----------------------------------------
optional command arguments:
-count : count the voxels that overlap
Instead of creating a binary 0/1 mask dataset, create one with
counts of voxel overlap, i.e. each voxel will contain the number
of masks that it is set in.
-datum TYPE : specify data type for output
e.g: -datum short
default: -datum byte
Valid TYPEs are 'byte', 'short' and 'float'.
-dilate_inputs D1 D2 ... : dilate inputs at the given levels
e.g. -dilate_inputs 3
e.g. -dilate_inputs -4
e.g. -dilate_inputs 8 -8
default: no dilation
Use this option to dilate and/or erode datasets as they are read.
Dilations are across the 18 voxel neighbors that share either a
face or an edge (i.e. of the 26 neighbors in a 3x3x3 box, it is
all but the outer 8 corners).
An erosion is specified by a negative dilation.
One can apply a list of dilations and erosions, though there
should be no reason to apply more than one of each.
Note: use -dilate_result for dilations on the combined masks.
-dilate_result D1 D2 ... : dilate combined mask at the given levels
e.g. -dilate_result 3
e.g. -dilate_result -4
e.g. -dilate_result 8 -8
default: no dilation
Use this option to dilate and/or erode the result of combining
masks that exceed the -frac cutoff.
See -dilate_inputs for details of the operation.
-frac LIMIT : specify required overlap threshold
e.g. -frac 0 (same as -union)
e.g. -frac 1.0 (same as -inter)
e.g. -frac 0.6
e.g. -frac 17
default: union (-frac 0)
When combining masks (across datasets and sub-bricks), use this
option to restrict the result to a certain fraction of the set of
volumes (or to a certain number of volumes if LIMIT > 1).
For example, assume there are 7 volumes across 3 datasets. Then
at each voxel, count the number of masks it is in over the 7
volumes of input.
LIMIT = 0 : union, counts > 0 survive
LIMIT = 1.0 : intersection, counts = 7 survive
LIMIT = 0.6 : 60% fraction, counts >= 5 survive
LIMIT = 5 : count limit, counts >= 5 survive
See also -inter and -union.
-inter : intersection, this means -frac 1.0
-union : union, this means -frac 0
-fill_holes : fill holes within the combined mask
This option can be used to fill holes in the resulting mask, i.e.
after all other processing has been done.
A hole is defined as a connected set of voxels that is surrounded
by non-zero voxels, and which contains no volume edge voxel, i.e.
there are no connected voxels at a volume edge (edge of a volume
meaning any part of any of the 6 volume faces).
To put it one more way, a zero voxel is part of a hole if there
is no path of zero voxels (in 3D space) to a volume face/edge.
Such a path can be curved.
Here, connections are via the 6 faces only, meaning a voxel could
be considered to be part of a hole even if there were a diagonal
path to an edge. Please pester me if that is not desirable.
-fill_dirs DIRS : fill holes only in the given directions
e.g. -fill_dirs xy
e.g. -fill_dirs RA
e.g. -fill_dirs XZ
This option is for use with -fill_holes.
By default, a hole is a connected set of zero voxels that does
not have a path to a volume edge. By specifying fill DIRS, the
filling is done restricted to only those axis directions.
For example, to fill holes one slice at a time (in a sagittal
dataset say, with orientation ASL), one could use any one of the
options:
-fill_dirs xy
-fill_dirs YX
-fill_dirs AS
-fill_dirs ip
-fill_dirs APSI
DIRS should be a single string that specifies 1-3 of the axes
using {x,y,z} labels (i.e. dataset axis order), or using the
labels in {R,L,A,P,I,S}. Such labels are case-insensitive.
-input DSET1 ... : specify the set of inputs (taken as masks)
: (-inputs is historically allowed)
e.g. -input group_mask.nii
e.g. -input mask_epi_anat.*+tlrc.HEAD
e.g. -input amygdala_subj*+tlrc.HEAD
e.g. -input ~/abin/MNI152_2009_template_SSW.nii.gz'[0]'
Use this option to specify the input datasets to process. Any
non-zero voxel will be considered part of that volume's mask.
An input dataset is allowed to have multiple sub-bricks.
All volumes across all input datasets are combined to create
a single volume of output.
-NN1 : specify NN connection level: 1, 2 or 3
-NN2 : specify NN connection level: 1, 2 or 3
-NN3 : specify NN connection level: 1, 2 or 3
e.g. -NN1
default: -NN2
Use this option to specify the nearest neighbor level, one of
1, 2 or 3. This defines which voxels are neighbors when
dilating or eroding. The default is NN2.
NN1 : face neighbors (6 first neighbors)
NN2 : face or edge neighbors (+12 second neighbors)
NN3 : face, edge or diagonal (+8 third neighbors (27-1))
-prefix PREFIX : specify a prefix for the output dataset
e.g. -prefix intersect_mask
default: -prefix combined_mask
The resulting mask dataset will be named using the given prefix.
-quiet : limit text output to errors
Restrict text output. This option is equivalent to '-verb 0'.
See also -verb.
-verb LEVEL : specify verbosity level
The default level is 1, while 0 is considered 'quiet'.
The maximum level is currently 3, but most people don't care.
-------------------------------
R. Reynolds April, 2012
----------------------------------------------------------------------
AFNI program: 3dmatcalc
Usage: 3dmatcalc [options]
Apply a matrix to a dataset, voxel-by-voxel, to produce a new
dataset.
* If the input dataset has 'N' sub-bricks, and the input matrix
is 'MxN', then the output dataset will have 'M' sub-bricks; the
results in each voxel will be the result of extracting the N
values from the input at that voxel, multiplying the resulting
N-vector by the matrix, and outputting the resulting M-vector.
* If the input matrix has 'N+1' columns, then it will be applied
to an (N+1)-vector whose first N elements are from the dataset
and the last value is 1. This convention allows the addition
of a constant vector (the last row of the matrix) to each voxel.
* The output dataset is always stored in float format.
* Useful applications are left to your imagination. The example
below is pretty fracking hopeless. Something more useful might
be to project a 3D+time dataset onto some subspace, then run
3dpc on the results.
OPTIONS:
-------
-input ddd = read in dataset 'ddd' [required option]
-matrix eee = specify matrix, which can be done as a .1D file
or as an expression in the syntax of 1dmatcalc
[required option]
-prefix ppp = write to dataset with prefix 'ppp'
-mask mmm = only apply to voxels in the mask; other voxels
will be set to all zeroes
EXAMPLE:
-------
Assume dataset 'v+orig' has 50 sub-bricks:
3dmatcalc -input v+orig -matrix '&read(1D:50@1,\,50@0.02) &transp' -prefix w
The -matrix option computes a 2x50 matrix, whose first row is all 1's
and whose second row is all 0.02's. Thus, the output dataset w+orig has
2 sub-bricks, the first of which is the voxel-wise sum of all 50 inputs,
and the second is the voxel-wise average (since 0.02=1/50).
-- Zhark, Emperor -- April 2006
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dMatch
++ Loading data.
3dMatch, written by PA Taylor (Nov., 2012), part of FATCAT (Taylor & Saad,
2013) in AFNI.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Find similar subbricks and rearrange order to ease comparison
Comparison is done simply by comparing (weighted) correlation maps of
values, which may include thresholding of either refset or inset
values. The weighting is done by squaring each voxel value (whilst
maintaining its original sign). The Dice coefficient is also calculated
to quantify overlap of regions.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMANDS:
3dMatch -inset FILE1 -refset FILE2 {-mask FILE3} {-in_min THR1} \
{-in_max THR2} {-ref_min THR3} {-ref_max THR4} -prefix FILE4 \
{-only_dice_thr}
where:
-inset FILE1 :file with M subbricks of data to match against another
file.
-refset FILE2 :file with N subbricks, serving as a reference for
FILE1. N=M is *not* a requirement; matching is done
based on squares of values (with signs preserved), and
both best fit of in->ref and ref->in are calculated
and output.
-mask FILE3 :a mask of regions to include in the correlation of
data sets; technically not necessary as relative
correlation values shouldn't change, but the magnitudes
would scale up without it. Dice coeff values should not
be affected by absence or presence of wholebrain mask.
-in_min THR1 :during the correlation/matching analysis, values below
THR1 in the `-inset' will be zeroed (and during Dice
coefficient calculation, excluded from comparison).
(See `-only_dice_thr' option, below.)
-in_max THR2 :during the correlation/matching analysis, values above
THR2 in the `-inset' will be zeroed (and during Dice
coefficient calculation, excluded from comparison).
-ref_min THR3 :during the correlation/matching analysis, values below
THR3 in the `-refset' will be zeroed (and during Dice
coefficient calculation, excluded from comparison).
(See `-only_dice_thr' option, below.)
-ref_max THR4 :during the correlation/matching analysis, values above
THR4 in the `-refset' will be zeroed (and during Dice
coefficient calculation, excluded from comparison).
-prefix FILE4 :prefix of output names for both *BRIK/HEAD files, as
well as for the *_coeff.vals text files (see below).
-only_dice_thr :if option is included in command line, the thresholding
above is only applied during Dice evaluation, not
during spatial correlation.
+ OUTPUTS, named using prefix;
*_REF+orig :AFNI BRIK/HEAD file with the same number of subbricks
as the `-refset' file, each one corresponding to a
subbrick of the `-inset' file with highest weighted
correlation. Any unmatched `-inset' subbricks are NOT
appended at the end. (For example, you could underlay
the -ref_set FILE2 and visually inspect the comparisons
per slice.)
*_REF_coeff.vals :simple text file with four columns, recording the
original brick number slices which have been
reordered in the output *_REF+orig file. Cols. 1&2-
orig `-refset' and `-inset' indices, respectively;
Col. 3- weighted correlation coefficient; Col 4.-
simple Dice coefficient.
*_IN+orig :AFNI BRIK/HEAD file with the same number of subbricks
as the `-inset' file, each one corresponding to
a subbrick of the `-refset' file with highest weighted
correlation. Any unmatched `-refset' subbricks are NOT
appended at the end. (For example, you could underlay
the -inset FILE1 and visually inspect the comparisons
per slice.)
*_IN_coeff.vals :simple text file with four columns, recording the
original brick number slices which have been
reordered in the output *_IN+orig file. Cols. 1&2-
orig `-inset' and `-refset' indices, respectively;
Col. 3- weighted correlation coefficient; Col 4.-
simple Dice coefficient.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3dMatch \
-inset CORREL_DATA+orig \
-refset STANDARD_RSNs+orig \
-mask mask+orig \
-in_min 0.4 \
-ref_min 2.3 \
-prefix MATCHED \
-only_dice_thr
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
____________________________________________________________________________
AFNI program: 3dmatmult
-------------------------------------------------------------------------
Multiply AFNI datasets slice-by-slice as matrices.
If dataset A has Ra rows and Ca columns (per slice), and dataset B has
Rb rows and Cb columns (per slice), multiply each slice pair as matrices
to obtain a dataset with Ra rows and Cb columns. Here Ca must equal Rb
and the number of slices must be equal.
In practice the first dataset will probably be a transformation matrix
(or a sequence of them) while the second dataset might just be an image.
For this reason, the output dataset will be based on inputB.
----------------------------------------
examples:
3dmatmult -inputA matrix+orig -inputB image+orig -prefix transformed
3dmatmult -inputA matrix+orig -inputB image+orig \
-prefix transformed -datum float -verb 2
----------------------------------------
informational command arguments (execute option and quit):
-help : show this help
-hist : show program history
-ver : show program version
----------------------------------------
required command arguments:
-inputA DSET_A : specify first (matrix) dataset
The slices of this dataset might be transformation matrices.
-inputB DSET_B : specify second (matrix) dataset
This dataset might be any image.
-prefix PREFIX : specify output dataset prefix
This will be the name of the product (output) dataset.
----------------------------------------
optional command arguments:
-datum TYPE : specify output data type
Valid TYPEs are 'byte', 'short' and 'float'. The default is
that of the inputB dataset.
-verb LEVEL : specify verbosity level
The default level is 1, while 0 is considered 'quiet'.
----------------------------------------
* If you need to re-orient a 3D dataset so that the rows, columns
and slices are correct for 3dmatmult, you can use one of the
programs 3daxialize or 3dresample for this purpose.
* To multiply a constant matrix into a vector at each voxel, the
program 3dmatcalc is the proper tool.
----------------------------------------------------------------------
R. Reynolds (requested by W. Gaggl)
3dmatmult version 0.0, 29 September 2008
compiled: May 6 2025
AFNI program: 3dmaxdisp
++ 3dmaxdisp: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: Zhark the Displacer
Program 3dmaxdisp!
* Reads in a 3D dataset and a DICOM-based affine matrix
* Outputs the average and maximum displacement that the matrix produces
when applied to the edge voxels of the 3D dataset's automask.
* The motivation for this program was to check if two
affine transformation matrices are 'close' -- but of course,
you can use this program for anything else you like.
* Suppose you have two affine transformation matrices that
transform a dataset XX.nii to MNI space, stored in files
AA.aff12.1D and BB.aff12.1D
and they aren't identical but they are close. How close?
* If these matrices are from 3dAllineate (-1Dmatrix_save),
then each matrix transforms DICOM-order coordinates
in XX.nii to MNI-space.
* So Inverse(AA) transforms MNI-space to XX-space
* So Inverse(AA)*BB transforms MNI-space to MNI-space,
going back to XX-space via matrix Inverse(AA) and then forward
to MNI-space via BB.
* This program (3dmaxdisp) can compute the average and maximum
displacement of Inverse(AA)*BB over the edges of the MNI template,
which will give you the answer to 'How close?' are the matrices.
If these displacements are on the order of a voxel size
(e.g., 1 mm), then the two matrices are for practical purposes
'close enough' (in Zhark's opinion).
* How to do this?
cat_matvec AA.aff12.1D -I BB.aff12.1D > AinvB.aff12.1D
3dmaxdisp -dset ~/abin/MNI152_2009_template_SSW.nii.gz'[0]' -matrix AinvB.aff12.1D
* Results are sent to stdout, two numbers per row (average and maximum),
one row of output for each matrix row given. Usually you will want to
capture stdout to a file with '>' or '| tee', depending on your further plans.
-------
OPTIONS:
-------
-inset ddd }= The input dataset is 'ddd', which is used only to form
*OR* }= the mask over which the displacements will be computed.
-dset ddd }=
-matrix mmm = File 'mmm' has 12 numbers per row, which are assembled
into the 3x4 affine transformation matrix to be applied
to the coordinates of the voxels in the dataset mask.
* As a special case, you can use the word 'IDENTITY'
for the matrix filename, which should result in
a max displacement of 0 mm.
* If there is more than 1 row in 'mmm', then each row
is treated as a separate matrix, and the max displacement
will be computed separately for each matrix.
-verb = Print a few progress reports (to stderr).
------
Author: Zhark the Displacer (AKA Bob the Inverted) -- June 2021
------
AFNI program: 3dmaxima
3dmaxima - used to locate extrema in a functional dataset.
This program reads a functional dataset and locates any relative extrema
(maximums or minimums, depending on the user option). A _relative_
maximum is a point that is greater than all neighbors (not necessarily
greater than all other values in the sub-brick). The output from this
process can be text based (sent to the terminal window) and it can be a
mask (integral) dataset, where the locations of the extrema are set.
When writing a dataset, it is often useful to set a sphere around each
extrema, not to just set individual voxels. This makes viewing those
locations much more reasonable. Also, if the 'Sphere Values' option is
set to 'N to 1', the sphere around the most extreme voxel will get the
value N, giving it the 'top' color in afni (and so on, down to 1).
Notes : The only required option is the input dataset.
Input datasets must be of type short.
All distances are in voxel units.
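A typical run might therefore look something like this (the input name
and threshold value are only illustrative):
3dmaxima -input func+orig'[7]' -thresh 9.5 -min_dist 4 \
-out_rad 5 -spheres_Nto1 -prefix maskNto1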
----------------------------------------------------------------------
*** Options ***
----- Input Dset: -----
-input DSET : specify input dataset
e.g. -input func+orig'[7]'
Only one sub-brick may be specified. So if a dataset has multiple
sub-bricks, the [] selector must be used.
----- Output Dset: -----
-prefix PREFIX : prefix for an output mask dataset
e.g. -prefix maskNto1
This dataset may be viewed as a mask. It will have a value set at
the location of any selected extrema. The -out_rad option can be
used to change those points to 'spheres'.
-spheres_1 : [flag] set all output values to 1
This is the default, which sets all values in the output dataset
to 1. This is for the extreme points, and for the spheres centered
around them.
-spheres_1toN : [flag] output values will range from 1 to N
In this case, the most extreme voxel will be set with a value of 1.
The next most extreme voxel will get 2, and so on.
-spheres_Nto1 : [flag] output values will range from N to 1
With this option, the highest extrema will be set to a value of N,
where N equals the number of reported extrema. The advantage of
this is that the most extreme point will get the highest color in
afni.
----- Threshold: -----
-thresh CUTOFF : provides a cutoff value for extrema
e.g. -thresh 17.4
Extrema not meeting this cutoff will not be considered.
Note that if the '-neg_ext' option is applied, the user
will generally want a negative threshold.
----- Separation: -----
-min_dist VOXELS : minimum acceptable distance between extrema
e.g. -min_dist 4
Less significant extrema which are close to more significant extrema
will be discounted in some way, depending on the 'neighbor style'
options.
See '-n_style_sort' and '-n_style_weight_ave' for more information.
Note that the distance is in voxels, not mm.
----- Output Size: -----
-out_rad SIZE : set the output radius around extrema voxels
e.g. -out_rad 9
If the user wants the output BRIK to consist of 'spheres' centered
at extrema points, this option can be used to set the radius for
those spheres. Note again that this is in voxel units.
----- Neighbor: -----
If extrema are not as far apart as is specified by the '-min_dist'
option, the neighbor style options specify how to handle the points.
-n_style_sort : [flag] use 'Sort-n-Remove' style (default)
The extrema are sorted by magnitude. For each extrema (which has
not previously been removed), all less significant extrema neighbors
within the separation radius (-min_dist) are removed.
See '-min_dist' for more information.
-n_style_weight_ave : [flag] use 'Weighted-Average' style
Again, traverse the sorted list of extrema. Replace the current
extrema with the center of mass of all extrema within the Separation
radius of the current point, removing all others within this radius.
This should not change the number of extrema, it should only shift
the locations.
----- Params: -----
-neg_ext : [flag] search for negative extrema (minima)
This will search for the minima of the dataset.
Note that a negative threshold may be desired.
-true_max : [flag] extrema may not have equal neighbors
By default, points may be considered extrema even if they have a
neighbor with the same value. This flag option requires extrema
to be strictly greater than any of their neighbors.
With this option, extrema locations that have neighbors at the same
value are ignored.
----- Output Text: -----
-debug LEVEL : output extra information to the terminal
e.g. -debug 2
-no_text : [flag] do not display the extrema points as text
-coords_only : [flag] only output coordinates (no text or vals)
----- Output Coords: -----
-dset_coords : [flag] display output in the dataset orientation
By default, the xyz-coordinates are displayed in DICOM orientation
(RAI), i.e. right, anterior and inferior coordinates are negative,
and they are printed in that order (RL, then AP, then IS).
If this flag is set, the dataset orientation is used, whichever of
the 48 it happens to be.
Note that in either case, the output orientation is printed above
the results in the terminal window, to remind the user.
----- Other : -----
-help : display this help
-hist : display module history
-ver : display version number
Author: R Reynolds
AFNI program: 3dMean
Usage: 3dMean [options] dset dset ...
Takes the voxel-by-voxel mean of all input datasets;
the main reason is to be faster than 3dcalc.
Options [see 3dcalc -help for more details on these]:
-verbose = Print out some information along the way.
-prefix ppp = Sets the prefix of the output dataset.
-datum ddd = Sets the datum of the output dataset.
-fscale = Force scaling of the output to the maximum integer range.
-gscale = Same as '-fscale', but also forces each output sub-brick
to get the same scaling factor.
-nscale = Don't do any scaling on output to byte or short datasets.
** Only use this option if you are sure you
want the output dataset to be integer-valued!
-non_zero = Use only non-zero values for calculation of mean,min,max,sum,squares
-sd *OR* = Calculate the standard deviation, sqrt(variance), instead
-stdev of the mean (cannot be used with -sqr, -sum or -non_zero).
-sqr = Average the squares, instead of the values.
-sum = Just take the sum (don't divide by number of datasets).
-count = compute only the count of non-zero voxels.
-max = find the maximum at each voxel
-min = find the minimum at each voxel
-absmax = find maximum absolute value at each voxel
-signed_absmax = find extremes with maximum absolute value
but preserve sign
-mask_inter = Create a simple intersection mask.
-mask_union = Create a simple union mask.
The masks will be set by any non-zero voxels in
the input datasets.
-weightset WSET = Sum of N dsets will be weighted by N volume WSET.
e.g. -weightset opt_comb_weights+tlrc
This weight dataset must be of type float.
N.B.: All input datasets must have the same number of voxels along
each axis (x,y,z,t).
* At least 2 input datasets are required.
* Dataset sub-brick selectors [] are allowed.
* The output dataset origin, time steps, etc., are taken from the
first input dataset.
* Neither absmax nor signed_absmax is really appropriate for byte data,
because that format does not allow for negative values
*** If you are trying to compute the mean (or some other statistic)
across time for a 3D+time dataset (not across datasets), use
3dTstat instead.
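For example, to get a voxel-wise group mean and standard deviation from
a set of (hypothetical) single-subject datasets:
3dMean -prefix group_mean subj*.betas+tlrc'[2]'
3dMean -stdev -prefix group_stdev subj*.betas+tlrc'[2]'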
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dMedianFilter
Usage: 3dMedianFilter [options] dataset
Computes the median in a spherical nbhd around each point in the
input to produce the output.
Options:
-irad x = Radius in voxels of spherical regions
-iter n = Iterate 'n' times [default=1]
-verb = Be verbose during run
-prefix pp = Use 'pp' for prefix of output dataset
-automask = Create a mask (a la 3dAutomask)
Output dataset is always stored in float format. If the input
dataset has more than 1 sub-brick, only sub-brick #0 is processed.
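For example (the dataset name is hypothetical), to apply two passes of a
2-voxel-radius median filter:
3dMedianFilter -irad 2 -iter 2 -prefix anat_medfilt anat+orig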
-- Feb 2005 - RWCox
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dMEMA
Usage:
------
3dMEMA is a program for performing Mixed Effects Meta Analysis at group level
that models both within- and across- subjects variability, thereby requiring
both regression coefficients, or general linear contrasts among them, and the
corresponding t-statistics from each subject as input. To get accurate
t-statistics, 3dREMLfit should be used for the linear regression (a GLS
regression program using an ARMA(1,1) model for the noise), rather than
3dDeconvolve.
You must install R (https://www.r-project.org/), plus the 'snow' package
if parallel computing is desired. Version 1.0.1, Dec 21, 2016. If you want to
cite the analysis approach, use the following at this moment:
Chen, G., Saad, Z.S., Nath, A.R., Beauchamp, M.S., Cox, R.W., 2012.
FMRI group analysis combining effect estimates and their variances.
NeuroImage 60, 747–765. https://doi.org/10.1016/j.neuroimage.2011.12.060
The basic usage of 3dMEMA is to derive group effects of a condition, contrast,
or linear combination (GLT) of multiple conditions. It can be used to analyze
data from one, two, or multiple groups. However, if there are more than two
groups or more than one subject-grouping variables (e.g., sex, adolescent/adults,
genotypes, etc.) involved in the analysis, dummy coding (zeros and ones) the
variables as covariates is required, and extremely caution should be exercised
in doing so because different coding strategy may lead to different
interpretation. In addition, covariates (quantiative variables) can be
incorporated in the model, but centering and potential interactions with other
effects in the model should be considered.
Basically, 3dMEMA can run one-sample, two-sample, and all types of BETWEEN-SUBJECTS
ANOVA and ANCOVA. Within-subject variables mostly cannot be modeled, but there are
a few exceptions. For instance, paired-test can be performed through feeding the
contrast of the two conditions as input. Multi-way ANOVA can be analyzed under the
following two scenarios: 1) all factors have only two levels (e.g., a 2 X 2
repeated-measures ANOVA); or 2) there is only one within-subject (or
repeated-measures) factor and it contains only two levels. See more details at
https://afni.nimh.nih.gov/sscc/gangc/MEMA.html
Notice: When comparing two groups, option "-groups groupA groupB" has to be
present, and the output includes the difference of groupB - groupA, which is
consistent with most AFNI convention except for 3dttest++ where groupA - groupB is
rendered.
Example 1 --- One-sample type (one regression coefficient or general linear
contrast from each subject in a group):
--------------------------------
3dMEMA -prefix ex1 \
-jobs 4 \
-set happy \
ac ac+tlrc'[14]' ac+tlrc'[15]' \
ejk ejk+tlrc'[14]' ejk+tlrc'[15]' \
...
ss ss+tlrc'[14]' ss+tlrc'[15]' \
-max_zeros 4 \
-model_outliers \
-residual_Z
3dMEMA -prefix ex1 \
-jobs 4 \
-set happy \
ac ac+tlrc'[happy#0_Coef]' ac+tlrc'[happy#0_Tstat]' \
ejk ejk+tlrc'[happy#0_Coef]' ejk+tlrc'[happy#0_Tstat]' \
...
ss ss+tlrc'[happy#0_Coef]' ss+tlrc'[happy#0_Tstat]' \
-missing_data 0 \
-HKtest \
-model_outliers \
-residual_Z
Example 2 --- Two-sample type (one regression coefficient or general linear
contrast from each subject in two groups with the contrast being the 2nd group
subtracting the 1st one), heteroskedasticity (different cross-subjects variability
between the two groups), outlier modeling, covariates centering, no payment no
interest till Memorial Day next year. Notice that option -groups has to be
present in this case, and the output includes the difference of the second group
versus the first one.
-------------------------------------------------------------------------
3dMEMA -prefix ex3 \
-jobs 4 \
-groups horses goats \
-set healthy_horses \
ac ac_sad_B+tlrc.BRIK ac_sad_T+tlrc.BRIK \
ejk ejk_sad_B+tlrc.BRIK ejk_sad_T+tlrc.BRIK \
...
ss ss_sad_B+tlrc.BRIK ss_sad_T+tlrc.BRIK \
-set healthy_goats \
jp jp_sad_B+tlrc.BRIK jp_sad_T+tlrc.BRIK \
mb mb_sad_B+tlrc.BRIK mb_sad_T+tlrc.BRIK \
...
trr trr_sad_B+tlrc.BRIK trr_sad_T+tlrc.BRIK \
-n_nonzero 18 \
-HKtest \
-model_outliers \
-unequal_variance \
-residual_Z \
-covariates CovFile.txt \
-covariates_center age = 25 13 weight = 100 150 \
-covariates_model center=different slope=same
where file CovFile.txt looks something like this:
name age weight
ejk 93 117
jcp 3 34
ss 12 200
ac 12 130
jp 65 130
mb 25 630
trr 18 187
delb 9 67
tony 12 4000
Example 3 --- Paired type (difference of two regression coefficients or
general linear contrasts from each subject in a group). One scenario of
general linear combinations is to test linear or higher order trend at
individual level, and then take the trend information to group level.
---------------------------------
3dMEMA -prefix ex2 \
-jobs 4 \
-missing_data happyMiss+tlrc sadMiss+tlrc \
-set happy-sad \
ac ac_hap-sad_B+tlrc ac_hap-sad_T+tlrc \
ejk ejk_hap-sad_B+tlrc ejk_hap-sad_T+tlrc \
...
ss ss_hap-sad_B+tlrc ss_hap-sad_T+tlrc \
Options in alphabetical order:
------------------------------
-cio: Use AFNI's C io functions
-conditions COND1 [COND2]: Name of 1 or 2 conditions, tasks, or GLTs.
Default is one condition named 'c1'
-contrast_name: (no help available)
-covariates COVAR_FILE: Specify the name of a text file containing
a table for the covariate(s). Each column in the
file is treated as a separate covariate, and each
row contains the values of these covariates for
each subject. Option -unequal_variance may not be
used in the presence of covariates with two groups.
To avoid confusion, it is best you format COVAR_FILE in this manner
with BOTH row and column names:
subj age weight
Jane 25 300
Joe 22 313
... .. ...
This way, there is no ambiguity as to which values are attributed to
which subject, nor to the label of the covariate(s). The word 'subj'
must be the first word of the first row. You can still get at the
values of the columns of such a file with AFNI's 1dcat -ok_text,
which will treat the first row, and first column, as all 0s.
Alternate, but less recommended ways to specify the covariates:
(column names only)
age weight
25 300
22 313
.. ...
or
(no row and column names)
25 300
22 313
.. ...
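As a quick illustration of the 1dcat -ok_text note above (the output file name
here is just a placeholder), the numeric columns of the recommended, fully
labeled format can be dumped to a plain .1D file with:
1dcat -ok_text CovFile.txt > CovValues.1D
The first row and first column (the labels) will be read as all 0s, as
described above.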
-covariates_center COV_1=CEN_1 [COV_2=CEN_2 ... ]: (for 1 group)
-covariates_center COV_1=CEN_1.A CEN_1.B [COV_2=CEN_2.A CEN_2.B ... ]:
(for 2 groups)
where COV_K is the name assigned to the K-th covariate,
either from the header of the covariates file, or from the option
-covariates_name. This makes clear which center belongs to which
covariate. When two groups are used, you need to specify a center for
each of the groups (CEN_K.A, CEN_K.B).
Example: If you had covariates age and weight, you would use:
-covariates_center age = 78 55 weight = 165 198
If you want all covariates centered about their own mean,
just use -covariates_center mean. Be alert: Default is mean centering!
If no centering is desired (e.g., the covariate values have been
pre-centered), set the center value to 0 with -covariates_center.
-covariates_model center=different/same slope=different/same:
Specify whether to use the same or different intercepts
for each of the covariates. Similarly for the slope.
-covariates_name COV_1 [... COV_N]: Specify the name of each of the N
covariates. This is only needed if the covariates' file
has no header. The default is to name the covariates
cov1, cov2, ...
-dbgArgs: This option will enable R to save the parameters in a
file called .3dMEMA.dbg.AFNI.args in the current directory
so that debugging can be performed.
-equal_variance: Assume same cross-subjects variability between GROUP1
and GROUP2 (homoskedasticity). (Default)
-groups GROUP1 [GROUP2]: Name of 1 or 2 groups. This option must be used
when comparing two groups. Default is one group
named 'G1'. The labels here are used to name
the sub-bricks in the output. When there are
two groups, the 1st and 2nd labels here are
associated with the 1st and 2nd datasets
specified respectively through option -set,
and their group difference is the second group
minus the first one, similar to 3dttest but
different from 3dttest++.
-help: this help message
-HKtest: Perform Hartung-Knapp adjustment for the output t-statistic.
This approach is more robust when the number of subjects
is small, and is generally preferred. -HKtest is the default
with t-statistic output.
-jobs NJOBS: On a multi-processor machine, parallel computing will speed
up the program significantly.
Choose 1 for a single-processor computer.
-mask MASK: Process voxels inside this mask only.
Default is no masking.
-max_zeros MM: Do not compute statistics at any voxel that has
more than MM zero beta coefficients or GLTs. Voxels around
the edges of the group brain will not have data from
some of the subjects. Therefore, some of their beta's or
GLTs and t-stats are masked with 0. 3dMEMA can handle
missing data at those voxels but obviously too much
missing data is not good. Setting -max_zeros to 0.25
means process data only at voxels where no more than 1/4
of the data is missing. The default value is 0 (no
missing values allowed). MM can be a positive integer
less than the number of subjects, or a fraction
between 0 and 1. Alternatively option -missing_data
can be used to handle missing data.
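For instance (a sketch only; the subject count is hypothetical), with 16
subjects in the -set list, either of
-max_zeros 4
-max_zeros 0.25
allows statistics to be computed at voxels where up to 4 subjects have
zero (missing) betas.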
-missing_data: This option corrects for inflated statistics for the voxels where
some subjects do not have any data available due to imperfect
spatial alignment or other reasons. The absence of this option
means no missing data will be assumed. Two formats of option
setting exist as shown below.
-missing_data 0: With this format the zero value at a voxel of each subject
will be interpreted as missing data.
-missing_data File1 [File2]: Information about missing data is specified
with one file per group, for 1 or 2 groups (the number
of files and their order should be consistent with those
in option -groups). The voxel value of each file
indicates the number of subjects with missing data
in that group.
-model_outliers: Model outlier betas with a Laplace distribution of
subject-specific error.
Default is -no_model_outliers
-n_nonzero NN: Do not compute statistics at any voxel that has
fewer than NN non-zero beta values. This option is
complementary to -max_zeros, and matches an option in
the interactive 3dMEMA mode. NN is basically (number of
unique subjects - MM). Alternatively option -missing_data
can be used to handle missing data.
-no_HKtest: Do not make the Hartung-Knapp adjustment. -HKtest is
the default with t-statistic output.
-no_model_outliers: No modeling of outlier betas/GLTs (Default).
-no_residual_Z: Do not output residuals and their Z values (Default).
-prefix PREFIX: Output prefix (just prefix, no view+suffix needed)
-residual_Z: Output residuals and their Z values used in identifying
outliers at voxel level.
Default is -no_residual_Z
-Rio: Use R's io functions
-set SETNAME \
SUBJ_1 BETA_DSET T_DSET \
SUBJ_2 BETA_DSET T_DSET \
... ... ... \
SUBJ_N BETA_DSET T_DSET \
Specify the data for one of the two test variables (either group
or contrast/GLT), A or B.
SETNAME is the name assigned to the set, which is only for the
user's information and is not used by the program. When
there are two groups, the 1st and 2nd datasets are
associated with the 1st and 2nd labels specified
through option -groups, and the group difference is
the second group minus the first one, similar to
3dttest but different from 3dttest++.
SUBJ_K is the label for the subject K whose datasets will be
listed next
BETA_DSET is the name of the dataset of the beta coefficient or GLT.
T_DSET is the name of the dataset containing the Tstat
corresponding to BETA_DSET.
To specify BETA_DSET and T_DSET, you can use the standard AFNI
notation, which, in addition to sub-brick indices, now allows for
the use of sub-brick labels as selectors
e.g: -set Placebo Jane pb05.Jane.Regression+tlrc'[face#0_Beta]' \
pb05.Jane.Regression+tlrc'[face#0_Tstat]' \
-show_allowed_options: list of allowed options
-unequal_variance: Model cross-subjects variability difference between
GROUP1 and GROUP2 (heteroskedasticity). This option
may NOT be invoked when covariates are present in the
model. Default is -equal_variance (homoskedasticity).
-verb VERB: VERB is an integer specifying verbosity level.
0 for quiet (Default). 1 or more: talkative.
#######################################################################
Please consider citing the following if this program is useful for you:
Chen, G., Saad, Z.S., Nath, A.R., Beauchamp, M.S., Cox, R.W., 2012.
FMRI group analysis combining effect estimates and their variances.
NeuroImage 60, 747–765. https://doi.org/10.1016/j.neuroimage.2011.12.060
#######################################################################
AFNI program: 3dMEPFM
Usage: 3dMEPFM [options]
------
Brief summary:
==============
* 3dMEPFM is the equivalent program to 3dPFM for Multiecho fMRI data. This program
performs the voxelwise deconvolution of ME-fMRI data to yield time-varying estimates
of the changes in the transverse relaxation (DR2*) and, optionally, the net magnetization
(DS0) assuming a mono-exponential decay model of the signal, i.e. linear dependence of
the BOLD signal on the echo time (TE).
* It is also recommended to read the help of 3dPFM to understand its functionality.
* The ideas behind 3dMEPFM are described in the following papers:
- For a comprehensive description of the algorithm, based on a model that
only considers fluctuations in R2* (DR2*) and thus only estimates DR2*
(i.e. this model is selected with option -R2only), see:
C Caballero-Gaudes, S Moia, P. Panwar, PA Bandettini, J Gonzalez-Castillo
A deconvolution algorithm for multiecho functional MRI: Multiecho Sparse Paradigm Free Mapping
(submitted to Neuroimage)
- For a model that considers both fluctuations in the net magnetization (DS0) and R2*,
but only imposes a regularization term on DR2* (setting -rho 0 and without -R2only),
see
C Caballero-Gaudes, PA Bandettini, J Gonzalez-Castillo
A temporal deconvolution algorithm for multiecho functional MRI
2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)
https://ieeexplore.ieee.org/document/8363649
- For a model that considers both fluctuations in the net magnetization (DS0) and R2*,
and imposes regularization terms on DR2* and DS0 (i.e. setting rho > 0, and without -R2only),
see
The results of this paper were obtained with rho = 0.5
C Caballero-Gaudes, S. Moia, PA Bandettini, J Gonzalez-Castillo
Quantitative deconvolution of fMRI data with Multi-echo Sparse Paradigm Free Mapping
Medical Image Computing and Computer Assisted Intervention (MICCAI 2018)
Lecture Notes in Computer Science, vol. 11072. Springer
https://doi.org/10.1007/978-3-030-00931-1_36
* IMPORTANT. This program is written in R. Please follow the guidelines in
http://afni.nimh.nih.gov/sscc/gangc/Rinstall.html
to install R and make AFNI compatible with R. Particularly, the "snow" library
must be installed for parallelization across CPU nodes.
install.packages("snow",dependencies=TRUE)
In addition, you need to install the following libraries with dependencies:
install.packages("abind",dependencies=TRUE)
install.packages("lars",dependencies=TRUE)
install.packages("wavethresh",dependencies=TRUE)
Also, if you want to run the program with the options "rho > 0", you must install
the R package of the generalized lasso (https://projecteuclid.org/euclid.aos/1304514656)
This package was removed from CRAN repository, but the source code is available in:
https://github.com/glmgen/genlasso
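For example, one possible way to install it from that GitHub repository
(a sketch only, assuming git and a working R toolchain are available;
other installation routes exist):
git clone https://github.com/glmgen/genlasso
R CMD INSTALL genlasso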
Example usage with a dataset with 3 echoes:
-----------------------------------------------------------------------------
3dMEPFM -input data_TE1.nii 0.015 \
-input data_TE2.nii 0.030 \
-input data_TE3.nii 0.045 \
-mask mask.nii \
-criteria bic \
-hrf SPMG1 \
-jobs 1
Options:
--------
-input DSET TE
DSET: Dataset to analyze with Multiecho Paradigm Free Mapping,
given together with its corresponding TE. DSET can be in any of the
formats available in AFNI, e.g.: -input Data+orig
TE: echo time of the dataset in seconds
.1D files, where each column is a voxel timecourse, are also accepted.
If a .1D file is input, you MUST specify the TR with option -TR.
-dbgArgs: This option will enable R to save the parameters in a
file called .3dMEPFM.dbg.AFNI.args in the current directory
so that debugging can be performed.
-mask MASK: Process voxels inside this mask only. Default is no masking.
-penalty PEN: Regularization term (a.k.a. penalty) for DR2 & DS0
* Available options for PEN are:
lasso: LASSO (i.e. L1-norm)
* If you are interested in other penalties (e.g. ridge regression,
fused lasso, elastic net), contact c.caballero@bcbl.eu
-criteria CRIT: Model selection of the regularization parameter.
* Available options are:
bic: Bayesian Information Criterion (default)
aic: Akaike Information Criterion
mad: Regularization parameter is selected as the iteration
that makes the standard deviation of the residuals become
larger than factor_MAD * sigma_MAD, where sigma_MAD is
the MAD estimate of the noise standard deviation
(after wavelet decomposition of the echo signals)
mad2: Regularization parameter is selected so that
the standard deviation of the residuals is the closest
to factor_MAD*sigma_MAD.
* If you want other options, contact c.caballero@bcbl.eu
-maxiterfactor MaxIterFactor (between 0 and 1):
* Maximum number of iterations for the computation of the
regularization path will be 2*MaxIterFactor*nscans
* Default value is MaxIterFactor = 1
-TR tr: Repetition time or sampling period of the input data
* It is required for the generation of the deconvolution HRF model.
* If input datasets are .1D file, TR must be specified in seconds.
If TR is not given, the program will STOP.
* If input datasets are 3D+time volumes and TR is NOT given,
the value of TR is taken from the dataset header.
* If TR is specified and it is different from the TR in the header
of the input datasets, the program will STOP.
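For example, a minimal sketch of running the program on .1D inputs (the file
names and the TR value here are hypothetical placeholders):
3dMEPFM -input ts_TE1.1D 0.015 \
-input ts_TE2.1D 0.030 \
-input ts_TE3.1D 0.045 \
-TR 2.0 \
-criteria bic \
-jobs 1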
-hrf fhrf: haemodynamic response function used for deconvolution
* fhrf can be any of the HRF models available in 3dDeconvolve.
http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dDeconvolve.html
i.e. 3dMEPFM calls 3dDeconvolve with options -x1D_stop & -nodata
to create the HRF with onset at 0 (i.e. -stim_time 1 '1D:0' fhrf )
* [Default] fhrf == 'GAM', the 1 parameter gamma variate
(t/(p*q))^p * exp(p-t/q)
with p=8.6 q=0.547 if only 'GAM' is used
** The peak of 'GAM(p,q)' is at time p*q after
the stimulus. The FWHM is about 2.3*sqrt(p)*q.
* Another option is fhrf == 'SPMG1', the SPM canonical HRF.
* If fhrf is a .1D, the program will use it as the HRF model.
** It should be generated with the same TR as the input data
to get sensible results (i.e. know what you are doing).
** fhrf must be column or row vector, i.e. only 1 hrf allowed.
* The HRF is normalized to maximum absolute amplitude equal to 1.
-R2only:
* If this option is given, the model will only consider R2* changes
and will not estimate S0 changes.
-rho: 0 <= rho <= 1 (default 0):
* Parameter that balances the penalization of the DS0 (rho) and
DR2star (1-rho) coefficients.
* Default is rho = 0, i.e. no penalization of DS0 coefficients.
* It becomes irrelevant with -R2only option.
-factor_min_lambda value >= 0 (default factor_min_lambda = 0.1):
* Stop the computation of the regularization path when
lambda <= factor_min_lambda*sigma_MAD, where sigma_MAD is the
estimate of the standard deviation of the noise (computed after
wavelet decomposition). It must be equal to or larger than 0.
-factor_MAD (default factor_MAD = 1):
* For criteria 'mad', select lambda so that the standard deviation
of residuals is approximately equal to factor_MAD*sigma_MAD
-debias_TEopt: 0 <= debias_TEopt <= number of input datasets
* For debiasing, only consider the 'debias_TEopt' input dataset,
i.e. if debias_TEopt=2, the dataset corresponding to the second
TE will be used for debiasing. This option is available in case
you really know that one of the echoes is the 'optimal' TE ...
As if this information was easy to know and match :)
* Default is debias_TEopt = 0, i.e. all echoes will be considered.
* This option is not recommended unless you understand it,
(i.e. use at your own risk)
-do_prior_debias:
* If this option is given, the algorithm will perform debiasing
before the selection of the regularization parameter.
* This option is not recommended unless you understand it,
(i.e. use at your own risk)
-n_selection_Nscans:
* The equation for model selection of the regularization
parameter with the 'bic' and 'aic' criteria depends on the number
of observations (i.e. number of scans * number of echoes)
* If -n_selection_Nscans is given, the formula will assume that
the number of observations is the number of scans. This is
mathematically wrong, but who cares if it gives better results!!
* This option is not recommended unless you understand it,
(i.e. use at your own risk)
-prefix
* The names of the output volumes will be generated automatically
as outputname_prefix_input, e.g. if the input is TheEmperor+orig
and the prefix is Zhark, the name of the DR2 output volume is
DR2_Zhark_TheEmperor+orig,
whereas if no prefix is given, the output will be
DR2_TheEmperor+orig.
* The program will automatically save the following volumes:
-DR2 Changes in R2* parameter, which is assumed to
represent neuronal-related signal changes.
-DR2fit Convolution of DR2 with HRF, i.e. neuronal-related
haemodynamic signal. One volume per echo is created.
-DS0 Changes in net magnetization (S0) (if estimated)
-lambda Regularization parameter
-sigmas_MAD Estimate of the noise standard deviation after
wavelet decomposition for each input dataset
-costs Cost function to select the regularization parameter
(lambda) according to selection criterion
* If you do not want some of these output volumes, you can always
delete them later or rename them with 3dcopy.
-jobs NJOBS: On a multi-processor machine, parallel computing will
speed up the program significantly.
Choose 1 for a single-processor computer (DEFAULT).
-nSeg XX: Divide into nSeg segments of voxels to report progress,
e.g. nSeg 5 will report every 20% of processed voxels.
Default = 10
-verb VERB: VERB is an integer specifying verbosity level.
0 for quiet, 1 (default) or more: talkative.
-help: this help message
-show_allowed_options: list of allowed options
AFNI program: 3dmerge
Program 3dmerge
This program has 2 different functions:
(1) To edit 3D datasets in various ways (threshold, blur, cluster, ...);
(2) To merge multiple datasets in various ways (average, max, ...).
Either or both of these can be applied.
The 'editing' operations are controlled by options that start with '-1',
which indicates that they apply to individual datasets
(e.g., '-1blur_fwhm').
The 'merging' operations are controlled by options that start with '-g',
which indicate that they apply to the entire group of input datasets
(e.g., '-gmax').
----------------------------------------------------------------------
Usage: 3dmerge [options] datasets ...
Examples:
1. Apply a 4.0mm FWHM Gaussian blur to EPI run 7.
3dmerge -1blur_fwhm 4.0 -doall -prefix e1.run7_blur run7+orig
* These examples are based on a data grid of 3.75 x 3.75 x 3.5, in mm.
So a single voxel has a volume of ~49.22 mm^3 (i.e., ~49.22 microliters,
the units of 'vmul'), and a 40 voxel
cluster has a volume of ~1969 mm^3 (as used in some examples).
2. F-stat only:
Cluster based on a threshold of F=10 (F-stats are in sub-brick #0),
and require a volume of 40 voxels (1969 mm^3). The output will be
the same F-stats as in the input, but subject to the threshold and
clustering.
3dmerge -1clust 3.76 1969 -1thresh 10.0 \
-prefix e2.f10 stats+orig'[0]'
3. F-stat only:
Perform the same clustering (as in #2), but apply the radius and
cluster size in terms of cubic millimeter voxels (as if the voxels
were 1x1x1). So add '-dxyz=1', and adjust rmm and mvul.
3dmerge -dxyz=1 -1clust 1 40 -1thresh 10.0 \
-prefix e3.f10 stats+orig'[0]'
4. t-stat and beta weight:
For some condition, our beta weight is in sub-brick #4, with the
corresponding t-stat in sub-brick #5. Cluster based on 40 voxels
and a t-stat threshold of 3.25. Output the data from the beta
weights, not the t-stats.
3dmerge -dxyz=1 -1clust 1 40 -1thresh 3.25 \
-1tindex 5 -1dindex 4 \
-prefix e4.t3.25 stats+orig
5. t-stat mask:
Apply the same threshold and cluster as in #4, but output a mask.
Since there are 5 clusters found in this example, the values in
the mask will be from 1 to 5, representing the largest cluster to
the smallest. Use -1clust_order on sub-brick 5.
3dmerge -dxyz=1 -1clust_order 1 40 -1thresh 3.25 \
-prefix e5.mask5 stats+orig'[5]'
Note: this should match the 3dclust output from:
3dclust -1thresh 3.25 -dxyz=1 1 40 stats+orig'[5]'
----------------------------------------------------------------------
EDITING OPTIONS APPLIED TO EACH INPUT DATASET:
-1thtoin = Copy threshold data over intensity data.
This is only valid for datasets with some
thresholding statistic attached. All
subsequent operations apply to this
substituted data.
-2thtoin = The same as -1thtoin, but do NOT scale the
threshold values from shorts to floats when
processing. This option is only provided
for compatibility with the earlier versions
of the AFNI package '3d*' programs.
-1noneg = Zero out voxels with negative intensities
-1abs = Take absolute values of intensities
-1clip val = Clip intensities in range (-val,val) to zero
-2clip v1 v2 = Clip intensities in range (v1,v2) to zero
-1uclip val = These options are like the above, but do not apply
-2uclip v1 v2 any automatic scaling factor that may be attached
to the data. These are for use only in special
circumstances. (The 'u' means 'unscaled'. Program
'3dinfo' can be used to find the scaling factors.)
N.B.: Only one of these 'clip' options can be used; you cannot
combine them to perform multiple clipping operations.
-1thresh thr = Use the threshold data to censor the intensities
(only valid for 'fith', 'fico', or 'fitt' datasets)
(or if the threshold sub-brick is set via -1tindex)
N.B.: The value 'thr' is floating point, in the range
0.0 < thr < 1.0 for 'fith' and 'fico' datasets,
and 0.0 < thr < 32.7 for 'fitt' datasets.
-2thresh t1 t2 = Zero out voxels where the threshold sub-brick value
lies between 't1' and 't2' (exclusive). If t1=-t2, this
is the same as '-1thresh t2'.
-1blur_sigma bmm = Gaussian blur with sigma = bmm (in mm)
-1blur_rms bmm = Gaussian blur with rms deviation = bmm
-1blur_fwhm bmm = Gaussian blur with FWHM = bmm
-1blur3D_fwhm bx by bz =
Gaussian blur with FWHM (potentially) different in each
of the 3 spatial dimensions. Note that these dimensions
are in mm, and refer to the storage order of the dataset.
(See the output of '3dinfo datasetname' if you
don't know the storage order of your input dataset.)
A blur amount of 0 in a direction means not to apply
any blurring along that axis. For example:
-1blur3D_fwhm 4 4 0
will do in-plane blurring only along the x-y dataset axes.
-t1blur_sigma bmm= Gaussian blur of threshold with sigma = bmm(in mm)
-t1blur_rms bmm = Gaussian blur of threshold with rms deviation = bmm
-t1blur_fwhm bmm = Gaussian blur of threshold with FWHM = bmm
-1zvol x1 x2 y1 y2 z1 z2
= Zero out entries inside the 3D volume defined
by x1 <= x <= x2, y1 <= y <= y2, z1 <= z <= z2 ;
N.B.: The ranges of x,y,z in a dataset can be found
using the '3dinfo' program. Dimensions are in mm.
N.B.: This option may not work correctly at this time, but
I've not figured out why!
CLUSTERING
-dxyz=1 = In the cluster editing options, the spatial clusters
are defined by connectivity in true 3D distance, using
the voxel dimensions recorded in the dataset header.
This option forces the cluster editing to behave as if
all 3 voxel dimensions were set to 1 mm. In this case,
'rmm' is then the max number of grid cells apart voxels
can be to be considered directly connected, and 'vmul'
is the min number of voxels to keep in the cluster.
N.B.: The '=1' is part of the option string, and can't be
replaced by some other value. If you MUST have some
other value for voxel dimensions, use program 3drefit.
The following cluster options are mutually exclusive:
-1clust rmm vmul = Form clusters with connection distance rmm
and clip off data not in clusters of
volume at least vmul microliters
-1clust_mean rmm vmul = Same as -1clust, but all voxel intensities
within a cluster are replaced by the average
intensity of the cluster.
-1clust_max rmm vmul = Same as -1clust, but all voxel intensities
within a cluster are replaced by the maximum
intensity of the cluster.
-1clust_amax rmm vmul = Same as -1clust, but all voxel intensities
within a cluster are replaced by the maximum
absolute intensity of the cluster.
-1clust_smax rmm vmul = Same as -1clust, but all voxel intensities
within a cluster are replaced by the maximum
signed intensity of the cluster.
-1clust_size rmm vmul = Same as -1clust, but all voxel intensities
within a cluster are replaced by the size
of the cluster (in multiples of vmul).
-1clust_order rmm vmul= Same as -1clust, but all voxel intensities
within a cluster are replaced by the cluster
size index (largest cluster=1, next=2, ...).
-1clust_depth rmm vmul= Same as -1clust, but all voxel intensities
are replaced by the number of peeling operations
needed to remove them from the cluster.
That number is an indication of how deep a voxel
is inside a cluster
-isovalue = Clusters will be formed only from contiguous (in the
rmm sense) voxels that also have the same value.
N.B.: The normal method is to cluster all contiguous
nonzero voxels together.
-isomerge = Clusters will be formed from each distinct value
in the dataset; spatial contiguity will not be
used (but you still have to supply rmm and vmul
on the command line).
N.B.: 'Clusters' formed this way may well have components
that are widely separated!
* If rmm is given as 0, this means to use the 6 nearest neighbors to
form clusters of nonzero voxels.
* If vmul is given as zero, then all cluster sizes will be accepted
(probably not very useful!).
* If vmul is given as negative, then abs(vmul) is the minimum number
of voxels to keep.
The following commands produce erosion and dilation of 3D clusters.
These commands assume that one of the -1clust commands has been used.
The purpose is to avoid forming strange clusters with 2 (or more)
main bodies connected by thin 'necks'. Erosion can cut off the neck.
Dilation will minimize erosion of the main bodies.
Note: Manipulation of values inside a cluster (-1clust commands)
occurs AFTER the following two commands have been executed.
-1erode pv For each voxel, set the intensity to zero unless pv %
of the voxels within radius rmm are nonzero.
-1dilate Restore voxels that were removed by the previous
command if there remains a nonzero voxel within rmm.
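For example, a hypothetical sketch (reusing the stats+orig dataset from the
examples above) that thresholds, clusters, erodes at 75%, and then dilates:
3dmerge -dxyz=1 -1thresh 3.25 -1clust 1 40 -1erode 75 -1dilate \
-prefix e6.clust_erode stats+orig'[5]'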
The following filter options are mutually exclusive:
-1filter_mean rmm = Set each voxel to the average intensity of the
voxels within a radius of rmm.
-1filter_nzmean rmm = Set each voxel to the average intensity of the
non-zero voxels within a radius of rmm.
-1filter_max rmm = Set each voxel to the maximum intensity of the
voxels within a radius of rmm.
-1filter_amax rmm = Set each voxel to the maximum absolute intensity
of the voxels within a radius of rmm.
-1filter_smax rmm = Set each voxel to the maximum signed intensity
of the voxels within a radius of rmm.
-1filter_aver rmm = Same idea as '_mean', but implemented using a
new code that should be faster.
The following threshold filter options are mutually exclusive:
-t1filter_mean rmm = Set each correlation or threshold voxel to the
average of the voxels within a radius of rmm.
-t1filter_nzmean rmm = Set each correlation or threshold voxel to the
average of the non-zero voxels within
a radius of rmm.
-t1filter_max rmm = Set each correlation or threshold voxel to the
maximum of the voxels within a radius of rmm.
-t1filter_amax rmm = Set each correlation or threshold voxel to the
maximum absolute intensity of the voxels
within a radius of rmm.
-t1filter_smax rmm = Set each correlation or threshold voxel to the
maximum signed intensity of the voxels
within a radius of rmm.
-t1filter_aver rmm = Same idea as '_mean', but implemented using a
new code that should be faster.
-1mult factor = Multiply intensities by the given factor
-1zscore = If the sub-brick is labeled as a statistic from
a known distribution, it will be converted to
an equivalent N(0,1) deviate (or 'z score').
If the sub-brick is not so labeled, nothing will
be done. (An example is shown below.)
The above '-1' options are carried out in the order given above,
regardless of the order in which they are entered on the command line.
N.B.: The 3 '-1blur' options just provide different ways of
specifying the radius used for the blurring function.
The relationships among these specifications are
sigma = 0.57735027 * rms = 0.42466090 * fwhm
The requisite convolutions are done using FFTs; this is by
far the slowest operation among the editing options.
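For instance, a hypothetical sketch converting the t-statistic sub-brick from
the earlier examples into an equivalent N(0,1) z-score volume with -1zscore:
3dmerge -1zscore -prefix e7.zstat stats+orig'[5]'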
OTHER OPTIONS:
-nozero = Do NOT write the output dataset if it would be all zero.
-datum type = Coerce the output data to be stored as the given type,
which may be byte, short, or float.
N.B.: Byte data cannot be negative. If this datum type is chosen,
any negative values in the edited and/or merged dataset
will be set to zero.
-keepthr = When using 3dmerge to edit exactly one dataset of a
functional type with a threshold statistic attached,
normally the resulting dataset is of the 'fim'
(intensity only) type. This option tells 3dmerge to
copy the threshold data (unedited in any way) into
the output dataset.
N.B.: This option is ignored if 3dmerge is being used to
combine 2 or more datasets.
N.B.: The -datum option has no effect on the storage of the
threshold data. Instead use '-thdatum type'.
-doall = Apply editing and merging options to ALL sub-bricks
uniformly in a dataset.
N.B.: All input datasets must have the same number of sub-bricks
when using the -doall option.
N.B.: The threshold specific options (such as -1thresh,
-keepthr, -tgfisher, etc.) are not compatible with
the -doall command. Neither are the -1dindex or
the -1tindex options.
N.B.: All labels and statistical parameters for individual
sub-bricks are copied from the first dataset. It is
the responsibility of the user to verify that these
are appropriate. Note that sub-brick auxiliary data
can be modified using program 3drefit.
-quiet = Reduce the number of messages shown
-1dindex j = Uses sub-brick #j as the data source, and uses sub-brick
-1tindex k = #k as the threshold source. With these, you can operate
on any given sub-brick of the input dataset(s) to produce
as output a 1 brick dataset. If desired, a collection
of 1 brick datasets can later be assembled into a
multi-brick bucket dataset using program '3dbucket'
or into a 3D+time dataset using program '3dTcat'.
N.B.: If these options aren't used, j=0 and k=1 are the defaults
The following option allows you to specify a mask dataset that
limits the action of the 'filter' options to voxels that are
nonzero in the mask:
-1fmask mset = Read dataset 'mset' (which can include a
sub-brick specifier) and use the nonzero
voxels as a mask for the filter options.
Filtering calculations will not use voxels
that are outside the mask. If an output
voxel does not have ANY masked voxels inside
the rmm radius, then that output voxel will
be set to 0.
N.B.: * Only the -1filter_* and -t1filter_* options are
affected by -1fmask.
* Voxels NOT in the fmask will be set to zero in the
output when the filtering occurs. THIS IS NEW BEHAVIOR,
as of 11 Oct 2007. Previously, voxels not in the fmask,
but within 'rmm' of a voxel in the mask, would get a
nonzero output value, as those nearby voxels would be
combined (via whatever '-1f...' option was given).
* If you wish to restore this old behavior, where non-fmask
voxels can get nonzero output, then use the new option
'-1fm_noclip' in addition to '-1fmask'. The two comments
below apply to the case where '-1fm_noclip' is given!
* In the linear averaging filters (_mean, _nzmean,
and _expr), voxels not in the mask will not be used
or counted in either the numerator or denominator.
This can give unexpected results if you use '-1fm_noclip'.
For example, if the mask is designed to exclude the volume
outside the brain, then voxels exterior to the brain,
but within 'rmm', will have a few voxels inside the brain
included in the filtering. Since the sum of weights (the
denominator) is only over those few intra-brain
voxels, the effect will be to extend the significant
part of the result outward by rmm from the surface
of the brain. In contrast, without the mask, the
many small-valued voxels outside the brain would
be included in the numerator and denominator sums,
which would barely change the numerator (since the
voxel values are small outside the brain), but would
increase the denominator greatly (by including many
more weights). The effect in this case (no -1fmask)
is to make the filtering taper off gradually in the
rmm-thickness shell around the brain.
* Thus, if the -1fmask is intended to clip off non-brain
data from the filtering, its use should be followed by
a masking operation using 3dcalc:
3dmerge -1filter_aver 12 -1fm_noclip -1fmask mask+orig -prefix x input+orig
3dcalc -a x -b mask+orig -prefix y -expr 'a*step(b)'
rm -f x+orig.*
The desired result is y+orig - filtered using only
brain voxels (as defined by mask+orig), and with
the output confined to the brain voxels as well.
The following option allows you to specify an almost arbitrary
weighting function for 3D linear filtering:
-1filter_expr rmm expr
Defines a linear filter about each voxel of radius 'rmm' mm.
The filter weights are proportional to the expression evaluated
at each voxel offset in the rmm neighborhood. You can use only
these symbols in the expression:
r = radius from center
x = dataset x-axis offset from center
y = dataset y-axis offset from center
z = dataset z-axis offset from center
i = x-axis index offset from center
j = y-axis index offset from center
k = z-axis index offset from center
Example:
-1filter_expr 12.0 'exp(-r*r/36.067)'
This does a Gaussian filter over a radius of 12 mm. In this
example, the FWHM of the filter is 10 mm. [in general, the
denominator in the exponent would be 0.36067 * FWHM * FWHM.
This is one way to get a Gaussian blur combined with the
-1fmask option. The radius rmm=12 is chosen where the weights
get smallish.] Another example:
-1filter_expr 20.0 'exp(-(x*x+16*y*y+z*z)/36.067)'
which is a non-spherical Gaussian filter.
** For shorthand, you can also use the new option (11 Oct 2007)
-1filter_blur fwhm
which is equivalent to
-1filter_expr 1.3*fwhm 'exp(-r*r/(.36067*fwhm*fwhm))'
and will implement a Gaussian blur. The only reason to do
Gaussian blurring this way is if you also want to use -1fmask!
The following option lets you apply a 'Winsor' filter to the data:
-1filter_winsor rmm nw
The data values within the radius rmm of each voxel are sorted.
Suppose there are 'N' voxels in this group. We index the
sorted voxels as s[0] <= s[1] <= ... <= s[N-1], and we call the
value of the central voxel 'v' (which is also in array s[]).
If v < s[nw], then v is replaced by s[nw]
otherwise if v > s[N-1-nw], then v is replaced by s[N-1-nw]
otherwise v is unchanged
The effect is to increase 'too small' values up to some
middling range, and to decrease 'too large' values.
If N is odd, and nw=(N-1)/2, this would be a median filter.
In practice, I recommend that nw be about N/4; for example,
-dxyz=1 -1filter_winsor 2.5 19
is a filter with N=81 that gives nice results.
N.B.: This option is NOT affected by -1fmask
N.B.: This option is slow! and experimental.
The following option returns a rank value at each voxel in
the input dataset.
-1rank
If the input voxels were, say, 12 45 9 0 9 12 0
the output would be 2 3 1 0 1 2 0
This option is handy for turning FreeSurfer's segmentation
volumes into ROI volumes that can be easily colorized with AFNI.
For example:
3dmerge -1rank -prefix aparc+aseg_rank aparc+aseg.nii
To view aparc+aseg_rank+orig, use the ROI_128 colormap
and set the colorbar range to 128.
The -1rank option also outputs a 1D file that contains
the mapping from the input dataset to the ranked output.
Sub-brick float factors are ignored.
This option only works on datasets of integral values or
of integral data types. 'float' values are typecast to 'int'
before being ranked.
See also program 3dRank
MERGING OPTIONS APPLIED TO FORM THE OUTPUT DATASET:
[That is, different ways to combine results. The]
[following '-g' options are mutually exclusive! ]
-gmean = Combine datasets by averaging intensities
(including zeros) -- this is the default
-gnzmean = Combine datasets by averaging intensities
(not counting zeros)
-gmax = Combine datasets by taking max intensity
(e.g., -7 and 2 combine to 2)
-gamax = Combine datasets by taking max absolute intensity
(e.g., -7 and 2 combine to 7)
-gsmax = Combine datasets by taking max signed intensity
(e.g., -7 and 2 combine to -7)
-gcount = Combine datasets by counting number of 'hits' in
each voxel (see below for definition of 'hit')
-gorder = Combine datasets in order of input:
* If a voxel is nonzero in dataset #1, then
that value goes into the voxel.
* If a voxel is zero in dataset #1 but nonzero
in dataset #2, then the value from #2 is used.
* And so forth: the first dataset with a nonzero
entry in a given voxel 'wins'
-gfisher = Takes the arctanh of each input, averages these,
and outputs the tanh of the average. If the input
datum is 'short', then input values are scaled by
0.0001 and output values by 10000. This option
is for merging bricks of correlation coefficients
(an example is given after this list).
-nscale = If the output datum is shorts, don't do the scaling
to the max range [similar to 3dcalc's -nscale option]
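For example, a hypothetical sketch (file names are placeholders) averaging
per-subject correlation maps via the Fisher transform with -gfisher:
3dmerge -gfisher -prefix corr.group \
subj01.corr+tlrc subj02.corr+tlrc subj03.corr+tlrc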
MERGING OPERATIONS APPLIED TO THE THRESHOLD DATA:
[That is, different ways to combine the thresholds. If none of these ]
[are given, the thresholds will not be merged and the output dataset ]
[will not have threshold data attached. Note that the following '-tg']
[command line options are mutually exclusive, but are independent of ]
[the '-g' options given above for merging the intensity data values. ]
-tgfisher = This option is only applicable if each input dataset
is of the 'fico' or 'fith' types -- functional
intensity plus correlation or plus threshold.
(In the latter case, the threshold values are
interpreted as correlation coefficients.)
The correlation coefficients are averaged as
described by -gfisher above, and the output
dataset will be of the fico type if all inputs
are fico type; otherwise, the output datasets
will be of the fith type.
N.B.: The difference between the -tgfisher and -gfisher
methods is that -tgfisher applies to the threshold
data stored with a dataset, while -gfisher
applies to the intensity data. Thus, -gfisher
would normally be applied to a dataset created
from correlation coefficients directly, or from
the application of the -1thtoin option to a fico
or fith dataset.
OPTIONAL WAYS TO POSTPROCESS THE COMBINED RESULTS:
[May be combined with the above methods.]
[Any combination of these options may be used.]
-ghits count = Delete voxels that are not nonzero in at least
'count' datasets (a nonzero value is a 'hit')
-gclust rmm vmul = Form clusters with connection distance rmm
and clip off data not in clusters of
volume at least vmul microliters
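For instance, a hypothetical sketch (file names are placeholders) that counts
overlap across four single-subject masks, keeps only voxels hit in at least 3
of them, and clusters the survivors:
3dmerge -dxyz=1 -gcount -ghits 3 -gclust 1 40 -prefix overlap3 \
subj01.mask+tlrc subj02.mask+tlrc subj03.mask+tlrc subj04.mask+tlrc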
The '-g' and '-tg' options apply to the entire group of input datasets.
OPTIONS THAT CONTROL THE NAMES OF THE OUTPUT DATASET:
-session dirname = write output into given directory (default=./)
-prefix pname = use 'pname' for the output dataset prefix
(default=mrg)
NOTES:
** If only one dataset is read into this program, then the '-g'
options do not apply, and the output dataset is simply the
'-1' options applied to the input dataset (i.e., edited).
** A merged output dataset is ALWAYS of the intensity-only variety.
** You can combine the outputs of 3dmerge with other sub-bricks
using the program 3dbucket.
** Complex-valued datasets cannot be merged.
** This program cannot handle time-dependent datasets without -doall.
** Note that the input datasets are specified by their .HEAD files,
but that their .BRIK files must exist also!
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
** Input datasets using sub-brick selectors are treated as follows:
- 3D+time if the dataset is 3D+time and more than 1 brick is chosen
- otherwise, as bucket datasets (-abuc or -fbuc)
(in particular, fico, fitt, etc. datasets are converted to fbuc)
** If you are NOT using -doall, and choose more than one sub-brick
with the selector, then you may need to use -1dindex to further
pick out the sub-brick on which to operate (why you would do this
I cannot fathom). If you are also using a thresholding operation
(e.g., -1thresh), then you also MUST use -1tindex to choose which
sub-brick counts as the 'threshold' value. When used with sub-brick
selection, 'index' refers to the dataset AFTER it has been read in:
-1dindex 1 -1tindex 3 'dset+orig[4..7]'
means to use the #5 sub-brick of dset+orig as the data for merging
and the #7 sub-brick of dset+orig as the threshold values.
** The above example would better be done with
-1tindex 1 'dset+orig[5,7]'
since the default data index is 0. (You would only use -1tindex if
you are actually using a thresholding operation.)
** -1dindex and -1tindex apply to all input datasets.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dMSE
Usage: 3dMSE [options] dset
Computes voxelwise multi-scale entropy.
Options:
-polort m = Remove polynomial trend of order 'm', for m=-1..3.
[default is m=1; removal is by least squares].
Using m=-1 means no detrending; this is only useful
for data/information that has been pre-processed.
-autoclip = Clip off low-intensity regions in the dataset,
-automask = so that the correlation is only computed between
high-intensity (presumably brain) voxels. The
mask is determined the same way that 3dAutomask works.
-mask mmm = Mask to define 'in-brain' voxels. Reducing the number
of voxels included in the calculation will
significantly speed up the calculation. Consider using
a mask to constrain the calculations to the grey matter
rather than the whole brain. This is also preferable
to using -autoclip or -automask.
-prefix p = Save output into a dataset with prefix 'p'
[default prefix is 'MSE'].
-scales N = The number of scales to be used in the calculation.
[default is 5].
-entwin w = The window size used in the calculation.
[default is 2].
-rthresh r = The radius threshold for determining if values are the
same in the SampleEn calculation, in fractions of the
standard deviation.
[default is .5].
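A minimal usage sketch (the mask and input file names here are placeholders):
3dMSE -polort 1 -mask GM_mask+tlrc -scales 5 -entwin 2 -rthresh 0.5 \
-prefix MSE rest_preproc+tlrc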
Notes:
* The output dataset is a bucket type of floats.
-- RWCox - 31 Jan 2002 and 16 Jul 2010
-- Cameron Craddock - 26 Sept 2015
=========================================================================
* This binary version of 3dMSE is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dMSS
================== Welcome to 3dMSS ==================
Program for Voxelwise Multilevel Smoothing Spline (MSS) Analysis
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Version 1.0.9, May 1, 2025
Author: Gang Chen (gangchen@mail.nih.gov)
Website - https://afni.nimh.nih.gov/gangchen_homepage
SSCC/NIMH, National Institutes of Health, Bethesda MD 20892, USA
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
------
Multilevel Smoothing-Spline (MSS) Modeling
The linearity assumption for a quantitative variable, as adopted in common
practice, may be a reasonable approximation, especially when the variable
is confined within a narrow range, but it can be inappropriate under some
circumstances, such as when the variable's effect is non-monotonic or tortuous.
As a more flexible and adaptive approach, multilevel smoothing splines
(MSS) offer a more powerful analytical tool for population-level
neuroimaging data analysis that involves one or more quantitative
predictors. More theoretical discussion can be found in
Chen, G., Nash, T.A., Cole, K.M., Kohn, P.D., Wei, S.-M., Gregory, M.D.,
Eisenberg, D.P., Cox, R.W., Berman, K.F., Shane Kippenhan, J., 2021.
Beyond linearity in neuroimaging: Capturing nonlinear relationships with
application to longitudinal studies. NeuroImage 233, 117891.
https://doi.org/10.1016/j.neuroimage.2021.117891
Chen, G., Taylor, P.A., Reynolds, R.C., Leibenluft, E., Pine, D.S.,
Brotman, M.A., Pagliaccio, D., Haller, S.P., 2023. BOLD Response is more
than just magnitude: Improving detection sensitivity through capturing
hemodynamic profiles. NeuroImage 277, 120224.
https://doi.org/10.1016/j.neuroimage.2023.120224
To be able to run 3dMSS, one needs to have the following R packages
installed: "gamm4" and "snow". To install these R packages, run the
following command at the terminal:
rPkgsInstall -pkgs "gamm4,snow"
Alternatively you may install them in R:
install.packages("gamm4")
install.packages("snow")
It is best to go through all the examples below to get the hang of the MSS
scripting interface. Once the 3dMSS script is constructed, it can be run
by copying and pasting it into the terminal. Alternatively (and probably better),
save the script as a text file, for example called MSS.txt, and execute
it with one of the following (assuming the tcsh shell),
nohup tcsh -x MSS.txt &
or,
nohup tcsh -x MSS.txt > diary.txt &
or,
nohup tcsh -x MSS.txt |& tee diary.txt &
The advantage of the latter commands is that the progress is saved into
the text file diary.txt and, if anything goes awry, can be examined later.
Example 1 --- simplest case: one group of subjects with a between-subject
quantitative variable that does not vary within subject. MSS analysis is
set up to model the trajectory or trend along age, and can be specified
through the option -mrr, which is solved via a model formulation of ridge
regression. The following example script assumes that 'age' is
a between-subjects variable (not varying within subject):
3dMSS -prefix MSS -jobs 16 \
-mrr 's(age,k=10)' \
-qVars 'age' \
-mask myMask.nii \
-bounds -2 2 \
-prediction @pred.txt \
-dataTable @data.txt
The part 's(age,k=10)' indicates that 'age' is modeled via a smooth curve.
The minimum number of samples should be 6 or more. 'k=10' inside the model
specification s() sets the number of knots. If the number of data samples (e.g.,
age) is less than 10, set k to the number of available samples (e.g., 8).
No empty space is allowed in the model formulation. With the option
-bounds, values beyond [-2, 2] will be treated as outliers and considered
as missing. If you want to set a range, choose one that makes sense for
your specific input data.
The file pred.txt lists all the explanatory variables (excluding lower-level variables
such as subject) for prediction. The file should be in a data.frame format as below:
label age
time1 1
time2 2
time3 3
...
time8 8
time9 9
time10 10
...
The file data.txt stores the information for all the variables and input data in a
data.frame format. For example:
Subj age InputFile
S1 1 ~/alex/MSS/S1.nii
S2 2 ~/alex/MSS/S2.nii
...
In the output the first sub-brick shows the statistical evidence in the
form of a chi-square distribution with 2 degrees of freedom (the 2 DFs do not mean
anything; they are just for the convenience of information coding). This sub-brick is
the statistical evidence for the trajectory of the group. If you want to
estimate the trend at the population level, use the option -prediction with a
table that codes the ages you would like to track the trend. In the output
there is one predicted value for each age plus the associated uncertainty
(standard error). For example, with 10 age values, there will be 10 predicted
values plus 10 standard errors. The sub-bricks for prediction and standard
errors are interleaved.
Example 2 --- Largely same as Example 1, but with 'age' as a within-subject
quantitative variable (varying within each subject). The model is better
specified by replacing the line of -mrr in Example 1 with the following
two lines:
-mrr 's(age,k=10)+s(Subj,bs="re")' \
-vt Subj 's(Subj)' \
The part 's(age,k=10)' indicates that 'age' is modeled via a smooth curve.
The minimum number of samples should be 6 or more. 'k=10' inside the model
specification s() sets the number of knots. If the number of data samples (e.g.,
age) is less than 10, set k to the number of available samples (e.g., 8).
The second term 's(Subj,bs="re")' in the model specification means that
each subject is allowed to have a varying intercept or random effect ('re').
To estimate the smooth trajectory through the option -prediction, the option
-vt has to be included in this case to indicate the varying term (usually
subjects). That is, if prediction is desirable, one has to explicitly
declare the variable (e.g., Subj) that is associated with the varying term
(e.g., s(Subj)). No empty space is allowed in the model formulation or in
the varying term.
The full script version is
3dMSS -prefix MSS -jobs 16 \
-mrr 's(age,k=10)+s(Subj,bs="re")' \
-vt Subj 's(Subj)' \
-qVars 'age' \
-mask myMask.nii \
-bounds -2 2 \
-prediction @pred.txt \
-dataTable @data.txt
All the rest remains the same as Example 1.
Alternatively, this model with varying subject-level intercept can be
specified with
-lme 's(age,k=10)' \
-ranEff 'list(Subj=~1)' \
which is solved through the linear mixed-effect (lme) platform. The -vt is
not needed when making prediction through the option -prediction. The two
specifications, -mrr and -lme, would render similar results, but the
runtime may differ depending on the amount of data and model complexity.
Example 3 --- two groups and one quantitative variable (age). MSS analysis is
set up to compare the trajectory or trend along age between the two groups,
which are quantitatively coded as -1 and 1. For example, if the two groups
are females and males, you can code females as -1 and males as 1. The following
script applies to the situation when the quantitative variable does not vary
within subject,
3dMSS -prefix MSS -jobs 16 \
-mrr 's(age,k=10)+s(age,k=10,by=grp)' \
-qVars 'age' \
-mask myMask.nii \
-bounds -2 2 \
-prediction @pred.txt \
-dataTable @data.txt
The part 's(age,k=10)' indicates that 'age' is modeled via a smooth curve.
The minimum number of samples should be 6 or more. 'k=10' inside the model
specification s() sets the number of knots. If the number of data samples (e.g.,
age) is less than 10, set k to the number of available samples (e.g., 8).
Use the script below when the quantitative variable varies within subject,
3dMSS -prefix MSS -jobs 16 \
-mrr 's(age,k=10)+s(age,k=10,by=grp)+s(Subj,bs="re")' \
-vt Subj 's(Subj)' \
-qVars 'age' \
-mask myMask.nii \
-bounds -2 2 \
-prediction @pred.txt \
-dataTable @data.txt
or an LME version:
3dMSS -prefix MSS -jobs 16 \
-lme 's(age,k=10)+s(age,k=10,by=grp)' \
-ranEff 'list(Subj=~1)' \
-qVars 'age' \
-mask myMask.nii \
-bounds -2 2 \
-prediction @pred.txt \
-dataTable @data.txt
Example 4 --- modeling hemodynamic response: this 3dMSS script is
intended to (1) assess the presence of HRF for one group or (2) compare
HRFs between two conditions for one group. For the first case, each HRF at
the individual level is characterized at 14 time points with a time
resolution TR = 1.25s. In the second case, obtain the HRF contrast
between the two conditions. For either case, each individual should have
14 input files. Two covariates are considered: sex and age.
3dMSS -prefix output -jobs 16 \
-lme 'sex+age+s(TR,k=10)' \
-ranEff 'list(subject=~1)' \
-qVars 'sex,age,TR' \
-prediction @HRF.table \
-dataTable @smooth-HRF.table
The part 's(TR,k=10)' indicates that 'TR' is modeled via a smooth curve.
The minimum number of samples should be 6 or more. 'k=10' inside the model
specification s() sets the number of knots. If the number of data samples (e.g.,
TR) is less than 10, set k to the number of available samples (e.g., 8).
The output filename and number of CPUs for parallelization are
specified through -prefix and -jobs, respectively. The expression
s() in the model specification indicator '-lme' represents the
smooth function, and the term 's(TR)' codes the overall HRF profile.
The term 'list(subject=~1)' under the option '-ranEff'
indicates the random effects for the cross-individual variability in
intercept. The number of thin plate spline bases was set to the
default K = 10. The option '-qVars' identifies quantitative
variables (TR and age in this case, plus dummy-coded sex).
The last two specifiers -prediction and -dataTable list one
table for HRF prediction and another for input data information,
respectively. The input file 'smooth-HRF.table' is structured in a
long data frame format:
subject age sex TR InputFile
s1 29 1 0 s1.Inc.b0.nii
s1 29 1 1 s1.Inc.b1.nii
s1 29 1 2 s1.Inc.b2.nii
s1 29 1 3 s1.Inc.b3.nii
s1 29 1 4 s1.Inc.b4.nii
...
The factor 'sex' is dummy-coded with 1s and -1s. The following
table as the input file 'HRF.table' provides the specifications for
predicted HRFs:
label age sex TR
time1 6.2 1 0.00
time2 6.2 1 0.25
time3 6.2 1 0.50
...
Example 5 --- modeling hemodynamic response: this 3dMSS script is
intended to (1) compare HRFs under one task condition between the
two groups of patients (PT) and healthy volunteers (HV) at the
population level, or (2) assess the interaction between group and
task condition (2 levels). For the second case, obtain the HRF
contrast at each time point. In either case, if the HRF is represented
with 14 time points with a time resolution TR = 1.25s, each individual
should have 14 input files. Two covariates are considered: sex and age.
3dMSS -prefix output -jobs 16 \
-lme 'sex+age+s(TR,k=10)+s(TR,k=10,by=group)' \
-ranEff 'list(subject=~1)' \
-qVars 'sex,age,TR,group' \
-prediction @HRF.table \
-dataTable @smooth-HRF.table
The part 's(TR,k=10)' indicates that 'TR' is modeled via a smooth curve.
The minimum number of samples should be 6 or more. 'k=10' inside the model
specification s() sets the number of knots. If the number of data samples (e.g.,
TR) is less than 10, set k to the number of available samples (e.g., 8).
The output filename and number of CPUs for parallelization are
specified through -prefix and -jobs, respectively. The expression
s() in the model specification indicator '-lme' represents the
smooth function, and the two terms 's(TR)' and 's(TR,by=group)' code
the overall HRF profile and the HRF difference between the two
groups. The term 'list(subject=~1)' under the option '-ranEff'
indicates the random effects for the cross-individual variability in
intercept. The number of thin plate spline bases was set to the
default K = 10. The option '-qVars' identifies quantitative
variables (TR and age in this case plus dummy-coded sex and
group). The last two specifiers -prediction and -dataTable list one
table for HRF prediction and another for input data information,
respectively. The input file 'smooth-HRF.table' is structured in a
long data frame format:
subject age sex group TR InputFile
s1 29 1 1 0 s1.Inc.b0.nii
s1 29 1 1 1 s1.Inc.b1.nii
s1 29 1 1 2 s1.Inc.b2.nii
s1 29 1 1 3 s1.Inc.b3.nii
s1 29 1 1 4 s1.Inc.b4.nii
...
Both 'group' and 'sex' are dummy-coded with 1s and -1s. The following
table as the input file 'HRF.table' provides the specifications for
predicted HRFs:
label age sex group TR
g1.t1 6.2 1 1 0.00
g1.t2 6.2 1 1 0.25
g1.t3 6.2 1 1 0.50
...
g2.t1 3.5 -1 -1 0.00
g2.t2 3.5 -1 -1 0.25
g2.t3 3.5 -1 -1 0.50
...
Options in alphabetical order:
------------------------------
-bounds lb ub: This option is for outlier removal. Two numbers are expected from
the user: the lower bound (lb) and the upper bound (ub). The input data will
be confined within [lb, ub]: any values in the input data that are beyond
the bounds will be removed and treated as missing. Make sure the first number
is less than the second. You do not have to use this option to censor your data!
-cio: Use AFNI's C io functions, which is the default. Alternatively -Rio
can be used.
-dataTable TABLE: List the data structure with a header as the first line.
NOTE:
1) This option has to occur last in the script; that is, no other
options are allowed thereafter. Each line should end with a backslash
except for the last line.
2) The order of the columns should not matter except that the last
column has to be the one for input files, 'InputFile'. Each row should
contain only one input file in the table of long format (cf. wide format)
as defined in R. Input files can be in AFNI, NIfTI or surface format.
AFNI files can be specified with sub-brick selector (square brackets
[] within quotes) specified with a number or label.
3) It is fine to have variables (or columns) in the table that are
not modeled in the analysis.
4) When the table is part of the script, a backslash is needed at the end
of each line to indicate the continuation to the next line. Alternatively,
one can save the content of the table as a separate file, e.g.,
calling it table.txt, and then in the script specify the data with
'-dataTable @table.txt'. However, when the table is provided as a separate
file, do NOT put any quotes around the square brackets for each sub-brick,
otherwise the program would not properly read the files, unlike the
situation when quotes are required if the table is included as part of the
script. Backslash is also not needed at the end of each line, but it would
not cause any problem if present. This option of separating the table from
the script is useful: (a) when there are many input files so that
the program complains with an 'Arg list too long' error; (b) when
you want to try different models with the same dataset.
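A schematic illustration of the quoting difference (with a hypothetical
file and sub-brick label): a sub-brick selection that would be written as
    s1+tlrc'[contrast_beta]'
inside the script should be written as
    s1+tlrc[contrast_beta]
when the table is saved in a separate file such as table.txt.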
-dbgArgs: This option will enable R to save the parameters in a
file called .3dMSS.dbg.AFNI.args in the current directory
so that debugging can be performed.
-help: this help message
-IF var_name: var_name is used to specify the column name that is designated for
   input files of effect estimates. The default (when this option is not invoked)
   is 'InputFile', in which case the column header has to be exactly 'InputFile'.
   This input-file column for effect estimates has to be the last column.
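   For example (with a hypothetical column name), '-IF EffectFile' tells the
   program that the input files of effect estimates are listed in the last
   column under the header 'EffectFile' instead of 'InputFile'.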
-jobs NJOBS: On a multi-processor machine, parallel computing will speed
up the program significantly.
Choose 1 for a single-processor computer.
-lme FORMULA: Specify the fixed effect components of the model. The
expression FORMULA with more than one variable has to be surrounded
within (single or double) quotes. Variable names in the formula
should be consistent with the ones used in the header of -dataTable.
See examples in the help for details.
-mask MASK: Process voxels inside this mask only.
Default is no masking.
-mrr FORMULA: Specify the model formulation through multilevel smoothing splines.
Expression FORMULA with more than one variable has to be surrounded
within (single or double) quotes. Variable names in the formula
should be consistent with the ones used in the header of -dataTable.
The nonlinear trajectory is specified through the expression of s(x,k=?)
where s() indicates a smooth function, x is a quantitative variable with
which one would like to trace the trajectory and k is the number of smooth
splines (knots). The default (when k is missing) for k is 10, which is good
enough most of the time when there are more than 10 data points of x. When
there are fewer than 10 data points of x, choose a value of k slightly less
than the number of data points.
-prediction TABLE: Provide a data table so that predicted values can be generated for
   graphical illustration. The table should usually have a structure similar to the input
   file, except that 1) the first column is reserved for effect labels, which will be used as
   sub-brick names in the output for the predicted values; 2) columns for the varying
   smoothing terms (e.g., subject) and the response variable (i.e., Y) should not be included.
   Try to specify equally-spaced values with a small increment for the quantitative variable of the
   modeled trajectory (e.g., age) so that smooth curves can be plotted after the
   analysis. See the Examples in the help for a couple of specific tables used for predictions.
-prefix PREFIX: Output file name. For AFNI format, provide prefix only,
with no view+suffix needed. Filename for NIfTI format should have
.nii attached (otherwise the output would be saved in AFNI format).
-qVars variable_list: Identify quantitative variables (or covariates) with
this option. The list with more than one variable has to be
separated with comma (,) without any other characters such as
spaces and should be surrounded within (single or double) quotes.
For example, -qVars "Age,IQ"
WARNINGS:
1) Centering a quantitative variable through -qVarsCenters is
very critical when other fixed effects are of interest.
2) Between-subjects covariates are generally acceptable.
However EXTREME caution should be taken when the groups
differ substantially in the average value of the covariate.
-ranEff FORMULA: Specify the random effect components of the model. The
expression FORMULA with more than one variable has to be surrounded
within (single or double) quotes. Variable names in the formula
should be consistent with the ones used in the header of -dataTable.
In the MSS context the simplest model is "list(Subj=~1)" in which the
varying or random effect from each subject is incorporated in the model.
Each random-effects factor is specified within parentheses per formula
convention in R.
-Rio: Use R's io functions. The alternative is -cio.
-sdiff variable_list: This option is used to specify a factor for group comparisons.
For example, if one wants to compare age trajectory between two groups through
"s(age,by=group)" in model specification, use "-sdiff 'group'" to generate
the predicted trajectory of group differences through the values provided in the
prediction table under the option -prediction. Currently it only allows for one group
comparison. Perform separate analyses if more than one group comparison is
desirable.
-show_allowed_options: list of allowed options
-vt var formulation: This option is for specifying varying smoothing terms. Two components
are required: the first one 'var' indicates the variable (e.g., subject) around
which the smoothing will vary while the second component specifies the smoothing
formulation (e.g., s(age,subject)). When there are no varying smoothing terms (e.g.,
no within-subject variables), do not use this option.
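For example (a hypothetical command-line fragment, assuming 'subject' is the
varying unit and 'age' is the quantitative variable being smoothed):
    -vt subject 's(age,subject)'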
AFNI program: 3dMultiThresh
Program to apply a multi-threshold (mthresh) dataset
to an input dataset.
Usage:
3dMultiThresh OPTIONS
OPTIONS (in any order)
----------------------
-mthresh mmm = Multi-threshold dataset from 3dXClustSim
(usually via running '3dttest++ -ETAC').
*OR*
globalETAC.mthresh.*.niml threshold file
-input ddd = Dataset to threshold.
-1tindex iii = Index (sub-brick) on which to threshold.
-signed +/- = If the .mthresh.nii file from 3dXClustSim
was created using 1-sided thresholding,
this option tells which sign to choose when
doing voxel-wise thresholding: + or -.
++ If the .mthresh.nii file was created using
2-sided thresholding, this option is ignored.
-pos = Same as '-signed +'
-neg = Same as '-signed -'
-prefix ppp = prefix for output dataset
++ Can be 'NULL' to get no output dataset
-maskonly = Instead of outputting a thresholded version
of the input dataset, just output a 0/1 mask
dataset of voxels that survive the process.
-allmask qqq = Write out a multi-volume dataset with prefix 'qqq'
where each volume is the binary mask of voxels that
pass ONE of the tests. This dataset can be used
to see which tests mattered where in the brain.
++ To be more clear, there will be one sub-brick for
each p-value threshold coded in the '-mthresh'
dataset (e.g., p=0.0100 and p=0.0001).
++ In each sub-brick, the value will be between
0 and 7, and is the sum of these:
1 == hpow=0 was declared 'active'
2 == hpow=1 was declared 'active'
4 == hpow=2 was declared 'active'
Of course, an hpow value will only be tested
if it is so encoded in the '-mthresh' dataset.
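++ For example, a sub-brick value of 5 = 1 + 4 would mean
that the hpow=0 and hpow=2 tests (but not hpow=1) were
declared 'active' at that voxel.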
-nozero = This option prevents the output of a
dataset if it would be all zero
-quiet = Turn off progress report messages
The number of surviving voxels will be written to stdout.
It can be captured in a csh script by a command such as
set nhits = `3dMultiThresh OPTIONS`
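A complete invocation might look like the following (a schematic sketch
with hypothetical file names):
  3dMultiThresh -mthresh Clust.mthresh.nii \
                -input  ttest_result+tlrc  \
                -1tindex 1                 \
                -prefix ttest_mthresh
This thresholds sub-brick #1 of the input with the multi-threshold
dataset and writes the surviving voxels to 'ttest_mthresh'.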
Meant to be used in conjunction with program 3dXClustSim,
which is in turn meant to be used with program 3dttest++ -- RWCox
AFNI program: 3dMVM
Welcome to 3dMVM ~1~
AFNI Group Analysis Program with Multi-Variate Modeling Approach
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Version 4.2.2, May 30, 2024
Author: Gang Chen (gangchen@mail.nih.gov)
Website - https://afni.nimh.nih.gov/MVM
SSCC/NIMH, National Institutes of Health, Bethesda MD 20892
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Usage: ~1~
------
3dMVM is a group-analysis program that performs traditional ANOVA- and ANCOVA-
style computations. In addition, it can run multivariate modeling in the sense
of multiple simultaneous response variables. For univariate analysis, no bound
is imposed on the numbers of explanatory variables, and these variables can be
either categorical (factor) or numerical/quantitative (covariate). F-statistics
for all main effects and interactions are automatically included in the output.
In addition, general linear tests (GLTs) can be requested via symbolic coding.
Input files for 3dMVM can be in AFNI, NIfTI, or surface (niml.dset) format.
Note that unequal numbers of subjects across groups are allowed, but scenarios
with missing data for a within-subject factor are better modeled with 3dLME or
3dLMEr. Cases with quantitative variables (covariates) that vary across the
levels of a within-subject variable are also better handled with 3dLME or 3dLMEr.
Computational cost with 3dMVM is higher relative to 3dttest++ or 3dANOVAx, but
it has the capability to correct for sphericity violations when within-subject
factors with more than two levels are involved.
Please cite: ~1~
If you want to cite the analysis approach for AN(C)OVA, use the following:~2~
Chen, G., Adleman, N.E., Saad, Z.S., Leibenluft, E., Cox, R.W. (2014).
Applications of Multivariate Modeling to Neuroimaging Group Analysis: A
Comprehensive Alternative to Univariate General Linear Model. NeuroImage 99,
571-588. 10.1016/j.neuroimage.2014.06.027
https://afni.nimh.nih.gov/pub/dist/HBM2014/Chen_in_press.pdf
For group analysis with effect estimates from multiple basis functions, cite: ~2~
Chen, G., Saad, Z.S., Adleman, N.E., Leibenluft, E., Cox, R.W. (2015).
Detecting the subtle shape differences in hemodynamic responses at the
group level. Front. Neurosci., 26 October 2015.
http://dx.doi.org/10.3389/fnins.2015.00375
Installation requirements: ~1~
In addition to R installation, the following two R packages need to be acquired
in R first before running 3dMVM: "afex" and "phia". In addition, the "snow" package
is also needed if one wants to take advantage of parallel computing. To install
these packages, run the following command at the terminal:
rPkgsInstall -pkgs ALL
Alternatively, you may install them in R:
install.packages("afex")
install.packages("phia")
install.packages("snow")
More details about 3dMVM can be found at
https://afni.nimh.nih.gov/MVM
Running: ~1~
Once the 3dMVM command script is constructed, it can be run by copying and
pasting to the terminal. Alternatively (and probably better), you can save the
script as a text file, for example called MVM.txt, and execute it with the
following (assuming the tcsh shell),
tcsh -x MVM.txt &
or,
tcsh -x MVM.txt > diary.txt &
tcsh -x MVM.txt |& tee diary.txt &
The advantage of the latter commands is that the progress is saved into
the text file diary.txt and, if anything goes awry, can be examined later.
Thanks to the R community, Henrik Singmann, and Helios de Rosario for the
strong technical support.
--------------------------------
Examples: ~1~
Example 1 --- 3 between-subjects and 2 within-subject variables: ~2~
Three between-subjects (genotype, sex, and scanner) and two within-subject
(condition and emotion) variables.
3dMVM -prefix Example1 -jobs 4 \
-bsVars 'genotype*sex+scanner' \
-wsVars "condition*emotion" \
-mask myMask+tlrc \
-SS_type 2 \
-num_glt 14 \
-gltLabel 1 face_pos_vs_neg -gltCode 1 'condition : 1*face emotion : 1*pos -1*neg' \
-gltLabel 2 face_emot_vs_neu -gltCode 2 'condition : 1*face emotion : 1*pos +1*neg -2*neu' \
-gltLabel 3 sex_by_condition_interaction -gltCode 3 'sex : 1*male -1*female condition : 1*face -1*house' \
-gltLabel 4 3way_interaction -gltCode 4 'sex : 1*male -1*female condition : 1*face -1*house emotion : 1*pos -1*neg' \
...
-num_glf 3 \
-glfLabel 1 male_condXEmo -glfCode 1 'sex : 1*male condition : 1*face -1*house emotion : 1*pos -1*neg & 1*pos -1*neu' \
-glfLabel 2 face_sexXEmo -glfCode 2 'sex : 1*male -1*female condition : 1*face emotion : 1*pos -1*neg & 1*pos -1*neu' \
-glfLabel 3 face_sex2Emo -glfCode 3 'sex : 1*male & 1*female condition : 1*face emotion : 1*pos -1*neg & 1*pos -1*neu' \
-dataTable \
Subj genotype sex scanner condition emotion InputFile \
s1 TT male scan1 face pos s1+tlrc'[face_pos_beta]' \
s1 TT male scan1 face neg s1+tlrc'[face_neg_beta]' \
s1 TT male scan1 face neu s1+tlrc'[face_neu_beta]' \
s1 TT male scan1 house pos s1+tlrc'[house_pos_beta]' \
...
s68 TN female scan2 house pos s68+tlrc'[house_pos_beta]' \
s68 TN female scan2 house neg s68+tlrc'[house_neg_beta]' \
s68 TN female scan2 house neu s68+tlrc'[house_neu_beta]'
NOTE: ~3~
1) The 3rd GLT is for the 2-way 2 x 2 interaction between sex and condition, which
is essentially a t-test (or one degree of freedom for the numerator of F-statistic).
Multiple degrees of freedom for the numerator of F-statistic can be obtained through
option -glfCode (see GLFs #1, #2, and #3).
2) Similarly, the 4th GLT is a 3-way 2 x 2 x 2 interaction, which is a partial (not full)
interaction between the three factors because 'emotion' has three levels. The F-test for
the full 2 x 2 x 3 interaction is automatically provided in the 3dMVM output.
3) The three GLFs show the user how to specify sub-interactions.
4) Option '-SS_type 2' specifies the hierarchical type for the sums of squares in the
omnibus F-statistics in the output. See more details in the help.
--------------------------------
Example 2 --- 2 between-subjects, 1 within-subject, 2 quantitative variables: ~2~
Two between-subjects (genotype and sex), one within-subject
(emotion) factor, plus two quantitative variables (age and IQ).
3dMVM -prefix Example2 -jobs 24 \
-mask myMask+tlrc \
-bsVars "genotype*sex+age+IQ" \
-wsVars emotion \
-qVars "age,IQ" \
-qVarCenters '25,105' \
-num_glt 10 \
-gltLabel 1 pos_F_vs_M -gltCode 1 'sex : 1*female -1*male emotion : 1*pos' \
-gltLabel 2 age_pos_vs_neg -gltCode 2 'emotion : 1*pos -1*neg age :' \
-gltLabel 3 age_pos_vs_neg -gltCode 3 'emotion : 1*pos -1*neg age : 5' \
-gltLabel 4 genotype_by_sex -gltCode 4 'genotype : 1*TT -1*NN sex : 1*male -1*female' \
-gltLabel 5 genotype_by_sex_emotion -gltCode 5 'genotype : 1*TT -1*NN sex : 1*male -1*female emotion : 1*pos -1*neg' \
...
-dataTable \
Subj genotype sex age IQ emotion InputFile \
s1 TT male 24 107 pos s1+tlrc'[pos_beta]' \
s1 TT male 24 107 neg s1+tlrc'[neg_beta]' \
s1 TT male 24 107 neu s1+tlrc'[neu_beta]' \
...
s63 NN female 29 110 pos s63+tlrc'[pos_beta]' \
s63 NN female 29 110 neg s63+tlrc'[neg_beta]' \
s63 NN female 29 110 neu s63+tlrc'[neu_beta]'
NOTE: ~3~
1) The 2nd GLT shows the age effect (slope) while the 3rd GLT reveals the contrast
between the emotions at the age of 30 (5 above the center). On the other hand,
all the other GLTs (1st, 4th, and 5th) should be interpreted at the center Age
value, 25 years old.
2) The 4th GLT is for the 2-way 2 x 2 interaction between genotype and sex, which
is essentially a t-test (or one degree of freedom for the numerator of the F-statistic).
Multiple degrees of freedom for the numerator of the F-statistic are currently unavailable.
3) Similarly, the 5th GLT is a 3-way 2 x 2 x 2 interaction, which is a partial (not full)
interaction between the three factors because 'emotion' has three levels. The F-test for
the full 2 x 2 x 3 interaction is automatically provided in the 3dMVM output.
---------------------------------
Example 3 --- Getting more complicated: ~2~
BOLD response was modeled with multiple basis functions at individual
subject level. In addition, there are one between-subjects (Group) and one within-
subject (Condition) variable. Furthermore, the variable corresponding to the number
of basis functions, Time, is also a within-subject variable. In the end, the F-
statistics for the interactions of Group:Condition:Time, Group:Time, and
Condition:Time are of specific interest. And these interactions can be further
explored with GLTs in 3dMVM.
3dMVM -prefix Example3 -jobs 12 \
-mask myMask+tlrc \
-bsVars Group \
-wsVars 'Condition*Time' \
-num_glt 32 \
-gltLabel 1 old_t0 -gltCode 1 'Group : 1*old Time : 1*t0' \
-gltLabel 2 old_t1 -gltCode 2 'Group : 1*old Time : 1*t1' \
-gltLabel 3 old_t2 -gltCode 3 'Group : 1*old Time : 1*t2' \
-gltLabel 4 old_t3 -gltCode 4 'Group : 1*old Time : 1*t3' \
-gltLabel 5 yng_t0 -gltCode 5 'Group : 1*yng Time : 1*t0' \
-gltLabel 6 yng_t1 -gltCode 6 'Group : 1*yng Time : 1*t1' \
-gltLabel 7 yng_t2 -gltCode 7 'Group : 1*yng Time : 1*t2' \
-gltLabel 8 yng_t3 -gltCode 8 'Group : 1*yng Time : 1*t3' \
...
-gltLabel 17 old_face_t0 -gltCode 17 'Group : 1*old Condition : 1*face Time : 1*t0' \
-gltLabel 18 old_face_t1 -gltCode 18 'Group : 1*old Condition : 1*face Time : 1*t1' \
-gltLabel 19 old_face_t2 -gltCode 19 'Group : 1*old Condition : 1*face Time : 1*t2' \
-gltLabel 20 old_face_t3 -gltCode 20 'Group : 1*old Condition : 1*face Time : 1*t3' \
...
-dataTable \
Subj Group Condition Time InputFile \
s1 old face t0 s1+tlrc'[face#0_beta]' \
s1 old face t1 s1+tlrc'[face#1_beta]' \
s1 old face t2 s1+tlrc'[face#2_beta]' \
s1 old face t3 s1+tlrc'[face#3_beta]' \
...
s40 yng house t0 s40+tlrc'[house#0_beta]' \
s40 yng house t1 s40+tlrc'[house#1_beta]' \
s40 yng house t2 s40+tlrc'[house#2_beta]' \
s40 yng house t3 s40+tlrc'[house#3_beta]'
NOTE: ~3~
The model for this analysis could equivalently be set up as
'Group*Condition*Time'.
Options: ~1~
Options in alphabetical order:
------------------------------
-bsVars FORMULA: Specify the fixed effects for between-subjects factors
and quantitative variables. When no between-subject factors
are present, simply put 1 for FORMULA. The expression FORMULA
with more than one variable has to be surrounded within (single or
double) quotes. No spaces are allowed in the FORMULA expression.
Variable names in the formula should be consistent with the ones
used in the header underneath -dataTable. A+B represents the
additive effects of A and B, A:B is the interaction between A
and B, and A*B = A+B+A:B. The effects of within-subject
factors, if present under -wsVars are automatically assumed
to interact with the ones specified here. Subject as a variable
should not occur in the model specification here.
-cio: Use AFNI's C io functions, which is the default. Alternatively -Rio
can be used.
-dataTable TABLE: List the data structure with a header as the first line.
NOTE:
1) This option has to occur last; that is, no other options are
allowed thereafter. Each line should end with a backslash except for
the last line.
2) The first column is fixed and reserved with label 'Subj', and the
last is reserved for 'InputFile'. Each row should contain only one
effect estimate in the table of long format (cf. wide format) as
defined in R. The level labels of a factor should contain at least
one character. Input files can be in AFNI, NIfTI or surface format.
AFNI files can be specified with sub-brick selector (square brackets
[] within quotes) specified with a number or label. Unequal numbers of
subjects across groups are allowed, but situations with missing data
for a within-subject factor are better handled with 3dLME or 3dLMEr.
3) It is fine to have variables (or columns) in the table that are
not modeled in the analysis.
4) The content of the table can be saved as a separate file, e.g.,
called table.txt. Do not forget to include a backslash at the end of
each row. In the script specify the data with '-dataTable @table.txt'.
Do NOT put any quotes around the square brackets for each sub-brick!
Otherwise, the program cannot properly read the files for some reason.
This option is useful: (a) when there are many input files so that
the program complains with an 'Arg list too long' error; (b) when
you want to try different models with the same dataset (see 3) above).
-dbgArgs: This option will enable R to save the parameters in a
file called .3dMVM.dbg.AFNI.args in the current directory
so that debugging can be performed.
-GES: As an analog of the determination coefficient R^2 in multiple
regression, generalized eta-squared (GES) provides a measure
of effect size for each F-stat in ANOVA or general GLM, and
renders a similar interpretation: the proportion of variance in
the response variable explained by the explanatory variable at hand.
It ranges within [0, 1]. Notice that this option is only
available with R version 3.2 and afex version 0.14 or later.
-glfCode k CODING: Specify the k-th general linear F-test (GLF) through a
weighted combination among factor levels. The symbolic coding has
to be within (single or double) quotes. For example, the coding
'Condition : 1*A -1*B & 1*A -1*C Emotion : 1*pos' tests the main
effect of Condition at the positive Emotion. Similarly, the coding
'Condition : 1*A -1*B & 1*A -1*C Emotion : 1*pos -1*neg' shows
the interaction between the three levels of Condition and the two
levels of Emotion.
NOTE:
1) The weights for a variable do not have to add up to 0.
2) When a quantitative variable is present, other effects are
tested at the center value of the covariate unless the covariate
value is specified as, for example, 'Group : 1*Old Age : 2', where
the Old Group is tested at the Age of 2 above the center.
3) The absence of a categorical variable in a coding means the
levels of that factor are averaged (or collapsed) for the GLF.
4) The appearance of a categorical variable has to be followed
by the linear combination of its levels.
-glfLabel k label: Specify the label for the k-th general linear F-test
(GLF). A symbolic coding for the GLF is assumed to follow with
each -glfLabel.
-gltCode k CODING: Specify the k-th general linear t-test (GLT) through a
weighted combination among factor levels. The symbolic coding has
to be within (single or double) quotes. For example, the following
'Condition : 2*House -3*Face Emotion : 1*positive '
requests for a test of comparing 2 times House condition
with 3 times Face condition while Emotion is held at positive
valence.
NOTE:
1) The weights for a variable do not have to add up to 0.
2) When a quantitative covariate is involved in the model, the
absence of the covariate in the GLT coding means that the test
will be performed at the center value of the covariate. However,
if the covariate value is specified with a value after the colon,
for example, 'Group : 1*Old Age : 2', the effect of the Old Group
would be tested at the value of 2 above the center. On the other
hand, 'Group : 1*Old' tests for the effect of the Old Group at the
center age.
3) The effect for a quantitative variable (or slope) can be specified
by omitting the value after the colon. For example,
'Group : 1*Old Age : ', or 'Group : 1*Old - 1*Young Age : '.
4) The absence of a categorical variable in a coding means the
levels of that factor are averaged (or collapsed) for the GLT.
5) The appearance of a categorical variable has to be followed
by the linear combination of its levels. Only a quantitative variable
is allowed to have a dangling coding as seen in 'Age :'.
6) Some special interaction effects can be tested under -gltCode
when the numerical DF is 1. For example, 'Group : 1*Old -1*Young
Condition : 1*House -1*Face Emotion : 1*positive'. Even though
this is typically an F-test that can be coded under -glfCode, it
can be tested under -gltCode as well. An extra bonus is that the
t-test shows the directionality while F-test does not.
-gltLabel k label: Specify the label for the k-th general linear t-test
(GLT). A symbolic coding for the GLT is assumed to follow with
each -gltLabel.
-help: this help message
-jobs NJOBS: On a multi-processor machine, parallel computing will speed
up the program significantly.
Choose 1 for a single-processor computer.
-mask MASK: Process voxels inside this mask only.
Default is no masking.
-model FORMULA: This option will be phased out at some point, so use -bsVars
instead. Specify the fixed effects for between-subjects factors
and quantitative variables. When no between-subject factors
are present, simply put 1 for FORMULA. The expression FORMULA
with more than one variable has to be surrounded within (single or double)
quotes. Variable names in the formula should be consistent with
the ones used in the header of -dataTable. A+B represents the
additive effects of A and B, A:B is the interaction between A
and B, and A*B = A+B+A:B. The effects of within-subject
factors, if present under -wsVars, are automatically assumed
to interact with the ones specified here. Subject as a variable
should not occur in the model specification here.
-mVar variable: With this option, the levels of the within-subject factor
will be treated as simultaneous variables in a multivariate model.
For example, when the hemodynamic response time course is modeled
through multiple basis functions such as TENT, TENTzero, CSPLIN,
CSPLINzero, SPMG2/3, etc., the effect estimates at the multiple
time points can be treated as simultaneous response variables in
a multivariate model. Only one within-subject variable is allowed
currently under -mVar. In addition, in the presence of -mVar, no
other within-subject factors should be included. If modeling
extra within-subject factors with -mVar is desirable, consider
flattening such factors; that is, perform separate analyses
at each level of the factor, or on contrasts between its levels.
The output for multivariate testing is labeled with -MV0- in the
sub-brick names.
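For example (a sketch following the structure of Example 3, assuming
'Time' indexes the effect estimates from the multiple basis functions
and no other within-subject factor is modeled):
    -bsVars Group \
    -mVar   Time  \
with a -dataTable whose 'Time' column takes values t0, t1, t2, ... for
each subject.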
-num_glf NUMBER: Specify the number of general linear F-tests (GLFs). A glf
involves the union of two or more simple tests. See details in
-glfCode.
-num_glt NUMBER: Specify the number of general linear t-tests (GLTs). A glt
is a linear combination of factor levels. See details in
-gltCode.
-prefix PREFIX: Output file name. For AFNI format, provide prefix only,
with no view+suffix needed. Filename for NIfTI format should have
.nii attached, while file name for surface data is expected
to end with .niml.dset. The sub-brick labeled with the '(Intercept)',
if present, should be interpreted as the overall average
across factor levels at the center value of each covariate.
-qVarCenters VALUES: Specify centering values for quantitative variables
identified under -qVars. Multiple centers are separated by
commas (,) within (single or double) quotes. The order of the
values should match that of the quantitative variables in -qVars.
Default (absence of option -qVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable YOURSELF first before the values are fed into -dataTable.
-qVars variable_list: Identify quantitative variables (or covariates) with
this option. The list with more than one variable has to be
separated with comma (,) without any other characters such as
spaces, and should be surrounded within (single or double) quotes.
For example, -qVars "Age,IQ"
WARNINGS:
1) Centering a quantitative variable through -qVarCenters is
very critical when other fixed effects are of interest.
2) Between-subjects covariates are generally acceptable.
However EXTREME caution should be taken when the groups
differ significantly in the average value of the covariate.
3) Within-subject covariates vary across the levels of a
within-subject factor, and can be analyzed with 3dLME or 3dLMEr,
but not 3dMVM.
-resid PREFIX: Output file name for the residuals. For AFNI format, provide
prefix only without view+suffix. Filename for NIfTI format should
have .nii attached, while file name for surface data is expected
to end with .niml.dset.
-Rio: Use R's io functions. The alternative is -cio.
-robust: Robust regression is performed so that outliers can be
reasonably handled through MM-estimation. Currently it
only works without involving any within-subject factors.
That is, anything that can be done with 3dttest++ could
be analyzed through robust regression here (except for
one-sample case which can be added later on if requested).
Pairwise comparisons can be performed by providing the
contrast from each subject as input. Post hoc F-tests
through option -glfCode are currently not available with
robust regression. This option requires that the user
install R package robustbase.
-SC: If a within-subject factor with more than *two* levels is
involved in the model, 3dMVM automatically provides the
F-statistics for main and interaction effects with
sphericity assumption. If the assumption is violated,
the F-statistics could be inflated to some extent. This option
will enable 3dMVM to additionally output the F-statistics of
sphericity correction for main and interaction effects, which
are labeled with -SC- in the sub-brick names.
NOTE: this option should be used only when at least one
within-subject factor has more than TWO levels.
-show_allowed_options: list of allowed options
-SS_type 2/3: Specify the type for the sums of squares for the omnibus
F-statistics. Type 2 is hierarchical or partially sequential
while type 3 is marginal. Type 2 is more powerful if all the
relevant higher-order interactions do not exist. The default
is 3. The controversy surrounding the different types can be
found at https://sscc.nimh.nih.gov/sscc/gangc/SS.html
-verb VERB: Specify verbosity level.
-vVarCenters VALUES: Specify centering values for voxel-wise covariates
identified under -vVars. Multiple centers are separated by
commas (,) within (single or double) quotes. The order of the
values should match that of the voxel-wise covariates in -vVars.
Default (absence of option -vVarCenters) means centering on the
average of the variable across ALL subjects regardless of their
grouping. If within-group centering is desirable, center the
variable YOURSELF first before the files are fed into -dataTable.
-vVars variable_list: Identify voxel-wise covariates with this option.
Currently one voxel-wise covariate is allowed only, but this
may change if demand occurs...
By default, mean centering is performed voxel-wise across all
subjects. Alternatively centering can be specified through a
global value under -vVarCenters. If the voxel-wise covariates
have already been centered, set the centers at 0 with -vVarCenters.
-wsE2: If at least one within-subject factor is involved in the model, any
omnibus F-test associated with a within-subject factor is assessed
with both univariate and within-subject multivariate tests. Use
the option only if at least one within-subject factor has more
than two levels. By default, 3dMVM provides an F-stat through the
univariate testing (UVT) method for each effect that involves a
within-subject factor. With option -wsE2 UVT is combined with the
within-subject multivariate approach, and the merged result remains
the same as UVT most of the time (or in most brain regions), but
occasionally it may be more powerful.
-wsMVT: By default, 3dMVM provides an F-stat through univariate testing (UVT)
for each effect that involves a within-subject factor. If at least
one within-subject factor is involved in the model, option -wsMVT
provides within-subject multivariate testing for any effect
associated with a within-subject variable. The testing strategy is
different from the conventional univariate GLM, see more details in
Chen et al. (2014), Applications of Multivariate Modeling to
Neuroimaging Group Analysis: A Comprehensive Alternative to
Univariate General Linear Model. NeuroImage 99, 571-588. If
all the within-subject factors have two levels, the multivariate
testing would render the same results as the univariate version.
So, use the option only if at least one within-subject factor has
more than two levels. The F-statistics from the multivariate
testing are labeled with -wsMVT- in the sub-brick names. Note that
the conventional univariate F-statistics are automatically included
at the beginning of the output regardless of the presence of this option.
-wsVars FORMULA: Within-subject factors, if present, have to be listed
here, otherwise the program will choke. If no within-subject factor
exists, don't include this option in the script. Coding for
additive effects and interactions is the same as in -bsVars. The
FORMULA with more than one variable has to be surrounded
within (single or double) quotes. Note that the within-subject
variables are assumed to interact with those between-subjects
variables specified under -bsVars. The hemodynamic response
time courses are better modeled as simultaneous outcomes through
option -mVar, and not as the levels of a within-subject factor.
The variables under -wsVars and -mVar are exclusive from each
other.
AFNI program: 3dMVM_validator
----------------------------------------------------------------------------
3dMVM_validator
Launch the 3dMVM model validation shiny app in a web browser.
Input is a file containing a table formatted like the 3dMVM "-dataTable".
See 3dMVM -help for the correct format.
This will create a temporary folder in the current directory with a
random name similar to:
__8726_3dMVM_validator_temp_delete
It will be deleted when you close the shiny app. If it is still there
after you close the app, it is safe to delete.
If you seem to be missing some R packages, you may need to run:
@afni_R_package_install -shiny
-----------------------------------------------------------------------------
options:
-dataTable : A file containing a data table formatted like the
3dMVM "-dataTable".
-ShinyFolder : Use a custom shiny folder (for testing purposes).
-help : show this help
-----------------------------------------------------------------------------
examples:
3dMVM_validator -dataTable ~/my_dataTable.csv
-----------------------------------------------------------------------------
Justin Rajendra 11/2017
AFNI program: 3dNetCorr
Overview ~1~
Calculate correlation matrix of a set of ROIs (using mean time series of
each). Several networks may be analyzed simultaneously, one per brick.
Written by PA Taylor (March, 2013), part of FATCAT (Taylor & Saad,
2013) in AFNI.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Usage ~1~
Input a set of 4D data and a set of ROI masks (i.e., a bunch of
ROIs in a brik each labelled with a distinct integer), and get a
matrix of correlation values for it.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Output ~1~
Output will be a simple text file, first with the number N of ROIs
in the set, then an empty line, then a list of the ROI labels in the
file (i.e., col/row labels), empty line, and then an NxN matrix of
correlation values (diagonals should be unity). One can also output
the Fisher Z-transform of the matrix (with zeros along diag).
If multiple subbricks are entered, one gets multiple files output,
one per subbrick/network.
Naming convention of outputs: PREFIX_???.netcc, where `???'
represents a zero-padded version of the network number, based on the
number of subbricks in the `in_rois' option (i.e., 000, 001,...).
If the `-ts_out' option is used, the mean time series per ROI, one
line, are output in PREFIX_???.netts files.
Labeltables are now also supported; when an '-inset FILE' contains
a labeltable, the labels will then be passed to the *.netcc file.
These labels may then be referred to in plotting/output, such as
using fat_mat_sel.py.
+NEW+ (Dec. 2014): A PREFIX_???.niml.dset is now also output
automatically. This NIML/SUMA-esque file is mainly for use in SUMA,
for visualizing connectivity matrix info in a 3D brain. It can be
opened via, for example:
$ suma -vol ROI_FILE -gdset FILE.niml.dset
It is now also possible to output whole brain correlation maps,
generated from the average time series of each ROI,
as either Pearson r or Fisher-transformed Z-scores (or both); see
the '-ts_wb*' options below.
[As of April, 2017] There is now more checking done for having any
null time series in ROIs. They are bad to have around, esp. when
they fill an ROI. A new file called 'PREFIX.roidat' is now output,
whose columns contain information for each ROI in the used mask:
[Nvox] [Nvox with non-null ts] [non-null frac] # [ROI number] [label]
The program also won't run now by default if an ROI contains more
than 10 percent null time series; one can use a '-push*' option
(see below) to still calculate anyways, but it will definitely cease
if any ROI is full of null time series.
... And the user can flag to output a binary mask of the non-null
time series, called 'PREFIX_mask_nnull*', with the new option
'-output_mask_nonnull'. This might be useful to check if your data
are well-masked, if you haven't done so already (and you know who
you are...).
[As of April, 2017] On a minor note, one can also apply string labels
to the WB correlation/Z-score output files; see the option
'-ts_wb_strlabel', below.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Command ~1~
3dNetCorr -prefix PREFIX {-mask MASK} {-fish_z} {-part_corr} \
-inset FILE -in_rois INROIS {-ts_out} {-ts_label} \
{-ts_indiv} {-ts_wb_corr} {-ts_wb_Z} {-nifti} \
{-push_thru_many_zeros} {-ts_wb_strlabel} \
{-output_mask_nonnull} {-weight_ts WTS} \
{-weight_corr WCORR}
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Running ~1~
-prefix PREFIX :(req) output file name part (see description below).
-inset FILE :(req) time series file (4D data set).
-in_rois INROIS :(req) can input a set of ROIs, each labelled with
distinct integers. Multiple subbricks can be input,
each will be treated as a separate network.
-mask MASK :can include a whole brain mask within which to
calculate correlation. If no mask is input, then
the program will internally 'automask', based on
where the time series are not uniformly zero.
If you want to neither put in a mask *nor* have the
automasking occur, see '-automask_off', below.
-fish_z :switch to also output a matrix of Fisher Z-transform
values for the corr coefs (r):
Z = atanh(r) ,
(with Z=4 being output along the matrix diagonal where
r=1, as the r-to-Z conversion is capped at
Z = atanh(r=0.999329) = 4, which is still *quite* a
high Pearson-r value).
-part_corr :output the partial correlation matrix. It is
calculated from the inverse of the regular Pearson
matrix, R, as follows: let M = R^{-1} be the inverse
of the Pearson cc matrix. Then each element p_{ij} of
the partial correlation (PC) matrix is given as:
p_{ij} = -M_{ij}/sqrt( M_{ii} * M_{jj} ).
This will also calculate the PC-beta (PCB) matrix,
which is not symmetric, and whose values are given as:
b_{ij} = -M_{ij}/M_{ii}.
Use as you wish. For both PC and PCB, the diagonals
should be uniformly (negative) unity.
-ts_out :switch to output the mean time series of the ROIs that
have been used to generate the correlation matrices.
Output filenames mirror those of the correlation
matrix files, with a '.netts' postfix.
-ts_label :additional switch when using '-ts_out'. Using this
option will insert the integer ROI label at the start
of each line of the *.netts file created. Thus, for
a time series of length N, each line will have N+1
numbers, where the first is the integer ROI label
and the subsequent N are scientific notation values.
-ts_indiv :switch to create a directory for each network that
contains the average time series for each ROI in
individual files (each file has one line).
The directories are labelled PREFIX_000_INDIV/,
PREFIX_001_INDIV/, etc. (one per network). Within each
directory, the files are labelled ROI_001.netts,
ROI_002.netts, etc., with the numbers given by the
actual ROI integer labels.
-ts_wb_corr :switch to perform whole brain correlation for each
ROI's average time series; this will automatically
create a directory for each network that contains the
set of whole brain correlation maps (Pearson 'r's).
The directories are labelled as above for '-ts_indiv'
Within each directory, the files are labelled
WB_CORR_ROI_001+orig, WB_CORR_ROI_002+orig, etc., with
the numbers given by the actual ROI integer labels.
-ts_wb_Z :same as above in '-ts_wb_corr', except that the maps
have been Fisher transformed to Z-scores via the relation:
Z=atanh(r).
To avoid infinities in the transform, Pearson values
are effectively capped at |r| = 0.999329 (where
|Z| = 4.0; hope that's good enough).
Files are labelled WB_Z_ROI_001+orig, etc.
-weight_ts WTS :input a 1D file WTS of weights that will be applied
multiplicatively to each ROI's average time series.
WTS can be a column- or row-file of values, but it
must have the same length as the input time series
volume.
If the initial average time series was A[n] for
n=0,..,(N-1) time points, then applying a set of
weights w[n] of the same length from WTS would
produce a new time series: B[n] = A[n] * W[n].
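For instance (an illustrative sketch, assuming the input has 150 time
points), a weight file of all 1s could be generated with
    1deval -num 150 -expr '1' > wts.1D
and then supplied via '-weight_ts wts.1D'; editing individual entries
of wts.1D would down- or up-weight the corresponding time points.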
-weight_corr WCORR :input a 1D file WCORR of weights that will be applied
to estimate a weighted Pearson Correlation. This
is different than the '-weight_ts ..' weighting.
-ts_wb_strlabel :by default, '-ts_wb_{corr,Z}' output files are named
using the int number of a given ROI, such as:
WB_Z_ROI_001+orig.
with this option, one can replace the int (such as
'001') with the string label (such as 'L-thalamus')
*if* one has a labeltable attached to the file.
-nifti :output any correlation map files as NIFTI files
(default is BRIK/HEAD). Only useful if using
'-ts_wb_corr' and/or '-ts_wb_Z'.
-output_mask_nonnull
:internally, this program checks for where there are
null time series, because we don't like those, in
general. With this flag, the user can output the
determined mask of non-null time series.
-push_thru_many_zeros
:by default, this program will grind to a halt and
refuse to calculate if any ROI contains >10 percent
of voxels with null time series (i.e., each point is
0), as of April, 2017. This is because it seems most
likely that hidden badness is responsible. However,
if the user still wants to carry on the calculation
anyways, then this option will allow one to push on
through. However, if any ROI *only* has null time
series, then the program will not calculate and the
user will really, really, really need to address
their masking.
-allow_roi_zeros :by default, this program will end unhappily if any ROI
contains only time series that are all zeros (which
might occur if you applied a mask to your data that
is smaller than your ROI map). This is because the
correlation with an all-zero time series is undefined.
However, if you want to allow ROIs to have all-zero
time series, use this option; each row and column
element in the Pearson and Fisher-Z transformed
matrices for this ROI will be 0. NB: you cannot
use -part_corr when this option is used, to avoid
mathematical badness.
See the NOTE about this option, below
-automask_off :if you do not enter a mask, this program will
make an internal automask of where time series are
not uniformly zero. However, if you don't want this
done (e.g., you have a map of N ROIs that has greater
extent than your masked EPI data, and you are using
'-allow_roi_zeros' to get a full NxN matrix, even if
some rows and columns are zero), then use this option.
-ignore_LT :switch to ignore any label table labels in the
'-in_rois' file, if there are any labels attached.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
NOTES ~1~
Re. Allowing zero-filled ROIs ('-allow_roi_zeros') ~2~
If you use the '-allow_roi_zeros' option, you can get rows and columns
of all zeros in the output *.netcc matrices (indeed, you are probably
using it specifically to have the 'full' NxN matrix from N input ROIs,
even with ROIs that only contain all-zero time series).
Note that, at present, you should NOT put *.netcc files that contain
such rows/columns of zeros into the fat_proc* pipeline, because 0 is a
valid correlation (or Fisher Z-transform) value, and the pipeline is not
designed to filter these values out (like it would for *.grid files).
Therefore, the zeros will be included as 'real' correlation values,
which would not be correct.
So, these matrices could be output into OTHER analyses fine, but for
preparing to do fat_proc_* comparisons, you would want to run this
program without '-allow_roi_zeros'. So, sometimes you might run it
twice, with and without that option, which should be OK, because it
is not a very time consuming program.
Also note that if an average ROI time series is zero (which will occur
when all voxel time series within it are zero and the '-allow_roi_zeros'
is being utilized) and the user has asked for WB correlation maps with
'-ts_wb_corr' and/or '-ts_wb_Z', no volume will be output for any ROI
that is all-zeros.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Examples ~1~
3dNetCorr \
-inset REST_in_DWI.nii.gz \
-in_rois ROI_ICMAP_GM+orig \
-fish_z \
-ts_wb_corr \
-mask mask_DWI+orig \
-prefix FMRI/REST_corr
3dNetCorr \
-inset REST_in_DWI.nii.gz \
-in_rois ROI_ICMAP_GM+orig \
-fish_z \
-ts_wb_corr \
-automask_off \
-allow_roi_zeros \
-prefix FMRI/out
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
____________________________________________________________________________
AFNI program: 3dnewid
Assigns a new ID code to a dataset; this is useful when making
a copy of a dataset, so that the internal ID codes remain unique.
Usage: 3dnewid dataset [dataset ...]
or
3dnewid -fun [n]
to see what n randomly generated ID codes look like.
(If the integer n is not present, 1 ID code is printed.)
or
3dnewid -fun11
to get an 11 character ID code (for use in scripting).
or
3dnewid -int
to get a random positive integer.
The values are usually between 1 million and 1 billion.
Such a value could be used as a random seed in various AFNI
programs, such as 3dttest++ -seed.
or
3dnewid -hash STR
to get a unique hashcode of STR
(Unlike the other ways of using 3dnewid, if STR is the)
(same in 2 different runs, the output will be the same.)
(The -hash algorithm begins at step 2 in the list below.)
or
3dnewid -MD5 STR
to get the MD5 hash of STR, should be same as -hash output
without the prefix and without the + and / char substitutions.
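For example, a tcsh script might capture a fresh ID code for its own
bookkeeping (a sketch; the code printed will differ on every run):
   set idcode = `3dnewid -fun11`
   echo "this run is tagged $idcode"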
How ID codes are created (here and in other AFNI programs):
----------------------------------------------------------
The AFNI ID code generator attempts to create a globally unique
string identifier, using the following steps.
1) A long string is created from the system identifier
information ('uname -a'), the current epoch time in seconds
and microseconds, the process ID, and the number of times
the current process has called the ID code function.
2) This string is then hashed into a 128 bit code using the
MD5 algorithm. (cf. file thd_md5.c)
3) This bit code is then converted to a 22 character string
using Base64 encoding, replacing '/' with '-' and '+' with '_'.
With these changes, the ID code can be used as a Unix filename
or an XML name string. (cf. file thd_base64.c)
4) A 4 character prefix is attached at the beginning to produce
the final ID code. If you set the environment variable
IDCODE_PREFIX to something, then its first 3 characters and an
underscore will be used for the prefix of the new ID code,
provided that the first character is alphabetic and the other
2 alphanumeric; otherwise, the default prefix 'NIH_' will be
used.
The source code is function UNIQ_idcode() in file niml_uuid.c
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dNLfim
++ 3dNLfim: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
This program calculates a nonlinear regression for each voxel of the
input AFNI 3d+time data set. The nonlinear regression is calculated
by means of a least squares fit to the signal plus noise models which
are specified by the user.
Usage with terminal options:
3dNLfim
-help show this help
-help_models show model help from any that have it
(can come via AFNI_MODEL_HELP_ALL)
One can get help for an individual model, *if* it exists, by
setting a similar environment variable, and providing some
non-trivial function option (like -load_models), e.g.,
3dNLfim -DAFNI_MODEL_HELP_CONV_PRF_6=Y -load_models
Individual help should be available for any model with help
via -help_models.
-load_models simply load all models and exit
(this is for testing or getting model help)
General usage:
3dNLfim
-input fname fname = filename of 3d + time data file for input
[-mask mset] Use the 0 sub-brick of dataset 'mset' as a mask
to indicate which voxels to analyze (a sub-brick
selector is allowed) [default = use all voxels]
[-ignore num] num = skip this number of initial images in the
time series for regression analysis; default = 0
****N.B.: default ignore value changed from 3 to 0,
on 04 Nov 2008 (BHO day).
[-inTR] set delt = TR of the input 3d+time dataset
[The default is to compute with delt = 1.0 ]
[The model functions are calculated using a
time grid of: 0, delt, 2*delt, 3*delt, ... ]
[-TR delt] directly set the TR of the time series model;
can be useful if the input file is a .1D file
(transposed with the \' operator)
[-time fname] fname = ASCII file containing each time point
in the time series. Defaults to even spacing
given by TR (this option overrides -inTR).
-signal slabel slabel = name of (non-linear) signal model
-noise nlabel nlabel = name of (linear) noise model
-sconstr k c d constraints for kth signal parameter:
c <= gs[k] <= d
**N.B.: It is important to set the parameter
constraints with care!
**N.B.: -sconstr and -nconstr options must appear
AFTER -signal and -noise on the command line
-nconstr k c d constraints for kth noise parameter:
c+b[k] <= gn[k] <= d+b[k]
[-nabs] use absolute constraints for noise parameters:
c <= gn[k] <= d [default=relative, as above]
[-nrand n] n = number of random test points [default=19999]
[-nbest b] b = use b best test points to start [default=9]
[-rmsmin r] r = minimum rms error to reject reduced model
[-fdisp fval] display (to screen) results for those voxels
whose f-statistic is > fval [default=999.0]
[-progress ival] display (to screen) results for those voxels
every ival number of voxels
[-voxel_count] display (to screen) the current voxel index
--- These options choose the least-square minimization algorithm ---
[-SIMPLEX] use Nelder-Mead simplex method [default]
[-POWELL] use Powell's NEWUOA method instead of the
Nelder-Mead simplex method to find the
nonlinear least-squares solution
[slower; usually more accurate, but not always!]
[-BOTH] use both Powell's and Nelder-Mead method
[slowest, but should be most accurate]
--- These options generate individual AFNI 2 sub-brick datasets ---
--- [All these options must be AFTER options -signal and -noise]---
[-freg fname] perform f-test for significance of the regression;
output 'fift' is written to prefix filename fname
[-frsqr fname] calculate R^2 (coef. of multiple determination);
store along with f-test for regression;
output 'fift' is written to prefix filename fname
[-fsmax fname] estimate signed maximum of signal; store along
with f-test for regression; output 'fift' is
written to prefix filename fname
[-ftmax fname] estimate time of signed maximum; store along
with f-test for regression; output 'fift' is
written to prefix filename fname
[-fpsmax fname] calculate (signed) maximum percentage change of
signal from baseline; output 'fift' is
written to prefix filename fname
[-farea fname] calculate area between signal and baseline; store
with f-test for regression; output 'fift' is
written to prefix filename fname
[-fparea fname] percentage area of signal relative to baseline;
store with f-test for regression; output 'fift'
is written to prefix filename fname
[-fscoef k fname] estimate kth signal parameter gs[k]; store along
with f-test for regression; output 'fift' is
written to prefix filename fname
[-fncoef k fname] estimate kth noise parameter gn[k]; store along
with f-test for regression; output 'fift' is
written to prefix filename fname
[-tscoef k fname] perform t-test for significance of the kth signal
parameter gs[k]; output 'fitt' is written
to prefix filename fname
[-tncoef k fname] perform t-test for significance of the kth noise
parameter gn[k]; output 'fitt' is written
to prefix filename fname
--- These options generate one AFNI 'bucket' type dataset ---
[-bucket n prefixname] create one AFNI 'bucket' dataset containing
n sub-bricks; n=0 creates default output;
output 'bucket' is written to prefixname
The mth sub-brick will contain:
[-brick m scoef k label] kth signal parameter regression coefficient
[-brick m ncoef k label] kth noise parameter regression coefficient
[-brick m tmax label] time at max. abs. value of signal
[-brick m smax label] signed max. value of signal
[-brick m psmax label] signed max. value of signal as percent
above baseline level
[-brick m area label] area between signal and baseline
[-brick m parea label] signed area between signal and baseline
as percent of baseline area
[-brick m tscoef k label] t-stat for kth signal parameter coefficient
[-brick m tncoef k label] t-stat for kth noise parameter coefficient
[-brick m resid label] std. dev. of the full model fit residuals
[-brick m rsqr label] R^2 (coefficient of multiple determination)
[-brick m fstat label] F-stat for significance of the regression
[-noFDR] Don't write the FDR (q vs. threshold)
curves into the output dataset.
(Same as 'setenv AFNI_AUTOMATIC_FDR NO')
--- These options write time series fit for ---
--- each voxel to an AFNI 3d+time dataset ---
[-sfit fname] fname = prefix for output 3d+time signal model fit
[-snfit fname] fname = prefix for output 3d+time signal+noise fit
-jobs J Run the program with 'J' jobs (sub-processes).
On a multi-CPU machine, this can speed the
program up considerably. On a single CPU
machine, using this option is silly.
J should be a number from 1 up to the
number of CPU sharing memory on the system.
J=1 is normal (single process) operation.
The maximum allowed value of J is 32.
* For more information on parallelizing, see
https://sscc.nimh.nih.gov/afni/doc/misc/afni_parallelize/index_html/view
* Use -mask to get more speed; cf. 3dAutomask.
----------------------------------------------------------------------
Signal Models (see the appropriate model_*.c file for exact details) :
Null : No Signal
(no parameters)
see model_null.c
SineWave_AP : Sinusoidal Response
(amplitude, phase)
see model_sinewave_ap.c
SquareWave_AP : Square Wave Response
(amplitude, phase)
see model_squarewave_ap.c
TrnglWave_AP : Triangular Wave Response
(amplitude, phase)
see model_trnglwave_ap.c
SineWave_APF : Sinusoidal Wave Response
(amplitude, phase, frequency)
see model_sinewave_apf.c
SquareWave_APF : Square Wave Response
(amplitude, phase, frequency)
see model_squarewave_apf.c
TrnglWave_APF : Triangular Wave Response
(amplitude, phase, frequency)
see model_trnglwave_apf.c
Exp : Exponential Function
(a,b): a * exp(b * t)
see model_exp.c
DiffExp : Differential-Exponential Drug Response
(t0, k, alpha1, alpha2)
see model_diffexp.c
GammaVar : Gamma-Variate Function Drug Response
(t0, k, r, b)
see model_gammavar.c
Beta : Beta Distribution Model
(t0, tf, k, alpha, beta)
see model_beta.c
* The following convolved functions are generally convolved with
the time series in AFNI_CONVMODEL_REF, allowing one to specify
multiple event onsets, varying durations and varying response
magnitudes.
ConvGamma : Gamma Variate Response Model
(t0, amp, r, b)
see model_convgamma.c
ConvGamma2a : Gamma Convolution with 2 Input Time Series
(t0, r, b)
see model_convgamma2a.c
ConvDiffGam : Difference of 2 Gamma Variates
(A0, T0, E0, D0, A1, T1, E1, D1)
see model_conv_diffgamma.c
for help : setenv AFNI_MODEL_HELP_CONVDIFFGAM YES
3dNLfim -signal ConvDiffGam
demri_3 : Dynamic (contrast) Enhanced MRI
(K_trans, Ve, k_ep)
see model_demri_3.c
for help : setenv AFNI_MODEL_HELP_DEMRI_3 YES
3dNLfim -signal demri_3
ADC : Diffusion Signal Model
(So, D)
see model_diffusion.c
michaelis_menton : Michaelis/Menten Concentration Model
(v, vmax, k12, k21, mag)
see model_michaelis_menton.c
Expr2 : generic (3dcalc-like) expression with
exactly 2 'free' parameters and using
symbol 't' as the time variable;
see model_expr2.c for details.
ConvCosine4 : 4-piece Cosine Convolution Model
(A, C1, C2, M1, M2, M3, M4)
see model_conv_cosine4.c
for help : setenv AFNI_MODEL_HELP_CONV_COSINE4 YES
3dNLfim -signal ConvCosine4
Conv_PRF : 4-param Population Receptive Field Model
(A, X, Y, sigma)
see model_conv_PRF.c
for help : setenv AFNI_MODEL_HELP_CONV_PRF YES
3dNLfim -signal bunnies
Conv_PRF_6 : 6-param Population Receptive Field Model
(A, X, Y, sigma, sigrat, theta)
see model_conv_PRF_6.c
for help : setenv AFNI_MODEL_HELP_CONV_PRF_6 YES
3dNLfim -signal bunnies
Conv_PRF_DOG : 6-param 'Difference of Gaussians' PRF Model
(as Conv_PRF, but with second A and sigma)
(A, X, Y, sig, A2, sig2)
see model_conv_PRF_DOG.c
for help : setenv AFNI_MODEL_HELP_CONV_PRF_DOG YES
3dNLfim -signal bunnies
----------------------------------------
Noise Models (see the appropriate model_*.c file for exact details) :
Zero : Zero Noise Model
(no parameters)
see model_zero.c
Constant : Constant Noise Model
(constant)
see model_constant.c
Linear : Linear Noise Model
(constant, linear)
see model_linear.c
Linear+Ort : Linear+Ort Noise Model
(constant, linear, Ort)
see model_linplusort.c
Quadratic : Quadratic Noise Model
(constant, linear, quadratic)
see model_quadratic.c
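* Example sketch: a hypothetical run combining one of the convolved signal
models above with a simple noise model, assuming the usual 3dNLfim
'-input', '-signal', and '-noise' options. The dataset and stimulus-timing
file names ('epi_run1+orig', 'stim_onsets.1D') are placeholders; adjust
everything to your own data.
   setenv AFNI_CONVMODEL_REF stim_onsets.1D
   3dNLfim -input  epi_run1+orig  \
           -signal ConvGamma      \
           -noise  Linear         \
           -snfit  epi_run1_fit   \
           -jobs   2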
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dNormalityTest
Program: 3dNormalityTest
* This program tests the input values at each voxel for normality,
using the Anderson-Darling method:
http://en.wikipedia.org/wiki/Anderson-Darling_test
* Each voxel must have at least 5 values (sub-bricks).
* The resulting dataset has the Anderson-Darling statistic converted
to an exponentially distributed variable, so it can be thresholded
with the AFNI slider and a nominal p-value displayed below the
slider. If you want the A-D statistic un-converted, use the '-noexp' option.
* Conversion of the A-D statistic to a p-value is done via simulation
of the null distribution.
OPTIONS:
--------
-input dset = Specifies the input dataset.
Alternatively, the input dataset can be given as the
last argument on the command line, after all other
options.
-prefix ppp = Specifies the name for the output dataset.
-noexp = Do not convert the A-D statistic to an exponentially
distributed value -- just leave the raw A-D score in
the output dataset.
-pval = Output the results as a pure (estimated) p-value.
EXAMPLES:
---------
(1) Simulate a 2D square dataset with the values being normal on one
edge and exponentially distributed on the other, and mixed in-between.
3dUndump -dimen 101 101 1 -prefix UUU
3dcalc -datum float -a UUU+orig -b '1D: 0 0 0 0 0 0 0 0 0 0' -prefix NNN \
-expr 'i*gran(0,1.4)+(100-i)*eran(4)'
rm -f UUU+orig.*
3dNormalityTest -prefix Ntest -input NNN+orig
afni -com 'OPEN_WINDOW axialimage' Ntest+orig
In the above script, the UUU+orig dataset is created just to provide a spatial
template for 3dcalc. The '1D: 0 ... 0' input to 3dcalc is a time template
to create a dataset with 10 time points. The values are random deviates,
ranging from pure Gaussian where i=100 to pure exponential at i=0.
(2) Simulate a single logistic random variable into a 1D file and compute
the A-D nominal p-value:
1deval -num 200 -expr 'lran(2)' > logg.1D
3dNormalityTest -input logg.1D\' -prefix stdout: -pval
Note the necessity to transpose the logg.1D file (with the \' operator),
since 3D programs interpret each 1D file row as a voxel time series.
++ March 2012 -- by The Ghost of Carl Friedrich Gauss
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dNotes
Program: 3dNotes
Author: T. Ross
(c)1999 Medical College of Wisconsin
3dNotes - a program to add, delete and show notes for AFNI datasets.
-----------------------------------------------------------------------
Usage: 3dNotes [-a "string"] [-h "string"] [-HH "string"] [-d num] [-ses] [-help] dataset
Examples:
3dNotes -a "Subject sneezed in scanner, Aug 13 2004" elvis+orig
3dNotes -h "Subject likes fried PB & banana sandwiches" elvis+orig
3dNotes -HH "Subject has left the building" elvis+orig
3dNotes -d 2 -h "Subject sick of PB'n'banana sandwiches" elvis+orig
-----------------------------------------------------------------------
Explanation of Options:
----------------------
dataset : AFNI compatible dataset [required].
-a "str" : Add the string "str" to the list of notes.
Note that you can use the standard C escape codes:
\n for newline, \t for tab, etc.
-h "str" : Append the string "str" to the dataset's history. This
can only appear once on the command line. As this is
added to the history, it cannot easily be deleted. But,
history is propagated to the children of this dataset.
-HH "str" : Replace any existing history note with "str". This
option cannot be used with '-h'.
-d num : deletes note number num.
-ses : Print to stdout the expanded notes.
-help : Displays this screen.
The default action, with no options, is to display the notes for the
dataset. If there are options, all deletions occur first and essentially
simultaneously. Then, notes are added in the order listed on the command
line. If you do something like -d 10 -d 10, it will delete both notes 10
and 11. Don't do that.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dnvals
Usage: 3dnvals [-all] [-verbose] dataset [dataset dataset ...]
* Prints (to stdout) the number of sub-bricks in a 3D dataset.
* If -all is specified, prints out all 4 dimensions:
Nx, Ny, Nz, Nvals
* If -verbose is used then the header name of the dataset is printed first.
* The function of this simple program is to help in scripting.
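* For example, a (t)csh script might capture the sub-brick count like so
(here 'dset+orig' is just a placeholder dataset name):
    set nvals = `3dnvals dset+orig`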
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dNwarpAdjust
Usage: 3dNwarpAdjust [options]
This program takes as input a bunch of 3D warps, averages them,
and computes the inverse of this average warp. It then composes
each input warp with this inverse average to 'adjust' the set of
warps. Optionally, it can also read in a set of 1-brick datasets
corresponding to the input warps, and warp each of them, and average
those.
Input warps: Wi(x) for i=1..N
Average warp: Wbar(x) = mean of the displacements in Wi(x)
Inverse average: Wbin(x) = inverse of Wbar(x)
Adjusted warps: Ai(x) = Wi(Wbin(x))
Source datasets: Di(x) for i=1..N
Output mean dataset: average of Di(Ai(x))
The logic behind this arcane necromancy is the following sophistry:
We use 3dQwarp to warp each Di(x) to match a template T(x), giving
warp Wi(x) such that Di(Wi(x)) matches T(x). Now we want to average
these warped Di datasets to create a new template; say
B(x) = average of Di(Wi(x))
But the warps might be biased (e.g., have net shrinkage of the volumes).
So we compute the average warp Wbar(x), and its inverse Wbin(x), and then
instead we want to use as the new template B(Wbin(x)), which will 'put back'
each x to a bias-corrected location. So then we have
B(Wbin(x)) = average of Di(Wi(Wbin(x)))
which is where the 'adjusted warp' Ai(x) = Wi(Wbin(x)) comes from.
All these calculations could be done with other programs and a script,
but the goal of this program is to make them faster and simpler to combine.
It is intended to be used in an incremental template-building script, and
probably has no other utility (cf. the script @toMNI_Qwarpar).
OPTIONS:
--------
-nwarp w1 w2 ... = List of input 3D warp datasets (at least 5).
The list ends when a command line argument starts
with a '-' or the command line itself ends.
* This 'option' is REQUIRED!
-->>** Each input warp is adjusted, and the altered warp
over-writes the input dataset. (Therefore, there is
no reason to run 3dNwarpAdjust twice over the same
collection of warp datasets!)
* These input warps do not have to be defined on
exactly the same grids, but the grids must be
'conformant' -- that is, they have to have the
same orientation and grid spacings. Warps
will be extended to match the minimum containing
3D rectangular grid, as needed.
-source d1 d2 ... = List of input 3D datasets to be warped by the adjusted
warp datasets. There must be exactly as many of these
datasets as there are input warps.
* This option is NOT required.
* These datasets will NOT be altered by this program.
* These datasets DO have to be on the same 3D grid
(so they can be averaged after warping).
-prefix ppp = Use 'ppp' for the prefix of the output mean dataset.
(Only needed if the '-source' option is also given.)
The output dataset will be on the common grid shared
by the source datasets.
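A schematic example (all dataset names here are hypothetical):
   3dNwarpAdjust -nwarp  subj??_WARP+tlrc.HEAD \
                 -source subj??_anat+tlrc.HEAD \
                 -prefix AdjustedMean
This adjusts each subj??_WARP dataset in place (as described for '-nwarp'
above) and writes the average of the adjusted-warped anatomicals to a new
dataset with prefix 'AdjustedMean'.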
=========================================================================
* This binary version of 3dNwarpAdjust is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dNwarpApply
Usage: 3dNwarpApply [options]
Program to apply a nonlinear 3D warp saved from 3dQwarp (or 3dNwarpCat, etc.)
to a 3D dataset, to produce a warped version of the source dataset.
The '-nwarp' and '-source' options are MANDATORY. For both of these options,
as well as '-prefix', the input arguments after the option name are applied up
until an argument starts with the '-' character, or until the arguments run out.
This program has been heavily modified [01 Dec 2014], including the following
major improvements:
(1) Allow catenation of warps with different grid spacings -- the functions
that deal with the '-nwarp' option will automatically deal with the grids.
(2) Allow input of affine warps with multiple time points, so that 3D+time
datasets can be warped with a time dependent '-nwarp' list.
(3) Allow input of multiple source datasets, so that several datasets can be
warped the same way at once. This operation is more efficient than running
3dNwarpApply several times, since the auto-regridding and auto-catenation
in '-nwarp' will only have to be done once.
* Specification of the output dataset names can be done via multiple
arguments to the '-prefix' option, or via the new '-suffix' option.
New Feature [28 Mar 2018]:
(4) If a source dataset contains complex numbers, then 3dNwarpApply will warp
the real and imaginary parts separately, combine them, and produce a
complex-valued dataset as output.
* Previously, the program would have warped the magnitude of the input
dataset and written out a float-valued dataset.
* No special option is needed to warp complex-valued datasets.
* If you WANT to warp the magnitude of a complex-valued dataset, you will
have to convert the dataset to a float dataset via 3dcalc, then use
3dNwarpApply on THAT dataset instead.
* You cannot use option '-short' with complex-valued source datasets!
More precisely, you can try to use this option, but it will be ignored.
* This ability is added for those of you who deal with complex-valued
EPI datasets (I'm looking at YOU, O International Man of Mystery).
OPTIONS:
--------
-nwarp www = 'www' is the name of the 3D warp dataset
(this is a mandatory option!)
++ Multiple warps can be catenated here.
-->> Please see the lengthier discussion below on this feature!
-->> Also see the help for 3dNwarpCat for some more information
on the formats allowed for inputting warp fields; for
example, warping in one direction only (e.g., 'AP') is
possible.
++ NOTE WELL: The interpretation of this option has changed somewhat,
as of 01 Dec 2014. In particular, this option is
generalized from the version in other programs, including
3dNwarpCat, 3dNwarpFuncs, and 3dNwarpXYZ. The major
change is that multi-line matrix files are allowed to
be included in the 'www' mixture, so that the nonlinear
warp being calculated can be time-dependent.
In addition, the warps supplied need not all be on the
same 3D grid -- this ability lets you catenate a warp
defined on the EPI data grid with a warp defined on the
structural data grid (e.g.).
-iwarp = After the warp specified in '-nwarp' is computed,
invert it. If the input warp would take a dataset
from space A to B, then the inverted warp will do
the reverse.
++ The combination "-iwarp -nwarp 'A B C'" is equivalent
to "-nwarp 'INV(C) INV(B) INV(A)'" -- that is, inverting
each warp/matrix in the list *and* reversing their order.
++ The '-iwarp' option is provided for convenience, and
may prove to be very slow for time-dependent '-nwarp' inputs.
-affter aaa = *** THIS OPTION IS NO LONGER AVAILABLE ***
See the discussion of the new '-nwarp' option above to see
how to include time-dependent matrix transformations
in this program.
-source sss = 'sss' is the name of the source dataset.
++ That is, the dataset to be warped.
++ Multiple datasets can be supplied here; they MUST
all be defined over the same 3D grid.
-->>** You can no longer simply supply the source
dataset as the last argument on the command line.
-master mmm = 'mmm' is the name of the master dataset.
++ Which defines the output grid.
++ If '-master' is not used, then output
grid is the same as the source dataset grid.
++ It is often the case that it makes more sense to
use the '-nwarp' dataset as the master, since
that is the grid on which the transformation is
defined, and is (usually) the grid to which the
transformation 'pulls' the source data.
++ You can use '-master WARP' or '-master NWARP'
for this purpose -- but ONLY if all the warps listed
in the '-nwarp' option have the same 3D grid structure.
++ In particular, if the transformation includes a
long-distance translation, then the source dataset
grid (the default output grid) may not have much
overlap with the source dataset after it is transformed.
In this case, you really want to use this '-master'
option, or you will end up cutting off a lot of the
output dataset, since it will not fit on the default grid.
-newgrid dd = 'dd' is the new grid spacing (cubical voxels, in mm)
*OR = ++ This lets you resize the master dataset grid spacing.
-dxyz dd = for example, to bring EPI data to a 1 mm template, but at
a coarser resolution, use '-dxyz 2'.
++ The same grid orientation as the source is used if
the '-master' option is not given.
-interp iii = 'iii' is the interpolation mode
++ Default interpolation mode is 'wsinc5' (slowest, bestest)
++ Available modes are the same as in 3dAllineate:
NN linear cubic quintic wsinc5
++ The same interpolation mode is used for the warp
itself (if needed) and then for the data being warped.
++ The warp will be interpolated if the output dataset is
not on the same 3D grid as the warp itself, or if a warp
expression is used in the '-nwarp' option. Otherwise,
it won't need to be interpolated.
-ainterp jjj = This option lets you specify a different interpolation mode
for the data than is used for the warp itself.
++ In particular, '-ainterp NN' would be most logical for
atlas datasets, where the data values being mapped are
integer labels (see the example sketch just after this
option list).
-prefix ppp = 'ppp' is the name of the new output dataset
++ If more than 1 source dataset is supplied, then you
should supply more than one prefix. Otherwise, the
program will invent prefixes for each output, by
attaching the suffix '_Nwarp' to each source
dataset's prefix.
-suffix sss = If the program generates prefixes, you can change the
default '_Nwarp' suffix to whatever you want (within
reason) by this option.
++ His Holiness Emperor Zhark defines 'within reason', of course.
++ By using '-suffix' and NOT using '-prefix', the program
will generate prefix names for all output datasets in
a systematic way -- this might be useful for some people.
++ Note that only ONE suffix can be supplied even if many source
datasets are input -- unlike the case with '-prefix'.
-short = Write output dataset using 16-bit short integers, rather than
the usual 32-bit floats.
++ Intermediate values are rounded to the nearest integer.
No scaling is performed.
++ This option is intended for use with '-ainterp' and for
source datasets that contain integral values.
++ If the source dataset is complex-valued, this option will
be ignored.
-wprefix wp = If this option is used, then every warp generated in the process
of application will be saved to a 3D dataset with prefix 'wp_XXXX',
where XXXX is the index of the sub-brick being created.
For example, '-wprefix Zork.nii' will create datasets with names
'Zork_0000.nii', et cetera.
-quiet = Don't be verbose :-(
-verb = Be extra verbose :-)
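A brief example of the '-ainterp NN' + '-short' combination mentioned above,
for warping an integer-valued label (atlas/ROI) dataset. The dataset and
warp names are hypothetical, and the warp is assumed to be a 3dQwarp '_WARP'
output computed with the template as the base and the subject as the source:
   3dNwarpApply -nwarp  subj_WARP+tlrc       \
                -source subj_ROIlabels+orig   \
                -master subj_WARP+tlrc        \
                -ainterp NN -short            \
                -prefix ROIlabels_in_template
The 'NN' interpolation keeps the label values intact (no blurring across
label boundaries), and '-short' stores them as integers.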
SPECIFYING THE NONLINEAR WARP IN '-nwarp'
[If you are catenating warps, read this carefully!]
---------------------------------------------------
A single nonlinear warp (usually created by 3dQwarp) is an AFNI or NIfTI-1
dataset with 3 sub-bricks, holding the 3D displacements of each voxel.
(All coordinates and displacements are expressed in DICOM order.)
The '-nwarp' option is used to specify the nonlinear transformation used
to create the output dataset from the source dataset. For many purposes,
the only input needed here is the name of a single dataset holding the
warp to be used.
However, the '-nwarp' option also allows the catenation of a sequence of
spatial transformations (in short, 'warps') that will be combined before
being applied to the source dataset. Each warp is either a nonlinear
warp dataset or a matrix warp (a linear transformation of space).
A single affine (or linear) warp is a set of 12 numbers, defining a 3x4 matrix
a11 a12 a13 a14
a21 a22 a23 a24
a31 a32 a33 a34
A matrix is stored on a single line, in a file with the extension
'.1D' or '.txt', in this order
a11 a12 a13 a14 a21 a22 a23 a24 a31 a32 a33 a34
For example, the identity matrix is given by
1 0 0 0 0 1 0 0 0 0 1 0
This format is output by the '-1Dmatrix_save' options in 3dvolreg and
3dAllineate, for example.
If the argument 'www' following '-nwarp' is made up of more than one warp
filename, separated by blanks, then the nonlinear warp to be used is
composed on the fly as needed to transform the source dataset. For
example,
-nwarp 'AA_WARP.nii BB.aff12.1D CC_WARP.nii'
specifies 3 spatial transformations, call them A(x), B(x), and C(x) --
where B(x) is just the 3-vector x multiplied into the matrix in the
BB.aff12.1D file. The resulting nonlinear warp function N(x) is
obtained by applying these transformations in the order given, A(x) first:
N(x) = C( B( A(x) ) )
That is, the first warp A is applied to the output grid coordinate x,
then the second warp B to that result, then the third warp C. The output
coordinate y = C(B(A(x))) is the coordinate in the source dataset at which
the output value will be interpolated (for the voxel at coordinate x).
The Proper Order of Catenated Warps:
....................................
To determine the correct order in which to input the warps, it is necessary
to understand what a warp of the source dataset actually computes. Call the
source image S(x) = (scalar) value of source image at voxel location x.
For each x in the output grid, the warped result is S(N(x)) -- that is,
N(x) tells where each output location x must be warped to in order to
find the corresponding value of the source S.
N(x) does *NOT* tell where an x in the source image must be moved to in
the output space -- which is what you might think if you mentally prioritize
the idea of 'warping the source image' or 'pushing the source image' -- DO NOT
THINK THIS WAY! It is better to think of N(x) as reaching out from x in the
output space to a location in the source space and then the program will
interpolate from the discrete source space grid at that location -- which
is unlikely to be exactly on a grid node. Another way to think of this is
that the warp 'pulls' the source image back to the coordinate system on which
the warp is defined.
Now suppose the sequence of operations on an EPI dataset is
(1) Nonlinearly unwarp the dataset via warp AA_WARP.nii (perhaps
from 3dQwarp -plusminus).
(2) Perform linear volume registration on the result from (1) (with
program 3dvolreg) to get affine matrix file BB.aff12.1D -- which
will have 1 line per time point in the EPI dataset.
(3) Linearly register the structural volume to the EPI dataset
(via script align_epi_anat.py). Note that this step transforms
the structural volume to match the EPI, not the EPI to match the
structural volume, so this step does not affect the chain of
transformations being applied to the EPI dataset.
(4) Nonlinearly warp the structural image from (3) to MNI space via
warp CC_WARP.nii (generated by 3dQwarp).
Finally, the goal is to take the original EPI time series dataset, and
warp it directly to MNI space, including the time series registration for
each sub-brick in the dataset, with only one interpolation being used --
rather than the 3 interpolations that would come by serially implementing
steps (1), (2), and (4). This one-big-step transformation can be done
with 3dNwarpApply using the '-nwarp' option:
-nwarp 'CC_WARP.nii BB.aff12.1D AA_WARP.nii'
that is, N(x) = A( B( C(x) ) ) -- the opposite order to the sample above,
and with the transformations occurring in the opposite order to the sequence
in which they were calculated. The reason for this apparent backwardness
is that the 'x' being transformed is on the output grid -- in this case, in
MNI-template space. So the warp C(x) transforms such an output grid 'x' to
the EPI-aligned structural space. The warp B(x) then transforms THAT
coordinate from the aligned space back to the rotated head position of the subject.
And the warp A(x) transforms THAT coordinate back to the original grid that had
to be unwarped (e.g., from susceptibility and/or eddy current artifacts).
Also note that in step (2), the matrix file BB.aff12.1D has one line for
each time point. When transforming a source dataset, the i-th time point
will be transformed by the warp computed using the i-th line from any
multi-line matrix file in the '-nwarp' specification. (If there are more
dataset time points than matrix lines, then the last line will be reused.)
In this way, 3dNwarpApply can be used to carry out time-dependent warping
of time-dependent datasets, provided that the time-dependence in the warp
only occurs in the affine (matrix) parts of the transformation.
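Putting these pieces together, the single 3dNwarpApply command for this
example might look like the following (where 'EPI_timeseries+orig' is a
hypothetical name for the original EPI dataset, and '-dxyz 2' is just one
plausible choice of output resolution):
   3dNwarpApply -nwarp  'CC_WARP.nii BB.aff12.1D AA_WARP.nii' \
                -source EPI_timeseries+orig                   \
                -master CC_WARP.nii                           \
                -dxyz   2                                     \
                -prefix EPI_in_MNI
Each EPI sub-brick is then warped to MNI space with a single interpolation,
using the matching line of BB.aff12.1D for its time point.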
Note that the now-obsolete option '-affter' is subsumed into the new way
that '-nwarp' works. Formerly, the only time-dependent matrix had to
be specified as being at the end of the warp chain, and was given via
the '-affter' option. Now, a time-dependent matrix (or more than one)
can appear anywhere in the warp chain, so there is no need for a special
option. If you DID use '-affter', you will have to alter your script
simply by putting the final matrix filename at the end of the '-nwarp'
chain. (If this seems too hard, please consider another line of work.)
The other 3dNwarp* programs that take the '-nwarp' option operate similarly,
but do NOT allow time-dependent matrix files. Those programs are built to
operate with one nonlinear warp, so allowing a time-dependent warp doesn't
make sense for them.
NOTE: If a matrix is NOT time-dependent (just a single set of 12 numbers),
it can be input in the .Xat.1D format of 3 rows, each with 4 values:
a11 a12 a13 a14 } 1 0 0 0
a21 a22 a23 a24 } e.g., identity matrix = 0 1 0 0
a31 a32 a33 a34 } 0 0 1 0
This option is just for convenience. Remember that the coordinates
are DICOM order, and if your matrix comes from Some other PrograM
or from a Fine Software Library, you probably have to change some
signs in the matrix to get things to work correctly.
RANDOM NOTES:
-------------
* At present, this program doesn't work with 2D warps, only with 3D.
(That is, each warp dataset must have 3 sub-bricks.)
* At present, the output dataset is stored in float format, no matter what
absurd data format the input dataset uses (but cf. the '-short' option).
* As described above, 3dNwarpApply allows you to catenate warps directly on
the command line, as if you used 3dNwarpCat before running 3dNwarpApply.
For example:
++ You have aligned dataset Fred+orig to MNI-affine space using @auto_tlrc,
giving matrix file Fred.Xaff12.1D
++ Then you further aligned from MNI-affine to MNI-qwarp via 3dQwarp,
giving warp dataset Fred_WARP+tlrc
++ You can combine the transformations and interpolate Fred+orig directly
to MNI-qwarp space using a command like
3dNwarpApply -prefix Fred_final \
-source Fred+orig \
-master NWARP \
-nwarp 'Fred_WARP+tlrc Fred.Xaff12.1D'
Note the warps to be catenated are enclosed in quotes to make a single
input argument passed to the program. The processing used for this
purpose is the same as in 3dNwarpCat -- see the help output for that
program for a little more information.
++ When you specify a nonlinear warp dataset, you can use the 'SQRT()' and
'INV()' and 'INVSQRT()' operators, as well as the various 1D-to-3D
displacement prefixes ('AP:' 'RL:' 'IS:' 'VEC:', as well as 'FAC:') --
for example, the following is a legal (and even useful) definition of a
warp herein:
'SQRT(AP:epi_BU_yWARP+orig)'
where the 'AP:' transforms the y-displacements in epi_BU_yWARP+orig to a
full 3D warp (with x- and z-displacements set to zero), then calculates the
square root of that warp, then applies the result to some input dataset.
+ This is a real example, where the y-displacement-only warp is computed between
blip-up and blip-down EPI datasets, and then the SQRT warp is applied to
warp them into the 'intermediate location' which should be better aligned
with the subject's anatomical datasets.
-->+ However: see also the '-plusminus' option for 3dQwarp for another way to
reach the same goal, as well as the unWarpEPI.py script.
+ See the output of 3dNwarpCat -help for a little more information on the
1D-to-3D warp prefixes ('AP:' 'RL:' 'IS:' 'VEC:').
++ You can scale the displacements in a 3D warp file via the 'FAC:' prefix, as in
FAC:0.6,0.4,-0.2:fred_WARP.nii
which will scale the x-displacements by 0.6, the y-displacements by 0.4, and
the z-displacements by -0.2.
+ So if you need to reverse the sign of x- and y-displacements, since in AFNI
+x=Left and +y=Posterior while another package uses +x=Right and +y=Anterior,
you could use 'FAC:-1,-1,1:Warpdatasetname' to apply a warp from that
other software package.
++ You can also use 'IDENT(dataset)' to define a "nonlinear" 3D warp whose
grid is defined by the dataset header -- nothing else from the dataset will
be used. This warp will be filled with all zero displacements, which represents
the identity warp. The purpose of such an object is to let you apply a pure
affine warp -- since this program requires a '-nwarp' option, you can use
-nwarp 'IDENT(dataset)' to define the 3D grid for the 'nonlinear' 3D warp and
then catenate the affine warp.
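For example (with hypothetical file names), applying only the affine
matrix in anat_al.aff12.1D to a dataset could be done via
   3dNwarpApply -nwarp 'IDENT(anat+orig) anat_al.aff12.1D' \
                -source anat+orig -prefix anat_affine_only
where IDENT(anat+orig) merely supplies the grid for the (identity)
nonlinear warp.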
* PLEASE note that if you use the '-allineate' option in 3dQwarp, then the affine
warp is already included in the output nonlinear warp from 3dQwarp, and so it
does NOT need to be applied again in 3dNwarpApply! This mistake has been made
in the past, and the results were not good.
* When using '-allineate' in 3dQwarp, and when there is a large coordinate shift
between the base and source datasets, then the _WARP dataset output by 3dQwarp
will cover a huge grid to encompass both the base and source. In turn, this
can cause 3dNwarpApply to need a lot of memory when it applies that warp.
++ Some changes were made [Jan 2019] to reduce the size of this problem,
but it still exists.
++ We have seen this most often in source datasets which have the (0,0,0)
point not in the middle of the volume, but at a corner of the volume.
Since template datasets (such as MNI152_2009_template_SSW.nii.gz) have
(0,0,0) inside the brain, a dataset with (0,0,0) at a corner of the 3D
volume will need a giant coordinate shift to match the template dataset.
And in turn, the encompassing grid that overlaps the source and template
(base) datasets will be huge.
++ The simplest way to fix this problem is to do something like
@Align_Centers -base MNI152_2009_template_SSW.nii.gz -dset Fred.nii
which will produce dataset Fred_shft.nii, that will have its grid
center approximately lined up with the template (base) dataset.
And from then on, use Fred_shft.nii as your input dataset.
=========================================================================
* This binary version of 3dNwarpApply is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dNwarpCalc
*******************************************************************
Program 3dNwarpCalc has been retired, and is no longer available :(
*******************************************************************
AFNI program: 3dNwarpCat
Usage: 3dNwarpCat [options] warp1 warp2 ...
------
* This program catenates (composes) 3D warps defined on a grid,
OR via a matrix.
++ All transformations are from DICOM xyz (in mm) to DICOM xyz.
* Matrix warps are in files that end in '.1D' or in '.txt'. A matrix
warp file should have 12 numbers in it, as output (for example), by
'3dAllineate -1Dmatrix_save'.
++ The matrix (affine) warp can have either 12 numbers on one row,
or be in the 3x4 format.
++ The 12-numbers-on-one-row format is preferred, and is the format
output by the '-1Dmatrix_save' option in 3dvolreg and 3dAllineate.
++ The matrix warp is a transformation of coordinates, not voxels,
and its use presumes the correctness of the voxel-to-coordinate
transformation stored in the header of the datasets involved.
* Nonlinear warps are in dataset files (AFNI .HEAD/.BRIK or NIfTI .nii)
with 3 sub-bricks giving the DICOM order xyz grid displacements in mm.
++ Note that it is not required that the xyz order of voxel storage be in
DICOM order, just that the displacements be in DICOM order (and sign).
++ However, it is important that the warp dataset coordinate order be
properly specified in the dataset header, since warps are applied
based on coordinates, not on voxels.
++ Also note again that displacements are in mm, NOT in voxels.
++ You can 'edit' the warp on the command line by using the 'FAC:'
scaling prefix, described later. This input editing could be used
to change the sign of the xyz displacements, if needed.
* If all the input warps are matrices, then the output is a matrix
and will be written to the file 'prefix.aff12.1D'.
++ Unless the prefix already contains the string '.1D', in which case
the filename is just the prefix.
++ If 'prefix' is just 'stdout', then the output matrix is written
to standard output.
++ In any of these cases, the output format is 12 numbers in one row.
* If any of the input warps are datasets, they must all be defined on
the same 3D grid!
++ And of course, then the output will be a dataset on the same grid.
++ However, you can expand the grid using the '-expad' option.
* The order of operations in the final (output) warp is, for the
case of 3 input warps:
OUTPUT(x) = warp3( warp2( warp1(x) ) )
That is, warp1 is applied first, then warp2, et cetera.
The 3D x coordinates are taken from each grid location in the
first dataset defined on a grid.
* For example, if you aligned a dataset to a template with @auto_tlrc,
then further refined the alignment with 3dQwarp, you would do something
like this:
warp1 is the output of 3dQwarp
warp2 is the matrix from @auto_tlrc
This is the proper order, since the desired warp takes template xyz
to original dataset xyz, and we have
3dQwarp warp: takes template xyz to affinely aligned xyz, and
@auto_tlrc matrix: takes affinely aligned xyz to original xyz
3dNwarpCat -prefix Fred_total_WARP -warp1 Fred_WARP+tlrc.HEAD -warp2 Fred.Xat.1D
The dataset Fred_total_WARP+tlrc.HEAD could then be used to transform original
datasets directly to the final template space, as in
3dNwarpApply -prefix Wilma_warped \
-nwarp Fred_total_WARP+tlrc \
-source Wilma+orig \
-master Fred_total_WARP+tlrc
* If you wish to invert a warp before it is used here, supply its
input name in the form of
INV(warpfilename)
To produce the inverse of the warp in the example above:
3dNwarpCat -prefix Fred_total_WARPINV \
-warp2 'INV(Fred_WARP+tlrc.HEAD)' \
-warp1 'INV(Fred.Xat.1D)'
Note the order of the warps is reversed, in addition to the use of 'INV()'.
* The final warp may also be inverted simply by adding the '-iwarp' option, as in
3dNwarpCat -prefix Fred_total_WARPINV -iwarp -warp1 Fred_WARP+tlrc.HEAD -warp2 Fred.Xat.1D
* Other functions you can apply to modify a 3D dataset warp are:
SQRT(datasetname) to get the square root of a warp
SQRTINV(datasetname) to get the inverse square root of a warp
However, you can't do more complex expressions, such as 'SQRT(SQRT(warp))'.
If you think you need something so rococo, use 3dNwarpCalc. Or think again.
* You can also manufacture a 3D warp from a 1-brick dataset with displacements
in a single direction. For example:
AP:0.44:disp+tlrc.HEAD (note there are no blanks here!)
means to take the 1-brick dataset disp+tlrc.HEAD, scale the values inside
by 0.44, then load them into the y-direction displacements of a 3-brick 3D
warp, and fill the other 2 directions with zeros. The prefixes you can use
here for the 1-brick to 3-brick displacement trick are
RL: for x-displacements (Right-to-Left)
AP: for y-displacements (Anterior-to-Posterior)
IS: for z-displacements (Inferior-to-Superior)
VEC:a,b,c: for displacements in the vector direction (a,b,c),
which will be scaled to unit length.
Following the prefix's colon, you can put in a scale factor followed
by another colon (as in '0.44:' in the example above). Then the name
of the dataset with the 1D displacements follows.
* You might reasonably ask of what possible value is this peculiar format?
This was implemented to use Bz fieldmaps for correction of EPI datasets,
which are distorted only along the phase-encoding direction. This format
for specifying the input dataset (the fieldmap) is built to make the
scripting a little easier. Its principal use is in the program 3dNwarpApply.
* You can scale the displacements in a 3D warp file via the 'FAC:' prefix, as in
FAC:0.6,0.4,-0.2:fred_WARP.nii
which will scale the x-displacements by 0.6, the y-displacements by 0.4, and
the z-displacements by -0.2.
* Finally, you can input a warp catenation string directly as in the '-nwarp'
option of 3dNwarpApply, as in
3dNwarpCat -prefix Fred_total_WARP 'Fred_WARP+tlrc.HEAD Fred.Xat.1D'
OPTIONS
-------
-interp iii == 'iii' is the interpolation mode:
++ Modes allowed are a subset of those in 3dAllineate:
linear quintic wsinc5
++ The default interpolation mode is 'wsinc5'.
++ 'linear' is much faster but less accurate.
++ 'quintic' is between 'linear' and 'wsinc5',
in both accuracy and speed.
-verb == print (to stderr) various fun messages along the road.
-prefix ppp == prefix name for the output dataset that holds the warp.
-space sss == attach string 'sss' to the output dataset as its atlas
space marker.
-warp1 ww1 == alternative way to specify warp#1
-warp2 ww2 == alternative way to specify warp#2 (etc.)
++ If you use any '-warpX' option for X=1..99, then
any additional warps specified after all command
line options appear AFTER these enumerated warps.
That is, '-warp1 A+tlrc -warp2 B+tlrc C+tlrc'
is like using '-warp3 C+tlrc'.
++ At most 99 warps can be used. If you need more,
PLEASE back away from the computer slowly, and
get professional counseling.
-iwarp == Invert the final warp before output.
-expad PP == Pad the nonlinear warps by 'PP' voxels in all directions.
The warp displacements are extended by linear extrapolation
from the faces of the input grid.
AUTHOR -- RWCox -- March 2013
=========================================================================
* This binary version of 3dNwarpCat is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dNwarpFuncs
Usage: 3dNwarpFuncs [options]
This program reads in a nonlinear 3D warp (from 3dQwarp, etc.) and
computes some functions of the displacements. See the OPTIONS below
for information on what can be computed. The NOTES sections describes
the formulae of the functions that are available.
--------
OPTIONS:
--------
-nwarp www = 'www' is the name of the 3D warp dataset
(this is a mandatory option!)
++ This can be computed on the fly, as in 3dNwarpApply.
-prefix ppp = 'ppp' is the name of the new output dataset
-bulk = Compute the (fractional) bulk volume change.
++ e.g., Jacobian determinant minus 1.
++ see 'MORE...' (below) for interpreting the sign of '-bulk'.
-shear = Compute the shear energy.
-vorticity = Compute the vorticity energy.
-all = Compute all 3 of these fun fun functions.
If none of '-bulk', '-shear', or '-vorticity' are given, then '-bulk'
will be assumed.
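For example (with a hypothetical warp dataset name)
   3dNwarpFuncs -nwarp Fred_WARP+tlrc -all -prefix Fred_warpfuncs
computes the bulk, shear, and vorticity measures of the displacement
field in Fred_WARP+tlrc and writes them to the output dataset.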
------
NOTES:
------
Denote the displacement vector field (warp) by
[ p(x,y,z) , q(x,y,z) , r(x,y,z) ]
Define the Jacobian matrix by
[ 1+dp/dx dp/dy dp/dz ] [ Jxx Jxy Jxz ]
J = [ dq/dx 1+dq/dy dq/dz ] = [ Jyx Jyy Jyz ]
[ dr/dx dr/dy 1+dr/dz ] [ Jzx Jzy Jzz ]
* The '-bulk' output is the determinant of this matrix (det[J]), minus 1.
* It measures the fractional amount of volume distortion.
* Negative means the warped coordinates are shrunken (closer together)
than the input coordinates. Also see the 'MORE...' section below.
* The '-shear' output is the sum of squares of the J matrix elements --
which equals the sum of squares of its singular values -- divided by
det[J]^(2/3), then minus 3.
* It measures the amount of shearing distortion (normalized by the amount
of volume distortion).
* The '-vorticity' output is the sum of squares of the skew part of
the J matrix = [ Jxy-Jyx , Jxz-Jzx , Jyz-Jzy ], divided by det[J]^(2/3).
* It measures the amount of twisting distortion (also normalized).
* All 3 of these functions are dimensionless.
* The penalty used in 3dQwarp is a combination of the bulk, shear,
and vorticity functions.
------------------------------
MORE about interpreting -bulk:
------------------------------
If the warp N(x,y,z) is the '_WARP' output from 3dQwarp, then N(x,y,z)
maps the base dataset (x,y,z) coordinates to the source dataset (x,y,z)
coordinates. If the source dataset has to expand in size to match
the base dataset, then going from base coordinates to source must
be a shrinkage. Thus, negative '-bulk' in this '_WARP' dataset
corresponds to expansion going from source to base. Conversely,
in this situation, positive '-bulk' will show up in the '_WARPINV'
dataset from 3dQwarp as that is the map from source (x,y,z) to
base (x,y,z).
The situation above happens a lot when using one of the MNI152 human
brain templates as the base dataset. This family of datasets is larger
than the average human brain, due to the averaging process used to
define the first MNI152 template back in the 1990s.
I have no easy interpretation handy for the '-shear' and '-vorticity'
outputs, alas. They are computed as part of the penalty function used
to control weirdness in the 3dQwarp optimization process.
---------------------------
AUTHOR -- RWCox == @AFNIman
---------------------------
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dNwarpXYZ
Usage: 3dNwarpXYZ [options] -nwarp 'warp specification' XYZfile.1D > Output.1D
Transforms the DICOM xyz coordinates in the input XYZfile.1D (3 columns)
based on the '-nwarp' specification -- which is as in 3dNwarpApply
(e.g., allows inversion, catenation, et cetera).
If this warp is the _WARP output from 3dQwarp, then it takes XYZ values
from the base dataset and transforms them to the corresponding source
dataset location.
To do the reverse operation -- to take an XYZ in the source dataset
and find out where it goes to in the base dataset -- do one of these:
* use the _WARPINV output from 3dQwarp instead of the _WARP output;
* use the 'INV(dataset)' form for '-nwarp' (will be slow);
* use the '-iwarp' option described below.
The first 2 choices should be equivalent. The third choice will give
slightly different results, since the method used for warp inversion
for just a few discrete points is very different than the full warp
inversion algorithm -- this difference is for speed.
The mean Euclidean error between '-iwarp' and _WARPINV is about 0.006 mm
in one test. The largest error (using 1000 random points) in this test
was about 0.05 mm. About 95% of points had 0.015 mm error or less.
For any 3D brain MRI purpose that Zhark can envision, this level of
concordance should be adequately good-iful.
----------------------------------------------------------------
CLARIFICATION about the confusing forward and inverse warp issue
----------------------------------------------------------------
If the following is the correct command to take a source dataset to
the place that you want it to go:
3dNwarpApply -nwarp 'SOME_WARP' -source DATASET -prefix JUNK
then the next command is the one to take coordinates in the source
dataset to the same place
3dNwarpXYZ -nwarp 'SOME_WARP' -iwarp XYZsource.1D > XYZwarped.1D
For example, a command like the above has been used to warp (x,y,z)
coordinates for ECOG sensors that were picked out manually on a CT volume.
An AFNI nonlinear warp stores the displacements (in DICOM mm) from the
base dataset grid to the source dataset grid. For computing the source
dataset warped to the base dataset grid, these displacements are needed,
so that for each grid point in the output (warped) dataset, the corresponding
location in the source dataset can be found. That is, this 'forward' warp is
good for finding where a given point in the base dataset maps to in the
source dataset.
However, for finding where a given point in the source dataset maps to
in the base dataset, the 'inverse' warp is needed, which is why the
'-iwarp' option was added to 3dNwarpXYZ.
Zhark knows the above is confusing, and hopes that your distraction by
this issue will aid him in his ruthless quest for Galactic Domination!
(And for warm cranberry scones with fresh clotted cream.)
-------------
OTHER OPTIONS (i.e., besides the mandatory '-nwarp')
-------------
-iwarp = Compute the inverse warp for each input (x,y,z) triple.
++ As mentioned above, this program does NOT compute the
inverse warp over the full grid (unlike the 'INV()' method
and the '-iwarp' options to other 3dNwarp* programs), but
uses a different method that is designed to be fast when
applied to a relatively few input points.
++ The upshot is that using '-iwarp' here will give slightly
different results than using 'INV()', but for any practical
application the differences should be negligible.
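Example sketch (with a hypothetical warp dataset name): map a single
base-space coordinate to the source dataset, then map the result back:
   echo '10 -20 30' > xyz_base.1D
   3dNwarpXYZ -nwarp Fred_WARP+tlrc xyz_base.1D > xyz_source.1D
   3dNwarpXYZ -nwarp Fred_WARP+tlrc -iwarp xyz_source.1D > xyz_back.1D
The coordinates in xyz_back.1D should reproduce xyz_base.1D to within
the small '-iwarp' error discussed above.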
July 2014 - Zhark the Coordinated
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dOverlap
Usage: 3dOverlap [options] dset1 dset2 ...
Output = count of number of voxels that are nonzero in ALL
of the input dataset sub-bricks
The result is simply a number printed to stdout. (If a single
brick was input, this is just the count of number of nonzero
voxels in that brick.)
Options:
-save ppp = Save the count of overlaps at each voxel into a
dataset with prefix 'ppp' (properly thresholded,
this could be used as a mask dataset).
Example:
3dOverlap -save abcnum a+orig b+orig c+orig
3dmaskave -mask 'abcnum+orig<3..3>' a+orig
Also see program 3dABoverlap :)
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dPAR2AFNI.pl
3dPAR2AFNI
Version: 2008/07/18 11:12
Command line Options:
-h This help message.
-v Be verbose in operation.
-s Skip the outliers test when converting 4D files
The default is to perform the outliers test.
-n Output NIfTI files instead of HEAD/BRIK.
The default is to create HEAD/BRIK files.
-a Output ANALYZE files instead of HEAD/BRIK.
-o The name of the directory where the created files should be
placed. If this directory does not exist the program exits
without performing any conversion.
The default is to place created files in the same directory
as the PAR files.
-g Gzip the files created.
The default is not to gzip the files.
-2 2-Byte-swap the files created.
The default is not to 2 byte-swap.
-4 4-Byte-swap the files created.
The default is not to 4 byte-swap.
Sample invocations:
3dPAR2AFNI subject1.PAR
Converts the file subject1.PAR to subject1+orig.{HEAD,BRIK}
3dPAR2AFNI -s subject1.PAR
Same as above but skip the outlier test
3dPAR2AFNI -n subject1.PAR
Converts the file subject1.PAR to subject1.nii
3dPAR2AFNI -n -s subject1.PAR
Same as above but skip the outlier test
3dPAR2AFNI -n -s -o ~/tmp subject1.PAR
Same as above but skip the outlier test and place the
created NIfTI files in ~/tmp
3dPAR2AFNI -n -s -o ~/tmp *.PAR
Converts all the PAR/REC files in the current directory to
NIfTI files, skip the outlier test and place the created
NIfTI files in ~/tmp
AFNI program: 3dpc
Principal Component Analysis of 3D Datasets
Usage: 3dpc [options] dataset dataset ...
Each input dataset may have a sub-brick selector list.
Otherwise, all sub-bricks from a dataset will be used.
OPTIONS:
-dmean = remove the mean from each input brick (across space)
-vmean = remove the mean from each input voxel (across bricks)
[N.B.: -dmean and -vmean are mutually exclusive]
[default: don't remove either mean]
-vnorm = L2 normalize each input voxel time series
[occurs after the de-mean operations above,]
[and before the brick normalization below. ]
-normalize = L2 normalize each input brick (after mean subtraction)
[default: don't normalize]
-nscale = Scale the covariance matrix by the number of samples
This is not done by default for backward compatibility.
You probably want this option on.
-pcsave sss = 'sss' is the number of components to save in the output;
it can't be more than the number of input bricks
[default = none of them]
* To get all components, set 'sss' to a very large
number (more than the time series length), like 99999
You can also use the key word ALL, as in -pcsave ALL
to save all the components.
-reduce r pp = Compute a 'dimensionally reduced' dataset with the top
'r' eigenvalues and write to disk in dataset 'pp'
[default = don't compute this at all]
* If '-vmean' is given, then each voxel's mean will
be added back into the reduced time series. If you
don't want this behaviour, you could remove the mean
with 3dDetrend before running 3dpc.
* On the other hand, the effects of '-vnorm' and '-dmean'
and '-normalize' are not reversed in this output
(at least at present -- send some cookies and we'll talk).
-prefix pname = Name for output dataset (will be a bucket type);
* Also, the eigen-timeseries will be in 'pname'_vec.1D
(all of them) and in 'pnameNN.1D' for eigenvalue
#NN individually (NN=00 .. 'sss'-1, corresponding
to the brick index in the output dataset)
* The eigenvalues will be printed to file 'pname'_eig.1D
All eigenvalues are printed, regardless of '-pcsave'.
[default value of pname = 'pc']
-1ddum ddd = Add 'ddd' dummy lines to the top of each *.1D file.
These lines will have the value 999999, and can
be used to align the files appropriately.
[default value of ddd = 0]
-verbose = Print progress reports during the computations
-quiet = Don't print progress reports [the default]
-eigonly = Only compute eigenvalues, then
write them to 'pname'_eig.1D, and stop.
-float = Save eigen-bricks as floats
[default = shorts, scaled so that |max|=10000]
-mask mset = Use the 0 sub-brick of dataset 'mset' as a mask
to indicate which voxels to analyze (a sub-brick
selector is allowed) [default = use all voxels]
Example using 1D data as input, with each column being the equivalent
of a sub-brick:
3dpc -prefix mmm -dmean -nscale -pcsave ALL datafile.1D
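A second sketch (with hypothetical names), using '-reduce' to write a
dimensionally reduced copy of the input built from the top 5 components:
3dpc -prefix pc -mask mask+orig -vmean -nscale \
     -pcsave 5 -reduce 5 pc_reduced epi+orig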
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dPeriodogram
Usage: 3dPeriodogram [options] dataset
Computes the periodogram of each voxel time series.
(Squared FFT = a crude estimate of the power spectrum)
--------
Options:
--------
-prefix p = use string 'p' for the prefix of the
output dataset [DEFAULT = 'pgram']
-taper = fraction of data to taper [DEFAULT = 0.1]
-nfft L = set FFT length to 'L' points
(longer than the data ==> zero padding)
(shorter than the data ==> data pruning)
------
Notes:
------
* Output is in float format; number of sub-bricks will be
half the FFT length; sub-brick #0 = FFT bin #1, etc.
* Grid spacing in the frequency (sub-brick) dimension will
be 1/(nfft*TR) where nfft=FFT length, TR=dataset timestep.
* There is no '-mask' option. The hyper-clever user could
use something like
'3dcalc( -a dset+orig -b mask+orig -expr a*b )'
to apply a binary mask on the command line.
* Data is not scaled exactly as in the AFNI Power plugin.
* Each time series is linearly detrended prior to FFT-ization.
* FFT length defaults to the next legal length >= the input dataset length.
* The program can only do FFT lengths that are positive even integers.
++ '-nfft' with an illegal value will cause the program to fail.
* If you want to do smaller FFTs, then average the periodograms
(to reduce random fluctuations), you can use 3dPeriodogram in
a script with "[...]" sub-brick selectors, then average
the results with 3dMean.
* Or you could use the full-length FFT, then smooth that FFT
in the frequency direction (e.g., with 3dTsmooth).
* This is a really quick hack for DH and PB and SfN.
* Author = RWCox -- who doesn't want any bribe at all for this!
-- http://ethics.od.nih.gov/topics/gifts.htm
---------------------------------------------------
More Details About What 3dPeriodogram Actually Does
---------------------------------------------------
* Tapering is done with the Hamming window (if taper > 0):
Define npts = number of time points analyzed (<= nfft)
(i.e., the length of the input dataset)
ntaper = taper * npts / 2 (0 < taper <= 1)
= number of points to taper on each end
ktop = npts - ntaper
phi = PI / ntaper
Then the k-th point (k=0..nfft-1) is tapered by
w(k) = 0.54 - 0.46 * cos(k*phi) 0 <= k < ntaper
w(k) = 0.54 + 0.46 * cos((k-ktop+1)*phi) ktop <= k < npts
w(k) = 1.0 otherwise
Also define P = sum{ w(k)*w(k) } from k=0..npts-1
(if ntaper = 0, then P = npts).
* The result is the squared magnitude of the FFT of w(k)*data(k),
divided by P. This division makes the result be the 'power',
which is to say the data's sum-of-squares ('energy') per unit
time (in units of 1/TR, not 1/sec) ascribed to each FFT bin.
* Normalizing by P also means that the values output for different
amounts of tapering or different lengths of data are comparable.
* To be as clear as I can: this program does NOT do any averaging
across multiple windows of the data (such as Welch's method does)
to estimate the power spectrum. This program:
++ tapers the data,
++ zero-pads it to the FFT length,
++ FFTs it (in time),
++ squares it and divides by the P factor.
* The number of output sub-bricks is nfft/2:
sub-brick #0 = FFT bin #1 = frequency 1/(nfft*dt)
#1 = FFT bin #2 = frequency 2/(nfft*dt)
et cetera, et cetera, et cetera.
* If you desire to implement Welch's method for spectrum estimation
using 3dPeriodogram, you will have to run the program multiple
times, using different subsets of the input data, then average
the results with 3dMean.
++ https://en.wikipedia.org/wiki/Welch's_method
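For example (a schematic sketch with a hypothetical 256-point dataset
'rest+orig'), the periodograms of two non-overlapping halves of the time
series could be averaged like so:
   3dPeriodogram -nfft 128 -prefix pgA 'rest+orig[0..127]'
   3dPeriodogram -nfft 128 -prefix pgB 'rest+orig[128..255]'
   3dMean -prefix pgram_avg pgA+orig pgB+orig
(Classical Welch's method uses overlapping, tapered segments; the above
is just the simplest non-overlapping variant.)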
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dPFM
Usage: 3dPFM [options]
------
Brief summary:
==============
* 3dPFM is a program that identifies brief BOLD events (order of sec) in fMRI time series
without prior knowledge of their timing. 3dPFM deconvolves a hemodynamic response
function for each fMRI voxel and estimates the neuronal-related signal that generates
the BOLD events according to the linear haemodynamic model. In many ways,
the neuronal-related signal could be understood as the stimulus signal defined by the
experimental paradigm in a standard GLM approach, where the onsets
and duration of the experimental conditions are known a priori. In contrast,
3dPFM does not assume such information and estimates the signal underlying the
BOLD events with NO PARADIGM INFORMATION, i.e. PARADIGM FREE MAPPING (PFM). For instance,
this algorithm can be useful to identify spontaneous BOLD events in resting-state
fMRI data.
* The ideas behind 3dPFM are described in
C Caballero-Gaudes, N Petridou, ST Francis, IL Dryden, and PA Gowland.
Paradigm Free Mapping with Sparse Regression Automatically detects Single-Trial
Functional Magnetic Resonance Imaging Blood Oxygenation Level Dependent Responses.
Human Brain Mapping, 34(3):501-18, 2013.
http://dx.doi.org/10.1002/hbm.21452
* For the deconvolution, 3dPFM assumes a linear convolution model and that
the neuronal-related signal is sparse in time, i.e. it has a non-zero amplitude
in a relatively small number of time points. How small 'relatively small' is depends
on the number of time points of the signal, i.e. the length of the signal,
a.k.a. the number of scans or volumes.
* In many ways, the rationale of 3dPFM is very similar to 3dTfitter with the -FALTUNG
(deconvolution) option. The two programs differ in the manner in which the deconvolution
is solved, and in several other relevant and interesting options.
**** I would also recommend you to read 3dTfitter -help for useful tips *****
************* !!! 3dPFM is not for the casual user !!!! ****************
* IMPORTANT. This program is written in R. Please follow the guidelines in
https://afni.nimh.nih.gov/sscc/gangc/Rinstall.html
to install R and make AFNI compatible with R. In addition, you need to install
the following libraries with dependencies:
install.packages("abind",dependencies=TRUE)
install.packages("MASS",dependencies=TRUE)
install.packages("lars",dependencies=TRUE)
You can find a demo on how to run this program in @Install_3dPFM_Demo
A brief note on deconvolution and regularization
===========================================
Only for the non-casual user !!!:
===========================================
The basic idea of 3dPFM is to assume that the time series at each voxel y(t)
is given by the linear convolution model (e.g., a linear haemodynamic model)
y(t) = sum { h(j) * s(t-j) } + e(t)
j>=0
where h(t) is a user-supplied kernel function (e.g., haemodynamic response
function (HRF)), s(t) is the neuronal-related time series to be estimated, and e(t) is
a noise term capturing all noisy components of the signal. In matrix notation,
the convolution model can be "simply" written as
y = H*s + e
where y, s and e are the input voxel, the neuronal-related and the error time series,
respectively, and H is a matrix with time-shifted versions of the kernel function
across columns. The convolution model is defined such that the size of H is N x N,
where N is the length of the input time series and, accordingly, the estimated
neuronal-related time series has the same length as the input time series.
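To make the matrix notation concrete, here is a tiny illustrative case
(a made-up 3-sample kernel h = [h0 h1 h2] and only N = 4 time points):
[ y(1) ]   [ h0  0   0   0  ] [ s(1) ]   [ e(1) ]
[ y(2) ] = [ h1  h0  0   0  ] [ s(2) ] + [ e(2) ]
[ y(3) ]   [ h2  h1  h0  0  ] [ s(3) ]   [ e(3) ]
[ y(4) ]   [ 0   h2  h1  h0 ] [ s(4) ]   [ e(4) ]
i.e. H is a lower-triangular (Toeplitz) matrix whose columns are time-shifted
copies of the kernel, and the estimated s has the same length N as y.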
Assuming that the noise is random and following a Gaussian distribution, a very
sensible way to estimate the time series s would be to minimize the sum of squares
of the residuals (RSS), a.k.a. L2fit, Least-Squares (LS) fit, and so forth, i.e.
s* = min || y - H*s ||_2^2
s
Unfortunately, in our problem the least squares solution tends to overfit the
input time series (i.e. the estimate tends to reproduce the input signal perfectly,
noise included) since the number of variables to estimate is
equal to the number of observations in the original time series. In addition,
since the columns of the convolution matrix H are highly correlated, the LS estimates
can become poorly determined and exhibit high variance.
One solution to these drawbacks is to impose a regularization term on (or penalization of)
the coefficient estimates based on prior information about the input signal. Typically,
regularization terms based on the Lp-norm of the estimates are used, such that the estimate
of s is computed by solving
s* = min || y - H*s ||_2^2 subject to || s ||_p <= λ
s
or, similarly,
s* = min || s ||_p subject to || y - H*s ||_2^2 <= λ
s
or, using Lagrangian multipliers,
s* = min || y - H*s ||_2^2 + λ || s ||_p
s
The three optimization problems are essentially equivalent, where λ is
a positive regularization parameter that balances the tradeoff between the
residual sum of squares (RSS) term and the regularization or penalty term.
Note: The value of λ in the Lagrangian formulation is not equal (i.e. does
not have one-to-one correspondence) to the value of λ in the constrained problems.
The L1-norm (p = 1) is a convex, and widely studied, regularization term that promotes
sparse estimates. Relevant for fMRI data analysis, if BOLD responses were generated
by brief (on the fMRI time scale) bursts of neuronal activation, it could be assumed
that the neuronal-related time series s is a sparse vector with few coefficients
whose amplitudes are significantly different from zero. In fact, this is typically assumed
in event-related fMRI experiments where we assume that one voxel responds to brief stimuli
in some, but not all, conditions.
In 3dPFM, two regularized estimation problems are currently implemented based on the L1-norm:
* LASSO: The least absolute shrinkage and selection operator (LASSO) [Tibshirani, 1996],
which is equivalent to basis pursuit denoising (BPDN) [Chen et al., 1998]:
s* = min || y - H*s ||_2^2 subject to || s ||_1 <= λ
s
* DS: The Dantzig Selector [Candes and Tao, 2007]
s* = min || s ||_1 subject to || H^T (y - H*s) ||_infty <= λ
s
where the L_infty (infinity-norm) refers to the maximum absolute value of a vector.
In practice, minimizing the error term subject to a constraint in the norm is often
equivalent to minimizing the norm subject to a constraint in the error term,
with a one-to-one correspondence between the regularization parameters of both problems.
All in all, one can see that the main difference between the LASSO and the DS relates
to the error term. The LASSO considers the residual sum of squares (RSS), whereas
the DS considers the maximum correlation (in absolute value) of the residuals with
the model. Very intelligent minds have shown that there are very strong links
between the DS and the LASSO (see Bickel et al., 2009
http://projecteuclid.org/euclid.aos/1245332830; and James et al., 2009
http://dx.doi.org/10.1111/j.1467-9868.2008.00668.x for more information).
For lesser mortals, it is enough to know that the L_infty norm term in the DS is
equivalent to the differentiation of the RSS term with respect to s in the LASSO.
Actually, in practice the results of 3dPFM with the DS are usually very similar
to the ones obtained with the LASSO (and vice-versa).
Algorithms for solving the LASSO and DS
---------------------------------------
3dPFM relies on homotopy continuation procedures to solve the above optimization
problems. These procedures are very useful since they compute the complete
set of solutions of the problem for all possible regularization parameters.
This is known as the regularization path. In particular, 3dPFM employs an R-version
of homotopy continuation algorithms for the DS (L1-homotopy) developed by Asif and Romberg
(see http://dx.doi.org/10.1109/CISS.2010.5464890), and the R-package LARS for the LASSO.
Choice of regularization parameter
----------------------------------
Once the regularization path with all solutions is computed, what is the optimal one?
i.e., what is the optimal regularization parameter λ?? This is a very difficult question.
In fact, it is nearly impossible to select the optimal λ unless one knows
the optimal solution in advance (i.e. is the ORACLE) (but then we would not need to
estimate anymore!!!). In 3dPFM, the choice of the regularization parameter is done
based on model selection criteria that balance the degrees of freedom (df) that are
employed to fit the signal and the RSS relative to the number of observations.
For instance, when we use the Least Squares estimator to fit a general linear model
(GLM), as in 3dDeconvolve, the value of df is approximately equal to the number of
regressors that we define in the model. So, here is the key question in 3dPFM:
If the convolution model used in 3dPFM (i.e. the matrix H) has as many columns as
the number of observations, are the degrees of freedom not equal to, or higher than,
the number of time points of the signal? The answer is NO for L1-norm
regularization problems such as the LASSO.
The trick is that an unbiased estimate of the degrees of freedom of the LASSO is
the number of non-zero coefficients of the LASSO estimate (for demonstration see
http://projecteuclid.org/euclid.aos/1194461726) if the matrix H is orthogonal.
Unfortunately, the matrix H in 3dPFM is not orthogonal and this result is not
completely accurate. Yet, we consider it valid as it works quite nicely
in our application, i.e. counting the number of non-zero coefficients in the solution is
a very good approximation of the degrees of freedom. Moreover, 3dPFM also uses this
approximation for the Dantzig Selector due to the close link with the LASSO.
Therefore, the unbiased estimate of the degrees of freedom can be used to construct
model selection criteria to select the regularization parameter. Two different
criteria are implemented in 3dPFM:
* -bic: (Bayesian Information Criterion, equivalent to Minimum Description Length)
λ* = min N*log(|| y - H*s(λ) ||_2^2) + log(N)*df(λ)
λ
* -aic: (Akaike Information Criterion)
λ* = min N*log(|| y - H*s(λ) ||_2^2) + 2*df(λ)
λ
where s(λ) and df(λ) denote that the estimate and df depend on the regularization
parameter λ.
As shown in (Caballero-Gaudes et al. 2013), the Bayesian information criterion (BIC)
typically gives better results than the Akaike information criterion (AIC).
If you want the 3dPFM ORACLE (i.e. the author of this program) to implement other
criteria, such as AICc, MDLc, please write him an email.
Option -nonzeros Q:
Alternatively, one could also select the regularization parameter such that
the estimate only includes Q coefficients with non-zero amplitude, where Q
is an arbitrary number given as input. In statistics, the set of nonzero coefficients
for a given regularization parameter is defined as the active (or support) set.
A typical use of this option would be that we hypothesize that our signal
only includes Q nonzero coefficients (i.e. haemodynamic events of TR duration)
but we do not know when they occur.
IMPORTANT: If two successive coefficients are non-zero, do they represent one or
two events? Intuitively, one could think that both coefficients model a single event
that spans several time points and, thus, requires several non-zero coefficients
to be properly modelled. This case is NOT considered in the program.
To deal with this situation, 3dPFM should have an option like "-nevents Q",
where Q is the number of events or successive non-zero coefficients. Unfortunately,
this cannot be easily defined. For instance, an estimate where all coefficients are
non-zero would represent a SINGLE event!!!
If you think of a sensible manner to implement this option, please contact THE ORACLE.
VERY IMPORTANT: In practice, the regularization path could include 2 different solutions
for 2 different regularization parameters but with equal number of non-zero coefficients!!!
This occurs because in the process of computing the regularization path for decreasing values
of the regularization parameter (i.e. λ1 > λ2 > λ3), the number of elements in the active set
(i.e. the set of coefficients with non-zero amplitude) can increase or decrease. In fact,
the knots of the regularization path are the points where one element of the active set changes
(i.e. it is removed or added to the active set) as λ decreases to zero. Consequently, the
active set could include Q non-zero elements for λ1, Q+1 for λ2 < λ1, and Q for λ3 < λ2.
In that case, the estimate given by 3dPFM is the solution for the largest regularization
parameter.
CAREFUL!! use option -nonzeros at your own risk!!
- Not all voxels show neuronal-related BOLD events.
- This option is appropriate for ROI or VOI analyses where there is a clear hypothesis
that a given number of BOLD events should exist but we have no clue about their timing.
------------
References:
------------
If you find 3dPFM useful, the papers to cite are:
C Caballero-Gaudes, N Petridou, ST Francis, IL Dryden, and PA Gowland.
Paradigm Free Mapping with Sparse Regression Automatically detects Single-Trial
Functional Magnetic Resonance Imaging Blood Oxygenation Level Dependent Responses.
Human Brain Mapping, 34(3):501-18, 2013.
http://dx.doi.org/10.1002/hbm.21452
C Caballero-Gaudes, N Petridou, IL Dryden, L Bai, ST Francis and PA Gowland.
Detection and characterization of single-trial fMRI bold responses:
Paradigm free mapping. Human Brain Mapping, 32(9):1400-18, 2011
http://dx.doi.org/10.1002/hbm.21116.
If you find 3dPFM very useful for the analysis of resting state data and finding invisible
spontaneous BOLD events, the paper to cite is:
N Petridou, C Caballero-Gaudes, IL Dryden, ST Francis and PA Gowland
Periods of rest in fMRI contain individual spontaneous events which
are related to slowly fluctuating spontaneous activity. Human Brain Mapping,
34(6):1319-29, 2013.
http://dx.doi.org/10.1002/hbm.21513
If you use the Dantzig Selector in 3dPFM and want to know more about the homotopy algorithm
for solving it, the paper to read (and cite) is:
M Salman Asif and J Romberg, On the LASSO and Dantzig selector equivalence,
Conference on Information Sciences and Systems (CISS), Princeton, NJ, March 2010.
http://dx.doi.org/10.1109/CISS.2010.5464890
Finally, additional references for the LASSO and the Dantzig Selector are:
R Tibshirani. Regression Shrinkage and Selection via the Lasso. Journal of
the Royal Statistical Society. Series B (Methodological), 58(1): 267-288, 1996.
http://www.jstor.org/stable/2346178
H Zou, T Hastie, R Tibshirani. On the “degrees of freedom” of the lasso.
Annals of Statistics 35(5): 2173--2192, 2007.
http://projecteuclid.org/euclid.aos/1194461726.
B Efron, T Hastie, I. Johnstone, R Tibshirani. Least Angle Regression.
Annals of Statistics 32(2): 407--499, 2004.
http://projecteuclid.org/euclid.aos/1083178935
E Candes and T. Tao. The Dantzig selector: Statistical estimation when p is
much larger than n. The Annals of Statistics 35(6):2313--2351, 2007.
http://projecteuclid.org/euclid.aos/1201012958.
M Salman Asif and J Romberg, On the LASSO and Dantzig selector equivalence,
Conference on Information Sciences and Systems (CISS), Princeton, NJ, March 2010.
http://dx.doi.org/10.1109/CISS.2010.5464890
---------------------------------------------------------------------------------------
Author: C. Caballero Gaudes, THE ORACLE (c.caballero@bcbl.eu) (May 1st, 2015)
(many thanks to Z. Saad, R.W. Cox, J. Gonzalez-Castillo, G. Chen, and N. Petridou for neverending support)
Example usage:
-----------------------------------------------------------------------------
3dPFM -input epi.nii
-mask mask.nii
-algorithm dantzig
-criteria bic
-LHS regparam.1D
-hrf SPMG1
-jobs 1
-outALL yes
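Another possible invocation (a sketch only; the file name, TR value, and number
of non-zero coefficients below are hypothetical), e.g. for a .1D file in which
each column is a voxel time course:
3dPFM -input roi_timecourses.1D
-TR 2
-algorithm lasso
-nonzeros 5
-beta roi_PFM
-jobs 1
Remember that -nonzeros cannot be combined with -criteria, and that -TR is
mandatory for .1D input.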
Options:
--------
-input DSET1
Specify the dataset to analyze with Paradigm Free Mapping (3dPFM).
It can be any of the formats available in AFNI.
e.g.: -input Data+orig
Also accepts .1D files where each column is a voxel time course.
If a .1D file is input, you MUST specify the TR with option -TR.
-mask MASK: Process voxels inside this mask only. Default is no masking.
-algorithm ALG: Regularization (a.k.a. penalty) function used for HRF deconvolution.
* Available options for ALG are:
dantzig: Dantzig Selector (default)
lasso: LASSO
* If you want other options, contact the ORACLE (c.caballero@bcbl.eu).
-criteria CRIT: Model selection criterion for HRF deconvolution.
* Available options are:
BIC: Bayesian Information Criterion
AIC: Akaike Information Criterion
* Default is BIC since it tends to produce more accurate deconvolution (see 3dPFM paper).
* If you want other options, write to the ORACLE.
* This option is incompatible with -nonzeros.
-nonzeros XX:
* Choose the estimate of the regularization path with XX nonzero coefficients
as the output of the deconvolution.
* Since the regularization path could have several estimates with identical
number of nonzero coefficients, the program will choose the first one in the
regularization path, i.e. the solution with the largest regularization parameter.
* This option is incompatible with -criteria.
* This option is not used by default.
-maxiter MaxIter:
* Maximum number of iterations in the homotopy procedure (absolute value).
* Setting MaxIter to a small value might be useful to speed up the program, e.g.
with the option -nonzeros Q, MaxIter = 2*Q is reasonable (default)
-maxiterfactor MaxIterFactor:
* Maximum number of iterations in the homotopy procedure is relative to
the number of volumes of the input time series, i.e. MaxIterFactor*nscans,
* Default value is MaxIterFactor = 1
MaxIter OR MaxIterFactor
--------------------------
* If both MaxIterFactor and MaxIter are given (for whatever mistaken reason),
the program will STOP. Only one of the two options is admitted.
* If none of them is given, the number of iterations is equal to nscans.
* The homotopy procedure adds or removes one coefficient from the active
set of non-zero coefficients in the estimate in each iteration.
* If you expect Q non-zero coefficients in the deconvolved time-series,
a reasonable choice is MaxIter = 2*Q (default with -nonzero Q)
* If you want to speed up the program, choose MaxIterFactor = 1 or 0.5.
-TR tr: Repetition time or sampling period of the input data
* It is required for the generation of the deconvolution HRF model.
* If input dataset is .1D file, TR must be specified in seconds.
If TR is not given, the program will STOP.
* If input dataset is a 3D+time volume and tr is NOT given,
the value of TR is taken from the dataset header.
* If TR is specified and it is different from the TR in the header
of the input dataset, the program will STOP.
I am not sure why you would want to do that!!!
but if you do, first change the TR of the input with 3drefit.
-hrf fhrf: haemodynamic response function used for deconvolution
* Since July 2015, fhrf can be any of the HRF models available in 3dDeconvolve.
Check https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dDeconvolve.html
* I.e. 3dPFM calls 3dDeconvolve with the -x1D_stop and -nodata options
to create the HRF with onset at 0 (i.e. -stim_times 1 '1D:0' fhrf )
* [Default] fhrf == 'GAM', the 1 parameter gamma variate
(t/(p*q))^p * exp(p-t/q)
with p=8.6 q=0.547 if only 'GAM' is used
** The peak of 'GAM(p,q)' is at time p*q after
the stimulus. The FWHM is about 2.3*sqrt(p)*q.
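** For example, with the default p=8.6 and q=0.547, that works out to a
peak at roughly 4.7 s and an FWHM of roughly 3.7 s (quick arithmetic,
just for orientation).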
* Another option is fhrf == 'SPMG1', the SPM canonical HRF.
* If fhrf is a .1D file, the program will use it as the HRF model.
** It should be generated with the same TR as the input data
to get sensible results (i.e. know what you are doing).
** fhrf must be a column or row vector, i.e. only 1 hrf is allowed.
In the future, this option might be changed to model the hrf as
a linear combination of functions.
* The HRF is normalized to maximum absolute amplitude equal to 1.
-hrf_vol hrf_DSET: 3D+time dataset with voxel/node/vertex-dependent HRFs.
* The grid and TR of hrf_DSET must be the same as the input dataset.
* This dataset can be the output of -iresp option in 3dDeconvolve, which
contains the estimated HRF (a.k.a. impulse response) for a given stimulus.
* In 3dPFM, the HRF response is assumed constant during the acquisition.
* See also -idx_hrf, an interesting option to use voxel dependent HRFs.
-idx_hrf idx_hrf_DSET: 3D dataset with voxel-dependent indexes that indicate
which column of the .1D file in option -hrf should be used for each voxel.
* Of course, the grid of idx_hrf_DSET must be the same as the input dataset.
* The number of HRFs in option -hrf must be >= the maximum index in idx_hrf_DSET.
Otherwise, the program will STOP before starting any calculation.
* Only positive integers > 0 are allowed in this option.
* For instance, this dataset can be created by clustering (e.g. with 3dKmeans)
the estimated HRF generated with option -iresp in 3dDeconvolve.
* In 3dPFM, the HRF response is assumed constant during the acquisition
* An index equal to 1 will select the first column of the .1D fhrf,
which is usually column 0 in AFNI nomenclature.
-LHS lset:
Options: file.1D or functional dataset(s)
* Additional regressors that will be fitted to the data after deconvolution.
* Usually, these will be nuisance regressors that explain some variability
of the data, e.g. the realignment parameters estimated with 3dvolreg.
* More than one 'lset' can follow the '-LHS' option and it can be any of the AFNI formats.
* Each 'lset' can be a 3D+time dataset or a 1D file with 1 or more columns.
* A 3D+time dataset defines one column in the LHS matrix.
++ If input is a 1D file, then you cannot input a 3D+time
dataset with '-LHS'.
++ If input is a 3D+time dataset, then the LHS 3D+time dataset(s)
must have the same voxel grid as the input.
* A 1D file will include all its columns in the LHS matrix.
++ For example, you could input the LHS matrix from the
.xmat.1D file output by 3dDeconvolve, if you wanted
to repeat the same linear regression using 3dPFM.
* Columns are assembled in the order given on the command line,
which means that LHS parameters will be output in that order!
NOTE: These notes are ALMOST a copy of the -LHS option in 3dTfitter and
they are replicated here for simplicity and because it is difficult
to do it better !!
-jobs NJOBS: On a multi-processor machine, parallel computing will speed
up the program significantly.
Choose 1 for a single-processor computer (DEFAULT).
-nSeg XX: Divide into nSeg segments of voxels to report progress,
e.g. nSeg 5 will report every 20% of processed voxels.
Default = 10
-verb VERB: VERB is an integer specifying verbosity level.
0 for quiet, 1 (default) or more: talkative.
-help: this help message
-beta Prefix for the neuronal-related (i.e. deconvolved) time series.
It will have the same length as the input time series.
This volume is always saved, with the default name 'PFM' if no prefix is given.
++ If you don't want this time series (why?), set it to NULL.
This is another similarity with 3dTfitter.
-betafitts Prefix of the convolved neuronal-related time series.
It will have the same length as the input time series.
Default = NULL, which means that the program will not save it.
-fitts Prefix for the fitted time series.
Default = NULL, although it's recommendable to save it
to check the fit of the model to the data.
-resid Prefix for the residuals of the fit to the data.
Default = NULL.
It could also be computed as input - fitts with 3dcalc.
-mean Prefix for the intercept of the model
Default = NULL.
-LHSest Prefix for the estimates of the LHS parameters.
Default = NULL.
-LHSfitts Prefix for the fitted time series of the LHS parameters.
Default = NULL.
-lambda Prefix for output volume with the regularization parameter
of the deconvolution of each voxel.
Default = NULL.
-costs Prefix for output volume of the cost function used to select the
regularization parameter according to the selected criteria.
Default = NULL.
Output volumes of T-stats, F-stats and Z-stats
==============================================
-Tstats_beta Prefix for the T-statistics of beta at each time point
according to a linear model including the nonzero coefficients
of the deconvolved signal, plus LHS regressors and intercept.
It will have the same length as the input time series.
Recommendation: Use -Tdf_beta too!!
Default = NULL.
-Tdf_beta Prefix for degrees of freedom of the T-statistics of beta.
Useful if you want to check Tstats_beta since different voxels
might have different degrees of freedom.
Default = NULL.
-Z_Tstats_beta Prefix for (normalized) z-scores of the T-statistics of beta.
Recommendable option to visualize the results instead of
Tstats_beta and Tdf_beta since (again) different voxels
might be fitted with different degrees of freedom.
Default = NULL.
-Fstats_beta Prefix for the F-statistics of the deconvolved component.
Recommendation: Use -Fdf_beta too!! for the very same reasons.
Default = NULL.
-Fdf_beta Prefix for degrees of freedom of Fstats_beta.
Useful to check Fstats_beta for the very same reasons.
Default = NULL.
-Z_Fstats_beta Prefix for (normalized) z-scores of the Fstats_beta.
Recommendable option instead of Fstats_beta and Fdf_beta.
Default = NULL.
-Tstats_LHS Prefix for T-statistics of LHS regressors at each time point.
It will have as many sub-bricks as the total number of LHS regressors.
Recommendation: Use -Tdf_LHS too!!
Default = NULL.
-Tdf_LHS Prefix for degrees of freedom of the Tstats_LHS.
Useful if you want to check Tstats_LHS since different voxels
might have different degrees of freedom.
Default = NULL.
-Z_Tstats_LHS Prefix for (normalized) z-scores of the Tstats_LHS.
Recommendable option instead of Tstats_LHS and Tdf_LHS.
Default = NULL.
-Fstats_LHS Prefix for the F-statistics of the LHS regressors.
Recommendation: Use -Fdf_LHS too!!
Default = NULL.
-Fdf_LHS Prefix for degrees of freedom of the Fstats_LHS.
Default = NULL.
-Z_Fstats_LHS Prefix for (normalized) z-scores of Fstats_LHS.
Recommendable option instead of Fstats_LHS and Fdf_LHS.
Default = NULL.
-Fstats_full Prefix for the F-statistics of the full (deconvolved) model.
Default = NULL.
-Fdf_full Prefix for the degrees of freedom of the Fstats_full.
Default = NULL.
-Z_Fstats_full Prefix for (normalized) z-scores of Fstats_full.
Default = NULL.
-R2_full Prefix for R^2 (i.e. coefficient of determination) of the full model.
Default = NULL.
-R2adj_full Prefix for Adjusted R^2 coefficient of the full model.
Default = NULL.
-outALL suffix
* If -outALL is used, the program will save ALL output volumes.
* The names of the output volumes will be automatically generated as
outputname_suffix_input, e.g. if -input = TheEmperor+orig, and suffix is Zhark,
the names of the volumes will be beta_Zhark_TheEmperor+orig for -beta option,
betafitts_Zhark_TheEmperor+orig for -betafitts option, and so forth.
* If suffix = 'yes', then no suffix will be used and the names will be just
outputname_input, i.e. beta_TheEmperor+orig.
* If you want to specify a given name for an output volume, you must define
the name of the output volume in the options above. The program will use it
instead of the name automatically generated.
Default = NULL.
-outZAll suffix
* If -outZAll is used, the program will save ALMOST ALL output volumes.
* Similar to -outALL, but the program will only save the Z_Tstats_* and Z_Fstats_* volumes
i.e. it will not save the Tstats_*, Tdf_*, Fstats_* and Fdf_* volumes.
* This option is incompatible with -outALL. The program will STOP if both options are given.
Default = NULL.
-show_allowed_options: list of allowed options
AFNI program: 3dPolyfit
Usage: 3dPolyfit [options] dataset ~1~
* Fits a polynomial in space to the input dataset and outputs that fitted dataset.
* You can also add your own basis datasets to the fitting mix, using the
'-base' option.
* You can get the fit coefficients using the '-1Dcoef' option.
--------
Options: ~1~
--------
-nord n = Maximum polynomial order (0..9) [default order=3]
[n=0 is the constant 1]
[n=-1 means only use volumes from '-base']
-blur f = Gaussian blur input dataset (inside mask) with FWHM='f' (mm)
-mrad r = Radius (voxels) of preliminary median filter of input
[default is no blurring of either type; you can]
[do both types (Gaussian and median), but why??]
[N.B.: median blur is slower than Gaussian]
-prefix pp = Use 'pp' for prefix of output dataset (the fit).
[default prefix is 'Polyfit'; use NULL to skip this output]
-resid rr = Use 'rr' for the prefix of the residual dataset.
[default is not to output residuals]
-1Dcoef cc = Save coefficients of fit into text file cc.1D.
[default is not to save these coefficients]
-automask = Create a mask (a la 3dAutomask)
-mask mset = Create a mask from nonzero voxels in 'mset'.
[default is not to use a mask, which is probably a bad idea]
-mone = Scale the mean value of the fit (inside the mask) to 1.
[probably this option is not useful for anything]
-mclip = Clip fit values outside the rectilinear box containing the
mask to the edge of that box, to avoid weird artifacts.
-meth mm = Set 'mm' to 2 for least squares fit;
set it to 1 for L1 fit [default method=2]
[Note that L1 fitting is slower than L2 fitting!]
-base bb = In addition to the polynomial fit, also use
the volumes in dataset 'bb' as extra basis functions.
[If you use a base dataset, then you can set nord]
[to -1, to skip using any spatial polynomial fit.]
-verb = Print fun and useful progress reports :-)
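A minimal example of basic usage (the dataset and prefix names below are
placeholders, chosen only for illustration):
3dPolyfit -nord 3 -automask -blur 5 -prefix anat_fit -resid anat_resid anat+orig
This fits a 3rd-order spatial polynomial to (a blurred copy of) sub-brick #0
of anat+orig inside an automask, writing both the fitted field and the residual.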
------
Notes: ~1~
------
* Output dataset is always stored in float format.
* If the input dataset has more than 1 sub-brick, only sub-brick #0
is processed. To fit more than one volume, you'll have to use a script
to loop over the input sub-bricks, and then glue (3dTcat) the results
together to get a final result. A simple example:
#!/bin/tcsh
set base = model.nii
set dset = errts.nii
set nval = `3dnvals $dset`
@ vtop = $nval - 1
foreach vv ( `count_afni 0 $vtop` )
3dPolyfit -base "$base" -nord 0 -mask "$base" -1Dcoef QQ.$vv -prefix QQ.$vv.nii $dset"[$vv]"
end
3dTcat -prefix QQall.nii QQ.0*.nii
1dcat QQ.0*.1D > QQall.1D
\rm QQ.0*
exit 0
* If the '-base' dataset has multiple sub-bricks, all of them are used.
* You can use the '-base' option more than once, if desired or needed.
* The original motivation for this program was to fit a spatial model
to a field map MRI, but that didn't turn out to be useful. Nevertheless,
I make this program available to someone who might find it beguiling.
* If you really want, I could allow you to put sign constraints on the
fit coefficients (e.g., say that the coefficient for a given base volume
should be non-negative). But you'll have to beg for this.
-- Emitted by RWCox
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dPval
Usage: 3dPval [options] dataset
* Converts a dataset's statistical sub-bricks to p-values.
* Sub-bricks not internally marked as statistical volumes are unchanged.
* However, all output volumes will be converted to float format!
* If you wish to convert only sub-brick #3 (say) of a dataset, then
something like this command should do the job:
3dPval -prefix Zork.nii InputDataset.nii'[3]'
* Note that sub-bricks being marked as statistical volumes, and
having value-to-FDR conversion curves attached, are AFNI-only
ideas, and are not part of any standard, NIfTI or otherwise!
In other words, this program will be useless for a random dataset
which you download from some random non-AFNI-centric site :(
* Also note that SMALLER p- and q-values are more 'significant', but
that the AFNI GUI provides interactive thresholding for values
ABOVE a user-chosen level, so using the GUI to threshold on a
p-value or q-value volume will have the opposite result to what
you might wish for.
* Although the program now allows conversion of statistic values
to z-scores or FDR q-values, instead of p-values, you can only
do one type of conversion per run of 3dPval. If you want p-values
AND q-values, you'll have to run this program twice.
* Finally, 'sub-brick' is AFNI jargon for a single 3D volume inside
a multi-volume dataset.
Options:
=======
-zscore = Convert statistic to a z-score instead, an N(0,1) deviate
that represents the same p-value.
-log2 = Convert statistic to -log2(p)
-log10 = Convert statistic to -log10(p)
-qval = Convert statistic to a q-value (FDR) instead:
+ This option only works with datasets that have
FDR curves inserted in their headers, which most
AFNI statistics programs will do. The program
3drefit can also do this, with the -addFDR option.
-prefix p = Prefix name for output file (default name is 'Pval')
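For example, to get both p-values and FDR q-values from the same statistics
dataset, you could run the program twice (the file names here are made up):
3dPval -prefix Stats_pval.nii Stats.nii
3dPval -qval -prefix Stats_qval.nii Stats.nii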
AUTHOR: The Man With The Golden p < 0.000001
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dPVmap
3dPVmap [-prefix XXX] [-mask MMM] [-automask] inputdataset
Computes the first 2 principal component vectors of a
time series dataset, then outputs the R-squared coefficient
of each voxel time series with these first 2 components.
Each voxel time series from the input dataset is minimally pre-processed
before the PCA is computed:
Despiking
Legendre polynomial detrending
L2 normalizing (sum-of-squares = 1)
If you want more impressive pre-processing, you'll have to do that
before running 3dPVmap (e.g., use the errts dataset from afni_proc.py).
Program also outputs the first 2 principal component time series
vectors into a 1D file, for fun and profit.
The fractions of total-sum-of-squares allocable to the first 2
principal components are written to stdout at the end of the program,
along with a 3rd number that is a measure of the spatial concentration
or dispersion of the PVmap.
These values can be captured into a file by Unix shell redirection
or into a shell variable by assignment:
3dPVmap -mask AUTO Fred.nii > Fred.sval.1D
set sval = ( `3dPVmap -mask AUTO Fred.nii` ) # csh syntax
If the first value is very large, for example, this might indicate
the widespread presence of some artifact in the dataset.
If the 3rd number is bigger than 1, it indicates that the PVmap
is more concentrated in space; if it is less than one, it indicates
that it is more dispersed in space (relative to a uniform density).
A sample run and its output:
3dPVmap -mask AUTO Zork.nii
++ mask has 21300 voxels
++ Output dataset ./PVmap+orig.BRIK
0.095960 0.074847 1.356635
The first principal component accounted for 9.6% of the total sum-of-squares,
the second component for 7.5%, and the PVmap is fairly concentrated in space.
These % values are not very unusual, but the concentration is fairly high
and the dataset should be further investigated.
A concentration value below 1 indicates the PVmap is fairly dispersed; this
often means the larger PVmap values are found near the edges of the brain
and can be caused by motion or respiration artifacts.
The goal is to visualize any widespread time series artifacts.
For example, if a 'significant' part of the brain shows R-squared > 0.25,
that could be a subject for concern -- look at your data!
Author: Zhark the Unprincipaled
AFNI program: 3dQwarp
++ OpenMP thread count = 1
++ 3dQwarp: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: Zhark the (Hermite) Cubically Warped
Usage: 3dQwarp [OPTIONS] ~1~
* Computes a nonlinearly warped version of source_dataset to match base_dataset.
++ Written by Zhark the Warped, so take nothing here too seriously.
++ The detail allowed in the warping is set by the '-minpatch' option.
++ The discrete warp computed herein is a representation of an underlying
piecewise polynomial C1 diffeomorphism.
++ See the OUTLINE OF WARP OPTIMIZATION METHOD section, far below, for details.
* Other AFNI programs in this nonlinear warping collection include:
++ 3dNwarpAdjust = adjust a set of nonlinear warps to remove any mean warp
++ 3dNwarpApply = apply a nonlinear warp to transform a dataset
++ 3dNwarpCat = catenate/compose two or more warps to produce a new warp
++ 3dNwarpFuncs = compute some functions of a nonlinear warp
++ 3dNwarpXYZ = apply a nonlinear warp to discrete set of (x,y,z) triples
++ @SSwarper = Script that combines 3dQwarp and 3dSkullStrip (SS) to
produce a brain volume warped to a template and with
the non-brain tissue ('skull') removed.
++ auto_warp.py = Python program to run 3dQwarp for you
++ unWarpEPI.py = Python program to unwarp EPI datasets, using
a reverse-blip reference volume
++ afni_proc.py = General AFNI pipeline for FMRI datasets, which can use
auto_warp.py and unWarpEPI.py along the way.
* 3dQwarp is where nonlinear warps come from (in AFNIland).
++ For the most part, the above programs either use warps from 3dQwarp,
or they provide easier ways to run 3dQwarp.
** NEVER use the obsolete '-nwarp' option to 3dAllineate. It is not
compatible with these other programs, and it does not produce
useful results.
* The simplest way to use 3dQwarp is via the @SSwarper script, for
warping a T1-weighted dataset to the (human brain) MNI 2009 template
dataset supplied with AFNI binaries (other templates also available).
* The next simplest way to use 3dQwarp is via the auto_warp.py program.
* You can use 3dQwarp directly if you want to control (or play with) the
various options for setting up the warping process.
* Input datasets must be on the same 3D grid (unlike program 3dAllineate)!
++ Or you will get a fatal error when the program checks the datasets!
++ However, you can use the '-allineate' option in 3dQwarp to do
affine alignment before the nonlinear alignment, which will also
resample the aligned source image to the base dataset grid.
++ OR, you can use the '-resample' option in 3dQwarp to resample the
source dataset to the base grid before doing the nonlinear stuff,
without doing any preliminary affine alignment. '-resample' is much
faster than '-allineate', but of course doesn't do anything but
make the spatial grids match. Normally, I would not recommend this!
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++ UNLESS the base and source datasets are fairly close to each other ++
++ already, the '-allineate' option will make the process better. For ++
++ example, if the two datasets are rotated off 5 degrees, using ++
++ 3dQwarp alone will be less effective than using '3dQwarp -allineate'. ++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
* 3dQwarp CAN be used on 2D images -- that is, datasets with a single
slice. How well it works on such datasets has not been investigated
much, but it DOES work (and quickly, since the amount of data is small).
++ You CAN input .jpg or .png files as the source and base images.
++ 3dQwarp will convert RGB images to grayscale and attempt to align those.
The output will still be in dataset format (not image format) and
will be in grayscale floating point (not color). To get the warped
image output in .jpg or .png format, you can open the output dataset
in the AFNI GUI and save the image -- after turning off crosshairs!
+ To get an RGB copy of a warped image, you have to apply the warp to
each channel (R, G, B) separately and then fuse the results.
Other approaches are possible, of course.
++ Applying this program to 2D images is entirely for fun; the actual
utility of it in brain imaging is not clear to Emperor Zhark.
(Which is why the process of getting a color warped image is so clumsy.)
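++ A minimal 2D sketch (the .jpg file names are make-believe, and this is
purely for amusement, per the note above):
3dQwarp -base faceA.jpg -source faceB.jpg -prefix faceB_warped
The output is a grayscale dataset in AFNI format, as described above.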
* Input datasets should be reasonably well aligned already
(e.g., as from an affine warping via 3dAllineate).
++ The standard result from 3dAllineate will resample the affinely
aligned dataset to the same 3D grid as the -base dataset, so this
new dataset will be ready to run in 3dQwarp against the same base.
++ Again, the '-allineate' option can now do this for you, inside 3dQwarp.
* Input datasets should be 'alike'.
++ For example, if the '-base' dataset is skull stripped, then the '-source'
dataset should be skull stripped also -- e.g., via 3dSkullStrip.
+ Warping a skull-on dataset (source) to a skull-off dataset (base) will
sometimes work OK, but sometimes will fail in weird-looking ways.
++ If the datasets have markedly different contrasts (e.g., T1 and T2), then
using a non-standard matching function such as '-nmi' or '-hel' or '-lpa'
might work better than the default Pearson correlation matching function.
(This type of warping has not been tested much here at AFNI headquarters.)
+ Warping T2 to T1 would likely be best done by inverting the contrast of
the T2 dataset, via '3dUnifize -T2 -T2', to make it look like a T1 volume.
+ These non-standard methods are slower than the Pearson correlation default.
******************************************************************************
* If the input datasets do NOT overlap reasonably well (please look at them *
* in AFNI), or when the source is in scanner space and the base is in a *
* template space (e.g., MNI), then you need to use '-allineate', or you will *
* probably get *
* (a) a very bad result (or a program crash) *
* (b) that takes a long time and a lot of memory to compute. *
* 'Overlap well' means that the datasets match well in coordinate space. *
* In some cases, datasets may match well voxel-wise, but the xyz coordinates *
* defined in the dataset headers do not match -- in such a case, 3dQwarp *
* will fail. This is why Zhark urges you to LOOK at the overlap in AFNI, *
* which uses coordinates for display matching, not voxel indexes. Or use *
* the '-allineate' option to get 3dAllineate to line up the dataset by *
* brute force, just to be safe (at the cost of some extra CPU time). *
******************************************************************************
* Outputs of 3dQwarp are the warped dataset and the warp that did it.
++ These datasets are stored in float format, no matter what the
data type of the source dataset.
++ MANY other optional outputs are described later.
* Simple example:
3dQwarp -allineate -blur 0 3 \
-base ~/abin/MNI152_2009_template_SSW.nii.gz \
-source sub637_T1.nii \
-prefix sub637_T1qw.nii
which will produce a dataset warped to match the MNI152 T1 template
at a 1 mm resolution. Since the MNI152 template is already somewhat
blurry, the amount of blurring applied to it is set to zero, while
the source dataset (presumably not blurry) will be Gaussian blurred
with a FWHM of 3 mm.
* Matching uses the 'clipped Pearson' method by default, and
can be changed to 'pure Pearson' with the '-pear' option.
++ The purpose of 'clipping' is to reduce the impact of outlier values
(small or large) on the correlation.
++ For the adventurous, you can also try these matching functions:
'-hel' for Hellinger distance
'-mi' for Mutual Information
'-nmi' for Normalized Mutual Information
These options have NOT been extensively tested for usefulness,
and should be considered experimental at this infundibulum.
++ The 'local' correlation options are also now available:
'-lpc' for Local Pearson minimization (i.e., EPI-T1 registration)
'-lpa' for Local Pearson maximization (i.e., T1-FA registration)
However, the '+ZZ' modifier is not available for these cost functions,
unlike in program 3dAllineate :(
These advanced cost options will slow 3dQwarp down significantly.
** For aligning EPI to T1, the '-lpc' option can be used; my advice
would be to do something like the following:
3dSkullStrip -input SUBJ_anat+orig -prefix SUBJ_anatSS
3dbucket -prefix SUBJ_epiz SUBJ_epi+orig'[0]'
align_epi_anat.py -anat SUBJ_anat+orig \
-epi SUBJ_epiz+orig -epi_base 0 -partial_axial \
-epi2anat -master_epi SUBJ_anat+orig \
-big_move
3dQwarp -source SUBJ_anatSS+orig.HEAD \
-base SUBJ_epiz_al+orig \
-prefix SUBJ_anatSSQ \
-lpc -maxlev 0 -verb -iwarp -blur 0 3
3dNwarpApply -nwarp SUBJ_anatSSQ_WARPINV+orig \
-source SUBJ_epiz_al+orig \
-prefix SUBJ_epiz_alQ
* Zeroth, the T1 is prepared by skull stripping and the EPI is prepared
by extracting just the 0th sub-brick for registration purposes.
* First, the EPI is aligned to the T1 using the affine 3dAllineate, and
at the same time resampled to the T1 grid (via align_epi_anat.py).
* Second, it is nonlinearly aligned ONLY using the global warping -- it is
futile to try to align such dissimilar image types precisely.
* The EPI is used as the base in 3dQwarp so that it provides the weighting,
and so partial brain coverage (as long as it covers MOST of the brain)
should not cause a problem (fondly do we hope).
* Third, 3dNwarpApply is used to take the inverse warp from 3dQwarp to
transform the EPI to the T1 space, since 3dQwarp transformed the T1 to
EPI space. This inverse warp was output by 3dQwarp using '-iwarp'.
* Someday, this procedure may be incorporated into align_epi_anat.py :-)
** It is vitally important to visually look at the results of this process! **
* In humans, the central structures usually match a template very well,
but smaller cortical gyri can match well in some places and not match
in others.
* In macaques, where there is less inter-animal variation, cortical
matching will be better than humans (but not perfect).
* For aligning T1-weighted anatomical volumes, Zhark recommends that
you use the 3dUnifize program to (approximately) spatially uniformize
and normalize their intensities -- this helps in the matching process,
especially when using datasets from different scanners.
++ Skull stripping a la 3dSkullStrip is also a good idea (prior to 3dUnifize),
even if you are registering datasets from the same subject; see the
SAMPLE USAGE section below for an example.
+ But if you are matching to a skull-on template as the -base dataset,
then keeping the skull on the -source dataset is necessary, since the
goal of the program is to distort the source to 'look like' the base,
and if major parts of the two datasets cannot be made to look like
each other, the poor poor program will get lost in warp-land.
++ If you ultimately want a non-3dUnifize-d transformed dataset, you can use
the output WARP dataset and 3dNwarpApply to transform the un-3dUnifize-d
source dataset; again, see the SAMPLE USAGE section below.
++ Some people prefer to nonlinearly align datasets with the 'skull' left on.
You are free to try this, of course, but we have not tested this method.
+ We give you tools; you build things with them (hopefully nice things).
++ Note again the script @SSwarper, which is for skull-stripping and warping
a T1-weighted dataset to a template volume; AFNI provides such a template
volume for the MNI152 space.
* If for some deranged reason you have datasets with very non-cubical voxels,
they should be resampled to a cubical grid before trying 3dQwarp. For example,
if you have acquired 1x1x4 mm T1-weighted structural volumes (why?), then
resample them to 1x1x1 mm before doing any other registration processing.
For example:
3dAllineate -input anatT1_crude+orig -newgrid 1.0 \
-prefix anatT1_fine -final wsinc5 \
-1Dparam_apply '1D: 12@0'\'
This operation will be done using the '-allineate' or '-resample'
options to 3dQwarp, if the -base dataset has cubical voxels.
** Please note that this program is very CPU intensive, and is what computer
scientists call a 'pig' (i.e., run time from 10s of minutes to hours).
------------
SAMPLE USAGE ~1~
------------
* For registering a T1-weighted anat to a mildly blurry template at about
a 1x1x1 mm resolution (note that the 3dAllineate step, to give the
preliminary alignment, will also produce a dataset on the same 3D grid
as the TEMPLATE+tlrc dataset, which 3dQwarp requires):
3dUnifize -prefix anatT1_U -input anatT1+orig
3dSkullStrip -input anatT1_U+orig -prefix anatT1_US -niter 400 -ld 40
3dAllineate -prefix anatT1_USA -base TEMPLATE+tlrc \
-source anatT1_US+orig -twopass -cost lpa \
-1Dmatrix_save anatT1_USA.aff12.1D \
-autoweight -fineblur 3 -cmass
3dQwarp -prefix anatT1_USAQ -blur 0 3 \
-base TEMPLATE+tlrc -source anatT1_USA+tlrc
You can then use the anatT1_USAQ_WARP+tlrc dataset to transform other
datasets (that were aligned with the input anatT1+orig) in the same way
using program 3dNwarpApply, as in
3dNwarpApply -nwarp 'anatT1_USAQ_WARP+tlrc anatT1_USA.aff12.1D' \
-source NEWSOURCE+orig -prefix NEWSOURCE_warped
For example, if you want a warped copy of the original anatT1+orig dataset
(without the 3dUnifize and 3dSkullStrip modifications), put 'anatT1' in
place of 'NEWSOURCE' in the above command.
Note that the '-nwarp' option to 3dNwarpApply has TWO filenames inside
single quotes. This feature tells that program to compose (catenate) those
2 spatial transformations before applying the resulting warp. See the -help
output of 3dNwarpApply for more sneaky/cunning ways to make the program warp
datasets (and also see the example just below).
** PLEASE NOTE that if you use the '-allineate' option in 3dQwarp, to **
** do the 3dAllineate step inside 3dQwarp, then you do NOT catenate **
** the affine and nonlinear warps as in the 3dNwarpApply example above, **
** since the output nonlinear warp will ALREADY have been catenated with **
** the affine warp -- this output warp is the transformation directly **
** between the '-source' and '-base' datasets (as is reasonable IZHO). **
If the NEWSOURCE+orig dataset is integer-valued (e.g., anatomical labels),
then you would use the '-ainterp NN' with 3dNwarpApply, to keep the program
from interpolating the voxel values.
* If you use align_epi_anat.py to affinely transform several EPI datasets to
match a T1 anat, and then want to nonlinearly warp the EPIs to the template,
following the warp generated above, the procedure is something like this:
align_epi_anat.py -anat anatT1+orig -epi epi_r1+orig \
-epi_base 3 -epi2anat -big_move \
-child_epi epi_r2+orig epi_r3+orig
3dNwarpApply -source epi_r1+orig \
-nwarp 'anatT1_USAQ_WARP+tlrc anatT1_USA.aff12.1D' \
-affter epi_r1_al_reg_mat.aff12.1D \
-master WARP -newgrid 2.0 \
-prefix epi_r1_AQ
(mutatis mutandis for 'child' datasets epi_r2, epi_r3, etc.).
The above procedure transforms the data directly from the un-registered
original epi_r1+orig dataset, catenating the EPI volume registration
transformations (epi_r1_al_reg_mat.aff12.1D) with the affine anat to
template transformation (anatT1_USA.aff12.1D) and with the nonlinear
anat to template transformation (anatT1_USAQ_WARP+tlrc). 3dNwarpApply
will use the default 'wsinc5' interpolation method, which does not blur
the results much -- an important issue for statistical analysis of the
EPI time series.
Various functions, such as volume change fraction (Jacobian determinant)
can be calculated from the warp dataset via program 3dNwarpFuncs.
--------------------
COMMAND LINE OPTIONS (too many of them) ~1~
--------------------
++++++++++ Input and Outputs +++++++++++++
-base base_dataset = Alternative way to specify the base dataset.
-source source_dataset = Alternative way to specify the source dataset.
* You can either use both '-base' and '-source',
OR you can put the base and source dataset
names last on the command line.
* But you cannot use just one of '-base' or '-source'
and then put the other input dataset name at the
end of the command line!
*** Please note that if you are using 3dUnifize on one
dataset (or the template was made with 3dUnifize-d
datasets), then the other dataset should also be
processed the same way for better results. This
dictum applies in general: the source and base
datasets should be pre-processed the same way,
as far as practicable.
-prefix ppp = Sets the prefix for the output datasets.
* The source dataset is warped to match the base
and gets prefix 'ppp'. (Except if '-plusminus' is used.)
* The final interpolation to this output dataset is
done using the 'wsinc5' method. See the output of
3dAllineate -HELP
(in the "Modifying '-final wsinc5'" section) for
the lengthy technical details.
* The 3D warp used is saved in a dataset with
prefix '{prefix}_WARP' -- this dataset can be used
with 3dNwarpApply and 3dNwarpCat, for example.
* To be clear, this is the warp from source dataset
coordinates to base dataset coordinates, where the
values at each base grid point are the xyz displacements
needed to move that grid point's xyz values to the
corresponding xyz values in the source dataset:
base( (x,y,z) + WARP(x,y,z) ) matches source(x,y,z)
Another way to think of this warp is that it 'pulls'
values back from source space to base space.
* 3dNwarpApply would use '{prefix}_WARP' to transform datasets
aligned with the source dataset to be aligned with the
base dataset.
** If you do NOT want this warp saved, use the option '-nowarp'.
-->> (But: This warp is usually the most valuable possible output!)
* If you want to calculate and save the inverse 3D warp,
use the option '-iwarp'. This inverse warp will then be
saved in a dataset with prefix '{prefix}_WARPINV'.
* This inverse warp could be used to transform data from base
space to source space, if you need to do such an operation.
* You can easily compute the inverse later, say by a command like
3dNwarpCat -prefix Z_WARPINV 'INV(Z_WARP+tlrc)'
or the inverse can be computed as needed in 3dNwarpApply, like
3dNwarpApply -nwarp 'INV(Z_WARP+tlrc)' -source Dataset.nii ...
-nowarp = Do not save the _WARP file.
* By default, the {prefix}_WARP dataset will be saved.
-iwarp = Do compute and save the _WARPINV file.
* By default, the {prefix}_WARPINV file is NOT saved.
-nodset = Do not save the warped source dataset (i.e., if you only
need the _WARP).
* By default, the warped source dataset {prefix} is saved.
-awarp = If '-allineate' is used, output the nonlinear warp that
transforms from the 3dAllineate-d affine alignment of
source-to-base to the base. This warp (output {prefix}_AWARP)
combined with the affine transformation {prefix}.aff12.1D is
the same as the final {prefix}_WARP nonlinear transformation
directly from source-to-base.
* The '-awarp' output is mostly useful when you need to have
this incremental nonlinear warp for various purposes; for
example, it is used in the @SSwarper script.
* '-awarp' will not do anything unless '-allineate' is also
used, because it doesn't have anything to do!
* By default, this {prefix}_AWARP file is NOT saved.
-inwarp = This option is for debugging, and is only documented here
for completeness.
* It causes an extra dataset to be written out whenever a warp
is output. This dataset will have the string '_index' added
to the warp dataset's prefix, as in 'Fred_AWARP_index.nii'.
* This extra dataset contains the 'index warp', which is the
internal form of the warp.
* Instead of displacements between (x,y,z) coordinates, an
index warp stores displacements between (i,j,k) 3D indexes.
* An index warp dataset has no function outside of being
something to look at when trying to figure out what the hell
the program did.
++++++++++ Preliminary affine (linear transformation) alignment ++++++++++
-allineate = This option will make 3dQwarp run 3dAllineate first, to align
*OR* the source dataset to the base with an affine transformation.
-allin It will then use that alignment as a starting point for the
*OR* nonlinear warping.
-allinfast * With '-allineate', the source dataset does NOT have to be on
the same 3D grid as the base, since the intermediate output
of 3dAllineate (the substitute source) will be on the same grid
as the base.
* If the datasets overlap reasonably already, you can use the
option '-allinfast' (instead of '-allineate') to add the
option '-onepass' to the 3dAllineate command line, to make
it run faster (by avoiding the time-consuming coarse pass
step of trying lots of shifts and rotations to find an idea
of how to start). But you should KNOW that the datasets do
overlap well before using '-allinfast'. (This fast option
does include center-of-mass correction, so it will usually
work well if the orientations of the two volumes are close
-- say, within 10 degrees of each other.)
-->>** The final output warp dataset is the warp directly between
the original source dataset and the base (i.e., the catenation
of the affine matrix from 3dAllineate and the nonlinear warp
from the 'warpomatic' procedure in 3dQwarp).
-->>** The above point means that you should NOT NOT NOT use the
affine warp output by the '-allineate' option in combination
with the nonlinear warp output by 3dQwarp (say, when using
3dNwarpApply), since the affine warp would then be applied
twice -- which would be WRONG WRONG WRONG.
-->>** The final output warped dataset is warped directly from the
original source dataset, NOT from the substitute source.
* The intermediate files from 3dAllineate (the substitute source
dataset and the affine matrix) are saved, using 'prefix_Allin'
in the filenames.
*** The following 3dQwarp options CANNOT be used with -allineate:
-plusminus -inilev -iniwarp
* The '-awarp' option will output the computed warp from the
intermediate 3dAllineate-d dataset to the base dataset,
in case you want that for some reason. This option will
only have meaning if '-allineate' or '-allinfast' is used.
The prefix of the '-awarp' output will have the string
'_AWARP' appended to the {prefix} for the output dataset.
-allineate_opts '-opt ...'
*OR* * This option lets you add extra options to the 3dAllineate
-allopt command to be run by 3dQwarp. Normally, you won't need
to do this.
* Note that the default cost functional in 3dAllineate is
the Hellinger metric ('-hel'); many people prefer '-lpa+ZZ',
and so should use something like this:
-allopt '-cost lpa+ZZ'
to ensure 3dAllineate uses the desired cost functional.
-> Note that if you use '-lpa' in 3dQwarp, then 3dAllineate
will automatically be supplied with '-cost lpa+ZZ'.
* If '-emask' is used in 3dQwarp, the same option will be
passed to 3dAllineate automatically, so you don't have to
do that yourself.
*** Do NOT attempt to use the (obsolescent) '-nwarp' option in
3dAllineate from inside 3dQwarp -- bad things will probably
happen, and you won't EVER get any birthday presents again!
-resample = This option simply resamples the source dataset to match the
*OR* base dataset grid. You can use this if the two datasets
-resample mm overlap well (as seen in the AFNI GUI), but are not on the
same 3D grid.
* If they don't overlap very well, use '-allineate' instead.
* As with -allineate, the final output dataset is warped
directly from the source dataset, not from the resampled
source dataset.
* The resampling here (and with -allineate) is done with the
'wsinc5' method, which has very little blurring artifact.
* If the base and source datasets ARE on the same 3D grid,
then the -resample option will be ignored.
* You CAN use -resample with these 3dQwarp options:
-plusminus -inilev -iniwarp
In particular, '-iniwarp' and '-resample' will work
together if you need to re-start a warp job from the
output of '-allsave'.
* Unless you are in a hurry, '-allineate' is better.
*** After '-resample', you can supply an affine transformation
matrix to apply during the resampling. This feature is
useful if you already have the affine transformation
from source to base pre-computed by some other program
-- for example, from 3dAllineate.
The command line argument that follows '-resample',
if it does not start with a '-', is taken to be
a filename with 12 values in one row: the usual
affine matrix representation from 3dAllineate and
other AFNI programs (in DICOM order coordinates);
for example '-resample ZharkRules.aff12.1D'
You can also use the following form to supply the
matrix directly on the command line:
'1D: 1 2 3 4 5 6 7 8 9 10 11 12'
where the numbers after the initial '1D: ' are
to be replaced by the actual matrix entries!
-aniso = Before aligning, do a little bit of anisotropic smoothing
(see 3danisosmooth) on the source dataset.
* Note that the final output dataset is warped directly
from the input dataset, NOT this smoothed dataset.
If you want the warped output dataset to be from the
smoothed dataset, you'll have to use 3danisosmooth
separately before 3dQwarp, and supply that result
as the source dataset.
* The purpose of '-aniso' is just to smooth out the noise
a little before other processing, and maybe make things
work a little betterer.
* Anisotropic smoothing comes before 3dAllineate, if both
are used together.
++++++++++ Computing the 'cost' functional = how datasets are matched ++++++++++
** If '-allineate' is used, AND one of these options is given, then the **
** corresponding option is also passed to 3dAllineate for its optimization. **
** Otherwise, 3dAllineate will use its default optimization cost functional. **
-pcl = clipped Pearson correlation [default method]; clipping reduces
the impact of outlier values.
-pear = Use strict Pearson correlation for matching.
               * Not usually recommended, because without the clipping-ness
                 used by '-pcl', outliers can have more effect.
* No partridges or trees are implied by this option.
-hel = Hellinger metric
-mi = Mutual information
-nmi = Normalized mutual information
-lpc = Local Pearson correlation (signed).
-lpa = Local Pearson correlation (absolute value)
* These options mirror those in 3dAllineate.
* In particular, nonlinear warping of low resolution EPI
data to T1 data is a difficult task, and can introduce
more distortions to the result than it fixes.
* If you use one of these 5 options, and also use '-allineate' or
'-allinfast', then the corresponding option is passed to
3dAllineate: '-hel' => '-cost hel'
'-mi' => '-cost mi'
'-nmi' => '-cost nmi'
'-lpc' => '-cost lpc+ZZ'
'-lpa' => '-cost lpa+ZZ'
                           '-pcl' or '-pear' => '-cost ls'
-noneg = Replace negative values in either input volume with 0.
-zclip * If there ARE negative input values, and you do NOT use -noneg,
then strict Pearson correlation will be used, since the
'clipped' method only is implemented for non-negative volumes.
* '-noneg' is not the default, since there might be situations
where you want to align datasets with positive and negative
values mixed.
* But, in many cases, the negative values in a dataset are just
the result of interpolation artifacts (or other peculiarities),
and so they should be ignored. That is what '-noneg' is for.
* Therefore, '-noneg' is recommended for most applications.
-nopenalty = Don't use a penalty on the cost functional; the goal
of the penalty is to reduce grid distortions.
               * If the penalty is turned off AND you warp down to
a fine scale (e.g., '-minpatch 11'), you will probably
get strange-looking results.
-penfac ff = Use the number 'ff' to weight the penalty.
The default value is 1. Larger values of 'ff' mean the
penalty counts more, reducing grid distortions,
insha'Allah; '-nopenalty' is the same as '-penfac 0'.
-warpscale f = This option allows you to downsize the scale of the warp
displacements for smaller patch sizes. In some applications,
the amount of displacement allowed is overly aggressive at
small patch sizes, but larger displacements at large patch
sizes are needed to get the overall shapes of the base and
template to match. The factor 'f' should be a number between
0.1 and 1.0 (inclusive), and indicates the amount the max
displacement should shrink when the patch size shrinks by
a factor of 10. I suggest '-warpscale 0.5' as a starting
point for experimentation.
* This option is currently [Feb 2020] for experimenting
only, and in the future it may change! In particular,
the equivalent of '-warpscale 0.5' may become the default.
-useweight = With '-useweight', each voxel in the base automask is weighted
by the intensity of the (blurred) base image. This makes
white matter count more in T1-weighted volumes, for example.
           -->>* [24 Mar 2014] This option is now the default.
-wtgaus G = This option lets you define the amount of Gaussian smoothing
applied to the base image when creating the weight volume.
The default value of G is 4.5 (FWHM voxels). See the 'WEIGHT'
section (far below) for details on how the automatic
weight volume is calculated. Using '-wtgaus 0' means that
no Gaussian blurring is applied in creating the weight.
* [15 Jan 2020] This option is really just for fooling around.
-noweight = If you want a binary weight (the old default), use this option.
That is, each voxel in the base volume automask will be
weighted the same in the computation of the cost functional.
-weight www = Instead of computing the weight from the base dataset,
directly input the weight volume from dataset 'www'.
               * Useful if you know over what parts of the base image you
want to emphasize or de-emphasize the matching functional.
-wball x y z r f =
Enhance automatic weight from '-useweight' by a factor
of 1+f*Gaussian(FWHM=r) centered in the base image at
DICOM coordinates (x,y,z) and with radius 'r'. The
goal of this option is to try and make the alignment
better in a specific part of the brain.
* Example: -wball 0 14 6 30 40
to emphasize the thalamic area (in MNI/Talairach space).
* The 'r' parameter must be positive (in mm)!
* The 'f' parameter must be between 1 and 100 (inclusive).
* '-wball' does nothing if you input your own weight
with the '-weight' option :(
* '-wball' does change the binary weight created by
the '-noweight' option.
* You can only use '-wball' once in a run of 3dQwarp.
*** The effect of '-wball' is not dramatic. The example
above makes the average brain image across a collection
of subjects a little sharper in the thalamic area, which
might have some small value. If you care enough about
alignment to use '-wball', then you should examine the
results from 3dQwarp for each subject, to see if the
alignments are good enough for your purposes.
-wmask ws f = Similar to '-wball', but here, you provide a dataset 'ws'
that indicates where to increase the weight.
* The 'ws' dataset must be on the same 3D grid as the base
dataset.
               * 'ws' is treated as a mask -- it only matters where it
                 is nonzero; the actual values inside are not otherwise used.
* After 'ws' comes the factor 'f' by which to increase the
automatically computed weight. Where 'ws' is nonzero,
the weighting will be multiplied by (1+f).
* As with '-wball', the factor 'f' should be between 1 and 100.
* You cannot use '-wball' and '-wmask' together!
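               * Example (the mask dataset name here is just a hypothetical
                 placeholder):
                   -wmask ThalamusMask+tlrc 30
                 which would multiply the automatic weight by 31 (=1+30)
                 wherever the mask dataset is nonzero.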
-wtprefix p = Saves auto-computed weight volume to a dataset with prefix 'p'.
If you are sufficiently dedicated, you could manually edit
this volume, in the AFNI GUI, in 3dcalc, et cetera. And then
use it, instead of the auto-computed default weight, via the
'-weight' option.
* If you use the '-emask' option, the effects of the exclusion
mask are NOT shown in this output dataset!
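               * For illustration only (the prefixes are just placeholders),
                 that edit-and-reuse workflow might look like:
                   3dQwarp -base TEMPLATE+tlrc -source SS+orig -wtprefix WT -prefix Q1
                   (edit the WT dataset however you like, e.g., with 3dcalc or the GUI)
                   3dQwarp -base TEMPLATE+tlrc -source SS+orig -weight WTedit+tlrc -prefix Q2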
-emask ee = Here, 'ee' is a dataset to specify a mask of voxels
to EXCLUDE from the analysis -- all voxels in 'ee'
that are NONZERO will not be used in the alignment.
* The base image is always automasked -- the emask is
extra, to indicate voxels you definitely DON'T want
included in the matching process, even if they are
inside the brain.
-->>* Note that 3dAllineate has the same option. Since you
usually have to use 3dAllineate before 3dQwarp, you
will probably want to use -emask in both programs.
[ Unless, of course, you are using '-allineate', which ]
[ will automatically include '-emask' in the 3dAllineate ]
[ phase if '-emask' is used here in 3dQwarp. ]
* Applications: exclude a tumor or resected region
(e.g., draw a mask in the AFNI Drawing plugin).
-->>* Note that the emask applies to the base dataset,
so if you are registering a pre- and post-surgery
volume, you would probably use the post-surgery
dataset as the base. If you eventually want the
result back in the pre-surgery space, then you
would use the inverse warp afterwards (in 3dNwarpApply).
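               * For illustration only (the dataset names are placeholders),
                 the pre-/post-surgery case described above might look like:
                   3dQwarp -base PostSurgery+orig -source PreSurgery+orig \
                           -emask ResectionMask+orig -prefix Pre2Post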
-inedge = Enhance interior edges in the base and source volumes, to
make the cost functional give more weight to these edges.
* This option MIGHT produce slightly better alignments, but
its effect is usually small.
* The output transformed source dataset will NOT have these
enhanced edges; the enhancement is done internally on the
volume image copies that are being matched.
*** This option has been disabled, until problems with it
can be resolved. Sorry .... 01 Apr 2021 [not a joke].
++++++++++ Blurring the inputs (avoid trying to match TOO much detail) +++++++++
-blur bb = Gaussian blur the input images by 'bb' (FWHM) voxels before
doing the alignment (the output dataset will not be blurred).
The default is 2.345 (for no good reason).
* Optionally, you can provide 2 values for 'bb', and then
the first one is applied to the base volume, the second
to the source volume.
-->>* e.g., '-blur 0 3' to skip blurring the base image
(if the base is a blurry template, for example).
* A negative blur radius means to use 3D median filtering,
rather than Gaussian blurring. This type of filtering will
better preserve edges, which might be important in alignment.
* If the base is a template volume that is already blurry,
you probably don't want to blur it again, but blurring
the source volume a little is probably a good idea, to
help the program avoid trying to match tiny features.
-pblur = Use progressive blurring; that is, for larger patch sizes,
the amount of blurring is larger. The general idea is to
avoid trying to match finer details when the patch size
and incremental warps are coarse. When '-blur' is used
as well, it sets a minimum amount of blurring that will
be used. [06 Aug 2014 -- '-pblur' may be the default someday].
* You can optionally give the fraction of the patch size that
is used for the progressive blur by providing a value between
                 0 and 0.25 after '-pblur'. If you provide TWO values, the
                 first fraction is used for progressively blurring the
                 base image and the second for the source image. The default
                 parameters when just '-pblur' is given are the same as giving
                 the options as '-pblur 0.09 0.09'.
* '-pblur' is useful when trying to match 2 volumes with high
                 amounts of detail; e.g., warping one subject's brain image to
match another's, or trying to match a detailed template.
* Note that using negative values with '-blur' means that the
progressive blurring will be done with median filters, rather
than Gaussian linear blurring.
-->>*** The combination of the -allineate and -pblur options will make
the results of using 3dQwarp to align to a template somewhat
less sensitive to initial head position and scaling.
-nopblur = Don't use '-pblur'; equivalent to '-pblur 0 0'.
++++++++++ Restricting the warp directions ++++++++++
-noXdis = These options let you specify that the warp should not
-noYdis = displace in the given direction. For example, combining
-noZdis = -noXdis and -noZdis would mean only warping along the
y-direction would be allowed.
* Here, 'x' refers to the first coordinate in the dataset,
which is usually the Right-to-Left direction. Et cetera.
* Note that the output WARP dataset(s) will have sub-bricks
for the displacements which are all zero; every WARP dataset
has 3 sub-bricks.
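               * Example (for illustration only; the dataset names are
                 placeholders):
                   3dQwarp -base Base+orig -source Srce+orig -noXdis -noZdis -prefix Qy
                 would restrict the warp to displacements along the
                 y-direction only.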
++++++++++ Controlling the warp calculation process in detail ++++++++++
-iniwarp ww = 'ww' is a dataset with an initial nonlinear warp to use.
* If this option is not used, the initial warp is the identity.
* You can specify a catenation of warps (in quotes) here, as in
program 3dNwarpApply.
* You can scale a 3D warp's displacements by prefixing the dataset
name by 'FAC:a,b,c:Warpdatasetname' where a b c are numbers
by which to scale the x- y- z-displacments.
* As a special case, if you just input an affine matrix in a .1D
file, that also works -- it is treated as giving the initial
warp via the string "IDENT(base_dataset) matrix_file.aff12.1D".
* -iniwarp is usually used with -inilev to re-start 3dQwarp from
a previous stopping point, or from the output of '-allsave'.
* In particular, '-iniwarp' and '-resample' will work
together if you need to re-start a warp job from the
output of '-allsave'.
-inilev lv = 'lv' is the initial refinement 'level' at which to start.
* The combination of -inilev and -iniwarp lets you take the
results of a previous 3dQwarp run and refine them further:
3dQwarp -prefix Q25 -source SS+tlrc -base TEMPLATE+tlrc \
-minpatch 25 -blur 0 3
3dQwarp -prefix Q11 -source SS+tlrc -base TEMPLATE+tlrc \
-inilev 7 -iniwarp Q25_WARP+tlrc -blur 0 2
Note that the source dataset in the second run is the SAME as
in the first run. If you don't see why this is necessary,
then you probably need to seek help from an AFNI guru.
-->>** Also see the script @toMNI_Qwarpar for the use of this option
in creating a template dataset from a collection of scans from
different subjects.
-minpatch mm = Set the minimum patch size for warp searching to 'mm' voxels.
*OR* * The value of mm should be an odd integer.
-patchmin mm * The default value of mm is 25.
* For more accurate results than mm=25, try 19 or 13.
* The smallest allowed patch size is 5.
* OpenMP parallelization becomes inefficient for patch sizes
smaller than about 15x15x15 -- which is why running 3dQwarp
down to the minimum patch level of 5 can be very slow.
               * You may want to stop at a larger patch size (say 7 or 9) and use
the -Qfinal option to run that final level with quintic warps,
which might run faster and provide the same degree of warp
detail.
* Trying to make two different brain volumes match in fine detail
is usually a waste of time, especially in humans. There is too
much variability in anatomy to match gyrus to gyrus accurately,
especially in the small foldings in the outer cerebral cortex.
For this reason, the default minimum patch size is 25 voxels.
Using a smaller '-minpatch' might try to force the warp to
match features that do not match, and the result can be useless
image distortions -- another reason to LOOK AT THE RESULTS.
-------------------
-maxlev lv = Here, 'lv' is the maximum refinement 'level' to use. This
is an alternate way to specify when the program should stop.
* To only do global polynomial warping, use '-maxlev 0'.
* If you use both '-minpatch' and '-maxlev', then you are
walking on the knife edge of danger.
* Of course, I know that you LIVE for such thrills.
-gridlist gl = This option provides an alternate way to specify the patch
grid sizes used in the warp optimization process. 'gl' is
a 1D file with a list of patches to use -- in most cases,
you will want to use it in the following form:
-gridlist '1D: 0 151 101 75 51'
* Here, a 0 patch size means the global domain. Patch sizes
otherwise should be odd integers >= 5.
* If you use the '0' patch size again after the first position,
you will actually get an iteration at the size of the
default patch level 1, where the patch sizes are 75% of
the volume dimension. There is no way to force the program
to literally repeat the sui generis step of lev=0.
* You cannot use -gridlist with: -plusminus :(
-allsave = This option lets you save the output warps from each level
*OR* of the refinement process. Mostly used for experimenting.
-saveall * Cannot be used with: -nopadWARP :(
* You could use the saved warps to create different versions
of the warped source datasets (using 3dNwarpApply), to help
you visualize how the warping process makes progress.
* The saved warps are written out at the end of each level,
before the next level starts computation. Thus, they could
be used to re-start the computation if the program crashed
(by using options '-inilev' and '-iniwarp').
* If '-allsave' is used with '-plusminus', the intermediate
saved warps are the "PLUS" half-warps (which are what the
program is optimizing).
-duplo = *** THIS OPTION IS NO LONGER AVAILABLE ***
-workhard = Iterate more times, which can help when the volumes are
hard to align at all, or when you hope to get a more precise
alignment.
* Slows the program down (possibly a lot), of course.
* Combined with '-lite', takes about the same amount of time
as '-nolite' without '-workhard' :)
* For finer control over which refinement levels work hard,
you can use this option in the form (for example)
-workhard:4:7
which implies the extra iterations will be done at levels
4, 5, 6, and 7, but not otherwise.
* You can also use '-superhard' to iterate even more, but
this extra option will REALLY slow things down.
-->>* Under most circumstances, you should not need to use either
-workhard or -superhard.
-->>* If you use this option in the form '-Workhard' (first letter
in upper case), then the second iteration at each level is
done with quintic polynomial warps.
-Qfinal = At the finest patch size (the final level), use Hermite
quintic polynomials for the warp instead of cubic polynomials.
* In a 3D 'patch', there are 2x2x2x3=24 cubic polynomial basis
function parameters over which to optimize (2 polynomials
dependent on each of the x,y,z directions, and 3 different
directions of displacement).
* There are 3x3x3x3=81 quintic polynomial parameters per patch.
* With -Qfinal, the final level will have more detail in
the allowed warps, at the cost of yet more CPU time.
* However, no patch below 7x7x7 in size will be done with quintic
polynomials.
* This option is also not usually needed, and is experimental.
(((........... Also see the section 'The warp polynomials' below ...........)))
-cubic12 = Use 12 parameter cubic polynomials, instead of 24 parameter
polynomials (the current default patch warps are 24 parameter).
* '-cubic12' will be faster than '-cubic24' and combining
it with '-workhard' will make '-cubic12' run at about the
same speed as the 24 parameter cubics.
* Is it less accurate than '-cubic24'? That is very hard
                 to say accurately without more work. In principle, No.
* This option is now the default.
-cubic24 = Use 24 parameter cubic Hermite polynomials.
* This is the older set of basis functions [pre-2019], and
would normally be used only for backwards compatibility or
for testing.
-Qonly = Use Hermite quintic polynomials at all levels.
* Very slow (about 4 times longer than cubic).
* Will produce a (discrete representation of a) C2 warp.
-Quint81 = When quintic polynomials are used, use the full 81 parameter
set of basis functions.
* This is the older set of basis functions [pre-2019], and
would normally be used only for backwards compatibility or
for testing.
-Quint30 = Use the smaller 30 parameter set of quintic basis functions.
* These options ('-Quint81' and '-Quint30') only change
the operation if you also use some other option that
implies the use of quintic polynomials for warping.
-lite = Another way to specify the use of the 12 parameter cubics
and the 30 parameter quintics.
* This option now works with the '-plusminus' warping method :)
* THIS OPTION IS NOW THE DEFAULT * [Jan 2019]
-nolite = Turn off the '-lite' warp functions and use the 24 parameter
cubics *and* the 81 parameter quintics.
               * This option is present in case you wish to have backwards
                 warping compatibility with older versions of 3dQwarp.
-nopad = Do NOT use zero-padding on the 3D base and source images.
[Default == zero-pad as needed]
* The underlying model for deformations goes to zero at the
edge of the volume being warped. However, if there is
significant data near an edge of the volume, then it won't
get displaced much, and so the results might not be good.
* Zero padding is designed as a way to work around this potential
problem. You should NOT need the '-nopad' option for any
reason that Zhark can think of, but it is here to be
symmetrical with 3dAllineate.
++ If the base dataset is closely cropped, so that the edges of
its 3D grid come close to the significant part of the volume,
using '-nopad' may cause poor fitting of the source to the
base, as the distortions required near the grid edges will
not be available in the restricted model. For this reason,
Zhark recommends that you do NOT use '-nopad'.
* Note that the output (warped from source) dataset will be on
the base dataset grid whether or not zero-padding is allowed.
However, unless you use option '-nopadWARP', allowing zero-
padding (i.e., the default operation) will make the output WARP
dataset(s) be on a larger grid (also see '-expad' below).
**** When grid centers of the base and source dataset are far apart
in (x,y,z) coordinates, then a large amount of zero-padding
is required to make the grid spaces overlap. This situation can
cause problems, and most often arises when the (x,y,z)=(0,0,0)
point in the source grid is in a corner of the volume instead
of the middle. You can fix that problem by using a command
like
@Align_Centers \
-base MNI152_2009_template_SSW.nii.gz \
-dset Fred.nii
and then using dataset Fred_shft.nii as your input file for all
purposes (including afni_proc.py).
++ One problem that happens with very large spatial shifts (from
3dAllineate initial alignment) is that the warp dataset can
be very huge. Not only does this cause a large file on output,
it also uses a lot of memory in the 3dQwarp optimization - so
                 much memory in some cases as to cause the program to crash.
* A warning message will be output to the screen if very large
amounts of zero-padding are required.
* Intermediate between large amounts of padding and no padding
is the option below:
-Xpad = Puts an upper limit on the amount of padding, to prevent huge
warp datasets from being created.
-nopadWARP = If you do NOT use '-nopad' (that is, you DO allow zero-padding
during the warp computations), then the computed warp will often
be bigger than the base volume. This situation is normally not
an issue, but if for some reason you require the warp volume to
match the base volume, then use '-nopadWARP' to have the output
WARP dataset(s) truncated.
* Note that 3dNwarpApply and 3dNwarpAdjust will deal with warps
that are defined over grids that are larger than the datasets
to which they are applied; this is why Zhark says above that
a padded warp 'is normally not an issue'.
* However, if you want to use an AFNI nonlinear warp in some
external non-AFNI program, you might have to use this option :(
-expad EE = This option instructs the program to pad the warp by an extra
'EE' voxels (and then 3dQwarp starts optimizing it).
* This option is seldom needed, but can be useful if you
might later catenate the nonlinear warp -- via 3dNwarpCat --
with an affine transformation that contains a large shift.
Under that circumstance, the nonlinear warp might be shifted
partially outside its original grid, so expanding that grid
can avoid this problem.
* Note that this option perforce turns off '-nopadWARP'.
-ballopt = Normally, the incremental warp parameters are optimized inside
a rectangular 'box' (e.g., 24 dimensional for cubic patches, 81
for quintic patches), which limits the amount of distortion
allowed at each step. Using '-ballopt' switches these limits
to be applied to a 'ball' (interior of a hypersphere), which
can allow for larger incremental displacements. Use this
option if you think things need to be able to move farther.
* Note also that the '-lite' polynomial warps allow for
larger incremental displacements than the '-nolite' warps.
-boxopt = Use the 'box' optimization limits instead of the 'ball'
[this is the default at present].
* Note that if '-workhard' is used, then ball and box
optimization are alternated in the different iterations at
each level, so these two options have no effect in that case.
++++++++++ Meet-in-the-middle warping - Also known as '-plusminus' +++++++++
-plusminus = Normally, the warp displacements dis(x) are defined to match
base(x) to source(x+dis(x)). With this option, the match
is between base(x-dis(x)) and source(x+dis(x)) -- the two
images 'meet in the middle'.
* One goal is to mimic the warping done to MRI EPI data by
field inhomogeneities, when registering between a 'blip up'
                 and a 'blip down' volume, which will have opposite
distortions.
* Define Wp(x) = x+dis(x) and Wm(x) = x-dis(x). Then since
base(Wm(x)) matches source(Wp(x)), by substituting INV(Wm(x))
wherever we see x, we have base(x) matches
source(Wp(INV(Wm(x))));
that is, the warp V(x) that one would get from the 'usual' way
of running 3dQwarp is V(x) = Wp(INV(Wm(x))).
* Conversely, we can calculate Wp(x) in terms of V(x) as follows:
If V(x) = x + dv(x), define Vh(x) = x + dv(x)/2;
then Wp(x) = V(INV(Vh(x)))
*** Also see the '-pmBASE' option described below.
-->>* Alas: -plusminus does not work with: -allineate :-(
++ If a prior linear alignment is needed, it will have
to be done "manually" using 3dAllineate, and then use
the output of that program as the '-source' dataset for
3dQwarp.
++ -plusminus works well if the base and source datasets
are reasonably well-aligned to start with. By this, I
mean that they overlap well, are not wildly rotated from
each other, and need some 'wiggling' to make them aligned.
-->>++ This option is basically meant for unwarping EPI data,
as described above.
* However, you can use -iniwarp with -plusminus :-)
-->>* The outputs have _PLUS (from the source dataset) and _MINUS
(from the base dataset) in their filenames, in addition to
the {prefix}. The -iwarp option, if present, will be ignored.
* If you use '-iniwarp' with '-plusminus', the warp dataset to
provide with '-iniwarp' is the '_PLUS' warp. That is, you can't
use a "full base-to-source warp" for the initial warp
(one reason '-allineate' doesn't work with '-plusminus').
-pmNAMES p m = This option lets you change the PLUS and MINUS prefix appendages
alluded to directly above to something else that might be more
easy for you to grok. For example, if you are warping EPI
volumes with phase-encoding in the LR-direction with volumes
that had phase-encoding in the RL-direction, you might do
something like
-base EPI_LR+orig -source EPI_RL+orig -plusminus -pmNAMES RL LR -prefix EPIuw
recalling that the PLUS name goes with the source (RL) and the
                 MINUS name goes with the base (LR). Then you'd end up with
datasets
EPIuw_LR+orig and EPIuw_LR_WARP+orig from the base
EPIuw_RL+orig and EPIuw_RL_WARP+orig from the source
The EPIuw_LR_WARP+orig file could then be used to unwarp (e.g.,
using 3dNwarpApply) other LR-encoded EPI datasets from the same
scanning session.
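                 For illustration only (the dataset names are placeholders),
                 that later unwarping step might look like:
                   3dNwarpApply -nwarp EPIuw_LR_WARP+orig      \
                                -source AnotherEPI_LR+orig     \
                                -prefix AnotherEPI_LR_unwarped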
-pmBASE = With '-plusminus', computes the V(x) warp (source to base)
from the plusminus half-warp, and writes it to disk.
Also writes out the source dataset warped to base space,
                 in addition to the Wp(x) '_PLUS' and Wm(x) '_MINUS' results.
* Sneaky aside: if you want potentially larger displacements
than 'normal' 3dQwarp, use '-plusminus', since the meet-in-the-
middle approach will allow the full-size displacements in EACH
of the half-warps, so that the overall displacement between
base and source can be larger. The use of '-pmBASE' will let
you get the source-transformed-to-base result at the end.
If you don't want the plusminus 'in-the-middle' outputs,
just delete them later.
++++++++++ How 'LOUD' do you want this program to be? ++++++++++
-verb = Print out very verbose progress messages (to stderr) :-)
-quiet = Cut out most of the fun fun fun progress messages :-(
-----------------------------------
INTERRUPTING the program gracefully ~1~
-----------------------------------
If you want to stop the program AND have it write out the results up to
the current point, you can do so with a Unix command like
kill -s QUIT processID
where 'processID' is the process identifier number (pid) for the 3dQwarp
program you want to terminate. A command like
ps aux | grep 3dQwarp
will give you a list of all your processes with the string '3dQwarp' in
the command line. For example, at the moment I wrote this text, I would
get the response
rwcox 62873 693.8 2.3 3274496 755284 p2 RN+ 12:36PM 380:25.26 3dQwarp -prefix ...
rwcox 6421 0.0 0.0 2423356 184 p0 R+ 1:33PM 0:00.00 grep 3dQwarp
rwcox 6418 0.0 0.0 2563664 7344 p4 S+ 1:31PM 0:00.15 vi 3dQwarp.c
so the processID for the actual run of 3dQwarp was 62873.
(Also, you can see that Zhark is a 'vi' acolyte, not an 'emacs' heretic.)
The program will 'notice' the QUIT signal at the end of the optimization
of the next patch, so it may be a moment or two before it actually saves
the output dataset(s) and exits.
Of course, if you just want to kill the process in a brute force way, with
nothing left behind to examine, then 'kill processID' will work.
Using 'kill -s QUIT' combined with '-allsave' might be useful in some
circumstances. At least to get some idea of what happened before you
were forced to stop 3dQwarp.
---------------------------------------------------------------------
CLARIFICATION about the very confusing forward and inverse warp issue ~1~
---------------------------------------------------------------------
An AFNI nonlinear warp dataset stores the displacements (in DICOM mm) from
the base dataset grid to the source dataset grid. For computing the source
dataset warped to the base dataset grid, these displacements are needed,
so that for each grid point in the output (warped) dataset, the corresponding
location in the source dataset can be found, and then the value of the source
at that point can be computed (interpolated).
That is, this forward warp is good for finding where a given point in the
base dataset maps to in the source dataset. However, for finding where a
given point in the source dataset maps to in the base dataset, the inverse
warp is needed. Or, if you wish to warp the base dataset to 'look like' the
source dataset, then you use 3dNwarpApply with the input warp being the
inverse warp from 3dQwarp.
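For illustration only (the warp name is a placeholder for a '_WARP' dataset
output by 3dQwarp), warping the base to 'look like' the source might be done as
  3dNwarpApply -nwarp 'INV(Fred_WARP+tlrc)' -source BASE+tlrc -prefix BASE_like_source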
---------------------------
STORAGE of 3D warps in AFNI ~1~
---------------------------
AFNI stores a 3D warp as a 3-volume dataset (NiFTI or AFNI format), with the
voxel values being the displacements in mm (32-bit floats) needed to
'reach out' and bring (interpolate) another dataset into alignment -- that is,
'pulling it back' to the grid defined in the warp dataset header. Thus, the
identity warp is all zero. These 3 volumes I refer to as 'xd', 'yd', and 'zd'
in the internal comments, and they store (delta-x,delta-y,delta-z)
respectively (duh).
There is no provision in the warping software for 2D-only warps; that is,
warping one 2D image to another will still result in a 3D warp, with the zd
brick being chock full of zeros. This happenstance rarely occurs, since Zhark
believes he is the only person who actually has run the AFNI warping program
on 2D images.
In AFNI, (xd,yd,zd) are stored internally in DICOM order, in which +x=Left,
+y=Posterior, +z=Superior (LPS+); thus, DICOM x and y are sign-reversed from
the customary 'neuroscience order' RAS+. Note that the warp dataset grid need
not be stored in this same DICOM (x,y,z) order, which is sometimes confusing.
In the template datasets to which we nonlinearly warp data, we always use
DICOM order for the grids, so in practice warps generated in AFNI are usually
also physically ordered in the DICOM way -- but of course, someone can run our
warping software any which way they like and so get a warp dataset whose grid
order is not DICOM. But the (xd,yd,zd) entries will be in DICOM order.
On occasion (for example, when composing warps), the math will want the
displacement from a location outside of the warp dataset's grid domain.
Originally, AFNI just treated those ghost displacements as zero or as equal
to the (xd,yd,zd) value at the closest edge grid point. However, this
method sometimes led to unhappy edge effects, and so now the software
linearly extrapolates the (xd,yd,zd) fields from each of the 6 faces of the
domain box to allow computation of such displacements. These linear
coefficients are computed from the warp displacement fields when the warp
dataset is read in, and so are not stored in the warp dataset header.
Inverse warps are computed when needed, and are not stored in the same
dataset with the forward warp. At one time, I thought that I'd always
keep them paired, but that idea fell by the wayside. AFNI does not make
use of deformation fields stored in datasets; that is, it does not
store or use datasets whose components are (x+xd,y+yd,z+zd). Such
a dataset could easily be computed with 3dcalc, of course.
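For illustration only -- and assuming that 3dcalc's built-in 'x', 'y', 'z'
symbols give the voxel coordinates in the same DICOM order as the warp
displacements (the dataset names are placeholders) -- such a deformation
field could be built like this:
  3dcalc -a 'Fred_WARP+tlrc[0]' -expr 'a+x' -prefix Fred_defx
  3dcalc -a 'Fred_WARP+tlrc[1]' -expr 'a+y' -prefix Fred_defy
  3dcalc -a 'Fred_WARP+tlrc[2]' -expr 'a+z' -prefix Fred_defz
  3dbucket -prefix Fred_DEFORM Fred_defx+tlrc Fred_defy+tlrc Fred_defz+tlrc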
There is no special header code in an AFNI warp dataset announcing that
'I am a warp!' By AFNI convention, 3D warp datasets have the substring
'_WARP' in their name, and inverse warps '_WARPINV'. But this is just a
convention, and no software depends on this being true. When AFNI warps
2 input datasets (A and B) together to 'meet in the middle' via the
'-plusminus' option (vs. bringing dataset A to be aligned directly to B),
two warp files are produced, one with the warp that brings A to the middle
'point' and one which brings 'B' to the middle point. These warps are
labeled with '_PLUS_WARP' and '_MINUS_WARP' in their filenames, as in
'Fred_PLUS_WARP.nii'. ('PLUS' and 'MINUS' can be altered via the
'-pmNAMES' option to 3dQwarp.)
If one is dealing only with affine transformation of coordinates, these
are stored (again referring to transformation of coordinates in DICOM
order) in plain ASCII text files, either with 3 lines of 4 numbers each
(with the implicit last row of the matrix being 0 0 0 1, as usual),
or as all 12 numbers catenated into a single line (first 4 numbers are
the first row of the matrix, et cetera). This latter format is
always used when dealing with time-dependent affine transformations,
as from FMRI time series registration. A single matrix can be stored in
either format. At present, there is no provision for storing time-dependent
nonlinear warp datasets, since the use case has not arisen. When catenating
a time-dependent affine transform and a nonlinear warp (e.g., for direct
transformation from original EPI data to MNI space), the individual nonlinear
warp for each time point is computed and applied on-the-fly. Similarly, the
inverse warp can be computed on-the-fly, rather than being stored permanently.
Such on-the-fly warp+apply calculations are done in program 3dNwarpApply.
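To illustrate the two affine storage formats mentioned above, a pure shift
of (10,0,-5) mm could be stored either in the 3-line form
   1 0 0 10
   0 1 0  0
   0 0 1 -5
or in the equivalent 1-line form
   1 0 0 10 0 1 0 0 0 0 1 -5
(with the implicit last row 0 0 0 1 omitted in both cases).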
-----------------------------------
OUTLINE of warp optimization method ~1~
-----------------------------------
Repeated composition of incremental warps defined by Hermite cubic basis
functions, first over the entire volume, then over steadily shrinking and
overlapping patches at increasing 'levels': the patches shrink by a factor
of 0.75 at each level. Patches at levels 1 and higher have a 50% overlap.
NOTE: Internally, warps are stored as 'index warps', which are displacements
between 3D (i,j,k) grid indexes rather than between (x,y,z) coordinates.
The reason for this approach is that indexes are what is needed to
find the location in a dataset that a warp maps to. On output and on
input, the (x,y,z) displacements are converted from/to (i,j,k)
displacements. The '-inwarp' option allows you to output an 'index warp'
dataset, but this dataset has no function other than looking at it in
order to understand what the program was working with internally.
At 'level 0' (1 patch over the entire volume), Hermite quintic basis functions
are also employed, but these are not used at the more refined levels -- unless
one of the '-Qxxx' options is used. All basis functions herein are (at least)
continuously differentiable, so the discrete warp computed can be thought of
as a representation of an underlying C1 diffeomorphism. The basis functions
go to zero at the edge of each patch, so the overall warp will decay to the
identity warp (displacements=0) at the edge of the base volume. (However, use
of '-allineate' can make the final output warp be nonzero at the edges; the
programs that apply warps to datasets linearly extrapolate warp displacements
outside the 3D box over which the warp is defined.)
NOTE: * Option '-Qfinal' will use quintic polynomials at the final (smallest)
patch level.
* Option '-Qonly' will use quintic polynomials at all patch levels.
* Option '-Workhard' will run optimization on each patch twice,
first using cubic polynomials and later using quintic polynomials.
For this procedure to work, the source and base datasets need to be reasonably
well aligned already (e.g., via 3dAllineate, if necessary), as the nonlinear
optimization can only deal with relatively small displacements -- fractions of
a patch size. Multiple warps can later be composed and applied via program
3dNwarpApply and/or 3dNwarpCat.
Note that it is not correct to say that the resulting warp is a piecewise cubic
(or quintic) polynomial. The first warp created (at level 0) is such a warp;
call that W0(x). Then the incremental warp W1(x) applied at the next iteration
is also a cubic polynomial warp (say), and the result is W0(W1(x)), which is
more complicated than a cubic polynomial -- and so on. The incremental warps
aren't added, but composed, so that the mathematical form of the final warp
would be very unwieldy to express in polynomial form. Of course, the program
just keeps track of the displacements, not the polynomial coefficients, so it
doesn't 'care' much about the underlying polynomials at all.
One reason for incremental improvement by composition, rather than by addition,
is the simple fact that if W0(x) is invertible and W1(x) is invertible, then
W0(W1(x)) is also invertible -- but W0(x)+W1(x) might not be. The incremental
polynomial warps are kept invertible by simple constraints on the magnitudes
of their coefficients (i.e., the maximum size of incremental displacements).
The penalty is a Neo-Hookean elastic energy function, based on a combination of
bulk and shear distortions: cf. http://en.wikipedia.org/wiki/Neo-Hookean_solid
The goal is to keep the warps from becoming too 'weird' (doesn't always work).
By perusing the many options above, you can see that the user can control the
warp optimization in various ways. All these options make using 3dQwarp seem
pretty complicated. The reason there are so many options is that many different
cases arise, and we are trying to make the program flexible enough to deal with
them all. The SAMPLE USAGE section above is a good place to start for guidance.
*OR* you can use the @SSwarper or auto_warp.py scripts.
-------------- The warp polynomials: '-lite' and '-nolite' ---------------- ~1~
The '-nolite' cubics have 8 basis functions per spatial dimension, since they
are the full tensor product of the 2 Hermite cubics H0 and H1:
H0(x)*H0(y)*H0(z) H1(x)*H0(y)*H0(z) H0(x)*H1(y)*H0(z) H0(x)*H0(y)*H1(z)
H1(x)*H1(y)*H0(z) H1(x)*H0(y)*H1(z) H0(x)*H1(y)*H1(z) H1(x)*H1(y)*H1(z)
and then there are 3 sets of these for x, y, and z displacements, giving
24 total basis functions for a cubic 3D warp patch. The '-lite' cubics
omit any of the tensor product functions whose indexes sum to more than 1,
so there are only 4 basis functions per spatial dimension:
H0(x)*H0(y)*H0(z) H1(x)*H0(y)*H0(z) H0(x)*H1(y)*H0(z) H0(x)*H0(y)*H1(z)
yielding 12 total basis functions (again, 3 of each function above for each
spatial dimension). The 2 1D basis functions, defined over the interval
[-1,1], and scaled to have maximum magnitude 1, are
H0(x) = (1-abs(x))^2 * (1+2*abs(x)) // H0(0) = 1 H0'(0) = 0
H1(x) = (1-abs(x))^2 * x * 6.75 // H1(0) = 0 H1'(0) = 6.75 H1(1/3) = 1
These functions and their first derivatives are 0 at x=+/-1, which is apparent
from the '(1-abs(x))^2' factor they have in common. The functions are also
continuous and differentiable at x=0; thus, they and their unit translates
can serve as a basis for C1(R): the Banach space of continuously differentiable
functions on the real line.
One effect of using the '-lite' polynomial warps is that 3dQwarp runs faster,
since there are fewer parameters to optimize for each patch. Accuracy should
not be impaired, as the approximation quality (in the mathematical sense) of
the '-lite' polynomials is the same order as the '-nolite' full tensor product.
Another effect is that the upper limits on the displacements by any individual
warp patch are somewhat larger than for the full basis set, which may be useful
in some situations.
Similarly, the '-nolite' quintics have 27 basis functions per spatial
dimension, since they are the tensor products of the 3 Hermite quintics
Q0, Q1, Q2. The '-lite' quintics omit any tensor product whose indexes sum
to more than 2. Formulae for these 3 polynomials can be found in function
HQwarp_eval_basis() in AFNI file mri_nwarp.c. For each monomial Qi(x),
Qi(+/-1)=Qi'(+/-1)=Qi''(+/-1) = 0;
these functions are twice continuously differentiable, and can serve as
a basis for C2(R).
--------- Why is it 'cost functional' and not 'cost function' ??? -------- ~1~
In mathematics, a 'functional' is a function that maps an object in an infinite
dimensional space to a scalar. A typical example of a functional is a function
of a function, such as I(f) = definite integral from -1 to 1 of f(x) dx.
In this example, 'f' is a function, which is presumed to be integrable, and thus
an element of the infinite dimensional linear space denoted by L1(-1,1).
Thus, as Zhark was brutally trained in his mathematics bootcamp, the value
being optimized, being a number (scalar) that is calculated from a function
(warp), the 'machine' that calculates this value is a 'functional'. It also
gets the word 'cost' attached as it is something the program is trying to
reduce, and everyone wants to reduce the cost of what they are buying, right?
(AFNI does not come with coupons :-)
-------------------
WEIGHT construction ~1~
-------------------
The cost functional is computed giving (potentially) different weights to
different voxels. The default weight 3D volume is constructed from the
base dataset as follows (i.e., this is what '-useweight' does):
(0) Take absolute value of each voxel value.
(1) Zero out voxels within 4% of each edge
(i.e., 10 voxels in a 256x256x256 dataset).
(2) Define L by applying the '3dClipLevel -mfrac 0.5' algorithm
and then multiplying the result by 3. Then, any values over this
L 'large' value are reduced to L -- i.e., spikes are squashed.
(3) A 3D median filter over a ball with radius 2.25 voxels is applied
to further squash any weird stuff. (This radius is fixed.)
(4) A Gaussian blur of FWHM '-wtgaus' is applied (default = 4.5 voxels).
(5) Define L1 = 0.05 times the maximum of the result from (4).
Define L2 = 0.33 times '3dClipLevel -mfrac 0.33' applied to (4).
Define L = max(L1,L2).
Create a binary mask of all voxels from (4) that are >= L.
Find the largest contiguous cluster in that mask, erode it
a little, and then again find the largest cluster in what remains.
       (The purpose of this is to guillotine off any small 'necks'.)
Zero out all voxels in (4) that are NOT in this surviving cluster.
(6) Scale the result from (5) to the range 0..1. This is the weight
volume.
(X) Remember you can save the computed weight volume to a dataset by
using the '-wtprefix' option.
Where did this scheme come from? A mixture of experimentation, intuition,
and plain old SWAG.
-------------------------------------------------------------------------------
***** This program is experimental and subject to sudden horrific change! *****
((((( OK, it's less experimental now, and so sudden changes will be mild. )))))
-------------------------------------------------------------------------------
----- AUTHOR = Zhark the Grotesquely Warped -- Fall/Winter/Spring 2012-13 -----
----- (but still strangely handsome) -----
=========================================================================
* This binary version of 3dQwarp is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
 * Tests show that using more than 12-16 CPUs with 3dQwarp doesn't help much.
If you have more CPUs on one system, it's faster to run two or three
separate registration jobs in parallel than to use all the CPUs on
one 3dQwarp task at a time.
=========================================================================
AFNI program: 3dRank
Usage: 3dRank [-prefix PREFIX] <-input DATASET1 [DATASET2 ...]>
Replaces voxel values by their rank in the set of
values collected over all voxels in all input datasets.
If you input one dataset, the output should be identical
to the -1rank option in 3dmerge.
This program only works on datasets of integral storage type,
and on integral valued data stored as floats.
-input DATASET1 [DATASET2 ...]: Input datasets.
Acceptable data types are:
byte, short, and floats.
-prefix PREFIX: Output prefix.
If you have multiple datasets on input
the prefix is preceded by r00., r01., etc.
If no prefix is given, the default is
rank.DATASET1, rank.DATASET2, etc.
In addition to the ranked volume, a rank map
1D file is created. It shows the mapping from
the rank (1st column) to the integral values
(2nd column) in the input dataset. Sub-brick float
factors are ignored.
-ver = print author and version info
-help = print this help screen
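Example (for illustration only; the dataset name is a placeholder):
   3dRank -prefix ranked -input roi_counts+orig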
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dRankizer
++ 3dRankizer: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: Zhark of the Ineffable Rank
Usage: 3dRankizer [options] dataset
Output = Rank of each voxel as sorted into increasing value.
- Ties get the average rank.
- Not the same as 3dRank!
- Only sub-brick #0 is processed at this time!
- Ranks start at 1 and increase:
Input = 0 3 4 4 7 9
Output = 1 2 3.5 3.5 5 6
Options:
-brank bbb Set the 'base' rank to 'bbb' instead of 1.
(You could also do this with 3dcalc.)
-mask mset Means to use the dataset 'mset' as a mask:
Only voxels with nonzero values in 'mset'
will be used from 'dataset'. Voxels outside
the mask will get rank 0.
-prefix ppp Write results into float-format dataset 'ppp'
Output is in float format to allow for
non-integer ranks resulting from ties.
  -percentize : Divide the rank by the number of voxels in the dataset, then multiply by 100.0
  -percentize_mask : Divide the rank by the number of voxels in the mask, then multiply by 100.0
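Example (for illustration only; the dataset names are placeholders):
   3dRankizer -mask brainmask+orig -prefix anat_ranks anat+orig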
Author: RW Cox [[a quick hack for his own purposes]]
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3drefit
++ 3drefit: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: RW Cox
Changes some of the information inside a 3D dataset's header. ~1~
Note that this program does NOT change the .BRIK file at all;
the main purpose of 3drefit is to fix up errors made when
using to3d.
To see the current values stored in a .HEAD file, use the command
'3dinfo dataset'. Using 3dinfo both before and after 3drefit is
a good idea to make sure the changes have been made correctly!
20 Jun 2006: 3drefit will now work on NIfTI datasets (but it will write
out the entire dataset, into the current working directory)
Usage: 3drefit [options] dataset ... ~1~
where the options are
-quiet Turn off the verbose progress messages
-orient code Sets the orientation of the 3D volume(s) in the .BRIK.
The code must be 3 letters, one each from the
pairs {R,L} {A,P} {I,S}. The first letter gives
the orientation of the x-axis, the second the
orientation of the y-axis, the third the z-axis:
R = right-to-left L = left-to-right
A = anterior-to-posterior P = posterior-to-anterior
I = inferior-to-superior S = superior-to-inferior
** WARNING: when changing the orientation, you must be sure
to check the origins as well, to make sure that the volume
is positioned correctly in space.
-xorigin distx Puts the center of the edge voxel off at the given
-yorigin disty distance, for the given axis (x,y,z); distances in mm.
-zorigin distz (x=first axis, y=second axis, z=third axis).
Usually, only -zorigin makes sense. Note that this
distance is in the direction given by the corresponding
letter in the -orient code. For example, '-orient RAI'
would mean that '-zorigin 30' sets the center of the
first slice at 30 mm Inferior. See the to3d manual
for more explanations of axes origins.
** SPECIAL CASE: you can use the string 'cen' in place of
a distance to force that axis to be re-centered.
-xorigin_raw xx Puts the center of the edge voxel at the given COORDINATE
-yorigin_raw yy rather than the given DISTANCE. That is, these values
-zorigin_raw zz directly replace the offsets in the dataset header,
without any possible sign changes.
-duporigin cset Copies the xorigin, yorigin, and zorigin values from
the header of dataset 'cset'.
-dxorigin dx Adds distance 'dx' (or 'dy', or 'dz') to the center
-dyorigin dy coordinate of the edge voxel. Can be used with the
-dzorigin dz values input to the 'Nudge xyz' plugin.
** WARNING: you can't use these options at the same
time you use -orient.
** WARNING: consider -shift_tags if dataset has tags
-xdel dimx Makes the size of the voxel the given dimension,
-ydel dimy for the given axis (x,y,z); dimensions in mm.
-zdel dimz ** WARNING: if you change a voxel dimension, you will
probably have to change the origin as well.
-keepcen When changing a voxel dimension with -xdel (etc.),
also change the corresponding origin to keep the
center of the dataset at the same coordinate location.
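                  ** For illustration only (the dataset name is a placeholder),
                     setting the voxel dimensions to 1 mm while keeping the
                     dataset centered at the same place:
                       3drefit -xdel 1.0 -ydel 1.0 -zdel 1.0 -keepcen dset+orig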
-xyzscale fac Scale the size of the dataset voxels by the factor 'fac'.
This is equivalent to using -xdel, -ydel, -zdel together.
-keepcen is used on the first input dataset, and then
any others will be shifted the same amount, to maintain
their alignment with the first one.
** WARNING: -xyzscale can't be used with any of the other
options that change the dataset grid coordinates!
** N.B.: 'fac' must be positive, and using fac=1.0 is stupid.
-TR time Changes the TR time to a new value (see 'to3d -help').
** You can also put the name of a dataset in for 'time', in
which case the TR for that dataset will be used.
** N.B.: If the dataset has slice time offsets, these will
be scaled by the factor newTR/oldTR. This scaling does not
apply if you use '-Tslices' in the same 3drefit run.
-notoff Removes the slice-dependent time-offsets.
-Torg ttt Set the time origin of the dataset to value 'ttt'.
(Time origins are set to 0 in to3d.)
** WARNING: These 3 options apply only to 3D+time datasets.
**N.B.: Using '-TR' on a dataset without a time axis
will add a time axis to the dataset.
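                  ** For illustration only (the dataset name is a placeholder):
                       3drefit -TR 2.5 epi_run1+orig
                     would set the TR to 2.5 (seconds, by the usual AFNI convention).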
-Tslices a b c d ...
Reset the slice time offsets to be 'a', 'b', 'c', ...
(in seconds). The number of values following '-Tslices'
should be the same as the number of slices in the dataset,
but 3drefit does NOT check that this is true.
** If any offset time is < 0 or >= TR, a warning will be
printed (to stderr), but this is not illegal even though
it is a bad idea.
** If the dataset does not have a TR set, then '-Tslices'
will fail. You can use '-TR' to set the inter-volume time
spacing in the same 3drefit command.
** If you have the slices times stored (e.g., from DICOM) in
some other units, you can scale them to be in seconds by
putting a scale factor after the '-Tslices' option as follows:
-Tslices '*0.001' 300 600 900 ...
which would be used to scale from milliseconds to seconds.
The format is to start the scale factor with a '*' to tell
3drefit that this number is not a slice offset but is to be
                   used as a scale factor for the rest of the following values.
Since '*' is a filename wildcard, it needs to be in quotes!
** The program stops looking for number values after '-Tslices'
when it runs into something that does not look like a number.
Here, 'look like a number' means a character string that:
* starts with a digit 0..9
* starts with a decimal point '.' followed by a digit
* starts with a minus sign '-' followed by a digit
* starts with '-.' followed by a digit
So if the input dataset name starts with a digit, and the
                   last command line option is '-Tslices', 3drefit will think
the filename is actually a number for a slice offset time.
To avoid this problem, you can do one of these things:
* Put in an option that is just the single character '-'
* Don't use '-Tslices' as the last option
* Put a directory name before the dataset name, as in
'./Galacticon.nii'
** If you have the slice time offsets stored in a text file
as a list of values, then you can input these values on
the command line using the Unix backquote operator, as in
-Tslices `cat SliceTimes.1D`
** For example, if the slice time offsets are in a JSON
                   sidecar (a la BIDS), you might be able to do something like
the following to extract the timings into a file:
abids_json_tool.py -json2txt -input sub-10506_task-pamenc_bold.json -prefix junk.txt
grep SliceTiming junk.txt | sed -e 's/^SliceTiming *://' > SliceTimes.1D
\rm junk.txt
-newid Changes the ID code of this dataset as well.
-nowarp Removes all warping information from dataset.
-apar aset Set the dataset's anatomy parent dataset to 'aset'
** N.B.: The anatomy parent is the dataset from which the
transformation from +orig to +acpc and +tlrc coordinates
is taken. It is appropriate to use -apar when there is
more than 1 anatomical dataset in a directory that has
been transformed. In this way, you can be sure that
AFNI will choose the correct transformation. You would
                   use this option on all the +orig datasets that are
aligned with 'aset' (i.e., that were acquired in the
same scanning session).
** N.B.: Special cases of 'aset'
aset = NULL --> remove the anat parent info from the dataset
aset = SELF --> set the anat parent to be the dataset itself
-wpar wset Set the warp parent (the +orig version of a +tlrc dset).
This option is used by @auto_tlrc. Do not use it unless
you know what you're doing.
-clear_bstat Clears the statistics (min and max) stored for each sub-brick
in the dataset. This is useful if you have done something to
modify the contents of the .BRIK file associated with this
dataset.
-redo_bstat Re-computes the statistics for each sub-brick. Requires
reading the .BRIK file, of course. Also does -clear_bstat
before recomputing statistics, so that if the .BRIK read
fails for some reason, then you'll be left without stats.
-statpar v ... Changes the statistical parameters stored in this
dataset. See 'to3d -help' for more details.
-markers Adds an empty set of AC-PC markers to the dataset,
if it can handle them (is anatomical, is in the +orig
view, and isn't 3D+time).
** WARNING: this will erase any markers that already exist!
-shift_tags Apply -dxorigin (and y and z) changes to tags.
-dxtag dx Add dx to the coordinates of all tags.
-dytag dy Add dy to the coordinates of all tags.
-dztag dz Add dz to the coordinates of all tags.
-view code Changes the 'view' to be 'code', where the string 'code'
is one of 'orig', 'acpc', or 'tlrc'.
** WARNING: The program will also change the .HEAD and .BRIK
filenames to match. If the dataset filenames already
exist in the '+code' view, then this option will fail.
You will have to rename the dataset files before trying
to use '-view'. If you COPY the files and then use
'-view', don't forget to use '-newid' as well!
                  ** WARNING2: Changing the view without specifying the new space
might lead to conflicting information. Consider specifying
the space along with -view
-space spcname Associates the dataset with a specific template type, e.g.
TLRC, MNI, ORIG. The default assumed for +tlrc datasets is
'TLRC'. One use for this attribute is to use MNI space
coordinates and atlases instead of the default TLRC space.
** See WARNING2 for -view option.
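                  ** For illustration only (the dataset name is a placeholder):
                       3drefit -view tlrc -space MNI anat_warped+orig
                     would mark the dataset as being in the +tlrc view AND
                     in MNI space.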
-cmap cmaptype Associate colormap type with dataset. Available choices are
CONT_CMAP (the default), INT_CMAP (integer colormap display)
and SPARSE_CMAP (for sparse integer colormaps). INT_CMAP is
appropriate for showing ROI mask datasets or Atlas datasets
where the continuous color scales are not useful.
-label2 llll Set the 'label2' field in a dataset .HEAD file to the
string 'llll'. (Can be used as in AFNI window titlebars.)
  -labeltable TTT Insert the label table TTT in the .HEAD file.
The label table format is described in README.environment
under the heading: 'Variable: AFNI_VALUE_LABEL_DTABLE'
See also -copytables
-denote Means to remove all possibly-identifying notes from
the header. This includes the History Note, other text
Notes, keywords, and labels.
-deoblique Replace transformation matrix in header with cardinal matrix.
This option DOES NOT deoblique the volume. To do so
you should use 3dWarp -deoblique. This option is not
to be used unless you really know what you're doing.
-oblique_origin
assume origin and orientation from oblique transformation
matrix rather than traditional cardinal information
-oblique_recenter
Adjust the origin so that the cardinalized 0,0,0 is in
the same brain location as that of the original (oblique?)
(scanner?) coordinates.
Round this to the nearest voxel center.
* Even if cardinal, rounding might cause an origin shift
(see -oblique_recenter_raw).
-oblique_recenter_raw
Like -oblique_recenter, but do not round.
So coordinate 0,0,0 is in the exact same location, even
if not at a voxel center.
-byteorder bbb Sets the byte order string in the header.
Allowable values for 'bbb' are:
LSB_FIRST MSB_FIRST NATIVE_ORDER
Note that this does not change the .BRIK file!
Swapping the bytes in the .BRIK file itself is done
by the programs 2swap and 4swap.
-checkaxes Doesn't alter the input dataset; rather, this just
checks the dataset axes orientation codes and the
axes matrices for consistency. (This option was
added primarily to check for bugs in various codes.)
-appkey ll Appends the string 'll' to the keyword list for the
whole dataset.
-repkey ll Replaces the keyword list for the dataset with the
string 'll'.
-empkey Destroys the keyword list for the dataset.
-atrcopy dd nn Copy AFNI header attribute named 'nn' from dataset 'dd'
into the header of the dataset(s) being modified.
For more information on AFNI header attributes, see
documentation file README.attributes. More than one
'-atrcopy' option can be used.
**N.B.: This option is for those who know what they are doing!
Without the -saveatr option, this option is
meant to be used to alter attributes that are NOT
directly mapped into dataset internal structures, since
those structures are mapped back into attribute values
as the dataset is being written to disk. If you want
to change such an attribute, you have to use the
corresponding 3drefit option directly or use the
-saveatr option.
If you are confused, try to understand this:
Option -atrcopy was never intended to modify AFNI-
specific attributes. Rather, it was meant to copy
user-specific attributes that had been added to some
dataset using -atrstring option. A cursed day came when
it was convenient to use -atrcopy to copy an AFNI-specific
attribute (BRICK_LABS to be exact) and for that to
take effect in the output, the option -saveatr was added.
Contact Daniel Glen and/or Rick Reynolds for further
clarification and any other needs you may have.
Do NOT use -atrcopy or -atrstring with other modification
options.
See also -copyaux
-atrstring n 'x' Copy the string 'x' into the dataset(s) being
modified, giving it the attribute name 'n'.
To be safe, the 'x' string should be in quotes.
**N.B.: You can store attributes with almost any name in
the .HEAD file. AFNI will ignore those it doesn't
know anything about. This technique can be a way of
communicating information between programs. However,
when most AFNI programs write a new dataset, they will
not preserve any such non-standard attributes.
**N.B.: Special case: if the string 'x' is of the form
'file:name', then the contents of the file 'name' will
be read in as a single string and stored in the attribute.
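For example (hypothetical attribute and file names), to store the
contents of a text file as a header attribute:
  3drefit -atrstring PROJECT_NOTES 'file:project_notes.txt' mydset+orig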
-atrfloat name 'values'
-atrint name 'values'
Create or modify floating point or integer attributes.
The input values may be specified as a single string
in quotes or as a 1D filename or string. For example,
3drefit -atrfloat IJK_TO_DICOM_REAL '1 0.2 0 0 -0.2 1 0 0 0 0 1 0' dset+orig
3drefit -atrfloat IJK_TO_DICOM_REAL flipZ.1D dset+orig
3drefit -atrfloat IJK_TO_DICOM_REAL \
'1D:1,0.2,2@0,-0.2,1,2@0,2@0,1,0' \
dset+orig
Almost all AFNI attributes can be modified in this way.
-saveatr (default) Copy the attributes that are known to AFNI into
the dset->dblk structure thereby forcing changes to known
attributes to be present in the output.
This option only makes sense with -atrcopy
**N.B.: Don't do something like copy labels of a dataset with
30 sub-bricks to one that has only 10, or vice versa.
This option is for those who would deservedly earn a
hunting license.
-nosaveatr Opposite of -saveatr
Example:
3drefit -saveatr -atrcopy WithLabels+tlrc BRICK_LABS NeedsLabels+tlrc
-'type' Changes the type of data that is declared for this
dataset, where 'type' is chosen from the following:
ANATOMICAL TYPES
spgr == Spoiled GRASS fse == Fast Spin Echo
epan == Echo Planar anat == MRI Anatomy
ct == CT Scan spct == SPECT Anatomy
pet == PET Anatomy mra == MR Angiography
bmap == B-field Map diff == Diffusion Map
omri == Other MRI abuc == Anat Bucket
FUNCTIONAL TYPES
fim == Intensity fith == Inten+Thr
fico == Inten+Cor fitt == Inten+Ttest
fift == Inten+Ftest fizt == Inten+Ztest
fict == Inten+ChiSq fibt == Inten+Beta
fibn == Inten+Binom figt == Inten+Gamma
fipt == Inten+Poisson fbuc == Func-Bucket
-copyaux auxset Copies the 'auxiliary' data from dataset 'auxset'
over the auxiliary data for the dataset being
modified. Auxiliary data comprises sub-brick labels,
keywords, statistics codes, nodelists, and labeltables
AND/OR atlas point lists.
'-copyaux' occurs BEFORE the '-sub' operations below,
so you can use those to alter the auxiliary data
that is copied from auxset.
-copytables tabset Copies labeltables AND/OR atlas point lists, if any,
from tabset to the input dataset.
'-copytables' occurs BEFORE the '-sub' operations below,
so you can use those to alter the auxiliary data
that is copied from tabset.
-relabel_all xx Reads the file 'xx', breaks it into strings,
and puts these strings in as the sub-brick
labels. Basically a batch way of doing
'-sublabel' many times, for n=0, 1, ...
** This option is executed BEFORE '-sublabel',
so any labels from '-sublabel' will over-ride
labels from this file.
** Strings in the 'xx' file are separated by
whitespace (blanks, tabs, new lines).
-relabel_all_str 'lab0 lab1 ... lab_p': Just like -relabel_all
but with labels all present in one string
-sublabel_prefix PP: Prefix each sub-brick's label with PP
-sublabel_suffix SS: Suffix each sub-brick's label with SS
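For example (hypothetical labels and dataset name), to relabel the 3
sub-bricks of a dataset in one pass:
  3drefit -relabel_all_str 'Vis Aud Tac' stims+tlrc
or, equivalently, put the labels in a text file and use
  3drefit -relabel_all mylabels.txt stims+tlrc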
The options below attach auxiliary data to sub-bricks in the dataset. ~1~
Each option may be used more than once so that
multiple sub-bricks can be modified in a single run of 3drefit.
-sublabel n ll Attach to sub-brick #n the label string 'll'.
-subappkey n ll Add to sub-brick #n the keyword string 'll'.
-subrepkey n ll Replace sub-brick #n's keyword string with 'll'.
-subempkey n Empty out sub-brick #n's keyword string
-substatpar n type v ...
Attach to sub-brick #n the statistical type and
the auxiliary parameters given by values 'v ...',
where 'type' is one of the following:
Stat Types: ~2~
type Description PARAMETERS
---- ----------- ----------------------------------------
fico Cor SAMPLES FIT-PARAMETERS ORT-PARAMETERS
fitt Ttest DEGREES-of-FREEDOM
fift Ftest NUMERATOR and DENOMINATOR DEGREES-of-FREEDOM
fizt Ztest N/A
fict ChiSq DEGREES-of-FREEDOM
fibt Beta A (numerator) and B (denominator)
fibn Binom NUMBER-of-TRIALS and PROBABILITY-per-TRIAL
figt Gamma SHAPE and SCALE
fipt Poisson MEAN
You can also use option '-unSTAT' to remove all statistical encodings
from sub-bricks in the dataset. This operation would be desirable if
you modified the values in the dataset (e.g., via 3dcalc).
['-unSTAT' is done BEFORE the '-substatpar' operations, so you can ]
[combine these options to completely redo the sub-bricks, if needed.]
[Option '-unSTAT' also implies that '-unFDR' will be carried out. ]
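For example (a hypothetical dataset), to mark sub-brick #1 as a t-statistic
with 120 degrees of freedom and sub-brick #3 as an F-statistic with 2 and
120 degrees of freedom:
  3drefit -substatpar 1 fitt 120 -substatpar 3 fift 2 120 stats+tlrc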
The following options allow you to modify VOLREG fields: ~1~
-vr_mat val1 ... val12 Use these twelve values for VOLREG_MATVEC_index.
-vr_mat_ind index Index of VOLREG_MATVEC_index field to be modified.
Optional, default index is 0.
NB: You can only modify one VOLREG_MATVEC_index at a time
-vr_center_old x y z Use these 3 values for VOLREG_CENTER_OLD.
-vr_center_base x y z Use these 3 values for VOLREG_CENTER_BASE.
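For example (a hypothetical case, with the 12 values taken to be the 3x4
matrix+vector in row order), to reset the first VOLREG matrix to the
identity transform:
  3drefit -vr_mat_ind 0 -vr_mat 1 0 0 0 0 1 0 0 0 0 1 0 mydset+orig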
The following options let you modify the FDR curves stored in the header: ~1~
-addFDR = For each sub-brick marked with a statistical code, (re)compute
the FDR curve of z(q) vs. statistic, and store in the dataset header
* '-addFDR' runs as if '-new -pmask' were given to 3dFDR, so that
stat values == 0 will be ignored in the FDR algorithm.
-FDRmask mset = load dataset 'mset' and use it as a mask
-STATmask mset for the '-addFDR' calculations.
* This can be useful if you ran 3dDeconvolve/3dREMLFIT
without a mask, and want to apply a mask to improve
the FDR estimation procedure.
* If '-addFDR' is NOT given, then '-FDRmask' does nothing.
* 3drefit does not generate an automask for FDR purposes
(unlike 3dREMLfit and 3dDeconvolve), since the input
dataset may contain only statistics and no structural
information about the brain.
-unFDR = Remove all FDR curves from the header
[you will want to do this if you have done something to ]
[modify the values in the dataset statistical sub-bricks]
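For example (hypothetical dataset names), to recompute the FDR curves of a
statistics dataset within an explicit brain mask:
  3drefit -addFDR -STATmask brainmask+tlrc stats+tlrc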
++ Last program update: 27 Mar 2009
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dRegAna
++ 3dRegAna: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
This program performs multiple linear regression analysis.
Usage:
3dRegAna
-rows n number of input datasets
-cols m number of X variables
-xydata X11 X12 ... X1m filename X variables and Y observations
. .
. .
. .
-xydata Xn1 Xn2 ... Xnm filename X variables and Y observations
-model i1 ... iq : j1 ... jr definition of linear regression model;
reduced model:
Y = f(Xj1,...,Xjr)
full model:
Y = f(Xj1,...,Xjr,Xi1,...,Xiq)
[-diskspace] print out disk space required for program execution
[-workmem mega] number of megabytes of RAM to use for statistical
workspace (default = 750 (was 12))
[-rmsmin r] r = minimum rms error to reject constant model
[-fdisp fval] display (to screen) results for those voxels
whose F-statistic is > fval
[-flof alpha] alpha = minimum p value for F due to lack of fit
The following commands generate individual AFNI 2 sub-brick datasets:
[-fcoef k prefixname] estimate of kth regression coefficient
along with F-test for the regression
is written to AFNI `fift' dataset
[-rcoef k prefixname] estimate of kth regression coefficient
along with coef. of mult. deter. R^2
is written to AFNI `fith' dataset
[-tcoef k prefixname] estimate of kth regression coefficient
along with t-test for the coefficient
is written to AFNI `fitt' dataset
The following commands generate one AFNI 'bucket' type dataset:
[-bucket n prefixname] create one AFNI 'bucket' dataset having
n sub-bricks; n=0 creates default output;
output 'bucket' is written to prefixname
The mth sub-brick will contain:
[-brick m coef k label] kth parameter regression coefficient
[-brick m fstat label] F-stat for significance of regression
[-brick m rstat label] coefficient of multiple determination R^2
[-brick m tstat k label] t-stat for kth regression coefficient
[-datum DATUM] write the output in DATUM format.
Choose from short (default) or float.
N.B.: For this program, the user must specify 1 and only 1 sub-brick
with each -xydata command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used, e.g.:
-xydata 2.17 4.59 7.18 'fred+orig[3]'
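As a minimal sketch (hypothetical filenames and covariate values, and
assuming index 0 in '-model' denotes the constant term), regressing one
X variable over 3 input datasets and testing it against the constant-only
reduced model:
  3dRegAna -rows 3 -cols 1                        \
           -xydata 1.0 'subj1+orig[0]'            \
           -xydata 2.0 'subj2+orig[0]'            \
           -xydata 3.0 'subj3+orig[0]'            \
           -model 1 : 0                           \
           -bucket 0 RegOut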
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dReHo
REHO/Kendall W code, written by PA Taylor (July, 2012), part of FATCAT
(Taylor & Saad, 2013) in AFNI.
ReHo (regional homogeneity) is just a renaming of the Kendall's W
(or Kendall's coefficient of concordance, KCC, (Kendall & Babington
Smith, 1939)) for a set of time series. Application to fMRI data was
described in the paper: <<Regional homogeneity approach to fMRI data
analysis>> by Zang, Jiang, Lu, He, and Tian (2004, NeuroImage),
where it was applied to the study of both task and resting state
functional connectivity (RSFC).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ USAGE: This program is made to read in data from a 4D time series data set
and to calculate Kendall's W per voxel using neighborhood voxels.
Instead of the time series values themselves, Kendall's W uses the
relative rank ordering of a 'hood over all time points to evaluate
a parameter W in range 0-1, with 0 reflecting no trend of agreement
between time series and 1 reflecting perfect agreement. From W, one
can simply get Friedman's chi-square value (with degrees of freedom
equal to `the length of the time series minus one'), so this can
also be calculated here and returned in the second sub-brick:
chi-sq = (N_n)*(N_t - 1)*W, with N_dof = N_t - 1,
where N_n is the size of neighborhood; N_t is the number of
time points; W is the ReHo or concordance value; and N_dof is the
number of degrees of freedom. A switch is included to have the
chi-sq value output as a subbrick of the ReHo/W. (In estimating W,
tied values are taken into account by averaging appropriate
rankings and adjusting other factors in W appropriately, which
only makes a small difference in value, but the computational time
still isn't that bad).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMAND: 3dReHo -prefix PREFIX -inset FILE {-nneigh 7|19|27} \
{-chi_sq} {-mask MASK} {-in_rois INROIS}
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ RUNNING, need to provide:
-prefix PREFIX :output file name part.
-inset FILE :time series file.
-chi_sq :switch to output Friedman chi-sq value per voxel
as a subbrick.
-mask MASK :can include a whole brain mask within which to
calculate ReHo. Otherwise, data should be masked
already.
-nneigh NUMBER :number of voxels in neighborhood, inclusive; can be:
7 (for facewise neighbors, only),
19 (for face- and edge-wise neighbors),
27 (for face-, edge-, and node-wise neighbors).
The default is: 27.
-neigh_RAD R :for additional voxelwise neighborhood control, the
radius R of a desired neighborhood can be put in; R is
a floating point number, and must be >1. Examples of
the numbers of voxels in a given radius are as follows
(you can roughly approximate with the ol' 4*PI*(R^3)/3
thing):
R=2.0 -> V=33,
R=2.3 -> V=57,
R=2.9 -> V=93,
R=3.1 -> V=123,
R=3.9 -> V=251,
R=4.5 -> V=389,
R=6.1 -> V=949,
but you can choose most any value.
-neigh_X A
-neigh_Y B :as if *that* weren't enough freedom, you can even have
-neigh_Z C ellipsoidal volumes of voxelwise neighbors. This is
done by inputting the set of semi-radius lengths you
want, again as floats/decimals. The 'hood is then made
according to the following relation:
(i/A)^2 + (j/B)^2 + (k/C)^2 <=1.
which will have approx. V=4*PI*A*B*C/3. The impetus for
this freedom was for use with data having anisotropic
voxel edge lengths.
-box_RAD BR :for additional voxelwise neighborhood control,
one can make a cubic box centered on a given voxel;
BR specifies the number of voxels outward in a given
cardinal direction, so the number of voxels in the
volume would be as follows:
BR=1 -> V=27,
BR=2 -> V=125,
BR=3 -> V=343,
etc. In this case, BR should only be integer valued.
-box_X BA
-box_Y BB :as if that *still* weren't enough freedom, you can have
-box_Z BC box volume neighborhoods of arbitrary dimension; these
values put in get added in the +/- directions of each
axis, so the volume in terms of number of voxels would
be calculated:
if BA = 1, BB = 2 and BC = 4,
then V = (1+2*1)*(1+2*2)*(1+2*4) = 135.
--> NB: you can't mix-n-match '-box_*' and '-neigh_*' settings.
Mi dispiace (ma sol'un po).
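For example (hypothetical datasets), with 2x2x4 mm voxels, semi-radii of
3, 3, and 1.5 voxels give a roughly isotropic ~6 mm neighborhood:
  3dReHo -prefix REST_REHO_ANIS -inset REST+orig    \
         -neigh_X 3 -neigh_Y 3 -neigh_Z 1.5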
-in_rois INROIS :can input a set of ROIs, each labelled with distinct
integers. ReHo will be calculated per ROI. The output
for this info is in a file called PREFIX_ROI_reho.vals
(or PREFIX_ROI_reho_000.vals, PREFIX_ROI_reho_001.vals,
etc. if the INROIS has >1 subbrick); if `-chi_sq'
values are being output, then those values for the
ROIs will be output in an analogously formatted
file called PREFIX_ROI_reho.chi (with similar
zeropadded numbering for multibrick input).
As of March, the text format in the *.vals and *.chi files
has changed: it will be 2 columns of numbers per file,
with the first column being the ROI (integer) value
and the second column being the ReHo or Chi-sq value.
Voxelwise ReHo will still be calculated and output.
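For example (hypothetical file names), to get per-ROI ReHo values in
addition to the voxelwise map:
  3dReHo -prefix REHO_WB -inset REST+orig -in_rois ROIS+orig -chi_sq
which would also produce REHO_WB_ROI_reho.vals and REHO_WB_ROI_reho.chi.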
+ OUTPUT:
[A] single file with name, e.g., PREFIX+orig.BRIK, which may have
two subbricks (2nd subbrick if `-chi_sq' switch is used):
[0] contains the ReHo (Kendall W) value per voxel;
[1] contains Friedman chi-square of ReHo per voxel (optional);
note that the number of degrees of freedom of this value
is the length of time series minus 1.
[B] can get list of ROI ReHo values, as well (optional).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3dReHo \
-mask MASK+orig. \
-inset REST+orig \
-prefix REST_REHO \
-neigh_RAD 2.9 \
-chi_sq
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
____________________________________________________________________________
AFNI program: 3dREMLfit
Usage: 3dREMLfit [options] ~1~
**** Generalized least squares time series fit, with REML ****
**** estimation of the temporal auto-correlation structure. ****
---------------------------------------------------------------------
**** The recommended way to use 3dREMLfit is via afni_proc.py, ****
**** which will pre-process the data, and also give some useful ****
**** diagnostic tools/outputs for assessing the data's quality. ****
**** [afni_proc.py will make your FMRI-analysis life happier!] ****
---------------------------------------------------------------------
* This program provides a generalization of 3dDeconvolve:
it allows for serial correlation in the time series noise.
* It solves the linear equations for each voxel in the generalized
(prewhitened) least squares sense, using the REML estimation method
to find a best-fit ARMA(1,1) model for the time series noise
correlation matrix in each voxel (i.e., each voxel gets a separate
pair of ARMA parameters).
++ Note that the 2-parameter ARMA(1,1) correlation model is hard-coded
into this program -- there is no way to use a more elaborate model,
such as the 5-parameter ARMA(3,2), in 3dREMLfit.
++ A 'real' REML optimization of the autocorrelation model is made,
not simply an adjustment based on the residuals from a preliminary
OLSQ regression.
++ See the section 'What is ARMA(1,1)' (far below) for more fun details.
++ And the section 'What is REML' (even farther below).
* You MUST run 3dDeconvolve first to generate the input matrix
(.xmat.1D) file, which contains the hemodynamic regression
model, censoring and catenation information, the GLTs, etc.
See the output of '3dDeconvolve -help' for information on
using that program to setup the analysis.
++ However, you can input a 'naked' (non-3dDeconvolve) matrix
file using the '-matim' option, if you know what you are doing.
* If you don't want the 3dDeconvolve analysis to run, you can
prevent that by using 3dDeconvolve's '-x1D_stop' option.
* 3dDeconvolve also prints out a cognate command line for running
3dREMLfit, which should get you going with relative ease.
* The output datasets from 3dREMLfit are structured to resemble
the corresponding results from 3dDeconvolve, to make it
easy to adapt your scripts for further processing.
* Is this type of analysis (generalized least squares) important?
That depends on your point of view, your data, and your goals.
If you really want to know the answer, you should run
your analyses both ways (with 3dDeconvolve and 3dREMLfit),
through to the final step (e.g., group analysis), and then
decide if your neuroscience/brain conclusions depend strongly
on the type of linear regression that was used.
* If you are planning to use 3dMEMA for group analysis, then using
3dREMLfit instead of 3dDeconvolve is a good idea. 3dMEMA uses
the t-statistic of the beta weight as well as the beta weight
itself -- and the t-values from 3dREMLfit are probably
more accurate than those from 3dDeconvolve, since the underlying
variance estimate should be more accurate (less biased).
* When there is significant temporal correlation, and you are using
'IM' regression (estimated individual betas for each event),
the REML GLSQ regression can be superior to OLSQ beta
estimates -- in the sense that the resulting betas
have somewhat less variance with GLSQ than with OLSQ.
-------------------------------------------
Input Options (the first two are mandatory) ~1~
-------------------------------------------
-input ddd = Read time series dataset 'ddd'.
* This is the dataset without censoring!
* The '-matrix' file, on the other hand, encodes
which time points are to be censored, and the
matrix stored therein is already censored.
* The doc below has a discussion of censoring in 3dREMLfit:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/statistics/remlfit.html
-matrix mmm = Read the matrix 'mmm', which should have been
output from 3dDeconvolve via the '-x1D' option.
** N.B.: You can omit entirely defining the regression matrix,
but then the program will fabricate a matrix consisting
of a single column with all 1s. This option is
mostly for the convenience of the author; for
example, to have some fun with an AR(1) time series:
1deval -num 1001 -expr 'gran(0,1)+(i-i)+0.7*z' > g07.1D
3dREMLfit -input g07.1D'{1..$}'' -Rvar -.1D -grid 5 -MAXa 0.9
** N.B.: 3dREMLfit now supports all zero columns, if you use
the '-GOFORIT' option. [Ides of March, MMX A.D.]
More Primitive Alternative Ways to Define the Regression Matrix
--------------------------------------------------------------------------
-polort P = If no -matrix option is given, AND no -matim option,
create a matrix with Legendre polynomial regressors
up to order 'P'. The default value is P=0, which
produces a matrix with a single column of all ones.
(That is the default matrix described above.)
-matim M = Read a standard .1D file as the matrix.
* That is, an ASCII file of numbers laid out in a
rectangular array. The number of rows must equal the
number of time points in the input dataset. The number
of columns is the number of regressors.
* Advanced features, such as censoring, can only be implemented
by providing a true .xmat.1D file via the '-matrix' option.
** However, censoring can still be applied (in a way) by including
extra columns in the matrix. For example, to censor out time
point #47, a column that is 1 at time point #47 and zero at
all other time points can be used.
++ Remember that AFNI counting starts at 0, so this column
would start with 47 0s, then a single 1, then the rest
of the entries would be 0s.
++ 3dDeconvolve option '-x1D_regcensored' will create such a
.xmat.1D file, with the censoring indicated by 0-1 columns
rather than by the combination of 'GoodList' and omitted
rows. That is, instead of shrinking the matrix (by rows)
it will expand the matrix (by columns).
++ You can strip out the XML-ish header from the .xmat.1D
file with a Unix command like this:
grep -v '^#' Fred.xmat.1D > Fred.rawmat.1D
++ In cases with lots of censoring, expanding the matrix
by lots of columns will make 3dREMLfit run more slowly.
For most situations, this slowdown will not be horrific.
* An advanced intelligence could reverse engineer the XML
format used by the .xmat.1D files, to fully activate all the
regression features of this software :)
** N.B.: You can use only 'Col' as a name in GLTs ('-gltsym')
with these nonstandard matrix input methods, since
the other column names come from the '-matrix' file.
** These mutually exclusive options are ignored if -matrix is used.
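As a sketch (hypothetical file names), fitting a 'naked' matrix with
'-matim' and testing one of its columns via the generic 'Col' name:
  3dREMLfit -input epi+orig -matim design.1D          \
            -gltsym 'SYM: Col[[2]]' Reg2 -tout        \
            -Rglt col_stats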
----------------------------------------------------------------------------
The matrix supplied is the censored matrix, if any time points are marked
to be removed from the analysis -- that is, if GoodList (infra) is NOT
the entire list of time points from 0..(nrow-1).
Information supplied in the .xmat.1D format XML header's attributes
includes the following (but is not limited to):
* ColumnLabels = a string label for each column in the matrix
* ColumnGroups = groupings of columns into associated regressors
(e.g., motion, baseline, task)
* RowTR = TR in seconds
* GoodList = list of time points to use (inverse of censor list)
* NRowFull = size of full matrix (without censoring)
* RunStart = time point indexes of start of the runs
* Nstim = number of distinct stimuli
* StimBots = column indexes for beginning of each stimulus's regressors
* StimTops = column indexes for ending of each stimulus's regressors
* StimLabels = names for each stimulus
* CommandLine = string of command used to create the file
See the doc below for a lengthier description of the matrix format:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/statistics/remlfit.html
----------------------------------------------------------------------------
---------------
Masking options ~1~
---------------
-mask MMM = Read dataset 'MMM' as a mask for the input; voxels outside
the mask will not be fit by the regression model.
-automask = If you don't know what this does by now, I'm not telling.
*** If you don't specify ANY mask, the program will
build one automatically (from each voxel's RMS)
and use this mask SOLELY for the purpose of
computing the FDR curves in the bucket dataset's header.
* If you DON'T want this to happen, then use '-noFDR'
and later run '3drefit -addFDR' on the bucket dataset.
* To be precise, the FDR automask is only built if
the input dataset has at least 5 voxels along each of
the x and y axes, to avoid applying it when you run
3dREMLfit on 1D timeseries inputs.
-STATmask ss = Build a mask from file 'ss', and use this for the purpose
of computing the FDR curves.
* The actual results are NOT masked with this option
(only with '-mask' or '-automask' options).
* If you don't use '-STATmask', then the mask from
'-mask' or '-automask' is used for the FDR work.
If neither of those is given, then the automatically
generated mask described just above is used for FDR.
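As a small sketch (hypothetical file names) of a typical run with an
explicit mask, using a matrix written out by 3dDeconvolve (the output
options used here are described below):
  3dREMLfit -matrix X.xmat.1D -input all_runs+orig    \
            -mask full_mask+orig -fout -tout          \
            -Rbuck stats_REML -Rvar stats_REMLvar     \
            -Rfitts fitts_REML -Rerrts errts_REML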
--------------------------------------------------------------------------
Options to Add Baseline (Null Hypothesis) Columns to the Regression Matrix ~1~
--------------------------------------------------------------------------
-addbase bb = You can add baseline model columns to the matrix with
this option. Each column in the .1D file 'bb' will
be appended to the matrix. This file must have at
least as many rows as the matrix does.
* Multiple -addbase options can be used, if needed.
* More than 1 file can be specified, as in
-addbase fred.1D ethel.1D elvis.1D
* None of the .1D filenames can start with the '-' character,
since that is the signal for the next option.
* If the matrix from 3dDeconvolve was censored, then
this file (and '-slibase' files) can either be
censored to match, OR 3dREMLfit will censor these
.1D files for you.
+ If the column length (number of rows) of the .1D file
is the same as the column length of the censored
matrix, then the .1D file WILL NOT be censored.
+ If the column length of the .1D file is the same
as the column length of the uncensored matrix,
then the .1D file WILL be censored -- the same
rows excised from the matrix in 3dDeconvolve will
be resected from the .1D file before the .1D file's
columns are appended to the matrix.
+ The censoring information from 3dDeconvolve is stored
in the matrix file header, and you don't have to
provide it again on the 3dREMLfit command line.
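For example (hypothetical regressor file names), to append physiological
noise regressors to the 3dDeconvolve matrix:
  3dREMLfit -matrix X.xmat.1D -input epi+orig         \
            -addbase resp_regs.1D card_regs.1D        \
            -Rbuck stats_phys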
-dsort dset = Similar to -addbase in concept, BUT the dataset 'dset'
provides a different baseline regressor for every
voxel. This dataset must have the same number of
time points as the input dataset, and have the same
number of voxels. [Added 22 Jul 2015]
+ The REML (a,b) estimation is done WITHOUT this extra
voxel-wise regressor, and then the selected (a,b)
ARMA parameters are used to do the final regression for
the '-R...' output datasets. This method is not ideal,
but the alternative of re-doing the (a,b) estimation with
a different matrix for each voxel would be VERY slow.
-- The -dsort estimation is thus different from the -addbase
and/or -slibase estimations, in that the latter cases
incorporate the extra regressors into the REML estimation
of the ARMA (a,b) parameters. The practical difference
between these two methods is usually very small ;-)
+ If any voxel time series from -dsort is constant through time,
the program will print a warning message, and peculiar things
might happen. Gleeble, fitzwilly, blorten, et cetera.
-- Actually, if this event happens, the 'offending' -dsort voxel
time series is replaced by the mean time series from that
-dsort dataset.
+ The '-Rbeta' (and/or '-Obeta') option will include the
fit coefficient for the -dsort regressor (last).
+ There is no way to include the -dsort regressor beta in a GLT.
+ You can use -dsort more than once. Please don't go crazy.
+ Using this option slows the program down in the GLSQ loop,
since a new matrix and GLT set must be built up and torn down
for each voxel separately.
-- At this time, the GLSQ loop is not OpenMP-ized.
+++ This voxel-wise regression capability is NOT implemented in
3dDeconvolve, so you'll have to use 3dREMLfit if you want
to use this method, even if you only want ordinary least
squares regression.
+ The motivation for -dsort is to apply ANATICOR to task-based
FMRI analyses. You might be clever and have a better idea!?
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2897154/
https://afni.nimh.nih.gov/pub/dist/doc/program_help/afni_proc.py.html
-dsort_nods = If '-dsort' is used, the output datasets reflect the impact of the
voxel-wise regressor(s). If you want to compare those results
to the case where you did NOT give the '-dsort' option, then
also use '-dsort_nods' (nods is short for 'no dsort').
The linear regressions will be repeated without the -dsort
regressor(s) and the results put into datasets with the string
'_nods' added to the prefix.
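For example (hypothetical dataset names; the '-dsort' input here stands for
a voxelwise nuisance dataset, such as a locally averaged white matter time
series for an ANATICOR-style analysis):
  3dREMLfit -matrix X.xmat.1D -input epi+orig         \
            -dsort WMeLocal+orig -dsort_nods          \
            -Rbuck stats_anaticor -Rerrts errts_anaticor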
-slibase bb = Similar to -addbase in concept, BUT each .1D file 'bb'
must have a number of columns equal to an integer multiple
of the number of slices in the input dataset; then, separate regression
matrices are generated for each slice, with the
[0] column of 'bb' appended to the matrix for
the #0 slice of the dataset, the [1] column of 'bb'
appended to the matrix for the #1 slice of the dataset,
and so on. For example, if the dataset has 3 slices
and file 'bb' has 6 columns, then the order of use is
bb[0] --> slice #0 matrix
bb[1] --> slice #1 matrix
bb[2] --> slice #2 matrix
bb[3] --> slice #0 matrix
bb[4] --> slice #1 matrix
bb[5] --> slice #2 matrix
** If this order is not correct, consider -slibase_sm.
* Intended to help model physiological noise in FMRI,
or other effects you want to regress out that might
change significantly in the inter-slice time intervals.
* Slices are the 3rd dimension in the dataset storage
order -- 3dinfo can tell you what that direction is:
Data Axes Orientation:
first (x) = Right-to-Left
second (y) = Anterior-to-Posterior
third (z) = Inferior-to-Superior [-orient RAI]
In the above example, the slice direction is from
Inferior to Superior, so the columns in the '-slibase'
input file should be ordered in that direction as well.
* '-slibase' will slow the program down, and make it use
a lot more memory (to hold all the matrix stuff).
*** At this time, 3dSynthesize has no way of incorporating the
extra baseline timeseries from -addbase or -slibase or -dsort.
*** Also see option '-dsort' for how to include voxel-dependent
regressors into the analysis.
-slibase_sm bb = Similar to -slibase above, BUT each .1D file 'bb'
must be in slice major order (i.e. all slice0 columns
come first, then all slice1 columns, etc).
For example, if the dataset has 3 slices and file
'bb' has 6 columns, then the order of use is
bb[0] --> slice #0 matrix, regressor 0
bb[1] --> slice #0 matrix, regressor 1
bb[2] --> slice #1 matrix, regressor 0
bb[3] --> slice #1 matrix, regressor 1
bb[4] --> slice #2 matrix, regressor 0
bb[5] --> slice #2 matrix, regressor 1
** If this order is not correct, consider -slibase.
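For example (hypothetical file names), to add slice-wise physiological
regressors stored in slice major order:
  3dREMLfit -matrix X.xmat.1D -input epi+orig         \
            -slibase_sm ricor_regs.1D -Rbuck stats_ricor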
-usetemp = Write intermediate stuff to disk, to economize on RAM.
Using this option might be necessary to run with
'-slibase' and with '-Grid' values above the default,
since the program has to store a large number of
matrices for such a problem: two for every slice and
for every (a,b) pair in the ARMA parameter grid.
* '-usetemp' can actually speed the program up, interestingly,
even if you have enough RAM to hold all the intermediate
matrices needed with '-slibase'. YMMV :)
* '-usetemp' also writes temporary files to store dataset
results, which can help if you are creating multiple large
dataset (e.g., -Rfitts and -Rerrts in the same program run).
* Temporary files are written to the directory given
in environment variable TMPDIR, or in /tmp, or in ./
(preference is in that order).
+ If the program crashes, these files are named
REML_somethingrandom, and you might have to
delete them manually.
+ If the program ends normally, it will delete
these temporary files before it exits.
+ Several gigabytes of disk space might be used
for this temporary storage!
+ When running on a cluster, or some other system
using networked storage, '-usetemp' will work
MUCH better if the temporary storage directory
is a local drive rather than a networked drive.
You will have to figure out how to do this on
your cluster, since configurations vary so much.
* If you are at the NIH, then see this Web page:
https://hpc.nih.gov/docs/userguide.html#local
* If the program crashes with a 'malloc failure' type of
message, then try '-usetemp' (malloc=memory allocator).
*** NOTE THIS: If a Unix program stops suddenly with the
mysterious one word message 'killed', then it
almost certainly ran over some computer system
limitations, and was immediately stopped without
any recourse. Usually the resource it ran out
of is memory. So if this happens to you when
running 3dREMLfit, try using the '-usetemp' option!
* '-usetemp' disables OpenMP multi-CPU usage.
Only use this option if you need to, since OpenMP should
speed the program up significantly on multi-CPU computers.
-nodmbase = By default, baseline columns added to the matrix
via '-addbase' or '-slibase' or '-dsort' will each have
their mean removed (as is done in 3dDeconvolve). If you
do NOT want this operation performed, use '-nodmbase'.
* Using '-nodmbase' would make sense if you used
'-polort -1' to set up the matrix in 3dDeconvolve, and/or
you actually care about the fit coefficients of the extra
baseline columns (in which case, don't use '-nobout').
------------------------------------------------------------------------
Output Options (at least one must be given; 'ppp' = dataset prefix name) ~1~
------------------------------------------------------------------------
-Rvar ppp = dataset for saving REML variance parameters
* See the 'What is ARMA(1,1)' section, far below.
* This dataset has 6 volumes:
[0] = 'a' = ARMA parameter
= decay rate of correlations with lag
[1] = 'b' = ARMA parameter
[2] = 'lam' = (b+a)(1+a*b)/(1+2*a*b+b*b)
= correlation at lag=1
correlation at lag=k is lam * a^(k-1) (k>0)
[3] = 'StDev' = standard deviation of prewhitened
residuals (used in computing statistics
in '-Rbuck' and in GLTs)
[4] = '-LogLik' = negative of the REML log-likelihood
function (see the math notes)
[5] = 'LjungBox'= Ljung-Box statistic of the pre-whitened
residuals, an indication of how much
temporal correlation is left-over.
+ See the 'Other Commentary' section far below
for a little more information on the LB
statistic.
* The main purpose of this dataset is to check when weird
things happen in the calculations. Or just to have fun.
-Rbeta ppp = dataset for beta weights from the REML estimation
[similar to the -cbucket output from 3dDeconvolve]
* This dataset will contain all the beta weights, for
baseline and stimulus regressors alike, unless the
'-nobout' option is given -- in that case, this
dataset will only get the betas for the stimulus
regressors.
-Rbuck ppp = dataset for beta + statistics from the REML estimation;
also contains the results of any GLT analysis requested
in the 3dDeconvolve setup.
[similar to the -bucket output from 3dDeconvolve]
* This dataset does NOT get the betas (or statistics) of
those regressors marked as 'baseline' in the matrix file.
* If the matrix file from 3dDeconvolve does not contain
'Stim attributes' (which will happen if all inputs
to 3dDeconvolve were labeled as '-stim_base'), then
-Rbuck won't work, since it is designed to give the
statistics for the 'stimuli' and there aren't any matrix
columns labeled as being 'stimuli'.
* In such a case, to get statistics on the coefficients,
you'll have to use '-gltsym' and '-Rglt'; for example,
to get t-statistics for all coefficients from #0 to #77:
-tout -Rglt Colstats -gltsym 'SYM: Col[[0..77]]' ColGLT
where 'Col[3]' is the generic label that refers to matrix
column #3, et cetera.
* FDR curves for so many statistics (78 in the example)
might take a long time to generate!
-Rglt ppp = dataset for beta + statistics from the REML estimation,
but ONLY for the GLTs added on the 3dREMLfit command
line itself via '-gltsym'; GLTs from 3dDeconvolve's
command line will NOT be included.
* Intended to give an easy way to get extra contrasts
after an earlier 3dREMLfit run.
* Use with '-ABfile vvv' to read the (a,b) parameters
from the earlier run, where 'vvv' is the '-Rvar'
dataset output from that run.
[If you didn't save the '-Rvar' file, then it will]
[be necessary to redo the REML loop, which is slow]
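For example (hypothetical names; 'Vrel' and 'Arel' stand for stimulus
labels that would come from the matrix file), to add a contrast after an
earlier run whose '-Rvar' output was saved as REMLvar+orig:
  3dREMLfit -matrix X.xmat.1D -input epi+orig         \
            -ABfile REMLvar+orig -tout                \
            -gltsym 'SYM: Vrel -Arel' V-A             \
            -Rglt extra_glts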
-fout = put F-statistics into the bucket dataset
-rout = put R^2 statistics into the bucket dataset
-tout = put t-statistics into the bucket dataset
[if you use -Rbuck and do not give any of -fout, -tout,]
[or -rout, then the program assumes -fout is activated.]
-noFDR = do NOT add FDR curve data to bucket datasets
[FDR curves can take a long time if -tout is used]
-nobout = do NOT add baseline (null hypothesis) regressor betas
to the -Rbeta and/or -Obeta output datasets.
['stimulus' columns are marked in the .xmat.1D matrix ]
[file; all other matrix columns are 'baseline' columns]
-Rfitts ppp = dataset for REML fitted model
[like 3dDeconvolve, a censored time point gets]
[the actual data values from that time index!!]
-Rerrts ppp = dataset for REML residuals = data - fitted model
[like 3dDeconvolve, a censored time]
[point gets its residual set to zero]
-Rwherr ppp = dataset for REML residual, whitened using the
estimated ARMA(1,1) correlation matrix of the noise
[Note that the whitening matrix used is the inverse ]
[of the Choleski factor of the correlation matrix C; ]
[however, the whitening matrix isn't uniquely defined]
[(any matrix W with C=inv(W'W) will work), so other ]
[whitening schemes could be used and these would give]
[different whitened residual time series datasets. ]
-gltsym g h = read a symbolic GLT from file 'g' and label it with
string 'h'
* As in 3dDeconvolve, you can also use the 'SYM:' method
to put the definition of the GLT directly on the
command line.
* The symbolic labels for the stimuli are as provided
in the matrix file, from 3dDeconvolve.
*** Unlike 3dDeconvolve, you supply the label 'h' for
the output coefficients and statistics directly
after the matrix specification 'g'.
* Like 3dDeconvolve, the matrix generated by the
symbolic expression will be printed to the screen
unless environment variable AFNI_GLTSYM_PRINT is NO.
* These GLTs are in addition to those stored in the
matrix file, from 3dDeconvolve.
* If you don't create a bucket dataset using one of
-Rbuck or -Rglt (or -Obuck / -Oglt), using
-gltsym is completely pointless and stupid!
** Besides the stimulus labels read from the matrix
file (put there by 3dDeconvolve), you can refer
to regressor columns in the matrix using the
symbolic name 'Col', which collectively means
all the columns in the matrix. 'Col' is a way
to test '-addbase' and/or '-slibase' regressors
for significance; for example, if you have a
matrix with 10 columns from 3dDeconvolve and
add 2 extra columns to it, then you could use
-gltsym 'SYM: Col[[10..11]]' Addons -tout -fout
to create a GLT to include both of the added
columns (numbers 10 and 11).
-- 'Col' cannot be used to test the '-dsort'
regressor for significance!
The options below let you get the Ordinary Least SQuares outputs
(without adjustment for serial correlation), for comparisons.
These datasets should be essentially identical to the results
you would get by running 3dDeconvolve (with the '-float' option!):
-Ovar ppp = dataset for OLSQ st.dev. parameter (kind of boring)
-Obeta ppp = dataset for beta weights from the OLSQ estimation
-Obuck ppp = dataset for beta + statistics from the OLSQ estimation
-Oglt ppp = dataset for beta + statistics from '-gltsym' options
-Ofitts ppp = dataset for OLSQ fitted model
-Oerrts ppp = dataset for OLSQ residuals (data - fitted model)
[there is no -Owherr option; if you don't]
[see why, then think about it for a while]
Note that you don't have to use any of the '-R' options; you could
use 3dREMLfit just for the '-O' options if you want. In that case,
the program will skip the time consuming ARMA(1,1) estimation for
each voxel, by pretending you used the option '-ABfile =0,0'.
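For example (hypothetical file names), an OLSQ-only run for comparison with
3dDeconvolve's '-float' results:
  3dREMLfit -matrix X.xmat.1D -input epi+orig         \
            -Obuck stats_OLSQ -Ofitts fitts_OLSQ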
-------------------------------------------------------------------
The following options control the ARMA(1,1) parameter estimation ~1~
for each voxel time series; normally, you do not need these options
-------------------------------------------------------------------
-MAXa am = Set max allowed AR a parameter to 'am' (default=0.8).
The range of a values scanned is 0 .. +am (-POScor)
or is -am .. +am (-NEGcor).
-MAXb bm = Set max allowed MA b parameter to 'bm' (default=0.8).
The range of b values scanned is -bm .. +bm.
* The largest value allowed for am and bm is 0.9.
* The smallest value allowed for am and bm is 0.1.
* For a nearly pure AR(1) model, use '-MAXb 0.1'
* For a nearly pure MA(1) model, use '-MAXa 0.1'
-Grid pp = Set the number of grid divisions in the (a,b) grid
to be 2^pp in each direction over the range 0..MAX.
The default (and minimum) value for 'pp' is 3.
Larger values will provide a finer resolution
in a and b, but at the cost of some CPU time.
* To be clear, the default settings use a grid
with 8 divisions in the a direction and 16 in
the b direction (since a is non-negative but
b can be either sign).
* If -NEGcor is used, then '-Grid 3' means 16 divisions
in each direction, so that the grid spacing is 0.1
if MAX=0.8. Similarly, '-Grid 4' means 32 divisions
in each direction, '-Grid 5' means 64 divisions, etc.
* I see no reason why you would ever use a -Grid size
greater than 5 (==> parameter resolution = 0.025).
++ However, if you like burning up CPU time, values up
to '-Grid 7' are allowed :)
* In the future, '-Grid 5' might become the default, since
it is a little more accurate and computers are a lot
faster than in the days when I was hunting brontosauri.
* In my limited experiments, there was little appreciable
difference in activation maps between '-Grid 3' and
'-Grid 5', especially at the group analysis level.
++ To be fair, skipping prewhitening by using OLSQ
(e.g., 3dDeconvolve) at the single subject level
has little effect on the group analysis UNLESS you
are going to use 3dMEMA, which relies on accurate
single subject t-statistics, which in turn requires
accurate temporal autocorrelation modeling.
++ If you are interested in the REML parameters themselves,
or in getting the 'best' prewhitening possible, then
'-Grid 5' makes sense.
* The program is somewhat slower as the -Grid size expands.
And uses more memory, to hold various matrices for
each (a,b) case.
-NEGcor = Allows negative correlations to be used; the default
is that only positive correlations are searched.
When this option is used, the range of a scanned
is -am .. +am; otherwise, it is 0 .. +am.
* Note that when -NEGcor is used, the number of grid
points in the a direction doubles to cover the
range -am .. 0; this will slow the program down.
-POScor = Do not allow negative correlations. Since this is
the default, you don't actually need this option.
[FMRI data doesn't seem to need the modeling ]
[of negative correlations, but you never know.]
-WNplus = Do not allow negative correlations, AND only allow
(a,b) parameter combinations that fit the model
AR(1) + white noise:
* a > 0 and -a < b < 0
* see 'What is ARMA(1,1)' far below
* you should use '-Grid 5' with this option, since
it restricts the number of possible ARMA(1,1) models
-Mfilt mr = After finding the best fit parameters for each voxel
in the mask, do a 3D median filter to smooth these
parameters over a ball with radius 'mr' mm, and then
use THOSE parameters to compute the final output.
* If mr < 0, -mr is the ball radius in voxels,
instead of millimeters.
[No median filtering is done unless -Mfilt is used.]
* This option is not recommended; it is just here for
experimentation.
-CORcut cc = The exact ARMA(1,1) correlation matrix (for a != 0)
has no zero entries. The calculations in this
program set correlations below a cutoff to zero.
The default cutoff is 0.00010, but can be altered with
this option. The usual reason to use this option is
to test the sensitivity of the results to the cutoff.
-ABfile ff = Instead of estimating the ARMA(a,b) parameters from the
data, read them from dataset 'ff', which should have
2 float-valued sub-bricks.
* Note that the (a,b) values read from this file will
be mapped to the nearest ones on the (a,b) grid
before being used to solve the generalized least
squares problem. For this reason, you may want
to use '-Grid 5' to make the (a,b) grid finer, if
you are not using (a,b) values from a -Rvar file.
* Using this option will skip the slowest part of
the program, which is the scan for each voxel
to find its optimal (a,b) parameters.
* One possible application of -ABfile:
+ save (a,b) using -Rvar in 3dREMLfit
+ process them in some way (spatial smoothing?)
+ use these modified values for fitting in 3dREMLfit
[you should use '-Grid 5' for such a case]
* Another possible application of -ABfile:
+ use (a,b) from -Rvar to speed up a run with -Rglt
when you want to run some more contrast tests.
* Special case:
-ABfile =0.7,-0.3
e.g., means to use a=0.7 and b=-0.3 for all voxels.
The program detects this special case by looking for
'=' as the first character of the string 'ff' and
looking for a comma in the middle of the string.
The values of a and b must be in the range -0.9..+0.9.
* The purpose of this special case is to facilitate
comparison with Software PrograMs that use the same
temporal correlation structure for all voxels.
-GOFORIT = 3dREMLfit checks the regression matrix for tiny singular
values (as 3dDeconvolve does). If the matrix is too
close to being rank-deficient, then the program will
not proceed. You can use this option to force the
program to continue past such a failed collinearity
check, but you MUST check your results to see if they
make sense!
** '-GOFORIT' is required if there are all zero columns
in the regression matrix. However, at this time
[15 Mar 2010], the all zero columns CANNOT come from
the '-slibase' inputs.
** Nor from the '-dsort' inputs.
** If there are all zero columns in the matrix, a number
of WARNING messages will be generated as the program
pushes forward in the solution of the linear systems.
---------------------
Miscellaneous Options ~1~
---------------------
-quiet = turn off most progress messages :(
-verb = turn on more progress messages :)
==========================================================================
=========== Various Notes (as if this help weren't long enough) =========
==========================================================================
------------------
What is ARMA(1,1)? ~1~
------------------
* The correlation coefficient r(k) of noise samples k units apart in time,
for k >= 1, is given by r(k) = lam * a^(k-1)
where lam = (b+a)(1+a*b)/(1+2*a*b+b*b)
(N.B.: lam=a when b=0 -- AR(1) noise has r(k)=a^k for k >= 0)
(N.B.: lam=b when a=0 -- MA(1) noise has r(k)=b for k=1, r(k)=0 for k>1)
* lam can be bigger or smaller than a, depending on the sign of b:
b > 0 means lam > a; b < 0 means lam < a.
* What I call (a,b) here is sometimes called (p,q) in the ARMA literature.
* For a noise model which is the sum of AR(1) and white noise, 0 < lam < a
(i.e., a > 0 and -a < b < 0 ). Thus, the model 'AR(1)+white noise'
is a proper subset of ARMA(1,1) -- and also a proper subset of the default
-POScor setting (which also allows 0 < a < lam via b > 0).
+ This restricted model can be specified with the '-WNplus' option.
With '-WNplus', you should use '-Grid 5', since you are restricting
the number of available noise models fairly substantially.
+ If the variance of the white noise is T and the variance of the AR(1) noise
is U, then lam = (a*U)/(U+T*(1-a^2)), and U/T = (lam*(1-a^2))/(a^2-lam).
+ In principle, one could estimate the fraction of the noise that is
white vs. correlated using this U/T formula (e.g., via 3dcalc on the
'-Rvar' output; a sketch of such a command appears at the end of
this section).
+ It is not clear that such an estimate is useful for any purpose,
or indeed that the '-Rvar' outputs of the ARMA(1,1) parameters
are useful for more than code testing reasons. YMMV :)
* The natural range of a and b is -1..+1. However, unless -NEGcor is
given, only non-negative values of a will be used, and only values
of b that give lam > 0 will be allowed. Also, the program doesn't
allow values of a or b to be outside the range -0.9..+0.9.
* The program sets up the correlation matrix using the censoring and run
start information saved in the header of the .xmat.1D matrix file, so
that the actual correlation matrix used will not always be Toeplitz.
For details of how time series with such gaps are analyzed, see the
math notes.
* The 'Rvar' dataset has 6 sub-bricks with variance parameter estimates:
#0 = a = factor by which correlations decay from lag k to lag k+1
#1 = b parameter
#2 = lam (see the formula above) = correlation at lag 1
#3 = standard deviation of ARMA(1,1) noise in that voxel
#4 = -log(REML likelihood function) = optimized function at (a,b)
#5 = Ljung-Box statistic of the prewhitened residuals (see '-Rvar' above)
For details about this, see the math notes.
* The 'Rbeta' dataset has the beta (model fit) parameters estimates
computed from the prewhitened time series data in each voxel,
as in 3dDeconvolve's '-cbucket' output, in the order in which
they occur in the matrix. -addbase and -slibase and -dsort beta
values come last in this file.
[The '-nobout' option will disable output of baseline parameters.]
* The 'Rbuck' dataset has the beta parameters and their statistics
mixed together, as in 3dDeconvolve's '-bucket' output.
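As a sketch of the U/T computation mentioned above (assuming the '-Rvar'
dataset was saved with the hypothetical prefix REMLvar, so that sub-brick
[0] holds 'a' and sub-brick [2] holds 'lam'):
  3dcalc -a REMLvar+orig'[0]' -b REMLvar+orig'[2]'    \
         -expr '(b*(1-a*a))/(a*a-b)' -prefix UoverT
Voxels where a^2 is very close to lam will produce unstable (near
divide-by-zero) values, so interpret such a map cautiously.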
-------------------------------------------------------------------
What is REML = REsidual (or REstricted) Maximum Likelihood, anyway? ~1~
-------------------------------------------------------------------
* Ordinary Least SQuares (which assumes the noise correlation matrix is
the identity) is consistent for estimating regression parameters,
but is NOT consistent for estimating the noise variance if the
noise is significantly correlated in time - 'serial correlation'
or 'temporal correlation'.
* Maximum likelihood estimation (ML) of the regression parameters and
variance/correlation together is asymptotically consistent as the
number of samples goes to infinity, but the variance estimates
might still have significant bias at a 'reasonable' number of
data points.
* REML estimates the variance/correlation parameters in a space
of residuals -- the part of the data left after the model fit
is subtracted. The amusing/cunning part is that the model fit
used to define the residuals is itself the generalized least
squares fit where the variance/correlation matrix is the one found
by the REML fit itself. This feature makes REML estimation nonlinear,
and the REML equations are usually solved iteratively, to maximize
the log-likelihood in the restricted space. In this program, the
REML function is instead simply optimized over a finite grid of
the correlation matrix parameters a and b. The matrices for each
(a,b) pair are pre-calculated in the setup phase, and then are
reused in the voxel loop. The purpose of this grid-based method
is speed -- optimizing iteratively to a highly accurate (a,b)
estimation for each voxel would be very time consuming, and pretty
pointless. If you are concerned about the sensitivity of the
results to the resolution of the (a,b) grid, you can use the
'-Grid 5' option to increase this resolution and see if your
activation maps change significantly. In test cases, the resulting
betas and statistics have not changed appreciably between '-Grid 3'
and '-Grid 5'; however, you might want to test this on your own data
(just for fun, because who doesn't want more fun?).
* REML estimates of the variance/correlation parameters are still
biased, but are generally significantly less biased than ML estimates.
Also, the regression parameters (betas) should be estimated somewhat
more accurately (i.e., with smaller variance than OLSQ). However,
this effect is generally small in FMRI data, and probably won't affect
your group results noticeably (if you don't carry parameter variance
estimates to the inter-subject analysis, as is done in 3dMEMA).
* After the (a,b) parameters are estimated, then the solution to the
linear system is available via Generalized Least SQuares; that is,
via prewhitening using the Choleski factor of the estimated
variance/covariance matrix.
* In the case with b=0 (that is, AR(1) correlations), and if there are
no time gaps (no censoring, no run breaks), then it is possible to
directly estimate the a parameter without using REML. This program
does not implement such a method (e.g., the Yule-Walker equation).
The reasons why should be obvious.
* If you like linear algebra, see my scanned math notes about 3dREMLfit:
https://afni.nimh.nih.gov/pub/dist/doc/misc/3dREMLfit/3dREMLfit_mathnotes.pdf
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/statistics/remlfit.html
* I have been asked if 3dREMLfit prewhitens the design matrix as well as
the data. The short answer to this somewhat uninformed question is YES.
The long answer follows (warning: math ahead!):
* Mathematically, the GLSQ solution is expressed as
f = inv[ X' inv(R) X] X' inv(R) y
where X = model matrix, R = symmetric correlation matrix
of noise (R depends on the a,b parameters),
f = parameter estimates, and y = data vector.
Notation: ' = transpose, inv() = inverse matrix.
A symmetric matrix S such that SS = R is called a square root of R
(there are many such matrices). The matrix inv(S) is a prewhitening
matrix. That is, if the noise vector q is such that E(q q') = R
(here E = expected value), and vector t = inv(S) q, then
E(t t') = E[ inv(S)q q'inv(S) ] = inv(S) R inv(S) = inv(S) S S inv(S) = I.
Note that inv(R) = inv(S) inv(S), and we can rewrite the GLSQ solution as
f = inv[ X' inv(S) inv(S) X ] X' inv(S) inv(S) y
= inv[ (inv(S)X)' (inv(S)X) ] (inv(S)X)' (inv(S)y)
so the GLSQ solution is equivalent to the OLSQ solution, with the model
matrix X replaced by inv(S)X and the data vector y replaced by inv(S)y;
that is, we prewhiten both of them. In 3dREMLfit, this is done implicitly
in the solution method outlined in the 7-step procedure on the fourth page
of my math notes -- a procedure designed for efficient implementation
with banded R. The prewhitened X matrix is never explicitly computed:
it is not needed, since the goal is to compute vector f, not inv(S)X.
* The idea of pre-whitening the data but NOT the matrix is a very bad plan.
(This also was a suggestion by a not-well-informed user.)
If you work through the linear algebra, you'll see that the resulting
estimate for f is not statistically consistent with the underlying model!
In other words, prewhitening only the data but not the matrix is WRONG.
* Someone asking the question above might actually be asking if the residuals
are whitened. The answer is YES and NO. The output of -Rerrts is not
whitened; in the above notation, -Rerrts gives y-Xf = data - model fit.
The output of -Rwherr is whitened; -Rwherr gives inv(S)[y-Xf], which is the
residual (eps) vector for the pre-whitened linear system inv(S)y = inv(S)Xf + eps.
* The estimation method for (a,b) is nonlinear; that is, these parameters
are NOT estimated by doing an initial OLSQ (or any other one-shot initial
calculation), then fitting (a,b) to the resulting residuals. Rather,
a number of different (a,b) values are tried out to find the parameter pair
where the log-likelihood of the Gaussian model is optimized. To be precise,
the function that is minimized in each voxel (over the discrete a,b grid) is
L(a,b) = log(det(R(a,b))) + log(det(X' inv(R(a,b)) X))
+ (n-m)log(y'P(a,b)y) - log(det(X'X))
where R(a,b) = ARMA(1,1) correlation matrix (symmetric n X n)
n = dimension of data vector = number of rows in X
m = number of columns in X = number of regressors
y = data vector for a given voxel
P(a,b) = prewhitening projection matrix (symmetric n X n)
= inv(R) - inv(R)X inv(X' inv(R) X) X' inv(R)
The first 2 terms in L only depend on the (a,b) parameters, and can be
thought of as a penalty that favors some (a,b) values over others,
independent of the data -- for ARMA(1,1), the a=b=0 white noise
model is penalized somewhat relative to the non-white noise cases.
The 3rd term uses the 2-norm of the prewhitened residuals.
The 4th term depends only on X, and is not actually used herein, since
we don't include a model for varying X as well as R.
* The method for estimating (a,b) does not require the time series data to be
perfectly uniform in time. Gaps due to censoring and run breaks are
properly allowed for.
* Again, see the math notes for more fun fun algorithmic details:
https://afni.nimh.nih.gov/pub/dist/doc/misc/3dREMLfit/3dREMLfit_mathnotes.pdf
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/statistics/remlfit.html
----------------
Other Commentary ~1~
----------------
* Again: the ARMA(1,1) parameters 'a' (AR) and 'b' (MA) are estimated
only on a discrete grid, for the sake of CPU time.
* Each voxel gets a separate pair of 'a' and 'b' parameters.
There is no option to estimate global values for 'a' and 'b'
and use those for all voxels. Such an approach might be called
'kindergarten statistics' by the promulgators of Some People's Methods.
* OLSQ = Ordinary Least SQuares; these outputs can be used to compare
the REML/GLSQ estimations with the simpler OLSQ results
(and to test this program vs. 3dDeconvolve).
* GLSQ = Generalized Least SQuares = estimated linear system solution
taking into account the variance/covariance matrix of the noise.
* The '-matrix' file must be from 3dDeconvolve; besides the regression
matrix itself, the header contains the stimulus labels, the GLTs,
the censoring information, etc.
+ Although you can put in a 'raw' matrix using the '-matim' option,
described earlier.
* If you don't actually want the OLSQ results from 3dDeconvolve, you can
make that program stop after the X matrix file is written out by using
the '-x1D_stop' option, and then running 3dREMLfit; something like this:
3dDeconvolve -bucket Fred -nodata 800 2.5 -x1D_stop ...
3dREMLfit -matrix Fred.xmat.1D -input ...
In the above example, no 3D dataset is input to 3dDeconvolve, so as to
avoid the overhead of having to read it in for no reason. Instead,
the '-nodata 800 2.5' option is used to setup the time series of the
desired length (corresponding to the real data's length, here 800 points),
and the appropriate TR (here, 2.5 seconds). This will properly establish
the size and timing of the matrix file.
* The bucket output datasets are structured to mirror the output
from 3dDeconvolve with the default options below:
-nobout -full_first
Note that you CANNOT use options like '-bout', '-nocout', and
'-nofull_first' with 3dREMLfit -- the bucket datasets are ordered
the way they are and you'll just have to live with it.
* If the 3dDeconvolve matrix generation step did NOT have any non-base
stimuli (i.e., everything was '-stim_base'), then there are no 'stimuli'
in the matrix file. In that case, since by default 3dREMLfit doesn't
compute statistics of baseline parameters, to get statistics you will
have to use the '-gltsym' option here, specifying the desired column
indexes with the 'Col[]' notation, and then use '-Rglt' to get these
values saved somewhere (since '-Rbuck' won't work if there are no
'Stim attributes').
* All output datasets are in float format [i.e., no '-short' option].
Internal calculations are done in double precision.
* If the regression matrix (including any added columns from '-addbase'
or '-slibase') is rank-deficient (e.g., has collinear columns),
then the program will print a message something like
** ERROR: X matrix has 1 tiny singular value -- collinearity
The program will NOT continue past this type of error, unless
the '-GOFORIT' option is used. You should examine your results
carefully to make sure they are reasonable (e.g., look at
the fitted model overlay on the input time series).
* The Ljung-Box (LB) statistic computed via the '-Rvar' option is a
measure of how correlated the ARMA(1,1) pre-whitened residuals are
in time. A 'small' value indicates that the pre-whitening was
reasonably successful (e.g., small LB = 'good').
+ The LB volume will be marked as a chi-squared statistic with h-2 degrees
of freedom, where 'h' is the semi-arbitrarily chosen maximum lag used.
A large LB value indicates noticeable temporal correlation in the
pre-whitened residuals (e.g., that the ARMA(1,1) model wasn't adequate).
+ If a voxel has LB statistic = 0, this means that the LB value could not
be computed for some reason (e.g., residuals are all zero).
+ For yet more information, see this article:
On a measure of lack of fit in time series models.
GM Ljung, GEP Box. Biometrika, 1978.
https://www.jstor.org/stable/2335207
https://academic.oup.com/biomet/article/65/2/297/236869
+ The calculation of the LB statistic is adjusted to allow for gaps in
the time series (e.g., censoring, run gaps).
+ Note that the LB statistic is computed if and only if you give the
'-Rvar' option. You don't have to give the '-Rwherr' option, which is
used to save the pre-whitened residuals to a dataset.
+ If you want to test the LB statistic calculation under the null
hypothesis (i.e., that the ARMA(1,1) model is correct), then
you can use program 3dSimARMA11 to create a time series dataset,
then run that through 3dREMLfit, then peruse the histogram
of the resulting LB statistic. Have fun!
* Depending on the matrix and the options, you might expect CPU time
to be about 2..4 times that of the corresponding 3dDeconvolve run.
+ A careful choice of algorithms for solving the multiple linear
systems required (e.g., QR method, sparse matrix operations,
bordering, etc.) and some other code optimizations make
running 3dREMLfit tolerable.
+ Especially on modern fast CPUs. Kids these days have NO idea
about how we used to suffer waiting for computer runs, and
how we passed the time by walking uphill through the snow.
---------------------------------------------------------------
How 3dREMLfit handles all zero columns in the regression matrix
---------------------------------------------------------------
* One salient (to the user) difference from 3dDeconvolve is how
3dREMLfit deals with the beta weight from an all zero column when
computing a statistic (e.g., a GLT). The beta weight will simply
be ignored, and its entry in the GLT matrix will be set to zero.
Any all zero rows in the GLT matrix are then removed. For example,
the 'Full_Fstat' for a model with 3 beta weights is computed from
the GLT matrix [ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]. If the last beta weight corresponds to
an all zero column, then the matrix becomes [ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 0 ], and then
the last row is omitted. This excision reduces the number of
numerator degrees of freedom in this test from 3 to 2. The net
effect is that the F-statistic will be larger than in 3dDeconvolve,
which does not modify the GLT matrix (or its equivalent).
* A similar adjustment is made to denominator degrees of freedom, which
is usually n-m, where n=# of data points and m=# of regressors.
3dDeconvolve counts all zero regressors in with m, but 3dREMLfit
does not. The net effect is again to (slightly) increase F-statistic
values over the equivalent 3dDeconvolve computation.
-----------------------------------------------------------
To Dream the Impossible Dream, to Write the Uncodeable Code
-----------------------------------------------------------
* Add options for -iresp/-sresp for -stim_times.
* Prevent Daniel Glen from referring to this program as 3dARMAgeddon.
* Establish incontrovertibly the nature of quantum mechanical observation.
* Create an iPad version of the AFNI software suite.
* Get people to stop asking me 'quick questions'!
----------------------------------------------------------
* For more information, please see the contents of
https://afni.nimh.nih.gov/pub/dist/doc/misc/3dREMLfit/3dREMLfit_mathnotes.pdf
which includes comparisons of 3dDeconvolve and 3dREMLfit
activations (individual subject and group maps), and an
outline of the mathematics implemented in this program.
----------------------------------------------------------
============================
== RWCox - July-Sept 2008 ==
============================
=========================================================================
* This binary version of 3dREMLfit is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
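++ For example, a minimal sketch (the thread count of 8 and the matrix/dataset
   names are arbitrary placeholders):
     setenv OMP_NUM_THREADS 8        (tcsh)
     export OMP_NUM_THREADS=8        (bash)
     3dREMLfit -matrix Fred.xmat.1D -input epi+orig ...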
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
* The REML matrix setup and REML voxel ARMA(1,1) estimation loops are
parallelized, across (a,b) parameter sets and across voxels, respectively.
* The GLSQ and OLSQ loops are not parallelized. They are usually much
faster than the REML voxel loop, and so I made no effort to speed
these up (now and forever, two and inseparable).
* '-usetemp' disables OpenMP multi-CPU usage, since the file I/O for
saving and restoring various matrices and results is not easily
parallelized. To get OpenMP speedup for large problems (just where
you want it), you'll need a lot of RAM.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3drename
++ 3drename: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
Usage 1: 3drename old_prefix new_prefix
Will rename all datasets using the old_prefix to use the new_prefix;
3drename fred ethel
will change fred+orig.HEAD to ethel+orig.HEAD
fred+orig.BRIK to ethel+orig.BRIK
fred+tlrc.HEAD to ethel+tlrc.HEAD
fred+tlrc.BRIK.gz to ethel+tlrc.BRIK.gz
Usage 2: 3drename old_prefix+view new_prefix
Will rename only the dataset with the given view (orig, acpc, tlrc).
You cannot have paths in the old or the new prefix
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dresample
3dresample - reorient and/or resample a dataset
This program can be used to change the orientation of a
dataset (via the -orient option), or the dx,dy,dz
grid spacing (via the -dxyz option), or change them
both to match that of a master dataset (via the -master
option).
Note: if both -master and -dxyz are used, the dxyz values
will override those from the master dataset.
** It is important to note that once a dataset of a certain
grid is created (i.e. orientation, dxyz, field of view),
if other datasets are going to be resampled to match that
first one, then '-master' should be used instead of
'-dxyz'. That will guarantee that all grids match.
Otherwise, even when using both -orient and -dxyz, one may
not be sure that the fields of view will be identical, for example.
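As a minimal sketch of that recommendation (the dataset names here are
arbitrary placeholders), several datasets might be resampled onto the
grid of the first one like so:
3dresample -master first.grid+orig -prefix epi.rs -input epi+orig
3dresample -master first.grid+orig -rmode NN -prefix roi.rs -input roi+orig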
** Warning: this program is not meant to transform datasets
between view types (such as '+orig' and '+tlrc').
For that purpose, please see '3dfractionize -help'
or 'adwarp -help'.
------------------------------------------------------------
usage: 3dresample [options] -prefix OUT_DSET -input IN_DSET
examples:
3dresample -orient asl -rmode NN -prefix asl.dset -input in+orig
3dresample -dxyz 1.0 1.0 0.9 -prefix 119.dset -input in+tlrc
3dresample -master master+orig -prefix new.dset -input old+orig
note:
Information about a dataset's voxel size and orientation
can be found in the output of program 3dinfo
------------------------------------------------------------
options:
-help : show this help information
-hist : output the history of program changes
-debug LEVEL : print debug info along the way
e.g. -debug 1
default level is 0, max is 2
-version : show version information
-bound_type TYPE : specify which boundary is preserved
e.g. -bound_type SLAB
default is FOV (field of view)
The default and original use preserves the field
of view when resampling, allowing the extents (SLABs)
to grow or shrink by half of the difference in the
dimension size (big voxels to small will cause the
extents to expand, for example, while small to big
will cause them to shrink).
Using -bound_type SLAB will have the opposite effect.
The extents should be unchanged, while the FOV will
grow or shrink in the opposite way as above.
Note that when using SLAB, edge voxels should be
mostly unaffected by the interpolation.
-dxyz DX DY DZ : resample to new dx, dy and dz
e.g. -dxyz 1.0 1.0 0.9
default is to leave unchanged
Each of DX,DY,DZ must be a positive real number,
and will be used for a voxel delta in the new
dataset (according to any new orientation).
-orient OR_CODE : reorient to new axis order.
e.g. -orient asl
default is to leave unchanged
The orientation code is a 3 character string,
where the characters come from the respective
sets {A,P}, {I,S}, {L,R}.
For example OR_CODE = LPI is the standard
'neuroscience' orientation, where the x-axis is
Left-to-Right, the y-axis is Posterior-to-Anterior,
and the z-axis is Inferior-to-Superior.
-rmode RESAM : use this resampling method
e.g. -rmode Linear
default is NN (nearest neighbor)
The resampling method string RESAM should come
from the set {'NN', 'Li', 'Cu', 'Bk'}. These
are for 'Nearest Neighbor', 'Linear', 'Cubic'
and 'Blocky' interpolation, respectively.
For details, go to the 'Define Datamode' panel
of the afni GUI, click BHelp and then the
'ULay resam mode' menu.
-master MAST_DSET: align dataset grid to that of MAST_DSET
e.g. -master master.dset+orig
Get dxyz and orient from a master dataset. The
resulting grid will match that of the master. This
option can be used with -dxyz, but not with -orient.
-prefix OUT_DSET : required prefix for output dataset
e.g. -prefix reori.asl.pickle
-input IN_DSET : required input dataset to reorient
e.g. -input old.dset+orig
-inset IN_DSET : alternative to -input
------------------------------------------------------------
Author: R. Reynolds - Version 1.10 <June 26, 2014>
AFNI program: 3dRetinoPhase
Usage: 3dRetinoPhase [-prefix ppp] dataset
where dataset is a time series from a retinotopy stimulus
-exp EXP: These four options specify the type of retinotopy
-con CON: stimulus. EXP and CON are for expanding and
-clw CLW : contracting rings, respectively. CLW and CCW are
-ccw CCW: for clockwise and counterclockwise moving
polar angle mapping stimuli. You can specify one,
or all stimuli in one command. When all are specified
polar angle stimuli, and eccentricity stimuli of
opposite directions are combined.
-prefix PREF: Prefix of output datasets.
PREF is suffixed with the following:
.ecc+ for positive (expanding) eccentricity (EXP)
.ecc- for negative (contracting) eccentricity (CON)
.pol+ for clockwise polar angle mapping (CLW)
.pol- for counterclockwise polar angle mapping (CCW)
At a minimum each input gets a phase dataset output. It contains
response phase (or delay) in degrees.
If both directions are given for polar and/or eccentricity
then a visual field angle data set is created.
The visual field angle is obtained by averaging phases of opposite
direction stimuli. The hemodynamic offset is half the phase difference.
Each output also contains a thresholding sub-brick. Its type
depends on the phase estimation method (-phase_estimate).
Note on the thresholding sub-bricks
-----------------------------------
Both FFT and DELAY values of -phase_estimate produce thresholding
sub-bricks with the phase estimates. Those thresholds have associated
significance levels, but they should be taken with a grain of
salt. There is no correction for autocorrelation, so the DOFs
are generous.
The program also attaches a thresholding sub-brick to the
visual field angle datasets which are estimated by averaging the phase
estimates in order to remove the hemodynamic offset. This composite
thresholding sub-brick contains at each voxel/node, the maximum
threshold from the datasets of stimuli of opposite direction.
This thresholding sub-brick is for convenience, allowing you to
threshold with a mask that is the union of the individual
thresholded maps. Significance levels are purposefully not
attached. I don't know how to compute them properly.
-spectra: Output amplitude and phase spectra datasets.
-Tstim T: Period of stimulus in seconds. This parameter does
not depend on the number of wedges or rings (Nr/Nw).
It is the duration of a full cycle of the stimulus.
Use -Tpol TPOL, and -Tecc TECC, to specify periods
for each stimulus type separately. -Tstim sets both
periods to T.
-nrings Nr: Nr is the number of rings in the stimulus.
The default is 1.
-nwedges Nw: Nw is the number of wedges in the stimulus.
The default is 1.
-ort_adjust: Number of DOF lost in detrending outside of this
program.
-pre_stim PRE: Blank period, in seconds, before stimulus began
-sum_adjust y/n: Adjust sum of angles for wrapping based on the
angle difference. Default is 'y'
-phase_estimate METH: Select method of phase estimation
METH == FFT uses the phase of the fundamental frequency.
METH == DELAY uses the 3ddelay approach for estimating
the phase. This requires the use of option
-ref_ts . See references [3] and [4] below.
The DELAY option appears to be as good as the FFT for high SNR
and high duty cycle. See results produced by @Proc.PK.All_D
in the demo archive AfniRetinoDemo.tgz.
However, the DELAY option seems much better for low duty cycle stimuli.
It is not set as the default for backward compatibility. Positive and
negative feedback about this option are welcome.
Thanks to Ikuko Mukai and Masaki Fukunaga for making the case
for DELAY's addition; they were right.
-ref_ts REF_TS: 0 lag reference time series of response. This is
needed for the DELAY phase estimation method.
With the DELAY method, the phase results are comparable to
what you'd get with the following 3ddelay command:
For illustration, say you have stimuli of 32 second periods
with the polar stimuli having two wedges. After creating
the reference time series with waver (32 sec. block period
for eccentricity, 32/2=16 sec. block period for polar), run
4 3ddelay commands as such:
for an expanding ring of 32 second period:
3ddelay -input exp.niml.dset \
-ideal_file ECC.1D \
-fs 0.5 -T 32 \
-uD -nodsamp \
-phzreverse -phzscale 1.0 \
-prefix ecc+.del.niml.dset
Repeat for contracting ring, remove -phzreverse
for clockwise two wedge of 32 second period:
3ddelay -input clw.niml.dset \
-ideal_file POL.1D \
-fs 0.5 -T 16 \
-uD -nodsamp \
-phzreverse -phzscale 0.5 \
-prefix pol+.del.niml.dset
Repeat for counterclockwise, removing -phzreverse
Instead of the 3ddelay mess, all you do is run 3dRetinoPhase with the
following extra options: -phase_estimate DELAY -ref_ts ECC.1D
or -phase_estimate DELAY -ref_ts POL.1D
If you are not familiar with the use of program 'waver' for creating
reference time series, take a look at demo script @Proc.PK.All_D in
AfniRetinoDemo.tgz.
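For instance, an eccentricity-only run along the lines of the example
above might look something like the following (a sketch only; the file
names, output prefix, and 32 s period are placeholders, and the ref_ts
file should match the stimulus type given):
3dRetinoPhase -phase_estimate DELAY -ref_ts ECC.1D -Tecc 32 \
-exp exp.niml.dset -con con.niml.dset -prefix ECC_PH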
-multi_ref_ts MULTI_REF_TS: Multiple 0 lag reference time series.
This allows you to test multiple regressors.
The program will run a separate analysis for
each regressor (column), and combine the results
in the output dataset this way:
([.] denotes output sub-brick)
[0]: Phase from regressor that yields the highest correlation coeff.
[1]: Maximum correlation coefficient.
[2]: Number of regressor that yields the highest correlation coeff.
Counting begins at 1 (not 0)
[3]: Phase from regressor 1
[4]: Correlation coefficient from regressor 1
[5]: Phase from regressor 2
[6]: Correlation coefficient from regressor 2
... etc.
In general, for regressor k (k starts at 1)
[2*k+1] contains the Phase and [2*k+2] the Correlation coefficient
N.B: If MULTI_REF_TS has only one timeseries, -multi_ref_ts produces
an output identical to that of -ref_ts.
See usage in @RetinoProc and demo data in
https://afni.nimh.nih.gov/pub/dist/tgz/AfniRetinoDemo.tgz
References for this program:
[1] RW Cox. AFNI: Software for analysis and visualization of functional
magnetic resonance neuroimages.
Computers and Biomedical Research, 29: 162-173, 1996.
[2] Saad Z.S., et al. SUMA: An Interface For Surface-Based Intra- And
Inter-Subject Analysis With AFNI.
Proc. 2004 IEEE International Symposium on Biomedical Imaging, 1510-1513
If you use the DELAY method:
[3] Saad, Z.S., et al. Analysis and use of FMRI response delays.
Hum Brain Mapp, 2001. 13(2): p. 74-93.
[4] Saad, Z.S., E.A. DeYoe, and K.M. Ropella, Estimation of FMRI
Response Delays. Neuroimage, 2003. 18(2): p. 494-504.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dretroicor
Usage: 3dretroicor [options] dataset
Performs Retrospective Image Correction for physiological
motion effects, using a slightly modified version of the
RETROICOR algorithm described in:
Glover, G. H., Li, T., & Ress, D. (2000). Image-based method
for retrospective correction of physiological motion effects in
fMRI: RETROICOR. Magnetic Resonance in Medicine, 44, 162-167.
Options (defaults in []'s):
-ignore = The number of initial timepoints to ignore in the
input (These points will be passed through
uncorrected) [0]
-prefix = Prefix for new, corrected dataset [retroicor]
-card = 1D cardiac data file for cardiac correction
-cardphase = Filename for 1D cardiac phase output
-threshold = Threshold for detection of R-wave peaks in input
(Make sure it's above the background noise level;
Try 3/4 or 4/5 times range plus minimum) [1]
-resp = 1D respiratory waveform data for correction
-respphase = Filename for 1D resp phase output
-order = The order of the correction (2 is typical;
higher-order terms yield little improvement
according to Glover et al.) [2]
-help = Display this message and stop (must be first arg)
Dataset: 3D+time dataset to process
** The input dataset and at least one of -card and -resp are
required.
NOTES
-----
The durations of the physiological inputs are assumed to equal
the duration of the dataset. Any constant sampling rate may be
used, but 40 Hz seems to be acceptable. This program's cardiac
peak detection algorithm is rather simplistic, so you might try
using the scanner's cardiac gating output (transform it to a
spike wave if necessary).
This program uses slice timing information embedded in the
dataset to estimate the proper cardiac/respiratory phase for
each slice. It makes sense to run this program before any
program that may destroy the slice timings (e.g. 3dvolreg for
motion correction).
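For example, a minimal illustrative command (all file names here are
placeholders) applying both cardiac and respiratory correction:
3dretroicor -prefix epi.ricor -card card.1D -resp resp.1D epi+orig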
Author -- Fred Tam, August 2002
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
AFNI program: 3dROIMaker
ROIMaker, written by PA Taylor (Nov, 2012), part of FATCAT (Taylor & Saad,
2013) in AFNI.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
THE GENERAL PURPOSE of this code is to create a labelled set of ROIs from
input data. It was predominantly written with a view of aiding the process
of combining functional and tractographic/structural data. Thus, one might
input a brain map (or several, as subbricks) of functional parameters
(e.g., correlation coefficients or ICA maps of Z-scores), set a value
threshold and/or a cluster-volume threshold, and this program will find
distinct ROIs in the data and return a map of them, each labelled with
an integer. One can also provide a reference map so that, for example, in
group studies, each subject would have the same number label for a given
region (i.e., the L motor cortex is always labelled with a `2'). In order
to be prepared for tractographic application, one can also enlarge the
gray matter ROIs so that they intersect with neighboring white matter.
One can either specify a number of voxels with which to pad each ROI,
and/or input a white matter skeleton (such as could be defined from a
segmented T1 image or an FA map) and use this as an additional guide for
inflating the GM ROIs. The output of this program can be used directly
for guiding tractography, such as with 3dTrackID.
If an input dataset ('-inset INSET') already contains integer delineation,
such as using a parcellation method, then you can preserve these integers
*even if the ROIs are contiguous* by using the same set as the reference
set (-> '-refset INSET', as well). Otherwise, contiguous blobs defined
will likely be given a single integer value in the program.
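As a rough sketch of that usage (the parcellation dataset name is a
placeholder, and the 0.5 threshold is simply meant to keep all positive
integer labels):
3dROIMaker -inset PARC+orig -thresh 0.5 -refset PARC+orig -prefix PARC_ROI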
Labeltable functionality is now available. If an input '-refset REFSET'
has a labeltable attached, it will also be attached to the output GM and
inflated GMI datasets by default (if you don't want to do this, you can
use the '-dump_no_labtab' to turn off this functionality). If either no
REFSET is input or it doesn't have a labeltable, one will be made from
zeropadding the GM and GMI map integer values-- this may not add a lot of
information, but it might make for more useful output.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
OUTPUTS:
+ `GM' map of ROIs :based on value- and volume-thresholding, would
correspond most closely to gray matter regions of
activation. The values of each voxel are an integer,
distinct per ROI.
+ `GMI' map of ROIs :map of inflated GM ROIs, based on GM map, with the
ROIs inflated either by a user-designed number of
voxels, or also possibly including information of
the WM skeleton (so that inflation is halted after
encountering WM). The values of each voxel are the
same integers as in the GM map.
+ RUNNING, need to provide:
-inset INSET :3D volume(s) of values, esp. of functionally-derived
quantities like correlation values or ICA Z-scores.
-thresh MINTHR :threshold for values in INSET, used to create ROI
islands from the 3D volume's sea of values.
-prefix PREFIX :prefix of output name, with output files being:
PREFIX_GM* and PREFIX_GMI* (see `Outputs', above).
and can provide:
-refset REFSET :3D (or multi-subbrick) volume containing integer
values with which to label specific GM ROIs after
thresholding. This can be useful to assist in having
similar ROIs across a group labelled with the same
integer in the output GM and GMI maps.
If an INSET ROI has no corresponding REFSET label,
then the former is marked with an integer greater
than the max refset label. If an INSET ROI overlaps
with multiple REFSET ROIs, then the former is split
amongst the latter-- overlap regions get labelled
first, and then REFSET labels grow to cover the INSET
ROI in question. NB: it is possible to utilize
negative-valued ROIs (voxels =-1) to represent NOT-
regions for tracking, for example.
-volthr MINVOL :integer number representing minimum size a cluster of
voxels must have in order to remain a GM ROI after
the values have been thresholded. Number might be
estimated with 3dAlphaSim, or otherwise, to reduce
number of `noisy' clusters.
-only_some_top N :after '-volthr' but before any ref-matching or
inflating, one can restrict each found region
to keep only N voxels with the highest inset values.
(If an ROI has <N voxels, then all would be kept.)
This option can result in unconnected pieces.
-only_conn_top N :similar-ish to preceding option, but instead of just
selecting only N max voxels, do the following
algorithm: start the ROI with the peak voxel; search
the ROI's neighbors for the highest value; add that
voxel to the ROI; continue until either the ROI has
reached N voxels or whole region has been added.
The returned ROI is contiguous and 'locally' maximal
but not necessarily globally so within the original
volume.
-inflate N_INFL :number of voxels with which to pad each found ROI in
order to turn GM ROIs into inflated (GMI) ROIs.
ROIs won't overlap with each other, and a WM skeleton
can also be input to keep ROIs from expanding through
a large amount of WM ~artificially (see below).
-trim_off_wm :switch to trim the INSET to exclude voxels in WM,
by excluding those which overlap an input WM
skeleton, SKEL (see `-wm_skel', below; to trim off
CSF, see separate `-csf_skel'). NB: trimming is done
before volume thresholding the ROIs, so fewer ROIs
might pass, or some input regions might be split
apart creating a greater number of regions.
-wm_skel SKEL :3D volume containing info of WM, as might be defined
from an FA map or anatomical segmentation. Can be used
to guide ROI inflation with `-skel_stop'.
-skel_thr THR :if the skeleton is not a mask, one can put in a
threshold value for it, such as having THR=0.2 if
SKEL were a FA map.
-skel_stop :switch to stop inflation at locations which are
already on WM skeleton (default: off; and need
`-wm_skel' to be able to use).
-skel_stop_strict :similar to '-skel_stop', but this also does not
allow any inflation *into* the skel-region. The
'-skel_stop' lets the inflation go one layer
*into* the skel-region, so this is stricter. This
option might be my preference these days.
-csf_skel CSF_SK :similar to SKEL, a 3D volume containing info of CSF.
NB: however, with CSF_SK, info must just be a binary
mask already, and it will only be applied in trimming
procedure (no effect on inflation); if input, INSET
is automatically trimmed of CSF, independent of
using `-trim_off_wm'. Again, trimming done before
volume thresholding, so may decrease/separate regions
(though, that may be useful/more physiological).
-mask MASK :can include a mask within which to apply threshold.
Otherwise, data should be masked already. Guess this
would be useful if the MINTHR were a negative value.
It's also useful to ensure that the output *_GMI*
ROI masks stay within the brain-- this probably won't
often matter too much.
For an N-brick inset, one can input an N- or 1-brick
mask.
-neigh_face_only : **DEPRECATED SWITCH** -> it's now default behavior
to have facewise-only neighbors, in order to be
consistent with the default usage of the clusterize
function in the AFNI window.
-neigh_face_edge :can loosen the definition of neighbors, so that
voxels can share a face or an edge in order to be
grouped into same ROI (AFNI default is that neighbors
share at least one edge).
-neigh_upto_vert :can loosen the definition of neighbors, so that
voxels can be grouped into the same ROI if they share
at least one vertex (see above for default).
-nifti :switch to output *.nii.gz GM and GMI files
(default format is BRIK/HEAD).
-preinfl_inset PSET :as a possible use, one might want to start with a WM
ROI, inflate it to find the nearest GM, then expand
that GM, and subtract away the WM+CSF parts. Requires
use of a '-wm_skel' and '-skel_stop', and replaces
using '-inset'.
The size of initial expansion through WM is entered
using the option below; then WM+CSF is subtracted.
The *_GM+orig* set is returned. In the *_GMI+orig*
set, the number of voxels expanded in GM is set using
the '-inflate' value (WM+CSF is subtracted again
before output).
-preinfl_inflate PN :number of voxels for initial inflation of PSET.
-dump_no_labtab :switch for turning off labeltable attachment to the
output GM and GMI files (from either a '-refset
REFSET' or from automatic generation from integer
labels).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3dROIMaker \
-inset CORR_VALUES+orig. \
-thresh 0.6 \
-prefix ROI_MAP \
-volthr 100 \
-inflate 2 \
-wm_skel WM_T1+orig. \
-skel_stop_strict
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
____________________________________________________________________________
AFNI program: 3dROIstats
Usage: 3dROIstats -mask[n] mset [options] datasets
Display statistics over masked regions. The default statistic
is the mean.
There will be one line of output for every sub-brick of every
input dataset. Across each line will be every statistic for
every mask value. For instance, if there are 3 mask values (1,2,3),
then the columns Mean_1, Mean_2 and Mean_3 will refer to the
means across each mask value, respectively. If 4 statistics are
requested, then there will be 12 stats displayed on each line
(4 for each mask region), besides the file and sub-brick number.
Examples:
3dROIstats -mask mask+orig. 'func_slim+orig[1,3,5]'
3dROIstats -minmax -sigma -mask mask+orig. 'func_slim+orig[1,3,5]'
Options:
-mask[n] mset Means to use the dataset 'mset' as a mask:
If n is present, it specifies which sub-brick
in mset to use a la 3dcalc. Note: do not include
the brackets if specifying a sub-brick, they are
there to indicate that they are optional. If not
present, 0 is assumed
Voxels with the same nonzero values in 'mset'
will be statisticized from 'dataset'. This will
be repeated for all the different values in mset.
I.e. all of the 1s in mset are one ROI, as are all
of the 2s, etc.
Note that the mask dataset and the input dataset
must have the same number of voxels and that mset
must be BYTE or SHORT (i.e., float masks won't work
without the -mask_f2short option).
-mask_f2short Tells the program to convert a float mask to short
integers, by simple rounding. This option is needed
when the mask dataset is a 1D file, for instance
(since 1D files are read as floats).
Be careful with this, it may not be appropriate to do!
-numROI n Forces the assumption that the mask dataset's ROIs are
denoted by 1 to n inclusive. Normally, the program
figures out the ROIs on its own. This option is
useful if a) you are certain that the mask dataset
has no values outside the range [0 n], b) there may
be some ROIs missing between [1 n] in the mask data-
set and c) you want those columns in the output any-
way so the output lines up with the output from other
invocations of 3dROIstats. Confused? Then don't use
this option!
-zerofill ZF For ROI labels not found, use 'ZF' instead of a blank
in the output file. This option is useless without -numROI.
The option -zerofill defaults to '0'.
-roisel SEL.1D Only considers ROIs denoted by values found in SEL.1D
Note that the order of the ROIs as specified in SEL.1D
is not preserved. So an SEL.1D of '2 8 20' produces the
same output as '8 20 2'
-debug Print out debugging information
-quiet Do not print out labels for columns or rows
-nomeanout Do not print out the mean column. Default is
to always start with the mean value.
This option cannot be used with -summary
-longnames Prints the entire name of the sub-bricks
-nobriklab Do not print the sub-brick label next to its index
-1Dformat Output results in a 1D format that includes
commented labels
-1DRformat Output results in a 1D format that includes
uncommented labels. This format does not work well
with typical 1D programs, but it is useful for R
functions.
-float_format FORM output floats using an alternate format:
float : the default, (%f)
pretty : prettier format, (%g)
sci : scientific notation (%e)
OTHER : C-style format string, as with ccalc
: e.g. '%7.3f'
-float_format_sep SEP specify alternate float separator string:
The default is '\t'. Consider ', ' for CSV.
The following options specify what stats are computed. By default
the mean is always computed.
-nzmean Compute the mean using only non_zero voxels. Implies
the opposite for the normal mean computed
-nzsum Compute the sum using only non_zero voxels.
-nzvoxels Compute the number of non_zero voxels
-nzvolume Compute the volume of non-zero voxels
-minmax Compute the min/max of all voxels
-nzminmax Compute the min/max of non_zero voxels
-sigma Compute the standard deviation of all voxels
-nzsigma Compute the standard deviation of all non_zero voxels
-median Compute the median of all voxels.
-nzmedian Compute the median of non_zero voxels.
-summary Only output a summary line with the grand mean
across all briks in the input dataset.
This option cannot be used with -nomeanout.
-mode Compute the mode of all voxels. (integral valued sets only)
-nzmode Compute the mode of non_zero voxels.
-pcxyz Compute the principal direction of the voxels in the ROI
including the three eigen values. You'll get 12 values out
per ROI, per sub-brick, with this option.
pc0x pc0y pc0z pc1x pc1y pc1z pc2x pc2y pc2z eig0 eig1 eig2
-nzpcxyz Same as -pcxyz, but exclude zero valued voxels.
-pcxyz+ Same as -pcxyz, but also with FA, MD, Cl, Cp, and Cs computed
from the three eigen values.
You will get 17 values out per ROI, per sub-brick, beginning
with all the values from -pcxyz and -nzpcxyz then followed by
FA MD Cl Cp Cs
-nzpcxyz+ Same as -nzpcxyz, but also with FA, MD, Cl, Cp, and Cs.
-key Output the integer key for the ROI in question
The output is printed to stdout (the terminal), and can be
saved to a file using the usual redirection operation '>'.
N.B.: The input datasets and the mask dataset can use sub-brick
selectors, as detailed in the output of 3dcalc -help.
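For example, a sketch that adds a few of the statistics above and saves
the table to a text file (file names are placeholders):
3dROIstats -nzmean -nzvoxels -sigma -mask mask+orig. 'func_slim+orig[1,3,5]' > roi_stats.txt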
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3drotate
Usage: 3drotate [options] dataset
Rotates and/or translates all bricks from an AFNI dataset.
'dataset' may contain a sub-brick selector list.
GENERIC OPTIONS:
-prefix fname = Sets the output dataset prefix name to be 'fname'
-verbose = Prints out progress reports (to stderr)
OPTIONS TO SPECIFY THE ROTATION/TRANSLATION:
-------------------------------------------
*** METHOD 1 = direct specification:
At most one of these shift options can be used:
-ashift dx dy dz = Shifts the dataset 'dx' mm in the x-direction, etc.,
AFTER rotation.
-bshift dx dy dz = Shifts the dataset 'dx' mm in the x-direction, etc.,
BEFORE rotation.
The shift distances by default are along the (x,y,z) axes of the dataset
storage directions (see the output of '3dinfo dataset'). To specify them
anatomically, you can suffix a distance with one of the symbols
'R', 'L', 'A', 'P', 'I', and 'S', meaning 'Right', 'Left', 'Anterior',
'Posterior', 'Inferior', and 'Superior', respectively.
-rotate th1 th2 th3
Specifies the 3D rotation to be composed of 3 planar rotations:
1) 'th1' degrees about the 1st axis, followed by
2) 'th2' degrees about the (rotated) 2nd axis, followed by
3) 'th3' degrees about the (doubly rotated) 3rd axis.
Which axes are used for these rotations is specified by placing
one of the symbols 'R', 'L', 'A', 'P', 'I', and 'S' at the end
of each angle (e.g., '10.7A'). These symbols denote rotation
about the 'Right-to-Left', 'Left-to-Right', 'Anterior-to-Posterior',
'Posterior-to-Anterior', 'Inferior-to-Superior', and
'Superior-to-Inferior' axes, respectively. A positive rotation is
defined by the right-hand rule.
*** METHOD 2 = copy from output of 3dvolreg:
-rotparent rset
Specifies that the rotation and translation should be taken from the
first 3dvolreg transformation found in the header of dataset 'rset'.
-gridparent gset
Specifies that the output dataset of 3drotate should be shifted to
match the grid of dataset 'gset'. Can only be used with -rotparent.
This dataset should be one that is properly aligned with 'rset' when
overlaid in AFNI.
* If -rotparent is used, then don't use -matvec, -rotate, or -[ab]shift.
* If 'gset' has a different number of slices than the input dataset,
then the output dataset will be zero-padded in the slice direction
to match 'gset'.
* These options are intended to be used to align datasets between sessions:
S1 = SPGR from session 1 E1 = EPI from session 1
S2 = SPGR from session 2 E2 = EPI from session 2
3dvolreg -twopass -twodup -base S1+orig -prefix S2reg S2+orig
3drotate -rotparent S2reg+orig -gridparent E1+orig -prefix E2reg E2+orig
The result will have E2reg rotated from E2 in the same way that S2reg
was from S2, and also shifted/padded (as needed) to overlap with E1.
*** METHOD 3 = give the transformation matrix/vector directly:
-matvec_dicom mfile
-matvec_order mfile
Specifies that the rotation and translation should be read from file
'mfile', which should be in the format
u11 u12 u13 v1
u21 u22 u23 v2
u31 u32 u33 v3
where each 'uij' and 'vi' is a number. The 3x3 matrix [uij] is the
orthogonal matrix of the rotation, and the 3-vector [vi] is the -ashift
vector of the translation.
*** METHOD 4 = copy the transformation from 3dTagalign:
-matvec_dset mset
Specifies that the rotation and translation should be read from
the .HEAD file of dataset 'mset', which was created by program
3dTagalign.
* If -matvec_dicom is used, the matrix and vector are given in Dicom
coordinate order (+x=L, +y=P, +z=S). This is the option to use
if mfile is generated using 3dTagalign -matvec mfile.
* If -matvec_order is used, the matrix and vector are given in the
coordinate order of the dataset axes, whatever they may be.
* You can't mix -matvec_* options with -rotate and -*shift.
*** METHOD 5 = input rotation+shift parameters from an ASCII file:
-dfile dname *OR* -1Dfile dname
With these methods, the movement parameters for each sub-brick
of the input dataset are read from the file 'dname'. This file
should consist of columns of numbers in ASCII format. Six (6)
numbers are read from each line of the input file. If the
'-dfile' option is used, each line of the input should be at
least 7 numbers, and be of the form
ignored roll pitch yaw dS dL dP
If the '-1Dfile' option is used, then each line of the input
should be at least 6 numbers, and be of the form
roll pitch yaw dS dL dP
(These are the forms output by the '-dfile' and
'-1Dfile' options of program 3dvolreg; see that
program's -help output for the hideous details.)
The n-th sub-brick of the input dataset will be transformed
using the parameters from the n-th line of the dname file.
If the dname file doesn't contain as many lines as the
input dataset has sub-bricks, then the last dname line will
be used for all subsequent sub-bricks. Excess columns or
rows will be ignored.
N.B.: Rotation is always about the center of the volume.
If the parameters are derived from a 3dvolreg run
on a dataset with a different center in xyz-space,
the results may not be what you want!
N.B.: You can't use -dfile/-1Dfile with -points (infra).
POINTS OPTIONS (instead of datasets):
------------------------------------
-points
-origin xo yo zo
These options specify that instead of rotating a dataset, you will
be rotating a set of (x,y,z) points. The points are read from stdin.
* If -origin is given, the point (xo,yo,zo) is used as the center for
the rotation.
* If -origin is NOT given, and a dataset is given at the end of the
command line, then the center of the dataset brick is used as
(xo,yo,zo). The dataset will NOT be rotated if -points is given.
* If -origin is NOT given, and NO dataset is given at the end of the
command line, then xo=yo=zo=0 is assumed. You probably don't
want this.
* (x,y,z) points are read from stdin as 3 ASCII-formatted numbers per
line, as in 3dUndump. Any succeeding numbers on input lines will
be copied to the output, which will be written to stdout.
* The input (x,y,z) coordinates are taken in the same order as the
axes of the input dataset. If there is no input dataset, then
negative x = R positive x = L }
negative y = A positive y = P } e.g., the DICOM order
negative z = I positive z = S }
One way to dump some (x,y,z) coordinates from a dataset is:
3dmaskdump -mask something+tlrc -o xyzfilename -noijk
'3dcalc( -a dset+tlrc -expr x -datum float )'
'3dcalc( -a dset+tlrc -expr y -datum float )'
'3dcalc( -a dset+tlrc -expr z -datum float )'
(All of this should be on one command line.)
============================================================================
Example: 3drotate -prefix Elvis -bshift 10S 0 0 -rotate 30R 0 0 Sinatra+orig
This will shift the input 10 mm in the superior direction, followed by a 30
degree rotation about the Right-to-Left axis (i.e., nod the head forward).
============================================================================
Algorithm: The rotation+shift is decomposed into 4 1D shearing operations
(a 3D generalization of Paeth's algorithm). The interpolation
(i.e., resampling) method used for these shears can be controlled
by the following options:
-Fourier = Use a Fourier method (the default: most accurate; slowest).
-NN = Use the nearest neighbor method.
-linear = Use linear (1st order polynomial) interpolation (least accurate).
-cubic = Use the cubic (3rd order) Lagrange polynomial method.
-quintic = Use the quintic (5th order) Lagrange polynomial method.
-heptic = Use the heptic (7th order) Lagrange polynomial method.
-Fourier_nopad = Use the Fourier method WITHOUT padding
* If you don't mind - or even want - the wraparound effect
* Works best if dataset grid size is a power of 2, possibly
times powers of 3 and 5, in all directions being altered.
* The main use would seem to be to un-wraparound poorly
reconstructed images, by using a shift; for example:
3drotate -ashift 30A 0 0 -Fourier_nopad -prefix Anew A+orig
* This option is also available in the Nudge Dataset plugin.
-clipit = Clip results to input brick range [now the default].
-noclip = Don't clip results to input brick range.
-zpad n = Zeropad around the edges by 'n' voxels during rotations
(these edge values will be stripped off in the output)
N.B.: Unlike to3d, in this program '-zpad' adds zeros in
all directions.
N.B.: The environment variable AFNI_ROTA_ZPAD can be used
to set a nonzero default value for this parameter.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dRowFillin
Usage: 3dRowFillin [options] dataset
Extracts 1D rows in the given direction from a 3D dataset,
searches for blank (zero) regions, and fills them in if
the blank region isn't too large and it is flanked by
the same value on either edge. For example:
input row = 0 1 2 0 0 2 3 0 3 0 0 4 0
output row = 0 1 2 2 2 2 3 3 3 0 0 4 0
OPTIONS:
-maxgap N = set the maximum length of a blank region that
will be filled in to 'N' [default=9].
-dir D = set the direction of fill to 'D', which can
be one of the following:
A-P, P-A, I-S, S-I, L-R, R-L, x, y, z,
XYZ.OR, XYZ.AND
The first 6 are anatomical directions;
x, y, and z refer to the dataset's
internal axes.
XYZ.OR means do the fillin in x, followed by y,
followed by z directions.
XYZ.AND is like XYZ.OR but only accept voxels that
would have been filled in each of the three fill
calls.
Note that with the XYZ* options, the fill value depends
on the axis orientation. So you're better off sticking
to single valued dsets when using them.
See also -binary option below
-binary: Turn input dataset to 0 and 1 before filling in.
Output will also be a binary valued dataset.
-prefix P = set the prefix to 'P' for the output dataset.
N.B.: If the input dataset has more than one sub-brick,
only the first one will be processed.
* The intention of this program is to let you fill in slice gaps
made when drawing ROIs with the 'Draw Dataset' plugin. If you
draw every 5th coronal slice, say, then you could fill in using
3dRowFillin -maxgap 4 -dir A-P -prefix fredfill fred+orig
* This program is moderately obsolescent, since I later added
the 'Linear Fillin' controls to the 'Draw Dataset' plugin.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dRprogDemo
Usage:
------
3dRprogDemo is a template program to help users write their own R
processing routines on MRI volumes without having to deal with
things like volume I/O or command line argument parsing.
This template program shows rudimentary command line option parsing,
volume reading, calling a silly processing function on each voxel time series,
and writing the output.
This 3dRprogDemo.R file is paired with the script 3dRprogDemo which
allows users to run R programs directly from the shell. To create your
own 3dSOMETHING program you would do at least the following:
cp 3dRprogDemo.R 3dSOMETHING.R
cp 3dRprogDemo 3dSOMETHING
Modify the variable ExecName in 3dSOMETHING.R to reflect your program name
Replace the function RprogDemo.Scale() with your own function
Unfortunately at this stage, there is little help for the AFNI R API
beyond this sample code. If you find yourself using this and need
to ask questions about other dataset utility functions, contact the author
for help. The AFNIio.R file in the AFNI distribution contains most of the IO
functions. Below are some notable ones, grep for them in the .R files for
usage examples.
dset.attr() for getting and setting attributes, such as the TR in seconds
e.g. dset$NI_head <- dset.attr(dset$NI_head, "TR", val = 1.5)
read.AFNI()
write.AFNI()
show.dset.attr()
dset.index3Dto1D()
dset.index1Dto3D()
dset.dimBRKarray()
dset.3DBRKarrayto1D()
dset.1DBRKarrayto3D()
parse.AFNI.name() for parsing a filename into AFNI relevant parameters
exists.AFNI.name()
note.AFNI(), err.AFNI(), warn.AFNI(), exit.AFNI()
Debugging Note:
===============
When running the program from the shell prompt, you cannot use R's
browser() function to halt execution and step through the code.
However, the utility function load.debug.AFNI.args() makes it very easy
for you to run the command line equivalent from the R prompt. Doing so
would make available the browser() functionality. To use load.debug.AFNI.args()
follow these steps:
1- Run the program from the shell command line. The program will
automatically create a hidden file called .YOUR_PROGRAM_NAME.dbg.AFNI.args
2- Start R from the same directory or change to the directory where
you ran the program if you started R elsewhere
3- Run the function: load.debug.AFNI.args() and follow the prompts.
The function will look for possible debug files, prompt you to pick
the one you want, and start the execution from the R shell.
Example 1 --- Read a dataset, scale it, then write the results:
-----------------------------------------------------------------------------
3dRprogDemo -input epi.nii
-mask mask.nii
-scale 7
-prefix toy.nii
Options in alphabetical order:
------------------------------
-h_aspx: like -h_spx, with autolabeling
-h_raw: this help message, as is in the code.
-h_spx: this help message, in sphinx format
-h_txt: this help message, in simple text
-help: this help message, in simple text.
-input DSET1 \
Specify the dataset to be scaled. Note that you can use
the various sub-brick selectors used by AFNI
e.g: -input pb05.Regression+tlrc'[face#0_Beta]' \
You can use multiple instances of -input in one command line
to process multiple datasets in the same manner.
-mask MASK: Process voxels inside this mask only.
Default is no masking.
-prefix PREFIX: Output prefix (just prefix, no view+suffix needed)
-scale SS: Multiply each voxel by SS
-show_allowed_options: list of allowed options
-verb VERB: VERB is an integer specifying verbosity level.
0 for quiet (Default). 1 or more: talkative.
AFNI program: 3dRSFC
Program to calculate common resting state functional connectivity (RSFC)
parameters (ALFF, mALFF, fALFF, RSFA, etc.) for resting state time
series. This program is **heavily** based on the existing
3dBandPass by RW Cox, with the amendments to calculate RSFC
parameters written by PA Taylor (July, 2012).
This program is part of FATCAT (Taylor & Saad, 2013) in AFNI. Importantly,
its functionality can be included in the `afni_proc.py' processing-script
generator; see that program's help file for an example including RSFC
and spectral parameter calculation via the `-regress_RSFC' option.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
All options of 3dBandPass may be used here (with a couple other
parameter options, as well): essentially, the motivation of this
program is to produce ALFF, etc. values of the actual RSFC time
series that you calculate. Therefore, all the 3dBandPass processing
you normally do en route to making your final `resting state time
series' is done here to generate your LFFs, from which the
amplitudes in the LFF band are calculated at the end. In order to
calculate fALFF, the same initial time series are put through the
same processing steps which you have chosen but *without* the
bandpass part; the spectrum of this second time series is used to
calculate the fALFF denominator.
For more information about each RSFC parameter, see, e.g.:
ALFF/mALFF -- Zang et al. (2007),
fALFF -- Zou et al. (2008),
RSFA -- Kannurpatti & Biswal (2008).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ USAGE: 3dRSFC [options] fbot ftop dataset
* One function of this program is to prepare datasets for input
to 3dSetupGroupInCorr. Other uses are left to your imagination.
* 'dataset' is a 3D+time sequence of volumes
++ This must be a single imaging run -- that is, no discontinuities
in time from 3dTcat-ing multiple datasets together.
* fbot = lowest frequency in the passband, in Hz
++ fbot can be 0 if you want to do a lowpass filter only;
HOWEVER, the mean and Nyquist freq are always removed.
* ftop = highest frequency in the passband (must be > fbot)
++ if ftop > Nyquist freq, then it's a highpass filter only.
* Set fbot=0 and ftop=99999 to do an 'allpass' filter.
++ Except for removal of the 0 and Nyquist frequencies, that is.
* You cannot construct a 'notch' filter with this program!
++ You could use 3dRSFC followed by 3dcalc to get the same effect.
++ If you understand what you are doing, that is.
++ Of course, that is the AFNI way -- if you don't want to
understand what you are doing, use Some other PrograM, and
you can still get Fine StatisticaL maps.
* 3dRSFC will fail if fbot and ftop are too close for comfort.
++ Which means closer than one frequency grid step df,
where df = 1 / (nfft * dt) [of course]
* The actual FFT length used will be printed, and may be larger
than the input time series length for the sake of efficiency.
++ The program will use a power-of-2, possibly multiplied by
a power of 3 and/or 5 (up to and including the 3rd power of
each of these: 3, 9, 27, and 5, 25, 125).
* Note that the results of combining 3dDetrend and 3dRSFC will
depend on the order in which you run these programs. That's why
3dRSFC has the '-ort' and '-dsort' options, so that the
time series filtering can be done properly, in one place.
* The output dataset is stored in float format.
* The order of processing steps is the following (most are optional), and
for the LFFs, the bandpass is done between the specified fbot and ftop,
while for the `whole spectrum' (i.e., fALFF denominator) the bandpass is
done only to exclude the time series mean and the Nyquist frequency:
(0) Check time series for initial transients [does not alter data]
(1) Despiking of each time series
(2) Removal of a constant+linear+quadratic trend in each time series
(3) Bandpass of data time series
(4) Bandpass of -ort time series, then detrending of data
with respect to the -ort time series
(5) Bandpass and de-orting of the -dsort dataset,
then detrending of the data with respect to -dsort
(6) Blurring inside the mask [might be slow]
(7) Local PV calculation [WILL be slow!]
(8) L2 normalization [will be fast.]
(9) Calculate spectrum and amplitudes, for RSFC parameters.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
--------
OPTIONS:
--------
-despike = Despike each time series before other processing.
++ Hopefully, you don't actually need to do this,
which is why it is optional.
-ort f.1D = Also orthogonalize input to columns in f.1D
++ Multiple '-ort' options are allowed.
-dsort fset = Orthogonalize each voxel to the corresponding
voxel time series in dataset 'fset', which must
have the same spatial and temporal grid structure
as the main input dataset.
++ At present, only one '-dsort' option is allowed.
-nodetrend = Skip the quadratic detrending of the input that
occurs before the FFT-based bandpassing.
++ You would only want to do this if the dataset
had been detrended already in some other program.
-dt dd = set time step to 'dd' sec [default=from dataset header]
-nfft N = set the FFT length to 'N' [must be a legal value]
-norm = Make all output time series have L2 norm = 1
++ i.e., sum of squares = 1
-mask mset = Mask dataset
-automask = Create a mask from the input dataset
-blur fff = Blur (inside the mask only) with a filter
width (FWHM) of 'fff' millimeters.
-localPV rrr = Replace each vector by the local Principal Vector
(AKA first singular vector) from a neighborhood
of radius 'rrr' millimeters.
++ Note that the PV time series is L2 normalized.
++ This option is mostly for Bob Cox to have fun with.
-input dataset = Alternative way to specify input dataset.
-band fbot ftop = Alternative way to specify passband frequencies.
-prefix ppp = Set prefix name of output dataset. Name of filtered time
series would be, e.g., ppp_LFF+orig.*, and the parameter
outputs are named with obvious suffixes.
-quiet = Turn off the fun and informative messages. (Why?)
-no_rs_out = Don't output processed time series -- just output
the parameters (not recommended, since the point of
calculating RSFC parameters here is to have them closely
related to the time series themselves, which are used for
further analysis).
-un_bp_out = Output the un-bandpassed series as well (default is not
to). Name would be, e.g., ppp_unBP+orig.*, i.e., with the
suffix `_unBP'.
-no_rsfa = Don't output the RSFA parameters (default is to output them).
-bp_at_end = A (probably unnecessary) switch to have bandpassing be
the very last processing step that is done in the
sequence of steps listed above; at Step 3 above, only
the time series mean and Nyquist frequency are BP'ed out, and
then the LFF series is created only after Step 9. NB: this
probably makes only very small changes for most processing
sequences (but maybe not, depending on usage).
-notrans = Don't check for initial positive transients in the data:
*OR* ++ The test is a little slow, so skipping it is OK,
-nosat if you KNOW the data time series are transient-free.
++ Or set AFNI_SKIP_SATCHECK to YES.
++ Initial transients won't be handled well by the
bandpassing algorithm, and in addition may seriously
contaminate any further processing, such as inter-
voxel correlations via InstaCorr.
++ No other tests are made [yet] for non-stationary
behavior in the time series data.
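An illustrative minimal command (not from the original help; the
dataset and mask names are hypothetical, and the passband is the
commonly used 0.01-0.1 Hz) might be:
   3dRSFC -mask mask_epi+orig -prefix RSFC_subj01 0.01 0.1 rest_preproc+orig
which would write the filtered time series to RSFC_subj01_LFF+orig.*
along with the ALFF/mALFF/fALFF/RSFA parameter volumes.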
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
____________________________________________________________________________
=========================================================================
* This binary version of 3dRSFC is NOT compiled using OpenMP, a
semi-automatic parallelizer software toolkit, which splits the work
across multiple CPUs/cores on the same shared memory computer.
* However, the source code is compatible with OpenMP, and can be compiled
with an OpenMP-capable compiler, such as gcc 8.x+, Intel's icc, and
Oracle Developer Studio.
* If you wish to compile this program with OpenMP, see the man page for
your C compiler, and (if needed) consult the AFNI message board, and
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* However, it would probably be simplest to download a pre-compiled AFNI
binary set that uses OpenMP!
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/index.html
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dSeg
3dSeg segments brain volumes into tissue classes. The program allows
for adding a variety of global and voxelwise priors. However, for the moment,
only mixing fractions and MRF are documented.
I do not recommend you use this program for quantitative segmentation,
at least not yet. I have a lot of emotional baggage to overcome on that
front.
Example 1: Segmenting a skull-stripped T1 volume with:
Brain mask, No prior volumes, Uniform mixing fraction
3dSeg -anat anat.nii -mask AUTO \
-classes 'CSF ; GM ; WM' -bias_classes 'GM ; WM' \
-bias_fwhm 25 -mixfrac UNI -main_N 5 \
-blur_meth BFT
Options:
-anat ANAT: ANAT is the volume to segment
-mask MASK: Only non-zero voxels in MASK are analyzed.
MASK is useful when no voxelwise priors are available.
MASK can either be a dataset or the string 'AUTO'
which would use AFNI's automask function to create the mask.
-blur_meth BMETH: Set the blurring method for bias field estimation.
-blur_meth takes one of: BFT, BIM, BNN, or LSB.
BFT: Use Fourier smoothing, masks be damned.
BIM: Blur in mask, slower, more accurate, not necessarily
better bias field estimates.
BNN: A crude blurring in mask. Faster than BIM but it does
not result in accurate FWHM. This option is for
impatient testing. Do not use it.
LSB: Localstat moving average smoothing. Debugging only.
Do not use.
default: BFT
-bias_fwhm BIAS_FWHM: The amount of blurring used when estimating the
field bias with the Wells method.
[Wells et al., IEEE TMI 15, 4, 1997].
Use 0.0 to turn off bias field estimation.
default: 25.0
-classes 'CLASS_STRING': CLASS_STRING is a semicolon delimited
string of class labels. At the moment
CLASS_STRING can only be 'CSF; GM; WM'
default: CSF; GM; WM
-Bmrf BMRF: Weighting factor controlling spatial homogeneity of the
classifications. The larger BMRF, the more homogeneous the
classifications will be.
See Berthod et al. Image and Vision Computing 14 (1996),
MRFs are also used in FSL's FAST program.
BMRF = 0.0 means no MRF, 1.0 is a start.
Use this option if you have noisy data and no good
voxelwise priors.
default: 0.0
-bias_classes 'BIAS_CLASS_STRING': A semicolon-delimited string of
classes that contribute to the
estimation of the bias field.
default: 'GM; WM'
-prefix PREF: PREF is the prefix for all output volumes that are not
debugging related.
default: Segsy
-overwrite: An option common to almost all AFNI programs. It is
automatically turned on if you provide no PREF.
-debug LEVEL: Set debug level to 0(default), 1, or 2
-mixfrac 'MIXFRAC': MIXFRAC sets up the volume-wide (within mask)
tissue fractions while initializing the
segmentation (see IGNORE for exception).
You can specify the mixing fractions
directly such as with '0.1 0.45 0.45', or with
the following special flags:
'UNI': Equal mixing fractions
'AVG152_BRAIN_MASK': Mixing fractions reflecting AVG152
template.
'IGNORE': Ignore mixing fraction while computing posterior
probabilities for all the iterations, not just at the
initialization as for the preceding variants
default: UNI
-mixfloor 'FLOOR': Set the minimum value for any class's mixing fraction.
The value should be between 0 and 1 and should not exceed
1/(number of classes). This parameter should be kept to
a small value.
default: 0.0001
-gold GOLD: A gold-standard segmentation volume, should you wish to
compare 3dSeg's results to it.
-gold_bias GOLD: A gold-standard bias volume, should you wish to
compare 3dSeg's bias estimate to it.
-main_N Niter: Number of iterations to perform.
default: 5
-cset CSET: Initial classification. If CSET is not given,
initialization is carried out with 3dkmeans' engine.
-labeltable LT: Label table containing integer keys and corresponding labels.
-vox_debug 1D_DBG_INDEX: 1D index of voxel to debug.
OR
-vox_debug I J K: where I, J, K are the 3D voxel indices
(not RAI coordinates in mm).
-vox_debug_file DBG_OUTPUT_FILE: File in which debug information is output
use '-' for stdout, '+' for stderr.
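A second, illustrative example (not from the original help; it simply
re-uses documented options): the same segmentation as Example 1, but
with explicit mixing fractions and an MRF term for noisy data:
   3dSeg -anat anat.nii -mask AUTO \
         -classes 'CSF ; GM ; WM' -bias_classes 'GM ; WM' \
         -bias_fwhm 25 -mixfrac '0.15 0.45 0.40' -Bmrf 1.0 \
         -main_N 5 -blur_meth BFT -prefix Segsy_mrf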
AFNI program: 3dSetupGroupInCorr
++ 3dSetupGroupInCorr: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: RW Cox
Usage: 3dSetupGroupInCorr [options] dataset dataset ...
This program is used to pre-process a collection of AFNI
3D+time datasets for use with Group InstaCorr (3dGroupInCorr).
* By itself, this program just collects all its input datasets
together for convenient processing later. Pre-processing
(e.g., detrending, bandpassing, despiking) must be done BEFORE
running 3dSetupGroupInCorr -- for example, with 3dBandpass.
The actual calculations of group t-tests of correlations is
done AFTER running 3dSetupGroupInCorr, in program 3dGroupInCorr.
* All the datasets input here will be treated as one sample
for the t-test performed in 3dGroupInCorr. If you are going
to do a 2-sample t-test, then you will need to run this
program twice, once for each collection of datasets
(e.g., once for 'control subjects' and once for 'patients').
* All datasets must have the same grid layout, since 3dGroupInCorr
will do voxel-by-voxel comparisons. Usually, this means that
the datasets have been transformed to a standard space; for
example, using the @auto_tlrc script.
* All the datasets use the same mask -- only voxels inside
this mask will be stored and processed. If you do not give the
'-mask' option, then all voxels will be processed -- not usually
a good idea, since non-brain voxels will use up a LOT of memory
and CPU time in 3dGroupInCorr.
++ If you use '-mask', you MUST use the same mask dataset
in all runs of 3dSetupGroupInCorr that will be input
at the same time to 3dGroupInCorr -- otherwise, the
computations in that program will make no sense AT ALL!
++ This requirement is why there is no '-automask' option.
* However, the datasets do NOT all have to have the same number
of time points or time spacing. But each dataset must have
at least 9 points along the time axis!
* The ONLY pre-processing herein for each time series is to L2
normalize it (sum of squares = 1) and scale it to 8-bit bytes
(or to 16-bit shorts).
++ You almost certainly want to use 3dBandpass and/or some other
code to pre-process the datasets BEFORE input to this program.
++ See the SAMPLE SCRIPT below for a semi-reasonable way to
pre-process a collection of datasets for 3dGroupInCorr.
++ [10 May 2012] The '-prep' option now allows for some limited
pre-processing operations.
* The outputs from this program are 2 files:
++ PREFIX.grpincorr.niml is a text file containing the header
information that describes the data file. This file is input
to 3dGroupInCorr to define one sample in the t-test.
++ PREFIX.grpincorr.data is the data file, which contains
all the time series (in the mask) from all the datasets.
++ The data file will usually be huge (gigabytes, perhaps).
You need to be sure you have enough disk space and RAM.
++ If the output files already exist when you run this program,
then 3dSetupGroupInCorr will exit without processing the datasets!
* See the help for 3dGroupInCorr for information on running that program.
* The PDF file
https://afni.nimh.nih.gov/pub/dist/edu/latest/afni_handouts/afni20_instastuff.pdf
also has some information on the Group InstaCorr process (as well as all
the other 'Insta' functions added to AFNI).
* The program 3dExtractGroupInCorr can be used to reconstruct the
input datasets from the .niml and .data files, if needed.
-------
OPTIONS
-------
-mask mset = Mask dataset [highly recommended for volumetric data!]
-prefix PREFIX = Set prefix name of output dataset
-short = Store data as 16-bit shorts [used to be the default]
++ This will double the amount of disk space and RAM needed.
++ For most GroupInCorr purposes, you don't need this option,
since there is so much averaging going on that truncation
noise is washed away.
-byte = Store data as 8-bit bytes rather than 16-bit shorts.
++ This will save memory in 3dGroupInCorr (and disk space),
which can be important when using large collections of
datasets. Results will be very slightly less accurate
than with '-short', but you'll have a hard time finding
any place where this matters.
++ This option is now the default [08 Feb 2010].
++ The amount of data stored is (# of voxels in the mask)
* (# of time points per subject)
* (# of subjects)
For a 3x3x3 mm^3 grid in MNI space, there are typically
about 70,000 voxels in the brain. If you have an average
of 200 time points per scan, then one subject's scan will
take up 7e4*2e2 = 14 MB of space; 100 subjects would thus
require about 1.4 GB of space.
-labels fff = File 'fff' should be a list of labels, a unique one for each
dataset input. These labels can be used in 3dGroupInCorr to
select a subset of datasets to be processed therein.
++ If you don't use this option, then the list of labels will
comprise the list of prefixes from the input datasets.
++ Labels cannot contain a space character, a comma, or a semicolon.
++ When using the -LRpairs option, you should specify only
one label for each pair.
If you don't use the -labels option with -LRpairs, the
labels are taken from the 'L' dataset names only, i.e.,
the first name of each LR pair.
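++ For illustration (hypothetical contents, not from the original
help): a labels file for three subjects could simply contain
three lines of text, e.g.
     sub01
     sub02
     sub03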
-DELETE = Delete input datasets from disk after
processing them one at a time into the
output data file -- this very highly
destructive option is intended to let
you save disk space, if absolutely
necessary. *** BE CAREFUL OUT THERE! ***
++ If you are setting up for 3dGroupInCorr
in a script that first uses 3dBandpass
to filter the datasets, and then uses this
program to finish the setup, then you
COULD use '-DELETE' to remove the
temporary 3dBandpass outputs as soon
as they are no longer needed.
-prep XXX = Prepare (or preprocess) each data time series in some
fashion before L2 normalization and storing, where
'XXX' is one of these:
++ SPEARMAN ==> convert data to ranks, so that the
resulting individual subject correlations
in 3dGroupInCorr are Spearman correlations.
++ DEMEAN ==> remove the mean
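++ For example (an illustrative command, not from the original
help; the dataset names are hypothetical), to set up a
collection so that 3dGroupInCorr computes Spearman correlations:
     3dSetupGroupInCorr -mask ALL_amask5050+tlrc -prefix ALLspear \
                        -prep SPEARMAN sub*_BP+tlrc.HEAD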
Variations for surface-based data:
----------------------------------
If you are working with one surface, no special options are needed.
However, it is often the case that you want to perform correlations
on both hemispheres. So in that case, you'll want to provide volume
pairs (Left Hemi data, Right Hemi data). To help reduce the risk of
user errors (the only kind we know of), you should also provide the
domain parents for each of the hemispheres.
-LRpairs L_SURF R_SURF: This option sets the domains for the left
and right hemisphere surfaces, and
indicates that the datasets to follow
are arranged in (Left, Right) pairs.
-------------
SAMPLE SCRIPT (tcsh syntax)
-------------
* Assume datasets are named in the following scheme (sub01, sub02, ...)
++ T1-weighted anatomical = sub01_anat+orig
++ Resting state EPI = sub01_rest+orig
++ Standard space template = ~/abin/MNI_avg152T1+tlrc
#!/bin/tcsh
# MNI-ize each subject's anat, then EPIs (at 2 mm resolution)
cp -f ~/abin/MNI_avg152T1+tlrc.* .
foreach fred ( sub*_anat+orig.HEAD )
set sub = `basename $fred _anat+orig.HEAD`
@auto_tlrc -base MNI_avg152T1+tlrc.HEAD -input $fred
adwarp -apar ${sub}_anat+tlrc.HEAD -dpar ${sub}_rest+orig.HEAD \
-resam Cu -dxyz 2.0
3dAutomask -dilate 1 -prefix ${sub}_amask ${sub}_rest+tlrc.HEAD
end
# Combine individual EPI automasks into a group mask
3dMean -datum float -prefix ALL_amaskFULL *_amask+tlrc.HEAD
3dcalc -datum byte -prefix ALL_amask5050 -a ALL_amaskFULL+tlrc -expr 'step(a-0.499)'
/bin/rm -f *_amask+tlrc.*
# Bandpass and blur each dataset inside the group mask
# * Skip first 4 time points.
# * If you want to remove the global mean signal, you would use the '-ort'
# option for 3dBandpass -- but we recommend that you do NOT do this:
# http://dx.doi.org/10.1089/brain.2012.0080
foreach fred ( sub*_rest+tlrc.HEAD )
set sub = `basename $fred _rest+tlrc.HEAD`
3dBandpass -mask ALL_amask5050+tlrc -blur 6.0 -band 0.01 0.10 -prefix ${sub}_BP\
-input $fred'[4..$]'
end
# Extract data for 3dGroupInCorr
3dSetupGroupInCorr -mask ALL_amask5050 -prefix ALLshort -short *_BP+tlrc.HEAD
# OR
3dSetupGroupInCorr -mask ALL_amask5050 -prefix ALLbyte -byte *_BP+tlrc.HEAD
/bin/rm -f *_BP+tlrc.*
### At this point you could run (in 2 separate terminal windows)
### afni -niml MNI_avg152T1+tlrc
### 3dGroupInCorr -setA ALLbyte.grpincorr.niml -verb
### And away we go ....
------------------
CREDITS (or blame)
------------------
* Written by RWCox, 31 December 2009.
* With a little help from my friends: Alex Martin, Steve Gotts, Ziad Saad.
* With encouragement from MMK.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dSharpen
Usage: 3dSharpen [options] dataset
Applies a simple 3D sharpening filter to the POSITIVE values
in the #0 volume of the input dataset, and writes out a new
dataset.
Only operates on positive valued voxels in the dataset.
Non-positive values will not be altered.
Options:
--------
-phi fff = Sharpening factor, between 0.1 and 0.9 (inclusive).
Larger means more sharpening. Default is 0.4.
-input dataset = An option to input the dataset anywhere,
not just at the end of the command line.
-prefix pref = Select the name of the output dataset
(it will be in floating point format).
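Example (an illustrative command, not from the original help; the
dataset name is hypothetical):
   3dSharpen -phi 0.6 -prefix template_sharp -input template_mean+tlrc
This would apply fairly strong sharpening (phi = 0.6) to the positive
values in the #0 volume of template_mean+tlrc and write the result,
in float format, to template_sharp+tlrc.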
* A quick hack for experimental purposes.
* e.g., Cleaning up the results of brain template construction.
* RWCox - Feb 2017.
AFNI program: 3dSignatures
AFNI program: 3dSkullStrip
Usage: A program to extract the brain from surrounding
tissue in MRI T1-weighted images.
The simplest command would be:
3dSkullStrip <-input DSET>
Also consider the script @SSwarper, which combines the use of
3dSkullStrip and nonlinear warping to an MNI template to produce
a skull-stripped dataset in MNI space, plus the nonlinear warp
that can be used to transform other datasets from the same subject
(e.g., EPI) to MNI space. (This script only applies to human brain
images.)
The fully automated process consists of three steps:
1- Preprocessing of volume to remove gross spatial image
non-uniformity artifacts and reposition the brain in
a reasonable manner for convenience.
** Note that in many cases, using 3dUnifize before **
** using 3dSkullStrip will give better results. **
2- Expand a spherical surface iteratively until it envelopes
the brain. This is a modified version of the BET algorithm:
Fast robust automated brain extraction,
by Stephen M. Smith, HBM 2002 v 17:3 pp 143-155
Modifications include the use of:
. outer brain surface
. expansion driven by data inside and outside the surface
. avoidance of eyes and ventricles
. a set of operations to avoid the clipping of certain brain
areas and reduce leakage into the skull in heavily shaded
data
. two additional processing stages to ensure convergence and
reduction of clipped areas.
. use of 3d edge detection, see Deriche and Monga references
in 3dedge3 -help.
3- The creation of various masks and surfaces modeling brain
and portions of the skull
Common examples of usage:
-------------------------
o 3dSkullStrip -input VOL -prefix VOL_PREFIX
Vanilla mode, should work for most datasets.
o 3dSkullStrip -input VOL -prefix VOL_PREFIX -push_to_edge
Adds an aggressive push to brain edges. Use this option
when chunks of gray matter are not included. This option
might cause the mask to leak into non-brain areas.
o 3dSkullStrip -input VOL -surface_coil -prefix VOL_PREFIX -monkey
Vanilla mode, for use with monkey data.
o 3dSkullStrip -input VOL -prefix VOL_PREFIX -ld 30
Use a denser mesh, in the cases where you have lots of
CSF between gyri. Also helps when some of the brain is clipped
close to regions of high curvature.
Tips:
-----
I ran the program with the default parameters on 200+ datasets.
The results were quite good in all but a couple of instances; here
are some tips on fixing trouble spots:
Clipping in frontal areas, close to the eye balls:
+ Try -push_to_edge option first.
Can also try -no_avoid_eyes option.
Clipping in general:
+ Try -push_to_edge option first.
Can also use lower -shrink_fac, start with 0.5 then 0.4
Problems down below:
+ Piece of cerebellum missing, reduce -shrink_fac_bot_lim
from default value.
+ Leakage in lower areas, increase -shrink_fac_bot_lim
from default value.
Some lobules are not included:
+ Use a denser mesh. Start with -ld 30. If that still fails,
try even higher density (like -ld 50) and increase iterations
(say to -niter 750).
Expect the program to take much longer in that case.
+ Instead of using denser meshes, you could try blurring the data
before skull stripping. Something like -blur_fwhm 2 did
wonders for some of my data with the default options of 3dSkullStrip.
Blurring is a lot faster than increasing mesh density.
+ Also use a smaller -shrink_fac if you have lots of CSF between
gyri.
Massive chunks missing:
+ If brain has very large ventricles and lots of CSF between gyri,
the ventricles will keep attracting the surface inwards.
This often happens with older brains. In such
cases, use the -visual option to see what is happening.
For example, the options below did the trick in various
instances.
-blur_fwhm 2 -use_skull
or for more stubborn cases increase csf avoidance with this cocktail
-blur_fwhm 2 -use_skull -avoid_vent -avoid_vent -init_radius 75
+ Too much neck in the volume might throw off the initialization
step. You can fix this by clipping tissue below the brain with
@clip_volume -below ZZZ -input INPUT
where ZZZ is a Z coordinate somewhere below the brain.
Large regions outside brain included:
+ Usually because noise level is high. Try @NoisySkullStrip.
Make sure that brain orientation is correct. This means the image in
AFNI's axial slice viewer should be close to the brain's axial plane.
The same goes for the other planes. Otherwise, the program might do a lousy
job removing the skull.
Eye Candy Mode:
---------------
You can run 3dSkullStrip and have it send successive iterations
to SUMA and AFNI. This is very helpful in following the
progression of the algorithm and determining the source
of trouble, if any.
Example:
afni -niml -yesplugouts &
suma -niml &
3dSkullStrip -input Anat+orig -o_ply anat_brain -visual
Help section for the intrepid:
------------------------------
3dSkullStrip < -input VOL >
[< -o_TYPE PREFIX >] [< -prefix VOL_PREFIX >]
[< -spatnorm >] [< -no_spatnorm >] [< -write_spatnorm >]
[< -niter N_ITER >] [< -ld LD >]
[< -shrink_fac SF >] [< -var_shrink_fac >]
[< -no_var_shrink_fac >] [< -shrink_fac_bot_lim SFBL >]
[< -pushout >] [< -no_pushout >] [< -exp_frac FRAC]
[< -touchup >] [< -no_touchup >]
[< -fill_hole R >] [< -NN_smooth NN_SM >]
[< -smooth_final SM >] [< -avoid_vent >] [< -no_avoid_vent >]
[< -use_skull >] [< -no_use_skull >]
[< -avoid_eyes >] [< -no_avoid_eyes >]
[< -use_edge >] [< -no_use_edge >]
[< -push_to_edge >] [<-no_push_to_edge>]
[< -perc_int PERC_INT >]
[< -max_inter_iter MII >] [-mask_vol | -orig_vol | -norm_vol]
[< -debug DBG >] [< -node_debug NODE_DBG >]
[< -demo_pause >]
[< -monkey >] [< -marmoset >] [<-rat>]
NOTE: Please report bugs and strange failures
to saadz@mail.nih.gov
Mandatory parameters:
-input VOL: Input AFNI (or AFNI readable) volume.
Optional Parameters:
-monkey: the brain of a monkey.
-marmoset: the brain of a marmoset.
This one was tested on only one dataset
and may not work with non-default
options. Check your results!
-rat: the brain of a rat.
By default, no_touchup is used with the rat.
-surface_coil: Data acquired with a surface coil.
-o_TYPE PREFIX: prefix of output surface.
where TYPE specifies the format of the surface
and PREFIX is, well, the prefix.
TYPE is one of: fs, 1d (or vec), sf, ply.
More on that below.
-skulls: Output surface models of the skull.
-4Tom: The output surfaces are named based
on PREFIX following -o_TYPE option below.
-prefix VOL_PREFIX: prefix of output volume.
If not specified, the prefix is the same
as the one used with -o_TYPE.
The output volume is skull stripped version
of the input volume. In the earlier version
of the program, a mask volume was written out.
You can still get that mask volume instead of the
skull-stripped volume with the option -mask_vol .
NOTE: In the default setting, the output volume does not
have values identical to those in the input.
In particular, the range might be larger
and some low-intensity values are set to 0.
If you insist on having the same range of values as in
the input, then either use option -orig_vol, or run:
3dcalc -nscale -a VOL+VIEW -b VOL_PREFIX+VIEW \
-expr 'a*step(b)' -prefix VOL_SAME_RANGE
With the command above, you can preserve the range
of values of the input but some low-intensity voxels would
still be masked. If you want to preserve them, then use
-mask_vol in the 3dSkullStrip command that would produce
VOL_MASK_PREFIX+VIEW. Then run 3dcalc masking with voxels
inside the brain surface envelope:
3dcalc -nscale -a VOL+VIEW -b VOL_MASK_PREFIX+VIEW \
-expr 'a*step(b-3.01)' -prefix VOL_SAME_RANGE_KEEP_LOW
-norm_vol: Output a masked and somewhat intensity normalized and
thresholded version of the input. This is the default,
and you can use -orig_vol to override it.
-orig_vol: Output a masked version of the input AND do not modify
the values inside the brain as -norm_vol would.
-mask_vol: Output a mask volume instead of a skull-stripped
volume.
The mask volume contains:
0: Voxel outside surface
1: Voxel just outside the surface. This means the voxel
center is outside the surface but inside the
bounding box of a triangle in the mesh.
2: Voxel intersects the surface (a triangle), but center
lies outside.
3: Voxel contains a surface node.
4: Voxel intersects the surface (a triangle), center lies
inside surface.
5: Voxel just inside the surface. This means the voxel
center is inside the surface and inside the
bounding box of a triangle in the mesh.
6: Voxel inside the surface.
-spat_norm: (Default) Perform spatial normalization first.
This is a necessary step unless the volume has
been 'spatnormed' already.
-no_spatnorm: Do not perform spatial normalization.
Use this option only when the volume
has been run through the 'spatnorm' process
-spatnorm_dxyz DXYZ: Use DXYZ for the spatial resolution of the
spatially normalized volume. The default
is the lowest of all three dimensions.
For human brains, use DXYZ of 1.0, for
primate brain, use the default setting.
-write_spatnorm: Write the 'spatnormed' volume to disk.
-niter N_ITER: Number of iterations. Default is 250
For denser meshes, you need more iterations
N_ITER of 750 works for LD of 50.
-ld LD: Parameter to control the density of the surface.
Default is 20 if -no_use_edge is used,
30 with -use_edge. See CreateIcosahedron -help
for details on this option.
-shrink_fac SF: Parameter controlling the brain vs non-brain
intensity threshold (tb). Default is 0.6.
tb = (Imax - t2) SF + t2
where t2 is the 2nd percentile value and Imax is the local
maximum, limited to the median intensity value.
For more information on tb, t2, etc. read the BET paper
mentioned above. Note that in 3dSkullStrip, SF can vary across
iterations and might be automatically clipped in certain areas.
SF can vary between 0 and 1.
0: Intensities < median intensity are considered non-brain
1: Intensities < t2 are considered non-brain
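For example (an illustrative calculation, not from the original
help): with t2 = 20, a local Imax of 100, and SF = 0.6,
tb = (100 - 20)*0.6 + 20 = 68, so local intensities below 68
would be treated as non-brain at that node.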
-var_shrink_fac: Vary the shrink factor with the number of
iterations. This reduces the likelihood of a surface
getting stuck on large pools of CSF before reaching
the outer surface of the brain. (Default)
-no_var_shrink_fac: Do not use var_shrink_fac.
-shrink_fac_bot_lim SFBL: Do not allow the varying SF to go
below SFBL . Default 0.65, 0.4 when edge detection is used.
This option helps reduce potential for leakage below
the cerebellum.
In certain cases where you have severe non-uniformity resulting
in low signal towards the bottom of the brain, you will need to
reduce this parameter.
-pushout: Consider values above each node in addition to values
below the node when deciding on expansion. (Default)
-no_pushout: Do not use -pushout.
-exp_frac FRAC: Speed of expansion (see BET paper). Default is 0.1.
-touchup: Perform touchup operations at end to include
areas not covered by surface expansion.
Use -touchup -touchup for aggressive makeup.
(Default is -touchup)
-no_touchup: Do not use -touchup
-fill_hole R: Fill small holes that can result from small surface
intersections caused by the touchup operation.
R is the maximum number of pixels on the side of a hole
that can be filled. Big holes are not filled.
If you use -touchup, the default R is 10. Otherwise
the default is 0.
This is a less than elegant solution to the small
intersections which are usually eliminated
automatically.
-NN_smooth NN_SM: Perform Nearest Neighbor coordinate interpolation
every few iterations. Default is 72
-smooth_final SM: Perform final surface smoothing after all iterations.
Default is 20 smoothing iterations.
Smoothing is done using Taubin's method,
see SurfSmooth -help for detail.
-avoid_vent: avoid ventricles. Default.
Use this option twice to make the avoidance more
aggressive. That is at times needed with old brains.
-no_avoid_vent: Do not use -avoid_vent.
-init_radius RAD: Use RAD for the initial sphere radius.
For the automatic setting, there is an
upper limit of 100mm for humans.
For older brains with lots of CSF, you
might benefit from forcing the radius
to something like 75mm
-avoid_eyes: avoid eyes. Default
-no_avoid_eyes: Do not use -avoid_eyes.
-use_edge: Use edge detection to reduce leakage into meninges and eyes.
Default.
-no_use_edge: Do not use edges.
-push_to_edge: Perform aggressive push to edge at the end.
This option might cause leakage.
-no_push_to_edge: (Default).
-use_skull: Use outer skull to limit expansion of surface into
the skull due to very strong shading artifacts.
This option is buggy at the moment, use it only
if you have leakage into skull.
-no_use_skull: Do not use -use_skull (Default).
-send_no_skull: Do not send the skull surface to SUMA if you are
using -talk_suma
-perc_int PERC_INT: Percentage of segments allowed to intersect
surface. Ideally this should be 0 (Default).
However, few surfaces might have small stubborn
intersections that produce a few holes.
PERC_INT should be a small number, typically
between 0 and 0.1. A -1 means do not do
any testing for intersection.
-max_inter_iter N_II: Number of iterations to remove intersection
problems. With each iteration, the program
automatically increases the amount of smoothing
to get rid of intersections. Default is 4
-blur_fwhm FWHM: Blur dset after spatial normalization.
Recommended when you have lots of CSF in brain
and when you have protruding gyri (finger-like).
Recommended value is 2..4.
-interactive: Make the program stop at various stages in the
segmentation process for a prompt from the user
to continue or skip that stage of processing.
This option is best used in conjunction with options
-talk_suma and -feed_afni
-demo_pause: Pause at various step in the process to facilitate
interactive demo while 3dSkullStrip is communicating
with AFNI and SUMA. See 'Eye Candy' mode below and
-talk_suma option.
-fac FAC: Multiply input dataset by FAC if range of values is too
small.
Specifying output surfaces using -o or -o_TYPE options:
-o_TYPE outSurf specifies the output surface,
TYPE is one of the following:
fs: FreeSurfer ascii surface.
fsp: FreeSurfer ascii patch surface.
In addition to outSurf, you need to specify
the name of the parent surface for the patch.
using the -ipar_TYPE option.
This option is only for ConvertSurface
sf: SureFit surface.
For most programs, you are expected to specify prefix:
i.e. -o_sf brain. In some programs, you are allowed to
specify both .coord and .topo file names:
i.e. -o_sf XYZ.coord TRI.topo
The program will determine your choice by examining
the first character of the second parameter following
-o_sf. If that character is a '-' then you have supplied
a prefix and the program will generate the coord and topo names.
vec (or 1D): Simple ascii matrix format.
For most programs, you are expected to specify prefix:
i.e. -o_1D brain. In some programs, you are allowed to
specify both coord and topo file names:
i.e. -o_1D brain.1D.coord brain.1D.topo
coord contains 3 floats per line, representing
X Y Z vertex coordinates.
topo contains 3 ints per line, representing
v1 v2 v3 triangle vertices.
ply: PLY format, ascii or binary.
stl: STL format, ascii or binary (see also STL under option -i_TYPE).
byu: BYU format, ascii or binary.
mni: MNI obj format, ascii only.
gii: GIFTI format, ascii.
You can also enforce the encoding of data arrays
by using gii_asc, gii_b64, or gii_b64gz for
ASCII, Base64, or Base64 Gzipped.
If the AFNI_NIML_TEXT_DATA environment variable is set to YES,
the default encoding is ASCII, otherwise it is Base64.
obj: No support for writing OBJ format exists yet.
Note that if the surface filename has the proper extension,
it is enough to use the -o option and let the programs guess
the type from the extension.
SUMA communication options:
-talk_suma: Send progress with each iteration to SUMA.
-refresh_rate rps: Maximum number of updates to SUMA per second.
The default is the maximum speed.
-send_kth kth: Send the kth element to SUMA (default is 1).
This allows you to cut down on the number of elements
being sent to SUMA.
-sh <SumaHost>: Name (or IP address) of the computer running SUMA.
This parameter is optional, the default is 127.0.0.1
-ni_text: Use NI_TEXT_MODE for data transmission.
-ni_binary: Use NI_BINARY_MODE for data transmission.
(default is ni_binary).
-feed_afni: Send updates to AFNI via SUMA's talk.
-np PORT_OFFSET: Provide a port offset to allow multiple instances of
AFNI <--> SUMA, AFNI <--> 3dGroupIncorr, or any other
programs that communicate together to operate on the same
machine.
All ports are assigned numbers relative to PORT_OFFSET.
The same PORT_OFFSET value must be used on all programs
that are to talk together. PORT_OFFSET is an integer in
the inclusive range [1025 to 65500].
When you want to use multiple instances of communicating programs,
be sure the PORT_OFFSETS you use differ by about 50 or you may
still have port conflicts. A BETTER approach is to use -npb below.
-npq PORT_OFFSET: Like -np, but more quiet in the face of adversity.
-npb PORT_OFFSET_BLOC: Similar to -np, except it is easier to use.
PORT_OFFSET_BLOC is an integer between 0 and
MAX_BLOC. MAX_BLOC is around 4000 for now, but
it might decrease as we use up more ports in AFNI.
You should be safe for the next 10 years if you
stay under 2000.
Using this function reduces your chances of causing
port conflicts.
See also afni and suma options: -list_ports and -port_number for
information about port number assignments.
You can also provide a port offset with the environment variable
AFNI_PORT_OFFSET. Using -np overrides AFNI_PORT_OFFSET.
-max_port_bloc: Print the current value of MAX_BLOC and exit.
Remember this value can get smaller with future releases.
Stay under 2000.
-max_port_bloc_quiet: Spit MAX_BLOC value only and exit.
-num_assigned_ports: Print the number of assigned ports used by AFNI
then quit.
-num_assigned_ports_quiet: Do it quietly.
Port Handling Examples:
-----------------------
Say you want to run three instances of AFNI <--> SUMA.
For the first you just do:
suma -niml -spec ... -sv ... &
afni -niml &
Then for the second instance pick an offset bloc, say 1 and run
suma -niml -npb 1 -spec ... -sv ... &
afni -niml -npb 1 &
And for yet another instance:
suma -niml -npb 2 -spec ... -sv ... &
afni -niml -npb 2 &
etc.
Since you can launch many instances of communicating programs now,
you need to know which SUMA window, say, is talking to which AFNI.
To sort this out, the titlebars now show the number of the bloc
of ports they are using. When the bloc is set either via
environment variables AFNI_PORT_OFFSET or AFNI_PORT_BLOC, or
with one of the -np* options, window title bars change from
[A] to [A#] with # being the resultant bloc number.
In the examples above, both AFNI and SUMA windows will show [A2]
when -npb is 2.
-visual: Equivalent to using -talk_suma -feed_afni -send_kth 5
-debug DBG: debug levels of 0 (default), 1, 2, 3.
This is no Rick Reynolds debug, which is oft nicer
than the results, but it will do.
-node_debug NODE_DBG: Output lots of parameters for node
NODE_DBG for each iteration.
The next 3 options are for specifying surface coordinates
to keep the program from having to recompute them.
The options are only useful for saving time during debugging.
-brain_contour_xyz_file BRAIN_CONTOUR_XYZ.1D
-brain_hull_xyz_file BRAIN_HULL_XYZ.1D
-skull_outer_xyz_file SKULL_OUTER_XYZ.1D
-help: The help you need
[-novolreg]: Ignore any Rotate, Volreg, Tagalign,
or WarpDrive transformations present in
the Surface Volume.
[-noxform]: Same as -novolreg
[-setenv "'ENVname=ENVvalue'"]: Set environment variable ENVname
to be ENVvalue. Quotes are necessary.
Example: suma -setenv "'SUMA_BackgroundColor = 1 0 1'"
See also options -update_env, -environment, etc
in the output of 'suma -help'
Common Debugging Options:
[-trace]: Turns on In/Out debug and Memory tracing.
For speeding up the tracing log, I recommend
you redirect stdout to a file when using this option.
For example, if you were running suma you would use:
suma -spec lh.spec -sv ... > TraceFile
This option replaces the old -iodbg and -memdbg.
[-TRACE]: Turns on extreme tracing.
[-nomall]: Turn off memory tracing.
[-yesmall]: Turn on memory tracing (default).
NOTE: For programs that output results to stdout
(that is to your shell/screen), the debugging info
might get mixed up with your results.
Global Options (available to all AFNI/SUMA programs)
-h: Mini help; at times, same as -help in many cases.
-help: The entire help output
-HELP: Extreme help, same as -help in majority of cases.
-h_view: Open help in text editor. AFNI will try to find a GUI editor
-hview : on your machine. You can control which it should use by
setting environment variable AFNI_GUI_EDITOR.
-h_web: Open help in web browser. AFNI will try to find a browser.
-hweb : on your machine. You can control which it should use by
setting environment variable AFNI_GUI_EDITOR.
-h_find WORD: Look for lines in this program's -help output that match
(approximately) WORD.
-h_raw: Help string unedited
-h_spx: Help string in sphinx loveliness, but do not try to autoformat
-h_aspx: Help string in sphinx with autoformatting of options, etc.
-all_opts: Try to identify all options for the program from the
output of its -help option. Some options might be missed
and others misidentified. Use this output for hints only.
Compile Date:
May 6 2025
Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov
AFNI program: 3dSliceNDice
OVERVIEW ~1~
This program is for calculating the Dice coefficient between two
volumes on a slice-by-slice basis. The user enters two volumes on the
same grid, and Dice coefficients along each axis are calculated; three
separate text (*.1D) files are output.
The Dice coefficient (Dice, 1945) is known by many names and in many
applications. In the present context it is defined as follows.
Consider two sets of voxels (i.e., masks), A and B. The Dice coefficient
D is twice the size of their intersection divided by the sum of their
sizes. Let N(x) be a function that calculates the number of voxels in a
set x. Then:
D = 2*N(intersection of A and B)/(N(A) + N(B)).
The range of D is 0 (no overlap of A and B at all) to 1 (perfect
overlap of A and B), inclusively.
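For example (an illustrative calculation, not from the original help):
if a slice contains 100 voxels of A, 80 voxels of B, and 60 voxels in
their intersection, then D = 2*60/(100+80) = 120/180, or about 0.67.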
This program calculates D in a slicewise manner across all 3 major
axes of a dset; other programs of interest for a volumewise Dice
coefficient or more general overlap calculations include 3dABoverlap,
for example.
Nonzero values in a dset are considered part of the mask. 3dcalc
might be useful in creating a mask from a dset if things like
thresholding are required.
written by PA Taylor (NIMH, NIH).
USAGE ~1~
Input:
+ two single-volume datasets
Output:
+ three text files, each a *.1D file of columns of numbers (and
note that the executed 3dSliceNDice command is echoed into a
comment in the top line of each 1D file on output). File name
indicates along which axis the particular results were
calculated, such as ending in '0_RL.1D', '1_AP.1D', '2_IS.1D',
etc.
For each file, there are currently 5 columns of data output,
in the following order:
[index] the i, j, or k index of the slice (starting from 0).
[coord] the x, y, or z coordinate of the slice.
[size of A ROI] the number of voxels in set A's ROI in the slice.
[size of B ROI] the number of voxels in set B's ROI in the slice.
[Dice coef] the Dice coefficient of that slice.
1dplot can be useful for viewing output results quickly.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
COMMAND ~1~
3dSliceNDice \
-insetA AA \
-insetB BB \
-prefix PP \
{-out_domain all|AorB|AandB|Amask|Bmask}
where
-insetA AA :name of an input set to make a mask from; mask will
be made from nonzero values in AA;
-insetB BB :name of an input set to make a mask from; mask will
be made from nonzero values in BB;
-prefix PP :prefix of output files.
Three output text files will be named
according to the orientation of the input AA
and BB files. So, outputs might look like:
PP_0_RL.1D or PP_0_LR.1D,
PP_1_AP.1D or PP_1_PA.1D,
PP_2_IS.1D or PP_2_SI.1D.
-out_domain all|AorB|AandB
:optional specification of the slices over which to
output Dice coefficient results along each axis,
via keyword. Argument options at present:
'all': report Dice values for all slices (default);
'AorB': report values only in slices where sets A or
B (or both) have at least one nonzero voxel;
'AandB': report values only in slices where both sets
A and B have at least one nonzero voxel;
'Amask': report values only in slices where set A
has at least one nonzero voxel;
'Bmask': report values only in slices where set B
has at least one nonzero voxel;
-no_cmd_echo :turn OFF recording the command line call to
3dSliceNDice in the output *.1D files (default is
to do the recording).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
EXAMPLES ~1~
1. Report slicewise overlap of two masks through full FOV along each
axis.
3dSliceNDice \
-insetA mask_1.nii.gz \
-insetB mask_2.nii.gz \
-prefix mask_olap_all
2. Report slicewise overlap of two masks only for slices where both
dsets have >0 voxels in their masks
3dSliceNDice \
-insetA mask_1.nii.gz \
-insetB mask_2.nii.gz \
-out_domain AandB \
-prefix mask_olap_AandB
To view the SliceNDice results: NB, you can use 1dplot for viewing
either of the above output results, choosing slice number or DICOM
coordinate value for the abscissa (x-axis) value.
# use integer index values along x-axis of the plot, for one
# encoding direction of the volume:
1dplot -x mask_olap_all_1_PA.1D'[0]' mask_olap_all_1_PA.1D'[4]'
# use DICOM coordinate values along x-axis of the plot:
1dplot -x mask_olap_all_1_PA.1D'[1]' mask_olap_all_1_PA.1D'[4]'
# ----------------------------------------------------------------------
AFNI program: 3dSpaceTimeCorr
3dSpaceTimeCorr
v1.2 (PA Taylor, Aug. 2019)
This program is for calculating something *similar* to the (Pearson)
correlation coefficient between corresponding voxels between two data
sets, which is what 3dTcorrelate does. However, this program
operates differently. Here, two data sets are loaded in, and for each
voxel in the brain:
+ for each data set, an ijk-th voxel is used as a seed to generate a
correlation map within a user-defined mask (e.g., whole brain,
excluding the seed location where r==1, by definition);
+ that correlation map is Fisher Z transformed;
+ the Z-correlation maps are (Pearson) correlated with each other,
generating a single correlation coefficient;
+ the correlation coefficient is stored at the same ijk-th voxel
location in the output data set;
and the process is repeated. Thus, the output is a whole brain map
of r-correlation coefficients for corresponding voxels from the two data
sets, generated by temporal and spatial patterns (-> space+time
correlation!).
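(For reference, and not part of the original help text: the Fisher Z
transform of a correlation value r is z = atanh(r) = 0.5*ln((1+r)/(1-r)),
which makes the correlation values more nearly normally distributed
before they are themselves correlated.)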
This could be useful when someone *wishes* that s/he could use
3dTcorrelate on something like resting state FMRI data. Maybe.
Note that this program could take several minutes or more to run,
depending on the size of the data set and mask.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ USAGE: Load in 2 data sets and a mask. This computation can get pretty
time consuming-- it depends on the number of voxels N like N**2.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMAND: two 4D data sets need to be put in (order doesn't matter),
and a mask also *should* be.
3dSpaceTimeCorr -insetA FILEA -insetB FILEB -prefix PREFIX \
{-mask MASK} {-out_Zcorr}
{-freeze_insetA_ijk II JJ KK}
{-freeze_insetA_xyz XX YY ZZ}
where:
-insetA FILEA :one 4D data set.
-insetB FILEB :another 4D data set; must have same spatial dimensions as
FILEA, as well as same number of time points.
-mask MASK :optional mask. Highly recommended to use for speed of
calcs (and probably for interpretability, too).
-prefix PREFIX :output filename/base.
-out_Zcorr :switch to output Fisher Z transform of spatial map
correlation (default is Pearson r values).
-freeze_insetA_ijk II JJ KK
:instead of correlating the spatial correlation maps
of A and B that have matching seed locations, with this
option you can 'freeze' the seed voxel location in
the input A dset, while the seed location in B moves
throughout the volume or mask as normal.
Here, one inputs three values, the ijk indices in
the dataset. (See next opt for freezing at xyz location.)
-freeze_insetA_xyz XX YY ZZ
:same behavior as using '-freeze_insetA_ijk ..', but here
one inputs the xyz (physical coordinate) indices.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ OUTPUT:
A data set with one value at each voxel, representing the space-time
correlation of the two input data sets within the input mask.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3dSpaceTimeCorr \
-insetA SUB_01.nii.gz \
-insetB SUB_02.nii.gz \
-mask mask_GM.nii.gz \
-prefix stcorr_01_02
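A second, illustrative example (not from the original help; the seed
indices are arbitrary), freezing the seed of data set A at voxel
(i,j,k) = (30, 40, 25) while the seed in B moves through the mask:
   3dSpaceTimeCorr \
       -insetA SUB_01.nii.gz \
       -insetB SUB_02.nii.gz \
       -mask mask_GM.nii.gz \
       -freeze_insetA_ijk 30 40 25 \
       -prefix stcorr_frozenA_01_02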
____________________________________________________________________________
AFNI program: 3dStatClust
++ 3dStatClust: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
Perform agglomerative hierarchical clustering for user specified
parameter sub-bricks, for all voxels whose threshold statistic
is above a user specified value.
Usage: 3dStatClust options datasets
where the options are:
-prefix pname = Use 'pname' for the output dataset prefix name.
OR [default='SC']
-output pname
-session dir = Use 'dir' for the output dataset session directory.
[default='./'=current working directory]
-verb = Print out verbose output as the program proceeds.
Options for calculating distance between parameter vectors:
-dist_euc = Calculate Euclidean distance between parameters
-dist_ind = Statistical distance for independent parameters
-dist_cor = Statistical distance for correlated parameters
The default option is: Euclidean distance.
-thresh t tname = Use threshold statistic from file tname.
Only voxels whose threshold statistic is greater
than t in absolute value will be considered.
[If file tname contains more than 1 sub-brick,
the threshold stat. sub-brick must be specified!]
-nclust n = This specifies the maximum number of clusters for
output (= number of sub-bricks in output dataset).
Command line arguments after the above are taken as parameter datasets.
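Example (an illustrative command, not from the original help; the
dataset names and sub-brick indices are hypothetical):
   3dStatClust -prefix SC_example -verb -dist_euc \
               -thresh 4.0 'stats+tlrc[2]' -nclust 10 \
               'stats+tlrc[1]' 'stats+tlrc[3]'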
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dSurf2Vol
3dSurf2Vol - map data from a surface domain to an AFNI volume domain
usage: 3dSurf2Vol [options] -spec SPEC_FILE -surf_A SURF_NAME \
-grid_parent AFNI_DSET -sv SURF_VOL \
-map_func MAP_FUNC -prefix OUTPUT_DSET
This program is meant to take as input a pair of surfaces,
optionally including surface data, and an AFNI grid parent
dataset, and to output a new AFNI dataset consisting of the
surface data mapped to the dataset grid space. The mapping
function determines how to map the surface values from many
nodes to a single voxel.
Surfaces (from the spec file) are specified using '-surf_A'
(and '-surf_B', if a second surface is input). If two
surfaces are input, then the computed segments over node
pairs will be in the direction from surface A to surface B.
The basic form of the algorithm is:
o for each node pair (or single node)
o form a segment based on the xyz node coordinates,
adjusted by any '-f_pX_XX' options
o divide the segment up into N steps, according to
the '-f_steps' option
o for each segment point
o if the point is outside the space of the output
dataset, skip it
o locate the voxel in the output dataset which
corresponds to this segment point
o if the '-cmask' option was given, and the voxel
is outside the implied mask, skip it
o if the '-f_index' option is by voxel, and this
voxel has already been considered, skip it
o insert the surface node value, according to the
user-specified '-map_func' option
Surface Coordinates:
Surface coordinates are assumed to be in the Dicom
orientation. This information may come from the option
pair of '-spec' and '-sv', with which the user provides
the name of the SPEC FILE and the SURFACE VOLUME, along
with '-surf_A' and optionally '-surf_B', used to specify
actual surfaces by name. Alternatively, the surface
coordinates may come from the '-surf_xyz_1D' option.
See these option descriptions below.
Note that the user must provide either the three options
'-spec', '-sv' and '-surf_A', or the single option,
'-surf_xyz_1D'.
Surface Data:
Surface domain data can be input via the '-sdata_1D'
or '-sdata' option. In such a case, the data is with
respect to the input surface.
Note: With -sdata_1D, the first column of the file
should contain a node's index, and following columns are
that node's data. See the '-sdata_1D' option for more info.
Option -sdata takes NIML or GIFTI input which contain
node index information in their headers.
If the surfaces have V values per node (pair), then the
resulting AFNI dataset will have V sub-bricks (unless the
user applies the '-data_expr' option).
Mapping Functions:
Mapping functions exist because a single volume voxel may
be occupied by multiple surface nodes or segment points.
Depending on how dense the surface mesh is, the number of
steps provided by the '-f_steps' option, and the indexing
type from '-f_index', even a voxel which is only 1 cubic
mm in volume may have quite a few contributing points.
The mapping function defines how multiple surface values
are combined to get a single result in each voxel. For
example, the 'max' function will take the maximum of all
surface values contributing to each given voxel.
Current mapping functions are listed under the '-map_func'
option, below.
------------------------------------------------------------
examples:
1. Map a single surface to an anatomical volume domain,
creating a simple mask of the surface. The output
dataset will be fred_surf+orig, and the orientation and
grid spacing will follow that of the grid parent. The
output voxels will be 1 where the surface exists, and 0
elsewhere.
3dSurf2Vol \
-spec fred.spec \
-surf_A pial \
-sv fred_anat+orig \
-grid_parent fred_anat+orig \
-map_func mask \
-prefix fred_surf
2. Map the cortical grey ribbon (between the white matter
surface and the pial surface) to an AFNI volume, where
the resulting volume is restricted to the mask implied by
the -cmask option.
Surface data will come from the file sdata_10.1D, which
has 10 values per node, and lists only a portion of the
entire set of surface nodes. Each node pair will form
a segment of 15 equally spaced points, the values from
which will be applied to the output dataset according to
the 'ave' filter. Since the index is over points, each
of the 15 points will have its value applied to the
appropriate voxel, even multiple times. This weights the
resulting average by the fraction of each segment that
occupies a given voxel.
The output dataset will have 10 sub-bricks, according to
the 10 values per node index in sdata_10.1D.
3dSurf2Vol \
-spec fred.spec \
-surf_A smoothwm \
-surf_B pial \
-sv fred_anat+orig \
-grid_parent 'fred_func+orig[0]' \
-cmask '-a fred_func+orig[2] -expr step(a-0.6)' \
-sdata_1D sdata_10.1D \
-map_func ave \
-f_steps 15 \
-f_index points \
-prefix fred_surf_ave
3. The inputs in this example are identical to those in
example 2, including the surface dataset, sdata_10.1D.
Again, the output dataset will have 10 sub-bricks.
The surface values will be applied via the 'max_abs'
filter, with the intention of assigning to each voxel the
node value with the most significance. Here, the index
method does not matter, so it is left as the default,
'voxel'.
In this example, each node pair segment will be extended
by 20% into the white matter, and by 10% outside of the
grey matter, generating a "thicker" result.
3dSurf2Vol \
-spec fred.spec \
-surf_A smoothwm \
-surf_B pial \
-sv fred_anat+orig \
-grid_parent 'fred_func+orig[0]' \
-cmask '-a fred_func+orig[2] -expr step(a-0.6)' \
-sdata_1D sdata_10.1D \
-map_func max_abs \
-f_steps 15 \
-f_p1_fr -0.2 \
-f_pn_fr 0.1 \
-prefix fred_surf_max_abs
4. This is similar to example 2. Here, the surface nodes
(coordinates) come from 'surf_coords_2.1D'. But these
coordinates do not happen to be in Dicom orientation,
they are in the same orientation as the grid parent, so
the '-sxyz_orient_as_gpar' option is applied.
Even though the data comes from 'sdata_10.1D', the output
AFNI dataset will only have 1 sub-brick. That is because
of the '-data_expr' option. Here, each applied surface
value will be the average of the sines of the first 3
data values (columns of sdata_10.1D).
3dSurf2Vol \
-surf_xyz_1D surf_coords_2.1D \
-sxyz_orient_as_gpar \
-grid_parent 'fred_func+orig[0]' \
-sdata_1D sdata_10.1D \
-data_expr '(sin(a)+sin(b)+sin(c))/3' \
-map_func ave \
-f_steps 15 \
-f_index points \
-prefix fred_surf_ave_sine
5. In this example, voxels will get the maximum value from
column 3 of sdata_10.1D (as usual, column 0 is used for
node indices). The output dataset will have 1 sub-brick.
Here, the output dataset is forced to be of type 'short',
regardless of what the grid parent is. Also, there will
be no scaling factor applied.
To track the numbers for surface node #1234, the '-dnode'
option has been used, along with '-debug'. Additionally,
'-dvoxel' is used to track the results for voxel #6789.
3dSurf2Vol \
-spec fred.spec \
-surf_A smoothwm \
-surf_B pial \
-sv fred_anat+orig \
-grid_parent 'fred_func+orig[0]' \
-sdata_1D sdata_10.1D'[0,3]' \
-map_func max \
-f_steps 15 \
-datum short \
-noscale \
-debug 2 \
-dnode 1234 \
-dvoxel 6789 \
-prefix fred_surf_max
6. Draw some surface ROIs, and map them to the volume. Some
voxels may contain nodes from multiple ROIs, so take the
most common one (the mode), as suggested by R Mruczek.
ROIs are left in 1D format for the -sdata_1D option.
setenv AFNI_NIML_TEXT_DATA YES
ROI2dataset -prefix rois.1D.dset -input rois.niml.roi
3dSurf2Vol \
-spec fred.spec \
-surf_A smoothwm \
-surf_B pial \
-sv fred_anat+orig \
-grid_parent 'fred_func+orig[0]' \
-sdata_1D rois.1D.dset \
-map_func mode \
-f_steps 10 \
-prefix rois.from.surf
------------------------------------------------------------
REQUIRED COMMAND ARGUMENTS:
-spec SPEC_FILE : SUMA spec file
e.g. -spec fred.spec
The surface specification file contains the list of
mappable surfaces that are used.
See @SUMA_Make_Spec_FS and @SUMA_Make_Spec_SF.
Note: this option, along with '-sv', may be replaced
by the '-surf_xyz_1D' option.
-surf_A SURF_NAME : specify surface A (from spec file)
-surf_B SURF_NAME : specify surface B (from spec file)
e.g. -surf_A smoothwm
e.g. -surf_A lh.smoothwm
e.g. -surf_B lh.pial
This parameter is used to tell the program which surfaces
to use. The '-surf_A' parameter is required, but the
'-surf_B' parameter is optional.
The surface names must uniquely match those in the spec
file, though a sub-string match is good enough. The
surface names are compared with the names of the surface
node coordinate files.
For instance, given a spec file that has only the left
hemisphere in it, 'pial' should produce a unique match
with lh.pial.asc. But if both hemispheres are included,
then 'pial' would not be unique (matching rh.pial.asc,
also). In that case, 'lh.pial' would be better.
-sv SURFACE_VOLUME : AFNI dataset
e.g. -sv fred_anat+orig
This is the AFNI dataset that the surface is mapped to.
This dataset is used for the initial surface node to xyz
coordinate mapping, in the Dicom orientation.
Note: this option, along with '-spec', may be replaced
by the '-surf_xyz_1D' option.
-surf_xyz_1D SXYZ_NODE_FILE : 1D coordinate file
e.g. -surf_xyz_1D my_surf_coords.1D
This ascii file contains a list of xyz coordinates to be
considered as a surface, or 2 sets of xyz coordinates to
be considered as a surface pair. As usual, these points
are assumed to be in Dicom orientation. Another option
for coordinate orientation is to use that of the grid
parent dataset. See '-sxyz_orient_as_gpar' for details.
This option is an alternative to the pair of options,
'-spec' and '-sv'.
The number of rows of the file should equal the number
of nodes on each surface. The number of columns should
be either 3 for a single surface, or 6 for two surfaces.
sample line of an input file (one surface):
11.970287 2.850751 90.896111
sample line of an input file (two surfaces):
11.97 2.85 90.90 12.97 2.63 91.45
-grid_parent AFNI_DSET : AFNI dataset
e.g. -grid_parent fred_function+orig
This dataset is used as a grid and orientation master
for the output AFNI dataset.
-map_func MAP_FUNC : surface to dataset function
e.g. -map_func max
e.g. -map_func mask -f_steps 20
This function applies to the case where multiple data
points get mapped to a single voxel, which is expected
since surfaces tend to have a much higher resolution
than AFNI volumes. In the general case data points come
from each point on each partitioned line segment, with
one segment per node pair. Note that these segments may
have length zero, such as when only a single surface is
input.
See "Mapping Functions" above, for more information.
The current mapping function for one surface is:
mask : For each xyz location, set the corresponding
voxel to 1.
The current mapping functions for two surfaces are as
follows. These descriptions are per output voxel, and
over the values of all points mapped to a given voxel.
mask2 : if any points are mapped to the voxel, set
the voxel value to 1
ave : average all values
nzave : ave, but ignoring any zero values
count : count the number of mapped data points
min : find the minimum value from all mapped points
max : find the maximum value from all mapped points
max_abs: find the number with maximum absolute value
(the resulting value will retain its sign)
median : median of all mapped values
nzmedian: median, but ignoring any zero values
mode : apply the most common value per voxel
(minimum mode, if they are not unique)
(appropriate where surf ROIs overlap)
nzmode : mode, but ignoring any zero values
-prefix OUTPUT_PREFIX : prefix for the output dataset
e.g. -prefix anat_surf_mask
This is used to specify the prefix of the resulting AFNI
dataset.
------------------------------
SUB-SURFACE DATA FILE OPTIONS:
-sdata_1D SURF_DATA.1D : 1D sub-surface file, with data
e.g. -sdata_1D roi3.1D
This is used to specify a 1D file, which contains
surface indices and data. The indices refer to the
surface(s) read from the spec file.
The format of this data file is a surface index and a
list of data values on each row. To be a valid 1D file,
each row must have the same number of columns.
-sdata SURF_DATA_DSET: NIML, or GIFTI formatted dataset.
------------------------------
OPTIONS SPECIFIC TO SEGMENT SELECTION:
(see "The basic form of the algorithm" for more details)
-f_steps NUM_STEPS : partition segments
e.g. -f_steps 10
default: -f_steps 2 (or 1, the number of surfaces)
This option specifies the number of points to divide
each line segment into, before mapping the points to the
AFNI volume domain. The default is the number of input
surfaces (usually, 2). The default operation is to have
the segment endpoints be the actual surface nodes,
unless they are altered with the -f_pX_XX options.
-f_index TYPE : index by points or voxels
e.g. -f_index points
e.g. -f_index voxels
default: -f_index voxels
Along a single segment, the default operation is to
apply only those points mapping to a new voxel. The
effect of the default is that a given voxel will have
at most one value applied per node pair.
If the user applies this option with 'points' or 'nodes'
as the argument, then every point along the segment will
be applied. This may be preferred if, for example, the
user wishes to have the average weighted by the number
of points occupying a voxel, not just the number of node
pair segments.
Note: the following -f_pX_XX options are used to alter the
locations of the segment endpoints, per node pair.
The segments are directed, from the node on the first
surface to the node on the second surface. To modify
the first endpoint, use a -f_p1_XX option, and use
-f_pn_XX to modify the second.
-f_p1_fr FRACTION : offset p1 by a length fraction
e.g. -f_p1_fr -0.2
e.g. -f_p1_fr -0.2 -f_pn_fr 0.2
This option moves the first endpoint, p1, by a distance
of the FRACTION times the original segment length. If
the FRACTION is positive, it moves in the direction of
the second endpoint, pn.
In the example, p1 is moved by 20% away from pn, which
will increase the length of each segment.
-f_pn_fr FRACTION : offset pn by a length fraction
e.g. -f_pn_fr 0.2
e.g. -f_p1_fr -0.2 -f_pn_fr 0.2
This option moves pn by a distance of the FRACTION times
the original segment length, in the direction from p1 to
pn. So a positive fraction extends the segment, and a
negative fraction reduces it.
In the example above, using 0.2 adds 20% to the segment
length past the original pn.
-f_p1_mm DISTANCE : offset p1 by a distance in mm.
e.g. -f_p1_mm -1.0
e.g. -f_p1_mm -1.0 -f_pn_fr 1.0
This option moves p1 by DISTANCE mm., in the direction
of pn. If the DISTANCE is positive, the segment gets
shorter. If DISTANCE is negative, the segment will get
longer.
In the example, p1 is moved away from pn, extending the
segment by 1 millimeter.
-f_pn_mm DISTANCE : offset pn by a distance in mm.
e.g. -f_pn_mm 1.0
e.g. -f_p1_mm -1.0 -f_pn_mm 1.0
This option moves pn by DISTANCE mm., in the direction
from the first point to the second. So if DISTANCE is
positive, the segment will get longer. If DISTANCE is
negative, the segment will get shorter.
In the example, pn is moved 1 millimeter farther from
p1, extending the segment by that distance.
-stop_gap : stop when a zero gap has been hit
This limits segment processing such that once a non-zero
mask value has been encountered, the segment will be
terminated on any subsequent zero mask value.
The goal is to prevent mixing masked cortex regions.
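For instance, a minimal sketch of '-stop_gap' (reusing the fred inputs
from the examples above; the output prefix is hypothetical):
    3dSurf2Vol \
       -spec fred.spec \
       -surf_A smoothwm \
       -surf_B pial \
       -sv fred_anat+orig \
       -grid_parent 'fred_func+orig[0]' \
       -cmask '-a fred_func+orig[2] -expr step(a-0.6)' \
       -sdata_1D sdata_10.1D \
       -map_func ave \
       -f_steps 15 \
       -stop_gap \
       -prefix fred_surf_ave_stopgap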
------------------------------
GENERAL OPTIONS:
-cmask MASK_COMMAND : command for dataset mask
e.g. -cmask '-a fred_func+orig[2] -expr step(a-0.8)'
This option will produce a mask to be applied to the
output dataset. Note that this mask should form a
single sub-brick.
This option follows the style of 3dmaskdump (since the
code for it was, uh, borrowed from there (thanks Bob!)).
See '3dmaskdump -help' for more information.
-data_expr EXPRESSION : apply expression to surface input
e.g. -data_expr 17
e.g. -data_expr '(a+b+c+d)/4'
e.g. -data_expr '(sin(a)+sin(b))/2'
This expression is applied to the list of data values
from the surface data file input via '-sdata_1D'. The
expression is applied for each node or node pair, to the
list of data values corresponding to that node.
The letters 'a' through 'z' may be used as input, and
refer to columns 1 through 26 of the data file (where
column 0 is a surface node index). The data file must
have enough columns to support the expression. It is
valid to have a constant expression without a data file.
-datum DTYPE : set data type in output dataset
e.g. -datum short
default: based on the map function
(was grid_parent, but that made little sense)
This option specifies the data type for the output data
volume. Valid choices are byte, short and float, which
are 1, 2 and 4 bytes for each data point, respectively.
The default is based on the map function, generally
implying float, unless using mask or mask2 (byte), or
count or mode (short).
-debug LEVEL : verbose output
e.g. -debug 2
This option is used to print out status information
during the execution of the program. Current levels are
from 0 to 5.
-dnode DEBUG_NODE : extra output for that node
e.g. -dnode 123456
This option requests additional debug output for the
given surface node. This index is with respect to the
input surface (included in the spec file, or through the
'-surf_xyz_1D' option).
This will have no effect without the '-debug' option.
-dvoxel DEBUG_VOXEL : extra output for that voxel
e.g. -dvoxel 234567
This option requests additional debug output for the
given volume voxel. This 1-D index is with respect to
the output data volume. One good way to find a voxel
index to supply is from output via the '-dnode' option.
This will have no effect without the '-debug' option.
-hist : show revision history
Display module history over time.
-help : show this help
If you can't get help here, please get help somewhere.
-noscale : no scale factor in output dataset
If the output dataset is an integer type (byte, short,
or int), then the output dataset may end up with a
scale factor attached (see 3dcalc -help). With this
option, the output dataset will not be scaled.
-sxyz_orient_as_gpar : assume gpar orientation for sxyz
This option specifies that the surface coordinate points
in the '-surf_xyz_1D' option file have the orientation
of the grid parent dataset.
When the '-surf_xyz_1D' option is applied the surface
coordinates are assumed to be in Dicom orientation, by
default. This '-sxyz_orient_as_gpar' option overrides
the Dicom default, specifying that the node coordinates
are in the same orientation as the grid parent dataset.
See the '-surf_xyz_1D' option for more information.
-version : show version information
Show version and compile date.
------------------------------------------------------------
Author: R. Reynolds - version 3.10 (June 22, 2021)
(many thanks to Z. Saad and R.W. Cox)
AFNI program: 3dSurfMask
Usage: 3dSurfMask <-i_TYPE SURFACE> <-prefix PREFIX>
[<-fill_method METH>]
<-grid_parent GRID_VOL> [-sv SURF_VOL] [-mask_only]
Creates 2 volumetric datasets that mark voxels based on their
location relative to the surface.
Voxels in the first volume (named PREFIX.m) label voxel positions
relative to the surface. With -fill_method set to FAST, you get
a CRUDE mask with voxel values set to the following:
0: Voxel outside surface
1: Voxel just outside the surface. This means the voxel
center is outside the surface but inside the
bounding box of a triangle in the mesh.
2: Voxel intersects the surface (a triangle),
but center lies outside.
3: Voxel contains a surface node.
4: Voxel intersects the surface (a triangle),
center lies inside surface.
5: Voxel just inside the surface. This means the voxel
center is inside the surface and inside the
bounding box of a triangle in the mesh.
6: Voxel inside the surface.
Masks obtained with -fill_method FAST could have holes in them.
To decide on whether a voxel lies inside or outside the surface
you should use the signed distances in PREFIX.d below, or use
-fill_method slow.
With -fill_method set to SLOW you get a better mask with voxels set
to the following:
0: Voxel outside surface
1: Voxel outside the surface but in its bounding box
2: Voxel inside the surface
Voxel values in the second volume (named PREFIX.d) reflect the
shortest distance of voxels in PREFIX.m to the surface.
The distances are signed to reflect whether a voxel is inside
or outside the surface. Voxels inside the surface have positive
distances, voxels outside have negative distances.
If the signs appear reversed, use option -flip_orientation.
Mandatory Parameters:
-i_TYPE SURFACE: Specify input surface.
You can also use -t* and -spec and -surf
methods to input surfaces. See below
for more details.
-prefix PREFIX: Prefix of output dataset.
-grid_parent GRID_VOL: Specifies the grid for the
output volume.
Other parameters:
-mask_only: Produce an output dataset where voxels
are 1 inside the surface and 0 outside,
instead of the more nuanced output above.
-flip_orientation: Flip triangle winding of surface mesh.
Use this option when the sign of the distances
in PREFIX.d comes out wrong (voxels inside
the surface should have a positive distance).
This can happen when the winding of the triangles
is reversed.
-fill_method METH: METH can take one of two values: SLOW or FAST [default].
FAST can produce holes under certain conditions.
-no_dist: Do not compute the distances, just the mask from the first
step.
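As a minimal sketch (using the toy surface and volume built in the
example below; the output prefix is hypothetical), a simple hole-free
binary mask could be produced with:
    3dSurfMask -i_fs CreateIco.asc -sv ToyVolume+tlrc \
               -grid_parent ToyVolume+tlrc \
               -fill_method SLOW -mask_only \
               -prefix ToyBinMask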
Example: (tcsh syntax)
1- Find distance of voxels around and inside of toy surface:
echo 'Create toy data'
@auto_tlrc -base TT_N27+tlrc -base_copy ToyVolume
CreateIcosahedron -rad 50 -ld 1
sed 's/Anatomical = N/Anatomical = Y/' CreateIco.spec > __ttt
mv __ttt CreateIco.spec
echo 'Do computations'
3dSurfMask -i_fs CreateIco.asc -sv ToyVolume+tlrc \
-prefix ToyMasks -flip_orientation \
-grid_parent ToyVolume+tlrc
echo 'Cut and paste commands below to show you the results'
suma -npb 70 -niml -spec CreateIco.spec -sv ToyVolume+tlrc &
afni -npb 70 -niml -yesplugouts &
DriveSuma -npb 70 -com viewer_cont -key 't'
plugout_drive -npb 70 -com 'SET_OVERLAY A ToyMasks.d' \
-com 'SET_THRESHOLD A.0' \
-com 'SET_PBAR_NUMBER A.10' \
-quit
See also examples in SurfPatch -help
Specifying input surfaces using -i or -i_TYPE options:
-i_TYPE inSurf specifies the input surface,
TYPE is one of the following:
fs: FreeSurfer surface.
If surface name has .asc it is assumed to be
in ASCII format. Otherwise it is assumed to be
in BINARY_BE (Big Endian) format.
Patches in Binary format cannot be read at the moment.
sf: SureFit surface.
You must specify the .coord followed by the .topo file.
vec (or 1D): Simple ascii matrix format.
You must specify the coord (NodeList) file followed by
the topo (FaceSetList) file.
coord contains 3 floats per line, representing
X Y Z vertex coordinates.
topo contains 3 ints per line, representing
v1 v2 v3 triangle vertices.
ply: PLY format, ascii or binary.
Only vertex and triangulation info is preserved.
stl: STL format, ascii or binary.
This format is of no use for much of the surface-based
analyses. Objects are defined as a soup of triangles
with no information about which edges they share. STL is only
useful for taking surface models to some 3D printing
software.
mni: MNI .obj format, ascii only.
Only vertex, triangulation, and node normals info is preserved.
byu: BYU format, ascii.
Polygons with more than 3 edges are turned into
triangles.
bv: BrainVoyager format.
Only vertex and triangulation info is preserved.
dx: OpenDX ascii mesh format.
Only vertex and triangulation info is preserved.
Requires presence of 3 objects, the one of class
'field' should contain 2 components 'positions'
and 'connections' that point to the two objects
containing node coordinates and topology, respectively.
gii: GIFTI XML surface format.
obj: OBJ file format for triangular meshes only. The following
primitives are preserved: v (vertices), f (faces, triangles
only), and p (points)
Note that if the surface filename has the proper extension,
it is enough to use the -i option and let the programs guess
the type from the extension.
You can also specify multiple surfaces after -i option. This makes
it possible to use wildcards on the command line for reading in a bunch
of surfaces at once.
-onestate: Make all -i_* surfaces have the same state, i.e.
they all appear at the same time in the viewer.
By default, each -i_* surface has its own state.
For -onestate to take effect, it must precede all -i
options on the command line.
-anatomical: Label all -i surfaces as anatomically correct.
Again, this option should precede the -i_* options.
More variants for option -i:
-----------------------------
You can also load standard-mesh spheres that are formed in memory
with the following notation
-i ldNUM: Where NUM is the parameter controlling
the mesh density exactly as the parameter -ld linDepth
does in CreateIcosahedron. For example:
suma -i ld60
creates on the fly a surface that is identical to the
one produced by: CreateIcosahedron -ld 60 -tosphere
-i rdNUM: Same as -i ldNUM but with NUM specifying the equivalent
of parameter -rd recDepth in CreateIcosahedron.
To keep the option confusing enough, you can also use -i to load
template surfaces. For example:
suma -i lh:MNI_N27:ld60:smoothwm
will load the left hemisphere smoothwm surface for template MNI_N27
at standard mesh density ld60.
The string following -i is formatted thusly:
HEMI:TEMPLATE:DENSITY:SURF where:
HEMI specifies a hemisphere. Choose from 'l', 'r', 'lh' or 'rh'.
You must specify a hemisphere with option -i because it is
supposed to load one surface at a time.
You can load multiple surfaces with -spec which also supports
these features.
TEMPLATE: Specify the template name. For now, choose from MNI_N27 if
you want to use the FreeSurfer reconstructed surfaces from
the MNI_N27 volume, or TT_N27
Those templates must be installed under this directory:
/home/afniHQ/.afni/data/
If you have no surface templates there, download
https://afni.nimh.nih.gov/pub/dist/tgz/suma_MNI_N27.tgz
and/or
https://afni.nimh.nih.gov/pub/dist/tgz/suma_TT_N27.tgz
and/or
https://afni.nimh.nih.gov/pub/dist/tgz/suma_MNI152_2009.tgz
and untar them under directory /home/afniHQ/.afni/data/
DENSITY: Use if you want to load standard-mesh versions of the template
surfaces. Note that only ld20, ld60, ld120, and ld141 are in
the current distributed templates. You can create other
densities if you wish with MapIcosahedron, but follow the
same naming convention to enable SUMA to find them.
SURF: Which surface do you want. The string matching is partial, as long
as the match is unique.
So for example something like: suma -i l:MNI_N27:ld60:smooth
is more than enough to get you the ld60 MNI_N27 left hemisphere
smoothwm surface.
The order in which you specify HEMI, TEMPLATE, DENSITY, and SURF, does
not matter.
For template surfaces, the -sv option is provided automatically, so you
can have SUMA talking to AFNI with something like:
suma -i l:MNI_N27:ld60:smooth &
afni -niml /home/afniHQ/.afni/data/suma_MNI_N27
Specifying surfaces using -t* options:
-tn TYPE NAME: specify surface type and name.
See below for help on the parameters.
-tsn TYPE STATE NAME: specify surface type state and name.
TYPE: Choose from the following (case sensitive):
1D: 1D format
FS: FreeSurfer ascii format
PLY: ply format
MNI: MNI obj ascii format
BYU: byu format
SF: Caret/SureFit format
BV: BrainVoyager format
GII: GIFTI format
NAME: Name of surface file.
For SF and 1D formats, NAME is composed of two names
the coord file followed by the topo file
STATE: State of the surface.
Default is S1, S2.... for each surface.
Specifying a Surface Volume:
-sv SurfaceVolume [VolParam for sf surfaces]
If you supply a surface volume, the coordinates of the input surface
are modified to SUMA's convention and aligned with SurfaceVolume.
You must also specify a VolParam file for SureFit surfaces.
Specifying a surface specification (spec) file:
-spec SPEC: specify the name of the SPEC file.
As with option -i, you can load template
spec files with symbolic notation trickery as in:
suma -spec MNI_N27
which will load all the surfaces from template MNI_N27
at the original FreeSurfer mesh density.
The string following -spec is formatted in the following manner:
HEMI:TEMPLATE:DENSITY where:
HEMI specifies a hemisphere. Choose from 'l', 'r', 'lh', 'rh', 'lr', or
'both' which is the default if you do not specify a hemisphere.
TEMPLATE: Specify the template name. For now, choose from MNI_N27 if
you want surfaces from the MNI_N27 volume, or TT_N27
for the Talairach version.
Those templates must be installed under this directory:
/home/afniHQ/.afni/data/
If you have no surface templates there, download one of:
https://afni.nimh.nih.gov/pub/dist/tgz/suma_MNI_N27.tgz
https://afni.nimh.nih.gov/pub/dist/tgz/suma_TT_N27.tgz
https://afni.nimh.nih.gov/pub/dist/tgz/suma_MNI152_2009.tgz
and untar them under directory /home/afniHQ/.afni/data/
DENSITY: Use if you want to load standard-mesh versions of the template
surfaces. Note that only ld20, ld60, ld120, and ld141 are in
the current distributed templates. You can create other
densities if you wish with MapIcosahedron, but follow the
same naming convention to enable SUMA to find them.
This parameter is optional.
The order in which you specify HEMI, TEMPLATE, and DENSITY, does
not matter.
For template surfaces, the -sv option is provided automatically, so you
can have SUMA talking to AFNI with something like:
suma -spec MNI_N27:ld60 &
afni -niml /home/afniHQ/.afni/data/suma_MNI_N27
Specifying a surface using -surf_? method:
-surf_A SURFACE: specify the name of the first
surface to load. If the program requires
or allows multiple surfaces, use -surf_B
... -surf_Z .
You need not use _A if only one surface is
expected.
SURFACE is the name of the surface as specified
in the SPEC file. The use of -surf_ option
requires the use of -spec option.
Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov
AFNI program: 3dsvm
Program: 3dsvm
+++++++++++ 3dsvm: support vector machine analysis of brain data +++++++++++
3dsvm - temporally predictive modeling with the support vector machine
This program provides the ability to perform support vector machine
(SVM) learning on AFNI datasets using the SVM-light package (version 5)
developed by Thorsten Joachims (http://svmlight.joachims.org/).
-----------------------------------------------------------------------------
Usage:
------
3dsvm [options]
Examples:
---------
1. Training: basic options require a training run, category (class) labels
for each timepoint, and an output model. In general, it usually makes
sense to include a mask file to exclude at least non-brain voxels
3dsvm -trainvol run1+orig \
-trainlabels run1_categories.1D \
-mask mask+orig \
-model model_run1
2. Training: obtain model alphas (a_run1.1D) and
model weights (fim: run1_fim+orig)
3dsvm -alpha a_run1 \
-trainvol run1+orig \
-trainlabels run1_categories.1D \
-mask mask+orig \
-model model_run1 \
-bucket run1_fim
3. Training: exclude some time points using a censor file
3dsvm -alpha a_run1 \
-trainvol run1+orig \
-trainlabels run1_categories.1D \
-censor censor.1D \
-mask mask+orig \
-model model_run1 \
-bucket run1_fim
4. Training: control svm model complexity (C value)
3dsvm -c 100.0 \
-alpha a_run1 \
-trainvol run1+orig \
-trainlabels run1_categories.1D \
-censor censor.1D \
-mask mask+orig \
-model model_run1 \
-bucket run1_fim
5. Training: using a kernel
3dsvm -c 100.0 \
-kernel polynomial -d 2 \
-alpha a_run1 \
-trainvol run1+orig \
-trainlabels run1_categories.1D \
-censor censor.1D \
-mask mask+orig \
-model model_run1
6. Training: using regression
3dsvm -type regression \
-c 100.0 \
-e 0.001 \
-alpha a_run1 \
-trainvol run1+orig \
-trainlabels run1_categories.1D \
-censor censor.1D \
-mask mask+orig \
-model model_run1
7. Testing: basic options require a testing run, a model, and an output
predictions file
3dsvm -testvol run2+orig \
-model model_run1+orig \
-predictions pred2_model1
8. Testing: compare predictions with 'truth'
3dsvm -testvol run2+orig \
-model model_run1+orig \
-testlabels run2_categories.1D \
-predictions pred2_model1
9. Testing: use -classout to output integer thresholded class predictions
(rather than continuous valued output)
3dsvm -classout \
-testvol run2+orig \
-model model_run1+orig \
-testlabels run2_categories.1D \
-predictions pred2_model1
options:
--------
------------------- TRAINING OPTIONS -------------------------------------------
-type tname Specify tname:
classification [default]
regression
to select between classification or regression.
-trainvol trnname A 3D+t AFNI brik dataset to be used for training.
-mask mname Specify a mask dataset to only perform the analysis
on non-zero mask voxels.
++ If '-mask' is not used, '-nomodelmask' must be
specified.
For example, a mask of the whole brain can be
generated by using 3dAutomask, or more specific ROIs
could be generated with the Draw Dataset plugin or
converted from a thresholded functional dataset.
The mask is specified during training but is also
considered part of the model output and is
automatically applied to test data.
-nomodelmask Flag to enable the omission of a mask file. This is
required if '-mask' is not used.
-trainlabels lname lname = filename of class category .1D labels
corresponding to the stimulus paradigm for the
training data set. The number of labels in the
selected file must be equal to the number of
time points in the training dataset. The labels
must be arranged in a single column, and they can
be any of the following values:
0 - class 0
1 - class 1
n - class n (where n is a positive integer)
9999 - censor this point
See also -censor.
-censor cname Specify a .1D censor file that allows the user
to ignore certain samples in the training data.
To ignore a specific sample, put a 0 in the
row corresponding to the time sample - i.e., to
ignore sample t, place a 0 in row t of the file.
All samples that are to be included for training
must have a 1 in the corresponding row. If no
censor file is specified, all samples will be used
for training. Note the lname file specified by
trainlabels can also be used to censor time points
(see -trainlabels).
-kernel kfunc kfunc = string specifying type of kernel function:
linear : <u,v> [default]
polynomial : (s<u,v> + r)^d
rbf : radial basis function
exp(-gamma ||u-v||^2)
sigmoid : tanh(s <u,v> + r)
note: kernel parameters use SVM-light syntax:
-d int : d parameter in polynomial kernel
3 [default]
-g float : gamma parameter in rbf kernel
1.0 [default]
-s float : s parameter in sigmoid/poly kernel
1.0 [default]
-r float : r parameter in sigmoid/poly kernel
1.0 [default]
-max_iterations int Specify the maximum number of iterations for the
optimization. 1 million [default].
-alpha aname Write the alphas to aname.1D
-wout Flag to output sum of weighted linear support
vectors to the bucket file. This is one means of
generating an "activation map" from linear kernel
SVMs (see LaConte et al., 2005). NOTE: this is
currently not required since it is the only output
option.
-bucket bprefix Currently only outputs the sum of weighted linear
support vectors written out to a functional (fim)
brik file. This is one means of generating an
"activation map" from linear kernel SVMs
(see LaConte et al., 2005).
------------------- TRAINING AND TESTING MUST SPECIFY MODNAME ------------------
-model modname modname = basename for the model brik.
Training: modname is the basename for the output
brik containing the SVM model
3dsvm -trainvol run1+orig \
-trainlabels run1_categories.1D \
-mask mask+orig \
-model model_run1
Testing: modname is the name for the input brik
containing the SVM model.
3dsvm -testvol run2+orig \
-model model_run1+orig \
-predictions pred2_model1
-nomodelfile Flag to enable the omission of a model file. This is
required if '-model' is not used during training.
** Be careful, you might not be able to perform testing!
------------------- TESTING OPTIONS --------------------------------------------
-testvol tstname A 3D or 3D+t AFNI brik dataset to be used for testing.
A major assumption is that the training and testing
volumes are aligned, and that they match in number of
voxels, voxel dimensions, etc.
-predictions pname pname = basename for .1D prediction file(s).
Prediction files contain a single column, where each line
holds the predicted value for the corresponding volume in
the test dataset. By default, the predicted values take
on a continuous range; to output integer-valued class
decision values use the -classout flag.
For classification: Values below 0.5 correspond to
(class A) and values above 0.5 to (class B), where A < B.
For more than two classes a separate prediction file for
each possible pair of training classes and one additional
"overall" file containing the predicted (integer-valued)
class membership is generated.
For regression: Each value is the predicted parametric rate
for the corresponding volume in the test dataset.
-classout Flag to specify that pname files should be integer-
valued, corresponding to class category decisions.
-nopredcensored Do not write predicted values for censored time-points
to predictions file.
-nodetrend Flag to specify that pname files should NOT be
linearly detrended (detrending is performed by default).
** Set this option if you are using GLM beta maps as
input for example. Temporal detrending only
makes sense if you are using time-dependent
data (chronological order!) as input.
-nopredscale Do not scale predictions. If used, values below 0.0
correspond to (class A) and values above 0.0 to
(class B).
-testlabels tlname tlname = filename of 'true' class category .1D labels
for the test dataset. It is used to calculate the
prediction accuracy performance of SVM classification.
If this option is not specified, then performance
calculations are not made. Format is the same as
lname specified for -trainlabels.
-multiclass mctype mctype specifies the multiclass algorithm for
classification. Current implementations use 1-vs-1
two-class SVM models.
mctype must be one of the following:
DAG : Directed Acyclic Graph [default]
vote : Max Wins from votes of all 1-vs-1 models
see https://lacontelab.org/3dsvm.htm for details and
references.
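For example, a minimal sketch of choosing the voting scheme (reusing
the run2 test files from the earlier examples, and assuming model_run1
was trained on more than two classes):
    3dsvm -multiclass vote \
          -testvol run2+orig \
          -model model_run1+orig \
          -testlabels run2_categories.1D \
          -predictions pred2_model1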
------------------- INFORMATION OPTIONS ---------------------------------------
-help this help
-version print version history including rough description
of changes
-------------------- SVM-light learn help -----------------------------
SVM-light V5.00: Support Vector Machine, learning module 30.06.02
Copyright: Thorsten Joachims, thorsten@ls8.cs.uni-dortmund.de
This software is available for non-commercial use only. It must not
be modified and distributed without prior permission of the author.
The author is not responsible for implications from the use of this
software.
usage: svm_learn [options] example_file model_file
Arguments:
example_file-> file with training data
model_file -> file to store learned decision rule in
General options:
-? -> this help
-v [0..3] -> level (default 1)
Learning options:
-z {c,r,p} -> select between classification (c), regression (r),
and preference ranking (p) (default classification)
-c float -> C: trade-off between training error
and margin (default [avg. x*x]^-1)
-w [0..] -> epsilon width of tube for regression
(default 0.1)
-j float -> Cost: cost-factor, by which training errors on
positive examples outweigh errors on negative
examples (default 1) (see [4])
-b [0,1] -> use biased hyperplane (i.e. x*w+b>0) instead
of unbiased hyperplane (i.e. x*w>0) (default 1)
-i [0,1] -> remove inconsistent training examples
and retrain (default 0)
Performance estimation options:
-x [0,1] -> compute leave-one-out estimates (default 0)
(see [5])
-o ]0..2] -> value of rho for XiAlpha-estimator and for pruning
leave-one-out computation (default 1.0) (see [2])
-k [0..100] -> search depth for extended XiAlpha-estimator
(default 0)
Transduction options (see [3]):
-p [0..1] -> fraction of unlabeled examples to be classified
into the positive class (default is the ratio of
positive and negative examples in the training data)
Kernel options:
-t int -> type of kernel function:
0: linear (default)
1: polynomial (s a*b+c)^d
2: radial basis function exp(-gamma ||a-b||^2)
3: sigmoid tanh(s a*b + c)
4: user defined kernel from kernel.h
-d int -> parameter d in polynomial kernel
-g float -> parameter gamma in rbf kernel
-s float -> parameter s in sigmoid/poly kernel
-r float -> parameter c in sigmoid/poly kernel
-u string -> parameter of user defined kernel
Optimization options (see [1]):
-q [2..] -> maximum size of QP-subproblems (default 10)
-n [2..q] -> number of new variables entering the working set
in each iteration (default n = q). Set n<q to prevent
zig-zagging.
-m [5..] -> size of cache for kernel evaluations in MB (default 40)
The larger the faster...
-e float -> eps: Allow that error for termination criterion
[y [w*x+b] - 1] >= eps (default 0.001)
-h [5..] -> number of iterations a variable needs to be
optimal before considered for shrinking (default 100)
-f [0,1] -> do final optimality check for variables removed
by shrinking. Although this test is usually
positive, there is no guarantee that the optimum
was found if the test is omitted. (default 1)
Output options:
-l string -> file to write predicted labels of unlabeled
examples into after transductive learning
-a string -> write all alphas to this file after learning
(in the same order as in the training set)
More details in:
[1] T. Joachims, Making Large-Scale SVM Learning Practical. Advances in
Kernel Methods - Support Vector Learning, B. Schoelkopf and C. Burges and
A. Smola (ed.), MIT Press, 1999.
[2] T. Joachims, Estimating the Generalization performance of an SVM
Efficiently. International Conference on Machine Learning (ICML), 2000.
[3] T. Joachims, Transductive Inference for Text Classification using Support
Vector Machines. International Conference on Machine Learning (ICML),
1999.
[4] K. Morik, P. Brockhausen, and T. Joachims, Combining statistical learning
with a knowledge-based approach - A case study in intensive care
monitoring. International Conference on Machine Learning (ICML), 1999.
[5] T. Joachims, Learning to Classify Text Using Support Vector
Machines: Methods, Theory, and Algorithms. Dissertation, Kluwer,
2002.
-------------------- SVM-light classify help -----------------------------
SVM-light V5.00: Support Vector Machine, classification module 30.06.02
Copyright: Thorsten Joachims, thorsten@ls8.cs.uni-dortmund.de
This software is available for non-commercial use only. It must not
be modified and distributed without prior permission of the author.
The author is not responsible for implications from the use of this
software.
usage: svm_classify [options] example_file model_file output_file
options: -h -> this help
-v [0..3] -> verbosity level (default 2)
-f [0,1] -> 0: old output format of V1.0
-> 1: output the value of decision function (default)
--------------------------------------------------------------------------
Significant programming contributions by:
Jeff W. Prescott, William A. Curtis, Ziad Saad, Rick Reynolds,
R. Cameron Craddock, Jonathan M. Lisinski, and Stephen M. LaConte
Original version written by JP and SL, August 2006
Released to general public, July 2007
Questions/Comments/Bugs - email slaconte@vtc.vt.edu
Reference:
LaConte, S., Strother, S., Cherkassky, V. and Hu, X. 2005. Support vector
machines for temporal classification of block design fMRI data.
NeuroImage, 26, 317-329.
Specific to real-time fMRI:
S. M. LaConte. (2011). Decoding fMRI brain states in real-time.
NeuroImage, 56:440-54.
S. M. LaConte, S. J. Peltier, and X. P. Hu. (2007). Real-time fMRI using
brain-state classification. Hum Brain Mapp, 208:1033–1044.
Please also consider to reference:
T. Joachims, Making Large-Scale SVM Learning Practical.
Advances in Kernel Methods - Support Vector Learning,
B. Schoelkopf and C. Burges and A. Smola (ed.), MIT Press, 1999.
RW Cox. AFNI: Software for analysis and visualization of
functional magnetic resonance neuroimages.
Computers and Biomedical Research, 29:162-173, 1996.
AFNI program: 3dsvm_linpredict
Usage: 3dsvm_linpredict [options] w dset
Output = linear prediction for w from 3dsvm
- you can use sub-brick selectors on the dsets
- the result is a number printed to stdout
Options:
-mask mset Means to use the dataset 'mset' as a mask:
Only voxels with nonzero values in 'mset'
will be used from 'dset'. Note
that the mask dataset and the input dataset
must have the same number of voxels.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dSynthesize
Usage: 3dSynthesize options
Reads a '-cbucket' dataset and a '.xmat.1D' matrix from 3dDeconvolve,
and synthesizes a fit dataset using selected sub-bricks and
matrix columns.
Options (actually, the first 3 are mandatory)
---------------------------------------------
-cbucket ccc = Read the dataset 'ccc', which should have been
output from 3dDeconvolve via the '-cbucket' option.
-matrix mmm = Read the matrix 'mmm', which should have been
output from 3dDeconvolve via the '-x1D' option.
-select sss = Selects specific columns from the matrix (and the
corresponding coefficient sub-bricks from the
cbucket). The string 'sss' can be of the forms:
baseline = All baseline coefficients.
polort = All polynomial baseline coefficients
(skipping -stim_base coefficients).
allfunc = All coefficients that are NOT marked
(in the -matrix file) as being in
the baseline (i.e., all -stim_xxx
values except those with -stim_base)
allstim = All -stim_xxx coefficients, including
those with -stim_base.
all = All coefficients (should give results
equivalent to '3dDeconvolve -fitts').
something = All columns/coefficients that match
this -stim_label from 3dDeconvolve
[to be precise, all columns whose ]
[-stim_label starts with 'something']
[will be selected for inclusion. ]
digits = Columns can also be selected by
numbers (starting at 0), or number
ranges of the form 3..7 and 3-7.
[A string is a number range if it]
[comprises only digits and the ]
[characters '.' and/or '-'. ]
[Otherwise, it is used to match ]
[a -stim_label. ]
More than one '-select sss' option can be used, or
you can put more than one string after the '-select',
as in this example:
3dSynthesize -matrix fred.xmat.1D -cbucket fred+orig \
-select baseline FaceStim -prefix FS
which synthesizes the baseline and 'FaceStim'
responses together, ignoring any other stimuli
in the dataset and matrix.
-dry = Don't compute the output, just check the inputs.
-TR dt = Set TR in the output to 'dt'. The default value
of TR is read from the header of the matrix file.
-prefix ppp = Output result into dataset with name 'ppp'.
-cenfill xxx = Determines how censored time points from the
3dDeconvolve run will be filled. 'xxx' is one of:
zero = 0s will be put in at all censored times
nbhr = average of non-censored neighboring times
none = don't put the censored times in at all
(in which case the created dataset is)
(shorter than the input to 3dDeconvolve)
If you don't give some -cenfill option, the default
operation is 'zero'. This default is different than
previous versions of this program, which did 'none'.
**N.B.: You might like the program to compute the model fit
at the censored times, like it does at all others.
This CAN be done if you input the matrix file saved
by the '-x1D_uncensored' option in 3dDeconvolve.
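For instance, a minimal sketch of that approach (file names are
hypothetical), assuming 3dDeconvolve was run with both
'-cbucket fred_cbuc' and '-x1D_uncensored fred_unc.xmat.1D':
    3dSynthesize -cbucket fred_cbuc+orig \
                 -matrix fred_unc.xmat.1D \
                 -select all \
                 -prefix fred_full_fitts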
NOTES:
-- You could do the same thing in 3dcalc, but this way is simpler
and faster. But less flexible, of course.
-- The output dataset is always stored as floats.
-- The -cbucket dataset must have the same number of sub-bricks as
the input matrix has columns.
-- Each column in the matrix file is a time series, used to model
some component of the data time series at each voxel.
-- The sub-bricks of the -cbucket dataset give the weighting
coefficients for these model time series, at each voxel.
-- If you want to calculate a time series dataset wherein the original
time series data has the baseline subtracted, then you could
use 3dSynthesize to compute the baseline time series dataset, and
then use 3dcalc to subtract that dataset from the original dataset
(see the sketch after these notes).
-- Other similar applications are left to your imagination.
-- To see the column labels stored in matrix file 'fred.xmat.1D', type
the Unix command 'grep ColumnLabels fred.xmat.1D'; sample output:
# ColumnLabels = "Run#1Pol#0 ; Run#1Pol#1 ; Run#2Pol#0 ; Run#2Pol#1 ;
FaceStim#0 ; FaceStim#1 ; HouseStim#0 ; HouseStim#1"
which shows the 4 '-polort 1' baseline parameters from 2 separate
imaging runs, and then 2 parameters each for 'FaceStim' and
'HouseStim'.
-- The matrix file written by 3dDeconvolve has an XML-ish header
before the columns of numbers, stored in '#' comment lines.
If you want to generate your own 'raw' matrix file, without this
header, you can still use 3dSynthesize, but then you can only use
numeric '-select' options (or 'all').
-- When using a 'raw' matrix, you'll probably also want the '-TR' option.
-- When putting more than one string after '-select', do NOT combine
these separate strings together in quotes. If you do, they will be
seen as a single string, which almost surely won't match anything.
-- Author: RWCox -- March 2007
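As a minimal sketch of the baseline-subtraction idea mentioned in the
notes above (dataset names are hypothetical):
    3dSynthesize -cbucket fred_cbuc+orig -matrix fred.xmat.1D \
                 -select baseline -prefix fred_base
    3dcalc -a fred_all_runs+orig -b fred_base+orig \
           -expr 'a-b' -prefix fred_nobase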
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTagalign
Usage: 3dTagalign [options] dset
Rotates/translates dataset 'dset' to be aligned with the master,
using the tagsets embedded in their .HEAD files.
Options:
-master mset = Use dataset 'mset' as the master dataset
[this is a nonoptional option]
-tagset tfile = Use the tagset in the .tag file instead of dset.
-nokeeptags = Don't put transformed locations of dset's tags
into the output dataset [default = keep tags]
-matvec mfile = Write the matrix+vector of the transformation to
file 'mfile'. This can be used as input to the
'-matvec_in2out' option of 3dWarp, if you want
to align other datasets in the same way (e.g.,
functional datasets); see the sketch after the option list.
-rotate = Compute the best transformation as a rotation + shift.
This is the default.
-affine = Compute the best transformation as a general affine
map rather than just a rotation + shift. In all
cases, the transformation from input to output
coordinates is of the form
[out] = [R] [in] + [V]
where [R] is a 3x3 matrix and [V] is a 3-vector.
By default, [R] is computed as a proper (det=1)
rotation matrix (3 parameters). The '-affine'
option says to fit [R] as a general matrix
(9 parameters).
N.B.: An affine transformation can rotate, rescale, and
shear the volume. Be sure to look at the dataset
before and after to make sure things are OK.
-rotscl = Compute transformation as a rotation times an isotropic
scaling; that is, [R] is an orthogonal matrix times
a scalar.
N.B.: '-affine' and '-rotscl' do unweighted least squares.
-prefix pp = Use 'pp' as the prefix for the output dataset.
[default = 'tagalign']
-verb = Print progress reports
-dummy = Don't actually rotate the dataset, just compute
the transformation matrix and vector. If
'-matvec' is used, the mfile will be written.
-linear }
-cubic } = Chooses spatial interpolation method.
-NN } = [default = cubic]
-quintic }
Nota Bene:
* The transformation is carried out
using the same methods as program 3dWarp.
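For instance, a minimal sketch (dataset and file names are hypothetical)
of saving the transformation with '-matvec' and applying it to a
companion functional dataset via 3dWarp:
    3dTagalign -master anat_master+orig -matvec fred.matvec \
               -prefix fred_anat_al fred_anat+orig
    3dWarp -matvec_in2out fred.matvec -prefix fred_func_al \
           fred_func+orig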
Author: RWCox - 16 Jul 2000, etc.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTcat
Concatenate sub-bricks from input datasets into one big 3D+time dataset.
Usage: 3dTcat options
where the options are:
-prefix pname = Use 'pname' for the output dataset prefix name.
OR -output pname [default='tcat']
-session dir = Use 'dir' for the output dataset session directory.
[default='./'=current working directory]
-glueto fname = Append bricks to the end of the 'fname' dataset.
This command is an alternative to the -prefix
and -session commands.
-dry = Execute a 'dry run'; that is, only print out
what would be done. This is useful when
combining sub-bricks from multiple inputs.
-verb = Print out some verbose output as the program
proceeds (-dry implies -verb).
Using -verb twice results in quite lengthy output.
-rlt = Remove linear trends in each voxel time series loaded
from each input dataset, SEPARATELY. That is, the
data from each dataset is detrended separately.
At least 3 sub-bricks from a dataset must be input
for this option to apply.
Notes: (1) -rlt removes the least squares fit of 'a+b*t'
to each voxel time series; this means that
the mean is removed as well as the trend.
This effect makes it impractical to compute
the % Change using AFNI's internal FIM.
(2) To have the mean of each dataset time series added
back in, use this option in the form '-rlt+'.
In this case, only the slope 'b*t' is removed.
(3) To have the overall mean of all dataset time
series added back in, use this option in the
form '-rlt++'. In this case, 'a+b*t' is removed
from each input dataset separately, and the
mean of all input datasets is added back in at
the end. (This option will work properly only
if all input datasets use at least 3 sub-bricks!)
(4) -rlt can be used on datasets that contain shorts
or floats, but not on complex- or byte-valued
datasets.
-relabel = Replace any sub-brick labels in an input dataset
with the input dataset name -- this might help
identify the sub-bricks in the output dataset.
-tpattern PATTERN = Specify the timing pattern for the output
dataset, using patterns described in the
'to3d -help' output (alt+z, seq, alt-z2, etc).
-tr TR = Specify the TR (in seconds) for the output dataset.
-DAFNI_GLOB_SELECTORS=YES
Setting the environment variable AFNI_GLOB_SELECTORS
to YES (as done temporarily with this option) means
that sub-brick selectors '[..]' will not be used
as wildcards. For example:
3dTcat -DAFNI_GLOB_SELECTORS=YES -relabel -prefix EPIzero 'rest_*+tlrc.HEAD[0]'
will work to make a dataset with the #0 sub-brick
from each of a number of 3D+time datasets.
** Note that the entire dataset specification is in quotes
to prevent the shell from doing the '*' wildcard expansion
-- it will be done inside the program itself, after the
sub-brick selector is temporarily detached from the string
-- and then a copy of the selector is re-attached to each
expanded filename.
** Very few other AFNI '3d' programs do internal
wildcard expansion -- most of them rely on the shell.
Command line arguments after the above are taken as input datasets.
A dataset is specified using one of these forms:
prefix+view
prefix+view.HEAD
prefix+view.BRIK
prefix.nii
prefix.nii.gz
SUB-BRICK SELECTION:
--------------------
You can also add a sub-brick selection list after the end of the
dataset name. This allows only a subset of the sub-bricks to be
included into the output (by default, all of the input dataset
is copied into the output). A sub-brick selection list looks like
one of the following forms:
fred+orig[5] ==> use only sub-brick #5
fred+orig[5,9,17] ==> use #5, #9, and #17
fred+orig[5..8] or [5-8] ==> use #5, #6, #7, and #8
fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
Sub-brick indexes start at 0. You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
fred+orig[0..$(3)]
You can reverse the order of sub-bricks with a list like
fred+orig[$..0(-1)]
(Exactly WHY you might want to time-reverse a dataset is a mystery.)
You can also use a syntax based on the usage of the program count.
This would be most useful when randomizing (shuffling) the order of
the sub-bricks. Example:
fred+orig[count -seed 2 5 11 s] is equivalent to something like:
fred+orig[ 6, 5, 11, 10, 9, 8, 7]
You could also do: fred+orig[`count_afni -seed 2 -digits 1 -suffix ',' 5 11 s`]
but if you have lots of numbers, the command line would get too
long for the shell to process it properly. Omit the seed option if
you want the code to generate a seed automatically.
You cannot mix and match count syntax with other selection gimmicks.
If you have a lot of bricks to select in a particular order, you will
also run into name length problems. One solution is to put the indices
in a .1D file then use the following syntax. For example, say you have
the selection in file reorder.1D. You can extract the sub-bricks with:
fred+orig'[1dcat reorder.1D]'
As with count, you cannot mix and match 1dcat syntax with other
selection gimmicks.
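As a minimal sketch combining these pieces (dataset names are
hypothetical), one might concatenate two runs, dropping the first 4
sub-bricks of each and removing per-run linear trends (keeping the means):
    3dTcat -rlt+ -prefix all_runs 'run1+orig[4..$]' 'run2+orig[4..$]'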
NOTES:
------
* The TR and other time-axis properties are taken from the
first input dataset that is itself 3D+time. If no input
datasets contain such information, then TR is set to 1.0.
This can be altered later using the 3drefit program.
* The sub-bricks are output in the order specified, which may
not be the order in the original datasets. For example, using
fred+orig[0..$(2),1..$(2)]
will cause the sub-bricks in fred+orig to be output into the
new dataset in an interleaved fashion. Using
fred+orig[$..0]
will reverse the order of the sub-bricks in the output.
If the -rlt option is used, the sub-bricks selected from each
input dataset will be re-ordered into the output dataset, and
then this sequence will be detrended.
* You can use the '3dinfo' program to see how many sub-bricks
a 3D+time or a bucket dataset contains.
* The '$', '(', ')', '[', and ']' characters are special to
the shell, so you will have to escape them. This is most easily
done by putting the entire dataset plus selection list inside
single quotes, as in 'fred+orig[5..7,9]'.
* You may wish/need to use the 3drefit program on the output
dataset to modify some of the .HEAD file parameters.
* The program does internal wildcard expansion on the filenames
provided to define the datasets. The software first strips the
sub-brick selector string '[...]' off the end of each filename
BEFORE wildcard expansion, then re-appends it to the results
AFTER the expansion; for example, '*+orig.HEAD[4..7]' might
expand to 'fred+orig.HEAD[4..7]' and 'wilma+orig.HEAD[4..7]'.
++ However, the '[...]' construct is also a shell wildcard,
so it is not practical to use this feature for filename
selection with 3dTcat if you are also using sub-brick
selectors.
++ Since wildcard expansion looks for whole filenames, you must
use wildcard expansion in the form (e.g.) of '*+orig.HEAD',
NOT '*+orig' -- since the latter form doesn't match filenames.
++ Don't use '*+orig.*' since that will match both the .BRIK and
.HEAD files, and each dataset will end up being read in twice!
++ If you want to see the filename expansion results, run 3dTcat
with the option '-DAFNI_GLOB_DEBUG=YES'
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTcorr1D
Usage: 3dTcorr1D [options] xset y1D ~1~
Computes the correlation coefficient between each voxel time series
in the input 3D+time dataset 'xset' and each column in the 1D time
series file 'y1D', and stores the output values in a new dataset.
--------
OPTIONS: ~1~
--------
-pearson = Correlation is the normal Pearson (product moment)
correlation coefficient [this is the default method].
-spearman = Correlation is the Spearman (rank) correlation
coefficient.
-quadrant = Correlation is the quadrant correlation coefficient.
-ktaub = Correlation is Kendall's tau_b coefficient.
++ For 'continuous' or finely-discretized data, tau_b and
rank correlation are nearly equivalent (but not equal).
-dot = Doesn't actually compute a correlation coefficient; just
calculates the dot product between the y1D vector(s)
and the dataset time series.
-Fisher = Apply the 'Fisher' (inverse hyperbolic tangent = arctanh)
transformation to the results.
++ It does NOT make sense to use this with '-ktaub', but if
you want to do it, the program will not stop you.
++ Cannot be used with '-dot'!
-prefix p = Save output into dataset with prefix 'p'
[default prefix is 'Tcorr1D'].
-mask mmm = Only process voxels from 'xset' that are nonzero
in the 3D mask dataset 'mmm'.
++ Other voxels in the output will be set to zero.
-float = Save results in float format [the default format].
-short = Save results in scaled short format [to save disk space].
++ Cannot be used with '-dot'!
------
NOTES: ~1~
------
* The output dataset is functional bucket type, with one sub-brick
per column of the input y1D file.
* No detrending, blurring, or other pre-processing options are available;
if you want these things, see 3dDetrend or 3dTproject or 3dcalc.
[In other words, this program presumes you know what you are doing!]
* Also see 3dTcorrelate to do voxel-by-voxel correlation of TWO
3D+time datasets' time series, with similar options.
* You can extract the time series from a single voxel with given
spatial indexes using 3dmaskave, and then run it with 3dTcorr1D:
3dmaskave -quiet -ibox 40 30 20 epi_r1+orig > r1_40_30_20.1D
3dTcorr1D -pearson -Fisher -prefix c_40_30_20 epi_r1+orig r1_40_30_20.1D
* http://en.wikipedia.org/wiki/Correlation
* http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
* http://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient
* http://en.wikipedia.org/wiki/Kendall_tau_rank_correlation_coefficient
-- RWCox - Apr 2010
- Jun 2010: Multiple y1D columns; OpenMP; -short; -mask.
=========================================================================
* This binary version of 3dTcorr1D is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
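For instance, a minimal sketch (tcsh syntax, reusing the files from the
example above) of limiting the run to 4 threads:
    setenv OMP_NUM_THREADS 4
    3dTcorr1D -pearson -Fisher -prefix c_40_30_20 epi_r1+orig r1_40_30_20.1D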
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTcorrelate
++ 3dTcorrelate: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
Usage: 3dTcorrelate [options] xset yset ~1~
Computes the correlation coefficient between corresponding voxel
time series in two input 3D+time datasets 'xset' and 'yset', and
stores the output in a new 1 sub-brick dataset.
--------
Options: ~1~
--------
-pearson = Correlation is the normal Pearson (product moment)
correlation coefficient [this is the default method].
-spearman = Correlation is the Spearman (rank) correlation
coefficient.
-quadrant = Correlation is the quadrant correlation coefficient.
-ktaub = Correlation is Kendall's tau_b coefficient.
++ For 'continuous' or finely-discretized data, tau_b
and rank correlation are nearly equivalent.
-covariance = Covariance instead of correlation. That would be
the Pearson correlation without scaling by the product
of the standard deviations.
-partial z = Partial Pearson's Correlation of X & Y, adjusting for Z
Supply dataset z to be taken into account after '-partial'.
** EXPERIMENTAL **
-ycoef = Least squares coefficient that best fits y(t) to x(t),
after detrending. That is, if yd(t) is the detrended
y(t) and xd(t) is the detrended x(t), then the ycoef
value is from the OLSQ fit to xd(t) = ycoef * yd(t) + error.
-Fisher = Apply the 'Fisher' (inverse hyperbolic tangent = arctanh)
transformation to (correlation) results.
++ It does NOT make sense to use this with '-ktaub', but if
you want to do it, the program will not stop you.
++ This option does not apply to '-covariance' or '-ycoef'.
-polort m = Remove polynomial trend of order 'm', for m=-1..9.
[default is m=1; removal is by least squares].
Using m=-1 means no detrending; this is only useful
for data/information that has been pre-processed.
-ort r.1D = Also detrend using the columns of the 1D file 'r.1D'.
Only one -ort option can be given. If you want to use
more than one, create a temporary file using 1dcat.
-autoclip = Clip off low-intensity regions in the two datasets,
-automask = so that the correlation is only computed between
high-intensity (presumably brain) voxels. The
intensity level is determined the same way that
3dClipLevel works.
** At present, this program does not have a '-mask'
option. Maybe someday?
-zcensor = Omit (censor out) any time points where the xset
volume is all zero OR where the yset volume is all
zero (in mask). Please note that using -zcensor
with any detrending is unlikely to be useful.
** That is, you should use '-polort -1' with this
option, and NOT use '-ort'.
* In fact, using '-zcensor' will set polort = -1,
and if you insist on using detrending, you will
have to put the '-polort' option AFTER '-zcensor'.
** Since correlation is calculated from the sum
of the point-by-point products xset(t)*yset(t),
why censor out points where xset or yset is 0?
Because the denominator of correlation is from
the sum of xset(t)*xset(t) and yset(t)*yset(t)
and unless the t-points where the datasets are
censored are BOTH zero at the same time, the
denominator will be incorrect.
** [RWCox - Dec 2019, day of Our Lady of Guadalupe]
[for P Molfese and E Finn]
-prefix p = Save output into dataset with prefix 'p'
[default prefix is 'Tcorr'].
------
Notes: ~1~
------
* The output dataset is functional bucket type, with just one
sub-brick, stored in floating point format.
* Because both time series are detrended prior to correlation,
the results will not be identical to using FIM or FIM+ to
calculate correlations (whose ideal vector is not detrended).
* Also see 3dTcorr1D if you want to correlate each voxel time series
in a dataset xset with a single 1D time series file, instead of
separately with time series from another 3D+time dataset.
* https://en.wikipedia.org/wiki/Correlation
* https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
* https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient
* https://en.wikipedia.org/wiki/Kendall_tau_rank_correlation_coefficient
* https://en.wikipedia.org/wiki/Partial_correlation
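* For instance, a minimal sketch of a typical run (the dataset names
epi_r1+orig and epi_r2+orig are placeholders for your own data):
3dTcorrelate -pearson -polort 1 -automask \
-prefix Corr_r1_r2 epi_r1+orig epi_r2+orig
The output Corr_r1_r2+orig has one float sub-brick holding the per-voxel
Pearson correlation, computed after linear detrending.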
-- RWCox - Aug 2001++
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTcorrMap
Usage: 3dTcorrMap [options]
For each voxel time series, computes the correlation between it
and all other voxels, and combines this set of values into the
output dataset(s) in some way.
Supposed to give a measure of how 'connected' each voxel is
to the rest of the brain. [[As if life were that simple.]]
---------
WARNINGS:
---------
** This program takes a LONG time to run.
** This program will use a LOT of memory.
** Don't say I didn't warn you about these facts, and don't whine.
--------------
Input Options:
--------------
-input dd = Read 3D+time dataset 'dd' (a mandatory option).
This provides the time series to be correlated
en masse.
** This is a non-optional 'option': you MUST supply
an input dataset!
-seed bb = Read 3D+time dataset 'bb'.
** If you use this option, for each voxel in the
-seed dataset, its time series is correlated
with every voxel in the -input dataset, and
then that collection of correlations is processed
to produce the output for that voxel.
** If you don't use -seed, then the -input dataset
is the -seed dataset [i.e., the normal usage].
** The -seed and -input datasets must have the
same number of time points and the same number
of voxels!
** Unlike the -input dataset, the -seed dataset is not
preprocessed (i.e., no detrending/bandpass or blur).
(The main purpose of this -seed option is to allow you
to preprocess the seed voxel time series in some
personalized and unique way.)
-mask mmm = Read dataset 'mmm' as a voxel mask.
-automask = Create a mask from the input dataset.
** -mask and -automask are mutually exclusive!
** If you don't use one of these masking options, then
all voxels will be processed, and the program will
probably run for a VERY long time.
** Voxels with constant time series will be automatically
excluded.
----------------------------------
Time Series Preprocessing Options: (applied only to -input, not to -seed)
[[[[ In general, it would be better to pre-process with afni_proc.py ]]]]
----------------------------------
TEMPORAL FILTERING:
-------------------
-polort m = Remove polynomial trend of order 'm', for m=-1..19.
[default is m=1; removal is by least squares].
** Using m=-1 means no detrending; this is only useful
for data/information that has been pre-processed
(e.g., using the 3dBandpass program).
-bpass L H = Bandpass the data between frequencies L and H (in Hz).
** If the input dataset does not have a time step defined,
then TR = 1 s will be assumed for this purpose.
**** -bpass and -polort are mutually exclusive!
-ort ref = 1D file with other time series to be removed from -input
(via least squares regression) before correlation.
** Each column in 'ref' will be regressed out of
each -input voxel time series.
** -ort can be used with -polort and/or -bpass.
** You can use programs like 3dmaskave and 3dmaskSVD
to create reference files from regions of the
input dataset (e.g., white matter, CSF).
SPATIAL FILTERING: (only for volumetric input datasets)
-----------------
-Gblur ff = Gaussian blur the -input dataset (inside the mask)
using a kernel width of 'ff' mm.
** Uses the same approach as program 3dBlurInMask.
-Mseed rr = When extracting the seed voxel time series from the
(preprocessed) -input dataset, average it over a radius
of 'rr' mm prior to doing the correlations with all
the voxel time series from the -input dataset.
** This extra smoothing is said by some mystics to
improve and enhance the results. YMMV.
** Only voxels inside the mask will be used.
** A negative value for 'rr' means to treat the voxel
dimensions as all equal to 1.0 mm; thus, '-Mseed -1.0'
means to average a voxel with its 6 nearest
neighbors in the -input dataset 3D grid.
** -Mseed and -seed are mutually exclusive!
(It makes NO sense to use both options.)
---------------
Output Options: (at least one of these must be given!)
---------------
-Mean pp = Save average correlations into dataset prefix 'pp'
** As pointed out to me by CC, '-Mean' is the same
as computing the correlation map with the 1D file
that is the mean of all the normalized time series
in the mask -- that is, a form of the global signal.
Such a calculation could be done much faster with
program 3dTcorr1D.
** Nonlinear combinations of the correlations, as done by
the options below, can't be done in such a simple way.
-Zmean pp = Save tanh of mean arctanh(correlation) into 'pp'
-Qmean pp = Save RMS(correlation) into 'pp'
-Pmean pp = Save average of squared positive correlations into 'pp'
(negative correlations don't count in this calculation)
-Thresh tt pp
= Save the COUNT of how many voxels survived thresholding
at level abs(correlation) >= tt (for some tt > 0).
-VarThresh t0 t1 dt pp
= Save the COUNT of how many voxels survive thresholding
at several levels abs(correlation) >= tt, for
tt = t0, t0+dt, ..., t1. This option produces
a multi-volume dataset, with prefix 'pp'.
-VarThreshN t0 t1 dt pp
= Like '-VarThresh', but the output counts are
'Normalized' (divided) by the expected number
of such supra-threshold voxels that would occur
from white noise timeseries.
** N.B.: You can't use '-VarThresh' and '-VarThreshN'
in the same run of the program!
-CorrMap pp
Output at each voxel the entire correlation map, into
a dataset with prefix 'pp'.
** Essentially this does what 3dAutoTcorrelate would,
with some of the additional options offered here.
** N.B.: Output dataset will be HUGE and BIG in most cases.
-CorrMask
By default, -CorrMap outputs a sub-brick for EACH
input dataset voxel, even those that are NOT in
the mask (such sub-bricks will be all zero).
If you want to eliminate these sub-bricks, use
this option.
** N.B.: The label for the sub-brick that was seeded
from voxel (i,j,k) will be of the form
v032.021.003 (when i=32, j=21, k=3).
--** The following 3 options let you create a customized **--
--** method of combining the correlations, if the above **--
--** techniques do not meet your needs. (Of course, you **--
--** could also use '-CorrMap' and then process the big **--
--** output dataset yourself later, in some clever way.) **--
-Aexpr expr ppp
= For each correlation 'r', compute the calc-style
expression 'expr', and average these values to get
the output that goes into dataset 'ppp'.
-Cexpr expr ppp
= As in '-Aexpr', but only average together nonzero
values computed by 'expr'. Example:
-Cexpr 'step(r-0.3)*r' TCa03
would compute (for each voxel) the average of all
correlation coefficients larger than 0.3.
-Sexpr expr ppp
= As above, but the sum of the expressions is computed
rather than the average. Example:
-Sexpr 'step(r-0.3)' TCn03
would compute the number of voxels with correlation
coefficients larger than 0.3.
** N.B.: At most one '-?expr' option can be used in
the same run of the program!
** N.B.: Only the symbols 'r' and 'z' [=atanh(r)] have any
meaning in the expression; all other symbols will
be treated as zeroes.
-Hist N ppp
= For each voxel, save a histogram of the correlation
coefficients into dataset ppp.
** N values will be saved per voxel, with the i'th
sub-brick containing the count for the range
-1+i*D <= r < -1+(i+1)*D with D=2/N and i=0..N-1
** N must be at least 20, and at most 1000.
* N=200 is good; then D=0.01, yielding a decent resolution.
** The output dataset is short format; thus, the maximum
count in any bin will be 32767.
** The output from this option will probably require further
processing before it can be useful -- but it is fun to
surf through these histograms in AFNI's graph viewer.
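* To close this section, a hedged sketch of a run combining several of
the output options above (rest+orig and mask+orig are placeholder names
for a preprocessed resting-state run and a brain mask):
3dTcorrMap -input rest+orig -mask mask+orig -polort 2 \
-Mean TCmean -Qmean TCqmean -Hist 200 TChist
This would produce three outputs: the mean correlation map, the RMS
correlation map, and a 200-bin per-voxel correlation histogram.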
----------------
Random Thoughts:
----------------
-- In all output calculations, the correlation of a voxel with itself
is ignored. If you don't understand why, step away from the keyboard.
-- This purely experimental program is somewhat time consuming.
(Of course, it's doing a LOT of calculations.)
-- For Kyle, AKA the new Pat (assuming such a thing were possible).
-- For Steve, AKA the new Kyle (which makes him the newest Pat).
-- RWCox - August 2008 et cetera.
=========================================================================
* This binary version of 3dTcorrMap is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTfilter
3dTfilter takes as input a dataset, filters the time series in
each voxel as ordered by the user, and outputs a new dataset.
The data in each voxel is processed separately.
The user (you?) specifies the filter functions to apply.
They are applied in the order given on the command line:
-filter rank -filter adaptive:7
means to do the following operations
(1) turn the data into ranks
(2) apply the adaptive mean filter to the ranks
Notes:
------
** This program is a work in progress, and more capabilities
will be added as time allows, as the need arises, and as
the author's whims bubble to the surface of his febrile brain.
** This program is for people who have Sisu.
Options:
--------
-input inputdataset
-prefix outputdataset
-filter FunctionName
At least one '-filter' option is required!
The FunctionName values that you can give are:
rank = smallest value is replaced by 0,
next smallest value by 1, and so forth.
** This filter is pretty useless.
adaptive:H = adaptive mean filter with half-width of
'H' time points (H > 0).
** At most one 'adaptive' filter can be used!
** The filter 'footprint' is 2*H+1 points.
** This filter does local smoothing over the
'footprint', with values far away from
the local median being weighted less.
adetrend:H = apply adaptive mean filter with half-width
of 'H' time points to get a local baseline,
then subtract this baseline from the actual
data, to provide an adaptive detrending.
** At most one 'adaptive' OR 'adetrend' filter
can be used.
despike = apply the 'NEW25' despiking algorithm, as in
program 3dDespike.
despike:H = apply the despiking algorithm over a window
of half-width 'H' time points (667 > H > 3).
** H=12 is the same as 'despike'.
** At most one 'despike' filter can be used.
detrend:P = (least squares) detrend with polynomials up to
order 'P', for P=0, 1, 2, ....
** At most one 'detrend' filter can be used!
** You can use both the 'adetrend' and 'detrend' filters,
but I don't know why you would try this.
Example:
--------
3dTfilter -input fred.nii -prefix fred.af.nii -filter adaptive:7
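Since filters are applied in the order given, they can also be chained;
for example (a sketch only -- fred.dd.nii is a made-up output name):
3dTfilter -input fred.nii -prefix fred.dd.nii -filter despike -filter adetrend:9
which would despike each voxel time series first, and then adaptively
detrend the despiked result.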
-------
Author: The Programmer with No Name
-------
AFNI program: 3dTfitter
Usage: 3dTfitter [options]
* At each voxel, assembles and solves a set of linear equations.
++ The matrix at each voxel may be the same or may be different.
++ This flexibility (for voxel-wise regressors) is one feature
that makes 3dTfitter different from 3dDeconvolve.
++ Another distinguishing feature is that 3dTfitter allows for
L2, L1, and L2+L1 (LASSO) regression solvers, and allows you
to impose sign constraints on the solution parameters.
* Output is a bucket dataset with the beta parameters at each voxel.
* You can also get output of fitted time series at each voxel, and
the error sum of squares (e.g., for generating statistics).
* You can also deconvolve with a known kernel function (e.g., an HRF
model in FMRI, or an arterial input function in DSC-MRI, et cetera),
in which case the output dataset is a new time series dataset,
containing the estimate of the source function that, when convolved
with your input kernel function, fits the data (in each voxel).
* The basic idea is to compute the beta_i so that the following
is approximately true:
RHS(t) = sum_{i>=1} { beta_i * LHS_i(t) }
With the '-FALTUNG' (deconvolution) option, the model expands to be
RHS(t) = sum_{j>=0} { K(j)*S(t-j) } + sum_{i>=1} { beta_i * LHS_i(t) }
where K() is the user-supplied causal kernel function, and S() is
the source time series to be estimated along with the betas
(which can be thought of as the 'baseline' fit).
* The model basis functions LHS_i(t) and the kernel function K(t)
can be .1D files (fixed for all voxels) and/or 3D+time datasets
(different for each voxel).
* The fitting approximation can be done in 4 different ways, minimizing
the errors (differences between RHS(t) and the fitted equation) in
the following ways:
++ L2 [-l2fit option] = least sum of squares of errors
++ L1 [-l1fit option] = least sum of absolute values of errors
++ L2 LASSO = least sum of squares of errors, with an added
[-l2lasso option] L1 penalty on the size of the solution parameters
++ L2 Square Root LASSO = least square root of the sum of squared errors
[-l2sqrtlasso option] with an added L1 penalty on the solution parameters
***** Which fitting method is better?
The answer to that question depends strongly on what you are
going to use the results for! And on the quality of the data.
*************************************************
***** 3dTfitter is not for the casual user! *****
***** It has a lot of options which let you *****
***** control the complex solution process. *****
*************************************************
----------------------------------
SPECIFYING THE EQUATIONS AND DATA:
----------------------------------
-RHS rset = Specifies the right-hand-side 3D+time dataset.
('rset' can also be a 1D file with 1 column)
* Exactly one '-RHS' option must be given to 3dTfitter.
-LHS lset = Specifies a column (or columns) of the left-hand-side matrix.
* More than one 'lset' can follow the '-LHS' option, but each
input filename must NOT start with the '-' character!
* Or you can use multiple '-LHS' options, if you prefer.
* Each 'lset' can be a 3D+time dataset, or a 1D file
with 1 or more columns.
* A 3D+time dataset defines one column in the LHS matrix.
++ If 'rset' is a 1D file, then you cannot input a 3D+time
dataset with '-LHS'.
++ If 'rset' is a 3D+time dataset, then the 3D+time dataset(s)
input with '-LHS' must have the same voxel grid as 'rset'.
* A 1D file defines as many columns in the LHS matrix as
are in the file.
++ For example, you could input the LHS matrix from the
.xmat.1D matrix file output by 3dDeconvolve, if you wanted
to repeat the same linear regression using 3dTfitter,
for some bizarre unfathomable twisted psychotic reason.
(See https://shorturl.at/boxU9 for more details.)
** If you have a problem where some LHS vectors might be tiny,
causing stability problems, you can choose to omit them
by using the '-vthr' option. By default, only all-zero
vectors will be omitted from the regression.
** Note that if the scales of the LHS vectors are grossly different
(e.g., 0 < vector#1 < 0.01 and 0 < vector#2 < 1000),
then numerical errors in the calculations might cause the
results to be unreliable. To avoid this problem, you can
scale the vectors (before running 3dTfitter) so that they
have similar magnitudes.
** Note that if you are fitting a time series dataset that has
nonzero mean, then at least some of your basis vectors
should have nonzero mean, or you won't be able to get a
good fit. If necessary, use '-polort 0' to fit the mean
value of the dataset, so that the zero-mean LHS vectors
can do their work in fitting the fluctuations in the data!
[This means you, HJJ!]
*** Columns are assembled in the order given on the command line,
which means that LHS parameters will be output in that order!
*** If all LHS inputs are 1D vectors AND you are using least
squares fitting without constraints, then 3dDeconvolve would
be more efficient, since each voxel would have the same set
of equations -- a fact that 3dDeconvolve exploits for speed.
++ But who cares about CPU time? Come on baby, light my fire!
-polort p = Add 'p+1' Legendre polynomial columns to the LHS matrix.
* These columns are added to the LHS matrix AFTER all other
columns specified by the '-LHS' option, even if the '-polort'
option appears before '-LHS' on the command line.
** By default, NO polynomial columns will be used.
-vthr v = The value 'v' (between 0.0 and 0.09, inclusive) defines the
threshold below which LHS vectors will be omitted from
the regression analysis. Each vector's L1 norm (sum of
absolute values) is computed. Any vector whose L1 norm
is less than or equal to 'v' times the largest L1 norm
will not be used in the analysis, and will get 0 weight
in the output. The purpose of this option is to let you
have tiny inputs and have them be ignored.
* By default, 'v' is zero ==> only exactly zero LHS columns
will be ignored in this case.
** Prior to 18 May 2010, the built-in (and fixed) value of
'v' was 0.000333. Thus, to get the old results, you should
use option '-vthr 0.000333' -- this means YOU, Rasmus Birn!
* Note that '-vthr' column censoring is done separately for
each voxel's regression problem, so if '-LHS' had any
dataset components (i.e., voxelwise regressors), a different
set of omitted columns could be used betwixt different voxels.
--------------
DECONVOLUTION:
--------------
-FALTUNG fset fpre pen fac
= Specifies a convolution (German: Faltung) model to be
added to the LHS matrix. Four arguments follow the option:
-->** 'fset' is a 3D+time dataset or a 1D file that specifies
the known kernel of the convolution.
* fset's time point [0] is the 0-lag point in the kernel,
[1] is the 1-lag into the past point, etc.
++ Call the data Z(t), the unknown signal S(t), and the
known kernel H(t). The equations being solved for
the set of all S(t) values are of the form
Z(t) = H(0)S(t) + H(1)S(t-1) + ... + H(L)S(t-L) + noise
where L is the last index in the kernel function.
++++ N.B.: The TR of 'fset' (the source of H) and the TR of the
RHS dataset (the source of Z) MUST be the same, or
the deconvolution results will be revoltingly
meaningless drivel (or worse)!
-->** 'fpre' is the prefix for the output time series S(t) to
be created -- it will have the same length as the input
'rset' time series.
++ If you don't want this time series (why?), set 'fpre'
to be the string 'NULL'.
++ If you want to see the fit of the model to the data
(a very good idea), use the '-fitts' option, which is
described later.
-->** 'pen' selects the type of penalty function to be
applied to constrain the deconvolved time series:
++ The following penalty functions are available:
P0[s] = f^q * sum{ |S(t)|^q }
P1[s] = f^q * sum{ |S(t)-S(t-1)|^q }
P2[s] = f^q * sum{ |2*S(t)-S(t-1)-S(t+1)|^q }
P3[s] = f^q * sum{ |3*S(t)-3*S(t-1)-S(t+1)+S(t-2)|^q }
where S(t) is the deconvolved time series;
where q=1 for L1 fitting, q=2 for L2 fitting;
where f is the value of 'fac' (defined below).
P0 tries to keep S(t) itself small
P1 tries to keep point-to-point fluctuations
in S(t) small (1st derivative)
P2 tries to keep 3 point fluctuations
in S(t) small (2nd derivative)
P3 tries to keep 4 point fluctuations
in S(t) small (3rd derivative)
++ Higher digits try to make the result function S(t)
smoother. If a smooth result makes sense, then use
the string '012' or '0123' for 'pen'.
++ In L2 regression, these penalties are analogous to Wiener
(frequency space) deconvolution, with noise spectra
proportional to
P0 ==> fac^2 * 1 (constant in frequency)
P1 ==> fac^2 * freq^2
P2 ==> fac^2 * freq^4
P3 ==> fac^2 * freq^6
However, 3dTfitter does deconvolution in the time
domain, not the frequency domain, and you can choose
to use L2, L1, or LASSO (L2+L1) regression.
++ The value of 'pen' is a combination of the digits
'0', '1', '2', and/or '3'; for example:
0 = use P0 only
1 = use P1 only
2 = use P2 only
3 = use P3 only
01 = use P0+P1 (the sum of these two functions)
02 = use P0+P2
12 = use P1+P2
012 = use P0+P1+P2 (sum of three penalty functions)
0123 = use P0+P1+P2+P3 (et cetera)
If 'pen' does not contain any of the digits 0..3,
then '01' will be used.
-->** 'fac' is the positive weight 'f' for the penalty function:
++ if fac < 0, then the program chooses a penalty factor
for each voxel separately and then scales that by -fac.
++ use fac = -1 to get this voxel-dependent factor unscaled.
(this is a very reasonable place to start, by the way :-)
++ fac = 0 is a special case: the program chooses a range
of penalty factors, does the deconvolution regression
for each one, and then chooses the fit it likes best
(as a tradeoff between fit error and solution size).
++ fac = 0 will be MUCH slower since it solves about 20
problems for each voxel and then chooses what it likes.
setenv AFNI_TFITTER_VERBOSE YES to get some progress
reports, if you want to see what it is doing.
++ Instead of using fac = 0, a useful alternative is to
do some test runs with several negative values of fac,
[e.g., -1, -2, and -3] and then look at the results to
determine which one is most suitable for your purposes.
++ It is a good idea to experiment with different fac values,
so you can see how the solution varies, and so you can get
some idea of what penalty level to use for YOUR problems.
++ SOME penalty has to be applied, since otherwise the
set of linear equations for S(t) is under-determined
and/or ill-conditioned!
** If '-LHS' is used with '-FALTUNG', those basis vectors can
be thought of as a baseline to be regressed out at the
same time the convolution model is fitted.
++ When '-LHS' supplies a baseline, it is important
that penalty type 'pen' include '0', so that the
collinearity between convolution with a constant S(t)
and a constant baseline can be resolved!
++ Instead of using a baseline here, you could project the
baseline out of a dataset or 1D file using 3dDetrend,
before using 3dTfitter.
*** At most one '-FALTUNG' option can be used!!!
*** Consider the time series model
Z(t) = K(t)*S(t) + baseline + noise,
where Z(t) = data time series (in each voxel)
K(t) = kernel (e.g., hemodynamic response function)
S(t) = stimulus time series
baseline = constant, drift, etc.
and * = convolution in time
Then program 3dDeconvolve solves for K(t) given S(t), whereas
3dTfitter -FALTUNG solves for S(t) given K(t). The difference
between the two cases is that K(t) is presumed to be causal and
have limited support, while S(t) is a full-length time series.
*** Presumably you know this already, but deconvolution in the
Fourier domain,
S(t) = F^{-1} { F[Z] / F[K] }
(where F[] is the Fourier transform) is a bad idea, since
division by small values F[K] will grotesquely amplify the
noise. 3dTfitter does NOT even try to do such a silly thing.
****** Deconvolution is a tricky business, so be careful out there!
++ e.g., Experiment with the different parameters to make
sure the results in your type of problems make sense.
-->>++ Look at the results and the fits with AFNI (or 1dplot)!
Do not blindly assume that the results are accurate.
++ Also, do not blindly assume that a paper promoting
a new deconvolution method that always works is
actually a good thing!
++ There is no guarantee that the automatic selection of
the penalty factor herein will give usable results
for your problem!
++ You should probably use a mask dataset with -FALTUNG,
since deconvolution can often fail on pure noise
time series.
++ Unconstrained (no '-cons' options) least squares ('-lsqfit')
is normally the fastest solution method for deconvolution.
This, however, may only matter if you have a very long input
time series dataset (e.g., more than 1000 time points).
++ For unconstrained least squares deconvolution, a special
sparse matrix algorithm is used for speed. If you wish to
disable this for some reason, set environment variable
AFNI_FITTER_RCMAT to NO before running the program.
++ Nevertheless, a FALTUNG problem with more than 1000 time
points will probably take a LONG time to run, especially
if 'fac' is chosen to be 0.
----------------
SOLUTION METHOD:
----------------
-lsqfit = Solve equations via least squares [the default method].
* This is sometimes called L2 regression by mathematicians.
* '-l2fit' and '-L2' are synonyms for this option.
-l1fit = Solve equations via least sum of absolute residuals.
* This is sometimes called L1 regression by mathematicians.
* '-L1' is a synonym for this option.
* L1 fitting is usually slower than L2 fitting, but
is perhaps less sensitive to outliers in the data.
++ L1 deconvolution might give nicer looking results
when you expect the deconvolved signal S(t) to
have large-ish sections where S(t) = 0.
[The LASSO solution methods can also have this property.]
* L2 fitting is statistically more efficient when the
noise is KNOWN to be normally (Gaussian) distributed
(and a bunch of other assumptions are also made).
++ Where such KNOWLEDGE comes from is an interesting question.
-l2lasso lam [i j k ...]
= Solve equations via least squares with a LASSO (L1) penalty
on the coefficients.
* The positive value 'lam' after the option name is the
weight given to the penalty.
++ As a rule of thumb, you can try lam = 2 * sigma, where
sigma = standard deviation of noise, but that requires
you to have some idea what the noise level is.
++ If you enter 'lam' as a negative number, then the code
will CRUDELY estimate sigma and then scale abs(lam) by
that value -- in which case, you can try lam = -2 (or so)
and see if that works well for you.
++ Or you can use the Square Root LASSO option (next), which
(in theory) does not need to know sigma when setting lam.
++ If you do not provide lam, or give a value of 0, then a
default value will be used.
* Optionally, you can supply a list of parameter indexes
(after 'lam') that should NOT be penalized in the
fitting process (e.g., traditionally, the mean value
is not included in the L1 penalty.) Indexes start at 1,
as in 'consign' (below).
++ If this un-penalized integer list has long stretches of
contiguous entries, you can specify ranges of integers,
as in '1:9' instead of '1 2 3 4 5 6 7 8 9'.
**-->>++ If you want to supply the list of indexes that GET a
L1 penalty, instead of the list that does NOT, you can
put an 'X' character first, as in
-LASSO 0 X 12:41
to indicate that variables 12..41 (inclusive) get the
penalty applied, and the other variables do not. This
inversion might be more useful to you in some cases.
++ If you also want the indexes to have 1 added to them and
be inverted -- because they came from a 0-based program --
then use 'X1', as in '-LASSO 0 X1 12:41'.
++ If you want the indexes to have 1 added to them but NOT
to be inverted, use 'Y1', as in '-LASSO 0 Y1 13:42'.
++ Note that if you supply an integer list, you MUST supply
a value for lam first, even if that value is 0.
++ In deconvolution ('-FALTUNG'), all baseline parameters
(from '-LHS' and/or '-polort') are automatically non-penalized,
so there is usually no point to using this un-penalizing feature.
++ If you are NOT doing deconvolution, then you'll need this
option to un-penalize any '-polort' parameters (if desired).
** LASSO-ing herein should be considered experimental, and its
implementation is subject to change! You should definitely
play with different 'lam' values to see how well they work
for your particular types of problems. Algorithm is here:
++ TT Wu and K Lange.
Coordinate descent algorithms for LASSO penalized regression.
Annals of Applied Statistics, 2: 224-244 (2008).
http://arxiv.org/abs/0803.3876
* '-LASSO' is a synonym for this option.
-lasso_centro_block i j k ...
= Defines a block of coefficients that will be penalized together
with ABS( beta[i] - centromean( beta[i], beta[j] , ... ) )
where the centromean(a,b,...) is computed by sorting the
arguments (a,b,...) and then averaging the central 50% values.
* The goal is to use LASSO to shrink these coefficients towards
a common value to suppress outliers, rather than the default
LASSO method of shrinking coefficients towards 0, where the
penalty on coefficient beta[i] is just ABS( beta[i] ).
* For example:
-lasso_centro_block 12:26 -lasso_centro_block 27:41
These options define two blocks of coefficients.
-->>*** The intended application of this option is to regularize
(reduce fluctuations) in the 'IM' regression method from
3dDeconvolve, where each task instance gets a separate
beta fit parameter.
*** That is, the idea is that you run 3dTfitter to get the
'IM' betas as an alternative to 3dDeconvolve or 3dREMLfit,
since the centromean regularization will damp down wild
fluctuations in the individual task betas.
*** In this example, the two blocks of coefficients correspond
to the beta values for each of two separate tasks.
*** The input '-LHS' matrix is available from 3dDeconvolve's
'-x1D' option.
*** Further details on 'blocks' can be found in this Google Doc
https://shorturl.at/boxU9
including shell commands on how to extract the block indexes
from the header of the matrix file.
*** A 'lam' value for the '-LASSO' option that makes sense is a value
between -1 and -2, but as usual, you'll have to experiment with
your particular data and application.
* If you have more than one block, do NOT let them overlap,
because the program doesn't check for this kind of stoopidity
and then peculiar/bad things will probably happen!
* A block defined here must have at least 5 entries.
In practice, I would recommend at least 12 entries for a
block, or the whole idea of 'shrinking to the centromean'
is silly.
* This option can be abbreviated as '-LCB', since typing
'-lasso_centro_block' correctly is a nontrivial challenge :-)
*** This option is NOT implemented for -l2sqrtlasso :-(
* [New option - 10 Aug 2021 - RWCox]
-l2sqrtlasso lam [i j k ...]
= Similar to above option, but uses 'Square Root LASSO' instead:
* Approximately speaking, LASSO minimizes E = Q2+lam*L1,
where Q2=sum of squares of residuals and L1=sum of absolute
values of all fit parameters, while Square Root LASSO minimizes
sqrt(Q2)+lam*L1; the method and motivation is described here:
++ A Belloni, V Chernozhukov, and L Wang.
Square-root LASSO: Pivotal recovery of sparse signals via
conic programming (2010). http://arxiv.org/abs/1009.5689
++ A coordinate descent algorithm is also used for this optimization
(unlike in the paper above).
** A reasonable range of 'lam' to use is from 1 to 10 (or so);
I suggest you start with 2 and see how well that works.
++ Unlike the pure LASSO option above, you do not need to
give a negative value for lam here -- there is no need for
scaling by sigma -- or so they say.
* The theoretical advantage of Square Root LASSO over
standard LASSO is that a good choice of 'lam' does not
depend on knowing the noise level in the data (that is
what 'Pivotal' means in the paper's title).
* '-SQRTLASSO' is a synonym for this option.
--------->>**** GENERAL NOTES ABOUT LASSO and SQUARE ROOT LASSO ****<<--------
* LASSO methods are the only way to solve an under-determined
system with 3dTfitter -- one with more LHS vectors
than time points. However, a 'solution' to such a problem
doesn't necessarily mean anything -- be careful out there!
* LASSO methods will tend to push small coefficients down
to zero. This feature can be useful when doing deconvolution,
if you expect the result to be zero over large-ish intervals.
++ L1 regression ('-l1fit') has a similar property, of course.
++ This difficult-to-estimate bias in the LASSO-computed coefficients
makes it nearly impossible to provide reliable estimates of statistical
significance for the fit (e.g., R^2, F, ...).
* The actual penalty factor lambda used for a given coefficient
is lam scaled by the L2 norm of the corresponding regression
column. The purpose of this is to keep the penalties scale-free:
if a regression column were doubled, then the corresponding fit
coefficient would be cut in half; thus, to keep the same penalty
level, lambda should also be doubled.
* For '-l2lasso', a negative lam additionally means to scale
by the estimate of sigma, as described earlier. This feature
does not apply to Square Root LASSO, however (if you give a
negative lam to '-l2sqrtlasso', its absolute value is used).
-->>** There is no 'best' value of lam; if you are lucky, there
is a range of lam values that give reasonable results. A good
procedure to follow would be to use several different values of
lam and see how the results vary; for example, the list
lam = -1, -2, -4, -7, -10 might be a good starting point.
* If you don't give ANY numeric value after the LASSO option
(i.e., the next argument on the command line is another option),
then the program will use '-3.1415926536' for the value of lam.
* A tiny value of lam (say 0.01) should give almost the same
results as pure L2 regression.
* Data with a smaller signal-to-noise ratio will probably need
larger values of lam -- you'll have to experiment.
* The number of iterations used for the LASSO solution will be
printed out for the first voxel solved, and for every 10,000th
one following -- this is mostly for my personal edification.
-->>** Recall: "3dTfitter is not for the casual user!"
This statement especially applies when using LASSO, which is a
powerful tool -- and as such, can be dangerous if not used wisely.
---------------------
SOLUTION CONSTRAINTS:
---------------------
-consign = Follow this option with a list of LHS parameter indexes
to indicate that the sign of some output LHS parameters
should be constrained in the solution; for example:
-consign +1 -3
which indicates that LHS parameter #1 (from the first -LHS)
must be non-negative, and that parameter #3 must be
non-positive. Parameter #2 is unconstrained (e.g., the
output can be positive or negative).
* Parameter counting starts with 1, and corresponds to
the order in which the LHS columns are specified.
* Unlike '-LHS' or '-label', only one '-consign' option
can be used.
* Do NOT give the same index more than once after
'-consign' -- you can't specify that a coefficient
is both non-negative and non-positive, for example!
*** Constraints can be used with any of the 4 fitting methods.
*** '-consign' constraints only apply to the '-LHS'
fit parameters. To constrain the '-FALTUNG' output,
use the option below.
* If '-consign' is not used, the signs of the fitted
LHS parameters are not constrained.
-consFAL c= Constrain the deconvolution time series from '-FALTUNG'
to be positive if 'c' is '+' or to be negative if
'c' is '-'.
* There is no way at present to constrain the deconvolved
time series S(t) to be positive in some regions and
negative in others.
* If '-consFAL' is not used, the sign of the deconvolved
time series is not constrained.
---------------
OUTPUT OPTIONS:
---------------
-prefix p = Prefix for the output dataset (LHS parameters) filename.
* Output datasets from 3dTfitter are always in float format.
* If you don't give this option, 'Tfitter' is the prefix.
* If you don't want this dataset, use 'NULL' as the prefix.
* If you are doing deconvolution and do not also give any
'-LHS' options, then this file will not be output, since
it comprises the fit parameters for the '-LHS' vectors.
-->>** If the input '-RHS' file is a .1D file, normally the
output files are written in the AFNI .3D ASCII format,
where each row contains the time series data for one
voxel. If you want to have these files written in the
.1D format, with time represented down the column
direction, be sure to put '.1D' on the end of the prefix,
as in '-prefix Elvis.1D'. If you use '-' or 'stdout' as
the prefix, the resulting 1D file will be written to the
terminal. (See the fun fun fun examples, below.)
-label lb = Specifies sub-brick labels in the output LHS parameter dataset.
* More than one 'lb' can follow the '-label' option;
however, each label must NOT start with the '-' character!
* Labels are applied in the order given.
* Normally, you would provide exactly as many labels as
LHS columns. If not, the program invents some labels.
-fitts ff = Prefix filename for the output fitted time series dataset.
* Which is always in float format.
* Which will not be written if this option isn't given!
*** If you want the residuals, subtract this time series
from the '-RHS' input using 3dcalc (or 1deval).
-errsum e = Prefix filename for the error sums dataset, which
is calculated from the difference between the input
time series and the fitted time series (in each voxel):
* Sub-brick #0 is the sum of squares of differences (L2 sum)
* Sub-brick #1 is the sum of absolute differences (L1 sum)
* The L2 sum value, in particular, can be used to produce
a statistic to measure the significance of a fit model;
cf. the 'Correlation Coefficient Example' far below.
--------------
OTHER OPTIONS:
--------------
-mask ms = Read in dataset 'ms' as a mask; only voxels with nonzero
values in the mask will be processed. Voxels falling
outside the mask will be set to all zeros in the output.
* Voxels whose time series are all zeros will not be
processed, even if they are inside the mask!
-quiet = Don't print the fun fun fun progress report messages.
* Why would you want to hide these delightful missives?
----------------------
ENVIRONMENT VARIABLES:
----------------------
AFNI_TFITTER_VERBOSE = YES means to print out information during
the fitting calculations.
++ Automatically turned on for 1 voxel -RHS inputs.
AFNI_TFITTER_P1SCALE = number > 0 will scale the P1 penalty by
this value (e.g., to count it more)
AFNI_TFITTER_P2SCALE = number > 0 will scale the P2 penalty by
this value
AFNI_TFITTER_P3SCALE = number > 0 will scale the P3 penalty by
this value
You could set these values on the command line using the AFNI standard
'-Dvariablename=value' command line option.
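For instance (a sketch only, reusing dataset names from the 'Contrived
Deconvolution Example' far below; F101d_falv is just a made-up output
prefix), you could turn on the verbose reports and up-weight the P1
penalty for a single run via the '-D' mechanism:
3dTfitter -DAFNI_TFITTER_VERBOSE=YES -DAFNI_TFITTER_P1SCALE=2 \
-RHS F101d+orig -l2fit \
-FALTUNG '1D: 0 1 2 3 2 1' F101d_falv 012 0.0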
------------
NON-Options:
------------
* There is no option to produce statistical estimates of the
significance of the parameter estimates.
++ 3dTcorrelate might be useful, to compute the correlation
between the '-fitts' time series and the '-RHS' input data.
++ You can use the '-errsum' option to get around this limitation,
with enough cleverness.
* There are no options for censoring or baseline generation (except '-polort').
++ You could generate some baseline 1D files using 1deval, perhaps.
* There is no option to constrain the range of the output parameters,
except the semi-infinite ranges provided by '-consign' and/or '-consFAL'.
* This program is NOW parallelized via OpenMP :-) [17 Aug 2021 - RWCox]
------------------
Contrived Example:
------------------
The datasets 'atm' and 'btm' are assumed to have 99 time points each.
We use 3dcalc to create a synthetic combination of these plus a constant
plus Gaussian noise, then use 3dTfitter to fit the weights of these
3 functions to each voxel, using 4 different methods. Note the use of
the input 1D time series '1D: 99@1' to provide the constant term.
3dcalc -a atm+orig -b btm+orig -expr '-2*a+b+gran(100,20)' -prefix 21 -float
3dTfitter -RHS 21+orig -LHS atm+orig btm+orig '1D: 99@1' -prefix F2u -l2fit
3dTfitter -RHS 21+orig -LHS atm+orig btm+orig '1D: 99@1' -prefix F1u -l1fit
3dTfitter -RHS 21+orig -LHS atm+orig btm+orig '1D: 99@1' -prefix F1c -l1fit \
-consign -1 +3
3dTfitter -RHS 21+orig -LHS atm+orig btm+orig '1D: 99@1' -prefix F2c -l2fit \
-consign -1 +3
In the absence of noise and error, the output datasets should be
#0 sub-brick = -2.0 in all voxels
#1 sub-brick = +1.0 in all voxels
#2 sub-brick = +100.0 in all voxels
----------------------
Yet More Contrivances:
----------------------
You can input a 1D file for the RHS dataset, as in the example below,
to fit a single time series to a weighted sum of other time series:
1deval -num 30 -expr 'cos(t)' > Fcos.1D
1deval -num 30 -expr 'sin(t)' > Fsin.1D
1deval -num 30 -expr 'cos(t)*exp(-t/20)' > Fexp.1D
3dTfitter -quiet -RHS Fexp.1D -LHS Fcos.1D Fsin.1D -prefix -
* Note the use of the '-' as a prefix to write the results
(just 2 numbers) to stdout, and the use of '-quiet' to hide
the divertingly funny and informative progress messages.
* For the Jedi AFNI Masters out there, the above example can be carried
out using a single complicated command line:
3dTfitter -quiet -RHS `1deval -1D: -num 30 -expr 'cos(t)*exp(-t/20)'` \
-LHS `1deval -1D: -num 30 -expr 'cos(t)'` \
`1deval -1D: -num 30 -expr 'sin(t)'` \
-prefix -
resulting in the single output line below:
0.535479 0.000236338
which are respectively the fit coefficients of 'cos(t)' and 'sin(t)'.
--------------------------------
Contrived Deconvolution Example:
--------------------------------
(1) Create a 101 point 1D file that is a block of 'activation'
between points 40..50, convolved with a triangle wave kernel
(the '-iresp' input below):
3dConvolve -input1D -polort -1 -num_stimts 1 \
-stim_file 1 '1D: 40@0 10@1 950@0' \
-stim_minlag 1 0 -stim_maxlag 1 5 \
-iresp 1 '1D: 0 1 2 3 2 1' -nlast 100 \
| grep -v Result | grep -v '^$' > F101.1D
(2) Create a 3D+time dataset with this time series in each
voxel, plus noise that increases with voxel 'i' index:
3dUndump -prefix Fjunk -dimen 100 100 1
3dcalc -a Fjunk+orig -b F101.1D \
-expr 'b+gran(0,0.04*(i+1))' \
-float -prefix F101d
/bin/rm -f Fjunk+orig.*
(3) Deconvolve, then look what you get by running AFNI:
3dTfitter -RHS F101d+orig -l1fit \
-FALTUNG '1D: 0 1 2 3 2 1' F101d_fal1 012 0.0
3dTfitter -RHS F101d+orig -l2fit \
-FALTUNG '1D: 0 1 2 3 2 1' F101d_fal2 012 0.0
(4) View F101d_fal1+orig, F101d_fal2+orig, and F101d+orig in AFNI,
(in Axial image and graph viewers) and see how the fit quality
varies with the noise level and the regression type -- L1 or
L2 regression. Note that the default 'fac' level of 0.0 was
selected in the commands above, which means the program selects
the penalty factor for each voxel, based on the size of the
data time series fluctuations and the quality of the fit.
(5) Add logistic noise (long tails) to the noise-free 1D time series, then
deconvolve and plot the results directly to the screen, using L1 and L2
and the two LASSO fitting methods:
1deval -a F101.1D -expr 'a+lran(.5)' > F101n.1D
3dTfitter -RHS F101n.1D -l1fit \
-FALTUNG '1D: 0 1 2 3 2 1' stdout 01 -2 | 1dplot -stdin -THICK &
3dTfitter -RHS F101n.1D -l2fit \
-FALTUNG '1D: 0 1 2 3 2 1' stdout 01 -2 | 1dplot -stdin -THICK &
3dTfitter -RHS F101n.1D -l2sqrtlasso 2 \
-FALTUNG '1D: 0 1 2 3 2 1' stdout 01 -2 | 1dplot -stdin -THICK &
3dTfitter -RHS F101n.1D -l2lasso -2 \
-FALTUNG '1D: 0 1 2 3 2 1' stdout 01 -2 | 1dplot -stdin -THICK &
For even more fun, add the '-consFAL +' option to the above commands,
to force the deconvolution results to be positive.
***N.B.: You can only use 'stdout' as an output filename when
the output will be written as a 1D file (as above)!
--------------------------------
Correlation Coefficient Example:
--------------------------------
Suppose your initials are HJJ and you want to compute the partial
correlation coefficient of time series Seed.1D with every voxel in
a dataset Rest+orig once a spatially dependent 'artifact' time series
Art+orig has been projected out. You can do this with TWO 3dTfitter
runs, plus 3dcalc:
(1) Run 3dTfitter with ONLY the artifact time series and get the
error sum dataset
3dTfitter -RHS Rest+orig -LHS Art+orig -polort 2 -errsum Ebase
(2) Run 3dTfitter again with the artifact PLUS the seed time series
and get the error sum dataset and also the beta coefficients
3dTfitter -RHS Rest+orig -LHS Seed.1D Art+orig -polort 2 \
-errsum Eseed -prefix Bseed
(3) Compute the correlation coefficient from the amount of variance
reduction between cases 1 and 2, times the sign of the beta
3dcalc -a Eseed+orig'[0]' -b Ebase+orig'[0]' -c Bseed+orig'[0]' \
-prefix CorrSeed -expr '(2*step(c)-1)*sqrt(1-a/b)'
3drefit -fbuc -sublabel 0 'SeedCorrelation' CorrSeed+orig
More cleverness could be used to compute t- or F-statistics in a
similar fashion, using the error sum of squares between 2 different fits.
(Of course, these are assuming you use the default '-lsqfit' method.)
--------------------------------
PPI (psycho-physiological interaction) Example:
--------------------------------
Suppose you are running a PPI analysis and want to deconvolve a GAM
signal from the seed time series, hoping (very optimistically) to
convert from the BOLD time series (typical FMRI signal) to a
neurological time series (an impulse signal, say).
If the BOLD signal at the seed is seed_BOLD.1D and the GAM signal is
GAM.1D, then consider this example for the deconvolution, in order to
create the neuro signal, seed_neuro.1D:
3dTfitter -RHS seed_BOLD.1D \
-FALTUNG GAM.1D seed_neuro.1D 012 -2 \
-l2lasso -6
*************************************************************************
** RWCox - Feb 2008, et seq. **
** Created for the glorious purposes of John A Butman, MD, PhD, Poobah **
** But might be useful for some other well-meaning souls out there **
*************************************************************************
=========================================================================
* This binary version of 3dTfitter is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dThreetoRGB
Usage #1: 3dThreetoRGB [options] dataset
Usage #2: 3dThreetoRGB [options] dataset1 dataset2 dataset3
Converts 3 sub-bricks of input to an RGB-valued dataset.
* If you have 1 input dataset, then sub-bricks [0..2] are
used to form the RGB components of the output.
* If you have 3 input datasets, then the [0] sub-brick of
each is used to form the RGB components, respectively.
* RGB datasets have 3 bytes per voxel, with values ranging
from 0..255.
Options:
-prefix ppp = Write output into dataset with prefix 'ppp'.
[default='rgb']
-scale fac = Multiply input values by 'fac' before using
as RGB [default=1]. If you have floating
point inputs in range 0..1, then using
'-scale 255' would make a lot of sense.
-mask mset = Only output nonzero values where the mask
dataset 'mset' is nonzero.
-fim = Write result as a 'fim' type dataset.
[this is the default]
-anat = Write result as an anatomical type dataset.
Notes:
* Input datasets must be byte-, short-, or float-valued.
* You might calculate the component datasets using 3dcalc.
* You can also create RGB-valued datasets in to3d, using
2D raw PPM image files as input, or the 3Dr: format.
* RGB fim overlays are transparent in AFNI in voxels where all
3 bytes are zero - that is, it won't overlay solid black.
* At present, there is limited support for RGB datasets.
About the only thing you can do is display them in 2D
slice windows in AFNI.
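* For example, a sketch of Usage #2 (rr+orig, gg+orig, bb+orig are
placeholder float datasets with values in the range 0..1):
3dThreetoRGB -prefix anatRGB -scale 255 rr+orig gg+orig bb+orig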
-- RWCox - April 2002
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTnorm
Usage: 3dTnorm [options] dataset
Takes each voxel time series and normalizes it
(by multiplicative scaling) -- in some sense.
Options:
-prefix p = use string 'p' for the prefix of the
output dataset [DEFAULT = 'tnorm']
-norm2 = L2 normalize (sum of squares = 1) [DEFAULT]
-normR = normalize so sum of squares = number of time points
* e.g., so RMS = 1.
-norm1 = L1 normalize (sum of absolute values = 1)
-normx = Scale so max absolute value = 1 (L_infinity norm)
-polort p = Detrend with polynomials of order p before normalizing
[DEFAULT = don't do this]
* Use '-polort 0' to remove the mean, for example
-L1fit = Detrend with L1 regression (L2 is the default)
* This option is here just for the hell of it
Notes:
* Each voxel is processed separately
* A voxel that is all zero will be unchanged (duh)
* Output dataset is in float format, no matter what the input format
* This program is for producing regressors to use in 3dTfitter
* Also see programs 1dnorm and 3dcalc
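* For example, a minimal sketch (epi_run1+orig is a placeholder name):
3dTnorm -prefix epi_run1_norm -polort 1 -norm2 epi_run1+orig
would detrend each voxel time series linearly and then scale it to
unit sum-of-squares, ready for use as a voxelwise regressor in 3dTfitter.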
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTORTOISEtoHere
Convert standard TORTOISE DTs (diagonal-first format) to standard
AFNI (lower triangular, row-wise) format. NB: Starting from
TORTOISE v2.0.1, there is an 'AFNI output' format as well, which
would not need to be converted.
Part of FATCAT (Taylor & Saad, 2013) in AFNI.
*** NB: this program is likely no longer necessary if using 'AFNI
*** export' from TORTOISE!
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMAND: 3dTORTOISEtoHere -dt_tort DTFILE {-scale_fac X } \
{-flip_x | -flip_y | -flip_z} -prefix PREFIX
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ OUTPUT:
1) An AFNI-style DT file with the following ordering of the 6 bricks:
Dxx,Dxy,Dyy,Dxz,Dyz,Dzz.
In case it is useful, one can apply 'flips' to the eventual (or
underlying, depending how you look at it) eigenvector directions,
as well as rescale the associated eigenvalues.
+ RUNNING:
-dt_tort DTFILE :diffusion tensor file, which should have six bricks
of DT components ordered in the TORTOISE manner, i.e.,
diagonals first:
Dxx,Dyy,Dzz,Dxy,Dxz,Dyz.
-prefix PREFIX :output file name prefix. Will have N+1 bricks when
GRADFILE has N rows of gradients.
-flip_x :change sign of first element of (inner) eigenvectors.
-flip_y :change sign of second element of (inner) eigenvectors.
-flip_z :change sign of third element of (inner) eigenvectors.
-> Only a single flip would ever be necessary; the combination
of any two flips is mathematically equivalent to the sole
application of the remaining one.
Normally, it is the *gradients* that are flipped, not the
DT, but if, for example, necessary files are missing, then
one can apply the requisite changes here.
-scale_fac X :optional switch to rescale the DT elements, dividing
by a number X>0.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3dTORTOISEtoHere \
-dt_tort DTI/DT_DT+orig \
-scale_fac 1000 \
-prefix AFNI_DT
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
____________________________________________________________________________
AFNI program: 3dToutcount
Usage: 3dToutcount [options] dataset
Calculates the number of 'outliers' in a 3D+time dataset, at each
time point, and writes the results to stdout.
Options:
-mask mset = Only count voxels in the mask dataset.
-qthr q = Use 'q' instead of 0.001 in the calculation
of alpha (below): 0 < q < 1.
-autoclip }= Clip off 'small' voxels (as in 3dClipLevel);
-automask }= you can't use this with -mask!
-fraction = Output the fraction of (masked) voxels which are
outliers at each time point, instead of the count.
-range = Print out median+3.5*MAD of outlier count with
each time point; use with 1dplot as in
3dToutcount -range fred+orig | 1dplot -stdin -one
-save ppp = Make a new dataset, and save the outlier Q in each
voxel, where Q is calculated from voxel value v by
Q = -log10(qg(abs((v-median)/(sqrt(PI/2)*MAD))))
or Q = 0 if v is 'close' to the median (not an outlier).
That is, 10**(-Q) is roughly the p-value of value v
under the hypothesis that the v's are iid normal.
The prefix of the new dataset (float format) is 'ppp'.
-polort nn = Detrend each voxel time series with polynomials of
order 'nn' prior to outlier estimation. Default
value of nn=0, which means just remove the median.
Detrending is done with L1 regression, not L2.
-legendre = Use Legendre polynomials (also allows -polort > 3).
OUTLIERS are defined as follows:
* The trend and MAD of each time series are calculated.
- MAD = median absolute deviation
= median absolute value of time series minus trend.
* In each time series, points that are 'far away' from the
trend are called outliers, where 'far' is defined by
alpha * sqrt(PI/2) * MAD
alpha = qginv(0.001/N) (inverse of reversed Gaussian CDF)
N = length of time series
* Some outliers are to be expected, but if a large fraction of the
voxels in a volume are called outliers, you should investigate
the dataset more fully.
Since the results are written to stdout, you probably want to redirect
them to a file or another program, as in this example:
3dToutcount -automask v1+orig | 1dplot -stdin
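As a further illustrative sketch (reusing the hypothetical dataset name
above), the fraction of outlier voxels per time point could instead be
saved to a text file after quadratic detrending, using only options
documented above:
3dToutcount -automask -fraction -polort 2 v1+orig > outcount_frac.1D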
NOTE: also see program 3dTqual for a similar quality check.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dtoXdataset
Convert input datasets to the format needed for 3dClustSimX.
Usage:
3dtoXdataset -prefix PPP maskdataset inputdataset ...
The output file 'PPP.sdat' will be created if it does not exist.
If it already exists, the input dataset values (inside the mask) will
be appended to this output file.
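For illustration only (the mask and dataset names below are hypothetical),
a first call creates the .sdat file and a second call appends to it:
3dtoXdataset -prefix AllZ mask+orig subj01_zstat+orig subj02_zstat+orig
3dtoXdataset -prefix AllZ mask+orig subj03_zstat+orig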
AFNI program: 3dToyProg
Usage: 3dToyProg [-prefix PREF] [-mask MSET] [-datum DATUM]
[-h|-help] <-input ISET>
A program to illustrate dataset creation and manipulation in C using
AFNI's API. Comments in the code (should) explain it all.
-input ISET: reference dataset
-prefix PREF: Prefix of output datasets.
-mask MSET: Restrict analysis to non-zero voxels in MSET
-datum DATUM: Output datum type for one of the datasets.
Choose from 'float' or 'short'. Default is
'float'
-h: Mini help, at times, same as -help in many cases.
-help: The entire help output
-HELP: Extreme help, same as -help in majority of cases.
-h_view: Open help in text editor. AFNI will try to find a GUI editor
-hview : on your machine. You can control which it should use by
setting environment variable AFNI_GUI_EDITOR.
-h_web: Open help in web browser. AFNI will try to find a browser.
-hweb : on your machine. You can control which it should use by
setting environment variable AFNI_GUI_EDITOR.
-h_find WORD: Look for lines in this program's -help output that match
(approximately) WORD.
-h_raw: Help string unedited
-h_spx: Help string in sphinx loveliness, but do not try to autoformat
-h_aspx: Help string in sphinx with autoformatting of options, etc.
-all_opts: Try to identify all options for the program from the
output of its -help option. Some options might be missed
and others misidentified. Use this output for hints only.
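An illustrative invocation (dataset names hypothetical), using only the
options listed above:
3dToyProg -input anat+orig -mask mask+orig -datum short -prefix toy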
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTproject
Usage: 3dTproject [options]
This program projects (detrends) out various 'nuisance' time series from each
voxel in the input dataset. Note that all the projections are done via linear
regression, including the frequency-based options such as '-passband'. In this
way, you can bandpass time-censored data, and at the same time, remove other
time series of no interest (e.g., physiological estimates, motion parameters).
--------
OPTIONS:
--------
-input dataset = Specifies the input dataset.
-prefix ppp = Specifies the output dataset, as usual.
-censor cname = As in 3dDeconvolve.
-CENSORTR clist = As in 3dDeconvolve.
-cenmode mode = 'mode' specifies how censored time points are treated in
the output dataset:
++ mode = ZERO ==> put zero values in their place
==> output dataset is same length as input
++ mode = KILL ==> remove those time points
==> output dataset is shorter than input
++ mode = NTRP ==> censored values are replaced by interpolated
neighboring (in time) non-censored values,
BEFORE any projections, and then the
analysis proceeds without actual removal
of any time points -- this feature is to
keep the Spanish Inquisition happy.
** The default mode is KILL !!!
-concat ccc.1D = The catenation file, as in 3dDeconvolve, containing the
TR indexes of the start points for each contiguous run
within the input dataset (the first entry should be 0).
++ Also as in 3dDeconvolve, if the input dataset is
automatically catenated from a collection of datasets,
then the run start indexes are determined directly,
and '-concat' is not needed (and will be ignored).
++ Each run must have at least 9 time points AFTER
censoring, or the program will not work!
++ The only use made of this input is in setting up
the bandpass/stopband regressors.
++ '-ort' and '-dsort' regressors run through all time
points, as read in. If you want separate projections
in each run, then you must either break these ort files
into appropriate components, OR you must run 3dTproject
for each run separately, using the appropriate pieces
from the ort files via the '{...}' selector for the
1D files and the '[...]' selector for the datasets.
-noblock = Also as in 3dDeconvolve, if you want the program to treat
an auto-catenated dataset as one long run, use this option.
++ However, '-noblock' will not affect catenation if you use
the '-concat' option.
-ort f.1D = Remove each column in f.1D
++ Multiple -ort options are allowed.
++ Each column will have its mean removed.
-polort pp = Remove polynomials up to and including degree pp.
++ Default value is 2.
++ It makes no sense to use a value of pp greater than
2, if you are bandpassing out the lower frequencies!
++ For catenated datasets, each run gets a separate set
of pp+1 Legendre polynomial regressors.
++ Use of -polort -1 is not advised (if data mean != 0),
even if -ort contains constant terms, as all means are
removed.
-dsort fset = Remove the 3D+time time series in dataset fset.
++ That is, 'fset' contains a different nuisance time
series for each voxel (e.g., from AnatICOR).
++ Multiple -dsort options are allowed.
-passband fbot ftop = Remove all frequencies EXCEPT those in the range
*OR* -bandpass fbot..ftop.
++ Only one -passband option is allowed.
-stopband sbot stop = Remove all frequencies in the range sbot..stop.
++ More than one -stopband option is allowed.
++ For example, '-passband 0.01 0.10' is equivalent to
'-stopband 0 0.0099 -stopband 0.1001 9999'
-dt dd = Use time step dd for the frequency calculations,
*OR* -TR rather than the value stored in the dataset header.
-mask mset = Only operate on voxels nonzero in the mset dataset.
*OR* ++ Use '-mask AUTO' to have the program generate the
-automask mask automatically (or use '-automask')
++ Voxels outside the mask will be filled with zeros.
++ If no masking option is given, then all voxels
will be processed.
-blur fff = Blur (inside the mask only) with a filter that has
width (FWHM) of fff millimeters.
++ Spatial blurring (if done) is after the time
series filtering.
-norm = Normalize each output time series to have sum of
squares = 1. This is the LAST operation.
-quiet = Hide the super-fun and thrilling progress messages.
-verb = The program will save the fixed ort matrix and its
singular values into .1D files, for post-mortems.
It will also print out more progress messages, which
might help with figuring out what's happening when
problems occur.
------
NOTES:
------
* The output dataset is in floating point format.
* Removal of the various undesired components is via linear regression.
In particular, this method allows for bandpassing of censored time
series.
* If you like technical math jargon (and who doesn't?), this program
performs orthogonal projection onto the null space of the set of 'ort'
vectors assembled from the various options '-polort', '-ort',
'-passband', '-stopband', and '-dsort'.
* If A is a matrix whose columns comprise the vectors to be projected
out, define the projection matrix Q(A) by
Q(A) = I - A psinv(A)
where psinv(A) is the pseudo-inverse of A [e.g., inv(A'A)A' -- but
the pseudo-inverse is actually calculated here via the SVD algorithm.]
* If option '-dsort' is used, each voxel has a different matrix of
regressors -- encode this extra set of regressors in matrix B
(i.e., each column of B is a vector to be removed from its voxel's
time series). Then the projection for the compound matrix [A B] is
Q( Q(A)B ) Q(A)
that is, A is projected out of B, then the projector for that
reduced B is formed, and applied to the projector for the
voxel-independent A. Since the number of columns in B is usually
many fewer than the number of columns in A, this technique can
be much faster than constructing the full Q([A B]) for each voxel.
(Since Q(A) only needs to be constructed once for all voxels.)
A little fun linear algebra will show you that Q(Q(A)B)Q(A) = Q([A B]).
* A similar regression could be done via the slower 3dTfitter program:
3dTfitter -RHS inputdataset+orig \
-LHS ort1.1D dsort2+orig \
-polort 2 -prefix NULL \
-fitts Tfit
3dcalc -a inputdataset+orig -b Tfit+orig -expr 'a-b' \
-datum float -prefix Tresidual
3dTproject should be MUCH more efficient, especially when using
voxel-specific regressors (i.e., '-dsort'), and of course, it also
offers internal generation of the bandpass/stopband regressors,
as well as censoring, blurring, and L2-norming.
* This version of the program is compiled using OpenMP for speed.
* Authored by RWCox in a fit of excessive linear algebra [summer 2013].
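Since no worked example appears above, here is a minimal illustrative sketch
(all file names hypothetical) combining several of the documented options --
bandpassing, removal of motion regressors from a 1D file, quadratic
detrending, and automasking:
3dTproject -input rest+orig -prefix rest_clean \
-passband 0.01 0.10 \
-ort motion.1D \
-polort 2 -automask
Censoring could be added via '-censor' (with, e.g., '-cenmode NTRP' to keep
the output the same length as the input).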
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTqual
Usage: 3dTqual [options] dataset
Computes a `quality index' for each sub-brick in a 3D+time dataset.
The output is a 1D time series with the index for each sub-brick.
The results are written to stdout.
Note that small values of the index are 'good', indicating that
the sub-brick is not very different from the norm. The purpose
of this program is to provide a crude way of screening FMRI
time series for sporadic abnormal images, such as might be
caused by large subject head motion or scanner glitches.
Do not take the results of this program too literally. It
is intended as a GUIDE to help you find data problems, and no
more. It is not an assurance that the dataset is good, and
it may indicate problems where nothing is wrong.
Sub-bricks with index values much higher than others should be
examined for problems. How you determine what 'much higher' means
is mostly up to you. I suggest graphical inspection of the indexes
(cf. EXAMPLE, infra). As a guide, the program will print (stderr)
the median quality index and the range median-3.5*MAD .. median+3.5*MAD
(MAD=Median Absolute Deviation). Values well outside this range might
be considered suspect; if the quality index were normally distributed,
then values outside this range would occur only about 1% of the time.
OPTIONS:
-spearman = Quality index is 1 minus the Spearman (rank)
correlation coefficient of each sub-brick
with the median sub-brick.
[This is the default method.]
-quadrant = Similar to -spearman, but using 1 minus the
quadrant correlation coefficient as the
quality index.
-autoclip = Clip off low-intensity regions in the median sub-brick,
-automask = so that the correlation is only computed between
high-intensity (presumably brain) voxels. The
intensity level is determined the same way that
3dClipLevel works. This prevents the vast number
of nearly 0 voxels outside the brain from biasing
the correlation coefficient calculations.
-clip val = Clip off values below 'val' in the median sub-brick.
-mask MSET = Compute correlation only across masked voxels.
-range = Print the median-3.5*MAD and median+3.5*MAD values
out with EACH quality index, so that they
can be plotted (cf. Example, infra).
Notes: * These values are printed to stderr in any case.
* This is only useful for plotting with 1dplot.
* The lower value median-3.5*MAD is never allowed
to go below 0.
EXAMPLE:
3dTqual -range -automask fred+orig | 1dplot -one -stdin
will calculate the time series of quality indexes and plot them
to an X11 window, along with the median+/-3.5*MAD bands.
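A variant sketch (the mask name is hypothetical) that restricts the
computation to a pre-made mask and captures the indexes in a file rather
than plotting them:
3dTqual -spearman -mask brainmask+orig fred+orig > fred_qual.1D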
NOTE: cf. program 3dToutcount for a somewhat different quality check.
-- RWCox - Aug 2001
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTrackID
FACTID-based tractography code, from Taylor, Cho, Lin and Biswal (2012),
and part of FATCAT (Taylor & Saad, 2013) in AFNI. Version 2.1 (Jan. 2014),
written by PA Taylor and ZS Saad.
Estimate locations of WM associated with target ROIs, particularly between
pairs of GM in a network; can process several networks in a given run.
Now does both single tract propagation per voxel (as per DTI) and
multi-directional tracking (as in HARDI-type models). Many extra files can
be loaded in for getting quantitative stats in WM-ROIs, mostly done via
search from entered prefixes. Many more switches and options are available
to the user to control the tracking (yay!).
Track display capabilities in SUMA have been boosted and continue to rise
quickly (all courtesy of ZS Saad).
****************************************************************************
+ NOTE that this program runs in three separate modes, each with its own
subset of commandline options and outputs:
$ 3dTrackID -mode {DET | MINIP | PROB} ...
where DET -> deterministic tracking,
MINIP -> mini-probabilistic tracking,
PROB -> (full) probabilistic tracking.
So, for example, DET and MINIP produce pretty track-image output,
while PROB only provides volumes; MINIP and PROB make use of
tensor uncertainty to produce more robust results than DET; all
produce quantitative statistical output of WM-ROIs; etc. In some cases,
using a combination of all three might even be variously useful in a
particular study.
****************************************************************************
For DTI, this program reads in tensor-related data from, e.g., 3dDWItoDT,
and also uses results from 3dDWUncert for uncertainty measures when
necessary.
For HARDI, this program reads in the direction vectors and WM-proxy map
(such as the diffusion anisotropy coefficient, GFA) created by any source-
right now, there's no HARDI modeler in AFNI. Currently known sources which
are reasonably straightforward to use include DSI-Studio (Yeh et al.,
2010) and Diffusion Toolkit (Wang et al., 2007). An example script of
outputting Qball model data as NIFTI output from the former software is
included in the FATCAT demo set.
...And on that note, it is highly recommended for users to check out the
FATCAT demo set, which can be downloaded and unwrapped simply from the
commandline:
$ @Install_FATCAT_Demo
In that demo are data, a number of scripts, and more detailed descriptions
for using 3dTrackID, as well as other programs in the FATCAT litter.
Recommended to always check that one has the most up-to-date version.
****************************************************************************
+ INPUT NOTES:
NETWORK MAPS, for any '-mode' of track, given as a single- or multi-brik
file via '-netrois':
Each target ROI is defined by the set of voxels with a given integer >0.
Target ROI labels do not have to be purely consecutive.
Note on vocabulary, dual usage of 'ROI': an (input) network is made up of
*target ROIs*, between/among which one wants to find WM connections; so,
3dTrackID outputs locations and stats on those calculated *WM-ROIs*.
****************************************************************************
+ OUTPUTS, all named using '-prefix INPREF'; somewhat dependent on tracking
mode being utilized ('-mode {DET | MINIP | PROB}').
Because multiple networks can be input simultaneously as a multi-
brik '-netrois ROIS' file, the output prefix will also have a
numerical designation of its network, matching to the brik of
the ROIS file: thus, INPREF_000* goes with ROIS[0], INPREF_001*
with ROIS[1] (if present), etc. This applies with all types of
output files, now described:
1) *INDIMAP* BRIK files (output in ALL modes).
For each network with N_ROI target ROIs, this is a N_ROI+1 brik file.
0th brick contains the number of tracts per voxel which passed through
at least one target ROI in that network (and in '-mode PROB', this
number has been thresholded-- see 'alg_Thresh_Frac' below).
If the target ROIs are consecutively labelled from 1 to N_ROI, then:
Each i-th brick (i running from 1 to N_ROI) contains the voxels
through which tracks hitting that i-th target passed; the value of
each voxel is the number of tracks passing through that location.
Else, then:
Each i-th brick contains the voxels through which the tracks
hitting the j-th target passed (where j may or may not equal i; the
value of j is recorded in the brick label: OR_roi_'j'). The target
ROI connectivity is recorded in increasing order of 'j'.
For single-ROI inputs (such as a single wholebrain ROI), only the
[0] brick is output (because [1] would be redundant).
2) *PAIRMAP* BRIK files (output in ALL modes).
(-> This changed slightly at the end of June, 2014! No longer using
2^i notation-- it was made simpler for reading, assuming individual connection
information for calculations is more easily obtained with
'-dump_rois {AFNI | BOTH | AFNI_MAP}'...)
For each network with N_ROI target ROIs, this is a N_ROI+1 brik file.
0th brick contains a binary mask of voxels through which passed a
supra-threshold number of tracks (more than 0 for '-mode {DET | MINIP}'
and more than the user-defined threshold for '-mode PROB') between any
pair of target ROIs in that network (by default, these tracks have been
trimmed to only run between ROIs, cutting off parts that dangle outside
of the connection).
If the target ROIs are consecutively labelled from 1 to N_ROI, then:
Each i-th brick (i running from 1 to N_ROI) contains the voxels
through which tracks hitting that i-th target AND any other target
passed; voxels connecting i- and j-th target ROIs have value j, and
the values are summed if a given voxel is in multiple WM ROIs (i.e.,
for a voxel connecting both target ROIs 2 and 1 as well as 2 and 4,
then the value there in brick [2] would be 1 + 4 = 5).
Else, then:
Each i-th brick contains the voxels through which the tracks
hitting the j-th target AND any other target passed (where j may or
may not equal i; the value of j is recorded in the brick label:
AND_roi_'j'). The same voxel labelling and summing rules described
above also apply here.
For single-ROI inputs (such as a single wholebrain ROI), no PAIRMAP
file is output (because it would necessarily be empty).
3) *.grid ASCII-text file (output in ALL modes).
Simple text file of output stats of WM-ROIs. It outputs the means and
standard deviations of parameter quantities (such as FA, MD, L1, etc.)
as well as counts of tracks and volumes of WM-ROIs. Each matrix is
square, with dimension N_ROI by N_ROI. Like the locations in a standard
correlation matrix, each element reflects associativity with target
ROIs. A value at element (1,3) is the same as that at (3,1) and tells
about the property of a WM-ROI connecting target ROIs 1 and 3 (consider
upper left corner as (1,1)); diagonal elements provide info of tracks
through (at minimum) that single target ROI-- like OR logic connection.
Format of *.grid file is:
Line 1: number of ROIs in network (padded with #-signs)
Line 2: number of output matrices of stats info (padded with #-signs)
Line 3: list of N_ROI labels for that network
Lines following: first line, label of a property (padded with #), and
then N_ROI lines of the N_ROI-by-N_ROI matrix of that
property;
/repeat/
The first *seven* matrices are currently (this may change over time):
NT = number of tracks in that WM-ROI
fNT = fractional number of tracks in that WM-ROI, defined as NT
divided by total number of tracts found (may not be relevant)
PV = physical volume of tracks, in mm^3
fNV = fractional volume of tracks compared to masked (internally or
'-mask'edly) total volume; would perhaps be useful if said
mask represents the whole brain volume well.
NV = number of voxels in that WM-ROI.
BL = average length (in mm) of a bundle of tracts.
sBL = stdev of the length (in mm) of a bundle of tracts.
Then, there can be a great variety in the remaining matrices, depending
on whether one is in DTI or HARDI mode and how many scalar parameter
files get input (max is 10). For each scalar file there are two
matrices: first a label (e.g., 'FA') and then an N_ROI-by-N_ROI matrix
of the means of that parameter in each WM-ROI; then a label (here,
would be 'sFA') and then an N_ROI-by-N_ROI matrix of the standard
deviations of that parameter in each WM-ROI.
4) *niml.tract NIML/SUMA-esque file (output in '-mode {DET | MINIP}')
File for viewing track-like output in SUMA, with, e.g.:
$ suma -tract FILE.niml.tract
5) *niml.dset NIML/SUMA-esque file (output in '-mode {DET | MINIP}')
File accompanying the *.niml.tract file-- also for use in SUMA, for
including GRID-file like information with the tract info.
$ suma -tract FILE.niml.tract -gdset FILE.niml.dset
6) *.trk TrackVis-esque file (output in '-mode {DET | MINIP}')
File for viewing track-like output in TrackVis (separate install from
AFNI/SUMA); things mainly done via GUI interface; this format of
output is off by default (see '-do_trk_out' below to enable it).
****************************************************************************
+ LABELTABLE LABELLING (Sept 2014).
The ability to use label tables in tracking result output has been
included.
Default behavior will be to *construct* a labeltable from zero-padded ints
in the '-netrois' file which define target ROIs. Thus, the ROI of '3's
will be given a label '003'. This will be used in INDIMAP and PAIRMAP
brick labels (which is useful if the targets are not consecutively
numbered from 1), PAIRMAP connections in bricks >0, and output
*.niml.tract files. The PAIRMAP labeltable will be created and output
as 'PREFIX_PAIRMAP.niml.lt', and will be useful for the user in (some-
what efficiently) resolving multiple tracts passing through voxels.
These labels are also used in the naming of '-dump_rois AFNI' output.
At the moment, in a given PAIRMAP brick of index >0, labels can only
describe up to two connections through a given voxel. In brick 1, if a
voxel is intersected by tracts connecting ROIs 1 and 3 as well as ROIs
1 and 6, then the label there would be '003<->006'; if another voxel
in that brick had those connections as well as one between ROIs 1 and
4, then the label might be '_M_<->003<->006', or '_M_<->003<->004', or
any two of the connections plus the leading '_M_' that stands for
'multiple others' (NB: which two are shown is not controlled, but I
figured it was better to show at least some, rather than just the
less informative '_M_' alone). In all of these things, the PAIRMAP
map is a useful, fairly efficient guide-check, but the overlaps are
difficult to represent fully and efficiently, given the possible
complexity of patterns. For more definite, unique, and scriptable
information of where estimated WM connections are, use the
'-dump_rois AFNI' or '-dump_rois AFNI_MAP' option.
If the '-netrois' input has a labeltable, then this program
will read it in, use it in PAIRMAP and INDIMAP bricklabels, PAIRMAP
subbricks with index >0, *niml.tract outputs and, by default, in the
naming of '-dump_rois AFNI' output. The examples and descriptions
directly above still hold, but in cases where the ROI number has an
explicit label, then the former is replaced by the latter's string.
In cases where an input label table does not cover all ROI values,
there is no need to panic-- the explicit input labels will be used
wherever possible, and the zero-padded numbers will be used for the
remaining cases. Thus, one might see PAIRMAP labels such as:
'003<->Right-Amygdala', '_M_<->ctx-lh-insula<->006', etc.
****************************************************************************
+ RUNNING AND COMMANDLINE OPTIONS: pick a MODEL and a MODE.
There are now two types of models, DTI and HARDI, that can be tracked.
In HARDI, one may have multiple directions per voxel along which tracts
may propagate; in DTI, there can be only one. Each MODEL has some
required, and some optional, inputs.
Additionally, tracking is run in one of three modes, as described near the
top of this document, '-mode {DET | MINIP | PROB}', for deterministic,
mini-probabilistic, or full probabilistic tracking, respectively.
Each MODE has some required, and some optional, inputs. Some options
find work in multiple modes.
To run '3dTrackID', one needs to have both a model and a mode in mind (and
in data...). Below is a table to show the various options available
for the user to perform tracking. The required options for a given
model or mode are marked with a single asterisk (*); the options under
the /ALL/ column are necessary in any mode. Thus, to run deterministic
tracking with DTI data, one *NEEDS* to select, at a minimum:
'-mode DET', '-netrois', '-prefix', '-logic';
and then there is a choice of loading DTI data, with either:
'-dti_in' or '-dti_list',
and then one can also use '-dti_extra', '-mask', '-alg_Nseed_Y',
et al. from the /ALL/ and DET columns; one canNOT specify '-unc_min_FA'
here -> the option is in an unmatched mode column.
Exact usages of each option, plus formats for any arguments, are listed
below. Default values for optional arguments are also described.
         +-----------------------------------------------------------------+
         |          COMMAND OPTIONS FOR TRACKING MODES AND MODELS          |
         +-----------------------------------------------------------------+
         |       /ALL/       |     DET     |    MINIP    |      PROB       |
+--------+-------------------+-------------+-------------+-----------------+
         |{dti_in, dti_list}*|             |             |                 |
   DTI   | dti_extra         |             |             |                 |
         | dti_search_NO     |             |             |                 |
+-~or~---+-------------------+-------------+-------------+-----------------+
         | hardi_gfa*        |             |             |                 |
  HARDI  | hardi_dirs*       |             |             |                 |
         | hardi_pars        |             |             |                 |
=~and~==+===================+=============+=============+=================+
         | mode*             |             |             |                 |
 OPTIONS | netrois*          |             |             |                 |
         | prefix*           |             |             |                 |
         | mask              |             |             |                 |
         | thru_mask         |             |             |                 |
         | targ_surf_stop    |             |             |                 |
         | targ_surf_twixt   |             |             |                 |
         |                   | logic*      | logic*      |                 |
         |                   |             | mini_num*   |                 |
         |                   |             | uncert*     | uncert*         |
         |                   |             | unc_min_FA  | unc_min_FA      |
         |                   |             | unc_min_V   | unc_min_V       |
         | algopt            |             |             |                 |
         | alg_Thresh_FA     |             |             |                 |
         | alg_Thresh_ANG    |             |             |                 |
         | alg_Thresh_Len    |             |             |                 |
         |                   | alg_Nseed_X | alg_Nseed_X |                 |
         |                   | alg_Nseed_Y | alg_Nseed_Y |                 |
         |                   | alg_Nseed_Z | alg_Nseed_Z |                 |
         |                   |             |             | alg_Thresh_Frac |
         |                   |             |             | alg_Nseed_Vox   |
         |                   |             |             | alg_Nmonte      |
         | uncut_at_rois     |             |             |                 |
         | do_trk_out        |             |             |                 |
         | trk_opp_orient    |             |             |                 |
         | dump_rois         |             |             |                 |
         | dump_no_labtab    |             |             |                 |
         | dump_lab_consec   |             |             |                 |
         | posteriori        |             |             |                 |
         | rec_orig          |             |             |                 |
         | tract_out_mode    |             |             |                 |
         | write_opts        |             |             |                 |
         | write_rois        |             |             |                 |
         | pair_out_power    |             |             |                 |
+--------+-------------------+-------------+-------------+-----------------+
*above, asterisked options are REQUIRED for running the given '-mode'.
With DTI data, one must use either '-dti_in' *or* '-dti_list' for input.
FOR MODEL DTI:
-dti_in INPREF :basename of DTI volumes output by, e.g., 3dDWItoDT.
NB- following volumes are *required* to be present:
INPREF_FA, INPREF_MD, INPREF_L1,
INPREF_V1, INPREF_V2, INPREF_V3,
and (now) INPREF_RD (**now output by 3dDWItoDT**).
Additionally, the program will search for all other
scalar (=single brik) files with name INPREF* and will
load these in as additional quantities for WM-ROI
stats; this could be useful if, for example, you have
PD or anatomical measures and want mean/stdev values
in the WM-ROIs (to turn this feature off, see below,
'dti_search_NO'); all the INPREF* files must be in same
DWI space.
Sidenote: including/omitting a '_' at the end of INPREF
makes no difference in the hunt for files.
-dti_extra SET :if you want to use a non-FA derived definition for the
WM skeleton in which tracts run, you can input one, and
then the threshold in the -algopt file (or, via the
'-alg_Thresh_FA' option) will be applied to
thresholding this SET; similarly, the minimum
uncertainty will by default be set to 0.015 times the
max value of SET, or can be set with '-unc_min_FA'.
If the SET name is formatted as INPREF*, then it will
probably be included twice in stats, but that's not the
worst thing. In grid files, name of this quantity will
be 'XF' (stands for 'extra file').
-dti_search_NO :turn off the feature to search for more scalar (=single
brik) files with INPREF*, for including stats in output
GRID file. Will only go for FA, MD, L1 and RD scalars
with INPREF.
-dti_list FILE :an alternative way to specify DTI input files, where
FILE is a NIML-formatted text file that lists the
explicit/specific files for DTI input. This option is
used in place of '-dti_in' and '-dti_extra' for loading
data sets of FA, MD, L1, etc. An 'extra' set (XF) can
be loaded in the file, as well as supplementary scalar
data sets for extra WM-ROI statistics.
See below for a 'DTI LIST FILE EXAMPLE'.
FOR MODEL HARDI:
-hardi_gfa GFA :single brik data set with generalized FA (GFA) info.
In reality, it doesn't *have* to be a literal GFA, esp.
if you are using some HARDI variety that doesn't have
a specific GFA value-- in such a case, use whatever
could be thresholded as your proxy for WM.
The default threshold is still 0.2, so you will likely
need to set a new one in the '-algopt ALG_FILE' file or
from the commandline with '-alg_Thresh_FA', which does
apply to the GFA in the HARDI case as well.
Stats in GRID file are output under name 'GFA'.
-hardi_dirs DIRS :For tracking if X>1 propagation directions per voxel
are given, for example if HARDI data is input. DIRS
would then be a file with 3*X briks of (x,y,z) ordered,
unit magnitude vector components; i.e., brik [0]
contains V1_x, [1] V1_y, [2] V1_z, [3] V2_x, etc.
(NB: even if X=1, this option works, but that would
seem to take the HAR out of HARDI...)
-hardi_pars PREF :search for scalar (=single brik) files of naming
format PREF*. These will be read in for WM-ROI stats
output in the GRID file. For example, if there are
some files PREF_PD.nii.gz, PREF_CAT.nii.gz and
PREF_DOG.nii.gz, they will be labelled in the GRID file
as 'PD', 'CAT' and 'DOG' (that '_' will be cut out).
MODEL-INDEPENDENT OPTIONS:
-mode MODUS :this necessary option is used to define whether one is
performing deterministic, mini-probabilistic or full-
probabilistic tractography, by selecting one of three
respective modes: DET, MINIP, or PROB.
-netrois ROIS :mask(s) of target ROIs- single file can have multiple
briks, one per network. The target ROIs through which
tracks will be kept should have index values >0. It is
also possible to define anti-targets (exclusionary
regions) which stop a propagating track... in its
tracks. These are defined per network (i.e., per brik)
by voxels with values <0.
-prefix PREFIX :output file name part.
-mask MASK :can include a brainmask within which to calculate
things. Otherwise, data should be masked already.
-thru_mask TM :optional extra restrictor mask, through which paths are
(strictly) required to pass in order to be included
when passing through or connecting targets. It doesn't
discriminate based on target ROI number, so it's
probably mostly useful in examining specific pairwise
connections. It is also not like one of the target
'-netrois' in that no statistics are calculated for it.
Must be same number of briks as '-netrois' set.
-targ_surf_stop :make the final tracts and tracked regions stop at the
outer surface of the target ROIs, rather than being
able to journey arbitrarily far into them (latter being
the default behavior). Might be useful when you want
meaningful distances *between* targets. Tracts stop
after going *into* the outer layer of a target.
This can be applied to DET, MINIP, or PROB modes.
NB: this only affects the connections between pairs
of targets (= AND-logic, off-diagonal elements in
output matrices), not the single-target tracts
(= OR-logic, on-diagonal elements in output
matrices); see also a related option, below.
-targ_surf_twixt :quite similar to '-targ_surf_stop', above, but the
tracts stop *before* entering the target surfaces, so
that they are only between (or betwixt) the targets.
Again, only affects tracts between pairs of targets.
-logic {OR|AND} :when in one of '-mode {DET | MINIP}', one will look for
either OR- or AND-logic connections among target ROIs
per network (multiple networks can be entered as
separate briks in '-netrois ROIS'): i.e., one keeps
either any track going through at least one network ROI
or only those tracks which join a pair of ROIs.
When using AND here, default behavior is to only keep
voxels in tracks between the ROIs they connect (i.e.,
cut off track bits which run beyond ROIs).
-mini_num NUM :will run a small number NUM of whole brain Monte Carlo
iterations perturbing relevant tensor values in accord
with their uncertainty values (hence, the need for also
using `-uncert' with this option). This might be useful
for giving a flavor of a broader range of connections
while still seeing estimated tracks themselves. NB: if
NUM is large, you might get *big* output track files;
e.g., perhaps try NUM = 5 or 9 or so to start.
Requires '-mode MINIP' in commandline.
-bundle_thr V :the number of tracts for a given connection is called
a bundle. For '-mode {DET | MINIP}', one can choose to
NOT output tracts, matrix info, etc. for any bundle
with fewer than V tracts. This might be useful to weed
out ugly/false tracts (default: V=1).
-uncert U_FILE :when in one of '-mode {MINIP | PROB}', uncertainty
values for eigenvector and WM skeleton (FA, GFA, etc.)
maps are necessary.
When using DTI ('-dti_*'), then use the 6-brik file
from 3dDWUncert; format of the file given below.
When using HARDI ('-hardi_*') with up to X directions
per voxel, one needs U_FILE to have X+1 briks, where
U_FILE[0] is the uncertainty for the GFA file, and the
other briks are ordered for directions given with
'-hardi_dirs'.
Whatever the values in the U_FILE, this program enforces
a minimum uncertainty (stdev), with defaults:
for FA it is 0.015, and for GFA or -dti_extra sets it
is 0.015 times the max value present (set with option
'-unc_min_FA');
for each eigenvector or dir, it is 0.06rad (~3.4deg)
(set with option '-unc_min_V')
-unc_min_FA VAL1 :when using '-uncert', one can control the minimum
stdev for perturbing FA (in '-dti_in'), or the EXTRA-
file also in DTI ('-dti_extra'), or GFA (in '-hardi_*').
Default value is: 0.015 for FA, and 0.015 times the max
value in the EXTRA-file or in the GFA file.
-unc_min_V VAL2 :when using '-uncert', one can control the minimum
stdev for perturbing eigen-/direction-vectors.
In DTI, this is for tipping e_1 separately toward e2
and e3, and in HARDI, this is for defining a single
degree of freedom uncertainty cone. Default values are
0.06rad (~3.4deg) for any eigenvector/direction. User
assigns values in degrees.
-algopt A_FILE :simple ASCII file with six numbers defining tracking
parameter quantities (see list below); note the
differences whether running in '-mode {DET | MINIP}'
or in '-mode PROB': the first three parameters in each
mode are the same, but the next three differ.
The file can be in the more understandable html-type
format with labels per quantity, or just as a column
of the numbers, necessarily in the correct order.
NB: each quantity can also be changed individually
using a commandline option (see immediately following).
If A_FILE ends with '.niml.opts' (such as would be
produced using the '-write_opts' option), then it is
expected that it is in nice labelled NIML format;
otherwise, the file should just be a column of numbers
in the right order. Examples of A_FILEs are given at
the end of the option section.
-alg_Thresh_FA A :set threshold for DTI FA map, '-dti_extra' FILE, or
HARDI GFA map (default = 0.2).
-alg_Thresh_ANG B :set max angle (in deg) for turning when going to a new
voxel during propagation (default = 60).
-alg_Thresh_Len C :min physical length (in mm) of tracts to keep
(default = 20).
-alg_Nseed_X D :Number of seeds per vox in x-direc (default = 2).
-alg_Nseed_Y E :Number of seeds per vox in y-direc (default = 2).
-alg_Nseed_Z F :Number of seeds per vox in z-direc (default = 2).
+-------> NB: in summation, for example the alg_Nseed_* options
for '-mode {DET | MINIP}' place 2x2x2=8 seed points,
equally spread in a 3D cube, in each voxel when
tracking.
-alg_Thresh_Frac G :value for thresholding how many tracks must pass
through a voxel for a given connection before it is
included in the final WM-ROI of that connection.
It is a decimal value <=1, which will multiply the
number of 'starting seeds' per voxel, Nseed_Vox*Nmonte
(see just below for those). (default = 0.001; for higher
specificity, a value of 0.01-0.05 could be used).
-alg_Nseed_Vox H :number of seeds per voxel per Monte Carlo iteration;
seeds will be placed randomly (default = 5).
-alg_Nmonte I :number of Monte Carlo iterations (default = 1000).
+-------> NB: in summation, the preceding three options for the
'-mode PROB' will mean that 'I' Monte Carlo
iterations will be run, each time using 'H' track
seeds per relevant voxel, and that a voxel will
need 'G*H*I' tracks of a given connection through
it to be included in that WM-ROI. Default example:
1000 iterations with 5 seeds/voxel, and therefore
a candidate voxel needs at least 0.001*5*1000 = 5
tracks/connection.
-extra_tr_par :run three extra track parameter scalings for each
target pair, output in the *.grid file. The NT value
of each connection is scaled in the following manners
for each subsequent matrix label:
NTpTarVol :div. by average target volume;
NTpTarSA :div. by average target surface area;
NTpTarSAFA :div. by average target surface area
bordering suprathreshold FA (or equi-
valent WM proxy definition).
NB: the volume and surface area numbers are given in
terms of voxel counts and not using physical units
(consistent: NT values themselves are just numbers.)
-uncut_at_rois :when looking for pairwise connections, keep entire
length of any track passing through multiple targets,
even when part ~overshoots a target (i.e., it's not
between them). When using OR tracking, this is
automatically applied. For probabilistic tracking, not
recommended to use (are untrimmed ends meaningful?).
The default behavior is to trim the tracts that AND-
wise connect targets to only include sections that are
between the targets, and not parts that run beyond one.
(Not sure why one would want to use this option, to be
honest; see '-targ_surf_stop' for really useful tract
control.)
-dump_rois TYPE :can output individual masks of ROI connections.
Options for TYPE are: {DUMP | AFNI | BOTH | AFNI_MAP}.
Using DUMP gives a set of 4-column ASCII files, each
formatted like a 3dmaskdump data set; it can be recon-
stituted using 3dUndump. Using AFNI gives a set of
BRIK/HEAD (byte) files in a directory called PREFIX;
using AFNI_MAP is like using AFNI, but it gives non-
binarized *maps* of ROI connections.
Using BOTH produces AFNI and DUMP formats of outputs.
-dump_no_labtab :if the ROIS file has a label table, the default is to
use it in naming a '-dump_rois' output (if being used);
using this switch turns that off-- output file names
will be the same as if no label table were present.
-dump_lab_consec :if using `-dump_rois', then DON'T apply the numerical
labels of the original ROIs input to the output names.
This would only matter if input ROI labels aren't
consecutive and starting with one (e.g., if instead
they were 1,2,3,5,8,..).
---> This is the opposite of the previous default behavior, where
the option '-lab_orig_rois' was used to switch away
from consecutive-izing the labels in the output.
-posteriori :switch to have a bunch of individual files output, with
the value in each being the number of tracks per voxel
for that pair; works with '-dump_rois {AFNI | BOTH }',
where you get track-path maps instead of masks; makes
threshold for number of tracks between ROIs to keep to
be one automatically, regardless of setting in algopt.
-rec_orig :record dataset origin in the header of the *.trk file.
As of Sept. 2012, TrackVis doesn't use this info so it
wasn't included, but if you might want to map your
*.trk file later, then use the switch, as the dataset's
Origin is necessary info for the mapping (the default
image in TrackVis might not pop up in the center of the
viewing window, though, just be aware). NB: including
the origin might become default at some point in time.
-do_trk_out :Switch ON outputting *.trk files, which are mainly to
be viewed in TrackVis (Wang et al., 2007).
(Feb, 2015): Default is to NOT output *.trk files.
-trk_opp_orient :If outputting *.trk files, you can choose to oppositize
the voxel_order parameter for the TRK file (only).
Thus, when inputting AFNI files with orient RAI, the
*.trk file would have voxel_order LPS; this is so that
files can be viewed in some other software, such as
DTK.
-nifti :output the PAIRMAP, INDIMAP, and any '-dump_rois' in
*.nii.gz format (default is BRIK/HEAD).
-no_indipair_out :Switch off outputting *INDIMAP* and *PAIRMAP* volumes.
This is probably just if you want to save file space;
also, for connectome-y studies with many (>100) target
regions, the output INDI and PAIR maps can be quite
large and/or difficult to write out. In some cases, it
might be better to just use '-dump_rois AFNI' instead.
Default is to output the INDI and PAIR map files.
-write_rois :write out a file (PREFIX.roi.labs) of all the ROI
(re-)labels, for example if the input ROIs aren't
simply consecutive and starting from 1. File has 3cols:
Input_ROI Condensed_form_ROI Power_of_2_label
-write_opts :write out all the option values into PREFIX.niml.opts.
-pair_out_power :switch to affect output of *PAIRMAP* output files.
Now, the default format is to output the >0 bricks with
tracks labelled by the target integers themselves.
This is not a unique labelling system, but it *is* far
easier to view and understand what's going on than
using a purely unique system based on using powers of
two of the ROIs (with integer summation for overlaps).
Using the switch '-pair_out_power' will change the
output of bricks [1] and higher to contain
information on connections stored as powers of two, so
that there is a unique decomposition in terms of
overlapped connections. However, it's *far* easier to
use '-dump_rois {AFNI | BOTH }' to get individual mask
files of the ROIs clearly (and it is also annoying to need
to calculate powers of two simply to visualize the
connections).
-----> when considering this option, see the 'LABELTABLE'
description up above for how the labels work, with
or without an explicit table being entered.
-verb VERB :verbosity level, default is 0.
****************************************************************************
+ ALGOPT FILE EXAMPLES (note that different MODES have some different opts):
For '-mode {DET | MINIP}', the nicely readable NIML format of algopt file
would have a file name ending '.niml.opts' and contain something like the
following seven lines:
<TRACK_opts
Thresh_FA="0.2"
Thresh_ANG="60.000000"
Thresh_Len="20.000000"
Nseed_X="2"
Nseed_Y="2"
Nseed_Z="2" />
The values above are actually all default values, and such a file would be
output using the '-write_opts' flag. For the same modes, one could get
the same result using a plain column of numbers, whose meaning is defined
by their order, contained in a file NOT ending in .niml.opts, such as
exemplified in the next six lines:
0.2
60
20
2
2
2
For '-mode PROB', the nice NIML format algopt file would contain something
like the next seven lines (again requiring the file name to end in
'.niml.opts'):
<TRACK_opts
Thresh_FA="0.2"
Thresh_ANG="60.0"
Thresh_Len="20.0"
Thresh_Frac="0.001"
Nseed_Vox="5"
Nmonte="1000" />
Again, those represent the default values, and could be given as a plain
column of numbers, in that order.
* * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * ** * **
+ DTI LIST FILE EXAMPLE:
Consider, for example, if you hadn't used the '-sep_dsets' option when
outputting all the tensor information from 3dDWItoDT. Then one could
specify the DTI inputs for this program with a file called, e.g.,
FILE_DTI_IN.niml.opts (the name *must* end with '.niml.opts'):
<DTIFILE_opts
dti_V1="SINGLEDT+orig[9..11]"
dti_V2="SINGLEDT+orig[12..14]"
dti_V3="SINGLEDT+orig[15..17]"
dti_FA="SINGLEDT+orig[18]"
dti_MD="SINGLEDT+orig[19]"
dti_L1="SINGLEDT+orig[6]"
dti_RD="SINGLEDT+orig[20]" />
This represents the *minimum* set of input files needed when running
3dTrackID. (Oct. 2016: RD now output by 3dDWItoDT, and not calc'ed
internally by 3dTrackID.)
One could also input extra data: an 'extra file' (XF) to take the place
of an FA map for determining where tracks can propagate; and up to four
other data sets (P1, P2, P3 and P4, standing for 'plus one' etc.) for
calculating mean/stdev properties in the obtained WM-ROIs:
<DTIFILE_opts
dti_V1="SINGLEDT+orig[9..11]"
dti_V2="SINGLEDT+orig[12..14]"
dti_V3="SINGLEDT+orig[15..17]"
dti_XF="Segmented_WM.nii.gz"
dti_FA="SINGLEDT+orig[18]"
dti_MD="SINGLEDT+orig[19]"
dti_L1="SINGLEDT+orig[6]"
dti_RD="SINGLEDT+orig[20]"
dti_P1="SINGLEDT+orig[7]"
dti_P2="SINGLEDT+orig[8]"
dti_P3="T1_map.nii.gz"
dti_P4="PD_map.nii.gz" />
****************************************************************************
+ EXAMPLES:
Here are just a few scenarios-- please see the Demo data set for *maaany*
more, as well as for fuller descriptions. To obtain the Demo, type the
following into a commandline:
$ @Install_FATCAT_demo
This will also unzip the archive, which contains required data (so it's
pretty big, currently >200MB), a README.txt file, and several premade
scripts that are ~heavily commented.
A) Deterministic whole-brain tracking; set of targets is just the volume
mask. This can be useful for diagnostic purposes, sanity check for
gradients+data, for interactively selecting interesting subsets later,
etc. This uses most of the default algopts, but sets a higher minimum
length for keeping tracks:
$ 3dTrackID -mode DET \
-dti_in DTI/DT \
-netrois mask_DWI+orig \
-logic OR \
-alg_Thresh_Len 30 \
-prefix DTI/o.WB
B) Mini-probabilistic tracking through a multi-brik network file using a
DTI model and AND-logic. Instead of using the thresholded FA map to
guide tracking, an extra data set (e.g., a mapped anatomical
segmentation image) is input as the WM proxy; as such, what used to be
a threshold for adult parenchyma FA is now changed to an appropriate
value for the segmentation percentages; and this would most likely
also assume that 3dDWUncert had been used to calculate tensor value
uncertainties:
$ 3dTrackID -mode MINIP \
-dti_in DTI/DT \
-dti_extra T1_WM_in_DWI.nii.gz \
-netrois ROI_ICMAP_GMI+orig \
-logic AND \
-mini_num 7 \
-uncert DTI/o.UNCERT_UNC+orig. \
-alg_Thresh_FA 0.95 \
-prefix DTI/o.MP_AND_WM
C) Full probabilistic tracking through a multi-brik network file using
HARDI-Qball reconstruction. The designated GFA file is used to guide
the tracking, with an appropriate threshold set and a smaller minimum
uncertainty of that GFA value (from this and example B, note how
generically the '-alg_Thresh_FA' functions, always setting a value for
the WM proxy map, whether it be literally FA, GFA or the dti_extra
file). Since HARDI-value uncertainty isn't yet calculable in AFNI,
brain-wide uniform values were assigned to the GFA and directions:
$ 3dTrackID -mode PROB \
-hardi_gfa QBALL/GFA.nii.gz \
-hardi_dirs QBALL/dirs.nii.gz \
-netrois ROI_ICMAP_GMI+orig \
-uncert QBALL/UNIFORM_UNC+orig. \
-mask mask_DWI+orig \
-alg_Thresh_FA 0.04 \
-unc_min_FA 0.003 \
-prefix QBALL/o.PR_QB
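As one more illustrative tweak (not taken from the Demo scripts), any of the
commands above could also write out per-connection ROI masks and NIFTI-format
volumes by appending documented switches; e.g., for scenario A:
$ 3dTrackID -mode DET \
-dti_in DTI/DT \
-netrois mask_DWI+orig \
-logic OR \
-alg_Thresh_Len 30 \
-dump_rois AFNI \
-nifti \
-prefix DTI/o.WB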
****************************************************************************
If you use this program, please reference the workhorse FACTID
tractography algorithm:
Taylor PA, Cho K-H, Lin C-P, Biswal BB (2012). Improving DTI
Tractography by including Diagonal Tract Propagation. PLoS ONE
7(9): e43415.
and the introductory/description paper for FATCAT:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional And
Tractographic Connectivity Analysis Toolbox. Brain Connectivity.
AFNI program: 3dTRfix
Usage: 3dTRfix [options]
This program will read in a dataset that was sampled on an irregular time
grid and re-sample it via linear interpolation to a regular time grid.
NOTES:
------
The re-sampling will include the effects of slice time offsets (similarly
to program 3dTshift), if these time offsets are encoded in the input dataset's
header.
No other processing is performed -- in particular, there is no allowance
(at present) for T1 artifacts resulting from variable TR.
If the first 1 or 2 time points are abnormally bright due to the NMR
pre-steady-state effect, then their influence might be spread farther
into the output dataset by the interpolation process. You can avoid this
effect by excising these values from the input using the '[2..$]' notation
in the input dataset syntax.
If the input dataset is catenated from multiple non-contiguous imaging runs,
the program will happily interpolate across the time breaks between the runs.
For this reason, you should not give such a file (e.g., from 3dTcat) to this
program -- you should use 3dTRfix on each run separately, and only later
catenate the runs.
The output dataset is stored in float format, regardless of the input format.
** Basically, this program is a hack for the Mad Spaniard.
** When are we going out for tapas y cerveza (sangria es bueno, tambien)?
OPTIONS:
--------
-input iii = Input dataset 'iii'. [MANDATORY]
-TRlist rrr = 1D columnar file of time gaps between sub-bricks in 'iii';
If the input dataset has N time points, this file must
have at least N-1 (positive) values.
* Please note that these time steps (or the time values in
'-TIMElist') should be in seconds, NOT in milliseconds!
* AFNI time units are seconds!!!
-TIMElist ttt = Alternative to '-TRlist', where you give the N values of
the times at each sub-brick; these values must be monotonic
increasing and non-negative.
* You must give exactly one of '-TIMElist' or '-TRlist'.
* The TR value given in the input dataset header is ignored.
-prefix ppp = Prefix name for the output dataset.
-TRout ddd = 'ddd' gives the value for the output dataset's TR (in sec).
If '-TRout' is not given, then the average TR of the input
dataset will be used.
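Since no example is given above, here is a minimal illustrative sketch (file
names hypothetical): resample an irregularly sampled run to a regular 2 s
grid, supplying the N-1 inter-volume gaps (in seconds) in a 1D file:
3dTRfix -input irreg+orig -TRlist trgaps.1D -TRout 2.0 -prefix regular
Equivalently, '-TIMElist times.1D' could be given instead of '-TRlist',
listing the N acquisition times themselves.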
November 2014 -- Zhark the Fixer
AFNI program: 3dTSgen
++ 3dTSgen: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: B. Douglas Ward
This program generates an AFNI 3d+time data set. The time series for
each voxel is generated according to a user specified signal + noise
model.
Usage:
3dTSgen
-input fname fname = filename of prototype 3d + time data file
[-inTR] set the TR of the created timeseries to be the TR
of the prototype dataset
[The default is to compute with TR = 1.]
[The model functions are called for a ]
[time grid of 0, TR, 2*TR, 3*TR, .... ]
-signal slabel slabel = name of (non-linear) signal model
-noise nlabel nlabel = name of (linear) noise model
-sconstr k c d constraints for kth signal parameter:
c <= gs[k] <= d
-nconstr k c d constraints for kth noise parameter:
c+b[k] <= gn[k] <= d+b[k]
-sigma s s = std. dev. of additive Gaussian noise
[-voxel num] screen output for voxel #num
-output fname fname = filename of output 3d + time data file
The following commands generate individual AFNI 1 sub-brick datasets:
[-scoef k fname] write kth signal parameter gs[k];
output 'fim' is written to prefix filename fname
[-ncoef k fname] write kth noise parameter gn[k];
output 'fim' is written to prefix filename fname
The following commands generate one AFNI 'bucket' type dataset:
[-bucket n prefixname] create one AFNI 'bucket' dataset containing
n sub-bricks; n=0 creates default output;
output 'bucket' is written to prefixname
The mth sub-brick will contain:
[-brick m scoef k label] kth signal parameter regression coefficient
[-brick m ncoef k label] kth noise parameter regression coefficient
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTshift
Usage: 3dTshift [options] dataset
* Shifts voxel time series from the input dataset so that the separate
slices are aligned to the same temporal origin. By default, uses the
slicewise shifting information in the dataset header (from the 'tpattern'
input to program to3d).
Method: detrend -> interpolate -> retrend (optionally)
* The input dataset can have a sub-brick selector attached, as documented
in '3dcalc -help'.
* The output dataset time series will be interpolated from the input to
the new temporal grid. This may not be the best way to analyze your
data, but it can be convenient.
* Slices where significant time interpolation happens will have extra
temporal autocorrelation introduced by the interpolation. The amount
of extra correlation along the time axis depends on the type of
interpolation used. Higher order interpolation will produce smaller
such 'extra' correlation; in order, from lowest (most extra correlation)
to highest (least extra correlation):
-linear -cubic -quintic -heptic
-wsinc5 -wsinc9 -Fourier
* The last two methods do not add much correlation in time. However, they
have the widest interpolation 'footprint' and so the output data values
will have contributions from data points further away in time.
* To properly account for these extra correlations, which vary in space,
we advise you to analyze the time series using 3dREMLfit, which uses
a voxel-dependent prewhitening (de-correlating) linear regression method,
unlike most other FMRI time series regression software.
++ Or else use '-wsinc9' interpolation, which has a footprint of 18 time points:
9 before and 9 after the intermediate time at which the value is output.
WARNINGS:
--------
* Please recall the phenomenon of 'aliasing': frequencies above 1/(2*TR) can't
be properly interpolated. For most 3D FMRI data, this means that cardiac
and respiratory effects will not be treated properly by this program.
* The images at the beginning of a high-speed FMRI imaging run are usually
of a different quality than the later images, due to transient effects
before the longitudinal magnetization settles into a steady-state value.
These images should not be included in the interpolation! For example,
if you wish to exclude the first 4 images, then the input dataset should
be specified in the form 'prefix+orig[4..$]'. Alternatively, you can
use the '-ignore ii' option.
* It seems to be best to use 3dTshift before using 3dvolreg.
(But this statement is controversial.)
* If the input dataset does not have any slice timing information, and
'-tpattern' is not given, then this program just copies the input to
the output. [02 Nov 2011 -- formerly, it failed]
* Please consider the potential impact of 3dTshift on any subsequent
linear regression model. While the temporal resampling of 3dTshift is
not exact, it is attempting to interpolate the slice timing so that it
is as if each volume were acquired at time 'tzero' + k*TR. So with
-tzero 0, it becomes akin to each entire volume being acquired at the
very beginning of its TR. By default, the offset is the average offset
across the slices, which for alt+z or seq is:
(nslices-1)/nslices * TR/2
That average approaches TR/2 as the number of slices increases.
The new slice/volume timing is intended to be the real timing from the
start of the run.
How might this affect stimulus timing in 3dDeconvolve?
3dDeconvolve creates regressors based on volume times of k*TR, matching
tzero=0. So an event at run time t=0 would start at the time of volume
#0. However using -tzero 1 (or the default, in the case of TR~=2s),
an event at run time t=0 would then be 1s *before* the first volume.
Note that this matches reality. An event at time t=0 happens before
all but the first acquired slice. In particular, a slice acquired at
TR offset 1s might be unaffected by 3dTshift. And an event at run time
t=0 seems to happen at time t=-1s from the perspective of that slice.
To align stimulus times with the applied tzero of 3dTshift, tzero
should be subtracted from each stimulus event time (3dDeconvolve
effectively subtracts tzero from the EPI timing, so that should be
applied to the event times as well).
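As a small illustrative sketch of that adjustment (the file names and the
tzero value of 0.9 s are hypothetical, and this assumes a simple one-column
file of event onsets), the 1deval program can subtract the applied tzero
from each onset:
1deval -a stim_onsets.1D -expr 'a-0.9' > stim_onsets_shifted.1D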
OPTIONS:
-------
-verbose = print lots of messages while program runs
-TR ddd = use 'ddd' as the TR, rather than the value
stored in the dataset header using to3d.
You may attach the suffix 's' for seconds,
or 'ms' for milliseconds.
-tzero zzz = align each slice to time offset 'zzz';
the value of 'zzz' must be between the
minimum and maximum slice temporal offsets.
N.B.: The default alignment time is the average
of the 'tpattern' values (either from the
dataset header or from the -tpattern option)
-slice nnn = align each slice to the time offset of slice
number 'nnn' - only one of the -tzero and
-slice options can be used.
-prefix ppp = use 'ppp' for the prefix of the output file;
the default is 'tshift'.
-ignore ii = Ignore the first 'ii' points. (Default is ii=0.)
The first ii values will be unchanged in the output
(regardless of the -rlt option). They also will
not be used in the detrending or time shifting.
-rlt = Before shifting, the mean and linear trend
-rlt+ = of each time series is removed. The default
action is to add these back in after shifting.
-rlt means to leave both of these out of the output
-rlt+ means to add only the mean back into the output
(cf. '3dTcat -help')
-no_detrend = Do not remove or restore linear trend.
Heptic becomes the default interpolation method.
** Options to choose the temporal interpolation method: **
-Fourier = Use a Fourier method (the default: most accurate; slowest).
-linear = Use linear (1st order polynomial) interpolation (least accurate).
-cubic = Use the cubic (3rd order) Lagrange polynomial interpolation.
-quintic = Use the quintic (5th order) Lagrange polynomial interpolation.
-heptic = Use the heptic (7th order) Lagrange polynomial interpolation.
-wsinc5 = Use weighted sinc interpolation - plus/minus 5 [Aug 2019].
-wsinc9 = Use weighted sinc interpolation - plus/minus 9.
-tpattern ttt = use 'ttt' as the slice time pattern, rather
than the pattern in the input dataset header;
'ttt' can have any of the values that would
go in the 'tpattern' input to to3d, described below:
alt+z = altplus = alternating in the plus direction
alt+z2 = alternating, starting at slice #1 instead of #0
alt-z = altminus = alternating in the minus direction
alt-z2 = alternating, starting at slice #nz-2 instead of #nz-1
seq+z = seqplus = sequential in the plus direction
seq-z = seqminus = sequential in the minus direction
@filename = read temporal offsets from 'filename'
(the filename time units should match those of the dataset)
* Originally, times were given in units of ms (with 'ms' being stored
as the TR unit in the dataset). Generally, time is now specified in
units of s (with that unit stored in the dataset).
Here the original 'to3d' example has been converted to seconds.
For example if nz = 5 and TR = 1.0 (with dataset TR in units of s),
then the inter-slice time is taken to be dt = TR/nz = 0.2. In this
case, the slices are offset in time by the following amounts:
                  S L I C E   N U M B E R
  tpattern      0     1     2     3     4   Comment
  ---------   ---   ---   ---   ---   ---   -------------------------------
  altplus      0    0.6   0.2   0.8   0.4   Alternating in the +z direction
  alt+z2      0.4    0    0.6   0.2   0.8   Alternating, but starting at #1
  altminus    0.4   0.8   0.2   0.6    0    Alternating in the -z direction
  alt-z2      0.8   0.2   0.6    0    0.4   Alternating, starting at #nz-2
  seqplus      0    0.2   0.4   0.6   0.8   Sequential in the +z direction
  seqminus    0.8   0.6   0.4   0.2    0    Sequential in the -z direction
If @filename is used for tpattern, then nz ASCII-formatted numbers
are read from the file. These indicate the time offsets for each
slice. For example, if 'filename' contains
0 0.6 0.2 0.8 0.4
then this is equivalent to 'altplus' in the above example.
(nz = number of slices in the input dataset)
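As a hedged illustration of the '@filename' usage (the file and dataset
names here are hypothetical), the 'altplus' equivalent above could be
written to a file and then applied as:
    echo '0 0.6 0.2 0.8 0.4' > slice_times.1D
    3dTshift -tzero 0 -tpattern @slice_times.1D -prefix shifted epi+orig
assuming 'epi+orig' has nz = 5 slices and a TR of 1.0 stored in seconds.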
Note that 1D format can be used with @filename. For example, to shift
a single voxel time series given TR=2.0, and adjusting the old toffset
from 0.5 s to 0 s, consider:
3dTshift -prefix new.1D -TR 2 -tzero 0 -tpattern '@1D: 0.5' old.1D\'
For a conceptual test of 3dTshift, consider a sequence of commands:
1deval -num 25 -expr t+10 > t0.1D
3dTshift -linear -no_detrend -TR 1 -tzero 0 -tpattern '@1D: 0.5' \
-prefix t.shift.1D t0.1D\'
1dplot -one t0.1D t.shift.1D
Recall from your memorization of the -help that 3dTshift performs the
shift on a detrended time series. Hence the '-linear -no_detrend'
options are included (otherwise, the line would be unaltered).
Also, be aware that since we are asking to interpolate the data so that
it is as if it were acquired 0.5 seconds earlier, that is moving the
time window to the left, and therefore the plot seems to move to the
right.
N.B.: if you are using -tpattern, make sure that the units supplied
match the units of TR in the dataset header, or provide a
new TR using the -TR option.
As a test of how well 3dTshift interpolates, you can take a dataset
that was created with '-tpattern alt+z', run 3dTshift on it, and
then run 3dTshift on the new dataset with '-tpattern alt-z' -- the
effect will be to reshift the dataset back to the original time
grid. Comparing the original dataset to the shifted-then-reshifted
output will show where 3dTshift does a good job and where it does
a bad job.
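One way to script that check (a sketch only; the dataset and prefix
names are hypothetical, and 'epi+orig' is assumed to record alt+z
slice timing):
    3dTshift -tpattern alt+z -prefix ts1 epi+orig
    3dTshift -tpattern alt-z -prefix ts2 ts1+orig
    3dcalc   -a epi+orig -b ts2+orig -expr 'a-b' -prefix ts_diff
The ts_diff dataset then shows, voxel by voxel, how closely the
shifted-then-reshifted result matches the original.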
******* Voxel-Wise Shifting -- New Option [Sep 2011] *******
-voxshift fset = Read in dataset 'fset' and use the values in there
to shift each input dataset's voxel's time series a
different amount. The values in 'fset' are NOT in
units of time, but rather are fractions of a TR
to shift -- a positive value means to shift backwards.
* To compute an fset-style dataset that matches the
time pattern of an existing dataset, try
set TR = 2.5
3dcalc -a 'dset+orig[0..1]' -datum float -prefix Toff -expr "t/${TR}-l"
where you first set the shell variable TR to the true TR
of the dataset, then create a dataset Toff+orig with the
fractional shift of each slice stored in each voxel. Then
the two commands below should give identical outputs:
3dTshift -ignore 2 -tzero 0 -prefix Dold -heptic dset+orig
3dTshift -ignore 2 -voxshift Toff+orig -prefix Dnew -heptic dset+orig
Use of '-voxshift' means that options such as '-tzero' and '-tpattern' are
ignored -- the burden is on you to encode all the shifts into the 'fset'
dataset somehow. (3dcalc can be your friend here.)
-- RWCox - 31 October 1999, et cetera
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTsmooth
Usage: 3dTsmooth [options] dataset
Smooths each voxel time series in a 3D+time dataset and produces
as output a new 3D+time dataset (e.g., lowpass filter in time).
*** Also see program 3dBandpass ***
General Options:
-prefix ppp = Sets the prefix of the output dataset to be 'ppp'.
[default = 'smooth']
-datum type = Coerce output dataset to be stored as the given type.
[default = input data type]
Three Point Filtering Options [07 July 1999]
--------------------------------------------
The following options define the smoothing filter to be used.
All these filters use 3 input points to compute one output point:
Let a = input value before the current point
b = input value at the current point
c = input value after the current point
[at the left end, a=b; at the right end, c=b]
-lin = 3 point linear filter: 0.15*a + 0.70*b + 0.15*c
[This is the default smoother]
-med = 3 point median filter: median(a,b,c)
-osf = 3 point order statistics filter:
0.15*min(a,b,c) + 0.70*median(a,b,c) + 0.15*max(a,b,c)
-3lin m = 3 point linear filter: 0.5*(1-m)*a + m*b + 0.5*(1-m)*c
Here, 'm' is a number strictly between 0 and 1.
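For illustration, a minimal sketch using the 3 point filters above
(the dataset name 'fred+orig' is hypothetical):
    3dTsmooth -med -prefix fred_med fred+orig
    3dTsmooth -3lin 0.6 -prefix fred_lin fred+orig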
General Linear Filtering Options [03 Mar 2001]
----------------------------------------------
-hamming N = Use N point Hamming or Blackman windows.
-blackman N (N must be odd and bigger than 1.)
-custom coeff_filename.1D (odd # of coefficients must be in a
single column in ASCII file)
(-custom added Jan 2003)
WARNING: If you use long filters, you do NOT want to include the
large early images in the program. Do something like
3dTsmooth -hamming 13 'fred+orig[4..$]'
to eliminate the first 4 images (say).
The following options determine how the general filters treat
time points before the beginning and after the end:
-EXTEND = BEFORE: use the first value; AFTER: use the last value
-ZERO = BEFORE and AFTER: use zero
-TREND = compute a linear trend, and extrapolate BEFORE and AFTER
The default is -EXTEND. These options do NOT affect the operation
of the 3 point filters described above, which always use -EXTEND.
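A hedged sketch combining a general linear filter with one of these
boundary options (the dataset name and window length are arbitrary):
    3dTsmooth -blackman 9 -TREND -prefix fred_bk 'fred+orig[4..$]'
Here the first 4 images are excluded (per the WARNING above), and the
ends of each time series are handled by extrapolating a linear trend.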
Adaptive Mean Filtering option [03 Oct 2014]
--------------------------------------------
-adaptive N = use adaptive mean filtering of width N
(where N must be odd and bigger than 3).
* This filter is similar to the 'AdptMean9'
1D filter in the AFNI GUI, except that the
end points are treated differently.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTsort
Usage: 3dTsort [options] dataset
Sorts the time series of each voxel and produces a new dataset.
Options:
-prefix p = use string 'p' for the prefix of the
output dataset [DEFAULT = 'tsort']
-inc = sort into increasing order [default]
-dec = sort into decreasing order
-rank = output rank instead of sorted values
ranks range from 1 to Nvals
-ind = output sorting index. (0 to Nvals -1)
See example below.
-val = output sorted values (default)
-random = randomly shuffle (permute) the time points in each voxel
* Each voxel is permuted independently!
* Why is this here? Someone asked for it :)
-ranFFT = randomize each time series by scrambling the FFT phase
* Each voxel is treated separately!
* Why is this here? cf. Matthew 7:7-8 :)
-ranDFT = Almost the same as above, but:
* In '-ranFFT', the FFT length is taken
to be the next integer >= data length
for which the FFT algorithm is efficient.
This will result in data padding unless
the data length is exactly 'nice' for FFT.
* In '-ranDFT', the DFT length is exactly
the data length. If the data length is
a large-ish prime number (say 997), this
operation can be slow.
* The DFT/FFT algorithm is reasonably fast
when the data length prime factors contain
only 2s, 3s, and/or 5s.
* Using '-ranDFT' can preserve the spectral
(temporal correlation) structure of the
original data a little better than '-ranFFT'.
* The only reason to use '-ranFFT' instead of
'-ranDFT' is for speed. For example, with
997 time points, '-ranFFT' was about 13 times
faster (FFT length=1000) than '-ranDFT'.
-datum D = Coerce the output data to be stored as
the given type D, which may be
byte, short, or float (default).
Notes:
* Each voxel is sorted (or processed) separately.
* Sub-brick labels are not rearranged!
* This program is useful only in limited cases.
It was written to sort the -stim_times_IM
beta weights output by 3dDeconvolve.
* Also see program 1dTsort, for sorting text files of numbers.
Examples:
setenv AFNI_1D_TIME YES
echo '8 6 3 9 2 7' > test.1D
3dTsort -overwrite test.1D
1dcat tsort.1D
3dTsort -overwrite -rank test.1D
1dcat tsort.1D
3dTsort -overwrite -ind test.1D
1dcat tsort.1D
3dTsort -overwrite -dec test.1D
1dcat tsort.1D
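For reference, with the 6 values above (8 6 3 9 2 7), simple arithmetic
gives: increasing sort = 2 3 6 7 8 9; ranks = 5 3 2 6 1 4; decreasing
sort = 9 8 7 6 3 2. The '-ind' output is the corresponding permutation
of indices 0..5. (The layout printed by 1dcat may differ from this
one-row presentation.)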
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTsplit4D
USAGE: 3dTsplit4D [options] dataset
This program converts a 3D+time dataset into multiple 3D single-brick
files. The main purpose of this is to accelerate the process of
exporting AFNI/NIFTI datasets if you have the unfortunate need to work
with Some other PrograM that doesn't like datasets in the pseudo-4D
nature that AFNI knows and loves.
examples:
1. Write the 152 time point dataset, epi_r1+orig, to 152 single
volume datasets, out/epi.000+orig ... epi.151+orig.
mkdir out
3dTsplit4D -prefix out/epi epi_r1+orig
2. Do the same thing, but write to 152 NIFTI volume datasets,
out/epi.000.nii ... out/epi.151.nii. Include .nii in -prefix.
mkdir out
3dTsplit4D -prefix out/epi.nii epi_r1+orig
3. Convert an AFNI stats dataset (betas, t-stats, F-stats) into
a set of NIFTI volume datasets, including the volume labels
in the file names.
3dTsplit4D -prefix stats.FT.nii -label_prefix stats.FT+tlrc
-prefix PREFIX : Prefix of the output datasets
Numbers will be added after the prefix to denote
the original sub-brick index.
-digits DIGITS : number of digits to use for output filenames
-keep_datum : output uses original datum (no conversion to float)
-label_prefix : include volume label in each output prefix
-bids_deriv : format string for BIDS-Derivative-style naming
Authored by: Peter Molfese, UConn
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dTstat
++ 3dTstat: AFNI version=AFNI_25.1.08 (May 6 2025) [64-bit]
++ Authored by: KR Hammett & RW Cox
Usage: 3dTstat [options] dataset
Computes one or more voxel-wise statistics for a 3D+time dataset
and stores them in a bucket dataset. If no statistic option is
given, computes just the mean of each voxel time series.
Multiple statistics options may be given, and will result in
a multi-volume dataset.
Statistics Options (note where detrending does/does not occur):
-sum = compute sum of input voxels
-abssum = compute absolute sum of input voxels
-sos = compute sum of squares
-l2norm = compute L2 norm (sqrt(sum squares))
-mean = compute mean of input voxels
-slope = compute the slope of input voxels vs. time
-stdev = compute standard deviation of input voxels
NB: input is detrended by first removing mean+slope
-stdevNOD = like '-stdev', but no initial detrending
-cvar = compute coefficient of variation of input
voxels = stdev/fabs(mean)
NB: in stdev calc, input is detrended by removing mean+slope
-cvarNOD = like '-cvar', but no initial detrending in stdev calc
-cvarinv = 1.0/cvar = 'signal to noise ratio' [for Vinai]
NB: in stdev calc, input is detrended by removing mean+slope
-cvarinvNOD = like '-cvarinv', but no detrending in stdev calc
-tsnr = compute temporal signal to noise ratio
fabs(mean)/stdev NOT DETRENDED (same as -cvarinvNOD)
-MAD = compute MAD (median absolute deviation) of
input voxels = median(|voxel-median(voxel)|)
[N.B.: the trend is NOT removed for this]
-DW = compute Durbin-Watson Statistic of input voxels
[N.B.: the trend IS removed for this]
-median = compute median of input voxels [undetrended]
-nzmedian = compute median of non-zero input voxels [undetrended]
-nzstdev = standard deviation of non-zero input voxels [undetrended]
-bmv = compute biweight midvariance of input voxels [undetrended]
[actually is 0.989*sqrt(biweight midvariance), to make]
[the value comparable to the standard deviation output]
-MSSD = Von Neumann's Mean of Successive Squared Differences
= average of sum of squares of first time difference
-MSSDsqrt = Sqrt(MSSD)
-MASDx = Median of absolute values of first time differences
times 1.4826 (to scale it like standard deviation)
= a robust alternative to MSSDsqrt
-min = compute minimum of input voxels [undetrended]
-max = compute maximum of input voxels [undetrended]
-absmax = compute absolute maximum of input voxels [undetrended]
-signed_absmax = (signed) value with absolute maximum [undetrended]
-percentile P = the P-th percentile point (0=min, 50=median, 100=max)
of the data in each voxel time series.
[this option can only be used once!]
-argmin = index of minimum of input voxels [undetrended]
-argmin1 = index + 1 of minimum of input voxels [undetrended]
-argmax = index of maximum of input voxels [undetrended]
-argmax1 = index + 1 of maximum of input voxels [undetrended]
-argabsmax = index of absolute maximum of input voxels [undetrended]
-argabsmax1= index +1 of absolute maximum of input voxels [undetrended]
-duration = compute number of points around max above a threshold
Use basepercent option to set limits
-onset = beginning of duration around max where value
exceeds basepercent
-offset = end of duration around max where value
exceeds basepercent
-centroid = compute centroid of data time curves
(sum(i*f(i)) / sum(f(i)))
-centduration = compute duration using centroid's index as center
-nzmean = compute mean of non-zero voxels
-zcount = count number of zero values at each voxel
-nzcount = count number of non zero values at each voxel
-autocorr n = compute autocorrelation function and return
first n coefficients
-autoreg n = compute autoregression coefficients and return
first n coefficients
[N.B.: -autocorr 0 and/or -autoreg 0 will return number
coefficients equal to the length of the input data]
-accumulate = accumulate time series values (partial sums)
val[i] = sum old_val[t] over t = 0..i
(output length = input length)
-centromean = compute mean of middle 50% of voxel values [undetrended]
-skewness = measure of asymmetry in distribution - based on Pearson's
moment coefficient of skewness.
-kurtosis = measure of the 'tailedness' of the probability distribution
- the fourth standardized moment. Never negative.
-firstvalue = first value in dataset - typically just placeholder
** If no statistic option is given, then '-mean' is assumed **
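For illustration, a minimal sketch requesting several statistics at once
(the dataset name and sub-brick selector are hypothetical):
    3dTstat -mean -stdev -tsnr -prefix epi_stats 'epi+orig[4..$]'
The output bucket then contains one sub-brick per requested statistic.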
Other Options:
-tdiff = Means to take the first difference of each time
series before further processing.
-prefix p = Use string 'p' for the prefix of the
output dataset [DEFAULT = 'stat']
-datum d = use data type 'd' for the type of storage
of the output, where 'd' is one of
'byte', 'short', or 'float' [DEFAULT=float]
-nscale = Do not scale output values when datum is byte or short.
Scaling is done by default.
-basepercent nn = Percentage of maximum for duration calculation
-mask mset Means to use the dataset 'mset' as a mask:
Only voxels with nonzero values in 'mset'
will be printed from 'dataset'. Note
that the mask dataset and the input dataset
must have the same number of voxels.
-mrange a b Means to further restrict the voxels from
'mset' so that only those mask values
between 'a' and 'b' (inclusive) will
be used. If this option is not given,
all nonzero values from 'mset' are used.
Note that if a voxel is zero in 'mset', then
it won't be included, even if a < 0 < b.
-cmask 'opts' Means to execute the options enclosed in single
quotes as a 3dcalc-like program, and produce
a mask from the resulting 3D brick.
Examples:
-cmask '-a fred+orig[7] -b zork+orig[3] -expr step(a-b)'
produces a mask that is nonzero only where
the 7th sub-brick of fred+orig is larger than
the 3rd sub-brick of zork+orig.
-cmask '-a fred+orig -expr 1-bool(k-7)'
produces a mask that is nonzero only in the
7th slice (k=7); combined with -mask, you
could use this to extract just selected voxels
from particular slice(s).
Notes: * You can use both -mask and -cmask in the same
run - in this case, only voxels present in
both masks will be dumped.
* Only single sub-brick calculations can be
used in the 3dcalc-like calculations -
if you input a multi-brick dataset here,
without using a sub-brick index, then only
its 0th sub-brick will be used.
* Do not use quotes inside the 'opts' string!
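A hedged sketch combining -mask and -mrange (dataset names and mask
values are hypothetical):
    3dTstat -tsnr -mask segm+orig -mrange 2 3 -prefix tsnr_seg epi+orig
which restricts the output to voxels whose value in 'segm+orig' lies
between 2 and 3 (inclusive).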
If you want statistics on a detrended dataset and the option
doesn't allow that, you can use program 3dDetrend first.
The output is a bucket dataset. The input dataset may
use a sub-brick selection list, as in program 3dcalc.
*** If you are trying to compute the mean or std.dev. of multiple
datasets (not across time), use 3dMean or 3dmerge instead.
----------------- Processing 1D files with 3dTstat -----------------
To analyze a 1D file and get statistics on each of its columns,
you can do something like this:
3dTstat -stdev -bmv -prefix stdout: file.1D\'
where the \' means to transpose the file on input, since 1D files
read into 3dXXX programs are interpreted as having the time direction
along the rows rather than down the columns. In this example, the
output is written to the screen, which could be captured with '>'
redirection. Note that if you don't give the '-prefix stdout:'
option, then the output will be written into a NIML-formatted 1D
dataset, which you might find slightly confusing (but still usable).
++ Compile date = May 6 2025 {AFNI_25.1.08:linux_ubuntu_24_64}
AFNI program: 3dttest++
Gosset (Student) t-test of sets of 3D datasets. ~1~
[* Also consider program 3dMEMA, which can carry out a *]
[* more sophisticated type of 't-test' that also takes *]
[* into account the variance map of each input dataset. *]
[* When constructing 3dttest++ commands consider using *]
[* gen_group_command.py to build your command FOR you, *]
[* which can simplify the syntax/process. *]
* Usage can be similar (not identical) to the old 3dttest;
for example [SHORT form of dataset input]:
3dttest++ -setA a+tlrc'[3]' b+tlrc'[3]' ...
* OR, usage can be similar to 3dMEMA; for example [LONG form]:
3dttest++ -setA Green sub001 a+tlrc'[3]' \
sub002 b+tlrc'[3]' \
sub003 c+tlrc'[3]' \
... \
-covariates Cfile
* Please note that in the second ('LONG') form of the '-setA' option,
the first value after '-setA' is a label for the set (here, 'Green').
++ After that, pairs of values are given; in each pair, the first
entry is a label for the dataset that is the second entry.
++ This dataset label is used as a key into the covariates file.
++ If you want to have a label for the set, but do not wish (or need)
to have a label for each dataset in the set, then you can use
the SHORT form (first example above), and then provide the overall
label for the set with the '-labelA' option.
++ The set label is used to create sub-brick labels in the output dataset,
to make it simpler for a user to select volumes for display in the
AFNI GUI. Example:
-labelA Nor -labelB Pat
then the difference between the setA and setB means will get the
label 'Nor-Pat_mean', and the corresponding t-statistic will get
the label 'Nor-Pat_Tstat'.
++ See the section 'STRUCTURE OF THE OUTPUT DATASET' (far below) for
more information on how the results are formatted.
** NOTES on the labels above:
++ The '-setX' label (above: 'Green') will be limited to 12 characters
-- this label is used in the sub-brick labels in the output files,
which are shown in the AFNI GUI 'Define Overlay' buttons for
choosing the volumes (sub-bricks) you want to look at.
++ However, the dataset labels (above: 'sub001', etc) are only limited
to 256 characters. These labels are used to pick values out of the
covariates table.
++ In the 'LONG' form input illustrated above, the set label and the
dataset labels are given explicitly.
++ In the 'SHORT' form input, the set label must be given separately,
using option '-labelA' and/or '-labelB'. The dataset labels are
taken from the dataset input filenames -- to be precise, the 'prefix'
part of the filename, as in:
'Ethel/Fred.nii' -> 'Fred' and 'Lucy/Ricky+tlrc.HEAD' -> 'Lucy'
If you are using covariates and are using the 'SHORT' form of input
(the most common usage), the prefixes of the dataset filename must
be unique within their first 256 characters, or trouble will happen.
++ I added this note [15 Dec 2021] because failing to distinguish between
these labels and their limits was causing some confusion and angst.
* You can input 1 or 2 sets of data (labeled 'A' and 'B' by default).
* With 1 set ('-setA'), the mean across input datasets (usually subjects)
is tested against 0.
* With 2 sets, the difference in means across each set is tested
against 0. The 1 sample results for each set are also provided, since
these are often of interest to the investigator (e.g., YOU).
++ With 2 sets, the default is to produce the difference as setA - setB.
++ You can use the option '-BminusA' to get the signs reversed.
* Covariates can be per-dataset (input=1 number) and/or per-voxel/per-dataset
(input=1 dataset sub-brick).
++ Note that voxel-level covariates will slow the program down, since
the regression matrix for the covariates must be re-inverted for
each voxel separately. For most purposes, the program is so fast
that this slower operation won't be important.
* The new-ish options '-Clustsim' and '-ETAC' will use randomization and
permutation simulation to produce cluster-level threshold values that
can be used to control the false positive rate (FPR) globally. These
options are slow, since they will run 1000s of simulated 3D t-tests in
order to get cluster-level statistics about the 1 actual test.
* You can input plain text files of numbers, provided their filenames end
in the AFNI standard '.1D'. If you have two columns of numbers in files
AA.1D and BB.1D, you could test their means for equality with a command like
3dttest++ -prefix stdout: -no1sam -setA AA.1D\' -setB BB.1D\'
Here, the \' at the end of the filename tells the program to transpose
the column files to row files, since AFNI treats a single row of numbers
as the multiple values for a single 'voxel'. The output (on stdout) from
such a command will be one row of numbers: the first value is the
difference in the means between the 2 samples, and the second value is
the t-statistic for this difference. (There will also be a bunch of text
on stderr, with various messages.)
* This program is meant (for most uses) to replace the original 3dttest,
which was written in 1994, "When grass was green and grain was yellow".
++ And when the program's author still had hair on the top of his head /:(
------------------
SET INPUT OPTIONS ~1~
------------------
* At least the '-setA' option must be given.
* '-setB' is optional, and if it isn't used, then the mean of the dataset
values from '-setA' is t-tested against 0 (1 sample t-test).
* Two forms for the '-setX' (X='A' or 'B') options are allowed. The first
(short) form is similar to the original 3dttest program, where the option
is just followed by a list of datasets to use.
* The second (long) form is similar to the 3dMEMA program, where you specify
a distinct label for each input dataset sub-brick (a difference between this
option and the version in 3dMEMA is only that you do not give a second
dataset ('T_DSET') with each sample in this program).
***** SHORT FORM *****
-setA BETA_DSET BETA_DSET ...
[-setB]
* In this form of input, you specify the datasets for each set
directly following the '-setX' option.
++ Unlike 3dttest, you can specify multiple sub-bricks in a dataset:
-setA a+tlrc'[1..13(2)]'
which inputs 7 sub-bricks at once (1,3,5,7,9,11,13).
*** See the '-brickwise' option (far below) for more information ***
*** on how multiple sub-brick datasets will be processed herein. ***
++ If multiple sub-bricks are input from a single dataset, then
covariates cannot be used (sorry, Charlie).
++ In the short form input, the 'prefix' for each dataset is its label
if '-covariates' is used. The prefix is the dataset file name with
any leading directory name removed, and everything at and after
'+' or '.nii' cut off:
Zork/Fred.nii -> Fred *OR* Zork/Fred+tlrc.HEAD -> Fred
++ In the long form input (described below), you provide each dataset
with a label on the command line directly.
++ For some limited compatibility with 3dttest, you can use '-set2' in
place of '-setA', and '-set1' in place of '-setB'.
++ [19 Jun 2012, from Beijing Normal University, during AFNI Bootcamp]
For the SHORT FORM only, you can use the wildcards '*' and/or '?' in
the BETA_DSET filenames, along with sub-brick selectors, to make it
easier to create the command line.
To protect the wildcards from the shell, the entire filename should be
inside single ' or double " quote marks. For example:
3dttest++ -setA '*.beta+tlrc.HEAD[Vrel#0_Coef]' \
-setB '*.beta+tlrc.HEAD[Arel#0_Coef]' -prefix VAtest -paired
will do a paired 2-sample test between the symbolically selected sub-bricks
from a collection of single-subject datasets (here, 2 different tasks).
***** LONG FORM *****
-setA SETNAME \
[-setB] LABL_1 BETA_DSET \
LABL_2 BETA_DSET \
... ... \
LABL_N BETA_DSET
* In this form of input, you specify an overall name for the set of datasets,
and a label to be associated with each separate input dataset. (This label
is used with the '-covariates' option, described later.)
SETNAME is the name assigned to the set (used in the output labels).
LABL_K is the label for the Kth input dataset name, whose name follows.
BETA_DSET is the name of the dataset of the beta coefficient or GLT.
++ only 1 sub-brick can be specified here!
** Note that the label 'SETNAME' is limited to 12 characters,
and the dataset labels 'LABL_K' are limited to 256 characters.
-- Any more will be thrown away without warning.
-- This limit also applies to the dataset labels taken
from the dataset filenames in the short form input.
** Only the first 12 characters of the covariate labels can be
used in the sub-brick labels, due to limitations in the AFNI
dataset structure and AFNI GUI. Any covariate labels longer than
this will be truncated when put into the output dataset :(
** The program determines if you are using the short form or long **
** form to specify the input datasets based on the first argument **
** after the '-setX' option. If this argument can be opened as a **
** dataset, the short form is used. If instead, the next argument **
** cannot be opened as a dataset, then the long form is assumed. **
-labelA SETNAME = for the short form of '-setX', this option allows you
[-labelB] to attach a label to the set, which will be used in
the sub-brick labels in the output dataset. If you don't
give a SETNAME, then '-setA' will be named 'SetA', etc.
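++ For illustration, a hedged sketch of the SHORT FORM with set labels
(the file names and the sub-brick label are hypothetical):
    3dttest++ -prefix Pat_vs_Ctl -labelA Pat -labelB Ctl \
              -setA 'Pat_*.beta+tlrc.HEAD[Vrel#0_Coef]' \
              -setB 'Ctl_*.beta+tlrc.HEAD[Vrel#0_Coef]'
The output sub-bricks would then carry labels such as 'Pat-Ctl_mean'
and 'Pat-Ctl_Tstat'.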
***** NOTE WELL: The sign of a two sample test is A - B. *****
*** Thus, '-setB' corresponds to '-set1' in 3dttest, ***
*** and '-setA' corresponds to '-set2' in 3dttest. ***
***** This ordering of A and B matches 3dGroupInCorr. *****
*****-------------------------------------------------------------*****
***** ALSO NOTE: You can reverse this sign by using the option *****
*** '-BminusA', in which case the test is B - A. ***
*** The option '-AminusB' can be used to explicitly ***
***** specify the standard subtraction order. *****
------------ Dataset (e.g., Subject) level weights [Mar 2020] ------------
These options let you mark some datasets (that is, some subjects) as
weighing more in the analysis. A larger weight means a subject's
data will count more in the analysis.
-setweightA wname = Name of a file with the weights for the -setA
*and/or* datasets. This is a .1D (numbers as text) file
-setweightB that should have 1 positive value for each
volume being processed.
* A larger weight value means the voxel values for
that volume counts more in the test.
* In the least squares world, these weights would
typically be the reciprocal of that subject's
(or volume's) standard deviation -- in other words,
a measure of the perceived reliability of the data
in that volume.
* For -setweightA, there should be the same number
of weight values in the 'wname' file as there
are volumes in -setA.
++ Fewer weight values cause a fatal ERROR.
++ Extra weight values will print a WARNING
message and then be ignored.
++ Non-positive weight values cause a fatal ERROR.
* You can provide the weights directly on the
the command line with an option of the form
-setweightA '1D: 3 2 1 4 1 2'
when -setA has 6 input volumes.
* You can use -covariates and -setweight together.
--LIMITATIONS-- ** At this time, there is no way to set voxel-wise weights.
** -setweight will turn off -unpooled (if it was used).
** -paired will turn off -setweightB (if used), since
a paired t-test requires equal weights
(and equal covariates) in both samples.
** -singletonA will turn off -setweightA.
** Using -setweight with -rankize is not allowed.
Implementation of weights is by use of the regression method used
for implementing covariates. For convenience in the program, the
provided weights are normalized to average 1, separately for
-setA and -setB (if present). This means that the total weight
actually used for each set is the number of volumes present in that set.
The t-statistic for setA-setB is testing whether the weighted
means of the two samples are equal. Similar remarks apply to
the individual sample means (e.g., weighted mean of setA
tested versus 0).
Dataset weights are conceptually different than dataset covariates:
* Weights measure the reliability of the input dataset values - larger
weight for a dataset means its values are more reliable.
* Covariates are measures that might directly affect the input dataset values.
In a different language, weights are about the variance of the input dataset
values, whereas covariates are about the size of the input dataset values.
As with covariates, where you get the weights from is your business.
Be careful out there, and don't go crazy.
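For illustration, a minimal sketch of dataset-level weights using the
inline '1D:' form described above (dataset names and weight values are
hypothetical):
    3dttest++ -prefix WtTest \
              -setA sub01+tlrc'[3]' sub02+tlrc'[3]' sub03+tlrc'[3]' \
              -setweightA '1D: 2 1 1'
Here sub01's data counts roughly twice as much as each of the others in
the 1-sample test of the setA mean against 0.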
---------------------------------------------------------------
TESTING A SINGLE DATASET VERSUS THE MEAN OF A GROUP OF DATASETS ~1~
---------------------------------------------------------------
This new [Mar 2015] option allows you to test a single value versus
a group of datasets. To do this, replace the '-setA' option with the
'-singletonA' option described below, and input '-setB' normally
(that is, '-setB' must have more than 1 dataset).
The '-singletonA' option comes in 3 different forms:
-singletonA dataset_A
*OR*
-singletonA LABL_A dataset_A
*OR*
-singletonA FIXED_NUMBER
* In the first form, just give the 1 sub-brick dataset name after the option.
* In the second form, you can provide a dataset 'label' to be used for
covariates extraction. As in the case of the long forms for '-setA' and
'-setB', the 'LABL_A' argument cannot be the name of an existing dataset;
otherwise, the program will assume you are using the first form.
* In the third form, instead of giving a dataset, you give a fixed number
(e.g., '0.5'), to test the -setB collection against this 1 number.
++ In this form, '-singleton_variance_ratio' is set to a very small number,
since you presumably aren't testing against an instance of a random
variable.
++ Also, '-BminusA' is turned on when FIXED_NUMBER is used, to give the
effect of a 1-sample test against a constant. For example,
-singletonA 0.0 -setB x y z
is equivalent to the 1-sample test with '-setA x y z'. The only advantage
of using '-singletonA FIXED_NUMBER' is that you can test against a
nonzero constant this way.
++ You cannot use covariates with this FIXED_NUMBER form of '-singletonA' /:(
* The output dataset will have 2 sub-bricks:
++ The difference (at each voxel) between the dataset_A value and the
mean of the setB dataset values.
++ (In the form where 'dataset_A' is replaced by a fixed)
(number, the output is instead the difference between)
(the mean of the setB values and the fixed number. )
++ The t-statistic corresponding to this difference.
* If covariates are used, at each voxel the slopes of the setB data values with
respect to the covariates are estimated (as usual).
++ These slopes are then used to project the covariates out of the mean of
the setB values, and are also applied similarly to the single value from
the singleton dataset_A (using its respective covariate value).
++ That is, the covariate slopes from setB are applied to the covariate values
for dataset_A in order to subtract the covariate effects from dataset_A,
as well as from the setB mean.
++ Since it is impossible to independently estimate the covariate slopes for
dataset_A, this procedure seems (to me) like the only reasonable way to use
covariates with a singleton dataset.
* The t-statistic is computed assuming that the variance of dataset_A is the
same as the variance of the setB datasets.
++ Of course, it is impossible to estimate the variance of dataset_A at each
voxel from its single number!
++ In this way, the t-statistic differs from testing the setB mean against
a (voxel-dependent) constant, which would not have any variance.
++ In particular, the t-statistic will be smaller than in the more usual
'test-against-constant' case, since the test here allows for the variance
of the dataset_A value.
++ As a special case, you can use the option
-singleton_variance_ratio RRR
to set the (assumed) variance of dataset_A to be RRR times the variance
of set B. Here, 'RRR' must be a positive number -- it cannot be zero,
so if you really want to test against a voxel-wise constant, use something
like 0.000001 for RRR (this is the setting automatically made when
'dataset_A' is replaced by a fixed number, in the third form above).
* Statistical inference on a single sample (dataset_A values) isn't really
possible. The purpose of '-singletonA' is to give you some guidance when
a voxel value in dataset_A is markedly different from the distribution of
values in setB.
++ However, a statistician would caution you that when an elephant walks into
the room, it might be a 500,000 standard deviation mouse, so you can't
validly conclude it is a different species until you get some more data.
* At present, '-singletonA' cannot be used with '-brickwise'.
++ Various other options don't make sense with '-singletonA', including
'-paired' and '-center SAME'.
* Note that there is no '-singletonB' option -- the only reason this is labeled
as '-singletonA' is to remind the user (you) that this option replaces the
'-setA' option.
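For illustration, a hedged sketch of the first form of '-singletonA'
(all dataset names are hypothetical):
    3dttest++ -prefix OneVsGroup \
              -singletonA pat007+tlrc'[3]' \
              -setB ctl01+tlrc'[3]' ctl02+tlrc'[3]' ctl03+tlrc'[3]' \
                    ctl04+tlrc'[3]' ctl05+tlrc'[3]'
The output has 2 sub-bricks: the difference between the pat007 value and
the mean of the setB values, and the corresponding t-statistic.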
--------------------------------------
COVARIATES - per dataset and per voxel ~1~
--------------------------------------
-covariates COVAR_FILE
* COVAR_FILE is the name of a text file with a table for the covariate(s).
Each column in the file is treated as a separate covariate, and each
row contains the values of these covariates for one sample (dataset). Note
that you can use '-covariates' only ONCE -- the COVAR_FILE should contain
the covariates for ALL input samples from both sets.
* Rows in COVAR_FILE whose first column don't match a dataset label are
ignored (silently).
++ This feature allows you to analyze subsets of data collections while
using the covariates file for a large group of subjects -- some of whom
might not be in a given subset analysis.
* An input dataset label that doesn't match a row in COVAR_FILE, on the other
hand, is a fatal error.
++ The program doesn't know how to get the covariate values for such a
dataset, so it can't continue.
* There is no provision for missing values -- the entire table must be filled!
* The format of COVAR_FILE is similar to the format used in 3dMEMA and
3dGroupInCorr (generalized to allow for voxel-wise covariates):
FIRST LINE --> subject IQ age GMfrac
LATER LINES --> Elvis 143 42 Elvis_GM+tlrc[8]
Fred 85 59 Fred_GM+tlrc[8]
Ethel 109 49 Ethel_GM+tlrc[8]
Lucy 133 32 Lucy_GM+tlrc[8]
Ricky 121 37 Ricky_GM+tlrc[8]
* The first line of COVAR_FILE contains column headers. The header label
for the first column (#0) isn't used for anything. The later header labels
are used in the sub-brick labels stored in the output dataset.
* The first column contains the dataset labels that must match the dataset
LABL_K labels given in the '-setX' option(s).
* If you used a short form '-setX' option, each dataset label is
the dataset's prefix name (truncated to 12 characters).
++ e.g., Klaatu+tlrc'[3]' ==> Klaatu
++ e.g., Elvis.nii.gz ==> Elvis
* '-covariates' can only be used with the short form '-setX' option
when each input dataset has only 1 sub-brick (so that each label
refers to exactly 1 volume of data).
++ Duplicate labels in the dataset list or in the covariates file
will not work well!
* The later columns in COVAR_FILE contain numbers (e.g., 'IQ' and 'age',
above), OR dataset names. In the latter case, you are specifying a
voxel-wise covariate (e.g., 'GMfrac').
++ Do NOT put the dataset names or labels in this file in quotes.
* A column can contain numbers only, OR datasets names only. But one
column CANNOT contain a mix of numbers and dataset names!
++ In the second line of the file (after the header line), a column entry
that is purely numeric indicates that column will be all numbers.
++ A column entry that is not numeric indicates that column will be
dataset names.
++ You are not required to make the columns and rows line up neatly,
(separating entries in the same row with 1 or more blanks is OK),
but your life will be much nicer if you DO make them well organized.
* You cannot enter covariates as pure labels (e.g., 'Male' and 'Female').
To assign such categorical covariates, you must use numeric values.
A column in the covariates file that contains strings rather than
numbers is assumed to be a list of dataset names, not category labels!
* If you want to omit some columns in COVAR_FILE from the analysis, you
can do so with the standard AFNI column selector '[...]'. However,
you MUST include column #0 first (the dataset labels) and at least
one more column. For example:
-covariates Cov.table'[0,2..4]'
to skip column #1 but keep columns #2, #3, and #4.
* Only the -paired and -pooled options can be used with covariates.
++ If you use -unpooled, it will be changed to -pooled.
++ The same limitation on -unpooled applies to -setweight.
* If you use -paired, then the covariate values for setB will be the
same as those for setA, even if the dataset labels are different!
++ If you want to use different covariates for setA and setB in the
paired test, then you'll have to subtract the setA and setB
datasets (with 3dcalc), and then do a 1-sample test, using the
differences of the original covariates as the covariates for
this 1-sample test.
++ This subtraction technique works because a paired t-test is really
the same as subtracting the paired samples and then doing a
1-sample t-test on these differences.
++ For example, you do FMRI scans on a group of subjects, then
train them on some task for a week, then re-scan them, and
you want to use their behavioral scores on the task, pre- and
post-training, as the covariates.
* See the section 'STRUCTURE OF THE OUTPUT DATASET' for details of
what is calculated and stored by 3dttest++.
* If you are having trouble getting the program to read your covariates
table file, then set the environment variable AFNI_DEBUG_TABLE to YES
and run the program. A lot of progress reports will be printed out,
which may help pinpoint the problem; for example:
3dttest++ -DAFNI_DEBUG_TABLE=YES -covariates cfile.txt |& more
* A maximum of 31 covariates are allowed. If you have more, then
seriously consider the likelihood that you are completely deranged.
* N.B.: The simpler forms of the COVAR_FILE that 3dMEMA allows are
NOT supported here! Only the format described above will work.
* N.B.: IF you are entering multiple sub-bricks from the same dataset in
one of the '-setX' options, AND you are using covariates, then
you must use the 'LONG FORM' of input for the '-setX' option,
and give each sub-brick a distinct label that matches something
in the covariates file. Otherwise, the program will not know
which covariate to use with which input sub-brick, and bad
things will happen.
* N.B.: Please be careful in setting up the covariates file and dataset
labels, as the program only does some simple error checking.
++ If you REALLY want to see the regression matrices
used with covariates, use the '-debug' option.
++ Which will give you a LOT of output (to stderr), so redirect:
3dttest++ .... |& tee debug.out
***** CENTERING (this subject is very important -- read and think!) *******
++ This term refers to how the mean across subjects of a covariate
will be processed. There are 3 possibilities:
-center NONE = Do not remove the mean of any covariate.
-center DIFF = Each set will have the means removed separately.
-center SAME = The means across both sets will be computed and removed.
(This option only applies to a 2-sample test, obviously.)
++ These operations (DIFF or SAME) can be altered slightly by the following:
-cmeth MEAN = When centering, subtract the mean.
-cmeth MEDIAN = When centering, subtract the median.
(Per the request of the Musical Neuroscientist, AKA Steve Gotts.)
++ If you use a voxel-wise (dataset) covariate, then the centering method
is applied to each voxel's collection of covariate values separately.
++ The default operation is '-center DIFF'.
++ '-center NONE' is for the case where you have pre-processed the
covariate values to meet your needs; otherwise, it is not recommended!
++ Centering can be important. For example, suppose that the mean
IQ in setA is significantly higher than in setB, and that the beta
values are positively correlated with IQ IN THE SAME WAY IN THE
TWO GROUPS. Then the mean beta value in setA will be higher than in
setB simply from the IQ effect.
-- To attempt to allow for this type of inter-group mean differences,
in order to detect other difference between the two groups
(e.g., from disease status), you would have to center the two groups
together, rather than separately (i.e., use '-center SAME').
-- However, if the beta values are correlated significantly differently
with IQ in the two groups, then '-center DIFF' would perhaps be
a better choice. Please read on:
++ How to choose between '-center SAME' or '-center DIFF'? You have
to understand what your model is and what effect the covariates
are likely to have on the data. You shouldn't just blindly use
covariates 'just in case'. That way lies statistical madness.
-- If the two samples don't differ much in the mean values of their
covariates, then the results with '-center SAME' and '-center DIFF'
should be nearly the same.
-- For fixed covariates (not those taken from datasets), the program
prints out the results of a t-test of the between-group mean
covariate values. This test is purely informative; no action is
taken if the t-test shows that the two groups are significantly
different in some covariate.
-- If the two samples DO differ much in the mean values of their
covariates, then you should read the next point VERY CAREFULLY.
++ The principal purpose of including covariates in an analysis (ANCOVA)
is to reduce the variance of the beta values due to extraneous causes.
Some investigators also wish to use covariates to 'factor out' significant
differences between groups. However, there are those who argue
(convincingly) that if your two groups differ markedly in their mean
covariate values, then there is NO statistical test that can tell if
their mean beta values (dependent variable) would be the same or
different if their covariate values were all the same instead:
Miller GM and Chapman JP. Misunderstanding analysis of covariance.
J Abnormal Psych 110: 40-48 (2001).
http://dx.doi.org/10.1037/0021-843X.110.1.40
http://psycnet.apa.org/journals/abn/110/1/40.pdf
-- For example, if all your control subjects have high IQs and all your
patient subjects have normal IQs, group differences in activation can
be due to either cause (IQ or disease status) and you can't turn the
results from a set of high IQ controls into the results you would have
gotten from a set of normal IQ controls (so you can compare them to the
patients) just by linear regression and then pretending the IQ issue
goes away.
-- The decision as to whether a mean covariate difference between groups
makes the t-test of the mean beta difference invalid or valid isn't
purely a statistical question; it's also a question of interpretation
of the scientific issues of the study. See the Miller & Chapman paper
(above) for a lengthy discussion of this issue.
-- It is not clear how much difference in covariate levels is acceptable.
You could carry out a t-test on the covariate values between the
2 groups and if the difference in means is not significant at some
level (i.e., if p > 0.05?), then accept the two groups as being
'identical' in that variable. But this is just a suggestion.
(In fact, the program now carries out this t-test for you; cf supra.)
-- Thanks to Andy Mayer for pointing out this article to me.
++ At this time, there is no option to force the SLOPES of the
regression vs. covariate values to be the same in the two-sample
analysis. [Adding this feature would be too much like work.]
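++ For illustration, a hedged sketch combining covariates with the
default centering (file and dataset names are hypothetical):
    3dttest++ -prefix AgeCov -center DIFF \
              -setA 'Pat_*.nii[0]' -labelA Pat \
              -setB 'Ctl_*.nii[0]' -labelB Ctl \
              -covariates age_table.txt
where age_table.txt follows the COVAR_FILE format above, with one row
per dataset prefix and (say) a single numeric 'age' column.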
-------------
OTHER OPTIONS ~1~
-------------
-paired = Specifies the use of a paired-sample t-test to
compare setA and setB. If this option is used,
setA and setB must have the same cardinality (duh).
++ Recall that if '-paired' is used with '-covariates',
the covariates for setB will be the same as for setA.
++ If you don't understand the difference between a
paired and unpaired t-test, I'm not going to teach you
in this help file. But please consult someone or you
will undoubtedly come to grief!
-unpooled = Specifies that the variance estimates for setA and
setB be computed separately (not pooled together).
++ This only makes sense if -paired is NOT given.
++ '-unpooled' cannot be used with '-covariates'.
++ Unpooled variance estimates are supposed to
provide some protection against heteroscedasticity
(significantly different inter-subject variance
between the two different collections of datasets).
++ Our experience is that for most FMRI data, using
'-unpooled' is not needed; the option is here for
those who like to experiment or who are very cautious.
-toz = Convert output t-statistics to z-scores
++ -unpooled implies -toz, since t-statistics won't be
comparable between voxels as the number of degrees
of freedom will vary between voxels.
-->>++ -toz is automatically turned on with the -Clustsim option.
The reason for this is that -Clustsim (and -ETAC) work by
specifying voxel-wise thresholds via p-values -- z-statistics
are simpler to compute in the external clustering programs
(3dClustSim and 3dXClustSim) than t-statistics, since converting
a z=N(0,1) value to a p-value doesn't require knowing any
extra parameters (such as the t DOF).
-- In other words, I did this to make my life simpler.
++ If for some bizarre reason you want to convert a z-statistic
to a t-statistic, you can use 3dcalc with a clumsy expression
of the form
'cdf2stat(stat2cdf(x,5,0,0,0),3,DOF,0,0)'
where 'DOF' is replaced with the number of degrees of freedom.
The following command will show the effect of such a conversion:
1deval -xzero -4 -del 0.01 -num 801 \
-expr 'cdf2stat(stat2cdf(x,5,0,0,0),3,10,0,0)' | \
1dplot -xzero -4 -del 0.01 -stdin -xlabel z -ylabel 't(10)'
-zskip [n]= Do not include voxel values that are zero in the analysis.
++ This option can be used when not all subjects' datasets
overlap perfectly.
++ -zskip implies -toz, since the number of samples per
voxel will now vary, so the number of degrees of
freedom will be spatially variable.
++ If you follow '-zskip' with a positive integer (> 1),
then that is the minimum number of nonzero values (in
each of setA and setB, separately) that must be present
before the t-test is carried out. If you don't give
this value, but DO use '-zskip', then its default is 5
(for no good reason).
++ At this time, you can't use -zskip with -covariates,
because that would require more extensive re-thinking
and then serious re-programming.
++ You CAN use -zskip with -paired, but it works slightly
differently than with a non-paired test [06 May 2021]:
-- In a non-paired test, setA and setB are pruned of
zero values separately; e.g., setA could lose 3
values at a given voxel, while setB loses 5 there.
-- In a paired test, if EITHER setA or setB has a zero
value at a given voxel, both paired values are discarded.
This choice is necessary, since a paired t-test
requires subtracting the setA/setB values pairwise
and if one element of a pair is invalid, then the
other element has nothing to be paired with.
++ You can also put a decimal fraction between 0 and 1 in
place of 'n' (e.g., '0.9', or '90%'). Such a value
indicates that at least 90% (e.g.) of the values in each
set must be nonzero for the t-test to proceed. [08 Nov 2010]
-- In no case will the number of values tested fall below 3!
-- You can use '100%' for 'n', to indicate that all data
values must be nonzero for the test to proceed.
-rankize = Convert the data (and covariates, if any) into ranks before
doing the 2-sample analyses. This option is intended to make
the statistics more 'robust', and is inspired by the paper
WJ Conover and RL Iman.
Analysis of Covariance Using the Rank Transformation,
Biometrics 38: 715-724 (1982).
http://www.jstor.org/stable/2530051
Also see http://www.jstor.org/stable/2683975
++ Using '-rankize' also implies '-no1sam' (infra), since it
doesn't make sense to do 1-sample t-tests on ranks.
++ Don't use this option unless you understand what it does!
The use of ranks herein should be considered very
experimental or speculative!!
-no1sam = When you input two samples (setA and setB), normally the
program outputs the 1-sample test results for each set
(comparing to zero), as well as the 2-sample test results
for differences between the sets. With '-no1sam', these
1-sample test results will NOT be calculated or saved.
-nomeans = You can also turn off output of the 'mean' sub-bricks, OR
-notests = of the 'test' sub-bricks if you want, to reduce the size of
the output dataset. For example, '-nomeans -no1sam' will
result in only getting the t-statistics for the 2-sample
tests. These options are intended for use with '-brickwise',
where the amount of output sub-bricks can become overwhelming.
++ You CANNOT use both '-nomeans' and '-notests', because
then you would be asking for no outputs at all!
-nocov = Do not output the '-covariates' results. This option is
intended only for internal testing, and it's hard to see
why the ordinary user would want it.
-mask mmm = Only compute results for voxels in the specified mask.
++ Voxels not in the mask will be set to 0 in the output.
++ If '-mask' is not used, all voxels will be tested.
-->>++ It is VERY important to use '-mask' when you use '-Clustsim'
or '-ETAC' to compute cluster-level thresholds.
++ NOTE: voxels whose input data is constant (in either set)
will NOT be processed and will get all zero outputs. This
inaction happens because the variance of a constant set of
data is zero, and division by zero is forbidden by the
Deities of Mathematics -- cf., http://www.math.ucla.edu/~tao/
-exblur b = Before doing the t-test, apply some extra blurring to the input
datasets; parameter 'b' is the Gaussian FWHM of the smoothing
kernel (in mm).
++ This option is how '-ETAC_blur' is implemented, so it isn't
usually needed by itself.
++ The blurring is done inside the mask; that is, voxels outside
the mask won't be used in the blurring process. Such blurring
is done the same way as in program 3dBlurInMask (using a
finite difference evolution with Neumann boundary conditions).
++ Gaussian blurring is NOT additive in the FWHM parameter.
If the inputs to 3dttest++ were blurred by FWHM=4 mm
(e.g., via afni_proc.py), then giving an extra blur of
FWHM=6 mm is more-or-less equivalent to applying a single
blur of sqrt(4*4+6*6)=7.2 mm, NOT to 4+6=10 mm!
++ '-exblur' does not work with '-brickwise'.
++ '-exblur' only works with 3D datasets.
++ If any covariates are datasets, you should be aware that the
covariate datasets are NOT blurred by the '-exblur' process.
-brickwise = This option alters the way this program works with input
datasets that have multiple sub-bricks (cf. the SHORT FORM).
++ If you use this option, it must appear BEFORE either '-set'
option (so the program knows how to do the bookkeeping
for the input datasets).
++ WITHOUT '-brickwise', all the input sub-bricks from all
datasets in '-setA' are gathered together to form the setA
sample (similarly for setB, of course). In this case, there
is no requirement that all input datasets have the same
number of sub-bricks.
++ WITH '-brickwise', all input datasets (in both sets)
MUST have the same number of sub-bricks. The t-tests
are then carried out sub-brick by sub-brick; that is,
if you input a collection of datasets with 10 sub-bricks
in each dataset, then you will get 10 t-test results.
++ Each t-test result will be made up of more than 1 sub-brick
in the output dataset. If you are doing a 2-sample test,
you might want to use '-no1sam' to reduce the number of
volumes in the output dataset. In addition, if you are
only interested in the statistical tests and not the means
(or slopes for covariates), then the option '-nomeans'
will reduce the dataset to just the t (or z) statistics
-- e.g., the combination '-no1sam -nomeans' will give you
one statistical sub-brick per input sub-brick.
++ If you input a LOT of sub-bricks, you might want to set
environment variable AFNI_AUTOMATIC_FDR to NO, in order
to suppress the automatic calculation of FDR curves for
each t-statistic sub-brick -- this FDR calculation can
be time consuming when done en masse.
-->>++ The intended application of this option is to make it
easy to take a collection of time-dependent datasets
(e.g., from MEG or from moving-window RS-FMRI analyses),
and get time-dependent t-test results. It is possible to do
the same thing with a scripted loop, but that way is painful.
++ You CAN use '-covariates' with '-brickwise'. You should note
that each t-test will reuse the same covariates -- that is,
there is no provision for time-dependent covariate values --
for that, you'd have to use scripting to run 3dttest++
multiple times.
++ EXAMPLE:
Each input dataset (meg*.nii) has 100 time points; the 'X'
datasets are for one test condition and the 'Y' datasets are
for another. In this example, the subjects are the same in
both conditions, so the '-paired' option makes sense.
3dttest++ -brickwise -prefix megXY.nii -no1sam -paired\
-setA meg01X.nii meg02X.nii meg03X.nii ... \
-setB meg01Y.nii meg02Y.nii meg03Y.nii ...
* The output dataset will have 200 sub-bricks: 100 differences
of the means between 'X' and 'Y', and 100 t-statistics.
* You could extract the output dataset t-statistics (say)
into a single dataset with a command like
3dTcat -prefix megXY_tstat.nii megXY.nii'[1..$(2)]'
(Or you could have used the '-nomeans' option.)
This dataset could then be used to plot the t-statistic
versus time, make a movie, or otherwise do lots of fun things.
* If '-brickwise' were NOT used, the output dataset would just
get 2 sub-bricks, as all the inputs in setA would be lumped
together into one super-sized sample (and similarly for setB).
* Remember that with the SHORT FORM input (needed for option
'-brickwise') you can use wildcards '*' and '?' together with
'[...]' sub-brick selectors.
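        * As a small sketch of the 'plot the t-statistic versus time' idea
          mentioned above (the ROI mask name here is purely illustrative),
          you could average the t-statistics inside an ROI at each time
          point and then plot the resulting 1D file:
            3dmaskave -quiet -mask ROImask.nii megXY_tstat.nii > tstat_ROI.1D
            1dplot tstat_ROI.1D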
-prefix p = Gives the name of the output dataset file.
++ For surface-based datasets, use something like:
-prefix p.niml.dset or -prefix p.gii.dset
Otherwise you may end up with files containing numbers but
not a full set of header information.
-resid q = Output the residuals into a dataset with prefix 'q'.
++ The residuals are the difference between the data values
and their prediction from the set mean (and set covariates).
++ For use in further analysis of the results (e.g., 3dFWHMx).
++ Cannot be used with '-brickwise' (sorry).
++ If used with '-zskip', values which were skipped in the
analysis will get residuals set to zero.
-ACF = If residuals are saved, also compute the ACF parameters from
them using program 3dFWHMx -- for further use in 3dClustSim
(which must be run separately).
++ HOWEVER, the '-Clustsim' option below provides a resampling
alternative to using the parametric '-ACF' method in
program 3dClustSim.
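              ++ A minimal sketch of that parametric route (the dataset names
                 and the ACF numbers below are illustrative only):
                   3dFWHMx -mask group_mask.nii -acf resid_ACF.1D -input Ttest_resid.nii
                   3dClustSim -mask group_mask.nii -acf 0.55 3.0 9.0
                 where '0.55 3.0 9.0' stand in for the ACF parameters (a b c)
                 that 3dFWHMx prints on its last output line.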
-dupe_ok = Duplicate dataset labels are OK. Do not generate warnings
for dataset pairs.
** This option must precede the corresponding -setX options.
** Such warnings are issued only when '-covariates' is used
-- when the labels are used to extract covariate values
from the covariate table.
-debug = Prints out information about the analysis, which can
be VERY lengthy -- not for general usage (or even for colonels).
++ Two copies of '-debug' will give even MORE output!
-----------------------------------------------------------------------------
ClustSim Options -- for global cluster-level thresholding and FPR control ~1~
-----------------------------------------------------------------------------
The following options are for using randomization/permutation to simulate
noise-only generated t-tests, and then run those results through the
cluster-size threshold simulation program 3dClustSim. The goal is to
compute cluster-size thresholds that are not based on a fixed model
for the spatial autocorrelation function (ACF) of the noise.
ETAC (infra) and ClustSim are parallelized. The randomized t-test steps are
done by spawning multiple 3dttest++ jobs using the residuals as input.
Then the 3dClustSim program (for -Clustsim) and 3dXClustSim program (for -ETAC)
use multi-threaded processing to carry out their clusterization statistics.
If your computer does NOT have multiple CPU cores, then these options will
run very very slowly.
You can use both -ETAC and -Clustsim in the same run. The main reason for
doing this is to compare the results of the two methods. Using both methods
in one 3dttest++ run will be super slow.
++ In such a dual-use case, and if '-ETAC_blur' is also given, note that
3dClustSim will be run once for each blur level, giving a set of cluster-
size threshold tables for each blur case. This process is necessary since
3dClustSim does not have a multi-blur thresholding capability, unlike
ETAC (via program 3dXClustSim).
++ The resulting 3dClustSim tables are to be applied to each of the auxiliary
t-test files produced, one for each blur case. Unless one of those blur
cases is '0.0', the 3dClustSim tables do NOT apply to the main output
dataset produced by this program.
++ These auxiliary blur case t-test results get names of the form
PREFIX.B8.0.nii
where PREFIX was given in the '-prefix' option, and in this example,
the amount of extra blurring was 8.0 mm. These files are the result
of re-running the commanded t-tests using blurred input datasets.
-Clustsim = With this option, after the commanded t-tests are done, then:
(a) the residuals from '-resid' are used with '-randomsign' to
simulate about 10000 null 3D results, and then
(b) 3dClustSim is run with those to generate cluster-threshold
tables, and then
(c) 3drefit is used to pack those tables into the main output
dataset, and then
(d) the temporary files created in this process are deleted.
The goal is to provide a method for cluster-level statistical
inference in the output dataset, to be used with the AFNI GUI
Clusterize controls.
++ If you want to keep ALL the temporary files, use '-CLUSTSIM'.
They will include the z-scores from all the simulations.
** Normally, the permutation/randomization z-scores are saved
in specially compressed files with suffix '.sdat'. If you
want these files in the '.nii' format, use the options
'-DAFNI_TTEST_NIICSIM=YES -CLUSTSIM'.
** However, if '-ETAC' is also used, the '.sdat' format will
be used instead of the '.nii' format, as the program that
implements ETAC (3dXClustSim) requires that format.
** You can change the number of simulations using an option
such as '-DAFNI_TTEST_NUMCSIM=20000' if you like.
++ Since the simulations are done with '-toz' active, the program
also turns on the '-toz' option for your output dataset. This
means that the output statistics will be z-scores, not t-values.
++ If you have fewer than 14 datasets total (setA & setB combined),
this option will not work! (There aren't enough random subsets.)
** And it will not work with '-singletonA'.
-->>++ '-Clustsim' runs step (a) in multiple jobs, for speed. By
default, it tries to auto-detect the number of CPUs on the
system and uses that many separate jobs. If you put a positive
integer immediately following the option, as in '-Clustsim 12',
it will instead use that many jobs (e.g., 12). This capability
is to be used when the CPU count is not auto-detected correctly.
** You can also set the number of CPUs to be used via the Unix
environment variable OMP_NUM_THREADS.
** This program does not use OpenMP (OMP), but since many other
AFNI programs do, setting OMP_NUM_THREADS is a common way
to set the amount of parallel computation to use.
-->>++ It is important to use a proper '-mask' option with '-Clustsim'.
Otherwise, the statistics of the clustering will be skewed.
-->>++ You can change the number of simulations from the default 10000
by setting Unix environment variable AFNI_TTEST_NUMCSIM to a
different value (in the range 1000..1000000). Note that the
3dClustSim tables go down to a cluster-corrected false positive
rate of 0.01, so that reducing the number of simulations below
10000 will produce notably less accurate results for such small
FPR (alpha) values.
**-->>++ The primary reason for reducing AFNI_TTEST_NUMCSIM below its
         default value is to test '-Clustsim' and/or '-ETAC' more quickly.
-->>++ The clever scripter can pick out a particular value from a
particular 3dClustSim output .1D file using the '{row}[col]'
syntax of AFNI, as in the tcsh command
set csize = `1dcat Fred.NN1_1sided.1D"{10}[6]"`
to pick out the number in the #10 row, #6 column (counting
from #0), which is the p=0.010 FPR=0.05 entry in the table.
-->++ Or even *better* now for extracting a table value:
a clever person added command line options to 1d_tool.py
to extract a value from the table having a voxelwise p-value
('-csim_pthr ..') and a cluster-level FPR alpha level ('-csim_alpha ..').
Be sure to check out those options in 1d_tool.py's help!
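          For example, using the table file from the example above
          (this is just a sketch -- see '1d_tool.py -help' for details):
            1d_tool.py -infile Fred.NN1_1sided.1D -csim_pthr 0.01 -csim_alpha 0.05
          should report the cluster-size threshold for per-voxel p=0.01
          at a global (cluster-corrected) FPR of 0.05.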
**-->>++ NOTE: The default operation of 3dClustSim when used from
3dttest++ is with the '-LOTS' option controlling
the thresholds used for the tabular output.
You can change that to the '-MEGA' option = a larger
table, by setting Unix environment variable
AFNI_CLUSTSIM_MEGA to YES. You can do that in several
ways, including on the command line with the option
'-DAFNI_CLUSTSIM_MEGA=YES'. [15 Dec 2021 - RWCox]
---==>>> PLEASE NOTE: This option has been tested for 1- and 2-sample
---==>>> unpaired and paired tests vs. resting state data -- to see if the
---==>>> false positive rate (FPR) was near the nominal 5% level (it was).
---==>>> The FPR for the covariate effects (as opposed to the main effect)
---==>>> is still somewhat biased away from the 5% level /:(
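       ++ A bare-bones '-Clustsim' command might look like the following
          (all file names here are illustrative only):
            3dttest++ -setA sub*.grpA.nii      \
                      -setB sub*.grpB.nii      \
                      -mask group_mask.nii     \
                      -prefix TTclust -Clustsim 12
          The cluster-size threshold tables end up packed into the header
          of the output dataset (for the AFNI GUI Clusterize controls),
          and the voxel-wise global thresholds land in TTclust.5percent.txt.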
****** The following options affect both '-Clustsim' and '-ETAC' ******
-prefix_clustsim cc = Use 'cc' for the prefix for the '-Clustsim' temporary
files, rather than a randomly generated prefix.
You might find this useful if scripting.
++ By default, the Clustsim (and ETAC) prefix will
be the same as that given by '-prefix'.
-->>++ If you use option '-Clustsim', then the simulations
keep track of the maximum (in mask) voxelwise
z-statistic, compute the threshold for 5% global FPR,
and write those values (for 1-sided and 2-sided
thresholding) to a file named 'cc'.5percent.txt --
where 'cc' is the prefix given here. Using such a
threshold in the AFNI GUI will (presumably) give you
a map with a 5% chance of false positive WITHOUT
clustering. Of course, these thresholds generally come
with a VERY stringent per-voxel p-value.
** In one analysis, the 5% 2-sided test FPR p-value was
about 7e-6 for a mask of 43000 voxels, which is
bigger (less strict) than the 1.2e-6 one would get
from the Bonferroni correction, but is still very
stringent for many purposes. This threshold value
was also close to the threshold at which the FDR
q=1/43000, which may not be a coincidence.
-->>++ This file has been updated to give the voxel-wise
statistic threshold for global FPRs from 1% to 9%.
However, the name is still '.5percent.txt' for the
sake of nostalgia.
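                 ++ For example, if the 2-sided 5% threshold reported in that
                    file were 5.1 (a made-up value), a globally thresholded
                    statistic map could be made with something like
                      3dcalc -a TTclust+tlrc'[1]' -expr 'a*step(abs(a)-5.1)' \
                             -prefix TTclust_globalthr
                    (assuming sub-brick #1 of the output holds the z-statistic
                    you care about).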
-no5percent = Don't output the 'cc'.5percent.txt file that comes
for free with '-Clustsim' and/or '-ETAC'.
++ But whyyy? Don't you like free things?
-tempdir ttt = Store temporary files for '-Clustsim' in this directory,
rather than in the current working directory.
-->>++ This option is for use when you have access to a fast
local disk (e.g., SSD) compared to general storage
on a rotating disk, RAID, or network storage.
++ Using '-tempdir' can make a significant difference
in '-Clustsim' and '-ETAC' runtime, if you have
a local solid state drive available!
[NOTE: with '-CLUSTSIM', these files aren't deleted!]
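                ++ For example, if /lscratch/$USER happens to be a fast local
                   SSD scratch area on your system (that path is purely
                   illustrative), you might add
                     -Clustsim -tempdir /lscratch/$USER/ttest.work
                   to your 3dttest++ command line.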
-seed X [Y] = This option is used to set the random number seed for
'-randomsign' to the positive integer 'X'. If a second integer
'Y' follows, then that value is used for the random number seed
for '-permute'.
++ The purpose of setting seeds (rather than letting the program
pick them) is for reproducibility. It is not usually needed by
the ordinary user.
++ Option '-seed' is used by the multi-blur analysis possible
with '-ETAC', so that the different blur levels use the same
randomizations, to make their results compatible for multi-
threshold combination.
++ Example: -seed 3217343 1830201
***** These options (below) are not usually directly used, but *****
***** are described here for completeness and for reference. *****
***** They are invoked by options '-Clustsim' and '-ETAC'. *****
-randomsign = Randomize the signs of the datasets. Intended to be used
with the output of '-resid' to generate null hypothesis
statistics in a second run of the program (probably using
'-nomeans' and '-toz'). Cannot be used with '-singletonA'
or with '-brickwise'.
++ You will never get an 'all positive' or 'all negative' sign
flipping case -- each sign will be present at least 15%
of the time.
++ There must be at least 4 samples in each input set to
use this option, and at least a total of 14 samples in
setA and setB combined.
++ If you follow '-randomsign' with a number (e.g.,
   '-randomsign 1000'), then you will get 1000 iterations
   of random sign flipping, so you will get 1000 times
   as many output sub-bricks as usual. This is intended
   for use with simulations such as '3dClustSim -inset'.
-->>++ This option is usually not used directly, but will be
invoked by the use of '-Clustsim' and/or '-ETAC'. It is
documented here for the sake of telling the Galaxy how the
program works.
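       ++ A rough sketch of how these pieces fit together by hand
          (dataset names are illustrative; '-Clustsim' automates the
          real bookkeeping for you):
            3dttest++ -setA TTresid+tlrc -randomsign 1000 \
                      -nomeans -toz -prefix TTnull
            3dClustSim -inset TTnull+tlrc -mask group_mask.nii
          where TTresid was saved by an earlier 3dttest++ run that
          used the '-resid' option.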
-permute = With '-randomsign', and when both '-setA' and '-setB' are used,
this option will add inter-set permutation to the randomization.
++ If only '-setA' is used (1-sample test), there is no permutation.
(Neither will there be permutation with '-singletonA'.)
++ If '-randomsign' is NOT given, but '-Clustsim' is used, then
'-permute' will be passed for use with the '-Clustsim' tests
(again, only if '-setA' and '-setB' are both used).
++ If '-randomsign' is given and the following conditions
   are ALL true, then '-permute' is assumed (without the option
   being needed on the command line):
   (a) You have a 2-sample test and are not using '-singletonA'.
       [Permutation is meaningless without 2 samples!]
   (b) You are not using '-unpooled'.
   (c) You are not using '-paired'.
-->>++ You only NEED to use '-permute' if you want inter-set
       permutation used AND you are also using the '-unpooled' option.
+ Permutation with '-unpooled' is a little weird.
+ Permutation with '-paired' is very weird and is NOT allowed.
+ Permutation with '-covariates' may not work the way you wish.
In the past [pre-March 2020], covariates were NOT permuted along
with their data. Now, covariates ARE permuted along with their data.
This latter method seems more logical to me [RWCox].
++ There is no option to do permutation WITHOUT sign randomization.