All AFNI program -help files
This page auto-generated on Tue Oct 1 08:25:58 PM EDT 2024
AFNI program: 1dApar2mat
Usage: 1dApar2mat dx dy dz a1 a2 a3 sx sy sz hx hy hz
* This program computes the affine transformation matrix
from the set of 3dAllineate parameters.
* The result is printed to stdout, and can be captured
by Unix shell redirection or piping (e.g., '>', '>>', '|').
See the EXAMPLE, far below.
* One use for 1dApar2mat is to take a set of parameters
from '3dAllineate -1Dparam_save', alter them in some way,
and re-compute the corresponding matrix. For example,
compute the full affine transform with 12 parameters,
but then omit the final 6 parameters to see what the
'pure' shift+rotation matrix looks like.
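    For instance, a minimal tcsh sketch (assuming a hypothetical file
    'anat.param.1D' saved via '-1Dparam_save', whose last row holds
    the 12 parameter values):
      set par = `tail -1 anat.param.1D`
      1dApar2mat $par[1-6]     # shift+rotation part only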
* The 12 parameters are, in the order used on the 1dApar2mat command line
(the same order as output by 3dAllineate):
x-shift in mm
y-shift in mm
z-shift in mm
z-angle (roll) in degrees (not radians!)
x-angle (pitch) in degrees
y-angle (yaw) in degrees
x-scale unitless factor, in [0.10,10.0]
y-scale unitless factor, in [0.10,10.0]
z-scale unitless factor, in [0.10,10.0]
y/x-shear unitless factor, in [-0.3333,0.3333]
z/x-shear unitless factor, in [-0.3333,0.3333]
z/y-shear unitless factor, in [-0.3333,0.3333]
* Parameters omitted from the end of the command line get their
default values (0 except for scales, which default to 1).
* At least 1 parameter must be given, or you get this help message :)
The minimum command line is
1dApar2mat 0
which will output the identity matrix.
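    For example, since omitted trailing parameters take their
    defaults, a pure 30 degree roll matrix is simply
      1dApar2mat 0 0 0 30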
* Legal scale and shear factors have limited ranges, as
described above. An input value outside the given range
will be reset to the default value for that factor (1 or 0).
* UNUSUAL SPECIAL CASES:
If you used 3dAllineate with any of the options described
under 'CHANGING THE ORDER OF MATRIX APPLICATION' or you
used the '-EPI' option, then the order of parameters inside
3dAllineate will no longer be the same as the parameter order
in 1dApar2mat. In such a situation, the matrix output by
this program will NOT agree with that output by 3dAllineate
for the same set of parameter numbers :(
* EXAMPLE:
1dApar2mat 0 1 2 3 4 5
to get a rotation matrix with some shifts; the output is:
# mat44 1dApar2mat 0 1 2 3 4 5 :
0.994511 0.058208 -0.086943 0.000000
-0.052208 0.996197 0.069756 1.000000
0.090673 -0.064834 0.993768 2.000000
If you wish to capture this matrix all on one line, you can
combine various Unix shell and command tricks/tools, as in
echo `1dApar2mat 0 1 2 3 4 5 | tail -3` > Fred.aff12.1D
This 12-numbers-in-one-line is the format output by '-1Dmatrix_save'
in 3dAllineate and 3dvolreg.
* FANCY EXAMPLE:
Tricksy command line stuff to compute the inverse of a matrix
set fred = `1dApar2mat 0 0 0 3 4 5 1 1 1 0.2 0.1 0.2 | tail -3`
cat_matvec `echo $fred | sed -e 's/ /,/g' -e 's/^/MATRIX(/'`')' -I
* ALSO SEE: Programs cat_matvec and 1dmatcalc for doing
simple matrix arithmetic on such files.
* OPTIONS: This program has no options. Love it or leave it :)
* AUTHOR: Zhark the Most Affine and Sublime - April 2019
AFNI program: 1dAstrip
Usage: 1dAstrip < input > output
This very simple program strips non-numeric characters
from a file, so that it can be processed by other AFNI
1d programs. For example, if your input is
x=3.6 y=21.6 z=14.2
then your output would be
3.6 21.6 14.2
* Non-numeric characters are replaced with blanks.
* The letter 'e' is preserved if it is preceded
or followed by a numeric character. This is
to allow for numbers like '1.2e-3'.
* Numeric characters, for the purpose of this
program, are defined as the digits '0'..'9',
and '.', '+', '-'.
* The program is simple and can easily end up leaving
undesired junk characters in the output. Sorry.
* This help string is longer than the rest of the
source code to this program!
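* A sketch of typical use in a pipeline ('vals.txt' being a
  hypothetical file with lines like the 'x=3.6 y=21.6 z=14.2'
  example above):
    1dAstrip < vals.txt > vals.1D
    1dplot vals.1D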
AFNI program: 1dBandpass
Usage: 1dBandpass [options] fbot ftop infile ~1~
* infile is an AFNI *.1D file; each column is processed
* fbot = lowest frequency in the passband, in Hz
[can be 0 if you want to do a lowpass filter only,
 but the mean and Nyquist freq are always removed]
* ftop = highest frequency in the passband (must be > fbot)
[if ftop > Nyquist freq, then we have a highpass filter only]
* You cannot construct a 'notch' filter with this program!
* Output vectors appear on stdout; redirect as desired
* Program will fail if fbot and ftop are too close for comfort
* The actual FFT length used will be printed, and may be larger
than the input time series length for the sake of efficiency.
Options: ~1~
-dt dd = set time step to 'dd' sec [default = 1.0]
-ort f.1D = Also orthogonalize input to columns in f.1D
[only one '-ort' option is allowed]
-nodetrend = Skip the quadratic detrending of the input
-norm = Make output time series have L2 norm = 1
Example: ~1~
1deval -num 1000 -expr 'gran(0,1)' > r1000.1D
1dBandpass 0.025 0.20 r1000.1D > f1000.1D
1dfft f1000.1D - | 1dplot -del 0.000977 -stdin -plabel 'Filtered |FFT|'
Goal: ~1~
* Mostly to test the functions in thd_bandpass.c -- RWCox -- May 2009
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dBport
Usage: 1dBport [options]
Creates a set of columns of sines and cosines for the purpose of
bandpassing via regression (e.g., in 3dDeconvolve). Various options
are given to specify the duration and structure of the time series
to be created. Results are written to stdout, and usually should be
redirected appropriately (cf. EXAMPLES, infra). The file produced
could be used with the '-ortvec' option to 3dDeconvolve, for example.
OPTIONS
-------
-band fbot ftop = Specify lowest and highest frequencies in the passband.
fbot can be 0 if you want to do a highpass filter only;
on the other hand, if ftop > Nyquist frequency, then
it's a lowpass filter only.
** This 'option' is actually mandatory! (At least once.)
* For the un-enlightened, the Nyquist frequency is the
highest frequency supported on the given grid, and
is equal to 0.5/TR (units are Hz if TR is in s).
* The lowest nonzero frequency supported on the grid
is equal to 1/(N*TR), where N=number of time points.
** Multiple -band options can be used, if needed.
If the bands overlap, regressors will NOT be duplicated.
* That is, '-band 0.01 0.05 -band 0.03 0.08' is the same
as using '-band 0.01 0.08'.
** Note that if fbot==0 and ftop>=Nyquist frequency, you
get a 'complete' set of trig functions, meaning that
using these in regression is effectively a 'no-pass'
filter -- probably not what you want!
** It is legitimate to set fbot = ftop.
** The 0 frequency (fbot = 0) component is all 1, of course.
But unless you use the '-quad' option, nothing generated
herein will deal well with linear-ish or quadratic-ish
trends, which fall below the lowest nonzero frequency
representable in a full cycle on the grid:
f_low = 1 / ( NT * TR )
where NT = number of time points.
** See the fourth EXAMPLE to learn how to use 3dDeconvolve
to generate a file of polynomials for regression fun.
-invert = After computing which frequency indexes correspond to the
input band(s), invert the selection -- that is, output
all those frequencies NOT selected by the -band option(s).
See the fifth EXAMPLE.
-nozero } Do NOT generate the 0 frequency (constant) component
*OR } when fbot = 0; this has the effect of setting fbot to
-noconst } 1/(N*TR), and is essentially a convenient way to say
'eliminate all oscillations below the ftop frequency'.
-quad = Add regressors for linear and quadratic trends.
(These will be the last columns in the output.)
-input dataset } One of these options is used to specify the number of
*OR* } time points to be created, as in 3dDeconvolve.
-input1D 1Dfile } ** '-input' allows catenated datasets, as in 3dDeconvolve.
*OR* } ** '-input1D' assumes TR=1 unless you use the '-TR' option.
-nodata NT [TR] } ** One of these options is mandatory, to specify the length
of the time series file to generate.
-TR del = Set the time step to 'del' rather than use the one
given in the input dataset (if any).
** If TR is not specified by the -input dataset or by
-nodata or by -TR, the program will assume it is 1.0 s.
-concat rname = As in 3dDeconvolve, used to specify the list of start
indexes for concatenated runs.
** Also as in 3dDeconvolve, if the -input dataset is auto-
catenated (by providing a list of more than one dataset),
the run start list is automatically generated. Otherwise,
this option is needed if more than one run is involved.
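                 ** A sketch with 3 runs of 100 points each, TR = 2 s
                    ('runstarts.1D' is a hypothetical file containing
                    the start indexes '0 100 200'):
                      1dBport -nodata 300 2 -band 0 0.01 -concat runstarts.1D > hp.1D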
EXAMPLES
--------
The first example provides basis functions to filter out all frequency
components from 0 to 0.25 Hz:
1dBport -nodata 100 1 -band 0 0.25 > highpass.1D
The second example provides basis functions to filter out all frequency
components from 0.25 Hz up to the Nyquist frequency:
1dBport -nodata 100 1 -band 0.25 666 > lowpass.1D
The third example shows how to examine the results visually, for fun:
1dBport -nodata 100 1 -band 0.41 0.43 | 1dplot -stdin -thick
The fourth example shows how to use 3dDeconvolve to generate a file of
polynomial 'orts', in case you find yourself needing this ability someday
(e.g., when stranded on a desert isle, with Gilligan, the Skipper, et al.):
3dDeconvolve -nodata 100 1 -polort 2 -x1D_stop -x1D stdout: | 1dcat stdin: > pol3.1D
The fifth example shows how to use 1dBport to generate a set of regressors to
eliminate all frequencies EXCEPT those in the selected range:
1dBport -nodata 100 1 -band 0.03 0.13 -nozero -invert | 1dplot -stdin
In this example, the '-nozero' flag is used because the next step will be to
3dDeconvolve with '-polort 2' and '-ortvec' to get rid of the undesirable stuff.
ETYMOLOGICAL NOTES
------------------
* The word 'ort' was coined by Andrzej Jesmanowicz, as a shorthand name for
a timeseries to which you want to 'orthogonalize' your data.
* 'Ort' actually IS an English word, and means 'a scrap of food left from a meal'.
As far as I know, its only usage in modern English is in crossword puzzles,
and in Scrabble.
* For other meanings of 'ort', see http://en.wikipedia.org/wiki/Ort
* Do not confuse 'ort' with 'Oort': http://en.wikipedia.org/wiki/Oort_cloud
AUTHOR -- RWCox -- Jan 2012
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dcat
Usage: 1dcat [options] a.1D b.1D ...
where each file a.1D, b.1D, etc. is a 1D file.
In the simplest form, a 1D file is an ASCII file of numbers
arranged in rows and columns.
1dcat takes as input one or more 1D files, and writes out a 1D file
containing the side-by-side concatenation of all or a subset of the
columns from the input files.
* Output goes to stdout (the screen); redirect (e.g., '>') to save elsewhere.
* All files MUST have the same number of rows!
* Any header lines (i.e., lines that start with '#') will be lost.
* For generic 1D file usage help and information, see '1dplot -help'
-----------
TSV files: [Sep 2018]
-----------
* 1dcat can now also read .tsv files, which are columns of values separated
by tab characters (tsv = tab separated values). The first row of a .tsv
file is a set of column labels. After the header row, each column is either
all numbers, or is a column of strings. For example
Col 1    Col 2    Col 3
3.2      7.2      Elvis
8.2      -1.2     Sinatra
6.66     33.3     20892
In this example, the column labels contain spaces, which are NOT separators;
the only column separator used in a .tsv file is the tab character.
The first and second columns are converted to number columns, since every
value (after the label/header row) is a numeric string. The third column
is stored as strings, since some of the entries are not valid numbers.
* 1dcat can deal with a mix of .1D and .tsv files. The .tsv file header
rows are NOT output by default, since .1D files don't have such headers.
* The usual output from 1dcat is NOT a .tsv file - blanks are used for
separators. You can use the '-tsvout' option to get TSV formatted output.
* If you mix .1D and .tsv files, the number of data rows in each file
must be the same. Since the header row in a .tsv file is NOT used here,
the total number of lines in a .tsv file must be 1 more than the number
of lines in a .1D file for the two files to match in this program.
* The purpose of supporting .tsv files is for eventual compatibility with
the BIDS format http://bids.neuroimaging.io - which uses .tsv files
extensively to provide auxiliary information for (F)MRI datasets.
* Column selectors (like '[0,3]') can be used on .tsv files, but row selectors
(like '{0,3..5}') cannot be used on .tsv files - at this time :(
* You can also select a column in a .tsv file by using the label at the top of
of the column. A BIDS-related example:
1dcat sub-666_task-XXX_events.tsv'[onset,duration,trial_type,reaction_time]'
A similar example, which outputs a list of the trial types in an imaging run:
1dcat sub-666_task-XXX_events.tsv'[trial_type]' | sort | uniq
* Since .1D files don't have headers, the label method of column selection
doesn't work with such inputs; you must use integer column selectors
on .1D files.
* NOTE WELL: The string 'N/A' or 'n/a' in a column that is otherwise numeric
will be considered to be a number, and will be replaced on input
with the mean of the "true" numbers in the column -- there is
no concept of missing data in an AFNI .1D file.
++ If you don't like this, well ... too bad for you.
* NOTE WELL: 1dcat now also allows comma separated value (.csv) files. These
are treated the same as .tsv files, with a header line, et cetera.
--------
OPTIONS:
--------
-tsvout = Output in a TSV (.tsv) format, where the values in each row
are separated by tabs, not blanks. Also, a header line will
be provided, as TSV files require.
-csvout = Output in a CSV (.csv) format, where the values in each row
are separated by commas, not blanks. Also, a header line will
be provided, as CSV files require.
-nonconst = Columns that are identically constant should be omitted
from the output.
-nonfixed = Keep only columns that are marked as 'free' in the
3dAllineate header from '-1Dparam_save'.
If there is no such header, all columns are kept.
* NOTE: -nonconst and -nonfixed don't have any effect on
.tsv/.csv files, and the use of these options
has NOT been tested at all when the inputs
are mixture of .tsv/.csv and .1D files.
-form FORM = Format of the numbers to be output.
You can also substitute -form FORM with shortcuts such
as -i, -f, or -c.
For help on -form's usage, and its shortcut versions
see ccalc's help for the option of the same name.
-stack = Stack the columns of the resultant matrix in the output.
You can't use '-stack' with .tsv/.csv files :(
-sel SEL = Apply the same column/row selection string to all filenames
on the command line.
For example:
1dcat -sel '[0,2]' f1.1D f2.1D
is the same as: 1dcat f1.1D'[0,2]' f2.1D'[0,2]'
The advantage of the option is that it allows wildcard use
in file specification so that you can run something like:
1dcat -sel '[0,2]' f?.1D
-OKempty: Exit quietly when encountering an empty file on disk.
Note that if the file is poorly formatted, it might be
considered empty.
EXAMPLE:
--------
Input file 1:
1
2
3
4
Input file 2:
5
6
7
8
1dcat data1.1D data2.1D > catout.1D
Output file:
1 5
2 6
3 7
4 8
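A TSV-related sketch (assuming a hypothetical BIDS-style file
'events.tsv' containing 'onset' and 'duration' columns):
  1dcat -csvout events.tsv'[onset,duration]' > events.csv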
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dCorrelate
Usage: 1dCorrelate [options] 1Dfile 1Dfile ...
------
* Each input 1D column is a collection of data points.
* The correlation coefficient between each column pair is computed, along
with its confidence interval (via a bias-corrected bootstrap procedure).
* The minimum sensible column length is 7.
* At least 2 columns are needed [in 1 or more .1D files].
* If there are N input columns, there will be N*(N-1)/2 output rows.
* Output appears on stdout; redirect ('>' or '>>') as needed.
* Only one correlation method can be used in one run of this program.
* This program is basically the bastard offspring of program 1ddot.
* Also see http://en.wikipedia.org/wiki/Confidence_interval
-------
Methods [actually, only the first letter is needed to choose a method]
------- [and the case doesn't matter: '-P' and '-p' both = '-Pearson']
-Pearson = Pearson correlation [the default method]
-Spearman = Spearman (rank) correlation [more robust vs. outliers]
-Quadrant = Quadrant (binarized) correlation [most robust, but weaker]
-Ktaub = Kendall's tau_b 'correlation' [popular somewhere, maybe]
-------------
Other Options [these options cannot be abbreviated!]
-------------
-nboot B = Set the number of bootstrap replicates to 'B'.
* The default value of B is 4000.
* A larger number will give somewhat more accurate
confidence intervals, at the cost of more CPU time.
-alpha A = Set the 2-sided confidence interval width to '100-A' percent.
* The default value of A is 5, giving the 2.5..97.5% interval.
* The smallest allowed A is 1 (0.5%..99.5%) and the largest
allowed value of A is 20 (10%..90%).
* If you are interested in assessing whether the 'p-value' of a
correlation is smaller than 5% (say), then you should use
'-alpha 10' and see if the confidence interval includes 0.
-block = Attempt to allow for serial correlation in the data by doing
*OR* variable-length block resampling, rather than completely
-blk random resampling as in the usual bootstrap.
* You should NOT do this unless you believe that serial
correlation (along each column) is present and significant.
* Block resampling requires at least 20 data points in each
input column. Fewer than 20 will turn off this option.
-----
Notes
-----
* For each pair of columns, the output includes the correlation value
as directly calculated, plus the bias-corrected bootstrap value, and
the desired (100-A)% confidence interval [also via bootstrap].
* The primary purpose of this program is to provide an easy way to get
the bootstrap confidence intervals, since people almost always seem to use
the asymptotic normal theory to decide if a correlation is 'significant',
and this often seems misleading to me [especially for short columns].
* Bootstrapping confidence intervals for the inverse correlations matrix
(i.e., partial correlations) would be interesting -- anyone out there
need this ability?
-------------
Sample output [command was '1dCorrelate -alpha 10 A2.1D B2.1D']
-------------
# Pearson correlation [n=12 #col=2]
# Name Name Value BiasCorr 5.00% 95.00% N: 5.00% N:95.00%
# -------- -------- -------- -------- -------- -------- -------- --------
A2.1D[0] B2.1D[0] +0.57254 +0.57225 -0.03826 +0.86306 +0.10265 +0.83353
* Bias correction of the correlation had little effect; this is very common.
++ To be clear, the bootstrap bias correction is to allow for potential bias
in the statistical estimate of correlation when the sample size is small.
++ It cannot correct for biases that result from faulty data (or faulty
assumptions about the data).
* The correlation is NOT significant at this level, since the CI (confidence
interval) includes 0 in its range.
* For the Pearson method ONLY, the last two columns ('N:', as above) also
show the widely used asymptotic normal theory confidence interval. As in
the example, the bootstrap interval is often (but not always) wider than
the theoretical interval.
* In the example, the normal theory might indicate that the correlation is
significant (less than a 5% chance that the CI includes 0), but the
bootstrap CI shows that this is not a reasonable statistical conclusion.
++ The principal reason that I wrote this program was to make it easy
to check if the normal (Gaussian) theory for correlation significance
testing is reasonable in any given case -- for small samples, it often
is NOT reasonable!
* Using the same data with the '-S' option gives the table below, again
indicating that there is no significant correlation between the columns
(note also the lack of the 'N:' results for Spearman correlation):
# Spearman correlation [n=12 #col=2]
# Name Name Value BiasCorr 5.00% 95.00%
# -------- -------- -------- -------- -------- --------
A2.1D[0] B2.1D[0] +0.46154 +0.42756 -0.23063 +0.86078
-------------
SAMPLE SCRIPT
-------------
This script generates random data and correlates it until it is
statistically significant at some level (default=2%). Then it
plots the data that looks correlated. The point is to show what
purely random stuff that appears correlated can look like.
(Like most AFNI scripts, this is written in tcsh, not bash.)
#!/bin/tcsh
set npt = 20
set alp = 2
foreach fred ( `count_afni -dig 1 1 1000` )
1dcat jrandom1D:${npt},2 > qqq.1D
set aabb = ( `1dCorrelate -spearman -alpha $alp qqq.1D | grep qqq.1D | colrm 1 42` )
set ab = `ccalc -form rint "1000 * $aabb[1] * $aabb[2]"`
echo $fred $ab
if( $ab > 1 )then
1dplot -one -noline -x qqq.1D'[0]' -xaxis -1:1:20:5 -yaxis -1:1:20:5 \
-DAFNI_1DPLOT_BOXSIZE=0.012 \
-plabel "N=$npt trial#=$fred \alpha=${alp}% => r\in[$aabb[1],$aabb[2]]" \
qqq.1D'[1]'
break
endif
end
\rm qqq.1D
----------------------------------------------------------------------
*** Written by RWCox (AKA Zhark the Mad Correlator) -- 19 May 2011 ***
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: @1dDiffMag
Usage: @1dDiffMag file.1D
* Computes a magnitude estimate of the first differences of a 1D file.
* Differences are computed down each column.
* The result -- a single number -- is on stdout.
* But (I hear you say), what IS the result?
* For each column, the standard deviation of the first differences is computed.
* The final result is the square-root of the sum of the squares of these stdev values.
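* A sketch of typical use (with a hypothetical motion-parameter file):
    @1dDiffMag motion.1D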
AFNI program: 1ddot
Usage: 1ddot [options] 1Dfile 1Dfile ...
* Prints out correlation matrix of the 1D files and
their inverse correlation matrix.
* Output appears on stdout.
* Program 1dCorrelate does something similar-ish.
Options:
-one = Make 1st vector be all 1's.
-dem = Remove mean from all vectors (conflicts with '-one')
-cov = Compute with covariance matrix instead of correlation
-inn = Compute with inner product matrix instead
-rank = Compute Spearman rank correlation instead
(also implies '-terse')
-terse= Output only the correlation or covariance matrix,
without any of the garnish.
-okzero= Do not quit if a vector is all zeros.
The correlation matrix will have 0 where NaNs ought to go.
Expect rubbish in the inverse matrices if all-zero
vectors exist.
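Example (a sketch): print just the Spearman rank correlation matrix
of the columns of a hypothetical file X.1D ('-rank' implies '-terse'):
  1ddot -rank X.1D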
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dDW_Grad_o_Mat++
++ Program version: 2.2
Simple function to manipulate DW gradient vector files, b-value
files, and b- or g-matrices. Let: g_i be one of Ng spatial gradients
in three dimensions; |g_i| = 1, and the g-matrix is G_{ij} = g_i * g_j
(i.e., dyad of gradients, without b-value included); and the DW-scaled
b-matrix is B_{ij} = b * g_i * g_j.
**This new version of the function** will replace the original/older
version (1dDW_Grad_o_Mat). The new version has similar functionality, but
improved defaults:
+ it does not average b=0 volumes together by default;
+ it does not remove the top b=0 line by default;
+ output has same scaling as input by default (i.e., by bval or not);
and a switch is used to turn *off* scaling, for unit magnitude output
(which is cleverly concealed under the name '-unit_mag_out').
Wherefore, you ask? Well, times change, and people change.
The above functionality is still available, but each just requires
selection with command line switches.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
As of right now, one can input:
+ 3 rows of gradients (as output from dcm2nii, for example);
+ 3 columns of gradients;
+ 6 columns of g- or b-matrices, in `diagonal-first' (-> matA) order:
Bxx, Byy, Bzz, Bxy, Bxz, Byz,
which is used in 3dDWItoDT, for example;
+ 6 columns of g- or b-matrices, in `row-first' (-> matT) order:
Bxx, 2*Bxy, 2*Bxz, Byy, 2*Byz, Bzz,
which is output by TORTOISE, for example;
+ when specifying input file, one can use the brackets '{ }'
in order to specify a subset of rows to keep (NB: probably
can't use this grad-filter when reading in row-data right
now).
During processing, one can:
+ flip the sign of any of the x-, y- or z-components, which
may be necessary to do to make the scanned data and tracking
work happily together;
+ filter out all `zero' rows of recorded reference images,
THOUGH this is not really recommended.
One can then output:
+ 3 columns of gradients;
+ 6 columns of g- or b-matrices, in 'diagonal-first' order;
+ 6 columns of g- or b-matrices, in 'row-first' order;
+ as well as including a column of b-values (such as used in, e.g.,
DSI-Studio);
+ as well as explicitly including a row of zeros at the top;
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ RUNNING:
1dDW_Grad_o_Mat++ \
{ -in_row_vec | -in_col_vec | \
-in_col_matA | -in_col_matT } INFILE \
{ -flip_x | -flip_y | -flip_z | -no_flip } \
{ -out_row_vec | -out_col_vec | \
-out_col_matA | -out_col_matT } OUTFILE \
{ -in_bvals BVAL_FILE } \
{ -out_col_bval } \
{ -out_row_bval_sep BB | -out_col_bval_sep BB } \
{ -unit_mag_out } \
{ -bref_mean_top } \
{ -bmax_ref THRESH } \
{ -put_zeros_top } \
where:
(one of the following formats of input must be given):
-in_row_vec INFILE :input file of 3 rows of gradients (e.g.,
dcm2nii-format output).
-in_col_vec INFILE :input file of 3 columns of gradients.
-in_col_matA INFILE :input file of 6 columns of b- or g-matrix in
'A(FNI)' `diagonal first'-format. (See above.)
-in_col_matT INFILE :input file of 6 columns of b- or g-matrix in
'T(ORTOISE)' `row first'-format. (See above.)
(one of the following formats of output must be given):
-out_row_vec OUTFILE :output file of 3 rows of gradients.
-out_col_vec OUTFILE :output file of 3 columns of gradients.
-out_col_matA OUTFILE :output file of 6 columns of b- or g-matrix in
'A(FNI)' `diagonal first'-format. (See above.)
-out_col_matT OUTFILE :output file of 6 cols of b- or g-matrix in
'T(ORTOISE)' `row first'-format. (See above.)
(and any of the following options may be used):
-in_bvals BVAL_FILE :BVAL_FILE is a file of b-values, either a single
row (such as the 'bval' file generated by
dcm2nii) or a single column of numbers. Must
have the same number of entries as the number
of grad vectors or matrices.
-out_col_bval :switch to put a column of the bvalues as the
first column in the output data.
-out_row_bval_sep BB :output a file BB of bvalues in a single row.
-out_col_bval_sep BB :output a file BB of bvalues in a single column.
-unit_mag_out :switch so that each vector/matrix from the INFILE
is scaled to either unit or zero magnitude.
(Supplementary input bvalues would be ignored
in the output matrix/vector, but not in the
output bvalues themselves.) The default
behavior of the function is to leave the output
scaled however it is input (while also applying
any input BVAL_FILE).
-flip_x :change sign of first column of gradients (or of
the x-component parts of the matrix)
-flip_y :change sign of second column of gradients (or of
the y-component parts of the matrix)
-flip_z :change sign of third column of gradients (or of
the z-component parts of the matrix)
-no_flip :don't change any gradient/matrix signs. This
is an extraneous switch, as the default is to
not flip any signs (this is mainly used for
some scripting convenience).
-check_abs_min VVV :By default, this program checks input matrix
formats for consistency (having positive semi-
definite diagonal matrix elements). It will fail
if those don't occur. However, sometimes there is
just a tiny value <0, like a rounding error;
you can specify to push through for negative
diagonal elements with magnitude <VVV, with those
values getting replaced by zero. Be judicious
with this power! (E.g., maybe VVV ~ 0.0001 might
be OK... but if you get looots of negatives, then
you really, really need to check your data for
badness.)
(and the following options are probably mainly extraneous, nowadays)
-bref_mean_top :when averaging the reference X 'b0' values (the
default behavior), have the mean of the X
values be represented in the top row; default
behavior is to have nothing representing the b0
information in the top row (for historical
functionality reasons). NB: if your reference
'b0' actually has b>0, you might not want to
average the b0 refs together, because their
images could have differing contrast if the
same reference vector wasn't used for each.
-put_zeros_top :whatever the output format is, add a row at the
top with all zeros.
-bmax_ref THRESH :THRESH is a scalar number below which b-values
(in BVAL_FILE) are considered `zero' or reference.
Sometimes, for the reference images, the scanner
has a value like b=5 s/mm^2, instead of strictly
b=0. One can still flag such values as
being associated with a reference image and
trim them out, using, for the example case here,
'-bmax_ref 5.1'.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
EXAMPLES
# An example of type-conversion from a TORTOISE-style matrix to column
# gradients (if the matT file has bweights, so will the grad values):
1dDW_Grad_o_Mat++ \
-in_col_matT BMTXT_TORT.txt \
-out_col_vec GRAD.dat
# An example of filtering (note the different styles of parentheses
# for the column- and row-type files) and type-conversion (to an
# AFNI-style matrix that should have the bvalue weights afterwards):
1dDW_Grad_o_Mat++ \
-in_col_vec GRADS_col.dat'{0..10,12..30}' \
-in_bvals BVALS_row.dat'[0..10,12..30]' \
-out_col_matA FILT_matA.dat
# An example of filtering *without* type-conversion. Here, note
# the '-unit_mag_out' flag is used so that the output row-vec does
# not carry the bvalue weight with it; it does not affect the output
# bval file. As Levon might say, the '-unit_mag_out' option acts to
# 'Take a load off bvecs, take a load for free;
# Take a load off bvecs, and you put the load right on bvals only.'
# This example might be useful for working with dcm2nii* output:
1dDW_Grad_o_Mat++ \
-in_row_vec ap.bvec'[0..10,12..30]' \
-in_bvals ap.bval'[0..10,12..30]' \
-out_row_vec FILT_ap.bvec \
-out_row_bval_sep FILT_ap.bval \
-unit_mag_out
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
___________________________________________________________________________
AFNI program: 1deval
Usage: 1deval [options] -expr 'expression'
Evaluates an expression that may include columns of data
from one or more text files and writes the result to stdout.
** Only a single column can be used for each input 1D file. **
* Simple multiple column operations (e.g., addition, scaling)
can be done with program 1dmatcalc.
* Any single letter from a-z can be used as the independent
variable in the expression.
* Unless specified using the '[]' notation (cf. 1dplot -help),
only the first column of an input 1D file is used, and other
columns are ignored.
* Only one column of output will be produced -- if you want to
calculate a multi-column output file, you'll have to run 1deval
separately for each column, and then glue the results together
using program 1dcat. [However, see the 1dcat example combined
with the '-1D:' option, infra.]
Options:
--------
-del d = Use 'd' as the step for a single undetermined variable
in the expression [default = 1.0]
SYNONYMS: '-dx' and '-dt'
-start s = Start at value 's' for a single undetermined variable
in the expression [default = 0.0]
That is, for the indeterminate variable in the expression
(if any), the i-th value will be s+i*d for i=0, 1, ....
SYNONYMS: '-xzero' and '-tzero'
-num n = Evaluate the expression 'n' times.
If -num is not used, then the length of an
input time series is used. If there are no
time series input, then -num is required.
-a q.1D = Read time series file q.1D and assign it
to the symbol 'a' (as in 3dcalc).
* Letters 'a' to 'z' may be used as symbols.
* You can use the filename 'stdin:' to indicate that
the data for 1 symbol comes from standard input:
1dTsort q.1D stdout: | 1deval -a stdin: -expr 'sqrt(a)' | 1dplot stdin:
-a=NUMBER = set the symbol 'a' to a fixed numerical value
rather than a variable value from a 1D file.
* Letters 'a' to 'z' may be used as symbols.
* You can't assign the same symbol twice!
-index i.1D = Read index column from file i.1D and
write it out as 1st column of output.
This option is useful when working with
surface data.
-1D: = Write output in the form of a single '1D:'
string suitable for input on the command
line of another program.
[-1D: is incompatible with the -index option!]
[This won't work if the output string is very long,]
[since the maximum command line length is limited. ]
Examples:
---------
* 't' is the indeterminate variable in the expression below:
1deval -expr 'sin(2*PI*t)' -del 0.01 -num 101 > sin.1D
* Multiply two columns of data (no indeterminate variable):
1deval -expr 'a*b' -a fred.1D -b ethel.1D > ab.1D
* Compute and plot the F-statistic corresponding to p=0.001 for
varying degrees of freedom given by the indeterminate variable 'n':
1deval -start 10 -num 90 -expr 'fift_p2t(0.001,n,2*n)' | 1dplot -xzero 10 -stdin
* Compute the square root of some numbers given in '1D:' form
directly on the command line:
1deval -x '1D: 1 4 9 16' -expr 'sqrt(x)'
Examples using '-1D:' as the output format:
-------------------------------------------
The examples use the shell backquote `xxx` operation, where the
command inside the backquotes is executed, its stdout is captured
into a string, and placed back on the command line. When you have
mastered this idea, you have taken another step towards becoming
a Jedi AFNI Master!
1dplot `1deval -1D: -num 71 -expr 'cos(t/2)*exp(-t/19)'`
1dcat `1deval -1D: -num 100 -expr 'cos(t/5)'` \
`1deval -1D: -num 100 -expr 'sin(t/5)'` > sincos.1D
3dTfitter -quiet -prefix - \
-RHS `1deval -1D: -num 30 -expr 'cos(t)*exp(-t/7)'` \
-LHS `1deval -1D: -num 30 -expr 'cos(t)'` \
`1deval -1D: -num 30 -expr 'sin(t)'`
Notes:
------
* Program 3dcalc operates on 3D and 3D+time datasets in a similar way.
* Program ccalc can be used to evaluate a single numeric expression.
* If I had any sense, THIS program would have been called 1dcalc!
* For generic 1D file usage help, see '1dplot -help'
* For help with expression format, see '3dcalc -help', or type
'help' when using ccalc in interactive mode.
* 1deval only produces a single column of output. 3dcalc can be
tricked into doing multi-column 1D format output by treating
a 1D file as a 3D dataset and auto-transposing it with \'
For example:
3dcalc -a '1D: 3 4 5 | 1 2 3'\' -expr 'cbrt(a)' -prefix -
The input has 2 'columns' and so does the output.
Note that the 1D 'file' is transposed on input to 3dcalc!
This is essential, or 3dcalc will not treat the 1D file as
a dataset, and the results will be very different. Recall that
when a 1D file is read as an 3D AFNI dataset, the row direction
corresponds to the sub-brick (e.g., time) direction, and the
column direction corresponds to the voxel direction.
A Dastardly Trick:
------------------
If you use some other letter than 'z' as the indeterminate variable
in the calculation, and if 'z' is not assigned to any input 1D file,
then 'z' in the expression will be the previous value computed.
This trick can be used to create 1 point recursions, as in the
following command for creating an AR(1) noise time series:
1deval -num 500 -expr 'gran(0,1)+(i-i)+0.7*z' > g07.1D
Note the use of '(i-i)' to introduce the variable 'i' so that 'z'
would be used as the previous output value, rather than as the
indeterminate variable generated by '-del' and '-start'.
The initial value of 'z' is 0 (for the first evaluation).
* [02 Apr 2010] You can set the initial value of 'z' to a nonzero
value by using the environment variable AFNI_1DEVAL_ZZERO, as in
1deval -DAFNI_1DEVAL_ZZERO=1 -num 10 -expr 'i+z'
-- RW Cox --
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dfft
Usage: 1dfft [options] infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, with the absolute
value of the FFT of the input columns. The length of the file
will be 1+(FFT length)/2.
Options:
-ignore sss = Skip the first 'sss' lines in the input file.
[default = no skipping]
-use uuu = Use only 'uuu' lines of the input file.
[default = use them all, Frank]
-nfft nnn = Set FFT length to 'nnn'.
[default = length of data (# of lines used)]
-tocx = Save Re and Im parts of transform in 2 columns.
-fromcx = Convert 2 column complex input into 1 column
real output.
[-fromcx will not work if the original]
[data FFT length was an odd number! :(]
-hilbert = When -fromcx is used, the inverse FFT will
do the Hilbert transform instead.
-nodetrend = Skip the detrending of the input.
Nota Bene:
* Each input time series has any quadratic trend of the
form 'a+b*t+c*t*t' removed before the FFT, where 't'
is the line number.
* The FFT length can be any positive even integer, but
the Fast Fourier Transform algorithm will be slower if
any prime factors of the FFT length are large (say > 997).
Unless you are applying this program to VERY long files,
this slowdown will probably not be appreciable.
* If the FFT length is longer than the file length, the
data is zero-padded to make up the difference.
* Do NOT call the output of this program the Power Spectrum!
That is something else entirely.
* If 'outfile' is '-' (or missing), the output appears on stdout.
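* Example (a sketch): plot the amplitude spectrum of a random time
series with TR = 2 s; the frequency step of the output is then
1/(nfft*TR) = 1/(256*2) ~ 0.001953 Hz:
  1deval -num 256 -expr 'gran(0,1)' > r.1D
  1dfft r.1D - | 1dplot -stdin -dx 0.001953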
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dFlagMotion
Usage: 1dFlagMotion [options] MotionParamsFile
Produces a list of time points that have more than a
user-specified amount of motion relative to the previous
time point.
Options:
-MaxTrans maximum translation allowed in any direction
[defaults to 1.5mm]
-MaxRot maximum rotation allowed in any direction
[defaults to 1.25 degrees]
** The input file must have EXACTLY 6 columns of input, in the order:
roll pitch yaw delta-SI delta-LR delta-AP
(angles in degrees first, then translations in mm)
** The program does NOT accept column '[...]' selectors on the input
file name, or comments in the file itself. As a palliative, if the
input file name is '-', then the input numbers are read from stdin,
so you could do something like the following:
1dcat mfile.1D'[1..6]' | 1dFlagMotion -
e.g., to work with the output from 3dvolreg's '-dfile' option
(where the first column is just the time index).
** The output is in a 1D format, with comments on '#' comment lines,
and the list of time points exceeding the motion bounds
intercalated on normal (non-comment) lines.
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dgenARMA11
Program to generate an ARMA(1,1) time series, for simulation studies.
Results are written to stdout.
Usage: 1dgenARMA11 [options]
Options:
========
-num N } These equivalent options specify the length of the time
-len N } series vector to generate.
-nvec M = The number of time series vectors to generate;
if this option is not given, defaults to 1.
-a a = Specify ARMA(1,1) parameter 'a'.
-b b = Specify ARMA(1,1) parameter 'b' directly.
-lam lam = Specify ARMA(1,1) parameter 'b' indirectly.
-sig ss = Set standard deviation of results [default=1].
-norm = Normalize time series so sum of squares is 1.
-seed dd = Set random number seed.
* The correlation coefficient r(k) of noise samples k units apart in time,
for k >= 1, is given by r(k) = lam * a^(k-1)
where lam = (b+a)(1+a*b)/(1+2*a*b+b*b)
(N.B.: lam=a when b=0 -- AR(1) noise has r(k)=a^k for k >= 0)
(N.B.: lam=b when a=0 -- MA(1) noise has r(k)=b for k=1, r(k)=0 for k>1)
* lam can be bigger or smaller than a, depending on the sign of b:
b > 0 means lam > a; b < 0 means lam < a.
* What I call (a,b) here is sometimes called (p,q) in the ARMA literature.
* For a noise model which is the sum of AR(1) and white noise, 0 < lam < a
(i.e., a > 0 and -a < b < 0 ).
-CORcut cc = The exact ARMA(1,1) correlation matrix (for a != 0)
has no zero entries. The calculations in this
program set correlations below a cutoff to zero.
The default cutoff is 0.00010, but can be altered with
this option. The usual reason to use this option is
to test the sensitivity of the results to the cutoff.
-----------------------------
A restricted ARMA(3,1) model:
-----------------------------
Skip the '-a', '-b', and '-lam' options, and use a model with 3 roots
-arma31 a r theta vrat
where the roots are z = a, z = r*exp(I*theta), z = r*exp(-I*theta)
and vrat = s^2/(s^2+w^2) [so 0 < vrat < 1], where s = variance
of the pure AR(3) component and w = variance of extra white noise
added to the AR(3) process -- this is the 'restricted' ARMA(3,1).
If the data has given TR, and you want a frequency of f Hz, in
the noise model, then theta = 2 * PI * TR * f. If theta > PI,
then you are modeling noise beyond the Nyquist frequency and
the gods (and this program) won't be happy.
# csh syntax for 'set' variable assignment commands
set nt = 500
set tr = 1
set df = `ccalc "1/($nt*$tr)"`
set f1 = 0.10
set t1 = `ccalc "2*PI*$tr*$f1"`
1dgenARMA11 -nvec 500 -len $nt -arma31 0.8 0.9 $t1 0.9 -CORcut 0.0001 \
| 1dfft -nodetrend stdin: > qqq.1D
3dTstat -mean -prefix stdout: qqq.1D\' \
| 1dplot -stdin -num 201 -dt $df -xlabel 'frequency' -ylabel '|FFT|'
---------------------------------------------------------------------------
A similar option is now available for a restricted ARMA(5,1) model:
-arma51 a r1 theta1 r2 theta2 vrat
where now the roots are
z = a z = r1*exp(I*theta1) z = r1*exp(-I*theta1)
z = r2*exp(I*theta2) z = r2*exp(-I*theta2)
This model allows the simulation of two separate frequencies in the 'noise'.
---------------------------------------------------------------------------
Author: RWCox [for his own demented and deranged purposes]
Examples:
1dgenARMA11 -num 200 -a .8 -lam 0.7 | 1dplot -stdin
1dgenARMA11 -num 2000 -a .8 -lam 0.7 | 1dfft -nodetrend stdin: stdout: | 1dplot -stdin
AFNI program: 1dgrayplot
Usage: 1dgrayplot [options] tsfile
Graphs the columns of a *.1D type time series file to the screen,
sort of like 1dplot, but in grayscale.
Options:
-install = Install a new X11 colormap (for X11 PseudoColor)
-ignore nn = Skip first 'nn' rows in the input file
[default = 0]
-flip = Plot x and y axes interchanged.
[default: data columns plotted DOWN the screen]
-sep = Separate scales for each column.
-use mm = Plot 'mm' points
[default: all of them]
-ps = Don't draw plot in a window; instead, write it
to stdout in PostScript format.
N.B.: If you view this result in 'gv', you should
turn 'anti-alias' off, and switch to
landscape mode.
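Example (a sketch, with a hypothetical multi-column file X.1D):
  1dgrayplot -sep -ignore 2 X.1D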
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dMarry
Usage: 1dMarry [options] file1 file2 ...
Joins together 2 (or more) ragged-right .1D files, for use with
3dDeconvolve -stim_times_AM2.
**_OR_**
Breaks up 1 married file into 2 (or more) single-valued files.
OPTIONS:
=======
-sep abc == Use the first character (e.g., 'a') as the separator
between values 1 and 2, the second character (e.g., 'b')
as the separator between values 2 and 3, etc.
* These characters CANNOT be a blank, a tab, a digit,
or a non-printable control character!
* Default separator string is '*,' which will result
in output similar to '3*4,5,6'
-divorce == Instead of marrying the files, assume that file1
is already a married file: split time*value*value... tuples
into separate files, and name them in the pattern
'file2_A.1D' 'file2_B.1D' et cetera.
If not divorcing, the 'married' file is written to stdout, and
probably should be captured using a redirection such as '>'.
NOTES:
=====
* You cannot use column [...] or row {...} selectors on
ragged-right .1D files, so don't even think about trying!
* The maximum number of values that can be married is 26.
(No polygamy or polyandry jokes here, please.)
* For debugging purposes, with '-divorce', if 'file2' is '-',
then all the divorcees are written directly to stdout.
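EXAMPLE (a sketch, with hypothetical single-valued input files):
  1dMarry times.1D amps.1D > married.1D
  1dMarry -divorce married.1D split
The first command glues corresponding entries together with the
default separators (e.g., '3*4'); the second breaks them back
apart into 'split_A.1D' and 'split_B.1D'.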
-- RWCox -- written hastily in March 2007 -- hope I don't repent
-- modified to deal with multiple marriages -- December 2008
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dmatcalc
Usage: 1dmatcalc [-verb] expression
Evaluate a space delimited RPN matrix-valued expression:
* The operations are on a stack, each element of which is a
real-valued matrix.
* N.B.: This is a computer-science stack of separate matrices.
If you want to join two matrices in separate files
into one 'stacked' matrix, then you must use program
1dcat to join them as columns, or the system program
cat to join them as rows.
* You can also save matrices by name in an internal buffer
using the '=NAME' operation and then retrieve them later
using just the same NAME.
* You can read and write matrices from files stored in ASCII
columns (.1D format) using the &read and &write operations.
* The following 5 operations, input as a single string,
'&read(V.1D) &read(U.1D) &transp * &write(VUT.1D)'
- reads matrices V and U from disk (separately),
- transposes U (on top of the stack) into U',
- multiplies V and U' (the two matrices on top of the stack),
- and writes matrix VU' out (the matrix left on the stack by '*').
* Calculations are carried out in single precision ('float').
* Operations mostly contain characters such as '&' and '*' that
are special to Unix shells, so you'll probably need to put
the arguments to this program in 'single quotes'.
* You can use '%' or '@' in place of the '&' character, if you wish.
STACK OPERATIONS
-----------------
number == push scalar value (1x1 matrix) on stack;
a number starts with a digit or a minus sign
=NAME == save a copy of the matrix on top of the stack as 'NAME'
NAME == push a copy of NAME-ed matrix onto top of stack;
names start with an alphabetic character
&clear == erase all named matrices (to save memory);
does not affect the stack at all
&purge == erase the stack;
does not affect named matrices
&read(FF) == read ASCII (.1D) file onto top of stack from file 'FF'
&read4x4Xform(FF)
== Similar to &read(FF), except that it expects data
for a 12-parameter spatial affine transform.
FF can contain 12x1, 1x12, 16x1, 1x16, 3x4, or
4x4 values.
The read operation loads the data into a 4x4 matrix
r11 r12 r13 r14
r21 r22 r23 r24
r31 r32 r33 r34
0.0 0.0 0.0 1.0
This option was added to simplify the combination of
linear spatial transformations. However, you are better
off using cat_matvec for that purpose.
&write(FF) == write top matrix to ASCII file to file 'FF';
if 'FF' == '-', writes to stdout
&transp == replace top matrix with its transpose
&ident(N) == push square identity matrix of order N onto stack
N is a fixed integer, OR
&R to indicate the row dimension of the
current top matrix, OR
&C to indicate the column dimension of the
current top matrix, OR
=X to indicate the (1,1) element of the
matrix named X
&Psinv == replace top matrix with its pseudo-inverse
[computed via SVD, not via inv(A'*A)*A']
&Sqrt == replace top matrix with its square root
[computed via Denman & Beavers iteration]
N.B.: not all real matrices have real square
roots, and &Sqrt will fail in that case
N.B.: the matrix must be square!
&Pproj == replace top matrix with the projection onto
its column space; Input=A; Output = A*Psinv(A)
N.B.: result P is symmetric and P*P=P
&Qproj == replace top matrix with the projection onto
the orthogonal complement of its column space
Input=A; Output=I-Pproj(A)
* == replace top 2 matrices with their product;
OR stack = [ ... C A B ] (where B = top) goes to
&mult stack = [ ... C AB ]
if either of the top matrices is a 1x1 scalar,
then the result is the scalar multiplication of
the other matrix; otherwise, matrices must conform
+ OR &add == replace top 2 matrices with sum A+B
- OR &sub == replace top 2 matrices with difference A-B
&dup == push duplicate of top matrix onto stack
&pop == discard top matrix
&swap == swap top two matrices (A <-> B)
&Hglue == glue top two matrices together horizontally:
stack = [ ... C A B ] goes to
stack = [ ... C A|B ]
this is like what program 1dcat does.
&Vglue == glue top two matrices together vertically:
stack = [ ... C A B ] goes to
A
stack = [ ... C - ]
B
this is like what program cat does.
SIMPLE EXAMPLES
---------------
* Multiply each element of an input 1D file
by a constant factor and write to disk.
1dmatcalc "&read(in.1D) 3.1416 * &write(out.1D)"
* Subtract two 1D files
1dmatcalc "&read(a.1D) &read(b.1D) - &write(stdout:)"
AFNI program: 1dNLfit
Program to fit a model to a vector of data. The model is given by a
symbolic expression, with parameters to be estimated.
Usage: 1dNLfit OPTIONS
Options: [all but '-meth' are actually mandatory]
--------
-expr eee = The expression for the fit. It must contain one symbol from
'a' to 'z' which is marked as the independent variable by
option '-indvar', and at least one more symbol which is
a parameter to be estimated.
++ Expressions use the same syntax as 3dcalc, ccalc, and 1deval.
++ Note: expressions and symbols are not case sensitive.
-indvar c d = Indicates which variable in '-expr' is the independent
variable. All other symbols are parameters, which are
either fixed (constants) or variables to be estimated.
++ Then, read the values of the independent variable from
1D file 'd' (only the first column will be used).
++ If the independent variable has a constant step size,
you can input it with 'd' replaced by a string like
'1D: 100%0:2.1'
which creates an array with 100 values, starting at 0,
then adding 2.1 for each step:
0 2.1 4.2 6.3 8.4 ...
-param ppp = Set fixed value or estimating range for a particular
symbol.
++ For a fixed value, 'ppp' takes the form 'a=3.14', where the
first letter is the symbol name, which must be followed by
an '=', then followed by a constant expression. This
expression can be symbolic, as in 'a=cbrt(3)'.
++ For a parameter to be estimated, 'ppp' takes the form of
two constant expressions separated by a ':', as in
'q=-sqrt(2):sqrt(2)'.
++ All symbols in '-expr' must have a corresponding '-param'
option, EXCEPT for the '-indvar' symbol (which will be set
by its data file).
-depdata v = Read the values of the dependent variable (to be fitted to
'-expr') from 1D file 'v'.
++ File 'v' must have the same number of rows as file 'd'
from the '-indvar' option!
++ File 'v' can have more than one column; each will be fitted
separately to the expression.
-meth m = Set the method for fitting: '1' for L1, '2' for L2.
(The default method is L2, which is usually better.)
Example:
--------
Create a sin wave corrupted by logistic noise, to file ss.1D.
Create a cos wave similarly, to file cc.1D.
Put these files together into a 2 column file sc.1D.
Fit both columns to a 3 parameter model and write the fits to file ff.1D.
Plot the data and the fit together, for fun and profit(?).
1deval -expr 'sin(2*x)+lran(0.3)' -del 0.1 -num 100 > ss.1D
1deval -expr 'cos(2*x)+lran(0.3)' -del 0.1 -num 100 > cc.1D
1dcat ss.1D cc.1D > sc.1D ; \rm ss.1D cc.1D
1dNLfit -depdata sc.1D -indvar x '1D: 100%0:0.1' -expr 'a*sin(b*x)+c*cos(b*x)' \
-param a=-2:2 -param b=1:3 -param c=-2:2 > ff.1D
1dplot -one -del 0.1 -ynames sin:data cos:data sin:fit cos:fit - sc.1D ff.1D
Notes:
------
* PLOT YOUR RESULTS! There is no guarantee that you'll get a good fit.
* This program is not particularly efficient, so using it on a large
scale (e.g., for lots of columns, or in a shell loop) will be slow.
* The results (fitted time series models) are written to stdout,
and should be saved by '>' redirection (as in the example).
The first few lines of the output from the example are:
# 1dNLfit output (meth=L2)
# expr = a*sin(b*x)+c*cos(b*x)
# Fitted parameters:
# A = 1.0828 0.12786
# B = 1.9681 2.0208
# C = 0.16905 1.0102
# ----------- -----------
0.16905 1.0102
0.37753 1.0153
0.57142 0.97907
* Coded by Zhark the Well-Fitted - during Snowzilla 2016.
AFNI program: 1dnorm
Usage: 1dnorm [options] infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, with each column being
L_2 normalized (sum of squares = 1).
* If 'infile' is '-', it will be read from stdin.
* If 'outfile' is '-', it will be written to stdout.
Options:
--------
-norm1 = Normalize so sum of absolute values is 1 (L_1 norm)
-normx = Normalize so max absolute value is 1 (L_infinity norm)
-demean = Subtract each column's mean before normalizing
-demed = Subtract each column's median before normalizing
[-demean and -demed are mutually exclusive!]
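Example (a sketch): demean and then L_2 normalize each column of a
hypothetical file raw.1D:
  1dnorm -demean raw.1D norm.1D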
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dplot
++ 1dplot: AFNI version=AFNI_24.3.00 (Oct 1 2024) [64-bit]
++ Authored by: RWC et al.
Usage: 1dplot [options] tsfile ...
Graphs the columns of a *.1D time series file to the X11 screen,
or to an image file (.jpg or .png).
** This is the original C-language plotting program in AFNI, first created **
** in 1999 (by RW Cox), built on routines he first wrote in the 1980s. **
** Also see the much newer and similar Python-language program 1dplot.py **
** (created by PA Taylor in 2018), which can produce nicer looking graphs. **
-------
OPTIONS
-------
-install = Install a new X11 colormap.
-sep = Plot each column in a separate sub-graph.
-one = Plot all columns together in one big graph.
[default = -sep]
-sepscl = Plot each column in a separate sub-graph
and allow each sub-graph to have a different
y-scale. -sepscl is meaningless with -one!
-noline = Don't plot the connecting lines (also implies '-box').
-NOLINE = Same as '-noline', but will not try to plot values outside
the rectangular box that contains the graph axes.
-box = Plot a small 'box' at each data point, in addition
to the lines connecting the points.
* The box size can be set via the environment variable
AFNI_1DPLOT_BOXSIZE; the value is a fraction of the
overall plot size. The standard box size is 0.006.
Example with a bigger box:
1dplot -DAFNI_1DPLOT_BOXSIZE=0.01 -box A.1D
* The box shapes are different for different time
series columns. At present, there is no way to
control which shape is used for what column
(unless you modify the source code, that is).
* If you want some data columns plotted with boxes
and some with lines, don't use '-box'. Instead, use
option '-dashed'.
* You can set environment variable AFNI_1DPLOT_RANBOX
to YES to get the '-noline' boxes plotted in a
pseudo-random order, so that one particular color
doesn't dominate just because it is last in the
plotting order; for example:
1dplot -DAFNI_1DPLOT_RANBOX=YES -one -x X.1D -noline Y1.1D Y2.1D Y3.1D
-hist = Plot graphs in histogram style (i.e., vertical boxes).
* Histograms can be generated from 3D or 1D files using
program 3dhistog; for example
3dhistog -nbin 50 -notitle -min 0 -max .04 err.1D > eh.1D
1dplot -hist -x eh.1D'[0]' -xlabel err -ylabel hist eh.1D'[1]'
or, for something a little more fun looking:
1dplot -one -hist -dashed 1:2 -x eh.1D'[0]' \
-xlabel err -ylabel hist eh.1D'[1]' eh.1D'[1]'
** The '-norm' options below can be useful for plotting data
with different value ranges on top of each other via '-one':
-norm2 = Independently scale each time series plotted to
have L_2 norm = 1 (sum of squares).
-normx = Independently scale each time series plotted to
have max absolute value = 1 (L_infinity norm).
-norm1 = Independently scale each time series plotted to
have sum of absolute values = 1 (L_1 norm).
-demean = This option will remove the mean from each time series
(before normalizing). The combination '-demean -normx -one'
can be useful when plotting disparate data together.
* If you use '-demean' twice, you will get linear detrending.
* Et cetera (e.g., 4 times gives you cubic detrending.)
-x X.1D = Use for X axis the data in X.1D.
Note that X.1D should have one column
of the same length as the columns in tsfile.
** Coupled with '-box -noline', you can use '-x' to make
a scatter plot, as in graphing file A1.1D along the
x-axis and file A2.1D along the y-axis:
1dplot -box -noline -x A1.1D -xlabel A1 -ylabel A2 A2.1D
** '-x' will override -dx and -xzero; -xaxis still works
-xl10 X.1D = Use log10(X.1D) as the X axis.
-xmulti X1.1D X2.1D ...
This new [Oct 2013] option allows you to plot different
columns from the data with different values along the
x-axis. You can supply one or more 1D files after the
'-xmulti' option. The columns from these files are
catenated, and then the first xmulti column is used
as x-axis values for the first data column plotted, the
second xmulti column gives the x-axis values for the
second data column plotted, and so on.
** The command line arguments after '-xmulti' are taken
as 1D filenames to read, until an argument starts with
a '-' character -- this would either be another option,
or just a single '-' to separate the xmulti 1D files
from the data files to be plotted.
** If you don't provide enough xmulti columns for all the
data files, the last xmulti column will be reused.
** Useless but fun example:
1deval -num 100 -expr '(i-i)+z+gran(0,6)' > X1.1D
1deval -num 100 -expr '(i-i)+z+gran(0,6)' > X2.1D
1dplot -one -box -xmulti X1.1D X2.1D - X2.1D X1.1D
-dx xx = Spacing between points on the x-axis is 'xx'
[default = 1] SYNONYMS: '-dt' and '-del'
-xzero zz = Initial x coordinate is 'zz' [default = 0]
SYNONYMS: '-tzero' and '-start'
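For example (hypothetical file fred.1D), to plot with a
time step of 2.5 s starting at t=10:
1dplot -dx 2.5 -xzero 10 fred.1D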
-nopush = Don't 'push' axes ranges outwards.
-ignore nn = Skip first 'nn' rows in the input file
[default = 0]
-use mm = Plot 'mm' points [default = all of them]
-xlabel aa = Put string 'aa' below the x-axis
[default = no axis label]
-ylabel aa = Put string 'aa' to the left of the y-axis
[default = no axis label]
-plabel pp = Put string 'pp' atop the plot.
Some characters, such as '_', have
special formatting effects. You
can escape that with '\'. For example:
echo 2 4.5 -1 | 1dplot -plabel 'test_underscore' -stdin
versus
echo 2 4.5 -1 | 1dplot -plabel 'test\_underscore' -stdin
-title pp = Same as -plabel, but only works with -ps/-png/-jpg/-pnm options.
-wintitle pp = Set string 'pp' as the title of the frame
containing the plot. Default is based on input.
-naked = Do NOT plot axes or labels, just the graph(s).
You might want to use '-nopush' with '-naked'.
-aspect A = Set the width-to-height ratio of the plot region to 'A'.
Default value is 1.3. Larger 'A' means a wider graph.
-stdin = Don't read from tsfile; instead, read from
stdin and plot it. You cannot combine input
from stdin and tsfile(s). If you want to do so,
use program 1dcat first.
-ps = Don't draw plot in a window; instead, write it
to stdout in PostScript format.
* If you view the result in 'gv', you should turn
'anti-alias' off, and switch to landscape mode.
* You can use the 'gs' program to convert PostScript
to other formats; for example, a .bmp file:
1dplot -ps ~/data/verbal/cosall.1D |
gs -r100 -sOutputFile=fred.bmp -sDEVICE=bmp256 -q -dBATCH -
* 1dplot is built on some line drawing software written
a long time ago in a galaxy far away, which is why PostScript
output was a natural thing to do -- I doubt that anyone uses
this feature in these decadent modern times.
-jpg fname } = Render plot to an image and save to a file named
-jpeg fname } = 'fname', in JPEG mode or in PNG mode or in PNM mode.
-png fname } = The default image width is 1024 pixels; to change
-pnm fname } = this value to 2048 pixels (say), do
setenv AFNI_1DPLOT_IMSIZE 2048
before running 1dplot, or add
-DAFNI_1DPLOT_IMSIZE=2048
to the 1dplot command line. Widths over 4096 might
start to look odd in some cases. The largest allowed
size is 8192 pixels.
* PNG files created by 1dplot will be smaller than JPEG,
and are compressed without loss.
* PNG output requires that the netpbm program
pnmtopng be installed somewhere in your PATH.
This program is NOT supplied with AFNI, but must
be installed separately:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/index.html
* PNM output files are not compressed, and are manipulable
by the netpbm package: http://netpbm.sourceforge.net/
Otherwise, this format isn't very useful anymore.
* There will be small drawing differences between the
X11 (interactive) plotting window and the images saved
by these options -- or by the interactive button.
These differences arise from the use of different line
drawing functions for X11 windows and for off-screen
bitmap images.
-pngs size fname } = convenience options equivalent to
-jpgs size fname } = -DAFNI_1DPLOT_IMSIZE=size followed by
-jpegs size fname} = -png fname (or -jpg or -jpeg or -pnm)
-pnms size fname } = The largest allowed size is 8192 pixels.
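For example, this sketch (hypothetical files) saves a
2048-pixel-wide PNG in one step:
1dplot -pngs 2048 fred.png fred.1D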
-ytran 'expr' = Transform the data along the y-axis by
applying the expression to each input value.
For example:
-ytran 'log10(z)'
will take log10 of each input time series value
before plotting it.
* The expression should have one variable (any letter
from a-z will do), which stands for the time series
data to be transformed.
* An expression such as 'sqrt(x*x+i)' will use 'x'
for the time series value and use 'i' for the time
index (starting at 0) -- in this way, you can use
time-dependent transformations, if needed.
* This transformation applies to all input time series
(at present, there is no way to transform different
time series in distinct ways inside 1dplot).
* '-ytran' is applied BEFORE the various '-norm' options.
-xtran 'expr' = Similar, but for the x-axis.
** Applies to '-xmulti' , '-x' , or the default x-axis.
-xaxis b:t:n:m = Set the x-axis to run from value 'b' to
value 't', with 'n' major divisions and
'm' minor tic marks per major division.
For example:
-xaxis 0:100:5:20
Setting 'n' to 0 means no tic marks or labels.
* You can set 'b' to be greater than 't', to
have the x-coordinate decrease from left-to-right.
* This is the only way to have this effect in 1dplot.
* In particular, '-dx' with a negative value will not work!
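For example (hypothetical file fred.1D), to make the
x-coordinate run from 100 down to 0:
1dplot -xaxis 100:0:5:20 fred.1D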
-yaxis b:t:n:m = Similar to above, for the y-axis. These
options override the normal autoscaling
of their respective axes.
-ynames a b ... = Use the strings 'a', 'b', etc., as
labels to the right of the graphs,
corresponding to each input column.
These strings CANNOT start with the
'-' character.
N.B.: Each separate string after '-ynames'
is taken to be a new label, until the
end of the command line or until some
string starts with a '-'. In particular,
this means you CANNOT do something like
1dplot -ynames a b c file.1D
since the input filename 'file.1D' will
be used as a label string, not a filename.
Instead, you must put another option between
the end of the '-ynames' label list, OR you
can put a single '-' at the end of the label
list to signal its end:
1dplot -ynames a b c - file.1D
TSV files: When plotting a TSV file, where the first row
is the set of column labels, you can use this
Unix trick to put the column labels here:
-ynames `head -1 file.tsv`
The 'head' command copies just the first line
of the file to stdout, and the backquotes `...`
capture stdout and put it onto the command line.
* You might need to put a single '-' after this
option to prevent the problem alluded to above.
In any case, it can't hurt to use '-' as an option
after '-ynames'.
* If any of the TSV labels start with the '-' character,
peculiar and unpleasant things might transpire.
-volreg = Makes the 'ynames' be the same as the
6 labels used in plug_volreg for
Roll, Pitch, Yaw, I-S, R-L, and A-P
movements, in that order.
-thick = Each time you give this, it makes the line
thickness used for plotting a little larger.
[An alternative to using '-DAFNI_1DPLOT_THIK=...']
-THICK = Twice the power of '-thick' at no extra cost!!
-dashed codes = Plot dashed lines between data points. The 'codes'
are a colon-separated list of dash values, which
can be 1 (solid), 2 (longer dashes), or 3 (shorter dashes).
0 can be used to indicate that a time series is to be
plotted without lines but with boxes instead.
** Example: '-dashed 1:2:3' means to plot the first time
series with solid lines, the second with long dashes,
and the third with short dashes.
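** A complete sketch (hypothetical files): plot the first file
with solid lines, the second with long dashes, and the
third with boxes only:
1dplot -one -dashed 1:2:0 A.1D B.1D C.1D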
-Dname=val = Set environment variable 'name' to 'val'
for this run of the program only:
1dplot -DAFNI_1DPLOT_THIK=0.01 -DAFNI_1DPLOT_COLOR_01=blue '1D:3 4 5 3 1 0'
You may also select a subset of columns to display using
a tsfile specification like 'fred.1D[0,3,5]', indicating
that columns #0, #3, and #5 will be the only ones plotted.
For more details on this selection scheme, see the output
of '3dcalc -help'.
Example: graphing a 'dfile' output by 3dvolreg, when TR=5:
1dplot -volreg -dx 5 -xlabel Time 'dfile[1..6]'
You can also input more than one tsfile, in which case the files
will all be plotted. However, if the files have different column
lengths, the shortest one will rule.
The colors for the line graphs cycle between black, red, green, and
blue. You can alter these colors by setting Unix environment
variables of the form AFNI_1DPLOT_COLOR_xx -- cf. README.environment.
You can alter the thickness of the lines by setting the variable
AFNI_1DPLOT_THIK to a value between 0.00 and 0.05 -- the units are
fractions of the page size; of course, you can also use the options
'-thick' or '-THICK' if you prefer.
----------------
RENDERING METHOD
----------------
On 30 Apr 2012, a new method of rendering the 1dplot graph into an X11
window was introduced -- this method uses 'anti-aliasing' to produce
smoother-looking lines and characters. If you want the old coarser-looking
rendering method, set environment variable AFNI_1DPLOT_RENDEROLD to YES.
The program always uses the new rendering method when drawing to a JPEG
or PNG or PNM file (which is not and never has been just a screen capture).
There is no way to disable the new rendering method for image-file saves.
------
LABELS
------
Besides normal alphabetic text, the various labels can include some
special characters, using TeX-like escapes starting with '\'.
Also, the '^' and '_' characters denote super- and sub-scripts,
respectively. The following command shows many of the escapes:
1deval -num 100 -expr 'J0(t/4)' | 1dplot -stdin -thick \
-xlabel '\alpha\beta\gamma\delta\epsilon\zeta\eta^{\oplus\dagger}\times c' \
-ylabel 'Bessel Function \green J_0(t/4)' \
-plabel '\Upsilon\Phi\Chi\Psi\Omega\red\leftrightarrow\blue\partial^{2}f/\partial x^2'
TIMESERIES (1D) INPUT
---------------------
A timeseries file is in the form of a 1D or 2D table of ASCII numbers;
for example: 3 5 7
2 4 6
0 3 3
7 2 9
This example has 4 rows and 3 columns. Each column is considered as
a timeseries in AFNI. The convention is to store this type of data
in a filename ending in '.1D'.
** COLUMN SELECTION WITH [] **
When specifying a timeseries file to a command-line AFNI program, you
can select a subset of columns using the '[...]' notation:
'fred.1D[5]' ==> use only column #5
'fred.1D[5,9,17]' ==> use columns #5, #9, and #17
'fred.1D[5..8]' ==> use columns #5, #6, #7, and #8
'fred.1D[5..13(2)]' ==> use columns #5, #7, #9, #11, and #13
Column indices start at 0. You can use the character '$'
to indicate the last column in a 1D file; for example, you
can select every third column in a 1D file by using the selection list
'fred.1D[0..$(3)]' ==> use columns #0, #3, #6, #9, ....
** ROW SELECTION WITH {} **
Similarly, you select a subset of the rows using the '{...}' notation:
'fred.1D{0..$(2)}' ==> use rows #0, #2, #4, ....
You can also use both notations together, as in
'fred.1D[1,3]{1..$(2)}' ==> columns #1 and #3; rows #1, #3, #5, ....
** DIRECT INPUT OF DATA ON THE COMMAND LINE WITH 1D: **
You can also input a 1D time series 'dataset' directly on the command
line, without an external file. The 'filename' for such input has the
general format
'1D:n_1@val_1,n_2@val_2,n_3@val_3,...'
where each 'n_i' is an integer and each 'val_i' is a float. For
example
-a '1D:5@0,10@1,5@0,10@1,5@0'
specifies that variable 'a' be assigned to a 1D time series of 35
numbers, alternating in blocks between value 0 and value 1.
* Spaces or commas can be used to separate values.
* A '|' character can be used to start a new input "line":
Try 1dplot '1D: 3 4 3 5 | 3 5 4 3'
** TRANSPOSITION WITH \' **
Finally, you can force most AFNI programs to transpose a 1D file on
input by appending a single ' character at the end of the filename.
N.B.: Since the ' character is also special to the shell, you'll
probably have to put a \ character before it. Examples:
1dplot '1D: 3 2 3 4 | 2 3 4 3' and
1dplot '1D: 3 2 3 4 | 2 3 4 3'\'
When you have reached this level of understanding, you are ready to
take the AFNI Jedi Master test. I won't insult you by telling you
where to find this examination.
TAB SEPARATED VALUE (.tsv) FILES [Sep 2018]
-------------------------------------------
These files are used in BIDS http://bids.neuroimaging.io and AFNI
programs can read these in a few places.
The format of a .tsv file is a set of columns, where the values in
each row are separated by tab characters -- spaces are NOT separators.
Each element is a string, some of which are numeric (e.g. 3.1416).
The first row of a .tsv file is a set of strings which are column
descriptors (separated by tabs, of course). For the most part, the
following data in each column are exclusively numeric or exclusively
strings. Strings can contain blanks/spaces since only tabs are used
to separate values.
A .tsv file can be read in most places where a .1D file is read.
However, columns (after the header row) that are not purely numeric
will be ignored, since the internal usage of .1D data in AFNI is numeric.
Thus, you can do something like
1dplot -nopush -sepscl sub-10506_task-pamenc_events.tsv
and you will get a plot of all the numeric columns in this BIDS file.
Column selection '[]' can be done, using numbers to specify columns
or using the column labels in the .tsv file.
N.B.: The string 'N/A' or 'n/a' in a column that is otherwise numeric
will be considered to be a number, and will be replaced on input
with the mean of the "true" numbers in the column -- there is
no concept of missing data in an AFNI .1D file.
++ If you don't like this, well ... too bad for you.
Program 1dcat has special knowledge of .tsv files, and will cat
(sideways - along rows) .tsv and .1D files together. It also has an
option to write the output in .tsv format.
For example, to get the 'onset', 'duration', and 'trial_type' columns
out of a BIDS task .tsv file, a command like this could be used:
1dcat sub-10506_task-pamenc_events.tsv'[onset,duration,trial_type]'
Note that the column headers are lost in this output, but could be kept
if the 1dcat '-tsvout' option were used. In reverse, a numeric .1D file
can be converted to .tsv format by a command like:
1dcat -tsvout Fred.1D
In this case, since the data in a .1D file doesn't have headers for its
columns, 1dcat will invent some column names.
At this time, other programs don't 'know' much about .tsv files, and will
ignore the header row and non-numeric columns when reading a .tsv file
in place of a .1D file.
--------------
MARKING BLOCKS (e.g., censored time points)
--------------
The following options let you mark blocks along the x-axis, by drawing
colored vertical boxes over the standard white background.
* The intended use is to mark blocks of time points that are censored
out of an analysis, which is why the options are the same as those
in 3dDeconvolve -- but you can mark blocks for any reason, of course.
* These options don't do anything when the '-x' option is used to
alter the x-axis spacings.
* To see what the various color markings look like, try this silly example:
1deval -num 100 -expr 'lran(2)' > zz.1D
1dplot -thick -censor_RGB red -CENSORTR 3-8 \
-censor_RGB green -CENSORTR 11-16 \
-censor_RGB blue -CENSORTR 22-27 \
-censor_RGB yellow -CENSORTR 34-39 \
-censor_RGB violet -CENSORTR 45-50 \
-censor_RGB pink -CENSORTR 55-60 \
-censor_RGB gray -CENSORTR 65-70 \
-censor_RGB #2cf -CENSORTR 75-80 \
-plabel 'red green blue yellow violet pink gray #2cf' zz.1D &
-censor_RGB clr = set the color used for the marking to 'clr', which
can be one of the strings below:
red green blue yellow violet pink gray (OR grey)
* OR 'clr' can be in the form '#xyz' or '#xxyyzz', where
'x', 'y', and 'z' are hexadecimal digits -- for example,
'#2cf' is sort of a cyan color.
* OR 'clr' can be in the form 'rgbi:rf/gf/bf' where
each color intensity (rf, gf, bf) is a number between
0.0 and 1.0 -- e.g., white is 'rgbi:1.0/1.0/1.0'.
Since the background is white, dark colors don't look
good here, and will obscure the graphs; for example,
pink is defined here as 'rgbi:1.0/0.5/0.5'.
* The default color is (a rather pale) yellow.
* You can use '-censor_RGB' more than once. The most
recently specified color on the command line is what
will be used with the '-censor' and '-CENSORTR'
options. This allows you to mark different blocks
with different colors (e.g., if they were censored
for different reasons).
* The feature of allowing multiple '-censor_RGB' options
means that you must put this option BEFORE the
relevant '-censor' and/or '-CENSORTR' options.
Otherwise, you'll get the default yellow color!
-censor cname = cname is the filename of censor .1D time series
* This is a file of 1s and 0s, indicating which
time points are to be un-marked (1) and which are
to be marked (0).
* Please note that only one '-censor' option can be
used, for compatibility with 3dDeconvolve.
* The option below may be simpler to use!
(And can be used multiple times.)
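For example, a minimal sketch (hypothetical censor file
motion_censor.1D and data file dfile.1D); note that the
color option comes first:
1dplot -censor_RGB pink -censor motion_censor.1D dfile.1D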
-CENSORTR clist = clist is a list of strings that specify time indexes
to be marked in the graph(s). Each string is of
one of the following forms:
37 => mark global time index #37
2:37 => mark time index #37 in run #2
37..47 => mark global time indexes #37-47
37-47 => same as above
*:0-2 => mark time indexes #0-2 in all runs
2:37..47 => mark time indexes #37-47 in run #2
* Time indexes within each run start at 0.
* Run indexes start at 1 (just to be confusing).
* Multiple -CENSORTR options may be used, or
multiple -CENSORTR strings can be given at
once, separated by spaces or commas.
* Each argument on the command line after
'-CENSORTR' is treated as a censoring string,
until an argument starts with a '-' or an
alphabetic character, or it contains the substring
'1D'. This means that if you want to plot a file
named '9zork.xyz', you may have to do this:
1dplot -CENSORTR 3-7 18-22 - 9zork.xyz
The stand-alone '-' will stop the processing
of censor strings; otherwise, the '9zork.xyz'
string, since it doesn't start with a letter,
would be treated as a censoring string, which
you would find confusing.
** N.B.: 2:37,47 means index #37 in run #2 and
global time index 47; it does NOT mean
index #37 in run #2 AND index #47 in run #2.
-concat rname = rname is the filename for list of concatenated runs
* 'rname' can be in the format
'1D: 0 100 200 300'
which indicates 4 runs, the first of which
starts at time index=0, second at index=100,
and so on.
* The ONLY function of '-concat' is for use with
'-CENSORTR', to be compatible with 3dDeconvolve
[e.g., for plotting motion parameters from]
[3dvolreg -1Dfile, where you've cat-enated]
[the 1D files from separate runs into one ]
[long file for plotting with this program.]
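For example (hypothetical file dfile_rall.1D), to mark time
indexes #0-4 of run #2, with runs starting at indexes 0, 100, 200:
1dplot -concat '1D: 0 100 200' -CENSORTR 2:0-4 dfile_rall.1D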
-rbox x1 y1 x2 y2 color1 color2
= Draw a rectangular box with corners (x1,y1) to
(x2,y2), in color1, with an outline in color2.
Colors are names, such as 'green'.
[This option lets you make bar]
[charts, *if* you care enough.]
-Rbox x1 y1 x2 y2 y3 color1 color2
= As above, with an extra horizontal line at y3.
-line x1 y1 x2 y2 color dashcode
= Draw one line segment.
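For example, a sketch (hypothetical file fred.1D) drawing one
filled box and one solid line segment over the graph:
1dplot -rbox 10 0 20 5 green black -line 0 2 50 2 red 1 fred.1D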
Another fun fun example:
1dplot -censor_RGB #ffa -CENSORTR '0-99' \
`1deval -1D: -num 61 -dx 0.3 -expr 'J0(x)'`
which illustrates the use of 'censoring' to mark the entire graph
background in pale yellow '#ffa', and also illustrates the use
of the '-1D:' option in 1deval to produce output that can be
used directly on the command line, via the backquote `...` operator.
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dplot.py
OVERVIEW ~1~
This program is for making images to visualize columns of numbers from
"1D" text files. It is based heavily on RWCox's 1dplot program, just
using Python (particularly matplotlib). To use this program, Python
version >=2.7 is required, as well as matplotlib modules (someday numpy
might be needed, as well).
This program takes very few required options-- mainly, file names and
an output prefix-- but it allows the user to control/add many
features, such as axis labels, titles, colors, adding in censor
information, plotting summary boxplots and more.
++ constructed by PA Taylor (NIMH, NIH, USA).
# =========================================================================
COMMAND OPTIONS ~1~
-help, -h :see helpfile
-infiles II :(req) one or more file names of text files. Each column
in this file will be treated as a separate time series
for plotting (i.e., as 'y-values'). One can use
AFNI-style column '[ ]' and row '{ }' selectors. One
or more files may be entered, but they must all be of
equal length.
-yfiles YY :exactly the same behavior as "-infiles ..", just another
option name for it that might be more consistent with
other options.
-prefix PP :output filename or prefix; if no file extension for an
image is included in 'PP', one will be added from a
list. At present, OK file types to output should include:
.jpg, .png, .tif, .pdf, .svg
... but note that the kinds of image files you may output
may be limited by packages (or lack thereof) installed on
your own computer. Default output image type is .jpg
-boxplot_on :a fun feature to show a small, additional boxplot
adjacent to each time series. The plot is a standard
Python boxplot of that time series's values. The box
shows the 25-75%ile range (interquartile range, IQR);
the median value highlighted by a white line; whiskers
stretch to 1.5*IQR; circles show outliers.
When using this option and censoring, by default both a
boxplot of data "before censoring" (BC) and "after
censoring" (AC) will be added. See '-bplot_view ...'
about current opts to change that, if desired.
-bplot_view BC_ONLY | AC_ONLY
:when using '-boxplot_on' and censoring, by default the
plotter will put one boxplot of data "before censoring"
(BC) and one "after censoring" (AC). To show *only* one
or the other, use this option with the appropriate keyword.
-margin_off :use this option to have the plot frame fill the figure
window completely; thus, no labels, frame, titles or
other parts of the 'normal' image outside the plot
window will be visible. Tick lines will still be
present, living their best lives.
This is probably only useful/recommended/tested for
plots with a single panel.
-scale SCA1 SCA2 SCA3 ...
:provide a list of scales to apply to the y-values.
These will be applied multiplicatively to the y-values;
there should either be 1 (applied to all time series)
or the same number as the time series (in the same
order as those were entered). The scale values are
also applied to the censor_hline values, but *not* to
the "-yaxis ..." range(s).
Note that there are a couple keywords that can be used
instead of SCA* values:
SCALE_TO_HLINE: each input time series is
vertically scaled so that its censor_hline -> 1.
That is, each time point is divided by the
censor_hline value. When using this, a visually
pleasing yaxis range might be 0:3.
SCALE_TO_MAX: each input time series is
vertically scaled so that its max value -> 1.
That is, each time point is divided by the
max value. When using this, a visually
pleasing yaxis range might be 0:1.1.
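For example, a minimal sketch (hypothetical file
motion_enorm.1D) scaling the time series by its censor_hline:
1dplot.py -infiles motion_enorm.1D \
-censor_hline 0.2 \
-scale SCALE_TO_HLINE \
-yaxis 0:3 \
-prefix scaled_plot.jpg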
-yfiles_pm YP :one or more file names of text files. Each column in
this file will be treated as a separate time series for
plotting a plus/minus colorized range for an associated
yfile/infile line. The number of files input with YP
must exactly match that of either '-infiles ..' or
'-yfiles ..'. The color will match the line color, but at
greatly reduced opacity.
-ylim_use_pm :by default, if no '-yaxis ..' opt is used, the ylim
range of each subplot comes from the (slightly expanded)
bounds of the min and max y-value in each. But if
'-yfiles_pm ..' is used, you can use this option to expand
those limits by the min and max of the extra error-bounded
space.
-xfile XX :one way to input x-values explicitly: as a "1D" file XX
containing a single column of numbers. If no xfile is
entered, then a list of integers is created, 0..N-1, based
on the length of the "-infiles ..".
-xvals START STOP STEP
:an alternative means for entering abscissa values: one
can provide exactly 3 numbers: the start (inclusive),
the stop (exclusive), and the step to take, following
Python conventions-- that is, numbers are generated
[START,STOP) in stepsizes of STEP.
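For example (hypothetical file data.1D with 100 rows), to
generate x-values 0, 0.5, 1.0, ..., 49.5:
1dplot.py -xvals 0 50 0.5 -infiles data.1D -prefix p.jpg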
-yaxis YMIN1:YMAX1 YMIN2:YMAX2 YMIN3:YMAX3 ...
:optional range for each "infile" y-axis; note the use
of a colon to designate the min/max of the range. One
can also specify just the min (e.g., "YMIN:") or just
the max (e.g., ":YMAX"). The final number of y-axis
values or pairs *must* match the total number of columns
of data from infiles; a placeholder could just be
":". Without specifying a range, one is calculated
automatically from the min and max of the dsets
themselves. The order of the ranges should match the order
of infiles.
-ylabels YL1 YL2 YL3 ...
:optional text labels for each "infile" column; the
final number of ylabels *must* match the total number
of columns of data from infiles. The order of ylabels
should match the order of infiles. These labels are
plotted vertically along the y-axis of the plot.
* For 1D files output by 3dvolreg, one can
automatically provide the 6 associated ylabels by
providing the keyword 'VOLREG' (and this counts as 6
labels).
* For 1D files output by '3dAllineate -1Dparam_save ..',
if you are using just the 6 rigid body parameters, you
can automatically provide the 6 associated ylabels by
providing the keyword 'ALLINPAR6' (and this counts as
6 labels). If using the 6 rigid body parameters and 3
scaling, you can use the keyword 'ALLINPAR9' (which counts
as 9 labels). If using all 12 affine parameters, you can use
the keyword 'ALLINPAR12' (which counts as 12 labels).
-ylabels_maxlen MM
:y-axis labels can get long; this opt allows you to have
them wrap into multiple rows, each of length <=MM. At the
moment, this wrapping is done with some "logic" that tries
to be helpful (e.g., split at underscores where possible),
as long as that helpfulness doesn't increase line numbers
a lot. The value entered here will apply to all y-axis
labels in the plot.
-legend_on :turn on the plotting of a legend in the plot(s). Legend
will not be shown in the boxplot panels, if using.
-legend_labels LL1 LL2 LL3 ...
:optional legend labels, if using '-legend_on' to show a
legend. If no arguments are provided for this option,
then the labels will be the arguments to '-infiles ..'
(or '-yfiles ..'). If arguments ARE input, then they must
match the number of '-infiles ..' (or '-yfiles ..').
-legend_locs LOC1 LOC2 LOC3 ...
:optional legend locations, if using '-legend_on' to
show a legend. If no arguments are provided for this
option, then the locations will be the ones picked by
Python (a reasonable starting point). If arguments ARE
input, then they must match the number of '-infiles ..'
(or '-yfiles ..'). Valid entries are strings
recognizable by matplotlib's plt.legend()'s "loc" opt;
this includes: 'best', 'right', 'upper right', 'lower
right', 'center right', etc. Note that if you use a
two-word argument here, you MUST put it in quotes (or,
as a special treat, you can combine it with an
underscore, and it will be parsed correctly). So, valid
values of LOC* could be:
left
'lower left'
upper_center
-xlabel XL :optional text labels for the abscissa/x-axis. Only one may
be entered, and it will *only* be displayed on the bottom
panel of the output plot. Using labels is good practice!!
-title TT :optional title for the set of plots, placed above the top-
most subplot
-reverse_order :optional switch; by default, the entered time series
are plotted top to bottom according to the order they
were entered (i.e., first- listed plot at the top).
This option reverses that order (to first-listed plot
at the bottom), in order to match with 1dplot's
behavior.
-sepscl :make each graph have its own y-range, determined by
slightly padding its min and max values. By default,
the separate plots all have the same y-range, which
is determined by taking the min-of-mins and max-of-
maxes, and padding slightly outward.
-one_graph :plot multiple infiles in a single subplot (default is to put
each one in a new subplot).
-dpi DDD :choose the output image's DPI. The default value is
150.
-figsize FX FY :choose the output image's dimensions (units are inches).
The default width is 10; the default height
is 0.5 + N*0.75, where 'N' is the number of
infile columns.
-fontsize FS :change image fontsize; default is 10.
-fontfamily FF :change font-family used; default is the luvly
monospace.
-fontstyles FSS :add in a fontname; should match with chosen
font-family; default is whatever Python has on your
system for the given family. Whether your prescribed
font gets used depends on what is installed on your
comp.
-colors C1 C2 C3 ...
:you can decide what color(s) to cycle through in plots
(enter one or more); if there are more infile columns
than entered colors, the program just keeps cycling
through the list. By default, if only 1 infile column is
given, the plotline will be black; when more than one
infile column is given, a default palette of 10
colors, chosen for their mutual-distinguishable-ness,
will be cycled through.
One of the colors can also be a decimal in range [0.0, 1.0],
which will correspond to grayscale in range [black, white],
respectively.
-patches RL1 RL2 RL3 ...
:when viewing data from multiple runs that have been
processed+concatenated, knowing where they start/stop
can be useful. This option helps with that, by
alternating patches of the background slightly between
white and light gray. The user enters any appropriate
number of run lengths, and the background patch for
the duration of the first is white, then light gray,
etc. (to *start* with light gray, one can have '0' be
the first RL value).
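For example (hypothetical file dfile_rall.1D), for three
concatenated runs of 150 time points each:
1dplot.py -infiles dfile_rall.1D -patches 150 150 150 \
-prefix runs.jpg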
-censor_trs CS1 CS2 CS3 ...
:specify time points where censoring has occurred (e.g.,
due to a motion or outlier criterion). With this
option, the values are entered using AFNI index
notation, such as '0..3,8,25,99..$'. Note that if you
use special characters like the '$', then the given
string must be enclosed in quotes.
One or more string can be entered, and results are
simply combined (as well as if censor files are
entered-- see the '-censor_files ..' opt).
In order to highlight censored points, a translucent
background band of width 1 will be added to all plots.
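For example, a sketch (hypothetical file motion_enorm.1D)
using AFNI index notation, quoted because of the '$':
1dplot.py -infiles motion_enorm.1D \
-censor_trs '0..3,8,25,99..$' \
-prefix cen.jpg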
-censor_files CF1 CF2 CF3 ...
:specify time points where censoring has occurred (e.g.,
due to a motion or outlier criterion). With this
option, the values are entered as 1D files, columns
where 0 indicates censoring at that [i]th time point,
and 1 indicates *no* censoring there.
One or more file can be entered, and results are
simply combined (as well as if censor strings are
entered-- see the '-censor_trs ..' opt).
In order to highlight censored points, a translucent
background band of width 1 will be added to all plots.
-censor_hline CH1 CH2 CH3 ...
:one can add a dotted horizontal line to the plot, with
the intention that it represents the relevant threshold
(for example, motion limit or outlier fraction limit).
One can specify more than one hline: if one line
is entered, it will be applied to each plot; if more
than one hline is entered, there must be the same number
of values as infile columns.
Ummm, it is also assumed that all censor hline values
are >=0; if negative, it will be a problem-- ask if this
is a problem!
A value of 'NONE' can also be input, to be a placeholder
in a list, when some subplots have censor_hline values
and others don't.
-censor_RGB COL :choose the color of the censoring background; from the
command line, users enter a string, which could be:
+ 3 space-separated floats in range [0, 1], of RGB values
+ 4 space-separated floats in range [0, 1], of RGBA values
+ 1 string of a valid matplotlib color
+ 1 string of a valid matplotlib color and 1 float in
range [0, 1], which is an alpha opacity value.
(default is: '1 0.7 0.7').
-bkgd_color BC :change the background color outside of the plot
windows. Default is the Python color: 0.9.
EXAMPLES ~1~
1) Plot Euclidean norm (enorm) profile, with the censor limit and
related file of censoring:
1dplot.py \
-sepscl \
-boxplot_on \
-infiles motion_sub-10506_enorm.1D \
-censor_files motion_sub-10506_censor.1D \
-censor_hline 0.2 \
-title "Motion censoring" \
-ylabels enorm \
-xlabel "vols" \
-title "Motion censoring" \
-prefix mot_cen_plot.jpg
2) Plot the 6 rigid-body parameters from 3dvolreg, along with
the useful composite 'enorm' and outlier time series:
1dplot.py \
-sepscl \
-boxplot_on \
-reverse_order \
-infiles dfile_rall.1D \
motion_sub-10506_enorm.1D \
outcount_rall.1D \
-ylabels VOLREG enorm outliers \
-xlabel "vols" \
-title "Motion and outlier plots" \
-prefix mot_outlier_plot.png
3) Use labels and locations to plot 3dhistog output (there will
be some minor whining about failing to process comment label
*.1D files, but this won't cause any problems for the plot); here,
legend labels will be the args after '-yfiles ..' (with the
part in square brackets, but without the quotes):
1dplot.py \
-xfile HOUT_A.1D'[0]' \
-yfiles HOUT_A.1D'[1]' HOUT_B.1D'[1]' \
-prefix img_histog.png \
-colors blue 0.6 \
-boxplot_on \
-legend_on
4) Same as #3, but using some additional opts to control legends.
Here, am using 2 different formats of providing the legend
locations in each separate subplot, just for fun:
1dplot.py \
-xfile HOUT_A.1D'[0]' \
-yfiles HOUT_A.1D'[1]' HOUT_B.1D'[1]' \
-prefix img_histog.png \
-colors blue 0.6 \
-boxplot_on \
-legend_on \
-legend_locs upper_right "lower left" \
-legend_labels A B
AFNI program: 1dRplot
Usage:
------
1dRplot is a program for plotting a 1D file
Options in alphabetical order:
------------------------------
-addavg: Add line at average of column
-col.color COL1 [COL2 ...]: Colors for each column in -input.
COL? are integers for now.
-col.grp 1Dfile or Rexp: integer labels defining column grouping
-col.line.type LT1 [LT2 ...]: Line type for each column in -input.
LT? are integers for now.
-col.name NAME1 [NAME2 ...]: Name of each column in -input.
Special flags:
VOLREG: --> 'Roll Pitch Yaw I-S R-L A-P'
-col.name.show : Show names of column in -input.
-col.nozeros: Do not plot all zeros columns
-col.plot.char CHAR1 [CHAR2 ...] : Symbols for each column in -input.
CHAR? are integers (usually 0-127), or
characters + - I etc.
See the following link for what CHAR? values you can use:
http://stat.ethz.ch/R-manual/R-patched/library/graphics/html/points.html
-col.plot.type PLOT_TYPE: Column plot type.
'l' for line, 'p' for points, 'b' for both
-col.text.lym LYM_TEXT: Text to be placed at left Y margin.
You need one string per column.
Special Flags: You can also use COL.NAME to use column
names for the margin text, or you can use
COL.IND to use the column's index in the file
-col.text.rym RYM_TEXT: Text to be placed at right Y margin.
You need one string per column.
See also Special Flags section under -col.text.lym
-col.ystack: Scale each column and offset it based on its
column index. This is useful for stacking
a large number of columns on one plot.
It is only carried out when graphing more
than one series with the -one option.
-grid.show : Show grid.
-grp.label GROUP1 [GROUP2 ...]: Labels assigned to each group.
Default is no labeling
-help: this help message
-i 1D_INPUT: file to plot. This field can have multiple
formats. See Data Strings section below.
1dRplot will automatically detect certain
1D files output by some programs such as 3dhistog
or 3ddot and adjust parameters accordingly.
-input 1D_INPUT: Same as -i
-input_delta 1D_INPUT: file containing value for error bars
-input_type 1D_TYPE: Type of data in 1D file.
Choose from 'VOLREG', or 'XMAT'
-leg.fontsize : fontsize for legend text.
-leg.line.color : Color to use for items in legend.
Default is taken from column line color.
-leg.line.type : Line type to use for items in legend.
Default is taken from column line types.
If you want no line, set -leg.line.type = 0
-leg.names : Names to use for items in legend.
Default is taken from column names.
-leg.ncol : Number of columns in legend.
-leg.plot.char : plot characters to use for items in legend.
Default is taken from column plot character (-col.plot.char).
-leg.position : Legend position. Choose from:
bottomright, bottom, bottomleft
left, topleft, top, topright, right,
and center
-leg.show : Show legend.
-load.Rdat RDAT: load data list from save.Rdat for reproducing plot.
Note that you cannot override the settings in RDAT,
unless you run in the interactive R mode. For example,
say you have dice.Rdat saved from a previous command
and you want to change P$nodisp to TRUE:
load('dice.Rdat'); P$nodisp <- TRUE; plot.1D.eng(P)
-mat: Display as matrix
-matplot: Display as matrix
-msg.trace: Output trace information along with errors and notices
-multi: Put columns in separate graphs
-multiplot: Put columns in separate graphs
-nozeros: Do not plot all zeros time series
-one: Put all columns on one graph
-oneplot: Put all columns on one graph
-prefix PREFIX: Output prefix. See also -save.
-row.name NAME1 [NAME2 ...]: Name of each row in -input.
For the moment, this is only used with -matplot
-rowcol.name NAME1 [NAME2 ...]: Names of rows, same as name of columns.
For the moment, this is only used with -matplot.
-run_examples: Run all examples, one after the other.
-save PREFIX: Save plot and quit
No need for -prefix with this option
-save.Rdat : Save data list for reproducing plot in R.
You need to specify -prefix or -save
along with this option to set the prefix.
See also -load.Rdat
-save.size width height: Save figure size in pixels
Default is 2000 2000
-show_allowed_options: list of allowed options
-title TITLE: Graph title. File name is used by default.
Use NONE to be sure no title is used.
-TR TR: Sampling period, in seconds.
-verb VERB: VERB is an integer specifying verbosity level.
0 for quiet (Default). 1 or more: talkative.
-x 1D_INPUT: x axis. You can also use the string 'ENUM'
to indicate that the x axis should go from
1 to N, the number of samples in -input
-xax.label XLABEL: Label of X axis
-xax.lim MIN MAX [STEP]: Range of X axis, STEP is optional
-xax.tic.text XTTEXT: X tics text
-yax.label YLABEL: Label of Y axis
-yax.lim MIN MAX [STEP]: Range of Y axis, STEP is optional
-yax.tic.text YTTEXT: Y tics text
-zeros: Do plot all zeros time series
Data Strings:
-------------
You can specify input matrices and vectors in a variety of
ways. The simplest is by specifying a .1D file with all
the trimmings of column and row selectors. You can also
specify a string that gets evaluated on the fly.
For example: '1D: 1 4 8' evaluates to a vector of values 1 4 and 8.
Also, you can use R expressions such as: 'R: seq(0,10,3)'
To download demo data from AFNI's website run this command:
-----------------------------------------------------------
curl -o demo.X.xmat.1D afni.nimh.nih.gov/pub/dist/edu/data/samples/X.xmat.1D
curl -o demo.motion.1D afni.nimh.nih.gov/pub/dist/edu/data/samples/motion.1D
Example 1 --- :
--------------------------------
1dRplot -input demo.X.xmat.1D'[5..10]'
Example 2 --- :
--------------------------------
1dRplot -input demo.X.xmat.1D'[5..10]' \
-input_type XMAT
Example 3 --- :
--------------------------------
1dRplot -input demo.motion.1D \
-input_type VOLREG
Example 4 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)'
Example 5 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 5)' \
-one
Example 6 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack
Example 7 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack \
-col.grp '1D:1 1 1 2 2 2 3 3 3 3' \
-grp.label slow medium fast \
-prefix ta.jpg \
-yax.lim 0 18 \
-leg.show \
-leg.position top
Example 8 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack \
-col.grp '1D:1 1 1 2 2 2 3 3 3 3' \
-grp.label slow medium fast \
-prefix tb.jpg \
-yax.lim 0 18 \
-leg.show \
-leg.position top \
-nozeros \
-addavg
Example 9 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack \
-col.grp '1D:1 1 1 2 2 2 3 3 3 3' \
-grp.label slow medium fast \
-prefix tb.jpg \
-yax.lim 0 18 \
-leg.show \
-leg.position top \
-nozeros \
-addavg \
-col.text.lym Tutti mi chiedono tutti mi vogliono \
Donne ragazzi vecchi fanciulle \
-col.text.rym "R:paste('Col',seq(1,10), sep='')"
Example 10 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.plot.char 2 \
-col.plot.type p
Example 11 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.line.type 3 \
-col.plot.type l
Example 12 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.plot.char 2 \
-col.line.type 3 \
-col.plot.type b
Example 13 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.plot.char 2 5\
-col.line.type 3 4\
-col.plot.type b \
-TR 2
Example 14 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one -col.plot.char 2 -col.line.type 3 \
-col.plot.type b -TR 2 \
-yax.tic.text 'numa numa numa numaei' \
-xax.tic.text 'Alo' 'Salut' 'sunt eu' 'un haiduc'
AFNI program: 1dSEM
Usage: 1dSEM [options] -theta 1dfile -C 1dfile -psi 1dfile -DF nn.n
Computes path coefficients for connection matrix in Structural Equation
Modeling (SEM)
The program takes as input :
1. A 1D file with an initial representation of the connection matrix
with a 1 for each interaction component to be modeled and a 0
if it is not to be modeled. This matrix should be PxP (P rows, P columns)
2. A 1D file of C, the correlation matrix, also with dimensions PxP
3. A 1D file of the residual variance vector, psi
4. The degrees of freedom, DF
Output is printed to the terminal and may be redirected to a 1D file
The path coefficient matrix is printed for each matrix computed
Options:
-theta file.1D = connection matrix 1D file with initial representation
-C file.1D = correlation matrix 1D file
-psi file.1D = residual variance vector 1D file
-DF nn.n = degrees of freedom
-max_iter n = maximum number of iterations for convergence (Default=10000).
Values can range from 1 up to 10000.
-nrand n = number of random trials before optimization (Default = 100)
-limits m.mmm n.nnn = lower and upper limits for connection coefficients
(Default = -1.0 to 1.0)
-calccost = no modeling at all, just calculate the cost function for the
coefficients as given in the theta file. This may be useful for verifying
published results
-verbose nnnnn = print info every nnnnn steps
Model search options:
Look for best model. The initial connection matrix file must follow these
specifications. Each entry must be 0 for entries excluded from the model,
1 for each required entry in the minimum model, 2 for each possible path
to try.
-tree_growth or
-model_search = search for best model by growing a model for one additional
coefficient from the previous model for n-1 coefficients. If the initial
theta matrix has no required coefficients, the initial model will grow from
the best model for a single coefficient
-max_paths n = maximum number of paths to include (Default = 1000)
-stop_cost n.nnn = stop searching for paths when cost function is below
this value (Default = 0.1)
-forest_growth or
-grow_all = search over all possible models by comparing models at
incrementally increasing number of path coefficients. This
algorithm searches all possible combinations, so this method can be
exceptionally slow, especially as the number of
coefficients gets larger, for example at n>=9.
-leafpicker = relevant only for forest growth searches. Expands the search
optimization to look at multiple paths to avoid local minima. This method
is the default technique for tree growth and standard coefficient searches.
This program uses a Powell optimization algorithm to find the connection
coefficients for any particular model.
References:
Powell, MJD, "The NEWUOA software for unconstrained optimization without
derivatives", Technical report DAMTP 2004/NA08, Cambridge University
Numerical Analysis Group:
See: http://www.ii.uib.no/~lennart/drgrad/Powell2004.pdf
Bullmore, ET, Horwitz, B, Honey, GD, Brammer, MJ, Williams, SCR, Sharma, T,
How Good is Good Enough in Path Analysis of fMRI Data?
NeuroImage 11, 289-301 (2000)
Stein, JL, et al., A validated network of effective amygdala connectivity,
NeuroImage (2007), doi:10.1016/j.neuroimage.2007.03.022
The initial representation in the theta file is non-zero for each element
to be modeled. The 1D file can have leading columns for labels that will
be used in the output. Label rows must be commented with the # symbol.
If using any of the model search options, the theta file should have a '1' for
each required coefficient, '0' for each excluded coefficient, '2' for an
optional coefficient. Excluded coefficients are not modeled. Required
coefficients are included in every computed model.
N.B. - Connection directionality in the path connection matrices is from
column to row of the output connection coefficient matrices.
Be very careful when interpreting those path coefficients.
First of all, they are not correlation coefficients. Suppose we have a
network with a path connecting from region A to region B. The meaning
of the coefficient theta (e.g., 0.81) is this: if region A increases by
one standard deviation from its mean, region B would be expected to increase
by 0.81 its own standard deviations from its own mean while holding all other
relevant regional connections constant. With a path coefficient of -0.16,
when region A increases by one standard deviation from its mean, region B
would be expected to decrease by 0.16 its own standard deviations from its
own mean while holding all other relevant regional connections constant.
So theoretically speaking the range of the path coefficients can be anything,
but most of the time they range from -1 to 1. To save running time, the
default values for -limits are set with -1 and 1, but if the result hits
the boundary, increase them and re-run the analysis.
Examples:
To confirm a specific model:
1dSEM -theta inittheta.1D -C SEMCorr.1D -psi SEMvar.1D -DF 30
To search models by growing from the best single coefficient model
up to 12 coefficients
1dSEM -theta testthetas_ms.1D -C testcorr.1D -psi testpsi.1D \
-limits -2 2 -nrand 100 -DF 30 -model_search -max_paths 12
To search all possible models up to 8 coefficients:
1dSEM -theta testthetas_ms.1D -C testcorr.1D -psi testpsi.1D \
-nrand 10 -DF 30 -stop_cost 0.1 -grow_all -max_paths 8 | & tee testgrow.txt
For more information, see https://afni.nimh.nih.gov/sscc/gangc/PathAna.html
and our HBM 2007 poster at
https://sscc.nimh.nih.gov/sscc/posters/file.2007-06-07.0771819246
If you find this program useful, please cite:
G Chen, DR Glen, JL Stein, AS Meyer-Lindenberg, ZS Saad, RW Cox,
Model Validation and Automated Search in FMRI Path Analysis:
A Fast Open-Source Tool for Structural Equation Modeling,
Human Brain Mapping Conference, 2007
AFNI program: 1dsound
Usage: 1dsound [options] tsfile
Program to create a sound file from a 1D file (column of numbers).
Is this program useful? Probably not, but it can be fun.
-------
OPTIONS
-------
===== output filename =====
-prefix ppp = Output filename will be ppp.au
[Sun audio format https://en.wikipedia.org/wiki/Au_file_format]
+ If you don't use '-prefix', the output is file 'sound.au'.
+ If 'ppp' ends in '.au', this program won't add another '.au'.
===== encoding details =====
-16PCM = Output in 16-bit linear PCM encoding (uncompressed)
+ Less quantization noise (audible hiss) :)
+ Takes twice as much disk space for output as 8-bit output :(
+++ This is the default method now!
+ https://en.wikipedia.org/wiki/Pulse-code_modulation
-8PCM = Output in 8-bit linear PCM encoding
+ There is no good reason to use this option.
-8ulaw = Output in 8-bit mu-law encoding.
+ Provides a little better quality than -8PCM,
but still has audible quantization noise hiss.
+ https://en.wikipedia.org/wiki/M-law_algorithm
-tper X = X seconds of sound per time point in 'tsfile'.
-TR X Allowed range for 'X' is 0.01 to 1.0 (inclusive).
-dt X [default time step is 0.2 s]
You can use '-tper', '-dt', or '-TR', as you like.
===== how the sound timeseries is produced from the data timeseries =====
-FM = Output sound is frequency modulated between 110 and 1760 Hz
from min to max in the input 1D file.
+ Usually 'sounds terrible'.
+ The only reason this is here is that it was the first method
I implemented, and I kept it for the sake of nostalgia.
-notes = Output sound is a sequence of notes, low to high pitch
based on min to max in the input 1D file.
+++ This is the default method of operation.
+ A pentatonic scale is used, which usually 'sounds nice':
https://en.wikipedia.org/wiki/Pentatonic_scale
-notewave W = Selects the shape of the notes used. 'W' is one of these:
-waveform W sine = pure sine wave (sounds simplistic)
sqsine = square root of sine wave (a little harsh and loud)
square = square wave (a lot harsh and loud)
triangle = triangle wave [the default waveform]
-despike = apply a simple despiking algorithm, to avoid the artifact
of one very large or small value making all the other notes
end up being the same.
===== Notes about notes =====
** At this time, the default production method is '-notes', **
** using the triangle waveform (I like this best). **
** With '-notes', up to 6 columns of the input file will be used **
** to produce a polyphonic sound (in a single channel). **
** (Any columns past the 6th in the input 'tsfile' are ignored.) **
===== hear the sound right away! =====
-play = Plays the sound file after it is written.
On this computer: uses program /usr/bin/aplay
===>> Playing sound on a remote computer is
annoying, pointless, and likely to get you punched.
--------
EXAMPLES
--------
The first 2 examples are purely synthetic, using 'data' files created
on the command line. The third example uses a data file that was written
out of an AFNI graph viewer using the 'w' keystroke.
1dsound -prefix A1 '1D: 0 1 2 1 0 1 2 0 1 2'
1deval -num 100 -expr 'sin(x+0.01*x*x)' | 1dsound -tper 0.1 -prefix A2 1D:stdin
1dsound -prefix A3 -tper 0.1 028_044_003.1D
-----
NOTES
-----
* File can be played with the 'sox' audio package command
play A1.au gain -5
+ Here 'gain -5' turns the volume down :)
+ sox is not provided with AFNI :(
+ To see if sox is on your system, type the command 'which sox'
+ If you have sox, you can add 'reverb 99' at the end of the
'play' command line, and have some extra fun.
+ Many other effects are available with sox 'play',
and they can also be used to produce edited sound files:
http://sox.sourceforge.net/sox.html#EFFECTS
+ You can convert the .au file produced from here to other
formats using sox; for example:
sox Bob.au Cox.au BobCox.aiff
combines the 2 .au input files to a 2-channel (stereo)
Apple .aiff output file. See this for more information:
http://sox.sourceforge.net/soxformat.html
* Creation of the file does not depend on sox, so if you have
another way to play .au files, you can use that.
* Mac OS X: Quicktime (GUI) or afplay (command line) programs.
+ sox can be installed by first installing 'brew'
-- see https://brew.sh/ -- and then using command
'brew install sox'.
* Linux: Getting sox is probably the simplest thing to do.
+ Or install the mplayer package (which also does videos).
+ Another possibility is the aplay program.
* The audio output file is sampled at 16K samples per second.
For example, a 30 second file will be 960K bytes in size,
at 16 bits (2 bytes) per sample.
* The auditory effect varies significantly with the '-tper'
parameter X; '-tper 0.02' is very different than '-tper 0.4'.
--- Quick hack for experimentation and fun - RWCox - Aug 2018 ---
AFNI program: 1dsum
Usage: 1dsum [options] a.1D b.1D ...
where each file a.1D, b.1D, etc. is an ASCII file of numbers arranged
in rows and columns. The sum of each column is written to stdout.
Options:
-ignore nn = skip the first nn rows of each file
-use mm = use only mm rows from each file
-mean = compute the average instead of the sum
-nocomment = By default, the # comments from the header of the
first input file are reproduced in the output;
use the '-nocomment' option if you do NOT
want this to happen.
-OKempty = If you encounter an empty 1D file, print 0
and exit quietly instead of exiting with an
error message
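Example (hypothetical files): compute the average of each column,
skipping the first 2 rows of each file:
1dsum -mean -ignore 2 a.1D b.1D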
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dsvd
Usage: 1dsvd [options] 1Dfile 1Dfile ...
- Computes SVD of the matrix formed by the 1D file(s).
- Output appears on stdout; to save it, use '>' redirection.
OPTIONS:
-one = Make 1st vector be all 1's.
-vmean = Remove mean from each vector (can't be used with -one).
-vnorm = Make L2-norm of each vector = 1 before SVD.
* The above 2 options mirror those in 3dpc.
-cond = Only print condition number (ratio of largest to smallest singular value)
-sing = Only print singular values
* To compare the singular values from 1dsvd with those from
3dDeconvolve you must use the -vnorm option with 1dsvd.
For example, try
3dDeconvolve -nodata 200 1 -polort 5 -num_stimts 1 \
-stim_times 1 '1D: 30 130' 'BLOCK(50,1)' -singvals
1dsvd -sing -vnorm nodata.xmat.1D
-sort = Sort singular values (descending) [the default]
-nosort = Don't bother to sort the singular values
-asort = Sort singular values (ascending)
-1Dleft = Only output left eigenvectors, in a .1D format
This might be useful for reducing the number of
columns in a design matrix. The singular values
are printed at the top of each vector column,
as a '#...' comment line.
-nev n = If -1Dleft is used, '-nev' specifies to output only
the first 'n' eigenvectors, rather than all of them.
* If you are a tricky person, such as Souheil, you can
put a '%' after the value, and then you are saying
keep eigenvectors until at least n% of the sum of
singular values is accounted for. In this usage,
'n' must be a number less than 100; for example, to
reduce a matrix down to a smaller set of columns that
capture most of its column space, try something like
1dsvd -1Dleft -nev 99% Xorig.1D > X99.1D
EXAMPLE:
1dsvd -vmean -vnorm -1Dleft fred.1D'[1..6]' | 1dplot -stdin
NOTES:
* Call the input n X m matrix [A] (n rows, m columns). The SVD
is the factorization [A] = [U] [S] [V]' ('=transpose), where
- [U] is an n x m matrix (whose columns are the 'Left vectors')
- [S] is a diagonal m x m matrix (the 'singular values')
- [V] is an m x m matrix (whose columns are the 'Right vectors')
* The default output of the program is
- An echo of the input [A]
- The [U] matrix, each column headed by its singular value
- The [V] matrix, each column headed by its singular value
(please note that [V] is output, not [V]')
- The pseudo-inverse of [A]
* This program was written simply for some testing purposes,
but is distributed with AFNI because it might be useful-ish.
* Recall that you can transpose a .1D file on input by putting
an escaped ' character after the filename. For example,
1dsvd fred.1D\'
You can use this feature to get around the fact that there
is no '-1Dright' option. If you understand.
* For more information on the SVD, you can start at
http://en.wikipedia.org/wiki/Singular_value_decomposition
* Author: Zhark the Algebraical (Linear).
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1d_tool.py
=============================================================================
1d_tool.py - for manipulating and evaluating 1D files
---------------------------------------------------------------------------
purpose: ~1~
This program is meant to read/manipulate/write/diagnose 1D datasets.
Input can be specified using AFNI sub-brick[]/time{} selectors.
---------------------------------------------------------------------------
examples (very basic for now): ~1~
Example 1. Select by rows and columns, akin to 1dcat. ~2~
Note: columns can be X-matrix labels.
1d_tool.py -infile 'data/X.xmat.1D[0..3]{0..5}' -write t1.1D
or using column labels:
1d_tool.py -infile 'data/X.xmat.1D[Run#1Pol#0..Run#1Pol#3]' \
-write run0_polorts.1D
Example 2. Compare with selection by separate options. ~2~
1d_tool.py -infile data/X.xmat.1D \
-select_cols '0..3' -select_rows '0..5' \
-write t2.1D
diff t1.1D t2.1D
Example 2b. Select or remove columns by label prefixes. ~2~
Keep only bandpass columns:
1d_tool.py -infile X.xmat.1D -write X.bandpass.1D \
-label_prefix_keep bandpass
Remove only bandpass columns (maybe for 3dRFSC):
1d_tool.py -infile X.xmat.1D -write X.no.bandpass.1D \
-label_prefix_drop bandpass
Keep polort columns (start with 'Run'), motion shifts ('d'), and labels
starting with 'a' and 'b'. But drop 'bandpass' columns:
1d_tool.py -infile X.xmat.1D -write X.weird.1D \
-label_prefix_keep Run d a b \
-label_prefix_drop bandpass
Example 2c. Select columns by group values, 3 examples. ~2~
First be sure of what the group labels represent.
1d_tool.py -infile X.xmat.1D -show_group_labels
i) Select polort (group -1) and other baseline (group 0) terms.
1d_tool.py -infile X.xmat.1D -select_groups -1 0 -write baseline.1D
ii) Select everything but baseline groups (anything positive).
1d_tool.py -infile X.xmat.1D -select_groups POS -write regs.of.int.1D
iii) Reorder to have regressors of interest, then motion, then polort.
1d_tool.py -infile X.xmat.1D -select_groups POS 0, -1 -write order.1D
iv) Create stim-only X-matrix file: select non-baseline columns of
X-matrix and write with header comment.
1d_tool.py -infile X.xmat.1D -select_groups POS \
-write_with_header yes -write X.stim.xmat.1D
Or, using a convenience option:
1d_tool.py -infile X.xmat.1D -write_xstim X.stim.xmat.1D
Example 2d. Select specific runs from the input. ~2~
Note that X.xmat.1D may have runs defined automatically, but for an
arbitrary input, they may need to be specified via -set_run_lengths.
i) .... apparently I forgot to do this...
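A sketch of what this might look like (run lengths and file names
are hypothetical), using -select_runs:
   1d_tool.py -infile dfile_rall.1D \
              -set_run_lengths 100 80 120 \
              -select_runs 2 -write run2.1D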
Example 3. Transpose a dataset, akin to 1dtranspose. ~2~
1d_tool.py -infile t3.1D -transpose -write ttr.1D
Example 4a. Zero-pad a single-run 1D file across many runs. ~2~
Given a file of regressors (for example) across a single run (run 2),
create a new file that is padded with zeros, so that it now spans
many (7) runs. Runs are 1-based here.
1d_tool.py -infile ricor_r02.1D -pad_into_many_runs 2 7 \
-write ricor_r02_all.1D
Example 4b. Similar to 4a, but specify varying TRs per run. ~2~
The number of runs must match the number of run_lengths parameters.
1d_tool.py -infile ricor_r02.1D -pad_into_many_runs 2 7 \
-set_run_lengths 64 61 67 61 67 61 67 \
-write ricor_r02_all.1D
Example 5. Display small details about a 1D dataset: ~2~
a. Display number of rows and columns for a 1D dataset.
Note: to display them "quietly" (only the numbers), add -verb 0.
This is useful for setting a script variable.
1d_tool.py -infile X.xmat.1D -show_rows_cols
1d_tool.py -infile X.xmat.1D -show_rows_cols -verb 0
b. Display indices of regressors of interest from an X-matrix.
1d_tool.py -infile X.xmat.1D -show_indices_interest
c. Display X-matrix labels by group.
1d_tool.py -infile X.xmat.1D -show_group_labels
d. Display "degree of freedom" information:
1d_tool.py -infile X.xmat.1D -show_df_info
e. Display X-matrix stimulus class information (for one class or ALL).
1d_tool.py -infile X.xmat.1D -show_xmat_stim_info aud
1d_tool.py -infile X.xmat.1D -show_xmat_stim_info ALL
f. Display X-matrix column index list for those of the given classes.
Display regressor labels, or use encoded column index format.
1d_tool.py -infile X.xmat.1D -show_xmat_stype_cols AM IM
1d_tool.py -infile X.xmat.1D -show_xmat_stype_cols ALL \
-show_regs_style encoded
g. Display X-matrix column index list for all-zero regressors.
Display regressor labels, or use encoded column index format.
1d_tool.py -infile X.xmat.1D -show_regs allzero
Example 6a. Show correlation matrix warnings for this matrix. ~2~
This option does not include warnings from baseline regressors,
which are common (from polort 0, from similar motion, etc).
1d_tool.py -infile X.xmat.1D -show_cormat_warnings
Example 6b. Show entire correlation matrix. ~2~
1d_tool.py -infile X.xmat.1D -show_cormat
Example 6c. Like 6a, but include warnings for baseline regressors. ~2~
1d_tool.py -infile X.xmat.1D -show_cormat_warnings_full
Example 7a. Output temporal derivative of motion regressors. ~2~
There are 9 runs in dfile_rall.1D, and derivatives are applied per run.
1d_tool.py -infile dfile_rall.1D -set_nruns 9 \
-derivative -write motion.deriv.1D
Example 7b. Similar to 7a, but let the run lengths vary. ~2~
The sum of run lengths should equal the number of time points.
1d_tool.py -infile dfile_rall.1D \
-set_run_lengths 64 64 64 64 64 64 64 64 64 \
-derivative -write motion.deriv.rlens.1D
Example 7c. Use forward differences. ~2~
instead of the default backward differences...
1d_tool.py -infile dfile_rall.1D \
-set_run_lengths 64 64 64 64 64 64 64 64 64 \
-forward_diff -write motion.deriv.rlens.1D
Example 8. Verify whether labels show slice-major ordering. ~2~
This is where all slice0 regressors come first, then all slice1
regressors, etc. Either show the labels and verify visually, or
print whether it is true.
1d_tool.py -infile scan_2.slibase.1D'[0..12]' -show_labels
1d_tool.py -infile scan_2.slibase.1D -show_labels
1d_tool.py -infile scan_2.slibase.1D -show_label_ordering
Example 9a. Given motion.1D, create an Enorm time series. ~2~
Take the derivative (ignoring run breaks) and the Euclidean Norm,
and write as e.norm.1D. This might be plotted to show sudden
motion as a single time series.
1d_tool.py -infile motion.1D -set_nruns 9 \
-derivative -collapse_cols euclidean_norm \
-write e.norm.1D
Example 9b. Like 9a, but supposing the run lengths vary (still 576 TRs). ~2~
1d_tool.py -infile motion.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-derivative -collapse_cols euclidean_norm \
-write e.norm.rlens.1D
Example 9c. Like 9b, but weight the rotations as 0.9 mm. ~2~
1d_tool.py -infile motion.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-derivative -collapse_cols weighted_enorm \
-weight_vec .9 .9 .9 1 1 1 \
-write e.norm.weighted.1D
Example 10. Given motion.1D, create censor files to use in 3dDeconvolve. ~2~
Here a TR is censored if the derivative values have a Euclidean Norm
above 1.2. It is common to also censor each previous TR, as motion may
span both (previous because "derivative" is actually a backward
difference).
The file created by -write_censor can be used with 3dD's -censor option.
The file created by -write_CENSORTR can be used with -CENSORTR. They
should have the same effect in 3dDeconvolve. The CENSORTR file is more
readable, but the censor file is better for plotting against the data.
a. general example ~3~
1d_tool.py -infile motion.1D -set_nruns 9 \
-derivative -censor_prev_TR \
-collapse_cols euclidean_norm \
-moderate_mask -1.2 1.2 \
-show_censor_count \
-write_censor subjA_censor.1D \
-write_CENSORTR subjA_CENSORTR.txt
b. using -censor_motion ~3~
The -censor_motion option is available, which implies '-derivative',
'-collapse_cols euclidean_norm', '-moderate_mask -LIMIT LIMIT', and the
prefix for '-write_censor' and '-write_CENSORTR' output files. This
option will also result in subjA_enorm.1D being written, which is the
euclidean norm of the derivative, before the extreme mask is applied.
1d_tool.py -infile motion.1D -set_nruns 9 \
-show_censor_count \
-censor_motion 1.2 subjA \
-censor_prev_TR
c. allow the run lengths to vary ~3~
1d_tool.py -infile motion.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-show_censor_count \
-censor_motion 1.2 subjA_rlens \
-censor_prev_TR
Consider also '-censor_prev_TR' and '-censor_first_trs'.
Example 11. Demean the data. Use motion parameters as an example. ~2~
The demean operation is done per run (the number of runs defaults
to 1 when 1d_tool.py does not otherwise know).
a. across all runs (if runs are not known from input file)
1d_tool.py -infile dfile_rall.1D -demean -write motion.demean.a.1D
b. per run, over 9 runs of equal length
1d_tool.py -infile dfile_rall.1D -set_nruns 9 \
-demean -write motion.demean.b.1D
c. per run, over 9 runs of varying length
1d_tool.py -infile dfile_rall.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-demean -write motion.demean.c.1D
Example 12. "Uncensor" the data, zero-padding previously censored TRs. ~2~
Note that an X-matrix output by 3dDeconvolve contains censor
information in GoodList, which is the list of uncensored TRs.
a. if the input dataset has censor information
1d_tool.py -infile X.xmat.1D -censor_fill -write X.uncensored.1D
b. if censor information needs to come from a parent
1d_tool.py -infile sum.ideal.1D -censor_fill_parent X.xmat.1D \
-write sum.ideal.uncensored.1D
c. if censor information needs to come from a simple 1D time series
1d_tool.py -censor_fill_parent motion_FT_censor.1D \
-infile cdata.1D -write cdata.zeropad.1D
Example 13. Show whether the input file is valid as a numeric data file. ~2~
a. as any generic 1D file
1d_tool.py -infile data.txt -looks_like_1D
b. as a 1D stim_file, of 3 runs of 64 TRs (TR is irrelevant)
1d_tool.py -infile data.txt -looks_like_1D \
-set_run_lengths 64 64 64
c. as a stim_times file with local times
1d_tool.py -infile data.txt -looks_like_local_times \
-set_run_lengths 64 64 64 -set_tr 2
d. as a 1D or stim_times file with global times
1d_tool.py -infile data.txt -looks_like_global_times \
-set_run_lengths 64 64 64 -set_tr 2
e. report modulation type (amplitude and/or duration)
1d_tool.py -infile data.txt -looks_like_AM
f. perform all tests, reporting all errors
1d_tool.py -infile data.txt -looks_like_test_all \
-set_run_lengths 64 64 64 -set_tr 2
Example 14. Split motion parameters across runs. ~2~
Split, but keep them at the original length so they apply to the same
multi-run regression. Each file will be the same as the original for
the run it applies to, but zero across all other runs.
Note that -split_into_pad_runs takes the output prefix as a parameter.
1d_tool.py -infile motion.1D \
-set_run_lengths 64 64 64 \
-split_into_pad_runs mot.padded
The output files are:
mot.padded.r01.1D mot.padded.r02.1D mot.padded.r03.1D
If the run lengths are all the same, using -set_nruns is shorter:
1d_tool.py -infile motion.1D \
-set_nruns 3 \
-split_into_pad_runs mot.padded
Example 15a. Show the maximum pairwise displacement. ~2~
Show the max pairwise displacement in the motion parameter file.
So over all TR pairs, find the biggest displacement.
In one direction it is easy (AP say). If the minimum AP shift is -0.8
and the maximum is 1.5, then the maximum displacement is 2.3 mm. It
is less clear in 6-D space, and instead of trying to find an enveloping
set of "coordinates", distances between all N choose 2 pairs are
evaluated (brute force).
1d_tool.py -infile dfile_rall.1D -show_max_displace
Example 15b. Like 15a, but do not include displacement from censored TRs. ~2~
1d_tool.py -infile dfile_rall.1D -show_max_displace \
-censor_infile motion_censor.1D
Example 15c. Show the entire distance/displacement matrix. ~2~
Show all pairwise displacements (vector distances) in a row-vector
file (e.g. of motion parameters). Note that the maximum element of
this matrix should be the one output by -show_max_displace.
1d_tool.py -infile coords.1D -show_distmat
Example 16. Randomize a list of numbers, say, those from 1..40. ~2~
The numbers can come from 1deval, with the result piped to
'1d_tool.py -infile stdin -randomize_trs ...'.
1deval -num 40 -expr t+1 | \
1d_tool.py -infile stdin -randomize_trs -write stdout
See also -seed.
Example 17. Display min, mean, max, stdev of 1D file. ~2~
1d_tool.py -show_mmms -infile data.1D
To be more detailed, get stats for each of x, y, and z directional
blur estimates for all subjects. Cat(enate) all of the subject files
and pipe that to 1d_tool.py with infile - (meaning stdin).
cat subject_results/group.*/sub*/*.results/blur.errts.1D \
| 1d_tool.py -show_mmms -infile -
Example 18. Just output censor count for default method. ~2~
This will output nothing but the number of TRs that would be censored,
akin to using -censor_motion and -censor_prev_TR.
1d_tool.py -infile dfile_rall.1D -set_nruns 3 -quick_censor_count 0.3
1d_tool.py -infile dfile_rall.1D -set_run_lengths 100 80 120 \
-quick_censor_count 0.3
Example 19. Compute GCOR from some 1D file. ~2~
* Note, time should be in the vertical direction of the file
(else use -transpose).
1d_tool.py -infile data.1D -show_gcor
Or get some GCOR documentation and many values.
1d_tool.py -infile data.1D -show_gcor_doc
1d_tool.py -infile data.1D -show_gcor_all
Example 20. Display censored or uncensored TRs lists (for use in 3dTcat). ~2~
TRs which were censored:
1d_tool.py -infile X.xmat.1D -show_trs_censored encoded
TRs which were applied in analysis (those NOT censored):
1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded
Only those applied in run #2 (1-based).
1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
-show_trs_run 2
Example 21. Convert to rank order. ~2~
a. show rank order of slice times from a 1D file
1d_tool.py -infile slice_times.1D -rank -write -
b. show rank order of slice times piped directly from 3dinfo
Note: input should be space separated, not '|' separated.
3dinfo -slice_timing -sb_delim ' ' epi+orig \
| 1d_tool.py -infile - -rank -write -
c. show rank order using 'competition' rank, instead of default 'dense'
3dinfo -slice_timing -sb_delim ' ' epi+orig \
| 1d_tool.py -infile - -rank_style competition -write -
Example 22. Guess volreg base index from motion parameters. ~2~
1d_tool.py -infile dfile_rall.1D -collapse_cols enorm -show_argmin
Example 23. Convert volreg parameters to those suitable for 3dAllineate. ~2~
1d_tool.py -infile dfile_rall.1D -volreg2allineate \
-write allin_rall_aff12.1D
Example 24. Show TR counts per run. ~2~
a. list the number of TRs in each run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts trs
b. list the number of TRs censored in each run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts trs_cen
c. list the number of TRs prior to censoring in each run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts trs_no_cen
d. list the fraction of TRs censored per run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts frac_cen
e. list the fraction of TRs censored in run 3
1d_tool.py -infile X.xmat.1D -show_tr_run_counts frac_cen \
-show_trs_run 3
Example 25. Show number of runs. ~2~
1d_tool.py -infile X.xmat.1D -show_num_runs
Example 26. Convert global index to run and TR index. ~2~
Note that run indices are 1-based, while TR indices are 0-based,
as usual. Confusion is key.
a. explicitly, given run lengths
1d_tool.py -set_run_lengths 100 80 120 -index_to_run_tr 217
b. implicitly, given an X-matrix (** be careful about censoring **)
1d_tool.py -infile X.nocensor.xmat.1D -index_to_run_tr 217
Example 27. Display length of response curve. ~2~
1d_tool.py -show_trs_to_zero -infile data.1D
Print out the length of the input (in TRs, say) until the data
values become a constant zero. Zeros that are followed by non-zero
values are irrelevant.
Example 28. Convert slice order to slice times. ~2~
A slice order might be the sequence in which slices were acquired.
For example, with 33 slices, perhaps the order is:
set slice_order = ( 0 6 12 18 24 30 1 7 13 19 25 31 2 8 14 20 \
26 32 3 9 15 21 27 4 10 16 22 28 5 11 17 23 29 )
Put this in a file:
echo $slice_order > slice_order.1D
1d_tool.py -set_tr 2 -slice_order_to_times \
-infile slice_order.1D -write slice_times.1D
Or as a filter:
echo $slice_order | 1d_tool.py -set_tr 2 -slice_order_to_times \
-infile - -write -
Example 29. Display minimum cluster size from 3dClustSim output. ~2~
Given a text file output by 3dClustSim, e.g. ClustSim.ACF.NN1_1sided.1D,
and given both an uncorrected (pthr) and a corrected (alpha) p-value,
look up the entry that specifies the minimum cluster size needed for
corrected p-value significance.
If requested in afni_proc.py, they are under files_ClustSim.
a. with modestly verbose output (default is -verb 1)
1d_tool.py -infile ClustSim.ACF.NN1_1sided.1D -csim_show_clustsize
b. quiet, to see just the output value
1d_tool.py -infile ClustSim.ACF.NN1_1sided.1D -csim_show_clustsize \
-verb 0
c. quiet, and capture the output value (tcsh syntax)
set clustsize = `1d_tool.py -infile ClustSim.ACF.NN1_1sided.1D \
-csim_show_clustsize -verb 0`
Example 30. Display columns that are all-zero (e.g. censored out) ~2~
Given a regression matrix, list columns that are entirely zero, such
as those for which there were no events, or those for which event
responses were censored out.
a. basic output
Show the number of such columns and a list of labels
1d_tool.py -show_regs allzero -infile zerocols.X.xmat.1D
b. quiet output (do not include the number of such columns)
1d_tool.py -show_regs allzero -infile zerocols.X.xmat.1D -verb 0
c. quiet encoded index list
1d_tool.py -show_regs allzero -infile zerocols.X.xmat.1D \
-show_regs_style encoded -verb 0
d. list all labels of regressors of interest (with no initial count)
1d_tool.py -show_regs set -infile zerocols.X.xmat.1D \
-select_groups POS -verb 0
Example 31. Determine slice timing pattern (for EPI data) ~2~
Determine the slice timing pattern from a list of slice times.
The output is:
- multiband level (usually 1)
- tpattern, one such pattern from those in 'to3d -help'
a. where slice times are in a file
1d_tool.py -show_slice_timing_pattern -infile slice_times.1D
b. or as a filter
3dinfo -slice_timing -sb_delim ' ' FT_epi_r1+orig \
| 1d_tool.py -show_slice_timing_pattern -infile -
c. or if it fails, be gentle and verbose
1d_tool.py -infile slice_times.1D \
-show_slice_timing_gentle -verb 3
Example 32. Display slice timing ~2~
Display slice timing given a to3d timing pattern, the number of
slices, the multiband level, and optionally the TR.
a. pattern alt+z, 40 slices, multiband 1, TR 2s
(40 slices in 2s means slices are acquired every 0.05 s)
1d_tool.py -slice_pattern_to_times alt+z 40 1 -set_tr 2
b. same, but multiband 2
(so slices are acquired every 0.1 s, and there are 2 such sets)
1d_tool.py -slice_pattern_to_times alt+z 40 2 -set_tr 2
c. test this by feeding the output to -show_slice_timing_pattern
1d_tool.py -slice_pattern_to_times alt+z 40 2 -set_tr 2 \
| 1d_tool.py -show_slice_timing_pattern -infile -
---------------------------------------------------------------------------
command-line options: ~1~
---------------------------------------------------------------------------
basic informational options: ~2~
-help : show this help
-hist : show the module history
-show_valid_opts : show all valid options
-ver : show the version number
----------------------------------------
required input: ~2~
-infile DATASET.1D : specify input 1D file
----------------------------------------
general options: ~2~
-add_cols NEW_DSET.1D : extend dset to include these columns
-backward_diff : take derivative as first backward difference
Take the backward differences at each time point. For each index > 0,
value[index] = value[index] - value[index-1], and value[0] = 0.
This option is identical to -derivative.
See also -forward_diff, -derivative, -set_nruns, -set_run_lengths.
-collapse_cols METHOD : collapse multiple columns into one, where
METHOD is one of: min, max, minabs, maxabs, euclidean_norm,
weighted_enorm.
Consideration of the euclidean_norm method:
For censoring, the euclidean_norm method is used (sqrt(sum squares)).
This combines rotations (in degrees) with shifts (in mm) as if they
had the same weight.
Note that assuming rotations are about the center of mass (which
should produce a minimum average distance), then the average arc
length (averaged over the brain mask) of a voxel rotated by 1 degree
(about the CM) is the following (for the given datasets):
TT_N27+tlrc: 0.967 mm (average radius = 55.43 mm)
MNIa_caez_N27+tlrc: 1.042 mm (average radius = 59.69 mm)
MNI_avg152T1+tlrc: 1.088 mm (average radius = 62.32 mm)
The point of these numbers is to suggest that equating degrees and
mm should be fine. The average distance caused by a 1 degree
rotation is very close to 1 mm (in an adult human).
* 'enorm' is short for 'euclidean_norm'.
* Use of weighted_enorm requires the -weight_vec option.
e.g. -collapse_cols weighted_enorm -weight_vec .9 .9 .9 1 1 1
-censor_motion LIMIT PREFIX : create censor files
This option implies '-derivative', '-collapse_cols euclidean_norm',
'-moderate_mask -LIMIT LIMIT' and applies PREFIX for '-write_censor'
and '-write_CENSORTR' output files. It also outputs the euclidean
norm of the derivative, before the limit is applied.
The temporal derivative is taken with run breaks applied (the
derivative at the first TR of each run is 0), then the columns are
collapsed into one via each TR's vector length (Euclidean Norm:
sqrt(sum of squares)).
After that, a mask time series is made from TRs with values outside
(-LIMIT,LIMIT), i.e. if >= LIMIT or <= -LIMIT, the result is 1.
This binary time series is then written out in -CENSORTR format, with
the moderate TRs written in -censor format (either can be applied in
3dDeconvolve). The output files will be named PREFIX_censor.1D,
PREFIX_CENSORTR.txt and PREFIX_enorm.1D (e.g. subj123_censor.1D,
subj123_CENSORTR.txt and subj123_enorm.1D).
Besides an input motion file (-infile), the number of runs is needed
(-set_nruns or -set_run_lengths).
Consider also '-censor_prev_TR' and '-censor_first_trs'.
See example 10.
-censor_fill : expand data, filling censored TRs with zeros
-censor_fill_parent PARENT : similar, but get censor info from a parent
The output of these operations is a longer dataset. Each TR that had
previously been censored is re-inserted as a zero.
The purpose of this is to make 1D time series data properly align
with the all_runs dataset, for example. Otherwise, the ideal 1D data
might have missing TRs, and so will align worse with responses over
the duration of all runs (it might start aligned, but drift earlier
and earlier as more TRs are censored).
See example 12.
-censor_infile CENSOR_FILE : apply censoring to -infile dataset
This removes TRs from the -infile dataset where the CENSOR_FILE is 0.
The censor file is akin to what is used with "3dDeconvolve -censor",
where TRs with 1 are kept and those with 0 are excluded from analysis.
See example 15b.
-censor_first_trs N : when censoring motion, also censor the first
N TRs of each run
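For instance (a sketch, akin to example 10b, assuming 9 runs):
  1d_tool.py -infile motion.1D -set_nruns 9 \
             -censor_motion 1.2 subjA \
             -censor_first_trs 3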
-censor_next_TR : for each censored TR, also censor next one
(probably for use with -forward_diff)
-censor_prev_TR : for each censored TR, also censor previous
-cormat_cutoff CUTOFF : set cutoff for cormat warnings (in [0,1])
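For instance (a sketch, with a hypothetical X-matrix), to warn at a
stricter correlation cutoff:
  1d_tool.py -infile X.xmat.1D -cormat_cutoff 0.3 \
             -show_cormat_warnings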
-csim_show_clustsize : for 3dClustSim input, show min clust size
Given a 3dClustSim table output (e.g. ClustSim.ACF.NN1_1sided.1D),
along with uncorrected (pthr) and corrected (alpha) p-values, show the
minimum cluster size to achieve significance.
The pthr and alpha values can be controlled via the options -csim_pthr
and -csim_alpha (with defaults of 0.001 and 0.05, respectively).
The -verb option controls how much detail is shown about the
clustering parameters (with -verb 0, just the cluster size).
See Example 29, along with options -csim_pthr, -csim_alpha and -verb.
-csim_pthr THRESH : specify uncorrected threshold for csim output
e.g. -csim_pthr 0.0001
This option implies -csim_show_clustsize, and is used to specify the
uncorrected p-value of the 3dClustSim output.
See also -csim_show_clustsize.
-csim_alpha THRESH : specify corrected threshold for csim output
e.g. -csim_alpha 0.01
This option implies -csim_show_clustsize, and is used to specify the
corrected, cluster-wise p-value of the 3dClustSim output.
See also -csim_show_clustsize.
-demean : demean each run (new mean of each run = 0.0)
-derivative : take the temporal derivative of each vector
(done as first backward difference)
Take the backward differences at each time point. For each index > 0,
value[index] = value[index] - value[index-1], and value[0] = 0.
This option is identical to -backward_diff.
See also -backward_diff, -forward_diff, -set_nruns, -set_run_lengths.
-extreme_mask MIN MAX : make mask of extreme values
Convert to a 0/1 mask, where 1 means the given value is extreme
(outside the (MIN, MAX) range), and 0 means otherwise. This is the
opposite of -moderate_mask (not exactly, both are inclusive).
Note: values = MIN or MAX will be in both extreme and moderate masks.
Note: this was originally described incorrectly in the help.
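For instance (a sketch; file names are hypothetical):
  1d_tool.py -infile enorm.1D -extreme_mask -1.2 1.2 \
             -write extreme_mask.1D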
-forward_diff : take first forward difference of each vector
Take the first forward differences at each time point. For index<last,
value[index] = value[index+1] - value[index], and value[last] = 0.
The difference between -forward_diff and -backward_diff is a time shift
by one index.
See also -backward_diff, -derivative, -set_nruns, -set_run_lengths.
-index_to_run_tr INDEX : convert global INDEX to run and TR indices
Given a list of run lengths, convert INDEX to a run and TR index pair.
This option requires -set_run_lengths, or an X-matrix from which
run lengths can be determined.
See also -set_run_lengths and example 26.
-moderate_mask MIN MAX : make mask of moderate values
Convert to a 0/1 mask, where 1 means the given value is moderate
(within [MIN, MAX]), and 0 means otherwise. This is useful for
censoring motion (in the -censor case, not -CENSORTR), where the
-censor file should be a time series of TRs to apply.
See also -extreme_mask.
-label_prefix_drop prefix1 prefix2 ... : remove labels matching prefix list
e.g. to remove motion shift (starting with 'd') and bandpass labels:
-label_prefix_drop d bandpass
This is a type of column selection.
Use this option to remove columns from a matrix that have labels
starting with any from the given prefix list.
This option can be applied along with -label_prefix_keep.
See also -label_prefix_keep and example 2b.
-label_prefix_keep prefix1 prefix2 ... : keep labels matching prefix list
e.g. to keep only motion shift (starting with 'd') and bandpass labels:
-label_prefix_keep d bandpass
This is a type of column selection.
Use this option to keep columns from a matrix that have labels starting
with any from the given prefix list.
This option can be applied along with -label_prefix_drop.
See also -label_prefix_drop and example 2b.
"Looks like" options:
These are terminal options that check whether the input file seems to
be of type 1D, local stim_times or global stim_times formats. The only
associated options are currently -infile, -set_run_lengths, -set_tr and
-verb.
They are terminal in that no other 1D-style actions are performed.
See 'timing_tool.py -help' for details on stim_times operations.
-looks_like_1D : is the file in 1D format
Does the input data file seem to be in 1D format?
- must be rectangular (same number of columns per row)
- duration must match number of rows (if run lengths are given)
-looks_like_AM : does the file have modulators?
Does the file seem to be in local or global times format, and
do the times have modulators?
- amplitude modulators should use '*' format (e.g. 127.3*5.1)
- duration modulators should use trailing ':' format (12*5.1:3.4)
- number of amplitude modulators should be constant
-looks_like_local_times : is the file in local stim_times format
Does the input data file seem to be in the -stim_times format used by
3dDeconvolve (and timing_tool.py)? More specifically, is it the local
format, with one scanning run per row?
- number of rows must match number of runs
- times cannot be negative
- times must be unique per run (per row)
- times cannot exceed the current run time
-looks_like_global_times : is the file in global stim_times format
Does the input data file seem to be in the -stim_times format used by
3dDeconvolve (and timing_tool.py)? More specifically, is it the global
format, either as one long row or one long line?
- must be one dimensional (either a single row or column)
- times cannot be negative
- times must be unique
- times cannot exceed total duration of all runs
-looks_like_test_all : run all -looks_like tests
Applies all "looks like" test options: -looks_like_1D, -looks_like_AM,
-looks_like_local_times and -looks_like_global_times.
-overwrite : allow overwriting of any output dataset
-pad_into_many_runs RUN NRUNS : pad as current run of num_runs
e.g. -pad_into_many_runs 2 7
This option is used to create a longer time series dataset where the
input is considered to be one particular run out of many. The output
is padded with zeros for all run TRs before and after this run.
Given the example, there would be 1 run of zeros, then the input would
be treated as run 2, and there would be 5 more runs of zeros.
-quick_censor_count LIMIT : output # TRs that would be censored
e.g. -quick_censor_count 0.3
This is akin to -censor_motion, but it only outputs the number of TRs
that would be censored. It does not actually create a censor file.
This option essentially replaces these:
-derivative -demean -collapse_cols euclidean_norm
-censor_prev_TR -verb 0 -show_censor_count
-moderate_mask 0 LIMIT
-rank : convert data to rank order
0-based rank order, from smallest to largest values
The default rank STYLE is 'dense'.
See also -rank_style.
-rank_style STYLE : convert to rank using the given style
The STYLE refers to what to do in the case of repeated values.
Assuming inputs 4 5 5 9...
dense       - repeats get same rank, no gaps in rank
            - same as "3dmerge -1rank"
            - result: 0 1 1 2
competition - repeats get same rank, leading to gaps in rank
            - result: 0 1 1 3
              (rank '2' is counted, though no such rank occurs)
Option '-rank' uses style 'dense'.
See also -rank.
-reverse_rank : convert data to reverse rank order
(large values come first)
-reverse : reverse data over time
-randomize_trs : randomize the data over time
-seed SEED : set random number seed (integer)
-select_groups g0 g1 ... : select columns by group numbers
e.g. -select_groups 0
e.g. -select_groups POS 0
An X-matrix dataset (e.g. X.xmat.1D) often has columns partitioned by
groups, such as:
-1 : polort regressors
0 : motion regressors and other (non-polort) baseline terms
N>0: regressors of interest
This option can be used to select columns by integer groups, with
special cases of POS (regs of interest), NEG (probably polort).
Note that NONNEG is unneeded as it is the pair POS 0.
See also -show_group_labels.
-select_cols SELECTOR : apply AFNI column selectors, [] is optional
e.g. '[5,0,7..21(2)]'
-select_rows SELECTOR : apply AFNI row selectors, {} is optional
e.g. '{5,0,7..21(2)}'
-select_runs r1 r2 ... : extract the given runs from the dataset
(these are 1-based run indices)
e.g. 2
e.g. 2 3 1 1 1 1 1 4
-set_nruns NRUNS : treat the input data as if it has nruns
(e.g. applies to -derivative and -demean)
See examples 7a, 10a and b, and 14.
-set_run_lengths N1 N2 ... : treat as if data has run lengths N1, N2, etc.
(applies to -derivative, for example)
Notes: o option -set_nruns is not allowed with -set_run_lengths
o the sum of run lengths must equal NT
See examples 7b, 10c and 14.
-set_tr TR : set the TR (in seconds) for the data
-show_argmin : display the index of min arg (of first column)
-show_censor_count : display the total number of censored TRs
Note : if input is a valid xmat.1D dataset, then the
count will come from the header. Otherwise
the input is assumed to be a binary censor
file, and zeros are simply counted.
-show_cormat : display correlation matrix
-show_cormat_warnings : display correlation matrix warnings
(this does not include baseline terms)
-show_cormat_warnings_full : display correlation matrix warnings
(this DOES include baseline terms)
-show_distmat : display distance matrix
Expect input as one coordinate vector per row.
Output NROWxNROW matrix of vector distances.
See Example 15c.
-show_df_info : display info about degrees of freedom
(found in xmat.1D formatted files)
-show_df_protect yes/no : protection flag (def=yes)
-show_gcor : display GCOR: the average correlation
-show_gcor_all : display many ways of computing (a) GCOR
-show_gcor_doc : display descriptions of those ways
-show_group_labels : display group and label, per column
-show_indices_baseline : display column indices for baseline
-show_indices_interest : display column indices for regs of interest
-show_indices_motion : display column indices for motion regressors
-show_indices_zero : display column indices for all-zero columns
-show_label_ordering : display whether labels are slice-major ordered
-show_labels : display the labels
-show_max_displace : display max displacement (from motion params)
- the maximum pairwise distance (enorm)
-show_mmms : display min, mean, max, stdev of columns
-show_num_runs : display number of runs found
-show_regs PROPERTY : display regressors with the given property
Show column indices or labels for those columns where PROPERTY holds:
allzero : the entire column is exactly 0
set : (NOT allzero) the column has some set (non-zero) value
How the columns are displayed is controlled by -show_regs_style
(label, encoded, comma, space) and -verb (0, 1 or 2).
With -verb > 0, the number of matching columns is also output.
See also -show_regs_style, -verb.
See example 30.
-show_regs_style STYLE : use STYLE for how to -show_regs
This only applies when using -show_regs, and specifies the style for
how to show matching columns.
space : show indices as a space-separated list
comma : show indices as a comma-separated list
encoded : succinct selector list (like sub-brick selectors)
label : if xmat.1D has them, show space separated labels
-show_rows_cols : display the number of rows and columns
-show_slice_timing_pattern : display the to3d tpattern for the data
e.g. -show_slice_timing_pattern -infile slice_times.txt
The output will be 2 values, the multiband level (the number of
sets of unique slice times) and the tpattern for those slice times.
The tpattern will be one of those from 'to3d -help', such as alt+z.
This operation is the reverse of -slice_pattern_to_times.
See also -slice_pattern_to_times.
See example 31 and example 32
-show_tr_run_counts STYLE : display TR counts per run, according to STYLE
STYLE can be one of:
trs : TR counts
trs_cen : censored TR counts
trs_no_cen : TR counts, as if no censoring
frac_cen : fractions of TRs censored
See example 24.
-show_trs_censored STYLE : display a list of TRs which were censored
-show_trs_uncensored STYLE : display a list of TRs which were not censored
STYLE can be one of:
comma : comma delimited
space : space delimited
encoded : succinct selector list
verbose : chatty
See example 20.
-show_trs_run RUN : restrict -show_trs_[un]censored to the given
1-based run
-show_trs_to_zero : display number of TRs before final zero value
(e.g. length of response curve)
-show_xmat_stype_cols T1 ... : display columns of the given class types
Display the columns (labels, indices or encoded) of the given stimulus
types. These types refer specifically to those with basis functions,
and correspond with 3dDeconvolve -stim_* options as follows:
times : -stim_times
AM : -stim_times_AM1 or -stim_times_AM2
AM1 : -stim_times_AM1
AM2 : -stim_times_AM2
IM : -stim_times_IM
Multiple types can be provided.
See example 5f.
See also -show_regs_style.
-show_xmat_stim_info CLASS : display information for the given stim class
(CLASS can be a specific one, or 'ALL')
Display information for a specific (3dDeconvolve -stim_*) stim class.
This includes the class Name, the 3dDeconvolve Option, the basis
Function, and the relevant Columns of the X-matrix.
See example 5e.
See also -show_regs_style.
-slice_order_to_times : convert a list of slice indices to times
Programs like to3d, 3drefit, 3dTcat and 3dTshift expect slice timing
to be a list of slice times over the sequential slices. But in some
cases, people have an ordered list of slices. So the sorting needs
to change.
input: a file with TIME-SORTED slice indices
output: a SLICE-SORTED list of slice times
* Note, this is a list of slice indices over time (one TR interval).
Across one TR, this lists each slice index as acquired.
It IS a per-slice-time index of acquired slices.
It IS **NOT** a per-slice index of its acquisition position.
(this latter case could be output by -slice_pattern_to_times)
If TR=2 and the slice order is alt+z: 0 2 4 6 8 1 3 5 7 9
Then the slices/times ordered by time (as input) are:
times: 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8
input-> slices: 0 2 4 6 8 1 3 5 7 9
(slices across time)
And the slices/times ordered instead by slice index are:
slices: 0 1 2 3 4 5 6 7 8 9
output-> times: 0.0 1.0 0.2 1.2 0.4 1.4 0.6 1.6 0.8 1.8
(timing across slices)
It is this final list of times that is output.
For kicks, note that one can convert from per-time slice indices to
per-slice acquisition indices by setting TR=nslices.
See example 28.
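For instance (a sketch, using the 10-slice alt+z order from above,
with TR set to nslices=10):
  echo "0 2 4 6 8 1 3 5 7 9" | \
    1d_tool.py -set_tr 10 -slice_order_to_times -infile - -write -
should output the acquisition position of each slice:
  0 5 1 6 2 7 3 8 4 9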
-slice_pattern_to_times PAT NS MB : output slice timing, given:
slice pattern, nslices, MBlevel
(TR is optionally set via -set_tr)
e.g. -slice_pattern_to_times alt+z 30 1
-set_tr 2.0
Input description:
PAT : a valid to3d-style slice timing pattern, one of:
zero simult
seq+z seqplus seq-z seqminus
alt+z altplus alt+z2
alt-z altminus alt-z2
NS : the total number of slices (MB * nunique_times)
MB : the multiband level
For a volume with NS slices and multiband MB and a
slice timing pattern PAT with NST unique slice times,
we must have: NS = MB * NST
TR : (optional) the volume repetition time
TR is specified via -set_tr.
Output the appropriate slice times for the timing pattern, also given
the number of slices, multiband level and TR. If TR is not specified,
the output will be as if TR=NST (the number of unique slice times),
which means the output is the acquisition order index of each slice.
This operation is the reverse of -show_slice_timing_pattern.
See also -show_slice_timing_pattern.
See example 32.
-sort : sort data over time (smallest to largest)
- sorts EVERY vector
- consider the -reverse option
-split_into_pad_runs PREFIX : split input into one padded file per run
e.g. -split_into_pad_runs motion.pad
This option is used for breaking a set of regressors up by run. The
output would be one file per run, where each file is the same as the
input for the run it corresponds to, and is padded with 0 across all
other runs.
Assuming the 300 row input dataset spans 3 100-TR runs, then there
would be 3 output datasets created, each still 300 rows:
motion.pad.r01.1D : 100 rows as input, 200 rows of 0
motion.pad.r02.1D : 100 rows of 0, 100 rows as input, 100 of 0
motion.pad.r03.1D : 200 rows of 0, 100 rows as input
This option requires either -set_nruns or -set_run_lengths.
See example 14.
-transpose : transpose the input matrix (rows for columns)
-transpose_write : transpose the output matrix before writing
-volreg2allineate : convert 3dvolreg parameters to 3dAllineate
This option should be used when the -infile file is a 6 column file
of motion parameters (roll, pitch, yaw, dS, dL, dP). The output would
be converted to a 12 parameter file, suitable for input to 3dAllineate
via the -1Dparam_apply option.
volreg: roll, pitch, yaw, dS, dL, dP
3dAllineate: -dL, -dP, -dS, roll, pitch, yaw, 0,0,0, 0,0,0
These parameters would be to correct the motion, akin to what 3dvolreg
did (i.e. they are the negative estimates of how the subject moved).
See example 23.
-write FILE : write the current 1D data to FILE
-write_sep SEP : use SEP for column separators
-write_style STYLE : write using one of the given styles
basic: the default, don't work too hard
ljust: left-justified columns of the same width
rjust: right-justified columns of the same width
tsv: tab-separated (use <tab> as in -write_sep '\t')
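For instance (a sketch, with hypothetical file names), to write a
tab-separated copy of a dataset:
  1d_tool.py -infile data.1D -write_style tsv \
             -write_sep '\t' -write data.tsv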
-weight_vec v1 v2 ... : supply weighting vector
e.g. -weight_vec 0.9 0.9 0.9 1 1 1
This vector currently works only with the weighted_enorm method for
the -collapse_cols option. If supplied (as with the example), it will
weight the angles at 0.9 times the weights of the shifts in the motion
parameters output by 3dvolreg.
See also -collapse_cols.
-write_censor FILE : write as boolean censor.1D
e.g. -write_censor subjA_censor.1D
This file can be given to 3dDeconvolve to censor TRs with excessive
motion, applied with the -censor option.
e.g. 3dDeconvolve -censor subjA_censor.1D
This file works well for plotting against the data, where the 0 entries
are removed from the regression of 3dDeconvolve. Alternatively, the
file created with -write_CENSORTR is probably more human readable.
-write_CENSORTR FILE : write censor times as CENSORTR string
e.g. -write_CENSORTR subjA_CENSORTR.txt
This file can be given to 3dDeconvolve to censor TRs with excessive
motion, applied with the -CENSORTR option.
e.g. 3dDeconvolve -CENSORTR `cat subjA_CENSORTR.txt`
Which might expand to:
3dDeconvolve -CENSORTR '1:16..19,44 3:28 4:19,37..39'
Note that the -CENSORTR option requires the text on the command line.
This file is in the easily readable format applied with -CENSORTR.
It has the same effect on 3dDeconvolve as the sister file from
-write_censor, above.
-verb LEVEL : set the verbosity level
-----------------------------------------------------------------------------
R Reynolds March 2009
=============================================================================
AFNI program: 1dtranspose
Usage: 1dtranspose infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, but transposed.
You can use a column subvector selector list on infile, as in
1dtranspose 'fred.1D[0,3,7]' ethel.1D
* This program may produce files with lines longer than a
text editor can handle.
* If 'outfile' is '-' (or missing entirely), output goes to stdout.
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dTsort
Usage: 1dTsort [options] file.1D
Sorts each column of the input 1D file and writes result to stdout.
Options
-------
-inc = sort into increasing order [default]
-dec = sort into decreasing order
-flip = transpose the file before OUTPUT
* the INPUT can be transposed using file.1D\'
* thus, to sort each ROW, do something like
1dTsort -flip file.1D\' > sfile.1D
-col j = sort only on column #j (counting starts at 0),
and carry the rest of the columns with it.
-imode = typecast all values to integers, return the mode of
the input, then exit. No sorting results are returned.
N.B.: Data will be read from standard input if the filename IS stdin,
and will also be row/column transposed if the filename is stdin\'
For example:
1deval -num 100 -expr 'uran(1)' | 1dTsort stdin | 1dplot stdin
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 1dUpsample
Program 1dUpsample:
Upsamples a 1D time series (along the column direction)
to a finer time grid.
Usage: 1dUpsample [options] n fred.1D > ethel.1D
Where 'n' is the upsample factor (integer from 2..32)
NOTES:
------
* Interpolation is done with 7th order polynomials.
(Why 7? It's a nice number, and the code already existed.)
* The only option is '-1' or '-one', to use 1st order
polynomials instead (i.e., linear interpolation).
* Output is written to stdout.
* If you want to interpolate along the row direction,
transpose before input, then transpose the output.
* Example:
1dUpsample 5 '1D: 4 5 4 3 4' | 1dplot -stdin -dx 0.2
* If the input has M time points, the output will
have n*M time points. The last n-1 of them
will be past the end of the original time series.
* This program is a quick hack for Gang Chen.
Where are my Twizzlers?
AFNI program: 24swap
Usage: 24swap [options] file ...
Swaps byte pairs and/or quadruples on the files listed.
Options:
-q Operate quietly
-pattern pat 'pat' determines the pattern of 2 and 4
byte swaps. Each element is of the form
2xN or 4xN, where N is the number of
bytes to swap as pairs (for 2x) or
as quadruples (for 4x). For 2x, N must
be divisible by 2; for 4x, N must be
divisible by 4. The whole pattern is
made up of elements separated by colons,
as in '-pattern 4x39984:2x0'. If bytes
are left over after the pattern is used
up, the pattern starts over. However,
if a byte count N is zero, as in the
example below, then it means to continue
until the end of file.
N.B.: You can also use 1xN as a pattern, indicating to
skip N bytes without any swapping.
N.B.: A default pattern can be stored in the Unix
environment variable AFNI_24SWAP_PATTERN.
If no -pattern option is given, the default
will be used. If there is no default, then
nothing will be done.
N.B.: If there are bytes 'left over' at the end of the file,
they are written out unswapped. This will happen
if the file is an odd number of bytes long.
N.B.: If you just want to swap pairs, see program 2swap.
For quadruples only, see program 4swap.
N.B.: This program will overwrite the input file!
You might want to test it first.
Example: 24swap -pat 4x8:2x0 fred
If fred contains 'abcdabcdabcdabcdabcd' on input,
then fred has 'dcbadcbabadcbadcbadc' on output.
AFNI program: 2dcat
Usage: 2dcat [options] fname1 fname2 etc.
Puts a set of images into an image matrix (IM)
montage of NX by NY images.
The input is a set of N images (N >= 1).
If need be, the default is to reuse images until the desired
NX by NY size is achieved.
See options -zero_wrap and -image_wrap for more detail.
OPTIONS:
++ Options for editing, coloring input images:
-scale_image SCALE_IMG: Multiply each image IM(i,j) in output
image matrix IM by the color or intensity
of the pixel (i,j) in SCALE_IMG.
-scale_pixels SCALE_PIX: Multiply each pixel (i,j) in output image
by the color or intensity
of the pixel (i,j) in SCALE_IMG.
SCALE_IMG is automatically resized to the
resolution of the output image.
-scale_intensity: Instead of multiplying by the color of
pixel (i,j), use its intensity
(average color)
-gscale FAC: Apply FAC in addition to scaling of -scale_* options
-rgb_out: Force output to be in rgb, even if input is bytes.
This option is turned on automatically in certain cases.
-res_in RX RY: Set resolution of all input images to RX by RY pixels.
Default is to make all input have the same
resolution as the first image.
-respad_in RPX RPY: Like -res_in, but resample to the max while respecting
the aspect ratio, and then pad to achieve desired
pixel count.
-pad_val VAL: Set the padding value, should it be needed by -respad_in
to VAL. VAL is typecast to byte, default is 0, max is 255.
-crop L R T B: Crop images by L (Left), R (Right), T (Top), B (Bottom)
pixels. Cropping is performed after any resolution change.
-autocrop_ctol CTOL: A line is eliminated if none of its R G B values
differ by more than CTOL% from those of the corner
pixel.
-autocrop_atol ATOL: A line is eliminated if none of its R G B values
differ by more than ATOL% from those of line
average.
-autocrop: This option is the same as using both of -autocrop_atol 20
and -autocrop_ctol 20
NOTE: Do not mix -autocrop* options with -crop
Cropping is determined from the 1st input image and applied
to all remaining ones.
++ Options for output:
-zero_wrap: If number of images is not enough to fill matrix
solid black images are used.
-white_wrap: If number of images is not enough to fill matrix
solid white images are used.
-gray_wrap GRAY: If number of images is not enough to fill matrix
solid gray images are used. GRAY must be between 0 and 1.0
-image_wrap: If number of images is not enough to fill matrix
images on command line are reused (default)
-rand_wrap: When reusing images to fill matrix, randomize the order
in refill section only.
-prefix ppp = Prefix the output files with string 'ppp'
Note: If the prefix ends with .1D, then a 1D file containing
the average of RGB values is written. You can view the
output with 1dgrayplot.
-matrix NX NY: Specify number of images in each row and column
of IM at the same time.
-nx NX: Number of images in each row (3 for example below)
-ny NY: Number of images in each column (4 for example below)
Example: If 12 images appearing on the command line
are to be assembled into a 3x4 IM matrix they
would appear in this order:
0 1 2
3 4 5
6 7 8
9 10 11
NOTE: The program will try to guess if neither NX nor NY
are specified.
-matrix_from_scale: Set NX and NY to be the same as the
SCALE_IMG's dimensions. (needs -scale_image)
-gap G: Put a line G pixels wide between images.
-gap_col R G B: Set color of line to R G B values.
Values range between 0 and 255.
Example 0 (assuming afni is in ~/abin directory):
Resizing an image:
2dcat -prefix big -res_in 1024 1024 \
~/abin/funstuff/face_zzzsunbrain.jpg
2dcat -prefix small -res_in 64 64 \
~/abin/funstuff/face_zzzsunbrain.jpg
aiv small.ppm big.ppm
Example 1:
Stitching together images:
(Can be used to make very high resolution SUMA images.
Read about 'Ctrl+r' in SUMA's GUI help.)
2dcat -prefix cat -matrix 14 12 \
~/abin/funstuff/face_*.jpg
aiv cat.ppm
Example 2:
Stitching together 3 images getting rid of annoying white boundary:
2dcat -prefix surfview_pry3b.jpg -ny 1 -autocrop surfview.000[789].jpg
Example 20 (assuming afni is in ~/abin directory):
2dcat -prefix bigcat.jpg -scale_image ~/abin/afnigui_logo.jpg \
-matrix_from_scale -rand_wrap -rgb_out -respad_in 128 128 \
-pad_val 128 ~/abin/funstuff/face_*.jpg
aiv bigcat.jpg
Crop/Zoom in to see what was done. In practice, you want to use
a faster image viewer to examine the result. Zooming on such
a large image is not fast in aiv.
Be careful with this toy. Images get real big, real quick.
You can look at the output image file with
afni -im ppp.ppm [then open the Sagittal image window]
Deprecation warning: The imcat program will be replaced by 2dcat in the future.
AFNI program: 2dImReg
++ 2dImReg: AFNI version=AFNI_24.3.00 (Oct 1 2024) [64-bit]
This program performs 2d image registration. Image alignment is
performed on a slice-by-slice basis for the input 3d+time dataset,
relative to a user specified base image.
** Note that the script @2dwarper.Allin can do similar things, **
** with nonlinear (polynomial) warping on a slice-wise basis. **
Usage:
2dImReg
-input fname Filename of input 3d+time dataset to process
-basefile fname Filename of 3d+time dataset for base image
(default = current input dataset)
-base num Time index for base image (0 <= num)
(default: num = 3)
-nofine Deactivate fine fit phase of image registration
(default: fine fit is active)
-fine blur dxy dphi Set fine fit parameters
where:
blur = FWHM of blurring prior to registration (in pixels)
(default: blur = 1.0)
dxy = Convergence tolerance for translations (in pixels)
(default: dxy = 0.07)
dphi = Convergence tolerance for rotations (in degrees)
(default: dphi = 0.21)
-prefix pname Prefix name for output 3d+time dataset
-dprefix dname Write files 'dname'.dx, 'dname'.dy, 'dname'.psi
containing the registration parameters for each
slice in chronological order.
File formats:
'dname'.dx: time(sec) dx(pixels)
'dname'.dy: time(sec) dy(pixels)
'dname'.psi: time(sec) psi(degrees)
-dmm Change dx and dy output format from pixels to mm
-rprefix rname Write files 'rname'.oldrms and 'rname'.newrms
containing the volume RMS error for the original
and the registered datasets, respectively.
File formats:
'rname'.oldrms: volume(number) rms_error
'rname'.newrms: volume(number) rms_error
-debug Lots of additional output to screen
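Example (a sketch, with hypothetical dataset and prefix names):
  2dImReg -input epi_run1+orig \
          -base 5 \
          -prefix epi_run1_reg \
          -dprefix epi_run1_reg
This registers each slice to the image at time index 5, writing the
registered dataset plus the parameter files epi_run1_reg.dx,
epi_run1_reg.dy and epi_run1_reg.psi.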
AFNI program: @2dwarper.Allin
script to do 2D registration on each slice of a 3D+time
dataset, and glue the results back together at the end
This script is structured to operate only on an AFNI
+orig.HEAD dataset. The one input on the command line
is the prefix for the dataset.
Modified 07 Dec 2010 by RWC to use 3dAllineate instead
of 3dWarpDrive, with nonlinear slice-wise warping.
Set prefix of input 3D+time dataset here.
In this example with 'wilma' as the command line
argument, the output dataset will be 'wilma_reg+orig'.
The output registration parameters files will
be 'wilma_param_ssss.1D', where 'ssss' is the slice number.
usage: @2dwarper.Allin [options] INPUT_PREFIX
example: @2dwarper.Allin epi_run1
example: @2dwarper.Allin -mask my_mask epi_run1
options:
-mask MSET : provide the prefix of an existing mask dataset
-prefix PREFIX : provide the prefix for output datasets
AFNI program: 2perm
Usage: 2perm [-prefix PPP] [-comma] bot top [n1 n2]
This program creates 2 random non-overlapping subsets of the set of
integers from 'bot' to 'top' (inclusive). The first subset is of
length 'n1' and the second of length 'n2'. If those values are not
given, then equal size subsets of length (top-bot+1)/2 are used.
This program is intended for use in various simulation and/or
randomization scripts, or for amusement/hilarity.
OPTIONS:
========
-prefix PPP == Two output files are created, with names PPP_A and PPP_B,
where 'PPP' is the given prefix. If no '-prefix' option
is given, then the string 'AFNIroolz' will be used.
++ Each file is a single column of numbers.
++ Note that the filenames do NOT end in '.1D'.
-comma == Write each file as a single row of comma-separated numbers.
EXAMPLE:
========
This illustration shows the purpose of 2perm -- for use in permutation
and/or randomization tests of statistical significance and power.
Given a dataset with 100 sub-bricks (indexed 0..99), split it into two
random halves and do a 2-sample t-test between them.
2perm -prefix Q50 0 99
3dttest++ -setA dataset+orig"[1dcat Q50_A]" \
-setB dataset+orig"[1dcat Q50_B]" \
-no1sam -prefix Q50
\rm -f Q50_?
Alternatively:
2perm -prefix Q50 -comma 0 99
3dttest++ -setA dataset+orig"[`cat Q50_A`]" \
-setB dataset+orig"[`cat Q50_B`]" \
-no1sam -prefix Q50
\rm -f Q50_?
Note the combined use of the double quote " and backward quote `
shell operators in this second approach.
AUTHOR: (no one wants to admit they wrote this trivial code).
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 2swap
Usage: 2swap [-q] file ...
-- Swaps byte pairs on the files listed.
The -q option means to work quietly.
AFNI program: 3dABoverlap
Usage: 3dABoverlap [options] A B
Output (to screen) is a count of various things about how
the automasks of datasets A and B overlap or don't overlap.
* Dataset B will be resampled to match dataset A, if necessary,
which will be slow if A is high resolution. In such a case,
you should only use one sub-brick from dataset B.
++ The resampling of B is done before the automask is generated.
* The values output are labeled thusly:
#A = number of voxels in the A mask
#B = number of voxels in the B mask
#(A uni B) = number of voxels in either or both masks (set union)
#(A int B) = number of voxels present in BOTH masks (set intersection)
#(A \ B) = number of voxels in A mask that aren't in B mask
#(B \ A) = number of voxels in B mask that aren't in A mask
%(A \ B) = percentage of voxels from A mask that aren't in B mask
%(B \ A) = percentage of voxels from B mask that aren't in A mask
Rx(B/A) = radius of gyration of B mask / A mask, in x direction
Ry(B/A) = radius of gyration of B mask / A mask, in y direction
Rz(B/A) = radius of gyration of B mask / A mask, in z direction
* If B is an EPI dataset sub-brick, and A is a skull stripped anatomical
dataset, then %(B \ A) might be useful for assessing if the EPI
brick B is grossly misaligned with respect to the anatomical brick A.
* The radius of gyration ratios might be useful for determining if one
dataset is grossly larger or smaller than the other.
OPTIONS
-------
-no_automask = consider input datasets as masks
(automask does not work on mask datasets)
-quiet = be as quiet as possible (without being entirely mute)
-verb = print out some progress reports (to stderr)
NOTES
-----
* If an input dataset is comprised of bytes and contains only one
sub-brick, then this program assumes it is already an automask-
generated dataset and the automask operation will be skipped.
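EXAMPLE (a sketch, with hypothetical dataset names):
  3dABoverlap -verb anat_strip+orig epi+orig'[0]'
Only one sub-brick of the EPI (dataset B) is used, per the note
above, since B is resampled to match the higher resolution A.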
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAFNIto3D
*+ WARNING: This program (3dAFNIto3D) is old, not maintained, and probably useless!
Usage: 3dAFNIto3D [options] dataset
Reads in an AFNI dataset, and writes it out as a 3D file.
OPTIONS:
-prefix ppp = Write result into file ppp.3D;
default prefix is same as AFNI dataset's.
-bin = Write data in binary format, not text.
-txt = Write data in text format, not binary.
NOTES:
* At present, all bricks are written out in float format.
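EXAMPLE (a sketch, with a hypothetical dataset name):
  3dAFNIto3D -prefix fred -bin anat+orig
would write anat+orig into the binary-format file fred.3D.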
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAFNItoANALYZE
*+ WARNING: This program (3dAFNItoANALYZE) is old, not maintained, and probably useless!
Usage: 3dAFNItoANALYZE [-4D] [-orient code] aname dset
Writes AFNI dataset 'dset' to 1 or more ANALYZE 7.5 format
.hdr/.img file pairs (one pair for each sub-brick in the
AFNI dataset). The ANALYZE files will be named
aname_0000.hdr aname_0000.img for sub-brick #0
aname_0001.hdr aname_0001.img for sub-brick #1
and so forth. Each file pair will contain a single 3D array.
* If the AFNI dataset does not include sub-brick scale
factors, then the ANALYZE files will be written in the
datum type of the AFNI dataset.
* If the AFNI dataset does have sub-brick scale factors,
then each sub-brick will be scaled to floating format
and the ANALYZE files will be written as floats.
* The .hdr and .img files are written in the native byte
order of the computer on which this program is executed.
Options
-------
-4D [30 Sep 2002]:
If you use this option, then all the data will be written to
one big ANALYZE file pair named aname.hdr/aname.img, rather
than a series of 3D files. Even if you only have 1 sub-brick,
you may prefer this option, since the filenames won't have
the '_0000' appended to 'aname'.
-orient code [19 Mar 2003]:
This option lets you flip the dataset to a different orientation
when it is written to the ANALYZE files. The orientation code is
formed as follows:
The code must be 3 letters, one each from the
pairs {R,L} {A,P} {I,S}. The first letter gives
the orientation of the x-axis, the second the
orientation of the y-axis, the third the z-axis:
R = Right-to-Left L = Left-to-Right
A = Anterior-to-Posterior P = Posterior-to-Anterior
I = Inferior-to-Superior S = Superior-to-Inferior
For example, 'LPI' means
-x = Left +x = Right
-y = Posterior +y = Anterior
-z = Inferior +z = Superior
* For display in SPM, 'LPI' or 'RPI' seem to work OK.
Be careful with this: you don't want to confuse L and R
in the SPM display!
* If you DON'T use this option, the dataset will be written
out in the orientation in which it is stored in AFNI
(e.g., the output of '3dinfo dset' will tell you this.)
* The dataset orientation is NOT stored in the .hdr file.
* AFNI and ANALYZE data are stored in files with the x-axis
varying most rapidly and the z-axis most slowly.
* Note that if you read an ANALYZE dataset into AFNI for
display, AFNI assumes the LPI orientation, unless you
set environment variable AFNI_ANALYZE_ORIENT.
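* Example (hypothetical filenames): write all sub-bricks into one
  4D ANALYZE pair fred.hdr/fred.img, flipped to LPI orientation:
    3dAFNItoANALYZE -4D -orient LPI fred anat+orig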
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAFNItoNIFTI
Usage: 3dAFNItoNIFTI [options] dataset
Reads an AFNI dataset, writes it out as a NIfTI-1.1 file.
NOTES:
* The nifti_tool program can be used to manipulate
the contents of a NIfTI-1.1 file.
* The input dataset can actually be in any input format
that AFNI can read directly (e.g., MINC-1).
* There is no 3dNIFTItoAFNI program, since AFNI programs
can directly read .nii files. If you wish to make such
a conversion anyway, one way to do it is:
3dcalc -a ppp.nii -prefix ppp -expr 'a'
OPTIONS:
-prefix ppp = Write the NIfTI-1.1 file as 'ppp.nii'.
Default: the dataset's prefix is used.
* You can use 'ppp.hdr' to output a 2-file
NIfTI-1.1 file pair 'ppp.hdr' & 'ppp.img'.
* If you want a compressed file, try
using a prefix like 'ppp.nii.gz'.
* Setting the Unix environment variable
AFNI_AUTOGZIP to YES will result in
all output .nii files being gzip-ed.
-verb = Be verbose = print progress messages.
Repeating this increases the verbosity
(maximum setting is 3 '-verb' options).
-float = Force the output dataset to be 32-bit
floats. This option should be used when
the input AFNI dataset has different
float scale factors for different sub-bricks,
an option that NIfTI-1.1 does not support.
The following options affect the contents of the AFNI extension
field that is written by default into the NIfTI-1.1 header:
-pure = Do NOT write an AFNI extension field into
the output file. Only use this option if
needed. You can also use the 'nifti_tool'
program to strip extensions from a file.
-denote = When writing the AFNI extension field, remove
text notes that might contain subject
identifying information.
-oldid = Give the new dataset the input dataset's
AFNI ID code.
-newid = Give the new dataset a new AFNI ID code, to
distinguish it from the input dataset.
**** N.B.: -newid is now the default action.
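* EXAMPLE (hypothetical filenames): write a compressed NIfTI-1.1
  file, forcing the data to be stored as floats:
    3dAFNItoNIFTI -float -prefix fred.nii.gz func+tlrc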
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAFNItoNIML
Usage: 3dAFNItoNIML [options] dset
Dumps AFNI dataset header information to stdout in NIML format.
Mostly for debugging and testing purposes!
OPTIONS:
-data == Also put the data into the output (will be huge).
-ascii == Format in ASCII, not binary (even huger).
-tcp:host:port == Instead of stdout, send the dataset to a socket.
(implies '-data' as well)
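* EXAMPLE (hypothetical filenames): capture a dataset, header plus
  data, as a NIML file via stdout redirection:
    3dAFNItoNIML -data fred+orig > fred.niml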
-- RWCox - Mar 2005
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAFNItoRaw
*+ WARNING: This program (3dAFNItoRaw) is old, not maintained, and probably useless!
Usage: 3dAFNItoRaw [options] dataset
Convert an AFNI brik file with multiple sub-briks to a raw file with
each sub-brik voxel concatenated voxel-wise.
For example, a dataset with 3 sub-briks X,Y,Z with elements x1,x2,x3,...,xn,
y1,y2,y3,...,yn and z1,z2,z3,...,zn will be converted to a raw dataset with
elements x1,y1,z1, x2,y2,z2, x3,y3,z3, ..., xn,yn,zn
The dataset is kept in the original data format (float/short/int)
Options:
-output / -prefix = name of the output file (not an AFNI dataset prefix)
the default output name will be rawxyz.dat
-datum float = force floating point output. Floating point is forced if any
sub-brik scale factors are not equal to 1.
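Example (hypothetical filenames): convert a multi-sub-brik dataset
to a voxel-interleaved raw float file:
  3dAFNItoRaw -output xyz.dat -datum float fred+orig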
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAllineate
Usage: 3dAllineate [options] sourcedataset
--------------------------------------------------------------------------
Program to align one dataset (the 'source') to a 'base'
dataset, using an affine (matrix) transformation of space.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
***** Please check your results visually, or at some point *****
***** in time you will have bad results and not know it :-( *****
***** *****
***** No method for 3D image alignment, however tested it *****
***** was, can be relied upon 100% of the time, and anyone *****
***** who tells you otherwise is a madman or is a liar!!!! *****
***** *****
***** In particular, if you are aligning two datasets with *****
***** significantly different spatial coverage (e.g., *****
***** -source = whole head T1w and -base = MNI template), *****
***** then be careful to check the results. In such a case, *****
***** using '-twobest MAX' should increase the chance of *****
***** getting a good alignment (at the cost of CPU time). *****
***** *****
***** Furthermore, don't EVER think that "I have so much *****
***** data that a few errors will not matter"!!!! *****
--------------------------------------------------------------------------
* Options (lots of them!) are available to control:
++ How the matching between the source and the base is computed
(i.e., the 'cost functional' measuring image mismatch).
++ How the resliced source is interpolated to the base space.
++ The complexity of the spatial transformation ('warp') used.
++ And many many technical options to control the process in detail,
if you know what you are doing (or just like to fool around).
* This program is a generalization of and improvement on the older
software 3dWarpDrive.
* For nonlinear transformations, see program 3dQwarp.
* 3dAllineate can also be used to apply a pre-computed matrix to a dataset
to produce the transformed output. In this mode of operation, it just
skips the alignment process, whose function is to compute the matrix,
and instead it reads the matrix in, computes the output dataset,
writes it out, and stops.
* If you are curious about the stepwise process used, see the section below
titled: SUMMARY of the Default Allineation Process.
=====----------------------------------------------------------------------
NOTES: For most 3D image registration purposes, we now recommend that you
===== use Daniel Glen's script align_epi_anat.py (which, despite its name,
can do many more registration problems than EPI-to-T1-weighted).
-->> In particular, using 3dAllineate with the 'lpc' cost functional
(to align EPI and T1-weighted volumes) requires using a '-weight'
volume to get good results, and the align_epi_anat.py script will
automagically generate such a weight dataset that works well for
EPI-to-structural alignment.
-->> This script can also be used for other alignment purposes, such
as T1-weighted alignment between field strengths using the
'-lpa' cost functional. Investigate align_epi_anat.py to
see if it will do what you need -- you might make your life
a little easier and nicer and happier and more tranquil.
-->> Also, if/when you ask for registration help on the AFNI
message board, we'll probably start by recommending that you
try align_epi_anat.py if you haven't already done so.
-->> For aligning EPI and T1-weighted volumes, we have found that
using a flip angle of 50-60 degrees for the EPI works better than
a flip angle of 90 degrees. The reason is that there is more
internal contrast in the EPI data when the flip angle is smaller,
so the registration has some image structure to work with. With
the 90 degree flip angle, there is so little internal contrast in
the EPI dataset that the alignment process ends up being just
trying to match brain outlines -- which doesn't always give accurate
results: see http://dx.doi.org/10.1016/j.neuroimage.2008.09.037
-->> Although the total MRI signal is reduced at a smaller flip angle,
there is little or no loss in FMRI/BOLD information, since the bulk
of the time series 'noise' is from physiological fluctuation signals,
which are also reduced by the lower flip angle -- for more details,
see http://dx.doi.org/10.1016/j.neuroimage.2010.11.020
---------------------------------------------------------------------------
**** New (Summer 2013) program 3dQwarp is available to do nonlinear ****
*** alignment between a base and source dataset, including the use ***
** of 3dAllineate for the preliminary affine alignment. If you are **
* interested, see the output of '3dQwarp -help' for the details. *
---------------------------------------------------------------------------
COMMAND LINE OPTIONS:
====================
-base bbb = Set the base dataset to be the #0 sub-brick of 'bbb'.
If no -base option is given, then the base volume is
taken to be the #0 sub-brick of the source dataset.
(Base must be stored as floats, shorts, or bytes.)
** -base is not needed if you are just applying a given
transformation to the -source dataset to produce
the output, using -1Dmatrix_apply or -1Dparam_apply
** Unless you use the -master option, the aligned
output dataset will be stored on the same 3D grid
as the -base dataset.
-source ttt = Read the source dataset from 'ttt'. If no -source
*OR* (or -input) option is given, then the source dataset
-input ttt is the last argument on the command line.
(Source must be stored as floats, shorts, or bytes.)
** This is the dataset to be transformed, to match the
-base dataset, or directly with one of the options
-1Dmatrix_apply or -1Dparam_apply
** 3dAllineate can register 2D datasets (single slice),
but both the base and source must be 2D -- you cannot
use this program to register a 2D slice into a 3D volume!
-- However, the 'lpc' and 'lpa' cost functionals do not
work properly with 2D images, as they are designed
around local 3D neighborhoods and that code has not
been patched to work with 2D neighborhoods :(
-- You can input .jpg files as 2D 'datasets', register
them with 3dAllineate, and write the result back out
using a prefix that ends in '.jpg'; HOWEVER, the color
information will not be used in the registration, as
this program was written to deal with monochrome medical
datasets. At the end, if the source was RGB (color), then
the output will also be RGB, and then a color .jpg
can be output.
-- The above remarks also apply to aligning 3D RGB datasets:
it will be done using only the 3D volumes converted to
grayscale, but the final output will be the source
RGB dataset transformed to the (hopefully) aligned grid.
* However, I've never tested aligning 3D color datasets;
you can be the first one ever!
** See the script @2dwarper.Allin for an example of using
3dAllineate to do slice-by-slice nonlinear warping to
align 3D volumes distorted by time-dependent magnetic
field inhomogeneities.
** NOTA BENE: The base and source dataset do NOT have to be defined **
** [that's] on the same 3D grids; the alignment process uses the **
** [Latin ] coordinate systems defined in the dataset headers to **
** [ for ] make the match between spatial locations, rather than **
** [ NOTE ] matching the 2 datasets on a voxel-by-voxel basis **
** [ WELL ] (as 3dvolreg and 3dWarpDrive do). **
** -->> However, this coordinate-based matching requires that **
** image volumes be defined on roughly the same patch of **
** of (x,y,z) space, in order to find a decent starting **
** point for the transformation. You might need to use **
** the script @Align_Centers to do this, if the 3D **
** spaces occupied by the images do not overlap much. **
** -->> Or the '-cmass' option to this program might be **
** sufficient to solve this problem, maybe, with luck. **
** (Another reason why you should use align_epi_anat.py) **
** -->> If the coordinate system in the dataset headers is **
** WRONG, then 3dAllineate will probably not work well! **
** And I say this because we have seen this in several **
** datasets downloaded from online archives. **
-prefix ppp = Output the resulting dataset to file 'ppp'. If this
*OR* option is NOT given, no dataset will be output! The
-out ppp transformation matrix to align the source to the base will
be estimated, but not applied. You can save the matrix
for later use using the '-1Dmatrix_save' option.
*N.B.: By default, the new dataset is computed on the grid of the
base dataset; see the '-master' and/or the '-mast_dxyz'
options to change this grid.
*N.B.: If 'ppp' is 'NULL', then no output dataset will be produced.
This option is for compatibility with 3dvolreg.
-floatize = Write result dataset as floats. Internal calculations
-float are all done on float copies of the input datasets.
[Default=convert output dataset to data format of ]
[ source dataset; if the source dataset was ]
[ shorts with a scale factor, then the new ]
[ dataset will get a scale factor as well; ]
[ if the source dataset was shorts with no ]
[ scale factor, the result will be unscaled.]
-1Dparam_save ff = Save the warp parameters in ASCII (.1D) format into
file 'ff' (1 row per sub-brick in source).
* A historical synonym for this option is '-1Dfile'.
* At the top of the saved 1D file is a #comment line
listing the names of the parameters; those parameters
that are fixed (e.g., via '-parfix') will be marked
by having their symbolic names end in the '$' character.
You can use '1dcat -nonfixed' to remove these columns
from the 1D file if you just want to further process the
varying parameters somehow (e.g., 1dsvd).
* However, the '-1Dparam_apply' option requires the
full list of parameters, including those that were
fixed, in order to work properly!
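*EXAMPLE: A sketch (hypothetical filenames) of saving the warp
parameters during an alignment, then stripping the fixed
columns for further processing:
  3dAllineate -base anat+orig -source epi+orig \
              -prefix epi_al -1Dparam_save epi_param.1D
  1dcat -nonfixed epi_param.1D > epi_varying.1D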
-1Dparam_apply aa = Read warp parameters from file 'aa', apply them to
the source dataset, and produce a new dataset.
(Must also use the '-prefix' option for this to work! )
(In this mode of operation, there is no optimization of)
(the cost functional by changing the warp parameters; )
(previously computed parameters are applied directly. )
*N.B.: If you use -1Dparam_apply, you may also want to use
-master to control the grid on which the new
dataset is written -- the base dataset from the
original 3dAllineate run would be a good possibility.
Otherwise, the new dataset will be written out on the
3D grid coverage of the source dataset, and this
might result in clipping off part of the image.
*N.B.: Each row in the 'aa' file contains the parameters for
transforming one sub-brick in the source dataset.
If there are more sub-bricks in the source dataset
than there are rows in the 'aa' file, then the last
row is used repeatedly.
*N.B.: A trick to use 3dAllineate to resample a dataset to
a finer grid spacing:
3dAllineate -input dataset+orig \
-master template+orig \
-prefix newdataset \
-final wsinc5 \
-1Dparam_apply '1D: 12@0'\'
Here, the identity transformation is specified
by giving all 12 affine parameters as 0 (note
the extra \' at the end of the '1D: 12@0' input!).
** You can also use the word 'IDENTITY' in place of
'1D: 12@0'\' (to indicate the identity transformation).
**N.B.: Some expert options for modifying how the wsinc5
method works are described far below, if you use
'-HELP' instead of '-help'.
****N.B.: The interpolation method used to produce a dataset
is always given via the '-final' option, NOT via
'-interp'. If you forget this and use '-interp'
along with one of the 'apply' options, this program
will chastise you (gently) and change '-final'
to match the '-interp' input.
-1Dmatrix_save ff = Save the transformation matrix for each sub-brick into
file 'ff' (1 row per sub-brick in the source dataset).
If 'ff' does NOT end in '.1D', then the program will
append '.aff12.1D' to 'ff' to make the output filename.
*N.B.: This matrix is the coordinate transformation from base
to source DICOM coordinates. In other terms:
Xin = Xsource = M Xout = M Xbase
or
Xout = Xbase = inv(M) Xin = inv(M) Xsource
where Xin or Xsource is the 4x1 coordinates of a
location in the input volume. Xout is the
coordinate of that same location in the output volume.
Xbase is the coordinate of the corresponding location
in the base dataset. M is the matrix from 'ff' augmented
by a 4th row of [0 0 0 1]; each X above is an augmented
column vector [x,y,z,1]'.
To get the inverse matrix inv(M)
(source to base), use the cat_matvec program, as in
cat_matvec fred.aff12.1D -I
-1Dmatrix_apply aa = Use the matrices in file 'aa' to define the spatial
transformations to be applied. Also see program
cat_matvec for ways to manipulate these matrix files.
*N.B.: You probably want to use either -base or -master
with either *_apply option, so that the coordinate
system that the matrix refers to is correctly loaded.
** You can also use the word 'IDENTITY' in place of a
filename to indicate the identity transformation --
presumably for the purpose of resampling the source
dataset to a new grid.
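*EXAMPLE: A sketch (hypothetical filenames) of applying a saved
matrix to re-create the aligned dataset on the original
base's grid:
  3dAllineate -input epi+orig -1Dmatrix_apply fred.aff12.1D \
              -master anat+orig -final wsinc5 -prefix epi_al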
* The -1Dmatrix_* options can be used to save and reuse the transformation *
* matrices. In combination with the program cat_matvec, which can multiply *
* saved transformation matrices, you can also adjust these matrices to *
* other alignments. These matrices can also be combined with nonlinear *
* warps (from 3dQwarp) using programs 3dNwarpApply or 3dNwarpCat. *
* The script 'align_epi_anat.py' uses 3dAllineate and 3dvolreg to align EPI *
* datasets to T1-weighted anatomical datasets, using saved matrices between *
* the two programs. This script is our currently recommended method for *
* doing such intra-subject alignments. *
-cost ccc = Defines the 'cost' function that defines the matching
between the source and the base; 'ccc' is one of
ls *OR* leastsq = Least Squares [Pearson Correlation]
mi *OR* mutualinfo = Mutual Information [H(b)+H(s)-H(b,s)]
crM *OR* corratio_mul = Correlation Ratio (Symmetrized*)
nmi *OR* norm_mutualinfo = Normalized MI [H(b,s)/(H(b)+H(s))]
hel *OR* hellinger = Hellinger metric
crA *OR* corratio_add = Correlation Ratio (Symmetrized+)
crU *OR* corratio_uns = Correlation Ratio (Unsym)
lpc *OR* localPcorSigned = Local Pearson Correlation Signed
lpa *OR* localPcorAbs = Local Pearson Correlation Abs
lpc+ *OR* localPcor+Others= Local Pearson Signed + Others
lpa+ *OR* localPcorAbs+Others= Local Pearson Abs + Others
You can also specify the cost functional using an option
of the form '-mi' rather than '-cost mi', if you like
to keep things terse and cryptic (as I do).
[Default == '-hel' (for no good reason, but it sounds nice).]
**NB** See more below about lpa and lpc, which are typically
what we would recommend as first-choice cost functions
now:
lpa if you have similar contrast vols to align;
lpc if you have *non*similar contrast vols to align!
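For example, a plausible EPI-to-T1w alignment command (hypothetical
dataset names; align_epi_anat.py remains the recommended wrapper):
  3dAllineate -base anat_ns+orig -source epi+orig \
              -cost lpc+ZZ -source_automask -prefix epi_al \
              -1Dmatrix_save epi_al_mat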
-interp iii = Defines interpolation method to use during matching
process, where 'iii' is one of
NN *OR* nearestneighbour *OR* nearestneighbor
linear *OR* trilinear
cubic *OR* tricubic
quintic *OR* triquintic
Using '-NN' instead of '-interp NN' is allowed (e.g.).
Note that using cubic or quintic interpolation during
the matching process will slow the program down a lot.
Use '-final' to affect the interpolation method used
to produce the output dataset, once the final registration
parameters are determined. [Default method == 'linear'.]
** N.B.: Linear interpolation is used during the coarse
alignment pass; the selection here only affects
the interpolation method used during the second
(fine) alignment pass.
** N.B.: '-interp' does NOT define the final method used
to produce the output dataset as warped from the
input dataset. If you want to do that, use '-final'.
-final iii = Defines the interpolation mode used to create the
output dataset. [Default == 'cubic']
** N.B.: If you are applying a transformation to an
integer-valued dataset (such as an atlas),
then you should use '-final NN' to avoid
interpolation of the integer labels.
** N.B.: For '-final' ONLY, you can use 'wsinc5' to specify
that the final interpolation be done using a
weighted sinc interpolation method. This method
is so SLOW that you aren't allowed to use it for
the registration itself.
++ wsinc5 interpolation is highly accurate and should
reduce the smoothing artifacts from lower
order interpolation methods (which are most
visible if you interpolate an EPI time series
to high resolution and then make an image of
the voxel-wise variance).
++ On my Intel-based Mac, it takes about 2.5 s to do
wsinc5 interpolation, per 1 million voxels output.
For comparison, quintic interpolation takes about
0.3 s per 1 million voxels: 8 times faster than wsinc5.
++ The '5' refers to the width of the sinc interpolation
weights: plus/minus 5 grid points in each direction;
this is a tensor product interpolation, for speed.
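For example (hypothetical filenames), to apply a saved matrix to an
integer-valued atlas without interpolating the labels:
  3dAllineate -input atlas+tlrc -1Dmatrix_apply mat.aff12.1D \
              -master epi+tlrc -final NN -prefix atlas_al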
TECHNICAL OPTIONS (used for fine control of the program):
=================
-nmatch nnn = Use at most 'nnn' scattered points to match the
datasets. The smaller nnn is, the faster the matching
algorithm will run; however, accuracy may be bad if
nnn is too small. If you end the 'nnn' value with the
'%' character, then that percentage of the base's
voxels will be used.
[Default == 47% of voxels in the weight mask]
-nopad = Do not use zero-padding on the base image.
(I cannot think of a good reason to use this option.)
[Default == zero-pad, if needed; -verb shows how much]
-zclip = Replace negative values in the input datasets (source & base)
-noneg with zero. The intent is to clip off a small set of negative
values that may arise when using 3dresample (say) with
cubic interpolation.
-conv mmm = Convergence test is set to 'mmm' millimeters.
This doesn't mean that the results will be accurate
to 'mmm' millimeters! It just means that the program
stops trying to improve the alignment when the optimizer
(NEWUOA) reports it has narrowed the search radius
down to this level.
* To set this value to the smallest allowable, use '-conv 0'.
* A coarser value for 'quick-and-dirty' alignment is 0.05.
-verb = Print out verbose progress reports.
[Using '-VERB' will give even more prolix reports :]
-quiet = Don't print out verbose stuff. (But WHY?)
-usetemp = Write intermediate stuff to disk, to economize on RAM.
Using this will slow the program down, but may make it
possible to register datasets that need lots of space.
**N.B.: Temporary files are written to the directory given
in environment variable TMPDIR, or in /tmp, or in ./
(preference in that order). If the program crashes,
these files are named TIM_somethingrandom, and you
may have to delete them manually. (TIM=Temporary IMage)
**N.B.: If the program fails with a 'malloc failure' type of
message, then try '-usetemp' (malloc=memory allocator).
* If the program just stops with a message 'killed', that
means the operating system (Unix/Linux) stopped the
program, which almost always is due to the system running
low on memory -- so it starts killing programs to save itself.
-nousetemp = Don't use temporary workspace on disk [the default].
-check hhh = After cost functional optimization is done, start at the
final parameters and RE-optimize using the new cost
function 'hhh'. If the results are too different, a
warning message will be printed. However, the final
parameters from the original optimization will be
used to create the output dataset. Using '-check'
increases the CPU time, but can help you feel sure
that the alignment process did not go wild and crazy.
[Default == no check == don't worry, be happy!]
**N.B.: You can put more than one function after '-check', as in
-nmi -check mi hel crU crM
to register with Normalized Mutual Information, and
then check the results against 4 other cost functionals.
**N.B.: On the other hand, some cost functionals give better
results than others for specific problems, and so
a warning that 'mi' was significantly different than
'hel' might not actually mean anything useful (e.g.).
** PARAMETERS THAT AFFECT THE COST OPTIMIZATION STRATEGY **
-onepass = Use only the refining pass -- do not try a coarse
resolution pass first. Useful if you know that only
SMALL amounts of image alignment are needed.
[The default is to use both passes.]
-twopass = Use a two pass alignment strategy, first searching for
a large rotation+shift and then refining the alignment.
[Two passes are used by default for the first sub-brick]
[in the source dataset, and then one pass for the others.]
['-twopass' will do two passes for ALL source sub-bricks.]
*** The first (coarse) pass is relatively slow, as it tries
to search a large volume of parameter (rotations+shifts)
space for initial guesses at the alignment transformation.
* A lot of these initial guesses are kept and checked to
see which ones lead to good starting points for the
further refinement.
* The winners of this competition are then passed to the
'-twobest' (infra) successive optimization passes.
* The ultimate winner of THAT stage is what starts
the second (fine) pass alignment. Usually, this starting
point is so good that the fine pass optimization does
not provide a lot of improvement; that is, most of the
run time ends up in the coarse pass with its multiple stages.
* All of these stages are intended to help the program avoid
stopping at a 'false' minimum in the cost functional.
They were added to the software as we gathered experience
with difficult 3D alignment problems. The combination of
multiple stages of partial optimization of multiple
parameter candidates makes the coarse pass slow, but also
makes it (usually) work well.
-twoblur rr = Set the blurring radius for the first pass to 'rr'
millimeters. [Default == 11 mm]
**N.B.: You may want to change this from the default if
your voxels are unusually small or unusually large
(e.g., outside the range 1-4 mm along each axis).
-twofirst = Use -twopass on the first image to be registered, and
then on all subsequent images from the source dataset,
use results from the first image's coarse pass to start
the fine pass.
(Useful when there may be large motions between the )
(source and the base, but only small motions within )
(the source dataset itself; since the coarse pass can )
(be slow, doing it only once makes sense in this case.)
**N.B.: [-twofirst is on by default; '-twopass' turns it off.]
-twobest bb = In the coarse pass, use the best 'bb' set of initial
points to search for the starting point for the fine
pass. If bb==0, then no search is made for the best
starting point, and the identity transformation is
used as the starting point. [Default=5; min=0 max=29]
**N.B.: Setting bb=0 will make things run faster, but less reliably.
Setting bb = 'MAX' will make it be the max allowed value.
-fineblur x = Set the blurring radius to use in the fine resolution
pass to 'x' mm. A small amount (1-2 mm?) of blurring at
the fine step may help with convergence, if there is
some problem, especially if the base volume is very noisy.
[Default == 0 mm = no blurring at the final alignment pass]
**NOTES ON
**STRATEGY: * If you expect only small-ish (< 2 voxels?) image movement,
then using '-onepass' or '-twobest 0' makes sense.
* If you expect large-ish image movements, then do not
use '-onepass' or '-twobest 0'; the purpose of the
'-twobest' parameter is to search for large initial
rotations/shifts with which to start the coarse
optimization round.
* If you have multiple sub-bricks in the source dataset,
then the default '-twofirst' makes sense if you don't expect
large movements WITHIN the source, but expect large motions
between the source and base.
* '-twopass' re-starts the alignment process for each sub-brick
in the source dataset -- this option can be time consuming,
and is really intended to be used when you might expect large
movements between sub-bricks; for example, when the different
volumes are gathered on different days. For most purposes,
'-twofirst' (the default process) will be adequate and faster,
when operating on multi-volume source datasets.
-cmass = Use the center-of-mass calculation to determine an initial shift
[This option is OFF by default]
The option can also be given as cmass+a, cmass+xy, cmass+yz, or cmass+xz,
where '+a' means to try to determine automatically in which
direction the data coverage is partial, by looking for an
overly large shift.
If given in the form '-cmass+xy' (for example), the CoM
calculation is done in the x- and y-directions, but
not in the z-direction.
* MY OPINION: This option is REALLY useful in most cases.
However, if you only have partial coverage in
the -source dataset, you will need to use
one of the '+' additions to restrict the
use of the CoM limits.
-nocmass = Don't use the center-of-mass calculation. [The default]
(You would not want to use the C-o-M calculation if the )
(source sub-bricks have very different spatial locations,)
(since the source C-o-M is calculated from all sub-bricks)
**EXAMPLE: You have a limited coverage set of axial EPI slices you want to
register into a larger head volume (after 3dSkullStrip, of course).
In this case, '-cmass+xy' makes sense, allowing CoM adjustment
along the x = R-L and y = A-P directions, but not along the
z = I-S direction, since the EPI doesn't cover the whole brain
along that axis.
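A matching command sketch (hypothetical filenames):
  3dAllineate -base anat_ns+orig -source epi_partial+orig \
              -cmass+xy -cost lpc+ZZ -prefix epi_al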
-autoweight = Compute a weight function using the 3dAutomask
algorithm plus some blurring of the base image.
**N.B.: '-autoweight+100' means to zero out all voxels
with values below 100 before computing the weight.
'-autoweight**1.5' means to compute the autoweight
and then raise it to the 1.5-th power (e.g., to
increase the weight of high-intensity regions).
These two processing steps can be combined, as in
'-autoweight+100**1.5'
** Note that '**' must be enclosed in quotes;
otherwise, the shell will treat it as a wildcard
and you will get an error message before 3dAllineate
even starts!!
** UPDATE: one can now use '^' for power notation, to
avoid needing to enclose the string in quotes.
**N.B.: Some cost functionals do not allow -autoweight, and
will use -automask instead. A warning message
will be printed if you run into this situation.
If a clip level '+xxx' is appended to '-autoweight',
then the conversion into '-automask' will NOT happen.
Thus, a small positive '+xxx' can be used to trick
-autoweight into working with any cost functional.
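**EXAMPLE: A sketch (hypothetical filenames) combining the clip and
power modifiers, saving the weight volume for inspection:
  3dAllineate -base anat+orig -source epi2+orig -prefix epi_al \
              '-autoweight+100**1.5' -wtprefix epi_wt
(The unquoted form -autoweight+100^1.5 is equivalent.)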
-automask = Compute a mask function, which is like -autoweight,
but the weight for a voxel is set to either 0 or 1.
**N.B.: '-automask+3' means to compute the mask function, and
then dilate it outwards by 3 voxels (e.g.).
** Note that '+' means something very different
for '-automask' and '-autoweight'!!
-autobox = Expand the -automask function to enclose a rectangular
box that holds the irregular mask.
**N.B.: This is the default mode of operation!
For intra-modality registration, '-autoweight' may be better!
* If the cost functional is 'ls', then '-autoweight' will be
the default, instead of '-autobox'.
-nomask = Don't compute the autoweight/mask; if -weight is not
also used, then every voxel will be counted equally.
-weight www = Set the weighting for each voxel in the base dataset;
larger weights mean that voxel counts more in the cost
function.
**N.B.: The weight dataset must be defined on the same grid as
the base dataset.
**N.B.: Even if a method does not allow -autoweight, you CAN
use a weight dataset that is not 0/1 valued. The
risk is yours, of course (!*! as always in AFNI !*!).
-wtprefix p = Write the weight volume to disk as a dataset with
prefix name 'p'. Used with '-autoweight/mask', this option
lets you see what voxels were important in the algorithm.
-emask ee = This option lets you specify a mask of voxels to EXCLUDE from
the analysis. The voxels where the dataset 'ee' is nonzero
will not be included (i.e., their weights will be set to zero).
* Like all the weight options, it applies in the base image
coordinate system.
** Like all the weight options, it means nothing if you are using
one of the 'apply' options.
Method Allows -autoweight
------ ------------------
ls YES
mi NO
crM YES
nmi NO
hel NO
crA YES
crU YES
lpc YES
lpa YES
lpc+ YES
lpa+ YES
-source_mask sss = Mask the source (input) dataset, using 'sss'.
-source_automask = Automatically mask the source dataset.
[By default, all voxels in the source]
[dataset are used in the matching. ]
**N.B.: You can also use '-source_automask+3' to dilate
the default source automask outward by 3 voxels.
-warp xxx = Set the warp type to 'xxx', which is one of
shift_only *OR* sho = 3 parameters
shift_rotate *OR* shr = 6 parameters
shift_rotate_scale *OR* srs = 9 parameters
affine_general *OR* aff = 12 parameters
[Default = affine_general, which includes image]
[ shifts, rotations, scaling, and shearing]
* MY OPINION: Shearing is usually unimportant, so
you can omit it if you want: '-warp srs'.
But it doesn't hurt to keep shearing,
except for a little extra CPU time.
On the other hand, scaling is often
important, so should not be omitted.
-warpfreeze = Freeze the non-rigid body parameters (those past #6)
after doing the first sub-brick. Subsequent volumes
will have the same spatial distortions as sub-brick #0,
plus rigid body motions only.
* MY OPINION: This option is almost useless.
-replacebase = If the source has more than one sub-brick, and this
option is turned on, then after the #0 sub-brick is
aligned to the base, the aligned #0 sub-brick is used
as the base image for subsequent source sub-bricks.
* MY OPINION: This option is almost useless.
-replacemeth m = After sub-brick #0 is aligned, switch to method 'm'
for later sub-bricks. For use with '-replacebase'.
* MY OPINION: This option is almost useless.
-EPI = Treat the source dataset as being composed of warped
EPI slices, and the base as comprising anatomically
'true' images. Only phase-encoding direction image
shearing and scaling will be allowed with this option.
**N.B.: For most people, the base dataset will be a 3dSkullStrip-ed
T1-weighted anatomy (MPRAGE or SPGR). If you don't remove
the skull first, the EPI images (which have little skull
visible due to fat-suppression) might expand to fit EPI
brain over T1-weighted skull.
**N.B.: Usually, EPI datasets don't have as complete slice coverage
of the brain as do T1-weighted datasets. If you don't use
some option (like '-EPI') to suppress scaling in the slice-
direction, the EPI dataset is likely to stretch the slice
thickness to better 'match' the T1-weighted brain coverage.
**N.B.: '-EPI' turns on '-warpfreeze -replacebase'.
You can use '-nowarpfreeze' and/or '-noreplacebase' AFTER the
'-EPI' on the command line if you do not want these options used.
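**EXAMPLE: A sketch (hypothetical filenames) of EPI-to-anatomical
alignment with '-EPI', letting the distortion parameters
vary across sub-bricks:
  3dAllineate -base anat_ns+orig -source epi+orig -EPI \
              -nowarpfreeze -prefix epi_al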
** OPTIONS to change search ranges for alignment parameters **
-smallrange = Set all the parameter ranges to be smaller (about half) than
the default ranges, which are rather large for many purposes.
* Default angle range is plus/minus 30 degrees
* Default shift range is plus/minus 32% of grid size
* Default scaling range is plus/minus 20% of grid size
* Default shearing range is plus/minus 0.1111
-parfix n v = Fix parameter #n to be exactly at value 'v'.
-parang n b t = Allow parameter #n to range only between 'b' and 't'.
If not given, default ranges are used.
-parini n v = Initialize parameter #n to value 'v', but then
allow the algorithm to adjust it.
**N.B.: Multiple '-par...' options can be used, to constrain
multiple parameters.
**N.B.: -parini has no effect if -twopass is used, since
the -twopass algorithm carries out its own search
for initial parameters.
-maxrot dd = Allow maximum rotation of 'dd' degrees. Equivalent
to '-parang 4 -dd dd -parang 5 -dd dd -parang 6 -dd dd'
[Default=30 degrees]
-maxshf dd = Allow maximum shift of 'dd' millimeters. Equivalent
to '-parang 1 -dd dd -parang 2 -dd dd -parang 3 -dd dd'
[Default=32% of the size of the base image]
**N.B.: This max shift setting is relative to the center-of-mass
shift, if the '-cmass' option is used.
-maxscl dd = Allow maximum scaling factor to be 'dd'. Equivalent
to '-parang 7 1/dd dd -parang 8 1/dd dd -parang 9 1/dd dd'
[Default=1.4=image can go up or down 40% in size]
-maxshr dd = Allow maximum shearing factor to be 'dd'. Equivalent
to '-parang 10 -dd dd -parang 11 -dd dd -parang 12 -dd dd'
[Default=0.1111 for no good reason]
NOTE: If the datasets being registered have only 1 slice, 3dAllineate
will automatically fix the 6 out-of-plane motion parameters to
their 'do nothing' values, so you don't have to specify '-parfix'.
-master mmm = Write the output dataset on the same grid as dataset
'mmm'. If this option is NOT given, the base dataset
is the master.
**N.B.: 3dAllineate transforms the source dataset to be 'similar'
to the base image. Therefore, the coordinate system
of the master dataset is interpreted as being in the
reference system of the base image. It is thus vital
that these finite 3D volumes overlap, or you will lose data!
**N.B.: If 'mmm' is the string 'SOURCE', then the source dataset
is used as the master for the output dataset grid.
You can also use 'BASE', which is of course the default.
-mast_dxyz del = Write the output dataset using grid spacings of
*OR* 'del' mm. If this option is NOT given, then the
-newgrid del grid spacings in the master dataset will be used.
This option is useful when registering low resolution
data (e.g., EPI time series) to high resolution
datasets (e.g., MPRAGE) where you don't want to
consume vast amounts of disk space interpolating
the low resolution data to some artificially fine
(and meaningless) spatial grid.
----------------------------------------------
DEFINITION OF AFFINE TRANSFORMATION PARAMETERS
----------------------------------------------
The 3x3 spatial transformation matrix is calculated as [S][D][U],
where [S] is the shear matrix,
[D] is the scaling matrix, and
[U] is the rotation (proper orthogonal) matrix.
These matrices are specified in DICOM-ordered (x=-R+L,y=-A+P,z=-I+S)
coordinates as:
[U] = [Rotate_y(param#6)] [Rotate_x(param#5)] [Rotate_z(param #4)]
(angles are in degrees)
[D] = diag( param#7 , param#8 , param#9 )
[ 1 0 0 ] [ 1 param#10 param#11 ]
[S] = [ param#10 1 0 ] OR [ 0 1 param#12 ]
[ param#11 param#12 1 ] [ 0 0 1 ]
The shift vector comprises parameters #1, #2, and #3.
The goal of the program is to find the warp parameters such that
I([x]_warped) 'is similar to' J([x]_in)
as closely as possible in some sense of 'similar', where J(x) is the
base image, and I(x) is the source image.
Using '-parfix', you can specify that some of these parameters
are fixed. For example, '-shift_rotate_scale' is equivalent to
'-affine_general -parfix 10 0 -parfix 11 0 -parfix 12 0'.
Don't even think of using the '-parfix' option unless you grok
this example!
----------- Special Note for the '-EPI' Option's Coordinates -----------
In this case, the parameters above are with reference to coordinates
x = frequency encoding direction (by default, first axis of dataset)
y = phase encoding direction (by default, second axis of dataset)
z = slice encoding direction (by default, third axis of dataset)
This option lets you freeze some of the warping parameters in ways that
make physical sense, considering how echo-planar images are acquired.
The x- and z-scaling parameters are disabled, and shears will only affect
the y-axis. Thus, there will be only 9 free parameters when '-EPI' is
used. If desired, you can use a '-parang' option to allow the fixed
scaling parameters to vary (put these after the '-EPI' option):
-parang 7 0.833 1.20 to allow x-scaling
-parang 9 0.833 1.20 to allow z-scaling
You could also fix some of the other parameters, if that makes sense
in your situation; for example, to disable out-of-slice rotations:
-parfix 5 0 -parfix 6 0
and to disable out-of-slice translation:
-parfix 3 0
NOTE WELL: If you use '-EPI', then the output warp parameters (e.g., in
'-1Dparam_save') apply to the (freq,phase,slice) xyz coordinates,
NOT to the DICOM xyz coordinates, so equivalent transformations
will be expressed with different sets of parameters entirely
than if you don't use '-EPI'! This comment does NOT apply
to the output of '-1Dmatrix_save', since that matrix is
defined relative to the RAI (DICOM) spatial coordinates.
*********** CHANGING THE ORDER OF MATRIX APPLICATION ***********
{{{ There is no good reason to ever use these options! }}}
-SDU or -SUD }= Set the order of the matrix multiplication
-DSU or -DUS }= for the affine transformations:
-USD or -UDS }= S = triangular shear (params #10-12)
D = diagonal scaling matrix (params #7-9)
U = rotation matrix (params #4-6)
Default order is '-SDU', which means that
the U matrix is applied first, then the
D matrix, then the S matrix.
-Supper }= Set the S matrix to be upper or lower
-Slower }= triangular [Default=lower triangular]
NOTE: There is no '-Lunch' option.
There is no '-Faster' option.
-ashift OR }= Apply the shift parameters (#1-3) after OR
-bshift }= before the matrix transformation. [Default=after]
==================================================
===== RWCox - September 2006 - Live Long and Prosper =====
==================================================
********************************************************
*** From Webster's Dictionary: Allineate == 'to align' ***
********************************************************
===========================================================================
FORMERLY SECRET HIDDEN OPTIONS
---------------------------------------------------------------------------
** N.B.: Most of these are experimental! [permanent beta] **
===========================================================================
-num_rtb n = At the beginning of the fine pass, the best sets of results
from the coarse pass are 'refined' a little by further
optimization, before the single best one is chosen for
the final fine optimization.
* This option sets the maximum number of cost functional
evaluations to be used (for each set of parameters)
in this step.
* The default is 99; a larger value will take more CPU
time but may give more robust results.
* If you want to skip this step entirely, use '-num_rtb 0'.
then, the best of the coarse pass results is taken
straight to the final optimization passes.
**N.B.: If you use '-VERB', you will see that one extra case
is involved in this initial fine refinement step; that
case is starting with the identity transformation, which
helps insure against the chance that the coarse pass
optimizations ran totally amok.
* MY OPINION: This option is mostly useless - but not always!
* Every step in the multi-step alignment process
was added at some point to solve a difficult
alignment problem.
* Since you usually don't know if YOUR problem
is difficult, you should not reduce the default
process without good reason.
-nocast = By default, parameter vectors that are too close to the
best one are cast out at the end of the coarse pass
refinement process. Use this option if you want to keep
them all for the fine resolution pass.
* MY OPINION: This option is nearly useless.
-norefinal = Do NOT re-start the fine iteration step after it
has converged. The default is to re-start it, which
usually results in a small improvement to the result
(at the cost of CPU time). This re-start step is an
attempt to avoid a local minimum trap. It is usually
not necessary, but sometimes helps.
-realaxes = Use the 'real' axes stored in the dataset headers, if they
conflict with the default axes. [For Jedi AFNI Masters only!]
-savehist sss = Save start and final 2D histograms as PGM
files, with prefix 'sss' (cost: cr mi nmi hel).
* if the filename contains 'FF', floats are written
* these are the weighted histograms!
* -savehist will also save histogram files when
the -allcost evaluations take place
* this option is mostly useless unless '-histbin' is
also used
* MY OPINION: This option is mostly for debugging.
-median = Smooth with median filter instead of Gaussian blur.
(Somewhat slower, and not obviously useful.)
* MY OPINION: This option is nearly useless.
-powell m a = Set the Powell NEWUOA dimensional parameters to
'm' and 'a' (cf. source code in powell_int.c).
The number of points used for approximating the
cost functional is m*N+a, where N is the number
of parameters being optimized. The default values
are m=2 and a=3. Larger values will probably slow
the program down for no good reason. The smallest
allowed values are 1.
* MY OPINION: This option is nearly useless.
-target ttt = Same as '-source ttt'. In the earliest versions,
what I now call the 'source' dataset was called the
'target' dataset:
Try to remember the kind of September (2006)
When life was slow and oh so mellow
Try to remember the kind of September
When grass was green and source was target.
-Xwarp =} Change the warp/matrix setup so that only the x-, y-, or z-
-Ywarp =} axis is stretched & sheared. Useful for EPI, where 'X',
-Zwarp =} 'Y', or 'Z' corresponds to the phase encoding direction.
-FPS fps = Generalizes -EPI to arbitrary permutation of directions.
-histpow pp = By default, the number of bins in the histogram used
for calculating the Hellinger, Mutual Information, and
Correlation Ratio statistics is n^(1/3), where n is
the number of data points. You can change that exponent
to 'pp' with this option.
-histbin nn = Or you can just set the number of bins directly to 'nn'.
-eqbin nn = Use equalized marginal histograms with 'nn' bins.
-clbin nn = Use 'nn' equal-spaced bins except for the bot and top,
which will be clipped (thus the 'cl'). If nn is 0, the
program will pick the number of bins for you.
**N.B.: '-clbin 0' is now the default [25 Jul 2007];
if you want the old all-equal-spaced bins, use
'-histbin 0'.
**N.B.: '-clbin' only works when the datasets are
non-negative; any negative voxels in either
the input or source volumes will force a switch
to all equal-spaced bins.
* MY OPINION: The above histogram-altering options are useless.
-wtmrad mm = Set autoweight/mask median filter radius to 'mm' voxels.
-wtgrad gg = Set autoweight/mask Gaussian filter radius to 'gg' voxels.
-nmsetup nn = Use 'nn' points for the setup matching [default=98756]
-ignout = Ignore voxels outside the warped source dataset.
-blok bbb = Blok definition for the 'lp?' (Local Pearson) cost
functions: 'bbb' is one of
'BALL(r)' or 'CUBE(r)' or 'RHDD(r)' or 'TOHD(r)'
corresponding to
spheres or cubes or rhombic dodecahedra or
truncated octahedra
where 'r' is the size parameter in mm.
[Default is 'TOHD(r)' = truncated octahedron]
[with 'radius' r chosen to include about 500]
[voxels in the base dataset 3D grid. ]
* Changing the 'blok' definition/radius should only be
needed in unusual situations, as when you are trying
to have fun fun fun.
* You can change the blok shape but leave the program
to set the radius, using (say) 'RHDD(0)'.
* The old default blok shape/size was 'RHDD(6.54321)',
so if you want to maintain backward compatibility,
you should use option '-blok "RHDD(6.54321)"'
* Only voxels in the weight mask will be used
inside a blok.
* HISTORICAL NOTES:
* CUBE, RHDD, and TOHD are space filling polyhedra.
That is, they are shapes that fit together without
overlaps or gaps to fill up 3D space.
* To even approximately fill space, BALLs must overlap,
unlike the other blok shapes. Which means that BALL
bloks will use some voxels more than once.
* Kepler discovered/invented the RHDD (honeybees also did).
* The TOHD is the 'most compact' or 'most ball-like'
of the known convex space filling polyhedra.
[Which is why TOHD is the default blok shape.]
-PearSave sss = Save the final local Pearson correlations into a dataset
*OR* with prefix 'sss'. These are the correlations from
-SavePear sss which the lpc and lpa cost functionals are calculated.
* The values will be between -1 and 1 in each blok.
See the 'Too Much Detail' section below for how
these correlations are used to compute lpc and lpa.
* Locations not used in the matching will get 0.
** Unless you use '-nmatch 100%', there will be holes
of 0s in the bloks, as not all voxels are used in
the matching algorithm (speedup attempt).
* All the matching points in a given blok will get
the same value, which makes the resulting dataset
look jauntily blocky, especially in color.
* This saved dataset will be on the grid of the base
dataset, and may be zero padded if the program
chose to do so in its wisdom. This padding means
that the voxels in this output dataset may not
match one-to-one with the voxels in the base
dataset; however, AFNI displays things using
coordinates, so overlaying this dataset on the
base dataset (say) should work OK.
* If you really want this saved dataset to be on the
same grid as the base dataset, you'll have to use
3dZeropad -master {Base Dataset} ....
* Option '-PearSave' works even if you don't use the
'lpc' or 'lpa' cost functionals.
* If you use this option combined with '-allcostX', then
the local correlations will be saved from the INITIAL
alignment parameters, rather than from the FINAL
optimized parameters.
(Of course, with '-allcostX', there IS no final result.)
* This option does NOT work with '-allcost' or '-allcostX1D'.
-allcost = Compute ALL available cost functionals and print them
at various points in the optimization progress.
-allcostX = Compute and print ALL available cost functionals for the
un-warped inputs, and then quit.
* This option is for testing purposes (AKA 'fun').
-allcostX1D p q = Compute ALL available cost functionals for the set of
parameters given in the 1D file 'p' (12 values per row),
write them to the 1D file 'q', then exit. (For you, Zman)
* N.B.: If -fineblur is used, that amount of smoothing
will be applied prior to the -allcostX evaluations.
The parameters are the shift, rotation, scale,
and shear values, not the affine transformation
matrix. An identity transformation could be given as
"0 0 0 0 0 0 1 1 1 0 0 0", for instance, or by
using the word "IDENTITY".
* This option is for testing purposes (even more 'fun').
===========================================================================
Too Much Detail -- How Local Pearson Correlations Are Computed and Used
-----------------------------------------------------------------------
* The automask region of the base dataset is divided into a discrete
set of 'bloks'. Usually there are several thousand bloks.
* In each blok, the voxel values from the base and the source (after
the alignment transformation is applied) are extracted and the
correlation coefficient is computed -- either weighted or unweighted,
depending on the options used in 3dAllineate (usually weighted).
* Let p[i] = correlation coefficient in blok #i,
w[i] = sum of weights used in blok #i, or = 1 if unweighted.
** The values of p[i] are what get output via the '-PearSave' option.
* Define pc[i] = arctanh(p[i]) = 0.5 * log( (1+p[i]) / (1-p[i]) )
This expression is designed to 'stretch' out larger correlations,
giving them more emphasis in psum below. The same reasoning
is why pc[i]*abs(pc[i]) is used below, to make bigger correlations
have a bigger impact in the final result.
* psum = SUM_OVER_i { w[i]*pc[i]*abs(pc[i]) }
wsum = SUM_OVER_i { w[i] }
lpc = psum / wsum ==> negative correlations are good (smaller lpc)
lpa = 1 - abs(lpc) ==> positive correlations are good (smaller lpa)
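* For illustration, with two equally weighted bloks (w[i]=1) having
p = 0.6 and p = -0.8: pc = arctanh(0.6) = +0.693 and
arctanh(-0.8) = -1.099, so psum = (0.693)(0.693) + (-1.099)(1.099)
= -0.727 and wsum = 2, giving lpc = -0.363 and
lpa = 1 - 0.363 = 0.637.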
===========================================================================
Modifying '-final wsinc5' -- for the truly crazy people out there
-----------------------------------------------------------------
* The windowed (tapered) sinc function interpolation can be modified
by several environment variables. This is expert-level stuff, and
you should understand what you are doing if you use these options.
The simplest way to use these would be on the command line, as in
-DAFNI_WSINC5_RADIUS=9 -DAFNI_WSINC5_TAPERFUN=Hamming
* AFNI_WSINC5_TAPERFUN lets you choose the taper function.
The default taper function is the minimum sidelobe 3-term cosine:
0.4243801 + 0.4973406*cos(PI*x) + 0.0782793*cos(2*PI*x)
If you set this environment variable to 'Hamming', then the
minimum sidelobe 2-term cosine will be used instead:
0.53836 + 0.46164*cos(PI*x)
Here, 'x' is between 0 and 1, where x=0 is the center of the
interpolation mask and x=1 is the outer edge.
++ Unfortunately, the 3-term cosine doesn't have a catchy name; you can
find it (and many other taper functions) described in the paper
AH Nuttall, Some Windows with Very Good Sidelobe Behavior.
IEEE Trans. ASSP, 29:84-91 (1981).
In particular, see Fig.14 and Eq.36 in this paper.
* AFNI_WSINC5_TAPERCUT lets you choose the start 'x' point for tapering:
This value should be between 0 and 0.8; for example, 0 means to taper
all the way from x=0 to x=1 (maximum tapering). The default value
is 0. Setting TAPERCUT to 0.5 (say) means only to taper from x=0.5
to x=1; thus, a larger value means that fewer points are tapered
inside the interpolation mask.
* AFNI_WSINC5_RADIUS lets you choose the radius of the tapering window
(i.e., the interpolation mask region). This value is an integer
between 3 and 21. The default value is 5 (which used to be the
ONLY value, thus 'wsinc5'). RADIUS is measured in voxels, not mm.
* AFNI_WSINC5_SPHERICAL lets you choose the shape of the mask region.
If you set this value to 'Yes', then the interpolation mask will be
spherical; otherwise, it defaults to cubical.
* The Hamming taper function is a little faster than the 3-term function,
but will have a little more Gibbs phenomenon.
* A larger TAPERCUT will give a little more Gibbs phenomenon; compute
speed won't change much with this parameter.
* Compute time goes up with (at least) the 3rd power of the RADIUS; setting
RADIUS to 21 will be VERY slow.
* Visually, RADIUS=3 is similar to quintic interpolation. Increasing
RADIUS makes the interpolated images look sharper and more well-
defined. However, values of RADIUS greater than or equal to 7 appear
(to Zhark's eagle eye) to be almost identical. If you really care,
you'll have to experiment with this parameter yourself.
* A spherical mask is also VERY slow, since the cubical mask allows
evaluation as a tensor product. There is really no good reason
to use a spherical mask; I only put it in for fun/experimental purposes.
** For most users, there is NO reason to ever use these environment variables
to modify wsinc5. You should only do this kind of thing if you have a
good and articulable reason! (Or if you really like to screw around.)
** The wsinc5 interpolation function is parallelized using OpenMP, which
makes its usage moderately tolerable.
===========================================================================
Hidden experimental cost functionals:
-------------------------------------
sp *OR* spearman = Spearman [rank] Correlation
je *OR* jointentropy = Joint Entropy [H(b,s)]
lss *OR* signedPcor = Signed Pearson Correlation
Notes for the new [Feb 2010] lpc+ cost functional:
--------------------------------------------------
* The cost functional named 'lpc+' is a combination of several others:
lpc + hel*0.4 + crA*0.4 + nmi*0.2 + mi*0.2 + ov*0.4
++ 'hel', 'crA', 'nmi', and 'mi' are the histogram-based cost
functionals also available as standalone options.
++ 'ov' is a measure of the overlap of the automasks of the base and
source volumes; ov is not available as a standalone option.
* The purpose of lpc+ is to avoid situations where the pure lpc cost
goes wild; this especially happens if '-source_automask' isn't used.
++ Even with lpc+, you should use '-source_automask+2' (say) to be safe.
* You can alter the weighting of the extra functionals by giving the
option in the form (for example)
'-lpc+hel*0.5+nmi*0+mi*0+crA*1.0+ov*0.5'
* The quotes are needed to prevent the shell from wild-card expanding
the '*' character.
--> You can now use ':' in place of '*' to avoid this wildcard problem:
-lpc+hel:0.5+nmi:0+mi:0+crA:1+ov:0.5+ZZ
* Notice the weight factors FOLLOW the name of the extra functionals.
++ If you want a weight to be 0 or 1, you have to provide for that
explicitly -- if you leave a weight off, then it will get its
default value!
++ The order of the weight factor names is unimportant here:
'-lpc+hel*0.5+nmi*0.8' == '-lpc+nmi*0.8+hel*0.5'
* Only the 5 functionals listed (hel,crA,nmi,mi,ov) can be used in '-lpc+'.
* In addition, if you want the initial alignments to be with '-lpc+' and
then finish the Final alignment with pure '-lpc', you can indicate this
by putting 'ZZ' somewhere in the option string, as in '-lpc+ZZ'.
***** '-cost lpc+ZZ' is very useful for aligning EPI to T1w volumes *****
* [28 Nov 2018]
All of the above now applies to the 'lpa+' cost functional,
which can be used as a robust method for like-to-like alignment.
For example, aligning 3T and 7T T1-weighted datasets from the same person.
* [28 Sep 2021]
However, the default multiplier constants for cost 'lpa+' are now
different from the 'lpc+' multipliers -- to make 'lpa+' more
robust. The new default for 'lpa+' is
lpa + hel*0.4 + crA*0.4 + nmi*0.2 + mi*0.0 + ov*0.4
***** '-cost lpa+ZZ' is very useful for T1w to T1w volumes (or any *****
***** similar-contrast datasets). *****
*** Note that in trial runs, we have found that lpc+ZZ and lpa+ZZ are ***
*** more robust than lpc+ and lpa+ -- which is why the '+ZZ' amendment ***
*** was created. ***
Cost functional descriptions (for use with -allcost output):
------------------------------------------------------------
ls :: 1 - abs(Pearson correlation coefficient)
sp :: 1 - abs(Spearman correlation coefficient)
mi :: - Mutual Information = H(base,source)-H(base)-H(source)
crM :: 1 - abs[ CR(base,source) * CR(source,base) ]
nmi :: 1/Normalized MI = H(base,source)/[H(base)+H(source)]
je :: H(base,source) = joint entropy of image pair
hel :: - Hellinger distance(base,source)
crA :: 1 - abs[ CR(base,source) + CR(source,base) ]
crU :: CR(source,base) = Var(source|base) / Var(source)
lss :: Pearson correlation coefficient between image pair
lpc :: nonlinear average of Pearson cc over local neighborhoods
lpa :: 1 - abs(lpc)
lpc+:: lpc + hel + mi + nmi + crA + overlap
lpa+:: lpa + hel + nmi + crA + overlap
* N.B.: Some cost functional values (as printed out above)
are negated from their theoretical descriptions (e.g., 'hel')
so that the best image alignment will be found when the cost
is minimized. See the descriptions above and the references
below for more details for each functional.
* MY OPINIONS:
* Some of these cost functionals were implemented only for
the purposes of fun and/or comparison and/or experimentation
and/or special circumstances. These are
sp je lss crM crA crU hel mi nmi
* For many purposes, lpc+ZZ and lpa+ZZ are the most robust
cost functionals, but usually the slowest to evaluate.
* HOWEVER, just because some method is best MOST of the
time does not mean it is best ALL of the time.
Please check your results visually, or at some point
in time you will have bad results and not know it!
* For speed and for 'like-to-like' alignment, '-cost ls'
can work well.
* For more information about the 'lpc' functional, see
ZS Saad, DR Glen, G Chen, MS Beauchamp, R Desai, RW Cox.
A new method for improving functional-to-structural
MRI alignment using local Pearson correlation.
NeuroImage 44: 839-848, 2009.
http://dx.doi.org/10.1016/j.neuroimage.2008.09.037
https://pubmed.ncbi.nlm.nih.gov/18976717
The '-blok' option can be used to control the regions
(size and shape) used to compute the local correlations.
*** Using the 'lpc' functional wisely requires the use of
a proper weight volume. We HIGHLY recommend you use
the align_epi_anat.py script if you want to use this
cost functional! Otherwise, you are likely to get
less than optimal results (and then swear at us unjustly).
* For more information about the 'cr' functionals, see
http://en.wikipedia.org/wiki/Correlation_ratio
Note that CR(x,y) is not the same as CR(y,x), which
is why there are symmetrized versions of it available.
* For more information about the 'mi', 'nmi', and 'je'
cost functionals, see
http://en.wikipedia.org/wiki/Mutual_information
http://en.wikipedia.org/wiki/Joint_entropy
http://www.cs.jhu.edu/~cis/cista/746/papers/mutual_info_survey.pdf
* For more information about the 'hel' functional, see
http://en.wikipedia.org/wiki/Hellinger_distance
* Some cost functionals (e.g., 'mi', 'cr', 'hel') are
computed by creating a 2D joint histogram of the
base and source image pair. Various options above
(e.g., '-histbin', etc.) can be used to control the
number of bins used in the histogram on each axis.
(If you care to control the program in such detail!)
* Minimization of the chosen cost functional is done via
the NEWUOA software, described in detail in
MJD Powell. 'The NEWUOA software for unconstrained
optimization without derivatives.' In: GD Pillo,
M Roma (Eds), Large-Scale Nonlinear Optimization.
Springer, 2006.
http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2004_08.pdf
===========================================================================
SUMMARY of the Default Allineation Process
------------------------------------------
As mentioned earlier, each of these steps was added to deal with a problem
that came up over the years. The resulting process is reasonably robust :),
but then tends to be slow :(. If you use the '-verb' or '-VERB' option, you
will get a lot of fun fun fun progress messages that show the results from
this sequence of steps.
Below, I refer to different scales of effort in the optimizations at each
step. Easier/faster optimization is done using: matching with fewer points
from the datasets; more smoothing of the base and source datasets; and by
putting a smaller upper limit on the number of trials the optimizer is
allowed to take. The Coarse phase starts with the easiest optimization,
and increases the difficulty a little at each refinement. The Fine phase
starts with the most difficult optimization setup: the most points for
matching, little or no smoothing, and a large limit on the number of
optimizer trials.
0. Preliminary Setup [Goal: create the basis for the following steps]
a. Create the automask and/or autoweight from the '-base' dataset.
The cost functional will only be computed from voxels inside the
automask, and only a fraction of those voxels will actually be used
for evaluating the cost functional (unless '-nmatch 100%' is used).
b. If the automask is 'too close' to the outside of the base 3D volume,
zeropad the base dataset to avoid edge effects.
c. Determine the 3D (x,y,z) shifts for the '-cmass' center-of-mass
crude alignment, if ordered by the user.
d. Set ranges of transformation parameters and which parameters are to
be frozen at fixed values.
1. Coarse Phase [Goal: explore the vastness of 6-12D parameter space]
a. The first step uses only the first 6 parameters (shifts + rotations),
and evaluates thousands of potential starting points -- selected from
a 6D grid in parameter space and also from random points in 6D
parameter space. This step is fairly slow. The best 45 parameter
sets (in the sense of the cost functional) are kept for the next step.
b. Still using only the first 6 parameters, the best 45 sets of parameters
undergo a little optimization. The best 6 parameter sets after this
refinement are kept for the next step. (The number of sets chosen
to go on to the next step can be set by the '-twobest' option.)
The optimizations in this step use the blurring radius that is
given by option '-twoblur', which defaults to 7.77 mm, and use
relatively few points in each dataset for computing the cost functional.
c. These 6 best parameter sets undergo further, more costly, optimization,
now using all 12 parameters. This optimization runs in 3 passes, each
more costly (less smoothing, more matching points) than the previous.
(If 2 sets get too close in parameter space, 1 of them will be cast out
-- this does not happen often.) Output parameter sets from the 3rd pass
of successive refinement are inputs to the fine refinement phase.
2. Fine Phase [Goal: use more expensive optimization on good starting points]
a. The 6 outputs from step 1c have the null parameter set (all 0, except
for the '-cmass' shifts) appended. Then a small amount of optimization
is applied to each of these 7 parameter sets ('-num_rtb'). The null
parameter set is added here to insure against the possibility that the
coarse optimizations 'ran away' to some unpleasant locations in the 12D
parameter space. These optimizations use the full set of points specified
by '-nmatch', and the smoothing specified by '-fineblur' (default = 0),
but the number of functional evaluations is small, to make this step fast.
b. The best (smallest cost) set from step 2a is chosen for the final
optimization, which is run until the '-conv' limit is reached.
These are the 'Finalish' parameters (shown using '-verb').
c. The set of parameters from step 2b is used as the starting point
for a new optimization, in an attempt to avoid a false minimum.
The results of this optimization are the final parameter set.
3. The final set of parameters is used to produce the output volume,
using the '-final' interpolation method.
In practice, the output from the Coarse phase successive refinements is
usually so good that the Fine phase runs quickly and makes only small
adjustments. The quality resulting from the Coarse phase steps is mostly
due, in my opinion, to the large number of initial trials (1ab), followed
by the successive refinements of several parameter sets (1c) to help usher
'good' candidates to the starting line for the Fine phase.
For some 'easy' registration problems -- such as T1w-to-T1w alignment, high
quality images, a lot of overlap to start with -- the process can be sped
up by reducing the number of steps. For example, '-num_rtb 0 -twobest 0'
would eliminate step 2a and speed up step 1c. Even more extreme, '-onepass'
could be used to skip all of the Coarse phase. But be careful out there!
For 'hard' registration problems, cleverness is usually needed. Choice
of cost functional matters. Preprocessing the datasets may be necessary.
Using '-twobest 29' could help by providing more candidates for the
Fine phase -- at the cost of CPU time. If you run into trouble -- which
happens sooner or later -- try the AFNI Message Board -- and please
give details, including the exact command line(s) you used.
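For example, a sketch of a sped-up 'easy' T1w-to-T1w alignment (dataset
names are placeholders):
    3dAllineate -base subj1_T1w+orig -source subj2_T1w+orig \
                -cost ls -onepass -fineblur 2 -prefix subj2_aligned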
=========================================================================
* This binary version of 3dAllineate is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
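* For example, in tcsh, a sketch of limiting a run to 4 threads:
      setenv OMP_NUM_THREADS 4
      3dAllineate ... other options ...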
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
* OpenMP may or may not speed up the program significantly. Limited
tests show that it provides some benefit, particularly when using
the more complicated interpolation methods (e.g., '-cubic' and/or
'-final wsinc5'), for up to 3-4 CPU threads.
* But the speedup is definitely not linear in the number of threads, alas.
Probably because my parallelization efforts were pretty limited.
=========================================================================
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAmpToRSFC
This program is for converting spectral amplitudes into standard RSFC
parameters. It is designed to work directly with the outputs of
3dLombScargle, but you could use other inputs that have similar
formatting. (3dLombScargle's main algorithm is special because it
calculates spectra from time series with nonconstant sampling, such as if
some time points have been censored during processing -- check it out!)
At present, 6 RSFC parameters get returned in separate volumes:
ALFF, mALFF, fALFF, RSFA, mRSFA and fRSFA.
For more information about each RSFC parameter, see, e.g.:
ALFF/mALFF -- Zang et al. (2007),
fALFF -- Zou et al. (2008),
RSFA -- Kannurpatti & Biswal (2008).
You can also see the help of 3dRSFC, as well as the Appendix of
Taylor, Gohel, Di, Walter and Biswal (2012) for a mathematical
description and set of relations.
NB: *if* you want to input an unbandpassed time series and do some
filtering/other processing at the same time as estimating RSFC parameters,
then you would want to use 3dRSFC, instead.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMAND:
3dAmpToRSFC { -in_amp AMPS | -in_pow POWS } -prefix PREFIX \
-band FBOT FTOP { -mask MASK } { -nifti }
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ RUNNING:
-in_amp AMPS :input file of one-sided spectral amplitudes, such as
output by 3dLombScargle. It is also assumed that
the frequencies are uniformly spaced with a single DF
('delta f'), and that the zeroth brick is at 1*DF (i.e.,
that the zeroth/baseline frequency is not present in
the spectrum).
-in_pow POWS :input file of a one-sided power spectrum, such as
output by 3dLombScargle. Similar freq assumptions
as in '-in_amp ...'.
-band FBOT FTOP :lower and upper boundaries, respectively, of the low
frequency fluctuations (LFFs), which will be in the
inclusive interval [FBOT, FTOP], within the provided
input file's frequency range.
-prefix PREFIX :output file prefix; file names will be: PREFIX_ALFF*,
PREFIX_FALFF*, etc.
-mask MASK :volume mask of voxels to include for calculations; if
no mask is included, values are calculated for voxels
whose values are not identically zero across time.
-nifti :output files as *.nii.gz (default is BRIK/HEAD).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ OUTPUT:
Currently, 6 volumes of common RSFC parameters, briefly:
PREFIX_ALFF+orig :amplitude of low freq fluctuations
(L1 sum).
PREFIX_MALFF+orig :ALFF divided by the mean value within
the input/estimated whole brain mask
(a.k.a. 'mean-scaled ALFF').
PREFIX_FALFF+orig :ALFF divided by sum of full amplitude
spectrum (-> 'fractional ALFF').
PREFIX_RSFA+orig :square-root of summed square of low freq
fluctuations (L2 sum).
PREFIX_MRSFA+orig :RSFA divided by the mean value within
the input/estimated whole brain mask
(a.k.a. 'mean-scaled RSFA').
PREFIX_FRSFA+orig :RSFA divided by sum of full amplitude
spectrum (a.k.a. 'fractional RSFA').
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3dAmpToRSFC \
-in_amp SUBJ_01_amp.nii.gz \
-prefix SUBJ_01 \
-mask mask_WB.nii.gz \
-band 0.01 0.1 \
-nifti
___________________________________________________________________________
AFNI program: 3dAnhist
Usage: 3dAnhist [options] dataset
Input dataset is a T1-weighted high-res of the brain (shorts only).
Output is a list of peaks in the histogram, to stdout, in the form
( datasetname #peaks peak1 peak2 ... )
In the C-shell, for example, you could do
set anhist = `3dAnhist -q -w1 dset+orig`
Then the number of peaks found is in the shell variable $anhist[2].
Options:
-q = be quiet (don't print progress reports)
-h = dump histogram data to Anhist.1D and plot to Anhist.ps
-F = DON'T fit histogram with stupid curves.
-w = apply a Winsorizing filter prior to histogram scan
(or -w7 to Winsorize 7 times, etc.)
-2 = Analyze top 2 peaks only, for overlap etc.
-label xxx = Use 'xxx' for a label on the Anhist.ps plot file
instead of the input dataset filename.
-fname fff = Use 'fff' for the filename instead of 'Anhist'.
If the '-2' option is used, AND if 2 peaks are detected, AND if
the -h option is also given, then stdout will be of the form
( datasetname 2 peak1 peak2 thresh CER CJV count1 count2 count1/count2)
where 2 = number of peaks
thresh = threshold between peak1 and peak2 for decision-making
CER = classification error rate of thresh
CJV = coefficient of joint variation
count1 = area under fitted PDF for peak1
count2 = area under fitted PDF for peak2
count1/count2 = ratio of the above quantities
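For example, in the C-shell (a sketch; following the indexing convention
of the capture example above, where $anhist[2] is the number of peaks):
   set anhist = `3dAnhist -q -2 -h -w1 dset+orig`
   # then $anhist[5] would hold 'thresh', $anhist[6] CER, etc.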
NOTA BENE
---------
* If the input is a T1-weighted MRI dataset (the usual case), then
peak 1 should be the gray matter (GM) peak and peak 2 the white
matter (WM) peak.
* For the definitions of CER and CJV, see the paper
Method for Bias Field Correction of Brain T1-Weighted Magnetic
Resonance Images Minimizing Segmentation Error
JD Gispert, S Reig, J Pascau, JJ Vaquero, P Garcia-Barreno,
and M Desco, Human Brain Mapping 22:133-144 (2004).
* Roughly speaking, CER is the ratio of the overlapping area of the
2 peak fitted PDFs to the total area of the fitted PDFs. CJV is
(sigma_GM+sigma_WM)/(mean_WM-mean_GM), and is a different, ad hoc,
measurement of how much the two PDFs overlap.
* The fitted PDFs are NOT Gaussians. They are of the form
f(x) = b((x-p)/w,a), where p=location of peak, w=width, 'a' is
a skewness parameter between -1 and 1; the basic distribution
is defined by b(x)=(1-x^2)^2*(1+a*x*abs(x)) for -1 < x < 1.
-- RWCox - November 2004
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3danisosmooth
Usage: 3danisosmooth [options] dataset
Smooths a dataset using an anisotropic smoothing technique.
The output dataset is preferentially smoothed to preserve edges.
Options :
-prefix pname = Use 'pname' for output dataset prefix name.
-iters nnn = compute nnn iterations (default=10)
-2D = smooth a slice at a time (default)
-3D = smooth through slices. Cannot be combined with the -2D option
-mask dset = use dset as mask to include/exclude voxels
-automask = automatically compute mask for dataset
Cannot be combined with -mask
-viewer = show central axial slice image every iteration.
Starts aiv program internally.
-nosmooth = do not do intermediate smoothing of gradients
-sigma1 n.nnn = assign Gaussian smoothing sigma before
gradient computation for calculation of structure tensor.
Default = 0.5
-sigma2 n.nnn = assign Gaussian smoothing sigma after
gradient matrix computation for calculation of structure tensor.
Default = 1.0
-deltat n.nnn = assign pseudotime step. Default = 0.25
-savetempdata = save temporary datasets each iteration.
Dataset prefixes are Gradient, Eigens, phi, Dtensor.
Ematrix, Flux and Gmatrix are also stored for the first sub-brick.
Where appropriate, the filename is suffixed by .ITER where
ITER is the iteration number. Existing datasets will get overwritten.
-save_temp_with_diff_measures: Like -savetempdata, but with
a dataset named Diff_measures.ITER containing FA, MD, Cl, Cp,
and Cs values.
-phiding = use Ding method for computing phi (default)
-phiexp = use exponential method for computing phi
-noneg = set negative voxels to 0
-setneg NEGVAL = set negative voxels to NEGVAL
-edgefraction n.nnn = adjust the fraction of the anisotropic
component to be added to the original image. Can vary between
0 and 1. Default =0.5
-datum type = Coerce the output data to be stored as the given type
which may be byte, short or float. [default=float]
-matchorig - match datum type and clip min and max to match input data
-help = print this help screen
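Example (a sketch; the dataset name is a placeholder):
   3danisosmooth -prefix aniso_smoothed -iters 20 -3D -automask DTI+orig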
References:
Z Ding, JC Gore, AW Anderson, Reduction of Noise in Diffusion
Tensor Images Using Anisotropic Smoothing, Mag. Res. Med.,
53:485-490, 2005
J Weickert, H Scharr, A Scheme for Coherence-Enhancing
Diffusion Filtering with Optimized Rotation Invariance,
CVGPR Group Technical Report at the Department of Mathematics
and Computer Science,University of Mannheim,Germany,TR 4/2000.
J.Weickert,H.Scharr. A scheme for coherence-enhancing diffusion
filtering with optimized rotation invariance. J Visual
Communication and Image Representation, Special Issue On
Partial Differential Equations In Image Processing,Comp Vision
Computer Graphics, pages 103-118, 2002.
Gerig, G., Kubler, O., Kikinis, R., Jolesz, F., Nonlinear
anisotropic filtering of MRI data, IEEE Trans. Med. Imaging 11
(2), 221-232, 1992.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dANOVA
++ 3dANOVA: AFNI version=AFNI_24.3.00 (Oct 1 2024) [64-bit]
++ Authored by: B. Douglas Ward
This program performs single factor Analysis of Variance (ANOVA)
on 3D datasets
---------------------------------------------------------------
Usage:
-----
3dANOVA
-levels r : r = number of factor levels
-dset 1 filename : data set for factor level 1
. . . . . .
-dset 1 filename : data set for factor level 1
. . . . . .
-dset r filename : data set for factor level r
. . . . . .
-dset r filename : data set for factor level r
[-voxel num] : screen output for voxel # num
[-diskspace] : print out disk space required for
program execution
[-mask mset] : use sub-brick #0 of dataset 'mset'
to define which voxels to process
[-debug level] : request extra output
The following commands generate individual AFNI 2-sub-brick datasets:
(In each case, output is written to the file with the specified
prefix file name.)
[-ftr prefix] : F-statistic for treatment effect
[-mean i prefix] : estimate of factor level i mean
[-diff i j prefix] : difference between factor levels
[-contr c1...cr prefix] : contrast in factor levels
Modified ANOVA computation options: (December, 2005)
** For details, see https://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html
[-old_method] request to perform ANOVA using the previous
functionality (requires -OK, also)
[-OK] confirm you understand that contrasts that
do not sum to zero have inflated t-stats, and
contrasts that do sum to zero assume sphericity
(to be used with -old_method)
[-assume_sph] assume sphericity (zero-sum contrasts, only)
This allows use of the old_method for
computing contrasts which sum to zero (this
includes diffs, for instance). Any contrast
that does not sum to zero is invalid, and
cannot be used with this option (such as
ameans).
The following command generates one AFNI 'bucket' type dataset:
[-bucket prefix] : create one AFNI 'bucket' dataset whose
sub-bricks are obtained by
concatenating the above output files;
the output 'bucket' is written to file
with prefix file name
N.B.: For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used,
e.g., -dset 2 'fred+orig[3]'
Example of 3dANOVA:
------------------
Example is based on a study with one factor (independent variable)
called 'Pictures', with 3 levels:
(1) Faces, (2) Houses, and (3) Donuts
The ANOVA is being conducted on the data of subjects Fred and Ethel:
3dANOVA -levels 3 \
-dset 1 fred_Faces+tlrc \
-dset 1 ethel_Faces+tlrc \
\
-dset 2 fred_Houses+tlrc \
-dset 2 ethel_Houses+tlrc \
\
-dset 3 fred_Donuts+tlrc \
-dset 3 ethel_Donuts+tlrc \
\
-ftr Pictures \
-mean 1 Faces \
-mean 2 Houses \
-mean 3 Donuts \
-diff 1 2 FvsH \
-diff 2 3 HvsD \
-diff 1 3 FvsD \
-contr 1 1 -1 FHvsD \
-contr -1 1 1 FvsHD \
-contr 1 -1 1 FDvsH \
-bucket fred_n_ethel_ANOVA
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
---------------------------------------------------
Also see HowTo#5 - Group Analysis on the AFNI website:
https://afni.nimh.nih.gov/pub/dist/HOWTO/howto/ht05_group/html/index.shtml
-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers. This truncation might cause significant errors.
If you receive warnings that look like this:
*+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program. For convenience, you can do this
on the command line, as in
3dANOVA -DAFNI_FLOATIZE=YES ... other options ...
Also see the following links:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dANOVA2
++ 3dANOVA: AFNI version=AFNI_24.3.00 (Oct 1 2024) [64-bit]
++ Authored by: B. Douglas Ward
This program performs a two-factor Analysis of Variance (ANOVA)
on 3D datasets.
Please also see (and consider using) AFNI's gen_group_command.py program
to construct your 3dANOVA2 command. That program helps simplify the
process of specifying your command.
-----------------------------------------------------------
Usage ~1~
3dANOVA2
-type k : type of ANOVA model to be used:
k=1 fixed effects model (A and B fixed)
k=2 random effects model (A and B random)
k=3 mixed effects model (A fixed, B random)
-alevels a : a = number of levels of factor A
-blevels b : b = number of levels of factor B
-dset 1 1 filename : data set for level 1 of factor A
and level 1 of factor B
. . . . . .
-dset i j filename : data set for level i of factor A
and level j of factor B
. . . . . .
-dset a b filename : data set for level a of factor A
and level b of factor B
[-voxel num] : screen output for voxel # num
[-diskspace] : print out disk space required for
program execution
[-mask mset] : use sub-brick #0 of dataset 'mset'
to define which voxels to process
The following commands generate individual AFNI 2-sub-brick datasets:
(In each case, output is written to the file with the specified
prefix file name.)
[-ftr prefix] : F-statistic for treatment effect
[-fa prefix] : F-statistic for factor A effect
[-fb prefix] : F-statistic for factor B effect
[-fab prefix] : F-statistic for interaction
[-amean i prefix] : estimate mean of factor A level i
[-bmean j prefix] : estimate mean of factor B level j
[-xmean i j prefix] : estimate mean of cell at level i of factor A,
level j of factor B
[-adiff i j prefix] : difference between levels i and j of factor A
[-bdiff i j prefix] : difference between levels i and j of factor B
[-xdiff i j k l prefix] : difference between cell mean at A=i,B=j
and cell mean at A=k,B=l
[-acontr c1 ... ca prefix] : contrast in factor A levels
[-bcontr c1 ... cb prefix] : contrast in factor B levels
[-xcontr c11 ... c1b c21 ... c2b ... ca1 ... cab prefix]
: contrast in cell means
The following command generates one AFNI 'bucket' type dataset:
[-bucket prefix] : create one AFNI 'bucket' dataset whose
sub-bricks are obtained by concatenating
the above output files; the output 'bucket'
is written to file with prefix file name
Modified ANOVA computation options: (December, 2005) ~1~
** These options apply to model type 3, only.
For details, see https://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html
[-old_method] : request to perform ANOVA using the previous
functionality (requires -OK, also)
[-OK] : confirm you understand that contrasts that
do not sum to zero have inflated t-stats, and
contrasts that do sum to zero assume sphericity
(to be used with -old_method)
[-assume_sph] : assume sphericity (zero-sum contrasts, only)
This allows use of the old_method for
computing contrasts which sum to zero (this
includes diffs, for instance). Any contrast
that does not sum to zero is invalid, and
cannot be used with this option (such as
ameans).
----------------------------------------------------------
Examples of 3dANOVA2 ~1~
(And see also AFNI's gen_group_command.py for what is likely a
simpler method for constructing these commands.)
1) This example is based on a study with a 3 x 4 mixed factorial design:
Factor 1 - DONUTS has 3 levels:
(1) chocolate, (2) glazed, (3) sugar
Factor 2 - SUBJECTS, of which there are 4 in this analysis:
(1) fred, (2) ethel, (3) lucy, (4) ricky
3dANOVA2 \
-type 3 -alevels 3 -blevels 4 \
-dset 1 1 fred_choc+tlrc \
-dset 2 1 fred_glaz+tlrc \
-dset 3 1 fred_sugr+tlrc \
-dset 1 2 ethel_choc+tlrc \
-dset 2 2 ethel_glaz+tlrc \
-dset 3 2 ethel_sugr+tlrc \
-dset 1 3 lucy_choc+tlrc \
-dset 2 3 lucy_glaz+tlrc \
-dset 3 3 lucy_sugr+tlrc \
-dset 1 4 ricky_choc+tlrc \
-dset 2 4 ricky_glaz+tlrc \
-dset 3 4 ricky_sugr+tlrc \
-amean 1 Chocolate \
-amean 2 Glazed \
-amean 3 Sugar \
-adiff 1 2 CvsG \
-adiff 2 3 GvsS \
-adiff 1 3 CvsS \
-acontr 1 1 -2 CGvsS \
-acontr -2 1 1 CvsGS \
-acontr 1 -2 1 CSvsG \
-fa Donuts \
-bucket ANOVA_results
The -bucket option will place all of the 3dANOVA2 results (i.e., main
effect of DONUTS, means for each of the 3 levels of DONUTS, and
contrasts between the 3 levels of DONUTS) into one big dataset with
multiple sub-bricks called ANOVA_results+tlrc.
-----------------------------------------------------------
Notes ~1~
For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used, e.g.:
-dset 2 4 'fred+orig[3]'
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
Also see HowTo #5: Group Analysis on the AFNI website:
https://afni.nimh.nih.gov/pub/dist/HOWTO/howto/ht05_group/html/index.shtml
-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers. This truncation might cause significant errors.
If you receive warnings that look like this:
*+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program. For convenience, you can do this
on the command line, as in
3dANOVA2 -DAFNI_FLOATIZE=YES ... other options ...
Also see the following links:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dANOVA3
This program performs three-factor ANOVA on 3D data sets.
Please also see (and consider using) AFNI's gen_group_command.py program
to construct your 3dANOVA3 command. That program helps simplify the
process of specifying your command.
-----------------------------------------------------------
Usage ~1~
3dANOVA3
-type k type of ANOVA model to be used:
k = 1 A,B,C fixed; AxBxC
k = 2 A,B,C random; AxBxC
k = 3 A fixed; B,C random; AxBxC
k = 4 A,B fixed; C random; AxBxC
k = 5 A,B fixed; C random; AxB,BxC,C(A)
-alevels a a = number of levels of factor A
-blevels b b = number of levels of factor B
-clevels c c = number of levels of factor C
-dset 1 1 1 filename data set for level 1 of factor A
and level 1 of factor B
and level 1 of factor C
. . . . . .
-dset i j k filename data set for level i of factor A
and level j of factor B
and level k of factor C
. . . . . .
-dset a b c filename data set for level a of factor A
and level b of factor B
and level c of factor C
[-voxel num] screen output for voxel # num
[-diskspace] print out disk space required for
program execution
[-mask mset] use sub-brick #0 of dataset 'mset'
to define which voxels to process
The following commands generate individual AFNI 2 sub-brick datasets:
(In each case, output is written to the file with the specified
prefix file name.)
[-fa prefix] F-statistic for factor A effect
[-fb prefix] F-statistic for factor B effect
[-fc prefix] F-statistic for factor C effect
[-fab prefix] F-statistic for A*B interaction
[-fac prefix] F-statistic for A*C interaction
[-fbc prefix] F-statistic for B*C interaction
[-fabc prefix] F-statistic for A*B*C interaction
[-amean i prefix] estimate of factor A level i mean
[-bmean i prefix] estimate of factor B level i mean
[-cmean i prefix] estimate of factor C level i mean
[-xmean i j k prefix] estimate mean of cell at factor A level i,
factor B level j, factor C level k
[-adiff i j prefix] difference between factor A levels i and j
(with factors B and C collapsed)
[-bdiff i j prefix] difference between factor B levels i and j
(with factors A and C collapsed)
[-cdiff i j prefix] difference between factor C levels i and j
(with factors A and B collapsed)
[-xdiff i j k l m n prefix] difference between cell mean at A=i,B=j,
C=k, and cell mean at A=l,B=m,C=n
[-acontr c1...ca prefix] contrast in factor A levels
(with factors B and C collapsed)
[-bcontr c1...cb prefix] contrast in factor B levels
(with factors A and C collapsed)
[-ccontr c1...cc prefix] contrast in factor C levels
(with factors A and B collapsed)
[-aBcontr c1 ... ca : j prefix] 2nd order contrast in A, at fixed
B level j (collapsed across C)
[-Abcontr i : c1 ... cb prefix] 2nd order contrast in B, at fixed
A level i (collapsed across C)
[-aBdiff i_1 i_2 : j prefix] difference between levels i_1 and i_2 of
factor A, with factor B fixed at level j
[-Abdiff i : j_1 j_2 prefix] difference between levels j_1 and j_2 of
factor B, with factor A fixed at level i
[-abmean i j prefix] mean effect at factor A level i and
factor B level j
The following command generates one AFNI 'bucket' type dataset:
[-bucket prefix] create one AFNI 'bucket' dataset whose
sub-bricks are obtained by concatenating
the above output files; the output 'bucket'
is written to file with prefix file name
Modified ANOVA computation options: (December, 2005) ~1~
** These options apply to model types 4 and 5, only.
For details, see: https://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html
https://afni.nimh.nih.gov/afni/doc/manual/ANOVAm.pdf
[-old_method] request to perform ANOVA using the previous
functionality (requires -OK, also)
[-OK] confirm you understand that contrasts that
do not sum to zero have inflated t-stats, and
contrasts that do sum to zero assume sphericity
(to be used with -old_method)
[-assume_sph] assume sphericity (zero-sum contrasts, only)
This allows use of the old_method for
computing contrasts which sum to zero (this
includes diffs, for instance). Any contrast
that does not sum to zero is invalid, and
cannot be used with this option (such as
ameans).
-----------------------------------------------------------------
Examples ~1~
(And see also AFNI's gen_group_command.py for what is likely a
simpler method for constructing these commands.)
1) The "classic" houses/faces/donuts for 4 subjects (2 genders)
(level sets are gender (M/W), image (H/F/D), and subject)
Note: factor C is really subject within gender (since it is
nested). There are 4 subjects in this example, and 2
subjects per gender. So clevels is 2.
3dANOVA3 -type 5 \
-alevels 2 \
-blevels 3 \
-clevels 2 \
-dset 1 1 1 man1_houses+tlrc \
-dset 1 2 1 man1_faces+tlrc \
-dset 1 3 1 man1_donuts+tlrc \
-dset 1 1 2 man2_houses+tlrc \
-dset 1 2 2 man2_faces+tlrc \
-dset 1 3 2 man2_donuts+tlrc \
-dset 2 1 1 woman1_houses+tlrc \
-dset 2 2 1 woman1_faces+tlrc \
-dset 2 3 1 woman1_donuts+tlrc \
-dset 2 1 2 woman2_houses+tlrc \
-dset 2 2 2 woman2_faces+tlrc \
-dset 2 3 2 woman2_donuts+tlrc \
-adiff 1 2 MvsW \
-bdiff 2 3 FvsD \
-bcontr -0.5 1 -0.5 FvsHD \
-aBcontr 1 -1 : 1 MHvsWH \
-aBdiff 1 2 : 1 same_as_MHvsWH \
-Abcontr 2 : 0 1 -1 WFvsWD \
-Abdiff 2 : 2 3 same_as_WFvsWD \
-Abcontr 2 : 1 7 -4.2 goofy_example \
-bucket donut_anova
Notes ~1~
For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used, e.g.:
-dset 2 4 5 'fred+orig[3]'
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers. This truncation might cause significant errors.
If you receive warnings that look like this:
*+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program. For convenience, you can do this
on the command line, as in
3dANOVA3 -DAFNI_FLOATIZE=YES ... other options ...
Also see the following links:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAttribute
Usage ~1~
3dAttribute [options] aname dset
Prints (to stdout) the value of the attribute 'aname' from
the header of dataset 'dset'. If the attribute doesn't exist,
prints nothing and sets the exit status to 1.
See the full list of attributes in README.attributes here:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.attributes.html
Options ~1~
-name = Include attribute name in printout
-all = Print all attributes [don't put aname on command line]
Also implies '-name'. Attributes print in whatever order
they are in the .HEAD file, one per line. You may want
to do '3dAttribute -all elvis+orig | sort' to get them
in alphabetical order.
-center = Center of volume in RAI coordinates.
Note that center is not itself an attribute in the
.HEAD file. It is calculated from other attributes.
Special options for string attributes:
-ssep SSEP Use string SSEP as a separator between strings for
multiple sub-bricks. The default is '~', which is what
is used internally in AFNI's .HEAD file. For tcsh,
I recommend ' ' which makes parsing easy, assuming each
individual string contains no spaces to begin with.
Try -ssep 'NUM'
-sprep SPREP Use string SPREP to replace blank space in string
attributes.
-quote Use single quote around each string.
Examples ~1~
3dAttribute -quote -ssep ' ' BRICK_LABS SomeStatDset+tlrc.HEAD
3dAttribute -quote -ssep 'NUM' -sprep '+' BRICK_LABS SomeStatDset+tlrc.HEAD
3dAttribute BRICK_STATAUX SomeStatDset+tlrc.HEAD'[0]'
# ... which outputs information for just the [0]th brick of a dset.
# If that dset were an F-stat, then the output might look like:
# 0 4 2 2 430
# ... which, in order, translate to:
# 0 --> the index of the brick in question
# 4 --> the brick's statistical code, findable in README.attributes:
# '#define FUNC_FT_TYPE 4 /* fift: F-statistic */'
# to be an F-statistic
# 2 --> the number of parameters for that stat (shown subsequently)
# 2 --> here, the 1st parameter for the F-stat: 'Numerator DOF'
# 430 --> here, the 2nd parameter for the F-stat: 'Denominator DOF'
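  # A sketch of using the exit status described above to test whether an
  # attribute exists (TAXIS_NUMS appears only in datasets with a time axis;
  # the dataset name here is a placeholder):
  3dAttribute TAXIS_NUMS SomeDset+orig.HEAD > /dev/null
  if ( $status ) echo "no TAXIS_NUMS attribute in this dataset"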
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAutobox
++ 3dAutobox: AFNI version=AFNI_24.3.00 (Oct 1 2024) [64-bit]
Usage: 3dAutobox [options] DATASET
Computes size of a box that fits around the volume.
Also can be used to crop the volume to that box.
The default 'info message'-based terminal text is a set of IJK coords.
See below for options to display coordinates in other ways, as well as
to save them in a text file. Please note in particular the difference
between *ijk* and *ijkord* outputs, for scripting.
OPTIONS: ~1~
-prefix PREFIX :Crop the input dataset to the size of the box, and
write an output dataset with PREFIX for the name.
*If -prefix is not used, no new volume is written out,
just the (x,y,z) extents of the voxels to be kept.
-input DATASET :An alternate way to specify the input dataset.
The default method is to pass DATASET as
the last parameter on the command line.
-noclust :Don't do any clustering to find box. Any non-zero
voxel will be preserved in the cropped volume.
The default method uses some clustering to find the
cropping box, and will clip off small isolated blobs.
-extent :Write to standard out the spatial extent of the box
-extent_xyz_quiet :The same numbers as '-extent', but only numbers and
no string content. Ordering is RLAPIS.
-extent_ijk :Write out the 6 auto bbox ijk slice numbers to
screen:
imin imax jmin jmax kmin kmax
Note that resampling would affect the ijk vals (but
not necessarily the xyz ones).
-extent_ijk_to_file FF
:Write out the 6 auto bbox ijk slice numbers to
a simple-formatted text file FF (single row file):
imin imax jmin jmax kmin kmax
(same notes as above apply).
-extent_ijk_midslice :Write out the 3 ijk midslices of the autobox to
the screen:
imid jmid kmid
These are obtained via: (imin + imax)/2, etc.
-extent_ijkord :Write out the 6 auto bbox ijk slice numbers to screen
but in a particular order and format (see 'NOTE on
*ijkord* format', below).
NB: This ordering is useful if you want to use
the output indices in 3dcalc expressions.
-extent_ijkord_to_file FFORRD
:Write out the 6 auto bbox ijk slice numbers to a file
but in a particular order and format (see 'NOTE on
*ijkord* format', below).
NB: This option is quite useful if you want to use
the output indices in 3dcalc expressions.
-extent_xyz_to_file GG
:Write out the 6 auto bbox xyz coords to
a simple-formatted text file GG (single row file):
xmin xmax ymin ymax zmin zmax
(same values as '-extent').
-extent_xyz_midslice :Write out the 3 xyz midslices of the autobox to
the screen:
xmid ymid zmid
These are obtained via: (xmin + xmax)/2, etc.
These follow the same meaning as '-extent'.
-npad NNN :Number of extra voxels to pad on each side of box,
since some troublesome people (that's you, LRF) want
this feature for no apparent reason.
** With this option, it is possible to get a dataset
that is actually bigger than the input.
** You can input a negative value for NNN, which will
crop the dataset even more than the automatic method.
-npad_safety_on :Constrain npad-ded extents to be within dset. So,
each index is bounded to be in range [0, L-1], where L
is matrix length along that dimension.
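Example of basic cropping (a sketch; dataset names are placeholders):
   3dAutobox -prefix anat_crop -npad 2 -input anat+orig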
NOTE on *ijkord* format ~1~
Using any of the '-*ijkord*' options above will output pairs of ijk
indices just like the regular ijk options, **but** they will be ordered
in a way that you can associate each of the i, j, and k indices with
a standard x, y and z coordinate direction. Without this ordering,
resampling a dataset could change what index is associated with which
coordinate axis. That situation can be confusing for scripting (and
by confusing, we mean 'bad').
The output format for any '-*ijkord*' options is a 3x3 table, where
the first column is the index value (i, j or k), and the next two
columns are the min and max interval boundaries for the autobox.
Importantly, the rows are placed in order so that the top corresponds
to the x-axis, the middle to the y-axis and the bottom to the z-axis.
So, if you had the following table output for a dset:
k 10 170
i 35 254
j 21 199
... you would look at the third row for the min/max slice values
along the z-axis, and you would use the index 'j' to refer to it in,
say, a 3dcalc expression.
Note that the above example table output came from a dataset with ASL
orientation. We can see how that fits, recalling that the first,
second and third rows tell us about x, y and z info, respectively; and
that i, j and k refer to the first, second and third characters in the
orientation string. So, the third (z-like) row contains a j, which
points us at the middle character in the orientation, which is S, which
is along the z-axis---all consistent! Similarly, the top (x-like) row
contains a k, which points us at the last char in the orientation,
which is L and that is along the x-axis---phew!
The main point of this would be to extract this information and use it
in a script. If you knew that you wanted the z-slice range to use
in a 3dcalc 'within()' expression, then you could extract the 3rd row
to get the correct index and slice ranges, e.g., in tcsh:
set vvv = `sed -n 3p FILE_ijkord.txt`
... where now ${vvv} will have 3 values, the first of which is the
relevant index letter, then the min and max slice range values.
So an example 3dcalc expression to keep values only within
that slice range:
3dcalc \
-a DSET \
-expr "a*within(${vvv[1]},${vvv[2]},${vvv[3]})" \
-prefix DSET_SUBSET
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAutomask
Usage: 3dAutomask [options] dataset
Input dataset is EPI 3D+time, or a skull-stripped anatomical.
Output dataset is a brain-only mask dataset.
This program by itself does NOT do 'skull-stripping'. Use
program 3dSkullStrip for that purpose!
Method:
+ Uses 3dClipLevel algorithm to find clipping level.
+ Keeps only the largest connected component of the
supra-threshold voxels, after an erosion/dilation step.
+ Writes result as a 'fim' type of functional dataset,
which will be 1 inside the mask and 0 outside the mask.
Options:
--------
-prefix ppp = Write mask into dataset with prefix 'ppp'.
[Default == 'automask']
-apply_prefix ppp = Apply mask to input dataset and save
masked dataset. If an apply_prefix is given
and not the usual prefix, the only output
will be the applied dataset
-clfrac cc = Set the 'clip level fraction' to 'cc', which
must be a number between 0.1 and 0.9.
A small 'cc' means to make the initial threshold
for clipping (a la 3dClipLevel) smaller, which
will tend to make the mask larger. [default=0.5]
-nograd = The program uses a 'gradual' clip level by default.
To use a fixed clip level, use '-nograd'.
[Change to gradual clip level made 24 Oct 2006.]
-peels pp = Peel (erode) the mask 'pp' times,
then unpeel (dilate). Using NN2 neighborhoods,
clips off protuberances less than 2*pp voxels
thick. Turn off by setting to 0. [Default == 1]
-NN1 -NN2 -NN3 = Erode and dilate using different neighbor definitions
NN1=faces, NN2=edges, NN3= corners [Default=NN2]
Applies to erode and dilate options, if present.
Note the default peeling processes still use NN2
unless the peels are set to 0
-nbhrs nn = Define the number of neighbors needed for a voxel
NOT to be eroded. The 18 nearest neighbors in
the 3D lattice are used, so 'nn' should be between
6 and 26. [Default == 17]
-q = Don't write progress messages (i.e., be quiet).
-eclip = After creating the mask, remove exterior
voxels below the clip threshold.
-dilate nd = Dilate the mask outwards 'nd' times.
-erode ne = Erode the mask inwards 'ne' times.
-SI hh = After creating the mask, find the most superior
voxel, then zero out everything more than 'hh'
millimeters inferior to that. hh=130 seems to
be decent (i.e., for Homo Sapiens brains).
-depth DEP = Produce a dataset (DEP) that shows how many peel
operations it takes to get to a voxel in the mask.
The higher the number, the deeper a voxel is located
in the mask. Note this uses the NN1,2,3 neighborhoods
above with a default of 2 for edge-sharing neighbors
None of -peels, -dilate, or -erode affect this option.
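A typical basic usage might look like this (a sketch; dataset names are
placeholders):
   3dAutomask -prefix epi_mask -clfrac 0.4 -dilate 1 epi_run1+orig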
--------------------------------------------------------------------
How to make an edge-of-brain mask from an anatomical volume:
* 3dSkullStrip to create a brain-only dataset; say, Astrip+orig
* 3dAutomask -prefix Amask Astrip+orig
* Create a mask of edge-only voxels via
3dcalc -a Amask+orig -b a+i -c a-i -d a+j -e a-j -f a+k -g a-k \
-expr 'ispositive(a)*amongst(0,b,c,d,e,f,g)' -prefix Aedge
which will be 1 at all voxels in the brain mask that have a
nearest neighbor that is NOT in the brain mask.
* cf. '3dcalc -help' DIFFERENTIAL SUBSCRIPTS for information
on the 'a+i' et cetera inputs used above.
* In regions where the brain mask is 'stair-stepping', then the
voxels buried inside the corner of the steps probably won't
show up in this edge mask:
...00000000...
...aaa00000...
...bbbaa000...
...bbbbbaa0...
Only the 'a' voxels are in this edge mask, and the 'b' voxels
down in the corners won't show up, because they only touch a
0 voxel on a corner, not face-on. Depending on your use for
the edge mask, this effect may or may not be a problem.
--------------------------------------------------------------------
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dAutoTcorrelate
Usage: 3dAutoTcorrelate [options] dset
Computes the correlation coefficient between the time series of each
pair of voxels in the input dataset, and stores the output into a
new anatomical bucket dataset [scaled to shorts to save memory space].
*** Also see program 3dTcorrMap ***
Options:
-pearson = Correlation is the normal Pearson (product moment)
correlation coefficient [default].
-eta2 = Output is eta^2 measure from Cohen et al., NeuroImage, 2008:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2705206/
http://dx.doi.org/10.1016/j.neuroimage.2008.01.066
** '-eta2' is intended to be used to measure the similarity
between 2 correlation maps; therefore, this option is
to be used in a second stage analysis, where the input
dataset is the output of running 3dAutoTcorrelate with
the '-pearson' option -- the voxel 'time series' from
that first stage run is the correlation map of that
voxel with all other voxels.
** '-polort -1' is recommended with this option!
** Odds are you do not want use this option if the dataset
on which eta^2 is to be computed was generated with
options -mask_only_targets or -mask_source.
In this program, the eta^2 is computed between pseudo-
timeseries (the 4th dimension of the dataset).
If you want to compute eta^2 between sub-bricks then use
3ddot -eta2 instead.
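** A sketch of this two-stage usage (dataset names are hypothetical):
       3dAutoTcorrelate -pearson -mask mask+orig \
                        -prefix stage1_corr rest+orig
       3dAutoTcorrelate -eta2 -polort -1 -mask mask+orig \
                        -prefix stage2_eta2 stage1_corr+orig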
-spearman AND -quadrant are disabled at this time :-(
-polort m = Remove polynomial trend of order 'm', for m=-1..3.
[default is m=1; removal is by least squares].
Using m=-1 means no detrending; this is only useful
for data/information that has been pre-processed.
-autoclip = Clip off low-intensity regions in the dataset,
-automask = so that the correlation is only computed between
high-intensity (presumably brain) voxels. The
mask is determined the same way that 3dAutomask works.
-mask mmm = Mask of both 'source' and 'target' voxels.
** Restricts computations to those in the mask. Output
volumes are restricted to masked voxels. Also, only
masked voxels will have non-zero output.
** A dataset with 1000 voxels would lead to output of
1000 sub-bricks. With a '-mask' of 50 voxels, the
output dataset would have 50 sub-bricks, where the 950
unmasked voxels would be all zero in all 50 sub-bricks
(unless option '-mask_only_targets' is also used).
** The mask is encoded in the output dataset header in the
attribute named 'AFNI_AUTOTCORR_MASK' (cf. 3dMaskToASCII).
-mask_only_targets = Provide output for all voxels.
** Used with '-mask': every voxel is correlated with each
of the mask voxels. In the example above, there would
be 50 output sub-bricks; the n-th output sub-brick
would contain the correlations of the n-th voxel in
the mask with ALL 1000 voxels in the dataset (rather
than with just the 50 voxels in the mask).
-mask_source sss = Provide output for voxels only in mask sss
** For each seed in mask mm, compute correlations only with
non-zero voxels in sss. If you have 250 non-zero voxels
in sss, then the output will still have 50 sub-bricks, but
each n-th sub-brick will have non-zero values at the 250
non-zero voxels in sss
Do not use this option along with -mask_only_targets
-prefix p = Save output into dataset with prefix 'p'
[default prefix is 'ATcorr'].
-out1D FILE.1D = Save output in a text file formatted thusly:
Row 1 contains the 1D indices of non zero voxels in the
mask from option -mask.
Column 1 contains the 1D indices of non zero voxels in the
mask from option -mask_source
The rest of the matrix contains the correlation/eta2
values. Each column k corresponds to sub-brick k in
the output volume p.
To see 1D indices in AFNI, right click on the top left
corner of the AFNI controller - where coordinates are
shown - and choose voxel indices.
A 1D index (ijk) is computed from the 3D (i,j,k) indices:
ijk = i + j*Ni + k*Ni*Nj , with Ni and Nj being the
number of voxels in the slice orientation and given by:
3dinfo -ni -nj YOUR_VOLUME_HERE
This option can only be used in conjunction with
options -mask and -mask_source. Otherwise it makes little
sense to write a potentially enormous text file.
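As a worked example of the ijk formula above: with Ni=64 and Nj=64,
voxel (i,j,k) = (10,20,30) has 1D index
ijk = 10 + 20*64 + 30*64*64 = 124170.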
-time = Mark output as a 3D+time dataset instead of an anat bucket.
-mmap = Write .BRIK results to disk directly using Unix mmap().
This trick can speed the program up when the amount
of memory required to hold the output is very large.
** In many cases, the amount of time needed to write
the results to disk is longer than the CPU time.
This option can shorten the disk write time.
** If the program crashes, you'll have to manually
remove the .BRIK file, which will have been created
before the loop over voxels and written into during
that loop, rather than being written all at once
at the end of the analysis, as is usually the case.
** If the amount of memory needed is bigger than the
RAM on your system, this program will be very slow
with or without '-mmap'.
** This option won't work with NIfTI-1 (.nii) output!
Example: correlate every voxel in mask_in+tlrc with only those voxels in
mask_out+tlrc (the rest of each volume is zero, for speed).
Assume detrending was already done along with other pre-processing.
The output will have one volume per masked voxel in mask_in+tlrc.
Volumes will be labeled by the ijk index triples of mask_in+tlrc.
3dAutoTcorrelate -mask_source mask_out+tlrc -mask mask_in+tlrc \
-polort -1 -prefix test_corr clean_epi+tlrc
Notes:
* The output dataset is anatomical bucket type of shorts
(unless '-time' is used).
* Values are scaled so that a correlation (or eta-squared)
of 1 corresponds to a value of 10000.
* The output file might be gigantic and you might run out
of memory running this program. Use at your own risk!
++ If you get an error message like
*** malloc error for dataset sub-brick
this means that the program ran out of memory when making
the output dataset.
++ If this happens, you can try to use the '-mmap' option,
and if you are lucky, the program may actually run.
* The program prints out an estimate of its memory usage
when it starts. It also prints out a progress 'meter'
to keep you pacified.
* This is a quick hack for Peter Bandettini. Now pay up.
* OpenMP-ized for Hang Joon Jo. Where's my baem-sul?
-- RWCox - 31 Jan 2002 and 16 Jul 2010
=========================================================================
* This binary version of 3dAutoTcorrelate is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3daxialize
*+ WARNING: This program (3daxialize) is old, not maintained, and probably useless!
Usage: 3daxialize [options] dataset
Purpose: Read in a dataset and write it out as a new dataset
with the data brick oriented as axial slices.
The input dataset must have a .BRIK file.
One application is to create a dataset that can
be used with the AFNI volume rendering plugin.
Options:
-prefix ppp = Use 'ppp' as the prefix for the new dataset.
[default = 'axialize']
-verb = Print out a progress report.
The following options determine the order/orientation
in which the slices will be written to the dataset:
-sagittal = Do sagittal slice order [-orient ASL]
-coronal = Do coronal slice order [-orient RSA]
-axial = Do axial slice order [-orient RAI]
This is the default AFNI axial order, and
is the one currently required by the
volume rendering plugin; this is also
the default orientation output by this
program (hence the program's name).
-orient code = Orientation code for output.
The code must be 3 letters, one each from the
pairs {R,L} {A,P} {I,S}. The first letter gives
the orientation of the x-axis, the second the
orientation of the y-axis, the third the z-axis:
R = Right-to-left L = Left-to-right
A = Anterior-to-posterior P = Posterior-to-anterior
I = Inferior-to-superior S = Superior-to-inferior
If you give an illegal code (e.g., 'LPR'), then
the program will print a message and stop.
N.B.: 'Neurological order' is -orient LPI
-frugal = Write out data as it is rotated, a sub-brick at
a time. This saves a little memory and was the
previous behavior.
Note the frugal option is not available with NIFTI
datasets
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dBallMatch
--------------------------------------
Usage #1: 3dBallMatch dataset [radius]
--------------------------------------
-----------------------------------------------------------------------
Usage #2: 3dBallMatch [options]
where the pitifully few options are:
-input dataset = read this dataset
-ball radius = set the radius of the 3D ball to match (mm)
-spheroid a b = match with a spheroid of revolution, with principal
axis radius of 'a' and secondary axes radii 'b'
++ this option is considerably slower
-----------------------------------------------------------------------
-------------------
WHAT IT IS GOOD FOR
-------------------
* This program tries to find a good match between a ball (filled sphere)
of the given radius (in mm) and a dataset. The goal is to find a crude
approximate center of the brain quickly.
* The output can be used to re-center a dataset so that its coordinate
origin is inside the brain and/or as a starting point for more refined
3D alignment. Sample scripts are given below.
* The reason for this program is that not all brain images are even
crudely centered by using the center-of-mass ('3dAllineate -cmass')
as a starting point -- if the volume covered by the image includes
a lot of neck or even shoulders, then the center-of-mass may be
far from the brain.
* If you don't give a radius, the default is 72 mm, which is about the
radius of an adult human brain/cranium. A larger value would be needed
for elephant brain images. A smaller value for marmosets.
* For advanced use, you could try a prolate spheroid, using something like
3dBallMatch -input Fred.nii -spheroid 90 70
for a human head image (that was not skull stripped). This option is
several times slower than the 'ball' option, as multiple spheroids have
to be correlated with the input dataset.
* This program does NOT work well with datasets containing large amounts
of negative values or background junk -- such as I've seen with animal
MRI scans and CT scans. Such datasets will likely require some repair
first, such as cropping (cf. 3dZeropad), to make this program useful.
* Frankly, this program may not be that useful for any purpose :(
* The output is text to stdout containing 3 triples of numbers, all on
one line:
i j k xs ys zs xd yd zd
where
i j k = index triple of the central voxel
xs ys zs = values to use in '3drefit -dxorigin' (etc.)
to make (i,j,k) be at coordinates (x,y,z)=(0,0,0)
xd yd zd = DICOM-order (x,y,z) coordinates of (i,j,k) in the
input dataset
* The intention is that this output line be captured and then the
appropriate pieces be used for some higher purpose.
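* For example, a minimal csh sketch (assuming a dataset named
  'anat+orig') that captures the output and re-centers the dataset
  so the matched location becomes (x,y,z)=(0,0,0):
    set bm = ( `3dBallMatch anat+orig` )
    3drefit -dxorigin $bm[4] -dyorigin $bm[5] -dzorigin $bm[6] anat+orig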
--------------------------------------------------------------
SAMPLE SCRIPT - VISUALIZING THE MATCHED LOCATION (csh syntax)
--------------------------------------------------------------
Below is a script to process all the entries in a directory.
#!/bin/tcsh
# optional: start a virtual X11 server
set xdisplay = `count_afni -dig 1 3 999 R1`
echo " -- trying to start Xvfb :${xdisplay}"
Xvfb :${xdisplay} -screen 0 1024x768x24 >& /dev/null &
sleep 1
set display_old = $DISPLAY
setenv DISPLAY :${xdisplay}
# loop over all subjects
foreach sss ( sub-?????_T1w.nii.gz )
# extract subject ID code
set sub = `echo $sss | sed -e 's/sub-//' -e 's/_T1w.nii.gz//'`
# skip if already finished
if ( -f $sub.match ) continue
if ( -f $sub.sag.jpg ) continue
if ( -f $sub.cor.jpg ) continue
# run the program, save output to a file
3dBallMatch $sss > $sub.match
# capture the output for use below
set ijk = ( `cat $sub.match` )
echo $sub $ijk
# run afni to make some QC images
afni -DAFNI_NOSPLASH=YES \
-DAFNI_NOPLUGINS=YES \
-com "OPEN_WINDOW A.sagittalimage" \
-com "OPEN_WINDOW A.coronalimage" \
-com "SET_IJK $ijk[1-3]" \
-com "SAVE_JPEG A.sagittalimage $sub.sag.jpg" \
-com "SAVE_JPEG A.coronalimage $sub.cor.jpg" \
-com "QUITT" \
$sss
# end of loop over subject
end
# kill the virtual X11 server (if it was started above)
sleep 1
killall Xvfb
# make a movie of the sagittal slices
im_to_mov -resize -prefix Bsag -npure 4 -nfade 0 *.sag.jpg
# make a movie of the coronal slices
im_to_mov -resize -prefix Bcor -npure 4 -nfade 0 *.cor.jpg
exit 0
------------------------------------------------------------
SAMPLE SCRIPT - IMPROVING THE MATCHED LOCATION (csh syntax)
------------------------------------------------------------
This script is an extension of the one above: it uses
3dAllineate to align the human brain image to the MNI template,
guided by the initial point computed by 3dBallMatch. The output
of 3dAllineate is the coordinate of the center of the original
volume, stored in the first 3 values of the '*Aparam.1D' file.
* Note that the 3dAllineate step presumes that the input
dataset is a T1-weighted volume. A different set of options would
have to be used for an EPI (T2*-weighted) or T2-weighted volume.
* This script worked pretty well for putting the crosshairs at
the 'origin' of the brain -- near the anterior commissure.
Of course, you will need to evaluate its performance yourself.
#!/bin/tcsh
# optional: start Xvfb to avoid the AFNI GUI starting visibly
set xdisplay = `count_afni -dig 1 3 999 R1`
echo " -- trying to start Xvfb :${xdisplay}"
Xvfb :${xdisplay} -screen 0 1024x768x24 >& /dev/null &
sleep 1
set display_old = $DISPLAY
setenv DISPLAY :${xdisplay}
# loop over datasets in the current directory
foreach sss ( anat_sub?????.nii.gz )
# extract the subject identifier code (the '?????')
set sub = `echo $sss | sed -e 's/anat_sub//' -e 's/.nii.gz//'`
# if 3dAllineate was already run on this, skip to next dataset
if ( -f $sub.Aparam.1D ) continue
# find the 'center' voxel location with 3dBallMatch
if ( ! -f $sub.match ) then
echo "Running 3dBallMatch $sss"
3dBallMatch $sss | tee $sub.match
endif
# extract results from 3dBallMatch output
# in this case, we want the final triplet of coordinates
set ijk = ( `cat $sub.match` )
# set shift range to be 55 mm about 3dBallMatch coordinates
set xd = $ijk[7] ; set xbot = `ccalc "${xd}-55"` ; set xtop = `ccalc "${xd}+55"`
set yd = $ijk[8] ; set ybot = `ccalc "${yd}-55"` ; set ytop = `ccalc "${yd}+55"`
set zd = $ijk[9] ; set zbot = `ccalc "${zd}-55"` ; set ztop = `ccalc "${zd}+55"`
# Align the brain image volume with 3dAllineate:
# match to 'skull on' part of MNI template = sub-brick [1]
# only save the parameters, not the final aligned dataset
3dAllineate \
-base ~/abin/MNI152_2009_template_SSW.nii.gz'[1]' \
-source $sss \
-parang 1 $xbot $xtop \
-parang 2 $ybot $ytop \
-parang 3 $zbot $ztop \
-prefix NULL -lpa \
-1Dparam_save $sub.Aparam.1D \
-conv 3.666 -fineblur 3 -num_rtb 0 -norefinal -verb
# 1dcat (instead of cat) to strip off the comments at the top of the file
# the first 3 values in 'param' are the (x,y,z) shifts
# Those values could be used in 3drefit to re-center the dataset
set param = ( `1dcat $sub.Aparam.1D` )
# run AFNI to produce the snapshots with crosshairs at
# the 3dBallMatch center and the 3dAllineate center
# - B.*.jpg = 3dBallMatch result in crosshairs
# - A.*.jpg = 3dAllineate result in crosshairs
afni -DAFNI_NOSPLASH=YES \
-DAFNI_NOPLUGINS=YES \
-com "OPEN_WINDOW A.sagittalimage" \
-com "SET_IJK $ijk[1-3]" \
-com "SAVE_JPEG A.sagittalimage B.$sub.sag.jpg" \
-com "SET_DICOM_XYZ $param[1-3]" \
-com "SAVE_JPEG A.sagittalimage A.$sub.sag.jpg" \
-com "QUITT" \
$sss
# End of loop over datasets
end
# stop Xvfb (only needed if it was started above)
sleep 1
killall Xvfb
# make movies from the resulting images
im_to_mov -resize -prefix Bsag -npure 4 -nfade 0 B.[1-9]*.sag.jpg
im_to_mov -resize -prefix Asag -npure 4 -nfade 0 A.[1-9]*.sag.jpg
exit 0
----------------------------
HOW IT WORKS (approximately)
----------------------------
1] Create the automask of the input dataset (as in 3dAutomask).
+ This is a 0/1 binary marking of outside/inside voxels.
+ Then convert it to a -1/+1 mask instead.
2] Create a -1/+1 mask for the ball [-1=outside, +1=inside],
inside a rectangular box.
3] Convolve these 2 masks (using FFTs for speed).
+ Basically, this is moving the ball around, then adding up
the voxel counts where the masks match sign (both positive
means ball and dataset are both 'inside'; both negative
means ball and dataset are both 'outside'), and subtracting
off the voxel counts where the mask differ in sign
(one is 'inside' and one is 'outside' == not matched).
+ That is, the convolution value is the sum of matched voxels
minus the sum of mismatched voxels, at every location of
offset (i,j,k) of the corner of the ball mask.
+ The ball mask is in a cube of side 2*radius, which has volume
8*radius^3. The volume of the ball is 4*pi/3*radius^3, so the
inside of the ball is about 4*pi/(3*8) = 52% of the volume of the cube
-- that is, inside and outside voxels are (roughly) matched, so they
have (approximately) equal weight.
+ Most of the CPU time is in the 3D FFTs required.
4] Find the centroid of the locations where the convolution
is positive (matches win over non-matches) and at least 5%
of the maximum convolution. This centroid gives (i,j,k).
Why the centroid? I found that the peak convolution location
is not very stable, as a lot of locations have results barely less
than the peak value -- it was more stable to average them together.
------------------------
WHY 'ball' NOT 'sphere'?
------------------------
* Because a 'sphere' is a 2D object, the surface of the 3D object 'ball'.
* Because my training was in mathematics, where precise terminology has
been developed and honed for centuries.
* Because I'm yanking your chain. Any other questions? No? Good.
-------
CREDITS
-------
By RWCox, September 2020 (the year it all fell apart).
Delenda est. Never forget.
AFNI program: 3dBandpass
--------------------------------------------------------------------------
** NOTA BENE: For the purpose of preparing resting-state FMRI datasets **
** for analysis (e.g., with 3dGroupInCorr), this program is now mostly **
** superseded by the afni_proc.py script. See the 'afni_proc.py -help' **
** section 'Resting state analysis (modern)' to get our current rs-FMRI **
** pre-processing recommended sequence of steps. -- RW Cox, et alii. **
--------------------------------------------------------------------------
** If you insist on doing your own bandpassing, I now recommend using **
** program 3dTproject instead of this program. 3dTproject also can do **
** censoring and other nuisance regression at the same time -- RW Cox. **
--------------------------------------------------------------------------
Usage: 3dBandpass [options] fbot ftop dataset
* One function of this program is to prepare datasets for input
to 3dSetupGroupInCorr. Other uses are left to your imagination.
* 'dataset' is a 3D+time sequence of volumes
++ This must be a single imaging run -- that is, no discontinuities
in time from 3dTcat-ing multiple datasets together.
* fbot = lowest frequency in the passband, in Hz
++ fbot can be 0 if you want to do a lowpass filter only;
HOWEVER, the mean and Nyquist freq are always removed.
* ftop = highest frequency in the passband (must be > fbot)
++ if ftop > Nyquist freq, then it's a highpass filter only.
* Set fbot=0 and ftop=99999 to do an 'allpass' filter.
++ Except for removal of the 0 and Nyquist frequencies, that is.
* You cannot construct a 'notch' filter with this program!
++ You could use 3dBandpass followed by 3dcalc to get the same effect.
++ If you understand what you are doing, that is.
++ Of course, that is the AFNI way -- if you don't want to
understand what you are doing, use Some other PrograM, and
you can still get Fine StatisticaL maps.
* 3dBandpass will fail if fbot and ftop are too close for comfort.
++ Which means closer than one frequency grid step df,
where df = 1 / (nfft * dt) [of course]
* The actual FFT length used will be printed, and may be larger
than the input time series length for the sake of efficiency.
++ The program will use a power-of-2, possibly multiplied by
a power of 3 and/or 5 (up to and including the 3rd power of
each of these: 3, 9, 27, and 5, 25, 125).
* Note that the results of combining 3dDetrend and 3dBandpass will
depend on the order in which you run these programs. That's why
3dBandpass has the '-ort' and '-dsort' options, so that the
time series filtering can be done properly, in one place.
* The output dataset is stored in float format.
* The order of processing steps is the following (most are optional):
(0) Check time series for initial transients [does not alter data]
(1) Despiking of each time series
(2) Removal of a constant+linear+quadratic trend in each time series
(3) Bandpass of data time series
(4) Bandpass of -ort time series, then detrending of data
with respect to the -ort time series
(5) Bandpass and de-orting of the -dsort dataset,
then detrending of the data with respect to -dsort
(6) Blurring inside the mask [might be slow]
(7) Local PV calculation [WILL be slow!]
(8) L2 normalization [will be fast.]
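* For example, a typical (hypothetical) invocation, bandpassing to
  0.01-0.10 Hz inside a mask while orthogonalizing to motion
  regressors (dataset and file names here are assumed):
    3dBandpass -mask mask+orig -ort motion.1D -prefix rest_bp \
               0.01 0.10 rest+orig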
--------
OPTIONS:
--------
-despike = Despike each time series before other processing.
++ Hopefully, you don't actually need to do this,
which is why it is optional.
-ort f.1D = Also orthogonalize input to columns in f.1D
++ Multiple '-ort' options are allowed.
-dsort fset = Orthogonalize each voxel to the corresponding
voxel time series in dataset 'fset', which must
have the same spatial and temporal grid structure
as the main input dataset.
++ At present, only one '-dsort' option is allowed.
-nodetrend = Skip the quadratic detrending of the input that
occurs before the FFT-based bandpassing.
++ You would only want to do this if the dataset
had been detrended already in some other program.
-dt dd = set time step to 'dd' sec [default=from dataset header]
-nfft N = set the FFT length to 'N' [must be a legal value]
-norm = Make all output time series have L2 norm = 1
++ i.e., sum of squares = 1
-mask mset = Mask dataset
-automask = Create a mask from the input dataset
-blur fff = Blur (inside the mask only) with a filter
width (FWHM) of 'fff' millimeters.
-localPV rrr = Replace each vector by the local Principal Vector
(AKA first singular vector) from a neighborhood
of radius 'rrr' millimeters.
++ Note that the PV time series is L2 normalized.
++ This option is mostly for Bob Cox to have fun with.
-input dataset = Alternative way to specify input dataset.
-band fbot ftop = Alternative way to specify passband frequencies.
-prefix ppp = Set prefix name of output dataset.
-quiet = Turn off the fun and informative messages. (Why?)
-notrans = Don't check for initial positive transients in the data:
*OR* ++ The test is a little slow, so skipping it is OK,
-nosat if you KNOW the data time series are transient-free.
++ Or set AFNI_SKIP_SATCHECK to YES.
++ Initial transients won't be handled well by the
bandpassing algorithm, and in addition may seriously
contaminate any further processing, such as inter-voxel
correlations via InstaCorr.
++ No other tests are made [yet] for non-stationary behavior
in the time series data.
=========================================================================
* This binary version of 3dBandpass is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUs, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
* At present, the only part of 3dBandpass that is parallelized is the
'-blur' option, which processes each sub-brick independently.
=========================================================================
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dBlurInMask
Usage: ~1~
3dBlurInMask [options]
Blurs a dataset spatially inside a mask. That's all. Experimental.
OPTIONS ~1~
-------
-input ddd = This required 'option' specifies the dataset
that will be smoothed and output.
-FWHM f = Add 'f' amount of smoothness to the dataset (in mm).
**N.B.: This is also a required 'option'.
-FWHMdset d = Read in dataset 'd' and add the amount of smoothness
given at each voxel -- spatially variable blurring.
** EXPERIMENTAL EXPERIMENTAL EXPERIMENTAL **
-mask mmm = Mask dataset, if desired. Blurring will
occur only within the mask. Voxels NOT in
the mask will be set to zero in the output.
-Mmask mmm = Multi-mask dataset -- each distinct nonzero
value in dataset 'mmm' will be treated as
a separate mask for blurring purposes.
**N.B.: 'mmm' must be byte- or short-valued!
-automask = Create an automask from the input dataset.
**N.B.: only 1 masking option can be used!
-preserve = Normally, voxels not in the mask will be
set to zero in the output. If you want the
original values in the dataset to be preserved
in the output, use this option.
-prefix ppp = Prefix for output dataset will be 'ppp'.
**N.B.: Output dataset is always in float format.
-quiet = Don't be verbose with the progress reports.
-float = Save dataset as floats, no matter what the
input data type is.
**N.B.: If the input dataset is unscaled shorts, then
the default is to save the output in short
format as well. In EVERY other case, the
program saves the output as floats. Thus,
the ONLY purpose of the '-float' option is to
force an all-shorts input dataset to be saved
as all-floats after blurring.
** NEW IN 2021 **
-FWHMxyz fx fy fz = Add different amounts of smoothness in the 3
spatial directions.
** If one of the 'f' values is 0, no smoothing is done
in that direction.
** Here, the axes names ('x', 'y', 'z') refer to the
order of storage in the dataset, as can be seen
in the output of 3dinfo; for example, from a dataset
that I happen to have lying around:
Data Axes Orientation:
first (x) = Anterior-to-Posterior
second (y) = Superior-to-Inferior
third (z) = Left-to-Right
In this example, 'fx' is the FWHM blurring along the
A-P direction, et cetera.
** In other words, x-y-z does not necessarily refer
to the DICOM order of coordinates (R-L, A-P, I-S)!
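** For example (a hypothetical sketch; dataset names are assumed):
     3dBlurInMask -input epi+orig -mask mask+orig \
                  -FWHMxyz 4 4 0 -prefix epi_inplane
   would add 4 mm of smoothness along the first two storage axes
   and none along the third.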
NOTES ~1~
-----
* If you don't provide a mask, then all voxels will be included
in the blurring. (But then why are you using this program?)
* Note that voxels inside the mask that are not contiguous with
any other voxels inside the mask will not be modified at all!
* Works iteratively, similarly to 3dBlurToFWHM, but without
the extensive overhead of monitoring the smoothness.
* But this program will be faster than 3dBlurToFWHM, and probably
slower than 3dmerge.
* Since the blurring is done iteratively, rather than all-at-once as
in 3dmerge, the results will be slightly different than 3dmerge's,
even if no mask is used here (3dmerge, of course, doesn't take a mask).
* If the original FWHM of the dataset was 'S' and you input a value
'F' with the '-FWHM' option, then the output dataset's smoothness
will be about sqrt(S*S+F*F). The number of iterations will be
about (F*F)/(d*d), where d = grid spacing; this means that a large value
of F might take a lot of CPU time!
* The spatial smoothness of a 3D+time dataset can be estimated with a
command similar to the following:
3dFWHMx -detrend -mask mmm+orig -input ddd+orig
* The minimum number of voxels in the mask is 9.
* Isolated voxels will be removed from the mask!
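* For example, one plausible way to check the sqrt(S*S+F*F) rule of
  thumb above (dataset names are assumed): estimate the smoothness
  before and after blurring,
    3dFWHMx -detrend -mask mmm+orig -input ddd+orig
    3dBlurInMask -input ddd+orig -mask mmm+orig -FWHM 6 -prefix ddd_blur6
    3dFWHMx -detrend -mask mmm+orig -input ddd_blur6+orig
  and compare the second estimate to sqrt(S*S+36), where S is the
  first estimate.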
=========================================================================
* This binary version of 3dBlurInMask is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work across cluster nodes).
* For some implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUs, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 1.
* The maximum number of CPUs that will be used is now set to .... 1.
=========================================================================
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dBlurToFWHM
Usage: 3dBlurToFWHM [options]
Blurs a 'master' dataset until it reaches a specified FWHM
smoothness (approximately). The same blurring schedule is
applied to the input dataset to produce the output. The goal
is to make the output dataset have the given smoothness, no
matter what smoothness it had on input (however, the program
cannot 'unsmooth' a dataset!). See below for the METHOD used.
OPTIONS
-------
-input ddd = This required 'option' specifies the dataset
that will be smoothed and output.
-blurmaster bbb = This option specifies the dataset whose
smoothness controls the process.
**N.B.: If not given, the input dataset is used.
**N.B.: This should be one continuous run.
Do not input catenated runs!
-prefix ppp = Prefix for output dataset will be 'ppp'.
**N.B.: Output dataset is always in float format.
-mask mmm = Mask dataset, if desired. Blurring will
occur only within the mask. Voxels NOT in
the mask will be set to zero in the output.
-automask = Create an automask from the input dataset.
**N.B.: Not useful if the input dataset has been
detrended or otherwise regressed before input!
-FWHM f = Blur until the 3D FWHM is 'f'.
-FWHMxy f = Blur until the 2D (x,y)-plane FWHM is 'f'.
No blurring is done along the z-axis.
**N.B.: Note that you can't REDUCE the smoothness
of a dataset.
**N.B.: Here, 'x', 'y', and 'z' refer to the
grid/slice order as stored in the dataset,
not DICOM ordered coordinates!
**N.B.: With -FWHMxy, smoothing is done only in the
dataset xy-plane. With -FWHM, smoothing
is done in 3D.
**N.B.: The actual goal is reached when
-FWHM : cbrt(FWHMx*FWHMy*FWHMz) >= f
-FWHMxy: sqrt(FWHMx*FWHMy) >= f
That is, when the area or volume of a
'resolution element' goes past a threshold.
-quiet = Shut up the verbose progress reports.
**N.B.: This should be the first option, to stifle
any verbosity from the option processing code.
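EXAMPLE:
A plausible command (dataset names are assumed) to blur an EPI time
series to a uniform 8 mm FWHM, using the residuals from 3dDeconvolve
as the blurmaster:
   3dBlurToFWHM -input epi+orig -blurmaster errts+orig \
                -mask mask+orig -FWHM 8 -prefix epi_fwhm8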
FILE RECOMMENDATIONS for -blurmaster:
For FMRI statistical purposes, you DO NOT want the FWHM to reflect
the spatial structure of the underlying anatomy. Rather, you want
the FWHM to reflect the spatial structure of the noise. This means
that the -blurmaster dataset should not have anatomical structure. One
good form of input is the output of '3dDeconvolve -errts', which is
the residuals left over after the GLM fitted signal model is subtracted
out from each voxel's time series. You can also use the output of
'3dREMLfit -Rerrts' or '3dREMLfit -Rwherr' for this purpose.
You CAN give a multi-brick EPI dataset as the -blurmaster dataset; the
dataset will be detrended in time (like the -detrend option in 3dFWHMx)
which will tend to remove the spatial structure. This makes it
practicable to make the input and blurmaster datasets be the same,
without having to create a detrended or residual dataset beforehand.
Considering the accuracy of blurring estimates, this is probably good
enough for government work [that is an insider's joke :-].
N.B.: Do not use catenated runs as blurmasters. There should
be no discontinuities in the time axis of blurmaster, which would
make the simple regression detrending do peculiar things.
ALSO SEE:
* 3dFWHMx, which estimates smoothness globally
* 3dLocalstat -stat FWHM, which estimates smoothness locally
* This paper, which discusses the need for a fixed level of smoothness
when combining FMRI datasets from different scanner platforms:
Friedman L, Glover GH, Krenz D, Magnotta V; The FIRST BIRN.
Reducing inter-scanner variability of activation in a multicenter
fMRI study: role of smoothness equalization.
Neuroimage. 2006 Oct 1;32(4):1656-68.
METHOD:
The blurring is done by a conservative finite difference approximation
to the diffusion equation:
du/dt = d/dx[ D_x(x,y,z) du/dx ] + d/dy[ D_y(x,y,z) du/dy ]
+ d/dz[ D_z(x,y,z) du/dz ]
= div[ D(x,y,z) grad[u(x,y,z)] ]
where diffusion tensor D() is diagonal, Euler time-stepping is used, and
with Neumann (reflecting) boundary conditions at the edges of the mask
(which ensures that voxel data inside and outside the mask don't mix).
* At each pseudo-time step, the FWHM is estimated globally (like '3dFWHMx')
and locally (like '3dLocalstat -stat FWHM'). Voxels where the local FWHM
goes past the goal will not be smoothed any more (D gets set to zero).
* When the global smoothness estimate gets close to the goal, the blurring
rate (pseudo-time step) will be reduced, to avoid over-smoothing.
* When an individual direction's smoothness (e.g., FWHMz) goes past the goal,
all smoothing in that direction stops, but the other directions continue
to be smoothed until the overall resolution element goal is achieved.
* When the global FWHM estimate reaches the goal, the program is done.
It will also stop if progress stalls for some reason, or if the maximum
iteration count is reached (infinite loops being unpopular).
* The output dataset will NOT have exactly the smoothness you ask for, but
it will be close (fondly we do hope). In our Imperial experiments, the
results (measured via 3dFWHMx) are within 10% of the goal (usually better).
* 2D blurring via -FWHMxy may increase the smoothness in the z-direction
reported by 3dFWHMx, even though there is no inter-slice processing.
At this moment, I'm not sure why. It may be an estimation artifact due
to increased correlation in the xy-plane that biases the variance estimates
used to calculate FWHMz.
ADVANCED OPTIONS:
-maxite ccc = Set maximum number of iterations to 'ccc' [Default=variable].
-rate rrr = The value of 'rrr' should be a number between
0.05 and 3.5, inclusive. It is a factor to change
the overall blurring rate (slower for rrr < 1), and thus
require more or fewer blurring steps. This option should only
be needed to slow down the program if it over-smooths
significantly (e.g., it overshoots the desired FWHM in
Iteration #1 or #2). You can increase the speed by using
rrr > 1, but be careful and examine the output.
-nbhd nnn = As in 3dLocalstat, specifies the neighborhood
used to compute local smoothness.
[Default = 'SPHERE(-4)' in 3D, 'SPHERE(-6)' in 2D]
** N.B.: For the 2D -FWHMxy, a 'SPHERE()' nbhd
is really a circle in the xy-plane.
** N.B.: If you do NOT want to estimate local
smoothness, use '-nbhd NULL'.
-ACF or -acf = Use the 'ACF' method (from 3dFWHMx) to estimate
the global smoothness, rather than the 'classic'
Forman 1995 method. This option will be somewhat
slower. It will also set '-nbhd NULL', since there
is no local ACF estimation method implemented.
-bsave bbb = Save the local smoothness estimates at each iteration
with dataset prefix 'bbb' [for debugging purposes].
-bmall = Use all blurmaster sub-bricks.
[Default: a subset will be chosen, for speed]
-unif = Uniformize the voxel-wise MAD in the blurmaster AND
input datasets prior to blurring. Will be restored
in the output dataset.
-detrend = Detrend blurmaster dataset to order NT/30 before starting.
-nodetrend = Turn off detrending of blurmaster.
** N.B.: '-detrend' is the new default [05 Jun 2007]!
-detin = Also detrend input before blurring it, then retrend
it afterwards. [Off by default]
-temper = Try harder to make the smoothness spatially uniform.
-- Author: The Dreaded Emperor Zhark - Nov 2006
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dBrainSync
Usage: 3dBrainSync [options]
This program 'synchronizes' the -inset2 dataset to match the -inset1
dataset, as much as possible (average voxel-wise correlation), using the
same transformation on each input time series from -inset2:
++ With the -Qprefix option, the transformation is an orthogonal matrix,
computed as described in Joshi's original OHBM 2017 presentations,
and in the corresponding NeuroImage 2018 paper.
-->> Anand Joshi's presentation at OHBM was the genesis of this program.
++ With the -Pprefix option, the transformation is simply a
permutation of the time order of -inset2 (a very special case
of an orthogonal matrix).
++ The algorithms and a little discussion of the different features of
these two techniques are discussed in the METHODS section, infra.
++ At least one of '-Qprefix' or '-Pprefix' must be given, or
this program does not do anything! You can use both methods,
if you want to compare them.
++ 'Harmonize' might be a better name for what this program does,
but calling it 3dBrainHarm would probably not be good marketing
(except for Traumatic Brain Injury researchers?).
One possible application of this program is to correlate resting state
FMRI datasets between subjects, voxel-by-voxel, as is sometimes done
with naturalistic stimuli (e.g., movie viewing).
It would be amusing to see if within-subject resting state FMRI
runs can be BrainSync-ed better than between-subject runs.
--------
OPTIONS:
--------
-inset1 dataset1 = Reference dataset
-inset2 dataset2 = Dataset to be matched to the reference dataset,
as much as possible.
++ These 2 datasets must be on the same spatial grid,
and must have the same number of time points!
++ There must be at least twice as many voxels being
processed as there are time points (see '-mask', below).
++ These are both MANDATORY 'options'.
++ As usual in AFNI, since the computations herein are
voxel-wise, it is possible to input plain text .1D
files as datasets. When doing so, remember that
a ROW in the .1D file is interpreted as a time series
(single voxel's data). If your .1D files are oriented
so that time runs down the COLUMNS, you will have to
transpose the inputs, which can be done on the command
line with the \' operator, or externally using the
1dtranspose program.
-->>++ These input datasets should be pre-processed first
to remove undesirable components (motions, baseline,
spikes, breathing, etc). Otherwise, you will be trying
to match artifacts between the datasets, which is not
likely to be interesting or useful. 3dTproject would be
one way to do this. Even better: afni_proc.py!
++ In particular, the mean of each time series should have
been removed! Otherwise, the calculations are fairly
meaningless.
-Qprefix qqq = Specifies the output dataset to be used for
the orthogonal matrix transformation.
++ This will be the -inset2 dataset transformed
to be as correlated as possible (in time)
with the -inset1 dataset, given the constraint
that the transformation applied to each time
series is an orthogonal matrix.
-Pprefix ppp = Specifies the output dataset to be used for
the permutation transformation.
++ The output dataset is the -inset2 dataset
re-ordered in time, again to make the result
as correlated as possible with the -inset1
dataset.
-normalize = Normalize the output dataset(s) so that each
time series has sum-of-squares = 1.
++ This option is not usually needed in AFNI
(e.g., 3dTcorrelate does not care).
-mask mset = Only operate on nonzero voxels in the mset dataset.
++ Voxels outside the mask will not be used in computing
the transformation, but WILL be transformed for
your application and/or edification later.
++ For FMRI purposes, a gray matter mask would make
sense here, or at least a brain mask.
++ If no masking option is given, then all voxels
will be processed in computing the transformation.
This set will include all non-brain voxels (if any).
++ Any voxel which is all constant in time
(in either input) will be removed from the mask.
++ This mask dataset must be on the same spatial grid
as the other input datasets!
-verb = Print some progress reports and auxiliary information.
++ Use this option twice to get LOTS of progress
reports; mostly useful for debugging, or for fun.
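For example, a hypothetical invocation (dataset and prefix names are
assumed), using a gray matter mask and both transformation methods:
   3dBrainSync -inset1 subjA_errts+tlrc -inset2 subjB_errts+tlrc \
               -mask graymask+tlrc -Qprefix subjB_Q -Pprefix subjB_P -verb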
------
NOTES:
------
* Is this program useful? Not even The Shadow knows!
(But do NOT call it BS.)
* The output dataset is in floating point format.
* Although the goal of 3dBrainSync is to make the transformed
-inset2 as correlated (voxel-by-voxel) as possible with -inset1,
it does not actually compute or output that correlation dataset.
You can do that computation with program 3dTcorrelate, as in
3dBrainSync -inset1 dataset1 -inset2 dataset2 \
-Qprefix transformed-dataset2
3dTcorrelate -polort -1 -prefix AB.pcor.nii \
dataset1 transformed-dataset2
* Besides the transformed dataset(s), if the '-verb' option is used,
some other (text formatted) files are written out:
{Qprefix}.sval.1D = singular values from the BC' decomposition
{Qprefix}.qmat.1D = Q matrix
{Pprefix}.perm.1D = permutation indexes p(i)
You probably do not have any use for these files; they are mostly
present to diagnose any problems.
--------
METHODS:
--------
* Notation used in the explanations below:
M = Number of time points
N = Number of voxels > M (N = size of mask)
B = MxN matrix of time series from -inset1
C = MxN matrix of time series from -inset2
Both matrices will have each column normalized to
have sum-of-squares = 1 (L2 normalized) --
The program does this operation internally; you do not have
to ensure that the input datasets are so normalized.
Q = Desired orthogonal MxM matrix to transform C such that B-QC
is as small as possible (sum-of-squares = Frobenius norm).
That is, Q transforms dataset C to be as close as possible
to dataset B, given that Q is an orthogonal matrix.
normF(A) = sum_{ij} A_{ij}^2 = trace(AA') = trace(A'A).
NOTE: This norm is different from the matrix L2 norm.
NOTE: A' denotes the transpose of A.
NOTE: trace(A) = sum of diagonal elements of square matrix A.
https://en.wikipedia.org/wiki/Matrix_norm
* The expansion below shows why the matrix BC' is crucial to the analysis:
normF(B-QC) = trace( [B-QC][B'-C'Q'] )
= trace(BB') + trace(QCC'Q') - trace(BC'Q') - trace(QCB')
= trace(BB') + trace(C'C) - 2 trace(BC'Q')
The second term collapses because trace(AA') = trace(A'A), so
trace([QC][QC]') = trace([QC]'[QC]) = trace(C'Q'QC) = trace(C'C)
because Q is orthogonal. So the first 2 terms in the expansion of
normF(B-QC) do not depend on Q at all. Thus, to minimize normF(B-QC),
we have to maximize trace(BC'Q') = trace([B][QC]') = trace([QC][B]').
Since the columns of B and C are the (normalized) time series,
each row represents the image at a particular time. So the (i,j)
element of BC' is the (spatial) dot product of the i-th TR image from
-inset1 with the j-th TR image from -inset2. Furthermore,
trace(BC') = trace(C'B) = sum of dot products (correlations)
of all time series. So maximizing trace(BC'Q') will maximize the
summed correlations of B (time series from -inset1) and QC
(transformed time series from -inset2).
Note again that the sum of correlations (dot products) of all the time
series is equal to the sum of dot products of all the spatial images.
So the algorithm to find the transformation Q is to maximize the sum of
dot products of spatial images from B with Q-transformed spatial images
from C -- since there are fewer time points than voxels, this is more
efficient and elegant than trying to maximize the sum over voxels of dot
products of time series.
If you use the '-verb' option, these summed correlations ('scores')
are printed to stderr during the analysis, for your fun and profit(?).
*******************************************************************************
* Joshi method [-Qprefix]:
(a) compute MxM matrix B C'
(b) compute SVD of B C' = U S V' (U, S, V are MxM matrices)
(c) Q = U V'
[note: if B=C, then U=V, so Q=I, as it should]
(d) transform each time series from -inset2 using Q
This matrix Q is the solution to the restricted least squares
problem (i.e., restricted to have Q be an orthogonal matrix).
NOTE: The sum of the singular values in S is equal to the sum
of the time series dot products (correlations) in B and QC,
when Q is calculated as above.
An article describing this method is available as:
AA Joshi, M Chong, RM Leahy.
Are you thinking what I'm thinking? Synchronization of resting fMRI
time-series across subjects.
NeuroImage v172:740-752 (2018).
https://doi.org/10.1016/j.neuroimage.2018.01.058
https://pubmed.ncbi.nlm.nih.gov/29428580/
https://www.google.com/search?q=joshi+brainsync
*******************************************************************************
* Permutation method [-Pprefix]:
(a) Compute B C' (same as above)
(b) Find a permutation p(i) of the integers {0..M-1} such
that sum_i { (BC')[i,p(i)] } is as large as possible
(i.e., p() is used as a permutation of the COLUMNS of BC').
This permutation is equivalent to post-multiplying BC'
by an orthogonal matrix P representing the permutation;
such a P is full of 0s except for a single 1 in each row
and each column.
(c) Permute the ROWS (time direction) of the time series matrix
from -inset2 using p().
Only an approximate (greedy) algorithm is used to find this
permutation; that is, the BEST permutation is not guaranteed to be found
(just a 'good' permutation -- it is the best thing I could code quickly :).
Algorithm currently implemented (let D=BC' for notational simplicity):
1) Find the largest element D(i,j) in the matrix.
Then the permutation at row i is p(i)=j.
Strike row i and column j out of the matrix D.
2) Repeat, finding the largest element left, say at D(f,g).
Then p(f) = g. Strike row f and column g from the matrix.
Repeat until done.
(Choosing the largest possible element at each step is what makes this
method 'greedy'.) This permutation is not optimal but is pretty good,
and another step is used to improve it:
3) For all pairs (i,j), p(i) and p(j) are swapped and that permutation
is tested to see if the trace gets bigger.
4) This pair-wise swapping is repeated until it does not improve things
any more (typically, it improves the trace about 1-2% -- not much).
The purpose of the pair swapping is to deal with situations where D looks
something like this: [ 1 70 ]
[ 70 99 ]
Step 1 would pick out 99, and Step 2 would pick out 1; that is,
p(2)=2 and then p(1)=1, for a total trace/score of 100. But swapping
1 and 2 would give a total trace/score of 140. In practice, extreme versions
of this situation do not seem common with real FMRI data, probably because
the subject's brain isn't actively conspiring against this algorithm :)
[Something called the 'Hungarian algorithm' can solve for the optimal]
[permutation exactly, but I've not had the inclination to program it.]
This whole permutation optimization procedure is very fast: about 1 second.
In the RS-FMRI data I've tried this on, the average time series correlation
resulting from this optimization is 30-60% of that which comes from
optimizing over ALL orthogonal matrices (Joshi method). If you use '-verb',
the stderr output line that looks like this
+ corr scores: original=-722.5 Q matrix=22366.0 permutation=12918.7 57.8%
shows trace(BC') before any transforms, with the Q matrix transform,
and with the permutation transform. As explained above, trace(BC') is
the summed correlations of the time series (since the columns of B and C
are normalized prior to the optimizations); in this example, the ratio of
the average time series correlation between the permutation method and the
Joshi method is about 58% (in a gray matter mask with 72221 voxels).
* Results from the permutation method MUST be less correlated (on average)
with -inset1 than the Joshi method's results: the permutation can be
thought of as an orthogonal matrix containing only 1s and 0s, and the BEST
possible orthogonal matrix, from Joshi's method, has more general entries.
++ However, the permutation method has an obvious interpretation
(re-ordering time points), while the general method linearly combines
different time points (perhaps far apart); the interpretation of this
combination in terms of synchronizing brain activity is harder to intuit
(at least for me).
++ Another feature of a permutation-only transformation is that it cannot
change the sign of data, unlike a general orthogonal matrix; e.g.,
[ 0 -1 0 ]
[-1 0 0 ]
[ 0 0 1 ], which swaps the first 2 time points AND negates them,
and leaves the 3rd time point unchanged, is a valid orthogonal
matrix. For rs-FMRI datasets, this consideration might not be important,
since rs-FMRI correlations are generally positive, so don't often need
sign-flipping to make them so.
*******************************************************************************
* This program is NOT multi-threaded. Typically, I/O is the biggest part of
the run time (at least, for the cases I've tested). The '-verb' option
will give progress reports with elapsed-time stamps, making it easy to
see which parts of the program take the most time.
* Author: RWCox, servant of the ChronoSynclastic Infundibulum - July 2017
* Thanks go to Anand Joshi for his clear exposition of BrainSync at OHBM 2017,
and his encouragement about the development of this program.
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dBRAIN_VOYAGERtoAFNI
Usage: 3dBRAIN_VOYAGERtoAFNI <-input BV_VOLUME.vmr>
[-bs] [-qx] [-tlrc|-acpc|-orig] [<-prefix PREFIX>]
Converts a BrainVoyager vmr dataset to AFNI's BRIK format
The conversion is based on information from BrainVoyager's
website: www.brainvoyager.com.
Sample data and information provided by
Adam Greenberg and Nikolaus Kriegeskorte.
If you get error messages about the number of
voxels and file size, try the options below.
I hope to automate these options once I have
a better description of the BrainVoyager QX format.
Optional Parameters:
-bs: Force byte swapping.
-qx: .vmr file is from BrainVoyager QX
-tlrc: dset in tlrc space
-acpc: dset in acpc-aligned space
-orig: dset in orig space
If unspecified, the program attempts to guess the view from
the name of the input.
[-novolreg]: Ignore any Rotate, Volreg, Tagalign,
or WarpDrive transformations present in
the Surface Volume.
[-noxform]: Same as -novolreg
[-setenv "'ENVname=ENVvalue'"]: Set environment variable ENVname
to be ENVvalue. Quotes are necessary.
Example: suma -setenv "'SUMA_BackgroundColor = 1 0 1'"
See also options -update_env, -environment, etc
in the output of 'suma -help'
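Example (a sketch; 'anat.vmr' is an assumed input file name):
   3dBRAIN_VOYAGERtoAFNI -input anat.vmr -qx -orig -prefix anat_bv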
Common Debugging Options:
[-trace]: Turns on In/Out debug and Memory tracing.
For speeding up the tracing log, I recommend
you redirect stdout to a file when using this option.
For example, if you were running suma you would use:
suma -spec lh.spec -sv ... > TraceFile
This option replaces the old -iodbg and -memdbg.
[-TRACE]: Turns on extreme tracing.
[-nomall]: Turn off memory tracing.
[-yesmall]: Turn on memory tracing (default).
NOTE: For programs that output results to stdout
(that is to your shell/screen), the debugging info
might get mixed up with your results.
Global Options (available to all AFNI/SUMA programs)
-h: Mini help; at times, same as -help in many cases.
-help: The entire help output
-HELP: Extreme help, same as -help in the majority of cases.
-h_view: Open help in text editor. AFNI will try to find a GUI editor
-hview : on your machine. You can control which it should use by
setting environment variable AFNI_GUI_EDITOR.
-h_web: Open help in web browser. AFNI will try to find a browser.
-hweb : on your machine. You can control which it should use by
setting environment variable AFNI_WEB_BROWSER.
-h_find WORD: Look for lines in this program's -help output that match
(approximately) WORD.
-h_raw: Help string unedited
-h_spx: Help string in sphinx loveliness, but do not try to autoformat
-h_aspx: Help string in sphinx with autoformatting of options, etc.
-all_opts: Try to identify all options for the program from the
output of its -help option. Some options might be missed
and others misidentified. Use this output for hints only.
Compile Date:
Oct 1 2024
Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov
AFNI program: 3dBrickStat
Usage: 3dBrickStat [options] dataset
Compute maximum and/or minimum voxel values of an input dataset
The output is a number written to the console. The input dataset
may use a sub-brick selection list, as in program 3dcalc.
Note that this program computes ONE number as the output; e.g.,
the mean over all voxels and time points. If you want (say) the
mean over all voxels but for each time point individually, see
program 3dmaskave.
Note: If you don't specify one sub-brick, the parameter you get
----- back is computed from all the sub-bricks in the dataset.
Options :
-quick = get the information from the header only (default)
-slow = read the whole dataset to find the min and max values
all other options except min and max imply slow
-min = print the minimum value in dataset
-max = print the maximum value in dataset (default)
-mean = print the mean value in dataset
-sum = print the sum of values in the dataset
-var = print the variance in the dataset
-stdev = print the standard deviation in the dataset
-stdev and -var are mutually exclusive
-count = print the number of voxels included
-volume = print the volume of voxels included in microliters
-positive = include only positive voxel values
-negative = include only negative voxel values
-zero = include only zero voxel values
-non-positive = include only voxel values 0 or negative
-non-negative = include only voxel values 0 or greater
-non-zero = include only voxel values not equal to 0
-absolute = use absolute value of voxel values for all calculations
can be combined with restrictive non-positive, non-negative,
etc. even if not practical. Ignored for percentile and
median computations.
-nan = include only voxel values that are not numbers (e.g., NaN or inf).
This is basically meant for counting bad numbers in a dataset.
-nan forces -slow mode.
-nonan = exclude voxel values that are not numbers
(exclude any NaN or inf values from computations).
-mask dset = use dset as mask to include/exclude voxels
-mrange MIN MAX = Only accept values between MIN and MAX (inclusive)
from the mask. The default is to accept all non-zero
voxels.
-mvalue VAL = Only accept values equal to VAL from the mask.
-automask = automatically compute mask for dataset
Cannot be combined with -mask
-percentile p0 ps p1 write the percentile values starting
at p0% and ending at p1% at a step of ps%
Output is of the form p% value p% value ...
Percentile values are output first.
Only one sub-brick is accepted as input with this option.
Write the author if you REALLY need this option
to work with multiple sub-bricks.
-perclist NUM_PERC PERC1 PERC2 ...
Like -percentile, but output the given percentiles, rather
than a list on an evenly spaced grid using 'ps'.
-median a shortcut for -percentile 50 1 50 (or -perclist 1 50)
-perc_quiet = only print percentile results, not input percentile cutoffs
-ver = print author and version info
-help = print this help screen
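For example, to capture the 90th percentile of the positive values in
a csh variable (a sketch; 'epi+orig' is an assumed dataset name):
   set p90 = `3dBrickStat -positive -perc_quiet -perclist 1 90 epi+orig`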
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = Oct 1 2024 {AFNI_24.3.00:linux_ubuntu_24_64}
AFNI program: 3dbucket
++ 3dbucket: AFNI version=AFNI_24.3.00 (Oct 1 2024)