All AFNI program -help files
This page auto-generated on Thu Dec 12 01:23:53 EST 2019
AFNI program: 1dApar2mat
Usage: 1dApar2mat dx dy dz a1 a2 a3 sx sy sz hx hy hz
* This program computes the affine transformation matrix
from the set of 3dAllineate parameters.
* The result is printed to stdout, and can be captured
by Unix shell redirection or piping (e.g., '>', '>>', '|', etc.).
See the EXAMPLE, far below.
* One use for 1dApar2mat is to take a set of parameters
from '3dAllineate -1Dparam_save', alter them in some way,
and re-compute the corresponding matrix. For example,
compute the full affine transform with 12 parameters,
but then omit the final 6 parameters to see what the
'pure' shift+rotation matrix looks like.
* The 12 parameters are, in the order used on the 1dApar2mat command line
(the same order as output by 3dAllineate):
x-shift in mm
y-shift in mm
z-shift in mm
z-angle (roll) in degrees (not radians!)
x-angle (pitch) in degrees
y-angle (yaw) in degrees
x-scale unitless factor, in [0.10,10.0]
y-scale unitless factor, in [0.10,10.0]
z-scale unitless factor, in [0.10,10.0]
y/x-shear unitless factor, in [-0.3333,0.3333]
z/x-shear unitless factor, in [-0.3333,0.3333]
z/y-shear unitless factor, in [-0.3333,0.3333]
* Parameters omitted from the end of the command line get their
default values (0 except for scales, which default to 1).
* At least 1 parameter must be given, or you get this help message :)
The minimum command line is
1dApar2mat 0
which will output the identity matrix.
* Legal scale and shear factors have limited ranges, as
described above. An input value outside the given range
will be reset to the default value for that factor (1 or 0).
* UNUSUAL SPECIAL CASES:
If you used 3dAllineate with any of the options described
under 'CHANGING THE ORDER OF MATRIX APPLICATION' or you
used the '-EPI' option, then the order of parameters inside
3dAllineate will no longer be the same as the parameter order
in 1dApar2mat. In such a situation, the matrix output by
this program will NOT agree with that output by 3dAllineate
for the same set of parameter numbers :(
* EXAMPLE:
1dApar2mat 0 1 2 3 4 5
to get a rotation matrix with some shifts; the output is:
# mat44 1dApar2mat 0 1 2 3 4 5 :
0.994511 0.058208 -0.086943 0.000000
-0.052208 0.996197 0.069756 1.000000
0.090673 -0.064834 0.993768 2.000000
If you wish to capture this matrix all on one line, you can
combine various Unix shell and command tricks/tools, as in
echo `1dApar2mat 0 1 2 3 4 5 | tail -3` > Fred.aff12.1D
This 12-numbers-in-one-line is the format output by '-1Dmatrix_save'
in 3dAllineate and 3dvolreg.
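For instance, to recompute the 'pure' shift+rotation matrix from a set
of saved parameters, as described far above -- a sketch, assuming
'fred.param.1D' is a hypothetical '3dAllineate -1Dparam_save' output
whose last line holds the 12 parameter values:
  set apar = `cat fred.param.1D | tail -1`
  1dApar2mat $apar[1-6]
The tcsh range '$apar[1-6]' keeps only the 6 shift+angle parameters,
so the scales and shears get their default values (1 and 0).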
* FANCY EXAMPLE:
Tricksy command line stuff to compute the inverse of a matrix
set fred = `1dApar2mat 0 0 0 3 4 5 1 1 1 0.2 0.1 0.2 | tail -3`
cat_matvec `echo $fred | sed -e 's/ /,/g' -e 's/^/MATRIX('/`')' -I
* ALSO SEE: Programs cat_matvec and 1dmatcalc for doing
simple matrix arithmetic on such files.
* OPTIONS: This program has no options. Love it or leave it :)
* AUTHOR: Zhark the Most Affine and Sublime - April 2019
AFNI program: 1dAstrip
Usage: 1dAstrip < input > output
This very simple program strips non-numeric characters
from a file, so that it can be processed by other AFNI
1d programs. For example, if your input is
x=3.6 y=21.6 z=14.2
then your output would be
3.6 21.6 14.2
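For instance, a minimal sketch using the shell:
  echo 'x=3.6 y=21.6 z=14.2' | 1dAstrip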
* Non-numeric characters are replaced with blanks.
* The letter 'e' is preserved if it is preceded
or followed by a numeric character. This is
to allow for numbers like '1.2e-3'.
* Numeric characters, for the purpose of this
program, are defined as the digits '0'..'9',
and '.', '+', '-'.
* The program is simple and can easily end up leaving
undesired junk characters in the output. Sorry.
* This help string is longer than the rest of the
source code to this program!
AFNI program: 1dBandpass
Usage: 1dBandpass [options] fbot ftop infile ~1~
* infile is an AFNI *.1D file; each column is processed
* fbot = lowest frequency in the passband, in Hz
[can be 0 if you want to do a lowpass filter only,]
[but the mean and Nyquist freq are always removed ]
* ftop = highest frequency in the passband (must be > fbot)
[if ftop > Nyquist freq, then we have a highpass filter only]
* You cannot construct a 'notch' filter with this program!
* Output vectors appear on stdout; redirect as desired
* Program will fail if fbot and ftop are too close for comfort
* The actual FFT length used will be printed, and may be larger
than the input time series length for the sake of efficiency.
Options: ~1~
-dt dd = set time step to 'dd' sec [default = 1.0]
-ort f.1D = Also orthogonalize input to columns in f.1D
[only one '-ort' option is allowed]
-nodetrend = Skip the quadratic detrending of the input
-norm = Make output time series have L2 norm = 1
Example: ~1~
1deval -num 1000 -expr 'gran(0,1)' > r1000.1D
1dBandpass 0.025 0.20 r1000.1D > f1000.1D
1dfft f1000.1D - | 1dplot -del 0.000977 -stdin -plabel 'Filtered |FFT|'
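A variant sketch, reusing r1000.1D from above, that skips the quadratic
detrending and gives each output column L2 norm = 1:
  1dBandpass -nodetrend -norm 0.025 0.20 r1000.1D > n1000.1D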
Goal: ~1~
* Mostly to test the functions in thd_bandpass.c -- RWCox -- May 2009
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dBport
Usage: 1dBport [options]
Creates a set of columns of sines and cosines for the purpose of
bandpassing via regression (e.g., in 3dDeconvolve). Various options
are given to specify the duration and structure of the time series
to be created. Results are written to stdout, and usually should be
redirected appropriately (cf. EXAMPLES, infra). The file produced
could be used with the '-ortvec' option to 3dDeconvolve, for example.
OPTIONS
-------
-band fbot ftop = Specify lowest and highest frequencies in the passband.
fbot can be 0 if you want to do a highpass filter only;
on the other hand, if ftop > Nyquist frequency, then
it's a lowpass filter only.
** This 'option' is actually mandatory! (At least once.)
* For the un-enlightened, the Nyquist frequency is the
highest frequency supported on the given grid, and
is equal to 0.5/TR (units are Hz if TR is in s).
* The lowest nonzero frequency supported on the grid
is equal to 1/(N*TR), where N=number of time points.
** Multiple -band options can be used, if needed.
If the bands overlap, regressors will NOT be duplicated.
* That is, '-band 0.01 0.05 -band 0.03 0.08' is the same
as using '-band 0.01 0.08'.
** Note that if fbot==0 and ftop>=Nyquist frequency, you
get a 'complete' set of trig functions, meaning that
using these in regression is effectively a 'no-pass'
filter -- probably not what you want!
** It is legitimate to set fbot = ftop.
** The 0 frequency (fbot = 0) component is all 1, of course.
But unless you use the '-quad' option, nothing generated
herein will deal well with linear-ish or quadratic-ish
trends, which fall below the lowest nonzero frequency
representable in a full cycle on the grid:
f_low = 1 / ( NT * TR )
where NT = number of time points.
** See the fourth EXAMPLE to learn how to use 3dDeconvolve
to generate a file of polynomials for regression fun.
-invert = After computing which frequency indexes correspond to the
input band(s), invert the selection -- that is, output
all those frequencies NOT selected by the -band option(s).
See the fifth EXAMPLE.
-nozero } Do NOT generate the 0 frequency (constant) component
*OR } when fbot = 0; this has the effect of setting fbot to
-noconst } 1/(N*TR), and is essentially a convenient way to say
'eliminate all oscillations below the ftop frequency'.
-quad = Add regressors for linear and quadratic trends.
(These will be the last columns in the output.)
-input dataset } One of these options is used to specify the number of
*OR* } time points to be created, as in 3dDeconvolve.
-input1D 1Dfile } ** '-input' allows catenated datasets, as in 3dDeconvolve.
*OR* } ** '-input1D' assumes TR=1 unless you use the '-TR' option.
-nodata NT [TR] } ** One of these options is mandatory, to specify the length
of the time series file to generate.
-TR del = Set the time step to 'del' rather than use the one
given in the input dataset (if any).
** If TR is not specified by the -input dataset or by
-nodata or by -TR, the program will assume it is 1.0 s.
-concat rname = As in 3dDeconvolve, used to specify the list of start
indexes for concatenated runs.
** Also as in 3dDeconvolve, if the -input dataset is auto-
catenated (by providing a list of more than one dataset),
the run start list is automatically generated. Otherwise,
this option is needed if more than one run is involved; see the
sixth example below.
EXAMPLES
--------
The first example provides basis functions to filter out all frequency
components from 0 to 0.25 Hz:
1dBport -nodata 100 1 -band 0 0.25 > highpass.1D
The second example provides basis functions to filter out all frequency
components from 0.25 Hz up to the Nyquist frequency:
1dBport -nodata 100 1 -band 0.25 666 > lowpass.1D
The third example shows how to examine the results visually, for fun:
1dBport -nodata 100 1 -band 0.41 0.43 | 1dplot -stdin -thick
The fourth example shows how to use 3dDeconvolve to generate a file of
polynomial 'orts', in case you find yourself needing this ability someday
(e.g., when stranded on a desert isle, with Gilligan, the Skipper, et al.):
3dDeconvolve -nodata 100 1 -polort 2 -x1D_stop -x1D stdout: | 1dcat stdin: > pol3.1D
The fifth example shows how to use 1dBport to generate a set of regressors to
eliminate all frequencies EXCEPT those in the selected range:
1dBport -nodata 100 1 -band 0.03 0.13 -nozero -invert | 1dplot -stdin
In this example, the '-nozero' flag is used because the next step will be to
3dDeconvolve with '-polort 2' and '-ortvec' to get rid of the undesirable stuff.
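A sixth example (a sketch, supplementing the ones above) shows '-concat'
for two catenated 100-point runs with TR=2, assuming the inline '1D:'
format is accepted for the run-start list, as it is elsewhere in AFNI:
  1dBport -nodata 200 2 -concat '1D: 0 100' -band 0 0.01 > highpass2run.1D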
ETYMOLOGICAL NOTES
------------------
* The word 'ort' was coined by Andrzej Jesmanowicz, as a shorthand name for
a timeseries to which you want to 'orthogonalize' your data.
* 'Ort' actually IS an English word, and means 'a scrap of food left from a meal'.
As far as I know, its only usage in modern English is in crossword puzzles,
and in Scrabble.
* For other meanings of 'ort', see http://en.wikipedia.org/wiki/Ort
* Do not confuse 'ort' with 'Oort': http://en.wikipedia.org/wiki/Oort_cloud
AUTHOR -- RWCox -- Jan 2012
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dcat
Usage: 1dcat [options] a.1D b.1D ...
where each file a.1D, b.1D, etc. is a 1D file.
In the simplest form, a 1D file is an ASCII file of numbers
arranged in rows and columns.
1dcat takes as input one or more 1D files, and writes out a 1D file
containing the side-by-side concatenation of all or a subset of the
columns from the input files.
* Output goes to stdout (the screen); redirect (e.g., '>') to save elsewhere.
* All files MUST have the same number of rows!
* Any header lines (i.e., lines that start with '#') will be lost.
* For generic 1D file usage help and information, see '1dplot -help'
-----------
TSV files: [Sep 2018]
-----------
* 1dcat can now also read .tsv files, which are columns of values separated
by tab characters (tsv = tab separated values). The first row of a .tsv
file is a set of column labels. After the header row, each column is either
all numbers, or is a column of strings. For example
Col 1     Col 2     Col 3
3.2       7.2       Elvis
8.2       -1.2      Sinatra
6.66      33.3      20892
In this example, the column labels contain spaces, which are NOT separators;
the only column separator used in a .tsv file is the tab character.
The first and second columns are converted to number columns, since every
value (after the label/header row) is a numeric string. The third column
is stored as strings, since some of the entries are not valid numbers.
* 1dcat can deal with a mix of .1D and .tsv files. The .tsv file header
rows are NOT output by default, since .1D files don't have such headers.
* The usual output from 1dcat is NOT a .tsv file - blanks are used for
separators. You can use the '-tsvout' option to get TSV formatted output.
* If you mix .1D and .tsv files, the number of data rows in each file
must be the same. Since the header row in a .tsv file is NOT used here,
the total number of lines in a .tsv file must be 1 more than the number
of lines in a .1D file for the two files to match in this program.
* The purpose of supporting .tsv files is for eventual compatibility with
the BIDS format http://bids.neuroimaging.io - which uses .tsv files
extensively to provide auxiliary information for (F)MRI datasets.
* Column selectors (like '[0,3]') can be used on .tsv files, but row selectors
(like '{0,3..5}') cannot be used on .tsv files - at this time :(
* You can also select a column in a .tsv file by using the label at the top
of the column. A BIDS-related example:
1dcat sub-666_task-XXX_events.tsv'[onset,duration,trial_type,reaction_time]'
A similar example, which outputs a list of the trial types in an imaging run:
1dcat sub-666_task-XXX_events.tsv'[trial_type]' | sort | uniq
* Since .1D files don't have headers, the label method of column selection
doesn't work with such inputs; you must use integer column selectors
on .1D files.
* NOTE WELL: The string 'N/A' or 'n/a' in a column that is otherwise numeric
will be considered to be a number, and will be replaced on input
with the mean of the "true" numbers in the column -- there is
no concept of missing data in an AFNI .1D file.
++ If you don't like this, well ... too bad for you.
* NOTE WELL: 1dcat now also allows comma separated value (.csv) files. These
are treated the same as .tsv files, with a header line, et cetera.
--------
OPTIONS:
--------
-tsvout = Output in a TSV (.tsv) format, where the values in each row
are separated by tabs, not blanks. Also, a header line will
be provided, as TSV files require.
-csvout = Output in a CSV (.csv) format, where the values in each row
are separated by commas, not blanks. Also, a header line will
be provided, as CSV files require.
-nonconst = Columns that are identically constant should be omitted
from the output.
-nonfixed = Keep only columns that are marked as 'free' in the
3dAllineate header from '-1Dparam_save'.
If there is no such header, all columns are kept.
* NOTE: -nonconst and -nonfixed don't have any effect on
.tsv/.csv files, and the use of these options
has NOT been tested at all when the inputs
are mixture of .tsv/.csv and .1D files.
-form FORM = Format of the numbers to be output.
You can also substitute -form FORM with shortcuts such
as -i, -f, or -c.
For help on -form's usage, and its shortcut versions
see ccalc's help for the option of the same name.
-stack = Stack the columns of the resultant matrix in the output.
You can't use '-stack' with .tsv/.csv files :(
-sel SEL = Apply the same column/row selection string to all filenames
on the command line.
For example:
1dcat -sel '[0,2]' f1.1D f2.1D
is the same as: 1dcat f1.1D'[0,2]' f2.1D'[0,2]'
The advantage of the option is that it allows wildcard use
in file specification so that you can run something like:
1dcat -sel '[0,2]' f?.1D
-OKempty: Exit quietly when encountering an empty file on disk.
Note that if the file is poorly formatted, it might be
considered empty.
EXAMPLE:
--------
Input file 1:
1
2
3
4
Input file 2:
5
6
7
8
1dcat data1.1D data2.1D > catout.1D
Output file:
1 5
2 6
3 7
4 8
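A TSV-output variant of the same example (a sketch):
  1dcat -tsvout data1.1D data2.1D > catout.tsv
Here the output values are tab separated, and a header line is generated
automatically, since TSV files require one (cf. '-tsvout' above).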
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dCorrelate
Usage: 1dCorrelate [options] 1Dfile 1Dfile ...
------
* Each input 1D column is a collection of data points.
* The correlation coefficient between each column pair is computed, along
with its confidence interval (via a bias-corrected bootstrap procedure).
* The minimum sensible column length is 7.
* At least 2 columns are needed [in 1 or more .1D files].
* If there are N input columns, there will be N*(N-1)/2 output rows.
* Output appears on stdout; redirect ('>' or '>>') as needed.
* Only one correlation method can be used in one run of this program.
* This program is basically the bastard offspring of program 1ddot.
* Also see http://en.wikipedia.org/wiki/Confidence_interval
-------
Methods [actually, only the first letter is needed to choose a method]
------- [and the case doesn't matter: '-P' and '-p' both = '-Pearson']
-Pearson = Pearson correlation [the default method]
-Spearman = Spearman (rank) correlation [more robust vs. outliers]
-Quadrant = Quadrant (binarized) correlation [most robust, but weaker]
-Ktaub = Kendall's tau_b 'correlation' [popular somewhere, maybe]
-------------
Other Options [these options cannot be abbreviated!]
-------------
-nboot B = Set the number of bootstrap replicates to 'B'.
* The default value of B is 4000.
* A larger number will give somewhat more accurate
confidence intervals, at the cost of more CPU time.
-alpha A = Set the 2-sided confidence interval width to '100-A' percent.
* The default value of A is 5, giving the 2.5..97.5% interval.
* The smallest allowed A is 1 (0.5%..99.5%) and the largest
allowed value of A is 20 (10%..90%).
* If you are interested in assessing whether the 'p-value' of a
correlation is smaller than 5% (say), then you should use
'-alpha 10' and see if the confidence interval includes 0.
-block = Attempt to allow for serial correlation in the data by doing
*OR* variable-length block resampling, rather than completely
-blk random resampling as in the usual bootstrap.
* You should NOT do this unless you believe that serial
correlation (along each column) is present and significant.
* Block resampling requires at least 20 data points in each
input column. Fewer than 20 will turn off this option.
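For example, a run combining the options above might look like this
(a sketch, using the same data files as in the sample output below):
  1dCorrelate -Spearman -nboot 10000 -alpha 10 A2.1D B2.1D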
-----
Notes
-----
* For each pair of columns, the output includes the correlation value
as directly calculated, plus the bias-corrected bootstrap value, and
the desired (100-A)% confidence interval [also via bootstrap].
* The primary purpose of this program is to provide an easy way to get
the bootstrap confidence intervals, since people almost always seem to use
the asymptotic normal theory to decide if a correlation is 'significant',
and this often seems misleading to me [especially for short columns].
* Bootstrapping confidence intervals for the inverse correlations matrix
(i.e., partial correlations) would be interesting -- anyone out there
need this ability?
-------------
Sample output [command was '1dCorrelate -alpha 10 A2.1D B2.1D']
-------------
# Pearson correlation [n=12 #col=2]
# Name       Name         Value  BiasCorr     5.00%    95.00%  N: 5.00%  N:95.00%
# --------   --------  --------  --------  --------  --------  --------  --------
  A2.1D[0]   B2.1D[0]  +0.57254  +0.57225  -0.03826  +0.86306  +0.10265  +0.83353
* Bias correction of the correlation had little effect; this is very common.
++ To be clear, the bootstrap bias correction is to allow for potential bias
in the statistical estimate of correlation when the sample size is small.
++ It cannot correct for biases that result from faulty data (or faulty
assumptions about the data).
* The correlation is NOT significant at this level, since the CI (confidence
interval) includes 0 in its range.
* For the Pearson method ONLY, the last two columns ('N:', as above) also
show the widely used asymptotic normal theory confidence interval. As in
the example, the bootstrap interval is often (but not always) wider than
the theoretical interval.
* In the example, the normal theory might indicate that the correlation is
significant (less than a 5% chance that the CI includes 0), but the
bootstrap CI shows that is not a reasonable statistical conclusion.
++ The principal reason that I wrote this program was to make it easy
to check if the normal (Gaussian) theory for correlation significance
testing is reasonable in any given case -- for small samples, it often
is NOT reasonable!
* Using the same data with the '-S' option gives the table below, again
indicating that there is no significant correlation between the columns
(note also the lack of the 'N:' results for Spearman correlation):
# Spearman correlation [n=12 #col=2]
# Name       Name         Value  BiasCorr     5.00%    95.00%
# --------   --------  --------  --------  --------  --------
  A2.1D[0]   B2.1D[0]  +0.46154  +0.42756  -0.23063  +0.86078
-------------
SAMPLE SCRIPT
-------------
This script generates random data and correlates it until it is
statistically significant at some level (default=2%). Then it
plots the data that looks correlated. The point is to show what
purely random stuff that appears correlated can look like.
(Like most AFNI scripts, this is written in tcsh, not bash.)
#!/bin/tcsh
set npt = 20
set alp = 2
foreach fred ( `count -dig 1 1 1000` )
1dcat jrandom1D:${npt},2 > qqq.1D
set aabb = ( `1dCorrelate -spearman -alpha $alp qqq.1D | grep qqq.1D | colrm 1 42` )
set ab = `ccalc -form rint "1000 * $aabb[1] * $aabb[2]"`
echo $fred $ab
if( $ab > 1 )then
1dplot -one -noline -x qqq.1D'[0]' -xaxis -1:1:20:5 -yaxis -1:1:20:5 \
-DAFNI_1DPLOT_BOXSIZE=0.012 \
-plabel "N=$npt trial#=$fred \alpha=${alp}% => r\in[$aabb[1],$aabb[2]]" \
qqq.1D'[1]'
break
endif
end
\rm qqq.1D
----------------------------------------------------------------------
*** Written by RWCox (AKA Zhark the Mad Correlator) -- 19 May 2011 ***
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: @1dDiffMag
Usage: @1dDiffMag file.1D
* Computes a magnitude estimate of the first differences of a 1D file.
* Differences are computed down each column.
* The result -- a single number -- is on stdout.
* But (I hear you say), what IS the result?
* For each column, the standard deviation of the first differences is computed.
* The final result is the square-root of the sum of the squares of these stdev values.
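* For example (a sketch; 'motion.1D' is a hypothetical 6-column file of
  motion parameters, e.g., from 3dvolreg's '-1Dfile' option):
    @1dDiffMag motion.1D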
AFNI program: 1ddot
Usage: 1ddot [options] 1Dfile 1Dfile ...
* Prints out correlation matrix of the 1D files and
their inverse correlation matrix.
* Output appears on stdout.
* Program 1dCorrelate does something similar-ish.
Options:
-one = Make 1st vector be all 1's.
-dem = Remove mean from all vectors (conflicts with '-one')
-cov = Compute with covariance matrix instead of correlation
-inn = Compute with inner product matrix instead
-rank = Compute Spearman rank correlation instead
(also implies '-terse')
-terse= Output only the correlation or covariance matrix
and without any of the garnish.
-okzero= Do not quit if a vector is all zeros.
The correlation matrix will have 0 where NaNs ought to go.
Expect rubbish in the inverse matrices if all zero
vectors exist.
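Example (a sketch; the filenames are hypothetical):
  1ddot -rank fred.1D ethel.1D
This prints just the Spearman rank correlation matrix, since '-rank'
implies '-terse'.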
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dDW_Grad_o_Mat
Simple function to manipulate DW gradient vector files, b-value
files, and b-/g-matrices. Let: g_i be one of Ng spatial gradients
in three dimensions; the g-matrix is G_{ij} = g_i*g_j (i.e., dyad
of gradients, without b-value included); and the DW-scaled
b-matrix is B_{ij} = b*g_i*g_j.
**NB: please consider using the newer function '1dDW_Grad_o_Mat++'
instead of this one, as much of the default behavior here (such as
averaging reference b0 volumes together) and the functionality
is not really in vogue anymore.
At some point, the present program will go the way of the
Silesauridae.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
As of right now, one can input:
+ 3 rows of gradients (as output from dcm2nii, for example);
+ 3 columns of gradients;
+ 6 columns of g- or b-matrices, in `diagonal-first' order:
Bxx, Byy, Bzz, Bxy, Bxz, Byz,
which is used in 3dDWItoDT, for example;
+ 6 columns of g- or b-matrices, in `row-first' order:
Bxx, 2*Bxy, 2*Bxz, Byy, 2*Byz, Bzz,
which is output by TORTOISE, for example;
+ when specifying input file, one can use the brackets '{ }'
in order to specify a subset of rows to keep (NB: probably
can't use this grad-filter when reading in row-data right
now).
During processing, one can:
+ flip the sign of any of the x-, y- or z-components, which
may be necessary to do to make the scanned data and tracking
work happily together;
+ filter out all `zero' rows of recorded reference images;
One can then output:
+ 3 columns of gradients;
+ 6 columns of g- or b-matrices, in 'diagonal-first' order;
+ 6 columns of g- or b-matrices, in 'row-first' order;
+ as well as including a column of b-values (such as used in
DTI-Studio);
+ as well as including a row of zeros at the top;
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ RUNNING:
1dDW_Grad_o_Mat \
{ -in_grad_cols | -in_grad_cols_bwt | \
-in_gmatT_cols | -in_gmatA_cols | \
-in_bmatT_cols | -in_bmatA_cols | \
-in_grad_rows } INFILE \
{ -flip_x | -flip_y | -flip_z } \
{ -keep_b0s } { -put_zeros_top } { -out_bval_col } \
{ -bref_mean_top } \
{ -in_bvals BVAL_IN } \
{ -bmax_ref THRESH } \
{ -out_grad_cols | -out_grad_cols_bwt | \
-out_gmatT_cols | -out_gmatA_cols | \
-out_bmatT_cols | -out_bmatA_cols | \
-out_grad_rows } OUTFILE \
{ -out_bval_row_sep | -out_bval_col_sep BB }
where:
(one of the following six formats of input must be given):
-in_grad_rows INFILE :input file of 3 rows of gradients (e.g., dcm2nii-
format output).
-in_grad_cols INFILE :input file of 3 columns of gradients.
-in_grad_cols_bwt INFILE :input file of 3 columns of gradients, each
weighted by the b-value.
-in_gmatA_cols INFILE :input file of 6 columns of g-matrix in 'A(FNI)'
`diagonal first'-format. (See above.)
-in_gmatT_cols INFILE :input file of 6 columns of g-matr in 'T(ORTOISE)'
`row first'-format. (See above.)
-in_bmatA_cols INFILE :input file of 6 columns of b-matrix in 'A(FNI)'
`diagonal first'-format. (See above.)
-in_bmatT_cols INFILE :input file of 6 columns of b-matr in 'T(ORTOISE)'
`row first'-format. (See above.)
(one of the following five formats of output must be given):
-out_grad_cols OUTFILE :output file of 3 columns of gradients.
-out_grad_cols_bwt OUTFILE :output file of 3 columns of gradients, each
weighted by the b-value.
-out_gmatA_cols OUTFILE :output file of 6 columns of g-matrix in 'A(FNI)'
`diagonal first'-format. (See above.)
-out_gmatT_cols OUTFILE :output file of 6 cols of g-matr in 'T(ORTOISE)'
`row first'-format. (See above.)
-out_bmatA_cols OUTFILE :output file of 6 columns of b-matrix in 'A(FNI)'
`diagonal first'-format. (See above.)
-out_bmatT_cols OUTFILE :output file of 6 cols of b-matr in 'T(ORTOISE)'
`row first'-format. (See above.)
-out_grad_rows OUTFILE :output file of 3 rows of gradients.
(and any of the following options may be used):
-proc_dset DSET :input a dataset DSET of X 'b=0' and Y DWI bricks,
matching the X zero- and Y nonzero-gradient
entries in the INFILE. The 'processing' will:
1) extract all the 'b=0' bricks,
2) average them,
3) store the result in the zeroth brick of
the output PREFIX data set, and
4) place the DWIs (kept in their original
order) as the next Y bricks of PREFIX.
This option cannot be used with '-keep_b0s'.
The output set has Y+1 bricks. The option is
probably mostly useful only if X>1.
-pref_dset PREFIX :output dataset filename prefix (required if and
only if using '-proc_dset', above).
-dwi_comp_fac N_REP :option for averaging DWI bricks in DSET that have
been acquired with exactly N_REP repeated sets of
gradients. *You* the user must know how many
repetitions have been performed (this program
will perform a simplistic gradient comparison
using dot products to flag possible errors, but
this is by no means bulletproof). Use wisely.
-flip_x :change sign of first column of gradients
-flip_y :change sign of second column of gradients
-flip_z :change sign of third column of gradients
-bref_mean_top :when averaging the reference X 'b0' values (which
is default behavior), have the mean of the X
values be represented in the top row; default
behavior is to have nothing representing the b0
information in the top row (for historical
functionality reasons). NB: if your reference
'b0' actually has b>0, you might not want to
average the b0 refs together, because their
images could have differing contrast if the
same reference vector wasn't used for each.
-keep_b0s :default behavior is to get rid of all reference
images, but this option acts as a switch to
keep them.
-put_zeros_top :whatever the output format is, add a row at the
top with all zeros.
-bmax_ref THRESH :THRESH is a scalar number below which b-values
(in BVAL_IN) are considered `zero' or reference.
Sometimes, for the reference images, the scanner
has a value like b=5 s/mm^2, instead of strictly
b=0. One can still flag such values as
being associated with a reference image and
trim it out, using, for the example case here,
'-bmax_ref 5.1'.
-in_bvals BVAL_IN :BVAL_IN is a file of b-values, such as the 'bval'
file generated by dcm2nii.
-out_bval_col :switch to put a column of the bvalues as the
first column in the output data.
-out_bval_row_sep BB :output a file BB of bvalues in a single row.
-out_bval_col_sep BB :output a file BB of bvalues in a single column.
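+ EXAMPLE (a sketch; the filenames are hypothetical, in the style of
dcm2nii output):
  1dDW_Grad_o_Mat \
      -in_grad_rows ap.bvec \
      -in_bvals ap.bval \
      -out_grad_cols_bwt GRAD_bwt.dat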
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
____________________________________________________________________________
AFNI program: 1dDW_Grad_o_Mat++
++ Program version: 2.2
Simple function to manipulate DW gradient vector files, b-value
files, and b- or g-matrices. Let: g_i be one of Ng spatial gradients
in three dimensions; |g_i| = 1, and the g-matrix is G_{ij} = g_i * g_j
(i.e., dyad of gradients, without b-value included); and the DW-scaled
b-matrix is B_{ij} = b * g_i * g_j.
**This new version of the function** will replace the original/older
version (1dDW_Grad_o_Mat). The new version has similar functionality, but
improved defaults:
+ it does not average b=0 volumes together by default;
+ it does not remove the b=0 line from the top by default;
+ output has same scaling as input by default (i.e., by bval or not);
and a switch is used to turn *off* scaling, for unit magn output
(which is cleverly concealed under the name '-unit_mag_out').
Wherefore, you ask? Well, times change, and people change.
The above functionality is still available, but each just requires
selection with command line switches.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
As of right now, one can input:
+ 3 rows of gradients (as output from dcm2nii, for example);
+ 3 columns of gradients;
+ 6 columns of g- or b-matrices, in `diagonal-first' (-> matA) order:
Bxx, Byy, Bzz, Bxy, Bxz, Byz,
which is used in 3dDWItoDT, for example;
+ 6 columns of g- or b-matrices, in `row-first' (-> matT) order:
Bxx, 2*Bxy, 2*Bxz, Byy, 2*Byz, Bzz,
which is output by TORTOISE, for example;
+ when specifying input file, one can use the brackets '{ }'
in order to specify a subset of rows to keep (NB: probably
can't use this grad-filter when reading in row-data right
now).
During processing, one can:
+ flip the sign of any of the x-, y- or z-components, which
may be necessary to do to make the scanned data and tracking
work happily together;
+ filter out all `zero' rows of recorded reference images,
THOUGH this is not really recommended.
One can then output:
+ 3 columns of gradients;
+ 6 columns of g- or b-matrices, in 'diagonal-first' order;
+ 6 columns of g- or b-matrices, in 'row-first' order;
+ as well as including a column of b-values (such as used in, e.g.,
DSI-Studio);
+ as well as explicitly include a row of zeros at the top;
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ RUNNING:
1dDW_Grad_o_Mat++ \
{ -in_row_vec | -in_col_vec | \
-in_col_matA | -in_col_matT } INFILE \
{ -flip_x | -flip_y | -flip_z | -no_flip } \
{ -out_row_vec | -out_col_vec | \
-out_col_matA | -out_col_matT } OUTFILE \
{ -in_bvals BVAL_FILE } \
{ -out_col_bval } \
{ -out_row_bval_sep BB | -out_col_bval_sep BB } \
{ -unit_mag_out } \
{ -bref_mean_top } \
{ -bmax_ref THRESH } \
{ -put_zeros_top } \
where:
(one of the following formats of input must be given):
-in_row_vec INFILE :input file of 3 rows of gradients (e.g.,
dcm2nii-format output).
-in_col_vec INFILE :input file of 3 columns of gradients.
-in_col_matA INFILE :input file of 6 columns of b- or g-matrix in
'A(FNI)' `diagonal first'-format. (See above.)
-in_col_matT INFILE :input file of 6 columns of b- or g-matrix in
'T(ORTOISE)' `row first'-format. (See above.)
(one of the following formats of output must be given):
-out_row_vec OUTFILE :output file of 3 rows of gradients.
-out_col_vec OUTFILE :output file of 3 columns of gradients.
-out_col_matA OUTFILE :output file of 6 columns of b- or g-matrix in
'A(FNI)' `diagonal first'-format. (See above.)
-out_col_matT OUTFILE :output file of 6 cols of b- or g-matrix in
'T(ORTOISE)' `row first'-format. (See above.)
(and any of the following options may be used):
-in_bvals BVAL_FILE :BVAL_FILE is a file of b-values, either a single
row (such as the 'bval' file generated by
dcm2nii) or a single column of numbers. Must
have the same number of entries as the number
of grad vectors or matrices.
-out_col_bval :switch to put a column of the bvalues as the
first column in the output data.
-out_row_bval_sep BB :output a file BB of bvalues in a single row.
-out_col_bval_sep BB :output a file BB of bvalues in a single column.
-unit_mag_out :switch so that each vector/matrix from the INFILE
is scaled to either unit or zero magnitude.
(Supplementary input bvalues would be ignored
in the output matrix/vector, but not in the
output bvalues themselves.) The default
behavior of the function is to leave the output
scaled however it is input (while also applying
any input BVAL_FILE).
-flip_x :change sign of first column of gradients (or of
the x-component parts of the matrix)
-flip_y :change sign of second column of gradients (or of
the y-component parts of the matrix)
-flip_z :change sign of third column of gradients (or of
the z-component parts of the matrix)
-no_flip :don't change any gradient/matrix signs. This
is an extraneous switch, as the default is to
not flip any signs (this is mainly used for
some scripting convenience).
-check_abs_min VVV :By default, this program checks input matrix
formats for consistency (having positive semi-
definite diagonal matrix elements). It will fail
if those don't occur. However, sometimes there is
just a tiny value <0, like a rounding error;
you can specify to push through for negative
diagonal elements with magnitude <VVV, with those
values getting replaced by zero. Be judicious
with this power! (E.g., maybe VVV ~ 0.0001 might
be OK... but if you get looots of negatives, then
you really, really need to check your data for
badness.)
(and the following options are probably mainly extraneous, nowadays)
-bref_mean_top :when averaging the reference X 'b0' values (no
longer default behavior here), have the mean of the X
values be represented in the top row; default
behavior is to have nothing representing the b0
information in the top row (for historical
functionality reasons). NB: if your reference
'b0' actually has b>0, you might not want to
average the b0 refs together, because their
images could have differing contrast if the
same reference vector wasn't used for each.
-put_zeros_top :whatever the output format is, add a row at the
top with all zeros.
-bmax_ref THRESH :THRESH is a scalar number below which b-values
(in BVAL_FILE) are considered `zero' or reference.
Sometimes, for the reference images, the scanner
has a value like b=5 s/mm^2, instead of strictly
b=0. One can still flag such values as
being associated with a reference image and
trim it out, using, for the example case here,
'-bmax_ref 5.1'.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
EXAMPLES
# An example of type-conversion from a TORTOISE-style matrix to column
# gradients (if the matT file has bweights, so will the grad values):
1dDW_Grad_o_Mat++ \
-in_col_matT BMTXT_TORT.txt \
-out_col_vec GRAD.dat
# An example of filtering (note the different styles of parentheses
# for the column- and row-type files) and type-conversion (to an
# AFNI-style matrix that should have the bvalue weights afterwards):
1dDW_Grad_o_Mat++ \
-in_col_vec GRADS_col.dat'{0..10,12..30}' \
-in_bvals BVALS_row.dat'[0..10,12..30]' \
-out_col_matA FILT_matA.dat
# An example of filtering *without* type-conversion. Here, note
# the '-unit_mag_out' flag is used so that the output row-vec does
# not carry the bvalue weight with it; it does not affect the output
# bval file. As Levon might say, the '-unit_mag_out' option acts to
# 'Take a load off bvecs, take a load for free;
# Take a load off bvecs, and you put the load right on bvals only.'
# This example might be useful for working with dcm2nii* output:
1dDW_Grad_o_Mat++ \
-in_row_vec ap.bvec'[0..10,12..30]' \
-in_bvals ap.bval'[0..10,12..30]' \
-out_row_vec FILT_ap.bvec \
-out_row_bval_sep FILT_ap.bval \
-unit_mag_out
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
If you use this program, please reference the introductory/description
paper for the FATCAT toolbox:
Taylor PA, Saad ZS (2013). FATCAT: (An Efficient) Functional
And Tractographic Connectivity Analysis Toolbox. Brain
Connectivity 3(5):523-535.
___________________________________________________________________________
AFNI program: 1deval
Usage: 1deval [options] -expr 'expression'
Evaluates an expression that may include columns of data
from one or more text files and writes the result to stdout.
** Only a single column can be used for each input 1D file. **
* Simple multiple column operations (e.g., addition, scaling)
can be done with program 1dmatcalc.
* Any single letter from a-z can be used as the independent
variable in the expression.
* Unless specified using the '[]' notation (cf. 1dplot -help),
only the first column of an input 1D file is used, and other
columns are ignored.
* Only one column of output will be produced -- if you want to
calculate a multi-column output file, you'll have to run 1deval
separately for each column, and then glue the results together
using program 1dcat. [However, see the 1dcat example combined
with the '-1D:' option, infra.]
Options:
--------
-del d = Use 'd' as the step for a single undetermined variable
in the expression [default = 1.0]
SYNONYMS: '-dx' and '-dt'
-start s = Start at value 's' for a single undetermined variable
in the expression [default = 0.0]
That is, for the indeterminate variable in the expression
(if any), the i-th value will be s+i*d for i=0, 1, ....
SYNONYMS: '-xzero' and '-tzero'
-num n = Evaluate the expression 'n' times.
If -num is not used, then the length of an
input time series is used. If there are no
time series input, then -num is required.
-a q.1D = Read time series file q.1D and assign it
to the symbol 'a' (as in 3dcalc).
* Letters 'a' to 'z' may be used as symbols.
* You can use the filename 'stdin:' to indicate that
the data for 1 symbol comes from standard input:
1dTsort q.1D stdout: | 1deval -a stdin: -expr 'sqrt(a)' | 1dplot stdin:
-a=NUMBER = set the symbol 'a' to a fixed numerical value
rather than a variable value from a 1D file.
* Letters 'a' to 'z' may be used as symbols.
* You can't assign the same symbol twice!
-index i.1D = Read index column from file i.1D and
write it out as 1st column of output.
This option is useful when working with
surface data.
-1D: = Write output in the form of a single '1D:'
string suitable for input on the command
line of another program.
[-1D: is incompatible with the -index option!]
[This won't work if the output string is very long,]
[since the maximum command line length is limited. ]
Examples:
---------
* 't' is the indeterminate variable in the expression below:
1deval -expr 'sin(2*PI*t)' -del 0.01 -num 101 > sin.1D
* Multiply two columns of data (no indeterminate variable):
1deval -expr 'a*b' -a fred.1D -b ethel.1D > ab.1D
* Compute and plot the F-statistic corresponding to p=0.001 for
varying degrees of freedom given by the indeterminate variable 'n':
1deval -start 10 -num 90 -expr 'fift_p2t(0.001,n,2*n)' | 1dplot -xzero 10 -stdin
* Compute the square root of some numbers given in '1D:' form
directly on the command line:
1deval -x '1D: 1 4 9 16' -expr 'sqrt(x)'
Examples using '-1D:' as the output format:
-------------------------------------------
The examples use the shell backquote `xxx` operation, where the
command inside the backquotes is executed, its stdout is captured
into a string, and placed back on the command line. When you have
mastered this idea, you have taken another step towards becoming
a Jedi AFNI Master!
1dplot `1deval -1D: -num 71 -expr 'cos(t/2)*exp(-t/19)'`
1dcat `1deval -1D: -num 100 -expr 'cos(t/5)'` \
`1deval -1D: -num 100 -expr 'sin(t/5)'` > sincos.1D
3dTfitter -quiet -prefix - \
-RHS `1deval -1D: -num 30 -expr 'cos(t)*exp(-t/7)'` \
-LHS `1deval -1D: -num 30 -expr 'cos(t)'` \
`1deval -1D: -num 30 -expr 'sin(t)'`
Notes:
------
* Program 3dcalc operates on 3D and 3D+time datasets in a similar way.
* Program ccalc can be used to evaluate a single numeric expression.
* If I had any sense, THIS program would have been called 1dcalc!
* For generic 1D file usage help, see '1dplot -help'
* For help with expression format, see '3dcalc -help', or type
'help' when using ccalc in interactive mode.
* 1deval only produces a single column of output. 3dcalc can be
tricked into doing multi-column 1D format output by treating
a 1D file as a 3D dataset and auto-transposing it with \'
For example:
3dcalc -a '1D: 3 4 5 | 1 2 3'\' -expr 'cbrt(a)' -prefix -
The input has 2 'columns' and so does the output.
Note that the 1D 'file' is transposed on input to 3dcalc!
This is essential, or 3dcalc will not treat the 1D file as
a dataset, and the results will be very different. Recall that
when a 1D file is read as an 3D AFNI dataset, the row direction
corresponds to the sub-brick (e.g., time) direction, and the
column direction corresponds to the voxel direction.
A Dastardly Trick:
------------------
If you use some other letter than 'z' as the indeterminate variable
in the calculation, and if 'z' is not assigned to any input 1D file,
then 'z' in the expression will be the previous value computed.
This trick can be used to create 1-point recursions, as in the
following command for creating an AR(1) noise time series:
1deval -num 500 -expr 'gran(0,1)+(i-i)+0.7*z' > g07.1D
Note the use of '(i-i)' to introduce the variable 'i' so that 'z'
would be used as the previous output value, rather than as the
indeterminate variable generated by '-del' and '-start'.
The initial value of 'z' is 0 (for the first evaluation).
* [02 Apr 2010] You can set the initial value of 'z' to a nonzero
value by using the environment variable AFNI_1DEVAL_ZZERO, as in
1deval -DAFNI_1DEVAL_ZZERO=1 -num 10 -expr 'i+z'
-- RW Cox --
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dfft
Usage: 1dfft [options] infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, with the absolute
value of the FFT of the input columns. The length of the file
will be 1+(FFT length)/2.
Options:
-ignore sss = Skip the first 'sss' lines in the input file.
[default = no skipping]
-use uuu = Use only 'uuu' lines of the input file.
[default = use them all, Frank]
-nfft nnn = Set FFT length to 'nnn'.
[default = length of data (# of lines used)]
-tocx = Save Re and Im parts of transform in 2 columns.
-fromcx = Convert 2 column complex input into 1 column
real output.
[-fromcx will not work if the original]
[data FFT length was an odd number! :(]
-hilbert = When -fromcx is used, the inverse FFT will
do the Hilbert transform instead.
-nodetrend = Skip the detrending of the input.
Nota Bene:
* Each input time series has any quadratic trend of the
form 'a+b*t+c*t*t' removed before the FFT, where 't'
is the line number.
* The FFT length can be any positive even integer, but
the Fast Fourier Transform algorithm will be slower if
any prime factors of the FFT length are large (say > 997).
Unless you are applying this program to VERY long files,
this slowdown will probably not be appreciable.
* If the FFT length is longer than the file length, the
data is zero-padded to make up the difference.
* Do NOT call the output of this program the Power Spectrum!
That is something else entirely.
* If 'outfile' is '-' (or missing), the output appears on stdout.
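Example (a sketch):
  1deval -num 256 -expr 'sin(2*PI*t/32)+gran(0,0.1)' > q.1D
  1dfft q.1D - | 1dplot -stdin
The plot should show a single dominant peak above the noise floor.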
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dFlagMotion
Usage: 1dFlagMotion [options] MotionParamsFile
Produces a list of time points that have more than a
user specified amount of motion relative to the previous
time point.
Options:
-MaxTrans maximum translation allowed in any direction
[defaults to 1.5mm]
-MaxRot maximum rotation allowed in any direction
[defaults to 1.25 degrees]
** The input file must have EXACTLY 6 columns of input, in the order:
roll pitch yaw delta-SI delta-LR delta-AP
(angles in degrees first, then translations in mm)
** The program does NOT accept column '[...]' selectors on the input
file name, or comments in the file itself. As a palliative, if the
input file name is '-', then the input numbers are read from stdin,
so you could do something like the following:
1dcat mfile.1D'[1..6]' | 1dFlagMotion -
e.g., to work with the output from 3dvolreg's '-dfile' option
(where the first column is just the time index).
** The output is in a 1D format, with comments on '#' comment lines,
and the list of points exceeding the motion bounds intercalated
on normal (non-comment) lines.
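For example (a sketch; 'motion.1D' is a hypothetical file with exactly
the 6 required columns):
  1dFlagMotion -MaxTrans 1.0 -MaxRot 1.0 motion.1D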
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dgenARMA11
Program to generate an ARMA(1,1) time series, for simulation studies.
Results are written to stdout.
Usage: 1dgenARMA11 [options]
Options:
========
-num N } These equivalent options specify the length of the time
-len N } series vector to generate.
-nvec M = The number of time series vectors to generate;
if this option is not given, defaults to 1.
-a a = Specify ARMA(1,1) parameter 'a'.
-b b = Specify ARMA(1,1) parameter 'b' directly.
-lam lam = Specify ARMA(1,1) parameter 'b' indirectly.
-sig ss = Set standard deviation of results [default=1].
-norm = Normalize time series so sum of squares is 1.
-seed dd = Set random number seed.
* The correlation coefficient r(k) of noise samples k units apart in time,
for k >= 1, is given by r(k) = lam * a^(k-1)
where lam = (b+a)(1+a*b)/(1+2*a*b+b*b)
(N.B.: lam=a when b=0 -- AR(1) noise has r(k)=a^k for k >= 0)
(N.B.: lam=b/(1+b*b) when a=0 -- MA(1) noise has r(1)=lam, r(k)=0 for k>1)
* lam can be bigger or smaller than a, depending on the sign of b:
b > 0 means lam > a; b < 0 means lam < a.
* What I call (a,b) here is sometimes called (p,q) in the ARMA literature.
* For a noise model which is the sum of AR(1) and white noise, 0 < lam < a
(i.e., a > 0 and -a < b < 0 ).
-CORcut cc = The exact ARMA(1,1) correlation matrix (for a != 0)
has no zero entries. The calculations in this
program set correlations below a cutoff to zero.
The default cutoff is 0.00010, but can be altered with
this option. The usual reason to use this option is
to test the sensitivity of the results to the cutoff.
Author: RWCox [for his own demented purposes]
Examples:
1dgenARMA11 -num 200 -a .8 -lam 0.7 | 1dplot -stdin
1dgenARMA11 -num 2000 -a .8 -lam 0.7 | 1dfft -nodetrend stdin: stdout: | 1dplot -stdin
AFNI program: 1dgrayplot
Usage: 1dgrayplot [options] tsfile
Graphs the columns of a *.1D type time series file to the screen,
sort of like 1dplot, but in grayscale.
Options:
-install = Install a new X11 colormap (for X11 PseudoColor)
-ignore nn = Skip first 'nn' rows in the input file
[default = 0]
-flip = Plot x and y axes interchanged.
[default: data columns plotted DOWN the screen]
-sep = Separate scales for each column.
-use mm = Plot 'mm' points
[default: all of them]
-ps = Don't draw plot in a window; instead, write it
to stdout in PostScript format.
N.B.: If you view this result in 'gv', you should
turn 'anti-alias' off, and switch to
landscape mode.
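Example (a sketch; 'fred.1D' is a hypothetical multi-column file):
  1dgrayplot -sep -use 200 fred.1D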
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dMarry
Usage: 1dMarry [options] file1 file2 ...
Joins together 2 (or more) ragged-right .1D files, for use with
3dDeconvolve -stim_times_AM2.
**_OR_**
Breaks up 1 married file into 2 (or more) single-valued files.
OPTIONS:
=======
-sep abc == Use the first character (e.g., 'a') as the separator
between values 1 and 2, the second character (e.g., 'b')
as the separator between values 2 and 3, etc.
* These characters CANNOT be a blank, a tab, a digit,
or a non-printable control character!
* Default separator string is '*,' which will result
in output similar to '3*4,5,6'
-divorce == Instead of marrying the files, assume that file1
is already a married file: split time*value*value... tuples
into separate files, and name them in the pattern
'file2_A.1D' 'file2_B.1D' et cetera.
If not divorcing, the 'married' file is written to stdout, and
probably should be captured using a redirection such as '>'.
NOTES:
=====
* You cannot use column [...] or row {...} selectors on
ragged-right .1D files, so don't even think about trying!
* The maximum number of values that can be married is 26.
(No polygamy or polyandry jokes here, please.)
* For debugging purposes, with '-divorce', if 'file2' is '-',
then all the divorcees are written directly to stdout.
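* EXAMPLE (a sketch; the input .1D filenames are hypothetical):
    1dMarry times.1D amps.1D > married.1D
    1dMarry -divorce married.1D single
  The second command should create files 'single_A.1D' and 'single_B.1D'.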
-- RWCox -- written hastily in March 2007 -- hope I don't repent
-- modified to deal with multiple marriages -- December 2008
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dmatcalc
Usage: 1dmatcalc [-verb] expression
Evaluate a space delimited RPN matrix-valued expression:
* The operations are on a stack, each element of which is a
real-valued matrix.
* N.B.: This is a computer-science stack of separate matrices.
If you want to join two matrices in separate files
into one 'stacked' matrix, then you must use program
1dcat to join them as columns, or the system program
cat to join them as rows.
* You can also save matrices by name in an internal buffer
using the '=NAME' operation and then retrieve them later
using just the same NAME.
* You can read and write matrices from files stored in ASCII
columns (.1D format) using the &read and &write operations.
* The following 5 operations, input as a single string,
'&read(V.1D) &read(U.1D) &transp * &write(VUT.1D)'
- reads matrices V and U from disk (separately),
- transposes U (on top of the stack) into U',
- multiplies V and U' (the two matrices on top of the stack),
- and writes matrix VU' out (the matrix left on the stack by '*').
* Calculations are carried out in single precision ('float').
* Operations mostly contain characters such as '&' and '*' that
are special to Unix shells, so you'll probably need to put
the arguments to this program in 'single quotes'.
* You can use '%%' or '@' in place of the '&' character, if you wish.
STACK OPERATIONS
-----------------
number == push scalar value (1x1 matrix) on stack;
a number starts with a digit or a minus sign
=NAME == save a copy of the matrix on top of the stack as 'NAME'
NAME == push a copy of NAME-ed matrix onto top of stack;
names start with an alphabetic character
&clear == erase all named matrices (to save memory);
does not affect the stack at all
&read(FF) == read ASCII (.1D) file onto top of stack from file 'FF'
&read4x4Xform(FF)
== Similar to &read(FF), except that it expects data
for a 12-parameter spatial affine transform.
FF can contain 12x1, 1x12, 16x1, 1x16, 3x4, or
4x4 values.
The read operation loads the data into a 4x4 matrix
r11 r12 r13 r14
r21 r22 r23 r24
r31 r32 r33 r34
0.0 0.0 0.0 1.0
This option was added to simplify the combination of
linear spatial transformations. However, you are better
off using cat_matvec for that purpose.
&write(FF) == write top matrix to ASCII file to file 'FF';
if 'FF' == '-', writes to stdout
&transp == replace top matrix with its transpose
&ident(N) == push square identity matrix of order N onto stack
N is a fixed integer, OR
&R to indicate the row dimension of the
current top matrix, OR
&C to indicate the column dimension of the
current top matrix, OR
=X to indicate the (1,1) element of the
matrix named X
&Psinv == replace top matrix with its pseudo-inverse
[computed via SVD, not via inv(A'*A)*A']
&Sqrt == replace top matrix with its square root
[computed via Denman & Beavers iteration]
N.B.: not all real matrices have real square
roots, and &Sqrt will fail on such a matrix
N.B.: the matrix must be square!
&Pproj == replace top matrix with the projection onto
its column space; Input=A; Output = A*Psinv(A)
N.B.: result P is symmetric and P*P=P
&Qproj == replace top matrix with the projection onto
the orthogonal complement of its column space
Input=A; Output=I-Pproj(A)
* == replace top 2 matrices with their product;
OR stack = [ ... C A B ] (where B = top) goes to
&mult stack = [ ... C AB ]
if either of the top matrices is a 1x1 scalar,
then the result is the scalar multiplication of
the other matrix; otherwise, matrices must conform
+ OR &add == replace top 2 matrices with sum A+B
- OR &sub == replace top 2 matrices with difference A-B
&dup == push duplicate of top matrix onto stack
&pop == discard top matrix
&swap == swap top two matrices (A <-> B)
&Hglue == glue top two matrices together horizontally:
stack = [ ... C A B ] goes to
stack = [ ... C A|B ]
this is like what program 1dcat does.
&Vglue == glue top two matrices together vertically:
stack = [ ... C A B ] goes to
                A
stack = [ ... C - ]
                B
this is like what program cat does.
SIMPLE EXAMPLES
---------------
* Multiply each element of an input 1D file
by a constant factor and write to disk.
1dmatcalc "&read(in.1D) 3.1416 * &write(out.1D)"
* Subtract two 1D files
1dmatcalc "&read(a.1D) &read(b.1D) - &write(stdout:)"
AFNI program: 1dNLfit
Program to fit a model to a vector of data. The model is given by a
symbolic expression, with parameters to be estimated.
Usage: 1dNLfit OPTIONS
Options: [all but '-meth' are actually mandatory]
--------
-expr eee = The expression for the fit. It must contain one symbol from
'a' to 'z' which is marked as the independent variable by
option '-indvar', and at least one more symbol which is
a parameter to be estimated.
++ Expressions use the same syntax as 3dcalc, ccalc, and 1deval.
++ Note: expressions and symbols are not case sensitive.
-indvar c d = Indicates which variable in '-expr' is the independent
variable. All other symbols are parameters, which are
either fixed (constants) or variables to be estimated.
++ Then, read the values of the independent variable from
1D file 'd' (only the first column will be used).
++ If the independent variable has a constant step size,
you can input it with 'd' replaced by a string like
'1D: 100%0:2.1'
which creates an array with 100 values, starting at 0,
then adding 2.1 for each step:
0 2.1 4.2 6.3 8.4 ...
-param ppp = Set fixed value or estimating range for a particular
symbol.
++ For a fixed value, 'ppp' takes the form 'a=3.14', where the
first letter is the symbol name, which must be followed by
an '=', then followed by a constant expression. This
expression can be symbolic, as in 'a=cbrt(3)'.
++ For a parameter to be estimated, 'ppp' takes the form of
two constant expressions separated by a ':', as in
'q=-sqrt(2):sqrt(2)'.
++ All symbols in '-expr' must have a corresponding '-param'
option, EXCEPT for the '-indvar' symbol (which will be set
by its data file).
-depdata v = Read the values of the dependent variable (to be fitted to
'-expr') from 1D file 'v'.
++ File 'v' must have the same number of rows as file 'd'
from the '-indvar' option!
++ File 'v' can have more than one column; each will be fitted
separately to the expression.
-meth m = Set the method for fitting: '1' for L1, '2' for L2.
(The default method is L2, which is usually better.)
Example:
--------
Create a sin wave corrupted by logistic noise, to file ss.1D.
Create a cos wave similarly, to file cc.1D.
Put these files together into a 2 column file sc.1D.
Fit both columns to a 3 parameter model and write the fits to file ff.1D.
Plot the data and the fit together, for fun and profit(?).
1deval -expr 'sin(2*x)+lran(0.3)' -del 0.1 -num 100 > ss.1D
1deval -expr 'cos(2*x)+lran(0.3)' -del 0.1 -num 100 > cc.1D
1dcat ss.1D cc.1D > sc.1D ; \rm ss.1D cc.1D
1dNLfit -depdata sc.1D -indvar x '1D: 100%0:0.1' -expr 'a*sin(b*x)+c*cos(b*x)' \
-param a=-2:2 -param b=1:3 -param c=-2:2 > ff.1D
1dplot -one -del 0.1 -ynames sin:data cos:data sin:fit cos:fit - sc.1D ff.1D
Notes:
------
* PLOT YOUR RESULTS! There is no guarantee that you'll get a good fit.
* This program is not particularly efficient, so using it on a large
scale (e.g., for lots of columns, or in a shell loop) will be slow.
* The results (fitted time series models) are written to stdout,
and should be saved by '>' redirection (as in the example).
The first few lines of the output from the example are:
# 1dNLfit output (meth=L2)
# expr = a*sin(b*x)+c*cos(b*x)
# Fitted parameters:
# A = 1.0828 0.12786
# B = 1.9681 2.0208
# C = 0.16905 1.0102
# ----------- -----------
0.16905 1.0102
0.37753 1.0153
0.57142 0.97907
* Coded by Zhark the Well-Fitted - during Snowzilla 2016.
AFNI program: 1dnorm
Usage: 1dnorm [options] infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, with each column being
L_2 normalized (sum of squares = 1).
* If 'infile' is '-', it will be read from stdin.
* If 'outfile' is '-', it will be written to stdout.
Options:
--------
-norm1 = Normalize so sum of absolute values is 1 (L_1 norm)
-normx = So that max absolute value is 1 (L_infinity norm)
-demean = Subtract each column's mean before normalizing
-demed = Subtract each column's median before normalizing
[-demean and -demed are mutually exclusive!]
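Example:
--------
A minimal sketch ('fred.1D' being a hypothetical input file):
  1dnorm -demean fred.1D - | 1dplot -stdin
This demeans each column, scales it to unit L_2 norm, and pipes
the result to 1dplot for a quick look.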
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dplot
++ 1dplot: AFNI version=AFNI_19.3.16 (Dec 12 2019) [64-bit]
++ Authored by: RWC et al.
Usage: 1dplot [options] tsfile ...
Graphs the columns of a *.1D time series file to the X11 screen.
-------
OPTIONS
-------
-install = Install a new X11 colormap.
-sep = Plot each column in a separate sub-graph.
-one = Plot all columns together in one big graph.
[default = -sep]
-sepscl = Plot each column in a separate sub-graph
and allow each sub-graph to have a different
y-scale. -sepscl is meaningless with -one!
-noline = Don't plot the connecting lines (also implies '-box').
-NOLINE = Same as '-noline', but will not try to plot values outside
the rectangular box that contains the graph axes.
-box = Plot a small 'box' at each data point, in addition
to the lines connecting the points.
* The box size can be set via the environment variable
AFNI_1DPLOT_BOXSIZE; the value is a fraction of the
overall plot size. The standard box size is 0.006.
Example with a bigger box:
1dplot -DAFNI_1DPLOT_BOXSIZE=0.01 -box A.1D
* The box shapes are different for different time
series columns. At present, there is no way to
control which shape is used for what column
(unless you modify the source code, that is).
* You can set environment variable AFNI_1DPLOT_RANBOX
to YES to get the '-noline' boxes plotted in a
pseudo-random order, so that one particular color
doesn't dominate just because it is last in the
plotting order; for example:
1dplot -DAFNI_1DPLOT_RANBOX=YES -one -x X.1D -noline Y1.1D Y2.1D Y3.1D
-hist = Plot graphs in histogram style (i.e., vertical boxes).
* Histograms can be generated from 3D or 1D files using
program 3dhistog; for example
3dhistog -nbin 50 -notitle -min 0 -max .04 err.1D > eh.1D
1dplot -hist -x eh.1D'[0]' -xlabel err -ylabel hist eh.1D'[1]'
or, for something a little more fun looking:
1dplot -one -hist -dashed 1:2 -x eh.1D'[0]' \
-xlabel err -ylabel hist eh.1D'[1]' eh.1D'[1]'
** The '-norm' options below can be useful for plotting data
with different value ranges on top of each other via '-one':
-norm2 = Independently scale each time series plotted to
have L_2 norm = 1 (sum of squares).
-normx = Independently scale each time series plotted to
have max absolute value = 1 (L_infinity norm).
-norm1 = Independently scale each time series plotted to
have sum of absolute values = 1 (L_1 norm).
-demean = This option will remove the mean from each time series
(before normalizing). The combination '-demean -normx -one'
can be useful when plotting disparate data together.
* If you use '-demean' twice, you will get linear detrending.
* Et cetera (e.g., 4 times gives you cubic detrending).
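* For example, to overlay two hypothetical 1D files with very
different value ranges:
1dplot -one -demean -normx a.1D b.1D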
-x X.1D = Use for X axis the data in X.1D.
Note that X.1D should have one column
of the same length as the columns in tsfile.
** Coupled with '-box -noline', you can use '-x' to make
a scatter plot, as in graphing file A1.1D along the
x-axis and file A2.1D along the y-axis:
1dplot -box -noline -x A1.1D -xlabel A1 -ylabel A2 A2.1D
** '-x' will override -dx and -xzero; -xaxis still works
-xl10 X.1D = Use log10(X.1D) as the X axis.
-xmulti X1.1D X2.1D ...
This new [Oct 2013] option allows you to plot different
columns from the data with different values along the
x-axis. You can supply one or more 1D files after the
'-xmulti' option. The columns from these files are
catenated, and then the first xmulti column is used as
x-axis values for the first data column plotted, the
second xmulti column gives the x-axis values for the
second data column plotted, and so on.
** The command line arguments after '-xmulti' are taken
as 1D filenames to read, until an argument starts with
a '-' character -- this would either be another option,
or just a single '-' to separate the xmulti 1D files
from the data files to be plotted.
** If you don't provide enough xmulti columns for all the
data files, the last xmulti column will be re-used.
** Useless but fun example:
1deval -num 100 -expr '(i-i)+z+gran(0,6)' > X1.1D
1deval -num 100 -expr '(i-i)+z+gran(0,6)' > X2.1D
1dplot -one -box -xmulti X1.1D X2.1D - X2.1D X1.1D
-dx xx = Spacing between points on the x-axis is 'xx'
[default = 1] SYNONYMS: '-dt' and '-del'
-xzero zz = Initial x coordinate is 'zz' [default = 0]
SYNONYMS: '-tzero' and '-start'
-nopush = Don't 'push' axes ranges outwards.
-ignore nn = Skip first 'nn' rows in the input file
[default = 0]
-use mm = Plot 'mm' points [default = all of them]
-xlabel aa = Put string 'aa' below the x-axis
[default = no axis label]
-ylabel aa = Put string 'aa' to the left of the y-axis
[default = no axis label]
-plabel pp = Put string 'pp' atop the plot.
Some characters, such as '_', have
special formatting effects. You
can escape that with '\'. For example:
echo 2 4.5 -1 | 1dplot -plabel 'test_underscore' -stdin
versus
echo 2 4.5 -1 | 1dplot -plabel 'test\_underscore' -stdin
-title pp = Same as -plabel, but only works with -ps/-png/-jpg/-pnm options.
-wintitle pp = Set string 'pp' as the title of the frame
containing the plot. Default is based on input.
-naked = Do NOT plot axes or labels, just the graph(s).
You might want to use '-nopush' with '-naked'.
-aspect A = Set the width-to-height ratio of the plot region to 'A'.
Default value is 1.3. Larger 'A' means a wider graph.
-stdin = Don't read from tsfile; instead, read from
stdin and plot it. You cannot combine input
from stdin and tsfile(s). If you want to do so,
use program 1dcat first.
-ps = Don't draw plot in a window; instead, write it
to stdout in PostScript format.
* If you view the result in 'gv', you should turn
'anti-alias' off, and switch to landscape mode.
* You can use the 'gs' program to convert PostScript
to other formats; for example, a .bmp file:
1dplot -ps ~/data/verbal/cosall.1D |
gs -r100 -sOutputFile=fred.bmp -sDEVICE=bmp256 -q -dBATCH -
-jpg fname } = Render plot to an image and save to a file named
-jpeg fname } = 'fname', in JPEG mode or in PNG mode or in PNM mode.
-png fname } = The default image width is 1024 pixels; to change
-pnm fname } = this value to 2000 pixels (say), do
setenv AFNI_1DPLOT_IMSIZE 2000
before running 1dplot. Widths over 2000 may start
to look odd, and will run more slowly.
* PNG files will be smaller than JPEG, and are
compressed without loss.
* PNG output requires that the netpbm program
pnmtopng be installed somewhere in your PATH.
* PNM output files are not compressed, and are manipulable
by the netpbm package: http://netpbm.sourceforge.net/
-pngs SIZE fname } = a convenience function equivalent to
-jpgs SIZE fname } = setenv AFNI_1DPLOT_IMSIZE SIZE and
-jpegs SIZE fname} = -png (or -jpg or -pnm) fname
-pnms SIZE fname }
-ytran 'expr' = Transform the data along the y-axis by
applying the expression to each input value.
For example:
-ytran 'log10(z)'
will take log10 of each input time series value
before plotting it.
* The expression should have one variable (any letter
from a-z will do), which stands for the time series
data to be transformed.
* An expression such as 'sqrt(x*x+i)' will use 'x'
for the time series value and use 'i' for the time
index (starting at 0) -- in this way, you can use
time-dependent transformations, if needed.
* This transformation applies to all input time series
(at present, there is no way to transform different
time series in distinct ways inside 1dplot).
* '-ytran' is applied BEFORE the various '-norm' options.
-xtran 'expr' = Similar, but for the x-axis.
** Applies to '-xmulti' , '-x' , or the default x-axis.
-xaxis b:t:n:m = Set the x-axis to run from value 'b' to
value 't', with 'n' major divisions and
'm' minor tic marks per major division.
For example:
-xaxis 0:100:5:20
Setting 'n' to 0 means no tic marks or labels.
* You can set 'b' to be greater than 't', to
have the x-coordinate decrease from left-to-right.
* This is the only way to have this effect in 1dplot.
* In particular, '-dx' with a negative value will not work!
-yaxis b:t:n:m = Similar to above, for the y-axis. These
options override the normal autoscaling
of their respective axes.
-ynames a b ... = Use the strings 'a', 'b', etc., as
labels to the right of the graphs,
corresponding to each input column.
These strings CANNOT start with the
'-' character.
N.B.: Each separate string after '-ynames'
is taken to be a new label, until the
end of the command line or until some
string starts with a '-'. In particular,
this means you CANNOT do something like
1dplot -ynames a b c file.1D
since the input filename 'file.1D' will
be used as a label string, not a filename.
Instead, you must put another option between
the end of the '-ynames' label list, OR you
can put a single '-' at the end of the label
list to signal its end:
1dplot -ynames a b c - file.1D
TSV files: When plotting a TSV file, where the first row
is the set of column labels, you can use this
Unix trick to put the column labels here:
-ynames `head -1 file.tsv`
The 'head' command copies just the first line
of the file to stdout, and the backquotes `...`
capture stdout and put it onto the command line.
* You might need to put a single '-' after this
option to prevent the problem alluded to above.
In any case, it can't hurt to use '-' as an option
after '-ynames'.
* If any of the TSV labels start with the '-' character,
peculiar and unpleasant things might transpire.
-volreg = Makes the 'ynames' be the same as the
6 labels used in plug_volreg for
Roll, Pitch, Yaw, I-S, R-L, and A-P
movements, in that order.
-thick = Each time you give this, it makes the line
thickness used for plotting a little larger.
[An alternative to using '-DAFNI_1DPLOT_THIK=...']
-THICK = Twice the power of '-thick' at no extra cost!!
-dashed codes = Plot dashed lines between data points. The 'codes'
are a colon-separated list of dash values, which
can be 1 (solid), 2 (longer dashes), or 3 (shorter dashes).
** Example: '-dashed 1:2:3' means to plot the first time
series with solid lines, the second with long dashes,
and the third with short dashes.
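** For instance, with a hypothetical 3-column file fred.1D:
1dplot -one -dashed 1:2:3 fred.1D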
-Dname=val = Set environment variable 'name' to 'val'
for this run of the program only:
1dplot -DAFNI_1DPLOT_THIK=0.01 -DAFNI_1DPLOT_COLOR_01=blue '1D:3 4 5 3 1 0'
You may also select a subset of columns to display using
a tsfile specification like 'fred.1D[0,3,5]', indicating
that columns #0, #3, and #5 will be the only ones plotted.
For more details on this selection scheme, see the output
of '3dcalc -help'.
Example: graphing a 'dfile' output by 3dvolreg, when TR=5:
1dplot -volreg -dx 5 -xlabel Time 'dfile[1..6]'
You can also input more than one tsfile, in which case the files
will all be plotted. However, if the files have different column
lengths, the shortest one will rule.
The colors for the line graphs cycle between black, red, green, and
blue. You can alter these colors by setting Unix environment
variables of the form AFNI_1DPLOT_COLOR_xx -- cf. README.environment.
You can alter the thickness of the lines by setting the variable
AFNI_1DPLOT_THIK to a value between 0.00 and 0.05 -- the units are
fractions of the page size; of course, you can also use the options
'-thick' or '-THICK' if you prefer.
----------------
RENDERING METHOD
----------------
On 30 Apr 2012, a new method of rendering the 1dplot graph into an X11
window was introduced -- this method uses 'anti-aliasing' to produce
smoother-looking lines and characters. If you want the old coarser-looking
rendering method, set environment variable AFNI_1DPLOT_RENDEROLD to YES.
The program always uses the new rendering method when drawing to a JPEG
or PNG or PNM file (which is not and never has been just a screen capture).
There is no way to disable the new rendering method for image-file saves.
------
LABELS
------
Besides normal alphabetic text, the various labels can include some
special characters, using TeX-like escapes starting with '\'.
Also, the '^' and '_' characters denote super- and sub-scripts,
respectively. The following command shows many of the escapes:
1deval -num 100 -expr 'J0(t/4)' | 1dplot -stdin -thick \
-xlabel '\alpha\beta\gamma\delta\epsilon\zeta\eta^{\oplus\dagger}\times c' \
-ylabel 'Bessel Function \green J_0(t/4)' \
-plabel '\Upsilon\Phi\Chi\Psi\Omega\red\leftrightarrow\blue\partial^{2}f/\partial x^2'
TIMESERIES (1D) INPUT
---------------------
A timeseries file is in the form of a 1D or 2D table of ASCII numbers;
for example: 3 5 7
2 4 6
0 3 3
7 2 9
This example has 4 rows and 3 columns. Each column is considered as
a timeseries in AFNI. The convention is to store this type of data
in a filename ending in '.1D'.
** COLUMN SELECTION WITH [] **
When specifying a timeseries file to a command-line AFNI program, you
can select a subset of columns using the '[...]' notation:
'fred.1D[5]' ==> use only column #5
'fred.1D[5,9,17]' ==> use columns #5, #9, and #17
'fred.1D[5..8]' ==> use columns #5, #6, #7, and #8
'fred.1D[5..13(2)]' ==> use columns #5, #7, #9, #11, and #13
Column indices start at 0. You can use the character '$'
to indicate the last column in a 1D file; for example, you
can select every third column in a 1D file by using the selection list
'fred.1D[0..$(3)]' ==> use columns #0, #3, #6, #9, ....
** ROW SELECTION WITH {} **
Similarly, you select a subset of the rows using the '{...}' notation:
'fred.1D{0..$(2)}' ==> use rows #0, #2, #4, ....
You can also use both notations together, as in
'fred.1D[1,3]{1..$(2)}' ==> columns #1 and #3; rows #1, #3, #5, ....
** DIRECT INPUT OF DATA ON THE COMMAND LINE WITH 1D: **
You can also input a 1D time series 'dataset' directly on the command
line, without an external file. The 'filename' for such input has the
general format
'1D:n_1@val_1,n_2@val_2,n_3@val_3,...'
where each 'n_i' is an integer and each 'val_i' is a float. For
example
-a '1D:5@0,10@1,5@0,10@1,5@0'
specifies that variable 'a' be assigned to a 1D time series of 35
values, alternating in blocks between value 0 and value 1.
* Spaces or commas can be used to separate values.
* A '|' character can be used to start a new input "line":
Try 1dplot '1D: 3 4 3 5 | 3 5 4 3'
** TRANSPOSITION WITH \' **
Finally, you can force most AFNI programs to transpose a 1D file on
input by appending a single ' character at the end of the filename.
N.B.: Since the ' character is also special to the shell, you'll
probably have to put a \ character before it. Examples:
1dplot '1D: 3 2 3 4 | 2 3 4 3' and
1dplot '1D: 3 2 3 4 | 2 3 4 3'\'
When you have reached this level of understanding, you are ready to
take the AFNI Jedi Master test. I won't insult you by telling you
where to find this examination.
TAB SEPARATED VALUE (.tsv) FILES [Sep 2018]
-------------------------------------------
These files are used in BIDS http://bids.neuroimaging.io and AFNI
programs can read these in a few places.
The format of a .tsv file is a set of columns, where the values in
each row are separated by tab characters -- spaces are NOT separators.
Each element is a string, some of which are numeric (e.g., 3.1416).
The first row of a .tsv file is a set of strings which are column
descriptors (separated by tabs, of course). For the most part, the
following data in each column are exclusively numeric or exclusively
strings. Strings can contain blanks/spaces since only tabs are used
to separate values.
A .tsv file can be read in most places where a .1D file is read.
However, columns (after the header row) that are not purely numeric
will be ignored, since the internal usage of .1D data in AFNI is numeric.
Thus, you can do something like
1dplot -nopush -sepscl sub-10506_task-pamenc_events.tsv
and you will get a plot of all the numeric columns in this BIDS file.
Column selection '[]' can be done, using numbers to specify columns
or using the column labels in the .tsv file.
N.B.: The string 'N/A' or 'n/a' in a column that is otherwise numeric
will be considered to be a number, and will be replaced on input
with the mean of the "true" numbers in the column -- there is
no concept of missing data in an AFNI .1D file.
++ If you don't like this, well ... too bad for you.
Program 1dcat has special knowledge of .tsv files, and will cat
(sideways - along rows) .tsv and .1D files together. It also has an
option to write the output in .tsv format.
For example, to get the 'onset', 'duration', and 'trial_type' columns
out of a BIDS task .tsv file, a command like this could be used:
1dcat sub-10506_task-pamenc_events.tsv'[onset,duration,trial_type]'
Note that the column headers are lost in this output, but could be kept
if the 1dcat '-tsvout' option were used. In reverse, a numeric .1D file
can be converted to .tsv format by a command like:
1dcat -tsvout Fred.1D
In this case, since the data in a .1D file doesn't have headers for its
columns, 1dcat will invent some column names.
At this time, other programs don't 'know' much about .tsv files, and will
ignore the header row and non-numeric columns when reading a .tsv file
in place of a .1D file.
--------------
MARKING BLOCKS (e.g., censored time points)
--------------
The following options let you mark blocks along the x-axis, by drawing
colored vertical boxes over the standard white background.
* The intended use is to mark blocks of time points that are censored
out of an analysis, which is why the options are the same as those
in 3dDeconvolve -- but you can mark blocks for any reason, of course.
* These options don't do anything when the '-x' option is used to
alter the x-axis spacings.
* To see what the various color markings look like, try this silly example:
1deval -num 100 -expr 'lran(2)' > zz.1D
1dplot -thick -censor_RGB red -CENSORTR 3-8 \
-censor_RGB green -CENSORTR 11-16 \
-censor_RGB blue -CENSORTR 22-27 \
-censor_RGB yellow -CENSORTR 34-39 \
-censor_RGB violet -CENSORTR 45-50 \
-censor_RGB pink -CENSORTR 55-60 \
-censor_RGB gray -CENSORTR 65-70 \
-censor_RGB #2cf -CENSORTR 75-80 \
-plabel 'red green blue yellow violet pink gray #2cf' zz.1D &
-censor_RGB clr = set the color used for the marking to 'clr', which
can be one of the strings below:
red green blue yellow violet pink gray (OR grey)
* OR 'clr' can be in the form '#xyz' or '#xxyyzz', where
'x', 'y', and 'z' are hexadecimal digits -- for example,
'#2cf' is sort of a cyan color.
* OR 'clr' can be in the form 'rgbi:rf/gf/bf' where
each color intensity (rf, gf, bf) is a number between
0.0 and 1.0 -- e.g., white is 'rgbi:1.0/1.0/1.0'.
Since the background is white, dark colors don't look
good here, and will obscure the graphs; for example,
pink is defined here as 'rgbi:1.0/0.5/0.5'.
* The default color is (a rather pale) yellow.
* You can use '-censor_RGB' more than once. The color
most recently specified on the command line
is what will be used with the '-censor' and '-CENSORTR'
options. This allows you to mark different blocks
with different colors (e.g., if they were censored
for different reasons).
* The feature of allowing multiple '-censor_RGB' options
means that you must put this option BEFORE the
relevant '-censor' and/or '-CENSORTR' options.
Otherwise, you'll get the default yellow color!
-censor cname = cname is the filename of censor .1D time series
* This is a file of 1s and 0s, indicating which
time points are to be un-marked (1) and which are
to be marked (0).
* Please note that only one '-censor' option can be
used, for compatibility with 3dDeconvolve.
* The option below may be simpler to use!
(And can be used multiple times.)
-CENSORTR clist = clist is a list of strings that specify time indexes
to be marked in the graph(s). Each string is of
one of the following forms:
37 => mark global time index #37
2:37 => mark time index #37 in run #2
37..47 => mark global time indexes #37-47
37-47 => same as above
*:0-2 => mark time indexes #0-2 in all runs
2:37..47 => mark time indexes #37-47 in run #2
* Time indexes within each run start at 0.
* Run indexes start at 1 (just to be confusing).
* Multiple -CENSORTR options may be used, or
multiple -CENSORTR strings can be given at
once, separated by spaces or commas.
* Each argument on the command line after
'-CENSORTR' is treated as a censoring string,
until an argument starts with a '-' or an
alphabetic character, or it contains the substring
'1D'. This means that if you want to plot a file
named '9zork.xyz', you may have to do this:
1dplot -CENSORTR 3-7 18-22 - 9zork.xyz
The stand-alone '-' will stop the processing
of censor strings; otherwise, the '9zork.xyz'
string, since it doesn't start with a letter,
would be treated as a censoring string, which
you would find confusing.
** N.B.: 2:37,47 means index #37 in run #2 and
global time index 47; it does NOT mean
index #37 in run #2 AND index #47 in run #2.
-concat rname = rname is the filename for list of concatenated runs
* 'rname' can be in the format
'1D: 0 100 200 300'
which indicates 4 runs, the first of which
starts at time index=0, second at index=100,
and so on.
* The ONLY function of '-concat' is for use with
'-CENSORTR', to be compatible with 3dDeconvolve
[e.g., for plotting motion parameters from]
[3dvolreg -1Dfile, where you've cat-enated]
[the 1D files from separate runs into one ]
[long file for plotting with this program.]
-rbox x1 y1 x2 y2 color1 color2
= Draw a rectangular box with corners (x1,y1) to
(x2,y2), in color1, with an outline in color2.
Colors are names, such as 'green'.
[This option lets you make bar]
[charts, *if* you care enough.]
-Rbox x1 y1 x2 y2 y3 color1 color2
= As above, with an extra horizontal line at y3.
-line x1 y1 x2 y2 color dashcode
= Draw one line segment.
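For instance (an illustrative sketch; the coordinates and
colors here are chosen arbitrarily):
1dplot -rbox 2 0 4 1 green black -line 0 0 9 2 red 1 '1D: 0 1 2 1 0 1 2 0 1 2'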
Another fun fun example:
1dplot -censor_RGB #ffa -CENSORTR '0-99' \
`1deval -1D: -num 61 -dx 0.3 -expr 'J0(x)'`
which illustrates the use of 'censoring' to mark the entire graph
background in pale yellow '#ffa', and also illustrates the use
of the '-1D:' option in 1deval to produce output that can be
used directly on the command line, via the backquote `...` operator.
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dplot.py
OVERVIEW ~1~
This program is for making images to visualize columns of numbers from
"1D" text files. It is based heavily on RWCox's 1dplot program, just
using Python (particularly matplotlib). To use this program, Python
version >=2.7 is required, as well as matplotlib modules (someday numpy
might be needed, as well).
This program takes very few required options-- mainly, file names and
an output prefix-- but it allows the user to control/add many
features, such as axis labels, titles, colors, adding in censor
information, plotting summary boxplots and more.
++ constructed by PA Taylor (NIMH, NIH, USA).
# =========================================================================
COMMAND OPTIONS ~1~
-help, -h :see helpfile
-infiles II :(req) one or more file names of text files. Each column
in this file will be treated as a separate time series
for plotting (i.e., as 'y-values'). One can use
AFNI-style column '[ ]' and row '{ }' selectors. One
or more files may be entered, but they must all be of
equal length.
-yfiles YY :exactly the same behavior as "-infiles ..", just another
option name for it that might be more consistent with
other options.
-prefix PP :output filename or prefix; if no file extension for an
image is included in 'PP', one will be added from a
list. At present, OK file types to output should include:
.jpg, .png, .tif, .pdf
... but note that the kinds of image files you may output
may be limited by packages (or lack thereof) installed on
your own computer. Default output image type is .jpg
-boxplot_on :a fun feature to show a small, additional boxplot
adjacent to each time series. The plot is a standard
Python boxplot of that time series's values. The box
shows the 25-75%ile range (interquartile range, IQR);
the median value is highlighted by a white line; whiskers
stretch to 1.5*IQR; circles show outliers.
When using this option and censoring, by default both a
boxplot of data "before censoring" (BC) and one "after
censoring" (AC) will be added. See '-bplot_view ...'
about current opts to change that, if desired.
-bplot_view BC_ONLY | AC_ONLY
:when using '-boxplot_on' and censoring, by default the
plotter will put one boxplot of data "before censoring"
(BC) and one "after censoring" (AC). To show only one or
the other, use this option with the appropriate keyword.
-margin_off :use this option to have the plot frame fill the figure
window completely; thus, no labels, frame, titles or
other parts of the 'normal' image outside the plot
window will be visible. Tick lines will still be
present, living their best lives.
This is probably only useful/recommended/tested for
plots with a single panel.
-scale SCA1 SCA2 SCA3 ...
:provide a list of scales to apply to the y-values.
These will be applied multiplicatively to the y-values;
there should either be 1 (applied to all time series)
or the same number as the time series (in the same
order as those were entered). The scale values are
also applied to the censor_hline values, but *not* to
the "-yaxis ..." range(s).
Note that there are a couple keywords that can be used
instead of SCA* values:
SCALE_TO_HLINE: each input time series is
vertically scaled so that its censor_hline -> 1.
That is, each time point is divided by the
censor_hline value. When using this, a visually
pleasing yaxis range might be 0:3.
SCALE_TO_MAX: each input time series is
vertically scaled so that its max value -> 1.
That is, each time point is divided by the
max value. When using this, a visually
pleasing yaxis range might be 0:1.1.
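For example, a brief sketch using one of the keywords (the
enorm file name here is hypothetical):
1dplot.py -infiles motion_enorm.1D -scale SCALE_TO_MAX \
-yaxis 0:1.1 -prefix enorm_scaled.jpg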
-xfile XX :one way to input x-values explicitly: as a "1D" file XX
containing a single column of numbers. If no xfile is
entered, then a list of integers is created, 0..N-1, based
on the length of the "-infiles ..".
-xvals START STOP STEP
:an alternative means for entering abscissa values: one
can provide exactly 3 numbers: the start (inclusive),
the stop (exclusive), and the step size, following
Python conventions-- that is, numbers are generated
[START,STOP) in stepsizes of STEP.
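For example (a sketch assuming a hypothetical 40-row file ts.1D):
1dplot.py -infiles ts.1D -xvals 0 20 0.5 -prefix ts.jpg
which generates x-values 0, 0.5, 1.0, ..., 19.5, one per row.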
-yaxis YMIN1:YMAX1 YMIN2:YMAX2 YMIN3:YMAX3 ...
:optional range for each "infile" y-axis; note the use
of a colon to designate the min/max of the range. One
can also specify just the min (e.g., "YMIN:") or just
the max (e.g., ":YMAX"). The final number of y-axis
values or pairs *must* match the total number of columns
of data from infiles; a placeholder could just be
":". Without specifying a range, one is calculated
automatically from the min and max of the dsets
themselves. The order of the ranges should match the order
of infiles.
-ylabels YL1 YL2 YL3 ...
:optional text labels for each "infile" column; the
final number of ylabels *must* match the total number
of columns of data from infiles. For 1D files output
by 3dvolreg, one can automatically provide the 6
associated ylabels by providing the keyword 'VOLREG'
(and this counts as 6 labels). The order of ylabels
should match the order of infiles.
-xlabel XL :optional text label for the abscissa/x-axis. Only one may
be entered, and it will *only* be displayed on the bottom
panel of the output plot. Using labels is good practice!!
-title TT :optional title for the set of plots, placed above the top-
most subplot
-reverse_order :optional switch; by default, the entered time series
are plotted top to bottom according to the order they
were entered (i.e., first-listed plot at the top).
This option reverses that order (to first-listed plot
at the bottom), in order to match with 1dplot's
behavior.
-sepscl :make each graph have its own y-range, determined by
slightly padding its min and max values. By default,
the separate plots all have the same y-range, which
is determined by taking the min-of-mins and max-of-maxes,
and padding slightly outward.
-dpi DDD :choose the output image's DPI. The default value is
150.
-figsize FX FY :choose the output image's dimensions (units are inches).
The default width is 10; the default height
is 0.5 + N*0.75, where 'N' is the number of
infile columns.
-fontsize FS :change image fontsize; default is 10.
-fontfamily FF :change font-family used; default is the luvly
monospace.
-fontstyles FSS :add in a fontname; should match with chosen
font-family; default is whatever Python has on your
system for the given family. Whether your prescribed
font gets used depends on what is installed on your
comp.
-colors C1 C2 C3 ...
:you can decide what color(s) to cycle through in plots
(enter one or more); if there are more infile columns
than entered colors, the program just keeps cycling
through the list. By default, if only 1 infile column is
given, the plotline will be black; when more than one
infile column is given, a default palette of 10
colors, chosen for their mutual-distinguishable-ness,
will be cycled through.
-patches RL1 RL2 RL3 ...
:when viewing data from multiple runs that have been
processed+concatenated, knowing where they start/stop
can be useful. This option helps with that, by
alternating patches of the background slightly between
white and light gray. The user enters any appropriate
number of run lengths, and the background patch for
the duration of the first is white, then light gray,
etc. (to *start* with light gray, one can have '0' be
the first RL value).
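For example, a brief sketch with hypothetical run lengths:
1dplot.py -infiles dfile_rall.1D -patches 64 61 67 \
-prefix runs.jpg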
-censor_trs CS1 CS2 CS3 ...
:specify time points where censoring has occurred (e.g.,
due to a motion or outlier criterion). With this
option, the values are entered using AFNI index
notation, such as '0..3,8,25,99..$'. Note that if you
use special characters like the '$', then the given
string must be enclosed in quotes.
One or more strings can be entered, and results are
simply combined (as well as if censor files are
entered-- see the '-censor_files ..' opt).
In order to highlight censored points, a translucent
band of width 1 will be added to all plots at each one.
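For example (a sketch; the file name is hypothetical):
1dplot.py -infiles motion_enorm.1D -censor_trs '0..2,45' \
-prefix cen_trs.jpg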
-censor_files CF1 CF2 CF3 ...
:specify time points where censoring has occurred (e.g.,
due to a motion or outlier criterion). With this
option, the values are entered as columns of 1D files,
where 0 indicates censoring at the [i]th time point,
and 1 indicates *no* censoring there.
One or more files can be entered, and results are
simply combined (as well as if censor strings are
entered-- see the '-censor_trs ..' opt).
In order to highlight censored points, a translucent
band of width 1 will be added to all plots at each one.
-censor_hline CH1 CH2 CH3 ...
:one can add a dotted horizontal line to the plot, with
the intention that it represents the relevant threshold
(for example, motion limit or outlier fraction limit).
One can specify more than one hline: if one line
is entered, it will be applied to each plot; if more
than one hline is entered, there must be the same number
of values as infile columns.
Ummm, it is also assumed that all censor hline values
are >=0; if negative, it will be a problem-- ask if this
is a problem!
-censor_RGB COL :choose the color of the censoring background; default
is: [1, 0.7, 0.7].
-bkgd_color BC :change the background color outside of the plot
windows. Default is the Python color: 0.9.
EXAMPLES ~1~
1) Plot Euclidean norm (enorm) profile, with the censor limit and
related file of censoring:
1dplot.py \
-sepscl \
-boxplot_on \
-infiles motion_sub-10506_enorm.1D \
-censor_files motion_sub-10506_censor.1D \
-censor_hline 0.2 \
-title "Motion censoring" \
-ylabels enorm \
-xlabel "vols" \
-title "Motion censoring" \
-prefix mot_cen_plot.jpg
2) Plot the 6 rigid-body motion parameters from 3dvolreg, along with
the useful composite 'enorm' and outlier time series
1dplot.py \
-sepscl \
-boxplot_on \
-reverse_order \
-infiles dfile_rall.1D \
motion_sub-10506_enorm.1D \
outcount_rall.1D \
-ylabels VOLREG enorm outliers \
-xlabel "vols" \
-title "Motion and outlier plots" \
-prefix mot_outlier_plot.png
AFNI program: 1dRplot
Usage:
------
1dRplot is a program for plotting a 1D file
Options in alphabetical order:
------------------------------
-addavg: Add line at average of column
-col.color COL1 [COL2 ...]: Colors for each column in -input.
COL? are integers for now.
-col.grp 1Dfile or Rexp: integer labels defining column grouping
-col.line.type LT1 [LT2 ...]: Line type for each column in -input.
LT? are integers for now.
-col.name NAME1 [NAME2 ...]: Name of each column in -input.
Special flags:
VOLREG: --> 'Roll Pitch Yaw I-S R-L A-P'
-col.name.show : Show names of columns in -input.
-col.nozeros: Do not plot all zeros columns
-col.plot.char CHAR1 [CHAR2 ...] : Symbols for each column in -input.
CHAR? are integers (usually 0-127), or
characters + - I etc.
See the following link for what CHAR? values you can use:
http://stat.ethz.ch/R-manual/R-patched/library/graphics/html/points.html
-col.plot.type PLOT_TYPE: Column plot type.
'l' for line, 'p' for points, 'b' for both
-col.text.lym LYM_TEXT: Text to be placed at left Y margin.
You need one string per column.
Special Flags: You can also use COL.NAME to use column
names for the margin text, or you can use
COL.IND to use the column's index in the file
-col.text.rym RYM_TEXT: Text to be placed at right Y margin.
You need one string per column.
See also Special Flags section under -col.text.lym
-col.ystack: Scale each column and offset it based on its
column index. This is useful for stacking
a large number of columns on one plot.
It is only carried out when graphing more
than one series with the -one option.
-grid.show : Show grid.
-grp.label GROUP1 [GROUP2 ...]: Labels assigned to each group.
Default is no labeling
-help: this help message
-i 1D_INPUT: file to plot. This field can have multiple
formats. See Data Strings section below.
1dRplot will automatically detect certain
1D files output by some programs such as 3dhistog
or 3ddot and adjust parameters accordingly.
-input 1D_INPUT: Same as -i
-input_delta 1D_INPUT: file containing value for error bars
-input_type 1D_TYPE: Type of data in 1D file.
Choose from 'VOLREG', or 'XMAT'
-leg.fontsize : fontsize for legend text.
-leg.line.color : Color to use for items in legend.
Default is taken from column line color.
-leg.line.type : Line type to use for items in legend.
Default is taken from column line types.
If you want no line, set -leg.line.type = 0
-leg.names : Names to use for items in legend.
Default is taken from column names.
-leg.ncol : Number of columns in legend.
-leg.plot.char : plot characters to use for items in legend.
Default is taken from column plot character (-col.plot.char).
-leg.position : Legend position. Choose from:
bottomright, bottom, bottomleft
left, topleft, top, topright, right,
and center
-leg.show : Show legend.
-load.Rdat RDAT: load data list from save.Rdat for reproducing plot.
Note that you cannot override the settings in RDAT,
unless you run in the interactive R mode. For example,
say you have dice.Rdat saved from a previous command
and you want to change P$nodisp to TRUE:
load('dice.Rdat'); P$nodisp <- TRUE; plot.1D.eng(P)
-mat: Display as matrix
-matplot: Display as matrix
-msg.trace: Output trace information along with errors and notices
-multi: Put columns in separate graphs
-multiplot: Put columns in separate graphs
-nozeros: Do not plot all zeros time series
-one: Put all columns on one graph
-oneplot: Put all columns on one graph
-prefix PREFIX: Output prefix. See also -save.
-rowcol.name NAME1 [NAME2 ...]: Names of rows, same as names of columns.
For the moment, this is only used with -matplot.
-row.name NAME1 [NAME2 ...]: Name of each row in -input.
For the moment, this is only used with -matplot
-run_examples: Run all examples, one after the other.
-save PREFIX: Save plot and quit
No need for -prefix with this option
-save.Rdat : Save data list for reproducing plot in R.
You need to specify -prefix or -save
along with this option to set the prefix.
See also -load.Rdat
-save.size width height: Save figure size in pixels
Default is 2000 2000
-show_allowed_options: list of allowed options
-title TITLE: Graph title. File name is used by default.
Use NONE to be sure no title is used.
-TR TR: Sampling period, in seconds.
-verb VERB: VERB is an integer specifying verbosity level.
0 for quiet (Default). 1 or more: talkative.
-x 1D_INPUT: x axis. You can also use the string 'ENUM'
to indicate that the x axis should go from
1 to N, the number of samples in -input
-xax.label XLABEL: Label of X axis
-xax.lim MIN MAX [STEP]: Range of X axis, STEP is optional
-xax.tic.text XTTEXT: X tics text
-yax.label YLABEL: Label of Y axis
-yax.lim MIN MAX [STEP]: Range of Y axis, STEP is optional
-yax.tic.text YTTEXT: Y tics text
-zeros: Do plot all zeros time series
Data Strings:
-------------
You can specify input matrices and vectors in a variety of
ways. The simplest is by specifying a .1D file with all
the trimmings of column and row selectors. You can also
specify a string that gets evaluated on the fly.
For example: '1D: 1 4 8' evaluates to a vector of values 1 4 and 8.
Also, you can use R expressions such as: 'R: seq(0,10,3)'
To download demo data from AFNI's website run this command:
-----------------------------------------------------------
curl -o demo.X.xmat.1D afni.nimh.nih.gov/pub/dist/edu/data/samples/X.xmat.1D
curl -o demo.motion.1D afni.nimh.nih.gov/pub/dist/edu/data/samples/motion.1D
Example 1 --- :
--------------------------------
1dRplot -input demo.X.xmat.1D'[5..10]'
Example 2 --- :
--------------------------------
1dRplot -input demo.X.xmat.1D'[5..10]' \
-input_type XMAT
Example 3 --- :
--------------------------------
1dRplot -input demo.motion.1D \
-input_type VOLREG
Example 4 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)'
Example 5 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 5)' \
-one
Example 6 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack
Example 7 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack \
-col.grp '1D:1 1 1 2 2 2 3 3 3 3' \
-grp.label slow medium fast \
-prefix ta.jpg \
-yax.lim 0 18 \
-leg.show \
-leg.position top
Example 8 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack \
-col.grp '1D:1 1 1 2 2 2 3 3 3 3' \
-grp.label slow medium fast \
-prefix tb.jpg \
-yax.lim 0 18 \
-leg.show \
-leg.position top \
-nozeros \
-addavg
Example 9 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 10)' \
-one \
-col.ystack \
-col.grp '1D:1 1 1 2 2 2 3 3 3 3' \
-grp.label slow medium fast \
-prefix tb.jpg \
-yax.lim 0 18 \
-leg.show \
-leg.position top \
-nozeros \
-addavg \
-col.text.lym Tutti mi chiedono tutti mi vogliono \
Donne ragazzi vecchi fanciulle \
-col.text.rym "R:paste('Col',seq(1,10), sep='')"
Example 10 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.plot.char 2 \
-col.plot.type p
Example 11 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.line.type 3 \
-col.plot.type l
Example 12 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.plot.char 2 \
-col.line.type 3 \
-col.plot.type b
Example 13 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one \
-col.plot.char 2 5\
-col.line.type 3 4\
-col.plot.type b \
-TR 2
Example 14 --- :
--------------------------------
1dRplot -input 'R:plot.1D.testmat(100, 2)' \
-one -col.plot.char 2 -col.line.type 3 \
-col.plot.type b -TR 2 \
-yax.tic.text 'numa numa numa numaei' \
-xax.tic.text 'Alo' 'Salut' 'sunt eu' 'un haiduc'
AFNI program: 1dSEM
Usage: 1dSEM [options] -theta 1dfile -C 1dfile -psi 1dfile -DF nn.n
Computes path coefficients for connection matrix in Structural Equation
Modeling (SEM)
The program takes as input:
1. A 1D file with an initial representation of the connection matrix
with a 1 for each interaction component to be modeled and a 0 if
it is not to be modeled. This matrix should be PxP (P rows, P columns)
2. A 1D file of the C, correlation matrix, also with dimensions PxP
3. A 1D file of the residual variance vector, psi
4. The degrees of freedom, DF
Output is printed to the terminal and may be redirected to a 1D file
The path coefficient matrix is printed for each matrix computed
Options:
-theta file.1D = connection matrix 1D file with initial representation
-C file.1D = correlation matrix 1D file
-psi file.1D = residual variance vector 1D file
-DF nn.n = degrees of freedom
-max_iter n = maximum number of iterations for convergence (Default=10000).
Values can range from 1 to any positive integer less than 10000.
-nrand n = number of random trials before optimization (Default = 100)
-limits m.mmm n.nnn = lower and upper limits for connection coefficients
(Default = -1.0 to 1.0)
-calccost = no modeling at all, just calculate the cost function for the
coefficients as given in the theta file. This may be useful for verifying
published results
-verbose nnnnn = print info every nnnnn steps
Model search options:
Look for best model. The initial connection matrix file must follow these
specifications. Each entry must be 0 for entries excluded from the model,
1 for each required entry in the minimum model, 2 for each possible path
to try.
-tree_growth or
-model_search = search for best model by growing a model for one additional
coefficient from the previous model for n-1 coefficients. If the initial
theta matrix has no required coefficients, the initial model will grow from
the best model for a single coefficient
-max_paths n = maximum number of paths to include (Default = 1000)
-stop_cost n.nnn = stop searching for paths when cost function is below
this value (Default = 0.1)
-forest_growth or
-grow_all = search over all possible models by comparing models at
incrementally increasing number of path coefficients. This
algorithm searches all possible combinations, so it can be
exceptionally slow, especially as the number of
coefficients gets larger, for example at n>=9.
-leafpicker = relevant only for forest growth searches. Expands the search
optimization to look at multiple paths to avoid local minima. This method
is the default technique for tree growth and standard coefficient searches.
This program uses a Powell optimization algorithm to find the connection
coefficients for any particular model.
References:
Powell, MJD, "The NEWUOA software for unconstrained optimization without
derivatives", Technical report DAMTP 2004/NA08, Cambridge University
Numerical Analysis Group -- http://www.damtp.cam.ac.uk/user/na/reports.html
Bullmore, ET, Horwitz, B, Honey, GD, Brammer, MJ, Williams, SCR, Sharma, T,
How Good is Good Enough in Path Analysis of fMRI Data?
NeuroImage 11, 289-301 (2000)
Stein, JL, et al., A validated network of effective amygdala connectivity,
NeuroImage (2007), doi:10.1016/j.neuroimage.2007.03.022
The initial representation in the theta file is non-zero for each element
to be modeled. The 1D file can have leading columns for labels that will
be used in the output. Label rows must be commented with the # symbol
If using any of the model search options, the theta file should have a '1' for
each required coefficient, '0' for each excluded coefficient, '2' for an
optional coefficient. Excluded coefficients are not modeled. Required
coefficients are included in every computed model.
N.B. - Connection directionality in the path connection matrices is from
column to row of the output connection coefficient matrices.
Be very careful when interpreting those path coefficients.
First of all, they are not correlation coefficients. Suppose we have a
network with a path connecting from region A to region B. The meaning
of the coefficient theta (e.g., 0.81) is this: if region A increases by
one standard deviation from its mean, region B would be expected to increase
by 0.81 its own standard deviations from its own mean while holding all other
relevant regional connections constant. With a path coefficient of -0.16,
when region A increases by one standard deviation from its mean, region B
would be expected to decrease by 0.16 its own standard deviations from its
own mean while holding all other relevant regional connections constant.
So theoretically speaking the range of the path coefficients can be anything,
but most of the time they range from -1 to 1. To save running time, the
default values for -limits are set to -1 and 1, but if the result hits
the boundary, increase them and re-run the analysis.
Examples:
To confirm a specific model:
1dSEM -theta inittheta.1D -C SEMCorr.1D -psi SEMvar.1D -DF 30
To search models by growing from the best single coefficient model
up to 12 coefficients
1dSEM -theta testthetas_ms.1D -C testcorr.1D -psi testpsi.1D \
-limits -2 2 -nrand 100 -DF 30 -model_search -max_paths 12
To search all possible models up to 8 coefficients:
1dSEM -theta testthetas_ms.1D -C testcorr.1D -psi testpsi.1D \
-nrand 10 -DF 30 -stop_cost 0.1 -grow_all -max_paths 8 | & tee testgrow.txt
For more information, see https://afni.nimh.nih.gov/sscc/gangc/PathAna.html
and our HBM 2007 poster at
https://afni.nimh.nih.gov/sscc/posters/file.2007-06-07.0771819246
If you find this program useful, please cite:
G Chen, DR Glen, JL Stein, AS Meyer-Lindenberg, ZS Saad, RW Cox,
Model Validation and Automated Search in FMRI Path Analysis:
A Fast Open-Source Tool for Structural Equation Modeling,
Human Brain Mapping Conference, 2007
AFNI program: 1dsound
Usage: 1dsound [options] tsfile
Program to create a sound file from a 1D file (column of numbers).
Is this program useful? Probably not, but it can be fun.
-------
OPTIONS
-------
===== output filename =====
-prefix ppp = Output filename will be ppp.au
[Sun audio format https://en.wikipedia.org/wiki/Au_file_format]
+ If you don't use '-prefix', the output is file 'sound.au'.
+ If 'ppp' ends in '.au', this program won't add another '.au'.
===== encoding details =====
-16PCM = Output in 16-bit linear PCM encoding (uncompressed)
+ Less quantization noise (audible hiss) :)
+ Takes twice as much disk space for output as 8-bit output :(
+++ This is the default method now!
+ https://en.wikipedia.org/wiki/Pulse-code_modulation
-8PCM = Output in 8-bit linear PCM encoding
+ There is no good reason to use this option.
-8ulaw = Output in 8-bit mu-law encoding.
+ Provides a little better quality than -8PCM,
but still has audible quantization noise hiss.
+ https://en.wikipedia.org/wiki/M-law_algorithm
-tper X = X seconds of sound per time point in 'tsfile'.
-TR X Allowed range for 'X' is 0.01 to 1.0 (inclusive).
-dt X [default time step is 0.2 s]
You can use '-tper', '-dt', or '-TR', as you like.
===== how the sound timeseries is produced from the data timeseries =====
-FM = Output sound is frequency modulated between 110 and 1760 Hz
from min to max in the input 1D file.
+ Usually 'sounds terrible'.
+ The only reason this is here is that it was the first method
I implemented, and I kept it for the sake of nostalgia.
-notes = Output sound is a sequence of notes, low to high pitch
based on min to max in the input 1D file.
+++ This is the default method of operation.
+ A pentatonic scale is used, which usually 'sounds nice':
https://en.wikipedia.org/wiki/Pentatonic_scale
-notewave W = Selects the shape of the notes used. 'W' is one of these:
-waveform W sine = pure sine wave (sounds simplistic)
sqsine = square root of sine wave (a little harsh and loud)
square = square wave (a lot harsh and loud)
triangle = triangle wave [the default waveform]
-despike = apply a simple despiking algorithm, to avoid the artifact
of one very large or small value making all the other notes
end up being the same.
===== Notes about notes =====
** At this time, the default production method is '-notes', **
** using the triangle waveform (I like this best). **
** With '-notes', up to 6 columns of the input file will be used **
** to produce a polyphonic sound (in a single channel). **
** (Any columns past the 6th in the input 'tsfile' are ignored.) **
===== hear the sound right away! =====
-play = Plays the sound file after it is written.
On this computer: uses program /usr/bin/mplayer
===>> Playing sound on a remote computer is
annoying, pointless, and likely to get you punched.
--------
EXAMPLES
--------
The first 2 examples are purely synthetic, using 'data' files created
on the command line. The third example uses a data file that was written
out of an AFNI graph viewer using the 'w' keystroke.
1dsound -prefix A1 '1D: 0 1 2 1 0 1 2 0 1 2'
1deval -num 100 -expr 'sin(x+0.01*x*x)' | 1dsound -tper 0.1 -prefix A2 1D:stdin
1dsound -tper 0.1 -prefix A3 028_044_003.1D
-----
NOTES
-----
* File can be played with the 'sox' audio package command
play A1.au gain -5
+ Here 'gain -5' turns the volume down :)
+ sox is not provided with AFNI :(
+ To see if sox is on your system, type the command 'which sox'
+ If you have sox, you can add 'reverb 99' at the end of the
'play' command line, and have some extra fun.
+ Many other effects are available with sox 'play',
and they can also be used to produce edited sound files:
http://sox.sourceforge.net/sox.html#EFFECTS
+ You can convert the .au file produced from here to other
formats using sox; for example:
sox -M Bob.au Cox.au BobCox.aiff
merges the 2 .au input files into a 2-channel (stereo)
Apple .aiff output file. See this for more information:
http://sox.sourceforge.net/soxformat.html
* Creation of the file does not depend on sox, so if you have
another way to play .au files, you can use that.
* Mac OS X: Quicktime (GUI) or afplay (command line) programs.
+ sox can be installed by first installing 'brew'
-- see https://brew.sh/ -- and then using command
'brew install sox'.
* Linux: Getting sox is probably the simplest thing to do.
+ Or install the mplayer package (which also does videos).
+ Another possibility is the aplay program.
* The audio output file is sampled at 16K samples per second.
For example, at 16 bits per sample, a 30 second file will
be 960K bytes in size.
* The auditory effect varies significantly with the '-tper'
parameter X; '-tper 0.02' is very different than '-tper 0.4'.
--- Quick hack for experimentation and fun - RWCox - Aug 2018 ---
AFNI program: 1dsum
Usage: 1dsum [options] a.1D b.1D ...
where each file a.1D, b.1D, etc. is an ASCII file of numbers arranged
in rows and columns. The sum of each column is written to stdout.
Options:
-ignore nn = skip the first nn rows of each file
-use mm = use only mm rows from each file
-mean = compute the average instead of the sum
-nocomment = do NOT reproduce in the output the # comments
from the header of the first input file;
by default, those comments are copied to
the output.
-OKempty = If you encounter an empty 1D file, print 0
and exit quietly instead of exiting with an
error message
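Example:
A minimal sketch ('vals.1D' being a hypothetical input file):
  1dsum -mean -ignore 2 vals.1D
which prints the average of each column of vals.1D to stdout,
skipping the first 2 rows.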
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dsvd
Usage: 1dsvd [options] 1Dfile 1Dfile ...
- Computes SVD of the matrix formed by the 1D file(s).
- Output appears on stdout; to save it, use '>' redirection.
OPTIONS:
-one = Make 1st vector be all 1's.
-vmean = Remove mean from each vector (can't be used with -one).
-vnorm = Make L2-norm of each vector = 1 before SVD.
* The above 2 options mirror those in 3dpc.
-cond = Only print condition number (ratio of extremes)
-sing = Only print singular values
* To compare the singular values from 1dsvd with those from
3dDeconvolve you must use the -vnorm option with 1dsvd.
For example, try
3dDeconvolve -nodata 200 1 -polort 5 -num_stimts 1 \
-stim_times 1 '1D: 30 130' 'BLOCK(50,1)' -singvals
1dsvd -sing -vnorm nodata.xmat.1D
-sort = Sort singular values (descending) [the default]
-nosort = Don't bother to sort the singular values
-asort = Sort singular values (ascending)
-1Dleft = Only output left eigenvectors, in a .1D format
This might be useful for reducing the number of
columns in a design matrix. The singular values
are printed at the top of each vector column,
as a '#...' comment line.
-nev n = If -1Dleft is used, '-nev' specifies to output only
the first 'n' eigenvectors, rather than all of them.
* If you are a tricky person, such as Souheil, you can
put a '%' after the value, and then you are saying
keep eigenvectors until at least n% of the sum of
singular values is accounted for. In this usage,
'n' must be a number less than 100; for example, to
reduce a matrix down to a smaller set of columns that
capture most of its column space, try something like
1dsvd -1Dleft -nev 99% Xorig.1D > X99.1D
EXAMPLE:
1dsvd -vmean -vnorm -1Dleft fred.1D'[1..6]' | 1dplot -stdin
NOTES:
* Call the input n X m matrix [A] (n rows, m columns). The SVD
is the factorization [A] = [U] [S] [V]' ('=transpose), where
- [U] is an n x m matrix (whose columns are the 'Left vectors')
- [S] is a diagonal m x m matrix (the 'singular values')
- [V] is an m x m matrix (whose columns are the 'Right vectors')
* The default output of the program is
- An echo of the input [A]
- The [U] matrix, each column headed by its singular value
- The [V] matrix, each column headed by its singular value
(please note that [V] is output, not [V]')
- The pseudo-inverse of [A]
* This program was written simply for some testing purposes,
but is distributed with AFNI because it might be useful-ish.
* Recall that you can transpose a .1D file on input by putting
an escaped ' character after the filename. For example,
1dsvd fred.1D\'
You can use this feature to get around the fact that there
is no '-1Dright' option. If you understand.
* For more information on the SVD, you can start at
http://en.wikipedia.org/wiki/Singular_value_decomposition
* Author: Zhark the Algebraical (Linear).
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1d_tool.py
=============================================================================
1d_tool.py - for manipulating and evaluating 1D files
---------------------------------------------------------------------------
purpose: ~1~
This program is meant to read/manipulate/write/diagnose 1D datasets.
Input can be specified using AFNI sub-brick[]/time{} selectors.
---------------------------------------------------------------------------
examples (very basic for now): ~1~
Example 1. Select by rows and columns, akin to 1dcat. ~2~
1d_tool.py -infile 'data/X.xmat.1D[0..3]{0..5}' -write t1.1D
Example 2. Compare with selection by separate options. ~2~
1d_tool.py -infile data/X.xmat.1D \
-select_cols '0..3' -select_rows '0..5' \
-write t2.1D
diff t1.1D t2.1D
Example 2b. Select or remove columns by label prefixes. ~2~
Keep only bandpass columns:
1d_tool.py -infile X.xmat.1D -write X.bandpass.1D \
-label_prefix_keep bandpass
Remove only bandpass columns (maybe for 3dRSFC):
1d_tool.py -infile X.xmat.1D -write X.no.bandpass.1D \
-label_prefix_drop bandpass
Keep polort columns (starting with 'Run'), motion shifts ('d'), and labels
starting with 'a' and 'b', but drop 'bandpass' columns:
1d_tool.py -infile X.xmat.1D -write X.weird.1D \
-label_prefix_keep Run d a b \
-label_prefix_drop bandpass
Example 2c. Select columns by group values, 3 examples. ~2~
First be sure of what the group labels represent.
1d_tool.py -infile X.xmat.1D -show_group_labels
i) Select polort (group -1) and other baseline (group 0) terms.
1d_tool.py -infile X.xmat.1D -select_groups -1 0 -write baseline.1D
ii) Select everything but baseline groups (anything positive).
1d_tool.py -infile X.xmat.1D -select_groups POS -write regs.of.int.1D
iii) Reorder to have regressors of interest, then motion, then polort.
1d_tool.py -infile X.xmat.1D -select_groups POS 0, -1 -write order.1D
iv) Create stim-only X-matrix file: select non-baseline columns of
X-matrix and write with header comment.
1d_tool.py -infile X.xmat.1D -select_groups POS \
-write_with_header yes -write X.stim.xmat.1D
Or, using a convenience option:
1d_tool.py -infile X.xmat.1D -write_xstim X.stim.xmat.1D
Example 2d. Select specific runs from the input. ~2~
Note that X.xmat.1D may have runs defined automatically, but for an
arbitrary input, they may need to be specified via -set_run_lengths.
i) .... apparently I forgot to do this...
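A possible sketch (assuming the '-select_runs' option is
available, with run lengths given via -set_run_lengths):
1d_tool.py -infile dfile_rall.1D -set_run_lengths 64 61 67 \
-select_runs 2 -write dfile_run2.1D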
Example 3. Transpose a dataset, akin to 1dtranspose. ~2~
1d_tool.py -infile t3.1D -transpose -write ttr.1D
Example 4a. Zero-pad a single-run 1D file across many runs. ~2~
Given a file of regressors (for example) across a single run (run 2),
create a new file that is padded with zeros, so that it now spans
many (7) runs. Runs are 1-based here.
1d_tool.py -infile ricor_r02.1D -pad_into_many_runs 2 7 \
-write ricor_r02_all.1D
Example 4b. Similar to 4a, but specify varying TRs per run. ~2~
The number of runs must match the number of run_lengths parameters.
1d_tool.py -infile ricor_r02.1D -pad_into_many_runs 2 7 \
-set_run_lengths 64 61 67 61 67 61 67 \
-write ricor_r02_all.1D
Example 5. Display small details about a 1D dataset: ~2~
a. Display number of rows and columns for a 1D dataset.
Note: to display them "quietly" (only the numbers), add -verb 0.
This is useful for setting a script variable (see the sketch below).
1d_tool.py -infile X.xmat.1D -show_rows_cols
1d_tool.py -infile X.xmat.1D -show_rows_cols -verb 0
b. Display indices of regressors of interest.
1d_tool.py -infile X.xmat.1D -show_indices_interest
c. Display labels by group.
1d_tool.py -infile X.xmat.1D -show_group_labels
d. Display "degree of freedom" information:
1d_tool.py -infile X.xmat.1D -show_df_info
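A sketch for item a, capturing the counts in a tcsh variable
(this assumes the quiet output is the two counts on one line):
set rc = ( `1d_tool.py -infile X.xmat.1D -show_rows_cols -verb 0` )
echo "rows = $rc[1], cols = $rc[2]"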
Example 6a. Show correlation matrix warnings for this matrix. ~2~
1d_tool.py -infile X.xmat.1D -show_cormat_warnings
Example 6b. Show entire correlation matrix. ~2~
1d_tool.py -infile X.xmat.1D -show_cormat
Example 7a. Output temporal derivative of motion regressors. ~2~
There are 9 runs in dfile_rall.1D, and derivatives are applied per run.
1d_tool.py -infile dfile_rall.1D -set_nruns 9 \
-derivative -write motion.deriv.1D
Example 7b. Similar to 7a, but let the run lengths vary. ~2~
The sum of run lengths should equal the number of time points.
1d_tool.py -infile dfile_rall.1D \
-set_run_lengths 64 64 64 64 64 64 64 64 64 \
-derivative -write motion.deriv.rlens.1D
Example 7c. Use forward differences. ~2~
instead of the default backward differences...
1d_tool.py -infile dfile_rall.1D \
-set_run_lengths 64 64 64 64 64 64 64 64 64 \
-forward_diff -write motion.deriv.rlens.1D
Example 8. Verify whether labels show slice-major ordering. ~2~
This is where all slice0 regressors come first, then all slice1
regressors, etc. Either show the labels and verify visually, or
print whether it is true.
1d_tool.py -infile scan_2.slibase.1D'[0..12]' -show_labels
1d_tool.py -infile scan_2.slibase.1D -show_labels
1d_tool.py -infile scan_2.slibase.1D -show_label_ordering
Example 9a. Given motion.1D, create an Enorm time series. ~2~
Take the derivative (ignoring run breaks) and the Euclidean Norm,
and write as e.norm.1D. This might be plotted to show sudden
motion as a single time series.
1d_tool.py -infile motion.1D -set_nruns 9 \
-derivative -collapse_cols euclidean_norm \
-write e.norm.1D
Example 9b. Like 9a, but supposing the run lengths vary (still 576 TRs). ~2~
1d_tool.py -infile motion.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-derivative -collapse_cols euclidean_norm \
-write e.norm.rlens.1D
Example 9c. Like 9b, but weight the rotations as 0.9 mm. ~2~
1d_tool.py -infile motion.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-derivative -collapse_cols weighted_enorm \
-weight_vec .9 .9 .9 1 1 1 \
-write e.norm.weighted.1D
Example 10. Given motion.1D, create censor files to use in 3dDeconvolve. ~2~
Here a TR is censored if the derivative values have a Euclidean Norm
above 1.2. It is common to also censor each previous TR, as motion may
span both (previous because "derivative" is actually a backward
difference).
The file created by -write_censor can be used with 3dD's -censor option.
The file created by -write_CENSORTR can be used with -CENSORTR. They
should have the same effect in 3dDeconvolve. The CENSORTR file is more
readable, but the censor file is better for plotting against the data.
a. general example ~3~
1d_tool.py -infile motion.1D -set_nruns 9 \
-derivative -censor_prev_TR \
-collapse_cols euclidean_norm \
-moderate_mask -1.2 1.2 \
-show_censor_count \
-write_censor subjA_censor.1D \
-write_CENSORTR subjA_CENSORTR.txt
b. using -censor_motion ~3~
The -censor_motion option is available, which implies '-derivative',
'-collapse_cols euclidean_norm', '-moderate_mask -LIMIT LIMIT', and the
prefix for '-write_censor' and '-write_CENSORTR' output files. This
option will also result in subjA_enorm.1D being written, which is the
euclidean norm of the derivative, before the extreme mask is applied.
1d_tool.py -infile motion.1D -set_nruns 9 \
-show_censor_count \
-censor_motion 1.2 subjA \
-censor_prev_TR
c. allow the run lengths to vary ~3~
1d_tool.py -infile motion.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-show_censor_count \
-censor_motion 1.2 subjA_rlens \
-censor_prev_TR
Consider also '-censor_prev_TR' and '-censor_first_trs'.
Example 11. Demean the data. Use motion parameters as an example. ~2~
The demean operation is done per run (the default is 1 run, when
1d_tool.py cannot determine the run structure).
a. across all runs (if runs are not known from input file)
1d_tool.py -infile dfile_rall.1D -demean -write motion.demean.a.1D
b. per run, over 9 runs of equal length
1d_tool.py -infile dfile_rall.1D -set_nruns 9 \
-demean -write motion.demean.b.1D
c. per run, over 9 runs of varying length
1d_tool.py -infile dfile_rall.1D \
-set_run_lengths 64 61 67 61 67 61 67 61 67 \
-demean -write motion.demean.c.1D
Example 12. "Uncensor" the data, zero-padding previously censored TRs. ~2~
Note that an X-matrix output by 3dDeconvolve contains censor
information in GoodList, which is the list of uncensored TRs.
a. if the input dataset has censor information
1d_tool.py -infile X.xmat.1D -censor_fill -write X.uncensored.1D
b. if censor information needs to come from a parent
1d_tool.py -infile sum.ideal.1D -censor_fill_parent X.xmat.1D \
-write sum.ideal.uncensored.1D
c. if censor information needs to come from a simple 1D time series
1d_tool.py -censor_fill_parent motion_FT_censor.1D \
-infile cdata.1D -write cdata.zeropad.1D
Example 13. Show whether the input file is valid as a numeric data file. ~2~
a. as any generic 1D file
1d_tool.py -infile data.txt -looks_like_1D
b. as a 1D stim_file, of 3 runs of 64 TRs (TR is irrelevant)
1d_tool.py -infile data.txt -looks_like_1D \
-set_run_lengths 64 64 64
c. as a stim_times file with local times
1d_tool.py -infile data.txt -looks_like_local_times \
-set_run_lengths 64 64 64 -set_tr 2
d. as a 1D or stim_times file with global times
1d_tool.py -infile data.txt -looks_like_global_times \
-set_run_lengths 64 64 64 -set_tr 2
e. report modulation type (amplitude and/or duration)
1d_tool.py -infile data.txt -looks_like_AM
f. perform all tests, reporting all errors
1d_tool.py -infile data.txt -looks_like_test_all \
-set_run_lengths 64 64 64 -set_tr 2
Example 14. Split motion parameters across runs. ~2~
Split, but keep them at the original length so they apply to the same
multi-run regression. Each file will be the same as the original for
the run it applies to, but zero across all other runs.
Note that -split_into_pad_runs takes the output prefix as a parameter.
1d_tool.py -infile motion.1D \
-set_run_lengths 64 64 64 \
-split_into_pad_runs mot.padded
The output files are:
mot.padded.r01.1D mot.padded.r02.1D mot.padded.r03.1D
If the run lengths are all the same, using -set_nruns is shorter...
1d_tool.py -infile motion.1D \
-set_nruns 3 \
-split_into_pad_runs mot.padded
Example 15a. Show the maximum pairwise displacement. ~2~
Show the max pairwise displacement in the motion parameter file.
So over all TR pairs, find the biggest displacement.
In one direction it is easy (say, AP). If the minimum AP shift is -0.8
and the maximum is 1.5, then the maximum displacement is 2.3 mm. It
is less clear in 6-D space, and instead of trying to find an enveloping
set of "coordinates", distances between all N choose 2 pairs are
evaluated (brute force).
1d_tool.py -infile dfile_rall.1D -show_max_displace
Example 15b. Like 15a, but do not include displacement from censored TRs. ~2~
1d_tool.py -infile dfile_rall.1D -show_max_displace \
-censor_infile motion_censor.1D
Example 16. Randomize a list of numbers, say, those from 1..40. ~2~
The numbers can come from 1deval, with the result piped to
'1d_tool.py -infile stdin -randomize_trs ...'.
1deval -num 40 -expr t+1 | \
1d_tool.py -infile stdin -randomize_trs -write stdout
See also -seed.
Example 17. Display min, mean, max, stdev of 1D file. ~2~
1d_tool.py -show_mmms -infile data.1D
To be more detailed, get stats for each of x, y, and z directional
blur estimates for all subjects. Cat(enate) all of the subject files
and pipe that to 1d_tool.py with infile - (meaning stdin).
cat subject_results/group.*/sub*/*.results/blur.errts.1D \
| 1d_tool.py -show_mmms -infile -
Example 18. Just output censor count for default method. ~2~
This will output nothing but the number of TRs that would be censored,
akin to using -censor_motion and -censor_prev_TR.
1d_tool.py -infile dfile_rall.1D -set_nruns 3 -quick_censor_count 0.3
1d_tool.py -infile dfile_rall.1D -set_run_lengths 100 80 120 \
-quick_censor_count 0.3
Example 19. Compute GCOR from some 1D file. ~2~
* Note, time should be in the vertical direction of the file
(else use -transpose).
1d_tool.py -infile data.1D -show_gcor
Or get some GCOR documentation and many values.
1d_tool.py -infile data.1D -show_gcor_doc
1d_tool.py -infile data.1D -show_gcor_all
Example 20. Display censored or uncensored TRs lists (for use in 3dTcat). ~2~
TRs which were censored:
1d_tool.py -infile X.xmat.1D -show_trs_censored encoded
TRs which were applied in analysis (those NOT censored):
1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded
Only those applied in run #2 (1-based).
1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
-show_trs_run 2
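For instance, the encoded list might be passed to 3dTcat via a
sub-brick selector (a sketch; errts.all+tlrc is a hypothetical
dataset name):
set ktrs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded`
3dTcat -prefix errts.keep errts.all+tlrc"[$ktrs]"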
Example 21. Convert to rank order. ~2~
a. show rank order of slice times from a 1D file
1d_tool.py -infile slice_times.1D -rank -write -
b. show rank order of slice times piped directly from 3dinfo
3dinfo -slice_timing epi+orig | 1d_tool.py -infile - -rank -write -
c. show rank order using 'competition' rank, instead of default 'dense'
3dinfo -slice_timing epi+orig \
| 1d_tool.py -infile - -rank_style competition -write -
Example 22. Guess volreg base index from motion parameters. ~2~
1d_tool.py -infile dfile_rall.1D -collapse_cols enorm -show_argmin
Example 23. Convert volreg parameters to those suitable for 3dAllineate. ~2~
1d_tool.py -infile dfile_rall.1D -volreg2allineate \
-write allin_rall_aff12.1D
Example 24. Show TR counts per run. ~2~
a. list the number of TRs in each run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts trs
b. list the number of TRs censored in each run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts trs_cen
c. list the number of TRs prior to censoring in each run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts trs_no_cen
d. list the fraction of TRs censored per run
1d_tool.py -infile X.xmat.1D -show_tr_run_counts frac_cen
e. list the fraction of TRs censored in run 3
1d_tool.py -infile X.xmat.1D -show_tr_run_counts frac_cen \
-show_trs_run 3
Example 25. Show number of runs. ~2~
1d_tool.py -infile X.xmat.1D -show_num_runs
Example 26. Convert global index to run and TR index. ~2~
Note that run indices are 1-based, while TR indices are 0-based,
as usual. Confusion is key.
a. explicitly, given run lengths
1d_tool.py -set_run_lengths 100 80 120 -index_to_run_tr 217
b. implicitly, given an X-matrix (** be careful about censoring **)
1d_tool.py -infile X.nocensor.xmat.1D -index_to_run_tr 217
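As a worked case for a.: with run lengths 100 80 120, runs 1 and 2
cover global indices 0..179, so index 217 falls in run 3 (1-based)
at TR index 217-180 = 37 (0-based).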
Example 27. Display length of response curve. ~2~
1d_tool.py -show_trs_to_zero -infile data.1D
Print out the length of the input (in TRs, say) until the data
values become a constant zero. Zeros that are followed by non-zero
values are irrelevant.
Example 28. Convert slice order to slice times. ~2~
A slice order might be the sequence in which slices were acquired.
For example, with 33 slices, perhaps the order is:
set slice_order = ( 0 6 12 18 24 30 1 7 13 19 25 31 2 8 14 20 \
26 32 3 9 15 21 27 4 10 16 22 28 5 11 17 23 29 )
Put this in a file:
echo $slice_order > slice_order.1D
1d_tool.py -set_tr 2 -slice_order_to_times \
-infile slice_order.1D -write slice_times.1D
Or as a filter:
echo $slice_order | 1d_tool.py -set_tr 2 -slice_order_to_times \
-infile - -write -
Example 29. Display minimum cluster size from 3dClustSim output. ~2~
Given a text file output by 3dClustSim, e.g. ClustSim.ACF.NN1_1sided.1D,
and given both an uncorrected (pthr) and a corrected (alpha) p-value,
look up the entry that specifies the minimum cluster size needed for
corrected p-value significance.
If requested in afni_proc.py, they are under files_ClustSim.
a. with modestly verbose output (default is -verb 1)
1d_tool.py -infile ClustSim.ACF.NN1_1sided.1D -csim_show_clustsize
b. quiet, to see just the output value
1d_tool.py -infile ClustSim.ACF.NN1_1sided.1D -csim_show_clustsize \
-verb 0
c. quiet, and capture the output value (tcsh syntax)
set clustsize = `1d_tool.py -infile ClustSim.ACF.NN1_1sided.1D \
-csim_show_clustsize -verb 0`
---------------------------------------------------------------------------
command-line options: ~1~
---------------------------------------------------------------------------
basic informational options: ~2~
-help : show this help
-hist : show the module history
-show_valid_opts : show all valid options
-ver : show the version number
----------------------------------------
required input: ~2~
-infile DATASET.1D : specify input 1D file
----------------------------------------
general options: ~2~
-add_cols NEW_DSET.1D : extend dset to include these columns
-backward_diff : take derivative as first backward difference
Take the backward differences at each time point. For each index > 0,
value[index] = value[index] - value[index-1], and value[0] = 0.
This option is identical to -derivative.
See also -forward_diff, -derivative, -set_nruns, -set_run_lengths.
-collapse_cols METHOD : collapse multiple columns into one, where
METHOD is one of: min, max, minabs, maxabs, euclidean_norm,
weighted_enorm.
Consideration of the euclidean_norm method:
For censoring, the euclidean_norm method is used (sqrt(sum squares)).
This combines rotations (in degrees) with shifts (in mm) as if they
had the same weight.
Note that assuming rotations are about the center of mass (which
should produce a minimum average distance), then the average arc
length (averaged over the brain mask) of a voxel rotated by 1 degree
(about the CM) is the following (for the given datasets):
TT_N27+tlrc: 0.967 mm (average radius = 55.43 mm)
MNIa_caez_N27+tlrc: 1.042 mm (average radius = 59.69 mm)
MNI_avg152T1+tlrc: 1.088 mm (average radius = 62.32 mm)
The point of these numbers is to suggest that equating degrees and
mm should be fine. The average distance caused by a 1 degree
rotation is very close to 1 mm (in an adult human).
* 'enorm' is short for 'euclidean_norm'.
* Use of weighted_enorm requires the -weight_vec option.
e.g. -collapse_cols weighted_enorm -weight_vec .9 .9 .9 1 1 1
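As a quick check of the arc length numbers above: a rotation of
1 degree sweeps an arc of radius*PI/180, e.g. for TT_N27,
55.43 * 3.14159265/180 = 0.967 mm. One can verify with 1deval:
1deval -num 1 -expr '55.43*3.14159265/180'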
-censor_motion LIMIT PREFIX : create censor files
This option implies '-derivative', '-collapse_cols euclidean_norm',
'-moderate_mask -LIMIT LIMIT' and applies PREFIX for '-write_censor'
and '-write_CENSORTR' output files. It also outputs the euclidean
norm of the derivative, before the limit is applied.
The temporal derivative is taken with run breaks applied (derivative
of the first run of a TR is 0), then the columns are collapsed into
one via each TR's vector length (Euclidean Norm: sqrt(sum of squares)).
After that, a mask time series is made from TRs with values outside
(-LIMIT,LIMIT), i.e. if >= LIMIT or <= -LIMIT, the result is 1.
This binary time series is then written out in -CENSORTR format, with
the moderate TRs written in -censor format (either can be applied in
3dDeconvolve). The output files will be named PREFIX_censor.1D,
PREFIX_CENSORTR.txt and PREFIX_enorm.1D (e.g. subj123_censor.1D,
subj123_CENSORTR.txt and subj123_enorm.1D).
Besides an input motion file (-infile), the number of runs is needed
(-set_nruns or -set_run_lengths).
Consider also '-censor_prev_TR' and '-censor_first_trs'.
See example 10.
-censor_fill : expand data, filling censored TRs with zeros
-censor_fill_parent PARENT : similar, but get censor info from a parent
The output of these operations is a longer dataset. Each TR that had
previously been censored is re-inserted as a zero.
The purpose of this is to make 1D time series data properly align
with the all_runs dataset, for example. Otherwise, the ideal 1D data
might have missing TRs, and so will align worse with responses over
the duration of all runs (it might start aligned, but drift earlier
and earlier as more TRs are censored).
See example 12.
-censor_infile CENSOR_FILE : apply censoring to -infile dataset
This removes TRs from the -infile dataset where the CENSOR_FILE is 0.
The censor file is akin to what is used with "3dDeconvolve -censor",
where TRs with 1 are kept and those with 0 are excluded from analysis.
See example 15b.
-censor_first_trs N : when censoring motion, also censor the first
N TRs of each run
-censor_next_TR : for each censored TR, also censor next one
(probably for use with -forward_diff)
-censor_prev_TR : for each censored TR, also censor previous
-cormat_cutoff CUTOFF : set cutoff for cormat warnings (in [0,1])
-csim_show_clustsize : for 3dClustSim input, show min clust size
Given a 3dClustSim table output (e.g. ClustSim.ACF.NN1_1sided.1D),
along with uncorrected (pthr) and corrected (alpha) p-values, show the
minimum cluster size to achieve significance.
The pthr and alpha values can be controlled via the options -csim_pthr
and -csim_alpha (with defaults of 0.001 and 0.05, respectively).
The -verb option controls how much detail is shown about the
clustering parameters (e.g. -verb 0 shows only the cluster size).
See Example 29, along with options -csim_pthr, -csim_alpha and -verb.
-csim_pthr THRESH : specify uncorrected threshold for csim output
e.g. -csim_pthr 0.0001
This option implies -csim_show_clustsize, and is used to specify the
uncorrected p-value of the 3dClustSim output.
See also -csim_show_clustsize.
-csim_alpha THRESH : specify corrected threshold for csim output
e.g. -csim_alpha 0.01
This option implies -csim_show_clustsize, and is used to specify the
corrected, cluster-wise p-value of the 3dClustSim output.
See also -csim_show_clustsize.
-demean : demean each run (new mean of each run = 0.0)
-derivative : take the temporal derivative of each vector
(done as first backward difference)
Take the backward differences at each time point. For each index > 0,
value[index] = value[index] - value[index-1], and value[0] = 0.
This option is identical to -backward_diff.
See also -backward_diff, -forward_diff, -set_nruns, -set_run_lengths.
-extreme_mask MIN MAX : make mask of extreme values
Convert to a 0/1 mask, where 1 means the given value is extreme
(outside the (MIN, MAX) range), and 0 means otherwise. This is the
opposite of -moderate_mask (not exactly, both are inclusive).
Note: values = MIN or MAX will be in both extreme and moderate masks.
Note: this was originally described incorrectly in the help.
-forward_diff : take first forward difference of each vector
Take the first forward differences at each time point. For index<last,
value[index] = value[index+1] - value[index], and value[last] = 0.
The difference between -forward_diff and -backward_diff is a time shift
by one index.
See also -backward_diff, -derivative, -set_nruns, -set_run_lengths.
-index_to_run_tr INDEX : convert global INDEX to run and TR indices
Given a list of run lengths, convert INDEX to a run and TR index pair.
This option requires -set_run_lengths, or an X-matrix input (from
which the run lengths can be determined).
See also -set_run_lengths and example 26.
-moderate_mask MIN MAX : make mask of moderate values
Convert to a 0/1 mask, where 1 means the given value is moderate
(within [MIN, MAX]), and 0 means otherwise. This is useful for
censoring motion (in the -censor case, not -CENSORTR), where the
-censor file should be a time series of TRs to apply.
See also -extreme_mask.
-label_prefix_drop prefix1 prefix2 ... : remove labels matching prefix list
e.g. to remove motion shift (starting with 'd') and bandpass labels:
-label_prefix_drop d bandpass
This is a type of column selection.
Use this option to remove columns from a matrix that have labels
starting with any from the given prefix list.
This option can be applied along with -label_prefix_keep.
See also -label_prefix_keep and example 2b.
-label_prefix_keep prefix1 prefix2 ... : keep labels matching prefix list
e.g. to keep only motion shift (starting with 'd') and bandpass labels:
-label_prefix_keep d bandpass
This is a type of column selection.
Use this option to keep columns from a matrix that have labels starting
with any from the given prefix list.
This option can be applied along with -label_prefix_drop.
See also -label_prefix_drop and example 2b.
"Looks like" options:
These are terminal options that check whether the input file seems to
be of type 1D, local stim_times or global stim_times formats. The only
associated options are currently -infile, -set_run_lengths, -set_tr and
-verb.
They are terminal in that no other 1D-style actions are performed.
See 'timing_tool.py -help' for details on stim_times operations.
-looks_like_1D : is the file in 1D format
Does the input data file seem to be in 1D format?
- must be rectangular (same number of columns per row)
- duration must match number of rows (if run lengths are given)
-looks_like_AM : does the file have modulators?
Does the file seem to be in local or global times format, and
do the times have modulators?
- amplitude modulators should use '*' format (e.g. 127.3*5.1)
- duration modulators should use trailing ':' format (12*5.1:3.4)
- number of amplitude modulators should be constant
-looks_like_local_times : is the file in local stim_times format
Does the input data file seem to be in the -stim_times format used by
3dDeconvolve (and timing_tool.py)? More specifically, is it the local
format, with one scanning run per row?
- number of rows must match number of runs
- times cannot be negative
- times must be unique per run (per row)
- times cannot exceed the current run time
-looks_like_global_times : is the file in global stim_times format
Does the input data file seem to be in the -stim_times format used by
3dDeconvolve (and timing_tool.py)? More specifically, is it the global
format, either as one long row or one long column?
- must be one dimensional (either a single row or column)
- times cannot be negative
- times must be unique
- times cannot exceed total duration of all runs
-looks_like_test_all : run all -looks_like tests
Applies all "looks like" test options: -looks_like_1D, -looks_like_AM,
-looks_like_local_times and -looks_like_global_times.
-overwrite : allow overwriting of any output dataset
-pad_into_many_runs RUN NRUNS : pad as current run of num_runs
e.g. -pad_into_many_runs 2 7
This option is used to create a longer time series dataset where the
input is considered one particular run out of many. The output is
padded with zeros for all run TRs before and after this run.
Given the example, there would be 1 run of zeros, then the input would
be treated as run 2, and there would be 5 more runs of zeros.
-quick_censor_count LIMIT : output # TRs that would be censored
e.g. -quick_censor_count 0.3
This is akin to -censor_motion, but it only outputs the number of TRs
that would be censored. It does not actually create a censor file.
This option essentially replaces these:
-derivative -demean -collapse_cols euclidean_norm
-censor_prev_TR -verb 0 -show_censor_count
-moderate_mask 0 LIMIT
-rank : convert data to rank order
0-based index order of small to large values
The default rank STYLE is 'dense'.
See also -rank_style.
-rank_style STYLE : convert to rank using the given style
The STYLE refers to what to do in the case of repeated values.
Assuming inputs 4 5 5 9...
dense - repeats get same rank, no gaps in rank
- same as "3dmerge -1rank"
- result: 0 1 1 2
competition - repeats get same rank, leading to gaps in rank
- standard 'competition' (sports-style) ranking
- result: 0 1 1 3
(rank 2 is counted, though no value receives it)
Option '-rank' uses style 'dense'.
See also -rank.
-reverse_rank : convert data to reverse rank order
(large values come first)
-reverse : reverse data over time
-randomize_trs : randomize the data over time
-seed SEED : set random number seed (integer)
-select_groups g0 g1 ... : select columns by group numbers
e.g. -select_groups 0
e.g. -select_groups POS 0
An X-matrix dataset (e.g. X.xmat.1D) often has columns partitioned by
groups, such as:
-1 : polort regressors
0 : motion regressors and other (non-polort) baseline terms
N>0: regressors of interest
This option can be used to select columns by integer groups, with
special cases of POS (regs of interest), NEG (probably polort).
Note that NONNEG is unneeded as it is the pair POS 0.
See also -show_group_labels.
-select_cols SELECTOR : apply AFNI column selectors, [] is optional
e.g. '[5,0,7..21(2)]'
-select_rows SELECTOR : apply AFNI row selectors, {} is optional
e.g. '{5,0,7..21(2)}'
-select_runs r1 r2 ... : extract the given runs from the dataset
(these are 1-based run indices)
e.g. 2
e.g. 2 3 1 1 1 1 1 4
-set_nruns NRUNS : treat the input data as if it has nruns
(e.g. applies to -derivative and -demean)
See examples 7a, 10a and b, and 14.
-set_run_lengths N1 N2 ... : treat as if data has run lengths N1, N2, etc.
(applies to -derivative, for example)
Notes: o option -set_nruns is not allowed with -set_run_lengths
o the sum of run lengths must equal NT
See examples 7b, 10c and 14.
-set_tr TR : set the TR (in seconds) for the data
-show_argmin : display the index of min arg (of first column)
-show_censor_count : display the total number of censored TRs
Note : if input is a valid xmat.1D dataset, then the
count will come from the header. Otherwise
the input is assumed to be a binary censor
file, and zeros are simply counted.
-show_cormat : display correlation matrix
-show_cormat_warnings : display correlation matrix warnings
-show_df_info : display info about degrees of freedom in xmat.1D file
-show_df_protect yes/no : protection flag (def=yes)
-show_gcor : display GCOR: the average correlation
-show_gcor_all : display many ways of computing (a) GCOR
-show_gcor_doc : display descriptions of those ways
-show_group_labels : display group and label, per column
-show_indices_baseline : display column indices for baseline
-show_indices_motion : display column indices for motion regressors
-show_indices_interest : display column indices for regs of interest
-show_label_ordering : display the labels
-show_labels : display the labels
-show_max_displace : display max displacement (from motion params)
- the maximum pairwise distance (enorm)
-show_mmms : display min, mean, max, stdev of columns
-show_num_runs : display number of runs found
-show_rows_cols : display the number of rows and columns
-show_tr_run_counts STYLE : display TR counts per run, according to STYLE
STYLE can be one of:
trs : TR counts
trs_cen : censored TR counts
trs_no_cen : TR counts, as if no censoring
frac_cen : fractions of TRs censored
See example 24.
-show_trs_censored STYLE : display a list of TRs which were censored
-show_trs_uncensored STYLE : display a list of TRs which were not censored
STYLE can be one of:
comma : comma delimited
space : space delimited
encoded : succinct selector list
verbose : chatty
See example 20.
-show_trs_run RUN : restrict -show_trs_[un]censored to the given
1-based run
-show_trs_to_zero : display number of TRs before final zero value
(e.g. length of response curve)
-slice_order_to_times : convert a list of slice indices to times
Programs like to3d, 3drefit, 3dTcat and 3dTshift expect slice timing
to be a list of slice times over the sequential slices. But in some
cases, people have an ordered list of slices. So the sorting needs
to change.
If TR=2 and the slice order is: 0 2 4 6 8 1 3 5 7 9
Then the slices/times ordered by time (as input) are:
slices: 0 2 4 6 8 1 3 5 7 9
times: 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8
And the slices/times ordered instead by slice index are:
slices: 0 1 2 3 4 5 6 7 8 9
times: 0.0 1.0 0.2 1.2 0.4 1.4 0.6 1.6 0.8 1.8
It is this final list of times that is output.
See example 28.
-sort : sort data over time (smallest to largest)
- sorts EVERY vector
- consider the -reverse option
-split_into_pad_runs PREFIX : split input into one padded file per run
e.g. -split_into_pad_runs motion.pad
This option is used for breaking a set of regressors up by run. The
output would be one file per run, where each file is the same as the
input for the run it corresponds to, and is padded with 0 across all
other runs.
Assuming the 300 row input dataset spans 3 100-TR runs, there
would be 3 output datasets created, each still 300 rows long:
motion.pad.r01.1D : 100 rows as input, 200 rows of 0
motion.pad.r02.1D : 100 rows of 0, 100 rows as input, 100 of 0
motion.pad.r03.1D : 200 rows of 0, 100 rows as input
This option requires either -set_nruns or -set_run_lengths.
See example 14.
-transpose : transpose the input matrix (rows for columns)
-transpose_write : transpose the output matrix before writing
-volreg2allineate : convert 3dvolreg parameters to 3dAllineate
This option should be used when the -infile file is a 6 column file
of motion parameters (roll, pitch, yaw, dS, dL, dP). The output would
be converted to a 12 parameter file, suitable for input to 3dAllineate
via the -1Dparam_apply option.
volreg: roll, pitch, yaw, dS, dL, dP
3dAllineate: -dL, -dP, -dS, roll, pitch, yaw, 0,0,0, 0,0,0
These parameters are meant to correct the motion, akin to what
3dvolreg did (i.e. they are the negative estimates of how the
subject moved).
See example 23.
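A small sketch of the full round trip (the dataset names here
are hypothetical):
1d_tool.py -infile dfile_rall.1D -volreg2allineate \
-write allin_rall_aff12.1D
3dAllineate -input epi_rall+orig -master epi_rall+orig \
-1Dparam_apply allin_rall_aff12.1D \
-prefix epi_rall_reg -final wsinc5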
-write FILE : write the current 1D data to FILE
-weight_vec v1 v2 ... : supply weighting vector
e.g. -weight_vec 0.9 0.9 0.9 1 1 1
This vector currently works only with the weighted_enorm method for
the -collapse_cols option. If supplied (as with the example), it will
weight the angles at 0.9 times the weights of the shifts in the motion
parameters output by 3dvolreg.
See also -collapse_cols.
-write_censor FILE : write as boolean censor.1D
e.g. -write_censor subjA_censor.1D
This file can be given to 3dDeconvolve to censor TRs with excessive
motion, applied with the -censor option.
e.g. 3dDeconvolve -censor subjA_censor.1D
This file works well for plotting against the data, where the 0 entries
are removed from the regression of 3dDeconvolve. Alternatively, the
file created with -write_CENSORTR is probably more human readable.
-write_CENSORTR FILE : write censor times as CENSORTR string
e.g. -write_CENSORTR subjA_CENSORTR.txt
This file can be given to 3dDeconvolve to censor TRs with excessive
motion, applied with the -CENSORTR option.
e.g. 3dDeconvolve -CENSORTR `cat subjA_CENSORTR.txt`
Which might expand to:
3dDeconvolve -CENSORTR '1:16..19,44 3:28 4:19,37..39'
Note that the -CENSORTR option requires the text on the command line.
This file is in the easily readable format applied with -CENSORTR.
It has the same effect on 3dDeconvolve as the sister file from
-write_censor, above.
-verb LEVEL : set the verbosity level
-----------------------------------------------------------------------------
R Reynolds March 2009
=============================================================================
AFNI program: 1dtranspose
Usage: 1dtranspose infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, but transposed.
You can use a column subvector selector list on infile, as in
1dtranspose 'fred.1D[0,3,7]' ethel.1D
* This program may produce files with lines longer than a
text editor can handle.
* If 'outfile' is '-' (or missing entirely), output goes to stdout.
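* For example, to plot a single-row file as a time series
(a minimal sketch):
1dtranspose fred.1D | 1dplot -stdin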
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dTsort
Usage: 1dTsort [options] file.1D
Sorts each column of the input 1D file and writes result to stdout.
Options
-------
-inc = sort into increasing order [default]
-dec = sort into decreasing order
-flip = transpose the file before OUTPUT
* the INPUT can be transposed using file.1D\'
* thus, to sort each ROW, do something like
1dTsort -flip file.1D\' > sfile.1D
-col j = sort only on column #j (counting starts at 0),
and carry the rest of the columns with it.
-imode = typecast all values to integers, return the mode of
the input, then exit. No sorting results are returned.
N.B.: Data will be read from standard input if the filename IS stdin,
and will also be row/column transposed if the filename is stdin\'
For example:
1deval -num 100 -expr 'uran(1)' | 1dTsort stdin | 1dplot stdin
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 1dUpsample
Program 1dUpsample:
Upsamples a 1D time series (along the column direction)
to a finer time grid.
Usage: 1dUpsample [options] n fred.1D > ethel.1D
Where 'n' is the upsample factor (integer from 2..32)
NOTES:
------
* Interpolation is done with 7th order polynomials.
(Why 7? It's a nice number, and the code already existed.)
* The only option is '-1' or '-one', to use 1st order
polynomials instead (i.e., linear interpolation).
* Output is written to stdout.
* If you want to interpolate along the row direction,
transpose before input, then transpose the output.
* Example:
1dUpsample 5 '1D: 4 5 4 3 4' | 1dplot -stdin -dx 0.2
* If the input has M time points, the output will
have n*M time points. The last n-1 of them
will be past the end of the original time series.
* This program is a quick hack for Gang Chen.
Where are my Twizzlers?
AFNI program: 24swap
Usage: 24swap [options] file ...
Swaps bytes pairs and/or quadruples on the files listed.
Options:
-q Operate quietly
-pattern pat 'pat' determines the pattern of 2 and 4
byte swaps. Each element is of the form
2xN or 4xN, where N is the number of
bytes to swap as pairs (for 2x) or
as quadruples (for 4x). For 2x, N must
be divisible by 2; for 4x, N must be
divisible by 4. The whole pattern is
made up of elements separated by colons,
as in '-pattern 4x39984:2x0'. If bytes
are left over after the pattern is used
up, the pattern starts over. However,
if a byte count N is zero, as in the
example below, then it means to continue
until the end of file.
N.B.: You can also use 1xN as a pattern, indicating to
skip N bytes without any swapping.
N.B.: A default pattern can be stored in the Unix
environment variable AFNI_24SWAP_PATTERN.
If no -pattern option is given, the default
will be used. If there is no default, then
nothing will be done.
N.B.: If there are bytes 'left over' at the end of the file,
they are written out unswapped. This will happen
if the file is an odd number of bytes long.
N.B.: If you just want to swap pairs, see program 2swap.
For quadruples only, see program 4swap.
N.B.: This program will overwrite the input file!
You might want to test it first.
Example: 24swap -pat 4x8:2x0 fred
If fred contains 'abcdabcdabcdabcdabcd' on input,
then fred has 'dcbadcbabadcbadcbadc' on output.
AFNI program: 2dImReg
++ 2dImReg: AFNI version=AFNI_19.3.16 (Dec 12 2019) [64-bit]
This program performs 2d image registration. Image alignment is
performed on a slice-by-slice basis for the input 3d+time dataset,
relative to a user specified base image.
** Note that the script @2dwarper.Allin can do similar things, **
** with nonlinear (polynomial) warping on a slice-wise basis. **
Usage:
2dImReg
-input fname Filename of input 3d+time dataset to process
-basefile fname Filename of 3d+time dataset for base image
(default = current input dataset)
-base num Time index for base image (0 <= num)
(default: num = 3)
-nofine Deactivate fine fit phase of image registration
(default: fine fit is active)
-fine blur dxy dphi Set fine fit parameters
where:
blur = FWHM of blurring prior to registration (in pixels)
(default: blur = 1.0)
dxy = Convergence tolerance for translations (in pixels)
(default: dxy = 0.07)
dphi = Convergence tolerance for rotations (in degrees)
(default: dphi = 0.21)
-prefix pname Prefix name for output 3d+time dataset
-dprefix dname Write files 'dname'.dx, 'dname'.dy, 'dname'.psi
containing the registration parameters for each
slice in chronological order.
File formats:
'dname'.dx: time(sec) dx(pixels)
'dname'.dy: time(sec) dy(pixels)
'dname'.psi: time(sec) psi(degrees)
-dmm Change dx and dy output format from pixels to mm
-rprefix rname Write files 'rname'.oldrms and 'rname'.newrms
containing the volume RMS error for the original
and the registered datasets, respectively.
File formats:
'rname'.oldrms: volume(number) rms_error
'rname'.newrms: volume(number) rms_error
-debug Lots of additional output to screen
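Example (a minimal sketch; the dataset name is hypothetical):
2dImReg -input epi_run1+orig -base 3 \
-prefix epi_run1_reg -dprefix epi_run1_reg
This registers each slice to time index 3 and writes the
per-slice dx, dy, and psi parameter files.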
AFNI program: @2dwarper.Allin
script to do 2D registration on each slice of a 3D+time
dataset, and glue the results back together at the end
This script is structured to operate only on an AFNI
+orig.HEAD dataset. The one input on the command line
is the prefix for the dataset.
Modified 07 Dec 2010 by RWC to use 3dAllineate instead
of 3dWarpDrive, with nonlinear slice-wise warping.
Set prefix of input 3D+time dataset here.
In this example with 'wilma' as the command line
argument, the output dataset will be 'wilma_reg+orig'.
The output registration parameters files will
be 'wilma_param_ssss.1D', where 'ssss' is the slice number.
usage: @2dwarper.Allin [options] INPUT_PREFIX
example: @2dwarper.Allin epi_run1
example: @2dwarper.Allin -mask my_mask epi_run1
options:
-mask MSET : provide the prefix of an existing mask dataset
-prefix PREFIX : provide the prefix for output datasets
AFNI program: 2perm
Usage: 2perm [-prefix PPP] [-comma] bot top [n1 n2]
This program creates 2 random non-overlapping subsets of the set of
integers from 'bot' to 'top' (inclusive). The first subset is of
length 'n1' and the second of length 'n2'. If those values are not
given, then equal size subsets of length (top-bot+1)/2 are used.
This program is intended for use in various simulation and/or
randomization scripts, or for amusement/hilarity.
OPTIONS:
========
-prefix PPP == Two output files are created, with names PPP_A and PPP_B,
where 'PPP' is the given prefix. If no '-prefix' option
is given, then the string 'AFNIroolz' will be used.
++ Each file is a single column of numbers.
++ Note that the filenames do NOT end in '.1D'.
-comma == Write each file as a single row of comma-separated numbers.
EXAMPLE:
========
This illustration shows the purpose of 2perm -- for use in permutation
and/or randomization tests of statistical significance and power.
Given a dataset with 100 sub-bricks (indexed 0..99), split it into two
random halves and do a 2-sample t-test between them.
2perm -prefix Q50 0 99
3dttest++ -setA dataset+orig"[1dcat Q50_A]" \
-setB dataset+orig"[1dcat Q50_B]" \
-no1sam -prefix Q50
\rm -f Q50_?
Alternatively:
2perm -prefix Q50 -comma 0 99
3dttest++ -setA dataset+orig"[`cat Q50_A`]" \
-setB dataset+orig"[`cat Q50_B`]" \
-no1sam -prefix Q50
\rm -f Q50_?
Note the combined use of the double quote " and backward quote `
shell operators in this second approach.
AUTHOR: (no one wants to admit they wrote this trivial code).
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 2swap
Usage: 2swap [-q] file ...
-- Swaps byte pairs on the files listed.
The -q option means to work quietly.
AFNI program: 3dABoverlap
Usage: 3dABoverlap [options] A B
Output (to screen) is a count of various things about how
the automasks of datasets A and B overlap or don't overlap.
* Dataset B will be resampled to match dataset A, if necessary,
which will be slow if A is high resolution. In such a case,
you should only use one sub-brick from dataset B.
++ The resampling of B is done before the automask is generated.
* The values output are labeled thusly:
#A = number of voxels in the A mask
#B = number of voxels in the B mask
#(A uni B) = number of voxels in either or both masks (set union)
#(A int B) = number of voxels present in BOTH masks (set intersection)
#(A \ B) = number of voxels in A mask that aren't in B mask
#(B \ A) = number of voxels in B mask that aren't in A mask
%(A \ B) = percentage of voxels from A mask that aren't in B mask
%(B \ A) = percentage of voxels from B mask that aren't in A mask
Rx(B/A) = radius of gyration of B mask / A mask, in x direction
Ry(B/A) = radius of gyration of B mask / A mask, in y direction
Rz(B/A) = radius of gyration of B mask / A mask, in z direction
* If B is an EPI dataset sub-brick, and A is a skull stripped anatomical
dataset, then %(B \ A) might be useful for assessing if the EPI
brick B is grossly misaligned with respect to the anatomical brick A.
* The radius of gyration ratios might be useful for determining if one
dataset is grossly larger or smaller than the other.
OPTIONS
-------
-no_automask = consider input datasets as masks
(automask does not work on mask datasets)
-quiet = be as quiet as possible (without being entirely mute)
-verb = print out some progress reports (to stderr)
NOTES
-----
* If an input dataset is comprised of bytes and contains only one
sub-brick, then this program assumes it is already an automask-
generated dataset and the automask operation will be skipped.
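EXAMPLE (a sketch; the dataset names are hypothetical)
-------
3dABoverlap anat_ss+orig 'epi_r1+orig[0]'
Here A is a skull stripped anatomical volume and B is a single
EPI sub-brick, as suggested in the notes above.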
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAFNIto3D
*+ WARNING: This program (3dAFNIto3D) is old, not maintained, and probably useless!
Usage: 3dAFNIto3D [options] dataset
Reads in an AFNI dataset, and writes it out as a 3D file.
OPTIONS:
-prefix ppp = Write result into file ppp.3D;
default prefix is same as AFNI dataset's.
-bin = Write data in binary format, not text.
-txt = Write data in text format, not binary.
NOTES:
* At present, all bricks are written out in float format.
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAFNItoANALYZE
*+ WARNING: This program (3dAFNItoANALYZE) is old, not maintained, and probably useless!
Usage: 3dAFNItoANALYZE [-4D] [-orient code] aname dset
Writes AFNI dataset 'dset' to 1 or more ANALYZE 7.5 format
.hdr/.img file pairs (one pair for each sub-brick in the
AFNI dataset). The ANALYZE files will be named
aname_0000.hdr aname_0000.img for sub-brick #0
aname_0001.hdr aname_0001.img for sub-brick #1
and so forth. Each file pair will contain a single 3D array.
* If the AFNI dataset does not include sub-brick scale
factors, then the ANALYZE files will be written in the
datum type of the AFNI dataset.
* If the AFNI dataset does have sub-brick scale factors,
then each sub-brick will be scaled to floating format
and the ANALYZE files will be written as floats.
* The .hdr and .img files are written in the native byte
order of the computer on which this program is executed.
Options
-------
-4D [30 Sep 2002]:
If you use this option, then all the data will be written to
one big ANALYZE file pair named aname.hdr/aname.img, rather
than a series of 3D files. Even if you only have 1 sub-brick,
you may prefer this option, since the filenames won't have
the '_0000' appended to 'aname'.
-orient code [19 Mar 2003]:
This option lets you flip the dataset to a different orientation
when it is written to the ANALYZE files. The orientation code is
formed as follows:
The code must be 3 letters, one each from the
pairs {R,L} {A,P} {I,S}. The first letter gives
the orientation of the x-axis, the second the
orientation of the y-axis, the third the z-axis:
R = Right-to-Left L = Left-to-Right
A = Anterior-to-Posterior P = Posterior-to-Anterior
I = Inferior-to-Superior S = Superior-to-Inferior
For example, 'LPI' means
-x = Left +x = Right
-y = Posterior +y = Anterior
-z = Inferior +z = Superior
* For display in SPM, 'LPI' or 'RPI' seem to work OK.
Be careful with this: you don't want to confuse L and R
in the SPM display!
* If you DON'T use this option, the dataset will be written
out in the orientation in which it is stored in AFNI
(e.g., the output of '3dinfo dset' will tell you this.)
* The dataset orientation is NOT stored in the .hdr file.
* AFNI and ANALYZE data are stored in files with the x-axis
varying most rapidly and the z-axis most slowly.
* Note that if you read an ANALYZE dataset into AFNI for
display, AFNI assumes the LPI orientation, unless you
set environment variable AFNI_ANALYZE_ORIENT.
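* Example (a sketch; the dataset name is hypothetical):
3dAFNItoANALYZE -4D -orient LPI fred anat+orig
writes the whole dataset to fred.hdr/fred.img in LPI order.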
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAFNItoMINC
*+ WARNING: This program (3dAFNItoMINC) is old, not maintained, and probably useless!
Usage: 3dAFNItoMINC [options] dataset
Reads in an AFNI dataset, and writes it out as a MINC file.
OPTIONS:
-prefix ppp = Write result into file ppp.mnc;
default prefix is same as AFNI dataset's.
-floatize = Write MINC file in float format.
-swap = Swap bytes when passing data to rawtominc
NOTES:
* Multi-brick datasets are written as 4D (x,y,z,t) MINC
files.
* If the dataset has complex-valued sub-bricks, then this
program won't write the MINC file.
* If any of the sub-bricks have floating point scale
factors attached, then the output will be in float
format (regardless of the presence of -floatize).
* This program uses the MNI program 'rawtominc' to create
the MINC file; rawtominc must be in your path. If you
don't have rawtominc, you must install the MINC tools
software package from MNI. (But if you don't have the
MINC tools already, why do you want to convert to MINC
format anyway?)
* At this time, you can find the MINC tools at
ftp://ftp.bic.mni.mcgill.ca/pub/minc/
You need the latest version of minc-*.tar.gz and also
of netcdf-*.tar.gz.
-- RWCox - April 2002
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAFNItoNIFTI
++ 3dAFNItoNIFTI: AFNI version=AFNI_19.3.16 (Dec 12 2019) [64-bit]
Usage: 3dAFNItoNIFTI [options] dataset
Reads an AFNI dataset, writes it out as a NIfTI-1.1 file.
NOTES:
* The nifti_tool program can be used to manipulate
the contents of a NIfTI-1.1 file.
* The input dataset can actually be in any input format
that AFNI can read directly (e.g., MINC-1).
* There is no 3dNIFTItoAFNI program, since AFNI programs
can directly read .nii files. If you wish to make such
a conversion anyway, one way to do so is like so:
3dcalc -a ppp.nii -prefix ppp -expr 'a'
OPTIONS:
-prefix ppp = Write the NIfTI-1.1 file as 'ppp.nii'.
Default: the dataset's prefix is used.
* You can use 'ppp.hdr' to output a 2-file
NIfTI-1.1 file pair 'ppp.hdr' & 'ppp.img'.
* If you want a compressed file, try
using a prefix like 'ppp.nii.gz'.
* Setting the Unix environment variable
AFNI_AUTOGZIP to YES will result in
all output .nii files being gzip-ed.
-verb = Be verbose = print progress messages.
Repeating this increases the verbosity
(maximum setting is 3 '-verb' options).
-float = Force the output dataset to be 32-bit
floats. This option should be used when
the input AFNI dataset has different
float scale factors for different sub-bricks,
a feature that NIfTI-1.1 does not support.
The following options affect the contents of the AFNI extension
field that is written by default into the NIfTI-1.1 header:
-pure = Do NOT write an AFNI extension field into
the output file. Only use this option if
needed. You can also use the 'nifti_tool'
program to strip extensions from a file.
-denote = When writing the AFNI extension field, remove
text notes that might contain subject
identifying information.
-oldid = Give the new dataset the input dataset's
AFNI ID code.
-newid = Give the new dataset a new AFNI ID code, to
distinguish it from the input dataset.
**** N.B.: -newid is now the default action.
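EXAMPLE (a sketch; the dataset name is hypothetical):
3dAFNItoNIFTI -prefix anat.nii.gz anat+tlrc
This writes the gzip-compressed NIfTI-1.1 file anat.nii.gz.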
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAFNItoNIML
Usage: 3dAFNItoNIML [options] dset
Dumps AFNI dataset header information to stdout in NIML format.
Mostly for debugging and testing purposes!
OPTIONS:
-data == Also put the data into the output (will be huge).
-ascii == Format in ASCII, not binary (even huger).
-tcp:host:port == Instead of stdout, send the dataset to a socket.
(implies '-data' as well)
-- RWCox - Mar 2005
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAFNItoRaw
*+ WARNING: This program (3dAFNItoRaw) is old, not maintained, and probably useless!
Usage: 3dAFNItoRaw [options] dataset
Convert an AFNI BRIK file with multiple sub-bricks to a raw file with
each sub-brick voxel concatenated voxel-wise.
For example, a dataset with 3 sub-bricks X,Y,Z with elements x1,x2,x3,...,xn,
y1,y2,y3,...,yn and z1,z2,z3,...,zn will be converted to a raw dataset with
elements x1,y1,z1, x2,y2,z2, x3,y3,z3, ..., xn,yn,zn
The dataset is kept in the original data format (float/short/int)
Options:
-output / -prefix = name of the output file (not an AFNI dataset prefix)
the default output name will be rawxyz.dat
-datum float = force floating point output. Floating point is forced if any
sub-brick scale factors are not equal to 1.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAllineate
Usage: 3dAllineate [options] sourcedataset
Program to align one dataset (the 'source') to a base dataset,
using an affine (matrix) transformation of space.
* Options are available to control:
++ How the matching between the source and the base is computed
(i.e., the 'cost functional' measuring image mismatch).
++ How the resliced source is interpolated to the base space.
++ The complexity of the spatial transformation ('warp') used.
++ And many many technical options to control the process in detail,
if you know what you are doing (or just like to fool around).
* This program is a generalization of and improvement on the older
software 3dWarpDrive.
* For nonlinear transformations, see program 3dQwarp.
* 3dAllineate can also be used to apply a pre-computed matrix to a dataset
to produce the transformed output. In this mode of operation, it just
skips the alignment process, whose function is to compute the matrix,
and instead it reads the matrix in, computes the output dataset,
writes it out, and stops.
=====----------------------------------------------------------------------
NOTES: For most 3D image registration purposes, we now recommend that you
===== use Daniel Glen's script align_epi_anat.py (which, despite its name,
can do many more registration problems than EPI-to-T1-weighted).
-->> In particular, using 3dAllineate with the 'lpc' cost functional
(to align EPI and T1-weighted volumes) requires using a '-weight'
volume to get good results, and the align_epi_anat.py script will
automagically generate such a weight dataset that works well for
EPI-to-structural alignment.
-->> This script can also be used for other alignment purposes, such
as T1-weighted alignment between field strengths using the
'-lpa' cost functional. Investigate align_epi_anat.py to
see if it will do what you need -- you might make your life
a little easier and nicer and happier and more tranquil.
-->> Also, if/when you ask for registration help on the AFNI
message board, we'll probably start by recommending that you
try align_epi_anat.py if you haven't already done so.
-->> For aligning EPI and T1-weighted volumes, we have found that
using a flip angle of 50-60 degrees for the EPI works better than
a flip angle of 90 degrees. The reason is that there is more
internal contrast in the EPI data when the flip angle is smaller,
so the registration has some image structure to work with. With
the 90 degree flip angle, there is so little internal contrast in
the EPI dataset that the alignment process ends up being just
trying to match brain outlines -- which doesn't always give accurate
results: see http://dx.doi.org/10.1016/j.neuroimage.2008.09.037
-->> Although the total MRI signal is reduced at a smaller flip angle,
there is little or no loss in FMRI/BOLD information, since the bulk
of the time series 'noise' is from physiological fluctuation signals,
which are also reduced by the lower flip angle -- for more details,
see http://dx.doi.org/10.1016/j.neuroimage.2010.11.020
---------------------------------------------------------------------------
**** New (Summer 2013) program 3dQwarp is available to do nonlinear ****
*** alignment between a base and source dataset, including the use ***
** of 3dAllineate for the preliminary affine alignment. If you are **
* interested, see the output of '3dQwarp -help' for the details. *
---------------------------------------------------------------------------
COMMAND LINE OPTIONS:
====================
-base bbb = Set the base dataset to be the #0 sub-brick of 'bbb'.
If no -base option is given, then the base volume is
taken to be the #0 sub-brick of the source dataset.
(Base must be stored as floats, shorts, or bytes.)
** -base is not needed if you are just applying a given
transformation to the -source dataset to produce
the output, using -1Dmatrix_apply or -1Dparam_apply
** Unless you use the -master option, the aligned
output dataset will be stored on the same 3D grid
as the -base dataset.
-source ttt = Read the source dataset from 'ttt'. If no -source
*OR* (or -input) option is given, then the source dataset
-input ttt is the last argument on the command line.
(Source must be stored as floats, shorts, or bytes.)
** This is the dataset to be transformed, to match the
-base dataset, or directly with one of the options
-1Dmatrix_apply or -1Dparam_apply
** 3dAllineate can register 2D datasets (single slice),
but both the base and source must be 2D -- you cannot
use this program to register a 2D slice into a 3D volume!
** See the script @2dwarper.Allin for an example of using
3dAllineate to do slice-by-slice nonlinear warping to
align 3D volumes distorted by time-dependent magnetic
field inhomogeneities.
** NOTA BENE: The base and source dataset do NOT have to be defined **
** [that's] on the same 3D grids; the alignment process uses the **
** [Latin ] coordinate systems defined in the dataset headers to **
** [ for ] make the match between spatial locations, rather than **
** [ NOTE ] matching the 2 datasets on a voxel-by-voxel basis **
** [ WELL ] (as 3dvolreg and 3dWarpDrive do). **
** -->> However, this coordinate-based matching requires that **
** image volumes be defined on roughly the same patch of **
** (x,y,z) space, in order to find a decent starting **
** point for the transformation. You might need to use **
** the script @Align_Centers to do this, if the 3D **
** spaces occupied by the images do not overlap much. **
** -->> Or the '-cmass' option to this program might be **
** sufficient to solve this problem, maybe, with luck. **
** (Another reason why you should use align_epi_anat.py) **
** -->> If the coordinate system in the dataset headers is **
** WRONG, then 3dAllineate will probably not work well! **
-prefix ppp = Output the resulting dataset to file 'ppp'. If this
*OR* option is NOT given, no dataset will be output! The
-out ppp transformation matrix to align the source to the base will
be estimated, but not applied. You can save the matrix
for later use using the '-1Dmatrix_save' option.
*N.B.: By default, the new dataset is computed on the grid of the
base dataset; see the '-master' and/or the '-mast_dxyz'
options to change this grid.
*N.B.: If 'ppp' is 'NULL', then no output dataset will be produced.
This option is for compatibility with 3dvolreg.
-floatize = Write result dataset as floats. Internal calculations
-float are all done on float copies of the input datasets.
[Default=convert output dataset to data format of ]
[ source dataset; if the source dataset was ]
[ shorts with a scale factor, then the new ]
[ dataset will get a scale factor as well; ]
[ if the source dataset was shorts with no ]
[ scale factor, the result will be unscaled.]
-1Dparam_save ff = Save the warp parameters in ASCII (.1D) format into
file 'ff' (1 row per sub-brick in source).
* A historical synonym for this option is '-1Dfile'.
* At the top of the saved 1D file is a #comment line
listing the names of the parameters; those parameters
that are fixed (e.g., via '-parfix') will be marked
by having their symbolic names end in the '$' character.
You can use '1dcat -nonfixed' to remove these columns
from the 1D file if you just want to further process the
varying parameters somehow (e.g., 1dsvd).
* However, the '-1Dparam_apply' option requires the
full list of parameters, including those that were
fixed, in order to work properly!
-1Dparam_apply aa = Read warp parameters from file 'aa', apply them to
the source dataset, and produce a new dataset.
(Must also use the '-prefix' option for this to work! )
(In this mode of operation, there is no optimization of)
(the cost functional by changing the warp parameters; )
(previously computed parameters are applied directly. )
*N.B.: If you use -1Dparam_apply, you may also want to use
-master to control the grid on which the new
dataset is written -- the base dataset from the
original 3dAllineate run would be a good possibility.
Otherwise, the new dataset will be written out on the
3D grid coverage of the source dataset, and this
might result in clipping off part of the image.
*N.B.: Each row in the 'aa' file contains the parameters for
transforming one sub-brick in the source dataset.
If there are more sub-bricks in the source dataset
than there are rows in the 'aa' file, then the last
row is used repeatedly.
*N.B.: A trick to use 3dAllineate to resample a dataset to
a finer grid spacing:
3dAllineate -input dataset+orig \
-master template+orig \
-prefix newdataset \
-final wsinc5 \
-1Dparam_apply '1D: 12@0'\'
Here, the identity transformation is specified
by giving all 12 affine parameters as 0 (note
the extra \' at the end of the '1D: 12@0' input!).
** You can also use the word 'IDENTITY' in place of
'1D: 12@0'\' (to indicate the identity transformation).
**N.B.: Some expert options for modifying how the wsinc5
method works are described far below, if you use
'-HELP' instead of '-help'.
****N.B.: The interpolation method used to produce a dataset
is always given via the '-final' option, NOT via
'-interp'. If you forget this and use '-interp'
along with one of the 'apply' options, this program
will chastise you (gently) and change '-final'
to match the '-interp' input.
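 ****N.B.: A sketch of the 'IDENTITY' form of the resampling trick
           above (dataset names are hypothetical):
             3dAllineate -input dataset+orig   \
                         -master template+orig \
                         -prefix newdataset    \
                         -final wsinc5         \
                         -1Dparam_apply IDENTITY
           This form avoids the awkward shell quoting needed for
           '1D: 12@0'\'.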
-1Dmatrix_save ff = Save the transformation matrix for each sub-brick into
file 'ff' (1 row per sub-brick in the source dataset).
If 'ff' does NOT end in '.1D', then the program will
append '.aff12.1D' to 'ff' to make the output filename.
*N.B.: This matrix is the coordinate transformation from base
to source DICOM coordinates. In other terms:
Xin = Xsource = M Xout = M Xbase
or
Xout = Xbase = inv(M) Xin = inv(M) Xsource
where Xin or Xsource is the 4x1 coordinates of a
location in the input volume. Xout is the
coordinate of that same location in the output volume.
Xbase is the coordinate of the corresponding location
in the base dataset. M is the matrix from 'ff' augmented
by a 4th row of [0 0 0 1]; each X above is an augmented
column vector [x,y,z,1]'.
To get the inverse matrix inv(M)
(source to base), use the cat_matvec program, as in
cat_matvec fred.aff12.1D -I
-1Dmatrix_apply aa = Use the matrices in file 'aa' to define the spatial
transformations to be applied. Also see program
cat_matvec for ways to manipulate these matrix files.
*N.B.: You probably want to use either -base or -master
with either *_apply option, so that the coordinate
system that the matrix refers to is correctly loaded.
** You can also use the word 'IDENTITY' in place of a
filename to indicate the identity transformation --
presumably for the purpose of resampling the source
dataset to a new grid.
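 *N.B.: A minimal sketch of applying a saved matrix (the file and
        dataset names are hypothetical):
            3dAllineate -1Dmatrix_apply fred.aff12.1D \
                        -source moving+orig           \
                        -master base+orig             \
                        -prefix moving_aligned
        Here '-master' supplies the coordinate system and output
        grid, per the note above.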
* The -1Dmatrix_* options can be used to save and re-use the transformation *
* matrices. In combination with the program cat_matvec, which can multiply *
* saved transformation matrices, you can also adjust these matrices to *
* other alignments. These matrices can also be combined with nonlinear *
* warps (from 3dQwarp) using programs 3dNwarpApply or 3dNwarpCat. *
* The script 'align_epi_anat.py' uses 3dAllineate and 3dvolreg to align EPI *
* datasets to T1-weighted anatomical datasets, using saved matrices between *
* the two programs. This script is our currently recommended method for *
* doing such intra-subject alignments. *
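 * A sketch of combining saved matrices with cat_matvec (file names are
   hypothetical; the order of composition matters, so check
   'cat_matvec -help' before relying on this):
       cat_matvec mat_B.aff12.1D mat_A.aff12.1D > mat_AB.aff12.1D
   The combined matrix can then be applied in a single resampling
   step via '-1Dmatrix_apply'.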
-cost ccc = Defines the 'cost' function that defines the matching
between the source and the base; 'ccc' is one of
ls *OR* leastsq = Least Squares [Pearson Correlation]
mi *OR* mutualinfo = Mutual Information [H(b)+H(s)-H(b,s)]
crM *OR* corratio_mul = Correlation Ratio (Symmetrized*)
nmi *OR* norm_mutualinfo = Normalized MI [H(b,s)/(H(b)+H(s))]
hel *OR* hellinger = Hellinger metric
crA *OR* corratio_add = Correlation Ratio (Symmetrized+)
crU *OR* corratio_uns = Correlation Ratio (Unsym)
lpc *OR* localPcorSigned = Local Pearson Correlation Signed
lpa *OR* localPcorAbs = Local Pearson Correlation Abs
lpc+ *OR* localPcor+Others= Local Pearson Signed + Others
lpa+ *OR* localPcorAbs+Others= Local Pearson Abs + Others
You can also specify the cost functional using an option
of the form '-mi' rather than '-cost mi', if you like
to keep things terse and cryptic (as I do).
[Default == '-hel' (for no good reason, but it sounds nice).]
**NB** See more below about lpa and lpc, which are typically
what we would recommend as first-choice cost functions
now:
lpa if you have similar contrast vols to align;
lpc if you have *non*similar contrast vols to align!
-interp iii = Defines interpolation method to use during matching
process, where 'iii' is one of
NN *OR* nearestneighbour *OR* nearestneighbor
linear *OR* trilinear
cubic *OR* tricubic
quintic *OR* triquintic
Using '-NN' instead of '-interp NN' is allowed (e.g.).
Note that using cubic or quintic interpolation during
the matching process will slow the program down a lot.
Use '-final' to affect the interpolation method used
to produce the output dataset, once the final registration
parameters are determined. [Default method == 'linear'.]
** N.B.: Linear interpolation is used during the coarse
alignment pass; the selection here only affects
the interpolation method used during the second
(fine) alignment pass.
** N.B.: '-interp' does NOT define the final method used
to produce the output dataset as warped from the
input dataset. If you want to do that, use '-final'.
-final iii = Defines the interpolation mode used to create the
output dataset. [Default == 'cubic']
** N.B.: If you are applying a transformation to an
integer-valued dataset (such as an atlas),
then you should use '-final NN' to avoid
interpolation of the integer labels. (A command
sketch appears at the end of these '-final' notes.)
** N.B.: For '-final' ONLY, you can use 'wsinc5' to specify
that the final interpolation be done using a
weighted sinc interpolation method. This method
is so SLOW that you aren't allowed to use it for
the registration itself.
++ wsinc5 interpolation is highly accurate and should
reduce the smoothing artifacts from lower
order interpolation methods (which are most
visible if you interpolate an EPI time series
to high resolution and then make an image of
the voxel-wise variance).
++ On my Intel-based Mac, it takes about 2.5 s to do
wsinc5 interpolation, per 1 million voxels output.
For comparison, quintic interpolation takes about
0.3 s per 1 million voxels: 8 times faster than wsinc5.
++ The '5' refers to the width of the sinc interpolation
weights: plus/minus 5 grid points in each direction;
this is a tensor product interpolation, for speed.
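 ** N.B.: A sketch of the '-final NN' advice above, applying a saved
          matrix to an integer-valued atlas (names are hypothetical):
            3dAllineate -1Dmatrix_apply anat_to_epi.aff12.1D \
                        -source atlas+tlrc                   \
                        -master epi+orig                     \
                        -final NN -prefix atlas_in_epi
          Nearest-neighbor interpolation keeps the labels as integers.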
TECHNICAL OPTIONS (used for fine control of the program):
=================
-nmatch nnn = Use at most 'nnn' scattered points to match the
datasets. The smaller nnn is, the faster the matching
algorithm will run; however, accuracy may be bad if
nnn is too small. If you end the 'nnn' value with the
'%' character, then that percentage of the base's
voxels will be used.
[Default == 47% of voxels in the weight mask]
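       A sketch of the percentage form (dataset names are hypothetical):
           3dAllineate -base anat+orig -source epi+orig \
                       -prefix epi_al -nmatch 30%
       This matches using 30% of the base dataset's voxels.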
-nopad = Do not use zero-padding on the base image.
(I cannot think of a good reason to use this option.)
[Default == zero-pad, if needed; -verb shows how much]
-zclip = Replace negative values in the input datasets (source & base)
-noneg with zero. The intent is to clip off a small set of negative
values that may arise when using 3dresample (say) with
cubic interpolation.
-conv mmm = Convergence test is set to 'mmm' millimeters.
This doesn't mean that the results will be accurate
to 'mmm' millimeters! It just means that the program
stops trying to improve the alignment when the optimizer
(NEWUOA) reports it has narrowed the search radius
down to this level.
-verb = Print out verbose progress reports.
[Using '-VERB' will give even more prolix reports.]
-quiet = Don't print out verbose stuff.
-usetemp = Write intermediate stuff to disk, to economize on RAM.
Using this will slow the program down, but may make it
possible to register datasets that need lots of space.
**N.B.: Temporary files are written to the directory given
in environment variable TMPDIR, or in /tmp, or in ./
(preference in that order). If the program crashes,
these files are named TIM_somethingrandom, and you
may have to delete them manually. (TIM=Temporary IMage)
**N.B.: If the program fails with a 'malloc failure' type of
message, then try '-usetemp' (malloc=memory allocator).
**N.B.: If you use '-verb', then memory usage is printed out
at various points along the way.
-nousetemp = Don't use temporary workspace on disk [the default].
-check hhh = After cost functional optimization is done, start at the
final parameters and RE-optimize using the new cost
function 'hhh'. If the results are too different, a
warning message will be printed. However, the final
parameters from the original optimization will be
used to create the output dataset. Using '-check'
increases the CPU time, but can help you feel sure
that the alignment process did not go wild and crazy.
[Default == no check == don't worry, be happy!]
**N.B.: You can put more than one function after '-check', as in
-nmi -check mi hel crU crM
to register with Normalized Mutual Information, and
then check the results against 4 other cost functionals.
**N.B.: On the other hand, some cost functionals give better
results than others for specific problems, and so
a warning that 'mi' was significantly different than
'hel' might not actually mean anything useful (e.g.).
** PARAMETERS THAT AFFECT THE COST OPTIMIZATION STRATEGY **
-onepass = Use only the refining pass -- do not try a coarse
resolution pass first. Useful if you know that only
small amounts of image alignment are needed.
[The default is to use both passes.]
-twopass = Use a two pass alignment strategy, first searching for
a large rotation+shift and then refining the alignment.
[Two passes are used by default for the first sub-brick]
[in the source dataset, and then one pass for the others.]
['-twopass' will do two passes for ALL source sub-bricks.]
*** The first (coarse) pass is relatively slow, as it tries
to search a large volume of parameter (rotations+shifts)
space for initial guesses at the alignment transformation.
* A lot of these initial guesses are kept and checked to
see which ones lead to good starting points for the
further refinement.
* The winners of this competition are then passed to the
'-twobest' (infra) successive optimization passes.
* The ultimate winner of THAT stage is what starts
the second (fine) pass alignment. Usually, this starting
point is so good that the fine pass optimization does
not provide a lot of improvement.
* All of these stages are intended to help the program avoid
stopping at a 'false' minimum in the cost functional.
They were added to the software as we gathered experience
with difficult 3D alignment problems. The combination of
multiple stages of partial optimization of multiple
parameter candidates makes the coarse pass slow, but also
makes it (usually) work well.
-twoblur rr = Set the blurring radius for the first pass to 'rr'
millimeters. [Default == 11 mm]
**N.B.: You may want to change this from the default if
your voxels are unusually small or unusually large
(e.g., outside the range 1-4 mm along each axis).
-twofirst = Use -twopass on the first image to be registered, and
then on all subsequent images from the source dataset,
use results from the first image's coarse pass to start
the fine pass.
(Useful when there may be large motions between the )
(source and the base, but only small motions within )
(the source dataset itself; since the coarse pass can )
(be slow, doing it only once makes sense in this case.)
**N.B.: [-twofirst is on by default; '-twopass' turns it off.]
-twobest bb = In the coarse pass, use the best 'bb' set of initial
points to search for the starting point for the fine
pass. If bb==0, then no search is made for the best
starting point, and the identity transformation is
used as the starting point. [Default=5; min=0 max=22]
**N.B.: Setting bb=0 will make things run faster, but less reliably.
-fineblur x = Set the blurring radius to use in the fine resolution
pass to 'x' mm. A small amount (1-2 mm?) of blurring at
the fine step may help with convergence, if there is
some problem, especially if the base volume is very noisy.
[Default == 0 mm = no blurring at the final alignment pass]
**NOTES ON
**STRATEGY: * If you expect only small-ish (< 2 voxels?) image movement,
then using '-onepass' or '-twobest 0' makes sense.
* If you expect large-ish image movements, then do not
use '-onepass' or '-twobest 0'; the purpose of the
'-twobest' parameter is to search for large initial
rotations/shifts with which to start the coarse
optimization round.
* If you have multiple sub-bricks in the source dataset,
then the default '-twofirst' makes sense if you don't expect
large movements WITHIN the source, but expect large motions
between the source and base.
* '-twopass' re-starts the alignment process for each sub-brick
in the source dataset -- this option can be time consuming,
and is really intended to be used when you might expect large
movements between sub-bricks; for example, when the different
volumes are gathered on different days. For most purposes,
'-twofirst' (the default process) will be adequate and faster,
when operating on multi-volume source datasets.
 -cmass      = Use a center-of-mass calculation to determine an initial shift.
               [This option is OFF by default]
               The option can be given as '-cmass+a', '-cmass+xy', '-cmass+yz',
               or '-cmass+xz', where '+a' means to try to determine
               automatically in which direction the data coverage is partial,
               by looking for an overly large shift.
               If given in the form '-cmass+xy' (for example), the option
               means to do the CoM calculation in the x- and y-directions,
               but not the z-direction.
-nocmass = Don't use the center-of-mass calculation. [The default]
(You would not want to use the C-o-M calculation if the )
(source sub-bricks have very different spatial locations,)
(since the source C-o-M is calculated from all sub-bricks)
**EXAMPLE: You have a limited coverage set of axial EPI slices you want to
register into a larger head volume (after 3dSkullStrip, of course).
In this case, '-cmass+xy' makes sense, allowing CoM adjustment
along the x = R-L and y = A-P directions, but not along the
z = I-S direction, since the EPI doesn't cover the whole brain
along that axis.
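          A command sketch for that case (dataset names are hypothetical):
              3dAllineate -base anat_ss+orig -source epi_partial+orig \
                          -cmass+xy -prefix epi_al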
-autoweight = Compute a weight function using the 3dAutomask
algorithm plus some blurring of the base image.
**N.B.: '-autoweight+100' means to zero out all voxels
with values below 100 before computing the weight.
'-autoweight**1.5' means to compute the autoweight
and then raise it to the 1.5-th power (e.g., to
increase the weight of high-intensity regions).
These two processing steps can be combined, as in
'-autoweight+100**1.5'
** Note that '**' must be enclosed in quotes;
otherwise, the shell will treat it as a wildcard
and you will get an error message before 3dAllineate
even starts!!
** UPDATE: one can now use '^' for power notation, to
avoid needing to enclose the string in quotes.
**N.B.: Some cost functionals do not allow -autoweight, and
will use -automask instead. A warning message
will be printed if you run into this situation.
If a clip level '+xxx' is appended to '-autoweight',
then the conversion into '-automask' will NOT happen.
Thus, a small positive '+xxx' can be used to trick
-autoweight into working with any cost functional.
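 **N.B.: A sketch of the clip+power notation above (dataset names are
         hypothetical); note the quotes protecting '**' from the shell:
             3dAllineate -base anat+orig -source epi+orig \
                         -prefix epi_al '-autoweight+100**1.5'
         or, using the unquoted '^' power notation:
             3dAllineate -base anat+orig -source epi+orig \
                         -prefix epi_al -autoweight+100^1.5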
-automask = Compute a mask function, which is like -autoweight,
but the weight for a voxel is set to either 0 or 1.
**N.B.: '-automask+3' means to compute the mask function, and
then dilate it outwards by 3 voxels (e.g.).
** Note that '+' means something very different
for '-automask' and '-autoweight'!!
-autobox = Expand the -automask function to enclose a rectangular
box that holds the irregular mask.
**N.B.: This is the default mode of operation!
For intra-modality registration, '-autoweight' may be better!
* If the cost functional is 'ls', then '-autoweight' will be
the default, instead of '-autobox'.
-nomask = Don't compute the autoweight/mask; if -weight is not
also used, then every voxel will be counted equally.
-weight www = Set the weighting for each voxel in the base dataset;
larger weights mean that voxel counts more in the cost
function.
**N.B.: The weight dataset must be defined on the same grid as
the base dataset.
**N.B.: Even if a method does not allow -autoweight, you CAN
use a weight dataset that is not 0/1 valued. The
risk is yours, of course (!*! as always in AFNI !*!).
-wtprefix p = Write the weight volume to disk as a dataset with
prefix name 'p'. Used with '-autoweight/mask', this option
lets you see what voxels were important in the algorithm.
-emask ee = This option lets you specify a mask of voxels to EXCLUDE from
the analysis. The voxels where the dataset 'ee' is nonzero
will not be included (i.e., their weights will be set to zero).
* Like all the weight options, it applies in the base image
coordinate system.
* Like all the weight options, it means nothing if you are using
one of the 'apply' options.
Method Allows -autoweight
------ ------------------
ls YES
mi NO
crM YES
nmi NO
hel NO
crA YES
crU YES
lpc YES
lpa YES
lpc+ YES
lpa+ YES
-source_mask sss = Mask the source (input) dataset, using 'sss'.
-source_automask = Automatically mask the source dataset.
[By default, all voxels in the source]
[dataset are used in the matching. ]
**N.B.: You can also use '-source_automask+3' to dilate
the default source automask outward by 3 voxels.
-warp xxx = Set the warp type to 'xxx', which is one of
shift_only *OR* sho = 3 parameters
shift_rotate *OR* shr = 6 parameters
shift_rotate_scale *OR* srs = 9 parameters
affine_general *OR* aff = 12 parameters
[Default = affine_general, which includes image]
[ shifts, rotations, scaling, and shearing]
-warpfreeze = Freeze the non-rigid body parameters (those past #6)
after doing the first sub-brick. Subsequent volumes
will have the same spatial distortions as sub-brick #0,
plus rigid body motions only.
-replacebase = If the source has more than one sub-brick, and this
option is turned on, then after the #0 sub-brick is
aligned to the base, the aligned #0 sub-brick is used
as the base image for subsequent source sub-bricks.
-replacemeth m = After sub-brick #0 is aligned, switch to method 'm'
for later sub-bricks. For use with '-replacebase'.
-EPI = Treat the source dataset as being composed of warped
EPI slices, and the base as comprising anatomically
'true' images. Only phase-encoding direction image
shearing and scaling will be allowed with this option.
**N.B.: For most people, the base dataset will be a 3dSkullStrip-ed
T1-weighted anatomy (MPRAGE or SPGR). If you don't remove
the skull first, the EPI images (which have little skull
visible due to fat-suppression) might expand to fit EPI
brain over T1-weighted skull.
**N.B.: Usually, EPI datasets don't have as complete slice coverage
of the brain as do T1-weighted datasets. If you don't use
some option (like '-EPI') to suppress scaling in the slice-
direction, the EPI dataset is likely to stretch the slice
thickness to better 'match' the T1-weighted brain coverage.
**N.B.: '-EPI' turns on '-warpfreeze -replacebase'.
You can use '-nowarpfreeze' and/or '-noreplacebase' AFTER the
'-EPI' on the command line if you do not want these options used.
-parfix n v = Fix parameter #n to be exactly at value 'v'.
-parang n b t = Allow parameter #n to range only between 'b' and 't'.
If not given, default ranges are used.
-parini n v = Initialize parameter #n to value 'v', but then
allow the algorithm to adjust it.
**N.B.: Multiple '-par...' options can be used, to constrain
multiple parameters.
**N.B.: -parini has no effect if -twopass is used, since
the -twopass algorithm carries out its own search
for initial parameters.
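 **N.B.: A sketch of such constraints (dataset names are hypothetical):
         fix the z/y-shear (parameter #12) to 0 and limit the x-scale
         (parameter #7) to the range [0.9,1.1]:
             3dAllineate -base base+orig -source src+orig -prefix out \
                         -parfix 12 0 -parang 7 0.9 1.1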
-maxrot dd = Allow maximum rotation of 'dd' degrees. Equivalent
to '-parang 4 -dd dd -parang 5 -dd dd -parang 6 -dd dd'
[Default=30 degrees]
-maxshf dd = Allow maximum shift of 'dd' millimeters. Equivalent
to '-parang 1 -dd dd -parang 2 -dd dd -parang 3 -dd dd'
[Default=32% of the size of the base image]
**N.B.: This max shift setting is relative to the center-of-mass
shift, if the '-cmass' option is used.
-maxscl dd = Allow maximum scaling factor to be 'dd'. Equivalent
to '-parang 7 1/dd dd -parang 8 1/dd dd -parang 9 1/dd dd'
[Default=1.2=image can go up or down 20% in size]
-maxshr dd = Allow maximum shearing factor to be 'dd'. Equivalent
to '-parang 10 -dd dd -parang 11 -dd dd -parang 12 -dd dd'
[Default=0.1111 for no good reason]
NOTE: If the datasets being registered have only 1 slice, 3dAllineate
will automatically fix the 6 out-of-plane motion parameters to
their 'do nothing' values, so you don't have to specify '-parfix'.
-master mmm = Write the output dataset on the same grid as dataset
'mmm'. If this option is NOT given, the base dataset
is the master.
**N.B.: 3dAllineate transforms the source dataset to be 'similar'
to the base image. Therefore, the coordinate system
of the master dataset is interpreted as being in the
reference system of the base image. It is thus vital
that these finite 3D volumes overlap, or you will lose data!
**N.B.: If 'mmm' is the string 'SOURCE', then the source dataset
is used as the master for the output dataset grid.
You can also use 'BASE', which is of course the default.
-mast_dxyz del = Write the output dataset using grid spacings of
*OR* 'del' mm. If this option is NOT given, then the
-newgrid del grid spacings in the master dataset will be used.
This option is useful when registering low resolution
data (e.g., EPI time series) to high resolution
datasets (e.g., MPRAGE) where you don't want to
consume vast amounts of disk space interpolating
the low resolution data to some artificially fine
(and meaningless) spatial grid.
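       A sketch of this usage (file and dataset names are hypothetical):
           3dAllineate -1Dmatrix_apply epi_to_anat.aff12.1D \
                       -source epi+orig -master anat+orig   \
                       -mast_dxyz 2.5 -prefix epi_in_anat
       The output occupies the anatomical volume's coordinate patch,
       but on a 2.5 mm grid rather than the anatomical resolution.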
----------------------------------------------
DEFINITION OF AFFINE TRANSFORMATION PARAMETERS
----------------------------------------------
The 3x3 spatial transformation matrix is calculated as [S][D][U],
where [S] is the shear matrix,
[D] is the scaling matrix, and
[U] is the rotation (proper orthogonal) matrix.
These matrices are specified in DICOM-ordered (x=-R+L,y=-A+P,z=-I+S)
coordinates as:
[U] = [Rotate_y(param#6)] [Rotate_x(param#5)] [Rotate_z(param #4)]
(angles are in degrees)
[D] = diag( param#7 , param#8 , param#9 )
[ 1 0 0 ] [ 1 param#10 param#11 ]
[S] = [ param#10 1 0 ] OR [ 0 1 param#12 ]
[ param#11 param#12 1 ] [ 0 0 1 ]
The shift vector comprises parameters #1, #2, and #3.
The goal of the program is to find the warp parameters such that
I([x]_warped) 'is similar to' J([x]_in)
as closely as possible in some sense of 'similar', where J(x) is the
base image, and I(x) is the source image.
Using '-parfix', you can specify that some of these parameters
are fixed. For example, '-shift_rotate_scale' is equivalent to
'-affine_general -parfix 10 0 -parfix 11 0 -parfix 12 0'.
Don't even think of using the '-parfix' option unless you grok
this example!
----------- Special Note for the '-EPI' Option's Coordinates -----------
In this case, the parameters above are with reference to coordinates
x = frequency encoding direction (by default, first axis of dataset)
y = phase encoding direction (by default, second axis of dataset)
z = slice encoding direction (by default, third axis of dataset)
This option lets you freeze some of the warping parameters in ways that
make physical sense, considering how echo-planar images are acquired.
The x- and z-scaling parameters are disabled, and shears will only affect
the y-axis. Thus, there will be only 9 free parameters when '-EPI' is
used. If desired, you can use a '-parang' option to allow the scaling
fixed parameters to vary (put these after the '-EPI' option):
-parang 7 0.833 1.20 to allow x-scaling
-parang 9 0.833 1.20 to allow z-scaling
You could also fix some of the other parameters, if that makes sense
in your situation; for example, to disable out-of-slice rotations:
-parfix 5 0 -parfix 6 0
and to disable out of slice translation:
-parfix 3 0
NOTE WELL: If you use '-EPI', then the output warp parameters (e.g., in
'-1Dparam_save') apply to the (freq,phase,slice) xyz coordinates,
NOT to the DICOM xyz coordinates, so equivalent transformations
will be expressed with different sets of parameters entirely
than if you don't use '-EPI'! This comment does NOT apply
to the output of '-1Dmatrix_save', since that matrix is
defined relative to the RAI (DICOM) spatial coordinates.
*********** CHANGING THE ORDER OF MATRIX APPLICATION ***********
{{{ There is no good reason to ever use these options! }}}
-SDU or -SUD }= Set the order of the matrix multiplication
-DSU or -DUS }= for the affine transformations:
-USD or -UDS }= S = triangular shear (params #10-12)
D = diagonal scaling matrix (params #7-9)
U = rotation matrix (params #4-6)
Default order is '-SDU', which means that
the U matrix is applied first, then the
D matrix, then the S matrix.
-Supper }= Set the S matrix to be upper or lower
-Slower }= triangular [Default=lower triangular]
-ashift OR }= Apply the shift parameters (#1-3) after OR
-bshift }= before the matrix transformation. [Default=after]
==================================================
===== RWCox - September 2006 - Live Long and Prosper =====
==================================================
********************************************************
*** From Webster's Dictionary: Allineate == 'to align' ***
********************************************************
===========================================================================
FORMERLY SECRET HIDDEN OPTIONS
---------------------------------------------------------------------------
** N.B.: Most of these are experimental! [permanent beta] **
===========================================================================
-num_rtb n = At the beginning of the fine pass, the best set of results
from the coarse pass are 'refined' a little by further
optimization, before the single best one is chosen
for the final fine optimization.
* This option sets the maximum number of cost functional
evaluations to be used (for each set of parameters)
in this step.
* The default is 99; a larger value will take more CPU
time but may give more robust results.
* If you want to skip this step entirely, use '-num_rtb 0';
  then the best of the coarse pass results is taken
  straight to the final optimization passes.
**N.B.: If you use '-VERB', you will see that one extra case
is involved in this initial fine refinement step; that
case is starting with the identity transformation, which
helps insure against the chance that the coarse pass
optimizations ran totally amok.
-nocast = By default, parameter vectors that are too close to the
best one are cast out at the end of the coarse pass
refinement process. Use this option if you want to keep
them all for the fine resolution pass.
-norefinal = Do NOT re-start the fine iteration step after it
has converged. The default is to re-start it, which
usually results in a small improvement to the result
(at the cost of CPU time). This re-start step is an
attempt to avoid a local minimum trap. It is usually
not necessary, but sometimes helps.
-realaxes = Use the 'real' axes stored in the dataset headers, if they
conflict with the default axes. [For Jedi AFNI Masters only!]
-savehist sss = Save start and final 2D histograms as PGM
files, with prefix 'sss' (cost: cr mi nmi hel).
* if the filename contains 'FF', floats are written
* these are the weighted histograms!
* -savehist will also save histogram files when
  the -allcost evaluations take place
* this option is mostly useless unless '-histbin' is
also used
-median = Smooth with median filter instead of Gaussian blur.
(Somewhat slower, and not obviously useful.)
-powell m a = Set the Powell NEWUOA dimensional parameters to
'm' and 'a' (cf. source code in powell_int.c).
The number of points used for approximating the
cost functional is m*N+a, where N is the number
of parameters being optimized. The default values
are m=2 and a=3. Larger values will probably slow
the program down for no good reason. The smallest
allowed values are 1.
-target ttt = Same as '-source ttt'. In the earliest versions,
what I now call the 'source' dataset was called the
'target' dataset:
Try to remember the kind of September (2006)
When life was slow and oh so mellow
Try to remember the kind of September
When grass was green and source was target.
-Xwarp =} Change the warp/matrix setup so that only the x-, y-, or z-
-Ywarp =} axis is stretched & sheared. Useful for EPI, where 'X',
-Zwarp =} 'Y', or 'Z' corresponds to the phase encoding direction.
-FPS fps = Generalizes -EPI to arbitrary permutation of directions.
-histpow pp = By default, the number of bins in the histogram used
for calculating the Hellinger, Mutual Information, and
Correlation Ratio statistics is n^(1/3), where n is
the number of data points. You can change that exponent
to 'pp' with this option.
-histbin nn = Or you can just set the number of bins directly to 'nn'.
-eqbin nn = Use equalized marginal histograms with 'nn' bins.
-clbin nn = Use 'nn' equal-spaced bins except for the bot and top,
which will be clipped (thus the 'cl'). If nn is 0, the
program will pick the number of bins for you.
**N.B.: '-clbin 0' is now the default [25 Jul 2007];
if you want the old all-equal-spaced bins, use
'-histbin 0'.
**N.B.: '-clbin' only works when the datasets are
non-negative; any negative voxels in either
the input or source volumes will force a switch
to all equal-spaced bins.
-wtmrad mm = Set autoweight/mask median filter radius to 'mm' voxels.
-wtgrad gg = Set autoweight/mask Gaussian filter radius to 'gg' voxels.
-nmsetup nn = Use 'nn' points for the setup matching [default=98756]
-ignout = Ignore voxels outside the warped source dataset.
-blok bbb = Blok definition for the 'lp?' (Local Pearson) cost
functions: 'bbb' is one of
'BALL(r)' or 'CUBE(r)' or 'RHDD(r)' or 'TOHD(r)'
corresponding to
spheres or cubes or rhombic dodecahedra or
truncated octahedra
where 'r' is the size parameter in mm.
[Default is 'RHDD(6.54321)' (rhombic dodecahedron)]
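              A sketch (dataset names are hypothetical); the quotes keep
              the shell from interpreting the parentheses:
                  3dAllineate -base anat+orig -source epi+orig \
                              -prefix epi_al -lpc -blok 'CUBE(7)'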
-allcost = Compute ALL available cost functionals and print them
at various points.
-allcostX = Compute and print ALL available cost functionals for the
un-warped inputs, and then quit.
-allcostX1D p q = Compute ALL available cost functionals for the set of
parameters given in the 1D file 'p' (12 values per row),
write them to the 1D file 'q', then exit. (For you, Zman)
* N.B.: If -fineblur is used, that amount of smoothing
will be applied prior to the -allcostX evaluations.
The parameters are the rotation, shift, scale,
and shear values, not the affine transformation
matrix. An identity matrix could be provided as
"0 0 0 0 0 0 1 1 1 0 0 0" for instance or by
using the word "IDENTITY"
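         A sketch of this usage (file names are hypothetical):
             3dAllineate -base base+orig -source src+orig \
                         -allcostX1D params.1D costs.1D
         where each row of params.1D holds the 12 parameter values.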
===========================================================================
Modifying '-final wsinc5'
-------------------------
* The windowed (tapered) sinc function interpolation can be modified
by several environment variables. This is expert-level stuff, and
you should understand what you are doing if you use these options.
The simplest way to use these would be on the command line, as in
-DAFNI_WSINC5_RADIUS=9 -DAFNI_WSINC5_TAPERFUN=Hamming
* AFNI_WSINC5_TAPERFUN lets you choose the taper function.
The default taper function is the minimum sidelobe 3-term cosine:
0.4243801 + 0.4973406*cos(PI*x) + 0.0782793*cos(2*PI*x)
If you set this environment variable to 'Hamming', then the
minimum sidelobe 2-term cosine will be used instead:
0.53836 + 0.46164*cos(PI*x)
Here, 'x' is between 0 and 1, where x=0 is the center of the
interpolation mask and x=1 is the outer edge.
++ Unfortunately, the 3-term cosine doesn't have a catchy name; you can
find it (and many other taper functions) described in the paper
AH Nuttall, Some Windows with Very Good Sidelobe Behavior.
IEEE Trans. ASSP, 29:84-91 (1981).
In particular, see Fig.14 and Eq.36 in this paper.
* AFNI_WSINC5_TAPERCUT lets you choose the start 'x' point for tapering:
This value should be between 0 and 0.8; for example, 0 means to taper
all the way from x=0 to x=1 (maximum tapering). The default value
is 0. Setting TAPERCUT to 0.5 (say) means only to taper from x=0.5
to x=1; thus, a larger value means that fewer points are tapered
inside the interpolation mask.
* AFNI_WSINC5_RADIUS lets you choose the radius of the tapering window
(i.e., the interpolation mask region). This value is an integer
between 3 and 21. The default value is 5 (which used to be the
ONLY value, thus 'wsinc5'). RADIUS is measured in voxels, not mm.
* AFNI_WSINC5_SPHERICAL lets you choose the shape of the mask region.
If you set this value to 'Yes', then the interpolation mask will be
spherical; otherwise, it defaults to cubical.
* The Hamming taper function is a little faster than the 3-term function,
but will have a little more Gibbs phenomenon.
* A larger TAPERCUT will give a little more Gibbs phenomenon; compute
speed won't change much with this parameter.
* Compute time goes up with (at least) the 3rd power of the RADIUS; setting
RADIUS to 21 will be VERY slow.
* Visually, RADIUS=3 is similar to quintic interpolation. Increasing
RADIUS makes the interpolated images look sharper and better
defined. However, values of RADIUS greater than or equal to 7 appear
(to Zhark's eagle eye) to be almost identical. If you really care,
you'll have to experiment with this parameter yourself.
* A spherical mask is also VERY slow, since the cubical mask allows
evaluation as a tensor product. There is really no good reason
to use a spherical mask; I only put it in for experimental purposes.
** For most users, there is NO reason to ever use these environment variables
to modify wsinc5. You should only do this kind of thing if you have a
good and articulable reason! (Or if you really like to screw around.)
** The wsinc5 interpolation function is parallelized using OpenMP, which
makes its usage moderately tolerable.
===========================================================================
Hidden experimental cost functionals:
-------------------------------------
sp *OR* spearman = Spearman [rank] Correlation
je *OR* jointentropy = Joint Entropy [H(b,s)]
lss *OR* signedPcor = Signed Pearson Correlation
Notes for the new [Feb 2010] lpc+ cost functional:
--------------------------------------------------
* The cost functional named 'lpc+' is a combination of several others:
lpc + hel*0.4 + crA*0.4 + nmi*0.2 + mi*0.2 + ov*0.4
++ 'hel', 'crA', 'nmi', and 'mi' are the histogram-based cost
functionals also available as standalone options.
++ 'ov' is a measure of the overlap of the automasks of the base and
source volumes; ov is not available as a standalone option.
* The purpose of lpc+ is to avoid situations where the pure lpc cost
goes wild; this especially happens if '-source_automask' isn't used.
++ Even with lpc+, you should use '-source_automask+2' (say) to be safe.
* You can alter the weighting of the extra functionals by giving the
option in the form (for example)
'-lpc+hel*0.5+nmi*0+mi*0+crA*1.0+ov*0.5'
* The quotes are needed to prevent the shell from wild-card expanding
the '*' character.
--> You can now use ':' in place of '*' to avoid this wildcard problem:
-lpc+hel:0.5+nmi:0+mi:0+crA:1+ov:0.5+ZZ
* Notice the weight factors FOLLOW the name of the extra functionals.
++ If you want a weight to be 0 or 1, you have to provide for that
explicitly -- if you leave a weight off, then it will get its
default value!
++ The order of the weight factor names is unimportant here:
'-lpc+hel*0.5+nmi*0.8' == '-lpc+nmi*0.8+hel*0.5'
* Only the 5 functionals listed (hel,crA,nmi,mi,ov) can be used in '-lpc+'.
* In addition, if you want the initial alignments to be with '-lpc+' and
then finish the Final alignment with pure '-lpc', you can indicate this
by putting 'ZZ' somewhere in the option string, as in '-lpc+ZZ'.
* [28 Nov 2018]
All of the above now applies to the 'lpa+' cost functional,
which can be used as a robust method for like-to-like alignment.
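 * A sketch of re-weighting the extra functionals with the ':' form
   (dataset names are hypothetical); extras left unnamed keep their
   default weights, and the 'ZZ' finishes with pure lpc:
       3dAllineate -base anat_ss+orig -source epi+orig \
                   -source_automask+2 -prefix epi_al   \
                   -lpc+hel:0.5+nmi:0.8+ZZ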
Cost functional descriptions (for use with -allcost output):
------------------------------------------------------------
ls :: 1 - abs(Pearson correlation coefficient)
sp :: 1 - abs(Spearman correlation coefficient)
mi :: - Mutual Information = H(base,source)-H(base)-H(source)
crM :: 1 - abs[ CR(base,source) * CR(source,base) ]
nmi :: 1/Normalized MI = H(base,source)/[H(base)+H(source)]
je :: H(base,source) = joint entropy of image pair
hel :: - Hellinger distance(base,source)
crA :: 1 - abs[ CR(base,source) + CR(source,base) ]
crU :: CR(source,base) = Var(source|base) / Var(source)
lss :: Pearson correlation coefficient between image pair
lpc :: nonlinear average of Pearson cc over local neighborhoods
lpa :: 1 - abs(lpc)
lpc+:: lpc + hel + mi + nmi + crA + overlap
lpa+:: lpa + hel + mi + nmi + crA + overlap
* N.B.: Some cost functional values (as printed out above)
are negated from their theoretical descriptions (e.g., 'hel')
so that the best image alignment will be found when the cost
is minimized. See the descriptions above and the references
below for more details for each functional.
* For more information about the 'lpc' functional, see
ZS Saad, DR Glen, G Chen, MS Beauchamp, R Desai, RW Cox.
A new method for improving functional-to-structural
MRI alignment using local Pearson correlation.
NeuroImage 44: 839-848, 2009.
http://dx.doi.org/10.1016/j.neuroimage.2008.09.037
https://afni.nimh.nih.gov/sscc/rwcox/papers/LocalPearson2009.pdf
The '-blok' option can be used to control the regions
(size and shape) used to compute the local correlations.
*** Using the 'lpc' functional wisely requires the use of
a proper weight volume. We HIGHLY recommend you use
the align_epi_anat.py script if you want to use this
cost functional! Otherwise, you are likely to get
less than optimal results (and then swear at us unjustly).
* For more information about the 'cr' functionals, see
http://en.wikipedia.org/wiki/Correlation_ratio
Note that CR(x,y) is not the same as CR(y,x), which
is why there are symmetrized versions of it available.
* For more information about the 'mi', 'nmi', and 'je'
cost functionals, see
http://en.wikipedia.org/wiki/Mutual_information
http://en.wikipedia.org/wiki/Joint_entropy
http://www.cs.jhu.edu/~cis/cista/746/papers/mutual_info_survey.pdf
* For more information about the 'hel' functional, see
http://en.wikipedia.org/wiki/Hellinger_distance
* Some cost functionals (e.g., 'mi', 'cr', 'hel') are
computed by creating a 2D joint histogram of the
base and source image pair. Various options above
(e.g., '-histbin', etc.) can be used to control the
number of bins used in the histogram on each axis.
(If you care to control the program in such detail!)
* Minimization of the chosen cost functional is done via
the NEWUOA software, described in detail in
MJD Powell. 'The NEWUOA software for unconstrained
optimization without derivatives.' In: GD Pillo,
M Roma (Eds), Large-Scale Nonlinear Optimization.
Springer, 2006.
http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2004_08.pdf
===========================================================================
-nwarp type = Experimental nonlinear warping:
***** Note that these '-nwarp' options are superseded *****
***** by the AFNI program 3dQwarp, which does a more *****
***** accurate and better job of nonlinear warping    *****
***** ------ Zhark the Warper ------ July 2013 ------- *****
* At present, the only 'type' is 'bilinear',
as in 3dWarpDrive, with 39 parameters.
* I plan to implement more complicated nonlinear
warps in the future, someday ....
* -nwarp can only be applied to a source dataset
that has a single sub-brick!
* -1Dparam_save and -1Dparam_apply work with
bilinear warps; see the Notes for more information.
==>>*** Nov 2010: I have now added the following polynomial
warps: 'cubic', 'quintic', 'heptic', 'nonic' (using
3rd, 5th, 7th, and 9th order Legendre polynomials); e.g.,
-nwarp heptic
* These are the nonlinear warps that I now am supporting.
* Or you can call them 'poly3', 'poly5', 'poly7', and 'poly9',
for simplicity and non-Hellenistic clarity.
* These names are not case sensitive: 'nonic' == 'Nonic', etc.
* Higher and higher order polynomials will take longer and longer
to run!
* If you wish to apply a nonlinear warp, you have to supply
a parameter file with -1Dparam_apply and also specify the
warp type with -nwarp. The number of parameters in the
file (per line) must match the warp type:
bilinear = 43 [for all nonlinear warps, the final]
cubic = 64 [4 'parameters' are fixed values to]
quintic = 172 [normalize the coordinates to -1..1]
heptic = 364 [for the nonlinear warp functions. ]
nonic = 664
In all these cases, the first 12 parameters are the
affine parameters (shifts, rotations, etc.), and the
remaining parameters define the nonlinear part of the warp
(polynomial coefficients); thus, the number of nonlinear
parameters over which the optimization takes place is
the number in the table above minus 16.
* The actual polynomial functions used are products of
Legendre polynomials, but the symbolic names used in
the header line in the '-1Dparam_save' output just
express the polynomial degree involved; for example,
quint:x^2*z^3:z
is the name given to the polynomial warp basis function
whose highest power of x is 2, is independent of y, and
whose highest power of z is 3; the 'quint' indicates that
this was used in '-nwarp quintic'; the final ':z' signifies
that this function was for deformations in the (DICOM)
z-direction (+z == Superior).
==>>*** You can further control the form of the polynomial warps
(but not the bilinear warp!) by restricting their degrees
of freedom in 2 different ways.
++ You can remove the freedom to have the nonlinear
deformation move along the DICOM x, y, and/or z axes.
++ You can remove the dependence of the nonlinear
deformation on the DICOM x, y, and/or z coordinates.
++ To illustrate with the six second order polynomials:
p2_xx(x,y,z) = x*x p2_xy(x,y,z) = x*y
p2_xz(x,y,z) = x*z p2_yy(x,y,z) = y*y
p2_yz(x,y,z) = y*z p2_zz(x,y,z) = z*z
Unrestricted, there are 18 parameters associated with
these polynomials, one for each direction of motion (x,y,z)
* If you remove the freedom of the nonlinear warp to move
data in the z-direction (say), then there would be 12
parameters left.
* If you instead remove the freedom of the nonlinear warp
to depend on the z-coordinate, you would be left with
3 basis functions (p2_xz, p2_yz, and p2_zz would be
eliminated), each of which would have x-motion, y-motion,
and z-motion parameters, so there would be 9 parameters.
++ To fix motion along the x-direction, use the option
'-nwarp_fixmotX' (and '-nwarp_fixmotY' and '-nwarp_fixmotZ').
++ To fix dependence of the polynomial warp on the x-coordinate,
use the option '-nwarp_fixdepX' (et cetera).
++ These coordinate labels in the options (X Y Z) refer to the
DICOM directions (X=R-L, Y=A-P, Z=I-S). If you would rather
fix things along the dataset storage axes, you can use
the symbols I J K to indicate the fastest to slowest varying
array dimensions (e.g., '-nwarp_fixdepK').
* Mixing up the X Y Z and I J K forms of parameter freezing
(e.g., '-nwarp_fixmotX -nwarp_fixmotJ') may cause trouble!
++ If you input a 2D dataset (a single slice) to be registered
with '-nwarp', the program automatically assumes '-nwarp_fixmotK'
and '-nwarp_fixdepK' so there are no out-of-plane parameters
or dependence. The number of nonlinear parameters is then:
2D: cubic = 14 ; quintic = 36 ; heptic = 66 ; nonic = 104.
3D: cubic = 48 ; quintic = 156 ; heptic = 348 ; nonic = 648.
[ n-th order: 2D = (n+4)*(n-1) ; 3D = (n*n+7*n+18)*(n-1)/2 ]
++ Note that these '-nwarp_fix' options have no effect on the
affine part of the warp -- if you want to constrain that as
well, you'll have to use the '-parfix' option.
* However, for 2D images, the affine part will automatically
be restricted to in-plane (6 parameter) 'motions'.
++ If you save the warp parameters (with '-1Dparam_save') when
doing 2D registration, all the parameters will be saved, even
the large number of them that are fixed to zero. You can use
'1dcat -nonfixed' to remove these columns from the 1D file if
you want to further process the varying parameters (e.g., 1dsvd).
**++ The mapping from I J K to X Y Z (DICOM coordinates), where the
'-nwarp_fix' constraints are actually applied, is very simple:
given the command to fix K (say), the coordinate X, or Y, or Z
whose direction most closely aligns with the dataset K grid
direction is chosen. Thus, for coronal images, K is in the A-P
direction, so '-nwarp_fixmotK' is translated to '-nwarp_fixmotY'.
* This simplicity means that using the '-nwarp_fix' commands on
oblique datasets is problematic. Perhaps it would work in
combination with the '-EPI' option, but that has not been tested.
-nwarp NOTES:
-------------
* -nwarp is slow - reeeaaallll slow - use it with OpenMP!
* Check the results to make sure the optimizer didn't run amok!
(You should ALWAYS do this with any registration software.)
* For the nonlinear warps, the largest coefficient allowed is
set to 0.10 by default. If you wish to change this, use an
option like '-nwarp_parmax 0.05' (to make the allowable amount
of nonlinear deformation half the default).
++ N.B.: Increasing the maximum past 0.10 may give very bad results!!
* If you use -1Dparam_save, then you can apply the nonlinear
warp to another dataset using -1Dparam_apply in a later
3dAllineate run. To do so, use '-nwarp xxx' in both runs,
so that the program knows what the extra parameters in
the file are to be used for. (A two-run sketch appears
at the end of these notes.)
++ Bilinear: 43 values are saved in 1 row of the param file.
++ The first 12 are the affine parameters
++ The next 27 are the D1,D2,D3 matrix parameters (cf. infra).
++ The final 'extra' 4 values are used to specify
the center of coordinates (vector Xc below), and a
pre-computed scaling factor applied to parameters #13..39.
++ For polynomial warps, a similar format is used (mutatis mutandis).
* The option '-nwarp_save sss' lets you save a 3D dataset of
  the displacement field used to create the output dataset. This
dataset can be used in program 3dNwarpApply to warp other datasets.
++ If the warp is symbolized by x -> w(x) [here, x is a DICOM 3-vector],
then the '-nwarp_save' dataset contains w(x)-x; that is, it contains
the warp displacement of each grid point from its grid location.
++ Also see program 3dNwarpCalc for other things you can do with this file:
warp inversion, catenation, square root, ...
* Bilinear warp formula:
Xout = inv[ I + {D1 (Xin-Xc) | D2 (Xin-Xc) | D3 (Xin-Xc)} ] [ A Xin ]
where Xin = input vector (base dataset coordinates)
Xout = output vector (source dataset coordinates)
Xc = center of coordinates used for nonlinearity
(will be the center of the base dataset volume)
A = matrix representing affine transformation (12 params)
I = 3x3 identity matrix
D1,D2,D3 = three 3x3 matrices (the 27 'new' parameters)
* when all 27 parameters == 0, warp is purely affine
{P|Q|R} = 3x3 matrix formed by adjoining the 3-vectors P,Q,R
inv[...] = inverse 3x3 matrix of stuff inside '[...]'
* The inverse of a bilinear transformation is another bilinear
transformation. Someday, I may write a program that will let
you compute that inverse transformation, so you can use it for
some cunning and devious purpose.
* If you expand the inv[...] part of the above formula in a 1st
order Taylor series, you'll see that a bilinear warp is basically
a quadratic warp, with the additional feature that its inverse
is directly computable (unlike a pure quadratic warp).
* 'bilinearD' means the matrices D1, D2, and D3 will be constrained
to be diagonal (a total of 9 nonzero values), rather than full
(a total of 27 nonzero values). This option is much faster.
* Is '-nwarp bilinear' useful? Try it and tell me!
* Unlike a bilinear warp, the polynomial warps cannot be exactly
inverted. At some point, I'll write a program to compute an
approximate inverse, if there is enough clamor for such a toy.
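 * A two-run sketch of the save/apply cycle mentioned above (file and
   dataset names are hypothetical):
       3dAllineate -base base+orig -source src+orig -nwarp quintic \
                   -1Dparam_save np_params.1D -prefix src_warped
       3dAllineate -source other+orig -master base+orig -nwarp quintic \
                   -1Dparam_apply np_params.1D -prefix other_warped
   The same '-nwarp quintic' appears in both runs, so the program
   knows how to interpret the extra parameters in the file.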
===========================================================================
=========================================================================
* This binary version of 3dAllineate is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work with 'cluster' setups).
* For implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUs, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 2.
* The maximum number of CPUs that will be used is now set to .... 2.
* OpenMP may or may not speed up the program significantly. Limited
tests show that it provides some benefit, particularly when using
the more complicated interpolation methods (e.g., '-cubic' and/or
'-final wsinc5'), for up to 3-4 CPU threads.
* But the speedup is definitely not linear in the number of threads, alas.
Probably because my parallelization efforts were pretty limited.
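 * For example, in a csh-family shell (as used in this help's examples),
   you could limit the thread count before running (dataset names are
   hypothetical):
       setenv OMP_NUM_THREADS 4
       3dAllineate -base anat+orig -source epi+orig -prefix epi_al
   (In Bourne-family shells: export OMP_NUM_THREADS=4)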
=========================================================================
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAmpToRSFC
This program is for converting spectral amplitudes into standard RSFC
parameters. It is designed to work directly with the outputs of
3dLombScargle, but you could use other inputs that have similar
formatting. (3dLombScargle's main algorithm is special because it
calculates spectra from time series with nonconstant sampling, such as if
some time points have been censored during processing -- check it out!)
At present, 6 RSFC parameters get returned in separate volumes:
ALFF, mALFF, fALFF, RSFA, mRSFA and fRSFA.
For more information about each RSFC parameter, see, e.g.:
ALFF/mALFF -- Zang et al. (2007),
fALFF -- Zou et al. (2008),
RSFA -- Kannurpatti & Biswal (2008).
You can also see the help of 3dRSFC, as well as the Appendix of
Taylor, Gohel, Di, Walter and Biswal (2012) for a mathematical
description and set of relations.
NB: *if* you want to input an unbandpassed time series and do some
filtering/other processing at the same time as estimating RSFC parameters,
then you would want to use 3dRSFC, instead.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ COMMAND:
3dAmpToRSFC { -in_amp AMPS | -in_pow POWS } -prefix PREFIX \
-band FBOT FTOP { -mask MASK } { -nifti }
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ RUNNING:
-in_amp AMPS :input file of one-sided spectral amplitudes, such as
output by 3dLombScargle. It is also assumed that
the frequencies are uniformly spaced with a single DF
('delta f'), and that the zeroth brick is at 1*DF (i.e.,
that the zeroth/baseline frequency is not present in
the spectrum).
-in_pow POWS :input file of a one-sided power spectrum, such as
output by 3dLombScargle. Similar freq assumptions
as in '-in_amp ...'.
-band FBOT FTOP :lower and upper boundaries, respectively, of the low
frequency fluctuations (LFFs), which will be in the
inclusive interval [FBOT, FTOP], within the provided
input file's frequency range.
-prefix PREFIX :output file prefix; file names will be: PREFIX_ALFF*,
PREFIX_FALFF*, etc.
-mask MASK :volume mask of voxels to include for calculations; if
no mask is included, values are calculated for voxels
whose values are not identically zero across time.
-nifti :output files as *.nii.gz (default is BRIK/HEAD).
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ OUTPUT:
Currently, 6 volumes of common RSFC parameters, briefly:
PREFIX_ALFF+orig :amplitude of low freq fluctuations
(L1 sum).
PREFIX_MALFF+orig :ALFF divided by the mean value within
the input/estimated whole brain mask
(a.k.a. 'mean-scaled ALFF').
PREFIX_FALFF+orig :ALFF divided by sum of full amplitude
spectrum (-> 'fractional ALFF').
PREFIX_RSFA+orig :square-root of summed square of low freq
fluctuations (L2 sum).
PREFIX_MRSFA+orig :RSFA divided by the mean value within
the input/estimated whole brain mask
(a.k.a. 'mean-scaled RSFA').
PREFIX_FRSFA+orig  :RSFA divided by sum of full amplitude
                    spectrum (a.k.a. 'fractional RSFA').
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
+ EXAMPLE:
3dAmpToRSFC \
-in_amp SUBJ_01_amp.nii.gz \
-prefix SUBJ_01 \
-mask mask_WB.nii.gz \
-band 0.01 0.1 \
-nifti
___________________________________________________________________________
AFNI program: 3dANALYZEtoAFNI
** DON'T USE THIS PROGRAM! REALLY!
USE 3dcopy OR to3d INSTEAD.
IF YOU CHOOSE TO USE IT ANYWAY, PERHAPS
BECAUSE IT WORKS BETTER ON YOUR 12th
CENTURY PLANTAGENET ANALYZE FILES,
ADD THE OPTION -OK TO YOUR COMMAND
LINE.
Usage: 3dANALYZEtoAFNI [options] file1.hdr file2.hdr ...
This program constructs a 'volumes' stored AFNI dataset
from the ANALYZE-75 files file1.img file2.img ....
In this type of dataset, there is only a .HEAD file; the
.BRIK file is replaced by the collection of .img files.
- Other AFNI programs can read (but not write) this type
of dataset.
- The advantage of using this type of dataset vs. one created
with to3d is that you don't have to duplicate the image data
into a .BRIK file, thus saving disk space.
- The disadvantage of using 'volumes' for a multi-brick dataset
is that all the .img files must be kept with the .HEAD file
if you move the dataset around.
- The .img files must be in the same directory as the .HEAD file.
- Note that you put the .hdr files on the command line, but it is
the .img files that will be named in the .HEAD file.
- After this program is run, you must keep the .img files with
the output .HEAD file. AFNI doesn't need the .hdr files, but
other programs (e.g., FSL, SPM) will want them as well.
Options:
-prefix ppp = Save the dataset with the prefix name 'ppp'.
[default='a2a']
-view vvv = Save the dataset in the 'vvv' view, where
'vvv' is one of 'orig', 'acpc', or 'tlrc'.
[default='orig']
-TR ttt = For multi-volume datasets, create it as a
3D+time dataset with TR set to 'ttt'.
-fbuc = For multi-volume datasets, create it as a
functional bucket dataset.
-abuc = For multi-volume datasets, create it as an
anatomical bucket dataset.
** If more than one ANALYZE file is input, and none of the
above options is given, the default is as if '-TR 1s'
was used.
** For single volume datasets (1 ANALYZE file input), the
default is '-abuc'.
-geomparent g = Use the .HEAD file from dataset 'g' to set
the geometry of this dataset.
** If you don't use -geomparent, then the following options
can be used to specify the geometry of this dataset:
-orient code = Tells the orientation of the 3D volumes. The code
must be 3 letters, one each from the pairs {R,L}
{A,P} {I,S}. The first letter gives the orientation
of the x-axis, the second the orientation of the
y-axis, the third the z-axis:
R = right-to-left L = left-to-right
A = anterior-to-posterior P = posterior-to-anterior
I = inferior-to-superior S = superior-to-inferior
-zorigin dz   = Puts the center of the 1st slice at the
                given offset distance ('dz' in mm). This distance
is in the direction given by the corresponding
letter in the -orient code. For example,
-orient RAI -zorigin 30
would set the center of the first slice at
30 mm Inferior.
** If the above options are NOT used to specify the geometry
of the dataset, then the default is '-orient RAI', and the
z origin is set to center the slices about z=0.
It is likely that you will want to patch up the .HEAD file using
program 3drefit.
-- RWCox - June 2002.
** DON'T USE THIS PROGRAM! REALLY!
USE 3dcopy OR to3d INSTEAD.
IF YOU CHOOSE TO USE IT ANYWAY, PERHAPS
BECAUSE IT WORKS BETTER ON YOUR 12th
CENTURY PLANTAGENET ANALYZE FILES,
ADD THE OPTION -OK TO YOUR COMMAND
LINE.
-- KRH - April 2005.
AFNI program: 3dAnatNudge
*+ WARNING: This program (3dAnatNudge) is old, obsolete, and not maintained!
Usage: 3dAnatNudge [options]
Moves the anat dataset around to best overlap the epi dataset.
OPTIONS:
-anat aaa  = aaa is a 'scalped' (3dIntracranial) high-resolution
anatomical dataset [a mandatory option]
-epi eee = eee is an EPI dataset [a mandatory option]
The first [0] sub-brick from each dataset is used,
unless otherwise specified on the command line.
-prefix ppp = ppp is the prefix of the output dataset;
this dataset will differ from the input only
in its name and its xyz-axes origin
[default=don't write new dataset]
-step sss = set the step size to be sss times the voxel size
in the anat dataset [default=1.0]
-x nx = search plus and minus nx steps along the EPI
-y ny dataset's x-axis; similarly for ny and the
-z nz y-axis, and for nz and the z-axis
[default: nx=1 ny=5 nz=0]
-verb = print progress reports (this is a slow program)
NOTES
*Systematically moves the anat dataset around and finds the shift
that maximizes overlap between the anat dataset and the EPI
dataset. No rotations are done.
*Note that if you use -prefix, a new dataset will be created that
is a copy of the anat, except that its origin will be shifted
and it will have a different ID code than the anat. If you want
to use this new dataset as the anatomy parent for the EPI
datasets, you'll have to use
3drefit -apar ppp+orig eee1+orig eee2+orig ...
*If no new dataset is written (no -prefix option), then you
can use the 3drefit command emitted at the end to modify
the origin of the anat dataset. (Assuming you trust the
results - visual inspection is recommended!)
*The reason the default search grid is mostly along the EPI y-axis
is that axis is usually the phase-encoding direction, which is
most subject to displacement due to off-resonance effects.
*Note that the time this program takes will be proportional to
(2*nx+1)*(2*ny+1)*(2*nz+1), so using a very large search grid
will result in a very large usage of CPU time.
*Recommended usage:
+ Make a 1-brick function volume from a typical EPI dataset:
3dbucket -fbuc -prefix epi_fb epi+orig
+ Use 3dIntracranial to scalp a T1-weighted volume:
3dIntracranial -anat spgr+orig -prefix spgr_st
+ Use 3dAnatNudge to produce a shifted anat dataset:
3dAnatNudge -anat spgr_st+orig -epi epi_fb+orig -prefix spgr_nudge
+ Start AFNI and look at epi_fb overlaid in color on the
anat datasets spgr_st+orig and spgr_nudge+orig, to see if the
nudged dataset seems like a better fit.
+ Delete the nudged dataset spgr_nudge.
+ If the nudged dataset DOES look better, then apply the
3drefit command output by 3dAnatNudge to spgr+orig.
*Note that the x-, y-, and z-axes for the epi and anat datasets
may point in different directions (e.g., axial SPGR and
coronal EPI). The 3drefit command applies to the anat
dataset, NOT to the EPI dataset.
*If the program runs successfully, the only thing set to stdout
will be the 3drefit command string; all other messages go to
stderr. This can be useful if you want to capture the command
to a shell variable and then execute it, as in the following
csh fragment:
set cvar = `3dAnatNudge ...`
if( $cvar[1] == "3drefit" ) $cvar
The test on the first sub-string in cvar allows for the
possibility that the program fails, or that the optimal
nudge is zero.
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAnhist
Usage: 3dAnhist [options] dataset
Input dataset is a T1-weighted high-res of the brain (shorts only).
Output is a list of peaks in the histogram, to stdout, in the form
( datasetname #peaks peak1 peak2 ... )
In the C-shell, for example, you could do
set anhist = `3dAnhist -q -w1 dset+orig`
Then the number of peaks found is in the shell variable $anhist[2].
Options:
-q = be quiet (don't print progress reports)
-h = dump histogram data to Anhist.1D and plot to Anhist.ps
-F = DON'T fit histogram with stupid curves.
-w = apply a Winsorizing filter prior to histogram scan
(or -w7 to Winsorize 7 times, etc.)
-2 = Analyze top 2 peaks only, for overlap etc.
-label xxx = Use 'xxx' for a label on the Anhist.ps plot file
instead of the input dataset filename.
-fname fff = Use 'fff' for the filename instead of 'Anhist'.
If the '-2' option is used, AND if 2 peaks are detected, AND if
the -h option is also given, then stdout will be of the form
( datasetname 2 peak1 peak2 thresh CER CJV count1 count2 count1/count2)
where 2 = number of peaks
thresh = threshold between peak1 and peak2 for decision-making
CER = classification error rate of thresh
CJV = coefficient of joint variation
count1 = area under fitted PDF for peak1
count2 = area under fitted PDF for peak2
count1/count2 = ratio of the above quantities
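For instance, a hypothetical command that would produce this
extended output (assuming a short-valued T1 dataset anat+orig
exists) is
  3dAnhist -2 -h -w anat+orig
which also dumps the histogram to Anhist.1D and the plot to Anhist.ps.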
NOTA BENE
---------
* If the input is a T1-weighted MRI dataset (the usual case), then
peak 1 should be the gray matter (GM) peak and peak 2 the white
matter (WM) peak.
* For the definitions of CER and CJV, see the paper
Method for Bias Field Correction of Brain T1-Weighted Magnetic
Resonance Images Minimizing Segmentation Error
JD Gispert, S Reig, J Pascau, JJ Vaquero, P Garcia-Barreno,
and M Desco, Human Brain Mapping 22:133-144 (2004).
* Roughly speaking, CER is the ratio of the overlapping area of the
2 peak fitted PDFs to the total area of the fitted PDFs. CJV is
(sigma_GM+sigma_WM)/(mean_WM-mean_GM), and is a different, ad hoc,
measurement of how much the two PDFs overlap.
* The fitted PDFs are NOT Gaussians. They are of the form
f(x) = b((x-p)/w,a), where p=location of peak, w=width, 'a' is
a skewness parameter between -1 and 1; the basic distribution
is defined by b(x)=(1-x^2)^2*(1+a*x*abs(x)) for -1 < x < 1.
-- RWCox - November 2004
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3danisosmooth
Usage: 3danisosmooth [options] dataset
Smooths a dataset using an anisotropic smoothing technique.
The output dataset is preferentially smoothed to preserve edges.
Options :
-prefix pname = Use 'pname' for output dataset prefix name.
-iters nnn = compute nnn iterations (default=10)
-2D = smooth a slice at a time (default)
-3D = smooth through slices. Cannot be combined with the -2D option
-mask dset = use dset as mask to include/exclude voxels
-automask = automatically compute mask for dataset
Cannot be combined with -mask
-viewer = show central axial slice image every iteration.
Starts aiv program internally.
-nosmooth = do not do intermediate smoothing of gradients
-sigma1 n.nnn = assign Gaussian smoothing sigma before
gradient computation for calculation of structure tensor.
Default = 0.5
-sigma2 n.nnn = assign Gaussian smoothing sigma after
gradient matrix computation for calculation of structure tensor.
Default = 1.0
-deltat n.nnn = assign pseudotime step. Default = 0.25
-savetempdata = save temporary datasets each iteration.
Dataset prefixes are Gradient, Eigens, phi, Dtensor.
Ematrix, Flux and Gmatrix are also stored for the first sub-brick.
Where appropriate, the filename is suffixed by .ITER where
ITER is the iteration number. Existing datasets will get overwritten.
-save_temp_with_diff_measures: Like -savetempdata, but with
a dataset named Diff_measures.ITER containing FA, MD, Cl, Cp,
and Cs values.
-phiding = use Ding method for computing phi (default)
-phiexp = use exponential method for computing phi
-noneg = set negative voxels to 0
-setneg NEGVAL = set negative voxels to NEGVAL
-edgefraction n.nnn = adjust the fraction of the anisotropic
component to be added to the original image. Can vary between
0 and 1. Default = 0.5
-datum type = Coerce the output data to be stored as the given type
which may be byte, short or float. [default=float]
-matchorig = match datum type and clip min and max to match input data
-help = print this help screen
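Example (the dataset name epi_run1+orig is hypothetical):
  3danisosmooth -prefix epi_smoo -iters 5 -3D epi_run1+orig
This smooths epi_run1+orig through slices for 5 iterations and
writes the edge-preserving result with prefix 'epi_smoo'.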
References:
Z Ding, JC Gore, AW Anderson, Reduction of Noise in Diffusion
Tensor Images Using Anisotropic Smoothing, Mag. Res. Med.,
53:485-490, 2005
J Weickert, H Scharr, A Scheme for Coherence-Enhancing
Diffusion Filtering with Optimized Rotation Invariance,
CVGPR Group Technical Report, Department of Mathematics
and Computer Science, University of Mannheim, Germany, TR 4/2000.
J Weickert, H Scharr, A scheme for coherence-enhancing diffusion
filtering with optimized rotation invariance. J Visual
Communication and Image Representation, Special Issue on
Partial Differential Equations in Image Processing, Computer
Vision, and Computer Graphics, pages 103-118, 2002.
Gerig, G., Kubler, O., Kikinis, R., Jolesz, F., Nonlinear
anisotropic filtering of MRI data, IEEE Trans. Med. Imaging 11
(2), 221-232, 1992.
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dANOVA
++ 3dANOVA: AFNI version=AFNI_19.3.16 (Dec 12 2019) [64-bit]
++ Authored by: B. Douglas Ward
This program performs single factor Analysis of Variance (ANOVA)
on 3D datasets
---------------------------------------------------------------
Usage:
-----
3dANOVA
-levels r : r = number of factor levels
-dset 1 filename : data set for factor level 1
. . .
-dset 1 filename : data set for factor level 1
. . .
-dset r filename : data set for factor level r
. . .
-dset r filename : data set for factor level r
[-voxel num] : screen output for voxel # num
[-diskspace] : print out disk space required for
program execution
[-mask mset] : use sub-brick #0 of dataset 'mset'
to define which voxels to process
[-debug level] : request extra output
The following commands generate individual AFNI 2-sub-brick datasets:
(In each case, output is written to the file with the specified
prefix file name.)
[-ftr prefix] : F-statistic for treatment effect
[-mean i prefix] : estimate of factor level i mean
[-diff i j prefix] : difference between factor levels
[-contr c1...cr prefix] : contrast in factor levels
Modified ANOVA computation options: (December, 2005)
** For details, see https://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html
[-old_method] request to perform ANOVA using the previous
functionality (requires -OK, also)
[-OK] confirm you understand that contrasts that
do not sum to zero have inflated t-stats, and
contrasts that do sum to zero assume sphericity
(to be used with -old_method)
[-assume_sph] assume sphericity (zero-sum contrasts, only)
This allows use of the old_method for
computing contrasts which sum to zero (this
includes diffs, for instance). Any contrast
that does not sum to zero is invalid, and
cannot be used with this option (such as
ameans).
The following command generates one AFNI 'bucket' type dataset:
[-bucket prefix] : create one AFNI 'bucket' dataset whose
sub-bricks are obtained by
concatenating the above output files;
the output 'bucket' is written to file
with prefix file name
N.B.: For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used,
e.g., -dset 2 'fred+orig[3]'
Example of 3dANOVA:
------------------
Example is based on a study with one factor (independent variable)
called 'Pictures', with 3 levels:
(1) Faces, (2) Houses, and (3) Donuts
The ANOVA is being conducted on the data of subjects Fred and Ethel:
3dANOVA -levels 3 \
-dset 1 fred_Faces+tlrc \
-dset 1 ethel_Faces+tlrc \
\
-dset 2 fred_Houses+tlrc \
-dset 2 ethel_Houses+tlrc \
\
-dset 3 fred_Donuts+tlrc \
-dset 3 ethel_Donuts+tlrc \
\
-ftr Pictures \
-mean 1 Faces \
-mean 2 Houses \
-mean 3 Donuts \
-diff 1 2 FvsH \
-diff 2 3 HvsD \
-diff 1 3 FvsD \
-contr 1 1 -1 FHvsD \
-contr -1 1 1 FvsHD \
-contr 1 -1 1 FDvsH \
-bucket fred_n_ethel_ANOVA
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
---------------------------------------------------
Also see HowTo#5 - Group Analysis on the AFNI website:
https://afni.nimh.nih.gov/pub/dist/HOWTO/howto/ht05_group/html/index.shtml
-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers. This truncation might cause significant errors.
If you receive warnings that look like this:
*+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program. For convenience, you can do this
on the command line, as in
3dANOVA -DAFNI_FLOATIZE=YES ... other options ...
Also see the following links:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dANOVA2
++ 3dANOVA2: AFNI version=AFNI_19.3.16 (Dec 12 2019) [64-bit]
++ Authored by: B. Douglas Ward
This program performs a two-factor Analysis of Variance (ANOVA)
on 3D datasets
-----------------------------------------------------------
Usage:
3dANOVA2
-type k : type of ANOVA model to be used:
k=1 fixed effects model (A and B fixed)
k=2 random effects model (A and B random)
k=3 mixed effects model (A fixed, B random)
-alevels a : a = number of levels of factor A
-blevels b : b = number of levels of factor B
-dset 1 1 filename : data set for level 1 of factor A
and level 1 of factor B
. . . . . .
-dset i j filename : data set for level i of factor A
and level j of factor B
. . . . . .
-dset a b filename : data set for level a of factor A
and level b of factor B
[-voxel num] : screen output for voxel # num
[-diskspace] : print out disk space required for
program execution
[-mask mset] : use sub-brick #0 of dataset 'mset'
to define which voxels to process
The following commands generate individual AFNI 2-sub-brick datasets:
(In each case, output is written to the file with the specified
prefix file name.)
[-ftr prefix] : F-statistic for treatment effect
[-fa prefix] : F-statistic for factor A effect
[-fb prefix] : F-statistic for factor B effect
[-fab prefix] : F-statistic for interaction
[-amean i prefix] : estimate mean of factor A level i
[-bmean j prefix] : estimate mean of factor B level j
[-xmean i j prefix] : estimate mean of cell at level i of factor A,
level j of factor B
[-adiff i j prefix] : difference between levels i and j of factor A
[-bdiff i j prefix] : difference between levels i and j of factor B
[-xdiff i j k l prefix] : difference between cell mean at A=i,B=j
and cell mean at A=k,B=l
[-acontr c1 ... ca prefix] : contrast in factor A levels
[-bcontr c1 ... cb prefix] : contrast in factor B levels
[-xcontr c11 ... c1b c21 ... c2b ... ca1 ... cab prefix]
: contrast in cell means
The following command generates one AFNI 'bucket' type dataset:
[-bucket prefix] : create one AFNI 'bucket' dataset whose
sub-bricks are obtained by concatenating
the above output files; the output 'bucket'
is written to file with prefix file name
Modified ANOVA computation options: (December, 2005)
** These options apply to model type 3, only.
For details, see https://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html
[-old_method] : request to perform ANOVA using the previous
functionality (requires -OK, also)
[-OK] : confirm you understand that contrasts that
do not sum to zero have inflated t-stats, and
contrasts that do sum to zero assume sphericity
(to be used with -old_method)
[-assume_sph] : assume sphericity (zero-sum contrasts, only)
This allows use of the old_method for
computing contrasts which sum to zero (this
includes diffs, for instance). Any contrast
that does not sum to zero is invalid, and
cannot be used with this option (such as
ameans).
----------------------------------------------------------
Example of 3dANOVA2:
Example is based on a study with a 3 x 4 mixed factorial design:
Factor 1 - DONUTS has 3 levels:
(1) chocolate, (2) glazed, (3) sugar
Factor 2 - SUBJECTS, of which there are 4 in this analysis:
(1) fred, (2) ethel, (3) lucy, (4) ricky
3dANOVA2 -type 3 -alevels 3 -blevels 4 \
-dset 1 1 fred_choc+tlrc \
-dset 2 1 fred_glaz+tlrc \
-dset 3 1 fred_sugr+tlrc \
-dset 1 2 ethel_choc+tlrc \
-dset 2 2 ethel_glaz+tlrc \
-dset 3 2 ethel_sugr+tlrc \
-dset 1 3 lucy_choc+tlrc \
-dset 2 3 lucy_glaz+tlrc \
-dset 3 3 lucy_sugr+tlrc \
-dset 1 4 ricky_choc+tlrc \
-dset 2 4 ricky_glaz+tlrc \
-dset 3 4 ricky_sugr+tlrc \
-amean 1 Chocolate \
-amean 2 Glazed \
-amean 3 Sugar \
-adiff 1 2 CvsG \
-adiff 2 3 GvsS \
-adiff 1 3 CvsS \
-acontr 1 1 -2 CGvsS \
-acontr -2 1 1 CvsGS \
-acontr 1 -2 1 CSvsG \
-fa Donuts \
-bucket ANOVA_results
The -bucket option will place all of the 3dANOVA2 results (i.e., main
effect of DONUTS, means for each of the 3 levels of DONUTS, and
contrasts between the 3 levels of DONUTS) into one big dataset with
multiple sub-bricks called ANOVA_results+tlrc.
-----------------------------------------------------------
N.B.: For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used, e.g.:
-dset 2 4 'fred+orig[3]'
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
Also see HowTo #5: Group Analysis on the AFNI website:
https://afni.nimh.nih.gov/pub/dist/HOWTO/howto/ht05_group/html/index.shtml
-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers. This truncation might cause significant errors.
If you receive warnings that look like this:
*+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program. For convenience, you can do this
on the command line, as in
3dANOVA2 -DAFNI_FLOATIZE=YES ... other options ...
Also see the following links:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dANOVA3
This program performs three-factor ANOVA on 3D data sets.
Usage:
3dANOVA3
-type k type of ANOVA model to be used:
k = 1 A,B,C fixed; AxBxC
k = 2 A,B,C random; AxBxC
k = 3 A fixed; B,C random; AxBxC
k = 4 A,B fixed; C random; AxBxC
k = 5 A,B fixed; C random; AxB,BxC,C(A)
-alevels a a = number of levels of factor A
-blevels b b = number of levels of factor B
-clevels c c = number of levels of factor C
-dset 1 1 1 filename data set for level 1 of factor A
and level 1 of factor B
and level 1 of factor C
. . . . . .
-dset i j k filename data set for level i of factor A
and level j of factor B
and level k of factor C
. . . . . .
-dset a b c filename data set for level a of factor A
and level b of factor B
and level c of factor C
[-voxel num] screen output for voxel # num
[-diskspace] print out disk space required for
program execution
[-mask mset] use sub-brick #0 of dataset 'mset'
to define which voxels to process
The following commands generate individual AFNI 2 sub-brick datasets:
(In each case, output is written to the file with the specified
prefix file name.)
[-fa prefix] F-statistic for factor A effect
[-fb prefix] F-statistic for factor B effect
[-fc prefix] F-statistic for factor C effect
[-fab prefix] F-statistic for A*B interaction
[-fac prefix] F-statistic for A*C interaction
[-fbc prefix] F-statistic for B*C interaction
[-fabc prefix] F-statistic for A*B*C interaction
[-amean i prefix] estimate of factor A level i mean
[-bmean i prefix] estimate of factor B level i mean
[-cmean i prefix] estimate of factor C level i mean
[-xmean i j k prefix] estimate mean of cell at factor A level i,
factor B level j, factor C level k
[-adiff i j prefix] difference between factor A levels i and j
(with factors B and C collapsed)
[-bdiff i j prefix] difference between factor B levels i and j
(with factors A and C collapsed)
[-cdiff i j prefix] difference between factor C levels i and j
(with factors A and B collapsed)
[-xdiff i j k l m n prefix] difference between cell mean at A=i,B=j,
C=k, and cell mean at A=l,B=m,C=n
[-acontr c1...ca prefix] contrast in factor A levels
(with factors B and C collapsed)
[-bcontr c1...cb prefix] contrast in factor B levels
(with factors A and C collapsed)
[-ccontr c1...cc prefix] contrast in factor C levels
(with factors A and B collapsed)
[-aBcontr c1 ... ca : j prefix] 2nd order contrast in A, at fixed
B level j (collapsed across C)
[-Abcontr i : c1 ... cb prefix] 2nd order contrast in B, at fixed
A level i (collapsed across C)
[-aBdiff i_1 i_2 : j prefix] difference between levels i_1 and i_2 of
factor A, with factor B fixed at level j
[-Abdiff i : j_1 j_2 prefix] difference between levels j_1 and j_2 of
factor B, with factor A fixed at level i
[-abmean i j prefix] mean effect at factor A level i and
factor B level j
The following command generates one AFNI 'bucket' type dataset:
[-bucket prefix] create one AFNI 'bucket' dataset whose
sub-bricks are obtained by concatenating
the above output files; the output 'bucket'
is written to file with prefix file name
Modified ANOVA computation options: (December, 2005)
** These options apply to model types 4 and 5, only.
For details, see https://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html
[-old_method] request to perform ANOVA using the previous
functionality (requires -OK, also)
[-OK] confirm you understand that contrasts that
do not sum to zero have inflated t-stats, and
contrasts that do sum to zero assume sphericity
(to be used with -old_method)
[-assume_sph] assume sphericity (zero-sum contrasts, only)
This allows use of the old_method for
computing contrasts which sum to zero (this
includes diffs, for instance). Any contrast
that does not sum to zero is invalid, and
cannot be used with this option (such as
ameans).
-----------------------------------------------------------------
example: "classic" houses/faces/donuts for 4 subjects (2 genders)
(level sets are gender (M/W), image (H/F/D), and subject)
Note: factor C is really subject within gender (since it is
nested). There are 4 subjects in this example, and 2
subjects per gender. So clevels is 2.
3dANOVA3 -type 5 \
-alevels 2 \
-blevels 3 \
-clevels 2 \
-dset 1 1 1 man1_houses+tlrc \
-dset 1 2 1 man1_faces+tlrc \
-dset 1 3 1 man1_donuts+tlrc \
-dset 1 1 2 man2_houses+tlrc \
-dset 1 2 2 man2_faces+tlrc \
-dset 1 3 2 man2_donuts+tlrc \
-dset 2 1 1 woman1_houses+tlrc \
-dset 2 2 1 woman1_faces+tlrc \
-dset 2 3 1 woman1_donuts+tlrc \
-dset 2 1 2 woman2_houses+tlrc \
-dset 2 2 2 woman2_faces+tlrc \
-dset 2 3 2 woman2_donuts+tlrc \
-adiff 1 2 MvsW \
-bdiff 2 3 FvsD \
-bcontr -0.5 1 -0.5 FvsHD \
-aBcontr 1 -1 : 1 MHvsWH \
-aBdiff 1 2 : 1 same_as_MHvsWH \
-Abcontr 2 : 0 1 -1 WFvsWD \
-Abdiff 2 : 2 3 same_as_WFvsWD \
-Abcontr 2 : 1 7 -4.2 goofy_example \
-bucket donut_anova
N.B.: For this program, the user must specify 1 and only 1 sub-brick
with each -dset command. That is, if an input dataset contains
more than 1 sub-brick, a sub-brick selector must be used, e.g.:
-dset 2 4 5 'fred+orig[3]'
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers. This truncation might cause significant errors.
If you receive warnings that look like this:
*+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program. For convenience, you can do this
on the command line, as in
3dANOVA3 -DAFNI_FLOATIZE=YES ... other options ...
Also see the following links:
https://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
https://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAttribute
Usage: 3dAttribute [options] aname dset
Prints (to stdout) the value of the attribute 'aname' from
the header of dataset 'dset'. If the attribute doesn't exist,
prints nothing and sets the exit status to 1.
Options:
-name = Include attribute name in printout
-all = Print all attributes [don't put aname on command line]
Also implies '-name'. Attributes print in whatever order
they are in the .HEAD file, one per line. You may want
to do '3dAttribute -all elvis+orig | sort' to get them
in alphabetical order.
-center = Center of volume in RAI coordinates.
Note that center is not itself an attribute in the
.HEAD file. It is calculated from other attributes.
Special options for string attributes:
-ssep SSEP Use string SSEP as a separator between strings for
multiple sub-bricks. The default is '~', which is what
is used internally in AFNI's .HEAD file. For tcsh,
I recommend ' ' which makes parsing easy, assuming each
individual string contains no spaces to begin with.
Try -ssep 'NUM'
-sprep SPREP Use string SPREP to replace blank space in string
attributes.
-quote Use single quote around each string.
Examples:
3dAttribute -quote -ssep ' ' BRICK_LABS SomeStatDset+tlrc.BRIK
3dAttribute -quote -ssep 'NUM' -sprep '+' BRICK_LABS SomeStatDset+tlrc.BRIK
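Since a missing attribute sets the exit status to 1, a script can
test for an attribute's presence. A small csh sketch (the dataset
name is hypothetical; TAXIS_NUMS appears only in 3D+time headers):
  3dAttribute TAXIS_NUMS dset+orig > /dev/null
  if ( $status ) echo "dset+orig has no time axis"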
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAutobox
++ 3dAutobox: AFNI version=AFNI_19.3.16 (Dec 12 2019) [64-bit]
Usage: 3dAutobox [options] DATASET
Computes size of a box that fits around the volume.
Also can be used to crop the volume to that box.
OPTIONS:
--------
-prefix PREFIX = Crop the input dataset to the size of the box, and
write an output dataset with PREFIX for the name.
* If -prefix is not used, no new volume is written out,
just the (x,y,z) extents of the voxels to be kept.
-input DATASET = An alternate way to specify the input dataset.
The default method is to pass DATASET as
the last parameter on the command line.
-noclust = Don't do any clustering to find box. Any non-zero
voxel will be preserved in the cropped volume.
The default method uses some clustering to find the
cropping box, and will clip off small isolated blobs.
-extent = Write to standard output the spatial extent of the box
-extent_ijk = Write out the 6 auto bbox ijk slice numbers to
screen:
imin imax jmin jmax kmin kmax
Note that resampling would affect the ijk vals (but
not necessarily the xyz ones).
Also note that these values are calculated before
any '-npad ...' option is applied, so padding is ignored here.
-extent_ijk_to_file FF = Write out the 6 auto bbox ijk slice numbers to
a simple-formatted text file FF (single row file):
imin imax jmin jmax kmin kmax
(same notes as above apply).
-extent_ijk_midslice = Write out the 3 ijk midslices of the autobox to
the screen:
imid jmid kmid
These are obtained via: (imin + imax)/2, etc.
-extent_xyz_midslice = Write out the 3 xyz midslices of the autobox to
the screen:
xmid ymid zmid
These are obtained via: (xmin + xmax)/2, etc.
These have the same meaning as for '-extent'.
-npad NNN = Number of extra voxels to pad on each side of box,
since some troublesome people (that's you, LRF) want
this feature for no apparent reason.
* With this option, it is possible to get a dataset that
is actually bigger than the input.
* You can input a negative value for NNN, which will
crop the dataset even more than the automatic method.
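Example: crop a dataset to its autobox plus 2 voxels of padding on
each side (the dataset name anat+orig is hypothetical):
  3dAutobox -input anat+orig -npad 2 -prefix anat_crop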
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAutomask
Usage: 3dAutomask [options] dataset
Input dataset is EPI 3D+time, or a skull-stripped anatomical.
Output dataset is a brain-only mask dataset.
This program by itself does NOT do 'skull-stripping'. Use
program 3dSkullStrip for that purpose!
Method:
+ Uses 3dClipLevel algorithm to find clipping level.
+ Keeps only the largest connected component of the
supra-threshold voxels, after an erosion/dilation step.
+ Writes result as a 'fim' type of functional dataset,
which will be 1 inside the mask and 0 outside the mask.
Options:
--------
-prefix ppp = Write mask into dataset with prefix 'ppp'.
[Default == 'automask']
-apply_prefix ppp = Apply mask to input dataset and save
masked dataset. If an apply_prefix is given
and not the usual prefix, the only output
will be the applied dataset
-clfrac cc = Set the 'clip level fraction' to 'cc', which
must be a number between 0.1 and 0.9.
A small 'cc' means to make the initial threshold
for clipping (a la 3dClipLevel) smaller, which
will tend to make the mask larger. [default=0.5]
-nograd = The program uses a 'gradual' clip level by default.
To use a fixed clip level, use '-nograd'.
[Change to gradual clip level made 24 Oct 2006.]
-peels pp = Peel the mask 'pp' times, then unpeel. Designed
to clip off protuberances less than 2*pp voxels
thick. [Default == 1]
-nbhrs nn = Define the number of neighbors needed for a voxel
NOT to be peeled. The 18 nearest neighbors in
the 3D lattice are used, so 'nn' should be between
9 and 18. [Default == 17]
-q = Don't write progress messages (i.e., be quiet).
-eclip = After creating the mask, remove exterior
voxels below the clip threshold.
-dilate nd = Dilate the mask outwards 'nd' times.
-erode ne = Erode the mask inwards 'ne' times.
-SI hh = After creating the mask, find the most superior
voxel, then zero out everything more than 'hh'
millimeters inferior to that. hh=130 seems to
be decent (i.e., for Homo sapiens brains).
-depth DEP = Produce a dataset (DEP) that shows how many peel
operations it takes to get to a voxel in the mask.
The higher the number, the deeper a voxel is located
in the mask.
None of -peels, -dilate, or -erode affect this option.
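A typical invocation (the dataset name epi_run1+orig is
hypothetical), loosening the clip fraction and dilating the
result once, might be:
  3dAutomask -clfrac 0.4 -dilate 1 -prefix epi_mask epi_run1+orig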
--------------------------------------------------------------------
How to make an edge-of-brain mask from an anatomical volume:
* 3dSkullStrip to create a brain-only dataset; say, Astrip+orig
* 3dAutomask -prefix Amask Astrip+orig
* Create a mask of edge-only voxels via
3dcalc -a Amask+orig -b a+i -c a-i -d a+j -e a-j -f a+k -g a-k \
-expr 'ispositive(a)*amongst(0,b,c,d,e,f,g)' -prefix Aedge
which will be 1 at all voxels in the brain mask that have a
nearest neighbor that is NOT in the brain mask.
* cf. '3dcalc -help' DIFFERENTIAL SUBSCRIPTS for information
on the 'a+i' et cetera inputs used above.
* In regions where the brain mask is 'stair-stepping', then the
voxels buried inside the corner of the steps probably won't
show up in this edge mask:
...00000000...
...aaa00000...
...bbbaa000...
...bbbbbaa0...
Only the 'a' voxels are in this edge mask, and the 'b' voxels
down in the corners won't show up, because they only touch a
0 voxel on a corner, not face-on. Depending on your use for
the edge mask, this effect may or may not be a problem.
--------------------------------------------------------------------
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dAutoTcorrelate
Usage: 3dAutoTcorrelate [options] dset
Computes the correlation coefficient between the time series of each
pair of voxels in the input dataset, and stores the output into a
new anatomical bucket dataset [scaled to shorts to save memory space].
*** Also see program 3dTcorrMap ***
Options:
-pearson = Correlation is the normal Pearson (product moment)
correlation coefficient [default].
-eta2 = Output is eta^2 measure from Cohen et al., NeuroImage, 2008:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2705206/
http://dx.doi.org/10.1016/j.neuroimage.2008.01.066
** '-eta2' is intended to be used to measure the similarity
between 2 correlation maps; therefore, this option is
to be used in a second stage analysis, where the input
dataset is the output of running 3dAutoTcorrelate with
the '-pearson' option -- the voxel 'time series' from
that first stage run is the correlation map of that
voxel with all other voxels.
** '-polort -1' is recommended with this option!
** Odds are you do not want to use this option if the dataset
on which eta^2 is to be computed was generated with
options -mask_only_targets or -mask_source.
In this program, the eta^2 is computed between pseudo-
timeseries (the 4th dimension of the dataset).
If you want to compute eta^2 between sub-bricks then use
3ddot -eta2 instead.
-spearman AND -quadrant are disabled at this time :-(
-polort m = Remove polynomial trend of order 'm', for m=-1..3.
[default is m=1; removal is by least squares].
Using m=-1 means no detrending; this is only useful
for data/information that has been pre-processed.
-autoclip = Clip off low-intensity regions in the dataset,
-automask = so that the correlation is only computed between
high-intensity (presumably brain) voxels. The
mask is determined the same way that 3dAutomask works.
-mask mmm = Mask of both 'source' and 'target' voxels.
** Restricts computations to those in the mask. Output
volumes are restricted to masked voxels. Also, only
masked voxels will have non-zero output.
** A dataset with 1000 voxels would lead to output of
1000 sub-bricks. With a '-mask' of 50 voxels, the
output dataset would have 50 sub-bricks, where the 950
unmasked voxels would be all zero in all 50 sub-bricks
(unless option '-mask_only_targets' is also used).
** The mask is encoded in the output dataset header in the
attribute named 'AFNI_AUTOTCORR_MASK' (cf. 3dMaskToASCII).
-mask_only_targets = Provide output for all voxels.
** Used with '-mask': every voxel is correlated with each
of the mask voxels. In the example above, there would
be 50 output sub-bricks; the n-th output sub-brick
would contain the correlations of the n-th voxel in
the mask with ALL 1000 voxels in the dataset (rather
than with just the 50 voxels in the mask).
-mask_source sss = Provide output for voxels only in mask sss.
** For each seed in mask mmm, compute correlations only with
non-zero voxels in sss. If you have 250 non-zero voxels
in sss, then the output will still have 50 sub-bricks, but
each n-th sub-brick will have non-zero values at the 250
non-zero voxels in sss.
Do not use this option along with -mask_only_targets.
-prefix p = Save output into dataset with prefix 'p'
[default prefix is 'ATcorr'].
-out1D FILE.1D = Save output in a text file formatted thusly:
Row 1 contains the 1D indices of non-zero voxels in the
mask from option -mask.
Column 1 contains the 1D indices of non-zero voxels in the
mask from option -mask_source.
The rest of the matrix contains the correlation/eta2
values. Each column k corresponds to sub-brick k in
the output volume p.
To see 1D indices in AFNI, right click on the top left
corner of the AFNI controller - where coordinates are
shown - and choose voxel indices.
A 1D index (ijk) is computed from the 3D (i,j,k) indices:
ijk = i + j*Ni + k*Ni*Nj , with Ni and Nj being the
number of voxels in the slice orientation and given by:
3dinfo -ni -nj YOUR_VOLUME_HERE
For example, in a 64x64x30 volume, voxel (i,j,k) = (10,20,5)
has ijk = 10 + 20*64 + 5*64*64 = 21770.
This option can only be used in conjunction with
options -mask and -mask_source. Otherwise it makes little
sense to write a potentially enormous text file.
-time = Mark output as a 3D+time dataset instead of an anat bucket.
-mmap = Write .BRIK results to disk directly using Unix mmap().
This trick can speed the program up when the amount
of memory required to hold the output is very large.
** In many cases, the amount of time needed to write
the results to disk is longer than the CPU time.
This option can shorten the disk write time.
** If the program crashes, you'll have to manually
remove the .BRIK file, which will have been created
before the loop over voxels and written into during
that loop, rather than being written all at once
at the end of the analysis, as is usually the case.
** If the amount of memory needed is bigger than the
RAM on your system, this program will be very slow
with or without '-mmap'.
** This option won't work with NIfTI-1 (.nii) output!
Example: correlate every voxel in mask_in+tlrc with only those voxels in
mask_out+tlrc (the rest of each volume is zero, for speed).
Assume detrending was already done along with other pre-processing.
The output will have one volume per masked voxel in mask_in+tlrc.
Volumes will be labeled by the ijk index triples of mask_in+tlrc.
3dAutoTcorrelate -mask_source mask_out+tlrc -mask mask_in+tlrc \
-polort -1 -prefix test_corr clean_epi+tlrc
Notes:
* The output dataset is anatomical bucket type of shorts
(unless '-time' is used).
* Values are scaled so that a correlation (or eta-squared)
of 1 corresponds to a value of 10000.
* The output file might be gigantic and you might run out
of memory running this program. Use at your own risk!
++ If you get an error message like
*** malloc error for dataset sub-brick
this means that the program ran out of memory when making
the output dataset.
++ If this happens, you can try to use the '-mmap' option,
and if you are lucky, the program may actually run.
* The program prints out an estimate of its memory usage
when it starts. It also prints out a progress 'meter'
to keep you pacified.
* This is a quick hack for Peter Bandettini. Now pay up.
* OpenMP-ized for Hang Joon Jo. Where's my baem-sul?
-- RWCox - 31 Jan 2002 and 16 Jul 2010
=========================================================================
* This binary version of 3dAutoTcorrelate is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work with 'cluster' setups).
* For implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 2.
* The maximum number of CPUs that will be used is now set to .... 2.
=========================================================================
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3daxialize
*+ WARNING: This program (3daxialize) is old, not maintained, and probably useless!
Usage: 3daxialize [options] dataset
Purpose: Read in a dataset and write it out as a new dataset
with the data brick oriented as axial slices.
The input dataset must have a .BRIK file.
One application is to create a dataset that can
be used with the AFNI volume rendering plugin.
Options:
-prefix ppp = Use 'ppp' as the prefix for the new dataset.
[default = 'axialize']
-verb = Print out a progress report.
The following options determine the order/orientation
in which the slices will be written to the dataset:
-sagittal = Do sagittal slice order [-orient ASL]
-coronal = Do coronal slice order [-orient RSA]
-axial = Do axial slice order [-orient RAI]
This is the default AFNI axial order, and
is the one currently required by the
volume rendering plugin; this is also
the default orientation output by this
program (hence the program's name).
-orient code = Orientation code for output.
The code must be 3 letters, one each from the
pairs {R,L} {A,P} {I,S}. The first letter gives
the orientation of the x-axis, the second the
orientation of the y-axis, the third the z-axis:
R = Right-to-left L = Left-to-right
A = Anterior-to-posterior P = Posterior-to-anterior
I = Inferior-to-superior S = Superior-to-inferior
If you give an illegal code (e.g., 'LPR'), then
the program will print a message and stop.
N.B.: 'Neurological order' is -orient LPI
-frugal = Write out data as it is rotated, a sub-brick at
a time. This saves a little memory and was the
previous behavior.
Note that the frugal option is not available with NIFTI
datasets.
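Example: rewrite a dataset in the default axial RAI order (the
dataset name anat+orig is hypothetical):
  3daxialize -verb -prefix anat_axial anat+orig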
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dBandpass
--------------------------------------------------------------------------
** NOTA BENE: For the purpose of preparing resting-state FMRI datasets **
** for analysis (e.g., with 3dGroupInCorr), this program is now mostly **
** superseded by the afni_proc.py script. See the 'afni_proc.py -help' **
** section 'Resting state analysis (modern)' to get our current rs-FMRI **
** pre-processing recommended sequence of steps. -- RW Cox, et alii. **
--------------------------------------------------------------------------
** If you insist on doing your own bandpassing, I now recommend using **
** program 3dTproject instead of this program. 3dTproject also can do **
** censoring and other nuisance regression at the same time -- RW Cox. **
--------------------------------------------------------------------------
Usage: 3dBandpass [options] fbot ftop dataset
* One function of this program is to prepare datasets for input
to 3dSetupGroupInCorr. Other uses are left to your imagination.
* 'dataset' is a 3D+time sequence of volumes
++ This must be a single imaging run -- that is, no discontinuities
in time from 3dTcat-ing multiple datasets together.
* fbot = lowest frequency in the passband, in Hz
++ fbot can be 0 if you want to do a lowpass filter only;
HOWEVER, the mean and Nyquist freq are always removed.
* ftop = highest frequency in the passband (must be > fbot)
++ if ftop > Nyquist freq, then it's a highpass filter only.
* Set fbot=0 and ftop=99999 to do an 'allpass' filter.
++ Except for removal of the 0 and Nyquist frequencies, that is.
* You cannot construct a 'notch' filter with this program!
++ You could use 3dBandpass followed by 3dcalc to get the same effect.
++ If you understand what you are doing, that is.
++ Of course, that is the AFNI way -- if you don't want to
understand what you are doing, use Some other PrograM, and
you can still get Fine StatisticaL maps.
* 3dBandpass will fail if fbot and ftop are too close for comfort.
++ Which means closer than one frequency grid step df,
where df = 1 / (nfft * dt) [of course]
* The actual FFT length used will be printed, and may be larger
than the input time series length for the sake of efficiency.
++ The program will use a power-of-2, possibly multiplied by
a power of 3 and/or 5 (up to and including the 3rd power of
each of these: 3, 9, 27, and 5, 25, 125).
* Note that the results of combining 3dDetrend and 3dBandpass will
depend on the order in which you run these programs. That's why
3dBandpass has the '-ort' and '-dsort' options, so that the
time series filtering can be done properly, in one place.
* The output dataset is stored in float format.
* The order of processing steps is the following (most are optional):
(0) Check time series for initial transients [does not alter data]
(1) Despiking of each time series
(2) Removal of a constant+linear+quadratic trend in each time series
(3) Bandpass of data time series
(4) Bandpass of -ort time series, then detrending of data
with respect to the -ort time series
(5) Bandpass and de-orting of the -dsort dataset,
then detrending of the data with respect to -dsort
(6) Blurring inside the mask [might be slow]
(7) Local PV calculation [WILL be slow!]
(8) L2 normalization [will be fast.]
--------
OPTIONS:
--------
-despike = Despike each time series before other processing.
++ Hopefully, you don't actually need to do this,
which is why it is optional.
-ort f.1D = Also orthogonalize input to columns in f.1D
++ Multiple '-ort' options are allowed.
-dsort fset = Orthogonalize each voxel to the corresponding
voxel time series in dataset 'fset', which must
have the same spatial and temporal grid structure
as the main input dataset.
++ At present, only one '-dsort' option is allowed.
-nodetrend = Skip the quadratic detrending of the input that
occurs before the FFT-based bandpassing.
++ You would only want to do this if the dataset
had been detrended already in some other program.
-dt dd = set time step to 'dd' sec [default=from dataset header]
-nfft N = set the FFT length to 'N' [must be a legal value]
-norm = Make all output time series have L2 norm = 1
++ i.e., sum of squares = 1
-mask mset = Mask dataset
-automask = Create a mask from the input dataset
-blur fff = Blur (inside the mask only) with a filter
width (FWHM) of 'fff' millimeters.
-localPV rrr = Replace each vector by the local Principal Vector
(AKA first singular vector) from a neighborhood
of radius 'rrr' millimeters.
++ Note that the PV time series is L2 normalized.
++ This option is mostly for Bob Cox to have fun with.
-input dataset = Alternative way to specify input dataset.
-band fbot ftop = Alternative way to specify passband frequencies.
-prefix ppp = Set prefix name of output dataset.
-quiet = Turn off the fun and informative messages. (Why?)
-notrans = Don't check for initial positive transients in the data:
*OR* ++ The test is a little slow, so skipping it is OK,
-nosat if you KNOW the data time series are transient-free.
++ Or set AFNI_SKIP_SATCHECK to YES.
++ Initial transients won't be handled well by the
bandpassing algorithm, and in addition may seriously
contaminate any further processing, such as inter-voxel
correlations via InstaCorr.
++ No other tests are made [yet] for non-stationary behavior
in the time series data.
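EXAMPLE (a sketch only; the dataset and mask names are hypothetical):
  3dBandpass -mask mask+orig -prefix rest_bp 0.01 0.10 rest_run1+orig
This passes frequencies between 0.01 and 0.10 Hz inside the mask
(and, per the notes above, the mean and Nyquist frequency are
always removed).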
=========================================================================
* This binary version of 3dBandpass is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work with 'cluster' setups).
* For implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 2.
* The maximum number of CPUs that will be used is now set to .... 2.
* At present, the only part of 3dBandpass that is parallelized is the
'-blur' option, which processes each sub-brick independently.
=========================================================================
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dBlurInMask
Usage: ~1~
3dBlurInMask [options]
Blurs a dataset spatially inside a mask. That's all. Experimental.
OPTIONS ~1~
-------
-input ddd = This required 'option' specifies the dataset
that will be smoothed and output.
-FWHM f = Add 'f' amount of smoothness to the dataset (in mm).
**N.B.: This is also a required 'option'.
-FWHMdset d = Read in dataset 'd' and add the amount of smoothness
given at each voxel -- spatially variable blurring.
** EXPERIMENTAL EXPERIMENTAL EXPERIMENTAL **
-mask mmm = Mask dataset, if desired. Blurring will
occur only within the mask. Voxels NOT in
the mask will be set to zero in the output.
-Mmask mmm = Multi-mask dataset -- each distinct nonzero
value in dataset 'mmm' will be treated as
a separate mask for blurring purposes.
**N.B.: 'mmm' must be byte- or short-valued!
-automask = Create an automask from the input dataset.
**N.B.: only 1 masking option can be used!
-preserve = Normally, voxels not in the mask will be
set to zero in the output. If you want the
original values in the dataset to be preserved
in the output, use this option.
-prefix ppp = Prefix for output dataset will be 'ppp'.
**N.B.: Output dataset is always in float format.
-quiet = Don't be verbose with the progress reports.
-float = Save dataset as floats, no matter what the
input data type is.
**N.B.: If the input dataset is unscaled shorts, then
the default is to save the output in short
format as well. In EVERY other case, the
program saves the output as floats. Thus,
the ONLY purpose of the '-float' option is to
force an all-shorts input dataset to be saved
as all-floats after blurring.
NOTES ~1~
-----
* If you don't provide a mask, then all voxels will be included
in the blurring. (But then why are you using this program?)
* Note that voxels inside the mask that are not contiguous with
any other voxels inside the mask will not be modified at all!
* Works iteratively, similarly to 3dBlurToFWHM, but without
the extensive overhead of monitoring the smoothness.
* But this program will be faster than 3dBlurToFWHM, and probably
slower than 3dmerge.
* Since the blurring is done iteratively, rather than all-at-once as
in 3dmerge, the results will be slightly different than 3dmerge's,
even if no mask is used here (3dmerge, of course, doesn't take a mask).
* If the original FWHM of the dataset was 'S' and you input a value
'F' with the '-FWHM' option, then the output dataset's smoothness
will be about sqrt(S*S+F*F). The number of iterations will be
about (F*F)/(d*d), where d = grid spacing; this means that a large
value of F might take a lot of CPU time!
* The spatial smoothness of a 3D+time dataset can be estimated with a
command similar to the following:
3dFWHMx -detrend -mask mmm+orig -input ddd+orig
* The minimum number of voxels in the mask is 9.
* Isolated voxels will be removed from the mask!
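A simple example (the dataset and mask names are hypothetical):
  3dBlurInMask -input epi_run1+orig -FWHM 6 \
               -mask brainmask+orig -prefix epi_blur6
This adds about 6 mm of smoothness, restricted to the brain mask.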
=========================================================================
* This binary version of 3dBlurInMask is compiled using OpenMP, a semi-
automatic parallelizer software toolkit, which splits the work across
multiple CPUs/cores on the same shared memory computer.
* OpenMP is NOT like MPI -- it does not work with CPUs connected only
by a network (e.g., OpenMP doesn't work with 'cluster' setups).
* For implementation and compilation details, please see
https://afni.nimh.nih.gov/pub/dist/doc/misc/OpenMP.html
* The number of CPU threads used will default to the maximum number on
your system. You can control this value by setting environment variable
OMP_NUM_THREADS to some smaller value (including 1).
* Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
using all CPUs available.
++ However, on some systems, it seems to be necessary to set variable
OMP_NUM_THREADS explicitly, or you only get one CPU.
++ On other systems with many CPUS, you probably want to limit the CPU
count, since using more than (say) 16 threads is probably useless.
* You must set OMP_NUM_THREADS in the shell BEFORE running the program,
since OpenMP queries this variable BEFORE the program actually starts.
++ You can't usefully set this variable in your ~/.afnirc file or on the
command line with the '-D' option.
* How many threads are useful? That varies with the program, and how well
it was coded. You'll have to experiment on your own systems!
* The number of CPUs on this particular computer system is ...... 2.
* The maximum number of CPUs that will be used is now set to .... 2.
=========================================================================
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dBlurToFWHM
Usage: 3dBlurToFWHM [options]
Blurs a 'master' dataset until it reaches a specified FWHM
smoothness (approximately). The same blurring schedule is
applied to the input dataset to produce the output. The goal
is to make the output dataset have the given smoothness, no
matter what smoothness it had on input (however, the program
cannot 'unsmooth' a dataset!). See below for the METHOD used.
OPTIONS
-------
-input ddd = This required 'option' specifies the dataset
that will be smoothed and output.
-blurmaster bbb = This option specifies the dataset whose
smoothness controls the process.
**N.B.: If not given, the input dataset is used.
**N.B.: This should be one continuous run.
Do not input catenated runs!
-prefix ppp = Prefix for output dataset will be 'ppp'.
**N.B.: Output dataset is always in float format.
-mask mmm = Mask dataset, if desired. Blurring will
occur only within the mask. Voxels NOT in
the mask will be set to zero in the output.
-automask = Create an automask from the input dataset.
**N.B.: Not useful if the input dataset has been
detrended or otherwise regressed before input!
-FWHM f = Blur until the 3D FWHM is 'f'.
-FWHMxy f = Blur until the 2D (x,y)-plane FWHM is 'f'.
No blurring is done along the z-axis.
**N.B.: Note that you can't REDUCE the smoothness
of a dataset.
**N.B.: Here, 'x', 'y', and 'z' refer to the
grid/slice order as stored in the dataset,
not DICOM ordered coordinates!
**N.B.: With -FWHMxy, smoothing is done only in the
dataset xy-plane. With -FWHM, smoothing
is done in 3D.
**N.B.: The actual goal is reached when
-FWHM : cbrt(FWHMx*FWHMy*FWHMz) >= f
-FWHMxy: sqrt(FWHMx*FWHMy) >= f
That is, when the area or volume of a
'resolution element' goes past a threshold.
-quiet = Shut up the verbose progress reports.
**N.B.: This should be the first option, to stifle
any verbosity from the option processing code.
FILE RECOMMENDATIONS for -blurmaster:
For FMRI statistical purposes, you DO NOT want the FWHM to reflect
the spatial structure of the underlying anatomy. Rather, you want
the FWHM to reflect the spatial structure of the noise. This means
that the -blurmaster dataset should not have anatomical structure. One
good form of input is the output of '3dDeconvolve -errts', which is
the residuals left over after the GLM fitted signal model is subtracted
out from each voxel's time series. You can also use the output of
'3dREMLfit -Rerrts' or '3dREMLfit -Rwherr' for this purpose.
You CAN give a multi-brick EPI dataset as the -blurmaster dataset; the
dataset will be detrended in time (like the -detrend option in 3dFWHMx)
which will tend to remove the spatial structure. This makes it
practicable to make the input and blurmaster datasets be the same,
without having to create a detrended or residual dataset beforehand.
Considering the accuracy of blurring estimates, this is probably good
enough for government work [that is an insider's joke :-].
N.B.: Do not use catenated runs as blurmasters. There should
be no discontinuities in the time axis of blurmaster, which would
make the simple regression detrending do peculiar things.
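Following that recommendation, a sketch of a typical command (the
dataset names here are hypothetical):
  3dBlurToFWHM -input epi_run1+orig -blurmaster errts+orig \
               -mask brainmask+orig -FWHM 8 -prefix epi_blur8
where errts+orig would be the residual dataset from
'3dDeconvolve -errts'.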
ALSO SEE:
* 3dFWHMx, which estimates smoothness globally
* 3dLocalstat -stat FWHM, which estimates smoothness locally
* This paper, which discusses the need for a fixed level of smoothness
when combining FMRI datasets from different scanner platforms:
Friedman L, Glover GH, Krenz D, Magnotta V; The FIRST BIRN.
Reducing inter-scanner variability of activation in a multicenter
fMRI study: role of smoothness equalization.
Neuroimage. 2006 Oct 1;32(4):1656-68.
METHOD:
The blurring is done by a conservative finite difference approximation
to the diffusion equation:
du/dt = d/dx[ D_x(x,y,z) du/dx ] + d/dy[ D_y(x,y,z) du/dy ]
+ d/dz[ D_z(x,y,z) du/dz ]
= div[ D(x,y,z) grad[u(x,y,z)] ]
where diffusion tensor D() is diagonal, Euler time-stepping is used, and
with Neumann (reflecting) boundary conditions at the edges of the mask
(which ensures that voxel data inside and outside the mask don't mix).
* At each pseudo-time step, the FWHM is estimated globally (like '3dFWHMx')
and locally (like '3dLocalstat -stat FWHM'). Voxels where the local FWHM
goes past the goal will not be smoothed any more (D gets set to zero).
* When the global smoothness estimate gets close to the goal, the blurring
rate (pseudo-time step) will be reduced, to avoid over-smoothing.
* When an individual direction's smoothness (e.g., FWHMz) goes past the goal,
all smoothing in that direction stops, but the other directions continue
to be smoothed until the overall resolution element goal is achieved.
* When the global FWHM estimate reaches the goal, the program is done.
It will also stop if progress stalls for some reason, or if the maximum
iteration count is reached (infinite loops being unpopular).
* The output dataset will NOT have exactly the smoothness you ask for, but
it will be close (fondly we do hope). In our Imperial experiments, the
results (measured via 3dFWHMx) are within 10% of the goal (usually better).
* 2D blurring via -FWHMxy may increase the smoothness in the z-direction
reported by 3dFWHMx, even though there is no inter-slice processing.
At this moment, I'm not sure why. It may be an estimation artifact due
to increased correlation in the xy-plane that biases the variance estimates
used to calculate FWHMz.
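As a concrete illustration of this scheme, here is a minimal 1D sketch
in Python/numpy of one conservative Euler step with reflecting (Neumann)
boundaries. It assumes the diffusivity D has already been zeroed where
the local FWHM goal was reached; names and details are illustrative,
not from the AFNI source code:
   import numpy as np
   def diffusion_step(u, D, dt):
       # One Euler step of du/dt = d/dx[ D(x) du/dx ] on a 1D grid
       Dface = 0.5 * (D[:-1] + D[1:])                # diffusivity at cell faces
       flux  = Dface * (u[1:] - u[:-1])              # D * du/dx at interior faces
       flux  = np.concatenate(([0.0], flux, [0.0]))  # zero flux at edges (Neumann)
       return u + dt * (flux[1:] - flux[:-1])        # add the divergence of the flux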
ADVANCED OPTIONS:
-maxite ccc = Set maximum number of iterations to 'ccc' [Default=variable].
-rate rrr = The value of 'rrr' should be a number between
0.05 and 3.5, inclusive. It is a factor to change
the overall blurring rate (slower for rrr < 1) and thus
require more or less blurring steps. This option should only
be needed to slow down the program if it over-smooths
significantly (e.g., it overshoots the desired FWHM in
Iteration #1 or #2). You can increase the speed by using
rrr > 1, but be careful and examine the output.
-nbhd nnn = As in 3dLocalstat, specifies the neighborhood
used to compute local smoothness.
[Default = 'SPHERE(-4)' in 3D, 'SPHERE(-6)' in 2D]
** N.B.: For the 2D -FWHMxy, a 'SPHERE()' nbhd
is really a circle in the xy-plane.
** N.B.: If you do NOT want to estimate local
smoothness, use '-nbhd NULL'.
-ACF or -acf = Use the 'ACF' method (from 3dFWHMx) to estimate
the global smoothness, rather than the 'classic'
Forman 1995 method. This option will be somewhat
slower. It will also set '-nbhd NULL', since there
is no local ACF estimation method implemented.
-bsave bbb = Save the local smoothness estimates at each iteration
with dataset prefix 'bbb' [for debugging purposes].
-bmall = Use all blurmaster sub-bricks.
[Default: a subset will be chosen, for speed]
-unif = Uniformize the voxel-wise MAD in the blurmaster AND
input datasets prior to blurring. The original scaling
is restored in the output dataset.
-detrend = Detrend blurmaster dataset to order NT/30 before starting.
-nodetrend = Turn off detrending of blurmaster.
** N.B.: '-detrend' is the new default [05 Jun 2007]!
-detin = Also detrend input before blurring it, then retrend
it afterwards. [Off by default]
-temper = Try harder to make the smoothness spatially uniform.
-- Author: The Dreaded Emperor Zhark - Nov 2006
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dBrainSync
Usage: 3dBrainSync [options]
This program 'synchronizes' the -inset2 dataset to match the -inset1
dataset, as much as possible (average voxel-wise correlation), using the
same transformation on each input time series from -inset2:
++ With the -Qprefix option, the transformation is an orthogonal matrix,
computed as described in Joshi's original OHBM 2017 presentations.
++ With the -Pprefix option, the transformation is simply a
permutation of the time order of -inset2 (a special case
of an orthogonal matrix).
++ The algorithms, and a little discussion of the different features
of these two techniques, are given in the METHODS section, infra.
++ At least one of '-Qprefix' or '-Pprefix' must be given, or
this program does not do anything! You can use both methods,
if you want to compare them.
++ 'Harmonize' might be a better name for what this program does,
but calling it 3dBrainHarm would probably not be good marketing
(except for Traumatic Brain Injury researchers?).
One possible application of this program is to correlate resting state
FMRI datasets between subjects, voxel-by-voxel, as is sometimes done
with naturalistic stimuli (e.g., movie viewing).
--------
OPTIONS:
--------
-inset1 dataset1 = Reference dataset
-inset2 dataset2 = Dataset to be matched to the reference dataset,
as much as possible.
++ These 2 datasets must be on the same spatial grid,
and must have the same number of time points!
++ There must be at least twice as many voxels being
processed as there are time points (see '-mask', below).
++ These are both MANDATORY 'options'.
++ As usual in AFNI, since the computations herein are
voxel-wise, it is possible to input plain text .1D
files as datasets. When doing so, remember that
a ROW in the .1D file is interpreted as a time series
(single voxel's data). If your .1D files are oriented
so that time runs down the COLUMNS, you will have to
transpose the inputs, which can be done on the command
line with the \' operator, or externally using the
1dtranspose program.
-->>++ These input datasets should be pre-processed first
to remove undesirable components (motions, baseline,
spikes, breathing, etc). Otherwise, you will be trying
to match artifacts between the datasets, which is not
likely to be interesting or useful. 3dTproject would be
one way to do this. Even better: afni_proc.py!
++ In particular, the mean of each time series should have
been removed! Otherwise, the calculations are fairly
meaningless.
-Qprefix qqq = Specifies the output dataset to be used for
the orthogonal matrix transformation.
++ This will be the -inset2 dataset transformed
to be as correlated as possible (in time)
with the -inset1 dataset, given the constraint
that the transformation applied to each time
series is an orthogonal matrix.
-Pprefix ppp = Specifies the output dataset to be used for
the permutation transformation.
++ The output dataset is the -inset2 dataset
re-ordered in time, again to make the result
as correlated as possible with the -inset1
dataset.
-normalize = Normalize the output dataset(s) so that each
time series has sum-of-squares = 1.
++ This option is not usually needed in AFNI
(e.g., 3dTcorrelate does not care).
-mask mset = Only operate on nonzero voxels in the mset dataset.
++ Voxels outside the mask will not be used in computing
the transformation, but WILL be transformed for
your application and/or edification later.
++ For FMRI purposes, a gray matter mask would make
sense here, or at least a brain mask.
++ If no masking option is given, then all voxels
will be processed in computing the transformation.
This set will include all non-brain voxels (if any).
++ Any voxel which is all constant in time
(in either input) will be removed from the mask.
++ This mask dataset must be on the same spatial grid
as the other input datasets!
-verb = Print some progress reports and auxiliary information.
++ Use this option twice to get LOTS of progress
reports; mostly useful for debugging.
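A typical command might look like this (the dataset names here
are hypothetical):
   3dBrainSync -inset1 subj1_errts+tlrc -inset2 subj2_errts+tlrc \
               -mask gm_mask+tlrc -Qprefix subj2_syncQ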
------
NOTES:
------
* Is this program useful? Not even The Shadow knows!
(But do NOT call it BS.)
* The output dataset is in floating point format.
* Although the goal of 3dBrainSync is to make the transformed
-inset2 as correlated (voxel-by-voxel) as possible with -inset1,
it does not actually compute that correlation dataset. You can do
that computation with program 3dTcorrelate, as in
3dTcorrelate -polort -1 -prefix AB.pcor.nii \
dataset1 transformed-dataset2
* Besides the transformed dataset(s), if the '-verb' option is used,
some other (text formatted) files are written out:
{Qprefix}.sval.1D = singular values from the BC' decomposition
{Qprefix}.qmat.1D = Q matrix
{Pprefix}.perm.1D = permutation indexes p(i)
You probably do not have any use for these files; they are mostly
present to diagnose any problems.
--------
METHODS:
--------
* Notation used in the explanations below:
M = Number of time points
N = Number of voxels > M (N = size of mask)
B = MxN matrix of time series from -inset1
C = MxN matrix of time series from -inset2
Both matrices will have each column normalized to
have sum-of-squares = 1 (L2 normalized)
(the program does this operation internally; you do not have
to ensure that the input datasets are so normalized).
Q = Desired orthogonal MxM matrix to transform C such that B-QC
is as small as possible (sum-of-squares = Frobenius norm)
normF(A) = sum_{ij} A_{ij}^2 = trace(AA') = trace(A'A).
NOTE: This norm is different from the matrix L2 norm.
NOTE: A' denotes the transpose of A.
* The expansion below shows why the matrix BC' is crucial to the analysis:
normF(B-QC) = trace( [B-QC][B'-C'Q'] )
= trace(BB') + trace(QCC'Q') - trace(BC'Q') - trace(QCB')
= trace(BB') + trace(C'C) - 2 trace(BC'Q')
The second term collapses because trace(AA') = trace(A'A), so
trace([QC][QC]') = trace([QC]'[QC]) = trace(C'Q'QC) = trace(C'C)
because Q is orthogonal. So the first 2 terms in the expansion of
normF(B-QC) do not depend on Q at all. Thus, to minimize normF(B-QC),
we have to maximize trace(BC'Q') = trace([B][QC]') = trace([QC][B]').
Since the columns of B and C are the (normalized) time series,
each row represents the image at a particular time. So the (i,j)
element of BC' is the (spatial) dot product of the i-th TR image from
-inset1 with the j-th TR image from -inset2. Furthermore,
trace(BC') = trace(C'B) = sum of dot products (correlations)
of all time series. So maximizing trace(BC'Q') will maximize the
summed correlations of B (time series from -inset1) and QC
(transformed time series from -inset2).
Note again that the sum of correlations (dot products) of all the time
series is equal to the sum of dot products of all the spatial images.
So the algorithm to find the transformation Q is to maximize the sum of
dot products of spatial images from B with Q-transformed spatial images
from C -- since there are fewer time points than voxels, this is more
efficient and elegant than trying to maximize the sum over voxels of dot
products of time series.
If you use the '-verb' option, these summed correlations ('scores')
are printed to stderr during the analysis.
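This identity is easy to check numerically; a small Python/numpy
sketch (illustrative only, not part of the program):
   import numpy as np
   M, N = 10, 50
   B = np.random.randn(M, N)
   C = np.random.randn(M, N)
   B /= np.linalg.norm(B, axis=0)   # L2-normalize each column (time series)
   C /= np.linalg.norm(C, axis=0)
   # trace(BC') = sum over voxels of the time-series dot products:
   print(np.trace(B @ C.T), np.sum(B * C))   # the two numbers agree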
* Joshi method [-Qprefix]:
(a) compute MxM matrix B C'
(b) compute SVD of B C' = U S V' (U, S, V are MxM matrices)
(c) Q = U V'
[note: if B=C, then U=V, so Q=I, as it should]
(d) transform each time series from -inset2 using Q
This matrix Q is the solution to the restricted least squares
problem (i.e., restricted to have Q be an orthogonal matrix).
NOTE: The sum of the singular values in S is equal to the sum
of the time series dot products (correlations) in B and QC,
when Q is calculated as above.
A pre-print of this method is available as:
AA Joshi, M Chong, RM Leahy.
BrainSync: An Orthogonal Transformation for Synchronization of fMRI
Data Across Subjects, Proc. MICCAI 2017
https://www.dropbox.com/s/tu4kuqqlg6r02kt/brainsync_miccai2017.pdf
https://www.google.com/search?q=joshi+brainsync
http://neuroimage.usc.edu/neuro/Resources/BrainSync
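A minimal Python/numpy sketch of steps (a)-(d), using the notation
above (an illustration only, not the program's actual code):
   import numpy as np
   def joshi_transform(B, C):
       U, S, Vt = np.linalg.svd(B @ C.T)   # (a)+(b): SVD of the MxM matrix BC'
       Q = U @ Vt                          # (c): Q = U V' (an orthogonal matrix)
       # S.sum() equals the summed-correlation score in the NOTE above
       return Q @ C                        # (d): transform the -inset2 time series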
* Permutation method [-Pprefix]:
(a) Compute B C' (as above)
(b) Find a permutation p(i) of the integers {0..M-1} such
that sum_i { (BC')[i,p(i)] } is as large as possible
(i.e., p() is used as a permutation of the COLUMNS of BC').
This permutation is equivalent to post-multiplying BC'
by an orthogonal matrix P representing the permutation;
such a P is full of 0s except for a single 1 in each row
and each column.
(c) Permute the ROWS (time direction) of the time series matrix
from -inset2 using p().
Only an approximate (greedy) algorithm is used to find this
permutation; that is, the best permutation is not guaranteed to be found
(just a 'good' permutation -- it is the best thing I could code quickly :).
Algorithm currently implemented (let D=BC' for notational simplicity):
1) Find the largest element D(i,j) in the matrix.
Then the permutation at row i is p(i)=j.
Strike row i and column j out of the matrix D.
2) Repeat, finding the largest element left, say at D(f,g).
Then p(f) = g. Strike row f and column g from the matrix.
Repeat until done.
(Choosing the largest possible element at each step is what makes this
method 'greedy'.) This permutation is not optimal but is pretty good,
and another step is used to improve it:
3) For all pairs (i,j), p(i) and p(j) are swapped and that permutation
is tested to see if the trace gets bigger.
4) This pair-wise swapping is repeated until it does not improve things
any more (typically, it improves the trace about 1-2% -- not much).
The purpose of the pair swapping is to deal with situations where D looks
something like this: [ 1 70 ]
[ 70 99 ]
Step 1 would pick out 99, and Step 2 would pick out 1; that is,
p(2)=2 and then p(1)=1, for a total trace/score of 100. But swapping
1 and 2 would give a total trace/score of 140. In practice, extreme versions
of this situation do not seem common with real FMRI data, probably because
the subject's brain isn't actively conspiring against this algorithm :)
[Something called the 'Hungarian algorithm' can solve for the optimal]
[permutation exactly, but I've not had the inclination to program it.]
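A minimal Python/numpy sketch of the greedy steps 1) and 2) above,
without the pair-swap refinement (an illustration only, not the
program's actual code):
   import numpy as np
   def greedy_permutation(D):
       # D = BC' (MxM); returns p with p[i] = column matched to row i
       D = D.astype(float)
       M = D.shape[0]
       p = np.empty(M, dtype=int)
       for _ in range(M):
           i, j = np.unravel_index(np.argmax(D), D.shape)  # largest element left
           p[i] = j
           D[i, :] = -np.inf   # strike out row i
           D[:, j] = -np.inf   # strike out column j
       return p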
This whole permutation optimization procedure is very fast: about 1 second.
In the RS-FMRI data I've tried this on, the average time series correlation
resulting from this optimization is 50-65% of that which comes from
optimizing over ALL orthogonal matrices (Joshi method). If you use '-verb',
the stderr output line that looks like this
+ corr scores: original=-722.5 Q matrix=22366.0 permutation=12918.7 57.8%
shows trace(BC') before any transforms, with the Q matrix transform,
and with the permutation transform. As explained above, trace(BC') is
the summed correlations of the time series (since the columns of B and C
are normalized prior to the optimizations); in this example, the ratio of
the average time series correlation between the permutation method and the
Joshi method is about 58% (in a gray matter mask with 72221 voxels).
* Results from the permutation method MUST be less correlated (on average)
with -inset1 than the Joshi method's results: the permutation can be
thought of as an orthogonal matrix containing only 1s and 0s, and the BEST
possible orthogonal matrix, from Joshi's method, has more general entries.
++ However, the permutation method has an obvious interpretation
(re-ordering time points), while the general method linearly combines
different time points (perhaps far apart); the interpretation of this
combination in terms of synchronizing brain activity is harder to intuit
(at least for me).
++ Another feature of a permutation-only transformation is that it cannot
change the sign of data, unlike a general orthogonal matrix; e.g.,
[ 0 -1]
[-1 0], which swaps 2 time points AND negates them, is a valid
orthogonal matrix. For rs-FMRI datasets, this consideration might not
be important, since correlations are generally positive and so do not
often need sign-flipping to make them so.
* This program is NOT multi-threaded. Typically, I/O is a big part of
the run time (at least, for the cases I've tested). The '-verb' option
will give progress reports with elapsed-time stamps, making it easy to
see which parts of the program take the most time.
* Author: RWCox, servant of the ChronoSynclastic Infundibulum - July 2017
* Thanks go to Anand Joshi for his clear exposition of BrainSync at OHBM 2017,
and his encouragement about the development of this program.
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dBRAIN_VOYAGERtoAFNI
Usage: 3dBRAIN_VOYAGERtoAFNI <-input BV_VOLUME.vmr>
[-bs] [-qx] [-tlrc|-acpc|-orig] [<-prefix PREFIX>]
Converts a BrainVoyager vmr dataset to AFNI's BRIK format.
The conversion is based on information from BrainVoyager's
website: www.brainvoyager.com.
Sample data and information provided by
Adam Greenberg and Nikolaus Kriegeskorte.
If you get error messages about the number of
voxels and file size, try the options below.
I hope to automate these options once I have
a better description of the BrainVoyager QX format.
Optional Parameters:
-bs: Force byte swapping.
-qx: .vmr file is from BrainVoyager QX
-tlrc: dset in tlrc space
-acpc: dset in acpc-aligned space
-orig: dset in orig space
If unspecified, the program attempts to guess the view from
the name of the input.
[-novolreg]: Ignore any Rotate, Volreg, Tagalign,
or WarpDrive transformations present in
the Surface Volume.
[-noxform]: Same as -novolreg
[-setenv "'ENVname=ENVvalue'"]: Set environment variable ENVname
to be ENVvalue. Quotes are necessary.
Example: suma -setenv "'SUMA_BackgroundColor = 1 0 1'"
See also options -update_env, -environment, etc
in the output of 'suma -help'
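Example (the file name here is hypothetical):
   3dBRAIN_VOYAGERtoAFNI -input anat.vmr -qx -orig -prefix anat_bv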
Common Debugging Options:
[-trace]: Turns on In/Out debug and Memory tracing.
For speeding up the tracing log, I recommend
you redirect stdout to a file when using this option.
For example, if you were running suma you would use:
suma -spec lh.spec -sv ... > TraceFile
This option replaces the old -iodbg and -memdbg.
[-TRACE]: Turns on extreme tracing.
[-nomall]: Turn off memory tracing.
[-yesmall]: Turn on memory tracing (default).
NOTE: For programs that output results to stdout
(that is to your shell/screen), the debugging info
might get mixed up with your results.
Global Options (available to all AFNI/SUMA programs)
-h: Mini help; in many cases the same as -help.
-help: The entire help output
-HELP: Extreme help; in the majority of cases the same as -help.
-h_view: Open help in text editor. AFNI will try to find a GUI editor
-hview : on your machine. You can control which it should use by
setting environment variable AFNI_GUI_EDITOR.
-h_web: Open help in web browser. AFNI will try to find a browser
-hweb : on your machine. You can control which it should use by
setting environment variable AFNI_WEB_BROWSER.
-h_find WORD: Look for lines in this program's -help output that match
(approximately) WORD.
-h_raw: Help string unedited
-h_spx: Help string in sphinx loveliness, but do not try to autoformat
-h_aspx: Help string in sphinx with autoformatting of options, etc.
-all_opts: Try to identify all options for the program from the
output of its -help option. Some options might be missed
and others misidentified. Use this output for hints only.
Compile Date:
Dec 12 2019
Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov
AFNI program: 3dBrickStat
Usage: 3dBrickStat [options] dataset
Compute maximum and/or minimum voxel values of an input dataset.
The output is a number printed to the console. The input dataset
may use a sub-brick selection list, as in program 3dcalc.
Note that this program computes ONE number as the output; e.g.,
the mean over all voxels and time points. If you want (say) the
mean over all voxels but for each time point individually, see
program 3dmaskave.
Note: If you don't specify one sub-brick, the parameter you get
----- back is computed from all the sub-bricks in dataset.
Options:
-quick = get the information from the header only (default)
-slow = read the whole dataset to find the min and max values
all other options except min and max imply slow
-min = print the minimum value in dataset
-max = print the maximum value in dataset (default)
-mean = print the mean value in dataset
-sum = print the sum of values in the dataset
-var = print the variance in the dataset
-stdev = print the standard deviation in the dataset
-stdev and -var are mutually exclusive
-count = print the number of voxels included
-volume = print the volume of voxels included in microliters
-positive = include only positive voxel values
-negative = include only negative voxel values
-zero = include only zero voxel values
-non-positive = include only voxel values 0 or negative
-non-negative = include only voxel values 0 or greater
-non-zero = include only voxel values not equal to 0
-absolute = use absolute value of voxel values for all calculations
can be combined with restrictive non-positive, non-negative,
etc. even if not practical. Ignored for percentile and
median computations.
-nan = include only voxel values that are finite numbers,
not NaN or inf. -nan forces -slow mode.
-nonan = exclude voxel values that are not numbers
-mask dset = use dset as mask to include/exclude voxels
-mrange MIN MAX = Only accept values between MIN and MAX (inclusive)
from the mask. Default is to accept all non-zero
voxels.
-mvalue VAL = Only accept values equal to VAL from the mask.
-automask = automatically compute mask for dataset
Cannot be combined with -mask
-percentile p0 ps p1 = Write the percentile values starting
at p0% and ending at p1% at a step of ps%
Output is of the form p% value p% value ...
Percentile values are output first.
Only one sub-brick is accepted as input with this option.
Write the author if you REALLY need this option
to work with multiple sub-bricks.
-median = A shortcut for '-percentile 50 1 50'
-ver = print author and version info
-help = print this help screen
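Example (the dataset names here are hypothetical):
   3dBrickStat -mean -mask brainmask+orig 'epi+orig[0]'
prints the mean of sub-brick #0 of epi+orig, computed over the
non-zero voxels of brainmask+orig.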
INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
'r1+orig[3..5]' {sub-brick selector}
'r1+orig<100..200>' {sub-range selector}
'r1+orig[3..5]<100..200>' {both selectors}
'3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )' {calculation}
For the gruesome details, see the output of 'afni -help'.
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dbuc2fim
*+ WARNING: This program (3dbuc2fim) is old, not maintained, and probably useless!
This program converts bucket sub-bricks to fim (fico, fitt, fift, ...)
type dataset.
Usage:
3dbuc2fim -prefix pname d1+orig[index]
This produces a fim dataset.
-or-
3dbuc2fim -prefix pname d1+orig[index1] d2+orig[index2]
This produces a fico (fitt, fift, ...) dataset,
depending on the statistic type of the 2nd sub-brick,
with d1+orig[index1] -> intensity sub-brick of pname
d2+orig[index2] -> threshold sub-brick of pname
-or-
3dbuc2fim -prefix pname d1+orig[index1,index2]
This produces a fico (fitt, fift, ...) dataset,
depending on the statistic type of the 2nd sub-brick,
with d1+orig[index1] -> intensity sub-brick of pname
d1+orig[index2] -> threshold sub-brick of pname
where the options are:
-prefix pname = Use 'pname' for the output dataset prefix name.
OR -output pname [default='buc2fim']
-session dir = Use 'dir' for the output dataset session directory.
[default='./'=current working directory]
-verb = Print out some verbose output as the program
proceeds
Command line arguments after the above are taken as input datasets.
A dataset is specified using one of these forms:
'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
Sub-brick indexes start at 0.
N.B.: The sub-bricks are output in the order specified, which may
not be the order in the original datasets. For example, using
fred+orig[5,3]
will cause the sub-brick #5 in fred+orig to be output as the intensity
sub-brick, and sub-brick #3 to be output as the threshold sub-brick
in the new dataset.
N.B.: The '$', '(', ')', '[', and ']' characters are special to
the shell, so you will have to escape them. This is most easily
done by putting the entire dataset plus selection list inside
single quotes, as in 'fred+orig[5,9]'.
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dbucket
++ 3dbucket: AFNI version=AFNI_19.3.16 (Dec 12 2019) [64-bit]
Concatenate sub-bricks from input datasets into one big 'bucket' dataset. ~1~
Usage: 3dbucket options
where the options are: ~1~
-prefix pname = Use 'pname' for the output dataset prefix name.
OR -output pname [default='buck']
-session dir = Use 'dir' for the output dataset session directory.
[default='./'=current working directory]
-glueto fname = Append bricks to the end of the 'fname' dataset.
This command is an alternative to the -prefix
and -session commands.
* Note that fname should include the view, as in
3dbucket -glueto newset+orig oldset+orig'[7]'
-aglueto fname= If fname dset does not exist, create it (like -prefix).
Otherwise append to fname (like -glueto).
This option is useful when appending in a loop;
see the example below, after the options.
* As with -glueto, fname should include the view, e.g.
3dbucket -aglueto newset+orig oldset+orig'[7]'
-dry = Execute a 'dry run'; that is, only print out
what would be done. This is useful when
combining sub-bricks from multiple inputs.
-verb = Print out some verbose output as the program
proceeds (-dry implies -verb).
-fbuc = Create a functional bucket.
-abuc = Create an anatomical bucket. If neither of
these options is given, the output type is
determined from the first input type.
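For example, the '-aglueto' loop usage mentioned above might look
like this in csh (the dataset names are hypothetical):
   foreach r ( 1 2 3 )
     3dbucket -aglueto all_runs+orig stats_r${r}+orig'[0]'
   end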
Command line arguments after the above are taken as input datasets.
A dataset is specified using one of these forms:
'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
You can also add a sub-brick selection list after the end of the
dataset name. This allows only a subset of the sub-bricks to be
included into the output (by default, all of the input dataset
is copied into the output). A sub-brick selection list looks like
one of the following forms:
fred+orig[5] ==> use only sub-brick #5
fred+orig[5,9,17] ==> use #5, #9, and #17
fred+orig[5..8] or [5-8] ==> use #5, #6, #7, and #8
fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
Sub-brick indexes start at 0. You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
fred+orig[0..$(3)]
Notes: ~1~
N.B.: The sub-bricks are output in the order specified, which may
not be the order in the original datasets. For example, using
fred+orig[0..$(2),1..$(2)]
will cause the sub-bricks in fred+orig to be output into the
new dataset in an interleaved fashion. Using
fred+orig[$..0]
will reverse the order of the sub-bricks in the output.
N.B.: Bucket datasets have multiple sub-bricks, but do NOT have
a time dimension. You can input sub-bricks from a 3D+time dataset
into a bucket dataset. You can use the '3dinfo' program to see
how many sub-bricks a 3D+time or a bucket dataset contains.
N.B.: The '$', '(', ')', '[', and ']' characters are special to
the shell, so you will have to escape them. This is most easily
done by putting the entire dataset plus selection list inside
single quotes, as in 'fred+orig[5..7,9]'.
N.B.: In non-bucket functional datasets (like the 'fico' datasets
output by FIM, or the 'fitt' datasets output by 3dttest), sub-brick
[0] is the 'intensity' and sub-brick [1] is the statistical parameter
used as a threshold. Thus, to create a bucket dataset using the
intensity from dataset A and the threshold from dataset B, and
calling the output dataset C, you would type
3dbucket -prefix C -fbuc 'A+orig[0]' -fbuc 'B+orig[1]'
WARNING: ~1~
Using this program, it is possible to create a dataset that
has different basic datum types for different sub-bricks
(e.g., shorts for brick 0, floats for brick 1).
Do NOT do this! Very few AFNI programs will work correctly
with such datasets!
++ Compile date = Dec 12 2019 {AFNI_19.3.16:linux_ubuntu_16_64}
AFNI program: 3dcalc
++ 3dcalc: AFNI version=AFNI_19.3.16 (Dec 12 2019) [64-bit]
++ Authored by: A cast of thousands
Program: 3dcalc
Author: RW Cox et al
3dcalc - AFNI's calculator program ~1~
This program does voxel-by-voxel arithmetic on 3D datasets
(only limited inter-voxel computations are possible).
The program assumes that the voxel-by-voxel computations are being
performed on datasets that occupy the same space and have the same
orientations.
3dcalc has a lot of input options, as its capabilities have grown
over the years. So this 'help' output has gotten kind of long.
For simple voxel-wise averaging of datasets: cf. 3dMean
For averaging along the time axis: cf. 3dTstat
For smoothing in time: cf. 3dTsmooth
For statistics from a region around each voxel: cf. 3dLocalstat
------------------------------------------------------------------------
Usage: ~1~
-----
3dcalc -a dsetA [-b dsetB...] \
-expr EXPRESSION \
[options]
Examples: ~1~
--------
1. Average datasets together, on a voxel-by-voxel basis:
3dcalc -a fred+tlrc -b ethel+tlrc -c lucy+tlrc \
-expr '(a+b+c)/3' -prefix subjects_mean
Averaging datasets can also be done by programs 3dMean and 3dmerge.
Use 3dTstat to average across sub-bricks in a single dataset.
2. Perform arithmetic calculations between the sub-bricks of a single
dataset by noting the sub-brick number on the command line:
3dcalc -a 'func+orig[2]' -b 'func+orig[4]' -expr 'sqrt(a*b)'
3. Create a simple mask that consists only of values in sub-brick #0
that are greater than 3.14159:
3dcalc -a 'func+orig[0]' -expr 'ispositive(a-3.14159)' \
-prefix mask
4. Normalize subjects' time series datasets to percent change values in
preparation for group analysis:
Voxel-by-voxel, the example below divides each intensity value in
the time series (epi_run1+orig) by the voxel's mean value (mean+orig)
to get a percent change value. The 'ispositive' expression ignores
voxels with mean values less than 167 (i.e., they are set to 'zero'
in the output file 'percent_chng+orig'), since such voxels are most
likely background/noncortical voxels.
3dcalc -a epi_run1+orig -b mean+orig \
-expr '100 * a/b * ispositive(b-167)' -prefix percent_chng
5. Create a compound mask from a statistical dataset, where 3 stimuli
show activation.
NOTE: 'step' and 'ispositive' are identical expressions that can
be used interchangeably:
3dcalc -a 'func+orig[12]' -b 'func+orig[15]' -c 'func+orig[18]' \
-expr 'step(a-4.2)*step(b-2.9)*step(c-3.1)' \
-prefix compound_mask
In this example, all 3 statistical criteria must be met at once for
a voxel to be selected (value of 1) in this mask.
6. Same as example #5, but this time create a mask of 8 different values
showing all combinations of activations (i.e., not only where
everything is active, but also each stimulus individually, and all
combinations). The output mask dataset labels voxel values as such:
0 = none active 1 = A only active 2 = B only active
3 = A and B only 4 = C only active 5 = A and C only
6 = B and C only 7 = all A, B, and C active
3dcalc -a 'func+orig[12]' -b 'func+orig[15]' -c 'func+orig[18]' \
-expr 'step(a-4.2)+2*step(b-2.9)+4*step(c-3.1)' \
-prefix mask_8
In displaying such a binary-encoded mask in AFNI, you would probably
set the color display to have 8 discrete levels (the '#' menu).
7. Create a region-of-interest mask comprised of a 3-dimensional sphere.
Values within the ROI sphere will be labeled as '1' while values
outside the mask will be labeled as '0'. Statistical analyses can
then be done on the voxels within the ROI sphere.
The example below puts a solid ball (sphere) of radius 3=sqrt(9)
about the point with coordinates (x,y,z)=(20,30,70):
3dcalc -a anat+tlrc \
-expr 'step(9-(x-20)*(x-20)-(y-30)*(y-30)-(z-70)*(z-70))' \
-prefix ball
The spatial meaning of (x,y,z) is discussed in the 'COORDINATES'
section of this help listing (far below).
8. Some datasets are 'short' (16 bit) integers with a scale factor
attached, which allows them to be smaller than float datasets and
to contain fractional values.
Dataset 'a' is always used as a template for the output dataset. For
the examples below, assume that datasets d1+orig and d2+orig consist
of small integers.
a) When dividing 'a' by 'b', the result should be scaled, so that a
value of 2.4 is not truncated to '2'. To avoid this truncation,
force scaling with the -fscale option:
3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot -fscale
b) If it is preferable that the result is of type 'float', then set
the output data type (datum) to float:
3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot \
-datum float
c) Perhaps an integral division is desired, so that 9/4=2, not 2.25.
Force the results not to be scaled (opposite of example 8a) using
the -nscale option:
3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot -nscale
9. Compare the left and right amygdala between the Talairach atlas,
and the CA_N27_ML atlas. The result will be 1 if TT only, 2 if CA
only, and 3 where they overlap.
3dcalc -a 'TT_Daemon::amygdala' -b 'CA_N27_ML::amygdala' \
-expr 'step(a)+2*step(b)' -prefix compare.maps
(see 'whereami -help' for more information on atlases)
10. Convert a dataset from AFNI short format storage to NIfTI-1 floating
point (perhaps for input to a non-AFNI program that requires this):
3dcalc -a zork+orig -prefix zfloat.nii -datum float -expr 'a'
This operation could also be performed with program 3dAFNItoNIFTI.
11. Compute the edge voxels of a mask dataset. An edge voxel is one
that shares some face with a non-masked voxel. This computation
assumes 'a' is a binary mask (particularly for 'amongst').
3dcalc -a mask+orig -prefix edge \
-b a+i -c a-i -d a+j -e a-j -f a+k -g a-k \
-expr 'a*amongst(0,b,c,d,e,f,g)'
consider similar erode or dilate operations:
erosion: -expr 'a*(1-amongst(0,b,c,d,e,f,g))'
dilation: -expr 'amongst(1,a,b,c,d,e,f,g)'
------------------------------------------------------------------------
ARGUMENTS for 3dcalc (must be included on command line): ~1~
---------
-a dname = Read dataset 'dname' and call the voxel values 'a' in the
expression (-expr) that is input below. Up to 26 dnames
(-a, -b, -c, ... -z) can be included in a single 3dcalc
calculation/expression.
** If some letter name is used in the expression, but
not present in one of the dataset options here, then
that variable is set to 0.
** You can use the subscript '[]' method
to select sub-bricks of datasets, as in
-b dname+orig'[3]'
** If you just want to test some 3dcalc expression,
you can supply a dataset 'name' of the form
jRandomDataset:64,64,16,40
to have the program create and use a dataset
with a 3D 64x64x16 grid, with 40 time points,
filled with random numbers (uniform on [-1,1]).
-expr = Apply the expression - within quotes - to the input
datasets (dnames), one voxel at time, to produce the
output dataset.
** You must use 1 and only 1 '-expr' option!
NOTE: If you want to average or sum up a lot of datasets, programs
3dTstat and/or 3dMean and/or 3dmerge are better suited for these
purposes. A common request is to increase the number of input
datasets beyond 26, but in almost all cases such users simply
want to do simple addition!
NOTE: If you want to include shell variables in the expression (or in
the dataset sub-brick selection), then you should use double
"quotes" and the '$' notation for the shell variables; this
example uses csh notation to set the shell variable 'z':
set z = 3.5
3dcalc -a moose.nii -prefix goose.nii -expr "a*$z"
The shell will not expand variables inside single 'quotes',
and 3dcalc's parser will not understand the '$' character.
NOTE: You can use the ccalc program to play with the expression
evaluator, in order to get a feel for how it works and
what it accepts.
------------------------------------------------------------------------
OPTIONS for 3dcalc: ~1~
-------
-help = Show this help.
-verbose = Makes the program print out various information as it
progresses.
-datum type= Coerce the output data to be stored as the given type,
which may be byte, short, or float.
[default = datum of first input dataset]
-float }
-short } = Alternative options to specify output data format.
-byte }
-fscale = Force scaling of the output to the maximum integer
range. This only has effect if the output datum is byte
or short (either forced or defaulted). This option is
often necessary to eliminate unpleasant truncation
artifacts.
[The default is to scale only if the computed values
seem to need it -- are all <= 1.0 or there is at
least one value beyond the integer upper limit.]
** In earlier versions of 3dcalc, scaling (if used) was
applied to all sub-bricks equally -- a common scale
factor was used. This would cause trouble if the
values in different sub-bricks were in vastly
different scales. In this version, each sub-brick
gets its own scale factor. To override this behavior,
use the '-gscale' option.
-gscale = Same as '-fscale', but also forces each output sub-brick
to get the same scaling factor. This may be desirable
for 3D+time datasets, for example.
** N.B.: -usetemp and -gscale are incompatible!!
-nscale = Don't do any scaling on output to byte or short datasets.
This may be especially useful when operating on mask
datasets whose output values are only 0's and 1's.
** Only use this option if you are sure you
want the output dataset to be integer-valued!
-prefix pname = Use 'pname' for the output dataset prefix name.
[default='calc']
-session dir = Use 'dir' for the output dataset session directory.
[default='./'=current working directory]
You can also include the output directory in the
'pname' parameter to the -prefix option.
-usetemp = With this option, a temporary file will be created to
hold intermediate results. This will make the program
run slower, but can be useful when creating huge
datasets that won't all fit in memory at once.
* The program prints out the name of the temporary
file; if 3dcalc crashes, you might have to delete
this file manually.
** N.B.: -usetemp and -gscale are incompatible!!
-dt tstep = Use 'tstep' as the TR for "manufactured" 3D+time
*OR* datasets.
-TR tstep = If not given, defaults to 1 second.
-taxis N = If only 3D datasets are input (no 3D+time or .1D files),
*OR* then normally only a 3D dataset is calculated. With
-taxis N:tstep: this option, you can force the creation of a time axis
of length 'N', optionally using time step 'tstep'. In
such a case, you will probably want to use the pre-
defined time variables 't' and/or 'k' in your
expression, or each resulting sub-brick will be
identical. For example:
'-taxis 121:0.1' will produce 121 points in time,
spaced with TR 0.1.
N.B.: You can also specify the TR using the -dt option.
N.B.: You can specify 1D input datasets using the
'1D:n@val,n@val' notation to get a similar effect.
For example:
-dt 0.1 -w '1D:121@0'
will have pretty much the same effect as
-taxis 121:0.1
N.B.: For both '-dt' and '-taxis', the 'tstep' value is in
seconds.
-rgbfac A B C = For RGB input datasets, the 3 channels (r,g,b) are
collapsed to one for the purposes of 3dcalc, using the
formula value = A*r + B*g + C*b
The default values are A=0.299 B=0.587 C=0.114, which
gives the grayscale intensity. To pick out the Green
channel only, use '-rgbfac 0 1 0', for example. Note
that each channel in an RGB dataset is a byte in the
range 0..255. Thus, '-rgbfac 0.001173 0.002302 0.000447'
will compute the intensity rescaled to the range 0..1.0
(i.e., 0.001173=0.299/255, etc.)
-cx2r METHOD = For complex input datasets, the 2 channels must be
converted to 1 real number for calculation. The
methods available are: REAL IMAG ABS PHASE
* The default method is ABS = sqrt(REAL^2+IMAG^2)
* PHASE = atan2(IMAG,REAL)
* Multiple '-cx2r' options can be given:
when a complex dataset is given on the command line,
the most recent previous method will govern.
This also means that for -cx2r to affect a variable
it must precede it. For example, to compute the
phase of data in 'a' you should use
3dcalc -cx2r PHASE -a dft.lh.TS.niml.dset -expr 'a'
However, the -cx2r option will have no effect in
3dcalc -a dft.lh.TS.niml.dset -cx2r PHASE -expr 'a'
which will produce the default ABS of 'a'
The -cx2r option in the latter example only applies
to variables that will be defined after it.
When in doubt, check your output.
* If a complex dataset is used in a differential
subscript, then the most recent previous -cx2r
method applies to the extraction; for example
-cx2r REAL -a cx+orig -cx2r IMAG -b 'a[0,0,0,0]'
means that variable 'a' refers to the real part
of the input dataset and variable 'b' to the
imaginary part of the input dataset.
* 3dcalc cannot be used to CREATE a complex dataset!
[See program 3dTwotoComplex for that purpose.]
-sort = Sort each output brick separately, before output:
-SORT 'sort' ==> increasing order, 'SORT' ==> decreasing.
[This is useful only under unusual circumstances!]
[Sorting is done in spatial indexes, not in time.]
[Program 3dTsort will sort voxels along time axis]
-isola = After computation, remove isolated non-zero voxels.
This option can be repeated to iterate the process;
each copy of '-isola' will cause the isola removal
process to be repeated one more time.
------------------------------------------------------------------------
DATASET TYPES: ~1~
-------------
The most common AFNI dataset types are 'byte', 'short', and 'float'.
A byte value is an 8-bit unsigned integer (0..255), a short value is a
16-bit signed integer (-32768..32767), and a float value is a 32-bit
real number. A byte value has almost 3 decimals of accuracy, a short
has almost 5, and a float has approximately 7 (from a 23+1 bit
mantissa).
Datasets can also have a scale factor attached to each sub-brick. The
main use of this is to allow a short type dataset to contain fractional
values, while being smaller than a float dataset.
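For example, with a scale factor of 0.001, a stored short value
of 2345 represents 2345*0.001 = 2.345.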