Usage: 3dAllineate [options] sourcedataset

Program to align one dataset (the 'source') to a base dataset.
Options are available to control:
 ++ How the matching between the source and the base is computed
    (i.e., the 'cost functional' measuring image mismatch).
 ++ How the resliced source is interpolated to the base space.
 ++ The complexity of the spatial transformation ('warp') used.
 ++ And many many technical options to control the process in detail,
    if you know what you are doing (or just like to fool around).
=====----------------------------------------------------------------------
NOTES: For most 3D image registration purposes, we now recommend that you
=====  use Daniel Glen's script align_epi_anat.py (which, despite its name,
       can do many more registration problems than EPI-to-T1-weighted).
 -->> In particular, using 3dAllineate with the 'lpc' cost functional
      (to align EPI and T1-weighted volumes) requires using a '-weight'
      volume to get good results, and the align_epi_anat.py script will
      automagically generate such a weight dataset that works well for
      EPI-to-structural alignment.
 -->> This script can also be used for other alignment purposes, such
      as T1-weighted alignment between field strengths using the '-lpa'
      cost functional. Investigate align_epi_anat.py to see if it will
      do what you need -- you might make your life a little easier and
      nicer and happier and more tranquil.
 -->> Also, if/when you ask for registration help on the AFNI
      message board, we'll probably start by recommending that you
      try align_epi_anat.py if you haven't already done so.
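      As a concrete starting point, a minimal align_epi_anat.py call
      might look like the sketch below (the dataset names here are
      hypothetical; see 'align_epi_anat.py -help' for the authoritative
      option list):

        align_epi_anat.py -anat anat+orig -epi epi+orig -epi_base 0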
 -->> For aligning EPI and T1-weighted volumes, we have found that
      using a flip angle of 50-60 degrees for the EPI works better than
      a flip angle of 90 degrees. The reason is that there is more
      internal contrast in the EPI data when the flip angle is smaller,
      so the registration has some image structure to work with. With
      the 90 degree flip angle, there is so little internal contrast in
      the EPI dataset that the alignment process ends up being just
      trying to match brain outlines -- which doesn't always give
      accurate results: see http://dx.doi.org/10.1016/j.neuroimage.2008.09.037
 -->> Although the total MRI signal is reduced at a smaller flip angle,
      there is little or no loss in FMRI/BOLD information, since the
      bulk of the time series 'noise' is from physiological fluctuation
      signals, which are also reduced by the lower flip angle -- for
      more details, see http://dx.doi.org/10.1016/j.neuroimage.2010.11.020
 ** Nonlinear warping is also available, via program 3dQwarp, which makes use **
 ** of 3dAllineate for the preliminary affine alignment. If you are           **
 ** interested, see the output of '3dQwarp -help' for the details.            **

OPTIONS:
 -base bbb   = Set the base dataset to be the #0 sub-brick of 'bbb'.
               If no -base option is given, then the base volume is
               taken to be the #0 sub-brick of the source dataset.
               (Base must be stored as floats, shorts, or bytes.)
 -source ttt = Read the source dataset from 'ttt'. If no -source
   *OR*        (or -input) option is given, then the source dataset
 -input ttt    is the last argument on the command line.
               (Source must be stored as floats, shorts, or bytes.)
  ** 3dAllineate can register 2D datasets (single slice),
     but both the base and source must be 2D -- you cannot use this
     program to register a 2D slice into a 3D volume!
  ** See the script @2dwarper.Allin for an example of using
     3dAllineate to do slice-by-slice nonlinear warping to align
     3D volumes distorted by time-dependent magnetic field
     inhomogeneities.
 ** NOTA BENE: The base and source dataset do NOT have to be defined **
 **            on the same 3D grids; the alignment process uses the  **
 **            coordinate systems defined in the dataset headers to  **
 **            make the match between spatial locations, rather than **
 **            matching the 2 datasets on a voxel-by-voxel basis     **
 **            (as 3dvolreg and 3dWarpDrive do).                     **
 **       -->> However, this coordinate-based matching requires that **
 **            image volumes be defined on roughly the same patch of **
 **            (x,y,z) space, in order to find a decent starting     **
 **            point for the transformation. You might need to use   **
 **            the script @Align_Centers to do this, if the 3D       **
 **            spaces occupied by the images do not overlap much.    **
 **       -->> Or the '-cmass' option to this program might be       **
 **            sufficient to solve this problem, maybe, with luck.   **
 **            (Another reason why you should use align_epi_anat.py) **
 -prefix ppp = Output the resulting dataset to file 'ppp'. If this
   *OR*        option is NOT given, no dataset will be output! The
 -out ppp      transformation matrix to align the source to the base
               will be estimated, but not applied. You can save the
               matrix for later use using the '-1Dmatrix_save' option.
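               For example, a minimal run that actually writes an
               aligned dataset might look like this sketch (the dataset
               names here are hypothetical):

                 3dAllineate -base anat+orig -source epi+orig \
                             -prefix epi_aligned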
 -floatize   = Write result dataset as floats. Internal calculations
 -float        are all done on float copies of the input datasets.
               [Default=convert output dataset to data format of  ]
               [        source dataset; if the source dataset was ]
               [        shorts with a scale factor, then the new  ]
               [        dataset will get a scale factor as well;  ]
               [        if the source dataset was shorts with no  ]
               [        scale factor, the result will be unscaled.]
 -1Dparam_save ff = Save the warp parameters in ASCII (.1D) format into
                    file 'ff' (1 row per sub-brick in source).
                    A historical synonym for this option is '-1Dfile'.
                    At the top of the saved 1D file is a #comment line
                    listing the names of the parameters; those
                    parameters that are fixed (e.g., via '-parfix')
                    will be marked by having their symbolic names end
                    in the '$' character. You can use '1dcat -nonfixed'
                    to remove these columns from the 1D file if you
                    just want to further process the varying parameters
                    somehow (e.g., 1dsvd).
                    However, the '-1Dparam_apply' option requires the
                    full list of parameters, including those that were
                    fixed, in order to work properly!
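                    As a sketch of that workflow (the filenames here
                    are hypothetical):

                      3dAllineate -base anat+orig -source epi+orig \
                                  -prefix epi_al -1Dparam_save params.1D
                      1dcat -nonfixed params.1D > params_varying.1D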
 -1Dparam_apply aa = Read warp parameters from file 'aa', apply them to
                     the source dataset, and produce a new dataset.
                     (Must also use the '-prefix' option for this to work!)
                     (In this mode of operation, there is no optimization
                     of the cost functional by changing the warp
                     parameters; previously computed parameters are
                     applied directly.)
              *N.B.: A historical synonym for this is '-1Dapply'.
              *N.B.: If you use -1Dparam_apply, you may also want to use
                     -master to control the grid on which the new
                     dataset is written -- the base dataset from the
                     original 3dAllineate run would be a good
                     possibility. Otherwise, the new dataset will be
                     written out on the 3D grid coverage of the source
                     dataset, and this might result in clipping off
                     part of the image.
              *N.B.: Each row in the 'aa' file contains the parameters
                     for transforming one sub-brick in the source
                     dataset. If there are more sub-bricks in the
                     source dataset than there are rows in the 'aa'
                     file, then the last row is used repeatedly.
              *N.B.: A trick to use 3dAllineate to resample a dataset to
                     a finer grid spacing:
                       3dAllineate -input dataset+orig    \
                                   -master template+orig  \
                                   -prefix newdataset     \
                                   -final wsinc5          \
                                   -1Dparam_apply '1D: 12@0''
                     Here, the identity transformation is specified
                     by giving all 12 affine parameters as 0 (note
                     the extra ' at the end of the '1D: 12@0' input!).
             **N.B.: Some expert options for modifying how the wsinc5
                     method works are described far below, if you use
                     '-HELP' instead of '-help'.
 -1Dmatrix_save ff = Save the transformation matrix for each sub-brick
                     into file 'ff' (1 row per sub-brick in the source
                     dataset). If 'ff' does NOT end in '.1D', then the
                     program will append '.aff12.1D' to 'ff' to make
                     the output filename.
              *N.B.: This matrix is the coordinate transformation from
                     base to source DICOM coordinates. In other terms:
                        Xin = Xsource = M Xout = M Xbase
                                     or
                        Xout = Xbase = inv(M) Xin = inv(M) Xsource
                     where Xin or Xsource is the 4x1 coordinates of a
                     location in the input volume, Xout is the
                     coordinate of that same location in the output
                     volume, and Xbase is the coordinate of the
                     corresponding location in the base dataset. M is
                     ff augmented by a 4th row of [0 0 0 1], and X. is
                     an augmented column vector [x,y,z,1]'. To get the
                     inverse matrix inv(M) (source to base), use the
                     cat_matvec program, as in
                        cat_matvec fred.aff12.1D -I
 -1Dmatrix_apply aa = Use the matrices in file 'aa' to define the spatial
                      transformations to be applied. Also see program
                      cat_matvec for ways to manipulate these matrix
                      files.
  * The -1Dmatrix_* options can be used to save and re-use the transformation *
  * matrices. In combination with the program cat_matvec, which can multiply  *
  * saved transformation matrices, you can also adjust these matrices to     *
  * other alignments.                                                        *
  * The script 'align_epi_anat.py' uses 3dAllineate and 3dvolreg to align EPI *
  * datasets to T1-weighted anatomical datasets, using saved matrices between *
  * the two programs. This script is our currently recommended method for    *
  * doing such intra-subject alignments.                                     *
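  As a hedged sketch of that save/compose/re-use workflow (all filenames
  here are hypothetical, and the matrix composition order should be
  checked against 'cat_matvec -help'):

    # save the EPI-to-anat matrix during registration
    3dAllineate -base anat+orig -source epi+orig \
                -prefix epi_al -1Dmatrix_save epi_to_anat.aff12.1D

    # compose with another saved transform, then apply the product
    cat_matvec anat_to_tlrc.aff12.1D epi_to_anat.aff12.1D > epi_to_tlrc.aff12.1D
    3dAllineate -source epi+orig -1Dmatrix_apply epi_to_tlrc.aff12.1D \
                -master template+tlrc -prefix epi_in_tlrc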
 -cost ccc   = Defines the 'cost' function that defines the matching
               between the source and the base; 'ccc' is one of
                 ls  OR leastsq         = Least Squares [Pearson Correlation]
                 mi  OR mutualinfo      = Mutual Information [H(b)+H(s)-H(b,s)]
                 crM OR corratio_mul    = Correlation Ratio (Symmetrized*)
                 nmi OR norm_mutualinfo = Normalized MI [H(b,s)/(H(b)+H(s))]
                 hel OR hellinger       = Hellinger metric
                 crA OR corratio_add    = Correlation Ratio (Symmetrized+)
                 crU OR corratio_uns    = Correlation Ratio (Unsym)
               You can also specify the cost functional using an option
               of the form '-mi' rather than '-cost mi', if you like
               to keep things terse and cryptic (as I do).
               [Default == '-hel' (for no good reason, but it sounds nice).]
 -interp iii = Defines interpolation method to use during matching
               process, where 'iii' is one of
                 NN      OR nearestneighbour OR nearestneighbor
                 linear  OR trilinear
                 cubic   OR tricubic
                 quintic OR triquintic
               Using '-NN' instead of '-interp NN' is allowed (e.g.).
               Note that using cubic or quintic interpolation during
               the matching process will slow the program down a lot.
               Use '-final' to affect the interpolation method used
               to produce the output dataset, once the final
               registration parameters are determined.
               [Default method == 'linear'.]
      ** N.B.: Linear interpolation is used during the coarse
               alignment pass; the selection here only affects
               the interpolation method used during the second
               (fine) alignment pass.
      ** N.B.: '-interp' does NOT define the final method used
               to produce the output dataset as warped from the
               input dataset. If you want to do that, use '-final'.
 -final iii  = Defines the interpolation mode used to create the
               output dataset. [Default == 'cubic']
      ** N.B.: For '-final' ONLY, you can use 'wsinc5' to specify
               that the final interpolation be done using a weighted
               sinc interpolation method. This method is so SLOW that
               you aren't allowed to use it for the registration
               itself.
            ++ wsinc5 interpolation is highly accurate and should
               reduce the smoothing artifacts from lower order
               interpolation methods (which are most visible if you
               interpolate an EPI time series to high resolution and
               then make an image of the voxel-wise variance).
            ++ On my Intel-based Mac, it takes about 2.5 s to do
               wsinc5 interpolation, per 1 million voxels output.
               For comparison, quintic interpolation takes about
               0.3 s per 1 million voxels: 8 times faster than wsinc5.
            ++ The '5' refers to the width of the sinc interpolation
               weights: plus/minus 5 grid points in each direction;
               this is a tensor product interpolation, for speed.
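            A common pattern, then, is to register with the fast
            default matching interpolation but write the output with
            wsinc5; a sketch (hypothetical dataset names):

              3dAllineate -base anat+orig -source epi+orig \
                          -prefix epi_al -interp linear -final wsinc5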
 -nmatch nnn = Use at most 'nnn' scattered points to match the
               datasets. The smaller nnn is, the faster the matching
               algorithm will run; however, accuracy may be bad if
               nnn is too small. If you end the 'nnn' value with the
               '%' character, then that percentage of the base's
               voxels will be used.
               [Default == 47% of voxels in the weight mask]
-nopad = Do not use zero-padding on the base image.
[Default == zero-pad, if needed; -verb shows how much]
 -zclip      = Replace negative values in the input datasets (source
               & base) with zero. The intent is to clip off a small
               set of negative values that may arise when using
               3dresample (say) with cubic interpolation.
 -conv mmm   = Convergence test is set to 'mmm' millimeters.
               This doesn't mean that the results will be accurate
               to 'mmm' millimeters! It just means that the program
               stops trying to improve the alignment when the
               optimizer (NEWUOA) reports it has narrowed the search
               radius down to this level. [Default == 0.05 mm]
 -verb       = Print out verbose progress reports.
               [Using '-VERB' will give even more prolix reports.]
 -quiet      = Don't print out verbose stuff.
 -usetemp    = Write intermediate stuff to disk, to economize on RAM.
               Using this will slow the program down, but may make it
               possible to register datasets that need lots of space.
      **N.B.: Temporary files are written to the directory given
              in environment variable TMPDIR, or in /tmp, or in ./
              (preference in that order). If the program crashes,
              these files are named TIM_somethingrandom, and you
              may have to delete them manually. (TIM=Temporary IMage)
      **N.B.: If the program fails with a 'malloc failure' type of
              message, then try '-usetemp' (malloc=memory allocator).
      **N.B.: If you use '-verb', then memory usage is printed out
              at various points along the way.
 -nousetemp  = Don't use temporary workspace on disk [the default].
 -check kkk  = After cost functional optimization is done, start at the
               final parameters and RE-optimize using the new cost
               function 'kkk'. If the results are too different, a
               warning message will be printed. However, the final
               parameters from the original optimization will be used
               to create the output dataset. Using '-check' increases
               the CPU time, but can help you feel sure that the
               alignment process did not go wild and crazy.
               [Default == no check == don't worry, be happy!]
      **N.B.: You can put more than one function after '-check', as in
                -nmi -check mi hel crU crM
              to register with Normalized Mutual Information, and then
              check the results against 4 other cost functionals.
      **N.B.: On the other hand, some cost functionals give better
              results than others for specific problems, and so a
              warning that 'mi' was significantly different than
              'hel' might not actually mean anything useful (e.g.).
** PARAMETERS THAT AFFECT THE COST OPTIMIZATION STRATEGY **
 -onepass    = Use only the refining pass -- do not try a coarse
               resolution pass first. Useful if you know that only
               small amounts of image alignment are needed.
               [The default is to use both passes.]
 -twopass    = Use a two pass alignment strategy, first searching for
               a large rotation+shift and then refining the alignment.
               [Two passes are used by default for the first sub-brick]
               [in the source dataset, and then one pass for the others.]
               ['-twopass' will do two passes for ALL source sub-bricks.]
 -twoblur rr = Set the blurring radius for the first pass to 'rr'
               millimeters. [Default == 11 mm]
      **N.B.: You may want to change this from the default if
              your voxels are unusually small or unusually large
              (e.g., outside the range 1-4 mm along each axis).
 -twofirst   = Use -twopass on the first image to be registered, and
               then on all subsequent images from the source dataset,
               use results from the first image's coarse pass to start
               the fine pass.
               (Useful when there may be large motions between the  )
               (source and the base, but only small motions within  )
               (the source dataset itself; since the coarse pass can )
               (be slow, doing it only once makes sense in this case.)
      **N.B.: [-twofirst is on by default; '-twopass' turns it off.]
 -twobest bb = In the coarse pass, use the best 'bb' set of initial
               points to search for the starting point for the fine
               pass. If bb==0, then no search is made for the best
               starting point, and the identity transformation is
               used as the starting point. [Default=5; min=0 max=11]
      **N.B.: Setting bb=0 will make things run faster, but less
              reliably.
 -fineblur x = Set the blurring radius to use in the fine resolution
               pass to 'x' mm. A small amount (1-2 mm?) of blurring at
               the fine step may help with convergence, if there is
               some problem, especially if the base volume is very
               noisy.
               [Default == 0 mm = no blurring at the final alignment pass]

**NOTES ON STRATEGY:
 * If you expect only small-ish (< 2 voxels?) image movement,
   then using '-onepass' or '-twobest 0' makes sense (see the
   sketches just after this list).
 * If you expect large-ish image movements, then do not use
   '-onepass' or '-twobest 0'; the purpose of the '-twobest'
   parameter is to search for large initial rotations/shifts
   with which to start the coarse optimization round.
 * If you have multiple sub-bricks in the source dataset, then
   the default '-twofirst' makes sense if you don't expect large
   movements WITHIN the source, but expect large motions between
   the source and base.
 * '-twopass' re-starts the alignment process for each sub-brick
   in the source dataset -- this option can be time consuming, and
   is really intended to be used when you might expect large
   movements between sub-bricks; for example, when the different
   volumes are gathered on different days. For most purposes,
   '-twofirst' (the default process) will be adequate and faster,
   when operating on multi-volume source datasets.
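 As rough command-line sketches of those two regimes (hypothetical
 dataset names; the option values are only illustrative):

   # expecting large misalignment between source and base:
   3dAllineate -base anat+orig -source epi+orig -prefix epi_al \
               -twopass -twobest 11

   # expecting only tiny movements:
   3dAllineate -base anat+orig -source epi+orig -prefix epi_al -onepass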
 -cmass      = Use the center-of-mass calculation to bracket the shifts.
               [This option is OFF by default]
               If given in the form '-cmass+xy' (for example), means
               to do the CoM calculation in the x- and y-directions,
               but not the z-direction.
 -nocmass    = Don't use the center-of-mass calculation. [The default]
               (You would not want to use the C-o-M calculation if the )
               (source sub-bricks have very different spatial locations,)
               (since the source C-o-M is calculated from all sub-bricks)
 **EXAMPLE: You have a limited coverage set of axial EPI slices you want
            to register into a larger head volume (after 3dSkullStrip,
            of course). In this case, '-cmass+xy' makes sense, allowing
            CoM adjustment along the x = R-L and y = A-P directions, but
            not along the z = I-S direction, since the EPI doesn't cover
            the whole brain along that axis.
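            A minimal sketch of that case (dataset names hypothetical):

              3dAllineate -base anat_ns+orig -source epi_partial+orig \
                          -cmass+xy -prefix epi_al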
 -autoweight = Compute a weight function using the 3dAutomask
               algorithm plus some blurring of the base image.
      **N.B.: '-autoweight+100' means to zero out all voxels
              with values below 100 before computing the weight.
              '-autoweight**1.5' means to compute the autoweight
              and then raise it to the 1.5-th power (e.g., to
              increase the weight of high-intensity regions).
              These two processing steps can be combined, as in
                '-autoweight+100**1.5'
           ** Note that the '**' must be enclosed in quotes;
              otherwise, the shell will treat it as a wildcard
              and you will get an error message before 3dAllineate
              even starts!!
      **N.B.: Some cost functionals do not allow -autoweight, and
              will use -automask instead. A warning message will be
              printed if you run into this situation. If a clip level
              '+xxx' is appended to '-autoweight', then the conversion
              into '-automask' will NOT happen. Thus, appending a small
              positive '+xxx' can be used to trick -autoweight into
              working on any cost functional.
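      For instance, a sketch combining these pieces (the quotes protect
      the '**' from the shell; dataset names are hypothetical):

        3dAllineate -base anat+orig -source epi+orig -prefix epi_al \
                    '-autoweight+100**1.5' -wtprefix wtvol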
 -automask   = Compute a mask function, which is like -autoweight,
               but the weight for a voxel is set to either 0 or 1.
      **N.B.: '-automask+3' means to compute the mask function, and
              then dilate it outwards by 3 voxels (e.g.).
           ** Note that '+' means something very different
              for '-automask' and '-autoweight'!!
 -autobox    = Expand the -automask function to enclose a rectangular
               box that holds the irregular mask.
      **N.B.: This is the default mode of operation!
              For intra-modality registration, '-autoweight' may
              be better!
            * If the cost functional is 'ls', then '-autoweight'
              will be the default, instead of '-autobox'.
 -nomask     = Don't compute the autoweight/mask; if -weight is not
               also used, then every voxel will be counted equally.
 -weight www = Set the weighting for each voxel in the base dataset;
               larger weights mean that voxel counts more in the cost
               function.
 -wtprefix p = Write the weight volume to disk as a dataset with
               prefix name 'p'. Used with '-autoweight/mask', this
               option lets you see what voxels were important in the
               algorithm.
 -emask ee   = This option lets you specify a mask of voxels to
               EXCLUDE from the analysis. The voxels where the dataset
               'ee' is nonzero will not be included (i.e., their
               weights will be set to zero).
             * Like all the weight options, it applies in the base
               image coordinate system.
             * Like all the weight options, it means nothing if you
               are using one of the 'apply' options.
            Method  Allows -autoweight
            ------  ------------------
              ls           YES
              mi           NO
              crM          YES
              nmi          NO
              hel          NO
              crA          YES
              crU          YES
 -source_mask sss = Mask the source (input) dataset, using 'sss'.
 -source_automask = Automatically mask the source dataset.
                    [By default, all voxels in the source]
                    [dataset are used in the matching.   ]
      **N.B.: You can also use '-source_automask+3' to dilate
              the default source automask outward by 3 voxels.
 -warp xxx   = Set the warp type to 'xxx', which is one of
                 shift_only         OR sho =  3 parameters
                 shift_rotate       OR shr =  6 parameters
                 shift_rotate_scale OR srs =  9 parameters
                 affine_general     OR aff = 12 parameters
               [Default = affine_general, which includes image]
               [shifts, rotations, scaling, and shearing      ]
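               For example, a rigid-body (shift+rotate only) alignment
               could be requested with a sketch like this (hypothetical
               dataset names):

                 3dAllineate -base vol1+orig -source vol2+orig \
                             -prefix vol2_al -warp shift_rotate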
 -warpfreeze = Freeze the non-rigid body parameters (those past #6)
               after doing the first sub-brick. Subsequent volumes
               will have the same spatial distortions as sub-brick #0,
               plus rigid body motions only.
 -replacebase   = If the source has more than one sub-brick, and this
                  option is turned on, then after the #0 sub-brick is
                  aligned to the base, the aligned #0 sub-brick is used
                  as the base image for subsequent source sub-bricks.
 -replacemeth m = After sub-brick #0 is aligned, switch to method 'm'
                  for later sub-bricks. For use with '-replacebase'.
 -EPI        = Treat the source dataset as being composed of warped
               EPI slices, and the base as comprising anatomically
               'true' images. Only phase-encoding direction image
               shearing and scaling will be allowed with this option.
      **N.B.: For most people, the base dataset will be a 3dSkullStrip-ed
              T1-weighted anatomy (MPRAGE or SPGR). If you don't remove
              the skull first, the EPI images (which have little skull
              visible due to fat-suppression) might expand to fit EPI
              brain over T1-weighted skull.
      **N.B.: Usually, EPI datasets don't have as complete slice coverage
              of the brain as do T1-weighted datasets. If you don't use
              some option (like '-EPI') to suppress scaling in the slice-
              direction, the EPI dataset is likely to stretch the slice
              thickness to better 'match' the T1-weighted brain coverage.
      **N.B.: '-EPI' turns on '-warpfreeze -replacebase'.
              You can use '-nowarpfreeze' and/or '-noreplacebase' AFTER
              the '-EPI' on the command line if you do not want these
              options used.
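      Putting that together, an EPI-to-anatomical sketch might look
      like this (hypothetical dataset names; 'anat_ns' standing for a
      skull-stripped anatomy):

        3dAllineate -base anat_ns+orig -source epi+orig \
                    -prefix epi_al -EPI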
 -parfix n v   = Fix parameter #n to be exactly at value 'v'.
 -parang n b t = Allow parameter #n to range only between 'b' and 't'.
                 If not given, default ranges are used.
 -parini n v   = Initialize parameter #n to value 'v', but then
                 allow the algorithm to adjust it.
 -maxrot dd    = Allow maximum rotation of 'dd' degrees. Equivalent
                 to '-parang 4 -dd dd -parang 5 -dd dd -parang 6 -dd dd'
                 [Default=30 degrees]
 -maxshf dd    = Allow maximum shift of 'dd' millimeters. Equivalent
                 to '-parang 1 -dd dd -parang 2 -dd dd -parang 3 -dd dd'
                 [Default=32% of the size of the base image]
        **N.B.: This max shift setting is relative to the center-of-mass
                shift, if the '-cmass' option is used.
 -maxscl dd    = Allow maximum scaling factor to be 'dd'. Equivalent
                 to '-parang 7 1/dd dd -parang 8 1/dd dd -parang 9 1/dd dd'
                 [Default=1.2=image can go up or down 20% in size]
 -maxshr dd    = Allow maximum shearing factor to be 'dd'. Equivalent
                 to '-parang 10 -dd dd -parang 11 -dd dd -parang 12 -dd dd'
                 [Default=0.1111 for no good reason]
 NOTE: If the datasets being registered have only 1 slice, 3dAllineate
       will automatically fix the 6 out-of-plane motion parameters to
       their 'do nothing' values, so you don't have to specify '-parfix'.
 -master mmm = Write the output dataset on the same grid as dataset
               'mmm'. If this option is NOT given, the base dataset
               is the master.
      **N.B.: 3dAllineate transforms the source dataset to be 'similar'
              to the base image. Therefore, the coordinate system of
              the master dataset is interpreted as being in the
              reference system of the base image. It is thus vital
              that these finite 3D volumes overlap, or you will lose
              data!
      **N.B.: If 'mmm' is the string 'SOURCE', then the source dataset
              is used as the master for the output dataset grid. You
              can also use 'BASE', which is of course the default.
 -mast_dxyz del = Write the output dataset using grid spacings of
   *OR*           'del' mm. If this option is NOT given, then the
 -newgrid del     grid spacings in the master dataset will be used.
                  This option is useful when registering low resolution
                  data (e.g., EPI time series) to high resolution
                  datasets (e.g., MPRAGE) where you don't want to
                  consume vast amounts of disk space interpolating the
                  low resolution data to some artificially fine (and
                  meaningless) spatial grid.
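                  For instance, to keep the EPI output at (say) 2.5 mm
                  voxels while using the base grid's coverage, a sketch
                  might be (hypothetical dataset names):

                    3dAllineate -base mprage+orig -source epi+orig \
                                -prefix epi_al -master BASE -mast_dxyz 2.5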
 The 3x3 spatial transformation matrix is calculated as [S][D][U],
 where [S] is the shear matrix,
       [D] is the scaling matrix, and
       [U] is the rotation (proper orthogonal) matrix.
 These matrices are specified in DICOM-ordered (x=-R+L, y=-A+P, z=-I+S)
 coordinates as:

   [U] = [Rotate_y(param#6)] [Rotate_x(param#5)] [Rotate_z(param#4)]
         (angles are in degrees)

   [D] = diag( param#7 , param#8 , param#9 )

         [    1        0     0 ]        [ 1 param#10 param#11 ]
   [S] = [ param#10    1     0 ]   OR   [ 0    1     param#12 ]
         [ param#11 param#12 1 ]        [ 0    0        1     ]

 The shift vector comprises parameters #1, #2, and #3.
 The goal of the program is to find the warp parameters such that
 I(w(x)) matches J(x) as closely as possible in some sense of
 'similar', where J(x) is the base image, and I(x) is the source image.

 Using '-parfix', you can specify that some of these parameters are
 fixed. For example, '-shift_rotate_scale' is equivalent to
 '-affine_general -parfix 10 0 -parfix 11 0 -parfix 12 0'.
 Don't even think of using the '-parfix' option unless you grok this
 example!
----------- Special Note for the '-EPI' Option's Coordinates -----------
In this case, the parameters above are with reference to coordinates
  x = frequency encoding direction (by default, first axis of dataset)
  y = phase encoding direction     (by default, second axis of dataset)
  z = slice encoding direction     (by default, third axis of dataset)
This option lets you freeze some of the warping parameters in ways
that make physical sense, considering how echo-planar images are
acquired. The x- and z-scaling parameters are disabled, and shears
will only affect the y-axis. Thus, there will be only 9 free
parameters when '-EPI' is used. If desired, you can use a '-parang'
option to allow the scaling fixed parameters to vary (put these after
the '-EPI' option):
  -parang 7 0.833 1.20     to allow x-scaling
  -parang 9 0.833 1.20     to allow z-scaling
You could also fix some of the other parameters, if that makes sense
in your situation; for example, to disable out-of-slice rotations:
  -parfix 5 0  -parfix 6 0
            ******* CHANGING THE ORDER OF MATRIX APPLICATION *******

 -SDU or -SUD }= Set the order of the matrix multiplication
 -DSU or -DUS }= for the affine transformations:
 -USD or -UDS }=   S = triangular shear (params #10-12)
                   D = diagonal scaling matrix (params #7-9)
                   U = rotation matrix (params #4-6)
                 Default order is '-SDU', which means that the U
                 matrix is applied first, then the D matrix, then
                 the S matrix.

 -Supper      }= Set the S matrix to be upper or lower
 -Slower      }= triangular [Default=lower triangular]

 -ashift OR   }= Apply the shift parameters (#1-3) after OR
 -bshift      }= before the matrix transformation. [Default=after]
 ===== RWCox - September 2006 - Live Long and Prosper =====

 * From Webster's Dictionary: Allineate == 'to align' *
[[[ To see a plethora of advanced/experimental options, use ‘-HELP’. ]]]
 * This binary version of 3dAllineate is compiled using OpenMP, a semi-
   automatic parallelizer software toolkit, which splits the work across
   multiple CPUs/cores on the same shared memory computer.
 * OpenMP is NOT like MPI -- it does not work with CPUs connected only
   by a network (e.g., OpenMP doesn't work with 'cluster' setups).
 * The number of CPU threads used will default to the maximum number on
   your system. You can control this value by setting environment variable
   OMP_NUM_THREADS to some smaller value (including 1).
 * Un-setting OMP_NUM_THREADS resets OpenMP back to its default state of
   using all CPUs available.
   ++ However, on some systems (such as the NIH Biowulf), it seems to be
      necessary to set OMP_NUM_THREADS explicitly, or you only get one CPU.
   ++ On other systems with many CPUs, you probably want to limit the CPU
      count, since using more than (say) 16 threads is probably useless.
 * You must set OMP_NUM_THREADS in the shell BEFORE running the program,
   since OpenMP queries this variable BEFORE the program actually starts.
   ++ You can't usefully set this variable in your ~/.afnirc file or on the
      command line with the '-D' option.
 * How many threads are useful? That varies with the program, and how well
   it was coded. You'll have to experiment on your own systems!
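   For example, to limit this program to 4 threads before running it
   (a sketch; the thread count and dataset names are arbitrary):

     # tcsh:  setenv OMP_NUM_THREADS 4
     # bash:  export OMP_NUM_THREADS=4
     3dAllineate -base anat+orig -source epi+orig -prefix epi_al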
The number of CPUs on this particular computer system is ...... 16.
The maximum number of CPUs that will be used is now set to .... 7.
 * In this program, limited tests show that OpenMP provides some benefit,
   particularly when using the more complicated interpolation methods
   (e.g., '-cubic' and/or '-final wsinc5'), for up to 3-4 CPU threads.
   Probably because my parallelization efforts were pretty limited.
++ Compile date = Dec 16 2015