
September 12, 2007 11:54AM
There is no way to FORCE all AFNI programs to max-scale the data before writing it out. Partly this is a result of the differing modes of processing adopted in various codes (e.g., 3dDetrend works voxel-wise, extracting a time series, processing it, and reinserting it -- so it can't easily change the scaling factor), and partly this is a result of my thinking that float format is the way to go if you are concerned about this effect. (Basically, max-scaling is a form of block float format, whereas in true float format each number has its own scaling factor.)
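The block-float point above can be made concrete with a small numeric sketch. This is hypothetical illustration code, not AFNI code: it compares storing a sub-brick as 16-bit shorts with a single max-derived scale factor (block float) against true 32-bit floats, where each value carries its own exponent. When a few large values dominate the max, the shared scale factor is coarse for everything else.

```python
# Illustration (not AFNI code): "block float" storage -- one scale
# factor for a whole sub-brick of signed shorts, as in a max-scaled
# AFNI short dataset -- versus true float storage per value.
import numpy as np

rng = np.random.default_rng(0)
# A sub-brick of mostly unit-scale values with a few large spikes.
data = rng.normal(0.0, 1.0, 10_000)
data[:10] *= 1000.0  # the spikes dominate the max

# Block float: scale so the max fits in a signed short (32767),
# round to int16, then reconstruct.
factor = np.abs(data).max() / 32767.0
shorts = np.round(data / factor).astype(np.int16)
block_float = shorts * factor

# True float: each value rounded independently to float32.
true_float = data.astype(np.float32)

err_block = np.abs(block_float - data).max()
err_float = np.abs(true_float.astype(np.float64) - data).max()
print("block-float max error:", err_block)
print("true-float  max error:", err_float)
```

The block-float error is bounded by half the scale factor for every voxel, regardless of the voxel's own magnitude, while the float32 error is relative to each value. That asymmetry is exactly why repeated short-with-scale-factor round trips can erode precision in a multi-step pipeline.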

You could also start by making a copy of the dataset in max-scaled format with something like

3dcalc -prefix rawf -a raw+orig -expr a -fscale

and then process rawf+orig. This dataset will be max-scaled, but its successors probably won't be. Still, this keeps the disk storage in short format, if you are concerned about disk space, and provides some protection against loss of precision.

You could run an analysis all 3 ways (no scaling, scaling at the beginning, and float format) and compare the results. That's the best way to convince yourself of the effect's impact on your brain activation maps.
Subject                                               Author       Posted
Precision loss over multiple preprocessing steps      David Weiss  September 11, 2007 03:25PM
Re: Precision loss over multiple preprocessing steps  bob cox      September 12, 2007 11:54AM