Hi, Brady-
I think the question is whether one is applying blurring *to* data or estimating smoothness *in* data.
For estimating the smoothness in data, one wants a model more flexible than a Gaussian approximation, because there will be inherent smoothness (= spatial structure of correlation) from the data itself, in combination with any further applied smoothing, and that combined shape is generally not purely Gaussian.
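For example, AFNI's 3dFWHMx with its -acf option fits a mixed Gaussian-plus-monoexponential autocorrelation function rather than assuming a pure Gaussian (the dataset and mask names here are just placeholders):

    # hypothetical filenames: estimate residual smoothness within a brain mask
    3dFWHMx -acf -mask mask_group.nii.gz -input errts.sub01.nii.gz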
For applying blurring, using a Gaussian seems perfectly reasonable and appropriate, and that is what AP does by default.
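As a side note on what a given blur size means: the FWHM one specifies (e.g., with -blur_size) relates to the standard deviation of the Gaussian kernel by a fixed factor. A minimal sketch of that conversion (the function name here is just for illustration):

```python
import math

# Conversion between FWHM and the Gaussian standard deviation (sigma):
#   FWHM = 2*sqrt(2*ln 2) * sigma  (approximately 2.3548 * sigma)
FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def fwhm_to_sigma(fwhm):
    """Sigma of a Gaussian kernel whose full width at half maximum is `fwhm`."""
    return fwhm * FWHM_TO_SIGMA

# Quick check: a unit-height Gaussian with sigma derived from an 8 mm FWHM
# should drop to exactly half its peak at +/- 4 mm from the center.
sigma = fwhm_to_sigma(8.0)
half_height = math.exp(-(4.0 ** 2) / (2.0 * sigma ** 2))
print(round(sigma, 4))        # ~3.3973
print(round(half_height, 4))  # 0.5
```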
Note that it is also possible to apply a (Gaussian) blur that is non-constant across space, to bring all locations *to* approximately the same smoothness. That could be useful because, as noted above, there is already some spatial structure in the data. That can be requested in AP with this option:
-blur_to_fwhm : blur TO the blur size (not add a blur size)
This option changes the program used to blur the data. Instead of
using 3dmerge, this applies 3dBlurToFWHM. So instead of adding a
blur of size -blur_size (with 3dmerge), the data is blurred TO the
FWHM of the -blur_size.
Note that 3dBlurToFWHM should be run with a mask. So either:
o put the 'mask' block before the 'blur' block, or
o use -blur_in_automask
It is not appropriate to include non-brain in the blur estimate.
Note that extra options can be added via -blur_opts_B2FW.
Please see '3dBlurToFWHM -help' for more information.
See also -blur_size, -blur_in_automask, -blur_opts_B2FW.
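As a concrete sketch of that (the subject ID and block list here are hypothetical; the key parts are having 'mask' before 'blur' and adding -blur_to_fwhm), the relevant fragment of an AP command might look like:

    # hypothetical fragment: 'mask' precedes 'blur', so 3dBlurToFWHM
    # has a mask available, per the help text above
    afni_proc.py                                                     \
        -subj_id subj01                                              \
        -blocks tshift align tlrc volreg mask blur scale regress     \
        -blur_size 6                                                 \
        -blur_to_fwhm                                                \
        ...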
--pt