Hi Brian,
1. That looks okay, but I worry about the 0.5. It means that if,
at some voxel, half of your subjects contribute zeros and half
contribute valid data (with a mean of 2.3, for example), that
voxel is still included in the analysis, and the zeros drag the
group mean down.
Normally I would suggest using an intersection mask.
At this point, afni_proc.py suggests not masking until the group
analysis. But since you have already masked the individual
subjects, an intersection mask seems appropriate.
2. Hopefully afni_proc.py is running that step for you, using the
-regress_est_blur_errts option. Then you can average the blur
estimates across subjects.
It is expected that all subjects have basically the same blur,
as they are (presumably) scanned on the same scanner and blurred
by the same FWHM later.
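If you end up averaging the estimates by hand, something like the
following works on the per-subject blur files (the blur_est.*.1D
names follow afni_proc.py's convention, but adjust to your own
files; this assumes one row of FWHMx FWHMy FWHMz per subject):

```shell
# Average per-subject blur estimates (x, y, z) across the group.
# File names are illustrative; point the glob at your blur files.
cat blur_est.*.1D | awk '{ x += $1; y += $2; z += $3; n++ }
                    END  { printf "%.3f %.3f %.3f\n", x/n, y/n, z/n }'
```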
3. Maybe use 3dTstat per file and then 3dMean on the 40 resulting files.
You might consider adding -regress_est_blur_errts to your
afni_proc.py command, and looking at what it does with 3dTstat.
3b. 3dTstat takes only 1 input dataset. That is why you would
run it per subject, to average across runs.
An alternative would be to simply 'cat' all of the runs for all of
the subjects into a single .1D file, and use 3dTstat on that.
Note that the files should have a .1D suffix.
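One caveat with the single-'cat' alternative: it matches the
per-subject route only if every subject has the same number of
runs. A small sketch with made-up numbers:

```shell
# Subject A has runs 1 2 3 (mean 2); subject B has one run of 10.
# Per-subject then group average vs. pooling every run together:
awk 'BEGIN { print ((1 + 2 + 3)/3 + 10)/2, (1 + 2 + 3 + 10)/4 }'
```

The two answers differ (6 vs 4), because pooling weights subjects
by their run counts. So if run counts vary across subjects, the
per-subject averaging step keeps every subject equally weighted.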
- rick