Dear all,
I remember that, quite some time ago, the advice for group analysis (e.g., a t-test on individually estimated contrast images) was to compute the smoothness fed to AlphaSim from the standard-error dataset of that same group analysis, which could be obtained with:
# sub-brick [0] is the group mean and [1] the t statistic,
# so a/b = mean/t recovers the standard error
3dcalc -a 'group.ttest+orig[0]' -b 'group.ttest+orig[1]' \
       -expr 'a/b' -prefix group.sterr
I recently saw on Gang's page that the suggested procedure is now to estimate the smoothness on the residual time-series dataset from each individual analysis, and then average those estimates together to get a single smoothness triplet.
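Just to make the averaging step concrete, here is a minimal Python sketch, assuming the per-subject FWHM triplets (x, y, z, in mm) have already been estimated, e.g. with AFNI's 3dFWHMx on each subject's residual time series; the numbers below are made up:

```python
def average_fwhm(triplets):
    """Average a list of (fwhm_x, fwhm_y, fwhm_z) triplets component-wise."""
    n = len(triplets)
    return tuple(sum(t[i] for t in triplets) / n for i in range(3))

# Hypothetical per-subject FWHM estimates (mm):
subject_fwhm = [(7.8, 8.1, 7.5), (8.2, 8.4, 7.9), (7.5, 7.9, 7.2)]
print(average_fwhm(subject_fwhm))  # one triplet to feed to AlphaSim
```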
Today, while trying to explain this to a grad student, I got quite confused myself and realized I now have two fresh doubts:
1) Is it appropriate to use the smoothness of the GLM residuals at the *individual level* to generate a null-hypothesis cluster distribution for the *group analysis* of a *specific contrast*?
2) Wouldn't it be more correct to proceed in the following way instead?
- Let C1, ..., Cn be the individual contrast images that enter the group t-test (subjects 1, ..., n).
- Let <C> be the group mean of the contrast images (the 'coef' sub-brick computed by 3dttest).
- Compute the residual image for each subject:
Ri = Ci - <C>, for i = 1, ..., n
- Estimate the smoothness of each Ri => FWHM_i.
- Average all the FWHM_i to get the single smoothness value to feed to AlphaSim.
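For what it's worth, the procedure in (2) can be sketched end-to-end on toy 1-D "images". The fwhm_1d estimator below is only a rough stand-in for what 3dFWHMx does (the classical Gaussian-field FWHM estimate from the variance of spatial first differences); all data are simulated:

```python
import math, random

def fwhm_1d(img, dx=1.0):
    """FWHM estimate (in units of dx) from the ratio of the variance of
    spatial first differences to the image variance; a rough stand-in
    for a real smoothness estimator such as 3dFWHMx."""
    n = len(img)
    m = sum(img) / n
    var = sum((v - m) ** 2 for v in img) / (n - 1)
    d = [img[i + 1] - img[i] for i in range(n - 1)]
    dm = sum(d) / len(d)
    dvar = sum((x - dm) ** 2 for x in d) / (len(d) - 1)
    arg = 1.0 - dvar / (2.0 * var)
    if arg <= 0:  # data rougher than the voxel grid
        return dx
    return dx * math.sqrt(-2.0 * math.log(2.0) / math.log(arg))

def smooth_noise(n, width, rng):
    """White noise smoothed with a moving average of the given width."""
    raw = [rng.gauss(0.0, 1.0) for _ in range(n + width)]
    return [sum(raw[i:i + width]) / width for i in range(n)]

rng = random.Random(0)
n_subj, n_vox = 10, 200
C = [smooth_noise(n_vox, 5, rng) for _ in range(n_subj)]        # contrast images C1..Cn
group_mean = [sum(c[v] for c in C) / n_subj for v in range(n_vox)]   # <C>
R = [[c[v] - group_mean[v] for v in range(n_vox)] for c in C]        # Ri = Ci - <C>
fwhm_each = [fwhm_1d(r) for r in R]                                  # FWHM_i
fwhm_avg = sum(fwhm_each) / n_subj                                   # value for AlphaSim
print(fwhm_avg)
```

Note that the Ri sum to zero at every voxel by construction, so the n smoothness estimates are not fully independent; whether averaging them is then the right summary is exactly the question I'm asking.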
Any comments?
thanks in advance!
g.