Hi Rick,
thanks a lot for your quick reply. Pardon me for being somewhat thick here, but let me elaborate a bit further. If I have understood correctly how AlphaSim works, we are basically generating a 3D random Gaussian field with a certain smoothness (FWHM), thresholding it according to the specified single-voxel probability threshold, and, by repeating the process many times, building up the distribution of the sizes of clusters of connected suprathreshold voxels.
Now, I would think that these simulated datasets represent the data under the null hypothesis, right? We are trying to find a combination of single-voxel p-threshold and cluster size such that it is quite unlikely to find clusters that survive this combined threshold in a dataset representing the null hypothesis.
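Just to make sure we are talking about the same procedure, here is the kind of Monte Carlo loop I have in mind, as a toy sketch in Python with NumPy/SciPy (not AlphaSim's actual code; the grid size, FWHM, and iteration count are made up for illustration):

```python
import numpy as np
from scipy import ndimage
from scipy.stats import norm

def null_max_cluster_sizes(shape=(32, 32, 32), fwhm_vox=2.0,
                           p_thresh=0.01, n_iter=200, seed=0):
    """Monte Carlo null distribution of the largest cluster size:
    smooth Gaussian white noise, threshold voxel-wise, record the
    size of the biggest cluster of connected suprathreshold voxels."""
    rng = np.random.default_rng(seed)
    # convert FWHM (in voxels) to the Gaussian kernel's standard deviation
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    z_thresh = norm.isf(p_thresh)  # one-sided single-voxel threshold
    max_sizes = np.zeros(n_iter, dtype=int)
    for i in range(n_iter):
        field = ndimage.gaussian_filter(rng.standard_normal(shape), sigma)
        field /= field.std()  # restore unit variance after smoothing
        labels, n = ndimage.label(field > z_thresh)  # connected clusters
        if n:
            max_sizes[i] = np.bincount(labels.ravel())[1:].max()
    return max_sizes

# The cluster-size threshold at corrected alpha = 0.05 is then the
# 95th percentile of this null distribution of maximum cluster sizes.
sizes = null_max_cluster_sizes()
k_thresh = np.percentile(sizes, 95)
```

The combined threshold is the pair (p_thresh, k_thresh): a cluster in the real data is declared significant only if it survives both.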
A critical aspect in generating the null-hypothesis dataset for the purpose of estimating the cluster-size distribution is of course the degree of spatial correlation we would expect if the data that *entered* our statistical test contained only noise. But what is this noise in a group-level analysis (e.g., our group t-test), and how can we estimate it? It seems to me that it would correspond to the subject-to-subject variance with respect to the effect of interest, i.e., the random (subject) effect, rather than to the residuals of an individual GLM analysis.
In other words, I would think that the null hypothesis dataset *should* be tied to the particular test seeking to disprove that null hypothesis, and that therefore the spatial correlation should be estimated on the residuals of the input images (individual contrast images) that enter the group statistical test.
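Concretely, for a one-sample group t-test the residuals would just be each subject's contrast image minus the group mean, and the smoothness could then be estimated from those residuals. A hypothetical sketch of one classical approach (the Forman-style estimator, which assumes a Gaussian spatial autocorrelation and infers FWHM from the ratio of the variance of spatial first differences to the variance of the field; I am not claiming this is what AlphaSim or any particular AFNI program does):

```python
import numpy as np

def fwhm_from_residuals(contrasts):
    """Estimate spatial smoothness (FWHM, in voxel units) from the
    residuals of a one-sample group test, assuming a Gaussian spatial
    autocorrelation. contrasts: array of shape (n_subjects, x, y, z)."""
    resid = contrasts - contrasts.mean(axis=0)  # group-test residuals
    fwhms = []
    for axis in (1, 2, 3):  # each spatial axis in turn
        d = np.diff(resid, axis=axis)
        # lag-1 spatial autocorrelation along this axis:
        # var(d) = 2 * var(resid) * (1 - r1)
        r1 = 1.0 - d.var() / (2.0 * resid.var())
        # for a Gaussian ACF, r1 = exp(-1 / (4 sigma^2)), and
        # FWHM = 2 * sqrt(2 ln 2) * sigma, hence:
        fwhms.append(np.sqrt(-2.0 * np.log(2.0) / np.log(r1)))
    return np.mean(fwhms)  # average over the three axes
```

The point being that this estimate is tied to the images that actually enter the group test, rather than to the residuals of the individual-subject GLMs.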
I am obviously not sure that this line of reasoning is correct either (and I guess you already suggested that it is not), but I just wanted to double-check in case I didn't explain myself well enough in the first mail.
thanks again
g.