I have two rather basic questions regarding Matthew Belmonte's permutation test, as implemented in the AFNI plugins. The first one relates to the mask that it expects. It looks like we have to use the Threshold plugin to create a mask. We tried to select a mask created with 3dAutomask, but Permutation doesn't seem to recognize it, even though it's also a binary intensity fim. Threshold does not seem to allow us any adjustments (such as -dilate in 3dAutomask), and we end up with masks that are too small. Editing in Draw Dataset is quite tedious. So I wonder whether Permutation can be cajoled into accepting 3dAutomask outputs?
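For reference, this is the kind of dilated mask we have been generating (the dataset name is just a placeholder; whether Permutation will accept the result is exactly the open question):

```shell
# Build a binary brain mask, dilating the automask result by 2 voxels
# to avoid the too-small masks we get from Threshold.
# "func+orig" is a placeholder for the actual input dataset.
3dAutomask -dilate 2 -prefix automask func+orig
```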
The second question concerns group statistics based on permutation tests in individual subjects. I haven't been able to find any discussion in Matthew's IEEE (2001) paper, but I assume (following Greg Allen's suggestions) that the most straightforward approach is to determine significance from the probability of detecting significant activation in a given voxel in x subjects out of a total sample of n subjects. It seems to me that whatever is gained by more realistic correction and thresholding in individual subjects is offset by the loss of effects that fall slightly below significance in the single-subject analyses. (This obviously depends on the power and signal of each single-subject study.) Suppose 8/10 subjects show mean signal increases for an experimental condition, but these do not survive conventional Bonferroni or clusterwise correction, and are p>.05 on the permutation test. On a conventional one-sample t-test using mean signal change or fit coefficients, this voxel may be "significantly activated", but not in the permutation-based group analysis. In fact, this is exactly what we're seeing in an older dataset (at 1.5T with 98 reps).
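To make the x-of-n idea concrete, here is a minimal sketch (not part of AFNI; the per-voxel alpha of .05 is an assumed threshold) of the binomial probability of seeing suprathreshold activation in at least x of n subjects by chance:

```python
from math import comb

def prob_at_least(x, n, p):
    """P(X >= x) for X ~ Binomial(n, p): the chance that x or more
    of n subjects show a suprathreshold voxel under the null."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(x, n + 1))

# Example: 8 of 10 subjects, each thresholded at a per-voxel alpha of .05
p_group = prob_at_least(8, 10, 0.05)
```

Of course, this combination rule is exactly what loses the case described above: each subject contributes only a binary hit/miss, so consistent subthreshold effects never count.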
In a nutshell, my question is: Are there better alternatives for group analyses based on permutation tests?
Thanks,
Axel