Hi Christine,
For what it is worth, your two options seem identical
to me, assuming the cluster and threshold limits are
the same across tests. Have you compared them?
Either way, neither your method nor the one Gang stated
(which is what we have used for a while) quite matches the
original Monte Carlo simulation. Your extra clustering
step at the end would be much more conservative.
I have to wonder whether a reasonable correction might
start by setting each test's uncorrected p-value to the
square root of the original uncorrected p-value. Or, for
an N-way conjunction test, to the N-th root of the
original uncorrected p-value.
Then, assuming independent tests (which the null
hypothesis would imply?), the uncorrected p-value of the
conjunction would be the product of the individual ones,
(p^(1/N))^N = p, which equals the original value.
Subsequent clustering (after intersection) would then
work like the original Monte Carlo simulation.
For example, if you currently use an uncorrected p-value
of 0.01 and a minimum cluster of 100 voxels, then a
two-test conjunction would mean thresholding each map at
sqrt(0.01) = 0.1, intersecting, and clustering.
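
In case it is useful, here is a minimal sketch of that
first route (plain numpy; the p-value maps are assumed to
already be arrays of uncorrected p-values, and the
function name is made up):

    import numpy as np

    def conjunction_mask(pmaps, p_joint):
        # Threshold each of the N uncorrected p-value maps
        # at the N-th root of the desired joint p, then
        # intersect the resulting masks.
        n = len(pmaps)
        p_each = p_joint ** (1.0 / n)  # sqrt(0.01) = 0.1 for N = 2
        mask = np.ones_like(pmaps[0], dtype=bool)
        for pm in pmaps:
            mask &= (pm < p_each)
        return mask

Clustering that mask with the usual 100-voxel minimum
would then mirror the original simulation.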
Even assuming I am not way off base here, an odd
consequence would be that the conjunction results would
not have to correspond to the individual results, which
might be troubling.
Or one could go the more direct route: the simulations
could threshold each map (one per test), intersect, and
count clusters. The result would require smaller clusters
for the conjunction. But we do not have software for
this.
Again, this could produce results that are very different
from the original ones.
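
If someone did want to code that up, a toy version might
look like this (pure noise with no spatial smoothing,
which a real simulation would have to add; the function
and its parameters are just for illustration):

    import numpy as np
    from scipy import ndimage
    from scipy.stats import norm

    def min_conj_cluster(shape, n_tests, p_thresh,
                         n_iter=1000, seed=0):
        # For each iteration, threshold n_tests independent
        # null z-maps at the one-sided cutoff for p_thresh,
        # intersect, and record the largest surviving
        # cluster size.
        rng = np.random.default_rng(seed)
        zcut = norm.isf(p_thresh)
        maxima = np.zeros(n_iter, dtype=int)
        for i in range(n_iter):
            mask = np.ones(shape, dtype=bool)
            for _ in range(n_tests):
                mask &= (rng.standard_normal(shape) > zcut)
            labels, nlab = ndimage.label(mask)
            if nlab:
                maxima[i] = np.bincount(labels.ravel())[1:].max()
        # 95th percentile of the null maxima gives the
        # minimum cluster size for the conjunction
        return int(np.percentile(maxima, 95))

As expected, intersecting drives the surviving cluster
sizes down, so the required minimum cluster would come
out smaller than in the single-test case.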
The difference between these two methods is that the
latter would require smaller clusters while the former
would require a less significant uncorrected p-value.
- rick