AFNI Message Board


November 25, 2022 11:09AM
Hi, Brady-

OK, understood about the hardware constraints.

Re. 1) Wow, that is a huge number of voxels in the mask! Can I ask what spatial resolution the output data has? For a 2.5 mm isotropic EPI output dataset in the Bootcamp example, there are only 68,809 voxels. So, are your output voxels around 0.75 mm isotropic or so?
-> and sorry to hear your computer got bored with AFNI running and would go to sleep, but glad that was resolvable and that the processing time became reasonable.

Re. 4) There are different routes to take when forming a group mask. The main idea is that you want something representative of the group. Two contending ways to do this might be:
A) use the strict intersection of masks across the group
B) use something less strict like voxels that overlap 70% of the group masks.
("Union" might be a bit overly generous; a higher percentile overlap might be preferable, as above.) In both cases, you would probably be using the mask_epi_anat*HEAD dsets, if using afni_proc.py. I guess you might consider A preferable if the per-subject masks themselves are less strict, and B if they are more strict. This is a fuzzy response, perhaps, but there are likely multiple reasonable ways to go ("semi-arbitrary" choice).
+ In the NARPS processing in the "Highlight, Don't Hide..." paper's GitHub code, the afni_proc.py processing had the 'blur' block precede the 'mask' block; with the masks made that way, we then used strict intersection to generate the group-level mask for cluster simulation:
3dmask_tool                                                                  \
    -prefix  group_mask.inter.nii.gz                                         \
    -frac    1.0                                                             \
    -input   ${all_mask}
+ In another recent processing project, the afni_proc.py processing had the 'mask' block precede the 'blur' block; we didn't create a group-level mask for it, but we might lean toward a 70% overlap in that case:
3dmask_tool                                                                  \
    -prefix  group_mask.7.nii.gz                                             \
    -frac    0.7                                                             \
    -input   ${all_mask}
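Conceptually, both 3dmask_tool calls just take the voxelwise mean of the binary masks and keep voxels where that mean meets the -frac value; a toy numpy sketch of that logic (made-up arrays standing in for 5 subjects' masks, not real AFNI datasets):

```python
import numpy as np

# toy stand-ins for 5 subjects' binarized EPI masks (real ones would be 3D);
# the 3 "voxels" are covered by 5/5, 4/5, and 3/5 subjects, respectively
masks = np.array([[1, 1, 1],
                  [1, 1, 1],
                  [1, 1, 1],
                  [1, 1, 0],
                  [1, 0, 0]])

frac = masks.mean(axis=0)                # fraction of subjects per voxel

inter_mask = (frac >= 1.0).astype(int)   # like 3dmask_tool -frac 1.0
olap_mask  = (frac >= 0.7).astype(int)   # like 3dmask_tool -frac 0.7

print(inter_mask)  # [1 0 0] : strict intersection
print(olap_mask)   # [1 1 0] : 70% overlap also keeps the 4/5 voxel
```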

Re. 5) There is a bit of guidance on reading the table output here:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6328330/
... in Fig. 2 (and a bit more in Fig. 3).

In the cluster table output, there are simulation-based cluster-size thresholds (as voxel counts) for many different scenarios. A scenario in this case is: what per-voxel p-value do you want to threshold at, and then what FWE value (which is what "alpha" is) do you want the final (adjusted) result to approximately be at? The p-value choices are listed in the row labels, and the FWE (or alpha) values are listed in the column labels. Typically, people want an FWE of 5%, which is "0.05" there---that is pretty standard. On the p-value side, people often tend to like 0.001 because... well, because. The paper cited in #4 points out how adjustments need to be made if using software that performs pairs of one-sided tests separately; if you don't adjust, you end up with a false positive rate more than double what you think you have. Typically, bisided testing would be most appropriate for most analyses. (But again, the "Highlight, Don't Hide..." paper discusses why visualizing sub-threshold results is still very important.)

The question of, "Why do cluster thresholds get smaller as the p-value shrinks?" comes up a lot. It does seem counterintuitive, but recall what is being done here: we say that we want to threshold the voxels at some p-value (say, 0.001), but because we have so many voxels being thresholded at once, we don't think that that 0.001 accurately reflects the false positive rate in our results. There is a call to do a "multiple comparisons adjustment". Clustering is an ad hoc way of doing that, suggesting that clumps of low-p voxels should be more believable than individual ones (the latter being more likely due to scattered noise, say). So, this is a 2-step process:
+ first, threshold by p-value and get "islands" of candidate locations
+ then, filter those islands to keep only the ones "big enough" to be believable at a given FWE
So, the cluster size needed to achieve a given FWE depends on the first thresholding that was done. And as the thresholds in the first "cut" get stricter, *all the islands* get smaller; therefore, the clustersize threshold to choose among them *also gets smaller*.
Thus, as you move "up" an FWE column with shrinking p-values, the clustersize threshold shrinks, because all the islands you are filtering have also shrunk.
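That two-step logic can be seen in a toy simulation (a pure Python sketch with made-up parameters, not an AFNI command): threshold a smoothed noise image at two different p-values and measure the surviving islands with scipy.ndimage.label. Since the voxels surviving p=0.001 are a strict subset of those surviving p=0.01, every island can only shrink or vanish as p tightens:

```python
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(0)
# smoothed Gaussian noise: a stand-in for a stat map with spatial correlation
img = ndimage.gaussian_filter(rng.standard_normal((64, 64)), sigma=2)
img /= img.std()                         # re-standardize to unit variance

for p in (0.01, 0.001):
    z_thr = stats.norm.isf(p)            # one-sided z threshold for this p
    lab, n = ndimage.label(img > z_thr)  # find suprathreshold "islands"
    sizes = np.bincount(lab.ravel())[1:] # island sizes (skip background)
    biggest = sizes.max() if n else 0
    print(f"p={p}: {n} islands, biggest = {biggest} voxels")
```

With any seed, the biggest island at p=0.001 is never larger than at p=0.01, which is exactly why the tabled clustersize thresholds shrink as p shrinks.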

Re. 6) I'm a little confused by the terminology here. If I wanted to visualize the output dataset from "3dttest++ -Clustsim ...", I would:
+ Use the UnderLay button to load the anatomical reference (say, template data) as underlay
+ Use the OverLay button to load the output dataset as the overlay; this has multiple sub-bricks in it, and I would:
- go to the "OLay" selector in the overlay panel and use the mean volume as the overlay (the colors to view)
- go to the "Thr" selector in the overlay panel and use the Zscr volume as the thresholding volume (to select voxels to see by significance).
+ Then you can select the threshold value based on your p-value threshold of choice (e.g., right-click on "Thr" next to the "A" and "B" buttons, and choose "Set p-value")
+ Then click Clusterize and enter your chosen NN, sidedness, and cluster-output value from the table (consistent with your p-value threshold).
That would be a standard view of your clusterized results. ***BUT***, having read the "Highlight, Don't Hide..." paper, I would also:
+ Turn on the "alpha" and "boxed" functionality by clicking the "A" and "B" buttons, because sub-threshold results matter, too.
+ And actually, I might consider calculating a straight-up t-test without masking and visualizing *its* effect estimate and t-stat volume, applying the same thresholding+clustering and the "A" and "B" buttons, and viewing the results there. These should look the same, except that I could see everywhere in the whole FOV---that way, I would be more certain that no artifacts or anything are leaking into my brain results.
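As a side note on the "Set p-value" step: the threshold value the GUI sets is just the matching quantile of the statistic's null distribution, which you can sanity-check yourself. A quick scipy sketch (the df=30 here is an arbitrary example value; use your own dataset's degrees of freedom, and AFNI's p2dsetstat program can do a similar conversion directly from a dataset's stat sub-brick):

```python
from scipy import stats

p, df = 0.001, 30                      # df=30 is a made-up example value

t_bisided  = stats.t.isf(p / 2.0, df)  # two-sided/bisided threshold
t_onesided = stats.t.isf(p, df)        # one-sided threshold

print(f"bisided t threshold:   {t_bisided:.3f}")
print(f"one-sided t threshold: {t_onesided:.3f}")
```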

--pt