Re: On mask preparation
ptaylor, December 31, 2020 09:28AM
Hi, J-

OK, an initial apology for the length of this somewhat wandering message...

Taking a step back here, let's consider the purpose of the mask: it delineates the subset of your volume within which you are meaningfully testing data. Indeed, The Field might somewhat overlook the fact that there is a choice in how a mask is created, but as you point out here, there certainly is, and there are a lot of considerations. What gets masked, how, and when surely depends on the purpose of a given step (and of downstream steps).

Not many people think that the air in the acquired FOV outside the brain will have meaningful activation, nor that the skull, eyes, nose or other facial tissue would, either. Therefore, excluding those regions from analysis seems pretty obvious, esp. when we are in a framework of massively univariate analysis (MUA), where we have to "correct" for the fact that we are analyzing N voxels separately and then want an overall false positive rate (FPR) that accounts for that parallel testing (e.g., using clustering or something similar as a "second level correction" on top of each voxel's statistical estimate). In the MUA framework, having a larger N (e.g., a larger mask or FOV) makes the second level correction harsher. Then we arrive at the funny situation that if we have a larger mask, calculate our second level correction, and threshold based on that, the size of our mask affects our presented results-- an odd situation.
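To make the mask-size effect concrete, here is a minimal sketch using 3dClustSim, where the mask names and the ACF parameters are hypothetical placeholders (you would use your own smoothness estimates, e.g. from 3dFWHMx):

# same smoothness estimates, two different masks: the larger mask
# (larger N) generally yields a harsher (larger) cluster-size threshold
3dClustSim -mask mask_wholebrain+tlrc -acf 0.7 3.0 5.0 -prefix ClustSim_whole
3dClustSim -mask mask_GM+tlrc -acf 0.7 3.0 5.0 -prefix ClustSim_gm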

Going back to the question of what to include in the mask, then, it might seem automatic to make as tight a mask as possible around the brain, so that we have a lower N for second level correction and a less-harsh MUA penalty to pay to achieve a given FPR. Hence, removing non-brain tissue. But as you point out, we could also remove the CSF part of the brain mask, and why not also remove the WM (for an FMRI study) from our analyzed part? Veeery roughly, let's approximate GM, WM and CSF as each about 1/3 of the brain volume-- then making a mask that includes the GM part reduces your correction-N by 2/3. Whee. This was actually something we noted when addressing some of the Great Cluster Panic of 2016, in the Discussion here:
[www.ncbi.nlm.nih.gov]
Choosing a GM-centric mask when you have MUA+clustering might be a useful additional way to control the FPR, at least in theory. (In practice, it might make little difference; more below.)
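To see how much any particular mask choice actually changes N, you can simply count the voxels, using the same hypothetical mask names as above:

# count the nonzero voxels in each candidate mask
3dBrickStat -count -non-zero mask_wholebrain+tlrc
3dBrickStat -count -non-zero mask_GM+tlrc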

Of course, *now* you need to decide: what is a good GM mask? Is it *strictly* just the GM in a template? That would ignore slightly imperfect alignment, which is always present, even with the best nonlinear alignment programs. It would also ignore the fact that MRI data has partial volume effects along tissue boundaries: our voxels are so big that voxels around the gray-white boundary probably contain both actual "WM" and "GM". Furthermore, one typically blurs data spatially, further spreading GM info into WM (and maybe into CSF?). Again, there is a choice in how one defines a GM mask, as simple a task as that might seem.
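For example, if your segmentation tool outputs a GM probability map, one hedge against partial voluming and blurring is to threshold it fairly leniently; a minimal sketch, where the file name and the 0.2 cutoff are just placeholders:

# keep any voxel with nontrivial GM probability, allowing for partial
# voluming, imperfect alignment and spatial blurring
3dcalc -a GM_prob.nii.gz -expr 'step(a-0.2)' -prefix GM_mask_lenient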

However, it should seem odd that the size of the mask-- whether you include WM or not, or exactly how big you make your GM mask-- will affect your final outcomes. This is a larger problem/issue with MUA-type analyses: the mask size will affect final outcomes, and what you choose as your second level correction will affect final outcomes (all of those correction methods, whether permutation- or cluster-based, are fixes applied separately from the initial voxelwise testing). It would be nice if we didn't have to *do* this kind of second level correction at all. What if clustering with one mask shows 4 clusters, and shrinking the mask a bit provides a less harsh correction, and then we see 5 clusters? If both masks are "reasonable" ones, which is correct?

These kinds of issues are discussed a bit here:
[pubmed.ncbi.nlm.nih.gov]
... by resident statistician Gang Chen, who has been trying to avoid the need for clustering by using a different approach to statistical modeling: Bayesian reasoning to build a hierarchical model that includes all the data in one go (instead of the first-pass MUA plus second level correction), so that it doesn't require the semi-arbitrary correction part. Adding in more regions affects the outcomes muuuuch less in the Bayesian case, effectively making masking much less of an issue. This kind of approach has a lot of appeal for interpreting data. At the moment, it does have some computation-based limitations, like probably requiring ROI-based analysis rather than voxelwise analysis, but it might be something to consider. Further discussions of it would require Gang's brainpower, rather than my own :(

Note that if your modeling and paradigm end up requiring "classical" voxelwise testing with 2nd level correction, OK, go for it. You can make a reasonable mask in a couple of ways-- the 3dmask_tool approach you first described is certainly reasonable. Making a GM mask from a tissue segmentation is also reasonable-- because of issues like imperfect alignment, smoothing, partial voluming and imperfect segmentation, I would probably use a slightly inflated GM mask. (Note that slightly inflating the GM mask will increase its volume, reducing the difference from using a whole brain mask, but that is life, I think.)
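For concreteness, here is a minimal sketch of one common 3dmask_tool group-mask recipe, where the file glob and the 0.7 overlap fraction are placeholders to adjust:

# combine per-subject EPI masks, keeping voxels present in at least
# 70% of subjects
3dmask_tool -input full_mask.*+tlrc.HEAD -frac 0.7 -prefix mask_group_olap.7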

In terms of the mechanics of getting out the GM mask, I am not sure about the format that FSL's FAST outputs tissue maps. I can comment on the commands you wrote, and you should check the results visually:
---> this one says: make a mask wherever the input dset has a value less than 3 (step(3-a) equals 1 when a < 3, and 0 otherwise), and multiply that mask by the actual input value there (output type: short)-- note that if you wanted values *greater* than 3 instead, the expression would be 'a*step(a-3)':
3dcalc -a TT_N27_grey_seg.nii.gz -prefix TT_N27_grey_seg_short -datum short -expr 'a*step(3-a)'
---> while this one says: return a copy of the input dset with dtype=short (the "-expr a" applies no math to the input):
3dcalc -a TT_N27_grey_seg.nii.gz -prefix TT_N27_grey_seg_short -datum short -expr a
I don't see why it would be necessary to convert to short, but OK?
---> and this command would resample the input dset to the Reward* grid, sure:
3dfractionize -template ./Reward_vs_Neutral_ShapeNonfixed+tlrc. -clip 0.2 -preserve -prefix TT_N27_grey_seg_short_F -input TT_N27_grey_seg_short+tlrc.
... though you might want to inflate it afterward, such as with:
3dmask_tool -input TT_N27_grey_seg_short_F+tlrc.HEAD -dilate_inputs 1 -prefix TT_N27_grey_seg_short_F_dil
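As an aside, since 3dfractionize is mainly there to handle the grid change, a simpler alternative might be 3dresample with nearest-neighbor interpolation, which keeps the integer tissue values intact; a sketch using the same file names as above:

# resample the tissue map onto the group-stats grid; NN interpolation
# preserves the integer segmentation values
3dresample -master Reward_vs_Neutral_ShapeNonfixed+tlrc -rmode NN \
           -input TT_N27_grey_seg_short+tlrc -prefix TT_N27_grey_seg_short_RS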

Re. your question here:
Quote
I am also confused why here the template need be the output from the group analysis (i.e, Reward_vs_Neutral_ShapeNonfixed+tlrc). It looks like a circular argument, as we want to get a mask for the next group analysis (e.g., 3dttest++ and/or ANOVA2). But here the template is from the group analysis output. Something must be wrong. Any good pointers?
... I don't quite understand the concern. The "template" in 3dfractionize just provides the grid for resampling your GM mask (assuming it is on a different grid to start with; if it weren't, you wouldn't need to fractionize it). Only the grid properties of the template-- matrix size, voxel dimensions, origin and orientation-- get used, not its data values, so no statistical information from the group analysis enters the mask. I don't see a circularity there.
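If useful, you can inspect the grids of the datasets directly, for example:

# print matrix dimensions (plus number of sub-bricks) and voxel sizes
3dinfo -n4 -ad3 Reward_vs_Neutral_ShapeNonfixed+tlrc TT_N27_grey_seg_short+tlrc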

Two more comments:

+ Re. masking of data: while it might make sense to mask out skull, air, etc. for the final statistical testing, throughout much of the processing we actually *don't* want to mask those out, so that we can see any potential QC problems, such as misalignment, ghosting, odd scanner artifacts, and more. To be honest, I wish people showed data that wasn't masked at all, because it would convey more useful information about their processing. Certainly, at the single subject processing level, we show results across the whole FOV in the QC generated by afni_proc.py for just this reason.

+ Re. presenting data: as Gang notes in his article above, strict thresholding of results is also partly to blame for MUA problems. Clusters come and go with mask changes precisely because of the strict thresholding used. It is better to present results as fully and transparently as possible, for example using the alpha+boxed methodology for voxelwise results, as described here:
[www.youtube.com]
and/or is used in some of the ROI-based approaches here:
[pubmed.ncbi.nlm.nih.gov]

--pt
Subject / Author / Posted:
On mask preparation -- Juan, December 28, 2020 01:57AM
Re: On mask preparation -- ptaylor, December 31, 2020 09:28AM
Re: On mask preparation -- Juan, January 03, 2021 07:21PM