AFNI Message Board

Dear AFNI users-

We are very pleased to announce that the new AFNI Message Board framework is up! Please join us at:

https://discuss.afni.nimh.nih.gov

Existing user accounts have been migrated, so returning users can log in by requesting a password reset. New users can create accounts as well, through the standard account creation process. Please note that these setup emails might initially go to spam folders (especially for NIH users!), so please check those locations in the beginning.

The existing Message Board discussion threads have been migrated to the new framework. The old Message Board will remain visible, but read-only, for a little while.

Sincerely, AFNI HQ

September 20, 2016 10:55AM
Hi AFNI,

I have a few questions about the correct implementation of whole-brain cluster correction in AFNI. Given the recently identified bug in 3dClustSim and the subsequent development of fixes and new tools to improve the accuracy of cluster-wise correction, I am revisiting several analyses and want to make sure I am on the right path. FYI, I am using an updated version of AFNI (May 11, 2016).

1) One of my analyses uses 3dttest++ at the group level. For this analysis, I plan to try the non-parametric approach to cluster-size thresholding, as described in "AFNI and Clustering: False Positive Rates Redux" by Cox, Reynolds, and Taylor. Would this approach be preferred over the -acf solution?
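In case a concrete invocation helps: a minimal, hypothetical sketch of the non-parametric route via the -Clustsim option of 3dttest++, which runs the residual-randomization cluster-size simulations itself and writes cluster-threshold tables under the output prefix. All dataset names, subject labels, and the sub-brick selector below are placeholders, not taken from the original post:

```shell
# Hypothetical sketch (filenames/labels are placeholders):
# -Clustsim makes 3dttest++ randomize/permute residuals and
# generate the cluster-size threshold tables alongside the stats.
3dttest++                                  \
    -setA grpA                             \
        s01 stats.s01+tlrc'[Coef]'         \
        s02 stats.s02+tlrc'[Coef]'         \
    -mask grpmask.nii                      \
    -prefix TTest.grpA                     \
    -Clustsim
```

The cluster tables written under the given prefix would then take the place of a separately run 3dClustSim table.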

2) My other analysis uses 3dMVM at the group level. For this, I would like to use the new -acf approach implemented in 3dFWHMx and 3dClustSim, and I want to make sure I am using this method correctly.

First, I’m using the following command to determine the ACF model parameters (a,b,c) for each subject’s individual-level errts time series file output from 3dDeconvolve:

3dFWHMx -detrend -ACF temp.1D -mask ./full_mask.${subj}+orig ./errts.${subj}+orig >> blur_errts.${subj}.1D

Note: The errts.${subj}+orig file in the command above is concatenated across 5 runs; however, in my full script I include additional code to ensure that the detrending and -ACF calculations are done separately for each of the five runs. This yields different ACF model parameters for each run, which I then average, parameter by parameter, across runs for each subject.
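The per-run averaging step can be sketched outside of AFNI. For instance, if the per-run (a,b,c) rows are collected into one text file, a small awk command averages each column; the file name and the values below are made up purely for illustration:

```shell
# Hypothetical per-run ACF estimates, one "a b c" row per run
# (values are illustrative, not from the original post):
cat > acf_runs.1D <<EOF
0.70 3.10 10.20
0.80 3.30 10.90
0.85 3.45 11.00
EOF

# Average each column across runs to get one (a,b,c) triple per subject
awk '{a+=$1; b+=$2; c+=$3; n++}
     END {printf "%.4f %.4f %.4f\n", a/n, b/n, c/n}' acf_runs.1D
# prints: 0.7833 3.2833 10.7000
```

The same column-averaging applies one level up, across subjects, to produce the group-level (a,b,c) values handed to 3dClustSim.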

Then, for cluster-wise correction at the group level, I calculate the average a, b, and c parameters across all subjects, which are input into the following:

3dClustSim -mask grpmask.nii -acf 0.7830668 3.295774 10.68484 -prefix Clust.WLgroup.1D

Is this correct, or should I instead be estimating the ACF parameters from something output by the group analysis (as I believe is done with the new -Clustsim option in 3dttest++)?

Also, because this is a connectivity analysis using manually traced hippocampal regions, I wanted to keep the individual-level 3dDeconvolve analysis in native space. I have then been transforming the stats+orig output from 3dDeconvolve into standard space before moving to the group analysis, but I realize that my ACF parameters are estimated on the native-space data. Is this an issue at all?

3) My final, somewhat unrelated question concerns something I heard recently regarding the use of 3dClustSim for whole-brain correction. I have been using 3dClustSim for whole-brain correction for some time, but was recently told that in some circles this practice is not currently accepted; rather, it "might" only be appropriate for cluster correction across smaller, a priori regions of interest. Unfortunately, I don't have any further information from the source, and I have been unable to find any hint of this discussion online. Do you have any insights about this?

Thank you in advance for your help!
Subject | Author | Posted
Cluster Correction - reposting in hopes of an answer | Liesel-Ann Meusel | September 20, 2016 10:55AM
Re: Cluster Correction - reposting in hopes of an answer | rick reynolds | September 21, 2016 03:47PM