>> It seems that the memory crash happened when 3dMVM was trying to save the results to the output file. One easy solution is to break the input files into a few chunks. For example, suppose there are 90 slices along the Z-axis, use 3dZcutup to create 3 separate files: one for slices 1-30, one for 31-60, and one for 61-90. Do this in a loop for all the input files. Then run 3dMVM for each of the three chunks separately. In the end glue them back with 3dZcat.
Thanks, good idea! To be precise, it crashes when trying to save the resulting residuals: I do get the main output files, just not the residual output files.
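For anyone else hitting this, the chunking workflow described above might look roughly like the sketch below. It is a dry run that only echoes the commands rather than executing them, and the file names (subj1.nii, subj2.nii) and the elided 3dMVM options are placeholders for your actual inputs and model:

```shell
#!/bin/sh
# Dry-run sketch of the chunk / analyze / re-glue workflow.
# Assumes 90 axial slices split into 3 chunks; adjust to your data.
nz=90
nchunks=3
per=$((nz / nchunks))                     # 30 slices per chunk

for f in subj1.nii subj2.nii; do          # loop over all input datasets
  base=${f%.nii}
  for c in 1 2 3; do
    bot=$(( (c - 1) * per ))              # 3dZcutup uses 0-based slice indices
    top=$(( c * per - 1 ))
    echo 3dZcutup -keep $bot $top -prefix ${base}_chunk$c $f
  done
done

# Run the analysis once per chunk, with the same model each time:
for c in 1 2 3; do
  echo 3dMVM -prefix mvm_chunk$c ...      # same -dataTable etc., chunked inputs
done

# Finally glue the per-chunk results back together along Z:
echo 3dZcat -prefix mvm_full mvm_chunk1+tlrc mvm_chunk2+tlrc mvm_chunk3+tlrc
```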
>> FWIW, there is no approach that could be considered "standard". You're trying to draw a rigorous line in the sand, but there are so many factors up for debate.
I understand your position here and I agree. From previous debates I gather that your approach (which I like) is to report "everything" that is reasonable and interesting, and to be open about the statistics: publish all scores, effect sizes, and p-values.
But AFNI, SPM, and FSL are used by researchers who want to publish, and you need to communicate your results to the journal you submit to. In SPM you can check a box and it automatically gives you FWE-corrected results (which usually feel less conservative, but that's another discussion). Then you simply write that the results came from SPM and are FWE corrected/adjusted, and the reviewer will understand this and see the results as "significant". In AFNI we have to calculate smoothness (or the ACF from the residuals), run simulations, and then look at a cluster table. Then we have to read some papers and documentation to realize that the cluster sizes are only reasonable with voxelwise p-values at or below 0.002 (from the "FMRI Clustering in AFNI: False Positive Rates Redux" paper).
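Concretely, the "calculate smoothness and run simulations" part usually means something like the two steps below. This is a dry-run sketch (commands are echoed, not executed), the dataset names are placeholders, and the three ACF numbers passed to 3dClustSim are made-up stand-ins for the values 3dFWHMx actually reports:

```shell
#!/bin/sh
# Step 1: estimate the spatial autocorrelation (ACF) from the residuals.
acf_cmd="3dFWHMx -acf -mask mask_group+tlrc -input errts.subj+tlrc"
echo "$acf_cmd"

# Step 2: feed the three ACF parameters (a, b, c from step 1; the numbers
# below are placeholders) into 3dClustSim to get the minimum cluster size
# at, e.g., voxelwise p=0.002 and cluster-level alpha=0.05.
sim_cmd="3dClustSim -acf 0.5 4.0 9.0 -mask mask_group+tlrc -pthr 0.002 -athr 0.05"
echo "$sim_cmd"
```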
Quote
ACF parameters (which are computed by the standard afni_proc.py pipeline), and a voxelwise p=0.001 or p=0.002 is reasonably safe with this method
So, in order to be able to write that the p-values are adjusted/corrected for multiple comparisons / FWE, you have to do all of this, or use the permutation testing in 3dttest++ (which in my experience is super conservative). The reviewer wants to know (right or wrong) whether the findings are significant.
And don't get me wrong, I love AFNI, and it feels so much more honest and in control this way. But I also hope you understand that it is kind of frustrating not to have an "official" way to do these things =).
I just hope reviewers are getting a bit more open-minded about what a significant finding is.
Thanks again Gang!