Thanks Gang!
I'm just thinking out loud, maybe this is unreasonable. Big data is getting more and more popular, and it's not rare anymore to run statistics on 2000-10000 subjects. 3dMVM seems to be able to handle huge datasets, but the -resid option cannot, even on a 500 GB RAM system. Larger systems are rare.
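For context, this is roughly the kind of call that runs out of memory (file names, the factor, and the job count are just placeholders, not my actual setup):

```bash
# Hypothetical 3dMVM call with residual output; all names are made up.
3dMVM -prefix MVM_result                          \
      -jobs 8                                     \
      -mask group_mask+tlrc                       \
      -bsVars 'Group'                             \
      -resid MVM_resid                            \
      -dataTable                                  \
      Subj   Group  InputFile                     \
      s0001  ctrl   s0001_stats+tlrc'[beta#0]'    \
      s0002  pat    s0002_stats+tlrc'[beta#0]'
      # ...with thousands more rows, adding -resid exhausts even 500 GB of RAM
```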
Are you planning to update these functions in the future to support big datasets? Maybe have them write temporary files to disk instead of keeping everything in RAM? I have no idea what I'm talking about, but I think this is a relevant topic for the future =).
FSL's randomise also has a memory issue if you want to parallelize it (randomise_parallel), and running it normally with 6k subjects takes days for a single contrast. Big data has its issues...
Another 3dMVM -resid question:
When running a standard AFNI pipeline with group analysis in 3dMVM, the usual process has been to use the residuals from preprocessing to estimate the smoothness of the data: 3dFWHMx -acf -> 3dClustSim.
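For reference, here is a minimal sketch of that standard route as I understand it (file names and the ACF values passed to 3dClustSim are placeholders; normally the per-subject ACF estimates are averaged across subjects first):

```bash
# Estimate ACF parameters (a b c) from one subject's preprocessing
# residuals; in practice this is looped over all subjects and averaged.
3dFWHMx -mask group_mask+tlrc -acf NULL errts.s0001+tlrc > acf.s0001.1D

# Feed the group-averaged ACF parameters to 3dClustSim
# (0.58 2.9 12.5 are dummy example values, not real estimates):
3dClustSim -mask group_mask+tlrc -acf 0.58 2.9 12.5 \
           -athr 0.05 -pthr 0.001 -prefix ClustSim
```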
In this standard approach, would it also be OK to use the 3dMVM residuals instead of the preprocessing residuals?
Is this reasonable for both of these cases (rough sketch after the list)?
- task: input is a stats-file sub-brick
- rest: input is e.g. an R-map based on the residuals (errts)
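If it is acceptable, here is how I imagine the 3dMVM-residual variant of the smoothness step would look (just my guess at the usage, with the MVM_resid name carried over from the hypothetical call above):

```bash
# Estimate the ACF parameters from the 3dMVM residuals instead of
# the per-subject errts; the 3dClustSim step would stay the same.
3dFWHMx -mask group_mask+tlrc -acf NULL MVM_resid+tlrc > acf.MVM.1D
```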
Thanks!