Hi Brady,
Paul has many good things to say, of course. One thing to focus on is simply that your computer, in its current state, is not really up to that computation, even at 1000 iterations. Taking more than a day for 1000 iterations means it is already thrashing: spending most of its time swapping memory in and out because it does not have enough RAM for the job. Doing 10 times as many iterations may well take far more than 10 times as long, because memory use will increase and the thrashing will get worse.
Exactly how big are those inputs?
3dinfo -n4 -datum p04_DN+tlrc
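If it helps to turn that output into bytes, the dataset size is roughly the product of those four numbers times the datum size. A minimal sketch, where the grid dimensions and sub-brick count are made-up placeholders (use your own `3dinfo` numbers), assuming 4-byte float data:

```shell
# Rough dataset size from 3dinfo -n4 output.
# These example values are hypothetical, not from your data.
nx=96; ny=96; nz=80   # grid dimensions
nvals=20              # number of sub-bricks (input volumes)
bytes_per_val=4       # assuming float (4-byte) datum

total=$((nx * ny * nz * nvals * bytes_per_val))
echo "approx dataset size: $((total / 1024 / 1024)) MB"
```

That is just the raw input size; the program's working memory will be larger.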
For comparison, I went to AFNI_data6/group_results and modified s5.ttest.paired to be more like your command:
- added -Clustsim and maybe -DAFNI_TTEST_NUMCSIM=1000
- deleted the -setB line, so there would be 20 input dsets
Running this took 5 minutes on my 16 GB RAM laptop, and 7 minutes on my weaker 8 GB RAM laptop. The data resolution might be lower than yours, but the point is that if your computer has enough resources, the time should be in that range, scaled in some fashion by the number of voxels.
Consider that for 1000 or 10000 iterations, the program runs t-tests and cluster simulations on the residuals. That means storing a time series of length 1000 or 10000 in RAM at every voxel to do so. When I tried 10000, my 8 GB RAM laptop started thrashing somewhere around iteration 2000. There is no way it would make it to 10000.
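A rough back-of-envelope for that storage, assuming one 4-byte value per voxel per iteration (the in-mask voxel count below is a hypothetical placeholder; plug in your own count from `3dinfo`):

```shell
# Memory just for the stored per-voxel iteration series.
# nvox is hypothetical; substitute your own in-mask voxel count.
nvox=400000
bytes_per_val=4

for niter in 1000 10000; do
    mb=$((nvox * niter * bytes_per_val / 1024 / 1024))
    echo "$niter iterations: ~$mb MB for the stored series alone"
done
```

With numbers like these, 1000 iterations costs on the order of 1.5 GB and 10000 costs ten times that, before counting anything else the program holds, which is why an 8 GB machine falls over well short of 10000.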
Maybe there are other processes on it that could be shut down, such as web browsers or MS programs (Word, PowerPoint, etc.). Those use a lot of RAM. But it seems highly unlikely that your 8 GB iMac would be able to run to 10000 iterations. And if this data is higher resolution than mine (my test was with 2 mm voxels), game over.
Because it needs to store all of those iteration results, the program needs substantially more RAM than a simple t-test would. Do you have a better machine it could be run on?
- rick