Hi AFNI team,
I'm running 3dDeconvolve on my dataset and the process often gets killed. The dataset is quite large: 32 runs of 10 minutes of scanning each, with each run about 400 MB.
I have run it on several different computers, and whether the process gets killed does not seem to depend solely on the amount of memory.
Here are my questions:
1) Why does the process get killed on some computers that appear to have enough memory and disk space? For example, I ran this on two computers, both with 64 GB of RAM and more than 500 GB of free disk space. On one it always finished successfully, and on the other it was always killed. What could explain the difference, the age of the computers?
2) Is there anything I can do to reduce the memory 3dDeconvolve uses, so that I can run it on machines with less RAM?
3) Currently the process takes a full 5 days and I'm running up against a deadline. Is there any way to speed it up?
4) Could 2) and 3) be addressed with the parallel-job option in 3dDeconvolve? My computers usually have 4 cores, so I would use -jobs 4 (see the sketch below for the kind of command I have in mind). But would this require a large amount of memory for each core?
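
For reference, here is a rough sketch of the kind of command I have in mind. The dataset names, mask, and stimulus timing files are placeholders rather than my actual inputs, and I'm not sure these are the best option choices:

  # placeholder file names; assumes runs are named run01..run32
  3dDeconvolve \
      -input run??+orig.HEAD \
      -mask brain_mask+orig \
      -polort A \
      -num_stimts 2 \
      -stim_times 1 stim1_times.1D 'GAM' -stim_label 1 stim1 \
      -stim_times 2 stim2_times.1D 'GAM' -stim_label 2 stim2 \
      -jobs 4 \
      -bucket stats_subj

My thinking is that -mask would restrict the regression to in-brain voxels and -jobs 4 would use all 4 cores, but I don't know how either affects the peak memory use.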
Thank you so much!
--Lingyan