Hello AFNI experts,
I'm running a very straightforward 3dDeconvolve script that works without problems. However, memory usage is currently 1.1 GB for a single session (a 112x112x30 grid with 1548 time points), and we plan to concatenate data from several sessions, which would push memory usage above 2 GB. Since my computer has only 2 GB of RAM, I expect this to cause an enormous amount of swapping, if the program does not crash outright.
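(For reference: 112 x 112 x 30 x 1548 is about 5.8e8 values; at 2 bytes per value, assuming the data are stored as shorts, that comes to roughly 1.1 GB, which matches what I see.)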
Since 3dDeconvolve fits each voxel independently (a mass-univariate approach), my question is whether memory usage can be reduced significantly, possibly at the cost of a small increase in CPU time or I/O.
I'm using 3dDeconvolve with both the multiple-jobs option and a mask. I observed that the mask does not decrease memory usage: the whole dataset is still loaded (CPU time *is* decreased, though). This makes sense, I guess, because loading only the voxels within the mask would force the hard drive to do a lot of seeking.
Nevertheless, it seems to me that memory usage could be reduced by loading only part of the dataset at a time. For example, the data could be loaded and processed slice by slice. I haven't found such an option in 3dDeconvolve; am I missing something, and if not, would this be hard to implement?
Note that I found another post [1] in which 3dDeconvolve is run slice by slice using a mask, but since the whole dataset is loaded anyway, I don't expect this to reduce memory usage.
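For concreteness, here is a rough, untested sketch of the kind of workaround I have in mind: physically cut the dataset into slabs with 3dZcutup, run 3dDeconvolve on each slab, and glue the resulting buckets back together with 3dZcat. The dataset name, slab boundaries, and the single regressor are placeholders for my actual design.

#!/bin/tcsh
# Cut the 30-slice dataset into six 5-slice slabs, analyze each one
# separately, then reassemble. epi_all+orig and stim.1D are placeholders.
set bots = ( 0  5 10 15 20 25 )   # first slice of each slab
set tops = ( 4  9 14 19 24 29 )   # last slice of each slab
foreach i ( 1 2 3 4 5 6 )
    # extract one slab; peak memory should be ~1/6 of the full dataset
    3dZcutup -keep $bots[$i] $tops[$i] -prefix slab_$i epi_all+orig
    3dDeconvolve -input slab_$i+orig    \
                 -polort 2              \
                 -num_stimts 1          \
                 -stim_file 1 stim.1D   \
                 -bucket bucket_$i
    # the slab is no longer needed once its bucket is written
    \rm slab_$i+orig.*
end
# glue the per-slab buckets back into a whole-brain bucket
3dZcat -prefix bucket_all bucket_?+orig.HEAD

If I'm reading the 3dZcutup/3dZcat help correctly, peak memory should drop roughly in proportion to the slab size, at the cost of some bookkeeping and repeated per-slab setup. But this feels like something 3dDeconvolve could do internally.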
Furthermore, am I the only person dealing with this problem? Is there another solution, other than resampling to a coarser grid, putting up with a machine that swaps a lot, or begging my supervisor to buy more memory?
thanks,
nick
[1] [afni.nimh.nih.gov]