AFNI Message Board

March 23, 2021 09:47PM
Hi again!
I have been using a computing cluster at our university to run 3dDeconvolve. Recently, I started working with a new dataset (EPI resolution 2x2x2mm) for which I have to run trial-by-trial 3dDeconvolve models for 162 trials, and I found that each trial's model takes about 6 hours to run. In the past, I was able to bring down the computation time by increasing the number of jobs to 24 with the -jobs flag in the 3dDeconvolve command, and asking the job scheduler on the cluster to place the 3dDeconvolve processes on separate cores.
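For context, each trial's model is run with a command along these lines; the input files, basis function, and regressor setup below are simplified placeholders rather than my actual design:

    # Per-trial deconvolution, spread across 24 cores with -jobs.
    # File names, the 'BLOCK(2,1)' basis, and ${trial} are placeholders.
    3dDeconvolve                                              \
        -input epi_run1.nii.gz epi_run2.nii.gz                \
        -polort A                                             \
        -num_stimts 2                                         \
        -stim_times 1 stim_trial_${trial}.1D 'BLOCK(2,1)'     \
        -stim_label 1 thisTrial                               \
        -stim_times 2 stim_other_${trial}.1D 'BLOCK(2,1)'     \
        -stim_label 2 otherTrials                             \
        -jobs 24                                              \
        -bucket stats_trial_${trial}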

However, with the new dataset, I'm not able to increase the number of jobs beyond 12, because each job seems to require that approximately 125GB of memory be allocated to it (presumably because of the higher resolution of the EPI), and we only have a limited number of nodes with that much memory available. I worked with the IT folks who maintain the cluster on this, and they said that the processes never actually use that much memory (i.e., 125GB), but the job terminates if that much memory is not allocated through the job scheduler.
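To be concrete about the resource request, the batch script for each job currently asks for something like this (I'm showing SLURM syntax as an example; our scheduler's directives may differ slightly):

    # Scheduler request per trial-level job (SLURM syntax shown as an example).
    # The numbers are what I have to request for the job to finish, not what
    # the monitoring tools say the processes actually use.
    #SBATCH --cpus-per-task=12     # one core per 3dDeconvolve job (-jobs 12)
    #SBATCH --mem=125G             # job terminates if less than this is allocated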

So, finally, to my question: is there any way to bring down the computing time required for 3dDeconvolve, other than increasing the number of jobs? Alternatively, is there some way to resolve the issue of 3dDeconvolve requiring more memory to be allocated to the job than it actually ends up using? If I could do that, I would be able to increase the number of jobs, since the cluster has sufficient resources as long as I'm not requesting 125GB for each job.

I'm not super familiar with these concepts, so please let me know if I'm missing something/if you need more information from me!
Thank you,
Mrinmayi
Subject                                   Author      Posted
Allocating memory for 3dDeconvolve        mrinmayik   March 23, 2021 09:47PM
Re: Allocating memory for 3dDeconvolve    RWCox       March 24, 2021 10:04AM