Re: 3dDeconvolve stalls at "current memory mallocated"
rick reynolds | January 22, 2020 03:53PM
Your model is huge. There are 3750 time points but 3612 regressors, leaving only 3750 - 3612 = 138 degrees of freedom for the fit. Even once you get the results, they will basically be noise.

It may seem surprising that this takes so much memory, but it comes from generating statistics for all of the sub-models. So having 3612 regressors makes a big difference there, as well as to the overall speed of the computation.

And the computation would be slow to begin with, but it seems you are using all of the RAM, in which case the program is probably thrashing (constantly swapping memory pages in and out to disk).
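One way to confirm the thrashing is with a generic system monitor rather than an AFNI tool; a minimal sketch (the 5-second interval is just an example):

   # Report memory and swap activity every 5 seconds while
   # 3dDeconvolve runs; sustained nonzero values in the si/so
   # (swap-in/swap-out) columns mean the machine is thrashing.
   vmstat 5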

On to your questions...

1. Since the command does not include -fout, -fitts, or similar options, there is probably no good way to reduce RAM use other than something like 3dZcutup. On the flip side, I think you are over-modeling the data, and this analysis might not be so great to begin with.

2. It would take some looking into, but you are probably better off not using any temporary files.

3. Are you running 3dDeconvolve per run now? Using 3dZcutup would break the data into slabs of slices that could each be handed to 3dDeconvolve, with the results put back together afterward. The point is not to run a parallel analysis, it is to run sequentially on smaller datasets to save RAM (see the sketch after this list).

This is not a very convenient way to go.

4. 3dDeconvolve has a -jobs option to spread the computation across multiple CPUs (also used in the sketch below). That is a good way to speed up the analysis, but it will not prevent the RAM problem: if the program has used all of the RAM and is thrashing, it will still be very slow.
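To make 3 and 4 concrete, here is a minimal sketch of the slab-wise workflow. The dataset names, slice ranges, slab count, and stimulus options are placeholders rather than your actual command, and it assumes the companion program 3dZcat for putting the slabs back together:

   # Hypothetical sketch: names, slice ranges, and stim options are
   # placeholders, not taken from the original command.

   # 1. Cut the EPI time series into two z-slabs (slices 0-19, 20-39).
   3dZcutup -keep 0 19  -prefix epi.slab1 epi_all_runs+orig
   3dZcutup -keep 20 39 -prefix epi.slab2 epi_all_runs+orig

   # 2. Run the identical regression on each slab in turn; each run
   #    needs RAM only for its slab, not for the whole volume.
   #    The -jobs option (question 4) spreads each fit over 4 CPUs.
   for slab in slab1 slab2 ; do
       3dDeconvolve -input epi.$slab+orig                    \
                    -polort A                                \
                    -num_stimts 1                            \
                    -stim_times 1 stim_times.1D 'BLOCK(2,1)' \
                    -jobs 4                                  \
                    -bucket stats.$slab
   done

   # 3. Glue the per-slab statistics back together along z.
   3dZcat -prefix stats.all stats.slab1+orig stats.slab2+orig

The slab count is a trade-off: more slabs means less RAM per 3dDeconvolve run, but more bookkeeping.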


It might be best to think more about this approach. There are 40 stim categories, each with 90 regressors? That is 40 * 90 = 3600 stimulus regressors, which with the baseline terms accounts for the 3612 total. The output might not be very useful.

- rick
Subject                                                  Author          Posted

3dDeconvolve stalls at "current memory mallocated"       oryonessoe      January 22, 2020 01:58PM
Re: 3dDeconvolve stalls at "current memory mallocated"   rick reynolds   January 22, 2020 03:53PM
Re: 3dDeconvolve stalls at "current memory mallocated"   oryonessoe      January 22, 2020 05:04PM
Re: 3dDeconvolve stalls at "current memory mallocated"   oryonessoe      January 22, 2020 05:47PM
Re: 3dDeconvolve stalls at "current memory mallocated"   rick reynolds   January 27, 2020 09:18AM
Re: 3dDeconvolve stalls at "current memory mallocated"   oryonessoe      January 27, 2020 12:37PM