Dear AFNI users-
We are very pleased to announce that the new AFNI Message Board framework is up! Please join us at:
https://discuss.afni.nimh.nih.gov
Existing user accounts have been migrated, so returning users can log in by requesting a password reset. New users can also create accounts through a standard account-creation process. Please note that these setup emails may initially land in spam folders (especially for NIH users!), so check those locations at first.
Existing discussion threads have been migrated to the new framework. The old Message Board will remain visible, but read-only, for a little while.
Sincerely,
AFNI HQ
Hi all,
I'm working with a dataset with a very large amount of data collected from each participant over many sessions. I'd like to run 3dFWHMx in order to perform cluster correction on an analysis using all of a participant's data, but the errts file produced by 3dREMLfit is hundreds of gigabytes, which is creating logistical issues. Would it be reasonable to use residuals fro
by Dillon Plunkett - AFNI Message Board
Hi Rick,
Thanks again! I gave this a try with RAM of 20x the size of the data set (2.6TB for about 130GB of data) and had the same issue about 20 seconds into the job. I can't find any indications that the process is running out of RAM (e.g., no complaints from SLURM, the cluster's scheduler, for exceeding the requested memory). I'm in touch with the HPC team, but I can't
by Dillon Plunkett - AFNI Message Board
Hi Rick,
Thanks for the quick reply! I'm not applying either -errts or -fitts (at least not intentionally). The only output I'm creating is a -bucket with an fstat, 7 coefficients, and 2 GLTs (plus a 1D file and a jpg with -x1D and -xjpeg).
Additionally (although I may be misunderstanding what you're saying and this might not be relevant), I'm running the command in o
by Dillon Plunkett - AFNI Message Board
Hi all,
I'm trying to run 3dDeconvolve on a large amount of data obtained from a single participant over many sessions (~250 runs of about 500MB each). When I do, 3dDeconvolve crashes with the following error (followed by a long stack trace and memory map):
*** Error in `3dDeconvolve': free(): invalid next size (normal): 0x0000000002bce690 ***
I can run the same command on t
by Dillon Plunkett - AFNI Message Board
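The glibc abort quoted above ("free(): invalid next size") signals heap corruption: some earlier write ran past the end of an allocation and clobbered the allocator's bookkeeping for the next chunk, and free() only notices later. Below is a minimal sketch of that error class (not of 3dDeconvolve's internals), triggering the overflow via libc in a throwaway subprocess so the abort cannot take down the parent interpreter:

```python
import subprocess
import sys
import textwrap

# Child script: overflow a heap buffer via libc, then free it.  On
# glibc, the overflow clobbers the next chunk's header, and free()
# typically aborts with "free(): invalid next size (normal)".
CHILD = textwrap.dedent("""
    import ctypes
    libc = ctypes.CDLL(None)
    libc.malloc.restype = ctypes.c_void_p
    libc.free.argtypes = [ctypes.c_void_p]
    buf = libc.malloc(2000)        # large enough to bypass tcache
    ctypes.memset(buf, 0, 2100)    # write 100 bytes past the end
    libc.free(buf)                 # allocator integrity check fires
    print("free() returned normally")
""")

proc = subprocess.run([sys.executable, "-c", CHILD],
                      capture_output=True, text=True)
# A negative return code means the child died on a signal (SIGABRT
# from the allocator); 0 would mean the overflow went undetected.
print("child exit code:", proc.returncode)
```

When a program crashes this way only on very large inputs, the usual suspect is an allocation size computed in a 32-bit integer overflowing before being passed to malloc.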
Hi Bob,
Did this change ever make it to live AFNI? I just tried to use -BIDS for the first time on CentOS 7 and encountered what looks to be the same issue.
find: unknown predicate `-9'
find: warning: you have specified the -depth option after a non-option argument -type, but options are not positional (-depth affects tests specified before it as well as those specified after it).
by Dillon Plunkett - AFNI Message Board
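Both find messages quoted in that post can be reproduced directly, independent of AFNI's -BIDS option. A small sketch driving GNU find from Python (exact warning wording varies by findutils version):

```python
import subprocess

# A global option (-depth) placed after a test (-type): find still
# runs (exit status 0), but GNU find warns on stderr that options
# are not positional.
warn = subprocess.run(["find", ".", "-type", "f", "-depth"],
                      capture_output=True, text=True)

# A stray token such as "-9" is parsed as a predicate; no such
# predicate exists, so find fails outright with a nonzero exit and
# an "unknown predicate" message on stderr.
fail = subprocess.run(["find", ".", "-9"],
                      capture_output=True, text=True)

print("warning run exit:", warn.returncode)
print("failing run exit:", fail.returncode)
```

The fix on the script side is to emit global options before any tests, or to quote arguments so a bare "-9" never reaches find as its own token.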
Hi all,
I'm trying to use make_random_timing.py to generate stimulus timing files, but running into an issue when I try to set a minimum duration for rest. I can't identify what I'm doing wrong or misunderstanding.
Setting any min > 0 for the rest timing class seems to increase the value used for post_stim_rest, such that the last stimulus is well before the end of the r
by Dillon Plunkett - AFNI Message Board
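The constraint being asked about, distributing a fixed amount of rest across inter-stimulus gaps while enforcing a per-gap minimum, can be done without inflating post-stimulus rest. A hypothetical helper for illustration only (this is not make_random_timing.py's actual algorithm):

```python
import random

def partition_rest(total_rest, n_gaps, min_rest, rng=None):
    """Split total_rest seconds across n_gaps inter-stimulus gaps,
    giving every gap at least min_rest seconds.  Hypothetical
    sketch, not make_random_timing.py's implementation."""
    rng = rng or random.Random(0)
    extra = total_rest - n_gaps * min_rest
    if extra < 0:
        raise ValueError("min_rest * n_gaps exceeds total_rest")
    # Random cut points divide the leftover rest among the gaps, so
    # gap lengths stay random while the minimum is always honored.
    cuts = sorted(rng.uniform(0, extra) for _ in range(n_gaps - 1))
    bounds = [0.0] + cuts + [extra]
    return [min_rest + (b - a) for a, b in zip(bounds, bounds[1:])]

print(partition_rest(60.0, 5, 2.0))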
Hi,
I believe I have the same question as Bill, but I don't think either of the above options addresses it.
After running 3dDeconvolve with a single "-stim_times_IM", I have a bucket dataset with beta weights for each regressor (including each separate event of the individually modulated stimulus). Like Bill, I'd like to end up with a single 3D+time dataset with each beta
by Dillon Plunkett - AFNI Message Board
When I run the following command on my system, the volume-registered datasets are produced in the current directory, but the final, aligned datasets end up in the dset2 directory (whether I omit -output_dir or use "-output_dir ."):
align_epi_anat.py -dset1 dset1+orig -dset2 ../../dset2_dir/dset2+orig \
-child_dset2 ../../dset2_dir/dset2_child+orig -dset2to1 -volreg on \
by Dillon Plunkett - AFNI Message Board