Hi,
I am trying to extract voxel-based numeric values into .1D and .txt files using 3dmaskdump.
The extraction works fine for all ROIs except one. The ROI where it fails also has the largest number of voxels of all the ROIs, and I assume this is why the script fails with the following message:
**Error: Line too long for buffer of 5048576 chars.** ERROR: mri_read_ascii: can't read any valid data from file XYZ
I have 32 GB of RAM, so I assume this is a bug rather than a RAM limitation of my computer. The script covers 23 subjects. Interestingly, when I remove one subject, the code runs fine without the error message shown above (presumably due to the slightly smaller amount of data once one subject is removed).
Here is the relevant part of my AFNI script:
```
3dmaskdump \
    -mask $directory_ROIs/ROI.nii \
    -noijk \
    $directory_PD/subject1/XYZ_file+tlrc \
    $directory_PD/subject2/XYZ_file+tlrc \
    # ... until Subject 23
    > $directory_results/AllSubjects_ROI.1D

1dcat \
    $directory_results/AllSubjects_ROI.1D'[0]'\' \
    $directory_results/AllSubjects_ROI.1D'[1]'\' \
    # ... until Subject 23 ([22])
    > $directory_results/Transpose_ROI.1D

1dcat \
    $directory_results/Transpose_ROI.1D\' > $directory_results/ROI.txt
```
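As a possible workaround (not an AFNI fix), if the final step only needs to transpose the dumped table, a short NumPy script sidesteps 1dcat's fixed line buffer entirely, since NumPy has no per-line length limit. This is a sketch: the toy array stands in for `np.loadtxt("AllSubjects_ROI.1D")` on the real file, and the file names follow the script above.

```python
import numpy as np

# Workaround sketch: transpose the 3dmaskdump table in Python instead of
# 1dcat, avoiding the "Line too long for buffer" error. The toy
# 2-voxel x 3-subject array stands in for loading the real file, e.g.
# table = np.loadtxt("AllSubjects_ROI.1D")
table = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])   # rows = voxels, columns = subjects
transposed = table.T                  # rows = subjects, columns = voxels
np.savetxt("ROI.txt", transposed, fmt="%g")
```

With `np.loadtxt` on the real AllSubjects_ROI.1D, this replaces both 1dcat calls in one step.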
The script fails at the very last step, that is:

```
1dcat \
    $directory_results/Transpose_ROI.1D\' > $directory_results/ROI.txt
```
Is this indeed a bug? Please let me know what you think, and whether there is a way to "save" the last subject.
Update: I just realized that I can simply process the "last" subject manually, i.e., run 3dmaskdump again for this subject alone. Then, in a second step, I can append this subject's results to the .1D or .txt file that contains the results of all other subjects.
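That manual merge step can also be sketched in NumPy: append the single subject's column to the 22-subject table. The toy arrays below stand in for `np.loadtxt(...)` on the real dump files, and the file names are hypothetical placeholders.

```python
import numpy as np

# Merge sketch: append the separately dumped subject as a new column.
# Toy arrays stand in for loading the real files, e.g.
#   all22 = np.loadtxt("AllSubjects22_ROI.1D")
#   last  = np.loadtxt("Subject23_ROI.1D", ndmin=2)
all22 = np.array([[1.0, 2.0],
                  [3.0, 4.0]])        # rows = voxels, columns = 22 subjects (toy: 2)
last = np.array([[5.0],
                 [6.0]])              # the removed subject, one column
merged = np.column_stack([all22, last])   # rows = voxels, columns = 23 subjects
np.savetxt("AllSubjects_ROI.1D", merged, fmt="%g")
```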
Of course, it would still be nicer and cleaner if the script simply ran for all subjects.
Thanks,
Philipp
Edited 2 time(s). Last edit at 07/05/2022 06:39AM by Philipp.