There have been a few threads about how to deal with very large datasets (and avoiding crashes caused by memory allocation errors). Here's a good example with some sample solutions:
[afni.nimh.nih.gov]
Your choices are essentially:

(1) the slice-at-a-time method: use 3dZcutup to split the dataset into z-blocks, run the analysis on each block, and then use 3dZcat to put the results back together again;

(2) use 3dAutobox and/or 3dZeropad to reduce the spatial extent to the region you are interested in (e.g., brain only);

(3) resample to a coarser resolution with 3dresample.

For the 3dZcutup solution, note that each piece will be only a single line of voxels when viewed across the slice plane (not in-plane) until the per-block 3dDeconvolve results are reunited with 3dZcat. Also, with 3dZcutup you do not have to split the volume into single slices; splitting into just two blocks may be enough.
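As a rough illustration of the 3dZcutup / 3dZcat route, here is a minimal sketch. The dataset name (rall+orig), the stimulus file (stim.1D), the slice count, and the 3dDeconvolve options are all hypothetical placeholders; substitute your own. It is written as a dry run (RUN=echo just prints each command) so you can inspect the sequence before executing anything:

```shell
#!/bin/sh
# Sketch of the split/analyze/reassemble approach for a large dataset.
# RUN=echo makes this a dry run that only prints the AFNI commands;
# set RUN= (empty) to actually execute them.
RUN=echo

nz=30            # hypothetical number of slices along z
half=$((nz / 2)) # split into just two z-blocks, not single slices

# Cut the time series dataset into two z-blocks (slice ranges are 0-based).
$RUN 3dZcutup -prefix zblk_00 -keep 0 $((half - 1)) rall+orig
$RUN 3dZcutup -prefix zblk_01 -keep $half $((nz - 1)) rall+orig

# Run the same analysis on each block separately, so each 3dDeconvolve
# invocation only has to hold one block in memory.
for blk in zblk_00 zblk_01 ; do
  $RUN 3dDeconvolve -input ${blk}+orig -polort 2 \
       -num_stimts 1 -stim_file 1 stim.1D \
       -bucket ${blk}_dec
done

# Glue the per-block result datasets back into a full-extent dataset.
$RUN 3dZcat -prefix rall_dec zblk_00_dec+orig zblk_01_dec+orig
```

The key constraint is that every block must be analyzed with the identical 3dDeconvolve options, so that 3dZcat is stacking sub-bricks that mean the same thing in each piece.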
If you have further questions, please feel free to post the entire command and results.