Hello,
It is my understanding, based on the AFNI literature and on typical usage, that 3dvolreg spatially aligns all acquisitions in a time series by using a least-squares, best-fit algorithm to match each 3D acquisition to a single reference volume.
It is also made clear that 3dvolreg cannot accommodate large translocations of the brain over the course of the scan, but can bring minor deviations into alignment.
It seems to me that the vast majority of the 3dvolreg output for my subjects indicates gradual drifts in the direction and magnitude of alignment relative to the single reference volume, as though the head is settling down and compressing the pillow. In other words, while the translocation from the first volume collected to the last might be as large as 3 millimeters or more, the shift between adjacent acquisitions is virtually nil.
Thus, I am wondering if 3dvolreg could be set to start with the last (or first) volume of the time series as the base/template image and, in a step-wise fashion, align the previous volume (t-1) to the volume at time t, then align the t-2 volume to t-1, t-3 to t-2, and so on.
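To make the idea concrete, here is a minimal sketch of how such pairwise estimates could be chained back to a single reference by composing rigid-body transforms. This is purely illustrative: 3dvolreg has no such mode, the function name is my own invention, and I am assuming each pairwise registration is summarized as a 4x4 affine matrix mapping volume i into the space of volume i+1.

```python
import numpy as np

def compose_to_reference(pairwise):
    """Hypothetical helper: given pairwise transforms T_0..T_{n-1},
    where T_i maps volume i into the space of volume i+1, return the
    cumulative transform from each volume to the LAST volume (the
    reference), via matrix composition: C_i = T_{n-1} @ ... @ T_i."""
    cumulative = [np.eye(4)]          # the last volume maps to itself
    acc = np.eye(4)
    for T in reversed(pairwise):      # walk backward from the reference
        acc = acc @ T
        cumulative.append(acc.copy())
    cumulative.reverse()              # cumulative[i]: volume i -> reference
    return cumulative
```

The point of the composition is that each pairwise fit only ever has to capture a tiny adjacent-volume shift, while the accumulated product still carries each early volume all the way to the reference.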
It just seems that the way the program works now, with all volumes aligned to a base image collected several minutes earlier or later, it misses out on all the transitory data that could explain where and how each voxel got to where it is.
Is there some mathematical rationale for why 3dvolreg could not be applied in a step-wise fashion to better align volumes in a time series that has a lot of net overall translocation/drift but no real sudden, within-acquisition head shifts?
Jim B