Re: 3dvolreg and censor motion
Bob Cox, January 10, 2017 09:10AM

3dvolreg estimates the amount of subject movement. But since image registration is not perfect, it is not possible to take an MR image (3D volume) corrupted by a large amount of movement and make it look the same as if the subject had never moved.

So afni_proc.py will cast out (censor) volumes (time points) when there was too much movement relative to the previous volume.
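
For concreteness, here is a rough sketch of the kind of commands the script generated by afni_proc.py runs for this step. The file and subject names here are hypothetical; the actual proc script uses its own naming and options:

  # estimate the 6 rigid-body motion parameters with 3dvolreg
  3dvolreg -base epi_run1+orig'[0]' -1Dfile dfile.run1.1D \
           -prefix epi_run1.volreg epi_run1+orig

  # make a censor file: flag any TR whose motion relative to the
  # previous TR (Euclidean norm of the parameter differences)
  # exceeds the limit
  1d_tool.py -infile dfile.run1.1D -set_nruns 1 \
             -show_censor_count -censor_motion 0.3 motion_subj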

By default, we have set this differential (TR-to-TR) movement limit to 0.3 mm. If you are losing so much data that many subjects become useless, you are in trouble. You can try the following, but these steps probably won't help very much:

* raise the movement threshold for censoring from 0.3 to 0.4 or even 0.5 mm
* add the option -regress_apply_mot_types demean deriv to the afni_proc.py command, to include the derivatives of the motion parameters in the regression model, which might reduce motion artifacts
* add the option -regress_anaticor_fast to the afni_proc.py command, to use tissue-based regressors that might also be sensitive to motion (all three options are sketched together after this list)
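
If you do change these settings, the relevant lines of the afni_proc.py command would look roughly like this, where the '...' stands for the rest of your existing options (not shown here):

  afni_proc.py ...                            \
      -regress_censor_motion 0.4              \
      -regress_apply_mot_types demean deriv   \
      -regress_anaticor_fast                  \
      ...

-regress_censor_motion is the option that sets the per-TR censoring limit discussed above; raising it from 0.3 to 0.4 or 0.5 loosens the censoring.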

But if you have that much motion, it will be hard to get good results.