AFNI Message Board

Dear AFNI users-

We are very pleased to announce that the new AFNI Message Board framework is up! Please join us at:

https://discuss.afni.nimh.nih.gov

Existing user accounts have been migrated, so returning users can log in after requesting a password reset. New users can create accounts through the standard account-creation process. Please note that these setup emails might initially land in spam folders (especially for NIH users!), so please check there at first.

The existing discussion threads have also been migrated to the new framework. The old Message Board will remain visible, but read-only, for a little while.

Sincerely, AFNI HQ

January 10, 2017 09:10AM
3dvolreg estimates the amount of movement. But since image registration is not perfect, it's not possible to take an MR image (3D volume) with a large amount of movement and make it look the same as if the subject had never moved.

So afni_proc.py will cast out (censor) volumes (time points) in which there was too much movement relative to the previous volume.

By default, we have set this amount of differential movement to 0.3 mm (per TR). If you are losing so much data that many subjects are useless, you are in trouble. You can try the following, but these probably won't help you too much:

* raise the movement threshold for censoring (the -regress_censor_motion option) to 0.4 or even 0.5
* add the option -regress_apply_mot_types demean deriv to the afni_proc.py command, to include the derivatives of the motion parameters in the regression model, which might reduce the motion artifacts
* add the option -regress_anaticor_fast to the afni_proc.py command, to use tissue based regressors that might also be sensitive to motion

But, if you have so much motion, it will be hard to get good results.
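The censoring rule described above can be sketched in Python. The function name and this NumPy reimplementation are my own illustration, but the measure it computes mirrors, as I understand it, the Euclidean-norm ("enorm") of the per-TR change in the six 3dvolreg parameters that afni_proc.py (via 1d_tool.py) compares against the 0.3 mm limit:

```python
import numpy as np

def censor_from_motion(motion, limit=0.3, censor_prev=True):
    """Build a 0/1 censor list from rigid-body motion parameters.

    Hypothetical sketch of the afni_proc.py censoring rule: take the
    per-TR change in the six 3dvolreg parameters, compute its
    Euclidean norm ("enorm"), and censor any volume whose enorm
    exceeds `limit` (mm).

    motion : (T, 6) sequence, one row per volume
             (3 rotations in degrees, 3 translations in mm;
              AFNI's enorm mixes these units deliberately).
    Returns a length-T list: 1 = keep, 0 = censored.
    """
    motion = np.asarray(motion, dtype=float)
    deriv = np.diff(motion, axis=0)              # change since previous TR
    enorm = np.sqrt((deriv ** 2).sum(axis=1))    # per-TR movement size
    censor = np.ones(len(motion), dtype=int)
    bad = np.where(enorm > limit)[0] + 1         # volume after the jump
    censor[bad] = 0
    if censor_prev:                              # by default AFNI also censors
        censor[bad - 1] = 0                      # the preceding volume
    return censor.tolist()
```

For example, a subject who sits still and then shifts 0.5 mm between volumes 2 and 3 loses volumes 2 and 3 but keeps the later, post-shift volumes, since censoring is based on differential (TR-to-TR) movement, not absolute position.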
Subject                          Author       Posted
3dvolreg and censor motion       heretic133   January 09, 2017 05:32PM
Re: 3dvolreg and censor motion   Bob Cox      January 10, 2017 09:10AM