AFNI Message Board

Re: Tactical/methods question on volume registration with inter-run head shift
rick reynolds - January 15, 2019 04:57PM
Hi Jim,

The simplest first thought is that if you need to process this subject differently (more than just using a different cost function or the like), it might be best to drop that subject. But we can set that aside just to understand what is happening.

Also, you might consider using afni_proc.py to do the standard processing. It could make your life much easier.
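For reference, a minimal afni_proc.py sketch along those lines might look something like the following (the dataset names, stimulus timing files, and option values here are placeholders, not your actual data, so adjust the blocks and options to your study):

  afni_proc.py                                                   \
      -subj_id              SUBJ                                 \
      -copy_anat            anat+orig                            \
      -dsets                epi_run1+orig epi_run2+orig          \
      -blocks               tshift align volreg blur mask scale regress \
      -volreg_align_to      MIN_OUTLIER                          \
      -volreg_align_e2a                                          \
      -blur_size            4                                    \
      -regress_motion_per_run                                    \
      -regress_stim_times   stim1.1D stim2.1D                    \
      -regress_stim_labels  cond1 cond2                          \
      -regress_basis        'BLOCK(20,1)'                        \
      -regress_censor_motion 0.3

The generated proc script then documents every processing step, which also makes it easier to compare methods later.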

Regarding the motion parameters, it really does not matter that the second method produced smaller numbers. The same thing would be achieved by de-meaning the parameters per run (which is a side effect of your second method), and doing that would not affect the motion betas at all (the difference would be absorbed by the constant polort terms). So it is not safe to judge success based on those magnitudes. Note that with afni_proc.py we suggest regressing motion per run, in which case your two methods should give very similar results (with respect to motion regression).
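To make the polort point concrete, here is a small linear-algebra sketch (the notation is mine, not from your scripts). Let the design matrix be X = [ C  M  S ], where C holds the per-run constant (polort) regressors, M the motion regressors, and S the stimulus regressors. De-meaning the motion columns per run replaces M with M - CA, where column j of CA is constant within each run and equals that run's mean of motion column j. Then

\[
X' = [\, C \;\; M - CA \;\; S \,] = X\,T, \qquad
T = \begin{pmatrix} I & -A & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix},
\]

so X and X' span the same column space, and the least-squares solutions are related by \(\beta' = T^{-1}\beta\). Since \(T^{-1}\) only adds \(A\,\beta_M\) to the constant-term betas, the motion and stimulus betas are identical before and after de-meaning.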


"My traditional way still yielded a large head-shift that was also evident visually when I cycled the entire time-series in the afni underlay viewer."

This suggests that the between-run displacement did more than just move the image of the brain. There are (at least) two other potential problems that come with a large displacement:

1. Distortion: if there was a change in the distortion, rigid-body registration will not be able to correct for it; it will just do its best to get close. In that case there will almost certainly be a residual distortion difference, which would require a non-linear correction.

2. Differential shading in the images: images from a multi-channel coil often show strong intensity shading due to physical proximity to the coil elements, so the images are brighter where the subject's head happened to lie closer to a given element. A large shift changes that non-uniformity pattern, and can therefore affect registration. In particular, since 3dvolreg uses a least-squares cost function, a change in shading would have an impact on the registration.

Either or both of these issues might be affecting the cross-run registration, and could leave a residual jump between those runs.
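If you want to quantify that residual jump directly, one quick check (hypothetical file names; the sub-brick indices are just for illustration) is to rigidly align the run-2 base volume to the run-1 base using an lpa cost and inspect the resulting parameters:

  # extract a base volume from each run
  3dbucket -prefix base_r1 epi_run1+orig'[0]'
  3dbucket -prefix base_r2 epi_run2+orig'[0]'

  # rigid (shift + rotate) alignment using the lpa cost,
  # which is less sensitive to shading than least squares
  3dAllineate -base base_r1+orig -input base_r2+orig    \
              -cost lpa -warp shift_rotate              \
              -1Dparam_save r2_to_r1_params.1D          \
              -prefix base_r2_aligned

Large parameter values there point to a real displacement, while a poor-looking overlay even after this alignment points toward a distortion change.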


Using epi2anat across runs is indeed like dealing with multi-session data, and suffers from the same problem: a potential distortion difference across runs. Such distortions are typically non-linear, and the EPI->anat registration can only do so much to correct for them. Also, EPI->anat registration tends not to be as robust as EPI->EPI registration; it is a harder problem. That adds a sort of cross-run "noise" to the voxel positions in the brain, which can both weaken and distort activation patterns.
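For reference, a per-run epi2anat alignment along the lines you describe might look roughly like this (hypothetical file names; the cost and other options would need tuning per dataset):

  # align one run's EPI to the anatomy, run by run
  align_epi_anat.py -anat anat+orig -epi epi_run2+orig   \
                    -epi_base 0 -epi2anat                \
                    -cost lpc+ZZ -volreg on -tshift on

Each run is then mapped to the anatomy independently, which is exactly where run-to-run differences in the EPI->anat solution show up as cross-run "noise".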

If you analyzed one run at a time, the patterns should be similar between the methods. But as a multi-run model, the cross-run brain shifts seem to be bigger in your second method (which is not a surprise).

Note that we will be adding an option to afni_proc.py that runs 3dvolreg per run and then concatenates that with a cross-run affine transformation between the volreg base volumes. That might help a bit in a case like this. An lpa cost function should mitigate the effect of a (change in) shading artifact, and the affine transformation might account for part of the distortion difference, though probably not much. Note that this basically amounts to trusting EPI->EPI registration more than EPI->anat.
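As a rough manual sketch of that idea (hypothetical file names; for simplicity this applies the cross-run affine as a second resampling step, whereas concatenating the transforms would avoid the extra interpolation):

  # 1) register each run to its own base volume
  3dvolreg -base epi_run1+orig'[0]' -prefix vr_r1 -1Dfile mot_r1.1D epi_run1+orig
  3dvolreg -base epi_run2+orig'[0]' -prefix vr_r2 -1Dfile mot_r2.1D epi_run2+orig

  # 2) affine-align the run-2 base to the run-1 base with an lpa cost
  3dAllineate -base epi_run1+orig'[0]' -input epi_run2+orig'[0]'   \
              -cost lpa -warp affine_general                       \
              -1Dmatrix_save r2base_to_r1base.aff12.1D             \
              -prefix r2base_in_r1

  # 3) apply that cross-run affine to the volreg'd run-2 time series
  3dAllineate -input vr_r2+orig -master vr_r1+orig                 \
              -1Dmatrix_apply r2base_to_r1base.aff12.1D            \
              -prefix vr_r2_in_r1

The eventual afni_proc.py option would handle this bookkeeping (and the transform concatenation) automatically.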

Anyway, this is a good example of the "noise" that is somewhat inherent in any multi-session analysis.

- rick
Thread: Subject - Author - Posted

Tactical/methods question on volume registration with inter-run head shift - Jim Bjork - January 10, 2019 04:50PM
Re: Tactical/methods question on volume registration with inter-run head shift - rick reynolds - January 15, 2019 04:57PM
Re: Tactical/methods question on volume registration with inter-run head shift - jmbjork - January 18, 2019 08:21AM
Re: Tactical/methods question on volume registration with inter-run head shift - rick reynolds - January 18, 2019 08:57AM