AFNI Message Board


Weighting in 3dvolreg
Posted by rob risinger, December 22, 2004 01:15PM
We have found that 3dvolreg minimizes just about any robust BOLD signal change. This appears to be a generic problem with 3dvolreg: it is too good at what it does. It works very well to 'correct' apparent real motion (as viewed in cine mode), but minimizing as much of the variance as possible is problematic when you have either a task with diffuse activation or conditions that produce robust signal changes, such as pharmacologic experiments. The more diffuse the signal change, the more it effectively 'leeches' power from your analysis. The problem is that any robust signal change is included in the variance calculation and 'mistaken' for motion.

As evidence for this, in pharmacologic time series a clear pharmacologic effect can be seen in the motion vectors, even when actual motion is well under 1 mm. Any subsequent statistical analysis is therefore crippled: the variance due to a robust change across many voxels has already been minimized away, and that saps statistical power from the analysis faster than Kryptonite in proximity to Superman.

Some combination of surface matching and rigid-body transformation may do better at discriminating between motion and diffuse activation. No matter where in the brain the signal changes, 3dvolreg injudiciously calculates a total RMS difference and minimizes it via a rigid-body transformation of all voxels. Conceptually, 3dvolreg should not need the interior volume of the brain to register motion (nor, for that matter, the exterior, but we'll stay with the shell idea for now). Other unmentionable registration algorithms use surface matching. 3dvolreg could be used in a similar manner if we could weight the variance of exterior voxels more heavily than that of interior ones. More verbosely: if a rigid body has a varied surface topography, then 3dvolreg should be able to estimate motion from the variance of surface voxels and uniformly apply the resulting motion parameters across the entire volume as a rigid-body transformation, without incorporating the variance due to inner voxels. This could be done using a center-of-mass weighting. The implementation of -weight, however, appears to be designed primarily for anatomical images applied to functional data series.
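To make the center-of-mass weighting idea above concrete, here is a minimal sketch in plain Python (not AFNI code; the toy volume and function names are our own illustration) that assigns each voxel a weight equal to its distance from the intensity-weighted center of mass, so shell voxels dominate the variance term and interior voxels count for little:

```python
def com_weight(volume):
    """volume: dict mapping (i, j, k) voxel index -> intensity.
    Returns a per-voxel weight equal to the voxel's Euclidean distance
    from the intensity-weighted center of mass of the volume."""
    total = sum(volume.values())
    cx = sum(i * v for (i, _, _), v in volume.items()) / total
    cy = sum(j * v for (_, j, _), v in volume.items()) / total
    cz = sum(k * v for (_, _, k), v in volume.items()) / total
    return {idx: ((idx[0] - cx) ** 2
                  + (idx[1] - cy) ** 2
                  + (idx[2] - cz) ** 2) ** 0.5
            for idx in volume}

# Toy 3x3x3 volume of uniform intensity: the center voxel gets weight 0,
# the corner voxels get the largest weight (sqrt(3) voxel units).
vol = {(i, j, k): 1.0 for i in range(3) for j in range(3) for k in range(3)}
w = com_weight(vol)
```

A weight volume built this way could, in principle, be handed to 3dvolreg via its -weight option, modulo the scaling behavior discussed in the next paragraph.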

We have been playing with such an approach using the -weight option. In its weighting, 3dvolreg keeps only the top 2.5% of the maximal voxel values, to eliminate voxels outside the brain. We have tried to get around this by weighting each voxel by its distance from the brain's center of mass and then scaling that weighting into the upper 2.5% range. Alas, Superman never quite seems to fly again.
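As an illustration of the scaling workaround just described, here is a sketch in plain Python (the function name is ours, and the 2.5% figure is taken from the paragraph above, not from AFNI documentation) that linearly maps distance weights into the top 2.5% of the weight range, so that a 'keep only the top fraction of the maximum' clip would discard none of them:

```python
def rescale_to_top(weights, keep_frac=0.025):
    """Linearly map a list of nonnegative weights onto the interval
    [wmax*(1-keep_frac), wmax], so every value survives a clip that
    keeps only the top keep_frac fraction of the maximum."""
    wmax = max(weights)
    wmin = min(weights)
    lo = wmax * (1.0 - keep_frac)
    if wmax == wmin:
        return [wmax for _ in weights]  # degenerate case: all equal
    scale = (wmax - lo) / (wmax - wmin)
    return [lo + (v - wmin) * scale for v in weights]

# Example: distances 0..4 are squeezed into [3.9, 4.0],
# i.e. the top 2.5% of the maximum weight 4.0.
w = rescale_to_top([0.0, 1.0, 2.0, 4.0])
```

Note that a rescaling like this preserves the ordering of the weights but compresses their dynamic range, which may itself blunt the intended exterior-over-interior emphasis.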

Is there some other way to input a weight for the variance calculation in 3dvolreg, or is there some other ingenious way to adapt this approach to the current implementation of the -weight option?
Subject                      Author         Posted
Weighting in 3dvolreg        rob risinger   December 22, 2004 01:15PM
Re: Weighting in 3dvolreg    Robert Cox     December 22, 2004 01:54PM