Hi,
So the basic idea would be very much like VBM, but now using individual voxels from an independent "predictor" dataset to predict voxel values in the dependent dataset.
What problems do you see with the following approach? Let's say we're trying to assess whether gray-matter (GM) density predicts the degree of activation in a voxel. Assuming datasets with equivalent geometry, one could pull out an individual voxel value from, say, the GM segment of a T1 image at location xyz using 3dmaskdump and supply this value to one column of 3dRegAna. Then one could run 3dRegAna on the fMRI data, perhaps including additional task predictors, take the resulting output dataset, and pull out only the value at location xyz. After iterating through all the xyz locations you could build a dataset with 3dUndump. Without masking it could be a lot of extra computation, but it might do the trick.
Looking back on this idea, for a group of participants, you'd have to obtain the GM value from location xyz for each separate participant and feed it to the right row(s) of 3dRegAna.
It seems to me this would answer the question of whether a voxel's GM signal predicts activation. But is it problematic that each xyz location has a different GM predictor?
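Just to make sure I'm describing the statistics correctly: the loop above amounts to fitting, at every xyz location, a separate regression of activation on GM density across subjects. Here's a little NumPy sketch of that idea on synthetic data (shapes, names, and the synthetic values are my own assumptions for illustration, not AFNI conventions or anyone's real data):

```python
# Voxelwise GM-predicts-activation regression, sketched in NumPy on
# synthetic data. In the real pipeline, 3dmaskdump would supply gm[:, v]
# and the fMRI dataset would supply act[:, v] at each location xyz.
import numpy as np

def voxelwise_gm_regression(gm, act):
    """For each voxel, regress activation on GM density across subjects.

    gm, act : arrays of shape (n_subjects, n_voxels)
    Returns the per-voxel slope (beta) for the GM predictor.
    """
    n_subj, n_vox = gm.shape
    betas = np.empty(n_vox)
    for v in range(n_vox):  # one regression per xyz location
        # Design matrix: intercept column plus this voxel's GM values.
        X = np.column_stack([np.ones(n_subj), gm[:, v]])
        coef, *_ = np.linalg.lstsq(X, act[:, v], rcond=None)
        betas[v] = coef[1]  # slope on the GM column
    return betas

# Synthetic check: if activation is built as 2*GM plus small noise,
# the recovered slopes should sit near 2 at every voxel.
rng = np.random.default_rng(0)
gm = rng.random((20, 5))                                  # 20 subjects, 5 voxels
act = 2.0 * gm + 0.01 * rng.standard_normal((20, 5))
betas = voxelwise_gm_regression(gm, act)
print(betas)
```

So each location gets its own GM predictor by construction, which I think is exactly the intent rather than a problem, as long as you interpret each beta only at its own voxel and handle the mass of tests accordingly.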
I have some DWI/DTI data from a group of MDMA users, so I'll have to try out the new tools. Regarding GM/WM segmentation, I've been using FSL's FAST, but it always seems easier to keep things in one (AFNI) format, rather than migrating between formats and trying to maintain the right positioning and orientation at each step.
thanks,
jim