Hi Paul,
> Does the process you've developed do task-dependent GCA, resting-
> state GCA or both?
It definitely works well with resting-state data, since there is little concern about confounding effects such as tasks of no interest, except for physiological noise. For task-dependent experiments, things could be a little more complicated. For block designs with relatively long blocks, we could extract the time points within those blocks and feed them into the causality model, although that is troublesome because it introduces a lot of breakup points in the time series. For rapid event-related designs, it is probably OK to keep the whole time series without any breakup, since we can simply regress out the tasks of no interest.
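To make the event-related case a bit more concrete, here is a minimal sketch (in Python with NumPy, not the actual program) of regressing tasks of no interest out of a single time series before passing the residuals to a causality model; the regressors, coefficients, and dimensions are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tr = 250                      # number of acquisitions (TRs); illustrative only

# Hypothetical voxel time series contaminated by two task regressors of no interest
task = rng.standard_normal((n_tr, 2))      # e.g. convolved task regressors
signal = rng.standard_normal(n_tr)         # the part we want to keep
ts = signal + task @ np.array([1.5, -0.8]) # observed time series

# Regress out the tasks of no interest (plus an intercept) via ordinary least squares
X = np.column_stack([np.ones(n_tr), task])
beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
residual = ts - X @ beta        # residual series would feed the causality model
```

The point is only that the whole series stays unbroken: the task effects are removed by projection rather than by cutting out time points.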
> Any chance of implementing a more exploratory yet less rigorous
> approach that calculates the predictive capacity of one seed region's time-
> course relative to the time-courses of all other voxels?
Could you explain this a little further? The current modeling approach I've adopted is a mixture of both. It is model-based in the sense that we start with a number of pre-selected regions and the between-region relationships are assumed to be linear; any missing region or extra region in the model could ruin the whole analysis. But it is also exploratory in the sense that all the coefficients are estimated in a data-driven fashion.
> We have 20-participants worth of "resting-state" data that have been
> acquired with maximizing sensitivity of GCA in mind [TR = 1200 ms; 250
> acquisitions; 18 5-mm slices covering, on average, the bottom of the
> temporal lobes to the top of the brain]. If you think that these data might
> be compatible with the current incarnation of your GCM process, then
> we'd be happy to take them for a test-drive.
Thanks for the offer. Yes, this would be a nice candidate for testing the program, from the individual-subject level all the way to group analysis.
Gang