Here's a little background on our data. We have two runs of an event-related design that I concatenated with 3dTcat and then analyzed with 3dDeconvolve using the -concat option. As expected, our bucket dataset has separate baseline coefficients for Run#1 and Run#2.
Here's our issue...
The next step in our standard analysis path (at least with a single run) would be to convert the data into percent signal change using the baseline calculated by 3dDeconvolve. This was simple in the single-run case, since there was only one baseline coefficient.
However, now we have two separate baselines, and the scaling procedure gets complicated. Is it justifiable to combine (average) these baselines into a single value to use for scaling?
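To make the question concrete, here is a minimal numeric sketch (with made-up baseline and coefficient values, not from our actual data) of how the percent-signal-change result differs depending on whether each run's own baseline or the averaged baseline is used:

```python
# Hypothetical baseline coefficients for the two runs, and one task coefficient.
b1, b2 = 1000.0, 1100.0   # run 1 and run 2 baselines (arbitrary example values)
beta = 5.0                # a task regression coefficient (arbitrary example value)

# Percent signal change using each run's own baseline:
psc_run1 = 100.0 * beta / b1              # 0.5 %
psc_run2 = 100.0 * beta / b2              # ~0.455 %

# Percent signal change using the averaged baseline:
psc_avg = 100.0 * beta / ((b1 + b2) / 2)  # ~0.476 %
```

So averaging the baselines gives a single number that sits between the two per-run answers; the larger the baseline difference between runs, the larger the discrepancy.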
I have a few other ideas, but I'm not sure which approach is best. The main alternative I see is to scale each run's data to percent signal change using its own mean as the baseline *before* concatenating and running 3dDeconvolve.
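In case it helps clarify what I mean by that alternative, here's a rough numpy sketch of the idea (a single fake voxel time series per run; the array shapes and numbers are invented for illustration, not our real data):

```python
import numpy as np

def scale_to_psc(run):
    """Scale one run's time series so its mean maps to 100 (percent units)."""
    mean = run.mean(axis=-1, keepdims=True)
    return 100.0 * run / mean

rng = np.random.default_rng(0)
run1 = 1000.0 + rng.standard_normal(200)  # fake run-1 time series
run2 = 1100.0 + rng.standard_normal(200)  # fake run-2 time series, different baseline

# Scale each run by its own mean, then concatenate:
scaled = np.concatenate([scale_to_psc(run1), scale_to_psc(run2)])
# Both runs now share a common baseline of 100, so regression coefficients
# from the concatenated data would come out directly in percent-signal-change
# units, with no per-run baseline bookkeeping afterward.
```

This seems to sidestep the two-baseline problem entirely, but I'm not sure whether scaling before the regression introduces issues of its own.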
Any comments would be most helpful.