Hello,
We are analyzing a pseudo-randomized event-related data set. Our multiple regression results from 3dDeconvolve suggest that we may not have a representative baseline for our data: we obtained some unexpected results, and we realized after the fact that none of our time series contain any dedicated “baseline” time points.
One approach that gives intriguing results is to estimate an IRF curve (via 3dDeconvolve) spanning the full extent of the expected BOLD response (about 15 s, or six 2500 ms TRs) and subtract the least-squares estimate at the initial time point from the estimate at the “peak” time point (i.e., we subtract the [0] sub-brick from the [2] sub-brick). After that we would still compute the percent signal change relative to the baseline constant.
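For concreteness, here is roughly what we are doing; the dataset and prefix names below are placeholders, not our actual files, and the assumption that the baseline coefficient sits in sub-brick [0] of the bucket depends on the model:

  # Estimate the IRF at lags 0-5 TRs (TR = 2500 ms, ~15 s of response):
  3dDeconvolve -input run1+orig          \
      -num_stimts 1                      \
      -stim_file 1 events.1D             \
      -stim_label 1 task                 \
      -stim_minlag 1 0 -stim_maxlag 1 5  \
      -iresp 1 task_irf                  \
      -bucket stats

  # Subtract the lag-0 estimate from the lag-2 "peak" estimate:
  3dcalc -a 'task_irf+orig[2]' -b 'task_irf+orig[0]' \
         -expr 'a-b' -prefix task_delta

  # Percent signal change relative to the baseline constant
  # (assuming the baseline coefficient is sub-brick [0] of the bucket):
  3dcalc -a task_delta+orig -b 'stats+orig[0]' \
         -expr '100*a/b' -prefix task_pct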
Our reasoning starts from the vagueness of what the baseline constant actually represents, since there are no pre-set baseline points in the time series. With that in mind, we felt that measuring activation as the difference from the first time point of the IRF, rather than from this indistinct baseline, would at least partially compensate for the lack of baseline points in the time series.
My question is whether there is anything fundamentally wrong with looking at the data in this way.
If not, I would love to hear suggestions on where to look for solutions to these baseline issues.
I apologize for any lack of clarity and look forward to any responses.
Thank you for your time!
Jeremy