Hi Rick,
Thanks for the input -- you were correct:
I was wrongly comparing the smoothing factor of the detrended raw data with that of the ricor data (which includes the trends).
However, this doesn't solve the main problem that started my inquiry -- I'd very much appreciate your input on it.
What seems to happen is that the retroicor procedure increases the tSNR of the time series, but through a process that is not completely clear to me. To make a long story short (the long story follows below), I am worried that the procedure increases tSNR not just by removing physiological-noise variance. (I had initially thought it did so by introducing spatial smoothing, but after your comment I think that is not the case.)
Here are the details (please see the graph linked here, as they refer to it):
[bit.ly]
This graph shows the cumulative distribution of tSNR values in one participant's data. We're dealing with non-smoothed data, few volumes, and a short TR, so overall values are pretty low (raw data in green, the leftmost line; very few voxels pass a tSNR of 100). Detrending the time series helps a bit (purple line, second from the left).
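(For concreteness: by tSNR I mean the usual per-voxel temporal mean divided by temporal standard deviation. A minimal numpy sketch -- array shape and values here are made up purely for illustration:)

```python
import numpy as np

# Hypothetical 4D dataset: (x, y, z, time); synthetic values for illustration.
rng = np.random.default_rng(0)
data = 100 + rng.standard_normal((4, 4, 4, 120))

# Voxelwise tSNR: temporal mean divided by temporal standard deviation.
tsnr = data.mean(axis=-1) / data.std(axis=-1, ddof=1)

# Fraction of voxels above a threshold, as read off the cumulative plot.
frac_above_100 = (tsnr > 100).mean()
```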
What happens when we run retroicor? tSNR increases massively (without a noticeable change in the smoothing factor). This is the rightmost (red) line in the graph. It looks terrific -- about 50% of the voxels have a tSNR greater than 150.
BUT, when we use "fake" physiological time series as surrogates (initially done in the context of permutation testing), we *also* get increases in tSNR -- those are the blue lines in the graph.
So retroicor does work in the sense that applying the participant's own physiological data to their time series yields the best result, but even irrelevant time series fed into retroicor increase the tSNR. What is driving this increase? We see this pattern for almost all of our participants (an instructive exception, in which the permutations look just like removal of real noise, is here [bit.ly]).
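In case it helps frame the question, here is one mechanical effect I suspect we may need to rule out: regressing *any* set of extra regressors out of a time series -- even pure noise -- reduces the residual variance, so tSNR computed as mean over std of the residuals goes up unless the lost degrees of freedom are counted. A toy numpy sketch (all numbers synthetic; the regressor count of 13 is just illustrative, not necessarily what ricor uses):

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_vox, n_reg = 120, 2000, 13   # time points, voxels, nuisance regressors

# Pure noise "voxels": mean 100, no physiological structure at all.
y = 100 + rng.standard_normal((n_t, n_vox))

def tsnr(ts, ddof=1):
    return ts.mean(axis=0) / ts.std(axis=0, ddof=ddof)

# Regress an intercept plus n_reg *random* regressors out of every voxel.
X = np.column_stack([np.ones(n_t), rng.standard_normal((n_t, n_reg))])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
cleaned = y.mean(axis=0) + (y - X @ beta)  # residuals, with the mean restored

# Naive tSNR rises even though the regressors carry no real signal:
naive_gain = tsnr(cleaned).mean() / tsnr(y).mean()

# Charging the std for the lost degrees of freedom removes the inflation:
fair_gain = tsnr(cleaned, ddof=n_reg + 1).mean() / tsnr(y).mean()
```

With these numbers the naive gain is roughly sqrt((n_t - 1) / (n_t - n_reg - 1)), i.e. a few percent -- probably too small on its own to explain the jump we see, which is partly why I'm asking.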
Any ideas?
Oori