Hello AFNI folks,
I noticed something while troubleshooting an AFNI-based real-time fMRI neurofeedback engine I implemented.
To troubleshoot, I collected all of the EPI volumes after the run and stitched them together into a full 4D dataset.
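For reference, the stitching step is just a 3dTcat over the per-TR EPI files (the filename pattern and prefix here are placeholders, not my actual paths):

```shell
# Stitch the individual EPI volumes saved during the run into one 4D dataset.
# "epi_tr_????.nii" is a placeholder pattern for the per-TR files.
3dTcat -prefix full_run epi_tr_????.nii
```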
Using the full run, I ran 3dvolreg to align it to the EPI volume containing the mask from which we want to derive the feedback. For troubleshooting purposes I used visual cortex, since the paradigm (viewing images) gives a strong signal there. I then scaled the data.
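Roughly like this; the base dataset and output prefixes are placeholders, and the scaling step assumes the usual afni_proc.py-style percent-of-mean scaling:

```shell
# Register every volume of the full run to the reference EPI that the
# feedback mask was drawn on, saving the motion parameters to dfile.txt.
3dvolreg -base mask_epi+orig -prefix full_run_volreg \
         -1Dfile dfile.txt full_run+orig

# Scale each voxel to percent of its temporal mean (capped at 200).
3dTstat -prefix mean_run full_run_volreg+orig
3dcalc  -a full_run_volreg+orig -b mean_run+orig \
        -expr 'min(200, a/b*100)*step(a)*step(b)' -prefix full_run.scaled
```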
The scaled data was the input to 3dDeconvolve, with the motion_demean.txt file derived from the 3dvolreg output (dfile.txt -> motion_demean.txt) as regressors. This gives the motion-corrected residual time series, errts. I did not use any drift regressors or the mean (polort = -1).
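That step looks something like this (prefixes are placeholders; I'm assuming motion_demean.txt comes from demeaning dfile.txt with 1d_tool.py):

```shell
# Demean the motion parameters from 3dvolreg.
1d_tool.py -infile dfile.txt -demean -write motion_demean.txt

# Regress the six motion parameters out of the scaled data, with no
# polynomial drift terms and no mean (-polort -1); residuals go to errts.
3dDeconvolve -input full_run.scaled+orig \
             -polort -1 \
             -ortvec motion_demean.txt motion \
             -errts errts.full_run \
             -nobucket
```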
This gives a very nice average time course:
But when actually running the neurofeedback we of course cannot use the full run. Instead I 3dTcat out a 13-TR window of data, of which TRs 0-3 serve as baseline (red) and TRs 9-12 cover the peak of the BOLD response to the image (green).
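Concretely, I pull the window out with AFNI's sub-brick selector; the start index 100 below is just an example offset:

```shell
# Extract a 13-TR window (sub-bricks 100..112 here as an example) from the
# scaled full run, mirroring what the real-time engine sees during feedback.
3dTcat -prefix snippet.scaled 'full_run.scaled+orig[100..112]'
```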
Using this "zoomed-in" snippet, the time course is identical to the corresponding 13 TRs of the full dataset if I use the .scaled data. But the motion regression (3dDeconvolve) removes the BOLD increase induced by seeing the image.
Is this expected? We know visual cortex reacts to visual stimuli. The full run with motion regression still shows an increase in signal from fixation (red) to the image peak (green), but not when running the motion regression on just those 13 TRs (red through green).
I would think this is a "regression artifact" of having too few TRs: with only 13 time points and six motion regressors, almost half the degrees of freedom go to the nuisance fit, and over such a short window the motion regressors can be nearly collinear with the task, so the projection removes the BOLD increase along with the motion.
So I'm just checking :)