Hi Rick!
That is not how I do it, but I can if you think that will help!
What we do now:
As time progresses, each scan/TR/EPI volume is identified and stored.
When it's time for the subject to get feedback (at the last TR to include in the calculation), the program grabs that TR plus 12 TRs worth of history.
E.g., TRs 15..27.
This BLOCK, as I call it, is volreged to the mask that we made in a previous localizer run.
This block of 13 TRs is then scaled to a mean value of 100, and I keep only the voxels inside the mask, to save time and space.
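Concretely, those first steps look something like this (just a sketch; the dataset/prefix names and the exact scaling expression are placeholders for what we actually use):

# register the 13-TR block to the localizer base, saving the motion parameters
3dvolreg -base localizer_base+orig -1Dfile block1_motion.1D \
         -prefix block1.volreg block1+orig

# scale each voxel to a mean of 100 and keep only voxels inside the mask
3dTstat -mean -prefix block1.mean block1.volreg+orig
3dcalc -a block1.volreg+orig -b block1.mean+orig -c mask+orig \
       -expr 'c * min(200, a/b*100)' -prefix block1.scale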
Then I run 3dDeconvolve, using the motion parameters I got from volreg above.
This is done voxel by voxel, like in a normal pre-processing stream.
THEN I take the average within the mask - and get the problem I described.
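That regression step, plus the final average, is roughly this (again a sketch; the -polort order is a placeholder):

3dDeconvolve -input block1.scale+orig \
             -polort 1 \
             -num_stimts 6 \
             -stim_file 1 block1_motion.1D'[0]' -stim_base 1 \
             -stim_file 2 block1_motion.1D'[1]' -stim_base 2 \
             -stim_file 3 block1_motion.1D'[2]' -stim_base 3 \
             -stim_file 4 block1_motion.1D'[3]' -stim_base 4 \
             -stim_file 5 block1_motion.1D'[4]' -stim_base 5 \
             -stim_file 6 block1_motion.1D'[5]' -stim_base 6 \
             -errts block1.errts

# then the within-mask average of the residuals
3dmaskave -quiet -mask mask+orig block1.errts+orig > roi_tc.txt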
What you suggest is that I average WITHIN the mask for each TR, before the regression, and feed 3dDeconvolve a single (averaged) time course instead of the 4D data (.scale).
I did not know you could feed 3dDeconvolve non-image data =)
Do you think that would make a difference?
And how would I do it?
3dmaskave -quiet -mask mask+orig block1.scale+orig > avg_scale_mask_tc.txt
(mask+orig standing in for whatever our mask dataset is called)
Change it such that each TR is in a new row?
"**** You can input a 1D time series file here,
but the time axis should run along the
ROW direction, not the COLUMN direction as
in the -input1D option. You can automatically
transpose a 1D file on input using the \'
operator at the end of the filename, as in
-input fred.1D\' "
3dDeconvolve -input avg_scale_mask_tc.txt\' \
             -errts avg_scale_mask_tc_motion_corr.txt
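Presumably I still need the motion regressors in there too, so spelled out it would be something like this (a sketch; the -polort order is my placeholder, and I'm assuming the -errts output comes out as a 1D file when the input is 1D):

3dDeconvolve -input avg_scale_mask_tc.txt\' \
             -polort 1 \
             -num_stimts 6 \
             -stim_file 1 block1_motion.1D'[0]' -stim_base 1 \
             -stim_file 2 block1_motion.1D'[1]' -stim_base 2 \
             -stim_file 3 block1_motion.1D'[2]' -stim_base 3 \
             -stim_file 4 block1_motion.1D'[3]' -stim_base 4 \
             -stim_file 5 block1_motion.1D'[4]' -stim_base 5 \
             -stim_file 6 block1_motion.1D'[5]' -stim_base 6 \
             -errts avg_scale_mask_tc_motion_corr.txt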
Then take rows 0..3 as my baseline (fixation),
and rows 9..12 as my image peak:
Signal = (peak - fixation) / fixation
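For that last step, something like this maybe (awk counts rows from 1, so rows 0..3 / 9..12 become NR 1..4 / 10..13; this assumes one residual value per row - 1dtranspose could flip it if not):

awk 'NR<=4 {f+=$1}; NR>=10 && NR<=13 {p+=$1}; END {fix=f/4; peak=p/4; print (peak-fix)/fix}' avg_scale_mask_tc_motion_corr.txt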
Don't we still have a degrees-of-freedom issue?
Thanks!