I am running into a problem doing the pre-processing required to run a context-dependent correlation analysis.
I am using the option where the baseline is extracted from a 3dDeconvolve run that contained all EPI runs (6 runs total). That baseline is then subtracted from the raw time series (which is a concatenation of all 6 EPI runs). Later, I will break the runs apart to create the seed for the correlation analysis (a rough sketch of that splitting step is included after the commands below).
The problem I am running into is that some kind of spurious signal is getting introduced into the last 10 volumes of the EPI runs during step #4 below.
Here are my commands:
1) Run 3dDeconvolve and ask it to output the baseline coefficients. Output = decon_TrialTypes_confident_getbaseline+orig (79 sub-bricks)
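I have not pasted my exact script, but the call had roughly this shape (the input names, polort order, and stimulus labels below are placeholders, not my real values):

3dDeconvolve \
    -input run1+orig run2+orig run3+orig run4+orig run5+orig run6+orig \
    -polort 3 \
    -num_stimts 2 \
    -stim_times 1 stim1_times.1D 'GAM' -stim_label 1 stim1 \
    -stim_times 2 stim2_times.1D 'GAM' -stim_label 2 stim2 \
    -bout \
    -x1D Decon.xmat.1D \
    -bucket decon_TrialTypes_confident_getbaseline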
2) # Get rid of the Full F sub-brick at sub-brick 0, because otherwise 3dSynthesize complains that the coefficient file and the matrix are different lengths
3dbucket -prefix decon_TrialTypes_confident_getbaseline_noF decon_TrialTypes_confident_getbaseline+orig'[1..78]'
3) # extract the baseline component from the coefficient file that contains all the baseline info
3dSynthesize -cbucket decon_TrialTypes_confident_getbaseline_noF+orig -matrix Decon.xmat.1D -select baseline -prefix decon_TrialTypes_confident_baselineONLY
4) # remove the baseline component from the EPI run files that have been concatenated together
3dcalc -a allruns+orig -b decon_TrialTypes_confident_baselineONLY+orig -expr "a-b" -prefix allruns_nobaseline
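For completeness, here is roughly how I plan to break allruns_nobaseline back into separate runs afterwards (assuming run 1 is volumes 0..254 and each later run is 265 volumes; the output names are just examples):

3dTcat -prefix run1_nobaseline allruns_nobaseline+orig'[0..254]'
3dTcat -prefix run2_nobaseline allruns_nobaseline+orig'[255..519]'
# ... and so on through run 6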
The way I noticed the problem: after step #4, the file allruns_nobaseline+orig shows what looks like pre-steady-state signal at the end of runs 2-6. That signal is not present in the file that contains the raw concatenated runs (allruns+orig).
It is worth mentioning that the end of run 1 does not have this problem. Run 1 is also shorter than the other 5 runs: it is 255 volumes, whereas the other runs are 265 volumes each.
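In case it is useful, these are the sanity checks I was planning to run next (a sketch; I believe the option names are right, but please correct me if not):

# both inputs to the 3dcalc in step #4 should have 255 + 5*265 = 1580 time points
3dinfo -nt allruns+orig
3dinfo -nt decon_TrialTypes_confident_baselineONLY+orig

# the trimmed coefficient bucket should have the same number of sub-bricks (78)
# as the matrix has columns
3dinfo -nv decon_TrialTypes_confident_getbaseline_noF+orig
1d_tool.py -infile Decon.xmat.1D -show_rows_cols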
Is this length difference part of my problem, or have I done something else wrong?
Best,
Christine