I want to analyze data from an experiment in which two functional scans were run. The same stimulus conditions were applied in each scan, but in between the two scans a "treatment" was administered. So, what I want to do is compare the response to the stimulus in scan 1 with the response to the stimulus in scan 2.
I can set up 3dDeconvolve to do this (by concatenating the two scans and setting up a -glt to test the difference between the stimulus coefficients for scan 1 and those for scan 2), but I am getting confused about how best to "equate" the two scans so that the change in response to the stimulus is meaningful. Ultimately, I want to be able to do a group analysis, and from what I have read on the AFNI site, this will require converting to % signal change in order to best compare the responses across subjects.
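For what it's worth, here is a rough sketch of the kind of 3dDeconvolve setup I mean. The dataset and timing-file names are made up, and the -polort / -concat values are just placeholders for a hypothetical pair of 200-TR runs; the point is only the structure (one regressor per run for the same stimulus, plus a GLT testing their difference):

```shell
# Sketch only: two runs, same stimulus coded as separate regressors,
# so a GLT can test (scan 2 response) - (scan 1 response).
3dDeconvolve \
    -input scan1+orig scan2+orig \
    -concat '1D: 0 200' \
    -polort 1 \
    -num_stimts 2 \
    -stim_file 1 stim_run1.1D -stim_label 1 StimRun1 \
    -stim_file 2 stim_run2.1D -stim_label 2 StimRun2 \
    -num_glt 1 \
    -gltsym 'SYM: StimRun2 -StimRun1' -glt_label 1 Run2_vs_Run1 \
    -bucket decon_out
```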
There have been two suggestions on the AFNI message board for calculating % signal change from 3dDeconvolve coefficients: (a) use the baseline (and linear drift) estimates from the 3dDeconvolve output to convert the stimulus coefficients to % signal change; (b) normalize each run before concatenating.
Since I want to compare the two scans directly (instead of “averaging” the response to the stimulus across the two scans, as is usually done when concatenating), what is my best approach?
Also, normalizing before concatenating treats the mean of all the time points in the dataset as the baseline. Does 3dDeconvolve do the same thing? I was under the impression that 3dDeconvolve calculates the baseline estimate using only the time points that are designated as "baseline" (i.e., only the "0"-designated time points in the stim_file). Might these two approaches produce different results?
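To make my worry concrete, here is a toy illustration with made-up numbers: a 6-point "run" where points 1, 2, 5, and 6 are baseline (stim_file = 0) and points 3 and 4 carry a task response. The all-point mean (the normalize-first convention) and the baseline-only mean (my impression of what 3dDeconvolve estimates) come out differently whenever there is task-related signal:

```shell
# Toy numbers: baseline level 100, task points at 110.
ts="100 100 110 110 100 100"

# (b) normalize-first style baseline: mean of ALL time points
mean_all=$(echo $ts | awk '{s=0; for(i=1;i<=NF;i++) s+=$i; print s/NF}')

# (a) baseline from the "0"-designated time points only (points 1,2,5,6)
mean_base=$(echo $ts | awk '{print ($1+$2+$5+$6)/4}')

echo "all-point mean:     $mean_all"   # 103.333 -- pulled up by the task signal
echo "baseline-only mean: $mean_base"  # 100
```

So the two denominators (and hence the resulting % signal change values) would not agree unless the task-related signal averages to zero over the run.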
Any suggestions/advice would be appreciated.
Thanks