Hi Gang,
I appreciate your help. It is rather difficult to explain, but maybe it would help if I showed you the script for one subject.
Per the instructions in How-to #5, I volume-registered and then blurred the datasets from the three runs. I then used this script (also adapted from How-to #5):
3dTstat -prefix mean_run1 s2_run1_reg_bl+orig
3dTstat -prefix mean_run2 s2_run2_reg_bl+orig
3dTstat -prefix mean_run3 s2_run3_reg_bl+orig
3dcalc -a s2_run1_reg_bl+orig -b mean_run1+orig -fscale -expr "(a/b*100) * step(b-1160)" -prefix scaled_run1
3dcalc -a s2_run2_reg_bl+orig -b mean_run2+orig -fscale -expr "(a/b*100) * step(b-1160)" -prefix scaled_run2
3dcalc -a s2_run3_reg_bl+orig -b mean_run3+orig -fscale -expr "(a/b*100) * step(b-1160)" -prefix scaled_run3
Here is an excerpt from the How-to about how percent signal change is calculated:
"'3dTstat' will be used to compute the mean intensity value on a voxel-by-voxel basis. This mean is needed because it will serve as our baseline. This baseline will be used in our percent change equation (remember A/B*100? The mean for each voxel will be placed in the "B" slot of this equation)."
It indicates that the mean of the voxel is used as the baseline in this instance.
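In case it helps make the scaling concrete, here is the same per-voxel arithmetic the 3dTstat/3dcalc pair performs, sketched in plain Python for a single voxel (the time-series values are made up; 1160 is the same intensity threshold used in the 3dcalc expression above, and AFNI of course applies this across the whole volume):

```python
def percent_signal_change(ts, thresh=1160):
    """Scale a voxel's time series to percent of its temporal mean.

    Mirrors the 3dcalc expression (a/b*100) * step(b-1160):
    b is the voxel's mean over time (what 3dTstat computes), and
    step() zeroes out voxels whose mean is at or below the threshold
    (i.e., low-intensity voxels outside the brain).
    """
    mean = sum(ts) / len(ts)           # the "B" in A/B*100
    if mean <= thresh:                 # step(b - 1160) == 0
        return [0.0] * len(ts)
    return [v / mean * 100 for v in ts]

# A voxel hovering around 1200 scanner units becomes ~100 (percent):
print(percent_signal_change([1180, 1200, 1220, 1200]))
```

Because each run is divided by its own mean, a value of, say, 102 means "2% above that run's baseline" regardless of the run's raw intensity level, which is the point of doing this before concatenation.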
Then I concatenated all 3 runs.
3dTcat -prefix s2_allruns \
scaled_run1+orig \
scaled_run2+orig \
scaled_run3+orig
I then used the normalized, concatenated file as the input for 3dDeconvolve.
3dDeconvolve -input s2_allruns+orig \
-num_stimts 2 \
-stim_file 1 taskA.1D -stim_label 1 taskA \
-stim_file 2 taskB.1D -stim_label 2 taskB \
-concat concat2.1D \
-glt 1 taskAtaskB.txt -glt_label 1 taskAvtaskB \
-full_first -fout -tout -bucket s2_deconvolve
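For completeness: the -concat file simply lists the 0-based index of the first time point of each run within the concatenated dataset, which is what tells 3dDeconvolve to estimate a separate baseline and drift for each run. Assuming (hypothetically) 150 time points per run, concat2.1D would contain:

```
0
150
300
```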
I think you thought I was asking about the baseline and linear-drift estimation inside 3dDeconvolve, but I was actually asking about the baseline used for each run during the normalization step, before the runs are concatenated and passed to 3dDeconvolve. In short: is it valid to combine runs that have different control conditions, given that the baselines (as calculated above, i.e., each voxel's mean intensity) differ across runs?
>
> Maybe I misunderstand what you mean here, but the baseline and
> drifting effect for each run are estimated not based on the
> mean of activity. Instead they are the "best" fit with least
> square estimation. So ideally all signal should be correctly
> detected no matter what their relative magnitudes are. Of
> course negative regression coefficients do occur in the real
> world of FMRI data analysis from time to time, and some are
> real while the others are false. But the reason for false
> negative betas is not really the relative magnitude of the
> BOLD signal versus the baseline and drifting effect; usually
> it is due to some other issues such as incorrect timing, bad
> experiment design, etc.
>
> Gang