3dDeconvolve - across scan comparison
Posted by Elizabeth Felix on August 15, 2003 11:39AM

I want to analyze data from an experiment in which two functional scans were run. The same stimulus conditions were applied in each scan, but between the two scans a “treatment” was administered. So, what I want to do is compare the response to the stimulus in scan 1 to the response to the stimulus in scan 2.

I can set up 3dDeconvolve to do this (by concatenating the two scans and setting up a -glt to test the difference between the stimulus coefficients for scan 1 and those for scan 2), but I am getting confused about how best to “equate” the two scans so that the change in response to the stimulus is meaningful. Ultimately, I want to be able to do a group analysis, and from what I have read on the AFNI website, this will require converting to percent signal change in order to best compare the responses across subjects.
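
For concreteness, here is roughly the setup I have in mind. The file names are placeholders, and the details (polort order, stimulus files) would of course need to match the real data, so treat this as a sketch rather than a working script:

    # Concatenate the two scans, model the stimulus separately per scan,
    # and test the scan2 - scan1 difference with a GLT.
    # runs.1D holds the starting index of each scan in the concatenated
    # dataset.  stim_run1.1D is the 0/1 stimulus regressor for scan 1,
    # padded with zeros over scan 2's time points; stim_run2.1D is the
    # reverse.
    3dDeconvolve                                         \
        -input scans_cat+orig                            \
        -concat runs.1D                                  \
        -polort 1                                        \
        -num_stimts 2                                    \
        -stim_file 1 stim_run1.1D -stim_label 1 StimPre  \
        -stim_file 2 stim_run2.1D -stim_label 2 StimPost \
        -num_glt 1                                       \
        -glt 1 contrast.1D -glt_label 1 PostMinusPre     \
        -bucket decon_out

With two runs at -polort 1 there are four baseline parameters, so contrast.1D would be a single row like 0 0 0 0 -1 1 (minus StimPre, plus StimPost); the number of leading zeros has to match the actual baseline model.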

Two suggestions for calculating percent signal change (based on 3dDeconvolve coefficients) have been made on the AFNI board: (a) use the baseline (and linear drift) estimates from the 3dDeconvolve output to calculate percent signal change; (b) normalize each scan before concatenating.
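
To make the two options concrete, here is a rough sketch of each. The sub-brick indices and file names below are hypothetical (I would verify the actual indices with 3dinfo on the output bucket):

    # (a) percent change from the 3dDeconvolve output: divide a stimulus
    # coefficient by the corresponding run's baseline coefficient.
    # Here [0] = run-1 baseline constant and [4] = StimPre coefficient
    # (made-up indices -- check with: 3dinfo -verb decon_out+orig).
    3dcalc -a 'decon_out+orig[4]' -b 'decon_out+orig[0]' \
           -expr '100 * a / b' -prefix stim1_pct

    # (b) normalize before concatenating: scale each run to a voxelwise
    # mean of 100, then concatenate the scaled runs.
    3dTstat -mean -prefix mean_run1 run1+orig
    3dcalc  -a run1+orig -b mean_run1+orig \
            -expr '100 * a / b' -prefix run1_scaled
    # ...repeat for run2, then:
    3dTcat -prefix scans_cat run1_scaled+orig run2_scaled+orig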

Since I want to compare the two scans directly (instead of “averaging” the response to the stimulus across the two scans, as is usually done when concatenating), what is my best approach?
Also, normalizing before concatenating treats the mean of all the time points in the dataset as the baseline. Does 3dDeconvolve do the same thing? (I was under the impression that 3dDeconvolve calculates its baseline estimate using only the time points designated as “baseline”, i.e., only the “0”-designated time points in the stim_file.) Might these two approaches produce different results?
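
As a toy illustration of why I am worried (made-up numbers): suppose the rest time points sit at 100 and the stimulus time points at 110, in equal proportion. The grand mean that pre-normalization divides by is 105, whereas a baseline estimated from the rest points alone would be 100:

    # grand mean over ALL time points (what pre-normalization uses)
    printf '%s\n' 100 110 100 110 | awk '{s += $1; n++} END {print s/n}'
    # -> 105, versus 100 from the rest-only points

So whenever the stimulus response is nonzero, the two “baselines” differ, and the resulting percent-change values would differ as well.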

Any suggestions/advice would be appreciated.

Thanks
