September 16, 2007 12:29PM
Dear AFNI experts,

I'm currently trying to analyze a block design in which all subjects see the same order of stimuli and each stimulus is presented for the same duration (matching tasks with either negative faces or negative scenes, compared to a matching task with simple geometric shapes). I had problems aligning the unprocessed EPI to the MPRAGE, which doesn't seem to be the best idea anyway (it is time consuming). If I understood the message board correctly, it is better to align just the stats+orig file to the anatomy. So I tried two different preprocessing methods.

1.) Scaling before 3dDeconvolve

timeshift
motion correction
smoothing
scaling (% of mean)
3dDeconvolve (with 'GAM')
Alignment of the stats+orig file to the anatomy
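To make the scaling step of method 1 concrete, this is roughly what I did with 3dTstat and 3dcalc (the dataset names here are just placeholders, not my actual files):

```shell
# Per-run scaling to percent of the voxel-wise temporal mean, applied
# after smoothing and before 3dDeconvolve. "epi_blur+orig" is a
# placeholder name for the smoothed, motion-corrected run.

# voxel-wise mean over time (3dTstat computes the mean by default)
3dTstat -prefix epi_mean epi_blur+orig

# percent of mean; step(b) zeroes voxels with a non-positive mean
3dcalc -a epi_blur+orig -b epi_mean+orig \
       -expr '100 * a/b * step(b)' \
       -prefix epi_scaled
```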

2.) Scaling after 3dDeconvolve

timeshift
motion correction
smoothing
3dDeconvolve (with 'GAM')
computing %auc:

e.g.

# sub-brick a = baseline constant, b = regression coefficient
3dcalc \
-fscale \
-a "stats+orig[0]" \
-b "stats+orig[10]" \
-expr "100 * b/a * step(1-abs(b/a))" \
-prefix firstnegativecoef_auc

3dcalc \
-fscale \
-a "stats+orig[0]" \
-b "stats+orig[13]" \
-expr "100 * b/a * step(1-abs(b/a))" \
-prefix secondnegativecoef_auc

3drefit -sublabel 0 "%SignalChange_firstnegativestim" firstnegativecoef_auc+orig
3drefit -sublabel 0 "%SignalChange_secondnegativestim" secondnegativecoef_auc+orig
3dbucket -glueto stats+orig firstnegativecoef_auc+orig
3dbucket -glueto stats+orig secondnegativecoef_auc+orig

Alignment of the stats+orig file to the anatomy
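As a sanity check on the 3dcalc expression, here is the same arithmetic on toy numbers in plain awk (no AFNI involved): step(1-abs(b/a)) is 1 when |b/a| < 1, so coefficients implying 100% or more signal change are zeroed.

```shell
# Toy numbers: a = 200 (baseline), b = 4 (regression coefficient).
# Expected percent change: 100 * 4/200 = 2, and |b/a| = 0.02 < 1,
# so the step() factor is 1 and the value survives.
echo "200 4" | awk '{
  a = $1; b = $2
  r = b / a
  s = (1 - (r < 0 ? -r : r)) > 0 ? 1 : 0
  print 100 * r * s
}'
# prints: 2
```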


Sorry for the detailed description of scaling after 3dDeconvolve, but I'm really not sure whether I did it correctly.

When I choose the β-weight as overlay, the %SignalChange_firstnegativestim sub-brick (method 2) looks much better than the glt_coef (method 1). The same happens with the other coefficient sub-brick.
Problem 1:
I'm not sure whether sub-brick #0 (Full_Fstat) is my baseline constant (sorry for my limited understanding of statistics). I have read that scaling of regression coefficients is recommended if the baseline constant is too far from 100. When I select #0 as overlay, the values are mostly over 200. So, if sub-brick #0 is my baseline constant, could this explain why I see almost no activation when I scale before 3dDeconvolve?
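Just to show my own reasoning about why the size of the baseline should matter: the same raw coefficient translates into different percent changes depending on the baseline it is divided by. A toy awk illustration with made-up numbers (nothing AFNI-specific):

```shell
# The same raw coefficient b = 4 against two different baselines:
for a in 100 200; do
  echo "$a 4" | awk '{printf "baseline=%s -> %g%% change\n", $1, 100 * $2 / $1}'
done
# prints:
# baseline=100 -> 4% change
# baseline=200 -> 2% change
```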

When I choose glt_Tstat as overlay, both methods show similar results.
Problem 2:
Does scaling before 3dDeconvolve change the information contained in the statistical sub-bricks (after all, when scaling after 3dDeconvolve I only scale the coefficients)?

I know that you cannot explain all the statistics here, but I just want to know whether my assumptions are right, and how to do the scaling correctly after 3dDeconvolve if I want to look at t-stats.

I would really appreciate any help!

Beate