August 03, 2015 07:16PM
Hi- Thanks in advance for reading through this!

I'm trying to find the best strategy for modeling my data w/ 3dDeconvolve & have questions about how breaking up my block design into different regressors, including different numbers of regressors, affects that.

In essence, there are two main conditions of interest, distributed across the task in the same pseudorandom order of 12 roughly 30-sec blocks (6 EX and 6 INC total):

INC EX INC INC EX INC EX INC EX EX EX INC

I first modeled the data by pitting all EX's against all INC's, with one stimulus timing file for each condition (i.e., each file is one line, for one functional run, of 6 start times & durations, used with dmBLOCK). The script was:

First modeling-
afni_proc.py -subj_id S$SID \
-script proc.S$SID -scr_overwrite \
-blocks volreg blur mask scale regress \
-copy_anat $anat_dir/S$SID.mprage_aligned.nii.gz \
-tcat_remove_first_trs 0 \
-dsets $epi_dir/S$SID.epi01_aligned.nii.gz \
-volreg_align_to third \
-blur_size 4.0 \
-regress_stim_times \
$stim_dir/$SID.ex.1D \
$stim_dir/$SID.inc.1D \
$stim_dir/$SID.but.1D \
$stim_dir/$SID.beg.1D \
$stim_dir/$SID.ins.1D \
-regress_stim_labels \
ex inc but beg ins \
-regress_basis_multi \
'dmBLOCK(1)' 'dmBLOCK(1)' 'GAM' 'BLOCK(2,1)' 'BLOCK(8,1)' \
-regress_stim_types \
AM1 AM1 times times times \
-regress_censor_motion 1.0 \
-regress_apply_mot_types demean deriv \
-regress_opts_3dD \
-gltsym 'SYM: ex -inc' -glt_label 1 ex-inc \
-gltsym 'SYM: -ex inc' -glt_label 2 inc-ex \
-gltsym 'SYM: ex inc' -glt_label 3 fullF \
-regress_compute_fitts \
-regress_make_ideal_sum sum_ideal.1D \
-regress_est_blur_epits \
-regress_est_blur_errts \
-regress_run_clustsim no
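For reference, since the ex/inc regressors use AM1 with a dmBLOCK(1) basis, those timing files are in AFNI's married onset:duration format, one row per run. A hypothetical one-run file with 6 blocks (times made up) would look like:

```
# S001.ex.1D -- one run, 6 blocks, onset:duration pairs (times hypothetical)
32:30 95:28 158:31 221:30 284:29 347:30
```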

However, I wanted to refine my analysis because, as indicated below in brackets, the task actually has 3 consecutive EX blocks that were lumped together to serve as a particularly powerful psychological manipulation of the EX condition, with an effect likely deserving its own analysis:

INC EX INC INC EX INC EX INC [EX EX EX] INC

My question is how best to model the paradigm with sensitivity to this block, and maybe also in light of where/how other blocks occur (e.g., early EX's or INC's vs. late ones; EX's or INC's in alternation vs. consecutively). To accomplish this most effectively, should each block become a separate regressor, with a different timing file for the first INC, second INC, etc., and the first EX, second EX, etc.? I could then flexibly mix & match these regressors into different contrasts via the GLT options. I tried this as the following:

Second modeling-
afni_proc.py -subj_id S$SID \
-script proc.S$SID -scr_overwrite \
-blocks volreg blur mask scale regress \
-copy_anat $anat_dir/S$SID.mprage_aligned.nii.gz \
-tcat_remove_first_trs 0 \
-dsets $epi_dir/S$SID.epi01_aligned.nii.gz \
-volreg_align_to third \
-blur_size 4.0 \
-regress_stim_times \
$stim_dir/$SID.inc1.1D \
$stim_dir/$SID.ex2.1D \
$stim_dir/$SID.inc3.1D \
$stim_dir/$SID.inc4.1D \
$stim_dir/$SID.ex5.1D \
$stim_dir/$SID.inc6.1D \
$stim_dir/$SID.ex7.1D \
$stim_dir/$SID.inc8.1D \
$stim_dir/$SID.ex9.1D \
$stim_dir/$SID.ex10.1D \
$stim_dir/$SID.ex11.1D \
$stim_dir/$SID.inc12.1D \
$stim_dir/$SID.but.1D \
$stim_dir/$SID.beg.1D \
$stim_dir/$SID.ins.1D \
-regress_stim_labels \
inc1 ex2 inc3 inc4 ex5 inc6 ex7 inc8 ex9 ex10 ex11 inc12 but beg ins \
-regress_basis_multi \
'dmBLOCK(1)' 'dmBLOCK(1)' 'dmBLOCK(1)' 'dmBLOCK(1)' 'dmBLOCK(1)' 'dmBLOCK(1)' 'dmBLOCK(1)' 'dmBLOCK(1)' 'dmBLOCK(1)' 'dmBLOCK(1)' 'dmBLOCK(1)' 'dmBLOCK(1)' 'GAM' 'BLOCK(2,1)' 'BLOCK(8,1)' \
-regress_stim_types \
AM1 AM1 AM1 AM1 AM1 AM1 AM1 AM1 AM1 AM1 AM1 AM1 times times times \
-regress_censor_motion 1.0 \
-regress_apply_mot_types demean deriv \
-regress_opts_3dD \
-gltsym 'SYM: -inc1 ex2 -inc3 -inc4 ex5 -inc6 ex7 -inc8 ex9 ex10 ex11 -inc12' -glt_label 1 ex-inc_full \
-gltsym 'SYM: -inc4 -inc6 -inc8 ex5 ex7 ex9' -glt_label 2 ex-inc_mid \
-gltsym 'SYM: -inc6 -inc8 -inc12 ex9 ex10 ex11' -glt_label 3 ex-inc_end \
-gltsym 'SYM: -inc1 -inc3 -inc4 -inc6 -inc8 -inc12 ex9 ex10 ex11' -glt_label 4 latex-inc \
-gltsym 'SYM: -inc1 -inc3 -inc4 -inc6 -inc8 -inc12 ex2 ex5 ex7' -glt_label 5 earlyex-inc \
-gltsym 'SYM: -inc1 -inc3 -inc4 -inc6 -inc8 -inc12 ex2' -glt_label 6 ex2-inc \
-gltsym 'SYM: -ex2 -ex5 -ex7 ex9 ex10 ex11' -glt_label 7 latex-earlyex \
-regress_compute_fitts \
-regress_make_ideal_sum sum_ideal.1D \
-regress_est_blur_epits \
-regress_est_blur_errts \
-regress_run_clustsim no
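(As an aside, generating the 12 single-block timing files from the original two 6-block files can be scripted rather than done by hand; here is a minimal Python sketch, with the block order taken from the sequence above and filenames purely hypothetical:)

```python
# Split two 6-block married timing files (one run per line, onset:duration
# pairs) into 12 single-block files, preserving the task's block order.
# Filenames and the output directory are hypothetical examples.

ORDER = ["inc", "ex", "inc", "inc", "ex", "inc",
         "ex", "inc", "ex", "ex", "ex", "inc"]

def split_timing(ex_file, inc_file, sid, out_dir="."):
    # read the single-run line of onset:duration pairs from each file
    events = {}
    for cond, fname in (("ex", ex_file), ("inc", inc_file)):
        with open(fname) as f:
            events[cond] = f.readline().split()
    # walk the block order, popping the next event of each condition
    counters = {"ex": 0, "inc": 0}
    for i, cond in enumerate(ORDER, start=1):
        ev = events[cond][counters[cond]]
        counters[cond] += 1
        with open(f"{out_dir}/{sid}.{cond}{i}.1D", "w") as f:
            f.write(ev + "\n")
```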

In doing so, analyzing that supposedly powerful last chunk of EX's contrasted against all the INC's (the "latex-inc" contrast in the second model) produced an extremely blobby map that looked like a less defined, more spread-out version of the EX vs. INC map ("ex-inc") from the first model. Before interpreting this too far: is there a statistical reason why this might be, such as a noisier map coming from averaging across fewer blocks for the EX condition (3 in the second model vs. 6 in the first)?
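(My intuition for the noise question comes from the basic fact that the standard error of an average over k independent estimates scales as 1/sqrt(k); a toy simulation of just that point, assuming independent, equal-variance per-block estimates, which is of course a simplification of a real GLM:)

```python
import numpy as np

# Toy illustration: averaging 3 block estimates is noisier than
# averaging 6, since the SE of a mean over k independent estimates
# scales as 1/sqrt(k). (Assumes independent, equal-variance block
# betas -- a simplification, not a real fMRI model.)
rng = np.random.default_rng(0)
true_beta, sigma, n_sims = 1.0, 2.0, 20000

betas = true_beta + sigma * rng.standard_normal((n_sims, 6))
mean_of_6 = betas.mean(axis=1)          # all six EX blocks
mean_of_3 = betas[:, 3:].mean(axis=1)   # only the last three

print(mean_of_6.std())  # ~ sigma/sqrt(6) = 0.816
print(mean_of_3.std())  # ~ sigma/sqrt(3) = 1.155
```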

Moreover, collapsing across all the EX's to produce one EX condition, and across all the INC's to produce one INC condition, and then contrasting these (the "ex-inc_full" contrast in the second model) did NOT produce exactly the same EX vs. INC map ("ex-inc") as the first model. Again, is there a statistical reason why this might be, such as the model being more constrained by including more regressors? Or should I have gotten exactly the same map?
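(I suspect the two models genuinely differ: the split model lets each block's amplitude float and spends more degrees of freedom, and the sum-of-betas contrast is on a different scale than a single shared-amplitude beta. A toy OLS comparison, synthetic design and not an AFNI computation, that shows this:)

```python
import numpy as np

# Toy OLS comparison (synthetic design, NOT an AFNI computation):
# model A uses one shared-amplitude regressor; model B splits it into
# six per-block regressors whose sum equals model A's regressor.
# When block amplitudes genuinely vary, the two models give different
# residual variances, contrast scales, and t-statistics.
rng = np.random.default_rng(1)
n = 200
blocks = np.zeros((n, 6))
for i in range(6):
    blocks[30 * i + 5 : 30 * i + 25, i] = 1.0   # six non-overlapping boxcars
combined = blocks.sum(axis=1, keepdims=True)

# simulate data whose per-block amplitudes are NOT all equal
amps = np.array([0.5, 1.5, 1.0, 2.0, 0.8, 1.2])
y = blocks @ amps + rng.standard_normal(n)

def ols_contrast(X, y, c):
    """Return (contrast estimate, t-statistic) for contrast vector c."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    return c @ beta, (c @ beta) / np.sqrt(s2 * c @ np.linalg.inv(X.T @ X) @ c)

est_a, t_a = ols_contrast(combined, y, np.array([1.0]))  # shared amplitude
est_b, t_b = ols_contrast(blocks, y, np.ones(6))         # sum of six betas
print(est_a, t_a)
print(est_b, t_b)
```

Note that the sum-of-betas contrast comes out roughly six times the shared-amplitude estimate here, and its t differs because the split model absorbs block-to-block amplitude differences that the single-regressor model leaves in the residuals; rescaling the contrast weights fixes the scale but not the residual/df difference.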

Finally, would it be better to try yet another modeling approach to get at the effect of that last chunk of EX's? For example, 4 regressors could be made of: the EX's that precede the major EX chunk; the INC's that precede it; the EX chunk itself; and the last INC. Or something else?
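(If that route makes sense, I imagine the contrasts would follow the same -gltsym pattern as above; a sketch with hypothetical labels:)

```
-regress_stim_labels exPre incPre exChunk incLast but beg ins \
...
-gltsym 'SYM: exChunk -exPre' -glt_label 1 chunk-vs-earlyEX \
-gltsym 'SYM: exChunk -incPre' -glt_label 2 chunk-vs-earlyINC \
```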

Any thoughts greatly appreciated. Thanks, -Robie



Edited 1 time(s). Last edit at 08/04/2015 09:27PM by neurobie.
Thread:
3dDeconvolve Modeling Question -- neurobie, August 03, 2015 07:16PM
Re: 3dDeconvolve Modeling Question -- rick reynolds, August 05, 2015 04:30PM