AFNI Message Board

July 02, 2015 03:45PM
Hi AFNI Experts,

I am analyzing a study looking at amygdala and hippocampus responses to novel and repeated stimuli (crossed with two other factors: emotional vs. neutral, and humans vs. scenes). I first ran a fairly standard afni_proc.py command using a TENT basis function to model the different conditions, then extracted the resulting data (tents 2-5) for amygdala and hippocampus ROIs made with FreeSurfer. Here is the afni_proc.py command I used:

afni_proc.py \
    -subj_id sub${subject}.novelty \
    -script ${singlesubscripts}/proc_sub${subject}.novelty.notlrc.noblur \
    -out_dir ${subfolder}/sub${subject}/Novelty/results.notlrc.noblur \
    -dsets ${subfolder}/sub${subject}/sub${subject}.KAward.Nov+orig.HEAD \
    -copy_anat ${subfolder}/sub${subject}/sub${subject}.KAward.spgr+orig.HEAD \
    -blocks tshift align volreg mask scale regress \
    -tcat_remove_first_trs 3 \
    -volreg_align_to third \
    -volreg_align_e2a \
    -regress_stim_times ${subfolder}/Scripts/Novelty/1dfiles/novelty.*.1D \
    -regress_stim_labels NEH NES NNeH NNeS REH RES RNeH RNeS \
    -regress_basis 'TENT(0,14,8)' \
    -regress_censor_motion 0.3 \
    -regress_censor_outliers 0.1 \
    -regress_opts_3dD \
        -gltsym 'SYM: +NNeH -NNeS' -glt_label 1 NNeH_minus_NNeS \
        -gltsym 'SYM: .25*NEH .25*NES .25*REH .25*RES -.25*NNeH -.25*NNeS -.25*RNeH -.25*RNeS' -glt_label 2 Emotional_minus_Neutral \
        -gltsym 'SYM: .25*NEH .25*NES .25*NNeH .25*NNeS -.25*REH -.25*RES -.25*RNeH -.25*RNeS' -glt_label 3 Novel_minus_Repeated \
        -gltsym 'SYM: .5*NEH .5*NES -.5*REH -.5*RES' -glt_label 4 Both_Novel_Emotional_minus_Both_Repeated_Emotional \
        -gltsym 'SYM: .5*NNeH .5*NNeS -.5*RNeH -.5*RNeS' -glt_label 5 Both_Novel_Neutral_minus_Both_Repeated_Neutral \
        -jobs 6 \
    -regress_est_blur_epits \
    -regress_est_blur_errts
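For reference, 'TENT(0,14,8)' models the response with 8 triangular (tent) basis functions whose peaks span 0-14 s after stimulus onset, spaced 2 s apart. A small NumPy sketch of that basis (illustrative only, not AFNI code):

```python
import numpy as np

# TENT(b, c, n): n triangular basis functions with peaks evenly spaced
# from b to c seconds after stimulus onset.  For TENT(0,14,8) the peaks
# fall at 0, 2, 4, ..., 14 s, each tent 2 s wide on either side.
b, c, n = 0.0, 14.0, 8
centers = np.linspace(b, c, n)
width = (c - b) / (n - 1)          # 2 s between adjacent peaks

def tent(t, center):
    """Triangular function: 1 at the peak, falling to 0 one 'width' away."""
    return np.maximum(0.0, 1.0 - np.abs(t - center) / width)

t = np.arange(0, 15, 1.0)          # sample the window at 1 s resolution
basis = np.column_stack([tent(t, ct) for ct in centers])

# Within [0, 14] s the tents form a partition of unity: at every time
# point the 8 basis functions sum to 1, so the fitted betas trace out
# the estimated response shape directly.
print(basis.sum(axis=1))
```

This is why the individual tent coefficients (e.g. tents 2-5 here) can be read directly as the estimated response amplitude at their peak times.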


I also wanted to look at activity for each individual trial (to plot the response across repetitions of the repeated stimuli, to show that the novelty effect was not just habituation). To do this, I took the file that afni_proc.py produced right before 3dDeconvolve and ran it through a separate 3dDeconvolve for each condition, using -stim_times_IM for that condition in each run, and then used 3dLSS to get an activation estimate for each trial. To limit the number of regressors in these models, I used a GAM basis instead of TENT. I then extracted these data for the same FreeSurfer ROIs as before. Here is an example of that script for one condition:

#NEH

set cond=NEH

mkdir ${output}/${cond}

setenv cond_folder ${output}/${cond}

3dDeconvolve -fout -tout -full_first -polort a -x1D ${cond_folder}/${cond}.xmat.singII.1D \
    -x1D_uncensored ${cond_folder}/${cond}.xmat.singII.uncensored.1D \
    -input ${data}/pb03.sub${sub}.novelty.r01.scale+orig -num_stimts 14 -jobs 6 \
    -censor ${data}/censor_sub${sub}_combined_2.1D \
    -stim_times_IM 1 ${stim}/novelty.NEH.1D 'GAM' -stim_label 1 NEH \
    -stim_times 2 ${stim}/novelty.NES.1D 'GAM' -stim_label 2 NES \
    -stim_times 3 ${stim}/novelty.NNeH.1D 'GAM' -stim_label 3 NNeH \
    -stim_times 4 ${stim}/novelty.NNeS.1D 'GAM' -stim_label 4 NNeS \
    -stim_times 5 ${stim}/novelty.REH.1D 'GAM' -stim_label 5 REH \
    -stim_times 6 ${stim}/novelty.RES.1D 'GAM' -stim_label 6 RES \
    -stim_times 7 ${stim}/novelty.RNeH.1D 'GAM' -stim_label 7 RNeH \
    -stim_times 8 ${stim}/novelty.RNeS.1D 'GAM' -stim_label 8 RNeS \
    -stim_file 9 ${data}/motion_demean.1D'[0]' -stim_base 9 \
    -stim_file 10 ${data}/motion_demean.1D'[1]' -stim_base 10 \
    -stim_file 11 ${data}/motion_demean.1D'[2]' -stim_base 11 \
    -stim_file 12 ${data}/motion_demean.1D'[3]' -stim_base 12 \
    -stim_file 13 ${data}/motion_demean.1D'[4]' -stim_base 13 \
    -stim_file 14 ${data}/motion_demean.1D'[5]' -stim_base 14 \
    -bucket ${cond_folder}/stats.${cond}



foreach cond (NEH NES NNeH NNeS REH RES RNeH RNeS)

setenv cond_folder ${output}/${cond}

3dLSS -matrix ${cond_folder}/${cond}.xmat.singII.1D \
    -input ${data}/pb03.sub${sub}.novelty.r01.scale+orig \
    -save1D ${cond_folder}/${cond}.LSS.1D \
    -prefix ${cond_folder}/${cond}.LSS

end
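For intuition about what 3dLSS does with that -stim_times_IM matrix: it implements the "least squares - separate" (LSS) approach of Mumford et al. (2012), estimating each trial's beta from its own small model containing that trial's regressor plus a regressor summing all the other trials. A toy NumPy sketch with made-up onsets and a simplified response shape (none of these names or numbers come from the actual data or AFNI internals):

```python
import numpy as np

rng = np.random.default_rng(0)

n_tr = 120          # toy time series length
n_trials = 8        # trials in one condition
onsets = np.arange(5, 5 + 10 * n_trials, 10)   # illustrative onset grid

# A short fixed response shape standing in for the GAM regressor
hrf = np.array([0.0, 0.4, 1.0, 0.7, 0.3, 0.1])

def trial_regressor(onset):
    """One regressor per trial: the response shape placed at the onset."""
    x = np.zeros(n_tr)
    x[onset:onset + len(hrf)] = hrf
    return x

X_trials = np.column_stack([trial_regressor(o) for o in onsets])

# Simulate data: every trial responds with amplitude 1, plus noise
y = X_trials.sum(axis=1) + 0.1 * rng.standard_normal(n_tr)

# LSS: for each trial, fit a separate model with (a) that trial's
# regressor, (b) one regressor for all remaining trials, (c) a baseline
betas = []
for j in range(n_trials):
    others = np.delete(np.arange(n_trials), j)
    X = np.column_stack([X_trials[:, j],                 # trial of interest
                         X_trials[:, others].sum(axis=1),  # all other trials
                         np.ones(n_tr)])                 # constant baseline
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    betas.append(b[0])

betas = np.array(betas)
print(betas.round(2))   # each estimate should land near the true amplitude of 1
```

Collapsing the "other trials" into a single regressor is what keeps each per-trial fit stable when adjacent trial regressors overlap, which is the usual motivation for LSS over fitting all trial regressors simultaneously.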

I ran repeated-measures ANOVAs on both of these sets of extracted data, with the only difference being that I included trial as a factor in the ANOVA on the individually modeled dataset. These two ANOVAs are giving me very different results. For example, in the main analysis the left hippocampus shows an effect of valence (p=.005), while in the individually modeled data it does not (p=.22). And this isn't just a loss of power: the left amygdala shows no valence x image type interaction in the main analysis (p=.737), but does in the individually modeled analysis (p=.003). I have checked a few times to make sure it isn't a simple matter of getting condition names mixed up. I have also removed outliers (beyond plus or minus 3.5 SDs), and this does not bring the analyses into closer agreement.

So, my question is: are these differences possibly just due to differences in how the data were modeled (the combination of individual-trial modeling and the use of GAM instead of TENT), or is it likely that something else went wrong here?

Thank you,

Walker

(sorry for the long question)
Subject | Author | Posted

Modeling Each Condition vs. Each Trial | wped | July 02, 2015 03:45PM
Re: Modeling Each Condition vs. Each Trial | wped | July 06, 2015 04:30PM
Re: Modeling Each Condition vs. Each Trial | rick reynolds | July 07, 2015 08:57AM