AFNI Message Board

afni_proc.py masked v. unmasked?
Juliet, April 15, 2009 03:54PM

We used the new Uber-Script twice, generating PSC files first with and then without a mask, to compare the 3dDeconvolve stat results. There were substantial differences between the two outputs: when we ran 3dDeconvolve on the non-masked PSC files, there was barely any activation inside the brain, whereas the masked PSC files produced the expected activation patterns for our events of interest. We are unclear on why the outputs differ so strikingly. Pasted below is the script (with the mask included in the expression that calculates the PSC).
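For reference, the only place the mask presumably enters is the per-run PSC scaling step; a minimal sketch of the two variants, with placeholder dataset names, is:

    # masked PSC: multiplying by the mask zeroes voxels outside it
    3dcalc -a epi_run+orig -b mean_run+orig -c full_mask+orig \
           -expr 'min(200, a/b*100)*c' -prefix scaled_masked

    # unmasked PSC (presumed form): all voxels are scaled, including those outside the brain
    3dcalc -a epi_run+orig -b mean_run+orig \
           -expr 'min(200, a/b*100)' -prefix scaled_unmasked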

Thank you very much for your help in advance,
Juliet


#!/usr/bin/env tcsh

echo "auto-generated by afni_proc.py, Thu Apr 2 15:41:44 2009"
echo "(version 1.38, Mar 26, 2009)"

# execute via : tcsh -x s6006pp |& tee output.s6006pp

# --------------------------------------------------
# script setup

# the user may specify a single subject to run with
if ( $#argv > 0 ) then
    set subj = $argv[1]
else
    set subj = 6006
endif

# assign output directory name
set output_dir = $subj.results

# verify that the results directory does not yet exist
if ( -d $output_dir ) then
    echo output dir "$subj.results" already exists
    exit
endif

# set list of runs
set runs = (`count -digits 2 1 5`)
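# (count -digits 2 1 5 expands to "01 02 03 04 05", so $runs holds
#  zero-padded run indices matching the r01..r05 output names below)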

# create results directory
mkdir $output_dir

# create stimuli directory, and copy stim files
mkdir $output_dir/stimuli
cp C1T1.1D C1T2.1D C1T3.1D C2T1.1D C2T2.1D C2T3.1D ER.1D $output_dir/stimuli

# -------------------------------------------------------
# apply 3dTcat to copy input dsets to results dir, while
# removing the first 4 TRs
3dTcat -prefix $output_dir/pb00.$subj.r01.tcat 6006_r1+orig'[4..$]'
3dTcat -prefix $output_dir/pb00.$subj.r02.tcat 6006_r2+orig'[4..$]'
3dTcat -prefix $output_dir/pb00.$subj.r03.tcat 6006_r3+orig'[4..$]'
3dTcat -prefix $output_dir/pb00.$subj.r04.tcat 6006_r4+orig'[4..$]'
3dTcat -prefix $output_dir/pb00.$subj.r05.tcat 6006_r5+orig'[4..$]'

# and enter the results directory
cd $output_dir

# -------------------------------------------------------
# run 3dToutcount and 3dTshift for each run
foreach run ( $runs )
    3dToutcount -automask pb00.$subj.r$run.tcat+orig > outcount_r$run.1D

    3dTshift -heptic -prefix pb01.$subj.r$run.tshift \
        pb00.$subj.r$run.tcat+orig
end

# -------------------------------------------------------
# align each dset to the base volume
foreach run ( $runs )
    3dvolreg -verbose -zpad 4 -base pb01.$subj.r05.tshift+orig'[163]' \
        -1Dfile dfile.r$run.1D -prefix pb02.$subj.r$run.volreg \
        -heptic \
        pb01.$subj.r$run.tshift+orig
end
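# (the '[163]' base sub-brick is the last volume of run 05 after the
#  tcat/tshift steps, consistent with -volreg_align_to last in the
#  applied options at the end of this script)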

# make a single file of registration params
cat dfile.r??.1D > dfile.rall.1D
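# (dfile.rall.1D now holds the 6 volreg motion parameters per TR across
#  all runs; they enter 3dDeconvolve below as -stim_base regressors)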

# -------------------------------------------------------
# blur each volume
foreach run ( $runs )
    3dmerge -1blur_fwhm 4.0 -doall -prefix pb03.$subj.r$run.blur \
        pb02.$subj.r$run.volreg+orig
end

# -------------------------------------------------------
# create 'full_mask' dataset (union mask)
foreach run ( $runs )
    3dAutomask -dilate 1 -prefix rm.mask_r$run pb03.$subj.r$run.blur+orig
end

# get mean and compare it to 0 for taking 'union'
3dMean -datum short -prefix rm.mean rm.mask*.HEAD
3dcalc -a rm.mean+orig -expr 'ispositive(a-0)' -prefix full_mask.$subj
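# (the mean of the binary run masks is positive wherever at least one
#  run's automask is 1, so full_mask is the union of the run masks)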

# -------------------------------------------------------
# scale each voxel time series to have a mean of 100
# (subject to maximum value of 200)
foreach run ( $runs )
    3dTstat -prefix rm.mean_r$run pb03.$subj.r$run.blur+orig
    3dcalc -a pb03.$subj.r$run.blur+orig -b rm.mean_r$run+orig \
           -c full_mask.$subj+orig \
           -expr 'min(200, a/b*100)*c' \
           -prefix pb04.$subj.r$run.scale
end
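# (the '*c' factor zeroes every voxel outside full_mask; in the unmasked
#  variant of this script the expression is presumably just
#  'min(200, a/b*100)', so out-of-brain voxels keep their scaled values)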

# -------------------------------------------------------
# run the regression analysis
3dDeconvolve -input pb04.$subj.r??.scale+orig.HEAD \
    -polort 3 \
    -global_times \
    -num_stimts 13 \
    -stim_times 1 stimuli/C1T1.1D 'GAM' \
    -stim_label 1 C1T1 \
    -stim_times 2 stimuli/C1T2.1D 'GAM' \
    -stim_label 2 C1T2 \
    -stim_times 3 stimuli/C1T3.1D 'GAM' \
    -stim_label 3 C1T3 \
    -stim_times 4 stimuli/C2T1.1D 'GAM' \
    -stim_label 4 C2T1 \
    -stim_times 5 stimuli/C2T2.1D 'GAM' \
    -stim_label 5 C2T2 \
    -stim_times 6 stimuli/C2T3.1D 'GAM' \
    -stim_label 6 C2T3 \
    -stim_times 7 stimuli/ER.1D 'GAM' \
    -stim_label 7 ER \
    -stim_file 8 dfile.rall.1D'[0]' -stim_base 8 -stim_label 8 roll \
    -stim_file 9 dfile.rall.1D'[1]' -stim_base 9 -stim_label 9 pitch \
    -stim_file 10 dfile.rall.1D'[2]' -stim_base 10 -stim_label 10 yaw \
    -stim_file 11 dfile.rall.1D'[3]' -stim_base 11 -stim_label 11 dS \
    -stim_file 12 dfile.rall.1D'[4]' -stim_base 12 -stim_label 12 dL \
    -stim_file 13 dfile.rall.1D'[5]' -stim_base 13 -stim_label 13 dP \
    -fout -tout -x1D X.xmat.1D -xjpeg X.jpg \
    -fitts fitts.$subj \
    -bucket stats.$subj
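# (baseline model: 4 polynomial terms per run from -polort 3 plus the 6
#  motion regressors marked with -stim_base; regressors 1-7 are the
#  events of interest)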


# if 3dDeconvolve fails, terminate the script
if ( $status != 0 ) then
    echo '---------------------------------------'
    echo '** 3dDeconvolve error, failing...'
    echo '   (consider the file 3dDeconvolve.err)'
    exit
endif


# create an all_runs dataset to match the fitts, errts, etc.
3dTcat -prefix all_runs.$subj pb04.$subj.r??.scale+orig.HEAD

# create ideal files for each stim type
1dcat X.xmat.1D'[20]' > ideal_C1T1.1D
1dcat X.xmat.1D'[21]' > ideal_C1T2.1D
1dcat X.xmat.1D'[22]' > ideal_C1T3.1D
1dcat X.xmat.1D'[23]' > ideal_C2T1.1D
1dcat X.xmat.1D'[24]' > ideal_C2T2.1D
1dcat X.xmat.1D'[25]' > ideal_C2T3.1D
1dcat X.xmat.1D'[26]' > ideal_ER.1D
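# (with -polort 3 and 5 runs, columns 0-19 of X.xmat.1D are the baseline
#  polynomials, so columns 20-26 hold the 7 stimulus regressors)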

# -------------------------------------------------------
# generate a review script for the unprocessed EPI data
gen_epi_review.py -script @epi_review.$subj \
    -dsets pb00.$subj.r??.tcat+orig.HEAD

# -------------------------------------------------------

# remove temporary rm.* files
\rm -f rm.*

# return to parent directory
cd ..

# -------------------------------------------------------
# script generated by the command:
#
# afni_proc.py -ask_me
#
# all applied options: -subj_id 6006 -script s6006pp -tcat_remove_first_trs \
#     4 -volreg_align_to last -regress_basis GAM -regress_stim_times C1T1.1D \
#     C1T2.1D C1T3.1D C2T1.1D C2T2.1D C2T3.1D ER.1D -regress_stim_labels C1T1 \
#     C1T2 C1T3 C2T1 C2T2 C2T3 ER
Subject                                  Author           Posted

afni_proc.py masked v. unmasked?         Juliet           April 15, 2009 03:54PM
Re: afni_proc.py masked v. unmasked?     rick reynolds    April 15, 2009 04:15PM
Re: afni_proc.py masked v. unmasked?     Juliet           April 15, 2009 05:49PM
Re: afni_proc.py masked v. unmasked?     Juliet           April 16, 2009 05:01PM
Re: afni_proc.py masked v. unmasked?     rick reynolds    April 16, 2009 10:33PM