Error message in afni proc script
Posted by tamtam on December 04, 2020 04:28PM
Hi there,

I am receiving this error message when I try to run my pre-processing script (see below). I don't see any additional spaces in my script. Any suggestions would be helpful!

Error message in the script output

3dTstat -prefix rm.mean_r02 pb03.1423.r02.blur+tlrc
++ 3dTstat: AFNI version=AFNI_20.3.02 (Nov 12 2020) [64-bit]
++ Authored by: KR Hammett & RW Cox
++ Output dataset ./rm.mean_r02+tlrc.BRIK
3dcalc -a pb03.1423.r02.blur+tlrc -b rm.mean_r02+tlrc -c mask_epi_extents+tlrc -expr c * min(200, a/b*100)*step(a)*step(b) -prefix pb04.1423.r02.scale
++ 3dcalc: AFNI version=AFNI_20.3.02 (Nov 12 2020) [64-bit]
++ Authored by: A cast of thousands
++ Output dataset ./pb04.1423.r02.scale+tlrc.BRIK
end
3dTstat -prefix rm.mean_r03 pb03.1423.r03.blur+tlrc
++ 3dTstat: AFNI version=AFNI_20.3.02 (Nov 12 2020) [64-bit]
++ Authored by: KR Hammett & RW Cox
++ Output dataset ./rm.mean_r03+tlrc.BRIK
3dcalc -a pb03.1423.r03.blur+tlrc -b rm.mean_r03+tlrc -c mask_epi_extents+tlrc -expr c * min(200, a/b*100)*step(a)*step(b) -prefix pb04.1423.r03.scale
++ 3dcalc: AFNI version=AFNI_20.3.02 (Nov 12 2020) [64-bit]
++ Authored by: A cast of thousands
++ Output dataset ./pb04.1423.r03.scale+tlrc.BRIK
end
3dTstat -prefix rm.mean_r04 pb03.1423.r04.blur+tlrc
++ 3dTstat: AFNI version=AFNI_20.3.02 (Nov 12 2020) [64-bit]
++ Authored by: KR Hammett & RW Cox
++ Output dataset ./rm.mean_r04+tlrc.BRIK
3dcalc -a pb03.1423.r04.blur+tlrc -b rm.mean_r04+tlrc -c mask_epi_extents+tlrc -expr c * min(200, a/b*100)*step(a)*step(b) -prefix pb04.1423.r04.scale
++ 3dcalc: AFNI version=AFNI_20.3.02 (Nov 12 2020) [64-bit]
++ Authored by: A cast of thousands
++ Output dataset ./pb04.1423.r04.scale+tlrc.BRIK
end
3dTstat -prefix rm.mean_r05 pb03.1423.r05.blur+tlrc
++ 3dTstat: AFNI version=AFNI_20.3.02 (Nov 12 2020) [64-bit]
++ Authored by: KR Hammett & RW Cox
++ Output dataset ./rm.mean_r05+tlrc.BRIK
3dcalc -a pb03.1423.r05.blur+tlrc -b rm.mean_r05+tlrc -c mask_epi_extents+tlrc -expr c * min(200, a/b*100)*step(a)*step(b) -prefix pb04.1423.r05.scale
++ 3dcalc: AFNI version=AFNI_20.3.02 (Nov 12 2020) [64-bit]
++ Authored by: A cast of thousands
++ Output dataset ./pb04.1423.r05.scale+tlrc.BRIK
end
3dTstat -prefix rm.mean_r06 pb03.1423.r06.blur+tlrc
++ 3dTstat: AFNI version=AFNI_20.3.02 (Nov 12 2020) [64-bit]
++ Authored by: KR Hammett & RW Cox
++ Output dataset ./rm.mean_r06+tlrc.BRIK
3dcalc -a pb03.1423.r06.blur+tlrc -b rm.mean_r06+tlrc -c mask_epi_extents+tlrc -expr c * min(200, a/b*100)*step(a)*step(b) -prefix pb04.1423.r06.scale
++ 3dcalc: AFNI version=AFNI_20.3.02 (Nov 12 2020) [64-bit]
++ Authored by: A cast of thousands
++ Output dataset ./pb04.1423.r06.scale+tlrc.BRIK
end
1d_tool.py -infile dfile_rall.1D -set_nruns 6 -demean -write motion_demean.1D
1d_tool.py -infile dfile_rall.1D -set_nruns 6 -derivative -demean -write motion_deriv.1D
1d_tool.py -infile motion_demean.1D -set_nruns 6 -split_into_pad_runs mot_demean
1d_tool.py -infile motion_deriv.1D -set_nruns 6 -split_into_pad_runs mot_deriv
1d_tool.py -infile dfile_rall.1D -set_nruns 6 -show_censor_count -censor_prev_TR -censor_motion 2 motion_1423
total number of censored TRs (simple form) = 4
1deval -a motion_1423_censor.1D -b outcount_1423_censor.1D -expr a*b
set ktrs = `1d_tool.py -infile censor_${subj}_combined_2.1D
-show_trs_uncensored encoded`
1d_tool.py -infile censor_1423_combined_2.1D -show_trs_uncensored encoded
Unmatched '''.
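
(A note on that last line: tcsh prints an "Unmatched" complaint when it reads a quote character it cannot pair, which is a common symptom of a curly/smart quote having been pasted in from a word processor or web page. A rough way to hunt for that, assuming the script is saved as the proc.Nov30 named in its header comment, is a byte-wise search for anything outside plain ASCII, plus a listing of the quoted GAM basis strings so the quote characters can be eyeballed:

# flag any line containing a byte outside printable ASCII
# (curly quotes show up here; tabs will also be flagged)
env LC_ALL=C grep -n '[^ -~]' proc.Nov30

# list the lines that quote the GAM basis, to inspect the quote characters
grep -n "GAM" proc.Nov30

Anything flagged by the first command, or an odd-looking quote in the second, would be worth retyping by hand.)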


My Pre-processing script

#!/bin/tcsh -xef

# execute via :
# tcsh -xef proc.Nov30 |& tee output.proc_Nov23.txt

# =========================== auto block: setup ============================
# script setup

# the user may specify a single subject to run with
if ( $#argv > 0 ) then
set subj = $argv[1]
else
set subj = 1423
endif

# assign output directory name
set output_dir = $subj.results

# verify that the results directory does not yet exist
if ( -d $output_dir ) then
echo output dir "$subj.results" already exists
exit
endif

# set list of runs
set runs = (`count -digits 2 1 6`)

# create results and stimuli directories
mkdir $output_dir
mkdir $output_dir/stimuli

# copy stim files into stimulus directory
cp /imaging/Tamara/Youth/1423/Dfeel_Bad_NoResp.txt \
/imaging/Tamara/Youth/1423/Dfeel_Bad_Resp.txt \
/imaging/Tamara/Youth/1423/Dfeel_Happy_NoResp.txt \
/imaging/Tamara/Youth/1423/Dfeel_Happy_Resp.txt \
/imaging/Tamara/Youth/1423/Feel_Bad_NoResp.txt \
/imaging/Tamara/Youth/1423/Feel_Bad_Resp.txt \
/imaging/Tamara/Youth/1423/Feel_Happy_NoResp.txt \
/imaging/Tamara/Youth/1423/Feel_Happy_Resp.txt \
/imaging/Tamara/Youth/1423/Instruct.txt \
/imaging/Tamara/Youth/1423/Nat_neg_NoResp.txt \
/imaging/Tamara/Youth/1423/Nat_neg_Resp.txt \
/imaging/Tamara/Youth/1423/Nat_post_NoResp.txt \
/imaging/Tamara/Youth/1423/Nat_post_Resp.txt \
/imaging/Tamara/Youth/1423/Rating.txt \
$output_dir/stimuli

# copy anatomy to results dir
3dcopy 1423.anat+orig $output_dir/1423.anat

# ============================ auto block: tcat ============================
# apply 3dTcat to copy input dsets to results dir,
# while removing the first 0 TRs
3dTcat -prefix $output_dir/pb00.$subj.r01.tcat 1423.1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r02.tcat 1423.2+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r03.tcat 1423.3+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r04.tcat 1423.4+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r05.tcat 1423.5+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r06.tcat 1423.6+orig'[0..$]'


# and make note of repetitions (TRs) per run
set tr_counts = ( 366 366 366 366 366 366 )

# -------------------------------------------------------
# enter the results directory (can begin processing data)
cd $output_dir


# ========================== auto block: outcount ==========================
# data check: compute outlier fraction for each volume
touch out.pre_ss_warn.txt
foreach run ( $runs )
3dToutcount -automask -fraction -polort 'A' -legendre \
pb00.$subj.r$run.tcat+orig > outcount.r$run.1D

# censor outlier TRs per run, ignoring the first 0 TRs
# - censor when more than 0.1 of automask voxels are outliers
# - step() defines which TRs to remove via censoring
1deval -a outcount.r$run.1D -expr "1-step(a-0.1)" > rm.out.cen.r$run.1D

# outliers at TR 0 might suggest pre-steady state TRs
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
echo "** TR #0 outliers: possible pre-steady state TRs in run $run" \
>> out.pre_ss_warn.txt
endif
end

# catenate outlier counts into a single time series
cat outcount.r*.1D > outcount_rall.1D

# catenate outlier censor files into a single time series
cat rm.out.cen.r*.1D > outcount_${subj}_censor.1D

# ================================= tshift =================================
# time shift data so all slice timing is the same
foreach run ( $runs )
3dTshift -tzero 0 -quintic -prefix pb01.$subj.r$run.tshift \
pb00.$subj.r$run.tcat+orig
end

# --------------------------------
# extract volreg registration base
3dbucket -prefix vr_base pb01.$subj.r01.tshift+orig"[2]"

# ================================= align ==================================
# for e2a: compute anat alignment transformation to EPI registration base
# (new anat will be intermediate, stripped, 1423.anat_ns+orig)
align_epi_anat.py -anat2epi -anat 1423.anat+orig \
-save_skullstrip -suffix _al_junk \
-epi vr_base+orig -epi_base 0 \
-epi_strip 3dAutomask \
-volreg off -tshift off

# ================================== tlrc ==================================
# warp anatomy to standard space (non-linear warp)
auto_warp.py -base MNI152_T1_2009c+tlrc -input 1423.anat_ns+orig \
-skull_strip_input no

# move results up out of the awpy directory
# (NL-warped anat, affine warp, NL warp)
# (use typical standard space name for anat)
# (wildcard is a cheap way to go after any .gz)
3dbucket -prefix 1423.anat_ns awpy/1423.anat_ns.aw.nii*
mv awpy/anat.un.aff.Xat.1D .
mv awpy/anat.un.aff.qw_WARP.nii .

# ================================= volreg =================================
# align each dset to base volume, align to anat, warp to tlrc space

# verify that we have a +tlrc warp dataset
if ( ! -f 1423.anat_ns+tlrc.HEAD ) then
echo "** missing +tlrc warp dataset: 1423.anat_ns+tlrc.HEAD"
exit
endif

# register and warp
foreach run ( $runs )
# register each volume to the base image
3dvolreg -verbose -zpad 1 -base vr_base+orig \
-1Dfile dfile.r$run.1D -prefix rm.epi.volreg.r$run \
-cubic \
-1Dmatrix_save mat.r$run.vr.aff12.1D \
pb01.$subj.r$run.tshift+orig

# create an all-1 dataset to mask the extents of the warp
3dcalc -overwrite -a pb01.$subj.r$run.tshift+orig -expr 1 \
-prefix rm.epi.all1

# catenate volreg/epi2anat/tlrc xforms
cat_matvec -ONELINE \
anat.un.aff.Xat.1D \
1423.anat_al_junk_mat.aff12.1D -I \
mat.r$run.vr.aff12.1D > mat.r$run.warp.aff12.1D

# apply catenated xform: volreg/epi2anat/tlrc/NLtlrc
# then apply non-linear standard-space warp
3dNwarpApply -master 1423.anat_ns+tlrc -dxyz 2 \
-source pb01.$subj.r$run.tshift+orig \
-nwarp "anat.un.aff.qw_WARP.nii mat.r$run.warp.aff12.1D" \
-prefix rm.epi.nomask.r$run

# warp the all-1 dataset for extents masking
3dNwarpApply -master 1423.anat_ns+tlrc -dxyz 2 \
-source rm.epi.all1+orig \
-nwarp "anat.un.aff.qw_WARP.nii mat.r$run.warp.aff12.1D" \
-interp cubic \
-ainterp NN -quiet \
-prefix rm.epi.1.r$run

# make an extents intersection mask of this run
3dTstat -min -prefix rm.epi.min.r$run rm.epi.1.r$run+tlrc
end

# make a single file of registration params
cat dfile.r*.1D > dfile_rall.1D

# ----------------------------------------
# create the extents mask: mask_epi_extents+tlrc
# (this is a mask of voxels that have valid data at every TR)
3dMean -datum short -prefix rm.epi.mean rm.epi.min.r*.HEAD
3dcalc -a rm.epi.mean+tlrc -expr 'step(a-0.999)' -prefix mask_epi_extents

# and apply the extents mask to the EPI data
# (delete any time series with missing data)
foreach run ( $runs )
3dcalc -a rm.epi.nomask.r$run+tlrc -b mask_epi_extents+tlrc \
-expr 'a*b' -prefix pb02.$subj.r$run.volreg
end

# warp the volreg base EPI dataset to make a final version
cat_matvec -ONELINE \
anat.un.aff.Xat.1D \
1423.anat_al_junk_mat.aff12.1D -I > mat.basewarp.aff12.1D

3dNwarpApply -master 1423.anat_ns+tlrc -dxyz 2 \
-source vr_base+orig \
-nwarp "anat.un.aff.qw_WARP.nii mat.basewarp.aff12.1D" \
-prefix final_epi_vr_base

# create an anat_final dataset, aligned with stats
3dcopy 1423.anat_ns+tlrc anat_final.$subj

# record final registration costs
3dAllineate -base final_epi_vr_base+tlrc -allcostX \
-input anat_final.$subj+tlrc |& tee out.allcostX.txt

# -----------------------------------------
# warp anat follower datasets (non-linear)
3dNwarpApply -source 1423.anat+orig \
-master anat_final.$subj+tlrc \
-ainterp wsinc5 -nwarp "anat.un.aff.qw_WARP.nii anat.un.aff.Xat.1D" \
-prefix anat_w_skull_warped

# ================================== blur ==================================
# blur each volume of each run
foreach run ( $runs )
3dmerge -1blur_fwhm 4.0 -doall -prefix pb03.$subj.r$run.blur \
pb02.$subj.r$run.volreg+tlrc
end

# ================================== mask ==================================
# create 'full_mask' dataset (union mask)
foreach run ( $runs )
3dAutomask -dilate 1 -prefix rm.mask_r$run pb03.$subj.r$run.blur+tlrc
end

# create union of inputs, output type is byte
3dmask_tool -inputs rm.mask_r*+tlrc.HEAD -union -prefix full_mask.$subj

# ---- create subject anatomy mask, mask_anat.$subj+tlrc ----
# (resampled from tlrc anat)
3dresample -master full_mask.$subj+tlrc -input 1423.anat_ns+tlrc \
-prefix rm.resam.anat

# convert to binary anat mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.anat+tlrc \
-prefix mask_anat.$subj

# compute tighter EPI mask by intersecting with anat mask
3dmask_tool -input full_mask.$subj+tlrc mask_anat.$subj+tlrc \
-inter -prefix mask_epi_anat.$subj

# compute overlaps between anat and EPI masks
3dABoverlap -no_automask full_mask.$subj+tlrc mask_anat.$subj+tlrc \
|& tee out.mask_ae_overlap.txt

# note Dice coefficient of masks, as well
3ddot -dodice full_mask.$subj+tlrc mask_anat.$subj+tlrc \
|& tee out.mask_ae_dice.txt

# ---- create group anatomy mask, mask_group+tlrc ----
# (resampled from tlrc base anat, MNI152_T1_2009c+tlrc)
3dresample -master full_mask.$subj+tlrc -prefix ./rm.resam.group \
-input /home/ttavare/abin/MNI_avg152T1+tlrc

# convert to binary group mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.group+tlrc \
-prefix mask_group

# ================================= scale ==================================
# scale each voxel time series to have a mean of 100
# (be sure no negatives creep in)
# (subject to a range of [0,200])
foreach run ( $runs )
3dTstat -prefix rm.mean_r$run pb03.$subj.r$run.blur+tlrc
3dcalc -a pb03.$subj.r$run.blur+tlrc -b rm.mean_r$run+tlrc \
-c mask_epi_extents+tlrc \
-expr 'c * min(200, a/b*100)*step(a)*step(b)' \
-prefix pb04.$subj.r$run.scale
end

# ================================ regress =================================

# compute de-meaned motion parameters (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_nruns 6 \
-demean -write motion_demean.1D

# compute motion parameter derivatives (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_nruns 6 \
-derivative -demean -write motion_deriv.1D

# convert motion parameters for per-run regression
1d_tool.py -infile motion_demean.1D -set_nruns 6 \
-split_into_pad_runs mot_demean

1d_tool.py -infile motion_deriv.1D -set_nruns 6 \
-split_into_pad_runs mot_deriv

# create censor file motion_${subj}_censor.1D, for censoring motion
1d_tool.py -infile dfile_rall.1D -set_nruns 6 \
-show_censor_count -censor_prev_TR \
-censor_motion 2 motion_${subj}

# combine multiple censor files
1deval -a motion_${subj}_censor.1D -b outcount_${subj}_censor.1D \
-expr "a*b" > censor_${subj}_combined_2.1D

# note TRs that were not censored
set ktrs = `1d_tool.py -infile censor_${subj}_combined_2.1D \
-show_trs_uncensored encoded`

# ------------------------------
# run the regression analysis
3dDeconvolve -input pb04.$subj.r*.scale+tlrc.HEAD \
-censor censor_${subj}_combined_2.1D \
-polort 'A' \
-num_stimts 62 \
-stim_times 1 stimuli/Dfeel_Bad_NoResp.txt 'GAM' \
-stim_label 1 Dfeel_Bad_NoResp \
-stim_times 2 stimuli/Dfeel_Bad_Resp.txt 'GAM ' \
-stim_label 2 Dfeel_Bad_Resp \
-stim_times 3 stimuli/Dfeel_Happy_NoResp.txt 'GAM ' \
-stim_label 3 Dfeel_Happy_NoResp \
-stim_times 4 stimuli/Dfeel_Happy_Resp.txt ‘GAM' \
-stim_label 4 Dfeel_Happy_Resp \
-stim_times 5 stimuliFeel_Bad_NoResp.txt ' GAM' \
-stim_label 5 Feel_Bad_NoResp \
-stim_times 6 stimuli/Feel_Bad_Resp.txt 'GAM ' \
-stim_label 6 Feel_Bad_Resp \
-stim_times 7 stimuli/Feel_Happy_NoResp.txt 'GAM ' \
-stim_label 7 Feel_Happy_NoResp \
-stim_times 8 stimuli/Feel_Happy_Resp.txt 'GAM ' \
-stim_label 8 Feel_Happy_Resp \
-stim_times 9 stimuli/Instruct.txt 'GAM ' \
-stim_label 9 Instruct \
-stim_times 10 stimuli/Nat_neg_NoResp.txt 'GAM ' \
-stim_label 10 Nat_neg_NoResp \
-stim_times 11 stimuli/Nat_neg_Resp.txt 'GAM ' \
-stim_label 11 Nat_neg_Resp \
-stim_times 12 stimuli/Nat_post_NoResp.txt 'GAM ' \
-stim_label 12 Nat_post_NoResp \
-stim_times 13 stimuli/Nat_post_Resp.txt 'GAM ' \
-stim_label 13 Nat_post_Resp \
-stim_times 14 stimuli/Rating.txt 'GAM ' \
-stim_label 14 Rating \
-stim_file 15 mot_demean.r01.1D'[0]' -stim_base 15 -stim_label 15 roll_01 \
-stim_file 16 mot_demean.r01.1D'[1]' -stim_base 16 -stim_label 16 \
pitch_01 \
-stim_file 17 mot_demean.r01.1D'[2]' -stim_base 17 -stim_label 17 yaw_01 \
-stim_file 18 mot_demean.r01.1D'[3]' -stim_base 18 -stim_label 18 dS_01 \
-stim_file 19 mot_demean.r01.1D'[4]' -stim_base 19 -stim_label 19 dL_01 \
-stim_file 20 mot_demean.r01.1D'[5]' -stim_base 20 -stim_label 20 dP_01 \
-stim_file 21 mot_demean.r02.1D'[0]' -stim_base 21 -stim_label 21 roll_02 \
-stim_file 22 mot_demean.r02.1D'[1]' -stim_base 22 -stim_label 22 \
pitch_02 \
-stim_file 23 mot_demean.r02.1D'[2]' -stim_base 23 -stim_label 23 yaw_02 \
-stim_file 24 mot_demean.r02.1D'[3]' -stim_base 24 -stim_label 24 dS_02 \
-stim_file 25 mot_demean.r02.1D'[4]' -stim_base 25 -stim_label 25 dL_02 \
-stim_file 26 mot_demean.r02.1D'[5]' -stim_base 26 -stim_label 26 dP_02 \
-stim_file 27 mot_demean.r03.1D'[0]' -stim_base 27 -stim_label 27 roll_03 \
-stim_file 28 mot_demean.r03.1D'[1]' -stim_base 28 -stim_label 28 \
pitch_03 \
-stim_file 29 mot_demean.r03.1D'[2]' -stim_base 29 -stim_label 29 yaw_03 \
-stim_file 30 mot_demean.r03.1D'[3]' -stim_base 30 -stim_label 30 dS_03 \
-stim_file 31 mot_demean.r03.1D'[4]' -stim_base 31 -stim_label 31 dL_03 \
-stim_file 32 mot_demean.r03.1D'[5]' -stim_base 32 -stim_label 32 dP_03 \
-stim_file 33 mot_demean.r04.1D'[0]' -stim_base 33 -stim_label 33 roll_04 \
-stim_file 34 mot_demean.r04.1D'[1]' -stim_base 34 -stim_label 34 \
pitch_04 \
-stim_file 35 mot_demean.r04.1D'[2]' -stim_base 35 -stim_label 35 yaw_04 \
-stim_file 36 mot_demean.r04.1D'[3]' -stim_base 36 -stim_label 36 dS_04 \
-stim_file 37 mot_demean.r04.1D'[4]' -stim_base 37 -stim_label 37 dL_04 \
-stim_file 38 mot_demean.r04.1D'[5]' -stim_base 38 -stim_label 38 dP_04 \
-stim_file 39 mot_deriv.r01.1D'[0]' -stim_base 39 -stim_label 39 roll_05 \
-stim_file 40 mot_deriv.r01.1D'[1]' -stim_base 40 -stim_label 40 pitch_05 \
-stim_file 41 mot_deriv.r01.1D'[2]' -stim_base 41 -stim_label 41 yaw_05 \
-stim_file 42 mot_deriv.r01.1D'[3]' -stim_base 42 -stim_label 42 dS_05 \
-stim_file 43 mot_deriv.r01.1D'[4]' -stim_base 43 -stim_label 43 dL_05 \
-stim_file 44 mot_deriv.r01.1D'[5]' -stim_base 44 -stim_label 44 dP_05 \
-stim_file 45 mot_deriv.r02.1D'[0]' -stim_base 45 -stim_label 45 roll_06 \
-stim_file 46 mot_deriv.r02.1D'[1]' -stim_base 46 -stim_label 46 pitch_06 \
-stim_file 47 mot_deriv.r02.1D'[2]' -stim_base 47 -stim_label 47 yaw_06 \
-stim_file 48 mot_deriv.r02.1D'[3]' -stim_base 48 -stim_label 48 dS_06 \
-stim_file 49 mot_deriv.r02.1D'[4]' -stim_base 49 -stim_label 49 dL_06 \
-stim_file 50 mot_deriv.r02.1D'[5]' -stim_base 50 -stim_label 50 dP_06 \
-stim_file 51 mot_deriv.r03.1D'[0]' -stim_base 51 -stim_label 51 roll_07 \
-stim_file 52 mot_deriv.r03.1D'[1]' -stim_base 52 -stim_label 52 pitch_07 \
-stim_file 53 mot_deriv.r03.1D'[2]' -stim_base 53 -stim_label 53 yaw_07 \
-stim_file 54 mot_deriv.r03.1D'[3]' -stim_base 54 -stim_label 54 dS_07 \
-stim_file 55 mot_deriv.r03.1D'[4]' -stim_base 55 -stim_label 55 dL_07 \
-stim_file 56 mot_deriv.r03.1D'[5]' -stim_base 56 -stim_label 56 dP_07 \
-stim_file 57 mot_deriv.r04.1D'[0]' -stim_base 57 -stim_label 57 roll_08 \
-stim_file 58 mot_deriv.r04.1D'[1]' -stim_base 58 -stim_label 58 pitch_08 \
-stim_file 59 mot_deriv.r04.1D'[2]' -stim_base 59 -stim_label 59 yaw_08 \
-stim_file 60 mot_deriv.r04.1D'[3]' -stim_base 60 -stim_label 60 dS_08 \
-stim_file 61 mot_deriv.r04.1D'[4]' -stim_base 61 -stim_label 61 dL_08 \
-stim_file 62 mot_deriv.r04.1D'[5]' -stim_base 62 -stim_label 62 dP_08 \
-fout -tout -x1D X.xmat.1D -xjpeg X.jpg \
-x1D_uncensored X.nocensor.xmat.1D \
-fitts fitts.$subj \
-errts errts.${subj} \
-bucket stats.$subj


# if 3dDeconvolve fails, terminate the script
if ( $status != 0 ) then
echo '---------------------------------------'
echo '** 3dDeconvolve error, failing...'
echo ' (consider the file 3dDeconvolve.err)'
exit
endif


# display any large pairwise correlations from the X-matrix
1d_tool.py -show_cormat_warnings -infile X.xmat.1D |& tee out.cormat_warn.txt

# create an all_runs dataset to match the fitts, errts, etc.
3dTcat -prefix all_runs.$subj pb04.$subj.r*.scale+tlrc.HEAD

# --------------------------------------------------
# create a temporal signal to noise ratio dataset
# signal: if 'scale' block, mean should be 100
# noise : compute standard deviation of errts
3dTstat -mean -prefix rm.signal.all all_runs.$subj+tlrc"[$ktrs]"
3dTstat -stdev -prefix rm.noise.all errts.${subj}+tlrc"[$ktrs]"
3dcalc -a rm.signal.all+tlrc \
-b rm.noise.all+tlrc \
-c full_mask.$subj+tlrc \
-expr 'c*a/b' -prefix TSNR.$subj

# ---------------------------------------------------
# compute and store GCOR (global correlation average)
# (sum of squares of global mean of unit errts)
3dTnorm -norm2 -prefix rm.errts.unit errts.${subj}+tlrc
3dmaskave -quiet -mask full_mask.$subj+tlrc rm.errts.unit+tlrc \
> gmean.errts.unit.1D
3dTstat -sos -prefix - gmean.errts.unit.1D\' > out.gcor.1D
echo "-- GCOR = `cat out.gcor.1D`"

# ---------------------------------------------------
# compute correlation volume
# (per voxel: average correlation across masked brain)
# (now just dot product with average unit time series)
3dcalc -a rm.errts.unit+tlrc -b gmean.errts.unit.1D -expr 'a*b' -prefix rm.DP
3dTstat -sum -prefix corr_brain rm.DP+tlrc

# create ideal files for fixed response stim types
1dcat X.nocensor.xmat.1D'[16]' > ideal_Dfeel_Bad_NoResp.1D
1dcat X.nocensor.xmat.1D'[17]' > ideal_Dfeel_Bad_Resp.1D
1dcat X.nocensor.xmat.1D'[18]' > ideal_Dfeel_Happy_NoResp.1D
1dcat X.nocensor.xmat.1D'[19]' > ideal_Dfeel_Happy_Resp.1D
1dcat X.nocensor.xmat.1D'[20]' > ideal_Feel_Bad_NoResp.1D
1dcat X.nocensor.xmat.1D'[21]' > ideal_Feel_Bad_Resp.1D
1dcat X.nocensor.xmat.1D'[22]' > ideal_Feel_Happy_NoResp.1D
1dcat X.nocensor.xmat.1D'[23]' > ideal_Feel_Happy_Resp.1D
1dcat X.nocensor.xmat.1D'[24]' > ideal_Instruct.1D
1dcat X.nocensor.xmat.1D'[25]' > ideal_Nat_neg_NoResp.1D
1dcat X.nocensor.xmat.1D'[26]' > ideal_Nat_neg_Resp.1D
1dcat X.nocensor.xmat.1D'[27]' > ideal_Nat_post_NoResp.1D
1dcat X.nocensor.xmat.1D'[28]' > ideal_Nat_post_Resp.1D
1dcat X.nocensor.xmat.1D'[29]' > ideal_Rating.1D


# --------------------------------------------------------
# compute sum of non-baseline regressors from the X-matrix
# (use 1d_tool.py to get list of regressor columns)
set reg_cols = `1d_tool.py -infile X.nocensor.xmat.1D -show_indices_interest`
3dTstat -sum -prefix sum_ideal.1D X.nocensor.xmat.1D"[$reg_cols]"

# also, create a stimulus-only X-matrix, for easy review
1dcat X.nocensor.xmat.1D"[$reg_cols]" > X.stim.xmat.1D

# ============================ blur estimation =============================
# compute blur estimates
touch blur_est.$subj.1D # start with empty file

# create directory for ACF curve files
mkdir files_ACF

# -- estimate blur for each run in epits --
touch blur.epits.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
-show_trs_run $run`
if ( $trs == "" ) continue
3dFWHMx -detrend -mask full_mask.$subj+tlrc \
-ACF files_ACF/out.3dFWHMx.ACF.epits.r$run.1D \
all_runs.$subj+tlrc"[$trs]" >> blur.epits.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.epits.1D'{0..$(2)}'\'` )
echo average epits FWHM blurs: $blurs
echo "$blurs # epits FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.epits.1D'{1..$(2)}'\'` )
echo average epits ACF blurs: $blurs
echo "$blurs # epits ACF blur estimates" >> blur_est.$subj.1D

# -- estimate blur for each run in errts --
touch blur.errts.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
-show_trs_run $run`
if ( $trs == "" ) continue
3dFWHMx -detrend -mask full_mask.$subj+tlrc \
-ACF files_ACF/out.3dFWHMx.ACF.errts.r$run.1D \
errts.${subj}+tlrc"[$trs]" >> blur.errts.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{0..$(2)}'\'` )
echo average errts FWHM blurs: $blurs
echo "$blurs # errts FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{1..$(2)}'\'` )
echo average errts ACF blurs: $blurs
echo "$blurs # errts ACF blur estimates" >> blur_est.$subj.1D


# ================== auto block: generate review scripts ===================

# generate a review script for the unprocessed EPI data
gen_epi_review.py -script @epi_review.$subj \
-dsets pb00.$subj.r*.tcat+orig.HEAD

# generate scripts to review single subject results
# (try with defaults, but do not allow bad exit status)
gen_ss_review_scripts.py -mot_limit 2.0 -out_limit 0.1 -exit0

# ========================== auto block: finalize ==========================

# remove temporary files
\rm -fr rm.* awpy

# if the basic subject review script is here, run it
# (want this to be the last text output)
if ( -e @ss_review_basic ) ./@ss_review_basic |& tee out.ss_review.$subj.txt

# return to parent directory
cd ..

echo "execution finished: `date`"