AFNI Message Board


Can't read -1Dmatrix_apply 'warp.all.anat.aff12.1D'
Posted by sgreen, April 21, 2015 06:25PM
Hello AFNI team,

I'm currently running an analysis and have run into a problem. During registration, afni_proc executes 3dAllineate, which looks for a file called warp.all.anat.aff12.1D. Unfortunately, this file was never created, so the afni_proc script crashes. Does anyone know which preceding part of the afni_proc script is supposed to generate this file, and why it may not have been generated in my analysis?

Thanks,

Steve

The command that fails, with its error output:

3dAllineate -source anat.uni+orig -master anat_final.AH740+tlrc -final wsinc5 -1Dmatrix_apply warp.all.anat.aff12.1D -prefix anat_w_skull_warped
++ 3dAllineate: AFNI version=AFNI_2011_12_21_1014 (Apr 15 2015) [64-bit]
++ Authored by: Zhark the Registrator
** FATAL ERROR: Can't read -1Dmatrix_apply 'warp.all.anat.aff12.1D' :(
** Program compile date = Apr 15 2015
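
As far as I can tell, nothing in the script below ever writes warp.all.anat.aff12.1D; the only transformation matrices it builds are warp.anat.Xat.1D and the cat_matvec outputs in the volreg block. As an untested guess, mirroring the way the script builds mat.r$run.warp.aff12.1D, the missing matrix might just be the tlrc warp composed with the anat2epi alignment matrix. The file name anat.uni_al_keep_mat.aff12.1D below is my assumption about what align_epi_anat.py produced, not something I have verified:

# untested sketch: build the combined anat warp the same way the script
# builds mat.r$run.warp.aff12.1D (the anat2epi matrix name is assumed)
cat_matvec -ONELINE \
    anat.uni_al_keep+tlrc::WARP_DATA -I \
    anat.uni_al_keep_mat.aff12.1D > warp.all.anat.aff12.1D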

The afni_proc.py script I am running:

#!/bin/tcsh -xef

echo "auto-generated by afni_proc.py, Tue Apr 21 10:30:14 2015"
echo "(version 4.37, April 9, 2015)"
echo "execution started: `date`"

# execute via :
# tcsh -xef afniProc.sh |& tee output.afniProc.sh

# =========================== auto block: setup ============================
# script setup

# take note of the AFNI version
afni -ver

# check that the current AFNI version is recent enough
afni_history -check_date 1 Apr 2015
if ( $status ) then
echo "** this script requires newer AFNI binaries (than 1 Apr 2015)"
echo " (consider: @update.afni.binaries -defaults)"
exit
endif

# the user may specify a single subject to run with
if ( $#argv > 0 ) then
set subj = $argv[1]
else
set subj = AH740
endif

# assign output directory name
set output_dir = /home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Analysis/v1.0/1stSession/AH740/MIDTask//subAverage/

# verify that the results directory does not yet exist
if ( -d $output_dir ) then
echo output dir "$subj.results" already exists
exit
endif

# set list of runs
set runs = (`count -digits 2 1 2`)

# create results and stimuli directories
mkdir $output_dir
mkdir $output_dir/stimuli

# copy stim files into stimulus directory
cp \
/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Analysis/v1.0/1stSession/AH740/Onsets/MIDTask/GainHi.txt \
/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Analysis/v1.0/1stSession/AH740/Onsets/MIDTask/GainLow.txt \
/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Analysis/v1.0/1stSession/AH740/Onsets/MIDTask/LoseHi.txt \
/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Analysis/v1.0/1stSession/AH740/Onsets/MIDTask/LoseLow.txt \
$output_dir/stimuli

# copy anatomy to results dir
3dcopy \
/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Data/Imaging/RawData/v1.0/1stSession/AH740/imagingFiles/anat.uni+orig \
$output_dir/anat.uni

# copy over the external volreg base
3dbucket -prefix $output_dir/external_volreg_base \
'/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Data/Imaging/RawData/v1.0/1stSession/AH740/imagingFiles/func.MID.r1+orig.HEAD[0]'

# copy over the external align_epi_anat.py EPI volume
3dbucket -prefix $output_dir/ext_align_epi \
'/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Data/Imaging/RawData/v1.0/1stSession/AH740/imagingFiles/func.MID.r1+orig.HEAD[0]'

# ============================ auto block: tcat ============================
# apply 3dTcat to copy input dsets to results dir, while
# removing the first 4 TRs
3dTcat -prefix $output_dir/pb00.$subj.r01.tcat \
/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Data/Imaging/RawData/v1.0/1stSession/AH740/imagingFiles/func.MID.r1+orig'[4..$]'
3dTcat -prefix $output_dir/pb00.$subj.r02.tcat \
/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Data/Imaging/RawData/v1.0/1stSession/AH740/imagingFiles/func.MID.r2+orig'[4..$]'

# and make note of repetitions (TRs) per run
set tr_counts = ( 221 221 )

# -------------------------------------------------------
# enter the results directory (can begin processing data)
cd $output_dir


# ========================== auto block: outcount ==========================
# data check: compute outlier fraction for each volume
touch out.pre_ss_warn.txt
foreach run ( $runs )
3dToutcount -automask -fraction -polort 3 -legendre \
pb00.$subj.r$run.tcat+orig > outcount.r$run.1D

# censor outlier TRs per run, ignoring the first 0 TRs
# - censor when more than 0.05 of automask voxels are outliers
# - step() defines which TRs to remove via censoring
1deval -a outcount.r$run.1D -expr "1-step(a-0.05)" > rm.out.cen.r$run.1D

# outliers at TR 0 might suggest pre-steady state TRs
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
echo "** TR #0 outliers: possible pre-steady state TRs in run $run" \
>> out.pre_ss_warn.txt
endif
end

# catenate outlier counts into a single time series
cat outcount.r*.1D > outcount_rall.1D

# catenate outlier censor files into a single time series
cat rm.out.cen.r*.1D > outcount_${subj}_censor.1D

# ================================= tshift =================================
# time shift data so all slice timing is the same
foreach run ( $runs )
3dTshift -tzero 0 -quintic -prefix pb01.$subj.r$run.tshift \
pb00.$subj.r$run.tcat+orig
end

# ================================= align ==================================
# a2e: align anatomy to EPI registration base
# (new anat will be aligned and stripped, anat.uni_al_keep+orig)
align_epi_anat.py -anat2epi -anat anat.uni+orig \
-suffix _al_keep \
-epi ext_align_epi+orig -epi_base 0 \
-epi_strip 3dAutomask \
-volreg off -tshift off

# ================================== tlrc ==================================
# warp anatomy to standard space
@auto_tlrc -base TT_N27+tlrc -input anat.uni_al_keep+orig -no_ss

# store forward transformation matrix in a text file
cat_matvec anat.uni_al_keep+tlrc::WARP_DATA -I > warp.anat.Xat.1D

# ================================= volreg =================================
# align each dset to base volume, warp to tlrc space

# verify that we have a +tlrc warp dataset
if ( ! -f anat.uni_al_keep+tlrc.HEAD ) then
echo "** missing +tlrc warp dataset: anat.uni_al_keep+tlrc.HEAD"
exit
endif

# register and warp
foreach run ( $runs )
# register each volume to the base
3dvolreg -verbose -zpad 1 -base external_volreg_base+orig \
-1Dfile dfile.r$run.1D -prefix rm.epi.volreg.r$run \
-cubic \
-1Dmatrix_save mat.r$run.vr.aff12.1D \
pb01.$subj.r$run.tshift+orig

# create an all-1 dataset to mask the extents of the warp
3dcalc -overwrite -a pb01.$subj.r$run.tshift+orig -expr 1 \
-prefix rm.epi.all1

# catenate volreg and tlrc transformations
cat_matvec -ONELINE \
anat.uni_al_keep+tlrc::WARP_DATA -I \
mat.r$run.vr.aff12.1D > mat.r$run.warp.aff12.1D

# apply catenated xform : volreg and tlrc
3dAllineate -base anat.uni_al_keep+tlrc \
-input pb01.$subj.r$run.tshift+orig \
-1Dmatrix_apply mat.r$run.warp.aff12.1D \
-mast_dxyz 1.75 \
-prefix rm.epi.nomask.r$run

# warp the all-1 dataset for extents masking
3dAllineate -base anat.uni_al_keep+tlrc \
-input rm.epi.all1+orig \
-1Dmatrix_apply mat.r$run.warp.aff12.1D \
-mast_dxyz 1.75 -final NN -quiet \
-prefix rm.epi.1.r$run

# make an extents intersection mask of this run
3dTstat -min -prefix rm.epi.min.r$run rm.epi.1.r$run+tlrc

# if there was an error, exit so user can see
if ( $status ) exit

end

# make a single file of registration params
cat dfile.r*.1D > dfile_rall.1D

# ----------------------------------------
# create the extents mask: mask_epi_extents+tlrc
# (this is a mask of voxels that have valid data at every TR)
3dMean -datum short -prefix rm.epi.mean rm.epi.min.r*.HEAD
3dcalc -a rm.epi.mean+tlrc -expr 'step(a-0.999)' -prefix mask_epi_extents

# and apply the extents mask to the EPI data
# (delete any time series with missing data)
foreach run ( $runs )
3dcalc -a rm.epi.nomask.r$run+tlrc -b mask_epi_extents+tlrc \
-expr 'a*b' -prefix pb02.$subj.r$run.volreg
end

# create an anat_final dataset, aligned with stats
3dcopy anat.uni_al_keep+tlrc anat_final.$subj

# -----------------------------------------
# warp anat follower datasets (affine)
#3dAllineate -source anat.uni+orig -master anat_final.$subj+tlrc \
# -final wsinc5 -1Dmatrix_apply warp.all.anat.aff12.1D\
# -prefix anat_w_skull_warped

# ================================== blur ==================================
# blur each volume of each run
foreach run ( $runs )
3dmerge -1blur_fwhm 6 -doall -prefix pb03.$subj.r$run.blur \
pb02.$subj.r$run.volreg+tlrc
end

# ================================== mask ==================================
# create 'full_mask' dataset (union mask)
foreach run ( $runs )
3dAutomask -dilate 1 -prefix rm.mask_r$run pb03.$subj.r$run.blur+tlrc
end

# create union of inputs, output type is byte
3dmask_tool -inputs rm.mask_r*+tlrc.HEAD -union -prefix full_mask.$subj

# ---- create subject anatomy mask, mask_anat.$subj+tlrc ----
# (resampled from tlrc anat)
3dresample -master full_mask.$subj+tlrc -input anat.uni_al_keep+tlrc \
-prefix rm.resam.anat

# convert to binary anat mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.anat+tlrc \
-prefix mask_anat.$subj

# compute overlaps between anat and EPI masks
3dABoverlap -no_automask full_mask.$subj+tlrc mask_anat.$subj+tlrc \
|& tee out.mask_ae_overlap.txt

# note Pearson correlation of masks, as well
3ddot -demean full_mask.$subj+tlrc mask_anat.$subj+tlrc \
|& tee out.mask_ae_corr.txt

# ---- create group anatomy mask, mask_group+tlrc ----
# (resampled from tlrc base anat, TT_N27+tlrc)
3dresample -master full_mask.$subj+tlrc -prefix ./rm.resam.group \
-input /opt/abin/TT_N27+tlrc

# convert to binary group mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.group+tlrc \
-prefix mask_group

# ================================= scale ==================================
# scale each voxel time series to have a mean of 100
# (be sure no negatives creep in)
# (subject to a range of [0,200])
foreach run ( $runs )
3dTstat -prefix rm.mean_r$run pb03.$subj.r$run.blur+tlrc
3dcalc -a pb03.$subj.r$run.blur+tlrc -b rm.mean_r$run+tlrc \
-c mask_epi_extents+tlrc \
-expr 'c * min(200, a/b*100)*step(a)*step(b)' \
-prefix pb04.$subj.r$run.scale
end

# ================================ regress =================================

# compute de-meaned motion parameters (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_nruns 2 \
-demean -write motion_demean.1D

# compute motion parameter derivatives (just to have)
1d_tool.py -infile dfile_rall.1D -set_nruns 2 \
-derivative -demean -write motion_deriv.1D

# create censor file motion_${subj}_censor.1D, for censoring motion
1d_tool.py -infile dfile_rall.1D -set_nruns 2 \
-show_censor_count -censor_prev_TR \
-censor_motion 0.3 motion_${subj}

# combine multiple censor files
1deval -a motion_${subj}_censor.1D -b outcount_${subj}_censor.1D \
-expr "a*b" > censor_${subj}_combined_2.1D

# ------------------------------
# run the regression analysis
3dDeconvolve -input pb04.$subj.r*.scale+tlrc.HEAD \
-censor censor_${subj}_combined_2.1D \
-polort 3 \
-num_stimts 10 \
-stim_times 1 stimuli/GainHi.txt 'BLOCK(2)' \
-stim_label 1 GainHi \
-stim_times 2 stimuli/GainLow.txt 'BLOCK(2)' \
-stim_label 2 GainLow \
-stim_times 3 stimuli/LoseHi.txt 'BLOCK(2)' \
-stim_label 3 LoseHi \
-stim_times 4 stimuli/LoseLow.txt 'BLOCK(2)' \
-stim_label 4 LoseLow \
-stim_file 5 motion_demean.1D'[0]' -stim_base 5 -stim_label 5 roll \
-stim_file 6 motion_demean.1D'[1]' -stim_base 6 -stim_label 6 pitch \
-stim_file 7 motion_demean.1D'[2]' -stim_base 7 -stim_label 7 yaw \
-stim_file 8 motion_demean.1D'[3]' -stim_base 8 -stim_label 8 dS \
-stim_file 9 motion_demean.1D'[4]' -stim_base 9 -stim_label 9 dL \
-stim_file 10 motion_demean.1D'[5]' -stim_base 10 -stim_label 10 dP \
-jobs 12 \
-num_glt 4 \
-gltsym 'SYM: +GainHi' \
-glt_label 1 GainHi \
-gltsym 'SYM: +GainLow' \
-glt_label 1 GainLo \
-gltsym 'SYM: +LoseHi' \
-glt_label 1 LoseHi \
-gltsym 'SYM: +LoseLow' \
-glt_label 1 LoseLo \
-fout -tout -x1D X.xmat.1D -xjpeg X.jpg \
-x1D_uncensored X.nocensor.xmat.1D \
-errts errts.${subj} \
-bucket stats.$subj


# if 3dDeconvolve fails, terminate the script
if ( $status != 0 ) then
echo '---------------------------------------'
echo '** 3dDeconvolve error, failing...'
echo ' (consider the file 3dDeconvolve.err)'
exit
endif


# display any large pairwise correlations from the X-matrix
1d_tool.py -show_cormat_warnings -infile X.xmat.1D |& tee out.cormat_warn.txt

# create an all_runs dataset to match the fitts, errts, etc.
3dTcat -prefix all_runs.$subj pb04.$subj.r*.scale+tlrc.HEAD

# --------------------------------------------------
# create a temporal signal to noise ratio dataset
# signal: if 'scale' block, mean should be 100
# noise : compute standard deviation of errts
3dTstat -mean -prefix rm.signal.all all_runs.$subj+tlrc
3dTstat -stdev -prefix rm.noise.all errts.${subj}+tlrc
3dcalc -a rm.signal.all+tlrc \
-b rm.noise.all+tlrc \
-c full_mask.$subj+tlrc \
-expr 'c*a/b' -prefix TSNR.$subj

# ---------------------------------------------------
# compute and store GCOR (global correlation average)
# (sum of squares of global mean of unit errts)
3dTnorm -norm2 -prefix rm.errts.unit errts.${subj}+tlrc
3dmaskave -quiet -mask full_mask.$subj+tlrc rm.errts.unit+tlrc > \
gmean.errts.unit.1D
3dTstat -sos -prefix - gmean.errts.unit.1D\' > out.gcor.1D
echo "-- GCOR = `cat out.gcor.1D`"

# ---------------------------------------------------
# compute correlation volume
# (per voxel: average correlation across masked brain)
# (now just dot product with average unit time series)
3dcalc -a rm.errts.unit+tlrc -b gmean.errts.unit.1D -expr 'a*b' -prefix rm.DP
3dTstat -sum -prefix corr_brain rm.DP+tlrc

# create fitts dataset from all_runs and errts
3dcalc -a all_runs.$subj+tlrc -b errts.${subj}+tlrc -expr a-b \
-prefix fitts.$subj

# create ideal files for fixed response stim types
1dcat X.nocensor.xmat.1D'[8]' > ideal_GainHi.1D
1dcat X.nocensor.xmat.1D'[9]' > ideal_GainLow.1D
1dcat X.nocensor.xmat.1D'[10]' > ideal_LoseHi.1D
1dcat X.nocensor.xmat.1D'[11]' > ideal_LoseLow.1D

# --------------------------------------------------------
# compute sum of non-baseline regressors from the X-matrix
# (use 1d_tool.py to get list of regressor columns)
set reg_cols = `1d_tool.py -infile X.nocensor.xmat.1D -show_indices_interest`
3dTstat -sum -prefix sum_ideal.1D X.nocensor.xmat.1D"[$reg_cols]"

# also, create a stimulus-only X-matrix, for easy review
1dcat X.nocensor.xmat.1D"[$reg_cols]" > X.stim.xmat.1D

# ============================ blur estimation =============================
# compute blur estimates
touch blur_est.$subj.1D # start with empty file

# -- estimate blur for each run in epits --
touch blur.epits.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
-show_trs_run $run`
if ( $trs == "" ) continue
3dFWHMx -detrend -mask full_mask.$subj+tlrc \
all_runs.$subj+tlrc"[$trs]" >> blur.epits.1D
end

# compute average blur and append
set blurs = ( `3dTstat -mean -prefix - blur.epits.1D\'` )
echo average epits blurs: $blurs
echo "$blurs # epits blur estimates" >> blur_est.$subj.1D

# -- estimate blur for each run in errts --
touch blur.errts.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
-show_trs_run $run`
if ( $trs == "" ) continue
3dFWHMx -detrend -mask full_mask.$subj+tlrc \
errts.${subj}+tlrc"[$trs]" >> blur.errts.1D
end

# compute average blur and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D\'` )
echo average errts blurs: $blurs
echo "$blurs # errts blur estimates" >> blur_est.$subj.1D


# add 3dClustSim results as attributes to any stats dset
set fxyz = ( `tail -1 blur_est.$subj.1D` )
3dClustSim -both -mask full_mask.$subj+tlrc -fwhmxyz $fxyz[1-3] \
-prefix ClustSim
set cmd = ( `cat 3dClustSim.cmd` )
$cmd stats.$subj+tlrc


# ================== auto block: generate review scripts ===================

# generate a review script for the unprocessed EPI data
gen_epi_review.py -script @epi_review.$subj \
-dsets pb00.$subj.r*.tcat+orig.HEAD

# generate scripts to review single subject results
# (try with defaults, but do not allow bad exit status)
gen_ss_review_scripts.py -mot_limit 0.3 -out_limit 0.05 -exit0

# ========================== auto block: finalize ==========================

# remove temporary files
\rm -f rm.*

# if the basic subject review script is here, run it
# (want this to be the last text output)
if ( -e @ss_review_basic ) ./@ss_review_basic |& tee out.ss_review.$subj.txt

# return to parent directory
cd ..

echo "execution finished: `date`"




# ==========================================================================
# script generated by the command:
#
# afni_proc.py -blocks tcat tshift align tlrc volreg blur mask scale regress \
# -subj_id AH740 -script afniProc.sh -scr_overwrite -out_dir \
# /home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Analysis/v1.0/1stSession/AH740/MIDTask//subAverage/ \
# -dsets \
# /home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Data/Imaging/RawData/v1.0/1stSession/AH740/imagingFiles/func.MID.r1+orig.HEAD \
# /home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Data/Imaging/RawData/v1.0/1stSession/AH740/imagingFiles/func.MID.r2+orig.HEAD \
# -copy_anat \
# /home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Data/Imaging/RawData/v1.0/1stSession/AH740/imagingFiles/anat.uni+orig.HEAD \
# -tcat_remove_first_trs 4 -align_epi_ext_dset \
# '/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Data/Imaging/RawData/v1.0/1stSession/AH740/imagingFiles/func.MID.r1+orig.HEAD[0]' \
# -volreg_base_dset \
# '/home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Data/Imaging/RawData/v1.0/1stSession/AH740/imagingFiles/func.MID.r1+orig.HEAD[0]' \
# -volreg_tlrc_warp -blur_size 6 -regress_stim_times \
# /home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Analysis/v1.0/1stSession/AH740/Onsets/MIDTask/GainHi.txt \
# /home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Analysis/v1.0/1stSession/AH740/Onsets/MIDTask/GainLow.txt \
# /home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Analysis/v1.0/1stSession/AH740/Onsets/MIDTask/LoseHi.txt \
# /home/LIBRAD/sgreen/storage/labs/jfeinstein/Green/Experiments/Float.01/Analysis/v1.0/1stSession/AH740/Onsets/MIDTask/LoseLow.txt \
# -regress_stim_labels GainHi GainLow LoseHi LoseLow -regress_basis_multi \
# 'BLOCK(2)' 'BLOCK(2)' 'BLOCK(2)' 'BLOCK(2)' -regress_compute_fitts \
# -regress_censor_motion 0.3 -regress_censor_outliers 0.05 \
# -regress_opts_3dD -jobs 12 -num_glt 4 -gltsym 'SYM: +GainHi' -glt_label \
# 1 GainHi -gltsym 'SYM: +GainLow' -glt_label 1 GainLo -gltsym 'SYM: \
# +LoseHi' -glt_label 1 LoseHi -gltsym 'SYM: +LoseLow' -glt_label 1 \
# LoseLo -regress_make_ideal_sum sum_ideal.1D -regress_est_blur_epits \
# -regress_est_blur_errts
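
For completeness, a couple of quick checks (assuming the proc script is saved as afniProc.sh, as in the "execute via" comment near the top): confirm that the script never actually writes the matrix it later tries to read, and record the program versions in case this is a script-generation issue rather than something specific to my data.

# confirm the proc script only reads, and never writes, the matrix
grep -n 'warp.all.anat.aff12.1D' afniProc.sh

# record program versions for reference
afni_proc.py -ver
afni -ver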
Subject                                                    Author          Posted
Can't read -1Dmatrix_apply 'warp.all.anat.aff12.1D'        sgreen          April 21, 2015 06:25PM
Re: Can't read -1Dmatrix_apply 'warp.all.anat.aff12.1D'    rick reynolds   April 22, 2015 03:16PM
Re: Can't read -1Dmatrix_apply 'warp.all.anat.aff12.1D'    rick reynolds   April 22, 2015 04:24PM
Re: Can't read -1Dmatrix_apply 'warp.all.anat.aff12.1D'    sgreen          April 22, 2015 05:34PM