#!/bin/tcsh


This script was written in the Unix T-shell language, an enhancement of
the C-shell.  This line, '#!/bin/tcsh', should appear at the very top of
the file.

Note that although the '#' character signifies a comment in a shell script,
it has a special meaning on the very first line of the file.

The script can be executed in three ways:

    tcsh @ARzs_analyze
    ./@ARzs_analyze
    source @ARzs_analyze

In the first case, using 'tcsh @ARzs_analyze', that first line of the
file is not relevant.  Here we are saying, start a new 'tcsh' shell, and
use it to execute the commands in the file, '@ARzs_analyze'.  In this case,
the first line is indeed treated like a comment.

Note that using this technique for executing a shell script presumes that
the script was written for 'tcsh'.  Many other scripting languages exist
(like sh, bash, perl, tcl, tk, python, etc).

The second case, using './@ARzs_analyze' is actually more complicated.
In order to do this, the user must have execute permissions on the file.
Details of file permissions are beyond what we want to cover in this 
document, so suffice it to say that giving execute permissions on the
file may be accomplished with the command:

    chmod +x @ARzs_analyze

In this './@ARzs_analyze' case, the first line of the file actually specifies
which program will be used to execute the script.  Since the script was
written in the language of the T-shell, we specify to use the '/bin/tcsh'
program to execute the script.  Notice that this ends up being the same as
the first case, where we directly tell the 'tcsh' program to run the script.

The last detail about the './@ARzs_analyze' case is to note the './' in the
command.  Typically, the directory one is in is not searched for commands
to execute.  If it IS searched, then typing '@ARzs_analyze' is enough.  But
in most cases this will not work, and it is necessary to tell the shell that
the command is in the current directory ('.' is the current directory).
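This behavior is easy to see with a throwaway script.  The sketch below
(Bourne-shell syntax; the name 'hello.sh' is invented for illustration)
creates a tiny script, marks it executable with 'chmod +x', and runs it
via './':

```shell
# work in a scratch directory so nothing real is touched
tmpdir=$(mktemp -d)
cd "$tmpdir"

# create a tiny script, then give it execute permissions
printf '#!/bin/sh\necho hello\n' > hello.sh
chmod +x hello.sh

# './' tells the shell the command lives in the current directory
out=$(./hello.sh)
echo "$out"
```

Typing plain 'hello.sh' would normally fail with 'command not found',
since '.' is usually not on the search path.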

The third case is not recommended.  It is basically the same as the second
case, except that the script 'permanently' affects your environment, and it
requires that you are currently running 'csh' or 'tcsh' as your shell
environment.  Please save the 'source' command for instances in which you
specifically want to affect your environment.




# -------------------------------------------------------------------
# if the afni directory is already here, whine a bit and terminate the script


if ( -d ARzs_data/afni ) then
    echo ""
    echo "Failure:"
    echo ""
    echo "The resulting afni directory already exists.  Please move the"
    echo "afni directory and then re-run the script.  Example:"
    echo ""
    echo "    mv ARzs_data/afni ARzs_data/afni.bak"
    echo ""

    exit
endif



This shell script is not designed to be run more than once.  Since a new
'afni' directory is created by the script (under 'ARzs_data'), re-executing
the script makes it unclear what to do with the old data, i.e. should we
remove it, overwrite it, or keep it safe?

Since the script should not make this decision for the user, the user must
either move or remove the existing 'ARzs_data/afni' directory.  One way to do
this is to rename the afni directory to afni.bak.  If the data is not needed,
or if no data exists that wasn't created by the script, then removing it may
be good enough.

For example, one could rename it to afni.bak:

    mv ARzs_data/afni ARzs_data/afni.bak

# Unix

When writing a safe shell script, most of it tends to end up being various
tests, verifying that 'all is well within our universe'.  Examples of such
tests might include tests for proper usage of commands (or even the script,
itself), successful completion of commands, existence of expected files or
directories, and so on.

In this case (our only test in this skeleton of a script), we check whether
the afni directory already exists.  The '-d' test on 'ARzs_data/afni' determines
whether a Directory of that name exists.  If so, then we echo some
statements to the screen for the user to see, and 'exit' the shell script.

If the test fails (i.e. if ARzs_data/afni does _not_ exist), then control
jumps down to 'endif', and the script continues on.
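For readers more familiar with sh-family syntax, the same existence test
can be sketched with '[ -d ... ]' (the directory here is created just for
the demonstration):

```shell
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/afni"

# '-d' is true when a directory of that name exists
if [ -d "$tmpdir/afni" ]; then
    result="exists"     # the tcsh script would whine and exit here
else
    result="missing"
fi
echo "$result"
```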




# -------------------------------------------------------------------

cd ARzs_data

Start by going into the ARzs_data directory.  This is where
the EPI_data and the SPGR_data directories are.  This is also
where the afni directory will be, once it is created by the
execution of the new_GERT_Reco script, below.

# Unix

The 'cd' command is used to 'change directories'.  It takes
a single argument - a directory to move into.




# -------------------------------------------------------------------
# create AFNI dataset from spgr I-files - move them to afni dir later

to3d -prefix ARzs_spgr -spgr SPGR_data/I.*

Use the to3d program to create an AFNI brick from the I-files
in the SPGR_data directory.

# Unix      (see also 'to3d -help')

In this case, the to3d command has two arguments with dashes before
them, '-prefix' and '-spgr'.  These arguments are referred to as
options.  The -prefix option has a single parameter, 'ARzs_spgr',
which is what we request as the prefix to the AFNI dataset files.

Note that an AFNI dataset is stored across 2 files, 'prefix.HEAD'
and 'prefix.BRIK' (for whichever prefix is used).  The BRIK file
contains the raw data, in this case, a 3 dimensional volume of
SPGR data.  The HEAD file contains much auxiliary information,
such as the orientation, grid spacing, and a unique ID code.

The '-spgr' option tells to3d that this is Spoiled GRASS data.
This option takes no parameters.

What appears to be the last argument, 'SPGR_data/I.*', is in truth
124 arguments.  The shell (tcsh, in this case) sees the '*' as a
wildcard, and replaces 'SPGR_data/I.*' with the list of files that
match that form.  The '*' could replace zero characters, one
character, or fifty characters, as long as some file name matches.

In this case, it expands to:

    SPGR_data/I.001 SPGR_data/I.002  ...  SPGR_data/I.124

So the actual command that gets executed looks like:

    to3d -prefix ARzs_spgr -spgr SPGR_data/I.001 ... SPGR_data/I.124

This might be clearer if you echo the command to the screen
(i.e. display it on the screen, without executing it).  Do this
with the 'echo' command as follows:

    echo to3d -prefix ARzs_spgr -spgr SPGR_data/I.*
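To watch the expansion happen in a harmless setting, one can create a few
dummy I-files in a scratch directory and echo the pattern (the file names
below are invented):

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch I.001 I.002 I.003

# the shell replaces 'I.*' with the sorted list of matches before echo runs
expansion=$(echo I.*)
echo "$expansion"
```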




# -------------------------------------------------------------------

cd EPI_data


Move into the EPI_data directory.  This contains directories
003, 023, 043, 063, 083 and 103, each of which contains I-files
for some of the run data.

# Unix

Enter the 'ls' command, and note that those directories are here.




# -------------------------------------------------------------------
# create AFNI datasets from both EPI runs

Ifile -nt '???/I.*'


Use Ifile to create the GERT_Reco script.  This program examines
the I-files to determine where runs begin and end, and creates
the GERT_Reco script using that information.  The result of running
the GERT_Reco script is the creation of the afni directory and the
corresponding AFNI datasets.

# Unix      (see also 'Ifile -help')

The single '-nt' option is used to tell Ifile not to evaluate the
runs based on the timestamps of the I-files.  Instead, it uses
data within each I-file to determine this.  Using this option is
recommended.

The argument '???/I.*' is actually intended to be a list of 5976
arguments, expanding to: 003/I.001 ... 003/I.999 023/I.001 ...
023/I.999 ... 043/I.001 ... 103/I.001 ... 103/I.998 .  This list
represents all of the I-files for all of the runs.

There are 3 special symbols there.  The '*', as we may recall from
above, matches any number of characters in file names of that form,
i.e. anything starting with 'I.'.

The '?' character is similar to '*', except that it matches exactly
one character.  So the '???/' part of it means, any directory with
exactly three characters in its name.  The shell can tell that any
match must be a directory because we have a '/' after the '?'s.

So the '?' characters will match the 003 ... 103 directory names,
and the 'I.*' will match all of the I-files within those directories.

The last characters to note are the single quote characters around
the ???/I.*.  In some cases, using wild cards will cause so many
arguments to appear on the command line (we have almost 6000 here),
that the shell cannot process it.  In this case, one might get a
        csh: line too long
error.  The single quotes cause the shell NOT to expand those
wildcard characters.  Instead, the actual string '???/I.*' is
passed to the program (Ifile, in this case) as a single argument.
It is then up to the program to look for files that match the
given pattern.

Not all programs do this matching themselves.  AFNI programs that
expect to take many arguments on the command line (like to3d and Ifile)
were written to handle this sort of input.

Try the following echo commands while in this directory, to see
the results of shell expansions:

    echo ???
    echo ???/I.001
    echo '???/I.*'
    echo ???/I.*
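The same experiment can be set up anywhere with a couple of dummy
directories (names invented for the demonstration), showing both the
'???' expansion and how single quotes suppress it:

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"
mkdir 003 023
touch 003/I.001 023/I.001

# '???' matches names of exactly three characters
dirs=$(echo ???)

# the quotes keep the shell from expanding anything
literal=$(echo '???/I.*')

echo "$dirs"
echo "$literal"
```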






# -------------------------------------------------------------------
# change the prefix for the output file in GERT_Reco

sed 's/OutBrick/ARzs_EPI/' GERT_Reco > new_GERT_Reco


Within the GERT_Reco script, set the prefix of the output AFNI
datasets to be ARzs_EPI.

# Unix

The result of this command is the same as if one were to open
GERT_Reco in an editor (such as nedit), change OutBrick to
ARzs_EPI, and save the file as new_GERT_Reco.

The GERT_Reco script, which was created by Ifile, has a line:

    set OutPrefix = 'OutBrick' #Output brick prefix.

Our 'sed' command outputs the entire file (GERT_Reco, in this case),
but substituting 'ARzs_EPI' for 'OutBrick'.  sed is a Stream EDitor,
allowing us to modify text it reads in.  Try the commands:

    echo hi there Peggy
    echo hi there Peggy | sed 's/Peggy/Ziad/'

Another part of this command is the '>' character, followed by
a new file name.  Instead of everything being displayed on the
screen, this character 'redirects' the output to a file.  So the
file new_GERT_Reco is created, and is the same as GERT_Reco, but
with OutPrefix set to ARzs_EPI.

WARNING - the redirection symbol, '>', will destroy any existing
contents of a file and create a new one.  If you accidentally type:

    ls > some_file_I_care_about

your file contents will be gone (replaced by the output of the 'ls'
command).
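A safe way to convince yourself of this clobbering behavior is with a
scratch file (the file name is invented):

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"

# '>' creates the file, or silently overwrites an existing one
echo "first version"  > notes.txt
echo "second version" > notes.txt   # the first contents are now gone

contents=$(cat notes.txt)
echo "$contents"
```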




# -------------------------------------------------------------------
# now run the new_GERT_Reco script

tcsh new_GERT_Reco


This script, created by Ifile above, will create AFNI datasets from
the EPI run data.  Those I-files are collected into 3D+t (3 dimensions
plus a time dimension) AFNI bricks.  Each single run has 160 3-D
brain volumes (one per TR), and each volume consists of 18 coronal
slices.  Since each I-file is a single slice image, there will be
18 * 160 = 2880 I-files per run.  Ifile figures this out from data in
the I-files.

Executing this command (running the new_GERT_Reco script) leaves us
with a new 'afni' directory, containing an AFNI dataset for each run,
and an Outliers file from each to3d command.

So we have AFNI data files:

    run 1:  ARzs_EPI_r1+orig.BRIK, ARzs_EPI_r1+orig.HEAD
    run 2:  ARzs_EPI_r2+orig.BRIK, ARzs_EPI_r2+orig.HEAD

and outlier files (also created by to3d):

    run 1: ARzs_EPI_r1_Outliers.1D
    run 2: ARzs_EPI_r2_Outliers.1D




# -------------------------
# store outliers in a subdir

cd afni
mkdir outliers
mv ARzs_EPI*Outliers.1D outliers
cd ..


These commands simply move the outlier files to a new directory.

# Unix

We 'change directory' to the new afni directory, create a new directory
called 'outliers', move both of the outlier files to this new directory,
and 'change directory' back to where we started.

Note that the command:

    mv ARzs_EPI*Outliers.1D outliers

actually expands to:

    mv ARzs_EPI_r1_Outliers.1D ARzs_EPI_r2_Outliers.1D outliers

by the shell.  It would be less explicit, but since these are the only
files in the new afni directory that end in '.1D', we could simply
use the command 'mv *.1D outliers', instead.  The shell would expand
it to become exactly the same command.
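The expand-then-move behavior can be rehearsed with stand-in files
(the names below are invented):

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch r1_Outliers.1D r2_Outliers.1D
mkdir outliers

# the shell expands '*.1D' to both file names before mv ever runs
mv *.1D outliers
```

Afterward, the two files exist only under 'outliers'.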




# -------------------------
# put the new afni directory back at the top level

mv afni ..


The new afni directory was created under the EPI_data directory, but
we would rather move it up one level.

# Unix

The 'mv' (move/rename) command can be used to move a file or directory
into another directory, as is being done here.  The '..' directory is
the 'parent' of the current directory.  Since we are currently in
'howto/ht01_ARzs/ARzs_data/EPI_data', moving afni into the parent dir
moves it into 'howto/ht01_ARzs/ARzs_data'.




# -------------------------
# go back to top level data directory

cd ..


We are done with all of our work in the EPI_data directory.  Recall
that after moving to EPI_data, we have run 'Ifile', created and run
'new_GERT_Reco', moved new outlier files into an 'outliers' directory,
and moved the new 'afni' directory up one level.

Now we will 'change directories' up one level (to where the new 'afni'
directory is).




# -------------------------------------------------------------------
# put our spgr anat dataset into that new afni directory

mv ARzs_spgr+orig.* afni


Recall one of our first commands was to make the anatomical dataset
from the SPGR data.  This dataset, in the HEAD and BRIK files, can
now be moved into our new afni directory.

# Unix

Note that the wildcard here will expand to two files, as in:

    mv ARzs_spgr+orig.BRIK ARzs_spgr+orig.HEAD afni




# -------------------------------------------------------------------
# do volume registering


# -------------------------
cd afni

First go into the new 'afni' directory, where all of our datasets are.

3dvolreg -prefix ARzs_EPI_r1_vr -base ARzs_EPI_r2+orig'[155]' ARzs_EPI_r1+orig
3dvolreg -prefix ARzs_EPI_r2_vr -base ARzs_EPI_r2+orig'[155]' ARzs_EPI_r2+orig


Register all of our volumes, 160 of them from each of 2 runs, to almost the
last volume collected (run 2, volume 155 (zero based)).  This will rotate
and shift each of the volumes in space, attempting to align them with the
'base' volume.  A volume at the end was chosen because the anatomical data
was collected just after the last run.

# Unix      (see also '3dvolreg -help')

The 3dvolreg command here has two options, with each option taking a single
argument.  The '-prefix' option takes an argument to use as the prefix for
the output AFNI dataset.  The '-base' option takes an argument to use as
the volume to register all other input volumes to.  The final argument,
'ARzs_EPI_r1+orig', is the input 3D+t dataset, containing 160 volumes, each
of which may be adjusted so that it lines up with the 'base' volume.

There are two special things to note in the argument to the '-base' option.
First, there are square brackets around 155.  This is notation that we want
to pass to the 3dvolreg program, telling it to use only the single volume
at time point 155 in dataset 'ARzs_EPI_r2+orig'.  Recall that this dataset
contains 160 volumes, numbered 0 to 159.  We want to use number 155 as the
base volume for registration.

The '[' and ']' characters are special to the shell (recall that '*' and '?'
have special meanings, also).  We do not want the shell to interpret these
characters in any special way, we want the string 'ARzs_EPI_r2+orig[155]'
passed to 3dvolreg unaltered by the shell.  This is why there are single
quotes around '[155]'.  Note that since '[' and ']' are the only special
characters in that argument, we could put the quotes in many different
places, with the same effect:

    ARzs_EPI_r2+orig'[155]'
    ARzs_EPI_r2+orig'['155']'
    'ARzs_EPI_r2+orig[155]'
    ARzs_E'PI_r2+orig[155]'     # a little goofy, but it works
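The effect of quoting the brackets can be demonstrated away from AFNI with
ordinary files (names invented), since an unquoted '[AB]' is a shell
character class:

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch fileA fileB

# unquoted, [AB] is a character class: it matches fileA and fileB
matched=$(echo file[AB])

# quoted, the brackets reach echo untouched
quoted=$(echo 'file[AB]')

echo "$matched"
echo "$quoted"
```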




# -------------------------------------------------------------------
# average the two volume registered time series datasets

3dcalc -a ARzs_EPI_r1_vr+orig -b ARzs_EPI_r2_vr+orig \
        -expr '(a+b)/2' -prefix ARzs_EPI_avg 


This command simply averages the data in the two EPI datasets.
It is worth noting that this is not necessarily a good idea, from a 
statistical point of view.

# Unix      (see also '3dcalc -help')

The 3dcalc command takes up to 26 input datasets, labeled a, b, c, ... z.
The '-a' option tells 3dcalc that the corresponding dataset is to be
considered dataset 'a'.  Similar reasoning accompanies the '-b' option.
The '-expr' option takes a single argument, '(a+b)/2', which is an
arithmetic expression, stating how to compute the output brick.  In this
case, the output brick (with prefix 'ARzs_EPI_avg') will come from adding
the corresponding data points of the input bricks, and dividing by 2,
i.e. the average.

As a reminder, note that each input brick, and hence the output brick,
has 160 sub-bricks, with each sub-brick (brain volume) having 18 slices,
and each slice being a 64 by 64 voxel image.  Therefore, each brick
consists of

    11796480 = 160 * 18 * 64 * 64

pieces of data (voxels).  Since each voxel is represented by a 2 byte
integer, we can expect the AFNI BRIK file, 'ARzs_EPI_avg+orig.BRIK',
to have a size of 23592960 bytes (more than 23 Megabytes).
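The shell itself can check this arithmetic:

```shell
# voxel count and BRIK size, from the dimensions quoted above
voxels=$((160 * 18 * 64 * 64))
bytes=$((voxels * 2))        # each voxel is a 2 byte integer
echo "$voxels voxels, $bytes bytes"
```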

One thing to note about programs that use AFNI datasets is how the
datasets are named on the command line.  An input dataset, such as
what is used with the '-a' option in 3dcalc, can be written in many
ways, but requires the part of the filename with the view.  Examples
that would work here are:

    ARzs_EPI_r1_vr+orig
    ARzs_EPI_r1_vr+orig.
    ARzs_EPI_r1_vr+orig.HEAD
    ARzs_EPI_r1_vr+orig.BRIK
    ARzs_EPI_r1_vr+orig.BRIK.gz   # if such a file exists

These all tell AFNI the proper file to read in.

The '-prefix' option is generally different.  When an output prefix
is specified, it is expected that everything up to, but NOT including
the view type is specified.  The output view type is generally inferred
from the input dataset(s) and the relevant actions.

An output prefix of 'ARzs_EPI_avg' would produce the dataset files,

    'ARzs_EPI_avg+orig.BRIK' and 'ARzs_EPI_avg+orig.HEAD'

If we accidentally include the view (say, '+orig') in the prefix, programs
still append another one.  So an output prefix of 'ARzs_EPI_avg+orig' would
produce dataset file of:

    'ARzs_EPI_avg+orig+orig.BRIK' and 'ARzs_EPI_avg+orig+orig.HEAD'

Note the '\' character at the very end of the first line.  Placing that
character at the end of a line basically tells the shell that the line
does not end there, that it continues on to the next one, as if it were
one long line.
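A tiny demonstration: the backslash splices the two physical lines into
one logical command.

```shell
# without the '\', the shell would treat 'two' as a separate command
msg=$(echo one \
          two)
echo "$msg"
```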




# -------------------------------------------------------------------
# create 1D file (Ref_ARzs.1D) containing ideal reference function

waver   -GAM -dt 2 -numout 160      \
        -inline 15@0                \
                15@1 7@0 15@1 8@0   \
                15@1 7@0 15@1 8@0   \
                15@1 7@0 15@1 8@0   \
                10@0                \
        > Ref_ARzs.1D


The waver program is used to create an ideal reference function.  The
160 point time series for each voxel will be compared to this as a
measure of statistical significance.  The file 'Ref_ARzs.1D' is created
here, and contains that ideal reference function.

# Unix      (see also 'waver -help')

The '-GAM' option tells waver to use a Gamma variate waveform.  The '-dt'
option tells waver to use 2 seconds for the Delta Time, the length of time
between data points that are output.  So the time points will correspond
with the time series for each voxel in our 3D+t datasets.  The '-numout'
option is used to specify that we want 160 points in time to be output
(again, matching our datasets).

The '-inline' option is used to specify 'ON' and 'OFF' periods for input
stimuli.  Recall that this is a block design with stimulus times of:

    30 seconds OFF
    30 seconds ON, 15 seconds OFF
    30 seconds ON, 15 seconds OFF
    30 seconds ON, 15 seconds OFF
    30 seconds ON, 15 seconds OFF
    30 seconds ON, 15 seconds OFF
    30 seconds ON, 15 seconds OFF
    20 seconds OFF

Since our time points are 2 seconds apart (recall that we used TR = 2
in the experiment), each 30 second interval is actually 15 time points.
For the sake of making a basic shell script here, we have 'cheated' a
bit to make the 15 second OFF intervals.  Since 15/2 is not an integer,
we translated this chart to:

    15 time points OFF
    15 time points ON, 7 time points OFF
    15 time points ON, 8 time points OFF
    15 time points ON, 7 time points OFF
    15 time points ON, 8 time points OFF
    15 time points ON, 7 time points OFF
    15 time points ON, 8 time points OFF
    10 time points OFF

This is clearly not a precise way to specify the intervals, but it will
suffice for this introductory document.
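The translation from seconds to time points is just division by the TR,
and shell integer arithmetic shows exactly where the 'cheat' comes from:

```shell
tr=2                          # TR in seconds
on_points=$((30 / tr))        # a 30 second ON block is 15 time points
off_points=$((15 / tr))       # 15/2 truncates to 7; hence the 7-or-8 fudge
echo "$on_points $off_points"
```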

Note, as was done with 3dcalc above, line continuation characters are
used here.  Having the '\' characters at the end of the lines results in
the shell interpreting 'waver -GAM ... ... > Ref_ARzs.1D' as a
single command (as we intended).  Those line continuation characters are
used to make the commands more readable.

Lastly, note again that we 'redirect' the output using the '>' symbol,
thus creating the file, 'Ref_ARzs.1D'.




# -------------------------------------------------------------------
# gasp! actual statistical analysis of data! ...

3ddelay -input ARzs_EPI_avg+orig -ideal_file Ref_ARzs.1D -fs 0.5 -T 45


The 3ddelay program estimates the delay, in seconds, between the time
series for each voxel, and the time series of the ideal reference function.
The intention is to get some measure of how long it takes for a part of the
brain to react to the stimulus.

# Unix      (see also '3ddelay -help')

The '-input' option is used to specify the dataset to evaluate.  Recall that
this dataset has 73728 ( = 18 * 64 * 64 ) voxels, and a 160 point time series
for each one.  This dataset is the average of the datasets for our 2 runs.

The '-ideal_file' option tells 3ddelay to use 'Ref_ARzs.1D' as the file
containing the ideal reference time series.

The '-fs' option is used to specify the Frequency of the Sampling, in hertz.
Since our TR is 2 seconds, we use 0.5 ( = 1 / 2 seconds ).

The '-T' option is used to tell 3ddelay that the stimulus was periodic,
with a period of 45 seconds.  This matches the periodic '30 seconds ON,
15 seconds OFF' intervals.

Lastly, note that if the '-prefix' option is not used, as is the case here,
3ddelay will simply append '.DEL' to the prefix of the input dataset.  So our
resulting functional dataset will be the pair of files:

    ARzs_EPI_avg.DEL+orig.BRIK   and   ARzs_EPI_avg.DEL+orig.HEAD




# -------------------------------------------------------------------
# extra stuff

# -------------------------
# remove extra-cranial data

3dIntracranial -anat ARzs_spgr+orig -prefix ARzs_spgr_NoSkl


The 3dIntracranial program performs automatic segmentation of an anatomical
dataset, removing data that is considered 'outside the brain'.  Such a
dataset is visually nice for something like volume rendering in AFNI.

# Unix      (see also '3dIntracranial -help')

The '-anat' option specifies the dataset to use as input, and again, the
'-prefix' option specifies the prefix of the output dataset.




# -------------------------
# we are finished!

echo "results are left in directory: ARzs_data/afni"


This is just a reminder for us.  When the script finishes, we are
told where to find the resulting datasets.