.. _tut_fs_fsprep:

*********************************
How to use FS recon-all with AFNI
*********************************

.. contents::
   :local:

Introduction
============

**Download script:** :download:`fs_fsprep.tcsh `

.. highlight:: Tcsh

.. comment on creation of this script

   This script was generated from running:
   afni_doc/helper_tutorial_rst_scripts/tut_fs_fsprep_MARK.tcsh
   as described in the _README.txt in that same directory.

FreeSurfer (FS) provides a number of useful tools for brain imaging.
In particular, the parcellation/segmentations and anatomical surfaces
generated by ``recon-all`` can be used in lots of applications.

Here we describe using FS's ``recon-all``, and then bringing its
results into AFNI/SUMA-land.  (In this case, we are using FS
ver=7.1.1, but it should work equivalently for most earlier versions.)

Start-to-finish FS example
==========================

This is a compact example of going through the dataset check and
running FS.  It is, in fact, what is run on the AFNI Bootcamp data
example (on the anatomical in ``AFNI_data6/FT_analysis/FT/``).

First, we copy the dset to be in NIFTI format (if it isn't already)
using ``3dcopy``.  Then we run FS's ``recon-all`` to estimate
surfaces, tissue maps and specific anatomical parcellations (and in
this example we assume that the data came from a 3T scanner, hence
the use of the ``-3T`` flag).  To bring that output into standard
NIFTI and GIFTI format, as well as to generate standard surfaces and
other niceties, we run AFNI's ``@SUMA_Make_Spec_FS``:

.. code-block:: Tcsh

   #!/bin/tcsh

   # 0) Copy data to NIFTI format (if necessary):
   3dcopy FT_anat+orig.HEAD FT_anat_cp.nii.gz

   # 1) Run FreeSurfer, basic example A.
   recon-all                 \
       -all                  \
       -3T                   \
       -sd .                 \
       -subjid FT            \
       -i FT_anat_cp.nii.gz

   # 2) Import FS results into SUMA-land (and standardize surfaces).
   @SUMA_Make_Spec_FS        \
       -fs_setup             \
       -NIFTI                \
       -sid    FT            \
       -fspath ./FT

And that is all.

Note that ``recon-all`` will take a long time to run (several hours).
There are some ways to speed it up a bit using its internal
parallelization, which you can read about in the next section.

| There are a fair number of other options/flags that you could
  consider using with ``recon-all``.  We are not so familiar with
  them, but a full list for investigating is here:
| ``_

| Also, if you have data with isotropic, high-resolution voxels
  (voxels with equal edge lengths, each :math:`<1~{\rm mm}`), then
  you will likely have to use additional considerations.  For
  information on these, read here:
| ``_

.. _tut_fs_fsprep_par:

Run recon-all faster: ``-parallel``
===================================

From the `FS documentation `_, there has been internal
parallelization of parts of ``recon-all`` since v5.3, using OpenMP
(which is also what several AFNI programs use for parallelization
speedup).  You can/should read more about the details from the FS
documentation, but we describe using it here.

At least in the most recent version of FS (v7.\*), you can add a
``-parallel`` option flag at the end of your ``recon-all`` command to
take advantage of a default of 4 CPUs.  So, going from the first
example, you could run::

   # example B: using default parallelization
   recon-all                 \
       -all                  \
       -3T                   \
       -sd .                 \
       -subjid FT            \
       -i FT_anat_cp.nii.gz  \
       -parallel

Additionally, you can get further control by adding the option
``-openmp ..``, whose single argument is the number of CPUs for
OpenMP to use.  Theoretically, this can be more than 4, if you have
the computing power available.
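Before picking a value for ``-openmp ..``, it can help to check how
many CPUs your machine actually has.  As a minimal check (``nproc``
is a standard Linux/coreutils command and may not exist on other
operating systems; ``afni_check_omp`` is an AFNI helper program, also
used in the script further below, that reports how many threads
OpenMP would use)::

   # number of processing units the OS reports (Linux)
   nproc

   # number of threads AFNI's OpenMP setup would use
   afni_check_omp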
So, you could try::

   # example C: using parallelization with 8 CPUs
   recon-all                 \
       -all                  \
       -3T                   \
       -sd .                 \
       -subjid FT            \
       -i FT_anat_cp.nii.gz  \
       -parallel             \
       -openmp 8

.. _tut_fsprep_anec_desk:

Anecdote 1: on Linux desktop
----------------------------

As an anecdote (each of these is a single run, not an average over
several), I ran each of the above ``recon-all`` cases on my desktop
for the same Bootcamp dataset described above.  This desktop is a
modern Ubuntu 20.04 Linux machine with 20 cores (#humblebrag).  In
each case, I had 16 threads available (I had set ``setenv
OMP_NUM_THREADS 16`` in my tcsh script).

The ``recon-all`` timing results were as follows:

* **Ex A:** 3.751 hours

* **Ex B:** 2.160 hours

* **Ex C:** 1.944 hours

So, using the ``-parallel`` option **does** seem to help speed things
up noticeably (by a bit under a factor of 2, here).  Using
``-openmp 8`` on top of this did not seem to matter much.

And note: I also ran Ex. A above with ``setenv OMP_NUM_THREADS 1``,
and the runtime was a very similar 3.754 hours.  So, if you are *not*
using ``-parallel``, you might as well just use a single thread: you
don't get any speedup from OpenMP without that option being used.

.. _tut_fsprep_anec_bio:

Anecdote 2: on Biowulf cluster
------------------------------

As another anecdote, I ran each of the above ``recon-all`` cases on
the NIH's Biowulf cluster, for the same Bootcamp dataset described
above.  In the parallel cases, I had 8 CPUs available (I requested 8
CPUs from the cluster, and running ``afni_check_omp`` in the terminal
indeed returned the value of 8).

The ``recon-all`` timing results were as follows:

* **Ex A:** 9.181 hours

* **Ex B:** 5.120 hours

* **Ex C:** 5.093 hours

So, using the ``-parallel`` option **does** seem to speed things up
significantly (by about a factor of 2, here).  I did **not** get
further benefit by also increasing the number of threads with the
``-openmp ..`` option; I am not sure why.  If you are able to get
further runtime improvement somehow, please let us know how!

.. _tut_fsprep_anec_caveat:

Anecdote 3: caveats with ``-parallel``
--------------------------------------

*However,* please also note: when processing a group of several
hundred anatomical volumes on the cluster, I had several
``recon-all`` runs fail when using the ``-parallel`` option.  The
specific failure was this message::

   Cannot find rh.white.H

From searching online, this appears to be a known issue that can
occur, related to the inner workings of the parallelization.  I had
the same error occur once when running on my desktop, too.

So, if this pops up while you are using the ``-parallel`` option, try
removing it and rerunning your job.  (I had no subsequent failures on
the cluster once I had done this.)
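Since that failure announces itself with a specific message, one
simple (hypothetical) way to catch it when scripting many subjects is
to search your saved ``recon-all`` terminal log for that string.
This is just a sketch, assuming you have saved the terminal output to
a log file (``log_fs_FT.txt`` here is a made-up name; the logging
script later in this page writes such files):

.. code-block:: Tcsh

   # hypothetical name of a saved recon-all terminal log
   set log = log_fs_FT.txt

   # check for the known parallel-related failure message
   grep -q "Cannot find rh.white.H" ${log}

   if ( $status == 0 ) then
       echo "** Known -parallel failure found in ${log}"
       echo "   -> rerun this subject's recon-all without -parallel"
   endif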
A note on filenames/paths with FS
=================================

Here we describe how to specify and link together output paths for
running ``recon-all`` and ``@SUMA_Make_Spec_FS``.

By default, FS's ``recon-all`` will put its output directory in a
location specified with a ``$SUBJECTS_DIR`` environment variable
created at setup.  For example, on my computer ``echo $SUBJECTS_DIR``
displayed ``/usr/local/freesurfer/subjects``.  However, I much prefer
to specify my own path/location, and hence I use the ``-sd ..``
option.  Consider the following command:

.. code-block:: none

   recon-all           \
       -all            \
       -3T             \
       -sd AAA         \
       -subjid BBB     \
       -i DSET.nii.gz

After this, the path to the top of the output directory would be:
``AAA/BBB/``.  And to bring the FS output into AFNI/SUMA-land, we
could run:

.. code-block:: none

   @SUMA_Make_Spec_FS  \
       -fs_setup       \
       -NIFTI          \
       -sid    BBB     \
       -fspath AAA/BBB

\.\.\. and the outputs of interest would be in the ``AAA/BBB/SUMA/``
directory.

Note how we use the subject ID "BBB" twice: it is required as part of
the path, but we also use it (optionally) after ``-sid ..``, so that
various filenames contain it.

These conventions were used in the above start-to-finish example.
But since we get paid by the word, we thought we would describe such
things in more explicit and general and technical and detailed detail
here.

A general tcsh script for FS+SUMA
=================================

Putting this all together, if we were writing a script to combine
running ``recon-all`` and ``@SUMA_Make_Spec_FS``, the following is
probably what The Royal We would do (with ``tcsh`` syntax).

The first four variables at the top would be set with our specific
file names and folder locations of choice.  After that, everything is
automatic, including saving the terminal text to log files, just in
case we want to check back on things later (and note that
``recon-all`` here includes the ``-parallel`` option; whether you
want to include that depends on your system):

.. code-block:: tcsh

   #!/bin/tcsh

   set dset     = INPUT_DSET
   set subj     = SUBJECT_ID
   set dir_fs   = PATH_TO_FS_OUTPUT
   set dir_echo = PATH_TO_SAVE_STDERR_OUTPUT    # maybe: "."

   # ------ setup and/or check number of threads

   ### can uncomment next line if this should be set here (NB: I am
   ### aiming to use 4 threads below in recon-all with the '-parallel' opt)
   # setenv OMP_NUM_THREADS 4

   set nomp = `afni_check_omp`
   echo "++ Should be using this many threads: ${nomp}"       \
       > ${dir_echo}/o.00_fs_${subj}.txt

   # ------ run programs, logging terminal output and exiting on failure

   \mkdir -p ${dir_fs}

   time recon-all                \
       -all                      \
       -3T                       \
       -sd ${dir_fs}             \
       -subjid ${subj}           \
       -i ${dset}                \
       -parallel                 \
       |& tee -a ${dir_echo}/o.00_fs_${subj}.txt

   if ( $status ) then
       echo "** ERROR running FS recon-all for: ${subj}"      \
           |& tee -a ${dir_echo}/o.00_fs_${subj}.txt
       exit 1
   endif

   @SUMA_Make_Spec_FS            \
       -fs_setup                 \
       -NIFTI                    \
       -sid ${subj}              \
       -fspath ${dir_fs}/${subj} \
       |& tee ${dir_echo}/o.01_suma_makespec_${subj}.txt

   if ( $status ) then
       echo "** ERROR running @SUMA_Make_Spec_FS for: ${subj}"   \
           |& tee -a ${dir_echo}/o.01_suma_makespec_${subj}.txt
       exit 1
   endif

   echo "++ Done with FS + conversion to SUMA for: ${subj}"

The main FS output would be in ``${dir_fs}/${subj}/``, and the
converted NIFTI/GIFTI files to carry on with would be in
``${dir_fs}/${subj}/SUMA/``.

The above could be translated to a ``bash`` script, just changing the
syntax in lines with ``setenv`` and ``set``, as well as the way
``tee``\ ing is done.
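If you have many subjects to process (e.g., the several hundred
anatomicals mentioned in the anecdotes above), one simple approach is
to wrap the steps above in a loop over subject IDs, or to submit one
such script per subject on a cluster.  Here is a minimal,
hypothetical sketch of the loop structure only; the subject IDs and
the ``anat_${subj}.nii.gz`` naming convention are made up for
illustration:

.. code-block:: tcsh

   #!/bin/tcsh

   # hypothetical list of subject IDs
   set all_subj = ( sub-001 sub-002 sub-003 )

   foreach subj ( ${all_subj} )
       # made-up naming convention for each input anatomical
       set dset = anat_${subj}.nii.gz

       echo "++ Processing: ${subj} (anat: ${dset})"

       # ... then run the recon-all + @SUMA_Make_Spec_FS steps from
       # the script above, using ${subj} and ${dset}
   end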
A note on @SUMA_Make_Spec_FS outputs
====================================

The final ``SUMA/`` directory contains: volumetric outputs of
segmentations and parcellations, surfaces of various sizes and
geometry, and more.  Several of these data sets are direct copies of
FS output, but in NIFTI and other formats usable by AFNI.

We also generate standardized surfaces, which are *very* useful for
group analysis, and you can read more about that here:
``_

We also derive some other datasets that we have found to be useful,
such as groupings of parcellated ROIs by tissue types.

These helpful datasets, stats files and QC images are
:ref:`now described in detail here `.

|

Minor note on FS setup
======================

By default, after you have set up FreeSurfer, every time you open a
new terminal or source one of your ``~/.*rc`` files, you will get
some text about your FS setup displayed in the terminal.  This comes
from the FS setup script that is run each time, and looks something
like::

   -------- freesurfer-linux-centos7_x86_64-7.1.1-20200723-8b40551 --------
   Setting up environment for FreeSurfer/FS-FAST (and FSL)
   FREESURFER_HOME   /usr/local/freesurfer
   FSFAST_HOME       /usr/local/freesurfer/fsfast
   FSF_OUTPUT_FORMAT nii.gz
   SUBJECTS_DIR      /usr/local/freesurfer/subjects
   MNI_DIR           /usr/local/freesurfer/mni

The exact text varies based on your OS, version of FS, location of
the binaries, etc.

Anyway, if you would like to *disable* the display of that text
message, you can do the following:

* For ``bash`` shell users, put the following into your ``~/.bashrc``
  file:

  .. code-block:: bash

     export FS_FREESURFERENV_NO_OUTPUT="OFF"

  \.\.\. **above** the ``source $FREESURFER_HOME/SetUpFreeSurfer.sh``
  line.

* For ``tcsh`` shell users, put the following into your ``~/.cshrc``
  file:

  .. code-block:: tcsh

     setenv FS_FREESURFERENV_NO_OUTPUT "OFF"

  \.\.\. **above** the ``source $FREESURFER_HOME/SetUpFreeSurfer.csh``
  line.

If you open a new terminal, you should **not** see the setup info
text, but you *should* still be able to run FS programs fine.  This
is, of course, entirely optional.
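As a quick, optional sanity check that FS is still usable after
making this change, you could open a new terminal and verify that the
main FS environment variable and programs are still found (the exact
paths printed will depend on your installation):

.. code-block:: Tcsh

   # should print your FreeSurfer installation directory
   echo $FREESURFER_HOME

   # should print the path to the recon-all executable
   which recon-all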