Dear AFNI users-
We are very pleased to announce that the new AFNI Message Board framework is up! Please join us at:
https://discuss.afni.nimh.nih.gov
Existing user accounts have been migrated, so returning users can log in after requesting a password reset. New users can also create accounts through the standard account-creation process. Please note that the setup emails might initially land in spam folders (especially for NIH users!), so please check there at first.
The existing discussion threads have been migrated to the new framework. The old Message Board will remain visible, but read-only, for a little while.
Sincerely,
AFNI HQ
Yes, when using a custom template you can supply it directly as the base to auto_warp.py (or other AFNI programs). Do check that you've specified a SPACE and the correct VIEW (tlrc) on the custom template first, so those carry over.
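For concreteness, a minimal sketch of that setup (the template and dataset names here are hypothetical):

```shell
# Stamp the custom template with a space and the tlrc view so they carry over,
# then hand it to auto_warp.py as the base.
3drefit -space MNI -view tlrc my_template.nii.gz
auto_warp.py -base my_template.nii.gz -input subj_anat+orig
```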
There are some concerns with warping stats files, since interpolating statistics can be problematic. The current AFNI recommendation is to
by Peter Molfese - AFNI Message Board
If you continue to struggle, you may need to do some hand alignment first before letting 3dAllineate/align_epi_anat.py work their magic. I've written up some instructions here. It closely follows the procedure we used to perform with ANTs, just with AFNI tools.
by Peter Molfese - AFNI Message Board
Hi Steph,
As author of the blog, I wanted to chime in and agree with Paul - I highly recommend using TORTOISE to process your diffusion data. I've updated the post to include this information at the top as I've been doing for other older posts.
Also - Thanks for the vote of confidence Paul!
-Pete
by Peter Molfese - AFNI Message Board
The paper is a couple days from submission with final edits in the hands of coauthors. In the meantime, you can cite the poster:
Molfese, Glen, Mesite, Pugh, & Cox (2015, June). The Haskins Pediatric Brain Atlas.
Poster session presented at the Organization for Human Brain Mapping, Honolulu, HI
And you've probably already found our template included in AFNI (HaskinsPeds_NL_templ
by Peter Molfese - AFNI Message Board
Hi Qiuhai-
That should be correct; you can input multiple warps to 3dNwarpApply and it will concatenate them on the fly. You could also use cat_matvec to concatenate the two linear warps (_shft + 12.1D) and then pass the result as one warp to 3dNwarpApply.
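A sketch of the two approaches (the filenames are hypothetical; adjust to your own _shft and .aff12.1D outputs):

```shell
# Option A: hand both transforms to 3dNwarpApply and let it concatenate on the fly.
3dNwarpApply -nwarp "anat.aff12.1D anat_shft.1D" \
             -source dset+orig -master template+tlrc -prefix dset_warped

# Option B: combine the two linear transforms with cat_matvec first,
# then pass the single matrix to 3dNwarpApply.
cat_matvec anat.aff12.1D anat_shft.1D > combined.aff12.1D
3dNwarpApply -nwarp combined.aff12.1D \
             -source dset+orig -master template+tlrc -prefix dset_warped
```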
-Peter
by Peter Molfese - AFNI Message Board
It's possible; my first guess is that your datasets' grid centers are far apart and that shift may not be included in the transform. Can you post the terminal output from the auto_warp.py command?
Also, what version of AFNI are you running?
by Peter Molfese - AFNI Message Board
NIFTI files are almost inherently de-identified, as most of the tags in a NIFTI header do not contain PII. You could try converting to NIFTI and then inspecting the headers with 3dinfo or nifti_tool to make sure nothing you're worried about slips into the NIFTI file. If that works, you could skip the anonymize-DICOM step.
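One way to do that check (directory and file names here are hypothetical):

```shell
# Convert straight to NIFTI, then inspect the header for anything identifying.
dcm2niix_afni -o nifti_out dicom_dir
3dinfo -VERB nifti_out/anat.nii
nifti_tool -disp_hdr -infiles nifti_out/anat.nii
```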
A growing number of sites are recommending th
by Peter Molfese - AFNI Message Board
Are you converting the data from DICOM to NIFTI?
If you're still in DICOM space, you can use the anonymize function in mricron/dcm2nii (https://www.nitrc.org/projects/mricron) to anonymize the files. If you're still seeing text burned into the images themselves, there are some solutions, but it would be good to know whether you're converting from DICOM to NIFTI.
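If you do convert, current dcm2niix (including AFNI's bundled dcm2niix_afni) can anonymize during conversion; the directory names below are hypothetical:

```shell
# -ba y anonymizes the BIDS sidecar JSON that accompanies the NIFTI output.
dcm2niix_afni -ba y -o nifti_out dicom_dir
```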
by Peter Molfese - AFNI Message Board
My advice would be to write two afni_proc commands:
1) One that takes the first few steps, after which you run SLOMOCO on the output.
2) Another that takes the SLOMOCO output and runs the rest of the pipeline.
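A very rough sketch of that split (the subject IDs, filenames, and the choice of processing blocks are all hypothetical; adjust to your own pipeline):

```shell
# Stage 1: early steps only, stopping before volume registration.
afni_proc.py -subj_id subj1.pre -dsets epi_run1+orig.HEAD \
    -blocks despike tshift

# ... run SLOMOCO on the stage-1 output here ...

# Stage 2: feed the SLOMOCO-corrected data into the rest of the pipeline.
afni_proc.py -subj_id subj1.post -dsets epi_run1.slomoco+orig.HEAD \
    -blocks blur mask scale regress
```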
If you get stuck, please post your commands.
by Peter Molfese - AFNI Message Board
I'll start by saying that we recommend afni_proc.py, a super-script for accomplishing most of your fMRI analysis needs. You can find more info in the docs and class handouts.
To your specific questions on the fairly abstract description of your task:
1. Do not do #1 without some hefty processing (e.g., converting to percent signal change, detrending, censoring, and more)
2. This
by Peter Molfese - AFNI Message Board
We're talking in pretty hypothetical terms, so feel free to list the atlas and templates you're using for more directed help. But in the abstract, you could use auto_warp.py to compute a combined linear and nonlinear warp from one template to another. Then, when you apply that combined warp (3dNwarpApply) to your atlas, you specify NN (nearest neighbor) interpolation, which would preser
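For concreteness, a sketch of that two-step idea (all of the template, atlas, and output names here are hypothetical, including the auto_warp.py output names):

```shell
# Compute the warp from template A to template B.
auto_warp.py -base templateB+tlrc -input templateA+tlrc

# Apply the combined warp to the atlas with nearest-neighbor interpolation
# so the integer region labels survive intact.
3dNwarpApply -nwarp "awpy/anat.un.aff.qw_WARP.nii awpy/anat.un.aff.Xat.1D" \
             -source atlas_in_A+tlrc -master templateB+tlrc \
             -ainterp NN -prefix atlas_in_B
```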
by Peter Molfese - AFNI Message Board
Have you taken a look at either 3dNwarpCat or 3dNwarpCalc?
I tend to use 3dNwarpCat more, and then you can apply it with 3dNwarpApply.
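A minimal sketch of that pairing (warp names are hypothetical):

```shell
# Concatenate two transforms into a single warp, then apply it.
# Check the order convention in the 3dNwarpCat help before relying on this.
3dNwarpCat -prefix combined_WARP -warp1 nonlinear_WARP+tlrc -warp2 affine.aff12.1D
3dNwarpApply -nwarp combined_WARP+tlrc -source dset+orig -prefix dset_warped
```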
by Peter Molfese - AFNI Message Board
Hi Jung-
1) You can use 3dROIstats to get values for EVERY single ROI in your atlas on a given dataset.
2) whereami uses not-so-secret transforms to convert from one space to another.
3) I would probably convert your ROI to different spaces using a variety of options. I believe there's a function in whereami's command line (-show_chain) that would be of help. Daniel Glen
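For item 1, a minimal sketch (the atlas and dataset names are hypothetical):

```shell
# Mean value within every ROI of the atlas, for each sub-brick of the dataset.
3dROIstats -mask atlas+tlrc stats_dset+tlrc > roi_means.txt
```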
by Peter Molfese - AFNI Message Board
Hi Again-
It's not entirely clear what's going on in your pipeline. My initial idea was simply to convert your MGZ files into GIFTI files by doing something like what's described on my other blog:
mris_convert -c ./lh.thickness.fwhm10.fsaverage.mgh \
$SUBJECTS_DIR/fsaverage/surf/lh.white \
Subject01.lh.thickness.fsaverage.gii
No need to merge the hemispheres, just run tw
by Peter Molfese - AFNI Message Board
You can use mris_convert (part of freesurfer) to convert the MGZ files to GIFTI (.gii) and then process them in AFNI’s 3dLME.
by Peter Molfese - AFNI Message Board
There is no reason to do the @auto_tlrc or other warp. The surface is aligned to the MNI template. Your stats were run on the standard surface.
Is the 3dSurf2Vol step just for visualization? I'll ask Paul or Rick to chime in on the grid-parent stuff; I haven't played with it extensively. I suspect it'll make only a minor difference between the SurfVol and the T1. I usually pick the SurfVol.
by Peter Molfese - AFNI Message Board
Hi Meng-
1) You can use any participant's standard surface, or you can use the standard surfaces from MNI or TT, which admittedly often look very pretty since they're based on a clean template:
MNI152 -
N27 in MNI -
TTN27 -
2) You can use the standard space SurfVol from the links above.
3) I think using the standard brain mesh and SurfVol addresses your other question
by Peter Molfese - AFNI Message Board
Can you post your output log from afni_proc.py?
You should check to make sure the slice-timing pattern is included in the NIFTI files, as Siemens scanners often don't encode that in DICOMs.
3dinfo -VERB 6.120.1+orig.HEAD
If you don't see any slice-timing information in the info, and the output of the afni_proc.py script says "already aligned in time", you should specify a slic
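If the timing really is missing, one way to supply a pattern through afni_proc.py is sketched below; the alt+z pattern is only an assumption here, so confirm it against your acquisition protocol:

```shell
afni_proc.py -subj_id subj1 -dsets epi_run1+orig.HEAD \
    -blocks tshift volreg blur mask scale regress \
    -tshift_opts_ts -tpattern alt+z
```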
by Peter Molfese - AFNI Message Board
It can certainly happen for a variety of reasons. Some potential questions:
1) What's different between your acquisition and that of your colleague?
2) Have you checked the alignment of your data to the anatomical?
3) Does your data have the slice timing built in correctly? Can you post the output log?
I'd recommend adding some things to your proc script to possibly accoun
by Peter Molfese - AFNI Message Board
Please post your exact commands as well as the output of:
afni_system_check.py -check_all
by Peter Molfese - AFNI Message Board
Ok, I'll need some more information: Where did the file freesurfer_DKT.nii come from?
by Peter Molfese - AFNI Message Board
While you're waiting on an evaluation of your data, try adding either:
-align_opts_aea -giant_move
or
-align_opts_aea -big_move
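In context, that would look something like this (all the other options here are a hypothetical minimum, not your actual command):

```shell
afni_proc.py -subj_id subj1 -dsets epi_run1+orig.HEAD -copy_anat anat+orig \
    -blocks align volreg blur mask scale regress \
    -align_opts_aea -giant_move
```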
by Peter Molfese - AFNI Message Board
Hi Naveed-
I generally trust dcm2nii (or the packaged version with AFNI: dcm2niix_afni) for conversion of multiband data.
1) Are you using the CMRR sequence?
2) What's your afni_proc.py command?
-Peter
by Peter Molfese - AFNI Message Board
Please post the full output of:
afni_system_check.py -check_all
by Peter Molfese - AFNI Message Board
An easy way to get a listing of all ROIs across all of the atlases that you have loaded is:
whereami -show_atlas_code
Hypothalamus isn't that common in the atlases I usually look through, but you could take the TT version and transform it, either with 3dWarp's -tta2mni option or by creating a transform (preferably with auto_warp.py or 3dQwarp) and applying that transform.
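A sketch of both pieces (the ROI filenames are hypothetical):

```shell
# List every region code across the atlases you have loaded.
whereami -show_atlas_code

# One option for TT -> MNI: 3dWarp's built-in transform.
3dWarp -tta2mni -prefix roi_mni roi_tt+tlrc
```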
by Peter Molfese - AFNI Message Board
To visualize the time course:
1. You can create an ROI in AFNI
2. Extract the time series (3dmaskave) from your full preprocessed data (usually the all_runs)
3. Plot the time series compared to your design (1dplot)
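The three steps above might look like this (the mask and dataset names are hypothetical):

```shell
# Average the time series within the ROI mask, then plot it
# alongside the ideal regressor.
3dmaskave -quiet -mask my_roi+orig all_runs.subj1+orig > roi_ts.1D
1dplot roi_ts.1D ideal_regressor.1D
```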
You can also set the underlay to your functional dataset, click on an active voxel to bring up the graph viewer, and under one of the dropdown menus plot the ideal fo
by Peter Molfese - AFNI Message Board
Could be a number of things. I'll say that it's not THAT unusual to have activation outside of the brain; SPM tends to mask this by default. They also used to mask the cerebellum; I'm not sure if that's ever changed. When you adjust the threshold slider to a suitable p-value, do you still see activation in areas that you would expect? Have you tried to plot the activity of a parti
by Peter Molfese - AFNI Message Board
I keep meaning to find an easier way to do this, but my workflow is to usually run:
ROI2dataset followed by SurfClust, which will output the center x, y, z.
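Roughly, the workflow looks like this (the spec, surface, and ROI names are all hypothetical; check the SurfClust help for the exact option usage):

```shell
# Convert a drawn surface ROI into a dataset, then let SurfClust
# report each cluster's center coordinates.
ROI2dataset -prefix my_roi.niml.dset -input my_roi.niml.roi
SurfClust -spec std.60.subj1_both.spec -surf_A lh.smoothwm \
          -input my_roi.niml.dset 0 -rmm -1
```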
by Peter Molfese - AFNI Message Board
You can use SurfToSurf to map data between the original mesh and the standard mesh. The example is tucked away in the MapIcosahedron documentation here.
Say you want to map another (SOMEDSET) dataset defined on the
original mesh onto the std.60 mesh and use the same mapping derived
by MapIcosahedron. The command for that would be:
SurfToSurf -i_fs std.60.rh.smoothwm.asc \
-i_fs rh.s
by Peter Molfese - AFNI Message Board
charujing123 Wrote:
-------------------------------------------------------
> Hi Peter
> Thanks
> Did you mean that, for the subcortical, we can
> only perform the volume-based analyses? And we
> can use the segmentation (eg. freesurfer) with
> 3dSurf2Vol to get the subcortical (volume based)
> analyses.
> Is that right? If so, how to get group analyses,
> as
by Peter Molfese - AFNI Message Board