Show all posts by user
Dear AFNI users-
We are very pleased to announce that the new AFNI Message Board framework is up! Please join us at:
https://discuss.afni.nimh.nih.gov
Existing user accounts have been migrated, so returning users can log in by requesting a password reset. New users can create accounts through the standard account-creation process. Please note that the setup emails may initially land in spam folders (especially for NIH users!), so please check there at first.
The existing discussion threads have also been migrated to the new framework. The current Message Board will remain visible, but read-only, for a little while.
Sincerely,
AFNI HQ
Results 1 - 29 of 29
Hi Rick,
Thanks for your reply. I am not censoring but you're right that it's a lot of regressors. I wanted to verify with you that the steps I'm taking are correct to create regression matrices and run nuisance regression. For each participant, I created an ortvec matrix consisting of the 22 regressors of interest (10 CompCors and 6 realignment parameters and their first deriva
by jkblujus - AFNI Message Board
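The ortvec construction described above can be sketched with NumPy; the file name, regressor values, and run length here are illustrative stand-ins, not the poster's actual data:

```python
import numpy as np

def build_ortvec(regressors, out_path):
    """Column-stack nuisance regressors (each shape (n_trs,)) into one
    ortvec matrix and write it as a .1D text file, suitable for e.g.
    3dTproject -ort or afni_proc.py -regress_extra_ortvec."""
    mat = np.column_stack(regressors)          # (n_trs, n_regressors)
    np.savetxt(out_path, mat, fmt="%.6f")
    return mat

# hypothetical inputs: 10 CompCor components, 6 realignment
# parameters, and the 6 first derivatives of those parameters = 22
n_trs = 200
rng = np.random.default_rng(0)
compcor = [rng.standard_normal(n_trs) for _ in range(10)]
motion = [rng.standard_normal(n_trs) for _ in range(6)]
derivs = [np.r_[0, np.diff(m)] for m in motion]   # diff, zero-padded
mat = build_ortvec(compcor + motion + derivs, "nuisance_ortvec.1D")
print(mat.shape)  # (200, 22)
```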
Hi all,
I have preprocessed a set of resting state functional images through fMRIprep version 20.2.6. This entailed slice time correction, volume registration, alignment between anat and EPI, normalization to MNI space, and confound time series estimation. The output confounds file includes the 6 realignment parameters, global signal, WM signal, CSF signal, their temporal derivatives, and the
by jkblujus - AFNI Message Board
Hi Rick,
Great! Thank you for adding the option!
Best,
Jenna
by jkblujus - AFNI Message Board
Hi all,
I have preprocessed a set of resting state images using AFNI ver 21.2.04 ('Nerva'). The anatomical images were sent through SSwarper and recon-all and my proc.py script was modeled after example 11. I am using a functional atlas that consists of 90 ROIs in MNI space. I was hoping to calculate some QC-FC metrics as described in Ciric et al (2017) to evaluate the distance-depe
by jkblujus - AFNI Message Board
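A minimal Python sketch of the QC-FC distance-dependence metric in the spirit of Ciric et al. (2017): correlate each FC edge with subject motion across subjects, then correlate those edge-wise values with the inter-ROI distance. The inputs below are synthetic and purely illustrative:

```python
import numpy as np

def qcfc_distance_dependence(fc_edges, motion, roi_xyz):
    """QC-FC distance-dependence: Pearson r between each FC edge and
    subject head motion across subjects, then the correlation of those
    QC-FC values with Euclidean distance between ROI centers.
    fc_edges: (n_subj, n_edges) upper-triangle FC values
    motion:   (n_subj,) e.g. mean framewise displacement
    roi_xyz:  (n_rois, 3) ROI center coordinates."""
    iu = np.triu_indices(roi_xyz.shape[0], k=1)
    # edge-wise Pearson r with motion, across subjects
    fc_z = (fc_edges - fc_edges.mean(0)) / fc_edges.std(0)
    mo_z = (motion - motion.mean()) / motion.std()
    qcfc = fc_z.T @ mo_z / motion.size
    # inter-ROI distances for the same edges
    dist = np.linalg.norm(roi_xyz[iu[0]] - roi_xyz[iu[1]], axis=1)
    return np.corrcoef(qcfc, dist)[0, 1]

# synthetic illustration: 6 ROIs (15 edges), 20 subjects
rng = np.random.default_rng(1)
roi_xyz = rng.uniform(-60, 60, (6, 3))
fc = rng.standard_normal((20, 15))
fd = rng.uniform(0.05, 0.5, 20)
r = qcfc_distance_dependence(fc, fd, roi_xyz)
```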
Hi all,
I am processing a set of resting state images using AFNI version 21.2.04 ('Nerva'). I am currently running a few versions of proc.py manipulating the nuisance regressors (e.g., motion censoring, outlier censoring, bandpass filtering) to determine a final model to move forward with. I noticed that the polort value is consistently set to 4 across models, regardless of whether b
by jkblujus - AFNI Message Board
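For reference, afni_proc.py picks the default polort from the run duration alone, not from the censoring or bandpass settings, which is why the value stays fixed across nuisance models. A sketch of the documented rule (degree = 1 + floor(run length / 150 s)):

```python
import math

def default_polort(run_duration_s):
    """afni_proc.py's automatic polort: 1 + the run duration (in
    seconds) divided by 150, truncated.  Censoring and bandpass
    choices do not enter this formula, so polort is constant across
    nuisance models for the same runs."""
    return 1 + math.floor(run_duration_s / 150.0)

print(default_polort(480.0))  # a 480 s run -> 4
```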
Hi Daniel,
Thanks for your reply. Ok, sounds good. I've attached an image with the raw epi overlay on the raw anat and you can see this bright signal is indeed CSF. For my analysis, I plan to extract the mean time series from a set of 90 ROIs registered in MNI space to form a functional connectivity matrix. Would you recommend that I mask out the CSF outside of the warped anatomical brain
by jkblujus - AFNI Message Board
Hi all,
I am processing resting state data using AFNI version 21.2.04 ('Nerva'). Prior to preprocessing, I aligned the centers of the anatomical and functional images to the MNI*2009_SSW template. I ran the centered anatomical images through SSwarper and recon-all and used the following proc.py code to process the functional data:
afni_proc.py
by jkblujus - AFNI Message Board
Hi AFNI team,
I am preprocessing resting state data using AFNI_21.2.04. I've copied my proc.py command below. In my HTML output there are no images generated in the motion block. While I can generate these images independently using 1dplot, it would be handy to have them in the QC output for all participants for the review process. I am attaching the output generated from running the motion
by jkblujus - AFNI Message Board
Hi Rick,
Below are the outputs of each command:
nifti_tool -disp_hdr -field pixdim -infiles sub01.nii.gz
N-1 header file 'sub-002_S_0295_task-rest_run-01_bold.nii.gz', num_fields = 1
name offset nvals values
------------------- ------ ----- ------
pixdim 76 8 -1.0 3.3125 3.3125 3.3125 6.021583 0.0 0.0 0.0
3dinfo -tr sub01.n
by jkblujus - AFNI Message Board
Hi AFNI team,
I am preprocessing resting state data from an older sample. I am using AFNI_21.2.04 version and my proc.py command can be found below. When examining the HTML output I saw that for the TSNR images in some participants the highest values are localized to the white matter. Is this odd? How would you expect the distribution of values to be according to the tissue types (e.g., highes
by jkblujus - AFNI Message Board
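For context on the TSNR question, temporal SNR is simply the voxelwise temporal mean divided by the temporal standard deviation; a one-line sketch:

```python
import numpy as np

def tsnr_map(data):
    """Temporal SNR: voxelwise mean over time divided by the temporal
    standard deviation.  data: array with time on the last axis."""
    return data.mean(axis=-1) / data.std(axis=-1)

# tiny illustration: one voxel's time series
ts = np.array([100.0, 102.0, 98.0, 100.0])
val = tsnr_map(ts)   # mean 100 over std sqrt(2)
```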
Hi Rick,
Ok, thank you this is helpful. Below you will find my proc.py command. I modified example 11 to include bandpass filtering (0.01-0.1 Hz) in addition to motion and outlier censoring. Is it correct that while running the proc.py script the TR will be read from the EPI input (NIFTI) header? By including bandpassing in the pipeline with the wrong TR, I am affecting the DOF that are retain
by jkblujus - AFNI Message Board
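As a rough back-of-the-envelope for the DOF side of the bandpass question: each retained frequency bin costs two regressors (sine and cosine), so the surviving DOF can be approximated as below. This is an approximation for illustration, not AFNI's exact bookkeeping:

```python
def dof_after_bandpass(n_trs, tr, f_lo, f_hi):
    """Rough DOF retained after bandpassing: frequency bins are spaced
    1/(n_trs*tr) apart and each bin outside [f_lo, f_hi] removes two
    regressors (sine + cosine), so approximately
    2 * (f_hi - f_lo) * n_trs * tr degrees of freedom survive."""
    return 2.0 * (f_hi - f_lo) * n_trs * tr

# e.g. 200 TRs at TR = 3 s with a 0.01-0.1 Hz band:
dof = dof_after_bandpass(200, 3.0, 0.01, 0.1)  # roughly 108 of 200
```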
Hi Daniel,
Thanks for the information. I will update the TRs with 3drefit.
I am following proc.py example 11 for preprocessing. This is a basic question but how is the TR information used in preprocessing? For example, if the TR is wrongly coded as 6s instead of 3s in the NIFTI header, how would this affect the data that's output from proc.py?
Thank you!
Jenna
by jkblujus - AFNI Message Board
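One concrete way to see the effect of a mis-set TR: the DFT frequency axis is built as f_k = k / (N * TR_header), so a bandpass specified in Hz maps onto true frequencies scaled by TR_header / TR_true. A small sketch:

```python
def true_band(labeled_lo, labeled_hi, header_tr, true_tr):
    """With a wrong header TR, a band specified in Hz is applied to
    bins at f_k = k / (n_trs * header_TR); the band of *true*
    frequencies retained is scaled by header_TR / true_TR."""
    scale = header_tr / true_tr
    return labeled_lo * scale, labeled_hi * scale

# header wrongly says TR = 6 s, data acquired at TR = 3 s:
band = true_band(0.01, 0.1, 6.0, 3.0)  # true 0.02-0.2 Hz is kept
```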
Hi AFNI team,
I am analyzing database resting state images with one functional run each. I noticed in my output that some participants have a large max censored displacement ranging from 1-3. How is this value calculated?
Thank you,
Jenna
by jkblujus - AFNI Message Board
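A simplified sketch of how a "max censored displacement" style value can be computed: the largest pairwise distance between motion-parameter rows among non-censored TRs. This treats rotations on a par with the mm translations and is an illustration, not AFNI's exact 1d_tool.py computation:

```python
import numpy as np

def max_displacement(motion, keep=None):
    """Largest pairwise Euclidean distance between motion-parameter
    rows, restricted to non-censored TRs when a boolean 'keep' mask
    is given.  motion: (n_trs, 6) volreg parameters; rotations are
    treated like the mm translations here (a simplification)."""
    m = motion if keep is None else motion[keep]
    dists = np.linalg.norm(m[:, None, :] - m[None, :, :], axis=-1)
    return dists.max()

motion = np.array([[0, 0, 0, 0, 0, 0],
                   [1, 0, 0, 0, 0, 0],
                   [3, 4, 0, 0, 0, 0]], dtype=float)
print(max_displacement(motion))  # 5.0 (between TRs 0 and 2)
print(max_displacement(motion, keep=np.array([True, True, False])))  # 1.0
```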
Hi AFNI team,
I am using resting state data from a database. I noticed that for some of the participants the TRs in the DICOM and NIfTI headers do not match. To pull this information I used the dicom_hdr command ('ACQ Repetition Time' field) and 3dinfo -tr. Do you know why this might occur? Should I change the information in the NIfTI header to match the DICOM information?
Thanks,
by jkblujus - AFNI Message Board
Hi AFNI team,
I am running a resting state analysis with a pipeline consisting of SSwarper, recon-all, and proc.py following example 11. Would you recommend that we use the intensity-uniformized output from SSwarper (e.g., anatU.sub) or the original anat as input to recon-all?
Thanks,
Jenna
by jkblujus - AFNI Message Board
Hi Rick,
No problem! Thank you for the explanation - this makes sense. Thank you for your help!
Jenna
by jkblujus - AFNI Message Board
Hi Paul,
Ok, that makes sense. The warped/skull-stripped images look fine for most participants; I found only one visual oddity. Around the midline, there is missing data (voxel values = 0) that is not present in the raw data or the unifized volumes. Could this be due to the misfit error? Thank you for your help.
Jenna
by jkblujus - AFNI Message Board
Hi Rick,
Just checking in on this again. I've attached an image of the volreg overlay on the anat final. If I'm interpreting correctly, I would think that the voxels that would be most affected by the extents mask would be the voxels near the outer boundary box of the EPI, rather than all voxels outside of the brain (yellow/blue voxels). Do you find it concerning that such a large nu
by jkblujus - AFNI Message Board
Hi Rick and Paul,
Thank you for your responses. All of the skull stripping and alignment look good, only small adjustments need to be made to <10% of the data. Does this suggest that the misfit errors are occurring in voxels outside of the brain? Would you suggest that I formally check this?
Thank you!
Jenna
by jkblujus - AFNI Message Board
Hi experts,
I'm running a resting state analysis closely following example 11 from the proc.py help page. My pipeline consists of running SSwarper, recon-all, and proc.py. The following is my SSwarper command:
@SSwarper -input ${base_dir}/rawdata/${sid}/anat/*T1w.nii.gz \
-base MNI152_2009_template_SSW.nii.gz \
-subid ${sid} \
-odir ${out_dir}/${sid}
After running SS
by jkblujus - AFNI Message Board
Hi Rick,
Thank you for the information. I've attached a picture here of the pb0*volreg volume overlaid on the anat_final. Just to make sure I understand, does this mean that during the 3dTproject step all voxels outside of the anat_final space (non-brain; yellow and blue voxels) are set to 0, which is why such a high number of voxels (51,787) have a constant time series?
Thank you!
by jkblujus - AFNI Message Board
Hi Experts!
I am preprocessing resting state data in an older sample. I modeled my processing pipeline after Example 11 of afni_proc.py (included below). I got a warning following the execution of 3dTproject stating that 51,787 vectors are constant in the dataset:
3dTproject -polort 3 -prefix rm.det_pcin_r01 -censor rm.censor.r01.1D -cenmode KILL -input pb03.sub-002.r01.v$
++ 3dTproject:
by jkblujus - AFNI Message Board
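The constant-vector count that 3dTproject warns about can be reproduced on an array in Python; voxels zeroed outside the extents/brain mask all have constant (identically zero) time series and so all qualify:

```python
import numpy as np

def n_constant_voxels(data):
    """Count voxels whose time series is constant (zero range), the
    condition behind 3dTproject's 'vectors are constant' warning.
    data: (x, y, z, t) array."""
    flat = data.reshape(-1, data.shape[-1])
    return int(np.sum(np.ptp(flat, axis=1) == 0))

vol = np.zeros((2, 2, 1, 5))    # 4 voxels, 5 TRs, all constant...
vol[0, 0, 0] = np.arange(5)     # ...except this one
print(n_constant_voxels(vol))   # 3
```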
Hi Rick,
Thanks for your reply. I am working with a sample of healthy older adults, MCI, and AD patients. All participants in my sample have less than 21% of total TRs censored and < 1 mm max censored displacement. However, it seems that about 44% of my sample have average censored motion values greater than 0.1. So it seems like there is consistent movement occurring throughout the scan in
by jkblujus - AFNI Message Board
Hi Paul,
Thanks for your reply. Attached are the corrbrain and Vis seed-based correlation maps. The seed-based correlation maps are pretty noisy, though the time series of regions that you would expect to be positively correlated with the seeds are popping up. You can see in the Vis map that the time series of the ventricles are negatively correlated with the seed.
I will rerun the particip
by jkblujus - AFNI Message Board
Hi experts,
I am running a resting state analysis in an older clinical sample. For preprocessing, I applied SSwarper and the recon-all pipeline to the anatomical images and stuck closely to example 11 for proc.py. I used slightly higher censor (0.3 mm) and outlier fraction (0.10) limits because I am working with a special population.
Rick mentioned in the "Start to Finish Hands-On"
by jkblujus - AFNI Message Board
Hi experts,
I am running a resting state analysis using data from a database. I finished preprocessing and I am checking each participant's output. I noticed that for most of my participants the max motion displacement and max censored displacement values are very similar if not the same. Considering that max censored displacement is examining max displacement among non-censored TRs, woul
by jkblujus - AFNI Message Board
Hi Paul,
Thanks for your reply. I previously had been using the resting state file as the master when I was resampling. But I was able to fix this issue by eroding the mask first and then resampling to lpi:
3dmask_tool -input fs_ap_latvent.nii -dilate_input -1 -prefix fs_ap_latvent_erode.nii.gz
3dresample -orient lpi -prefix fs_ap_latvent_erode_lpi.nii.gz -input fs_ap_latvent_erode.nii.gz
by jkblujus - AFNI Message Board
Hello experts,
I am using AFNI to preprocess resting state data from the ADNI database. Prior to running the pipeline, I checked the orientation of each of my files that would be input to the proc.py script for each subject using the "3dinfo -orient" command. I found that the T1 data was in LPI, resting state in RPI, and WM/vent masks generated in FreeSurfer were in RSP. Prior to run
by jkblujus - AFNI Message Board
Dear experts,
I am doing a resting state project using older adults, MCI, and AD patients. For the project, I am performing a series of analyses, of which the input to each analysis is a symmetric matrix of functional connectivity. To form this matrix, I will extract the mean time series from a number of ROIs and then Pearson correlate.
Given the population, I am having some trouble with m
by jkblujus - AFNI Message Board
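The matrix construction described above (mean ROI time series, then pairwise Pearson correlations) is a single call with NumPy; the ROI count and data here are synthetic:

```python
import numpy as np

def fc_matrix(roi_ts):
    """Symmetric Pearson functional-connectivity matrix from mean ROI
    time series.  roi_ts: (n_rois, n_trs)."""
    return np.corrcoef(roi_ts)

# synthetic illustration: 5 ROIs, 100 TRs
rng = np.random.default_rng(2)
fc = fc_matrix(rng.standard_normal((5, 100)))
```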