Show all posts by user
Dear AFNI users-
We are very pleased to announce that the new AFNI Message Board framework is up! Please join us at:
https://discuss.afni.nimh.nih.gov
Existing user accounts have been migrated, so returning users can log in by requesting a password reset. New users can create accounts as well, through a standard account creation process. Please note that these setup emails might initially go to spam folders (esp. for NIH users!), so please check those locations in the beginning.
The existing Message Board discussion threads have been migrated to the new framework. The old Message Board will remain visible, but read-only, for a little while.
Sincerely,
AFNI HQ
Page 1 of 3 (Results 1 - 30 of 72)
Thank you Paul for this prompt response! It's very helpful.
If not using 3dNetCorr, could you point me to the documentation of how to construct a .niml.dset? Suppose someone has just given you two regions' MNI coordinates and their connection value, how would you visualize them in SUMA with two balls and a stick?
by Zhihao_Li - AFNI Message Board
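The "two balls" part of the question above can at least be sketched with standard AFNI tools. Below is a hypothetical example (the coordinates, radius, and every file name are made up for illustration) that builds two spherical ROIs from MNI coordinates with 3dUndump; the "stick" (an edge with its connection value) is normally carried by a 3dNetCorr-style graph dataset rather than a plain volume.

```shell
# two rows: x y z value (one label per ball)
cat > two_rois.txt << EOF
-42 18 28 1
 46 20 26 2
EOF

# -srad turns each coordinate into a sphere of the given radius (mm);
# -orient declares how the x y z columns should be read; the master
# grid MNI_avg152T1+tlrc is an assumption about what is available.
3dUndump -prefix two_balls.nii.gz   \
         -master MNI_avg152T1+tlrc  \
         -orient LPI                \
         -srad 6                    \
         -xyz two_rois.txt
```

The resulting volume can then be loaded under SUMA (e.g. via `suma ... -vol two_balls.nii.gz`) as a quick visual check.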
Dear AFNI experts,
The output from 3dNetCorr includes a "graph dataset" of .niml.dset, which can be used for SUMA visualization of the connectivity matrix. However, my connectome includes a lot of ROIs that I want to filter out in this SUMA visualization. I am wondering if there is a convenient AFNI command to filter the .niml.dset, only keeping connectivity measures across a specified
by Zhihao_Li - AFNI Message Board
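One workaround for the filtering question above, sketched under the assumption that the original ROI map is still at hand (the ROI labels 3, 7, 12 and all file names are placeholders): reduce the ROI map itself with 3dcalc's amongst(), then rerun 3dNetCorr so the new .niml.dset graph contains only the nodes of interest.

```shell
# keep only ROI labels 3, 7, and 12; all other voxels become zero
3dcalc -a all_rois+tlrc              \
       -expr 'a*amongst(a,3,7,12)'   \
       -prefix rois_subset.nii.gz

# rebuild the connectivity matrix (and its graph dataset) on the subset
3dNetCorr -inset  errts.nii.gz       \
          -in_rois rois_subset.nii.gz \
          -prefix subset_netcorr
```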
Hi AFNI gurus:
I am wondering if there is a convenient AFNI function/command for determining whether a brain is not entirely contained within the FOV box. The attached picture shows such a case, where some of the occipital lobe is missing.
Thank you so much for your help!
by Zhihao_Li - AFNI Message Board
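There may not be a single dedicated command, but the check described above can be roughly automated from standard AFNI pieces. This is my own sketch, with placeholder file names: if an automask has nonzero voxels on the outermost faces of the grid, the brain likely touches the FOV boundary.

```shell
# build a rough brain mask
3dAutomask -prefix brain_mask.nii.gz anat.nii.gz

# grid dimensions along i, j, k
ni=$(3dinfo -ni brain_mask.nii.gz); im=$((ni-1))
nj=$(3dinfo -nj brain_mask.nii.gz); jm=$((nj-1))
nk=$(3dinfo -nk brain_mask.nii.gz); km=$((nk-1))

# keep only mask voxels on the first/last slice along any axis
3dcalc -a brain_mask.nii.gz \
       -expr "a*min(1,iszero(i)+iszero(i-$im)+iszero(j)+iszero(j-$jm)+iszero(k)+iszero(k-$km))" \
       -prefix edge_mask.nii.gz

# a nonzero count suggests the brain is clipped by the FOV
3dBrickStat -count -non-zero edge_mask.nii.gz
```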
Hi Paul,
I don't use the option "-volreg_align_e2a" because there is no anatomical data involved here. My afni_proc.py options are as simple as this:
-blocks tshift volreg mask combine \
-dsets_me_run ./data/rest_echo-?.nii.gz \
-echo_times 15 37.14 59.28 \
-reg_echo 2 \
-volreg_align_to MIN_OUTLIER \
-combine_method tedana \
by Zhihao_Li - AFNI Message Board
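For reference, the options listed above wrapped into a complete (hypothetical) call; the -subj_id value and the script name are placeholders I added, not part of the original post:

```shell
afni_proc.py -subj_id sub01                      \
    -script proc.sub01                           \
    -blocks tshift volreg mask combine           \
    -dsets_me_run ./data/rest_echo-?.nii.gz      \
    -echo_times 15 37.14 59.28                   \
    -reg_echo 2                                  \
    -volreg_align_to MIN_OUTLIER                 \
    -combine_method tedana

# then execute the generated processing script
tcsh -xef proc.sub01 2>&1 | tee output.proc.sub01
```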
Hi AFNI gurus,
I was trying multi-echo fMRI analysis in native space (so no "tlrc") without anatomy data (so no "align"). In this case, the "volreg" step is as simple as (i) estimating head motion with 3dvolreg on the "$fave_echo", and (ii) applying the transformation matrices to the other echoes with 3dAllineate.
In the help doc of 3dvolreg, it says that th
by Zhihao_Li - AFNI Message Board
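Steps (i) and (ii) described above can be sketched like this; a minimal, hypothetical example in which echo 2 is assumed to be the registration echo and all file names are placeholders:

```shell
# (i) estimate motion on the chosen echo, saving per-TR affine matrices
3dvolreg -base 0                          \
         -1Dmatrix_save vr_mats.aff12.1D  \
         -prefix vr_echo2.nii.gz          \
         rest_echo-2.nii.gz

# (ii) apply the same matrices to another echo
3dAllineate -1Dmatrix_apply vr_mats.aff12.1D \
            -master vr_echo2.nii.gz          \
            -prefix vr_echo1.nii.gz          \
            -source rest_echo-1.nii.gz
```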
Hi Gang and Paul,
I want to compare connectivity matrices (.netcc files with 106 ROIs in each) between a single patient and a group of healthy controls, much like the more typical situation of a voxel-wise 1-sample t-test with "3dttest -base1_dset". I could have done this with 3dttest by "dumping" the .netcc files into some 1D files; but since I have never used the "FAT
by Zhihao_Li - AFNI Message Board
Thank you Daniel!
I actually don't think 3dUnifize is necessary, as "feature normalization" and "batch normalization" are typically applied in most machine learning algorithms.
I guess that partial coverage may not be a big issue either, because sparse features are often seen in the literature. I may be able to give it a shot if there is a convenient way to extract loca
by Zhihao_Li - AFNI Message Board
Thank you Daniel!
I understand the potential difficulty of comparing cost values across subjects. However, since the "goodness" of an alignment is ultimately determined by visual inspection with a user label of "pass" vs. "fail", do you think that align_epi_anat.py could be improved by a machine-learning-based classifier? For example, we can have 3dAllineate to ou
by Zhihao_Li - AFNI Message Board
"The goodness of the alignment should always be assessed visually" !
Keeping this sentence in mind, I am wondering if there is an AFNI method conveniently available to assist with this assessment, particularly with a relatively "big" sample size. For example, I received data from several hundred subjects with their "anat2epi" alignments already completed
by Zhihao_Li - AFNI Message Board
Thank you very much Paul, your code does work for me!
I have 3 more questions:
(1) What's the orientation code (LPI, RAI?) in your "sphere_coords.1D" file? I am trying to display my ROI balls against the background of the MNI_N27 surface, so I coded my coordinates in LPI orientation, but they are far off the intended locations in the display.
(2) How does this "Nidos" way di
by Zhihao_Li - AFNI Message Board
Hi Paul,
That page and that particular example were actually what I have been following. I was able to replicate the displays shown on that page, but that page always uses a volume background, which I don't want. I want a surface background, as shown in the picture of my previous post.
Thank you again for your help!
by Zhihao_Li - AFNI Message Board
Hi AFNI team,
I am trying to show some spherical ROIs on a surface background by a simple command:
suma -spec MNI_N27 -vol ROI_FigShow.nii.gz
While I can see my ROI balls, as shown in the first attached jpg, there are dark borders surrounding each of the balls and the colors within each ball are inhomogeneous. Therefore, I tried again to convert my volume balls to surfaces for display:
by Zhihao_Li - AFNI Message Board
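For the volume-to-surface conversion mentioned at the end of the post above, one possible route is IsoSurface. This is a sketch under the assumption that its -isorois mode (one closed surface per ROI value) fits this use; the output names are placeholders:

```shell
# build one GIFTI surface per ROI value in the volume
IsoSurface -input ROI_FigShow.nii.gz -isorois -o_gii roi_surf

# load the resulting surfaces together in SUMA
suma -onestate -i roi_surf*.gii
```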
Hi Paul and Rick,
You guys are awesome!
It turns out that there does exist a 10-year-old "3dvolreg" (2009) sitting in FreeSurfer's path on my node24, together with the "3dvolreg_afni". Thank you so much for this valuable insight!
by Zhihao_Li - AFNI Message Board
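For anyone hitting the same problem, a quick generic check for shadowed binaries (plain shell; nothing AFNI-specific assumed beyond the program name):

```shell
which -a 3dvolreg   # list every 3dvolreg on $PATH, in search order
3dvolreg -ver       # report the version of the copy that actually runs
```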
Thank you Rick!
So do you know if there was a major change in 3dvolreg since May 2017, such that amplitudes of head motion are estimated higher than before that time?
by Zhihao_Li - AFNI Message Board
Thank you Paul!
Do you know how old a "3dvolreg" needs to be for its version footprint to be missing from the "3dinfo" output?
by Zhihao_Li - AFNI Message Board
Hi AFNI team,
I am wondering about the possible reason for a missing AFNI version field in the output of 3dinfo. For example, I have output from 3dinfo for one subject like this:
----- HISTORY -----
{AFNI_18.0.09:linux_xorg7_64} 3dDespike -NEW -nomask -prefix r2_despike.nii.gz r1_original.nii.gz
{AFNI_18.0.09:linux_xorg7_64} 3dresample -orient LPI -prefix r3_reorient.nii.gz -inset r2_desp
by Zhihao_Li - AFNI Message Board
haha, this is smart!
Given the uneven sex distribution in my sample, I guess that I should put "m% * male + f% * female"
by Zhihao_Li - AFNI Message Board
3dMVM -prefix blah \
-bsVars 'AGE+GENDER+MOTION+SITE' \
-qVars 'AGE,MOTION' \
-dataTable \
by Zhihao_Li - AFNI Message Board
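Filling in the skeleton above with a hypothetical GLT along the lines of the neighboring post's "m% * male + f% * female" idea; the 40/60 split, the factor level names (male/female), and the table file are all placeholders:

```shell
3dMVM -prefix blah                                     \
      -bsVars 'AGE+GENDER+MOTION+SITE'                 \
      -qVars 'AGE,MOTION'                              \
      -gltCode meanEff 'GENDER : 0.4*male +0.6*female' \
      -dataTable @myDataTable.txt
```

Weighting the GENDER levels by their sample proportions makes the GLT estimate the overall mean effect rather than the simple average of the two levels.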
Hi Gang,
After running 3dMVM without any -glt, I realized that the output only provides F statistics for each of the main effects. If I am interested in the intercept estimate (not its F statistic), is there a convenient way of getting it?
Thank you so much!
by Zhihao_Li - AFNI Message Board
Thank you so much Paul!
Could you please reply on this thread once you have added the other options? The most useful option for me would be allowing users to provide a 3D pattern directly, because this pattern can be derived from a different group of subjects. Alternatively, besides allowing specification of a single seeding voxel, allowing specification of a seeding mask would also be very useful.
by Zhihao_Li - AFNI Message Board
Thank you Paul for this super quick response! Besides the input of user-specified coordinates, could you please also add one more option allowing users to directly provide a 3D data set as the template to correlate with? Also, besides Pearson correlation, could you please add outputs of the Euclidean distance between the template and the CP of each voxel?
Thanks a lot!
by Zhihao_Li - AFNI Message Board
Hi AFNI gurus (I guess Paul in this particular case?):
Since "3dSpaceTimeCorr" loops through ijk and calculates spatial correlations between connectivity patterns at the same ijk of two data sets, I am wondering if there is also a convenient way of fixing one connectivity pattern and simply looping through ijk in one data set.
For example, by seeding in a fixed location (say, coord
by Zhihao_Li - AFNI Message Board
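The fixed connectivity pattern described above can be assembled from standard AFNI pieces; this sketch (the coordinate and all file names are placeholders) produces the seed's connectivity pattern, which is the map one would then want to correlate spatially against every voxel's pattern:

```shell
# make a small spherical seed mask at an assumed coordinate
echo "-42 18 28" > seed_coord.txt
3dUndump -prefix seed.nii.gz -master errts.nii.gz \
         -srad 4 -xyz seed_coord.txt

# average the time series inside the seed, then correlate with every voxel
3dmaskave -mask seed.nii.gz -quiet errts.nii.gz > seed_ts.1D
3dTcorr1D -prefix seed_CP.nii.gz errts.nii.gz seed_ts.1D
```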
Hi AFNI Gurus,
Similar to the way "afni_proc.py" saves out a specific script (e.g. "proc.FT") for step-by-step data analysis, I am wondering if @RetinoProc can save out a specific script that a user can view and modify.
I want to use @RetinoProc, but need to know what exactly it does in each one of its processing steps.
Thank you for your help!
by Zhihao_Li - AFNI Message Board
Hi Gang,
I was trying your new program "MBA" but encountered the error message shown below. Do you know what is going wrong here?
--------------------
MBA -prefix MBA_Results_1000iter -chains 4 -iterations 1000 -model '1+crp' -EOI 'Intercept,crp' -qVars 'crp' -dataTable ./Path-WBPar100-Data_New.txt
Loading required package: Rcpp
Loading required
by Zhihao_Li - AFNI Message Board
Hi Gang,
I am interested in a voxel-wise searching for regions with an imaging measure (call it variable M, could be activation, connectivity, or gray matter volume...) mediating the association between behavioral variables of B1 and B2. Specifically, I want to do a "mediation analysis", examining if the following 3 things are true:
(1) in model B1=a+b*B2+error1, b is significant
(2)
by Zhihao_Li - AFNI Message Board
Hi AFNI Gurus,
The option of "-stim_times_AM2" in 3dDeconvolve allows inputs of multiple amplitudes in format like this:
"53.7*2,-6,1"
However, I am wondering if missing values are allowed for some of the amplitudes, so that some inputs can look like this:
"53.7*2,-6,@" with the "@" indicating a missing value
I guess that I can manually create multiple regre
by Zhihao_Li - AFNI Message Board
Hi AFNI experts,
I need to get a voxel-wise assessment of temporal lag relative to a reference time course. People have done this by temporally shifting the reference signal and seeing which of the shifted versions correlated best with the voxel signal (https://www.ncbi.nlm.nih.gov/pubmed/23378326). I came across the AFNI tool "3ddelay", which I intuitively thought to be appropriate
by Zhihao_Li - AFNI Message Board
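The shift-and-correlate approach described above can be brute-forced with standard tools. This is my own sketch (the shift range, file names, and TR units are all assumptions), trimming the EPI rather than padding the reference so the series lengths always match:

```shell
nt=$(3dinfo -nv epi.nii.gz)
for s in 0 1 2 3 4 5 ; do
    last=$(( nt - 1 - s ))
    # correlate each voxel with ref.1D, the EPI delayed by s TRs
    3dTcorr1D -prefix corr_shift${s}.nii.gz \
              epi.nii.gz"[${s}..\$]" ref.1D"{0..${last}}"
done

# stack the correlation maps and take, per voxel, the winning shift (in TRs)
3dTcat  -prefix corr_all.nii.gz corr_shift?.nii.gz
3dTstat -argmax -prefix lag_TR.nii.gz corr_all.nii.gz
```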
Happy new year to Bob and AFNI Team!
I was trying to play with the new "ETAC" option in 3dttest++ and noticed an issue that I don't understand.
I simply added "-ETAC" to a regular 3dttest++ command line, without any other options such as -ETAC_blur or -ETAC_opt, requesting the ETAC procedure to be computed by default. In the outputs, I can see "significant" (p<0
by Zhihao_Li - AFNI Message Board