Show all posts by user
Dear AFNI users-
We are very pleased to announce that the new AFNI Message Board framework is up! Please join us at:
https://discuss.afni.nimh.nih.gov
Existing user accounts have been migrated, so returning users can log in by requesting a password reset. New users can create accounts through the standard account-creation process. Please note that these setup emails may initially land in spam folders (especially for NIH users!), so please check there at first.
The current Message Board discussion threads have been migrated to the new framework. The current Message Board will remain visible, but read-only, for a little while.
Sincerely,
AFNI HQ
Page 1 of 2 | Results 1 - 30 of 59
Hi,
Mostly out of curiosity - near the autoRange option there is a checkbox with a percent (%) sign next to it. When I check the box, I see more activated voxels surrounding the clusters. What does that do? What is that for? I couldn't find an explanation in any online documentation.
Thanks!
by Galit - AFNI Message Board
Hello,
I want to use 3dNetCorr with the -push_thru_many_zeros option (due to several voxels outside of the brain). I just want to verify what this option does with the null time series.
Does it include voxels of zero activation in the average, or does it exclude them?
If the former, then is there a parameter to set that will calculate the average without null voxels?
Thanks!
G
by Galit - AFNI Message Board
Thank you for the reference, Gang!
As for the current threshold, 3dttest++ might be a bit too severe, but I don't really know. For the paper, as I can't think of a better solution, I think I will report both, with reservations... Thanks again!
by Galit - AFNI Message Board
Hi Gang,
First of all, I'm curious to know - what are the non-conventional methods by which you would correct for multiple comparisons in a whole-brain analysis?
Regarding my 3dClustSim - it took me some time to be able to put into words why I think this is the wrong analysis to use when comparing a voxel to its symmetric voxel in the other hemisphere. This also might explain the disc
by Galit - AFNI Message Board
Hi Gang,
Thank you for the suggestion! I tried running 3dttest++ on two sets of differences, unfortunately nothing came out.
The reason I am so surprised is that, for the same p and alpha thresholds, I need 225 voxels according to 3dttest++ but only 20 according to 3dClustSim. Can the difference really be that huge?
Galit
by Galit - AFNI Message Board
Hello AFNI statisticians!
I just wanted to make sure someone sees this, because I am really puzzled by this issue and would appreciate any advice on how to determine the threshold when the model compares two different voxels (original vs. flipped brain). Intuitively, 3dClustSim is not the right thing to use, as the comparison is between different voxels with different auto-correlational structur
by Galit - AFNI Message Board
Hello,
I am performing a t-test to see whether my effect is lateralized, as hypothesized. I do this by comparing the original contrast (Original) with the contrast in the symmetric voxel in the other hemisphere (Flipped). I used 3dttest++ with the ClustSim option.
I had already run a 3dClustSim simulation on averaged ACF parameters, for the group-level effect. The threshold in this table is VERY different
by Galit - AFNI Message Board
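A minimal sketch of the paired comparison described in the post above, assuming hypothetical dataset names (Original_subj*, Flipped_subj*); -Clustsim asks 3dttest++ to run its own cluster-threshold simulation on the residuals, which is why its table can differ from a stand-alone 3dClustSim run:

```shell
# Hypothetical file names; only the option pattern is the point here.
cmd="3dttest++ -prefix Orig_vs_Flipped \
  -paired \
  -setA Original_subj*.nii.gz \
  -setB Flipped_subj*.nii.gz \
  -Clustsim"
echo "$cmd"
# Run only where AFNI is actually installed:
if command -v 3dttest++ >/dev/null 2>&1; then
  eval "$cmd"
fi
```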
Hello AFNI team,
I tried to run 3dMVM (which I later changed to 3dLME, because I realized my covariate was within-subject), and I get this error that I don't understand (for both 3dLME and 3dMVM):
Error in seq.default(2, length(sepTerms), 2) :
wrong sign in 'by' argument
Calls: process.LME.opts -> gl_Constr -> glfConstr -> seq -> seq.default
Execution halted
by Galit - AFNI Message Board
Hello AFNI team,
I ran 3dREMLfit on my single subjects, to then use the coefficients and t-stats with 3dMEMA (paired contrast). The output of 3dMEMA was then used to define a mask within which I wanted to perform a statistical test that looks on how the contrast is modulated by different covariates.
To do that, within the mask, I ran 3dMVM on the coefficients obtained with 3dREMLfit, and define
by Galit - AFNI Message Board
Interesting. I checked, and all standard deviations are very similar but not identical. I checked it for two regressors, and for their difference (taken from gltsym). I don't understand mathematically why the standard deviations of the coefficients should be different, according to my previous explanation I would expect them to be exactly the same...
For the maps, I double checked, and I
by Galit - AFNI Message Board
Hi Rick,
I actually simplified the names of my datasets. Anyway, when I write down the 3dinfo command on the actual name of the file, I get this:
q_pos#0_Tstat|q_neg#0_Tstat
Which is indeed what I had in mind.
I also computed both differences with 3dcalc, between coefficients and between t-stats, and I can see in the AFNI GUI exactly which contrast I am looking at.
However, after thi
by Galit - AFNI Message Board
Hello,
I want to have a map of a contrast between two conditions at the single-subject level. I tried two ways to do that, which resulted in exactly the same thing, which perhaps is good news but this confuses me. I did expect different results.
Option 1: gltsym in 3ddeconvolve
-gltsym 'SYM: +C1 -C2' -glt_label 1 'Contrast_map1' \
Option 2: 3dcalc
(Assuming for the
by Galit - AFNI Message Board
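A sketch of the two approaches compared in the post above. The gltsym line is the one quoted in the post; the 3dcalc version assumes hypothetical sub-brick labels (C1#0_Coef, C2#0_Coef) in a hypothetical stats+orig dataset, so the selectors would need to match the actual output of 3dDeconvolve:

```shell
# Option 1 (inside 3dDeconvolve):
#   -gltsym 'SYM: +C1 -C2' -glt_label 1 'Contrast_map1'
# Option 2 (after the fact), with hypothetical sub-brick labels:
cmd="3dcalc -a 'stats+orig[C1#0_Coef]' -b 'stats+orig[C2#0_Coef]' \
  -expr 'a-b' -prefix Contrast_map2"
echo "$cmd"
# Run only where AFNI and the dataset actually exist:
if command -v 3dcalc >/dev/null 2>&1 && [ -e stats+orig.HEAD ]; then
  eval "$cmd"
fi
```

Both compute the same linear combination of the coefficients, which is consistent with the identical results reported in the post.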
Hi Rick,
I couldn't find a way to verify that slices were indeed time-shifted. I tried looking at the dataset with 3dinfo with the verbose option, but couldn't find a vector that specifies the shifts.
I assume that it does work, but just for the purpose of self verification I think it is better to apply the vector with 3dTcat, so that I can compare input and output.
Thanks,
Galit
by Galit - AFNI Message Board
UPDATE:
I now tried to specify the tpattern first via 3dTcat:
3dTcat -tpattern @/media/galit/'Seagate Expansion Drive'/fMRI_GALIT/slice_times.1D -prefix S03_Run1.data.t S03_Run1.data+orig
Now when I ask for slice timing with 3dinfo, I get the timing vector and not just a list of zeros.
I apply 3dTshift on the output of 3dTcat:
3dTshift -tpattern @/media/galit/'Seagat
by Galit - AFNI Message Board
Thank you, I extracted the slice times from the dicom header with matlab, and saved it as a text file (one row, numbers in seconds, no spaces or commas). I tried to run it as you suggested, with explicitly calling this text file:
3dTshift -tpattern @slice_times.txt -prefix S03_Run1.data.tshift S03_Run1.data+orig
But when I test the output using
3dinfo -slice_timing S03_Run1.data.tshift+orig
by Galit - AFNI Message Board
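A small runnable sketch of the -tpattern @file workflow discussed above, using hypothetical slice times for a 4-slice toy example. The timing file is one row of blank-separated times in seconds; the AFNI calls themselves are guarded since they need AFNI and a real dataset:

```shell
# Hypothetical slice times (seconds), one row, blank-separated,
# as 3dTshift's "-tpattern @file" expects:
printf '0.0 1.0 0.5 1.5\n' > slice_times.1D
cat slice_times.1D

# With AFNI and a real dataset present, the shift and check would be:
if command -v 3dTshift >/dev/null 2>&1 && [ -e data+orig.HEAD ]; then
  3dTshift -tpattern @slice_times.1D -prefix data.tshift data+orig
  # Should print the timing vector rather than a list of zeros:
  3dinfo -slice_timing data.tshift+orig
fi
```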
Hello AFNI experts,
3dTshift seems to not change anything in my data.
DETAILS:
I have data that was scanned using a multiband protocol (acceleration factor of 2), in a Siemens Magnetom Prisma 3T. My data consists of 64 slices, acquisition was interleaved.
I want to use 3dTshift on this data. As I understand, AFNI can automatically read the order of acquisition from the header, which su
by Galit - AFNI Message Board
I want to identify brain areas that are sensitive to the scores my subjects have on the Root variable (which is a continuous, between-subject variable). The logic is that these scores represent some cognitive capacity that might also be reflected on their brain activation during the task that they performed inside the scanner. The two other variables (Sem and Phon) are there as controls. So assum
by Galit - AFNI Message Board
Hi,
I fitted a model on the group level using 3dMVM/3dMEMA (I tried both ways, got almost the same results).
In the model, I have one within-subject 2-level categorical predictor which relates to the task that was performed in the scanner (words/scrambled), and three between-subject continuous variables that were measured outside the scanner (Root, Sem and Phon). I want to see how these varia
by Galit - AFNI Message Board
I read this thread, and this is very similar for something I am looking for.
Assuming I want the contrast between two predictors on the group level (not between two levels of a predictor, which I can get using gltCode in e.g. 3dMVM). How do I do that?
I should also mention that I want a statistical map of this contrast, not just a mask. So 3dcalc is not what I'm looking for. In addition, t
by Galit - AFNI Message Board
Gang,
Thank you. I am not sure about the whole-brain analysis results, that's why I want to make sure that my results are "real".
This is how I understand it (and please correct me where I'm wrong): On the one hand, regression estimates should represent only unique contributions of the predictors. Hence, shared variability is not considered anyway, so we need not worry a
by Galit - AFNI Message Board
Thank you, Gang. I have mild collinearity between my predictors (around r=0.4), that's why I was thinking that model comparison would be a better choice than relying on the t-statistics.
I suppose I could identify ROIs based on clusters of t-statistics, and then validate the unique contribution of each regressor in that ROI using Bayesian modeling (which I will have to read about, from you
by Galit - AFNI Message Board
Hello AFNI experts,
How do I compare two nested models in AFNI?
More specifically: I have fitted a model using 3dMVM, which uses as predictors three between-subject variables (measured outside the scanner). I want to test for each predictor whether it has a unique contribution to the explained variability in the estimated BOLD contrast. Thus, I want to get for each voxel a p-value that reflec
by Galit - AFNI Message Board
Thank you, Daniel.
So just out of curiosity - what are the repercussions of changing the space (from ORIG to MNI, for example) without changing the view? If the dataset has been transformed to a template and I read it into the GUI, what is the effect of the view setting? Or is it just a technical matter of how files are read?
by Galit - AFNI Message Board
Hello,
I used @auto_tlrc to convert my datasets to MNI space, and then I got this warning message:
*+ WARNING: Changing the space of an ORIG view dataset may cause confusion!
*+ WARNING: NIFTI copies will be interpreted as TLRC view (not TLRC space).
*+ WARNING: Consider changing the view of the dataset to TLRC view also
I'm a bit embarrassed to say that I don't understand th
by Galit - AFNI Message Board
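A sketch of what the warning in the post above suggests: after @auto_tlrc, the dataset's view can be updated to match its new space with 3drefit. The dataset name here is hypothetical, and the call is guarded so it only runs where AFNI and the file exist:

```shell
# Hypothetical dataset name; aligns the view with the MNI space
# so NIFTI copies are not misinterpreted:
cmd="3drefit -view tlrc -space MNI anat_in_mni+orig"
echo "$cmd"
if command -v 3drefit >/dev/null 2>&1 && [ -e anat_in_mni+orig.HEAD ]; then
  eval "$cmd"
fi
```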
What an amazing command, thank you!!
It seems like it's finally running now.
by Galit - AFNI Message Board
It is as if the "\" in the end of each line is not recognized, so I get a lot of "Command not found" errors:
3dMVM -prefix MVM_GROUP_LEVEL_fluency2 -jobs 2 -wsVars Morph -bsVars Root_fluency+Phon_fluency -qVars Root_fluency,Phon_fluency -num_glt 2 -gltLabel 1 RootFluency_effect -gltCode 1 Morph : 1*words -1*scrambled Root_fluency : -gltLabel 2 PhonFluency_effect -gltCode 2
by Galit - AFNI Message Board
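The "Command not found" symptom in the post above is what happens when the line-continuation backslash is not the very last character on the line. A minimal demonstration of the behavior, using plain echo:

```shell
# The "\" joins lines only when it is the last character on the line;
# a trailing space after it breaks the continuation, and the shell
# then tries to run the next line as its own command.
msg=$(echo one \
  two \
  three)
echo "$msg"
```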
Gang, thank you for the reply!
Is MVM still the right thing to do, even if I am interested in the covariate (i.e. Root_Fluency and Phon_Fluency), while controlling for the categorical variable (Morph)?
Also, when trying to run 3dMVM I still get an error. This is my script:
3dMVM -prefix MVM_GROUP_LEVEL_fluency2 -jobs 2 \
-wsVars Morph \
-bsVars "Root_fluency+Phon_fluency" \
-qVa
by Galit - AFNI Message Board
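A sketch of the 3dMVM call from the post with the -gltCode arguments quoted, which they must be since they contain spaces. The variable names follow the post; the second gltCode body is a guess that mirrors the first (the post truncates before it), and the -dataTable is omitted entirely:

```shell
# Quoting each gltCode keeps its colons and weights as one argument:
cmd="3dMVM -prefix MVM_GROUP_LEVEL_fluency2 -jobs 2 \
  -wsVars Morph \
  -bsVars 'Root_fluency+Phon_fluency' \
  -qVars 'Root_fluency,Phon_fluency' \
  -num_glt 2 \
  -gltLabel 1 RootFluency_effect \
  -gltCode 1 'Morph : 1*words -1*scrambled Root_fluency :' \
  -gltLabel 2 PhonFluency_effect \
  -gltCode 2 'Morph : 1*words -1*scrambled Phon_fluency :'"
echo "$cmd"
if command -v 3dMVM >/dev/null 2>&1; then
  : # eval "$cmd" -dataTable ...   (the subject table is elided here)
fi
```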