Dear AFNI users-
We are very pleased to announce that the new AFNI Message Board framework is up! Please join us at:
https://discuss.afni.nimh.nih.gov
Existing user accounts have been migrated, so returning users can log in by requesting a password reset. New users can create accounts through the standard account-creation process. Please note that the setup emails might initially go to spam folders (especially for NIH users!), so please check there at first.
The current Message Board discussion threads have been migrated to the new framework. The current Message Board will remain visible, but read-only, for a little while.
Sincerely,
AFNI HQ
History of AFNI updates
Page 5 of 5 (results 121-144 of 144)
For negative-only coloring, the best way is to use the popup menu and choose "Neg Only". Please note that this menu applies to the threshold value (the "Thr" sub-brick). That is, if you have "Neg Only" chosen, then only voxels whose threshold values are negative and below minus the slider value will get color.
The color they get is from the "Olay" sub-brick. In
by RWCox - AFNI Message Board
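The "Neg Only" rule described in the post above can be sketched in plain Python. This is a toy illustration, not AFNI code; the voxel values and slider setting are made up:

```python
def neg_only_colored(thr_vals, olay_vals, slider):
    """Return the overlay ('Olay') value for voxels that survive a
    'Neg Only' threshold, or None for voxels that stay uncolored.

    A voxel is colored only if its threshold ('Thr') value is negative
    and below minus the slider value."""
    out = []
    for thr, olay in zip(thr_vals, olay_vals):
        if thr < -slider:          # negative AND beyond -slider
            out.append(olay)       # color comes from the Olay sub-brick
        else:
            out.append(None)       # voxel gets no color
    return out

# Made-up example: slider at 2.0 -- only the -3.1 voxel survives
print(neg_only_colored([-3.1, -1.0, 2.5], [10.0, 20.0, 30.0], 2.0))
```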
You don't give enough information to provide a better answer than Paul has given. Are these a single 2D image per subject? Or a collection of 2D images per subject, with very thick slices (say 5 mm thick, with 1 mm in plane resolution)?
by RWCox - AFNI Message Board
I'm confused by your units. Is 0.0123 (the threshold) in percent? That is, a fractional change of 1.23 x 10^-4? That isn't much to worry about.
The reason is probably that the calculations are carried out in a different order in different runs, so that the roundoff errors accumulate differently. And then the optimizer will stop at slightly different points in each stage (patch and lev
by RWCox - AFNI Message Board
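The order-of-operations point above is easy to demonstrate: floating-point addition is not associative, so summing the same numbers in a different order can give a different answer. The values below are contrived to make the effect visible:

```python
# Floating-point addition is not associative, so the same numbers summed
# in a different order can give a (slightly) different result.
vals = [1e16, 1.0, -1e16, 1.0]

s_forward = ((vals[0] + vals[1]) + vals[2]) + vals[3]    # 1e16 swallows a 1.0
s_reordered = ((vals[0] + vals[2]) + vals[1]) + vals[3]  # cancel the big terms first

print(s_forward)    # 1.0  (one of the 1.0's was lost to roundoff)
print(s_reordered)  # 2.0
```

In real computations the discrepancy is tiny rather than this dramatic, but an optimizer that stops when changes fall below a tolerance can land at slightly different points because of it.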
"Reasonably good TSNR" depends on the scan parameters.
For 3 Tesla BOLD EPI data with 2-3 mm voxels, TR about 2 s, flip angle 50 degrees or more, echo time about 30 ms -- TSNR about 200 is common. But the real thing to check for is if some subjects have TSNR very different from the others. In our recent Shenzhen bootcamp, two guys from Chongqing had TSNR on 100+ subjects in the 180-2
by RWCox - AFNI Message Board
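For reference, TSNR here means the temporal mean divided by the temporal standard deviation at each voxel. A minimal sketch, with an invented time series:

```python
import statistics

def tsnr(timeseries):
    """Temporal SNR of one voxel: mean over time divided by the
    standard deviation over time (population sd used here)."""
    return statistics.mean(timeseries) / statistics.pstdev(timeseries)

# Invented voxel time series: mean 1000, sd 10 -> TSNR = 100
ts = [990.0, 1010.0, 990.0, 1010.0]
print(tsnr(ts))  # 100.0
```

In practice this is computed per voxel over a whole 4D dataset, but the arithmetic per voxel is exactly this ratio.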
I'm bumping this up (for Christmas!) to help people in China see this report of updates inspired by THEM.
by RWCox - AFNI Message Board
It is best, as Gang implies, to extract the global signal and then include it in your regressions in the pre-processing, which presumably are removing other signals "of no interest". Regressing one thing out by itself after the other regressions is not usually a proper form of analysis -- if you regress out A and then B, the result is not the same as regressing out B and then A (unless
by RWCox - AFNI Message Board
It would help to see the "progress report" from 3dttest++ as it ran before it crashed, in order to narrow down the problem. The crashlog doesn't show all of the history, so I can't actually tell how far the program got before the bad news happened.
by RWCox - AFNI Message Board
You can use the '-nodata' option, as in '-nodata 300 1.0' where '300' is the number of time points and '1.0' is the time interval (in seconds) between them. After that, you set up the processing as normal, and the program outputs the analysis design matrix.
Here is an example copied from the 3dDeconvolve -help output:
3dDeconvolve -nodata 300 1 -polort
by RWCox - AFNI Message Board
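To make the polynomial-baseline ("polort") part of the design matrix concrete, here is a toy Python sketch of what those columns look like. This is an illustration only: I believe 3dDeconvolve actually uses Legendre polynomials for -polort, but plain powers on a rescaled time axis show the idea.

```python
def polort_columns(n_timepoints, order):
    """Toy sketch of polynomial baseline regressors in the spirit of
    3dDeconvolve's -polort option: one column per polynomial degree
    0..order, evaluated on time rescaled to [-1, 1].  (Plain powers
    here; the real program uses Legendre polynomials.)"""
    # time axis rescaled to [-1, 1]
    t = [2.0 * i / (n_timepoints - 1) - 1.0 for i in range(n_timepoints)]
    return [[x ** p for x in t] for p in range(order + 1)]

# 300 time points, quadratic baseline -> 3 columns of 300 rows each
cols = polort_columns(300, 2)
print(len(cols), len(cols[0]))  # 3 300
```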
You can try to use them, but please make sure that the masks are properly aligned with the MNI template brain when you view them in AFNI. Otherwise, bad things will happen.
by RWCox - AFNI Message Board
I don't *think* the small brightness patches will affect the template formation significantly, but I can't really *know*.
As far as the "halo" goes, it won't affect the skull strip in any way. That tiny shell far away from the head will be eliminated almost instantly in the skull stripping process.
by RWCox - AFNI Message Board
I see what you mean. However, I don't understand what your concern might be in "prepping images for preprocessing". What follows the preprocessing? If you are going to do quantitative image segmentation (i.e., partition into WM and GM and count voxels), then the effect you see might be a concern. If you are going on to skull stripping or the like, then I don't think these sma
by RWCox - AFNI Message Board
The two methods are mathematically equivalent -- eliminating the time point from the data and from the regression matrix, or adding a bunch of regressors that are all 0s except a single 1 at the to-be-censored time point. Only a tiny difference should result from trying one method or the other.
The brutal fact is that subjects who move too much produce data that cannot be safely analyzed.
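That equivalence is easy to check numerically. A minimal sketch in pure Python with invented data: one regressor plus an indicator column at the censored time point, fit by ordinary least squares via the 2x2 normal equations:

```python
def solve2(a11, a12, a22, b1, b2):
    """Solve the symmetric 2x2 normal equations
       [a11 a12] [u]   [b1]
       [a12 a22] [v] = [b2]   by Cramer's rule."""
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

x = [1.0, 2.0, 3.0, 4.0]       # one regressor (no intercept, for brevity)
y = [1.1, 1.9, 9.9, 4.2]       # invented data; time point 2 (the 9.9) is "bad"
k = 2                          # index to censor

# Method 1: delete the bad time point, then least-squares fit y ~ x
xk = [v for i, v in enumerate(x) if i != k]
yk = [v for i, v in enumerate(y) if i != k]
beta_delete = sum(a * b for a, b in zip(xk, yk)) / sum(a * a for a in xk)

# Method 2: keep every time point, but add an indicator regressor that
# is 1 at the censored point and 0 elsewhere
e = [1.0 if i == k else 0.0 for i in range(len(x))]
sxx = sum(a * a for a in x)
sxe = sum(a * b for a, b in zip(x, e))
see = sum(a * a for a in e)
sxy = sum(a * b for a, b in zip(x, y))
sey = sum(a * b for a, b in zip(e, y))
beta_indicator, _ = solve2(sxx, sxe, see, sxy, sey)

print(abs(beta_delete - beta_indicator) < 1e-12)  # True: the betas agree
```

The indicator regressor absorbs the censored point exactly, so the remaining points determine the beta just as if the point had been deleted; any difference is pure roundoff.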
by RWCox - AFNI Message Board
The "way around this" is not to bandpass. Or to gather significantly longer imaging runs.
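The degrees-of-freedom arithmetic behind this advice can be sketched with a rough back-of-envelope calculation; the run parameters below are made up:

```python
def retained_dof(n_timepoints, tr, f_lo, f_hi):
    """Rough count of temporal degrees of freedom left after bandpassing.
    Frequencies in the data are spaced 1/(N*TR) apart, and each retained
    frequency contributes ~2 DOF (sine and cosine).  Illustration only."""
    total_time = n_timepoints * tr          # run length in seconds
    df = 1.0 / total_time                   # frequency spacing in Hz
    n_freqs = round((f_hi - f_lo) / df)     # frequencies kept in the band
    return 2 * n_freqs

# Made-up run: 150 time points at TR = 2 s, band 0.01-0.1 Hz
print(retained_dof(150, 2.0, 0.01, 0.1))  # 54
```

With only ~54 DOF surviving out of 150 time points before any nuisance regressors are counted, it is easy to run out of degrees of freedom, which is why longer runs help.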
by RWCox - AFNI Message Board
We are totally unable to answer questions about SPM usage details -- our mental energy is bent towards AFNI. You might try emailing the authors of the paper you mention, or joining the SPM email list http://www.fil.ion.ucl.ac.uk/spm/support/ and asking your questions about SPM in that forum.
by RWCox - AFNI Message Board
I suggest using the script @SSwarper to skull strip and nonlinearly warp the anat to the MNI template.
Then use the outputs from that script in afni_proc.py to bypass the skull stripping and warping options.
I once had a lot of bad registration cases. Until I found I was saying "-anat_has_skull no" in my afni_proc.py script (since I copied the script from something else), and in fac
by RWCox - AFNI Message Board
Without seeing the data, it is hard to say.
Personally, I would first look at the two .1D files -- are the numbers "reasonable"?
Then, I would plot them, as in
1dplot -noline -x r$run.dy1.PCC.1D r$run.dy1.LmPFC.1D
and
1dplot -one -nopush r$run.dy1.PCC.1D r$run.dy1.LmPFC.1D
and see how these plots look. Depending on what I saw, then I'd proceed further.
Also, when you s
by RWCox - AFNI Message Board
Alas, Daniel Glen, who should help you with this problem, has just gone on vacation for the next 3 weeks so it will take a little while for someone else to respond.
by RWCox - AFNI Message Board
Do you have betas from multiple subjects, or just one subject? What you want to calculate is NOT VERY CLEAR from your terse question.
by RWCox - AFNI Message Board
If you are also transforming to MNI space, you could try the script @SSwarper. It is slow, though.
by RWCox - AFNI Message Board
Right-click on the "intensity bar" just to the right of the image. You'll get a popup menu.
The item labeled "Automask?" is a toggle (on/off) switch that restricts the overlay display to the 2D automask generated from the current underlay image. This feature is (at least) approximately what you want.
by RWCox - AFNI Message Board
A quick addition: you can do the calculations inside the AFNI GUI using the InstaCalc plugin, which is available from the InstaCorr drop-down menu in the Define Overlay control panel.
by RWCox - AFNI Message Board
Since I don't know how to duplicate this problem (it works OK on our cluster), I can only make a suggestion.
Try doing this sequence (csh syntax):
Xvfb :88 -screen 0 1024x768x24 >& /dev/null &
setenv DISPLAY :88
3dSkullStrip {whatever}
unsetenv DISPLAY
killall Xvfb
Here, you are starting a virtual (hidden) X11 server, called Xvfb, and telling other programs that follo
by RWCox - AFNI Message Board