Show all posts by user
Results 1 - 30 of 35
Hi Gang, yes - that is exactly what I would like to do. Here is the specific outline.
For each subject:
1. I create artificial 'time courses' for each condition of interest. Each beta series will contain the coef sub-bricks corresponding to individual stimuli within a specific condition. So if I have two conditions, I would have two separate time course files.
2. For each condition bet
by jyaros - AFNI Message Board
Hi gurus! I'm trying to figure out the best way to implement Gang's suggestion:
Later on when you compute the correlations among the regions, consider censoring out large beta values by, for example, setting [-2, 2] as your interval.
I'm not sure if I'm understanding this correctly - does this mean to create some sort of mask which zeroes out voxels that are outliers, and that this mask sho
by jyaros - AFNI Message Board
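Gang's [-2, 2] suggestion in the post above can be sketched in plain Python (hypothetical data and function names, just for intuition; a real pipeline would censor on the AFNI datasets themselves, e.g. with 3dcalc): betas outside the interval are treated as missing, and each pairwise correlation is then computed over the surviving entries only.

```python
def censor_betas(betas, lo=-2.0, hi=2.0):
    """Replace beta values outside [lo, hi] with None (censored)."""
    return [b if lo <= b <= hi else None for b in betas]

def pearson(x, y):
    """Pearson correlation over entries where neither series is censored."""
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    n = len(pairs)
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs)
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# One extreme beta (9.0) in region A is censored; the correlation is
# computed from the remaining three beta pairs.
region_a = censor_betas([0.5, 1.0, 9.0, -0.5])
region_b = censor_betas([0.4, 1.1, 0.2, -0.6])
r = pearson(region_a, region_b)
```

Whether censoring should zero voxels or drop trial entries is exactly the ambiguity the post asks about; this sketch only illustrates the drop-the-entry reading.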
Ok thanks!
So to clarify, you would test masking out voxels that have any of those large betas (i.e. greater than +/-2)?
And I'll look into the censoring thing. Wouldn't adding back in censored stimuli potentially add spurious correlations due to widespread sensitivity of correlations across voxels to motion? Is the point just to see whether that gets rid of the large betas, but n
by jyaros - AFNI Message Board
Hi Gang,
Thanks for this tip!
I just want to make sure I am understanding correctly, and that this data shouldn't be ringing any alarm bells, so I have a couple additional questions:
1. As my data is currently processed, I have not used the SCALE block in proc_py. Do you think for a beta series correlation, since I am concatenating betas together, that the data should have been normal
by jyaros - AFNI Message Board
As a follow-up - perhaps I have a better understanding now, but I'm still confused:
This message board answer implies the best approach might be to create separate matrices for each condition of interest, where you have a separate 3dDeconvolve call for each condition, modeling only that specific condition with the stim_type_IM option. So, I SHOULD be creating 16 different matrices if I have 16 con
by jyaros - AFNI Message Board
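One concrete piece of the per-condition approach described above is splitting the experiment's events into one stimulus-timing list per condition, each of which would then feed its own 3dDeconvolve call. A minimal sketch, assuming a hypothetical list of (onset, condition) tuples (real AFNI timing files are typically one row of onsets per run):

```python
def split_timing_by_condition(events):
    """Group stimulus onsets by condition label.

    events: list of (onset_seconds, condition) tuples for one run.
    Returns {condition: sorted list of onsets}, one entry per condition,
    suitable for writing out as separate stimulus-timing files.
    """
    timing = {}
    for onset, cond in events:
        timing.setdefault(cond, []).append(onset)
    return {cond: sorted(onsets) for cond, onsets in timing.items()}

# Three events across two conditions yield two timing lists.
timing = split_timing_by_condition([(9.0, "face"), (0.0, "face"), (4.5, "house")])
```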
Hi gurus!
I've spoken with a few of you about modeling single trial regressors with 3dDeconvolve and the stim_times_IM option for every trial. As an additional test, I'd like to model the trials separately using 3dLSS. I'd like to see how each approach models single trial betas, before deciding which pipeline to use for a beta series correlation.
I already have the 3dDeconvo
by jyaros - AFNI Message Board
Hi Cesar, thank you for the input! I have considered 3dLSS and discussed it elsewhere on the message board, though for a different type of analysis.
Just to clarify, I have about 333 trials and over 1,300 1.5s TRs (where stimuli are repeated every 4.5 seconds). So there are actually about 2.5 TRs between each stimulus presentation.
Given these numbers, would you recommend sticking with 3d
by jyaros - AFNI Message Board
Thank you both so much for your recommendations.
@Paul - your points make sense as to why taking subsets of the fitts series wouldn't work... My events are only 3 seconds long, and are randomly distributed across each run, so I can't see a good way of isolating them from fitts....
@Cesar and Paul - I'm glad to hear that the beta-series correlations might be a reasonable appr
by jyaros - AFNI Message Board
Hi AFNI gurus,
I'd like to run a whole-brain connectivity analysis on event-related data. I've read that 3dNetCorr can be used for this, but so far I have not found any documentation or forum questions about it.
Basically I have a template of over 200 rois, and I'd like to correlate the average timeseries of each roi with one another, but for each condition separately.
Si
by jyaros - AFNI Message Board
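For intuition, the core computation asked about above - correlating each ROI's average time series with every other ROI's - can be sketched in plain Python. (3dNetCorr does this, and much more, directly on datasets; the dict-of-lists input here is purely illustrative, and per-condition analysis would mean restricting each series to that condition's time points first.)

```python
def roi_correlation_matrix(roi_timeseries):
    """Pairwise Pearson correlations of ROI-average time series.

    roi_timeseries: dict mapping ROI label -> list of values (one per TR).
    Returns a nested dict corr[roi_i][roi_j].
    """
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5

    labels = list(roi_timeseries)
    return {i: {j: pearson(roi_timeseries[i], roi_timeseries[j])
                for j in labels} for i in labels}
```

With 200+ ROIs this yields a 200 x 200 symmetric matrix per condition, which is what would then go forward to group-level comparison.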
Following up on this question - still looking for clarification - thanks!
by jyaros - AFNI Message Board
Hi there, I'm working through learning PPI analysis and have a question regarding step 2A on this document:
2a. If your stimulus onset times were not synchronized with TR grids, pick up a sub_TR, e.g., 0.1 seconds, replace the above 1dtranspose step and upsample seed time series by xx (original TR divided by sub_TR) times:
My question is, how do I know if my stimulus onset times are s
by jyaros - AFNI Message Board
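On the upsampling step quoted above: with, say, TR = 1.5 s and sub_TR = 0.1 s, the factor would be 1.5 / 0.1 = 15. A minimal linear-interpolation sketch of what "upsample by xx times" means (in an actual PPI pipeline an AFNI tool such as 1dUpsample would do this; the function here is only for intuition):

```python
def upsample(series, factor):
    """Linearly interpolate a 1D series onto a grid `factor` times finer.

    Output length for this sketch is (len(series) - 1) * factor + 1.
    """
    out = []
    for i in range(len(series) - 1):
        a, b = series[i], series[i + 1]
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(series[-1])
    return out

# Two TRs 1.5 s apart become a series sampled every 0.1 s.
fine_grid = upsample([0.2, 0.8], 15)
```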
Hi there!
Looking for clarification on significance estimates for individual clusters in the clusterize gui.
I notice that some clusters have a p-value inequality with two <<, and others have just one <.
What is the difference between p << .01 and p < .01? How should we be reporting the corrected significance of these clusters?
For instance see the url link to this image:
by jyaros - AFNI Message Board
Hi there,
I attempted to warp a binary mask to standard space using several AFNI commands and the results don't look great. (A bunch of the labels become long decimals, and there are grainy outlines at the edges that shouldn't be there.) I believe the issue involves the fact that I am using a transformation matrix to warp a binary mask, and perhaps have chosen the wrong way to d
by jyaros - AFNI Message Board
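The "long decimals" described above are the classic signature of applying a continuous interpolant (linear/cubic) to a 0/1 mask; forcing nearest-neighbour resampling (e.g. 3dAllineate's -final NN, or 3dresample -rmode NN) keeps the values binary. A one-dimensional Python toy, with a hypothetical 4-voxel mask, shows the difference:

```python
def resample_linear(mask, x):
    """Sample a 1D 0/1 mask at fractional position x by linear interpolation."""
    i = int(x)
    frac = x - i
    return mask[i] * (1 - frac) + mask[i + 1] * frac

def resample_nn(mask, x):
    """Sample a 1D 0/1 mask at fractional position x by nearest neighbour."""
    return mask[round(x)]

mask = [0, 0, 1, 1]
# At a position between a 0 voxel and a 1 voxel, linear interpolation
# invents a fractional label, while nearest neighbour stays binary:
#   resample_linear(mask, 1.5) -> 0.5
#   resample_nn(mask, 1.6)     -> 1
```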
Hi there, I've always gotten a specific ANATICOR warning when generating my processing scripts from proc.py. I just want to confirm that this warning is just a default, and not thrown because there are any detected issues.
** WARNING: ANATICOR output now includes zero volumes at
censor points, matching fast ANATICOR and
non-ANATICOR cases
Thanks in advance!
by jyaros - AFNI Message Board
I've had the same result - curious what the explanation might be?
by jyaros - AFNI Message Board
Yes - well, at least I can go ahead with the condition-level RSA, though I was hoping to do single-trial as well. Perhaps condition-level may be all my experimental design allows. I'll have to continue testing to see!
by jyaros - AFNI Message Board
Hi there.
I'd like to confirm that a warning message in my output.proc file is ok to ignore.
It is in the outcount autoblock at the very beginning of pre-processing. The warning states that the input dataset is not 3d+time, and it will therefore assume the TR = 1.0. Since the input file is the 1D outcount file, I am assuming this is just a stock warning for situations in which input files
by jyaros - AFNI Message Board
Unfortunately no jittering. At the time of design I was told that the different stimulus types would serve as an innate jitter, but in hindsight jittering would have been much better from an analysis perspective.
Do you have advice on best way of viewing the time series to see if it is even feasible to disentangle single trial responses?
I'm going to reprocess the data without any GoForIt option
by jyaros - AFNI Message Board
Ok thanks for the tip.
I might as well rerun - it could take a day or two for all the subjects, but that shouldn't be a problem.
by jyaros - AFNI Message Board
Thanks Gang!!
Glad to hear that about the volreg files
I've seen several instances of people using 3dLSS for data intended for MVPA analysis, and I know it was developed for use in AFNI based on this paper:
My stimulus trial duration is 3 seconds, followed by a 1.5 second fixation cross. The fixation cross is not modeled as a regressor so it assumed baseline. So the actual stimuli a
by jyaros - AFNI Message Board
Hi there! Just checking in again since I haven't heard back yet, to make sure that using the pb03 volreg files (pre the pb04 blur step) as input to 3dDeconvolve and/or 3dLSS is the proper move for MVPA analysis. For reference, the ordering of my blocks is
-blocks tshift despike align volreg blur mask regress \
So I'm thinking that the volreg files should be fully pre-processed data, just min
by jyaros - AFNI Message Board
Hi there,
In the past I ran proc_py on my data, where I smoothed/blurred the data. Now I'd like to go back and create new stats files based on the unsmoothed time series versions of the data for ultimate MVPA analysis. What is the best way to go about this without rerunning all the preprocessing steps?
I know that the pb04 files (in my study at least) are the unsmoothed/blurred dat
by jyaros - AFNI Message Board
Hi there,
Is it possible to run a new gltsym using already preprocessed data but with new conditions and stimulus timing files? I actually want to add several conditions to my model, not just update the original stimulus timing files.
I know you can use '-write_3dD_script' and '-write_3dD_prefix' if you just want to define new GLMs. But what if you want to remodel the functional dat
by jyaros - AFNI Message Board
Hi there,
I am having trouble reproducing the same results using 3dDeconvolve as I get running the full proc_py script. Of course, I'd like to figure this out, so that I don't have to preprocess the data every time I'd like to run a new gltsym. I have pasted the code for both 3dDeconvolve calls at the bottom to show that the flags and preferences are exactly the same.
1. SubBri
by jyaros - AFNI Message Board
Hi there,
Just want to get an opinion on this.
I have been using GAM functions on data from an event related design with stimulus durations of 3 seconds.
First, is 3 seconds too long for use of the GAM function? I know the documentation mentions short presentation times as being between 0-2 seconds. If it is too long, which function might be preferable?
Second, I realize I modeled f
by jyaros - AFNI Message Board