
June 06, 2019 04:31AM
What you want to do does not fit exactly into the AFNI analysis model, where we have a global "baseline" model (which includes the mean and slow up/down drifts) but no local "baseline" model.

Below is my quick jet-lagged outline of how you could do such an analysis in AFNI. However, there are probably not enough details here unless you are already a pretty expert AFNI user. If you want to proceed, you should first try to understand this, and then get back to me. Note that I am traveling to Italy for OHBM and so responses will probably be slow.

The way to analyze this model is to remove the mean component from our baseline model, and instead also provide stim_times for the control (fixation) task. The times and BLOCK durations should be chosen so that the entire imaging run is covered, in the sense that the sum of all the regressors of interest is constant = 1.

Here's a toy example:
* file A.1D = 10 30 50 70 90
* file B.1D = 20 40 60 80 100
Commands:
* 3dDeconvolve -nodata 150 1 -num_stimts 2 -stim_times 1 A.1D 'BLOCK(10,1)' -stim_times 2 B.1D 'BLOCK(10,1)' -polort -1 -x1D Xmat.1D
* 1deval -a Xmat.1D'[0]' -b Xmat.1D'[1]' -expr a+b | 1dplot -stdin
The attached image (Xsum.png) from the last command shows that in the middle of this 150 s time period, where the stim_times are present, the sum of the 2 interleaved stimulus regressors is close to constant = 1.
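To see the tiling idea without running AFNI, here is a small Python sketch. It is an approximation, not AFNI's code: it uses plain boxcars, ignoring the smooth gamma-function edges that BLOCK convolves in, so the flat region starts right at the first onset rather than slightly later.

```python
import numpy as np

# Boxcar stand-in for the two interleaved BLOCK(10,1) regressors.
# (The real BLOCK shape has smooth hemodynamic edges, so the sum only
# settles to 1 a little after the first stimulus begins.)
n = 150                           # 150 time points, TR = 1 s
t = np.arange(n)
a_onsets = [10, 30, 50, 70, 90]   # onsets from A.1D
b_onsets = [20, 40, 60, 80, 100]  # onsets from B.1D

def boxcar(onsets, dur=10):
    """Unit-height boxcar regressor with the given onsets and duration."""
    reg = np.zeros(n)
    for s in onsets:
        reg[(t >= s) & (t < s + dur)] = 1.0
    return reg

total = boxcar(a_onsets) + boxcar(b_onsets)
# In the covered middle of the run, the sum is constant = 1
print(total[20:110].min(), total[20:110].max())  # 1.0 1.0
```

The two onset lists tile the interval with no gaps and no overlaps, which is exactly the "sum of regressors = 1" condition described above.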

So if this was your data and model, you could analyze the data by
* removing the mean (constant = 1) regression component from the X matrix
* only analyzing the time points where the sum of the stimulus response models is about 1 (20s to 110s in this example)
* you could use -stim_times_IM to get individual betas for each interval
One difficulty is that 3dDeconvolve does not provide a way to remove the mean regressor from the model without also removing the higher-order polynomials (which have zero mean over time) that represent slow drifts. For real data, it is important to include these drift components. Without modifying 3dDeconvolve, you can still include the drift components: use '-polort -1' to remove ALL polynomial components, then supply the slow polynomial drift regressors yourself via the '-ortvec' option. To construct those polynomial regressors for a single imaging run, you would do something like this:

* 3dDeconvolve -input DATASET_NAME -polort 4 -x1D Xmat.1D -x1D_stop -num_stimts 1 -stim_times 1 '1D: *' 'BLOCK(1)'

File Xmat.1D will have 6 columns:
#0 = all ones = constant regressor (gives the mean for the baseline model)
#1-4 = polynomials of order 1-4 for the slow drift baseline model
#5 = all zeros = the fake stimulus with no actual information (present only because 3dDeconvolve requires at least 1 stimulus input)
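For intuition about what those baseline columns look like, here is a hedged Python sketch. AFNI's -polort baseline uses Legendre polynomials over the run; the mapping of time to [-1, 1] below is an illustration of the shape of the columns, not a reproduction of AFNI's exact scaling.

```python
import numpy as np

# Approximate picture of the -polort 4 baseline columns: Legendre
# polynomials P_0..P_4 with run time mapped to [-1, 1].
# (Illustrative only; AFNI's own scaling may differ in detail.)
n = 150                        # number of time points in the run
x = np.linspace(-1.0, 1.0, n)  # run time mapped to [-1, 1]
cols = np.column_stack(
    [np.polynomial.legendre.Legendre.basis(k)(x) for k in range(5)]
)
print(cols.shape)  # (150, 5): column 0 is the constant, 1-4 are drifts
```

Column #0 comes out as all ones (the mean regressor the text is describing), and column #1 is a straight line, the slowest drift term.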

You can extract columns #1-4 into a separate file with a command like
* 1dcat Xmat.1D'[1..4]' > Xpolort.1D
And then plot them (to be sure they are good) with
* 1dplot Xpolort.1D
And then provide these to 3dDeconvolve with
* 3dDeconvolve -input DATASET_NAME -polort -1 -ortvec Xpolort.1D -num_stimts XXXXXXX

An important detail is what I mentioned above: only analyze the data where the sum of the stimulus regressors is essentially constant = 1. You should examine the X matrix to make sure this sum of stimulus regressors is correct.
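As a sketch of that sanity check, assuming you have the stimulus columns as a NumPy array (here rebuilt as toy boxcars rather than read from the real Xmat.1D), you could locate the usable analysis window like this:

```python
import numpy as np

# Toy stand-in for the stimulus columns of the X matrix (boxcars again;
# with real BLOCK regressors the flat region starts slightly later,
# ~20 s in this example, because of the hemodynamic ramp-up).
n = 150
t = np.arange(n)
stim = np.zeros((n, 2))
for j, onsets in enumerate([[10, 30, 50, 70, 90], [20, 40, 60, 80, 100]]):
    for s in onsets:
        stim[(t >= s) & (t < s + 10), j] = 1.0

stim_sum = stim.sum(axis=1)
# Rows where the sum is essentially constant = 1: safe to analyze
keep = np.where(np.abs(stim_sum - 1.0) < 0.05)[0]
print(keep.min(), keep.max())
```

The `keep` indices form one contiguous block for a properly tiled design; gaps in it would mean the onsets or durations leave holes in the coverage.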
Attachments: Xsum.png (6.3 KB)
Subject | Author | Posted
Estimating individual block resting/baseline coefficients within task-based run | dberman6 | June 04, 2019 04:25PM
Re: Estimating individual block resting/baseline coefficients within task-based run | RWCox | June 06, 2019 04:31AM