AFNI Message Board

Dear AFNI users-

We are very pleased to announce that the new AFNI Message Board framework is up! Please join us at:

https://discuss.afni.nimh.nih.gov

Existing user accounts have been migrated, so returning users can log in by requesting a password reset. New users can also create accounts through a standard account creation process. Please note that these setup emails might initially go to spam folders (esp. for NIH users!), so please check those locations in the beginning.

The existing discussion threads have been migrated to the new framework. The old Message Board will remain visible, but read-only, for a little while.

Sincerely, AFNI HQ

April 12, 2019 03:04PM
Hi, Stef-

Gang and I have chatted a bit about this, generating the following thoughts:

Your data set is a little different from the one in the Chen et al. work you cited. In Chen et al., we were looking at a few longer movie clips that were interspersed with rest. We spliced out the movie clips and treated each as a separate run for processing (the clip presentation hadn't been randomized, so we didn't have to reassemble/unshuffle them); things like detrending were done "as normal" for each run separately.

In your case, you have several shorter clips back-to-back-to-back, forming individual runs. So that trends across each run of several clips don't affect things (esp. since the clips are randomized), it might be a good idea to first detrend the data to, say, 2nd order (because of the runs' overall duration) using 3dDetrend, to remove the trends across each run. Then, splice the runs into the short clips; demean each short clip; unshuffle the clip order (so that movie clips match across subjects); and concatenate the short clips into a single long run. That final concatenated dataset can be put into afni_proc.py.
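The splice/demean/unshuffle/concatenate bookkeeping above can be sketched in pure Python on a single voxel's time series. This is only an illustration of the logic (names like `clip_len` and `orders` are hypothetical); on real 4D data you would do these steps with AFNI programs such as 3dDetrend and 3dTcat, not Python lists.

```python
def splice(run, clip_len):
    """Cut one run's time series into consecutive clips of clip_len points."""
    return [run[i:i + clip_len] for i in range(0, len(run), clip_len)]

def demean(clip):
    """Subtract the clip's own mean, so each clip is centered at zero."""
    m = sum(clip) / len(clip)
    return [x - m for x in clip]

def unshuffle_and_concat(runs, orders, clip_len):
    """runs[r] is one (already detrended) run; orders[r][i] is the canonical
    clip ID shown i-th in run r.  Returns one long time series with clips
    demeaned and sorted into canonical order, matching across subjects."""
    by_id = {}
    for run, order in zip(runs, orders):
        for clip_id, clip in zip(order, splice(run, clip_len)):
            by_id[clip_id] = demean(clip)
    out = []
    for clip_id in sorted(by_id):
        out.extend(by_id[clip_id])
    return out

# Toy example: two runs, each two 3-point clips, shown in shuffled order.
runs = [[1.0, 2.0, 3.0, 10.0, 10.0, 10.0],   # presented: clip 2, then clip 0
        [4.0, 5.0, 6.0, 0.0, 1.0, 2.0]]      # presented: clip 1, then clip 3
orders = [[2, 0], [1, 3]]
ts = unshuffle_and_concat(runs, orders, clip_len=3)
print(ts)  # 12 points; clips in order 0,1,2,3, each with zero mean
```

Note that each clip ends up zero-mean, so concatenating them introduces no step offsets between clips, which is the point of demeaning before reassembly.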

You then shouldn't have to use the usual polort degrees in the regression block later; you can just specify to include a constant in the model, via:
-regress_polort 0

In naturalistic scanning, just as in resting state, we don't have the benefit of task stimuli to model and use as our data of interest; we can *only* put nuisance regressors in the model, and then we take the remaining residuals (the errts* file) as the output time series of interest. Motion has an even larger influence than usual; therefore, we often include the derivatives of the 3dvolreg motion estimates to try to account for motion to a higher degree than in task data. It seems a good idea to keep doing that here.
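Putting those two regression settings together (constant-only polort, plus demeaned motion parameters and their derivatives), the relevant part of an afni_proc.py call might look something like this. This is only a sketch: the subject ID and dataset names are placeholders, and your other blocks and options would be whatever your normal setup uses.

```shell
# Hypothetical fragment -- dataset/subject names are placeholders.
afni_proc.py                                            \
    -subj_id                 subj01                     \
    -dsets                   concat_clips.subj01+orig   \
    -blocks                  volreg blur mask regress   \
    -regress_polort          0                          \
    -regress_apply_mot_types demean deriv
```

Here `-regress_polort 0` includes just the constant term in the baseline model (the detrending having been done beforehand), and `-regress_apply_mot_types demean deriv` includes both the demeaned motion estimates and their derivatives as nuisance regressors; the errts* output is then the time series of interest.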

Re. bandpass filtering-- that does not seem necessary at all here. (In fact, in much of resting state processing, it might not even be necessary... but that is a separate story.)

--pt
Thread: Preprocessing for intersubject correlation

Author      Posted
s.meliss    April 11, 2019 05:18AM
ptaylor     April 12, 2019 03:04PM
s.meliss    April 23, 2019 09:25AM
s.meliss    July 17, 2019 12:30PM
ptaylor     July 17, 2019 01:08PM
s.meliss    August 07, 2019 01:27PM
s.meliss    September 26, 2019 04:00AM
ptaylor     October 03, 2019 06:25PM
s.meliss    October 11, 2019 06:39AM