Preprocessing for intersubject correlation
Posted by s.meliss, April 17, 2019 01:07PM

Dear AFNI experts,

In my study, participants view short video clips (36 in total, average duration 32 seconds), pseudo-randomised across three runs (i.e. 12 video clips per run), and are asked to rate the clips they have seen. I am currently planning the preprocessing (based on [afni.nimh.nih.gov]) and have a couple of questions on which I would greatly appreciate your thoughts.

Firstly, given that my a priori ROIs are subcortical structures, should I omit the surface reconstruction and run FreeSurfer's recon-all with -autorecon1 and -autorecon2 instead of recon-all -all?
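
For reference, this is roughly what the two alternatives would look like on my side (the subject ID and T1 file name are just placeholders):

    # full FreeSurfer stream, including the surface reconstruction
    recon-all -all -subjid sub01 -i sub01_T1w.nii.gz

    # only the first two stages, which (as I understand it) cover the
    # volumetric processing and the subcortical segmentation
    recon-all -autorecon1 -subjid sub01 -i sub01_T1w.nii.gz
    recon-all -autorecon2 -subjid sub01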

When specifying the "-regress_apply_mot_types demean deriv" option in afni_proc.py, does the demeaning apply only to the motion parameters, or also to the BOLD time series itself? Do the motion-parameter derivatives account for scanner drift over time, so that the data are effectively detrended after pre-processing? And if I use this pre-processing pipeline, can I then use the "-polort -1" option with the 3dTcorrelate program?
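
To make this concrete, this is roughly the kind of call I have in mind; the subject ID, dataset names, and block list are placeholders for my actual script:

    afni_proc.py                                                  \
        -subj_id sub01                                            \
        -dsets run1+orig.HEAD run2+orig.HEAD run3+orig.HEAD       \
        -copy_anat anat+orig.HEAD                                 \
        -blocks tshift align tlrc volreg blur mask scale regress  \
        -regress_apply_mot_types demean deriv

    # later, correlating the cleaned time series of two subjects
    # without any further detrending inside 3dTcorrelate:
    3dTcorrelate -pearson -polort -1 -prefix isc_sub01_sub02 \
                 sub01_clips+tlrc sub02_clips+tlrc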

I would like to run 3dTcorrelate on the time course that corresponds to the display of the video clips only (that is, with the volumes related to the ratings removed after preprocessing). Should I demean the time course of each video clip, using the mean of that clip's own time series, before combining the data from all 36 video clips? Please note that the video clips are presented in random order across the three runs, so the clip segments have to be reordered so that the final BOLD time series reflects the same clip order across participants.
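
In practice I imagine something along these lines per subject; the sub-brick range and file names are only illustrative, and the real indices would come from my timing files:

    # extract the volumes of one clip from the pre-processed time series
    3dTcat -prefix clip07_raw sub01_preproc+tlrc'[112..127]'

    # remove that clip's own mean (polynomial order 0 = constant term only)
    3dDetrend -polort 0 -prefix clip07_dm clip07_raw+tlrc

    # after doing this for all 36 clips, concatenate them in the same
    # canonical clip order for every subject (zero-padded clip numbers
    # keep the lexical sort identical across participants)
    3dTcat -prefix sub01_clips clip??_dm+tlrc.HEAD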

Lastly, I have been wondering whether I should also include band-pass filtering. If so, at which point in the pipeline would it be advisable to apply it?
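
If band-pass filtering is advisable, I assume it would either go into the regression model or be applied as a separate projection step afterwards; the cut-off frequencies below are only an example:

    # as extra regressors of no interest inside the afni_proc.py call:
    #   -regress_bandpass 0.01 0.1

    # or as a separate step on the cleaned data:
    3dTproject -input sub01_preproc+tlrc -passband 0.01 0.1 \
               -prefix sub01_preproc_bp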

Many thanks for your help and best regards,
Stef