September 26, 2019 04:00AM
Dear Paul,

I am sorry for bothering you again. I have pre-processed my data, but wanted to make sure that what I did makes sense. Please see below for how I specified the afni_proc.py command -- I modified the pre-processing a bit compared to what was suggested in the NI paper: I am pre-processing the data separately for each run whilst regressing out polynomial trends up to the third order, to cover scanner drifts (given the average run length of 380 s, the default formula gives 1 + floor(380 / 150.0) = 3). As recommended, I don't do band-pass filtering. I have further changed -regress_censor_motion from 0.2 to 0.3, based on some resting-state recommendations I found in the afni_proc.py help.

Afterwards, I take the final output file (the residual file, errts*) and concatenate and reshuffle the volumes relating to each video clip without demeaning them. The output of this step is then used to compute the ISC. Is this halfway reasonable? Do you think I should demean the volumes for each magic trick before reshuffling/concatenating?
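Just to show my arithmetic for the polort choice, the default formula mentioned above can be sketched as (a minimal sketch; 380 is the average run length in seconds):

```python
import math

def default_polort(run_length_s):
    """Default polynomial degree, 1 + floor(run_length / 150),
    with run_length in seconds."""
    return 1 + math.floor(run_length_s / 150.0)

print(default_polort(380))  # -> 3
```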

# the actual afni_proc.py command
afni_proc.py -subj_id $subj.magictrickwatching_perRun \
    -blocks despike tshift align tlrc volreg blur mask regress \
    -copy_anat $fsindir/$fsanat \
    -anat_follower_ROI aaseg anat $fsindir/aparc.a2009s+aseg_rank.nii \
    -anat_follower_ROI aeseg epi $fsindir/aparc.a2009s+aseg_rank.nii \
    -anat_follower_ROI FSvent epi $fsindir/$fsvent \
    -anat_follower_ROI FSWMe epi $fsindir/$fswm \
    -anat_follower_erode FSvent FSWMe \
    -dsets $epi_dpattern \
    -tcat_remove_first_trs 0 \
    -tlrc_base /usr/share/afni/atlases/MNI152_T1_2009c+tlrc \
    -tlrc_NL_warp \
    -volreg_align_to MIN_OUTLIER \
    -volreg_align_e2a \
    -volreg_tlrc_warp \
    -regress_ROI_PC FSvent 3 \
    -regress_make_corr_vols aeseg FSvent \
    -regress_anaticor_fast \
    -regress_anaticor_label FSWMe \
    -regress_censor_motion 0.3 \
    -regress_censor_outliers 0.1 \
    -regress_apply_mot_types demean deriv \
    -regress_est_blur_epits \
    -regress_est_blur_errts \
    -regress_run_clustsim no \
    -regress_polort 3
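To make the post-regression steps concrete, here is a minimal NumPy sketch of what I mean by extracting the per-clip volumes from the errts time series, optionally demeaning each clip, concatenating them, and then correlating across subjects (the function names, array shapes, and clip slices are my own illustration, not AFNI commands):

```python
import numpy as np

def concat_clips(errts, clip_slices, demean=True):
    """errts: (n_timepoints, n_voxels) residual time series for one subject.
    clip_slices: list of slices, one per video clip, in the desired
    (reshuffled) order. Optionally subtract each clip's own mean first."""
    segs = []
    for s in clip_slices:
        seg = errts[s]
        if demean:
            seg = seg - seg.mean(axis=0)  # demean this clip separately
        segs.append(seg)
    return np.concatenate(segs, axis=0)

def voxelwise_isc(ts_a, ts_b):
    """Pearson correlation per voxel between two subjects' time series."""
    a = ts_a - ts_a.mean(axis=0)
    b = ts_b - ts_b.mean(axis=0)
    num = (a * b).sum(axis=0)
    den = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
    return num / den
```

So the question boils down to whether demean=True (per clip, before concatenating) or demean=False is the right choice here.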

One more question: in addition to the movie-clip watching, we also did pre- and post-task resting-state scans, and I am interested in changes in resting-state functional connectivity due to the task. Can I preprocess the resting-state data in the same way? Would I include both runs (i.e., pre- and post-task) in the same afni_proc.py command, or should I treat them independently?

Any help with these issues is highly appreciated!
Many thanks and best regards,
Stef