AFNI Message Board

October 11, 2019 06:39AM
Hi Paul,

Thank you very much for your reply.

As indicated here (https://afni.nimh.nih.gov/afni/community/board/read.php?1,161034,161745#msg-161745), when doing what you suggested I got an extremely high number of motion-censored volumes, even though no motion is apparent in the original EPI files for each run, which surprised me. Hence, I tried to disentangle the effect each pre-processing step has on my data (https://afni.nimh.nih.gov/afni/community/board/read.php?1,161034,161922#msg-161922), especially using the quality-control HTML file. I'm very new to pre-processing in AFNI, but from my layman's point of view, it seems that detrending first, as suggested in your initial answer (https://afni.nimh.nih.gov/afni/community/board/read.php?1,161034,161061#msg-161061), causes problems, and so does demeaning. To avoid motion censoring where there is de facto no motion (as seen in the original EPI), I thought it might be better to do the pre-processing first and then the concatenation/demeaning as a separate step.

This also has the additional benefit of making it easier to select different video clips for the ISC calculation. I study memory, so I'm interested in comparing the ISC for remembered video clips (i.e., both members of a given pair remembered them) with the ISC for forgotten video clips (i.e., both members of a given pair forgot them). However, this differs for each pair of subjects, so the concatenation has to be done individually: 2 (memory outcome: remembered vs. forgotten) * 1/2 * N * (N-1) = 2450 concatenations given my N = 50. It therefore seemed easier and more efficient to pre-process the data of the 50 participants and then use the 50 *errts files to create those 2450 final time series by concatenation, rather than to first concatenate and then pre-process 2450 time series.
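
For reference, the pair-counting arithmetic above can be verified with a short Python sketch (N is just the subject count from the post; nothing here depends on AFNI itself):

```python
from itertools import combinations

N = 50  # number of participants

# Unordered subject pairs: N * (N - 1) / 2
n_pairs = len(list(combinations(range(N), 2)))

# Two memory outcomes per pair (both remembered vs. both forgot)
n_concatenations = 2 * n_pairs

print(n_pairs)           # 1225
print(n_concatenations)  # 2450
```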

Best regards,
Stef
Subject | Author | Posted
Preprocessing for intersubject correlation | s.meliss | April 11, 2019 05:18AM
Re: Preprocessing for intersubject correlation | ptaylor | April 12, 2019 03:04PM
Re: Preprocessing for intersubject correlation | s.meliss | April 23, 2019 09:25AM
Re: Preprocessing for intersubject correlation | s.meliss | July 17, 2019 12:30PM
Re: Preprocessing for intersubject correlation | ptaylor | July 17, 2019 01:08PM
Re: Preprocessing for intersubject correlation | s.meliss | August 07, 2019 01:27PM
Re: Preprocessing for intersubject correlation | s.meliss | September 26, 2019 04:00AM
Re: Preprocessing for intersubject correlation | ptaylor | October 03, 2019 06:25PM
Re: Preprocessing for intersubject correlation | s.meliss | October 11, 2019 06:39AM