Dear Paul,
I came across a problem in the pre-processing: a very high number of volumes is being censored due to motion. I acquire my data in three runs, with breaks in between, so participants may move their heads during the breaks but not during the EPI acquisition itself. However, the current pre-processing pipeline first reassembles/unshuffles the video clips (after detrending) and then pre-processes the data concatenated from all three runs. As a result, 3dvolreg with vr_base_min_outlier as the base may treat between-run head movement as within-run movement and censor too conservatively. I was therefore wondering whether it might be better to: detrend each run, pre-process each run separately as suggested, transform the resulting errts.* into a NIfTI file (or is it possible to do that by specifying -errts as eprefix.nii?), then concatenate and unshuffle the time series (while demeaning the time course of each video clip), and finally use this file to calculate the intersubject correlation.
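Just to make sure I am describing the concatenate/unshuffle/demean step clearly, here is a minimal sketch in plain Python of what I have in mind for a single voxel's residual time series. The run data, presentation orders, and clip length below are hypothetical placeholders, not actual AFNI output:

```python
# Sketch: concatenate per-run residual time series, unshuffle the video-clip
# segments into canonical clip order, and demean each clip's time course.
# Operates on one voxel's time series; clip_len and the orders are toy values.

def unshuffle_and_demean(runs, orders, clip_len):
    """runs: list of per-run time series (lists of floats);
    orders: for each run, the canonical clip indices in presentation order;
    clip_len: number of volumes per clip."""
    n_clips = sum(len(o) for o in orders)
    out = [None] * n_clips
    for ts, order in zip(runs, orders):
        for pos, clip_idx in enumerate(order):
            seg = ts[pos * clip_len:(pos + 1) * clip_len]
            mean = sum(seg) / len(seg)
            out[clip_idx] = [v - mean for v in seg]  # demean this clip
    # concatenate the demeaned clips in canonical order
    return [v for seg in out for v in seg]

# toy example: two runs, two clips each, 3 volumes per clip
run1 = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]   # presents clips 1, then 0
run2 = [5.0, 5.0, 5.0, 0.0, 1.0, 2.0]      # presents clips 2, then 3
ts = unshuffle_and_demean([run1, run2], [[1, 0], [2, 3]], clip_len=3)
```

The resulting concatenated series would then go into the intersubject correlation.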
Again, thank you very much for your help; it is highly appreciated!
Best regards,
Stef