Stef,
The script you showed seems to concatenate the multiple datasets automatically. You can process each run separately by modifying the code slightly. For example, suppose the two resting-state runs are named *_task-rest_run-1_bold.nii.gz and *_task-rest_run-2_bold.nii.gz. Try something like:
foreach subj ($subjects)
   ...
   # Input data: one resting-state EPI run per iteration
   foreach run (1 2)
      set epi_dpattern = ${indir}/${subj}_task-rest_run-${run}_bold.nii.gz
      ...
      # specify the actual afni_proc.py command
      afni_proc.py -subj_id ${subj}.${task}.${run} \
      ...
      # execute the script (note the ${run} in the log name so runs do not overwrite each other)
      tcsh -xef proc.${subj}.${task}.${run} |& tee output.proc.${subj}.${task}.${run}
   end
end
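With this nested loop, each run gets its own proc script (proc.$subj.${task}.${run}) and its own results directory, so the pre-processing and regression are carried out within each 300-volume run independently.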
> Given that all my resting-state scans (pre and post task) have a duration of 600 s (TR = 2 s, hence 300 volumes), and
> given that the final .errts file has 600 volumes, would it be possible to simply split this file to create separate files for
> pre and post learning rest by selecting the first and second 300 sub-bricks respectively and save them? Can these
> then be used to calculate changes in resting-state functional connectivity between two ROIs?
This is fine too, even though the result might differ slightly from the approach of processing each run separately, since with concatenated input the pre-processing (e.g., the volume registration base) and the regression model are applied across both runs jointly rather than within each run.
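If you go that route, here is a minimal sketch with 3dTcat and 3dmaskave; the dataset and ROI names (errts.${subj}+tlrc, ROI1+tlrc, ROI2+tlrc) are hypothetical placeholders, so adjust them to your actual file names:

# split the 600-volume errts into pre- and post-learning halves
3dTcat -prefix errts.${subj}.pre  errts.${subj}+tlrc'[0..299]'
3dTcat -prefix errts.${subj}.post errts.${subj}+tlrc'[300..599]'

# average the residual time series within each ROI (repeat for the post half)
3dmaskave -quiet -mask ROI1+tlrc errts.${subj}.pre+tlrc > roi1_pre.1D
3dmaskave -quiet -mask ROI2+tlrc errts.${subj}.pre+tlrc > roi2_pre.1D

# correlation between the two ROI time series
1ddot -terse roi1_pre.1D roi2_pre.1D

The resulting pre and post correlations could then be Fisher z-transformed before comparing them across subjects.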
Gang