Hello,
What would be the best way to handle very large data sets if I want to use the script generated by afni_proc.py?
I was thinking of using 3dZcutup, but I'm not entirely sure of the best way to go about it. I'm currently trying to edit the relevant portion of the script as follows (I don't think this is right):
# ------------------------------
# run the regression analysis, one slice at a time
foreach sl ( `count -dig 3 0 108` )    # backticks, and 3 digits so slices 100-108 sort correctly
# 3dZcutup takes a single dataset per call, so cut each run separately
# ($runs is set earlier in the afni_proc.py script)
foreach run ( $runs )
3dZcutup -prefix zcut.${sl}.$subj.r$run -keep $sl $sl pb04.$subj.r$run.scale+orig.HEAD
end
3dDeconvolve -input zcut.${sl}.$subj.r*+orig.HEAD \
-censor motion_${subj}_censor.1D \
-polort 3 \
-num_stimts 8 \
-stim_times 1 stimuli/REF_F_1_E 'GAM' \
-stim_label 1 E \
-stim_times 2 stimuli/REF_F_1_T 'GAM' \
-stim_label 2 T \
-stim_file 3 motion_demean.1D'[0]' -stim_base 3 -stim_label 3 roll \
-stim_file 4 motion_demean.1D'[1]' -stim_base 4 -stim_label 4 pitch \
-stim_file 5 motion_demean.1D'[2]' -stim_base 5 -stim_label 5 yaw \
-stim_file 6 motion_demean.1D'[3]' -stim_base 6 -stim_label 6 dS \
-stim_file 7 motion_demean.1D'[4]' -stim_base 7 -stim_label 7 dL \
-stim_file 8 motion_demean.1D'[5]' -stim_base 8 -stim_label 8 dP \
-gltsym 'SYM: T -E' \
-glt_label 1 T-E \
-fout -tout -x1D X.xmat.1D -xjpeg X.jpg \
-x1D_uncensored X.nocensor.xmat.1D \
-errts errts.${subj}.${sl} \
-bucket stats.$subj.${sl}
end

# glue the per-slice bucket datasets back into one volume
time 3dZcat -verb -prefix stats.$subj stats.$subj.???+orig.HEAD
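(I assume the per-slice errts datasets would need the same 3dZcat treatment if I want the residuals back as a single volume, e.g.:)

time 3dZcat -verb -prefix errts.$subj errts.$subj.???+orig.HEAD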
Or is it better to simply carve up the data beforehand, say into 4 sections, and run the script on each section individually (a rough sketch of what I mean is below)? Would that mess with the motion correction / regressors at all? Or is there something else I haven't thought of?
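Roughly what I mean by carving it up beforehand (just a sketch; the slab bounds assume 109 slices, 0-108, and epi.$subj is a stand-in for whatever dataset gets cut):

# split the slices into 4 slabs before preprocessing
set bots = ( 0 27 54 81 )
set tops = ( 26 53 80 108 )
foreach i ( 1 2 3 4 )
3dZcutup -prefix slab$i.$subj -keep $bots[$i] $tops[$i] epi.$subj+orig.HEAD
end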
Thanks.