Hi Rick,
I think the two images could be treated as a single video event, so I'll use the onset time of the first image and a block function of 1 second. So, I have to generate stimulus timing files for each condition that I want to compare, correct? Basically, one file (correct) containing the onsets of the first image on trials where the subject makes a correct decision, and another file (incorrect) containing the onsets on trials where the subject is incorrect. Then I can do the same for the other two "irrelevant" stimuli, namely confidence (split into low / high confidence) and trust (split into "subject makes a different decision" vs. "subject makes the same decision" after feedback). Does this make sense? See code below.
-regress_stim_times \
stim1_correct_stim_file.1D \
stim1_incorrect_stim_file.1D \
stim2_confident_stim_file.1D \
stim2_nonconfident_stim_file.1D \
stim3_trust_stim_file.1D \
stim3_distrust_stim_file.1D \
-regress_stim_labels \
correct incorrect \
confident nonconfident \
trust distrust \
-regress_basis_multi \
"BLOCK(1,1)" \
"BLOCK(1,1)" \
"BLOCK(2,1)" \
"BLOCK(2,1)" \
"BLOCK(2,1)" \
"BLOCK(2,1)"
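For reference, here is a minimal sketch of how the per-condition timing files above could be generated from a trial list. The trial data and file names are just illustrative assumptions (my real onsets would come from the behavioral log); the format is one row per run, with "*" marking a run that has no events of that condition.

```python
# Sketch: split per-trial onsets into per-condition AFNI timing files.
# One row per run, onsets in seconds; "*" for a run with no events.
from collections import defaultdict

# (run, onset_seconds, outcome) -- made-up example trials
trials = [
    (1, 4.0, "correct"),
    (1, 16.5, "incorrect"),
    (2, 8.2, "correct"),
    (2, 20.0, "correct"),
]

onsets = defaultdict(list)  # (condition, run) -> onset times
for run, onset, outcome in trials:
    onsets[(outcome, run)].append(onset)

n_runs = max(run for run, _, _ in trials)
for cond in ("correct", "incorrect"):
    with open(f"stim1_{cond}_stim_file.1D", "w") as out:
        for run in range(1, n_runs + 1):
            times = sorted(onsets.get((cond, run), []))
            # empty run -> "*" placeholder, per AFNI timing-file convention
            out.write((" ".join(f"{t:.2f}" for t in times) or "*") + "\n")
```

The same loop would then be repeated for the confidence and trust splits.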
Otherwise, if I put the onsets of all stimuli in one file, how do I specify the conditions I want to compare?
Thank you!
Davide