I ran optseq with optimization based on contrasts, to make it similar to @stim_analyze. But the results are the same. In the experiments, the main results are going to be based on the contrasts (the individual conditions compared to baseline are relatively less important), so it makes sense to evaluate the order based on that.
Here are the two cost functions from optseq that I tried (both give similar results, which are poor according to the 3dDecon criterion):
Efficiency (eff). Efficiency is defined as eff = 1/trace(C*inv(XtX)*Ct)
(note: any nuisance regressors are not included in the computation of
the trace but are included in the computation of the inverse). The
quantity trace(C*inv(XtX)*Ct) is a measure of the sum square error in Ghat
(ie, G-Bhat) relative to the noise inherent in the experiment. Therefore,
maximizing eff is a way of finding a schedule that will result in, on
average, the least error in Ghat.
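To make the efficiency measure concrete, here is a minimal NumPy sketch. The design matrix X and contrast matrix C are made up for illustration (they are not from the actual experiment); in practice X would come from convolving the stimulus schedule with the HRF:

```python
import numpy as np

# Toy design: 100 time points, 3 task regressors (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))

# Two pairwise contrasts between conditions (illustrative only).
C = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

# Covariance of the contrast estimates Ghat = C @ Bhat,
# up to the (unknown) noise variance.
cov = C @ np.linalg.inv(X.T @ X) @ C.T

# eff = 1/trace(C*inv(XtX)*Ct): larger is better,
# i.e. smaller total variance across the contrast estimates.
eff = 1.0 / np.trace(cov)
```

optseq searches over stimulus schedules to maximize this quantity; the sketch only shows how eff is evaluated for one fixed X.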
Average Variance Reduction Factor (vrfavg). The Variance Reduction Factor
(VRF) is the amount by which the variance of an individual estimator (ie,
a component of Ghat) is reduced relative to the noise inherent in the
experiment. The VRF for an estimator is the inverse of the corresponding
component on the diagonal of C*inv(XtX)*Ct. The average VRF is this value
averaged across all estimators. This will yield similar results to
optimizing the efficiency.
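Computing vrfavg from the same quantities makes the relation between the two cost functions explicit. Again, X and C are made-up toy inputs, not the real design:

```python
import numpy as np

# Same toy setup as for the efficiency measure (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
C = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

cov = C @ np.linalg.inv(X.T @ X) @ C.T

# VRF for each contrast estimator: inverse of the corresponding
# diagonal element of C*inv(XtX)*Ct.
vrf = 1.0 / np.diag(cov)
vrfavg = vrf.mean()

# For comparison: eff is the reciprocal of the SUM of the diagonal,
# while vrfavg is the MEAN of the reciprocals.
eff = 1.0 / np.trace(cov)
```

So the two criteria summarize the same diagonal of C*inv(XtX)*Ct in slightly different ways (eff penalizes the total variance, vrfavg rewards the average precision), which is why optseq says they yield similar results.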
vrfavg looks similar to what 3dDecon uses, but I don't fully understand the relationship. Can anyone tell whether there is a fundamental difference between these cost functions and what we use with 3dDecon?