Emily,
> In our lab, we generally use tent functions. We have previously used TENT (0, 14, 8).
> Does this seem appropriate for this paradigm?
It's hard to tell without looking at the data. Are you referring to modeling a stimulus of 8 s duration? One possible check is to examine the last couple of beta values from TENT(0,14,8) at a few important regions and see whether they come back down (or up) to 0. If they are not close to 0, you may want to lengthen the modeled duration.
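For concreteness, here is a small sketch (plain NumPy, not AFNI code) of what TENT(0,14,8) estimates: 8 piecewise-linear "tent" basis functions with peaks every 2 s from 0 to 14 s, whose fitted betas trace out the impulse response. The beta values below are made up purely for illustration:

```python
import numpy as np

def tent(t, center, halfwidth):
    # Piecewise-linear tent: 1 at its center, falling to 0 at center +/- halfwidth.
    return np.maximum(0.0, 1.0 - np.abs(t - center) / halfwidth)

b, c, n = 0.0, 14.0, 8          # TENT(0,14,8)
spacing = (c - b) / (n - 1)     # peaks every 2 s
t = np.arange(0.0, 16.0, 0.5)

# Hypothetical betas (e.g., from 3dDeconvolve output); if the last one or two
# are not close to 0, the response has not returned to baseline by 14 s and
# the modeled window should be lengthened.
betas = [0.05, 0.60, 1.10, 0.95, 0.55, 0.25, 0.10, 0.02]
irf = sum(beta * tent(t, b + i * spacing, spacing) for i, beta in enumerate(betas))
```

Plotting `irf` against `t` gives the same picture as -iresp: the estimated response is just the beta-weighted sum of the tents.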
> However, I have seen others use mean IRF (which would be GAM in afni proc py).
GAM is typically for event-related experiments. For block designs, the counterpart is BLOCK.
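Conceptually, BLOCK convolves a single-event hemodynamic response with a boxcar of the stimulus duration. A rough NumPy illustration (the gamma-variate shape below only approximates AFNI's default, for intuition rather than exact reproduction):

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 30.0, dt)

# Gamma-variate impulse response, roughly t^8.6 * exp(-t/0.547)
# (the shape GAM uses by default), peak-normalized.
h = t ** 8.6 * np.exp(-t / 0.547)
h /= h.max()

box = (t < 8.0).astype(float)             # an 8-s stimulus block
block = np.convolve(box, h)[: len(t)] * dt
block /= block.max()                      # peak-normalized block response
# The block response rises, plateaus, and peaks later than the single-event
# response -- which is why an 8-s stimulus calls for BLOCK(8,1) rather than GAM.
```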
> What numbers would I specify for this? Then they used the first 6 images starting from
> stimulus onset to calculate percent area under the curve? How would I go about doing that?
This sounds like a strategy for carrying TENT results to the group level, but not necessarily an optimal one.
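If you did take that route, the arithmetic itself is simple: sum (or integrate) the first 6 beta estimates per subject and carry that number to the group test. Everything below is hypothetical, including the assumed 2-s TR:

```python
import numpy as np

TR = 2.0  # assumed TR, for illustration only
# Hypothetical beta estimates from TENT(0,14,8), e.g. in percent signal change
betas = np.array([0.05, 0.40, 0.85, 0.70, 0.35, 0.15, 0.05, 0.01])

auc6 = betas[:6].sum() * TR                  # area under the first 6 estimates
pct = 100.0 * betas[:6].sum() / betas.sum()  # "percent area under the curve"
```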
> I have also seen the SIN function. I heard that this function is best suited for paradigm that have
> few trials. This would fit my paradigm as I only have 5 cs+ and 5 cs-. Would this be a useful
> approach and what 3 numbers would you recommend in terms of timing and number of sin functions?
Model building, or model tuning, is a crucial part of statistical analysis; unfortunately it's rarely practiced in neuroimaging because of the daunting job of dealing with the huge amount of data. It's very difficult to generalize about which approach is optimal under a specific circumstance. My suggestion is to try out different basis functions on a couple of subjects, then compare and examine their fitting quality through a few quality-control tools: the full-model F-statistic, the -iresp output, the Graph window (fitted curve vs. the original signal), etc.
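One way to make such a comparison concrete for a single voxel or ROI time series is to fit both a fixed-shape regressor and a flexible tent basis by least squares and compare fit quality. Everything below (onsets, response shape, noise level) is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
TR, n_vol = 2.0, 150
t = np.arange(n_vol) * TR
onsets = [30.0, 90.0, 150.0, 210.0, 270.0]   # hypothetical trial onsets (s)

def tent(x, center, hw):
    return np.maximum(0.0, 1.0 - np.abs(x - center) / hw)

def hrf(x):
    # A simple fixed-shape response, standing in for a GAM-style regressor.
    return np.where(x > 0, x ** 3 * np.exp(-x / 1.2), 0.0)

# Synthetic "data": fixed-shape responses plus noise.
y = sum(hrf(t - on) for on in onsets)
y = y / y.max() + 0.1 * rng.standard_normal(n_vol)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])   # add a baseline column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Model 1: one fixed-shape regressor (GAM-like).
X_fix = sum(hrf(t - on) for on in onsets)

# Model 2: TENT(0,14,8)-like basis with 8 free columns.
hw = 2.0
X_tent = np.column_stack(
    [sum(tent(t - on, k * hw, hw) for on in onsets) for k in range(8)]
)

r2_fix, r2_tent = r_squared(X_fix, y), r_squared(X_tent, y)
```

This is only a caricature of what 3dDeconvolve does, but the same logic applies: compare how well each basis choice tracks the data before committing to one for the whole group.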
Gang
Edited 2 time(s). Last edit at 08/08/2014 02:07PM by Gang.