Has anyone used AFNI to analyze data collected using sparse temporal sampling? This entails turning the scanner off during the audio stimulus, turning it on to collect BOLD data after some delay, and then turning it off again until the next trial.
I would like to capture 2s windows of data, one portion per trial, as follows: the audio stimulus is presented for 2s, during which the scanner is not collecting data, followed by 16s with no stimulus presentation. This simulates a slow event-related design; however, data are only collected in 2s portions.
In trial 1, data are collected only from 0 to 2s following the 2s audio stimulus. In trial 2, data are collected during 2-4s. In trial 3, data are collected only during 4-6s, and so on until all eight 2s time points are collected. I want to combine the 2s portions to create a picture of the BOLD response across the entire 16s following the 2s audio stimulus.
Two possible methods:
1. Cut and paste the 2s portions of data together to characterize the BOLD response curve spanning 16s. I think I can do this with 3dTcat to artificially create a sequence that simulates data collected across the entire 16s following the 2s audio stimulus presentation.
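To sanity-check the cut-and-paste idea before touching real datasets, here is a minimal numpy sketch of what the 3dTcat concatenation would amount to per voxel. All dimensions and the random data are made up for illustration; trial type k is assumed to have sampled the window [2k, 2k+2) s after stimulus offset:

```python
import numpy as np

# Assumed dimensions: 8 trial types (one per 2s window), 10 repetitions
# of each type, 5 voxels. data[k, t, v] is the volume acquired on
# repetition t of trial type k (i.e., window k), for voxel v.
n_types, n_reps, n_vox = 8, 10, 5
rng = np.random.default_rng(0)
data = rng.normal(size=(n_types, n_reps, n_vox))

# Average over repetitions within each window, then stack the windows
# in temporal order -- the numpy analog of 3dTcat'ing one mean volume
# per window into a 16s pseudo time series.
mean_per_window = data.mean(axis=1)   # shape (8, n_vox)
pseudo_ts = mean_per_window           # row k = BOLD at 2k..2k+2 s

print(pseudo_ts.shape)                # one 8-point, 16s curve per voxel
```

The caveat with this approach is that each point of the reconstructed curve comes from a different trial, so trial-to-trial variability and baseline drift show up as jumps between adjacent time points.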
2. Convolve the audio event with a model of the HRF (say the Bob Cox special) and use the value indicated at each 2s TR/volume collection, separately, with each trial: characterize the convolved HRF for just that 2s portion and zero it everywhere else, where the scanner is not collecting data. In this way I will be able to extract a fit coefficient or beta weight for each 2s TR or volume collected in each trial, and then combine them to generate the shape of the BOLD response. Does this make sense? Has someone already tried something like this? Might there be a better way?
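For what it's worth, the regressor-per-window idea can be sketched in numpy as follows. This is a toy simulation, not an AFNI pipeline: the gamma-variate parameters (q=8.6, b=0.547) are the classic AFNI waver default used as a stand-in for whatever HRF model you settle on, and the noiseless "data" are constructed to equal the full convolved response, so every beta should come back near 1:

```python
import numpy as np

dt = 0.1                                  # model time grid, seconds
t = np.arange(0, 16, dt)                  # 16s post-stimulus window
hrf = (t ** 8.6) * np.exp(-t / 0.547)     # gamma-variate HRF shape
hrf /= hrf.max()

stim = np.zeros_like(t)
stim[t < 2.0] = 1.0                       # 2s audio stimulus at onset
conv = np.convolve(stim, hrf)[: len(t)] * dt   # full convolved response

# One regressor per trial type: the convolved HRF, zeroed outside the
# 2s window actually acquired in that trial.
X = np.zeros((len(t), 8))
for k in range(8):
    win = (t >= 2 * k) & (t < 2 * (k + 1))
    X[win, k] = conv[win]

# Toy "data": the full response. Least-squares fit recovers one beta
# per 2s window; combined, the betas trace out the response amplitude.
y = conv
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(betas, 3))                 # all ~1 by construction here
```

In practice you would hand AFNI's 3dDeconvolve one such windowed regressor per trial type rather than fitting by hand, but the toy fit shows the logic is sound: each beta estimates the response amplitude for its own 2s window.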
Thanks in advance, great AFNI advisors,
philippe