Hello all,
I'm trying to make realistic (noise data matched to my observed data) synthetic fMRI data in order to arrive at an empirical statistical threshold via Monte Carlo simulations.
I'd appreciate any conceptual feedback on this effort, as well as help with the following questions:
1. I can build the synthetic data from raw fMRI data (i.e., data as it comes out of the scanner) or from processed data (i.e., data just before it goes into 3dDeconvolve). Which one should I use? Synthesizing processed data is a lot faster, but piping "raw" synthetic data through the processing stream might be better as far as spatial smoothing is concerned.
Technically, this is what I have in mind: (1) for each EPI voxel, compute the mean and stdev through time; (2) for each voxel in each image, generate a Gaussian-distributed random number matched to that voxel's mean and stdev; (3) preprocess and 3dDeconvolve this synthetic data.
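For what it's worth, here is a conceptual NumPy sketch of steps (1) and (2) on a toy 4D array (x, y, z, time) — this is just to illustrate the per-voxel matching, not a replacement for 3dTstat/3dcalc; the array sizes and values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" EPI data: 4x4x4 voxels, 100 TRs (stand-in for scanner data).
real = rng.normal(loc=1000.0, scale=50.0, size=(4, 4, 4, 100))

# Step 1: per-voxel mean and stdev through time (what 3dTstat computes).
mean = real.mean(axis=-1, keepdims=True)
stdev = real.std(axis=-1, ddof=1, keepdims=True)

# Step 2: Gaussian noise matched voxel-wise to those statistics
# (one draw per voxel per TR, analogous to 3dcalc's gran(m,s)).
synth = rng.normal(loc=mean, scale=stdev, size=real.shape)
```

Note that drawing the whole time series in one call gives a full 4D synthetic dataset at once, rather than one volume per TR.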
2. 3dTstat offers two stdevs: -stdev (detrended) and -stdevNOD (not detrended). I should use -stdevNOD, right?
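To make the distinction concrete, here is a small NumPy illustration (the drift slope and noise level are made-up numbers) of why the detrended and non-detrended stdevs differ on a time series with a linear drift:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200, dtype=float)
ts = 5.0 * t + rng.normal(0.0, 10.0, size=t.size)  # linear drift + noise

# Non-detrended stdev ("NOD"): the drift inflates it.
sd_nod = ts.std(ddof=1)

# Detrended stdev: remove a least-squares linear fit first,
# leaving roughly the noise stdev (~10 here).
coef = np.polyfit(t, ts, deg=1)
resid = ts - np.polyval(coef, t)
sd_detrended = resid.std(ddof=1)
```

So the choice matters a lot whenever the real data still contain drifts at the point where the stdev is measured.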
3. 3dcalc -expr "gran(m,s)" works like a champ, but I can't find a clean way to make a multi-brick bucket of synthetic data (one brick per TR of the real data). I have successfully made many (1788) single-brick buckets, pasted them together with "3dTcat", and then run "3drefit -TR 2.0 -epan". This works, but it is cumbersome. Can you think of a more efficient way of doing this? Lastly, do you think I can run 3dcalc with "-short" instead of the default "-float" to save disk space?
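On the short-vs-float question, my understanding (an assumption, worth checking) is that storing as 16-bit shorts means the values get scaled into the int16 range, so the precision cost depends on the data's dynamic range. A rough NumPy sketch of that quantization tradeoff, with made-up values:

```python
import numpy as np

rng = np.random.default_rng(2)
vals = rng.normal(1000.0, 50.0, size=10000)  # toy float "brick"

# Scale so the largest magnitude fits in int16 (one scale per brick;
# this mimics a per-sub-brick scale factor, an assumption on my part).
scale = np.abs(vals).max() / 32767.0
as_short = np.round(vals / scale).astype(np.int16)

# Round-trip back to float to see the quantization error.
restored = as_short.astype(float) * scale
max_err = np.abs(restored - vals).max()  # at most ~scale/2
```

For data like this, the worst-case error is a tiny fraction of the noise stdev, which suggests shorts would be harmless here — but I'd welcome confirmation.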
Many thanks, everybody!
Antonio