This is just a general question. I am walking a new student through each step of a preprocessing pipeline as generated by uber_subject.py. In the 'volreg' section of the proc script, after the TLRC step, the script does the following:
"# create an all-1 dataset to mask the extents of the warp"
Then it makes an intersection mask for the runs and creates an 'extents' mask of all runs.
My understanding of this step is that it removes any voxel time series containing invalid data, but I have a couple of questions about its purpose.
1) It seems from the math that a voxel has to have invalid data during ALL runs for it to be masked out. If you use 3dMean on all of the runs (say you have 3 runs) and then mask out only voxels with a mean value less than 1, wouldn't you retain a voxel that had a minimum value of 0 during 2 runs but a minimum value of 4 during the last run? If that's true, I don't really see the point of the command, since the voxel has invalid data during 2 of the runs and just happens to have an arbitrarily high minimum value in the last run that gets it over the masking hump. However, it's also likely that I'm not understanding the command correctly, so I just wanted to make sure these commands are doing what I think they're doing.
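Just to spell out the arithmetic I'm describing, here is a toy sketch in plain Python for a single voxel. The values are made up for illustration: run_minimums stands for that voxel's hypothetical minimum value over time in each of 3 runs, and the "4" is the arbitrarily high minimum from my example, not anything from real data.

```python
# Hypothetical per-run minimum (over time) for one voxel across 3 runs:
# runs 1 and 2 have invalid data (min 0), run 3 has a high minimum of 4.
run_minimums = [0, 0, 4]

# Analogue of averaging the per-run masks with 3dMean:
mean_of_mins = sum(run_minimums) / len(run_minimums)

# Masking out anything with a mean value less than 1 would keep this voxel:
in_extents_mask = mean_of_mins >= 1

print(mean_of_mins, in_extents_mask)  # 1.333..., True -> voxel retained
```

If that is really how the masking works, the voxel survives despite having invalid data in two of the three runs, which is the behavior I'm questioning.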
2) My follow-up question is again about the purpose of the extents mask, since it seems - at least with the mask created for my data - that all of the data removed at this step lies outside the final automask anyway. So wouldn't it be more efficient to just apply the automask at a later step?
Thanks for any clarification here. I just don't want to be giving the wrong information to people.
Lauren