If I have a dataset, foo0+tlrc, that is a 0/1 mask, scaled:
-- At sub-brick #0 '#0' datum type is short: 0 to 32767 [internal]
[* 3.05185e-05] 0 to 1 [scaled]
and I run 3dWarp on it:
3dWarp -fsl_matvec -matvec_in2out ../ROI2xfm.txt -linear -prefix foo0_warp foo0+tlrc
my output is:
-- At sub-brick #0 '#0' datum type is float: 0 to 1 [internal]
[* 3.05185e-05] 0 to 3.05185e-05 [scaled]
So, my "1" (or 32767) values are now 3.05185e-05. (Note: linear and NN resampling both do this.)
Now, if I have a dataset that is
Number of values stored at each pixel = 1
-- At sub-brick #0 '#0' datum type is short: 0 to 1
(created by 3dcalc -prefix foo1 -a foo0+tlrc -expr 'step(a)' -nscale)
and I apply the same warping, I get:
Number of values stored at each pixel = 1
-- At sub-brick #0 '#0' datum type is short: 0 to 1
So, my "1" is still "1". It seems that when there is a scaling parameter in the HEAD file, the output is a float -- fair enough. But when this is the case, the scaling parameter is either not being applied properly or is being carried over into the float output dataset, where I can't see how it would be useful.
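To spell out the arithmetic as I read it (a sketch in Python, using the numbers from the output above; the variable names are just for illustration):

```python
# Brick scale factor stored in the HEAD file, ~ 1/32767
SCL = 3.05185e-05

# Input dataset: short datum, internal values 0..32767,
# so the true (scaled) values are 0..1:
internal_short = 32767
true_value = internal_short * SCL   # ~= 1.0, as expected

# After 3dWarp: datum is float and the internal values are
# already 0..1, i.e. the scaling has effectively been applied...
internal_float = 1.0
# ...but the scale factor appears to be carried over into the
# output HEAD file and applied a second time when reporting
# the scaled range:
displayed = internal_float * SCL    # 3.05185e-05, not 1
```

That double application would explain why the "1" values show up as 3.05185e-05.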
Am I missing something (the caffeine isn't onboard yet, so that's certainly a possibility), or is this non-ideal 3dWarp behavior? (FWIW, this is to extend the ROI-AL technique I just described in a J NSci article to use a lot more of AFNI.)
Craig