Hi Elizabeth.
What you are seeing is not surprising, but it takes us back to the
original -fscale comment, and some specifics about the 3dcalc
program. 3dcalc does not know where your data is coming from,
so it tries (or Dr. Cox tried, when he wrote it) not to assume what
type of output the user may want.
In 3dcalc, if the output data type is short, the output is automatically
scaled only when all of the results are between +/- 1, or when they
exceed the maximum signed value for the type (32767 for shorts).
By scaled I mean a multiplicative factor is applied before the values
are stored. So for that program and short output, it is common for
the results to be all integral. This is why Doug Ward and others have
mentioned using the -fscale option.
Note that 3dDeconvolve, for instance, automagically scales the
short output. That was a decision Doug Ward made, based on his
understanding of what the output would be like.
So from your first 3dcalc command,
3dcalc -a TS+orig -expr "a*.2789" -datum short -prefix TS_2+orig
TS_2 is all integral (since the datum is short, and there is no
-fscale option). After that, even if you multiply by 1.0 and convert
to floats, the data is still integral (though it is stored in float format).
When values are printed, if they have no fractional part, they are
often printed as integers, just for readability. People usually don't
want to see -17.00000 when they could see -17.
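If you want to confirm that, you could look at the actual values in
each dataset. As a rough sketch (untested here, and assuming the
datasets are in the current directory), something like:

   3dBrickStat -min -max TS_2+orig
   3dmaskdump -noijk TS_2+orig | head

should show only whole numbers for the unscaled short result.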
If you want to produce an accurate float-type dataset, change the
datum above to float. If you want to verify that a short dataset can
preserve the fractional parts (to a limited degree of accuracy), then
add the -fscale option to that command, and redo your second
command of multiplying by 1.0.
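To be concrete, here is a sketch of those two variations (the output
prefixes are just made-up names, so adjust to taste):

   # keep full precision by writing floats
   3dcalc -a TS+orig -expr "a*.2789" -datum float -prefix TS_float

   # keep shorts, but let 3dcalc attach a scale factor
   3dcalc -a TS+orig -expr "a*.2789" -datum short -fscale -prefix TS_scaled

Then repeat the multiply-by-1.0 step on the -fscale result and
compare the values.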
I hope this is clear enough. Please let me know if you have more
questions.
- rick