posted 18 years ago
I think loss of "significance" would be a better choice. When we convert large ints to float we get a loss of precision but no loss of significance, since the assumption is that the low-order digits of a float are not significant.
Not a bad term there. The only problem is that it's not widely used already. "Loss of precision" may not be great terminology, but it's in common usage in the numeric computing community. I also disagree slightly with the later point: in casting int to float there's often no "loss of significance", but there certainly may be; we don't know how many digits really are significant.
Or putting it another way - to me, "significance" refers to what the value means. If a variable x represents a number of apples, and then someone says it "lost its significance", to me that implies that it no longer refers to a number of apples. (Perhaps because someone inadvisedly added the number of oranges to it, for example.) While Tom's new usage of "significance" may be clever, it doesn't match up well with existing usage, IMO.
Perhaps we could say:
int -> float : "Possible loss of least significant bits."
float -> int : "Possible loss of most significant bits."
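A small Java sketch may make the two directions concrete. The class name and values here are just illustrative; the point is that a float's 24-bit significand cannot hold all the bits of a large int, while casting float to int discards the fractional part:

```java
public class LossDemo {
    public static void main(String[] args) {
        // int -> float: an int can need up to 31 significant bits,
        // but a float significand holds only 24, so the low-order
        // bits of a large int are rounded away.
        int big = 123456789;
        float f = big;               // implicit widening, but lossy
        System.out.println((int) f); // prints 123456792, not 123456789

        // float -> int: requires an explicit cast; the fractional
        // part is simply truncated.
        float g = 3.99f;
        System.out.println((int) g); // prints 3
    }
}
```

So whichever name the message uses, both conversions can silently throw bits away; they just differ in which end of the number is at risk.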
I don't see what's so bad about "loss of information". It's perfectly accurate, and it at least draws attention to the problem. You just have to decide for yourself what it means in a given context. But that's true of most error messages, after all. :roll:
"I'm not back." - Bill Harding, Twister