I am assigning a 32-bit int literal to a byte and it compiles. It appears that the compiler implicitly narrows the literal so that it will fit into the smaller byte.
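The original snippet isn't shown, but a minimal example of the compiling case presumably looked something like this (variable names are my own):

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        byte b = 100;       // int literal 100 fits in a byte, so the compiler narrows it
        // byte b2 = 200;   // would NOT compile: 200 is outside byte's range (-128..127)
        System.out.println(b);  // prints 100
    }
}
```

Note the narrowing only happens because the literal is a compile-time constant that fits in the target range.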
However, the following will not compile:
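The failing snippet isn't shown either; presumably it was something along these lines (the commented-out line is the one javac rejects as a lossy conversion):

```java
public class FloatDemo {
    public static void main(String[] args) {
        // float f = 4.5;      // does NOT compile: 4.5 is a double literal
        float f = 4.5f;        // fine: the f suffix makes it a float literal
        float g = (float) 4.5; // also fine: explicit cast
        System.out.println(f + " " + g);
    }
}
```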
I am assigning a double literal to a float. Why don't I get the same "it compiles" behavior when assigning a literal double to a float? The compiler seems willing to narrow the big int when assigning to a byte, but not the big double when assigning to a float.
Because a double variable uses more bits than a float to represent the significand (the fractional part) of the number, converting from double to float is very likely to lose precision. That is, the float variable may end up with a different value from the double variable.
Because ints don't have fractional parts. There are finitely many values between the lowest and highest byte values, so every value in between can be represented by both a byte and an int. However, there are infinitely many real values between the lowest and highest float values. So even if a double value falls within float's range, it may be a value that a float can't represent exactly, and assigning it to a float would lose precision.
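To illustrate the point above: a sketch of a double that is comfortably inside float's range but still can't survive the round trip (0.1 is my own example, not from the thread):

```java
public class RangeDemo {
    public static void main(String[] args) {
        double d = 0.1;           // well within float's range
        float f = (float) d;      // explicit cast compiles, but rounds the value
        System.out.println((double) f == d);  // prints false: precision was lost
    }
}
```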
Because ints use exact values. When you assign 4 to a byte, you can be sure that it will fit.
For floating point values, the same may not be true. The literal 1.2 may actually have a different value in double precision than in single precision. Some decimal values cannot be represented exactly in binary with finite precision.
So 1.2 may actually be something like 1.19999 as a double, while it's 1.21 as a float (these values are exaggerated of course).
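You can see the real (non-exaggerated) difference by comparing the literal at both precisions; a small sketch:

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        float  f = 1.2f;
        double d = 1.2;
        // Widening the float back to double shows the two values differ
        System.out.println((double) f == d);  // prints false
        System.out.println((double) f);       // roughly 1.2000000476837158
        System.out.println(d);                // prints 1.2
    }
}
```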
And I see Joanne already answered your question.