In the code above, a cast was performed. Why is casting necessary? I know that since the expression is int / int, there is a possibility of a fractional result, which is why the result is declared as a double. However, before I learned about casting, I could declare the result variable as a double even when the other two variables were declared as int. So in which situations must casting be used, and is it used in practice in real-world projects? Thank you.
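A minimal sketch of the distinction (my own example, not the OP's code): declaring the result as `double` is not enough, because the int / int division happens *before* the assignment. The cast promotes an operand so the division itself is done in floating point.

```java
public class CastDemo {
    public static void main(String[] args) {
        int a = 10;
        int b = 4;

        // int / int is evaluated first (10 / 4 == 2), then widened to 2.0:
        double wrong = a / b;

        // casting one operand promotes the whole division to double:
        double right = (double) a / b;

        System.out.println(wrong);  // 2.0
        System.out.println(right);  // 2.5
    }
}
```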

This may help: http://www.java2s.com/Book/Java/0020__Language-Basics/The_Type_Promotion_Rules.htm

(number promotion)

Julian West wrote:...with "possible loss of precision"

From the information I gathered in this thread, since integer division always discards the fractional part, you cannot say that integer division results in "loss of precision". You can say that integer division results in "loss of information" (which is worse than "loss of precision"). Precision comes into play with datatypes that have fractional parts. Precision is about how close measured values are to each other. So if you take multiple `double` measurements, you will get higher precision than multiple `float` measurements.

Campbell Ritchie wrote:

Ganesh Patekar wrote:. . . results var = 2 because var/4 means 10/4 which results in 2, where we lost precision, i.e. 0.5, because both divisor and dividend are int. . . .

I am not sure I would call that loss of precision.

That *is* loss of information. And discarding the fractional part is actually also called loss of precision. Try assigning a double to an int without casting it, and see what compiler error you get.

*The mind is a strange and wonderful thing. I'm not sure that it will ever be able to figure itself out, everything else, maybe. From the atom to the universe, everything, except itself.*

The JLS tends to use the phrase "loss of precision" when dealing with floating point numbers and "loss of information" when dealing with whole numbers.

5.1.3. Narrowing Primitive Conversion wrote:A narrowing primitive conversion from double to float is governed by the IEEE 754 rounding rules (§4.2.4). This conversion can lose precision, but also lose range, resulting in a float zero from a nonzero double and a float infinity from a finite double. A double NaN is converted to a float NaN and a double infinity is converted to the same-signed float infinity.

A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
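A quick illustration of the two JLS rules quoted above (the specific values are my own, chosen to trigger each effect):

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        // double -> float can lose range, not just precision:
        System.out.println((float) 1e300);   // Infinity (finite double -> float infinity)
        System.out.println((float) 1e-300);  // 0.0 (nonzero double -> float zero)

        // int -> byte discards all but the 8 low-order bits,
        // which can flip the sign (130 is 0b1000_0010):
        System.out.println((byte) 130);      // -126
    }
}
```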

Daniel Cox wrote:

Julian West wrote:...with "possible loss of precision"

From the information I gathered in this thread, since integer division always discards the fractional part, you cannot say that integer division results in "loss of precision". You can say that integer division results in "loss of information" (which is worse than "loss of precision"). Precision comes into play with datatypes that have fractional parts. Precision is about how close measured values are to each other. So if you take multiple `double` measurements, you will get higher precision than multiple `float` measurements.

Campbell Ritchie wrote:

Ganesh Patekar wrote:. . . results var = 2 because var/4 means 10/4 which results in 2, where we lost precision, i.e. 0.5, because both divisor and dividend are int. . . .

I am not sure I would call that loss of precision.

You quoted me out of context, missing my point. To clarify:

If the compiler didn't force us to explicitly promote integers to doubles as the OP discussed, then we wouldn't be doing integer division implicitly; instead we would get the "possible lossy conversion" (incompatible types) compiler error, like the following does:
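The snippet being referred to was not preserved in this thread; a minimal reconstruction of a line that produces that error (my own example) would be:

```java
public class LossyDemo {
    public static void main(String[] args) {
        // int i = 10.5;
        // ^ does not compile:
        //   error: incompatible types: possible lossy conversion from double to int

        int i = (int) 10.5;  // explicit cast required; the fraction is discarded
        System.out.println(i);  // 10
    }
}
```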

If stuffing a double-precision floating point fraction into a single-precision floating point variable is a loss of precision, then certainly stuffing the same double-precision fraction into a zero-precision fraction (integer) is a greater loss of precision.

Precision is an attribute of information. I could say that a new flux capacitor costs $30K, $32K, $32.7K, $32,768, or $32,768.16. It's the same information, just with different precision. A hallway conversation wouldn't warrant two-decimal-place precision, but the person disbursing payment needs it.

Implicitly, Java math is all integers, so 10/4 = 2; the fraction isn't dropped, because it was never there in the first place: we're dealing only with ints until we explicitly specify otherwise. Hence...

...is the same as...

...both yield 0.0 since there were never any fractions. "I bought two flux capacitors @ $30K each; I spent $60K."
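The two snippets referred to above were not preserved in this thread; assuming values in the spirit of the discussion, they were presumably along these lines (my reconstruction):

```java
public class IntegerMathDemo {
    public static void main(String[] args) {
        // int division happens first: 1 / 4 == 0, then widened to double
        double a = 1 / 4;

        // casting after the int division has already run changes nothing
        double b = (double) (1 / 4);

        System.out.println(a);  // 0.0
        System.out.println(b);  // 0.0
    }
}
```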

If the compiler weren't designed this way, we would have to explicitly cast to (int) whenever we do simple integer division, since the potential for a fraction would always exist, making the "possible lossy conversion" error an ever-present issue.

I say all this not just for the sake of semantics, but so we understand (heh) precisely what is going on and why, rather than relying on rote memory to pass the exam and wanting to punch the compiler in the face when things don't seem to make sense.

Stephan van Hulst wrote:Try assigning a double to an int without casting it, and see what compiler error you get.

Yes. I can see how "loss of precision" applies here, because you're losing the double precision of the `double` datatype. However, when dealing with whole numbers (as in integer division), there is no precision to lose.

Julian West wrote:You quoted me out of context, missing my point.

Oh I see. I think I quoted you out of context. I see you're describing a scenario where the compiler allowed the assignment of `double` to `int`, in which case "loss of precision" would be a fitting description of what's happening.

Daniel Cox wrote:However, when dealing with whole numbers (as in integer division), there is no precision to lose.

Right.
