I am confused by the behaviour of float and double in Java.

Say I have

float a2 = 99999.8f;

float b2 = 99999.65f;

which, when I add them, gives me 199999.44. However, when I use double it gives me the correct output.

Since float gives precision up to 6 digits after the decimal point, shouldn't it also give me the correct answer?

Thanks in advance

Rupjit Chakraborty wrote:Since float gives precision upto 6 digits after decimal...

Unfortunately that isn't a true statement. What is true is that float values provide approximately 6 significant digits. In your example, several of the significant digits are to the left of the decimal point.
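To see the significant-digit budget at work, here is a minimal sketch of the original example (the class name is mine):

```java
// Demonstrates that float's roughly 7 significant digits are mostly
// consumed by the integer part, leaving too few for the fraction.
public class FloatPrecisionDemo {
    public static void main(String[] args) {
        float a2 = 99999.8f;
        float b2 = 99999.65f;
        // Five significant digits already sit left of the decimal point,
        // so the fraction of the sum cannot be held exactly.
        System.out.println(a2 + b2);   // prints 199999.44

        double a = 99999.8;
        double b = 99999.65;
        // double has ~15-16 significant digits to spend, so the same
        // sum displays as expected.
        System.out.println(a + b);
    }
}
```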

Rupjit Chakraborty wrote:. . . when I use double it gives me the correct output. . . .

You might see the “correct” display, but you probably do not get the correct result. That is because you can only represent a small proportion of fractions exactly in binary. It is like trying to print ⅓ exactly in decimal. Try it. Hint: it starts with 0.333. The first time I tried that arithmetic I copied it wrongly and wrote 99999.9 + 99999.85, which makes 199999.75. The fractional part is ¾, which can be exactly expressed in binary (0.1100), but it is represented differently inside a `double`.

[edit] Don't use floating-point arithmetic if you require precision. Don't use `float`s at all (unless some part of the API requires them).
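A quick sketch of the ¾-versus-other-fractions point (the class name and the 0.65 example are mine):

```java
import java.math.BigDecimal;

// Shows which decimal fractions survive the trip into binary.
public class BinaryFractionDemo {
    public static void main(String[] args) {
        // 3/4 = 0.11 in binary, so it is stored exactly:
        System.out.println(0.5f + 0.25f == 0.75f);   // true
        // 0.65 has no finite binary expansion; BigDecimal reveals the
        // nearest value the float actually holds:
        System.out.println(new BigDecimal(0.65f));
    }
}
```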

Rupjit Chakraborty wrote:I am confused by the behaviour of float and double in Java.

This is still the best page I know on the subject. It's quite detailed, and it contains some maths that goes over my head, but it's a comprehensive description of the IEEE standard on which Java (and many other languages) base their floating-point types.

HIH

Winston

"Leadership is nature's way of removing morons from the productive flow" - Dogbert

Articles by Winston can be found here

The "precision" in question refers to the number of trustworthy digits to the right of the decimal point. Interestingly, it is sometimes fractional. For example, 6½-digit precision means that for certain predictable values you can actually trust 7 digits.

Floating-point is a fuzzy number representation. Not merely because there's an abstract cutoff point in decimal digits, but because on most machinery floating-point is held in binary form, and certain very popular decimal fractions have no finite representation in binary. Not just the obvious ones like 1/3, but the more significant 1/10, which is why printed floating-point numbers are prone to look imprecise.
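The 1/10 point is easy to demonstrate; a small sketch (the class name is mine):

```java
// 1/10 is a repeating fraction in binary (0.000110011001100...),
// so adding 0.1f a hundred times does not land exactly on 10.
public class TenthDrift {
    public static void main(String[] args) {
        float sum = 0f;
        for (int i = 0; i < 100; i++) {
            sum += 0.1f;
        }
        System.out.println(sum);          // close to, but not exactly, 10
        System.out.println(sum == 10f);   // false
    }
}
```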

Also, "double-precision" is a slight misnomer. What it really means is "double the storage". Take, for example, the old IBM mainframe floating-point representation (which is a little weird, but good enough). A single-precision floating-point number occupied 4 bytes of main store, consisting of a sign bit, a 7-bit exponent, and the fractional part (mantissa). So in single precision there were 3 bytes available for the mantissa, but in double precision there were 7 bytes. The exponent range is the same for both formats.

An IDE is no substitute for an Intelligent Developer.

Tim Holloway wrote:. . . Take, for example, the old IBM mainframe floating-point representation (which is a little weird, but good enough). . . .

But that is a different format from IEEE 754. A `float` uses 23 bits for the mantissa and a `double` uses 53 bits. Multiply those numbers by log₁₀ 2 and you get the number of digits' precision in the normalised range of the number.
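That multiplication is easy to check (a one-off sketch; the class name is mine):

```java
// Converts the mantissa bit counts quoted in this thread into
// decimal digits of precision via log10(2).
public class DigitsOfPrecision {
    public static void main(String[] args) {
        System.out.println(23 * Math.log10(2)); // ~6.92 digits for float
        System.out.println(53 * Math.log10(2)); // ~15.95 digits for double
    }
}
```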

Campbell Ritchie wrote:But that is a different format from IEEE 754. . . . A `float` uses 23 bits for the mantissa and a `double` uses 53 bits. Multiply those numbers by log₁₀ 2 and you get the number of digits' precision in the normalised range of the number.

Same general concept, though. I'm just showing my age!


Fred Kleinschmidt wrote:Tim Holloway wrote:The "precision" in question refers to the number of trustworthy digits to the right of the decimal point.

This is not strictly true. For example, how many trustworthy digits are there after the decimal point in 0.000000000000123456f ? or 10000000.1234f?

There is absolutely no way to tell. "Trustworthy" is a hardware issue. Also, your first example is unnormalized: 0.000000000000123456f is not the same thing internally as 1.23456e-13 (if I've counted decimals correctly).

Don't mistake the mathematical precision for the machine precision.
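One way to see machine precision directly in Java is `Math.ulp`, which gives the gap between a value and its nearest representable neighbour (the class name is mine):

```java
// The gap between adjacent floats (the "ulp") grows with magnitude,
// so how many digits after the decimal point are trustworthy
// depends on how big the number is.
public class UlpDemo {
    public static void main(String[] args) {
        System.out.println(Math.ulp(1.0f));           // ~1.19e-7
        System.out.println(Math.ulp(10000000.1234f)); // 1.0 -- no fractional digits survive
        System.out.println(Math.ulp(1.23456e-13f));   // a far tinier gap for tiny numbers
    }
}
```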


Tim Holloway wrote:There is absolutely no way to tell. "Trustworthy" is a hardware issue.

Erm, no it isn't. It might be an issue of the internal *format*, but that's almost certainly dictated by a standard (like IEEE-754). And Java `float`s and `double`s will be the same on *any* machine.

It also has less to do with the decimal point than with the absolute number of digits in the written (decimal) form of the number, not counting leading zeroes. And that applies whether it's in standard form or just written as a regular number, with or without a decimal point.

The trouble is that the value for `float` is 6.9236899 (23 × log₁₀ 2), which is slightly less than 7, so you can generally rely on 6 digits, while the 7th will be "close".

However, I agree that you shouldn't mistake precision for *accuracy*. Both `float` and `double` can store 0.5 __exactly__, but there aren't many fractional values they can store exactly.

It's also governed by the size (ie, magnitude) of the number. A `float` can't, for example, hold 10,000,000.5 (or indeed 10,000,000) exactly, because there are more than 6.92 digits in the written form.
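The fate of the .5 in that example can be checked directly (a sketch; the class name is mine):

```java
// Above 2^23 adjacent floats are a whole 1.0 apart, so the
// fractional part of a large number is rounded away.
public class MagnitudeDemo {
    public static void main(String[] args) {
        float f = 10000000.5f;
        System.out.println(f == 10000000f);        // true: the .5 is lost
        System.out.println(Math.ulp(10000000f));   // 1.0: the gap between neighbours here
    }
}
```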

IEEE-754 also has the business of "denormal" values which complicates the whole thing even more, but that's the general idea.

Winston


Which, incidentally, does support a "native" floating-point option in addition to the write-once/run-anywhere IEEE-754 standard JVM option. Or at least it did. I'm having problems finding the magic word that declares it now.

In fact, I'm reasonably sure that IBM had a lot to do with that option, since about 40 years of data in that old format were floating around* when Java first got started, and since floating-point generally expects high-speed computations, converting formats was overhead to be avoided.

However, what goes around, comes around. IEEE-754 support is native in the IBM zSeries mainframes and I doubt not that it was to support Java in large part.

===

*Absolutely no pun intended


Tim Holloway wrote:Anyone with a stake in floating point should read this: http://introcs.cs.princeton.edu/java/91float/

Bookmarked. Thanks Tim.

Winston


Winston Gutkowski wrote:A `float` can't, for example, hold 10,000,000.5 (or indeed 10,000,000) exactly.

My mistake. It can hold 10,000,000 (in fact, any whole number up to ±16,777,216) exactly.
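That limit is 2^24, the capacity of the significand, and it can be probed directly (the class name is mine):

```java
// 2^24 = 16,777,216 is the edge of float's exact-integer range.
public class ExactIntegerRange {
    public static void main(String[] args) {
        System.out.println(16777215f == 16777215);       // true: still exact
        // One past 2^24, the odd integer 16777217 is unrepresentable,
        // so adding 1 has no effect:
        System.out.println(16777216f + 1f == 16777216f); // true
    }
}
```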

Winston


Campbell Ritchie wrote:I was mistaken too; a `float` uses 24 bits, not 23, in its normal range.

If you include the sign bit.

Working from left to right:

`float` is specifically 1 + 8 + 23 (sign / exponent / fraction) (the Wikipedia article doesn't use the word 'mantissa')

`double` is 1 + 11 + 52
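Those fields can be pulled apart in Java with `Float.floatToIntBits` (a sketch; the class name and the -0.75 example are mine):

```java
// Unpacks the 1 + 8 + 23 (sign / exponent / fraction) layout
// of an IEEE 754 float.
public class FloatLayout {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(-0.75f);     // -0.75 = -1.1 (binary) x 2^-1
        int sign     = bits >>> 31;                  // 1 bit
        int exponent = ((bits >>> 23) & 0xFF) - 127; // 8 bits, biased by 127
        int fraction = bits & 0x7FFFFF;              // 23 explicit fraction bits
        System.out.println(sign);      // 1 (negative)
        System.out.println(exponent);  // -1
        System.out.println(Integer.toBinaryString(fraction)); // 1 followed by 22 zeros
    }
}
```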

Winston


Tim Holloway wrote:Which, incidentally does support a "native" floating-point option in addition to the write-once/run-anywhere IEEE-754 standard JVM option. Or at least it did. I'm having problems finding the magic word that declares it now.

Perhaps you're thinking of "strictfp"? Last time I heard of this concept it was because it didn't actually have any effect in the Sun JVMs of the time, and that was eons ago.

Campbell Ritchie wrote:Similarly a double has 53 bits of precision, but only in its normal range.

Which would imply 7 decimal digits for a `float` then (7.22 by your formula).

<sigh>Another mistake by yours truly.</sigh>

Question for the hardware geeks out there:

We're told to prefer `double`s over `float`s in Java, which in these days of 64-bit architecture seems to make sense. But are 64-bit flops actually *native*?

I read somewhere that some chips (can't remember which) actually use 80-bit FP, which would suggest that some conversion to/from 64 is required.

I realise it's micro-optimization of the first water, but just wondered...

Winston


Maybe after Intel got their fingers burnt over fp arithmetic twenty years ago (the Pentium FDIV bug), all chip manufacturers will use > 64 bits to avoid repeating the fiasco.
