float vs double in Java

 
Rupjit Chakraborty
Greenhorn
Hi

I am confused by the behaviour of float and double in Java.

Say I have
float a2 = 99999.8f;
float b2 = 99999.65f;
and printing a2 + b2 gives me 199999.44. However, when I use double it gives me the correct output.
Since float gives precision up to 6 digits after the decimal point, shouldn't it also give me the correct answer?

Thanks in advance
 
Paul Clapham
Sheriff
Rupjit Chakraborty wrote:Since float gives precision up to 6 digits after the decimal point...


Unfortunately that isn't a true statement. What is true is that float values provide approximately 6 significant digits. In your example, several of the significant digits are to the left of the decimal point.
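
For the numbers in the original post, a minimal sketch of that difference (assuming a standard IEEE-754 JVM, which is what Java's float and double give you; the outputs in the comments are what that assumption produces):

public class FloatVsDouble {
    public static void main(String[] args) {
        // float: roughly 7 significant decimal digits in total, and six of them
        // are already spent to the left of the decimal point here
        float fa = 99999.8f;
        float fb = 99999.65f;
        System.out.println(fa + fb);   // prints 199999.44

        // double: roughly 15-16 significant digits, so the display looks right
        double da = 99999.8;
        double db = 99999.65;
        System.out.println(da + db);   // prints 199999.45
    }
}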
 
Campbell Ritchie
Marshal
Rupjit Chakraborty wrote:. . . when I use double it gives me the correct output. . . .
You might see the “correct” display, but you probably do not get the correct result. That is because you can only represent a small proportion of fractions exactly in binary. It is like trying to print ⅓ exactly in decimal. Try it. Hint: it starts with 0.333. The first time I tried that arithmetic I copied it wrongly and wrote 99999.9 + 99999.85, which makes 199999.75. The fractional part is ¾, which can be expressed exactly in binary (0.11), but it is represented differently inside a double.

[edit]Don't use floating‑point arithmetic if you require precision. Don't use floats at all (unless some part of the API requires them).
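
One way to see what Campbell means is to feed a double to BigDecimal's double constructor, which exposes the exact value the double actually holds. A minimal sketch (the commented output is what IEEE-754 binary64 gives; the last value is truncated here):

import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // 1/10 is a recurring fraction in binary (like 1/3 in decimal),
        // so the nearest double to 0.1 is slightly too large:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625

        // 3/4 terminates in binary (0.11), so it is stored exactly:
        System.out.println(new BigDecimal(0.75));   // 0.75

        // The "correct-looking" double sum from the original post is not
        // exactly 199999.45 inside the double either:
        System.out.println(new BigDecimal(99999.8 + 99999.65));
        // 199999.45000000001164... (the exact stored value)
    }
}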
 
Winston Gutkowski
Bartender
Rupjit Chakraborty wrote:I am confused by the behaviour of float and double in Java.

This is still the best page I know on the subject. It's quite detailed, and it contains some maths that goes over my head, but it's a comprehensive description of the IEEE standard on which Java (and many other languages) base their floating-point types.

HIH

Winston
 
Tim Holloway
Bartender
Actually, the more accurate terminology is "single-precision floating point" and "double-precision floating point".

The "precision" in question refers to the number of trustworthy digits to the right of the decimal point. Interestingly, it is sometimes fractional. For example 6-1/2 digit precision means that for certain predictable values, you can actually trust 7 digits.

Floating-point is a fuzzy number representation. Not merely because there's an abstract cutoff point in decimal digits, but because on most machinery, floating-point is held in binary form, and certain very popular decimal fractions have no exact (terminating) representation in binary. Not just the obvious ones like 1/3, but also the more significant 1/10, which is why floating-point numbers are prone to printing imprecisely.

Also, "double-precision" is a slight misnomer. What it really means is "double the storage". Take, for example, the old IBM mainframe floating-point representation (which is a little weird, but good enough). A single-precision floating-point number would occupy 4 bytes of main store. The number itself consists of a sign bit, a 7-bit exponent, and the post-decimal part (mantissa). So in single-precision floating point, there were 3 bytes available for the mantissa but in double-precision, there were 7 bytes for mantissa. The exponent range is the same for both formats.
 
Campbell Ritchie
Marshal
Tim Holloway wrote:. . . Take, for example, the old IBM mainframe floating-point representation (which is a little weird, but good enough). . . .
But that is a different format from IEEE 754. A float uses 23 bits for the mantissa and a double uses 53 bits. Multiply those numbers by log₁₀ 2 and you get the number of digits' precision in the normalised range of the number.
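
A quick sketch of that calculation (the 23-versus-24-bit question gets sorted out further down the thread); multiplying the mantissa bits by log₁₀ 2 gives the decimal digits:

public class DigitsOfPrecision {
    public static void main(String[] args) {
        // decimal digits of precision = mantissa bits * log10(2)
        System.out.println(23 * Math.log10(2)); // ~6.92  (float, stored bits only)
        System.out.println(24 * Math.log10(2)); // ~7.22  (float, counting the implicit leading 1)
        System.out.println(53 * Math.log10(2)); // ~15.95 (double)
    }
}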
 
Tim Holloway
Bartender
Campbell Ritchie wrote:
Tim Holloway wrote:. . . Take, for example, the old IBM mainframe floating-point representation (which is a little weird, but good enough). . . .
But that is a different format from IEEE 754. A float uses 23 bits for the mantissa and a double uses 53 bits. Multiply those numbers by log₁₀ 2 and you get the number of digits' precision in the normalised range of the number.


Same general concept, though. I'm just showing my age!
 
Fred Kleinschmidt
Bartender
Tim Holloway wrote:The "precision" in question refers to the number of trustworthy digits to the right of the decimal point.

This is not strictly true. For example, how many trustworthy digits are there after the decimal point in 0.000000000000123456f? Or in 10000000.1234f?
 
Tim Holloway
Bartender
Fred Kleinschmidt wrote:
Tim Holloway wrote:The "precision" in question refers to the number of trustworthy digits to the right of the decimal point.

This is not strictly true. For example, how many trustworthy digits are there after the decimal point in 0.000000000000123456f? Or in 10000000.1234f?


There is absolutely no way to tell. "Trustworthy" is a hardware issue. Also, your first example is unnormalized: 0.000000000000123456f is not the same thing internally as 1.23456e-13f (if I've counted decimals correctly).

Don't mistake the mathematical precision for the machine precision.
 
Winston Gutkowski
Bartender
Tim Holloway wrote:There is absolutely no way to tell. "Trustworthy" is a hardware issue.

Erm, no it isn't. It might be an issue of the internal format, but that's almost certainly dictated by a standard (like IEEE-754). And Java floats and doubles will be the same on any machine.

It also has less to do with the decimal point than with the absolute number of digits in the written (decimal) form of the number, not counting leading zeroes. And that applies whether it's in standard form or just written as a regular number, with or without a decimal point.

The trouble is that the value for a float (23 × log₁₀ 2) is 6.9236899, which is slightly less than 7, so you can generally rely on 6 digits, while the 7th will be "close".

However, I agree that you shouldn't mistake precision for accuracy. Both float and double can store 0.5 exactly, but there aren't many fractional values that they can store exactly.
It's also governed by the size (i.e., magnitude) of the number. A float can't, for example, hold 10,000,000.5 (or indeed 10,000,000) exactly, because there are more than 6.92 digits in the written form.

IEEE-754 also has the business of "denormal" values, which complicates the whole thing even more, but that's the general idea.

Winston
 
Tim Holloway
Bartender
You got me. I used the term "hardware" rather loosely. In the case of Java, the "hardware" is the JVM.

Which, incidentally, does support a "native" floating-point option in addition to the write-once/run-anywhere IEEE-754 standard JVM option. Or at least it did. I'm having problems finding the magic word that declares it now.

In fact, I'm reasonably sure that IBM had a lot to do with that option, since about 40 years of data in that old format were floating around* when Java first got started; and since floating-point work generally expects high-speed computation, converting formats was overhead to be avoided.

However, what goes around, comes around. IEEE-754 support is native in the IBM zSeries mainframes and I doubt not that it was to support Java in large part.

===
*Absolutely no pun intended
 
Tim Holloway
Bartender
Anyone with a stake in floating point should read this: http://introcs.cs.princeton.edu/java/91float/
 
Winston Gutkowski
Bartender
Tim Holloway wrote:Anyone with a stake in floating point should read this: http://introcs.cs.princeton.edu/java/91float/

Bookmarked. Thanks Tim.

Winston
 
Winston Gutkowski
Bartender
Winston Gutkowski wrote:A float can't, for example, hold 10,000,000.5 (or indeed 10,000,000) exactly.

My mistake. It can hold 10,000,000 exactly (in fact any whole number up to ±16,777,216, i.e. 2^24).

Winston
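
A small check of that limit, assuming standard Java float behaviour (the comments show what those assumptions produce; the middle line relies on round-half-to-even):

public class FloatLimits {
    public static void main(String[] args) {
        // every whole number up to 2^24 = 16,777,216 fits in a float exactly...
        System.out.println(16777216f);                    // 1.6777216E7

        // ...but above that the gap between adjacent floats is wider than 1
        System.out.println(16777216f + 1f == 16777216f);  // true

        // and 10,000,000.5 needs more significant digits than a float has,
        // so it silently becomes 10,000,000
        System.out.println(10000000.5f);                  // 1.0E7
    }
}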
 
Campbell Ritchie
Marshal
I was mistaken too; a float uses 24 bits not 23 in its normal range.
 
Winston Gutkowski
Bartender
Campbell Ritchie wrote:I was mistaken too; a float uses 24 bits not 23 in its normal range.

If you include the sign bit.

Working from left to right:
float is specifically 1 + 8 + 23 (sign / exponent / fraction) (the Wikipedia article doesn't use the word 'mantissa')
double is 1 + 11 + 52

Winston
 
Campbell Ritchie
Marshal
No, in the normal range the mantissa might occupy 23 bits of memory, but it carries 24 bits of precision because it is presumed to start with a 1 and that bit is omitted. Similarly a double has 53 bits of precision, but only in its normal range.
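
You can see that hidden bit from Java itself: Float.floatToIntBits exposes the raw layout, and the implicit leading 1 of the significand is the one thing you won't find in it. A minimal sketch using 1.5f (binary 1.1):

public class FloatBits {
    public static void main(String[] args) {
        // 1.5 = 1.1 in binary, i.e. (implicit 1).1 x 2^0
        int bits = Float.floatToIntBits(1.5f);
        String s = String.format("%32s", Integer.toBinaryString(bits)).replace(' ', '0');

        System.out.println(s.substring(0, 1)); // sign:     0
        System.out.println(s.substring(1, 9)); // exponent: 01111111 (127, i.e. 2^0 after removing the bias)
        System.out.println(s.substring(9));    // fraction: 10000000000000000000000
        // only the ".1" part is stored; the leading "1." is implied,
        // which is how a 23-bit field gives 24 bits of precision
    }
}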
 
Paul Clapham
Sheriff
Tim Holloway wrote:Which, incidentally, does support a "native" floating-point option in addition to the write-once/run-anywhere IEEE-754 standard JVM option. Or at least it did. I'm having problems finding the magic word that declares it now.


Perhaps you're thinking of "strictfp"? Last time I heard of this concept it was because it didn't actually have any effect in the Sun JVMs of the time, and that was eons ago.
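
For reference, the keyword Paul mentions is applied like this (a minimal sketch); since Java 17 all floating-point arithmetic is strict anyway, so the modifier is now effectively a no-op:

// strictfp can modify a class, an interface, or a method; it pins the
// float/double arithmetic in that scope to strict IEEE-754 evaluation,
// with no extended intermediate exponent range.
public strictfp class StrictCalc {

    static double sum(double a, double b) {
        return a + b;   // evaluated with strict IEEE-754 semantics
    }

    public static void main(String[] args) {
        System.out.println(sum(99999.8, 99999.65));
    }
}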
 
Winston Gutkowski
Bartender
Campbell Ritchie wrote:Similarly a double has 53 bits of precision, but only in its normal range.

Which would imply 7 decimal digits for a float then (7.22 by your formula).
<sigh>Another mistake by yours truly.</sigh>

Question for the hardware geeks out there:

We're told to prefer doubles over floats in Java, which in these days of 64-bit architecture seems to make sense. But are 64-bit flops actually native?
I read somewhere that some chips (can't remember which) actually use 80-bit FP, which would suggest that some conversion to/from 64 is required.

I realise it's micro-optimization of the first water, but just wondered...

Winston
 
Campbell Ritchie
Marshal
Not being at all interested in hardware, I can only presume that my 64‑bit chip does native 64‑bit arithmetic on any doubles I care to use. Not sure where you will find a reference; maybe this Wikipedia page will help. Yes, chips are free to use > 64 bits for fp calculations, and also to extend the bits used for the exponent part (> 11 for a double) to avoid overflow to ∞. The latter phenomenon is mentioned in the JLS.

Maybe after Intel got their fingers burnt over floating-point arithmetic twenty years ago (the Pentium FDIV bug), all chip manufacturers will use > 64 bits to avoid repeating the fiasco.