When I run this it prints out "0.07692307692307693". Is that an error? Rounded correctly it should be "0.07692307692307692", shouldn't it? The repeating series is "076923". Do the Java specs allow a value returned that's off in the least significant digit?

Kevin S
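The code snippet from the original post was not preserved in this copy of the thread. A minimal program that reproduces the quoted output would be something along these lines (a hypothetical reconstruction; the class name `Rec` is borrowed from the later post):

```java
// Hypothetical reconstruction -- the original snippet was not preserved.
public class Rec {
    public static void main(String[] args) {
        System.out.println(1.0 / 13.0); // prints 0.07692307692307693
    }
}
```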

Kevin Simonson wrote: I wrote a little piece of code like so:

When I run this it prints out "0.07692307692307693". Is that an error? Rounded correctly it should be "0.07692307692307692", shouldn't it? The repeating series is "076923". Do the Java specs allow a value returned that's off in the least significant digit?

Kevin S

Furthermore, if I make it more general:

and run it with "java Rec 7", I get "1.0 / 7.0 == 0.14285714285714285". This time it *didn't* round up, but it *should* have, since the next digit would have been '7'. What's the deal here?

Kevin S
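The generalized snippet is also missing from this copy of the thread. Judging from the command `java Rec 7` and its output, it may have looked roughly like this (a hypothetical reconstruction):

```java
// Hypothetical reconstruction of the generalized program, run as "java Rec 7".
public class Rec {
    public static void main(String[] args) {
        int n = Integer.parseInt(args[0]);
        System.out.println("1.0 / " + n + ".0 == " + (1.0 / n));
        // for n == 7 this prints: 1.0 / 7.0 == 0.14285714285714285
    }
}
```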

Floating-point numbers (`float` and `double`) are in general much less accurate than what you got in your example. You're lucky that 1.0 / 13.0 gave such an accurate answer.

Java uses the ubiquitous IEEE floating point formats for the types `float` and `double`. (Most modern CPUs have hardware support for these formats.) They store floating-point numbers as binary fractions. Some decimal numbers, for example 0.1, cannot be stored with full precision in these types.
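As a small demo of that last point (not from the thread): the `BigDecimal(double)` constructor exposes the exact binary fraction a `double` actually holds, which for the literal 0.1 is not exactly one tenth.

```java
import java.math.BigDecimal;

public class BinaryFractionDemo {
    public static void main(String[] args) {
        // The double literal 0.1 is the nearest representable binary fraction:
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // The tiny error becomes visible in arithmetic:
        System.out.println(0.1 + 0.2); // prints 0.30000000000000004
    }
}
```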

Jesper de Jong wrote: Floating-point numbers (`float` and `double`) are in general much less accurate than what you got in your example. You're lucky that 1.0 / 13.0 gave such an accurate answer.

Java uses the ubiquitous IEEE floating point formats for the types `float` and `double`. (Most modern CPUs have hardware support for these formats.) They store floating-point numbers as binary fractions. Some decimal numbers, for example 0.1, cannot be stored with full precision in these types.

So you're saying that Java does the division of 1.0 by 13.0, stores the quotient in binary format, losing a little precision as it does that, and then with the `println()` call converts it to decimal, losing a little more precision as it does that, and ends up with a decimal number that's off from what it should be in its least significant digit? And that's okay?

Kevin S


Kevin Simonson wrote: So you're saying that Java does the division of 1.0 by 13.0, stores the quotient in binary format, losing a little precision as it does that, and then with the `println()` call converts it to decimal, losing a little more precision as it does that, and ends up with a decimal number that's off from what it should be in its least significant digit? And that's okay?

It's not a question of whether it is "okay" or not; it's just how floating-point numbers work in Java and most other programming languages.

`float` and `double` are not infinitely precise. If you think about it, they simply cannot be infinitely precise in principle, since they have a finite memory size (32 bits or 64 bits). To be able to store any arbitrary floating-point number with exact precision, you would need an unlimited amount of memory.
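To see that finite precision concretely for this thread's example (a small demo, not part of the original posts), one can print the exact decimal expansion of the binary fraction that the `double` quotient actually holds; it differs slightly from the true 1/13, and Java's `println` shows the shortest decimal string that round-trips back to that exact double, which here ends in ...693.

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // The exact value stored in the double -- close to, but not equal to, 1/13:
        System.out.println(new BigDecimal(1.0 / 13.0));

        // println shows the shortest decimal that maps back to the same double:
        System.out.println(1.0 / 13.0); // prints 0.07692307692307693
    }
}
```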

If you need really precise numbers, use `java.math.BigDecimal`.
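For example (a sketch, not from the original post), `BigDecimal.divide` with an explicit scale and rounding mode yields a correctly rounded quotient to as many digits as you ask for, avoiding the binary-fraction issue entirely:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Precise {
    public static void main(String[] args) {
        // 1/13 to 20 decimal places, correctly rounded (digit 21 is 6, so round up):
        BigDecimal q = BigDecimal.ONE.divide(new BigDecimal(13), 20, RoundingMode.HALF_UP);
        System.out.println(q); // prints 0.07692307692307692308
    }
}
```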