
Floating point arithmetic

 
Adrian Sosialuk
Ranch Hand
Posts: 57
Hi everyone,

This line:



outputs 7.999999999999999
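
For example (an illustrative expression only -- any similar line shows the same
behaviour under IEEE 754 double arithmetic):

    public class Example {
        public static void main(String[] args) {
            System.out.println(8.2 - 0.2);   // prints 7.999999999999999, not 8.0
            System.out.println(0.1 + 0.2);   // prints 0.30000000000000004, not 0.3
        }
    }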

This is due to how floating point numbers are stored (IEEE 754). When
writing code, sometimes it behaves as intended and sometimes it doesn't
(like the line above). Is there a way to "predict" when the code is OK and
when it isn't? Are there any tips for getting around this kind of problem?
Do I need to accept a compromise of slightly inaccurate results? Or should
I use BigDecimal whenever I need to be sure the result is accurate?

Cheers,

Adrian
 
Campbell Ritchie
Marshal
Posts: 56570
You have already answered your question. Yes, you ought to use BigDecimal whenever you require precision.

I don't think these imprecisions ("error" is too strong a word) are predictable. Try this sort of thing, too (see the sketch below); it's even more entertaining if you try the other four arithmetic operators as well!
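
A sketch of the kind of experiment meant here (an assumed reconstruction,
since the original snippet isn't shown): keep adding 0.1 to a running total
and print it after each step.

    public class RunningTotal {
        public static void main(String[] args) {
            double total = 0.0;
            for (int i = 0; i < 10; i++) {
                total += 0.1;                // try the other arithmetic operators here too
                System.out.println(total);
            }
            // Prints 0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6, 0.7,
            // 0.7999999999999999, 0.8999999999999999, 0.9999999999999999
        }
    }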
 
Stan James
(instanceof Sidekick)
Ranch Hand
Posts: 8791
I mostly think of floating types as good for the distances between stars. In inches. The exponent is so huge we've obviously multiplied in a ton of zeros on the end of the number and precision is plus or minus gazillions. Since floats work so nicely for that, what are the chances they'll work for money? I mean, the amount of money I have, not Scrooge McDuck.
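
A quick sketch of that point: near a hundred million, a float can only move
in steps of 8, so the cents simply disappear (the figures below are
illustrative).

    public class FloatMoney {
        public static void main(String[] args) {
            float balance = 123456789.12f;        // cannot be stored exactly in a float
            System.out.println(balance);          // prints 1.23456792E8
            System.out.println(balance + 0.01f);  // still 1.23456792E8 -- the cent is lost
        }
    }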
 
Peter Chase
Ranch Hand
Posts: 1970
For calculations (e.g. money) where you need total accuracy, you should normally use integer types. However, integer calculations cannot cope with big differences in magnitude of the numbers.
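
For instance, money is often kept as a whole number of the smallest currency
unit (a sketch, assuming the amounts fit comfortably in a long):

    public class Cents {
        public static void main(String[] args) {
            long priceInCents = 1999;              // 19.99, held exactly as an integer
            long totalInCents = 3 * priceInCents;  // 5997 -- no rounding anywhere
            System.out.printf("%d.%02d%n", totalInCents / 100, totalInCents % 100);  // 59.97
        }
    }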

BigDecimal can give total accuracy and cope with fairly large differences in magnitude. However, it is a lot slower than integer or floating-point maths. That is because BigDecimal is implemented in Java, whereas integer and floating-point are directly implemented by the hardware (CPU, FPU etc.).
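
A minimal BigDecimal sketch (note the String constructor; new BigDecimal(0.1)
would inherit the double's imprecision):

    import java.math.BigDecimal;

    public class Exact {
        public static void main(String[] args) {
            BigDecimal a = new BigDecimal("0.1");
            BigDecimal b = new BigDecimal("0.2");
            System.out.println(a.add(b));  // 0.3, exactly
            System.out.println(new BigDecimal("8.2").subtract(new BigDecimal("0.2")));  // 8.0
        }
    }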

Floating point is for calculations where the magnitude varies hugely and total accuracy is not required. Most engineering and scientific calculations fall into this category, because the numbers going into the calculations are usually physical measurements, which always contain some inaccuracy anyway.

The more calculation steps you do with floating-point, the bigger the inaccuracy becomes. There are ways to calculate an upper bound on the inaccuracy of any given calculation, so that you know how much you can trust the result. Such techniques are not terribly difficult, but they are beyond explanation in a single post on a bulletin board. If Google doesn't help you find some, come back to me.
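
As a rough illustration of how the inaccuracy grows with the number of steps
(a measurement against the exact answer, not a formal bound):

    public class Drift {
        public static void main(String[] args) {
            double sum = 0.0;
            for (int i = 0; i < 1000000; i++) {
                sum += 0.1;  // the exact answer after the loop would be 100000
            }
            System.out.println(sum);             // close to, but not exactly, 100000.0
            System.out.println(sum - 100000.0);  // the accumulated error, far larger than a single rounding error
        }
    }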

When printing out floating-point numbers, you should not simply println() them. That prints enough digits to identify the stored value exactly, which is usually more digits than your calculation is actually accurate to. You should use DecimalFormat to print only the amount of precision that you know you can rely upon. You've worked out what that is, using the techniques described above, of course...
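
For example, if the error analysis says only a few decimal places are
trustworthy, print no more than that (the pattern below is just an
illustration):

    import java.text.DecimalFormat;

    public class Rounded {
        public static void main(String[] args) {
            double result = 8.2 - 0.2;                       // stored as 7.999999999999999...
            DecimalFormat threePlaces = new DecimalFormat("0.000");
            System.out.println(threePlaces.format(result));  // prints 8.000
        }
    }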

If you think this sounds tough, to an extent that's true. But floating-point can do calculations that no other method can do satisfactorily. If you need to do that sort of calculation, and you want results you can rely upon, you need to do the work.
 
Campbell Ritchie
Marshal
Posts: 56570
Go into the beginners' forum, where somebody else has had the same problem in the last couple of days. There are some useful links there, but beware: some of the papers are very heavy reading.
 
Ernest Friedman-Hill
author and iconoclast
Sheriff
Posts: 24217
I will add one thing to this: 99% of the time, the problem people really have is not that the difference between 8.0 and 7.999999999999999 matters, but rather that they want 8.0, not 7.999999999999999, to be displayed. For that, all you have to do is use the tools Java gives you -- the DecimalFormat class, or the printf() methods on PrintStream and PrintWriter. Don't just call println() and accept the default formatting, as it might not be what you want.
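
A sketch of that (assuming one displayed decimal place is what's wanted):

    public class Display {
        public static void main(String[] args) {
            double value = 8.2 - 0.2;            // stored as 7.999999999999999...
            System.out.println(value);           // default formatting: 7.999999999999999
            System.out.printf("%.1f%n", value);  // 8.0 -- usually what was wanted
        }
    }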
 