
Double only accurate to 18 significant figures?

 
Stephanie Keeler
Greenhorn
Posts: 6
Back again!

Is there any way of storing a number - a double or otherwise - so that it is accurate to as many as 40 significant figures? As far as I can tell, 18 significant figures is the limit for a double.

For example, the number 1234567890123456789012345678901234567890D would be converted to 1234567890123456770000000000000000000000 when stored, and displayed without formatting as 1.23E39. I can fix the formatting, but I lose the accuracy. Can anyone help me with this?
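A minimal sketch of the effect described above, assuming the value is written as a plain double literal (the class name is just for illustration):

```java
import java.math.BigDecimal;

public class DoubleRoundingDemo {
    public static void main(String[] args) {
        // 40 significant figures in the source, but a double keeps only ~15-17 of them
        double d = 1234567890123456789012345678901234567890D;

        // Default toString shows just the digits the double actually holds
        System.out.println(d);

        // The exact decimal value of the nearest double - only the leading digits match the literal
        System.out.println(new BigDecimal(d).toPlainString());
    }
}
```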
 
Rob Spoor
Sheriff
Posts: 20555
Use BigInteger for whole numbers and BigDecimal for decimal numbers.
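A minimal sketch of this suggestion, assuming the 40-digit value is available as a String (names are illustrative):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class ArbitraryPrecisionDemo {
    public static void main(String[] args) {
        // Construct from a String so no digits are lost to binary rounding
        BigInteger whole = new BigInteger("1234567890123456789012345678901234567890");
        BigDecimal fractional = new BigDecimal("1234567890.123456789012345678901234567890");

        System.out.println(whole);                         // all 40 digits preserved
        System.out.println(whole.add(BigInteger.ONE));     // arithmetic stays exact
        System.out.println(fractional.toPlainString());    // plain notation, no exponent
    }
}
```

Both classes handle arbitrarily many digits; the String constructors matter because passing in a double that has already been rounded would bake the rounding error in.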
 
Ulf Dittmer
Rancher
Posts: 42968
And for some background, check out #20 in the JavaBeginnersFaq.
 
fred rosenberger
lowercase baba
Bartender
Posts: 12149
I believe you will take a performance hit from using BigInteger or BigDecimal. This may not be an issue if your program doesn't create too many of them or do too much arithmetic with them.

Ask yourself if you REALLY NEED 40 significant digits. Much of software design is a tradeoff between memory, speed, accuracy and probably a few other factors. If you're just messing around, it's probably no big deal, but keep it in the back of your mind as you go...
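A very rough illustration of the kind of gap being described; this is not a proper benchmark (a harness such as JMH would be needed for real numbers), and the names and counts are illustrative:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class RoughTimingSketch {
    public static void main(String[] args) {
        int n = 1000000;

        // Sum with primitive doubles
        long t0 = System.nanoTime();
        double d = 0;
        for (int i = 0; i < n; i++) {
            d += 1.0 / 3.0;
        }
        long doubleNanos = System.nanoTime() - t0;

        // The same sum with BigDecimal
        t0 = System.nanoTime();
        BigDecimal third = BigDecimal.ONE.divide(BigDecimal.valueOf(3), MathContext.DECIMAL128);
        BigDecimal b = BigDecimal.ZERO;
        for (int i = 0; i < n; i++) {
            b = b.add(third);
        }
        long bigNanos = System.nanoTime() - t0;

        System.out.println("double:     " + doubleNanos + " ns, result " + d);
        System.out.println("BigDecimal: " + bigNanos + " ns, result " + b);
    }
}
```

The exact ratio will vary by machine and JVM, but the general point stands: exact arithmetic pays for object allocation and method calls where a double addition is a single hardware operation.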
 
Campbell Ritchie
Sheriff
Posts: 49466
The limit on a double is not 18 sig fig in decimal, but 52 × log₁₀(2), which is 15.65... Remember that is the absolute theoretical maximum, and most of the time you might not achieve 15.65. Also, in subnormal ranges you will definitely never achieve 15.65; in extreme cases you can't even achieve 1 sig fig.
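A quick check of those figures, plus the subnormal point (the values are chosen just for illustration):

```java
public class PrecisionLimits {
    public static void main(String[] args) {
        // Decimal digits representable by 52 significand bits
        System.out.println(52 * Math.log10(2));   // roughly 15.65

        // Deep in the subnormal range, the spacing between doubles is huge relative to the value
        double tiny = Double.MIN_VALUE * 3;
        System.out.println(tiny);
        System.out.println(Math.ulp(tiny));       // the gap to the next double is a third of the value,
                                                  // so not even one significant figure is reliable here
    }
}
```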
 
Mike Simmons
Ranch Hand
Posts: 3090
fred rosenberger wrote: I believe you will take a performance hit from using BigInteger or BigDecimal. This may not be an issue if your program doesn't create too many of them or do too much arithmetic with them.

I would guess there's an excellent chance this is not an issue. But I guess it depends on the context of the application. Still, this smells like premature optimization to me. The original poster clearly has expressed a desire for more precision, but has not expressed a need for better performance. (And even if she did, it's very possible that the bottleneck in the system is elsewhere, and using BigDecimal could have no significant effect.)

fred rosenberger wrote: Ask yourself if you REALLY NEED 40 significant digits. Much of software design is a tradeoff between memory, speed, accuracy and probably a few other factors. If you're just messing around, it's probably no big deal, but keep it in the back of your mind as you go...

I agree with this, except I would remove the "if you're just messing around". I think there's an excellent chance that using BigDecimal or BigInteger will be either desirable, or a non-issue, even in production code. But there is also a significant chance that it could create a performance problem. So yes, keep this in the back of your mind: if performance becomes a problem, the decision to use BigDecimal or BigInteger could be the cause of it. And so you should be prepared to reconsider this decision.

To my mind, however, the biggest drawback to BigDecimal or BigInteger has nothing to do with performance. You say "much of software design is a tradeoff between memory, speed, accuracy and probably a few other factors" - to me, the biggest other factor, often much more important than any of the others, is readability, which translates into maintainability. Java's rejection of operator overloading makes it tough to write easily readable mathematical code using anything other than primitives, because we recognize "a + b" more readily than "a.add(b)". And Ghu help us if we need to interpret any remotely-complex formula. Java's BigDecimal and BigInteger are painful to read, in my opinion, and that's a big strike against using them. But if you need the added precision, you need it, and you need to bite the bullet and do it.
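To illustrate the contrast, here is the same trivial formula written both ways (the method names are made up):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class ReadabilityContrast {

    // With primitives the code reads almost like the formula itself
    static double averageOfSquares(double a, double b) {
        return (a * a + b * b) / 2;
    }

    // With BigDecimal every operator becomes a method call, and division needs an explicit rounding context
    static BigDecimal averageOfSquares(BigDecimal a, BigDecimal b) {
        return a.multiply(a)
                .add(b.multiply(b))
                .divide(BigDecimal.valueOf(2), MathContext.DECIMAL128);
    }
}
```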
 