Is there any way of storing a number - double or otherwise - so that it is accurate to 40 significant figures? As far as I can tell, about 15-17 significant figures is the limit of a double.
The number 1234567890123456789012345678901234567890D is converted to 1234567890123456770000000000000000000000 when stored, and displayed without formatting as 1.23E39. I can fix the formatting, but I lose the accuracy. Can anyone help me with this?
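(A minimal sketch of the fix the replies below discuss, using java.math.BigDecimal and BigInteger. The key point is to construct the value from a String: a D-suffixed literal has already been rounded to a double before BigDecimal ever sees it. Class name is just for illustration.)

    import java.math.BigDecimal;
    import java.math.BigInteger;

    public class FortyDigits {
        public static void main(String[] args) {
            // The double literal is rounded as soon as it is compiled:
            double d = 1234567890123456789012345678901234567890D;
            System.out.println(d); // prints roughly 1.2345678901234568E39

            // Built from a String, BigDecimal keeps all 40 significant figures:
            BigDecimal exact = new BigDecimal("1234567890123456789012345678901234567890");
            System.out.println(exact); // 1234567890123456789012345678901234567890

            // For whole numbers, BigInteger does the same, and arithmetic stays exact:
            BigInteger whole = new BigInteger("1234567890123456789012345678901234567890");
            System.out.println(whole.multiply(BigInteger.TEN));
        }
    }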
Ask yourself if you REALLY NEED 40 significant digits. Much of software design is a tradeoff between memory, speed, accuracy and probably a few other factors. If you're just messing around, it's probably no big deal, but keep it in the back of your mind as you go...
frosenberger wrote: I believe that you will take a performance hit for using BigIntegers or BigDecimals. This may not be an issue if your program doesn't create too many of them or do too much arithmetic with them.
I would guess there's an excellent chance this is not an issue, though it depends on the context of the application. Still, this smells like premature optimization to me. The original poster has clearly expressed a desire for more precision, but has not expressed a need for better performance. (And even if she had, it's very possible that the bottleneck in the system is elsewhere, and that using BigDecimal would have no significant effect on it.)
frosenberger wrote: Ask yourself if you REALLY NEED 40 significant digits. Much of software design is a tradeoff between memory, speed, accuracy and probably a few other factors. If you're just messing around, it's probably no big deal, but keep it in the back of your mind as you go...
I agree with this, except I would drop the "if you're just messing around". I think there's an excellent chance that using BigDecimal or BigInteger will be either desirable or a non-issue, even in production code. But there is also a significant chance that it could create a performance problem. So yes, keep this in the back of your mind: if performance becomes a problem, the decision to use BigDecimal or BigInteger could be the cause, and you should be prepared to reconsider it.
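And if it does come to that, measure before guessing. A crude sketch along these lines (class name and loop count are just for illustration, and a real measurement would need JIT warm-up and repeated runs) can at least tell you whether the arithmetic is anywhere near your bottleneck:

    import java.math.BigDecimal;

    public class CrudeTiming {
        public static void main(String[] args) {
            final int N = 1000000;

            long t0 = System.nanoTime();
            double d = 0.0;
            for (int i = 0; i < N; i++) {
                d += 1.1;
            }
            long t1 = System.nanoTime();

            BigDecimal sum = BigDecimal.ZERO;
            BigDecimal inc = new BigDecimal("1.1");
            for (int i = 0; i < N; i++) {
                sum = sum.add(inc); // BigDecimal is immutable; each add creates a new object
            }
            long t2 = System.nanoTime();

            System.out.println("double:     " + ((t1 - t0) / 1000000) + " ms");
            System.out.println("BigDecimal: " + ((t2 - t1) / 1000000) + " ms");
        }
    }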
To my mind, however, the biggest drawback to BigDecimal or BigInteger has nothing to do with performance. You say "much of software design is a tradeoff between memory, speed, accuracy and probably a few other factors" - to me, the biggest other factor, often much more important than any of the others, is readability. Which translates into maintainability. Java's rejection of operator overloading makes it tough to write easily readable mathematical code using anything other than primitives, because we recognize "a + b" more readily than "a.add(b)". And Ghu help us if we need to interpret any remotely complex formula. Java's BigDecimal and BigInteger are painful to read, in my opinion, and that's a big strike against using them. But if you need the added precision, you need it, and you need to bite the bullet and do it.
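To make that concrete, here is one and the same polynomial, a*x*x + b*x + c, written both ways (a hypothetical snippet; division is left out on purpose, since with BigDecimal you would also have to choose a scale and rounding mode):

    import java.math.BigDecimal;

    public class Readability {
        // Primitive version: reads like the math.
        static double poly(double a, double b, double c, double x) {
            return a * x * x + b * x + c;
        }

        // BigDecimal version: the same formula, buried under method calls.
        static BigDecimal poly(BigDecimal a, BigDecimal b, BigDecimal c, BigDecimal x) {
            return a.multiply(x).multiply(x).add(b.multiply(x)).add(c);
        }

        public static void main(String[] args) {
            System.out.println(poly(1.0, -3.0, 2.0, 10.0)); // 72.0
            System.out.println(poly(new BigDecimal("1"), new BigDecimal("-3"),
                                    new BigDecimal("2"), new BigDecimal("10"))); // 72
        }
    }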