Originally posted by Paul Clapham: The people who wrote the API documentation for the constructor BigDecimal(double) noticed that and wrote about it. #3 in the "Notes" addresses your question.
I'd prefer they had written a solution in the code rather than a note in the API, or gotten rid of the constructor altogether. It seems odd to me that the note says "the results of this constructor can be somewhat unpredictable". That's a good enough reason to *never* use it.
Although they do pass the blame off to the people who wrote double, it still seems odd to me that System.out.println(0.95) does not normally display as "0.9499...". I'd say BigDecimal's constructor should use a solution similar to what double's printing does, perhaps by explicitly converting to a string first.
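For what it's worth, the API already offers that string-based route: BigDecimal.valueOf(double) goes through Double.toString, so it produces the value println shows, while the double constructor captures the raw binary value. A quick sketch of the three options:

```java
import java.math.BigDecimal;

public class Conversions {
    public static void main(String[] args) {
        // The double constructor captures the exact binary value of the literal 0.95
        System.out.println(new BigDecimal(0.95));
        // -> 0.9499999999999999555910790149937383830547332763671875

        // valueOf(double) converts via Double.toString first,
        // so it matches what println displays
        System.out.println(BigDecimal.valueOf(0.95));  // -> 0.95

        // The String constructor is exact for the decimal you write
        System.out.println(new BigDecimal("0.95"));    // -> 0.95
    }
}
```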
The fact that primitive doubles have limited accuracy is the whole reason people use BigDecimal in the first place. It shouldn't be a surprise that any value derived from a double would be similarly unreliable. Getting rid of the constructor seems like overkill to me; it makes sense to have a way to convert a double to BigDecimal, if only to see what value the double "really" is (since toString() does some rounding). The fact that it can be used incorrectly doesn't mean that no one else should be allowed to use it, IMO. [ March 20, 2008: Message edited by: Jim Yingst ]
That's the problem with representing real numbers in computers: it works until it doesn't. Basically, you're approximating an infinite line with a finite set of points, and each number gets rounded to the closest point. That's why 0.95 becomes 0.9499999999999999555910790149937383830547332763671875 (any decimal that rounds to that same point will give the same result).
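You can see the "same result" effect directly: decimal literals that land close enough to the same grid point produce the identical double. A small check (the second literal is the exact value of that nearest point):

```java
public class SamePoint {
    public static void main(String[] args) {
        // Both literals round to the same nearest representable double
        double a = 0.95;
        double b = 0.9499999999999999555910790149937383830547332763671875;

        System.out.println(a == b);  // true: same point on the grid
        System.out.println(Double.doubleToLongBits(a) == Double.doubleToLongBits(b));  // true: identical bits
    }
}
```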
Want to have more fun? Try:

double x1 = 1000;
double x2 = 2e19;
System.out.println("x1=" + x1);
System.out.println("x2=" + x2);
System.out.println("x1+x2-x2=" + (x1 + x2 - x2));

(it's called catastrophic cancellation: the small term is completely swallowed by the large one)
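And to tie it back to the thread's topic, BigDecimal arithmetic is exact for addition and subtraction, so the small term survives there. A sketch with the same numbers:

```java
import java.math.BigDecimal;

public class NoCancellation {
    public static void main(String[] args) {
        // In double, 1000 is smaller than half an ulp of 2e19 (ulp is 4096 there),
        // so it vanishes when added
        double x1 = 1000;
        double x2 = 2e19;
        System.out.println(x1 + x2 - x2);  // 0.0

        // BigDecimal addition and subtraction are exact, so 1000 survives
        BigDecimal b1 = new BigDecimal("1000");
        BigDecimal b2 = new BigDecimal("2E19");
        System.out.println(b1.add(b2).subtract(b2));  // 1000
    }
}
```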