So I was noticing imprecision when dividing large numbers, and I've concluded by process of elimination that it stems from the divide() method. Would it be possible to fix the method to make it accurate to more significant figures? Or is there an inevitable reason for the shortcoming?

What made me notice it was calculating derangements (a.k.a. subfactorials). I used the formula !n = n!/e, rounded to the nearest integer. I have a method for factorials that returns a BigInteger; I initialized a BigDecimal from the BigInteger, then divided it by Math.E. When that gave a slightly wrong answer, I increased the precision to a few thousand digits, but got the same answer. I thought maybe E didn't have enough significant figures, so I tried changing some of its last digits to see whether that would make the answer more wrong. It didn't. So I'm thinking the problem resides in the divide() method itself.
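Here's a stripped-down version of what I'm doing (the factorial method here is just a stand-in for mine, and the class/variable names are made up for the example):

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.MathContext;

public class DerangementDemo {
    // Stand-in for my factorial method, which returns a BigInteger
    static BigInteger factorial(int n) {
        BigInteger f = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            f = f.multiply(BigInteger.valueOf(i));
        }
        return f;
    }

    public static void main(String[] args) {
        // Raising this precision doesn't change the digits I get
        MathContext mc = new MathContext(50);
        BigDecimal fact19 = new BigDecimal(factorial(19));
        BigDecimal quotient = fact19.divide(BigDecimal.valueOf(Math.E), mc);
        System.out.println(quotient.toPlainString());
        // → 44750731559645109.922... (integer part ends in 09, not 06)
    }
}
```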

It's wrong when n>=19.

For n = 19 the last two digits of the integer part should be 06 or 05 (the number has to round to an integer ending in 06, shown here: http://www.wolframalpha.com/input/?i=!19). Instead they're 09, and the decimal part begins with 922, so the integer part rounds to one ending in 10.
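For anyone who wants to reproduce the comparison: the obvious cross-check is dividing by an e built from its Taylor series (the sum of 1/k!) at the working precision, instead of starting from Math.E. A sketch (the factorial and series helpers are stand-ins I wrote for this post, not my actual code):

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.MathContext;

public class DerangementCrossCheck {
    // Stand-in for my factorial method
    static BigInteger factorial(int n) {
        BigInteger f = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            f = f.multiply(BigInteger.valueOf(i));
        }
        return f;
    }

    // e = sum of 1/k! for k = 0, 1, 2, ..., carried to the working precision
    static BigDecimal bigE(MathContext mc) {
        BigDecimal e = BigDecimal.ZERO;
        BigDecimal term = BigDecimal.ONE; // holds 1/k!
        for (int k = 1; k <= 100; k++) {  // 100 terms is far more than 50 digits needs
            e = e.add(term, mc);
            term = term.divide(BigDecimal.valueOf(k), mc);
        }
        return e;
    }

    public static void main(String[] args) {
        MathContext mc = new MathContext(50);
        BigDecimal fact19 = new BigDecimal(factorial(19));
        BigDecimal quotient = fact19.divide(bigE(mc), mc);
        System.out.println(quotient.toPlainString());
        // Integer part now ends in ...06, matching WolframAlpha's !19
    }
}
```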

My derangement method isn't important; I'm just curious about this for when I use BigDecimals in the future. I don't see the point of BigDecimal division if divide() isn't more precise than dividing two doubles, nor the point of letting the user set the precision so high when all the extra digits are going to be wrong anyway.

Thanks for any light you can shed.