This is a bit more complicated than it may seem, because not every decimal value can be represented exactly as a double. You basically have to pick a number of decimal places you consider significant, round the value to that precision, then floor it. Compare the floored value with the rounded one, and if they are the same, the original was an integer (to that precision).

There are a handful of ways to do it. Here are two methods: one where I keep things primitive and do the rounding myself, and one that uses BigDecimal. Not sure which is better ...
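Here is a minimal sketch of what the primitive version could look like. The method names, the `significantDigits` parameter, and the choice of `Math.round` are my assumptions; they are not necessarily the exact code from the original answer:

```java
public class IntegerCheck {
    // Assumed approach: scale by 10^significantDigits, round to the nearest
    // whole number of "units", then scale back down. significantDigits is
    // the count of decimal places treated as meaningful.
    public static boolean isIntegerValue(double input, int significantDigits) {
        double scale = Math.pow(10, significantDigits);
        double rounded = Math.round(input * scale) / scale;
        // If flooring the rounded value changes nothing, it is integral.
        return Math.floor(rounded) == rounded;
    }

    public static int asInt(double input, int significantDigits) {
        double scale = Math.pow(10, significantDigits);
        return (int) (Math.round(input * scale) / scale);
    }

    public static void main(String[] args) {
        System.out.println(isIntegerValue(2.0000004, 3)); // true at 3 decimal places
        System.out.println(isIntegerValue(2.5, 3));       // false
    }
}
```

Note the division by `scale` happens in floating point, which is exactly where this version can reintroduce representation error.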

Note that the main method loops over a bunch of large numbers with a small increment to test how well this works. The incremental changes help exacerbate the problem of storing decimals: as soon as a value is reached that can't be held accurately, the error propagates through the rest of the increments. Large numbers mean I can use smaller display precisions to show off how the values actually get stored (if I wanted to, which I think I removed from this version of the code).
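A hypothetical reconstruction of that kind of test loop (the starting value and increment are my own choices, not the original's) shows the effect: 0.1 has no exact binary representation, so each addition rounds slightly, and the error carries through every subsequent iteration:

```java
public class DriftDemo {
    public static void main(String[] args) {
        double value = 1_000_000.0;
        for (int i = 0; i < 10; i++) {
            value += 0.1;              // each addition rounds to the nearest double
            System.out.println(value); // drifts away from the exact decimal sum
        }
        // After ten additions the exact decimal answer would be 1000001.0,
        // but the accumulated double falls slightly short of it.
    }
}
```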

---- EDIT ----

I take that back. I do know which is better: use the BigDecimal version. My hand-rolled rounding still breaks for some combinations of significantDigits and input values; BigDecimal doesn't seem to have those problems. So go with the isIntegerValue2(double input) and asInt2(double input) methods shown below.
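A sketch of what such a BigDecimal-based version might look like. Since the stated signatures take only a `double`, I'm assuming the significant-digit count lives in a constant; that constant, its value, and the `HALF_UP` rounding mode are my assumptions:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class IntegerCheck2 {
    // Assumption: decimal places treated as significant.
    private static final int SIGNIFICANT_DIGITS = 6;

    public static boolean isIntegerValue2(double input) {
        BigDecimal rounded = BigDecimal.valueOf(input)
                .setScale(SIGNIFICANT_DIGITS, RoundingMode.HALF_UP);
        // After stripping trailing zeros, a scale <= 0 means there is
        // no fractional part left, i.e. the value is integral.
        return rounded.stripTrailingZeros().scale() <= 0;
    }

    public static int asInt2(double input) {
        return BigDecimal.valueOf(input)
                .setScale(SIGNIFICANT_DIGITS, RoundingMode.HALF_UP)
                .intValue();
    }

    public static void main(String[] args) {
        System.out.println(isIntegerValue2(1000000.0000001)); // true at this precision
        System.out.println(isIntegerValue2(2.5));             // false
        System.out.println(asInt2(2.9999999));                // 3
    }
}
```

The key advantage over the primitive version is that `BigDecimal.valueOf(double)` works from the double's canonical decimal representation and the rounding happens in exact decimal arithmetic, so no binary representation error is reintroduced by the scaling.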