In a method, after doing some number crunching, we get an int or a double depending on specific conditions. We then set this value in a String field of an output object.
The routine that consumes this output object applies separate criteria depending on whether the value is an integer or a double.
Normally the condition that produces a double yields values like 0.43 or 2.33, but in some cases it yields values like 0.0, 1.0, or 2.0.
For values ending in .0 we actually want to set them as 0, 1, 2 and drop the .0, but when it is a proper double like 0.43 or 2.33 we want to retain the precision.
Let me know your thoughts on how to go about this.
This is a bit more complicated than it may seem, since not every decimal value can be held exactly in a double. You basically have to decide how many decimal places you consider significant, round to that significance, then floor. Compare the floored value with the rounded one; if they are the same, the original was effectively an integer.
There are a handful of ways of doing it. Here are two methods: one where I keep things primitive and do the rounding myself, and one that uses BigDecimal. Not sure which is better ...
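The primitive version might look roughly like this. This is a sketch, not the original listing; the class name and the `significantDigits` parameter are my assumptions, but the logic follows the round-then-floor recipe described above:

```java
public class IntCheckPrimitive {

    // Round the input to the number of decimal places we consider
    // significant, then floor the rounded value and compare. If flooring
    // changed nothing, the value is effectively an integer.
    static boolean isIntegerValue(double input, int significantDigits) {
        double scale = Math.pow(10, significantDigits);
        double rounded = Math.round(input * scale) / scale;
        return rounded == Math.floor(rounded);
    }

    // Truncate to int once we know the value is effectively integral.
    static int asInt(double input) {
        return (int) Math.round(input);
    }
}
```

With `significantDigits = 4`, a value like 2.0000001 (a 2.0 that picked up floating-point noise) is treated as the integer 2, while 0.43 keeps its fractional part.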
Note the main method uses a loop over a bunch of large numbers with a small increment to test how well this works. The incremental changes help exacerbate the problem of storing decimals: as soon as a number is reached that can't be held accurately, the error propagates through the rest of the increments. Large numbers mean I can use smaller display precisions to show off how the values actually get stored (if I wanted to, which I think I removed from this version of the code).
---- EDIT ----
I take that back: I do know which is better. Use the BigDecimal version. My own rounding is still flawed for some combinations of significantDigits and input values passed into the methods. BigDecimal doesn't seem to have those problems, so go with the isIntegerValue2(double input) and asInt2(double input) methods shown below.
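Since the original listing isn't reproduced here, this is a sketch of what the BigDecimal-based pair could look like (the method bodies and the `format` helper are my reconstruction, assuming Java 8+):

```java
import java.math.BigDecimal;

public class IntCheckBigDecimal {

    // BigDecimal.valueOf uses Double.toString, so 1.0 arrives as "1.0"
    // rather than its full binary expansion. stripTrailingZeros removes
    // the trailing .0; a scale <= 0 means no fractional digits remain.
    static boolean isIntegerValue2(double input) {
        return BigDecimal.valueOf(input).stripTrailingZeros().scale() <= 0;
    }

    // Convert to int; intValueExact throws ArithmeticException if the
    // value actually has a nonzero fractional part or overflows int.
    static int asInt2(double input) {
        return BigDecimal.valueOf(input).intValueExact();
    }

    // The formatting the question asks for: "2" for 2.0, "0.43" unchanged.
    static String format(double input) {
        return isIntegerValue2(input)
                ? String.valueOf(asInt2(input))
                : String.valueOf(input);
    }
}
```

This avoids hand-rolled scaling entirely, which is why it sidesteps the significantDigits edge cases that broke the primitive version.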