To be honest, in 18 years of using Java, I've rarely seen them used except for values that don't need to be completely accurate and where approximations are fine. The best example I can come up with is latitude and longitude values, where I've seen doubles used. (Though I personally tend toward BigDecimal.)
I don't think I've ever seen a float.
I'm sure others can come up with other examples where absolute accuracy is not all that important. Currency, of course, is not one of these.
Not all calculations require complete accuracy. For example, if you are an engineering company building a dam, you do some calculations to decide how high to build it. You don't need to arrive at a completely accurate number -- to within the nearest millimeter would be fine, I'm sure, or maybe the nearest few centimeters. And floating-point numbers can represent values over a huge range, so if you're interested in representing the distance from here to Pluto (which we don't know completely accurately, remember) or the precision with which magnets must be mounted in CERN's Large Hadron Collider (a very small number, but again not a completely accurate one), then floating-point is just what you want.
On the other hand if you're keeping the accounts for a company then you do need complete accuracy. One penny out and the auditors get all upset. So that's why you will frequently see people here telling beginners not to use floating-point numbers to represent money.
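A minimal demonstration of the money problem (my example, not from the posts above): 0.10 has no exact binary representation, so repeated addition drifts, while BigDecimal built from a String stays exact to the penny.

```java
import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // Ten dimes should make a dollar, but binary doubles disagree.
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += 0.10;
        }
        System.out.println(total == 1.0); // false
        System.out.println(total);        // 0.9999999999999999

        // BigDecimal with the String constructor keeps every penny exact.
        BigDecimal exact = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            exact = exact.add(new BigDecimal("0.10"));
        }
        System.out.println(exact);        // 1.00
    }
}
```

The auditors in question would notice: the double sum is off by about 10^-16 of a dollar, and errors like that compound across millions of transactions.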
Sure... but BigDecimal is a very inconvenient way to do that. Floating-point calculations automatically give you enough accuracy to answer Fred's question without having to faff around with specifying the number of decimal places in your intermediate calculations. Now if you wanted to calculate pi to 50 decimal places, that's where BigDecimal would be useful. Or even BigInteger, if you scaled things correctly.
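To illustrate the "faffing around" being described (a sketch of my own, not code from the thread): double division just works to roughly 15-16 significant digits, whereas BigDecimal makes you choose a precision up front for any non-terminating quotient, or it throws.

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class DividePrecision {
    public static void main(String[] args) {
        // double: no decisions needed, ~16 significant digits for free.
        System.out.println(1.0 / 3.0);   // 0.3333333333333333

        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal(3);

        // one.divide(three) with no MathContext throws ArithmeticException,
        // because 1/3 has no terminating decimal expansion. You must say
        // how many digits you want in the intermediate result:
        System.out.println(one.divide(three, new MathContext(50)));
    }
}
```

The flip side, as noted above, is that the MathContext can be cranked to 50 (or 500) digits when you genuinely need them, which no double can do.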
I often use them for UIs and physics simulations. In collision detection for instance, you want to know if two line segments intersect. This has to be done using a certain error margin, because of the 'refresh rate' of the simulation. That's why the calculation doesn't need to be 100% accurate, but calculating with doubles can be much faster than using BigDecimal.
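A hedged sketch of what such a test can look like (names and epsilon are my choices, not Stephan's actual code): the standard parametric segment-intersection test, with a tolerance so that nearly-parallel segments and endpoint grazes don't flip on floating-point noise.

```java
public class SegmentIntersect {
    static final double EPS = 1e-9; // tolerance for "parallel enough" and endpoint grazes

    // Returns true if segment (x1,y1)-(x2,y2) intersects segment (x3,y3)-(x4,y4).
    static boolean intersects(double x1, double y1, double x2, double y2,
                              double x3, double y3, double x4, double y4) {
        // Cross product of the two direction vectors; near zero means parallel.
        double d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3);
        if (Math.abs(d) < EPS) return false;
        // Parameters along each segment; intersection lies within [0,1] on both.
        double t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d;
        double u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d;
        return t >= -EPS && t <= 1 + EPS && u >= -EPS && u <= 1 + EPS;
    }

    public static void main(String[] args) {
        System.out.println(intersects(0, 0, 2, 2, 0, 2, 2, 0)); // crossing diagonals: true
        System.out.println(intersects(0, 0, 1, 0, 0, 1, 1, 1)); // parallel lines: false
    }
}
```

Run per frame against every nearby object pair, this is exactly where double arithmetic pays off over BigDecimal: a handful of multiplications and one division per test, in hardware.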
I wrote my entire physics thesis around a computer model of how laser light is scattered by a glass tube with inner and outer diameters in the range of the wavelength of the laser. Doing that with integers would have been challenging. My calculations were, necessarily, all approximations, but experimental physics tends to be all about approximations anyway. Over the range of values I was computing, a floating point value was more than precise enough. (I was using FORTRAN, but the issue is the same.)
Maybe it would help to understand what floats/doubles are good for if one thinks less about "accuracy" and more about "precision." In my program, say, the amplitude of scattered light at 90 degrees might have been computed as .7145523. But .7145523 what? Well, let's assume the units are something appropriate to light (watts, maybe). Now, if the next discrete values above and below .7145523 that my floating-point representation can encode are .7145524 and .7145522, that is more than precise enough for the purposes of my experiment. None of those three values is (likely to have been) the exact wattage of the light scattered at 90 degrees, but I know that the precision of my calculation is about one part in ten million. As long as the overall range of values I need to deal with doesn't take me out of that level of precision, one part in ten million is more than good enough for physics. (If I also had to deal with values like 7.145523 * 10^48, then subtracting a nearby value from it wouldn't leave me with anything I could meaningfully compare to, say, .7145523, because no floating-point representation I've ever encountered retains precision at the seventh decimal place when storing really big values like 10^48.)
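The "spacing of the grid" being described can be inspected directly in Java with Math.ulp, which returns the gap between a double and the next representable value. A small sketch (my numbers, chosen to echo the example above):

```java
public class UlpDemo {
    public static void main(String[] args) {
        // Near 0.71 the grid spacing is about 1.1e-16: roughly
        // sixteen significant digits, far better than one part in ten million.
        double amp = 0.7145523;
        System.out.println(Math.ulp(amp));

        // Near 7.1e48 the grid spacing is about 1.3e33, which dwarfs
        // 0.7145523 entirely...
        double big = 7.145523e48;
        System.out.println(Math.ulp(big));

        // ...so adding (or subtracting) a small value simply vanishes.
        System.out.println(big + 0.7145523 == big); // true
    }
}
```

The *relative* precision is the same at both magnitudes (about one part in 10^16 for a double); it's the *absolute* spacing that grows with the value, which is exactly the range caveat above.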
Another use I've made of floats is in computer art. I've written an insane number of programs that use sums of sine waves to modulate radial line patterns, resulting in all sorts of (to me) pretty pictures. Again, the precision is more than good enough for that, and doing it with integers would be hard.
This one wasn't done by me, but by the college professor who taught me how to do them, 40 years ago:
Stephan van Hulst wrote: I often use them for UIs and physics simulations...
Personally, I've always thought they would be good for Calendars, since a double day-offset value can cover our entire recorded history to millisecond accuracy (perhaps slightly less if you include Lascaux as a "recording"), our planet's history in minutes, and the entire time period since the Big Bang in fractions of hours.
And you have a built-in division between "date" and "time".
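A back-of-envelope check of this idea (my arithmetic, not Winston's; the day count for "recorded history" is an assumed round figure): with a 53-bit significand, a double holding a day offset keeps sub-millisecond resolution over millions of days, and the integer/fraction split gives the date/time division for free.

```java
public class DayOffsetDemo {
    public static void main(String[] args) {
        // Assumed figure: ~2.5 million days, roughly 5000 BC to today.
        double days = 2_500_000;
        // Grid spacing at that magnitude, converted from days to milliseconds.
        double resolutionMs = Math.ulp(days) * 86_400_000.0;
        System.out.println(resolutionMs); // well under one millisecond

        // Built-in date/time split: integer part is the date, fraction the time.
        double offset = 2_500_000.75;
        long date = (long) offset;          // day number
        double timeOfDay = offset - date;   // 0.75 of a day = 18:00
        System.out.println(date + " day(s) + " + (timeOfDay * 24) + " hours");
    }
}
```

The resolution degrades as the offset grows (ulp doubles with each power of two), which is why millisecond accuracy over history relaxes to minutes over the planet's age.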
Thanks for the replies. The "distinction" between precision and accuracy especially helps. On the one hand I was thinking, "What good is a numbering model if you can't use it to count?" To paraphrase some of your responses, on the other hand, measuring (a distance, for example) is usually a matter of settling on an approximation that is precise enough.