posted 3 months ago

The only practical reason I can think of is when you might be creating very large numbers of floating-point values that don't need to be very precise. Is there anything I'm unaware of, or should I just keep making everything a double?

-Zach

posted 3 months ago

Hi,

Memory can still be a precious resource depending on the device where your code runs,

for example in the case of embedded software...


Every good tree bears good fruit

posted 3 months ago

Like Mano hinted at, it's a tradeoff between precision and memory. If memory is not a concern, then you don't use `float`. I almost never use `double` either, but that's another discussion.

*The mind is a strange and wonderful thing. I'm not sure that it will ever be able to figure itself out, everything else, maybe. From the atom to the universe, everything, except itself.*

posted 3 months ago

There are some old APIs requiring floats, and some time ago graphics cards often demanded floats or FloatBuffers. Whether the latter is still the case with modern GPUs, I don't know.
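For illustration, here's a hedged Java sketch of the kind of `FloatBuffer` those older graphics bindings typically expect: a direct, native-ordered buffer of single-precision values. The triangle coordinates are made up for the example.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class VertexBufferDemo {
    public static void main(String[] args) {
        // Three 3D vertices of a triangle -- illustrative data only.
        float[] vertices = { 0f, 1f, 0f,  -1f, -1f, 0f,  1f, -1f, 0f };

        // Direct, native-byte-order buffer: the usual shape demanded by
        // legacy graphics APIs that take vertex data as floats.
        FloatBuffer buf = ByteBuffer
                .allocateDirect(vertices.length * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();

        buf.put(vertices);
        buf.flip(); // rewind so the consumer reads from the start

        System.out.println("capacity = " + buf.capacity()); // prints "capacity = 9"
    }
}
```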

posted 3 months ago


Floats have a precision of "6 1/2" decimal digits. The "1/2" part comes in because binary fractions and decimal fractions don't correspond precisely. To see why, consider the decimal expansion of 1 divided by 3.
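A quick Java sketch of that precision gap; the printed values are what the standard `toString` conversions produce:

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        float  fThird = 1.0f / 3.0f;
        double dThird = 1.0  / 3.0;

        // A float carries roughly 7 significant decimal digits,
        // a double roughly 16.
        System.out.println("float  1/3 = " + fThird);  // 0.33333334
        System.out.println("double 1/3 = " + dThird);  // 0.3333333333333333
    }
}
```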


Doubles have something like 15 1/2 digits of precision - I don't remember the exact value, and having done most of my worrying about such things on old IBM mainframes, where the floating-point format is slightly different, means I may be remembering wrong, but you get the picture. What it basically boils down to is 4 bytes *versus* 8 bytes, which means that a double takes (ahem) double the memory for each value stored in RAM or persistent storage. And, typically, a double would require more time to load into the CPU or FPU registers and to operate on, but that depends on the width of the data buses and the internal structure of the floating-point unit. So you can generally save memory and possibly time by using floats.

It's true that floats are less precise, but consider that many, many daily calculations don't even need 6 digits of precision. The old-time mechanical slide rules ran something like 3 1/2 digits* and we built roads, bridges, rockets and war machines with them for decades. Many data collection devices are less than 6 digits. I'm presently working with temperature sensors that are only 4 digits at most.

---

* Obviously slide rules aren't subject to the constraints of binary fractions. But they do use logarithmic scales, so the lower numbers have lots more space between them and thus can be read with more precision.
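The size arithmetic above can be checked directly in Java: `Float.BYTES` and `Double.BYTES` report the 4-versus-8-byte cost per value. The array length is an arbitrary example.

```java
public class SizeDemo {
    public static void main(String[] args) {
        int n = 1_000_000; // e.g. a million sensor readings (illustrative)

        // A float costs 4 bytes, a double 8, so the double array
        // needs twice the storage for the same number of values.
        System.out.println("float array:  ~" + (long) n * Float.BYTES  + " bytes"); // ~4 MB
        System.out.println("double array: ~" + (long) n * Double.BYTES + " bytes"); // ~8 MB
    }
}
```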

An IDE is no substitute for an Intelligent Developer.

posted 3 months ago

Hey Everyone,

Thanks for the answers. There's some very useful information here. So my takeaway is that 99% of the time memory is the biggest difference between the two. I was thinking that memory wasn't a huge issue in this day and age, but the point about mobile apps definitely changes that. Tim, does the 1/2 point of precision effectively become useless in calculations? Your post was very informative! Take an imaginary cow because I don't have any real ones to give.

Again, much appreciated everyone,

-Zach


posted 3 months ago


Zach Rode wrote:Tim, does the 1/2 point of precision effectively become useless in calculations?

It's not so much "useless" as it is a warning. Floating-point numbers are fuzzy. Let me return to my decimal example.

If you divide 1 by 3, you get 0.333333333.....

Technically, this is "useless", because it will never *precisely* represent 1/3 this side of infinity.

The same applies to 1/10 in binary. 1/2 or 1/4 can be represented precisely in binary, but 1/10 cannot. However, just as "1.333" can be "close enough for Government work"*, the binary equivalent of 1/10 is likewise. It isn't exact, which is why bookkeeping systems should never use floating point (this is where the infamous "penny-shaving" fraud comes from), but it's perfectly fine for higher-level computations like sales projections.
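A small Java sketch of both points: the drift you get from adding the inexact binary 1/10 ten times, and `BigDecimal` as the exact-decimal alternative that bookkeeping code should use.

```java
import java.math.BigDecimal;

public class TenthDemo {
    public static void main(String[] args) {
        // 0.1 has no exact binary representation, so the error accumulates.
        double sum = 0.0;
        for (int i = 0; i < 10; i++) sum += 0.1;
        System.out.println(sum);    // 0.9999999999999999, not 1.0

        // BigDecimal stores exact decimal values -- this is why money
        // code uses it instead of float/double.
        BigDecimal exact = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) exact = exact.add(new BigDecimal("0.1"));
        System.out.println(exact);  // 1.0, exactly
    }
}
```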

If memory serves, the potential error has the technical name "epsilon". For best accuracy, the epsilon value should be as small as possible relative to the accuracy actually needed. For example, if your numbers are only accurate to 3 1/2 places, then trying to work with 4 digits is going to be more perception than precision. Ultimately, the whole thing falls into the basic Calculus concepts of Range and Domain and computing how precise a value is going to be after all the necessary computing is done.

----

* Literally, in some cases.
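The epsilon idea can be sketched in Java like this. The `1e-9` tolerance is an arbitrary choice for illustration; pick one that matches the precision your data actually has.

```java
public class EpsilonDemo {
    // Compare floating-point values with a tolerance instead of ==.
    static boolean nearlyEqual(double a, double b, double eps) {
        return Math.abs(a - b) < eps;
    }

    public static void main(String[] args) {
        double a = 0.1 + 0.2; // slightly off from 0.3 in binary

        System.out.println(a == 0.3);                  // false
        System.out.println(nearlyEqual(a, 0.3, 1e-9)); // true
    }
}
```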

An IDE is no substitute for an Intelligent Developer.

posted 3 months ago


Zach Rode wrote:Take an imaginary cow because I don't have any real ones to give.

I gave one to Tim on your behalf. I gave you one too.

It is sorta covered in the JavaRanch Style Guide.