Sasha Ruehmkorf

Ranch Hand

Posts: 115

posted 10 years ago

Hi.

I am preparing for the Sun Java Programmer Certification at the moment and am struggling with floating-point numbers right now.

A question arose regarding the IEEE 754 standard:

Why in hell are there 8(!) bits reserved for the exponent? It makes absolutely no sense to me, the more I think about it.

The 8 bits lead to a range of approx. ±10^-38 ... ±10^38. A range that is of no use to me, as the precision is not high enough. For instance, even the relatively small number 399999984f is stored as the same float as 400000015. Why would I ever want to store a decimal number of 20 digits in a float, with an even more gigantic loss of precision, then?
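The equality claimed above can be checked directly in Java. A minimal sketch (class name is mine; the two literals are the values from the example):

```java
public class FloatEquality {
    public static void main(String[] args) {
        float a = 399999984f;
        float b = 400000015f;
        // Both decimal values round to the nearest representable float.
        // Near 4e8 the spacing between adjacent floats (the ulp) is 32,
        // so both literals land on the same float, 4.0E8:
        System.out.println(a == b);  // prints "true"
    }
}
```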

Wouldn't it have made much more sense to reserve maybe 5 bits for the exponent and 26 for the mantissa? You'd get a reasonable range and gain much more precision....


posted 10 years ago

just because YOU don't find those numbers useful, you want to limit what everybody else needs?

i'm sure there are lots of people who DO need it to work like it does.

[ March 30, 2007: Message edited by: Fred Rosenberger ]


There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors

Sasha Ruehmkorf

Ranch Hand

Posts: 115

posted 10 years ago

That really helps. I am sure that there are, but I was just trying to explain my train of thought.

Of course I am totally aware that people invented this standard for many good reasons and with much deeper insight into the subject than I will ever have. I just want to understand some of the thoughts that led to this solution, and especially where my error in reasoning is.

(edit: answer to the post before last)

[ March 30, 2007: Message edited by: Sasha Ruehmkorf ]



Sasha Ruehmkorf

Ranch Hand

Posts: 115

posted 10 years ago

*"Why would I ever want to store a decimal number of 20 digits in a float with an even more gigantic loss of precision then?"*

It looks like you are confusing two separate things: how large a number is and how precise it is.

If you have a large number, for example 6.02214 x 10^23, then this is not a decimal number with 23 digits that you store with a gigantic loss of precision. The precision of this number is 6 decimal digits. That the number is somewhere in the range of 10^23 is a separate idea that doesn't have anything to do with the precision.

So, an IEEE 754 single precision floating point number has a sign bit, 8 bits for the exponent and 23 bits for the mantissa. Because the mantissa is 23 bits, you have 23 binary digits of precision, which corresponds to about 6 or 7 decimal digits of precision.
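In Java you can inspect those three fields directly with `Float.floatToIntBits`. A small sketch (class name is mine; the value is the Avogadro-style example from above, and the masks follow the 1/8/23 layout just described):

```java
public class FloatLayout {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(6.02214e23f);

        int sign     = bits >>> 31;            // 1 bit
        int exponent = (bits >>> 23) & 0xFF;   // 8 bits, stored with a bias of 127
        int mantissa = bits & 0x7FFFFF;        // 23 bits (the implicit leading 1 is not stored)

        System.out.println("sign     = " + sign);
        System.out.println("exponent = " + exponent + " (unbiased: " + (exponent - 127) + ")");
        System.out.println("mantissa = 0x" + Integer.toHexString(mantissa));
    }
}
```

For 6.02214e23 the unbiased exponent comes out as 78, since 2^78 ≈ 3.0 x 10^23 ≤ 6.02214 x 10^23 < 2^79.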

Again, the range of the number, determined by the exponent, is something entirely different.

Saying "I want to store the number 83764782346832648372 but storing this in a float gives me a gigantic loss of precision" is the same as saying "I want to store the number 83.764782346832648372 but..." or "I want to store 0.00083764782346832648372 but...".

Storing 83764782346832648372 as 8.37648 x 10^19 gives you exactly the same loss of precision as storing 83.764782346832648372 as 83.7648 - you lose the last 14 digits. You don't lose more precision when you store a large number than when you store a small number.
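That magnitude-independence is easy to see in Java: a float keeps roughly 7 significant decimal digits no matter where the decimal point sits. A small sketch (class name and the value 123456789 are mine, chosen just for illustration):

```java
public class RelativePrecision {
    public static void main(String[] args) {
        // The same 9 significant digits, at three different magnitudes:
        float big    = 123456789f;
        float medium = 1.23456789f;
        float small  = 0.000123456789f;

        // In every case only the first ~7 digits survive:
        System.out.println((long) big);  // prints 123456792
        System.out.println(medium);
        System.out.println(small);
    }
}
```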

I hope this makes it clear; it's a bit difficult to explain.

See this for all the details: IEEE 754 (Wikipedia)

[ March 31, 2007: Message edited by: Jesper Young ]
