vaibhav panghal

Greenhorn

Posts: 25

posted 8 years ago

hi,

Although this question has been asked many times before, and I went through all the discussions in those threads, I'm still not able to figure it out exactly.

A long is 8 bytes in size, which means its range is -2^63 to (2^63)-1.

A float is 4 bytes long, but I really do not understand its range.

I had a look at IEEE 754 and also at the field values in the Float class, but still could not get a hint. Can someone help?

posted 8 years ago

The range of a float is roughly -3.4 x 10^38 to 3.4 x 10^38, which is far larger than the range of a long.

The range of a float is larger because of the way it is stored. The best way to describe it is as two numbers: the first is a simple value, and the second describes how to shift the decimal point of the first number.

It is actually much more complex than that -- but with this decimal-point shifting, you can have really large numbers, which translates to a really large range.

The drawback is, of course, precision. The floating point number may be really large, but the precision is not very high. For really big numbers, most of the digits will be zero, even on the left side of the decimal point.
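You can see both halves of this trade-off directly in Java (a small illustration of the numbers quoted above, not part of the original post):

```java
public class FloatRange {
    public static void main(String[] args) {
        // A 4-byte float can hold far larger magnitudes than an 8-byte long...
        System.out.println(Float.MAX_VALUE);  // about 3.4 x 10^38
        System.out.println(Long.MAX_VALUE);   // 9223372036854775807, about 9.2 x 10^18

        // ...but only about 7 significant decimal digits survive.
        float f = 123456789f;                 // nearest float is 123456792
        System.out.println(f);                // the low digits are already gone
    }
}
```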

Henry

[ July 05, 2008: Message edited by: Henry Wong ]


posted 8 years ago

I wrote the following illustration a few years ago. See if this helps...

First, consider that whenever we are working with floating point numbers, we are going to have to accept *approximations.* The reason is that many values have an infinite decimal representation -- either with a repeating pattern (for example, 1/11 = 0.090909... or 1/3 = 0.3333...), or with an irrational, non-repeating pattern (for example, pi = 3.14159...). From a practical standpoint, we have to cut these representations off *somewhere;* and as soon as we do, we have an approximation. Or, in other words, we lose precision.

So, for the sake of a simple illustration, let's say that we decide to cut them off at the 3rd decimal place (without rounding). That is, we store 1/3 as 0.333, and 1/11 as 0.090, and pi as 3.141. None of these values are exact anymore, but we can easily store them. We've traded *precision* for (some degree of) practicality.

Under this standard (assuming a decimal after the first digit), our range is only 0.000 to 9.999. So to increase range, let's add just a few more digits and use scientific notation. For simplicity, we'll use base-10 (although in a computer, this would be binary). Now, when we store a 7-digit number of 1234567, we'll understand this to mean 1.234 x 10^567. Suddenly, we've greatly expanded our range. But the trade-off is in precision: Most of these digits are just place-holding zeros (implied by the exponent) to convey *magnitude* rather than an exact quantity.

This is great for really big numbers, but what about really small numbers? Well, suppose we agree that this 3-digit exponent will automatically have a "bias" of 500 built into it. In other words, we'll always subtract 500 from whatever value is stored. So if we want an exponent of 234, we'll store 734. Why is this helpful? Because this allows us to imply *negative* exponents. If we store an exponent value of 123, then subtracting 500 will give us -377. Recall that a negative exponent will "move" the decimal to the left, so now we can represent extremely small numbers as well as extremely large numbers.

We'll add one more refinement: A new digit at the beginning to indicate sign, with 0 indicating positive and 1 indicating negative.

So now in a simple 8-digit representation, we can store numbers as small as (+/-) 1.000 x 10^(-500) or as large as (+/-) 9.999 x 10^499. So we've got an enormous range to work with -- *far more* than what we would have with any simple 8-digit representation of a whole number -- BUT our *precision* is limited to those 4 digits that aren't the sign or the exponent.

Those 4 digits represent our "window" of precision -- the only place where we *know* the values are exact. Depending on the exponent, this window can "slide" far to the left of the decimal point to imply very large quantities, or far to the right of the decimal point to imply very small quantities. But we're always going to have approximations unless the non-zero digits of our value can "fit" in that window of precision.

These are some of the *basic ideas* behind the IEEE 754 standards for floating point numbers. The actual implementation is more complex, but hopefully this illustrates the trade-off between range and precision.

"We're kind of on the level of crossword puzzle writers... And no one ever goes to them and gives them an award." *~Joe Strummer*
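The hypothetical format above can even be sketched as code. The class and method names here (`ToyFloat`, `decode`) are my own invention for illustration -- this just mechanically applies the sign / 4-digit significand / biased 3-digit exponent layout described in the post:

```java
import java.math.BigDecimal;

public class ToyFloat {
    // Decode the hypothetical 8-digit format from the post:
    // [1 sign digit][4 significand digits][3 exponent digits, bias 500]
    static BigDecimal decode(String s) {
        int sign = (s.charAt(0) == '1') ? -1 : 1;              // 0 = positive, 1 = negative
        BigDecimal significand = new BigDecimal(s.substring(1, 5))
                .movePointLeft(3);                             // decimal point after the first digit
        int exponent = Integer.parseInt(s.substring(5)) - 500; // remove the built-in bias
        return significand.scaleByPowerOfTen(exponent)
                .multiply(BigDecimal.valueOf(sign));
    }

    public static void main(String[] args) {
        System.out.println(decode("01234506")); // +1.234 x 10^6  = 1234000
        System.out.println(decode("13141500")); // -3.141 x 10^0  = -3.141
        System.out.println(decode("01000123")); // +1.000 x 10^(-377)
    }
}
```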

sscce.org

vaibhav panghal

Greenhorn

Posts: 25

posted 8 years ago

hi marc,

I am getting the feeling that I'll understand the concept with your explanation, but kindly help clear up my doubts about what you've written.

You've said that 1234567 can be stored as 1.234 x 10^567. How is that?

10^567 is awfully big, don't you think? 1.234 x 10^6 could be used to store 1234567 (of course, there will be precision loss there too). I am stuck at this point, and that's why I cannot carry on with your article. Kindly help clarify the doubt. Thank you. Vaib.

posted 8 years ago

Originally posted by vaibhav panghal:

...you've said that 1234567 can be stored as 1.234 x 10^567 . how is that ?...

It's the other way around. The huge number 1.234 x 10^567 could be stored as just seven digits, 1234567, **if** we understand how to interpret the format. In this simple illustration, that means the first 4 digits represent a value with a decimal point after the first digit, and the last 3 digits represent an exponent used with a base of 10.

This gives us the ability to represent an enormous quantity (568 digits!), but only the first 4 of those digits are exact. The remaining 564 are just place-holding zeros. So it's a big number, but it's not precise at all. It's only telling us, "Somewhere in the neighborhood of..."
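For the curious, BigDecimal can confirm those digit counts (a small sketch of my own, not from the original post):

```java
import java.math.BigDecimal;

public class Magnitude {
    public static void main(String[] args) {
        BigDecimal big = new BigDecimal("1.234E+567"); // the value "1234567" encodes

        // Only 4 significant digits are actually stored...
        System.out.println(big.precision());           // prints 4

        // ...but written out in full, the number has 568 digits.
        System.out.println(big.toBigInteger().toString().length()); // prints 568
    }
}
```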


posted 8 years ago

Again, understand that IEEE 754 is more complicated than my hypothetical illustration here, but I think this illustrates the basic concepts around range and precision.

See Steve Hollasch's IEEE 754 page for more details.



posted 8 years ago

Any discussion of this has to go one step further. While Marc's example shows four decimal digits of precision, in fact the digits are binary, and there is no easy-to-understand mapping between fractional binary numbers and our more familiar fractional decimal numbers.

Anyone working with this stuff should write a trivial loop that shows how numbers don't work the way you expect:
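The code block from this post seems not to have survived in this copy of the thread; a minimal loop in that spirit (my own reconstruction, not the original poster's code) would be:

```java
public class FloatLoop {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;                  // 0.1 has no exact binary representation
            System.out.println(sum);     // watch the error creep in (0.30000000000000004, ...)
        }
        System.out.println(sum == 1.0);  // prints false -- ten 0.1s do not make 1.0
    }
}
```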

vaibhav panghal

Greenhorn

Posts: 25

posted 8 years ago

Yes, I got what you are trying to say. In your simple illustration, 1234567 is the simple format for storing numbers like 1.234 x 10^567 (and such a format gives us a way to represent huge numbers, although less precisely).

Yes, I know that IEEE 754 is a much more complicated thing, but your simple example shows the concept well enough. Kindly stick with this thread for 2-3 more days. Thank you.