F Turner
Ranch Hand

posted 1 year ago

Hello,

The following is similar to a line in my code:

... where value1 and value2 are doubles, and value2 could be incredibly small.

As value2 becomes smaller and smaller (value1 is fixed) I would be interested to know when value1/value2 switches to become Double.POSITIVE_INFINITY. Double.MAX_VALUE + 1 seems to be truncated to Double.MAX_VALUE (though maybe it is not, and I'm only being shown part of it), so what is the determining factor for this switch, and how is a number like Double.MAX_VALUE + 1 or Double.MAX_VALUE*2 represented when determining this - as a BigDecimal etc?

Many thanks!
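For concreteness, here is a minimal sketch of the situation (the class name and sample values are my own, since the original line of code isn't shown): the quotient stays finite until its magnitude would exceed Double.MAX_VALUE, and only then overflows.

```java
// Sketch: watch value1 / value2 as value2 shrinks toward zero.
// The quotient stays finite until its magnitude would exceed
// Double.MAX_VALUE; past that point it overflows to infinity.
public class OverflowDemo {
    public static void main(String[] args) {
        double value1 = 1.0;

        System.out.println(value1 / 1e-300);            // 1.0E300, still finite
        System.out.println(value1 / Double.MIN_VALUE);  // Infinity: ~2^1074 is far beyond MAX_VALUE
    }
}
```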

Campbell Ritchie
Marshal

posted 1 year ago

I think that varies from implementation to implementation. Go through the Java® Language Specification, and you will probably not find anything specified at all. Unless, maybe, you are using the `strictfp` keyword. The IEEE754 specification allows arithmetic to use an extended-precision representation with a larger exponent for intermediate values. It may tell you there what happens, or look for it in Wikipedia.

As for Double.MAX_VALUE + 1: that comes out the same as Double.MAX_VALUE, because the precision of `double` arithmetic does not stretch to adding 1. 1 is far smaller than 16 significant figures can accommodate.
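A quick sketch of that point (class name mine): the spacing between MAX_VALUE and its neighbouring doubles, its ulp, is about 2⁹⁷¹, so adding 1 leaves the exact sum overwhelmingly closer to MAX_VALUE than to anything larger, and it rounds straight back.

```java
// Sketch: MAX_VALUE + 1 rounds back to MAX_VALUE, because the ulp
// (unit in the last place) at that magnitude dwarfs 1. Adding a full
// ulp instead pushes the rounded result over the edge to infinity.
public class MaxValuePlusOne {
    public static void main(String[] args) {
        double max = Double.MAX_VALUE;
        System.out.println(max + 1 == max);        // true: the 1 vanishes in rounding
        System.out.println(Math.ulp(max));         // ~1.996E292, the spacing at MAX_VALUE
        System.out.println(max + Math.ulp(max));   // Infinity
    }
}
```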
Stephan van Hulst
Saloon Keeper

posted 1 year ago

I believe all double operations return a real double value, one which most closely approximates what the value would be if you had near-infinite precision.

That means that no matter how small `value2` becomes, as long as both operands are positive, you will never get `POSITIVE_INFINITY`, because `MAX_VALUE` is the closest approximation of the actual result.

The matter is different when `value2` is `0.0` or `-0.0`. I'm not certain if this is correct, but here's an overview of what I believe are the correct results of divisions when zero or infinity are involved:

*The mind is a strange and wonderful thing. I'm not sure that it will ever be able to figure itself out; everything else, maybe. From the atom to the universe, everything, except itself.*
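The overview table itself isn't reproduced above, but the cases it would cover are fixed by IEEE 754 and can be sketched directly (class name mine):

```java
// Sketch: divisions involving zero and infinity. A nonzero number divided
// by a signed zero gives a signed infinity; zero divided by zero is NaN;
// a finite number divided by infinity collapses to zero.
public class ZeroAndInfinity {
    public static void main(String[] args) {
        System.out.println(1.0 / 0.0);    // Infinity
        System.out.println(1.0 / -0.0);   // -Infinity
        System.out.println(-1.0 / 0.0);   // -Infinity
        System.out.println(0.0 / 0.0);    // NaN
        System.out.println(1.0 / Double.POSITIVE_INFINITY);  // 0.0
        System.out.println(Double.POSITIVE_INFINITY
                / Double.POSITIVE_INFINITY);                  // NaN
    }
}
```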

Campbell Ritchie
Marshal

posted 1 year ago

It should all be in the Java® Language Specification (=JLS).

MAX_VALUE is (2 − 2⁻⁵²) × 2¹⁰²³ and MIN_VALUE is 2⁻¹⁰⁷⁴, so if you divide the two you get approximately 2²⁰⁹⁸, which will overflow.

That JLS section wrote: "If the magnitude of the quotient is too large to represent, we say the operation overflows; the result is then an infinity of appropriate sign."
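That overflow is easy to confirm (class name mine):

```java
// Sketch: MAX_VALUE is just under 2^1024 and MIN_VALUE is 2^-1074, so
// their quotient would be roughly 2^2098, far past the largest finite
// double; the result is an infinity of the appropriate sign.
public class BiggestQuotient {
    public static void main(String[] args) {
        System.out.println(Double.MAX_VALUE / Double.MIN_VALUE);   // Infinity
        System.out.println(-Double.MAX_VALUE / Double.MIN_VALUE);  // -Infinity
    }
}
```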

F Turner
Ranch Hand

posted 1 year ago

Hi,

Thanks for all of the help so far. The following excerpt is from the JLS (for Java 8):

The Java programming language requires that floating-point arithmetic behave as if every floating-point operator rounded its floating-point result to the result precision. Inexact results must be rounded to the representable value nearest to the infinitely precise result; if the two nearest representable values are equally near, the one with its least significant bit zero is chosen. This is the IEEE 754 standard's default rounding mode known as round to nearest.

So what you said, Campbell, about there being an extended precision representation for the intermediate values makes sense - I suppose the result is calculated using this representation, and then rounded to a double according to the rules in the Java specification.

In the case of Double.MAX_VALUE + 1, I doubt that there is any intermediate representation with adequate precision to be able to store this number, so is it just known that such a number could not be stored as a double before the calculation takes place and it is not even attempted? That said, I suppose that an appropriate intermediate representation could exist if managed by software (i.e. the data for each number is just spread across multiple memory locations, and the ALU does multiple additions to process the whole number)?

Also, Double.MAX_VALUE*1.000000000000001 (1) gives infinity, but Double.MAX_VALUE*1.0000000000000001 (2) gives Double.MAX_VALUE. It looks like, by the IEEE 754 specification, a double has a mantissa of 52 binary bits.

1.000000000000001 [decimal] = 1.{0000000000|0000000000|0000000000|0000000000|0000000001|00}1000000011 [binary], and

1.0000000000000001 [decimal] = 1.{0000000000|0000000000|0000000000|0000000000|0000000000|00}0111001101 [binary]

( | indicates a run of ten 0s; { } indicates which bits fall within the 52 that the mantissa can hold).

Would this explain why (2) gives Double.MAX_VALUE - because it simply sees the multiplicand as 1.0 since it is beyond the precision of a double, while (1) gives positive infinity, because it can see that the maximum double value is being multiplied by a value which is larger than one? This doesn't conform with the idea of an intermediate higher precision value at first, but I suppose that the program would have to store (2)'s multiplicand before even doing the calculation, at which point it would be interpreted as the double 1.0?

Thanks, and sorry for the endless questions :P
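The two cases can be checked directly (class name mine): the literal in (2) rounds to 1.0 before any arithmetic happens, while the literal in (1) is representable as a value slightly above 1.

```java
// Sketch of cases (1) and (2): 1.0000000000000001 is within half an ulp
// of 1.0 and rounds to it, so multiplying by it changes nothing, whereas
// 1.000000000000001 is genuinely greater than 1.0 and tips MAX_VALUE
// over into overflow.
public class Multiplicands {
    public static void main(String[] args) {
        System.out.println(1.0000000000000001 == 1.0);              // true
        System.out.println(1.000000000000001 == 1.0);               // false
        System.out.println(Double.MAX_VALUE * 1.0000000000000001);  // 1.7976931348623157E308
        System.out.println(Double.MAX_VALUE * 1.000000000000001);   // Infinity
    }
}
```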

Campbell Ritchie
Marshal

posted 1 year ago

F Turner wrote: ". . . In the case of Double.MAX_VALUE + 1, I doubt that there is any intermediate representation with adequate precision to be able to store this number, so is it just known that such a number could not be stored as a double before the calculation takes place and it is not even attempted?"

More likely that the 1.0 is shifted so its exponent matches MAX_VALUE's, and the solitary 1 bit is shifted so far to the right that even an extended-exponent representation will come out as 0.

F Turner wrote: ". . . a double has a mantissa of 52 binary bits.
1.000000000000001 [decimal] = 1.{0000000000|0000000000|0000000000|0000000000|0000000001|00}1000000011 [binary], and
1.0000000000000001 [decimal] = 1.{0000000000|0000000000|0000000000|0000000000|0000000000|00}0111001101 [binary]"

As you showed it, the IEEE754 format imputes a 53rd 1 bit to the left end of the number. So a `double` in its normal range uses 53 bits' precision.

F Turner wrote: ". . . Would this explain why (2) gives Double.MAX_VALUE - because it simply sees the multiplicand as 1.0 since it is beyond the precision of a double . . ."

Yes; 1.0000000000000001 can only be represented as a `double` identical to 1.0.

F Turner wrote: "This doesn't conform with the idea of an intermediate higher precision value at first, but I suppose that the program would have to store (2)'s multiplicand before even doing the calculation, at which point it would be interpreted as the double 1.0? Thanks, and sorry for the endless questions :P"

No problems about the many questions. Yes, that value will be recorded as 1.0 before the calculation is attempted. Multiplication by the slightly larger value will cause an overflow irrespective of how many additional bits are used in the intermediate representation, because the result of the calculation will be reduced to 64 bits. Use the javap tool to see what is happening: the value of X is shown at number 5.
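The javap listing isn't reproduced above, but the same point, that the literal is stored as the double 1.0 before any calculation happens, can be seen from the bit patterns without disassembling anything (class name mine):

```java
// Sketch: the compiler converts each literal to the nearest double when
// it builds the constant pool, so 1.0000000000000001 and 1.0 end up as
// the identical 64-bit pattern 0x3ff0000000000000.
public class LiteralBits {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(1.0000000000000001);
        System.out.println(Long.toHexString(bits));                // 3ff0000000000000
        System.out.println(bits == Double.doubleToLongBits(1.0));  // true
    }
}
```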