I was playing with some primitive casting, like so:

public class Test {

    public static void main(String[] args) {
        float f = (float) 99.9;
        double d1 = 99.9;
        double d2 = f;
        System.out.println("f: " + f);
        System.out.println("d1: " + d1);
        System.out.println("d2: " + d2);
    }
}

Here's the output:

f: 99.9

d1: 99.9

d2: 99.9000015258789

I'd be *very* grateful if someone could explain to me why d2 seems to be greater than d1 (I know it's not by much, but I don't understand why!).

Ta

Tony

in short, floating-point types like "float" and "double" are never exact; they are only approximations of the number you want. the differences between the number you want and the number you get, with these types, are complex and detailed, but can be considered more or less random, at least as a rule of thumb.

moreover, the differently-sized floating point types (float versus double, in your case) are inexact in *different ways*. "double" has more power to represent more decimal places — or to represent larger numbers without dropping low-order digits — but there's not necessarily any one-to-one correspondence between the numbers a "float" can represent and those a "double" can represent, and the inaccuracies each format introduces are different too.

this is just a fact of life when using floating-point numbers. if it's simply not acceptable, either use rounding methods to round to your desired number of decimal places, or find a way to make do with integers alone.
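For instance, rounding to a fixed number of decimal places can be done with BigDecimal (a sketch; the two-decimal-place scale and the integer-cents trick are just illustrations):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundDemo {
    public static void main(String[] args) {
        double d = (double) 99.9f; // 99.9000015258789

        // Round to two decimal places using BigDecimal
        BigDecimal rounded = BigDecimal.valueOf(d).setScale(2, RoundingMode.HALF_UP);
        System.out.println(rounded); // 99.90

        // Or sidestep fractions entirely by working in integer "cents"
        long cents = Math.round(d * 100); // 9990
        System.out.println(cents + " cents");
    }
}
```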

Originally posted by M Beck:

in short, floating-point types like "float" and "double" are never exact; they are only approximations of the number you want.

Well, that's not *totally* true - 0.5, for example, can always be represented exactly, as can 0.25, 0.75 ...

The point is that float and double use the binary system, whereas your literals are written in the decimal system. Not all finite decimal numbers translate to finite binary numbers, though. 0.1, for example, becomes an infinitely repeating number in the binary system! Because both float and double have to stop somewhere, but at different points, a 0.1f and a 0.1d are slightly different numbers - and here the float approximation happens to be the bigger one...

As a similar example, consider how the ternary number 0.1 (= 3^-1 = 1/3) becomes an infinite number when translated to the decimal system.
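You can make those two different cut-off points visible with BigDecimal's double constructor, which prints the exact value the binary format actually stores (a quick sketch):

```java
import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the exact stored binary value,
        // with no rounding on output
        System.out.println(new BigDecimal(0.1f));
        // prints 0.100000001490116119384765625
        System.out.println(new BigDecimal(0.1d));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}
```

Neither value is exactly 0.1, and the two approximations differ from each other.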

The soul is dyed the color of its thoughts. Think only on those things that are in line with your principles and can bear the light of day. The content of your character is your choice. Day by day, what you do is who you become. Your integrity is your destiny - it is the light that guides your way. - Heraclitus

to make a long story short, the problem mr. Walters was seeing stems from a floating-point conversion operation. when a float is cast to a double, the implicit conversion — like any other operation on floating-point numbers, really — always runs a risk of rounding error, introducing inaccuracy. frankly, i'm surprised that a double declared from a constant doesn't also reduce to the same inaccurate approximation as the double cast from a float. i can only assume that the compiler uses a different conversion method when reading constants from source code than the JVM uses when casting variables at run-time. which is odd, but i've seen odder...

Originally posted by Ilja Preuss:

Well, that's not *totally* true - 0.5, for example, can always be represented exactly, as can 0.25, 0.75 ...

Mathematically it's indeed not totally true, but in the context of floating-point mathematics in computers, which use a discrete system of representing data, it is

42

Ta

Tony


Originally posted by Jeroen Wenting:

Mathematically it's indeed not totally true, but in the context of floating-point mathematics in computers, which use a discrete system of representing data, it is

Sorry, I don't understand this. Are you saying that there are no values that can be represented exactly by IEEE floating point numbers? That sounds obviously wrong to me...



Originally posted by M Beck:

i'm surprised that a double declared from a constant doesn't also reduce to the same inaccurate approximation as the double cast from a float.

As far as I know, it does. It's just that Java "knows" that the internal representation is inaccurate, and therefore manages to conceal it to some amount when printing to the console.

Try the following code:
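For example (a sketch of the idea, not necessarily the original listing):

```java
public class HiddenInaccuracy {
    public static void main(String[] args) {
        float f = 99.9f;
        // Float.toString picks the shortest decimal string that still maps
        // back to this exact float, so the error stays hidden:
        System.out.println(f);          // 99.9
        // Widening to double keeps the exact same value, but Double.toString
        // now needs more digits to pin it down among doubles:
        System.out.println((double) f); // 99.9000015258789
    }
}
```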


A binary system doesn't care whether something is a "nice" fraction after all. 2463/4235327 is a nice fraction, yet I doubt 0.0010936581756261086806284378986558 (which was calculated using a PC-based calculator) can be guaranteed to be without rounding errors.

1/2 will usually (probably in 99.999999999% of cases) deliver 0.5000000000000000, but in that last case it may produce 0.49999999999999994 instead. Now if the viewer wanted to see that result to a higher precision than that, he'd get an unexpected result.



Originally posted by Jeroen Wenting:

There are no floating point numbers that can be guaranteed to be represented without rounding errors.

Of course there are. A float is just a number of bits that represent a number, and all numbers that can be represented using those bits are, of course, represented without "rounding errors". Whether those numbers can be represented in the decimal system using a finite representation is a different question - and that can certainly be answered with "yes" for some of the possible values.

A binary system doesn't care whether something is a "nice" fraction after all.

I didn't say that it does.

2463/4235327 is a nice fraction, yet I doubt 0.0010936581756261086806284378986558 (which was calculated using a PC-based calculator) can be guaranteed to be without rounding errors.

Which doesn't prove at all that there are *no* numbers at all that can be guaranteed to be without rounding errors.

1/2 will usually (probably in 99.999999999% of cases) deliver 0.5000000000000000 but in that last case it may produce 0.49999999999999994 instead.

1/2 is 0.1 in binary. I'd think that there is a float bit pattern that *exactly* represents this value. And I'd assume that Java deterministically uses that exact value for literals of 0.5f.
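That assumption is easy to check (a quick sketch):

```java
public class ExactHalf {
    public static void main(String[] args) {
        // 0.5 is 0.1 in binary, so both formats hold it exactly
        System.out.println(0.5f == 0.5d);         // true: widening loses nothing
        System.out.println(0.25f + 0.25f == 0.5); // true: this sum is exact, too
    }
}
```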

Of course that doesn't mean that *every computation* that theoretically should result in 1/2 ends up with that exact bit pattern. Is that what you were referring to?
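A computation like the following shows how such drift happens (a sketch):

```java
public class DriftDemo {
    public static void main(String[] args) {
        // Mathematically this sums to 1.0, but each 0.1 is already slightly
        // off in binary, and the errors accumulate across the additions:
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;
        }
        System.out.println(sum);        // 0.9999999999999999
        System.out.println(sum == 1.0); // false
    }
}
```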
