Basically you only have 32 or 64 bits (with float and double respectively) to play with, so you want to squeeze the most accuracy out of them. If the decimal point were fixed, the most significant bits would often be wasted on leading zeros, e.g. 000010.567, so instead the point 'floats' as far left as it can, i.e. the same bits can hold 10.5672345, giving the greatest accuracy from the bits that remain. It actually manages this by having an exponent part in the bit pattern, which records the magnitude of the number, alongside a mantissa (significand) part that holds the significant digits themselves.
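If you want to see that split for yourself, here's a minimal Java sketch (Java's float is IEEE 754 single precision: 1 sign bit, 8 exponent bits, 23 mantissa bits) that pulls the three fields out of a float's bit pattern using Float.floatToIntBits:

```java
public class FloatBits {
    public static void main(String[] args) {
        float f = 10.5672345f;
        int bits = Float.floatToIntBits(f);

        int sign     = (bits >>> 31) & 0x1;   // 1 bit: 0 = positive, 1 = negative
        int exponent = (bits >>> 23) & 0xFF;  // 8 bits, stored with a bias of 127
        int mantissa = bits & 0x7FFFFF;       // 23 bits of significant digits

        System.out.printf("sign=%d  exponent=%d (unbiased %d)  mantissa=0x%06X%n",
                sign, exponent, exponent - 127, mantissa);
    }
}
```

For 10.5672345f the unbiased exponent comes out as 3, because the value is stored normalised as roughly 1.3209 × 2^3; the leading 1 is implicit, so all 23 mantissa bits carry actual precision.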