Unit of least precision (ulp). It is used to discuss the accuracy of floating-point data. It represents the minimum step between representable values near a given number, so errors are often stated in ulps of the true result (the result computed to infinite precision), assuming the arguments are exact. For instance, the ulp of 1.977436582E+22 is 1.0E+13, since the least significant (rightmost) digit of the mantissa is in the 10^13 place. A result within 0.5 ulp of the true value is the best approximation representable.
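To make the "minimum step" concrete, here is a small sketch using Java's `Math.ulp` and `Math.nextUp` (the class name `UlpDemo` and the sample values are just for illustration). Note that Java doubles are binary IEEE 754, so their ulps are powers of two; the 1.0E+13 figure in the definition above assumes a 10-digit decimal format instead:

```java
// Sketch: ulp(x) is the gap between x and the next representable double.
public class UlpDemo {
    public static void main(String[] args) {
        double x = 1.0;
        double step = Math.ulp(x);            // 2^-52 for x = 1.0
        double next = Math.nextUp(x);         // closest double above x
        System.out.println(step);             // 2.220446049250313E-16
        System.out.println(next - x == step); // true: the gap is one ulp
        // For the number in the quoted definition, the *binary* ulp is
        // a power of two, not 1.0E+13 (that figure assumes decimal digits):
        System.out.println(Math.ulp(1.977436582E22)); // 4194304.0 (= 2^22)
    }
}
```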
My questions are: what exactly is this minimum step, and how do we get to 0.5 ulp (what is the calculation)?
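For reference, here is my current attempt at the calculation, based on the example above (please correct me if this is wrong): near 1.977436582E+22 the representable values are spaced one ulp = 1.0E+13 apart. An exact real result generally falls between two representable neighbors, and rounding to the nearer one gives an error of at most half the spacing, 0.5 × 1.0E+13 = 5.0E+12, i.e. 0.5 ulp. So I assume "within 0.5 ulp" just means correctly rounded, and that no representable value can do better.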