0f is a float literal, so it needs no conversion at all; 0 is an int literal that the compiler converts directly to a float (int-to-float is a widening conversion — there's no detour through double and no narrowing involved).
But all this happens at compile time, so there really is no runtime difference between the two.
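A minimal sketch of the point (variable names are mine): both forms end up as the exact same float value, since the conversion of the int literal happens at compile time.

```java
public class LiteralDemo {
    public static void main(String[] args) {
        float a = 0f; // float literal: no conversion needed
        float b = 0;  // int literal: widened to float by the compiler
        System.out.println(a == b); // true
    }
}
```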
d would (if memory serves) be a float result widened to a double, while d2 would be a double directly.
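The original declarations of d and d2 aren't shown, so the following is my guess at what was meant; it shows that widening a float result to a double carries the float's limited precision along, which is where the two differ.

```java
public class WideningDemo {
    public static void main(String[] args) {
        double d  = 1 / 3f;  // float division, result widened to double
        double d2 = 1 / 3.0; // double division directly
        System.out.println(d);  // 0.3333333432674408 (float precision, widened)
        System.out.println(d2); // 0.3333333333333333
        System.out.println(d == d2); // false
    }
}
```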
Actually, 1/3 is integer division and produces the int value 0.
And 1/(3f) produces a float, not a double: the int operand is promoted to float before the division. (1/3)f is not a valid expression; the f suffix attaches only to a numeric literal, not to an arbitrary expression.
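To make the division cases above concrete (a small self-contained sketch, not from the original thread):

```java
public class DivisionDemo {
    public static void main(String[] args) {
        System.out.println(1 / 3);  // 0 (int division truncates)
        System.out.println(1 / 3f); // 0.33333334 (float division)
        // Autoboxing shows the static type of the expression:
        Object r = 1 / 3f;
        System.out.println(r.getClass().getSimpleName()); // Float, not Double
    }
}
```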
Note: this example is very contrived and might not be entirely correct, but it serves the purpose.
You are, of course, correct that you can find circumstances where using the f suffix (suffix is the term) makes a difference — otherwise there wouldn't be a need for it. But in the original poster's specific example, it doesn't.