I just wanted to point out that Chapter 2 never explains that floating-point numeric literals appearing in code are interpreted as (default to) double.
Furthermore, Chapter 3 refers back to this concept as if it had been covered, which is a bit misleading:
" As you may remember from Chapter 2, floating-point literals are assumed to be double, unless postfixed with an f, as in 2.1f (Boyarsky, 20191119, p. 89)
Boyarsky, J., & Selikoff, S. (2019). OCP Oracle Certified Professional Java SE 11 Programmer I Study Guide [VitalSource Bookshelf version]. Retrieved from vbk://9781119584568
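To make the point concrete, here is a minimal sketch (my own example, not from the book) of how the compiler treats floating-point literals; the class name is made up:

```java
public class LiteralDefaults {
    public static void main(String[] args) {
        // float f = 2.1;   // DOES NOT COMPILE: 2.1 is a double literal,
                            // and double does not fit in a float implicitly
        float f = 2.1f;     // OK: the f suffix makes the literal a float
        double d = 2.1;     // OK: a floating-point literal defaults to double
        System.out.println(f + " " + d);
    }
}
```

Without the book stating the default-to-double rule in Chapter 2, the commented-out line is the kind of compiler error a reader would not be prepared for.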
Table 2.1 has an example of 123.45f. Table 2.3 (page 53) has a table with the default types. It lists 0.0 for both float and double. We intentionally don't cover 0.0f vs 0.0d there because we are trying to illustrate the principle of floating-point vs integer types. And we cover that a smaller type can be stored in a larger type. So it looks good to me!