But why did we not have to do a cast for //1, like this:
return (byte)c;
Instead the code compiles with just
return c; // 1
???
This is because '\u0001' is a valid value for a byte.
If you convert it to an integer, its value is 1, which fits into the byte range: -128 to +127.
You provide this value at compilation time, so the compiler knows it is valid.
However, when you pass a character into a method, its value cannot be determined at compilation time. The compiler therefore does not know what value will be passed in, and the narrowing cast down to byte could lose information.
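To make the contrast concrete, here is a minimal sketch (class and method names are mine, not from the original thread). A char method parameter must be cast explicitly, while a compile-time constant whose value fits into a byte is narrowed implicitly under assignment conversion (JLS §5.2):

```java
public class NarrowingDemo {

    // c is a method parameter: its value is unknown at compile time,
    // so the compiler demands an explicit cast down to byte.
    static byte fromParameter(char c) {
        return (byte) c;         // "return c;" would NOT compile here
    }

    // A constant expression of type char whose value fits into byte
    // is narrowed implicitly -- no cast needed.
    static byte fromConstant() {
        final char c = '\u0001'; // compile-time constant, value 1
        return c;                // compiles without a cast
    }

    public static void main(String[] args) {
        System.out.println(fromParameter('\u0001')); // prints 1
        System.out.println(fromConstant());          // prints 1
    }
}
```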
Consider a simpler example:
1. byte b1 = 1; // valid
2. byte b2 = 2; // valid
3. byte b3 = b1+b2; // invalid
Recall that the default type of a numeric literal is either int or double. Line 1 tries to put the int literal 1 into the byte b1, which works without a cast because it is a valid byte value AND it can be determined at compilation time.
So does line 2.
However, for line 3, int + int yields an int. Since b3 is assigned the result of b1+b2, which cannot be determined at compilation time (b1+b2 is actually performed at runtime), we are really trying to squeeze an int into a byte, which is a potential problem. That is why the compiler demands an explicit cast there.
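The same three lines, plus the two ways to make line 3 compile, can be sketched like this (the explicit cast, and the constant-expression variant; variable names f1/f2 are mine for illustration):

```java
public class ByteAddDemo {
    public static void main(String[] args) {
        byte b1 = 1;                // valid: constant fits into byte
        byte b2 = 2;                // valid: constant fits into byte
        // byte b3 = b1 + b2;      // invalid: b1+b2 is promoted to int
        byte b3 = (byte) (b1 + b2); // fix 1: explicit cast accepts the risk
        System.out.println(b3);     // prints 3

        // Fix 2: if both operands are compile-time constants, the compiler
        // evaluates the sum itself and allows the implicit narrowing.
        final byte f1 = 1;
        final byte f2 = 2;
        byte b4 = f1 + f2;          // compiles: f1+f2 is a constant expression, value 3
        System.out.println(b4);     // prints 3
    }
}
```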
Hope this makes it clear.
Nick.
SCJP 1.2, OCP 9i DBA, SCWCD 1.3, SCJP 1.4 (SAI), SCJD 1.4, SCWCD 1.4 (Beta), ICED (IBM 287, IBM 484, IBM 486), SCMAD 1.0 (Beta), SCBCD 1.3, ICSD (IBM 288), ICDBA (IBM 700, IBM 701), SCDJWS, ICSD (IBM 348), OCP 10g DBA (Beta), SCJP 5.0 (Beta), SCJA 1.0 (Beta), MCP(70-270), SCBCD 5.0 (Beta), SCJP 6.0, SCEA for JEE5 (in progress)