This is an interesting compilation error. The maximum values of the datatypes byte, short, char, int and long are all odd numbers, because each one is two raised to some power, minus one. char's range is 0 to 65535 (2^16 - 1), so the first piece of code doesn't compile: the literal you're trying to assign to c is too big for a char.
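Assuming the first snippet looked something like the line below, the compiler rejects the assignment (the exact message varies by compiler version, e.g. "possible loss of precision" or "possible lossy conversion from int to char"):-
char c = 65536; // 2^16, one more than a char can hold, so this won't compile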
In the second piece of code, you start by assigning c an acceptable value (65535). Then you add 1. You expect the second println to show char c = 65536, but what you actually end up with is c = 0. This is because
Java's arithmetic wraps around at the limits of a datatype such as int or char: add one to the maximum value and you end up with the minimum value. Try:-
c = 0;
c--;
System.out.println("c is " + c);
If you replace c++ with the longhand form
c = c + 1;
the code no longer compiles. The compiler automatically promotes chars to ints in arithmetic expressions, so the c on the right-hand side becomes an int, one is added, and the result is an int, which can't be assigned back to the char c without an explicit cast. The ++ and -- operators get away with it because they include an implicit cast back to the variable's type. So you have to write:-
c = (char) (c + 1);
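Incidentally, c += 1 compiles fine too, because like ++ the compound assignment operators also include a hidden cast back to the variable's type:-
c += 1; // equivalent to c = (char) (c + 1);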
There's no easy way around this problem, but you could do something like this:-
// Check against the maximum value for char using the Character wrapper class
if (c == Character.MAX_VALUE)
{
    System.out.println("Char can't get any bigger!");
}
else
{
    c++;
}
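Putting it all together, here's a small self-contained sketch you can run (the class name is just for illustration):-
public class CharWrapDemo
{
    public static void main(String[] args)
    {
        char c = 65535;                          // Character.MAX_VALUE
        c++;                                     // wraps around to the minimum value
        System.out.println("c is " + (int) c);   // prints "c is 0"

        if (c == Character.MAX_VALUE)
        {
            System.out.println("Char can't get any bigger!");
        }
        else
        {
            c++;                                 // safe to increment
        }
        System.out.println("c is " + (int) c);   // prints "c is 1"
    }
}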
Hope this helps,
Kathy