Jothi Shankar Kumar Sankararaj wrote:

i.e., after byte b = (byte)128, b will be -128. I don't know the logic behind it. Can anyone please explain the underlying logic behind this?

Good news:

This will not be on the exam.

Bad news:

It is a bit complicated.

How is +127 represented in a byte?

As 64+32+16+8+4+2+1:

127 = 0111 1111 — seven value bits set, and the first (leftmost) bit is for the sign: plus is 0 and minus is 1.
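You can check this bit pattern in Java itself (just a quick sketch; note that `Integer.toBinaryString` prints no leading zeros):

```java
public class SevenOnes {
    public static void main(String[] args) {
        // 127 in binary: seven ones (leading zeros are not printed)
        System.out.println(Integer.toBinaryString(127));   // 1111111
        // ... which is the sum of the seven value bits:
        System.out.println(64 + 32 + 16 + 8 + 4 + 2 + 1);  // 127
    }
}
```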

Piece of cake.

If you want to know what happens when you add one to 127, you first have to know how negative numbers are represented:

By the two's complement (see Wikipedia, or read on).

The two's complement is made as follows, here for -127:

first, invert all bits: 0111 1111 becomes 1000 0000

second, add one: 1000 0000 + 1 = 1000 0001

so 1000 0001 represents -127

If the first bit is one, you know it is a negative number.

To turn a negative number back into a positive one, do the same thing in the same direction (invert all bits, then add one).

It would sound more logical to first subtract one and then invert all bits, but as for the two's complement...

... this makes no difference ;-)
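The invert-then-add-one rule can be tried out directly in Java. This is just a sketch; the helper names `negate` and `negateOtherWay` are mine:

```java
public class TwosComplement {
    // Two's complement negation: invert all bits, then add one.
    // (~b and the addition happen as 32-bit ints; the cast back
    // to byte throws the upper 24 bits away again.)
    static byte negate(byte b) {
        return (byte) (~b + 1);
    }

    // The "other direction": subtract one first, then invert all bits.
    static byte negateOtherWay(byte b) {
        return (byte) ~(b - 1);
    }

    public static void main(String[] args) {
        System.out.println(negate((byte) 127));           // -127
        System.out.println(negate((byte) -127));          // 127
        // ... and indeed the order makes no difference:
        System.out.println(negateOtherWay((byte) 127));   // -127
    }
}
```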

back to your question:

what happens if you add 1 to 127 and store it into a byte?

Imagine you add two int literals: byte b = (byte)(127 + 1);

The addition is done as a 32-bit int, giving 128. If you store this in a byte, the leftmost 24 bits are cut off and you get:

10000000
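Exactly this cut-off can be watched in Java (a small sketch):

```java
public class CutOff {
    public static void main(String[] args) {
        int sum = 127 + 1;                // the addition happens as 32-bit int
        System.out.println(sum);          // 128
        System.out.println(Integer.toBinaryString(sum)); // 10000000
        byte b = (byte) sum;              // upper 24 bits are cut off
        System.out.println(b);            // -128
    }
}
```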

From the first bit = 1 you know it is a negative number. To get the positive representation, you would:

first, invert all bits: 1000 0000 becomes 0111 1111

second, add one: 0111 1111 + 1 = 1000 0000

so:

oops! now we're stuck! We are right back at 1000 0000.

The most negative number (also called the "weird number") gives us some trouble here. Because the most negative number (-128 in the case of a byte) is the one and only exception to the rule that inverting all bits and adding one turns a negative number into a positive one (and vice versa).

That 1000 0000 really represents -128 you can show with a trick:

We already know that 1000 0001 represents minus one two seven (see above). If we just subtract one, it gives us 1000 0000.

Or, more formally:

add minus one. What is minus one? Take 0000 0001, invert all bits (1111 1110), add one: 1111 1111.

Add them:

  1000 0001  (-127)
+ 1111 1111  (-1)
-----------
1 1000 0000

The leading one is just cut off (overflow), as there are only eight bits in a byte (I know you remembered that!).

result 1000 0000 = minus 128
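The same trick replayed in Java, as a sketch (the `& 0xFF` mask is only there to make the eight bits printable):

```java
public class WeirdNumber {
    public static void main(String[] args) {
        byte minus127 = -127;                 // bit pattern 1000 0001
        byte result = (byte) (minus127 - 1);  // subtract one; overflow bit is cut off
        System.out.println(result);           // -128
        // Show the resulting bit pattern, masked down to 8 bits:
        System.out.println(Integer.toBinaryString(result & 0xFF)); // 10000000
    }
}
```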

So the bit pattern 1000 0000 represents -128 (if it's a byte).

if it's a short:

0000 0000 1000 0000 represents just +128
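A byte and a short with the "same" low eight bits, again as a sketch:

```java
public class ByteVsShort {
    public static void main(String[] args) {
        byte b = (byte) 128;  // bits: 1000 0000            -> sign bit set
        short s = 128;        // bits: 0000 0000 1000 0000  -> sign bit clear
        System.out.println(b);        // -128
        System.out.println(s);        // 128
        // Widening the byte to a short sign-extends, so it stays negative:
        short widened = b;
        System.out.println(widened);  // -128
    }
}
```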

But at this point I'm totally confused and have forgotten what this thread was all about and what the question was.

Sorry!

Yours,

Bu.

[ December 04, 2006: Message edited by: Burkhard Hassel ]