
What happens when we cast a negative number to the char data type?

 
Tim Florian
Greenhorn
Posts: 2
Could you please explain, step by step, the output of the code below?



So, as far as I understand it:
Step one: we cast -32 to the byte data type, which has a range from -128 to 127... so nothing special happens here.
Step two: we cast -32 to char, but char has a range from 0 to 65,535 (2^16 - 1) and apparently no sign, so the magic begins here... because -32 is negative.
Step three: ???
...
Step N => we somehow end up in a situation where 65536 - 32 becomes the integer value 65504.
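
In code, the chain I am describing looks something like this (just a sketch, since the original snippet did not paste here, and the variable names are mine):

byte b = (byte) -32;   // -32 fits in a byte
char c = (char) b;     // cast the byte to char
int i = c;             // widen the char to int
System.out.println(i); // prints 65504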

Please help me figure out this trick.
Many thanks in advance!
 
Henry Wong
author
Sheriff
Posts: 23295

Unfortunately, there are a few things going on, so this explanation may get a bit complicated.

First, Java uses the two's complement format for its integer primitive types. Second, when explicitly casting from a smaller type to a larger one, Java uses sign extension. The exception, of course, is char, which doesn't have a sign, so it is zero-extended instead.

Also, it is easier to see the changes in binary, so I will show it in binary.

So...

The byte value of -32 is 1110 0000 in binary, which of course is the two's complement representation of -32. When the byte is explicitly cast to a char, it becomes 1111 1111 1110 0000 due to sign extension. And when the char is then cast (or promoted) to an int, it becomes 0000 0000 0000 0000 1111 1111 1110 0000; this time the value is zero-extended, because a char has no sign and is *not* sign extended.

Convert that binary back to decimal, and you get the value you mentioned: 65504.
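
If you want to see those bit patterns for yourself, something along these lines will print them (a sketch using Integer.toBinaryString and a mask, not the code from the original question):

byte b = (byte) -32;
char c = (char) b;
int i = c;

// Mask with 0xFF so only the byte's own 8 bits are printed.
System.out.println(Integer.toBinaryString(b & 0xFF));   // 11100000
System.out.println(Integer.toBinaryString(c));          // 1111111111100000
System.out.println(i);                                   // 65504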

Henry
 
Campbell Ritchie
Marshal
Posts: 56541
Remember also that (cast) associates to the right, so the (byte) cast must be regarded as happening first.
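
For example, something along these lines (a sketch; the snippet from the question is not shown here), where the casts are applied right to left:

// Read as (int) ((char) ((byte) (-32))): byte first, then char, then int.
int i = (int) (char) (byte) -32;
System.out.println(i);   // 65504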

And welcome to the Ranch
 