
Type Casting question

 
Richard Attix
Greenhorn
Posts: 10

First-time questioner... sorry if this is an old question!

I understand automatic type conversion: the Java compiler takes a number of a smaller type (say a byte) and automatically widens it to a larger type (say an int), because it will fit with no problems or truncated digits. What I don't understand is using an explicit cast to force a value of a larger type (say a long) into a smaller one (say a short). Without the cast it's a compile-time error, but with the cast it runs and converts the long to a short; the number it produces, though, is strange and unusable.

So... why do it? Why consider it "a usable option" when the output is a strange number you can't do anything with? What am I missing??

Thanks for your help folks!!
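
Here's a minimal sketch of the kind of code I mean (the variable names are just for illustration); it produces the output below:

    long bigNumber = 5398758L;
    short smallNumber = (short) bigNumber;  // explicit cast forces the narrowing

    System.out.println(bigNumber);
    System.out.println(smallNumber);
    System.out.println(bigNumber == smallNumber);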



Output:
5398758
24806    // so this number obviously isn't 5398758
false    // and where they should be the same and print "true", they're different and not equal (false)
 
Paul Clapham
Sheriff
Posts: 22816
Welcome to the Ranch!

You've chosen an example where the number doesn't fit after the narrowing conversion (you know the range of values a short variable can hold, right?), and so you get the result you described. But you phrased the question as if converting a long value to a short will always produce something different. That isn't the case at all. Try your code with this replacement:
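
A sketch of that replacement, assuming your code assigned the long and then cast it as above:

    long bigNumber = 22L;                          // fits comfortably in a short
    short smallNumber = (short) bigNumber;

    System.out.println(bigNumber);                 // 22
    System.out.println(smallNumber);               // 22 -- no bits were lost
    System.out.println(bigNumber == smallNumber);  // true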



and you'll see there's nothing strange or unusable about that. 22 fits into a short but 5398758 doesn't; that's all. So from a design point of view, you should only do a narrowing conversion like that when you're sure the number is going to fit into the narrower data type.
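
And if you're not sure the value fits, one way to guard the cast is a range check first; a quick sketch:

    long value = 5398758L;

    if (value >= Short.MIN_VALUE && value <= Short.MAX_VALUE) {
        short narrowed = (short) value;  // safe: we just checked that it fits
        System.out.println(narrowed);
    } else {
        System.out.println(value + " won't fit in a short");
    }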
 
Junilu Lacar
Sheriff
Posts: 11476
Again, welcome to the Ranch!

The "strangeness" that you refer to has to do with how numbers are represented in a computer, which is just 0s and 1s or binary. 

People normally use the decimal system. However, the computer uses the binary system. A decimal value of 171 has a binary representation of 10101011. That's 8 bits (binary digits). Now, if you had a data type that could only hold 4 bits and you tried to put a value that had 8 bits in it, something has to give, right? You'd have to drop some of the bits. 4 bits to be exact. When it comes to dropping bits to make a value fit into a smaller space, it's usually the bits on the left that get dropped first.  So, if you forced the 8-bit binary value of 10101011 into a space that only has room for 4 bits, you would end up with just 1011, which in decimal is 11.  So, a decimal 171 becomes 11 when forced from an 8-bit value into a 4-bit value.
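
Java doesn't have a 4-bit type, but you can simulate the dropped bits with a mask; a quick sketch:

    int value = 171;                     // binary 10101011 (8 bits)
    int lowFourBits = value & 0b1111;    // keep only the lowest 4 bits

    System.out.println(Integer.toBinaryString(value));        // 10101011
    System.out.println(Integer.toBinaryString(lowFourBits));  // 1011
    System.out.println(lowFourBits);                          // 11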


So, yes, if you didn't know that computers actually use a binary system to represent numbers, then that result would seem strange and arbitrary.
 