Float vs. Double

 
Mike McManus
Greenhorn
I am posting here because I "think" this is just a basic Java question.

I have just received my Scott/Jeanne OCA 8 study guide and am getting tripped up right out of the box.
In Appendix B/Study Tips there is an example of "trying it yourself" that says that the following line will not compile:

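    float value = 102.0;  // DOES NOT COMPILE (variable name assumed; the literal is the book's 102.0)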
Okay, simple enough.
The "issue" is that the compiler cannot convert from double to float - okay
The correction is to do either:

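    double value = 102.0;   // make the variable a double (variable name assumed)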
or

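    float value = 102.0f;   // the 'f' suffix makes 102.0 a float literal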
But looking online, at Java: The Complete Reference, or at a Java textbook, I cannot find WHY the original line would not work.
I have found many explanations of single precision vs. double precision - I get that.
Both of these data types hold floating point numbers.
The value being assigned to the variable is not so large that either data type could not hold it.
So why does it work for one and not the other?
What makes the compiler recognize the value of 102.0 as a double and not a float?
Why can't both data types be used with this value?

 
Ajinkya Ghonge
Greenhorn
Hi Mike, welcome to CodeRanch. The reason the original line will not work is as follows:

When you enter 102.0, the compiler takes this literal as a double. Now, a double is higher in the order than a float, just like an int is higher than a short and a short is higher than a byte. Thus, when you write that line, you are telling the compiler to do float = double, which is not allowed since double is the larger type. The solution is either assigning the literal to a double variable, marking the literal as a float with the 'f' suffix, or explicitly casting it to float. I hope you got your answer.
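For completeness, the cast version looks like this (a minimal sketch; variable name mine):

    float f = (float) 102.0;  // compiles: the cast explicitly narrows the double to a float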
 
Mike McManus
Greenhorn
So, if I am understanding this correctly: instead of the compiler looking at the value and allowing it IF it meets the primitive data type's attributes,
it is looking at the value and determining the data type to use regardless of the developer's preference.
 
Ajinkya Ghonge
Greenhorn
Yes, if the compiler sees a number in the form 100.00, it will assume it is a double.
 
Mike McManus
Greenhorn
Okay, thank you Ajinkya
 
Mike McManus
Greenhorn
On the next page of the study guide / Study Tips there is a list of 5 initializations of a float variable.
Why does a line like float value = 102; compile?
If the compiler is determining the data type, why would this one work? Wouldn't it define 102 as a short or int?
Why does it think that this is a float data type?
The problem is that it seems that, in order to pass the OCA/OCP tests, I will need to "think" like a compiler. So I need to understand what is happening here.
 
Junilu Lacar
Sheriff
102 is an int literal. 102.0 is a double literal. An int literal can be promoted to a float. A double cannot.
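For instance (variable names mine):

    float f = 102;     // compiles: the int literal 102 widens to a float
    float g = 102.0;   // DOES NOT COMPILE: a double literal is never narrowed implicitly
    float h = 102.0f;  // compiles: the 'f' suffix makes it a float literal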
 
Mike McManus
Greenhorn
I have found a link that describes "data type promotion" here: http://www.java2s.com/Book/Java/0020__Language-Basics/The_Type_Promotion_Rules.htm
But how in the heck can a character value of 'a' be converted to a numeric value??? Why would you want to? This doesn't make sense to me.
One example they have is dividing 50000 by 'a' to get 515. Really? What the heck is this? It is akin to dividing 500 miles by 'apple'; it just doesn't make sense why this would be allowed.
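Their example boils down to something like this (my reconstruction of it):

    int result = 50000 / 'a';   // 'a' is promoted to the int 97, and 50000 / 97 is 515 in integer division
    System.out.println(result); // prints 515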
 
Junilu Lacar
Sheriff
It will make more sense if you remember that you're dealing with basically a very powerful calculator. As such, everything eventually boils down to a series of 1s and 0s. In Java and similar languages like C and C++, char is an integer type. This allows "math" to be done on them and facilitates certain types of operations like conversions and encoding.

Take this code, for example:

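    // reconstructed from the discussion that follows - the original snippet was lost
    static char toUpperCase(char ch) {
        return (char) ('A' + ch - 'a');  // for ch in 'a'..'z', yields the uppercase letter
    }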
Now, try to imagine how you would do that without the math.
 
Mike McManus
Greenhorn
I am sorry, but I just don't get this. No calculator I have ever used allowed me to use alphabetic characters. Math is math - numbers vs. numbers. Not numbers vs. letters.
Your example shows adding 'A' to ch and then subtracting 'a' - ALL alphabetic characters!!! This is not math! This is gibberish. Who knows what the result will be? How can you count on it?
I was able to figure out, in the previous example of 50000 / 'a' = 515, that 'a' = 97 (roughly) - this is algebra. But where can I find that a lower case 'a' is ALWAYS = 97 (roughly) - and why would I want to?
In the real world, why would you ever want to do math with apples and oranges - numbers and letters?
 
Junilu Lacar
Sheriff

Mike McManus wrote:
One example they have is dividing 50000 by 'a' to get 515. Really? What the heck is this? It is akin to dividing 500 miles by 'apple'; it just doesn't make sense why this would be allowed.


You have to bear in mind that the example you cited was meant to demonstrate the mechanisms of type promotion. It doesn't necessarily represent something you'd do in a real-world program. You need to stay within the context in which the authors are giving their example. When you put the example code in a real-world context, it naturally becomes ridiculous.

It's like that prank on YouTube where a couple of guys challenge someone on the street to limbo dance, set up the pole and everything, blindfold their mark, then as soon as the mark bends backwards and starts doing the limbo, they run away. People walking by who didn't see the pranksters run away think that the victim is crazy or something because they aren't seeing the same context that the victim thinks he's still in. To you, doing math with char looks crazy because you're not seeing the context in which it makes sense. Hopefully, with the example I gave previously, you do now.

In Andy Hunt's book, Pragmatic Thinking & Learning, Tip #1 is: "Always consider the context."
 
Mike McManus
Greenhorn
Junilu - even with your example it still does not make sense to me. In your example of a method that converts a character to upper case, how does the "math" in your example work?
As stated previously, it just looks to me like the letter 'A' being added to the character variable ch and then another letter 'a' being subtracted - nothing in that makes sense to me as to how this is math and how you get from lower case to upper case.
As far as imagining how I'd do this without math - there is a limited number of letters in the alphabet, so how about doing this with arrays? Find the lower case value and return the corresponding upper case value - this makes sense to me, it's simple, and does not use this goofy "math".
Perhaps it is just because I haven't been exposed to it, but I still cannot wrap my head around using alphabetic characters in a math equation. Under what circumstances?
 
Paul Clapham
Sheriff

Mike McManus wrote:But where can I find that a lower case 'a' is ALWAYS = 97 (roughly) - and why would I want to?
In the real world, why would you ever want to do math with apples and oranges - numbers and letters?



Here's the Unicode chart which maps characters to their numerical representations: http://unicode-table.com/en/#control-character

Bear in mind that the numbers are hexadecimal, so it appears that 'a' is represented by 0061, which in decimal form is 97.

As for the real world: you sometimes see beginner exercises where you're supposed to do what's called a Caesar encryption which just replaces each letter by the next one in the alphabet, so "dog" gets encrypted as "eph". This just involves adding 1 to each character, although the beginners inevitably stumble over what to do with 'z'. Nobody seriously does that in the real world, but when you're designing a language you have to allow for all kinds of weird stuff. You have no idea what people are going to use your language for.
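A minimal sketch of that exercise, including the wrap-around for 'z' (method name mine):

    // Caesar shift by one letter: 'd' -> 'e', and 'z' wraps around to 'a'
    static char shiftOne(char ch) {
        return (char) ('a' + (ch - 'a' + 1) % 26);
    }

Applied to each letter of "dog", that produces "eph".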
 
Junilu Lacar
Sheriff

Mike McManus wrote:
Your example shows adding 'A' to ch and then subtracting 'a' - ALL alphabetic characters!!! This is not math! This is gibberish. Who knows what the result will be? How can you count on it?
I was able to figure out, in the previous example of 50000 / 'a' = 515, that 'a' = 97 (roughly) - this is algebra. But where can I find that a lower case 'a' is ALWAYS = 97 (roughly) - and why would I want to?
In the real world, why would you ever want to do math with apples and oranges - numbers and letters?



I'm reading a lot of frustration in your responses. You need to calm down and take a deep breath. Computers and your everyday calculators operate on basically the same principles, even if you've never encountered a calculator that deals with characters. Engineering students who use Texas Instruments calculators know these kinds of calculators all too well: http://tibasicdev.wikidot.com/userinput

As I said, everything eventually gets turned into a series of 1s and 0s, or essentially, a number. Even Strings are broken down into chars which are then turned into 1s and 0s deep inside a computer.

It seems like gibberish to you because you don't understand the "language" that the computer uses to make chars interchangeable with ints. The computer can't adjust to your way of thinking. You have to adjust to the computer's way of "thinking". To the computer, a char is just a number. And yes, the math is reliable because 'A' will always have the same encoded value, namely, 65. Always. Technically, a char is a 16-bit value. 'A' is \u0041. \u0041 is a hexadecimal value (base 16) and is equivalent to decimal 65. This encoding is called "Unicode" and it's a standard so, yes, you can rely on it having that value always.

char values that represent the letters A-Z and a-z and digits 0-9 are guaranteed to have the same distances from each other, so the arithmetic in the example I gave is guaranteed to be correct. Always. You don't have to take my word for it though. Feel free to experiment with them yourself.
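For example:

    System.out.println((int) 'A');  // prints 65
    System.out.println((int) 'a');  // prints 97
    System.out.println('a' - 'A');  // prints 32: the distance between the two cases is constant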

Again, you need to consider the context. Computers don't use the same kind of semantics that you use for numbers and characters. Computers are not going to change just because you don't think they make sense, so it's up to you to adapt your thinking to the way computers work so that they start to make sense to you. If you insist on using your rules and your context, then I'm afraid you'll just have to get used to being frustrated and confused. ¯\_(ツ)_/¯
 
Junilu Lacar
Sheriff

Mike McManus wrote:Junilu - even with your example it still does not make sense to me. In your example of a method that converts a character to upper case, how does the "math" in your example work?



These are the int equivalents of the standard Unicode values for uppercase letters:
A (\u0041) = 65
B (\u0042) = 66
C (\u0043) = 67
...
Z (\u005a) = 90

For lowercase letters:
a (\u0061) = 97
b (\u0062) = 98
c (\u0063) = 99
...
z (\u007a) = 122

So, if you want to convert 'b' to its uppercase equivalent 'B', you do some simple math:

uppercase('b') = 'A' + 'b' - 'a'

Breaking that down, you first see how "far" the char 'b' is from char 'a':

'b' - 'a'
→ 98 - 97
→ 1

So 'b' is one character away from 'a'. Now you get the character that's the same distance away from 'A':

'A' + 1
→ 65 + 1
→ 66
→ 'B'

Therefore, uppercase('b') is 'B'!
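The same steps in Java (variable names mine):

    char ch = 'b';
    int distance = ch - 'a';               // 98 - 97 = 1
    char upper = (char) ('A' + distance);  // 65 + 1 = 66, which is 'B'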

Mike McManus wrote:As far as imagining how I'd do this without math - there is a limited number of letters in the alphabet, how about doing this with arrays? Find the lower case value and return the corresponding upper case value - this makes sense to me, it's simple, and does not use this goofy "math".


It only seems goofy because you're not thinking in the right context. Think codes. Think numbers. That's what the computer understands. The computer doesn't really understand "characters". Characters are just an abstraction for your benefit, as a human who is more comfortable thinking in those terms. Computers deal with numbers. In fact, using more complicated methods like arrays to do conversions like this would be considered naive and goofy because the math is simpler and more straightforward; the math is actually quite elegant, if you understand it.

Mike McManus wrote:Perhaps it is just because I haven't been exposed to it, but I still cannot wrap my head around using alphabetic characters in a math equation.


I would say you're right.

Mike McManus wrote:Under what circumstances?


In most circumstances where you're dealing with raw chars.
 
Junilu Lacar
Sheriff

Mike McManus wrote:As far as imagining how I'd do this without math - there is a limited number of letters in the alphabet, how about doing this with arrays? Find the lower case value and return the corresponding upper case value - this makes sense to me, it's simple


Actually, it's not as simple as you think it would be without using the numeric nature of chars. Try it yourself. See if you can actually implement the lowercase to uppercase conversion using arrays and none of the "goofy" math.

Ok, I guess you could do it with two parallel arrays of char. You're still basically doing the same thing you'd be doing with the char math though, only you'd be using more memory to store the arrays and you'd be iterating through an array to find the character you want to convert. That's the equivalent of using a hammer to kill a fly.

With the array of chars approach, you'd have uppercase chars in one array and lowercase chars in another array. You'd find the offset of the char you want to convert in the lowercase array, then use that same offset to get the equivalent element in the uppercase array. That's exactly what you're doing with the char math, though: finding the distance (offset) of the lowercase char from 'a', then finding the character that has the same distance (offset) from 'A'.
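Here's a sketch of what that parallel-array version would look like (names mine):

    static final char[] LOWER = "abcdefghijklmnopqrstuvwxyz".toCharArray();
    static final char[] UPPER = "ABCDEFGHIJKLMNOPQRSTUVWXYZ".toCharArray();

    static char toUpperViaArrays(char ch) {
        for (int i = 0; i < LOWER.length; i++) {
            if (LOWER[i] == ch) {
                return UPPER[i];  // same offset, other array
            }
        }
        return ch;  // not a lowercase letter: return it unchanged
    }

Note the search loop - that's exactly what the char math avoids.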
 
Winston Gutkowski
Bartender

Mike McManus wrote:But how in the heck can a character value of 'a' be converted to a numeric value???


Because it IS a numeric value. Everything inside a computer is, even though it may not always be used that way. Specifically, a Java char is a 16-bit unsigned integer in the range 0-65535.

Why would you want to? This doesn't make sense to me.


Ah, now that's a slightly different question. The fact is that chars are generally not used for arithmetic because the reason they exist is to hold a Unicode character value.

And the reason why Junilu's conversion method works - albeit for a very limited set of input values - is precisely because a char IS a number, and furthermore, that its numeric value always represents the same character.

how about doing this with arrays? Find the lower case value and return the corresponding upper case value - this makes sense to me, it's simple, and does not use this goofy "math".


The answer is in the very first verb of your solution: FIND. You have to find the character you want to convert before you can return the corresponding uppercase value.
And what if the value supplied is already uppercase? Then you either won't find it (usually the worst-case scenario for a search), or you have to include uppercase characters in your "translation" table.

And the fact is that the compiler (and/or JVM) already has a table of all 65,000-odd Unicode characters that it indexes directly in order to return the right symbol (TBH, I'm not sure exactly how that's done, but you can bet it has to be fast).

So, if you want to convert, why not just use maths to return another char that the language can use again as a direct index? A couple of machine instructions instead of a complicated (and slow) search loop.

In the real world, why would you ever want to do math with apples and oranges - numbers and letters?


Because hopefully you now realise that, to a computer, they are NOT apples and oranges; they're both integers.

HIH

Winston
 
Mike McManus
Greenhorn
Paul - thank you for the link to the code table. Now this makes more sense: every letter represents a numeric equivalent. So the TOUPPER method is really just calculating an offset ('A' - 'a') to apply to the character that is passed in. So in reality it is:
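    (char) (ch + ('A' - 'a'))  // reconstructed from the reply below; effectively ch - 32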
Back to the original question about float vs. double. I now understand that this is because:
1) the compiler determines the appropriate data type for the value (in this case, 102.0);
2) because the compiler determined that the value is a double, it cannot be assigned to a float variable unless an explicit cast is done or the 'f' suffix is used on the value;
3) and there is a data type progression based on the size of the data type. You can normally assign from a smaller data type into a larger one, but not vice versa.
Thanks for everybody's help on this.
And the journey continues...
 
Junilu Lacar
Sheriff

Mike McManus wrote:So the TOUPPER method is really just calculating an offset ('A' - 'a') to apply to the character that is passed in. So in reality it is:


If you think that perspective is easier to understand, that's fine. The operations are commutative and associative anyway. Frankly, I find it ironic that you would find this "reality" easier to understand. I personally think that

'A' + (ch - 'a')

is easier to visualize and equate to calculating an offset (within the lowercase range) and applying it (to the uppercase range) as I explained in my previous reply.

Note that there is a set of non-alphabetic characters between 'Z' and 'a' that separates the two sets of alphabetic characters. The way you view the formula is more like calculating a frame size (the distance between the starts of two separate character ranges, 'A' and 'a'), then moving the frame so that one end is at the character represented by ch. The uppercase character you want is the one that's on the other end of the frame.

I also find the math for your perspective more difficult to relate to the idea of "calculating and applying an offset":

('A' - 'a')
→ 65 - 97
→ -32

So if that's an offset, I would add it to ch. But it's a negative number, so adding it actually means subtracting 32. So from the lowercase ch, I go backwards by the "offset" to get the equivalent uppercase character.

That's just a little too convoluted for my old brain and the semantics of the word "offset" don't match the use here, IMO. But whatever works for you is fine, I guess. ¯\_(ツ)_/¯

Mike McManus wrote:Thanks for everybody's help on this.
And the journey continues...


I'm glad things are getting clearer for you now...
 
Junilu Lacar
Sheriff
Here's a little puzzler that might get you thinking more about how data type promotion of char works with the "+" operator and String concatenation vs addition:

Try to modify the following statement as little as possible so that it prints "ABCDEFG" instead of "131CDEFG"
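From the output described, the statement was presumably along these lines:

    System.out.println('A' + 'B' + "CDEFG");  // prints 131CDEFG: 'A' + 'B' is int addition (65 + 66)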
 
Bartender
Does replacement count as one operation, or a remove and an add operation? I have different solutions :P

Wait, no. Nevermind
 
Campbell Ritchie
Marshal

Junilu Lacar wrote:. . . -32

So if that's an offset, I would add it to ch. . . .

Does that only work for English characters?
 
Winston Gutkowski
Bartender

Mike McManus wrote:Paul - thank you for the link to the code table. Now this makes more sense: every letter represents a numeric equivalent. So the TOUPPER method is really just calculating an offset ('A' - 'a') to apply to the character that is passed in. So in reality it is...


Ooh noo it isn't. Because the business of alphabets and "uppercase" isn't as simple as that.

For example: French contains the accented letters 'ë' (Noël) and 'ï' (aiguïlle), but because of how they're used, no word in French ever begins with them - and therefore, linguistically, they can't be "capitalized".
This means that if your Locale is France, String.toUpperCase() may not return what you expect; and I'm darn sure the same is true in Turkey, but whether it's toUpperCase() or toLowerCase() that's the problem I forget.

Suffice to say, there are upper and lower translations for every European letter (including accents). Just don't be too simplistic about it.

Winston
 
lowercase baba
I'd think it would be done with a bitwise "and" and a mask.
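Presumably something like this, exploiting the fact that the upper- and lowercase Latin letters differ only in bit 0x20:

    char upper = (char) ('b' & ~0x20);  // clears the 0x20 bit: 0x62 becomes 0x42, i.e. 'B'
    char lower = (char) ('B' | 0x20);   // sets the 0x20 bit: 0x42 becomes 0x62, i.e. 'b'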
 
Junilu Lacar
Sheriff

Campbell Ritchie wrote:Does that only work for English characters?


That particular formula only works for English characters since it takes 'A' as the start of the conversion range. However, looking through the Unicode table that Paul cited, it seems like similar strategies could be applied to other characters that have uppercase/lowercase sets. It would certainly appear to work for converting chars from \u0561..\u0586 to their equivalents in \u0531..\u0556 and vice versa. Even when the two sets are interwoven, as is the case in \u0100..\u017e, the general approach still works. You just have to change the "base" characters you use for each set of chars involved in the conversion.

I remember one of my college professors saying how this works in EBCDIC, too. The thing with EBCDIC is that the codes for A-Z are not consecutive as they are in Unicode/ASCII. EBCDIC has groups of consecutive alphabetic char codes separated by groups of non-alphabetic chars. The relative offsets of all alphabetic uppercase chars from 'A' and lowercase alphabetic chars from 'a', however, are always going to be the same in both sets, so the same formula still works even when the codes are not all consecutive. I think the codes for digits 0-9 are supposed to be consecutive in any standard character encoding. I seem to recall this coming up in a discussion about Hamming distance, but I could be wrong. At least that's what's coming out of my age-addled memory banks.
 