
Data type question

Arash Ashl

Joined: Mar 04, 2013
Posts: 9
Hi everyone,
I was studying data types in Java, and I have some basic questions.

What is the meaning of the default value for a data type, and what is it used for?
For example:
default value of int is 0
default value of long is 0L
default value of float is 0.0f
default value of reference data type is null

What's the difference between a bit and a byte? Is bit a data type as well?

What's the difference between float and double?

The minimum value of a character is \uoooo, and the max is \ufffff... what are these symbols? Aren't characters only a, b, c...?

thanks much already
Matthew Brown

Joined: Apr 06, 2010
Posts: 4544

Most of these you could find easily enough by searching, but a quick summary:

In some contexts - e.g. instance variables - if you don't initialise the variable it gets given a value. That's the default value for that type. So null for reference types, false for booleans, and zero for all the numeric primitive types. (Local variables don't get a default value like this - the compiler will make you initialise them before use).
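A small sketch of that difference (the class and field names here are just made up for illustration):

```java
// Fields get default values; local variables don't.
public class Defaults {
    static int count;     // defaults to 0
    static long total;    // defaults to 0L
    static float ratio;   // defaults to 0.0f
    static boolean ready; // defaults to false
    static String name;   // reference type: defaults to null

    public static void main(String[] args) {
        System.out.println(count); // prints 0
        System.out.println(name);  // prints null

        int local;
        // System.out.println(local); // compile error: "variable local
        //                            //  might not have been initialized"
    }
}
```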

bit isn't a Java data type, unlike byte. In standard computing terminology a bit is a "binary digit" - 0 or 1 - and a byte is (usually) 8 bits.
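You can see the bit widths directly via the standard `SIZE` constants, and manipulate individual bits with the bitwise operators, e.g.:

```java
// Bit widths of the primitive wrappers, plus a simple bit test.
public class BitsAndBytes {
    public static void main(String[] args) {
        System.out.println(Byte.SIZE);    // 8  - a byte is 8 bits
        System.out.println(Integer.SIZE); // 32 - an int is 4 bytes

        // There is no "bit" type, but you can work with individual
        // bits of an int using bitwise operators:
        int flags = 0b0101;                        // binary literal
        System.out.println((flags & 0b0100) != 0); // true: bit 2 is set
    }
}
```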

float and double differ in the level of accuracy. A float is 32 bits and a double is 64 bits, so a double can represent floating point numbers much more precisely (roughly 15-16 significant decimal digits versus about 7 for a float).
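A quick way to see the difference is to compute the same value in both types:

```java
// Same calculation in 32-bit and 64-bit floating point.
public class Precision {
    public static void main(String[] args) {
        float f = 1.0f / 3.0f;  // about 7 significant decimal digits
        double d = 1.0 / 3.0;   // about 15-16 significant decimal digits
        System.out.println(f);  // 0.33333334
        System.out.println(d);  // 0.3333333333333333

        System.out.println(Float.SIZE);  // 32
        System.out.println(Double.SIZE); // 64
    }
}
```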

Finally, \u as a prefix is a way of using Unicode codes to specify characters, which is useful when you've got characters from unusual character sets.
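For example, '\u0041' is just another way of writing 'A', and the escape syntax lets you use characters you can't easily type:

```java
// Unicode escapes: \u followed by four hex digits.
public class UnicodeDemo {
    public static void main(String[] args) {
        char a = '\u0041';            // Unicode escape for 'A'
        System.out.println(a == 'A'); // true

        char euro = '\u20AC';         // the euro sign, not on most keyboards
        System.out.println(euro);

        // Under the hood a char is a 16-bit number:
        System.out.println((int) 'A'); // 65
    }
}
```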
Campbell Ritchie

Joined: Oct 13, 2005
Posts: 46412
You haven’t read your sources very well. A char can represent values from \u0000 to \uffff, not \uoooo or \ufffff.
Yes, a bit is different from a byte. A byte is eight bits (but on older computers, say 40 years ago, a byte might have been six bits).
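You can confirm both points from the standard constants: a char runs from code point 0 up to code point 65535, and a Java byte is defined as 8 bits:

```java
// The range of char and the size of byte, straight from the API constants.
public class CharRange {
    public static void main(String[] args) {
        System.out.println((int) Character.MIN_VALUE); // 0     (code point of the minimum char)
        System.out.println((int) Character.MAX_VALUE); // 65535 (code point of the maximum char)
        System.out.println(Byte.SIZE);                 // 8 bits per Java byte
    }
}
```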
Campbell Ritchie

Joined: Oct 13, 2005
Posts: 46412
If you look here, you will find two versions of what the range of a char is. Note that all fields default to zero values until you assign them something real. In the case of primitives, that zero might represent a real value, but for reference types null is highly unlikely to be one.