Why is char c = (char)65.8; allowed in Java?
Shouldn't it throw an error since 65.8 is not an exact Unicode value? I understand that the double is truncated to an integer, in this case, 65, but it seems like bad design to me to allow the programmer to make such a cast.
That is called a narrowing primitive conversion (narrowing type cast).
From the Oracle docs:
22 specific conversions on primitive types are called the narrowing
primitive conversions:
short to byte or char
char to byte or short
int to byte, short, or char
long to byte, short, char, or int
float to byte, short, char, int, or long
double to byte, short, char, int, long, or float
A narrowing primitive conversion may lose information about the
overall magnitude of a numeric value and may also lose precision and
range.
In Java, there are two basic types of type conversions: widening and narrowing.
A widening conversion occurs when you convert from a type with a smaller (or narrower) range to a type with a larger (or wider) range. Because of this, there is no chance of data loss and the conversion is considered "safe."
A narrowing conversion occurs when you convert from a type with a larger (or wider) range to a type with a smaller (or narrower) range. Since we are shrinking the range, there is a chance of data loss, so this conversion is considered "unsafe."
The conversion from byte to char is a special case and represents widening and narrowing at the same time. The conversion starts by converting the byte to an int and then the int gets converted to the char.
One reason I can think of why narrowing type casting doesn't result in an error/exception is to allow for a convenient/easy/quick type conversion in cases where no data will be lost. The compiler leaves it up to us to make sure the converted data will fit in the smaller range. It is also useful if we want to quickly truncate values, such as chopping off the fractional part of a double (by type-casting it to an int).
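A small sketch of that kind of quick truncation (the variable names are just for illustration):
double price = 65.8;
int whole = (int) price;     // 65: the cast truncates toward zero, it does not round
char letter = (char) 65.8;   // 'A': 65.8 is truncated to 65, the code point for 'A'
System.out.println(whole + " " + letter);  // prints: 65 A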
It doesn't happen automatically on assignment: that would be a compilation error.
The fact that the programmer makes a conscious choice (i.e. writes the explicit cast) means she is taking into consideration the possibility of, and responsibility for, the truncation.
You may have code, such as cipher algorithms, that finds it useful to cast a double or float to char. Also, char is an unsigned type, which means (char)200.5 yields something different than (char)(byte)200.5.
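A quick sketch of that difference (the particular characters are incidental; the point is that the two casts produce different code points):
char c1 = (char) 200.5;         // the double is truncated to 200, so c1 is '\u00C8'
char c2 = (char) (byte) 200.5;  // (byte) 200.5 is -56; widening -56 to char gives '\uFFC8'
System.out.println((int) c1);   // 200
System.out.println((int) c2);   // 65480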
How can the dumb computer know what was intended?
char c = (char)65.8; // valid, double gets converted and explicitly truncated to a char
It may so happen that during a calculation you are doing some complex computations involving double arithmetic, and on the final value you apply the truncation and display it as a character. What's wrong with that?
Related
Why don't we write a suffix with the short data type, like short s = 2s;, the way we do with float, e.g. float f = 1.23f?
I know that when we write a floating-point literal the compiler treats it as a double by default (8 bytes), and trying to copy those 8 bytes into a float's 4 results in a type error, which is why we append f when initializing a float. But why don't we do something similar with short, since an integer literal is an int by default?
The Java Language Specification allows a type suffix of L or l for long literals. However, it does not provide a type suffix for short values. It's not necessary because:
Any value that could be represented by a short can also be represented by the default integral type int.
Assignments of an int value to a short variable are allowed if the value is a constant expression, whose value can be represented by a short. See the Java Language Specification, "Assignment Contexts":
A narrowing primitive conversion may be used if the type of the variable is byte, short, or char, and the value of the constant expression is representable in the type of the variable.
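A small sketch of what the quoted rule allows and disallows (the variable names are just for illustration, and the error comment paraphrases the usual compiler message):
short s1 = 2;          // OK: 2 is a constant expression that fits in a short
int i = 2;
// short s2 = i;       // error: i is not a constant expression, so no implicit narrowing
short s3 = (short) i;  // OK with an explicit cast
final int c = 2;
short s4 = c;          // OK: c is a constant expression representable as a short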
Why don't we add an “s” suffix to short types?
Because the Java language is inconsistent on this point.
The JLS is more flexible for non-floating-point numeric primitive types than for floating-point numeric primitive types:
In addition, if the expression is a constant expression (§15.28) of
type byte, short, char, or int:
A narrowing primitive conversion may be used if the type of the
variable is byte, short, or char, and the value of the constant
expression is representable in the type of the variable.
I think that making a distinction between byte, short, char, and int on one side and float and double on the other, in the way they are declared and manipulated, was not really required. It creates inconsistency and therefore potential mistakes in their use.
If short s = 10; is valid because the compiler verifies that the narrowing primitive conversion loses no information, then float f = 10.0; would also have been valid for exactly the same reason.
That is because going from a double to a float can truncate the value and lose data, so the compiler doesn't do it automatically; you have to tell it explicitly. But when going from a smaller type, short, to a larger type, int, it is done automatically, as the compiler just has to widen the value and there is no potential loss of data, as opposed to the former case.
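For comparison, a small sketch of both cases side by side (the error comment paraphrases the compiler message):
short s = 10;        // OK: the int constant 10 is implicitly narrowed to short
// float f1 = 10.0;  // error: possible lossy conversion from double to float
float f2 = 10.0f;    // OK: the f suffix makes it a float literal
float f3 = 10;       // OK: an int constant widens to float without any suffix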
Thread.sleep takes a long as an argument for milliseconds. But an integer literal is of type int unless specified otherwise with the suffix notation like 1000L. So why is this valid code?
Thread.sleep(1000);
Because int can be promoted to long.
long is bigger (more bits) than int and so int can be converted to long without any loss of data. Going the other way may have problems because data could be lost - hence that would be an error.
It's a valid widening conversion; see JLS 5.1.2, Widening Primitive Conversion, which says in part:
19 specific conversions on primitive types are called the widening primitive conversions:
int to long, float, or double
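So the call from the question compiles as-is; a minimal sketch (the class name here is just for the example):
public class SleepDemo {
    public static void main(String[] args) throws InterruptedException {
        // The int literal 1000 undergoes widening primitive conversion to long,
        // so the call matches Thread.sleep(long).
        Thread.sleep(1000);   // same as Thread.sleep(1000L)
        long millis = 500;    // an int constant widens to long on assignment too
        Thread.sleep(millis);
    }
}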
If you declare variables of type byte or short and attempt to perform arithmetic operations on these, you receive the error "Type mismatch: cannot convert int to short" (or correspondingly "Type mismatch: cannot convert int to byte").
byte a = 23;
byte b = 34;
byte c = a + b;
In this example, the compile error is on the third line.
Although the arithmetic operators are defined to operate on any numeric type, according to the Java Language Specification (5.6.2 Binary Numeric Promotion), operands of type byte and short are automatically promoted to int before being handed to the operators.
To perform arithmetic operations on variables of type byte or short, you must enclose the expression in parentheses (inside of which operations will be carried out as type int), and then cast the result back to the desired type.
byte a = 23;
byte b = 34;
byte c = (byte) (a + b);
Here's a follow-on question to the real Java gurus: why? The types byte and short are perfectly fine numeric types. Why does Java not allow direct arithmetic operations on these types? (The answer is not "loss of precision", as there is no apparent reason to convert to int in the first place.)
Update: jrudolph suggests that this behavior is based on the operations available in the JVM; specifically, only full- and double-word operations are implemented. Hence, to operate on bytes and shorts, they must be converted to int.
The answer to your follow-up question is here:
operands of type byte and short are automatically promoted to int before being handed to the operators
So, in your example, a and b are both converted to an int before being handed to the + operator. The result of adding two ints together is also an int. Trying to then assign that int to a byte value causes the error because there is a potential loss of precision. By explicitly casting the result you are telling the compiler "I know what I am doing".
I think the matter is that the JVM supports only two types of stack values: word sized and double word sized.
Then they probably decided that they would need only one operation that works on word sized integers on the stack. So there's only iadd, imul and so on at bytecode level (and no operators for bytes and shorts).
So you get an int value as the result of these operations which Java can't safely convert back to the smaller byte and short data types. So they force you to cast to narrow the value back down to byte/short.
But in the end you are right: this behaviour is not consistent with the behaviour of ints, for example. You can add two ints without any problem and get no error even if the result overflows.
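A small sketch of that silent int overflow (the values are picked just to push the sum past Integer.MAX_VALUE):
int x = 2000000000;
int y = 2000000000;
int sum = x + y;            // wraps around silently
System.out.println(sum);    // prints -294967296, no error or exception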
The Java language always promotes arguments of arithmetic operators to int, long, float or double. So take the expression:
a + b
where a and b are of type byte. This is shorthand for:
(int)a + (int)b
This expression is of type int. It clearly makes sense to give an error when assigning an int value to a byte variable.
Why would the language be defined in this way? Suppose a + b had type byte: with a = 60 and b = 70, a + b would be -126 due to byte overflow. As part of a more complicated expression that was expected to result in an int, this may become a difficult bug. Restrict use of byte and short to array storage, constants for file formats/network protocols, and puzzlers.
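A quick sketch of the difference (the wrapped value only appears when the result is explicitly forced back into a byte):
byte a = 60;
byte b = 70;
int ok = a + b;                 // 130: promotion to int keeps the true value
byte wrapped = (byte) (a + b);  // -126: casting 130 back to byte overflows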
There is an interesting recording from JavaPolis 2007. James Gosling gives an example of how complicated unsigned arithmetic is (and why it isn't in Java). Josh Bloch points out that his example gives the wrong result under normal signed arithmetic too. For understandable arithmetic, we need arbitrary precision.
In the Java Language Specification (5.6.2 Binary Numeric Promotion):
1. If any expression is of type double, then the promoted type is double, and other expressions that are not of type double undergo widening primitive conversion to double.
2. Otherwise, if any expression is of type float, then the promoted type is float, and other expressions that are not of type float undergo widening primitive conversion to float.
3. Otherwise, if any expression is of type long, then the promoted type is long, and other expressions that are not of type long undergo widening primitive conversion to long.
4. Otherwise, none of the expressions are of type double, float, or long. In this case, the promoted type is int, and any expressions that are not of type int undergo widening primitive conversion to int.
Your code falls under case 4: variables a and b are both converted to int before being handed to the + operator. The result of the + operation is also of type int, not byte.
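A short sketch of the four promotion cases on mixed operands (the variables are only there to make the types explicit):
byte   bv = 10;
long   lv = 20L;
float  fv = 1.5f;
double dv = 2.5;
double r1 = bv + dv;   // case 1: bv is widened to double
float  r2 = bv + fv;   // case 2: bv is widened to float
long   r3 = bv + lv;   // case 3: bv is widened to long
int    r4 = bv + bv;   // case 4: both operands are widened to int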
This may sound too trivial for an intermediate Java programmer. But while reviewing Java fundamentals, I found a question:
Why is a narrowing conversion like
byte b = 13;
allowed, while
int i = 13;
byte b = i;
is rejected by the compiler?
Because byte b = 13; is an assignment of a constant. Its value is known at compile time, so the compiler can/should/will whine if assigning the constant's value would result in overflow (try byte b = 123456789; and see what happens).
Once you assign it to a variable first, you're assigning the value of an expression; even if that expression happens to be invariant, the compiler doesn't know that. The expression might overflow the target type, so the compiler whines.
From here:
Assignment conversion occurs when the value of an expression is assigned (§15.26) to a variable: the type of the expression must be converted to the type of the variable. Assignment contexts allow the use of an identity conversion (§5.1.1), a widening primitive conversion (§5.1.2), or a widening reference conversion (§5.1.4). In addition, a narrowing primitive conversion may be used if all of the following conditions are satisfied:
The expression is a constant expression of type byte, short, char or int.
The type of the variable is byte, short, or char.
The value of the expression (which is known at compile time, because it is a constant expression) is representable in the type of the variable.
In your example all three conditions are satisfied, so the narrowing conversion is allowed.
P.S. I know the source I'm quoting is old, but this aspect of the language hasn't changed since.
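A small sketch of how far the constant-expression rule stretches (the final keyword is what makes i a constant expression here; the names are just illustrative):
byte b1 = 13;        // OK: 13 is a constant expression that fits in a byte
final int i = 13;
byte b2 = i;         // OK: i is a constant expression, so the compiler can check it
int j = 13;
// byte b3 = j;      // error: j is not a constant expression
byte b4 = (byte) j;  // OK with an explicit cast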
Because an integer literal is a constant whose value the compiler can check at compile time. Once you store it in a variable of type int, that check is no longer possible, so the value must be cast explicitly to the other type:
int i = 13;
byte b = (byte) i;
A byte has 8 bits; an int has 32 bits, and both are signed.
Conversion from a number with a smaller range of magnitude (like int to long or long to float) is called widening. The goal of widening conversions is to produce no change in the magnitude of the number while preserving as much of the precision as possible. For example, converting the int 2147483647 to float produces 2.14748365e9, i.e. 2,147,483,648. The difference is usually small, but it may be significant.
Conversely, a conversion where there is the possibility of losing information about the magnitude of the number (like long to int or double to long) is called narrowing. With narrowing conversions, some information may be lost, but the nearest representable value is used whenever possible. For example, converting the float 3.0e19 to long yields 9223372036854775807 (Long.MAX_VALUE), a very different number.
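A compact sketch of both effects (the printed values follow from the conversions just described):
int big = 2147483647;   // Integer.MAX_VALUE
float f = big;          // widening: precision is lost, magnitude roughly kept
System.out.println(f);  // prints 2.14748365E9, i.e. 2147483648
float huge = 3.0e19f;
long l = (long) huge;   // narrowing: too large for long, so it clamps
System.out.println(l);  // prints 9223372036854775807 (Long.MAX_VALUE)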