I understand that a char can be cast explicitly by me, and implicitly by the compiler.
In my code, the compiler converts char to int automatically in the first "for" loop by implicit casting: since char is 2 bytes in size, it fits into the 4 bytes of an int.
I am confused how an integer number can be assigned to a char variable in the second loop without an explicit cast, since int is 4 bytes and char is only 2.
// Compiler converts char to int type automatically by implicit type casting.
for (int i = 'A'; i <= 'Z'; i++) {
    System.out.print(i + " ");
}
System.out.println();

for (char c = 65; c <= 90; c++) {
    System.out.print(c + " ");
}
System.out.println();
"It's not assigning an int, it's using UNICODE code for characters."
(eg. A=65 in UNICODE or even ASCII #RealSkeptic)
It's just another way of initialization.
char A = 65;
char a = 'a';
System.out.println(A);      // A
System.out.println((int)A); // 65
System.out.println(a);      // a
System.out.println((int)a); // 97
Oracle's definition of char:
The char data type is a single 16-bit Unicode character. It has a minimum value of '\u0000' (or 0) and a maximum value of '\uffff' (or 65,535 inclusive)
Kindly look at the comments as well.
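The Oracle definition above can be sketched with the standard Character constants (nothing here beyond the standard library):

```java
public class CharRange {
    public static void main(String[] args) {
        // char is an unsigned 16-bit type: '\u0000' .. '\uffff'
        System.out.println((int) Character.MIN_VALUE); // 0
        System.out.println((int) Character.MAX_VALUE); // 65535

        char min = 0;     // a constant in range needs no cast
        char max = 65535; // still in range, so this compiles too
        System.out.println((int) min + " " + (int) max);
    }
}
```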
I'm currently struggling to understand why this expression
String test = new String("hello");
char y = test.charAt(0) - '0';
is not counted as a compile-time constant expression and yields the error
error: incompatible types: possible lossy conversion from int to char
char y = test.charAt(0) - '0';
but this expression does compile
String test = new String("hello");
char[] charArr = test.toCharArray();
char y = (char) (charArr[0] - '0');
My current understanding is that by defining the String variable I've already declared a compile-time constant. I am simply retrieving the character at the specified index of test, then doing some arithmetic on it.
Is this caused by the inner workings of the String.charAt(index) function? Or is it a Java language feature? Or is it caused by something else entirely? May I have some clarification on this, or further reading resources? Thank you.
Why doesn't the following code compile
int n = 5;
char c = n;
but the following does compile
char c = 5;
Aren't I just assigning an integer value to char in both cases?
A char can be assigned to an int without a cast because that is a widening conversion. To do the reverse, an int to a char requires a cast because it is a narrowing conversion.
See also JLS. Chapter 5. Conversions and Promotions.
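The widening/narrowing distinction can be sketched in a few lines (all standard Java, no assumptions beyond the answer above):

```java
public class Conversions {
    public static void main(String[] args) {
        char ch = 'A';
        int i = ch;            // widening: char -> int, no cast needed
        System.out.println(i); // 65

        int j = 66;
        char back = (char) j;  // narrowing: int -> char, cast required
        System.out.println(back); // B
    }
}
```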
His question is why his code does not compile, not how to do what he's trying to do.
The reason the line
char c = n;
does not compile is that the range of char (0 to 2^16 - 1) is much smaller than the range of int (-2^31 to 2^31 - 1). The compiler sees you are trying to assign an int variable, whose value it cannot verify at compile time, to a char, and stops you.
Can someone explain to me why the following code compiles OK in Java?
char c = 'a' + 10;
Why is this not equivalent to the following, which does not compile?
int i = 10;
char c = 'a' + i;
The Java Language Specification (section 3.10.1) states "An integer literal is of type long if it is suffixed with an ASCII letter L or l (ell); otherwise it is of type int (§4.2.1)." Section 4.2.2 refers to "The numerical operators, which result in a value of type int or long." So the result of the addition should, in my understanding, be an int, which cannot be assigned to the char variable c.
However, it compiles fine (at least in Sun JDK 1.6.0 release 17 and in Eclipse Helios).
Rather an artificial example perhaps, but it is used in an introductory Java course I have been teaching, and it now occurs to me that I don't really understand why it works.
It is because the compiler can check that it ('a' + 10) is within the bounds of a char whereas it cannot (in general) check that 'a' + <an integer> is within the bounds.
'a' + 10 is a compile-time constant expression with the value of 'k', which can initialise a variable of type char. This is the same as being able to assign a byte variable with a literal integer in [-128, 127]. A literal in [128, 255], on the other hand, cannot be assigned to a byte without a cast.
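The byte analogy above can be sketched like this (the commented-out line is the one the compiler rejects):

```java
public class ByteNarrowing {
    public static void main(String[] args) {
        byte ok = 127;            // constant fits in byte's range [-128, 127]
        // byte bad = 128;        // does not compile: out of byte's range
        byte forced = (byte) 128; // the cast compiles, but the value wraps to -128
        System.out.println(ok + " " + forced);
    }
}
```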
char is actually an unsigned 16-bit integer with a range of 0-65535. So you can assign any integer literal in that range to a char, e.g., char c = 97;, which results in c holding the character 'a'. You can print out the result using System.out.println(c).
For the constant expression on the right-hand side of char c = 'a' + 10;, 'a' is promoted to int first because of the Java numeric promotion rules, and its integer value is 97. After adding 10 to it, we get the constant 107, which can be assigned to a char.
The right-hand side of char c = 'a' + i; is not a constant expression, and the assignment rule then requires an explicit cast from int to char, i.e., char c = (char) ('a' + i);.
This code works:
int i = 10;
char x = (char)('a' + i);
The constant expression is treated specially (the spec does say that 10 is an int, but the compiler folds the whole constant expression at compile time).
In char c = 'a' + 10, the right-hand side is a constant expression whose value (107) is representable in char, so the narrowing conversion is applied implicitly. Therefore char c = 'a' + 10 works.
In int i = 10;
char c = 'a' + i; you are adding a char to an integer variable (an int can be much bigger than a char, so the bigger data type [int] is chosen as the result, a.k.a. 'a' + i = int + int). So the result of the addition is an int, which cannot be assigned to the char c without a cast.
If you explicitly cast the whole expression (e.g.: char c = (char) ('a' + i);) it works, and if you go the other way (e.g.: int c = 'a' + i;) it also works, because assigning the int result to an int is fine.
According to Java specification as of 2020 for widening and narrowing conversions of integral values in expressions:
"In a numeric arithmetic context ... the promoted type is int,
and any expressions that are not of type int undergo widening
primitive conversion to int"
In assignment context:
"...if the expression is a constant expression of type byte, short,
char, or int:
• A narrowing primitive conversion may be used if the
variable is of type byte, short, or char, and the value of the
constant expression is representable in the type of the variable."
So, in char c = 'a' + 10; the left constant value is a char and the right constant value is an int fitting into a char. Since there is an assignment and the constant expression's value fits into char, the int gets converted to char, and the overall result of the addition is a char.
And in char c = 'a' + i; (where int i = 10;) the i is not constant, so, notwithstanding the assignment, the char 'a' is promoted to int and the overall result is int. Thus, the assignment is erroneous without an explicit typecast.
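One consequence worth demonstrating: marking the variable final (with a constant initializer) makes it a constant variable, so the expression becomes a constant expression again and the implicit narrowing is allowed. A minimal sketch:

```java
public class ConstantExpr {
    public static void main(String[] args) {
        final int i = 10;      // constant variable: 'a' + i is a constant expression
        char c = 'a' + i;      // compiles: the value 107 fits in char
        System.out.println(c); // k

        int j = 10;            // not a constant
        // char d = 'a' + j;   // does not compile without a cast
        char d = (char) ('a' + j);
        System.out.println(d); // k
    }
}
```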
Note that the following original answer is wrong (it cites the treatment in numeric choice contexts, such as a switch statement):
According to Java specification for widening and narrowing conversions in expressions:
If any expression is of type int and is not a constant expression,
then the promoted type is int, and other expressions that are not of
type int undergo widening primitive conversion to int.
...
if any expression is of type char, and every other expression is
either of type 'char' or a constant expression of type 'int' with a
value that is representable in the type 'char', then the promoted type
is char, and the int expressions undergo narrowing primitive
conversion to char.
So, in char c = 'a' + 10; the left expression is a char and the right constant expression is int fitting into a char. So, the constant gets converted to char. And the overall result is char.
And in char c = 'a' + i; (where int i = 10;) the right expression is not constant, so the char 'a' is promoted to int and the overall result is int.
How come this,
char ch = 9 + '0';
System.out.println(ch);
Will result in,
9
But,
int k = 9;
char ch = k + '0';
System.out.println(ch);
will return
"Type mismatch: cannot convert from int to char"
This is described in JLS Sec 5.2:
5.2 Assignment Contexts
...
if the expression is a constant expression (§15.28) of type byte, short, char, or int:
A narrowing primitive conversion may be used if the type of the variable is byte, short, or char, and the value of the constant expression is representable in the type of the variable.
...
Example: The compile-time narrowing of constant expressions means that code such as:
byte theAnswer = 42;
is allowed.
This applies to your example because 9 + '0' is a constant, and it can be statically determined that it fits into a char. As such, the cast can be done implicitly.
On the other hand, k is not constant, so the compiler can't know for sure that k + '0' will fit into char; hence, it requires an explicit cast.
The compiler knows that 9 + '0' is 57, which fits into a 16-bit char value. The compiler does not keep track of variable assignments (except for static final constants) so it cannot be certain that k + '0' evaluates to a value that fits in 16 bits.
That 16-bit size limitation is also why you will get the same compiler error for this:
char ch = 10_000_000 + '0';
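The exact boundary can be sketched: 65535 is the last constant the compiler will narrow implicitly, and casting a larger constant simply wraps it around.

```java
public class CharBoundary {
    public static void main(String[] args) {
        char max = 65535;            // compiles: fits in 16 bits
        // char over = 65536;        // does not compile: out of range
        char wrapped = (char) 65536; // cast compiles; the value wraps to 0
        System.out.println((int) max + " " + (int) wrapped);
    }
}
```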
When using 9 + '0', the compiler takes the character's code (48 for '0'), adds the literal 9 to it, and then narrows the constant result back to char,
i.e.
9 + 48 = 57
57 in ASCII is '9' (char)
When using k + '0',
k is declared as an int variable and '0' is a char. The expression is promoted to int, and since k is not a constant, the compiler cannot prove the result fits into the smaller char target, so it won't compile.
char char1 = 'a';
System.out.println(char1);      // prints char1 ('a')
System.out.println(char1 + 1);  // prints 98
System.out.println(char1++);    // prints 'a' (then char1 becomes 'b')
System.out.println(char1 += 1); // prints the incremented char1
char1 += 1;
System.out.println(char1);      // prints the incremented char1
In the above, why doesn't (char1 + 1) or (char1++) print the incremented character, but the other two do?
First, I'm assuming that because you say the increment in System.out.println works, that you have really specified:
char char1 = 'a';
EDIT
In response to the change of the question (char1+1; => char1 += 1;) I see the issue.
The output is
a
98
b
The 98 shows up because the char a was promoted to an int (binary numeric promotion) to add 1. So a becomes 97 (the ASCII value for 'a') and 98 results.
However, char1 += 1; and char1++ include an implicit cast back to char (a compound assignment is defined as char1 = (char) (char1 + 1)), so the result stays a char and prints as expected.
Quoting the JLS, Section 5.6.2, "Binary Numeric Promotion":
Widening primitive conversion (§5.1.2) is applied to convert either or
both operands as specified by the following rules:
If either operand is of type double, the other is converted to double.
Otherwise, if either operand is of type float, the other is converted
to float.
Otherwise, if either operand is of type long, the other is converted
to long.
Otherwise, both operands are converted to type int.
(emphasis mine)
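Per JLS 15.26.2, a compound assignment E1 op= E2 is equivalent to E1 = (T) (E1 op E2), which is why the char survives. A minimal sketch:

```java
public class CompoundAssign {
    public static void main(String[] args) {
        char c1 = 'a';
        c1 += 1;                // implicit: c1 = (char) (c1 + 1)
        System.out.println(c1); // b

        char c2 = 'a';
        c2 = (char) (c2 + 1);   // the explicit spelling of the same thing
        System.out.println(c2); // b

        // c2 = c2 + 1;         // does not compile: c2 + 1 is an int
    }
}
```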
You didn't assign the result of the addition char1 + 1 back to char1. So
char1 = (char) (char1 + 1);
or
char1 += 1;
char1++;
are correct. (Note that char1 = char1 + 1; would not compile, because char1 + 1 is an int.)
Okay, first of all, fixing the format of your code:
char char1;
char1 = 'a';
System.out.println(char1);     // println 1
System.out.println(char1 + 1); // println 2
char1 += 1;
System.out.println(char1);     // println 3
which yields the output:
a
98
b
Now, let's look at each call to println() in detail:
1: This simply takes the char variable named char1 and prints it. It's been assigned the letter a (note the single quotes around the a in the assignment, indicating a character literal). Not surprisingly, this prints the character a.
2: For this line, you're performing integer addition. A char in Java is held as a Unicode (UTF-16) code unit. The Unicode value for the letter a maps to the number 97. (Note that this also corresponds to the ASCII value for a.) When performing arithmetic operations in Java between mismatched types, the smaller/less precise type's value is widened to the larger type (binary numeric promotion). Because of this, the char is widened to an int before the addition is performed, and the result is also an int. With this in mind, it's not surprising that 97 from a plus 1 results in 98 being printed.
3: In this instance we are once again printing the value of a char, so a character is printed. This time, the compound assignment implicitly casts the 98 we saw generated before back into a character. Again unsurprisingly, the next highest mapping from a is b, so we see a b printed.
try this:
System.out.println(char1);
System.out.println(++char1);
char1 += 1;
System.out.println(char1);
instead of
char1 = 'a';
System.out.println(char1);
System.out.println(char1 + 1);
char1 += 1;
System.out.println(char1);