int a = 128;
byte b;
b = (byte) a;
System.out.println(b);
This prints -128.
But in the Java book the same code outputs 0.
What's the difference between them?
128 represented as a 32-bit integer (int) is 00000000 00000000 00000000 10000000 in binary.
As a byte is only 8 bits, when it is cast to a byte it becomes 10000000. Because all integers in Java are signed integers using two's complement, the first bit (1) is the sign bit, therefore the value becomes -128.
Not sure why the book said the output should be 0. Are you sure the example is exactly the same in the book?
More information on the Java primitive types is available here, and Wikipedia has a fairly comprehensive article on two's complement.
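To see the narrowing in action, here is a minimal sketch (the class name is just for illustration) that prints the byte along with its raw bit pattern; the & 0xFF mask gives an unsigned view for printing:

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        int a = 128;
        byte b = (byte) a;  // keeps only the lowest 8 bits: 10000000
        System.out.println(b);  // -128
        // mask with 0xFF so the bit pattern prints without sign extension
        System.out.println(Integer.toBinaryString(b & 0xFF));  // 10000000
    }
}
```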
Look, 128 = 0x80, so if you cut off all but the least significant byte you get 1000 0000 (binary). That is -128 for a byte.
So there is an error in your Java book :)
In Java this should really print -128. If a = 256 it should print 0.
A byte ranges from -128 to 127, so casting 127 gives 127, while 128 wraps around to -128, because 127 + 1 = -128 in byte arithmetic.
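A quick sketch of that wrap-around (the class name is arbitrary):

```java
public class WrapAround {
    public static void main(String[] args) {
        byte b = 127;  // Byte.MAX_VALUE
        b++;           // increments past the top of the range
        System.out.println(b);  // -128
        System.out.println((byte) 128);  // -128 as well
    }
}
```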
The right answer was posted already, but I'd like to expand on it a little.
To understand better how this works, try reading about it from other sources; @DanielGibbs provided a few you could use.
I suggest you also try running code like:
for (int a = -256; a < 256; a++) {
    byte b = (byte) a;
    System.out.println(a + " -> " + b);
}
The output of this code should let you clearly see the meaning of the least significant bits (and the single most significant bit, which determines the sign) in an int, and how they fit into the byte type.
PS. -256 is not 10000000 00000000 00000001 00000000, but 11111111 11111111 11111111 00000000.
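That bit pattern of -256 can be confirmed directly with Integer.toBinaryString (the class name is just for illustration):

```java
public class MinusTwoFiftySix {
    public static void main(String[] args) {
        // -256 in two's complement: 24 ones followed by 8 zeros
        System.out.println(Integer.toBinaryString(-256));
        // 11111111111111111111111100000000
        System.out.println((byte) -256);  // the low 8 bits are all zero -> 0
    }
}
```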
import java.util.Scanner;

public class ShortToByte {
    public static void main(String[] args) {
        // create the Scanner once, outside the loop
        Scanner sinput = new Scanner(System.in);
        int i = 0;
        while (i < 6) {
            short a = sinput.nextShort();
            byte b = (byte) a;
            System.out.println("Short value : " + a + ", Byte value : " + b);
            i++;
        }
    }
}
I am trying to understand conversion between different data types, but I am confused: how is a short value of 128 equal to -128 as a byte, and how is 1000 as a short equal to -24 as a byte?
I have been using the following logic to convert short to byte :
1000 in decimal -> binary = 0000 0011 1110 1000
When converting to byte, only the low 8 bits remain: xxxx xxxx 1110 1000, which read as an unsigned value is 232.
I do notice that the correct answer is the two's complement of the binary value, but then when do we use two's complement and when not? When converting 3450 from short to byte I did not use two's complement, yet achieved the desired result.
Thank you!
Your cast from short to byte is a narrowing primitive conversion. The JLS, Section 5.1.3, states:
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
(bold emphasis mine)
Numeric types are signed in Java. The short 128, represented in bits as
00000000 10000000
is narrowed to 8 bits as follows:
10000000
... which is -128. Now the 1 bit is no longer interpreted as +128; now it's -128 because of how two's complement works. If the first bit is set, then the value is negative.
Something similar is going on with 1000. The short 1000, represented in bits as
00000011 11101000
is narrowed to 8 bits as follows:
11101000
... which is -24.
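All three narrowings from the question can be checked without the Scanner loop; a minimal sketch (values chosen to match the question):

```java
public class ShortNarrowing {
    public static void main(String[] args) {
        short s1 = 128;   // 00000000 10000000
        short s2 = 1000;  // 00000011 11101000
        short s3 = 3450;  // 00001101 01111010 (low byte below 128, so it stays positive)
        System.out.println((byte) s1);  // -128
        System.out.println((byte) s2);  // -24
        System.out.println((byte) s3);  // 122
    }
}
```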
From the Java data types documentation:
byte: The byte data type is an 8-bit signed two's complement integer. It has a minimum value of -128 and a maximum value of 127 (inclusive)
Therefore 128 overflows to -128.
I'm curious to know what actually happens on a bitwise comparison using binary literals. I just came across the following thing:
byte b1 = (new Byte("1")).byteValue();
// check the bit representation
System.out.println(String.format("%8s", Integer.toBinaryString(b1 & 0xFF)).replace(' ', '0'));
// output: 00000001
System.out.println(b1 ^ 0b00000001);
// output: 0
So everything behaves as expected, the xor comparison equals 0. However when trying the same with a negative number it won't work:
byte b2 = (new Byte("-1")).byteValue();
// check the bit representation
System.out.println(String.format("%8s", Integer.toBinaryString(b2 & 0xFF)).replace(' ', '0'));
// output: 11111111
System.out.println(b2 ^ 0b11111111);
// output: -256
I would have expected that the last xor comparison also equals 0. However this is only the case if I do an explicit cast of the binary literal to byte:
byte b2 = (new Byte("-1")).byteValue();
// check the bit representation
System.out.println(String.format("%8s", Integer.toBinaryString(b2 & 0xFF)).replace(' ', '0'));
// output: 11111111
System.out.println(b2 ^ (byte)0b11111111);
// output: 0
To me it looks like before the xor both b2 and 0b11111111 have the same bit representation, so even if they get cast to int (or something else) the xor should still equal 0. How do you arrive at -256, which is 11111111 11111111 11111111 00000000 in binary representation? Why do I have to do an explicit cast to byte in order to obtain 0?
Binary literals without a specific cast represent 32-bit integer values, no matter how many digits there are. For example 0b00000001 is a shorthand for 0b00000000 00000000 00000000 00000001.
Bitwise operations in Java use binary numeric promotion (see JLS §5.6.2). In this specific case it means that both operands get converted to int before the operation is performed.
0b11111111 is already representing an int (without leading 0s) and just represents 0b00000000 00000000 00000000 11111111, while b2 is a byte representing the value -1. During the conversion to int the value is preserved and thus b2 is cast to a 32-bit integer representing the same number (-1): 0b11111111 11111111 11111111 11111111.
The xor then evaluates to 0b11111111 11111111 11111111 00000000 which is the 32-bit binary representation of -256.
In case the xor comparison is performed using (byte)0b11111111 the binary literal will also be treated as a byte and thus equivalently cast to a 32-bit integer representing -1.
It is important to note that binary numeric promotion produces either double, float, long or int operands; for the integer bitwise operators only long and int apply. If only narrower types (such as byte) participate in the operation, they will be converted to int. That is why the following piece of code will give a compilation error:
byte b1 = (byte)0b00000001;
byte b2 = (byte)0b00000001;
byte b3 = b1 & b2;
>>> error: incompatible types: possible lossy conversion from int to byte
... because the result of a bitwise operation on two bytes is an int.
Further reading about the why can be done here:
Types and the Java Virtual Machine
Bitwise operators in java only for integer and long?
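The promotion rules above can be sketched in a few lines (the class name is arbitrary):

```java
public class XorPromotion {
    public static void main(String[] args) {
        byte b2 = -1;  // bits: 11111111
        // 0b11111111 is an int literal with value 255,
        // while b2 is promoted to the int -1 (all 32 bits set):
        System.out.println(b2 ^ 0b11111111);  // -256
        // (byte) 0b11111111 is -1, which promotes to the same int as b2:
        System.out.println(b2 ^ (byte) 0b11111111);  // 0
        // byte ^ byte still yields an int, so storing it back needs a cast:
        byte b3 = (byte) (b2 ^ (byte) 0b11111111);
        System.out.println(b3);  // 0
    }
}
```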
When you use b2 ^ 0b11111111 you actually xor a byte with an int.
byte is an 8-bit type, while int is a 32-bit type.
So what you did is:
b2 ^ 0b(00000000 00000000 00000000 11111111)
When you xor the byte with an int, the byte is first widened to 32 bits (padded with leading 1s here because it is a negative number; for a positive number it would be padded with 0s). The result is an int, in your case -256.
When you cast the int literal to byte, you xor two bytes; both are then promoted to the same int value (-1), so the result is 0.
Referring to page number 79 of " Java The complete Reference" 7th edition by Herbert Schildt.
The author says : " If the integer’s value is larger than the range of a
byte, it will be reduced modulo (the remainder of an integer division by the) byte’s range".
The range of byte in Java is -128 to 127, so the maximum value that fits in a byte is 127. If an integer value is assigned to a byte as shown below:
int i = 257;
byte b;
b = (byte) i;
Since 257 exceeds 127, I expected 257 % 127 = 3 to be stored in 'b'.
But am getting the output as 1 instead of 3.
Where have I gone wrong in understanding the concept?
Just consider the binary representation of the numbers :
257 is represented in binary as 00000000 00000000 00000001 00000001
When you cast this 32-bit int to an 8-bit byte, you keep only the lowest 8 bits:
00000001
which is 1
257 = 00000000 00000000 00000001 00000001 in bits, and a byte is made up of only 8 bits...
As a result only the lower 8 bits are stored and 1 is the output.
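A small sketch of this; note that "reduced modulo the byte's range" works out to modulo 256 (the number of distinct byte values), not modulo 127:

```java
public class ModuloVsTruncation {
    public static void main(String[] args) {
        int i = 257;  // 00000000 00000000 00000001 00000001
        System.out.println((byte) i);  // 1, not 257 % 127
        // keeping the lowest 8 bits is the same as taking the
        // unsigned value modulo 256:
        System.out.println(Math.floorMod(i, 256));  // 1
    }
}
```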
Here is the code:
int i = 200;
byte b = (byte) 200;
System.out.println(b);
System.out.println((short) (b));
System.out.println((b & 0xff));
System.out.println((short) (b & 0xff));
Here is the output:
-56
-56
200
200
Bitwise AND with 0xff shouldn't have changed anything in b, but apparently it does have an effect, why?
It has an effect because 200 is beyond the maximum possible (signed) byte value, 127. The stored value is -56 because of this overflow. The most significant bit, worth -128, is set.
11001000
The first 2 output statements show -56 because of this, and because casting to a short performs sign extension to preserve negative values.
When you perform & 0xff, 2 things happen. First, the value is promoted to an int, with sign extension.
11111111 11111111 11111111 11001000
Then, the bit-and is performed, keeping only the last 8 bits. Here, the 8th bit is no longer worth -128, but +128, so 200 is restored.
00000000 00000000 00000000 11001000
This occurs whether the value is cast to a short or not; a short has 16 bits and can easily represent 200.
00000000 11001000
Java byte is a signed type. That is why you see a negative number when you print it: 200, or 0xC8, is above the largest positive number representable by byte, so it gets interpreted as a negative byte.
However, the 0xff constant is an int. When you perform arithmetic and bitwise logic operations on a byte and an int*, the result becomes an int. That is why you see 200 printed in the second set of examples: (b & 0xff) produces an integer 200, which remains 200 after the narrowing conversion to short, because 200 fits into a short without becoming negative.
* or another byte, for that matter; Java standard specifies a list of conversions that get applied depending on operand types.
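The whole chain described above can be reproduced in a few lines (the class name is just for illustration):

```java
public class MaskDemo {
    public static void main(String[] args) {
        byte b = (byte) 200;  // bits 11001000, value -56
        System.out.println(b);  // -56
        System.out.println((short) b);  // -56, sign-extended
        // & 0xff promotes b to int with sign extension,
        // then keeps only the low 8 bits:
        int masked = b & 0xff;
        System.out.println(masked);  // 200
        System.out.println((short) masked);  // 200 fits easily in a short
    }
}
```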
Working with different integer types is a minefield.
For example, what's going on here?
byte b = (byte) 200;
It's actually equivalent to
int i = 200;
byte b = (byte)i;
and the narrowing cast (byte) simply takes the lowest 8 bits of the int value.
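A short sketch showing that equivalence (the class name is arbitrary):

```java
public class NarrowingCast {
    public static void main(String[] args) {
        int i = 200;
        byte direct = (byte) 200;
        byte viaInt = (byte) i;
        System.out.println(direct == viaInt);  // true: same lowest 8 bits
        System.out.println(200 & 0xff);  // 200: the bits that survive the cast
        System.out.println(direct);  // -56: those same bits read as a signed byte
    }
}
```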
If I have a byte value 00101011 and I want to represent it as an int value I get 00000000 00000000 00000000 00101011, which seems clear to me. But I have a problem with the byte value 11010100, which as an int is represented by 11111111 11111111 11111111 11010100. I don't know where that came from. Can someone explain the idea of extending a byte value to an int?
The byte value 11010100 represents a negative number (-44), because the most significant bit is set. When this undergoes primitive widening conversion, it must still represent the same negative value in the two's complement representation. This is done using sign extension. That means that all new bits are the same as the original sign bit.
11010100 => -44
11111111 11111111 11111111 11010100 => -44
If this did not occur, then the sign bit would no longer be a sign bit, and it would be interpreted "normally", and the value would no longer be the same.
00000000 00000000 00000000 11010100 => 212
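The sign extension, and how to undo it with a mask, can be sketched as:

```java
public class SignExtension {
    public static void main(String[] args) {
        byte b = (byte) 0b11010100;  // -44
        int widened = b;  // widening conversion performs sign extension
        System.out.println(widened);  // -44
        System.out.println(Integer.toBinaryString(widened));
        // 11111111111111111111111111010100
        // masking off the extension gives the "unsigned" reading:
        System.out.println(b & 0xFF);  // 212
    }
}
```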