I am porting some C++ samples to Java. I am now stuck trying to pack four integers into a single one with the bit layout [10, 10, 10, 2]: the first int occupies the first 10 bits, the second and third take the next 10 bits each, and the last one the final two bits.
In the C++ sample, this is the code to pack them:
GLM_FUNC_QUALIFIER uint32 packSnorm3x10_1x2(vec4 const & v)
{
    detail::i10i10i10i2 Result;
    Result.data.x = int(round(clamp(v.x, -1.0f, 1.0f) * 511.f));
    Result.data.y = int(round(clamp(v.y, -1.0f, 1.0f) * 511.f));
    Result.data.z = int(round(clamp(v.z, -1.0f, 1.0f) * 511.f));
    Result.data.w = int(round(clamp(v.w, -1.0f, 1.0f) * 1.f));
    return Result.pack;
}
Result.data is the struct inside the following union:
union i10i10i10i2
{
    struct
    {
        int x : 10;
        int y : 10;
        int z : 10;
        int w : 2;
    } data;
    uint32 pack;
};
With an input of [-1f, -1f, 0f, 1f] we get a Result.data of [-511, -511, 0, 1] and a return value of 1074267649, which in binary, annotated with the four fields (w, z, y, x from left to right), is:

  w       z          y          x
  1       0        -511       -511
 01  0000000000 1000000001 1000000001
= 0100 0000 0000 1000 0000 0110 0000 0001
What I did so far is:
public static int packSnorm3x10_1x2(float[] v) {
    int[] tmp = new int[4];
    tmp[0] = (int) (Math.max(-1, Math.min(1, v[0])) * 511.f);
    tmp[1] = (int) (Math.max(-1, Math.min(1, v[1])) * 511.f);
    tmp[2] = (int) (Math.max(-1, Math.min(1, v[2])) * 511.f);
    tmp[3] = (int) (Math.max(-1, Math.min(1, v[3])) * 1.f);

    int[] left = new int[4];
    left[0] = (tmp[0] << 22);
    left[1] = (tmp[1] << 22);
    left[2] = (tmp[2] << 22);
    left[3] = (tmp[3] << 30);

    int[] right = new int[4];
    right[0] = (left[0] >> 22);
    right[1] = (left[1] >> 12);
    right[2] = (left[2] >> 2);
    right[3] = (left[3] >> 0);

    return right[0] | right[1] | right[2] | right[3];
}
tmp is [-511,-511,0,1], left is [-2143289344,-2143289344,0,1073741824], which in binary is:
[1000 0000 0100 0000 0000 0000 0000 0000,
1000 0000 0100 0000 0000 0000 0000 0000,
0000 0000 0000 0000 0000 0000 0000 0000,
0100 0000 0000 0000 0000 0000 0000 0000]
And it makes sense so far. Now that I have cleared the unwanted high bits on the left, I want to shift the values back to the right, into their final positions. But when I do so, the gap on the left gets filled with 1s, I guess because of Java's signed int(?).
Then right is [-511,-523264,0,1073741824] or:
[1111 1111 1111 1111 1111 1110 0000 0001,
1111 1111 1111 1000 0000 0100 0000 0000,
0000 0000 0000 0000 0000 0000 0000 0000,
0100 0000 0000 0000 0000 0000 0000 0000]
So, why is this happening, and how can I fix it? Maybe by ANDing only the bits I am interested in?
The unsigned right shift operator >>> shifts a zero into the leftmost position.
Source: https://docs.oracle.com/javase/tutorial/java/nutsandbolts/op3.html
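Applied to the method in the question, a minimal sketch of the fix (assuming the same [10, 10, 10, 2] layout; Math.round is added to mirror the C++ round call, though note Java rounds half up rather than half away from zero):

public static int packSnorm3x10_1x2(float[] v) {
    // Clamp each component to [-1, 1], scale, and round.
    int x = Math.round(Math.max(-1f, Math.min(1f, v[0])) * 511f);
    int y = Math.round(Math.max(-1f, Math.min(1f, v[1])) * 511f);
    int z = Math.round(Math.max(-1f, Math.min(1f, v[2])) * 511f);
    int w = Math.round(Math.max(-1f, Math.min(1f, v[3])));

    // Shift left to discard the unwanted high bits, then shift back with
    // >>> so zeros, not copies of the sign bit, fill the vacated positions.
    return ((x << 22) >>> 22)   // bits 0..9
         | ((y << 22) >>> 12)   // bits 10..19
         | ((z << 22) >>> 2)    // bits 20..29
         | (w << 30);           // bits 30..31
}

For the input [-1f, -1f, 0f, 1f] this returns 1074267649, matching the C++ sample.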
struct
{
int x : 10;
int y : 10;
int z : 10;
int w : 2;
} data;
This code is completely non-portable, if it even works as expected on your current system.
There is absolutely no way for you to tell what this struct will contain. You can't know which bit is the MSB, how signedness will be treated, what the endianness is, whether there will be padding, etc. See this.
The only portable solution is to use a raw uint32_t and shift values into place.
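The same raw-integer technique covers the reverse direction in Java as well. A sketch (the method name and int[] return type are just for illustration): shifting a field up to the top of the int and then arithmetic-shifting it back down sign-extends it.

// Extract the four signed fields from the packed int using only shifts:
// << k moves a field to the top of the int, and the arithmetic >> brings
// it back down with its sign bit extended.
public static int[] unpackSnorm3x10_1x2(int pack) {
    int x = (pack << 22) >> 22; // bits 0..9
    int y = (pack << 12) >> 22; // bits 10..19
    int z = (pack << 2) >> 22;  // bits 20..29
    int w = pack >> 30;         // bits 30..31
    return new int[] { x, y, z, w };
}

For pack = 1074267649 this yields [-511, -511, 0, 1]; dividing the first three by 511f would recover the normalized floats.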
Related
I'm having trouble trying to reverse engineer this section of code; I need to be able to get x, y and z back from l. Would anyone be able to point me in the right direction? Thanks.
int l = ((X << 20) + (Y << 19) + Z);
The ranges are as follows
X = 0 - 4095
Y = 0 - 1
Z = 0 - 384,794
X being 0-4095 = 12 bits
Y being 0-1 = 1 bit
Z being 0-384794 = 19 bits
Java integers are 32-bit, so starting from 0:
int l = 0000 0000 0000 0000 0000 0000 0000 0000
+ X << 20 = the value of X, moved 20 places to the left. So:
int l = XXXX XXXX XXXX 0000 0000 0000 0000 0000
+ Y << 19 = the value of Y, moved 19 places to the left. So:
int l = XXXX XXXX XXXX Y000 0000 0000 0000 0000
+ Z = the value of Z, moved 0 places. So:
int l = XXXX XXXX XXXX YZZZ ZZZZ ZZZZ ZZZZ ZZZZ
Since no bits are ever overwritten, we can recover them:
int x = l >> 20 & 0xFFF; //reverse the shifting (by 20), then isolate the X bits (0xFFF = 12 bits set to 1, equal to 4095)
int y = l >> 19 & 0x1; //reverse the shifting (by 19), then isolate the Y bit (0x1 = 1 bit set to 1, equal to 1)
int z = l & 0x7FFFF; //no shifting to reverse, but we still isolate the Z bits (0x7FFFF = 19 bits set to 1, equal to 524287)
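A quick round-trip check of those three lines (a sketch; the values are just the maximum of each range):

int X = 4095, Y = 1, Z = 384794;   // maximum value of each range
int l = (X << 20) + (Y << 19) + Z; // pack as in the question
int x = l >> 20 & 0xFFF;           // 4095
int y = l >> 19 & 0x1;             // 1
int z = l & 0x7FFFF;               // 384794

Note that the masks also discard the sign bits that the arithmetic shift >> drags in when X's top bit is set, so the extraction works even though l is negative here.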
Recently I was given a Codility problem which says the following code has a bug.
The problem: we have a 30-bit unsigned integer N[29]...N[0], and performing a right cyclic shift (>>) should give us a number of the form N[0]N[29]...N[1].
Our goal is to find the number of right cyclic shifts that produces the maximum value achievable from a given number.
For Example:
for N = 9736 (00 0000 0000 0000 0010 0110 0000 1000)
9736 >> 1 = 4868 -> 00 0000 0000 0000 0001 0011 0000 0100
.
.
.
9736 >> 11 = 809500676 -> 11 0000 0100 0000 0000 0000 0000 0100
.
.
and so on, up to 30 shifts (as we have 30-bit integers).
In the example above, the 11th iteration yields the maximum number reachable from 9736.
Hence the answer = 11.
Given Code:
int shift(int N) {
    int largest = 0;
    int shift = 0;
    int temp = N;
    for (int i = 1; i < 30; ++i) {
        int index = (temp & 1);
        temp = ((temp >> 1) | (index << 29));
        if (temp > largest) {
            largest = temp;
            shift = i;
        }
    }
    return shift;
}
N is in range [0... 1,073,741,823]
I tried but couldn't find the bug here or the test case where this fails.
It fails for 0b10000...000 (0x20000000), because for that input the largest value is the unshifted one, at shift == 0, which the loop never considers.
The simplest fix is to initialize largest to N instead of 0.
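Put together, a sketch of the repaired function:

int shift(int N) {
    int largest = N; // was 0; N itself is the shift == 0 candidate
    int shift = 0;
    int temp = N;
    for (int i = 1; i < 30; ++i) {
        int index = (temp & 1);
        temp = ((temp >> 1) | (index << 29)); // rotate right within 30 bits
        if (temp > largest) {
            largest = temp;
            shift = i;
        }
    }
    return shift;
}

For 0x20000000 every rotation moves the single set bit to a lower position, so largest stays at N and the method now correctly returns 0.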
Please describe what these five n |= n >>> x lines do.
I am not interested in what the | or >>> operators do; I am interested in what this logic does under the covers, in mathematical terms.
/**
 * Returns a power of two size for the given target capacity.
 */
static final int tableSizeFor(int cap) {
    int n = cap - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
All (positive) powers of two have exactly one bit set, and (a power of two - 1) has all of the bits below that most significant bit set. So we can find the next power of two greater than or equal to a number by:
Subtracting 1
Setting all of the less significant bits
Adding 1 back
These bit shifting operations are implementing the second step of this process, by "smearing" the set bits.
So:
n |= n >>> 1;
Would do something like:
01010000
| 00101000
= 01111000
If you do this again, you "smear" the number down again (still shifting by just 1):
01111000
| 00111100
= 01111100
Keep on doing this, and you will end up with a number with all of the less significant bits set:
01111111
In the worst case, you'd have to do this 30 times (for a positive, signed 32-bit integer), when the most significant bit is the 31st bit:
01xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
=> 011xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
=> 0111xxxxxxxxxxxxxxxxxxxxxxxxxxxx
=> 01111xxxxxxxxxxxxxxxxxxxxxxxxxxx
=> 011111xxxxxxxxxxxxxxxxxxxxxxxxxx
...
=> 01111111111111111111111111111111
(x just means it could be a zero or a one)
But you might notice something interesting: after the first smear, when shifting by 1, we have the two most significant bits set. So, instead of shifting by 1, we can skip an operation by shifting by 2:
01xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
=> 011xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
=> 01111xxxxxxxxxxxxxxxxxxxxxxxxxxx
Continuing with this pattern, shift by 4 next:
=> 011111111xxxxxxxxxxxxxxxxxxxxxxx
Shift by 8:
=> 01111111111111111xxxxxxxxxxxxxxx
Shift by 16:
=> 01111111111111111111111111111111
So, instead of taking 30 operations to set all the less significant bits, we have taken 5.
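As an aside, if I recall correctly, more recent JDK versions compute the same smear in a single step with Integer.numberOfLeadingZeros. A sketch of that idea (the MAXIMUM_CAPACITY clamp is omitted here):

static int tableSizeFor(int cap) {
    // -1 >>> k is a run of (32 - k) one-bits: the fully smeared value.
    int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
    // For cap <= 1 the shift count is 32, which Java treats as 0, so n
    // stays -1; the (n < 0) check turns that into a capacity of 1.
    return (n < 0) ? 1 : n + 1;
}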
To understand the process, let's assume the value of cap passed in is 10.
So n = cap - 1 = 9; 0000 1001
n |= n >>> 1 = 0000 1101
n |= n >>> 2 = 0000 1111
n |= n >>> 4 = 0000 1111
n |= n >>> 8 = 0000 1111
n |= n >>> 16 = 0000 1111 = 15
Finally the method returns n + 1 = 16
For large numbers
cap = 0000 1000 0000 0000 0000 0000 0000 0001
n = cap - 1 = 0000 1000 0000 0000 0000 0000 0000 0000
n |= n >>> 1 = 0000 1100 0000 0000 0000 0000 0000 0000
n |= n >>> 2 = 0000 1111 0000 0000 0000 0000 0000 0000
n |= n >>> 4 = 0000 1111 1111 0000 0000 0000 0000 0000
n |= n >>> 8 = 0000 1111 1111 1111 1111 0000 0000 0000
n |= n >>> 16 = 0000 1111 1111 1111 1111 1111 1111 1111
return n + 1 = 0001 0000 0000 0000 0000 0000 0000 0000
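A quick check of the two walk-throughs (a sketch; it assumes MAXIMUM_CAPACITY is large enough not to clamp these values):

System.out.println(tableSizeFor(10));         // 16
System.out.println(tableSizeFor(16));         // 16 (the initial cap - 1 keeps exact powers of two unchanged)
System.out.println(tableSizeFor(0x08000001)); // 268435456 = 0x10000000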
I'm attempting to make a customized buffer that's going to be using a List<Byte> and currently I've only gotten as far as a single method before it completely broke down on me, and I'm not sure exactly why. I've been referencing the Source code of the DataOutputStream and DataInputStream classes to make sure that I'm reading/writing the data correctly.
I must be doing something wrong.
private List<Byte> buffer = new ArrayList<>();

public void writeInt(int value) {
    buffer.add((byte) ((value >>> 24) & 0xFF));
    buffer.add((byte) ((value >>> 16) & 0xFF));
    buffer.add((byte) ((value >>> 8) & 0xFF));
    buffer.add((byte) ((value >>> 0) & 0xFF));
}

public void readInt() {
    int ch1 = buffer.get(0);
    int ch2 = buffer.get(1);
    int ch3 = buffer.get(2);
    int ch4 = buffer.get(3);
    System.out.println("CH1: " + ch1);
    System.out.println("CH2: " + ch2);
    System.out.println("CH3: " + ch3);
    System.out.println("CH4: " + ch4);
    System.out.println("===============");
    int value = ((ch1 << 24) + (ch2 << 16) + (ch3 << 8) + (ch4 << 0));
    System.out.println("Value: " + value);
}
If I write a small value (anything from 0 to 127), it's perfectly fine; however, the moment I get to 128, all hell breaks loose. Here's the output of 127 vs 128:
#writeInt(127)
CH1: 0
CH2: 0
CH3: 0
CH4: 127
===============
Value: 127
#writeInt(128)
CH1: 0
CH2: 0
CH3: 0
CH4: -128
===============
Value: 128
And just for some more examples (which I don't understand either), here's a large number.
#writeInt(999999999)
CH1: 59
CH2: -102
CH3: -55
CH4: -1
===============
Value: 983156991
I'm honestly not sure where I'm going wrong; hopefully someone can tell me.
EDIT: I also thought it could be because I'm getting the byte as an int and then trying to do the math, so I changed it up, but it didn't change the result at all. Modified example:
public void readInt() {
    int ch1 = buffer.get(0) << 24;
    int ch2 = buffer.get(1) << 16;
    int ch3 = buffer.get(2) << 8;
    int ch4 = buffer.get(3) << 0;
    System.out.println("CH1: " + ch1);
    System.out.println("CH2: " + ch2);
    System.out.println("CH3: " + ch3);
    System.out.println("CH4: " + ch4);
    System.out.println("===============");
    int value = (ch1 + ch2 + ch3 + ch4);
    System.out.println("Value: " + value);
}
The type byte in Java is signed, like all primitive types with number semantics (char is the sole exception, though I wouldn't say char has number semantics anyway). And Java, like the vast majority of devices, uses two's complement to store values.
Therefore the value range of byte is -128 to 127. Here are a few of the corresponding two's complement bit patterns that would be stored in a byte:
-128 -> 1000 0000
-127 -> 1000 0001
-2 -> 1111 1110
-1 -> 1111 1111
0 -> 0000 0000
1 -> 0000 0001
126 -> 0111 1110
127 -> 0111 1111
When you cast byte to int (and that's what happens implicitly when you call buffer.get(), because you do arithmetic with the return value), the conversion is sign-extending; that's how it's defined in Java.
In other words:
(int) 128 -> 128 (0000 0000 0000 0000 0000 0000 1000 0000)
(byte) (int) 128 -> -128 (---- ---- ---- ---- ---- ---- 1000 0000)
(int) (byte) (int) 128 -> -128 (1111 1111 1111 1111 1111 1111 1000 0000)
You need to cancel the effects of sign extension explicitly. You can do so by applying & 0xFF before you shift the value. The corresponding part of your readInt() method should look like this:
int ch1 = buffer.get(0) & 0xFF;
int ch2 = buffer.get(1) & 0xFF;
int ch3 = buffer.get(2) & 0xFF;
int ch4 = buffer.get(3) & 0xFF;
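With that change in place, a sketch of the whole corrected method (returning the value rather than printing it):

public int readInt() {
    // Mask each byte to the range 0..255 before shifting so that sign
    // extension cannot leak 1-bits into the higher positions.
    int ch1 = buffer.get(0) & 0xFF;
    int ch2 = buffer.get(1) & 0xFF;
    int ch3 = buffer.get(2) & 0xFF;
    int ch4 = buffer.get(3) & 0xFF;
    return (ch1 << 24) | (ch2 << 16) | (ch3 << 8) | ch4;
}

writeInt(999999999) followed by this readInt() now yields 999999999 again.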
I have written the following code, but FindBugs is showing this error: BIT_ADD_OF_SIGNED_BYTE. I tried a lot, but maybe I am not getting the concept of the left shift correctly.
void problem() {
    byte[] byteArray = {1, 2, 3, 4, 5};
    int localOne = 0;
    for (int i = 0; i < 4; i++) {
        localOne = (localOne << 8) + byteArray[i];
    }
}
You're doing the shift correctly; your (possible) error is in adding a signed byte to an int.
Because of sign extension, you need to do this:
localOne = (localOne<<8) + (0xFF & byteArray[i]);
Say you have the byte 0x80, which is 1000 0000 in binary; this is -128 in decimal because of the two's complement representation. Now, when adding it to an int, it first gets converted to an int. The resulting int is not
0000 0000 0000 0000 0000 0000 1000 0000
(binary) it will be
1111 1111 1111 1111 1111 1111 1000 0000
(binary) because of sign extension. To get the former, you have to apply a bitwise AND with 0xFF, which is this in binary:
0000 0000 0000 0000 0000 0000 1111 1111
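A short illustration of the difference (a sketch):

byte b = (byte) 0x80;         // bit pattern 1000 0000, value -128
System.out.println(b);        // -128 (sign-extended when widened to int)
System.out.println(b & 0xFF); // 128 (the mask keeps only the low 8 bits)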