I have a peculiar problem for which I am looking for an efficient solution. I have a byte array which contains the most significant n bytes of an unsigned 4-byte integer (most significant byte first). The values of the remaining bytes (if any) are unknown. I need to check whether the partially known integer could fall within a certain range (plus or minus x) of a known integer. It's also valid for the integer represented by the byte array under test to wrap around.
I have a solution which works (below). The problem is that it performs far more comparisons than I believe are necessary, and a whole load of comparisons are duplicated when the least significant bytes are unknown. I'm pretty sure it can be done more efficiently, but I can't figure out how. The unknown-least-significant-bytes scenario is an edge case, so I might be able to live with it, but it forms part of a system which needs to have low latency, so if anyone could help with this that would be great.
Thanks in advance.
static final int BYTES_IN_INT = 4;
static final int BYTE_SHIFT = 010;
// partial integer byte array length guaranteed to be 1-4 so no checking necessary
static boolean checkPartialIntegerInRange(byte[] partialIntegerBytes, int expectedValue, int range)
{
boolean inRange = false;
if(partialIntegerBytes.length == BYTES_IN_INT)
{
// normal scenario, all bytes known
inRange = Math.abs(ByteBuffer.wrap(partialIntegerBytes).getInt() - expectedValue) <= range;
}
else
{
// we don't know least significant bytes, could have any value
// need to check if partially known int could lie in the range
int partialInteger = 0;
int mask = 0;
// build partial int and mask
for (int i = 0; i < partialIntegerBytes.length; i++)
{
int shift = ((BYTES_IN_INT - 1) - i) * BYTE_SHIFT;
// shift bytes to correct position
partialInteger |= (partialIntegerBytes[i] << shift);
// build up mask to mask off expected value for comparison
mask |= (0xFF << shift);
}
// check partial int falls in range
for (int i = -(range); i <= range; i++)
{
if (partialInteger == ((expectedValue + i) & mask))
{
inRange = true;
break;
}
}
}
return inRange;
}
EDIT: Thanks to the contributors below. Here is my new solution. Comments welcome.
static final int BYTES_IN_INT = 4;
static final int BYTE_SHIFT = 010;
static final int UBYTE_MASK = 0xFF;
static final long UINT_MASK = 0xFFFFFFFFL;
public static boolean checkPartialIntegerInRange(byte[] partialIntegerBytes, int expectedValue, int range)
{
boolean inRange;
if(partialIntegerBytes.length == BYTES_IN_INT)
{
inRange = Math.abs(ByteBuffer.wrap(partialIntegerBytes).getInt() - expectedValue) <= range;
}
else
{
int partialIntegerMin = 0;
int partialIntegerMax = 0;
for(int i=0; i < BYTES_IN_INT; i++)
{
int shift = ((BYTES_IN_INT - 1) - i) * BYTE_SHIFT;
if(i < partialIntegerBytes.length)
{
partialIntegerMin |= (((partialIntegerBytes[i] & UBYTE_MASK) << shift));
partialIntegerMax = partialIntegerMin;
}
else
{
partialIntegerMax |=(UBYTE_MASK << shift);
}
}
long partialMinUnsigned = partialIntegerMin & UINT_MASK;
long partialMaxUnsigned = partialIntegerMax & UINT_MASK;
long rangeMinUnsigned = (expectedValue - range) & UINT_MASK;
long rangeMaxUnsigned = (expectedValue + range) & UINT_MASK;
if(rangeMinUnsigned <= rangeMaxUnsigned)
{
inRange = partialMinUnsigned <= rangeMaxUnsigned && partialMaxUnsigned >= rangeMinUnsigned;
}
else
{
inRange = partialMinUnsigned <= rangeMaxUnsigned || partialMaxUnsigned >= rangeMinUnsigned;
}
}
return inRange;
}
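For what it's worth, a quick sanity check of the new method with a few hand-picked cases (the values here are my own, not from the original post):
public static void main(String[] args) {
// Known bytes 0x00 0x01: the actual value lies somewhere in 0x00010000..0x0001FFFF.
System.out.println(checkPartialIntegerInRange(new byte[] {0x00, 0x01}, 0x00010000, 10));      // true
// Wrap-around: a value starting 0xFFFF can be within 10 of 5 (the range wraps past zero).
System.out.println(checkPartialIntegerInRange(new byte[] {(byte) 0xFF, (byte) 0xFF}, 5, 10)); // true
// Clearly out of range.
System.out.println(checkPartialIntegerInRange(new byte[] {0x7F}, 5, 10));                     // false
}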
Suppose you have one possibly-wrapping ("clockwise") interval (x, y) and one normal interval (low, high), each including its endpoints. Determining whether they intersect can be done as (not tested):
if (x <= y) {
// (x, y) is a normal interval, use normal interval intersect
return low <= y && high >= x;
}
else {
// (x, y) wraps
return low <= y || high >= x;
}
To compare as unsigned integers, you can use longs (cast up with x & 0xffffffffL to counteract sign extension), or Integer.compareUnsigned (in newer versions of Java), or, if you prefer, you can add/subtract/xor both operands with Integer.MIN_VALUE.
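For instance, three interchangeable ways to do an unsigned comparison of two ints (a minimal sketch; the helper names are mine):
static boolean uLessOrEqualViaLong(int a, int b) {
    return (a & 0xFFFFFFFFL) <= (b & 0xFFFFFFFFL);             // widen and strip sign extension
}
static boolean uLessOrEqualViaCompare(int a, int b) {
    return Integer.compareUnsigned(a, b) <= 0;                 // Java 8+
}
static boolean uLessOrEqualViaXor(int a, int b) {
    return (a ^ Integer.MIN_VALUE) <= (b ^ Integer.MIN_VALUE); // flip the sign bit to reuse signed order
}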
Convert your known unsigned bytes to an integer. Right-shift by 32 - n (so your meaningful bytes become the least significant ones). Right-shift your min/max integers by the same amount. If your shifted test value is equal to either shifted bound, it might be in the range. If it lies strictly between them, it's definitely in the range.
Presumably the sign bit on your integers is always zero (if not, just forcibly convert the negative to zero, since your test value can't be negative). But because that's only one bit, unless you were given all 32 bits as n, that shouldn't matter (it's not much of a problem in that special case).
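Putting that idea into code, roughly (my own sketch, not the poster's; it folds the "equal/between" cases into a single membership test on the shifted prefixes, using unsigned long arithmetic so wrap-around is handled):
// knownBytes holds the 1..4 most significant bytes of the value under test.
static boolean prefixCouldBeInRange(byte[] knownBytes, int expectedValue, int range) {
    int shift = 32 - 8 * knownBytes.length;        // number of unknown low bits
    long prefix = 0;
    for (byte b : knownBytes) {
        prefix = (prefix << 8) | (b & 0xFF);       // known bytes as an unsigned prefix
    }
    long lo = (((long) (expectedValue - range)) & 0xFFFFFFFFL) >>> shift;
    long hi = (((long) (expectedValue + range)) & 0xFFFFFFFFL) >>> shift;
    // The prefixes covered by the (possibly wrapping) range form a circular
    // interval [lo, hi]; the partial value can fall in range iff its prefix is in it.
    return lo <= hi ? (prefix >= lo && prefix <= hi)
                    : (prefix >= lo || prefix <= hi);
}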
I was solving Nth root of M problem and I solved it with Java and here is the solution:
public int NthRoot(int n, int m)
{
// code here
int ans = -1;
if (m == 0)
return ans;
if (n == 1 || n == 0)
return m;
if (m == 1)
return 1;
int low = 1;
int high = m;
while (low < high) {
int mid = (low + high) / 2;
int pow = (int)Math.pow(mid, n);
if (pow == m) {
ans = mid;
break;
}
if (pow > m) {
high = mid;
} else {
low = mid + 1;
}
}
return ans;
}
It passed all the test cases. But, when I solved it using C++, some test cases didn't pass. Here is the C++ solution:
int NthRoot(int n, int m)
{
// Code here.
int ans = -1;
if (m == 0)
return ans;
if (n == 1 || n == 0)
return m;
if (m == 1)
return 1;
int low = 1;
int high = m;
while (low < high) {
int mid = (low + high) / 2;
int po = (int)pow(mid, n);
if (po == m) {
ans = (int)mid;
break;
}
if (po > m) {
high = mid;
} else {
low = mid + 1;
}
}
return ans;
}
One of the test cases it didn't pass is:
6 4096
Java's output is 4 (Expected result)
C++'s output is -1 (Incorrect result)
When I traced it using paper and pen, I got a solution the same as Java's.
But, when I used long long int in the C++ code, it worked fine – but the size of int in both Java and C++ is the same, right? (When I print INT_MAX and Integer.MAX_VALUE in C++ and Java, they output the same value.)
As you have probably guessed, the problem is due to the attempt to convert a double value to an int value, when that source is larger than the maximum representable value of an int. More specifically, it relates to the difference between how Java and C++ handle the cast near the start of your while loop: int po = (int)pow(mid, n);.
For your example input (6 4096), the value returned by the pow function on the first run through that loop is 7.3787e+19, which is too big for an int value. In Java, when you attempt to cast a too-big value to an integer, the result is the maximum value representable by the integer type, as specified in this answer (bolding mine):
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest
representable value of type int or long.
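A quick check confirms this clamping in Java (a hypothetical one-liner, not from the original code):
public static void main(String[] args) {
    // pow(2048, 6) is about 7.38e19, far above Integer.MAX_VALUE;
    // Java's narrowing conversion clamps instead of wrapping.
    System.out.println((int) Math.pow(2048, 6));   // prints 2147483647
}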
However, in C++, when the source value exceeds INT_MAX, the behaviour is undefined (according to this C++11 Draft Standard):
7.10 Floating-integral conversions [conv.fpint]
1 A prvalue of a floating-point
type can be converted to a prvalue of an integer type. The conversion
truncates; that is, the fractional part is discarded. The behavior is
undefined if the truncated value cannot be represented in the
destination type.
However, although formally undefined, many/most compilers/platforms will apply 'rollover' when this occurs, resulting in a very large negative value (typically, INT_MIN) for the result. This is what MSVC in Visual Studio does, giving a value of -2147483648, thus causing the else block to run … and keep running until the while loop reaches its terminating condition – at which point ans will never have been assigned any value except the initial -1.
You can fix the problem readily by checking the double returned by the pow call and setting po to INT_MAX, if appropriate, to emulate the Java behaviour:
while (low < high) {
int mid = (low + high) / 2;
double fpo = pow(mid, n);
int po = (int)(fpo);
if (fpo > INT_MAX) po = INT_MAX; // Emulate Java for overflow
if (po == m) {
ans = (int)mid;
break;
}
if (po > m) {
high = mid;
}
else {
low = mid + 1;
}
}
I encountered a strange problem while doing a LeetCode problem. It is about bit representation in Java.
Write a function that takes an unsigned integer and returns the number of '1' bits it has (also known as the Hamming weight).
For example, the 32-bit integer '11' has binary representation 00000000000000000000000000001011, so the function should return 3.
My solution is
public class Solution {
// you need to treat n as an unsigned value
public int hammingWeight(int n) {
int count = 0;
for(int i = 0; i < 32; ++i){
if((n >>> i) % 2 == 1){
++count;
}
}
return count;
}
}
This code is not accepted because of the input case:
4294967295 (11111111111111111111111111111111)
I reviewed the bit representation of integers in Java but still cannot see the problem with my solution.
Could anyone help me?
public int hammingWeight(int n) {
return Integer.bitCount(n);
}
Integer.bitCount(int i)
Returns the number of one-bits in the two's complement binary
representation of the specified int value.
The issue is that you are performing a modulo operation when you want a bitwise &. Something like:
public static int hammingWeight(int n) {
int count = 0;
for (int i = 0; i < 32; ++i) {
if (((n >>> i) & 1) == 1) {
++count;
}
}
return count;
}
public static void main(String[] args) {
int c = -1;
System.out.println(hammingWeight(c));
}
Outputs (as expected)
32
Java uses two's complement, so the sign bit is the leftmost one. That means if you have a number greater than Integer.MAX_VALUE, your input will arrive as a negative int. When you do % 2 on a negative number, the remainder keeps the sign, so the check fails for that bit. An alternative is to use & 1, which masks off everything but the lowest bit (including the sign bit). After the first iteration, once you have done an unsigned shift, the sign bit will be zero.
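A tiny demonstration of that difference (hypothetical snippet):
public static void main(String[] args) {
    int n = -1;                          // all 32 bits set
    System.out.println(n % 2);           // -1: the remainder keeps the dividend's sign
    System.out.println(n & 1);           // 1: masking only looks at the lowest bit
    System.out.println((n >>> 1) % 2);   // 1: after an unsigned shift the sign bit is gone
}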
In order to explore some solutions, I need to generate all possibilities. I'm doing it by using bit masking, like this:
for (long i = 0; i < 1L << NB; i++) {
System.out.println(Long.toBinaryString(i));
if(checkSolution(i)) {
this.add(i); // add i to solutions
}
}
this.getBest(); // get the solution with lowest number of 1
This allows me to explore (if NB=3):
000
001
010
011
100
101
110
111
My problem is that the best solution is the one with the lowest number of 1s.
So, in order to stop the search as soon as I find a solution, I would like to have a different order and produce something like this:
000
001
010
100
011
101
110
111
That would make the search a lot faster since I could stop as soon as I get the first solution. But I don't know how I can change my loop to get this output...
PS: NB is undefined...
The idea is to turn your loop into two nested loops; the outer loop sets the number of 1's, and the inner loop iterates through every combination of binary numbers with N 1's. Thus, your loop becomes:
for (long i = 1; i < (1L << NB); i = (i << 1) | 1) {
long j = i;
do {
System.out.println(Long.toBinaryString(j));
if(checkSolution(j)) {
this.add(j); // add j to solutions
}
j = next_of_n(j);
} while (j < (1L << NB));
}
next_of_n() is defined as:
long next_of_n(long j) {
long smallest, ripple, new_smallest, ones;
if (j == 0)
return j;
smallest = (j & -j);
ripple = j + smallest;
new_smallest = (ripple & -ripple);
ones = ((new_smallest / smallest) >> 1) - 1;
return (ripple | ones);
}
The algorithm behind next_of_n() is described in C: A Reference Manual, 5th edition, section 7.6, while showing an example of a SET implementation using bitwise operations. It may be a little hard to understand the code at first, but here's what the book says about it:
This code exploits many unusual properties of unsigned arithmetic. As
an illustration:
if x == 001011001111000, then
smallest == 000000000001000
ripple == 001011010000000
new_smallest == 000000010000000
ones == 000000000000111
the returned value == 001011010000111
The overall idea is that you find the rightmost contiguous group of
1-bits. Of that group, you slide the leftmost 1-bit to the left one
place, and slide all the others back to the extreme right. (This code
was adapted from HAKMEM.)
I can provide a deeper explanation if you still don't get it. Note that the algorithm assumes two's complement, and that all arithmetic should ideally take place on unsigned integers, mainly because of the right-shift operation. I'm not a huge Java guy; I tested this in C with unsigned long and it worked pretty well. I hope the same applies to Java, although there's no such thing as an unsigned long in Java. As long as you use reasonable values for NB, there should be no problem.
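As a quick check in Java, feeding the book's example value through next_of_n reproduces the result quoted above (assuming the method is in scope):
public static void main(String[] args) {
    long x = 0b001011001111000L;
    // Expect 1011010000111, matching the illustration from the book
    // (Long.toBinaryString drops the leading zeros).
    System.out.println(Long.toBinaryString(next_of_n(x)));
}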
This is an iterator that iterates bit patterns of the same cardinality.
/**
* Iterates all bit patterns containing the specified number of bits.
*
* See "Compute the lexicographically next bit permutation"
* http://graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
*
* @author OldCurmudgeon
*/
public class BitPattern implements Iterable<BigInteger> {
// Useful stuff.
private static final BigInteger ONE = BigInteger.ONE;
private static final BigInteger TWO = ONE.add(ONE);
// How many bits to work with.
private final int bits;
// Value to stop at. 2^max_bits.
private final BigInteger stop;
// Should we invert the output.
private final boolean not;
// All patterns of that many bits up to the specified number of bits - inverting if required.
public BitPattern(int bits, int max, boolean not) {
this.bits = bits;
this.stop = TWO.pow(max);
this.not = not;
}
// All patterns of that many bits up to the specified number of bits.
public BitPattern(int bits, int max) {
this(bits, max, false);
}
@Override
public Iterator<BigInteger> iterator() {
return new BitPatternIterator();
}
/*
* From the link:
*
* Suppose we have a pattern of N bits set to 1 in an integer and
* we want the next permutation of N 1 bits in a lexicographical sense.
*
* For example, if N is 3 and the bit pattern is 00010011, the next patterns would be
* 00010101, 00010110, 00011001,
* 00011010, 00011100, 00100011,
* and so forth.
*
* The following is a fast way to compute the next permutation.
*/
private class BitPatternIterator implements Iterator<BigInteger> {
// Next to deliver - initially 2^n - 1
BigInteger next = TWO.pow(bits).subtract(ONE);
// The last one we delivered.
BigInteger last;
@Override
public boolean hasNext() {
if (next == null) {
// Next one!
// t gets v's least significant 0 bits set to 1
// unsigned int t = v | (v - 1);
BigInteger t = last.or(last.subtract(BigInteger.ONE));
// Silly optimisation.
BigInteger notT = t.not();
// Next set to 1 the most significant bit to change,
// set to 0 the least significant ones, and add the necessary 1 bits.
// w = (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
// The __builtin_ctz(v) GNU C compiler intrinsic for x86 CPUs returns the number of trailing zeros.
next = t.add(ONE).or(notT.and(notT.negate()).subtract(ONE).shiftRight(last.getLowestSetBit() + 1));
if (next.compareTo(stop) >= 0) {
// Don't go there.
next = null;
}
}
return next != null;
}
@Override
public BigInteger next() {
last = hasNext() ? next : null;
next = null;
return not ? last.not(): last;
}
@Override
public void remove() {
throw new UnsupportedOperationException("Not supported.");
}
@Override
public String toString () {
return next != null ? next.toString(2) : last != null ? last.toString(2): "";
}
}
public static void main(String[] args) {
System.out.println("BitPattern(3, 10)");
for (BigInteger i : new BitPattern(3, 10)) {
System.out.println(i.toString(2));
}
}
}
First you loop over your number of ones, say n. You start with 2^n - 1, which is the first integer to contain exactly n ones, and test it. To get the next one, you use the algorithm from Hamming weight based indexing (it's C code, but it should not be too hard to translate it to Java).
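I haven't reproduced the code behind that link, but the well-known "next bit permutation" trick from the bithacks page cited above does the same job; a Java sketch (assumes v != 0 and that the next pattern still fits in an int):
// Returns the next larger int with the same number of set bits as v.
static int nextPermutation(int v) {
    int t = v | (v - 1);   // set v's trailing zeros to one
    // Set the most significant changed bit, then push the remaining ones
    // back down to the least significant positions.
    return (t + 1) | (((~t & -~t) - 1) >>> (Integer.numberOfTrailingZeros(v) + 1));
}
public static void main(String[] args) {
    int NB = 3, k = 2;
    // Start at the smallest pattern with k ones and stop once NB bits are exceeded.
    for (int v = (1 << k) - 1; v < (1 << NB); v = nextPermutation(v)) {
        System.out.println(Integer.toBinaryString(v));   // 11, 101, 110
    }
}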
Here's some code I put together some time ago to do this. Use the combinadic method, giving it the number of digits you want, the weight (number of set bits) you want, and which number in the sequence you want.
// n = digits, k = weight, m = position.
public static BigInteger combinadic(int n, int k, BigInteger m) {
BigInteger out = BigInteger.ZERO;
for (; n > 0; n--) {
BigInteger y = nChooseK(n - 1, k);
if (m.compareTo(y) >= 0) {
m = m.subtract(y);
out = out.setBit(n - 1);
k -= 1;
}
}
return out;
}
// Algorithm borrowed (and tweaked) from: http://stackoverflow.com/a/15302448/823393
public static BigInteger nChooseK(int n, int k) {
if (k > n) {
return BigInteger.ZERO;
}
if (k <= 0 || k == n) {
return BigInteger.ONE;
}
// ( n * ( nChooseK(n-1,k-1) ) ) / k;
return BigInteger.valueOf(n).multiply(nChooseK(n - 1, k - 1)).divide(BigInteger.valueOf(k));
}
public void test() {
System.out.println("Hello");
BigInteger m = BigInteger.ZERO;
for ( int i = 1; i < 10; i++ ) {
BigInteger c = combinadic(5, 2, m);
System.out.println("c["+m+"] = "+c.toString(2));
m = m.add(BigInteger.ONE);
}
}
Not sure how it matches up in efficiency with the other posts.
I just came across a problem; it was easy to solve in pseudocode, but when I started coding it in Java, I realized I didn't know where to start...
Here is what I need to do:
I need a bit array of size 10 million (bits) (let's call it A).
I need to be able to set the elements in this array to 1 or 0 (A[99000]=1).
I need to iterate through the 10 million elements.
The "proper" way in Java is to use the already-existing BitSet class pointed out by Hunter McMillen. If you're figuring out how a large bit-array is managed purely for the purpose of thinking through an interesting problem, then calculating the position of a bit in an array of bytes is just basic modular arithmetic.
public class BitArray {
private static final int ALL_ONES = 0xFFFFFFFF;
private static final int WORD_SIZE = 32;
private int bits[] = null;
public BitArray(int size) {
bits = new int[size / WORD_SIZE + (size % WORD_SIZE == 0 ? 0 : 1)];
}
public boolean getBit(int pos) {
return (bits[pos / WORD_SIZE] & (1 << (pos % WORD_SIZE))) != 0;
}
public void setBit(int pos, boolean b) {
int word = bits[pos / WORD_SIZE];
int posBit = 1 << (pos % WORD_SIZE);
if (b) {
word |= posBit;
} else {
word &= (ALL_ONES - posBit);
}
bits[pos / WORD_SIZE] = word;
}
}
Use BitSet (as Hunter McMillen already pointed out in a comment). You can easily get and set bits. To iterate just use a normal for loop.
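A minimal sketch of that approach (standard java.util.BitSet API):
import java.util.BitSet;
public class BitSetDemo {
    public static void main(String[] args) {
        BitSet a = new BitSet(10000000);   // 10 million bits, all 0 initially
        a.set(99000);                      // A[99000] = 1
        // Iterate every position:
        for (int i = 0; i < 10000000; i++) {
            boolean bit = a.get(i);
            // do something with bit
        }
        // Or skip straight from one set bit to the next:
        for (int i = a.nextSetBit(0); i >= 0; i = a.nextSetBit(i + 1)) {
            System.out.println("bit " + i + " is set");
        }
    }
}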
Here is a more optimized implementation of phatfingers' 'BitArray':
class BitArray {
private static final int MASK = 63;
private final long len;
private long bits[] = null;
public BitArray(long size) {
if ((((size-1)>>6) + 1) > 2147483647) {
throw new IllegalArgumentException(
"Field size to large, max size = 137438953408");
}else if (size < 1) {
throw new IllegalArgumentException(
"Field size to small, min size = 1");
}
len = size;
bits = new long[(int) (((size-1)>>6) + 1)];
}
public boolean getBit(long pos) {
return (bits[(int)(pos>>6)] & (1L << (pos&MASK))) != 0;
}
public void setBit(long pos, boolean b) {
if (getBit(pos) != b) { bits[(int)(pos>>6)] ^= (1L << (pos&MASK)); }
}
public long getLength() {
return len;
}
}
Since we use fields of 64 bits, we extend the maximum size to 137438953408 bits, which is roughly what fits in 16GB of RAM. Additionally, we use masks and bit shifts instead of division and modulo operations, reducing the computation time. The improvement is quite substantial.
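Usage for the question's scenario might look like this (assuming the BitArray class above):
BitArray a = new BitArray(10000000L);   // 10 million bits
a.setBit(99000L, true);                 // A[99000] = 1
for (long i = 0; i < a.getLength(); i++) {
    if (a.getBit(i)) {
        // do something with position i
    }
}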
byte[] A = new byte[10000000];
A[99000] = 1;
for(int i = 0; i < A.length; i++) {
//do something
}
If you really want bits, you can use boolean and let true = 1, and false = 0.
boolean[] A = new boolean[10000000];
//etc
If I have an integer in Java how do I count how many bits are zero except for leading zeros?
We know that integers in Java have 32 bits but counting the number of set bits in the number and then subtracting from 32 does not give me what I want because this will also include the leading zeros.
As an example, the number 5 has one zero bit because in binary it is 101.
Take a look at the API documentation of Integer:
32 - Integer.numberOfLeadingZeros(n) - Integer.bitCount(n)
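For example, for n = 5 (binary 101) this gives 32 - 29 - 2 = 1, i.e. one non-leading zero:
System.out.println(32 - Integer.numberOfLeadingZeros(5) - Integer.bitCount(5));   // 1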
To count non-leading zeros in Java you can use this algorithm:
public static int countNonleadingZeroBits(int i)
{
int result = 0;
while (i != 0)
{
if ((i & 1) == 0)
{
result += 1;
}
i >>>= 1;
}
return result;
}
This algorithm will be reasonably fast if your inputs are typically small, but if your input is typically a larger number it may be faster to use a variation on one of the bit hack algorithms on this page.
Count the total number of significant bits in your number (that is, 32 minus the number of leading zeros), and then subtract the number of one-bits from that total.
This is what I would have done.
public static int countBitsSet(int num) {
int count = num & 1; // start with the first bit.
while((num >>>= 1) != 0) // shift the bits and check there are some left.
count += num & 1; // count the next bit if its there.
return count;
}
public static int countBitsNotSet(int num) {
return 32 - countBitsSet(num);
}
Using some built-in functions:
public static int zeroBits(int i)
{
if (i == 0) {
return 0;
}
else {
int highestBit = (int) (Math.log10(Integer.highestOneBit(i)) /
Math.log10(2)) + 1;
return highestBit - Integer.bitCount(i);
}
}
Since evaluation order in Java is defined, we can do this:
public static int countZero(int n) {
for (int i=1,t=0 ;; i<<=1) {
if (n==0) return t;
if (n==(n&=~i)) t++;
}
}
Note that this relies on the LHS of an equality being evaluated first; try the same thing in C or C++ and the compiler is free to make you look foolish by setting your printer on fire.
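For instance (assuming countZero above is in scope):
System.out.println(countZero(5));   // 1  (101 has one non-leading zero)
System.out.println(countZero(8));   // 3  (1000 has three non-leading zeros)
System.out.println(countZero(0));   // 0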