My use case is this,
I wish to reduce an extremely long number like 97173329791011L to a smaller integer by shifting down, and to be able to get back the long number 97173329791011L from the smaller integer by shifting up. I have implemented a function called reduceLong to do this, as shown below:
private int reduceLong(long reduceable) {
    return (int) (reduceable >> 32);
}
However, I feel the function I have is wrong, as the result produced is incorrect. Here is the console output when trying to reduce 97173329791011L to a smaller integer:
Trying to reduce 97173329791011L
Result= 0
Your help would be greatly appreciated. Thanks a lot.
The int datatype can hold all integral values in the range [-2^31, +2^31-1], inclusive. That is, in decimal, [-2147483648, 2147483647]. The total range covers 2^32 different numbers, which makes sense because ints are 32 bits in memory. Just like you can't store an elephant in a matchbox, you can't store an unlimited amount of information in 32 bits. You can store at most... 32 bits' worth.
3706111600L is a long; it is (slightly) outside of the range of the int. In binary, it is:
11011100111001101100011001110000
How do you propose to store these 64 bits in a mere 32 bits? There is no general strategy here, and there mathematically cannot be one: you can store exactly 2^64 different numbers in a long, which is more unique values than 2^32, so whatever 'compression' algorithm you suggest, it cannot work for more than at most 2^32 unique long values, which is only a tiny fraction of them.
Separate from that, running your snippet: you do 11011100111001101100011001110000 >> 32, which shifts away every one of those bits (there are exactly 32 of them), hence why you get 0.
Perhaps you want this 'compression' scheme. The 2^32 longs we decree as representable are: all the longs from 0 to 2^31-1, which map to the same integer value, plus the batch of 2^31 longs that immediately follows, which map bitwise; given that in Java all numbers are signed, those map to negative ints. All other long values (everything above 2^32-1 and all negative longs) cannot be mapped (or, if you try, they'll unmap to the wrong value).
If you want that, all you need to do is:
int longToInt = (int) theLong;             // keeps only the low 32 bits
long backToLong = 0xFFFFFFFFL & longToInt; // widen back and clear the sign extension
Normally, if you cast an int to a long it 'sign extends', filling the top 32 bits with 1s when your int is negative. The bitwise & operation clears those top 32 bits back down to 0 and you're back to your original... IF the original long had 32 zero-bits at the top (which 3706111600L does).
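By way of illustration, here is a minimal round-trip sketch of that scheme (the variable names are made up for the example):
long original = 3706111600L;              // fits in the low 32 bits, so it is representable
int compressed = (int) original;          // -588855696: the same 32 bits, reinterpreted as signed
long restored = 0xFFFFFFFFL & compressed; // mask away the sign extension
System.out.println(restored == original); // true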
Your test number is too small: it fits entirely in the lower 32 bits. Converted into hexadecimal, 3706111600L is 0x00000000DCE6C670.
If you shift this number 32 bits to the right, you lose the lower 8 nibbles; the resulting number is 0x00000000L. Cast to int, this value is still 0.
I am working on a file reader and ran into a problem when trying to read a short. In short (pun intended), Java is converting the two bytes I'm using to build the short into ints in order to do the bitwise operations, and it converts them in a way that keeps the same numeric value. I need to convert each byte into an int in a way that preserves its bits rather than its value.
example of what's happening:
byte number = -1; //-1
int otherNumber = 1;
number | otherNumber; // -1
example of what I want:
byte number = -1; //-1
int otherNumber = 1;
number | otherNumber; // 129
This can be done pretty easily with some bit magic.
I'm sure you're aware that a short is 16 bits (2 bytes) and an int is 32 bits (4 bytes), so between an int and a short there is a two-byte difference. Now, for positive numbers, copying the value of a short to an int effectively copies the binary data; however, as you've pointed out, this is not the case for negative numbers.
Now let's look at how negative numbers are represented in binary. It's a bit confusing, so I'll try to keep it simple. Modern systems use what's called two's complement to store negative numbers. Basically all this means is that the very first bit in the set of bytes representing the number determines whether or not it's negative. To negate a number, the remaining bits are inverted and the result is offset by 1 (since there is no negative zero). For example, 2 as a short is represented as 0000 0000 0000 0010, while -2 is represented as 1111 1111 1111 1110. Since the bits are inverted in a negative number, this means that -2 in int form is the same but with 2 more bytes (16 bits) at the front that are all set to 1.
So, in order to combat this, all we need to do is change those extra 1s back to 0s. This can be done with the bitwise AND operator, which compares the bits at each position in both operands: if both are 1, the result bit is 1; otherwise it is 0. ANDing with a mask that has 0s in the upper positions therefore clears those bits.
Now, with this knowledge, all we need to do is create another integer whose lowest two bytes are all 1s. This is fairly simple to do using hexadecimal literals, which are ints by default. With a single byte, if you set every bit to 1, the value you get is 255, which in hex is 0xFF, so two bytes of 1s is 0xFFFF. Pretty simple; now you just need to apply it.
Here is an example that does exactly that:
short a = -2;
int b = a & 0xFFFF; // 65534: the same 16 bits, zero-extended into an int
You could also use Short.toUnsignedInt(), but where's the fun in that? 😉
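Since the original question is about bytes rather than shorts, the same idea with a one-byte mask might look like this (a small sketch, not tied to the asker's file-reader code):
byte number = -1;             // bits: 1111 1111
int sameBits = number & 0xFF; // 255: the same 8 bits, zero-extended into an int
System.out.println(sameBits); // 255
// Byte.toUnsignedInt(number) gives the same result since Java 8.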
Like the title says, I don't understand how Java's primitive data type long has a smaller maximum and minimum value than a float, despite the long being 64-bit and the float being 32-bit.
What is going on?
The reason is that a float uses floating-point representation. A long stores a large number of precise digits, while a float can store a much larger value, but without the same precision in its lower digits.
In a sense, a float stores values in scientific/exponential notation, so a large value can be stored in a small number of bits. Think of 2 x 10^30: it's a huge number, but it can be written down with only a handful of symbols.
A 32-bit float isn't just a simple number, like a long; it's broken up into fields. The fields are:
1 bit for the sign;
8 bits for the exponent, which is a power of 2;
23 bits for the mantissa.
The exponent field is encoded (usually it's just an offset from the real exponent), but it represents exponents from -126 to 127. (That covers 254 of the possible 256 values; the other two are used for special purposes.) The mantissa represents a value that can be from 1 to a value just below 2 (actually 2 - 2^-23). This means that the largest possible float is 2^127 x (2 - 2^-23), or almost 2^128, which is larger than the largest possible long value, which is 2^63 - 1.
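A quick way to see this in Java is just to print the constants (nothing here is specific to any particular code):
System.out.println(Long.MAX_VALUE);         // 9223372036854775807, about 9.2 x 10^18
System.out.println(Float.MAX_VALUE);        // 3.4028235E38, far larger, but not exact in its low digits
System.out.println((float) Long.MAX_VALUE); // 9.223372E18: the long fits in a float, with rounding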
More information:
https://en.wikipedia.org/wiki/IEEE_754
http://www.adambeneschan.com/How-Does-Floating-Point-Work/showfloat.php?floatvalue=340282346638528859811704183484516925440&floattype=float
I'm checking the Java doc, and seeing that Integer.MIN_VALUE is -2^31:
A constant holding the minimum value an int can have, -2^31.
While in C++, the 32-bit signed integer type long has a different minimum value:
LONG_MIN: Minimum value for an object of type long int: -2147483647 (-2^31+1) or less*
I am very confused about why they are different and how Java gets -2^31. In Java, if an integer has 32 bits and the first bit is used for the sign, -2^31+1 seems more logical, doesn't it?
The value of the most significant bit in a 32-bit number is 2^31, and as this bit counts as negative in a signed two's-complement integer, its value is -2^31. I guess C++ quotes -2^31+1 as the minimum because that gives it the same absolute value as MAX_VALUE, i.e. 2^31-1. It also means an integer can always store -MIN_VALUE, which is not the case in Java (and that can cause some fun bugs).
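One of those fun bugs, as a quick illustration:
System.out.println(Math.abs(Integer.MIN_VALUE)); // -2147483648: the negation overflows back to MIN_VALUE
System.out.println(-Integer.MIN_VALUE);          // -2147483648 as well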
C and C++ are intended to work with the native representation on machines using ones' complement, two's complement, or sign-magnitude. With two's complement you get a range from -2^(n-1) through 2^(n-1)-1, as in Java. Otherwise you lose the most negative number and gain a representation for -0.
In C++ you're only guaranteed that n (the number of bits) is at least 16, where Java guarantees it's exactly 32.
I'm doing a calculation that returns a long, for example 4611686018427387904 in decimal. I need to first convert it to hex and then check an array of size 16 depending on the bits set.
So the above number converts to 0x4000000000000000L, which corresponds to the first index in the array. If the number is 0x0004000000000000L, it corresponds to the 3rd index in the array.
My questions are:
Is there a quick way to convert a decimal to hex?
Is there a quick way to access an array depending on the bits set of the value (instead of using loops)?
If the number is in the long range, use Long.highestOneBit(). Alternatively, BigInteger has a bitLength() method.
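For example, a small sketch combining those with the indexing described in the question (counting nibbles from the left, zero-based):
long value = 4611686018427387904L;                      // 0x4000000000000000
String hex = Long.toHexString(value);                   // "4000000000000000"
int nibbleIndex = Long.numberOfLeadingZeros(value) / 4; // 0 here; 3 for 0x0004000000000000L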
First, a guess: you have a long that you are viewing as 16 4-bit numbers. You want to use each 4-bit number as an index into a 16-element array.
I think the fastest way to do this is with a mask and some bit-shifting; both are fast operations. If you mask off the bottom bits, you don't have to shift the result after you mask.
There are 16 4-bit groupings in a long, so:
long l = 1234;
int[] results = new int[16];
for (int i = 15; i >= 0; i--)
{
    int index = (int) l & 0xF; // the lowest 4 bits, as a value from 0 to 15
    results[i] = index;
    l = l >>> 4;               // unsigned shift, so a negative long doesn't drag 1s in from the left
}
This gets your 16 indices from the right (lower-order) bits of your long, and you say the left (higher-order) bits are the first indices. So this gets them all in reverse order and stores them accordingly. Adjust for shorts or whatever you need, and this will be fast.
Warning: I have not run this code. I mean it as a kind of java pseudo-code.
IN CASE YOU ARE NOT FAMILIAR WITH THE '&' OPERATOR: it keeps the 1s and 0s only in the positions where the mask has 1s, so & with 0xF gives you whatever the lower-order 4 bits are.
Remember that internally all primitives in Java are stored as binary. The fact that a decimal number is printed when you pass one to System.out.println() is just a view of that binary value. So if the value is stored in a primitive (double, float, long, int, short, byte), the 'conversion' from decimal is already done. If you want to convert the value to a hex string for display purposes, you can use Integer.toHexString(int).
Java has an upper limit on array sizes, which is an int, so just under 2^31 slots. Anything over that and you'll have to use two or more structures and combine them. I suppose that's sort of what you are doing. But if you are doing that, maybe use a long, and Long.highestOneBit() could be a first index into an array of arrays, where each slot is an array of potentially 2^31 slots. Of course that isn't a particularly efficient structure for memory usage, but it would give you a view that is potentially the size of a long. You probably don't have enough memory for that, but who knows.
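For display, Long.toHexString covers the long case from the question, for example:
System.out.println(Long.toHexString(4611686018427387904L)); // prints 4000000000000000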
I have a scenario where I'm working with large integers (e.g. 160 bit), and am trying to create the biggest possible unsigned integer that can be represented with an n bit number at run time. The exact value of n isn't known until the program has begun executing and read the value from a configuration file. So for example, n might be 160, or 128, or 192, etcetera...
Initially what I was thinking was something like:
BigInteger.valueOf((long)Math.pow(2, n));
but then I realized that the conversion to long sort of defeats the purpose, given that a long doesn't have enough bits to store the result in the first place. Any suggestions?
On the largest n-bit unsigned number
Let's first take a look at what this number is, mathematically.
In an unsigned binary representation, the largest n-bit number would have all bits set to 1. Let's take a look at some examples:
1(2) = 1 = 2^1 - 1
11(2) = 3 = 2^2 - 1
111(2) = 7 = 2^3 - 1
...
1...1(2) (n ones) = 2^n - 1
Note that this is analogous in decimal too. The largest 3-digit number is:
10^3 - 1 = 1000 - 1 = 999
Thus, a subproblem of finding the largest n-bit unsigned number is computing 2^n.
On computing powers of 2
Modern digital computers can compute powers of two efficiently, due to the following pattern:
2^0 = 1(2)
2^1 = 10(2)
2^2 = 100(2)
2^3 = 1000(2)
...
2^n = 10...0(2) (n zeros)
That is, 2^n is simply a number having bit n set to 1 and everything else set to 0 (remember that bits are numbered with zero-based indexing).
Solution
Putting the above together, we get this simple solution using BigInteger for our problem:
final int N = 5;
BigInteger twoToN = BigInteger.ZERO.setBit(N);
BigInteger maxNbits = twoToN.subtract(BigInteger.ONE);
System.out.println(maxNbits); // 31
If we were using long instead, we could write something like this:
// for 64-bit signed long version, N < 64
System.out.println(
(1L << N) - 1
); // 31
There is no "set bit n" operation defined for long, so traditionally bit shifting is used instead. In fact, a BigInteger analog of this shifting technique is also possible:
System.out.println(
BigInteger.ONE.shiftLeft(N).subtract(BigInteger.ONE)
); // 31
See also
Wikipedia/Binary numeral system
Bit Twiddling Hacks
Additional BigInteger tips
BigInteger does have a pow method to compute a non-negative power of any arbitrary number. If you're working in a modular ring, there are also modPow and modInverse.
You can individually setBit, flipBit or just testBit. You can get the overall bitCount, perform bitwise and with another BigInteger, and shiftLeft/shiftRight, etc.
As bonus, you can also compute the gcd or check if the number isProbablePrime.
ALWAYS remember that BigInteger, like String, is immutable. You can't invoke a method on an instance and expect that instance to be modified. Instead, always assign the result returned by the method back to your variable.
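A quick sketch of that last pitfall (the variable name is arbitrary):
BigInteger x = BigInteger.ONE;
x.setBit(5);     // result discarded; x is still 1
x = x.setBit(5); // x is now 33 (bit 5 added on top of the existing bit 0)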
Just to clarify, you want the largest n-bit number (i.e., the one with all n bits set). If so, the following will do that for you:
BigInteger largestNBitInteger = BigInteger.ZERO.setBit(n).subtract(BigInteger.ONE);
Which is mathematically equivalent to 2^n - 1. Your question asks how to compute 2^n, which is actually the smallest (n+1)-bit number. You can of course do that with:
BigInteger smallestNPlusOneBitInteger = BigInteger.ZERO.setBit(n);
I think there is a pow method directly on BigInteger. You can use it for your purpose.
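For instance, a sketch along those lines (n here is the bit count from the question; subtracting one yields the all-ones value it ultimately wants):
int n = 160; // example; in practice this comes from the configuration file
BigInteger maxNBits = BigInteger.valueOf(2).pow(n).subtract(BigInteger.ONE); // 2^n - 1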
The quickest way I can think of doing this is by using the constructor for BigInteger that takes a byte[].
BigInteger(byte[] val) constructs the BigInteger object from an array of bytes. You are, however, dealing with bits, and so creating a byte[] that might consist of {127, 255, 255, 255, 255} for a 39-bit integer representing 2^39 - 1 might be a little tedious.
You could also use the constructor BigInteger(String val, int radix), which might make what's going on in your code more readily apparent, if you don't mind a performance hit for parsing a String. Then you could generate a string like val = "111111111111111111111111111111111111111" and call BigInteger myInt = new BigInteger(val, 2); resulting in the same 39-bit integer.
The first option will require some thinking about how to represent your number. That particular constructor expects a two's-complement, big-endian representation of the number. The second will likely be marginally slower, but much clearer.
EDIT: Corrected numbers. I thought you meant represent 2^n, and didn't correctly read the largest value n bits could store.
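A rough sketch of both routes for the 39-bit case (untested; note that Java byte literals are signed, so 255 has to be written as (byte) 0xFF, and String.repeat needs Java 11+):
byte[] bytes = {0x7F, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF};
BigInteger fromBytes = new BigInteger(bytes);              // 549755813887, i.e. 2^39 - 1
BigInteger fromString = new BigInteger("1".repeat(39), 2); // the same value from 39 '1' characters
System.out.println(fromBytes.equals(fromString));          // true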