I am running into a strange problem I have never seen before.
I am trying to do a very simple task: a mathematical calculation that requires growing powers of 10.
To start with, I wrote a very simple loop that works fine, but what I don't understand is that the higher the value gets, the less precise the results become.
I would like to know why this happens, and how to fix it.
My test code:
public class PowerOfTen {

    private int lineCounter = 1;

    public static void main(String[] args) {
        PowerOfTen powerOfTen = new PowerOfTen();
        powerOfTen.growingOfTenMethodOne();
        powerOfTen.growingOfTenMethodTwo();
    }

    public void growingOfTenMethodOne() {
        double MAX_VALUE = 1e50;
        lineCounter = 1;
        for (double i = 1; i < MAX_VALUE; i = i * 10) {
            System.out.printf("%03d%1s%f\n", lineCounter, " | ", i);
            lineCounter++;
        }
    }

    public void growingOfTenMethodTwo() {
        double MAX_VALUE = 50;
        lineCounter = 1;
        for (double i = 0; i < MAX_VALUE; i++) {
            System.out.printf("%03d%1s%f\n", lineCounter, " | ", Math.pow(10, i));
            lineCounter++;
        }
    }
}
Both methods work and are supposed to return correct results, but both give some inaccurate results, as you can see in the examples below.
Method 1:
Lines 24, 26, 27, 28, 31, 32, 33 etc. do not return correct results:
022 | 1000000000000000000000.000000
023 | 10000000000000000000000.000000
024 | 99999999999999990000000.000000
025 | 1000000000000000000000000.000000
026 | 9999999999999999000000000.000000
027 | 99999999999999990000000000.000000
028 | 999999999999999900000000000.000000
029 | 10000000000000000000000000000.000000
030 | 100000000000000000000000000000.000000
031 | 999999999999999900000000000000.000000
032 | 9999999999999999000000000000000.000000
033 | 99999999999999990000000000000000.000000
Method 2:
Lines 24 and 30 do not return correct results:
022 | 1000000000000000000000.000000
023 | 10000000000000000000000.000000
024 | 99999999999999990000000.000000
025 | 1000000000000000000000000.000000
026 | 10000000000000000000000000.000000
027 | 100000000000000000000000000.000000
028 | 1000000000000000000000000000.000000
029 | 10000000000000000000000000000.000000
030 | 100000000000000010000000000000.000000
031 | 1000000000000000000000000000000.000000
032 | 10000000000000000000000000000000.000000
033 | 100000000000000000000000000000000.000000
It is not a "stupid problem"; it is one of the core problems of computer science ... dealing with the fact that correct representation of numbers is a non-trivial task.
You might want to start reading here for example.
Anybody who is "programming" should understand what this actually means; and how it affects your application/solution.
This is caused by the conversion from double-precision floating-point format to decimal format. Just as there are numbers with periodic expansions in decimal, there are numbers which are periodic in binary but aren't in decimal.
The cases you are seeing are periodic in binary.
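For example, 0.1 has no finite binary representation; printing the exact value stored in a double makes this visible (a quick demonstration using BigDecimal's double constructor):

import java.math.BigDecimal;

public class PeriodicInBinary {
    public static void main(String[] args) {
        // new BigDecimal(double) exposes the exact binary value, not the rounded string
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}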
A Java floating point double is always an IEEE 754 64-bit floating point type.
Method 1.
A floating point double can represent integers exactly up to 2^53. So inaccuracies will creep in once you exceed 9,007,199,254,740,992. (It's a common misconception that floating points always introduce inaccuracy: in your particular case the calculation is exact for the early iterations of your loop. In fact, every power of ten up to 10^22 happens to be exactly representable as a double, which is why your output only goes wrong from line 24 onward.)
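You can check the 2^53 boundary directly; a minimal sketch:

public class DoubleIntegerLimit {
    public static void main(String[] args) {
        double d = 9007199254740992.0;   // 2^53, the last contiguous exact integer
        System.out.println(d == d + 1);  // true: 2^53 + 1 is not representable
        System.out.println(d == d + 2);  // false: 2^53 + 2 is representable
    }
}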
Method 2.
Math.pow(x, y) is generally implemented via exp(y · log x) when the arguments are floating point. That introduces numerical precision issues, because a floating point double is only accurate to about 15 significant decimal digits.
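You can make the drift visible without printf formatting by comparing against exact integer arithmetic (a small sketch):

import java.math.BigDecimal;
import java.math.BigInteger;

public class PowDrift {
    public static void main(String[] args) {
        BigInteger exact = BigInteger.TEN.pow(23);            // exact 10^23
        BigDecimal approx = new BigDecimal(Math.pow(10, 23)); // nearest-double result
        System.out.println("exact : " + exact);
        System.out.println("double: " + approx);
        // The low-order digits differ: no double can hold 10^23 exactly,
        // since it needs more than 53 significant bits.
    }
}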
It all comes down to the Java specification.
In Java, int uses 32 bits to represent its value. float has a 24-bit significand, so integers greater than 2^24 will have their least significant bits rounded away. For example 33554435 (0x2000003) becomes 33554436, because in that range a float can only represent multiples of 4. double has a 53-bit significand, so it can represent any 32-bit integer without loss of data.
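A quick way to see that float rounding in action:

public class FloatTruncation {
    public static void main(String[] args) {
        int i = 33554435;            // 0x2000003, just above 2^25
        float f = i;                 // only 24 significand bits survive
        System.out.println((int) f); // prints 33554436: the low bits were rounded away
    }
}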
Check this article that explains everything in a very simple way.
Related
I am not sure how to phrase the topic of this question because I am new to bit manipulation and don't really understand how it works.
I'm in the process of reverse engineering a game application just to see how it works and wanted to figure out how exactly the '&' operator is being used in a method.
Partial Code:
int n = ...;        // random numbers will be provided below
int n2 = n & 1920;  // interested in this line of code
switch (n2) {
    // ignore the n2 value assignments inside the cases
    case 256: {
        n2 = 384;
        break;
    }
    case 384: {
        n2 = 512;
        break;
    }
    case 512: {
        n2 = 0;
        break;
    }
}
Test Values:
Input Values | Output Values | Substituting Values
n = 387 | n2 = 384 | ( 387 & 1920 ) = 384
n = 513 | n2 = 512 | ( 513 & 1920 ) = 512
n = 12546 | n2 = 256 | ( 12546 & 1920 ) = 256
n = 18690 | n2 = 256 | ( 18690 & 1920 ) = 256
Based on this use case I have a few questions:
What is the & operator doing in this example?
To me it looks like most of the values are being rounded down to the nearest bit interval, except for the numbers greater than 10000.
What is so important about the number 1920?
How did they come up with this number to get to a specific bit interval? (if possible to figure out)
The first thing you need to do, to understand bit manipulation, is to convert all base-10 decimal numbers into a number format showing bits, i.e. base-2 binary numbers or base-16 hexadecimal numbers (if you've learned to read those yet).
Bits are numbered from the right, starting at 0.
Decimal      Hex         Binary
  256   =   0x100   =   0b001_0000_0000
  384   =   0x180   =   0b001_1000_0000
  512   =   0x200   =   0b010_0000_0000
 1920   =   0x780   =   0b111_1000_0000
                          | | |  |    |
                         10 8 7  4    0   Bit number
As you can see, n & 1920 will clear all but bits 7-10.
As long as n doesn't have any set bits above bit 10, i.e. isn't greater than 0x7FF = 2047, the effect is as you stated: the values are rounded down (truncated) to the nearest bit interval, i.e. the nearest multiple of 128.
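Applying the mask to the question's test values confirms this (a small sketch):

public class MaskDemo {
    public static void main(String[] args) {
        int[] samples = {387, 513, 12546, 18690}; // input values from the question
        for (int n : samples) {
            // & 1920 keeps only bits 7-10 and clears everything else
            System.out.println(n + " & 1920 = " + (n & 1920));
        }
        // prints 384, 512, 256, 256, matching the question's output column
    }
}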
128 + 256 + 512 + 1024 = 1920.
These are also powers of 2 (writing ^ for exponentiation):
128 = 2^7
256 = 2^8
512 = 2^9
1024 = 2^10
The exponent also represents the location of the bit in the number, going from right to left starting with bit 0.
By ANDing a value with 1920 you can see whether any of those bits are set.
Let's say you wanted to check whether, of those four bits, n had only bit 7 set:

if ((n & 1920) == 128) {
    // it is set
}

Or to check whether it had bits 7 and 8 set:

if ((n & 1920) == 384) {
    // then those bits are set
}
You can also set a particular bit using |:

n |= 128;  // sets bit 7 to 1
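For example (a minimal sketch using one of the test values from the question):

int n = 513;            // bits 9 and 0 set
n |= 128;               // set bit 7
System.out.println(n);  // 641 = 512 + 128 + 1
System.out.println((n & 1920) == 640);  // true: bits 7 and 9 are now both set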
I want to convert an int array to a hex string. I am unsure if I am doing this correctly.
I create an int[] in another class and receive it via msg.obj. I am getting some hex values, but am unsure if they are correct.
int[] readBuf = (int[]) msg.obj; // int array is created in another class
StringBuffer output = new StringBuffer();
for (int a : readBuf) {
    int val1 = a & 0xff;
    output.append(Integer.toHexString(val1));
}
dataView.setText(output);
Assuming I understand your intention, there are two problems with the code:

int val1 = a & 0xff;

You're taking only the last byte of your int. If you want to convert the whole integer, remove the & 0xff.
You also want to make sure that the output of Integer.toHexString is always padded with zeroes in front so that its length is always 8 characters (every byte of the 4-byte int requires 2 hex characters). Otherwise both the array {1, 2, 3} and the array {291} will give you the same string: 123.
Here's a quick and dirty working code example:
public static String byteToUnsignedHex(int i) {
    String hex = Integer.toHexString(i);
    while (hex.length() < 8) {
        hex = "0" + hex;
    }
    return hex;
}

public static String intArrToHex(int[] arr) {
    StringBuilder builder = new StringBuilder(arr.length * 8);
    for (int b : arr) {
        builder.append(byteToUnsignedHex(b));
    }
    return builder.toString();
}

public static void main(String[] args) {
    System.out.println(intArrToHex(new int[]{1, 2, 3}));
    System.out.println(intArrToHex(new int[]{291}));
    System.out.println(intArrToHex(new int[]{0xFFFFFFFF}));
}
Output:
000000010000000200000003
00000123
ffffffff
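As an aside, the zero-padding loop can be replaced with String.format, which is more concise though slower (the helper name intToHex is just for illustration):

public static String intToHex(int i) {
    // "%08x" pads the lowercase hex digits with leading zeros to width 8
    return String.format("%08x", i);
}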
@Malt's answer definitely highlights the problems with your code: it doesn't 0-pad the int hex values, and it masks the int to take only the last 8 bits using a & 0xff. Your original question implies you are only after the last byte in each int, but it really isn't clear.
You say you get results every second from your remote object. On a slow machine with large arrays it could take a significant number of milliseconds to convert a long int[] to a hex string using your (or rather Malt's corrected version of your) method.
A much faster method is to get each 4-bit nibble from each int using bit shifting, and to look the corresponding hex character up in a static array (note this does base-16 encoding; you would get shorter strings from something like base-64 encoding):
public class AltConverter {
    protected static final char[] encoding = "0123456789ABCDEF".toCharArray();

    public String convertToString(int[] arr) {
        char[] encodedChars = new char[arr.length * 4 * 2];
        for (int i = 0; i < arr.length; i++) {
            int v = arr[i];
            int idx = i * 4 * 2;
            for (int j = 0; j < 8; j++) {
                // extract the j-th nibble, starting from the most significant
                encodedChars[idx + j] = encoding[(v >>> ((7 - j) * 4)) & 0x0F];
            }
        }
        return new String(encodedChars);
    }
}
Testing this vs your original method using caliper (microbenchmark results here) shows this is around 11x faster † (caveat: on my machine), even for a single-element array. EDIT: For anyone interested in running this and comparing the results, there is a gist here with the source code.
The original microbenchmark used Caliper, as I happened to be trying it out at the time. I have since rewritten it to use JMH. While doing so I found that the results I linked to and copied here originally used an array that was only ever filled with 0 for each int element. This caused the JVM to optimise the AltConverter code for arrays with length > 1, yielding artificial 10x to 11x improvements in AltConverter vs SimpleConverter. JMH and Caliper produce very similar results for both the flawed and the corrected benchmark. (Updated benchmark project for maven eclipse here.)
This is around 2x to 4x faster depending on array length (on my machine™). The mean runtime results (in ns) are:
Average run times in nanoseconds
Original method: SimpleConverter
New method: AltConverter
| N          |    Alt / ns | Error / ns | Simple / ns | Error / ns | Speed-up |
| ---------: | ----------: | ---------: | ----------: | ---------: | -------: |
| 1          |          30 |          1 |          61 |          2 |     2.0x |
| 100        |         852 |         19 |       3,724 |         99 |     4.4x |
| 1,000      |       7,517 |        200 |      36,484 |        879 |     4.9x |
| 10,000     |      82,641 |      1,416 |     360,670 |      5,728 |     4.4x |
| 100,000    |   1,014,612 |    241,089 |   4,006,940 |     91,870 |     3.9x |
| 1,000,000  |   9,929,510 |    174,006 |  41,077,214 |  1,181,322 |     4.1x |
| 10,000,000 | 182,698,229 | 16,571,654 | 432,730,259 | 13,310,797 |     2.4x |
† Disclaimer: Micro-benchmarking is dangerous to rely on as an indication of performance in a real-world app, but caliper is a good benchmarking framework, and JMH is, imho, even better. A performance difference of 4x, with very small standard deviation and a good t-test result in caliper, is enough to indicate a good performance increase even inside a more complex application.
I'm implementing a business rule to calculate a percentage increase on a stock level:
Stock level | Percentage Increase | Expected output
100 | 93 | 193
As decimal stock levels are not well defined, the rule is to round the output up:
public int calculateStockLevel(int rawStockLevel, double percentageIncrease) {
    return (int) Math.ceil(rawStockLevel * (1 + (percentageIncrease / 100)));
}
Actual output for the situation above: 194 (the test fails).
This looks like a floating-point precision error. What's an elegant and readable way to get this test to pass?
You should use BigDecimal to specify what precision you want, but for your case you could do something simpler: just divide by 100 at the end.
public int calculateStockLevel(int rawStockLevel, double percentageIncrease) {
    return (int) Math.ceil(rawStockLevel * (100 + percentageIncrease) / 100);
}
Binary floating-point types can't represent every decimal value exactly. So percentageIncrease / 100 gets the closest representation in binary. In your case the exact value of the double closest to 0.93 is 0.930000000000000048849813083507..., slightly more than the true decimal value. Therefore the product comes out slightly above 193, and Math.ceil rounds it up to 194.
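You can print the exact stored values yourself with BigDecimal's double constructor (a quick sanity check, not production code):

import java.math.BigDecimal;

public class ExactDouble {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the exact binary value, not the rounded string
        System.out.println(new BigDecimal(93.0 / 100));              // slightly above 0.93
        System.out.println(new BigDecimal(100 * (1 + 93.0 / 100)));  // slightly above 193
    }
}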
If you want exact decimal output you must use decimal arithmetic, such as BigDecimal. But in your case you can do it in plain integer arithmetic, using the fact that the fractional part of the result is non-zero exactly when the remainder of the integer division is non-zero:

int temp = rawStockLevel * (100 + (int) percentageIncrease); // assumes a whole-number percentage
int result = temp / 100;
if (temp % 100 != 0) {
    result++; // round up if not evenly divisible by 100
}
return result;
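If the percentage can be fractional, BigDecimal is the safe route; a minimal sketch (the rounding mode is chosen to match the "round up" rule):

import java.math.BigDecimal;
import java.math.RoundingMode;

public int calculateStockLevel(int rawStockLevel, double percentageIncrease) {
    BigDecimal factor = BigDecimal.ONE
            .add(BigDecimal.valueOf(percentageIncrease)
                    .divide(BigDecimal.valueOf(100))); // exact: dividing by 100 always terminates
    return BigDecimal.valueOf(rawStockLevel)
            .multiply(factor)
            .setScale(0, RoundingMode.CEILING)         // round up, per the business rule
            .intValueExact();
}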
Suppose the inputs are two integer values. I want to convert the two integer values to binary, perform binary addition with the carries ignored, and give the result (the integer equivalent). How would I go about doing this?
An idea that comes to mind is to convert them to binary strings in some way, use an algorithm for binary addition, and then ignore the carry (delete the carry character from the string whenever a carry occurs).
Sample Input
One number : 1
Second number : 3
Sample Output
2
Explanation:
The lowest bit in the sum is 1 + 1 = 0
The next bit is 0 + 1 = 1 (the carry from the previous bit is discarded)
The answer is 10 in binary, which is 2.
You are probably looking for the bitwise XOR (exclusive OR) which will provide the following outputs for the given inputs:
^ | 0 | 1
--+---+--
0 | 0 | 1
--+---+--
1 | 1 | 0
It behaves like binary addition (1 + 1 = 10) but discards the carry when both operands are 1.
int a = 5;     // 101
int b = 6;     // 110
int c = a ^ b; // 3, i.e. 011
This is just an XOR of the two integers in binary. In Java you can do
result = v1 ^ v2;
I did some tests on pow(exponent) method. Unfortunately, my math skills are not strong enough to handle the following problem.
I'm using this code:
BigInteger.valueOf(2).pow(var);
Results:
var | time in ms
2000000 | 11450
2500000 | 12471
3000000 | 22379
3500000 | 32147
4000000 | 46270
4500000 | 31459
5000000 | 49922
See? The 2,500,000 exponent is calculated almost as fast as 2,000,000, and 4,500,000 is calculated much faster than 4,000,000.
Why is that?
To give you some help, here's the original implementation of BigInteger.pow(exponent):
public BigInteger pow(int exponent) {
    if (exponent < 0)
        throw new ArithmeticException("Negative exponent");
    if (signum == 0)
        return (exponent == 0 ? ONE : this);

    // Perform exponentiation using repeated squaring trick
    int newSign = (signum < 0 && (exponent & 1) == 1 ? -1 : 1);
    int[] baseToPow2 = this.mag;
    int[] result = {1};

    while (exponent != 0) {
        if ((exponent & 1) == 1) {
            result = multiplyToLen(result, result.length,
                                   baseToPow2, baseToPow2.length, null);
            result = trustedStripLeadingZeroInts(result);
        }
        if ((exponent >>>= 1) != 0) {
            baseToPow2 = squareToLen(baseToPow2, baseToPow2.length, null);
            baseToPow2 = trustedStripLeadingZeroInts(baseToPow2);
        }
    }
    return new BigInteger(result, newSign);
}
The algorithm uses repeated squaring (squareToLen) and multiplication (multiplyToLen). The time for these operations to run depends on the size of the numbers involved. The multiplications of the large numbers near the end of the calculation are much more expensive than those at the start.
The multiplication is only done when this condition is true: ((exponent & 1) == 1). The number of squaring operations depends on the number of bits in the exponent (excluding leading zeros), but a multiplication is only required for the bits that are set to 1. It is easier to see which operations are required by looking at the binary representations of the exponents:
2000000: 0000111101000010010000000
2500000: 0001001100010010110100000
3000000: 0001011011100011011000000
3500000: 0001101010110011111100000
4000000: 0001111010000100100000000
4500000: 0010001001010101000100000
5000000: 0010011000100101101000000
Note that 2.5M and 4.5M are lucky in that they have fewer high bits set than the numbers surrounding them. The next time this happens is at 8.5M:
8000000: 0011110100001001000000000
8500000: 0100000011011001100100000
9000000: 0100010010101010001000000
The sweet spots are exact powers of 2.
1048575: 0001111111111111111111111 // 16408 ms
1048576: 0010000000000000000000000 // 6209 ms
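You can inspect this yourself with Integer.toBinaryString and Integer.bitCount (a quick check; note that the position of the set bits matters more than their count, because the multiplications triggered by the high bits operate on much larger intermediate numbers):

public class PowCost {
    public static void main(String[] args) {
        int[] exponents = {2000000, 2500000, 4000000, 4500000};
        for (int e : exponents) {
            // one multiplyToLen call per set bit; the high bits are the expensive ones
            System.out.printf("%7d = %s (%d set bits)%n",
                    e, Integer.toBinaryString(e), Integer.bitCount(e));
        }
    }
}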
Just a guess:
The exponent is handled bit by bit, and if the least significant bit is 1, additional work is done.
If L is the number of bits in the exponent,
A is the number of those bits that are 1,
t1 is the time to process the common part,
and t2 is the additional processing time when the LSbit is 1,
then the run time would be
L*t1 + A*t2
i.e. the time depends on the number of 1s in the binary representation of the exponent.
Now writing a little program to verify my theory...
I'm not sure how many times you've run your timings. As some of the commenters have pointed out, you need to time operations many, many times to get good results (and they can still be wrong).
Assuming you have timed things well, remember that there are a lot of shortcuts that can be taken in math. You don't have to do the operations 5*5*5*5*5*5 to calculate 5^6.
Here is one way to do it much more quickly. http://en.wikipedia.org/wiki/Exponentiation_by_squaring
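For reference, a minimal sketch of exponentiation by squaring in plain Java (long arithmetic, so it overflows for large results and assumes exponent >= 0; it only illustrates the technique that BigInteger.pow applies to arbitrary-size numbers):

public static long powBySquaring(long base, int exponent) {
    long result = 1;
    while (exponent != 0) {
        if ((exponent & 1) == 1) {
            result *= base;   // multiply only when the current exponent bit is 1
        }
        base *= base;         // square once per bit of the exponent
        exponent >>>= 1;
    }
    return result;
}

// powBySquaring(5, 6) == 15625, i.e. 5^6 in a handful of multiplications
// instead of five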