This question already has answers here:
Multiplication of two ints overflowing to result in a negative number
(5 answers)
Closed 9 years ago.
public class Test {
    public static void main(String[] args) {
        int sum = 0;
        for (int i = 1; i < 10; i++)
            sum = sum + i*i*i*i*i*i*i*i*i*i;
        System.out.println(sum);
    }
}
OUTPUT:619374629
for (int i = 1; i < 10; i++)
    sum = sum + i*i*i*i*i*i*i*i*i*i*i;
System.out.println(sum);
OUTPUT:
-585353335
In the second case I expected the integer range to be exceeded, but why does it produce a negative number instead of an error? What is the reason for this behaviour?
Thanks in advance...
You have overflowed a 32-bit integer.
Consider what happens near the end of the loop, when i is equal to 9:
sum = sum + 31381059609 // 9^11, roughly 31 billion
But the maximum positive number that can be stored in a 32 bit integer is only about 2 billion (2 with nine zeroes).
In fact, it gets even worse: the intermediate calculations are performed with the same limited precision, so as soon as the repeated multiplication i*i*i*... overflows, every further factor is multiplied into a value that is already wrong (and possibly negative).
So the number you end up with doesn't appear to follow any rules of arithmetic, but it makes perfect sense once you know that primitive integers have a fixed, limited size.
The solution is to use a 64-bit long and hope you don't overflow that too; if you do, you need BigInteger.
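For illustration, here is a minimal sketch of the same loop done with long and with BigInteger (class and variable names are mine, not from the question); the wrapped int value matches the -585353335 output above:
import java.math.BigInteger;

public class SumOfEleventhPowers {
    public static void main(String[] args) {
        long sumAsLong = 0;
        for (int i = 1; i < 10; i++) {
            long p = 1;
            for (int k = 0; k < 11; k++) {
                p *= i;                          // long arithmetic: no 32-bit wrap here
            }
            sumAsLong += p;
        }
        System.out.println(sumAsLong);           // 42364319625 -- the true sum
        System.out.println((int) sumAsLong);     // -585353335  -- the wrapped int value

        // If even long might overflow, BigInteger is the fallback:
        BigInteger big = BigInteger.ZERO;
        for (int i = 1; i < 10; i++) {
            big = big.add(BigInteger.valueOf(i).pow(11));
        }
        System.out.println(big);                 // 42364319625
    }
}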
Java defines integer arithmetic as signed two's-complement modulo 2^32 (for int) and 2^64 (for long). So any time the result of an int multiplication is 2^31 or higher, it wraps around and becomes a negative number. That's just the way integer math works in Java.
(See the Java Language Specification for the exact rules.)
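A tiny sketch of that wrap-around behaviour (values chosen only for illustration; the exact-arithmetic methods require Java 8+):
public class WrapDemo {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE + 1);   // -2147483648: wraps to Integer.MIN_VALUE
        System.out.println(65536 * 65536);           // 0: 2^32 is congruent to 0 mod 2^32

        // Java 8+ offers exact variants that throw instead of wrapping silently:
        // Math.multiplyExact(65536, 65536)  -> throws ArithmeticException
    }
}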
You are using a primitive type, so when the value overflows, Java simply keeps the low 32 bits of the result, and that bit pattern happens to be negative. If you want an error instead, use Math.addExact / Math.multiplyExact (Java 8+), which throw ArithmeticException on overflow, or switch to BigInteger.
As you suspected, you exceeded the int range. The value does not become infinite; it wraps around, and because the highest bit (the sign bit) gets set, the result shows up as negative.
This question already has answers here:
BigInteger.pow(BigInteger)?
(9 answers)
Closed 3 years ago.
I have code that raises a number to an exponent taken from input:
BigInteger a = new BigInteger("2", 10);
BigInteger b;
b = a.pow(9999999999);
It works when the exponent has around 7 digits or fewer. For example:
BigInteger a = new BigInteger("2", 10);
BigInteger b;
b = a.pow(1234567);
Is it possible, with my code or otherwise, to use a 10-digit exponent?
I'm using JDK 1.8.
BigInteger.pow() only takes an int parameter, so you can't raise to a power bigger than Integer.MAX_VALUE in a single call.
The resulting numbers would also be incredibly big (as in "rapidly approaching and passing the number of particles in the observable universe") even if you could compute them, and there are very few uses for values like that.
Note that the "power with modulus" operation which is often used in cryptography is implemented using BigInteger.modPow() which does take BigInteger arguments and can therefore handle effectively arbitrarily large values.
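For instance, a small sketch of modPow (the base, exponent and modulus here are arbitrary choices, not taken from the question):
import java.math.BigInteger;

public class ModPowDemo {
    public static void main(String[] args) {
        BigInteger base     = new BigInteger("2");
        BigInteger exponent = new BigInteger("9999999999");  // 10 digits: fine here
        BigInteger modulus  = new BigInteger("1000000007");  // arbitrary prime modulus

        // Computes (base^exponent) mod modulus without ever building base^exponent
        System.out.println(base.modPow(exponent, modulus));
    }
}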
pow's parameter is an int. The range of int is -2147483648 to 2147483647, so the answer depends on which 10 digits you use. If they are 1234567890, that's fine, it's in range (though you may get a really, really big BigInteger that pushes your memory limits); if they are 9999999999, as in your question, that's not fine, it's out of range.
E.g., this compiles:
int a = 1234567890;
This does not:
int b = 9999999999;
^--------------- error: integer number too large: 9999999999
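And a sketch of the in-range case, assuming the question really wants 2 as the base (the result is huge, so only its bit length is printed here):
import java.math.BigInteger;

public class PowDemo {
    public static void main(String[] args) {
        BigInteger a = new BigInteger("2", 10);   // the value 2, radix 10

        // 1234567 fits in an int, so this call is legal -- but the result has
        // over 370,000 decimal digits, so expect it to be slow and memory-hungry.
        BigInteger b = a.pow(1234567);
        System.out.println(b.bitLength());        // 1234568

        // a.pow(9999999999) cannot even be expressed: pow takes an int,
        // and 9999999999 is larger than Integer.MAX_VALUE.
    }
}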
This question already has answers here:
Large Numbers in Java
(6 answers)
Closed 5 years ago.
I've written a method that tests whether a number is prime or not. To maximise the range of numbers that the user can input, I wanted to use doubles. The issue is, after testing it on a very large prime, like 40 digits or so, my method returns false (I've already tested the logic with an int version, and it works just fine as far as I can tell).
Here is my code:
public static boolean isPrime(double number) {
    double sqrt = Math.sqrt(number);
    for (double i = 2; i <= sqrt; i++) {
        if (number % i == 0) {
            return false;
        }
    }
    return true;
}
I know the reason it's not working at very high numbers is the accuracy error, but is there a way around this?
Yes. Use BigInteger.
Note that long would be better than double, since long can represent all integers up to 2^63 - 1 exactly, whereas double starts losing precision at 2^53 + 1. However, neither of these types is suitable for 40-digit (decimal) numbers.
BigInteger arithmetic is significantly slower, but you will be able to go up to (at least) 2^Integer.MAX_VALUE ... provided that you have enough heap space.
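As a sketch, BigInteger can both generate and test primes of that size (133 bits is roughly 40 decimal digits; the class name is mine):
import java.math.BigInteger;
import java.security.SecureRandom;

public class BigPrimeDemo {
    public static void main(String[] args) {
        // 133 bits is roughly a 40-digit decimal number
        BigInteger candidate = BigInteger.probablePrime(133, new SecureRandom());
        System.out.println(candidate);

        // isProbablePrime(certainty): chance of a false positive is at most 2^-certainty
        System.out.println(candidate.isProbablePrime(50));   // true
    }
}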
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 5 years ago.
I am writing a program that takes raw double values from a database and converts them to 8-byte hex strings, but I don't know how to prevent loss of precision. The data received from all devices is stored as doubles, including the 8-byte identification values.
Doubles such as 7.2340172821234e+16, with magnitude around 10^16, convert correctly without loss of precision.
However, once the magnitude reaches 10^17, Java loses precision.
For example, 1.44464854248327e+17 is stored by Java as 1.44464854248327008E17.
The code I am using looks like this:
public Foo(double bar) {
    this.barString = Long.toHexString((long) bar);
    if (barString.length() == 15) {
        barString = "0" + barString; // to account for leading zeroes lost on data entry
    }
}
I am using a test case similar to this to test it:
@Test
public void testFooConstructor() {
    OtherClass other = new OtherClass();
    OtherClass.Foo test0 = other.new Foo(72340172821234000d);  // 7.2340172821234e+16
    assertEquals("0101010100000150", test0.barString);         // This test passes
    OtherClass.Foo test1 = other.new Foo(144464854248327000d); // 1.44464854248327e+17
    assertEquals("02013e0500000758", test1.barString);         // This test fails
}
The unit test states:
Expected: 02013e0500000758
Actual: 02013e0500000760
When I print out the values that Java actually stored for 72340172821234000d and 144464854248327000d, it prints, respectively:
7.2340172821234E16
1.44464854248327008E17
The latter value is off by 8, which seems to be consistent for the few that I have tested.
Is there anything I can do to correct this error?
EDIT: This is not a case where I care about anything past the ones place. The question this is marked as a duplicate of asks why floating-point numbers are less precise; I am asking how to avoid the loss of precision, through workarounds like the ones Roman Puchkovskiy suggested.
You could take your floating point values from database as strings (and not floating points) and then use BigDecimal to convert them to long:
String fpAsString = getFromDB();
long longValue = new BigDecimal(fpAsString).longValue();
this.barString = Long.toHexString(longValue);
BigDecimal.longValue() is analogous to a narrowing primitive conversion from double to long, but it does not lose precision (apart from discarding any fractional part). You can lose something if the result does not fit into a long, but the same would happen with your cast to long.
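Applied to the second value from the question (wrapped in a small demo class of my own), that path keeps every digit:
import java.math.BigDecimal;

public class HexFromDbString {
    public static void main(String[] args) {
        String fpAsString = "144464854248327000";            // value as text, straight from the DB
        long longValue = new BigDecimal(fpAsString).longValue();
        String barString = Long.toHexString(longValue);      // "2013e0500000758"
        if (barString.length() == 15) {
            barString = "0" + barString;                     // restore the leading zero
        }
        System.out.println(barString);                       // 02013e0500000758
    }
}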
The float and double types are very good at storing very large or very small numbers, but bad at storing numbers with many significant digits, and this is due to their binary representation.
Basically, if you look at how a double or float is laid out in memory, there is one bit for the sign, several bits for the exponent, and several bits for the fraction (the mantissa).
For a 32-bit float the layout is 1 sign bit, 8 exponent bits and 23 fraction bits, and the value is (-1)^sign × 1.fraction × 2^(exponent - 127). A 64-bit double works the same way, with 11 exponent bits, 52 fraction bits and an exponent bias of 1023.
The number of significant digits is limited by the size of the fraction part, but even with that limit, doubles and floats can represent very big and very small numbers by scaling with the exponent.
In a Java double, the fraction part takes 52 bits. If you check what the largest value a 52-bit field can hold is (2^52 = 4,503,599,627,370,496), you will see it is a 16-digit number. A double can represent bigger numbers than that by shifting with the exponent, but it cannot store a number with more than about 16 significant digits without loss of precision.
Note that there is more to it than this; the above is only a very basic explanation of the double and float representations.
If you want to dive to the more accurate explanation you can check this wikipedia page: https://en.wikipedia.org/wiki/Single-precision_floating-point_format
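A quick sketch that shows where the precision runs out (2^53 is the first point where adding 1 no longer changes a double), using the 17-digit value from the question:
public class DoublePrecisionDemo {
    public static void main(String[] args) {
        long limit = 1L << 53;                                       // 9007199254740992 == 2^53
        System.out.println((double) limit == (double) (limit + 1));  // true: the +1 is lost

        long fromQuestion = 144464854248327000L;
        System.out.println((long) (double) fromQuestion);            // 144464854248327008
        // The round trip through double lands on the nearest representable value,
        // 8 away from the original -- exactly the error seen in the failing test.
    }
}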
I am a beginner in Java, and have just started learning this language.
I am learning and experimenting with examples from Herbert Schildt's book to test my understanding.
My objective is to convert a negative integer to hex using built-in Java methods, and then back to an integer (or long). However, there are two issues I am facing:
Issue #1: As I understand from the threads hex string to decimal conversion, Java Integer parseInt error and Converting Hexadecimal String to Decimal Integer, the converted value fffffff1 is too big to fit into an Integer, so I have used Long. However, when fffffff1 is converted back to a Long, I don't get -15; I get the garbage value 4294967281.
Moreover, when I type-cast the result from Long to Integer, it works fine. I am not sure why the Long result shows a garbage value and then, just by casting it to an int, I magically get the right value. I am sure I am missing something crucial here.
Issue #2: If I don't pass a radix (or change it from 16 to, say, 4) to Long.parseLong(), I get an exception.
Here's my code:
public class HexByte {
    public static void main(String[] args) {
        byte b = (byte) 0xf1;
        System.out.println("Integer value is:" + Integer.valueOf(b)); // you would get -15

        int i = -15;
        System.out.println("Hexadecimal value is:" + Integer.toHexString(i));

        // Let's try to convert to hex and then back to integer:
        System.out.println("Integer of Hexadecimal of -15 is:" + (int) Long.parseLong(Integer.toHexString(i), 16));
        // type-cast to int works well, but not sure why

        System.out.println("Integer of Hexadecimal of -15 is:" + Long.parseLong(Integer.toHexString(i), 16));
        // this surprisingly prints a garbage value

        System.out.println("Integer of Hexadecimal of -15 is:" + Long.parseLong(Integer.toHexString(i)));
        // doesn't work - throws an exception
    }
}
Can someone please help me? I didn't want to open separate threads for the above issues, so I have included both here.
Issue 1:
Negative numbers are represented using two's complement, which you can read more about on Wikipedia.
Basically, the left-most bit is used to determine the sign of the integer (whether it's positive or negative). A 0 means the number is positive, and a 1 means it's negative.
fffffff1 doesn't reach all the way to the left of a long, so after the conversion to long the leftmost bit is a 0, which means the number is positive. When you cast it to an int, the upper bits are simply dropped, and the leftmost remaining bit is a 1, so the result is negative.
An example, with much shorter lengths for demonstration's sake:
4-bit number: 0111 -> this is positive since it starts with a 0
2-bit number using the last 2 bits of the 4-bit number: 11 -> this is negative since it starts with a 1.
Longs are like the 4-bit number in this example, and ints are like the 2-bit number. Longs have 64 bits and ints have 32 bits.
Issue 2:
If you don't specify a radix, Long.parseLong assumes base 10. You're giving it "fffffff1", and it doesn't recognize "f" as a digit in base 10 and thus throws an exception. When you specify the radix 16 then it knows "f" = 15 so there aren't any problems.
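As a side note, a small sketch of a more direct round trip, assuming Java 8+ (Integer.parseUnsignedInt):
public class HexRoundTrip {
    public static void main(String[] args) {
        int i = -15;
        String hex = Integer.toHexString(i);                     // "fffffff1"

        System.out.println(Integer.parseUnsignedInt(hex, 16));   // -15: no long, no cast needed
        System.out.println((int) Long.parseLong(hex, 16));       // -15: the route from the question
        System.out.println(Long.parseLong(hex, 16));             // 4294967281: same bits, read as a positive long
    }
}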
I'm trying to convert a Java function to pl/pgsql, and one problem I found is that when I sum two negative numbers, I get a positive number. More specifically:
public void sum() {
    int n1 = -1808642602;
    int n2 = -904321301;
    System.out.println(n1 + n2); // result is 1582003393
}
In pl/pgsql I get an integer out of range error, and if I change the variables' type to bigint I get the normal sum of the two negative numbers, -2712963903, instead of 1582003393.
How do I get pl/pgsql to produce the same result without raising an integer out of range error?
This happens because the Java int overflows (wraps around) without telling you.
To get the answer you are looking for in pl/pgsql, use bigint, then detect the case where the result is less than Java's Integer.MIN_VALUE (-2^31), which is exactly the case where Java produces a positive result, and add 2^32 to get what Java would give you.
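In Java terms, the adjustment described above looks roughly like this (a sketch; the pl/pgsql version would do the same arithmetic on bigint values):
public class WrapEmulation {
    public static void main(String[] args) {
        long n1 = -1808642602;
        long n2 = -904321301;

        long exact = n1 + n2;                    // -2712963903: the mathematically correct sum
        long wrapped = exact;
        if (wrapped < Integer.MIN_VALUE) {
            wrapped += (1L << 32);               // add 2^32, as described above
        } else if (wrapped > Integer.MAX_VALUE) {
            wrapped -= (1L << 32);
        }
        System.out.println(wrapped);             // 1582003393: what Java's int arithmetic prints
        System.out.println((int) exact);         // 1582003393: same result via the narrowing cast
    }
}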
You are overflowing the int; try the same thing with a long and it should work.
It's overflowing because the result doesn't fit in an int. Use long instead.
Java sums them as 32-bit signed integers, which wrap around when they pass -2^31. Have you tried a 64-bit long instead?
The valid range of int in Java is -2,147,483,648 to 2,147,483,647 (-2^31 to 2^31 - 1).
You are causing an integer overflow. You should use long instead of int; long ranges from -2^63 to 2^63 - 1.
Java treats those 2 numbers as signed integers.
Basically, the range of signed integers is -2,147,483,648 to 2,147,483,647 (-2^31 to 2^31 - 1).
The sum of those two values is -2712963903, which is less than the minimum -2^31, so it overflows and wraps around: -2712963903 + 2^32 = 1582003393, which is the signed int you see.
Try using a long, which will not overflow in this case (though it still can for large enough numbers):
System.out.println((long) n1 + n2);