This question already has answers here:
Why, In Java arithmetic, overflow or underflow will never throw an Exception?
(4 answers)
Closed 5 years ago.
I'm doing an AP Comp Sci review sheet and I don't understand why
System.out.println(365 * 24 * 3600 * 1024 * 1024 * 1024); prints 0. I understand that an int is 32 bits, has a maximum of 2147483647, and could not hold 33861522161664000, but why doesn't it give an overflow exception or a similar error?
From the Java Language Specification,
The integer operators do not indicate overflow or underflow in any
way.
That's why you are not getting any exception.
The results are specified by the language as follows:
If an integer multiplication overflows, then the result is the
low-order bits of the mathematical product as represented in some
sufficiently large two's-complement format. As a result, if overflow
occurs, then the sign of the result may not be the same as the sign of
the mathematical product of the two operand values.
Integer.MAX_VALUE + 1 == Integer.MIN_VALUE
Integer.MIN_VALUE - 1 == Integer.MAX_VALUE
That's why you are getting zero as the result.
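You can even see why this particular expression comes out as exactly zero: 365 * 24 * 3600 = 31536000 is divisible by 4, so multiplying it by 1024 * 1024 * 1024 = 2^30 produces a multiple of 2^32, whose low 32 bits are all zero. A minimal sketch of the wraparound, using nothing beyond the question's own expression:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // int arithmetic wraps modulo 2^32; no exception is ever thrown.
        System.out.println(Integer.MAX_VALUE + 1 == Integer.MIN_VALUE); // true

        // 365 * 24 * 3600 = 31536000 is divisible by 4, so the full product
        // is a multiple of 2^32 and its low 32 bits are all zero.
        System.out.println(365 * 24 * 3600 * 1024 * 1024 * 1024); // 0

        // The same expression in long arithmetic keeps the true value.
        System.out.println(365L * 24 * 3600 * 1024 * 1024 * 1024); // 33861522161664000
    }
}
```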
You have to use BigInteger:
BigInteger bi1 = new BigInteger("123");
BigInteger bi2 = new BigInteger("50");

// perform the multiply operation on bi1 using bi2
BigInteger bi3 = bi1.multiply(bi2);

// print the result
System.out.println("Result of multiply is " + bi3);
Just add an L to identify that one of the values is of primitive type long:
System.out.println(365L * 24 * 3600 * 1024 * 1024 * 1024);
/* ^
* 'L' to indicate that one of the factors is long
*/
In Java, all math is done in the largest data type required to handle
all of the current values. So int * int is always done as an int, but
int * long is done as a long.
In this case, the 1024*1024*1024*80 is done as an int, which overflows
int.
The "L" of course forces one of the operands to be a 64-bit long,
so all of the math is done in long and no overflow occurs.
Credit goes to Erich: https://stackoverflow.com/a/1494884/5645656
EDIT:
In many ways Java is based on C or C++, and these are based on
assembly. Overflow/underflow is silent in C and C++ and almost
silent in assembly (unless you check special flags). This is likely
due to the fact that C and C++ didn't have exceptions when they were
first proposed. If you wanted to see overflows/underflows, you just
used a larger type, e.g. long long int or long double ;) BTW, assembly
has something similar to exceptions, called traps or interrupts, but
overflow/underflow doesn't cause a trap AFAIK.
Credit goes to Peter Lawrey: https://stackoverflow.com/a/15998029/5645656
TIP: Sometimes using Google can answer your question before you ask it.
Related
I was practicing some castings in Java and I faced a situation for which I couldn't find any answers, anywhere. There are a lot of similar questions with answers, but none gave me an explanation for this particular case.
When I do something like
long l = 165787121844687L;
int i = (int) l;
System.out.println("long: " + l);
System.out.println("after casting to int: " + i);
The output is
long: 165787121844687
after casting to int: 1384219087
This result is very intriguing for me.
I know that the type long is a 64-bit integer, and the type int is a 32-bit integer. I also know that when we cast a larger type to a smaller one, we can lose information. And I know that there is a Math.toIntExact() method that is quite useful.
But what's the explanation for this "1384219087" output? There was loss of data, but why this number? How "165787121844687" became "1384219087"? Why does the code even compile?
That's it. Thanks!
165787121844687L in hex notation = 0000 96C8 5281 81CF
1384219087 in hex notation = 5281 81CF
So the cast truncated the top 32 bits as expected.
                          32 bits
                          deleted
                       ▼▼▼▼ ▼▼▼▼
165_787_121_844_687L = 0000 96C8 5281 81CF ➣ 1_384_219_087
     64-bit long                 ▲▲▲▲ ▲▲▲▲    32-bit int
                                   32 bits
                                 remaining
If you convert these two numbers to hexadecimal, you get
96C8528181CF
528181CF
See what's happened here?
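You can verify the truncation directly; this sketch just prints the hex forms with the standard Long/Integer.toHexString methods:

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        long l = 165_787_121_844_687L;
        int i = (int) l;                            // keeps only the low 32 bits

        System.out.println(Long.toHexString(l));    // 96c8528181cf
        System.out.println(Integer.toHexString(i)); // 528181cf
        System.out.println(i);                      // 1384219087

        // The cast is equivalent to masking off everything above bit 31.
        System.out.println((int) (l & 0xFFFF_FFFFL) == i); // true
    }
}
```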
The Answer by OldProgrammer is correct, and should be accepted. Here is some additional info, and a workaround.
Java spec says so
Why does the code even compile?
When you cast a numeric primitive in Java, you take responsibility for the result including the risk of information loss.
Why? Because the Java spec says so. Always best to read the documentation. Programming by intuition is risky business.
See the Java Language Specification, section 5.1.3. Narrowing Primitive Conversion. To quote (emphasis mine):
A narrowing primitive conversion may lose information …
A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T. In addition to a possible loss of information about the magnitude of the numeric value, this may cause the sign of the resulting value to differ from the sign of the input value.
Math#…Exact…
When you want to be alerted to data loss during conversion from a long to an int, use the Math methods for exactitude. If the operation overflows, an exception is thrown. You can trap for that exception.
try
{
int i = Math.toIntExact( 165_787_121_844_687L ) ; // Convert from a `long` to an `int`.
}
catch ( ArithmeticException e )
{
// … handle conversion operation overflowing an `int` …
}
You will find similar Math#…Exact… methods for absolute value, addition, decrementing, incrementing, multiplying, negating, and subtraction.
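For example, Math.multiplyExact throws where plain * silently wraps (a minimal sketch):

```java
public class ExactDemo {
    public static void main(String[] args) {
        // Plain int multiplication wraps silently: 2^32 mod 2^32 is 0.
        System.out.println(1024 * 1024 * 1024 * 4); // 0

        // The ...Exact methods throw instead of wrapping.
        try {
            int x = Math.multiplyExact(1024 * 1024 * 1024, 4);
            System.out.println(x); // never reached
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```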
This question already has answers here:
BigInteger.pow(BigInteger)?
(9 answers)
Closed 3 years ago.
I have code that raises a number to an exponent taken from the input:
BigInteger a = new BigInteger("2", 10);
BigInteger b;
b = a.pow(9999999999);
It works when the exponent has up to 7 digits. For example:
BigInteger a = new BigInteger("2", 10);
BigInteger b;
b = a.pow(1234567);
Does my code make it possible, or is it not possible, to have 10 digits in the exponent?
I'm using JDK 1.8.
BigInteger.pow() only exists for int parameters, so you can't take the power bigger than Integer.MAX_VALUE at once.
Those numbers would also be incredibly big (as in "rapidly approaching and passing the number of particles in the observable universe"), if you could compute them, and there are very few uses for this.
Note that the "power with modulus" operation which is often used in cryptography is implemented using BigInteger.modPow() which does take BigInteger arguments and can therefore handle effectively arbitrarily large values.
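A small sketch of modPow; the modulus 1000000007 here is an arbitrary example value, not something from the question:

```java
import java.math.BigInteger;

public class ModPowDemo {
    public static void main(String[] args) {
        BigInteger base     = BigInteger.valueOf(2);
        BigInteger exponent = new BigInteger("9999999999"); // far larger than an int
        BigInteger modulus  = new BigInteger("1000000007");

        // 2^9999999999 mod 1000000007, computed without ever
        // materializing the astronomically large power itself.
        System.out.println(base.modPow(exponent, modulus));
    }
}
```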
pow's parameter is an int. The range of int is -2147483648 to 2147483647, so the answer is it depends on which 10 digits you use. If those 10 digits are 1234567890, that's fine, it's in range (though potentially you'll get a really, really big BigInteger which may push your memory limits); if they're 9999999999 as in your question, that's not fine, it's out of range.
E.g., this compiles:
int a = 1234567890;
This does not:
int b = 9999999999;
^--------------- error: integer number too large: 9999999999
Hi, so I recently saw a question structured much like this:
int a= (int) Math.pow(2,32);
System.out.println(a); //prints out Integer.MAX_VALUE
After I answered the question, it turned out I got it wrong: I answered Integer.MIN_VALUE, but the correct answer was Integer.MAX_VALUE. After further testing, I realized that any double greater than Integer.MAX_VALUE that I cast to an int just makes the int equal to Integer.MAX_VALUE.
For Example
int a = (int) ((double) Integer.MAX_VALUE+100);
System.out.println(a); //prints out Integer.MAX_VALUE
After further testing, I realized that if you try to cast a long to an int, the int seems to be assigned a seemingly random number.
So my question is: what the heck is going on? Why does the double value not overflow the integer when you cast it to an int? And why does casting a long to an int return a seemingly random number?
The logic of these conversions is part of the Java language specification, Item 5.1.3.
You can see there, that when converting from long to int, most significant bits are discarded, leaving the least significant 32 bits.
And also, that if the result of rounding a double or float is a number that is too small or too large to represent as an int (or long), the minimal or maximal representable number will be chosen.
There is no way for us here to answer "why" for a decision that has been made long ago. But this is the way the language is defined, and you can rely on it being the same in any Java environment you work in.
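Both rules can be checked side by side (a minimal sketch):

```java
public class CastRules {
    public static void main(String[] args) {
        // double -> int saturates at the int range (JLS 5.1.3).
        System.out.println((int) Math.pow(2, 32)); // 2147483647 (Integer.MAX_VALUE)
        System.out.println((int) -1e18);           // -2147483648 (Integer.MIN_VALUE)

        // long -> int truncates to the low 32 bits instead.
        System.out.println((int) 4294967296L);     // 0  (2^32: low 32 bits all zero)
        System.out.println((int) 4294967297L);     // 1
    }
}
```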
Casting a double to an int first erases everything after the decimal point, i.e. the value is rounded toward zero. If the resulting value is too large or too small to fit in an int, the cast saturates: you get Integer.MAX_VALUE or Integer.MIN_VALUE, which is why your first example prints the max value.
And casting a long to an int is not a random number: it's the rightmost 32 bits of the long number.
I have this code:
long i = 0;
while (true) {
i += 10*i + 5;
System.out.println(i);
Thread.sleep(100);
}
Why does the long i get negative after a few prints? If the range is exceeded, shouldn't an error occur?
Java doesn't throw an error if you increase a number beyond its maximum value. If you wish to have this behaviour, you can use the Math.addExact(long x, long y) method from Java 8. This method will throw an ArithmeticException when the result overflows a long.
The reason why Java doesn't throw an exception and you receive negative numbers has to do with the way numbers are stored. For a long primitive, the highest bit indicates the sign of the number (0 -> positive, 1 -> negative), while the remaining 63 bits hold the numeric value. This means that Long.MAX_VALUE, which is the biggest positive value, is stored as 0111...111 (a 0 followed by 63 ones). Since you add a number to Long.MAX_VALUE, the sign bit flips to 1 and you start receiving negative numbers. This is a numeric overflow, but Java doesn't report it as an error.
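Math.addExact and its siblings turn the silent wrap into an exception. A sketch, using the same recurrence as the question's i += 10*i + 5 but overflow-checked at every step:

```java
public class AddExactDemo {
    public static void main(String[] args) {
        long i = 0;
        try {
            while (true) {
                // Equivalent to i += 10 * i + 5, but every operation throws
                // ArithmeticException on overflow instead of wrapping.
                i = Math.addExact(i, Math.addExact(Math.multiplyExact(10L, i), 5L));
                System.out.println(i);
            }
        } catch (ArithmeticException e) {
            System.out.println("overflow: " + e.getMessage());
        }
    }
}
```

Unlike the original loop, this one terminates: the values stay positive right up to the point where the next step would exceed Long.MAX_VALUE, and then the exception fires.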
If the operation overflows, the result wraps around to the minimum value and continues from there.
No exception is thrown.
If your code can overflow, you can use a BigInteger instead.
An extract from the Math javadoc:
"The platform uses signed two's complement integer arithmetic with int and long primitive types. The developer should choose the primitive type to ensure that arithmetic operations consistently produce correct results, which in some cases means the operations will not overflow the range of values of the computation. The best practice is to choose the primitive type and algorithm to avoid overflow."
For Java 8:
"In cases where the size is int or long and overflow errors need to be detected, the methods addExact, subtractExact, multiplyExact, and toIntExact throw an ArithmeticException when the results overflow. For other arithmetic operations such as divide, absolute value, increment, decrement, and negation overflow occurs only with a specific minimum or maximum value and should be checked against the minimum or maximum as appropriate"
This question already has answers here:
Multiplication of two ints overflowing to result in a negative number
(5 answers)
Closed 9 years ago.
public class Test {
    public static void main(String[] args) {
        int sum = 0;
        for (int i = 1; i < 10; i++)
            sum = sum + i*i*i*i*i*i*i*i*i*i;
        System.out.println(sum);
    }
}
OUTPUT: 619374629
for (int i = 1; i < 10; i++)
    sum = sum + i*i*i*i*i*i*i*i*i*i*i;
System.out.println(sum);
OUTPUT:
-585353335
In the second output I thought the integer range was crossed, but why is it giving a negative number? It should give me an error. What is the reason for this behaviour?
Thanks in advance...
You have overflowed the size of a 32-bit integer.
Consider what happens when i reaches 9 in the second loop:
sum = sum + 31381059609 // 9^11, roughly 3 with 10 zeroes
But the maximum positive number that can be stored in a 32-bit integer is only about 2 billion (2 with nine zeroes).
In fact, it gets even worse! The intermediate calculations are performed with the same limited precision, so as soon as the running product i*i*i*... overflows, the remaining factors are multiplied with a weird negative number and the result is already wrong.
So the number you end up with does not seem to follow any rules of arithmetic, but in fact it makes perfect sense once you know that primitive integers have limited storage.
The solution is to use 64-bit long and hope you don't overflow THAT too, if you do then you need BigInteger.
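Doing the same sum in long arithmetic shows both the true value and exactly where the question's output comes from; the helper pow11 below is just for this sketch:

```java
public class SumDemo {
    // Helper for this sketch: i^11, computed entirely in long arithmetic.
    static long pow11(long i) {
        long p = 1;
        for (int k = 0; k < 11; k++) p *= i;
        return p;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 1; i < 10; i++) sum += pow11(i);

        System.out.println(sum);       // 42364319625: the true sum, no overflow in long
        System.out.println((int) sum); // -585353335:  its low 32 bits, the question's output
    }
}
```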
Java defines integer math as signed two's-complement arithmetic mod 2^32 (for int) and 2^64 (for long). So any time the result of an int multiplication is 2^31 or higher, it wraps around and becomes a negative number. That's just the way integer math works in Java.
Java spec
You are using a primitive type, so when the integer overflows it simply keeps the wrapped bit pattern, which here happens to be negative. If you want an exception on overflow, use the Math.addExact/Math.multiplyExact methods (Java 8+) or do the arithmetic with BigInteger.
As you predicted, you passed the integer range; the result wraps around and turns negative once the sign bit (the highest bit) gets touched.