Multiply an int in Java by itself - java

Why, if I multiply int num = 2,147,483,647 by the same int num, is it returning 1 as the result? Note that I am at the limit of the possible int values.
I already tried to catch an exception, but it still gives 1 as the result.

Before any multiplication, Java translates the ints to binary numbers. So you are actually trying to multiply 01111111111111111111111111111111 by 01111111111111111111111111111111. The mathematical result of this is
0011111111111111111111111111111100000000000000000000000000000001 (64 bits). An int can hold just 32 bits, so in fact you get 00000000000000000000000000000001, which is 1 in decimal.
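A quick way to see this for yourself (an illustrative sketch, not part of the original answer):

int num = 2147483647;                          // Integer.MAX_VALUE
long full = (long) num * num;                  // do the multiplication in 64 bits
System.out.println(Long.toBinaryString(full)); // 30 ones, 31 zeros, then a 1 (62 significant bits)
System.out.println((int) full);                // 1 -- only the low 32 bits survive the cast
System.out.println(num * num);                 // 1 -- same thing, the overflow happens silently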

In integer arithmetic, Java doesn't throw an exception when an overflow occurs. Instead, it just keeps the 32 least significant bits of the outcome, or equivalently, it "wraps around". That is, if you calculate 2147483647 + 1, the outcome is -2147483648.
2,147,483,647 squared happens to be, in binary:
11111111111111111111111111111100000000000000000000000000000001
The least significant 32 bits of the outcome are equal to the value 1.
If you want to calculate with values which don't fit in 32 bits, you have to use either long (if 64 bits are sufficient) or java.math.BigInteger (if not).
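For example, both options applied to the value from the question (a sketch; note the cast to long before multiplying, otherwise the overflow happens in int arithmetic first):

// requires: import java.math.BigInteger;
int num = 2147483647;
long asLong = (long) num * num;   // widen first, then multiply: 4611686014132420609
BigInteger asBig = BigInteger.valueOf(num).multiply(BigInteger.valueOf(num));
System.out.println(asLong);       // 4611686014132420609
System.out.println(asBig);        // 4611686014132420609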

An int cannot hold arbitrarily large values. Look here. In Java you have a dedicated class for this problem, which comes in quite handy:
import java.math.BigInteger;

public class BigIntegerDemo {
    public static void main(String[] args) {
        BigInteger b1 = new BigInteger("987654321987654321000000000"); // change it to your number
        BigInteger b2 = new BigInteger("987654321987654321000000000"); // change it to your number
        BigInteger product = b1.multiply(b2);
        BigInteger division = b1.divide(b2);
        System.out.println("product = " + product);
        System.out.println("division = " + division);
    }
}
Source: Using BigInteger in Java

The Java Language Specification spells out exactly what happens in this case.
If an integer multiplication overflows, then the result is the low-order bits of the mathematical product as represented in some sufficiently large two's-complement format. As a result, if overflow occurs, then the sign of the result may not be the same as the sign of the mathematical product of the two operand values.
This means that when you multiply two ints, you can think of the mathematical product as being computed in a format wide enough to hold it (a long is sufficient here). Then, because the result of an int multiplication is an int, only the lower 32 bits are kept.
The JLS also says:
Despite the fact that overflow, underflow, or loss of information may occur, evaluation of a multiplication operator * never throws a run-time exception.
That's why you never get an exception.
My suggestion: do the multiplication in a long (casting one operand first, so the multiplication itself doesn't overflow), and check what happens when you downcast to int. For example:
int num = 2147483647;
long result = (long) num * num;
if (result != (long) (int) result) {
    // overflow happened
}
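Alternatively (my addition, not part of the original answer), since Java 8 the standard library can do this check for you: Math.multiplyExact throws an ArithmeticException on overflow instead of wrapping silently.

int num = 2147483647;
long ok = Math.multiplyExact((long) num, (long) num); // 4611686014132420609, fits in a long
int boom = Math.multiplyExact(num, num);              // throws ArithmeticException: the product overflows int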
To really follow the arithmetic, let's work through the calculation:
((2^n) - 1) * ((2^n) - 1) =
2^(2n) - 2^n - 2^n + 1 =
2^(2n) - 2^(n+1) + 1
In your case, n = 31 (your number is 2^31 - 1). The result is 2^62 - 2^32 + 1. In bits it looks like this (split at the 32-bit boundary):
00111111111111111111111111111111 00000000000000000000000000000001
From this number, you keep only the rightmost part, which equals 1.
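You can reproduce that split in code (again just an illustrative sketch):

long product = (long) 2147483647 * 2147483647;     // 2^62 - 2^32 + 1
int high = (int) (product >>> 32);                 // upper 32 bits
int low = (int) product;                           // lower 32 bits -- what the int multiplication keeps
System.out.println(Integer.toBinaryString(high));  // 111111111111111111111111111111 (30 ones)
System.out.println(low);                           // 1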

It seems that the issue is that an int cannot hold such a large result. Based on this link from Oracle regarding the primitive types, the maximum value an int can represent is 2^31 - 1 (2,147,483,647), which is exactly the value you want to multiply by itself.
So in this case it is recommended to use the next primitive type with greater capacity; for example, you could change your "int" variables to "long", which has a range from -2^63 to 2^63 - 1 (-9223372036854775808 to 9223372036854775807).
For example:
public static void main(String[] args) {
    long num = 2147483647L;
    long total = num * num;
    System.out.println("total: " + total);
}
And the output is:
total: 4611686014132420609
I hope this can help you.
Regards.

According to this link, the maximum value of a signed Java int is 2^31 - 1, which is equal to 2,147,483,647.
So if you are already at the maximum for int and you multiply it by anything greater than 1, I would expect the result to overflow (as explained above, Java doesn't raise an error; it silently wraps around).

Related

Multiplying large numbers in Java

I am multiplying two very large numbers in Java, but the output of the multiplication seems a little strange.
Code
long a = 2539586720L;
long b = 77284752003L;
a *= b;
System.out.println(a);
a = (long) 1e12;
b = (long) 1e12;
a *= b;
System.out.println(a);
Output:
-6642854965492867616
2003764205206896640
In the first case, why is the result negative? If it is because of overflow, then how come the result of the second case is positive? Please explain this behavior.
Edit:
I am using a mod = 100000000009 operation and the result is still negative:
a = ((a % mod) * (b % mod)) % mod
The result that you get is a typical overflow issue. For a long, Java allocates 63 bits for the magnitude and the most significant bit (MSB) for the sign (0 for positive values and 1 for negative values), so 64 bits in total.
Knowing that, Long.MAX_VALUE + 1 equals -9223372036854775808, because Long.MAX_VALUE = 2^63 - 1 = 9223372036854775807 = 0x7fffffffffffffffL, so if we add 1 to it we get 0x8000000000000000L = Long.MIN_VALUE = -2^63 = -9223372036854775808. In this case the MSB switches from 0 to 1, so the result is negative, which is what you get in the first use case.
If the MSB is already 1 and a further computation overflows again, it can switch back to 0 (because only the least significant 64 bits are kept), so the result becomes positive again, which is what you get in the second use case.
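A small sketch (my illustration, not from the original answer) showing the sign bit flipping to 1 and then, after a second overflow, the result becoming non-negative again:

long max = Long.MAX_VALUE;          // 0x7fffffffffffffffL, MSB = 0
System.out.println(max + 1);        // -9223372036854775808: the overflow set the MSB to 1
System.out.println((max + 1) * 2);  // 0: a second overflow; only the low 64 bits are kept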
To avoid that you need to use BigInteger.
Yes. It is an overflow issue. The long size is 8 bytes and the range goes from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
If you want to multiply really big numbers, use BigInteger:
import java.math.BigInteger;

public static void main(String[] args) {
    BigInteger bi1 = new BigInteger("2539586720");   // or 1000000000000
    BigInteger bi2 = new BigInteger("77284752003");
    // multiply bi1 by bi2 and assign the result to bi3
    BigInteger bi3 = bi1.multiply(bi2);
    String str = bi1 + " * " + bi2 + " = " + bi3;
    System.out.println(str);
    // prints: 2539586720 * 77284752003 = 196271329845312200160
}
As per JLS 15.17.1
If an integer multiplication overflows, then the result is the low-order bits of the mathematical product as represented in some sufficiently large two's-complement format. As a result, if overflow occurs, then the sign of the result may not be the same as the sign of the mathematical product of the two operand values.
This is why you are getting negative values; it has no particular correlation with the input numbers. It is simply because a long in Java can only represent values from -2^63 to (2^63) - 1, and your result is greater than this.
In order to avoid this issue, when dealing with large number arithmetic, you should always use BigInteger. A sample code is given below
BigInteger.valueOf(123L).multiply(BigInteger.valueOf(456L));
Regarding the behavior: both of your examples are overflows. The fact that one answer is negative does not carry any special meaning. The first pair of numbers you multiplied happens to produce a long whose most significant bit is 1, while the second pair doesn't.

Why does casting a larger variable to a smaller variable result in the larger value modulo the range of the smaller type?

Recently, while going through the typecasting concept in Java, I saw that casting a larger variable to a smaller one results in the larger value modulo the range of the smaller type. Can anyone please explain in detail why this is the case, and whether it is true for any explicit type conversion?
class conversion {
    public static void main(String args[]) {
        double a = 295.04;
        int b = 300;
        byte c = (byte) a;
        byte d = (byte) b;
        System.out.println(c + " " + d);
    }
}
The above code gives 44 as the value of d, since 300 modulo 256 is 44. Please explain why this is the case, and also what happens to the value of c.
It is a design decision that goes all the way back to the C programming language, and possibly to C's antecedents too.
What happens when you convert from a larger integer type to a smaller integer type is that the top bits are lopped off.
Why? Originally (and currently) because that is what hardware integer instructions support.
The "other" logical way to do this (i.e. NOT the way that Java defines integer narrowing) would be to convert to that largest (or smallest) value representable in the smaller type; e.g.
// equivalent to real thin in real java
// b = (byte) (Math.max(Math.min(i, 127), -128))
would give +127 as the value of b. Incidentally, this is what happens when you convert a floating-point value to an integer value, and the value is too large. That is what is happening in your c example.
You also said:
The above code gives the answer of d as 44 since 300 modulo 256 is 44.
In fact, the correct calculation would be:
int d = ((b + 128) % 256) - 128;
That is because the range of the Java byte type is -128 to +127.
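You can check both views, the bit-lopping one and the shifted-modulo one, directly (an illustrative sketch for the non-negative value from the question):

int b = 300;                                  // binary 1_0010_1100
System.out.println((byte) b);                 // 44: only the low 8 bits (0010_1100) survive
System.out.println((byte) (b & 0xFF));        // 44: same thing, masking the low byte explicitly
System.out.println(((b + 128) % 256) - 128);  // 44: the calculation given above (for non-negative b)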
For completeness, the above behavior only happens in Java when the larger type is an integer type. If the larger type is a floating point type and the smaller one is an integer type, then a source value that is too large or too small (or an infinity) gets converted to the largest or smallest possible integer value for the target type; e.g.
double x = 1.0e200;
int i = (int) x; // assigns 'Integer.MAX_VALUE' to 'i'
And a NaN is converted to zero.
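For example (a short sketch covering the floating-point cases just described, plus the c value from the question):

System.out.println((byte) 295.04);                  // 39: 295.04 -> int 295 (fits) -> low 8 bits -> 39
System.out.println((int) 1.0e200);                  // 2147483647: clamped to Integer.MAX_VALUE
System.out.println((int) Double.NEGATIVE_INFINITY); // -2147483648: clamped to Integer.MIN_VALUE
System.out.println((int) Double.NaN);               // 0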
Reference:
Java 17 Language Specification: §5.1.3

Number at f(93) in the Fibonacci series has a negative value, how?

I am trying to print out the Fibonacci series up to 'N' numbers. Everything works as expected up to f(92), but when I try to get the value of f(93), the value turns out negative: "-6246583658587674878". How can this be? What is the mistake in the logic below?
public long fibo(int x) {
    long[] arr = new long[x + 1];
    arr[0] = 0;
    arr[1] = 1;
    for (int i = 2; i <= x; i++) {
        arr[i] = arr[i - 2] + arr[i - 1];
    }
    return arr[x];
}
f(91) = 4660046610375530309
f(92) = 7540113804746346429
f(93) = -6246583658587674878
Is this because of the data type? What other data type should I use for printing the Fibonacci series up to N numbers? N could be any integer in the range [0..10,000,000].
You've encountered an integer overflow:
4660046610375530309 <-- term 91
+7540113804746346429 <-- term 92
====================
12200160415121876738 <-- term 93: the sum of the previous two terms
9223372036854775807 <-- maximum value a long can store
To avoid this, use BigInteger, which can deal with an arbitrary number of digits.
Here's your implementation converted to use BigInteger:
public String fibo(int x) {
    BigInteger[] arr = new BigInteger[x + 1];
    arr[0] = BigInteger.ZERO;
    arr[1] = BigInteger.ONE;
    for (int i = 2; i <= x; i++) {
        arr[i] = arr[i - 2].add(arr[i - 1]);
    }
    return arr[x].toString();
}
Note that the return type must be String (or BigInteger), because even the modest value of 93 for x produces a result that is too large for any Java primitive to represent.
This happened because the long type overflowed. In other words: the number calculated is too big to be represented as a long, and because of the two's complement representation used for integer types, after an overflow occurs the value becomes negative. To have a better idea of what's happening, look at this code:
System.out.println(Long.MAX_VALUE);
=> 9223372036854775807 // maximum long value
System.out.println(Long.MAX_VALUE + 1);
=> -9223372036854775808 // oops, the value overflowed!
The value of fibo(93) is 12200160415121876738, which clearly is greater than the maximum value that fits in a long.
This is the way integers work in a computer program; after all, they're limited and cannot be infinite. A possible solution would be to use BigInteger to implement the method (instead of long); it's a class for representing arbitrary-precision integers in Java.
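If you would rather fail loudly than wrap around silently, Math.addExact (Java 8+) throws on overflow; a small sketch I'm adding, not part of the original answer:

long f91 = 4660046610375530309L;    // f(91)
long f92 = 7540113804746346429L;    // f(92)
long f93 = Math.addExact(f91, f92); // throws ArithmeticException: f(93) does not fit in a long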
As correctly noted in the answers above, you've experienced an overflow; however, with the Java 8 snippet below you can print the series.
Stream.iterate(new BigInteger[] {BigInteger.ZERO, BigInteger.ONE}, t -> new BigInteger[] {t[1], t[0].add(t[1])})
.limit(100)
.map(t -> t[0])
.forEach(System.out::println);

Why do these two similar pieces of code produce different results?

I've been experimenting with Python as a beginner for the past few hours. I wrote a recursive function that returns recurse(x) = x! in both Python and Java, to compare the two. The two pieces of code are essentially identical, but for some reason the Python one works, whereas the Java one does not. In Python, I wrote:
x = int(raw_input("Enter: "))

def recurse(num):
    if num != 0:
        num = num * recurse(num - 1)
    else:
        return 1
    return num

print recurse(x)
Where variable num multiplies itself by num-1 until it reaches 0, and outputs the result. In Java, the code is very similar, only longer:
import java.util.Scanner;

public class Default {
    static Scanner input = new Scanner(System.in);

    public static void main(String[] args) {
        System.out.print("Enter: ");
        int x = input.nextInt();
        System.out.print(recurse(x));
    }

    public static int recurse(int num) {
        if (num != 0) {
            num = num * recurse(num - 1);
        } else {
            return 1;
        }
        return num;
    }
}
If I enter 25, the Python code returns 1.5511 × 10^25, which is the correct answer, but the Java code returns 2,076,180,480, which is not, and I'm not sure why.
Both codes go about the same process:
Check if num is zero
If num is not zero
num = num multiplied by the recursion of num - 1
If num is zero
Return 1, ending that stack of recurse calls, and causing every returned num to begin multiplying
return num
There are no braces in Python; I thought that somehow changed things, so I removed the braces from the Java code, but it made no difference. Changing the condition (num != 0) to (num > 0) didn't change anything either. Adding an if statement to the else provided more context, but the value was still the same.
Printing the values of num at every point gives an idea of how the function goes wrong:
Python:
1
2
6
24
120
720
5040
40320
362880
3628800
39916800
479001600
6227020800
87178291200
1307674368000
20922789888000
355687428096000
6402373705728000
121645100408832000
2432902008176640000
51090942171709440000
1124000727777607680000
25852016738884976640000
620448401733239439360000
15511210043330985984000000
15511210043330985984000000
A steady increase. In Java:
1
2
6
24
120
720
5040
40320
362880
3628800
39916800
479001600
1932053504
1278945280
2004310016
2004189184
-288522240
-898433024
109641728
-2102132736
-1195114496
-522715136
862453760
-775946240
2076180480
2076180480
Not a steady increase. In fact, the function is returning negative numbers, even though num should never go below zero.
Both Python and Java codes are going about the same procedure, yet they are returning wildly different values. Why is this happening?
Two words - integer overflow
While I am not an expert in Python, I assume it expands the size of its integer type according to its needs.
In Java, however, the size of an int is fixed at 32 bits, and since int is signed, we actually have only 31 bits to represent positive numbers. Once the number you compute is bigger than the maximum, it overflows the int (that is, there is no room to represent the whole number).
While in the C language the behavior in such a case is undefined, in Java it is well defined: it just takes the least significant 4 bytes of the result.
For example:
System.out.println(Integer.MAX_VALUE + 1);
// Integer.MAX_VALUE = 0x7fffffff
results in:
-2147483648
// 0x7fffffff + 1 = 0x80000000
Edit
Just to make it clearer, here is another example. The following code:
int a = 0x12345678;
int b = 0x12345678;
System.out.println("a*b as int multiplication (overflown) [DECIMAL]: " + (a*b));
System.out.println("a*b as int multiplication (overflown) [HEX]: 0x" + Integer.toHexString(a*b));
System.out.println("a*b as long multiplication (overflown) [DECIMAL]: " + ((long)a*b));
System.out.println("a*b as long multiplication (overflown) [HEX]: 0x" + Long.toHexString((long)a*b));
outputs:
a*b as int multiplication (overflown) [DECIMAL]: 502585408
a*b as int multiplication (overflown) [HEX]: 0x1df4d840
a*b as long multiplication (overflown) [DECIMAL]: 93281312872650816
a*b as long multiplication (overflown) [HEX]: 0x14b66dc1df4d840
And you can see that the int result (the second output) is the least significant 4 bytes of the long result (the fourth output).
Unlike Java, Python has built-in support for long integers of unlimited precision. In Java, an int is limited to 32 bits and will overflow.
As others already wrote, you get an overflow; the numbers simply won't fit within Java's int representation. Python has built-in bignum support, whereas Java's primitive types do not.
Try some smaller values and you will see that your Java code works fine.
Java's int range
int
4 bytes, signed (two's complement). -2,147,483,648 to 2,147,483,647. Like all numeric types ints may be cast into other numeric types (byte, short, long, float, double). When lossy casts are done (e.g. int to byte) the conversion is done modulo the length of the smaller type.
So the range of int is limited.
The problem is very simple: in Java the maximum value of an int is 2147483647. You can print it with System.out.println(Integer.MAX_VALUE);
and the minimum with System.out.println(Integer.MIN_VALUE);
Because in the Java version you store the number as an int, which is 32-bit. Consider the biggest (unsigned) number you can store with two bits in binary: 11, which is the number 3 in decimal. The biggest number that can be stored in four bits in binary is 1111, which is the number 15 in decimal. A 32-bit (signed) number cannot store anything bigger than 2,147,483,647. When you try to store a number bigger than this, it suddenly wraps back around and starts counting up from the negative numbers. This is called overflow.
If you want to try storing bigger numbers, try long.
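For completeness, here is a BigInteger version of the same recursion, kept close to the original shape (my sketch, not code from any of the answers above):

import java.math.BigInteger;

public static BigInteger recurse(int num) {
    if (num == 0) {
        return BigInteger.ONE;                        // base case: 0! = 1
    }
    return BigInteger.valueOf(num).multiply(recurse(num - 1));
}
// recurse(25) returns 15511210043330985984000000, matching the Python output above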

Are these two approaches to the smallest Double values in Java equivalent?

Alternative wording: When will adding Double.MIN_VALUE to a double in Java not result in a different Double value? (See Jon Skeet's comment below)
This SO question about the minimum Double value in Java has some answers which seem to me to be equivalent. Jon Skeet's answer no doubt works but his explanation hasn't convinced me how it is different from Richard's answer.
Jon's answer uses the following:
double d = // your existing value;
long bits = Double.doubleToLongBits(d);
bits++;
d = Double.longBitsToDouble(bits);
Richard's answer mentions the JavaDoc for Double.MIN_VALUE:
A constant holding the smallest positive nonzero value of type double, 2^-1074. It is equal to the hexadecimal floating-point literal 0x0.0000000000001P-1022 and also equal to Double.longBitsToDouble(0x1L).
My question is, how is Double.longBitsToDouble(0x1L) different from Jon's bits++?
Jon's comment focuses on the basic floating-point issue:
There's a difference between adding Double.MIN_VALUE to a double value, and incrementing the bit pattern representing a double. They're entirely different operations, due to the way that floating point numbers are stored. If you try to add a very little number to a very big number, the difference may well be so small that the closest result is the same as the original. Adding 1 to the current bit pattern, however, will always change the corresponding floating point value, by the smallest possible value which is visible at that scale.
I don't see any difference between Jon's approach of incrementing the bits ("bits++") and adding Double.MIN_VALUE. When will they produce different results?
I wrote the following code to test the differences. Maybe someone could provide more/better sample double numbers or use a loop to find a number where there is a difference.
double d = 3.14159269123456789; // sample double
long bits = Double.doubleToLongBits(d);
long bitsBefore = bits;
bits++;
long bitsAfter = bits;
long bitsDiff = bitsAfter - bitsBefore;
long bitsMinValue = Double.doubleToLongBits(Double.MIN_VALUE);
long bitsSmallValue = Double.doubleToLongBits(Double.longBitsToDouble(0x1L));

if (bitsMinValue == bitsSmallValue) {
    System.out.println("Double.doubleToLongBits(Double.longBitsToDouble(0x1L)) is same as Double.doubleToLongBits(Double.MIN_VALUE)");
}
if (bitsDiff == bitsMinValue) {
    System.out.println("bits++ increments the same amount as Double.MIN_VALUE");
}
if (bitsDiff == bitsMinValue) {
    d = d + Double.MIN_VALUE;
    System.out.println("Using Double.MIN_VALUE");
} else {
    d = Double.longBitsToDouble(bits);
    System.out.println("Using doubleToLongBits/bits++");
}
System.out.println("bits before: " + bitsBefore);
System.out.println("bits after: " + bitsAfter);
System.out.println("bits diff: " + bitsDiff);
System.out.println("bits Min value: " + bitsMinValue);
System.out.println("bits Small value: " + bitsSmallValue);
OUTPUT:
Double.doubleToLongBits(Double.longBitsToDouble(0x1L)) is same as Double.doubleToLongBits(Double.MIN_VALUE)
bits++ increments the same amount as Double.MIN_VALUE
Using doubleToLongBits/bits++
bits before: 4614256656636814345
bits after: 4614256656636814346
bits diff: 1
bits Min value: 1
bits Small value: 1
Okay, let's imagine it this way, sticking with decimal numbers. Suppose you have a floating decimal point type which allows you to represent 5 decimal digits, and a number between 0 and 3 for the exponent, to multiply the result by 1, 10, 100 or 1000.
So the smallest non-zero value is just 1 (i.e. mantissa=00001, exponent=0). The largest value is 99999000 (mantissa=99999, exponent=3).
Now, what happens when you add 1 to 50000000? You can't represent 50000001... the next representable number after 50000000 is 50001000. So if you try to add them together, the result is just going to be the closest representable value to the "true" result, which is still 50000000. That's like adding Double.MIN_VALUE to a large double.
My version (converting to bits, incrementing and then converting back) is like taking that 50000000, splitting into mantissa and exponent (m=50000, e=3) then incrementing it the smallest amount, to (m=50001, e=3) and then reassembling to 50001000.
Do you see how they're different?
Now here's a concrete example:
public class Test {
    public static void main(String[] args) {
        double before = 100000000000000d;
        double after = before + Double.MIN_VALUE;
        System.out.println(before == after);

        long bits = Double.doubleToLongBits(before);
        bits++;
        double afterBits = Double.longBitsToDouble(bits);
        System.out.println(before == afterBits);
        System.out.println(afterBits - before);
    }
}
This tries both approaches with a large number. The output is:
true
false
0.015625
Going through the output, that means:
Adding Double.MIN_VALUE didn't have any effect
Incrementing the bit did have an effect
The difference between afterBits and before is 0.015625, which is much bigger than Double.MIN_VALUE. No wonder the simple addition had no effect!
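That 0.015625 is exactly one unit in the last place (ulp) at that magnitude, which you can query directly (a small addition of mine for illustration):

System.out.println(Math.ulp(100000000000000d)); // 0.015625: the gap between adjacent doubles near 1e14
System.out.println(Double.MIN_VALUE);           // 4.9E-324: far too small to bridge that gap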
It's exactly as Jon said:
"If you try to add a very little
number to a very big number, the
difference may well be so small that
the closest result is the same as the
original."
For example:
// True:
(Double.MAX_VALUE + Double.MIN_VALUE) == Double.MAX_VALUE
// False:
Double.longBitsToDouble(Double.doubleToLongBits(Double.MAX_VALUE) + 1) == Double.MAX_VALUE
MIN_VALUE is the smallest representable positive double, but that certainly does not imply that adding it to an arbitrary double results in an unequal one.
In contrast, adding 1 to the underlying bits results in a new bit pattern, and thus does result in an unequal double.
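As a side note (my addition, not from the original answers): if you want "the next representable double" without doing the bit twiddling yourself, the standard library already provides Math.nextUp and Math.nextAfter, which handle the edge cases (sign, zero, NaN, infinity) for you.

double d = 100000000000000d;
double next = Math.nextUp(d);   // for positive finite d, same as longBitsToDouble(doubleToLongBits(d) + 1)
System.out.println(next - d);                  // 0.015625, one ulp at this magnitude
System.out.println(d + Double.MIN_VALUE == d); // true: the addition is absorbed by rounding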
