I am multiplying two very large numbers in Java, but the output of the multiplication seems a little strange.
Code
long a = 2539586720L;
long b = 77284752003L;
a*=b;
System.out.println(a);
a=(long)1e12;
b=(long)1e12;
a*=b;
System.out.println(a);
Output:
-6642854965492867616
2003764205206896640
In the first case, why is the result negative? If it's because of overflow, then how come the result of the second is positive? Please explain this behavior.
Edit:
I am using mod = 100000000009 and taking the product modulo it, but the result is still negative?
Code
a = ((a % mod) * (b % mod)) % mod;
The result you get is an overflow issue. For a long, Java allocates 63 bits for the magnitude and the most significant bit (MSB) for the sign (0 for positive values, 1 for negative values), so 64 bits in total.
Knowing that, Long.MAX_VALUE + 1 equals -9223372036854775808, because Long.MAX_VALUE = 2^63 - 1 = 9223372036854775807 = 0x7fffffffffffffffL; if we add 1 to it, we get 0x8000000000000000L = Long.MIN_VALUE = -2^63 = -9223372036854775808. The MSB switches from 0 to 1, so the result is negative, which is what happens in your first case.
If the MSB is already 1 and a further computation overflows again, it can switch back to 0 (because only the low-order 64 bits are kept), so the result becomes positive, which is what happens in your second case.
To avoid that you need to use BigInteger.
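To see the MSB flip described above in action, a minimal sketch:
long max = Long.MAX_VALUE;                        // 0x7fffffffffffffffL
System.out.println(max + 1);                      // -9223372036854775808: the MSB flips to 1
System.out.println(Long.toBinaryString(max + 1)); // a 1 followed by 63 zeros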
Yes, it is an overflow issue. A long is 8 bytes, and its range goes from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
If you want to multiply really big numbers, use BigInteger:
import java.math.BigInteger;

public class BigMultiply {
    public static void main(String[] args) {
        BigInteger bi1 = new BigInteger("2539586720"); // or 1000000000000
        BigInteger bi2 = new BigInteger("77284752003");
        // multiply bi1 with bi2 and assign the result to bi3
        BigInteger bi3 = bi1.multiply(bi2);
        System.out.println(bi1 + " * " + bi2 + " = " + bi3);
        // Multiplication result is 2539586720 * 77284752003 = 196271329845312200160
    }
}
As per JLS 15.17.1
If an integer multiplication overflows, then the result is the low-order bits of the mathematical product as represented in some sufficiently large two's-complement format. As a result, if overflow occurs, then the sign of the result may not be the same as the sign of the mathematical product of the two operand values.
This is why you are getting negative values; it has no special correlation with the input numbers. It happens because a long in Java can only represent values from -2^63 to 2^63 - 1, and your result is greater than that.
In order to avoid this issue, when dealing with large number arithmetic, you should always use BigInteger. A sample code is given below
BigInteger.valueOf(123L).multiply(BigInteger.valueOf(456L));
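Applied to the numbers from the question, and printing the result (a sketch):
BigInteger product = BigInteger.valueOf(2539586720L)
        .multiply(BigInteger.valueOf(77284752003L));
System.out.println(product); // 196271329845312200160, as computed above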
Regarding the behavior: both examples are overflows. The fact that one answer is negative carries no special meaning. The first pair of numbers you multiplied happens to produce a long whose most significant bit is 1, while the second pair doesn't.
We can easily get random floating point numbers within a desired range [X,Y) (note that X is inclusive and Y is exclusive) with the function listed below since Math.random() (and most pseudorandom number generators, AFAIK) produce numbers in [0,1):
function randomInRange(min, max) {
return Math.random() * (max-min) + min;
}
// Notice that we can get "min" exactly but never "max".
How can we get a random number in a desired range inclusive to both bounds, i.e. [X,Y]?
I suppose we could "increment" our value from Math.random() (or equivalent) by "rolling" the bits of an IEEE 754 double-precision float to put the maximum possible value at 1.0 exactly, but that seems like a pain to get right, especially in languages poorly suited for bit manipulation. Is there an easier way?
(As an aside, why do random number generators produce numbers in [0,1) instead of [0,1]?)
[Edit] Please note that I have no need for this and I am fully aware that the distinction is pedantic. Just being curious and hoping for some interesting answers. Feel free to vote to close if this question is inappropriate.
I believe there is a much better solution, but this one should work :)
function randomInRange(min, max) {
return Math.random() < 0.5 ? ((1-Math.random()) * (max-min) + min) : (Math.random() * (max-min) + min);
}
First off, there's a problem in your code: Try randomInRange(0,5e-324) or just enter Math.random()*5e-324 in your browser's JavaScript console.
Even without overflow/underflow/denorms, it's difficult to reason reliably about floating point ops. After a bit of digging, I can find a counterexample:
>>> a=1.0
>>> b=2**-54
>>> rand=a-2*b
>>> a
1.0
>>> b
5.551115123125783e-17
>>> rand
0.9999999999999999
>>> (a-b)*rand+b
1.0
It's easier to explain why this happens with a = 2^53 and b = 0.5: 2^53 - 1 is the next representable number down. The default rounding mode ("round to nearest even") rounds 2^53 - 0.5 up (because 2^53 is "even" [LSB = 0] and 2^53 - 1 is "odd" [LSB = 1]), so you subtract b and get 2^53, multiply by rand to get 2^53 - 1, and add b to get 2^53 again.
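That walk-through can be checked directly in Java (a sketch; Math.nextDown(1.0) is the largest double below 1, i.e. 1 - 2^-53):
double a = 9007199254740992d;     // 2^53
double b = 0.5;
double rand = Math.nextDown(1.0); // prints as 0.9999999999999999
// (a - b) rounds up to 2^53 (round half to even), times rand is exactly 2^53 - 1,
// and adding b rounds back up to 2^53, so the "exclusive" max is reached:
System.out.println((a - b) * rand + b == a); // true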
To answer your second question: because the underlying PRNG almost always generates a random integer in the interval [0, 2^n - 1], i.e. it generates random bits. It's very easy to pick a suitable n (the bits of precision in your floating point representation), divide by 2^n, and get a predictable distribution. Note that there are some numbers in [0,1) that you will never generate using this method (anything in (0, 2^-53) with IEEE doubles).
It also means that you can do a[Math.floor(Math.random()*a.length)] and not worry about overflow (homework: In IEEE binary floating point, prove that b < 1 implies a*b < a for positive integer a).
The other nice thing is that you can think of each random output x as representing an interval [x, x + 2^-53) (the not-so-nice thing is that the average value returned is slightly less than 0.5). If you return in [0,1], do you return the endpoints with the same probability as everything else, or should they only have half the probability, because they only represent half the interval of everything else?
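For illustration, here is roughly how that construction looks in Java (a sketch, assuming a 53-bit significand; nextUnit is an illustrative name, not a library method):
import java.util.concurrent.ThreadLocalRandom;

// Draw a 53-bit integer uniformly, then divide by 2^53:
// uniform in [0,1) with step 2^-53, and 1.0 is never produced.
static double nextUnit() {
    long bits = ThreadLocalRandom.current().nextLong(1L << 53); // in [0, 2^53 - 1]
    return bits / (double) (1L << 53);
}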
To answer the simpler question of returning a number in [0,1], the method below effectively generates an integer in [0, 2^n] (by generating an integer in [0, 2^(n+1) - 1] and throwing it away if it's too big) and divides by 2^n:
function randominclusive() {
// Generate a random "top bit". Is it set?
while (Math.random() >= 0.5) {
// Generate the rest of the random bits. Are they zero?
// If so, then we've generated 2^n, and dividing by 2^n gives us 1.
if (Math.random() == 0) { return 1.0; }
// If not, generate a new random number.
}
// If the top bits are not set, just divide by 2^n.
return Math.random();
}
The comments imply base 2, but I think the assumptions are thus:
0 and 1 should be returned equiprobably (i.e. the Math.random() doesn't make use of the closer spacing of floating point numbers near 0).
Math.random() >= 0.5 with probability 1/2 (should be true for even bases)
The underlying PRNG is good enough that we can do this.
Note that random numbers are always generated in pairs: the one in the while (a) is always followed by either the one in the if or the one at the end (b). It's fairly easy to verify that it's sensible by considering a PRNG that returns either 0 or 0.5:
a=0 b=0 : return 0
a=0 b=0.5: return 0.5
a=0.5 b=0 : return 1
a=0.5 b=0.5: loop
Problems:
The assumptions might not be true. In particular, a common PRNG is to take the top 32 bits of a 48-bit LCG (Firefox and Java do this). To generate a double, you take 53 bits from two consecutive outputs and divide by 2^53, but some outputs are impossible (you can't generate 2^53 outputs with 48 bits of state!). I suspect some implementations never return 0 (assuming single-threaded access), but I don't feel like checking Java's implementation right now.
Math.random() is called twice for every potential output as a consequence of needing the extra bit, but this places more constraints on the PRNG (requiring us to reason about four consecutive outputs of the above LCG).
Math.random() is called on average about four times per output. A bit slow.
It throws away results deterministically (assuming single-threaded access), so is pretty much guaranteed to reduce the output space.
My solution to this problem has always been to use the following in place of your upper bound.
Math.nextAfter(upperBound,upperBound+1)
or
upperBound + Double.MIN_VALUE
So your code would look like this:
double myRandomNum = Math.random() * Math.nextAfter(upperBound,upperBound+1) + lowerBound;
or
double myRandomNum = Math.random() * (upperBound + Double.MIN_VALUE) + lowerBound;
This simply increments your upper bound by the smallest double (Double.MIN_VALUE) so that your upper bound will be included as a possibility in the random calculation.
This is a good way to go about it because it does not skew the probabilities in favor of any one number.
The only case this wouldn't work is where your upper bound is equal to Double.MAX_VALUE
Just pick your half-open interval slightly bigger, so that your chosen closed interval is a subset. Then, keep generating the random variable until it lands in said closed interval.
Example: If you want something uniform in [3,8], then repeatedly regenerate a uniform random variable in [3,9) until it happens to land in [3,8].
function randomInRangeInclusive(min,max) {
var ret;
for (;;) {
ret = min + ( Math.random() * (max-min) * 1.1 );
if ( ret <= max ) { break; }
}
return ret;
}
Note: The amount of times you generate the half-open R.V. is random and potentially infinite, but you can make the expected number of calls otherwise as close to 1 as you like, and I don't think there exists a solution that doesn't potentially call infinitely many times.
Given the "extremely large" number of values between 0 and 1, does it really matter? The chances of actually hitting 1 are tiny, so it's very unlikely to make a significant difference to anything you're doing.
What would be a situation where you would NEED a floating point value to be inclusive of the upper bound? For integers I understand, but for a float, the difference between inclusive and exclusive is what, like 1.0e-32?
Think of it this way. If you imagine that floating-point numbers have arbitrary precision, the chances of getting exactly min are zero. So are the chances of getting max. I'll let you draw your own conclusion on that.
This 'problem' is equivalent to getting a random point on the real line between 0 and 1. There is no 'inclusive' and 'exclusive'.
The question is akin to asking, what is the floating point number right before 1.0? There is such a floating point number, but it is one in 2^24 (for an IEEE float) or one in 2^53 (for a double).
The difference is negligible in practice.
private static double random(double min, double max) {
final double r = Math.random();
return (r >= 0.5d ? 1.5d - r : r) * (max - min) + min;
}
Math.round() will help to include the bound value. If you have 0 <= value < 1 (1 is exclusive), then Math.round(value * 100) / 100 returns 0 <= value <= 1 (1 is inclusive). A note here: the value now has only 2 digits after the decimal point. If you want 3 digits, try Math.round(value * 1000) / 1000 and so on. The following function takes one more parameter, the number of digits after the decimal point, which I call precision:
function randomInRange(min, max, precision) {
return Math.round(Math.random() * Math.pow(10, precision)) /
Math.pow(10, precision) * (max - min) + min;
}
How about this?
function randomInRange(min, max){
var n = Math.random() * (max - min + 0.1) + min;
return n > max ? randomInRange(min, max) : n;
}
If you get stack overflow on this I'll buy you a present.
--
EDIT: never mind about the present. I got wild with:
randomInRange(0, 0.0000000000000000001)
and got stack overflow.
I am fairly inexperienced, so I am also looking for solutions.
This is my rough thought:
Random number generators produce numbers in [0,1) instead of [0,1] because [0,1) is a unit interval that can be followed by [1,2) and so on without overlapping.
For random[x, y], you can do this:
float randomInclusive(x, y){
float MIN = smallest_value_above_zero;
float result;
do{
result = random(x, (y + MIN));
} while(result > y);
return result;
}
Here every value in [x, y] has the same probability of being picked, and you can now reach y.
Generating a "uniform" floating-point number in a range is non-trivial. For example, the common practice of multiplying or dividing a random integer by a constant, or of scaling a "uniform" floating-point number to the desired range, has the disadvantage that not all numbers a floating-point format can represent in the range can be covered this way, and it may have subtle bias problems. These problems are discussed in detail in "Generating Random Floating-Point Numbers by Dividing Integers: a Case Study" by F. Goualard.
Just to show how non-trivial the problem is, the following pseudocode generates a random "uniform-behaving" floating-point number in the closed interval [lo, hi], where the number is of the form FPSign * FPSignificand * FPRADIX^FPExponent. The pseudocode below was reproduced from my section on floating-point number generation. Note that it works for any precision and any base (including binary and decimal) of floating-point numbers.
METHOD RNDRANGE(lo, hi)
losgn = FPSign(lo)
hisgn = FPSign(hi)
loexp = FPExponent(lo)
hiexp = FPExponent(hi)
losig = FPSignificand(lo)
hisig = FPSignificand(hi)
if lo > hi: return error
if losgn == 1 and hisgn == -1: return error
if losgn == -1 and hisgn == 1
// Straddles negative and positive ranges
// NOTE: Changes negative zero to positive
mabs = max(abs(lo),abs(hi))
while true
ret=RNDRANGE(0, mabs)
neg=RNDINT(1)
if neg==0: ret=-ret
if ret>=lo and ret<=hi: return ret
end
end
if lo == hi: return lo
if losgn == -1
// Negative range
return -RNDRANGE(abs(lo), abs(hi))
end
// Positive range
expdiff=hiexp-loexp
if loexp==hiexp
// Exponents are the same
// NOTE: Automatically handles
// subnormals
s=RNDINTRANGE(losig, hisig)
return s*1.0*pow(FPRADIX, loexp)
end
while true
ex=hiexp
while ex>MINEXP
v=RNDINTEXC(FPRADIX)
if v==0: ex=ex-1
else: break
end
s=0
if ex==MINEXP
// Has FPPRECISION or fewer digits
// and so can be normal or subnormal
s=RNDINTEXC(pow(FPRADIX,FPPRECISION))
else if FPRADIX != 2
// Has FPPRECISION digits
s=RNDINTEXCRANGE(
pow(FPRADIX,FPPRECISION-1),
pow(FPRADIX,FPPRECISION))
else
// Has FPPRECISION digits (bits), the highest
// of which is always 1 because it's the
// only nonzero bit
sm=pow(FPRADIX,FPPRECISION-1)
s=RNDINTEXC(sm)+sm
end
ret=s*1.0*pow(FPRADIX, ex)
if ret>=lo and ret<=hi: return ret
end
END METHOD
Why, if I multiply int num = 2,147,483,647 by the same int num, is it returning 1 as the result? Note that I am at the limit of the possible int values.
I already tried to catch an exception, but it still gives 1 as the result.
Before any multiplication, Java translates the ints to binary numbers. So you are actually trying to multiply 01111111111111111111111111111111 by 01111111111111111111111111111111. The result of this is
11111111111111111111111111111100000000000000000000000000000001.
An int can hold just 32 bits, so in fact you get 00000000000000000000000000000001, which is 1 in decimal.
In integer arithmetic, Java doesn't throw an exception when an overflow occurs. Instead, it just keeps the 32 least significant bits of the outcome, or equivalently, it "wraps around". That is, if you calculate 2147483647 + 1, the outcome is -2147483648.
2,147,483,647 squared happens to be, in binary:
11111111111111111111111111111100000000000000000000000000000001
The least significant 32 bits of the outcome are equal to the value 1.
If you want to calculate with values which don't fit in 32 bits, you have to use either long (if 64 bits are sufficient) or java.math.BigInteger (if not).
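The key is where the widening happens; a minimal sketch of the difference:
int num = 2147483647;
System.out.println(num * num);        // 1: multiplied in 32 bits, wraps before printing
System.out.println((long) num * num); // 4611686014132420609: widened to long first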
int cannot handle such large values. Look here. In Java you have a class made exactly for this problem, which comes in quite handy:
import java.math.BigInteger;
public class BigIntegerDemo {
public static void main(String[] args) {
BigInteger b1 = new BigInteger("987654321987654321000000000"); //change it to your number
BigInteger b2 = new BigInteger("987654321987654321000000000"); //change it to your number
BigInteger product = b1.multiply(b2);
BigInteger division = b1.divide(b2);
System.out.println("product = " + product);
System.out.println("division = " + division);
}
}
Source: Using BigInteger in Java
The Java Language Specification spells out exactly what happens in this case.
If an integer multiplication overflows, then the result is the low-order bits of the mathematical product as represented in some sufficiently large two's-complement format. As a result, if overflow occurs, then the sign of the result may not be the same as the sign of the mathematical product of the two operand values.
This means that when you multiply two ints, the mathematical product is conceptually computed in a format large enough to hold it. Then, because you assign it to an int variable, only the low-order 32 bits are kept.
The JLS also says:
Despite the fact that overflow, underflow, or loss of information may occur, evaluation of a multiplication operator * never throws a run-time exception.
That's why you never get an exception.
My guess: store the result in a long, and check what happens if you downcast to int. For example:
int num = 2147483647;
long result = (long) num * num; // widen before multiplying, otherwise the int overflow happens first
if (result != (long) ((int) result)) {
    // overflow happened
}
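Alternatively, since Java 8 there is Math.multiplyExact, which throws instead of silently wrapping (a sketch):
int num = 2147483647;
try {
    int product = Math.multiplyExact(num, num); // throws ArithmeticException on int overflow
    System.out.println(product);
} catch (ArithmeticException e) {
    System.out.println("overflow: use long or BigInteger"); // this branch runs here
}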
To really follow the arithmetic, let's trace the calculation:
((2^n)-1) * ((2^n)-1) =
2^(2n) - 2^n - 2^n + 1 =
2^(2n) - 2^(n+1) + 1
In your case, n=31 (your number is 2^31 - 1). The result is 2^62 - 2^32 + 1. In bits it looks like this (split at the 32-bit boundary):
00111111111111111111111111111111 00000000000000000000000000000001
From this number, you keep only the rightmost part, which equals 1.
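You can verify this split directly (a sketch):
long product = (long) 2147483647 * 2147483647;
System.out.println(Long.toBinaryString(product)); // 30 ones, 31 zeros, then a single 1
System.out.println((int) product);                // 1: only the low 32 bits survive the cast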
It seems that the issue is that int cannot handle such a large value. Based on this link from Oracle regarding the primitive types, the maximum value allowed is 2^31 - 1 (2,147,483,647), which is exactly the value you want to multiply by itself.
So in this case it is recommended to use the next primitive type with greater capacity; for example, you could change your "int" variables to "long", which has a bigger range, from -2^63 to 2^63 - 1 (-9223372036854775808 to 9223372036854775807).
For example:
public static void main(String[] args) {
long num = 2147483647L;
long total = num * num;
System.out.println("total: " + total);
}
And the output is:
total: 4611686014132420609
I hope this can help you.
Regards.
According to this link, a signed Java 'int' tops out at 2^31 - 1, which is equal to 2,147,483,647.
So you are already at the max for int; if you multiply it by anything greater than 1, the mathematical result no longer fits in an int, and it wraps around instead of raising an error.
I have a simple program:
public class Mathz {
static int i = 1;
public static void main(String[] args) {
while (true){
i = i + i;
System.out.println(i);
}
}
}
When I run this program, all I see is 0 for i in my output. I would have expected the first time round we would have i = 1 + 1, followed by i = 2 + 2, followed by i = 4 + 4 etc.
Is this due to the fact that as soon as we try to re-declare i on the left hand-side, its value gets reset to 0?
If anyone can point me into the finer details of this that would be great.
Change the int to long and it seems to be printing numbers as expected. I'm surprised at how fast it hits the max 32-bit value!
Introduction
The problem is integer overflow. If it overflows, it goes back to the minimum value and continues from there. If it underflows, it goes back to the maximum value and continues from there. Think of a car's odometer; it's a mechanical overflow, but it's still a good way to picture it.
In an odometer, the max digit is 9, so going beyond the maximum means 9 + 1, which carries over and gives a 0; however, there is no higher digit to change to a 1, so the counter resets to zero. You get the idea - "integer overflow" comes to mind now.
The largest decimal literal of type int is 2147483647 (2^31 - 1). All decimal literals from 0 to 2147483647 may appear anywhere an int literal may appear, but the literal 2147483648 may appear only as the operand of the unary negation operator -.
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
Thus, 2147483647 + 1 overflows and wraps around to -2147483648. Hence int i = 2147483647 + 1 overflows, and the result isn't equal to 2147483648. Also, you say "it always prints 0". It does not, as shown at http://ideone.com/WHrQIW. Below, these eight numbers show the point at which it pivots and overflows; it then starts to print 0s. Also, don't be surprised by how fast it calculates; the machines of today are rapid.
268435456
536870912
1073741824
-2147483648
0
0
0
0
Why integer overflow "wraps around"
Original PDF
The issue is due to integer overflow.
In 32-bit two's-complement arithmetic:
i does indeed start out having power-of-two values, but then overflow behavior starts once you get to 2^30:
2^30 + 2^30 = -2^31
-2^31 + -2^31 = 0
...in int arithmetic, since it's essentially arithmetic mod 2^32.
No, it does not print only zeros.
Change it to this and you will see what happens.
int k = 50;
while (true){
i = i + i;
System.out.println(i);
k--;
if (k<0) break;
}
What happens is called overflow.
static int i = 1;
public static void main(String[] args) throws InterruptedException {
while (true){
i = i + i;
System.out.println(i);
Thread.sleep(100);
}
}
Output:
2
4
8
16
32
64
...
1073741824
-2147483648
0
0
Once the sum exceeds Integer.MAX_VALUE it wraps around, and -2147483648 + -2147483648 wraps to 0, so i stays 0 from then on.
Since I don't have enough reputation, I cannot post the picture of the output for the same program in C with controlled output; you can try it yourself and see that it actually prints 32 times, and then, as explained, due to overflow i = 1073741824 + 1073741824 changes to -2147483648, and one further addition goes out of the range of int and turns to zero.
#include <stdio.h>
#include <conio.h>

int main()
{
    static int i = 1;
    while (1) {
        i = i + i;
        printf("\n%d", i);
        _getch();
    }
    return 0;
}
The value of i is stored in memory using a fixed quantity of binary digits. When a number needs more digits than are available, only the lowest digits are stored (the highest digits get lost).
Adding i to itself is the same as multiplying i by two. Just like multiplying a number by ten in decimal notation can be performed by sliding each digit to the left and putting a zero on the right, multiplying a number by two in binary notation can be performed the same way. This adds one digit on the right, so a digit gets lost on the left.
Here the starting value is 1, so if we use 8 digits to store i (for example),
after 0 iterations, the value is 00000001
after 1 iteration , the value is 00000010
after 2 iterations, the value is 00000100
and so on, until the final non-zero step
after 7 iterations, the value is 10000000
after 8 iterations, the value is 00000000
No matter how many binary digits are allocated to store the number, and no matter what the starting value is, eventually all of the digits will be lost as they are pushed off to the left. After that point, continuing to double the number will not change the number - it will still be represented by all zeroes.
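The 8-digit illustration above can be reproduced in Java with byte, which is 8 bits (a minimal sketch):
byte b = 1;
for (int n = 0; n < 10; n++) {
    System.out.println(b); // 1, 2, 4, ..., 64, -128, 0, 0: the set bit slides left and falls off
    b += b;                // the compound assignment casts the int sum back to byte
}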
It is correct, but after 31 iterations 1073741824 + 1073741824 doesn't calculate correctly (it overflows), and after that only 0 is printed.
You can refactor to use BigInteger, so your infinite loop will work correctly.
import java.math.BigInteger;

public class Mathz {
    static BigInteger i = new BigInteger("1");

    public static void main(String[] args) {
        while (true) {
            i = i.add(i);
            System.out.println(i);
        }
    }
}
For debugging such cases it is good to reduce the number of iterations in the loop. Use this instead of your while(true):
for(int r = 0; r<100; r++)
You can then see that it starts with 2 and is doubling the value until it is causing an overflow.
I'll use an 8-bit number for illustration because it can be completely detailed in a short space. Hex numbers begin with 0x, while binary numbers begin with 0b.
The max value for an 8-bit unsigned integer is 255 (0xFF or 0b11111111).
If you add 1, you would typically expect to get: 256 (0x100 or 0b100000000).
But since that's too many bits (9), that's over the max, so the first part just gets dropped, leaving you with 0 effectively (0x(1)00 or 0b(1)00000000, but with the 1 dropped).
So when your program runs, you get:
1 = 0x01 = 0b1
2 = 0x02 = 0b10
4 = 0x04 = 0b100
8 = 0x08 = 0b1000
16 = 0x10 = 0b10000
32 = 0x20 = 0b100000
64 = 0x40 = 0b1000000
128 = 0x80 = 0b10000000
256 = 0x00 = 0b00000000 (wraps to 0)
0 + 0 = 0 = 0x00 = 0b00000000
0 + 0 = 0 = 0x00 = 0b00000000
0 + 0 = 0 = 0x00 = 0b00000000
...
The largest decimal literal of type int is 2147483648 (2^31). All decimal literals from 0 to 2147483647 may appear anywhere an int literal may appear, but the literal 2147483648 may appear only as the operand of the unary negation operator -.
If an integer addition overflows, then the result is the low-order bits of the mathematical sum as represented in some sufficiently large two's-complement format. If overflow occurs, then the sign of the result is not the same as the sign of the mathematical sum of the two operand values.
I've been experimenting with Python as a beginner for the past few hours. I wrote a recursive function that returns recurse(x) as x! in Python and in Java, to compare the two. The two pieces of code are identical, but for some reason the Python one works, whereas the Java one does not. In Python, I wrote:
x = int(raw_input("Enter: "))
def recurse(num):
if num != 0:
num = num * recurse(num-1)
else:
return 1
return num
print recurse(x)
Where variable num multiplies itself by num-1 until it reaches 0, and outputs the result. In Java, the code is very similar, only longer:
import java.util.Scanner;

public class Default {
    static Scanner input = new Scanner(System.in);
public static void main(String[] args){
System.out.print("Enter: ");
int x = input.nextInt();
System.out.print(recurse(x));
}
public static int recurse(int num){
if(num != 0){
num = num * recurse(num - 1);
} else {
return 1;
}
return num;
}
}
If I enter 25, the Python code returns 1.5511×10^25, which is the correct answer, but the Java code returns 2,076,180,480, which is not, and I'm not sure why.
Both codes go about the same process:
Check if num is zero
If num is not zero
num = num multiplied by the recursion of num - 1
If num is zero
Return 1, ending that stack of recurse calls, and causing every returned num to begin multiplying
return num
There are no brackets in Python; I thought that somehow changed things, so I removed the brackets from the Java code, but nothing changed. Changing the boolean (num != 0) to (num > 0) didn't change anything either. Adding an if statement to the else provided more context, but the value was still the same.
Printing the values of num at every point gives an idea of how the function goes wrong:
Python:
1
2
6
24
120
720
5040
40320
362880
3628800
39916800
479001600
6227020800
87178291200
1307674368000
20922789888000
355687428096000
6402373705728000
121645100408832000
2432902008176640000
51090942171709440000
1124000727777607680000
25852016738884976640000
620448401733239439360000
15511210043330985984000000
15511210043330985984000000
A steady increase. In the Java:
1
2
6
24
120
720
5040
40320
362880
3628800
39916800
479001600
1932053504
1278945280
2004310016
2004189184
-288522240
-898433024
109641728
-2102132736
-1195114496
-522715136
862453760
-775946240
2076180480
2076180480
Not a steady increase. In fact, num turns negative, as though the function were returning negative numbers, even though num should never go below zero.
Both Python and Java codes are going about the same procedure, yet they are returning wildly different values. Why is this happening?
Two words - integer overflow
While not an expert in Python, I assume it expands the size of the integer type according to its needs.
In Java, however, the size of an int type is fixed at 32 bits, and since int is signed, we actually have only 31 bits to represent positive numbers. Once the number you assign is bigger than the maximum, it overflows the int (i.e. there is no room to represent the whole number).
While in the C language the behavior in such a case is undefined, in Java it is well defined: it simply keeps the least significant 4 bytes of the result.
For example:
System.out.println(Integer.MAX_VALUE + 1);
// Integer.MAX_VALUE = 0x7fffffff
results in:
-2147483648
// 0x7fffffff + 1 = 0x80000000
Edit
Just to make it clearer, here is another example. The following code:
int a = 0x12345678;
int b = 0x12345678;
System.out.println("a*b as int multiplication (overflown) [DECIMAL]: " + (a*b));
System.out.println("a*b as int multiplication (overflown) [HEX]: 0x" + Integer.toHexString(a*b));
System.out.println("a*b as long multiplication (overflown) [DECIMAL]: " + ((long)a*b));
System.out.println("a*b as long multiplication (overflown) [HEX]: 0x" + Long.toHexString((long)a*b));
outputs:
a*b as int multiplication (overflown) [DECIMAL]: 502585408
a*b as int multiplication (overflown) [HEX]: 0x1df4d840
a*b as long multiplication (overflown) [DECIMAL]: 93281312872650816
a*b as long multiplication (overflown) [HEX]: 0x14b66dc1df4d840
And you can see that the second output is the least significant 4 bytes of the fourth output.
Unlike Java, Python has built-in support for integers of unlimited precision. In Java, an int is limited to 32 bits and will overflow.
As others already wrote, you get an overflow; the numbers simply won't fit within Java's datatype representation. Python has built-in bignum capability, whereas Java does not.
Try some smaller values and you will see your Java code works fine.
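If you want the Java recursion to behave like the Python one, BigInteger provides the same unlimited precision (a sketch, reusing the recurse name from the question):
import java.math.BigInteger;

public static BigInteger recurse(int num) {
    if (num == 0) {
        return BigInteger.ONE;
    }
    // exact arbitrary-precision multiply, so 25! no longer wraps around
    return BigInteger.valueOf(num).multiply(recurse(num - 1));
}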
Java's int range
int
4 bytes, signed (two's complement). -2,147,483,648 to 2,147,483,647. Like all numeric types ints may be cast into other numeric types (byte, short, long, float, double). When lossy casts are done (e.g. int to byte) the conversion is done modulo the length of the smaller type.
Here the range of int is limited
The problem is very simple: in Java the max limit of an int is 2147483647, which you can print with System.out.println(Integer.MAX_VALUE);
and the minimum with System.out.println(Integer.MIN_VALUE);
Because in the Java version you store the number as an int, which is 32-bit. Consider the biggest (unsigned) number you can store with two bits in binary: 11, which is 3 in decimal. The biggest number that can be stored with four bits in binary is 1111, which is 15 in decimal. A 32-bit (signed) number cannot store anything bigger than 2,147,483,647. When you try to store a number bigger than this, it suddenly wraps back around and starts counting up from the negative numbers. This is called overflow.
If you want to try storing bigger numbers, try long.
Alternative wording: When will adding Double.MIN_VALUE to a double in Java not result in a different Double value? (See Jon Skeet's comment below)
This SO question about the minimum Double value in Java has some answers which seem to me to be equivalent. Jon Skeet's answer no doubt works but his explanation hasn't convinced me how it is different from Richard's answer.
Jon's answer uses the following:
double d = // your existing value;
long bits = Double.doubleToLongBits(d);
bits++;
d = Double.longBitsToDouble(bits);
Richard's answer mentions the JavaDoc for Double.MIN_VALUE:
A constant holding the smallest positive nonzero value of type double, 2^-1074. It is equal to the hexadecimal floating-point literal 0x0.0000000000001P-1022 and also equal to Double.longBitsToDouble(0x1L).
My question is, how is Double.longBitsToDouble(0x1L) different from Jon's bits++?
Jon's comment focuses on the basic floating point issue.
There's a difference between adding Double.MIN_VALUE to a double value, and incrementing the bit pattern representing a double. They're entirely different operations, due to the way that floating point numbers are stored. If you try to add a very little number to a very big number, the difference may well be so small that the closest result is the same as the original. Adding 1 to the current bit pattern, however, will always change the corresponding floating point value, by the smallest possible value which is visible at that scale.
I don't see any difference between Jon's approach of incrementing the bits ("bits++") and adding Double.MIN_VALUE. When will they produce different results?
I wrote the following code to test the differences. Maybe someone could provide more/better sample double numbers or use a loop to find a number where there is a difference.
double d = 3.14159269123456789; // sample double
long bits = Double.doubleToLongBits(d);
long bitsBefore = bits;
bits++;
long bitsAfter = bits;
long bitsDiff = bitsAfter - bitsBefore;
long bitsMinValue = Double.doubleToLongBits(Double.MIN_VALUE);
long bitsSmallValue = Double.doubleToLongBits(Double.longBitsToDouble(0x1L));
if (bitsMinValue == bitsSmallValue)
{
System.out.println("Double.doubleToLongBits(Double.longBitsToDouble(0x1L)) is same as Double.doubleToLongBits(Double.MIN_VALUE)");
}
if (bitsDiff == bitsMinValue)
{
System.out.println("bits++ increments the same amount as Double.MIN_VALUE");
}
if (bitsDiff == bitsMinValue)
{
d = d + Double.MIN_VALUE;
System.out.println("Using Double.MIN_VALUE");
}
else
{
d = Double.longBitsToDouble(bits);
System.out.println("Using doubleToLongBits/bits++");
}
System.out.println("bits before: " + bitsBefore);
System.out.println("bits after: " + bitsAfter);
System.out.println("bits diff: " + bitsDiff);
System.out.println("bits Min value: " + bitsMinValue);
System.out.println("bits Small value: " + bitsSmallValue);
OUTPUT:
Double.doubleToLongBits(Double.longBitsToDouble(0x1L)) is same as Double.doubleToLongBits(Double.MIN_VALUE)
bits++ increments the same amount as Double.MIN_VALUE
Using doubleToLongBits/bits++
bits before: 4614256656636814345
bits after: 4614256656636814346
bits diff: 1
bits Min value: 1
bits Small value: 1
Okay, let's imagine it this way, sticking with decimal numbers. Suppose you have a floating decimal point type which allows you to represent 5 decimal digits, and a number between 0 and 3 for the exponent, to multiply the result by 1, 10, 100 or 1000.
So the smallest non-zero value is just 1 (i.e. mantissa=00001, exponent=0). The largest value is 99999000 (mantissa=99999, exponent=3).
Now, what happens when you add 1 to 50000000? You can't represent 50000001... the next representable number after 50000000 is 50001000. So if you try to add them together, the result is just going to be the closest value to the "true" result, which is still 50000000. That's like adding Double.MIN_VALUE to a large double.
My version (converting to bits, incrementing and then converting back) is like taking that 50000000, splitting into mantissa and exponent (m=50000, e=3) then incrementing it the smallest amount, to (m=50001, e=3) and then reassembling to 50001000.
Do you see how they're different?
Now here's a concrete example:
public class Test{
public static void main(String[] args) {
double before = 100000000000000d;
double after = before + Double.MIN_VALUE;
System.out.println(before == after);
long bits = Double.doubleToLongBits(before);
bits++;
double afterBits = Double.longBitsToDouble(bits);
System.out.println(before == afterBits);
System.out.println(afterBits - before);
}
}
This tries both approaches with a large number. The output is:
true
false
0.015625
Going through the output, that means:
Adding Double.MIN_VALUE didn't have any effect
Incrementing the bit did have an effect
The difference between afterBits and before is 0.015625, which is much bigger than Double.MIN_VALUE. No wonder the simple addition had no effect!
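Math.ulp confirms that 0.015625 is exactly the spacing between adjacent doubles at this magnitude (a sketch):
System.out.println(Math.ulp(100000000000000d)); // 0.015625: the gap between doubles near 10^14
System.out.println(Double.MIN_VALUE);           // 4.9E-324: far too small to bridge that gap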
It's exactly as Jon said:
"If you try to add a very little
number to a very big number, the
difference may well be so small that
the closest result is the same as the
original."
For example:
// True:
(Double.MAX_VALUE + Double.MIN_VALUE) == Double.MAX_VALUE
// False:
Double.longBitsToDouble(Double.doubleToLongBits(Double.MAX_VALUE) + 1) == Double.MAX_VALUE
MIN_VALUE is the smallest representable positive double, but that certainly does not imply that adding it to an arbitrary double results in an unequal one.
In contrast, adding 1 to the underlying bits results in a new bit pattern, and thus does result in an unequal double.
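As a footnote, the standard library wraps the bits++ trick for you: Math.nextUp returns the next representable double, which is exactly what incrementing the bit pattern achieves for finite positive values (a sketch):
double d = 100000000000000d;
System.out.println(d + Double.MIN_VALUE == d); // true: the tiny addend is absorbed
System.out.println(Math.nextUp(d) == d);       // false: always moves to the next double
System.out.println(Math.nextUp(d) - d);        // 0.015625 at this magnitude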