I have the following code in Java but it overflows when it shouldn't. Why?
public class O {
    public static void main(String[] args) {
        int big = Integer.MAX_VALUE;
        System.out.println("big = " + big);
        long bigger = big + 2;
        System.out.println("bigger = " + bigger);
    }
}
I get this output:
big = 2147483647
bigger = -2147483647
Why does it overflow? I have defined bigger as a long. What is wrong?
big + 2 is an int operation, and that is where your overflow happens: the two ints are added first, and only afterwards is the result widened to long, at which point the overflow has already happened.
Cast your int before the addition to make it a long operation:
long bigger = (long) big + 2L;
big + 2 will overflow because big is already the maximum int value, while (long) big is not near the maximum long value.
long bigger = (long) big + 2;
will work for you, as it treats big as a long instead of an int. So cast it to long.
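Putting both answers together, a minimal runnable sketch (the class name is my own) that shows the wrapped and the widened result side by side:

```java
public class OverflowFix {
    public static void main(String[] args) {
        int big = Integer.MAX_VALUE;

        // int + int wraps around first; the widening to long happens too late
        long wrapped = big + 2;        // -2147483647

        // casting one operand first promotes the whole addition to long
        long widened = (long) big + 2; // 2147483649

        System.out.println("wrapped = " + wrapped);
        System.out.println("widened = " + widened);
    }
}
```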
Related
Does there exist, or can you create, a fast method A() which converts a double to a long such that:
A(0d) = 0L
A(Math.nextUp(0d)) = 1L
A(Math.nextUp(Math.nextUp(0d))) = 2L
A(Math.nextUp(Math.nextUp(Math.nextUp(0d)))) = 3L
...
A(Double.MAX_VALUE) = Long.MAX_VALUE
A(-Double.MAX_VALUE) = -Long.MAX_VALUE
EDIT:
Just found an adequate answer myself. First of all, the above is not quite possible because of the existence of NaN and infinity for double: since long has no such values, Long.MAX_VALUE is larger than the number of double bit patterns that represent actual finite numbers.
However, my use case really only needs a conversion from double to long that is order-preserving. As long as I don't encounter NaN or infinity, I should be able to just reinterpret the double's bits as a long:
Double.doubleToLongBits(value)
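One caveat worth adding here (my own note, not from the answer above): the raw bits from Double.doubleToLongBits are only order-preserving for non-negative doubles. Negative doubles are stored in sign-magnitude form, so their bit patterns sort in reverse. A common fix, sketched below with a helper name of my own choosing, is to invert the low 63 bits of negative values:

```java
public class SortableDouble {
    // Map a (non-NaN) double to a long whose signed ordering matches the
    // double ordering. Non-negative doubles already sort correctly by their
    // raw bits; negative doubles are sign-magnitude, so inverting their low
    // 63 bits flips them back into ascending order.
    static long sortableBits(double d) {
        long bits = Double.doubleToLongBits(d);
        return bits >= 0 ? bits : bits ^ Long.MAX_VALUE;
    }

    public static void main(String[] args) {
        double[] ordered = { -Double.MAX_VALUE, -1.0, -Double.MIN_VALUE,
                             0.0, Double.MIN_VALUE, 1.0, Double.MAX_VALUE };
        for (int i = 1; i < ordered.length; i++) {
            // each mapped value must be strictly greater than the previous one
            System.out.println(sortableBits(ordered[i - 1]) < sortableBits(ordered[i]));
        }
    }
}
```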
Math.nextUp() increases the fractional part, not the whole part. To get the desired output, you have to call a() after each Math.nextUp():
public class D2L {
    public static Long a(double input) {
        return Math.round(Math.ceil(input));
    }

    public static void main(String[] args) {
        System.out.println(a(0d)); // 0L
        System.out.println(a(Math.nextUp(a(0d)))); // 1L
        System.out.println(a(Math.nextUp(a(Math.nextUp(a(0d)))))); // 2L
        System.out.println(a(Math.nextUp(a(Math.nextUp(a(Math.nextUp(a(0d)))))))); // 3L
        System.out.println(a(Double.MAX_VALUE) + " = " + Long.MAX_VALUE); // 9223372036854775807 = 9223372036854775807
        System.out.println(-a(Double.MAX_VALUE) + " = " + -Long.MAX_VALUE); // -9223372036854775807 = -9223372036854775807
    }
}
I am new to Java, and I don't understand the difference between these two.
Let's init some variables for the overflow:
byte myByte = 100;
short myShort = 5000;
int myInt = 2_000_150_000;
I know that whenever I have variables and arithmetic I need to cast with (long):
long myLong = (long) (50_000 + 10 * (long) (myByte + myShort + myInt));
long myLong2 = (long) (50_000 + 10 * (myByte + myShort + myInt));
System.out.println(myLong);
System.out.println(myLong2);
OUTPUT:
20001601000
-1473235480
but why do I need to do the cast in two places?
For the short type, this works differently:
short myShortTest = (short) (50_000 + 10 * (short) (myByte + myInt + myShort));
short myShortTest2 = (short) (50_000 + 10 * (myByte + myInt + myShort));
System.out.println(myShortTest);
System.out.println(myShortTest2);
OUTPUT:
13800
13800
Whenever an overflow happens, an int wraps around to the other end of its range, as seen in the output of the following program:
public class Main {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);
        System.out.println(Integer.MAX_VALUE + 1);
        System.out.println(Integer.MIN_VALUE);
        System.out.println(Integer.MIN_VALUE - 1);
    }
}
Output:
2147483647
-2147483648
-2147483648
2147483647
In the case of myLong, because of the cast to long, the result of the intermediate calculation 10 * (long)(myByte + myShort + myInt) was computed as a long, which can accommodate it without overflow, and hence you got the correct value.
In the case of myLong2, for lack of a proper cast, the result of the intermediate calculation 10 * (myByte + myShort + myInt) was computed as an int; the value overflowed, and hence you got the negative value.
Your first version reads: add up my variables, treat the result as a long, multiply by 10, add 50,000, and treat that as a long.
Your second version reads: add up my variables (result is an int), multiply by 10 (still an int, and possibly overflowed), add 50,000 (still a possibly overflowed int), and only then treat that as a long.
So your first version switches to 64-bit long arithmetic right after the sum, while your second version does that only at the very end, working with 32-bit int arithmetic until then.
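Since the wrap-around here is silent, it may be worth noting (an aside, not from the question) that Java 8's Math.addExact and Math.multiplyExact throw an ArithmeticException instead of wrapping, which makes a missing cast fail loudly:

```java
public class ExactDemo {
    public static void main(String[] args) {
        int myInt = 2_000_150_000;

        // the long overload computes the product in 64 bits: no overflow
        System.out.println(Math.multiplyExact(10L, myInt)); // 20001500000

        try {
            // the int overload detects that 10 * myInt does not fit in 32 bits
            Math.multiplyExact(10, myInt);
        } catch (ArithmeticException e) {
            System.out.println("int overflow detected");
        }
    }
}
```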
I have some integers which I am adding using Arrays.stream(int[] arr).sum(). It returns the truncated sum as an int, while the actual sum is bigger and fits in a long. Why does the stream API return only an int, truncating the result, rather than a long?
Tried for small integers like int myArray[] = { 1, 5, 8 };
int sum = Arrays.stream(myArray).sum();
works fine.
but it doesn't work for larger integers whose sum needs a long.
Scenario 1 below works fine and returns 14
int myArray[] = { 1, 5, 8 };
int sum = Arrays.stream(myArray).sum();
while scenario 2 won't work, as the sum goes beyond 32 bits. It gives the sum as -105032716 while the expected value is 4189934580
int myArray[] = { 2094967290, 2094967290};
int sum = Arrays.stream(myArray).sum();
and to get the correct sum: if I do the following, I get the expected result 4189934580
long sum = 0L + 2094967290+2094967290;
System.out.println(sum);
This is something that's up to the developer. You know what to expect, what data you're dealing with, and, more importantly, you test it. In my opinion, this is no different from how we handle the possible overflow of any intOne + intTwo.
If you know that the sum will exceed the int range, then switch to a long stream (thanks to Carlos Heuberger for asLongStream() instead of mapToLong(i -> i)):
long sum = Arrays.stream(myArray).asLongStream().sum();
You are summing ints (a 32-bit primitive data type). If you want a long, sum longs. This is how primitive math works.
int c = 2094967290 + 2094967290;
System.out.println(c);
is the same as
System.out.println(2094967290 + 2094967290);
Using longs
System.out.println(2094967290 + 2094967290L);
or with streams like this will yield the correct result here
long sum = Arrays.stream(myArray).mapToLong(Long::valueOf).sum();
or (as pointed out in the comments)
long sum = Arrays.stream(myArray).asLongStream().sum();
However, long can also overflow (it is a 64-bit primitive data type). For arbitrary precision, you should use BigInteger.
BigInteger sum = Arrays.stream(myArray).mapToObj(BigInteger::valueOf)
        .reduce(BigInteger.ZERO, BigInteger::add);
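For completeness, a runnable sketch putting the wrapping int sum and the widened long sum side by side, using the array from the question:

```java
import java.util.Arrays;

public class StreamSumDemo {
    public static void main(String[] args) {
        int[] myArray = { 2094967290, 2094967290 };

        // int sum: the true total 4189934580 wraps around the 32-bit range
        int intSum = Arrays.stream(myArray).sum();
        System.out.println(intSum);  // -105032716

        // widening to a LongStream before summing avoids the overflow
        long longSum = Arrays.stream(myArray).asLongStream().sum();
        System.out.println(longSum); // 4189934580
    }
}
```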
I looked and found one post which helped me here:
Convert Linear scale to Logarithmic
However, either I made a mistake I can't find, or I got the formula wrong. I want to convert any number from the linear 1-256 range to its respective value on a logarithmic scale. Can someone help me correct my code? For low values near zero it works more or less fine, but converting anything over 160 gives a result > 256.
Here is my code:
package linear2log;
public class Linear2Log {
    public static void main(String[] args) {
        System.out.println("Ats3: " + lin2log(160));
    }

    public static long lin2log(int z) {
        int x = 1;
        int y = 256;
        double b = Math.log(y / x) / (y - x);
        double a = 10 / Math.exp(b * 10);
        double tempAnswer = a * Math.exp(b * z);
        long finalAnswer = Math.max(Math.round(tempAnswer) - 1, 0);
        return finalAnswer;
    }
}
You got the formula wrong. For sure this line:
double a = 10 / Math.exp(b*10);
You are using the value 10 from the linked example, but you should be using your own value, which is 256:
double a = y / Math.exp(b * y);
I don't get why you are using this line:
long finalAnswer = Math.max(Math.round(tempAnswer) - 1, 0);
This way you always get a value that is one less than the actual value.
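Putting that fix together, a sketch of the corrected function (I also dropped the - 1 adjustment, under the assumption that the goal is to map 1 to 1 and 256 to 256 exactly):

```java
public class Lin2LogFixed {
    // Fit f(z) = a * e^(b z) through f(1) = 1 and f(256) = 256.
    public static long lin2log(int z) {
        int x = 1;
        int y = 256;
        double b = Math.log((double) y / x) / (y - x);
        double a = y / Math.exp(b * y); // was: 10 / Math.exp(b * 10)
        return Math.round(a * Math.exp(b * z));
    }

    public static void main(String[] args) {
        System.out.println(lin2log(1));   // 1
        System.out.println(lin2log(160)); // 32
        System.out.println(lin2log(256)); // 256
    }
}
```

With the endpoints pinned this way, every input in 1-256 now stays inside the range.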
given the following code:
long l = 1234567890123;
double d = (double) l;
is the following expression guaranteed to be true?
l == (long) d
I should think no, because as the numbers get larger, the gap between two adjacent doubles grows beyond 1, so the conversion back yields a different long value. And if the conversion rounds to a double below the long value, this might also happen earlier.
Is there a definitive answer to that?
Nope, absolutely not. There are plenty of long values which aren't exactly representable by double. In fact, that has to be the case, given that both types are represented in 64 bits, and there are obviously plenty of double values which aren't representable in long (e.g. 0.5)
Simple example (Java and then C#):
// Java
class Test {
    public static void main(String[] args) {
        long x = Long.MAX_VALUE - 1;
        double d = x;
        long y = (long) d;
        System.out.println(x == y);
    }
}
// C#
using System;

class Test
{
    static void Main()
    {
        long x = long.MaxValue;
        double d = x;
        long y = (long) d;
        Console.WriteLine(x == y);
    }
}
I observed something really strange when doing this though... in C#, long.MaxValue "worked" in terms of printing False... whereas in Java, I had to use Long.MAX_VALUE - 1. My guess is that this is due to some inlining and 80-bit floating point operations in some cases... but it's still odd :)
You can test this as there are a finite number of long values.
for (long l = Long.MIN_VALUE; l < Long.MAX_VALUE; l++) {
    double d = (double) l;
    if (l != (long) d) {
        System.out.println("long " + l + " fails test");
    }
}
It doesn't take many iterations to find a failure:
l = -9223372036854775805
d = -9.223372036854776E18
(long)d = -9223372036854775808
My code started with 0 and incremented by 100,000,000. The smallest sampled number that failed the test was 2,305,843,009,300,000,000 (19 digits), so every positive multiple of 100,000,000 below 2,305,843,009,200,000,000 is represented exactly by a double. Note that stepping by 100,000,000 only tests those multiples, though; among arbitrary longs, round-trip failures begin much earlier.
By the way, the reason I was interested in this question is that I wondered whether I could use doubles to represent timestamps (in milliseconds). Since current timestamps have on the order of 13 digits, which doubles represent exactly with room to spare, I'll do that.
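For reference, a sketch of the exact boundary: a double has a 53-bit significand, so every long with magnitude up to 2^53 survives the round trip, and 2^53 + 1 is the first value that does not:

```java
public class RoundTrip {
    public static void main(String[] args) {
        long exact = 1L << 53;    // 9007199254740992: fits in 53 bits
        long inexact = exact + 1; // 9007199254740993: rounds to 2^53 as a double

        System.out.println(exact == (long) (double) exact);     // true
        System.out.println(inexact == (long) (double) inexact); // false
    }
}
```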