1 public class Foo {
2 public static void main(String[]a){
3 foo(1000000000); // output: 1000000000
4 foo(1000000000 * 10); // output: 1410065408
5 foo((long)1000000000 * 10); // output: 10000000000
6
7 long l = 1000000000 * 10;
8 foo(l); // output: 1410065408
9 //long m = 10000000000; // compile error
10 }
static void foo(long l){
System.out.println(l);
}
}
Why does line 4 output 1410065408 instead of 10000000000?
Why is line 9 a compile error? Can't the compiler create a long, since the expected type is long?
By default, integer literals are ints, which means they follow int arithmetic and cannot represent numbers larger than an int can hold. Note that in your line 4, the two ints are first multiplied (using int arithmetic) and only then is the result cast to a long; by then it's too late, and the overflow has already happened.
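To see the wrap-around concretely, here is a minimal sketch (the class name OverflowDemo is mine, not from the question): the int multiplication wraps modulo 2^32, and 10000000000 mod 2^32 is exactly 1410065408.
public class OverflowDemo {
    public static void main(String[] args) {
        int wrapped = 1000000000 * 10;    // overflows silently in int arithmetic
        long widened = wrapped;           // widening the already-wrapped value doesn't help
        long correct = 1000000000L * 10;  // long arithmetic from the start
        System.out.println(widened);      // 1410065408
        System.out.println(correct);      // 10000000000
        System.out.println(10000000000L % 4294967296L); // 1410065408, the true result modulo 2^32
    }
}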
To put a long literal, just append L to it:
long m = 10000000000L;
(A lowercase 'l' would also compile, but that looks like a digit '1' so you should avoid it; use the capital 'L' so that it stands out).
Literals default to int unless otherwise specified. Instead of casting with (long) you can add L to the end of your literal to note that it should be of type long, in the same way that 1.1 is a double and 1.1f is a float.
foo(1000000000L * 10); // output: 10000000000
and
long m = 10000000000L;
When you perform an arithmetic operation in Java, the compiler implicitly promotes the operands, converting the smaller one to the larger operand's type; if both operands are ints, the result is an int (int / int is always int).
Add an explicit cast to your literals (defaulted to int) as indicated in other answers:
long l = 1000000000L * 10;
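As a quick illustration of that promotion rule (a sketch, not from the original answer): the result type follows the wider operand, and the division only becomes floating-point if at least one operand is floating-point.
System.out.println(7 / 2);            // 3   : int / int is int division
System.out.println(7L / 2);           // 3   : the int 2 is promoted to long, still integer division
System.out.println(7 / 2.0);          // 3.5 : the int 7 is promoted to double
System.out.println(1000000000L * 10); // 10000000000 : promoting one operand to long avoids the overflow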
Related
What rules govern such unexpected behaviour?
int i1 = -+0007;
int i2 = +-0007;
System.out.println(i1); // output: -7
System.out.println(i2); // output: -7
int i3 = +-+0007; // -7
int i4 = -+-0007; // 7
I understand that the unary sign operators are right-associative. But if the minus comes first (rightmost), it seems that the next plus (to its left) does not change the sign from minus to plus (i.e. it does nothing!), yet a second minus does change the sign (from negative to positive).
Also, we cannot use ++ or -- anywhere in such a sequence of +-+-+- (+--+ is a compile error: the rightmost + (or the leftmost +) is not a valid operand for the -- operator).
Unary - changes the sign. Unary + doesn't; it's mostly pointless, although it does perform unary numeric promotion (see here for some more details about it).
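To see what unary numeric promotion means in practice, here is a small sketch (variable names are mine): applying unary + (or -) to a byte, short or char yields an int.
byte b = 5;
// byte c = +b;          // does not compile: +b has type int after unary numeric promotion
int c = +b;              // fine, the promoted value is an int
char ch = 'A';
System.out.println(+ch); // prints 65, not 'A': the char was promoted to int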
So in your first line:
int i1 = -+0007;
the + does nothing; the - changes the value to -7
Second line:
int i2 = +-0007;
the - changes the value to -7; the + does nothing
+--+ is a compile error because -- is a different operator that can only operate on a variable. Something like the following will work:
int b = 1;
int a = -++b; // increment b by 1, change the sign and assign to a
System.out.println(a); // prints -2
You can use parentheses or add a space to make +--+ compile:
int i = +-(-+7); // will be equal to 7 because only the 2 `-` change the sign
int i = +- -+7;
It has to do with maths. If you multiply a negative and a positive number, the result is negative. If you multiply two negative numbers, the result is positive.
So, the - or + could also be written as:
-…: (-1) * …
+…: (1) * …
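So a chain of unary signs just composes these multiplications; a quick check (a sketch, not from the original answer):
System.out.println(-+-0007);                // 7: (-1) * (+1) * (-1) * 7
System.out.println((-1) * (+1) * (-1) * 7); // 7: the same composition written out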
What's the difference between the following?
long var1 = 2147483647L;
long var2 = 2147483648;
(or any primitive variable declaration)
Is there any performance difference with or without the L? Is it mandatory?
In the first case you are assigning a long literal to a long variable (the L or l suffix indicates long type).
In the second case you are assigning an int literal (int is the default type when no suffix is supplied) to a long variable, which causes an automatic widening conversion from int to long. It also means you are restricted to the int range, from Integer.MIN_VALUE to Integer.MAX_VALUE (-2147483648 to 2147483647).
That's the reason why
long var2 = 2147483648;
doesn't pass compilation (2147483648 is larger than Integer.MAX_VALUE).
On the other hand
long var2 = 2147483648L;
would pass compilation.
For easy understanding: each primitive type in Java has a range.
By default, every integer literal you write in Java is an int; you can assign it to a byte, short or int variable as long as it fits within that type's range.
short s = 32767;
byte b = 127;
int i = 2147483647;
So if you assign anything outside a type's range, you'll get a compilation error.
int i = 2147483648; //compilation error.
(Image: table of the primitive type ranges.)
And when you write long longNumber = 2147483647; the literal is still an int, but it falls within the int range, so Java reads it as an int and widens it to long automatically; you won't get any errors.
But if we assign something beyond the range of int, like
longNumber = 2147483648;
we get a compilation error, because Java must first read 2147483648 as an int literal, and it does not fit in the int range.
To tell Java that the number we have written is beyond the int range, just append l or L to the end of the number; the literal is then a long from the start:
long o = 2147483648L;
By default, every floating-point literal (a number written with a decimal point) is a double. So when you write digits with a '.', Java treats the value as a double, and it must be in the double range.
As we know, float has a smaller range and precision than double, so when you write
float f = 3.14;
the value itself would fit in a float, but you are assigning a double literal to a float variable, which is a narrowing conversion and is not allowed implicitly.
So either you cast the expression:
float f = (float) 3.14;
or you write a float literal in the first place:
float f = 3.14f; // tell the compiler to treat this literal as a float by appending f or F
If we don't put the L after the value, the value is considered an int.
Java then widens the int to a long automatically (provided the literal fits in an int).
So this is the line with the precision error:
A[i] = m % 3;
where m is a long and A is an int[].
And my error is:
error: possible loss of precision
A[i] = m % 3;
  required: int
  found:    long
How can I get an error when the only possible results are 0, 1 and 2?
Isn't there another way than declaring A as long[]?
It's a potentially big array, so I don't want that (in fact I would even prefer A to be short[]).
I also tried A[i] = m % 3L, but got the same error.
The compiler doesn't look at the result, it looks at the type. The type of m % 3 is long, and you are trying to put it into an int.
So the compiler complains, because the value stored in a long is potentially bigger than anything you can store in an int.
In order to remove the problem, you have to cast the result back into an int:
A[i] = (int) (m % 3);
However, you can do this because you know the result is 0,1 or 2. If you do not know the value of the long you are casting, you may have an integer overflow:
public static void main(String[] args) {
long l = Integer.MAX_VALUE + 1L;
System.out.println(l);
// 2147483648
System.out.println((int)l);
// -2147483648
}
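Since the asker said they would even prefer A to be a short[]: the same explicit cast works there too, because the compiler only checks the declared types, not the actual values. A minimal sketch (array name and size are mine):
short[] A = new short[10];
long m = 123456789L;
A[0] = (short) (m % 3); // explicit narrowing cast from long to short; safe here because the result is 0, 1 or 2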
Why does this output 0 instead of 1?
System.out.println((int) (Math.ceil(1/2)));
While this one correctly outputs 1
System.out.println((int) (Math.ceil((double) 1/ (double) 2)));
Shouldn't Math.ceil(double) automatically type cast the 1/2 to double?
Math.ceil does, indeed, cast the integer to a double. But it only does so after the integer operation has been performed. This is the order of operations:
int a = 1;
int b = 2;
int c = a / b; // now equals 0, because it's an integer operation.
double d = (double)c; // now it's a double - but equals 0.0.
double e = Math.ceil(d); // still 0.0.
You're thinking of 1/2 as a fraction, but it's not - it's an expression of two ints and an operator that has to be resolved before its value can be used in further expressions.
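As an aside (not part of the original answer): if you want the ceiling of an integer division without going through doubles at all, you can stay in integer arithmetic.
int a = 1, b = 2;
int ceil1 = (a + b - 1) / b;       // 1; classic idiom, assumes a >= 0, b > 0 and a + b - 1 doesn't overflow
int ceil2 = -Math.floorDiv(-a, b); // 1; works for negative a as well (b > 0), Java 8+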
An explicit cast always requires the target type in parentheses. Here the literals 1 and 2 are ints, so a conversion from int to double is needed. Widening conversions (from a smaller to a larger type, such as int to long or int to double) happen implicitly; only narrowing conversions (from a larger to a smaller type, where information could be lost) require an explicit cast. See the examples below:
public class MainClass{
public static void main(String[] argv){
int a = 100;
long b = a; // Implicit cast, an int value always fits in a long
}
}
An explicit cast looks like this:
public class MainClass{
public static void main(String[] argv){
float a = 100.001f;
int b = (int)a; // Explicit cast, the float could lose info
}
}
The first thing that happens when that line is executed is that the division 1/2 is resolved. This happens without any consideration for the Math.ceil call it is embedded in.
The literals 1 and 2 are integers. When you perform a division with only integers as operands, an integer division is performed, which truncates toward zero (so 1/2 becomes 0). The term therefore resolves to the int value 0. Math.ceil() only accepts type double, but that's not a problem: Java performs the conversion automatically and turns the int 0 into the double 0.0.
To perform a floating-point division instead, make one or both of the operands of the division floating-point literals:
System.out.println((int) (Math.ceil(1.0/2.0)));
Given the following code:
long l = 1234567890123;
double d = (double) l;
is the following expression guaranteed to be true?
l == (long) d
I should think no, because as numbers get larger, the gaps between adjacent doubles grow beyond 1, so converting back can yield a different long value. And if the conversion does not round up to the next greater double, this might happen even earlier.
Is there a definitive answer to that?
Nope, absolutely not. There are plenty of long values which aren't exactly representable by double. In fact, that has to be the case, given that both types are represented in 64 bits, and there are obviously plenty of double values which aren't representable in long (e.g. 0.5)
Simple example (Java and then C#):
// Java
class Test {
public static void main(String[] args) {
long x = Long.MAX_VALUE - 1;
double d = x;
long y = (long) d;
System.out.println(x == y);
}
}
// C#
using System;
class Test
{
static void Main()
{
long x = long.MaxValue;
double d = x;
long y = (long) d;
Console.WriteLine(x == y);
}
}
I observed something really strange when doing this though... in C#, long.MaxValue "worked" in terms of printing False... whereas in Java, I had to use Long.MAX_VALUE - 1. My guess is that this is due to some inlining and 80-bit floating point operations in some cases... but it's still odd :)
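A likely explanation for the Java side (an added note, with a small sketch): (double) Long.MAX_VALUE rounds up to 2^63, and casting a double that is too large back to long saturates to Long.MAX_VALUE, so the round trip appears to succeed for MAX_VALUE itself but not for MAX_VALUE - 1.
long max = Long.MAX_VALUE;
double d = max;                  // rounds up to 2^63 = 9.223372036854776E18
long back = (long) d;            // doubles too large for long saturate to Long.MAX_VALUE
System.out.println(max == back); // true: the round trip "succeeds" only because of saturation
long almostMax = Long.MAX_VALUE - 1;
System.out.println(almostMax == (long) (double) almostMax); // false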
You can test this as there are a finite number of long values.
for (long l = Long.MIN_VALUE; l < Long.MAX_VALUE; l++)
{
    double d = (double) l;
    if (l != (long) d)   // the round trip through double loses the value
    {
        System.out.println("long " + l + " fails test");
    }
}
It doesn't take many iterations to prove that:
l = -9223372036854775805
d = -9.223372036854776E18
(long)d = -9223372036854775808
My code started with 0 and incremented by 100,000,000. The smallest tested number that failed was 2,305,843,009,300,000,000 (19 digits). So any positive multiple of 100,000,000 below roughly 2,305,843,009,200,000,000 survives the round trip; note that this does not mean every long of that size is exactly representable, only the multiples of 100,000,000 that were actually tested.
By the way, the reason I was interested in this question is that I wondered whether I could use doubles to represent timestamps (in milliseconds). Since current timestamps are on the order of 13 digits (and it will take them a rather long time to reach 18 digits), I'll do that.
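For reference (a well-known property of IEEE 754 doubles, not something stated in the answers above): a double has a 53-bit significand, so every integer with absolute value up to 2^53 = 9007199254740992 survives the long -> double -> long round trip, and 2^53 + 1 is the first positive integer that does not. Millisecond timestamps are 13 digits, far below 2^53, so storing them in a double is exact for now. A quick check:
long exact = 1L << 53;                                   // 9007199254740992
long inexact = (1L << 53) + 1;                           // 9007199254740993 rounds to 2^53 as a double
System.out.println(exact == (long) (double) exact);     // true
System.out.println(inexact == (long) (double) inexact); // false
long now = System.currentTimeMillis();                  // about 13 digits, far below 2^53
System.out.println(now == (long) (double) now);         // true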