Place your declaration for i at line 3 so that the loop becomes an infinite loop.
public class Puzzel3 {
    public static void main(String[] args) {
        // Line 3
        while (i == i + 1) {
            System.out.println(i);
        }
        System.out.println("done");
    }
}
Math says that Infinity + 1 == Infinity, so:
// The declaration required
double i = Double.POSITIVE_INFINITY;

// It's an infinite loop now...
while (i == i + 1) {
    System.out.println(i);
}
System.out.println("done");
double i = 1/0.0;
This will turn the loop into an infinite one: 1/0.0 evaluates to Double.POSITIVE_INFINITY.
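A minimal sanity check (class name is mine): floating-point division by zero in Java does not throw an exception, it yields Infinity, which then satisfies the loop condition.

public class InfinityCheck {
    public static void main(String[] args) {
        double i = 1 / 0.0;                                 // double division: no ArithmeticException
        System.out.println(i);                              // Infinity
        System.out.println(i == Double.POSITIVE_INFINITY);  // true
        System.out.println(i == i + 1);                     // true, so the while loop never exits
    }
}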
The while loop is infinite if the loop condition remains true. Since the expression only depends on i, and i is not assigned in the loop body, that is equivalent to the loop condition being true on first evaluation.
Therefore, the question is for which values of which types the expression i == i + 1 is true.
Java has the following types:
reference types: do not support the + operator, except for strings, which get longer by concatenating "1", and therefore cannot remain identical.
primitive types:
boolean: does not support +
integral types: adding 1 is guaranteed to change the value, even in case of overflow
floating point types: a value of floating point type is either:
positive 0: 0+ + 1 is 1 and therefore != 0
negative 0: 0- + 1 is 1 and therefore != 0
NaN: NaN + 1 is NaN, but NaN != NaN
positive infinity: inf+ + 1 is inf+, and therefore == inf+
negative infinity: inf- + 1 is inf-, and therefore == inf-
normal: c + 1 is not an accurate computation. Roughly speaking, 1 is added to c, and the nearest float (or double) to that value is taken as the result. Whether that float (or double) is distinct from the initial value depends on the density of floating point values around c. Internally, a floating point type is represented by a sign bit and two fixed-width integers m and e, where the value is given by s * m * 2^e (with s = ±1).
Adding 1 is unlikely to change e (and if it does, the result is distinct anyway). Otherwise:
if e <= 0, adding 1 will change m
if e == 1, adding 1 might change m, depending on the rounding mode
if e > 1, adding 1 will not change m, and therefore c + 1 == c. Now, for which values will this occur?
For float, m < 2^24. Therefore, e > 1 if c >= 2^25 or c <= - (2^25)
For double, m < 2^53. Therefore, e > 1 if c >= 2^54 or c <= -(2^54)
Those ought to be all cases :-)
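As a quick confirmation of the case analysis above, here is a small sketch (class name is mine) exercising the NaN, infinity, and large-magnitude cases; the thresholds 2^25 and 2^54 come from the bounds derived above:

public class SelfPlusOne {
    public static void main(String[] args) {
        System.out.println(Double.NaN == Double.NaN + 1);   // false: NaN != NaN
        double inf = Double.POSITIVE_INFINITY;
        System.out.println(inf == inf + 1);                 // true: inf + 1 is inf

        float f = 1 << 25;                                  // 2^25: float spacing here is 4
        System.out.println(f == f + 1);                     // true: f + 1 rounds back to f

        double d = 1L << 54;                                // 2^54: double spacing here is 4
        System.out.println(d == d + 1);                     // true: d + 1 rounds back to d

        System.out.println(1e15 == 1e15 + 1);               // false: integers below 2^53 are exact
    }
}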
While studying Java expression evaluation order, I came across a phenomenon I can't clearly explain to myself. Here are two quiz questions; the task is to determine the console output.
Example 1
int[] a = {5, 5};
int b = 1;
a[b] = b = 0;
System.out.println(Arrays.toString(a));
Correct console output is: [5, 0]
Example 2
public class MainClass {
    static int f1(int i) {
        System.out.print(i + ",");
        return 0;
    }

    public static void main(String[] args) {
        int i = 0;
        i = i++ + f1(i);
        System.out.print(i);
    }
}
Correct console output is: 1,0
As I learned, Java has groups (levels) of operators with ordered precedence, and expressions are evaluated according to operator precedence. Each group also has an associativity, and if operators have the same precedence, they are evaluated in the order specified by the group's associativity. The operator precedence table (from Cay S. Horstmann, Core Java Vol. 1):
#    operator                              associativity
1    [] . () (method call)                 left to right
2    ! ~ ++ -- + - (type) cast new         right to left
3    * / %                                 left to right
4    + -                                   left to right
...
14   = += -= (the rest are omitted)        right to left
With the table above it becomes clear that in example 1 the operator with the highest priority is the array indexing a[b], and then the assignment operators are evaluated from right to left: b = 0, then a[1] = 0. That is why a = [5, 0].
But example 2 confuses me. According to the precedence table, the operator with the highest priority is the method invocation f1(i) (which should print 0), then the unary post-increment i++ (which uses the current i = 0 and increments it afterwards), then the addition operator 0 + 0, and finally the assignment operator i = 0. So I supposed the correct output would be 0,0.
But in fact it is not. In fact the unary post-increment i++ is evaluated first (increasing i to 1), then the method invocation f1(i) prints 1 and returns 0, and finally the assignment operator assigns i = 0 + 0, so the final value of i is 0 and the correct answer is 1,0.
I suppose this is due to the left-to-right associativity of the binary addition operator, but in that case why is the addition evaluated first in example 2, while in example 1 the highest-priority operator a[b] is evaluated first? I noticed that all the operators in example 2 are in different groups, so we shouldn't take operator associativity into consideration at all, should we? Shouldn't we just order all the operators from example 2 by precedence and evaluate them in the resulting order?
You are confusing evaluation order with precedence.
The right-to-left associativity of = means that
a[b] = b = 0;
is parsed as
a[b] = (b = 0);
but the evaluation is still left-to-right, so the value of the first b is evaluated before the value of b is updated.
a[b] = (b = 0)      // a = { 5, 5 }, b = 1
// evaluate 'b'
a[1] = (b = 0)      // a = { 5, 5 }, b = 1
// evaluate 'b = 0'
a[1] = 0            // a = { 5, 5 }, b = 0
// evaluate 'a[1] = 0'
0                   // a = { 5, 0 }, b = 0
Operator precedence and associativity affect how source code is parsed into an expression tree. But: evaluation order in any expression is still left-to-right.
That's why in i++ + f1(i), we first evaluate i++, then f1(i), and then compute their sum.
Method call having the highest priority means that i++ + f1(i) will never be parsed as (i++ + f1)(i) (if that even makes sense), but always i++ + (f1(i)). Priority does not mean "is evaluated before anything else."
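To make this concrete, here is a minimal sketch (class and method names are mine) showing that operands are evaluated strictly left to right even when the right-hand operand belongs to a higher-precedence operator:

public class EvalOrder {
    static int trace(String label, int value) {
        System.out.println("evaluating " + label);
        return value;
    }

    public static void main(String[] args) {
        // * binds tighter than +, yet a is still evaluated first:
        int r = trace("a", 1) + trace("b", 2) * trace("c", 3);
        // prints "evaluating a", "evaluating b", "evaluating c" in that order
        System.out.println(r);                              // 7
    }
}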
I was trying to remove the fractional part from a double when it is a whole number, using:
(d % 1) == 0 ? d.intValue() : d
And I encountered the following behavior, which I don't understand:
public static void main(String[] args) {
    Double d = 5D;
    System.out.println((d % 1) == 0);                              // true
    System.out.println((d % 1) == 0 ? d.intValue() : "not whole"); // 5
    System.out.println((d % 1) == 0 ? d.intValue() : d);           // 5.0
}
As you can see on the third line, the operator seems to choose the else value (5.0) even though the condition (d % 1) == 0 is met.
What's going on here?
The return type of the ternary conditional operator must be such that both the 2nd and 3rd operands can be assigned to it.
Therefore, in your second case, the return type of the operator is Object (since both d.intValue() and "not whole" must be assignable to it) while in the third case it is Double (since both d.intValue() and d must be assignable to it).
Printing an Object whose runtime type is Integer gives you 5 while printing a Double gives you 5.0.
The type of an expression a ? b : c is determined from b and c: it is their common type, or the closest common parent of b and c.
System.out.println((d % 1) == 0 ? d.intValue() : "not whole"); // Comparable, a common parent of Integer and String
System.out.println((d % 1) == 0 ? d.intValue() : d); // Double: the int is widened to double
BTW d % 1 will only check that it is a whole number, not that it's small enough to fit in an int value. A safer check is to see if the value is the same when cast to an int or long:
double d = 5.0;
if ((long) d == d)
    System.out.println((long) d);
else
    System.out.println(d);
Or you can prevent it from widening the long back to a double with:
double d = 5.0;
System.out.println((long) d == d ? Long.toString((long) d) : Double.toString(d));
It chooses correctly, and then widens the result to double. These are the three key points (a short illustration follows the list):
If the second and third operands have the same type, that is the type of the conditional expression. In other words, you can avoid the whole mess by steering clear of mixed-type computation.
If one of the operands is of type T where T is byte, short, or char and the other operand is a constant expression of type int whose value is representable in type T, the type of the conditional expression is T.
Otherwise, binary numeric promotion is applied to the operand types, and the type of the conditional expression is the promoted type of the second and third operands.
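Here is a minimal sketch of those three rules in action (variable names are mine; the behavior follows the rules quoted above):

public class TernaryTypes {
    public static void main(String[] args) {
        boolean whole = true;

        // Rule 1: both operands are double, so the expression is double.
        double d1 = whole ? 1.0 : 2.0;

        // Rule 2: short vs. a constant int that fits in short -> short.
        short s = 1;
        short s2 = whole ? s : 2;                  // compiles: 2 is representable as short

        // Rule 3: int vs. double -> binary numeric promotion to double.
        int i = 5;
        double d = 5.0;
        System.out.println(whole ? i : d);         // prints 5.0, not 5
        System.out.println(d1 + " " + s2);         // 1.0 1
    }
}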
In your case the second and third operands of the ternary operator are of types int and Double. Java must convert these values to the same type so they can be returned from the ternary operator. The rules for doing this are given in the Java Language Specification: https://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.25
In your case these rules result in the conversion of both operands to type double (the Double is unboxed, the int is widened).
The fix is to cast the operands of the ternary operator so that they are of the same type (there may be more brackets below than strictly needed; I'm a bit rusty on Java operator precedence rules).
System.out.println((d % 1) == 0 ? ((Number)(d.intValue())) : (Number)d);
Would this test work?:
if (testInt/2).ofType(Integer){
//to-do if even
}
I assume it would, iff the compiler resolves testInt/2 before ofType(); is this the case?
The best way to do it is to use the modulus operator:
if (testInt % 2 == 0) {
    // Do stuff
}
//3 % 2 = 1 , therefore odd
//4 % 2 = 0 , therefore even
Modulus is just getting the remainder from division.
Using the modulus operator works, but there's a better way to check. The least significant bit is 0 for all even numbers and 1 for all odd numbers. Performing a bitwise AND with 1 clears all but the LSB; check that bit to determine the integer's parity. Clearing bits this way is typically cheaper than computing a remainder.
if ((testInt & 1) == 0) //Even Number
if ((testInt & 1) == 1) //Odd Number
/*
4 & 1 = 0
5 & 1 = 1
1342424 & 1 = 0
5987833 & 1 = 1
*/
Would this test work?
No. The expression performs integer division (because both operands are integers), and the result of the expression always has type int. So, for example:
1 / 2  => 0   // not 0.5
2 / 2  => 1
-3 / 2 => -1  // not -1.5 or -2 ...
// integer division in Java truncates towards zero
I assume it would, iff the compiler resolves testInt/2 before ofType(); is this the case?
The compiler resolves the static type of testInt/2 at compile time, while the instanceof test is performed at runtime.
However, your assumption is incorrect, because it is based on an incorrect understanding of expressions and typing.
The compile time type of an expression does not depend on the values of the operands. It only depends on the operands' compile-time types.
For primitive types, there is no polymorphism, so the runtime type and the compile time type are the same.
As @Cameron C states, the correct way to do the test is to use the modulus operator %.
@styro's approach works, but:
it is less readable (IMO)
it is possibly slower
if it is faster, it probably doesn't matter (see the comparison below).
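For what it's worth, a minimal side-by-side of the two checks (class name is mine). One subtlety worth a comment: in Java, -3 % 2 is -1, so testing % 2 == 1 misses negative odd numbers, while % 2 == 0 and (n & 1) == 0 both classify negatives correctly.

public class Parity {
    public static void main(String[] args) {
        for (int n : new int[] { 4, 5, -4, -5 }) {
            boolean evenByMod = n % 2 == 0;        // remainder check
            boolean evenByBit = (n & 1) == 0;      // least-significant-bit check
            System.out.println(n + ": mod=" + evenByMod + ", bit=" + evenByBit);
        }
    }
}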
Try this code...
// Note: this only works for non-negative testInt, and it runs in O(n) time.
public static boolean isEven(int testInt) {
    int i = 0;
    while (i <= testInt) {
        if (i == testInt) return true;
        i += 2;
    }
    return false;
}
Here are the outputs from the Google Chrome JavaScript console.
Here are the outputs from the DrJava Java console.
My JavaScript code is:
(baseCPUCyclesPerIteration - CPUCyclesTotalRoundoff) | 0
It seems to compile fine in Java if both variables are ints, but apparently they are doubles in JavaScript, even though
typeof baseCPUCyclesPerIteration reveals "number"
The results make it pretty obvious it's a double datatype. I don't understand why bitwise OR 0 works on a double in JavaScript but doesn't work on Java doubles.
It seems the purpose of the | 0 is just to trim off the decimal part of the double. I'm guessing the Java equivalent is an (int) or (long) cast; is this correct? Or does the bitwise | 0 do more than just trim off the decimal part in JavaScript?
Edit:
Yeah, | 0 doesn't just trim in JavaScript; I just ran this: 8899811111.111113453456754645 | 0 returned 309876519.
(Although I went past the 32-bit integer limit, JavaScript still computes it; I'm guessing this is where the overflow happens.)
In JavaScript, all bitwise operators cast decimal numbers to 32-bit integers. The cast acts like floor for positive numbers and ceil for negative numbers. Things like | 0 or ~~ are often used as tricks to cast numbers to integers in JavaScript.
To explain the overflow you're seeing, we can look at the specification for how JavaScript converts numbers to int32: http://es5.github.io/#x9.5
The abstract operation ToInt32 converts its argument to one of 2^32 integer values in the range −2^31 through 2^31−1, inclusive. This abstract operation functions as follows:
Let number be the result of calling ToNumber on the input argument.
If number is NaN, +0, −0, +∞, or −∞, return +0.
Let posInt be sign(number) * floor(abs(number)).
Let int32bit be posInt modulo 2^32; that is, a finite integer value k of Number type with positive sign and less than 2^32 in magnitude such that the mathematical difference of posInt and k is mathematically an integer multiple of 2^32.
If int32bit is greater than or equal to 2^31, return int32bit − 2^32, otherwise return int32bit.
So, to reproduce this behavior, you would have to reproduce this logic.
Edit: Here's how Mozilla's Rhino engine does it in Java (as per the GitHub link supplied by user3435580):
public static int toInt32(double d) {
    int id = (int) d;
    if (id == d) {
        // This covers -0.0 as well
        return id;
    }
    if (d != d
            || d == Double.POSITIVE_INFINITY
            || d == Double.NEGATIVE_INFINITY) {
        return 0;
    }
    d = (d >= 0) ? Math.floor(d) : Math.ceil(d);
    double two32 = 4294967296.0;
    d = Math.IEEEremainder(d, two32);
    // (double)(long)d == d should hold here
    long l = (long) d;
    // returning (int)d does not work as d can be outside int range
    // but the result must always be 32 lower bits of l
    return (int) l;
}
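Assuming the toInt32 method above is in scope, feeding it the value from the edit above reproduces the Chrome result, while plain Java casts behave differently (a sketch):

public static void main(String[] args) {
    System.out.println(toInt32(8899811111.111113453456754645)); // 309876519, same as Chrome
    System.out.println((long) 8899811111.111113453456754645);   // 8899811111: a cast only truncates
    System.out.println((int) 8899811111.111113453456754645);    // 2147483647: narrowing to int clamps
}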
I was recently reading about how floating point values are stored in memory, and I wrote a small program to test what I had read. I noticed that there is a difference in the way Java processes floating point values.
public class Test {
    public static void main(String args[]) {
        double a = 0.90;
        System.out.println(a);
        System.out.println(2.00 - 1.10);
    }
}
The above program is printing
0.9
0.8999999999999999
Why are these two statements not printing the same value? I know some floating point values can't be represented exactly. In that case, both should give the same value.
Why are these two statements not printing the same value?
The result is not the same.
I know some floating point values can't be represented exactly.
So you should assume that the result of an operation can depend on the amount of representation error of the values you use.
for (long l = 1; l <= 1e16; l *= 10) {
    double a = l + 2;
    double b = l + 1.1;
    System.out.println(a + " - " + b + " is " + (a - b));
}
As the value gets larger, the representation error increases and becomes large compared with the result of 0.9:
3.0 - 2.1 is 0.8999999999999999
12.0 - 11.1 is 0.9000000000000004
102.0 - 101.1 is 0.9000000000000057
1002.0 - 1001.1 is 0.8999999999999773
10002.0 - 10001.1 is 0.8999999999996362
100002.0 - 100001.1 is 0.8999999999941792
1000002.0 - 1000001.1 is 0.9000000000232831
1.0000002E7 - 1.00000011E7 is 0.900000000372529
1.00000002E8 - 1.000000011E8 is 0.9000000059604645
1.000000002E9 - 1.0000000011E9 is 0.8999999761581421
1.0000000002E10 - 1.00000000011E10 is 0.8999996185302734
1.00000000002E11 - 1.000000000011E11 is 0.899993896484375
1.000000000002E12 - 1.0000000000011E12 is 0.9000244140625
1.0000000000002E13 - 1.00000000000011E13 is 0.900390625
1.00000000000002E14 - 1.000000000000011E14 is 0.90625
1.000000000000002E15 - 1.0000000000000011E15 is 0.875
1.0000000000000002E16 - 1.0000000000000002E16 is 0.0
And on the topic of when the representation error gets so large that your operation does nothing:
for (double d = 1; d < Double.MAX_VALUE; d *= 2) {
    if (d == d + 1) {
        System.out.println(d + " + 1 == " + (d + 1));
        break;
    }
}
for (double d = 1; d < Double.MAX_VALUE; d *= 2) {
    if (d == d - 1) {
        System.out.println(d + " - 1 == " + (d - 1));
        break;
    }
}
prints
9.007199254740992E15 + 1 == 9.007199254740992E15
1.8014398509481984E16 - 1 == 1.8014398509481984E16
When “0.90” is converted to double, the result is .9 plus some small error, e0. Thus a equals .9+e0.
When “1.10” is converted to double, the result is 1.1 plus some small error, e1, so the result is 1.1+e1.
These two errors, e0 and e1, are generally unrelated to each other. Simply put, different decimal numbers are different distances away from binary floating-point numbers. When you evaluate 2.00-1.10, the result is 2–(1.1+e1) = .9–e1. So one of your numbers is .9+e0, and the other is .9-e1, and there is no reason to expect them to be the same.
(As it happens in this case, e0 is .00000000000000002220446049250313080847263336181640625, and e1 is .000000000000000088817841970012523233890533447265625. Also, subtracting 1.1 from 2 introduces no new error, after the conversion of “1.1” to double, by Sterbenz’ Lemma.)
Additional details:
In binary, .9 is .11100110011001100110011001100110011001100110011001100 11001100… The bits before the break fit into a double; the trailing bits do not, so the number is rounded at that point. That causes a difference between the exact value of .9 and the value of ".9" represented as a double.

In binary, 1.1 is 1.00011001100110011001100110011001100110011001 10011001… Again, the number is rounded. But observe that the amount of rounding is different. For .9, 1100 1100… was rounded up to 1 0000 0000…, which adds 00110011… at that position. For 1.1, 1001 1001… was rounded up to 1 0000 0000…, which adds 01100110… at that position (and causes a carry in the retained bits).

The two positions are also different: 1.1 starts to the left of the radix point, so it looks like this: 1.[52 bits here][place where rounding occurs], while .9 starts to the right of the radix point, so it looks like this: .[53 bits here][place where rounding occurs]. So the rounding for 1.1, besides being 01100110… instead of 00110011…, is also doubled, because it occurs one bit to the left of the .9 rounding. So you have two effects making e0 different from e1: the trailing bits that were rounded are different, and the place where rounding occurs is different.
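If you want to inspect those rounded bit patterns yourself, here is a small sketch (class name is mine) using Double.doubleToLongBits, which exposes the raw sign, exponent, and 52-bit mantissa fields:

public class ShowBits {
    public static void main(String[] args) {
        System.out.println(Long.toBinaryString(Double.doubleToLongBits(0.9)));
        System.out.println(Long.toBinaryString(Double.doubleToLongBits(2.0 - 1.1)));
        // The two patterns differ by one unit in the last place: 0.9 rounds up
        // to .9 + e0, while 2.0 - 1.1 yields .9 - e1, one representable value lower.
    }
}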
I know some floating point values can't be represented exactly
Well, that is your answer (or more precisely, as pointed out by Mark Byers, some decimal values can't be represented exactly as a double)! Neither 0.9 nor 1.1 can be represented exactly as a double, so you get rounding errors.
You can check the exact value of the various doubles with BigDecimal:
// requires: import java.math.BigDecimal;
public static void main(String args[]) {
    double a = 0.9d;
    System.out.println(a);
    System.out.println(new BigDecimal(a));

    double b = 2d - 1.1d;
    System.out.println(b);
    System.out.println(new BigDecimal(2.0d));
    System.out.println(new BigDecimal(1.1d));
    System.out.println(new BigDecimal(b));
}
which outputs:
0.9
0.90000000000000002220446049250313080847263336181640625
0.8999999999999999
2
1.100000000000000088817841970012523233890533447265625
0.899999999999999911182158029987476766109466552734375
Your reasoning is that, even if 0.9 can't be represented precisely by a double, it should have exactly the same double value as 2.0 - 1.1, and so result in the same printed value. That's the error: this subtraction does not yield the double represented by "0.9" (or the exact value 0.9).