Would this test work?:
if (testInt/2).ofType(Integer){
//to-do if even
}
I assume it would work iff the compiler resolves testInt/2 before ofType(); is this the case?
The best way to do it is to use the modulus operator:
if (testInt % 2 == 0)
{
//Do stuff
}
//3 % 2 = 1 , therefore odd
//4 % 2 = 0 , therefore even
Modulus is just getting the remainder from division.
Using the modulus operator works, but there's another way to check. The least significant bit (LSB) is 0 for all even numbers and 1 for all odd numbers. Performing a bitwise AND with 1 clears every bit except the LSB; checking that bit gives you the integer's parity. A bitwise AND is a single cheap instruction, though in practice the JIT usually compiles a % 2 comparison to something just as fast.
if ((testInt & 1) == 0) //Even Number
if ((testInt & 1) == 1) //Odd Number
/*
4 & 1 = 0
5 & 1 = 1
1342424 & 1 = 0
5987833 & 1 = 1
*/
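Wrapped up as a small helper (the class and method names below are just for illustration), the check looks like this. One genuine advantage of & 1 is that it also classifies negative odd numbers correctly, whereas testInt % 2 == 1 fails for them, because % can return -1:

```java
public class Parity {
    // The LSB is 0 for even numbers and 1 for odd ones, even in two's
    // complement, so this also works for negative values.
    static boolean isEven(int x) {
        return (x & 1) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isEven(1342424)); // true
        System.out.println(isEven(5987833)); // false
        System.out.println(isEven(-3));      // false: note -3 % 2 is -1, not 1
    }
}
```

Modern JIT compilers typically turn x % 2 == 0 into equally cheap code, so pick whichever reads better.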
Would this test work?
No. The expression performs integer division (because both operands have type int), and the result of the expression always has type int. So for example
1 / 2 => 0 // not 0.5
2 / 2 => 1
-3 / 2 => -1 // not -1.5 or -2 ...
// integer division in Java truncates towards zero
I assume it would iff the compiler resolves testInt/2 before ofType(); is this the case??
The compiler resolves the static type of testInt/2 at compile time while the instanceof test is performed at runtime.
However, your assumption is incorrect because it is based on an incorrect understanding of expressions and typing.
The compile time type of an expression does not depend on the values of the operands. It only depends on the operands' compile-time types.
For primitive types, there is no polymorphism, so the runtime type and the compile time type are the same.
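To make this concrete, here is a small sketch (variable names are illustrative): instanceof cannot even be applied to a primitive, and boxing the quotient always yields an Integer, so the proposed test could never distinguish even from odd:

```java
public class StaticTypeDemo {
    public static void main(String[] args) {
        int testInt = 5;
        // testInt / 2 has compile-time type int, whatever the runtime values are.
        int quotient = testInt / 2;  // 2: the fractional part is simply discarded
        Object boxed = quotient;     // autoboxed to java.lang.Integer
        System.out.println(boxed instanceof Integer); // true for EVERY testInt
        System.out.println(quotient);                 // 2
    }
}
```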
As @Cameron C states, the correct way to do the test is to use the modulus operator %.
@styro's approach works, but:
it is less readable (IMO)
it is possibly slower
if it is faster, it probably doesn't matter.
Try this code...
public static boolean isEven(int testInt) {
    int i = 0;
    while (i <= testInt) {
        if (i == testInt) return true;
        i += 2;
    }
    return false; // note: this also returns false for every negative number
}
Related
I have learned a lot from my last question hopefully I don't make the same mistakes again:)
This stems from a previous question. Here is what I THINK I know:
For ints in Java (I assume this holds in all languages, but I'm asking about Java specifically):
1/3 = 0
1%3 = 1
I was stumped as to why i%j = i when i < j and a previous poster explained how this worked and also stated that "First, in Java, % is the remainder (not modulo) operator, which has slightly different semantics...."
Their explanation was perfect for what I needed. However, I was confused by their quote because I was always taught that in mathematics modular == remainder division.
How does one execute modular division in JAVA and are there pitfalls to watch for when trying to use % as a modulus operator?
Mathematically, modulo is division with remainder.
7 mod 4 = 1 R3
see:
n = a * m + r
The modulo operator in Java (like in most other languages) gives only the remainder part, and it does not handle negative numbers the way the mathematical definition does.
In detail: mathematically, the modulo is always non-negative. That is the difference from the modulo operator in Java.
a mod n = b, if there is a number k existing with a = b + kn
and 0 <= b < n
That means, if you take -14 mod 4:
-14 = b + k * 4 //lets take -3 for k
-14 = b + -3 * 4
-14 = b - 12
-2 = b
that would be wrong (mathematically) because b is negative.
So we need to take -4 for k:
-14 = b + -4 * 4
-14 = b + 16
2 = b
that is the correct answer. In this case only the sign is the difference, but if you take -15 mod 4 you will get -3 in Java and most other languages, while the mathematically correct answer would be 1 (-15 + 16).
Using Java, you will get the negative values.
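In Java, the mathematician's always-non-negative result (for a positive divisor) is available directly as Math.floorMod, added in Java 8. A short sketch of the difference:

```java
public class ModVsRemainder {
    public static void main(String[] args) {
        // % is the remainder operator: the result takes the sign of the dividend
        System.out.println(-14 % 4);               // -2
        System.out.println(-15 % 4);               // -3
        // Math.floorMod gives the mathematical modulo for a positive divisor
        System.out.println(Math.floorMod(-14, 4)); // 2
        System.out.println(Math.floorMod(-15, 4)); // 1
    }
}
```

The two operators differ exactly when the operands have opposite signs; for non-negative operands they agree.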
You may be confused by the "modulo operator" in arithmetic, which is the same as the % operator in Java and similar languages; I don't think there is such a thing as "modular division". The % operator in Java always returns the integer remainder from repeated division between two numbers. Just like in arithmetic, (i % j) = i where 0 <= i < j, and the result of the operation is always less than j.
We know that we can use bitwise operators to divide any two numbers. For example:
int result = 10 >> 1; // result would be 5, as it's like dividing 10 by 2^1
Is there any chance we can divide a number by 0 using bits manipulation?
Edit 1: If I rephrase my question, I want to actually divide a number by zero and break my machine. How do I do that?
Edit 2: Let's forget about Java for a moment. Is it feasible for a machine to divide a number by 0 regardless of the programming language used?
Edit 3: As it's practically impossible to do this, is there a way we can simulate this using a really small number that approaches 0?
Another edit: Some people mentioned that CPU hardware prevents division by 0. I agree; there is no direct way to do it. Consider this code, for example:
i = 1;
while(0 * i != 10){
i++;
}
Let's assume that there is no cap on the maximum value of i. In this case there would be no compiler error, nor would the CPU resist. Now, I want my machine to find the number that, when multiplied by 0, gives me the result (which is obviously never going to happen), or die trying.
So, since there seems to be a way to attempt this: how can I achieve it by directly manipulating bits?
Final Edit: How to perform binary division in Java without using bitwise operators? (I'm sorry, it purely contradicts the title).
Note: I've tried simulating division by 0 and posted my answer. However, I'm looking for a faster way of doing it.
If what you want is a division method faster than division by repeated subtraction (which you posted), and that will run indefinitely when you try to divide by zero, you can implement your own version of the Goldschmidt division, and not throw an error when the divisor is zero.
The algorithm works like this:
1. Create an estimate for the factor f
2. Multiply both the dividend and the divisor by f
3. If the divisor is close enough to 1
Return the dividend
4. Else
Go back to step 1
Normally, we would need to scale down the dividend and the divisor before starting, so that 0 < divisor < 1 is satisfied. In this case, since we are going to divide by zero, there's no need for this step. We also need to choose an arbitrary precision beyond which we consider the result good enough.
The code, with no check for divisor == 0, would be like this:
static double goldschmidt(double dividend, double divisor) {
double epsilon = 0.0000001;
while (Math.abs(1.0 - divisor) > epsilon) {
double f = 2.0 - divisor;
dividend *= f;
divisor *= f;
}
return dividend;
}
This is much faster than the division by repeated subtraction method, since it converges to the result quadratically instead of linearly. When dividing by zero, it would not really matter, since both methods won't converge. But if you try to divide by a small number, such as 10^(-9), you can clearly see the difference.
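A self-contained sketch of the method above (class name mine; values chosen so the divisor is already in the required range, so no pre-scaling is needed):

```java
public class GoldschmidtDemo {
    static double goldschmidt(double dividend, double divisor) {
        double epsilon = 0.0000001;
        while (Math.abs(1.0 - divisor) > epsilon) {
            double f = 2.0 - divisor; // factor that drives the divisor towards 1
            dividend *= f;
            divisor *= f;
        }
        return dividend;
    }

    public static void main(String[] args) {
        System.out.println(goldschmidt(1.0, 0.5));  // ~2.0  (1 / 0.5)
        System.out.println(goldschmidt(3.0, 0.75)); // ~4.0  (3 / 0.75)
    }
}
```

Each iteration roughly squares the divisor's distance from 1, which is the quadratic convergence that makes this so much faster than repeated subtraction.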
If you don't want the code to run indefinitely, but to return Infinity when dividing by zero, you can modify it to stop when dividend reaches Infinity:
static double goldschmidt(double dividend, double divisor) {
double epsilon = 0.0000001;
while (Math.abs(1.0 - divisor) > epsilon && !Double.isInfinite(dividend)) {
double f = 2.0 - divisor;
dividend *= f;
divisor *= f;
}
return dividend;
}
If the starting values for dividend and divisor are such that dividend >= 1.0 and divisor == 0.0, you will get Infinity as a result after, at most, 2^10 iterations. That's because the worst case is when dividend == 1 and you need to multiply it by two (f = 2.0 - 0.0) 1024 times to get to 2^1024, which is greater than Double.MAX_VALUE.
The Goldschmidt division was implemented in AMD Athlon CPUs. If you want to read about some lower level details, you can check this article: Floating Point Division and Square Root Algorithms and Implementation in the AMD-K7™ Microprocessor.
Edit:
Addressing your comments:
Note that the code for the Restoring Division method you posted iterates 2048 (2^11) times. I lowered the value of n in your code to 1024, so we could compare both methods doing the same number of iterations.
I ran both implementations 100000 times with dividend == 1, which is the worst case for Goldschmidt, and measured the running time like this:
long begin = System.nanoTime();
for (int i = 0; i < 100000; i++) {
goldschmidt(1.0, 0.0); // changed this for restoringDivision(1) to test your code
}
long end = System.nanoTime();
System.out.println(TimeUnit.NANOSECONDS.toMillis(end - begin) + "ms");
The running time was ~290ms for Goldschmidt division and ~23000ms (23 seconds) for your code. So this implementation was about 80x faster in this test. This is expected, since in one case we are doing double multiplications and in the other we are working with BigInteger.
The advantage of your implementation is that, since you are using BigInteger, you can make your result as large as BigInteger supports, while the result here is limited by Double.MAX_VALUE.
In practice, when dividing by zero, the Goldschmidt division is doubling the dividend, which is equivalent to a shift left, at each iteration, until it reaches the maximum possible value. So the equivalent using BigInteger would be:
static BigInteger divideByZero(int dividend) {
return BigInteger.valueOf(dividend)
.shiftLeft(Integer.MAX_VALUE - 1 - ceilLog2(dividend));
}
static int ceilLog2(int n) {
return (int) Math.ceil(Math.log(n) / Math.log(2));
}
The function ceilLog2() is necessary, so that the shiftLeft() will not cause an overflow. Depending on how much memory you have allocated, this will probably result in a java.lang.OutOfMemoryError: Java heap space exception. So there is a compromise to be made here:
You can get the division simulation to run really fast, but with a result upper bound of Double.MAX_VALUE,
or
You can get the result to be as big as 2^(Integer.MAX_VALUE - 1), but it would probably take too much memory and time to get to that limit.
Edit 2:
Addressing your new comments:
Please note that no division is happening in your updated code. It's just trying to find the biggest possible BigInteger
First, let us show that the Goldschmidt division degenerates into a shift left when divisor == 0:
static double goldschmidt(double dividend, double divisor) {
double epsilon = 0.0000001;
while (Math.abs(1.0 - 0.0) > epsilon && !Double.isInfinite(dividend)) {
double f = 2.0 - 0.0;
dividend *= f;
divisor = 0.0 * f;
}
return dividend;
}
The factor f will always be equal to 2.0 and the first while condition will always be true. So if we eliminate the redundancies:
static double goldschmidt(double dividend, 0) {
while (!Double.isInfinite(dividend)) {
dividend *= 2.0;
}
return dividend;
}
Assuming dividend is an Integer, we can do the same multiplication using a shift left:
static int goldschmidt(int dividend) {
while (...) {
dividend = dividend << 1;
}
return dividend;
}
If the maximum value we can reach is 2^n, we need to loop n times. When dividend == 1, this is equivalent to:
static int goldschmidt(int dividend) {
return 1 << n;
}
When the dividend > 1, we need to subtract ceil(log2(dividend)) to prevent an overflow:
static int goldschmidt(int dividend) {
return dividend << (n - ceil(log2(dividend)));
}
Thus showing that the Goldschmidt division is equivalent to a shift left if divisor == 0.
However, shifting the bits to the left would pad bits on the right with 0. Try running this with a small dividend and left shift it (once or twice to check the results). This thing will never get to 2^(Integer.MAX_VALUE - 1).
Now that we've seen that a shift left by n is equivalent to a multiplication by 2^n, let's see how the BigInteger version works. Consider the following examples that show we will get to 2^(Integer.MAX_VALUE - 1) if there is enough memory available and the dividend is a power of 2:
For dividend = 1
BigInteger.valueOf(dividend).shiftLeft(Integer.MAX_VALUE - 1 - ceilLog2(dividend))
= BigInteger.valueOf(1).shiftLeft(Integer.MAX_VALUE - 1 - 0)
= 1 * 2^(Integer.MAX_VALUE - 1)
= 2^(Integer.MAX_VALUE - 1)
For dividend = 1024
BigInteger.valueOf(dividend).shiftLeft(Integer.MAX_VALUE - 1 - ceilLog2(dividend))
= BigInteger.valueOf(1024).shiftLeft(Integer.MAX_VALUE - 1 - 10)
= 1024 * 2^(Integer.MAX_VALUE - 1 - 10)
= 2^10 * 2^(Integer.MAX_VALUE - 1 - 10)
= 2^(Integer.MAX_VALUE - 1)
If dividend is not a power of 2, we will get as close as we can to 2^(Integer.MAX_VALUE - 1) by repeatedly doubling the dividend.
Your requirement is impossible.
Division by 0 is mathematically undefined. The concept just doesn't exist, so there is no way to simulate it.
If you were actually trying to take a limit (divide by 0+ or 0-), there is still no way to do it using bitwise operations, as shifts only allow you to divide by powers of two.
Here is an example using only a bitwise operation to divide by a power of 2:
10 >> 1 = 5
Looking at the comments you posted, if what you want is simply to exit your program when a user tries to divide by 0, you can simply validate it:
if (divisor == 0)
    System.exit(/* enter the exit code here */);
That way you will avoid the ArithmeticException.
After exchanging a couple of comments with you, it seems like what you are trying to do is crash the operating system by dividing by 0.
Unfortunately for you, as far as I know, every language that runs on a computer is validated enough to handle division by 0.
Just think of a $1 calculator: try to divide by 0 and it won't even crash; it will simply show an error message. This is probably validated at the processor level anyway.
Edit
After multiple edits/comments to your question, it seems like you are trying to retrieve Infinity by dividing by a 0+ or 0- that is very close to 0.
You can achieve this with double/float division.
System.out.println(1.0f / 0.0f);  // prints Infinity
System.out.println(1.0f / -0.0f); // prints -Infinity
System.out.println(1.0d / 0.0d);  // prints Infinity
System.out.println(1.0d / -0.0d); // prints -Infinity
Note that 0.0 really is an exact zero; IEEE 754 simply defines float/double division of a non-zero number by ±0.0 to yield ±Infinity, unlike integer division, which throws an ArithmeticException.
No, there isn't, since you can only divide by a power of 2 using right shift.
One way to simulate division of unsigned integers (irrespective of divisor used) is by division by repeated subtraction:
BigInteger result = new BigInteger("0");
int divisor = 0;
int dividend = 2;
while(dividend >= divisor){
dividend = dividend - divisor;
result = result.add(BigInteger.ONE);
}
The second way to do this is by using the Restoring Division algorithm (thanks @harold), which is way faster than the first one:
int num = 10;
BigInteger den = new BigInteger("0");
BigInteger p = BigInteger.valueOf(num);
int n = 2048; //Can be changed to find the biggest possible number (i.e. upto 2^2147483647 - 1). Currently it shows 2^2048 - 1 as output
den = den.shiftLeft(n);
BigInteger q = new BigInteger("0");
for(int i = n; i > 0; i -= 1){
q = q.shiftLeft(1);
p = p.multiply(new BigInteger("2"));
p = p.subtract(den);
if(p.compareTo(BigInteger.ZERO) >= 0){
q = q.add(BigInteger.ONE);
} else {
p = p.add(den);
}
}
System.out.println(q);
As others have indicated, you cannot mathematically divide by 0.
However if you want methods to divide by 0, there are some constants in Double you could use. For example you could have a method
public static double divide(double a, double b){
return b == 0 ? Double.NaN : a/b;
}
or
public static double posLimitDivide(double a, double b){
    if(a == 0 && b == 0)
        return Double.NaN;
    return b == 0 ? (a > 0 ? Double.POSITIVE_INFINITY : Double.NEGATIVE_INFINITY) : a/b;
}
Which would return the limit of a/x where x approaches +b.
These should be OK, as long as you account for them in whatever methods use them. And by OK I mean bad: they could cause indeterminate behavior later if you're not careful. But it is a clear way to indicate the result with an actual value rather than an exception.
Here are the outputs from the Google Chrome JavaScript console.
Here are the outputs from the DrJava Java console.
My Javascript code is
(baseCPUCyclesPerIteration - CPUCyclesTotalRoundoff) | 0
This seems to compile fine in Java if both variables are integers, but apparently they are doubles in JavaScript, even though
typeof baseCPUCyclesPerIteration reveals "number"
The results make it pretty obvious it's a double datatype. I don't understand why bitwise OR 0 works on a double in JavaScript but doesn't work on Java doubles.
It seems the purpose of the | 0 is just to trim off the decimal part of the double. I'm guessing the Java equivalent would be an (int) or (long) cast; is this correct? Or does the bitwise | 0 do more than just trim off the decimal part in JavaScript?
Edit:
Yeah, | 0 doesn't just truncate in JavaScript; I just ran 8899811111.111113453456754645 | 0 and got back 309876519.
(Although I passed the 32-bit integer limit, JavaScript still tries to compute it; I'm guessing this is where the overflow happens.)
In JavaScript, all bitwise operators cast decimal numbers to 32-bit integers. This acts like floor for positive numbers and ceil for negative numbers, i.e. truncation towards zero. Things like | 0 or ~~ are often used as tricks to cast numbers to integers in JavaScript.
To explain the overflow you're seeing, we can look at the specifications for how Javascript converts numbers to int32: http://es5.github.io/#x9.5
The abstract operation ToInt32 converts its argument to one of 2^32 integer values in the range −2^31 through 2^31−1, inclusive. This abstract operation functions as follows:
Let number be the result of calling ToNumber on the input argument.
If number is NaN, +0, −0, +∞, or −∞, return +0.
Let posInt be sign(number) * floor(abs(number)).
Let int32bit be posInt modulo 2^32; that is, a finite integer value k of Number type with positive sign and less than 2^32 in magnitude such that the mathematical difference of posInt and k is mathematically an integer multiple of 2^32.
If int32bit is greater than or equal to 2^31, return int32bit − 2^32, otherwise return int32bit.
So, to reproduce this behavior, you would have to reproduce this logic.
Edit: Here's how Mozilla's Rhino engine does it in Java: (as per the github link supplied by user3435580)
public static int toInt32(double d) {
int id = (int)d;
if (id == d) {
// This covers -0.0 as well
return id;
}
if (d != d
|| d == Double.POSITIVE_INFINITY
|| d == Double.NEGATIVE_INFINITY)
{
return 0;
}
d = (d >= 0) ? Math.floor(d) : Math.ceil(d);
double two32 = 4294967296.0;
d = Math.IEEEremainder(d, two32);
// (double)(long)d == d should hold here
long l = (long)d;
// returning (int)d does not work as d can be outside int range
// but the result must always be 32 lower bits of l
return (int)l;
}
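A hedged usage sketch (class name mine), repeating a simplified version of that conversion so it can run standalone; it reproduces the overflow value from the question:

```java
public class ToInt32Demo {
    // Simplified version of Rhino's toInt32 shown above
    public static int toInt32(double d) {
        int id = (int) d;
        if (id == d) {
            return id;                // also covers -0.0
        }
        if (Double.isNaN(d) || Double.isInfinite(d)) {
            return 0;
        }
        d = (d >= 0) ? Math.floor(d) : Math.ceil(d);
        double two32 = 4294967296.0;  // 2^32
        d = Math.IEEEremainder(d, two32);
        long l = (long) d;
        return (int) l;               // keep only the low 32 bits
    }

    public static void main(String[] args) {
        // same value as JavaScript's 8899811111.111113453456754645 | 0
        System.out.println(toInt32(8899811111.111113453456754645)); // 309876519
        System.out.println(toInt32(-3.7));       // -3: truncation towards zero
        System.out.println(toInt32(Double.NaN)); // 0
    }
}
```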
Place your declaration for i at line 3 so that the loop becomes an infinite loop.
public class Puzzel3 {
public static void main(String[] args) {
// Line 3
while (i == i + 1) {
System.out.println(i);
}
System.out.println("done");
}
}
Math says that Infinity + 1 == Infinity, so:
// The declaration required
double i = Double.POSITIVE_INFINITY;
// It's infinite loop now...
while (i == i + 1) {
System.out.println(i);
}
System.out.println("done");
double i = 1 / 0.0;
It will turn the loop into an infinite one, since i is then positive infinity.
The while loop is infinite if the loop condition remains true. Since the expression only depends on i, and i is not assigned in the loop body, that is equivalent to the loop condition being true on first evaluation.
Therefore, the question is for which values of which types the expression i == i + 1 is true.
Java has the following types:
reference types: do not support the + operator, except for strings, which get longer by concatenating "1" and therefore cannot remain identical.
primitive types:
boolean: does not support +
integral types: adding 1 is guaranteed to change the value, even in case of overflow
floating point types: a value of floating point type is either:
positive 0: 0+ + 1 is 1 and therefore != 0
negative 0: 0- + 1 is 1 and therefore != 0
NaN: NaN + 1 is NaN, but NaN != NaN
positive infinity: inf+ + 1 is inf+, and therefore == inf+
negative infinity: inf- + 1 is inf-, and therefore == inf-
normal: c + 1 is not an exact computation. Roughly speaking, 1 is added to c, and the nearest float (or double) to that value is taken as the result. Whether that float (or double) is distinct from the initial value depends on the density of floating point values around c. Internally, a floating point type is represented by a sign bit s and two fixed-width integers m and e, where the value of the float is given by s * m * 2^e.
Adding 1 will unlikely change e (and if it does, the result is distinct anyway). Otherwise:
if e <= 0, adding 1 will change m
if e == 1, adding 1 might change m, depending on the rounding mode
if e > 1, adding 1 will not change m, and therefore c + 1 == c. Now, for which values will this occur?
For float, m < 2^24. Therefore, e > 1 if c >= 2^25 or c <= - (2^25)
For double, m < 2^53. Therefore, e > 1 if c >= 2^54 or c <= -(2^54)
Those ought to be all cases :-)
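The cutoffs are easy to verify empirically; below 2^24 (float) or 2^53 (double), neighboring representable values are at most 1 apart, so adding 1 still changes the value, while safely above the cutoff the addition is absorbed by rounding:

```java
public class AbsorptionDemo {
    public static void main(String[] args) {
        float fSmall = 1 << 23;   // 2^23: adjacent floats are 1 apart here
        float fBig   = 1 << 25;   // 2^25: adjacent floats are 4 apart here
        System.out.println(fSmall + 1 == fSmall); // false
        System.out.println(fBig + 1 == fBig);     // true: while (i == i + 1) spins

        double dSmall = 1L << 52; // 2^52: adjacent doubles are 1 apart here
        double dBig   = 1L << 54; // 2^54: adjacent doubles are 4 apart here
        System.out.println(dSmall + 1 == dSmall); // false
        System.out.println(dBig + 1 == dBig);     // true
    }
}
```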
I don't understand the % comment.length bit of the following code:
comment.charAt(i % comment.length())
Does the part between the brackets convert to an integer with the value that represents i in relation to the comment length?
For instance, if:
comment = "test"
i = 2
what would comment.charAt(i % comment.length()) be?
% is the modulo operator, thus for your example i % comment.length() resolves to 2 % 4 = 2. This returns the third character (at index 2), which is 's'.
The modulo operation seems to be a safeguard for cases where i >= comment.length().
Consider the following case: i = 11 and comment = "test".
If you just use comment.charAt(i) you'd get an exception, since there are only 4 characters. The modulo operation wraps that around: 11 % 4 = 3, so the fourth character (index 3) is returned in that case.
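A runnable sketch of the wrap-around (the string is taken from the question, the class name is mine):

```java
public class CharAtWrap {
    public static void main(String[] args) {
        String comment = "test";
        System.out.println(comment.charAt(2 % comment.length()));  // 's' (index 2)
        System.out.println(comment.charAt(11 % comment.length())); // 't' (index 3)
        // comment.charAt(11) would throw StringIndexOutOfBoundsException
    }
}
```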
% is the modulo operator: it gives you the remainder of an integer division
10 % 3 = 1
as 10 / 3 = 3 with a remainder of 1
Your statement just ensures that the function argument will be less than the string length.
But I would rather check this in another way. It is pretty counterintuitive to ask for the character at position 11 in a string 10 characters long and get the character at position 1 instead of a warning or error message.