Which way is the best to get the minimum number, and which performs better, or are both the same performance-wise?
One way to get the min distance between two numbers:
double minDistance = Double.MAX_VALUE;
double distance = coordinate1.distanceTo(coordinate2);
if (distance < minDistance) {
    minDistance = distance;
}
Another way to get the min distance between two numbers:
double minDistance = Double.MAX_VALUE;
minDistance = Math.min(coordinate1.distanceTo(coordinate2), minDistance);
If you are dealing with positive numbers, not equal to NaN, -0.0 or 0.0, then there shouldn't be much difference.
The following excerpt from the Java 8 documentation of Math.min highlights the differences:
If either value is NaN, then the result is NaN. Unlike the numerical comparison operators, this method considers negative zero to be
strictly smaller than positive zero. If one argument is positive zero
and the other is negative zero, the result is negative zero.
So Math.min may be slightly less efficient, since it checks for NaN and distinguishes -0.0 from 0.0, but it is arguably more readable. Consider first whether those special cases apply to your data at all, and only then weigh readability against the (ever so slight) performance difference.
Personally I would use Math.min, but that's my own opinion.
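To make the special cases concrete, here is a small sketch (nothing beyond java.lang is assumed) showing where Math.min and a plain < comparison disagree:
public class MinSpecialCases {
    public static void main(String[] args) {
        // Math.min treats -0.0 as strictly smaller than 0.0 ...
        System.out.println(Math.min(0.0, -0.0));        // -0.0
        // ... and it propagates NaN.
        System.out.println(Math.min(Double.NaN, 1.0));  // NaN

        // The plain comparison sees 0.0 == -0.0 and simply ignores NaN.
        double min = 0.0;
        double candidate = -0.0;
        if (candidate < min) {  // false: -0.0 < 0.0 is false
            min = candidate;
        }
        System.out.println(min);                        // 0.0
    }
}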
Java Math#min(double, double) does the following:
Returns the smaller of two double values. That is, the result is the value closer to negative infinity.
If the arguments have the same value, the result is that same value.
If either value is NaN, then the result is NaN.
If one argument is positive zero and the other is negative zero, the result is negative zero.
Take a look at the source:
public static double min(double a, double b) {
if (a != a) return a; // a is NaN
if ((a == 0.0d) && (b == 0.0d) && (Double.doubleToLongBits(b) == negativeZeroDoubleBits)) {
return b;
}
return (a <= b) ? a : b;
}
And here is your implementation:
if (distance < minDistance) {
    ....
}
So yes, your code is a little bit faster than Math.min(), because Math.min() checks additional conditions (NaN and negative zero) that your if statement does not care about.
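The difference is not purely about speed, though; the two variants can also produce different results when a NaN sneaks in. A hedged sketch, with plain doubles standing in for the question's coordinate1.distanceTo(coordinate2) calls:
public class MinDistanceNaN {
    public static void main(String[] args) {
        // Stand-in distances; in the question these would come from distanceTo().
        double[] distances = { 3.0, Double.NaN, 1.0 };

        // Variant 1: the < comparison silently skips NaN (NaN < x is false).
        double minIf = Double.MAX_VALUE;
        for (double d : distances) {
            if (d < minIf) {
                minIf = d;
            }
        }
        System.out.println(minIf);   // 1.0

        // Variant 2: Math.min propagates NaN once it appears.
        double minMath = Double.MAX_VALUE;
        for (double d : distances) {
            minMath = Math.min(d, minMath);
        }
        System.out.println(minMath); // NaN
    }
}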
Related
How can I tell which method is better to use for my situation, or which is faster?
For example:
public boolean isSquareNumber(){
double nd = Math.sqrt(num); // num is a class member variable.
if(nd == Math.floor(nd))
{
return true;
} else {
return false;
}
}
and this method
public boolean isSquareNumber(){
double nd = Math.sqrt(num);
if(nd == (int)nd)
{
return true;
} else {
return false;
}
}
The difference comes down to Math.floor() versus the (int) cast after Math.sqrt().
Both behaved exactly the same for this situation, but how do I decide which is faster?
Thank you for your time <3.
The fastest way to test whether a double value is an integer is this:
double d = ...
if (d == ((long) d)) {
// It is an integer
}
Note that it is (theoretically) possible for sqrt(someValue) to produce a double value that is indistinguishable from an integer value, even though the true square root of someValue is not an integer. As the javadoc states:
Otherwise, the result is the double value closest to the true mathematical square root of the argument value.
So you could get a case where the "closest" double value corresponds to an integer, even though the actual square root is irrational.
The other point of contention is whether Math.floor is actually correct.
On the one hand, the narrowing cast and Math.floor are different:
narrowing uses the IEEE 754 "round towards zero" mode
Math.floor() returns "the largest (closest to positive infinity) floating-point value that is less than or equal to the argument and is equal to a mathematical integer". In other words, it rounds towards negative infinity.
On the other hand, if we are testing a double value that is known to be non-negative1, then rounding towards zero and towards negative infinity are the same thing.
1 - Is this the case for Math.sqrt()? Strictly no, since sqrt(-0.0) is defined to return -0.0 ... per the javadoc. However, -0.0 should be treated as +0.0 for the purposes of rounding.
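For completeness, a quick sketch of the difference (and of the agreement for non-negative values) described above:
public class FloorVsCast {
    public static void main(String[] args) {
        double d = -2.5;
        System.out.println(Math.floor(d)); // -3.0 (rounds towards negative infinity)
        System.out.println((long) d);      // -2   (rounds towards zero)

        // For the non-negative values produced by Math.sqrt the two agree:
        double s = 2.5;
        System.out.println(Math.floor(s)); // 2.0
        System.out.println((long) s);      // 2
    }
}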
Java 8 gave us Math.addExact() for integers but not decimals.
Is it possible for double and BigDecimal to overflow? Judging by Double.MAX_VALUE and How to get biggest BigDecimal value I'd say the answer is yes.
As such, why don't we have Math.addExact() for those types as well? What's the most maintainable way to check this ourselves?
double overflows to Infinity and -Infinity, it doesn't wrap around. BigDecimal doesn't overflow, period, it is only limited by the amount of memory in your computer. See: How to get biggest BigDecimal value
The only difference between + and .addExact is that the latter attempts to detect whether overflow has occurred and throws an exception instead of wrapping. Here's the source code:
public static int addExact(int x, int y) {
int r = x + y;
// HD 2-12 Overflow iff both arguments have the opposite sign of the result
if (((x ^ r) & (y ^ r)) < 0) {
throw new ArithmeticException("integer overflow");
}
return r;
}
If you want to check whether an overflow has occurred, in one sense it's simpler to do with double anyway, because you can simply check the result for Double.POSITIVE_INFINITY or Double.NEGATIVE_INFINITY; with int and long it's slightly more complicated because the overflowed result isn't one fixed value. On the other hand, infinities could also be legitimate inputs (e.g. Infinity + 10 = Infinity, and you probably don't want to throw an exception in that case).
For all these reasons (and we haven't even mentioned NaN yet), this is probably why such an addExact method doesn't exist in the JDK. Of course, you can always add your own implementation to a utility class in your own application.
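If you do want such a check, here is a minimal sketch of a utility you could add yourself; addExactDouble is a hypothetical name, not a JDK method:
public final class DoubleMath {

    // Hypothetical addExact-style helper for doubles: throws if finite inputs
    // overflow to infinity, or if two infinite inputs cancel out to NaN.
    public static double addExactDouble(double x, double y) {
        double r = x + y;
        if (Double.isNaN(r) && !Double.isNaN(x) && !Double.isNaN(y)) {
            throw new ArithmeticException("double addition produced NaN");
        }
        if (Double.isInfinite(r) && !Double.isInfinite(x) && !Double.isInfinite(y)) {
            throw new ArithmeticException("double overflow");
        }
        return r;
    }

    private DoubleMath() {
    }
}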
The reason you do not need an addExact function for floating-point values is that instead of wrapping around, they overflow to Double.POSITIVE_INFINITY or Double.NEGATIVE_INFINITY.
Consequently you can very easily check at the end of the operation whether it overflowed or not. Since Double.POSITIVE_INFINITY + Double.NEGATIVE_INFINITY is NaN, you also have to check for NaN in the case of more complicated expressions.
This is not only faster but also easier to read. Instead of having Math.addExact(Math.addExact(x, y), z) to add 3 doubles together, you can instead write:
double result = x + y + z;
if (Double.isInfinite(result) || Double.isNaN(result)) throw new ArithmeticException("overflow");
BigDecimal on the other hand will indeed overflow and throw a corresponding exception in that case as well - this is very unlikely to ever happen in practice though.
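A short sketch of the contrast; the exact printed digits of the BigDecimal sum are shown approximately in the comment:
import java.math.BigDecimal;

public class OverflowDemo {
    public static void main(String[] args) {
        System.out.println(Double.MAX_VALUE + Double.MAX_VALUE); // Infinity

        BigDecimal max = BigDecimal.valueOf(Double.MAX_VALUE);
        System.out.println(max.add(max));                        // about 3.595E+308, computed exactly
    }
}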
For double, please check the other answers.
BigDecimal has the addExact() protection already built in. Many arithmetic operation methods (e.g. multiply) of BigDecimal contain a check on the scale of the result:
private int checkScale(long val) {
int asInt = (int)val;
if (asInt != val) {
asInt = val>Integer.MAX_VALUE ? Integer.MAX_VALUE : Integer.MIN_VALUE;
BigInteger b;
if (intCompact != 0 &&
((b = intVal) == null || b.signum() != 0))
throw new ArithmeticException(asInt>0 ? "Underflow":"Overflow");
}
return asInt;
}
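For illustration, here is one (contrived) way to trigger that check; the exact exception message may vary between JDK versions:
import java.math.BigDecimal;

public class BigDecimalScaleOverflow {
    public static void main(String[] args) {
        // 1 x 10^2147483647: the unscaled value is tiny, but the scale sits
        // right at the edge of the int range.
        BigDecimal huge = BigDecimal.ONE.scaleByPowerOfTen(Integer.MAX_VALUE);

        try {
            // Multiplying adds the two scales, which no longer fits in an int,
            // so BigDecimal's internal checkScale(...) throws.
            huge.multiply(huge);
        } catch (ArithmeticException e) {
            System.out.println("BigDecimal refused: " + e.getMessage());
        }
    }
}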
Is it possible to check if a float is a positive zero (0.0) or a negative zero (-0.0)?
I've converted the float to a String and checked if the first char is a '-', but are there any other ways?
Yes, divide by it. 1 / +0.0f is +Infinity, but 1 / -0.0f is -Infinity. It's easy to find out which one it is with a simple comparison, so you get:
if (1 / x > 0)
// +0 here
else
// -0 here
(this assumes that x can only be one of the two zeroes)
You can use Float.floatToIntBits to convert it to an int and look at the bit pattern:
float f = -0.0f;
if (Float.floatToIntBits(f) == 0x80000000) {
System.out.println("Negative zero");
}
Definitely not the best approach. Check out the function
Float.floatToRawIntBits(f);
Javadoc:
/**
* Returns a representation of the specified floating-point value
* according to the IEEE 754 floating-point "single format" bit
* layout, preserving Not-a-Number (NaN) values.
*
* <p>Bit 31 (the bit that is selected by the mask
* {@code 0x80000000}) represents the sign of the floating-point
* number.
...
public static native int floatToRawIntBits(float value);
Double.equals distinguishes ±0.0 in Java. (There's also Float.equals.)
I'm a bit surprised no-one has mentioned these, as they seem to me clearer than any method given so far!
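A quick sketch of the difference between == and Double.equals for the two zeroes (Float.equals behaves the same way):
public class NegativeZeroEquals {
    public static void main(String[] args) {
        System.out.println(0.0 == -0.0);                        // true
        System.out.println(Double.valueOf(0.0).equals(-0.0));   // false: equals compares the bit patterns
        System.out.println(Double.valueOf(-0.0).equals(-0.0));  // true
    }
}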
The approach used by Math.min is similar to what Jesper proposes but a little clearer:
private static int negativeZeroFloatBits = Float.floatToRawIntBits(-0.0f);
float f = -0.0f;
boolean isNegativeZero = (Float.floatToRawIntBits(f) == negativeZeroFloatBits);
When a float is negative (including -0.0 and -inf), it uses the same sign bit as a negative int. This means you can compare the integer representation to 0, eliminating the need to know or compute the integer representation of -0.0:
if(f == 0.0) {
if(Float.floatToIntBits(f) < 0) {
//negative zero
} else {
//positive zero
}
}
That has an extra branch over the accepted answer, but I think it's more readable without a hex constant.
If your goal is just to treat -0 as a negative number, you could leave out the outer if statement:
if(Float.floatToIntBits(f) < 0) {
//any negative float, including -0.0 and -inf
} else {
//any non-negative float, including +0.0, +inf, and NaN
}
For negative:
new Double(-0.0).equals(new Double(value));
For positive:
new Double(0.0).equals(new Double(value));
Just use Double.compare(d1, d2).
double d1 = -0.0; // or d1 = 0.0
if ( Double.compare (d1, 0.0) < 0 )
System.out.println("negative zero");
else
System.out.println("positive zero");
I want to determine whether a number (in double) is a perfect square or not. I have used the below code but it fails for many inputs.
private static boolean isSquare(double i) {
double s = Math.sqrt(i);
return ((s*s) == i);
}
When s comes out in scientific form, the code fails. For example, when s is 2.719601835756618E9.
Your code makes no attempt to test whether the square root of the number is an integer. Any nonnegative real number is the square of some other real number; your code's result depends entirely on floating-point rounding behavior.
Test whether the square root is an integer:
if (Double.isInfinite(i)) {
    return false;
}
double sqrt = Math.sqrt(i);
return sqrt == Math.floor(sqrt) && sqrt * sqrt == i;
The sqrt*sqrt == i check should catch some cases where a number exceptionally close to a square has a square root whose closest double approximation is an integer. I have not tested this and make no warranties as to its correctness; if you want your software to be robust, don't just copy the code out of the answer.
UPDATE: Found a failing edge case. If an integer double has a greatest odd factor long enough that the square is not representable, feeding the closest double approximation of the square to this code will result in a false positive. The best fix I can think of at the moment is examining the significand of the square root directly to determine how many bits of precision it would take to represent the square. Who knows what else I've missed, though?
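One way to sidestep those rounding pitfalls entirely, when the input is an exact integer, is to do the check in exact integer arithmetic. A sketch using BigInteger.sqrt(), which is available from Java 9 onwards:
import java.math.BigInteger;

public class ExactSquareCheck {

    // Assumes the value to test is an exact integer.
    static boolean isPerfectSquare(BigInteger n) {
        if (n.signum() < 0) {
            return false;
        }
        BigInteger root = n.sqrt();          // floor of the true square root (Java 9+)
        return root.multiply(root).equals(n);
    }

    public static void main(String[] args) {
        BigInteger square = BigInteger.valueOf(1_000_000_007L).pow(2);
        System.out.println(isPerfectSquare(square));                     // true
        System.out.println(isPerfectSquare(square.add(BigInteger.ONE))); // false
    }
}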
static boolean checkPerfectSquare(double x)
{
// finding the square root of given number
double sq = Math.sqrt(x);
/* Math.floor() returns the largest integer value less than or
 * equal to the argument, for example Math.floor(984.1) is 984.0,
 * so if the value of sq is not an integer then the expression
 * below will be non-zero.
 */
return ((sq - Math.floor(sq)) == 0);
}
I know that the question is old, but I would post my solution anyway:
return Math.sqrt(i) % 1d == 0;
Simply check if the sqrt has decimals.
I have some code like this:
class Foo {
public double x;
}
void test() {
Foo foo = new Foo();
// Is this a valid way to test for zero? 'x' hasn't been set to anything yet.
if (foo.x == 0) {
}
foo.x = 0.0;
// Will the same test be valid?
if (foo.x == 0) {
}
}
I basically want to avoid a divide-by-zero exception in the future.
Thanks
Numeric primitives in class scope are initialized to zero when not explicitly initialized.
Numeric primitives in local scope (variables in methods) must be explicitly initialized.
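A minimal sketch of both rules:
class Defaults {
    double field;                          // fields get a default value: 0.0

    void demo() {
        System.out.println(field == 0.0);  // true before any assignment

        double local;                      // locals get no default value
        // System.out.println(local);      // would not compile: local might not have been initialized
        local = 0.0;                       // must assign before use
        System.out.println(local);
    }
}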
If you are only worried about division by zero exceptions, checking that your double is not exactly zero works great.
if(value != 0)
//divide by value is safe when value is not exactly zero.
Otherwise when checking if a floating point value like double or float is 0, an error threshold is used to detect if the value is near 0, but not quite 0.
public boolean isZero(double value, double threshold){
return value >= -threshold && value <= threshold;
}
Yes; all primitive numeric types default to 0.
However, calculations involving floating-point types (double and float) can be imprecise, so it's usually better to check whether it's close to 0:
if (Math.abs(foo.x) < 2 * Double.MIN_VALUE)
You need to pick a margin of error, which is not simple.
In Java, 0 is the same as 0.0, and doubles default to 0 (though many advise always setting them explicitly for improved readability).
I have checked, and foo.x == 0 and foo.x == 0.0 are both true when foo.x is zero.
Yes, it's a valid test, although there's an implicit conversion from int to double. For clarity/simplicity you should test with (foo.x == 0.0). That will guard against NaN results/division by zero, but in general a double value can be very, very close to 0 without being exactly zero, and then the test fails (I'm talking in general now, not about your code). Dividing by such a value will give huge numbers.
If this has anything to do with money, do not use float or double, instead use BigDecimal.
The safest way would be bitwise OR-ing the raw bits of your double with 0.
Look at this: XORing two doubles in Java
Basically you could do if ((Double.doubleToRawLongBits(foo.x) | 0L) == 0L) (true only if it is really +0.0)
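Note that this raw-bits check only matches +0.0; -0.0 has the sign bit set, so its bit pattern is not 0. A quick sketch:
public class RawBitsZeroCheck {
    public static void main(String[] args) {
        System.out.println(Double.doubleToRawLongBits(0.0) == 0L);  // true
        System.out.println(Double.doubleToRawLongBits(-0.0) == 0L); // false: the sign bit is set
        System.out.println(-0.0 == 0.0);                            // true for the == comparison
    }
}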