I had some code I was profiling and was surprised at how much time was being spent on Math.min(float, float).
In my use case I needed to get the min of 3 float values, where each value is guaranteed not to be NaN or another edge-case float value.
My original method was:
private static float min2(float v1, float v2, float v3) {
return Math.min(Math.min(v1, v2), v3);
}
But I found that this was about 5x faster:
private static float min1(float v1, float v2, float v3) {
if (v1 < v2 && v1 < v3) {
return v1;
}
else if (v2 < v3) {
return v2;
}
else {
return v3;
}
}
For reference this is the code for Math.min:
public static float min(float f1, float f2) {
if (f1 > f2) {
return f2;
}
if (f1 < f2) {
return f1;
}
/* if either arg is NaN, return NaN */
if (f1 != f2) {
return Float.NaN;
}
/* min(+0.0,-0.0) == -0.0 */
/* 0x80000000 == Float.floatToRawIntBits(-0.0f) */
if (Float.floatToRawIntBits(f1) == 0x80000000) {
return -0.0f;
}
return f2;
}
Note: My use case was symmetric and the above was all true for max instead of min.
EDIT1:
It turns out ~5x was an overstatement, but I am still seeing a speed difference inside my application, although I suspect that may be due to not having a proper timing test.
After posting this question I wrote a proper micro-benchmark in a separate project. It tested each method 1000 times on random floats, and both took the same amount of time. I don't think it would be useful to post that code, as it just confirms what we all already suspected.
There must be something specific to the project I'm working on causing the speed difference.
I'm doing some graphics work in an Android app, where I was finding the min/max of the values from 3 touch events. Again, edge cases like -0.0f and the different infinities are not an issue here. Values range between 0.0f and, say, 3000f.
Originally I profiled my code using the Android Device Monitor's Method Profiling tool, which did show a ~5x difference. But this isn't the best way to micro-profile code, as I have now learned.
I added the below code inside my application to attempt to get better data:
long min1Times = 0L;
long min2Times = 0L;
...
// loop assigning touch values to v1, v2, v3
long start1 = System.nanoTime();
float min1 = min1(v1, v2, v3);
long end1 = System.nanoTime();
min1Times += end1 - start1;
long start2 = System.nanoTime();
float min2 = min2(v1, v2, v3);
long end2 = System.nanoTime();
min2Times += end2 - start2;
double ratio = (double) (min1Times) / (double) (min2Times);
Log.d("", "ratio: " + ratio);
This prints a running ratio with each new touch event. As I swirl my finger on the screen, the first ratios logged are either 0.0, Infinity, or NaN, which makes me think this test isn't measuring the time very accurately. As more data is collected, the ratio tends to vary between 0.85 and 1.15.
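A less noisy approach (a minimal sketch, not code from my app; the volatile sink and flat input array are illustrative choices) is to time many calls in bulk after a warm-up pass, so the JIT has compiled both methods and the System.nanoTime() overhead is not paid once per call:

static volatile float sinkOut; // written after timing so the JIT cannot eliminate the calls

static long timeBulk(float[] vs, boolean useMin1) {
    float sink = 0f;
    long start = System.nanoTime();
    for (int i = 0; i + 2 < vs.length; i += 3) {
        sink += useMin1 ? min1(vs[i], vs[i + 1], vs[i + 2])
                        : min2(vs[i], vs[i + 1], vs[i + 2]);
    }
    long elapsed = System.nanoTime() - start;
    sinkOut = sink;
    return elapsed;
}

Calling timeBulk a few times for each method and discarding the first results approximates a warm-up; comparing only the later timings gives a much steadier ratio.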
The problem is about the precision of float values.
If you call your method with the arguments (0.0f, -0.0f, 0.0f), it will return 0.0f as the smallest float, which it isn't: float-wise, -0.0f is smaller. The nested Math.min calls will return the expected result, -0.0f.
So, to answer your question: if two methods are not 100% equal, there is no point in comparing their performance :-)
Java will evaluate 0.0f == -0.0f as true, but new Float(0.0f).equals(new Float(-0.0f)) will be false! Math.min considers this; your method does not.
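A quick standalone check of that difference (assuming the question's min1 is in scope):

float pz = 0.0f, nz = -0.0f;
System.out.println(pz == nz);                             // true: primitive comparison treats the zeroes as equal
System.out.println(new Float(pz).equals(new Float(nz)));  // false: equals() compares the bit patterns
System.out.println(Math.min(pz, nz));                     // -0.0: Math.min orders the zeroes
System.out.println(min1(pz, nz, pz));                     // 0.0: the fast version never sees the difference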
When working with float values, you should avoid the plain comparison operators for closeness checks. Instead, compare numbers against a preselected delta to consider them smaller, greater, or equal.
float delta = 0.005f;
if (Math.abs(f1 - f2) < delta) //Consider them equal.
if (Math.abs(f1 - f2) > delta) // not equal.
And that's what's happening at the end of the Math.min method - JUST in a very very precise manner by actually checking if one number is -0.0f - bitwise.
So the drawback in performance is just the more precise result being calculated.
However, if you compare float values like 10, 5, and 8, there shouldn't be a performance difference, because the 0-check is never hit.
A problem with the performance of the built-in Math.min function stems from some unfortunate-in-retrospect decisions that were made when formulating the IEEE-754 Standard, especially the behavior of comparison and relational operators. The specified behavior was suitable for some purposes, but unsuitable for some other common purposes.
Most notably, if computation x would yield a positive number that is too small to represent, and computation y would yield a negative number that is too small to represent, the comparison and relational operators will report the two values as equal even though they are not equivalent: computing 1.0/x behaves as division by an infinitesimal positive number (yielding positive infinity), while 1.0/y behaves as division by an infinitesimal negative number (yielding negative infinity). Thus, even though x and y are infinitesimal and close enough that the operators report them as equal, both Math.min(x,y) and Math.min(y,x) should return y, because it is infinitesimally smaller than x.
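A small demonstration of this corner (the literals are just chosen so the products underflow):

double x = 1e-320 * 1e-10;  // positive, too small to represent: becomes 0.0
double y = -1e-320 * 1e-10; // negative, too small to represent: becomes -0.0
System.out.println(x == y);          // true: the relational operators report them equal
System.out.println(1.0 / x);         // Infinity: x still behaves as an infinitesimal positive
System.out.println(1.0 / y);         // -Infinity: y still behaves as an infinitesimal negative
System.out.println(Math.min(x, y));  // -0.0: Math.min ranks y below x anyway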
It strikes me as crazy that people are still designing hardware and programming languages which lack any nice means of comparing floating-point values in such a way that all pairs of values which aren't fully equivalent to each other will be transitively ranked, but that has unfortunately been the state of floating-point math for the last several decades. If one needs a function that will return the minimum of x and y in cases where they represent numbers with a non-zero difference, and otherwise arbitrarily return either x or y, such a function could be written more efficiently than one which has to handle the tricky cases involving positive and negative infinitesimal values.
Your implementation should lead to some really tight bytecode, which is easily turned into equally fast assembly language by a JIT compiler. The version using Math.min contains two method calls, and so may not be inlined the way yours is. I think the results are to be expected.
Related
I've noticed that calculators and graphing programs like Desmos, GeoGebra, or Google (try searching x^(1/3)) must use a modified version of the Math.pow() function that allows them to plot some negative bases with fractional exponents that would otherwise be undefined using the regular Math.pow() function.
I'm trying to recreate this modified pow function so I can plot the missing sections of graphs, for example $x^{\frac{1}{3}}$ where $x<0$
My Attempt
I'm not aware of what this modified pow function is called in computer science or math literature, so I can't look up references that would help me make a robust and optimized version of it. Instead, I've attempted to write my own version with the help of the Fraction class from the Apache math3 library, which determines some of the conditional cases, like when the numerator or denominator of a fractional exponent is even or odd.
My version has a few issues that I'll outline and might be missing some extra conditions that I haven't considered which could lead to errors.
/* Main Method */
public static void main(String[] args) {
double xmin = -3;
double xmax = 3;
double epsilon = 0.011;
/* print out x,y coordinates for plot */
for (double x = xmin; x <= xmax; x += epsilon){
System.out.println(x+","+f(x));
}
}
/* Modified Power Function*/
private static double pow2(double base, double exponent){
boolean negativeBase = base < 0;
/* exponent is an integer and base non-negative */
if (exponent == ((int) exponent) && !negativeBase){
/* use regular pow function */
return Math.pow(base, exponent);
}
Fraction fraction;
try {
fraction = new Fraction(exponent, 1000); /* maxDenominator of 1000 for speed */
} catch (FractionConversionException e){
return Double.NaN;
}
/* updated: irrational exponent */
if (exponent != fraction.doubleValue()){
/* handles irrational exponents like π which cannot be reduced to fractions.
* This depends heavily on the maxDenominator value set above.
* With a maxDenominator of 1000, fractions like 1/33333 whose denominator has more than 4 digits
* will be treated as irrational. To avoid this, increase maxDenominator to 10000, at the cost of performance.
* That's the trade-off in this part of the algorithm.
* This condition also clears up a lot of the mess on a plot.
* If the plot is centered exactly at the origin (0,0) the messy artifacts may appear, but offsetting
* the view of the plot slightly from the origin makes them disappear;
* or maybe it has more to do with the step size epsilon (0.01, a clean number, vs 0.011, an irregular one).
* */
return Math.pow(base, exponent);
}
if (fraction.getDenominator() % 2 == 0){
/* if even denominator */
if (negativeBase){
return Double.NaN;
}
} else {
/* if odd denominator, allows for negative bases */
if (negativeBase){
if (fraction.getNumerator() % 2 == 0){
/* if even numerator
* (-base)^(2/3) is the same as ((-base)^2)^(1/3)
* any negative base squared is positive */
return Math.pow(-base, exponent);
}
/* return negative answer, make base and exponent positive */
return -Math.pow(-base, exponent);
}
}
return Math.pow(base, exponent);
}
/* Math function */
private static double f(double x){
/* example f(x) = x^(1/x) */
return pow2(x, (double) 1/x);
}
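For a quick spot check of pow2 (my own test calls, not part of the original code; results are approximate in the last bits, since Math.pow is not exactly rounded):

System.out.println(Math.pow(-8.0, 1.0 / 3.0)); // NaN: the regular pow rejects a negative base
System.out.println(pow2(-8.0, 1.0 / 3.0));     // -2.0: odd numerator and odd denominator flip the sign
System.out.println(pow2(-8.0, 2.0 / 3.0));     // ~4.0: even numerator makes the result positive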
Issue #1
For both issues, I'll be using the math function $f(x) = x^{\frac{1}{x}}$ as an example of a plot that demonstrates both issues - the first being the FractionConversionException that is caused by having a large value for the exponent. This error occurs if the value of epsilon in my code is changed to 0.1, but it seems to be avoided when the step size epsilon is 0.11. I'm not sure how to resolve it properly, but looking within the Fraction class where it throws the FractionConversionException, it uses a conditional statement that I could copy over to my pow2() function so it returns NaN, as in the code below. But I'm not sure if that's the correct thing to do!
long overflow = Integer.MAX_VALUE;
if (FastMath.abs(exponent) > overflow) {
return Double.NaN;
}
EDIT: Adding a try/catch statement around the instantiation of the Fraction class and returning NaN in the catch clause seems to be a good workaround for now. Instead of the above code.
Issue #2
Plotting the math function $f(x) = x^{\frac{1}{x}}$ produces a messy section on the left where $x<0$, as opposed to what it should look like:
https://www.google.com/search?q=x^(1%2Fx)
I don't know how to get rid of this mess so that where $x<0$ the function is undefined (NaN), while still allowing the pow2() function to plot functions like $x^{\frac{1}{3}}$, $x^{\frac{2}{3}}$, $x^{x}$, etc.
I'm also not sure what to set maxDenominator to when instantiating the Fraction object for good performance without affecting the results of the plot. Maybe there's a faster decimal-to-fraction conversion algorithm out there, although I'd imagine Apache math3 is already well optimized.
Edit:
Issue #3
I forgot to consider irrational exponents because I was too consumed with neat fractions. My algorithm fails for irrational exponents like π and plots two versions of the graph together, because of the way the Fraction class rounds the exponent: some of the denominators come out even and some odd. Maybe a condition that returns NaN when the irrational exponent doesn't equal the fraction would work. I quickly tested this condition and had to add a negativeBase check, which flips the graph the right way. It needs further testing, but it might be an idea.
Edit2: After testing, it should actually fall back to the regular pow() function instead of returning NaN to handle irrational exponents (see the updated code for the modified power function). Surprisingly, this approach also manages to get rid of most of the mess highlighted in Issue #2. I believe there are more irrational numbers than rational numbers in any interval, so most samples are discounted by the algorithm, making the region less dense and harder to connect two points into a line on the plot.
if (exponent != fraction.doubleValue() && negativeBase){
return Double.NaN;
}
Extra Question
Is it accurate to represent/plot this extra data of a function the way most modern graphing programs (mentioned above) seem to do, or is it misleading, considering that the regular power function treats this extra data for negative bases as undefined? And what are these regions or parts of the plot called in mathematics (their technical term)?
I'm also open to any other approach or algorithm.
I'm trying to find some Java code to determine if two doubles are nearly equal. I did a lot of Googling and found bits and pieces that I've put together here. Where it starts to escape me is the use of a "relative epsilon". This approach seems like what I'm looking for. I don't want to have to specify the epsilon directly but want to use an epsilon based on the magnitudes of the two arguments. Here is the code I put together, I need a sanity check on it. (P.S. I know just enough math to be dangerous.)
public class MathUtils
{
// http://stackoverflow.com/questions/3728246/what-should-be-the-
// epsilon-value-when-performing-double-value-equal-comparison
// ULP = Unit in Last Place
public static double relativeEpsilon( double a, double b )
{
return Math.max( Math.ulp( a ), Math.ulp( b ) );
}
public static boolean nearlyEqual( double a, double b )
{
return nearlyEqual( a, b, relativeEpsilon( a, b ) );
}
// http://floating-point-gui.de/errors/comparison/
public static boolean nearlyEqual( double a, double b, double epsilon )
{
final double absA = Math.abs( a );
final double absB = Math.abs( b );
final double diff = Math.abs( a - b );
if( a == b )
{
// shortcut, handles infinities
return true;
}
else if( a == 0 || b == 0 || absA + absB < Double.MIN_NORMAL )
{
// a or b is zero or both are extremely close to it
// relative error is less meaningful here
// NOT SURE HOW RELATIVE EPSILON WORKS IN THIS CASE
return diff < ( epsilon * Double.MIN_NORMAL );
}
else
{
// use relative error
return diff / Math.min( ( absA + absB ), Double.MAX_VALUE ) < epsilon;
}
}
}
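As a quick smoke test (my own calls, not part of the class):

System.out.println(MathUtils.nearlyEqual(0.1 + 0.2, 0.3, 1e-9)); // true: within the explicit epsilon
System.out.println(MathUtils.nearlyEqual(1.0, 1.0));             // true: the a == b shortcut
System.out.println(MathUtils.nearlyEqual(1.0, 1.1, 1e-9));       // false: relative error is about 0.048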
I would use a library for this; the one I normally use is DoubleMath from Google's Guava library. https://google.github.io/guava/releases/19.0/api/docs/com/google/common/math/DoubleMath.html
if (DoubleMath.fuzzyEquals(a, b, epsilon)) {
// a and b are equal within the tolerance given
}
There is also a fuzzyCompare.
The usual way to compare two floating-point values a, b is:
if ( Math.abs(a-b) <= epsilon ) do_stuff_if_equal;
else do_stuff_if_different;
where Math.abs() is the absolute value. As I do not code in Java, you may need to use the double variant if that is not the case. The epsilon is your allowed difference. As mentioned, ulp is too small for this; you need to use a value that makes sense for the values you are comparing. So how do you compute epsilon?
That is a bit tricky. Yes, it is possible to use the magnitude of a, b, but that is not a robust way, because if the exponents of a, b are too different you can easily get false positives. Instead you should use a meaningful value. For example, if you are comparing position coordinates, then epsilon should be a fraction of the minimal detail or the minimal distance you consider to be the same point. For angles, use some minimal angle that is small enough, like 1e-6 degrees, but the value depends on the ranges and accuracy you work with. For normalized <-1,1> ranges I usually use 1e-10 or 1e-30.
As you can see, epsilon depends mostly on the target accuracy and magnitude and varies from case to case, so creating some uniform way (to get rid of epsilon like you want) is not safe and would only lead to headaches later on.
To ease this up, I usually define a _zero constant or variable (in the case of computational classes) that can be changed. I set it by default to a value that is good enough for most cases, and if it causes problems at some point I know I can easily change it...
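A minimal sketch of that idea (the class and member names are mine, not from any library):

public class Solver {
    private double _zero = 1e-10; // default good enough for normalized ranges

    public void setZero(double z) { _zero = z; } // loosen or tighten per use case

    public boolean isZero(double x) { return Math.abs(x) <= _zero; }

    public boolean same(double a, double b) { return Math.abs(a - b) <= _zero; }
}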
If you want to do it your way anyway (ignoring above text) then you can do this:
if (Math.abs(a)>=Math.abs(b)) epsilon=1e-30*Math.abs(b);
else epsilon=1e-30*Math.abs(a);
but as I said, this may lead to wrong results. If you persist in using ulp, then I would use min instead of max.
You can use the class org.apache.commons.math3.util.Precision from the Apache Commons Math. Example:
if (Precision.equals(sum, price, 0.009)) {
// arguments are equal or within the range of allowed error (inclusive)
}
I would like some advice from people who have more experience working with primitive double equality in Java. Using d1 == d2 for two doubles d1 and d2 is not sufficient due to possible rounding errors.
My questions are:
Is Java's Double.compare(d1, d2) == 0 handling rounding errors to some degree? As explained in the Java 1.7 documentation, it returns 0 if d1 is numerically equal to d2. Is anyone certain what exactly they mean by numerically equal?
Using relative error calculation against some delta value, is there a generic (not application specific) value of delta you would recommend? Please see example below.
Below is a generic function for checking equality, considering relative error. What value of delta would you recommend to capture the majority of rounding errors from the simple operations +, -, *, /?
public static boolean isEqual(double d1, double d2) {
return d1 == d2 || isRelativelyEqual(d1,d2);
}
private static final double delta = 1e-15; // placeholder; the question above asks what this value should be
private static boolean isRelativelyEqual(double d1, double d2) {
return delta > Math.abs(d1 - d2) / Math.max(Math.abs(d1), Math.abs(d2));
}
You could experiment with delta values on the order of 10^-15, but you will notice that some calculations give a larger rounding error. Furthermore, the more operations you make, the larger the accumulated rounding error will be.
One particularly bad case is subtracting two almost equal numbers, for example 1.0000000001 - 1.0, and comparing the result to 0.0000000001.
So there is little hope to find a generic method that would be applicable in all situations. You always have to calculate the accuracy you can expect in a certain application and then consider results equal if they are closer than this accuracy.
For example the output of
public class Main {
public static double delta(double d1, double d2) {
return Math.abs(d1- d2) / Math.max(Math.abs(d1), Math.abs(d2));
}
public static void main(String[] args) {
System.out.println(delta(0.1*0.1, 0.01));
System.out.println(delta(1.0000000001 - 1.0, 0.0000000001));
}
}
is
1.7347234759768068E-16
8.274036411668976E-8
Interval arithmetic can be used to keep track of the accumulated rounding errors. However, in practice the error intervals come out too pessimistic, because sometimes rounding errors cancel each other out.
You could try something like this (not tested):
public static int sortaClose(double d1, double d2, int bits) {
long bitMask = 0xFFFFFFFFFFFFFFFFL << bits; // zeroes out the lowest 'bits' mantissa bits
long thisBits = Double.doubleToLongBits(d1) & bitMask;
long anotherBits = Double.doubleToLongBits(d2) & bitMask;
if (thisBits < anotherBits) return -1;
if (thisBits > anotherBits) return 1;
return 0; // equal once the low-order bits are ignored
}
"bits" would typically be from 1 to 4 or so, depending on how precise you wanted the cutoff.
A refinement would be to add 1 at the position of the first bit to be zeroed before masking (for "rounding"), but then you have to worry about a carry rippling all the way up past the most significant bit.
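To see where the cutoff lands (assuming the sortaClose method above; the bit patterns are traced by hand, so treat the outputs as illustrative):

double a = 0.1 + 0.2; // bits end in ...3334
double b = 0.3;       // bits end in ...3333
System.out.println(sortaClose(a, b, 2)); // 1: masking 2 bits still leaves ...30 vs ...34
System.out.println(sortaClose(a, b, 3)); // 0: masking 3 bits maps both to ...30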
From the javadoc for compareTo
Double.NaN is considered by this method to be equal to itself and greater than all other double values (including Double.POSITIVE_INFINITY).
0.0d is considered by this method to be greater than -0.0d.
You may find this article very helpful
If you want, you can check like this:
double epsilon = 0.0000001;
if ( d <= ( 0 - epsilon ) ) { .. }
else if ( d >= ( 0 + epsilon ) ) { .. }
else { /* d "equals" zero */ }
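Filled in as a runnable fragment (the threshold and test value are arbitrary):

double epsilon = 0.0000001;
double d = -0.00000005; // inside the band around zero
if (d <= -epsilon) System.out.println("definitely negative");
else if (d >= epsilon) System.out.println("definitely positive");
else System.out.println("d counts as zero"); // this branch runs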
I'm using Heron's formula to find the area of a triangle. Given sides a, b, and c, A = √(s(s-a)(s-b)(s-c)) where s is the semiperimeter (a+b+c)/2. This formula should work perfectly, but I noticed that Math.pow() and Math.sqrt() give different results. Why does this happen and how can I fix it?
I wrote two methods that find the area and determine if it is an integer.
In this first method, I take the square roots and then multiply them:
public static boolean isAreaIntegral(long a, long b, long c)
{
double s = (a+b+c)/2.0;
double area = Math.sqrt(s)*Math.sqrt(s-a)*Math.sqrt(s-b)*Math.sqrt(s-c);
return area%1.0==0.0 && area > 0.0;
}
In this second method, I find the product and then take the square root:
public static boolean isAreaIntegral(long a, long b, long c)
{
double s = (a+b+c)/2.0;
double area = Math.pow(s*(s-a)*(s-b)*(s-c),0.5);
return area%1.0==0.0 && area > 0.0;
}
Can anyone explain why these two mathematically equivalent methods give different values? I'm working on Project Euler Problem 94. My answer comes out to 999990060 the first way and 996784416 the second way. (I know that both answers are very far off the actual answer.)
I would certainly vote for "rounding issues", as you multiply the results of multiple method calls in the first method (where every method result gets rounded), compared to the single method call in the second method, where you round only once.
The difference between the answers is larger than I'd expect. Or maybe it isn't. It's late and my mathematical mind crashed a while ago.
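To make the rounding argument concrete, here is a small check one could run (a 3-4-5 triangle has area exactly 6, so any deviation is pure rounding error; the exact last-bit outcomes may vary):

long a = 3, b = 4, c = 5;
double s = (a + b + c) / 2.0; // 6.0, and s*(s-a)*(s-b)*(s-c) is exactly 36.0
double viaSqrts = Math.sqrt(s) * Math.sqrt(s - a) * Math.sqrt(s - b) * Math.sqrt(s - c);
double viaPow = Math.pow(s * (s - a) * (s - b) * (s - c), 0.5);
System.out.println(viaSqrts);           // four roundings multiplied; may miss 6.0 by a few ulps
System.out.println(viaPow);             // one final rounding; likely exactly 6.0
System.out.println(viaSqrts == viaPow); // quite possibly false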
I think your issue is with rounding. When you multiply a load of roots together, your answer falls further from the true value.
The second method will be more accurate.
Though, not necessarily as accurate as Euler is asking for.
A calculator is a good bet.
Both methods are problematic. You should in general be very careful when comparing floating point values (that is, also double precision floating point values). Particularly, comparing the result of a computation with == or != is nearly always questionable (and quite often it is just wrong). Comparing two floating point values for "equality" should be done with a method like
private static boolean isEqual(double x, double y)
{
double epsilon = 1e-8;
return Math.abs(x - y) <= epsilon * Math.abs(x);
// see Knuth section 4.2.2 pages 217-218
}
In this case, the floating-point remainder operator will also not have the desired result. Consider the following classic example:
public class PrecisionAgain
{
public static void main(String[] args)
{
double d = 0;
for (int i=0; i<20; i++)
{
d += 0.1;
}
System.out.println(d);
double r = d%1.0;
System.out.println(r);
}
}
Output:
2.0000000000000004
4.440892098500626E-16
In your case, in order to rule out these rounding errors, the return statement could probably (!) be something simple like
return Math.abs(area - Math.round(area)) < 1e-8;
But in other situations, you should definitely read more about floating point operations. (The site http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html is often recommended, but might be a tough one to start with...)
This still does not really answer your actual question: WHY are the results different? If in doubt, the answer is this simple: because they make different errors (but they both make errors - that's in fact more important here!)
I've been having problems with my code for two weeks, and have been unsuccessful in debugging it. I've come here in the hope that someone can help. I've written a program that utilizes the Barnes-Hut algorithm for n-body gravitational simulation. My problem is that one or more 'particles' will have the position of {NaN, NaN, NaN} assigned to them (using three doubles to represent x, y, z of 3-d space). This, in turn, causes the other particles to have an acceleration of {NaN, NaN, NaN}, and in turn, a velocity and position of {NaN, NaN, NaN} as well. Basically, after a frame or two, everything disappears. It seems to be occurring in the updateAcc method, but I have a feeling that this isn't so. I understand that this is a huge undertaking, and am very grateful for anyone that helps me.
What I've checked:
There are no negative square roots, and all the values seem to be within their limits.
The source code is available here. Thanks again.
Code that seems to produce NaN:
private static void getAcc(particle particle, node node)
{
if ((node.particle == null && node.children == null) || node.particle == particle)
{
//Getting gravity to a node that is either empty or the same node: do nothing...
}
else if (distance(node.centerOfMass, particle.position) / node.sideLength > theta && node.children != null)
{
for (int i = 0; i < node.children.length; i++)
{
if (node.children[i] != null)
{
getAcc(particle, node.children[i]);
}
}
}
else
{
particle.acceleration = vecAdd(particle.acceleration, vecDiv(getForce(particle.position, particle.mass, node.centerOfMass, node.containedMass), particle.mass));
}
}
private static double sumDeltaSquare(double[] pos1, double[] pos2)
{
return Math.pow(pos1[0]-pos2[0],2)+Math.pow(pos1[1]-pos2[1],2)+Math.pow(pos1[2]-pos2[2],2);
}
private static double[] getForce(double[] pos1, double m1, double[] pos2, double m2)
{
double ratio = G*m1*m2;
ratio /= sumDeltaSquare(pos1, pos2);
ratio /= Math.sqrt(sumDeltaSquare(pos1,pos2));
return vecMul(vecSub(pos2, pos1), ratio);
}
private static double distance(double[] position, double[] center)
{
double distance = Math.sqrt(Math.pow(position[0]-center[0],2) + Math.pow(position[1]-center[1],2) + Math.pow(position[2]-center[2],2));
return distance;
}
I'm not sure if this is the only problem, but it is a start.
sumDeltaSquare will sometimes return 0, which means that when the value is used in getForce (ratio /= sumDeltaSquare(pos1, pos2);) it will produce Infinity and start causing issues.
This is a serious problem that you need to debug and work out what everything means. I enjoyed looking at the dots though.
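One common mitigation for that zero-distance division, sketched against the helpers in the question (EPS is a hypothetical softening constant that would need to be scaled to the simulation):

private static final double EPS = 1e-9; // hypothetical softening length

private static double[] getForceSoftened(double[] pos1, double m1, double[] pos2, double m2) {
    // Adding EPS^2 to the squared distance keeps the denominator strictly positive,
    // so coincident particles produce a huge but finite force instead of Infinity/NaN.
    double d2 = sumDeltaSquare(pos1, pos2) + EPS * EPS;
    double ratio = G * m1 * m2 / (d2 * Math.sqrt(d2));
    return vecMul(vecSub(pos2, pos1), ratio);
}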
Firstly, why aren't you using Java's vecmath library? (It's distributed as a part of Java3D. Download Java3D's binary build and then just use vecmath.jar.) Your problem is, very likely, somewhere in your custom vector functions. If not, #pimaster is probably right that your translation magnitude method sumDeltaSquare might be returning 0 if two of your masses occupy a single point. Which means, unless you're inside a black hole, you're doing it wrong :P. Or you need to come up with a quantum gravity theory before you can do this simulation.
If you can't use vecmath (i.e. this is a homework assignment), I would suggest you use a regex to find every instance of return * and then replace it with assert !Double.isNaN(*) && Double.isFinite(*);\nreturn *, where * substitutes whatever the regex finds as a "match group". I've forgotten exactly what that is, but I've got you started on Google. I also suggest you avoid optimizations until after you have working code.
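If a full regex rewrite feels heavy, the same guard can live in one hypothetical helper, so each return site becomes return checked(v):

// requires: import java.util.Arrays;
private static double[] checked(double[] v) {
    for (double x : v) {
        // fires under -ea and pinpoints the first non-finite vector produced
        assert Double.isFinite(x) : "non-finite component in " + Arrays.toString(v);
    }
    return v;
}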
I'm not going to debug your code. But NaN values are the result of mathematically invalid operations on floating-point numbers. The most famous of those is division by 0 (which does not throw an exception with floating point).
How can this happen?
If your computation produces very small numbers, they might become too small to be represented as 64-bit floating-point numbers (they would require more bits than are available), and Java will return 0.0 instead.
In the other direction, if you get an overflow (the magnitude of the number requires too many bits), Java turns this into infinity. Doing math with infinity and 0 can quickly lead to NaN, and NaN propagates through every operation you apply to it.
For details, see sections 4.2 and 15.17 of the Java language spec.