I'm using Heron's formula to find the area of a triangle. Given sides a, b, and c, A = √(s(s-a)(s-b)(s-c)) where s is the semiperimeter (a+b+c)/2. This formula should work perfectly, but I noticed that Math.pow() and Math.sqrt() give different results. Why does this happen and how can I fix it?
I wrote two methods that find the area and determine if it is an integer.
In this first method, I take the square roots and then multiply them:
public static boolean isAreaIntegral(long a, long b, long c)
{
double s = (a+b+c)/2.0;
double area = Math.sqrt(s)*Math.sqrt(s-a)*Math.sqrt(s-b)*Math.sqrt(s-c);
return area%1.0==0.0 && area > 0.0;
}
In this second method, I find the product and then take the square root:
public static boolean isAreaIntegral(long a, long b, long c)
{
double s = (a+b+c)/2.0;
double area = Math.pow(s*(s-a)*(s-b)*(s-c),0.5);
return area%1.0==0.0 && area > 0.0;
}
Can anyone explain why these two methods, which are mathematically equivalent, give different values? I'm working on Project Euler Problem 94. My answer comes out to 999990060 the first way and 996784416 the second way. (I know that both answers are very far off the actual answer.)
I would certainly vote for "rounding issues", as you multiply the results of multiple method calls in the first method (where every method result gets rounded), compared to the single method call in the second method, where you round only once.
The difference between the answers is larger than I'd expect. Or maybe it isn't. It's late and my mathematical mind crashed a while ago.
I think your issue is with rounding. When you multiply a load of roots together, your answer falls further from the true value.
The second method will be more accurate.
Though, not necessarily as accurate as Euler is asking for.
A calculator is a good bet.
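To make the rounding difference concrete, here is a minimal sketch comparing the two computations on a 5-5-6 triangle, whose area is exactly 12. The precise digits printed may vary by platform, since Math.pow is not required to be correctly rounded; the point is only that the four-rounding and one-rounding paths can disagree in the last bits:
public class HeronCompare
{
    public static void main(String[] args)
    {
        long a = 5, b = 5, c = 6; // s = 8, area is exactly 12
        double s = (a + b + c) / 2.0;
        // four square roots, each rounded, then three rounded multiplications:
        double viaSqrt = Math.sqrt(s) * Math.sqrt(s - a) * Math.sqrt(s - b) * Math.sqrt(s - c);
        // one exact product (here), then a single rounded root:
        double viaPow = Math.pow(s * (s - a) * (s - b) * (s - c), 0.5);
        System.out.println(viaSqrt); // may print e.g. 12.000000000000002
        System.out.println(viaPow);  // may print 12.0
        System.out.println(viaSqrt == viaPow); // often false
    }
}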
Both methods are problematic. You should in general be very careful when comparing floating point values (that is, also double precision floating point values). Particularly, comparing the result of a computation with == or != is nearly always questionable (and quite often it is just wrong). Comparing two floating point values for "equality" should be done with a method like
private static boolean isEqual(double x, double y)
{
double epsilon = 1e-8;
return Math.abs(x - y) <= epsilon * Math.abs(x);
// see Knuth section 4.2.2 pages 217-218
}
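For example, a quick check of this method against the classic 0.1 + 0.2 case:
double x = 0.1 + 0.2; // actually 0.30000000000000004
System.out.println(x == 0.3);        // false
System.out.println(isEqual(x, 0.3)); // true: within the relative epsilon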
In this case, the floating-point remainder operator will also not have the desired result. Consider the following classic example:
public class PrecisionAgain
{
public static void main(String[] args)
{
double d = 0;
for (int i=0; i<20; i++)
{
d += 0.1;
}
System.out.println(d);
double r = d%1.0;
System.out.println(r);
}
}
Output:
2.0000000000000004
4.440892098500626E-16
In your case, in order to rule out these rounding errors, the return statement could probably (!) be something simple like
return Math.abs(area - Math.round(area)) < 1e-8;
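Putting this together, a sketch of the questioner's method with a tolerance-based integrality test could look like this (the epsilon of 1e-9 is an assumption; it has to be chosen for the magnitudes in the actual problem):
public static boolean isAreaIntegral(long a, long b, long c)
{
    double s = (a + b + c) / 2.0;
    double area = Math.sqrt(s * (s - a) * (s - b) * (s - c));
    // accept the area as integral when it is within a small tolerance of
    // the nearest whole number; 1e-9 is an assumed, problem-specific epsilon
    return area > 0.0 && Math.abs(area - Math.round(area)) < 1e-9;
}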
But in other situations, you should definitely read more about floating point operations. (The site http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html is often recommended, but might be a tough one to start with...)
This still does not really answer your actual question: WHY are the results different? If in doubt, the answer is simply this: because they make different errors (but they both make errors - that's in fact more important here!)
Related
Firstly, I'm sorry if this is a trivial question. I am a beginner and have been stuck on this for hours.
Below I have tried to create a unitizer method which has a series of if else statements. They are written in descending order, each time checking if a value can be divided by a given number, and if so, performing a division, rounding the value and adding an appropriate unit to the result.
In this question I have attempted to remove all unnecessary code, so what I am presenting here is only a fragment of the unitizer method.
Why is the unitizer method outputting values in hours, when the value should be in seconds?
For clarification, the expected value is ~ 4 seconds.
public class simplified
{
public static void main(String[] args)
{
int i = 5;
double n = Math.pow(2, (double) i);
System.out.println(a6(n)); // correctly displays the expected value.
System.out.println(unitizer(a6(n)));
}
public static double a6 (double n)
{
return Math.pow(2, n); // this value is in nanoseconds.
}
public static String unitizer (double x)
{
String time = "";
if (x/(60*60*1000*1000*1000) >= 1)
{
x = Math.round(x/(60*60*1000*1000*1000) * 100.0) / 100.0;
time = x + "hr ";
}
return time;
}
}
console output:
4.294967296E9
5.25hr
There is an int overflow at the expression 60*60*1000*1000*1000. This means that the actual result, 3,600,000,000,000, is too large to be stored as an int value and is therefore 'reduced' (mod 2^32) to 817,405,952.
This can be fixed by evaluating said expression in a 'larger' arithmetic, e.g. long. There is a nice little modifier that will force exactly that:
60L*60*1000*1000*1000
^
In particular, it tells the compiler to interpret the preceding literal 60 as a long value, and in consequence the whole calculation will be done in long arithmetic.
This modifier is, by the way, case-insensitive; however, I prefer an upper-case L, because the lower-case letter l can easily be mistaken for the digit 1.
With this change, the code will not enter the if-statement, because the value x is not larger than one hour. Most probably the omitted code of unitizer will deal with this case.
On a last side note, Java has a built-in TimeUnit enum which can do these conversions, too. However, it does so in long arithmetic and not in the double arithmetic that is required for this specific question.
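For instance, a quick sketch of both the overflow and the fix (the printed values are what int wraparound and TimeUnit should produce):
import java.util.concurrent.TimeUnit;

public class OverflowDemo
{
    public static void main(String[] args)
    {
        // the int product wraps around (mod 2^32):
        System.out.println(60 * 60 * 1000 * 1000 * 1000);   // 817405952
        // the L suffix promotes the whole product to long arithmetic:
        System.out.println(60L * 60 * 1000 * 1000 * 1000);  // 3600000000000
        // TimeUnit performs the same conversion, also in long arithmetic:
        System.out.println(TimeUnit.NANOSECONDS.toSeconds(4_294_967_296L)); // 4
    }
}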
I had some code I was profiling and was surprised at how much time was being spent on Math.min(float, float).
In my use case I needed to get the min of 3 float values, and each value is guaranteed not to be NaN or another edge-case float value.
My original method was:
private static float min2(float v1, float v2, float v3) {
return Math.min(Math.min(v1,v2),v3);
}
But I found that this was about 5x faster:
private static float min1(float v1, float v2, float v3) {
if (v1 < v2 && v1 < v3) {
return v1;
}
else if (v2 < v3) {
return v2;
}
else {
return v3;
}
}
For reference this is the code for Math.min:
public static float min(float f1, float f2) {
if (f1 > f2) {
return f2;
}
if (f1 < f2) {
return f1;
}
/* if either arg is NaN, return NaN */
if (f1 != f2) {
return Float.NaN;
}
/* min(+0.0,-0.0) == -0.0 */
/* 0x80000000 == Float.floatToRawIntBits(-0.0f) */
if (Float.floatToRawIntBits(f1) == 0x80000000) {
return -0.0f;
}
return f2;
}
Note: My use case was symmetric and the above was all true for max instead of min.
EDIT1:
It turns out ~5x was an overstatement, but I am still seeing a speed difference inside my application. Although I suspect that may be due to not having a proper timing test.
After posting this question I wrote a proper micro-benchmark in a separate project. I tested each method 1000 times on random floats and they both took the same amount of time. I don't think it would be useful to post that code, as it just confirms what we all already thought.
There must be something specific to the project I'm working on causing the speed difference.
I'm doing some graphic work in an Android app, and I was finding the min/max of the values from 3 touch events. Again, edge cases like -0.0f and the different infinities are not an issue here. Values range between 0.0f and say 3000f.
Originally I profiled my code using the Android Device Monitor's Method Profiling tool, which did show a ~5x difference. But, this isn't the best way to micro-profile code as I have now learned.
I added the below code inside my application to attempt to get better data:
long min1Times = 0L;
long min2Times = 0L;
...
// loop assigning touch values to v1, v2, v3
long start1 = System.nanoTime();
float min1 = min1(v1, v2, v3);
long end1 = System.nanoTime();
min1Times += end1 - start1;
long start2 = System.nanoTime();
float min2 = min2(v1, v2, v3);
long end2 = System.nanoTime();
min2Times += end2 - start2;
double ratio = (double) (min1Times) / (double) (min2Times);
Log.d("", "ratio: " + ratio);
This prints a running ratio with each new touch event. As I swirl my finger on the screen, the first ratios logged are either 0.0 or Infinity or NaN. Which makes me think this test isn't very accurately measuring the time. As more data is collected the ratio tends to vary between .85 and 1.15.
The problem is about the precision of float values.
If you call your method with the arguments (0.0f, -0.0f, 0.0f), it will return 0.0f as the smallest float - which it isn't (float-wise, -0.0f is smaller);
the nested Math.min calls, on the other hand, will return the expected result.
So, to answer your question: If two methods are not 100% equal - there is no point in comparing their performance :-)
Java evaluates 0.0f == -0.0f as true, but new Float(0.0f).equals(new Float(-0.0f)) is false! Math.min takes this into account; your method does not.
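A quick demonstration of the difference (min1 here refers to the custom method from the question, assumed to be in scope):
System.out.println(Math.min(0.0f, -0.0f));                    // -0.0
System.out.println(min1(0.0f, -0.0f, 0.0f));                  // 0.0: the shortcut misses the sign of zero
System.out.println(0.0f == -0.0f);                            // true
System.out.println(new Float(0.0f).equals(new Float(-0.0f))); // false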
When working with float values, you should not rely on the raw comparison operators for equality. Instead you should compare numbers based on a preselected delta to consider them smaller, greater or equal.
float delta = 0.005f;
if (Math.abs(f1 - f2) < delta) // consider them equal
if (Math.abs(f1 - f2) > delta) // consider them not equal
And that's what's happening at the end of the Math.min method - just in a very, very precise manner, by actually checking bitwise whether one number is -0.0f.
So the drawback in performance is just the more precise result being calculated.
However, if you compare float values like 10, 5, and 8, there shouldn't be a performance difference, because the zero-check is never hit.
A problem with the performance of the built-in Math.min function stems from some unfortunate-in-retrospect decisions that were made when formulating the IEEE-754 Standard, especially the behavior of comparison and relational operators. The specified behavior was suitable for some purposes, but unsuitable for some other common purposes.
Most notably, if computation x would yield a positive number that is too small to represent, and computation y would yield a negative number that is too small to represent, the two values will compare as equal even though they are not equivalent: computation of 1.0/x behaves as division by an infinitesimal positive number (yielding +Infinity), while 1.0/y behaves as division by an infinitesimal negative number (yielding -Infinity). Thus, even though x and y are infinitesimal, and are close enough that comparison and relational operators report them as equal, both Math.min(x,y) and Math.min(y,x) should return y because it's infinitesimally smaller than x.
It strikes me as crazy that people are still designing hardware and programming languages which lack any nice means of comparing floating-point values in such a way that all pairs of values which aren't fully equivalent to each other will be transitively ranked, but that has unfortunately been the state of floating-point math for the last several decades. If one needs a function that will return the minimum of x and y in cases where they represent numbers with a non-zero difference, and otherwise arbitrarily return either x or y, such a function could be written more efficiently than one which has to handle the tricky cases involving positive and negative infinitesimal values.
Your implementation should lead to some really tight bytecode, which is easily turned into equally fast assembly language by a JIT compiler. The version using Math.min has two subroutine calls, and so may not be inlined like yours is. I think the results are to be expected.
I am trying to create a recursive method that uses Horner's algorithm to convert a fractional number in base n to base 10. I've searched here and all over but couldn't find anywhere that dealt with the fractional part in detail. As a heads up, I'm pretty weak in recursion as I have not formally learned it in my programming classes yet, but have been assigned it by another class.
I was able to make a method that handles the integer part of the number, just not the fractional part.
I feel like the method I've written is fairly close, as it gives me exactly double the correct answer for my test figures (maybe because I'm testing base 2).
The first param passed is an int array filled with the coefficients. I'm not too concerned with the order of the coefficients as I'm making all the coefficients the same to test it out.
The second param is the base. The third param is initialized to the number of coefficients minus 1 which I also used for the integer part method. I tried using the number of coefficients, but that steps out of the array.
I tried dividing by the base one more time as that would give me the right answer, but it doesn't work if I do so in the base case return statement or at the end of the final return statement.
So, when I try to convert 0.1111 base 2 to base 10, my method returns 1.875 (double the correct answer of 0.9375).
Any hints would be appreciated!
//TL;DR
coef[0] = 1; coef[1] = 1; coef[2] = 1; coef[3] = 1;
base = 2; it = 3;
//results in 1.875 instead of the correct 0.9375
public static double fracHorner(int[] coef, int base, int it) {
if (it == 0) {
return coef[it];
}
return ((float)1/base * fracHorner(coef, base, it-1)) + coef[it];
}
Observe that fracHorner always returns a value at least equal to coef[it] because it either returns coef[it] or something positive added to coef[it]. Since coef[it] >= 1 in your tests, it will always return a number greater than or equal to one.
It's relatively easy to fix: divide coef[it] by base as well:
public static double fracHorner(int[] coef, int base, int it) {
if (it == 0) {
return ((double)coef[it])/base;
}
return (fracHorner(coef, base, it-1) + coef[it])/base;
}
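A quick check with the questioner's test input, 0.1111 in base 2 (every step here is a sum and a division by a power of two, so the result is exact):
int[] coef = {1, 1, 1, 1};
System.out.println(fracHorner(coef, 2, 3)); // 0.9375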
Specifically in Java, how can I determine if a double is an integer? To clarify, I want to know how I can determine that the double does not in fact contain any fractions or decimals.
I am concerned essentially with the nature of floating-point numbers. The methods I thought of (and the ones I found via Google) follow basically this format:
double d = 1.0;
if((int)d == d) {
//do stuff
}
else {
// ...
}
I'm certainly no expert on floating-point numbers and how they behave, but I am under the impression that because the double stores only an approximation of the number, the if() conditional will only be entered some of the time (perhaps even a majority of the time). But I am looking for a method which is guaranteed to work 100% of the time, regardless of how the double value is stored in the system.
Is this possible? If so, how and why?
double can store an exact representation of certain values, such as small integers and (negative or positive) powers of two.
If it does indeed store an exact integer, then ((int)d == d) works fine. And indeed, for any 32-bit integer i, (int)((double)i) == i since a double can exactly represent it.
Note that for very large numbers (greater than about 2**52 in magnitude), a double will always appear to be an integer, as it will no longer be able to store any fractional part. This has implications if you are trying to cast to a Java long, for instance.
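For illustration, a small sketch of both points:
double d = 42.0;
System.out.println((int) d == d);       // true: small integers are represented exactly
double huge = Math.pow(2, 53);
System.out.println(huge + 0.5 == huge); // true: no fractional part is representable up here
System.out.println((long) huge == huge);// true: it looks like an integer, and it is one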
How about
if(d % 1 == 0)
This works because all integers are 0 modulo 1.
Edit: To all those who object to this on the grounds of it being slow, I profiled it, and found it to be about 3.5 times slower than casting. Unless this is in a tight loop, I'd say this is a preferable way of working it out, because it's extremely clear what you're testing, and doesn't require any thought about the semantics of integer casting.
I profiled it by compiling with javac and timing (with time) runs of
class modulo {
public static void main(String[] args) {
long successes = 0;
for(double i = 0.0; i < Integer.MAX_VALUE; i+= 0.125) {
if(i % 1 == 0)
successes++;
}
System.out.println(successes);
}
}
VS
class cast {
public static void main(String[] args) {
long successes = 0;
for(double i = 0.0; i < Integer.MAX_VALUE; i+= 0.125) {
if((int)i == i)
successes++;
}
System.out.println(successes);
}
}
Both printed 2147483647 at the end.
Modulo took 189.99s on my machine - Cast took 54.75s.
if(new BigDecimal(d).scale() <= 0) {
//do stuff
}
Your method of using if((int)d == d) should always work for any 32-bit integer. To make it work up to 64 bits, you can use if((long)d == d), which is effectively the same except that it accounts for larger magnitude numbers. If d is greater than the maximum long value (or less than the minimum), then it is guaranteed to be an exact integer. A function that tests whether d is an integer can then be constructed as follows:
boolean isInteger(double d){
    if(d > Long.MAX_VALUE || d < Long.MIN_VALUE){
        // beyond the long range, adjacent doubles are spaced more than 1 apart,
        // so the value cannot have a fractional part
        return true;
    } else if((long)d == d){
        return true;
    } else {
        return false;
    }
}
If a floating point number is an integer, then it is an exact representation of that integer.
Doubles are a binary fraction with a binary exponent. You cannot be certain that an integer can be exactly represented as a double, especially not if it has been calculated from other values.
Hence the normal way to approach this is to say that it needs to be "sufficiently close" to an integer value, where sufficiently close typically means "within X %" (where X is rather small).
I.e. if X is 1 then 1.98 and 2.02 would both be considered to be close enough to be 2. If X is 0.01 then it needs to be between 1.9998 and 2.0002 to be close enough.
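A minimal sketch of that idea (isNearInteger and relTol are hypothetical names; relTol is the X from above, expressed as a fraction, e.g. 1e-4 for 0.01%):
static boolean isNearInteger(double d, double relTol)
{
    double nearest = Math.round(d);
    // relative tolerance, with a floor of 1.0 so values near zero still work
    return Math.abs(d - nearest) <= relTol * Math.max(1.0, Math.abs(nearest));
}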
I've been having problems with my code for two weeks, and have been unsuccessful in debugging it. I've come here in the hope that someone can help. I've written a program that utilizes the Barnes-Hut algorithm for n-body gravitational simulation. My problem is that one or more 'particles' will have the position of {NaN, NaN, NaN} assigned to them (using three doubles to represent x, y, z of 3-d space). This, in turn, causes the other particles to have an acceleration of {NaN, NaN, NaN}, and in turn, a velocity and position of {NaN, NaN, NaN} as well. Basically, after a frame or two, everything disappears. It seems to be occurring in the updateAcc method, but I have a feeling that this isn't so. I understand that this is a huge undertaking, and am very grateful for anyone that helps me.
What I've checked:
There are no negative square roots, and all the values seem to be within their limits.
The source code is available here. Thanks again.
Code that seems to produce NaN:
private static void getAcc(particle particle, node node)
{
if ((node.particle == null && node.children == null) || node.particle == particle)
{
//Getting gravity to a node that is either empty or the same node...
}
else if (distance(node.centerOfMass, particle.position) / node.sideLength > theta && node.children != null)
{
for (int i = 0; i < node.children.length; i++)
{
if (node.children[i] != null)
{
getAcc(particle, node.children[i]);
}
}
}
else
{
particle.acceleration = vecAdd(particle.acceleration, vecDiv(getForce(particle.position, particle.mass, node.centerOfMass, node.containedMass), particle.mass));
}
}
private static double sumDeltaSquare(double[] pos1, double[] pos2)
{
return Math.pow(pos1[0]-pos2[0],2)+Math.pow(pos1[1]-pos2[1],2)+Math.pow(pos1[2]-pos2[2],2);
}
private static double[] getForce(double[] pos1, double m1, double[] pos2, double m2)
{
double ratio = G*m1*m2;
ratio /= sumDeltaSquare(pos1, pos2);
ratio /= Math.sqrt(sumDeltaSquare(pos1,pos2));
return vecMul(vecSub(pos2, pos1), ratio);
}
private static double distance(double[] position, double[] center)
{
double distance = Math.sqrt(Math.pow(position[0]-center[0],2) + Math.pow(position[1]-center[1],2) + Math.pow(position[2]-center[2],2));
return distance;
}
I'm not sure if this is the only problem, but it is a start.
sumDeltaSquare will sometimes return 0, which means that when the value is used in getForce (ratio /= sumDeltaSquare(pos1, pos2);), it will produce Infinity and start causing issues.
This is a serious problem that you need to debug and work out what everything means. I enjoyed looking at the dots though.
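If coincident positions turn out to be the cause, one possible guard is the following sketch. It reuses the question's G, sumDeltaSquare, vecSub and vecMul helpers and simply treats zero distance as zero force, which may or may not be the physically appropriate choice for your simulation:
private static double[] getForce(double[] pos1, double m1, double[] pos2, double m2)
{
    double d2 = sumDeltaSquare(pos1, pos2);
    if (d2 == 0.0)
    {
        // coincident points: return zero force instead of dividing by zero
        return new double[] {0.0, 0.0, 0.0};
    }
    double ratio = G * m1 * m2 / (d2 * Math.sqrt(d2));
    return vecMul(vecSub(pos2, pos1), ratio);
}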
Firstly, why aren't you using Java's Vecmath library? (It's distributed as a part of Java3D. Download Java3D's binary build and then just use vecmath.jar) Your problem is, very likely, somewhere in your custom vector functions. If not, @pimaster is probably right in that your translation magnitude method sumDeltaSquare might be returning 0 if two of your masses occupy a single space. Which means, unless you're inside a black hole, you're doing it wrong :P. Or you need to come up with a quantum gravity theory before you can do this simulation.
If you can't use vecmath (i.e. this is a homework assignment), I would suggest you use a regex to find every instance of return * and then replace it with assert !Double.isNaN(*) && Double.isFinite(*);\nreturn *, except substitute * for whatever regex calls a "match group". I've forgotten exactly what that is, but I got you started on Google. I also suggest you avoid optimizations until after you have working code.
I'm not going to debug your code. But NaN values are the result of mathematically invalid operations on floating point numbers. The most famous of those is division by 0 (does not throw an exception with floating point).
How can this happen?
If your computation produces very small numbers, they might become too small to be represented as 64-bit floating point numbers (they require more bits than are available), and Java will return 0.0 instead.
In the other direction, if you get an overflow (the magnitude of the number requires too many bits), Java turns this into infinity. Doing math with infinity and 0 can quickly lead to NaN, and NaN propagates through every operation you apply to it.
For details, see sections 4.2 and 15.17 of the Java language spec.
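A small demonstration of how these values arise and propagate:
double tiny = 1e-320 * 1e-10; // underflows: too small for a double, becomes 0.0
double inf = 1.0 / tiny;      // division by 0.0 yields Infinity, not an exception
double nan = inf - inf;       // Infinity - Infinity is NaN (as are 0.0/0.0 and Infinity * 0.0)
System.out.println(tiny + " " + inf + " " + nan); // 0.0 Infinity NaN
System.out.println(nan == nan);                   // false: NaN is unequal even to itself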