I've noticed that calculators and graphing programs like Desmos, GeoGebra, or Google (try searching x^(1/3)) must use a modified version of the Math.pow() function that allows them to plot negative bases with fractional exponents that would otherwise be undefined using the regular Math.pow() function.
I'm trying to recreate this modified pow function so I can plot the missing sections of graphs, for example $x^{\frac{1}{3}}$ where $x<0$.
My Attempt
I don't know what this modified pow function is called in the computer science or math literature, so I can't look up references that would help me build a robust, optimized version of it. Instead, I've attempted my own version with the help of the Fraction class from the Apache math3 library, which I use to decide some of the conditional cases, such as whether the numerator and denominator of a fractional exponent are even or odd.
My version has a few issues that I'll outline below, and it may be missing extra conditions I haven't considered, which could lead to errors.
/* Imports needed for the fraction checks below */
import org.apache.commons.math3.fraction.Fraction;
import org.apache.commons.math3.fraction.FractionConversionException;

/* Main Method */
public static void main(String[] args) {
    double xmin = -3;
    double xmax = 3;
    double epsilon = 0.011;
    /* print out x,y coordinates for plot */
    for (double x = xmin; x <= xmax; x += epsilon) {
        System.out.println(x + "," + f(x));
    }
}

/* Modified Power Function */
private static double pow2(double base, double exponent) {
    boolean negativeBase = base < 0;
    /* exponent is an integer and base non-negative */
    if (exponent == ((int) exponent) && !negativeBase) {
        /* use regular pow function */
        return Math.pow(base, exponent);
    }
    Fraction fraction;
    try {
        fraction = new Fraction(exponent, 1000); /* maxDenominator of 1000 for speed */
    } catch (FractionConversionException e) {
        return Double.NaN;
    }
    /* updated: irrational exponent */
    if (exponent != fraction.doubleValue()) {
        /* Handles irrational exponents like π which cannot be reduced to fractions.
         * Depends heavily on the maxDenominator value set above:
         * with a maxDenominator of 1000, fractions like 1/33333, whose denominator
         * has more than 4 digits, will be considered irrational. To avoid this,
         * increase maxDenominator to 10000, at the cost of performance.
         * That's the trade-off with this part of the algorithm.
         * This condition also clears up a lot of the mess on a plot.
         * If the plot is centered exactly at the origin (0,0) the messy artifacts
         * may appear, but offsetting the view of the plot slightly from the origin
         * makes them disappear; or maybe it has more to do with the step size
         * epsilon (0.01, a clean number, vs 0.011, an irregular number).
         */
        return Math.pow(base, exponent);
    }
    if (fraction.getDenominator() % 2 == 0) {
        /* even denominator: no real root for a negative base */
        if (negativeBase) {
            return Double.NaN;
        }
    } else {
        /* odd denominator, allows for negative bases */
        if (negativeBase) {
            if (fraction.getNumerator() % 2 == 0) {
                /* even numerator:
                 * (-base)^(2/3) is the same as ((-base)^2)^(1/3),
                 * and any negative base squared is positive */
                return Math.pow(-base, exponent);
            }
            /* odd numerator: return the negative answer, make base positive */
            return -Math.pow(-base, exponent);
        }
    }
    return Math.pow(base, exponent);
}

/* Math function */
private static double f(double x) {
    /* example f(x) = x^(1/x) */
    return pow2(x, 1 / x);
}
Issue #1
For both issues I'll use the math function $f(x) = x^{\frac{1}{x}}$ as an example plot that demonstrates them. The first issue is the FractionConversionException caused by a large value for the exponent. The error occurs if the value of epsilon in my code is changed to 0.1, yet it seems not to occur when the step size epsilon is 0.11. I'm not sure how to resolve it properly, but looking at where the Fraction class throws the FractionConversionException, it uses a conditional statement that I could copy into my pow2() function so it returns NaN, as in the code below. But I'm not sure that's the correct thing to do!
long overflow = Integer.MAX_VALUE;
if (FastMath.abs(exponent) > overflow) {
    return Double.NaN;
}
EDIT: Wrapping the instantiation of the Fraction class in a try/catch statement and returning NaN in the catch clause seems to be a good workaround for now, instead of the code above.
Issue #2
Plotting the math function $f(x) = x^{\frac{1}{x}}$ produces a messy section on the left where $x<0$, see image below
as opposed to what it should look like:
https://www.google.com/search?q=x^(1%2Fx)
I don't know how to get rid of this mess so that $f$ is undefined (NaN) where $x<0$, while still allowing the pow2() function to plot functions like $x^{\frac{1}{3}}$, $x^{\frac{2}{3}}$, $x^{x}$, etc.
I'm also not sure what to set maxDenominator to when instantiating the Fraction object for good performance without affecting the results of the plot. Maybe there's a faster decimal-to-fraction conversion algorithm out there, although I'd imagine Apache math3 is already well optimized.
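For what it's worth, here is a minimal sketch of the standard continued-fraction approach to decimal-to-fraction conversion, which (as far as I can tell) is essentially what the Fraction(double, int) constructor does internally. toFraction is a hypothetical helper, and the loop cap and tolerance are assumptions to tune:

/* Best rational approximation p/q of x with q <= maxDen, built from
 * continued-fraction convergents. Returns {p, q}, or null if x is out of range. */
private static long[] toFraction(double x, long maxDen) {
    if (Double.isNaN(x) || Math.abs(x) > Integer.MAX_VALUE) {
        return null; /* give up instead of throwing, unlike Fraction */
    }
    long p0 = 0, q0 = 1; /* convergent before last */
    long p1 = 1, q1 = 0; /* last convergent */
    double r = x;
    for (int i = 0; i < 64; i++) {
        long a = (long) Math.floor(r);           /* next continued-fraction term */
        long p2 = a * p1 + p0, q2 = a * q1 + q0; /* next convergent */
        if (q2 > maxDen) break;                  /* denominator limit reached */
        p0 = p1; q0 = q1; p1 = p2; q1 = q2;
        double frac = r - a;
        if (frac < 1e-12) break;                 /* close enough to exact */
        r = 1.0 / frac;
    }
    return new long[]{p1, q1};
}

Whether this actually beats Apache's implementation would need measuring; the one clear advantage is that it reports failure with null rather than an exception.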
Edit:
Issue #3
I forgot to consider irrational exponents because I was too consumed with neat fractions. My algorithm fails for irrational exponents like π and plots two versions of the graph on top of each other: because of the way the Fraction class rounds the exponent, some of the resulting denominators come out even and some odd. Maybe add a condition that returns NaN whenever the exponent doesn't equal the fraction. I quickly tested this condition (I had to add a negativeBase check, which flips the graph the right way); it needs further testing, but it might be an idea:
if (exponent != fraction.doubleValue() && negativeBase){
    return Double.NaN;
}
Edit2: After testing, it should actually fall back to the regular pow() function instead of returning NaN in order to handle irrational exponents (see the updated code for the modified power function). Surprisingly, this approach also gets rid of most of the mess highlighted in Issue #2. I believe that's because there are more irrational than rational numbers in any interval, so most samples get discounted by the algorithm, making the region less dense and harder for the plotter to connect two points into a visible line.
Extra Question
Is it accurate to represent/plot this extra data the way most modern graphing programs (mentioned above) seem to do, or is it misleading, considering that the regular power function treats these values for negative bases as undefined? And what are these regions or parts of the plot called in mathematics (their technical term)?
I'm also open to any other approach or algorithm.
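One other direction I've considered (my own sketch, not something from the programs above): skip the fraction machinery for the common odd-root case and patch the sign by hand. signedPow below is a hypothetical helper and assumes the exponent is already known as a ratio p/q with q odd:

/* y = x^(p/q) for odd q: take the magnitude with the regular pow,
 * then restore the sign, which is negative exactly when the base is
 * negative and the numerator p is odd. */
private static double signedPow(double base, int p, int q) {
    double magnitude = Math.pow(Math.abs(base), (double) p / q);
    boolean negative = base < 0 && (p % 2 != 0);
    return negative ? -magnitude : magnitude;
}

For example, signedPow(-8, 1, 3) gives -2.0 (up to floating point error) and signedPow(-8, 2, 3) gives roughly 4.0. For the specific cube-root case Java already ships Math.cbrt, which accepts negative inputs directly: Math.cbrt(-8) is -2.0.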
Related
Hello, better programmers than me. Not a huge deal, but I'm curious about this function, and more importantly the result it sometimes gives. For a homework assignment I defined a recursive power function that has to support negative exponents. For positive values and 0 it works fine, but when I enter some negative values, the answer is really strange. Here is the function:
public static double Power(int base, int exp){
    if (exp == 0)
        return 1.0;
    else if (exp >= 1)
        return base * Power(base, exp - 1);
    else
        return (1.0/base) * Power(base, exp + 1);
}
So for a call Power(5, -1) the function returns 0.2, like it should. But for, say, Power(5, -2) the function returns 0.04000000000000001 instead of just 0.04.
Again, this isn't a huge deal since it's for homework and not "real life", but I'm just curious as to why this happens. I assume it has something to do with how computer memory or a double value is stored, but I really have no idea. Thanks all!
PS, this is coded in Java using Netbeans if that makes a difference.
Floating point rounding errors can be reduced by careful organization of your arithmetic. In general, you want to minimize the number of rounding operations, and the number of calculations done on rounded results.
I made a small change to your function:
public static double Power(int base, int exp) {
    if (exp == 0)
        return 1.0;
    else if (exp >= 1)
        return base * Power(base, exp - 1);
    else
        return 1.0 / Power(base, -exp);
}
For your test case, Power(5, -2), this does only one rounded calculation, the division at the top of the recursion. It gets the closest double to 1/25.0, which prints as 0.04.
It's a 1990's thing
This will likely be an ignored or controversial answer but I think it needs to be said.
Others have focused on the message that "floating point" calculations (e.g. ones involving one or more numbers that are "doubles") do approximate math.
My focus in this answer is on the message that, even though it is this way in ordinary Java code, and indeed ordinary code in most programming languages, computations with numbers like 0.1 don't have to be approximate.
A few languages treat numbers like 0.1 as rational numbers, a ratio between two integers (numerator over denominator, in this case 1 over 10, i.e. one tenth), just like they are in school math. Computations involving nothing but integers and rationals are 100% accurate (ignoring integer overflow and/or OOM).
Unfortunately, rational computations can get pathologically slow if the denominator gets too large.
Some languages take a compromise position. They treat some rationals as rationals (so with 100% accuracy) and only give up on 100% accuracy, switching to floats, when rational calculations would be pathologically slow.
For example, here's some code in a relatively new and forward looking programming language:
sub Power(\base, \exp) {
given exp {
when 0 { 1.0 }
when * >= 1 { base * Power(base, exp - 1) }
default { 1.0/base * Power(base, exp + 1) }
}
}
This duplicates your code in this other language.
Now use this function to get results for a list of exponents:
for 1000,20,2,1,0,-1,-2,-20,-1000 -> \exp { say Power 5, exp }
Running this code in glot.io displays:
9332636185032188789900895447238171696170914463717080246217143397959
6691097577563445444032709788110235959498993032424262421548752135403
2394841520817203930756234410666138325150273995075985901831511100490
7962651131182405125147959337908051782711254151038106983788544264811
1946981422866095922201766291044279845616944888714746652800632836845
2647429261829862165202793195289493607117850663668741065439805530718
1363205998448260419541012132296298695021945146099042146086683612447
9295203482686461765792691604742006593638904173789582211836507804555
6628444273925387517127854796781556346403714877681766899855392069265
4394240087119736747017498626266907472967625358039293762338339810469
27874558605253696441650390625
95367431640625
25
5
1
0.2
0.04
0.000000000000010
0
The above results are 100% accurate -- until the last exponent, -1000. We can see where the language gives up on 100% accuracy if we check the types of the results (using WHAT):
for 1000,20,2,1,0,-1,-2,-20,-1000 -> \exp { say WHAT Power 5, exp }
displays:
(Rat)
(Rat)
(Rat)
(Rat)
(Rat)
(Rat)
(Rat)
(Rat)
(Num)
Converting Rats (the default rational type) into FatRats (the arbitrary precision rational type) avoids inaccuracy even with pathologically large denominators:
sub Power(\base, \exp) {
given exp {
when 0 { 1.0.FatRat }
when * >= 1 { base * Power(base, exp - 1) }
default { 1.0.FatRat/base * Power(base, exp + 1) }
}
}
This yields the same display as our original code except for the last calculation which comes out as:
0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000011
I don't know if that's accurate, but, aiui, it's supposed to be.
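For readers who'd rather stay in Java: a similar exact-rational version can be sketched with BigFraction from Apache Commons Math (my own adaptation of the question's Power function, assuming commons-math3 is on the classpath):

import org.apache.commons.math3.fraction.BigFraction;

public class RationalPower {
    /* Same recursion as the question's Power(), but over exact rationals,
     * so nothing is rounded until you ask for a double. */
    public static BigFraction power(int base, int exp) {
        if (exp == 0)
            return BigFraction.ONE;
        else if (exp >= 1)
            return power(base, exp - 1).multiply(base);
        else
            return power(base, exp + 1).divide(base);
    }

    public static void main(String[] args) {
        System.out.println(power(5, -2));               // 1 / 25, exactly
        System.out.println(power(5, -2).doubleValue()); // 0.04
    }
}

Like the Rat case above, this gets pathologically slow (and memory-hungry) for huge exponents, but it never loses accuracy.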
We can easily get random floating point numbers within a desired range [X,Y) (note that X is inclusive and Y is exclusive) with the function listed below since Math.random() (and most pseudorandom number generators, AFAIK) produce numbers in [0,1):
function randomInRange(min, max) {
return Math.random() * (max-min) + min;
}
// Notice that we can get "min" exactly but never "max".
How can we get a random number in a desired range inclusive to both bounds, i.e. [X,Y]?
I suppose we could "increment" our value from Math.random() (or equivalent) by "rolling" the bits of an IEEE 754 double-precision floating point number to put the maximum possible value at 1.0 exactly, but that seems like a pain to get right, especially in languages poorly suited for bit manipulation. Is there an easier way?
(As an aside, why do random number generators produce numbers in [0,1) instead of [0,1]?)
[Edit] Please note that I have no need for this and I am fully aware that the distinction is pedantic. Just being curious and hoping for some interesting answers. Feel free to vote to close if this question is inappropriate.
I believe there is a much better solution, but this one should work :)
function randomInRange(min, max) {
return Math.random() < 0.5 ? ((1-Math.random()) * (max-min) + min) : (Math.random() * (max-min) + min);
}
First off, there's a problem in your code: Try randomInRange(0,5e-324) or just enter Math.random()*5e-324 in your browser's JavaScript console.
Even without overflow/underflow/denorms, it's difficult to reason reliably about floating point ops. After a bit of digging, I can find a counterexample:
>>> a=1.0
>>> b=2**-54
>>> rand=a-2*b
>>> a
1.0
>>> b
5.551115123125783e-17
>>> rand
0.9999999999999999
>>> (a-b)*rand+b
1.0
It's easier to explain why this happens with a=2^53 and b=0.5: 2^53-1 is the next representable number down. The default rounding mode ("round to nearest even") rounds 2^53-0.5 up (because 2^53 is "even" [LSB = 0] and 2^53-1 is "odd" [LSB = 1]), so you subtract b and get 2^53, multiply to get 2^53-1, and add b to get 2^53 again.
To answer your second question: because the underlying PRNG almost always generates a random number in the interval [0, 2^n - 1], i.e. it generates random bits. It's very easy to pick a suitable n (the bits of precision in your floating point representation), divide by 2^n, and get a predictable distribution. Note that there are some numbers in [0,1) that you will never generate using this method (anything in (0, 2^-53) with IEEE doubles).
It also means that you can do a[Math.floor(Math.random()*a.length)] and not worry about overflow (homework: In IEEE binary floating point, prove that b < 1 implies a*b < a for positive integer a).
The other nice thing is that you can think of each random output x as representing an interval [x, x+2^-53) (the not-so-nice thing is that the average value returned is slightly less than 0.5). If you return in [0,1], do you return the endpoints with the same probability as everything else, or should they only have half the probability because they only represent half the interval as everything else?
To answer the simpler question of returning a number in [0,1], the method below effectively generates an integer in [0, 2^n] (by generating an integer in [0, 2^(n+1) - 1] and throwing it away if it's too big) and divides by 2^n:
function randominclusive() {
// Generate a random "top bit". Is it set?
while (Math.random() >= 0.5) {
// Generate the rest of the random bits. Are they zero?
// If so, then we've generated 2^n, and dividing by 2^n gives us 1.
if (Math.random() == 0) { return 1.0; }
// If not, generate a new random number.
}
// If the top bits are not set, just divide by 2^n.
return Math.random();
}
The comments imply base 2, but I think the assumptions are thus:
0 and 1 should be returned equiprobably (i.e. the Math.random() doesn't make use of the closer spacing of floating point numbers near 0).
Math.random() >= 0.5 with probability 1/2 (should be true for even bases)
The underlying PRNG is good enough that we can do this.
Note that random numbers are always generated in pairs: the one in the while (a) is always followed by either the one in the if or the one at the end (b). It's fairly easy to verify that it's sensible by considering a PRNG that returns either 0 or 0.5:
a=0 b=0 : return 0
a=0 b=0.5: return 0.5
a=0.5 b=0 : return 1
a=0.5 b=0.5: loop
Problems:
The assumptions might not be true. In particular, a common PRNG is to take the top 32 bits of a 48-bit LCG (Firefox and Java do this). To generate a double, you take 53 bits from two consecutive outputs and divide by 2^53, but some outputs are impossible (you can't generate 2^53 outputs with 48 bits of state!). I suspect some of them never return 0 (assuming single-threaded access), but I don't feel like checking Java's implementation right now.
Math.random() is called twice for every potential output as a consequence of needing to get the extra bit, and this places more constraints on the PRNG (requiring us to reason about four consecutive outputs of the above LCG).
Math.random() is called on average about four times per output. A bit slow.
It throws away results deterministically (assuming single-threaded access), so is pretty much guaranteed to reduce the output space.
My solution to this problem has always been to use the following in place of your upper bound.
Math.nextAfter(upperBound,upperBound+1)
or
upperBound + Double.MIN_VALUE
So your code would look like this:
double myRandomNum = Math.random() * Math.nextAfter(upperBound,upperBound+1) + lowerBound;
or
double myRandomNum = Math.random() * (upperBound + Double.MIN_VALUE) + lowerBound;
This simply increments your upper bound by the smallest double (Double.MIN_VALUE) so that your upper bound will be included as a possibility in the random calculation.
This is a good way to go about it because it does not skew the probabilities in favor of any one number.
The only case this wouldn't work is where your upper bound is equal to Double.MAX_VALUE.
Just pick your half-open interval slightly bigger, so that your chosen closed interval is a subset. Then, keep generating the random variable until it lands in said closed interval.
Example: If you want something uniform in [3,8], then repeatedly regenerate a uniform random variable in [3,9) until it happens to land in [3,8].
function randomInRangeInclusive(min,max) {
var ret;
for (;;) {
ret = min + ( Math.random() * (max-min) * 1.1 );
if ( ret <= max ) { break; }
}
return ret;
}
Note: The number of times you generate the half-open R.V. is random and potentially unbounded, but you can make the expected number of calls as close to 1 as you like, and I don't think there exists a solution that avoids a potentially unbounded number of calls.
Given the "extremely large" number of values between 0 and 1, does it really matter? The chances of actually hitting 1 are tiny, so it's very unlikely to make a significant difference to anything you're doing.
What would be a situation where you would NEED a floating point value to be inclusive of the upper bound? For integers I understand, but for a float, the difference between inclusive and exclusive is, what, like 1.0e-32?
Think of it this way. If you imagine that floating-point numbers have arbitrary precision, the chances of getting exactly min are zero. So are the chances of getting max. I'll let you draw your own conclusion on that.
This 'problem' is equivalent to getting a random point on the real line between 0 and 1. There is no 'inclusive' and 'exclusive'.
The question is akin to asking, what is the floating point number right before 1.0? There is such a floating point number, but it is one in 2^24 (for an IEEE float) or one in 2^53 (for a double).
The difference is negligible in practice.
private static double random(double min, double max) {
final double r = Math.random();
return (r >= 0.5d ? 1.5d - r : r) * (max - min) + min;
}
Math.round() will help include the bound value. If you have 0 <= value < 1 (1 exclusive), then Math.round(value * 100) / 100 returns 0 <= value <= 1 (1 inclusive). A caveat is that the value now has only 2 digits after the decimal point. If you want 3 digits, try Math.round(value * 1000) / 1000, and so on. The following function has one more parameter, the number of digits after the decimal point, which I call precision:
function randomInRange(min, max, precision) {
return Math.round(Math.random() * Math.pow(10, precision)) /
Math.pow(10, precision) * (max - min) + min;
}
How about this?
function randomInRange(min, max){
var n = Math.random() * (max - min + 0.1) + min;
return n > max ? randomInRange(min, max) : n;
}
If you get stack overflow on this I'll buy you a present.
--
EDIT: never mind about the present. I got wild with:
randomInRange(0, 0.0000000000000000001)
and got stack overflow.
I am fairly inexperienced, so I am also looking for solutions myself.
This is my rough thought:
Random number generators produce numbers in [0,1) instead of [0,1] because [0,1) is a unit interval that can be followed by [1,2) and so on without overlapping.
For random[x, y], you can do this:
float randomInclusive(x, y){
    float MIN = smallest_value_above_zero;
    float result;
    do {
        result = random(x, (y + MIN));
    } while (result > y);
    return result;
}
This way all values in [x, y] have the same probability of being picked, and you can now reach y.
Generating a "uniform" floating-point number in a range is non-trivial. For example, the common practice of multiplying or dividing a random integer by a constant, or by scaling a "uniform" floating-point number to the desired range, have the disadvantage that not all numbers a floating-point format can represent in the range can be covered this way, and may have subtle bias problems. These problems are discussed in detail in "Generating Random Floating-Point Numbers by Dividing Integers: a Case Study" by F. Goualard.
Just to show how non-trivial the problem is, the following pseudocode generates a random "uniform-behaving" floating-point number in the closed interval [lo, hi], where the number is of the form FPSign * FPSignificand * FPRADIX^FPExponent. The pseudocode below was reproduced from my section on floating-point number generation. Note that it works for any precision and any base (including binary and decimal) of floating-point numbers.
METHOD RNDRANGE(lo, hi)
losgn = FPSign(lo)
hisgn = FPSign(hi)
loexp = FPExponent(lo)
hiexp = FPExponent(hi)
losig = FPSignificand(lo)
hisig = FPSignificand(hi)
if lo > hi: return error
if losgn == 1 and hisgn == -1: return error
if losgn == -1 and hisgn == 1
// Straddles negative and positive ranges
// NOTE: Changes negative zero to positive
mabs = max(abs(lo),abs(hi))
while true
ret=RNDRANGE(0, mabs)
neg=RNDINT(1)
if neg==0: ret=-ret
if ret>=lo and ret<=hi: return ret
end
end
if lo == hi: return lo
if losgn == -1
// Negative range
return -RNDRANGE(abs(lo), abs(hi))
end
// Positive range
expdiff=hiexp-loexp
if loexp==hiexp
// Exponents are the same
// NOTE: Automatically handles
// subnormals
s=RNDINTRANGE(losig, hisig)
return s*1.0*pow(FPRADIX, loexp)
end
while true
ex=hiexp
while ex>MINEXP
v=RNDINTEXC(FPRADIX)
if v==0: ex=ex-1
else: break
end
s=0
if ex==MINEXP
// Has FPPRECISION or fewer digits
// and so can be normal or subnormal
s=RNDINTEXC(pow(FPRADIX,FPPRECISION))
else if FPRADIX != 2
// Has FPPRECISION digits
s=RNDINTEXCRANGE(
pow(FPRADIX,FPPRECISION-1),
pow(FPRADIX,FPPRECISION))
else
// Has FPPRECISION digits (bits), the highest
// of which is always 1 because it's the
// only nonzero bit
sm=pow(FPRADIX,FPPRECISION-1)
s=RNDINTEXC(sm)+sm
end
ret=s*1.0*pow(FPRADIX, ex)
if ret>=lo and ret<=hi: return ret
end
END METHOD
I am writing a small physics app. What I plan to do is round numbers, but it is not a fixed rounding; rather, it is a variable rounding that depends on the value of the decimal digits. Let me explain the issue.
I always need to keep the whole integer part (if any) and the first five decimal digits (if any).
Half-up rounding is always used.
21.1521421056 becomes 21.15214
34.1521451056 becomes 34.15215
If the result consists of only decimal digits then:
If the first five decimal digits include non-zero digits, keep them.
0.52131125 becomes 0.52131
0.21546874 becomes 0.21547
0.00120012 becomes 0.0012
If the first five decimal digits are all zero (0.00000), go down to the first five digits that include non-zero digits.
0.0000051234 becomes 0.0000051234
0.000000000000120006130031 becomes 0.00000000000012001
I need to apply this rounding while working with BigDecimal, because that is a requirement for my needs.
I think this will work, based on experimentation, if I understand correctly what you want. If d is a BigDecimal that contains the number:
BigDecimal rounded = d.round(new MathContext
(d.scale() - d.precision() < 5
? d.precision() - d.scale() + 5
: 5));
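A quick check against the examples from the question (my own test harness, not part of the answer; the inputs are given as strings so BigDecimal sees their full precision, and stripTrailingZeros() is added so 0.00120 prints as 0.0012):

import java.math.BigDecimal;
import java.math.MathContext;

public class VariableRoundingTest {
    static BigDecimal round(BigDecimal d) {
        return d.round(new MathContext(
                d.scale() - d.precision() < 5
                        ? d.precision() - d.scale() + 5
                        : 5));
    }

    public static void main(String[] args) {
        String[] inputs = {
                "21.1521421056", "34.1521451056", "0.52131125", "0.21546874",
                "0.00120012", "0.0000051234", "0.000000000000120006130031"
        };
        for (String s : inputs) {
            System.out.println(s + " -> " + round(new BigDecimal(s)).stripTrailingZeros().toPlainString());
        }
        /* prints 21.15214, 34.15215, 0.52131, 0.21547,
         * 0.0012, 0.0000051234 and 0.00000000000012001 */
    }
}

Note that MathContext's default rounding mode is HALF_UP, which matches the question's requirement.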
Is this what you are looking for?
public static void main(String[] args){
    double d = 0.000000000000120006130031;
    System.out.println(round(d, 5));
}

private static double round(double d, int precision) {
    double factor = Math.pow(10D, precision);
    int value = (int) d;
    double re = d - value;
    /* if the first `precision` decimal digits are all zero, rescale so the
     * first `precision` significant digits are kept instead */
    if (re > 0 && re * factor < 1) {
        while (re * factor < 0.1) {
            factor *= 10;
        }
        factor *= Math.pow(10D, precision);
    }
    re = Math.round(re * factor) / factor + value;
    return re;
}
(sorry, it's a little quick & dirty, but you can improve it, if you want)
EDIT: tightened the conditions (re * factor < 1 to detect the all-zeros case, and Math.round for half-up rounding); this should work better.
I want to determine whether a number (a double) is a perfect square or not. I have used the code below, but it fails for many inputs.
private static boolean isSquare(double i) {
    double s = Math.sqrt(i);
    return ((s * s) == i);
}
When s results in scientific form, the code fails. For example when s is 2.719601835756618E9
Your code makes no attempt to test whether the square root of the number is an integer. Any nonnegative real number is the square of some other real number; your code's result depends entirely on floating-point rounding behavior.
Test whether the square root is an integer:
if (Double.isInfinite(i)) {
    return false;
}
double sqrt = Math.sqrt(i);
return sqrt == Math.floor(sqrt) && sqrt * sqrt == i;
The sqrt*sqrt == i check should catch some cases where a number exceptionally close to a square has a square root whose closest double approximation is an integer. I have not tested this and make no warranties as to its correctness; if you want your software to be robust, don't just copy the code out of the answer.
UPDATE: Found a failing edge case. If an integer double has a greatest odd factor long enough that its square is not representable, feeding the closest double approximation of the square to this code will result in a false positive. The best fix I can think of at the moment is examining the significand of the square root directly to determine how many bits of precision it would take to represent the square. Who knows what else I've missed, though?
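For integer-valued doubles, one way to sidestep the representability issue is to do the comparison in exact integer arithmetic (my own sketch, not part of the answer above; BigInteger.sqrt() needs Java 9+):

import java.math.BigDecimal;
import java.math.BigInteger;

static boolean isPerfectSquare(double d) {
    if (d < 0 || Double.isNaN(d) || Double.isInfinite(d) || d != Math.floor(d)) {
        return false; /* negative, non-finite, or not an integer */
    }
    BigInteger n = new BigDecimal(d).toBigIntegerExact(); /* exact value of d */
    BigInteger root = n.sqrt();                           /* floor of the true square root */
    return root.multiply(root).equals(n);
}

This only tells you whether the double's exact value is a perfect square; it cannot repair the edge case above, where the true square was already rounded before the test ever saw it.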
static boolean checkPerfectSquare(double x)
{
    // finding the square root of the given number
    double sq = Math.sqrt(x);
    /* Math.floor() returns the largest integer less than or equal to
     * its argument, for example Math.floor of 984.1 is 984, so if the
     * value of sq is not an integer then the expression below will be
     * non-zero.
     */
    return ((sq - Math.floor(sq)) == 0);
}
I know that the question is old, but I would post my solution anyway:
return Math.sqrt(i) % 1d == 0;
Simply check if the sqrt has decimals.
I've been having problems with my code for two weeks, and have been unsuccessful in debugging it. I've come here in the hope that someone can help. I've written a program that uses the Barnes-Hut algorithm for n-body gravitational simulation. My problem is that one or more 'particles' will have the position {NaN, NaN, NaN} assigned to them (I use three doubles to represent the x, y, z of 3-d space). This, in turn, causes the other particles to have an acceleration of {NaN, NaN, NaN}, and in turn a velocity and position of {NaN, NaN, NaN} as well. Basically, after a frame or two, everything disappears. It seems to be occurring in the updateAcc method, but I have a feeling that isn't so. I understand that this is a huge undertaking, and am very grateful to anyone who helps me.
What I've checked:
There are no negative square roots, and all the values seem to be within their limits.
The source code is available here. Thanks again.
Code that seems to produce NaN:
private static void getAcc(particle particle, node node)
{
    if ((node.particle == null && node.children == null) || node.particle == particle)
    {
        // Getting gravity to a node that is either empty or the same node...
    }
    else if (distance(node.centerOfMass, particle.position) / node.sideLength > theta && node.children != null)
    {
        for (int i = 0; i < node.children.length; i++)
        {
            if (node.children[i] != null)
            {
                getAcc(particle, node.children[i]);
            }
        }
    }
    else
    {
        particle.acceleration = vecAdd(particle.acceleration,
                vecDiv(getForce(particle.position, particle.mass, node.centerOfMass, node.containedMass),
                       particle.mass));
    }
}
private static double sumDeltaSquare(double[] pos1, double[] pos2)
{
    return Math.pow(pos1[0]-pos2[0], 2) + Math.pow(pos1[1]-pos2[1], 2) + Math.pow(pos1[2]-pos2[2], 2);
}

private static double[] getForce(double[] pos1, double m1, double[] pos2, double m2)
{
    double ratio = G*m1*m2;
    ratio /= sumDeltaSquare(pos1, pos2);
    ratio /= Math.sqrt(sumDeltaSquare(pos1, pos2));
    return vecMul(vecSub(pos2, pos1), ratio);
}

private static double distance(double[] position, double[] center)
{
    double distance = Math.sqrt(Math.pow(position[0]-center[0], 2) + Math.pow(position[1]-center[1], 2) + Math.pow(position[2]-center[2], 2));
    return distance;
}
I'm not sure if this is the only problem, but it is a start.
sumDeltaSquare will sometimes return 0, which means that when the value is used in getForce (ratio /= sumDeltaSquare(pos1, pos2);) it will produce Infinity and start causing issues.
This is a serious problem that you need to debug and work out what everything means. I enjoyed looking at the dots though.
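If coincident particles are indeed the cause, one standard n-body workaround is gravitational softening: add a small constant to the squared distance so the force stays finite even at zero separation. Here is a sketch against the question's own helpers (G, vecMul, vecSub, sumDeltaSquare); the SOFTENING value is an assumption you'd tune for your simulation's scale:

private static final double SOFTENING = 1e-9;

private static double[] getForceSoftened(double[] pos1, double m1, double[] pos2, double m2)
{
    /* r2 + SOFTENING is never zero, so ratio can never become Infinity or NaN */
    double r2 = sumDeltaSquare(pos1, pos2) + SOFTENING;
    double ratio = G * m1 * m2 / (r2 * Math.sqrt(r2));
    return vecMul(vecSub(pos2, pos1), ratio);
}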
Firstly, why aren't you using Java's Vecmath library? (It's distributed as part of Java3D. Download Java3D's binary build and then just use vecmath.jar.) Your problem is very likely somewhere in your custom vector functions. If not, #pimaster is probably right that your translation magnitude method sumDeltaSquare might be returning 0 when two of your masses occupy a single point. Which means, unless you're inside a black hole, you're doing it wrong :P. Or you need to come up with a quantum gravity theory before you can run this simulation.
If you can't use vecmath (i.e. this is a homework assignment), I would suggest you use a regex to find every instance of return * and replace it with assert !Double.isNaN(*) && Double.isFinite(*);\nreturn *, where * stands for whatever regex captures a "match group". I've forgotten exactly what that is, but that should get you started on Google. I also suggest you avoid optimizations until after you have working code.
I'm not going to debug your code. But NaN values are the result of mathematically invalid operations on floating point numbers. The most famous of those is division by 0 (does not throw an exception with floating point).
How can this happen?
If your computation produces very small numbers, they might become too small to be represented as 64-bit floating point numbers (they require more bits than are available), and Java will return 0.0 instead.
In the other direction, if you get an overflow (the magnitude of the number requires too many bits), Java turns this into infinity. Doing math with infinity and 0 can quickly lead to NaN, and NaN propagates through every operation you apply to it.
For details, see sections 4.2 and 15.17 of the Java language spec.
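A tiny demo of those rules, since they surprise a lot of people (all of this is standard IEEE 754 behavior in Java):

public class NaNDemo {
    public static void main(String[] args) {
        System.out.println(1.0 / 0.0);       // Infinity, no exception thrown
        System.out.println(0.0 / 0.0);       // NaN
        System.out.println(Math.sqrt(-1.0)); // NaN
        System.out.println(Double.POSITIVE_INFINITY - Double.POSITIVE_INFINITY); // NaN
        double nan = 0.0 / 0.0;
        System.out.println(nan + 1.0);       // NaN: it propagates through everything
        System.out.println(nan == nan);      // false! test with Double.isNaN instead
    }
}

That last line is also a handy way to hunt the bug: sprinkle Double.isNaN checks (or the asserts suggested above) where values are produced, and you'll find the first operation that goes bad.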