For loop won't add terms for some reason - java

So, I was putting my knowledge of for loops to the test by attempting to create the mathematical constant π using a series with user-defined accuracy:
public double pi(int accuracy) {
    for (int i = 1; i <= accuracy; i++) {
        rawPi += 1 / (i * i);
    }
    return Math.sqrt(rawPi * 6);
}
Now, you would think that this would get closer and closer to π as accuracy increases, but it doesn't. It just stays at the square root of 6, meaning that the field private double rawPi gets to 1 and never goes any higher, so no terms are being added in my series (represented as a for loop). I have absolutely no idea what the problem could be. Any ideas?

Try changing this:
rawPi += 1 / (i * i);
to
rawPi += 1.0 / (i * i);
or, as Patricia Shanahan commented, use this for better accuracy and to avoid integer overflow on i * i:
rawPi += 1 / ((double) i * i);
The reason is that 1 / (i * i) is pure integer division: it evaluates to 1 when i == 1 and to 0 for every larger i, so nothing past the first term ever gets added.
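For reference, a minimal self-contained sketch of the corrected method; here rawPi is a local variable rather than the field mentioned in the question, which is an assumption about the surrounding class:

public double pi(int accuracy) {
    double rawPi = 0.0;  // local accumulator; the original used a field
    for (int i = 1; i <= accuracy; i++) {
        // cast before multiplying so i * i cannot overflow as an int
        rawPi += 1 / ((double) i * i);
    }
    return Math.sqrt(rawPi * 6);
}
// pi(1000) gives roughly 3.1406; pi(1_000_000) gives roughly 3.14159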

Related

Calculating sin function with Java BigDecimal - monomial is getting bigger(?)

I'm making a sin function with BigDecimal in Java, and this is as far as I've gotten:
package taylorSeries;

import java.math.BigDecimal;

public class Sin {

    private static final int cutOff = 20;

    public static void main(String[] args) {
        System.out.println(getSin(new BigDecimal(3.14159265358979323846264), 100));
    }

    public static BigDecimal getSin(BigDecimal x, int scale) {
        BigDecimal sign = new BigDecimal("-1");
        BigDecimal divisor = BigDecimal.ONE;
        BigDecimal i = BigDecimal.ONE;
        BigDecimal num = null;
        BigDecimal result = x;
        //System.err.println(x);
        do {
            x = x.abs().multiply(x.abs()).multiply(x).multiply(sign);
            i = i.add(BigDecimal.ONE);
            divisor = divisor.multiply(i);
            i = i.add(BigDecimal.ONE);
            divisor = divisor.multiply(i);
            num = x.divide(divisor, scale + cutOff, BigDecimal.ROUND_HALF_UP);
            result = result.add(num);
            //System.out.println("d : " + divisor);
            //System.out.println(divisor.compareTo(x.abs()));
            System.out.println(num.setScale(9, BigDecimal.ROUND_HALF_UP));
        } while (num.abs().compareTo(new BigDecimal("0.1").pow(scale + cutOff)) > 0);
        System.err.println(num);
        System.err.println(new BigDecimal("0.1").pow(scale + cutOff));
        return result.setScale(scale, BigDecimal.ROUND_HALF_UP);
    }
}
It uses the Taylor series for sine: sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...
The monomial x is added every iteration and is always negative. The problem is that the absolute value of x keeps getting bigger, so the iteration never ends.
Is there any way to fix this, or a better way to implement it in the first place?
EDIT:
I made this code from scratch out of simple curiosity about trigonometric functions, and now I see lots of childish mistakes.
My original intention was this:
num is x^(2k+1) / (2k+1)!
divisor is (2k+1)!
i is 2k+1
dividend is x^(2k+1)
So I update divisor and dividend using i, compute num as sign * dividend / divisor, and add it to result with result = result.add(num).
The new, working code is:
package taylorSeries;

import java.math.BigDecimal;
import java.math.MathContext;

public class Sin {

    private static final int cutOff = 20;
    private static final BigDecimal PI = Pi.getPi(100);

    public static void main(String[] args) {
        System.out.println(getSin(Pi.getPi(100).multiply(new BigDecimal("1.5")), 100)); // Should be -1
    }

    public static BigDecimal getSin(final BigDecimal x, int scale) {
        if (x.compareTo(PI.multiply(new BigDecimal(2))) > 0)
            return getSin(x.remainder(PI.multiply(new BigDecimal(2)), new MathContext(x.precision())), scale);
        if (x.compareTo(PI) > 0)
            return getSin(x.subtract(PI), scale).multiply(new BigDecimal("-1"));
        if (x.compareTo(PI.divide(new BigDecimal(2))) > 0)
            return getSin(PI.subtract(x), scale);
        BigDecimal sign = new BigDecimal("-1");
        BigDecimal divisor = BigDecimal.ONE;
        BigDecimal i = BigDecimal.ONE;
        BigDecimal num = null;
        BigDecimal dividend = x;
        BigDecimal result = dividend;
        do {
            dividend = dividend.multiply(x).multiply(x).multiply(sign);
            i = i.add(BigDecimal.ONE);
            divisor = divisor.multiply(i);
            i = i.add(BigDecimal.ONE);
            divisor = divisor.multiply(i);
            num = dividend.divide(divisor, scale + cutOff, BigDecimal.ROUND_HALF_UP);
            result = result.add(num);
        } while (num.abs().compareTo(new BigDecimal("0.1").pow(scale + cutOff)) > 0);
        return result.setScale(scale, BigDecimal.ROUND_HALF_UP);
    }
}
The new BigDecimal(double) constructor is not something you generally want to be using; the whole reason BigDecimal exists in the first place is that double is wonky: There are almost 2^64 unique values that a double can represent, but that's it - (almost) 2^64 distinct values, smeared out logarithmically, with about a quarter of all available numbers between 0 and 1, a quarter from 1 to infinity, and the other half the same but as negative numbers. 3.14159265358979323846264 is not one of the blessed numbers. Use the string constructor instead - just toss " symbols around it.
Every loop, sign should switch, well, its sign. You're not doing that.
In the first loop, you overwrite x with x = x.abs().multiply(x.abs()).multiply(x).multiply(sign);, so now the 'x' value is actually -x^3, and the original x value is gone. Next loop, you repeat this process, and thus you definitely are nowhere near the desired effect. The solution: don't overwrite x. You need x throughout the calculation. Make it final (getSin(final BigDecimal x, ...)) to help yourself.
Make another BigDecimal value and call it accumulator or some such. It starts out as a copy of x.
Every loop, you multiply it by x twice and then toggle the sign. That way, the first time in the loop the accumulator is -x^3. The second time, it is x^5. The third time it is -x^7, and so on.
There is more wrong, but at some point I'm just feeding you your homework on a golden spoon.
I strongly suggest you learn to debug. Debugging is simple! All you really do is follow along with the computer. You calculate by hand and double check that what you get (be it the result of an expression, or whether a while loop loops or not) matches what the computer gets. Check by using a debugger, or if you don't know how to do that, learn, and if you don't want to, add a ton of System.out.println statements as debugging aids. Wherever your expectations mismatch what the computer is doing, you found a bug. Probably one of many.
Then consider splicing parts of your code up so you can more easily check the computer's work.
For example, here, the running result is supposed to reflect:
before first loop: x
first loop: x - x^3/3!
second loop: x - x^3/3! + x^5/5!
etcetera. But for debugging it'd be so much simpler if you have those parts separated out. You optimally want:
first loop: three separate concepts: -1, x^3, and 3!.
second loop: +1, x^5, and 5!.
That is much simpler to debug.
It also leads to cleaner code, generally, so I suggest you make these separate concepts as variables, describe them, write a loop and test that they are doing what you want (e.g. you use sysouts or a debugger to actually observe the power accumulator value hopping from x to x^3 to x^5 - this is easily checked), and finally put it all together.
This is a much better way to write code than to just 'write it all, run it, realize it doesn't work, shrug, raise an eyebrow, head over to stack overflow, and pray someone's crystal ball is having a good day and they see my question'.
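To make that concrete, here is a minimal sketch of the separated concepts (an untouched x, a power accumulator, a factorial, and a flipping sign). It is only an illustration under the assumption that the argument has already been reduced to a small range; the class and variable names are made up, and the extra-precision margin of 5 digits is arbitrary:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class SinSketch {
    public static BigDecimal sin(BigDecimal x, int scale) {
        BigDecimal result = x;
        BigDecimal power = x;                           // x^(2k+1), starts at x^1
        BigDecimal factorial = BigDecimal.ONE;          // (2k+1)!, starts at 1!
        BigDecimal n = BigDecimal.ONE;                  // 2k+1
        int sign = -1;                                  // alternates each term
        BigDecimal epsilon = BigDecimal.ONE.movePointLeft(scale + 5);
        BigDecimal term;
        do {
            power = power.multiply(x).multiply(x);      // x^(2k+1) -> x^(2k+3)
            n = n.add(BigDecimal.ONE);
            factorial = factorial.multiply(n);
            n = n.add(BigDecimal.ONE);
            factorial = factorial.multiply(n);
            term = power.divide(factorial, scale + 5, RoundingMode.HALF_UP);
            result = (sign < 0) ? result.subtract(term) : result.add(term);
            sign = -sign;                               // the sign flips every loop
        } while (term.abs().compareTo(epsilon) > 0);
        return result.setScale(scale, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(sin(BigDecimal.ONE, 30));    // sin(1) is about 0.841470984807...
    }
}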
The fact that the terms are all negative is not the problem (though you must make it alternate to get the correct series).
The term magnitude is x^(2k+1) / (2k+1)!. The numerator is indeed growing, but so is the denominator, and past k = x, the denominator starts to "win" and the series always converges.
Anyway, you should limit yourself to small xs, otherwise the computation will be extremely lengthy, with very large products.
For the computation of the sine, always begin by reducing the argument to the range [0,π]. Even better, if you jointly develop a cosine function, you can reduce to [0,π/2].
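To illustrate the range-reduction idea in plain double arithmetic (only a sketch of the principle; the class name and the fixed term count are arbitrary, and this is not the BigDecimal version):

public class RangeReduction {
    // Reduce the argument into [0, pi/2] using symmetry, then sum a short Taylor series.
    static double sinReduced(double x) {
        double twoPi = 2 * Math.PI;
        x = x % twoPi;                                     // into (-2*pi, 2*pi)
        if (x < 0) x += twoPi;                             // into [0, 2*pi)
        if (x > Math.PI) return -sinReduced(x - Math.PI);  // sin(x) = -sin(x - pi)
        if (x > Math.PI / 2) x = Math.PI - x;              // sin(x) = sin(pi - x)
        double term = x, sum = x;                          // small-argument series
        for (int k = 1; k <= 10; k++) {
            term *= -x * x / ((2 * k) * (2 * k + 1));      // next x^(2k+1)/(2k+1)! term, sign alternating
            sum += term;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sinReduced(100.0));  // compare with:
        System.out.println(Math.sin(100.0));
    }
}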

Is this an effective while loop?

Here is an assignment:
"Let's say you are given a number, a, and you want to find its
square root. One way to do that is to start with a very rough guess about
the answer, x0, and then improve the guess using the following formula
x1 = (x0 + a/x0)/2
For example, if we want to find the square root of 9, and we start with x0 = 6,
then x1 = (6 + 9/6)/2 = 15/4 = 3.75, which is closer.
We can repeat the procedure, using x1 to calculate x2, and so on. In this
case, x2 = 3.075 and x3 = 3.00091. So that is converging very quickly on the
right answer(which is 3).
Write a method called squareRoot that takes a double as a parameter and
that returns an approximation of the square root of the parameter, using this
technique. You may not use Math.sqrt.
As your initial guess, you should use a/2. Your method should iterate until
it gets two consecutive estimates that differ by less than 0.0001; in other
words, until the absolute value of their difference is less than 0.0001. You can use
Math.abs to calculate the absolute value."
This is an exercise meant to practice while loops. As you can see, I did the assignment, and I think it works? But I am not sure how I came to the solution. In other words, what should I improve here? Is there a different way to enter the loop? How could I name the variables more appropriately? And lastly, is my approach good or bad here?
public class squareRoot {

    public static void main(String args[]) {
        System.out.println(squareRoot(192.0));
    }

    public static double squareRoot(double a) {
        double gs = a/2;                        // guess
        double ig = (gs + (a/gs))/2;            // improving guess
        double ig1 = (ig + (a/ig))/2;           // one more improving guess, ig1
        while (Math.abs(ig - ig1) > 0.0001) {   // with ig and ig1, I am entering the loop
            ig = (ig1 + (a/ig1))/2;
            ig1 = (ig + (a/ig))/2;              // ig1 has to be less than ig
        }
        return ig1;
    }
}
Your approach is nearly correct.
Let's talk about variables first. IMO, you should use full names for variables instead of acronyms. Use guess instead of gs. Use improvedGuess instead of ig etc.
Now that's out of the way we can see where your problem lies. For the while loop to finish, two consecutive guesses' difference must be less than 0.0001. However, here you are only comparing the 1st and 2nd guesses, the 3rd and 4th guesses, the 5th and 6th guesses etc. What if the 4th and 5th guesses' difference is less than 0.0001? Your loop won't stop. Instead, it returns the value of the 6th guess. Although it is more accurate, it does not fulfill the requirement.
Here's what I've come up with
public static double squareRoot(double a) {
    double guess = a/2;
    double improvedGuess = (guess + (a/guess))/2;
    while (Math.abs(guess - improvedGuess) > 0.0001) {
        guess = improvedGuess;
        improvedGuess = (guess + (a/guess))/2;
    }
    return improvedGuess;
}
Here is my solution:
private static double squareRoot(double a) {
    double x0 = a/2;
    while (true) {
        double x1 = (x0 + a / x0) / 2;
        if (Math.abs(x1 - x0) < 0.0001) {
            break;
        }
        x0 = x1;
    }
    return x0;
}
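A quick way to sanity-check either version against Math.sqrt (just an illustrative harness; note that the 0.0001 stopping rule bounds the gap between successive estimates, not the error itself, so small differences are expected):

public static void main(String[] args) {
    double[] samples = {2.0, 9.0, 192.0, 1e6};
    for (double a : samples) {
        double approx = squareRoot(a);   // either implementation above
        System.out.printf("%10.1f -> %12.6f (Math.sqrt: %12.6f)%n", a, approx, Math.sqrt(a));
    }
}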

What's the difference between those statements?

These days I'm working on some ACM-ICPC problems (although I already graduated; just for fun). Yesterday I nearly went crazy because an online judge kept saying "WRONG ANSWER" to code I'd written. Finally, after more than ten horrible hours, I realized that the following statement was the reason.
int target = (int)((double)(M * 100) / N) + 1; // RIGHT!!
int target = (int)((double) M / N * 100) + 1; // WRONG!!
I can't work out exactly how and why the first statement behaves differently from the second one. Because I'm not allowed to see the test cases the judge uses, it's a little hard for me to understand when the code can go wrong. Can anybody explain this to me? Thank you.
* I'm using Java.
As far as I can tell, the result of these two expressions
(double)(M * 100) / N
(double) M / N * 100
is the same EXCEPT for floating point precision errors (and also for possible overflows, but let us ignore them here since both lines are susceptible to them, albeit in a different way). These errors COULD cause the values to be one slightly above or equal to an integer, and one slightly below the same integer, which would cause
(int)((double)(M * 100) / N)
(int)((double) M / N * 100)
to differ by one. In general, when dealing with floating point, you have more chances to get closer to the "real" value if you leave the division as the last operation.
There is one further consideration, which can get quite tricky: you have no parentheses around (double)M/N in your second line. This MIGHT give additional freedom to an optimizer, which could make the result dependent on the optimization level. I don't know whether this can happen in Java.
As for the order of operations, I tried out this particular case in C (because that's quicker for me):
int i, j, k;
for (i = 1; i <= 100; i++) {
j = (int)(((double)i / 100) * 100);
if (i != j) {
printf("%d -> %d\n", i, j);
}
k = (int)(((double)i * 100) / 100);
if (i != k) {
printf("%d ?? %d\n", i, k);
}
}
and the output on my machine is
29 -> 28
57 -> 56
58 -> 57
Replacing 100 with 10000000 yields 587200 lines of the same kind (i.e., an error rate of 5.872%)
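For completeness, a roughly equivalent Java sketch of the same experiment (the class name is arbitrary; the exact lines printed may differ slightly from the C run above, but the pattern is the same):

public class DivisionOrder {
    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            int j = (int) (((double) i / 100) * 100);   // division first: can land just below the integer
            if (i != j) {
                System.out.println(i + " -> " + j);
            }
            int k = (int) (((double) i * 100) / 100);   // division last: stays exact here
            if (i != k) {
                System.out.println(i + " ?? " + k);
            }
        }
    }
}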

Fast calculation of RMS gives NaNs in Java - floating point error?

I'm getting a perplexing result doing math with floats. I have code that should never produce a negative number, yet it does, which causes NaNs when I try to take the square root.
This code appears to work very well in tests. However, when operating on real-world numbers (potentially very small, with exponents around -7 or -8), sum eventually becomes negative, leading to the NaNs. In theory, the subtraction step only ever removes a number that has already been added to the sum; is this a floating-point error problem? Is there any way to fix it?
The code:
public static float[] getRmsFast(float[] data, int halfWindow) {
    int n = data.length;
    float[] result = new float[n];
    float sum = 0.000000000f;
    for (int i = 0; i < 2*halfWindow; i++) {
        float d = data[i];
        sum += d * d;
    }
    result[halfWindow] = calcRms(halfWindow, sum);
    for (int i = halfWindow+1; i < n-halfWindow; i++) {
        float oldValue = data[i-halfWindow-1];
        float newValue = data[i+halfWindow-1];
        sum -= (oldValue*oldValue);
        sum += (newValue*newValue);
        float rms = calcRms(halfWindow, sum);
        result[i] = rms;
    }
    return result;
}

private static float calcRms(int halfWindow, float sum) {
    return (float) Math.sqrt(sum / (2*halfWindow));
}
For some background:
I am trying to optimize a function that calculates a rolling root mean square (RMS) on signal data. The optimization is pretty important; it's a hot-spot in our processing. The basic equation is simple - http://en.wikipedia.org/wiki/Root_mean_square - sum the squares of the data over the window, divide the sum by the size of the window, then take the square root.
The original code:
public static float[] getRms(float[] data, int halfWindow) {
    int n = data.length;
    float[] result = new float[n];
    for (int i = halfWindow; i < n - halfWindow; i++) {
        float sum = 0;
        for (int j = -halfWindow; j < halfWindow; j++) {
            sum += (data[i + j] * data[i + j]);
        }
        result[i] = calcRms(halfWindow, sum);
    }
    return result;
}
This code is slow because it reads the entire window from the array at each step, instead of taking advantage of the overlap in the windows. The intended optimization was to use that overlap, by removing the oldest value and adding the newest.
I've checked the array indices in the new version pretty carefully. It seems to be working as intended, but I could certainly be wrong in that area!
Update:
With our data, it was enough to change the type of sum to a double. I don't know why that didn't occur to me, but I left the negative check in. And FWIW, I was also able to implement a solution where recomputing the sum every 400 samples gave great run-time and enough accuracy. Thanks.
is this a floating-point error problem?
Yes it is. Due to rounding, you could well get negative values after subtracting a previous summand.
For example:
float sum = 0f;
sum += 1e10;
sum += 1e-10;
sum -= 1e10;
sum -= 1e-10;
System.out.println(sum);
On my machine, this prints
-1.0E-10
even though mathematically, the result is exactly zero.
This is the nature of floating point: 1e10f + 1e-10f gives exactly the same value as 1e10f.
As far as mitigation strategies go:
1. You could use double instead of float for enhanced precision.
2. From time to time, you could fully recompute the sum of squares to reduce the effect of rounding errors (see the sketch below).
3. When the sum goes negative, you could either do a full recalculation as in (2) above, or simply set the sum to zero. The latter is safe since you know that you'll be pushing the sum towards its true value, and never away from it.
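A minimal sketch combining (1) and (2), with (3) as a final guard; the method name getRmsStable and the 1000-sample recompute interval are made up for illustration, not from the original post:

public static float[] getRmsStable(float[] data, int halfWindow) {
    final int RECOMPUTE_EVERY = 1000;   // hypothetical interval; tune for your data
    int n = data.length;
    float[] result = new float[n];
    double sum = 0.0;                   // (1) accumulate in double, not float
    for (int i = 0; i < 2 * halfWindow; i++) {
        sum += (double) data[i] * data[i];
    }
    result[halfWindow] = (float) Math.sqrt(sum / (2 * halfWindow));
    for (int i = halfWindow + 1; i < n - halfWindow; i++) {
        if ((i - halfWindow) % RECOMPUTE_EVERY == 0) {
            // (2) periodically rebuild the window sum from scratch to shed rounding error
            sum = 0.0;
            for (int j = i - halfWindow; j < i + halfWindow; j++) {
                sum += (double) data[j] * data[j];
            }
        } else {
            double oldValue = data[i - halfWindow - 1];
            double newValue = data[i + halfWindow - 1];
            sum += newValue * newValue - oldValue * oldValue;
        }
        if (sum < 0) sum = 0;           // (3) clamp, in case rounding still bites
        result[i] = (float) Math.sqrt(sum / (2 * halfWindow));
    }
    return result;
}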
Try checking your indices in the second loop. The last value of i will be n-halfWindow-1 and n-halfWindow-1+halfWindow-1 is n-2.
You may need to change the loop to for (int i=halfWindow+1; i<n-halfWindow+1; i++).
You are running into issues with floating point numbers because you believe that they are just like mathematical real numbers. They are not, they are approximations of real numbers, mapped into discrete numbers, with a few special rules added into the mix.
Take the time to read up on what every programmer should know about floating point numbers, if you intend to use them often. Without some care the differences between floating point numbers and real numbers can come back and bite you in the worst ways.
Or, just take my word for it and know that every floating point number is "pretty close" to the requested value, with some being "dead on" accurate, but most being "mostly" accurate. This means you need to account for measurement error and keep it in mind after the calculations or risk believing you have an exact result at the end of the computation of the value (which you don't).

How can I turn a floating point number into the closest fraction represented by a byte numerator and denominator?

How can I write an algorithm that, given a floating point number, attempts to represent it as accurately as possible using a numerator and a denominator, both restricted to the range of a Java byte?
The reason for this is that an I2C device wants a numerator and denominator, while it would make sense to give it a float.
For example, 3.1415926535... would result in 245/78, rather than 314/100 or 22/7.
In terms of efficiency, this would be called around three times at the start of the program, but after that not at all. So a slow algorithm isn't too bad.
Here's the code I used in the end (based on uckelman's code)
public static int[] GetFraction(double input)
{
    int p0 = 1;
    int q0 = 0;
    int p1 = (int) Math.floor(input);
    int q1 = 1;
    int p2;
    int q2;
    double r = input - p1;
    double next_cf;
    while (true)
    {
        r = 1.0 / r;
        next_cf = Math.floor(r);
        p2 = (int) (next_cf * p1 + p0);
        q2 = (int) (next_cf * q1 + q0);
        // Limit the numerator and denominator to be 256 or less
        if (p2 > 256 || q2 > 256)
            break;
        // remember the last two fractions
        p0 = p1;
        p1 = p2;
        q0 = q1;
        q1 = q2;
        r -= next_cf;
    }
    input = (double) p1 / q1;
    // hard upper and lower bounds for ratio
    if (input > 256.0)
    {
        p1 = 256;
        q1 = 1;
    }
    else if (input < 1.0 / 256.0)
    {
        p1 = 1;
        q1 = 256;
    }
    return new int[] {p1, q1};
}
Thanks to those who helped.
I've written some code (in Java, even) to do just the thing you're asking for. In my case, I needed to display a scaling factor as both a percentage and a ratio. The most familiar example of this is the zoom dialog you see in image editors, such as the GIMP.
You can find my code here, in the updateRatio() method starting at line 1161. You can simply use it, so long as the LGPL license works for you. What I did essentially follows what's done in the GIMP---this is one of those things where there's pretty much only one efficient, sensible way to do it.
How worried are you about efficiency? If you're not calling this conversion function 100s of times per second or more, then it probably wouldn't be all that hard to brute-force through every possible denominator (most likely only 255 of them) and find which one gives the closest approximation (computing the numerator to go with the denominator is constant time).
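A minimal brute-force sketch along those lines (the name closestFraction is made up, and it assumes unsigned-style values 1..255 for numerator and denominator, since Java's byte is signed):

// Try every denominator from 1 to 255 and keep the closest fraction.
public static int[] closestFraction(double value) {
    int bestNum = 1, bestDen = 1;
    double bestErr = Double.MAX_VALUE;
    for (int den = 1; den <= 255; den++) {
        // nearest numerator for this denominator, clamped into range
        int num = (int) Math.round(value * den);
        num = Math.max(1, Math.min(255, num));
        double err = Math.abs(value - (double) num / den);
        if (err < bestErr) {
            bestErr = err;
            bestNum = num;
            bestDen = den;
        }
    }
    return new int[] {bestNum, bestDen};
}

For Math.PI this should land on 245/78, consistent with the example in the question.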
I would comment, but I don't have rep yet...
Eric's answer above doesn't consider the case where an exact result is possible. For example, if you use 0.4 as input, then the representation should be 2/5, in which case you end up with a division by zero in the third iteration of the loop (r=0 on second loop => r = 1/r error on third).
So you want to modify the while loop to exclude that option:
while(true)
should be
while(r != 0)
You should look at the Farey Sequence.
Given a limit on the denominator d, the Farey Sequence is every fraction having denominator <= d.
Then, you would simply take your float and compare it to the resolved value of the Farey fraction. This will allow you to represent your float in terms of repeating-decimal reals.
Here is a page on its implementation in java:
http://www.merriampark.com/fractions.htm
Here is a good demonstration of their use:
http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fractions/fareySB.html
What about using Apache's BigFraction:
import org.apache.commons.math3.fraction.BigFraction;
public static BigFraction GetBigFraction(double input)
{
    int precision = 1000000000;
    return new BigFraction((int)(input * (double)precision), precision);
}
I arrived here out of curiosity, but I remembered that Python's standard library has this feature in the fractions module.
Maybe we can look into the source code of these two functions:
Fraction.from_float
Fraction.limit_denominator
