Introduction
I am interested in writing math functions for BigDecimal (actually, also for
my own BigDecimal type written in Delphi, but that is irrelevant here -- in this question, I use Java's BigDecimal because more people know it and
my BigDecimal is very similar; the test code below is in Java, works fine, and works equally well in the Delphi
translation).
I know that BigDecimal is not fast, but it is pretty accurate. I do not want to use an existing Java BigDecimal math library, not least
because this is for my own BigDecimal type (in Delphi) as well.
As a nice example of how to implement trig functions, I found the following simple example (but I forgot where, sorry). It obviously uses the
Maclaurin series to calculate the cosine of a BigDecimal, to a given precision.
Question
This precision is exactly my problem. The code below uses an extra precision of 5 to calculate the result, and only at the end does it round down to the desired precision.
I have a feeling that an extra precision of 5 is fine for, say, a target precision up to 50 or even a little more, but not for BigDecimals with a much higher precision (say, 1000 digits or more). Unfortunately, I couldn't find a way to verify this (e.g. with an extremely accurate online calculator).
Finally, my question: am I right -- that 5 is probably not enough for larger numbers -- and if I am, how can I calculate or estimate the extra precision required?
Example code that calculates cos(BigDecimal):
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.List;

public class BigDecimalTrigTest
{
    private List<BigDecimal> _trigFactors;
    private int _precision;
    private final int _extraPrecision = 5; // Question: is 5 enough?

    public BigDecimalTrigTest(int precision)
    {
        _precision = precision;
        _trigFactors = new ArrayList<>();
        BigDecimal one = new BigDecimal("1.0");
        BigDecimal stopWhen = one.movePointLeft(precision + _extraPrecision);
        System.out.format("stopWhen = %s\n", stopWhen.toString());
        BigDecimal factorial = new BigDecimal(2.0);
        BigDecimal inc = new BigDecimal(2.0);
        BigDecimal factor = null;
        do
        {
            factor = one.divide(factorial, precision + _extraPrecision,
                                RoundingMode.HALF_UP);  // factor = 1/factorial
            _trigFactors.add(factor);
            inc = inc.add(one);                  // inc = inc + 1
            factorial = factorial.multiply(inc); // factorial = factorial * inc
            inc = inc.add(one);                  // inc = inc + 1
            factorial = factorial.multiply(inc); // factorial = factorial * inc
        } while (factor.compareTo(stopWhen) > 0);
    }

    // sin(x) = x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - ... = Sum[n=0..inf] (-1)^n * x^(2n+1) / (2n+1)!
    // cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! - ... = Sum[n=0..inf] (-1)^n * x^(2n) / (2n)!
    public BigDecimal cos(BigDecimal x)
    {
        BigDecimal res = new BigDecimal("1.0");
        BigDecimal xn = x.multiply(x);
        for (int i = 0; i < _trigFactors.size(); i++)
        {
            BigDecimal factor = _trigFactors.get(i);
            factor = factor.multiply(xn);
            if (i % 2 == 0)
            {
                factor = factor.negate();
            }
            res = res.add(factor);
            xn = xn.multiply(x);
            xn = xn.multiply(x);
            xn = xn.setScale(_precision + _extraPrecision, RoundingMode.HALF_UP);
        }
        return res.setScale(_precision, RoundingMode.HALF_UP);
    }

    public static void main(String[] args)
    {
        BigDecimalTrigTest bdtt = new BigDecimalTrigTest(50);
        BigDecimal half = new BigDecimal("0.5");
        System.out.println("Math.cos(0.5) = " + Math.cos(0.5));
        System.out.println("this.cos(0.5) = " + bdtt.cos(half));
    }
}
Update
A test with Wolfram Alpha for cos(.5) to 10000 digits (as @RC commented) gives the same result as my test code for the same precision. Perhaps 5 is enough as extra precision, but I need more tests to be sure.
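One way to run such tests more systematically (my own sketch, not from the original post; it assumes _extraPrecision were turned into a constructor parameter, which the class above does not have -- it hard-codes 5) is to compare a run using the guard under test against a run with a much more generous guard:

// Hypothetical self-check, assuming a BigDecimalTrigTest(precision, extraPrecision)
// constructor variant existed:
BigDecimal half = new BigDecimal("0.5");
BigDecimal ref  = new BigDecimalTrigTest(1000, 50).cos(half); // generous guard
BigDecimal test = new BigDecimalTrigTest(1000, 5).cos(half);  // guard under test
System.out.println(ref.subtract(test).abs()); // zero iff 5 guard digits sufficed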
You can reduce arguments that are not in the range -pi < x <= pi to that range. The Taylor expansion for sin(x) gets less accurate as abs(x) increases, so reducing x to this range will increase your accuracy for large numbers.
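For illustration, here is a minimal sketch of such a reduction. It assumes a hypothetical pi(MathContext) helper that returns pi to the working precision (BigDecimal has no built-in pi, so you would have to compute or store one):

static BigDecimal reduceArgument(BigDecimal x, MathContext mc) {
    BigDecimal pi = pi(mc);                // hypothetical helper, not shown
    BigDecimal twoPi = pi.multiply(BigDecimal.valueOf(2));
    BigDecimal r = x.remainder(twoPi, mc); // now -2*pi < r < 2*pi
    if (r.compareTo(pi) > 0) {
        r = r.subtract(twoPi);             // shift down into (-pi, pi]
    } else if (r.compareTo(pi.negate()) <= 0) {
        r = r.add(twoPi);                  // shift up into (-pi, pi]
    }
    return r;
}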
Related
I'm writing a function that implements the expression (1/n!)*(1!+2!+3!+...+n!).
The function is passed the argument n and has to return the above expression as a double, truncated to the 6th decimal place. The issue I'm running into is that the factorial value becomes so large that it turns into infinity (for large values of n).
Here is my code:
public static double going(int n) {
    double factorial = 1.00;
    double result = 0.00, sum = 0.00;
    for (int i = 1; i < n + 1; i++) {
        factorial *= i;
        sum += factorial;
    }
    // Truncate decimals to 6 places
    result = (1 / factorial) * (sum);
    long truncate = (long) Math.pow(10, 6);
    result = result * truncate;
    long value = (long) result;
    return (double) value / truncate;
}
Now, the above code works fine for, say, n = 5 or n = 113, but for anything above n = 170 my factorial and sum expressions become infinity. Is my approach just not going to work due to the exponential growth of the numbers? And what would be a workaround for calculating very large numbers that doesn't impact performance too much? (I believe BigInteger is quite slow, from looking at similar questions.)
You can solve this without evaluating a single factorial.
Your formula simplifies to something considerably simpler, computationally speaking:
1!/n! + 2!/n! + 3!/n! + ... + 1
Aside from the first and last terms, a lot of factors actually cancel, which will help the precision of the final result; for example, for 3! / n! you only need to multiply 1 / 4 through to 1 / n. What you must not do is evaluate the factorials and then divide them.
If 15 decimal digits of precision is acceptable (which it appears that it is from your question) then you can evaluate this in floating point, adding the small terms first. As you develop the algorithm, you'll notice the terms are related, but be very careful how you exploit that as you risk introducing material imprecision. (I'd consider that as a second step if I were you.)
Here's a prototype implementation. Note that I accumulate all the individual terms in an array first, then I sum them up starting with the smaller terms first. I think it's computationally more accurate to start from the final term (1.0) and work backwards, but that might not be necessary for a series that converges so quickly. Let's do this thoroughly and analyse the results.
private static double evaluate(int n){
    double terms[] = new double[n];
    double term = 1.0;
    terms[n - 1] = term;
    while (n > 1){
        terms[n - 2] = term /= n;
        --n;
    }
    double sum = 0.0;
    for (double t : terms){
        sum += t;
    }
    return sum;
}
You can see how very quickly the first terms become insignificant. I think you only need a few terms to compute the result to the tolerance of a floating point double. Let's devise an algorithm to stop when that point is reached:
The final version. It seems that the series converges so quickly that you don't need to worry about adding small terms first. So you end up with the absolutely beautiful
private static double evaluate_fast(int n){
    double sum = 1.0;
    double term = 1.0;
    while (n > 1){
        double old_sum = sum;
        sum += term /= n--;
        if (sum == old_sum){
            // precision exhausted for the type
            break;
        }
    }
    return sum;
}
As you can see, there is no need for BigDecimal &c, and certainly never a need to evaluate any factorials.
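A quick sanity check (my own driver, not part of the original answer):

public static void main(String[] args) {
    // (1! + 2! + 3! + 4! + 5!) / 5! = 153 / 120 = 1.275
    System.out.println(evaluate_fast(5));   // ~1.275
    // n = 171 overflows double factorials (171! == Infinity as a double),
    // but the term-by-term form stays well inside double range.
    System.out.println(evaluate_fast(171)); // roughly 1.0058825...
}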
You could use BigDecimal like this:
public static double going(int n) {
    BigDecimal factorial = BigDecimal.ONE;
    BigDecimal sum = BigDecimal.ZERO;
    BigDecimal result;
    for (int i = 1; i < n + 1; i++) {
        factorial = factorial.multiply(new BigDecimal(i));
        sum = sum.add(factorial);
    }
    // Truncate decimals to 6 places
    result = sum.divide(factorial, 6, RoundingMode.HALF_EVEN);
    return result.doubleValue();
}
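For example (a hypothetical driver just to show the call; it needs java.math.BigDecimal and java.math.RoundingMode imported):

System.out.println(going(5));   // (1!+2!+3!+4!+5!)/5! = 153/120 = 1.275
System.out.println(going(171)); // ~1.005883, with no overflow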
I have this code:
while (counter <= t)
{
    a = a * counter;
    c = c * r + a;
    counter++;
}
and I have t with the value 101835.
The output should be 7.1*10^438, but NetBeans shows
the output as infinity.
Is there anything that will give me the output as a decimal number?
Yes, what you should use here is the BigDecimal class. It will handle a value of that magnitude without trouble.
The maximum supported double value is only around 1.7 * 10^308, as given by Double.MAX_VALUE in Java.
A double as defined by IEEE 754 can't represent such a number; it's too large. The bounds are approximately -10^308 to 10^308.
You need to use a BigDecimal to represent it: an arbitrary-precision number that uses as many bytes as needed to represent its digits.
Better way to implement this:
double c = 0.0005d; // initialize c
double r = 0.01d;   // initialize r
double a = 0.0006d; // initialize a
BigDecimal abd = new BigDecimal(a); // BigDecimal for a
BigDecimal cbd = new BigDecimal(c); // BigDecimal for c
BigDecimal rbd = new BigDecimal(r); // BigDecimal for r
for (int counter = 1; counter <= t; counter++) { // perhaps another offset for counter?
    abd = abd.multiply(new BigDecimal(counter));
    cbd = cbd.multiply(rbd).add(abd);
}
A potential problem with this approach is that the precision is too high: Java will calculate all operations exactly, resulting in numbers with thousands of digits. Since every operation blows up the number of digits, within a few iterations simple addition and multiplication operations become infeasible.
You can solve this by specifying a precision using the optional MathContext parameter: it determines how precise the result should be. You can for instance use MathContext.DECIMAL128:
int t = 101835;
double c = 0.0005d; // initialize c
double r = 0.01d;   // initialize r
double a = 0.0006d; // initialize a
BigDecimal abd = new BigDecimal(a); // BigDecimal for a
BigDecimal cbd = new BigDecimal(c); // BigDecimal for c
BigDecimal rbd = new BigDecimal(r); // BigDecimal for r
for (int counter = 1; counter <= t; counter++) {
    abd = abd.multiply(new BigDecimal(counter), MathContext.DECIMAL128);
    cbd = cbd.multiply(rbd, MathContext.DECIMAL128).add(abd, MathContext.DECIMAL128);
}
System.out.println(abd);
System.out.println(cbd);
This gives:
abd = 3.166049846031012773846494375835059E+465752
cbd = 3.166050156931013454758413539958330E+465752
This is approximately correct: after all, the exact result for a should be 0.0006 * 101835!, which is approximately what Wolfram Alpha gives.
Furthermore, I would advise using a for loop and not a while when it is really a for loop, since while loops tend to create another type of infinity: an infinite loop ;).
Numbers are being stored in a database (out of my control) as floats/doubles etc.
When I pull them out they are damaged - for example 0.1 will come out (when formatted) as 0.100000001490116119384765625.
Is there a reliable way to recover these numbers?
I have tried new BigDecimal(((Number) o).doubleValue()) and BigDecimal.valueOf(((Number) o).doubleValue()) but these do not work. I still get the damaged result.
I am aware that I could make assumptions about the number of decimal places and round them, but this will break for numbers that are deliberately 0.33333333333, for example.
Is there a simple method that will work for most rationals?
I suppose what I am asking is: is there a simple way of finding the minimal rational number that is within a small delta of a float value?
You can store the numbers in the database as Strings and just parseDouble() them on retrieval. This way the number won't be damaged; it will be the same as what you stored.
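For instance, a minimal sketch of that idea (the values here are made up):

String stored = "0.1";                      // store the text, not the binary value
double d = Double.parseDouble(stored);      // parse back on retrieval
BigDecimal exact = new BigDecimal(stored);  // or keep full decimal fidelity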
is there a simple way of finding a rational number that is within 0.00001 of a float number?
This is called rounding.
double d = ((Number) o).doubleValue();
double d2 = Math.round(d * 1e5) / 1e5;
BigDecimal bd = BigDecimal.valueOf(d2);
or you can use BigDecimal to perform the rounding (I avoid using BigDecimal, as it is needlessly slow once you know how to use rounding of doubles)
double d = ((Number) o).doubleValue();
BigDecimal bd = BigDecimal.valueOf(d).setScale(5, RoundingMode.HALF_UP);
Note: never use new BigDecimal(double) unless you understand what it does. Most likely BigDecimal.valueOf(double) is what you want.
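To see the difference:

System.out.println(new BigDecimal(0.1));
// 0.1000000000000000055511151231257827021181583404541015625
System.out.println(BigDecimal.valueOf(0.1));
// 0.1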
Here's the bludgeon way I have done it - I would welcome a more elegant solution.
I chose an implementation of Rational that had a mediant method ready-made for me.
I refactored it to use long instead of int and then added:
// Default delta to apply.
public static final double DELTA = 0.000001;

public static Rational valueOf(double dbl) {
    return valueOf(dbl, DELTA);
}

// Create a good rational for the value within the delta supplied.
public static Rational valueOf(double dbl, double delta) {
    // Primary checks.
    if (delta <= 0.0) {
        throw new IllegalArgumentException("Delta must be > 0.0");
    }
    // Remove the integral part.
    long integral = (long) Math.floor(dbl);
    dbl -= integral;
    // The value we are looking for.
    final Rational d = new Rational((long) ((dbl) / delta), (long) (1 / delta));
    // Min value = d - delta.
    final Rational min = new Rational((long) ((dbl - delta) / delta), (long) (1 / delta));
    // Max value = d + delta.
    final Rational max = new Rational((long) ((dbl + delta) / delta), (long) (1 / delta));
    // Start the Farey sequence.
    Rational l = ZERO;
    Rational h = ONE;
    Rational found = null;
    // Keep slicing until we arrive within the delta range.
    do {
        // Either limit between min and max -> found it.
        if (found == null && min.compareTo(l) <= 0 && max.compareTo(l) >= 0) {
            found = l;
        }
        if (found == null && min.compareTo(h) <= 0 && max.compareTo(h) >= 0) {
            found = h;
        }
        if (found == null) {
            // Make the mediant.
            Rational m = mediant(l, h);
            // Replace either l or h with the mediant.
            if (m.compareTo(d) < 0) {
                l = m;
            } else {
                h = m;
            }
        }
    } while (found == null);
    // Bring back the sign and the integral part.
    if (integral != 0) {
        found = found.plus(new Rational(integral, 1));
    }
    // That's me.
    return found;
}

public BigDecimal toBigDecimal() {
    // Do it to just 4 decimal places.
    return toBigDecimal(4);
}

public BigDecimal toBigDecimal(int digits) {
    // Do it to n decimal places.
    return new BigDecimal(num).divide(new BigDecimal(den), digits, RoundingMode.DOWN).stripTrailingZeros();
}
Essentially, the algorithm starts with a range of 0-1. At each iteration I check to see if either end of the range falls between my d-delta and d+delta limits. If it does, we've found an answer.
If no answer is found we take the mediant of the two limits and replace one of the limits with it. The limit to replace is chosen to ensure the limits surround d at all times.
This is essentially doing a binary-chop search between 0 and 1 to find the first rational that falls within the desired range.
Mathematically I climb down the Stern-Brocot Tree choosing the branch that keeps me enclosing the desired number until I fall into the desired delta.
NB: I have not finished my testing, but it certainly finds 1/10 for my input of 0.100000001490116119384765625, 1/3 for 1.0/3.0, and the classic 355/113 for π.
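Usage would then look something like this (my sketch, assuming the refactored Rational class described above):

Rational r = Rational.valueOf(0.100000001490116119384765625);
System.out.println(r.toBigDecimal()); // 0.1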
It looks like BigDecimal.setScale truncates to the scale+1 decimal position and then rounds based on that decimal only.
Is this normal or there is a clean way to apply the rounding mode to every single decimal?
This outputs 0.0697 (this is NOT the rounding mode they taught me at school):
double d = 0.06974999999;
BigDecimal bd = BigDecimal.valueOf(d);
bd = bd.setScale(4, RoundingMode.HALF_UP);
System.out.println(bd);
This outputs 0.0698 (this is the rounding mode they taught me at school):
double d = 0.0697444445;
BigDecimal bd = BigDecimal.valueOf(d);
int scale = bd.scale();
while (4 < scale) {
    bd = bd.setScale(--scale, RoundingMode.HALF_UP);
}
System.out.println(bd);
EDITED
After reading some answers, I realized I messed everything up. I was a bit frustrated when I wrote my question.
So I'm going to rewrite it, because even though the answers helped me a lot, I still need some advice.
The problem is:
I need to round 0.06974999999 to 0.0698, because I know all those decimals are in fact meant to be 0.06975 (a rounding error in a place not under my control).
So I've been playing around with a kind of "double rounding" which performs the rounding in two steps: first round to some higher precision, then round to the precision needed.
(Here is where I messed up, because I thought a loop over every decimal place would be safer.)
The thing is that I don't know which higher precision to round to in the first step (I'm using the number of decimals - 1). Also, I don't know whether I could get unexpected results in other cases.
Here is the first approach, which I discarded in favour of the one with the loop; it now looks a lot better after reading your answers:
public static BigDecimal getBigDecimal(double value, int decimals) {
    BigDecimal bd = BigDecimal.valueOf(value);
    int scale = bd.scale();
    if (scale - decimals > 1) {
        bd = bd.setScale(scale - 1, RoundingMode.HALF_UP);
    }
    return bd.setScale(decimals, RoundingMode.HALF_UP);
}
This prints the following results:
0.0697444445 = 0.0697
0.0697499994 = 0.0697
0.0697499995 = 0.0698
0.0697499999 = 0.0698
0.0697444445 = 0.069744445 // rounded to 9 decimals
0.0697499999 = 0.069750000 // rounded to 9 decimals
0.069749 = 0.0698
The questions now are: is there a better way to do this (maybe a different rounding mode), and is this safe to use as a general rounding method?
I need to round many values, and having to choose at runtime between this and the standard approach depending on the kind of numbers I receive seems really complex.
Thanks again for your time.
When you are rounding, you look at everything that comes after the last digit you are rounding to. In your first example you are rounding 0.06974999999 to 4 decimal places, so you have 0.0697 followed by 4999999 (essentially 697.4999999). As the rounding mode is HALF_UP and 0.4999999 is less than 0.5, it is rounded down.
If the difference between 0.06974999999 and 0.06975 matters so much, you should have switched to BigDecimal a bit sooner. At the very least, if performance is so important, figure out some way to use longs and ints. Doubles and floats are not for people who can tell the difference between 1.0 and 0.999999999999999. When you use them, information gets lost, and there's no certain way to recover it.
(This information can seem insignificant, to put it mildly, but if travelling 1,000,000 yards puts you at the top of a cliff, travelling 1,000,001 yards will put you a yard past the top of the cliff. That one last yard matters. And if you lose 1 penny in a billion dollars, you'll be in even worse trouble when the accountants come after you.)
If you need to bias your rounding you can add a small factor.
E.g. to round up to 6 decimal places:
double d =
double rounded = (long) (d * 1000000 + 0.5) / 1e6;
To add a small factor, you need to decide how much extra you want to give. E.g.
double d =
double rounded = (long) (d * 1000000 + 0.50000001) / 1e6;
e.g.
public static void main(String... args) throws IOException {
    double d1 = 0.0697499994;
    double r1 = roundTo4places(d1);
    double d2 = 0.0697499995;
    double r2 = roundTo4places(d2);
    System.out.println(d1 + " => " + r1);
    System.out.println(d2 + " => " + r2);
}

public static double roundTo4places(double d) {
    return (long) (d * 10000 + 0.500005) / 1e4;
}
prints
0.0697499994 => 0.0697
0.0697499995 => 0.0698
The first one is correct.
0.44444444 ... 44445 rounded as an integer is 0.0
only 0.500000000 ... 000 or more is rounded up to 1.0
There is no rounding mode which will round 0.4 down and 0.45 up.
If you think about it, you want an equal chance that a random number will be rounded up or down. If you sum a large enough number of random numbers, the error created by rounding cancels out.
The half up round is the same as
long n = (long) (d + 0.5);
Your suggested rounding is
long n = (long) (d + 5.0/9);
Random r = new Random(0);
int count = 10000000;
// round using half up.
long total = 0, total2 = 0;
for (int i = 0; i < count; i++) {
    double d = r.nextDouble();
    int rounded = (int) (d + 0.5);
    total += rounded;
    BigDecimal bd = BigDecimal.valueOf(d);
    int scale = bd.scale();
    while (0 < scale) {
        bd = bd.setScale(--scale, RoundingMode.HALF_UP);
    }
    int rounded2 = bd.intValue();
    total2 += rounded2;
}
System.out.printf("The expected total of %,d rounded random values is %,d,%n\tthe actual total was %,d, using the biased rounding %,d%n",
        count, count / 2, total, total2);
prints
The expected total of 10,000,000 rounded random values is 5,000,000,
the actual total was 4,999,646, using the biased rounding 5,555,106
http://en.wikipedia.org/wiki/Rounding#Round_half_up
What about trying the previous and next double values to see if they reduce the scale?
public static BigDecimal getBigDecimal(double value) {
    BigDecimal bd = BigDecimal.valueOf(value);
    BigDecimal next = BigDecimal.valueOf(Math.nextAfter(value, Double.POSITIVE_INFINITY));
    if (next.scale() < bd.scale()) {
        return next;
    }
    next = BigDecimal.valueOf(Math.nextAfter(value, Double.NEGATIVE_INFINITY));
    if (next.scale() < bd.scale()) {
        return next;
    }
    return bd;
}
The resulting BigDecimal can then be rounded to the scale needed. (I can't tell what the performance impact of this would be for a large number of values.)
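For instance (my own illustration, not from the original answer):

double damaged = Math.nextAfter(0.1, 0.0);        // the double just below 0.1
System.out.println(BigDecimal.valueOf(damaged));  // 0.09999999999999999
System.out.println(getBigDecimal(damaged));       // 0.1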
I am using the following code to round the float value which is given as input, but I can't get it right. If I give $80 I should get $80.00, and if I give $40.009889 I should get $40.01. How do I do this?
public class round {
    public static float round_this(float num) {
        //float num = 2.954165f;
        float round = Round(num, 2);
        return round;
    }

    private static float Round(float Rval, int Rpl) {
        float p = (float) Math.pow(10, Rpl);
        Rval = Rval * p;
        float tmp = Math.round(Rval);
        return (float) tmp / p;
    }
}
This is why you don't use floats for money: you get stuck in this game of 'Garbage In, Data Out'. Use java.math.BigDecimal instead. BigDecimal lets you specify a fixed-decimal value that isn't subject to representation problems.
(When I say don't use floats, that includes doubles; it's the same issue.)
Here's an example. I create two BigDecimal numbers. For the first one I use the constructor that takes a floating-point number; for the second one I use the constructor that takes a string. In both cases the BigDecimal shows me what number it actually holds:
groovy:000> f = new BigDecimal(1.01)
===> 1.0100000000000000088817841970012523233890533447265625
groovy:000> d = new BigDecimal("1.01")
===> 1.01
See this question for more explanation, and there's another question with a good answer here.
From The Floating-Point Guide:
Why don’t my numbers, like 0.1 + 0.2, add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
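This is easy to demonstrate in Java:

System.out.println(0.1 + 0.2); // prints 0.30000000000000004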
Use the BigDecimal class instead of float, with Java code like this:
BigDecimal bd = new BigDecimal(floatVal);
bd = bd.setScale(2, BigDecimal.ROUND_HALF_UP);
I wouldn't use float; I suggest using double or int or long or (if you have to) BigDecimal:
private static final long[] TENS = new long[19];
static {
    TENS[0] = 1;
    for (int i = 1; i < TENS.length; i++)
        TENS[i] = TENS[i - 1] * 10;
}

public static double round(double x, int precision) {
    long tens = TENS[precision];
    long unscaled = (long) (x < 0 ? x * tens - 0.5 : x * tens + 0.5);
    return (double) unscaled / tens;
}
This does not give a precise answer for all fractions, as that is not possible with floating point, but it will give you an answer which will print correctly.
double num = 2.954165;
double round = round(num, 2);
System.out.println(round);
prints
2.95
This would do it.
public static void main(String[] args) {
    double d = 12.349678;
    int r = (int) Math.round(d * 100);
    double f = r / 100.0;
    System.out.println(f);
}
You could shorten this method; I wrote it like this because it's easier to understand.
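The shortened form might look like this (my sketch of that compression):

double f = Math.round(d * 100) / 100.0;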