What is the fastest way to round to three decimal places? - java

The SO community was right: profiling your code before you ask performance questions makes more sense than my approach of randomly guessing :-) I profiled my code (very intensive math) and found that over 70% of the time is apparently spent in a part I didn't think was a source of slowdown: rounding decimals.
static double roundTwoDecimals(double d) {
    // "#.###" actually formats to three decimal places; also note a new
    // DecimalFormat is constructed on every call, which is where the time goes.
    DecimalFormat twoDForm = new DecimalFormat("#.###");
    return Double.valueOf(twoDForm.format(d));
}
My problem is that I get decimal numbers that are normally .01, .02, etc., but sometimes I get something like .070000000001 (I really only care about the 0.07, but the floating point imprecision causes my other formulas that use the result to fail). I simply want the first 3 decimals to avoid this problem.
So is there a better/faster way to do this?

The standard way to round (positive) numbers would be something like this:
double rounded = floor(1000 * doubleVal + 0.5) / 1000;
Example 1: floor(1000 * .1234 + 0.5) / 1000 = floor(123.9)/1000 = 0.123
Example 2: floor(1000 * .5678 + 0.5) / 1000 = floor(568.3)/1000 = 0.568
But as @nuakh commented, you'll always be plagued by rounding errors to some extent. If you want exactly 3 decimal places, your best bet is to convert to thousandths (that is, multiply everything by 1000) and use an integral data type (int, long, etc.).
In that case, you'd skip the final division by 1000 and use the integral values 123 and 568 for your calculations. If you want the results in the form of percentages, you'd divide by 10 for display:
123 → 12.3%
568 → 56.8%
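A rough sketch of that integral approach (the helper names here are made up for illustration):

static long toThousandths(double d) {
    // Work in thousandths with long arithmetic; convert back only for display.
    return Math.round(d * 1000); // 0.070000000001 -> 70
}

static String asPercent(long thousandths) {
    return (thousandths / 10.0) + "%"; // 123 -> "12.3%"
}

All intermediate sums and comparisons then happen in exact integer arithmetic, so the 0.070000000001-style drift never appears.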

Using a cast is faster than using floor or round. I suspect a cast is more heavily optimised by the HotSpot compiler.
public class Main {
    public static final int ITERS = 1000 * 1000;

    public static void main(String... args) {
        for (int i = 0; i < 3; i++) {
            perfRoundTo3();
            perfCastRoundTo3();
        }
    }

    private static double perfRoundTo3() {
        double sum = 0.0;
        long start = 0;
        for (int i = -20000; i < ITERS; i++) {
            if (i == 0) start = System.nanoTime(); // ignore the warm-up iterations
            sum += roundTo3(i * 1e-4);
        }
        long time = System.nanoTime() - start;
        System.out.printf("Took %,d ns per round%n", time / ITERS);
        return sum;
    }

    private static double perfCastRoundTo3() {
        double sum = 0.0;
        long start = 0;
        for (int i = -20000; i < ITERS; i++) {
            if (i == 0) start = System.nanoTime();
            sum += castRoundTo3(i * 1e-4);
        }
        long time = System.nanoTime() - start;
        System.out.printf("Took %,d ns per cast round%n", time / ITERS);
        return sum;
    }

    public static double roundTo3(double d) {
        // Math.round already adds 0.5 internally, so don't add it again here.
        return Math.round(d * 1000) / 1000.0;
    }

    public static double castRoundTo3(double d) {
        // Truncating cast; this is correct half-up rounding for non-negative values only.
        return (long) (d * 1000 + 0.5) / 1000.0;
    }
}
prints
Took 22 ns per round
Took 9 ns per cast round
Took 23 ns per round
Took 6 ns per cast round
Took 20 ns per round
Took 6 ns per cast round
Note: as of Java 7, floor(x + 0.5) and round(x) don't do quite the same thing; see "Why does Math.round(0.49999999999999994) return 1?"
This will round correctly to within the representation error. That is, while the result is not exact (e.g. 0.001 is not representable exactly as a double), toString() corrects for this. It's only when you convert to BigDecimal or perform another arithmetic operation that you will see the representation error.
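You can see both effects in one place (a small sketch using castRoundTo3 from above):

double d = castRoundTo3(0.07000000000001);
System.out.println(d);                 // 0.07 - toString() corrects the representation error
System.out.println(new BigDecimal(d)); // something like 0.0700000000000000066... - the exact stored value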

Related

Floating point arithmetic - summation - Java DoubleStream

One of the first things we learn in floating point arithmetic is how rounding error plays a crucial role in double summation. Let's say we have an array double[] myArray and we want to find the mean. What we could trivially do is:
double sum = 0.0;
for (int i = 0; i < myArray.length; i++) {
    sum += myArray[i];
}
double mean = sum / myArray.length;
However, we would have rounding error. This error can be reduced using other summation algorithms such as the Kahan algorithm (see https://en.wikipedia.org/wiki/Kahan_summation_algorithm).
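For reference, plain Kahan summation looks something like this (a minimal sketch; the Klein variant further down adds a second-order correction):

public static double kahanSum(double[] input) {
    double sum = 0.0;
    double c = 0.0;          // running compensation for lost low-order bits
    for (double x : input) {
        double y = x - c;    // subtract the previously lost bits
        double t = sum + y;  // low-order bits of y may be lost here...
        c = (t - sum) - y;   // ...recover them algebraically
        sum = t;
    }
    return sum;
}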
I have recently discovered Java Streams (refer to: https://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html) and in particular DoubleStream (see: https://docs.oracle.com/javase/8/docs/api/java/util/stream/DoubleStream.html).
With the code:
double sum = DoubleStream.of(myArray).parallel().sum();
double average = sum / myArray.length;
we can get the average of our array. Two advantages are remarkable in my opinion:
More concise code
Faster as it is parallelized
Of course we could also have done something like:
double average = DoubleStream.of(myArray).parallel().average().getAsDouble();
(average() returns an OptionalDouble, hence the getAsDouble()) but I wanted to stress the summation.
At this point I have a question (which the API docs didn't answer): is this sum() method numerically stable? I have done some experiments and it appears to work fine. However, I am not sure it is at least as good as the Kahan algorithm. Any help really welcomed!
The documentation says:
Returns the sum of elements in this stream. Summation is a special
case of a reduction. If floating-point summation were exact, this
method would be equivalent to:
return reduce(0, Double::sum);
However, since floating-point summation is not exact, the above code
is not necessarily equivalent to the summation computation done by
this method.
Have you considered using BigDecimal to get exact results?
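For example, a minimal sketch of an exact sum (new BigDecimal(double) is used deliberately here so each double contributes its exact binary value):

BigDecimal exact = DoubleStream.of(myArray).boxed()
        .map(BigDecimal::new)
        .reduce(BigDecimal.ZERO, BigDecimal::add);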
Interesting, so I implemented the Klein variant of the Kahan algorithm mentioned in the Wikipedia article, and a Stream version of it.
The results are not convincing.
double[] values = new double[10_000];
Random random = new Random();
Arrays.setAll(values, (i) -> Math.atan(random.nextDouble() * Math.PI * 2) * 3E17);

long t0 = System.nanoTime();
double sum1 = DoubleStream.of(values).sum();
long t1 = System.nanoTime();
double sum2 = DoubleStream.of(values).parallel().sum();
long t2 = System.nanoTime();
double sum3 = kleinSum(values);
long t3 = System.nanoTime();
double sum4 = kleinSumAsStream(values);
long t4 = System.nanoTime();
System.out.printf(
    "seq %f (%d ns)%npar %f (%d ns)%nkah %f (%d ns)%nstr %f (%d ns)%n",
    sum1, t1 - t0,
    sum2, t2 - t1,
    sum3, t3 - t2,
    sum4, t4 - t3);
A non-stream version of the modified Kahan algorithm:
public static double kleinSum(double[] input) {
    double sum = 0.0;
    double cs = 0.0;   // first-order correction
    double ccs = 0.0;  // second-order correction
    for (int i = 0; i < input.length; ++i) {
        double t = sum + input[i];
        double c = Math.abs(sum) >= Math.abs(input[i])
                ? (sum - t) + input[i]
                : (input[i] - t) + sum;
        sum = t;
        t = cs + c;
        double cc = Math.abs(cs) >= Math.abs(c)
                ? (cs - t) + c
                : (c - t) + cs;
        cs = t;
        ccs += cc;
    }
    return sum + cs + ccs;
}
A Stream version:
public static double kleinSumAsStream(double[] input) {
    // Note: mutating the identity array inside reduce() violates the reduce
    // contract; it happens to work sequentially but is not safe in parallel.
    double[] scc = DoubleStream.of(input)
            .boxed()
            .reduce(new double[3],
                    (sumCsCcs, x) -> {
                        double t = sumCsCcs[0] + x;
                        double c = Math.abs(sumCsCcs[0]) >= Math.abs(x)
                                ? (sumCsCcs[0] - t) + x
                                : (x - t) + sumCsCcs[0];
                        sumCsCcs[0] = t;
                        t = sumCsCcs[1] + c;
                        double cc = Math.abs(sumCsCcs[1]) >= Math.abs(c)
                                ? (sumCsCcs[1] - t) + c
                                : (c - t) + sumCsCcs[1];
                        sumCsCcs[1] = t;
                        sumCsCcs[2] += cc;
                        return sumCsCcs;
                    },
                    (scc1, scc2) -> new double[] {
                        scc2[0] + scc1[0],
                        scc2[1] + scc1[1],
                        scc2[2] + scc1[2]});
    return scc[0] + scc[1] + scc[2];
}
Mind that the times would only be real evidence if a proper microbenchmark harness had been used.
However one still sees the overhead of a DoubleStream:
sequential 3363280744568882000000,000000 (5083900 ns)
parallel 3363280744568882500000,000000 (4492600 ns)
klein 3363280744568882000000,000000 (1051600 ns)
kleinStream 3363280744568882000000,000000 (3277500 ns)
Unfortunately I did not manage to provoke the floating point errors properly, and it's getting late for me.
Using a Stream instead of kleinSum needs a reduction over at least two doubles (sum and correction), here three: a double[3] or, in newer Java, a record(double sum, double cs, double ccs) value, as sketched below.
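A minimal sketch of that record-based variant (assuming Java 16+; being immutable, it avoids mutating the identity):

record KleinAcc(double sum, double cs, double ccs) {
    KleinAcc add(double x) {
        double t = sum + x;
        double c = Math.abs(sum) >= Math.abs(x) ? (sum - t) + x : (x - t) + sum;
        double t2 = cs + c;
        double cc = Math.abs(cs) >= Math.abs(c) ? (cs - t2) + c : (c - t2) + cs;
        return new KleinAcc(t, t2, ccs + cc);
    }
    double total() { return sum + cs + ccs; }
}

double total = DoubleStream.of(values).boxed()
        .reduce(new KleinAcc(0, 0, 0), KleinAcc::add,
                // naive combiner: fine sequentially, not compensation-aware in parallel
                (a, b) -> new KleinAcc(a.sum() + b.sum(), a.cs() + b.cs(), a.ccs() + b.ccs()))
        .total();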
A far less magical auxiliary approach is to sort the input by magnitude.
float (used for readability reasons only; double has a precision limit too, as used later) has a 24-bit mantissa (of which 23 bits are stored, the 24th being an implicit 1 for "normal" numbers), so if you have the number 2^24, you simply can't add 1 to it; the smallest increment it has is 2:
float f=1<<24;
System.out.println(Float.valueOf(f).intValue());
f++;
f++;
System.out.println(Float.valueOf(f).intValue());
f+=2;
System.out.println(Float.valueOf(f).intValue());
will display
16777216
16777216 <-- 16777216+1+1
16777218 <-- 16777216+2
while summing them in the other direction works
float f=0;
System.out.println(Float.valueOf(f).intValue());
f++;
f++;
System.out.println(Float.valueOf(f).intValue());
f+=2;
System.out.println(Float.valueOf(f).intValue());
f+=1<<24;
System.out.println(Float.valueOf(f).intValue());
produces
0
2
4
16777220 <-- 4+16777216
(Of course the pair of f++s is intentional: 16777219 does not exist as a float, just like 16777217 in the previous case. These are not incomprehensibly huge numbers, yet a line as simple as System.out.println((int)(float)16777219); already prints 16777220.)
The same thing applies to double, just there you have 53 bits of precision.
Two things:
the documentation actually suggests this: API Note: Elements sorted by increasing absolute magnitude tend to yield more accurate results
sum() internally ends up in Collectors.sumWithCompensation(), whose source explicitly states that it is an implementation of Kahan summation. (The GitHub link is to the JetBrains mirror because the JDK itself uses different source hosting, which is harder to find and link - but the file is present in your JDK too, inside src.zip, usually located in the lib folder.)
Ordering by magnitude is something like ordering by log(abs(x)), which is a bit uglier in code, but possible:
double t[] = {Math.pow(2, 53), 1, -1, -Math.pow(2, 53), 1};
System.out.println(DoubleStream.of(t).boxed().collect(Collectors.toList()));
t = DoubleStream.of(t).boxed()
        .sorted((a, b) -> (int) (Math.log(Math.abs(a)) - Math.log(Math.abs(b))))
        .mapToDouble(d -> d)
        .toArray();
System.out.println(DoubleStream.of(t).boxed().collect(Collectors.toList()));
will print an okay order
[9.007199254740992E15, 1.0, -1.0, -9.007199254740992E15, 1.0]
[1.0, -1.0, 1.0, 9.007199254740992E15, -9.007199254740992E15]
So it's nice, but you can actually break it with little effort (the first few lines show that 2^53 really is the "integer limit" for double, and also "remind" us of the actual value; then the sum with a single +1 ends up being less than 2^53):
double d = Math.pow(2, 53);
System.out.println(Double.valueOf(d).longValue());
d++;
d++;
System.out.println(Double.valueOf(d).longValue());
d += 2;
System.out.println(Double.valueOf(d).longValue());

double array[] = {Math.pow(2, 53), 1, 1, 1, 1};
for (var i = 0; i < 5; i++) {
    var copy = Arrays.copyOf(array, i + 1);
    d = DoubleStream.of(copy).sum();
    System.out.println(i + ": " + Double.valueOf(d).longValue());
}
produces
9007199254740992
9007199254740992 <-- 9007199254740992+1+1
9007199254740994 <-- 9007199254740992+2
0: 9007199254740992
1: 9007199254740991 <-- that would be 9007199254740992+1 with Kahan
2: 9007199254740994
3: 9007199254740996 <-- "rounding" upwards, just like with (float)16777219 earlier
4: 9007199254740996
TL;DR: you don't need your own Kahan implementation, but use computers with care in general.

Calculating the nth factorial

I'm writing a function that implements the expression (1/n!)*(1!+2!+3!+...+n!).
The function is passed the argument n, and I have to return the above expression as a double, truncated to the 6th decimal place. The issue I'm running into is that the factorial value becomes so large that it overflows to infinity (for large values of n).
Here is my code:
public static double going(int n) {
    double factorial = 1.00;
    double result = 0.00, sum = 0.00;
    for (int i = 1; i < n + 1; i++) {
        factorial *= i;
        sum += factorial;
    }
    // Truncate decimals to 6 places
    result = (1 / factorial) * sum;
    long truncate = (long) Math.pow(10, 6);
    result = result * truncate;
    long value = (long) result;
    return (double) value / truncate;
}
Now, the above code works fine for, say, n = 5 or n = 113, but anything above n = 170 and my factorial and sum expressions become infinity. Is my approach just not going to work due to the exponential growth of the numbers? And what would be a workaround for calculating very large numbers that doesn't impact performance too much? (I believe BigInteger is quite slow, from looking at similar questions.)
You can solve this without evaluating a single factorial.
Your formula simplifies to the considerably simpler, computationally speaking
1!/n! + 2!/n! + 3!/n! + ... + 1
Aside from the first and last terms, a lot of factors actually cancel, which will help the precision of the final result, for example for 3! / n! you only need to multiply 1 / 4 through to 1 / n. What you must not do is to evaluate the factorials and divide them.
If 15 decimal digits of precision is acceptable (which it appears that it is from your question) then you can evaluate this in floating point, adding the small terms first. As you develop the algorithm, you'll notice the terms are related, but be very careful how you exploit that as you risk introducing material imprecision. (I'd consider that as a second step if I were you.)
Here's a prototype implementation. Note that I accumulate all the individual terms in an array first, then I sum them up starting with the smaller terms first. I think it's computationally more accurate to start from the final term (1.0) and work backwards, but that might not be necessary for a series that converges so quickly. Let's do this thoroughly and analyse the results.
private static double evaluate(int n) {
    double terms[] = new double[n];
    double term = 1.0;
    terms[n - 1] = term;
    while (n > 1) {
        terms[n - 2] = term /= n;
        --n;
    }
    double sum = 0.0;
    for (double t : terms) {
        sum += t;
    }
    return sum;
}
You can see how very quickly the first terms become insignificant. I think you only need a few terms to compute the result to the tolerance of a floating point double. Let's devise an algorithm to stop when that point is reached:
The final version. It seems that the series converges so quickly that you don't need to worry about adding small terms first. So you end up with the absolutely beautiful
private static double evaluate_fast(int n) {
    double sum = 1.0;
    double term = 1.0;
    while (n > 1) {
        double old_sum = sum;
        sum += term /= n--;
        if (sum == old_sum) {
            // precision exhausted for the type
            break;
        }
    }
    return sum;
}
As you can see, there is no need for BigDecimal &c, and certainly never a need to evaluate any factorials.
You could use BigDecimal like this:
public static double going(int n) {
BigDecimal factorial = BigDecimal.ONE;
BigDecimal sum = BigDecimal.ZERO;
BigDecimal result;
for(int i=1; i<n+1; i++){
factorial = factorial.multiply(new BigDecimal(i));
sum = sum.add(factorial);
}
//Truncate decimals to 6 places
result = sum.divide(factorial, 6, RoundingMode.HALF_EVEN);
return result.doubleValue();
}
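A quick sanity check of the BigDecimal version (expected values worked out by hand):

// (1! + 2! + 3! + 4! + 5!) / 5! = 153 / 120
System.out.println(going(5));   // 1.275
// no overflow for large n; the series converges quickly towards 1
System.out.println(going(170)); // approximately 1.005917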

Rounding a double to one Decimal place not working.

First of all, I'd like to point out that I know this is a topic covered a lot, but after about an hour of looking at other questions and trying what is suggested there, I still can't get this to work.
I'm performing a binary search through performances for numbers that are within 0.1 of a given number, time. However, I need both the min/max targets and the number we are searching with (average) to be rounded to one decimal place.
The method assumes that performances is sorted in ascending order.
public static int binarySearch(Performance[] performances, double time) {
    if (performances == null) {
        return -1;
    }
    int first = 0;
    int last = performances.length - 1;
    double targetMax = time + 0.1;
    double targetMin = time - 0.1;
    targetMax = Math.round(targetMax * 100) / 100.0;
    targetMin = Math.round(targetMin * 100) / 100.0;
    while (first <= last) {
        int mid = (first + last) / 2;
        double average = performances[mid].averageTime();
        average = Math.round(average * 100) / 100.0;
        if (targetMax > average && targetMin < average) {
            return mid;
        } else if (average < targetMin) {
            // average is too small, so search the upper half (ascending order)
            first = mid + 1;
        } else {
            last = mid - 1;
        }
    }
    return -1;
}
Here's where it gets weird: the rounding I have done seems to work fine for targetMax and targetMin, with 9.299999999 rounding to 9.3; however, when rounding average down from 9.3333333333, it returns 9.33.
I really am stumped on this one; aren't I doing the exact same thing to both variables?
New to this website, so please excuse anything I've left out; just ask and I'll edit it in. Trying my best! :)
You're rounding both to two decimal places - it's just that 9.2999999 rounded is 9.30.
Change your 100s to 10 to round to a single decimal place in each case.
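That is, something like:

targetMax = Math.round(targetMax * 10) / 10.0; // 9.299999999 -> 9.3
targetMin = Math.round(targetMin * 10) / 10.0;
average = Math.round(average * 10) / 10.0;     // 9.3333333333 -> 9.3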

Recover the original number from a float

Numbers are being stored in a database (out of my control) as floats/doubles etc.
When I pull them out they are damaged - for example 0.1 will come out (when formatted) as 0.100000001490116119384765625.
Is there a reliable way to recover these numbers?
I have tried new BigDecimal(((Number) o).doubleValue()) and BigDecimal.valueOf(((Number) o).doubleValue()) but these do not work. I still get the damaged result.
I am aware that I could make assumptions on the number of decimal places and round them but this will break for numbers that are deliberately 0.33333333333 for example.
Is there a simple method that will work for most rationals?
I suppose I am asking: is there a simple way of finding the simplest rational number that is within a small delta of a float value?
You can store the numbers in the database as String and just parseDouble() them on retrieval. This way the number won't be damaged; it will be the same as what you stored.
is there a simple way of finding a rational number that is within 0.00001 of a float number?.
This is called rounding.
double d = ((Number) o).doubleValue();
double d2 = Math.round(d * 1e5) / 1e5;
BigDecimal bd = BigDecimal.valueOf(d2);
or you can use BigDecimal to perform the rounding (I avoid using BigDecimal as it is needlessly slow once you know how to use rounding of doubles)
double d = ((Number) o).doubleValue();
BigDecimal bd = BigDecimal.valueOf(d).setScale(5, RoundingMode.HALF_UP);
Note: never use new BigDecimal(double) unless you understand what it does. Most likely BigDecimal.valueOf(double) is what you want.
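The difference is easy to see:

System.out.println(new BigDecimal(0.1));
// 0.1000000000000000055511151231257827021181583404541015625 (the exact binary value)
System.out.println(BigDecimal.valueOf(0.1));
// 0.1 (valueOf goes via Double.toString(), which picks the shortest representation)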
Here's the bludgeon way I have done it - I would welcome a more elegant solution.
I chose an implementation of Rational that had a mediant method ready-made for me.
I refactored it to use long instead of int and then added:
// Default delta to apply.
public static final double DELTA = 0.000001;

public static Rational valueOf(double dbl) {
    return valueOf(dbl, DELTA);
}

// Create a good rational for the value within the delta supplied.
public static Rational valueOf(double dbl, double delta) {
    // Primary checks.
    if (delta <= 0.0) {
        throw new IllegalArgumentException("Delta must be > 0.0");
    }
    // Remove the integral part.
    long integral = (long) Math.floor(dbl);
    dbl -= integral;
    // The value we are looking for.
    final Rational d = new Rational((long) (dbl / delta), (long) (1 / delta));
    // Min value = d - delta.
    final Rational min = new Rational((long) ((dbl - delta) / delta), (long) (1 / delta));
    // Max value = d + delta.
    final Rational max = new Rational((long) ((dbl + delta) / delta), (long) (1 / delta));
    // Start the Farey sequence.
    Rational l = ZERO;
    Rational h = ONE;
    Rational found = null;
    // Keep slicing until we arrive within the delta range.
    do {
        // Either between min and max -> found it.
        if (found == null && min.compareTo(l) <= 0 && max.compareTo(l) >= 0) {
            found = l;
        }
        if (found == null && min.compareTo(h) <= 0 && max.compareTo(h) >= 0) {
            found = h;
        }
        if (found == null) {
            // Make the mediant.
            Rational m = mediant(l, h);
            // Replace either l or h with the mediant.
            if (m.compareTo(d) < 0) {
                l = m;
            } else {
                h = m;
            }
        }
    } while (found == null);
    // Bring back the sign and the integral part.
    if (integral != 0) {
        found = found.plus(new Rational(integral, 1));
    }
    // That's me.
    return found;
}

public BigDecimal toBigDecimal() {
    // Do it to just 4 decimal places.
    return toBigDecimal(4);
}

public BigDecimal toBigDecimal(int digits) {
    // Do it to n decimal places.
    return new BigDecimal(num).divide(new BigDecimal(den), digits, RoundingMode.DOWN).stripTrailingZeros();
}
Essentially - the algorithm starts with a range of 0-1. At each iteration I check to see if either end of the range falls between my d-delta - d+delta range. If it does we've found an answer.
If no answer is found we take the mediant of the two limits and replace one of the limits with it. The limit to replace is chosen to ensure the limits surround d at all times.
This is essentially doing a binary-chop search between 0 and 1 to find the first rational that falls within the desired range.
Mathematically I climb down the Stern-Brocot Tree choosing the branch that keeps me enclosing the desired number until I fall into the desired delta.
NB: I have not finished my testing but it certainly finds 1/10 for my input of 0.100000001490116119384765625 and 1/3 for 1.0/3.0 and the classic 355/113 for π.

Java rounding with a bunch of nines at the end

It looks like BigDecimal.setScale truncates to the scale+1 decimal position and then rounds based on that decimal only.
Is this normal, or is there a clean way to apply the rounding mode to every single decimal?
This outputs: 0.0697
(this is NOT the rounding mode they taught me at school)
double d = 0.06974999999;
BigDecimal bd = BigDecimal.valueOf(d);
bd = bd.setScale(4, RoundingMode.HALF_UP);
System.out.println(bd);
This outputs: 0.0698
(this is the rounding mode they taught me at school)
double d = 0.0697444445;
BigDecimal bd = BigDecimal.valueOf(d);
int scale = bd.scale();
while (4 < scale) {
    bd = bd.setScale(--scale, RoundingMode.HALF_UP);
}
System.out.println(bd);
EDITED
After reading some answers, I realized I messed everything up; I was a bit frustrated when I wrote my question.
So I'm going to rewrite the question because, even though the answers helped me a lot, I still need some advice.
The problem is:
I need to round 0.06974999999 to 0.0698; that's because I know those many decimals are in fact meant to be 0.06975 (a rounding error in a place not under my control).
So I've been playing around with a kind of "double rounding" which performs the rounding in two steps: first round to some higher precision, then round to the precision needed.
(Here is where I messed up, because I thought a loop over every decimal place would be safer.)
The thing is that I don't know which higher precision to round to in the first step (I'm using the number of decimals - 1). Also, I don't know if I could find unexpected results for other cases.
Here is the first version, which I discarded in favour of the one with the loop, and which now looks a lot better after reading your answers:
public static BigDecimal getBigDecimal(double value, int decimals) {
    BigDecimal bd = BigDecimal.valueOf(value);
    int scale = bd.scale();
    if (scale - decimals > 1) {
        bd = bd.setScale(scale - 1, RoundingMode.HALF_UP);
    }
    return bd.setScale(decimals, RoundingMode.HALF_UP);
}
This prints the following results:
0.0697444445 = 0.0697
0.0697499994 = 0.0697
0.0697499995 = 0.0698
0.0697499999 = 0.0698
0.0697444445 = 0.069744445 // rounded to 9 decimals
0.0697499999 = 0.069750000 // rounded to 9 decimals
0.069749 = 0.0698
The questions now are: is there a better way to do this (maybe a different rounding mode)? And is this safe to use as a general rounding method?
I need to round many values, and having to choose at runtime between this and the standard approach, depending on the kind of numbers I receive, seems really complex.
Thanks again for your time.
When you are rounding, you look at the value that comes after the last digit you are rounding to. In your first example you are rounding 0.06974999999 to 4 decimal places, so you have 0.0697 followed by 4999999 (essentially 697.4999999). As the rounding mode is HALF_UP and 0.4999999 is less than 0.5, it is rounded down.
If the difference between 0.06974999999 and 0.06975 matters so much, you should have switched to BigDecimal a bit sooner. At the very least, if performance is so important, figure out some way to use longs and ints. Doubles and floats are not for people who can tell the difference between 1.0 and 0.999999999999999. When you use them, information gets lost, and there's no certain way to recover it.
(This information can seem insignificant, to put it mildly, but if travelling 1,000,000 yards puts you at the top of a cliff, travelling 1,000,001 yards will put you a yard past the top of the cliff. That one last yard matters. And if you lose 1 penny in a billion dollars, you'll be in even worse trouble when the accountants come after you.)
If you need to bias your rounding you can add a small factor.
e.g. to round up to 6 decimal places.
double d = ...;
double rounded = (long) (d * 1000000 + 0.5) / 1e6;
to add a small factor you need to decide how much extra you want to give. e.g.
double d = ...;
double rounded = (long) (d * 1000000 + 0.50000001) / 1e6;
e.g.
public static void main(String... args) throws IOException {
    double d1 = 0.0697499994;
    double r1 = roundTo4places(d1);
    double d2 = 0.0697499995;
    double r2 = roundTo4places(d2);
    System.out.println(d1 + " => " + r1);
    System.out.println(d2 + " => " + r2);
}

public static double roundTo4places(double d) {
    return (long) (d * 10000 + 0.500005) / 1e4;
}
prints
0.0697499994 => 0.0697
0.0697499995 => 0.0698
The first one is correct.
0.44444444 ... 44445 rounded as an integer is 0.0
only 0.500000000 ... 000 or more is rounded up to 1.0
There is no rounding mode which will round 0.4 down and 0.45 up.
If you think about it, you want an equal chance that a random number will be rounded up or down. If you sum a large enough number of random numbers, the error created by rounding cancels out.
The half up round is the same as
long n = (long) (d + 0.5);
Your suggested rounding is
long n = (long) (d + 5.0/9);
Random r = new Random(0);
int count = 10000000;
// round using half up.
long total = 0, total2 = 0;
for (int i = 0; i < count; i++) {
    double d = r.nextDouble();
    int rounded = (int) (d + 0.5);
    total += rounded;

    BigDecimal bd = BigDecimal.valueOf(d);
    int scale = bd.scale();
    while (0 < scale) {
        bd = bd.setScale(--scale, RoundingMode.HALF_UP);
    }
    int rounded2 = bd.intValue();
    total2 += rounded2;
}
System.out.printf("The expected total of %,d rounded random values is %,d,%n\tthe actual total was %,d, using the biased rounding %,d%n",
        count, count / 2, total, total2);
prints
The expected total of 10,000,000 rounded random values is 5,000,000,
the actual total was 4,999,646, using the biased rounding 5,555,106
http://en.wikipedia.org/wiki/Rounding#Round_half_up
What about trying previous and next values to see if they reduce the scale?
public static BigDecimal getBigDecimal(double value) {
    BigDecimal bd = BigDecimal.valueOf(value);
    BigDecimal next = BigDecimal.valueOf(Math.nextAfter(value, Double.POSITIVE_INFINITY));
    if (next.scale() < bd.scale()) {
        return next;
    }
    next = BigDecimal.valueOf(Math.nextAfter(value, Double.NEGATIVE_INFINITY));
    if (next.scale() < bd.scale()) {
        return next;
    }
    return bd;
}
The resulting BigDecimal can then be rounded to the scale needed. (I can't tell the performance impact of this for a large number of values.)
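For example (a small usage sketch; 0.1 + 0.2 produces the classic damaged value):

double damaged = 0.1 + 0.2;                 // 0.30000000000000004
System.out.println(getBigDecimal(damaged)); // 0.3 - the neighbouring double has the smaller scale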
