Java Multiplication .* - java

Why is .* used in Java?
For example,
double probability = 1.*count/numdata;
gives the same output as:
double probability = count/numdata;

The 1. here is the double literal 1.0, so 1.*count multiplies count by 1.0, promoting it to double and making the division floating-point. If count and numdata are integral (int or long), count/numdata is integer division and the result is again integral, so the fraction gets lost: truncated, not even rounded. As a probability is between 0.0 and 1.0, and so numdata >= count, you would get only 0 or 1.
Simplest would be to make the division floating point:
double probability = ((double)count) / numdata;
or (more obfuscating though!)
double probability = count;
probability /= numdata;
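A quick demo of the difference (the sample values are illustrative):
int count = 3, numdata = 4;
System.out.println(count / numdata); // 0: integer division truncates
System.out.println(1. * count / numdata); // 0.75: the double literal 1. forces floating-point division
System.out.println((double) count / numdata); // 0.75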

Related

Generate random float, both bounds inclusive

I need to generate random real numbers in the range [-0.5, 0.5], both bounds inclusive.
I found various ways to generate similar ranges, like
-0.5 + Math.random()
But the upper bound is always exclusive; I need it inclusive as well: 0.5 must be inside the range.
One way to achieve this would be to create random int from -500 to 500 and then divide it by 1000.
Random rand = new Random();
int max = 500;
int min = -500;
int randomInt = rand.nextInt((max - min) + 1) + min; // uniform in [-500, 500], both bounds inclusive
float randomNum = randomInt / 1000.00f; // scale down to [-0.5, 0.5]
System.out.println(randomNum);
You can change the precision by adding and removing zeros from the integer boundaries and the divisor (e.g. create integers from -5 to +5 and divide by 10 for less precision).
A disadvantage of that solution is that it does not use the maximum precision provided by float/double data types.
I haven't seen any answer that uses bit-fiddling inside the IEEE-754 Double representation, so here's one.
Based on the observation that a rollover to the next binary exponent is the same as adding 1 to the binary representation (this is actually by design):
Double.longBitsToDouble(0x3ff0000000000000L) // 1.0
Double.longBitsToDouble(0x3ffFFFFFFFFFFFFFL) // 1.9999999999999998
Double.longBitsToDouble(0x4000000000000000L) // 2.0
I came up with this:
long l = ThreadLocalRandom.current().nextLong(0x0010000000000001L); // 0 .. 2^52, both inclusive
double r = Double.longBitsToDouble(l + 0x3ff0000000000000L) - 1.5; // [1.0, 2.0] shifted to [-0.5, 0.5]
This technique works only with ranges that span a binary number (a power of two: 1, 2, 4, 8, 0.5, 0.25, etc.), but for those ranges this approach is possibly the most efficient and accurate. This example is tuned for a span of 1. For ranges that do not span a binary number, you can still use this technique to get a different span: apply the technique to get a number in the range [0, 1] and scale the result to the desired span. This has negligible accuracy loss, and the resulting accuracy is actually identical to that of Random.nextDouble(double, double).
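For example, a minimal sketch of that scaling step (the target range here is illustrative):
long bits = ThreadLocalRandom.current().nextLong(0x0010000000000001L); // 0 .. 2^52, both inclusive
double unit = Double.longBitsToDouble(bits + 0x3ff0000000000000L) - 1.0; // [0.0, 1.0], both inclusive
double min = 0.0, max = 0.3; // span 0.3 is not a power of two
double scaled = min + unit * (max - min); // [0.0, 0.3], both inclusive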
For other spans, execute this code to find the offset:
double span = 0.125;
if (!(span > 0.0) || (Double.doubleToLongBits(span) & 0x000FFFFFFFFFFFFFL) != 0)
    throw new IllegalArgumentException("'span' is not a binary number: " + span);
if (span * 2 >= Double.MAX_VALUE)
    throw new IllegalArgumentException("'span' is too large: " + span);
System.out.println("Offset: 0x" + Long.toHexString(Double.doubleToLongBits(span)));
When you plug this offset into the second line of the actual code, you get a value in the range [span, 2*span]. Subtract the span to get a value starting at 0.
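For instance, for span = 0.125 the snippet above prints Offset: 0x3fc0000000000000, which plugs in like this:
long l = ThreadLocalRandom.current().nextLong(0x0010000000000001L); // 0 .. 2^52, both inclusive
double d = Double.longBitsToDouble(l + 0x3fc0000000000000L); // [0.125, 0.25], both inclusive
double r = d - 0.125; // [0.0, 0.125], both inclusive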
You can adjust the upper bound by the minimal value (epsilon) that makes it larger than the maximum value you expect. To find the epsilon, start with any positive value and make it as small as it can get:
double min = -0.5;
double max = 0.5;
double epsilon = 1;
while (max + epsilon / 2 > max) {
    epsilon /= 2;
}
Random random = ThreadLocalRandom.current();
DoubleStream randomDoubles = random.doubles(min, max + epsilon);
Edit: an alternative suggested by @DodgyCodeException (results in the same epsilon as above):
double min = -0.5;
double max = 0.5;
double maxPlusEpsilon = Double.longBitsToDouble(Double.doubleToLongBits(max) + 1L);
Random random = ThreadLocalRandom.current();
DoubleStream randomDoubles = random.doubles(min, maxPlusEpsilon);
Given OH GOD SPIDERS' answer, here is a ready-to-use function:
public static double randomFloat(double minInclusive, double maxInclusive, double precision) {
    int max = (int) (maxInclusive / precision);
    int min = (int) (minInclusive / precision);
    Random rand = new Random();
    int randomInt = rand.nextInt((max - min) + 1) + min;
    double randomNum = randomInt * precision;
    return randomNum;
}
then
System.out.print(randomFloat(-0.5, 0.5, 0.01));
@OH GOD SPIDERS' answer gave me an idea to develop it into an answer that gives greater precision. nextLong() gives a value between Long.MIN_VALUE and Long.MAX_VALUE with more than adequate precision when cast to double.
Random rand = new Random();
double randomNum = (rand.nextLong() / 2.0) / Long.MAX_VALUE;
Proof that bounds are inclusive:
assert (Long.MIN_VALUE/2.0)/Long.MAX_VALUE == -0.5;
assert (Long.MAX_VALUE/2.0)/Long.MAX_VALUE == 0.5;
Random.nextDouble gives a value in the range [0, 1), with the upper bound exclusive. So to map that to a range of [-0.5, 0.5) you just need to subtract 0.5 (note that this, too, leaves the upper bound exclusive).
You can use this code to get the desired output
double value = r.nextDouble() - 0.5;

How do I round up a double to a certain amount of decimals

I want to call a function with a double parameter and an int precision.
This function would have to round the number to the given number of decimal places.
Example: function(1.23432, 4) would have to round that number to 4 decimals (1.2343). Could anyone help me out with this?
BigDecimal is your friend when it comes to rounding numbers. You can specify a MathContext to explicitly set how you want your rounding to work, and then define the precision you want to use. If you still want a double at the end you can call BigDecimal.doubleValue().
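A minimal sketch of that approach, using setScale (which rounds to a number of decimal places, while a MathContext counts significant digits):
// requires java.math.BigDecimal and java.math.RoundingMode
BigDecimal bd = BigDecimal.valueOf(1.23432).setScale(4, RoundingMode.HALF_UP);
double result = bd.doubleValue(); // 1.2343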
Try this code
String result = String.format("%.2f", 10.0 / 3.0);
// result: "3.33"
First, you compute 10^precision, then you multiply it by your number, round that to a whole number, and divide by 10^precision:
public double round(double number, int precision) {
    // 10 to the power of "precision"
    double n = Math.pow(10.0, precision);
    // multiply by "number" and round to the nearest whole value
    long aux = Math.round(n * number);
    // divide to get the result
    return aux / n;
}
Then you call:
double result = round(1.23432, 4);
System.out.println(result); // 1.2343
Try this:
public String round(double value, int factor) {
    // convert the factor into a form round() can use, e.g. 4 -> 1.0E-4
    double newFactor = convertFactor(factor);
    // round the value to the nearest multiple of newFactor
    String newVal = Double.toString(Math.round(value / newFactor) * newFactor);
    // convert the result to a String and cut it:
    // a too-high value or factor would cause inaccuracies, and the cut removes them;
    // factor + 2 accounts for the leading "0." of the result,
    // and Math.min() guards against a factor longer than the string
    return newVal.substring(0, Math.min(newVal.length(), factor + 2));
}
public double convertFactor(double factor) {
    double newFactor = 1;
    // divide newFactor by 10 once per requested decimal place
    for (int i = 0; i < factor; i++) {
        newFactor /= 10;
    }
    return newFactor;
}
Use convertFactor() to convert your "normal" factor into a factor (called newFactor) the round() method can use.
The round() method calculates the rounded value and converts it into a String cut to the maximum length the factor allows.
Too high values of value and factor would cause small inaccuracies, and cutting the String gets rid of them.
Example code (for your example):
System.out.println("newFactor: " + convertFactor(4)); //only for test!
System.out.println("Rounded value: " + round(1.23432, 4));
//newFactor: 1.0E-4
//Rounded value: 1.2343

Calculating the nth factorial

I'm writing a function that implements the expression (1/n!)*(1!+2!+3!+...+n!).
The function is passed the argument n and I have to return the above expression as a double, truncated to the 6th decimal place. The issue I'm running into is that the factorial value becomes so large that it becomes infinity (for large values of n).
Here is my code:
public static double going(int n) {
    double factorial = 1.00;
    double result = 0.00, sum = 0.00;
    for (int i = 1; i < n + 1; i++) {
        factorial *= i;
        sum += factorial;
    }
    //Truncate decimals to 6 places
    result = (1 / factorial) * (sum);
    long truncate = (long) Math.pow(10, 6);
    result = result * truncate;
    long value = (long) result;
    return (double) value / truncate;
}
Now, the above code works fine for, say, n=5 or n=113, but anything above n=170 and my factorial and sum expressions become infinity. Is my approach just not going to work due to the exponential growth of the numbers? And what would be a workaround for calculating very large numbers that doesn't impact performance too much? (I believe BigInteger is quite slow, from looking at similar questions.)
You can solve this without evaluating a single factorial.
Computationally speaking, your formula simplifies to the considerably simpler
1!/n! + 2!/n! + 3!/n! + ... + 1
Aside from the first and last terms, a lot of factors actually cancel, which helps the precision of the final result; for example, for 3!/n! you only need to multiply 1/4 through to 1/n. What you must not do is evaluate the factorials and then divide them.
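A minimal sketch of that cancellation (the method name is illustrative, not from the question):
// k!/n! for k <= n, without evaluating either factorial:
// k!/n! = 1/(k+1) * 1/(k+2) * ... * 1/n
static double factorialRatio(int k, int n) {
    double ratio = 1.0;
    for (int i = k + 1; i <= n; i++) {
        ratio /= i;
    }
    return ratio;
}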
If 15 decimal digits of precision is acceptable (which it appears to be, from your question) then you can evaluate this in floating point, adding the small terms first. As you develop the algorithm, you'll notice the terms are related, but be very careful how you exploit that, as you risk introducing material imprecision. (I'd consider that as a second step if I were you.)
Here's a prototype implementation. Note that I accumulate all the individual terms in an array first, then I sum them up starting with the smaller terms first. I think it's computationally more accurate to start from the final term (1.0) and work backwards, but that might not be necessary for a series that converges so quickly. Let's do this thoroughly and analyse the results.
private static double evaluate(int n) {
    double[] terms = new double[n];
    double term = 1.0;
    terms[n - 1] = term;
    while (n > 1) {
        terms[n - 2] = term /= n;
        --n;
    }
    double sum = 0.0;
    for (double t : terms) {
        sum += t;
    }
    return sum;
}
You can see how very quickly the first terms become insignificant. I think you only need a few terms to compute the result to the tolerance of a floating point double. Let's devise an algorithm to stop when that point is reached:
The final version. It seems that the series converges so quickly that you don't need to worry about adding small terms first. So you end up with the absolutely beautiful
private static double evaluate_fast(int n) {
    double sum = 1.0;
    double term = 1.0;
    while (n > 1) {
        double old_sum = sum;
        sum += term /= n--;
        if (sum == old_sum) {
            // precision exhausted for the type
            break;
        }
    }
    return sum;
}
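A usage sketch (printed values are approximate):
System.out.println(evaluate_fast(5)); // ≈ 1.275 = (1! + 2! + 3! + 4! + 5!) / 5! = 153/120
System.out.println(evaluate_fast(200)); // ≈ 1.0050251, stays finite where the factorial-based version overflows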
As you can see, there is no need for BigDecimal etc., and certainly never a need to evaluate any factorials.
You could use BigDecimal like this:
// requires java.math.BigDecimal and java.math.RoundingMode
public static double going(int n) {
    BigDecimal factorial = BigDecimal.ONE;
    BigDecimal sum = BigDecimal.ZERO;
    BigDecimal result;
    for (int i = 1; i < n + 1; i++) {
        factorial = factorial.multiply(new BigDecimal(i));
        sum = sum.add(factorial);
    }
    // round to 6 decimal places (use RoundingMode.DOWN to truncate instead)
    result = sum.divide(factorial, 6, RoundingMode.HALF_EVEN);
    return result.doubleValue();
}

Is double division equal to integer division if there is no remainder?

On the JVM, does division between two double values always yield the same exact result as doing the integer division?
With the following prerequisites:
Division without remainder
No division by zero
Both x and y actually hold integer values.
E.g. in the following code
double x = ...;
int resultInt = ...;
double y = x * resultInt;
double resultDouble = y / x; // double division
does resultDouble always equal resultInt or could there be some loss of precision?
There are two reasons that assigning an int to a double or a float might lose precision:
There are certain numbers that just can't be represented as a double/float, so they end up approximated
Large integer numbers may carry too much precision in the least-significant digits
So it depends on how big the int is, but in Java a double uses a 52-bit mantissa, so it will be able to represent a 32-bit integer without loss of data.
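A quick demonstration of that boundary:
int i = Integer.MAX_VALUE;
System.out.println((int) (double) i == i); // true: every int round-trips through double
long big = (1L << 53) + 1; // first whole number a double cannot represent exactly
System.out.println((long) (double) big == big); // false: the least-significant digit is lost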
There are fabulous examples on these two sites:
1- Java's Floating-Point (Im)Precision
2- About Primitive Data Types In Java
also check:
Loss of precision - int -> float or double
Yes, if x and y are both in the int range, unless the division is -2147483648.0 / -1.0. In that case, the double result will be 2147483648.0, the exact quotient, which does not match the int division result (it overflows to -2147483648).
If both division inputs are in the int range, they are both exactly representable as double. If their ratio is an integer and the division is not -2147483648.0 / -1.0, the ratio is in the int range, and so exactly representable as double. That double is the closest value to the result of the division, and therefore must be the result of the double division.
This reasoning does not necessarily apply if x and y are integers outside the int range.
The statement is true with the exception of Integer.MIN_VALUE, where the result of dividing two integers may fall outside the int range.
int n = Integer.MIN_VALUE;
int m = -1;
System.out.println(n / m); // -2147483648
System.out.println((int) ((double)n / (double)m)); // 2147483647

Java rounding with a bunch of nines at the end

It looks like BigDecimal.setScale truncates to the scale+1 decimal position and then rounds based on that decimal only.
Is this normal, or is there a clean way to apply the rounding mode to every single decimal?
This outputs: 0.0697
(this is NOT the rounding mode they taught me at school)
double d = 0.06974999999;
BigDecimal bd = BigDecimal.valueOf(d);
bd = bd.setScale(4, RoundingMode.HALF_UP);
System.out.println(bd);
This outputs: 0.0698
(this is the rounding mode they taught me at school)
double d = 0.0697444445;
BigDecimal bd = BigDecimal.valueOf(d);
int scale = bd.scale();
while (4 < scale) {
    bd = bd.setScale(--scale, RoundingMode.HALF_UP);
}
System.out.println(bd);
EDITED
After reading some answers, I realized I messed everything up. I was a bit frustrated when I wrote my question.
So, I'm going to rewrite the question because, even though the answers helped me a lot, I still need some advice.
The problem is:
I need to round 0.06974999999 to 0.0698; that's because I know those many decimals are in fact meant to be 0.06975 (a rounding error in a place not under my control).
So I've been playing around with a kind of "double rounding" which performs the rounding in two steps: first round to some higher precision, then round to the precision needed.
(Here is where I messed up because I thought a loop for every decimal place would be safer).
The thing is that I don't know which higher precision to round to in the first step (I'm using the number of decimals minus 1). Also, I don't know if I could get unexpected results in other cases.
Here is the first way I discarded in favour of the one with the loop, which now looks a lot better after reading your answers:
public static BigDecimal getBigDecimal(double value, int decimals) {
    BigDecimal bd = BigDecimal.valueOf(value);
    int scale = bd.scale();
    if (scale - decimals > 1) {
        bd = bd.setScale(scale - 1, RoundingMode.HALF_UP);
    }
    return bd.setScale(decimals, RoundingMode.HALF_UP);
}
This prints the following results:
0.0697444445 = 0.0697
0.0697499994 = 0.0697
0.0697499995 = 0.0698
0.0697499999 = 0.0698
0.0697444445 = 0.069744445 // rounded to 9 decimals
0.0697499999 = 0.069750000 // rounded to 9 decimals
0.069749 = 0.0698
The questions now are: is there a better way to do this (maybe a different rounding mode), and is this safe to use as a general rounding method?
I need to round many values, and having to choose at runtime between this and the standard approach, depending on the kind of numbers I receive, seems really complex.
Thanks again for your time.
When you are rounding, you look at the value that comes after the last digit you are rounding to. In your first example you are rounding 0.06974999999 to 4 decimal places, so you have 0.0697 followed by 4999999 (essentially 697.4999999). As the rounding mode is HALF_UP and 0.4999999 is less than 0.5, it is rounded down.
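A minimal check of that behaviour (using java.math.BigDecimal and java.math.RoundingMode):
// 697.4999999 is below 697.5, so HALF_UP rounds down at scale 4
System.out.println(new BigDecimal("0.06974999999").setScale(4, RoundingMode.HALF_UP)); // 0.0697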
If the difference between 0.06974999999 and 0.06975 matters so much, you should have switched to BigDecimal a bit sooner. At the very least, if performance is so important, figure out some way to use longs and ints. Doubles and floats are not for people who need to tell the difference between 1.0 and 0.999999999999999. When you use them, information gets lost and there's no certain way to recover it.
(This information can seem insignificant, to put it mildly, but if travelling 1,000,000 yards puts you at the top of a cliff, travelling 1,000,001 yards will put you a yard past the top of a cliff. That one last yard matters. And if you lose 1 penny in a billion dollars, you'll be in even worse trouble when the accountants get after you.)
If you need to bias your rounding you can add a small factor.
e.g. to round up to 6 decimal places.
double d =
double rounded = (long) (d * 1000000 + 0.5) / 1e6;
to add a small factor you need to decide how much extra you want to give. e.g.
double d =
double rounded = (long) (d * 1000000 + 0.50000001) / 1e6;
e.g.
public static void main(String... args) throws IOException {
    double d1 = 0.0697499994;
    double r1 = roundTo4places(d1);
    double d2 = 0.0697499995;
    double r2 = roundTo4places(d2);
    System.out.println(d1 + " => " + r1);
    System.out.println(d2 + " => " + r2);
}
public static double roundTo4places(double d) {
    return (long) (d * 10000 + 0.500005) / 1e4;
}
prints
0.0697499994 => 0.0697
0.0697499995 => 0.0698
The first one is correct.
0.44444444 ... 44445 rounded as an integer is 0.0
only 0.500000000 ... 000 or more is rounded up to 1.0
There is no rounding mode which will round 0.4 down and 0.45 up.
If you think about it, you want an equal chance that a random number will be rounded up or down. If you sum a large enough number of random numbers, the error created by rounding cancels out.
The half up round is the same as
long n = (long) (d + 0.5);
Your suggested rounding is
long n = (long) (d + 5.0/9);
Random r = new Random(0);
int count = 10000000;
// round using half up.
long total = 0, total2 = 0;
for (int i = 0; i < count; i++) {
    double d = r.nextDouble();
    int rounded = (int) (d + 0.5);
    total += rounded;
    BigDecimal bd = BigDecimal.valueOf(d);
    int scale = bd.scale();
    while (0 < scale) {
        bd = bd.setScale(--scale, RoundingMode.HALF_UP);
    }
    int rounded2 = bd.intValue();
    total2 += rounded2;
}
System.out.printf("The expected total of %,d rounded random values is %,d,%n\tthe actual total was %,d, using the biased rounding %,d%n",
        count, count / 2, total, total2);
prints
The expected total of 10,000,000 rounded random values is 5,000,000,
the actual total was 4,999,646, using the biased rounding 5,555,106
http://en.wikipedia.org/wiki/Rounding#Round_half_up
What about trying previous and next values to see if they reduce the scale?
public static BigDecimal getBigDecimal(double value) {
    BigDecimal bd = BigDecimal.valueOf(value);
    BigDecimal next = BigDecimal.valueOf(Math.nextAfter(value, Double.POSITIVE_INFINITY));
    if (next.scale() < bd.scale()) {
        return next;
    }
    next = BigDecimal.valueOf(Math.nextAfter(value, Double.NEGATIVE_INFINITY));
    if (next.scale() < bd.scale()) {
        return next;
    }
    return bd;
}
The resulting BigDecimal can then be rounded to the scale needed. (I can't tell the performance impact of this for a large number of values.)
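A usage sketch; the input is illustrative, chosen because 0.1 + 0.2 produces the double one ulp above 0.3:
System.out.println(BigDecimal.valueOf(0.1 + 0.2)); // 0.30000000000000004
System.out.println(getBigDecimal(0.1 + 0.2)); // 0.3 (the lower neighbour has the smaller scale)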
