How to check if a ≤ b with an epsilon precision - java

So I have two doubles, a and b, and I need to check if a ≤ b with the precision of a given epsilon.
So to check if a == b I need to do this (a and b are doubles, EPSILON is a final double, in my case 0.001):
Math.abs(a - b) < EPSILON
My idea as an answer to my own question is:
a < EPSILON + b
My problem is that with a precision of epsilon, what would be the difference between "just less than" and "less than or equal" in the final result? Maybe someone has a better way to write it?

You don't want to write a < EPSILON + b, because if b is large you might have b == EPSILON + b, and then a < EPSILON + b would fail even if a and b were exactly equal.
(a-b) < EPSILON works.
When comparing numbers with "epsilon precision" you consider numbers to be the same if they are within EPSILON, so "a < b, with epsilon precision" is actually:
(a-b) <= -EPSILON
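Putting both cases together, here is a minimal sketch of the helpers (the names are mine; EPSILON is assumed to be a small positive constant such as the 0.001 from the question):
public final class EpsilonCompare {
    private static final double EPSILON = 0.001;

    // "a == b" with epsilon precision
    static boolean approxEqual(double a, double b) {
        return Math.abs(a - b) < EPSILON;
    }

    // "a <= b" with epsilon precision: a may exceed b by at most EPSILON
    static boolean lessOrApproxEqual(double a, double b) {
        return a - b < EPSILON;
    }

    // strict "a < b" with epsilon precision: a must be smaller by more than EPSILON
    static boolean definitelyLess(double a, double b) {
        return a - b <= -EPSILON;
    }

    public static void main(String[] args) {
        System.out.println(lessOrApproxEqual(1.0005, 1.0)); // true: within tolerance
        System.out.println(definitelyLess(1.0005, 1.0));    // false
        System.out.println(definitelyLess(0.9980, 1.0));    // true
    }
}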

If you can use BigDecimal, then use it, else:
/**
 * @param precision number of decimal digits
 */
public static boolean areEqualDouble(double a, double b, int precision) {
    return Math.abs(a - b) <= Math.pow(10, -precision);
}
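A quick usage example (my own, just to show how the precision parameter behaves):
System.out.println(areEqualDouble(0.1 + 0.2, 0.3, 9));   // true: the difference is about 5.6e-17
System.out.println(areEqualDouble(1.2345, 1.2349, 3));   // true: difference 0.0004 <= 10^-3
System.out.println(areEqualDouble(1.2345, 1.2360, 3));   // false: difference 0.0015 > 10^-3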

UPDATE: this post is wrong, see comments. @matttimmens' answer seems legit. (I still have to check corner cases.)
UPDATE 2: I checked and learned. I thought I'd find corner cases like with integers. In Java, (Integer.MIN_VALUE == (-Integer.MIN_VALUE)) is true. This is because integer types (as in: non-decimal, non-floating point) have asymmetrical ranges:
first positive number is 0, first negative number is -1, and thus
byte: MAX_VALUE is 127, while MIN_VALUE is -128, so MAX_VALUE != -MIN_VALUE and MIN_VALUE != -MAX_VALUE
the same goes for short, int and long
This, mixed together with silent integer overflow, creates problems when working in the extreme corners, especially the off-by-one problem:
final int a = Integer.MIN_VALUE;
final int b = Integer.MAX_VALUE;
final int eps = 1;
System.out.println("a-b = " + (a - b));
System.out.println((a - b) < eps);
a - b results in 1 because of the silent overflow, and 1 < eps is false, even though a is clearly a lot smaller than b.
This is one corner case where the algorithm would fail on integers. Mind: the OP was about doubles.
However, floating-point numbers in Java (float, double), or rather the operations on them, do not wrap around silently on overflow:
(Double.MAX_VALUE + 1) == Double.MAX_VALUE holds true
(Double.MAX_VALUE + Double.MAX_VALUE) == Double.POSITIVE_INFINITY holds true
(-Double.MAX_VALUE + -Double.MAX_VALUE) == Double.NEGATIVE_INFINITY holds true
So before, with integers, the silent +1 or -1 overflow caused a problem.
With doubles, when adding +1 (or -1, or any small number) to a really large value, the change simply gets lost to rounding. Thus the corner cases play no role here, and @matttimmens' solution (a-b) < e is as close to the truth as we can get for floating-point variables, rounding aside.
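A small snippet to verify those three claims (my own illustration):
System.out.println((Double.MAX_VALUE + 1) == Double.MAX_VALUE);                          // true: the +1 is lost to rounding
System.out.println((Double.MAX_VALUE + Double.MAX_VALUE) == Double.POSITIVE_INFINITY);   // true: overflow saturates to infinity
System.out.println((-Double.MAX_VALUE + -Double.MAX_VALUE) == Double.NEGATIVE_INFINITY); // true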
My old, wrong post:
To counter @matttimmens' reply, run this code:
final double a = Double.MIN_VALUE;
final double b = Double.MIN_VALUE + 0;
final double c = Double.MIN_VALUE + 10;
final double e = 0.001;
System.out.println("Matches 1: " + ((a - b) < e));
System.out.println("Matches 2: " + ((b - a) < e));
System.out.println("Matches 3: " + ((a - c) < e));
System.out.println("Matches 4: " + ((c - a) < e));
Match 3 gives a false result, thus his answer is wrong.
The funny thing is that he thought about very big numbers, but left the very small numbers unattended.

Related

Floating point arithmetic - summation - Java DoubleStream

One of the first things we learn in floating point arithmetic is how rounding error plays a crucial role in double summation. Let's say we have an array of doubles, myArray, and we want to find the mean. What we could trivially do is:
double sum = 0.0;
for(int i = 0; i < myArray.length; i++) {
sum += myArray[i];
}
double mean = (double) sum/myArray.length;
However, we would have rounding error. This error can be reduced using other summation algorithms, such as the Kahan algorithm (wiki: https://en.wikipedia.org/wiki/Kahan_summation_algorithm).
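For reference, a minimal sketch of the classic Kahan loop described in that article (my own transcription, not part of the original question):
static double kahanSum(double[] values) {
    double sum = 0.0;
    double c = 0.0;              // running compensation for lost low-order bits
    for (double v : values) {
        double y = v - c;        // apply the correction to the next element
        double t = sum + y;      // low-order bits of y may be lost here...
        c = (t - sum) - y;       // ...and are recovered into c
        sum = t;
    }
    return sum;
}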
I have recently discovered Java Streams (refer to: https://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html) and in particular DoubleStream (see: https://docs.oracle.com/javase/8/docs/api/java/util/stream/DoubleStream.html).
With the code:
double sum = DoubleStream.of(myArray).parallel().sum();
double average = (double) sum/myArray.length;
we can get the average of our array. Two advantages are remarkable in my opinion:
More concise code
Faster as it is parallelized
Of course we could also have done something like:
double average = DoubleStream.of(myArray).parallel().average().orElse(Double.NaN); // average() returns an OptionalDouble
but I wanted to stress the summation.
At this point I have a question (which the API docs didn't answer): is this sum() method numerically stable? I have done some experiments and it appears to work fine, but I am not sure it is at least as good as the Kahan algorithm. Any help is really welcome!
The documentation says:
Returns the sum of elements in this stream. Summation is a special
case of a reduction. If floating-point summation were exact, this
method would be equivalent to:
return reduce(0, Double::sum);
However, since floating-point summation is not exact, the above code
is not necessarily equivalent to the summation computation done by
this method.
Have you considered using BigDecimal to get exact results?
Interesting, so I implemented Klein's variant of the Kahan algorithm mentioned in the Wikipedia article, and a Stream version of it.
The results are not convincing.
double[] values = new double[10_000];
Random random = new Random();
Arrays.setAll(values, (i) -> Math.atan(random.nextDouble()*Math.PI*2) * 3E17);
long t0 = System.nanoTime();
double sum1 = DoubleStream.of(values).sum();
long t1 = System.nanoTime();
double sum2 = DoubleStream.of(values).parallel().sum();
long t2 = System.nanoTime();
double sum3 = kleinSum(values);
long t3 = System.nanoTime();
double sum4 = kleinSumAsStream(values);
long t4 = System.nanoTime();
System.out.printf(
"seq %f (%d ns)%npar %f (%d ns)%nkah %f (%d ns)%nstr %f (%d ns)%n",
sum1, t1 - t0,
sum2, t2 - t1,
sum3, t3 - t2,
sum4, t4 - t3);
And a non-stream version of the modified Kahan algorithm:
public static double kleinSum(double[] input) {
    double sum = 0.0;
    double cs = 0.0;
    double ccs = 0.0;
    for (int i = 0; i < input.length; ++i) {
        double t = sum + input[i];
        double c = Math.abs(sum) >= Math.abs(input[i])
                ? (sum - t) + input[i]
                : (input[i] - t) + sum;
        sum = t;
        t = cs + c;
        double cc = Math.abs(cs) >= Math.abs(c)
                ? (cs - t) + c
                : (c - t) + cs;
        cs = t;
        ccs += cc;
    }
    return sum + cs + ccs;
}
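A quick sanity check of the correction terms (my own example): a naive loop loses the 1.0 completely, while kleinSum recovers it.
double[] demo = {1e16, 1.0, -1e16};
double naive = 0.0;
for (double v : demo) {
    naive += v;               // 1e16 + 1.0 rounds back to 1e16, so the 1.0 is lost
}
System.out.println(naive);            // 0.0
System.out.println(kleinSum(demo));   // 1.0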
A Stream version:
public static double kleinSumAsStream(double[] input) {
    double[] scc = DoubleStream.of(input)
            .boxed()
            .reduce(new double[3],
                    (sumCsCcs, x) -> {
                        double t = sumCsCcs[0] + x;
                        double c = Math.abs(sumCsCcs[0]) >= Math.abs(x)
                                ? (sumCsCcs[0] - t) + x
                                : (x - t) + sumCsCcs[0];
                        sumCsCcs[0] = t;
                        t = sumCsCcs[1] + c;
                        double cc = Math.abs(sumCsCcs[1]) >= Math.abs(c)
                                ? (sumCsCcs[1] - t) + c
                                : (c - t) + sumCsCcs[1];
                        sumCsCcs[1] = t;
                        sumCsCcs[2] += cc;
                        return sumCsCcs;
                    },
                    (scc1, scc2) -> new double[] {
                            scc2[0] + scc1[0],
                            scc2[1] + scc1[1],
                            scc2[2] + scc1[2]});
    return scc[0] + scc[1] + scc[2];
}
Mind that the timings would only count as evidence if a proper microbenchmark harness had been used.
However one still sees the overhead of a DoubleStream:
sequential 3363280744568882000000,000000 (5083900 ns)
parallel 3363280744568882500000,000000 (4492600 ns)
klein 3363280744568882000000,000000 (1051600 ns)
kleinStream 3363280744568882000000,000000 (3277500 ns)
Unfortunately I did not manage to properly provoke floating-point errors, and it is getting late for me.
Using a Stream instead of kleinSum needs a reduction over at least two doubles (sum and correction), so a double[] as above or, in newer Java, a record(double sum, double cs, double ccs) value.
A far less magical auxiliary approach is to sort the input by magnitude.
float (used for readability reasons only; double has a precision limit too, as shown later) has a 24-bit mantissa (of which 23 bits are stored, and the 24th is an implicit 1 for "normal" numbers), so once you have the number 2^24 you simply can't add 1 to it: the smallest increment it has is 2:
float f=1<<24;
System.out.println(Float.valueOf(f).intValue());
f++;
f++;
System.out.println(Float.valueOf(f).intValue());
f+=2;
System.out.println(Float.valueOf(f).intValue());
will display
16777216
16777216 <-- 16777216+1+1
16777218 <-- 16777216+2
while summing them in the other direction works
float f=0;
System.out.println(Float.valueOf(f).intValue());
f++;
f++;
System.out.println(Float.valueOf(f).intValue());
f+=2;
System.out.println(Float.valueOf(f).intValue());
f+=1<<24;
System.out.println(Float.valueOf(f).intValue());
produces
0
2
4
16777220 <-- 4+16777216
(Of course the pair of f++s is intentional: 16777219 does not exist, just like 16777217 in the previous case. These are not incomprehensibly huge numbers, yet a line as simple as System.out.println((int)(float)16777219); already prints 16777220.)
The same applies to double, just with 53 bits of precision.
Two things:
the documentation actually suggests this: "API Note: Elements sorted by increasing absolute magnitude tend to yield more accurate results"
sum() internally ends up in Collectors.sumWithCompensation(), which explicitly states that it is an implementation of Kahan summation. (The GitHub link is JetBrains' mirror, because Java itself uses a different source control that is a bit harder to find and link - but the file is present in your JDK too, inside src.zip, usually located in the lib folder.)
Ordering by magnitude is something like ordering by log(abs(x)), which is a bit uglier in code, but possible:
double t[]= {Math.pow(2, 53),1,-1,-Math.pow(2, 53),1};
System.out.println(DoubleStream.of(t).boxed().collect(Collectors.toList()));
t=DoubleStream.of(t).boxed()
.sorted((a,b)->(int)(Math.log(Math.abs(a))-Math.log(Math.abs(b))))
.mapToDouble(d->d)
.toArray();
System.out.println(DoubleStream.of(t).boxed().collect(Collectors.toList()));
will print an okay order
[9.007199254740992E15, 1.0, -1.0, -9.007199254740992E15, 1.0]
[1.0, -1.0, 1.0, 9.007199254740992E15, -9.007199254740992E15]
So it's nice, but you can actually break it with little effort (the first few lines show that 2^53 really is the "integer limit" for double, and also "reminds" us of the actual value, then the sum with a single +1 ends up being less than 2^53):
double d=Math.pow(2, 53);
System.out.println(Double.valueOf(d).longValue());
d++;
d++;
System.out.println(Double.valueOf(d).longValue());
d+=2;
System.out.println(Double.valueOf(d).longValue());
double array[]= {Math.pow(2, 53),1,1,1,1};
for(var i=0;i<5;i++) {
var copy=Arrays.copyOf(array, i+1);
d=DoubleStream.of(copy).sum();
System.out.println(i+": "+Double.valueOf(d).longValue());
}
produces
9007199254740992
9007199254740992 <-- 9007199254740992+1+1
9007199254740994 <-- 9007199254740992+2
0: 9007199254740992
1: 9007199254740991 <-- that would be 9007199254740992+1 with Kahan
2: 9007199254740994
3: 9007199254740996 <-- "rounding" upwards, just like with (float)16777219 earlier
4: 9007199254740996
TL;DR: you don't need your own Kahan implementation, but use computers with care in general.

Modulus Operation Returning Weird Results

This program divides a number and calculates its quotient and remainder. But I'm getting odd results for the modulus operation.
public String operater(int arg1, int arg2) throws IllegalArgumentException
{
int quotient;
int remainder;
String resString;
// Check for Divide by 0 Error.
if(arg2 == 0)
{
throw new IllegalArgumentException("Illegal Argument!");
}
else
{
quotient = arg1 / arg2;
remainder = arg1 % arg2;
resString = "Quotient: " + Integer.toString(quotient) +
Remainder: " + Integer.toString(remainder);
}
return resString;
}
58585 / -45 gives the quotient as -1301 and the remainder as 40, but Google says that 58585 % -45 = -5. I think the reason is that there are special rules for dealing with signs when doing division.
From Modulo Operations:
"However, this still leaves a sign ambiguity if the remainder is
nonzero: two possible choices for the remainder occur, one negative
and the other positive, and two possible choices for the quotient
occur. Usually, in number theory, the positive remainder is always
chosen, but programming languages choose depending on the language and
the signs of a and/or n.[6] Standard Pascal and ALGOL 68 give a
positive remainder (or 0) even for negative divisors, and some
programming languages, such as C90, leave it to the implementation
when either of n or a is negative. See the table for details. a modulo
0 is undefined in most systems, although some do define it as a."
I want to fix my program but, I don't understand what that means.
It depends on what you want. In math, and in some programming languages, if the modulo is not zero it has the same sign as the divisor, treating integer division as truncating towards negative infinity. In other programming languages, if the modulo is not zero it has the same sign as the dividend, treating integer division as truncating towards zero. Some programming languages include both a modulo operator (sign same as the divisor) and a remainder operator (sign same as the dividend).
With the mathematical type of modulo, r = (a + k*b) % b returns the same value for r regardless of whether k is negative, zero, or positive. It also means that there are only b possible values for any dividend modulo b, as opposed to the other case, where there are 2*b - 1 possible values for a dividend modulo b, depending on the sign of the dividend.
C example to make modulo work the way it does in mathematics:
int modulo(int n, int p)
{
int r = n%p;
if(((p > 0) && (r < 0)) || ((p < 0) && (r > 0)))
r += p;
return r;
}
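In Java specifically, the divisor-signed behaviour is available out of the box through Math.floorMod (my addition, not part of the answer above):
System.out.println(58585 % -45);                 // 40  (% follows the sign of the dividend)
System.out.println(Math.floorMod(58585, -45));   // -5  (follows the sign of the divisor, the result Google reports)
System.out.println(-7 % 3);                      // -1
System.out.println(Math.floorMod(-7, 3));        // 2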

Java integer division doesn't give floor for negative numbers

I was trying to use Java's integer division, which supposedly takes the floor. However, it rounds towards zero instead of taking the floor.
public class Main {
public static void main(String[] args) {
System.out.println(-1 / 100); // should be -1, but is 0
System.out.println(Math.floor(-1d/100d)); // correct
}
}
The problem is that I do not want to convert to a double/float because it needs to be efficient. I'm trying to solve this with a method, floorDivide(long a, long b). What I have is:
static long floorDivide(long a, long b) {
if (a < 0) {
// what do I put here?
}
return a / b;
}
How can I do this without a double/float?
There is floorDiv() in java.lang.Math that does exactly what you want:
static long floorDiv(long x, long y)
Returns the largest (closest to positive infinity) long value that is less than or equal to the algebraic quotient.
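For example (my own illustration):
System.out.println(-1 / 100);                 // 0   (truncates toward zero)
System.out.println(Math.floorDiv(-1, 100));   // -1  (rounds toward negative infinity)
System.out.println(Math.floorDiv(-1L, 100L)); // -1  (the long overload quoted above)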
Take the absolute value, divide it, multiply it by -1.
Weird bug.
You can use
int i = (int) Math.round(doubleValToRound);
Math.round returns a value that you can cast to an int without loss of precision and without performance problems (casts don't have a great computational cost).
Also it's equivalent to
int a = (int) (doubleValToRound + 0.5);
//in your case
System.out.println((int) ((-1d / 100) + 0.5));
With this last one you won't have to enter into tedious and unnecessary "if" instructions. Like a good suit, it's valid for every occasion and is more easily ported to other languages.
This is ugly, but meets the requirement to not use a double/float. Really you should just cast it to a double.
The logic here is to take the floor of a negative result from the integer division when it doesn't divide evenly.
static long floorDivide(long a, long b)
{
if(a % b != 0 && ((a < 0 && b > 0) || (a > 0 && b < 0)))
{
return (a / b - 1);
}
else
{
return (a / b);
}
}
Just divide the two integers, then add -1 to the result when the signs differ and the division is not exact. For example, -3/3 already gives -1, the right answer, without adding -1 to the division.
A bit late, but you need to convert your operands to double:
int result = (int) Math.floor( (double) -1 / 5 );
// result == -1
This worked for me elegantly.
I would use floorDiv() for a general case, as Frank Harper suggested.
Note, however, that when the divisor is a power of 2, the division is often substituted by a right shift by an appropriate number of bits, i.e.
x / d
is the same as
x >> p
when p = 0,1,...,30 (or 0,1,...,62 for long), d = 2^p and x is non-negative. This is not only more efficient than ordinary division, but it also gives the right result (in the mathematical sense) when x is negative.
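A small check of that claim (my own example):
int x = -7;
System.out.println(x / 4);               // -1 (truncated toward zero)
System.out.println(x >> 2);              // -2 (floor division by 2^2)
System.out.println(Math.floorDiv(x, 4)); // -2 (matches the shift)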

Is short-circuit evaluation possible in multiplication?

Suppose I have this code in Java:
public static double function(double x, double y, int k) {
return Math.pow(x, 2) + Math.pow(y, 2) + y + k*Math.sqrt(Math.pow(y, x));
}
It calculates some function at a certain point (x, y). Notice that the square root is multiplied by the integer k. There will be instances where I pass k = 0 (because I don't need the square root evaluated). It gives the value I need, but the problem is that I am writing a time-sensitive program, i.e. I will call the method function many, many times. So I want my program not to evaluate Math.sqrt(Math.pow(y, x)) when k = 0.
I googled a bit, but there doesn't seem to be a 'short-circuit' equivalent for arithmetic operations (well, in many cases it doesn't even make sense, with multiplication possibly being an exception) as there is for logical operations.
How could I achieve the desired result?
I think adding a ternary operator at the end will avoid the Math.sqrt(Math.pow(y, x)) computation, as shown below:
public static double function(double x, double y, int k) {
return Math.pow(x, 2) + Math.pow(y, 2) + y
+ ( k!=0 ? k*Math.pow(y, x/2) : 0); //ternary operator here
}
There isn't a short-circuiting multiplication operator because math just doesn't work that way. What you could do is something like
result = Math.pow(x, 2) + Math.pow(y, 2) + y;
if (k != 0)
result += k*Math.sqrt(Math.pow(y, x));
return result;
You can achieve this result by doing
k == 0 ? 0 : k*Math.sqrt(Math.pow(y, x))
This is not equivalent to
k*Math.sqrt(Math.pow(y, x))
though, because the shorter version can produce NaN even when k == 0.
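The NaN case is easy to reproduce (my own snippet): 0 times NaN is still NaN, so the plain product does not hide the square root's effect even when k is 0.
double x = 3.0, y = -3.0;                                        // Math.pow(-3, 3) = -27
int k = 0;
System.out.println(k * Math.sqrt(Math.pow(y, x)));               // NaN: 0 * NaN is still NaN
System.out.println(k == 0 ? 0 : k * Math.sqrt(Math.pow(y, x)));  // 0.0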

Java get fraction method

I am trying to make a method that returns the fractional value of a decimal number.
//these are included in a class called 'equazioni'
private static long getMCD(long a, long b) {
    if (b == 0) {
        return a;
    } else {
        return getMCD(b, a % b);   // the recursive result must be returned, otherwise this doesn't compile
    }
}
public String getFraction(double a) {
String s = String.valueOf(a);
int decimali = s.length() - 1 - s.indexOf('.');
int den = 1;
for(int i = 0; i < decimali; i++){
a *= 10;
den *= 10;
}
int num = (int) Math.round(a);
long mcd = getMCD(num,den);
return String.valueOf(num/mcd) + "/" + String.valueOf(den/mcd);
}
These 2 methods work perfectly with most values. For example, with 8.65 it returns 173/20, and with 78.24 it returns 1956/25. It's called this way:
equazioni eq = new equazioni(a,b,c);
jTextField4.setText("8.65= " + eq.getFraction(8.65));
I am having trouble with fractions like 2/3, 5/18, 6/45... because the denominator is divisible by 3, so the decimal expansion is periodic. How could I represent it?
My problem is also "How can I recognize that it's a periodic number?". I thought I could do something like this:
int test = den % 3;
If the denominator is divisible by 3, then I have to generate a fraction in a particular way. Any suggestion?
If I understood your question correctly, I am afraid it does not have a complete solution. Since a floating-point value is stored with a finite number of bits, not all fractions can be represented, especially the ones that are not terminating decimals, like 2/3. Even among terminating decimals, not all of them can be represented.
In other words, your method will never be called with the floating-point representation of 2/3 as input, since such a representation does not exist. You might be called with 0.66666666 (up to whatever number of digits a double can hold), but that is not 2/3...
See this link for more details about floating point representation in Java: http://introcs.cs.princeton.edu/java/91float/
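A quick way to see the value that is actually stored (my own snippet; the BigDecimal(double) constructor exposes the exact binary value):
// requires: import java.math.BigDecimal;
double twoThirds = 2.0 / 3.0;
System.out.println(twoThirds);                  // 0.6666666666666666 (the shortest string that round-trips)
System.out.println(new BigDecimal(twoThirds));  // the exact stored value, roughly 0.66666666666666662..., not an endless run of 6s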
