Can someone explain the Math.ulp(double) method? - java

I haven't been able to find any information online that doesn't already assume I know things. Does anyone know of any good resources I could look into to help me wrap my head around what this function does exactly?
From what I gather (and I'm pretty certain this is wrong, or at least not fully right), given a floating-point number, it determines the distance between that number and the next one in some sequence? There appears to be something to do with how numbers are represented bitwise, but the sources I've read never explicitly said anything about that.

http://matlabgeeks.com/tips-tutorials/floating-point-comparisons-in-matlab/
illustrated it rather well:
float2bin(A)
//ans = 0011111110111001100110011001100110011001100110011001100110100000
float2bin(B)
//ans = 0011111110111001100110011001100110011001100110011001100110011010
You can see the difference in precision at a binary level in this example. A and B differ by 6 ulps (units in the last place)

I believe it is showing the distance between the number you specify and the next larger binary float that can be encoded.
Because of the range that binary floating-point numbers cover and their limited precision, not every real number between any two given values is representable. So the ulp is the gap between adjacent representable values around your number, and it bounds the rounding error: the number you wish to encode is at most half an ulp away from the value it is actually stored as.
From Wikipedia:
the unit of least precision (ULP) is the spacing between floating-point numbers, i.e., the value the least significant digit represents if it is 1
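Concretely, Math.ulp(d) returns the spacing between d and its neighbouring doubles, and Math.nextUp(d) gives the next representable double above d. A small sketch (class name is mine):

```java
public class UlpDemo {
    public static void main(String[] args) {
        // ulp(1.0) is 2^-52: the gap between 1.0 and the next double above it
        System.out.println(Math.ulp(1.0));   // 2.220446049250313E-16
        // the gap grows with magnitude
        System.out.println(Math.ulp(1.0e9));
        // Math.nextUp(d) is exactly d + Math.ulp(d) for positive finite d
        System.out.println(Math.nextUp(1.0) - 1.0 == Math.ulp(1.0));   // true
    }
}
```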

Related

subtracting Float.MIN_VALUE from another float number has no effect in android application

I am working on an android application where for a part of the app, I have 2 floating point values and I cannot have them be exactly the same because this is causing a bug in one of my screens. Those numbers are being sent from a server and are out of my control (e.g. I cannot force them to be different).
The app is written in Kotlin, but I assume that this issue is similar (if not exactly the same) for Java, as Kotlin uses the JVM behind the scenes.
I thought of a "creative" way of solving this without changing my logic too much, by subtracting Float.MIN_VALUE from one of them, making them almost, but not exactly the same. What I actually need to happen is for if(a == b) to fail (where b is actually a - Float.MIN_VALUE).
But to my surprise, when the code runs, if(a == b) returns true. When I opened the "evaluate" window in Android Studio here is what I found out:
Let me reiterate that currentPayment is a Float, so there shouldn't be any auto-conversions or rounding going on (like when dividing Float by an Int for example).
Let me also point out that I can guarantee that currentPayment is not
Float.MAX_VALUE or -Float.MAX_VALUE (so the result of the operation is within the bounds of Float).
According to docs and this answer Float.MIN_VALUE is the smallest positive non-zero value of Float and has a value of 1.4E-45, which is confirmed here:
I also saw in another post (which I cannot find again for some reason), that this can also be thought of as the maximum precision of Float, which makes sense.
Since currentPayment is a Float, I would expect it to be able to hold any floating point value within the bounds of Float to its maximum precision (i.e. Float.MIN_VALUE).
Therefore I would expect currentPayment to NOT equal currentPayment - Float.MIN_VALUE, but alas that is not the case.
Can anyone explain this please?
Since currentPayment is a Float, I would expect it to be able to hold any floating point value within the bounds of Float to its maximum precision (i.e. Float.MIN_VALUE).
This is a wrong assumption. Float is called "float" because it has a floating (binary) point: the amount of precision depends on how big the number you're storing is. The smallest possible float value is smaller than the precision of almost any other representable number, so it is too small to affect them if you add or subtract it. At the high end, the gaps between consecutive Float values are much greater than the integer 1. If you subtract 999,000,000 from Float.MAX_VALUE, it will still return Float.MAX_VALUE, because the precision is so poor at the highest end.
Also, since floating-point numbers are not stored in base 10, they are inappropriate for storing currency amounts: most decimal fractions cannot be represented exactly. (I mention that because your variable name has the word "payment" in it, which is a red flag.)
You should use BigDecimal, or a Long or Int count of the smallest currency unit (e.g. cents), to represent currency, so your amounts and arithmetic stay exact.
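To see the absorption directly, and to get the "next float down" the asker actually wants, Math.nextDown is the right tool (the value 100.0f is a made-up stand-in for currentPayment):

```java
public class SubnormalDemo {
    public static void main(String[] args) {
        float a = 100.0f;  // hypothetical stand-in for currentPayment
        // Float.MIN_VALUE (1.4E-45) is far below the precision of 100.0f,
        // so the subtraction rounds straight back to a
        System.out.println(a - Float.MIN_VALUE == a);   // true
        // Math.nextDown returns the largest float strictly less than a
        float b = Math.nextDown(a);
        System.out.println(b == a);                     // false
    }
}
```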
Edit:
Here's an analogy to help, since binary numbers are hard to contemplate. Floats are 32 bits in Java and Kotlin, but imagine we have a special kind of computer that can store a floating point number in base 10. Each "bit" on this computer is not just 0 or 1, but a digit from 0 to 9. A Float on this computer has 4 digits and a decimal place, but the decimal place is floating: it can sit anywhere relative to the four digits. So a Float on this computer is always five bits--four of the bits are the digits, and the fifth bit tells you where the decimal place goes.
In this imaginary computer's Float, the smallest possible positive number that can be represented is .0001 and the largest is 9999. You can't represent 9999.5 or even 1000.5 because there aren't enough digits available. There's no fixed amount of precision--the precision is determined by where the decimal place sits in the current number. Precision is better for numbers whose decimal place is farther to the left.
For the number storage format to have a fixed precision, we would have to fix the decimal point in one place for all numbers. Suppose we chose a precision of 0.001. The fifth bit that used to tell us where the decimal place goes can now just be a fifth digit. We now always know the precision is 0.001, but the largest possible number we can represent is 99.999 and the smallest is 0.001--a much smaller range than with floating point. This limitation is the reason floating point is used instead.

How to get exact value of Trigonometric functions in Java

This is not a duplicate of this, or this.
I am developing a calculator application for Android, and I have been searching the web for the past 20-30 days without finding any reasonable answer. I have also studied many papers on floating-point computation.
I have tried both the Math and StrictMath libraries.
These are the values I have tried:
Math.cos(Math.PI/4) result in 0.7071067811865476 which is correct answer
Math.cos(Math.PI/2) result in 6.123233995736766E-17 correct answer is 0
Math.cos(Math.PI) result in -1.0 which is correct
Math.cos((3*Math.PI)/2) result in -1.8369701987210297E-16 correct answer is 0
Math.cos(Math.PI*2) result in 1.0 which is correct
Math.sin(Math.PI/4) result in 0.7071067811865476 which is correct answer
Math.sin(Math.PI/2) result in 1 which is correct
Math.sin(Math.PI) result in 1.2246467991473532E-16 correct answer is 0
Math.sin((3*Math.PI)/2) result in -1 which is correct
Math.sin(Math.PI*2) result in -2.4492935982947064E-16 correct answer is 0
Math.tan(Math.PI/4) result in 0.999999999999999 correct answer is 1
Math.tan(Math.PI/2) result in 1.633123935319537E16 correct answer is NaN
Math.tan(Math.PI) result in -1.2246467991473532E-16 correct answer is 0
Math.tan((3*Math.PI)/2) result in 5.443746451065123E15 correct answer is NaN
Math.tan(Math.PI*2) result in -2.4492935982947064E-16 correct answer is 0
When I tried all these calculations on Google's official calculator (the one included in stock Lollipop), it yielded all correct answers except for tan((3*PI)/2) and tan(PI/2).
When I tried all these calculation on my Casio fx-991 PLUS all answers were correct.
Now my question is: "How do Google's calculator and Casio's calculator manage to get correct answers using the limited floating-point precision of the CPU?" and "How can I achieve the same output?"
I am skeptical of many of the "correct answer" values you give. sin(pi) is 0, but Math.PI is not pi; it is an approximation, and the sine of something that is merely close to pi shouldn't give you 0.

How is the user entering the values? If the user enters a decimal input with 16 decimal places, they should expect some results to be off in the 16th decimal place. If a user asks for sin(10^-15) and you change the input to 0 and return a result of 0, then you make it impossible for the user to compute a numerical derivative of sin x at 0 via (sin(10^-15)-sin(0))/(10^-15-0). The same is true if the user enters an approximation like Math.PI and you change the input to pi.
As Bryan Reilly answered, you can round results before presenting them to the user, and this will avoid showing a value like 5*10^-15 instead of 0.
You can shift the inputs to ranges near 0. For example, for values of x greater than pi or less than -pi, you can subtract off a multiple of 2pi to get a value in [-pi,pi]. Then you can use trig identities to reduce the domain you need further, to [0,pi/2]. For example, if x is in [-pi,-pi/2], then use sin(x) = -sin(x+pi).
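A minimal sketch of the first reduction step, using Math.IEEEremainder (the helper and class names are mine; the fold inherits the usual rounding error of 2*Math.PI):

```java
public class RangeReduce {
    // fold x into [-pi, pi] before evaluating trig functions
    static double reduceToPi(double x) {
        // IEEEremainder(x, y) = x - round(x/y)*y, so the result
        // has magnitude at most y/2 = pi
        return Math.IEEEremainder(x, 2.0 * Math.PI);
    }

    public static void main(String[] args) {
        double x = 7.5 * Math.PI;           // well outside [-pi, pi]
        double r = reduceToPi(x);
        System.out.println(r);              // a value in [-pi, pi]
        System.out.println(Math.sin(r));    // close to Math.sin(x)
    }
}
```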
If any roundoff errors at all are unacceptable, then perhaps you should make a symbolic calculator instead of a floating point calculator.
What is most likely is that Google's and Casio's calculators simply round the result to zero if its magnitude is smaller than, say, 1.0E-14.
Something similar can be done if the number is too large.
Floating-point inaccuracies are hard to deal with, but rounding is the most common way to paper over them.
And although it seems like you know what is happening under the hood, this may help you:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
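A sketch of that display-side rounding (the 1e-14 threshold is the arbitrary cutoff suggested above; the names are mine):

```java
public class SnapToZero {
    // round results whose magnitude is below a display threshold to zero
    static double snap(double x) {
        return Math.abs(x) < 1e-14 ? 0.0 : x;
    }

    public static void main(String[] args) {
        System.out.println(snap(Math.sin(Math.PI)));   // 0.0 instead of 1.2246...E-16
        System.out.println(snap(Math.cos(Math.PI)));   // -1.0, left untouched
    }
}
```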
Maybe they use something like the BigDecimal class. (Sorry, I'm not yet allowed to leave a simple comment.)
For example, yesterday I created a simple Swing application that converted decimal to unsigned binary and vice versa. The problem is that what the user enters in a text box is a String, and the value it represents can easily exceed even Long.MAX_VALUE.
What I wanted was for the user to enter as many digits as desired in both directions, so I used the BigInteger class, which is similar to BigDecimal but suited to integer values. With it, users can enter strings of incredible length, and my program can output conversions with as many or even more digits, far exceeding Long.MAX_VALUE.
However, since the Math functions use doubles, we are still limited even with the 'Big' classes, which initialize from doubles and strings. If your application can represent numbers as strings (which can be up to Integer.MAX_VALUE characters long), the 'Big' classes are great. If you have to initialize from a double, you are obviously bound by the constraints of double.
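The BigInteger round-trip described above is just its radix constructor and toString(radix); a sketch:

```java
import java.math.BigInteger;

public class BinDemo {
    public static void main(String[] args) {
        // decimal string -> binary string, with no limit from long
        String dec = "18446744073709551616";   // 2^64, beyond Long.MAX_VALUE
        String bin = new BigInteger(dec).toString(2);
        System.out.println(bin);
        // binary string -> decimal string
        System.out.println(new BigInteger(bin, 2).toString());
    }
}
```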

Why does for loop using a double fail to terminate

I'm looking through old exam questions (I'm currently in my first year of uni) and I'm wondering if someone could explain a bit more thoroughly why the following for loop does not terminate when it is supposed to. Why does this happen? I understand that it skips 100.0 because of a rounding error or something, but why?
for (double i = 0.0; i != 100; i = i + 0.1) {
    System.out.println(i);
}
The number 0.1 cannot be exactly represented in binary, much like 1/3 cannot be exactly represented in decimal, as such you cannot guarantee that:
0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1==1
This is because in binary:
0.1=(binary)0.00011001100110011001100110011001....... forever
However, a double cannot hold infinite precision, so just as we approximate 1/3 as 0.3333333, the binary representation must approximate 0.1.
Expanded decimal analogy
In decimal you may find that
1/3+1/3+1/3
=0.333+0.333+0.333
=0.999
This is exactly the same problem. It should not be seen as a weakness of floating point numbers as our own decimal system has the same difficulties (but for different numbers, someone with a base-3 system would find it strange that we struggled to represent 1/3). It is however an issue to be aware of.
Demo
A live demo provided by Andrea Ligios shows these errors building up.
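The build-up can be reproduced in a few lines (class name is mine):

```java
public class TenthSum {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;   // each 0.1 is already a slightly-off approximation
        }
        System.out.println(sum);          // 0.9999999999999999
        System.out.println(sum == 1.0);   // false
    }
}
```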
Computers (at least current ones) work with binary data, and there is a limit on the word length their arithmetic logic units can process (32 bits, 64 bits, etc.).
Representing integers in binary form is simple; we can't say the same for floating-point numbers.
Floating-point values are represented according to IEEE-754, the de facto standard adopted by processor manufacturers and software alike, which is why it is important for everyone to know about it.
The maximum value of a double in Java (Double.MAX_VALUE) is 1.7976931348623157E308 (>10^307). With only 64 bits, huge numbers can be represented; the problem is precision.
Since the '==' and '!=' operators compare numbers bit-for-bit, in your case 0.1+0.1+0.1 is not equal to 0.3 in terms of the bits they are represented with.
In conclusion: to fit a huge range of floating-point numbers into a few bits, clever engineers decided to sacrifice precision. If you are working with floating point, you shouldn't use '==' or '!=' unless you are sure what you are doing.
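When a comparison is still needed, the usual workaround is a tolerance check instead of '==' (the epsilon value is an arbitrary, application-dependent choice; names are mine):

```java
public class ApproxEquals {
    // compare doubles within an absolute tolerance rather than bit-for-bit
    static boolean nearlyEqual(double a, double b, double eps) {
        return Math.abs(a - b) <= eps;
    }

    public static void main(String[] args) {
        double x = 0.1 + 0.1 + 0.1;
        System.out.println(x == 0.3);                    // false
        System.out.println(nearlyEqual(x, 0.3, 1e-9));   // true
    }
}
```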
As a general rule, never iterate with a double, due to rounding errors (0.1 may look nice when written in base 10, but try writing it in base 2--which is what double uses). What you should do is iterate with a plain int variable and calculate the double from it:
for (int i = 0; i < 1000; i++) {
    System.out.println(i / 10.0);
}
First of all, I'm going to explain some things about doubles. This will actually take place in base ten for ease of understanding.
Take the value one-third and try to express it in base ten. You get 0.3333333333333.... Let's say we need to round it to 4 places. We get 0.3333. Now, let's add another 1/3. We get 0.6666333333333.... which rounds to 0.6666. Let's add another 1/3. We get 0.9999, not 1.
The same thing happens with base two and one-tenth. Since you're stepping by 0.1 (base ten), and 0.1 is a repeating value in binary (like 0.1666666... is in base ten), you'll have just enough error to miss one hundred when you do get there.
1/2 can be represented in base ten just fine, and 1/5 can as well. This is because the prime factors of the denominator are a subset of the factors of the base. This is not the case for one third in base ten or one tenth in base two.
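One way to see this concretely: the BigDecimal(double) constructor preserves the exact binary value of a double, so it reveals what the literal 0.1 really stores (class name is mine):

```java
import java.math.BigDecimal;

public class ExactTenth {
    public static void main(String[] args) {
        // new BigDecimal(double) keeps the double's exact binary value
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
        // whereas new BigDecimal("0.1") holds the decimal value exactly
        System.out.println(new BigDecimal("0.1"));
    }
}
```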
It should be for (double a = 0.0; a < 100.0; a = a + 0.1) -- comparing with < instead of != guarantees the loop terminates even though a never lands exactly on 100.0. Try that instead.

Calculating geometric mean of a long list of random doubles

So, I came across a problem today in my construction of a restricted Boltzmann machine that should be trivial, but seems to be troublingly difficult. Basically I'm initializing 2k values to random doubles between 0 and 1.
What I would like to do is calculate the geometric mean of this data set. The problem I'm running into is that since the data set is so long, multiplying everything together will always result in zero, and doing the proper root at every step will just rail to 1.
I could potentially chunk the list up, but I think that's really gross. Any ideas on how to do this in an elegant way?
In theory I would like to extend my current RBM code to closer to 15k+ entries, and to run the RBM across multiple threads. Sadly this rules out Apache Commons Math (its geometric-mean method is not synchronized), and longs.
Wow, using a big decimal type is way overkill!
Just take the logarithm of everything, find the arithmetic mean, and then exponentiate.
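A sketch of the log-based approach just described (strictly positive inputs assumed; names are mine):

```java
public class GeoMeanLog {
    // geometric mean via logs: exp(mean(log(x_i))),
    // which avoids underflow of the raw product
    static double geometricMean(double[] xs) {
        double logSum = 0.0;
        for (double x : xs) {
            logSum += Math.log(x);   // assumes x > 0
        }
        return Math.exp(logSum / xs.length);
    }

    public static void main(String[] args) {
        System.out.println(geometricMean(new double[] {1.0, 2.0, 4.0}));  // ~2.0
    }
}
```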
Mehrdad's logarithm solution certainly works. You can do it faster (and possibly more accurately), though:
1. Compute the sum of the exponents of the numbers; call it S.
2. Slam all of the exponents to zero so that each number is between 1/2 and 1.
3. Group the numbers into bunches of at most 1000.
4. For each group, compute the product of the numbers. This will not underflow.
5. Add the exponent of each product to S and slam the exponent to zero.
6. You now have about 1/1000 as many numbers. Repeat steps 3-5 until only one number remains.
Call the one remaining number T. The geometric mean is T^(1/N) * 2^(S/N), where N is the size of the input.
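A sketch of this exponent-stripping scheme using Math.getExponent and Math.scalb; my simplification is to renormalize the running product after every multiply instead of in groups of 1000, which follows the same idea:

```java
public class GeoMeanScalb {
    // geometric mean by separating each number into mantissa * 2^exponent,
    // so the running product never underflows
    static double geometricMean(double[] xs) {
        long expSum = 0;     // S: accumulated powers of two
        double prod = 1.0;   // product of mantissas, kept near [0.5, 1)
        for (double x : xs) {
            int e = Math.getExponent(x) + 1;   // x = m * 2^e with m in [0.5, 1)
            expSum += e;
            prod *= Math.scalb(x, -e);
            int pe = Math.getExponent(prod) + 1;  // renormalize the product
            expSum += pe;
            prod = Math.scalb(prod, -pe);
        }
        // T^(1/N) * 2^(S/N)
        return Math.pow(prod, 1.0 / xs.length)
             * Math.pow(2.0, (double) expSum / xs.length);
    }

    public static void main(String[] args) {
        System.out.println(geometricMean(new double[] {1.0, 2.0, 4.0}));  // ~2.0
    }
}
```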
It looks like after a sufficient number of multiplications the double precision is not sufficient anymore. Too many leading zeros, if you will.
The wiki page on arbitrary precision arithmetic shows a few ways to deal with the problem. In Java, BigDecimal seems the way to go, though at the expense of speed.

Why do floats seem to add incorrectly in Java? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicates:
Is JavaScript's Math broken?
Java floating point arithmetic
I have the following code:
for (double j = .01; j <= .17; j += .01) {
    System.out.println(j);
}
the output is:
0.01
0.02
0.03
0.04
0.05
0.060000000000000005
0.07
0.08
0.09
0.09999999999999999
0.10999999999999999
0.11999999999999998
0.12999999999999998
0.13999999999999999
0.15
0.16
0.17
Can someone explain why this is happening, and how to fix it, other than by writing a rounding function?
Floats are an approximation of the actual number in Java, due to the way they're stored. If you need exact values, use a BigDecimal instead.
They are working correctly. Some decimal values are not representable exactly in binary floating point and get rounded to the closest value. See my answer to this question for more detail. The question was asked about Perl, but the answer applies equally to Java since it's a limitation of ALL floating point representations that do not have infinite precision (i.e. all of them).
As suggested by @Kaleb Brasee, use BigDecimal when accuracy is a must. Here is a link to a nice explanation of the tiny details of floating-point operations in Java: http://firstclassthoughts.co.uk/java/traps/java_double_traps.html
There is also a link there to the issues involved in using BigDecimal. I highly recommend reading both; they really helped me.
Enjoy, Boro.
We humans are used to thinking in base 10 when we deal with floating-point numbers "by hand" (that is, literally when writing them on paper or entering them into a computer). Because of this, we can write down an exact representation of, say, 17%: we just write 0.17 (or 1.7E-1, etc.). Something as trivial as a third cannot be written exactly in that system, because we would have to write 0.3333333... with an infinite number of 3s, which is impossible.
Computers dealing with floating point not only have a limited number of bits for the mantissa (or significand) of the number; they are also restricted to expressing the mantissa in base two. That means most percentages (which we humans, with our base-10 convention, can always write exactly, like '0.17') are impossible for the computer to store exactly. Fractions like 0%, 25%, 50%, 75% and 100% can be expressed exactly as floating-point numbers, because they consist of halves (2^-1) or quarters (2^-2), which fit nicely into a binary representation. Percentage values like 17%, or even trivial ones (for us humans!!) like 10% or 1%, are impossible for computers to store exactly, simply because those numbers are to the binary floating-point system what 'one third' is to our base-10 system.
But if you carefully pick your floating-point values so they are always whole multiples of 1/2^n, where n might be 10 (meaning integer multiples of 1/1024), then they can always be stored exactly, without error, as floating-point numbers. So if you try to store 17/1024 in a computer, it will go smoothly. You could even store it without error in the "human" base-10 decimal system (though you would go nuts from the number of digits you'd have to deal with).
This, I believe, is one reason some games express angles in a unit where a full 360-degree turn is 256 angle units: any such angle can be expressed without loss as a floating-point number between 0 and 1 (where 1 means a full revolution).
This is normal for the computer's double representation: some bits are lost, and you get results like these. A better solution is:
for (int j = 1; j <= 17; j++) {
    System.out.println(j / 100.0);
}
This is because floating point values are inherently not the same as reals in the mathematical sense.
In a computer, there is only a fixed number of bits that can be used to represent value. This means there are a finite number of values that it can hold. But there are an infinite amount of real numbers, thus not all of them can be represented exactly. But usually the value is something close. You can find a more detailed explanation here.
That is because of the limitations of IEEE 754, the binary format designed to get the most out of 64 bits.
As others have pointed out, only numbers that are sums of powers of two are exactly representable in (binary) floating-point format.
If you need to store arbitrary numbers with arbitrary precision, then use BigDecimal.
If the problem is just a display issue, then you can get round this in how you display the number. For example:
String.format("%.2f", n)
will format the number to 2 decimal places.
