This is not a duplicate of this question, or of this one.
I am developing a calculator application for Android, and I have been searching the web for the past 20-30 days without finding any reasonable answer. I have also studied many papers on floating-point computation.
I have also tried both the Math and StrictMath libraries.
I have tried the following values:

Math.cos(Math.PI/4) results in 0.7071067811865476, which is correct
Math.cos(Math.PI/2) results in 6.123233995736766E-17; the correct answer is 0
Math.cos(Math.PI) results in -1.0, which is correct
Math.cos((3*Math.PI)/2) results in -1.8369701987210297E-16; the correct answer is 0
Math.cos(Math.PI*2) results in 1.0, which is correct
Math.sin(Math.PI/4) results in 0.7071067811865476, which is correct
Math.sin(Math.PI/2) results in 1.0, which is correct
Math.sin(Math.PI) results in 1.2246467991473532E-16; the correct answer is 0
Math.sin((3*Math.PI)/2) results in -1.0, which is correct
Math.sin(Math.PI*2) results in -2.4492935982947064E-16; the correct answer is 0
Math.tan(Math.PI/4) results in 0.9999999999999999; the correct answer is 1
Math.tan(Math.PI/2) results in 1.633123935319537E16; mathematically, tan(pi/2) is undefined
Math.tan(Math.PI) results in -1.2246467991473532E-16; the correct answer is 0
Math.tan((3*Math.PI)/2) results in 5.443746451065123E15; mathematically, tan(3*pi/2) is undefined
Math.tan(Math.PI*2) results in -2.4492935982947064E-16; the correct answer is 0
When I tried all these calculations on Google's official calculator (the one included in stock Lollipop), it yielded the correct answers for everything except tan((3*PI)/2) and tan(PI/2).
When I tried them on my Casio fx-991 PLUS, all the answers were correct.
Now my question is: how do Google's calculator and Casio's calculator manage to get the correct answers using the limited floating-point precision of the CPU, and how can I achieve the same output?
I am skeptical of many of the "correct answer" values you give. sin(pi) is 0, but Math.PI is not pi; it is an approximation. The sine of something that is only close to pi should not give you 0.

How is the user entering the values? If the user enters a decimal input with 16 decimal places, they should expect some results to be off in the 16th decimal place. If a user asks for sin(10^-15) and you change the input to 0 and return a result of 0, then the user can no longer compute a numerical derivative of sin x at 0 by computing (sin(10^-15) - sin(0)) / (10^-15 - 0). The same is true if the user enters an approximation like Math.PI and you change the input to pi.
As Bryan Reilly answered, you can round results before presenting them to the user, and this will avoid showing a value like 5*10^-15 instead of 0.
You can shift the inputs to ranges near 0. For example, for values of x greater than pi or less than -pi, you can subtract off a multiple of 2pi to get a value in [-pi, pi]. Then you can use trig identities to reduce the needed domain further, to [0, pi/2]. For example, if x is in [-pi, -pi/2], then use sin(x) = -sin(x + pi).
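A minimal sketch of that reduction for sine (my own illustration, not code from any real calculator): Math.IEEEremainder does the multiple-of-2pi subtraction, and the fold onto [-pi/2, pi/2] uses the equivalent identity sin(x) = sin(pi - x). Note that the reduction itself uses the double approximation of pi, so it shrinks the argument's magnitude but cannot make sin(Math.PI) come out as exactly 0.

// Reduce x into [-pi, pi], then fold into [-pi/2, pi/2] before calling Math.sin.
static double sinReduced(double x) {
    double r = Math.IEEEremainder(x, 2 * Math.PI); // r is in [-pi, pi]
    if (r > Math.PI / 2) {
        r = Math.PI - r;   // sin(pi - r) == sin(r)
    } else if (r < -Math.PI / 2) {
        r = -Math.PI - r;  // sin(-pi - r) == sin(r)
    }
    return Math.sin(r);
}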
If any roundoff errors at all are unacceptable, then perhaps you should make a symbolic calculator instead of a floating point calculator.
Most likely, Google's and Casio's calculators simply round the result to zero if its magnitude is smaller than, say, 1.0E-14. Something similar can be done if the number is too large. Floating-point inaccuracies are hard to deal with, but rounding is the most common way to handle them.
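If that behavior is what you want, the core of it is tiny. This is only a sketch; the 1.0E-14 threshold is my assumption, not anything Google or Casio have documented:

// Snap results that are "close enough" to zero, as a display-level fix only.
static double snap(double v) {
    return Math.abs(v) < 1.0E-14 ? 0.0 : v;
}

You would apply this to the final result just before formatting it for display, never to intermediate values.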
And although it seems like you know what is happening under the hood, this may help you:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Maybe they use something like the BigDecimal class.
For example, yesterday I created a simple Swing application that converts decimal numbers to unsigned binary and vice versa. The problem is that what the user enters in a text box is a String, and the value to convert can easily exceed Long.MAX_VALUE.
I wanted the user to be able to enter as many digits as desired in both cases, so I used the BigInteger class, which is similar to BigDecimal but suited to integer values. Using it lets users enter strings of incredible length, and my program can output conversions with as many or even more digits, far exceeding Long.MAX_VALUE.
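A minimal sketch of that kind of conversion (not the actual application code described above), using BigInteger so the input is not limited by Long.MAX_VALUE:

import java.math.BigInteger;

public class BaseConverter {
    public static void main(String[] args) {
        String decimal = "123456789012345678901234567890"; // longer than any long
        String binary = new BigInteger(decimal).toString(2); // decimal -> binary
        String back = new BigInteger(binary, 2).toString();  // binary -> decimal
        System.out.println(binary);
        System.out.println(back.equals(decimal)); // true
    }
}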
However, since the Math functions use doubles, we are still limited even with the 'Big' classes, which are initialized from doubles or strings. If you have an application where you can represent numbers as strings (which can have a maximum length of Integer.MAX_VALUE characters), then the 'Big' classes are great. However, if you initialize one from a double, you are obviously limited by the constraints of double.
Related
I'm using Math.sin to do trigonometry in Java with 3-decimal precision. However, when I calculate values that should result in an integer, I get 1.0000000002 instead of 1.
I have tried using
System.out.printf(Locale.ROOT, "%.3f ", v);
which does solve the problem of 1.0000000002 being displayed instead of 1.000.
However, when I calculate numbers that should result in 0, I instead get -1.8369701987210297E-16, and
System.out.printf(Locale.ROOT, "%.3f ", v);
prints -0.000 when I need it to be 0.000.
Any ideas on how to get rid of that negative sign?
Let's start with this:
How do I avoid rounding errors with doubles?
Basically, you can't. They are inherent to numerical calculations using floating point types. Trust me ... or take the time to read this article:
What every Computer Scientist should know about floating-point arithmetic by David Goldberg.
In this case, the other thing that comes into play is that trigonometric functions are implemented by computing a finite number of steps of an infinite series with finite precision (i.e. floating point) arithmetic. The javadoc for the Math class leaves some "wiggle room" on the accuracy of the math functions. It is worth reading the javadocs to understand the expected error bounds.
Finally, if you are computing (for example) sin π/2 you need to consider how accurate your representation of π/2 is.
So what you should really be asking is how to deal with the rounding error that unavoidably happens.
In this case, what you are really asking is how to make it look to the user of your program as if there were no rounding error. There are two approaches to this:
Leave it alone! The rounding errors occur, so we should not lie to the users about them. It is better to educate them. (Honestly, this is high-school maths, and even "the pointy-haired boss" should understand that arithmetic is inexact.)
Routines like printf do a pretty good job, and the -0.000 displayed in this case is actually a truthful answer. It means that the computed answer rounds to zero at 3 decimal places but is actually negative. This is not hard for someone with high-school maths to understand, if you explain it.
Lie. Fake it. Put in some special-case code to explicitly convert numbers between -0.0005 and zero to exactly zero. The code suggested in a comment,
System.out.printf(Locale.ROOT, "%.3f ", Math.round(v * 1000d) / 1000d);
is another way to do the job. But the risk is that the lie could be dangerous in some circumstances. On the other hand, you could argue that the real mistake is displaying the numbers to only 3 decimal places.
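To see the difference concretely, here is a small demo (the input value is just the sample from the question). The rounding approach works because Math.round returns a long, and 0L / 1000d is +0.0, so the minus sign disappears:

import java.util.Locale;

public class NegZeroDemo {
    public static void main(String[] args) {
        double v = -1.8369701987210297E-16; // a typical "should be zero" result

        // Formatting rounds at display time and keeps the sign: prints "-0.000"
        System.out.printf(Locale.ROOT, "%.3f%n", v);

        // Rounding the value first: prints "0.000"
        System.out.printf(Locale.ROOT, "%.3f%n", Math.round(v * 1000d) / 1000d);
    }
}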
Depending on the accuracy you need, you can multiply by X and then divide by X, where X = 10^y and y is the required number of decimal places.
I am writing a basic neural network in Java, and I am writing the activation functions (currently I have just written the sigmoid function). I am trying to use doubles (as opposed to BigDecimal) in the hope that training will actually take a reasonable amount of time. However, I've noticed that the function doesn't work with larger inputs. Currently my function is:
public static double sigmoid(double t) {
    return 1 / (1 + Math.pow(Math.E, -t));
}
This function returns pretty precise values all the way down to t = -100, but when t >= 37 the function returns exactly 1.0. In a typical neural network with normalized input, is this fine? Will a neuron ever get inputs summing to more than ~37? If the size of the sum of inputs fed into the activation function varies from NN to NN, what are some of the factors that affect it? Also, is there any way to make this function more precise? Is there an alternative that is more precise and/or faster?
Yes, in a normalized network double is fine to use. But this depends on your input: if your input layer is bigger, your input sum will of course be bigger.
I have encountered the same problem in C++: once t becomes big, the compiler/runtime effectively ignores the e^-t term and returns plain 1, as it only calculates the 1/1 part. I tried dividing the already-normalized input by 1000-1000000, and it sometimes worked, but sometimes it did not, as I was using randomized input for the first epoch and my input layer was a 784x784 matrix. Nevertheless, if your input layer is small and your input is normalized, this will help you.
The surprising answer is that double actually gives you more precision than you need. This blog article by Pete Warden claims that even 8 bits are enough precision. And this is not just an academic idea: NVidia's new Pascal chips emphasize their single-precision performance above everything else, because that is what matters for deep-learning training.
You should be normalizing your input neuron values. If extreme values still happen, it is fine to clamp them to -1 or +1. In fact, this answer shows doing that explicitly. (The other answers on that question are also interesting; one suggests just pre-calculating 100 or so values and not using Math.exp() or Math.pow() at all!)
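For the "more precise and/or faster" part of the question, a common reformulation (my sketch, not taken from the answers above) is to call Math.exp directly and branch on the sign of t so the exponential never overflows. Note that this does not change the t >= 37 behavior: 1 - sigmoid(37) is about 8.5E-17, which is below the spacing of doubles near 1.0 (about 1.1E-16), so 1.0 genuinely is the closest representable result.

// A numerically careful sigmoid; Math.exp is preferable to Math.pow(Math.E, -t).
public static double sigmoid(double t) {
    if (t >= 0) {
        return 1.0 / (1.0 + Math.exp(-t)); // exp argument is <= 0: no overflow
    }
    double e = Math.exp(t); // t < 0, so e is in (0, 1): no overflow
    return e / (1.0 + e);
}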
I haven't been able to find any information online that doesn't already assume I know things. I wonder if anyone knows any good resources that I can look into to help me wrap my head around what this function does exactly?
From what I gather, and I'm pretty certain this is wrong or at least not fully right, given a floating-point number it determines the distance between that number and the next number in some sequence? There appears to be something to do with how numbers are represented bitwise, but the sources I've read never explicitly said anything about that.
http://matlabgeeks.com/tips-tutorials/floating-point-comparisons-in-matlab/
illustrated it rather well:
float2bin(A)
% ans = 0011111110111001100110011001100110011001100110011001100110100000
float2bin(B)
% ans = 0011111110111001100110011001100110011001100110011001100110011010
You can see the difference in precision at the binary level in this example: A and B differ by 6 ulps (units in the last place).
I believe that it is showing the distance between the number you specify, and the next largest binary float that can be encoded.
Because of the limited range and precision of binary floating-point numbers, not every real number between any two given values can be represented. So it looks like this gives you the positive distance between the number you wish to encode and the number it would actually be stored as.
From Wikipedia:
the unit of least precision (ULP) is the spacing between floating-point numbers, i.e., the value the least significant digit represents if it is 1
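Java exposes the same concept directly through Math.ulp and Math.nextUp, which may be an easier way to experiment than the MATLAB example; a small sketch:

public class UlpDemo {
    public static void main(String[] args) {
        double a = 0.1;
        double next = Math.nextUp(a);    // the smallest double greater than a
        System.out.println(Math.ulp(a)); // spacing at a: about 1.39E-17
        System.out.println(next - a);    // the same spacing
        // Adjacent doubles differ by exactly 1 in the last bit of their patterns:
        System.out.println(Long.toBinaryString(Double.doubleToLongBits(a)));
        System.out.println(Long.toBinaryString(Double.doubleToLongBits(next)));
    }
}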
Possible Duplicates:
Is JavaScript's Math broken?
Java floating point arithmetic
I have the following code:
for (double j = 0.01; j <= 0.17; j += 0.01) {
    System.out.println(j);
}
the output is:
0.01
0.02
0.03
0.04
0.05
0.060000000000000005
0.07
0.08
0.09
0.09999999999999999
0.10999999999999999
0.11999999999999998
0.12999999999999998
0.13999999999999999
0.15
0.16
0.17
Can someone explain why this is happening, and how to fix it (besides writing a rounding function)?
Floats are an approximation of the actual number in Java, due to the way they're stored. If you need exact values, use a BigDecimal instead.
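For example, a minimal sketch of the same loop using BigDecimal; constructing the values from strings keeps every step exact decimal arithmetic, so there is no drift:

import java.math.BigDecimal;

public class DecimalLoop {
    public static void main(String[] args) {
        BigDecimal step = new BigDecimal("0.01");
        BigDecimal end = new BigDecimal("0.17");
        for (BigDecimal j = step; j.compareTo(end) <= 0; j = j.add(step)) {
            System.out.println(j); // 0.01, 0.02, ..., 0.17
        }
    }
}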
They are working correctly. Some decimal values are not representable exactly in binary floating point and get rounded to the closest value. See my answer to this question for more detail. The question was asked about Perl, but the answer applies equally to Java since it's a limitation of ALL floating point representations that do not have infinite precision (i.e. all of them).
As suggested by @Kaleb Brasee, use BigDecimal when accuracy is a must. Here is a link to a nice explanation of the tiny details of using floating-point operations in Java: http://firstclassthoughts.co.uk/java/traps/java_double_traps.html
It also links to a discussion of the issues involved in using BigDecimal. I highly recommend reading both; they really helped me.
Enjoy, Boro.
We humans are used to thinking in base 10 when we deal with floating-point numbers 'by hand' (that is, literally when writing them on paper or entering them into a computer). Because of this, it is possible for us to write down an exact representation of, say, 17%: we just write 0.17 (or 1.7E-1, etc). Trying to represent something as trivial as one third cannot be done exactly in that system, because we would have to write 0.3333333... with an infinite number of 3s, which is impossible.
Computers dealing with floating point not only have a limited number of bits to represent the mantissa (or significand) of the number, they are also restricted to expressing the mantissa in base two. That means most percentages (which we humans, with our base-10 convention, can always write exactly, like '0.17') are impossible for the computer to store exactly. Fractions like 0%, 25%, 50%, 75% and 100% can be expressed exactly as floating-point numbers, because they consist of halves (2^-1) or quarters (2^-2), which fit nicely into a binary representation. Percentage values like 17%, or even trivial ones (for us humans!) like 10% or 1%, are impossible for computers to store exactly, simply because those numbers are to the binary floating-point system what 'one third' is to our base-10 system.
But if you carefully pick your floating-point values so that they are always a whole number of 1/2^n, where n might be 10 (meaning an integer number of 1/1024), then they can always be stored exactly, without error, as floating-point numbers. So if you try to store 17/1024 in a computer, it will go smoothly. (You could actually store it without error even in the 'human' base-10 decimal system, but the number of digits you would have to deal with would drive you nuts.)
I believe this is one reason why some games express angles in a unit where a whole 360-degree turn is 256 angle units: any such angle can be expressed without loss as a floating-point number between 0 and 1 (where 1 means a full revolution).
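A quick check of the claim about power-of-two denominators (a sketch; the values are arbitrary):

double x = 17.0 / 1024.0;               // 17 * 2^-10, exactly representable
System.out.println(x * 1024.0 == 17.0); // true: no rounding error anywhere
System.out.println(0.1 + 0.2 == 0.3);   // false: none of these have an exact binary form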
This is normal for the double representation used by the computer: you lose some bits, and then you get results like these. A better solution is to iterate over integers and divide:

for (int j = 1; j <= 17; j++) {
    System.out.println(j / 100.0);
}
This is because floating point values are inherently not the same as reals in the mathematical sense.
In a computer, there is only a fixed number of bits available to represent a value, so it can hold only a finite set of values. But there are infinitely many real numbers, so not all of them can be represented exactly. Usually, though, the stored value is something close. You can find a more detailed explanation here.
That is because of the limitations of IEEE 754, the binary format designed to get the most out of 32 bits (or 64, for a double).
As others have pointed out, only numbers that are sums of powers of two are exactly representable in (binary) floating-point format.
If you need to store arbitrary numbers with arbitrary precision, then use BigDecimal.
If the problem is just a display issue, then you can get round this in how you display the number. For example:
String.format("%.2f", n)
will format the number to 2 decimal places.
In my course, I am told:
Continuous values are represented approximately in memory, and therefore computing with floats involves rounding errors. These are tiny discrepancies in bit patterns; thus the test e==f is unsafe if e and f are floats.
Referring to Java.
Is this true? I've used comparison statements with doubles and floats and have never had rounding issues, and I have never read anything like this in a textbook. Surely the virtual machine accounts for this?
It is true.
It is an inherent limitation of how floating point values are represented in memory in a finite number of bits.
This program, for instance, prints "false":
public class Main {
    public static void main(String[] args) {
        double a = 0.7;
        double b = 0.9;
        double x = a + 0.1;
        double y = b - 0.1;
        System.out.println(x == y);
    }
}
Instead of exact comparison with '==' you usually decide on some level of precision and ask if the numbers are "close enough":
System.out.println(Math.abs(x - y) < 0.0001);
This applies to Java just as much as to any other language using floating point. It's inherent in the design of the representation of floating point values in hardware.
More info on floating point values:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Yes, representing 0.1 exactly in base-2 is the same as trying to represent 1/3 exactly in base 10.
This is always true: some numbers cannot be represented accurately in floating-point. Consider, for example, pi. How would you represent a number that has infinitely many digits within finite storage? Therefore, when comparing numbers you should check whether the difference between them is smaller than some epsilon. There are also classes that can help you achieve greater accuracy, such as BigDecimal and BigInteger.
It is right. Note that Java has nothing to do with it; the problem is inherent in floating-point math in ANY language.
You can often get away with it with classroom-level problems but it's not going to work in the real world. Sometimes it won't work in the classroom.
An incident from long ago, back in school: the teacher of an intro class assigned a final exam problem that was proving a real doozy for many of the better students. It wasn't working, and they didn't know why. (I saw this as a lab assistant; I wasn't in the class.) Finally, some of them started asking me for help, and some probing revealed the problem: they had never been taught about the inherent inaccuracy of floating-point math.
Now, there were two basic approaches to the problem: a brute-force one (which by chance worked in this case, as it made the same errors every time) and a more elegant one (which would make different errors and not work). Anyone who tried the elegant approach would hit a brick wall without having any idea why. I helped a bunch of them, and stuck in a comment explaining why and inviting whoever read it to contact me with questions.
Of course next semester I hear from him about this and I basically floored the entire department with a simple little program:
10 X = 3000000
20 X = X + 1
30 If X < X + 1 goto 20
40 Print "X = X + 1"
Despite what every teacher in the department thought, this WILL terminate. The 3,000,000 seed is simply there to make it terminate faster. (If you don't know BASIC: there are no gimmicks here; it just exhausts the precision of the floating-point numbers.)
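For anyone who wants to watch it happen in Java, here is a rough equivalent using float (single precision); this is my translation, not the original BASIC:

float x = 3_000_000f;
while (x < x + 1f) {
    x += 1f;
}
// Stops at 1.6777216E7, i.e. 2^24: beyond that, x + 1f rounds back to x.
System.out.println("Stopped at x = " + x);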
Yes, as the other answers have said. I would also recommend this article about floating-point accuracy: Visualizing floats.
Of course it is true. Think about it: any number must be represented in binary.
Picture "1000" as 0.5, or 1/2, that is, 2^-1. Then "0100" is 0.25, or 1/4. You can see where I'm going.
How many numbers can you represent in this manner? 2^4. Adding more bits doubles the available space, but it is never infinite. 1/3, or 1/10, or in general 1/n for any n that is not a power of 2, cannot be truly represented.
1/3 could be "0101" (0.3125) or "0110" (0.375). Either value, if you multiply it by 3, will not be 1. Of course you could add special rules, say "when you add '0101' three times, make it 1"... but this approach won't work in the long run. You can catch some cases, but then what about 1/6 times 2?
It's not a problem of binary representation; any finite representation has numbers that it cannot represent, since the real numbers are infinite after all.
Most CPUs (and programming languages) use IEEE 754 floating-point arithmetic. In this notation there are decimal numbers that have no exact representation, e.g. 0.1. So if you divide 1 by 10, you won't get an exact result, and when you perform several calculations in a row, the errors add up. Try the following example in Python:
>>> 0.1
0.10000000000000001
>>> 0.1 / 7 * 10 * 7 == 1
False
That's not really what you'd expect mathematically.
By the way:
A common misunderstanding concerning floating-point numbers is that the results are imprecise and can never be compared safely. This is only true if you really use fractional values. If all your math is in the integer domain, doubles and floats behave exactly like ints and can also be compared safely, as long as the values are small enough to be represented exactly (up to 2^53 for a double). They can safely be used as loop counters, for example.
Yes, Java also uses floating-point arithmetic.
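To illustrate the integer-domain point in Java (a small sketch): every integer with magnitude up to 2^53 is exactly representable as a double, so such comparisons are safe; beyond that, the spacing between adjacent doubles exceeds 1 and the guarantee is lost.

double a = 3.0 * 7.0;
System.out.println(a == 21.0);        // true: exact integer arithmetic

double big = 9007199254740992.0;      // 2^53
System.out.println(big + 1.0 == big); // true: 2^53 + 1 rounds back to 2^53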