What would be the most efficient way to grab the, say, 207th decimal place of a number? Would it just be x * Math.pow(10,207) % 10?
How's this for Python?
int(x * 10**n) % 10
What you want is impossible.
The only things in Java that work with Math.pow and the basic operators are the primitives. The only floating-point primitives are float and double. These are IEEE754 floating-point numbers; doubles are 64 bits and floats are 32 bits.
A simple principle applies: if you have 64 bits, then you can only represent 2^64 different numbers (it's actually a little less). So, of all numbers in existence, only about 18446744073709551616 of them actually exist as far as the computer is concerned for doubles. All other numbers do not exist.
So what happens if a mathematical operation (say, 0.1 + 0.2) ends up being a number that doesn't exist? Well, Java (this is prescribed by the IEEE754 standard; most languages and chips do it this way) will return you the nearest number amongst all the 18446744073709551616 numbers that do exist.
The problem with wanting the 207th digit is that, given that only 18446744073709551616 numbers exist, none of those numbers have that kind of precision. The answer you would get for the 207th digit is therefore essentially arbitrary. It says nothing about the input number... whatsoever.
Let me repeat that: There are no double values that have a significant 207th digit AT ALL.
If you want 'perfect' representation, with no rounding whatsoever, you want BigDecimal, but note that demanding perfection is tricky. Imagine in basic decimal math (computers are binary, but let's stick with decimal, as we're all much more familiar with it, what with our 10 fingers and all): I ask you to give me only perfect answers, and then I ask you to divide 1 by 3.
BigDecimal won't let you do that either, so operations you run on BigDecimals without telling BigDecimal in what ways it is allowed to be imprecise lead to exceptions.
If you've set it all up exactly how you wanted it, and you really have a BigDecimal with a 207th digit after the decimal point, you can use the scale feature, or just the power-of-10 feature, to get what you want.
Note that BigDecimal is not primitive and therefore does not support the +, %, etc. operators.
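For completeness, here is a minimal sketch of that scale / power-of-ten approach (my own illustration, assuming the value is constructed from a String so it never passes through a double's rounding):

import java.math.BigDecimal;
import java.math.BigInteger;

BigDecimal x = new BigDecimal("0.987654321"); // exact: built from a String, not a double
int n = 3;                        // which digit after the decimal point we want
int digit = x.movePointRight(n)   // shift the target digit into the ones place
             .toBigInteger()      // truncate everything after the point
             .mod(BigInteger.TEN) // keep only the ones digit
             .intValue();
System.out.println(digit); // 7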
***Special Note: There is no "this will handle all situations" answer here, as an arbitrary value such as 207 could take the calculation way outside the bounds of possible precision of the variable types involved. As such, my answer will only work within the bounds of variable-type precision, for which 207 is really not possible...
To get the specific digit an arbitrary number (like 207) of places after the decimal point... if you just multiply by a factor of 10 and then take mod 10, the answer (in Java) is still a floating-point type... not a single digit...
To get a specific digit an arbitrary number (n) of places after the decimal point, without converting to string:
Math.floor(x*Math.pow(10,n)) % 10;
To get the 4th digit after the decimal point of 2.987654321:
x*Math.pow(10, 4) = 29876.54321
Math.floor(29876.54321) = 29876
29876 % 10 = 6
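Wrapped up as a method, a sketch (the cast to int is needed because Math.floor returns a double, so the expression above still yields a double):

// Returns the nth digit after the decimal point, within double precision.
static int digitAfterPoint(double x, int n) {
    return (int) (Math.floor(x * Math.pow(10, n)) % 10);
}

// digitAfterPoint(2.987654321, 4) == 6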
Related
I am working on an android application where for a part of the app, I have 2 floating point values and I cannot have them be exactly the same because this is causing a bug in one of my screens. Those numbers are being sent from a server and are out of my control (e.g. I cannot force them to be different).
The app is written in Kotlin, but I assume that this issue is similar (if not exactly the same) for Java, as Kotlin uses the JVM behind the scenes.
I thought of a "creative" way of solving this without changing my logic too much, by subtracting Float.MIN_VALUE from one of them, making them almost, but not exactly the same. What I actually need to happen is for if(a == b) to fail (where b is actually a - Float.MIN_VALUE).
But to my surprise, when the code runs, if (a == b) returns true. Evaluating the expressions in Android Studio's "evaluate" window confirms this.
Let me reiterate that currentPayment is a Float, so there shouldn't be any auto-conversions or rounding going on (like when dividing Float by an Int for example).
Let me also point out that I can guarantee that currentPayment is not
Float.MAX_VALUE or -Float.MAX_VALUE (so the result of the operation is within the bounds of Float).
According to the docs and this answer, Float.MIN_VALUE is the smallest positive non-zero value of Float and has a value of 1.4E-45.
I also saw in another post (which I cannot find again for some reason), that this can also be thought of as the maximum precision of Float, which makes sense.
Since currentPayment is a Float, I would expect it to be able to hold any floating-point value within the bounds of Float to its maximum precision (i.e. Float.MIN_VALUE).
Therefore I would expect currentPayment to NOT equal currentPayment - Float.MIN_VALUE, but alas that is not the case.
Can anyone explain this please?
Since currentPayment is a Float, I would expect it to be able to hold any floating-point value within the bounds of Float to its maximum precision (i.e. Float.MIN_VALUE).
This is a wrong assumption. Float is called "float" because it has floating precision. The amount of precision depends on how big the number you're storing is. The smallest possible float value is smaller than the precision of almost any other possible number, so it is too small to affect them if you add or subtract it. At the high end, adjacent Float values are separated by gaps much greater than the integer 1. If you subtract 999,000,000 from Float.MAX_VALUE, it will still return Float.MAX_VALUE, because the precision is so poor at the highest end.
Also, since floating point numbers are not stored in base-10, they are inappropriate for storing currency amounts, because most decimal fractions cannot be represented exactly. (I mention that because your variable name has the word "payment" in it, which is a red flag.)
You should use BigDecimal, Long, or Int to represent currency, so your currency amounts and arithmetic will be exact.
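A quick sketch (in Java; Kotlin behaves identically, since both use IEEE 754 floats) that makes the floating gap visible. Math.ulp reports the spacing between adjacent Float values at a given magnitude:

float a = 100.0f;
System.out.println(a - Float.MIN_VALUE == a); // true: 1.4E-45 vanishes next to 100
System.out.println(Math.ulp(a));              // 7.6293945E-6, the actual gap between floats near 100
System.out.println(Float.MAX_VALUE - 999_000_000f == Float.MAX_VALUE); // true: the gap up there is ~2E31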
Edit:
Here's an analogy to help understand it, since it is hard to contemplate binary numbers. Floats are 32 bits in Java and Kotlin, but imagine we have a special kind of computer that can store a floating point number in base 10. Each bit on this computer is not just 0 or 1, but can be anything from 0 to 9. A Float on this computer can have 4 digits and a decimal place, but the decimal place is floating, so it can be placed anywhere relative to the four digits. So a Float on this computer is always five bits--four of the bits are the digits, and the fifth bit tells you where the decimal place goes.
In this imaginary computer's Float, the smallest possible number that can be represented is .0001 and the largest possible number is 9999. You can't represent 9999.5 or even 1000.5 because there aren't enough digits available. There's no fixed amount of precision--the precision is determined by where the decimal place is in the current number. Precision is better for numbers with a decimal place farther to the left.
For the number storage format to be able to have a fixed precision, we would have to fix the decimal point in one place for all numbers. We would have to choose a precision. Suppose we chose a precision of 0.001. Our fifth bit that told us where the decimal place goes in the floating point can now just be used for a fifth digit. Now we know the precision is always 0.001, but the largest possible number we can represent is 99.999 and the smallest possible number is 0.001, a much smaller possible range than with floating point. This limitation is the reason floating points are used instead.
How do you explain floating point inaccuracy to fresh programmers and laymen who still think computers are infinitely wise and accurate?
Do you have a favourite example or anecdote which seems to get the idea across much better than a precise, but dry, explanation?
How is this taught in Computer Science classes?
There are basically two major pitfalls people stumble in with floating-point numbers.
The problem of scale. Each FP number has an exponent which determines the overall "scale" of the number, so you can represent either really small values or really large ones, though the number of digits you can devote to that is limited. Adding two numbers of different scale will sometimes result in the smaller one being "eaten" since there is no way to fit it into the larger scale.
PS> $a = 1; $b = 0.0000000000000000000000001
PS> Write-Host a=$a b=$b
a=1 b=1E-25
PS> $a + $b
1
As an analogy for this case you could picture a large swimming pool and a teaspoon of water. Both are of very different sizes, but individually you can easily grasp how much they roughly are. Pouring the teaspoon into the swimming pool, however, will leave you still with roughly a swimming pool full of water.
(If the people learning this have trouble with exponential notation, one can also use the values 1 and 100000000000000000000 or so.)
Then there is the problem of binary vs. decimal representation. A number like 0.1 can't be represented exactly with a limited amount of binary digits. Some languages mask this, though:
PS> "{0:N50}" -f 0.1
0.10000000000000000000000000000000000000000000000000
But you can “amplify” the representation error by repeatedly adding the numbers together:
PS> $sum = 0; for ($i = 0; $i -lt 100; $i++) { $sum += 0.1 }; $sum
9,99999999999998
I can't think of a nice analogy to properly explain this one, though. It's basically the same problem as why you can represent 1/3 only approximately in decimal: to get the exact value you would need to repeat the 3 indefinitely at the end of the decimal fraction.
Similarly, binary fractions are good for representing halves, quarters, eighths, etc. but things like a tenth will yield an infinitely repeating stream of binary digits.
Then there is another problem, though most people don't stumble into that, unless they're doing huge amounts of numerical stuff. But then, those already know about the problem. Since many floating-point numbers are merely approximations of the exact value, this means that for a given approximation f of a real number r there can be infinitely many more real numbers r1, r2, ... which map to exactly the same approximation. Those numbers lie in a certain interval. Let's say that rmin is the minimum possible value of r that results in f and rmax the maximum possible value of r for which this holds; then you get an interval [rmin, rmax] where any number in that interval can be your actual number r.
Now, if you perform calculations on that number—adding, subtracting, multiplying, etc.—you lose precision. Every number is just an approximation, therefore you're actually performing calculations with intervals. The result is an interval too, and the approximation error only ever gets larger, thereby widening the interval. You may get back a single number from that calculation, but that's merely one number from the interval of possible results, taking into account the precision of your original operands and the precision loss due to the calculation.
That sort of thing is called interval arithmetic and at least for me it was part of our math course at university.
Show them that the base-10 system suffers from exactly the same problem.
Try to represent 1/3 as a decimal representation in base 10. You won't be able to do it exactly.
So if you write "0.3333", you will have a reasonably exact representation for many use cases.
But if you move that back to a fraction, you will get "3333/10000", which is not the same as "1/3".
Other fractions, such as 1/2 can easily be represented by a finite decimal representation in base-10: "0.5"
Now base-2 and base-10 suffer from essentially the same problem: both have some numbers that they can't represent exactly.
While base-10 has no problem representing 1/10 as "0.1", in base-2 you'd need an infinite representation starting with "0.000110011...".
How's this for an explanation to the layman? One way computers represent numbers is by counting discrete units. These are digital computers. For whole numbers, those without a fractional part, modern digital computers count powers of two: 1, 2, 4, 8, ... Place value, binary digits, blah, blah, blah. For fractions, digital computers count inverse powers of two: 1/2, 1/4, 1/8, ... The problem is that many numbers can't be represented by a sum of a finite number of those inverse powers. Using more place values (more bits) will increase the precision of the representation of those 'problem' numbers, but never get it exactly, because the computer only has a limited number of bits. Some numbers would need an infinite number of bits to represent exactly.
Snooze...
OK, you want to measure the volume of water in a container, and you only have 3 measuring cups: full cup, half cup, and quarter cup. After counting the last full cup, let's say there is one third of a cup remaining. Yet you can't measure that because it doesn't exactly fill any combination of available cups. It doesn't fill the half cup, and the overflow from the quarter cup is too small to fill anything. So you have an error - the difference between 1/3 and 1/4. This error is compounded when you combine it with errors from other measurements.
In python:
>>> 1.0 / 10
0.10000000000000001
Explain how some fractions cannot be represented precisely in binary. Just like some fractions (like 1/3) cannot be represented precisely in base 10.
Another example, in C
printf (" %.20f \n", 3.6);
incredibly gives
3.60000000000000008882
Here is my simple understanding.
Problem:
The value 0.45 cannot be accurately represented by a float and is rounded up to 0.450000018. Why is that?
Answer:
An int value of 45 is represented by the binary value 101101.
In order to make the value 0.45 it would be exact if you could take 45 x 10^-2 (= 45 / 10^2).
But that’s impossible because you must use base 2 instead of 10.
So the closest to 10^2 = 100 would be 128 = 2^7. The total number of bits you need is 9: 6 for the value 45 (101101) plus 3 for the value 7 (111).
Then the value 45 x 2^-7 = 0.3515625. Now you have a serious inaccuracy problem. 0.3515625 is nowhere near 0.45.
How do we improve this inaccuracy? Well we could change the value 45 and 7 to something else.
How about 460 x 2^-10 = 0.44921875. You are now using 9 bits for 460 and 4 bits for 10. Then it’s a bit closer but still not that close. However if your initial desired value was 0.44921875 then you would get an exact match with no approximation.
So the formula for your value would be X = A x 2^B. Where A and B are integer values positive or negative.
Obviously, the larger the numbers can be, the higher your accuracy becomes; however, as you know, the number of bits to represent the values A and B is limited. For float you have a total of 32 bits. Double has 64 and Decimal has 128.
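To see the actual A and B that a float stores, you can pull them out of the bit pattern. A sketch in Java (the layout is 1 sign bit, 8 exponent bits, 23 fraction bits):

int bits = Float.floatToIntBits(0.45f);
int significand = (bits & 0x7FFFFF) | 0x800000;   // the 23 stored bits plus the implicit leading 1
int exponent = ((bits >>> 23) & 0xFF) - 127 - 23; // unbias, then shift so the significand reads as an integer
System.out.println(significand + " x 2^" + exponent); // 15099494 x 2^-25

And 15099494 x 2^-25 = 0.44999998807907104..., so whichever neighbour your toolchain displays, the exact value 0.45 itself is unreachable.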
A cute piece of numerical weirdness may be observed if one converts 9999999.4999999999 to a float and back to a double. The result is reported as 10000000, even though that value is obviously closer to 9999999, and even though 9999999.499999999 correctly rounds to 9999999.
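That one is reproducible in a single line (a sketch; what happens is double rounding: the literal is first rounded to the nearest double, which is exactly 9999999.5, and the conversion to float then resolves the tie by rounding to the even significand, 10000000):

System.out.println((float) 9999999.4999999999); // prints 1.0E7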
I'm looking through old exam questions (currently first year of uni.) and I'm wondering if someone could explain a bit more thoroughly why the following for loop does not end when it is supposed to. Why does this happen? I understand that it skips 100.0 because of a rounding-error or something, but why?
for (double i = 0.0; i != 100; i = i + 0.1) {
System.out.println(i);
}
The number 0.1 cannot be exactly represented in binary, much like 1/3 cannot be exactly represented in decimal, as such you cannot guarantee that:
0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1==1
This is because in binary:
0.1 = (binary) 0.00011001100110011001100110011001... (repeating forever)
However a double cannot contain an infinite precision and so, just as we approximate 1/3 to 0.3333333 so must the binary representation approximate 0.1.
Expanded decimal analogy
In decimal you may find that
1/3+1/3+1/3
=0.333+0.333+0.333
=0.999
This is exactly the same problem. It should not be seen as a weakness of floating point numbers as our own decimal system has the same difficulties (but for different numbers, someone with a base-3 system would find it strange that we struggled to represent 1/3). It is however an issue to be aware of.
Demo
A live demo provided by Andrea Ligios shows these errors building up.
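A minimal version of such a demo (my sketch, not Andrea Ligios's original):

double sum = 0.0;
for (int i = 0; i < 10; i++) {
    sum += 0.1;                 // each addition inherits 0.1's representation error
}
System.out.println(sum);        // 0.9999999999999999
System.out.println(sum == 1.0); // false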
Computers (at least current ones) work with binary data. Moreover, there is a length limit on what computers can process in their arithmetic logic units (e.g. 32 bits, 64 bits).
Representing integers in binary form is simple; we can't say the same for floating-point values.
There is a special way of representing floating-point values according to IEEE 754, which is also accepted as the de facto standard by processor manufacturers and software folks alike; that's why it is important for everyone to know about it.
The maximum value of a double in Java (Double.MAX_VALUE) is 1.7976931348623157E308 (>10^307). With only 64 bits, huge numbers can be represented; the problem, however, is the precision.
As the '==' and '!=' operators compare numbers bitwise, in your case 0.1+0.1+0.1 is not equal to 0.3 in terms of the bits they are represented with.
In conclusion, to fit huge floating-point numbers into a few bits, clever engineers decided to sacrifice precision. If you are working with floating points, you shouldn't use '==' or '!=' unless you are sure of what you are doing.
As a general rule, never use double to iterate with due to rounding errors (0.1 may look nice when written in base 10, but try writing it in base 2—which is what double uses). What you should do is use a plain int variable to iterate and calculate the double from it.
for (int i = 0; i < 1000; i++)
System.out.println(i/10.0);
First of all, I'm going to explain some things about doubles. This will actually take place in base ten for ease of understanding.
Take the value one-third and try to express it in base ten. You get 0.3333333333333.... Let's say we need to round it to 4 places. We get 0.3333. Now, let's add another 1/3. We get 0.6666333333333.... which rounds to 0.6666. Let's add another 1/3. We get 0.9999, not 1.
The same thing happens with base two and one-tenth. Since you're counting by 0.1 (base ten), and 0.1 is a repeating value in binary (like 0.1666666... in base ten), you'll have just enough error to miss one hundred when you do get there.
1/2 can be represented in base ten just fine, and 1/5 can as well. This is because the prime factors of the denominator are a subset of the factors of the base. This is not the case for one third in base ten or one tenth in base two.
It should be for (double a = 0.0; a < 100.0; a = a + 0.1)
Try and see if this works instead
Possible Duplicates:
Is JavaScript's Math broken?
Java floating point arithmetic
I have the following code:
for (double j = .01; j <= .17; j += .01) {
System.out.println(j);
}
the output is:
0.01
0.02
0.03
0.04
0.05
0.060000000000000005
0.07
0.08
0.09
0.09999999999999999
0.10999999999999999
0.11999999999999998
0.12999999999999998
0.13999999999999999
0.15
0.16
0.17
Can someone explain why this is happening? How do you fix this? Besides writing a rounding function?
Floats are an approximation of the actual number in Java, due to the way they're stored. If you need exact values, use a BigDecimal instead.
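For example, a sketch of the loop from the question rewritten with BigDecimal (the String constructor keeps each step exact):

import java.math.BigDecimal;

BigDecimal step = new BigDecimal("0.01");
BigDecimal end  = new BigDecimal("0.17");
for (BigDecimal j = step; j.compareTo(end) <= 0; j = j.add(step)) {
    System.out.println(j); // 0.01, 0.02, ..., 0.17 with no drift
}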
They are working correctly. Some decimal values are not representable exactly in binary floating point and get rounded to the closest value. See my answer to this question for more detail. The question was asked about Perl, but the answer applies equally to Java since it's a limitation of ALL floating point representations that do not have infinite precision (i.e. all of them).
As suggested by @Kaleb Brasee, go and use BigDecimals when accuracy is a must. Here is a link to a nice explanation of tiny details related to using floating-point operations in Java: http://firstclassthoughts.co.uk/java/traps/java_double_traps.html
There is also a link there to issues involved with using BigDecimals. I highly recommend reading them both. It really helped me.
Enjoy, Boro.
We humans are used to thinking in 'base 10' when we deal with floating-point numbers 'by hand' (that is, literally when writing them on paper or when entering them into a computer). Because of this, it is possible for us to write down an exact representation of, say, 17%. We just write 0.17 (or 1.7E-1, etc). Trying to represent such a trivial thing as a third cannot be done exactly with that system, because we would have to write 0.3333333... with an infinite number of 3s, which is impossible.
Computers dealing with floating point not only have a limited number of bits to represent the mantissa (or significand) of the number, they are also restricted to expressing the mantissa in base two. That means that most percentages (which we humans, with our base-10 floating-point convention, can always write exactly, like for example '0.17') are impossible for the computer to store exactly. Fractions like 0%, 25%, 50%, 75% and 100% can be expressed exactly as floating-point numbers in a computer, because they consist of halves (2^-1) or quarters (2^-2), which fit nicely into a binary representation of a number. Percentage values like 17%, or even trivial ones (for us humans!!) like 10% or 1%, are impossible for computers to store exactly, simply because those numbers are to the binary floating-point system what 'one third' is to the human (base 10) system.
But if you carefully pick your floating-point values so they are always a whole number of 1/2^n, where n might be 10 (meaning an integer number of 1/1024), then they can always be stored exactly, without errors, as floating-point numbers. So if you try to store 17/1024 in a computer, it will go smoothly. You can actually store it without error even using the 'human base 10' decimal system (but you would go nuts from the number of actual digits you have to deal with).
This, I believe, is one reason why some games express angles in a unit where a whole 360-degree turn is 256 angle units: any such angle can be expressed without loss as a floating-point number between 0 and 1 (where 1 means you go a full revolution).
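A small sketch showing the difference (new BigDecimal(double) reveals the exact value a float actually stores):

float dyadic = 17f / 1024f;                           // 17/1024 is a whole number of 1/1024: stored exactly
System.out.println(dyadic * 1024f == 17f);            // true: no error at all
System.out.println(new java.math.BigDecimal(0.17f));  // prints the exact stored value, close to but not 0.17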
This is normal for the double representation on a computer. You lose some bits, and then you get results like these. A better solution is to do this:
for(int j = 1; j <= 17; j++){
System.out.println(j/100.0);
}
This is because floating point values are inherently not the same as reals in the mathematical sense.
In a computer, there is only a fixed number of bits that can be used to represent a value. This means there is a finite number of values that it can hold. But there are infinitely many real numbers, so not all of them can be represented exactly. Usually, though, the stored value is something close. You can find a more detailed explanation here.
That is because of the limitations of IEEE 754, the binary format designed to get the most out of 32 bits.
As others have pointed out, only numbers that are sums of powers of two are exactly representable in (binary) floating-point format.
If you need to store arbitrary numbers with arbitrary precision, then use BigDecimal.
If the problem is just a display issue, then you can get round this in how you display the number. For example:
String.format("%.2f", n)
will format the number to 2 decimal places.
This is in reference to the comments in this question:
This code in Java produces 12.100000000000001 and this is using 64-bit doubles which can present 12.1 exactly. – Pyrolistical
Is this true? I felt that since a floating point number is represented as a sum of powers of two, you cannot represent 12.1 exactly, no matter how many bits you have. However, when I implemented both the algorithms and printed the results of calling them with (12.1, 3) with many significant digits, I get, for his and mine respectively:
12.10000000000000000000000000000000000000000000000000000000000000000000000000
12.10000000000000100000000000000000000000000000000000000000000000000000000000
I printed this using String.format("%.76f"). I know that's more zeros than necessary, but I don't see any rounding in the 12.1.
No. As others noted in followups to his comment, no sum of (a finite number of) powers of two can ever add up to exactly 12.1. Just like you can't represent 1/3 exactly in base ten, no matter how many digits you use after the decimal point.
In binary, 12.1 is:
1100.000110011001100110011...
Since this doesn't terminate, it can't be represented exactly in the 53 significand bits of a double, or any other finite-width binary floating-point type.
Try to express 0.1 in binary:
0.5 is too big
0.25 is too big
0.125 is too big
0.0625 fits, and leaves a remainder of 0.0375
0.03125 fits, and leaves a remainder of 0.00625
0.015625 is too big
0.0078125 is too big
0.00390625 fits, and leaves a remainder of 0.00234375
0.001953125 fits, and leaves a remainder of 0.000390625
It's going to keep repeating indefinitely, creating a base 2 value of 0.00011001100...
No, it can't be expressed exactly in a double. If Java supported BCD or fixed-point decimal, that would work exactly.
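The hand expansion above can be reproduced mechanically; here's a sketch using BigDecimal so the subtractions themselves stay exact:

import java.math.BigDecimal;

BigDecimal remainder = new BigDecimal("0.1");
BigDecimal place = new BigDecimal("0.5");      // value of the first bit after the binary point
StringBuilder bits = new StringBuilder("0.");
for (int i = 0; i < 20; i++) {
    if (place.compareTo(remainder) <= 0) {     // this power of two fits
        bits.append('1');
        remainder = remainder.subtract(place);
    } else {                                   // too big
        bits.append('0');
    }
    place = place.divide(new BigDecimal(2));   // halving a terminating decimal always terminates
}
System.out.println(bits); // 0.00011001100110011001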
Not in binary, no. If you'll allow me to be fanciful, you could in "floating point binary coded decimal" (which, to the best of my knowledge, has never been implemented):
12.1 = 0000 . 0001 0010 0001 * (10^2)
In binary all non-zero values are of the form 1.xyz * 2^m, and the IEEE format takes advantage of this to omit the leading 1. I'm not sure what the equivalent is for FP-BCD, so I've gone for values of the form 0.xyz * 10^m instead.
I suggest reading What Every Computer Scientist Should Know About Floating Point Arithmetic. Then you'll know for sure. :)
A way to see exactly what value the double holds is to convert it to BigDecimal.
// prints 12.0999999999999996447286321199499070644378662109375
System.out.println(new BigDecimal(12.1));
Yes, you can exactly represent 12.1 in floating point.
You merely need a decimal floating point representation, not a binary one.
Use the BigDecimal type, and you'll represent it exactly!
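The catch is in how you construct it; a sketch of the difference between the String constructor and the double constructor:

System.out.println(new java.math.BigDecimal("12.1")); // 12.1 exactly, as a decimal
System.out.println(new java.math.BigDecimal(12.1));   // the nearest double's exact value (the long number shown above)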
No, the decimal number 12.1 cannot be represented as a finite (terminating) binary floating-point number.
Remember that 12.1 is the rational number 121/10. Note that this fraction is in lowest terms (it cannot be reduced by removing common factors of the numerator and denominator).
Suppose (in order to reach a contradiction) that 121/10 could also be written as n / (2**k), where n and k are some positive integers, and 2**k denotes the kth power of two. We would have a counter-example to unique factorization. In particular,
10 * n == 2**k * 121
where the left-hand side is divisible by 5 while the right-hand side is not.
One option is to not store v = 0.1, but instead store v10 = 1 (the value scaled by ten). Just divide by 10 when needed (the division will create truncation error in your result, but the stored value will still be OK).
In this case you're basically doing a fixed-point hack, but keeping the number in a float.
But it's usually not worth doing this unless you really have to.
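A sketch of the idea:

float v10 = 1f;                 // represents 0.1, scaled by ten; small integers are exact in a float
for (int i = 0; i < 9; i++) {
    v10 += 1f;                  // integer-valued floats: no rounding error accumulates
}
System.out.println(v10 / 10f);  // 1.0, and the division rounds only once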