I have the below code somewhere in my app
private float myMethod(float c) {
    return (float) (c + 273.15);
}
When c gets a value like -273.1455, the result is close to what I expect, something like 0.0044.
But when it gets the value -273.15 I get 6.1035157E-6 instead of zero.
Why does this happen?
The problem is that 273.15 is a double, not a float, and neither type can represent 273.15 exactly. However, since they have different precision, they actually store different numbers. When the addition is done, c is converted to a double, which can hold the float approximation of 273.15 exactly. So you end up with two doubles that are almost, but not quite, equal, and the difference is non-zero.
To get a more predictable result, use 273.15f so that the whole calculation stays in float. That should solve this particular problem, but what you really need to do is read up on binary floating point arithmetic and how it differs from the decimal arithmetic we are taught in school.
Wiki on floating point is a good place to start.
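For illustration, a minimal sketch of that fix, keeping the shape of the method from the question:

private float myMethod(float c) {
    return c + 273.15f;   // float + float: no implicit widening to double
}

With the float literal, myMethod(-273.15f) returns exactly 0.0f, since adding a float to its own negation gives zero.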
Floating point calculations in computers are not exact. You should read something about floating point arithmetic to avoid being surprised by such errors.
The problem is not with the value, but with how it is displayed to the user.
I'm assuming you are converting it into a String. The way this is done is detailed in http://docs.oracle.com/javase/1.4.2/docs/api/java/lang/Double.html#toString(double)
To display a properly rounded value, use the NumberFormat class: http://docs.oracle.com/javase/1.4.2/docs/api/java/text/NumberFormat.html
Example :
NumberFormat formatter = NumberFormat.getNumberInstance();
formatter.setMaximumFractionDigits(4);
System.out.println(formatter.format(myMethod(-273.15f)));
Now you should get 0.
Related
We are solving a numeric precision related bug. Our system collects some numbers and spits out their sum.
The issue is that the system does not retain the numeric precision, e.g. 300.7 + 400.9 = 701.599..., while expected result would be 701.6. The precision is supposed to adapt to the input values so we cannot just round results to fixed precision.
The problem is obvious, we use double for the values and addition accumulates the error from the binary representation of the decimal value.
The path of the data is following:
XML file, type xsd:decimal
Parse into a Java primitive double. Its roughly 15 significant decimal digits should be enough; we expect values no longer than 10 digits total, 5 fraction digits.
Store into a MySQL 5.5 database, column type double
Load via Hibernate into a JPA entity, i.e. still primitive double
Sum bunch of these values
Print the sum into another XML file
Now, I assume the optimal solution would be converting everything to a decimal format. Unsurprisingly, there is pressure to go with the cheapest solution. It turns out that converting the doubles to BigDecimal just before adding a couple of numbers works, as case B in the following example shows:
import java.math.BigDecimal;

public class Arithmetic {
    public static void main(String[] args) {
        double a = 0.3;
        double b = -0.2;

        // A
        System.out.println(a + b);                                            // 0.09999999999999998
        // B
        System.out.println(BigDecimal.valueOf(a).add(BigDecimal.valueOf(b))); // 0.1
        // C
        System.out.println(new BigDecimal(a).add(new BigDecimal(b)));         // 0.099999999999999977795539507496869191527366638183593750
    }
}
More about this:
Why do we need to convert the double into a string, before we can convert it into a BigDecimal?
Unpredictability of the BigDecimal(double) constructor
I am worried that such a workaround would be a ticking bomb.
First, I am not so sure that this arithmetic is bullet proof for all cases.
Second, there is still some risk that someone in the future might implement some changes and change B to C, because this pitfall is far from obvious and even a unit test may fail to reveal the bug.
I would be willing to live with the second point but the question is: Would this workaround provide correct results? Could there be a case where somehow
Double.valueOf("12345.12345").toString().equals("12345.12345")
is false? According to its javadoc, Double.toString prints just enough digits to uniquely represent the underlying double value, so parsing that string again gives back the same double. Isn't that sufficient for this use case, where I only need to add the numbers and print the sum with this magical Double.toString(double d) method? To be clear, I do prefer what I consider the clean solution, using BigDecimal everywhere, but I am short of arguments to sell it; ideally I'd like an example where converting to BigDecimal before the addition fails to do the job described above.
If you can't avoid parsing into primitive double or store as double, you should convert to BigDecimal as early as possible.
double can't exactly represent most decimal fractions. The value in double x = 7.3; will never be exactly 7.3, but something very, very close to it, with a difference visible from about the 16th significant digit onwards (writing out the exact value takes 50 decimal places or so). Don't be misled by the fact that printing might give exactly "7.3": printing already does some rounding and doesn't show the stored number exactly.
If you do lots of computations with double numbers, the tiny differences will eventually sum up until they exceed your tolerance. So using doubles in computations where decimal fractions are needed, is indeed a ticking bomb.
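A quick way to see the accumulation (my own illustration, not from the original answer):

double sum = 0.0;
for (int i = 0; i < 10000; i++) {
    sum += 0.1;            // each addition carries a tiny representation/rounding error
}
System.out.println(sum);   // not exactly 1000.0; the accumulated error is plainly visible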
[...] we expect values no longer than 10 digits total, 5 fraction digits.
I read that assertion to mean that all numbers you deal with are meant to be exact multiples of 0.00001, without any further digits. You can convert doubles to such BigDecimals with
BigDecimal.valueOf(Math.round(doubleVal * 100000), 5)
This will give you an exact representation of a number with 5 decimal fraction digits, the 5-fraction-digits one that's closest to the input doubleVal. This way you correct for the tiny differences between the doubleVal and the decimal number that you originally meant.
If you'd simply use BigDecimal.valueOf(double val), you'd go through the string representation of the double you're using, which can't guarantee that it's what you want. It depends on a rounding process inside the Double class which tries to represent the double-approximation of 7.3 (being maybe 7.30000000000000123456789123456789125) with the most plausible number of decimal digits. It happens to result in "7.3" (and, kudos to the developers, quite often matches the "expected" string) and not "7.300000000000001" or "7.3000000000000012" which both seem equally plausible to me.
That's why I recommend not to rely on that rounding, but to do the rounding yourself by decimal shifting 5 places, then rounding to the nearest long, and constructing a BigDecimal scaled back by 5 decimal places. This guarantees that you get an exact value with (at most) 5 fractional decimal places.
Then do your computations with the BigDecimals (using the appropriate MathContext for rounding, if necessary).
When you finally have to store the number as a double, use BigDecimal.doubleValue(). The resulting double will be close enough to the decimal value that the above-mentioned conversion will give you back the same BigDecimal you had before (unless you have really huge numbers, like 10 digits before the decimal point - then you're lost with double anyway).
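As a rough sketch of this approach (the class and method names here are mine), rounding each double to five fraction digits before summing might look like this, using the 300.7 + 400.9 example from the question:

import java.math.BigDecimal;

public class ExactSum {
    // round a double to the nearest multiple of 0.00001 and represent it exactly
    static BigDecimal toDecimal5(double val) {
        return BigDecimal.valueOf(Math.round(val * 100000), 5);
    }

    public static void main(String[] args) {
        double[] values = {300.7, 400.9};
        BigDecimal sum = BigDecimal.ZERO;
        for (double v : values) {
            sum = sum.add(toDecimal5(v));
        }
        System.out.println(sum);               // 701.60000
        System.out.println(sum.doubleValue()); // 701.6, if you must go back to double
    }
}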
P.S. Be sure to use BigDecimal only if decimal fractions are relevant to you - there were times when the British Shilling currency consisted of twelve Pence. Representing fractional Pounds as BigDecimal would give a disaster much worse than using doubles.
It depends on the database you are using. If you are using SQL Server you can use the data type numeric(12, 8), where 12 is the precision (total number of digits) and 8 is the scale (number of fraction digits). Similarly, in MySQL you can use DECIMAL(5,2).
You won't lose any precision if you use one of these decimal data types.
In the Java Hibernate entity class you can keep the field as a plain double, for example:
private double latitude;
and map it to a column of the decimal type in the database.
I was just messing around with this method to see what it does. I created a variable with value 3.14 just because it came to my mind at that instance.
double n = 3.14;
System.out.println(Math.nextUp(n));
The preceding displayed 3.1400000000000006.
Tried with 3.1400000000000001, displayed the same.
Tried with 333.33, displayed 333.33000000000004.
With many other values, it displays the appropriate value for example 73.6 results with 73.60000000000001.
What happens to the values between 3.1400000000000000 and 3.1400000000000006? Why does it skip some values? I know about the hardware-related representation problems, but sometimes it seems to work right. Also, given that exact decimal operations cannot be done, why is such a method included in the library? It looks pretty useless if it doesn't always behave the way you'd expect.
One useful trick in Java is to use the exactness of new BigDecimal(double) and of BigDecimal's toString to show the exact value of a double:
import java.math.BigDecimal;

public class Test {
    public static void main(String[] args) {
        System.out.println(new BigDecimal(3.14));
        System.out.println(new BigDecimal(3.1400000000000001));
        System.out.println(new BigDecimal(3.1400000000000006));
    }
}
Output:
3.140000000000000124344978758017532527446746826171875
3.140000000000000124344978758017532527446746826171875
3.1400000000000005684341886080801486968994140625
There are a finite number of doubles, so only a specific subset of the real numbers are the exact value of a double. When you create a double literal, the decimal number you type is represented by the nearest of those values. When you output a double, by default, it is shown as the shortest decimal fraction that would round to it on input. You need to do something like the BigDecimal technique I used in the program to see the exact value.
In this case, both 3.14 and 3.1400000000000001 are closer to 3.140000000000000124344978758017532527446746826171875 than to any other double. The next exactly representable number above that is 3.1400000000000005684341886080801486968994140625.
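A small demo of that shortest-decimal behaviour (my own addition), reusing the values from the question:

double d = 3.1400000000000001;       // parsed to the same double as 3.14
System.out.println(d);               // prints 3.14, the shortest decimal that rounds back to it
System.out.println(Math.nextUp(d));  // prints 3.1400000000000006, the next double up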
Floating point numbers are stored in binary: the decimal representation is just for human consumption.
Using Rick Regan's decimal to floating point converter 3.14 converts to:
11.001000111101011100001010001111010111000010100011111
and 3.1400000000000006 converts to
11.0010001111010111000010100011110101110000101001
which is indeed the next binary number to 53 significant bits.
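If you'd rather stay inside Java than use an online converter, one way (my own addition) to compare the raw bit patterns of the two values is:

long bits314  = Double.doubleToLongBits(3.14);
long bitsNext = Double.doubleToLongBits(3.1400000000000006);
System.out.println(Long.toBinaryString(bits314));   // sign, exponent and fraction bits of 3.14
System.out.println(Long.toBinaryString(bitsNext));  // differs from the line above only in the last bit
System.out.println(bitsNext - bits314);             // 1: they are adjacent doubles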
Like @jgreve mentions, this is due to the use of the float and double primitive types in Java, which leads to the so-called rounding error. The primitive type int, on the other hand, is a fixed-point type that fits exactly within 32 bits. Doubles are not fixed-point, meaning that the result of double calculations must often be rounded in order to fit back into their finite representation, which sometimes (as in your case) leads to surprising values.
See the following two links for more info.
https://stackoverflow.com/a/322875/6012392
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
A workaround could be either of the following two snippets, which give a "direction" to the first double.
double n = 1.4;
double x = 1.5;
System.out.println(Math.nextAfter(n, x));
or
double n = 1.4;
double next = n + Math.ulp(n);
System.out.println(next);
But to handle decimal values exactly, it is recommended to use the BigDecimal class.
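For example (illustrative only; assumes java.math.BigDecimal is imported), constructing the BigDecimals from Strings keeps the values exact:

BigDecimal a = new BigDecimal("0.1");
System.out.println(a.add(new BigDecimal("0.2")));   // 0.3, exactly
System.out.println(0.1 + 0.2);                      // 0.30000000000000004 with plain doubles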
double r = 11.631;
double theta = 21.4;
In the debugger, these are shown as 11.631000000000000 and 21.399999618530273.
How can I avoid this?
These accuracy problems are due to the internal representation of floating point numbers and there's not much you can do to avoid it.
By the way, printing these values at run-time often still leads to the correct results, at least using modern C++ compilers. For most operations, this isn't much of an issue.
I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:
See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.
So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
If you have a value like:
double theta = 21.4;
And you want to do:
if (theta == 21.4)
{
}
You have to be a bit clever: you will need to check whether the value of theta is close to 21.4, rather than exactly equal to it.
if (fabs(theta - 21.4) <= 1e-6)
{
}
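The same tolerance check in Java (my own translation, reusing the 1e-6 tolerance from the snippet above) would be:

if (Math.abs(theta - 21.4) <= 1e-6) {
    // treat theta as "equal to" 21.4
}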
This is partly platform-specific - and we don't know what platform you're using.
It's also partly a case of knowing what you actually want to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely exact number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.
Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.
Use the fixed-point decimal type if you want stability at the limits of precision. There are overheads, and you must explicitly cast if you wish to convert to floating point. If you do convert to floating point you will reintroduce the instabilities that seem to bother you.
Alternately you can get over it and learn to work with the limited precision of floating point arithmetic. For example you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you set up that defines a tolerance. For example, you may choose to regard two values as being equal if they are within 0.0001 of each other.
It occurs to me that you could use operator overloading to make epsilon comparisons transparent. That would be very cool.
For mantissa-exponent representations EPSILON must be computed to remain within the representable precision. For a number N, Epsilon = N / 10E+14
System.Double.Epsilon is the smallest representable positive value for the Double type. It is too small for our purpose. Read Microsoft's advice on equality testing
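A rough Java sketch of that relative-epsilon idea (the helper name and the choice of scaling by the larger operand are my own; the divisor follows the N / 10E+14 rule quoted above):

static boolean nearlyEqual(double a, double b) {
    double epsilon = Math.max(Math.abs(a), Math.abs(b)) / 10E+14;  // tolerance scaled to the magnitude of the inputs
    return Math.abs(a - b) <= epsilon;
}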
I've come across this before (on my blog) - I think the surprise tends to be that the 'irrational' numbers are different.
By 'irrational' here I'm just referring to the fact that they can't be accurately represented in this format. Real irrational numbers (like π) can't be accurately represented at all.
Most people are familiar with 1/3 not working in decimal: 0.3333333333333...
The odd thing is that 1.1 doesn't work in floats. People expect decimal values to work in floating point numbers because of how they think of them:
1.1 is 11 x 10^-1
When actually they're stored in base-2, and the closest a 64-bit double can get is
1.1 is 4953959590107546 x 2^-52
You can't avoid it, you just have to get used to the fact that some floats are 'irrational', in the same way that 1/3 is.
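As an illustration (my own sketch, not from the original answer), you can pull that exact significand and power-of-two exponent out of the double nearest to 1.1:

double d = 1.1;
long bits = Double.doubleToLongBits(d);
long significand = (bits & 0x000FFFFFFFFFFFFFL) | 0x0010000000000000L;  // add back the implicit leading 1 (normal doubles only)
int exponent = (int) ((bits >> 52) & 0x7FF) - 1023 - 52;                // unbias, then account for the 52 fraction bits
System.out.println(d + " == " + significand + " * 2^" + exponent);      // 1.1 == 4953959590107546 * 2^-52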
One way you can avoid this is to use a library that uses an alternative method of representing decimal numbers, such as BCD
If you are using Java and you need accuracy, use the BigDecimal class for floating point calculations. It is slower but safer.
Seems to me that 21.399999618530273 is the single precision (float) representation of 21.4. Looks like the debugger is casting down from double to float somewhere.
You can't avoid this, as you're using floating point numbers with a fixed number of bytes. There is simply no isomorphism possible between the real numbers and such a limited notation.
But most of the time you can simply ignore it. 21.4 == 21.4 would still be true, because it is the same number with the same error. But 21.4f == 21.4 may not be true, because the errors for float and double are different.
If you need fixed precision, perhaps you should try fixed point numbers. Or even integers. For example, I often use int(1000*x) when passing values to a debug pager.
Dangers of computer arithmetic
If it bothers you, you can customize the way some values are displayed during debug. Use it with care :-)
Enhancing Debugging with the Debugger Display Attributes
Refer to General Decimal Arithmetic
Also take note when comparing floats, see this answer for more information.
According to the Java Language Specification:
"If at least one of the operands to a numerical operator is of type double, then the operation is carried out using 64-bit floating-point arithmetic, and the result of the numerical operator is a value of type double. If the other operand is not a double, it is first widened (§5.1.5) to type double by numeric promotion (§5.6)."
Here is the Source
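A two-line illustration (mine) of that promotion rule, reusing the values from the first question:

float c = -273.15f;
System.out.println((float) (c + 273.15));  // c is widened to double, added, then cast back: 6.1035157E-6
System.out.println(c + 273.15f);           // float + float throughout: 0.0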
I have an API which takes a number as a String input, and I need to get the float value of that number. I currently use the Float.parseFloat method to get the float value of my String.
The Java documentation of Float.parseFloat doesn't mention anything about the input being greater than Float.MAX_VALUE.
One of the ways I was thinking of handling this was by checking whether the length of the input String is greater than the length of Float.MAX_VALUE.
Please suggest how I can go about handling this.
Although the javadoc doesn't make it clear, when I tested it, parseFloat of a String too large simply produced a Float of 'infinity'. You could use the isInfinite() method after creation to check the value.
Using something like BigDecimal would probably be a safer option here, especially if you'll be performing any arithmetic on your value.
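For example (a quick sketch of that check; the oversized input string is just an arbitrary value):

float f = Float.parseFloat("1e39");        // larger than Float.MAX_VALUE (about 3.4028235e38)
System.out.println(f);                     // Infinity
System.out.println(Float.isInfinite(f));   // true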
You can use greater precision. Try double or BigDecimal. There are also open arbitrary-precision libraries available.
Here you can find how much each IEEE 754 format can hold: http://en.wikipedia.org/wiki/IEEE_754-2008 . A float tops out around 3.4*10^38, with roughly 7 significant decimal digits.
If you can't parse it properly (e.g. if there are too many significant digits or the exponent is too big: 1.23456789012345e5000) you won't be able to hold it in a single precision float either.
If the number is too big the result is set to Float.POSITIVE_INFINITY, as the rules of IEEE FP arithmetic require, and as a 10-second test shows.
The exponent clips to the maximum exponent value. See the source, line 1197.
Perhaps check for some maximum useful value for your application?
In my Java program there is code like this:
int f_part = (int) ((f_num - num) * 100);
f_num is a double and num is a long. I just want to take the fractional part out and assign it to f_part. But sometimes f_part is one less than it should be: for example, if f_num = 123.55 and num = 123, f_part comes out as 54. And it happens only when f_num and num are greater than 100. I don't know why this is happening. Can someone please explain why this happens and how to correct it?
This is due to the limited precision in doubles.
The root of your problem is that the literal 123.55 actually represents the value 123.54999....
It may seem like it holds the value 123.55 if you print it:
System.out.println(123.55); // prints 123.55
but in fact, the printed value is an approximation. This can be revealed by creating a BigDecimal out of it, (which provides arbitrary precision) and print the BigDecimal:
System.out.println(new BigDecimal(123.55)); // prints 123.54999999999999715...
You can solve it via Math.round, but then you need to know how many decimals the source double is actually supposed to have. Alternatively, you can go through the string representation of the double, which is itself produced by a fairly intricate algorithm.
If you're working with currencies, I strongly suggest you either
Let prices etc. be represented by BigDecimal, which allows you to store numbers such as 0.1 exactly, or
Let an int store the number of cents (as opposed to having a double store the number of dollars).
Both ways are perfectly acceptable and used in practice.
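A tiny sketch of the second option (the variable names and amounts are made up):

long itemCents  = 12355;   // $123.55 stored exactly as a whole number of cents
long otherCents = 40090;   // $400.90
long totalCents = itemCents + otherCents;                             // exact integer addition
System.out.printf("%d.%02d%n", totalCents / 100, totalCents % 100);   // prints 524.45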
From The Floating-Point Guide:
internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
It looks like you're calculating money values. double is a completely inappropriate format for this. Use BigDecimal instead.
int f_part = (int) Math.round(((f_num - num) * 100));
This is one of the most frequently asked (and answered) questions. Floating point arithmetic cannot produce exact results, because it's impossible to fit the infinity of real numbers into 64 bits. Use BigDecimal if you need arbitrary precision.
Floating point arithmetic is not as simple as it may seem and there can be precision issues.
See Why can't decimal numbers be represented exactly in binary?, What Every Computer Scientist Should Know About Floating-Point Arithmetic for details.
If you need absolutely sure precision, you might want to use BigDecimal.