Is there ever a case where a comparison (equals()) between two floating point values would return false if you compare them as DOUBLE but return true if you compare them as FLOAT?
I'm writing a procedure, as part of my group project, to compare two numeric values of any given type. There are four types I have to deal with altogether: double, float, int and long. I'd like to group double and float into one function; that is, I'd just cast any float to double and do the comparison.
Would this lead to any incorrect results?
Thanks.
If you're converting doubles to floats and the difference between them is beyond the precision of the float type, you can run into trouble.
For example, say you have the two double values:
9.876543210
9.876543211
and that the precision of a float was only six decimal digits. That would mean that both float values would be 9.87654, hence equal, even though the double values themselves are not equal.
However, if you're talking about floats being cast to doubles, then identical floats should give you identical doubles. If the floats are different, the extra precision will ensure the doubles are distinct as well.
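A minimal Java sketch of the narrowing hazard, using the values above:

double d1 = 9.876543210;
double d2 = 9.876543211;
System.out.println(d1 == d2);                 // false: the doubles are distinct
System.out.println((float) d1 == (float) d2); // true: both round to the same float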
As long as you are not mixing promoted floats and natively calculated doubles in your comparison you should be ok, but take care:
Comparing floats (or doubles) for equality is difficult - see this lengthy but excellent discussion.
Here are some highlights:
You can't use ==, because of problems with the limited precision of floating point formats
float(0.1) and double(0.1) are different values (0.100000001490116119384765625 and 0.1000000000000000055511151231257827021181583404541015625, respectively). In your case, this means that comparing two floats (by converting to double) will probably be OK, but be careful if you want to compare a float with a double.
It's common to use an epsilon, or small value, to make an approximate comparison (floats a and b are considered equal if abs(a - b) < epsilon). In C, float.h defines FLT_EPSILON for exactly this purpose. However, this type of comparison doesn't work where a and b are both very small, or both very large.
You can address this by using an epsilon scaled relative to the sizes of a and b, but this breaks down in some cases (like comparisons to zero). A sketch of both styles follows this list.
You can compare the integer representations of the floating point numbers to find out how many representable floats there are between them. This is called the ULP difference, for "Units in Last Place" difference. It's generally good, but also breaks down when comparing against zero. (Java's Float.equals() compares the underlying bit patterns via floatToIntBits(), an exact relative of this approach.)
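A minimal Java sketch of the two epsilon styles above (the epsilon parameters are illustrative, not universal constants):

// Absolute comparison: fine near 1.0, breaks down for very large or very small inputs.
static boolean nearlyEqualAbsolute(float a, float b, float absEpsilon) {
    return Math.abs(a - b) < absEpsilon;
}

// Relative comparison: scales with the operands, breaks down near zero.
static boolean nearlyEqualRelative(float a, float b, float relEpsilon) {
    return Math.abs(a - b) <= Math.max(Math.abs(a), Math.abs(b)) * relEpsilon;
}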
The article concludes:
Know what you’re doing
There is no silver bullet. You have to choose wisely.
If you are comparing against zero, then relative epsilons and ULPs based comparisons are usually meaningless. You’ll need to use an absolute epsilon, whose value might be some small multiple of FLT_EPSILON and the inputs to your calculation. Maybe.
If you are comparing against a non-zero number then relative epsilons or ULPs based comparisons are probably what you want. You’ll probably want some small multiple of FLT_EPSILON for your relative epsilon, or some small number of ULPs. An absolute epsilon could be used if you knew exactly what number you were comparing against.
If you are comparing two arbitrary numbers that could be zero or non-zero then you need the kitchen sink. Good luck and God speed.
So, to answer your question:
If you are downgrading doubles to floats, then you might lose precision, and incorrectly report two different doubles as equal (as paxdiablo points out.)
If you are upgrading identical floats to double, then the added precision won't be a problem unless you are comparing a float with a double. (Say you had 1.234 as a float, with only four decimal digits of accuracy; then the double 1.2345 MIGHT represent the same underlying value as the float. In that case you'd probably be better off doing the comparison at the precision of the float or, more generally, at the error level of the least accurate representation in the comparison.)
If you know the number you'll be comparing with, you can follow the advice quoted above.
If you're comparing arbitrary numbers (which could be zero or non-zero), there's no way to compare them correctly in all cases - pick one comparison and know its limitations.
A couple of practical considerations (since this sounds like it's for an assignment):
The epsilon comparison mentioned by most is probably fine (but include a discussion of its limitations in the write-up). If you're ever planning to compare doubles to floats, try to do the comparison in float; if not, try to do all comparisons in double. Even better, just use doubles everywhere.
If you want to totally ace the assignment, include a write-up of the issues when comparing floats and the rationale for why you chose any particular comparison method.
I don't understand why you're doing this at all. The == operator already caters for all possible types on both sides, with extensive rules on type coercion and widening which are already specified in the relevant language standards. All you have to do is use it.
I'm perhaps not answering the OP's question but rather responding to some of the more or less fuzzy advice above, which requires clarification.
Comparing two floating point values for equality is absolutely possible and can be done. Whether the type is single or double precision is often of lesser importance.
Having said that, the steps leading up to the comparison itself require great care and a thorough understanding of floating-point dos and don'ts, whys and why nots.
Consider the following C statements:
result = a * b / c;
result = (a * b) / c;
result = a * (b / c);
In most naive floating-point programming they are seen as "equivalent", i.e. as producing the "same" result. In the real world of floating-point they may be. Or actually, the first two are equivalent (as the second follows C evaluation rules, i.e. operators of the same priority evaluate left to right). The third may or may not be equivalent to the first two.
Why is this?
"a * b / c" or "b / c * a" may cause the "inexact" exception i e an intermediate or the final result (or both) is (are) not exact(ly representable in floating point format). If this is the case the results will be more or less subtly different. This may or may not lead to the end results being amenable to an equality comparison. Being aware of this and single-stepping through operations one at a time - noting intermediate results - will allow the patient programmer to "beat the system" i e construct a quality floating-point comparison for practically any situation.
For everyone else, passing over the equality comparison for floating-point numbers is good, solid advice.
It's really a bit ironic because most programmers know that integer math results in predictable truncations in various situations. When it comes to floating-point almost everyone is more or less thunderstruck that results are not exact. Go figure.
You should be okay to make that cast as long as the equality test involves a delta.
For example: Math.abs((double) floatVal1 - (double) floatVal2) < 1e-6 should work.
Edit in response to the question change
No you would not. The above still stands.
For a comparison between a float f and a double d, you can calculate the difference between f and d. If abs(f - d) is less than some threshold, you can consider them equal. The threshold could be either absolute or relative, depending on your application's requirements. There are some good solutions here; I hope this helps.
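A minimal sketch of that suggestion (the helper name and threshold are illustrative, not from a standard API):

// Mixed float/double comparison within an absolute threshold;
// f is widened to double before the subtraction.
static boolean approxEqual(float f, double d, double threshold) {
    return Math.abs(f - d) < threshold;
}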
Would I ever get an incorrect result if I promote 2 floats to double and do a 64-bit comparison rather than a 32-bit comparison?
No.
If you start with two floats, which could be float variables (float x = foo();) or float constants (1.234234234f) then you can compare them directly, of course. If you convert them to double and then compare them then the results will be identical.
This works because double is a super-set of float. That is, every value that can be stored in a float can be stored in a double. The range of the exponent and mantissa are both increased. There are billions of values that can be stored in a double but not in a float, but there are zero values that can be stored in a float but not a double.
As discussed in my float comparison article, it can be tricky to do a meaningful comparison between float or double values, because rounding errors may have crept in. But converting both numbers from float to double doesn't change this. All of the mentions of epsilons (which are often but not always needed) are completely orthogonal to the question.
On the other hand, comparing a float to a double is madness. 1.1 (a double) is not equal to 1.1f (a float) because 1.1 cannot be exactly represented in either.
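Both halves of that in one small Java sketch:

float x = 0.1f;
float y = 0.1f;
System.out.println(x == y);                   // true: identical floats
System.out.println((double) x == (double) y); // still true: widening to double is exact
System.out.println(1.1f == 1.1);              // false: two different approximations of 1.1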
Related
This question is kind of language-agnostic but the code is written in Java.
We have all heard that comparing floating-point numbers for equality is generally wrong. But what if I wanted to compare two exact same literal float values (or strings representing exact same literal values converted to floats)?
I'm quite sure that the numbers will be exactly equal (well, because they must be equal in binary—how can the exact same thing result in two different binary numbers?!) but I wanted to be sure.
Case 1:
void test1() {
    float f1 = 4.7f;
    float f2 = 4.7f;
    System.out.println(f1 == f2);
}
Case 2:
class Movie {
    String rating; // for some reason the type is String
}

void test2() {
    movie1.rating = "4.7";
    movie2.rating = "4.7";
    float f1 = Float.parseFloat(movie1.rating);
    float f2 = Float.parseFloat(movie2.rating);
    System.out.println(f1 == f2);
}
In both situations, the expression f1 == f2 should result in true. Am I right? Can I safely compare ratings for equality if they have the same literal float or string values?
There's a rule of thumb that you should apply to all programming rules of thumb (rule of thumbs?):
They are oversimplified, and will result in boneheaded decision making if pushed too far. IF you do not fully -grok- the intent behind the rule of thumb, you will mess up. Perhaps the rule of thumb remains a net positive (applying it without thought will improve things more than it will make them worse), but it will cause damage, and in any case it cannot be used as an argument in a debate.
So, with that in mind, clearly, there is no point in asking the question:
"Giving that the rule of thumb 'do not use == to compare floats' exists, is it ALWAYS bad?".
The answer is the extremely obvious: Duh, no. It's not ALWAYS bad, because rules of thumb pretty much by definition, if not by common sense, never ALWAYS apply.
So let's break it down then.
WHY is there a rule of thumb that you shouldn't == compare floats?
Your question suggests you already know this: it's because doing any math on floating-point values as represented by IEEE 754 concepts such as Java's double or float is inexact (vs. concepts like Java's BigDecimal, which is exact*).
Do what you should always do when faced with a rule of thumb that, upon grokking why it exists, you realize does not apply to your scenario: completely ignore it.
Perhaps your question boils down to: I THINK I grok the rule of thumb, but perhaps I'm missing something; aside from the 'floating point math introduces small deviations which mess up == comparison', which does not apply to this case, are there any other reasons for this rule of thumb that I am not aware of?
In which case, my answer is: As far as I know, no.
*) But BigDecimal has its own equality problems, such as: are two BigDecimal objects that represent the same mathematical number precisely, but which are configured to render at a different scale, 'equal'? That depends on whether your viewpoint is that they are numbers, or that they are objects representing an exact decimal number along with some meta-properties, including how to render it and how to round things if explicitly asked to do so. For what it is worth, the equals implementation of BigDecimal, which has to make a Sophie's choice between two equally valid interpretations of what equality means, chooses 'I represent a number along with a bunch of metadata' (value and scale must both match; use compareTo for pure numeric equality), not simply 'I represent a number'. The same Sophie's choice exists in all JPA/Hibernate stacks: does a JPA object represent 'a row in the database' (so that equality is defined solely by the primary key value, and two unsaved objects cannot be equal, not even to themselves, unless they are the same reference), or does it represent the thing that the row represents, e.g. a student, and not 'a row in the DB that represents a student', in which case the unique ID is the one field that does NOT matter for identity, and all the others (name, birthdate, social security number, etc.) do. Equality is hard.
Yes. Compile time constants that are the same are evaluated consistently.
If you think about it, they must be the same, because there’s only one compiler and it converts literals to their floating point representation deterministically.
Yes, you can compare floats like this. The thing is that even if 4.7 isn't 4.7 when converted to a float, it will be converted consistently to the same value.
In general it is not wrong per se to compare floats like this. But for more complex math, you might want to use Math.round(), or define a "sameness" tolerance that two values must fall within to be counted as "the same".
There is also an arbitrariness to fixed-point numbers. For instance,
1,000,000,001
is bigger than
1,000,000,000
Are these two numbers different? It depends on the precision you need. But for most purposes, these numbers are functionally the same.
This question is kind of language-agnostic…
Actually, there is no floating-point issue here, and the answer depends entirely on the language.
There is no floating-point issue because IEEE-754 is clear: Two floating-point datums (finite numbers, infinities, and/or NaNs) compare as equal if and only if they correspond to the same real number.
There are language issues because how literals are mapped to floating-point numbers and how source text is mapped to operations differs from language to language. For example, C 2018 6.4.4.2 5 says:
All floating constants of the same source form77) shall convert to the same internal format with the same value.
And footnote 77 says:
1.23, 1.230, 123e-2, 123e-02, and 1.23L are all different source forms and thus need not convert to the same internal format and value.
Thus the C standard permits 1.23 == 1.230 to evaluate to false. (There are historical reasons this was permitted, leaving it as a quality-of-implementation issue.) If by “same” literal float value, you mean the exact same source text, then this problem does not occur in C; the exact same source text must produce the same floating-point value each time in a particular C implementation. However, this example teaches us to be cautious.
C also allows implementations flexibility in how floating-point operations are performed: It allows an implementation to use more than the nominal precision in evaluating expressions, and it allows using different precisions in different parts of the same expression. So 1./3. == 1./3. could evaluate to false.
Some languages, like Python, do not have a good formal specification and are largely silent about how floating-point operations are performed. It is conceivable a Python implementation could use excess precision available in processor registers to convert the source text 1.3 to a long double or similar type, then save it somewhere as a double, then convert the source text 1.3 to a long double, then retrieve the double to compare it to the long double still in registers and get a result indicating inequality.
This sort of issue does not occur in implementations I am aware of, but, when asking a question like this, asking whether a rule always holds, regardless of language, leaves the door open for possible exceptions.
double r = 11.631;
double theta = 21.4;
In the debugger, these are shown as 11.631000000000000 and 21.399999618530273.
How can I avoid this?
These accuracy problems are due to the internal representation of floating point numbers and there's not much you can do to avoid it.
By the way, printing these values at run-time often still leads to the correct results, at least using modern C++ compilers. For most operations, this isn't much of an issue.
I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:
See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.
So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
If you have a value like:
double theta = 21.4;
And you want to do:
if (theta == 21.4)
{
}
You have to be a bit clever: you will need to check whether the value of theta is really close to 21.4, rather than exactly equal to it.
if (fabs(theta - 21.4) <= 1e-6)
{
}
This is partly platform-specific - and we don't know what platform you're using.
It's also partly a case of knowing what you actually want to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely exact number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.
Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.
Use the fixed-point decimal type if you want stability at the limits of precision. There are overheads, and you must explicitly cast if you wish to convert to floating point. If you do convert to floating point you will reintroduce the instabilities that seem to bother you.
Alternately you can get over it and learn to work with the limited precision of floating point arithmetic. For example you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you set up that defines a tolerance. For example, you may choose to regard two values as being equal if they are within 0.0001 of each other.
It occurs to me that you could use operator overloading to make epsilon comparisons transparent. That would be very cool.
For mantissa-exponent representations, EPSILON must be computed to remain within the representable precision. For a number N, epsilon = N / 10E+14.
System.Double.Epsilon is the smallest representable positive value for the Double type. It is too small for our purpose. Read Microsoft's advice on equality testing
I've come across this before (on my blog) - I think the surprise tends to be that the 'irrational' numbers are different.
By 'irrational' here I'm just referring to the fact that they can't be accurately represented in this format. Real irrational numbers (like π - pi) can't be accurately represented at all.
Most people are familiar with 1/3 not working in decimal: 0.3333333333333...
The odd thing is that 1.1 doesn't work in floats. People expect decimal values to work in floating point numbers because of how they think of them:
1.1 is 11 x 10^-1
When actually they're in base-2
1.1 is 154811237190861 x 2^-47
You can't avoid it, you just have to get used to the fact that some floats are 'irrational', in the same way that 1/3 is.
One way you can avoid this is to use a library that uses an alternative method of representing decimal numbers, such as BCD
If you are using Java and you need accuracy, use the BigDecimal class for floating point calculations. It is slower but safer.
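For instance, with BigDecimal the decimal arithmetic above comes out exact; note compareTo rather than equals here, since BigDecimal.equals also compares scale:

java.math.BigDecimal a = new java.math.BigDecimal("1.1");
java.math.BigDecimal b = new java.math.BigDecimal("2.2");
System.out.println(a.add(a));                   // 2.2, exactly
System.out.println(a.add(a).compareTo(b) == 0); // true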
Seems to me that 21.399999618530273 is the single precision (float) representation of 21.4. Looks like the debugger is casting down from double to float somewhere.
You can't avoid this, as you're using floating point numbers with a fixed quantity of bytes. There's simply no isomorphism possible between real numbers and their limited notation.
But most of the time you can simply ignore it. 21.4 == 21.4 would still be true, because it is the same number with the same error. But 21.4f == 21.4 may not be true, because the errors for float and double are different.
If you need fixed precision, perhaps you should try fixed-point numbers. Or even integers. I, for example, often use int(1000*x) when passing values to a debug pager.
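A sketch of that scaled-integer idea (the factor 1000 is just the example scale):

double x = 21.4;
long thousandths = Math.round(1000.0 * x); // 21400: an exact integer count of thousandths
System.out.println(thousandths);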
Dangers of computer arithmetic
If it bothers you, you can customize the way some values are displayed during debug. Use it with care :-)
Enhancing Debugging with the Debugger Display Attributes
Refer to General Decimal Arithmetic
Also take note when comparing floats, see this answer for more information.
According to the Java Language Specification:
"If at least one of the operands to a numerical operator is of type double, then the operation is carried out using 64-bit floating-point arithmetic, and the result of the numerical operator is a value of type double. If the other operand is not a double, it is first widened (§5.1.5) to type double by numeric promotion (§5.6)."
Here is the source.
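A small illustration of that promotion rule (the values are arbitrary):

int i = 1;
double d = 0.5;
System.out.println(i + d);    // 1.5: i is widened to double before the addition

float f = 1.3f;
System.out.println(f == 1.3); // false: f is widened to double, then compared with the double 1.3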
Let's say I have the following:
float x = ...
float y = ...
What I'd like to do is simply compare whether x is greater than y or not. I am not interested in their equality.
My question is, should I take into account precision when just performing a > or a < check on floating point values? Am I correct to assume that precision is only taken into account for equality checks?
Since you already have two floats, named x and y, and if there hasn't been any casting beforehand, you can safely use ">" and "<" for comparison. However, say you had two doubles d1 and d2 with d1 > d2, and you cast them down to floats f1 and f2 respectively: you might get f1 == f2 because of precision loss.
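For example (values chosen so the doubles differ by less than a float can resolve):

double d1 = 1.00000001;
double d2 = 1.00000002;
System.out.println(d1 < d2);   // true: the doubles are distinct
float f1 = (float) d1;
float f2 = (float) d2;
System.out.println(f1 == f2);  // true: both round to 1.0f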
There is already a wheel you don't need to invent:
if (Float.compare(x, y) < 0)
// x is less than y
All float values have the same precision as each other.
It really depends on where those two floats came from. If there was roundoff earlier in their computation, then if the accumulated roundoff is large enough to exceed the difference between the "ideal" answers, you may not get the results you expect for any of the comparison values.
As a general rule, any comparison of floats must be considered fuzzy. If you must do it, you are responsible for understanding the sources of roundoff error and deciding whether you care about it and how you want to handle it if so. It's usually better to avoid comparing floats entirely unless your algorithm absolutely requires that you do so... and if you must, to make sure that a "close but not quite" comparison will not break your program. You may be able to restructure the formulas and order of computation to reduce loss of precision. Going up to double will reduce the accumulated error but is not a complete solution.
If you must compare, and the comparison is important, don't use floats. If you need absolute precision of floating-like numbers, use an arbitrary-precision math package like bignums or a rational-numbers implementation and accept the performance cost. Or switch to scaled integers -- which also round off, but round off in a way that makes more sense to humans.
This is a difficult question. The answer really depends on how you got those numbers.
First, you need to understand that floating point numbers ARE precise, but that they don't necessarily represent the number that you thought they did. Floating point types in typical programming languages represent a finite subset of the infinite set of Real numbers. So if you have an arbitrary real number, the chances are that you cannot represent it precisely using a float or double type. In fact ...
The only Real numbers that can be represented exactly as float or double values have the form
mantissa * power(2, exponent)
where "mantissa" and "exponent" are integers in prescribed ranges.
And the corollary is that most "decimal" numbers don't have an exact float or double representation either.
So in fact, we end up with something like this:
true_value = floating_point_value + delta,
where "delta" is the error; i.e. the small (or not so small) difference between the true value and the float or double value.
Next, when you perform a calculation using floating point values, there are cases where the exact result cannot be represented as a floating point value. An obvious example is:
1.0f / 3.0f
for which the true value is 0.33333... recurring, which is not representable in any finite base 2 (or base 10!) floating point representation. Instead what happens is that a result is produced by rounding to the nearest float or double value ... introducing more error.
As you perform more and more calculations, the errors can potentially grow ... or stay stable ... depending on the sequence of operations that are performed. (There's a branch of mathematics that deals with this: Numerical Analysis.)
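To make the rounding concrete (the printed values assume IEEE-754 arithmetic):

float third = 1.0f / 3.0f;
System.out.println(third);            // 0.33333334: the nearest float to 1/3, not 1/3 itself
System.out.println(third * 3.0f);     // 1.0 here, only because the rounding errors happen to cancel
System.out.println(0.1 + 0.2 == 0.3); // false: each double term carries its own rounding error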
So back to your questions:
"Should I take into account precision when just performing a > or a < check on floating point values?"
It depends on how you got those floating point values, and what you know about the "delta" values relative to the true Real values they nominally represent.
If there are no errors (i.e. delta < the smallest representable difference for the value), then you can safely compare using ==, < or >.
If there are possible errors, then you need to take account of those errors ... if the semantics of the comparison are intended to be couched in terms of the (nominal) true values. Furthermore, you need to have a credible estimate of the "delta" (the accumulated error) when you code the comparison.
In short, there is no simple (correct) answer ...
"Am I correct to assume that precision is only taken into account for equality checks? "
In fact precision is not "taken into account" in any of the comparison operators. These operators treat the operands as precise values, and compare them accordingly. It is up to your code to take account of precision issues, based on your error estimates for the preceding calculations.
If you have estimates for the deltas, then a mathematically sound < comparison would be something like this:
// Assume true_v1 = v1 +- delta_v1 ... (delta_v1 is a non-negative constant)
if (v1 + delta_v1 < v2 - delta_v2) {
// true_v1 is less than true_v2
}
and so on ...
I expected the following code to produce: "Both are equal", but I got "Both are NOT equal":
float a = 1.3f;
double b = 1.3;

if (a == b) {
    System.out.println("Both are equal");
} else {
    System.out.println("Both are NOT equal");
}
What is the reason for this?
It's because the closest float value to 1.3 isn't the same as the closest double value to 1.3. Neither value will be exactly 1.3 - that can't be represented exactly in a non-recurring binary representation.
To give a different understanding of why this happens, suppose we had two decimal floating point types - decimal5 and decimal10, where the number represents the number of significant digits. Now suppose we tried to assign the value of "a third" to both of them. You'd end up with
decimal5 oneThird = 0.33333
decimal10 oneThird = 0.3333333333
Clearly those values aren't equal. It's exactly the same thing here, just with different bases involved.
However if you restrict the values to the less-precise type, you'll find they are equal in this particular case:
double d = 1.3d;
float f = 1.3f;
System.out.println((float) d == f); // Prints true
That's not guaranteed to be the case, however. Sometimes the approximation from the decimal literal to the double representation, followed by the approximation of that value to the float representation, ends up being less accurate than the straight decimal-to-float approximation. One example of this is 1.0000001788139343 (thanks to stephentyrone for finding this example).
Somewhat more safely, you can do the comparison between doubles, but use a float literal in the original assignment:
double d = 1.3f;
float f = 1.3f;
System.out.println(d == f); // Prints true
In the latter case, it's a bit like saying:
decimal10 oneThird = 0.3333300000
However, as pointed out in the comments, you almost certainly shouldn't be comparing floating point values with ==. It's almost never the right thing to do, because of precisely this sort of thing. Usually if you want to compare two values you do it with some sort of "fuzzy" equality comparison, checking whether the two numbers are "close enough" for your purposes. See the Java Traps: double page for more information.
If you really need to check for absolute equality, that usually indicates that you should be using a different numeric format in the first place - for instance, for financial data you should probably be using BigDecimal.
A float is a single precision floating point number. A double is a double precision floating point number. More details here: http://www.concentric.net/~Ttwang/tech/javafloat.htm
Note: It is a bad idea to check exact equality for floating point numbers. Most of the time, you want to do a comparison based on a delta or tolerance value.
For example:
float a = 1.3f;
double b = 1.3;
float delta = 0.000001f;
if (Math.abs(a - b) < delta)
{
System.out.println("Close enough!");
}
else
{
System.out.println("Not very close!");
}
Some numbers can't be represented exactly in floating point (e.g. 0.01) so you might get unexpected results when you compare for equality.
Read this article. It clearly illustrates your scenario, with examples using the double and float types.
float a=1.3f;
double b=1.3;
At this point you have two variables containing binary approximations to the Real number 1.3. The first approximation is accurate to about 7 decimal digits, and the second one is accurate to about 15 decimal digits.
if(a==b) {
The expression a==b is evaluated in two stages. First the value of a is converted from a float to a double by padding the binary representation. The result is still only accurate to about 7 decimal digits as a representation of the Real number 1.3. Next you compare the two different approximations. Since they are different, the result of a==b is false.
There are two lessons to learn:
Float (and double) literals are almost always approximations; e.g. the actual number that corresponds to the literal 1.3f is not exactly equal to the Real number 1.3.
Every time you do a floating point computation, errors creep in, and these errors tend to build up. So when you are comparing floats / doubles it is usually a mistake to use a simple "==", "<", etcetera. Instead you should use |a - b| < delta, where delta is chosen appropriately. (And figuring out an appropriate delta is not always straightforward either.)
You should have taken that course in Numerical Analysis :-)
Never check for equality between floating point numbers. Specifically, to answer your question, the number 1.3 is hard to represent as a binary floating point and the double and float representations are different.
The problem is that Java (and alas .NET as well) is inconsistent about whether a float value represents a single precise numeric quantity or a range of quantities. If a float is considered to represent an exact numeric quantity of the form Mant * 2^Exp (where Mant is an integer from 0 to 2^24 and Exp is an integer), then an attempt to cast any number not of that form to float should throw an exception. If it's considered to represent "the locus of numbers for which some particular representation in the above form has been deemed likely to be the best", then a double-to-float cast would be correct even for double values not of the above form [casting the double that best represents a quantity to a float will almost always yield the float that best represents that quantity, though in some corner cases (e.g. numeric quantities in the range 8888888.500000000001 to 8888888.500000000932) the float which is chosen may be a few parts per trillion worse than the best possible float representation of the actual numeric quantity].
To use an analogy, suppose two people each have a ten-centimeter-long object and they measure it. Bob uses an expensive set of calipers and determines that his object is 3.937008" long. Joe uses a tape measure and determines that his object is 3 15/16" long. Are the objects the same size? If one converts Joe's measurement to millionths of an inch (3.937500"), the measurements will appear different; but if one instead converts Bob's measurement to the nearest 1/256" fraction, they will appear equal. Although the former comparison might seem more "precise", the latter is apt to be more meaningful. Joe's measurement of 3 15/16" doesn't really mean 3.937500"--it means "a distance which, using a tape measure, is indistinguishable from 3 15/16"". And 3.937008" is, like the size of Joe's object, a distance which using a tape measure would be indistinguishable from 3 15/16".
Unfortunately, even though it would be more meaningful to compare the measurements using the lower precision, Java's floating-point-comparison rules assume that a float represents a single precise numeric quantity, and perform comparisons on that basis. While there are some cases where this is useful (e.g. knowing whether the particular double produced by casting some value to float and back to double would match the starting value), in general direct equality comparisons between float and double are not meaningful. Even though Java does not require it, one should always cast the operands of a floating-point equality comparison to the same type. The semantics that result from casting the double to float before the comparison are different from those of casting the float to double, and the behavior Java picks by default (cast the float to double) is often semantically wrong.
Actually neither float nor double can store 1.3. I am not kidding. Watch this video carefully.
https://www.youtube.com/watch?v=RtHKwsXuRkk&index=50&list=PL6pxHmHF3F5JPdnEqKALRMgogwYc2szp1
According to this java.sun page == is the equality comparison operator for floating point numbers in Java.
However, when I type this code:
if(sectionID == currentSectionID)
into my editor and run static analysis, I get: "JAVA0078 Floating point values compared with =="
What is wrong with using == to compare floating point values? What is the correct way to do it?
The correct way to test floats for 'equality' is:
if(Math.abs(sectionID - currentSectionID) < epsilon)
where epsilon is a very small number like 0.00000001, depending on the desired precision.
Floating point values can be off by a little bit, so they may not report as exactly equal. For example, setting a float to "6.1" and then printing it out again, you may get a reported value of something like "6.099999904632568359375". This is fundamental to the way floats work; therefore, you don't want to compare them using equality, but rather comparison within a range, that is, if the diff of the float to the number you want to compare it to is less than a certain absolute value.
This article on the Register gives a good overview of why this is the case; useful and interesting reading.
Just to give the reason behind what everyone else is saying.
The binary representation of a float is kind of annoying.
In binary, most programmers know the correlation between 1b=1d, 10b=2d, 100b=4d, 1000b=8d
Well it works the other way too.
.1b=.5d, .01b=.25d, .001b=.125d, ...
The problem is that there is no exact way to represent most decimal numbers like .1, .2, .3, etc. All you can do is approximate them in binary. The system does a little fudge-rounding when the numbers print so that it displays .1 instead of .10000000000001 or .099999999999 (which are probably just as close to the stored representation as .1 is).
Edit from comment: The reason this is a problem is our expectations. We fully expect 2/3 to be fudged at some point when we convert it to decimal, to either .7 or .67 or .666667... But we don't automatically expect .1 to be rounded in the same way as 2/3--and that's exactly what's happening.
By the way, if you are curious, the number it stores internally is a pure binary representation using a binary "scientific notation". So if you told it to store the decimal number 10.75d, it would store 1010b for the 10 and .11b for the .75. So it would store .101011, plus a few bits at the end that say: move the decimal point four places right.
(Although technically it's no longer a decimal point, it's now a binary point, but that terminology wouldn't have made things more understandable for most people who would find this answer of any use.)
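Incidentally, you can see the exact stored value from Java, since the BigDecimal(double) constructor converts the binary value exactly:

System.out.println(new java.math.BigDecimal(0.1f));
// prints 0.100000001490116119384765625: the value the float actually stores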
What is wrong with using == to compare floating point values?
Because it's not true that 0.1 + 0.2 == 0.3
As of today, the quick & easy way to do it is:
if (Float.compare(sectionID, currentSectionID) == 0) {...}
However, note that Float.compare() performs an exact, total-ordering comparison (it treats NaN as equal to itself and -0.0f as less than 0.0f); it does not build in any margin for the rounding error that creeps into calculations on floats.
If a tolerance (an epsilon, from @Victor's answer) or customized precision is needed, then
float epsilon = Float.MIN_NORMAL;
if (Math.abs(sectionID - currentSectionID) < epsilon) {...}
is another option.
I think there is a lot of confusion around floats (and doubles), it is good to clear it up.
There is nothing inherently wrong in using floats as IDs in standard-compliant JVM [*]. If you simply set the float ID to x, do nothing with it (i.e. no arithmetics) and later test for y == x, you'll be fine. Also there is nothing wrong in using them as keys in a HashMap. What you cannot do is assume equalities like x == (x - y) + y, etc. This being said, people usually use integer types as IDs, and you can observe that most people here are put off by this code, so for practical reasons, it is better to adhere to conventions. Note that there are as many different double values as there are long values, so you gain nothing by using double. Also, generating "next available ID" can be tricky with doubles and requires some knowledge of the floating-point arithmetic. Not worth the trouble.
On the other hand, relying on numerical equality of the results of two mathematically equivalent computations is risky. This is because of the rounding errors and loss of precision when converting from decimal to binary representation. This has been discussed to death on SO.
[*] When I said "standard-compliant JVM" I wanted to exclude certain brain-damaged JVM implementations. See this.
Floating point values are not reliable, due to roundoff error.
As such, they should probably not be used as key values, such as sectionID. Use integers instead, or long if int doesn't contain enough possible values.
This is a problem not specific to Java. Using == to compare two floats/doubles/any decimal-type numbers can potentially cause problems because of the way they are stored.
A single-precision float (as per IEEE standard 754) has 32 bits, distributed as follows:
1 bit - Sign (0 = positive, 1 = negative)
8 bits - Exponent (a special (bias-127) representation of the x in 2^x)
23 bits - Mantissa. The actual number that is stored.
The mantissa is what causes the problem. It's kind of like scientific notation, only the number is in base 2 (binary) and looks like 1.110011 x 2^5 or something similar.
But in binary, the first digit is always a 1 (except in the representation of 0).
Therefore, to save a bit of memory space (pun intended), IEEE decided that the leading 1 should be assumed. For example, a mantissa of 1011 really is 1.1011.
This can cause some issues with comparison, especially with 0, since 0 cannot be represented with an implicit leading 1 and needs a special all-zeros encoding.
This is the main reason the == is discouraged, in addition to the floating point math issues described by other answers.
Java has a unique problem in that it is intended to behave identically across many different platforms, each of which could have its own floating-point quirks (such as extended intermediate precision). That makes it even more important to avoid ==.
The proper way to compare two floats (not language-specific, mind you) for equality is as follows:
if(ABS(float1 - float2) < ACCEPTABLE_ERROR)
//they are approximately equal
where ACCEPTABLE_ERROR is #defined or some other constant equal to 0.000000001 or whatever precision is required, as Victor mentioned already.
Some languages have this functionality or this constant built in, but generally this is a good habit to be in.
Here is a very long (but hopefully useful) discussion about this and many other floating point issues you may encounter: What Every Computer Scientist Should Know About Floating-Point Arithmetic
In addition to previous answers, you should be aware that there are strange behaviours associated with -0.0f and +0.0f (they are == but not equals) and Float.NaN (it is equals but not ==) (hope I've got that right - argh, don't do it!).
Edit: Let's check!
import static java.lang.Float.NaN;

public class Fl {
    public static void main(String[] args) {
        System.err.println(-0.0f == 0.0f);                             // true
        System.err.println(new Float(-0.0f).equals(new Float(0.0f))); // false
        System.err.println(NaN == NaN);                                // false
        System.err.println(new Float(NaN).equals(new Float(NaN)));    // true
    }
}
Welcome to IEEE/754.
First of all, are they float or Float? If one of them is a Float, you should use the equals() method. Also, probably best to use the static Float.compare method.
You can use Float.floatToIntBits().
Float.floatToIntBits(sectionID) == Float.floatToIntBits(currentSectionID)
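Note what the bit-pattern test means for the special values (this matches Float.equals(), and differs from ==):

System.out.println(Float.floatToIntBits(0.0f) == Float.floatToIntBits(-0.0f));          // false
System.out.println(Float.floatToIntBits(Float.NaN) == Float.floatToIntBits(Float.NaN)); // true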
The following automatically uses the best precision:
/**
 * Compare two floats for (almost) equality. Will check whether they are
 * at most 5 ULP apart.
 */
public static boolean isFloatingEqual(float v1, float v2) {
    if (v1 == v2)
        return true;
    float absoluteDifference = Math.abs(v1 - v2);
    float maxUlp = Math.max(Math.ulp(v1), Math.ulp(v2));
    return absoluteDifference < 5 * maxUlp;
}
Of course, you might choose more or less than 5 ULPs (‘unit in the last place’).
If you’re into the Apache Commons library, the Precision class has compareTo() and equals() with both epsilon and ULP.
You may want it to be ==, but 123.4444444444443 != 123.4444444444442.
If you *have to* use floats, the strictfp keyword may be useful.
http://en.wikipedia.org/wiki/strictfp
Two different calculations which produce equal real numbers do not necessarily produce equal floating point numbers. People who use == to compare the results of calculations usually end up being surprised by this, so the warning helps flag what might otherwise be a subtle and difficult to reproduce bug.
Are you dealing with outsourced code that would use floats for things named sectionID and currentSectionID? Just curious.
@Bill K: "The binary representation of a float is kind of annoying." How so? How would you do it better? There are certain numbers that cannot be represented properly in any base, because they never end. Pi is a good example. You can only approximate it. If you have a better solution, contact Intel.
As mentioned in other answers, doubles can have small deviations. And you could write your own method to compare them using an "acceptable" deviation. However ...
There is an apache class for comparing doubles: org.apache.commons.math3.util.Precision
It contains some interesting constants: SAFE_MIN and EPSILON, which are the maximum possible deviations of simple arithmetic operations.
It also provides the necessary methods to compare, equal or round doubles. (using ulps or absolute deviation)
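A brief sketch of its use, assuming commons-math3 is on the classpath:

import org.apache.commons.math3.util.Precision;

System.out.println(Precision.equals(0.1 + 0.2, 0.3, 1e-9)); // true: within an absolute epsilon
System.out.println(Precision.equals(0.1 + 0.2, 0.3, 1));    // true: at most 1 ULP apart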
As a one-line answer, I can say you should use:
Float.floatToIntBits(sectionID) == Float.floatToIntBits(currentSectionID)
To help you learn more about using the related operators correctly, I'll elaborate on some cases here:
Generally, there are three ways to test strings in Java. You can use ==, .equals(), or Objects.equals().
How are they different? == tests for reference equality in strings, meaning it finds out whether the two references point to the same object. On the other hand, .equals() tests whether the two strings are logically equal in value. Finally, Objects.equals() checks for nulls in the two strings, then determines whether to call .equals().
Ideal operator to use
Well, this has been subject to lots of debate, because each of the three operators has its own set of strengths and weaknesses. For example, == is often the preferred option when comparing object references, but there are cases where it may seem to compare string values as well.
However, what you get is a false result, because Java creates the illusion that you are comparing values when in a real sense you are not. Consider the two cases below:
Case 1:
String a="Test";
String b="Test";
if(a==b) ===> true
Case 2:
String nullString1 = null;
String nullString2 = null;
//evaluates to true
nullString1 == nullString2;
//throws an exception
nullString1.equals(nullString2);
So, it’s way better to use each operator when testing the specific attribute it’s designed for. But in almost all cases, Objects.equals() is the more universal operator, and thus experienced developers opt for it.
Here you can get more details: http://fluentthemes.com/use-compare-strings-java/
The correct way would be
java.lang.Float.compare(float1, float2)
One way to reduce rounding error is to use double rather than float. This won't make the problem go away, but it does reduce the amount of error in your program, and float is almost never the best choice, IMHO.