Comparing mathematical expressions - java

So here's my situation: I have two mathematical expressions which contain variables (x, y, z, etc). I have already compiled them to postfix using the shunting-yard algorithm for execution, and now I need a way to test whether they're mathematically equal.
Examples:
x+5==5+x
x*2==x+x
4/(x/2)==8/x
My initial thinking is to just throw a couple of thousand different random inputs and see if the evaluation result is the same.
Problems I foresee with this approach: Precision problems, NaN-situations and possible overflows.
All calculations are done with Java's double type.
Any ideas? :)
Edit: As this is for a casual game, the solution doesn't need to be perfect, only good enough!
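For what it's worth, here is a minimal sketch of that sampling idea, with guards for the NaN and overflow cases (evaluate(expr, x) stands in for your own postfix evaluator and is purely hypothetical; a single variable x and java.util imports are assumed):
static boolean probablyEqual(List<String> exprA, List<String> exprB) {
    Random rnd = new Random();
    int usable = 0;
    for (int i = 0; i < 10_000 && usable < 1_000; i++) {
        double x = (rnd.nextDouble() - 0.5) * 200.0;   // sample x in [-100, 100)
        double a = evaluate(exprA, x);                 // your postfix evaluator (hypothetical)
        double b = evaluate(exprB, x);
        if (Double.isNaN(a) || Double.isNaN(b)
                || Double.isInfinite(a) || Double.isInfinite(b)) {
            continue;                                  // skip undefined or overflowing samples
        }
        usable++;
        double tol = 1e-9 * Math.max(1.0, Math.max(Math.abs(a), Math.abs(b)));
        if (Math.abs(a - b) > tol) {
            return false;                              // a genuine counterexample
        }
    }
    return usable > 0;   // only claim equality if enough samples were usable
}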

For the example expressions you have provided, you could transform each function into one polynomial divided by another, with the leading coefficient of the divisor equal to one and with no common factor between numerator and denominator. This would give you a canonical form: if the canonical forms differ, the two functions really are different. However, you would also need to represent the coefficients as arbitrary-precision rationals or you will hit precision problems here too, and by then you will have written most of a basic computer algebra system, such as those listed at http://en.wikipedia.org/wiki/List_of_computer_algebra_systems - which does include some free systems.

According to Wikipedia on this topic:
http://en.wikipedia.org/wiki/Symbolic_computation
"There are two notions of equality for mathematical expressions. The syntactic equality is the equality of the expressions which means that they are written (or represented in a computer) in the same way. As trivial, it is rarely considered by mathematicians, but it is the only equality that is easy to test with a program. The semantic equality is when two expressions represent the same mathematical object, like in
It is known that there may not exist a algorithm that decides if two expressions representing numbers are semantically equal, if exponentials and logarithms are allowed in the expressions. Therefore (semantical) equality may be tested only on some classes of expressions such as the polynomials and the rational fractions.
To test the equality of two expressions, instead to design a specific algorithm, it is usual to put them in some canonical form or to put their difference in a normal form and to test the syntactic equality of the result."
That seems to be the best practice.

I was trying to write basically the same question when I ended up here. However, I found some ideas which are not mentioned here.
First, I agree with #nelshh that in some specific cases you can find canonical forms which allow you to test the equality of expressions.
I found some examples of canonical forms:
The most famous is probably the minterm canonical form in Boolean algebra, which is used, for instance, in circuit synthesis or verification.
Polynomial expressions also admit a canonical form as a sum of monomials. This can solve your examples:
The canonical form for rational numbers is the irreducible fraction.
Your examples:
x+5 == 5+x: both sides are already in canonical form; you just need to sort the monomials by increasing degree.
x*2 == x+x: 2*x is in canonical form, x+x is not (because both operands of the addition have the same degree and must be collected into a single monomial).
4/(x/2) == 8/x: both sides are monomials of degree -1, except that the coefficient of 4/(x/2), which is 4/(1/2), is not in canonical form as a rational number.
If you are still interested in this, I would suggest that you experiment with a computer algebra system such as sympy for Python (something similar probably exists for Java too). However, I also think that you should remove the tags java and floating-point (the question has nothing to do with how a computer stores real numbers), and add the tag computer-science.
For instance sympy is able to tell such things:
>>> Rational(3,4)*(x+y)**2
3*(x + y)**2/4
>>> Rational(3,4)*(x**2+y**2)+Rational(1,4)*2*x*y+Rational(4,8)*2*x*y
3*x**2/4 + 3*x*y/2 + 3*y**2/4
>>> expand(Rational(3,4)*(x+y)**2)==expand(Rational(3,4)*(x**2+y**2)+Rational(1,4)*2*x*y+Rational(4,8)*2*x*y)
True

Related

Is comparing two same "literal" float numbers for equality wrong?

This question is kind of language-agnostic but the code is written in Java.
We have all heard that comparing floating-point numbers for equality is generally wrong. But what if I wanted to compare two exact same literal float values (or strings representing exact same literal values converted to floats)?
I'm quite sure that the numbers will be exactly equal (well, because they must be equal in binary—how can the exact same thing result in two different binary numbers?!) but I wanted to be sure.
Case 1:
void test1() {
    float f1 = 4.7f;
    float f2 = 4.7f;
    System.out.println(f1 == f2);
}
Case 2:
class Movie {
    String rating; // for some reason the type is String
}

void test2() {
    Movie movie1 = new Movie();
    Movie movie2 = new Movie();
    movie1.rating = "4.7";
    movie2.rating = "4.7";
    float f1 = Float.parseFloat(movie1.rating);
    float f2 = Float.parseFloat(movie2.rating);
    System.out.println(f1 == f2);
}
In both situations, the expression f1 == f2 should result in true. Am I right? Can I safely compare ratings for equality if they have the same literal float or string values?
There's a rule of thumb that you should apply to all programming rules of thumb (rule of thumbs?):
They are oversimplified, and will result in boneheaded decision-making if pushed too far. If you do not fully grok the intent behind the rule of thumb, you will mess up. Perhaps the rule of thumb remains a net positive (applying it without thought will improve things more than it will make them worse), but it will cause damage, and in any case it cannot be used as an argument in a debate.
So, with that in mind, clearly, there is no point in asking the question:
"Giving that the rule of thumb 'do not use == to compare floats' exists, is it ALWAYS bad?".
The answer is the extremely obvious: Duh, no. It's not ALWAYS bad, because rules of thumb pretty much by definition, if not by common sense, never ALWAYS apply.
So let's break it down then.
WHY is there a rule of thumb that you shouldn't == compare floats?
Your question suggests you already know this: it's because doing any math on floating-point values as represented by IEEE 754 concepts such as Java's double or float is inexact (vs. concepts like Java's BigDecimal, which is exact *).
Do what you should always do when, upon grokking why a rule of thumb exists, you realize it does not apply to your scenario: completely ignore it.
Perhaps your question boils down to: I THINK I grok the rule of thumb, but perhaps I'm missing something; aside from the 'floating point math introduces small deviations which mess up == comparison', which does not apply to this case, are there any other reasons for this rule of thumb that I am not aware of?
In which case, my answer is: As far as I know, no.
*) But BigDecimal has its own equality problems, such as: are two BigDecimal objects that represent the same mathematical number precisely, but which are configured to render at a different scale, 'equal'? That depends on whether your viewpoint is that they are numbers, or objects representing an exact decimal number along with some meta properties, including how to render it and how to round things if explicitly asked to do so.
For what it is worth, the equals implementation of BigDecimal, which has to make a Sophie's choice between two equally valid interpretations of what equality means, chooses 'I represent a number along with a bunch of metadata', not just 'I represent a number': new BigDecimal("2.0") and new BigDecimal("2.00") are not equals(), even though compareTo() says they represent the same number.
The same Sophie's choice exists in all JPA/Hibernate stacks: does a JPA object represent 'a row in the database' (equality then being defined solely by the primary key value, and, if not saved yet, two objects can never be equal, not even to themselves, unless they are the same reference), or does it represent the thing that the row represents, e.g. a student, and not 'a row in the DB that represents a student'? In the latter case the primary key is the one field that does NOT matter for identity, and all the others (name, birthdate, social security number, etc.) do. Equality is hard.
Yes. Compile time constants that are the same are evaluated consistently.
If you think about it, they must be the same, because there’s only one compiler and it converts literals to their floating point representation deterministically.
Yes, you can compare floats like this. The thing is that even if 4.7 isn't 4.7 when converted to a float, it will be converted consistently to the same value.
In general it is not wrong per se to compare floats like this. But for more complex math, you might want to use Math.round() or set a "sameness" difference span that the two should be within to be counted as "the same".
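A tiny illustration of both points (the numbers are arbitrary; only standard-library calls are used):
float fromLiteral = 4.7f;
float fromString  = Float.parseFloat("4.7");
System.out.println(fromLiteral == fromString);          // true: both convert to the nearest float to 4.7

// For values produced by arithmetic, round or use a tolerance instead of ==
float computed = 4.7f * 3f / 3f;
System.out.println(Math.abs(computed - 4.7f) < 1e-6f);  // safer than computed == 4.7f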
There is also an arbitrariness to fixed point numbers. For instance
1,000,000,001
is bigger than
1,000,000,000
Are these two numbers different? It depends on the precision you need. But for most purposes, these numbers are functionally the same
This question is kind of language-agnostic…
Actually, there is no floating-point issue here, and the answer depends entirely on the language.
There is no floating-point issue because IEEE-754 is clear: Two floating-point datums (finite numbers, infinities, and/or NaNs) compare as equal if and only if they correspond to the same real number.
There are language issues because how literals are mapped to floating-point numbers and how source text is mapped to operations differs from language to language. For example, C 2018 6.4.4.2 5 says:
All floating constants of the same source form77) shall convert to the same internal format with the same value.
And footnote 77 says:
1.23, 1.230, 123e-2, 123e-02, and 1.23L are all different source forms and thus need not convert to the same internal format and value.
Thus the C standard permits 1.23 == 1.230 to evaluate to false. (There are historical reasons this was permitted, leaving it as a quality-of-implementation issue.) If by “same” literal float value, you mean the exact same source text, then this problem does not occur in C; the exact same source text must produce the same floating-point value each time in a particular C implementation. However, this example teaches us to be cautious.
C also allows implementations flexibility in how floating-point operations are performed: It allows an implementation to use more than the nominal precision in evaluating expressions, and it allows using different precisions in different parts of the same expression. So 1./3. == 1./3. could evaluate to false.
Some languages, like Python, do not have a good formal specification and are largely silent about how floating-point operations are performed. It is conceivable a Python implementation could use excess precision available in processor registers to convert the source text 1.3 to a long double or similar type, then save it somewhere as a double, then convert the source text 1.3 to a long double, then retrieve the double to compare it to the long double still in registers and get a result indicating inequality.
This sort of issue does not occur in implementations I am aware of, but, when asking a question like this, asking whether a rule always holds, regardless of language, leaves the door open for possible exceptions.

What Crossover Method should I use for crossing Postfix expressions in Genetic Algorithm?

I'm building a project whose main objective is to find a given number (if possible, otherwise closest possible) using 6 given numbers and main operators (+, -, *, /). Idea is to randomly generate expressions, using the numbers given and the operators, in reverse polish (postfix) notation, because I found it the easiest to generate and compute later. Those expressions are Individuals in Population of my Genetic Algorithm. Those expressions have the form of an ArrayList of Strings in Java, where Strings are both the operators and operands (the numbers given).
The main question here is, what would be the best method to crossover these individuals (postfix expressions actually)? Right now I'm thinking about crossing expressions that are made out of all the six operands that are given (and 5 operators with them). Later I'll probably also cross the expressions that would be made out of less operands (5, 4, 3, 2 and also only 1), but I guess that I should figure this out first, as the most complex case (if you think it might be a better idea to start differently, I'm open to any suggestions). So, the thing is that every expression is made from all the operands given, and also the child expression should have all the operands included, too. I understand that this requires some sort of ordered crossover (often used in problems like TSP), and I read a lot about it (for example here where multiple methods are described), but I didn't quite figure out which one would be best in my case (I'm also aware that in Genetic Algorithms there is a lot of 'trial and error' process, but I'm talking about something else here).
What is bothering me are the operators. If I had only a list of operands, then it wouldn't be a problem to cross 2 such lists, for example by taking a random subarray of half the elements from one parent and filling the rest with the remaining elements from parent 2, keeping the order as it was. But here, if I, say, take the first half of an expression from the first parent expression, I would definitely have to fill the child expression with the remaining operands, but what should I do with the operators? Take them from parent 2 like the remaining operands (but then I would have to watch out, because in order to use an operator in a postfix expression I need to have at least one more operand than operators so far, and checking that all the time might be time-consuming, or not?), or maybe I could generate random operators for the rest of the child expression (but that wouldn't be a pure crossover then, would it)?
When talking about crossover, there is also mutation, but I guess I have that worked out. I can take an expression and perform a mutation where I'll just switch 2 operands, or take an expression and randomly change 1 or more operators. For that, I have some ideas, but the crossover is what really bothers me.
So, that pretty much sums my problem. Like I said, the main question is how to crossover, but if you have any other suggestions or questions about the program (maybe easier representation of expressions - other then list of strings - which may be easier/faster to crossover, maybe something I didn't mention here, it doesn't matter, maybe even a whole new approach to the problem?), I'd love to hear them. I didn't give any code here because I don't think it's needed to answer this question, but if you think it would help, I'll definitely edit in order to solve this. One more time, main question is to answer how to crossover, this specific part of the problem (idea or pseudocode expected, although the code itself would be great, too :D), but if you think that I need to change something more, or you know some other solutions to my whole problem, feel free to say.
Thanks in advance!
There are two approaches that come to mind:
Approach #1
Encode each genome as a fixed length expression where odd indices are numbers and even indices are the operators. For mutation, you could slightly change the numbers and/or change the operators.
Pros:
Very simple to code
Cons:
Would have to create an infix parser
Fixed length expressions
Approach #2
Encode each genome as a syntax tree. For instance, 4 + 3 / 2 - 1 is equivalent to Add(4, Subtract(Divide(3, 2), 1)) which looks like:
      _____+_____
      |         |
      4    ____-____
           |       |
         __/__     1
         |   |
         3   2
Then when crossing over, pick a random node from each tree and swap them. For mutation, you could add, remove, and/or modify random nodes.
Pros:
Might find better results
Variable length expressions
Cons:
Adds time complexity
Adds programming complexity
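A rough Java sketch of the subtree-crossover idea, assuming a minimal Node class for the syntax tree (java.util imports assumed; all names here are illustrative):
class Node {
    String value;     // an operator ("+", "-", "*", "/") or an operand ("4")
    Node left, right; // both null for operand leaves
}

// Swap one randomly chosen subtree of each parent (clone the parents first
// if you want to keep them unchanged).
static void crossover(Node parentA, Node parentB, Random rnd) {
    Node nodeA = randomNode(parentA, rnd);
    Node nodeB = randomNode(parentB, rnd);
    // Swapping the fields of the two chosen nodes swaps the subtrees rooted at them.
    String v = nodeA.value; nodeA.value = nodeB.value; nodeB.value = v;
    Node l = nodeA.left;    nodeA.left  = nodeB.left;  nodeB.left  = l;
    Node r = nodeA.right;   nodeA.right = nodeB.right; nodeB.right = r;
}

static Node randomNode(Node root, Random rnd) {
    List<Node> all = new ArrayList<>();
    collect(root, all);
    return all.get(rnd.nextInt(all.size()));
}

static void collect(Node n, List<Node> out) {
    if (n == null) return;
    out.add(n);
    collect(n.left, out);
    collect(n.right, out);
}
Note that plain subtree crossover does not by itself preserve the constraint from the question that every one of the six operands appears exactly once in each child, so you would either repair the children afterwards or fall back to an ordered crossover over the operand sequence.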

Java: How to replace common trigonometric values

I was wondering how to replace common trigonometric values in an expression. To put this into more context, I am making a calculator that needs to be able to evaluate user inputs such as "sin(Math.PI)" or "sin(6 * Math.PI/2)". The problem is that floating point values aren't accurate, so when I input sin(Math.PI), the calculator ends up with:
1.2245457991473532E-16
But I want it to return 0. I know I could try replacing in the expression all sin(Math.PI) and other common expressions with 0, 1, etc., except I have to check all multiples of Math.PI/2. Can any of you give me some guidance on how to return the user the proper values?
You're running into the problem that it's not quite possible to express a number like pi in a fixed number of bits, so with the available machine precision the computation gives you a small but non-zero number. Math.PI in any case is only an approximation of PI, which is an irrational number. To clean up your answer for display purposes, one possibility is to use rounding. You could instead try adding +1 and -1 to it which may well round the answer to zero.
This question here may help you further:
Java Strange Behavior with Sin and ToRadians
Your problem is that 1.2245457991473532E-16 is in fact zero for many purposes. What about simply rounding the result yielded by sin? With enough rounding, you may achieve what you want and even get 0.5, -0.5 and other important sin values relatively easily.
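For example, one way to do that rounding (the choice of 12 decimal places is arbitrary):
double raw = Math.sin(Math.PI);                  // about 1.2e-16, not exactly 0
double rounded = Math.round(raw * 1e12) / 1e12;  // snap to 12 decimal places
System.out.println(rounded);                     // 0.0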
If you really want to replace those functions as your title suggests, then you can't do that in Java. Your best bet would be to create an SPI specification for common functions that could either fall back to the standard Java implementation or use your own implementation, which replaces the Java one.
Then users of your solution would need to retrieve one of the implementations using dependency injection or explicit references to a factory method.

Why do we need numeric literals in Java?

I have a simple question: why do we need to use special literals when it's already obvious what type of variable we are using?
For example, you can see that we are using the double type here, and I think the compiler should also see that. But if I run this code:
double no_double = 60*(1000/3600);
System.out.format("result is: %.3f",no_double);
I get the result is: 0,000.
But if I run that code:
double a_double = 60.0*(1000.0/3600.0);
System.out.format("result is: %.3f",a_double);
Then I get the correct result: 16,667.
So why do we need to use literals?
Update: Java Primitive Data Types: http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
You're dividing two integers.
The result of that is another integer.
Assigning that integer to a double value later doesn't change the division.
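Any of these variants forces the division to be done in floating point; only one operand needs to be a double:
double a = 60 * (1000.0 / 3600);         // 16.666...
double b = 60 * ((double) 1000 / 3600);  // same result via an explicit cast
double c = 60.0 * 1000 / 3600;           // the first double operand carries through the chain
System.out.format("result is: %.3f%n", a);   // prints 16.667 (or 16,667 in some locales)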
It is not obvious, as the compiler (or JVM) cannot know if you really want floats or integers.
I'd argue that floating point math is hard if you consider all the corner cases. Floats are imprecise by design, whereas with integers you get exact results. If you can stick to exact results it is often better to do so, and resort to floats only when explicitly needed. For example, if you need to compare two variables for equality, with floats you have to decide on some boundaries and definitions as to what you consider to be equal. With integers there is no need for this; it is self-evident.
There are several programming languages where this kind of explicit separation does not happen, JavaScript and PHP possibly being the most popular. They choose to autoconvert the datatypes on the fly. It causes some considerable overhead and additional issues in the long run, when you need to know exactly what kind of variable you have in your hands.
Still other programming languages exist that don't even have these different data types. Maybe everything there is just an object. This is one way of solving it.
This is just part of the specification of Java as a C-type language. Per the specification, if integer values aren't promoted in an expression, then the result of the calculation is an integer. The language designers could have decided to make the result of all calculations floating point numbers, but decided not to, probably because that behavior for primitive types was not familiar to C and C++ programmers, and because it makes the operations slower.

What's wrong with using == to compare floats in Java?

According to this java.sun page == is the equality comparison operator for floating point numbers in Java.
However, when I type this code:
if(sectionID == currentSectionID)
into my editor and run static analysis, I get: "JAVA0078 Floating point values compared with =="
What is wrong with using == to compare floating point values? What is the correct way to do it?
the correct way to test floats for 'equality' is:
if(Math.abs(sectionID - currentSectionID) < epsilon)
where epsilon is a very small number like 0.00000001, depending on the desired precision.
Floating point values can be off by a little bit, so they may not report as exactly equal. For example, setting a float to "6.1" and then printing it out again, you may get a reported value of something like "6.099999904632568359375". This is fundamental to the way floats work; therefore, you don't want to compare them using equality, but rather comparison within a range, that is, if the diff of the float to the number you want to compare it to is less than a certain absolute value.
This article on the Register gives a good overview of why this is the case; useful and interesting reading.
Just to give the reason behind what everyone else is saying.
The binary representation of a float is kind of annoying.
In binary, most programmers know the correlation between 1b=1d, 10b=2d, 100b=4d, 1000b=8d
Well it works the other way too.
.1b=.5d, .01b=.25d, .001b=.125d, ...
The problem is that there is no exact way to represent most decimal numbers like .1, .2, .3, etc. All you can do is approximate in binary. The system does a little fudge-rounding when the numbers print so that it displays .1 instead of .10000000000001 or .999999999999 (which are probably just as close to the stored representation as .1 is)
Edit from comment: The reason this is a problem is our expectations. We fully expect 2/3 to be fudged at some point when we convert it to decimal, either .7 or .67 or .666667.. But we don't automatically expect .1 to be rounded in the same way as 2/3--and that's exactly what's happening.
By the way, if you are curious the number it stores internally is a pure binary representation using a binary "Scientific Notation". So if you told it to store the decimal number 10.75d, it would store 1010b for the 10, and .11b for the decimal. So it would store .101011 then it saves a few bits at the end to say: Move the decimal point four places right.
(Although technically it's no longer a decimal point, it's now a binary point, but that terminology wouldn't have made things more understandable for most people who would find this answer of any use.)
What is wrong with using == to compare floating point values?
Because it's not true that 0.1 + 0.2 == 0.3
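You can check this directly with plain double arithmetic:
System.out.println(0.1 + 0.2 == 0.3);   // false
System.out.println(0.1 + 0.2);          // prints 0.30000000000000004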
As of today, the quick & easy way to do it is:
if (Float.compare(sectionID, currentSectionID) == 0) {...}
Note, however, that Float.compare applies no tolerance at all: it returns 0 only for values that are numerically identical (with the extra twist that it treats NaN as equal to itself and -0.0f as less than 0.0f), so it does not absorb the small margin of error (the epsilon from #Victor's answer) that creeps into calculations on floats.
If you do need a tolerance (a higher or customized precision), then
float epsilon = Float.MIN_NORMAL;
if(Math.abs(sectionID - currentSectionID) < epsilon){...}
is another solution option.
I think there is a lot of confusion around floats (and doubles), so it is good to clear it up.
There is nothing inherently wrong in using floats as IDs in a standard-compliant JVM [*]. If you simply set the float ID to x, do nothing with it (i.e. no arithmetic) and later test for y == x, you'll be fine. Also there is nothing wrong in using them as keys in a HashMap. What you cannot do is assume equalities like x == (x - y) + y, etc. This being said, people usually use integer types as IDs, and you can observe that most people here are put off by this code, so for practical reasons it is better to adhere to conventions. Note that there are as many different double values as there are long values, so you gain nothing by using double. Also, generating the "next available ID" can be tricky with doubles and requires some knowledge of floating-point arithmetic. Not worth the trouble.
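For instance, a float that is stored and later looked up unchanged behaves predictably as a HashMap key (a toy example):
import java.util.HashMap;
import java.util.Map;

Map<Float, String> sections = new HashMap<>();
float sectionID = 4.7f;
sections.put(sectionID, "Introduction");
System.out.println(sections.get(4.7f));   // "Introduction": the literal maps to the same Float key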
On the other hand, relying on numerical equality of the results of two mathematically equivalent computations is risky. This is because of the rounding errors and loss of precision when converting from decimal to binary representation. This has been discussed to death on SO.
[*] When I said "standard-compliant JVM" I wanted to exclude certain brain-damaged JVM implementations. See this.
Floating point values are not reliable, due to roundoff error.
As such they should probably not be used as key values, such as sectionID. Use integers instead, or long if int doesn't contain enough possible values.
This is a problem not specific to java. Using == to compare two floats/doubles/any decimal type number can potentially cause problems because of the way they are stored.
A single-precision float (as per IEEE standard 754) has 32 bits, distributed as follows:
1 bit - Sign (0 = positive, 1 = negative)
8 bits - Exponent (a special (bias-127) representation of the x in 2^x)
23 bits - Mantissa. The actual number that is stored.
The mantissa is what causes the problem. It's kind of like scientific notation, only the number in base 2 (binary) looks like 1.110011 x 2^5 or something similar.
But in binary, the first digit is always a 1 (except for the representation of 0).
Therefore, to save a bit of memory space (pun intended), IEEE decided that the leading 1 should be assumed. For example, a mantissa of 1011 really is 1.1011.
This can cause some issues with comparison, especially against 0, since a result that is mathematically zero usually comes out as a tiny non-zero float rather than exactly 0.
This is the main reason the == is discouraged, in addition to the floating point math issues described by other answers.
Java has a unique problem in that the language is universal across many different platforms, each of which could have its own unique float format. That makes it even more important to avoid ==.
The proper way to compare two floats (not-language specific mind you) for equality is as follows:
if(ABS(float1 - float2) < ACCEPTABLE_ERROR)
//they are approximately equal
where ACCEPTABLE_ERROR is #defined or some other constant equal to 0.000000001 or whatever precision is required, as Victor mentioned already.
Some languages have this functionality or this constant built in, but generally this is a good habit to be in.
Here is a very long (but hopefully useful) discussion about this and many other floating point issues you may encounter: What Every Computer Scientist Should Know About Floating-Point Arithmetic
In addition to previous answers, you should be aware that there are strange behaviours associated with -0.0f and +0.0f (they are == but not equals) and Float.NaN (it is equals but not ==) (hope I've got that right - argh, don't do it!).
Edit: Let's check!
import static java.lang.Float.NaN;

public class Fl {
    public static void main(String[] args) {
        System.err.println(-0.0f == 0.0f);                            // true
        System.err.println(new Float(-0.0f).equals(new Float(0.0f))); // false
        System.err.println(NaN == NaN);                               // false
        System.err.println(new Float(NaN).equals(new Float(NaN)));    // true
    }
}
Welcome to IEEE/754.
First of all, are they float or Float? If one of them is a Float, you should use the equals() method. Also, probably best to use the static Float.compare method.
You can use Float.floatToIntBits().
Float.floatToIntBits(sectionID) == Float.floatToIntBits(currentSectionID)
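Note how floatToIntBits behaves at the edge cases; it matches the semantics of Float.equals rather than those of ==:
System.out.println(Float.floatToIntBits(Float.NaN) == Float.floatToIntBits(Float.NaN)); // true: NaN is collapsed to one canonical bit pattern
System.out.println(Float.floatToIntBits(-0.0f) == Float.floatToIntBits(0.0f));          // false: -0.0f and 0.0f have different bits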
The following automatically uses the best precision:
/**
 * Compare two floats for (almost) equality. Will check whether they are
 * at most 5 ULPs apart.
 */
public static boolean isFloatingEqual(float v1, float v2) {
    if (v1 == v2)
        return true;
    float absoluteDifference = Math.abs(v1 - v2);
    float maxUlp = Math.max(Math.ulp(v1), Math.ulp(v2));
    return absoluteDifference < 5 * maxUlp;
}
Of course, you might choose more or less than 5 ULPs (‘unit in the last place’).
If you’re into the Apache Commons library, the Precision class has compareTo() and equals() with both epsilon and ULP.
you may want it to be ==, but 123.4444444444443 != 123.4444444444442
If you *have to* use floats, strictfp keyword may be useful.
http://en.wikipedia.org/wiki/strictfp
Two different calculations which produce equal real numbers do not necessarily produce equal floating point numbers. People who use == to compare the results of calculations usually end up being surprised by this, so the warning helps flag what might otherwise be a subtle and difficult to reproduce bug.
Are you dealing with outsourced code that would use floats for things named sectionID and currentSectionID? Just curious.
#Bill K: "The binary representation of a float is kind of annoying." How so? How would you do it better? There are certain numbers that cannot be represented in any base properly, because they never end. Pi is a good example. You can only approximate it. If you have a better solution, contact Intel.
As mentioned in other answers, doubles can have small deviations. And you could write your own method to compare them using an "acceptable" deviation. However ...
There is an apache class for comparing doubles: org.apache.commons.math3.util.Precision
It contains some interesting constants: SAFE_MIN and EPSILON, which are the maximum possible deviations of simple arithmetic operations.
It also provides the necessary methods to compare, equal or round doubles. (using ulps or absolute deviation)
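A small usage sketch, assuming commons-math3 is on the classpath (Precision.equals and Precision.round as documented for that library):
import org.apache.commons.math3.util.Precision;

double x = Math.sqrt(2) * Math.sqrt(2);              // 2.0000000000000004, not exactly 2.0
System.out.println(Precision.equals(x, 2.0, 1e-9));  // true: absolute epsilon
System.out.println(Precision.equals(x, 2.0, 1));     // true: within 1 ulp
System.out.println(Precision.round(x, 6));           // 2.0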
As a one-line answer, I can say you should use:
Float.floatToIntBits(sectionID) == Float.floatToIntBits(currentSectionID)
To help you learn more about using the related operators correctly, I will elaborate on some cases here:
Generally, there are three ways to test strings in Java. You can use ==, .equals(), or Objects.equals().
How are they different? == tests for reference equality, meaning it checks whether the two objects are the same instance. On the other hand, .equals() tests whether the two strings are logically equal in value. Finally, Objects.equals() checks for nulls in the two references and then decides whether to call .equals().
Ideal operator to use
Well, this has been subject to lots of debate because each of the three operators has its own set of strengths and weaknesses. For example, == is often the preferred option when comparing object references, but there are cases where it may seem to compare string values as well.
However, what you get is a false impression, because Java creates an illusion that you are comparing values when in the real sense you are not. Consider the two cases below:
Case 1:
String a="Test";
String b="Test";
if(a==b) ===> true
Case 2:
String nullString1 = null;
String nullString2 = null;
//evaluates to true
nullString1 == nullString2;
//throws an exception
nullString1.equals(nullString2);
So, it's better to use each operator for testing the specific attribute it's designed for. But in almost all cases, Objects.equals() is the more universal option, which is why experienced developers opt for it.
Here you can get more details: http://fluentthemes.com/use-compare-strings-java/
The correct way would be
java.lang.Float.compare(float1, float2)
One way to reduce rounding error is to use double rather than float. This won't make the problem go away, but it does reduce the amount of error in your program and float is almost never the best choice. IMHO.
