Do we need to be careful when comparing a double value against zero?
if ( someAmount <= 0){
.....
}
If you want to be really careful, you can test whether it is within some epsilon of zero, with something like:
double epsilon = 0.0000001;
if ( f <= ( 0 - epsilon ) ) { .. }
else if ( f >= ( 0 + epsilon ) ) { .. }
else { /* f "equals" zero */ }
Or you can simply round your doubles to some specified precision before branching on them.
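For example, a minimal sketch of the rounding approach, assuming two decimal places is the precision you care about (the amount and scale here are arbitrary):
import java.math.BigDecimal;
import java.math.RoundingMode;

double someAmount = -0.0000000001; // tiny negative noise from earlier arithmetic
BigDecimal rounded = BigDecimal.valueOf(someAmount).setScale(2, RoundingMode.HALF_UP);
if (rounded.signum() <= 0) {
    // treated as zero or negative once rounded to 2 decimal places
}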
For some interesting details about comparing error in floating point numbers, here is an article by Bruce Dawson.
For equality (i.e. == or !=): yes.
For the other comparative operators (<, >, <=, >=), it depends whether you care about the edge cases, e.g. whether < is equivalent to <=, which is another case of equality. If you don't care about the edge cases, it usually doesn't matter, though it depends where your input numbers come from and how they are used.
If you are expecting (3.0/10.0) <= 0.3 to evaluate as true (it may not if floating point error causes 3.0/10.0 to evaluate to a number slightly greater than 0.3 like 0.300000000001), and your program will behave badly if it evaluates as false -- that's an edge case, and you need to be careful.
Good numerical algorithms should almost never depend on equality and edge cases. If I have an algorithm which takes as an input 'x' which is any number between 0 and 1, in general it shouldn't matter whether 0 < x < 1 or 0 <= x <= 1. There are exceptions, though: you have to be careful when evaluating functions with branch points or singularities.
If I have an intermediate quantity y and I am expecting y >= 0, and I evaluate sqrt(y), then I have to be certain that floating-point errors do not cause y to be a very small negative number and the sqrt() function to throw an error. (Assuming this is a situation where complex numbers are not involved.) If I'm not sure about the numerical error, I would probably evaluate sqrt(max(y,0)) instead.
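For instance, a guard along these lines (a minimal sketch; computeIntermediate() is a hypothetical stand-in for however y is produced):
double y = computeIntermediate(); // may come out as, say, -1e-16 due to rounding
double root = Math.sqrt(Math.max(y, 0.0)); // clamp tiny negative noise to zero before sqrt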
For expressions like 1/y or log(y), in a practical sense it doesn't matter whether y is zero (in which case you get a singularity error) or y is a number very near zero (in which case you'll get a very large number out, whose magnitude is very sensitive to the value of y) -- both cases are "bad" from a numerical standpoint, and I need to reevaluate what it is I'm trying to do, and what behavior I'm looking for when y values are in the neighborhood of zero.
Depending on how your someAmount is computed, you may see some odd behaviour with floats/doubles.
Basically, converting numeric data to a binary representation using floats/doubles is error-prone, because some numbers cannot be represented with a mantissa/exponent.
For some details about this you can read this small article
You should consider using java.lang.Math.signum or java.math.BigDecimal, especially for currency and tax computations.
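A short sketch of both suggestions (the amounts here are made up; note that BigDecimal should be constructed from a String, not a double, to stay exact):
import java.math.BigDecimal;

BigDecimal amount = new BigDecimal("19.99"); // exact, unlike the double literal 19.99
if (amount.compareTo(BigDecimal.ZERO) <= 0) {
    // zero or negative
}

double someAmount = -0.25;
if (Math.signum(someAmount) <= 0.0) {
    // zero or negative
}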
Watch out for auto-unboxing:
Double someAmount = null;
if ( someAmount <= 0){
Boom, NullPointerException.
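A null check before the numeric comparison avoids the unboxing NPE (a minimal sketch):
Double someAmount = null;
if (someAmount != null && someAmount <= 0) {
    // safe: unboxing only happens once someAmount is known to be non-null
}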
Yes, you should be careful.
Suggestion: a good way is to use BigDecimal for checking equality/inequality to 0:
BigDecimal balance = pojo.getBalance(); // get your BigDecimal object
0 != balance.compareTo(BigDecimal.ZERO)
Explanation:
The compareTo() function compares this BigDecimal with the specified BigDecimal. Two BigDecimal objects that are equal in value but have a different scale (like 2.0 and 2.00) are considered equal by this method. This method is provided in preference to individual methods for each of the six boolean comparison operators (<, ==, >, >=, !=, <=). The suggested idiom for performing these comparisons is (x.compareTo(y) <op> 0), where <op> is one of the six comparison operators.
(Thanks to SonarQube documentation)
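A short demonstration of the scale behaviour described above:
import java.math.BigDecimal;

BigDecimal a = new BigDecimal("2.0");
BigDecimal b = new BigDecimal("2.00");
System.out.println(a.compareTo(b) == 0); // true  -- same value, scale ignored
System.out.println(a.equals(b));         // false -- equals() also compares scale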
Floating point math is imprecise because of the challenges of storing such values in a binary representation. Even worse, floating point math is not associative; push a float or a double through a series of simple mathematical operations and the answer will differ based on the order of those operations, because of the rounding that takes place at each step.
Even simple floating point assignments are not simple:
float f = 0.1f;  // 0.100000001490116119384765625 (the f suffix is required to compile)
double d = 0.1; // 0.1000000000000000055511151231257827021181583404541015625
(Results will vary based on compiler and compiler settings.)
Therefore, the use of the equality (==) and inequality (!=) operators on float or double values is almost always an error. The best course is to avoid floating point comparisons altogether. When that is not possible, consider using one of Java's exact-decimal types, such as BigDecimal, which can properly handle floating point comparisons. A third option is to look not for equality but for whether the value is close enough: that is, compare the absolute value of the difference between the stored value and the expected value against a margin of acceptable error. Note that this does not cover all cases (NaN and Infinity, for instance).
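A sketch of such a margin-of-error check that at least makes the NaN and Infinity cases explicit (the margin you pass in is up to you):
static boolean closeEnough(double actual, double expected, double margin) {
    if (Double.isNaN(actual) || Double.isNaN(expected)) {
        return false; // NaN is never "close" to anything
    }
    if (Double.isInfinite(actual) || Double.isInfinite(expected)) {
        return actual == expected; // infinities only match exactly
    }
    return Math.abs(actual - expected) <= margin;
}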
If you don't care about the edge cases, then just test for someAmount <= 0. It makes the intent of the code clear. If you do care, well... it depends on how you calculate someAmount and why you're testing for the inequality.
To check whether a double or float value is 0, an error threshold can be used to detect whether the value is near 0 but not quite 0. I think this is the best method I have come across.
How to test if a double is zero?
Answered by @William Morrison
public boolean isZero(double value, double threshold) {
    return value >= -threshold && value <= threshold;
}
For example, set the threshold to 0, like this:
System.out.println(isZero(0.00, 0));
System.out.println(isZero(0, 0));
System.out.println(isZero(0.00001, 0));
The results from the example code above are true, true, and false.
Have Fun #.#
In my Java book it told me not to directly compare two float numbers stored in variables, because a variable only holds an approximation of the number. Instead it suggested checking whether the absolute value of their difference equals 0; if it does, they are the same. How is this helpful? What if I store 5 in variable a and 5 in variable b, how can they not be the same? And how does it help to compare the absolute value?
double a = 5, b = 5;
if (Math.abs(a - b) == 0)
    // run code
if (a == b)
    // run code
I don't see at all why the above method would be more accurate, since if a is not equal to b, it won't matter whether I use Math.abs.
I appreciate replies and thank you for your time.
I tried both methods.
Inaccuracy with comparisons using the == operator is caused by the way double values are stored in a computer's memory. We need to remember that there is an infinite number of values that must fit in limited memory space, usually 64 bits. As a result, we can't have an exact representation of most double values in our computers. They must be rounded to be saved.
Because of the rounding inaccuracy, interesting errors might occur:
double d1 = 0;
for (int i = 1; i <= 8; i++) {
    d1 += 0.1;
}
double d2 = 0.1 * 8;
System.out.println(d1);
System.out.println(d2);
Both variables, d1 and d2, should equal 0.8. However, when we run the code above, we'll see the following results:
0.7999999999999999
0.8
In that case, comparing both values with the == operator would produce a wrong result. For this reason, we must use a more complex comparison algorithm.
If we want to have the best precision and control over the rounding mechanism, we can use java.math.BigDecimal class.
The recommended algorithm to compare double values in plain Java is a threshold comparison method. In this case, we need to check whether the difference between both numbers is within the specified tolerance, commonly called epsilon:
double epsilon = 0.000001d;
assertThat(Math.abs(d1 - d2) < epsilon).isTrue();
The smaller the epsilon's value, the greater the comparison accuracy. However, if we specify the tolerance value too small, we'll get the same false result as with the simple == comparison.
The thing is, that statement you read in your Java book just protects you from errors that would be hard to debug later. Computers store floats in binary, and not everything we can express as a terminating decimal can be expressed as a terminating binary fraction, so you end up with something like 0.7 = 0.699999999998511. You may not see a difference in the comparisons you use here, but in a real project, where you use many more variables and add to and subtract from them, this difference can appear in a very surprising place.
There is a classic question about floating-point numbers; you will see it in any language:
Why does this code print a result of '7'?
I researched this a lot and I know something about precision errors in doubles; however, I couldn't find the answer. My question is: is it always safe to compare double constants? What I mean by that is just reading the double from a string or creating it in the source code. No operation (adding, subtracting, etc.) will be done on them. If I create a variable like
double c = 3.0;
will the following comparison always be true? c == 3.0000
For example is there any possibility to see wrong evaluation of an equation like this 1.23456789 < 1.23456788?
It's always safe, with one caveat.
Here's an example:
1.232342134214321412421 == 1.232342134214321412422
That is true; all double constants are silently rounded to the nearest representable double, and for both of these numbers, it is the same double.
Thus, given 2 actual mathematical numbers A and B where A < B, if you turn those numbers into double literals and run these 3 expressions on them, you get the following scenarios:
If A and B are rounded to different doubles, you get guarantees:
A < B will be true
A == B will be false
A > B will be false
If A and B are really close to each other, they may round to the same double, and you get:
A < B will be false
A == B will be true
A > B will be false
In other words, 'less than' and 'greater than' can be bent into being equal, but a lesser number will never be erroneously treated as larger.
For example is there any possibility to see wrong evaluation of an equation like this 1.23456789 < 1.23456788?
Yes, that is possible, because you can write floating-point literals with arbitrary numbers of digits, and these will be approximated to the nearest value that fits in a double when they are interpreted by the compiler.
For example:
boolean result = 1.2003004005006007008009 > 1.2003004005006007008008;
System.out.println(result);
Prints false, even though mathematically the first number is larger than the second number (so you'd expect it to print true).
In conclusion: by using floating-point literal values, you won't suddenly have arbitrary-precision floating-point numbers.
Lets say, I have the following:
float x= ...
float y = ...
What I'd like to do is just compare them, whether x is greater than y or not. I am not interested in their equality.
My question is, should I take into account precision when just performing a > or a < check on floating point values? Am I correct to assume that precision is only taken into account for equality checks?
Since you already have two floats, named x and y, and if there hasn't been any casting before, you can easily use ">" and "<" for comparison. However, if you had two doubles d1 and d2 with d1 > d2 and you cast them to floats f1 and f2, respectively, you might get f1 == f2 because of precision problems.
There is already a wheel you don't need to invent:
if (Float.compare(x, y) < 0) {
    // x is less than y
}
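One thing worth knowing: Float.compare imposes a total order, so it treats NaN as greater than every other value and -0.0f as less than 0.0f, which is not how the raw operators behave:
System.out.println(Float.compare(Float.NaN, 1.0f) > 0); // true: compare() sorts NaN last
System.out.println(Float.NaN > 1.0f);                   // false: raw operators reject NaN
System.out.println(Float.compare(-0.0f, 0.0f) < 0);     // true, even though -0.0f == 0.0f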
All float values have the same precision as each other.
It really depends on where those two floats came from. If there was roundoff earlier in their computation, then if the accumulated roundoff is large enough to exceed the difference between the "ideal" answers, you may not get the results you expect for any of the comparison values.
As a general rule, any comparison of floats must be considered fuzzy. If you must do it, you are responsible for understanding the sources of roundoff error and deciding whether you care about it and how you want to handle it if so. It's usually better to avoid comparing floats entirely unless your algorithm absolutely requires that you do so... and if you must, to make sure that a "close but not quite" comparison will not break your program. You may be able to restructure the formulas and order of computation to reduce loss of precision. Going up to double will reduce the accumulated error but is not a complete solution.
If you must compare, and the comparison is important, don't use floats. If you need absolute precision of floating-like numbers, use an infinite-precision math package like bignums or a rational-numbers implementation and accept the performance cost. Or switch to scaled integers -- which also round off, but round off in a way that makes more sense to humans.
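A sketch of the scaled-integer idea for money, assuming amounts are kept in cents (the figures are made up):
long priceCents = 1999;                 // $19.99, stored exactly
long taxCents = priceCents * 8 / 100;   // integer division truncates -- pick your rounding deliberately
if (priceCents + taxCents <= 0) {
    // comparisons on longs are exact; no epsilon needed
}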
This is a difficult question. The answer really depends on how you got those numbers.
First, you need to understand that floating point numbers ARE precise, but that they don't necessarily represent the number that you thought they did. Floating point types in typical programming languages represent a finite subset of the infinite set of Real numbers. So if you have an arbitrary real number, the chances are that you cannot represent it precisely using a float or double type. In fact ...
The only Real numbers that can be represented exactly as float or double values have the form

mantissa * power(2, exponent)

where "mantissa" and "exponent" are integers in prescribed ranges.
And the corollary is that most "decimal" numbers don't have an exact float or double representation either.
So in fact, we end up with something like this:
true_value = floating_point_value + delta,
where "delta" is the error; i.e. the small (or not so small) difference between the true value and the float or double value.
Next, when you perform a calculation using floating point values, there are cases where the exact result cannot be represented as a floating point value. An obvious example is:
1.0f / 3.0f
for which the true value is 0.33333... recurring, which is not representable in any finite base 2 (or base 10!) floating point representation. Instead what happens is that a result is produced by rounding to the nearest float or double value ... introducing more error.
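You can see the rounded result for yourself: the BigDecimal(double) constructor preserves the exact value the double actually holds, so printing it exposes the approximation (a small demonstration):
import java.math.BigDecimal;

System.out.println(new BigDecimal(1.0 / 3.0));
// prints the exact stored value, which begins 0.33333333333333331... rather than repeating 3s forever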
As you perform more and more calculations, the errors can potentially grow ... or stay stable ... depending on the sequence of operations that are performed. (There's a branch of mathematics that deals with this: Numerical Analysis.)
So back to your questions:
"Should I take into account precision when just performing a > or a < check on floating point values?"
It depends on how you got those floating point values, and what you know about the "delta" values relative to the true Real values they nominally represent.
If there are no errors (i.e. delta < the smallest representable difference for the value), then you can safely compare using ==, < or >.
If there are possible errors, then you need to take account of those errors ... if the semantics of the comparison are intended to be couched in terms of the (nominal) true values. Furthermore, you need to have a credible estimate of the "delta" (the accumulated error) when you code the comparison.
In short, there is no simple (correct) answer ...
"Am I correct to assume that precision is only taken into account for equality checks? "
In fact precision is not "taken into account" in any of the comparison operators. These operators treat the operands as precise values, and compare them accordingly. It is up to your code to take account of precision issues, based on your error estimates for the preceding calculations.
If you have estimates for the deltas, then a mathematically sound < comparison would be something like this:
// Assume true_v1 = v1 +- delta_v1 ... (delta_v1 is a non-negative constant)
if (v1 + delta_v1 < v2 - delta_v2) {
// true_v1 is less than true_v2
}
and so on ...
I expected the following code to produce: "Both are equal", but I got "Both are NOT equal":
float a = 1.3f;
double b = 1.3;
if (a == b) {
    System.out.println("Both are equal");
} else {
    System.out.println("Both are NOT equal");
}
What is the reason for this?
It's because the closest float value to 1.3 isn't the same as the closest double value to 1.3. Neither value will be exactly 1.3 - that can't be represented exactly in a non-recurring binary representation.
To give a different understanding of why this happens, suppose we had two decimal floating point types - decimal5 and decimal10, where the number represents the number of significant digits. Now suppose we tried to assign the value of "a third" to both of them. You'd end up with
decimal5 oneThird = 0.33333
decimal10 oneThird = 0.3333333333
Clearly those values aren't equal. It's exactly the same thing here, just with different bases involved.
However if you restrict the values to the less-precise type, you'll find they are equal in this particular case:
double d = 1.3d;
float f = 1.3f;
System.out.println((float) d == f); // Prints true
That's not guaranteed to be the case, however. Sometimes the approximation from the decimal literal to the double representation, and then the approximation of that value to the float representation, ends up being less accurate than the straight decimal-to-float approximation. One example of this is 1.0000001788139343 (thanks to stephentyrone for finding this example).
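In code, that problem case looks like this (using the value mentioned above; per the example, the round-trip through double rounds differently than the direct float literal):
double d = 1.0000001788139343d;
float f = 1.0000001788139343f;
System.out.println((float) d == f); // reportedly false for this value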
Somewhat more safely, you can do the comparison between doubles, but use a float literal in the original assignment:
double d = 1.3f;
float f = 1.3f;
System.out.println(d == f); // Prints true
In the latter case, it's a bit like saying:
decimal10 oneThird = 0.3333300000
However, as pointed out in the comments, you almost certainly shouldn't be comparing floating point values with ==. It's almost never the right thing to do, because of precisely this sort of thing. Usually if you want to compare two values you do it with some sort of "fuzzy" equality comparison, checking whether the two numbers are "close enough" for your purposes. See the Java Traps: double page for more information.
If you really need to check for absolute equality, that usually indicates that you should be using a different numeric format in the first place - for instance, for financial data you should probably be using BigDecimal.
A float is a single precision floating point number. A double is a double precision floating point number. More details here: http://www.concentric.net/~Ttwang/tech/javafloat.htm
Note: It is a bad idea to check exact equality for floating point numbers. Most of the time, you want to do a comparison based on a delta or tolerance value.
For example:
float a = 1.3f;
double b = 1.3;
float delta = 0.000001f;
if (Math.abs(a - b) < delta)
{
System.out.println("Close enough!");
}
else
{
System.out.println("Not very close!");
}
Some numbers can't be represented exactly in floating point (e.g. 0.01) so you might get unexpected results when you compare for equality.
Read this article.
The article above clearly illustrates your scenario, with examples using the double and float types.
float a=1.3f;
double b=1.3;
At this point you have two variables containing binary approximations to the Real number 1.3. The first approximation is accurate to about 7 decimal digits, and the second one is accurate to about 15 decimal digits.
if(a==b) {
The expression a==b is evaluated in two stages. First the value of a is converted from a float to a double by padding the binary representation. The result is still only accurate to about 7 decimal digits as a representation of the Real number 1.3. Next you compare the two different approximations. Since they are different, the result of a==b is false.
There are two lessons to learn:
Float (and double) literals are almost always approximations; e.g. the actual number that corresponds to the literal 1.3f is not exactly equal to the Real number 1.3.
Every time you do a floating point computation, errors creep in, and these errors tend to build up. So when you are comparing floats / doubles it is usually a mistake to use a simple "==", "<", etcetera. Instead you should use |a - b| < delta, where delta is chosen appropriately. (And figuring out what is an appropriate delta is not always straightforward either.)
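One common refinement, since a fixed delta misbehaves when the numbers are large, is a relative tolerance (a sketch; the 1e-9 factor is an arbitrary choice):
static boolean nearlyEqual(double a, double b) {
    double relativeEpsilon = 1e-9;
    return Math.abs(a - b) <= relativeEpsilon * Math.max(Math.abs(a), Math.abs(b));
}
Note that a purely relative test degenerates near zero (everything close to 0 fails it except 0 itself), which is why an absolute epsilon, as used elsewhere in this thread, is the better fit there.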
You should have taken that course in Numerical Analysis :-)
Never check for equality between floating point numbers. Specifically, to answer your question, the number 1.3 is hard to represent as a binary floating point and the double and float representations are different.
The problem is that Java (and alas .NET as well) is inconsistent about whether a float value represents a single precise numeric quantity or a range of quantities. If a float is considered to represent an exact numeric quantity of the form Mant * 2^Exp (where Mant is an integer from 0 to 2^24 and Exp is an integer), then an attempt to cast any number not of that form to float should throw an exception. If it's considered to represent "the locus of numbers for which some particular representation in the above form has been deemed likely to be the best", then a double-to-float cast would be correct even for double values not of the above form [casting the double that best represents a quantity to a float will almost always yield the float that best represents that quantity, though in some corner cases (e.g. numeric quantities in the range 8888888.500000000001 to 8888888.500000000932) the float which is chosen may be a few parts per trillion worse than the best possible float representation of the actual numeric quantity].
To use an analogy, suppose two people each have a ten-centimeter-long object and they measure it. Bob uses an expensive set of calipers and determines that his object is 3.937008" long. Joe uses a tape measure and determines that his object is 3 15/16" long. Are the objects the same size? If one converts Joe's measurement to millionths of an inch (3.937500"), the measurements will appear different, but if one instead converts Bob's measurement to the nearest 1/256" fraction, they will appear equal. Although the former comparison might seem more "precise", the latter is apt to be more meaningful. Joe's measurement of 3 15/16" doesn't really mean 3.937500"; it means "a distance which, using a tape measure, is indistinguishable from 3 15/16"". And 3.937008" is, like the size of Joe's object, a distance which using a tape measure would be indistinguishable from 3 15/16".
Unfortunately, even though it would be more meaningful to compare the measurements using the lower precision, Java's floating-point-comparison rules assume that a float represents a single precise numeric quantity, and performs comparisons on that basis. While there are some cases where this is useful (e.g. knowing whether the particular double produced by casting some value to float and back to double would match the starting value), in general direct equality comparisons between float and double are not meaningful. Even though Java does not require it, one should always cast the operands of a floating-point equality comparison to be the same type. The semantics that result from casting the double to float before the comparison are different from those of casting the float to double, and the behavior Java picks by default (cast the float to double) is often semantically wrong.
Actually neither float nor double can store 1.3. I am not kidding. Watch this video carefully.
https://www.youtube.com/watch?v=RtHKwsXuRkk&index=50&list=PL6pxHmHF3F5JPdnEqKALRMgogwYc2szp1
Can every possible value of a float variable can be represented exactly in a double variable?
In other words, for all possible values X will the following be successful:
float f1 = X;
double d = f1;
float f2 = (float) d;
if (f1 == f2)
    System.out.println("Success!");
else
    System.out.println("Failure!");
My suspicion is that there is no exception, or if there is it is only for an edge case (like +/- infinity or NaN).
Edit: Original wording of question was confusing (stated two ways, one which would be answered "no" the other would be answered "yes" for the same answer). I've reworded it so that it matches the question title.
Yes.
Proof by enumeration of all possible cases:
public class TestDoubleFloat {
    public static void main(String[] args) {
        for (long i = Integer.MIN_VALUE; i <= Integer.MAX_VALUE; i++) {
            float f1 = Float.intBitsToFloat((int) i);
            double d = (double) f1;
            float f2 = (float) d;
            if (f1 != f2) {
                if (Float.isNaN(f1) && Float.isNaN(f2)) {
                    continue; // ok, NaN round-trips to NaN
                }
                // original used JUnit's fail(); throw instead so the class is self-contained
                throw new AssertionError("oops: " + f1 + " != " + f2);
            }
        }
    }
}
finishes in 12 seconds on my machine. 32 bits are small.
In theory, there is no such value, so "yes", every float should be representable as a double. Converting from a float to a double conceptually just widens the fields -- the 23 mantissa bits are padded with zero bits and the 8-bit exponent is re-biased into its 11-bit form; the two types use the same kind of format, just with different sized fields.
Yes, floats are a subset of doubles. Both floats and doubles have the form (sign * a * 2^b). The difference between floats and doubles is the number of bits in a & b. Since doubles have more bits available, assigning a float value to a double effectively means inserting extra 0 bits.
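You can watch the zero bits being inserted with the bit-level helpers (a short demonstration; toBinaryString drops leading zero bits):
float f = 1.3f;
double d = f; // widening conversion
System.out.println(Integer.toBinaryString(Float.floatToIntBits(f)));
System.out.println(Long.toBinaryString(Double.doubleToLongBits(d)));
// the float's 23 mantissa bits reappear in the double followed by 29 zero bits;
// the exponent is re-biased from its 8-bit form to its 11-bit form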
As everyone has already said, "no". But that's actually a "yes" to the question itself, i.e. every float can be exactly expressed as a double. Confusing. :)
If I'm reading the language specification correctly (and as everyone else is confirming), there is no such value.
That is, each claims to hold only IEEE 754 standard values, so casts between the two should incur no change except in memory given.
(clarification: There would be no change as long as the value was small enough to be held in a float; obviously if the value was too many bits to be held in a float to begin with, casting from double to float would result in a loss of precision.)
@KenG: This code:
float a = 0.1F
println "a=${a}"
double d = a
println "d=${d}"
fails not because 0.1f can't be exactly represented. The question was "is there a float value that cannot be represented as a double", which this code doesn't prove. Although 0.1f can't be stored exactly, the value that a is given (which isn't 0.1f exactly) can be stored as a double (which also won't be 0.1f exactly). Assuming an Intel FPU, the bit pattern for a is:
0 01111011 10011001100110011001101
and the bit pattern for d is:
0 01111111011 100110011001100110011010 (followed by lots more zeros)
which has the same sign, exponent (-4 in both cases) and the same fractional part (separated by spaces above). The difference in the output is due to the position of the second non-zero digit in the number (the first is the 1 after the point), which can only be represented with a double. The code that outputs the string format stores intermediate values in memory and is specific to floats and doubles (i.e. there is a double-to-string function and another float-to-string function). If the to-string function were optimised to use the FPU stack to store the intermediate results of the to-string process, the output would be the same for float and double, since the FPU uses the same, larger format (80 bits) for both float and double.
There are no float values that can't be stored identically in a double, i.e. the set of float values is a subset of the set of double values.
Snark: NaNs will compare differently after (or indeed before) conversion.
This does not, however, invalidate the answers already given.
I took the code you listed and decided to try it in C++ since I thought it might execute a little faster and it is significantly easier to do unsafe casting. :-D
I found out that for valid numbers, the conversion works and you get the exact bitwise representation after the cast. However, for non-numbers, e.g. 1.#QNAN0, etc., the result will use a simplified representation of the non-number rather than the exact bits of the source. For example:
**** FAILURE **** 2140188725 | 1.#QNAN0 -- 0xa0000000 0x7ffa1606
I cast an unsigned int to float then to double and back to float. The number 2140188725 (0x7F90B035) results in a NAN and converting to double and back is still a NAN but not the exact same NAN.
Here is the simple C++ code:
typedef unsigned int uint;
for (uint i = 0; i < 0xFFFFFFFF; ++i)
{
    float f1 = *(float *)&i;   // type-punning; memcpy would be the fully defined way
    double d = f1;
    float f2 = (float)d;
    if (f1 != f2)
        printf("**** FAILURE **** %u | %f -- 0x%08x 0x%08x\n",
               i, f1, *(uint *)&f1, *(uint *)&f2); // pass the bit patterns to %x, not the floats
    if ((i % 1000000) == 0)
        printf("Iteration: %u\n", i);
}
The answer to the first question is yes, the answer to the 'in other words', however is no. If you change the test in the code to be if (!(f1 != f2)) the answer to the second question becomes yes -- it will print 'Success' for all float values.
In theory, every normal single can have its exponent and mantissa padded to create a double, and then the padding removed to return to the original single.
When you go from theory to reality is when you will have problems. I don't know if you were interested in theory or implementation. If it is implementation, then you can rapidly get into trouble.
IEEE is a horrible format; my understanding is that it was intentionally designed to be so tough that nobody could meet it, allowing the market to catch up to Intel (this was a while back) and allowing for more competition. If that is true, it failed; either way we are stuck with this dreadful spec. Something like the TI format is far superior for the real world in so many ways. I have no connection to either company or any of these formats.
Thanks to this spec there are very few, if any, FPUs that actually meet it (in hardware, or even in hardware plus the operating system), and those that do often fail on the next generation. (Google: TestFloat.) The problems these days tend to lie in the int-to-float and float-to-int conversions, not the single-to-double and double-to-single conversions you have specified above. Of course, what operation is the FPU going to perform to do that conversion? Add 0? Multiply by 1? Depends on the FPU and the compiler.
The problem with IEEE related to your question above is that there is more than one way that a number (not every number, but many numbers) can be represented. If I wanted to break your code I would start with minus zero, in the hope that one of the two operations would convert it to a plus zero. Then I would try denormals. And it should fail with a signaling NaN, but you called that out as a known exception.
The problem is that equals sign. Here is rule number one about floating point: never use an equals sign. Equals is a bit comparison, not a value comparison; if you have two values represented in different ways (plus zero and minus zero, for example) the bit comparison will fail even though it's the same number. Greater-than and less-than are done in the FPU; equals is done with the integer ALU.
I realize that you probably used the equal to explain the problem and not necessarily the code you wanted to succeed or fail.
If a floating-point type is viewed as representing a precise value, then as other posters have noted, every float value is representable as a double, but only a few values of double can be represented by float. On the other hand, if one recognizes that floating-point values are approximations, one will realize the real situation is reversed. If one uses a very precise instrument to measure something which is 3.437mm, one may correctly describe its size as 3.4mm. If one uses a ruler to measure the object as 3.4mm, it would be incorrect to describe its size as 3.400mm.
Even bigger problems exist at the top of the range. There is a float value that represents: "computed value exceeded 2^127 by an unknown amount", but there's no double value that indicates such a thing. Casting an "infinity" from single to double will yield a value "computed value exceeded 2^1023 by an unknown amount" which is off by a factor of over a googol.