What does strictfp mean in Java?

If anyone can explain what strictfp is in Java, with an example and a good explanation, that would be great.
I have already searched the internet but still cannot find a clear answer.
Thanks in advance :)

The explanation given by the JLS seems pretty clear:
Within an FP-strict expression, all intermediate values must be elements of the float value set or the double value set, implying that the results of all FP-strict expressions must be those predicted by IEEE 754 arithmetic on operands represented using single and double formats.
Within an expression that is not FP-strict, some leeway is granted for an implementation to use an extended exponent range to represent intermediate results; the net effect, roughly speaking, is that a calculation might produce "the correct answer" in situations where exclusive use of the float value set or double value set might result in overflow or underflow.
For a discussion of when one might use strictfp, see When should I use the "strictfp" keyword in java?

Related

How to get the smallest next or previous possible double value supported by the architecture?

Let's say I have a double variable d. Is there a way to get the next or previous value that is supported by the CPU architecture?
As a trivial example, if the value was 10.1245125 and the precision of the architecture was fixed to 7 decimal places, then the next value would be 10.1245126 and the previous value would be 10.1245124.
Obviously on floating-point architectures this is not that simple. How would I be able to achieve this (in Java)?
Actually, an IEEE 754 floating-point architecture makes this easy: thanks to the standard, the function is called nextafter in nearly all languages that support it, and this uniformity allowed me to write an answer to your question with very little familiarity with Java:
The method java.lang.Math.nextAfter(double start, double direction) returns the floating-point number adjacent to the first argument in the direction of the second argument.
Remember that -infinity and +infinity are floating-point values, and these values are convenient to give the direction (second argument). Do not make the common mistake of writing something like Math.nextAfter(x, x+1), which only works as long as 1 is greater than the ULP of x.
Anyone who writes the above probably means instead Math.nextAfter(x, Double.POSITIVE_INFINITY), which saves an addition and works for all values of x.
Math.nextUp and Math.nextDown can be used to get the next/previous value; they are equivalent to the methods proposed in the accepted answer, but more concise.
(This info was originally provided in a comment by @BjörnZurmaar.)
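For completeness, here is a small runnable sketch of these methods (the printed values follow from the IEEE 754 binary64 encoding; nextUp and nextAfter have been in java.lang.Math since Java 6, nextDown since Java 8):

public class AdjacentDoubles {
    public static void main(String[] args) {
        double d = 1.0;
        System.out.println(Math.nextUp(d));    // 1.0000000000000002
        System.out.println(Math.nextDown(d));  // 0.9999999999999999
        // Equivalent, with the direction given explicitly:
        System.out.println(Math.nextAfter(d, Double.POSITIVE_INFINITY));
        System.out.println(Math.nextAfter(d, Double.NEGATIVE_INFINITY));
    }
}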

Java double epsilon

I'm currently in need of an epsilon of type double (preferably a constant from Java's libraries instead of my own implementation/definition).
As far as I can see, Double has MIN_VALUE and MAX_VALUE as static members.
Why is there no EPSILON?
What would an epsilon<double> be?
Are there any differences from std::numeric_limits<double>::epsilon()?
Epsilon: The difference between 1 and the smallest value greater than 1 that is representable for the data type.
I'm presuming you mean epsilon in the sense of the error in the value, i.e. this.
If so then in Java it's referred to as ULP (unit in last place). You can find it by using the java.lang.Math package and the Math.ulp() method. See javadocs here.
The value isn't stored as a static member because it will be different depending on the double you are concerned with.
EDIT: By the OP's definition of epsilon now in the question, the ULP of a double of value 1.0 is 2.220446049250313E-16 expressed as a double. (I.e. the return value of Math.ulp(1.0).)
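A quick illustration of the fact that the ULP depends on the magnitude of the double in question (a sketch; the first printed value is the one quoted above):

public class UlpDemo {
    public static void main(String[] args) {
        System.out.println(Math.ulp(1.0));    // 2.220446049250313E-16
        // The spacing between adjacent doubles grows with magnitude...
        System.out.println(Math.ulp(1.0e6));
        // ...and shrinks for values close to zero:
        System.out.println(Math.ulp(1.0e-6));
    }
}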
By the edit of the question, explaining what is meant by EPSILON, the question is now clear, but it might be good to point out the following:
I believe that the original question was triggered by the fact that in C there is a constant DBL_EPSILON, defined in the standard header file float.h, which captures what the question refers to. The same standard header file contains definitions of the constants DBL_MIN and DBL_MAX, which clearly correspond to Double.MIN_VALUE and Double.MAX_VALUE, respectively, in Java. Therefore it would be natural to assume that Java, by analogy, should also contain a definition of something like Double.EPSILON with the same meaning as DBL_EPSILON in C. Strangely, however, it does not. Even more strangely, C# does contain a definition, double.Epsilon, but it has a different meaning: namely the one that is covered in C by the constant DBL_MIN and in Java by Double.MIN_VALUE. Certainly a situation that can lead to some confusion, as it makes the term EPSILON ambiguous.
Without using the Math package:
Double.longBitsToDouble(971L << 52)
That's 2^-52 (971 = 1023, the double exponent bias, minus 52; the shift by 52 is because the mantissa occupies the low 52 bits of the encoding).
It's a little quicker than Math.ulp(1.0).
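A sketch confirming that the bit-pattern construction and Math.ulp(1.0) produce the same value:

public class EpsilonBits {
    public static void main(String[] args) {
        // Exponent field 971 = 1023 - 52, shifted past the 52 mantissa bits,
        // encodes 1.0 * 2^-52 directly in the IEEE 754 bit layout.
        double epsFromBits = Double.longBitsToDouble(971L << 52);
        System.out.println(epsFromBits == Math.ulp(1.0)); // true: both are 2^-52
    }
}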
Also, if you need this to compare double values, there's a really helpful article: https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
double: The double data type is a double-precision 64-bit IEEE 754 floating point. Its range of values is beyond the scope of this discussion, but is specified in the Floating-Point Types, Formats, and Values section of the Java Language Specification. For decimal values, this data type is generally the default choice. As mentioned above, this data type should never be used for precise values, such as currency.
http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
Looking up IEEE 754 you'll find the precision of epsilon...
http://en.wikipedia.org/wiki/IEEE_floating_point
binary64:
base (b) = 2
precision (p) = 53
machine epsilon (e), rounding definition: (b^-(p-1))/2 = 2^-53 ≈ 1.11e-16
machine epsilon (e), interval definition: b^-(p-1) = 2^-52 ≈ 2.22e-16
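Those figures can also be reproduced with the classic halving loop (a sketch, not library code):

public class MachineEpsilon {
    public static void main(String[] args) {
        // Halve eps until adding half of it to 1.0 no longer changes the sum.
        double eps = 1.0;
        while (1.0 + eps / 2.0 > 1.0) {
            eps /= 2.0;
        }
        System.out.println(eps); // 2.220446049250313E-16 == 2^-52
    }
}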

Why does casting Double.NaN to int not throw an exception in Java?

So I know the IEEE 754 specifies some special floating point values for values that are not real numbers. In Java, casting those values to a primitive int does not throw an exception like I would have expected. Instead we have the following:
int n;
n = (int)Double.NaN; // n == 0
n = (int)Double.POSITIVE_INFINITY; // n == Integer.MAX_VALUE
n = (int)Double.NEGATIVE_INFINITY; // n == Integer.MIN_VALUE
What is the rationale for not throwing exceptions in these cases? Is this an IEEE standard, or was it merely a choice by the designers of Java? Are there bad consequences that I am unaware of if exceptions would be possible with such casts?
What is the rationale for not throwing exceptions in these cases?
I imagine that the reasons include:
These are edge cases, and are likely to occur rarely in applications that do this kind of thing.
The behavior is not "totally unexpected".
When an application casts from a double to an int, significant loss of information is expected. The application is either going to ignore this possibility, or the cast will be preceded by checks to guard against it ... which could also check for these cases (see the sketch after this list).
No other double / float operations result in exceptions, and (IMO) it would be a bit inconsistent to do it in this case.
There could possibly be a performance hit ... on some hardware platforms (current or future).
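For illustration, here is a hedged sketch of such a guard; safeDoubleToInt is a hypothetical helper, not a standard library method:

static int safeDoubleToInt(double d) {
    // Hypothetical guard for when the silent NaN -> 0 and
    // infinity -> MIN/MAX behavior of the cast is unwanted.
    if (Double.isNaN(d)) {
        throw new ArithmeticException("NaN has no int value");
    }
    if (d < Integer.MIN_VALUE || d > Integer.MAX_VALUE) {
        throw new ArithmeticException("out of int range: " + d);
    }
    return (int) d; // plain narrowing conversion, now with no surprises
}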
A commentator said this:
"I suspect the decision to not have the conversion throw an exception was motivated by a strong desire to avoid throwing exceptions for any reasons, for fear of forcing code to add it to a throws clause."
I don't think that is a plausible explanation:
The Java language designers¹ don't have a mindset of avoiding throwing exceptions "for any reason". There are numerous examples in the Java APIs that demonstrate this.
The issue of the throws clause is addressed by making the exception unchecked. Indeed, many related exceptions like ArithmeticException or ClassCastException are declared as unchecked for this reason.
Is this an IEEE standard, or was it merely a choice by the designers of Java?
The latter, I think.
Are there bad consequences that I am unaware of if exceptions would be possible with such casts?
None apart from the obvious ones ...
(But it is not really relevant. The JLS and JVM spec say what they say, and changing them would be liable to break existing code. And it is not just Java code we are talking about now ...)
I've done a bit of digging. A lot of the x86 instructions that could be used to convert from double to integer seem to generate hardware interrupts ... unless masked. It is not clear (to me) whether the specified Java behavior is easier or harder to implement than the alternative suggested by the OP.
¹ I don't dispute that some Java programmers do think this way. But they were / are not the Java designers, and this question is asking specifically about the Java design rationale.
What is the rationale for not throwing exceptions in these cases? Is this an IEEE standard, or was it merely a choice by the designers of Java?
The IEEE 754-1985 Standard, in pages 20 and 21 under sections 2.2.1 NaNs and 2.2.2 Infinity, clearly explains the reasons why NaN and infinity values are required by the standard. Therefore this is not a Java thing.
The Java Virtual Machine Specification, in section 3.8.1 Floating Point Arithmetic and IEEE 754, states that when conversions to integral types are carried out, the JVM applies rounding toward zero, which explains the results you are seeing.
The standard does mention a feature named "trap handler" that could be used to determine when overflow or NaN occurs, but the Java Virtual Machine Specification clearly states that this is not implemented for Java. It says in section 3.8.1:
The floating-point operations of the Java virtual machine do not throw exceptions, trap, or otherwise signal the IEEE 754 exceptional conditions of invalid operation, division by zero, overflow, underflow, or inexact. The Java virtual machine has no signaling NaN value.
So the behavior is fully specified, regardless of the consequences.
Are there bad consequences that I am unaware of if exceptions would be possible with such casts?
Understanding the reasons stated in the standard should suffice to answer this question. The standard explains, with exhaustive examples, the consequences you are asking about here. I would post them, but it would be too much information, and the examples would be impossible to format appropriately in this editing tool.
EDIT
I was reading the latest maintenance review of the Java Virtual Machine Specification, as published recently by the JCP as part of their work on JSR 924, and section 2.11.14, named "Type conversion instructions", contains some more information that could help you in your quest for answers. It is not yet what you are looking for, but I believe it helps a bit. It says:
In a narrowing numeric conversion of a floating-point value to an integral type T, where T is either int or long, the floating-point value is converted as follows:
If the floating-point value is NaN, the result of the conversion is an int or long 0.
Otherwise, if the floating-point value is not an infinity, the floating-point value is rounded to an integer value V using IEEE 754 round towards zero mode. There are two cases:
If T is long and this integer value can be represented as a long, then the result is the long value V.
If T is of type int and this integer value can be represented as an int, then the result is the int value V.
Otherwise:
Either the value must be too small (a negative value of large magnitude or negative infinity), and the result is the smallest representable value of type int or long.
Or the value must be too large (a positive value of large magnitude or positive infinity), and the result is the largest representable value of type int or long.
A narrowing numeric conversion from double to float behaves in accordance with IEEE 754. The result is correctly rounded using IEEE 754 round to nearest mode. A value too small to be represented as a float is converted to a positive or negative zero of type float; a value too large to be represented as a float is converted to a positive or negative infinity. A double NaN is always converted to a float NaN.
Despite the fact that overflow, underflow, or loss of precision may occur, narrowing conversions among numeric types never cause the Java virtual machine to throw a runtime exception (not to be confused with an IEEE 754 floating-point exception).
I know this simply restates what you already know, but it contains a clue: it appears that the IEEE standard requires round-to-nearest as the default rounding mode. Perhaps there you can find the reasons for this behavior.
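The quoted rules are easy to verify directly (a sketch; the expected outputs in the comments follow from the spec text above):

public class NarrowingDemo {
    public static void main(String[] args) {
        System.out.println((int) Double.NaN);                 // 0
        System.out.println((long) Double.NaN);                // 0
        System.out.println((int) Double.POSITIVE_INFINITY);   // 2147483647 (Integer.MAX_VALUE)
        System.out.println((long) Double.NEGATIVE_INFINITY);  // -9223372036854775808 (Long.MIN_VALUE)
        System.out.println((int) -2.9);                       // -2: round towards zero, not floor
    }
}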
EDIT
The IEEE Standard in question, in section 2.3.2 Rounding Modes, states:
By default, rounding means round toward the nearest. The standard requires that three other rounding modes be provided; namely, round toward 0, round toward +Infinity, and round toward -Infinity. When used with the convert-to-integer operation, round toward -Infinity causes the convert to become the floor function, whereas round toward +Infinity is ceiling.
The rounding mode affects overflow because when round toward 0 or round toward -Infinity is in effect, an overflow of positive magnitude causes the default result to be the largest representable number, not +Infinity. Similarly, overflows of negative magnitude will produce the largest negative number when round toward +Infinity or round toward 0 is in effect.
Then they proceed to mention an example of why this is useful in interval arithmetic. Not sure, again, that this is the answer you are looking for, but it can enrich your search.
There is an ACM presentation from 1998 that still seems surprisingly current and brings some light: https://people.eecs.berkeley.edu/~wkahan/JAVAhurt.pdf.
More concretely, regarding the surprising lack of exceptions when casting NaNs and infinities: see page 3, point 3: "Infinities and NaNs unleashed without the protection of floating-point traps and flags mandated by IEEE Standards 754/854 belie Java’s claim to robustness."
The presentation doesn't really answer the "why's", but does explain the consequences of the problematic design decisions in the Java language's implementation of floating point, and puts them in the context of the IEEE standards and even other implementations.
It is in the JLS, see:
JavaRanch post
http://www.coderanch.com/t/239753/java-programmer-SCJP/certification/cast-double-int
However, a warning would be nice.
Actually, I think during some casts bit operations are performed (probably for performance reasons?), so you can get some unexpected behaviours. See what happens when you use the >> and << operators.
For example:
public static void main(String[] args) {
    // The cast binds tighter than the subtraction, so each expression first
    // narrows to short (keeping only the low 16 bits) and then subtracts.
    short test1 = (short) Integer.MAX_VALUE;    // low 16 bits of 0x7FFFFFFF are all ones -> -1
    System.out.println(test1);
    short test2 = (short) Integer.MAX_VALUE - 1;
    System.out.println(test2);
    short test3 = (short) Integer.MAX_VALUE - 2;
    System.out.println(test3);
    short test4 = (short) Double.MAX_VALUE - 3; // double -> int saturates, then int -> short gives -1
    System.out.println(test4);
}
will output:
-1
-2
-3
-4

Why does Math.round return a long but Math.floor return a double?

Why the inconsistency?
There is no inconsistency: the methods are simply designed to follow different specifications.
long round(double a)
Returns the closest long to the argument.
double floor(double a)
Returns the largest (closest to positive infinity) double value that is less than or equal to the argument and is equal to a mathematical integer.
Compare with double ceil(double a)
double rint(double a)
Returns the double value that is closest in value to the argument and is equal to a mathematical integer.
So by design round rounds to a long and rint rounds to a double. This has always been the case since JDK 1.0.
Other methods were added in JDK 1.2 (e.g. toRadians, toDegrees); others were added in 1.5 (e.g. log10, ulp, signum, etc), and yet some more were added in 1.6 (e.g. copySign, getExponent, nextUp, etc) (look for the Since: metadata in the documentation); but round and rint have always had each other the way they are now since the beginning.
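To see the two designs side by side, here is a short sketch (the printed values follow from the javadoc excerpts above):

public class RoundVsRint {
    public static void main(String[] args) {
        double x = 2.5;
        long r   = Math.round(x); // 3   -- rounds half up, returns an integral type
        double i = Math.rint(x);  // 2.0 -- rounds half to even, stays a double
        double f = Math.floor(x); // 2.0 -- largest mathematical integer <= x, as a double
        System.out.println(r + " " + i + " " + f);
    }
}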
Arguably, perhaps instead of long round and double rint, it'd be more "consistent" to name them double round and long rlong, but this is debatable. That said, if you insist on categorically calling this an "inconsistency", then the reason may be as unsatisfying as "because it's inevitable".
Here's a quote from Effective Java 2nd Edition, Item 40: Design method signatures carefully:
When in doubt, look to the Java library APIs for guidance. While there are plenty of inconsistencies -- inevitable, given the size and scope of these libraries -- there is also a fair amount of consensus.
Distantly related questions
Why does int num = Integer.getInteger("123") throw NullPointerException?
Most awkward/misleading method in Java Base API?
Most Astonishing Violation of the Principle of Least Astonishment
floor would have been chosen to match the standard C routine in math.h (rint, mentioned in another answer, is also present in that library, and returns a double, as in Java).
But round was not a standard function in C at that time (it's not mentioned in C89 - c identifiers and standards; C99 does define round, and it returns a double, as you would expect). It's normal for language designers to "borrow" ideas, so maybe it comes from some other language? Fortran 77 doesn't have a function of that name, and I am not sure what else would have been used back then as a reference. Perhaps VB - that does have Round but, unfortunately for this theory, it returns a double (PHP too). Interestingly, Perl deliberately avoids defining round.
[Update: hmmm, it looks like Smalltalk returns integers. I don't know enough about Smalltalk to know if that is correct and/or general, and the method is called rounded, but it might be the source. Smalltalk did influence Java in some ways (although more conceptually than in details).]
If it's not Smalltalk, then we're left with the hypothesis that someone simply chose poorly (given the implicit conversions possible in Java, it seems to me that returning a double would have been more useful, since then it could be used both when converting types and when doing floating-point calculations).
In other words: functions common to Java and C tend to be consistent with the C library standard of the time; the rest seem to be arbitrary, but this particular wrinkle may have come from Smalltalk.
I agree that it is odd that Math.round(double) returns long. If large double values are cast to long (which is what Math.round implicitly does), Long.MAX_VALUE is returned. An alternative is using Math.rint() in order to avoid that. However, Math.rint() has a somewhat strange rounding behavior: ties are settled by rounding to the even integer, i.e. 4.5 is rounded down to 4.0 but 5.5 is rounded up to 6.0. Another alternative is to use Math.floor(x+0.5). But be aware that 1.5 is rounded to 2 while -1.5 is rounded to -1, not -2. Yet another alternative is to use Math.round, but only if the number is in the range between Long.MIN_VALUE and Long.MAX_VALUE. Double precision floating point values outside this range are integers anyhow.
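The tie-breaking differences just described can be demonstrated directly (a sketch; the expected outputs follow from the behaviors listed above):

public class TieBreaking {
    public static void main(String[] args) {
        System.out.println(Math.rint(4.5));               // 4.0 -- half rounds to even
        System.out.println(Math.rint(5.5));               // 6.0
        System.out.println(Math.floor(1.5 + 0.5));        // 2.0 -- half rounds towards +infinity
        System.out.println(Math.floor(-1.5 + 0.5));       // -1.0, not -2.0
        System.out.println(Math.round(Double.MAX_VALUE)); // 9223372036854775807 (Long.MAX_VALUE)
    }
}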
Unfortunately, why Math.round() returns long is unknown. Somebody made that decision, and they probably never gave an interview to tell us why. My guess is that Math.round was designed to provide a better way (i.e., with rounding) of converting doubles to longs.
Like everyone else here I also don't know the answer, but thought someone might find this useful. I noticed that if you want to round a double to an int without casting, you can use the two round implementations long round(double) and int round(float) together:
double d = 42.7; // any double value
int i = Math.round(Math.round(d));
// The inner round(double) returns a long; the long widens to float,
// so the outer call resolves to round(float), which returns an int.

What does strictfp do in Java?

I read the JVM specification for the strictfp modifier but still don't fully understand what it means.
Can anyone enlighten me?
Basically, it mandates that calculations involving the affected float and double variables have to follow the IEEE 754 spec to the letter, including for intermediate results.
This has the effect of:
Ensuring that the same input will always generate exactly the same result on all systems
The CPU may have to do some extra work, making it slightly slower
The results will be, in some cases, less accurate (much less, in extreme cases)
Edit:
More specifically, many modern CPUs use 80-bit floating-point arithmetic ("extended precision") internally. Thus, they can internally represent some numbers that would cause arithmetic overflow or underflow (yielding infinity or zero, respectively) in 32- or 64-bit floats; in borderline cases, the 80 bits simply allow more precision to be retained. When such numbers occur as intermediate results in a calculation, but the end result is inside the range of numbers that can be represented by 32/64-bit floats, then you get a "more correct" result on machines that use 80-bit float arithmetic than on machines that don't, or than in code that uses strictfp.
I like this answer, which hits the point:
It ensures that your calculations are equally wrong on all platforms.
strictfp is a keyword and can be used to modify a class or a method, but never a variable. Marking a class as strictfp means that any method code in the class will conform to the IEEE 754 standard rules for floating points. Without that modifier, floating points used in the methods might behave in a platform-dependent way. If you don't declare a class as strictfp, you can still get strictfp behavior on a method-by-method basis, by declaring a method as strictfp.
If you don't know the IEEE 754 standard, now's not the time to learn it. You have, as we say, bigger fish to fry.
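A minimal sketch of the two placements described above (the class names here are hypothetical). Note that since Java 17 (JEP 306), floating-point arithmetic is always strict, so the modifier is redundant on newer JVMs:

// Every float/double expression in this class is evaluated strictly
// per IEEE 754, with no extended-precision intermediates.
strictfp class StrictCalculator {
    double dot(double ax, double ay, double bx, double by) {
        return ax * bx + ay * by; // intermediates stay in the double value set
    }
}

class MixedCalculator {
    // Only this method is FP-strict; on pre-Java 17 JVMs, other methods
    // in this class may use the platform's extended exponent range.
    strictfp double scale(double x, double factor) {
        return x * factor;
    }
}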
