This question already has answers here:
Closure in Java 7 [closed]
(7 answers)
Closed 6 years ago.
I have heard that closures could be introduced in the next Java standard that is scheduled to be released somewhere around next summer.
What would this syntax look like?
I read somewhere that introducing closures in Java is a bigger change than generics were in Java 5. Is this true? What are the pros and cons?
(By now we definitely know that closures will not be included in the next Java release.)
edit: http://puredanger.com/tech/2009/11/18/closures-after-all/ :D
edit2: Re-thinking JDK7: http://blogs.oracle.com/mr/entry/rethinking_jdk7
edit3: There’s not a moment to lose!: http://blogs.oracle.com/mr/entry/quartet
Have a look at http://www.javac.info/ .
It seems like this is how it would look:
boolean even = { int x => x % 2 == 0 }.invoke(15);
where the { int x => x % 2 == 0 } bit is the closure.
It really depends on what gets introduced, and indeed whether it will be introduced at all. There are a number of closure proposals of varying sizes.
See Alex Miller's Java 7 page for the proposals and various blog posts.
Personally I'd love to see closures - they're beautiful and incredibly helpful - but I fear that some of the proposals are pretty hairy.
In November 2009 there was a surprising u-turn on this issue, and closures will now be added to Java 7.
Update
Closures (AKA lambda expressions) in Java 7 didn't happen. They were finally added in the first release of Java 8 in 2014.
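For comparison, here is a minimal sketch of the syntax that eventually shipped in Java 8, equivalent to the proposed { int x => x % 2 == 0 } closure discussed below (class and field names are my own):

```java
import java.util.function.IntPredicate;

public class EvenCheck {
    // Java 8 equivalent of the proposed { int x => x % 2 == 0 } closure
    static final IntPredicate IS_EVEN = x -> x % 2 == 0;

    public static void main(String[] args) {
        System.out.println(IS_EVEN.test(15)); // prints false: 15 is odd
    }
}
```

Note that Java 8 reused existing functional interfaces such as IntPredicate rather than introducing a structural function type like {int => boolean}.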
Unfortunately you will not find closures in Java 7. If you are looking for a lighter-weight way to get closures in Java right now, check out the lambdaj project:
http://code.google.com/p/lambdaj/
This page lists the Java 7 features: http://tech.puredanger.com/java7/#switch — the examples are very useful.
Note that a "function-type" is really a type under the proposal:
{int => boolean} evaluateInt; //declare variable of "function" type
evaluateInt = {int x => x % 2 == 0}; //assignment (the body must yield a boolean to match the declared type)
I think there is still a lot of debate going in with regards to what syntax will ultimately be used. I'd actually be pretty surprised if this does make it into Java 7 due to all of that.
Closures will be annoyingly verbose if there isn't some sort of type inference... :(
Closures have some serious edge cases. I would say that closures are a much more significant change than generics, and the latter still has a number of hairy edge cases.
e.g. The Java Collections libraries cannot be written/compiled without warnings.
From the Book "Core Java for the Impatient", Chapter "increment and decrement operators"
String arg = args[n++];
sets arg to args[n], and then increments n. This made sense thirty
years ago when compilers didn’t do a good job optimizing code.
Nowadays, there is no performance drawback in using two separate
statements, and many programmers find the explicit form easier to
read.
I thought such usage of increment and decrement operators was only used in order to write less code, but according to this quote it wasn't so in the past.
What was the performance benefit of writing statements such as String arg = args[n++]?
Some processors, like the Motorola 68000, support addressing modes that dereference a pointer and then increment it in a single instruction. For instance, MOVE.B (A0)+,D0 loads a byte through address register A0 and post-increments A0 as a side effect.
Older compilers might conceivably be able to use such an addressing mode for an expression like *p++ or arr[i++], but might fail to recognize the pattern when it is split across two statements.
Architectures and compilers have both improved over the years, so there is no single answer to this.
From the architecture standpoint: many processors support a store with pointer auto-increment in a single CPU cycle, so in the past the way you wrote the code affected the result (one operation versus several). DSP architectures in particular were good at this kind of parallelism, e.g. TI's C54xx DSPs with post-increment and post-decrement instructions, and instructions that operate on circular buffers: "ADD *AR2+, AR2–, A ; after accessing the operands, AR2 is incremented by one" (from the TMS320C54x DSP reference set). ARM cores also feature instructions that allow similar parallelism (the VLDR and VSTR instructions; see the documentation).
From the compiler standpoint: the compiler looks at how a variable is used in its scope, which was not always the case before. It can see whether the variable is reused later or not; if a variable is incremented and then discarded, there is no point in doing the increment at all. Modern compilers have to track variable usage anyway and can make smart decisions based on it (in Java 8, for example, the compiler must be able to spot "effectively final" variables that are never reassigned).
These operators were, and are, generally used for convenience by programmers rather than to achieve performance. The combined statement and the explicit two-line version compile to effectively the same code, so there is no longer any overhead either way.
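To make the equivalence concrete, here is a small sketch (helper name and arrays are my own) showing that the post-increment idiom from the question behaves exactly like the explicit two-statement form:

```java
public class PostIncrement {
    /** Returns {words[start], words[start + 1]} using the post-increment idiom. */
    static String[] firstTwo(String[] words, int start) {
        int n = start;
        String a = words[n++]; // reads words[start], then increments n to start + 1
        String b = words[n];   // reads words[start + 1]
        return new String[] { a, b };
    }

    public static void main(String[] args) {
        String[] r = firstTwo(new String[] {"alpha", "beta", "gamma"}, 0);
        System.out.println(r[0] + ", " + r[1]); // alpha, beta
    }
}
```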
This question already has answers here:
Should I use string.isEmpty() or "".equals(string)?
(6 answers)
Closed 7 years ago.
I'm writing a lot of components in Adobe CQ, so I have to deal with many user-set properties, and I'm getting a little tired of all the null checks before I can do an isEmpty check.
I'd like to do something like:
"".equals(string);
This would be a lot more readable, but how does it compare performance-wise? And yes, I would create the "" as a constant if there were multiple checks.
Personally I use Apache's StringUtils, eg:
if (StringUtils.isEmpty(someString)) {
...
or
if (StringUtils.isNotEmpty(someString)) {
...
Also, I really wouldn't worry about the performance of this unless you have benchmarked it and identified it as an issue.
It is preferred to use the isEmpty() method (simpler and faster source code).
Another efficient way to check empty string in java is to use:
string.length() == 0;
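Note that these idioms differ in how they handle null, which is the crux of the question. A minimal sketch (class and helper names are my own):

```java
public class EmptyCheck {
    /** Null-safe emptiness check: true for null or "". */
    static boolean isNullOrEmpty(String s) {
        return s == null || s.isEmpty();
    }

    public static void main(String[] args) {
        String s = null;
        System.out.println("".equals(s));        // false: equals is null-safe, no NPE
        System.out.println(isNullOrEmpty(s));    // true
        System.out.println(isNullOrEmpty(""));   // true
        // s.isEmpty() or s.length() == 0 would throw NullPointerException here
    }
}
```

So "".equals(string) quietly treats null as non-empty, while a combined null-or-empty check like the one above usually matches what component code actually wants.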
You should not care about performance here. Both versions have similar speed. Even if they compile differently, the JITted code is unlikely to differ by more than a few CPU cycles (especially given that String.equals is a JVM intrinsic). This is not the kind of thing you should worry about when programming in Java.
Which keywords are reserved in JavaScript but not in Java?
One example is debugger, but there are more.
By reserved I mean reserved words as well as future reserved words (in both strict and non-strict mode) and special tokens like null, true and false.
I'm interested in ECMAScript 5.1 as well as current 6 vs. Java 5-8 (not sure if there were new keywords since Java 5).
Update
For those who are interested in reasons to know this:
I know many Java developers switching from Java to JavaScript (my story). Knowing the delta in keywords is helpful.
Language history.
My very specific reason for asking: I'm building Java/JavaScript code generation tools (quasi cross-language). Which reserved keywords should I add to the Java code generator so that it produces JavaScript-compatible identifiers in the cross-language case?
This is what I've found out so far.
There seem to have been no new keywords in Java since 5.0 (which added enum).
Java vs. ECMAScript 5.1:
debugger
delete
function
in
typeof
var
with
export
let
yield
Java vs. ECMAScript 6 Rev 36 Release Candidate 3:
all of the above, plus:
await
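For the code-generation use case in the question, here is a hedged sketch (the class name, renaming scheme, and helper are my own invention, not from any library) of escaping these JavaScript-only reserved words when emitting identifiers:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class JsSafeIdentifiers {
    // Words reserved in ECMAScript 5.1/6 but not in Java, per the lists above.
    static final Set<String> JS_ONLY_RESERVED = new HashSet<>(Arrays.asList(
            "debugger", "delete", "function", "in", "typeof", "var", "with",
            "export", "let", "yield", "await"));

    /** Renames an identifier if JavaScript reserves it; hypothetical scheme. */
    static String jsSafe(String javaIdentifier) {
        return JS_ONLY_RESERVED.contains(javaIdentifier)
                ? javaIdentifier + "_" // trailing underscore keeps it valid in both languages
                : javaIdentifier;
    }

    public static void main(String[] args) {
        System.out.println(jsSafe("typeof")); // typeof_
        System.out.println(jsSafe("count"));  // count
    }
}
```

A suffix (rather than prefix) rename has the advantage that the result is still a legal identifier in both Java and JavaScript, so generated code round-trips.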
This question already has answers here:
Is the ternary operator faster than an "if" condition in Java [duplicate]
(9 answers)
Closed 9 years ago.
This doesn't look like a duplicate, as only one of my solutions involves a branch.
Essentially, which of these two lines is more efficient? This will be a Java app, but it would be nice to know the general answer as well.
shouldRefresh = useCache ? refetchIfExpired : true;
shouldRefresh = !useCache || refetchIfExpired;
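The two lines above are logically equivalent; a quick exhaustive check over all four input combinations confirms it (class and method names are my own):

```java
public class RefreshCheck {
    static boolean ternary(boolean useCache, boolean refetchIfExpired) {
        return useCache ? refetchIfExpired : true;
    }

    static boolean boolOps(boolean useCache, boolean refetchIfExpired) {
        return !useCache || refetchIfExpired;
    }

    public static void main(String[] args) {
        // Exhaustively verify both forms agree for all four input combinations
        for (boolean u : new boolean[] {false, true}) {
            for (boolean r : new boolean[] {false, true}) {
                System.out.println(u + " " + r + " -> " + ternary(u, r)
                        + " == " + boolOps(u, r));
            }
        }
    }
}
```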
The JIT compiler will figure out the fastest operation and use that. Use whatever makes the most sense to read. Don't optimize prematurely.
For interest's sake: If this were being compiled without optimizations, then the boolean operator would be faster. It's a simple mathematical operation, which takes just one CPU cycle (plus another for the ! operator), whereas the ternary expression would require a branch, which interrupts the pipeline if branch prediction guesses wrong.
I would not care about performance here, but about readability. In that respect, the ternary operator wins in your example. By the way, I expect roughly the same performance.
Also consider how readability saves time during code maintenance. So which is more important: an almost unmeasurable micro-optimization, or easier understanding? And if you think a comment can fix the readability, remember that writing it is an extra effort which also costs time.
This question already has answers here:
comparing float/double values using == operator
(9 answers)
Closed 5 years ago.
Are there any java libraries for doing double comparison?
e.g.
public static boolean greaterThanOrEqual(double a, double b, double epsilon){
return a - b > -epsilon;
}
Every project I start I end up re-implementing this and copy-pasting code and test.
NB: a good example of why it's better to use 3rd-party JARs is that IBM recommends the following:
"If you don't know the scale of the underlying measurements, using the
test "abs(a/b - 1) < epsilon" is likely to be more robust than simply
comparing the difference"
I doubt many people would have thought of this, and it illustrates that even simple code can be sub-optimal.
Guava has DoubleMath.fuzzyCompare().
There are no methods in the standard Java library that handle your problem. I suggest you follow Joachim's link and use that library, which is a good fit for your needs. Alternatively, consider creating a utils library of your own in which you collect frequently used methods like the one in your question. For different implementations of your problem, you should also look into this:
Java double comparison epsilon
Feel free to ask about any other ambiguities.
You should abstain from any library that uses the naive "maximum absolute difference" approach (like Guava). As detailed in Bruce Dawson's excellent article Comparing Floating Point Numbers, 2012 edition, it is highly error-prone because it only works for a very limited range of magnitudes. A much more robust approach is to use relative differences or ULPs for approximate comparisons.
The only library I know of that implements a correct approximate comparison algorithm is Apache Commons Math.
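If you do roll your own, a relative comparison in the spirit of Dawson's article can be sketched as follows (a minimal illustration, not a drop-in replacement for a tested library; class and method names are my own):

```java
public class ApproxCompare {
    /**
     * Relative comparison: true if a and b differ by at most relEps
     * times the larger of their magnitudes.
     */
    static boolean approxEquals(double a, double b, double relEps) {
        if (a == b) return true; // handles exact matches, including both zero
        double diff = Math.abs(a - b);
        double scale = Math.max(Math.abs(a), Math.abs(b));
        return diff <= relEps * scale;
    }

    public static void main(String[] args) {
        System.out.println(approxEquals(0.1 + 0.2, 0.3, 1e-9)); // true
        System.out.println(approxEquals(1.0, 1.1, 1e-9));       // false
    }
}
```

Note the early a == b test: a purely relative check fails when both values are exactly zero (the scale is zero), so exact equality must be handled first. Edge cases such as infinities and NaN would need additional handling in production code.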