Simple question, asked mostly out of curiosity about what Java compilers are smart enough to do. I know not all compilers are built equally, but I'm wondering if others feel it's reasonable to expect this optimization on most compilers I'm likely to run against, not whether it works on a specific version or on all versions.
So let's say that I have some tree structure and I want to collect all the descendants of a node. There are two easy ways to do this recursively.
The more natural method, for me, would be something like this:
public Set<Node> getDescendants() {
    Set<Node> descendants = new HashSet<Node>();
    descendants.addAll(getChildren());
    for (Node child : getChildren()) {
        descendants.addAll(child.getDescendants());
    }
    return descendants;
}
However, assuming no compiler optimizations and a decent-sized tree, this could get rather expensive. On each recursive call I create and fully populate a set, only to return that set up the stack so the calling method can add its contents to its own version of the descendants set, discarding the version that was just built and populated in the recursive call.
So now I'm creating many sets just to have them be discarded as soon as I return their contents. Not only do I pay a minor initialization cost for building the sets, but I also pay the more substantial cost of moving all the contents of one set into the larger set. In large trees most of my time is spent moving Nodes around in memory from set A to set B. I think this even makes my algorithm O(n^2) instead of O(n) due to the time spent copying Nodes, though it may work out to O(n log n) if I sat down to do the math.
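(A back-of-the-envelope count, for what it's worth: in the naive version, each node's entry gets re-inserted into the descendants set of every one of its ancestors, so the total number of insertions is the sum of depth(v) over all nodes v. That's Θ(n^2) for a degenerate, path-shaped tree and Θ(n log n) for a balanced one.)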
I could instead have a simple getDescendants method that calls a helper method that looks like this:
public Set<Node> getDescendants() {
    Set<Node> descendants = new HashSet<Node>();
    getDescendantsHelper(descendants);
    return descendants;
}

public Set<Node> getDescendantsHelper(Set<Node> descendants) {
    descendants.addAll(getChildren());
    for (Node child : getChildren()) {
        child.getDescendantsHelper(descendants);
    }
    return descendants;
}
This ensures that I only ever create one set and I don't have to waste time copying from one set to another. However, it requires writing two methods instead of one and generally feels a little more cumbersome.
The question is, do I need to do option two if I'm worried about optimizing this sort of method? Or can I reasonably expect the Java compiler, or JIT, to recognize that I am only creating temporary sets for the convenience of returning their contents to the calling method, and avoid the wasteful copying between sets?
edit: cleaned up a bad copy-paste job which led to my sample method adding everything twice. You know something is bad when your 'optimized' code is slower than your regular code.
The question is, do I need to do option two if I'm worried about optimizing this sort of method?
Definitely yes. If performance is a concern (and most of the time it is not!), then you need it.
The compiler optimizes a lot, but on a very different scale. Basically, it works with one method only, and it optimizes the most commonly used path therein. Due to heavy inlining it can sort of optimize across method calls, but nothing like the above.
It can also optimize away needless allocations, but only in very simple cases. Maybe something like
int sum(int... a) {
    int result = 0;
    for (int x : a) result += x;
    return result;
}
Calling sum(1, 2, 3) means allocating an int[3] for the varargs argument, and this allocation can be eliminated (whether the compiler really does it is a different question). It can even find out that the result is a constant (which I doubt it really does). If the result doesn't get used, it can perform dead code elimination (this happens rather often).
Your example involves allocating a whole HashSet and all its entries, and is several orders of magnitude more complicated. The compiler has no idea how a HashSet works, and it can't find out, e.g., that after m.addAll(m1) the set m contains all members of m1. No way.
This is an algorithmic optimization rather than a low-level one. That's what humans are still needed for.
For things the compiler could do (but currently fails to), see e.g. these questions of mine concerning associativity and bounds checks.
I need to modify a local variable inside a lambda expression in a JButton's ActionListener, and since I'm not able to modify it directly, I came across the AtomicInteger type.
I implemented it and it works just fine but I'm not sure if this is a good practice or if it is the correct way to solve this situation.
My code is the following:
newAnchorageButton.addActionListener(e -> {
    AtomicInteger anchored = new AtomicInteger();
    anchored.set(0);
    cbSets.forEach(cbSet ->
        cbSet.forEach(cb -> {
            if (cb.isSelected())
                anchored.incrementAndGet();
        })
    );
    // more code where I use the 'anchored' variable...
});
I'm not sure if this is the right way to solve this, since I've read that AtomicInteger is used mostly for concurrency-related applications and this program is single-threaded, but at the same time I can't find another way to solve it.
I could simply use two nested for-loops to go over those arrays, but I'm trying to reduce the method's cognitive complexity as much as I can according to the SonarLint VS Code extension, and leaving those for-loops theoretically increases the method's complexity and therefore hurts its readability and maintainability.
Replacing the for-loops with lambda expressions reduces the cognitive complexity but maybe I shouldn't pay that much attention to it.
While it is safe enough in single-threaded code, it would be better to count them in a functional way, like this:
long anchored = cbSets.stream()            // get a stream of the sets
        .flatMap(List::stream)             // flatten to a stream of cb's
        .filter(JCheckBox::isSelected)     // keep only the selected ones
        .count();                          // count them
Instead of mutating an accumulator, we limit the flattened stream to only the ones we're interested in and ask for the count.
More generally, though, it is always possible to sum things up or otherwise aggregate values without a mutable variable. Consider:
record Country(int population) { }
countries.stream()
        .mapToInt(Country::population)
        .reduce(0, Math::addExact)
Note: we never mutate any values; instead, we combine each successive value with the preceding one, producing a new value. One could use sum() but I prefer reduce(0, Math::addExact) to avoid the possibility of overflow.
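A quick sketch of the difference (a standalone demo, nothing more):

import java.util.stream.IntStream;

class OverflowDemo {
    public static void main(String[] args) {
        // sum() wraps around silently on int overflow
        System.out.println(IntStream.of(Integer.MAX_VALUE, 1).sum()); // prints -2147483648

        // reduce(0, Math::addExact) fails fast instead
        IntStream.of(Integer.MAX_VALUE, 1).reduce(0, Math::addExact); // throws ArithmeticException
    }
}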
and leaving those for-loops theoretically increases the method's complexity and therefore hurts its readability and maintainability.
This is obvious horsepuckey. x.forEach(foo -> bar) is not 'cognitively simpler' than for (var foo : x) bar; - you can map each AST node straight over from one to the other.
If a definition is being used to define complexity which concludes that one is significantly more complex than the other, then the only correct conclusion is that the definition is silly and should be fixed or abandoned.
To make it practical: yes, introducing AtomicInteger, whilst performance-wise it won't make one iota of difference, does make the code way more complicated. AtomicInteger's mere existence in the code suggests that concurrency is relevant here. It isn't, so you'd have to add a comment to explain why you're using it. Comments are evil. (They imply the code does not speak for itself, and they cannot be tested in any way.) They are often the least evil, but evil they are nonetheless.
The general 'trick' for keeping lambda-based code cognitively easy to follow is to embrace the pipeline (a minimal sketch follows this list):
You write some code that 'forms' a stream. This can be as simple as list.stream(), but sometimes you do some stream joining or flatmapping a collection of collections.
You have a pipeline of operations that operate on single elements in the stream and do not refer to the whole or to any neighbour.
At the end, you reduce (using collect, reduce, max - some terminator) such that the reducing method returns what you need.
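As a minimal sketch of that shape (the data and names here are made up purely for illustration):

import java.util.List;
import java.util.stream.Collectors;

class PipelineShape {
    public static void main(String[] args) {
        List<String> names = List.of("  alice ", "", "bob");
        List<String> cleaned = names.stream()      // 1) form a stream
                .filter(n -> !n.isBlank())         // 2) per-element operations, no neighbours
                .map(String::trim)
                .collect(Collectors.toList());     // 3) terminate with a reduction
        System.out.println(cleaned);               // [alice, bob]
    }
}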
The above model (and the other answer follows it precisely) tends to result in code that is about as readable/complex as the 'old style' code, and rarely (but sometimes!) in code that is more readable and significantly less complicated. Deviate from it and the result is virtually always considerably more complicated - a clear loser.
Not all for loops in Java fit the above model. If it doesn't fit, then trying to force that particular square peg into the round hole will take a lot of effort and almost always results in code that is significantly worse: either an order of magnitude slower or considerably more cognitively complicated.
It also means that it is virtually never 'worth' rewriting perfectly fine readable non-stream based code into stream based code; at best it becomes a percentage point more readable according to some personal tastes, with no significant universally agreed upon improvement.
Turn off that silly linter rule. The fact that it considers the above 'less' complex, and that it evidently determines that for (var foo : x) bar; is 'more complicated' than x.forEach(foo -> bar) is proof enough that it's hurting way more than it is helping.
I have the following to add to the two other answers:
Two general good practices in your code are in question:
Lambdas shouldn't be longer than 3-4 lines
Except in some precise cases, lambdas of stream operations should be stateless.
For #1, when a lambda is getting too long, consider extracting its code to a private method, for example.
You will probably gain in readability, and you will also probably gain by better separating UI from business logic.
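A rough sketch of that extraction, modelled loosely on the question's code (the class and the wiring here are hypothetical):

import java.awt.event.ActionEvent;
import java.util.List;
import javax.swing.JButton;
import javax.swing.JCheckBox;

class AnchoragePanel {
    private final JButton newAnchorageButton = new JButton("New anchorage");
    private final List<List<JCheckBox>> cbSets = List.of(); // stand-in for the real data

    AnchoragePanel() {
        // the listener body shrinks to a method reference
        newAnchorageButton.addActionListener(this::handleNewAnchorage);
    }

    private void handleNewAnchorage(ActionEvent e) {
        long anchored = countSelected();
        // ... UI updates that use 'anchored' go here ...
    }

    private long countSelected() {
        return cbSets.stream()
                .flatMap(List::stream)
                .filter(JCheckBox::isSelected)
                .count();
    }
}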
For #2, you are probably not concerned since you are working in a single thread at the moment, but streams can be parallelized, and they may not always execute exactly as you think they do.
For that reason, it's always better to keep the code stateless in stream pipeline operations. Otherwise you might be surprised.
More generally, streams are very good, very concise, but sometimes it's just better to do the same with good old loops.
Don't hesitate to come back to classic loops.
When Sonar tells you that the complexity is too high, you should in fact try to refactor your code: split it into smaller methods, improve the model of your objects, etc.
Is there an established way of measuring (or getting an existing measure of) a JDK class method's complexity? Is javap representative of time complexity, and to what degree? In particular, I am interested in the complexity of Arrays.sort() but also some other collection-manipulation methods.
E.g. I am trying to compare two implementations for performance; one uses Arrays.sort() and one doesn't. The javap disassembly for the one that doesn't shows a lot more steps (twice as many), but I am not sure whether the one that does excludes the Arrays.sort() steps. IOW, does javap of one method include a recursive measure of the methods invoked within, or just that method?
Also, is there a way, without modifying and recompiling the Java code itself, to find out how many loop iterations were done when a certain base Java method was invoked on specific parameters? E.g. measure the number of iterations of Arrays.sort(new char[] {'A', 'r', 'T', 'f'})?
I would not expect javap to be even a little bit representative of actual speed.
The Javadoc specifies the algorithmic complexity, but if you care about constant factors there is absolutely no way to realistically compare constant factors except with actual benchmarks.
You can't get any information on what was done when Arrays.sort is called on a primitive array, but by passing a custom Comparator that counts the number of calls, you can count the number of comparisons made when sorting an object array. (That said, object arrays are sorted with a different sorting algorithm -- specifically a stable one -- and primitive arrays are sorted with a Quicksort variant.)
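A minimal sketch of that counting trick (using an Integer[] for illustration; any object array works):

import java.util.Arrays;
import java.util.Comparator;
import java.util.concurrent.atomic.AtomicLong;

class ComparisonCounter {
    public static void main(String[] args) {
        Integer[] data = {5, 3, 8, 1, 9, 2};
        AtomicLong comparisons = new AtomicLong();

        Comparator<Integer> counting = (a, b) -> {
            comparisons.incrementAndGet(); // count every comparison the sort makes
            return a.compareTo(b);
        };

        Arrays.sort(data, counting);
        System.out.println("comparisons: " + comparisons.get());
    }
}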
You can use the output from javap to determine where loops occur: you want to find the goto instructions. This post gives a comprehensive explanation of that identification.
From the post:
Before considering any loop start/exit instrumentation, you should look into the definitions of what entry, exit and successors are. Although a loop will only have one entry point, it may have multiple exit points and/or multiple successors, typically caused by break statements (sometimes with labels), return statements and/or exceptions (explicitly caught or not). While you haven't given details regarding the kind of instrumentations you're investigating, it's certainly worth considering where you want to insert code (if that's what you want to do). Typically, some instrumentation may have to be done before each exit statement or instead of each successor statement (in which case you'll have to move the original statement).
Arrays.sort() for primitives uses a tuned quicksort. For objects it uses a mergesort (but this depends on the implementation).
From: Arrays
For example, the algorithm used by sort(Object[]) does not have to be a mergesort, but it does have to be stable.
I'm currently learning C++ and trying to get used to the standard data structures that come with it, but they all seem very bare. For example, list doesn't have simple accessors like get(index) that I'm used to in Java. Methods like pop_back and pop_front don't return the object in the list either. So you have to do something like:
Object blah = myList.back();
myList.pop_back();
Instead of something simple like:
Object blah = myList.pop_back();
In Java, just about every data structure returns the object back so you don't have to make these extra calls. Why are the STL containers for C++ designed like this? Are common operations like this that I do in Java not so common in C++?
edit: Sorry, I guess my question was worded very poorly to get all these downvotes, but surely somebody could have edited it. To clarify: I'm wondering why the STL data structures are designed like this in comparison to Java's. Or am I using the wrong set of data structures to begin with? My point is that these seem like common operations you might use on (in my example) a list, and surely everybody does not want to write their own implementation each time.
edit: reworded the question to be more clear.
Quite a few have already answered the specific points you raised, so I'll try to take a look for a second at the larger picture.
One of the most fundamental differences between Java and C++ is that C++ works primarily with values, while Java works primarily with references.
For example, if I have something like:
class X {
    // ...
};
// ...
X x;
In Java, x is only a reference to an object of type X. To have an actual object of type X for it to refer to, I normally write something like X x = new X();. In C++, however, X x; by itself defines an object of type X, not just a reference to one. We can use that object directly, not via a reference (i.e., a pointer in disguise).
Although this may initially seem like a fairly trivial difference, the effects are substantial and pervasive. One effect (probably the most important in this case) is that in Java, returning a value does not involve copying the object itself at all. It just involves copying a reference to the value. This is normally presumed to be extremely inexpensive and (probably more importantly) completely safe -- it can never throw an exception.
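As a toy Java illustration of that point (the class and names here are made up):

class RefDemo {
    static class X { int n; }

    static X make() {
        X x = new X();   // the object itself lives on the heap
        x.n = 42;
        return x;        // only the reference is copied; no X is copied, and this cannot throw
    }

    public static void main(String[] args) {
        X a = make();
        X b = a;                  // a and b refer to the same object
        b.n = 7;
        System.out.println(a.n);  // prints 7
    }
}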
In C++, you're dealing directly with values instead. When you return an object, you're not just returning a reference to the existing object, you're returning that object's value, usually in the form of a copy of that object's state. Of course, it's also possible to return a reference (or pointer) if you want, but to make that happen, you have to make it explicit.
The standard containers are (if anything) even more heavily oriented toward working with values rather than references. When you add a value to a collection, what gets added is a copy of the value you passed, and when you get something back out, you get a copy of the value that was in the container itself.
Among other things, this means that while returning a value might be cheap and safe just like in Java, it can also be expensive and/or throw an exception. If the programmer wants to store pointers, s/he can certainly do so -- but the language doesn't require it like Java does. Since returning an object can be expensive and/or throw, the containers in the standard library are generally built around ensuring they can work reasonably well if copying is expensive, and (more importantly) work correctly, even when/if copying throws an exception.
This basic difference in design accounts not only for the differences you've pointed out, but quite a few more as well.
back() returns a reference to the final element of the vector, which makes it nearly free to call. pop_back() calls the destructor of the final element of the vector.
So clearly pop_back() cannot return a reference to an element that is destroyed. So for your syntax to work, pop_back() would have to return a copy of the element before it is destroyed.
Now, in the case where you do not want that copy, we've just needlessly made one.
The goal of C++ standard containers is to give you nearly bare-metal performance wrapped up in nice, easy to use dressing. For the most part, they do NOT sacrifice performance for ease of use -- and a pop_back() that returned a copy of the last element would be sacrificing performance for ease of use.
There could be a pop-and-get-back method, but it would duplicate other functionality. And it would be less efficient in many cases than back-and-pop.
As a concrete example,
vector<foo> vec;                 // with some data in it
foo f = std::move(vec.back());   // tells the compiler that the copy in vec is going away
vec.pop_back();                  // removes the last element
Note that the move had to be done before the element was destroyed, to avoid creating an extra temporary copy... a pop_back_and_get_value() would have to destroy the element before it returned, and the assignment would happen after it returned, which is wasteful.
A list doesn't have a get(index) method because accessing a linked list by index is very inefficient. The STL has a philosophy of only providing methods that can be implemented somewhat efficiently. If you want to access a list by index in spite of the inefficiency, it's easy to implement yourself.
The reason that pop_back doesn't return a copy is that the copy constructor of the return value will be called after the function returns (excluding RVO/NRVO). If this copy constructor throws an exception, you have removed the item from the list without properly returning a copy. This means that the method would not be exception-safe. By separating the two operations, the STL encourages programming in an exception-safe manner.
Why are the STL containers for C++ designed like this?
I think Bjarne Stroustrup put it best:
C++ is lean and mean. The underlying principle is that you don't pay for what you don't use.
In the case of a pop() method that would return the item, consider that in order to both remove the item and return it, that item could not be returned by reference. The referent no longer exists because it was just pop()ed. It could be returned by pointer, but only if you make a new copy of the original, and that's wasteful. So it would most likely be returned by value which has the potential to make a deep copy. In many cases it won't make a deep copy (through copy elision), and in other cases that deep copy would be trivial. But in some cases, such as large buffers, that copy could be extremely expensive and in a few, such as resource locks, it might even be impossible.
C++ is intended to be general-purpose, and it is intended to be as fast as possible. General-purpose doesn't necessarily mean "easy to use for simple use cases" but "an appropriate platform for the widest range of applications."
list doesn't even have simple accessors like get(index)
Why should it? A method that lets you access the n-th element of the list would hide the O(n) complexity of the operation, and that's the reason C++ doesn't offer it. For the same reason, C++'s std::vector doesn't offer a pop_front() function, since that one would also be O(n) in the size of the vector.
Methods like pop_back and pop_front don't return the object in the list either.
The reason is exception safety. Also, since C++ has free functions, it's not hard to write such an extension to the operations of std::list or any standard container.
template<class Cont>
typename Cont::value_type return_pop_back(Cont& c) {
    typename Cont::value_type v = c.back();
    c.pop_back();
    return v;
}
It should be noted, though, that the above function is not exception-safe, meaning if the return v; throws, you'll have a changed container and a lost object.
Concerning pop()-like functions, there are two things (at least) to consider:
1) There is no clear and safe action for a returning pop_back() or pop_front() for cases when there is nothing there to return.
2) These functions would return by value. If there were an exception thrown in the copy constructor of the type stored in the container, the item would be removed from the container and lost. I guess this was deemed to be undesirable and unsafe.
Concerning access to a list, it is a general design principle of the standard library to avoid providing inefficient operations. std::list is a doubly-linked list, and accessing a list element by index means traversing the list from the beginning or end until you get to the desired position. If you want to do this, you can provide your own helper function. But if you need random access to elements, then you should probably use a structure other than a list.
In Java a pop of a general interface can return a reference to the object popped.
In C++, the corresponding thing is to return by value.
But in the case of non-movable non-POD objects the copy construction might throw an exception. Then an element would have been removed and yet not have been made accessible to the client code. A convenience return-by-value popper can always be defined in terms of the more basic inspector and pure popper, but not vice versa.
This is also a difference in philosophy.
With C++ the standard library only provides basic building blocks, not directly usable functionality (in general). The idea is that you're free to choose from thousands of third party libraries, but that freedom of choice comes at a great cost, in usability, portability, training, etc. In contrast, with Java you have mostly all you need (for typical Java programming) in the standard library, but you're not effectively free to choose (which is another kind of cost).
As I understand it, in the case of an array, Java checks the index against the size of the array.
So instead of using array[i] multiple times in a loop, it is better to declare a variable which stores the value of array[i], and use that variable multiple times.
My question is, if I have a class like this:
public class MyClass {

    int value;

    public MyClass(int value) {
        this.value = value;
    }
}
If I create an instance of this class somewhere else (MyClass myobject = new MyClass(7)) and I have to use the object's value multiple times, is it okay to use myobject.value often, or would it be better to declare a variable which stores that value and use that multiple times, or would it be the same?
In your case, it wouldn't make any difference, since referencing myobject.value is as fast and effective as referencing a new int variable.
Also, the JVM is usually able to optimize these kinds of things, and you shouldn't spend time worrying about it unless you have a highly performance critical piece of code. Just concentrate on writing clear, readable code.
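If it helps to see the two forms side by side, here is a throwaway sketch (not a benchmark; the names are made up):

class HoistDemo {
    static class MyClass {
        int value;
        MyClass(int value) { this.value = value; }
    }

    public static void main(String[] args) {
        MyClass myobject = new MyClass(7);

        // form 1: read the field on every iteration
        int total1 = 0;
        for (int i = 0; i < 1000; i++) total1 += myobject.value;

        // form 2: hoist the read into a local variable by hand
        int v = myobject.value;
        int total2 = 0;
        for (int i = 0; i < 1000; i++) total2 += v;

        // same result; the JIT will usually hoist the load in form 1 itself
        System.out.println(total1 + " " + total2);
    }
}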
The short answer is yes (in fact, in the array case, it does not only have to check the index limit but also calculate the actual memory position of the reference you are looking for - as in i=7: get the base position of the array and add 7 words).
The long answer is that, unless you are really using that value a lot (and I mean a lot) and you are really constrained due to speed, it is not worth the added complexity of the code. Add to that that the local variable means that your JVM uses more memory, may hit a cache fault, and so on.
In general, you should worry more about the efficiency of your algorithm (the O(n)) and less about these tiny things.
The Java compiler is no bozo. It will do that optimization for you. There is zero speed difference between all the options you give, usually.
I say 'usually' because accessing the original object and accessing your local copy aren't always the same thing. If your array is globally visible, and another thread is accessing it, the two forms will yield different results, and the compiler cannot optimize one into the other. It is also possible that something confuses the compiler into thinking there may be a problem, even though there isn't. Then it won't apply a legal optimization.
However, if you aren't doing funny stuff, the compiler will see what you're doing and optimize variable access for you. Really, that's what a compiler does. That's what it's for.
You need to optimize at least one level above that. This one isn't for you.
When you're designing the API for a code library, you want it to be easy to use well, and hard to use badly. Ideally you want it to be idiot proof.
You might also want to make it compatible with older systems that can't handle generics, like .Net 1.1 and Java 1.4. But you don't want it to be a pain to use from newer code.
I'm wondering about the best way to make things easily iterable in a type-safe way... Remember that you can't use generics, so Java's Iterable<T> is out, as is .Net's IEnumerable<T>.
You want people to be able to use the enhanced for loop in Java (for Item i : items), and the foreach / For Each loop in .Net, and you don't want them to have to do any casting. Basically you want your API to be now-friendly as well as backwards compatible.
The best type-safe option that I can think of is arrays. They're fully backwards compatible and they're easy to iterate in a typesafe way. But arrays aren't ideal because you can't make them immutable. So, when you have an immutable object containing an array that you want people to be able to iterate over, to maintain immutability you have to provide a defensive copy each and every time they access it.
In Java, doing (MyObject[]) myInternalArray.clone(); is super-fast. I'm sure that the equivalent in .Net is super-fast too. If you have something like:
class Schedule {
    private Appointment[] internalArray;

    public Appointment[] appointments() {
        return (Appointment[]) internalArray.clone();
    }
}
people can do something like:
for (Appointment a : schedule.appointments()) {
    a.doSomething();
}
and it will be simple, clear, type-safe, and fast.
But they could do something like:
for (int i = 0; i < schedule.appointments().length; i++) {
    Appointment a = schedule.appointments()[i];
}
And then it would be horribly inefficient because the entire array of appointments would get cloned twice for every iteration (once for the length test, and once to get the object at the index). Not such a problem if the array is small, but pretty horrible if the array has thousands of items in it. Yuk.
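(For the record, the easy fix is to hoist the copy out of the loop, something like:)

Appointment[] appointments = schedule.appointments(); // clone once
for (int i = 0; i < appointments.length; i++) {
    Appointment a = appointments[i];
    // ...
}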
Would anyone actually write the repeated-clone version? I'm not sure... I guess that's largely my question here.
You could call the method toAppointmentArray() instead of appointments(), and that would probably make it less likely that anyone would use it the wrong way. But it would also make it harder for people to find when they just want to iterate over the appointments.
You would, of course, document appointments() clearly, to say that it returns a defensive copy. But a lot of people won't read that particular bit of documentation.
Although I'd welcome suggestions, it seems to me that there's no perfect way to make it simple, clear, type-safe, and idiot-proof. Have I failed if a minority of people are unwittingly cloning arrays thousands of times, or is that an acceptable price to pay for simple, type-safe iteration for the majority?
NB I happen to be designing this library for both Java and .Net, which is why I've tried to make this question applicable to both. And I tagged it language-agnostic because it's an issue that could arise for other languages too. The code samples are in Java, but C# would be similar (albeit with the option of making the Appointments accessor a property).
UPDATE: I did a few quick performance tests to see how much difference this made in Java. I tested:
1) cloning the array once, and iterating over it using the enhanced for loop
2) iterating over an ArrayList using the enhanced for loop
3) iterating over an unmodifiable ArrayList (from Collections.unmodifiableList) using the enhanced for loop
4) iterating over the array the bad way (cloning it repeatedly in the length check and when getting each indexed item)
The relative speeds (doing multiple repeats and taking the median) were like:

Objects    1) clone + loop    2) ArrayList    3) unmodifiable list    4) repeated cloning
10         1,000              1,300           1,300                   5,000
100        1,300              4,900           6,300                   85,500
1000       6,400              51,700          56,200                  7,000,300
10000      68,000             445,000         651,000                 655,180,000
Rough figures for sure, but enough to convince me of two things:
1) Cloning, then iterating, is definitely not a performance issue. In fact it's consistently faster than using a List. (This is why Java's enum.values() method returns a defensive copy of an array instead of an immutable list.)
2) If you repeatedly call the method, repeatedly cloning the array unnecessarily, performance becomes more and more of an issue the larger the arrays in question. It's pretty horrible. No surprises there.
clone() is fast, but not what I would describe as super-fast.
If you don't trust people to write loops efficiently, I would not let them write a loop (which also avoids the need for a clone())
interface AppointmentHandler {
    public void onAppointment(Appointment appointment);
}

class Schedule {
    public void forEachAppointment(AppointmentHandler ah) {
        for (Appointment a : internalArray)
            ah.onAppointment(a);
    }
}
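Hypothetical usage, which stays Java 1.4-friendly via an anonymous class:

schedule.forEachAppointment(new AppointmentHandler() {
    public void onAppointment(Appointment appointment) {
        appointment.doSomething();
    }
});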
Since you can't really have it both ways, I would suggest that you create a pre-generics and a generics version of your API. Ideally, the underlying implementation can be mostly the same, but the fact is, if you want it to be easy to use for anyone using Java 1.5 or later, they will expect the usage of generics and Iterable and all the newer language features.
I think the usage of arrays should be non-existent. It does not make for an easy to use API in either case.
NOTE: I have never used C#, but I would expect the same holds true.
As far as failing a minority of the users goes: those that would call the same method to get the same object on each iteration of the loop would be asking for inefficiency regardless of API design. I think as long as that's well documented, it's not too much to ask that the users obey some semblance of common sense.