Pros and cons of using `newArrayListWithCapacity(int)` vs `new ArrayList()` - java

I came across newArrayListWithCapacity(int) (1) in the codebase and was wondering if it has any advantages.
From what I understand, if we don't know the size, new ArrayList will serve the purpose, and if we do know the exact size, we can simply use an array. I'm trying to understand the benefits of using newArrayListWithCapacity(int), and how it differs from newArrayListWithExpectedSize(int).
If I define the expected size as x and if I end up having y number of entries, does it adversely affect the performance?
1: https://guava.dev/releases/15.0/api/docs/com/google/common/collect/Lists.html#newArrayListWithCapacity(int)
@GwtCompatible(serializable=true)
public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
Creates an ArrayList instance backed by an array of the exact size specified; equivalent to ArrayList.ArrayList(int).
Note: if you know the exact size your list will be, consider using a fixed-size list (Arrays.asList(Object[])) or an ImmutableList instead of a growable ArrayList.
Note: If you have only an estimate of the eventual size of the list, consider padding this estimate by a suitable amount, or simply use newArrayListWithExpectedSize(int) instead.

The method and the constructor do the same thing, which is to provide an ArrayList with a specific capacity, meaning that the initial backing array is created with the provided size.
Guava created a factory method wrapping the ArrayList(int) constructor because, while well documented, the constructor can still confuse readers. The Guava team did the same for the other collections with factory methods such as Sets.newHashSetWithExpectedSize(int) and Maps.newHashMapWithExpectedSize(int).
The pros are the following:
You have an explicit factory method name. This is good because not everyone knows what the constructor's int argument means, so a factory method that says what it creates is handy for people who are reading the code but are not entirely familiar with the API.
You don't have to spell out the generics; they are handled automatically for you through the generic method mechanism. Guava added this method back in the Java 6 era, when you had to repeat the full type on each constructor call (such as List<String> strings = new ArrayList<String>(10)). Nowadays you can simply use the diamond operator (such as List<String> strings = new ArrayList<>(10)), but at the time the gain was huge! A small comparison is sketched at the end of this answer.
The cons are:
You lose one method call of performance. But usually, when you're already dealing with Java's standard API, raw performance is not what you're after. Some other libraries provide high-performance collections, and neither the Java collections nor Guava's are serious contenders there. Also, as MikeFHay mentions in the comments below, the method call will likely be inlined in modern JVMs.
You depend on Google Guava. This is not really a letdown, because Guava is a fantastic library, but if this is the only reason you use Guava, it might be.
The Guava team added that method because, in their view (which I personally share), the pros clearly outweigh the cons.
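For illustration, here is a minimal sketch comparing the two forms (assuming Guava is on the classpath); note how the factory method infers the element type:

import com.google.common.collect.Lists;
import java.util.ArrayList;
import java.util.List;

public class CapacityExample {
    public static void main(String[] args) {
        // Plain JDK: the backing array starts with room for 10 elements.
        List<String> viaConstructor = new ArrayList<>(10);

        // Guava: same initial backing-array size, but the name states the intent
        // and the element type is inferred by the generic factory method.
        List<String> viaFactory = Lists.newArrayListWithCapacity(10);

        // Capacity is not size: both lists start empty.
        System.out.println(viaConstructor.size() + " " + viaFactory.size()); // prints "0 0"
    }
}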

Related

Sorting a List in parallel without creating a temporary array in Java 8

Java 8 provides java.util.Arrays.parallelSort, which sorts arrays in parallel using the fork-join framework. But there's no corresponding Collections.parallelSort for sorting lists.
I can use toArray, sort that array, and store the result back in my list, but that will temporarily increase memory usage, which is already high if I'm using parallel sorting, since parallel sorting only pays off for huge lists. Instead of twice the memory (the list plus parallelSort's working memory), I'm using three times as much (the list, the temporary array, and parallelSort's working memory). (The Arrays.parallelSort documentation says "The algorithm requires a working space no greater than the size of the original array".)
Memory usage aside, Collections.parallelSort would also be more convenient for what seems like a reasonably common operation. (I tend not to use arrays directly, so I'd certainly use it more often than Arrays.parallelSort.)
The library can test for RandomAccess to avoid, for example, trying to quicksort a linked list, so that can't be the reason for a deliberate omission.
How can I sort a List in parallel without creating a temporary array?
There doesn't appear to be any straightforward way to sort a List in parallel in Java 8. I don't think this is fundamentally difficult; it looks more like an oversight to me.
The difficulty with a hypothetical Collections.parallelSort(list, cmp) is that the Collections implementation knows nothing about the list's implementation or its internal organization. This can be seen by examining the Java 7 implementation of Collections.sort(list, cmp). As you observed, it has to copy the list elements out to an array, sort them, and then copy them back into the list.
This is the big advantage of the List.sort(cmp) extension method over Collections.sort(list, cmp). It might seem that being able to write myList.sort(cmp) instead of Collections.sort(myList, cmp) is merely a small syntactic advantage. The difference is that myList.sort(cmp), being an interface extension (default) method, can be overridden by the specific List implementation. For example, ArrayList.sort(cmp) sorts the list in place using Arrays.sort(), whereas the default implementation uses the old copy-out, sort, copy-back technique.
It should be possible to add a parallelSort extension method to the List interface that has similar semantics to List.sort but does the sorting in parallel. This would allow ArrayList to do a straightforward in-place sort using Arrays.parallelSort. (It's not entirely clear to me what the default implementation should do. It might still be worth it to do copyout-parallelSort-copyback.) Since this would be an API change, it can't happen until the next major release of Java SE.
As for a Java 8 solution, there are a couple workarounds, none very pretty (as is typical of workarounds). You could create your own array-based List implementation and override sort() to sort in parallel. Or you could subclass ArrayList, override sort(), grab the elementData array via reflection and call parallelSort() on it. Of course you could just write your own List implementation and provide a parallelSort() method, but the advantage of overriding List.sort() is that this works on the plain List interface and you don't have to modify all the code in your code base to use a different List subclass.
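As a rough illustration of the subclass-ArrayList route (the class name is made up, and this sketch is only meant to show the shape of the idea):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical: an ArrayList whose sort(cmp) runs in parallel via Arrays.parallelSort.
public class ParallelSortingArrayList<E> extends ArrayList<E> {
    @Override
    @SuppressWarnings("unchecked")
    public void sort(Comparator<? super E> c) {
        Object[] a = toArray();           // copy the elements out
        Arrays.parallelSort((E[]) a, c);  // sort the copy in parallel
        for (int i = 0; i < a.length; i++) {
            set(i, (E) a[i]);             // copy back in place
        }
    }
}

This still pays for one array copy, which is exactly why a real parallelSort default on List, or an ArrayList override working directly on the internal array, would be nicer.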
I think you are doomed to use a custom List implementation augmented with your own parallelSort or else change all your other code to store the big data in Array types.
This is the inherent problem with layers of abstract data types. They're meant to isolate the programmer from details of implementation. But when the details of implementation matter - as in the case of the underlying storage model for sorting - the otherwise splendid isolation leaves the programmer helpless.
The documentation of the standard List sort provides an example. After explaining that mergesort is used, it says:
The default implementation obtains an array containing all elements in this list, sorts the array, and iterates over this list resetting each element from the corresponding position in the array. (This avoids the n² log(n) performance that would result from attempting to sort a linked list in place.)
In other words, "since we don't know the underlying storage model for a List and couldn't touch it if we did, we make a copy organized in a known way." The parenthesized remark reflects the fact that the "i-th element" accessor of a linked list is Ω(n), so a normal array mergesort implemented on top of it would be a disaster. In fact it's easy to implement mergesort efficiently on linked lists; the List implementer is just prevented from doing it.
A parallel sort on List has the same problem. The standard sequential sort fixes it with custom sorts in the concrete List implementations. The Java folks just haven't chosen to go there yet. Maybe in Java 9.
Use the following:
yourCollection.parallelStream().sorted().collect(Collectors.toList());
This will be parallel when sorting, because of parallelStream(). I believe this is what you mean by parallel sort?
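A self-contained sketch of that approach; note that it collects the result into a new list and leaves the original untouched:

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class ParallelStreamSortExample {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("charlie", "alice", "bob");
        // Sorting happens on the parallel stream; the result is a new list.
        List<String> sorted = names.parallelStream()
                .sorted(Comparator.naturalOrder())
                .collect(Collectors.toList());
        System.out.println(sorted); // [alice, bob, charlie]
    }
}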
Just speculating here, but I see several good reasons for generic sort algorithms preferring to work on arrays instead of List instances:
Element access is performed via method calls. Despite all the optimizations JIT can apply, even for a list that implements RandomAccess, this probably means a lot of overhead compared to plain array accesses which can be optimized very well.
Many algorithms require copying some fragments of the array to temporary structures. There are efficient methods for copying arrays or their fragments. An arbitrary List instance, on the other hand, can't be copied easily. New lists would have to be allocated, which poses two problems. First, this means allocating new objects, which is likely more costly than allocating arrays. Second, the algorithm would have to choose which implementation of List to allocate for this temporary structure. There are two obvious solutions, both bad: either hard-code some implementation, e.g. ArrayList, in which case it could just as well allocate plain arrays (and if we're producing arrays anyway, it's much easier if the source is also an array); or let the user provide some list factory object, which makes the code much more complicated.
Related to the previous issue: there is no obvious way of copying a list into another due to how the API is designed. The best the List interface offers is addAll() method, but this is probably not efficient for most cases (think of pre-allocating the new list to its target size vs adding elements one by one which many implementations do).
Most lists that need to be sorted will be small enough for another copy to not be an issue.
So probably the designers thought of CPU efficiency and code simplicity most of all, and this is easily achieved when the API accepts arrays. Some languages, e.g. Scala, have sort methods that work directly on lists, but this comes at a cost and probably is less efficient than sorting arrays in many cases (or sometimes there will probably just be a conversion to and from array performed behind the scenes).
By combining the existing answers I came up with this code.
This works if you are not interested in creating a custom List class and if you don't mind creating a temporary array (Collections.sort does that anyway).
It sorts the initial list in place and does not create a new one, as the parallelStream solution does.
// Convert the List to an array so we can use Arrays.parallelSort rather than Collections.sort.
// Note that Collections.sort begins with this very same conversion, so we're not adding overhead
// in comparison with Collections.sort.
Foo[] fooArr = fooLst.toArray(new Foo[0]);
// Multithreaded TimSort; it automatically falls back to a sequential sort when the array
// is not larger than the parallelism threshold (8192 elements).
Arrays.parallelSort(fooArr, Comparator.comparing(Foo::yourMethod));
// Refill the list from the sorted array, the same way Collections.sort does it.
ListIterator<Foo> i = fooLst.listIterator();
for (Foo e : fooArr) {
    i.next();
    i.set(e);
}

Is a Collection better than a LinkedList?

Collection list = new LinkedList(); // Good?
LinkedList list = new LinkedList(); // Bad?
The first variant gives more flexibility, but is that all? Are there any other reasons to prefer it? What about performance?
These are design decisions, and one size usually doesn't fit all. Also, the choice of what is used internally for a member variable can (and usually should) be different from what is exposed to the outside world.
At its heart, Java's collections framework does not provide a complete set of interfaces that describe performance characteristics without exposing implementation details. The one interface that does describe performance, RandomAccess, is a marker interface and doesn't even extend Collection or re-expose the get(index) API. So I don't think there is a good answer.
As a rule of thumb, I keep the type as unspecific as possible until I recognize (and document) some characteristic that is important. For example, as soon as I want methods to know that insertion order is retained, I change from Collection to List and document why that restriction is important. Similarly, I move from List to LinkedList if, say, efficient removal from the front becomes important.
When it comes to exposing the collection in public APIs, I always try to start exposing just the few APIs that are expected to get used; for example add(...) and iterator().
Collection list = new LinkedList(); // bad
This is bad because you don't want this reference to be able to refer to, say, a HashSet (HashSet also implements Collection, as do many other classes in the collections framework).
LinkedList list = new LinkedList(); // bad?
This is bad because good practice is to always code to the interface.
List list = new LinkedList(); // good
This is good because of the previous point: always program to an interface.
Use the most specific type information on non-public objects. They are implementation details, and we want our implementation details as specific and precise as possible.
Sure. If, for example, Java later gains a more efficient List implementation, but your API accepts only LinkedList, you won't be able to swap it in once the API has clients. If you use the interface, you can easily replace the implementation without breaking the API.
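For illustration, a minimal sketch (the class and field names are made up):

import java.util.LinkedList;
import java.util.List;

public class Inventory {
    // Callers only ever see List, so the concrete class stays a private detail.
    private final List<String> items = new LinkedList<>();
    // Switching to new ArrayList<>() later changes only this one line.

    public List<String> getItems() {
        return items;
    }
}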
They're absolutely equivalent. The only reason to prefer one over the other is that if you later want to use a method that only exists on the LinkedList class, you need the second.
My general rule is to only be as specific as you need to be at the time (or will need to be in the near future, within reason). Granted, this is somewhat subjective.
In your example I would usually declare it as a List just because the methods available on Collection aren't very powerful, and the distinction between a List and another Collection (Map, Set, etc.) is often logically significant.
Also, in Java 1.5+ don't use raw types -- if you don't know the type that your list will contain, at least use List<?>.

what's a good persistent collections framework for use in java?

By persistent collections I mean collections like those in clojure.
For example, I have a list with the elements (a,b,c).
With a normal list, if I add d, my original list will have (a,b,c,d) as its elements.
With a persistent list, when I call list.add(d), I get back a new list, holding (a,b,c,d).
However, the implementation attempts to share elements between the list wherever possible, so it's much more memory efficient than simply returning a copy of the original list.
It also has the advantage of being immutable (if I hold a reference to the original list, then it will always return the original 3 elements).
This is all explained much better elsewhere (e.g. http://en.wikipedia.org/wiki/Persistent_data_structure).
Anyway, my question is... what's the best library for providing this functionality for use in Java? Can I use the Clojure collections somehow (other than by directly using Clojure)?
Just use the ones in Clojure directly. Even if you don't want to use the language itself, you can still use the persistent collections directly, as they are all just Java classes.
import clojure.lang.IPersistentMap;
import clojure.lang.PersistentHashMap;

IPersistentMap map = PersistentHashMap.create("key1", "value1");
assert map.get("key1").equals("value1");

IPersistentMap map2 = map.assoc("key2", "value2");
assert map2 != map;
assert map2.get("key2").equals("value2");
assert map.get("key2") == null; // the original map is unchanged

(disclaimer: I haven't actually compiled that code :)
The downside is that the collections aren't typed, i.e. there are no generics with them.
What about pcollections?
You can also check out Clojure's implementation of persistent collections (PersistentHashMap, for instance).
I was looking for a slim, Java-friendly persistent collection framework and took TotallyLazy and PCollections, mentioned in this thread, for a test drive, because they sounded the most promising to me.
Both provide reasonably simple interfaces for manipulating persistent lists:
// TotallyLazy
PersistentList<String> original = PersistentList.constructors.empty(String.class);
PersistentList<String> modified = original.append("Mars").append("Raider").delete("Raider");
// PCollections
PVector<String> original = TreePVector.<String>empty();
PVector<String> modified = original.plus("Mars").plus("Raider").minus("Raider");
Both PersistentList and PVector extend java.util.List, so both libraries should integrate well into an existing environment.
It turns out, however, that TotallyLazy runs into performance problems when dealing with larger lists (as already mentioned in a comment above by @levantpied). On my MacBook Pro (Late 2013), inserting 100,000 elements and returning the immutable list took TotallyLazy ~2000 ms, whereas PCollections finished within ~120 ms.
My (simple) test cases are available on Bitbucket, if someone wants to take a more thorough look.
[UPDATE]: I recently had a look at Cyclops X, which is a higher-performance and more complete library targeted at functional programming. Cyclops also contains a module for persistent collections.
https://github.com/andrewoma/dexx is a port of Scala's persistent collections to Java. It includes:
Set, SortedSet, Map, SortedMap and Vector
Adapters to view the persistent collections as java.util equivalents
Helpers for easy construction
Paguro provides type-safe versions of the actual Clojure collections for use in Java 8+. It includes: List (Vector), HashMap, TreeMap, HashSet, and TreeSet. They behave exactly the way you specify in your question and have been painstakingly fit into the existing java.util collections interfaces for maximum type-safe Java compatibility. They are also a little faster than PCollections.
Coding your example in Paguro looks like this:
// List with the elements (a,b,c)
ImList<T> list = vec(a,b,c);
// With a persistent list, when I call list.add(d),
// I get back a new list, holding (a,b,c,d)
ImList<T> newList = list.append(d);
list.size(); // still returns 3
newList.size(); // returns 4
You said,
The implementation attempts to share elements between the list wherever possible, so it's much more memory efficient and fast than simply returning a copy of the original list. It also has the advantage of being immutable (if I hold a reference to the original list, then it will always return the original 3 elements).
Yes, that's exactly how it behaves. Daniel Spiewak explains the speed and efficiency of these collections much better than I could.
You may want to check out clj-ds. I haven't used it, but it seems promising. Based on the project's readme, it extracts the data structures from Clojure 1.2.0.
Functional Java implements a persistent List, lazy List, Set, Map, and Tree. There may be others, but I'm just going by the information on the front page of the site.
I am also interested to know what the best persistent data structure library for Java is. My attention was directed to Functional Java because it is mentioned in the book, Functional Programming for Java Developers.
There's the pcollections (Persistent Collections) library you can use:
http://code.google.com/p/pcollections/
The top-voted answer suggests directly using the Clojure collections, which I think is a very good idea. Unfortunately, the fact that Clojure is a dynamically typed language and Java is not makes the Clojure libraries very uncomfortable to use from Java.
Because of this, and the lack of lightweight, easy-to-use wrappers for the Clojure collection types, I have written my own library of Java wrappers using generics for the Clojure collection types, with a focus on ease of use and clarity of the interfaces.
https://github.com/cornim/ClojureCollections
Maybe this will be of use to somebody.
P.S.: At the moment only PersistentVector, PersistentMap and PersistentList have been implemented.
In the same vein as Cornelius Mund, Pure4J ports the Clojure collections into Java and adds Generics support.
However, Pure4J is aimed at introducing pure programming semantics to the JVM through compile time code checking, so it goes further to introduce immutability constraints to your classes, so that the elements of the collection cannot be mutated while the collection exists.
This may or may not be what you want to achieve: if you are just after using the Clojure collections on the JVM I would go with Cornelius' approach, otherwise, if you are interested in pursuing a pure programming approach within Java then you could give Pure4J a try.
Disclosure: I am the developer of this
totallylazy is a very good FP library which has implementations of:
PersistentList<T>: the concrete implementations are LinkedList<T> and TreeList<T> (for random access)
PersistentMap<K, V>: the concrete implementations are HashTreeMap<K, V> and ListMap<K, V>
PersistentSortedMap<K, V>
PersistentSet<T>: the concrete implementation is TreeSet<T>
Example of usage:
import static com.googlecode.totallylazy.collections.PersistentList.constructors.*;
import com.googlecode.totallylazy.collections.PersistentList;
import com.googlecode.totallylazy.numbers.Numbers;
...
PersistentList<Integer> list = list(1, 2, 3);
// Create a new list with 0 prepended
list = list.cons(0);
// Prints 0::1::2::3
System.out.println(list);
// Do some actions on this list (e.g. remove all even numbers)
list = list.filter(Numbers.odd);
// Prints 1::3
System.out.println(list);
totallylazy is constantly being maintained. The main disadvantage is the total absence of Javadoc.
I'm surprised nobody has mentioned Vavr. I've been using it for a long time now.
http://www.vavr.io
Description from their site:
Vavr core is a functional library for Java. It helps to reduce the amount of code and to increase the robustness. A first step towards functional programming is to start thinking in immutable values. Vavr provides immutable collections and the necessary functions and control structures to operate on these values. The results are beautiful and just work.
https://github.com/arnohaase/a-foundation is another port of Scala's libraries.
It is also available from Maven Central: com.ajjpj.a-foundation:a-foundation

What is the best way to cache and reuse immutable singleton objects in Java?

I have a class representing a set of values that will be used as a key in maps.
This class is immutable and I want to make it a singleton for every distinct set of values, using the static factory pattern.
The goal is to prevent identical objects from being created many (100+) times and to optimize the equals method.
I am looking for the best way to cache and reuse previous instances of this class.
The first thing that pops to mind is a simple hashmap, but are there alternatives?
There are two situations:
If the number of distinct objects is small and fixed, you should use an enum
Enums can't be instantiated beyond the declared constants, and EnumMap is optimized for enum keys
Otherwise, you can cache immutable instances as you planned:
If the values are indexable by numbers in a contiguous range, then an array can be used
This is how, for example, Integer caches instances in a given range for valueOf
Otherwise you can use some sort of Map
Depending on the usage pattern, you may choose to only cache, say, the last N instances, instead of all instances created so far. This is the approach used in e.g. re.compile in Python's regular expression module. If N is small enough (e.g. 5), then a simple array with a linear search may also work just fine.
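A minimal sketch of the map-based variant (the class and field names are made up): the static factory interns instances in a ConcurrentHashMap, so equal value sets share a single object and equals can take the identity fast path.

import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical immutable key class interned through its static factory.
public final class ValueKey {
    private static final Map<ValueKey, ValueKey> CACHE = new ConcurrentHashMap<>();

    private final String name;
    private final int code;

    private ValueKey(String name, int code) {
        this.name = name;
        this.code = code;
    }

    // Static factory: returns the cached instance for equal values, creating it at most once.
    public static ValueKey of(String name, int code) {
        ValueKey candidate = new ValueKey(name, code);
        ValueKey cached = CACHE.putIfAbsent(candidate, candidate);
        return cached != null ? cached : candidate;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true; // identity fast path for interned instances
        if (!(o instanceof ValueKey)) return false;
        ValueKey other = (ValueKey) o;
        return code == other.code && name.equals(other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, code);
    }
}

Note that this cache only ever grows, which is fine for a small, bounded set of value combinations; otherwise one of the bounded approaches below is safer.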
For a Map-based solution, a useful implementation is java.util.LinkedHashMap, which lets you enforce LRU-like eviction policies by overriding removeEldestEntry (see the sketch below).
There is also LRUMap from Apache Commons Collections, which implements this policy more directly.
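A sketch of the LinkedHashMap approach (the bound of 100 entries is an arbitrary example):

import java.util.LinkedHashMap;
import java.util.Map;

// Access-ordered LinkedHashMap that evicts the least recently used entry beyond MAX_ENTRIES.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private static final int MAX_ENTRIES = 100; // arbitrary bound for the example

    public LruCache() {
        super(16, 0.75f, true); // accessOrder = true keeps entries in least-recently-used order
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > MAX_ENTRIES; // evict the eldest entry once the bound is exceeded
    }
}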
See also
Java Tutorials/enums
Effective Java 2nd Edition, Item 1: Consider static factory methods instead of constructors
Wikipedia/Flyweight pattern
Related questions
Easy, simple to use LRU cache in java
LRU LinkedHashMap that limits size based on available memory
Guava MapMaker, soft keys and values, etc
How would you implement an LRU cache in Java 6?
What you're trying to make sounds like an example of the Flyweight Pattern, so looking for references to that might help clarify your thinking.
Storing them in a map of some sort is indeed a common implementation.
What do your objects look like? If your objects are fairly simple I think you should consider not caching them - object creation is usually quite fast. I think you should evaluate whether the possibly small performance boost is worth the added complexity and effort of a cache.

Java: ArrayList for List, HashMap for Map, and HashSet for Set?

I almost always find it sufficient to use the concrete classes listed in the title for those interfaces. Usually when I use other types (such as LinkedList or TreeSet), the reason is functionality and not performance - for example, a LinkedList for a queue.
I do sometimes construct an ArrayList with an initial capacity larger than the default of 10, or a HashMap with more than the default of 16 buckets, but I usually (especially for business CRUD applications) never find myself thinking "hmmm... should I use a LinkedList instead of an ArrayList if I am just going to insert and iterate through the whole list?"
I am just wondering what everyone else here uses (and why) and what type of applications they develop.
Those are definitely my defaults, although often a LinkedList would in fact be the better choice for lists, as the vast majority of lists just get iterated in order or are built from arrays via Arrays.asList anyway.
But in terms of keeping consistent maintainable code, it makes sense to standardize on those and use alternatives for a reason, that way when someone reads the code and sees an alternative, they immediately start thinking that the code is doing something special.
I always type parameters and variables as Collection, Map or List unless I have a special reason to refer to the subtype; that way switching implementations is a one-line change when you need it.
I could see explicitly requiring an ArrayList sometimes if you need the random access, but in practice that really doesn't happen.
For some kind of lists (e.g. listeners) it makes sense to use a CopyOnWriteArrayList instead of a normal ArrayList. For almost everything else the basic implementations you mentioned are sufficient.
Yep, I use those as defaults. I generally have a rule that public class methods always return the interface type (i.e. Map, Set, List, etc.), since other classes (usually) don't need to know what the specific concrete class is. Inside class methods, I'll use the concrete type only if I need access to any extra methods it may have (or if it makes understanding the code easier); otherwise the interface is used.
It's good to be pretty flexible with any rules you do use, though, as a dependency on concrete class visibility is something that can change over time (especially as your code gets more complex).
Indeed, always use the base interfaces Collection, List and Map instead of their implementations. To make things even more flexible, you can hide your implementations behind static factory methods, which allow you to switch to a different implementation in case you find something better (I doubt there will be big changes in this field, but you never know). Another benefit is that the syntax is shorter, thanks to generic type inference:
Map<String, LongObjectClasName> map = CollectionUtils.newMap();
instead of
Map<String, LongObjectClasName> map = new HashMap<String, LongObjectClasName>();
public class CollectionUtils {
    .....
    public static <K, V> Map<K, V> newMap() {
        return new HashMap<K, V>();
    }
    public static <T> List<T> newList() {
        return new ArrayList<T>();
    }
    public static <T> List<T> newList(int initialCapacity) {
        return new ArrayList<T>(initialCapacity);
    }
    public static <T> List<T> newSynchronizedList() {
        return new Vector<T>();
    }
    public static <T> List<T> newConcurrentList() {
        return new CopyOnWriteArrayList<T>();
    }
    public static <T> List<T> newSynchronizedList(int initialCapacity) {
        return new Vector<T>(initialCapacity);
    }
    ...
}
Having just come out of a class about data structure performance, I'll usually look at the kind of algorithm I'm developing or the purpose of the structure before I choose an implementation.
For example, if I'm building a list that has a lot of random accesses into it, I'll use an ArrayList because its random access performance is good, but if I'm inserting things into the list a lot, I might choose a LinkedList instead. (I know modern implementations remove a lot of performance barriers, but this was the first example that came to mind.)
You might want to look at some of the Wikipedia pages for data structures (especially those dealing with sorting algorithms, where performance is especially important) for more information about performance, and the article about Big O notation for a general discussion of measuring the performance of various functions on data structures.
I don't really have a "default", though I suppose I use the implementations listed in the question more often than not. I think about what would be appropriate for whatever particular problem I'm working on, and use it. I don't just blindly default to using ArrayList, I put in 30 seconds of thought along the lines of "well, I'm going to be doing a lot of iterating and removing elements in the middle of this list so I should use a LinkedList".
And I almost always use the interface type for my reference, rather than the implementation. Remember that List is not the only interface that LinkedList implements. I see this a lot:
LinkedList<Item> queue = new LinkedList<Item>();
when what the programmer meant was:
Queue<Item> queue = new LinkedList<Item>();
I also use the Iterable interface a fair amount.
If you are using LinkedList for a queue, you might consider using the Deque interface and ArrayDeque implementing class (introduced in Java 6) instead. To quote the Javadoc for ArrayDeque:
This class is likely to be faster than Stack when used as a stack, and faster than LinkedList when used as a queue.
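A quick sketch of both uses:

import java.util.ArrayDeque;
import java.util.Deque;

public class DequeExample {
    public static void main(String[] args) {
        // LIFO: ArrayDeque used as a stack.
        Deque<String> stack = new ArrayDeque<>();
        stack.push("a");
        stack.push("b");
        System.out.println(stack.pop()); // b

        // FIFO: ArrayDeque used as a queue.
        Deque<String> queue = new ArrayDeque<>();
        queue.offer("a");
        queue.offer("b");
        System.out.println(queue.poll()); // a
    }
}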
I tend to use one of the *Queue classes for queues. However, LinkedList is a good choice if you don't need thread safety.
Using the interface type (List, Map) instead of the implementation type (ArrayList, HashMap) is irrelevant within methods - it mainly matters in public APIs, i.e. method signatures (and "public" doesn't necessarily mean "intended to be published outside your team").
When a method takes an ArrayList as a parameter and you have something else, you're stuck and have to copy your data pointlessly. If the parameter type is List, callers are much more flexible and can, for example, use Collections.EMPTY_LIST or Collections.singletonList().
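For illustration (the class and method names are made up):

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ReportPrinter {
    // Accepting the List interface lets callers pass whatever implementation they already have.
    static void print(List<String> lines) {
        lines.forEach(System.out::println);
    }

    public static void main(String[] args) {
        print(Arrays.asList("a", "b"));           // fixed-size list backed by an array
        print(Collections.singletonList("only")); // immutable single-element list
        print(Collections.emptyList());           // shared immutable empty list
    }
}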
I too typically use ArrayList, but I will use TreeSet or HashSet depending on the circumstances. When writing tests, however, Arrays.asList and Collections.singletonList are also frequently used. I've mostly been writing thread-local code, but I could also see using the various concurrent classes as well.
Also, there were times I used ArrayList when what I really wanted was a LinkedHashSet (before it was available).
