What do you use when you need an immutable list with the fastest access/update? LinkedList can be slow if you have to access an element from the middle, and it is prohibitively expensive to create and repopulate it. Binary trees? Quadtrees?
If updating is very rare (or the collection is small), an array which you don't write to after initialization is worthwhile. The much lower constant factors (both in time and space) outweigh the linear-time update in these cases.
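For example, a minimal illustration of that approach using only the standard library (List.of requires Java 9+); reads are cheap, and an "update" simply means building a fresh copy:

    import java.util.List;

    class ImmutableSnapshot {
        public static void main(String[] args) {
            // Built once and never written to afterwards: reads are plain
            // array indexing under the hood, updates mean a new copy.
            List<String> names = List.of("alice", "bob", "carol");
            System.out.println(names.get(1)); // bob
        }
    }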
Apart from that, there are a number of purely functional data structures which provide better bounds for these cases. 2-3 Finger Trees (the data structure behind Haskell's Data.Sequence) are one example. Another option is Clojure's vectors and related data structures (e.g. Relaxed Radix-Balanced Trees), which use trees with high fan-out (32 or more) to keep reads cheap and structural sharing to avoid too many copies.
All of these are moderately tricky to implement manually though, especially if performance is important, and I'm not aware of existing implementations (I don't think Clojure's vectors are easy or convenient to use from Java).
I'm not sure I understand what you're looking for but I'll try to give a couple of pointers based on some things I've seen in the standard classes:
CopyOnWriteArrayList is a mutable yet threadsafe list because it duplicates the internal array on updates. Perhaps you could adapt some ideas from that, although it's obviously not efficient for large lists.
ConcurrentHashMap implements similar ideas on a much more complicated structure. It divides the internal hash table into separate partitions, so that changes only need to lock access to the relevant partition.
For an immutable list you could do something similar: divide the list's internal array into several partitions and treat them all as immutable. When you need to change the list, you only need to clone one partition and the index of the partitions, which will be cheaper than duplicating the whole list.
AWTEventMulticaster achieves similar goals, but duplicates the absolute minimum. It's a clever binary tree. See the source.
With a smaller internal partition or block size, you get faster updates but generally slower access. With a larger block (e.g., the entire array) you get slower updates but faster access.
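A rough sketch of that partitioning idea (the class name, chunk size and layout below are my own illustration, not an existing library):

    import java.util.Arrays;

    // Hypothetical immutable list stored as fixed-size chunks. Updating one
    // element copies only the affected chunk plus the small top-level index,
    // not the whole backing array.
    final class ChunkedImmutableList<E> {
        private static final int CHUNK_SIZE = 32;   // arbitrary partition size
        private final Object[][] chunks;            // the "index of the partitions"
        private final int size;

        ChunkedImmutableList(Object[][] chunks, int size) {
            this.chunks = chunks;
            this.size = size;
        }

        @SuppressWarnings("unchecked")
        E get(int i) {
            return (E) chunks[i / CHUNK_SIZE][i % CHUNK_SIZE];
        }

        // Returns a new list; only one chunk and the top-level array are copied.
        ChunkedImmutableList<E> with(int i, E value) {
            Object[][] newChunks = chunks.clone();
            Object[] newChunk = Arrays.copyOf(chunks[i / CHUNK_SIZE], chunks[i / CHUNK_SIZE].length);
            newChunk[i % CHUNK_SIZE] = value;
            newChunks[i / CHUNK_SIZE] = newChunk;
            return new ChunkedImmutableList<>(newChunks, size);
        }

        int size() { return size; }
    }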
If you really need fastest access and update, you have to use a mutable array.
While studying the HashSet class, I noticed it uses the concept of a fill ratio (load factor), which says that if the HashSet is filled up to this limit, a larger HashSet is created and the values are copied into it. Why don't we let the HashSet fill up completely with objects and only then create a new one? Why was this concept introduced for HashSet?
Both ArrayList and Vector are accessed by positional index, so that there are no conflicts and access is always O(1).
A hash-based data structure is accessed by a hashed value, which can collide and degrade into access to a second-level "overflow" data structure (list or tree). If you have no such collisions, access is O(1), but if you have many collisions, it can be significantly worse. You can control this a bit by allocating more memory (so that there are more buckets and hopefully fewer collisions).
As a result, there is no need to grow an ArrayList to a capacity more than you need to store all elements, but it does make sense to "waste" a bit (or a lot) in the case of a HashSet. The parameter is exposed to allow the programmer to choose what works best for her application.
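For example, if you know roughly how many elements you will store, you can size the set up front so it never has to rehash (the numbers below are purely illustrative):

    import java.util.HashSet;
    import java.util.Set;

    class HashSetSizing {
        public static void main(String[] args) {
            // Expecting ~10,000 elements: capacity * loadFactor comfortably
            // exceeds the expected size, so no resizing/rehashing is needed,
            // at the cost of some "wasted" empty buckets.
            Set<String> names = new HashSet<>(16_384, 0.75f);
            names.add("example");
            System.out.println(names.size());
        }
    }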
As Jonny Henly has described, it is because of the way data is stored.
ArrayList is a linear data structure, while HashSet is not. In a HashSet, data is stored in an underlying array based on hash codes. In a way, the performance of a HashSet is linked to how many buckets are filled and how well the data is distributed among these buckets. Once this distribution of data goes beyond a certain level (called the load factor), re-hashing is done.
HashSet is primarily used to ensure that the basic operations (such as adding, fetching, modifying and deleting) are performed in constant time regardless of the number of entries being stored in the HashSet.
Though a well-designed hash function can achieve this, designing one might take time. So if performance is a critical requirement for the application, we can use the load factor to help keep the operations constant-time as well. I think we could call the two of them (the load factor and the hash function) complementary means to the same end.
I agree that this may not be a perfect explanation, but I hope it does bring some clarity on the subject.
I hope that this question is specific enough to be deemed fit for StackOverflow. I checked the FAQ and I think this qualifies, since it is specific and related to programming.
I'm implementing a complex data mining algorithm (FP-growth) in Java. Some of the initial phases of the algorithm require me to scan a large database and keep a running count of each item type found. This seems perfectly suited to a Hashbag interface. I found one in Apache Commons which seems to work for me.
So now, my HashBag is filled with [itemType, count] entries (pairs). Later on in the algorithm, I'm required to do a lot of list-like operations on these pairs. In some cases, I must sort the collection by itemType. In others, I must sort by count. This seems perfectly suited to a List interface.
I'm left with the conclusion that I must convert my HashBag to a List. Yet it feels dirty somehow, like a waste of space and time. Is there a smarter way to do this, or is it a common situation to have a programming problem where you must treat your collection differently at different times, and conversions are a necessary evil?
One alternative is to make my own interface which is truly a list, but allows "bag-style" adds. I'd have to keep the list sorted and perform binary searches with a custom comparator every time I wanted to add something. Building that collection would probably take longer than building a Hashbag, but I'd save on the conversion step at the end. Any thoughts as to which is preferable?
Thanks!
If you used Guava's Multiset instead of Apache's Bag -- roughly analogous, but in a different style -- you can do most of this without converting. Multiset.entrySet() returns a Set<Entry<E>>, with Entry<E> effectively representing a pair of an element and a count -- that sounds like it's probably the best way to address your need to operate on the element-count pairs, maybe? You can iterate over that like you'd iterate over a Map.entrySet().
You can use Multisets.copyHighestCountFirst(Multiset) to get a multiset reordered in highest-frequency-first order, and use TreeMultiset to order by the elements directly.
(Disclosure: I contribute to Guava.)
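For illustration, a rough sketch of that approach (using plain Strings as the item type):

    import com.google.common.collect.HashMultiset;
    import com.google.common.collect.Multiset;
    import com.google.common.collect.Multisets;
    import com.google.common.collect.TreeMultiset;

    class MultisetDemo {
        public static void main(String[] args) {
            Multiset<String> counts = HashMultiset.create();
            counts.add("bread");
            counts.add("milk");
            counts.add("bread");

            // Iterate over (element, count) pairs, much like Map.entrySet().
            for (Multiset.Entry<String> entry : counts.entrySet()) {
                System.out.println(entry.getElement() + " -> " + entry.getCount());
            }

            // Highest-frequency-first view...
            System.out.println(Multisets.copyHighestCountFirst(counts));
            // ...or ordered by the elements themselves.
            System.out.println(TreeMultiset.create(counts));
        }
    }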
I assume you're using the Apache Commons Collections HashBag class. Have you considered using TreeBag instead? It implements the same Bag interface but efficiently keeps the data sorted according to a comparator you provide.
That said, when you need to change sort order, there isn't usually any better answer than to copy the collection to a new one with a different comparator.
Yet it feels dirty somehow, like a waste of space and time. Is there a smarter way to do this, or is it a common situation to have a programming problem where you must treat your collection differently at different times, and conversions are a necessary evil?
Sometimes it is necessary to convert between collection types. If it is necessary, then "dirty", "inelegant" or "dumb" are not really relevant.
It can also be a mistake to over-think these things up front. The actual computational trade-offs are often difficult to grasp. For instance, if you changed the HashBag to a TreeBag, insertion goes from O(1) to O(logN) but you then avoid the overheads of sorting and copying. "Big Oh" analysis / thinking is not going to give you a clear answer. Indeed, the real performance is going to depend on the scaling factors, the values of N, the ratio of hits and misses in the bag and so on.
I would advise implementing things the obvious way first and seeing if that performs well enough ... and if not, profiling it to see whether the data structures are the main bottleneck. Then, based on the profiling and other measurements of the input datasets, figure out the best way to improve performance over your baseline implementation.
Answering my own question!
I did some experimenting with the different types of Multiset provided by the Guava library mentioned above by Louis Wasserman. In my particular test case, I'm parsing a 1GB XML file (a database of books and authors) and creating a very large Multiset (keeping a count of how many times each author shows up in the DB). Once I reach the end of the parsing, I need to get a new Multiset which only contains the authors who showed up more than x times, where x is some threshold value. I also want my final set to be sorted by author name.
Here are two of the different ways I tried it (among others):
1) collect the original counts in a TreeMultiset and then remove any which don't meet the threshold
2) collect the original counts in a HashMultiset, and then create a new TreeMultiset, adding each item from the HashMultiset whose count meets the threshold
The second way proved to be significantly faster (roughly 25%), despite the conversion and the extra memory usage. Obviously a big part of this is that removing many individual entries from a binary tree is comparatively expensive.
So the clear conclusion here is that in this situation, conversion is a good move (unless you have memory constraints that won't allow it).
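For reference, a rough sketch of the second (faster) approach described above, assuming author names are plain Strings and a caller-supplied threshold:

    import com.google.common.collect.Multiset;
    import com.google.common.collect.TreeMultiset;

    class AuthorCounts {
        // Count into a HashMultiset while parsing, then copy only the entries
        // whose count meets the threshold into a TreeMultiset (sorted by name).
        static Multiset<String> filterByThreshold(Multiset<String> rawCounts, int threshold) {
            Multiset<String> result = TreeMultiset.create();
            for (Multiset.Entry<String> entry : rawCounts.entrySet()) {
                if (entry.getCount() >= threshold) {
                    result.add(entry.getElement(), entry.getCount());
                }
            }
            return result;
        }
    }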
Thanks again for turning me onto the Guava library, Louis!
I'm a student and fairly new to Java. I was looking over the different speeds achieved by two collections in Java, LinkedList and ArrayList. I know that an ArrayList is much, much faster at looking up and placing values into its indexes. My question is:
how can one make a linked list faster, if at all possible?
Thanks for any help.
zmahir
When talking about speed, perhaps you mean complexity. Accessing an element at a given index in an ArrayList (or an array) is O(1), while for a LinkedList it is O(n). And this cannot be changed - it follows from how the structures are defined.
O(n) means that, in order to retrieve an object at a given position (or to reach the position where you want to insert one), you must traverse, in the worst case, all (n) items in the list - hence n operations. For an ArrayList, reaching the position is a single operation, because the index maps directly into the backing array (although inserting in the middle still requires shifting the later elements).
You probably can't. You don't know the size (well, OK, you can), nor the location of each element. To find element 100 in a linked list, you need to start with item 1, follow its link to item 2, and so on until you reach item 100. This makes inserting into the middle of such a list a tedious job.
There are many alternatives depending on your exact goals. You can use B-trees or similar methods to split one large linked list into smaller ones. Or use hash-based structures if you want to find items quickly. Or use simple arrays. But if you want a list that performs like an ArrayList, why not use an ArrayList?
You can split off regions which are linked to the main linked list, so this gives you entry points directly inside the list so you don't have to walk up to them. See the subList method here: http://download.oracle.com/javase/1.4.2/docs/api/java/util/AbstractList.html. This is useful if you have a number of 'sentences' made out of words, say. You can use a separate linked list to iterate over the sentences, which are sublists of the main linked list.
You can also use a ListIterator when adding, removing, or accessing elements. This helps greatly with increasing the speed of sequential access. See the listIterator method for this, and the class: http://download.oracle.com/javase/1.4.2/docs/api/java/util/ListIterator.html.
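For example, removing every other element in one sequential pass with a ListIterator stays O(n) on a LinkedList, whereas repeated index-based remove(i) calls would be O(n²):

    import java.util.LinkedList;
    import java.util.List;
    import java.util.ListIterator;

    class ListIteratorDemo {
        public static void main(String[] args) {
            List<Integer> list = new LinkedList<>(List.of(1, 2, 3, 4, 5, 6));

            // Each remove() is O(1) at the iterator's current position; a call
            // like list.remove(index) would first walk to the index every time.
            ListIterator<Integer> it = list.listIterator();
            while (it.hasNext()) {
                if (it.next() % 2 == 0) {
                    it.remove();
                }
            }
            System.out.println(list); // [1, 3, 5]
        }
    }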
Speed of a linked list could be improved by using skip lists: http://igoro.com/archive/skip-lists-are-fascinating/
A linked list uses pointers to walk through its items, so, for example, if you ask for the 5th item, the runtime starts from the first item and follows each pointer until it reaches the 5th item.
There is really not much you can do about it. A linked list may not be a good choice if you need fast access to items. There are some optimizations, such as a circular linked list or a doubly linked list (where you can walk back and forth through the list), but this really depends on the business logic and the application requirements.
My advice is to avoid linked lists if they do not match your needs; changing to a different data structure might be the best approach.
As a general rule, data structures are designed to do certain things well. LinkedLists are designed to be faster than ArrayLists at inserting elements and removing elements and about the same as ArrayLists at iterating across the list in order. When you change the way a LinkedList works, you make it no longer a true LinkedList, so there's not really any way to modify them to be faster at something and still be a LinkedList.
You'll need to examine the way you're using this particular collection and decide whether a LinkedList is really the best data structure for your purposes. If you share with us how you're using it, and why you need it to be faster, then we can advise you on which data structure you ought to consider using.
Lots of people smarter than you or I have looked at the implementation of the Java collection classes. If there were an optimization to be made, they would have found it and already made it.
Since the collection classes are pretty much as optimized as they can be, our primary task should be to choose the correct one.
When choosing your collection type, don't forget about things like HashSet. If order doesn't matter, and you don't need to put duplicates in the collection, then HashSet may be appropriate.
I'm a student and fairly new to Java. ... how can one make a linked list faster, if at all possible?
The standard Java collection types (indeed, all data structures implemented in any language!) represent compromises on various "measures", such as:
The amount of memory needed to represent the data structure.
The time taken to perform various operations; e.g. for a "list" the operations of interest are insertion, removal, indexing, contains, iteration and so on.
How easy or hard it is to integrate / reuse the collection type; see below.
So for instance:
ArrayList offers lower memory overheads, fast indexing (O(1)), but slow contains, random insertion and removal (O(N)).
LinkedList has higher memory overheads, slow indexing and contains (O(N)), but faster removal (O(1)) under certain circumstances.
The various performance measures are typically determined by the maths of the data structure. For example, if you have a chain of nodes, the only way to get the i-th node is to step through them from the beginning. This involves following i pointers.
Sometimes you can modify a data structure to improve one aspect of its performance. But this typically comes at the cost of some other aspect. (For example, you could add a separate index to make indexing of a linked list faster. But the cost of maintaining the index on insertion/deletion would mean that you'd probably be better off using an ArrayList.)
In some cases the integration / reuse requirements have significant impact on performance.
For example, it is theoretically possible to optimize a linked list's space usage by adding a next field to the list element type, combining the element and node objects and saving 16 or so bytes per list entry. However, this would make the list type less general (the member/element class would need to implement a specific interface), and has the restriction that an element can belong to at most one list at any time. These restrictions are so limiting that this approach is rarely used in Java.
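To make that concrete, here is a hypothetical sketch of such an "intrusive" element type (my own illustration, not a standard Java API):

    // The element itself carries the "next" link, so no separate node object
    // is allocated per entry. The restrictions described above apply: the
    // element class must implement this interface, and each element can be
    // in at most one such list at a time.
    interface IntrusiveNode<T extends IntrusiveNode<T>> {
        T getNext();
        void setNext(T next);
    }

    final class Task implements IntrusiveNode<Task> {
        final String name;
        private Task next;

        Task(String name) { this.name = name; }

        @Override public Task getNext() { return next; }
        @Override public void setNext(Task next) { this.next = next; }
    }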
For a second example, consider the problem of inserting at a given position in a linked list. For the LinkedList class, this is normally an O(N) operation, because you have to step through the list to find the position. In theory, if an application could find and remember a position, it should be able to perform the insertion at that position in O(1). Unfortunately, the List API provides no way to "remember" a position.
While neither of these examples is a fundamental roadblock to a developer "doing his own thing", they illustrate that using general data structure APIs and general implementations of those APIs has performance implications, and therefore represents a trade-off between performance and ease-of-use.
I'm a bit surprised by the answers here. There are big differences between the theoretical performance of LinkedLists and ArrayLists and the actual performance of the Java implementations.
What makes the Java LinkedList slower than a theoretical linked list is that it does a lot more than just the basic operations. For example, it checks for concurrent modification and performs other safety checks.
If you know your use case, you can write your own simple implementation of a linked list and it will be much faster.
My TableModel implementations typically sit on an ArrayList to allow for efficient random-access performance. However, ArrayList's remove(int) implementation looks fairly inefficient, as it involves a System.arraycopy(...) call to shift all subsequent elements back by 1.
What approaches do people take to implementing TableModels? Is there a better data structure I should be considering? ... perhaps a 3rd-party library?
Some more information: My table data can shrink and grow so any fixed-size buffer implementation isn't going to work.
Thanks in advance.
Your question reeks of "Premature Optimization".
On my computer, System.arraycopy() can copy 1 million elements in roughly 13 ms. So I suggest measuring whether this is really an issue. In the general case, ArrayList is faster and has better memory behaviour than any other similar data structure.
Using a LinkedList would make all operations on the list (including remove()) slower since you will now have to traverse half of all list elements for each operation (on average). So most operations would go from O(1) to O(N/2).
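For reference, a minimal ArrayList-backed TableModel sketch (the row type and columns below are made up for illustration); in most cases the arraycopy inside removeRow() is cheap compared to what the renderer does:

    import java.util.ArrayList;
    import java.util.List;
    import javax.swing.table.AbstractTableModel;

    // ArrayList gives O(1) getValueAt(), which is what the renderer calls most
    // often; remove() is O(n) but usually far rarer than rendering queries.
    class PersonTableModel extends AbstractTableModel {
        private final List<String[]> rows = new ArrayList<>();   // [name, email]

        @Override public int getRowCount()    { return rows.size(); }
        @Override public int getColumnCount() { return 2; }
        @Override public Object getValueAt(int row, int col) { return rows.get(row)[col]; }

        void addRow(String name, String email) {
            rows.add(new String[] { name, email });
            fireTableRowsInserted(rows.size() - 1, rows.size() - 1);
        }

        void removeRow(int row) {
            rows.remove(row);                    // the arraycopy shift happens here
            fireTableRowsDeleted(row, row);
        }
    }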
When implementing a TableModel you need to consider 2 things
1) creating and manipulating the model
2) rendering your table
The first is a lot less important performance-wise than the second.
You typically create your model once and do a small amount of manipulation, while the rendering engine behind the table continuously queries your TableModel.
This means that retrieving information from the model has to be optimal; any action taken to speed up model manipulation at the cost of model queries is therefore to be avoided.
In my case I need both heavy manipulation and fast rendering. So what I chose was a sort of linked-list implementation for all manipulation, combined with an array; the two are kept in sync. All inserting, manipulation and deletion is done through the linked-list structure, and all rendering is done with the array structure.
If you need to remove elements frequently you could choose a LinkedList implementation. You pay a bit of memory for speedy removals.
Anyone have a good rule of thumb for choosing between different implementations of Java Collection interfaces like List, Map, or Set?
For example, generally why or in what cases would I prefer to use a Vector or an ArrayList, a Hashtable or a HashMap?
I really like this cheat sheet from Sergiy Kovalchuk's blog entry, but unfortunately it is offline. However, the Wayback Machine has a historical copy:
More detailed was Alexander Zagniotov's flowchart, also offline, so here too is a historical copy of the blog:
Excerpt from the blog on concerns raised in comments:
"This cheat sheet doesn't include rarely used classes like WeakHashMap, LinkedList, etc. because they are designed for very specific or exotic tasks and shouldn't be chosen in 99% cases."
I'll assume you know the difference between a List, Set and Map from the above answers. Why you would choose between their implementing classes is another thing. For example:
List:
ArrayList is quick at retrieving but slow at inserting. It's good for an implementation that reads a lot but doesn't insert/remove a lot. It keeps its data in one contiguous block of memory, so every time it needs to expand, it copies the whole array.
LinkedList is slow at retrieving but quick at inserting. It's good for an implementation that inserts/removes a lot but doesn't read a lot. It doesn't keep its data in one contiguous block of memory.
Set:
HashSet doesn't guarantee the order of iteration, and is therefore the fastest of the sets. It has high overhead and is slower than ArrayList, so you shouldn't use it except when the amount of data is large enough that its hashing speed becomes a factor.
TreeSet keeps the data ordered, therefore is slower than HashSet.
Map: The performance and behavior of HashMap and TreeMap are parallel to the Set implementations.
Vector and Hashtable should not be used. They are synchronized implementations from before the release of the new collections hierarchy, and are thus slow. If synchronization is needed, use Collections.synchronizedCollection().
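For example, a quick sketch of the synchronized wrappers that replace Vector and Hashtable when synchronization really is required:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class SynchronizedWrappers {
        public static void main(String[] args) {
            // Wrap the unsynchronized implementations only when needed.
            List<String> syncList = Collections.synchronizedList(new ArrayList<>());
            Map<String, Integer> syncMap = Collections.synchronizedMap(new HashMap<>());

            syncList.add("a");
            syncMap.put("a", 1);
            System.out.println(syncList + " " + syncMap);
        }
    }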
I've always made those decisions on a case by case basis, depending on the use case, such as:
Do I need the ordering to remain?
Will I have null key/values? Dups?
Will it be accessed by multiple threads?
Do I need a key/value pair?
Will I need random access?
And then I break out my handy 5th edition Java in a Nutshell and compare the ~20 or so options. It has nice little tables in Chapter five to help one figure out what is appropriate.
OK, maybe if I know off the cuff that a simple ArrayList or HashSet will do the trick, I won't look it all up. ;) But if there is anything remotely complex about my intended use, you bet I'm in the book. BTW, I thought Vector was supposed to be 'old hat' - I've not used one in years.
Theoretically there are useful Big-Oh tradeoffs, but in practice these almost never matter.
In real-world benchmarks, ArrayList out-performs LinkedList even with big lists and with operations like "lots of insertions near the front." Academics ignore the fact that real algorithms have constant factors that can overwhelm the asymptotic curve. For example, linked-lists require an additional object allocation for every node, meaning slower to create a node and vastly worse memory-access characteristics.
My rule is:
Always start with ArrayList and HashSet and HashMap (i.e. not LinkedList or TreeMap).
Type declarations should always be an interface (i.e. List, Set, Map) so if a profiler or code review proves otherwise you can change the implementation without breaking anything.
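A trivial illustration of that rule:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class InterfaceTypedDeclarations {
        // Declared against the interfaces: switching to LinkedList or TreeMap
        // later only changes the right-hand side of each assignment.
        private final List<String> names = new ArrayList<>();
        private final Map<String, Integer> scores = new HashMap<>();
    }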
About your first question...
List, Map and Set serve different purposes. I suggest reading about the Java Collections Framework at http://java.sun.com/docs/books/tutorial/collections/interfaces/index.html.
To be a bit more concrete:
use List if you need an array-like data structure and you need to iterate over the elements
use Map if you need something like a dictionary
use a Set if you only need to decide if something belongs to the set or not.
About your second question...
The main difference between Vector and ArrayList is that the former is synchronized, the latter is not synchronized. You can read more about synchronization in Java Concurrency in Practice.
The difference between Hashtable (note that the T is not a capital letter) and HashMap is similar: the former is synchronized, the latter is not.
I would say that there is no rule of thumb for preferring one implementation over another; it really depends on your needs.
For non-sorted the best choice, more than nine times out of ten, will be: ArrayList, HashMap, HashSet.
Vector and Hashtable are synchronised and therefore might be a bit slower. It's rare that you would want synchronised implementations, and when you do, their interfaces are not sufficiently rich for their synchronisation to be useful. In the case of Map, ConcurrentMap adds extra operations to make the interface useful. ConcurrentHashMap is a good implementation of ConcurrentMap.
LinkedList is almost never a good idea. Even if you are doing a lot of insertions and removal, if you are using an index to indicate position then that requires iterating through the list to find the correct node. ArrayList is almost always faster.
For Map and Set, the hash variants will be faster than the tree/sorted ones. Hash algorithms tend to have O(1) performance, whereas trees will be O(log n).
Lists allow duplicate items, while Sets allow only one instance.
I'll use a Map whenever I'll need to perform a lookup.
For the specific implementations, there are order-preserving variations of Maps and Sets but largely it comes down to speed. I'll tend to use ArrayList for reasonably small Lists and HashSet for reasonably small sets, but there are many implementations (including any that you write yourself). HashMap is pretty common for Maps. Anything more than 'reasonably small' and you have to start worrying about memory so that'll be way more specific algorithmically.
This page has lots of animated images along with sample code testing LinkedList vs. ArrayList if you're interested in hard numbers.
EDIT: I hope the following links demonstrate how these things are really just items in a toolbox, you just have to think about what your needs are: See Commons-Collections versions of Map, List and Set.
Well, it depends on what you need. The general guidelines are:
List is a collection where data is kept in insertion order and each element has an index.
Set is a bag of elements without duplicates (if you reinsert the same element, it won't be added again). Data has no notion of order.
Map: you access and write your data elements by their key, which can be any possible object.
Attribution: https://stackoverflow.com/a/21974362/2811258
For more information about Java Collections, check out this article.
As suggested in other answers, there are different scenarios for choosing the correct collection depending on the use case. I am listing a few points:
ArrayList:
Most cases where you just need to store a "bunch of things" and later iterate through them. Iterating is fast, as it is index-based.
Whenever you create an ArrayList, a fixed amount of memory is allocated to it; once that capacity is exceeded, the whole array is copied into a larger one.
LinkedList:
It uses a doubly linked list, so insertion and deletion operations are fast - they only add or remove a node.
Retrieving is slow, as it has to iterate through the nodes.
HashSet:
Making yes/no decisions about an item, e.g. "is the item a word of English?", "is the item in the database?", "is the item in this category?", etc.
Remembering "which items you've already processed", e.g. when doing a web crawl;
HashMap:
Used in cases where you need to ask "for a given X, what is the Y?". It is often useful for implementing in-memory caches or indexes, i.e. key-value pairs. For example:
For a given user ID, what is their cached name/User object?
Always go with HashMap to perform a lookup.
Vector and Hashtable are synchronized and therefore a bit slower. If synchronization is needed, use Collections.synchronizedCollection().
Check This for sorted collections.
Hope this helped.
I found Bruce Eckel's Thinking in Java to be very helpful. He compares the different collections very well. I used to keep a diagram he published showing the inheritance hierarchy on my cube wall as a quick reference. One thing I suggest you do is keep thread safety in mind. Performance usually means not thread-safe.
Use Map for key-value pairing
For key-value tracking, use a Map implementation.
For example, tracking which person is covering which day of the weekend. So we want to map a DayOfWeek object to an Employee object.
    Map<DayOfWeek, Employee> weekendWorker =
            Map.of(
                    DayOfWeek.SATURDAY, alice,
                    DayOfWeek.SUNDAY, bob
            );
When choosing one of the Map implementations, there are several aspects to consider. These include: concurrency, tolerance for NULL values in key and/or value, order when iterating keys, tracking by reference versus content, and convenience of literals syntax.
Here is a chart I made showing the various aspects of each of the ten Map implementations bundled with Java 11.