Java object arrays - use of the hardware memory cache

Iterating over consecutive elements of an array is generally considered to be more efficient than iterating over consecutive linked list elements because of caching.
This is undoubtedly true as long as the elements are primitive types. But if the elements are objects, my understanding is that only the references to the objects are stored in the contiguous memory area of the array (which is likely to be cached), while the actual object data can live anywhere in main memory and cannot be cached as effectively.
Since you normally don't just iterate over the container but also need to access the object data in each iteration, doesn't this more or less kill the performance benefit of the array over the list?
Edit: The comment about different scenarios varying greatly is probably correct. So let's consider a specific one: you search for a specific object in the container, and in order to find it you need to compare a given string to a string that is a field of each object.

No; for objects ("pointers") there is an indirection in both cases. A linked list additionally needs to step from every node to the next one, so it still has extra overhead.
But yes, relatively speaking, the gain applies only to part of the work: counting indirection steps, very roughly half of the pure walk-through.
And of course every indirection makes access more scattered, and therefore slower.
By the way, there is also ArrayList, which is about as fast as a plain array.
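To make the edit's search scenario concrete, here is a minimal sketch (the Item class and its name field are made up for illustration): in both versions every comparison follows the element reference to reach the String field; the array only saves the node-to-node hop and gains locality for the block of references itself.

import java.util.LinkedList;

public class SearchComparison {
    // Hypothetical element type, made up for illustration.
    static class Item {
        final String name;
        Item(String name) { this.name = name; }
    }

    // Array: the references are contiguous (cache-friendly), but each
    // comparison still dereferences the element to reach its String field.
    static Item findInArray(Item[] items, String target) {
        for (Item item : items) {
            if (target.equals(item.name)) return item;
        }
        return null;
    }

    // Linked list: same per-element dereference, plus a node-to-node hop
    // that can itself be a cache miss.
    static Item findInList(LinkedList<Item> items, String target) {
        for (Item item : items) {
            if (target.equals(item.name)) return item;
        }
        return null;
    }
}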

Related

Why does the StringBuffer class use an array as its underlying data structure instead of a LinkedList?

The StringBuffer/StringBuilder classes in Java are primarily used to modify String values without having to initialize a new String object every time.
Is there a specific reason they don't use a LinkedList instead of a char array as their underlying data structure?
Inserting a char into an array always takes O(n) time, because all elements after the insertion point have to be copied over by one index, whereas that can be done in O(1) time with a LinkedList.
Random access
StringBuilder has random access operations such as charAt or substring, which would be extremely slow with a linked list.
Insertions
In fact, array lists aren't particularly slower than linked lists even when it comes to other operations like insertion. Typically StringBuilder won't be used to create strings of a million characters so it's unlikely that we need to resize the buffer too many times.
At the end
I have to correct you: insertion at the end does not always require an O(n) copy of the elements. The worst case is indeed O(n), but the amortized complexity is O(1), because we don't grow the array by one element at a time. When the array isn't big enough for another insertion, most implementations double its size.
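A minimal sketch of that doubling strategy (illustrative only, not the actual StringBuilder source):

import java.util.Arrays;

// Simplified growable char buffer: append is amortized O(1) thanks to doubling.
class GrowableCharBuffer {
    private char[] data = new char[16];
    private int length = 0;

    void append(char c) {
        if (length == data.length) {
            // Doubling means each char is copied O(1) times on average over many appends.
            data = Arrays.copyOf(data, data.length * 2);
        }
        data[length++] = c;
    }

    @Override
    public String toString() {
        return new String(data, 0, length);
    }
}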
In the middle
Insertions in the middle always require copying the elements to the right of the insertion point, so yes, it is pretty slow, but that's not a typical use-case for a StringBuilder, where most insertions happen at the end. Also, linked lists have the same average complexity for insertions in the middle, since they first have to reach the right node by iterating through the list.
Data locality
Another advantage of arrays compared to linked list is data locality. Array lists are faster to iterate than linked lists because when the processor loads a piece of memory around an element of an array, it will also cache some of the neighbours of this element which will therefore be returned faster. On the other hand, all elements of a linked list may live in very distant memory locations, which is not cache-friendly.
Memory footprint
Because we double the size of the array on every resize, dynamic arrays can have a pretty big memory footprint (but at least we don't need to copy elements too often). Linked lists also have a fairly large memory footprint, since each element needs its own node object with extra references, whereas elements are stored compactly in an array. On average, I'd say a typical array list has a smaller memory footprint than a linked list, but I may be wrong. This is particularly true for primitive types - such as char - because linked lists require wrapper objects (at least in Java, since there are no pointers), whereas we can use much more compact primitive arrays.
Last notes
Finally, I used StringBuilder in my answer instead of StringBuffer because it is the recommended implementation for most use-cases. StringBuffer is only preferable when thread-safety is a hard requirement; otherwise, StringBuilder will have better performance.
PS: Python's most prominent data structure is list and guess what... it's implemented with a dynamic array! Resizable arrays are very often a better choice than linked lists. The only case in which a linked list is notably more performant is when the application focuses on elements close to the head of the list and makes frequent insertions or deletions in this area.

Why don't we count linear search cost as a prerequisite bottleneck for the insertion operation of a linked list, compared to ArrayList?

I have had this question for a while but I have been unsatisfied with the answers because the distinctions appear to be arbitrary and more like conventional wisdom that is sort of blindly accepted rather than assessed critically.
In an ArrayList it is said that insertion cost (for a single element) is linear. If we are inserting at index p for 0 <= p < n where n is the size of the list, then the remaining n-p elements are shifted over first before the new element is copied into position p.
In a LinkedList, it is said that insertion cost (for a single element) is constant. For instance if we already have a node and we want to insert after it, we rearrange some pointers and it's done quickly. But getting this node in the first place, I don't see how it can be done other than a linear search first (assuming it isn't a trivial case like prepending at the start of the list or appending at the end).
And yet in the case of the LinkedList, we don't count that initial search time. To me this is confusing because it's sort of like saying "The ice cream is free... after you pay for it." It's like, well, of course it is... but that sort of skips the hard part of paying for it. Of course inserting in a LinkedList is going to be constant time if you already have the node you want, but getting that node in the first place may take some extra time! I could easily say that inserting in an ArrayList is constant time... after I move the remaining n-p elements.
So I don't understand why this distinction is made for one but not the other. You could argue that insertion is considered constant for LinkedLists because of the cases where you insert at the front or back, where no linear-time work is required, whereas in an ArrayList insertion requires copying the suffix of the array after position p. But I could just as easily counter that inserting at the back of an ArrayList is amortized constant time and doesn't require extra copying in most cases, unless we reach capacity.
In other words, we separate the linear work from the constant work for LinkedList, but we don't separate them for ArrayList, even though in both cases the linear work may or may not be needed.
So why do we consider them separate for LinkedList and not for ArrayList? Or are they only being defined here in the context where LinkedList is overwhelmingly used for head/tail appends and prepends as opposed to elements in the middle?
This is basically a limitation of the Java interface for List and LinkedList, rather than a fundamental limitation of linked lists. That is, in Java there is no convenient concept of "a pointer to a list node".
Every type of list has a few different concepts loosely associated with the idea of pointing to a particular item:
The idea of a "reference" to a specific item in a list
The integer position of an item in the list
The value of a item that may be in the list (possibly multiple times)
The most general concept is the first one, and is usually encapsulated in the idea of an iterator. As it happens, the simple way to implement an iterator for an array backed list is simply to wrap an integer which refers to the position of the item in a list. So for array lists only, the first and second ways of referring to items are pretty tightly bound.
For other list types, however, and even for most other container types (trees, hashes, etc.), that is not the case. The generic reference to an item is usually something like a pointer to the wrapper structure around one item (e.g., HashMap.Entry or LinkedList.Entry). For these structures the idea of accessing the nth element isn't necessarily natural or even possible (e.g., unordered collections like sets and many hash maps).
Perhaps unfortunately, Java made the idea of getting an item by its index a first-class operation. Many of the operations directly on List objects are implemented in terms of list indexes: remove(int index), add(int index, ...), get(int index), etc. So it's kind of natural to think of those operations as being the fundamental ones.
For LinkedList though it's more fundamental to use a pointer to a node to refer to an object. Rather than passing around a list index, you'd pass around the pointer. After inserting an element, you'd get a pointer to the element.
In C++ this concept is embodied in the concept of the iterator, which is the first class way to refer to items in collections, including lists. So does such a "pointer" exist in Java? It sure does - it's the Iterator object! Usually you think of an Iterator as being for iteration, but you can also think of it as pointing to a particular object.
So the key observation is: given a pointer (iterator) to an object, you can remove and add from a linked list in constant time, while on an array-like list this takes linear time in general. There is no inherent need to search for an object before deleting it: there are plenty of scenarios where you can maintain or take as input such a reference, or where you are processing the entire list, and there the constant-time deletion of linked lists really does change the algorithmic complexity.
Of course, if you need to do something like delete the first entry containing the value "foo", that implies both a search and a delete operation. Both array-based and linked lists take O(n) for the search, so they don't differ there - but you can meaningfully separate the search and delete operations.
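As an illustration (a sketch using a throwaway list of strings), the search and the delete can be separated explicitly with an Iterator; the removal itself is O(1) on a LinkedList, while an ArrayList would still shift the tail:

import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class RemoveFirstFoo {
    public static void main(String[] args) {
        List<String> list = new LinkedList<>(Arrays.asList("bar", "foo", "baz"));

        // The search is O(n) on any list implementation...
        Iterator<String> it = list.iterator();
        while (it.hasNext()) {
            if ("foo".equals(it.next())) {
                // ...but this removal is O(1) on a LinkedList, while an
                // ArrayList would still have to shift the tail elements.
                it.remove();
                break;
            }
        }
        System.out.println(list); // [bar, baz]
    }
}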
So you could, in principle, pass around Iterator objects rather than list indexes or object values - at least if your use case supports it. However, at the top I said that "Java has no convenient notion of a pointer to a list node". Why?
Well, because actually using an Iterator is very inconvenient. First of all, it's tough to get an Iterator to an object in the first place: for example, unlike in C++, the add() methods don't return an Iterator, so to get a pointer to the item you just added, you need to iterate over the list or use the listIterator(int index) call, which is inherently inefficient for linked lists. Many methods (e.g., subList()) support only a version that takes indexes, but not Iterators, even when such a method could be supported efficiently.
Add to that the restrictions around iterator invalidation when the list is modified, and they actually become pretty useless for referring to elements except in immutable lists.
So Java's support for pointers to list elements is pretty half-hearted, and so it's tough to leverage the constant-time operations that linked lists offer, except in cases such as adding to the front of a list, or deleting items during iteration.
It's not limited to lists, either - ConcurrentLinkedQueue is also a linked structure that supports constant-time deletes, but you can't reliably use that ability from Java.
If you're using a LinkedList, chances are you're not going to use it for random-access inserts. LinkedList offers constant time for push (insert at the beginning) or add at the end (because it keeps a reference to the final element, IIRC). You are correct in your suspicion that an insert at a random index (e.g. a sorted insert) will take linear time - not constant.
ArrayList, by contrast, is worst-case linear. Most of the time it simply does an arraycopy to shift the elements - still O(n), but a low-level bulk copy that is very fast in practice. Only when the backing array needs to be resized does it pay for an additional full copy.

Benefits of "both" arraylist and linkedlist... possible in java?

Of course, I know about the performance difference between ArrayList and LinkedList. I have run tests myself and seen the huge difference in time and memory for insertion/deletion and iteration between ArrayList and LinkedList on a very big list.
(Correct me if I am wrong.) We generally prefer ArrayList over LinkedList because:
1) We practically do iterations more often than insertions/deletions, so we prefer iteration to be fast rather than insertion/deletion.
2) The memory overhead of LinkedList is much higher than that of ArrayList.
3) There is NO way to define a list that behaves as a LinkedList while inserting/deleting in batches and as an ArrayList while iterating, because ArrayList and LinkedList have fundamentally different data-storage techniques.
Am I wrong about the 3rd point [I hope so :)]? Is there any way to get the benefits of these two data structures in a single list? I guess data structure designers must have thought about it.
If you are looking for some more performant collection implementations, check out Javolution. That package provides a FastList and FastTable which may at least reduce the cost of choosing between linked lists and array lists.
You might want to look into Clojure's "vectors" (which are a lot more than a simple array under the hood): http://blog.higher-order.net/2009/02/01/understanding-clojures-persistentvector-implementation/. They are O(log32 n) for lookup and insertion.
Note that these are directly usable from Java! (Actually, they're implemented in Java code.)
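For example, a persistent vector can be used straight from Java code. This is a rough sketch; clojure.lang is effectively an internal API, so treat the exact class and method names as an assumption to verify against the Clojure version you depend on:

import clojure.lang.IPersistentVector;
import clojure.lang.PersistentVector;

public class PersistentVectorDemo {
    public static void main(String[] args) {
        // cons returns a NEW vector; the original stays untouched (structural sharing).
        IPersistentVector v1 = PersistentVector.EMPTY;
        IPersistentVector v2 = v1.cons("a").cons("b");

        System.out.println(v2.nth(1));  // "b"
        System.out.println(v1.count()); // 0 - unchanged
    }
}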
There are probably other points to consider, but the aspects that make me choose LinkedList over ArrayList are:
When I don't need to get elements by index (e.g. when I process all elements anyway)
When I don't know the size when creating my list
Here is an interesting manifesto about this topic.

What is the need for the Collections Framework in Java?

What is the need for the Collections Framework in Java, given that all the data operations (sorting/adding/deleting) are possible with arrays, and arrays are, moreover, better in terms of memory consumption and performance compared with Collections?
Can anyone point me to a real, data-oriented example that shows the difference between the two implementations (arrays vs. Collections)?
Arrays are not resizable.
Java Collections Framework provides lots of different useful data types, such as linked lists (allows insertion anywhere in constant time), resizeable array lists (like Vector but cooler), red-black trees, hash-based maps (like Hashtable but cooler).
Java Collections Framework provides abstractions, so you can refer to a list as a List, whether backed by an array list or a linked list; and you can refer to a map/dictionary as a Map, whether backed by a red-black tree or a hashtable.
In other words, Java Collections Framework allows you to use the right data structure, because one size does not fit all.
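A small sketch of what coding against the abstraction looks like; only the construction line changes when you swap implementations:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InterfaceDemo {
    public static void main(String[] args) {
        // The same List code works whether the backing structure is an array or linked nodes.
        List<String> names = new ArrayList<>();      // or: new java.util.LinkedList<>()
        names.add("alice");
        names.add("bob");

        // The same Map code works whether the backing structure is a hash table or a red-black tree.
        Map<String, Integer> ages = new HashMap<>(); // or: new java.util.TreeMap<>()
        ages.put("alice", 30);

        System.out.println(names + " " + ages);
    }
}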
Several reasons:
Java's collection classes provide a higher-level interface than arrays.
Arrays have a fixed size. Collections (see ArrayList) have a flexible size.
Efficiently implementing a complicated data structure (e.g., a hash table) on top of raw arrays is a demanding task. The standard HashMap gives you that for free.
There are different implementations you can choose from for the same set of services: ArrayList vs. LinkedList, HashMap vs. TreeMap, synchronized wrappers, etc.
Finally, arrays are covariant: setting an element of an array is not guaranteed to succeed, because of typing errors that are detectable only at run time. Generics prevent this problem in collections, where such mistakes are caught at compile time.
Take a look at this fragment that illustrates the covariance problem:
String[] strings = new String[10];
Object[] objects = strings;   // legal, because arrays are covariant
objects[0] = new Date();      // throws ArrayStoreException: java.util.Date at run time
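For comparison, here is a minimal sketch of the same mistake written against a generic collection; the compiler rejects it outright instead of deferring the failure to run time:

import java.util.ArrayList;
import java.util.List;

public class GenericsAreInvariant {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        strings.add("ok");                       // fine: only Strings are allowed in

        // Both of these are rejected at compile time, not at run time:
        // List<Object> objects = strings;       // List<String> is not a List<Object>
        // objects.add(new java.util.Date());    // so a Date can never sneak in
    }
}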
Collection classes like Set, List, and Map implementations are closer to the "problem space." They allow developers to complete work more quickly and turn in more readable/maintainable code.
For each class in the Collections API there's a different answer to your question. Here are a few examples.
LinkedList: If you remove an element from the middle of an array, you pay the cost of moving all of the elements to the right of the removed element. Not so with a linked list.
Set: If you try to implement a set with an array, adding an element or testing for an element's presence is O(N). With a HashSet, it's O(1).
Map: To implement a map using an array would give the same performance characteristics as your putative array implementation of a set.
It depends upon your application's needs. There are so many types of collections, including:
HashSet
ArrayList
HashMap
TreeSet
TreeMap
LinkedList
So, for example, if you need to store key/value pairs, you will have to write a lot of custom code if it is based on an array, whereas the Hash* collections just work out of the box. As always, pick the right tool for the job.
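For instance, a key/value store that would take a fair amount of array bookkeeping is only a few lines with HashMap (a trivial sketch):

import java.util.HashMap;
import java.util.Map;

public class KeyValueDemo {
    public static void main(String[] args) {
        // With a raw array you would have to manage pairs, lookups and resizing yourself.
        Map<String, String> capitals = new HashMap<>();
        capitals.put("France", "Paris");
        capitals.put("Japan", "Tokyo");

        System.out.println(capitals.get("Japan"));        // Tokyo
        System.out.println(capitals.containsKey("Peru")); // false
    }
}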
Well, the basic premise is "wrong", since Java has included the Dictionary class since before the collection interfaces existed in the language...
Collections offer Lists, which are somewhat similar to arrays, but they also offer many things that arrays do not provide. I'll assume you were just asking about List (and perhaps Set) and leave Map out of it.
Yes, it is possible to get the same functionality as List and Set with an array, however there is a lot of work involved. The whole point of a library is that users do not have to "roll their own" implementations of common things.
Once you have a single implementation that everyone uses it is easier to justify spending resources optimizing it as well. That means when the standard collections are sped up or have their memory footprint reduced that all applications using them get the improvements for free.
A single interface for each thing also simplifies every developers learning curve - there are not umpteen different ways of doing the same thing.
If you wanted to have an array that grows over time you would probably not put the growth code all over your classes, but would instead write a single utility method to do that. Same for deletion and insertion etc...
Also, arrays are not well suited to insertion/deletion, especially when you expect the .length member to reflect the actual number of elements: you would spend a huge amount of time growing and shrinking the array. Arrays are also not well suited for sets, since you would have to iterate over the entire array on every insertion to check for duplicates. That would kill any perceived efficiency.
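To get a feel for how much of this you would end up writing by hand, here is roughly the kind of growth utility mentioned above - a sketch only, and it covers just growth, not insertion, deletion, or duplicate checks:

import java.util.Arrays;

final class ArrayUtil {
    // Ensure the array can hold at least minCapacity elements, growing by ~1.5x as needed.
    static int[] ensureCapacity(int[] array, int minCapacity) {
        if (array.length >= minCapacity) {
            return array;
        }
        int newCapacity = Math.max(minCapacity, array.length + (array.length >> 1));
        return Arrays.copyOf(array, newCapacity);
    }
}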
Arrays are not always efficient. What if you need something like a LinkedList? It sounds like you need to learn about some data structures: http://en.wikipedia.org/wiki/List_of_data_structures
Java collections offer different functionality, usability and convenience.
When we want to work with a group of objects in an application, arrays alone cannot always help us; or rather, they may force us to do things through some cumbersome operations.
One important difference is usability and convenience, especially given that collections automatically expand in size when needed:
Collections come with methods that simplify our work.
Each one has a unique feature:
List - essentially a variable-size array;
you can usually add/remove items at any arbitrary position;
the order of the items is well defined (i.e. you can say what position a given item occupies in the list).
Used for: most cases where you just need to store a "bunch of things" and later iterate through them.
Set - things can be "there or not": when you add items to a set, there's no notion of how many times an item was added, and usually no notion of ordering.
Used for: remembering "which items you've already processed", e.g. when doing a web crawl;
making other yes/no decisions about an item, e.g. "is the item a word of English?", "is the item in the database?", "is the item in this category?", etc.
Collections form a framework in Java, and a framework is much easier to use than implementing everything yourself first and then using it. Your concern is why we don't just use arrays; arrays have drawbacks, for example they are static: you have to define the size at the beginning, so if your array is large it primarily results in a lot of wasted memory.
So you can prefer ArrayList instead, which sits inside the collection hierarchy.
Complexity is another issue: if you want to insert into an array, you have to shift the elements after the desired index yourself, whereas with LinkedList all of those functions are already implemented; you only need to call them, which keeps your code less complex. You can read about the various other advantages of the collection hierarchy.
The Collections Framework is much higher-level compared to arrays: it provides important interfaces and classes through which we can manage groups of objects in a much more sophisticated way, with many methods already supplied by each specific collection.
For example:
ArrayList - it's like a dynamic array, i.e. we don't need to declare its size; it grows as we add elements to it and shrinks as we remove elements from it, during the runtime of the program.
LinkedList - it can be used as a queue (FIFO) or even as a stack (LIFO).
HashSet - it stores its elements by a process called hashing. The order of elements in a HashSet is not guaranteed.
TreeSet - the best candidate when you need to store a large number of sorted elements with fast access.
ArrayDeque - it can also be used to implement a first-in, first-out (FIFO) queue or a last-in, first-out (LIFO) stack (see the short sketch after this list).
HashMap - it stores data in the form of key/value pairs, where both key and value are objects.
TreeMap - it stores key/value pairs sorted in ascending key order, and retrieval of an element from a TreeMap is quite fast.
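To make a couple of the entries above concrete, here is a small sketch (the values are arbitrary):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.TreeMap;

public class CollectionsSketch {
    public static void main(String[] args) {
        // ArrayDeque used as a FIFO queue (use push/pop instead for LIFO).
        Deque<String> queue = new ArrayDeque<>();
        queue.addLast("first");
        queue.addLast("second");
        System.out.println(queue.pollFirst()); // first

        // TreeMap keeps keys sorted.
        Map<Integer, String> sorted = new TreeMap<>();
        sorted.put(2, "two");
        sorted.put(1, "one");
        System.out.println(sorted); // {1=one, 2=two}
    }
}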
To learn more about Java collections, check out this article.

Which List<Object> implementation will be the fastest for a one-pass write, read, then destroy?

What is the fastest List implementation (in Java) in a scenario where the list will be created one element at a time and then, at a later point, be read one element at a time? The reads will be done with an iterator, and the list will then be destroyed.
I know that the Big O notation for get is O(1) and add is O(1) for an ArrayList, while LinkedList is O(n) for get and O(1) for add. Does the iterator behave with the same Big O notation?
It depends largely on whether you know the maximum size of each list up front.
If you do, use ArrayList; it will certainly be faster.
Otherwise, you'll probably have to profile. While access to the ArrayList is O(1), creating it is not as simple, because of dynamic resizing.
Another point to consider is that the space-time trade-off is not clear cut. Each Java object has quite a bit of overhead. While an ArrayList may waste some space on surplus slots, each slot is only 4 bytes (or 8 on a 64-bit JVM). Each element of a LinkedList is probably about 50 bytes (perhaps 100 in a 64-bit JVM). So you have to have quite a few wasted slots in an ArrayList before a LinkedList actually wins its presumed space advantage. Locality of reference is also a factor, and ArrayList is preferable there too.
In practice, I almost always use ArrayList.
First Thoughts:
Refactor your code to not need the list.
Simplify the data down to a scalar data type, then use: int[]
Or even just use an array of whatever object you have: Object[] - John Gardner
Initialize the list to the full size: new ArrayList(123);
Of course, as everyone else is mentioning, do performance testing, prove your new solution is an improvement.
Iterating through a linked list is O(1) per element.
The Big O runtime for each option is the same. Probably the ArrayList will be faster because of better memory locality, but you'd have to measure it to know for sure. Pick whatever makes the code clearest.
Note that iterating through an instance of LinkedList can be O(n^2) if done naively. Specifically:
List<Object> list = new LinkedList<Object>();
for (int i = 0; i < list.size(); i++) {
    list.get(i);
}
This is absolutely horrible in terms of efficiency, because the list must be traversed from one end up to position i on every call to get(i). If you do use LinkedList, be sure to use either an Iterator or Java 5's enhanced for loop:
for (Object o : list) {
    // ...
}
The above code is O(n), since the list is traversed statefully in-place.
To avoid all of the above hassle, just use ArrayList. It's not always the best choice (particularly for space efficiency), but it's usually a safe bet.
There is a new List implementation called GlueList which is faster than all classic List implementations.
Disclaimer: I am the author of this library
You almost certainly want an ArrayList. Both adding and reading are "amortized constant time" (i.e. O(1)), as specified in the documentation (note that this is true even if the list has to increase its size - it's designed that way; see http://java.sun.com/j2se/1.5.0/docs/api/java/util/ArrayList.html). If you know roughly the number of objects you will be storing, then even the ArrayList size increases are eliminated.
Adding to the end of a linked list is O(1), but the constant multiplier is larger than ArrayList (since you are usually creating a node object every time). Reading is virtually identical to the ArrayList if you are using an iterator.
It's a good rule to always use the simplest structure you can, unless there is a good reason not to. Here there is no such reason.
The exact quote from the documentation for ArrayList is: "The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in linear time (roughly speaking). The constant factor is low compared to that for the LinkedList implementation."
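Putting the one-pass write/read pattern together with the pre-sizing advice above, a sketch might look like this (the element count is arbitrary):

import java.util.ArrayList;
import java.util.List;

public class OnePass {
    public static void main(String[] args) {
        int expected = 10_000;          // arbitrary; pre-sizing avoids intermediate resizes

        List<Integer> list = new ArrayList<>(expected);
        for (int i = 0; i < expected; i++) {
            list.add(i);                // amortized O(1), and no resizes since we pre-sized
        }

        long sum = 0;
        for (int value : list) {        // iterator-based read, O(1) per element
            sum += value;
        }
        System.out.println(sum);

        list = null;                    // "destroy": drop the reference and let GC reclaim it
    }
}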
I suggest benchmarking it. It's one thing to read the API, but until you try it for yourself, it's academic.
It should be fairly easy to test; just make sure you do meaningful operations, or HotSpot will outsmart you and optimise it all to a no-op :)
I have actually begun to think that any use of data structures with non-deterministic behavior, such as ArrayList or HashMap, should be avoided, so I would say: only use ArrayList if you can bound its size; for any unbounded list, use LinkedList. That is because I mainly code systems with near-real-time requirements, though.
The main problem is that any memory allocation (which could happen randomly with any add operation) could also cause a garbage collection, and any garbage collection can cause you to miss a target. The larger the allocation, the more likely this is to occur, and this is also compounded if you are using CMS collector. CMS is non-compacting, so finding space for a new linked list node is generally going to be easier than finding space for a new 10,000 element array.
The more rigorous your approach to coding, the closer you can come to real time with a stock JVM. But choosing only data structures with deterministic behavior is one of the first steps you would have to take.
