Efficiency of ArrayList - Java

I am making a program in Java in which a ball bounces around on the screen. The user can add other balls, and they all bounce off of each other. My question lies in the storage of the added balls. At the moment, I am using an ArrayList to store them, and every time the space bar is pressed, a new ball object is created and added to the ArrayList. Is this the most efficient way of doing things? I don't specify the size of the ArrayList at the beginning, so is it inefficient to have to allocate new space in the array every time the user wants a new ball, even if the ball count gets up into the hundreds? Is there another class I could use to handle this in a more efficient manner?
Thanks!
EDIT:
Sorry, I should have been more clear. I iterate through the balls every 30 milliseconds, using nested for loops to see if they are intersecting with each other. I do access one ball the most often (the ball which the user can control with the arrow keys, another feature of the game), but the user can choose to switch control balls. Balls are never removed. So, I am performing some fairly complex calculations (I use my own vector class to move them off of each other every time there is a collision) on the balls very often.

Measure it and find out! In all seriousness, often the best way to get answers to these questions is to set up a benchmark and swap in different collection types.
I can tell you that it won't allocate new space every time you add a new item to the ArrayList. Extra space is allocated so that it has room to grow.
LinkedList is another List option. It is super cheap to add items, but random access (list.get(10)) is expensive. Sets could also be good if you don't need ordered access (though there are ordered sets, too), and you want a Map implementation if you're accessing them by some sort of key/id. It really all depends on how you're using the collection.
Update based on added details
It sounds like you are mostly doing sequential reads through the entire list. In that scenario, a LinkedList is probably your best choice. Though again, if you only expose the List interface to the rest of your code (or even a more general Collection), you can easily swap in different implementations and actually measure the difference.
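As a small illustration of that last point, here is a minimal sketch of what "only exposing the List interface" might look like; Ball is just a placeholder for the question's own ball class, and the field initializer is the only line that changes when you swap implementations to measure them.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: the rest of the program only sees List<Ball>, so swapping
// the implementation for benchmarking is a one-line change. Ball is a
// placeholder for the question's own ball class.
public class BallStore {
    public static class Ball { }

    // Swap in new java.util.LinkedList<>() here and re-run your measurements.
    private final List<Ball> balls = new ArrayList<>();

    public void addBall(Ball ball) {
        balls.add(ball);
    }

    public List<Ball> getBalls() {
        return balls;
    }
}
```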

ArrayList is a highly optimized and very efficient wrapper on top of a plain Java array. A small timing overhead comes from copying array elements, which happens whenever the allocated size is smaller than the required number of elements. When you grow the array to a few hundred items, the copying will happen fewer than ten times, so the price you pay for not knowing the size in advance is very small. You can reduce that overhead further by suggesting an initial size for your ArrayList.
Removing from the middle of the ArrayList does take linear time. If you plan to remove items and/or insert them in the middle of the list frequently, this may become an issue. Note, however, that the overhead is not going to be worse than that for a plain array.
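As a rough sketch of the initial-size suggestion above (500 is only an illustrative guess at the maximum ball count, not a number from the question):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: pre-sizing the ArrayList so the backing array is not copied until
// the ball count exceeds the initial guess. 500 is an assumed upper bound.
public class PresizedBalls {
    public static class Ball { }

    private final List<Ball> balls = new ArrayList<>(500);

    public void addBall(Ball ball) {
        balls.add(ball);   // no resize happens until the 501st ball
    }
}
```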
I iterate through the balls every 30 milliseconds, using nested for loops to see if they are intersecting with each other.
This does not have much to do with the collection in which the balls are stored. You could use a spatial index to improve the speed of finding intersections.
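A spatial index is beyond the scope of this answer, but as a hedged sketch of the idea: bucket the balls into grid cells roughly one ball diameter wide and only run the pairwise intersection test on balls that share a cell. The Ball class here is a stand-in for the question's own, and a full version would also check neighbouring cells for balls sitting on a cell boundary.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a uniform-grid broad phase. Balls are hashed into cells by
// position; only balls in the same cell become candidate collision pairs.
// Checking neighbouring cells for boundary-straddling balls is omitted
// here for brevity.
public class UniformGrid {
    public static class Ball {
        final double x, y;
        public Ball(double x, double y) { this.x = x; this.y = y; }
    }

    private final double cellSize;
    private final Map<Long, List<Ball>> cells = new HashMap<>();

    public UniformGrid(double cellSize) {
        this.cellSize = cellSize;
    }

    public void insert(Ball ball) {
        cells.computeIfAbsent(cellKey(ball.x, ball.y), k -> new ArrayList<>()).add(ball);
    }

    // Each returned group is small, so a nested pairwise test inside a group
    // is cheap compared to testing every ball against every other ball.
    public List<List<Ball>> candidateGroups() {
        return new ArrayList<>(cells.values());
    }

    private long cellKey(double x, double y) {
        long cx = (long) Math.floor(x / cellSize);
        long cy = (long) Math.floor(y / cellSize);
        return (cx << 32) ^ (cy & 0xffffffffL);
    }
}
```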

Regarding ArrayList in Java: removing from the end and adding one element at the end are amortized O(1). In other words, it is efficient in most cases (in rare cases a single add is expensive, because the backing array must be resized and copied).
But you should think more carefully about your design before choosing your data structure.
How many objects will typically be in your collection? If the number is small, you are free to choose whatever data structure you find easiest to work with; it will cost your code almost nothing in performance.
If you often need to look up one particular ball among all of your balls, another data structure such as a HashMap or HashSet would be better.
Or, if you often delete from the middle of your list, maybe a LinkedList will be the appropriate choice :)

I'd recommend working out the way in which you need to access the balls, and picking an appropriate interface (not implementation), e.g. if you're only accessing them sequentially, use a List; if you need to look up a ball by ID, think of a Map. The interface should match your requirements in terms of functionality, not in terms of speed/efficiency.
Then pick an implementation, e.g. HashMap or TreeMap, and write your code.
Afterwards, profile it - is the ball access code actually a bottleneck? If so, then try to optimise by switching to an alternative implementation that's more appropriate to your needs.
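If you want a rough first measurement before reaching for a real profiler, a naive timing loop like the sketch below (no JMH, minimal warm-up) can at least tell you whether the collection matters at all; treat the numbers as indications only.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Rough-and-ready comparison sketch: appends plus a full sequential read for
// two List implementations. For anything serious, use a real harness such as JMH.
public class ListTiming {
    public static void main(String[] args) {
        for (int run = 0; run < 3; run++) {          // repeat to let the JIT warm up
            time("ArrayList ", new ArrayList<>());
            time("LinkedList", new LinkedList<>());
        }
    }

    private static void time(String label, List<Integer> list) {
        long start = System.nanoTime();
        for (int i = 0; i < 200_000; i++) {
            list.add(i);
        }
        long sum = 0;
        for (int value : list) {                     // sequential read, like a game loop
            sum += value;
        }
        long elapsed = System.nanoTime() - start;
        System.out.println(label + ": " + elapsed / 1_000_000 + " ms (checksum " + sum + ")");
    }
}
```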

Related

Performance: List containing lots of objects VS lots of objects containing smaller lists

I am currently developing my own app (just another remake of the famous "Game of Life") and I want to add a "revert" button. My game basically consists of a two-dimensional array: Cell[][]...
So: my idea was to create an ArrayList to which that array is added on every update... (with a limit of 50 entries)
But then I thought that that would be a lot of objects in that list... So:
Would it be more performant to have an ArrayList in each cell in the 2-dimensional array, containing the history of itself, or to have a huge ArrayList containing entire game states as a history?
(I don't think you need any of my code to answer this question, but I will post it if you do)
Instead of storing cell state, why not keep the events that transform the cell state? An event would simply encapsulate the previous state info and the next state, or maybe even a coding enum that can be executed in reverse. This would already significantly reduce the data you need to store.
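As a sketch of that suggestion (using boolean[][] in place of the question's Cell[][], which isn't shown): record only the cells that flipped in a generation, and undo by flipping them back.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the event idea: one GenerationDelta per update, holding only the
// coordinates of cells whose state changed. Undoing a generation means
// re-flipping those cells. boolean[][] stands in for the question's Cell[][].
public class GenerationDelta {
    private final List<int[]> flippedCells = new ArrayList<>();

    public void recordFlip(int row, int col) {
        flippedCells.add(new int[] { row, col });
    }

    public void undo(boolean[][] grid) {
        for (int[] cell : flippedCells) {
            grid[cell[0]][cell[1]] = !grid[cell[0]][cell[1]];
        }
    }
}
```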
Also, it makes a difference whether you're working natively or on a garbage-collecting VM platform such as Java or C#. The former tends to be faster and is more forgiving about addressing large amounts of storage, while the latter can start thrashing wildly if you store lots of objects.
But the real proof of the pudding is in the eating / tinstatfc: there is no such thing as the fastest code (c) Michael Abrash -> i.e. benchmark your code and find out! (and come tell us)
In my opinion, an ArrayList per cell would work fine; however, if you only want to store 50 states, you'll have to create a custom ADT or utilize Guava's Collections API.
I recommend using a Stack rather than an ArrayList, as reverting every cell would be a simple operation. In Java, the correct ADT to use is an ArrayDeque, but you'll have to handle capacity yourself.
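A sketch of handling that capacity yourself, assuming whole-grid snapshots (boolean[][] again standing in for the question's Cell[][]) and the 50-entry limit from the question:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: an ArrayDeque used as a bounded undo stack. The newest snapshot goes
// on the front; once 50 snapshots are stored, the oldest is dropped from the back.
public class BoundedHistory {
    private static final int MAX_STATES = 50;
    private final Deque<boolean[][]> history = new ArrayDeque<>(MAX_STATES);

    public void push(boolean[][] state) {
        if (history.size() == MAX_STATES) {
            history.removeLast();             // evict the oldest snapshot
        }
        history.addFirst(deepCopy(state));    // store a copy, not the live grid
    }

    public boolean[][] revert() {
        return history.pollFirst();           // null when there is nothing to revert
    }

    private static boolean[][] deepCopy(boolean[][] grid) {
        boolean[][] copy = new boolean[grid.length][];
        for (int i = 0; i < grid.length; i++) {
            copy[i] = grid[i].clone();
        }
        return copy;
    }
}
```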

Why Java ArrayLists do not shrink automatically

A long time ago I watched a video lecture from the Princeton Coursera MOOC Introduction to Algorithms, which can be found here. It explains the cost of resizing an ArrayList-like structure while adding or removing elements. It turns out that if we add resizing to our data structure, an individual add or remove that triggers a resize costs O(n), but the cost per operation remains amortized constant.
I have been using Java ArrayLists for a couple of years. I'd always been sure that they grow and shrink automatically. Only recently, to my great surprise, was I proven wrong in this post. Java ArrayLists do not shrink automatically (even though, of course, they do grow).
Here are my questions:
In my opinion, providing shrinking in ArrayList would not do any harm, since the cost of add and remove is already amortized. Why did the Java creators not include this feature in the design?
I know that other data structures like HashMap also do not shrink automatically. Is there any other data structure in Java which is built on top of arrays and supports automatic shrinking?
What are the tendencies in other languages? What does automatic shrinking look like for lists, dictionaries, maps and sets in Python, C#, etc.? If they go in the opposite direction to what Java does, then my question is: why?
The comments already cover most of what you are asking. Here are some thoughts on your questions:
When creating a structure like the ArrayList in Java, the developers make certain decisions regarding runtime and performance. They obviously decided to exclude shrinking from the "normal" operations to avoid the additional runtime cost it would incur.
The question is why you would want to shrink automatically. The ArrayList does not grow that much (the factor is about 1.5; newCapacity = oldCapacity + (oldCapacity >> 1), to be exact). Maybe you also insert in the middle and not just append at the end. Then a LinkedList (which is not based on an array -> no shrinking needed) might be better. It really depends on your use case. If you think you really need everything an ArrayList does, but it has to shrink when removing elements (I doubt you really need this), just extend ArrayList and override the methods. But be careful! If you shrink at every removal, you are back at O(n).
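If you really did want that behaviour, a sketch of the "extend ArrayList and override the methods" idea might look like the following; the half-of-peak threshold is an arbitrary guard, chosen so the list does not resize on every single removal, and only the two most common removal paths are covered.

```java
import java.util.ArrayList;

// Hypothetical sketch: an ArrayList that trims its backing array once the list
// has shrunk well below its previous high-water mark. Removals through
// iterators, clear(), removeIf(), etc. are not intercepted in this sketch.
public class ShrinkingArrayList<E> extends ArrayList<E> {
    private int highWaterMark = 0;

    @Override
    public boolean add(E e) {
        boolean changed = super.add(e);
        highWaterMark = Math.max(highWaterMark, size());
        return changed;
    }

    @Override
    public E remove(int index) {
        E removed = super.remove(index);
        maybeTrim();
        return removed;
    }

    @Override
    public boolean remove(Object o) {
        boolean changed = super.remove(o);
        if (changed) {
            maybeTrim();
        }
        return changed;
    }

    // Only trim when the list is less than half its previous peak size, so
    // alternating add/remove around a boundary does not resize constantly.
    private void maybeTrim() {
        if (size() < highWaterMark / 2) {
            trimToSize();            // releases the unused tail of the backing array
            highWaterMark = size();
        }
    }
}
```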
The C# List and the C++ vector behave the same way with respect to shrinking on removal of elements. But the growth factors vary; even some Java implementations use different factors.
Another issue with automatic shrinking is that you could get into really horrible 'pathological' situations where every insert and delete on the list causes the backing array to grow or shrink.
For example, if the backing array's initial capacity is 10, such that the array would grow upon the insertion of the 11th element (capacity would grow to 15), a natural implementation would be to shrink the backing array once the list size dropped below 11. If you had a list whose length kept changing between 10 and 11, you'd be constantly changing the backing array. Not only would that add a runtime overhead, but you could start putting lots of pressure on the garbage collector if every operation resulted in either 10 or 15 objects becoming garbage.
Although an ArrayList with shrinking would keep the same amortized time complexity, each operation involves more work.
By shrinking you only save some memory at the cost of extra computation, which is rarely a wise trade: memory keeps getting cheaper (the usual appeal is to Moore's law), so in an algorithm time is generally more valuable than space.

ArrayList.remove() induces lags

I have a problem that's been bothering me for some time. I have a program in which I generate a certain number of objects in a physical world, storing them in an ArrayList. However, this implementation causes lags when removing objects from the ArrayList.
A static array implementation does not lead to lags, but is less practical as I cannot use "add" and "remove".
I'm assuming the lags with ArrayLists are due to memory freeing and re-allocation. As my ArrayList has a fixed maximal size, is it possible to preallocate a certain amount of memory for it, to avoid these issues? Or is there another solution for this?
Thanks a lot in advance for any help!
I normally would not just repost someone else's answer, but this one seems to fit very well.
The problem here is that an ArrayList is internally implemented, as the name states, with an array. This means that elements of the collection can't be freely inserted or removed without shifting the elements after the index you are working at.
So if your collection has a lot of elements and you, for example, remove the 5th, then all the elements from the 6th to the end of the list must be shifted to the left by one position. This can indeed be expensive and leads to O(n) complexity.
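If the order of the objects in the list does not matter (often true for a bag of physics objects), a common workaround, not mentioned in the answer above, is to overwrite the removed slot with the last element and then drop the tail, which keeps each removal at O(1). A small sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "swap with last, then remove the tail": removing the last element
// of an ArrayList shifts nothing, so the whole removal is O(1). Only valid
// when the caller does not rely on element order.
public final class SwapRemove {
    private SwapRemove() { }

    public static <T> void swapRemove(List<T> list, int index) {
        int last = list.size() - 1;
        list.set(index, list.get(last));   // overwrite the removed slot
        list.remove(last);                 // removing the last element shifts nothing
    }

    public static void main(String[] args) {
        List<String> items = new ArrayList<>(List.of("a", "b", "c", "d"));
        swapRemove(items, 1);
        System.out.println(items);         // [a, d, c] -- order changed, size reduced
    }
}
```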
To avoid these issues, you should choose an appropriate collection according to the most common operations you are going to perform on it. A LinkedList can be good if you need to iterate, remove (removal still requires finding the element, so it only helps if you are already enumerating the elements) or insert, but whenever you want to access a specific index you are going to run into trouble.
You could also look at HashSet or TreeSet; they might be suitable for your problem.
In these circumstances, knowing how the most common data structures work, and what each is good or bad at, is always useful for making appropriate choices.

Speeding up a linked list?

I'm a student and fairly new to Java. I was looking over the different speeds achieved by two collections in Java, LinkedList and ArrayList. I know that an ArrayList is much faster at looking up values and placing values into its indexes. My question is:
how can one make a linked list faster, if at all possible?
Thanks for any help.
zmahir
When talking about speed, perhaps you mean complexity. Setting and retrieving elements by index for an ArrayList (and arrays) are O(1), while for a LinkedList they are O(n). And this cannot be changed - it is 'by definition'.
O(n) means that in order to insert an object at a given position, or retrieve it, you must traverse, in the worst case, all (n) the items in the list. Hence n operations. For ArrayList this is only one operation.
You probably can't. You don't know the size (well, ok, you can), nor the location of each element. To find element 100 in a linked list, you need to start with item 1, follow its link to item 2, and so on until you reach item 100. This makes inserting into such a list a tedious job.
There are many alternatives depending on your exact goals. You can use b-trees or similar methods to split the large linked list into smaller ones. Or use hashlists if you want to quickly find items. Or use simple arrays. But if you want a list that performs like an ArrayList, why not use an ArrayList?
You can split off regions which are linked to the main linked list; this gives you entry points directly inside the list so you don't have to walk up to them. See the subList method here: http://download.oracle.com/javase/1.4.2/docs/api/java/util/AbstractList.html. This is useful if you have a number of 'sentences' made out of words, say. You can use a separate linked list to iterate over the sentences, which are sublists of the main linked list.
You can also use a ListIterator when adding, removing, or accessing elements. This helps greatly with increasing the speed of sequential access. See the listIterator method for this, and the class: http://download.oracle.com/javase/1.4.2/docs/api/java/util/ListIterator.html.
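As a hedged sketch of that suggestion: removing elements while walking a LinkedList through its ListIterator costs O(1) per removal, because the iterator already holds a reference to the current node.

```java
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

// Sketch: removal through the iterator avoids a second traversal to find the
// node again, so a full filtering pass over a LinkedList stays O(n) overall.
public class IteratorRemoval {
    public static void dropNegatives(List<Integer> values) {
        ListIterator<Integer> it = values.listIterator();
        while (it.hasNext()) {
            if (it.next() < 0) {
                it.remove();        // no extra traversal to locate the node
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> values = new LinkedList<>(List.of(3, -1, 4, -1, 5));
        dropNegatives(values);
        System.out.println(values); // [3, 4, 5]
    }
}
```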
Speed of a linked list could be improved by using skip lists: http://igoro.com/archive/skip-lists-are-fascinating/
A linked list uses pointers to walk through its items, so for example if you ask for the 5th item, the runtime will start from the first item and walk through each pointer until it reaches the 5th item.
There is really not much you can do about it. A linked list may not be a good choice if you need fast access to items. There are some optimizations, such as a circular linked list or a doubly linked list where you can walk back and forth through the list, but this really depends on the business logic and the application requirements.
My advice is to avoid linked lists if they do not match your needs; changing to a different data structure might be the best approach.
As a general rule, data structures are designed to do certain things well. LinkedLists are designed to be faster than ArrayLists at inserting elements and removing elements and about the same as ArrayLists at iterating across the list in order. When you change the way a LinkedList works, you make it no longer a true LinkedList, so there's not really any way to modify them to be faster at something and still be a LinkedList.
You'll need to examine the way you're using this particular collection and decide whether a LinkedList is really the best data structure for your purposes. If you share with us how you're using it, and why you need it to be faster, then we can advise you on which data structure you ought to consider using.
Lots of people smarter than you or I have looked at the implementation of the Java collection classes. If there were an optimization to be made, they would have found it and already made it.
Since the collection classes are pretty much as optimized as they can be, our primary task should be to choose the correct one.
When choosing your collection type, don't forget about things like HashSet. If order doesn't matter, and you don't need to put duplicates in the collection, then HashSet may be appropriate.
I'm a student and fairly new to Java. ... how can one make a linked list faster, if at all possible?
The standard Java collection types (indeed, all data structures implemented in any language!) represent compromises on various "measures", such as:
The amount of memory needed to represent the data structure.
The time taken to perform various operations; e.g. for a "list" the operations of interest are insertion, removal, indexing, contains, iteration and so on.
How easy or hard it is to integrate / reuse the collection type; see below.
So for instance:
ArrayList offers lower memory overheads, fast indexing (O(1)), but slow contains, random insertion and removal (O(N)).
LinkedList has higher memory overheads, slow indexing and contains (O(N)), but faster removal (O(1)) under certain circumstances.
The various performance measures are typically determined by the maths of the various data structures. For example, if you have a chain of nodes, the only way to get the ith node is to step through them from the beginning. This involves following i pointers.
Sometimes you can modify the data structures to improve one aspect of the performance. But this typically comes at the cost of some other aspect of the performance. (For example, you could add a separate index to make indexing of a linked list faster. But the cost of maintaining the index on insertion / deletion would mean that you'd probably be better off using an ArrayList.)
In some cases the integration / reuse requirements have significant impact on performance.
For example, it is theoretically possible to optimize a linked list's space usage by adding a next field to the list element type, combining the element and node objects and saving 16 or so bytes per list entry. However, this would make the list type less general (the member/element class would need to implement a specific interface), and has the restriction that an element can belong to at most one list at any time. These restrictions are so limiting that this approach is rarely used in Java.
For a second example, consider the problem of inserting at a given position in a linked list. For the LinkedList class, this is normally an O(N) operation, because you have to step through the list to find the position. In theory, if an application could find and remember a position, it should be able to perform the insertion at that position in O(1). Unfortunately, the List API provides no way to "remember" a position.
While neither of these examples is a fundamental roadblock to a developer "doing his own thing", they illustrate that using general data structure APIs and general implementations of those APIs has performance implications, and therefore represents a trade-off between performance and ease-of-use.
I'm a bit surprised by the answers here. There are big differences between the theoretical performance of LinkedLists and ArrayLists and the actual performance of the Java implementations.
What makes the Java LinkedList slower than a theoretical linked list is that it does a lot more than just the raw operations. For example, it checks for concurrent modifications and performs other safety checks.
If you know your use case, you can write your own simple implementation of a linked list and it will be much faster.
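As a hedged illustration of "write your own": a stripped-down, queue-like linked list with no iterator support, no modification counting and no List interface might look like this.

```java
// Minimal sketch of a hand-rolled singly linked list supporting only the two
// operations shown. Everything java.util.LinkedList adds on top (iterators,
// fail-fast checks, the List/Deque interfaces) is deliberately omitted.
public class SimpleLinkedList<E> {
    private static final class Node<E> {
        E value;
        Node<E> next;
        Node(E value) { this.value = value; }
    }

    private Node<E> head;
    private Node<E> tail;
    private int size;

    public void addLast(E value) {
        Node<E> node = new Node<>(value);
        if (tail == null) {
            head = node;
        } else {
            tail.next = node;
        }
        tail = node;
        size++;
    }

    public E removeFirst() {
        if (head == null) {
            throw new java.util.NoSuchElementException();
        }
        E value = head.value;
        head = head.next;
        if (head == null) {
            tail = null;
        }
        size--;
        return value;
    }

    public int size() { return size; }
}
```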

Which Data Structure? LinkedList or Any Other in Java?

I have specific requirements for the data structure to be used in my program in Java. It should be able to hold large amounts of data (not fixed); my main operations would be to add at the end, and delete/read from the beginning (LinkedLists look good so far). But occasionally I need to delete from the middle as well, and this is where LinkedLists are so painful. Can anyone suggest a way around this? Or any optimizations through which I can make deletion less painful in LinkedLists?
Thanks for the help!
A LinkedHashMap may suit your purpose. You'd use an iterator to pull stuff from the front, and look up the entry by key when you need to access the middle of the list.
LinkedList falls down on random accesses. Deletion, without the random access look up, is constant time and so really not too bad for long lists.
ArrayList is generally fast. Inserts and removes from the middle are faster than you might expect because block memory moves are surprisingly fast. Removals and insertions near the start cause all the following data to be moved down or up.
ArrayDeque is like ArrayList only it uses a circular buffer and has a strange interface.
Usual advice: try it.
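For the add-at-the-end / read-and-delete-from-the-beginning part of the question, a minimal ArrayDeque sketch might look like the following; the occasional middle deletion still goes through an O(n) remove(Object).

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the queue-like core of the question: append at the tail, consume
// from the head. ArrayDeque does both in amortized O(1); removing an arbitrary
// element from the middle (remove(Object)) is still O(n).
public class TaskQueue {
    private final Deque<String> tasks = new ArrayDeque<>();

    public void enqueue(String task) {
        tasks.addLast(task);       // add at the end
    }

    public String next() {
        return tasks.pollFirst();  // read/delete from the beginning, null if empty
    }

    public boolean cancel(String task) {
        return tasks.remove(task); // occasional middle deletion, O(n)
    }
}
```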
You can try using a linked list with extra pointers to every 10,000th element, so that you can reduce the time needed to find the middle element you wish to delete.
Here are some different variations of linked lists:
http://experimentgarden.blogspot.com/2009/08/performance-analysis-of-thirty-eight.html
LinkedHashMap is probably the way to go. Great for iteration, deque operations, and seeking into the middle. Costs extra in memory, though, as you'll need to manage a set of keys on top of your basic collection. Plus I think it'll leave 'gaps' in the spaces you've deleted, leading to a non-consecutive set of keys (shouldn't affect iteration, though).
Edit: Aha! I know what you need: A LinkedMultiSet! All the benefit of a LinkedHashMap, but without the superfluous key set. It's only a little more complex to use, though.
First you need to consider whether you will delete from the center of the list often compared to the length of the list. If your list has N items but you delete much less often than 1/N, don't worry about it. Use LinkedList or ArrayDeque as you prefer. (If your lists are occasionally huge and then shrink, but are mostly small, LinkedList is better as it's easy to recover the memory; otherwise, ArrayDeque doesn't need extra objects, so it's a bit faster and more compact--except the underlying array never shrinks.)
If, on the other hand, you delete quite a bit more often than 1/N, then you should consider a LinkedHashSet, which maintains a linked list queue on top of a hash set--but it is a set, so keep in mind that you can't store duplicate elements. This has the overhead of LinkedList and ArrayDeque put together, but if you're doing central deletes often, it'll likely be worth it.
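A sketch of that LinkedHashSet suggestion, assuming the elements are unique: insertion order is preserved, the "front" is reachable through the iterator, and deleting from the middle is a plain O(1) remove.

```java
import java.util.Iterator;
import java.util.LinkedHashSet;

// Sketch: LinkedHashSet keeps insertion order, so the oldest element is the
// first one returned by the iterator, while remove(element) from anywhere in
// the sequence is O(1). Duplicate elements are not allowed.
public class LinkedSetQueue<E> {
    private final LinkedHashSet<E> elements = new LinkedHashSet<>();

    public void addLast(E element) {
        elements.add(element);
    }

    public E removeFirst() {
        Iterator<E> it = elements.iterator();
        if (!it.hasNext()) {
            return null;
        }
        E first = it.next();
        it.remove();                      // unlinks the head of the internal list
        return first;
    }

    public boolean removeFromMiddle(E element) {
        return elements.remove(element);  // O(1) hash lookup plus unlink
    }
}
```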
The optimal structure, however--if you really need every last ounce of speed and are willing to spend the coding time to get it--would be a "resizable" array (i.e. reallocated when it was too small) with a circular buffer where you could blank out elements from the middle by setting them to null. (You could also reallocate the buffer when too much was empty if you had a perverse use case then.) I don't advise coding this unless you either really enjoy coding high-performance data structures or have good evidence that this is one of the key bottlenecks in your code and thus you really need it.
