Very large Java ArrayList has slow traversal time - java

Solution: My ArrayList was filled with duplicates. I modified my code to filter these out, which reduced running times to about 1 second.
I am working on an algorithms project that requires me to look at large amounts of data.
My program has a potentially very large ArrayList (A), every element of which is traversed. For each of these elements in (A), several other, calculated elements are added to another ArrayList (B). (B) will be much, much larger than (A).
Once my program has run through seven of these ArrayLists, the running time goes up to approximately 5 seconds. I'm trying to get that down to < 1 second, if possible.
I am open to different ways of traversing the ArrayList, as well as using a completely different data structure. I don't care about the order of the values inside the lists, as long as I can go through all the values very fast. I have tried a linked list and it was significantly slower.
Here is a snippet of code, to give you a better understanding. The code tries to find all single-digit permutations of a prime number.
public static Integer primeLoop(ArrayList current, int endVal, int size)
{
    Integer compareVal = 0;
    Integer currentVal = 0;
    Integer tempVal = 0;
    int currentSize = current.size() - 1;
    ArrayList next = new ArrayList();
    for (int k = 0; k <= currentSize; k++)
    {
        currentVal = Integer.parseInt(current.get(k).toString());
        for (int i = 1; i <= 5; i++)
        {
            for (int j = 0; j <= 9; j++)
            {
                compareVal = orderPrime(currentVal, endVal, i, j);
                //System.out.println(compareVal);
                if (!compareVal.equals(tempVal) && !currentVal.equals(compareVal))
                {
                    tempVal = compareVal;
                    next.add(compareVal);
                    //System.out.println("Inserted: " + compareVal + " with parent: " + currentVal);
                    if (compareVal.equals(endVal))
                    {
                        System.out.println("Separation: " + size);
                        return -1;
                    }
                }
            }
        }
    }
    size++;
    //System.out.println(next);
    primeLoop(next, endVal, size);
    return -1;
}
*Edit: Removed unnecessary code from the snippet above. Created a currentSize variable that stops the program from having to call size() on (current) every iteration. Still no difference. Here is an idea of how the ArrayList grows:
2,
29,
249,
2293,
20727,
190819,

When something is slow, the typical advice is to profile it. This is generally wise, as it's often difficult to determine the cause of slowness, even for performance experts. Sometimes it's possible to pick out code that's likely to be a performance problem, but this is hit-or-miss. There are some likely culprits in this code, but it's hard to say for sure, since we don't have the code for the orderPrime() method.
That said, there's one thing that caught my eye. This line:
currentVal = Integer.parseInt(current.get(k).toString());
This gets an element from current, turns it into a string, parses it back to an int, and then boxes it into an Integer. Conversion to and from String is pretty expensive, and it allocates memory, so it puts pressure on garbage collection. Boxing primitive int values to Integer objects also allocates memory, contributing to GC pressure.
It's hard to say what the fix is, since you're using the raw type ArrayList for current. I surmise it might be ArrayList<Integer>, and if so, you could just replace this line with
currentVal = (Integer)current.get(k);
You should be using generics in order to avoid the cast. (But that doesn't affect performance, just the readability and type-safety of the code.)
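For illustration, assuming current really does hold Integer values, a typed declaration removes both the cast and the String round-trip (a minimal sketch; the variable names are placeholders):

// Sketch: with a typed list, the loop body needs no cast, no parse, no toString().
ArrayList<Integer> current = new ArrayList<>();
for (int k = 0; k < current.size(); k++) {
    Integer currentVal = current.get(k);   // direct access, still boxed
}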
If current doesn't contain Integer values, then it should. Whatever it contains should be converted to Integer beforehand, instead of putting conversions inside a loop.
After fixing this, you are still left with boxing/unboxing overhead. If performance is still a problem, you'll have to switch from ArrayList<Integer> to int[] because Java collections cannot contain primitives. This is inconvenient, since you'll have to implement your own list-like structure that simulates a variable-length array of int (or find a third party library that does this).
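As a rough illustration of such a structure, here is a minimal growable primitive-int list. The class name and growth policy are my own inventions, not a standard API; libraries such as Trove (TIntArrayList) and fastutil (IntArrayList) provide production-quality versions of the same idea.

import java.util.Arrays;

// Sketch of a variable-length int array: no Integer boxing, amortized O(1) add.
class IntList {
    private int[] data = new int[16];
    private int size = 0;

    void add(int value) {
        if (size == data.length) {
            data = Arrays.copyOf(data, data.length * 2); // double capacity when full
        }
        data[size++] = value;
    }

    int get(int index) {
        if (index >= size) throw new IndexOutOfBoundsException(String.valueOf(index));
        return data[index];
    }

    int size() {
        return size;
    }
}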
But even all of the above might not be enough to make your program run fast enough. I don't know what your algorithm is doing, but it looks like it's doing linear searching. There are a variety of ways to speed up searching. But another commenter suggested binary search, and you said it wasn't allowed, so it's not clear what can be done here.

Here is an idea of how the ArrayList grows: 2, 29, 249, 2293, 20727, 190819
Your next list grows too large, so it must contain duplicates:
190_819 entries for 100_000 numbers?
According to primes.utm.edu/howmany.html there are only 9,592 primes up to 100_000.
Getting rid of the duplicates will certainly improve your response times.
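A minimal sketch of one way to do that, assuming the lists hold Integer values: track everything generated so far in a HashSet and keep only first occurrences ('candidates' is a stand-in name for the values produced by orderPrime):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class Dedup {
    // Returns the candidates with duplicates removed, preserving first-seen order.
    static List<Integer> dedup(List<Integer> candidates) {
        Set<Integer> seen = new HashSet<>();
        List<Integer> next = new ArrayList<>();
        for (Integer c : candidates) {
            if (seen.add(c)) {   // Set.add returns false if the value was already present
                next.add(c);
            }
        }
        return next;
    }
}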

Why do you have this line?
current.iterator();
You don't use the iterator at all; you don't even assign it to a variable. It's just wasting time.
for(int k = 0; k <= current.size()-1; k++)
Instead of calling size() on every iteration, compute it once:
int curSize = current.size() - 1;
and use that in the loop. It can save some time.

Related

Is checking if a value is 0 faster than just overwriting it?

Let's say I have an array containing some numbers between 1 and 9, and another array of 9 elements containing every number from 1 to 9 (1, 2, 3, ..., 9). I want to read every number from the first array, and when I read the number X, set to 0 the X value in the second array (which is at second_array[X-1]). Is it faster for the CPU to do this:
// size is the size of the first array
int temp = 0;
for (int i = 0; i < size; i++)
{
    temp = first_array[i];
    if (second_array[temp-1] != 0) second_array[temp-1] = 0;
}
Or the same without the control:
int temp = 0;
for (int i = 0; i < size; i++)
{
    temp = first_array[i];
    second_array[temp-1] = 0;
}
To be clear: does it take more time to make a control on the value, or to overwrite it? I need my code to execute as fast as possible, so every nanosecond saved would be useful. Thanks in advance.
The second version is more performant, as it does not require the check, which happens on every iteration of the loop and yields true in only one case.
Furthermore you can improve even more if you write:
for (int i = 0; i < size; i++)
{
    temp = first_array[i];
}
second_array[size-1] = 0;
Writing data to memory is always fast compared to an if condition. The reason: to compare two values you need to read them first and then perform the comparison, whereas in the other case you are simply writing data to memory without caring what was there.
If the value is checked repeatedly, it may be cached in the CPU cache and accessed much faster from there than from main RAM. In contrast, writing the value requires it to make its way all the way to main memory, which may take more time. Writing may be deferred, but the dirty cache row must be flushed sooner or later.
Hence the version that only repeatedly reads may be faster than the version that repeatedly writes.
You can just go with:
for (int i = 0; i < size; i++)
{
    second_array[first_array[i]-1] = 0;
}
Even if the check is faster (I doubt it), it would only be faster if a large part of second_array started as 0, since any non-zero location requires two operations (a read and a write).
Anyway, the best way to answer this kind of question is to run it and benchmark! A quick benchmark on my side shows the check is 60% slower if second_array contains no zeros, but 50% faster when second_array is all zeros. That would lead me to conclude the check is faster only if more than roughly 55% of second_array starts zeroed out.
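A crude sketch of such a benchmark (sizes and the seed are arbitrary choices; naive timing loops like this are easily distorted by JIT warm-up, so a harness like JMH is preferable for serious measurements):

import java.util.Arrays;
import java.util.Random;

public class ZeroBench {
    public static void main(String[] args) {
        int size = 10_000_000;                 // arbitrary input size
        int[] first = new int[size];
        int[] second = new int[9];
        Random rnd = new Random(42);
        for (int i = 0; i < size; i++) first[i] = 1 + rnd.nextInt(9);

        Arrays.fill(second, 1);                // start with no zeros
        long t0 = System.nanoTime();
        for (int i = 0; i < size; i++) {
            int v = first[i];
            if (second[v - 1] != 0) second[v - 1] = 0;   // checked write
        }
        long checked = System.nanoTime() - t0;

        Arrays.fill(second, 1);                // reset before the second run
        t0 = System.nanoTime();
        for (int i = 0; i < size; i++) {
            second[first[i] - 1] = 0;          // unconditional write
        }
        long unchecked = System.nanoTime() - t0;

        System.out.println("checked: " + checked / 1_000_000 + " ms, "
                + "unchecked: " + unchecked / 1_000_000 + " ms");
    }
}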

Explaining how insertion sort works

I need to understand the insertion sort algorithm.
Could someone explain in layman's terms how this code breaks down?
public void insertionSort() {
    int in, out;
    for (out = 1; out < nElems; out++) {
        long temp = a[out];              // save the value being inserted
        in = out;
        while (in > 0 && a[in-1] >= temp) {
            a[in] = a[in-1];             // shift item right
            --in;
        }
        a[in] = temp;                    // insert the saved value
    }                                    // end for
}
In particular I would like to know:
1. In other sorts we use a for loop; why does this sort use a while loop?
2. a[in-1] >= temp - is this comparison used because we have removed one element?
What I do not understand is: in other sorts we use a for loop, and in this one we are using a while loop?
Hmm, no. Some sort implementations don't even have a loop. In this particular case they use a while loop because the number of iterations is not known in advance, but in Java a for loop can do exactly the same things as a while loop, so it's just a matter of readability.
a[in-1] >= temp, are they using this because we have removed one element?
Because they perform a shift, which is always done by saving one of the shifted elements in a temporary variable. This is probably the most classic programming trick:
a = 5, b = 6
tmp = a
a = b
b = tmp
// now a is 6 and b is 5
while loops and for loops are (essentially) equivalent, because you can always do:
for(int i = 0; i < n; i++)
as
int i = 0;
while (i < n) {
    ...
    i++;
}
(The scope of i is a little different: in Java, an i declared in a for loop is only visible inside the loop.)
No, it is not because an element was removed. The algorithm is trying to move the current value backwards in the array.
I think the Wikipedia article and this picture inside it is very helpful in understanding the concept.
Well, depending on how a while loop is structured, it can serve the same purpose as a for loop. Additionally, many sorts utilize recursion. My advice to you:
1) Look up insertion sort. It's basic enough, but study a visual representation and analyze what is being done. That'll help you understand what is going on in the code.
2) Think about the running time (which is O(n^2) in the average case). Why is it O(n^2)? What does this mean?
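For reference, here is a self-contained version of the same algorithm that can be run directly; the sample data is arbitrary:

public class InsertionSortDemo {
    public static void main(String[] args) {
        long[] a = {5, 2, 9, 1, 6};            // arbitrary sample data
        for (int out = 1; out < a.length; out++) {
            long temp = a[out];                // the value being inserted
            int in = out;
            while (in > 0 && a[in - 1] >= temp) {
                a[in] = a[in - 1];             // shift larger items right
                --in;
            }
            a[in] = temp;                      // drop the value into its slot
        }
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 5, 6, 9]
    }
}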

Adding 2 list element by element

I have two Lists and want to add them element by element, like this: result[i] = list1[i] + list2[i].
Is there an easier, and probably better performing, way than using a for loop to iterate over the first list and add the elements to the result list?
I appreciate your answers!
Depends on what kind of list and what kind of for loop.
Iterating over the elements (rather than indices) would almost certainly be plenty fast enough.
On the other hand, iterating over indices and repeatedly getting the element by index could work rather poorly for certain types of lists (e.g. a linked list).
My understanding is that you have List1 and List2 and want to find the best-performing way to compute result[index] = List1[index] + List2[index].
My main suggestion is that before you start optimising for performance, you measure whether you need to optimise at all. You can iterate through the lists as you said, something like:
for (int i = 0; i < listSize; i++)
{
    result[i] = List1[i] + List2[i];
}
In most cases this is fine. See NPE's answer for a description of where this might be expensive, i.e. a linked list. Also see this answer and note that each step of the for loop does a get - on an array this is one step, but on a linked list it takes as many steps as are needed to reach the element.
Assuming a standard array, this is O(n) and (depending on array size) will be done so quickly that it will hardly result in a blip on your performance profiling.
As a twist, since the operations are completely independent (result[0] = List1[0] + List2[0] is independent of result[1] = List1[1] + List2[1], and so on), you can run them in parallel. E.g. you could run the first half of the calculations (indices below List.size() / 2) on one thread and the other half on another thread, and expect the elapsed time to roughly halve (assuming at least 2 free CPUs); a sketch of this split follows below. The best number of threads to use depends on the size of your data, the number of CPUs and other operations happening at the same time, and is normally best decided by testing and modelling under different conditions. All this adds complexity to your program, so my main recommendation is to start simple, then measure, and only then decide whether you need to optimise.
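A minimal sketch of that two-thread split, assuming the inputs are plain equal-length int arrays (the names and the join-based coordination are illustrative choices):

// Sketch: element-wise sum of two equal-length int arrays on two threads.
public class ParallelAdd {
    static int[] parallelAdd(int[] a, int[] b) throws InterruptedException {
        int n = a.length;
        int[] result = new int[n];
        int mid = n / 2;
        Thread lower = new Thread(() -> {
            for (int i = 0; i < mid; i++) result[i] = a[i] + b[i];
        });
        Thread upper = new Thread(() -> {
            for (int i = mid; i < n; i++) result[i] = a[i] + b[i];
        });
        lower.start(); upper.start();
        lower.join();  upper.join();   // wait for both halves to finish
        return result;
    }
}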
Looping is inevitable unless you have a matrix API (e.g. OpenGL). You could implement a List<Integer> which is backed by the original lists:
import java.util.AbstractList;
import java.util.List;

public class CalcList extends AbstractList<Integer> {
    private final List<Integer> l1, l2;
    public CalcList(List<Integer> l1, List<Integer> l2) { this.l1 = l1; this.l2 = l2; }
    @Override
    public Integer get(int index) {
        return l1.get(index) + l2.get(index); // computed lazily on access
    }
    @Override
    public int size() { return l1.size(); }  // AbstractList supplies the rest of List
}
This avoids copy operations and defers the calculations to the moment the values are consumed:
CalcList results1 = new CalcList(list1, list2);
CalcList results2 = new CalcList(results1, list3);
// no calculation or memory allocated until now
for (int result : results2) {
    // the calculation happens here, still without unnecessary memory
}
This could give an advantage if the compiler is able to translate it into:
for (int i = 0; i < list1.size(); i++) {
    int result = list1[i] + list2[i] + list3[i] + …;
}
But I doubt that. You have to run a benchmark for your specific use case to find out whether this implementation has an advantage.
Java doesn't come with a map-style function, so the way to do this kind of operation is with a for loop.
Even if you use some other construct, the looping will be done anyway. An alternative is using the GPU for computations, but this is not a default Java feature.
Also using arrays should be faster than operating with linked lists.
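For what it's worth, Java 8 and later do offer a map-style formulation via streams, though a loop still runs underneath, as the answer above notes. A minimal sketch, assuming two equal-length List<Integer> inputs:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ZipAdd {
    public static void main(String[] args) {
        List<Integer> list1 = Arrays.asList(1, 2, 3);
        List<Integer> list2 = Arrays.asList(10, 20, 30);
        List<Integer> result = IntStream.range(0, list1.size())
                .mapToObj(i -> list1.get(i) + list2.get(i))  // element-wise sum
                .collect(Collectors.toList());
        System.out.println(result);  // [11, 22, 33]
    }
}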

Calculating run time analysis on a few methods

I am preparing for my exam and having a little trouble with run time analysis. I have two methods below whose run time analysis I am confused about:
public boolean findDuplicates(String[] arr) {
    Hashtable<String, String> h = new Hashtable<String, String>();
    for (int i = 0; i < arr.length; i++) {
        if (h.get(arr[i]) == null)
            h.put(arr[i], arr[i]);
        else
            return true;
    }
    return false;
}
Assuming that the hash function takes only O(1) on any key, would the run time simply be O(n), because the worst case runs through the entire array? Am I thinking along the right lines if each hash operation takes constant time to evaluate?
The other problem seems much more complicated and I don't know exactly how to approach it. Assume these are ArrayLists.
public boolean makeTranslation(List<Integer> lst1, List<Integer> lst2) {
    // both lst1 and lst2 are the same size, and size is positive
    int shift = lst1.get(0) - lst2.get(0);
    for (int i = 1; i < lst1.size(); i++)
        if ((lst1.get(i) - lst2.get(i)) != shift)
            return false;
    return true;
}
In this case, the get operations are supposed to be constant since we are simply retrieving values at particular indices. But in the for loop, we are both comparing against shift and iterating over all elements. How exactly does this translate to run time?
A helpful explanation would be much appreciated, since I have a harder time understanding run time analysis than anything else in this course, and my final is next week.
The short answer: both methods have time complexity of O(n).
For hash, it is clear that both get and put operations take constant time.
For list, if you use the ArrayList implementation(and it is likely), the get method takes constant time as well. This is because an ArrayList in Java is a List that is backed by an array.
Code for ArrayList.get(index) in the standard Java library:
public E get(int index) {
RangeCheck(index);
return (E) elementData[index];
}
RangeCheck probably does two comparisons, which is constant time. Returning a value from an array is obviously constant time. Thus, the get method of ArrayList takes constant time.
As for your specific concern mentioned in the OP:
But in the for loop, we are both comparing it to shift and also iterating over all elements. How exactly would this translate to run time?
lst1.get(i) takes constant time. lst2.get(i) takes constant time. Thus, lst1.get(i) - lst2.get(i) takes constant time. The same holds for (lst1.get(i) - lst2.get(i)) != shift. The idea is the sum of a constant number of constant time operations is still constant time. Since the loop iterates up to n times, the total time is O(Cn), i.e., O(n) where C is a constant.
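To make the accounting concrete, here is the same method annotated with per-operation costs (the comments are mine):

public boolean makeTranslation(List<Integer> lst1, List<Integer> lst2) {
    int shift = lst1.get(0) - lst2.get(0);          // O(1): two gets, one subtraction
    for (int i = 1; i < lst1.size(); i++)           // at most n - 1 iterations
        if ((lst1.get(i) - lst2.get(i)) != shift)   // O(1) work per iteration
            return false;                           // best case: exit after one check
    return true;                                    // worst case: all n checked -> O(n)
}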
And...it never hurts to have a brief review of the big O notation just before final :)
In general, O() expresses the complexity of an algorithm, which is generally the number of operations, assuming the cost of each operation is constant. For example, O(1000n) would be the same as writing O(n), because each operation costs 1000 and there are n operations.
So assuming get and put are constant time (this depends on the library implementation) for every value, the time for both would be O(n).
For more information see:
http://en.wikipedia.org/wiki/Big_O_notation
Big-O notation is not very precise, as you omit constant factors and lower-order terms. So even if you have 2 constant operations n times, it is still O(n). In reality it is (1+1)n = 2n, but in big-O notation we round that down (even if it were 10000n). So for both these cases the run time is O(n).
In practice, I suggest writing out the costs for each loop and each operation in the worst case. Start from the innermost nesting level and multiply outwards (keeping only the highest cost at each level).
For example:
for (int i = 0; i < n; i++) {       // n times
    // log n operation
    for (int j = 0; j < n; j++) {   // n times
        // constant operation
    }
}
Here we have n * (log(n) + n * 1) = O(n^2), since n > log(n).
It's worth echoing: both of these methods are O(n) (for the second, that's the worst case). The key thing to note is the number of critical operations done on each iteration.
For your first snippet, the Hashtable is a bit of a red herring, since access time isn't going to be the largest cost in the loop. It's also the case that, since that Hashtable was just created, in the worst case you'll be inserting all n elements into it.
For your second snippet, you have a chance to end early. If the next elements' difference isn't shift, you return false right then and there, after only one operation. In the worst case, you'll go through all n and then return.

Array or List in Java. Which is faster?

I have to keep thousands of strings in memory to be accessed serially in Java. Should I store them in an array or should I use some kind of List ?
Since arrays keep all the data in a contiguous chunk of memory (unlike Lists), would the use of an array to store thousands of strings cause problems ?
I suggest that you use a profiler to test which is faster.
My personal opinion is that you should use Lists.
I work on a large codebase and a previous group of developers used arrays everywhere. It made the code very inflexible. After changing large chunks of it to Lists we noticed no difference in speed.
The Java way is that you should consider what data abstraction most suits your needs. Remember that in Java a List is an abstract, not a concrete data type. You should declare the strings as a List, and then initialize it using the ArrayList implementation.
List<String> strings = new ArrayList<String>();
This separation of Abstract Data Type and specific implementation is one the key aspects of object oriented programming.
An ArrayList implements the List abstract data type using an array as its underlying implementation. Access speed is virtually identical to an array, with the additional advantages of being able to add and remove elements (although this is an O(n) operation with an ArrayList), and of being able to change the underlying implementation later if you decide to. For example, if you realize you need synchronized access, you can change the implementation to a Vector without rewriting all your code.
In fact, the ArrayList was specifically designed to replace the low-level array construct in most contexts. If Java was being designed today, it's entirely possible that arrays would have been left out altogether in favor of the ArrayList construct.
Since arrays keep all the data in a contiguous chunk of memory (unlike Lists), would the use of an array to store thousands of strings cause problems ?
In Java, all collections store only references to objects, not the objects themselves. Both an array and an ArrayList will store a few thousand references in a contiguous array, so they are essentially identical. You can consider that a contiguous block of a few thousand 32-bit references will always be readily available on modern hardware. This does not guarantee that you will not run out of memory altogether, of course, just that the contiguous-block-of-memory requirement is not difficult to fulfil.
Although the answers proposing to use ArrayList do make sense in most scenarios, the actual question of relative performance has not really been answered.
There are a few things you can do with an array:
create it
set an item
get an item
clone/copy it
General conclusion
Although get and set operations are somewhat slower on an ArrayList (respectively 1 and 3 nanoseconds slower per call on my machine), there is very little overhead in using an ArrayList vs. an array for any non-intensive use. There are however a few things to keep in mind:
resizing operations on a list (when calling list.add(...)) are costly and one should try to set the initial capacity at an adequate level when possible (note that the same issue arises when using an array)
when dealing with primitives, arrays can be significantly faster as they will allow one to avoid many boxing/unboxing conversions
an application that only gets/sets values in an ArrayList (not very common!) could see a performance gain of more than 25% by switching to an array
Detailed results
Here are the results I measured for those three operations using the jmh benchmarking library (times in nanoseconds) with JDK 7 on a standard x86 desktop machine. Note that the ArrayList is never resized in the tests, to make sure results are comparable. Benchmark code available here.
Array/ArrayList Creation
I ran 4 tests, executing the following statements:
createArray1: Integer[] array = new Integer[1];
createList1: List<Integer> list = new ArrayList<> (1);
createArray10000: Integer[] array = new Integer[10000];
createList10000: List<Integer> list = new ArrayList<> (10000);
Results (in nanoseconds per call, 95% confidence):
a.p.g.a.ArrayVsList.CreateArray1 [10.933, 11.097]
a.p.g.a.ArrayVsList.CreateList1 [10.799, 11.046]
a.p.g.a.ArrayVsList.CreateArray10000 [394.899, 404.034]
a.p.g.a.ArrayVsList.CreateList10000 [396.706, 401.266]
Conclusion: no noticeable difference.
get operations
I ran 2 tests, executing the following statements:
getList: return list.get(0);
getArray: return array[0];
Results (in nanoseconds per call, 95% confidence):
a.p.g.a.ArrayVsList.getArray [2.958, 2.984]
a.p.g.a.ArrayVsList.getList [3.841, 3.874]
Conclusion: getting from an array is about 25% faster than getting from an ArrayList, although the difference is only on the order of one nanosecond.
set operations
I ran 2 tests, executing the following statements:
setList: list.set(0, value);
setArray: array[0] = value;
Results (in nanoseconds per call):
a.p.g.a.ArrayVsList.setArray [4.201, 4.236]
a.p.g.a.ArrayVsList.setList [6.783, 6.877]
Conclusion: set operations on arrays are about 40% faster than on lists, but, as for get, each set operation takes a few nanoseconds - so for the difference to reach 1 second, one would need to set items in the list/array hundreds of millions of times!
clone/copy
ArrayList's copy constructor delegates to Arrays.copyOf so performance is identical to array copy (copying an array via clone, Arrays.copyOf or System.arrayCopy makes no material difference performance-wise).
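For example, both of these copies reduce to the same array-copy primitive (a small sketch with arbitrary data):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CopyDemo {
    public static void main(String[] args) {
        Integer[] src = {1, 2, 3, 4};
        Integer[] arrayCopy = Arrays.copyOf(src, src.length);         // plain array copy
        List<Integer> listCopy = new ArrayList<>(Arrays.asList(src)); // list copy, same cost profile
        System.out.println(Arrays.toString(arrayCopy) + " " + listCopy);
    }
}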
You should prefer generic types over arrays. As mentioned by others, arrays are inflexible and do not have the expressive power of generic types. (They do however support runtime typechecking, but that mixes badly with generic types.)
But, as always, when optimizing you should follow these steps:
Don't optimize until you have a nice, clean, and working version of your code. Changing to generic types could very well be motivated at this step already.
When you have a version that is nice and clean, decide if it is fast enough.
If it isn't fast enough, measure its performance. This step is important for two reasons. If you don't measure you won't (1) know the impact of any optimizations you make and (2) know where to optimize.
Optimize the hottest part of your code.
Measure again. This is just as important as measuring before. If the optimization didn't improve things, revert it. Remember, the code without the optimization was clean, nice, and working.
I'm guessing the original poster is coming from a C++/STL background, which is causing some confusion. In C++, std::list is a doubly linked list.
In Java, [java.util.]List is an implementation-free interface (a pure abstract class in C++ terms). A List can be a doubly linked list - java.util.LinkedList is provided. However, 99 times out of 100 when you want to make a new List, you want to use java.util.ArrayList instead, which is the rough equivalent of C++ std::vector. There are other standard implementations, such as those returned by java.util.Collections.emptyList() and java.util.Arrays.asList().
From a performance standpoint there is a very small hit from having to go through an interface and an extra object; however, runtime inlining means this rarely has any significance. Also remember that a String is typically an object plus an array, so for each entry you probably have two other objects. In C++ std::vector<std::string>, although it copies by value without a pointer as such, the character arrays still form one object per string (and these will not usually be shared).
If this particular code is really performance-sensitive, you could create a single char[] array (or even byte[]) for all the characters of all the strings, and then an array of offsets. IIRC, this is how javac is implemented.
I agree that in most cases you should choose the flexibility and elegance of ArrayLists over arrays - and in most cases the impact to program performance will be negligible.
However, if you're doing constant, heavy iteration with little structural change (no adds and removes) for, say, software graphics rendering or a custom virtual machine, my sequential-access benchmarking tests show that ArrayLists are 1.5x slower than arrays on my system (Java 1.6 on my one-year-old iMac).
Some code:
import java.util.*;

public class ArrayVsArrayList {
    static public void main(String[] args) {
        String[] array = new String[300];
        ArrayList<String> list = new ArrayList<String>(300);
        for (int i = 0; i < 300; ++i) {
            if (Math.random() > 0.5) {
                array[i] = "abc";
            } else {
                array[i] = "xyz";
            }
            list.add(array[i]);
        }

        int iterations = 100000000;
        long start_ms;
        int sum;

        start_ms = System.currentTimeMillis();
        sum = 0;
        for (int i = 0; i < iterations; ++i) {
            for (int j = 0; j < 300; ++j) sum += array[j].length();
        }
        System.out.println((System.currentTimeMillis() - start_ms) + " ms (array)");
        // Prints ~13,500 ms on my system

        start_ms = System.currentTimeMillis();
        sum = 0;
        for (int i = 0; i < iterations; ++i) {
            for (int j = 0; j < 300; ++j) sum += list.get(j).length();
        }
        System.out.println((System.currentTimeMillis() - start_ms) + " ms (ArrayList)");
        // Prints ~20,800 ms on my system - about 1.5x slower than direct array access
    }
}
Well, firstly it's worth clarifying: do you mean "list" in the classical comp-sci data-structures sense (i.e. a linked list), or do you mean java.util.List? If you mean a java.util.List, it's an interface. If you want to use an array, just use the ArrayList implementation and you'll get array-like behaviour and semantics. Problem solved.
If you mean an array vs. a linked list, it's a slightly different argument, for which we go back to Big O (here is a plain-English explanation if this is an unfamiliar term).
Array:
Random access: O(1)
Insert: O(n)
Delete: O(n)
Linked list:
Random access: O(n)
Insert: O(1) (given a reference to the node)
Delete: O(1) (given a reference to the node)
So you choose whichever one best suits how you use the data. If you insert and delete a lot, then maybe a linked list is a better choice; the same goes if random access is rare. You mention serial access: if you're mainly doing serial access with very little modification, then it probably doesn't matter which you choose.
Linked lists have a slightly higher overhead since, like you say, you're dealing with potentially non-contiguous blocks of memory and (effectively) pointers to the next element. That's probably not an important factor unless you're dealing with millions of entries, however.
I wrote a little benchmark to compare ArrayLists with arrays. On my old-ish laptop, the time to traverse a 5000-element ArrayList, 1000 times, was about 10 milliseconds slower than the equivalent array code.
So, if you're doing nothing but iterating the list, and you're doing it a lot, then maybe it's worth the optimisation. Otherwise I'd use the List, because it'll make it easier when you do need to optimise the code.
n.b. I did notice that using for (String s : stringsList) was about 50% slower than using an old-style for loop to access the list. Go figure... Here are the two functions I timed; the array and the list were each filled with 5000 random (distinct) strings.
private static void readArray(String[] strings) {
    long totalchars = 0;
    for (int j = 0; j < ITERATIONS; j++) {
        totalchars = 0;
        for (int i = 0; i < strings.length; i++) {
            totalchars += strings[i].length();
        }
    }
}

private static void readArrayList(List<String> stringsList) {
    long totalchars = 0;
    for (int j = 0; j < ITERATIONS; j++) {
        totalchars = 0;
        for (int i = 0; i < stringsList.size(); i++) {
            totalchars += stringsList.get(i).length();
        }
    }
}
No, because technically the array only stores references to the strings; the strings themselves are allocated elsewhere. For a thousand items I would say a list would be better. It is slower, but it offers more flexibility and is easier to use, especially if you are going to resize it.
If you have thousands, consider using a trie. A trie is a tree-like structure that merges the common prefixes of the stored strings.
For example, if the strings were
intern
international
internationalize
internet
internets
The trie would store:
intern
-> \0
international
-> \0
-> ize\0
net
->\0
->s\0
The strings require 57 characters (including the null terminators, '\0') of storage, plus the size of the String objects that hold them. (In truth, we should probably round all sizes up to multiples of 16, but...) Call it 57 + 5 = 62 bytes, roughly.
The trie requires 29 characters (including the null terminators, '\0') of storage, plus the size of the trie nodes, each of which is a reference to a character array plus a list of child trie nodes.
For this example, that probably comes out about the same; for thousands, it probably comes out less as long as you do have common prefixes.
Now, when using the trie in other code, you'll have to convert to String, probably using a StringBuffer as an intermediary. If many of the strings are in use at once as Strings, outside the trie, it's a loss.
But if you're only using a few at the time -- say, to look up things in a dictionary -- the trie can save you a lot of space. Definitely less space than storing them in a HashSet.
You say you're accessing them "serially" - if that means sequentially and alphabetically, the trie also gives you alphabetical order for free if you iterate it depth-first.
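A minimal sketch of such a trie (the node layout and method names are my own; production implementations add node compaction and other tricks):

import java.util.HashMap;
import java.util.Map;

// Minimal prefix-tree sketch: stores words, shares common prefixes.
class Trie {
    private static class Node {
        Map<Character, Node> children = new HashMap<>();
        boolean endOfWord;   // marks that a stored string ends here
    }

    private final Node root = new Node();

    void add(String word) {
        Node n = root;
        for (char c : word.toCharArray()) {
            n = n.children.computeIfAbsent(c, k -> new Node());
        }
        n.endOfWord = true;
    }

    boolean contains(String word) {
        Node n = root;
        for (char c : word.toCharArray()) {
            n = n.children.get(c);
            if (n == null) return false;   // prefix not present
        }
        return n.endOfWord;
    }
}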
Since there are already a lot of good answers here, I would like to give you some information from a practical point of view: an insertion and iteration performance comparison of a primitive array vs. a linked list in Java.
This is a simple performance check, so the results will depend on machine performance.
The source code used for this is below:
import java.util.Iterator;
import java.util.LinkedList;

public class Array_vs_LinkedList {

    private final static int MAX_SIZE = 40000000;

    public static void main(String[] args) {
        LinkedList<Integer> lList = new LinkedList<Integer>();

        /* insertion performance check */
        long startTime = System.currentTimeMillis();
        for (int i = 0; i < MAX_SIZE; i++) {
            lList.add(i);
        }
        long stopTime = System.currentTimeMillis();
        long elapsedTime = stopTime - startTime;
        System.out.println("[Insert]LinkedList insert operation with " + MAX_SIZE + " number of integer elapsed time is " + elapsedTime + " millisecond.");

        int[] arr = new int[MAX_SIZE];
        startTime = System.currentTimeMillis();
        for (int i = 0; i < MAX_SIZE; i++) {
            arr[i] = i;
        }
        stopTime = System.currentTimeMillis();
        elapsedTime = stopTime - startTime;
        System.out.println("[Insert]Array insert operation with " + MAX_SIZE + " number of integer elapsed time is " + elapsedTime + " millisecond.");

        /* iteration performance check */
        startTime = System.currentTimeMillis();
        Iterator<Integer> itr = lList.iterator();
        while (itr.hasNext()) {
            itr.next();
            // System.out.println("Linked list running : " + itr.next());
        }
        stopTime = System.currentTimeMillis();
        elapsedTime = stopTime - startTime;
        System.out.println("[Loop]LinkedList iteration with " + MAX_SIZE + " number of integer elapsed time is " + elapsedTime + " millisecond.");

        startTime = System.currentTimeMillis();
        int t = 0;
        for (int i = 0; i < MAX_SIZE; i++) {
            t = arr[i];
            // System.out.println("array running : " + i);
        }
        stopTime = System.currentTimeMillis();
        elapsedTime = stopTime - startTime;
        System.out.println("[Loop]Array iteration with " + MAX_SIZE + " number of integer elapsed time is " + elapsedTime + " millisecond.");
    }
}
(The performance results were posted as an image in the original answer.)
I came here to get a better feeling for the performance impact of using lists over arrays. I had to adapt the code here for my scenario: an array/list of ~1000 ints, using mostly getters, i.e. array[j] vs. list.get(j).
Taking the best of 7 runs, to be unscientific about it (the first few with the list were 2.5x slower), I get this:
array Integer[] best 643ms iterator
ArrayList<Integer> best 1014ms iterator
array Integer[] best 635ms getter
ArrayList<Integer> best 891ms getter (strange though)
- so, very roughly 30% faster with array
The second reason for posting now is that no one mentions the impact if you do math/matrix/simulation/optimization code with nested loops.
Say you have three nested levels and each loop level is twice as slow: you are looking at an 8x performance hit. Something that would run in a day now takes a week.
*EDIT:
Quite shocked here: for kicks I tried declaring int[1000] rather than Integer[1000]:
array int[] best 299ms iterator
array int[] best 296ms getter
Using Integer[] vs. int[] represents a 2x performance hit, and the ArrayList with an iterator is 3x slower than int[]. I really thought Java's list implementations were similar to native arrays...
Code for reference (call multiple times):
public static void testArray()
{
    final long MAX_ITERATIONS = 1000000;
    final int MAX_LENGTH = 1000;
    Random r = new Random();
    //Integer[] array = new Integer[MAX_LENGTH];
    int[] array = new int[MAX_LENGTH];
    List<Integer> list = new ArrayList<Integer>()
    {{
        for (int i = 0; i < MAX_LENGTH; ++i)
        {
            int val = r.nextInt();
            add(val);
            array[i] = val;
        }
    }};

    long start = System.currentTimeMillis();
    int test_sum = 0;
    for (int i = 0; i < MAX_ITERATIONS; ++i)
    {
        // for (int e : array)
        // for (int e : list)
        for (int j = 0; j < MAX_LENGTH; ++j)
        {
            int e = array[j];
            // int e = list.get(j);
            test_sum += e;
        }
    }
    long stop = System.currentTimeMillis();
    long ms = (stop - start);
    System.out.println("Time: " + ms);
}
Lists are slower than arrays. If you need efficiency, use arrays. If you need flexibility, use lists.
UPDATE:
As Mark noted, there is no significant difference after JVM warm-up (several test passes). Checked with a re-created array and even with a new pass starting on a new row of a matrix. In all likelihood this means a simple array with index access has no real advantage over collections. Still, on the first 1-2 passes the simple array is 2-3 times faster.
ORIGINAL POST:
Too many words for a subject that is so simple to check. Without any question, an array is several times faster than any class container. I ran into this question looking for alternatives for my performance-critical section. Here is the prototype code I built to check the real situation:
import java.util.List;
import java.util.Arrays;

public class IterationTest {

    private static final long MAX_ITERATIONS = 1000000000;

    public static void main(String[] args) {
        Integer[] array = {1, 5, 3, 5};
        List<Integer> list = Arrays.asList(array);

        long start = System.currentTimeMillis();
        int test_sum = 0;
        for (int i = 0; i < MAX_ITERATIONS; ++i) {
            // for (int e : array) {
            for (int e : list) {
                test_sum += e;
            }
        }
        long stop = System.currentTimeMillis();
        long ms = (stop - start);
        System.out.println("Time: " + ms);
    }
}
And here is the answer:
Based on the array (the array loop active):
Time: 7064
Based on the list (the list loop active):
Time: 20950
Any more comments on 'faster'? This is quite well understood. The question is whether roughly 3x faster matters more to you than the flexibility of List. But that is another question.
By the way, I checked this with a manually constructed ArrayList too. Almost the same result.
Remember that an ArrayList encapsulates an array, so there is little difference compared to using a primitive array (except that a List is much easier to work with in Java).
Pretty much the only time it makes sense to prefer an array to an ArrayList is when you are storing primitives (byte, int, etc.) and you need the particular space efficiency you get from primitive arrays.
The array vs. list choice is not so important (considering performance) when storing string objects, because both an array and a list store references to the string objects, not the objects themselves.
If the number of strings is almost constant, use an array (or an ArrayList). If the number varies a lot, you may be better off with a LinkedList.
If there is (or will be) a need for adding or deleting elements in the middle, then you certainly want a LinkedList.
If you can live with a fixed size, arrays will be faster and need less memory.
If you need the flexibility of the List interface for adding and removing elements, the question remains which implementation you should choose. Often ArrayList is recommended and used regardless, but ArrayList too has performance problems if elements must be removed or inserted at the beginning or in the middle of the list.
You may therefore want to have a look at https://dzone.com/articles/gaplist-lightning-fast-list which introduces GapList. This list implementation combines the strengths of both ArrayList and LinkedList, resulting in very good performance for nearly all operations. Get it at https://github.com/magicwerk/brownies-collections.
If you know in advance how large the data is then an array will be faster.
A List is more flexible. You can use an ArrayList which is backed by an array.
List is the preferred way in Java 1.5 and beyond, as it can use generics; arrays cannot. Arrays also have a predefined length which cannot grow dynamically, and initializing an array with a large size is not a good idea.
ArrayList is the way to declare an array-like structure with generics, and it can grow dynamically.
But if deletes and inserts are frequent, then a linked list is the fastest data structure to use.
Which one to use depends on the problem; we need to look at the Big O complexity.
(A Big-O comparison image from https://github.com/egonSchiele/grokking_algorithms accompanied the original answer.)
Depending on the implementation, it's possible that an array of primitive types will be smaller and more efficient than an ArrayList. This is because the array stores the values directly in a contiguous block of memory, while the simplest ArrayList implementation stores pointers to each value; on a 64-bit platform especially, this can make a huge difference.
Of course, it's possible for the JVM implementation to special-case this situation, in which case the performance will be the same.
Arrays are recommended wherever you can use them instead of a list, especially when you know the item count and that the size will not change.
See the Oracle Java best practices: http://docs.oracle.com/cd/A97688_16/generic.903/bp/java.htm#1007056
Of course, if you need to add and remove objects from the collection many times, lists are easier to use.
A lot of microbenchmarks given here have found numbers of a few nanoseconds for things like array/ArrayList reads. This is quite reasonable if everything is in your L1 cache.
A higher-level cache or main-memory access can take on the order of 10-100 ns, versus more like 1 ns for the L1 cache. Accessing an ArrayList has an extra memory indirection, and in a real application you could pay this cost anything from almost never to every time, depending on what your code does between accesses. And, of course, if you have a lot of small ArrayLists, this might add to your memory use and make cache misses more likely.
The original poster appears to be using just one and accessing a lot of contents in a short time, so it should be no great hardship. But it might be different for other people, and you should watch out when interpreting microbenchmarks.
Java Strings, however, are appallingly wasteful, especially if you store lots of small ones (just look at them with a memory analyzer; it seems to be more than 60 bytes for a string of a few characters). An array of strings has an indirection to the String object, and another from the String object to a char[] which contains the string itself. If anything's going to blow your L1 cache, it's this, combined with thousands or tens of thousands of Strings. So, if you're serious - really serious - about scraping out as much performance as possible, then you could look at doing it differently. You could, say, hold two arrays: a char[] with all the strings in it, one after another, and an int[] of offsets to the starts. This will be a PITA to do anything with, and you almost certainly don't need it. And if you do, you've chosen the wrong language.
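A sketch of that two-array layout (the class and field names are illustrative, not from any library):

// Sketch: pack all strings into one char[] plus an int[] of start offsets.
// offsets has one extra entry, so the i-th string spans offsets[i]..offsets[i+1].
class PackedStrings {
    private final char[] chars;
    private final int[] offsets;

    PackedStrings(String[] strings) {
        int total = 0;
        for (String s : strings) total += s.length();
        chars = new char[total];
        offsets = new int[strings.length + 1];
        int pos = 0;
        for (int i = 0; i < strings.length; i++) {
            offsets[i] = pos;
            strings[i].getChars(0, strings[i].length(), chars, pos);
            pos += strings[i].length();
        }
        offsets[strings.length] = pos;
    }

    String get(int i) {   // materializes a String only when asked
        return new String(chars, offsets[i], offsets[i + 1] - offsets[i]);
    }
}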
None of the answers had the information I was interested in: repeated scans of the same array many, many times. I had to create a JMH test for this.
Results (Java 1.8.0_66 x32; iterating a plain array is at least 5 times quicker than an ArrayList):
Benchmark                      Mode  Cnt   Score   Error  Units
MyBenchmark.testArrayForGet    avgt   10   8.121 ± 0.233  ms/op
MyBenchmark.testListForGet     avgt   10  37.416 ± 0.094  ms/op
MyBenchmark.testListForEach    avgt   10  75.674 ± 1.897  ms/op
Test
package my.jmh.test;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@State(Scope.Benchmark)
@Fork(1)
@Warmup(iterations = 5, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 10)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class MyBenchmark {

    public final static int ARR_SIZE = 100;
    public final static int ITER_COUNT = 100000;

    String arr[] = new String[ARR_SIZE];
    List<String> list = new ArrayList<>(ARR_SIZE);

    public MyBenchmark() {
        for (int i = 0; i < ARR_SIZE; i++) {
            list.add(null);
        }
    }

    @Benchmark
    public void testListForEach() {
        int count = 0;
        for (int i = 0; i < ITER_COUNT; i++) {
            for (String str : list) {
                if (str != null)
                    count++;
            }
        }
        if (count > 0)
            System.out.print(count);
    }

    @Benchmark
    public void testListForGet() {
        int count = 0;
        for (int i = 0; i < ITER_COUNT; i++) {
            for (int j = 0; j < ARR_SIZE; j++) {
                if (list.get(j) != null)
                    count++;
            }
        }
        if (count > 0)
            System.out.print(count);
    }

    @Benchmark
    public void testArrayForGet() {
        int count = 0;
        for (int i = 0; i < ITER_COUNT; i++) {
            for (int j = 0; j < ARR_SIZE; j++) {
                if (arr[j] != null)
                    count++;
            }
        }
        if (count > 0)
            System.out.print(count);
    }
}
"Thousands" is not a large number. A few thousand paragraph-length strings are on the order of a couple of megabytes in size. If all you want to do is access these serially, use an immutable singly-linked List.
Don't fall into the trap of optimizing without proper benchmarking. As others have suggested, use a profiler before making any assumptions.
The different data structures that you have enumerated have different purposes. A linked list is very efficient at inserting elements at the beginning and at the end, but suffers a lot when accessing random elements. An array has fixed storage but provides fast random access. Finally, an ArrayList improves the interface of an array by allowing it to grow. Normally, the data structure to use should be dictated by how the stored data will be accessed or added.
About memory consumption: you seem to be mixing some things up. An array only gives you a contiguous chunk of memory for the type of data that you have. Don't forget that Java has fixed primitive data types (boolean, byte, char, short, int, long, float, double) plus object references (this covers all objects; even an array is an Object). It means that if you declare a new String[1000] or a new MyObject[1000], you only get 1000 memory boxes big enough to store the location (reference or pointer) of an object. You don't get 1000 memory boxes big enough to fit the size of the objects. Don't forget that your objects are first created with "new": that is when the memory allocation happens, and only later is a reference (their memory address) stored in the array. The object doesn't get copied into the array, only its reference.
I don't think it makes a real difference for Strings. What is contiguous in an array of strings is the references to the strings; the strings themselves are stored at random places in memory.
Arrays vs. lists can make a difference for primitive types, not for objects. If you know the number of elements in advance and don't need flexibility, an array of millions of integers or doubles will be more efficient in memory, and marginally in speed, than a list, because indeed the values will be stored contiguously and accessed instantly. That's why Java still uses arrays of chars for strings, arrays of ints for image data, etc.
An array is faster - all its memory is pre-allocated in advance.
It depends on how you have to access the data.
After storing, if you mainly want to do lookups, with little or no insert/delete, then go for an array (access by index is O(1) in arrays, whereas add/delete may need re-ordering of the elements).
After storing, if your main purpose is to add/delete strings, with little or no lookups, then go for a List.
Arrays: always better when you need to fetch results quickly.
Lists: perform better on insertion and deletion, since those can be done in O(1); they also provide methods to add, fetch and delete data easily, and are much easier to use.
But always remember that fetching data from an array is fast only when the index where the data is stored is known.
Finding that index can be helped by sorting the array, but sorting increases the total time (storing the data + sorting the data + seeking the position where the data is found). So sorting adds extra latency to fetching data from the array, even though arrays are good at fetching by index.
This can be solved with a trie or a ternary search tree. As discussed above, a trie is very efficient at searching for data: looking up a particular word takes time proportional to the word's length. If you have to search and retrieve data quickly, go with a trie.
If you want memory consumption to be lower while keeping good performance, go with a ternary search tree. Both are suitable for storing huge numbers of strings (e.g. the words in a dictionary).
