Why is creating ArrayList with initial capacity slow? - java

Comparing the creation of a large ArrayList with an initialCapacity, I found that it's slower than creating it without one. Here is the simple program I wrote to measure it:
long start2 = System.nanoTime();
List<Double> col = new ArrayList<>(30000000); // <--- Here
for (int i = 0; i < 30000000; i++) {
col.add(Math.sqrt(i + 1));
}
long end2 = System.nanoTime();
System.out.println(end2 - start2);
System.out.println(col.get(12411325).hashCode() == System.nanoTime()); // consume an element so the work cannot be optimized away
The average result for new ArrayList<>(30000000): 6121173329
The average result for new ArrayList<>(): 4883894100
on my machine. I thought it would be faster to create the large backing array once rather than repeatedly recreating it every time we grow beyond the capacity of the ArrayList's current underlying array. Eventually we should end up with an array of size greater than or equal to 30000000 anyway.
I thought this was an optimization, but it actually turned out to be a pessimization. Why is that?

I ran the same program multiple times; it was not in a loop.
Consider how you are profiling the code - if you include both a 'ramp-up' period (to take into account things such as JIT compilation) and an average over several calls (to gather some statistics/distribution), the timing may lead you to a different conclusion. For example:
public static void main(String[] args){
//Warm up
System.out.println("Warm up");
for ( int i = 0; i < 5; i++ ){
dynamic();
constant();
}
System.out.println("Timing...");
//time
long e = 0;
long s = 0;
int total = 5;
for ( int i = 0; i < total; i++ ){
long e1 = dynamic();
System.out.print(e1 + "\t");
e += e1;
long s1 = constant();
System.out.println(s1);
s += s1;
}
System.out.println("Static Avg: " + (s/total));
System.out.println("Dynamic Avg: " + (e/total));
}
private static long dynamic(){
long start2 = System.currentTimeMillis();
List<Double> col = new ArrayList<>();
for (int i = 0; i < 30000000; i++) {
col.add(Math.sqrt(i + 1));
}
long end2 = System.currentTimeMillis();
return end2 - start2;
}
private static long constant(){
long start2 = System.currentTimeMillis();
List<Double> col = new ArrayList<>(30000000);
for (int i = 0; i < 30000000; i++) {
col.add(Math.sqrt(i + 1));
}
long end2 = System.currentTimeMillis();
return end2 - start2;
}
On my system, setting the initial capacity is always faster, though not by orders of magnitude.
Edit: As suggested in a comment, consider reading through How do I write a correct micro-benchmark in Java?

Related

TestClass for simple sorting algorithms gives illogical results

I have the following class with some sorting algorithms, like MaxSort, BubbleSort, etc.:
class ArrayUtility {
public static int returnPosMax(int[] A, int i, int j) {
int max = i;
int position = 0;
for(int c = 0; c <= j; c++){
if(c >= i){
if(A[c] > max){
max = A[c];
position = c;
}
}
}
return position;
}
public static int returnMax(int[] A, int i, int j) {
return A[returnPosMax(A, i, j)];
}
public static void swap(int[] A, int i, int j) {
int b = A[i];
A[i] = A[j];
A[j] = b;
}
public static void MaxSort(int[] A) {
int posMax;
for(int i = A.length - 1; i >= 0; i--){
posMax = returnPosMax(A, 0, i);
swap(A, posMax, i);
}
}
public static void BubbleSort(int[] A) {
boolean flag = true;
while (flag != false){
flag = false;
for(int i = 1; i <= A.length - 1; i++){
if(A[i-1]>A[i]){
swap(A, i-1, i);
flag = true;
}
}
if(flag = false) {
break;
}
for(int i = A.length - 1; i >= 1; i--){
if(A[i-1]>A[i]){
swap(A, i - 1, i);
flag = true;
}
}
}
}
public static void BubbleSortX(int[] A) {
boolean flag = true;
while (flag != false){
flag = false;
for(int i = 1; i <= A.length - 1; i++){
if(A[i-1]>A[i]){
swap(A, i-1, i);
flag = true;
}
}
}
}
}
Now I have to create a test class to evaluate the different sorting algorithms for different lengths of randomly created arrays:
import java.util.Random;
import java.util.Arrays;
public class TestSorting{
public static void main(String[] args){
int[] lengthArray = {100, 1000, 10000, 100000};
for(int i = 0; i <= lengthArray.length - 1; i++){
int[] arr = new int[i];
for(int j = 0; j < i; j++){
Random rd = new Random();
int randInt = rd.nextInt();
arr[j] = randInt;
}
/* long startTime = System.nanoTime();
ArrayUtility.MaxSort(arr);
long cpuTime = System.nanoTime() - startTime;
System.out.println("Time: " + cpuTime + " - Array with Length: " + lengthArray[i] + " Using MaxSort"); */
/* long startTime = System.nanoTime();
ArrayUtility.BubbleSortX(arr);
long cpuTime = System.nanoTime() - startTime;
System.out.println("Time: " + cpuTime + " - Array with Length: " + lengthArray[i] + " Using BubbleSortX"); */
long startTime = System.nanoTime();
ArrayUtility.BubbleSort(arr);
long cpuTime = System.nanoTime() - startTime;
System.out.println("Time: " + cpuTime + " - Array with Length: " + lengthArray[i] + " Using BubbleSort");
/*long startTime = System.nanoTime();
Arrays.sort(arr)
long cpuTime = System.nanoTime() - startTime;
System.out.println("Time: " + cpuTime + " - Array with Length: " + lengthArray[i] + " Using BubbleSort"); */
}
}
}
Now when I run a certain sorting algorithm (I have commented out the others for the time being), I get weird results, for example:
Time: 1049500 - Array with Length: 100 Using BubbleSort
Time: 2200 - Array with Length: 1000 Using BubbleSort
Time: 13300 - Array with Length: 10000 Using BubbleSort
Time: 3900 - Array with Length: 100000 Using BubbleSort
And every time I run the test I get different results, such that arrays with 10 times the length take less time to sort. I also don't understand why the array with 100 integers takes so long.
TL;DR: your benchmark is wrong.
Explanation
To make a good benchmark, you need to do a lot of research. A good starting point is this article and this talk by Alexey Shipilev, the author of the micro-benchmarking toolkit JMH.
Main rules for benchmarking:
Warm up! Do a bunch (thousands) of warmup rounds before you actually measure anything - this allows the JIT compiler to do its job and all optimizations to apply.
Monitor your GC closely - a GC event can skew the results drastically.
To mitigate that, repeat the benchmark many (hundreds of thousands of) times and take the average.
All of this can be done with JMH; a rough sketch is shown below.
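For illustration only, a JMH sketch of this benchmark might look like the following (it assumes your ArrayUtility class and the JMH dependency are on the classpath; the annotation values and names are just examples, not a definitive setup):
import java.util.Random;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5)
@Measurement(iterations = 10)
@Fork(1)
@State(Scope.Benchmark)
public class SortBenchmark {
    @Param({"100", "1000", "10000", "100000"})
    public int length;

    public int[] template;

    @Setup(Level.Trial)
    public void createTemplate() {
        // build one random array per array length, outside the measured code
        Random rd = new Random(42);
        template = new int[length];
        for (int i = 0; i < length; i++) {
            template[i] = rd.nextInt();
        }
    }

    @Benchmark
    public int[] bubbleSort() {
        int[] arr = template.clone(); // sort a fresh copy on every invocation
        ArrayUtility.BubbleSort(arr);
        return arr; // returning the result defeats dead-code elimination
    }
}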
I took a snippet out of your code to show where it is buggy.
public static void main(String[] args){
int[] lengthArray = {100, 1000, 10000, 100000};
for(int i = 0; i <= lengthArray.length - 1; i++) { // this loop goes from 0 - 3
int[] arr = new int[i]; // that's why this array will be of size 0 - 3
// correct line would be:
// int[] arr = new int[lengthArray[i]];
for(int j = 0; j < i; j++) {
// correct line would be:
// for (int j = 0; j < arr.length; j++) {
...
Additionally, the hint for benchmarking from Dmitry is also important to note.

Average runtime much lower than single-run runtime

public class Runtime {
public static void main(String[] args) {
int[] n = {1,100,1000,10000};
for (int i=0; i<4; i++) {
StringRepeater s = new StringRepeater();
long start = System.nanoTime();
s.repeatString("hello", n[i]);
long stop = System.nanoTime();
long runtime = stop - start;
System.out.println("T(" + n[i] + ") = " + runtime/1000000000.0 + " seconds");
}
for (int i=0; i<4; i++) {
long start = 0;
long stop = 0;
long runtime100 = 0;
for (int j=0; j<100; j++) {
StringRepeater s = new StringRepeater();
start = System.nanoTime();
s.repeatString("hello", n[i]);
stop = System.nanoTime();
runtime100 = runtime100 + (stop - start);
}
System.out.println("T(" + n[i] + ") = " + runtime100/100000000000.0 + " seconds");
}
}
}
So I've got this code, which measures the runtime of repeatString:
public class StringRepeater {
public String repeatString(String s, int n){
String result = "";
for(int i=0; i<n; i++) {
result = result + s;
}
return result;
}
}
The top part with one for loop measures the runtime of a single run. The bottom part with two for loops calculates it based on an average of 100 runs. But for some reason the runtime of the second part is on average much faster, especially for lower n. For n=1 it's even 100 times faster.
T(1) = 2.3405E-5 seconds
T(100) = 1.47748E-4 seconds
T(1000) = 0.00358515 seconds
T(10000) = 0.173254266 seconds
T(1) = 1.9015E-7 seconds
T(100) = 3.035997E-5 seconds
T(1000) = 0.00168481277 seconds
T(10000) = 0.10354477848 seconds
This is a fairly typical result. Is my code wrong, or is there something else going on? TL;DR: why is the average runtime so much lower than the single-run runtime? You would expect them to be fairly similar, right?
There are several things that require attention:
1 - It's better to avoid division when reporting execution time, because you can run into precision problems. So, a first suggestion: keep the times in nanoseconds.
2 - The performance difference is probably due to just-in-time compilation: the first time the code is executed, the compiler takes some time to compile the bytecode on the fly. To demonstrate this, simply try to swap the two loops in your code. I have done it for you:
public class Runtime {
public static void main(String[] args) {
int[] n = { 1, 100, 1000, 10000 };
for (int i = 0; i < 4; i++) {
long start = 0;
long stop = 0;
long runtime100 = 0;
for (int j = 0; j < 100; j++) {
StringRepeater s = new StringRepeater();
start = System.nanoTime();
s.repeatString("hello", n[i]);
stop = System.nanoTime();
runtime100 = runtime100 + (stop - start);
}
System.out.println("T(" + n[i] + ") = " + runtime100 / 100.0 + " seconds");
}
for (int i = 0; i < 4; i++) {
StringRepeater s = new StringRepeater();
long start = System.nanoTime();
s.repeatString("hello", n[i]);
long stop = System.nanoTime();
long runtime = stop - start;
//System.out.println("T(" + n[i] + ") = " + runtime / 1000000000.0 + " seconds");
System.out.println("T(" + n[i] + ") = " + runtime + " seconds");
}
}
public static class StringRepeater {
public String repeatString(String s, int n) {
String result = "";
for (int i = 0; i < n; i++) {
result = result + s;
}
return result;
}
}
}
When I run this code on my machine I obtain the following results:
T(1) = 985.31 ns
T(100) = 109439.19 ns
T(1000) = 2604811.29 ns
T(10000) = 1.1787790449E8 ns
T(1) = 821 ns
T(100) = 18886 ns
T(1000) = 1099442 ns
T(10000) = 121750683 ns
You can see that the 100-iteration loop is now slower than the single-run execution. This is because it now runs first.
3 - If you observe the above results, you will probably notice that the situation is now simply the opposite of the initial one. Why? In my opinion, this is due to garbage collector work. In the bigger loop, garbage collection has more to do, simply because there are many temporary objects to collect.
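As a side note, not required for the timing experiment itself: the pressure from those temporaries can be reduced by building the string with a StringBuilder instead of repeated concatenation. A minimal sketch:
public class StringRepeater {
    public String repeatString(String s, int n) {
        // one reusable buffer instead of a new intermediate String per iteration
        StringBuilder result = new StringBuilder(s.length() * n);
        for (int i = 0; i < n; i++) {
            result.append(s);
        }
        return result.toString();
    }
}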
I hope it helps you.

Why does the conditional check at the end of the method double the execution time?

I have the following pieces of code:
long start = System.currentTimeMillis();
for(int i = 0; i < keys.length; ++i) {
obj.getElement(keys[i]);
}
long total = System.currentTimeMillis() - start;
System.out.println(total/1000d + " seconds");
And the following:
long start = System.currentTimeMillis();
for(int i = 0; i < keys.length; ++i) {
obj.hasElement(keys[i]);
}
long total = System.currentTimeMillis() - start;
System.out.println(total/1000d + " seconds");
The implementations of these methods are:
public T getElement(int key) {
int idx = findIndexOfElement(key);
return idx >= 0? ITEMS[idx]:null;
}
public boolean hasElement(int key) {
return findIndexOfElement(key) >= 0;
}
Pretty straightforward. The only difference between the two methods is the conditional access to the table.
Problem: when actually measuring the performance of these snippets, getElement takes twice as long as hasElement.
So for a series of tests I get ~2.5 seconds for the first loop (getElement) and ~0.8 seconds for the second loop (hasElement).
How is it possible to have such a big difference? I understand that the conditional statement is a branch and a jump, but the difference still seems too big to me.
Is there a way to improve this?
Update:
The way I measure is:
long min = Long.MAX_VALUE;
long max = Long.MIN_VALUE;
long run = 0;
for(int i = 0; i < 10; ++i) {
long start = System.currentTimeMillis();
for(int i = 0; i < keys.length; ++i) {
obj.getElement(keys[i]);
}
long total = System.currentTimeMillis() - start;
System.out.println(total/1000d + " seconds");
if(total < min) {
min = total;
}
if(total > max) {
max = total;
}
run += total;
for(int k = 0; k < 50; ++k) {
System.gc();
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
System.out.println("min=" + min + " max=" + max);
System.out.println("avg = " + (double)run/1000/keys.length);
Is ITEMS definitely an array, and implemented as an array? If it is somehow implemented as a linked list, that would cause O(n) time instead of O(1) time on the get.
Your branches are probably the limiting factor in the short code posted. In the hasElement method there is essentially one branch (the comparison of the index against zero), while getElement performs that same comparison plus the conditional array access, making two branches for that method.
So in summary, the number of branches is doubled in getElement, and it seems reasonable that the runtime is roughly doubled as well.

Looping through a method and using the results

I am trying to loop 10 times through this method, which searches an array of numbers, captures the run time in nanoseconds, and prints the results. I then want to take the 10 run times and find the average and standard deviation.
Is there a way to capture the times over 10 runs and use the results to find my average and standard deviation?
This is what I have so far:
public class Search {
public static int Array[] = new int[100];
//Building my array with 100 numbers in sequential order
public static void createArray(){
int i = 0;
for(i = 0; i<Array.length; i++)
Array[i] = i + 1;
int check[] = {5, 15, 12};
int target = check[2];
boolean found = false;
int j = 0;
long startTime = System.nanoTime();
for(j=0; j<Array.length;j++){
if(Array[j] == target){
long endTime = System.nanoTime();
System.out.print(endTime - startTime + "ns" + "\t\t");
found = true;
break;
}
}
if(found){
//System.out.println("got you! "+ target + " is at index "+ j +"\t");..... just to test if it was working
}
else{
System.out.println("not available");
}
}
// Printing header
public static void main(String[]args){
System.out.print("First run\tSecond run\tThird run\tFourth run\tFifth run\tSixth run\tSeventh run\tEight run\tNinth run\tTenth run\tAverage \tStandard deviation\n");
// looping through the method 10 times
int i=0;
while(i<10){
createArray();
i++;
}
}
}
Try:
long sum = 0;
long sumSquare = 0;
for(int c = 0 ; c < 10 ; c++) {
long start = System.nanoTime();
// do work
long end = System.nanoTime();
sum += end - start;
sumSquare += Math.pow(end - start, 2);
}
double average = (sum * 1D) / 10;
double variance = (sumSquare * 1D) / 10 - Math.pow(average, 2);
double std = Math.sqrt(variance);
Try creating an array list of size 10 like:
private static List<Long> times = new ArrayList<>(10);
And then, when you find the element, just add endTime - startTime to the list, like:
times.add(..);
And once that's done, in your main method you can compute the sum and the average like:
long totalTime = 0;
for (Long time : times) {
totalTime += time;
}
//print average by dividing totalTime by 10.
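If you also need the standard deviation, it can be computed from the same list in a second pass. A sketch, assuming times holds the 10 recorded nanosecond values and totalTime is the sum computed above:
double average = totalTime / (double) times.size();
double sumSquaredDiff = 0;
for (Long time : times) {
    sumSquaredDiff += Math.pow(time - average, 2);
}
double stdDev = Math.sqrt(sumSquaredDiff / times.size()); // population standard deviation
System.out.println("Average: " + average + " ns, standard deviation: " + stdDev + " ns");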

Java Compute Average Execution Time

I want to compute the average execution time of x number of runs (e.g. 10)... I can easily execute 10 times using the loop in the main method, but how can I store the execution times and compute the average? I'm thinking this is something really simple, but I'm drawing a blank at the moment... Thanks in advance!
import java.util.Arrays;
import java.util.Random;
public class OptQSort1 {
static boolean insertionSortCalled = false;
private static final Random random = new Random();
private static final int RANDOM_INT_RANGE = 9999;
private static int[] randomArray(int size) {
// Randomize data (array)
final int[] arr = new int[size];
for (int i = 0; i < arr.length; i++) {
arr[i] = random.nextInt(RANDOM_INT_RANGE);
}
return arr;
}
// Sort
private static void sort(int[] arr) {
if (arr.length > 0)
sortInPlace(arr, 0, arr.length - 1);
}
private static void sortInPlace(int[] arr, int left, int right) {
// OptQSort1:
int size = right - left + 1;
if (size < 10 && !insertionSortCalled) {
insertionSortCalled = true;
insertionSort(arr, 0, arr.length - 1);
}
if (left >= right)
return; // sorted
final int range = right - left + 1;
int pivot = random.nextInt(range) + left;
int newPivot = partition(arr, left, right, pivot);
sortInPlace(arr, left, newPivot - 1);
sortInPlace(arr, newPivot + 1, right);
}
private static int partition(int[] arr, int left, int right, int pivot) {
int pivotVal = arr[pivot];
swapArrayVals(arr, pivot, right);
int storeIndex = left;
for (int i = left; i <= (right - 1); i++) {
if (arr[i] < pivotVal) {
swapArrayVals(arr, i, storeIndex);
storeIndex++;
}
}
swapArrayVals(arr, storeIndex, right);
return storeIndex;
}
private static void swapArrayVals(int[] arr, int from, int to) {
int fromVal = arr[from];
int toVal = arr[to];
arr[from] = toVal;
arr[to] = fromVal;
}
public static void insertionSort(int[] arr, int left, int right) {
int in, out;
for (out = left + 1; out <= right; out++) {
int temp = arr[out];
in = out;
while (in > left && arr[in - 1] >= temp) {
arr[in] = arr[in - 1];
--in;
}
arr[in] = temp;
}
}
public static void main(String[] args) {
long StartTime = System.nanoTime();
int runCount = 0;
// Array size
int[] arr = randomArray(1000);
int[] copy = Arrays.copyOf(arr, arr.length);
// Print original data (array)
System.out.println("The starting/unsorted array: \n"
+ Arrays.toString(arr));
sort(arr);
do {
// check the result
Arrays.sort(copy);
if (Arrays.equals(arr, copy)) {
System.out.println("The ending/sorted array: \n"
+ Arrays.toString(arr));
// print time
long TotalTime = System.nanoTime() - StartTime;
System.out.println("Total elapsed time (milliseconds) " + "is: "
+ TotalTime + "\n");
runCount++;
}
} while (runCount < 10);
}
}
You can compute the average by just measuring the total time for 10 iterations of your code, then dividing that by 10.
e.g:
public static void main(String[] args) {
long start = System.currentTimeMillis();
for (int i = 0; i < 10; ++i) {
doSort();
}
long elapsed = System.currentTimeMillis() - start;
long average = elapsed / 10;
}
As a helpful tip, use a named constant rather than a literal value for the number of iterations:
private final static int ITERATIONS = 10;
public static void main(String[] args) {
long start = System.currentTimeMillis();
for (int i = 0; i < ITERATIONS; ++i) {
doSort();
}
long elapsed = System.currentTimeMillis() - start;
long average = elapsed / ITERATIONS;
}
This means you only have to change the number in one place if you want to run, say, 50 or 100 iterations.
You should also be aware that it is very difficult to get accurate timing results from this kind of experiment. It's a good idea to include a "warm-up" phase to allow the JIT to evaluate and optimize the code, and to have a much larger number of iterations:
private static final int WARMUP_ITERATIONS = 10000;
private static final int RUN_ITERATIONS = 100000;
public static void main(String[] args) {
// Warmup with no timing
for (int i = 0; i < WARMUP_ITERATIONS; ++i) {
doSort();
}
// Now the real test
long start = System.currentTimeMillis();
for (int i = 0; i < RUN_ITERATIONS; ++i) {
doSort();
}
long elapsed = System.currentTimeMillis() - start;
long average = elapsed / RUN_ITERATIONS;
}
To calculate the average time, you need the sum of the times. The sum of the times is the total time, so you don't even need to know the individual times or record them. Just take the end-to-end time and divide by the count.
int count = ...
long start = System.nanoTime();
for(int i=0;i<count;i++) {
// do something
}
long time = System.nanoTime() - start;
long averageTime = time/count;
The JIT doesn't fully warm up until you have done at least 10,000 iterations, so you might ignore the first 11,000 if this is practical.
A simple way to do this is
int count = ...
long start = 0;
for(int i=-11000;i<count;i++) {
if(i == 0) start = System.nanoTime();
// do something
}
long time = System.nanoTime() - start;
long averageTime = time/count;
BTW: Only include in the timed section the things you actually want to measure. Generating random numbers, for example, could take longer than the sort itself, which could give misleading results.
EDIT: The compile threshold, which determines when a method or loop is compiled, is controlled with -XX:CompileThreshold=, which defaults to 10000 on the server JVM and 1500 on the client JVM. http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
-XX:CompileThreshold=10000 Number of method invocations/branches before
compiling [-client: 1,500]
You can use a list of integers to store the result for each run, but you don't need it to calculate the average; just divide the total time by the number of runs.
By the way, your measurements are not very good:
1) Generation of the random arrays is included in the timed section (see the sketch below)
2) 10 runs is not enough
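A rough sketch of how the measurement loop could address both points, assuming it lives in OptQSort1's main method so that randomArray and sort are accessible (the run count is only an example):
int runs = 1000; // far more than 10, to smooth out JIT and GC noise
long total = 0;
for (int run = 0; run < runs; run++) {
    int[] arr = randomArray(1000); // generated outside the timed section
    long start = System.nanoTime();
    sort(arr); // only the sort itself is timed
    total += System.nanoTime() - start;
}
System.out.println("Average sort time: " + (total / runs) + " ns");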
before the execution:
long start = System.currentTimeMillis();
after the execution:
long end = System.currentTimeMillis();
add each time to an ArrayList like this:
times.add(end-start);
get the average time:
Long total = 0;
for(Long l : times)
total += l;
System.out.println("Average Time: "+(total/times.size()));
Be careful: the unit of the measured value is milliseconds.
Just keep a second long value that is a running total of the TotalTime values. When the loop exits, just divide by runCount.
Alternatively, create an ArrayList<Long> to store the times. Each time you do one run, add TotalTime to the list. After the loop exits, you can average the values and also compute other statistics (min/max, standard deviation, etc.).
