Comparing the creation of a large ArrayList with an initialCapacity, I found that it's slower than creating it without one. Here is the simple program I wrote to measure it:
long start2 = System.nanoTime();
List<Double> col = new ArrayList<>(30000000); // <--- Here
for (int i = 0; i < 30000000; i++) {
col.add(Math.sqrt(i + 1));
}
long end2 = System.nanoTime();
System.out.println(end2 - start2);
System.out.println(col.get(12411325).hashCode() == System.nanoTime()); // use the list so the work is not optimized away
The average result for new ArrayList<>(30000000): 6121173329
The average result for new ArrayList<>(): 4883894100
on my machine. I thought it would be faster to create the large backing array once rather than recreating it every time we go beyond the capacity of the ArrayList's current underlying array. Either way, we eventually end up with an array of size greater than or equal to 30000000.
I thought this was an optimization, but it actually turned out to be a pessimization. Why is that?
I ran the same program multiple times; it was not in a loop.
Consider how you are profiling the code - if you include both a 'ramp up time' (to take into account things such as JIT) and average over several calls (to gather some statistics/distribution), the timing may lead you to a different conclusion. For example:
public static void main(String[] args){
//Warm up
System.out.println("Warm up");
for ( int i = 0; i < 5; i++ ){
dynamic();
constant();
}
System.out.println("Timing...");
//time
long e = 0;
long s = 0;
int total = 5;
for ( int i = 0; i < total; i++ ){
long e1 = dynamic();
System.out.print(e1 + "\t");
e += e1;
long s1 = constant();
System.out.println(s1);
s += s1;
}
System.out.println("Static Avg: " + (s/total));
System.out.println("Dynamic Avg: " + (e/total));
}
private static long dynamic(){
long start2 = System.currentTimeMillis();
List<Double> col = new ArrayList<>();
for (int i = 0; i < 30000000; i++) {
col.add(Math.sqrt(i + 1));
}
long end2 = System.currentTimeMillis();
return end2 - start2;
}
private static long constant(){
long start2 = System.currentTimeMillis();
List<Double> col = new ArrayList<>(30000000);
for (int i = 0; i < 30000000; i++) {
col.add(Math.sqrt(i + 1));
}
long end2 = System.currentTimeMillis();
return end2 - start2;
}
On my system, setting the initial capacity is always faster, though not by orders of magnitude.
Edit: As suggested in a comment, consider reading through How do I write a correct micro-benchmark in Java?
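For anything beyond a quick check, a harness such as JMH handles the warmup, forking and dead-code elimination for you. A minimal sketch of what the two cases could look like (the class and method names here are mine, and it assumes the JMH annotations are on the classpath):
import java.util.ArrayList;
import java.util.List;
import org.openjdk.jmh.annotations.Benchmark;
public class ArrayListCapacityBenchmark {
    private static final int SIZE = 30000000;
    @Benchmark
    public List<Double> dynamic() {
        List<Double> col = new ArrayList<>(); // default capacity, grows as elements are added
        for (int i = 0; i < SIZE; i++) {
            col.add(Math.sqrt(i + 1));
        }
        return col; // returning the list keeps the work from being optimized away
    }
    @Benchmark
    public List<Double> preallocated() {
        List<Double> col = new ArrayList<>(SIZE); // capacity set up front
        for (int i = 0; i < SIZE; i++) {
            col.add(Math.sqrt(i + 1));
        }
        return col;
    }
}
Running it through the JMH runner (or its Maven plugin) gives warmed-up, averaged figures that are far less sensitive to JIT compilation and GC pauses than a single hand-rolled timing.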
I am trying to loop 10 times through a method that searches an array of numbers, captures the run time in nanoseconds, and prints the results. I then want to take the 10 run times and find the average and standard deviation.
Is there a way to capture the time of each of the 10 runs and use those results to find my average and standard deviation?
This is what I have so far:
public class Search {
public static int Array[] = new int[100];
//Building my array with 100 numbers in sequential order
public static void createArray(){
int i = 0;
for(i = 0; i<Array.length; i++)
Array[i] = i + 1;
int check[] = {5, 15, 12};
int target = check[2];
boolean found = false;
int j = 0;
long startTime = System.nanoTime();
for(j=0; j<Array.length;j++){
if(Array[j] == target){
long endTime = System.nanoTime();
System.out.print(endTime - startTime + "ms" + "\t\t");
found = true;
break;
}
}
if(found){
//System.out.println("got you! "+ target + " is at index "+ j +"\t");..... just to test if it was working
}
else{
System.out.println("not available");
}
}
// Printing header
public static void main(String[]args){
System.out.print("First run\tSecond run\tThird run\tFourth run\tFifth run\tSixth run\tSeventh run\tEight run\tNinth run\tTenth run\tAverage \tStandard deviation\n");
// looping through the method 10 times
int i=0;
while(i<10){
createArray();
i++;
}
}
}
Try:
long sum = 0;
long sumSquare = 0;
for(int c = 0 ; c < 10 ; c++) {
long start = System.nanoTime();
// do work
long end = System.nanoTime();
sum += end - start;
sumSquare += Math.pow(end - start, 2);
}
double average = (sum * 1D) / 10;
double variance = (sumSquare * 1D) / 10 - Math.pow(average, 2);
double std = Math.sqrt(variance);
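This works because of the identity Var(X) = E[X^2] - (E[X])^2, so only two running totals have to be kept inside the loop. One caveat, assuming the measurements stay in nanoseconds: for very long-running work the squared terms can overflow a long, in which case accumulating sumSquare in a double is safer.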
Try creating an array list of size 10 like:
private static List<Long> times = new ArrayList<>(10);
And then, when you find the element, just add endTime - startTime to the list like:
times.add(..);
And once that's done, in your main method you could compute the sum and average like:
long totalTime = 0;
for (Long time : times) {
totalTime += time;
}
//print average by dividing totalTime by 10.
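Continuing from that loop, a minimal sketch of the remaining arithmetic (this computes the population standard deviation; divide by times.size() - 1 instead if you want the sample version):
double average = totalTime / (double) times.size();
double sumSquaredDiffs = 0;
for (Long time : times) {
    double diff = time - average;
    sumSquaredDiffs += diff * diff;
}
double stdDev = Math.sqrt(sumSquaredDiffs / times.size());
System.out.println("Average: " + average + " ns");
System.out.println("Standard deviation: " + stdDev + " ns");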
How can I find the largest square number (i.e. 4, 9, 16) smaller than a given int n efficiently? I have the following attempt:
int square = (int)Math.sqrt(number);
return square*square;
But it has the obvious inefficiency of getting a square root just so we can square it.
Up front: It should be noted that processors capable of doing sqrt as a machine instruction will be fast enough. No doubt, its (micro)program uses Newton-Raphson, and this algorithm is of quadratic convergence, doubling the number of accurate digits with each iteration.
So, ideas like the following one aren't really worth pursuing, although they use nice properties of squares, etc. (see the second proposal further down).
// compute the square root of the biggest power of four (an even power of two) that is < n
public static int pcomp( int n ){
long p2 = 1;
int i = 0;
while( p2 < n ){
p2 <<= 2;
i += 2;
}
p2 >>= 2;
i -= 2;
return (int)(p2 >>= i/2);
}
public static int squareLowerThan( int n ){
int p = pcomp(n);
int p2 = p*p; // biggest power of two that is a square < n
int d = 1; // increase using odd numbers until n is exceeded
while( p2 + 2*p + d < n ){
p2 += 2*p + d;
d += 2;
}
return p2;
}
But I'm sure that Newton's algorithm is faster. Quadratic convergence, remember.
public static int sqrt( int n ){
int x = n;
while( true ){
int y = (x + n/x)/2;
if( y >= x ) return x;
x = y;
}
}
This returns the integer square root; return x*x to get the largest square that does not exceed n.
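For example, a small wrapper (the method name is mine, not part of the answer) that applies that last step and also steps the root down by one when n is itself a perfect square, so the result stays strictly below n:
public static int largestSquareBelow(int n) {
    int x = sqrt(n);   // integer square root, i.e. floor(sqrt(n))
    if (x * x == n) {
        x--;           // n is a perfect square, so step down to get a strictly smaller one
    }
    return x * x;
}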
A straightforward alternative that simply counts up (it needs about sqrt(n) iterations):
int largestSquare(int n) {
int i = 0;
while ((i+1)*(i+1) < n) {
++i;
}
return i*i;
}
There is a Newton's-method algorithm for computing the integer square root m in the link below; what you need is m^2 instead of m.
https://math.stackexchange.com/questions/34235/algorithm-for-computing-square-root-of-a-perfect-square-integer
Even if you want to find the square directly instead of finding m first, I don't think it will be faster than this.
And working code is here:
public static int squareLessThanN(int N)
{
int x=N;
int y=(x+N/x)/2;
while(y<x)
{
x=y;
y=(x+N/x)/2;
}
return x*x;
}
But the built-in square root seems to be faster anyway. I just measured the runtime for both:
class Square{
public static void main(String[] args)
{
long startTime = System.currentTimeMillis();
System.out.println(squareLessThanN(149899437943L));
long endTime = System.currentTimeMillis();
long totalTime = endTime - startTime;
System.out.println("Running time is "+totalTime);
startTime = System.currentTimeMillis();
System.out.println(normal(149899437943L));
endTime = System.currentTimeMillis();
totalTime = endTime - startTime;
System.out.println("Running time is "+totalTime);
}
public static long squareLessThanN(long N)
{
long x=N;
long y=(x+N/x)/2;
while(y<x)
{
x=y;
y=(x+N/x)/2;
}
return x*x;
}
public static long normal(long N)
{
long square = (long)Math.sqrt(N);
return square*square;
}
}
And the output is
149899060224
Running time is 1
149899060224
Running time is 0
I wrote two programs that create a random int[] and then generate random ints to search for. Well, really I only wrote one program, then modified the search method (like I'm supposed to for OOP, right?). I'm using System.nanoTime() to calculate the time elapsed during the search. I get values of 10,000-13,000 consistently for the linear search, and MOSTLY times less than 5,000 for the binary search (averaging in the sort time across all the searches; less than 4,000 typically for the search alone). Occasionally, though, it will throw back times over 13,000, which is longer than all but the very longest linear searches.
I have Java's priority set to Real Time in the Task Manager (Windows). What would cause the search time to triple or more? FWIW, I get about 1 spike in linear for every 3 or 4 in binary, and the linear spike is roughly twice the normal time (22,000 was the worst so far). Is it something with the way I've written the code?
If you don't want to read through the whole thing: in both cases I record the time in the line before calling the search method and the line after, so you can skip to the search method if you think that might be the problem. The reporting of that number is copy/pasted from one program to the other, but the binary search program has some extra reporting that I thought would be interesting. Thanks in advance.
import java.util.Random;
public class LinearSearchTimer{
public static void main(String[] args){
int[] numbers;
final int MAX_RUNS;
double time1, time2, timeSpent, timeTotal;
int searchedNumber;
Random rand = new Random();
boolean result;
boolean[] resultArr;
int resultCounter;
double pctTrue;
MAX_RUNS = 1000;
numbers = new int[MAX_RUNS];
resultArr = new boolean[MAX_RUNS];
resultCounter = 0;
timeTotal = 0;
//build array with random ints
for(int k = 0; k < numbers.length; k++){
numbers[k] = rand.nextInt(10001);
}
//generate random number and search array for it, then store to result array
for(int i = 0; i < numbers.length; i++){
searchedNumber = rand.nextInt(10001);
time1 = System.nanoTime();
result = search(numbers, searchedNumber);
time2 = System.nanoTime();
resultArr[resultCounter] = result;
resultCounter++;
timeSpent = time2 - time1;
timeTotal += timeSpent;
}
pctTrue = resultPercent(resultArr);
System.out.println("Avg time : " + timeTotal / MAX_RUNS);
System.out.println("Percent found: " + pctTrue * 100);
}
//search algorithm, returns true if found, false if not
public static boolean search(int[] numbers, int searchedNumber){
int i = 0;
boolean found = false;
while (found == false && i < numbers.length){
if (numbers[i] == searchedNumber)
found = true;
i++;
}
return found;
}
public static double resultPercent(boolean[] arr){
int trueCount = 0;
double result;
double runs;
for(boolean element : arr){
if (element == true)
trueCount++;
}
runs = arr.length;//for calculating percentage as a forced double
result = trueCount / runs;
return result;
}
}
and then...
import java.util.Arrays;
import java.util.Random;
public class BinarySearchTimer{
public static void main(String[] args){
//declare variables
int[] numbers;
final int MAX_RUNS;
double time1, time2, timeSpent, timeTotal;
int searchedNumber;
Random rand;
boolean result;
boolean[] resultArr;
int resultCounter;
double pctTrue;
double timeSort1, timeSort2, timeSortSpent;
//instantiate variables
MAX_RUNS = 1000;
numbers = new int[MAX_RUNS];
resultArr = new boolean[MAX_RUNS];
resultCounter = 0;
timeTotal = 0;
rand = new Random();
//build array with random ints
for(int k = 0; k < numbers.length; k++){
numbers[k] = rand.nextInt(10001);
}
//sort array
timeSort1 = System.nanoTime();
Arrays.sort(numbers);
timeSort2 = System.nanoTime();
timeSortSpent = timeSort2 - timeSort1;
//generate random number and call search method
for(int i = 0; i < numbers.length; i++){
searchedNumber = rand.nextInt(10001);
//get start time
time1 = System.nanoTime();
result = search(numbers, searchedNumber, 0, numbers.length - 1);
//get end time
time2 = System.nanoTime();
resultArr[resultCounter] = result;
resultCounter++;
timeSpent = time2 - time1;
timeTotal += timeSpent;
}
pctTrue = resultPercent(resultArr);
System.out.println("Avg time : " + timeTotal / MAX_RUNS);
System.out.println("Percent found: " + pctTrue * 100);
System.out.println("Actual sort time: " + timeSortSpent);
System.out.println("Sort time shared per search: " + timeSortSpent / MAX_RUNS);
System.out.println("Effective average search time: " + (timeTotal / MAX_RUNS + timeSortSpent / MAX_RUNS));
}
//search algorithm, returns true if found, false if not
public static boolean search(int[] numbers, int searchedNumber, int low, int high){
int mid = (high + low) / 2;
if(low > high)
return false;
if(numbers[mid] == searchedNumber){
return true;
}
else if(searchedNumber < numbers[mid]){
mid--;
return search(numbers, searchedNumber, low, mid);
}
else if(searchedNumber > numbers[mid]){
mid++;
return search(numbers, searchedNumber, mid, high);
}
return false;
}
public static double resultPercent(boolean[] arr){
int trueCount = 0;
double result;
double runs;
for(boolean element : arr){
if (element == true)
trueCount++;
}
runs = arr.length;//for calculating percentage as a forced double
result = trueCount / runs;
return result;
}
}
I know this may be a stupid question, maybe the most stupid question today, but I have to ask it: Have I invented this sorting algorithm?
Yesterday, I had a little inspiration about an exchange-based sorting algorithm. Today, I implemented it, and it worked.
It probably already exists, since there are many not-so-popular sorting algorithms out there that have little or no information about them, and almost no implementations of them exist.
Description: Basically, this algorithm takes an item, then a pair, then an item again... until the end of the list. For each item/pair, it compares EVERY two items at the same radius (distance) from the central item or pair, exchanging them if needed, until a border of the array is reached. Repeat this for each item/pair of the list.
An English-based pseudo-code:
FOR i from 0 to the last index of Array
L index is i - 1
R index is i + 1
//Odd case, where i is the center
WHILE (L is in array range and R is in array range)
IF item Array[L] is greater than Array[R]
EXCHANGE item Array[L] with Array[R]
END-IF
ADD 1 to R
SUBTRACT 1 from L
END-WHILE
//Even case, where i is not the center
L index is now i
R index in now i + 1
WHILE (L is in array range and R is in array range)
IF item Array[L] is greater than Array[R]
EXCHANGE Array[L] with Array[R]
END-IF
ADD 1 to R
SUBTRACT 1 from L
END-WHILE
END FOR
This is the implementation in Java:
//package sorting;
public class OrbitSort {
public static void main(String[] args) {
int[] numbers ={ 15, 8, 6, 3, 11, 1, 2, 0, 14, 13, 7, 9, 4, 10, 5, 12 };
System.out.println("Original list:");
display(numbers);
sort(numbers);
System.out.println("\nSorted list:");
display(numbers);
}
//Sorting algorithm
public static void sort(int[] array) {
for(int i = 0; i < array.length; i++){
int L = i - 1;
int R = i + 1;
//Odd case (with a central item)
while(L >= 0 && R < array.length){
if(array[L] > array[R])
swap(array, L, R);
L--;
R++;
}
//Even case (with no central item)
L = i;
R = i + 1;
while(L >= 0 && R < array.length) {
if(array[L] > array[R])
swap(array, L, R);
L--;
R++;
}
}
}
//Swap two items in array.
public static void swap(int[] array, int x, int y) {
int temp = array[x];
array[x] = array[y];
array[y] = temp;
}
//Display items
public static void display(int[] numbers){
for(int i: numbers)
System.out.print(" " + i);
System.out.println();
}
}
I know it can be shorter, but it's just an early implementation.
It probably runs in O(n^2), but I'm not sure.
So, what do you think? Does it already exist?
To me, it looks like a modified bubble sort algo, which may perform better for certain arrangements of input elements.
Although not necessarily fair, I did a benchmark with warmup cycles using your input array, comparing:
java.util.Arrays.sort(), which is a dual-pivot quicksort implementation for primitive arrays
BubbleSort.sort(), a java implementation of the bubble sort algo
OrbitSort.sort(), your algo
Results:
input size: 8192
warmup iterations: 32
Arrays.sort()
iterations : 10000
total time : 4940.0ms
avg time : 0.494ms
BubbleSort.sort()
iterations : 100
total time : 8360.0ms
avg time : 83.6ms
OrbitSort.sort()
iterations : 100
total time : 8820.0ms
avg time : 88.2ms
Of course, the performance depends on input size and arrangement.
Straightforward code:
package com.sam.tests;
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.Callable;
public class SortBenchmark {
public static class OrbitSort {
// Sorting algorithm
public static void sort(int[] array) {
for (int i = 0; i < array.length; i++) {
int L = i - 1;
int R = i + 1;
// Odd case (with a central item)
while (L >= 0 && R < array.length) {
if (array[L] > array[R])
swap(array, L, R);
L--;
R++;
}
// Even case (with no central item)
L = i;
R = i + 1;
while (L >= 0 && R < array.length) {
if (array[L] > array[R])
swap(array, L, R);
L--;
R++;
}
}
}
// Swap two items in array.
public static void swap(int[] array, int x, int y) {
int temp = array[x];
array[x] = array[y];
array[y] = temp;
}
}
public static class BubbleSort {
public static void sort(int[] numbers) {
boolean swapped = true;
for (int i = numbers.length - 1; i > 0 && swapped; i--) {
swapped = false;
for (int j = 0; j < i; j++) {
if (numbers[j] > numbers[j + 1]) {
int temp = numbers[j];
numbers[j] = numbers[j + 1];
numbers[j + 1] = temp;
swapped = true;
}
}
}
}
}
public static class TestDataFactory {
public static enum ElementOrder {
Ascending, Descending, Random
}
public static int[] createIntArray(final int size, final ElementOrder elementOrder) {
int[] array = new int[size];
switch (elementOrder) {
case Ascending:
for (int i = 0; i < size; ++i)
array[i] = i;
break;
case Descending:
for (int i = 0; i < size; ++i)
array[i] = size - i - 1;
break;
case Random:
default:
Random rg = new Random(System.nanoTime());
for (int i = 0; i < size; ++i)
array[i] = rg.nextInt(size);
break;
}
return array;
}
}
public static class Benchmark {
// misc constants
public static final int NANOS_PER_MSEC = 1000000;
// config constants
public static final int BIGDECIMAL_PRECISION = 6;
// constant defaults
public static final long AUTOTUNING_MIN_ITERATIONS_DEFAULT = 1;
public static final long AUTOTUNING_MIN_DURATION_DEFAULT = 125;
public static final long BENCHMARK_MIN_ITERATIONS_DEFAULT = 1;
public static final long BENCHMARK_MAX_ITERATIONS_DEFAULT = Integer.MAX_VALUE;
public static final long BENCHMARK_TARGET_DURATION_DEFAULT = 125;
// private static final ThreadMXBean threadBean =
// ManagementFactory.getThreadMXBean();
public static final long getNanoTime() {
// return threadBean.getCurrentThreadCpuTime();// not good, runs at
// some time slice resolution
return System.nanoTime();
}
public static class Result {
public String name;
public long iterations;
public long totalTime; // nanoseconds
public Result(String name, long iterations, long startTime, long endTime) {
this.name = name;
this.iterations = iterations;
this.totalTime = endTime - startTime;
}
@Override
public String toString() {
final double totalTimeMSecs = ((double) totalTime) / NANOS_PER_MSEC;
final BigDecimal avgTimeMsecs = new BigDecimal(this.totalTime).divide(new BigDecimal(this.iterations).multiply(new BigDecimal(NANOS_PER_MSEC)),
BIGDECIMAL_PRECISION, RoundingMode.HALF_UP);
final String newLine = System.getProperty("line.separator");
StringBuilder sb = new StringBuilder();
sb.append(name).append(newLine);
sb.append(" ").append("iterations : ").append(iterations).append(newLine);
sb.append(" ").append("total time : ").append(totalTimeMSecs).append(" ms").append(newLine);
sb.append(" ").append("avg time : ").append(avgTimeMsecs).append(" ms").append(newLine);
return sb.toString();
}
}
public static <T> Result executionTime(final String name, final long iterations, final long warmupIterations, final Callable<T> test) throws Exception {
// vars
#SuppressWarnings("unused")
T ret;
long startTime;
long endTime;
// warmup
for (long i = 0; i < warmupIterations; ++i)
ret = test.call();
// actual benchmark iterations
{
startTime = getNanoTime();
for (long i = 0; i < iterations; ++i)
ret = test.call();
endTime = getNanoTime();
}
// return result
return new Result(name, iterations, startTime, endTime);
}
/**
* Auto tuned execution time measurement for test callbacks with steady
* execution time
*
* @param name
* @param test
* @return
* @throws Exception
*/
public static <T> Result executionTimeAutotuned(final String name, final Callable<T> test) throws Exception {
final long autoTuningMinIterations = AUTOTUNING_MIN_ITERATIONS_DEFAULT;
final long autoTuningMinDuration = AUTOTUNING_MIN_DURATION_DEFAULT;
final long benchmarkTargetDuration = BENCHMARK_TARGET_DURATION_DEFAULT;
final long benchmarkMinIterations = BENCHMARK_MIN_ITERATIONS_DEFAULT;
final long benchmarkMaxIterations = BENCHMARK_MAX_ITERATIONS_DEFAULT;
// vars
#SuppressWarnings("unused")
T ret;
final int prevThreadPriority;
long warmupIterations = 0;
long autoTuningDuration = 0;
long iterations = benchmarkMinIterations;
long startTime;
long endTime;
// store current thread priority and set it to max
prevThreadPriority = Thread.currentThread().getPriority();
Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
// warmup and iteration count tuning
{
final long autoTuningMinTimeNanos = autoTuningMinDuration * NANOS_PER_MSEC;
long autoTuningConsecutiveLoops = 1;
double avgExecutionTime = 0;
do {
{
startTime = getNanoTime();
for (long i = 0; i < autoTuningConsecutiveLoops; ++i, ++warmupIterations) {
ret = test.call();
}
endTime = getNanoTime();
autoTuningDuration += (endTime - startTime);
}
avgExecutionTime = ((double) autoTuningDuration) / ((double) (warmupIterations));
if ((autoTuningDuration >= autoTuningMinTimeNanos) && (warmupIterations >= autoTuningMinIterations)) {
break;
} else {
final double remainingAutotuningIterations = ((double) (autoTuningMinTimeNanos - autoTuningDuration)) / avgExecutionTime;
autoTuningConsecutiveLoops = Math.max(1, Math.min(Integer.MAX_VALUE, (long) Math.ceil(remainingAutotuningIterations)));
}
} while (warmupIterations < Integer.MAX_VALUE);
final double requiredIterations = ((double) benchmarkTargetDuration * NANOS_PER_MSEC) / avgExecutionTime;
iterations = Math.max(1, Math.min(benchmarkMaxIterations, (long) Math.ceil(requiredIterations)));
}
// actual benchmark iterations
{
startTime = getNanoTime();
for (long i = 0; i < iterations; ++i)
ret = test.call();
endTime = getNanoTime();
}
// restore previous thread priority
Thread.currentThread().setPriority(prevThreadPriority);
// return result
return new Result(name, iterations, startTime, endTime);
}
}
public static void executeBenchmark(int inputSize, ArrayList<Benchmark.Result> results) {
// final int[] inputArray = { 15, 8, 6, 3, 11, 1, 2, 0, 14, 13, 7, 9, 4,
// 10, 5, 12 };
final int[] inputArray = TestDataFactory.createIntArray(inputSize, TestDataFactory.ElementOrder.Random);
try {
// compare against Arrays.sort()
{
final int[] ref = inputArray.clone();
Arrays.sort(ref);
{
int[] temp = inputArray.clone();
BubbleSort.sort(temp);
if (!Arrays.equals(temp, ref))
throw new Exception("BubbleSort.sort() failed");
}
{
int[] temp = inputArray.clone();
OrbitSort.sort(temp);
if (!Arrays.equals(temp, ref))
throw new Exception("OrbitSort.sort() failed");
}
}
results.add(Benchmark.executionTimeAutotuned("Arrays.sort()", new Callable<Void>() {
@Override
public Void call() throws Exception {
int[] temp = Arrays.copyOf(inputArray, inputArray.length);
Arrays.sort(temp);
return null;
}
}));
results.add(Benchmark.executionTimeAutotuned("BubbleSort.sort()", new Callable<Void>() {
@Override
public Void call() throws Exception {
int[] temp = Arrays.copyOf(inputArray, inputArray.length);
BubbleSort.sort(temp);
return null;
}
}));
results.add(Benchmark.executionTimeAutotuned("OrbitSort.sort()", new Callable<Void>() {
@Override
public Void call() throws Exception {
int[] temp = Arrays.copyOf(inputArray, inputArray.length);
OrbitSort.sort(temp);
return null;
}
}));
} catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
ArrayList<Benchmark.Result> results = new ArrayList<Benchmark.Result>();
for (int i = 16; i <= 16384; i <<= 1) {
results.clear();
executeBenchmark(i, results);
System.out.println("input size : " + i);
System.out.println("");
for (Benchmark.Result result : results) {
System.out.print(result.toString());
}
System.out.println("----------------------------------------------------");
}
}
}
It is O(n^2) (assuming it works; I am not sure about that). As to whether it already exists: maybe. It is not really original, since it can be considered a variation of a trivial sorting implementation, but I doubt there is any published algorithm that is exactly the same as this one, specifically one with two consecutive inner loops.
I am not saying it is without merit; there can be a use case for which its behavior is uniquely efficient (maybe where reading is much faster than writing, and cache behavior benefits its access pattern).
To see why it is O(n^2), look at the outer-loop iterations around the middle of the array: for roughly a third of the i values, each inner loop runs for at least about n/3 steps, so the total work is at least on the order of n*n/9, i.e. quadratic.
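If you want to see the quadratic growth empirically rather than by argument, counting the inner-loop comparisons is enough. A hypothetical helper (not part of the benchmark above) could look like this:
// counts how many array[L] > array[R] comparisons OrbitSort.sort() makes for a given length n
public static long countComparisons(int n) {
    long comparisons = 0;
    for (int i = 0; i < n; i++) {
        comparisons += Math.min(i, n - 1 - i);     // odd case: L starts at i - 1, R at i + 1
        comparisons += Math.min(i + 1, n - 1 - i); // even case: L starts at i, R at i + 1
    }
    return comparisons; // grows roughly like n * n / 2
}
Doubling n roughly quadruples that count, which matches the O(n^2) behaviour observed in the timings.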