public int lengthOfLongestSubstring(String s) {
    // Use a Collection to check whether a duplicate character exists.
    int count = 0;
    int max = 0;
    ArrayList<String> str = new ArrayList<>();
    for (int i = 0; i < s.length(); i++) {
        String temp = Character.toString(s.charAt(i));
        if (str.contains(temp)) {
            int idx = str.indexOf(temp);
            // Clear all elements up to and including the duplicate,
            // so there is no collision in the new list.
            for (int j = 0; j <= idx; j++) {
                str.remove(0);
            }
            str.add(temp);
            count = str.size(); // update the count to the size of the new list
        } else {
            str.add(temp);
            count++;
            if (count > max) {
                max = count;
            }
        }
    }
    return max;
}
Hi, what are the time and space complexities here? I think the space complexity is O(n), but the time complexity depends on the number of duplicate characters, because we remove elements only when we meet a repeated character.
Thank you!
The space complexity is O(n), as you said.
But the time complexity will be O(n^2), because you are doing a linear search on the ArrayList every time to check whether the character is repeated, at this line:
if(str.contains(temp))
which is O(k), where k is the size of the ArrayList.
Consider an example where all characters are unique:
for i = 1 you will take 1 operation
for i = 2 you will take 2 operations
for i = n you will take n operations
Total time complexity = O(1 + 2 + ... + n) = O(n^2)
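For comparison, a minimal sketch of a common way to avoid the linear contains() scan: slide a window over the string and track its characters in a HashSet, so each membership check is O(1) and the whole method is O(n). (This is an alternative to the posted code, not a fix to it; it assumes java.util.Set and java.util.HashSet are imported.)
public int lengthOfLongestSubstring(String s) {
    Set<Character> window = new HashSet<>();
    int max = 0;
    int left = 0;
    for (int right = 0; right < s.length(); right++) {
        // Shrink the window from the left until the new character is no longer a duplicate.
        while (window.contains(s.charAt(right))) {
            window.remove(s.charAt(left));
            left++;
        }
        window.add(s.charAt(right));
        max = Math.max(max, right - left + 1);
    }
    return max;
}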
I have a general question about the complexity of my code. Since I am doing some problems from Codeforces, complexity matters to me for the first time.
Problemset: https://codeforces.com/problemset/problem/1554/A
Code:
import java.util.Arrays;
import java.util.Scanner;

public class main {

    public static void main(String[] args) {
        doCalculation();
    }

    public static void doCalculation() {
        Scanner sc = new Scanner(System.in);
        long n = sc.nextLong();
        for (int i = 0; i < n; i++) {
            int numbersInRow = sc.nextInt();
            long[] numbersToRead = new long[numbersInRow];
            for (int j = 0; j < numbersInRow; j++) {
                numbersToRead[j] = sc.nextLong();
            }
            long maxMultiplication = 0;
            for (int k = 0; k < numbersInRow; k++) {
                for (int m = k + 1; m < numbersInRow; m++) {
                    // From here on: from 1 to 2; from 1 to 3; from 2 to 3
                    long[] subarray = subArray(numbersToRead, k, m);
                    long min = Arrays.stream(subarray).min().getAsLong();
                    long max = Arrays.stream(subarray).max().getAsLong();
                    long multiplicationEveryArray = min * max;
                    if (multiplicationEveryArray > maxMultiplication) {
                        maxMultiplication = multiplicationEveryArray;
                    }
                }
            }
            System.out.println(maxMultiplication);
        }
    }

    public static long[] subArray(long[] array, int beg, int end) {
        return Arrays.copyOfRange(array, beg, end + 1);
    }
}
I think copyOfRange has O(n) complexity, so subArray has the same complexity.
What are some ways to improve the complexity without taking a different approach?
Usually, these problems are set up so that the brute-force method would take a million years to finish.
Looking at the examples, my first intuition was that you only need to look at neighboring numbers, making this an O(n) problem.
If you have f(1,2) and f(2,3), then f(1,3) will definitely be the max of f(1,2) and f(2,3). The product of the first and last elements cannot be the maximum: for that pair to be the minimum and maximum of the range, at least one of them has to be smaller than or equal to the element in the middle, which makes their product smaller than or equal to the product of one of the "just neighbors" pairs.
You can continue by induction to reach the conclusion that f(1,n+1) will be max(f(1,2), f(2,3), ..., f(n,n+1)).
In order to get rid of the O(n^2) loop, try:
for (int k = 0; k < numbersInRow - 1; k++) {
    final long product = numbersToRead[k] * numbersToRead[k + 1];
    if (product > maxMultiplication) {
        maxMultiplication = product;
    }
}
(Note that product must be a long, since the values are read as long.)
There are multiple problems with this code:
I think you are using too much space for this question. Why are you creating a new array every time before finding the max and min?
And you do that inside a nested loop, so it will cause memory pressure later.
Across all iterations of the nested loops you allocate on the order of n^2 temporary arrays, copying O(n^3) elements in total.
long[] subarray = subArray(numbersToRead, k, m);
long min = Arrays.stream(subarray).min().getAsLong();
long max = Arrays.stream(subarray).max().getAsLong();
First of all, for each pair you are doing all of this in roughly 3n time: O(n) for copying, O(n) for finding the minimum, and O(n) for finding the maximum.
I suggest, if possible, finding a better algorithm for the overall problem.
If that's not possible, try finding the maximum and minimum in a single loop, and try avoiding this function of yours:
public static long[] subArray(long[] array, int beg, int end) {
return Arrays.copyOfRange(array, beg, end + 1);
}
Instead of creating a new array every time, try iterating over the same array, for example by keeping a running minimum and maximum as sketched below.
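For illustration, a hedged sketch of that idea: extend the range [k, m] one element at a time and update the running min and max in O(1), so no subarray copy or re-scan is needed (this brings the overall cost down to O(n^2) instead of O(n^3); it reuses the variable names from the question):
long maxMultiplication = 0;
for (int k = 0; k < numbersInRow; k++) {
    long min = numbersToRead[k];
    long max = numbersToRead[k];
    for (int m = k + 1; m < numbersInRow; m++) {
        // Extending the range by one element only changes min/max by at most that element.
        min = Math.min(min, numbersToRead[m]);
        max = Math.max(max, numbersToRead[m]);
        maxMultiplication = Math.max(maxMultiplication, min * max);
    }
}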
Avoid streams unless there is some function that can iterate over the original array without creating a new one, though I doubt that is the case. I think there is some kind of function which can limit the search for the minimum and maximum to a range of the array.
Something like :
Arrays.stream(numbersToRead).range(k, m)
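For reference, Arrays.stream does have an overload that takes a start index (inclusive) and an end index (exclusive), so the min and max of a range can be computed without copying the array (still O(m - k) per range, but with no extra allocation); a small sketch using that overload:
LongSummaryStatistics stats = Arrays.stream(numbersToRead, k, m + 1).summaryStatistics();
long min = stats.getMin();
long max = stats.getMax();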
This Java code is taken from GeeksforGeeks; it finds the longest consecutive subsequence, and the site says the running time complexity of the code is O(n). But I couldn't understand why it is O(n) and not O(n^2), since it contains a nested loop.
// Java program to find longest
// consecutive subsequence
import java.io.*;
import java.util.*;

class ArrayElements {
    // Returns length of the longest
    // consecutive subsequence
    static int findLongestConseqSubseq(int arr[], int n)
    {
        HashSet<Integer> S = new HashSet<Integer>();
        int ans = 0;

        // Hash all the array elements
        for (int i = 0; i < n; ++i)
            S.add(arr[i]);

        // Check each possible sequence from the start,
        // then update optimal length
        for (int i = 0; i < n; ++i) {
            // If current element is the starting
            // element of a sequence
            if (!S.contains(arr[i] - 1)) {
                // Then check for next elements
                // in the sequence
                int j = arr[i];
                while (S.contains(j))
                    j++;

                // Update optimal length if this
                // length is more
                if (ans < j - arr[i])
                    ans = j - arr[i];
            }
        }
        return ans;
    }

    // Driver Code
    public static void main(String args[])
    {
        int arr[] = { 1, 9, 3, 10, 4, 20, 2 };
        int n = arr.length;
        System.out.println(
            "Length of the Longest consecutive subsequence is "
            + findLongestConseqSubseq(arr, n));
    }
}
// This code is contributed by Aakash Hasija
S.contains(j) is O(1) because checking containment in a hash set is constant time, and j++ is obviously O(1).
The inner while loop is not O(1) on its own, but it only runs when arr[i] is the start of a consecutive sequence, and each of its iterations steps over a distinct element of that sequence. Across the whole outer loop, every element is stepped over at most once, so all the inner-loop work combined is O(n).
Thus the total cost is O(n) for the outer loop plus O(n) for all the inner-loop work, i.e. O(n).
Because this condition guarantees that arr[i] is the beginning of a sequence:
if (!S.contains(arr[i] - 1))
So the elements of the current sequence will not be counted again later.
The worst case is when all elements are consecutive, for example [1,2,3,4,5].
The cost is O(n) for the first element and O(1) for each of the other elements:
n for the first element (the while loop walks the whole sequence once)
1 + ... + 1 = n for the remaining elements (the check fails immediately)
Total = n + n = 2n = O(n)
The objective is to create a function that accepts two arguments, an array of integers and an integer that is our target; the function should return the two indexes of the array elements that add up to the target. We cannot sum an element with itself, and we should assume that the given array always contains an answer.
I solved this code kata exercise using a for loop and a while loop. The time complexity of the for loop, where N is the total number of elements in the array, is linear, O(N), but for each element there is a while loop that also grows linearly.
Does this mean that the total time complexity of this code is O(N²)?
public int[] twoSum(int[] nums, int target) {
    int[] answer = new int[2];
    for (int i = 0; i <= nums.length - 1; i++) {
        int finder = 0;
        int index = 0;
        while (index <= nums.length - 1) {
            if (nums[i] != nums[index]) {
                finder = nums[i] + nums[index];
                if (finder == target) {
                    answer[0] = index;
                    answer[1] = i;
                }
            }
            index++;
        }
    }
    return answer;
}
How would you optimize this for time and space complexity?
Does this mean that the total time complexity of this code is O(N²)?
Yes, your reasoning is correct, and your code is indeed O(N²) in time complexity.
How would you optimize this for time and space complexity?
You can use an auxiliary data structure, or sort the array and perform lookups on it.
One simple solution, which is O(n) in the average case, is to use a hash table and insert elements while you traverse the list. Then, assuming the element currently traversed is x, you look up target - x in your hash table.
I am leaving the implementation to you, I am sure you can do it and learn a lot in the process!
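For reference, a minimal sketch of the hash-map approach described above (one common way to write it, mapping each value to its index; assumes java.util.Map and java.util.HashMap are imported):
public int[] twoSum(int[] nums, int target) {
    Map<Integer, Integer> seen = new HashMap<>();
    for (int i = 0; i < nums.length; i++) {
        int complement = target - nums[i];
        // If we have already seen the complement, we found the pair.
        if (seen.containsKey(complement)) {
            return new int[] { seen.get(complement), i };
        }
        seen.put(nums[i], i);
    }
    return new int[2]; // the problem statement guarantees an answer exists
}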
I was recently asked the following interview question over the phone:
Given an array of integers, produce an array whose values are the
product of every other integer excluding the current index.
Example:
[4, 3, 2, 8] -> [3*2*8, 4*2*8, 4*3*8, 4*3*2] -> [48, 64, 96, 24]
I came up with the code below:
public static BigInteger[] calcArray(int[] input) throws Exception {
    if (input == null) {
        throw new IllegalArgumentException("input is null");
    }
    BigInteger product = calculateProduct(input);
    BigInteger[] result = new BigInteger[input.length];
    for (int i = 0; i < input.length; i++) {
        result[i] = product.divide(BigInteger.valueOf(input[i]));
    }
    return result;
}

private static BigInteger calculateProduct(int[] input) {
    BigInteger result = BigInteger.ONE;
    for (int i = 0; i < input.length; i++) {
        result = result.multiply(BigInteger.valueOf(input[i]));
    }
    return result;
}
Complexity:
Time Complexity: O(n)
Space Complexity: O(n)
Can we do this in O(n) complexity without division? Also, is there any way to reduce the space complexity if we use a simple primitive integer array?
Consider an element located at index i. Look to its left: say we have the product of the elements up to index i-1; call it leftProduct[i], the product of all elements to the left of index i. Similarly, let rightProduct[i] be the product of all elements to the right of index i.
Then the result for that index is output[i] = leftProduct[i] * rightProduct[i].
Now think about how to get leftProduct. You simply traverse the array from the start, keep a running product, and at each element store the current running product into leftProduct before multiplying the element in.
Similarly, you can compute rightProduct by traversing the array from the end. Here you can optimize the space by reusing the leftProduct array: update each entry by multiplying it with the running right product.
The below code demonstrates this:
public static int[] getProductsExcludingCurrentIndex(int[] arr) {
    if (arr == null || arr.length == 0) return new int[]{};
    int[] leftProduct = new int[arr.length];
    int runningProduct = 1;
    // Compute the left product at each i
    for (int i = 0; i < arr.length; i++) {
        leftProduct[i] = runningProduct;
        runningProduct = runningProduct * arr[i];
    }
    runningProduct = 1;
    // By reverse traversal, we compute the right product and at the same time update the
    // left product, so it ends up holding leftProduct * rightProduct
    for (int i = arr.length - 1; i >= 0; i--) {
        leftProduct[i] = leftProduct[i] * runningProduct;
        runningProduct = runningProduct * arr[i];
    }
    return leftProduct;
}
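A quick sanity check with the example from the question (driver code added here just for illustration; Arrays.toString needs java.util.Arrays):
public static void main(String[] args) {
    int[] result = getProductsExcludingCurrentIndex(new int[]{ 4, 3, 2, 8 });
    System.out.println(Arrays.toString(result)); // prints [48, 64, 96, 24]
}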
Space complexity is O(n) - we use only the one array leftProduct - and time complexity is O(n).
Space complexity edit:
If you don't count the space used for storing the output, then the extra space is O(1), because we store the output in leftProduct itself.
If you strictly don't want extra space, that entails modifying your input array, and solving this by modifying the input array as you go is not possible, at least as far as I know.
My thought:
Take the product of all the numbers and store it in a variable result.
Now, for each element, the answer is result / arr[i].
So, to avoid the division operator, do a binary search between 1 and result for each element arr[i] to find the quotient q with q * arr[i] == result; that quotient is the answer for arr[i].
Time complexity: O(n * log(result)); space complexity: O(1).
Here is a solution for finding all substrings of a string.
for (int i = 0; i < str.length(); i++) {
    String subStr = "";
    for (int j = i; j < str.length(); j++) {
        subStr = subStr + str.charAt(j);
        System.out.println(subStr);
    }
}
All over the internet I read that the complexity of this code is O(n^2).
However, the + operation is an O(n) operation.
Thus, in my opinion, the complexity should be O(n^3).
In case I am wrong, please correct my understanding.
Appending a character to the string can be an O(1) (amortized) operation if you build the string incrementally, e.g. with a StringBuilder. You get O(n^3) if you also take into account the time needed to print the output with println.
Finding all substrings of a string is O(n^2) (by finding a substring I mean determining its begin and end indexes); it's easy to see because the total number of substrings is O(n^2).
But printing all of them out is O(n^3), simply because the total number of characters to be printed is O(n^3). In your code, each println call adds O(n) work (the + operator should have O(1) amortized complexity if used/implemented properly).
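To see why the printed output is cubic: the substrings starting at index i have lengths 1, 2, ..., n - i, so they contain (n - i)(n - i + 1)/2 characters in total, and summing that over all starting indexes gives on the order of n^3/6 characters, which is O(n^3).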
Finding all substrings of a string the naive way is indeed O(n^2). But the code in the question probably doesn't do that. Here is the corrected version:
for (int i = 0; i < str.length(); ++i) {
    // Initialize with the max length of a substring starting at i
    StringBuilder prefix = new StringBuilder(str.length() - i);
    for (int j = i; j < str.length(); ++j) {
        prefix.append(str.charAt(j)); // this step is O(1)
        System.out.println(prefix);   // this step is supposed to be O(1)
    }
}
The total number of iterations is given by:
Outer loop pass : Inner loop iterations
1st : n
2nd : n - 1
3rd : n - 2
...
(n-1)th : 2
nth : 1
So the total number of iterations is the n iterations of the outer loop plus the sum of the iterations of the inner loop:
n + (1 + 2 + 3 + ... + n) = n + n(n + 1)/2 = O(n^2)