Given a collection of distinct numbers, return all possible permutations.
For example, [1,2,3] has the following permutations:
[ [1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1] ]
My iterative solution is:
public List<List<Integer>> permute(int[] nums) {
    List<List<Integer>> result = new ArrayList<>();
    result.add(new ArrayList<>());
    for (int i = 0; i < nums.length; i++) {
        List<List<Integer>> temp = new ArrayList<>();
        for (List<Integer> a : result) {
            for (int j = 0; j <= a.size(); j++) {
                a.add(j, nums[i]);
                List<Integer> current = new ArrayList<>(a);
                temp.add(current);
                a.remove(j);
            }
        }
        result = new ArrayList<>(temp);
    }
    return result;
}
My recursive solution is:
public List<List<Integer>> permuteRec(int[] nums) {
    List<List<Integer>> result = new ArrayList<List<Integer>>();
    if (nums == null || nums.length == 0) {
        return result;
    }
    makePermutations(nums, result, 0);
    return result;
}

void makePermutations(int[] nums, List<List<Integer>> result, int start) {
    if (start >= nums.length) {
        List<Integer> temp = convertArrayToList(nums);
        result.add(temp);
    }
    for (int i = start; i < nums.length; i++) {
        swap(nums, start, i);
        makePermutations(nums, result, start + 1);
        swap(nums, start, i);
    }
}

private ArrayList<Integer> convertArrayToList(int[] num) {
    ArrayList<Integer> item = new ArrayList<Integer>();
    for (int h = 0; h < num.length; h++) {
        item.add(num[h]);
    }
    return item;
}
By my reckoning, the time complexity (big-O) of my iterative solution is n * n(n+1)/2 ≈ O(n^3).
I am not able to figure out the time complexity of my recursive solution.
Can anyone explain complexity of both?
The recursive solution has a complexity of O(n!), as it is governed by the recurrence T(n) = n * T(n-1) + O(1).
The iterative solution looks like three nested loops, but that does not make it O(n^3): the middle loop runs over every permutation built so far, and the number of those grows factorially. At step i there are i! partial permutations, each extended at i+1 positions with an O(i) list copy, so the total work is O(n * n!), and the solution is correct for any n, not just n = 3.
For n = 3 the counts happen to line up, since n * (n-1) * (n-2) = 3! = n!. In general n! < n^n, yet n! still dominates every polynomial. There is a rather nice relation called Stirling's approximation which makes this precise.
It's the output (which is huge) that matters in this problem, not the routine's implementation. For n distinct items, there are n! permutations to be returned as the answer, and thus we have at least O(n!) complexity.
With the help of Stirling's approximation:
O(n!) = O(n^(n + 1/2) / e^n) = O(sqrt(n) * (n/e)^n)
we can easily see that O(n!) > O(n^c) for any constant c, which is why it doesn't matter that the implementation itself adds another O(n^3), since
O(n!) + O(n^3) = O(n!)
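To make that dominance concrete, a small illustrative comparison (class and method names are mine; BigInteger is used so n! doesn't overflow):

```java
import java.math.BigInteger;

public class FactorialVsCubic {
    // Returns n! as a BigInteger so large values don't overflow.
    static BigInteger factorial(int n) {
        BigInteger f = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            f = f.multiply(BigInteger.valueOf(i));
        }
        return f;
    }

    public static void main(String[] args) {
        // Already at n = 10, n! = 3628800 while n^3 = 1000.
        for (int n = 5; n <= 20; n += 5) {
            BigInteger fact = factorial(n);
            long cube = (long) n * n * n;
            System.out.println("n=" + n + "  n!=" + fact + "  n^3=" + cube);
        }
    }
}
```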
In terms of the number of times the method makePermutations is called, the exact count is:
1 + n + n(n-1) + n(n-1)(n-2) + ...
For n = 3:
1 + 3 + (3*2) + (3*2*1) = 16
This means that for n = 3 the method makePermutations is called 16 times.
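You can verify that count empirically by instrumenting the recursion with a call counter (a sketch; the counter and the `countCalls` helper are my additions, not part of the original code):

```java
import java.util.ArrayList;
import java.util.List;

public class PermutationCallCounter {
    static int calls = 0;

    static void makePermutations(int[] nums, List<List<Integer>> result, int start) {
        calls++; // count every invocation
        if (start >= nums.length) {
            List<Integer> temp = new ArrayList<>();
            for (int v : nums) temp.add(v);
            result.add(temp);
        }
        for (int i = start; i < nums.length; i++) {
            swap(nums, start, i);
            makePermutations(nums, result, start + 1);
            swap(nums, start, i);
        }
    }

    static void swap(int[] nums, int i, int j) {
        int t = nums[i]; nums[i] = nums[j]; nums[j] = t;
    }

    // Builds {1, 2, ..., n}, runs the recursion, and returns the call count.
    static int countCalls(int n) {
        calls = 0;
        int[] nums = new int[n];
        for (int i = 0; i < n; i++) nums[i] = i + 1;
        makePermutations(nums, new ArrayList<>(), 0);
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(countCalls(3)); // 1 + 3 + 6 + 6 = 16
        System.out.println(countCalls(4)); // 1 + 4 + 12 + 24 + 24 = 65
    }
}
```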
And I think the space complexity for an optimal permutations function would be O(n * n!) because there are n! total arrays to return, and each of those arrays is of size n.
Related
I am trying to solve this problem on leetcode https://leetcode.com/problems/factor-combinations/description/
Numbers can be regarded as products of their factors. For example:
8 = 2 x 2 x 2 = 2 x 4.
Write a function that takes an integer n and return all possible combinations of its factors.
While I am able to write the code using a DFS approach, I am having a hard time deriving its worst-case time complexity in terms of the input. Can anyone please help?
public List<List<Integer>> getFactors(int n) {
    List<List<Integer>> result = new ArrayList<List<Integer>>();
    List<Integer> current = new ArrayList<Integer>();
    getFactorsHelper(n, 2, current, result);
    return result;
}

public void getFactorsHelper(int n, int start, List<Integer> current, List<List<Integer>> result) {
    if (n <= 1 && current.size() > 1) {
        result.add(new ArrayList<>(current));
        return;
    }
    for (int i = start; i <= n; i++) {
        if (n % i == 0) {
            current.add(i);
            getFactorsHelper(n / i, i, current, result);
            current.remove(current.size() - 1);
        }
    }
}
I computed the complexity of your code as follows.
Let T(n) be the running time of getFactorsHelper(n, 2).
In the portion below you have a loop with index i:
for (int i = start; i <= n; i++) {
    if (n % i == 0) {
        current.add(i);
        getFactorsHelper(n / i, i, current, result);
        current.remove(current.size() - 1);
    }
}
n is divided by i in each recursive call, so we have:
(first iteration)
getFactorsHelper(n/2,2,current,result) = T(n/2)
(second iteration)
getFactorsHelper(n/3,3,current,result) <= getFactorsHelper(n/3,2,current,result) = T(n/3)
(third iteration)
getFactorsHelper(n/4,4,current,result) <= getFactorsHelper(n/4,2,current,result) = T(n/4)
...
(final iteration)
getFactorsHelper(n/n,n,current,result) <= getFactorsHelper(n/n,2,current,result) = T(n/n) = T(1)
Total cost:
T(n) <= T(n/2) + T(n/3) + T(n/4) + ... + T(1)
Solving this recurrence relation gives the overall bound.
I hope this can help you.
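One way to sanity-check that bound is to count the actual recursive calls for a few inputs; this is the same helper from the question, instrumented with a counter (the counter and `countCalls` are my additions):

```java
import java.util.ArrayList;
import java.util.List;

public class FactorCallCounter {
    static int calls;

    static void getFactorsHelper(int n, int start, List<Integer> current,
                                 List<List<Integer>> result) {
        calls++; // count every invocation
        if (n <= 1 && current.size() > 1) {
            result.add(new ArrayList<>(current));
            return;
        }
        for (int i = start; i <= n; i++) {
            if (n % i == 0) {
                current.add(i);
                getFactorsHelper(n / i, i, current, result);
                current.remove(current.size() - 1);
            }
        }
    }

    static int countCalls(int n) {
        calls = 0;
        getFactorsHelper(n, 2, new ArrayList<>(), new ArrayList<>());
        return calls;
    }

    public static void main(String[] args) {
        // Print the call counts for a few values of n to compare
        // against the recurrence T(n) <= T(n/2) + T(n/3) + ... + T(1).
        for (int n : new int[]{8, 12, 32, 64}) {
            System.out.println("n=" + n + " calls=" + countCalls(n));
        }
    }
}
```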
I'm not able to post the solution in a comment, so I'm posting it as another answer here, @AliSoltani:
https://discuss.leetcode.com/topic/30752/my-short-java-solution-which-is-easy-to-understand
public class Solution {
    public List<List<Integer>> getFactors(int n) {
        List<List<Integer>> ret = new LinkedList<List<Integer>>();
        if (n <= 3) return ret;
        List<Integer> path = new LinkedList<Integer>();
        getFactors(2, n, path, ret);
        return ret;
    }

    private void getFactors(int start, int n, List<Integer> path, List<List<Integer>> ret) {
        for (int i = start; i <= Math.sqrt(n); i++) {
            if (n % i == 0 && n / i >= i) { // The previous factor is no bigger than the next
                path.add(i);
                path.add(n / i);
                ret.add(new LinkedList<Integer>(path));
                path.remove(path.size() - 1);
                getFactors(i, n / i, path, ret);
                path.remove(path.size() - 1);
            }
        }
    }
}
I've got two different methods: one calculates the Fibonacci sequence up to the nth element using iteration, and the other does the same thing recursively.
The program looks like this:
import java.util.Scanner;

public class recursionVsIteration {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        // nth element input
        System.out.print("Enter the last element of Fibonacci sequence: ");
        int n = sc.nextInt();

        // Print out iteration method
        System.out.println("Fibonacci iteration:");
        long start = System.currentTimeMillis();
        System.out.printf("Fibonacci sequence(element at index %d) = %d \n", n, fibIteration(n));
        System.out.printf("Time: %d ms\n", System.currentTimeMillis() - start);

        // Print out recursive method
        System.out.println("Fibonacci recursion:");
        start = System.currentTimeMillis();
        System.out.printf("Fibonacci sequence(element at index %d) = %d \n", n, fibRecursion(n));
        System.out.printf("Time: %d ms\n", System.currentTimeMillis() - start);
    }

    // Iteration method
    static int fibIteration(int n) {
        int x = 0, y = 1, z = 1;
        for (int i = 0; i < n; i++) {
            x = y;
            y = z;
            z = x + y;
        }
        return x;
    }

    // Recursive method
    static int fibRecursion(int n) {
        if ((n == 1) || (n == 0)) {
            return n;
        }
        return fibRecursion(n - 1) + fibRecursion(n - 2);
    }
}
I was trying to find out which method is faster. I came to the conclusion that recursion is faster for smaller numbers, but as the value of the nth element increases, recursion becomes slower and iteration becomes faster. Here are three results for three different values of n:
Example #1 (n = 10)
Enter the last element of Fibonacci sequence: 10
Fibonacci iteration:
Fibonacci sequence(element at index 10) = 55
Time: 5 ms
Fibonacci recursion:
Fibonacci sequence(element at index 10) = 55
Time: 0 ms
Example #2 (n = 20)
Enter the last element of Fibonacci sequence: 20
Fibonacci iteration:
Fibonacci sequence(element at index 20) = 6765
Time: 4 ms
Fibonacci recursion:
Fibonacci sequence(element at index 20) = 6765
Time: 2 ms
Example #3 (n = 30)
Enter the last element of Fibonacci sequence: 30
Fibonacci iteration:
Fibonacci sequence(element at index 30) = 832040
Time: 4 ms
Fibonacci recursion:
Fibonacci sequence(element at index 30) = 832040
Time: 15 ms
What I really want to know is why iteration all of a sudden became faster and recursion became slower. I'm sorry if I missed some obvious answer to this question, but I'm still new to programming and I really don't understand what's going on behind the scenes. Please provide a good explanation or point me in the right direction so I can find the answer myself. Also, if this is not a good way to test which method is faster, let me know and suggest a different method.
Thanks in advance!
For terseness, let F(x) be the recursive Fibonacci function.
F(10) = F(9) + F(8)
F(10) = F(8) + F(7) + F(7) + F(6)
F(10) = F(7) + F(6) + F(6) + F(5) + 4 more calls.
....
So you are calling F(8) twice,
F(7) 3 times, F(6) 5 times, F(5) 8 times, and so on.
So with larger inputs, the tree gets bigger and bigger.
This article does a comparison between recursion and iteration and covers their application on generating fibonacci numbers.
As noted in the article,
The reason for the poor performance is heavy push-pop of the registers in the ill level of each recursive call.
which basically says there is more overhead in the recursive method.
Also, take a look at Memoization
When doing the recursive implementation of Fibonacci algorithm, you are adding redundant calls by recomputing the same values over and over again.
fib(5) = fib(4) + fib(3)
fib(4) = fib(3) + fib(2)
fib(3) = fib(2) + fib(1)
Notice that fib(2) will be redundantly calculated both for fib(4) and for fib(3).
However, this can be overcome by a technique called memoization, which improves the efficiency of recursive Fibonacci by storing the values you have already calculated. Further calls of fib(x) for known values can then be replaced by a simple lookup, eliminating the need for further recursive calls.
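As a sketch of that idea, a top-down memoized Fibonacci (class and cache names are mine; `long` is used to hold larger values):

```java
import java.util.HashMap;
import java.util.Map;

public class MemoFib {
    private static final Map<Integer, Long> cache = new HashMap<>();

    // Each fib(x) is computed at most once; later calls are O(1) lookups,
    // bringing the total cost down from exponential to O(n).
    static long fib(int n) {
        if (n <= 1) return n;
        Long cached = cache.get(n);
        if (cached != null) return cached;
        long value = fib(n - 1) + fib(n - 2);
        cache.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fib(30)); // 832040, effectively instantly
        System.out.println(fib(50)); // feasible now; naive recursion would take far longer
    }
}
```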
This is the main difference between the iterative and recursive approaches, if you are interested, there are also other, more efficient algorithms of calculating Fibonacci numbers.
Why is Recursion slower?
When you call your function recursively, a new activation record (think of it as an ordinary stack frame) is allocated for that call. The frame holds the call's state, local variables, and return address, and frames keep piling up until the base case is reached. So when the input gets larger, a larger stack segment is needed for the whole computation, and allocating and managing those frames also costs time during the process.
Also, in recursion the stack grows at run time; the compiler cannot know at compile time how much memory will be occupied.
That is why, if you don't handle your base case properly, you will get a StackOverflowError :).
Using recursion the way you have, the time complexity is O(fib(n)), which is very expensive, while the iterative method is O(n). This doesn't show up in your measurements because (a) your tests are very short, so the code won't even be JIT-compiled, and (b) you used very small numbers.
Both examples will get faster the more you run them: once a loop or method has been executed around 10,000 times, it should be compiled to native code.
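A fairer (still rough) way to time the two methods is to warm each one up before measuring, so the JIT has a chance to compile them. This is only a sketch, not a rigorous benchmark; for real measurements a harness like JMH is the right tool. The class name and warm-up counts are my choices:

```java
public class FibBench {
    static int fibIteration(int n) {
        int x = 0, y = 1, z = 1;
        for (int i = 0; i < n; i++) { x = y; y = z; z = x + y; }
        return x;
    }

    static int fibRecursion(int n) {
        if (n == 0 || n == 1) return n;
        return fibRecursion(n - 1) + fibRecursion(n - 2);
    }

    public static void main(String[] args) {
        int n = 30;
        // Warm-up: run both methods enough times that the JIT compiles them.
        for (int i = 0; i < 10_000; i++) fibIteration(n);
        for (int i = 0; i < 100; i++) fibRecursion(25);

        long t0 = System.nanoTime();
        int a = fibIteration(n);
        long t1 = System.nanoTime();
        int b = fibRecursion(n);
        long t2 = System.nanoTime();
        System.out.println("iterative: " + a + " in " + (t1 - t0) + " ns");
        System.out.println("recursive: " + b + " in " + (t2 - t1) + " ns");
    }
}
```

Even with warm-up, the recursive version falls further behind as n grows, because its call count grows exponentially while the iterative loop stays linear.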
If anyone is interested in an iterative function with an array:
public static void fibonacci(int y) {
    int[] a = new int[y + 1];
    a[0] = 0;
    a[1] = 1;
    System.out.println("Step 0: 0");
    System.out.println("Step 1: 1");
    for (int i = 2; i <= y; i++) {
        a[i] = a[i - 1] + a[i - 2];
        System.out.println("Step " + i + ": " + a[i]);
    }
    System.out.println("Array size --> " + a.length);
}
This solution crashes for input value 0.
Reason: the array a is allocated with length 0+1 = 1, so the subsequent assignment to a[1] throws an ArrayIndexOutOfBoundsException.
Either add an if statement that returns 0 when y = 0, or allocate the array with length y+2, which wastes one int but is still the same amount of space in big-O terms.
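A minimal guarded version along those lines (my sketch; I return the array instead of printing so the fix is easy to check):

```java
public class SafeFibonacci {
    // Returns the sequence fib(0)..fib(y) as an array. Handles y == 0
    // without touching a[1], which is what crashed the original version.
    static int[] fibonacciSteps(int y) {
        if (y < 0) throw new IllegalArgumentException("y must be non-negative");
        int[] a = new int[y + 1];
        a[0] = 0;
        if (y >= 1) a[1] = 1; // guard: only write a[1] when it exists
        for (int i = 2; i <= y; i++) {
            a[i] = a[i - 1] + a[i - 2];
        }
        return a;
    }

    public static void main(String[] args) {
        for (int v : fibonacciSteps(0)) System.out.println(v); // no crash
        for (int v : fibonacciSteps(8)) System.out.println(v);
    }
}
```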
I prefer a mathematical solution using the golden ratio (Binet's formula). Note that a rounded-off constant like 1.618 loses precision quickly as n grows; using (1 + sqrt(5)) / 2 keeps the rounded result exact for every n whose Fibonacci number fits in a double's integer range. Enjoy:
private static final double GOLDEN_RATIO = (1 + Math.sqrt(5)) / 2;

public long fibonacci(int n) {
    double sqrt5 = Math.sqrt(5);
    double result = Math.pow(GOLDEN_RATIO, n) - Math.pow(1d - GOLDEN_RATIO, n);
    return Math.round(result / sqrt5);
}
Whenever you want to know how long an algorithm takes, it's best to reason about its time complexity: evaluate it on paper in terms of O(something).
Comparing the two approaches above, the time complexity of the iterative approach is O(n), whereas that of the recursive approach is O(2^n).
Let's trace fib(4).
With the iterative approach, the loop runs 4 times, so the time complexity is O(n).
With the recursive approach, the call tree looks like this:
fib(4)
  fib(3)
    fib(2)
      fib(1), fib(0)
    fib(1)
  fib(2)
    fib(1), fib(0)
So fib() is called 9 times, which is below 2^n but still grows like it (remember that big-O gives an upper bound).
As a result, we can say that the iterative approach runs in polynomial (in fact linear) time, whereas the recursive one runs in exponential time.
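The call count of 9 for fib(4) is easy to confirm with an instrumented version (the counter and `countCalls` helper are my additions):

```java
public class FibCallCounter {
    static int calls;

    static int fib(int n) {
        calls++; // count every invocation
        if (n == 0 || n == 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    static int countCalls(int n) {
        calls = 0;
        fib(n);
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(countCalls(4));  // 9
        System.out.println(countCalls(20)); // grows roughly like fib(n) itself
    }
}
```

The counts satisfy T(n) = T(n-1) + T(n-2) + 1, the same shape as the Fibonacci recurrence, which is exactly why the running time is exponential.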
The recursive approach you use is not efficient. I would suggest tail recursion: in contrast to your approach, a tail-recursive version performs only one recursive call per step, so the amount of work is linear rather than exponential. (Note that the JVM does not eliminate tail calls, so the stack still grows to depth n.)
public static int tailFib(int n) {
    if (n <= 1) {
        return n;
    }
    return tailFib(0, 1, n);
}

private static int tailFib(int a, int b, int count) {
    if (count <= 0) {
        return a;
    }
    return tailFib(b, a + b, count - 1);
}

public static void main(String[] args) throws Exception {
    for (int i = 0; i < 10; i++) {
        System.out.println(tailFib(i));
    }
}
I have a recursive solution where the computed values are stored to avoid further unnecessary computation. The code is provided below:
public static int fibonacci(int n) {
    if (n <= 0) return 0;
    if (n == 1) return 1;
    int[] arr = new int[n + 1];
    // this is faster than using a List:
    // List<Integer> lis = new ArrayList<>(Collections.nCopies(n+1, 0));
    arr[0] = 0;
    arr[1] = 1;
    return fiboHelper(n, arr);
}

public static int fiboHelper(int n, int[] arr) {
    if (n <= 0) {
        return arr[0];
    } else if (n == 1) {
        return arr[1];
    } else {
        if (arr[n - 1] != 0 && (arr[n - 2] != 0 || (arr[n - 2] == 0 && n - 2 == 0))) {
            return arr[n] = arr[n - 1] + arr[n - 2];
        } else if (arr[n - 1] == 0 && arr[n - 2] != 0) {
            return arr[n] = fiboHelper(n - 1, arr) + arr[n - 2];
        } else {
            return arr[n] = fiboHelper(n - 2, arr) + fiboHelper(n - 1, arr);
        }
    }
}
I am trying to implement a solution to find the k-th largest element in a given integer list with duplicates, with O(N*log(N)) average time complexity in big-O notation, where N is the number of elements in the list.
As I understand it, merge sort has an average time complexity of O(N*log(N)); however, in my code below I also use an extra for loop along with the merge sort algorithm to delete duplicates, which I assumed violates my O(N*log(N)) requirement. How do I achieve the task with O(N*log(N)) average time complexity?
public class FindLargest {
    public static void nthLargeNumber(int[] arr, String nthElement) {
        mergeSort_srt(arr, 0, arr.length - 1);
        // remove duplicate elements logic
        int b = 0;
        for (int i = 1; i < arr.length; i++) {
            if (arr[b] != arr[i]) {
                b++;
                arr[b] = arr[i];
            }
        }
        int bbb = Integer.parseInt(nthElement) - 1;
        // printing second highest number among given list
        System.out.println("Second highest number is::" + arr[b - bbb]);
    }

    public static void mergeSort_srt(int array[], int lo, int n) {
        int low = lo;
        int high = n;
        if (low >= high) {
            return;
        }
        int middle = (low + high) / 2;
        mergeSort_srt(array, low, middle);
        mergeSort_srt(array, middle + 1, high);
        int end_low = middle;
        int start_high = middle + 1;
        while ((lo <= end_low) && (start_high <= high)) {
            if (array[low] < array[start_high]) {
                low++;
            } else {
                int Temp = array[start_high];
                for (int k = start_high - 1; k >= low; k--) {
                    array[k + 1] = array[k];
                }
                array[low] = Temp;
                low++;
                end_low++;
                start_high++;
            }
        }
    }

    public static void main(String... str) {
        String nthElement = "2";
        int[] intArray = { 1, 9, 5, 7, 2, 5 };
        FindLargest.nthLargeNumber(intArray, nthElement);
    }
}
Your only problem here is that you don't understand how to do the time analysis. If you have one routine which takes O(n) and one which takes O(n*log(n)), running both takes a total of O(n*log(n)). Thus your code runs in O(n*log(n)) like you want.
To do things formally, we would note that the definition of O() is as follows:
f(x) ∈ O(g(x)) if and only if there exists values c > 0 and y such that f(x) < cg(x) whenever x > y.
Your merge sort is in O(n*log(n)) which tells us that its running time is bounded above by c1*n*log(n) when n > y1 for some c1,y1. Your duplication elimination is in O(n) which tells us that its running time is bounded above by c2*n when n > y2 for some c2 and y2. Using this, we can know that the total running time of the two is bounded above by c1*n*log(n)+c2*n when n > max(y1,y2). We know that c1*n*log(n)+c2*n < c1*n*log(n)+c2*n*log(n) because log(n) > 1, and this, of course simplifies to (c1+c2)*n*log(n). Thus, we can know that the running time of the two together is bounded above by (c1+c2)*n*log(n) when n > max(y1,y2) and thus, using c1+c2 as our c and max(y1,y2) as our y, we know that the running time of the two together is in O(n*log(n)).
Informally, you can just know that faster growing functions always dominate, so if one piece of code is O(n) and the second is O(n^2), the combination is O(n^2). If one is O(log(n)) and the second is O(n), the combination is O(n). If one is O(n^20) and the second is O(n^19.99), the combination is O(n^20). If one is O(n^2000) and the second is O(2^n), the combination is O(2^n).
The problem here is your merge routine, where you use an extra inner loop to shift elements. That makes the merge O(n^2) in the worst case, which changes your merge sort's overall time to O(n^2).
Here is pseudocode for a typical O(n) merge routine:
void merge(int low, int high, int arr[]) {
    int buff[high - low + 1];
    int i = low;
    int mid = (low + high) / 2;
    int j = mid + 1;
    int k = 0;
    while (i <= mid && j <= high) {
        if (arr[i] < arr[j]) {
            buff[k++] = arr[i];
            i++;
        } else {
            buff[k++] = arr[j];
            j++;
        }
    }
    while (i <= mid) {
        buff[k++] = arr[i];
        i++;
    }
    while (j <= high) {
        buff[k++] = arr[j];
        j++;
    }
    for (int x = 0; x < k; x++) {
        arr[low + x] = buff[x];
    }
}
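The same routine translated into Java, with a recursive driver, so you can compare it directly with the code in the question (a sketch; class and method names are mine):

```java
public class MergeRoutine {
    // Standard O(high - low) merge using a temporary buffer, replacing
    // the in-place shifting loop that made the original merge quadratic.
    static void merge(int[] arr, int low, int high) {
        int mid = (low + high) / 2;
        int[] buff = new int[high - low + 1];
        int i = low, j = mid + 1, k = 0;
        while (i <= mid && j <= high) {
            buff[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        }
        while (i <= mid) buff[k++] = arr[i++];
        while (j <= high) buff[k++] = arr[j++];
        for (int x = 0; x < k; x++) arr[low + x] = buff[x];
    }

    static void mergeSort(int[] arr, int low, int high) {
        if (low >= high) return;
        int mid = (low + high) / 2;
        mergeSort(arr, low, mid);
        mergeSort(arr, mid + 1, high);
        merge(arr, low, high);
    }

    public static void main(String[] args) {
        int[] a = {1, 9, 5, 7, 2, 5}; // same input as the question's main
        mergeSort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 5, 5, 7, 9]
    }
}
```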
The method needs to return the k elements a[i] for which ABS(a[i] - val) are the k largest values. My code only works for integers greater than val; it fails for integers less than val. Can I do this without importing anything other than java.util.Arrays? Could somebody enlighten me? Any help will be much appreciated!
public static int[] farthestK(int[] a, int val, int k) { // This line should not change
    int[] b = new int[a.length];
    for (int i = 0; i < b.length; i++) {
        b[i] = Math.abs(a[i] - val);
    }
    Arrays.sort(b);
    int[] c = new int[k];
    int w = 0;
    for (int i = b.length - 1; i > b.length - k - 1; i--) {
        c[w] = b[i] + val;
        w++;
    }
    return c;
}
test case:
@Test
public void farthestKTest() {
    int[] a = {-2, 4, -6, 7, 8, 13, 15};
    int[] expected = {15, -6, 13, -2};
    int[] actual = Selector.farthestK(a, 4, 4);
    Assert.assertArrayEquals(expected, actual);
}
There was 1 failure:
1) farthestKTest(SelectorTest)
arrays first differed at element [1]; expected:<-6> but was:<14>
FAILURES!!!
Tests run: 1, Failures: 1
The top-k problem can be solved in many ways; the extra parameter in your case doesn't really change anything.
The first and easiest way: just sort the array. Time complexity: O(n log n)
public static int[] farthestK(Integer[] a, final int val, int k) {
    Arrays.sort(a, new java.util.Comparator<Integer>() {
        @Override
        public int compare(Integer o1, Integer o2) {
            return -Math.abs(o1 - val) + Math.abs(o2 - val);
        }
    });
    int[] c = new int[k];
    for (int i = 0; i < k; i++) {
        c[i] = a[i];
    }
    return c;
}
The second way: use a heap to save the max k values, Time complexity: O(nlogk)
/**
 * Use a min heap to save the max k values. Time complexity: O(nlogk)
 */
public static int[] farthestKWithHeap(Integer[] a, final int val, int k) {
    PriorityQueue<Integer> minHeap = new PriorityQueue<Integer>(4,
            new java.util.Comparator<Integer>() {
                @Override
                public int compare(Integer o1, Integer o2) {
                    return Math.abs(o1 - val) - Math.abs(o2 - val);
                }
            });
    for (int i : a) {
        minHeap.add(i);
        if (minHeap.size() > k) {
            minHeap.poll();
        }
    }
    int[] c = new int[k];
    for (int i = 0; i < k; i++) {
        c[i] = minHeap.poll();
    }
    return c;
}
The third way: divide and conquer, just like quicksort. Partition the array into two parts and recurse into the part containing the k-th element. Time complexity: O(n + k log k)
The code is a little long, so I'll just provide a link here:
Selection problem.
Sorting the array will cost you O(n log n) time. You can do it in O(n) time using k-selection.
Compute an array B, where B[i] = abs(A[i] - val). Then your problem is equivalent to finding the k values farthest from zero in B. Since each B[i] >= 0, this is equivalent to finding the k largest elements in B.
Run k-selection on B looking for the (n - k)th element. See Quickselect on Wikipedia for an O(n) expected time algorithm.
After k-selection is complete, B[n - k] through B[n - 1] contain the largest elements in B. With proper bookkeeping, you can link back to the elements in A that correspond to them (see pseudocode below).
Time complexity: O(n) time for #1, O(n) time for #2, and O(k) time for #3 => a total time complexity of O(n). (Quickselect runs in O(n) expected time, and there exist complicated worst-case linear time selection algorithms).
Space complexity: O(n).
Pseudocode:
farthest_from(k, val, A):
    let n = A.length

    # Compute B. Elements are objects to
    # keep track of the original element in A.
    let B = array[0 .. n - 1]
    for i between 0 and n - 1:
        B[i] = {
            value: abs(A[i] - val)
            index: i
        }

    # k_selection should know to compare
    # elements in B by their "value";
    # e.g., each B[i] could be java.lang.Comparable.
    k_selection(n - k - 1, B)

    # Use the top elements in B to link back to A.
    # Return the result.
    let C = array[0 .. k - 1]
    for i between 0 and k - 1:
        C[i] = A[B[n - k + i].index]
    return C
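A compact Java sketch of the same idea, using a randomized quickselect over an index array (expected O(n); the class, method, and variable names are mine, and the fixed random seed is just for reproducibility):

```java
import java.util.Random;

public class FarthestK {
    private static final Random RNG = new Random(42);

    // Distance key used for ordering.
    private static int key(int[] a, int val, int i) {
        return Math.abs(a[i] - val);
    }

    // Quickselect (Lomuto partition) on the index array: afterwards,
    // positions n-k .. n-1 hold indices of the k largest keys.
    private static void select(int[] idx, int[] a, int val, int lo, int hi, int target) {
        while (lo < hi) {
            int p = lo + RNG.nextInt(hi - lo + 1);
            swap(idx, p, hi);
            int pivot = key(a, val, idx[hi]);
            int store = lo;
            for (int i = lo; i < hi; i++) {
                if (key(a, val, idx[i]) < pivot) swap(idx, i, store++);
            }
            swap(idx, store, hi);
            if (store == target) return;
            if (store < target) lo = store + 1; else hi = store - 1;
        }
    }

    private static void swap(int[] x, int i, int j) {
        int t = x[i]; x[i] = x[j]; x[j] = t;
    }

    static int[] farthestK(int[] a, int val, int k) {
        int n = a.length;
        int[] idx = new int[n];
        for (int i = 0; i < n; i++) idx[i] = i;
        select(idx, a, val, 0, n - 1, n - k); // partition around the (n-k)-th key
        int[] out = new int[k];
        for (int i = 0; i < k; i++) out[i] = a[idx[n - k + i]];
        return out;
    }

    public static void main(String[] args) {
        // Same input as the test case in the question.
        int[] a = {-2, 4, -6, 7, 8, 13, 15};
        int[] far = farthestK(a, 4, 4);
        java.util.Arrays.sort(far);
        System.out.println(java.util.Arrays.toString(far)); // [-6, -2, 13, 15]
    }
}
```

Keeping an index array (rather than sorting the distances themselves) is what lets the result link back to the original values, which is exactly the bug in the question's `c[w] = b[i] + val` reconstruction.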
You can modify this algorithm a little and use it to print k elements according to your requirement (that is the only extra work you will need to do, with some changes to the algorithm).
Explore this link:
http://jakharmania.blogspot.in/2013/08/selection-of-kth-largest-element-java.html
The algorithm there is based on selection sort; stopped after k passes it runs in O(n*k) time, which is efficient when k is small (it is not logarithmic, though).
O(n) algorithm, from Wikipedia entry on partial sorting:
Find the k-th smallest element using the linear time median-of-medians selection algorithm. Then make a linear pass to select the elements smaller than the k-th smallest element.
The collection in this case is created by taking the original array, subtracting the given value, and taking the absolute value (and then negating it, so that the largest becomes the smallest).
What is the Big-O run-time of the following function? Explain.
static int fib(int n) {
    if (n <= 2)
        return 1;
    else
        return fib(n - 1) + fib(n - 2);
}
Also how would you re-write the fib(int n) function with a faster Big-O run-time iteratively?
Would this be the best way, with O(n)?
public static int fibonacci(int n) {
    int previous = -1;
    int result = 1;
    for (int i = 0; i <= n; ++i) {
        int sum = result + previous;
        previous = result;
        result = sum;
    }
    return result;
}
Proof
You model the time to calculate Fib(n) as the sum of the time to calculate Fib(n-1), plus the time to calculate Fib(n-2), plus the time to add them together (O(1)):
T(n<=1) = O(1)
T(n) = T(n-1) + T(n-2) + O(1)
You solve this recurrence relation (using generating functions, for instance) and you'll end up with the answer.
Alternatively, you can draw the recursion tree, which will have depth n, and intuitively figure out that this function is asymptotically O(2^n). You can then prove your conjecture by induction.
Base: n = 1 is obvious.
Assume T(n-1) = O(2^(n-1)); therefore
T(n) = T(n-1) + T(n-2) + O(1), which is equal to
T(n) = O(2^(n-1)) + O(2^(n-2)) + O(1) = O(2^n)
Iterative version
Note that even this implementation is only suitable for small values of n, since the Fibonacci function grows exponentially and 32-bit signed Java integers can only hold the first 46 Fibonacci numbers
int prev1 = 0, prev2 = 1;
for (int i = 0; i < n; i++) {
    int savePrev1 = prev1;
    prev1 = prev2;
    prev2 = savePrev1 + prev2;
}
return prev1;