I have written this method to convert a sorted array into a balanced binary search tree. I'm not sure what the big-O time complexity of this method should be. Would it be O(n)?
Node ArrayToBST(Node arr[], int start, int end)
{
if (start > end)
return null;
int mid = (start + end) / 2;
Node node = arr[mid];
node.left = ArrayToBST(arr, start, mid - 1);
node.right = ArrayToBST(arr, mid + 1, end);
return node;
}
The complexity will be O(n). Every node is created exactly once, so there are n node-creating calls, each doing O(1) work outside the recursion.
T(n) = 2T(n/2) + C. If you apply the master theorem you will arrive at the same conclusion.
Master theorem rules:
If n^(log_b a) < f(n) then T(n) = Θ(f(n))
If n^(log_b a) = f(n) then T(n) = Θ(f(n) * log n)
If n^(log_b a) > f(n) then T(n) = Θ(n^(log_b a))
`n` -> input size
`a` -> number of subproblems
`n/b` -> size of each subproblem
`f(n)` -> cost of the work done outside the recursive calls (here C)
Here
a = 2, b = 2, f(n) = O(1)
n^(log_b a) = n^(log_2 2) = n
Since f(n) = O(1) is polynomially smaller than n, the last case applies and T(n) = Θ(n).
(In the rules above, < and > denote polynomially smaller and larger.)
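If you prefer not to use the master theorem, you can also unroll the recurrence directly. A quick sketch, assuming for simplicity that n is a power of two and that T(1) = C:
T(n) = 2T(n/2) + C
     = 4T(n/4) + 2C + C
     = ...
     = n*T(1) + C*(n/2 + n/4 + ... + 2 + 1)
     = C*n + C*(n - 1)
     = O(n)
The non-recursive work forms a geometric series that sums to roughly 2Cn, so the total is linear.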
It is in O(n).
The total time satisfies T(n) = 2T(n/2) + C, where:
T(n): time taken for an array of size n
C: a constant (finding the middle of the array and linking the root to the left and right subtrees take constant time)
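As a sanity check, here is a small self-contained sketch (the class, Node fields, and counter are mine; it takes an int[] instead of a Node[], but the call pattern is the same) that counts every recursive invocation, including the ones that immediately return null:
public class SortedArrayToBST {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    static long calls = 0;   // every invocation, including the null-returning ones

    static Node build(int[] arr, int start, int end) {
        calls++;
        if (start > end) return null;
        int mid = (start + end) / 2;            // middle element becomes the root of this subtree
        Node node = new Node(arr[mid]);
        node.left = build(arr, start, mid - 1);
        node.right = build(arr, mid + 1, end);
        return node;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] sorted = new int[n];
        for (int i = 0; i < n; i++) sorted[i] = i;
        build(sorted, 0, n - 1);
        System.out.println("n = " + n + ", calls = " + calls);   // prints 2001, i.e. 2n + 1
    }
}
There are n node-creating calls plus n + 1 calls that hit the start > end base case, i.e. 2n + 1 in total, which is still O(n).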
So I'm trying to find a recurrence relation for this Java method:
public static Pair min/max(int start, int end, int[] a) {
int mid;
Pair pair = new Pair(a[start], a[end]);
Pair p1;
Pair p2;
if (start == end) { //if n = 1
return pair;
}
else if (end == start + 1) { //if n = 2
if (a[start] > a[end]) {
pair.upper = a[end];
pair.lower = a[start];
} else {
pair.upper = a[start];
pair.lower = a[end];
}
return pair;
}
mid = (end + start) / 2; //if n > 2
p1 = min/max(end, mid, a);
p2 = min/max(mid + 1, ub, a);
if (p1.lower < p2.lower)
pair.upper = p1.lower;
else
pair.lower = p2.lower;
if (p1.upper > p2.upper)
pair.upper = p1.upper;
else
pair.upper = p2.upper;
return pair; //the min and max pair
}
This is supposed to find the max and min of an array by always using ⌈3n/2 - 2⌉ comparisons. It also uses this class:
class Pair {
int lower;
int upper;
Pair ( int a, int o ) { lower = a; upper = o; }
}
So what would be the recurrence relation for this method? I know that it starts out as:
C(n) = 0 if n = 1
C(n) = 1 if n = 2
And now I'm trying to figure out what the equation is when n > 2. First off, does the above method always run in ⌈3n/2 - 2⌉ comparisons? I'm asking because of the line mid = (lb + ub) / 2, which makes me think that I'm still splitting the array into ⌈n/2⌉ and ⌊n/2⌋ parts, which would not get me ⌈3n/2 - 2⌉ comparisons every time. I know that there is a better way to do this same thing, but the code I'm writing has to use recursion.
UPDATE: After adding counter statements, I find that the above code does not use ⌈3n/2 - 2⌉ comparisons for every array. I think the problem is with the variable mid: with mid I'm supposed to split the array, but I don't think I'm splitting it correctly to get the proper number of comparisons.
Let n = end - start + 1. I presume you mean a recurrence expressing the number of comparisons.
In the general case each call does 4 comparisons and splits the list in half, so the recurrence is:
T(n) = 4 + T(⌊n/2⌋) + T(⌈n/2⌉) for n > 2
NB: You seem to have begun with lb and ub and switched only partially to start and end.
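If you want to check the counts empirically, here is a corrected, self-contained version of the method (renamed minMax, since min/max is not a legal Java identifier, and with the argument order and the lower/upper assignments tidied up). The counter tracks only comparisons between array elements, which is the quantity the ⌈3n/2 - 2⌉ bound refers to; the two index tests per call, which the recurrence above also counts, are left out:
class Pair {
    int lower, upper;
    Pair(int lower, int upper) { this.lower = lower; this.upper = upper; }
}

public class MinMaxCount {
    static long comparisons = 0;   // comparisons between array elements only

    static Pair minMax(int start, int end, int[] a) {
        if (start == end) {                         // n = 1: no element comparison needed
            return new Pair(a[start], a[start]);
        }
        if (end == start + 1) {                     // n = 2: exactly one element comparison
            comparisons++;
            return a[start] > a[end] ? new Pair(a[end], a[start])
                                     : new Pair(a[start], a[end]);
        }
        int mid = (start + end) / 2;                // split into ceil(n/2) and floor(n/2)
        Pair p1 = minMax(start, mid, a);
        Pair p2 = minMax(mid + 1, end, a);
        comparisons += 2;                           // one for the min, one for the max
        return new Pair(Math.min(p1.lower, p2.lower), Math.max(p1.upper, p2.upper));
    }

    public static void main(String[] args) {
        for (int n = 2; n <= 8; n++) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = (i * 31) % 17;   // arbitrary test data
            comparisons = 0;
            minMax(0, n - 1, a);
            int bound = (int) Math.ceil(3.0 * n / 2) - 2;
            System.out.println("n=" + n + " comparisons=" + comparisons + " bound=" + bound);
        }
    }
}
With this counting the recurrence is C(n) = C(⌈n/2⌉) + C(⌊n/2⌋) + 2 with C(1) = 0 and C(2) = 1. It equals ⌈3n/2⌉ - 2 for many n but not all (n = 6 gives 8 rather than 7), which is consistent with what your counter statements showed: splitting at the midpoint does not always reproduce the bound you get from pairing elements up.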
I've got two different methods: one calculates the Fibonacci sequence up to the nth element using iteration, and the other does the same thing using recursion.
The example program looks like this:
import java.util.Scanner;
public class recursionVsIteration {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
//nth element input
System.out.print("Enter the last element of Fibonacci sequence: ");
int n = sc.nextInt();
//Print out iteration method
System.out.println("Fibonacci iteration:");
long start = System.currentTimeMillis();
System.out.printf("Fibonacci sequence(element at index %d) = %d \n", n, fibIteration(n));
System.out.printf("Time: %d ms\n", System.currentTimeMillis() - start);
//Print out recursive method
System.out.println("Fibonacci recursion:");
start = System.currentTimeMillis();
System.out.printf("Fibonacci sequence(element at index %d) = %d \n", n, fibRecursion(n));
System.out.printf("Time: %d ms\n", System.currentTimeMillis() - start);
}
//Iteration method
static int fibIteration(int n) {
int x = 0, y = 1, z = 1;
for (int i = 0; i < n; i++) {
x = y;
y = z;
z = x + y;
}
return x;
}
//Recursive method
static int fibRecursion(int n) {
if ((n == 1) || (n == 0)) {
return n;
}
return fibRecursion(n - 1) + fibRecursion(n - 2);
}
}
I was trying to find out which method is faster. I came to the conclusion that recursion seems faster for small n, but as n increases recursion becomes slower and iteration becomes faster. Here are the results for three different values of n:
Example #1 (n = 10)
Enter the last element of Fibonacci sequence: 10
Fibonacci iteration:
Fibonacci sequence(element at index 10) = 55
Time: 5 ms
Fibonacci recursion:
Fibonacci sequence(element at index 10) = 55
Time: 0 ms
Example #2 (n = 20)
Enter the last element of Fibonacci sequence: 20
Fibonacci iteration:
Fibonacci sequence(element at index 20) = 6765
Time: 4 ms
Fibonacci recursion:
Fibonacci sequence(element at index 20) = 6765
Time: 2 ms
Example #3 (n = 30)
Enter the last element of Fibonacci sequence: 30
Fibonacci iteration:
Fibonacci sequence(element at index 30) = 832040
Time: 4 ms
Fibonacci recursion:
Fibonacci sequence(element at index 30) = 832040
Time: 15 ms
What I really want to know is why iteration suddenly became faster and recursion became slower. I'm sorry if I missed some obvious answer to this question, but I'm still new to programming; I really don't understand what's going on behind this and I would like to know. Please provide a good explanation or point me in the right direction so I can find the answer myself. Also, if this is not a good way to test which method is faster, let me know and suggest a different method.
Thanks in advance!
For terseness, let F(x) be the recursive Fibonacci function.
F(10) = F(9) + F(8)
F(10) = F(8) + F(7) + F(7) + F(6)
F(10) = F(7) + F(6) + F(6) + F(5) + 4 more calls.
....
So you are calling F(8) twice,
F(7) 3 times, F(6) 5 times, F(5) 8 times... and so on; the call counts themselves grow like Fibonacci numbers.
So with larger inputs, the tree gets bigger and bigger.
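To see the blow-up concretely, here is a small sketch (the class name and counter are mine; fib mirrors the fibRecursion method from the question) that counts how many times the function is invoked:
public class FibCallCount {
    static long calls = 0;   // total number of invocations of fib()

    static int fib(int n) {
        calls++;
        if (n == 0 || n == 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 20, 30}) {
            calls = 0;
            int result = fib(n);
            System.out.println("fib(" + n + ") = " + result + ", calls = " + calls);
        }
        // Prints 177 calls for n = 10, 21891 for n = 20 and 2692537 for n = 30:
        // the amount of work grows exponentially while the result stays small.
    }
}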
This article does a comparison between recursion and iteration and covers their application to generating Fibonacci numbers.
As noted in the article,
The reason for the poor performance is heavy push-pop of the registers in the ill level of each recursive call.
which basically says there is more overhead in the recursive method.
Also, take a look at Memoization
When using the recursive implementation of the Fibonacci algorithm, you add redundant calls by recomputing the same values over and over again.
fib(5) = fib(4) + fib(3)
fib(4) = fib(3) + fib(2)
fib(3) = fib(2) + fib(1)
Notice that fib(2) will be calculated redundantly, both for fib(4) and for fib(3).
However, this can be overcome by a technique called memoization, which improves the efficiency of recursive Fibonacci by storing the values you have already calculated. Further calls of fib(x) for known values can then be replaced by a simple lookup, eliminating the need for further recursive calls.
This is the main difference between the iterative and recursive approaches; if you are interested, there are also other, more efficient algorithms for calculating Fibonacci numbers.
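For example, a minimal memoized version of the recursive method might look like this (the helper, the -1 sentinel, and the use of long are my additions):
// Memoized recursion: each fib(k) is computed once and then looked up.
// memo[k] == -1 means "not computed yet".
static long fibMemo(int n) {
    long[] memo = new long[n + 1];
    java.util.Arrays.fill(memo, -1);
    return fibMemo(n, memo);
}

private static long fibMemo(int n, long[] memo) {
    if (n <= 1) return n;                   // same base case as fibRecursion
    if (memo[n] != -1) return memo[n];      // already known: O(1) lookup
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
    return memo[n];
}
Each fib(k) is now computed exactly once, so there are O(n) calls instead of an exponential number.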
Why is Recursion slower?
When you call your function from itself (recursion), the runtime allocates a new activation record (think of it as a frame on an ordinary stack) for that call. The frame keeps that call's state: its local variables, parameters, and return address. A new frame is pushed for every recursive call, and this continues until the base case is reached. So when the input gets larger, a larger stack segment is needed to carry out the whole computation, and creating and managing those records also costs time.
Also, with recursion the stack grows at run time; the compiler cannot know at compile time how much memory will be occupied.
That is why, if you don't handle your base case properly, you will get a StackOverflowError :).
Using recursion the way you have, the time complexity is O(fib(n)), which is very expensive. The iterative method is O(n). This doesn't show in your measurements because (a) your tests are very short, so the code won't even be JIT-compiled, and (b) you used very small numbers.
Both examples will become faster the more you run them. Once a loop or method has been called 10,000 times, it should be compiled to native code.
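If you want numbers that are a bit more trustworthy, here is a rough sketch of a fairer measurement (the class and helper names are mine, and a real benchmark harness such as JMH is the better tool): warm both methods up first so the JIT can compile them, then average over many calls with System.nanoTime():
import java.util.function.IntUnaryOperator;

public class FibBench {
    // Average cost of one call to fib(n) over `reps` calls, after a warm-up
    // phase that gives the JIT a chance to compile the method being measured.
    static long averageNanos(IntUnaryOperator fib, int n, int reps) {
        for (int i = 0; i < 20_000; i++) {
            fib.applyAsInt(15);                         // warm-up calls; results ignored
        }
        long sink = 0;                                  // keep the results "live"
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            sink += fib.applyAsInt(n);
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("(checksum " + sink + ")");  // prevents dead-code elimination
        return elapsed / reps;
    }

    public static void main(String[] args) {
        int n = 30;
        // assumes the recursionVsIteration class from the question is in the same package
        System.out.println("iteration: " + averageNanos(recursionVsIteration::fibIteration, n, 100_000) + " ns/call");
        System.out.println("recursion: " + averageNanos(recursionVsIteration::fibRecursion, n, 10) + " ns/call");
    }
}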
If anyone is interested in an iterative function using an array:
public static void fibonacci(int y)
{
int[] a = new int[y+1];
a[0] = 0;
a[1] = 1;
System.out.println("Step 0: 0");
System.out.println("Step 1: 1");
for(int i=2; i<=y; i++){
a[i] = a[i-1] + a[i-2];
System.out.println("Step "+i+": "+a[i]);
}
System.out.println("Array size --> "+a.length);
}
This solution crashes for the input value 0.
Reason: the array a will be allocated with size 0 + 1 = 1, but the subsequent assignment to a[1] results in an ArrayIndexOutOfBoundsException.
Either add an if statement that returns early when y = 0, or allocate the array with y + 2 elements, which wastes one int (a constant amount of extra space) and does not change the big-O. A sketch of the first option follows.
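A minimal sketch of the first option (the same method as above, with an early return added for y = 0):
public static void fibonacci(int y)
{
    if (y == 0) {                            // guard: an array of size 1 has no a[1]
        System.out.println("Step 0: 0");
        System.out.println("Array size --> 1");
        return;
    }
    int[] a = new int[y + 1];
    a[0] = 0;
    a[1] = 1;
    System.out.println("Step 0: 0");
    System.out.println("Step 1: 1");
    for (int i = 2; i <= y; i++) {
        a[i] = a[i - 1] + a[i - 2];
        System.out.println("Step " + i + ": " + a[i]);
    }
    System.out.println("Array size --> " + a.length);
}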
I prefer a mathematical solution using the golden ratio (Binet's formula). Enjoy.
// phi = (1 + sqrt(5)) / 2; a truncated constant such as 1.618 makes the rounded
// result drift away from the true Fibonacci numbers as n grows
private static final double GOLDEN_NUMBER = (1 + Math.sqrt(5)) / 2;
public long fibonacci(int n) {
    double sqrt = Math.sqrt(5);
    double result = Math.pow(GOLDEN_NUMBER, n);
    result = result - Math.pow(1d - GOLDEN_NUMBER, n);
    result = Math.round(result / sqrt);
    return Double.valueOf(result).longValue();
}
Note that for sufficiently large n, double precision is no longer enough to round to the exact value.
Whenever you want to know how long a particular algorithm will take, it's best to start from its time complexity.
Evaluate the time complexity on paper in terms of O(something).
Comparing the two approaches above, the time complexity of the iterative approach is O(n), whereas that of the recursive approach is O(2^n).
Let's try to find the time complexity of fib(4).
Iterative approach: the loop runs 4 times, so the time complexity is O(n).
Recursive approach:
fib(4)
├── fib(3)
│   ├── fib(2)
│   │   ├── fib(1)
│   │   └── fib(0)
│   └── fib(1)
└── fib(2)
    ├── fib(1)
    └── fib(0)
so fib() is called 9 times in total, which stays below 2^n (here 2^4 = 16); remember that big-O only needs to be an upper bound.
As a result we can say that the iterative approach runs in polynomial (in fact linear) time, whereas the recursive one runs in exponential time.
The recursive approach that you use is not efficient. I would suggest you use tail recursion. In contrast to your approach, a tail-recursive version makes only one recursive call per invocation, so the number of calls grows linearly with n instead of exponentially. (Note that the JVM does not eliminate tail calls, so the call stack still grows linearly with n.)
public static int tailFib(int n) {
if (n <= 1) {
return n;
}
return tailFib(0, 1, n);
}
private static int tailFib(int a, int b, int count) {
if(count <= 0) {
return a;
}
return tailFib(b, a+b, count-1);
}
public static void main(String[] args) throws Exception{
for (int i = 0; i <10; i++){
System.out.println(tailFib(i));
}
}
I have a recursive solution where the computed values are stored to avoid further unnecessary computation. The code is provided below:
public static int fibonacci(int n) {
if(n <= 0) return 0;
if(n == 1) return 1;
int[] arr = new int[n+1];
// a plain int array is faster here than a boxed list such as:
// List<Integer> lis = new ArrayList<>(Collections.nCopies(n+1, 0));
arr[0] = 0;
arr[1] = 1;
return fiboHelper(n, arr);
}
public static int fiboHelper(int n, int[] arr){
if(n <= 0) {
return arr[0];
}
else if(n == 1) {
return arr[1];
}
else {
if( arr[n-1] != 0 && (arr[n-2] != 0 || (arr[n-2] == 0 && n-2 == 0))){
return arr[n] = arr[n-1] + arr[n-2];
}
else if (arr[n-1] == 0 && arr[n-2] != 0 ){
return arr[n] = fiboHelper(n-1, arr) + arr[n-2];
}
else {
return arr[n] = fiboHelper(n-2, arr) + fiboHelper(n-1, arr );
}
}
}
I'm practicing recursion and divide & conquer, and a frequent problem seems to be converting the array:
[a1,a2,a3..an,b1,b2,b3...bn] to [a1,b1,a2,b2,a3,b3...an,bn]
I solved it as follows (startA is the start of the a's and startB is the start of the b's):
private static void shuffle(int[] a, int startA, int startB){
if(startA == startB)return;
int tmp = a[startB];
shift(a, startA + 1, startB);
a[startA + 1] = tmp;
shuffle(a, startA + 2, startB + 1);
}
private static void shift(int[] a, int start, int end) {
if(start >= end)return;
for(int i = end; i > start; i--){
a[i] = a[i - 1];
}
}
But I am not sure what the runtime is. Isn't it linear?
Let the time consumed by the algorithm be T(n), and let n=startB-startA.
Each recursive invocation reduces n by 1 (startB - startA shrinks by one per call), so the run time satisfies T(n) = T(n-1) + f(n); we only need to figure out what f(n) is.
The bottleneck in each invocation is the shift() operation, which iterates from startA + 1 to startB, i.e. n - 1 iterations.
Thus, the complexity of the algorithm is T(n) = T(n-1) + (n-1).
Unrolling this gives T(n) = (n-1) + (n-2) + ... + 1 = n(n-1)/2, the sum of an arithmetic progression, which is Theta(n^2); and since the initial startB - startA is linear in N (the size of the input), the time complexity of the algorithm is Theta(N^2).
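As a quick empirical check, here is a sketch (the counter and the test harness are mine) that counts how many element moves shift() performs. Doubling n roughly quadruples the count, exactly as the n(n-1)/2 formula predicts:
public class ShuffleCount {
    static long shiftSteps = 0;   // number of element moves performed by shift()

    static void shuffle(int[] a, int startA, int startB) {
        if (startA == startB) return;
        int tmp = a[startB];
        shift(a, startA + 1, startB);
        a[startA + 1] = tmp;
        shuffle(a, startA + 2, startB + 1);
    }

    static void shift(int[] a, int start, int end) {
        if (start >= end) return;
        for (int i = end; i > start; i--) {
            a[i] = a[i - 1];
            shiftSteps++;                   // one step per element moved
        }
    }

    public static void main(String[] args) {
        for (int n : new int[] {100, 200, 400}) {   // n = number of a's = number of b's
            int[] a = new int[2 * n];
            for (int i = 0; i < 2 * n; i++) a[i] = i;
            shiftSteps = 0;
            shuffle(a, 0, n);
            System.out.println("n=" + n + " shift steps=" + shiftSteps
                    + " (n*(n-1)/2 = " + (long) n * (n - 1) / 2 + ")");
        }
    }
}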
What is the Big-O run-time of the following function? Explain.
static int fib(int n){
if (n <= 2)
return 1;
else
return fib(n-1) + fib(n-2);
}
Also how would you re-write the fib(int n) function with a faster Big-O run-time iteratively?
Would this be the best way, running in O(n):
public static int fibonacci (int n){
int previous = -1;
int result = 1;
for (int i = 0; i <= n; ++i)
{
int sum = result + previous;
previous = result;
result = sum;
}
return result;
}
Proof
You model the time to calculate Fib(n) as the sum of the time to calculate Fib(n-1), the time to calculate Fib(n-2), and the time to add them together (O(1)):
T(n) = O(1) for n <= 1
T(n) = T(n-1) + T(n-2) + O(1) for n > 1
You solve this recurrence relation (using generating functions, for instance) and you'll end up with the answer.
Alternatively, you can draw the recursion tree, which will have depth n, and intuitively figure out that this function is asymptotically O(2^n). You can then prove your conjecture by induction.
Base: n = 1 is obvious.
Assume T(k) = O(2^k) for all k < n; therefore
T(n) = T(n-1) + T(n-2) + O(1), which is equal to
T(n) = O(2^(n-1)) + O(2^(n-2)) + O(1) = O(2^n)
(The tight bound is in fact Θ(φ^n) with φ = (1+√5)/2 ≈ 1.618, since T satisfies essentially the same recurrence as the Fibonacci numbers themselves.)
Iterative version
Note that even this implementation is only suitable for small values of n, since the Fibonacci sequence grows exponentially and a 32-bit signed Java int can only hold Fibonacci numbers up to fib(46); a BigInteger variant is sketched after the code.
int prev1=0, prev2=1;
for(int i=0; i<n; i++) {
int savePrev1 = prev1;
prev1 = prev2;
prev2 = savePrev1 + prev2;
}
return prev1;
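If you need values beyond what int can hold (long runs out after fib(92)), the same loop works with BigInteger. A sketch, with the class and method names being mine:
import java.math.BigInteger;

public class FibBig {
    static BigInteger fibBig(int n) {
        BigInteger prev1 = BigInteger.ZERO;   // fib(0)
        BigInteger prev2 = BigInteger.ONE;    // fib(1)
        for (int i = 0; i < n; i++) {
            BigInteger next = prev1.add(prev2);
            prev1 = prev2;                    // slide the window forward one position
            prev2 = next;
        }
        return prev1;                         // fib(n)
    }

    public static void main(String[] args) {
        System.out.println(fibBig(100));      // 354224848179261915075
    }
}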