This question already has answers here:
nth fibonacci number in sublinear time
(16 answers)
Closed 7 years ago.
I was required to write a simple implementation of the Fibonacci sequence and then to make it faster.
Here is my initial implementation:
public class Fibonacci {

    public static long getFibonacciOf(long n) {
        if (n == 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        } else {
            return getFibonacciOf(n - 2) + getFibonacciOf(n - 1);
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.println("Enter n :");
            long n = scanner.nextLong();
            if (n >= 0) {
                long beginTime = System.currentTimeMillis();
                long fibo = getFibonacciOf(n);
                long endTime = System.currentTimeMillis();
                long delta = endTime - beginTime;
                System.out.println("F(" + n + ") = " + fibo + " ... computed in " + delta + " milliseconds");
            } else {
                break;
            }
        }
    }
}
As you can see, I am using System.currentTimeMillis() to get a simple measure of the time elapsed while computing Fibonacci.
This implementation rapidly becomes exponentially slow, as you can see in the following picture.
So I had a simple optimisation idea: put previously computed values in a HashMap and, instead of recomputing them each time, simply take them back from the HashMap if they exist. If they don't exist, compute them and put them in the HashMap.
Here is the new version of the code:
public class FasterFibonacci {

    private static Map<Long, Long> previousValuesHolder;
    static {
        previousValuesHolder = new HashMap<Long, Long>();
        previousValuesHolder.put(Long.valueOf(0), Long.valueOf(0));
        previousValuesHolder.put(Long.valueOf(1), Long.valueOf(1));
    }

    public static long getFibonacciOf(long n) {
        if (n == 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        } else {
            if (previousValuesHolder.containsKey(Long.valueOf(n))) {
                return previousValuesHolder.get(n);
            } else {
                long newValue = getFibonacciOf(n - 2) + getFibonacciOf(n - 1);
                previousValuesHolder.put(Long.valueOf(n), Long.valueOf(newValue));
                return newValue;
            }
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.println("Enter n :");
            long n = scanner.nextLong();
            if (n >= 0) {
                long beginTime = System.currentTimeMillis();
                long fibo = getFibonacciOf(n);
                long endTime = System.currentTimeMillis();
                long delta = endTime - beginTime;
                System.out.println("F(" + n + ") = " + fibo + " ... computed in " + delta + " milliseconds");
            } else {
                break;
            }
        }
    }
}
This change makes the computation extremely fast. It computes all the values from 2 to 103 in no time at all, and I get a long overflow at F(104) (it gives me F(104) = -7076989329685730859, which is wrong). I find it so fast that I wonder whether there are any mistakes in my code (thanks for checking and letting me know). Please take a look at the second picture:
Is my faster Fibonacci implementation correct? It seems so to me, because it gets the same values as the first version, but the first version was too slow to compute bigger values such as F(75). What other ways can I use to make it faster, or is there a better way? Also, how can I compute Fibonacci for greater values (such as 150 or 200) without getting a long overflow? Though it seems fast, I would like to push it to the limits. I remember Mr Abrash saying 'The best optimiser is between your two ears', so I believe it can still be improved. Thank you for helping.
[Edit note:] Though the linked question addresses one of the main points in my question, you can see from the above that I have additional issues.
Dynamic programming
Idea: instead of recomputing the same values multiple times, store each calculated value and reuse it as you go along.
f(n) = f(n-1) + f(n-2), with f(0) = 0 and f(1) = 1.
So at the point when you have calculated f(n-1), you can easily calculate f(n) if you have stored the values of f(n-1) and f(n-2).
Let's take an array of Bignums first, A[1..200], and initialize all entries to -1.
Pseudocode
fib(n)
{
    if (A[n] != -1) return A[n];
    A[0] = 0;
    A[1] = 1;
    for i = 2 to n
        A[i] = A[i-1] + A[i-2];
    return A[n];
}
This runs in O(n) time. Check it out yourself.
This technique is also called memoization.
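As a concrete version of the pseudocode above, here is a bottom-up Java sketch; the class name and the use of BigInteger (to avoid the long overflow from the question) are my additions:

```java
import java.math.BigInteger;

public class FibDP {
    // Bottom-up DP: fill the table from the base cases upward,
    // so every value is computed exactly once -- O(n) additions.
    static BigInteger fib(int n) {
        if (n <= 1) return BigInteger.valueOf(n);
        BigInteger[] a = new BigInteger[n + 1];
        a[0] = BigInteger.ZERO;
        a[1] = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            a[i] = a[i - 1].add(a[i - 2]);
        }
        return a[n];
    }
}
```

With BigInteger there is no overflow at F(104), at the cost of O(n) space for the table.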
The IDEA
Dynamic programming (usually referred to as DP) is a very powerful technique for solving a particular class of problems. It demands a very elegant formulation of the approach and simple thinking, and the coding part is very easy. The idea is very simple: if you have solved a problem with the given input, then save the result for future reference, so as to avoid solving the same problem again. In short: 'Remember your Past'.
If the given problem can be broken up into smaller sub-problems, and these smaller subproblems can in turn be divided into still smaller ones, and in this process you observe some overlapping subproblems, then that's a big hint for DP. Also, the optimal solutions to the subproblems should contribute to the optimal solution of the given problem (referred to as the Optimal Substructure Property).
There are two ways of doing this.
1.) Top-Down : Start solving the given problem by breaking it down. If you see that the problem has been solved already, then just return the saved answer. If it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive. This is referred to as Memoization. (I have used this idea).
2.) Bottom-Up : Analyze the problem and see the order in which the sub-problems are solved and start solving from the trivial subproblem, up towards the given problem. In this process, it is guaranteed that the subproblems are solved before solving the problem. This is referred to as Dynamic Programming. (MinecraftShamrock used this idea)
There's more!
(Other ways to do this)
Look, our quest for a better solution doesn't end here. You will see a different approach:
If you know how to solve recurrence relations, then you can find a closed-form solution to the relation
f(n)=f(n-1)+f(n-2) given f(0)=0,f(1)=1
You will arrive at the formula after solving it-
f(n)= (1/sqrt(5))((1+sqrt(5))/2)^n - (1/sqrt(5))((1-sqrt(5))/2)^n
which can be written in more compact form
f(n)=floor((((1+sqrt(5))/2)^n) /sqrt(5) + 1/2)
Complexity
You can get the power of a number in O(log n) operations.
You have to learn exponentiation by squaring.
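To illustrate (this sketch is mine, not part of the original answer): applying exponentiation by squaring to the 2x2 matrix [[1,1],[1,0]], whose nth power is [[F(n+1),F(n)],[F(n),F(n-1)]], computes F(n) in O(log n) matrix multiplications; with long arithmetic this is limited to n <= 92:

```java
public class FibMatrix {
    // Product of two 2x2 matrices of longs.
    static long[][] mul(long[][] x, long[][] y) {
        return new long[][]{
                {x[0][0] * y[0][0] + x[0][1] * y[1][0], x[0][0] * y[0][1] + x[0][1] * y[1][1]},
                {x[1][0] * y[0][0] + x[1][1] * y[1][0], x[1][0] * y[0][1] + x[1][1] * y[1][1]}
        };
    }

    // Square-and-multiply on the Fibonacci matrix: O(log n) steps.
    static long fib(int n) {
        long[][] result = {{1, 0}, {0, 1}}; // identity matrix
        long[][] base = {{1, 1}, {1, 0}};
        while (n > 0) {
            if ((n & 1) == 1) result = mul(result, base); // consume this bit of the exponent
            base = mul(base, base);                       // square for the next bit
            n >>= 1;
        }
        return result[0][1]; // F(n)
    }
}
```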
EDIT: It is good to point out that this doesn't necessarily mean that the Fibonacci number can be found in O(log n); the number of digits we need to calculate actually grows linearly. The original placement of this claim made it seem to say, wrongly, that the nth Fibonacci number can be calculated in O(log n) time.
[Bakurui,MinecraftShamrock commented on this]
If you need to compute nth Fibonacci numbers very frequently, I suggest using amalsom's answer.
But if you want to compute a very big Fibonacci number, you will run out of memory because you are storing all smaller Fibonacci numbers. The following pseudocode keeps only the last two Fibonacci numbers in memory, i.e. it requires much less memory:
fibonacci(n) {
    if n = 0: return 0;
    if n = 1: return 1;
    a = 0;
    b = 1;
    for i from 2 to n: {
        sum = a + b;
        a = b;
        b = sum;
    }
    return b;
}
Analysis
This can compute very high Fibonacci numbers with quite low memory consumption: we have O(n) time, as the loop repeats n-1 times. The space complexity is interesting as well: the nth Fibonacci number is O(n) bits long, which can easily be shown:
Fn <= 2 * Fn-1
This means that the nth Fibonacci number is at most twice as big as its predecessor. Doubling a number in binary is equivalent to a single left-shift, which increases the number of necessary bits by one. So representing the nth Fibonacci number takes at most O(n) space. We have at most three successive Fibonacci numbers in memory, which makes O(n) + O(n-1) + O(n-2) = O(n) total space consumption. In contrast, the memoization algorithm always keeps the first n Fibonacci numbers in memory, which makes O(n) + O(n-1) + O(n-2) + ... + O(1) = O(n^2) space consumption.
So which way should one use?
The only reason to keep all lower fibonacci numbers in memory is if you need fibonacci numbers very frequently. It is a question of balancing time with memory consumption.
Get away from the Fibonacci recursion and use the identities
(F(2n), F(2n-1)) = (F(n)^2 + 2 F(n) F(n-1), F(n)^2+F(n-1)^2)
(F(2n+1), F(2n)) = (F(n+1)^2+F(n)^2, 2 F(n+1) F(n) - F(n)^2)
This allows you to compute (F(m+1), F(m)) in terms of (F(k+1), F(k)) for k half the size of m. Written iteratively with some bit shifting for division by 2, this should give you the theoretical O(log n) speed of exponentiation by squaring while staying entirely within integer arithmetic. (Well, O(log n) arithmetic operations. Since you will be working with numbers with roughly n bits, it won't be O(log n) time once you are forced to switch to a large integer library. After F(46), you will overflow a 32-bit int, which only goes up to 2^31 - 1; a 64-bit long lasts until F(92).)
(Apologies for not remembering Java well enough to implement this in Java; anyone who wants to is free to edit it in.)
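In the spirit of that invitation, here is one possible Java sketch of the halving idea (my code, not the answerer's; it uses the equivalent doubling form F(2m) = F(m)(2F(m+1) - F(m)) and F(2m+1) = F(m+1)^2 + F(m)^2, with BigInteger so large n don't overflow):

```java
import java.math.BigInteger;

public class FibHalving {
    // Returns the pair {F(k+1), F(k)}; k is halved at each level, so O(log n) levels.
    static BigInteger[] fibPair(long k) {
        if (k == 0) return new BigInteger[]{BigInteger.ONE, BigInteger.ZERO}; // {F(1), F(0)}
        BigInteger[] half = fibPair(k / 2);
        BigInteger a = half[0], b = half[1];                     // a = F(m+1), b = F(m), m = k/2
        BigInteger f2m1 = a.multiply(a).add(b.multiply(b));      // F(2m+1) = F(m+1)^2 + F(m)^2
        BigInteger f2m = b.multiply(a.shiftLeft(1).subtract(b)); // F(2m)   = F(m)(2F(m+1) - F(m))
        if (k % 2 == 0) {
            return new BigInteger[]{f2m1, f2m};                  // k = 2m:   {F(k+1), F(k)}
        } else {
            return new BigInteger[]{f2m1.add(f2m), f2m1};        // k = 2m+1: {F(k+1), F(k)}
        }
    }

    static BigInteger fib(long n) {
        return fibPair(n)[1];
    }
}
```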
Fibonacci(0) = 0
Fibonacci(1) = 1
Fibonacci(n) = Fibonacci(n - 1) + Fibonacci(n - 2), when n >= 2
Usually there are two ways to calculate a Fibonacci number:
Recursion:
public long getFibonacci(long n) {
    if (n <= 1) {
        return n;
    } else {
        return getFibonacci(n - 1) + getFibonacci(n - 2);
    }
}
This way is intuitive and easy to understand, but because it does not reuse calculated Fibonacci numbers, the time complexity is about O(2^n). On the other hand it does not store calculated results, so the space complexity is O(1), apart from the recursion stack, which grows to O(n) frames.
Dynamic Programming:
public long getFibonacci(long n) {
    if (n <= 1) {
        return n; // guard: without this, f[1] = 1 below throws for n = 0
    }
    long[] f = new long[(int) (n + 1)];
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i <= n; i++) {
        f[i] = f[i - 1] + f[i - 2];
    }
    return f[(int) n];
}
This dynamic-programming way calculates Fibonacci numbers bottom-up and reuses them when calculating the next one. The time complexity is pretty good, O(n), while the space complexity is O(n). Let's investigate whether the space complexity can be optimized: since f(i) only requires f(i-1) and f(i-2), there is no need to store all calculated Fibonacci numbers.
The more efficient implementation is:
public long getFibonacci(long n) {
    if (n <= 1) {
        return n;
    }
    long x = 0, y = 1;
    long ans = 0; // must be initialized, or the compiler rejects the return below
    for (int i = 2; i <= n; i++) {
        ans = x + y;
        x = y;
        y = ans;
    }
    return ans;
}
With time complexity O(n), and space complexity O(1).
Added: since Fibonacci numbers increase amazingly fast, long can only hold Fibonacci numbers up to F(92). In Java, we can use BigInteger to store larger Fibonacci numbers.
Precompute a large number of fib(n) results, and store them as a lookup table inside your algorithm. Bam, free "speed"
Now if you need to compute fib(101) and you already have fibs 0 to 100 stored, this is just like trying to compute fib(1).
Chances are this isn't what this homework is looking for, but it's a completely legit strategy and basically the idea of caching extracted further away from running the algorithm. If you know you're likely to be computing the first 100 fibs often and you need to do it really really fast, there's nothing faster than O(1). So compute those values entirely out of band and store them so they can be looked up later.
Of course, cache values as you compute them too :) Duplicated computation is waste.
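A sketch of the lookup-table strategy (the class name, table size, and BigInteger fallback are my choices): precompute once in a static initializer, then answer queries within the table in O(1) and extend iteratively beyond it.

```java
import java.math.BigInteger;

public class FibTable {
    static final int LIMIT = 100;
    static final BigInteger[] TABLE = new BigInteger[LIMIT + 1];
    static {
        // Precompute F(0)..F(LIMIT) once, out of band of any query.
        TABLE[0] = BigInteger.ZERO;
        TABLE[1] = BigInteger.ONE;
        for (int i = 2; i <= LIMIT; i++) {
            TABLE[i] = TABLE[i - 1].add(TABLE[i - 2]);
        }
    }

    static BigInteger fib(int n) {
        if (n <= LIMIT) return TABLE[n]; // O(1) lookup
        // Beyond the table, continue iterating from the last two stored values.
        BigInteger a = TABLE[LIMIT - 1], b = TABLE[LIMIT];
        for (int i = LIMIT + 1; i <= n; i++) {
            BigInteger next = a.add(b);
            a = b;
            b = next;
        }
        return b;
    }
}
```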
Here is a snippet of code with an iterative approach instead of recursion.
Output example:
Enter n: 5
F(5) = 5 ... computed in 1 milliseconds
Enter n: 50
F(50) = 12586269025 ... computed in 0 milliseconds
Enter n: 500
F(500) = ...4125 ... computed in 2 milliseconds
Enter n: 500
F(500) = ...4125 ... computed in 0 milliseconds
Enter n: 500000
F(500000) = 2955561408 ... computed in 4,476 ms
Enter n: 500000
F(500000) = 2955561408 ... computed in 0 ms
Enter n: 1000000
F(1000000) = 1953282128 ... computed in 15,853 ms
Enter n: 1000000
F(1000000) = 1953282128 ... computed in 0 ms
Some pieces of results are omitted with ... for a better view.
Code snippet:
public class CachedFibonacci {

    private static Map<BigDecimal, BigDecimal> previousValuesHolder;
    static {
        previousValuesHolder = new HashMap<>();
        previousValuesHolder.put(BigDecimal.ZERO, BigDecimal.ZERO);
        previousValuesHolder.put(BigDecimal.ONE, BigDecimal.ONE);
    }

    public static BigDecimal getFibonacciOf(long number) {
        if (0 == number) {
            return BigDecimal.ZERO;
        } else if (1 == number) {
            return BigDecimal.ONE;
        } else {
            if (previousValuesHolder.containsKey(BigDecimal.valueOf(number))) {
                return previousValuesHolder.get(BigDecimal.valueOf(number));
            } else {
                BigDecimal olderValue = BigDecimal.ONE,
                        oldValue = BigDecimal.ONE,
                        newValue = BigDecimal.ONE;
                for (int i = 3; i <= number; i++) {
                    newValue = oldValue.add(olderValue);
                    olderValue = oldValue;
                    oldValue = newValue;
                }
                previousValuesHolder.put(BigDecimal.valueOf(number), newValue);
                return newValue;
            }
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.print("Enter n: ");
            long inputNumber = scanner.nextLong();
            if (inputNumber >= 0) {
                long beginTime = System.currentTimeMillis();
                BigDecimal fibo = getFibonacciOf(inputNumber);
                long endTime = System.currentTimeMillis();
                long delta = endTime - beginTime;
                System.out.printf("F(%d) = %.0f ... computed in %,d milliseconds\n", inputNumber, fibo, delta);
            } else {
                System.err.println("You must enter a number >= 0");
                break;
            }
        }
    }
}
This approach runs much faster than the recursive version.
In such a situation, the iterative solution tends to be a bit faster, because each
recursive method call takes a certain amount of processor time. In principle, it is
possible for a smart compiler to avoid recursive method calls if they follow simple
patterns, but most compilers don’t do that. From that point of view, an iterative
solution is preferable.
UPDATE:
After the Java 8 release, with the Stream API available, there is one more way of calculating Fibonacci.
Checked with JDK 17.0.2.
Code:
public static BigInteger streamFibonacci(long n) {
    return Stream.iterate(new BigInteger[]{BigInteger.ONE, BigInteger.ONE},
                    p -> new BigInteger[]{p[1], p[0].add(p[1])})
            .limit(n)
            .reduce((a, b) -> b)
            .get()[0];
}
Test output:
Enter n (q for quit): 5
F(5) = 5 ... computed in 2 ms
Enter n (q for quit): 50
F(50) = 1258626902 ... computed in 0 ms
Enter n (q for quit): 500
F(500) = 1394232245 ... computed in 3 ms
Enter n (q for quit): 500000
F(500000) = 2955561408 ... computed in 4,343 ms
Enter n (q for quit): 1000000
F(1000000) = 1953282128 ... computed in 19,280 ms
The results are pretty good.
Keep in mind that the ... just stands for the omitted digits of the full numbers.
Having followed a similar approach some time ago, I've just realized there's another optimization you can make.
If you know two large consecutive answers, you can use this as a starting point. For example, if you know F(100) and F(101), then calculating F(104) is approximately as difficult (*) as calculating F(4) based on F(0) and F(1).
Calculating iteratively up is as efficient calculation-wise as doing the same using cached-recursion, but uses less memory.
Having done some sums, I have also realized that, for any given z < n:
F(n) = F(z) * F(n-z+1) + F(z-1) * F(n-z)
If n is odd, and you choose z=(n+1)/2, then this is reduced to
F(n)=F(z)^2+F(z-1)^2
It seems to me that, by a method I have yet to pin down, you should be able to use the above to find F(n) in a number of operations equal to:
the number of bits in n doublings (as per above) plus the number of 1 bits in n additions; in the case of 104, this would be (7 bits, 3 '1' bits) = 14 multiplications (squarings) and 10 additions.
(*) assuming adding two numbers takes the same time, irrespective of the size of the two numbers.
Here's a way of provably doing it in O(log n) (as the loop runs log n times):
/*
* Fast doubling method
* F(2n) = F(n) * (2*F(n+1) - F(n)).
* F(2n+1) = F(n+1)^2 + F(n)^2.
* Adapted from:
* https://www.nayuki.io/page/fast-fibonacci-algorithms
*/
private static long getFibonacci(int n) {
    long a = 0;
    long b = 1;
    for (int i = 31 - Integer.numberOfLeadingZeros(n); i >= 0; i--) {
        long d = a * ((b << 1) - a);
        long e = (a * a) + (b * b);
        a = d;
        b = e;
        if (((n >>> i) & 1) != 0) {
            long c = a + b;
            a = b;
            b = c;
        }
    }
    return a;
}
I am assuming here (as is conventional) that one multiply / add / whatever operation is constant time irrespective of number of bits, i.e. that a fixed-length data type will be used.
This page explains several methods of which this is the fastest. I simply translated it away from using BigInteger for readability. Here's the BigInteger version:
/*
* Fast doubling method.
* F(2n) = F(n) * (2*F(n+1) - F(n)).
* F(2n+1) = F(n+1)^2 + F(n)^2.
* Adapted from:
* http://www.nayuki.io/page/fast-fibonacci-algorithms
*/
private static BigInteger getFibonacci(int n) {
    BigInteger a = BigInteger.ZERO;
    BigInteger b = BigInteger.ONE;
    for (int i = 31 - Integer.numberOfLeadingZeros(n); i >= 0; i--) {
        BigInteger d = a.multiply(b.shiftLeft(1).subtract(a));
        BigInteger e = a.multiply(a).add(b.multiply(b));
        a = d;
        b = e;
        if (((n >>> i) & 1) != 0) {
            BigInteger c = a.add(b);
            a = b;
            b = c;
        }
    }
    return a;
}
Related
I am trying to make a parallel implementation of the Sieve of Eratosthenes. I made a boolean list which gets filled up with true's for the given size. Whenever a prime is found, all multiples of that prime are marked false in the boolean list.
The way I am trying to make this algorithm parallel is by firing up a new thread while still filtering the initial prime number. For example, the algorithm starts with prime = 2. In the filtering for loop, when i reaches prime * prime, I start another for loop in which every number between the prime (2) and prime * prime (4) is checked. If that index in the boolean list is still true, I fire up another thread to filter that prime number.
The nested for loop creates more and more overhead as the primes to filter progress, so I limited it to run only when the prime number is < 100. I am assuming that by that time the 100 million numbers will be somewhat filtered. The problem is that this way the primes to be filtered stay just under 9,500, while the algorithm should stop at around 10,000 primes (prime * prime < size (100M)). I also think this is not at all the correct way to go about it. I have searched a lot online but didn't manage to find any examples of parallel Java implementations of the sieve.
My code looks like this:
Main class:
public class Main {

    private static ListenableQueue<Integer> queue = new ListenableQueue<>(new LinkedList<>());
    private static ArrayList<Integer> primes = new ArrayList<>();
    private static boolean serialList[];
    private static ArrayList<Integer> serialPrimes = new ArrayList<>();
    private static ExecutorService exec = Executors.newFixedThreadPool(10);
    private static int size = 100000000;
    private static boolean list[] = new boolean[size];
    private static int lastPrime = 2;

    public static void main(String[] args) {
        Arrays.fill(list, true);
        parallel();
    }

    public static void parallel() {
        Long startTime = System.nanoTime();
        int firstPrime = 2;
        exec.submit(new Runner(size, list, firstPrime));
    }

    public static void parallelSieve(int size, boolean[] list, int prime) {
        int queuePrimes = 0;
        for (int i = prime; i * prime <= size; i++) {
            try {
                list[i * prime] = false;
                if (prime < 100) {
                    if (i == prime * prime && queuePrimes <= 1) {
                        for (int j = prime + 1; j < i; j++) {
                            if (list[j] && j % prime != 0 && j > lastPrime) {
                                lastPrime = j;
                                startNewThread(j);
                                queuePrimes++;
                            }
                        }
                    }
                }
            } catch (ArrayIndexOutOfBoundsException ignored) { }
        }
    }

    private static void startNewThread(int newPrime) {
        if ((newPrime * newPrime) < size) {
            exec.submit(new Runner(size, list, newPrime));
        } else {
            exec.shutdown();
            for (int i = 2; i < list.length; i++) {
                if (list[i]) {
                    primes.add(i);
                }
            }
        }
    }
}
Runner class:
public class Runner implements Runnable {

    private int arraySize;
    private boolean[] list;
    private int k;

    public Runner(int arraySize, boolean[] list, int k) {
        this.arraySize = arraySize;
        this.list = list;
        this.k = k;
    }

    @Override
    public void run() {
        Main.parallelSieve(arraySize, list, k);
    }
}
I feel like there is a much simpler way to solve this...
Do you guys have any suggestions as to how I can make this parallelization working and maybe a bit simpler?
Creating a performant concurrent implementation of an algorithm like the Sieve of Eratosthenes is somewhat more difficult than creating a performant single-threaded implementation. The reason is that you need to find a way to partition the work in a way that minimises communication and interference between the parallel worker threads.
If you achieve complete isolation then you can hope for a speed increase approaching the number of logical processors available, or about one order of magnitude on a typical modern PC. By contrast, using a decent single-threaded implementation of the sieve will give you a speedup of at least two to three orders of magnitude. One simple cop-out would be to simply load the data from a file when needed, or to shell out to a decent prime-sieving program like Kim Walisch's PrimeSieve.
Even if we only want to look at the parallelisation problem, it is still necessary to have some insight into the algorithm itself and into the machine it runs on.
The most important aspect is that modern computers have deep cache hierarchies where only the L1 cache - typically 32 KB - is accessible at full speed and all other memory accesses incur significant penalties. Translated to the Sieve of Eratosthenes this means that you need to sieve your target range one 32 KB window at a time, instead of striding each prime over many megabytes. The small primes up to the square root of the target range end must be sieved before the parallel dance begins, but then each segment or window can be sieved independently.
Sieving a given window or segment necessitates determining the start offsets of the small primes that you want to sieve by, which means at least one modulo division per small prime per window, and division is an extremely slow operation. However, if you sieve consecutive segments instead of arbitrary windows placed anywhere in the range, then you can keep the end offsets for each prime in a vector and use them as start offsets for the next segment, thus eliminating the expensive computation of the start offset.
Thus, one promising parallelisation strategy for the Sieve of Eratosthenes would be to give each worker thread a contiguous group of 32 KB blocks to sieve, so that the start offset calculation needs to happen only once per worker. This way there cannot be memory access contention between workers, since each has its own independent subrange of the target range.
However, before you begin to parallelise - i.e., make your code more complex - you should first slim it down and reduce the work to be done to the absolute essentials. For example, take a look at this fragment from your code:
for (int i = prime; i * prime <= size; i++)
list[i * prime] = false;
Instead of recomputing loop bounds in every iteration and indexing with a multiplication, check the loop variable against a precomputed, loop-invariant value and reduce the multiplication to iterated addition:
for (int o = prime * prime; o <= size; o += prime)
list[o] = false;
There are two simple sieve-specific optimisations that can give significant speed boosts.
1) Leave the even numbers out of your sieve and pull the prime 2 out of thin air when needed. Bingo, you just doubled your performance.
2) Instead of sieving each segment by the small odd primes 3, 5, 7 and so on, blast a precomputed pattern over the segment (or even the whole range). This saves time because these small primes make many, many steps in each segment and account for the lion's share of sieving time.
There are more possible optimisations including a couple more low-hanging fruit but either the returns are diminishing or the effort curve rises steeply. Try searching Code Review for 'sieve'. Also, don't forget that you're fighting a Java compiler in addition to the algorithmic problem and the machine architecture, i.e. things like array bounds checking which your compiler may or may not be able to hoist out of loops.
To give you a ballpark figure: a single-threaded segmented odds-only sieve with precomputed patterns can sieve the whole 32-bit range in 2 to 4 seconds in C#, depending on how much TLC you apply in addition to things mentioned above. Your much smaller problem of primes up to 100000000 (1e8) is solved in less than 100 ms on my aging notebook.
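For readers working in Java, here is a minimal single-threaded sketch of the windowed idea described above (my code, not the answerer's; it omits the odds-only and precomputed-pattern optimisations, and the names are mine): sieve the small primes up to sqrt(limit) first, then cross off multiples one fixed-size segment at a time.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SegmentedSieve {
    static List<Integer> primesUpTo(int limit) {
        int segmentSize = 32 * 1024; // roughly L1-cache sized
        int sqrtLimit = (int) Math.sqrt(limit);

        // Step 1: small primes up to sqrt(limit) with a plain sieve.
        boolean[] composite = new boolean[sqrtLimit + 1];
        List<Integer> smallPrimes = new ArrayList<>();
        for (int i = 2; i <= sqrtLimit; i++) {
            if (!composite[i]) {
                smallPrimes.add(i);
                for (long j = (long) i * i; j <= sqrtLimit; j += i) composite[(int) j] = true;
            }
        }

        // Step 2: sieve [sqrtLimit+1, limit] one window at a time.
        List<Integer> primes = new ArrayList<>(smallPrimes);
        boolean[] eliminated = new boolean[segmentSize];
        for (int low = sqrtLimit + 1; low <= limit; low += segmentSize) {
            Arrays.fill(eliminated, false);
            int high = Math.min(low + segmentSize - 1, limit);
            for (int p : smallPrimes) {
                int start = (low + p - 1) / p * p; // first multiple of p >= low
                for (int j = start; j <= high; j += p) eliminated[j - low] = true;
            }
            for (int i = low; i <= high; i++) {
                if (!eliminated[i - low]) primes.add(i);
            }
        }
        return primes;
    }
}
```

Per the answer's advice, carrying the per-prime end offsets from one segment to the next would remove the division in the start-offset line; the sketch leaves it in for clarity.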
Here's some code that shows how windowed sieving works. For clarity I left off all optimisations like odds-only representation or wheel-3 stepping when reading out the primes and so on. It's C# but that should be similar enough to Java to be readable.
Note: I called the sieve array eliminated because a true value indicates a crossed-off number (saves filling the array with all true at the beginning and it is more logical anyway).
static List<uint> small_primes_between (uint m, uint n)
{
    m = Math.Max(m, 2);
    if (m > n)
        return new List<uint>();
    Trace.Assert(n - m < int.MaxValue);
    uint sieve_bits = n - m + 1;
    var eliminated = new bool[sieve_bits];
    foreach (uint prime in small_primes_up_to((uint)Math.Sqrt(n)))
    {
        uint start = prime * prime, stride = prime;
        if (start >= m)
            start -= m;
        else
            start = (stride - 1) - (m - start - 1) % stride;
        for (uint j = start; j < sieve_bits; j += stride)
            eliminated[j] = true;
    }
    return remaining_numbers(eliminated, m);
}

//---------------------------------------------------------------------------------------------

static List<uint> remaining_numbers (bool[] eliminated, uint sieve_base)
{
    var result = new List<uint>();
    for (uint i = 0, e = (uint)eliminated.Length; i < e; ++i)
        if (!eliminated[i])
            result.Add(sieve_base + i);
    return result;
}

//---------------------------------------------------------------------------------------------

static List<uint> small_primes_up_to (uint n)
{
    Trace.Assert(n < int.MaxValue);   // size_t is int32_t in .Net (!)
    var eliminated = new bool[n + 1]; // +1 because indexed by numbers
    eliminated[0] = true;
    eliminated[1] = true;
    for (uint i = 2, sqrt_n = (uint)Math.Sqrt(n); i <= sqrt_n; ++i)
        if (!eliminated[i])
            for (uint j = i * i; j <= n; j += i)
                eliminated[j] = true;
    return remaining_numbers(eliminated, 0);
}
I've got two different methods, one is calculating Fibonacci sequence to the nth element by using iteration and the other one is doing the same thing using recursive method.
Program example looks like this:
import java.util.Scanner;

public class recursionVsIteration {

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);

        //nth element input
        System.out.print("Enter the last element of Fibonacci sequence: ");
        int n = sc.nextInt();

        //Print out iteration method
        System.out.println("Fibonacci iteration:");
        long start = System.currentTimeMillis();
        System.out.printf("Fibonacci sequence(element at index %d) = %d \n", n, fibIteration(n));
        System.out.printf("Time: %d ms\n", System.currentTimeMillis() - start);

        //Print out recursive method
        System.out.println("Fibonacci recursion:");
        start = System.currentTimeMillis();
        System.out.printf("Fibonacci sequence(element at index %d) = %d \n", n, fibRecursion(n));
        System.out.printf("Time: %d ms\n", System.currentTimeMillis() - start);
    }

    //Iteration method
    static int fibIteration(int n) {
        int x = 0, y = 1, z = 1;
        for (int i = 0; i < n; i++) {
            x = y;
            y = z;
            z = x + y;
        }
        return x;
    }

    //Recursive method
    static int fibRecursion(int n) {
        if ((n == 1) || (n == 0)) {
            return n;
        }
        return fibRecursion(n - 1) + fibRecursion(n - 2);
    }
}
I was trying to find out which method is faster. I came to the conclusion that recursion is faster for smaller values of n, but as n increases, recursion becomes slower and iteration becomes faster. Here are three different results for three different n:
Example #1 (n = 10)
Enter the last element of Fibonacci sequence: 10
Fibonacci iteration:
Fibonacci sequence(element at index 10) = 55
Time: 5 ms
Fibonacci recursion:
Fibonacci sequence(element at index 10) = 55
Time: 0 ms
Example #2 (n = 20)
Enter the last element of Fibonacci sequence: 20
Fibonacci iteration:
Fibonacci sequence(element at index 20) = 6765
Time: 4 ms
Fibonacci recursion:
Fibonacci sequence(element at index 20) = 6765
Time: 2 ms
Example #3 (n = 30)
Enter the last element of Fibonacci sequence: 30
Fibonacci iteration:
Fibonacci sequence(element at index 30) = 832040
Time: 4 ms
Fibonacci recursion:
Fibonacci sequence(element at index 30) = 832040
Time: 15 ms
What I really want to know is why iteration all of a sudden became faster and recursion became slower. I'm sorry if I missed some obvious answer to this question, but I'm still new to programming, I really don't understand what's going on behind the scenes, and I would like to know. Please provide a good explanation or point me in the right direction so I can find out the answer myself. Also, if this is not a good way to test which method is faster, let me know and suggest a different method.
Thanks in advance!
For terseness, let F(x) be the recursive Fibonacci call.
F(10) = F(9) + F(8)
F(10) = F(8) + F(7) + F(7) + F(6)
F(10) = F(7) + F(6) + F(6) + F(5) + 4 more calls
....
So you are calling F(8) twice,
F(7) 3 times, F(6) 5 times, F(5) 8 times... and so on
So with larger inputs, the tree gets bigger and bigger.
This article does a comparison between recursion and iteration and covers their application on generating fibonacci numbers.
As noted in the article,
The reason for the poor performance is heavy push-pop of the registers in the ill level of each recursive call.
which basically says there is more overhead in the recursive method.
Also, take a look at Memoization
When doing the recursive implementation of Fibonacci algorithm, you are adding redundant calls by recomputing the same values over and over again.
fib(5) = fib(4) + fib(3)
fib(4) = fib(3) + fib(2)
fib(3) = fib(2) + fib(1)
Notice, that fib(2) will be redundantly calculated both for fib(4) and for fib(3).
However this can be overcome by a technique called Memoization, that improves the efficiency of recursive Fibonacci by storing the values, you have calculated once. Further calls of fib(x) for known values may be replaced by a simple lookup, eliminating the need for further recursive calls.
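A minimal memoized version of the recursive method might look like this (my sketch; the cache field and names are mine):

```java
import java.util.HashMap;
import java.util.Map;

public class MemoFib {
    private static final Map<Integer, Long> cache = new HashMap<>();

    // Same recursion as fibRecursion, but each fib(x) is computed only once;
    // later calls are replaced by a map lookup, so the call tree collapses to O(n) calls.
    static long fib(int n) {
        if (n <= 1) return n;
        Long known = cache.get(n);
        if (known != null) return known;
        long value = fib(n - 1) + fib(n - 2);
        cache.put(n, value);
        return value;
    }
}
```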
This is the main difference between the iterative and recursive approaches, if you are interested, there are also other, more efficient algorithms of calculating Fibonacci numbers.
Why is Recursion slower?
When you call your function from within itself (recursion), the runtime allocates a new activation record (just think of it as an ordinary stack frame) for that new call. The frame keeps the call's state, variables, and return address. A frame is created for each call, and this process continues until the base case is reached. So when the input becomes larger, a larger stack segment is needed for the whole computation, and creating and managing those records also costs time.
Also, in recursion the stack grows at run time; the compiler does not know at compile time how much memory will be occupied.
That is why, if you don't handle your base case properly, you will get a StackOverflowError :).
Using recursion the way you have, the time complexity is O(fib(n)), which is very expensive. The iterative method is O(n). This doesn't show in your results because a) your tests are very short, so the code won't even be JIT-compiled, and b) you used very small numbers.
Both examples will become faster the more you run them. Once a loop or method has been called about 10,000 times, it should be compiled to native code.
If anyone is interested in an iterative Function with array:
public static void fibonacci(int y)
{
    int[] a = new int[y + 1];
    a[0] = 0;
    a[1] = 1;
    System.out.println("Step 0: 0");
    System.out.println("Step 1: 1");
    for (int i = 2; i <= y; i++) {
        a[i] = a[i - 1] + a[i - 2];
        System.out.println("Step " + i + ": " + a[i]);
    }
    System.out.println("Array size --> " + a.length);
}
This solution crashes for input value 0.
Reason: the array a will be created with length 0+1 = 1, so the subsequent assignment to a[1] results in an ArrayIndexOutOfBoundsException.
Either add an if statement that returns 0 for y = 0, or allocate the array with length y+2, which wastes one int but is still constant extra space and does not change the big O.
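Applying the first suggested fix, a guarded sketch might look like this (names are mine; it returns the value rather than printing every step):

```java
public class IterFib {
    // Guard y == 0 (and y == 1) explicitly so the array accesses can't go out of bounds.
    public static long fibonacci(int y) {
        if (y == 0) return 0;
        if (y == 1) return 1;
        long[] a = new long[y + 1];
        a[0] = 0;
        a[1] = 1;
        for (int i = 2; i <= y; i++) {
            a[i] = a[i - 1] + a[i - 2];
        }
        return a[y];
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(0));  // 0
        System.out.println(fibonacci(10)); // 55
    }
}
```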
I prefer a mathematical solution using the golden ratio (Binet's formula). Enjoy.
private static final double GOLDEN_NUMBER = (1 + Math.sqrt(5)) / 2; // the exact golden ratio, not the truncated 1.618
public long fibonacci(int n) {
    double sqrt5 = Math.sqrt(5);
    double result = Math.pow(GOLDEN_NUMBER, n) - Math.pow(1d - GOLDEN_NUMBER, n);
    // note: double precision keeps this exact only up to about n = 70
    return Math.round(result / sqrt5);
}
Whenever you want to know how long a particular algorithm will take, it's best to reason about its time complexity.
Work out the time complexity on paper in terms of O(something).
Comparing the two approaches above, the time complexity of the iterative approach is O(n) whereas that of the recursive approach is O(2^n).
Let's try to find the time complexity of fib(4).
With the iterative approach, the loop runs 4 times, so its time complexity is O(n).
With the recursive approach:

            fib(4)
           /      \
      fib(3)      fib(2)
      /    \      /    \
 fib(2) fib(1) fib(1) fib(0)
  /   \
fib(1) fib(0)

So fib() is called 9 times in total, which is somewhat lower than 2^n (remember that big O only gives an upper bound).
As a result we can say that the iterative approach runs in polynomial time, whereas the recursive one runs in exponential time.
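You can confirm the count of 9 calls empirically with a small counter (a sketch; names are mine):

```java
public class CallCounter {
    static int calls = 0;

    static int fib(int n) {
        calls++;                       // count every entry into fib()
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        fib(4);
        System.out.println(calls);     // 9 calls for fib(4)
    }
}
```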
The recursive approach that you use is not efficient. I would suggest you use tail recursion. In contrast to your approach, tail recursion keeps only one pending computation at each step (though note that the JVM does not actually eliminate the frames, so the call stack still grows linearly).
public static int tailFib(int n) {
if (n <= 1) {
return n;
}
return tailFib(0, 1, n);
}
private static int tailFib(int a, int b, int count) {
if(count <= 0) {
return a;
}
return tailFib(b, a+b, count-1);
}
public static void main(String[] args) throws Exception{
for (int i = 0; i <10; i++){
System.out.println(tailFib(i));
}
}
I have a recursive solution where the computed values are stored to avoid further unnecessary computations. The code is provided below:
public static int fibonacci(int n) {
if(n <= 0) return 0;
if(n == 1) return 1;
int[] arr = new int[n+1];
// a plain int array is faster here than a List such as:
// List<Integer> lis = new ArrayList<>(Collections.nCopies(n+1, 0));
arr[0] = 0;
arr[1] = 1;
return fiboHelper(n, arr);
}
public static int fiboHelper(int n, int[] arr){
if(n <= 0) {
return arr[0];
}
else if(n == 1) {
return arr[1];
}
else {
if( arr[n-1] != 0 && (arr[n-2] != 0 || (arr[n-2] == 0 && n-2 == 0))){
return arr[n] = arr[n-1] + arr[n-2];
}
else if (arr[n-1] == 0 && arr[n-2] != 0 ){
return arr[n] = fiboHelper(n-1, arr) + arr[n-2];
}
else {
return arr[n] = fiboHelper(n-2, arr) + fiboHelper(n-1, arr );
}
}
}
In a computer contest, I was given a problem where I had to manipulate input data. The input has been split() into an array where data[0] is the number of repetitions. There can be up to 10^18 repetitions. My program returned Exception in thread "main" java.lang.OutOfMemoryError: Java heap space and I failed the contest.
Here's a piece of my code that's eating up memory and CPU:
long product[][]=new long[data[0]][2];
product[0][0]=data[1];
product[0][1]=data[2];
for(int a=1;a<data[0];a++){
product[a][0]=((data[5]*product[a-1][0] + data[6]) % data[3]) + 1; // Pi = ((A*Pi-1 + B) mod M) + 1 (for all i = 2..N)
product[a][1]=((data[7]*product[a-1][1] + data[8]) % data[4]) + 1; // Wi = ((C*Wi-1 + D) mod K) + 1 (for all i = 2..N)
}
Here's some of the input data:
980046644627629799 9 123456 18 10000000 831918484 451864686 840000324 650000765
972766173386786486 123 1 10000000 10000000 590000001 680000000 610000001 970000002
299896237124947938 681206 164538 2280874 981991 416793690 904023823 813682336 774801135
My program can only work up to about 7 or 8 digits, then it takes minutes to run. With 18 digits, it crashed almost as soon as I clicked "Run" in Eclipse.
I'm curious as to how is it possible to manipulate that much data on a normal computer. Please let me know if my question is unclear or you need more information. Thanks!
You can't have, and don't need, an array of such a huge length. You just need to track the most recent 2 values. E.g., just have product1 and product2.
Also, consider testing if either number is a NaN after each iteration. If so, throw an Exception and give the iteration number.
Because once you get a NaN they will all be NaN. Except you are using long, so scratch that. "Nevermind". :-)
long product[][]=new long[data[0]][2];
This is the only line in the code you pasted that allocates memory. You allocate an array whose length will be data[0]! As data[0] grows, so does the array. What is the formula you're trying to apply here?
The first input data you provide :
980046644627629799
is already too large to even declare an array for. Try creating a single dimension array with that as its length and see what happens....
Are you sure you don't just want a 1 x 2 matrix that you accumulate over? Explain your intended algorithm clearly and we can help you with a more optimal solution.
Let's put the numbers into perspective.
Memory: one long takes 8 bytes, and each row of the array holds two longs, so 10^18 rows take 16,000,000 terabytes. Way too much.
Time: 10,000,000 operations ≈ 1 second, so 10^18 steps ≈ 30 centuries. Also way too much.
You can solve the memory problem by realising that you only need the most recent values at any time, and that the entire array is redundant:
long currentP = data[1];
long currentW = data[2];
for (int a = 1; a < data[0]; a++)
{
    currentP = ((data[5] * currentP + data[6]) % data[3]) + 1;
    currentW = ((data[7] * currentW + data[8]) % data[4]) + 1;
}
The time problem is a bit trickier to solve. Since modulus is used, you can observe that the numbers must enter a cycle at some point. Once you find the cycle, you can predict what the value will be after n iterations without having to do each iteration manually.
The simplest method for finding cycles is to keep track of whether or not you visited each element, and then go through until you encounter an element you've seen before. In this situation, the amount of memory required is proportional to M and K (data[3] and data[4]). If they are too large, a more space-efficient cycle detection algorithm must be used.
Here is an example which finds the value for P:
public static void main(String[] args)
{
// value = (A * prevValue + B) % M + 1
final long NOT_SEEN = -1; // the code used for values not visited before
long[] data = { 980046644627629799L, 9, 123456, 18, 10000000, 831918484, 451864686, 840000324, 650000765 };
long N = data[0]; // the number of iterations
long S = data[1]; // the initial value of the sequence
long M = data[3]; // the modulus divisor
long A = data[5]; // multiply by this
long B = data[6]; // add this
int max = (int) Math.max(M, S); // all the numbers (except first) must be less than or equal to M
long[] seenTime = new long[max + 1]; // whether or not a value was seen and how many iterations it took
// initialize the values of 'seenTime' to 'not seen'
for (int i = 0; i < seenTime.length; i++)
{
seenTime[i] = NOT_SEEN;
}
// find the cycle
long count = 0;
long cycleValue = S; // the current value in the series
while (seenTime[(int)cycleValue] == NOT_SEEN)
{
seenTime[(int)cycleValue] = count;
cycleValue = (A * cycleValue + B) % M + 1;
count++;
}
long cycleLength = count - seenTime[(int)cycleValue];
long cycleOffset = seenTime[(int)cycleValue];
long result;
if (N < cycleOffset)
{
// Special case: requested iteration occurs before the cycle starts
// Straightforward simulation
long value = S;
for (long i = 0; i < N; i++)
{
value = (A * value + B) % M + 1;
}
result = value;
}
else
{
// Normal case: requested iteration occurs inside the cycle
// Simulate just the relevant part of one cycle
long positionInCycle = (N - cycleOffset) % cycleLength;
long value = cycleValue;
for (long i = 0; i < positionInCycle; i++)
{
value = (A * value + B) % M + 1;
}
result = value;
}
System.out.println(result);
}
I am only giving you the solution because it looks like the contest is over. The important lesson to learn from this is that you should always check the bounds to see whether your solution is practical before you start coding it up.
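When M and K are too large to afford a visited array, a constant-space cycle finder such as Floyd's tortoise-and-hare can locate the cycle instead. A sketch for the same f(x) = (A*x + B) % M + 1 recurrence (constants taken from the first input line; structure and names are mine):

```java
public class FloydCycle {
    // The recurrence from the problem: f(x) = (A*x + B) % M + 1
    static final long A = 831918484, B = 451864686, M = 18;

    static long f(long x) {
        return (A * x + B) % M + 1;
    }

    public static void main(String[] args) {
        long start = 9;
        // Phase 1: tortoise moves 1 step, hare moves 2, until they meet inside the cycle.
        long tortoise = f(start), hare = f(f(start));
        while (tortoise != hare) {
            tortoise = f(tortoise);
            hare = f(f(hare));
        }
        // Phase 2: reset the tortoise; the next meeting point is the first element of the cycle.
        long offset = 0;
        tortoise = start;
        while (tortoise != hare) {
            tortoise = f(tortoise);
            hare = f(hare);
            offset++;
        }
        // Phase 3: walk one full loop to measure the cycle length.
        long length = 1;
        hare = f(tortoise);
        while (tortoise != hare) {
            hare = f(hare);
            length++;
        }
        System.out.println("cycle starts at iteration " + offset + ", length " + length);
    }
}
```

This uses O(1) memory regardless of the modulus, at the cost of walking the sequence a few times.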
I was studying tail-call recursion and came across some documentation mentioning that Sun's Java does not implement tail-call optimization.
I wrote following code to calculate fibonacci number in 3 different ways:
1. Iterative
2. Head Recursive
3. Tail Recursive
public class Fibonacci {
public static void main(String[] args) throws InterruptedException {
int n = Integer.parseInt(args[0]);
System.out.println("\n Value of n : " + n);
System.out.println("\n Using Iteration : ");
long l1 = System.nanoTime();
fibonacciIterative(n);
long l2 = System.nanoTime();
System.out.println("iterative time = " + (l2 - l1));
System.out.println(fibonacciIterative(n));
System.out.println("\n Using Tail recursion : ");
long l3 = System.nanoTime();
fibonacciTail(n);
long l4 = System.nanoTime();
System.out.println("Tail recursive time = " + (l4 - l3));
System.out.println(fibonacciTail(n));
System.out.println("\n Using Recursion : ");
long l5 = System.nanoTime();
fibonacciRecursive(n);
long l6 = System.nanoTime();
System.out.println("Head recursive time = " + (l6 - l5));
}
private static long fibonacciRecursive(int num) {
if (num == 0) {
return 0L;
}
if (num == 1) {
return 1L;
}
return fibonacciRecursive(num - 1) + fibonacciRecursive(num - 2);
}
private static long fibonacciIterative(int n) throws InterruptedException {
long[] arr = new long[n + 1];
arr[0] = 0;
arr[1] = 1;
for (int i = 2; i <= n; i++) {
// Thread.sleep(1);
arr[i] = arr[i - 1] + arr[i - 2];
}
return arr[n];
}
private static long fibonacciTail(int n) {
if (n == 0)
return 0;
return fibHelper(n, 1, 0, 1);
}
private static long fibHelper(int n, int m, long fibM_minus_one, long fibM) {
if (n == m)
return fibM;
return fibHelper(n, m + 1, fibM, fibM_minus_one + fibM);
}
}
On running this program I noted some results:
The head recursive method does not finish for n > 50; the program appears to hang. Any idea why this could happen?
The tail recursive method took significantly less time than head recursion, and sometimes even less than the iterative method. Does that mean Java does some tail-call optimization internally?
And if it does, why did it give a StackOverflowError at n > 5000?
System specs:
Intel core 5 processor,
Windows XP,
32 bit Java 1.6
Default stack size for JVM.
Does it mean that java does some Tail call optimization internally?
No, it does not. The HotSpot JIT compilers do not implement tail-call optimization.
The results you are observing are typical of the anomalies that you see in a Java benchmark that doesn't take account of JVM warmup. For instance, the "first few" times a method is called, it will be executed by the interpreter. Then the JIT compiler will compile the method ... and it will get faster.
To get meaningful results, put a loop around the whole lot and run it a number of times until the timings stabilize. Then discard the results from the early iterations.
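The advice above can be sketched as a small warmup harness (the loop counts are arbitrary choices of mine):

```java
public class Benchmark {
    // Simple O(n) iterative Fibonacci used as the workload under test.
    static long fib(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) { long t = b; b = a + b; a = t; }
        return a;
    }

    public static void main(String[] args) {
        int n = 40;
        // Warmup: give the JIT compiler enough invocations to compile fib().
        for (int i = 0; i < 20_000; i++) fib(n);
        // Measure several runs and keep the best, discarding early noise.
        long best = Long.MAX_VALUE;
        for (int run = 0; run < 5; run++) {
            long start = System.nanoTime();
            for (int i = 0; i < 10_000; i++) fib(n);
            best = Math.min(best, System.nanoTime() - start);
        }
        System.out.println("best of 5 runs: " + best + " ns for 10,000 calls");
    }
}
```

For serious measurements a dedicated harness such as JMH is the safer choice, since it also handles dead-code elimination and other JIT effects.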
... why did it give a StackOverflowError at n > 5000?
That's just evidence that there isn't any tail-call optimization happening.
For the first question: think about what 2^50 is (or something close to it). For each number N, the naive recursive Fib calls itself twice (for the two prior values); each of those calls two more, and so on, so the number of calls grows to roughly 2^(N-k) (k is probably 2 or 3). That is why it appears to hang for n > 50.
The second observation is because the tail version is a straight chain of N recursive calls. Instead of branching two ways into (N-1) and (N-2), it simply builds up from M = 1, M = 2, ... M = N, carrying the previous value along at each step. Since it is an O(N) operation, it is comparable to the iterative method; the only difference is how the JIT compiler optimizes it. The problem with recursion, though, is that each level adds a stack frame, so at some limit you run out of stack space, and in general it should still be slower than the iterative method.
Regarding point 1: Computing Fibonacci numbers recursively without memoization leads to a run time that is exponential in n. This goes for any programming language that does not automatically memoize function results (such as most mainstream non-functional languages, e.g. Java, C#, C++, ...). The reason is that the same functions will get called over and over again - e.g. f(8) will call f(7) and f(6); f(7) will call f(6) and f(5), so that f(6) gets called twice. This effect propagates and causes an exponential growth in the number of function calls. Here's a visualization of which functions get called:
f(8)
    f(7)
        f(6)
            f(5)
                f(4)
                ...
                f(3)
                ...
            f(4)
            ...
        f(5)
            f(4)
            ...
            f(3)
            ...
    f(6)
        f(5)
        ...
        f(4)
        ...
You can use memoization to speed up the head recursion, but it has a trade-off. I have tested the following code: for N <= 40 this approach is actually worse, because the HashMap operations are not free.
private static final Map<Integer,Long> map = new HashMap<Integer,Long>();
private static long fibonacciRecursiveMemoization(int num) {
if (num == 0) {
return 0L;
}
if (num == 1) {
return 1L;
}
int num1 = num - 1;
int num2 = num - 2;
long numResult1 = 0;
long numResult2 = 0;
if(map.containsKey(num1)){
numResult1 = map.get(num1);
}else{
numResult1 = fibonacciRecursiveMemoization(num1);
map.put(num1, numResult1);
}
if(map.containsKey(num2)){
numResult2 = map.get(num2);
}else{
numResult2 = fibonacciRecursiveMemoization(num2);
map.put(num2, numResult2);
}
return numResult1 + numResult2;
}
when the value of n : 44
Using Iteration :
iterative time = 6984
Using Tail recursion :
Tail recursive time = 8940
Using Memoization Recursion :
Memoization recursive time = 1799949
Using Recursion :
Head recursive time = 12697568825
I am writing a "simple" program to determine the Nth number in the Fibonacci sequence. Ex: the 7th number in the sequence is: 13. I have finished writing the program, it works, but beginning at the 40th number it begins to delay, and takes longer, and longer. My program has to go to the 100th spot in the series.
How can I fix this so it doesn't take so long? This is very basic program, so I don't know all the fancy syntax codes.. my formula is:
if n =1 || n = 0
return n;
else
return F(n-1) + F(n-2);
This works great until it goes past the 40th term. What other statement do I have to add to make it go quicker for higher numbers??
The problem is that because you are using simple recursion, you re-evaluate F(n) multiple times, so your execution time is exponential.
There are two simple ways to fix this:
1) Cache values of F(n) when they are evaluated the first time. Check the cache first before evaluating F(n) to see if you have already calculated it for this n.
2) Use an iterative approach: Calculate F(1), F(2), F(3), etc... until you reach the number you need.
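Option 1 (caching) might be sketched like this. BigInteger is used because the 100th Fibonacci number overflows a long (F(93) already does); names are mine:

```java
import java.math.BigInteger;

public class CachedFib {
    // Cache for F(0)..F(100); a null entry means "not computed yet".
    private static final BigInteger[] cache = new BigInteger[101];
    static {
        cache[0] = BigInteger.ZERO;
        cache[1] = BigInteger.ONE;
    }

    static BigInteger fib(int n) {
        if (cache[n] != null) return cache[n];   // check the cache before recursing
        cache[n] = fib(n - 1).add(fib(n - 2));
        return cache[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(100)); // 354224848179261915075
    }
}
```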
The issue is that your algorithm, while mathematically pure (and nice), isn't very good.
For every number it wants to calculate, it has to calculate two lower ones, which in turn have to calculate two lower ones, etc. Your current algorithm has a complexity of about O(1.6^n), so for larger numbers (100, for example) it takes a very long time.
This book, Structure and Interpretation of Computer Programs, has a nice diagram showing what happens when you compute fib(5) with your algorithm:
(source: mit.edu)
The simplest thing to do is to store F(n-1) and F(n-2) so that you don't have to calculate them from scratch every time. In other words, rather than using recursion, use a loop. That takes the complexity of the algorithm from O(1.6^n) down to O(n).
There are a number of solutions. The most straightforward is to use memoization. There's also Binet's formula, which will give you the nth Fibonacci number in constant time.
For memoization, you store your results for F[a_i] in a map or list of some kind. In the naive recursion, you compute F[4] hundreds of thousands of times, for example. By storing all these results as you find them, the recursion ceases to branch like a tree and behaves like the straightforward iterative solution.
If this isn't homework, use Binet's formula. It's the fastest method available.
Try this example, it calculates the millionth Fibonacci number in a reasonable time frame without any loss of precision.
import java.math.BigInteger;
/*
250000th fib # is: 36356117010939561826426 .... 10243516470957309231046875
Time to compute: 3.5 seconds.
1000000th fib # is: 1953282128707757731632 .... 93411568996526838242546875
Time to compute: 58.1 seconds.
*/
public class Fib {
public static void main(String... args) {
int place = args.length > 0 ? Integer.parseInt(args[0]) : 1000 * 1000;
long start = System.nanoTime();
BigInteger fibNumber = fib(place);
long time = System.nanoTime() - start;
System.out.println(place + "th fib # is: " + fibNumber);
System.out.printf("Time to compute: %5.1f seconds.%n", time / 1.0e9);
}
private static BigInteger fib(int place) {
BigInteger a = new BigInteger("0");
BigInteger b = new BigInteger("1");
while (place-- > 1) {
BigInteger t = b;
b = a.add(b);
a = t;
}
return b;
}
}
Create an array with 100 values, then when you calculate a value for Fib(n), store it in the array and use that array to get the values of Fib(n-1) and Fib(n-2).
If you're calling Fib(100) without storing any of the previously calculated values, you're going to make your java runtime explode.
Pseudocode:
array[0] = 0;
array[1] = 1;
for 2:100
array[n] = array[n-1] + array[n-2];
The problem is not Java, but the way you are implementing your Fibonacci algorithm.
You are computing the same values many times, which is slowing your program.
Try something like this : Fibonacci with memoization
                    F(n)
                  /      \
            F(n-1)        F(n-2)
            /    \        /    \
       F(n-2)  F(n-3)  F(n-3)  F(n-4)
       /    \
  F(n-3)  F(n-4)
Notice that many computations are repeated!
The important point to note is that this algorithm is exponential because it does not store the results of previously calculated numbers; e.g. F(n-3) is called 3 times.
A better solution is the iterative code written below:
function fib2(n) {
if n = 0
return 0
create an array f[0.... n]
f[0] = 0, f[1] = 1
for i = 2...n:
f[i] = f[i - 1] + f[i - 2]
return f[n]
}
For more details refer to Algorithms by Dasgupta, chapter 0.2.
My solution using Java 8 Stream:
public class Main {
public static void main(String[] args) {
int n = 10;
Fibonacci fibonacci = new Fibonacci();
LongStream.generate(fibonacci::next)
.skip(n)
.findFirst()
.ifPresent(System.out::println);
}
}
public class Fibonacci {
private long next = 1;
private long current = 1;
public long next() {
long result = current;
long previous = current;
current = next;
next = current + previous;
return result;
}
}
If you use the naive approach, you'll end up with an exploding number of identical calculations, i.e. to calculate fib(n) you have to calculate fib(n-1) and fib(n-2); then to calculate fib(n-1) you have to calculate fib(n-2) and fib(n-3), etc. A better approach is to do the inverse: start by calculating fib(0), fib(1), fib(2) and store the values in a table (array), then calculate each subsequent value from the values already stored. This is also called memoization. Try this and you should be able to calculate large fib numbers.
This is the code in Python, which can easily be converted to C or Java. The first function is recursive and the second is the iterative solution.
def fibo(n, i=1, s=1, s_1=0):
if n <= i: return s
else: return fibo(n, i+1, s+s_1, s)
def fibo_iter_code(n):
s, s_1 = 1, 0
for i in range(n-1):
temp = s
s, s_1 = s+s_1, temp
print(s)
Too slow...
Better:
(JavaScript example)
function fibonacci(n) {
var a = 0, b = 1;
for (var i = 0; i < n; i++) {
a += b;
b = a - b;
}
return a;
}
import java.util.*;
public class FibonacciNumber
{
public static void main(String[] args)
{
int high = 1, low = 1;
int num;
Scanner in = new Scanner(System.in);
try
{
System.out.print("Enter Number : " );
num = in.nextInt();
System.out.println( low);
while(high < num && num < 2000000000)
{
System.out.println(high);
high = low + high;
low = high - low;
}
} catch (InputMismatchException e) {
System.out.print("Limit Exceeded");
}
}
}
/* Ouput :
Enter Number : 1999999999
1
1
2
3
5
8
13
21
34
55
89
144
233
377
610
987
1597
2584
4181
6765
10946
17711
28657
46368
75025
121393
196418
317811
514229
832040
1346269
2178309
3524578
5702887
9227465
14930352
24157817
39088169
63245986
102334155
165580141
267914296
433494437
701408733
1134903170
1836311903
-1323752223
512559680
-811192543
-298632863
-1109825406
-1408458269
1776683621
368225352 */
The naive implementation is natural and elegant, but during execution the recursive calls create a binary tree of calls. Besides the already mentioned memoization (caching previous F(n) results to avoid unnecessary re-traversal of that tree), you can also go for a tail-call style, the already mentioned iterative approach, or matrix multiplication. For example, Java 8 memoization:
private static final Map<Long, Long> memo = new HashMap<>();
static {
memo.put(0L, 0L);
memo.put(1L, 1L);
}
public static void main(String[] args) {
System.out.println(fibonacci(0));
System.out.println(fibonacci(43));
System.out.println(fibonacci(92));
}
public static long fibonacci(long n) {
return memo.computeIfAbsent(n, m -> fibonacci(m - 1) + fibonacci(m - 2));
}
Or maybe tail call optimized version:
interface FewArgs<T, U, V, R> {
public R apply(T t, U u, V v);
}
static FewArgs<Long, Long, Long, Long> tailRecursive;
static {
tailRecursive = (a, b, n) -> {
if (n > 0)
return tailRecursive.apply(b, a + b, n - 1);
return a;
};
}
You call it with a = 0 and b = 1; n is the index of the required Fibonacci number, but it must be smaller than 93 because F(93) overflows a long.
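A complete runnable sketch of that call (the wrapper class is mine; the interface and lambda are as above):

```java
public class TailFibDemo {
    // Minimal three-argument functional interface, as in the answer above.
    interface FewArgs<T, U, V, R> {
        R apply(T t, U u, V v);
    }

    // Accumulator-style Fibonacci: a and b carry the two most recent values.
    static FewArgs<Long, Long, Long, Long> tailRecursive;
    static {
        tailRecursive = (a, b, n) -> {
            if (n > 0)
                return tailRecursive.apply(b, a + b, n - 1);
            return a;
        };
    }

    public static void main(String[] args) {
        System.out.println(tailRecursive.apply(0L, 1L, 10L)); // 55
    }
}
```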
More efficient ways to calculate Fibonacci numbers are matrix squaring (you will find an example on my blog) and Binet's formula.
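A sketch of the matrix-squaring idea (not the blog code; it relies on the identity that [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]]):

```java
public class MatrixFib {
    // Multiply two 2x2 matrices of longs.
    static long[][] mul(long[][] x, long[][] y) {
        return new long[][] {
            { x[0][0]*y[0][0] + x[0][1]*y[1][0], x[0][0]*y[0][1] + x[0][1]*y[1][1] },
            { x[1][0]*y[0][0] + x[1][1]*y[1][0], x[1][0]*y[0][1] + x[1][1]*y[1][1] }
        };
    }

    // O(log n) Fibonacci via fast exponentiation of [[1,1],[1,0]].
    static long fib(int n) {
        long[][] result = { {1, 0}, {0, 1} };   // identity matrix
        long[][] base   = { {1, 1}, {1, 0} };
        while (n > 0) {
            if ((n & 1) == 1) result = mul(result, base);
            base = mul(base, base);
            n >>= 1;
        }
        return result[0][1];                    // F(n) sits off-diagonal
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025
    }
}
```

Like the tail-recursive version, this is limited to n < 93 with longs; switching to BigInteger removes the limit.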
You can use the caching technique. Since f(n) = f(n-1) + f(n-2), you would calculate f(n-2) one more time when calculating f(n-1). So simply treat them as two rolling numbers, like below:
public int fib(int ithNumber) {
int prev = 0;
int current = 1;
int newValue;
for (int i=1; i<ithNumber; i++) {
newValue = current + prev;
prev = current;
current = newValue;
}
return current;
}
It looks neat written as nested ternary operators:
static int fib(int n) {
return n > 5 ? fib(n-2) + fib(n-1)
: n < 2 || n == 5 ? n
: n - 1;
}