Why is recursive MergeSort faster than iterative MergeSort? - java

I just implemented the two algorithms and I was surprised when I plotted the results! The recursive implementation is clearly faster than the iterative one.
After that, I added insertion sort to both of them and the result was the same.
In lectures we used to see that recursion is slower than iteration, as with factorial calculation, but here that doesn't seem to be the case. I'm pretty sure my code is correct. What's the explanation for this behaviour? It looks like Java (10) automatically multithreads the recursion, because when I display the little animation the insertion sort works in parallel with the merge operations.
If this code is not enough to understand, here is my GitHub: Github
EDIT RELOADED
As said in the comments, I should compare things that are similar, so now the merge method is the same in the iterative and recursive versions.
private void merge(ArrayToSort<T> array, T[] sub_array,
                   int min, int mid, int max) {
    // we make a copy of the array.
    if (max + 1 - min >= 0) System.arraycopy(array.array, min, sub_array, min, max + 1 - min);
    int i = min, j = mid + 1;
    for (var k = min; k <= max; k++) {
        if (i > mid) {
            array.array[k] = sub_array[j++];
        } else if (j > max) {
            array.array[k] = sub_array[i++];
        } else if (sub_array[j].compareTo(sub_array[i]) < 0) {
            array.array[k] = sub_array[j++];
        } else {
            array.array[k] = sub_array[i++];
        }
    }
}
Sort Recursive:
public void Sort(ArrayToSort<T> array) {
    T sub[] = (T[]) new Comparable[array.Length];
    sort(array, sub, 0, array.Length - 1);
}

private InsertionSort<T> insertionSort = new InsertionSort<>();

private void sort(ArrayToSort<T> array, T[] sub_array, int min, int max) {
    if (max <= min) return;
    if (max <= min + 8 - 1) {
        insertionSort.Sort(array, min, max);
        return;
    }
    var mid = min + (max - min) / 2;
    sort(array, sub_array, min, mid);
    sort(array, sub_array, mid + 1, max);
    merge(array, sub_array, min, mid, max);
}
Sort Iterative:
private InsertionSort<T> insertionSort = new InsertionSort<>();

public void Sort(ArrayToSort<T> array) {
    int length = array.Length;
    int maxIndex = length - 1;
    T temp[] = (T[]) new Comparable[length];
    for (int i = 0; i < maxIndex; i += 8) {
        insertionSort.Sort(array, i, Integer.min(i + 8 - 1, maxIndex));
    }
    System.arraycopy(array.array, 0, temp, 0, length);
    for (int m = 8; m <= maxIndex; m = 2 * m) {
        for (int i = 0; i < maxIndex; i += 2 * m) {
            merge(array, temp, i, i + m - 1,
                    Integer.min(i + 2 * m - 1, maxIndex));
        }
    }
}
In the new plot we can see that the difference is now proportional (up to a constant factor). If someone has any more ideas... Thanks a lot :)
The new plot
And here is the method used to plot (my teacher's, in fact):
for (int i = 0; i < nbSteps; i++) {
    int N = startingCount + countIncrement * i;
    for (ISortingAlgorithm<Integer> algo : algorithms) {
        long time = 0;
        for (int j = 0; j < folds; j++) {
            ArrayToSort<Integer> toSort = new ArrayToSort<>(
                    ArrayToSort.CreateRandomIntegerArray(N, Integer.MAX_VALUE, (int) System.nanoTime())
            );
            long startTime = System.currentTimeMillis();
            algo.Sort(toSort);
            long endTime = System.currentTimeMillis();
            time += (endTime - startTime);
            assert toSort.isSorted();
        }
        stringBuilder.append(N + ", " + (time / folds) + ", " + algo.Name() + "\n");
        System.out.println(N + ", " + (time / folds) + ", " + algo.Name());
    }
}

I don't think I have an answer, because I didn't try your code.
I will give you some thoughts:
a) CPUs have L1 caches and instruction prefetching. The recursive version may have better locality of reference when all the small sorts are done and it finishes with a bunch of merges while popping all the frames (or for other CPU optimization reasons).
b) Meanwhile, the JIT compiler does crazy things to recursion, particularly due to tail recursion and inlining. I suggest you try without the JIT compiler, just for fun. You might also want to try changing the thresholds for JIT compilation, so methods get JIT compiled sooner and the warmup time is minimized.
c) System.arraycopy is a native method and, despite being optimized, it has some overhead.
d) The iterative version seems to have more arithmetic in the loops.
e) This is an attempt at micro-benchmarking: you need to factor out the GC and have the tests run dozens if not hundreds of times. Read up on JMH. Also try different GCs and -Xmx settings.
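For point e), here is a minimal JMH sketch of what such a benchmark could look like. It assumes your ArrayToSort / ISortingAlgorithm classes are on the classpath, and MergeSortRecursive / MergeSortIterative are placeholder names for your two implementations:

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Thread)
@Fork(2)
@Warmup(iterations = 5)
@Measurement(iterations = 10)
public class MergeSortBenchmark {

    @Param({"100000", "1000000"})
    int n;

    ArrayToSort<Integer> toSort;

    // Rebuild the input before every invocation so each measured call sorts fresh, unsorted data.
    @Setup(Level.Invocation)
    public void prepare() {
        toSort = new ArrayToSort<>(
                ArrayToSort.CreateRandomIntegerArray(n, Integer.MAX_VALUE, 42));
    }

    @Benchmark
    public ArrayToSort<Integer> recursive() {
        new MergeSortRecursive<Integer>().Sort(toSort); // placeholder class name
        return toSort;
    }

    @Benchmark
    public ArrayToSort<Integer> iterative() {
        new MergeSortIterative<Integer>().Sort(toSort); // placeholder class name
        return toSort;
    }
}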

Related

Sorted Search, count numbers less than target value

I am practicing a sorted search task, from testdome.com
/**
* Implement function countNumbers that accepts a sorted array of unique integers and,
* efficiently with respect to time used, counts the number of array elements that are less than the parameter lessThan.
* <p>
* For example, SortedSearch.countNumbers(new int[] { 1, 3, 5, 7 }, 4)
* should return 2 because there are two array elements less than 4.
*/
Currently, according to the site, my answer has a score of 50% due to edge cases and performance. I'm trying to get an opinion on what I might need to add, or a different approach.
Here is my code
public static int countNumbers(int[] sortedArray, int lessThan) {
    int count = 0;
    if (sortedArray == null) {
        return 0;
    }
    List<Integer> numbers = new ArrayList<>();
    for (int i = 0; i < sortedArray.length; i++) {
        if (sortedArray[i] < lessThan) {
            count++;
        } else {
            break;
        }
    }
    return count;
}
And the result i get when i test it on their environment is as follows
Example case: Correct answer
Various small arrays: Correct answer
Performance test when sortedArray contains lessThan: Time limit exceeded
Performance test when sortedArray doesn't contain lessThan: Time limit exceeded
So two performance tests fail, even though I can't see those tests; maybe I could get a suggestion here.
If O(n) is giving TLE, you need something faster than O(n). Binary search is O(log N).
public static int countNumbers(int[] sortedArray, int lessThan) {
    int start = 0;
    int end = sortedArray.length - 1;
    int mid = 0;
    while (start <= end) {
        mid = start + (end - start) / 2;
        if (sortedArray[mid] < lessThan) {
            if (mid < sortedArray.length - 1 && sortedArray[mid + 1] < lessThan) {
                start = mid + 1;
                continue;
            } else {
                return mid + 1;
            }
        }
        if (sortedArray[mid] >= lessThan) {
            end = mid - 1;
        } else {
            start = mid + 1;
        }
    }
    return 0;
}
Or use built-in Binary Search:
(Arrays.binarySearch(new int[]{1, 2, 4}, 3) + 1) * -1;
When the key is not found, binarySearch returns -(insertion point) - 1, a negative value. To convert it to the insertion index, I added 1 and multiplied by -1 to make it positive.
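A sketch of a complete countNumbers built on top of Arrays.binarySearch, assuming (as the task states) that the array is sorted and holds unique values, so a found key sits exactly at the index equal to the count of smaller elements:

import java.util.Arrays;

public class SortedSearch {
    public static int countNumbers(int[] sortedArray, int lessThan) {
        if (sortedArray == null || sortedArray.length == 0) {
            return 0;
        }
        int pos = Arrays.binarySearch(sortedArray, lessThan);
        // Found: the elements at indices 0..pos-1 are strictly smaller (values are unique).
        // Not found: pos is -(insertion point) - 1, and the insertion point is exactly the count.
        return pos >= 0 ? pos : -(pos + 1);
    }
}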

Recursive version of Java function is slower than iterative on first call, but faster after. Why is this?

For an assignment I'm currently trying to measure the performance (space/time) difference between an iterative solution to the matrix chain problem and a recursive one.
The gist of the problem and the solution I'm using for the iterative version can be found here: http://www.geeksforgeeks.org/dynamic-programming-set-8-matrix-chain-multiplication/
I'm running a given input through both functions 10 times, measuring the space and time performance of each function. The very interesting thing is that while the recursive solution runs much slower than the iterative one on the first call, it is much faster on successive calls. The functions do not make use of any class-global variables other than one for counting memory usage. Why is this occurring? Is it something the compiler is doing, or am I missing something obvious?
Note: I know my way of measuring memory is wrong, planning on changing it.
Main: Initializes Array and passes it to run functions
public static void main(String[] args) {
    int s[] = new int[] {30, 35, 15, 5, 10, 100, 25, 56, 78, 55, 23};
    runFunctions(s, 15);
}
runFunctions: Runs both functions 2 * n times, measuring space and time and printing results at the end
private static void runFunctions(int[] arr, int n) {
    final Runtime rt = Runtime.getRuntime();
    long iterativeTime[] = new long[n],
            iterativeSpace[] = new long[n],
            recursiveSpace[] = new long[n],
            recursiveTime[] = new long[n];
    long startTime, stopTime, elapsedTime, res1, res2;
    for (int i = 0; i < n; i++) {
        System.out.println("Measuring Running Time");
        // measure running time of iterative
        startTime = System.nanoTime();
        res1 = solveIterative(arr, false);
        stopTime = System.nanoTime();
        elapsedTime = stopTime - startTime;
        iterativeTime[i] = elapsedTime;
        // measure running time of recursive
        startTime = System.nanoTime();
        res2 = solveRecursive(arr, false);
        stopTime = System.nanoTime();
        elapsedTime = stopTime - startTime;
        recursiveTime[i] = elapsedTime;
        System.out.println("Measuring Space");
        // measure space usage of iterative
        rt.gc();
        res1 = solveIterative(arr, true);
        iterativeSpace[i] = memoryUsage;
        // measure space usage of recursive
        rt.gc();
        res2 = solveRecursive(arr, true);
        recursiveSpace[i] = memoryUsage;
        rt.gc();
        if (res1 != res2) {
            System.out.println("Error! Results do not match! Iterative Result: " + res1 + " Recursive Result: " + res2);
        }
    }
    System.out.println("Time Iterative: " + Arrays.toString(iterativeTime));
    System.out.println("Time Recursive: " + Arrays.toString(recursiveTime));
    System.out.println("Space Iterative: " + Arrays.toString(iterativeSpace));
    System.out.println("Space Recursive: " + Arrays.toString(recursiveSpace));
}
solveRecursive: bootstrap for doRecursion
private static int solveRecursive(int[] s, boolean measureMemory) {
    memoryUsage = 0;
    maxMemory = 0;
    int n = s.length - 1;
    int[][] m = new int[n][n];
    int result;
    if (measureMemory) {
        memoryUsage += MemoryUtil.deepMemoryUsageOf(n);
        memoryUsage += MemoryUtil.deepMemoryUsageOf(s);
        memoryUsage += MemoryUtil.deepMemoryUsageOf(m);
        result = doRecursion(0, n - 1, m, s);
        memoryUsage += MemoryUtil.deepMemoryUsageOf(result);
        System.out.println("Memory Used: " + memoryUsage);
    } else {
        result = doRecursion(0, n - 1, m, s);
    }
    return result;
}
doRecursion: solves the function recursively
private static int doRecursion(int i, int j, int[][] m, int s[]) {
    if (m[i][j] != 0) {
        return m[i][j];
    }
    if (i == j) {
        return 0;
    } else {
        m[i][j] = Integer.MAX_VALUE / 3;
        for (int k = i; k <= j - 1; k++) {
            int q = doRecursion(i, k, m, s) + doRecursion(k + 1, j, m, s) + (s[i] * s[k + 1] * s[j + 1]);
            if (q < m[i][j]) {
                m[i][j] = q;
            }
        }
    }
    return m[i][j];
}
solveIterative: Solves the problem iteratively
private static int solveIterative(int[] s, boolean measureMemory) {
    memoryUsage = 0;
    maxMemory = 0;
    int n = s.length - 1;
    int i = 0, j = 0, k = 0, v = 0;
    int[][] m = new int[n][n];
    for (int len = 2; len <= n; len++) {
        for (i = 0; i + len <= n; i++) {
            j = i + len - 1;
            m[i][j] = Integer.MAX_VALUE;
            for (k = i; k < j; k++) {
                v = m[i][k] + m[k + 1][j] + s[i] * s[k + 1] * s[j + 1];
                if (m[i][j] > v) {
                    m[i][j] = v;
                }
            }
        }
    }
    if (measureMemory) {
        memoryUsage += MemoryUtil.deepMemoryUsageOf(n);
        memoryUsage += MemoryUtil.deepMemoryUsageOf(m);
        memoryUsage += MemoryUtil.deepMemoryUsageOf(i);
        memoryUsage += MemoryUtil.deepMemoryUsageOf(j);
        memoryUsage += MemoryUtil.deepMemoryUsageOf(k);
        memoryUsage += MemoryUtil.deepMemoryUsageOf(v);
        memoryUsage += MemoryUtil.deepMemoryUsageOf(s);
        System.out.println("Memory Used: " + memoryUsage);
    }
    return m[0][n - 1];
}
Output:
Time Iterative: [35605, 12039, 20492, 17674, 17674, 12295, 11782, 19467, 16906, 18442, 21004, 19980, 18955, 12039, 13832]
Time Recursive: [79918, 4611, 8453, 6916, 6660, 6660, 4354, 6916, 18699, 7428, 13576, 5635, 4867, 3330, 3586]
Space Iterative: [760, 760, 760, 760, 760, 760, 760, 760, 760, 760, 760, 760, 760, 760, 760]
Space Recursive: [712, 712, 712, 712, 712, 712, 712, 712, 712, 712, 712, 712, 712, 712, 712]
The problem is that your test runs for too short a time. The JIT does not have enough time to optimize the methods well enough.
Try repeating the test at least 200 times (instead of 15) and you'll see the difference.
Note that JIT compilation does not happen just once. Methods can be recompiled several times as the JVM collects more runtime statistics. You've hit a situation where solveRecursive survived more levels of optimization than solveIterative.
In this answer I've described how the JIT decides to compile a method. Basically there are two main compilation triggers: the method invocation threshold and the backedge threshold (i.e. the loop iteration counter).
Note that those two methods have different compilation triggers:
solveRecursive does more calls => it is compiled when the invocation threshold is reached;
solveIterative runs more loop iterations => it is compiled when the backedge threshold is reached.
These thresholds are not equal, and it happens that over a short run solveRecursive is compiled earlier. But as soon as solveIterative is optimized, it starts to perform even better.
There is also a trick to make solveIterative compile earlier: move the innermost loop for (k = i; k < j; k++) to a separate method. Yes, it may sound strange, but the JIT is better at compiling several small methods than one big method. Smaller methods are easier to understand and to optimize, not only for humans but also for computers :)
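A sketch of that refactoring applied to the question's solveIterative (same logic, only the innermost k-loop pulled out; not benchmarked here):

// The innermost loop of solveIterative, extracted into a small method
// that the JIT can compile and inline early.
private static int bestSplit(int[][] m, int[] s, int i, int j) {
    int best = Integer.MAX_VALUE;
    for (int k = i; k < j; k++) {
        int v = m[i][k] + m[k + 1][j] + s[i] * s[k + 1] * s[j + 1];
        if (v < best) {
            best = v;
        }
    }
    return best;
}

// Inside solveIterative, the body of the i-loop then becomes:
//     j = i + len - 1;
//     m[i][j] = bestSplit(m, s, i, j);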
It is absolutely 100% normal for Java code to get faster after you use it more. That's more or less the whole point of the JIT compiler -- to optimize at runtime code that's getting used more heavily.

Max Double Slice Sum codility O(1) space complexity fail performance test case

I was trying to figure out why the below solution failed a single performance test case for the 'Max Double Slice Sum' problem on the codility website: https://codility.com/demo/take-sample-test/max_double_slice_sum
There is another solution with O(n) space complexity which is easier to comprehend over here: Max double slice sum. But I am just wondering why this O(1) solution doesn't work. Below is the actual code:
import java.util.*;

class Solution {
    public int solution(int[] A) {
        long maxDS = 0;
        long maxDSE = 0;
        long maxS = A[1];
        for (int i = 2; i < A.length - 1; ++i) {
            // end at i-index
            maxDSE = Math.max(maxDSE + A[i], maxS);
            maxDS = Math.max(maxDS, maxDSE);
            maxS = Math.max(A[i], maxS + A[i]);
        }
        return (int) maxDS;
    }
}
The idea is simple, as follows:
The problem can be restated as finding max(A[i]+A[i+1]+...+A[j]-A[m]), with 1 <= i <= m <= j <= n-2, where n = A.length; we call A[m] the missing element within the slice.
maxS[i] keeps the max slice which ends at the current index i; in other words, max(A[t] + ... + A[i]) for t <= i; so for i = 1, maxS = A[1]. Note that in the solution we don't keep an array, only the latest maxS at the current index (see the code above).
maxDSE[i] is the max over all double slices which end at i; in other words, max(A[t]+A[t+1]+...+A[i]-A[m]) ending at A[i]. maxDS is the final max double slice sum which we try to find.
Now, we just use a for-loop from i = 2 to i = A.length-2. For each index i, we notice the following:
If the missing element is A[i], then maxDSE[i] = maxS[i-1] (the max sum over all slices which end at i-1, i.e. A[t] + ... + A[i] - A[i]);
If the missing element is not A[i], then it must be somewhere in A[1]..A[i-1], so maxDSE = maxDSE[i-1] + A[i], i.e. A[t] + ... + A[i] - A[m] (note that A[i] must be the last element);
so maxDSE[i] = Math.max(maxDSE[i-1]+A[i], maxS[i-1]);
maxDS = Math.max(maxDS, maxDSE); the max among all maxDSE;
and maxS[i] = Math.max(A[i], maxS[i-1]+A[i]).
In that way, maxDS will be the final result.
But strangely, I was only able to get 92%, with one failed performance test case as shown here:
medium_range
-1000, ..., 1000
WRONG ANSWER
got 499499 expected 499500
Could anyone please enlighten me as to where the problem is in my solution? Thanks!
OK, I found the error in my code. It seems I forgot one corner case. When calculating DSE[i], in the case where A[i] is the missing number, maxS should also cover the case where the slice is empty. In other words, maxS should be calculated as:
maxS[i] = Math.max(0, Math.max(A[i] + maxS[i-1], A[i])); where 0 covers the case of an empty subarray (ending at index i), and Math.max(A[i] + maxS[i-1], A[i]) is the max over all slices with at least one element (ending at index i). The complete code is as follows:
import java.util.*;

class Solution {
    public int solution(int[] A) {
        long maxDS = 0;
        long maxDSE = 0;
        long maxS = A[1];
        for (int i = 2; i < A.length - 1; ++i) {
            maxDSE = Math.max(maxDSE + A[i], maxS);
            maxDS = Math.max(maxDS, maxDSE);
            maxS = Math.max(0, Math.max(A[i], maxS + A[i]));
        }
        return (int) maxDS;
    }
}
It seems that for the input [-11, -53, -4, 38, 76, 80], your solution doesn't work. Yes, it fools all the codility test cases, but I have managed to fool all the codility test cases for other problems too.
If you don't just want to trick codility, but want to come up with a genuinely good solution, I suggest that you write a loop generating a large number of random test cases (varying both the number of elements and the element values), write a reference method of your own that you are sure is correct (even if its complexity is quadratic), compare the results from both methods, and then analyze any random input on which they differ.
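A minimal sketch of that kind of randomized cross-check; Solution is the O(1)-space class from the question, and bruteForce is a slow but straightforward reference written just for this test:

import java.util.Arrays;
import java.util.Random;

public class RandomCrossCheck {
    public static void main(String[] args) {
        Random rnd = new Random();
        Solution candidate = new Solution();
        for (int t = 0; t < 20000; t++) {
            int n = 3 + rnd.nextInt(13);            // small arrays keep failures readable
            int[] a = new int[n];
            for (int i = 0; i < n; i++) {
                a[i] = rnd.nextInt(2001) - 1000;    // values in [-1000, 1000], like the failing test
            }
            int expected = bruteForce(a);
            int got = candidate.solution(a);
            if (expected != got) {
                System.out.println("Mismatch on " + Arrays.toString(a)
                        + ": expected " + expected + ", got " + got);
                return;
            }
        }
        System.out.println("No mismatch found");
    }

    // Slow but straightforward reference: try every double slice (x, y, z) with 0 <= x < y < z < n
    // and sum A[x+1..z-1] while skipping A[y].
    static int bruteForce(int[] a) {
        int best = 0;
        for (int x = 0; x < a.length - 2; x++) {
            for (int z = x + 2; z < a.length; z++) {
                for (int y = x + 1; y < z; y++) {
                    int sum = 0;
                    for (int k = x + 1; k < z; k++) {
                        if (k != y) sum += a[k];
                    }
                    best = Math.max(best, sum);
                }
            }
        }
        return best;
    }
}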
Here is a clear solution. The best approach is to use Kadane's algorithm: O(N) time and O(1) space.
public class DuplicateDetermineAlgorithm {
    public static boolean isContainsDuplicate(int[] array) {
        if (array == null) {
            throw new IllegalArgumentException("Input array can not be null");
        }
        if (array.length < 2) {
            return false;
        }
        for (int i = 0; i < array.length; i++) {
            int pointer = convertToPositive(array[i]) - 1;
            if (array[pointer] > 0) {
                array[pointer] = changeSign(array[pointer]);
            } else {
                return true;
            }
        }
        return false;
    }

    private static int convertToPositive(int value) {
        return value < 0 ? changeSign(value) : value;
    }

    private static int changeSign(int value) {
        return -1 * value;
    }
}
I have coded it in VB.NET and got 100/100, getting the idea from the solution by Guillermo.
Private Function solution(A As Integer()) As Integer
    ' write your code in VB.NET 4.0
    Dim Slice1() As Integer = Ending(A)
    Dim slice2() As Integer = Starting(A)
    Dim maxSUM As Integer = 0
    For i As Integer = 1 To A.Length - 2
        maxSUM = Math.Max(maxSUM, Slice1(i - 1) + slice2(i + 1))
    Next
    Return maxSUM
End Function

Public Shared Function Ending(input() As Integer) As Integer()
    Dim result As Integer() = New Integer(input.Length - 1) {}
    result(0) = InlineAssignHelper(result(input.Length - 1), 0)
    For i As Integer = 1 To input.Length - 2
        result(i) = Math.Max(0, result(i - 1) + input(i))
    Next
    Return result
End Function

Public Shared Function Starting(input() As Integer) As Integer()
    Dim result As Integer() = New Integer(input.Length - 1) {}
    result(0) = InlineAssignHelper(result(input.Length - 1), 0)
    For i As Integer = input.Length - 2 To 1 Step -1
        result(i) = Math.Max(0, result(i + 1) + input(i))
    Next
    Return result
End Function

Private Shared Function InlineAssignHelper(Of T)(ByRef target As T, value As T) As T
    target = value
    Return value
End Function
Visit Codility to see the results

violating the given average time complexity in Big-O notation

I am trying to implement a solution to find the k-th largest element in a given integer list with duplicates, with O(N*log(N)) average time complexity in Big-O notation, where N is the number of elements in the list.
As per my understanding, merge sort has an average time complexity of O(N*log(N)); however, in my code below I am actually using an extra for loop along with the mergesort algorithm to delete duplicates, which is surely violating my rule of finding the k-th largest element in O(N*log(N)). How do I go about achieving my task with O(N*log(N)) average time complexity?
public class FindLargest {
    public static void nthLargeNumber(int[] arr, String nthElement) {
        mergeSort_srt(arr, 0, arr.length - 1);
        // remove duplicate elements logic
        int b = 0;
        for (int i = 1; i < arr.length; i++) {
            if (arr[b] != arr[i]) {
                b++;
                arr[b] = arr[i];
            }
        }
        int bbb = Integer.parseInt(nthElement) - 1;
        // printing second highest number among given list
        System.out.println("Second highest number is::" + arr[b - bbb]);
    }

    public static void mergeSort_srt(int array[], int lo, int n) {
        int low = lo;
        int high = n;
        if (low >= high) {
            return;
        }
        int middle = (low + high) / 2;
        mergeSort_srt(array, low, middle);
        mergeSort_srt(array, middle + 1, high);
        int end_low = middle;
        int start_high = middle + 1;
        while ((lo <= end_low) && (start_high <= high)) {
            if (array[low] < array[start_high]) {
                low++;
            } else {
                int Temp = array[start_high];
                for (int k = start_high - 1; k >= low; k--) {
                    array[k + 1] = array[k];
                }
                array[low] = Temp;
                low++;
                end_low++;
                start_high++;
            }
        }
    }

    public static void main(String... str) {
        String nthElement = "2";
        int[] intArray = { 1, 9, 5, 7, 2, 5 };
        FindLargest.nthLargeNumber(intArray, nthElement);
    }
}
Your only problem here is that you don't understand how to do the time analysis. If you have one routine which takes O(n) and one which takes O(n*log(n)), running both takes a total of O(n*log(n)). Thus your code runs in O(n*log(n)) like you want.
To do things formally, we would note that the definition of O() is as follows:
f(x) ∈ O(g(x)) if and only if there exist values c > 0 and y such that f(x) < cg(x) whenever x > y.
Your merge sort is in O(n*log(n)) which tells us that its running time is bounded above by c1*n*log(n) when n > y1 for some c1,y1. Your duplication elimination is in O(n) which tells us that its running time is bounded above by c2*n when n > y2 for some c2 and y2. Using this, we can know that the total running time of the two is bounded above by c1*n*log(n)+c2*n when n > max(y1,y2). We know that c1*n*log(n)+c2*n < c1*n*log(n)+c2*n*log(n) because log(n) > 1, and this, of course simplifies to (c1+c2)*n*log(n). Thus, we can know that the running time of the two together is bounded above by (c1+c2)*n*log(n) when n > max(y1,y2) and thus, using c1+c2 as our c and max(y1,y2) as our y, we know that the running time of the two together is in O(n*log(n)).
Informally, you can just know that faster growing functions always dominate, so if one piece of code is O(n) and the second is O(n^2), the combination is O(n^2). If one is O(log(n)) and the second is O(n), the combination is O(n). If one is O(n^20) and the second is O(n^19.99), the combination is O(n^20). If one is O(n^2000) and the second is O(2^n), the combination is O(2^n).
The problem here is your merge routine, where you have used another inner loop (I do not understand why). Hence I would say your merge is O(n^2), which changes your merge sort time to O(n^2).
Here is pseudocode for a typical O(N) merge routine:
void merge(int low, int high, int arr[]) {
    int buff[high - low + 1];
    int i = low;
    int mid = (low + high) / 2;
    int j = mid + 1;
    int k = 0;
    while (i <= mid && j <= high) {
        if (arr[i] < arr[j]) {
            buff[k++] = arr[i];
            i++;
        } else {
            buff[k++] = arr[j];
            j++;
        }
    }
    while (i <= mid) {
        buff[k++] = arr[i];
        i++;
    }
    while (j <= high) {
        buff[k++] = arr[j];
        j++;
    }
    for (int x = 0; x < k; x++) {
        arr[low + x] = buff[x];
    }
}
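Since the question's code is Java (and the variable-length buff[] above is C, not Java), here is the same linear merge sketched in Java:

// Merges the sorted runs arr[low..mid] and arr[mid+1..high] in time linear in the run lengths.
static void merge(int low, int high, int[] arr) {
    int[] buff = new int[high - low + 1];
    int mid = (low + high) / 2;
    int i = low, j = mid + 1, k = 0;
    while (i <= mid && j <= high) {
        buff[k++] = (arr[i] < arr[j]) ? arr[i++] : arr[j++];
    }
    while (i <= mid) buff[k++] = arr[i++];
    while (j <= high) buff[k++] = arr[j++];
    // copy the merged run back into place
    System.arraycopy(buff, 0, arr, low, k);
}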

How to count possible combination for coin problem

I am trying to implement a coin problem. The problem specification is like this:
Create a function to count all possible combinations of coins which can be used for a given amount.
All possible combinations for given amount = 15, coin types = 1, 6, 7:
1) 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
2) 1,1,1,1,1,1,1,1,1,6,
3) 1,1,1,1,1,1,1,1,7,
4) 1,1,1,6,6,
5) 1,1,6,7,
6) 1,7,7,
function prototype:
int findCombinationsCount(int amount, int coins[])
Assume that the coin array is sorted. For the above example this function should return 6.
Can anyone guide me on how to implement this?
Use recursion.
int findCombinationsCount(int amount, int coins[]) {
    return findCombinationsCount(amount, coins, 0);
}

int findCombinationsCount(int amount, int coins[], int checkFromIndex) {
    if (amount == 0)
        return 1;
    else if (amount < 0 || coins.length == checkFromIndex)
        return 0;
    else {
        int withFirstCoin = findCombinationsCount(amount - coins[checkFromIndex], coins, checkFromIndex);
        int withoutFirstCoin = findCombinationsCount(amount, coins, checkFromIndex + 1);
        return withFirstCoin + withoutFirstCoin;
    }
}
You should check this implementation though. I don't have a Java IDE here, and I'm a little rusty, so it may have some errors.
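For reference, a self-contained sketch that wraps the two methods above in a class and checks them against the example from the question (15 with coins 1, 6, 7 should give 6):

public class FindCombinationsCountDemo {
    public static void main(String[] args) {
        // Expected output: 6, matching the six combinations listed in the question.
        System.out.println(findCombinationsCount(15, new int[]{1, 6, 7}));
    }

    static int findCombinationsCount(int amount, int[] coins) {
        return findCombinationsCount(amount, coins, 0);
    }

    static int findCombinationsCount(int amount, int[] coins, int checkFromIndex) {
        if (amount == 0)
            return 1;
        if (amount < 0 || checkFromIndex == coins.length)
            return 0;
        int withFirstCoin = findCombinationsCount(amount - coins[checkFromIndex], coins, checkFromIndex);
        int withoutFirstCoin = findCombinationsCount(amount, coins, checkFromIndex + 1);
        return withFirstCoin + withoutFirstCoin;
    }
}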
Although recursion can work and is often an assignment to implement in some college level courses on Algorithms & Data Structures, I believe the "dynamic programming" implementation is more efficient.
public static int findCombinationsCount(int sum, int vals[]) {
    if (sum < 0) {
        return 0;
    }
    if (vals == null || vals.length == 0) {
        return 0;
    }
    int dp[] = new int[sum + 1];
    dp[0] = 1;
    for (int i = 0; i < vals.length; ++i) {
        for (int j = vals[i]; j <= sum; ++j) {
            dp[j] += dp[j - vals[i]];
        }
    }
    return dp[sum];
}
You can use generating function methods to give fast algorithms, which use complex numbers.
Given the coin values c1, c2, .., ck, to get the number of ways to sum n, what you need is the coefficient of x^n in
(1 + x^c1 + x^(2c1) + x^(3c1) + ...)(1+x^c2 + x^(2c2) + x^(3c2) + ...)....(1+x^ck + x^(2ck) + x^(3ck) + ...)
Which is the same as finding the coefficient of x^n in
1/(1-x^c1) * 1/(1-x^c2) * ... * 1/(1-x^ck)
Now using complex numbers, x^a - 1 = (x-w1)(x-w2)...(x-wa), where w1, w2, etc. are the complex a-th roots of unity.
So
1/(1-x^c1) * 1/(1-x^c2) * ... * 1/(1-x^ck)
can be written as
1/((x-a1)(x-a2)....(x-am))
which can be rewritten using partial fractions as
A1/(x-a1) + A2/(x-a2) + ... + Am/(x-am)
The coefficient of x^n in this can be easily found:
A1/(a1)^(n+1) + A2/(a2)^(n+1) + ... + Am/(am)^(n+1).
A computer program should easily be able to find the Ai and ai (which could be complex numbers). Of course, this might involve floating point computations.
For large n, this will probably be faster than enumerating all the possible combinations.
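As a concrete sanity check on the question's example (my addition, not part of the original derivation): for amount 15 and coins 1, 6, 7, the count is the coefficient of x^15 in
1/((1-x) (1-x^6) (1-x^7))
and that coefficient is indeed 6, the number of combinations enumerated in the question.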
Hope that helps.
Very simple with recursion:
def countChange(money: Int, coins: List[Int]): Int = {
  def reduce(money: Int, coins: List[Int], accCounter: Int): Int = {
    if (money == 0) accCounter + 1
    else if (money < 0 || coins.isEmpty) accCounter
    else reduce(money - coins.head, coins, accCounter) + reduce(money, coins.tail, accCounter)
  }
  if (money <= 0 || coins.isEmpty) 0
  else reduce(money, coins, 0)
}
This is an example in Scala.
Aryabhatta’s answer for counting the number of ways to make change with coins of fixed denominations is very cute but also impractical to implement as described. Rather than use complex numbers, we’ll use modular arithmetic, similar to how the number-theoretic transform replaces a Fourier transform for multiplying integer polynomials.
Let D be the least common multiple of the coin denominations. By Dirichlet’s theorem on arithmetic progressions, there exist infinitely many prime numbers p such that D divides p - 1. (With any luck, they’ll even be distributed in a way such that we can find them efficiently.) We’ll compute the number of ways modulo some p satisfying this condition. By obtaining a crude bound somehow (e.g., n + k - 1 choose k - 1 where n is the total and k is the number of denominations), repeating this procedure with several different primes whose product exceeds that bound, and applying the Chinese remainder theorem, we can recover the exact number.
Test candidates 1 + k*D for integers k > 0 until we find a prime p. Let g be a primitive root modulo p (generate candidates at random and apply the standard test). For each denomination d, express the polynomial x**d - 1 modulo p as a product of factors:
x**d - 1 = product from i=0 to d-1 of (x - g**((p-1)*i/d)) [modulo p].
Note that d divides D divides p-1, so the exponent indeed is an integer.
Let m be the sum of denominations. Gather all of the constants g**((p-1)*i/d) as a(0), ..., a(m-1). The next step is to find a partial fraction decomposition A(0), ..., A(m-1) such that
sign / product from j=0 to m-1 of (a(j) - x) = sum from j=0 to m-1 of A(j)/(a(j) - x) [modulo p],
where sign is 1 if there are an even number of denominations and -1 if there are an odd number of denominations. Derive a system of linear equations for A(j) by evaluating both sides of the given equation for different values of x, then solve it with Gaussian elimination. Life gets complicated if there are duplicates; it's probably easiest just to pick another prime.
Given this setup, we can compute the number of ways (modulo p, of course) to make change amounting to n as
sum from j=0 to m-1 of A(j) * (1/a(j))**(n+1).
The recursive solutions mentioned will work, but they're going to be horrendously slow if you add more coin denominations and/or increase the target value significantly.
What you need to speed it up is to implement a dynamic programming solution. Have a look at the knapsack problem. You can adapt the DP solution mentioned there to solve your problem by keeping a count of the number of ways a total can be reached rather than the minimum number of coins required.
package algorithms;

import java.util.Random;

/**
 * Owner : Ghodrat Naderi
 * E-Mail: Naderi.ghodrat@gmail.com
 * Date  : 10/12/12
 * Time  : 4:50 PM
 * IDE   : IntelliJ IDEA 11
 */
public class CoinProblem {
    public static void main(String[] args) {
        int[] coins = {1, 3, 5, 10, 20, 50, 100, 200, 500};
        int amount = new Random().nextInt(10000);
        int coinsCount = 0;
        System.out.println("amount = " + amount);
        int[] numberOfCoins = findNumberOfCoins(coins, amount);
        for (int i = 0; i < numberOfCoins.length; i++) {
            if (numberOfCoins[i] > 0) {
                System.out.println("coins= " + coins[i] + " Count=" + numberOfCoins[i] + "\n");
                coinsCount += numberOfCoins[i];
            }
        }
        System.out.println("numberOfCoins = " + coinsCount);
    }

    private static int[] findNumberOfCoins(int[] coins, int amount) {
        int c = coins.length;
        int[] numberOfCoins = new int[coins.length];
        while (amount > 0) {
            c--;
            if (amount >= coins[c]) {
                int quotient = amount / coins[c];
                amount = amount - coins[c] * quotient;
                numberOfCoins[c] = quotient;
            }
        }
        return numberOfCoins;
    }
}
A recursive solution might be the right answer here:
int findCombinationsCount(int amount, int coins[])
{
    // I am assuming amount >= 0, coins.length > 0 and all elements of coins > 0.
    if (coins.length == 1)
    {
        return amount % coins[0] == 0 ? 1 : 0;
    }
    else
    {
        int total = 0;
        int[] subCoins = arrayOfCoinsExceptTheFirstOne(coins);
        for (int i = 0; i * coins[0] <= amount; ++i)
        {
            total += findCombinationsCount(amount - i * coins[0], subCoins);
        }
        return total;
    }
}
Warning: I haven't tested or even compiled the above.
The solution provided by @Jordi is nice but runs extremely slowly. You can try input 600 with that solution and see how slow it is.
My idea is to use bottom-up dynamic programming.
Note that, generally, the number of possible combinations for money = m and coins {a, b, c} equals the sum of
the combinations for m-c and coins {a, b, c} (with coin c), and
the combinations for m and coins {a, b} (without coin c).
If no coins are available, or the available coins cannot cover the required amount of money, 0 should be filled into the corresponding cell. If the amount of money is 0, 1 should be filled in.
public static void main(String[] args) {
    int[] coins = new int[]{1, 2, 3, 4, 5};
    int money = 600;
    int[][] recorder = new int[money + 1][coins.length];
    for (int k = 0; k < coins.length; k++) {
        recorder[0][k] = 1;
    }
    for (int i = 1; i <= money; i++) {
        //System.out.println("working on money=" + i);
        int with = 0;
        int without = 0;
        for (int coin_index = 0; coin_index < coins.length; coin_index++) {
            //System.out.println("working on coin until " + coins[coin_index]);
            if (i - coins[coin_index] < 0) {
                with = 0;
            } else {
                with = recorder[i - coins[coin_index]][coin_index];
            }
            //System.out.println("with=" + with);
            if (coin_index - 1 < 0) {
                without = 0;
            } else {
                without = recorder[i][coin_index - 1];
            }
            //System.out.println("without=" + without);
            //System.out.println("result=" + (without + with));
            recorder[i][coin_index] = with + without;
        }
    }
    System.out.print(recorder[money][coins.length - 1]);
}
This code is based on the solution provided by JeremyP, which works perfectly; I just enhanced it to optimize the performance by using dynamic programming. I couldn't comment on JeremyP's post because I don't have enough reputation :)
public static long makeChange(int[] coins, int money) {
    Long[][] resultMap = new Long[coins.length][money + 1];
    return getChange(coins, money, 0, resultMap);
}

public static long getChange(int[] coins, int money, int index, Long[][] resultMap) {
    if (index == coins.length - 1) // if we are at the end
        return money % coins[index] == 0 ? 1 : 0;
    else {
        //System.out.printf("Checking index %d and money %d ", index, money);
        Long storedResult = resultMap[index][money];
        if (storedResult != null)
            return storedResult;
        long total = 0;
        for (int coff = 0; coff * coins[index] <= money; coff++) {
            total += getChange(coins, money - coff * coins[index], index + 1, resultMap);
        }
        resultMap[index][money] = total;
        return total;
    }
}
First idea:
int combinations = 0;
for (int i = 0; i * 7 <= 15; i++) {
    for (int j = 0; j * 6 + i * 7 <= 15; j++) {
        combinations++;
    }
}
(the '<=' is superfluous in this case, but is needed for a more general solution, if you decide to change your parameters)
Below is a recursion-with-memoization Java solution. In this one we have 1, 2, 3 and 5 as coins and 200 as the target amount.
countCombinations(200, new int[]{5, 2, 3, 1}, 0, 0, new Integer[6][200 + 5]);
static int countCombinations(Integer targetAmount, int[] V, int currentAmount, int coin, Integer[][] memory) {
    // Comment out the if block below if you want to see the performance difference
    if (memory[coin][currentAmount] != null) {
        return memory[coin][currentAmount];
    }
    if (currentAmount > targetAmount) {
        memory[coin][currentAmount] = 0;
        return 0;
    }
    if (currentAmount == targetAmount) {
        return 1;
    }
    int count = 0;
    for (int selectedCoin : V) {
        if (selectedCoin >= coin) {
            count += countCombinations(targetAmount, V, currentAmount + selectedCoin, selectedCoin, memory);
        }
    }
    memory[coin][currentAmount] = count;
    return count;
}
#include <iostream>
using namespace std;

int solns = 0;

void countComb(int* arr, int low, int high, int Val)
{
    for (int i = low; i <= high; i++)
    {
        if (Val - arr[i] == 0)
        {
            solns++;
            break;
        }
        else if (Val - arr[i] > 0)
            countComb(arr, i, high, Val - arr[i]);
    }
}

int main()
{
    int coins[] = { 1, 2, 5 };
    int value = 7;
    int arrSize = sizeof(coins) / sizeof(int);
    countComb(coins, 0, arrSize - 1, value);  // pass the last valid index, not the length
    cout << solns << endl;
    return 0;
}
Again using recursion, a tested solution, though probably not the most elegant code. (Note that it returns the number of each coin to use rather than repeating the actual coin amount n times.)
public class CoinPerm {

    @Test
    public void QuickTest() throws Exception {
        int ammount = 15;
        int coins[] = {1, 6, 7};
        ArrayList<solution> solutionList = SolvePerms(ammount, coins);
        for (solution sol : solutionList) {
            System.out.println(sol);
        }
        assertTrue("Wrong number of solutions " + solutionList.size(), solutionList.size() == 6);
    }

    public ArrayList<solution> SolvePerms(int ammount, int coins[]) throws Exception {
        ArrayList<solution> solutionList = new ArrayList<solution>();
        ArrayList<Integer> emptyList = new ArrayList<Integer>();
        solution CurrentSolution = new solution(emptyList);
        GetPerms(ammount, coins, CurrentSolution, solutionList);
        return solutionList;
    }

    private void GetPerms(int ammount, int coins[], solution CurrentSolution, ArrayList<solution> mSolutions) throws Exception {
        int currentCoin = coins[0];
        if (currentCoin <= 0) {
            throw new Exception("Cant cope with negative or zero ammounts");
        }
        if (coins.length == 1) {
            if (ammount % currentCoin == 0) {
                CurrentSolution.add(ammount / currentCoin);
                mSolutions.add(CurrentSolution);
            }
            return;
        }
        // work out list with one less coin.
        int coinsDepth = coins.length;
        int reducedCoins[] = new int[(coinsDepth - 1)];
        for (int j = 0; j < coinsDepth - 1; j++) {
            reducedCoins[j] = coins[j + 1];
        }
        // integer rounding okay;
        int numberOfPerms = ammount / currentCoin;
        for (int j = 0; j <= numberOfPerms; j++) {
            solution newSolution = CurrentSolution.clone();
            newSolution.add(j);
            GetPerms(ammount - j * currentCoin, reducedCoins, newSolution, mSolutions);
        }
    }

    private class solution {
        ArrayList<Integer> mNumberOfCoins;

        solution(ArrayList<Integer> anumberOfCoins) {
            mNumberOfCoins = anumberOfCoins;
        }

        @Override
        public String toString() {
            if (mNumberOfCoins != null && mNumberOfCoins.size() > 0) {
                String retval = mNumberOfCoins.get(0).toString();
                for (int i = 1; i < mNumberOfCoins.size(); i++) {
                    retval += "," + mNumberOfCoins.get(i).toString();
                }
                return retval;
            } else {
                return "";
            }
        }

        @Override
        protected solution clone() {
            return new solution((ArrayList<Integer>) mNumberOfCoins.clone());
        }

        public void add(int i) {
            mNumberOfCoins.add(i);
        }
    }
}
Dynamic Programming Solution
Given an array of denominations D = {d1, d2, d3, ... , dm} and a target amount W. Note that D doesn't need to be sorted.
Let T(i, j) be the number of combinations that make up amount j using only the first i denominations of D (the i-th one included).
We have:
T(0, 0) = 1 : since the amount is 0, there is only 1 valid combination that makes up 0, which is the empty set.
T(i, j) = T(i - 1, j) if D[i] > j
T(i, j) = T(i - 1, j) + T(i, j - D[i]) if D[i] <= j
public int change(int amount, int[] coins) {
    int m = coins.length;
    int n = amount;
    int[][] dp = new int[m + 1][n + 1];
    dp[0][0] = 1;
    for (int i = 1; i <= m; i++) {
        for (int j = 0; j <= n; j++) {
            if (j < coins[i - 1]) {
                dp[i][j] = dp[i - 1][j];
            } else {
                dp[i][j] = dp[i - 1][j] + dp[i][j - coins[i - 1]];
            }
        }
    }
    return dp[m][n];
}
public static void main(String[] args) {
    int b, c, total = 15;
    int combos = 1;
    for (int d = 0; d < total / 7; d++) {
        b = total - d * 7;
        for (int n = 0; n <= b / 6; n++) {
            combos++;
        }
    }
    System.out.print("TOTAL COMBINATIONS = " + combos);
}
Below is a recursive backtracking solution I created. It lists and counts all possible combinations of denominations (coins) that add up to a given amount.
Both the denominations and the amount can be dynamic.
public class CoinComboGenerate {
    public static final int[] DENO = {1, 6, 7};
    public static final int AMOUNT = 15;
    public static int count = 0;

    public static void change(int amount) {
        change(amount, new ArrayList<>(), 0);
    }

    private static void change(int rem, List<Integer> coins, int pos) {
        if (rem == 0) {
            count++;
            System.out.println(count + ")" + coins);
            return;
        }
        while (pos < DENO.length) {
            if (rem >= DENO[pos]) {
                coins.add(DENO[pos]);
                change(rem - DENO[pos], coins, pos);
                coins.remove(coins.size() - 1); // backtrack
            }
            pos++;
        }
    }

    public static void main(String[] args) {
        change(AMOUNT);
    }
}
Output:
1)[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
2)[1, 1, 1, 1, 1, 1, 1, 1, 1, 6]
3)[1, 1, 1, 1, 1, 1, 1, 1, 7]
4)[1, 1, 1, 6, 6]
5)[1, 1, 6, 7]
6)[1, 7, 7]
The same problem for the coins (1, 5, 10, 25, 50) has one of the solutions below.
The solution should satisfy the equation:
1*a + 5*b + 10*c + 25*d + 50*e == cents
public static void countWaysToProduceGivenAmountOfMoney(int cents) {
    for (int a = 0; a <= cents; a++) {
        for (int b = 0; b <= cents / 5; b++) {
            for (int c = 0; c <= cents / 10; c++) {
                for (int d = 0; d <= cents / 25; d++) {
                    for (int e = 0; e <= cents / 50; e++) {
                        if (1 * a + 5 * b + 10 * c + 25 * d + 50 * e == cents) {
                            System.out.println("1 cents :" + a + ", 5 cents:" + b + ", 10 cents:" + c);
                        }
                    }
                }
            }
        }
    }
}
This can be modified for any general solution.
