With the following code, I count the restricted integer partitions (each number can occur at most once in each partition) with k numbers in each partition, where each number is at least 1 and at most m. This code generates a lot of cache entries, so it runs out of memory quickly.
Example:
sum := 15, k := 4, m := 10; the expected result is 6.
It has the following restricted integer partitions:
[1, 2, 3, 9], [1, 2, 4, 8], [1, 2, 5, 7], [1, 3, 4, 7], [1, 3, 5, 6], [2, 3, 4, 6]
public class Key {
    private final int sum;
    private final short k1;
    private final short start;
    private final short end;

    public Key(int sum, short k1, short start, short end) {
        this.sum = sum;
        this.k1 = k1;
        this.start = start;
        this.end = end;
    }
    // + hashCode and equals
}
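The elided methods could look like this (one standard variant; any equivalent equals/hashCode pair works), placed inside Key:

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Key)) return false;
    Key other = (Key) o;
    return sum == other.sum && k1 == other.k1
        && start == other.start && end == other.end;
}

@Override
public int hashCode() {
    return java.util.Objects.hash(sum, k1, start, end);
}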
public BigInteger calcRestrictedIntegerPartitions(int sum, short k, short m) {
    return calcRestrictedIntegerPartitionsHelper(sum, (short) 0, k, (short) 1, m, new HashMap<>());
}

private BigInteger calcRestrictedIntegerPartitionsHelper(int sum, short k1, short k, short start, short end, Map<Key, BigInteger> cache) {
    if (sum < 0) {
        return BigInteger.ZERO;
    }
    if (k1 == k) {
        if (sum == 0) {
            return BigInteger.ONE;
        }
        return BigInteger.ZERO;
    }
    if (end * (k - k1) < sum) {
        return BigInteger.ZERO;
    }
    final Key key = new Key(sum, (short) (k - k1), start, end);
    BigInteger fetched = cache.get(key);
    if (fetched == null) {
        BigInteger tmp = BigInteger.ZERO;
        for (short i = start; i <= end; i++) {
            tmp = tmp.add(calcRestrictedIntegerPartitionsHelper(sum - i, (short) (k1 + 1), k, (short) (i + 1), end, cache));
        }
        cache.put(key, tmp);
        return tmp;
    }
    return fetched;
}
Is there a formula to avoid/reduce caching? Or how can I count restricted integer partitions with k and m?
Your problem can be transposed, so you only need 3 keys in your cache and a lot less runtime to boot. Fewer distinct keys means better caching (a smarter person than me may still find a cheaper solution).
Let's view the partitions as sets. The elements of each set shall be ordered (ascending).
You have already done this implicitly, when you stated the expected results for sum := 15, k := 4, m:= 10 as [1, 2, 3, 9]; [1, 2, 4, 8] ....
The restrictions you defined for the partitions are:
exactly k elements per set
max m as element
distinct values
non-zero positive integers
The restriction of distinction is actually a bit bothersome, so we will lift it.
For that, we need to transform the problem a bit. Because the elements of your set are ascending (and distinct), we know that the minimum value of each element forms an ascending sequence (if we ignore that the total must equal sum), so the minima are: [1, 2, 3, ...].
If m were for example less than k, then the number of possible partitions would always be zero. Likewise, if the sum of [1, 2, 3, ... k] is more than sum, then you also have zero results. We exclude these edge cases at the beginning, to make sure the transformation is legal.
Let us look at a geometric representation of a 'legal partition' and how we want to transform it. We have k columns, m rows and sum squares are filled blue (either light or dark blue).
The red and dark blue squares are irrelevant, as we already know the dark blue squares must always be filled, and the red ones must always be empty. Therefore we can exclude them from our calculation and assume their respective states as we go along. The resulting box is represented on the right side. Every column was 'shifted down' by its position, and the red and dark blue areas are cut off.
We now have a smaller overall box and a column can now be empty (and we may have the same number of blue boxes among neighboring columns).
Algorithmically the transformation now works like this:
For every element in a legal partition, we subtract its position (starting at 1). So for [1, 2, 4, 8] we get [0, 0, 1, 4]. Furthermore, we have to adapt our bounds (sum and m) accordingly:
// from the sum, we subtract the sum of [1, 2, 3, ... k], which is (k * (k + 1) / 2)
sum_2 = sum - (k * (k + 1) / 2)
// from m we subtract the maximum position (which is k)
m_2 = m - k
Now we have transposed our partitioning problem into another partitioning problem, one that does not have the restriction of distinct elements! Also, this partition can contain element 0, which our original could not. (We keep the internal ascending order).
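For the running example (sum = 15, k = 4, m = 10) this gives sum_2 = 15 - 10 = 5 and m_2 = 10 - 4 = 6, and the six original partitions map to [0,0,0,5], [0,0,1,4], [0,0,2,3], [0,1,1,3], [0,1,2,2] and [1,1,1,2]: exactly the non-decreasing partitions of 5 into 4 parts from the range 0..6.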
Now we need to refine the recursion a bit. If we know the elements are ascending, not necessarily distinct and always less-equal to m_2, then we have bound the possible elements to a range. Example:
[0, 1, 3, n1, n2]
=> 3 <= n1 <= m_2
=> 3 <= n2 <= m_2
Because we know that n1 and n2 in the example are 3 or greater, when calling the recursion we can instead reduce them both by 3 and reduce sum_2 by 2 * 3 (one factor is the number of 'open' elements, the other is the value of the last 'fixed' element). This way, what we pass to the recursion does not have an upper and a lower bound, but only an upper bound, which is what we had before (m).
Because of this, we can toss 1 value of your cache key: start. Instead we now only have 3: sum, m and k, when solving this reduced problem.
The following implementation works to this effect:
@Test
public void test() {
    calcNumRIPdistinctElementsSpecificKmaxM(600, (short) 25, (short) 200);
}
public BigInteger calcNumRIPdistinctElementsSpecificKmaxM(int sum, short k, short m) {
    // If the biggest allowed number in a partition is less than the number of parts,
    // then they cannot all be distinct, therefore we have zero results.
    if (m < k) {
        return BigInteger.ZERO;
    }
    // If the expected sum is less than the sum of the k minimum element-values,
    // then we also have no results.
    final int v = ((k * ((int) k + 1)) / 2);
    if (sum < v) {
        return BigInteger.ZERO;
    }
    // We normalize the problem by lifting the distinction restriction.
    final Cache cache = new Cache();
    final int sumNorm = sum - v;
    final short mNorm = (short) (m - k);
    BigInteger result = calcNumRIPspecificKmaxM(sumNorm, k, mNorm, cache);
    System.out.println("Calculation (n=" + sum + ", k=" + k + ", m=" + m + ")");
    System.out.println("p = " + result);
    System.out.println("entries = " + cache.getNumEntries());
    System.out.println("c-rate = " + cache.getCacheRate());
    return result;
}
public BigInteger calcNumRIPspecificKmaxM(int sum, short k, short m, Cache cache) {
    // We can improve cache use by standing the k*m-rectangle upright (k being the 'bottom').
    if (k > m) {
        final short c = k;
        k = m;
        m = c;
    }
    // If the result is trivial, we just calculate it. This is true for k < 3
    if (k < 3) {
        if (k == 0) {
            return sum == 0 ? BigInteger.ONE : BigInteger.ZERO;
        } else if (k == 1) {
            return sum <= m ? BigInteger.ONE : BigInteger.ZERO;
        } else {
            final int upper = Math.min(sum, m);
            final int lower = sum - upper;
            if (upper < lower) {
                return BigInteger.ZERO;
            }
            final int difference = upper - lower;
            final int numSubParts = difference / 2 + 1;
            return BigInteger.valueOf(numSubParts);
        }
    }
    // If k * m / 2 < sum, we can 'invert' the sub problem to reduce the number of keys further.
    sum = Math.min(sum, k * m - sum);
    // If the sum is less than m and maybe even k, we can reduce the box. This improves the cache size even further.
    if (sum < m) {
        m = (short) sum;
        if (sum < k) {
            k = (short) sum;
            if (k < 3) {
                return calcNumRIPspecificKmaxM(sum, k, m, cache);
            }
        }
    }
    // If the result is non-trivial, we check the cache or delegate.
    final Triple<Short, Short, Integer> key = Triple.of(k, m, sum);
    final BigInteger cachedResult = cache.lookUp(key);
    if (cachedResult != null) {
        return cachedResult;
    }
    BigInteger current = BigInteger.ZERO;
    // i = m is reached in case the result is an ascending stair e.g. [1, 2, 3, 4]
    for (int i = 0; i <= m; ++i) {
        final int currentSum = sum - (i * k);
        if (currentSum < 0) {
            break;
        }
        short currentK = (short) (k - 1);
        short currentM = (short) (m - i);
        current = current.add(calcNumRIPspecificKmaxM(currentSum, currentK, currentM, cache));
    }
    // We cache this new result and return it.
    cache.enter(key, current);
    return current;
}
public static class Cache {
    private final HashMap<Triple<Short, Short, Integer>, BigInteger> map = new HashMap<>(1024);
    private long numLookUps = 0;
    private long numReuse = 0;

    public BigInteger lookUp(Triple<Short, Short, Integer> key) {
        ++numLookUps;
        BigInteger value = map.get(key);
        if (value != null) {
            ++numReuse;
        }
        return value;
    }

    public void enter(Triple<Short, Short, Integer> key, BigInteger value) {
        map.put(key, value);
    }

    public double getCacheRate() {
        return (double) numReuse / map.size();
    }

    public int getNumEntries() {
        return map.size();
    }

    public long numLookUps() {
        return numLookUps;
    }

    public long getNumReuse() {
        return numReuse;
    }
}
Note: I used apache-common's Triple-class as key here, to spare the implementation of an explicit key-class, but this is not an optimization in runtime, it just saves code.
Edit: Besides a fix to a problem found by @MBo (thank you), I added a few shortcuts to reach the same result. The algorithm now performs even better, and the cache (reuse) rate is better. Maybe this will satisfy your requirements?
The optimizations explained (they are only applicable after the above mentioned transposition of the problem):
If k > m, we can 'flip' the rectangle upright, and still get the same result for the number of legal partitions. This will map some 'lying' configurations into 'upright' configurations and reduce the overall amount of different keys.
If the number of squares in the rectangle is larger than the number of 'empty spaces', we can consider the 'empty spaces' as squares instead, which will map another bunch of keys together.
If sum < k and/or sum < m, we can reduce k and/or m to sum, and still get the same number of partitions. (this is the most impacting optimization, as it often skips multiple redundant interim steps and frequently reaches m = k = sum)
Your key contains 4 parts, so the hash space might reach the product of the max values of these parts. It is possible to shrink the key to 3 parts by using backward loops and the zero value as a natural limit.
The Python example uses the built-in functools.lru_cache with hashtable size = N*K*M.
import functools

@functools.lru_cache(250000)
def diff_partition(N, K, M):
    '''Counts integer partitions of N with K distinct parts <= M'''
    if K == 0:
        if N == 0:
            return 1
        return 0
    res = 0
    for i in range(min(N, M), -1, -1):
        res += diff_partition(N - i, K - 1, i - 1)
    return res

def diffparts(Sum, K, M):  # diminish problem size allowing zero part
    return diff_partition(Sum - K, K, M - 1)
print(diffparts(500, 25, 200))
>>>147151784574
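Since the question is in Java, a direct port of this 3-key recursion might look like the following sketch (the memo key and the method names are mine, not part of the answer above):

import java.math.BigInteger;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

private final Map<List<Integer>, BigInteger> cache = new HashMap<>();

// diminish problem size by allowing zero parts, as in the Python version
public BigInteger diffParts(int sum, int k, int m) {
    return diffPartition(sum - k, k, m - 1);
}

private BigInteger diffPartition(int n, int k, int m) {
    if (k == 0) {
        return n == 0 ? BigInteger.ONE : BigInteger.ZERO;
    }
    final List<Integer> key = List.of(n, k, m); // 3-part key: (N, K, M)
    final BigInteger cached = cache.get(key);
    if (cached != null) {
        return cached;
    }
    BigInteger res = BigInteger.ZERO;
    for (int i = Math.min(n, m); i >= 0; i--) {
        res = res.add(diffPartition(n - i, k - 1, i - 1));
    }
    cache.put(key, res);
    return res;
}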
An alternative would be to use a constraint solver and configure it to show all solutions. Here a solution with MiniZinc:
include "globals.mzn";
int: sum = 15;
int: k = 4;
int: m = 10;
array[1..k] of var 1..m: numbers;
constraint sum(numbers) = sum;
constraint alldifferent(numbers);
constraint increasing(numbers);
solve satisfy;
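To actually enumerate (and count) the partitions rather than find a single one, the model can be run with the solver's all-solutions flag, e.g. (assuming the model is saved as partitions.mzn):

minizinc --all-solutions partitions.mzn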
Related
I am trying to save many natural numbers that are smaller than m into one natural number n.
I need a function to read the i'th number from n.
In python I can do it like:
def read(n, m, i):  # reads the number at index i from n
    return n // m**i % m

def save(numbers_to_save, m=None):  # saves natural numbers that are smaller than m into n
    if m is None:
        m = max(numbers_to_save) + 1
    n = 0
    for i_number in range(len(numbers_to_save)):
        n += m**i_number * numbers_to_save[i_number]
    return n
numbers_to_save=[12,54,3,7,23,8,9,3,72,3]
i_max=len(numbers_to_save)
m=max(numbers_to_save)+1
n=save(numbers_to_save,m)
del numbers_to_save
for i in range(i_max):
    print(read(n, m, i), end=",")
But how can I do this efficiently in Java, reading n only byte by byte? n is bigger than the maximum value of long, so I cannot use long to store n.
To translate this code to Java, you would need to use BigInteger class.
It works similarly to Python's "infinite" size integers, but with two key differences:
It is immutable, which means every time you change it, the result is a new object you must store in place of the old one.
You can't use regular operators (+, -, *, /) on it directly; instead you must use the instance methods such as add or pow.
Here is an example of how your read function will look in Java:
int read(BigInteger n, BigInteger m, int i) {
    return n.divide(m.pow(i)).mod(m).intValue();
}
Note, that for simplicity, this code assumes that both i and m will be smaller than MAX_INT.
It is possible to make both of them BigInteger as well to allow them to be of any size.
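For symmetry, the save function could be ported the same way; a minimal sketch (mine, not part of the answer above):

static BigInteger save(int[] numbersToSave, BigInteger m) {
    BigInteger n = BigInteger.ZERO;
    BigInteger mult = BigInteger.ONE;
    for (int num : numbersToSave) {
        n = n.add(mult.multiply(BigInteger.valueOf(num)));
        mult = mult.multiply(m); // shift to the next base-m digit
    }
    return n;
}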
A long can be used for the specific numbers given (since m**count < Long.MAX_VALUE).
import static java.lang.Math.*;

public static long save(int m, int... numbers) {
    long result = 0;
    long mult = 1;
    for (var num : numbers) {
        if (num < 0 || num >= m) throw new IllegalArgumentException("invalid: " + num);
        result = addExact(result, multiplyExact(mult, num));
        mult = multiplyExact(mult, m);
    }
    return result;
}

public static int read(long compressed, int m, int i) {
    return (int) (compressed / (long) pow(m, i) % m);
}

private static void test() {
    int[] numbers = { 12, 54, 3, 7, 23, 8, 9, 3, 72, 3 };
    int m = Arrays.stream(numbers).max().orElseThrow() + 1;
    long compressed = save(m, numbers);
    for (var i = 0; i < numbers.length; i++) {
        int val = read(compressed, m, i);
        if (val == numbers[i])
            System.out.println(val);
        else
            System.err.printf("%d != %d # %d%n", val, numbers[i], i);
    }
}
I am a bit lazy, so I used the Math methods addExact and multiplyExact, which throw an exception in case of overflow. Alternative: check at the start of the method whether a long can hold that count of numbers for the given m, and use
result += mult * num;
mult *= m;
instead in the loop.
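Such an up-front check could be sketched like this (checkCapacity is a hypothetical helper; the 62-bit bound is deliberately conservative):

static void checkCapacity(int m, int count) {
    // m^count must fit into a signed 64-bit long
    double bits = count * (Math.log(m) / Math.log(2));
    if (bits > 62) {
        throw new IllegalArgumentException("m^count exceeds the range of long");
    }
}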
Use BigInteger as posted in this answer, if more space is needed.
This code also works with int to compress fewer, smaller values.
I'm trying to find the average of all even numbers in an array using recursion and I'm stuck.
I realize that n will have to be decremented for each odd number so I divide by the correct value, but I can't wrap my mind around how to do so with recursion.
I don't understand how to keep track of n as I go, considering it will just revert when I return.
Is there a way I'm missing to keep track of n, or am I looking at this the wrong way entirely?
EDIT: I should have specified, I need to use recursion specifically. It's an assignment.
public static int getEvenAverage(int[] A, int i, int n)
{
    // first element
    if (i == 0)
        if (A[i] % 2 == 0)
            return A[0];
        else
            return 0;
    // last element
    if (i == n - 1)
    {
        if (A[i] % 2 == 0)
            return (A[i] + getEvenAverage(A, i - 1, n)) / n;
        else
            return (0 + getEvenAverage(A, i - 1, n)) / n;
    }
    if (A[i] % 2 == 0)
        return A[i] + getEvenAverage(A, i - 1, n);
    else
        return 0 + getEvenAverage(A, i - 1, n);
}
In order to keep track of the number of even numbers you have encountered so far, just pass an extra parameter.
Moreover, you can also pass an extra parameter for the sum of even numbers and when you hit the base case you can return the average, that is, sum of even numbers divided by their count.
One more thing: your code has two base cases, for the first as well as the last element, which is unneeded.
You can either go decrementing n ( start from size of array and go till the first element ), or
You can go incrementing i starting from 0 till you reach size of array, that is, n.
Here, is something I tried.
public static int getEvenAvg(int[] a, int n, int ct, int sum) {
    if (n == -1) {
        // make sure you handle the case
        // when the count of even numbers is zero,
        // otherwise you'll get a runtime error
        return sum / ct;
    }
    if (a[n] % 2 == 0) {
        ct++;
        sum += a[n];
    }
    return getEvenAvg(a, n - 1, ct, sum);
}
You can call the function like this: getEvenAvg(a, size_of_array - 1, 0, 0);
When dealing with recursive operations, it's often useful to start with the terminating conditions. So what are our terminating conditions here?
There are no more elements to process:
if (index >= a.length) {
    // To avoid divide-by-zero
    return count == 0 ? 0 : sum / count;
}
... okay, now how do we reduce the number of elements to process? We should probably increment index?
index++;
... oh, but only when going to the next level:
getEvenAverage(a, index + 1, sum, count);
Well, we're also going to have to add to sum and count, right?
sum += a[index];
count++;
.... except, only if the element is even:
if (a[index] % 2 == 0) {
    sum += a[index];
    count++;
}
... and that's about it:
static int getEvenAverage(int[] a, int index, int sum, int count) {
    if (index >= a.length) {
        // To avoid divide-by-zero
        return count == 0 ? 0 : sum / count;
    }
    if (a[index] % 2 == 0) {
        sum += a[index];
        count++;
    }
    return getEvenAverage(a, index + 1, sum, count);
}
... although you likely want a wrapper function to make calling it prettier:
static int getEvenAverage(int[] a) {
    return getEvenAverage(a, 0, 0, 0);
}
Java is not a good language for this kind of thing but here we go:
public class EvenAverageCalculation {

    public static void main(String[] args) {
        int[] array = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        System.out.println(getEvenAverage(array));
    }

    public static double getEvenAverage(int[] values) {
        return getEvenAverage(values, 0, 0);
    }

    private static double getEvenAverage(int[] values, double currentAverage, int nrEvenValues) {
        if (values.length == 0) {
            return currentAverage;
        }
        int head = values[0];
        int[] tail = new int[values.length - 1];
        System.arraycopy(values, 1, tail, 0, tail.length);
        if (head % 2 != 0) {
            return getEvenAverage(tail, currentAverage, nrEvenValues);
        }
        double newAverage = currentAverage * nrEvenValues + head;
        nrEvenValues++;
        newAverage = newAverage / nrEvenValues;
        return getEvenAverage(tail, newAverage, nrEvenValues);
    }
}
You pass the current average and the number of even elements so far to each the recursive call. The new average is calculated by multiplying the average again with the number of elements so far, add the new single value and divide it by the new number of elements before passing it to the next recursive call.
The part that is not so good in Java is recreating a new array for each recursive call. Other languages have syntax for splitting the head and tail of an array, which also comes with a much smaller memory footprint (here, each recursive call creates a new int array with n-1 elements). But the way I implemented it is the classical style of functional programming (at least how I learned it in 1994, when I had similar assignments in the programming language Gofer ;-)
Explanation
The difficulties here are that you need to memorize two values:
the amount of even numbers and
the total value accumulated by the even numbers.
And you need to return a final value for an average.
This means that you need to memorize three values at once while only being able to return one element.
Outline
For a clean design you need some kind of container that holds those intermediate results, for example a class like this:
public class Results {
    public int totalValueOfEvens;
    public int amountOfEvens;

    public double getAverage() {
        return (totalValueOfEvens + 0.0) / amountOfEvens;
    }
}
Of course you could also use something like an int[] with two entries.
After that the recursion is very simple. You just need to recursively traverse the array, like:
public void method(int[] values, int index) {
    // Abort if last element
    if (index == values.length - 1) {
        return;
    }
    method(values, index + 1);
}
And while doing so, update the container with the current values.
Collecting backwards
When collecting backwards you need to store all information in the return value.
As you have multiple things to remember, you should use a container as return type (Results or a 2-entry int[]). Then simply traverse to the end, collect and return.
Here is how it could look like:
public static Results getEvenAverage(int[] values, int curIndex) {
    Results results;
    // Traverse to the end, where the container is created
    if (curIndex != values.length - 1) {
        results = getEvenAverage(values, curIndex + 1);
    } else {
        results = new Results();
    }
    // Update container
    int myValue = values[curIndex];
    // Whether this element contributes
    if (myValue % 2 == 0) {
        // Update the result container
        results.totalValueOfEvens += myValue;
        results.amountOfEvens++;
    }
    // Return accumulated results
    return results;
}
Collecting forwards
The advantage of this method is that the caller does not need to call results.getAverage() himself. You store the information in the parameters and are thus free to choose the return type.
We get our current value and update the container. Then we call the next element and pass him the current container.
After the last element was called, the information saved in the container is final. We now simply need to end the recursion and return to the first element. When again visiting the first element, it will compute the final output based on the information in the container and return.
public static double getEvenAverage(int[] values, int curIndex, Results results) {
    // First element in recursion
    if (curIndex == 0) {
        // Set up the result container
        results = new Results();
    }
    int myValue = values[curIndex];
    // Whether this element contributes
    if (myValue % 2 == 0) {
        // Update the result container
        results.totalValueOfEvens += myValue;
        results.amountOfEvens++;
    }
    // Not the last element in recursion
    if (curIndex != values.length - 1) {
        getEvenAverage(values, curIndex + 1, results);
    }
    // Return the current intermediate average,
    // which is the correct result if the current element
    // is the first of the recursion
    return results.getAverage();
}
Usage by end-user
The backward method is used like:
Results results = getEvenAverage(values, 0);
double average = results.getAverage();
Whereas the forward method is used like:
double average = getEvenAverage(values, 0, null);
Of course you can hide that from the user using a helper method:
public double computeEvenAverageBackward(int[] values) {
    return getEvenAverage(values, 0).getAverage();
}

public double computeEvenAverageForward(int[] values) {
    return getEvenAverage(values, 0, null);
}
Then, for the end-user, it is just this call:
double average = computeEvenAverageBackward(values);
Here's another variant, which uses a (moderately) well known recurrence relationship for averages:
avg_0 = 0
avg_n = avg_(n-1) + (x_n - avg_(n-1)) / n
where avg_n refers to the average of n observations, and x_n is the nth observation.
This leads to:
/*
 * a is the array of values to process
 * i is the current index under consideration
 * n is a counter which is incremented only if the current value gets used
 * avg is the running average
 */
private static double getEvenAverage(int[] a, int i, int n, double avg) {
    if (i >= a.length) {
        return avg;
    }
    if (a[i] % 2 == 0) {         // only do updates for even values
        avg += (a[i] - avg) / n; // calculate delta and update the average
        n += 1;
    }
    return getEvenAverage(a, i + 1, n, avg);
}
which can be invoked using the following front-end method to protect users from needing to know about the parameter initialization:
public static double getEvenAverage(int[] a) {
    return getEvenAverage(a, 0, 1, 0.0);
}
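For example, getEvenAverage(new int[] {1, 2, 3, 4}) updates the running average 0.0 -> 2.0 -> 3.0 and returns 3.0, the average of the even values 2 and 4.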
And now for a completely different approach.
This one draws on the fact that if you have two averages, avg1 based on n1 observations and avg2 based on n2 observations, you can combine them to produce a pooled average:
avg_pooled = (n_1 * avg_1 + n_2 * avg_2) / (n_1 + n_2).
The only issue here is that the recursive function should return two values, the average and the number of observations on which that average is based. In many other languages, that's not a problem. In Java, it requires some hackery in the form of a trivial, albeit slightly annoying, helper class:
// private helper class because Java doesn't allow multiple returns
private static class Pair {
    public double avg;
    public int n;

    public Pair(double avg, int n) {
        super();
        this.avg = avg;
        this.n = n;
    }
}
Applying a divide and conquer strategy yields the following recursion:
private static Pair getEvenAverage(int[] a, int first, int last) {
    if (first == last) {
        if (a[first] % 2 == 0) {
            return new Pair(a[first], 1);
        }
    } else {
        int mid = (first + last) / 2;
        Pair p1 = getEvenAverage(a, first, mid);
        Pair p2 = getEvenAverage(a, mid + 1, last);
        int total = p1.n + p2.n;
        if (total > 0) {
            return new Pair((p1.n * p1.avg + p2.n * p2.avg) / total, total);
        }
    }
    return new Pair(0.0, 0);
}
We can deal with empty arrays, protect the end-user from having to know about the book-keeping arguments, and return just the average by using the following public front-end:
public static double getEvenAverage(int[] a) {
    return a.length > 0 ? getEvenAverage(a, 0, a.length - 1).avg : 0.0;
}
This solution has the benefit of O(log n) stack growth for an array of n items, versus O(n) for the various other solutions that have been proposed. As a result, it can deal with much larger arrays without fear of a stack overflow.
I'm solving Codility questions as practice and couldn't answer one of the questions. I found the answer on the Internet but I don't get how this algorithm works. Could someone walk me through it step-by-step?
Here is the question:
/*
You are given integers K, M and a non-empty zero-indexed array A consisting of N integers.
Every element of the array is not greater than M.
You should divide this array into K blocks of consecutive elements.
The size of the block is any integer between 0 and N. Every element of the array should belong to some block.
The sum of the block from X to Y equals A[X] + A[X + 1] + ... + A[Y]. The sum of empty block equals 0.
The large sum is the maximal sum of any block.
For example, you are given integers K = 3, M = 5 and array A such that:
A[0] = 2
A[1] = 1
A[2] = 5
A[3] = 1
A[4] = 2
A[5] = 2
A[6] = 2
The array can be divided, for example, into the following blocks:
[2, 1, 5, 1, 2, 2, 2], [], [] with a large sum of 15;
[2], [1, 5, 1, 2], [2, 2] with a large sum of 9;
[2, 1, 5], [], [1, 2, 2, 2] with a large sum of 8;
[2, 1], [5, 1], [2, 2, 2] with a large sum of 6.
The goal is to minimize the large sum. In the above example, 6 is the minimal large sum.
Write a function:
class Solution { public int solution(int K, int M, int[] A); }
that, given integers K, M and a non-empty zero-indexed array A consisting of N integers, returns the minimal large sum.
For example, given K = 3, M = 5 and array A such that:
A[0] = 2
A[1] = 1
A[2] = 5
A[3] = 1
A[4] = 2
A[5] = 2
A[6] = 2
the function should return 6, as explained above. Assume that:
N and K are integers within the range [1..100,000];
M is an integer within the range [0..10,000];
each element of array A is an integer within the range [0..M].
Complexity:
expected worst-case time complexity is O(N*log(N+M));
expected worst-case space complexity is O(1), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
*/
And here is the solution I found with my comments about parts which I don't understand:
public static int solution(int K, int M, int[] A) {
    int lower = max(A); // why lower is max?
    int upper = sum(A); // why upper is sum?
    while (true) {
        int mid = (lower + upper) / 2;
        int blocks = calculateBlockCount(A, mid); // don't I have specified number of blocks? What blocks do? Don't get that.
        if (blocks < K) {
            upper = mid - 1;
        } else if (blocks > K) {
            lower = mid + 1;
        } else {
            return upper;
        }
    }
}

private static int calculateBlockCount(int[] array, int maxSum) {
    int count = 0;
    int sum = array[0];
    for (int i = 1; i < array.length; i++) {
        if (sum + array[i] > maxSum) {
            count++;
            sum = array[i];
        } else {
            sum += array[i];
        }
    }
    return count;
}

// returns sum of all elements in an array
private static int sum(int[] input) {
    int sum = 0;
    for (int n : input) {
        sum += n;
    }
    return sum;
}

// returns max value in an array
private static int max(int[] input) {
    int max = -1;
    for (int n : input) {
        if (n > max) {
            max = n;
        }
    }
    return max;
}
So what the code does is use a form of binary search (how binary search works is explained quite nicely here: https://www.topcoder.com/community/data-science/data-science-tutorials/binary-search/, and it also uses an example quite similar to your problem), where you search for the minimum sum every block needs to contain. In the example case, you need to divide the array into 3 parts.
When doing a binary search you need to define 2 boundaries, where you are certain that your answer can be found in between. Here, the lower boundary is the maximum value in the array (lower). For the example this is 5 (the case where you divide your array into 7 blocks). The upper boundary (upper) is 15, which is the sum of all the elements in the array (the case where you divide the array into 1 block).
Now comes the search part: In solution() you start with your bounds and mid point (10 for the example).
In calculateBlockCount you count (count++ does that) how many blocks you can make if your sum is a maximum of 10 (your middle point, or maxSum in calculateBlockCount).
For the example 10 (in the while loop) this is 2 blocks; the code returns this (blocks) to solution(). Then it checks whether it is less or more than K, which is the number of blocks you want. If it's less than K, your mid point is too high: you're putting too many array elements into your blocks. If it's more than K, your mid point is too low: you're putting too few array elements into each block.
Now after checking this, it halves the solution space (upper = mid - 1).
This happens every loop; halving the solution space makes it converge quite quickly.
Now you keep going through your while loop, adjusting mid, until it gives the number of blocks that matches your input K.
So to go though it step by step:
Mid =10 , calculateBlockCount returns 2 blocks
solution. 2 blocks < K so upper -> mid-1 =9, mid -> 7 (lower is 5)
Mid =7 , calculateBlockCount returns 2 blocks
solution() 2 blocks < K so upper -> mid-1 = 6, mid -> 5 (lower is 5; integer division makes it 5)
Mid =5 , calculateBlockCount returns 4 blocks
solution() 4 blocks > K so lower -> mid+1 = 6, mid -> 6 (lower is 6, upper is 6)
Mid =6 , calculateBlockCount returns 3 blocks
So the function returns mid =6....
Hope this helps,
Gl learning to code :)
Edit: when using binary search, a prerequisite is that the solution space is monotonic. That is true in this case: as the allowed block sum increases, the number of blocks needed never increases.
It seems like your solution has some problems. I rewrote it as below:
class Solution {
    public int solution(int K, int M, int[] A) {
        // write your code in Java SE 8
        int high = sum(A);
        int low = max(A);
        int mid = 0;
        int smallestSum = 0;
        while (high >= low) {
            mid = (high + low) / 2;
            int numberOfBlock = blockCount(mid, A);
            if (numberOfBlock > K) {
                low = mid + 1;
            } else if (numberOfBlock <= K) {
                smallestSum = mid;
                high = mid - 1;
            }
        }
        return smallestSum;
    }

    public int sum(int[] A) {
        int total = 0;
        for (int i = 0; i < A.length; i++) {
            total += A[i];
        }
        return total;
    }

    public int max(int[] A) {
        int max = 0;
        for (int i = 0; i < A.length; i++) {
            if (max < A[i]) max = A[i];
        }
        return max;
    }

    public int blockCount(int max, int[] A) {
        int current = 0;
        int count = 1;
        for (int i = 0; i < A.length; i++) {
            if (current + A[i] > max) {
                current = A[i];
                count++;
            } else {
                current += A[i];
            }
        }
        return count;
    }
}
This helped me; posting it in case anyone else finds it helpful.
Think of it as a function: given k (the block count) we get some largeSum.
What is the inverse of this function? It's that given largeSum we get a k. This inverse function is implemented below.
In solution() we keep plugging guesses for largeSum into the inverse function until it returns the k given in the exercise.
To speed up the guessing process, we use binary search.
public class Problem {
    int SLICE_MAX = 100 * 1000 + 1;

    public int solution(int blockCount, int maxElement, int[] array) {
        // maxGuess is determined by looking at what the max possible largeSum could be
        // this happens if all elements are m and the blockCount is 1
        // Math.max is necessary, because blockCount can exceed array.length,
        // but this shouldn't lower maxGuess
        int maxGuess = (Math.max(array.length / blockCount, array.length)) * maxElement;
        int minGuess = 0;
        return helper(blockCount, array, minGuess, maxGuess);
    }

    private int helper(int targetBlockCount, int[] array, int minGuess, int maxGuess) {
        int guess = minGuess + (maxGuess - minGuess) / 2;
        int resultBlockCount = inverseFunction(array, guess);
        // if resultBlockCount == targetBlockCount this is not necessarily the solution
        // as there might be a lower largeSum, which also satisfies resultBlockCount == targetBlockCount
        if (resultBlockCount <= targetBlockCount) {
            if (minGuess == guess) return guess;
            // even if resultBlockCount == targetBlockCount
            // we keep searching for potential lower largeSum that also satisfies resultBlockCount == targetBlockCount
            // note that the search range below includes 'guess', as this might in fact be the lowest possible solution
            // but we need to check in case there's a lower one
            return helper(targetBlockCount, array, minGuess, guess);
        } else {
            return helper(targetBlockCount, array, guess + 1, maxGuess);
        }
    }

    // think of it as a function: given k (blockCount) we get some largeSum
    // the inverse of the above function is that given largeSum we get a k
    // in solution() we will keep guessing largeSum using binary search until
    // we hit k given in the exercise
    int inverseFunction(int[] array, int largeSumGuess) {
        int runningSum = 0;
        int blockCount = 1;
        for (int i = 0; i < array.length; i++) {
            int current = array[i];
            if (current > largeSumGuess) return SLICE_MAX;
            if (runningSum + current <= largeSumGuess) {
                runningSum += current;
            } else {
                runningSum = current;
                blockCount++;
            }
        }
        return blockCount;
    }
}
From anhtuannd's code, I refactored it using Java 8. It is slightly slower. Thanks, anhtuannd.
IntSummaryStatistics summary = Arrays.stream(A).summaryStatistics();
long high = summary.getSum();
long low = summary.getMax();
long result = 0;
while (high >= low) {
    long mid = (high + low) / 2;
    AtomicLong blocks = new AtomicLong(1);
    Arrays.stream(A).reduce(0, (acc, val) -> {
        if (acc + val > mid) {
            blocks.incrementAndGet();
            return val;
        } else {
            return acc + val;
        }
    });
    if (blocks.get() > K) {
        low = mid + 1;
    } else if (blocks.get() <= K) {
        result = mid;
        high = mid - 1;
    }
}
return (int) result;
Remember: You are searching the set of possible answers not the array A
In the example given, they are searching the space of possible answers. Consider 5 (the largest single element) as the smallest possible max value for a block, and 15 (the sum of [2, 1, 5, 1, 2, 2, 2]) as the largest possible max value for a block.
Mid = (5 + 15) // 2. Slicing out blocks of 10 at a time won't create more than 3 blocks in total.
Make 10-1 the upper and try again (5+9)//2 is 7. Slicing out blocks of 7 at a time won't create more than 3 blocks in total.
Make 7-1 the upper and try again (5+6)//2 is 5. Slicing out blocks of 5 at a time will create more than 3 blocks in total.
Make 5+1 the lower and try again (6+6)//2 is 6. Slicing out blocks of 6 at a time won't create more than 3 blocks in total.
Therefore 6 is the lowest limit to impose on the sum of a block that will permit breaking into 3 blocks.
At a recent computer programming competition that I was at, there was a problem where you have to determine if a number N, for 1<=N<=1000, is a palindromic square. A palindromic square is a number that can be read the same forwards and backwards and can be expressed as the sum of two or more consecutive perfect squares. For example, 595 is a palindrome and can be expressed as 6^2 + 7^2 + 8^2 + 9^2 + 10^2 + 11^2 + 12^2.
I understand how to determine if the number is a palindrome, but I'm having trouble trying to figure out if it can be expressed as the sum of two or more consecutive squares.
Here is the algorithm that I tried:
public static boolean isSumOfSquares(int num) {
    int sum = 0;
    int lowerBound = 1;
    // largest square root that is less than num
    int upperBound = (int) Math.floor(Math.sqrt(num));
    while (lowerBound != upperBound) {
        for (int x = lowerBound; x < upperBound; x++) {
            sum += x * x;
        }
        if (sum != num) {
            lowerBound++;
        } else {
            return true;
        }
        sum = 0;
    }
    return false;
}
My approach sets the upper bound to the largest integer whose square is at most the number, sets the lower bound to 1, and keeps evaluating the sum of squares from the lower bound to the upper bound. The issue is that only the lower bound changes while the upper bound stays the same.
This should be an efficient algorithm for determining if it's a sum of squares of consecutive numbers.
Start with a lower bound and upper bound of 1. The current sum of squares is 1.
public static boolean isSumOfSquares(int num) {
    int sum = 1;
    int lowerBound = 1;
    int upperBound = 1;
The maximum possible upper bound is the maximum number whose square is less than or equal to the number to test.
    int max = (int) Math.floor(Math.sqrt(num));
While loop. If the sum of squares is too little, then add the next square, incrementing upperBound. If the sum of squares is too high, then subtract the first square, incrementing lowerBound. Exit if the number is found. If it can't be expressed as the sum of squares of consecutive numbers, then eventually upperBound will exceed the max, and false is returned.
    while (sum != num)
    {
        if (sum < num)
        {
            upperBound++;
            sum += upperBound * upperBound;
        }
        else if (sum > num)
        {
            sum -= lowerBound * lowerBound;
            lowerBound++;
        }
        if (upperBound > max)
            return false;
    }
    return true;
Tests for several values, including 5, 11, 13, 54, 181, and 595. Yes, some of them aren't palindromes, but I'm just testing the "sum of squares of consecutive numbers" part.
1: true
2: false
3: false
4: true
5: true
11: false
13: true
54: true
180: false
181: true
595: true
596: false
Just for play, I created a Javascript function that gets all of the palindromic squares between a min and max value: http://jsfiddle.net/n5uby1wd/2/
HTML
<button text="click me" onclick="findPalindromicSquares()">Click Me</button>
<div id="test"></div>
JS
function isPalindrome(val) {
    return ((val + "") == (val + "").split("").reverse().join(""));
}

function findPalindromicSquares() {
    var max = 1000;
    var min = 1;
    var list = [];
    var done = false,
        first = true,
        sum = 0,
        maxsqrt = Math.floor(Math.sqrt(max)),
        sumlist = [];
    for (var i = min; i <= max; i++) {
        if (isPalindrome(i)) {
            done = false;
            // Start walking up the number list
            for (var j = 1; j <= maxsqrt; j++) {
                first = true;
                sum = 0;
                sumlist = [];
                for (var k = j; k <= maxsqrt; k++) {
                    sumlist.push(k);
                    sum = sum + (k * k);
                    if (!first && sum == i) {
                        list.push({"Value": i, "Sums": sumlist});
                        done = true;
                    }
                    else if (!first && sum > i) {
                        break;
                    }
                    first = false;
                    if (done) break;
                }
                if (done) break;
            }
        }
    }
    // write the list
    var html = "";
    for (var l = 0; l < list.length; l++) {
        html += JSON.stringify(list[l]) + "<br>";
    }
    document.getElementById("test").innerHTML = html;
}
Where min=1 and max=1000, returns:
{"Value":5,"Sums":[1,2]}
{"Value":55,"Sums":[1,2,3,4,5]}
{"Value":77,"Sums":[4,5,6]}
{"Value":181,"Sums":[9,10]}
{"Value":313,"Sums":[12,13]}
{"Value":434,"Sums":[11,12,13]}
{"Value":505,"Sums":[2,3,4,5,6,7,8,9,10,11]}
{"Value":545,"Sums":[16,17]}
{"Value":595,"Sums":[6,7,8,9,10,11,12]}
{"Value":636,"Sums":[4,5,6,7,8,9,10,11,12]}
{"Value":818,"Sums":[2,3,4,5,6,7,8,9,10,11,12,13]}
An updated version which allows testing individual values: http://jsfiddle.net/n5uby1wd/3/
It only took a few seconds to find them all between 1 and 1,000,000.
You are looking for S(n, k) = n^2 + (n + 1)^2 + (n + 2)^2 + ... (n + (k - 1))^2 which adds up to a specified sum m, i.e., S(n, k) = m. (I'm assuming you'll test for palindromes separately.) S(n, k) - m is a quadratic in n. You can easily work out an explicit expression for S(n, k) - m, so solve it using the quadratic formula. If S(n, k) - m has a positive integer root, keep that root; it gives a solution to your problem.
I'm assuming you can easily test whether a quadratic has a positive integer root. The hard part is probably determining whether the discriminant has an integer square root; I'm guessing you can figure that out.
You'll have to look for k = 2, 3, 4, .... You can stop when 1 + 4 + 9 + ... + k^2 > m. You can probably work out an explicit expression for that.
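A sketch of that approach in Java (the closed form S(n, k) = k*n^2 + k*(k-1)*n + (k-1)*k*(2k-1)/6 follows from the standard sum-of-squares formula; the integer square root correction is my own detail):

static boolean isSumOfConsecutiveSquares(long m) {
    // try every length k >= 2 while 1^2 + 2^2 + ... + k^2 <= m
    for (long k = 2; k * (k + 1) * (2 * k + 1) / 6 <= m; k++) {
        long a = k;                                  // coefficients of S(n, k) - m as a quadratic in n
        long b = k * (k - 1);
        long c = (k - 1) * k * (2 * k - 1) / 6 - m;
        long disc = b * b - 4 * a * c;
        if (disc < 0) continue;
        long root = (long) Math.sqrt((double) disc); // integer square root, then correct rounding
        while (root * root > disc) root--;
        while ((root + 1) * (root + 1) <= disc) root++;
        if (root * root != disc) continue;           // discriminant must be a perfect square
        long numerator = -b + root;
        if (numerator > 0 && numerator % (2 * a) == 0) {
            return true;                             // positive integer root n exists
        }
    }
    return false;
}

For example, m = 595 yields a perfect-square discriminant at k = 7 with root n = 6, matching 6^2 + ... + 12^2.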
Since there are only a few integer squares up to the limit, you can create an array of squares.
Then you keep a first and a last included index. Initially they are both 1.
While the sum is lower than your number, increase the last included index and update the sum.
While the sum is higher, increase the first included index and update the sum.
Or without any array, as in rgettman's answer
Start with an array of the first perfect squares. Let's say your numbers are 13 and 17; then your array will contain 1, 4, 9, and 16.
Do this kind of checking:
13 minus 1 (1^2) is 12; 12 is not a perfect square.
13 minus 4 (2^2) is 9; 9 is a perfect square, so 13 is the sum of two perfect squares.
17 minus 1 is 16; 1 and 16 are both perfect squares, so 17 is such a sum right away.
Keep going until you have determined whether the number is a sum of two perfect squares or not.
One method (probably not efficient) I can think of off the top of my head is:
Suppose N is 90.
X = 9 (the integer part of sqrt of 90).
1. Create an array of the squares of all integers up to X: [1, 4, 9, 16, 25, 36, 49, 64, 81].
2. Generate all possible combinations of the items in the array using recursion: [1,4], [1,9], [1,16], ... [4,1], [4,9], ... [1,4,9], ...
3. For each combination (as you generate it), check whether the sum adds up to N.
To save memory, upon generating each instance you can verify whether it sums to N; if not, discard it and move on to the next.
One of the instances will be [9, 81], where 9 + 81 = 90.
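A minimal sketch of that brute-force idea (note: like the description above, it checks arbitrary subsets of squares, not just consecutive runs):

static boolean subsetOfSquaresSums(int[] squares, int idx, int remaining) {
    if (remaining == 0) return true;                   // found a combination
    if (idx == squares.length || remaining < 0) return false;
    return subsetOfSquaresSums(squares, idx + 1, remaining - squares[idx]) // take squares[idx]
        || subsetOfSquaresSums(squares, idx + 1, remaining);               // skip it
}

For N = 90 with squares [1, 4, 9, 16, 25, 36, 49, 64, 81], it finds combinations such as [9, 81].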
I think you can determine whether a number is a sum of consecutive squares quickly in the following manner, which vastly reduces the amount of arithmetic that needs to be done. First, precompute all the sums of squares and place them in an array:
0, 0+1=1, 1+4=5, 5+9=14, 14+16=30, 30+25=55, 55+36=91, ...
Now, if a number is the sum of two or more consecutive squares, we can complete it by adding a number from the above sequence to obtain another number in the above sequence. For example, 77=16+25+36, and we can complete it by adding the listed number 14=0+1+4+9 to obtain the listed number 91=14+77=(0+1+4+9)+(16+25+36). The converse holds as well, provided the two listed numbers are at least two positions apart on the list.
How long does our list have to be? We can stop when we add the first square of n which satisfies (n-1)^2+n^2 > max where max in this case is 1000. Simplifying, we can stop when 2(n-1)^2 > max or n > sqrt(max/2) + 1. So for max=1000, we can stop when n=24.
To quickly test membership in the set, we should hash the numbers in the list as well as storing them in the list; the value of the hash should be the location of the number in the list so that we can quickly locate its position to determine whether it is at least two positions away from the starting point.
Here's my suggestion in Java:
import java.util.HashMap;

public class SumOfConsecutiveSquares {
    // UPPER_BOUND is the largest N we are testing
    static final int UPPER_BOUND = 1000;
    // UPPER_BOUND/2, sqrt, then round up, then add 1 gives MAX_INDEX
    static final int MAX_INDEX = (int) (Math.sqrt(UPPER_BOUND / 2.0)) + 1 + 1;
    static int[] sumsOfSquares = new int[MAX_INDEX + 1];
    static HashMap<Integer, Integer> sumsOfSquaresHash
            = new HashMap<Integer, Integer>();

    // pre-compute our list
    static {
        sumsOfSquares[0] = 0;
        sumsOfSquaresHash.put(0, 0);
        for (int i = 1; i <= MAX_INDEX; ++i) {
            sumsOfSquares[i] = sumsOfSquares[i - 1] + i * i;
            sumsOfSquaresHash.put(sumsOfSquares[i], i);
        }
    }

    public static boolean isSumOfConsecutiveSquares(int N) {
        for (int i = 0; i <= MAX_INDEX; ++i) {
            int candidate = sumsOfSquares[i] + N;
            if (sumsOfSquaresHash.containsKey(candidate)
                    && sumsOfSquaresHash.get(candidate) - i >= 2) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; ++i) {
            if (isSumOfConsecutiveSquares(i)) {
                System.out.println(i);
            }
        }
    }
}
Each run of the function performs at most 25 additions and 25 hash table lookups. No multiplications.
To use it efficiently to solve the problem, construct the 1-, 2-, and 3-digit palindromes (1-digit palindromes are easy: 1, 2, ..., 9; 2-digit by multiplying by 11: 11, 22, 33, ..., 99; 3-digit by the formula i*101 + j*10). Then check the palindromes with the function above and print them out if it returns true.
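Putting that last paragraph into code might look like this (a sketch; it assumes the isSumOfConsecutiveSquares method above):

public static void printPalindromicSquares() {
    for (int i = 1; i <= 9; ++i) {
        if (isSumOfConsecutiveSquares(i)) System.out.println(i);           // 1-digit: 1..9
        if (isSumOfConsecutiveSquares(i * 11)) System.out.println(i * 11); // 2-digit: 11, 22, ..., 99
        for (int j = 0; j <= 9; ++j) {
            int pal = i * 101 + j * 10;                                    // 3-digit: i j i
            if (isSumOfConsecutiveSquares(pal)) System.out.println(pal);
        }
    }
}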
public static boolean isSumOfSquares(int num) {
    int sum = 0;
    int lowerBound = 1;
    // largest square root that is less than num
    int upperBound = (int) Math.floor(Math.sqrt(num));
    while (lowerBound != upperBound) {
        sum = 0;
        for (int x = lowerBound; x < upperBound; x++) {
            sum += x * x;
        }
        if (sum != num) {
            lowerBound++;
        } else {
            return true;
        }
    }
    return false;
}
Perhaps I am missing the point, but considering N, for 1<=N<=1000 the most efficient way would be to solve the problem some way (perhaps brute force) and store the solutions in a switch.
switch (n) {
    case 5:
    case 13:
    ...
        return true;
    default:
        return false;
}
public static boolean validNumber(int num) {
    if (!isPalindrome(num))
        return false;
    int i = 1, j = 2, sum = 1*1 + 2*2;
    while (i < j)
        if (sum > num) {
            sum = sum - i*i; i = i + 1;
        } else if (sum < num) {
            j = j + 1; sum = sum + j*j;
        } else {
            return true;
        }
    return false;
}
However, there are only eleven "good numbers": {5, 55, 77, 181, 313, 434, 505, 545, 595, 636, 818}. And this grows very slowly: for N = 10^6, there are only 59.
The method needs to return the k elements a[i] such that ABS(a[i] - val) are the k largest evaluations. My code only works for integers greater than val; it fails for integers less than val. Can I do this without importing anything other than java.util.Arrays? Could somebody enlighten me? Any help will be much appreciated!
public static int[] farthestK(int[] a, int val, int k) { // This line should not change
    int[] b = new int[a.length];
    for (int i = 0; i < b.length; i++) {
        b[i] = Math.abs(a[i] - val);
    }
    Arrays.sort(b);
    int[] c = new int[k];
    int w = 0;
    for (int i = b.length - 1; i > b.length - k - 1; i--) {
        c[w] = b[i] + val;
        w++;
    }
    return c;
}
test case:
@Test
public void farthestKTest() {
    int[] a = {-2, 4, -6, 7, 8, 13, 15};
    int[] expected = {15, -6, 13, -2};
    int[] actual = Selector.farthestK(a, 4, 4);
    Assert.assertArrayEquals(expected, actual);
}
There was 1 failure:
1) farthestKTest(SelectorTest)
arrays first differed at element [1]; expected:<-6> but was:<14>
FAILURES!!!
Tests run: 1, Failures: 1
The top-k problem can be solved in many ways. In your case you add a new parameter, but it really doesn't matter.
The first and easiest way: just sort the array. Time complexity: O(n log n)
public static int[] farthestK(Integer[] a, final int val, int k) {
    Arrays.sort(a, new java.util.Comparator<Integer>() {
        @Override
        public int compare(Integer o1, Integer o2) {
            return -Math.abs(o1 - val) + Math.abs(o2 - val);
        }
    });
    int[] c = new int[k];
    for (int i = 0; i < k; i++) {
        c[i] = a[i];
    }
    return c;
}
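With the question's test data, farthestK(new Integer[] {-2, 4, -6, 7, 8, 13, 15}, 4, 4) sorts the elements by descending distance from 4 (distances 11, 10, 9, 6, ...) and returns {15, -6, 13, -2}, exactly the expected array. Note the Integer[] parameter: the int[] from the test case would need boxing first.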
The second way: use a heap to save the max k values, Time complexity: O(nlogk)
/**
 * Use a min heap to save the max k values. Time complexity: O(nlogk)
 */
public static int[] farthestKWithHeap(Integer[] a, final int val, int k) {
    PriorityQueue<Integer> minHeap = new PriorityQueue<Integer>(4,
            new java.util.Comparator<Integer>() {
                @Override
                public int compare(Integer o1, Integer o2) {
                    return Math.abs(o1 - val) - Math.abs(o2 - val);
                }
            });
    for (int i : a) {
        minHeap.add(i);
        if (minHeap.size() > k) {
            minHeap.poll();
        }
    }
    int[] c = new int[k];
    for (int i = 0; i < k; i++) {
        c[i] = minHeap.poll();
    }
    return c;
}
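One caveat: polling the min-heap returns the kept elements in ascending order of distance, so for the test data the result is {-2, 13, -6, 15} rather than {15, -6, 13, -2}. Reverse the output array if the farthest-first order matters.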
The third way: divide and conquer, just like quicksort. Partition the array into two parts, and find the kth in one of them. Time complexity: O(n + k log k)
The code is a little long, so I just provide a link here:
Selection problem.
Sorting the array will cost you O(n log n) time. You can do it in O(n) time using k-selection.
Compute an array B, where B[i] = abs(A[i] - val). Then your problem is equivalent to finding the k values farthest from zero in B. Since each B[i] >= 0, this is equivalent to finding the k largest elements in B.
Run k-selection on B looking for the (n - k)th element. See Quickselect on Wikipedia for an O(n) expected time algorithm.
After k-selection is complete, B[n - k] through B[n - 1] contain the largest elements in B. With proper bookkeeping, you can link back to the elements in A that correspond to them (see pseudocode below).
Time complexity: O(n) time for #1, O(n) time for #2, and O(k) time for #3 => a total time complexity of O(n). (Quickselect runs in O(n) expected time, and there exist complicated worst-case linear time selection algorithms).
Space complexity: O(n).
Pseudocode:
farthest_from(k, val, A):
    let n = A.length

    # Compute B. Elements are objects to
    # keep track of the original element in A.
    let B = array[0 .. n - 1]
    for i between 0 and n - 1:
        B[i] = {
            value: abs(A[i] - val)
            index: i
        }

    # k_selection should know to compare
    # elements in B by their "value";
    # e.g., each B[i] could be java.lang.Comparable.
    k_selection(n - k - 1, B)

    # Use the top elements in B to link back to A.
    # Return the result.
    let C = array[0 .. k - 1]
    for i between 0 and k - 1:
        C[i] = A[B[n - k + i].index]
    return C
You can modify this algorithm a little and use it for printing k elements according to your requirement (this is the only work you will need to do, with some changes in this algorithm).
Explore this link:
http://jakharmania.blogspot.in/2013/08/selection-of-kth-largest-element-java.html
This algo uses selection sort, so finding the kth largest element takes O(n*k) time, which is efficient when k is small.
O(n) algorithm, from Wikipedia entry on partial sorting:
Find the k-th smallest element using the linear time median-of-medians selection algorithm. Then make a linear pass to select the elements smaller than the k-th smallest element.
The collection in this case is created by taking the original array, subtracting the given value from each element, taking the absolute value, and then negating it so that the largest becomes the smallest.
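A compact sketch of that recipe, assuming k <= a.length (a Hoare-style quickselect over negated distances; the order inside the returned k elements is unspecified, so sort them afterwards if a particular order is required):

import java.util.Arrays;

public class FarthestKSelect {
    // Quickselect: after the call, the element of rank kth (0-based) is in place
    // and everything to its left has a smaller-or-equal key; vals is permuted in lockstep.
    private static void select(long[] keys, int[] vals, int lo, int hi, int kth) {
        while (lo < hi) {
            long pivot = keys[(lo + hi) >>> 1];
            int i = lo, j = hi;
            while (i <= j) {
                while (keys[i] < pivot) i++;
                while (keys[j] > pivot) j--;
                if (i <= j) {
                    long tk = keys[i]; keys[i] = keys[j]; keys[j] = tk;
                    int tv = vals[i]; vals[i] = vals[j]; vals[j] = tv;
                    i++; j--;
                }
            }
            if (kth <= j) hi = j;
            else if (kth >= i) lo = i;
            else return;
        }
    }

    public static int[] farthestK(int[] a, int val, int k) {
        long[] keys = new long[a.length];
        int[] vals = a.clone();
        for (int i = 0; i < a.length; i++) {
            keys[i] = -Math.abs((long) a[i] - val); // negate so farthest = smallest key
        }
        select(keys, vals, 0, a.length - 1, k - 1); // the k smallest keys end up in front
        return Arrays.copyOfRange(vals, 0, k);
    }
}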