I took this test to challenge myself while learning a bit of Java, and I got the worst possible result on the performance test: 0%.
This was the exercise:
You are given two non-empty zero-indexed arrays A and B consisting of
N integers. These arrays represent N planks. More precisely, A[K] is
the start and B[K] the end of the K−th plank.
Next, you are given a non-empty zero-indexed array C consisting of M
integers. This array represents M nails. More precisely, C[I] is the
position where you can hammer in the I−th nail.
We say that a plank (A[K], B[K]) is nailed if there exists a nail C[I]
such that A[K] ≤ C[I] ≤ B[K].
The goal is to find the minimum number of nails that must be used
until all the planks are nailed. In other words, you should find a
value J such that all planks will be nailed after using only the first
J nails. More precisely, for every plank (A[K], B[K]) such that 0 ≤ K
< N, there should exist a nail C[I] such that I < J and A[K] ≤ C[I] ≤
B[K].
For example, given arrays A, B such that:
A[0] = 1 B[0] = 4
A[1] = 4 B[1] = 5
A[2] = 5 B[2] = 9
A[3] = 8 B[3] = 10
four planks are represented: [1, 4], [4, 5], [5, 9] and [8, 10].
Given array C such that:
C[0] = 4
C[1] = 6
C[2] = 7
C[3] = 10
C[4] = 2
if we use the following nails:
0, then planks [1, 4] and [4, 5] will both be nailed.
0, 1, then planks [1, 4], [4, 5] and [5, 9] will be nailed.
0, 1, 2, then planks [1, 4], [4, 5] and [5, 9] will be nailed.
0, 1, 2, 3, then all the planks will be nailed.
Thus, four is the minimum number of nails that, used sequentially, allow all the planks to be nailed.
Write a function:
class Solution { public int solution(int[] A, int[] B, int[] C); }
that, given two non-empty zero-indexed arrays A and B consisting of N
integers and a non-empty zero-indexed array C consisting of M
integers, returns the minimum number of nails that, used sequentially,
allow all the planks to be nailed.
If it is not possible to nail all the planks, the function should
return −1.
For example, given arrays A, B, C such that:
A[0] = 1 B[0] = 4
A[1] = 4 B[1] = 5
A[2] = 5 B[2] = 9
A[3] = 8 B[3] = 10
C[0] = 4
C[1] = 6
C[2] = 7
C[3] = 10
C[4] = 2
the function should return 4, as explained above.
Assume that:
N and M are integers within the range [1..30,000];
each element of arrays A, B, C is an integer within the range [1..2*M];
A[K] ≤ B[K].
Complexity:
expected worst-case time complexity is O((N+M)*log(M));
expected worst-case space complexity is O(M), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
Here is my solution:
class Solution {
    public int solution(int[] A, int[] B, int[] C) {
        int result = 0;
        int empties = 0;
        for (int i = 0; i < C.length; i++) {
            for (int j = 0; j < A.length; j++) {
                if (A[j] != 0) {
                    if (C[i] >= A[j] && C[i] <= B[j]) {
                        A[j] = B[j] = 0;
                        empties++;
                    }
                }
                if (empties == A.length) {
                    return i + 1;
                }
            }
        }
        return -1;
    }
}
This is the link of the result: https://codility.com/demo/results/trainingXXEXMW-KVJ/
Questions:
First, I don't understand why my performance is measured as O((N + M) * N) and not O(M * N), since I'm doing a for over M with a for over N inside it. Disclaimer: I only learned about Big O notation a couple of days ago.
Second, most likely the performance was bad because I didn't use a binary search to find the nailable elements; instead I looped through them.
However, I did that on purpose, since nowhere in the exercise is it mentioned that the A and B arrays are sorted (e.g. that A[K] ≤ A[K+1]). And if I sorted those arrays first, then the performance would be bad too (I guess; I honestly have no idea how much the sort hurts performance, just a guesstimate).
What is your opinion about it?
I don't understand why my performance is measured O((N + M) * N) and not O(M * N)
They are probably doing curve fitting against a limited number of curves. (This is the problem with trying to determine complexity empirically.)
And if I sorted those arrays, then the performance would be bad (I guess, no idea how much the sort hurts the performance honestly, just a guesstimate).
Actually, sorting will be O(N log N) if done with a good algorithm. So from a complexity perspective you could achieve the O(N log N) overall worst case by sorting and then doing binary searches. (I'm not saying it is the right solution, though ....)
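For reference, here is a sketch of the kind of O((N+M)*log(M)) solution the scoring seems to expect: binary-search the answer J, and check each candidate in O(N+M) with a prefix count of the first J nail positions (positions are bounded by 2*M per the assumptions). This is my own illustration, not the asker's code or an official solution:

class Solution {
    public int solution(int[] A, int[] B, int[] C) {
        int lo = 1, hi = C.length, result = -1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (allNailed(A, B, C, mid)) {
                result = mid;   // mid nails suffice; try fewer
                hi = mid - 1;
            } else {
                lo = mid + 1;   // need more nails
            }
        }
        return result;
    }

    // Do the first j nails nail every plank? O(N + M) per call.
    private boolean allNailed(int[] A, int[] B, int[] C, int j) {
        int maxPos = 2 * C.length;               // positions are in [1..2*M]
        int[] prefix = new int[maxPos + 1];      // prefix[p] = #nails among the first j at positions <= p
        for (int i = 0; i < j; i++) prefix[C[i]]++;
        for (int p = 1; p <= maxPos; p++) prefix[p] += prefix[p - 1];
        for (int k = 0; k < A.length; k++) {
            if (prefix[B[k]] - prefix[A[k] - 1] == 0) return false;  // no nail in [A[k], B[k]]
        }
        return true;
    }
}

With log(M) candidate values of J and an O(N+M) check per candidate, the total is O((N+M)*log(M)), matching the expected complexity.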
A non-empty array A consisting of N integers is given.
A peak is an array element which is larger than its neighbors. More precisely, it is an index P such that 0 < P < N − 1, A[P − 1] < A[P] and A[P] > A[P + 1].
For example, the following array A:
A[0] = 1
A[1] = 2
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
has exactly three peaks: 3, 5, 10.
We want to divide this array into blocks containing the same number of elements. More precisely, we want to choose a number K that will yield the following blocks:
A[0], A[1], ..., A[K − 1],
A[K], A[K + 1], ..., A[2K − 1],
...
A[N − K], A[N − K + 1], ..., A[N − 1].
What's more, every block should contain at least one peak. Notice that extreme elements of the blocks (for example A[K − 1] or A[K]) can also be peaks, but only if they have both neighbors (including one in an adjacent block).
The goal is to find the maximum number of blocks into which the array A can be divided.
Array A can be divided into blocks as follows:
one block (1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2). This block contains three peaks.
two blocks (1, 2, 3, 4, 3, 4) and (1, 2, 3, 4, 6, 2). Every block has a peak.
three blocks (1, 2, 3, 4), (3, 4, 1, 2), (3, 4, 6, 2). Every block has a peak. Notice in particular that the first block (1, 2, 3, 4) has a peak at A[3], because A[2] < A[3] > A[4], even though A[4] is in the adjacent block.
However, array A cannot be divided into four blocks, (1, 2, 3), (4, 3, 4), (1, 2, 3) and (4, 6, 2), because the (1, 2, 3) blocks do not contain a peak. Notice in particular that the (4, 3, 4) block contains two peaks: A[3] and A[5].
The maximum number of blocks that array A can be divided into is three.
Write a function:
class Solution { public int solution(int[] A); }
that, given a non-empty array A consisting of N integers, returns the maximum number of blocks into which A can be divided.
If A cannot be divided into some number of blocks, the function should return 0.
For example, given:
A[0] = 1
A[1] = 2
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
the function should return 3, as explained above.
Write an efficient algorithm for the following assumptions:
N is an integer within the range [1..100,000];
each element of array A is an integer within the range [0..1,000,000,000].
My Understanding of the problem:
Each sub array should contain at least one peak.
An element which forms a peak can be in an adjacent sub array.
Return the max possible number of sub arrays.
My Question
Consider Main Array: [0,1,0,1,0]
Possible sub arrays as per understanding: [0,1] [0,1,0]
Each subarray has a peak.
Subarray 1 [0,1] has its peak element shared with the adjacent array [0,1,0].
Subarray 2 [0,1,0] contains the peak 0<1>0.
So the max possible number of sub arrays is 2, but a test case in Codility returns the max possible number of sub arrays as 1.
Below is my code
// you can also use imports, for example:
import java.util.*;

// you can write to stdout for debugging purposes, e.g.
// System.out.println("this is a debug message");

class Solution {
    public int solution(int[] A) {
        // write your code in Java SE 8
        int count = 0, size = A.length;
        if (size < 2)
            return 0;
        System.out.println(Arrays.toString(A));
        for (int i = 1; i < size - 1; i++) {
            if (A[i - 1] < A[i] && A[i] > A[i + 1]) {
                count++;
                i++;
            }
        }
        return count;
    }
}
Test case which failed in Codility: https://app.codility.com/demo/results/training5KP2PK-P4M/
I believe there is a gap in my understanding. Any help would be helpful :)
https://github.com/niall-oc/things/blob/master/codility/peaks.py
Breaking an array into equal parts is another way of factorizing the length of the array.
Array of length 12:
[0,1,0,1,0,0,0,1,0,0,1,0]
Factors of 12:
The square root of 12 is 3.464..., so start with 3, iterate down to 1, and divide each number into 12. This gives you the set of divisors {1, 2, 3, 4, 6, 12}.
Process
Because of how a peak is defined you cannot have 12 peaks in this array, so remove 12 from the set. Starting with d as the largest remaining divisor, divide the array into d parts and check that each part has a peak. If so, then d is the maximum number of equal parts that all contain at least one peak. If not, iterate to the next largest divisor and try that until you find a solution.
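A minimal Java sketch of that process (my own translation, not the linked code; it assumes a prefix count of peaks so each block check is O(1)):

class Solution {
    public int solution(int[] A) {
        int n = A.length;
        // peaksUpTo[i] = number of peaks at indices <= i
        int[] peaksUpTo = new int[n];
        for (int i = 1; i < n - 1; i++) {
            peaksUpTo[i] = peaksUpTo[i - 1] + ((A[i - 1] < A[i] && A[i] > A[i + 1]) ? 1 : 0);
        }
        if (n > 1) peaksUpTo[n - 1] = peaksUpTo[n - 2];
        // try block counts from largest to smallest; the first that works is the answer
        for (int blocks = n; blocks >= 1; blocks--) {
            if (n % blocks != 0) continue;      // block count must divide n
            int size = n / blocks;
            boolean ok = true;
            for (int b = 0; b < blocks && ok; b++) {
                int lo = b * size, hi = lo + size - 1;
                int inBlock = peaksUpTo[hi] - (lo > 0 ? peaksUpTo[lo - 1] : 0);
                if (inBlock == 0) ok = false;   // this block has no peak
            }
            if (ok) return blocks;
        }
        return 0;   // no peaks at all
    }
}

For the example array with peaks at 3, 5 and 10 this tries 12, 6, 4 and 3 blocks and returns 3.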
First of all, you should be congratulated for writing a concise program to calculate the number of peaks. But the question is not to count the peaks.
It is to find the maximum number of equal-sized blocks, each having at least one peak. And a peak cannot be the first or last element, as stated in the problem: 0 < P < N − 1.
Now quoting your question:
Consider Main Array: [0,1,0,1,0]
Possible sub arrays as per understanding: [0,1] [0,1,0]
Each subarray has a peak.
Subarray 1 [0,1] has its peak element shared with the adjacent array [0,1,0].
Subarray 2 [0,1,0] contains the peak 0<1>0.
So the max possible number of sub arrays is 2, but a test case in Codility returns the max possible number of sub arrays as 1.
I see two issues:
your sub array sizes are not equal
the array [0,1] does not have a peak (its 1 is the last element, so it lacks a right neighbor)
So the array cannot be divided into equal parts each having a peak, and hence only one block remains: [0,1,0,1,0].
I was trying to solve a problem from Codility that comes with a given solution. The problem is provided below:
You are given N counters, initially set to 0, and you have two possible operations on them:
increase(X) − counter X is increased by 1,
max counter − all counters are set to the maximum value of any counter.
A non-empty array A of M integers is given. This array represents consecutive operations:
if A[K] = X, such that 1 ≤ X ≤ N, then operation K is increase(X),
if A[K] = N + 1 then operation K is max counter.
For example, given integer N = 5 and array A such that:
A[0] = 3
A[1] = 4
A[2] = 4
A[3] = 6
A[4] = 1
A[5] = 4
A[6] = 4
the values of the counters after each consecutive operation will be:
(0, 0, 1, 0, 0)
(0, 0, 1, 1, 0)
(0, 0, 1, 2, 0)
(2, 2, 2, 2, 2)
(3, 2, 2, 2, 2)
(3, 2, 2, 3, 2)
(3, 2, 2, 4, 2)
The goal is to calculate the value of every counter after all operations.
Write a function:
class Solution { public int[] solution(int N, int[] A); }
that, given an integer N and a non-empty array A consisting of M integers, returns a sequence of integers representing the values of the counters.
The sequence should be returned as:
a structure Results (in C), or
a vector of integers (in C++), or
a record Results (in Pascal), or
an array of integers (in any other programming language).
For example, given:
A[0] = 3
A[1] = 4
A[2] = 4
A[3] = 6
A[4] = 1
A[5] = 4
A[6] = 4
the function should return [3, 2, 2, 4, 2], as explained above.
Assume that:
N and M are integers within the range [1..100,000];
each element of array A is an integer within the range [1..N + 1].
Complexity:
expected worst-case time complexity is O(N+M);
expected worst-case space complexity is O(N) (not counting the storage required for input arguments).
I have a solution provided:
public static int[] solution(int N, int[] A) {
    int[] counters = new int[N];
    int currMax = 0;  // highest counter value seen so far
    int currMin = 0;  // floor every counter must reach (pending lazy max-counter)
    for (int i = 0; i < A.length; i++) {
        if (A[i] <= N) {
            // lazily apply any pending max-counter operation before increasing
            counters[A[i] - 1] = Math.max(currMin, counters[A[i] - 1]);
            counters[A[i] - 1]++;
            currMax = Math.max(currMax, counters[A[i] - 1]);
        } else if (A[i] == N + 1) {
            // don't touch all N counters; just remember the floor they must reach
            currMin = currMax;
        }
    }
    // apply the pending floor to counters never touched after the last max-counter
    for (int i = 0; i < counters.length; i++) {
        counters[i] = Math.max(counters[i], currMin);
    }
    return counters;
}
It seems they use two variables to hold and update the min/max values and use them inside the algorithm. Obviously, there is a more direct way to solve the problem, i.e. increase a value by 1 or set all the values to the max as instructed, and I could do that; the drawback would be lower performance and increased time complexity.
However, I would like to understand what is going on here. I spent time debugging with the example array, but the algorithm is still a little confusing.
Does anyone understand it and can explain it to me briefly?
It is quite simple: they do a lazy update. You keep track at all times of the value of the counter that has the highest value (currMax). Then, when you get a command to set all counters to that max value, as that is too expensive, you just record in currMin the value that all counters had to be raised to.
So, when do you update a counter to that value? You do it lazily: you only update it when you get a command to increase that counter. So when you need to increase a counter, you first update it to the max of its old value and currMin. If this was the first update on this counter since an N + 1 command, the correct value it should have is actually currMin, and that will be higher than (or equal to) its old value. Once you have updated it, you add 1 to it. If another increase happens now, currMin doesn't actually matter, as the max will take its old value until another N + 1 command happens.
The second for loop accounts for counters that did not get an increase command after the last N + 1 command.
Note that there can be any number of N + 1 commands between two increase operations on a counter. It still follows that the value it should have is the max value at the time of the last N + 1 command; it doesn't really matter that we didn't update it before with the max values from previous N + 1 commands, we only care about the latest.
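A quick way to watch the lazy update in action (a harness I added; it assumes the method above is placed in a class named Solution):

public class MaxCountersDemo {
    public static void main(String[] args) {
        // N = 5 and the operations 3, 4, 4, 6, 1, 4, 4 from the problem statement.
        // The max counter operation (A[3] = 6) only sets currMin = 2; the later
        // increases lift counters 1 and 4 to that floor before adding 1, and the
        // final loop lifts the untouched counters 2, 3 and 5 to the floor.
        int[] result = Solution.solution(5, new int[]{3, 4, 4, 6, 1, 4, 4});
        System.out.println(java.util.Arrays.toString(result)); // prints [3, 2, 2, 4, 2]
    }
}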
For an A * B matrix of all distinct numbers from 1 to A * B, we first sort each column and then concatenate all columns in increasing order of indices to form an array of size A * B. Columns are numbered in increasing order from left to right.
For example, if matrix is
[1 5 6]
[3 2 4]
We first sort all columns to get
[1 2 4]
[3 5 6]
Now, we concatenate columns in increasing order of indices to get an array
[1, 3, 2, 5, 4, 6]
Given this final array, you have to count how many distinct initial matrices are possible. Return the required answer modulo 10^9+7.
Two matrices are distinct if:
- Either their dimensions are different.
- Or if any of the corresponding row in two matrices are different.
Example:
If input array is [1, 3, 2, 4], distinct initial matrices possible are:
[1 3 2 4]
============
[1 2]
[3 4]
=============
[1 4]
[3 2]
===========
[3 2]
[1 4]
===========
[3 4]
[1 2]
===========
that is, a total of 5 matrices.
Here is what I did:
I found the number of ways we can arrange values in every subarray of size len/2.
So if an array is [1,2,3,4], we have two subarrays, [1,2] and [3,4], so the answer will be 2!*2!. The thing is, we have to get unique rows as well; that's where my code failed.
Can you enlighten me in the right direction?
Here's my code:
public int cntMatrix(ArrayList<Integer> a) {
    if (a.size() == 1) {
        return 1;
    }
    int n = a.size();
    int len = n / 2;
    int i = 0;
    long ans = 1;
    if (n % 2 != 0) { // n is odd
        ans = fact(n); // factorial function
    } else {
        while (i < n) {
            int x = i;
            int y = i + len;
            HashMap<Integer, Integer> map = new HashMap<>(); // frequency of each element in subarray[x..y]
            for (int m = i; m < y; m++) {
                if (map.containsKey(a.get(m))) {
                    map.put(a.get(m), map.get(a.get(m)) + 1);
                } else {
                    map.put(a.get(m), 1);
                }
            }
            long p = fact(len);
            long q = 1;
            for (Map.Entry<Integer, Integer> set : map.entrySet()) {
                int key = set.getKey();
                int value = set.getValue();
                q *= fact(value);
            }
            ans *= p / q; // ncr
            map.clear();
            i += len;
        }
    }
    ans %= 1000000007;
    return ((int) ans + 1);
}
How to deal with unique rows
Asked on interviewbit.com
One thing that I noticed is that you check whether the length is odd or not.
This is not right: if, for example, the length is 9, you can arrange a 3x3 matrix that satisfies the conditions.
I think that you should try to "cut" the array into columns with sizes 1 to n and for each size check if it can be an initial matrix.
The complexity of my algorithm is O(n^2), though I feel like there is a better one.
This is my Python code:

import math

class Solution:
    # @param A : list of integers
    # @return an integer
    def cntMatrix(self, A):
        count = 0
        n = len(A)
        # i = number of rows
        for i in range(1, n + 1):
            if n % i == 0:
                can_cut = True
                start = 0
                while start < len(A) and can_cut:
                    prev = 0
                    for j in range(start, start + i):
                        if prev > A[j]:
                            can_cut = False
                        prev = A[j]
                    start += i
                if can_cut:
                    count = (count + pow(math.factorial(i), n // i)) % (pow(10, 9) + 7)
        return count
I didn't check it on their site because the question page can't be found anymore; I only saw it in the ninja test.
After running -
s = Solution()
print(s.cntMatrix([1, 2, 3, 1, 2, 3, 1, 2, 3]))
We get - 217 = 3! * 3! * 3! + 1
class Solution:
    # @param A : list of integers
    # @return an integer
    def cntMatrix(self, A):
        self.factCache = {}
        bucket = 1
        buckets = []
        while bucket <= len(A):
            if len(A) % bucket == 0:
                buckets.append(bucket)
            bucket += 1
        valid_buckets = []
        for bucket in buckets:
            counter = 1
            invalid = False
            for i in range(1, len(A)):
                if counter == bucket:
                    counter = 1
                    continue
                if A[i] > A[i - 1]:
                    counter += 1
                else:
                    invalid = True
                    break
            if not invalid:
                valid_buckets.append(bucket)
        combs = 0
        for bucket in valid_buckets:
            rows = bucket
            columns = int(len(A) / rows)
            combs += (self.fact(rows) ** columns)
        return combs % 1000000007

    def fact(self, number):
        if number == 0 or number == 1:
            return 1
        fact = 1
        for i in range(1, number + 1):
            fact = fact * i
        return fact
I have this algorithm, which takes an array as an argument and returns its maximum value.
find_max(as) :=
    max = as[0]
    for i = 1 ... len(as) {
        if max < as[i] then max = as[i]
    }
    return max
My question is: given that the array is initially in a (uniformly) random permutation and that all its elements are distinct, what's the expected number of times the max variable is updated (ignoring the initial assignment)?
For example, if as = [1, 3, 2], then the number of updates to max would be 1 (when reading the value 3).
Assume the original array contains the values 1, 2, ..., N.
Let X_i, i = 1..N be random variables that take the value 1 if i is, at some point during the algorithm, the maximum value.
Then the number of maximums the algorithm takes is the random variable: M = X_1 + X_2 + ... + X_N.
The average is (by definition) E(M) = E(X_1 + X_2 + ... + X_N). Using linearity of expectation, this is E(X_1) + E(X_2) + .. + E(X_N), which is prob(1 appears as a max) + prob(2 appears as a max) + ... + prob(N appears as a max) (since each X_i takes the value 0 or 1).
When does i appear as a maximum? It's when it appears first in the array amongst i, i+1, i+2, ..., N. The probability of this is 1/(N-i+1), since each of those numbers is equally likely to appear first.
So prob(i appears as a max) = 1/(N-i+1), and the overall expectation is 1/N + 1/(N-1) + ... + 1/3 + 1/2 + 1/1.
This is Harmonic(N), which is closely approximated by ln(N) + emc, where emc ~= 0.5772156649 is the Euler-Mascheroni constant.
Since in the problem you don't count the initial setting of the maximum to the first value as a step, the actual answer is Harmonic(N) - 1, or approximately ln(N) - 0.4227843351.
A quick check for some simple cases:
N=1, only one permutation, and no maximum updates. Harmonic(1) - 1 = 0.
N=2, permutations are [1, 2] and [2, 1]. The first updates the maximum once, the second zero times, so the average is 1/2. Harmonic(2) - 1 = 1/2.
N=3, permutations are [1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]. Maximum updates are 2, 1, 1, 1, 0, 0 respectively. Average is (2+1+1+1)/6 = 5/6. Harmonic(3) - 1 = 1/2 + 1/3 = 5/6.
So the theoretical answer looks good!
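If you want to confirm more values mechanically, a small brute force over all permutations (a sketch I put together; exact for small N) agrees with Harmonic(N) - 1:

public class MaxUpdates {
    static long totalUpdates = 0, perms = 0;

    // enumerate all permutations of a[k..] by swapping each candidate into slot k
    static void permute(int[] a, int k) {
        if (k == a.length) {
            int max = a[0], updates = 0;
            for (int i = 1; i < a.length; i++)
                if (a[i] > max) { max = a[i]; updates++; }
            totalUpdates += updates;
            perms++;
            return;
        }
        for (int i = k; i < a.length; i++) {
            int t = a[k]; a[k] = a[i]; a[i] = t;   // choose
            permute(a, k + 1);
            t = a[k]; a[k] = a[i]; a[i] = t;       // undo
        }
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 8; n++) {
            totalUpdates = 0; perms = 0;
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = i + 1;
            permute(a, 0);
            double harmonicMinus1 = 0;
            for (int k = 2; k <= n; k++) harmonicMinus1 += 1.0 / k;   // H(n) - 1
            System.out.printf("n=%d average=%.6f H(n)-1=%.6f%n",
                    n, (double) totalUpdates / perms, harmonicMinus1);
        }
    }
}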
Empirical Solution
A simulation of many different array sizes with multiple trials each can be performed and analyzed:
#include <iostream>
#include <fstream>
#include <cstdlib>

#define UPTO 10000
#define TRIALS 100

using namespace std;

int arr[UPTO];

int main(void){
    ofstream outfile("tabsep.txt");
    for (int i = 1; i < UPTO; i++) {
        int sum = 0;
        for (int iter = 0; iter < TRIALS; iter++) {
            for (int j = 0; j < i; j++) {
                arr[j] = rand();
            }
            int max = arr[0];
            int times_changed = 0;
            for (int j = 0; j < i; j++) {
                if (arr[j] > max) {
                    max = arr[j];
                    times_changed++;
                }
            }
            sum += times_changed;
        }
        double avg = (double)sum / TRIALS;  // use double so the average isn't truncated
        outfile << i << "\t" << avg << "\n";
        cout << "\r" << i;
    }
    outfile.close();
    cout << endl;
    return 0;
}
When I graphed these results, the growth appeared to be logarithmic (graph omitted).
I think it's safe to conclude that the expected number of updates is O(log n).
Theoretical solution:
Assume that the numbers are in the range 0...n.
You have a tentative maximum m.
The next maximum will be a random number in the range m+1...n, which averages out to (m+n)/2.
This means that each time you find a new maximum, you roughly halve the range of possible maximums.
Repeated halving is equivalent to a logarithm.
Therefore the number of times a new maximum is found is O(log n).
The worst-case scenario (which is often what is sought) is O(n): if the list is sorted in ascending order, every single element will result in an assignment.
HOWEVER, if the assignment is the most expensive operation, why don't you just store the index and only ever copy once, if at all? In that case, you will have exactly 1 assignment and n-1 comparisons.
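A sketch of that index-tracking variant (my illustration, in Java):

static int findMax(int[] as) {
    int maxIdx = 0;                            // index of the running maximum
    for (int i = 1; i < as.length; i++) {
        if (as[maxIdx] < as[i]) maxIdx = i;    // cheap index update, no element copy
    }
    return as[maxIdx];                         // the single element access at the end
}

The index is still updated the same expected Harmonic(N) - 1 times, but the (presumed expensive) copy of an element happens only once.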
Given a string of even size, say:
abcdef123456
How would I interleave the two halves, such that the same string would become this:
a1b2c3d4e5f6
I tried to develop an algorithm but couldn't. Would anybody give me some hints on how to proceed? I need to do this without creating extra string variables or arrays; one or two variables are fine.
I don't just want working code (or an algorithm); I need to develop an algorithm and prove its correctness mathematically.
You may be able to do it in O(N*log(N)) time:
Want: abcdefgh12345678 -> a1b2c3d4e5f6g7h8
a b c d e f g h
1 2 3 4 5 6 7 8
4 1-sized swaps:
a 1 c 3 e 5 g 7
b 2 d 4 f 6 h 8
a1 c3 e5 g7
b2 d4 f6 h8
2 2-sized swaps:
a1 b2 e5 f6
c3 d4 g7 h8
a1b2 e5f6
c3d4 g7h8
1 4-sized swap:
a1b2 c3d4
e5f6 g7h8
a1b2c3d4
e5f6g7h8
Implementation in C:
#include <stdio.h>
#include <string.h>

void swap(void* pa, void* pb, size_t sz)
{
    char *p1 = pa, *p2 = pb;
    while (sz--)
    {
        char tmp = *p1;
        *p1++ = *p2;
        *p2++ = tmp;
    }
}

void interleave(char* s, size_t len)
{
    size_t start, step, i, j;
    if (len <= 2)
        return;
    if (len & (len - 1))
        return; // only power of 2 lengths are supported
    for (start = 1, step = 2;
         step < len;
         start *= 2, step *= 2)
    {
        for (i = start, j = len / 2;
             i < len / 2;
             i += step, j += step)
        {
            swap(s + i,
                 s + j,
                 step / 2);
        }
    }
}

char testData[][64 + 1] =
{
    { "Aa" },
    { "ABab" },
    { "ABCDabcd" },
    { "ABCDEFGHabcdefgh" },
    { "ABCDEFGHIJKLMNOPabcdefghijklmnop" },
    { "ABCDEFGHIJKLMNOPQRSTUVWXYZ0<({[/abcdefghijklmnopqrstuvwxyz1>)}]\\" },
};

int main(void)
{
    unsigned i;
    for (i = 0; i < sizeof(testData) / sizeof(testData[0]); i++)
    {
        printf("%s -> ", testData[i]);
        interleave(testData[i], strlen(testData[i]));
        printf("%s\n", testData[i]);
    }
    return 0;
}
Output (ideone):
Aa -> Aa
ABab -> AaBb
ABCDabcd -> AaBbCcDd
ABCDEFGHabcdefgh -> AaBbCcDdEeFfGgHh
ABCDEFGHIJKLMNOPabcdefghijklmnop -> AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPp
ABCDEFGHIJKLMNOPQRSTUVWXYZ0<({[/abcdefghijklmnopqrstuvwxyz1>)}]\ -> AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz01<>(){}[]/\
Generically that problem is quite hard, and it reduces to finding permutation cycles. The number and length of those vary quite a lot depending on the length.
The first and last cycles are always degenerate; the 10-entry array has 2 cycles of lengths 6 and 2, and the 12-entry array has a single cycle of length 10.
Within a cycle one does:
for (i = j; (next = get_next(i)) != j; i = next) swap(i, next);
Even though the function get_next can be implemented as a relatively easy formula of N, the problem is postponed to the bookkeeping of which indices have already been swapped. In the case of 10 entries, one should [quickly] find the starting positions of the cycles (they are e.g. 1 and 3).
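For the out-shuffle in question, the closed form (a standard result I am adding, not part of the answer above) is: the element at index i moves to (2*i) mod (len-1), with indices 0 and len-1 fixed; this reproduces the cycle lengths quoted above. A Java sketch that sidesteps the bookkeeping problem with a visited array (so it illustrates the cycles but is not in-place; it uses O(N) extra bits):

static void interleaveByCycles(char[] s) {
    int n = s.length;                        // assumed even
    boolean[] visited = new boolean[n];      // the bookkeeping the answer warns about
    for (int start = 1; start < n - 1; start++) {
        if (visited[start]) continue;        // already moved as part of an earlier cycle
        int i = start;
        char carry = s[start];
        do {
            int next = (2 * i) % (n - 1);    // where the element at i belongs
            char tmp = s[next];
            s[next] = carry;                 // place the carried element
            carry = tmp;
            visited[next] = true;
            i = next;
        } while (i != start);
    }
}

On "abcd1234".toCharArray() this yields a1b2c3d4. Doing the same without the visited array is exactly the hard part.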
OK, let's start over. Here is what we are going to do (Python strings are immutable, so this version works on a list and joins at the end):

def interleave(string):
    s = list(string)
    i = len(s) // 2 - 1
    j = i + 1
    while i > 0:
        k = i
        while k < j:
            s[k], s[k+1] = s[k+1], s[k]  # swap the characters at k and k+1
            k += 2  # increment by 2 since we're swapping every OTHER character
        i -= 1  # move lower bound by one
        j += 1  # move upper bound by one
    return ''.join(s)
Here is an example of what the program is going to do. We are going to use the variables i, j, k: i and j will be the lower and upper bounds respectively, and k is the index at which we swap.
Example
`abcd1234`
i = 3 //got this from (length(string)/2) -1
j = 4 //this is really i+1 to begin with
k = 3 //k always starts off reset to whatever i is
swap d and 1
increment k by 2 (k = 3 + 2 = 5), since k > j we stop swapping
result `abc1d234` after the first swap
i = 3 - 1 //decrement i
j = 4 + 1 //increment j
k= 2 //reset k to i
swap c and 1, increment k (k = 2 + 2 = 4), we can swap again since k < j
swap d and 2, increment k (k = 4 + 2 = 6), k > j so we stop
//notice at EACH SWAP, the swap is occurring at index `k` and `k+1`
result `ab1c2d34`
i = 2 - 1
j = 5 + 1
k = 1
swap b and 1, increment k (k = 1 + 2 = 3), k < j so continue
swap c and 2, increment k (k = 3 + 2 = 5), k < j so continue
swap d and 3, increment k (k = 5 + 2 = 7), k > j so we're done
result `a1b2c3d4`
As for proving program correctness, see this link. It explains how to prove this is correct by means of a loop invariant.
A rough proof would be the following:
Initialization: prior to the first iteration of the loop, i is set to (length(string)/2) - 1, so i < length(string) before we enter the loop.
Maintenance: after each iteration, i is decremented (i-1, i-2, ...), so i < length(string) continues to hold.
Termination: since i is strictly decreasing, the loop condition i > 0 will eventually evaluate to false and the loop will exit.
The solution is in J. Ellis and M. Markov, "In-situ, stable merging by way of perfect shuffle", The Computer Journal, 43(1):40-53, 2000.
Also see the various discussions here:
https://cs.stackexchange.com/questions/332/in-place-algorithm-for-interleaving-an-array/400#400
https://cstheory.stackexchange.com/questions/13943/linear-time-in-place-riffle-shuffle-algorithm
Alright, here's a rough draft. You say you don't just want an algorithm, but you are taking hints, so consider this algorithm a hint:
Length is N.
k = N/2 - 1.
1) Start in the middle, and shift (by successive swapping of neighboring pair elements) the element at position N/2 k places to the left (1st time: '1' goes to position 1).
2) --k. Is k == 0? Quit.
3) Shift (by swapping) the element at position N/2 (1st time: 'f' goes to position N-1) k places to the right.
4) --k. Go back to 1).
Edit: The above algorithm is correct, as the code below shows. Actually proving that it's correct is waaay beyond my capabilities, fun little question though.
#include <iostream>
#include <string>
#include <algorithm>

int main(void)
{
    std::string s("abcdefghij1234567890");
    int N = s.size();
    int k = N/2 - 1;
    while (true)
    {
        for (int j = 0; j < k; ++j)
        {
            int i = N/2 - j;
            std::swap(s[i], s[i-1]);
        }
        --k;
        if (k == 0) break;
        for (int j = 0; j < k; ++j)
        {
            int i = N/2 + j;
            std::swap(s[i], s[i+1]);
        }
        --k;
    }
    std::cout << s << std::endl;
    return 0;
}
Here's an algorithm and working code. It is in place, O(N), and conceptually simple.
Walk through the first half of the array, swapping items into place.
Items that started in the left half will be swapped to the right
before we need them, so we use a trick to determine where they
were swapped to.
When we get to the midpoint, unscramble the unplaced left items that were swapped to the right.
A variation of the same trick is used to find the correct order for unscrambling.
Repeat for the remaining half array.
This goes through the array making no more than N+N/2 swaps, and requires no temporary storage.
The trick is to find the index of the swapped items. Left items are swapped into a swap space vacated by the Right items as they are placed. The swap space grows by the following sequence:
Add an item to the end (into the space vacated by a Right item).
Swap an item with the oldest existing (Left) item.
Adding items 1..N in order gives:
1 2 23 43 435 465 4657 ...
The index changed at each step is:
0 0 1 0 2 1 3 ...
This sequence is exactly OEIS A025480, and can be calculated in O(1) amortized time:
def next_index(n):
    while n & 1: n = n >> 1
    return n >> 1
Once we get to the midpoint after swapping N items, we need to unscramble. The swap space will contain N/2 items, where the actual index of the item that should be at offset i is given by next_index(N/2 + i). We can advance through the swap space, putting items back in place. The only complication is that as we advance, we may eventually find a source index that is to the left of the target index, and therefore has already been swapped somewhere else. But we can find out where it went by doing the previous index lookup again.
def unscramble(start, i):
    j = next_index(start + i)
    while j < i:
        j = next_index(start + j)
    return j
Note that this is only an indexing calculation, not data movement. In practice, the total number of calls to next_index is < 3N for all N.
That's all we need for the complete implementation:
def swap(a, i, j):  # helper not shown in the original post; minimal version assumed
    a[i], a[j] = a[j], a[i]

def larger_half(n):  # helper not shown in the original post; assumed to mean ceil(n/2)
    return (n + 1) // 2

def interleave(a, idx=0):
    if len(a) < 2:
        return
    midpt = len(a) // 2
    # the following line makes this an out-shuffle.
    # add a `not` to make an in-shuffle
    base = 1 if idx & 1 == 0 else 0
    for i in range(base, midpt):
        j = next_index(i - base)
        swap(a, i, midpt + j)
    for i in range(larger_half(midpt) - 1):
        j = unscramble((midpt - base) // 2, i)
        if i != j:
            swap(a, midpt + i, midpt + j)
    interleave(a[midpt:], idx + midpt)
The tail recursion at the end can easily be replaced by a loop; it's just less elegant with Python's array syntax. Also note that for this recursive version, the input must be a NumPy array instead of a Python list, because standard list slicing creates copies, so the swaps done in the recursive call would not propagate back up.
Here's a quick test to verify correctness. (8 perfect shuffles of a 52 card deck restore it to the original order).
import numpy

A = numpy.arange(52)
B = A.copy()
C = numpy.empty(52)
for _ in range(8):
    # manual interleave
    C[0::2] = numpy.array(A[:26])
    C[1::2] = numpy.array(A[26:])
    # our interleave
    interleave(A)
    print(A)
    assert numpy.array_equal(A, C)
assert numpy.array_equal(A, B)