Apply a Hadamard gate to n qubits - Java

First of all, sorry for the long text; I tried to explain my problem / misunderstanding as well as possible.
For my student project I have to implement a simulation of a simple quantum computer. What I am trying to understand right now is how different gates get applied to n qubits, bit by bit.
For example one qubit gets represented by two complex numbers (a1, a2) :
a1 |0> + a2 |1>
Where a1 and a2 are the amplitudes - their squared magnitudes are the probabilities of measuring the corresponding value. The squared magnitudes of all amplitudes must always sum to 1.
So I added a Hadamard gate, represented by its 2x2 matrix:
public void Hadamard() {
    // H = 1/sqrt(2) * [[1, 1], [1, -1]]
    gate.entries[0][0] = new ComplexNumber(1, 0);
    gate.entries[0][1] = new ComplexNumber(1, 0);
    gate.entries[1][0] = new ComplexNumber(1, 0);
    gate.entries[1][1] = new ComplexNumber(-1, 0);
    gate = Matrix.scalarMultiplication(gate, Math.pow(2, -0.5));
}
Now I would do a matrix multiplication of the Hadamard gate with the vector (a1, a2).
So I set up a register as a two-dimensional array of complex numbers representing the states of the qubits:
Register register = new Register(1);
Where the number represents the number of qubits. We only create one row holding all our states, and the column index identifies the state. So e.g.
[0][0] = |0> and [0][1] = |1>
If we say that a1 = 1+0i and a2 = 0+0i, the multiplication would look like this:
cmplx1 = cmplxMultiplicate(gate.entries[0][0],a1);
cmplx2 = cmplxMultiplicate(gate.entries[0][1],a2);
cmplx3 = cmplxMultiplicate(gate.entries[1][0],a1);
cmplx4 = cmplxMultiplicate(gate.entries[1][1],a2);
register.entries[0][0] = cmplxAddition(cmplx1,cmplx2); // 0.70710678118
register.entries[0][1] = cmplxAddition(cmplx3,cmplx4); // 0.70710678118
Now comes the question - I have no idea how to do this if we have more than one qubit. For example, with two qubits I would have
a1 |00> + a2 |01> + a3 |10> + a4 |11>
Four different states (or 2^(numberOfQubits) states for any given number). But how could I now apply all 4 states to my Hadamard gate? Do I have to compute all possible combinations, where I multiply a1 with every value, then a2, etc.? Like this:
cmplx1 = cmplxMultiplicate(gate.entries[0][0],a1);
cmplx2 = cmplxMultiplicate(gate.entries[0][1],a2);
cmplx3 = cmplxMultiplicate(gate.entries[1][0],a1);
cmplx4 = cmplxMultiplicate(gate.entries[1][1],a2);
cmplx1 = cmplxMultiplicate(gate.entries[0][0],a1);
cmplx2 = cmplxMultiplicate(gate.entries[0][1],a3);
cmplx3 = cmplxMultiplicate(gate.entries[1][0],a1);
cmplx4 = cmplxMultiplicate(gate.entries[1][1],a3);
I am really clueless about this, and I think there is a fundamental misunderstanding on my side that makes things so complicated for me.
Any help leading me onto the right track would be really appreciated.
Thank you very much.

Note that https://en.wikipedia.org/wiki/Hadamard_transform#Quantum_computing_applications writes:
It is useful to note that computing the quantum Hadamard transform is simply the application of a Hadamard gate to each qubit individually because of the tensor product structure of the Hadamard transform.
So it would make sense to just model a single gate, and instantiate that a number of times.
But how could I now apply all 4 states to my Hadamard gate?
The gate would get applied to all 4 states of your 2-qubit register. It would operate on pairs of coefficients, namely those which differ only in a single bit, at the bit position to which the gate gets applied.
If you want to go for the larger picture, apply the Hadamard operation first to the left qubit
((|00〉 + |10〉) 〈00| + (|00〉 − |10〉) 〈10| + (|01〉 + |11〉) 〈01| + (|01〉 − |11〉) 〈11|) / sqrt(2)
and then to the right qubit
((|00〉 + |01〉) 〈00| + (|00〉 − |01〉) 〈01| + (|10〉 + |11〉) 〈10| + (|10〉 − |11〉) 〈11|) / sqrt(2)
Writing this as matrices in your order of coefficients (the first step as the right matrix, the second step as the left matrix), you get
    ⎛ 1  1  0  0 ⎞   ⎛ 1  0  1  0 ⎞       ⎛ 1  1  1  1 ⎞
    ⎜ 1 -1  0  0 ⎟   ⎜ 0  1  0  1 ⎟       ⎜ 1 -1  1 -1 ⎟
½ · ⎜ 0  0  1  1 ⎟ · ⎜ 1  0 -1  0 ⎟ = ½ · ⎜ 1  1 -1 -1 ⎟
    ⎝ 0  0  1 -1 ⎠   ⎝ 0  1  0 -1 ⎠       ⎝ 1 -1 -1  1 ⎠
If you wanted, you could encode the product matrix in your notation. But I'd rather find a way to model applying a quantum gate operation to a subset of the qubits in your register while passing the other qubits through unmodified. This could be done by expanding the matrix, as I did above to go from the conventional 2×2 to the 4×4 I used. Or it could be done in the way you evaluate the matrix-times-vector product, in order to make better use of the sparse nature of these matrices.
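To make the "pairs of coefficients" idea concrete, here is a minimal sketch (not your exact classes; it assumes the cmplxMultiplicate and cmplxAddition helpers from your question, and a register stored as a flat array of 2^n amplitudes indexed by the basis state) of applying a 2x2 gate to one qubit of an n-qubit register:

// Sketch: apply a 2x2 gate to qubit `target` of an n-qubit register whose
// 2^n amplitudes are stored in a flat array, indexed by the basis state.
static ComplexNumber[] applySingleQubitGate(ComplexNumber[][] gate,
                                            ComplexNumber[] amplitudes,
                                            int target) {
    ComplexNumber[] result = amplitudes.clone();
    int mask = 1 << target;
    for (int i = 0; i < amplitudes.length; i++) {
        if ((i & mask) == 0) {          // i is the index with target bit = 0
            int j = i | mask;           // j is its partner with target bit = 1
            // new a_i = g00*a_i + g01*a_j,  new a_j = g10*a_i + g11*a_j
            result[i] = cmplxAddition(cmplxMultiplicate(gate[0][0], amplitudes[i]),
                                      cmplxMultiplicate(gate[0][1], amplitudes[j]));
            result[j] = cmplxAddition(cmplxMultiplicate(gate[1][0], amplitudes[i]),
                                      cmplxMultiplicate(gate[1][1], amplitudes[j]));
        }
    }
    return result;
}

Calling this once per qubit (target = 0 … n-1) with the Hadamard matrix then produces the n-qubit Hadamard transform quoted from Wikipedia above, without ever building the full 2^n × 2^n matrix.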
Looking at your code, I'm somewhat worried by the two indices in your register.entries[0][0]. If the first index is supposed to be the index of the qubit and the second the value of that qubit, then the representation is unfit to model entangled situations.

Related

How do I create a method to generate all possible boolean functions?

I have used Java to create a linked list of 30 nodes. Each node is assigned a random boolean value when instantiated.
I want each node to be assigned its own random boolean method/function/rule that takes three boolean arguments and returns the result:
boolean assignedBolMethod(boolean a, boolean b, boolean c) {
    boolean answer = <apply the assigned rule>;
    return answer;
}
I understand there are 256 such rules to choose from (2^(2^3)); how could I generate all 256 possible rules without typing them out manually?
Let's say '0' when we mean false and '1' when we mean true, because that makes this a lot easier to read:
There are 8 different possible inputs (000, 001, 010, 011, 100, 101, 110, and 111), and for each input, there are 2 possible answers: 0 or 1.
Let's define a 'rule' as follows: We always list all the inputs in that exact order, and then we list the rule's answer to each input as a 1 or a 0. Thus, 00001111 is the rule that says '000 = 0', '001 = 0', '010 = 0', '011 = 0', '100 = 1', etcetera - in other words, the rule is: return a;, if you were to put it in code.
It is then obvious that there are indeed 256 rules (2 ^ 8), and you can represent each rule as a single byte, as bytes consist of 8 bits. Every existing byte represents one rule. Thus 'a rule' and 'a byte' are completely interchangeable, thus, this boils down to: How do I generate an arbitrary byte.
And that's easy:
Random r = new Random(); // do this once someplace
byte rule = (byte) r.nextInt(256); // java.util.Random has no nextByte(), so cast a random int in 0..255
Alternatively if you want an ordered list of every possible rule:
byte[] rules = new byte[256];
for (int i = 0; i < rules.length; i++) rules[i] = (byte) i;
But this array is mostly meaningless; it effectively maps '100' to '100' - not very useful. There is no actual need to have a 'list' of all possible rules: Java already ships with it: byte - that is a data type that exactly matches. Thus, if you have some code and you want 'rule 100' to be applied, all you need to write is byte rule = 100; - no need for a list.
Given a byte that represents a rule, plus those 3 inputs, how do you determine the answer the rule indicates is correct?
Well, first you need to collapse those 3 booleans you have into which one of the 8 bits in your byte represents the answer.
int bitPos = (a ? 1 : 0) + (b ? 2 : 0) + (c ? 4 : 0);
This gives you the position of the bit (a number between 0 and 7) that determines the answer.
Then, given a bit position and a byte:
boolean answer = ((rule >> bitPos) & 1) != 0;
Breaking that down:
a >> b will take the bitstring of a (let's say it's rule 00110111), and shift it to the right by b spots. So if we want the bit at bitPos = 2 (so, the third bit), 0b00110111 >> 2 is 0b00001101. This means the bit we are interested in is now at the very end.
a & b will take the bitstring of a, and the bitstring of b, and checks for all positions where both a and b have a 1. Then, it returns a new number represented by setting each bit to 1 where both a and b have a one. Therefore, a & 1 has the effect of zeroing out all bits, except the lowest bit (1 = 00000001 - all bits unset except the lowest bit). It gets rid of all bits, except the bit we care about.
!= 0 then just checks if that bit was set or not.
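Putting the pieces together, one way a node could store and evaluate its rule might look like this (a sketch; the class and field names are illustrative, not from your code):

import java.util.Random;

class RuleNode {
    private final byte rule;                      // the assigned rule, one of 256

    RuleNode(Random r) {
        this.rule = (byte) r.nextInt(256);        // pick a random rule once, at instantiation
    }

    boolean assignedBolMethod(boolean a, boolean b, boolean c) {
        int bitPos = (a ? 1 : 0) + (b ? 2 : 0) + (c ? 4 : 0);  // which of the 8 answers we need
        return ((rule >> bitPos) & 1) != 0;       // look up that answer bit in the rule byte
    }
}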

Multiply numbers represented as Linked List

I've been trying to solve this problem for a while, now, but none of my approaches have worked so far.
You're given two linked lists that represent big numbers, where the head of each list represents the least significant digit.
Return a new list which stores the result of the multiplication of the two lists.
I've tried an algorithm that worked for the first list being a single number, but not more than that.
1. Initialize a new list.
2. Initialize a carry.
3. While node 1 is not null:
4.   While node 2 is not null:
5.     If node 3's next is not null, add its value to the carry.
6.     Set node 3's next as node 1 * node 2's value + carry.
7.     Set node 2 as its next; set node 3 as its next.
8.   End the while loop from step 4.
9.   Set node 1 as node 1's next.
10. End the while loop from step 3.
11. Return the list created at step 1.
This obviously has problems. I've also tried to set a "powCounter" for each iteration of the first loop, and multiply the value by 10 to the power of powCounter.
But this didn't work either.
I'd really appreciate any help!
Do it like you would on paper.
Presumably, before tasking you to write multiply(a, b), they would have already had you write add(a, b), right? So use that.
You said you already wrote logic for multiplying by a single digit, so let's call that multiplySingle(a, digit).
You need one more helper method, e.g. shiftLeft(a, n), which adds n 0's to the end of the number, i.e. to the beginning of the list. E.g. shiftLeft([4,3,2,1], 2) should return [0,0,4,3,2,1], meaning 1234 * 10² = 123400.
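For illustration, with digits stored least significant first in a java.util.List<Integer> (an assumption about your representation; adapt to your own node class as needed), shiftLeft could be sketched like this:

import java.util.ArrayList;
import java.util.List;

// Sketch: multiply by 10^n by prepending n zero digits
// (the head of the list is the least significant digit).
static List<Integer> shiftLeft(List<Integer> a, int n) {
    List<Integer> result = new ArrayList<>();
    for (int i = 0; i < n; i++) {
        result.add(0);
    }
    result.addAll(a);
    return result;
}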
So, on paper you would multiply 123 with 456 like this:
123 * 456
45600 = 1 * 456 * 100 = shiftLeft(multiplySingle([6,5,4], 1), 2)
9120 = 2 * 456 * 10 = shiftLeft(multiplySingle([6,5,4], 2), 1)
1368 = 3 * 456 * 1 = shiftLeft(multiplySingle([6,5,4], 3), 0)
=====
56088 = 45600 + 9120 + 1368 = add(add([0,0,6,5,4], [0,2,1,9]), [8,6,3,1])
Good luck writing the code for that.
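If it helps, the multiply itself could then be outlined like this (a sketch only; it assumes the add, multiplySingle and shiftLeft helpers described above and the same List<Integer> representation):

import java.util.ArrayList;
import java.util.List;

static List<Integer> multiply(List<Integer> a, List<Integer> b) {
    List<Integer> result = new ArrayList<>(List.of(0));  // running total, starts at 0
    for (int i = 0; i < a.size(); i++) {
        // digit i of a has place value 10^i, so shift that partial product left by i
        List<Integer> partial = shiftLeft(multiplySingle(b, a.get(i)), i);
        result = add(result, partial);
    }
    return result;
}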
FYI: The idea for a shiftLeft() method is based on similar methods in the built-in BigInteger and BigDecimal classes.
BigInteger shiftLeft(int n)
Returns a BigInteger whose value is (this << n). The shift distance, n, may be negative, in which case this method performs a right shift. (Computes floor(this * 2ⁿ).)
BigDecimal movePointRight(int n)
Returns a BigDecimal which is equivalent to this one with the decimal point moved n places to the right. If n is non-negative, the call merely subtracts n from the scale. If n is negative, the call is equivalent to movePointLeft(-n). The BigDecimal returned by this call has value (this × 10ⁿ) and scale max(this.scale()-n, 0).

Find a matrix which satisfies certain constraints

Another description of the problem: Compute a matrix which satisfies certain constraints
Given a function whose only argument is a 4x4 matrix (int[4][4] matrix), determine the maximal possible output (return value) of that function.
The 4x4 matrix must satisfy the following constraints:
All entries are integers between -10 and 10 (inclusively).
It must be symmetric: entry(x,y) = entry(y,x).
Diagonal entries must be positive, entry(x,x) > 0.
The sum of all 16 entries must be 0.
The function must only sum up values of the matrix, nothing fancy.
My question:
Given such a function which sums up certain values of a matrix (matrix satisfies above constraints), how do I find the maximal possible output/return value of that function?
For example:
/* The function sums up certain values of the matrix;
   a value can be summed multiple times or not at all. */
// For this example I arbitrarily chose the values at (0,0), (1,2), (0,3), (1,1).
int exampleFunction(int[][] matrix) {
    int a = matrix[0][0];
    int b = matrix[1][2];
    int c = matrix[0][3];
    int d = matrix[1][1];
    return a + b + c + d;
}
/* The result (max output of the above function) is 40,
it can be achieved by the following matrix: */
      0.   1.   2.   3.
0.   10  -10  -10   10
1.  -10   10   10  -10
2.  -10   10    1   -1
3.   10  -10   -1    1
// Another example:
// for this example I arbitrarily chose values at (0,3), (0,1), (0,1), (0,3), ...
int exampleFunction2(int[][] matrix) {
    int a = matrix[0][3] + matrix[0][1] + matrix[0][1];
    int b = matrix[0][3] + matrix[0][3] + matrix[0][2];
    int c = matrix[1][2] + matrix[2][1] + matrix[3][1];
    int d = matrix[1][3] + matrix[2][3] + matrix[3][2];
    return a + b + c + d;
}
/* The result (max output of the above function) is -4, it can be achieved by
the following matrix: */
      0.   1.   2.   3.
0.    1   10   10  -10
1.   10    1   -1  -10
2.   10   -1    1   -1
3.  -10  -10   -1    1
I don't know where to start. Currently I'm trying to estimate the number of 4x4 matrices which satisfy the constraints; if the number is small enough, the problem could be solved by brute force.
Is there a more general approach?
Can the solution to this problem be generalized such that it can be easily adapted to arbitrary functions on the given matrix and arbitrary constraints for the matrix?
You can try to solve this using linear programming techniques.
The idea is to express the problem as some inequalities, some equalities, and a linear objective function and then call a library to optimize the result.
Python code:
import scipy.optimize as opt

c = [0] * 16  # objective coefficients, one per matrix cell

def use(y, x):
    # each use of cell (y, x) contributes to the objective
    # (negated, because linprog minimizes)
    c[y*4 + x] -= 1

if 0:   # first example function
    use(0, 0)
    use(1, 2)
    use(0, 3)
    use(1, 1)
else:   # second example function
    use(0, 3)
    use(0, 1)
    use(0, 1)
    use(0, 3)
    use(0, 3)
    use(0, 2)
    use(1, 2)
    use(2, 1)
    use(3, 1)
    use(1, 3)
    use(2, 3)
    use(3, 2)

# entries between -10 and 10, diagonal entries between 1 and 10
bounds = [[-10, 10] for i in range(4*4)]
for i in range(4):
    bounds[i*4 + i] = [1, 10]

# equality constraints: all 16 entries sum to 0, and entry(x,y) == entry(y,x)
A_eq = [[1] * 16]
b_eq = [0]
for x in range(4):
    for y in range(x+1, 4):
        D = [0] * 16
        D[x*4 + y] = 1
        D[y*4 + x] = -1
        A_eq.append(D)
        b_eq.append(0)

r = opt.linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
for y in range(4):
    print(r.x[4*y:4*y + 4])
print(-r.fun)
This prints:
[ 1. 10. -10. 10.]
[ 10. 1. 8. -10.]
[-10. 8. 1. -10.]
[ 10. -10. -10. 1.]
16.0
saying that the best value for your second case is 16, with the given matrix.
Strictly speaking, you want integer solutions. Linear programming solves this type of problem when the inputs can be any real values, while integer programming solves it when the inputs must be integers.
In your case you may well find that the linear programming method already provides integer solutions (it does for the two given examples). When this happens, it is certain that this is the optimal answer.
However, if the variables are not integral you may need to find an integer programming library instead.
Sort the elements of the matrix in descending order and store them in an array. Iterate through the elements of the array one by one and add each to a variable. Stop iterating as soon as adding an element would decrease the variable's value. The value stored in the variable is then the maximum.
maxfunction(matrix[][])
{
    array(n) = sortDescending(matrix[][]);
    max = n[0];
    i = 1;
    for i to n do
        temp = max;
        max = max + n[i];
        if (max < temp)
            break;
    return max;
}
You need to first consider what matrices will satisfy the rules. The 4 numbers on the diagonal must be positive, with the minimal sum of the diagonal being 4 (four 1 values), and the maximum being 40 (four 10 values).
The total sum of all 16 items is 0 - or to put it another way, sum(diagonal) + sum(rest-of-matrix) = 0.
Since you know that sum(diagonal) is positive, sum(rest-of-matrix) must be negative and of equal magnitude - basically sum(diagonal) * (-1).
We also know that the off-diagonal part of the matrix is symmetric, so sum(rest-of-matrix) is guaranteed to be an even number. That means the diagonal sum must also be even, and the sum of the upper half of the matrix (above the diagonal) is exactly sum(diagonal) * (-1) / 2.
For any given function, you take a handful of cells and sum them. Now you can consider the functions as fitting into categories. For functions that take all 4 cells from the diagonal only, the maximum will be 40. If the function takes all 12 cells which are not the diagonal, the maximum is -4 (negative minimal diagonal).
Other categories of functions that have an easy answer:
1) one cell from the diagonal and an entire half of the matrix above/below the diagonal - the max is 3. That diagonal cell will be 10, the rest of the diagonal will be 1, 1, 2 (the smallest values that make the diagonal sum even), and the half-matrix will sum to -7.
2) two cells of the diagonal and half the matrix - the max is 9. The two diagonal cells are maximised to two tens, the remaining diagonal cells are 1, 1 - and so the half-matrix sums to -11.
3) three cells from the diagonal and half a matrix - the max is 14.
4) the entire diagonal and half the matrix - the max is 20.
You can continue with the categories of selecting functions (using some from the diagonal and some from the rest), and easily calculating the maximum for each category of selecting function. I believe they can all be mapped.
Then the only step is to put your new selecting function in the correct category and you know the maximum.

Printing PowerSet with help of bit position

After Googling around for a while to find subsets of a String, I read Wikipedia, which mentions:
For the whole power set of S we get:
{ } = 000 (Binary) = 0 (Decimal)
{x} = 100 = 4
{y} = 010 = 2
{z} = 001 = 1
{x, y} = 110 = 6
{x, z} = 101 = 5
{y, z} = 011 = 3
{x, y, z} = 111 = 7
Is there a possible way to implement this in a program and avoid the recursive algorithm which uses the string length?
What I understood so far is that, for a String of length n, we can run from 0 to 2^n - 1 and print the characters corresponding to the set bits.
What I couldn't get is how to map those set bits to the corresponding characters in the most optimized manner.
PS: I checked this thread, but couldn't understand it, and it is C++: Power set generated by bits
The idea is that a power set of a set of size n has exactly 2^n elements, exactly the same number as there are different binary numbers of length at most n.
Now all you have to do is create a mapping between the two, and you don't need a recursive algorithm. Fortunately, with binary numbers you have an intuitive and natural mapping: you just add the character at position j of the string to a subset if your loop variable has bit j set, which you can easily do with the getBit() method I wrote below (you could inline it, but I made it a separate function for better readability).
P.S. As requested, more detailed explanation on the mapping:
If you have a recursive algorithm, your flow is given by how you traverse your data structure in the recursive calls. It is as such a very intuitive and natural way of solving many problems.
If you want to solve such a problem without recursion for whatever reason, for instance to use less time and memory, you have the difficult task of making this traversal explicit.
As we use a loop whose variable assumes a certain set of values, we need to map each value of the loop variable, e.g. 42, to one element - in our case a subset of s - such that the mapping is bijective, that is, each subset is hit exactly once. Because we have a set, the order does not matter, so any mapping that satisfies these requirements will do.
Now we look at a binary number, e.g. 42 = 32+8+2 and as such in binary with the position above:
543210
101010
We can thus map 42 to a subset as follows using the positions:
order the elements of the set s in any way you like but consistently (always the same in one program execution), we can in our case use the order in the string
add an element e_j if and only if the bit at position j is set (equal to 1).
As each number has at least one digit different from any other, we always get different subsets, and thus our mapping is injective (different input -> different output).
Our mapping is also valid, as the binary numbers we chose have at most the length equal to the size of our set so the bit positions can always be assigned to an element in the set. Combined with the fact that our set of inputs is chosen to have the same size (2^n) as the size of a power set, we can follow that it is in fact bijective.
import java.util.HashSet;
import java.util.Set;

public class PowerSet
{
    // true if bit `pos` of i is set
    static boolean getBit(int i, int pos) { return (i & 1 << pos) > 0; }

    static Set<Set<Character>> powerSet(String s)
    {
        Set<Set<Character>> pow = new HashSet<>();
        for (int i = 0; i < (1 << s.length()); i++)   // one loop value per subset, 2^n in total
        {
            Set<Character> subSet = new HashSet<>();
            for (int j = 0; j < s.length(); j++)
            {
                if (getBit(i, j)) { subSet.add(s.charAt(j)); }
            }
            pow.add(subSet);
        }
        return pow;
    }

    public static void main(String[] args)
    { System.out.println(powerSet("xyz")); }
}
Here is an easy way to do it (pseudo code):
for (int i = 0; i < 2^n; i++) {
    char subset[];
    int k = i;
    int c = 0;
    while (k > 0) {
        if (k % 2 == 1) {
            subset.add(string[c]);
        }
        k = k / 2;
        c++;
    }
    print subset;
}
Explanation: The code repeatedly divides the number by 2 and uses the remainder to read off the number's binary digits. As you know, it then selects only those indices in the string whose corresponding bit is 1.
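In Java, that pseudo code could be fleshed out roughly like this (an illustrative sketch that prints one subset per line):

static void printPowerSet(String s) {
    int n = s.length();
    for (int i = 0; i < (1 << n); i++) {        // one loop value per subset, 2^n in total
        StringBuilder subset = new StringBuilder();
        int k = i;
        int c = 0;
        while (k > 0) {
            if (k % 2 == 1) {                   // bit c of i is set -> include character c
                subset.append(s.charAt(c));
            }
            k /= 2;
            c++;
        }
        System.out.println("{" + subset + "}");
    }
}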

How can I estimate time complexity from a table of values?

I know that my naive matrix multiplication algorithm has a time complexity of O(N^3)...
But how can I prove that through my table of values? Size is the row or column length of the matrix. So square that for the full matrix size.
Size = 100 Mat. Mult. Elapsed Time: 0.0199 seconds.
Size = 200 Mat. Mult. Elapsed Time: 0.0443 seconds.
Size = 300 Mat. Mult. Elapsed Time: 0.0984 seconds.
Size = 400 Mat. Mult. Elapsed Time: 0.2704 seconds.
Size = 800 Mat. Mult. Elapsed Time: 6.393 seconds.
This is like looking at a table of values and estimating the graph of the function... There has to be some relationship between these numbers, and N^3. How do I make sense of it though?
I have provided my algorithm below. I already know it is O(N^3) by counting the loops. How can I relate that to my table of values above though?
/**
 * This function multiplies two matrices and returns the product matrix.
 *
 * @param mat1
 *            The first multiplier matrix.
 * @param mat2
 *            The second multiplicand matrix.
 * @return The product matrix.
 */
private static double[][] MatMult(double[][] mat1, double[][] mat2) {
    int m1RowLimit = mat1.length, m2ColumnLimit = mat2[0].length, innerLimit = mat1[0].length;
    if ((mat1[0].length != mat2.length))
        return null;
    int m1Row = 0, m1Column = 0, m2Row = 0, m2Column = 0;
    double[][] mat3 = new double[m1RowLimit][m2ColumnLimit];
    while (m1Row < m1RowLimit) {
        m2Column = 0;
        while (m2Column < m2ColumnLimit) {
            double value = 0;
            m1Column = 0;
            m2Row = 0;
            while (m1Column < innerLimit) {
                value += mat1[m1Row][m1Column] * mat2[m2Row][m2Column];
                m1Column++;
                m2Row++;
            }
            mat3[m1Row][m2Column] = value;
            m2Column++;
        }
        m1Row++;
    }
    return mat3;
}
The methodology
Okay. So you want to prove your algorithm's time complexity is O(n^3). I understand why you would look at the time it takes for a program to run a calculation, but this data is not reliable. What we do instead is apply a weird form of limits to abstract away from the other aspects of the algorithm, leaving us with our metric.
The Metric
A metric is what we are going to use to measure your algorithm. It is the operation that occurs the most, or carries the most processing weight. In this case, it is this line:
value += mat1[m1Row][m1Column] * mat2[m2Row][m2Column];
Deriving the Recurrence Relation
The next step, as I understand it, is to derive a recurrence relation from your algorithm. That is, a description of how your algorithm functions based on its behaviour in the past. Let's look at how your program runs.
As you explained, you have looked at your three while loops, and determined the program is of order O(n^3). Unfortunately, this is not mathematical. This is just something that seems to happen a lot. First, let's look at some numerical examples.
When m1RowLimit = 4, m2ColumnLimit = 4, innerLimit = 4, our metric is ran 4 * 4 * 4 = 4^3 times.
When m1RowLimit = 5, m2ColumnLimit = 5, innerLimit = 5, our metric is ran 5 * 5 * 5 = 5^3 times.
So how do we express this in a recurrence relation? Well, using some basic maths we get:
T(n) = T(n-1) + 3(n-1)^2 + 3(n-1) + 1 for all n > 1
T(1) = 1
Solving the Recurrence Relation using Forward Substitution and Mathematical Induction
Now, is where we use some forward substitution. What we first do, is get a feel for the relation (this also tests that it's accurate).
T(2) = T(1) + 3(1^2) + 3(1) + 1 = 1 + 3 + 3 + 1 = 8.
T(3) = T(2) + 3(2^2) + 3(2) + 1 = 8 + 12 + 6 + 1 = 27
T(4) = T(3) + 3(3^2) + 3(3) + 1 = 27 + 27 + 9 + 1 = 64
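If you want, you can also let a small program continue the forward substitution for a few more values (an illustrative sketch in Java):

// T(1) = 1, T(n) = T(n-1) + 3(n-1)^2 + 3(n-1) + 1
long t = 1;
for (int n = 2; n <= 8; n++) {
    t += 3L * (n - 1) * (n - 1) + 3L * (n - 1) + 1;
    System.out.println("T(" + n + ") = " + t + ",  n^3 = " + (long) n * n * n);
}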
NOW, we assert the hypothesis that T(n) = n^3. Let's test it for the base case:
T(1) = 1^3 = 1. // Correct!
Now we test it, using mathematical induction, for the next step. The algorithm increases by 1 each time, so the next step is T(n+1). So what do we need to prove? We need to prove that if the formula holds for n, it also holds for n + 1; then it holds for n + 2 and so on. This means we're aiming to prove that:
T(n + 1) = (n + 1)^3
T(n + 1) = T((n + 1) - 1) + 3((n + 1) - 1)^2 + 3((n + 1) - 1) + 1
         = T(n) + 3n^2 + 3n + 1
Assume T(n) = n^3:
T(n + 1) = n^3 + 3n^2 + 3n + 1
T(n + 1) = (n + 1)^3 // since n^3 + 3n^2 + 3n + 1 = (n + 1)^3
So at this point, you've proven your algorithm has a run time complexity of O(n^3).
Empirically, you can plot your data with an adjacent third-degree polynomial trend-line for reference.
CSV data:
100, 0.0199
200, 0.0443
300, 0.0984
400, 0.2704
800, 6.393
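As a quick numeric sanity check on those values (an illustrative sketch using the question's own timings): for an O(N^3) algorithm, doubling N should multiply the running time by roughly 2^3 = 8.

public class GrowthCheck {
    public static void main(String[] args) {
        // the sizes from the table that double, with their measured times
        int[] sizes = {100, 200, 400, 800};
        double[] seconds = {0.0199, 0.0443, 0.2704, 6.393};
        for (int i = 1; i < sizes.length; i++) {
            double ratio = seconds[i] / seconds[i - 1];
            System.out.printf("N %d -> %d: time grew by a factor of %.1f (O(N^3) predicts ~8)%n",
                    sizes[i - 1], sizes[i], ratio);
        }
    }
}

The resulting factors (roughly 2.2, 6.1 and 23.6) are far from a clean 8, which already suggests measurement effects such as JVM warm-up at the small sizes and cache behaviour at the largest one, on top of the underlying N^3 growth.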
The first response covers how to prove the time complexity of your algorithm quite well.
However, you seem to be asking how to relate the experimental results of your benchmarks with time complexity, not how to prove time complexity.
So, how do we interpret the experimental data? Well, you could start by simply plotting the data (runtime on the y-axis, size on the x-axis). With enough data points, this could give you some hints about the behavior of your algorithm.
Since you already know the expected time complexity of your algorithm, you could then draw a "curve of best fit" (i.e. a line of the shape n^3 that best fits your data). If your data matches the line fairly well, then you were likely correct. If not, it's possible you made some mistake, or that your experimental results are not matching due to factors you are not accounting for.
To determine the equation for the best fitting n^3 line, you could simply take the calculated time complexity, express it as an equation, and guess values for the unknowns until you find an equation that fits. So for n^3, you'd have:
t = a*n^3 + b*n^2 + c*n + d
Find the values of a, b, c, and d that form an equation that best fits your data. If that fit still isn't good enough, then you have a problem.
For more rigorous techniques, you'd have to ask someone more well versed in statistics. I believe the value you'd want to calculate is the coefficient of determination (a.k.a. R^2, which basically tells you the variance between the expected and actual results). However, on its own this value doesn't prove a whole lot. This problem of validating hypothesized relationships between variables is known as Regression Model Validation; the Wikipedia article provides a bit more information on how to go further with this if R^2 isn't enough for your purposes.
