I'm using Gurobi with Java to solve an ILP problem.
I set everything up and start the program, but Gurobi doesn't even seem to try to solve my problem and gives me an empty solution with all variables set to 0.
During the relaxation step Gurobi shows that the minimum value of the objective is -246. This is in contrast with the next step, where Gurobi shows that the optimal solution is 0.
The output of Gurobi is:
Optimize a model with 8189 rows, 3970 columns and 15011 nonzeros
Variable types: 0 continuous, 3970 integer (0 binary)
0 0 0 1.0E100 -1.0E100 0 0
**** New solution at node 0, obj 0.0
Found heuristic solution: objective 0.0000000
Root relaxation: objective -2.465000e+02, 4288 iterations, 0.08 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 -246.50000 0 315 0.00000 -246.50000 - - 0s
Cutting planes:
MIR: 907
Explored 0 nodes (5485 simplex iterations) in 0.70 seconds
Thread count was 1 (of 1 available processors)
Optimal solution found (tolerance 1.00e-04)
Best objective 0.000000000000e+00, best bound 0.000000000000e+00, gap 0.0%
Gurobi is reporting that it found an optimal solution. The solution with values of 0 for all the variables is optimal (it's not an "empty solution"). The solution with objective -246.5 is for the relaxed problem. The relaxed problem ignores the constraints forcing variables to take on integer values. The solution with objective value of 0 is the solution to the original problem as you formulated it.
The symptoms you are reporting (an all-zero solution that you clearly don't want) are possibly caused by an inverted objective function. Is it possible that you wanted to maximize instead of minimize?
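If maximization was the intent, the objective sense has to be set explicitly, because minimization is the default. A minimal sketch with the Java API (the variable and objective here are placeholders for your real model, and the package name may differ between Gurobi versions):

import gurobi.*;   // package name may differ depending on the Gurobi version

public class ObjectiveSenseExample {
    public static void main(String[] args) throws GRBException {
        GRBEnv env = new GRBEnv();
        GRBModel model = new GRBModel(env);

        // Placeholder integer variable; your model would add its real variables here.
        GRBVar x = model.addVar(0.0, 10.0, 0.0, GRB.INTEGER, "x");

        GRBLinExpr obj = new GRBLinExpr();
        obj.addTerm(1.0, x);

        // GRB.MINIMIZE is the default, so a model that was meant to be
        // maximized has to say so explicitly.
        model.setObjective(obj, GRB.MAXIMIZE);

        model.optimize();
        System.out.println("x = " + x.get(GRB.DoubleAttr.X));

        model.dispose();
        env.dispose();
    }
}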
the board is like this:
1 2 3 4
5 6 7 0
0 0 0 0
0 0 0 0
a '0' represents an empty cell; we can move a non-zero number into an adjacent '0'.
So how do I get all of the states of the board using BFS?
For example, here are two states of the board:
1 2 3 4
0 0 0 0
5 6 7 0
0 0 0 0
1 2 3 0
4 0 0 0
5 0 0 0
6 7 0 0
The reason I ask this question is that I need to process all of the 15-puzzle states using a disjoint pattern database, in order to solve nearly the hardest 15-puzzle state in under 1 minute:
15 14 13 12
11 10 9 8
7 6 5 4
3 1 2 0
I need to process all of the 15-puzzle state [..] to solve the nearly most difficult state of 15-puzzle in 1 minutes
Approach 1 - using a database and storing all states
For reasons given by Henry as well, and also supported by [1], solving this problem using a database would require generating the entire state space A_15, storing all of it, and then finding the shortest path (or some path) between a given state and the solved state. This would require a lot of space and a lot of time. See this discussion for an outline of this approach.
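For the enumeration part of the question, a plain BFS over board states would look roughly like the sketch below (boards encoded as strings of hex digits with '0' for empty cells, using the move rule from the question; identifiers are illustrative, and even the 7-tile example board already has tens of millions of reachable states):

import java.util.*;

public class BoardStates {
    // Positions orthogonally adjacent to `pos` on a 4x4 board.
    static List<Integer> neighbors(int pos) {
        List<Integer> n = new ArrayList<>();
        int r = pos / 4, c = pos % 4;
        if (r > 0) n.add(pos - 4);
        if (r < 3) n.add(pos + 4);
        if (c > 0) n.add(pos - 1);
        if (c < 3) n.add(pos + 1);
        return n;
    }

    // BFS that enumerates every board reachable from `start` by sliding a
    // non-zero tile into an adjacent empty ('0') cell.
    static Set<String> allStates(String start) {
        Set<String> seen = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        seen.add(start);
        queue.add(start);
        while (!queue.isEmpty()) {
            String state = queue.poll();
            for (int i = 0; i < 16; i++) {
                if (state.charAt(i) == '0') continue;        // nothing to move here
                for (int j : neighbors(i)) {
                    if (state.charAt(j) != '0') continue;    // target cell not empty
                    char[] next = state.toCharArray();
                    next[j] = next[i];
                    next[i] = '0';
                    String moved = new String(next);
                    if (seen.add(moved)) queue.add(moved);
                }
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        // Small 3-tile pattern as a demo; the 7-tile board from the question,
        // and especially the full puzzle, grow far too large for this approach.
        System.out.println(allStates("1230000000000000").size());
    }
}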
Approach 2 - using a specialized depth-first search algorithm
Here is an implementation of this search strategy that uses the IDA* algorithm.
Approach 3 - using computational group theory
Yet another way to handle this in a much shorter amount of time is to use GAP (which implements a variant of Schreier-Sims) in order to decompose a given word into a product of generators. There is an example in the docs that shows how to use it to solve the Rubik's cube, and it can be adapted to the 15-puzzle too [2].
[1] Permutation Puzzles - A Mathematical Perspective by Jamie Mulholland - see pages 103 and 104 for the solvability criteria and the size of the state space, |A_15| ≈ 653 billion
[2] link2 - page 37
I'm trying to wrap my head around entropy. I basically understand this:
Technically, entropy is the sum over all events of the probability of each event times the log probability of that event.
If you consider a 8-bit byte, if all 256 values are equally likely, then the byte contains 8 bits of entropy or equivalently, 8 bits of real information. If some bit patterns are more likely than others, for example the bit pattern for the letter ‘e’, then the byte will contain less than 8 bits of entropy or information.
Running English text is pretty low entropy, at about 2.3 bits per byte. This is why compression algorithms work well on text files.
https://www.quora.com/What-is-entropy-in-terms-of-cryptography
For now let's say I'm using strings of ASCII converted into string arrays of binary, as in
// [0 0 1 1 0 0 1 1]
// [1 1 1 1 0 0 1 1]
// [1 0 1 1 0 0 0 0]
// [0 0 1 1 0 0 1 1]
I've devised 2 methods for calculating entropy (maybe they are both wrong, I don't know). The first is to start with element[1] and compare each of its bits to the corresponding bits of element[0] and element[2]; for each bit they share in the same location, I add 1 to a running score, and then I divide by 2. I end up with something that is consistent-ish with the 2.3 bits per byte described above, in that the score for most of my text is that bytes share roughly 5-6 bits with their neighbors on average.
The other method is to just add up each 'column' of bits and then average them to find a probability of their appearance. For the above I get
//=[2 1 4 4 0 0 3 3] / 4 = [.5 .25 1 1 0 0 .75 .75] as probabilities
For the above, I am less sure how to derive an entropy 'score'.
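My best guess at turning those probabilities into a score is to apply the quoted definition to each column and sum the results, treating each column as an independent 0/1 variable (a sketch only; I'm not sure this is the right model, and the names are just illustrative):

public class ColumnEntropy {
    // Shannon entropy (in bits) of a single 0/1 column with probability p of a 1:
    // H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0.
    static double binaryEntropy(double p) {
        if (p <= 0.0 || p >= 1.0) return 0.0;
        return -p * log2(p) - (1 - p) * log2(1 - p);
    }

    static double log2(double x) {
        return Math.log(x) / Math.log(2);
    }

    public static void main(String[] args) {
        // The column probabilities computed above: [.5 .25 1 1 0 0 .75 .75]
        double[] p = {0.5, 0.25, 1.0, 1.0, 0.0, 0.0, 0.75, 0.75};
        double bitsPerByte = 0.0;
        for (double pi : p) bitsPerByte += binaryEntropy(pi);
        System.out.println(bitsPerByte + " bits of entropy per byte");
    }
}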
In any case, I'm curious if people can help me to understand if I am doing this right, or more likely, how I can improve or understand this differently.
Thank you
Okay, so here is the scenario:
I have a 2D integer array representing my graph / matrix. If there is a connection there is a 1; if there is no connection there is a 0. Pretty simple. However, I am then iterating back through the array to remove a given subset of vertices. So if I have a graph:
{ 0 1 1 1
1 0 1 1
1 1 0 1
1 1 1 0 }
and I pass the subset {3,1}
then I will be left with
{ 0 0 1 0
0 0 0 0
1 0 0 0
0 0 0 0 }
Now my question is how to go about counting the maximum number of vertices in a single component. So the output I want is the number of vertices in the largest component. My problem is that I don't understand how I'm supposed to tell the components apart. It's easier for me to understand on paper, but I am stumped on how to express it in code. I will say I am doing this in Java.
Any insight would be helpful
Edit Note:
I am trying to use BFS or some other search method to count each vertex and its connections, then iterate over each vertex that has yet to be seen or checked, and continue. Then output the size of the largest component.
Let's say I have a graph with the connections as above, before the subset is given. The subset is removed, and we are left with pieces of the graph. I then need to iterate over those pieces to find which piece has the most vertices.
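A minimal sketch of the BFS-per-unvisited-vertex idea from the edit note (using the post-removal matrix from the example; whether the removed, now-isolated vertices should count as size-1 components is a choice left open here):

import java.util.*;

public class LargestComponent {
    // Size (number of vertices) of the largest connected component in an
    // undirected graph given as a 0/1 adjacency matrix.
    static int largestComponent(int[][] adj) {
        int n = adj.length;
        boolean[] visited = new boolean[n];
        int best = 0;

        for (int start = 0; start < n; start++) {
            if (visited[start]) continue;

            // BFS over one component, counting its vertices.
            int size = 0;
            Deque<Integer> queue = new ArrayDeque<>();
            queue.add(start);
            visited[start] = true;
            while (!queue.isEmpty()) {
                int v = queue.poll();
                size++;
                for (int u = 0; u < n; u++) {
                    if (adj[v][u] == 1 && !visited[u]) {
                        visited[u] = true;
                        queue.add(u);
                    }
                }
            }
            best = Math.max(best, size);
        }
        return best;
    }

    public static void main(String[] args) {
        // The matrix left over after removing the subset {3, 1} in the example.
        int[][] adj = {
            {0, 0, 1, 0},
            {0, 0, 0, 0},
            {1, 0, 0, 0},
            {0, 0, 0, 0}
        };
        // The two connected vertices form the largest component.
        System.out.println(largestComponent(adj));
    }
}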
I am looking for the fastest way to square a double (double d). So far I came up with two approaches:
1. d*d
2. Math.pow(d, 2)
To test the performance I set up three test cases, in each I generate random numbers using the same seed for the three cases and just calculate the squared number in a loop 100 000 000 times.
In the first test case numbers are generated using random.nextDouble(), in the second case using random.nextDouble()*Double.MAX_VALUE and in the third one using random.nextDouble()*Double.MIN_VALUE.
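Simplified, the measurement loop looks roughly like this (only the squaring expression and the scaling of the random number change between the approaches and cases; the sink variable is just there to keep the JIT from discarding the loop):

import java.util.Random;

public class SquareBenchmark {
    public static void main(String[] args) {
        final int N = 100000000;
        Random random = new Random(42);        // same seed for every case

        double sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < N; i++) {
            double d = random.nextDouble();    // cases 2/3: * Double.MAX_VALUE / MIN_VALUE
            sink += d * d;                     // approach 2: Math.pow(d, 2)
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("sink=" + sink + ", time=" + (elapsed / 1e9) + "s");
    }
}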
The results of a couple of runs (approximate results, there's always some variation; run using Java 1.8, compiled for Java 1.6, on Mac OS X Mavericks):
Approach | Case 1 | Case 2 | Case 3
---------•--------•--------•-------
1 | ~2.16s | ~2.16s | ~2.16s
2 | ~9s | ~30s | ~60s
The conclusion seems to be that approach 1 is way faster but also that Math.pow seems to behave kind of weird.
So I have two questions:
Why is Math.pow so slow, and why does it cope badly with numbers > 1 and even worse with numbers < -1?
Is there a way to improve the performance over what I suggested as approach 1? I was thinking about something like:
long l = Double.doubleToRawLongBits(d);
long sign = (l & (1 << 63));
Double.longBitsToDouble((l<<1)&sign);
But that is a) wrong, and b) about the same speed as approach 1.
The fastest way to square a number is to multiply it by itself.
Why is Math.pow so slow?
It's really not, but it is performing exponentiation instead of simple multiplication.
and why does it cope badly with numbers > 1 and even worse with numbers < -1
First, because it does the math. As you can see from the Javadoc, it also contains tests for many corner cases. Finally, I would not rely too much on your micro-benchmark.
Squaring by multiplying a value with itself is the fastest, because that approach can be translated directly into simple, non-branching bytecode (and thus, indirectly, machine code).
Math.pow() is a quite complex function that comes with various guarantees for edge cases, and it needs to be called instead of being inlined.
Math.pow() is slow because it has to deal with the generic case of raising a number to any given power.
As for why it is slower with negative numbers: it has to test whether the power is positive or negative in order to determine the sign, so that is one more operation to do.
I'm having a hard time figuring out how to explain this problem. I'm currently trying to create a program for extra credit in my programming class, but I don't even understand the math behind it.... So I would love if someone could help me out. Alright:
Say you have a 1-cent coin and a 4-cent coin, and the total number of coins allowed is 4. Then the maximal coverage of the value is 11: with at most 4 coins you can make every value from 1 up to 10, but not 11. The chart is below.
Value | 1 cent | 4 cent
  1   |   1    |
  2   |   2    |
  3   |   3    |
  4   |   4    |
  5   |   1    |   1
  6   |   2    |   1
  7   |   3    |   1
  8   |        |   2
  9   |   1    |   2
 10   |   2    |   2
 11   | Maximum
So that's an example. I need to make this work for a much larger number. But I would love it if someone could help explain the math to me, or tell me what the equation is... It's driving me insane.
I was trying to implement a version of the knapsack algorithm, but it doesn't seem to be doing the trick. If anyone can help it would be much appreciated. I'm not sure if I'm able to do that or if I need to use the greedy algorithm for this solution. It's basically a twist on the greedy algorithm.
EDIT: changed to 11
Dynamic programming (DP) is the way to solve the problem. DP generally involves finding some basic property you can compute based on other values of that property -- a form of inductive reasoning.
In your case, the basic question you need to ask is: "can I make n cents using exactly k coins". That's a simple boolean yes/no; because you can reuse coins, you don't need to know how to make n cents with k coins, only whether it is possible. This implicitly defines a boolean matrix A[n][k], where A[n][k] = TRUE iff you can make n cents with k of the given sorts of coins.
Study the relationships between the various entries in this truth table. For example, if I can make 5 cents with 2 coins, then it follows I can make 6 and 9 cents each with 3 coins (why?); thus A[5][2] implies A[6][3] and A[9][3].
Good luck!
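A minimal sketch of that table in Java (names are illustrative; the answer for the example is the smallest n for which no A[n][k] with k up to the coin budget is true):

public class CoinCoverage {
    // A[n][k] == true  <=>  exactly k of the given coins can sum to n cents.
    // Returns the smallest value that cannot be made with at most maxCoins coins.
    static int firstUncoverable(int[] coins, int maxCoins) {
        int maxCoin = 0;
        for (int c : coins) maxCoin = Math.max(maxCoin, c);
        int bound = maxCoins * maxCoin + 1;      // this value is never coverable

        boolean[][] A = new boolean[bound + 1][maxCoins + 1];
        A[0][0] = true;                          // 0 cents with 0 coins
        for (int n = 1; n <= bound; n++) {
            for (int k = 1; k <= maxCoins; k++) {
                for (int c : coins) {
                    if (n >= c && A[n - c][k - 1]) { A[n][k] = true; break; }
                }
            }
        }

        for (int n = 1; n <= bound; n++) {
            boolean coverable = false;
            for (int k = 0; k <= maxCoins; k++) coverable |= A[n][k];
            if (!coverable) return n;            // first gap in the coverage
        }
        return bound;                            // not reached; kept for the compiler
    }

    public static void main(String[] args) {
        // 1-cent and 4-cent coins, at most 4 coins: values 1..10 work, 11 does not.
        System.out.println(firstUncoverable(new int[]{1, 4}, 4));
    }
}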
Note: I'm re-posting because the other answer was deleted while updating to provide more context.
This appears to be the original problem author and his Java source code solution, if you'd like to study it further.
However, here's the summary of how this algorithm works, using dynamic programming:
Assumptions:
Each value in T is constrained by Integer.MAX_VALUE
T is constrained by Integer.MAX_VALUE - 1
Definitions:
D = {d1, d2, ..., dk}, where every d ∈ ℤ and every coin has weight w_d = 1
T = W = Total Weight of Knapsack = Total Coins Available for Use
How the algorithm works:
Ensures W > 0 and that 1 ∈ D
Ensures constraints above are met
Create a dynamically-sized array MinCoins, with MinCoins[0] = 0
Let n=1 and iterate by 1 as n→∞
For each iteration, set MinCoins[ n ] = Integer.MAX_VALUE
Iterate over each element in D, let each value be known as d during iteration
If d > n skip this iteration
Let z represent the optimal number of coins for this iteration
Get the optimal number of coins from the previous iteration, and add one more (of this value) to it: z = MinCoins [ n - d ] + 1
Now compare z to MinCoins[ n ]
If z < MinCoins[ n ] a new optimal solution has been found (save it), else iterate to next d
Let the optimal solution found for this iteration be defined as q = MinCoins[ n ]
If q < T then continue to next iteration. Else, no maximum solution was found this iteration and break the loop.
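A minimal sketch of those steps in Java (identifiers are illustrative and not taken from the linked source; the stop test here is "optimum greater than T", so values that need exactly T coins still count as covered):

import java.util.ArrayList;
import java.util.List;

public class MinCoinsSearch {
    // Follows the steps above: MinCoins[n] is the fewest coins from D that
    // sum to n; stop at the first n whose optimum exceeds the coin budget T.
    static int lastCoverableValue(int[] D, int T) {
        List<Integer> minCoins = new ArrayList<>();
        minCoins.add(0);                              // MinCoins[0] = 0

        for (int n = 1; ; n++) {
            int best = Integer.MAX_VALUE;             // MinCoins[n] starts as "unreachable"
            for (int d : D) {
                if (d > n) continue;                  // coin too large, skip this d
                int prev = minCoins.get(n - d);
                if (prev == Integer.MAX_VALUE) continue;
                int z = prev + 1;                     // optimum for n - d plus one more coin
                if (z < best) best = z;               // new optimal solution found
            }
            if (best > T) {                           // budget exceeded: n is not coverable
                return n - 1;                         // largest value coverable with <= T coins
            }
            minCoins.add(best);
        }
    }

    public static void main(String[] args) {
        // 1-cent and 4-cent coins with at most 4 coins: 10 is the last
        // coverable value, so 11 is the first gap.
        System.out.println(lastCoverableValue(new int[]{1, 4}, 4));
    }
}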
https://bitbucket.org/asraful/coin-change