I'm using the Choco solver library to generate a set of puzzles. I need to run the solver, check how many solutions there are, and if there is more than one, add an extra constraint. Repeating this gives me a set of constraints (clues) that has a unique solution.
However, once I've run model.getSolver().findAllSolutions(), any additional checks return zero solutions.
I'm guessing I need to somehow reset the model's solver but can't find a way of achieving this - I'd rather not generate a new model and recreate the existing constraints unless I have to.
The original code has 110 IntVars and a huge number of constraints, but I've created a much smaller example.
Note: in the real application I use model.getSolver().findAllSolutions(new SolutionCounter(model,2)) to speed things up, but I've omitted that step here.
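For reference, that omitted call looks something like this (a sketch; SolutionCounter is Choco's stop criterion that halts the search once the given number of solutions has been found, which is enough to know the puzzle isn't unique yet):
List<Solution> found = model.getSolver().findAllSolutions(new SolutionCounter(model, 2));
boolean unique = (found.size() == 1);
The smaller example: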
Model model = new Model();
// setup two doors A and B, one has the value 0 the other 1
IntVar doorA = model.intVar("Door A", 0, 1);
IntVar doorB = model.intVar("Door B", 0, 1);
model.allDifferent(new IntVar[]{doorA, doorB}).post();
// setup two windows A and B, one has the value 0 the other 1
IntVar windowA = model.intVar("Window A", 0, 1);
IntVar windowB = model.intVar("Window B", 0, 1);
model.allDifferent(new IntVar[]{windowA, windowB}).post();
// assign the first constraint and count the solutions
model.arithm(doorA,"=",0).post();
// this should force door B to be 1 - there are two remaining solutions
List<Solution> solutions = model.getSolver().findAllSolutions();
System.out.println("results after first clue");
for (Solution s : solutions) {
    System.out.println(">" + s.toString());
}
assertEquals("First clue leaves two solutions",2,solutions.size());
// add second clue
model.arithm(windowA,"=",1).post();
// this should force window B to be 0 - only one valid solution
List<Solution> solutions2 = model.getSolver().findAllSolutions();
System.out.println("results after second clue");
for (Solution s : solutions2) {
    System.out.println(">" + s.toString());
}
assertEquals("Second clue leaves one solution",1,solutions2.size());
For anybody else looking for this, it turns out that the answer is simple.
model.getSolver().reset();
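In context, the clue-generation loop then looks something like this minimal sketch (pickNextClue() is a hypothetical helper standing in for however you select the next constraint):
Solver solver = model.getSolver();
while (solver.findAllSolutions(new SolutionCounter(model, 2)).size() > 1) {
    solver.reset();             // clear the search state so the next count starts fresh
    pickNextClue().post();      // hypothetical: post one more clue as a constraint
}
solver.reset();                 // leave the solver clean for whatever comes next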
I'm trying to copy the exercise about halfway down the page at this link:
https://d2l.ai/chapter_recurrent-neural-networks/sequence.html
The exercise uses a sine function to create 1000 data points between -1 and 1 and uses a recurrent network to approximate the function.
Below is the code I used. I'm going to go back and study why this isn't working, as it doesn't make much sense to me right now, given that I was easily able to approximate this function with a feed-forward network.
// get data
ArrayList<DataSet> list = new ArrayList<>();
DataSet dss = DataSetFetch.getDataSet(Constants.DataTypes.math, "sine", 20, 500, 0, 0);
DataSet dsMain = dss.copy();
if (!dss.isEmpty()) {
    list.add(dss);
}
if (list.isEmpty()) {
    return;
}
//format dataset
list = DataSetFormatter.formatReccurnent(list, 0);
//get network
int history = 10;
ArrayList<LayerDescription> ldlist = new ArrayList<>();
LayerDescription l = new LayerDescription(1,history, Activation.RELU);
ldlist.add(l);
LayerDescription ll = new LayerDescription(history, 1, Activation.IDENTITY, LossFunctions.LossFunction.MSE);
ldlist.add(ll);
ListenerDescription ld = new ListenerDescription(20, true, false);
MultiLayerNetwork network = Reccurent.getLstm(ldlist, 123, WeightInit.XAVIER, new RmsProp(), ld);
//train network
final List<DataSet> lister = list.get(0).asList();
DataSetIterator iter = new ListDataSetIterator<>(lister, 50);
network.fit(iter, 50);
network.rnnClearPreviousState();
//test network
ArrayList<DataSet> resList = new ArrayList<>();
DataSet result = new DataSet();
INDArray arr = Nd4j.zeros(lister.size()+1);
INDArray holder;
if (list.size() > 1) {
    // test on training data
    System.err.println("oops");
} else {
    // test on original or scaled data
    for (int i = 0; i < lister.size(); i++) {
        holder = network.rnnTimeStep(lister.get(i).getFeatures());
        arr.putScalar(i, holder.getFloat(0));
    }
}
// add original data
resList.add(dsMain);
//result
result.setFeatures(dsMain.getFeatures());
result.setLabels(arr);
resList.add(result);
//display
DisplayData.plot2DScatterGraph(resList);
Can you explain the code I would need for an LSTM network with 1 input, 10 hidden units, and 1 output to approximate a sine function?
I'm not using any normalization, as the function is already in the range -1 to 1, and I'm using each Y value as the feature and the following Y value as the label to train the network.
You'll notice I am building a class that allows for easier construction of nets; I have tried throwing many changes at the problem, but I am sick of guessing.
Here are some examples of my results (blue is data, red is the result).
This is one of those times where you go from wondering why this wasn't working to wondering how in the hell my original results were as good as they were.
My failing was not understanding the documentation clearly and also not understanding BPTT.
With feed-forward networks, each example is stored as a row and each input as a column, giving a shape of [dataset.size, networkinputs.size].
However, with recurrent input it's reversed: each row is an input and each column is a step in time, which is necessary to activate the state of the LSTM chain of events. At minimum my input needed to be [1, networkinputs.size, dataset.size] (mini-batch, features, time steps), but it could also be [dataset.size, networkinputs.size, statelength.size].
In my previous example I was training the network with data in the format [dataset.size, networkinputs.size, 1]. So from my low-resolution understanding, the LSTM network should never have worked at all, but it somehow produced at least something.
There may have also been some issue with converting the dataset to a list, as I also changed how I feed the network, but I think the bulk of the issue was the data structure.
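For example, packing a one-feature sine series into that recurrent shape looks roughly like this (a sketch using plain ND4J calls; y is assumed to hold the sampled sine values):
int n = y.length;
// DL4J recurrent shape: [miniBatchSize, featureCount, timeSeriesLength]
INDArray features = Nd4j.zeros(1, 1, n - 1);
INDArray labels = Nd4j.zeros(1, 1, n - 1);
for (int t = 0; t < n - 1; t++) {
    features.putScalar(new int[]{0, 0, t}, y[t]);     // current value as feature
    labels.putScalar(new int[]{0, 0, t}, y[t + 1]);   // next value as label
}
DataSet ds = new DataSet(features, labels);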
Below are my new results
Hard to tell what is going on without seeing the full code. For a start, I don't see an RnnOutputLayer specified. You could take a look at this, which shows you how to build an RNN in DL4J.
If your RNN setup is correct, this could be a tuning issue. You can find more on tuning here. Adam is probably a better choice of updater than RMSProp, and tanh is probably a good choice for the activation of your output layer, since its range is (-1,1). Other things to check/tweak: learning rate, number of epochs, set-up of your data (e.g. are you trying to predict too far out?).
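As a rough sketch (plain DL4J builders rather than your wrapper class; the learning rate is a guess), a 1-in / 10-hidden / 1-out setup along those lines might look like:
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
    .seed(123)
    .weightInit(WeightInit.XAVIER)
    .updater(new Adam(0.005))                         // Adam instead of RmsProp
    .list()
    .layer(new LSTM.Builder().nIn(1).nOut(10)
            .activation(Activation.TANH).build())
    .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MSE)
            .activation(Activation.TANH)              // output range (-1,1) suits sine
            .nIn(10).nOut(1).build())
    .build();
MultiLayerNetwork network = new MultiLayerNetwork(conf);
network.init();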
In my software, I need to decide the version of a feature based on 2 parameters. Eg.
Render version 1 -> if (param1 && param2) == true;
Render version 2 -> if (!param1 && !param2) == true;
Render version 3 -> if only param1 == true;
Render version 4 -> if only param2 == true;
So, to meet this requirement, I wrote code which looks like this:
if (param1 && param2) { // both are true
    version = 1;
} else if (!param1 && !param2) { // both are false
    version = 2;
} else if (!param2) { // means param1 is true
    version = 3;
} else { // means param2 is true
    version = 4;
}
There are definitely multiple ways to code this but I finalised this approach after trying out different approaches because this is the most readable code I could come up with.
But this piece of code is definitely not scalable, because:
Let's say tomorrow we want to introduce a new param called param3. Then the number of checks will increase because of the many possible combinations.
For this software, I am pretty sure that we will have to accommodate new parameters in the future.
Can there be any scalable & readable way to code these requirements?
EDIT:
For a scalable solution define the versions for each parameter combination through a Map:
Map<List<Boolean>, Integer> paramsToVersion = Map.of(
List.of(true, true), 1,
List.of(false, false), 2,
List.of(true, false), 3,
List.of(false, true), 4);
Now finding the right version is a simple map lookup:
version = paramsToVersion.get(List.of(param1, param2));
The way I initialized the map works since Java 9. In older Java versions it's a little more wordy, but probably still worth doing; a sketch follows below. Even in Java 9 you need to use Map.ofEntries if you have 4 or more parameters (16 combinations or more), which is a little more wordy too.
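For reference, a pre-Java-9 version of the same map might look like this sketch:
private static final Map<List<Boolean>, Integer> PARAMS_TO_VERSION = new HashMap<>();
static {
    PARAMS_TO_VERSION.put(Arrays.asList(true, true), 1);
    PARAMS_TO_VERSION.put(Arrays.asList(false, false), 2);
    PARAMS_TO_VERSION.put(Arrays.asList(true, false), 3);
    PARAMS_TO_VERSION.put(Arrays.asList(false, true), 4);
}
The lookup stays the same, e.g. PARAMS_TO_VERSION.get(Arrays.asList(param1, param2)).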
Original answer:
My taste would be for nested if/else statements and only testing each parameter once:
if (param1) {
    if (param2) {
        version = 1;
    } else {
        version = 3;
    }
} else {
    if (param2) {
        version = 4;
    } else {
        version = 2;
    }
}
But it scales poorly to many parameters.
If you have to enumerate all the possible combinations of Booleans, it's often simplest to convert them into a number:
// param1: F T F T
// param2: F F T T
static final int[] VERSIONS = new int[]{2, 3, 4, 1};
...
version = VERSIONS[(param1 ? 1:0) + (param2 ? 2:0)];
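If a hypothetical param3 is introduced, the table doubles instead of sprouting new branches; the version numbers for the new rows below are placeholders, since the question doesn't define them:
// param1: F T F T F T F T
// param2: F F T T F F T T
// param3: F F F F T T T T
static final int[] VERSIONS = new int[]{2, 3, 4, 1, 5, 6, 7, 8}; // 5..8 are placeholders
...
version = VERSIONS[(param1 ? 1:0) + (param2 ? 2:0) + (param3 ? 4:0)];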
I doubt that there is a way that would be more compact, readable and scalable at the same time.
You express the conditions as minimized expressions, which are compact and may carry meaning (in particular, the irrelevant variables don't clutter them). But there is no systematic structure that you could exploit.
A quite systematic alternative could be truth tables, i.e. the explicit expansion of all combinations and the associated truth value (or version number), which can be very efficient in terms of running time. But these have a size exponential in the number of variables and are not especially readable.
I am afraid there is no free lunch. Your current solution is excellent.
If you are after efficiency (i.e. avoiding the need to evaluate all expressions sequentially), then you can think of the truth table approach, but in the following way:
declare an array of version numbers, with 2^n entries;
use the code just like you wrote to initialize all table entries; to achieve that, enumerate all integers in [0, 2^n) and use their binary representation;
now for a query, form an integer index from the n input booleans and look up the array.
Using the answer by Olevv, the table would be [2, 4, 3, 1]. A lookup would be like (false, true) => T[01b] = 4.
What matters is that the original set of expressions is still there in the code, for human reading. You can use it in an initialization function that fills the array at run-time (see the sketch below), and you can also use it to hard-code the table (and leave the code in comments; even better, leave the code that generates the hard-coded table).
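A sketch of that run-time initialization for n = 2, where computeVersion is a hypothetical method wrapping the original, readable if/else chain (bit 1 encodes param1 and bit 0 encodes param2, matching the [2, 4, 3, 1] table above):
static final int[] VERSION_TABLE = new int[1 << 2]; // 2^n entries
static {
    for (int bits = 0; bits < VERSION_TABLE.length; bits++) {
        boolean param1 = (bits & 2) != 0;
        boolean param2 = (bits & 1) != 0;
        VERSION_TABLE[bits] = computeVersion(param1, param2); // the original if/else chain
    }
}
A query is then a single lookup: version = VERSION_TABLE[(param1 ? 2 : 0) | (param2 ? 1 : 0)];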
Your combination of parameters is nothing more than a binary number (like 01100), where 0 indicates false and 1 indicates true.
So your version can be easily calculated using all the combinations of ones and zeroes. The possible combinations with 2 input parameters are:
11 -> both are true
10 -> first is true, second is false
01 -> first is false, second is true
00 -> both are false
So with this knowledge I've come up with a quite scalable solution using a "bit mask" (nothing more than a number) and "bit operations":
public static int getVersion(boolean... params) {
    int length = params.length;
    int mask = (1 << length) - 1;
    for (int i = 0; i < length; i++) {
        if (!params[i]) {
            mask &= ~(1 << length - i - 1);
        }
    }
    return mask + 1;
}
The most interesting line is probably this:
mask &= ~(1 << length - i - 1);
It does many things at once, so let me split it up. The part length - i - 1 calculates the position of the "bit" inside the bit mask from the right (0-based, like in arrays).
The next part, 1 << (length - i - 1), shifts the number 1 that many positions to the left. So let's say the bit is at the third position from the right; then the shift amount is 2, and the result of the operation 1 << 2 is the binary number 100.
The ~ sign is a binary inverse, so all the bits are inverted: every 0 is turned into 1 and every 1 into 0. With the previous example, the inverse of 100 would be 011.
The last part, mask &= n, is the same as mask = mask & n, where n is the previously computed value 011. This is nothing more than a binary AND, so all the bits which are set both in mask and in n are kept, whereas all others are discarded.
All in all, this single line does nothing more than remove the "bit" at a given position of the mask if the input parameter is false.
If the version numbers are not sequential from 1 to 4, then a version lookup table, like this one, may help you.
The whole code would need just a single adjustment in the last line:
return VERSIONS[mask];
Where your VERSIONS array consists of all the versions in order, but reversed. (index 0 of VERSIONS is where both parameters are false)
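For the question's numbering, the table and a couple of sample calls would look like this sketch:
// mask: 00 (F,F) -> 2, 01 (F,T) -> 4, 10 (T,F) -> 3, 11 (T,T) -> 1
private static final int[] VERSIONS = {2, 4, 3, 1};
getVersion(true, true);   // mask 11b = 3 -> VERSIONS[3] = 1
getVersion(false, true);  // mask 01b = 1 -> VERSIONS[1] = 4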
I would have just gone with:
if (param1) {
    if (param2) {
    } else {
    }
} else {
    if (param2) {
    } else {
    }
}
Kind of repetitive, but each condition is evaluated only once, and you can easily find the code that executes for any particular combination. Adding a third parameter will, of course, double the code. But if there are any invalid combinations, you can leave those out, which shortens the code. Or, if you want to throw an exception for them, it becomes fairly easy to see which combination you have missed. When the IFs become too long, you can move the actual code out into methods:
if (param1) {
    if (param2) {
        method_12();
    } else {
        method_1();
    }
} else {
    if (param2) {
        method_2();
    } else {
        method_none();
    }
}
Thus your whole switching logic gets a function of its own, and the actual code for any combination lives in another method. When you need to work with the code for a particular combination, you just look up the appropriate method. The big IF maze is then rarely looked at, and when it is, it contains only the IFs themselves and nothing else potentially distracting.
Neuroph released a preview for a NEAT implementation (https://sourceforge.net/projects/neuroph/files/NEAT-preview/). I wanted to try to use it to create an XOR gate (a common thing to try). But for some reason, it's not working. This is the code I used:
First I created the properties:
SimpleNeatParameters params = new SimpleNeatParameters();
params.setFitnessFunction(new Fitness());
params.setPopulationSize(150);
params.setMaximumFitness(16);
params.setMaximumGenerations(100);
Fitness gives a double value for how well the network scored, like this:
net.setInput(0, 0);
net.calculate();
fitness += (1 - net.getOutput().get(0)) * 4;
net.setInput(1, 0);
net.calculate();
fitness += net.getOutput().get(0) * 4;
and so on. The idea is that the closer the output is to the right answer, the more you are awarded, up to a maximum of four per test case (16 in total, matching the maximum fitness above).
Then the input genes:
NeuronGene inputOne = new NeuronGene(NeuronType.INPUT, params);
NeuronGene inputTwo = new NeuronGene(NeuronType.INPUT, params);
NeuronGene output = new NeuronGene(NeuronType.OUTPUT, params);
Then told it to evolve:
Evolver e = Evolver.createNew(params, Arrays.asList(inputOne, inputTwo), Arrays.asList(output));
Organism best = e.evolve();
NeuralNetwork nn = params.getNeuralNetworkBuilder().createNeuralNetwork(best);
Then I tried it out and got these results:
0, 0: 0.6129870845620041
1, 0: 0.6492975506882983
0, 1: 0.6527754436728792
1, 1: 0.6530807065677073
So what's happening? I tried increasing the population size and the maximum number of generations, but nothing works. Am I doing anything wrong?
I am no expert on Neuroph but maybe it's one of these:
Have you set a Selector?
NaturalSelectionOrganismSelector selector = new NaturalSelectionOrganismSelector();
selector.setKillUnproductiveSpecies(true);
params.setOrganismSelector(selector);
In the example XOR test, they square the error. I don't see why that should cause problems, but who knows?
They also reset the neural network on each call of the fitness function.
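A sketch of a fitness body closer to the bundled XOR test, squaring the error and resetting the network before each pattern (reset() is my assumption about the Neuroph API; setInput/calculate/getOutput are the calls from your own code):
double[][] in = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
double[] expected = {0, 1, 1, 0};
double fitness = 0;
for (int i = 0; i < in.length; i++) {
    net.reset();                              // assumed: clears previous activation state
    net.setInput(in[i][0], in[i][1]);
    net.calculate();
    double error = expected[i] - net.getOutput().get(0);
    fitness += 4 * (1 - error * error);       // squared error; a perfect net scores 16
}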
I'm using ELKI to cluster my data. I used KMeansLloyd<NumberVector> with k=3, and every time I run my Java code I get totally different clustering results. Is this normal, or is there something I should do to make my output nearly stable? Here is my code, which I got from the ELKI tutorials:
DatabaseConnection dbc = new ArrayAdapterDatabaseConnection(a);
// Create a database (which may contain multiple relations!)
Database db = new StaticArrayDatabase(dbc, null);
// Load the data into the database (do NOT forget to initialize...)
db.initialize();
// Relation containing the number vectors:
Relation<NumberVector> rel = db.getRelation(TypeUtil.NUMBER_VECTOR_FIELD);
// We know that the ids must be a continuous range:
DBIDRange ids = (DBIDRange) rel.getDBIDs();
// K-means should be used with squared Euclidean (least squares):
//SquaredEuclideanDistanceFunction dist = SquaredEuclideanDistanceFunction.STATIC;
CosineDistanceFunction dist= CosineDistanceFunction.STATIC;
// Default initialization, using global random:
// To fix the random seed, use: new RandomFactory(seed);
RandomlyGeneratedInitialMeans init = new RandomlyGeneratedInitialMeans(RandomFactory.DEFAULT);
// Textbook k-means clustering:
KMeansLloyd<NumberVector> km = new KMeansLloyd<>(dist, //
3 /* k - number of partitions */, //
0 /* maximum number of iterations: no limit */, init);
// K-means will automatically choose a numerical relation from the data set:
// But we could make it explicit (if there were more than one numeric
// relation!): km.run(db, rel);
Clustering<KMeansModel> c = km.run(db);
// Output all clusters:
int i = 0;
for (Cluster<KMeansModel> clu : c.getAllClusters()) {
    // K-means will name all clusters "Cluster" in lack of noise support:
    System.out.println("#" + i + ": " + clu.getNameAutomatic());
    System.out.println("Size: " + clu.size());
    System.out.println("Center: " + clu.getModel().getPrototype().toString());
    // Iterate over objects:
    System.out.print("Objects: ");
    for (DBIDIter it = clu.getIDs().iter(); it.valid(); it.advance()) {
        // To get the vector use:
        NumberVector v = rel.get(it);
        // Offset within our DBID range: "line number"
        final int offset = ids.getOffset(it);
        System.out.print(v + " " + offset);
        // Do NOT rely on using "internalGetIndex()" directly!
    }
    System.out.println();
    ++i;
}
I would say, since you are using RandomlyGeneratedInitialMeans:
Initialize k-means by generating random vectors (within the data sets value range).
RandomlyGeneratedInitialMeans init = new RandomlyGeneratedInitialMeans(RandomFactory.DEFAULT);
Yes, it is normal.
K-Means is supposed to be initialized randomly. It is desirable to get different results when running it multiple times.
If you don't want this, use a fixed random seed.
From the code you copied and pasted:
// To fix the random seed, use: new RandomFactory(seed);
That is exactly what you should do...
long seed = 0;
RandomlyGeneratedInitialMeans init = new RandomlyGeneratedInitialMeans(
new RandomFactory(seed));
This was too long for a comment. As @Idos stated, you are initializing your data randomly; that's why you're getting random results. Now the question is, how do you ensure the results are robust? Try this:
Run the algorithm N times. Each time, record the cluster membership for each observation. When you are finished, classify an observation into the cluster which contained it most often. For example, suppose you have 3 observations, 3 classes, and run the algorithm 3 times:
obs  R1  R2  R3
  1   A   A   B
  2   B   B   B
  3   C   B   B
Then you should classify obs1 as A since it was most often classified as A. Classify obs2 as B since it was always classified as B. And classify obs3 as B since it was most often classified as B by the algorithm. The results should become increasingly stable the more times you run the algorithm.
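A minimal majority-vote sketch (it assumes cluster ids already mean the same thing across runs; in practice k-means labels are arbitrary, so you'd first need to match labels between runs):
// labels[run][obs] = cluster id assigned to observation obs in that run
static int[] majorityVote(int[][] labels, int k) {
    int runs = labels.length, n = labels[0].length;
    int[] consensus = new int[n];
    for (int obs = 0; obs < n; obs++) {
        int[] votes = new int[k];
        for (int run = 0; run < runs; run++) {
            votes[labels[run][obs]]++;
        }
        int best = 0;
        for (int c = 1; c < k; c++) {
            if (votes[c] > votes[best]) best = c;
        }
        consensus[obs] = best;
    }
    return consensus;
}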
I've got a very simple question about a game I created (this is not homework): what should the following method contain to maximize payoff:
private static boolean goForBiggerResource() {
    return ... // I must fill this
}
Once again I stress that this is not homework: I'm trying to understand what is at work here.
The "strategy" is trivial: there can only be two choices: true or false.
The "game" itself is very simple:
P1  R1    R2  P2
       R5
P3  R3    R4  P4
there are four players (P1, P2, P3 and P4) and five resources (R1, R2, R3, R4, each worth 1, and R5, worth 2)
each player has exactly two options: either go for the resource close to its starting location, which is worth 1 and which the player is sure to get (no other player can reach it first), OR try to go for the resource worth 2... but other players may go for it too
if two or more players go for the bigger resource (the one worth 2), they arrive at it at the same time; only one player, chosen at random, gets it, and the other player(s) going for that resource get 0 (they cannot go back to a resource worth 1)
each player plays the same strategy (the one defined in the method goForBiggerResource())
players cannot "talk" to each other to agree on a strategy
the game is run 1 million times
So basically I want to fill the method goForBiggerResource(), which returns either true or false, in a way to maximize the payoff.
Here's the code allowing to test the solution:
private static final int NB_PLAYERS = 4;
private static final int NB_ITERATIONS = 1000000;

public static void main(String[] args) {
    double totalProfit = 0.0d;
    for (int i = 0; i < NB_ITERATIONS; i++) {
        int nbGoingForExpensive = 0;
        for (int j = 0; j < NB_PLAYERS; j++) {
            if (goForBiggerResource()) {
                nbGoingForExpensive++;
            } else {
                totalProfit++;
            }
        }
        totalProfit += nbGoingForExpensive > 0 ? 2 : 0;
    }
    double payoff = totalProfit / (NB_ITERATIONS * NB_PLAYERS);
    System.out.println("Payoff per player: " + payoff);
}
For example if I suggest the following solution:
private static boolean goForBiggerResource() {
    return true;
}
Then all four players go for the bigger resource. Only one of them gets it, at random. Over one million iterations the average payoff per player will be 2/4, which gives 0.5, and the program shall output:
Payoff per player: 0.5
My question is very simple: what should go into the method goForBiggerResource() (which returns either true or false) to maximize the average payoff and why?
Since each player uses the same strategy described in your goForBiggerResource method, and you try to maximize the overall payoff, the best outcome would be three players sticking with the local resource and one player going for the big game. Unfortunately, since they cannot agree on a strategy, and I assume no player can be singled out in advance as the Big Game Hunter, things get tricky.
We need to randomize whether a player goes for the big game or not. Suppose p is the probability that a player goes for it. Then, separating the cases according to how many Big Game Hunters (BGH) there are, we can calculate the number of cases, probabilities, and payoffs, and from these the expected payoffs.
0 BGH: (4 choose 0) cases, (1-p)^4 prob, 4 payoff, expected 4(p^4-4p^3+6p^2-4p+1)
1 BGH: (4 choose 1) cases, (1-p)^3*p prob, 5 payoff, expected 20(-p^4+3p^3-3p^2+p)
2 BGH: (4 choose 2) cases, (1-p)^2*p^2 prob, 4 payoff, expected 24(p^4-2p^3+p^2)
3 BGH: (4 choose 3) cases, (1-p)*p^3 prob, 3 payoff, expected 12(-p^4+p^3)
4 BGH: (4 choose 4) cases, p^4 prob, 2 payoff, expected 2(p^4)
Then we need to maximize the sum of the expected payoffs, which is -2p^4+8p^3-12p^2+4p+4 if I calculated correctly. The second derivative is -24p^2+48p-24 = 24(-p^2+2p-1) = -24(p-1)^2, which is < 0 for all p != 1, so the function is concave, and a root of its first derivative, -8p^3+24p^2-24p+4, will maximize the expected payoff. Plugging that into an online polynomial solver returns three roots, two of them complex, the third being p ≈ 0.2062994740159, so we have indeed found the maximum. The overall expected payoff is the polynomial evaluated at this maximum, around 4.3811015779523, which is a payoff of about 1.095275394488075 per player.
Thus the winning method is something like this
private static boolean goForBiggerResource() {
    return Math.random() < 0.2062994740159;
}
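A quick brute-force check of the expected-payoff polynomial (just a sketch to confirm the analytic maximum) agrees:
double bestP = 0, bestPayoff = 0;
for (double p = 0; p <= 1; p += 1e-6) {
    double e = -2 * Math.pow(p, 4) + 8 * Math.pow(p, 3) - 12 * p * p + 4 * p + 4;
    if (e > bestPayoff) { bestPayoff = e; bestP = p; }
}
System.out.println(bestP + " -> " + bestPayoff / 4); // ~0.2063 -> ~1.0953 per player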
Of course if players can use different strategies and/or play against each other, it's an entirely different matter.
Edit: Also, you can cheat ;)
private static int cheat = 0;

private static boolean goForBiggerResource() {
    cheat = (cheat + 1) % 4;
    return cheat == 0;
}
I take it you tried the following:
private static boolean goForBiggerResource() {
    return false;
}
where none of the players tries to go for the resource worth 2. They are hence each guaranteed to get a resource worth 1 every time, hence:
Payoff per player: 1.0
I suppose also that if you ask this nice question, it's because you suspect there's a better answer.
The trick is that you need what is called a "mixed strategy".
EDIT: OK, here I come with a mixed strategy... I don't get how Patrick found the 20% that fast (he commented only minutes after you posted your question) but, yup, I found basically the same value too:
private static final Random r = new Random(System.nanoTime());

private static boolean goForBiggerResource() {
    return r.nextInt(100) < 21;
}
Which gives, for example:
Payoff per player: 1.0951035
Basically if I'm not mistaken you want to read the Wikipedia page on the "Nash equilibrium" and particularly this:
"Nash Equilibrium is defined in terms of mixed strategies, where players choose a probability distribution over possible actions"
Your question/simple example, if I'm not mistaken, can also be used to show why colluding players can achieve better average payoffs: if players could collude, they'd get 1.25 on average, which beats the 1.095 I got.
Also note that my answer contains approximation errors (I only draw random numbers from 0 to 99) and depends a bit on the PRNG used, but you should get the idea.
If the players cannot cooperate and have no memory, there is only one possible way to implement goForBiggerResource: choose a value randomly. Now the question is what the best rate is.
Now simple mathematics (not really programming related):
assume the rate x represents the probability of staying with the small resource;
therefore the chance of no player going for the big one is x^4;
so the chance of at least one player going for the big one is 1 - x^4;
the expected profit per player is then x + (1 - x^4) / 2;
find the maximum of that formula for 0% <= x <= 100%;
setting the derivative 1 - 2x^3 to zero gives x = (1/2)^(1/3), i.e. about 79.4% (for returning false).
Mmm, I think your basic problem is that the game as described is trivial. In all cases, the optimal strategy is to stick with the local resource, because the expected payoff for going for R5 is only 0.5 (1/4 * 2). Raise the reward for R5 to 4 and it becomes even: there's no better strategy. If reward(R5) > 4, it always pays to take R5.