I'm trying to use OptaPlanner to replace myself in scheduling our work planning.
The system has a MySQL database containing the necessary information and relationships. For this question I'll only use the three tables I need:
Employees --> Have Skills
Jobs --> Have Skills
Skills
In Drools I have the rule:
rule 'required Skills'
when
    Job(employee != null, missingSkillCount > 0, $missingSkillCount : missingSkillCount)
then
    scoreHolder.addHardConstraintMatch(kcontext, -10 * $missingSkillCount);
end
In class Job I have the method getMissingSkillCount():
public int getMissingSkillCount() {
    if (this.employee == null) {
        return 0;
    }
    int count = 0;
    for (Skill skill : this.reqskills) {
        if (!this.employee.getSkills().contains(skill)) {
            count++;
        }
    }
    return count;
}
When I run my program, OptaPlanner returns that none of my workers have any skills...
However, when I manually use this function (adapted to accept an Employee as a parameter: public int getMissingSkillCount(Employee employee)), it does return the correct values.
I'm puzzled! I somehow understand that contains is checking for the same object, instead of the content of the object. But then I don't understand how to do this efficiently...
1) Are your Jobs in the Drools working memory? I presume they are your @PlanningEntity and the instances are in a @PlanningEntityCollectionProperty on your @PlanningSolution, so they will be. You can verify this by just matching a rule on Job() and doing a System.out.println.
2) Try writing the constraint as a ConstraintStream (see the docs) and putting a debug breakpoint in the getMissingSkillCount() > 0 lambda to see what's going on.
3) Temporarily turn on FULL_ASSERT to validate that there is no score corruption.
4) Turn on DEBUG and then TRACE logging for OptaPlanner, to see what's going on inside.
Still wondering what makes the difference between letting OptaPlanner run getMissingSkillCount() and using it "manually".
I fixed it by overriding equals(); that should have been my first clue!
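That fix can be sketched as follows. This is a hypothetical minimal Skill class (the question doesn't show the real one, and the name field is an assumption): a value-based equals()/hashCode() makes List.contains() compare content instead of object identity, which is exactly why the solver saw every skill as "missing".

```java
import java.util.List;
import java.util.Objects;

// Hypothetical Skill class: value-based equals()/hashCode() so that
// contains() matches on content, not on object identity.
public class Skill {
    private final String name;

    public Skill(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof Skill)) {
            return false;
        }
        return Objects.equals(name, ((Skill) o).name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name);
    }

    public static void main(String[] args) {
        // Two distinct instances with the same content are now equal,
        // so contains() finds the skill even across separate entity copies.
        List<Skill> employeeSkills = List.of(new Skill("welding"));
        System.out.println(employeeSkills.contains(new Skill("welding"))); // prints true
    }
}
```

If the objects come from a database via JPA/Hibernate, basing equals() on a stable business key (here the name) rather than on object identity avoids exactly the mismatch described above.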
Related
I'm working on a scheduling project for classes (teachers, lessons, time). I'm using OptaPlanner as part of a Spring Boot application. The test code compiles and runs correctly, however the result contains an empty solution. In the log output I see this message:
Solving started: time spent (11), best score (0hard/0soft), environment mode (REPRODUCIBLE), move thread count (NONE), random (JDK with seed 0).
2021-09-28 22:39:26.619 INFO 2579 --- [pool-1-thread-1] o.o.core.impl.solver.DefaultSolver : Skipped all phases (2): out of 0 planning entities, none are movable (non-pinned).
2021-09-28 22:39:26.620 INFO 2579 --- [pool-1-thread-1] o.o.core.impl.solver.DefaultSolver : Solving ended: time spent (16), best score (0hard/0soft), score calculation speed (62/sec), phase total (2), environment mode (REPRODUCIBLE), move thread count (NONE).
The problem is in the test calculator I wrote. I'm trying to loop over the possible solution and actually decrease the cost a bit sometimes, or even increase it, but it isn't taking effect: I'm looping and trying to log the objects, but nothing is being logged. This is the code of the calculator:
public class ScheduleScoreCalculator implements EasyScoreCalculator<ScheduleTable, HardSoftScore> {

    @Override
    public HardSoftScore calculateScore(ScheduleTable scheduleTable) {
        int hardScore = 0;
        int softScore = 0;
        List<ScheduledClass> scheduledClassList = scheduleTable.getScheduledClasses();
        System.out.println(scheduleTable);
        System.out.println("Hmmmmm ---"); // this is logged but the score is not changing
        for (ScheduledClass a : scheduledClassList) {
            for (ScheduledClass b : scheduledClassList) {
                if (a.getTeacher().getTeacherId() > 17000L) {
                    hardScore += 18;
                }
                if (a.getTimeslot() != null && a.getTimeslot().equals(b.getTimeslot())
                        && a.getId() < b.getId()) {
                    if (a.getTeacher() != null && a.getTeacher().equals(b.getTeacher())) {
                        hardScore--;
                    }
                    if (a.getTeacher().equals(b.getTeacher())) {
                        hardScore--;
                    }
                } else {
                    hardScore++;
                    softScore += 2;
                }
            }
        }
        return HardSoftScore.of(hardScore, softScore);
    }
}
So please, any idea why OptaPlanner might skip creating possible solutions?
The issue was simpler than I thought. The solution class annotated with @PlanningSolution has a property scheduledClasses annotated with @PlanningEntityCollectionProperty; my mistake was that this property was initialized with an empty List (ArrayList). The fix was to populate it with entity instances. In retrospect I think the documentation is to blame here: the provided example didn't mention that we need this. The property should not be null (otherwise an exception will be raised), and it shouldn't be an empty List either. You need to initialize it with instances of the entity class without setting any value for the movable properties (those annotated with @PlanningVariable).
Thanks to @Lukáš Petrovický, as his comment helped me do the correct investigation!
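A minimal sketch of that fix, using simplified stand-ins for the question's classes (the OptaPlanner annotations are shown as comments, and the lesson names are made up): the entity collection is populated with uninitialized entities rather than left as an empty list.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the question's planning entity.
class ScheduledClass {                      // @PlanningEntity
    String lesson;                          // problem fact, set up front
    String timeslot;                        // @PlanningVariable - must start as null
    String teacher;                         // @PlanningVariable - must start as null

    ScheduledClass(String lesson) {
        this.lesson = lesson;               // facts are set here...
        // ...but timeslot/teacher are deliberately left null for the solver
    }
}

// Simplified stand-in for the question's planning solution.
class ScheduleTable {                       // @PlanningSolution
    List<ScheduledClass> scheduledClasses;  // @PlanningEntityCollectionProperty

    void setScheduledClasses(List<ScheduledClass> scheduledClasses) {
        this.scheduledClasses = scheduledClasses;
    }
}

public class ProblemSetup {
    public static void main(String[] args) {
        // The collection must contain uninitialized entities, not be empty -
        // otherwise the solver has no planning entities to move.
        List<ScheduledClass> classes = new ArrayList<>();
        for (String lesson : List.of("Math", "Physics", "Chemistry")) {
            classes.add(new ScheduledClass(lesson));
        }
        ScheduleTable problem = new ScheduleTable();
        problem.setScheduledClasses(classes);
        System.out.println(problem.scheduledClasses.size()); // prints 3
    }
}
```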
I've been trying to get over-constrained planning to work for my situation, but I keep running into issues where some failed hard constraints are still being assigned. Apologies if this has been answered before, but most examples/solutions I have seen are centered around Drools, and I'm using the streams API on this project. I'm using the Quarkus 1.4.2 implementation of OptaPlanner, if that helps.
Below are some example constraints of what I'm trying to accomplish:
private Constraint unnassignedPerson(ConstraintFactory constraintFactory) {
    return constraintFactory.from(Assignment.class)
            .filter(assignment -> assignment.getPerson() == null)
            .penalize("Unassigned", HardMediumSoftScore.ONE_MEDIUM);
}

private Constraint numberAssignmentConflict(ConstraintFactory constraintFactory) {
    return constraintFactory.from(Assignment.class)
            .join(Assignment.class,
                    Joiners.equal(Assignment::getPerson),
                    Joiners.equal(Assignment::getNumber),
                    Joiners.lessThan(Assignment::getId))
            .penalize("Number Conflict", HardMediumSoftScore.of(2, 0, 0));
}

private Constraint tooLittleSpaceBetweenResourceAssignment(ConstraintFactory constraintFactory) {
    return constraintFactory.from(Assignment.class)
            .join(Assignment.class, Joiners.equal(Assignment::getPerson), Joiners.lessThan(Assignment::getId))
            .filter((assignment, assignment2) -> !assignment.getResourceId().equals(assignment2.getResourceId()))
            .filter((assignment, assignment2) -> inRange(1, assignment.getNumber(), assignment2.getNumber()))
            .penalize("Not enough space between assignments of different resource (requires 1)", HardMediumSoftScore.of(1, 0, 0));
}
(inRange is a simple local function to get the absolute difference between two numbers)
Note that these both work independently of each other in terms of honoring the nullable planning variable - it's only when both are enabled that I am getting unexpected results. When both are enabled, the one with the lower hard score is still assigned in the solution despite showing up as a hard constraint in the debug log (which in my local testing always finishes at -12hard/-2medium/0soft).
Any insight on what I might be doing wrong would be much appreciated, and thanks in advance :)
As a follow-up, it appears the Joiners.lessThan(Assignment::getId) portion of my assignment conflict constraint is not compatible with nullable assignments. I removed that and added some more explicit checks instead, and now things are working like they should :D
Pseudo-adaptation for anyone it might help:
private Constraint numberAssignmentConflict(ConstraintFactory constraintFactory) {
    return constraintFactory.from(Assignment.class)
            .join(Assignment.class,
                    Joiners.equal(Assignment::getPerson),
                    Joiners.equal(Assignment::getNumber))
            .filter((assignment, assignment2) -> assignment.getPerson() != null && assignment2.getPerson() != null)
            .filter((assignment, assignment2) -> !assignment.getId().equals(assignment2.getId()))
            .penalize("Number Conflict", HardMediumSoftScore.of(2, 0, 0));
}
Doesn't the first constraint have to use fromUnfiltered(Assignment.class) rather than from(Assignment.class)? I believe that from() does not pass entities with unassigned planning variables, hence the ONE_MEDIUM penalty would never be applied.
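A sketch of that suggestion, applied to the question's first constraint (this is an assumption about the intended fix, not the asker's confirmed code; in newer OptaPlanner versions the same idea is spelled forEachIncludingNullVars()):

```java
private Constraint unnassignedPerson(ConstraintFactory constraintFactory) {
    // fromUnfiltered() also emits entities whose planning variables are still
    // null, so the null-person filter below can actually match; plain from()
    // skips those entities, and the penalty would never fire.
    return constraintFactory.fromUnfiltered(Assignment.class)
            .filter(assignment -> assignment.getPerson() == null)
            .penalize("Unassigned", HardMediumSoftScore.ONE_MEDIUM);
}
```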
The spring-data-redis module contains the RedisAtomicLong class.
In this class you can see
public boolean compareAndSet(long expect, long update) {
    return generalOps.execute(new SessionCallback<Boolean>() {
        @Override
        @SuppressWarnings("unchecked")
        public Boolean execute(RedisOperations operations) {
            for (;;) {
                operations.watch(Collections.singleton(key));
                if (expect == get()) {
                    generalOps.multi();
                    set(update);
                    if (operations.exec() != null) {
                        return true;
                    }
                }
                {
                    return false;
                }
            }
        }
    });
}
My question is: why does this work?
generalOps.multi() starts the transaction after get() is invoked. That means there is a possibility that two different threads (or even clients) change the value and both of them succeed.
Does operations.watch prevent this somehow? The JavaDoc doesn't explain the purpose of this method.
PS: A minor question: why for (;;)? There is always exactly one iteration.
Q: Does operations.watch prevent it somehow?
YES.
Quoting from Redis documentation about transaction:
WATCH is used to provide a check-and-set (CAS) behavior to Redis transactions.
WATCHed keys are monitored in order to detect changes against them. If at least one watched key is modified before the EXEC command, the whole transaction aborts, and EXEC returns a Null reply to notify that the transaction failed.
You can learn more about Redis transactions from that documentation.
Q: Why for (;;)? There is always one iteration.
It seems the code you've posted is very old. From Google's cache of this URL, I saw the code you provided, which dates back to Oct 15th, 2012!
The latest code looks much different:
compareAndSet method
CompareAndSet class
Does operations.watch prevent it somehow?
YES. After watching a key, if the key has been modified before the transaction finishes, EXEC will fail. So if EXEC succeeds, the value is guaranteed to have been unchanged by others.
Why for (;;)? There is always one iteration.
In your case, the infinite loop is indeed redundant.
However, if you want to implement a check-and-set operation that modifies the value based on the old value, the infinite loop is necessary. Check this example from the Redis docs:
WATCH mykey
val = GET mykey
val = val + 1
MULTI
SET mykey $val
EXEC
Since EXEC might fail, you need to retry the whole process in a loop until it succeeds.
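The retry pattern can be sketched without Redis at all, using AtomicLong's compareAndSet as a stand-in for the whole WATCH/GET/MULTI/EXEC round trip (a rejected compareAndSet plays the role of EXEC returning null for an aborted transaction; all names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for the Redis value: compareAndSet "commits" only if nobody
// modified the value since we read it, mirroring WATCH + EXEC semantics.
class Store {
    final AtomicLong value = new AtomicLong(0);
}

public class CasRetry {
    // Increment with the WATCH-style retry loop: read, compute locally,
    // then try to commit; if the commit is rejected because the value
    // changed in the meantime, redo the whole round trip.
    static long incrementWithRetry(Store store) {
        for (;;) {                                          // retry until commit succeeds
            long current = store.value.get();               // ~ WATCH mykey; GET mykey
            long next = current + 1;                        // compute the new value locally
            if (store.value.compareAndSet(current, next)) { // ~ MULTI; SET; EXEC
                return next;                                // EXEC succeeded: value was untouched
            }
            // ~ EXEC returned null (value changed concurrently) -> loop and retry
        }
    }

    public static void main(String[] args) {
        Store store = new Store();
        System.out.println(incrementWithRetry(store)); // prints 1
        System.out.println(incrementWithRetry(store)); // prints 2
    }
}
```

In the single-threaded demo the loop always succeeds on the first pass, which is also why the question's code, with its fixed expect value, never needs a second iteration.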
The RedisAtomicLong.compareAndSet implementation is not optimal, since it requires 5 requests to Redis.
Redisson, a Redis Java client, provides a more efficient implementation.
The org.redisson.RedissonAtomicLong#compareAndSetAsync method is implemented using an atomic EVAL script:
"local currValue = redis.call('get', KEYS[1]); "
+ "if currValue == ARGV[1] "
+ "or (tonumber(ARGV[1]) == 0 and currValue == false) then "
+ "redis.call('set', KEYS[1], ARGV[2]); "
+ "return 1 "
+ "else "
+ "return 0 "
+ "end",
This script requires only a single request to Redis.
Usage example:
RAtomicLong atomicLong = redisson.getAtomicLong("myAtomicLong");
atomicLong.compareAndSet(1L, 2L);
Can anybody show an example of how to use heap.forEachClass in a select statement?
It would be great if you could provide some links with different examples of queries (other than those in the oqlhelp page of course :) )
I don't believe heap.forEachClass() is meant to be used in a select statement, at least not directly. Consider the fact that it doesn't return anything:
var result = heap.forEachClass(function(it) { return it; });
typeof result
// returns undefined
The OQL used in jhat and VisualVM does support plain ol' JavaScript, just like the "query" I use above. I believe that heap.forEachClass() finds more use in either JavaScript-style queries or in JavaScript functions within select-type queries.
That said, I don't know why this function exists, since the heap.classes() enumeration is much easier to use, both with select-style queries and plain JavaScript ones.
You could even recreate the same functionality as heap.forEachClass() with the following JavaScript function:
function heapForEachClass(func) {
    map(heap.classes(), func);
    return undefined;
}
Any sample queries that I could provide would likely be easier written with heap.classes(). For example, you could use heap.forEachClass() to get a list of all classes:
var list = [];
heap.forEachClass(function(it) {
    list.push(it);
});
list
but this is more complicated than how you'd do it with heap.classes():
select heap.classes()
or just
heap.classes()
I've used this function before to look for classes that are loaded multiple times. (Usually this happens when two different class loaders load the same lib, taking more memory for no reason and making the JVM serialize and deserialize objects passed from one class instance to the other, because it doesn't know that they are actually the same class.)
This is my OQL script that selects (and counts) classes that have the same name:
var classes = {};
var multipleLoadedClasses = {};
heap.forEachClass(function(it) {
    if (classes[it.name] != null) {
        if (multipleLoadedClasses[it.name] != null) {
            multipleLoadedClasses[it.name] = multipleLoadedClasses[it.name] + 1;
        } else {
            multipleLoadedClasses[it.name] = 1;
        }
    } else {
        classes[it.name] = it;
    }
});
multipleLoadedClasses;
Hope this helps future visitors ;)
Guys,
I am implementing a simple example of a 2-level cache in Java:
1st level is memory
2nd is the filesystem
I am new to Java and I'm doing this just to understand caching in Java.
And sorry for my English; it is not my native language :)
I have completed the 1st level using the LinkedHashMap class and the removeEldestEntry method, and it looks like this:
import java.util.*;

public class level1 {
    private static final int max_cache = 50;
    private Map<String, String> cache = new LinkedHashMap<String, String>(max_cache, .75F, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
            return size() > max_cache;
        }
    };

    public level1() {
        for (int i = 1; i < 52; i++) {
            String string = String.valueOf(i);
            cache.put(string, string);
            System.out.println("\rCache size = " + cache.size() +
                    "\tRecent value = " + i +
                    " \tLast value = " +
                    cache.get(string) + "\tValues in cache=" +
                    cache.values());
        }
    }
}
Now I am going to code my 2nd level. What code and methods should I write to implement these tasks:
1) When the 1st level cache is full, the value shouldn't be removed by removeEldestEntry but should be moved to the 2nd level (to a file).
2) When new values are added to the 1st level, the value should first be checked in the file (2nd level), and if it exists there, it should be moved from the 2nd to the 1st level.
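The two tasks above can be sketched like this. It's a minimal sketch, not a full solution: an in-memory HashMap stands in for the file-based 2nd level (swapping in real file I/O or serialization is the remaining work), and all class and constant names are made up.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal 2-level cache sketch: entries evicted from level 1 are demoted
// to level 2, and a get() that misses level 1 promotes the entry back.
public class TwoLevelCache<K, V> {
    private static final int MAX_L1 = 50;

    // Stand-in for the file store; replace with file-backed persistence.
    private final Map<K, V> level2 = new HashMap<>();

    private final LinkedHashMap<K, V> level1 = new LinkedHashMap<K, V>(MAX_L1, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            if (size() > MAX_L1) {
                level2.put(eldest.getKey(), eldest.getValue()); // task 1: demote, don't drop
                return true;
            }
            return false;
        }
    };

    public void put(K key, V value) {
        level1.put(key, value);
    }

    public V get(K key) {
        V value = level1.get(key);
        if (value == null) {
            value = level2.remove(key);   // task 2: check level 2 on a level-1 miss
            if (value != null) {
                level1.put(key, value);   // promote back to level 1
            }
        }
        return value;
    }

    public static void main(String[] args) {
        TwoLevelCache<String, String> cache = new TwoLevelCache<>();
        for (int i = 0; i < 60; i++) {
            cache.put("k" + i, "v" + i);
        }
        // k0 was evicted from level 1 into level 2, but get() promotes it back:
        System.out.println(cache.get("k0")); // prints v0
    }
}
```

Note that the demotion hook reuses removeEldestEntry, so the level-1 behavior you already have stays intact; only the eviction destination changes.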
I also tried to use LRUMap to upgrade my 1st level, but the compiler couldn't find the class LRUMap in the library. What's the problem? Maybe special syntax is needed?
You can either use the built-in Java serialization mechanism and just send your stuff to a file by wrapping a FileOutputStream with an ObjectOutputStream and then calling writeObject().
This method is simple but not flexible enough; for example, you will fail to read an old cache from the file if your classes have changed.
You can use serialization to XML, e.g. JAXB or XStream. I used XStream in the past and it worked just fine; you can easily store any collection in a file and then restore it.
Obviously you can store stuff in a DB, but it is more complicated.
A remark: you are not taking thread safety into consideration for your cache! By default LinkedHashMap is not thread-safe, and you would need to synchronize your access to it. Even better, you could use ConcurrentHashMap, which deals with synchronization internally and by default can handle 16 separate threads (you can increase this number via one of its constructors).
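Both options can be sketched in a few lines (names are illustrative; note the trade-off in the comments):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeCacheDemo {
    public static void main(String[] args) {
        // Option 1: wrap the existing LinkedHashMap so every call synchronizes
        // on a single lock. Access-order eviction is preserved, but iteration
        // still requires manual synchronization on the wrapper.
        Map<String, String> syncCache =
                Collections.synchronizedMap(new LinkedHashMap<String, String>(50, 0.75f, true));
        syncCache.put("a", "1");

        // Option 2: ConcurrentHashMap handles synchronization internally with
        // better concurrency, but it has no removeEldestEntry hook, so the
        // LRU eviction would need to be implemented separately.
        Map<String, String> concurrentCache = new ConcurrentHashMap<>();
        concurrentCache.put("a", "1");

        System.out.println(syncCache.get("a") + " " + concurrentCache.get("a")); // prints 1 1
    }
}
```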
I don't know your exact requirements or how complicated you want this to be, but have you looked at existing cache implementations like the Ehcache library?