Overconstrained planning not working as expected - java

I've been trying to get over-constrained planning to work for my situation, but I keep running into issues where some failed hard constraints are still being assigned. Apologies if this has been answered before, but most examples/solutions I have seen are centered around Drools, and I'm using the streams API on this project. I'm using the Quarkus 1.4.2 implementation of OptaPlanner, if that helps.
Below are some example constraints of what I'm trying to accomplish:
private Constraint unassignedPerson(ConstraintFactory constraintFactory) {
    return constraintFactory.from(Assignment.class)
            .filter(assignment -> assignment.getPerson() == null)
            .penalize("Unassigned", HardMediumSoftScore.ONE_MEDIUM);
}

private Constraint numberAssignmentConflict(ConstraintFactory constraintFactory) {
    return constraintFactory.from(Assignment.class)
            .join(Assignment.class,
                    Joiners.equal(Assignment::getPerson),
                    Joiners.equal(Assignment::getNumber),
                    Joiners.lessThan(Assignment::getId))
            .penalize("Number Conflict", HardMediumSoftScore.of(2, 0, 0));
}

private Constraint tooLittleSpaceBetweenResourceAssignment(ConstraintFactory constraintFactory) {
    return constraintFactory.from(Assignment.class)
            .join(Assignment.class, Joiners.equal(Assignment::getPerson), Joiners.lessThan(Assignment::getId))
            .filter((assignment, assignment2) -> !assignment.getResourceId().equals(assignment2.getResourceId()))
            .filter((assignment, assignment2) -> inRange(1, assignment.getNumber(), assignment2.getNumber()))
            .penalize("Not enough space between assignments of different resource (requires 1)", HardMediumSoftScore.of(1, 0, 0));
}
(inRange is a simple local helper that checks whether the absolute difference between two numbers is within the given range)
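For reference, a minimal sketch of what such a helper might look like. The exact signature is an assumption, since the original only describes it as a local function:

```java
// Hypothetical sketch of the inRange helper described above;
// the original signature is not shown, so this is an assumption.
class RangeHelper {
    // true when the absolute difference between a and b is at most maxDistance
    static boolean inRange(int maxDistance, int a, int b) {
        return Math.abs(a - b) <= maxDistance;
    }
}
```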
Note that these both work independently of each other in terms of honoring the nullable planning variable - it's only when both are enabled that I am getting unexpected results. When both are enabled, the one with the lower hard score is still assigned in the solution despite showing up as a hard constraint in the debug log (which in my local testing always finishes at -12hard/-2medium/0soft).
Any insight on what I might be doing wrong would be much appreciated, and thanks in advance :)

As a follow up, it appears the Joiners.lessThan(Assignment::getId) portion of my assignment conflict constraint is not compatible with nullable assignments. I removed that and added some more explicit checks instead, and now things are working like they should :D
A pseudo-adaptation for anyone it might help:
private Constraint numberAssignmentConflict(ConstraintFactory constraintFactory) {
    return constraintFactory.from(Assignment.class)
            .join(Assignment.class,
                    Joiners.equal(Assignment::getPerson),
                    Joiners.equal(Assignment::getNumber))
            .filter((assignment, assignment2) -> assignment.getPerson() != null && assignment2.getPerson() != null)
            .filter((assignment, assignment2) -> !assignment.getId().equals(assignment2.getId()))
            .penalize("Number Conflict", HardMediumSoftScore.of(2, 0, 0));
}

Doesn't the first constraint have to use fromUnfiltered(Assignment.class) rather than from(Assignment.class)? I believe that from() does not pass entities with unassigned planning variables, hence the ONE_MEDIUM penalty would never be applied.
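If so, a sketch of the corrected first constraint might look like this (untested; it assumes the fromUnfiltered() variant available in that OptaPlanner version):

```java
private Constraint unassignedPerson(ConstraintFactory constraintFactory) {
    // from() skips entities whose planning variables are still null;
    // fromUnfiltered() does not, so the null filter below can actually match.
    return constraintFactory.fromUnfiltered(Assignment.class)
            .filter(assignment -> assignment.getPerson() == null)
            .penalize("Unassigned", HardMediumSoftScore.ONE_MEDIUM);
}
```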

How to make the planner tolerate init score

I have taken the cloud balancing problem in a different direction:
a problem where you need to fill up all the computers with processes, and computers can overfill (by changing the overfill constraint to a soft one).
The change was easy, by adding the following constraints:
Constraint unfilledCpuPowerTotal(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class)
            .groupBy(CloudProcess::getComputer, sum(CloudProcess::getRequiredCpuPower))
            .filter((computer, requiredCpuPower) -> requiredCpuPower < computer.getCpuPower())
            .penalize("unfilledCpuPowerTotal",
                    HardSoftScore.ONE_HARD,
                    (computer, requiredCpuPower) -> computer.getCpuPower() - requiredCpuPower);
}

Constraint unfilledMemoryTotal(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class)
            .groupBy(CloudProcess::getComputer, sum(CloudProcess::getRequiredMemory))
            .filter((computer, requiredMemory) -> requiredMemory < computer.getMemory())
            .penalize("unfilledMemoryTotal",
                    HardSoftScore.ONE_HARD,
                    (computer, requiredMemory) -> computer.getMemory() - requiredMemory);
}

Constraint unfilledNetworkBandwidthTotal(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class)
            .groupBy(CloudProcess::getComputer, sum(CloudProcess::getRequiredNetworkBandwidth))
            .filter((computer, requiredNetworkBandwidth) -> requiredNetworkBandwidth < computer.getNetworkBandwidth())
            .penalize("unfilledNetworkBandwidthTotal",
                    HardSoftScore.ONE_HARD,
                    (computer, requiredNetworkBandwidth) -> computer.getNetworkBandwidth() - requiredNetworkBandwidth);
}

Constraint unusedComputer(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudComputer.class)
            .ifNotExists(CloudProcess.class, equal(Function.identity(), CloudProcess::getComputer))
            .penalize("unusedComputers",
                    HardSoftScore.ONE_HARD,
                    computer -> computer.getCpuPower() + computer.getMemory() + computer.getNetworkBandwidth());
}
I have also removed the cost constraint because it doesn't make sense in this context.
However, I don't want the planner to dump all the available processes into computers.
Meaning, if all the computers are already full and there are unused processes, I would like them to stay that way and not be forced to add more overfill penalty to a computer.
I guess this can be done by somehow ignoring the init penalty, but I can't seem to understand where or how to implement that idea.
I also thought about adding a "dummy" computer entity that just holds processes with no penalty (the planner will still fill regular computers because not filling them will result in a soft penalty), but it seems like a lot of work and requires big changes to almost every part of the project, so if there is a way to implement the first idea it would be preferred.
What you're describing is called over-constrained planning.
Most likely, you are looking for nullable variables.
Your idea with a dummy is called a "virtual value".
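As a rough sketch of the nullable-variable idea (OptaPlanner-style annotations; the value range ref name is illustrative):

```java
@PlanningEntity
public class CloudProcess {

    // nullable = true makes "unassigned" a legal state, so the solver
    // can leave a process off of every computer once they are full.
    @PlanningVariable(valueRangeProviderRefs = {"computerRange"}, nullable = true)
    private CloudComputer computer;

    // Constraints that touch computer must then tolerate null,
    // e.g. by filtering out processes where getComputer() == null.
}
```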

Drools comparing Sets

I'm trying to use OptaPlanner to replace myself in scheduling our work planning.
The system has a MySQL database containing the necessary information and relationships. For this issue I'll only use the three tables I need:
Employees --> Have Skills
Jobs --> Have Skills
Skills
In Drools I have the rule
rule 'required Skills'
when
    Job(employee != null, missingSkillCount > 0, $missingSkillCount : missingSkillCount)
then
    scoreHolder.addHardConstraintMatch(kcontext, -10 * $missingSkillCount);
end
In class Job I have a function getMissingSkillCount():
public int getMissingSkillCount() {
    if (this.employee == null) {
        return 0;
    }
    int count = 0;
    for (Skill skill : this.reqskills) {
        if (!this.employee.getSkills().contains(skill)) {
            count++;
        }
    }
    return count;
}
When I run my program, OptaPlanner returns that none of my workers have any skills...
However, when I manually use this function (adapted to accept an Employee as parameter): public int getMissingSkillCount(Employee employee), it does return the correct values.
I'm puzzled! I somehow understand that contains is checking for the same object, instead of the content of the object. But then I don't understand how to do this efficiently...
1) Are your Jobs in the Drools working memory? I presume they are your @PlanningEntity and the instances are in @PlanningEntityCollectionProperty on your @PlanningSolution, so they will be. You can verify this by just matching a rule on Job() and doing a System.out.println.
2) Try writing the constraint as a ConstraintStream (see docs) and putting a debug breakpoint in the getMissingSkillCount() > 0 lambda to see what's going on.
3) Temporarily turn on FULL_ASSERT to validate there is no score corruption.
4) Turn on DEBUG and then TRACE logging for optaplanner, to see what's going on inside.
Still wondering what makes the difference between letting OptaPlanner run getMissingSkillCount() and using it "manually".
I fixed it by overriding equals(), that should have been my first clue!
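For anyone hitting the same thing: Set.contains relies on equals()/hashCode(), and entities inherit identity-based equals from Object by default, so two Skill instances loaded separately never match. A minimal sketch of the kind of override that fixes it (assuming Skill has a stable id field; adapt to your entity, and remember to override hashCode() together with equals() so HashSet lookups work):

```java
// Illustrative entity; the real Skill class presumably has more fields.
public class Skill {
    private final Long id;

    public Skill(Long id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Skill)) return false;
        // Equality based on content (the id), not object identity.
        return java.util.Objects.equals(id, ((Skill) o).id);
    }

    @Override
    public int hashCode() {
        // Must be consistent with equals() for HashSet.contains to work.
        return java.util.Objects.hashCode(id);
    }
}
```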

GCP Datastore Java API, build an entity with null value

So this is not so much a "I have a bug, how do I fix it?" question as an "Is this really how this works?" question.
So, I am looking over my code to persist an entity into Datastore, and if I try to set an attribute that is null, all hell breaks loose. I noticed that there's a setNull, but is that it? Do I have to manually check every attribute before building it in order to call the appropriate setter? Shouldn't the standard set, which is overloaded for a plethora of datatypes, handle null on its own?
Here is a code piece
public void put(BatchExecution obj) {
    Key key = keyFactory.newKey(obj.getId());
    FullEntity<Key> incBEEntity = Entity.newBuilder(key)
            .set(BatchExecution.ID, obj.getId())
            .set(BatchExecution.NAME, obj.getName())
            .set(BatchExecution.CREATETIME, obj.getCreateTime())
            .set(BatchExecution.ELAPSEDTIME, obj.getElapsedTime()) // the code breaks here because elapsedTime was never set in the object
            .set(BatchExecution.STATUS, obj.getStatus().name())
            .build();
    datastore.put(incBEEntity);
}
Am I missing something here or is this really how the API works?
Concern raised on GitHub:
https://github.com/GoogleCloudPlatform/google-cloud-java/issues/3583
Question has been closed.
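For illustration, the usual workaround is a small wrapper that checks for null and dispatches to setNull. Since the real Entity.Builder lives in the google-cloud-datastore library, the sketch below uses a minimal hypothetical stand-in builder just to show the shape of the pattern:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for the Datastore Entity.Builder, only to
// demonstrate the null-guarding pattern; the real builder has the
// same set/setNull split described in the question.
class FakeBuilder {
    final Map<String, Object> props = new LinkedHashMap<>();

    FakeBuilder set(String name, String value) {
        if (value == null) {
            throw new NullPointerException(name); // mimics the real API's behavior
        }
        props.put(name, value);
        return this;
    }

    FakeBuilder setNull(String name) {
        props.put(name, null);
        return this;
    }
}

class NullSafe {
    // The wrapper: call set() when the value is present, setNull() otherwise.
    static FakeBuilder setOrNull(FakeBuilder b, String name, String value) {
        return value == null ? b.setNull(name) : b.set(name, value);
    }
}
```

With a helper like this, each `.set(...)` line in the original put() becomes a `setOrNull(...)` call and unset attributes no longer blow up the build.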

Android NullPointerException - Debugging a specific line

If I have a line like this:
var.getSomething().getSomethingElse().setNewValue(stuff.getValue().getWhatever());
If that line creates a NullPointerException, is there any way of finding out which method is returning a null value?
I believe I was able to split the line at every dot and get the exception showing which line was failing. But I can't get that to work anymore (maybe I remember incorrectly).
Is the only good debugging possibility to write it like this?
a = var.getSomething();
b = a.getSomethingElse();
c = stuff.getValue();
d = c.getWhatever();
b.setNewValue(d);
With this I should be able to easily see where the exception happens. But it feels inefficient and ugly to write this way.
I use Android Studio. Used Eclipse before but moved to Android Studio some time ago.
You might want to put every part into "Watches".
But I'm pretty sure that both Eclipse and Android Studio will let you inspect the content by just selecting the part you're interested in (if you are in debug mode).
The best advice I can give is to use @Nullable and @NonNull annotations for all methods with return values. It would not help you find the line where the null pointer is, but it would help to prevent such situations in the future.
So if a method may return null and you have it in a call sequence, you will get a warning from Android Studio about this. In this case it is better to break the sequence and check for null.
For example:
private static class Seq {
    private final Random rand = new Random();

    @NonNull
    public Seq nonNull() {
        return new Seq();
    }

    @Nullable
    public Seq nullable() {
        return rand.nextInt() % 100 > 50 ? new Seq() : null;
    }
}
If you write new Seq().nonNull().nonNull().nullable().nonNull(); you will get warning from IDE:
Method invocation `new Seq().nonNull().nonNull().nullable().nonNull()` may produce 'java.lang.NullPointerException'
The best solution in this case is to change the code like so:
Seq seq = new Seq().nonNull().nonNull().nullable();
if (seq != null) {
    seq.nonNull();
}
Don't forget to add the dependency to your Gradle build script:
compile 'com.android.support:support-annotations:22.+'
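As a side note on the original question: if you'd rather not fully split the chain into a/b/c/d, wrapping each step in Objects.requireNonNull with a message makes the eventual NullPointerException name the failing call. A small self-contained sketch (the Inner type is a hypothetical stand-in for the getters in the question):

```java
import java.util.Objects;

// Hypothetical type standing in for var/stuff from the question.
class Inner {
    private final Integer value; // may be null
    Inner(Integer value) { this.value = value; }
    Integer getValue() { return value; }
}

class NpeDemo {
    // Each step is labeled, so the NullPointerException message
    // tells you exactly which call in the chain returned null.
    static int readValue(Inner stuff) {
        Inner s = Objects.requireNonNull(stuff, "stuff is null");
        Integer v = Objects.requireNonNull(s.getValue(), "getValue() returned null");
        return v;
    }
}
```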
I am not positive on the way you are doing it. This makes your code tightly coupled and not unit testable.
var.getSomething().getSomethingElse().setNewValue(stuff.getValue().getWhatever());
Instead do something like
var.getSomething();
that get something internally does whatever you are doing as a part of
getSomethingElse().setNewValue(stuff.getValue().getWhatever())
In the same way getSomethingElse() should perform whatever you are doing as a part of
setNewValue(stuff.getValue().getWhatever())

If my Ehcache is configured with a TTL, do I need to check if a retrieved Element is expired?

I can't find this anywhere in the Ehcache docs.
Right now I'm using this code to create and configure my Cache:
// Groovy syntax
def cacheConfig = new CacheConfiguration('stats', 1)
cacheConfig.timeToLiveSeconds = 2
def cache = new Cache(cacheConfig)
cache.initialise()
and this to retrieve data:
// Groovy syntax
def cachedElement = cache.get('stats')
if (cachedElement != null && !cachedElement.isExpired()) {
    // use the cached data
} else {
    // get/generate the data and cache it
}
return cachedElement.value
I wrote this awhile ago, but looking at it now it seems kinda silly to have to check Element.isExpired() — that shouldn't be necessary, right? I mean, if an element is expired, then the cache shouldn't return it — right?
So I'm thinking I can remove that check — just hoping for a quick sanity check here.
Thanks!
No, you don't have to perform this check. If you get a non-null element back, then it's not expired. It is odd that the javadoc doesn't mention this, right enough, and although I have read this somewhere, I can't find a reference to back up my answer.
If it is possible that your cacheElement could actually be null, then the null check is not enough. Simply returning null when something is expired is not always sufficient in that case. Ehcache has no reliable way of determining if something is expired until it is accessed. You could set the following in the config:
<ehcache:config cache-manager="ehCacheManager">
    <!-- interval is in minutes -->
    <ehcache:evict-expired-elements interval="20"/>
</ehcache:config>
but there is always a chance that your access will fall within that interval.
If you are absolutely certain that your values are never null, then you can drop the isExpired() call.
