I am wondering how the Drools accumulate element works inside OptaPlanner. It seems that the sum() function is not returning the value I expect. I have two simple POJOs, ProductionPackage and SKU.
@PlanningEntity
public class ProductionPackage{
// This is a custom date object containing information about a date. Constant Date initialized on start.
// Custom date object NOT util.Date or sql.Date
private Date date;
// The number of units in this package. Constant value initialized on program start.
private int productionUnits;
// The ID of the production package
private int productionPackageId;
// The SKU assigned to this production package
@PlanningVariable(valueRangeProviderRefs = "skuRange")
private SKU sku;
// Getters, setters, constructors...
}
public class SKU {
// Unique identifier for this product.
private String sku;
//Getters, setters, constructors...
}
The PlanningSolution looks as follows:
@PlanningSolution
public class ProductionPlan {
@ProblemFactCollectionProperty
@ValueRangeProvider(id = "skuRange")
private List<SKU> skuList;
@ProblemFactCollectionProperty
private List<Date> dateList;
@PlanningEntityCollectionProperty
private List<ProductionPackage> productionPackageList;
//Getters, setters, constructors...
}
Basically I have many ProductionPackages and I am trying to assign one SKU to each production package. The productionUnits field of the ProductionPackage specifies how much of the assigned SKU has to be produced. For simplicity, let's say that the productionUnits of all production packages is 1 (it could be any integer > 0).
One constraint of this problem is that there should be at least 10 units produced per SKU per Date. I am trying to implement this constraint using Drools as follows:
rule "At least 10 production units per SKU and per Day"
enabled true
when
// Bind SKU to variable
$sku : SKU()
// Bind Date object to variable
$date : Date()
accumulate(
ProductionPackage(sku == $sku,
date == $date,
$productionUnits : productionUnits);
$productionUnitsTotal : sum($productionUnits);
$productionUnitsTotal < 10
)
then
// For debugging purposes
System.out.println("SKU: " + $sku.toString());
System.out.println("Date: " + $date.toString());
System.out.println("ProductionUnits: " + $productionUnitsTotal);
scoreHolder.addHardConstraintMatch(kcontext, -1);
end
In the Drools documentation it is stated that the accumulate element iterates over all the elements of a collection. My understanding is that this rule selects a combination of SKU and Date, then iterates over the ProductionPackage objects; if a package's date and sku match the previously selected Date and SKU, its productionUnits are summed. I expect this to return the total units produced per SKU and per Date (and it looks like it does, sometimes). However, I am noticing that the System.out.println("ProductionUnits: " + $productionUnitsTotal); statement sometimes prints 0. This is really odd to me, since productionUnits is always an integer > 0; how can the sum be 0? If both conditions match (date == $date and sku == $sku), an SKU should never have 0 production units, because it is only counted once it is already assigned to a production package whose productionUnits > 0.
I think the error stems from the fact that I don't really understand what OptaPlanner or Drools are doing. I have turned on DEBUG/TRACE mode, but that hasn't helped me much.
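One detail worth checking: in Drools, sum over an empty set of matches is 0, and the accumulate still produces a result, so the rule can fire for (SKU, Date) combinations that have no matching ProductionPackage at all. A plain-Java sketch of the same grouping logic (hypothetical names, not the actual Drools engine) shows the same behavior:

```java
import java.util.List;

public class AccumulateSketch {
    // Stand-in for ProductionPackage with only the fields the rule uses
    public record Pack(String sku, String date, int units) {}

    // Sum units for one (sku, date) combination, like the accumulate CE does
    public static int totalFor(List<Pack> packs, String sku, String date) {
        return packs.stream()
                .filter(p -> p.sku().equals(sku) && p.date().equals(date))
                .mapToInt(Pack::units)
                .sum(); // an empty match set sums to 0
    }

    public static void main(String[] args) {
        List<Pack> packs = List.of(new Pack("A", "d1", 1), new Pack("A", "d1", 1));
        System.out.println(totalFor(packs, "A", "d1")); // prints 2
        System.out.println(totalFor(packs, "B", "d1")); // prints 0: nothing matches SKU B on d1
    }
}
```

So a printed 0 does not necessarily mean a matching package had 0 units; it can mean no package matched that SKU/Date pair at all.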
I have read several publications and threads about DDD, and there are many opinions about the relationship between aggregates and entities. I understand that an aggregate should be as simple as possible (ideally one entity per aggregate). But what about the situation where an aggregate holds a collection of entities?
Let's say we have one aggregate called "Month" which contains a collection of "Day" objects (these are domain entities, because they need an identity to be distinguishable, so the aggregate knows which "Day" to modify).
So I have two questions:
Is this a proper approach? Is it just a normal situation I shouldn't be concerned about?
What about "visibility" to the outside? In my approach, an aggregate is "package-private" so that no one can use it in other parts of the system. But what about entities? Should they be visible to other parts of the system just like Value Objects? Or should I create another VO to represent entities externally (for example, when entities are stored in events)?
Thank you for all the answers
Modeling days and months as entities depends a lot on the context. It might not be the best example for explaining aggregates and entities, but let's give it a try.
Let's assume that in our context a day cannot exist on its own; it has to be within a month. If you want to refer to a day, you must specify the month first. That is how we use dates in real life: January 1st, May 8th... Even though the days are entities, they don't need a globally unique identifier. They only need an identifier within the month [1 .. 31].
Aggregates should be as small as possible, but there is no rule that you should have only one entity per aggregate. You just need an aggregate root (the month) that has a unique identifier across the whole system. Within the aggregate, you can have entities (the days) whose identifiers are unique only within the aggregate [1 .. 31]. If you want to refer to or access these entities, you should always go through the aggregate root.
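A minimal Java sketch of this idea (hypothetical classes, not from any particular framework): the Month root has the globally unique identifier, each Day has only a local identity, and all access to Day entities goes through the root.

```java
import java.util.HashMap;
import java.util.Map;

// Day is an entity identified only within its aggregate (1..31)
class Day {
    final int dayOfMonth; // local identity, unique only inside the Month
    String note = "";
    Day(int dayOfMonth) { this.dayOfMonth = dayOfMonth; }
}

// Month is the aggregate root with a globally unique identifier
class Month {
    final String id; // e.g. "2024-01", unique across the whole system
    private final Map<Integer, Day> days = new HashMap<>();

    Month(String id, int numberOfDays) {
        this.id = id;
        for (int d = 1; d <= numberOfDays; d++) days.put(d, new Day(d));
    }

    // All modification of Day entities goes through the root
    void annotate(int dayOfMonth, String note) {
        Day day = days.get(dayOfMonth);
        if (day == null) throw new IllegalArgumentException("no such day: " + dayOfMonth);
        day.note = note;
    }

    String noteFor(int dayOfMonth) { return days.get(dayOfMonth).note; }
}
```

Clients never hold a Day reference directly; they address days by their local identifier through the Month, which is what keeps the aggregate's invariants in one place.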
Aggregate, Entity and ValueObject
In my view, an entity is a child with a name (Id) of an aggregate, which in turn is the father with a name (Id).
An aggregate can have no entities at all.
You might think "then it is just an entity". No: an entity exists ONLY within an aggregate.
An entity (child) is like a small aggregate (father) without further entities (children).
Fathers and children can use value objects, which you can picture as transparent boxes: you can read the content but cannot change anything; if you want to update the box, you MUST create a new one.
The aggregate is responsible for managing its entities: adding, updating, removing, and querying them.
If you want to talk to (query) an entity or entities, you must ask the aggregate; consequently, when you load the aggregate you MUST load all its entities.
An aggregate can have multiple types of entities. How many? That depends on you, on the design, and on the system.
Obviously, a big aggregate with many fields and many entities, each with many rows, may not be efficient; in that case you might promote the largest entities to aggregates of their own, with or without children.
Practical example
I wrote the example in C#, but it is not much different from Java.
class Invoice : ValueObject
{
public string Number { get; private set; }
public DateTime Date { get; private set; }
public decimal TaxableAmount { get; private set; }
public decimal VatAmount { get; private set; }
public decimal TotalAmount => TaxableAmount + VatAmount;
public Invoice(string number, DateTime date, decimal taxableAmount, decimal vatAmount)
{
// validation
Number = number;
[..]
}
}
class Taxonomy : Entity
{
public int Id { get; private set; }
public decimal Amount { get; private set; }
public string Classification { get; private set; }
public Taxonomy(int id, decimal amount, string classification)
{
// validation
Id = id;
[..]
}
// Lets the aggregate root update this entity without exposing public setters
public void Update(decimal amount, string classification)
{
Amount = amount;
Classification = classification;
}
}
class SaleAggregate : AggregateRoot
{
private List<Taxonomy> _taxonomies;
public int Id { get; private set; }
public Invoice Invoice { get; private set; }
public IReadOnlyCollection<Taxonomy> Taxonomies => _taxonomies.AsReadOnly();
public SaleAggregate(int id, string number, DateTime date, decimal taxableAmount, decimal vatAmount)
{
_taxonomies = new List<Taxonomy>();
// I prefer to ALWAYS pass primitive types so I don't rely on the value object
// validation
Id = id;
Invoice = new Invoice(number, date, taxableAmount, vatAmount);
[..]
}
public void AddTaxonomy(int id, decimal amount, string classification)
{
// validation
_taxonomies.Add(new Taxonomy(id, amount, classification));
}
public void UpdateTaxonomy(int id, decimal amount, string classification)
{
// validation
var entity = _taxonomies.FirstOrDefault(p => p.Id == id);
entity.Update(amount, classification);
}
public void RemoveTaxonomy(int id)
{
// validation
var entity = _taxonomies.FirstOrDefault(p => p.Id == id);
_taxonomies.Remove(entity);
}
public void UpdateVatAmount(decimal vatAmount)
{
// validation
Invoice = new Invoice(Invoice.Number, Invoice.Date, Invoice.TaxableAmount, vatAmount);
}
}
Again: this is my view of aggregates, entities, and value objects; other developers reading this should feel free to correct me.
I have to save the old data to a history table whenever a field is changed in the current table. Hence, I have to create a history domain class with the same fields as the original domain class. For now, I am manually creating the history domain class and saving the old data into it whenever the values are updated in the original table.
Is there a way to auto-generate a history domain class with the same fields whenever a new domain class is created?
Main Domain class is:
class Unit {
String name
String description
Short bedrooms = 1
Short bathrooms = 1
Short kitchens = 1
Short balconies = 0
Short floor = 1
Double area = 0.0D
Date expDate
Date lastUpdated
static hasMany = [tenants:Tenant]
static belongsTo = [property: Property]
}
History Domain class should be like this:
class UnitHistory {
String name
String description
Short bedrooms = 1
Short bathrooms = 1
Short kitchens = 1
Short balconies = 0
Short floor = 1
Double area = 0.0D
Date expDate
Date lastUpdated
static hasMany = [tenants:Tenant]
static belongsTo = [property: Property]
}
Maybe you could add beforeInsert and beforeUpdate methods to your Unit domain as follows:
class Unit {
String name
String description
Short bedrooms = 1
Short bathrooms = 1
Short kitchens = 1
Short balconies = 0
Short floor = 1
Double area = 0.0D
Date expDate
Date lastUpdated
def beforeInsert() {
addHistory()
}
def beforeUpdate() {
addHistory()
}
def addHistory(){
new UnitHistory( this.properties ).save( failOnError: true )
}
}
I would need to know more about the actual requirements to say for sure, but one likely solution is to use an event listener that creates an instance of the history class each time an instance of the main class is inserted and/or updated. At https://github.com/jeffbrown/gorm-events-demo/blob/261f25652e5fead8563ed83f7903e52dfb37fb40/src/main/groovy/gorm/events/demo/listener/AuditListener.groovy#L22 there is an example of an event listener. Instead of updating the instance as in that example, you could create a new instance of your history class, copy the relevant fields over, and then persist that newly created history instance.
See https://async.grails.org/latest/guide/index.html#gormEvents for more info about GORM events.
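As a rough illustration of the "copy the relevant fields" step, here is a hypothetical plain-Java sketch (not GORM API) that copies same-named fields from the source object into a freshly created history instance via reflection:

```java
import java.lang.reflect.Field;

public class HistoryCopier {
    // Copies every field of historyType that has a same-named field on source
    public static <T> T copyInto(Object source, Class<T> historyType) throws Exception {
        T history = historyType.getDeclaredConstructor().newInstance();
        for (Field target : historyType.getDeclaredFields()) {
            try {
                Field src = source.getClass().getDeclaredField(target.getName());
                src.setAccessible(true);
                target.setAccessible(true);
                target.set(history, src.get(source));
            } catch (NoSuchFieldException e) {
                // field exists only on the history class; leave its default value
            }
        }
        return history;
    }
}

// Minimal stand-ins for the domain classes above
class Unit { String name = "A1"; Short floor = (short) 2; }
class UnitHistory { String name; Short floor; }
```

In a real listener you would call something like copyInto(unit, UnitHistory.class) and then persist the result; a library such as Spring's BeanUtils or a generated copy could replace the hand-rolled reflection.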
I hope that helps.
I have a method for example,
public Order newOrder(Product prod, Supplier supp)
and I need to generate a unique alphanumeric code with the format "ordn", where n is a progressive number starting from 1, so every time a new order is added the ID increments: "ord2", "ord3"...
How can I do this? Is it possible to do it by substringing?
I know how to generate an integer ID, but this one is a String, so my problem is really how to increment an integer number embedded in a string.
I tried substringing it as String ocode = "ord" + n and just incrementing n, but how can I assign this whole thing to the new order? Or do I need a loop?
(The code has to be a String, I guess; later there is a findOrder() method that retrieves a specific order by accepting the String code. Not sure if that matters.)
btw I'm new to Java, and this is just a part of an exercise.
Solved, turns out the substring works...
You can use a static (tutorial) int and increment it by 1 for each order. The current value of the static counter is the ID of the current order. When you need to return the string ordn, you return "ord" + id. Here's a simple example:
public class Order {
static int sharedCounter = 0; // static, shared by ALL `Order` instances
int orderId = 0; // specific to this particular `Order` instance
public Order() {
this.orderId = ++sharedCounter; // pre-increment so the first order gets 1
}
public String getOrderId(){
return "ord" + this.orderId;
}
}
Note that the counter resets each time the program runs, so the IDs start over at ord1 with every execution. If you're writing this as an exercise, that's probably fine; but if you need to generate genuinely unique IDs for orders in the real world, you'd need to store the counter somewhere, probably a database.
Also, note that I've used a shared int in the example, which isn't thread-safe. If thread safety is important, you'd need an AtomicInteger.
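For completeness, a thread-safe variant along those lines might look like this (hypothetical class name, counter starting at 1 as the question asks):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SafeOrder {
    private static final AtomicInteger COUNTER = new AtomicInteger(0);
    private final int orderId;

    public SafeOrder() {
        // incrementAndGet is atomic, so two threads can never receive the same id
        this.orderId = COUNTER.incrementAndGet();
    }

    public String getOrderId() {
        return "ord" + orderId;
    }
}
```

The first SafeOrder created gets "ord1", the next "ord2", and so on, with no race between concurrent constructors.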
Try
String newOrderId = "ord" + (Integer.parseInt(lastOrderId.substring(3)) + 1);
I'm not sure if it's the way Morphia is designed, but here it goes...
Student.class (methods omitted)
@Entity(value = "students", noClassnameStored = true)
public class Student
{
@Id
private String id = new ObjectId().toString();
private String name;
private String city = "London"; // Default Value
}
NOTE: I have assigned a DEFAULT value to the instance variable city.
Now the code...
Student s1 = new Student("James Bond"); // London IS his default city
ds.save(s1);
Student s2 = new Student("Jason Bourne");
s2.setCity("New York"); // Set a city for Mr. Bourne
ds.save(s2);
System.out.println("----- Normal -----");
List<Student> studentsNormal = ds.createQuery(Student.class).asList();
for(Student s: studentsNormal)
System.out.println(s);
System.out.println("----- Projected -----");
Query<Student> query = ds.find(Student.class);
query.retrievedFields(false, "city");
List<Student> studentsProjected = query.asList();
for(Student s: studentsProjected)
System.out.println(s);
And, now the output...
----- Normal -----
Student{id='57337553db4f0f0f10a93941', name='James Bond', city='London'}
Student{id='57337553db4f0f0f10a93942', name='Jason Bourne', city='New York'}
----- Projected -----
Student{id='57337553db4f0f0f10a93941', name='James Bond', city='London'}
Student{id='57337553db4f0f0f10a93942', name='Jason Bourne', city='London'}
Now, 4 questions about the Morphia behaviour...
Q1. For Mr. Bond, I did not change the city, and I excluded city from the projected list, yet the default value of the city is printed as London. Shouldn't this be null?
Q2. For Mr. Bourne, I did change the city to New York, but during the projection it still picked up the default value and showed London. Shouldn't this be null as well?
I haven't yet looked at Morphia's Projection class (I intend to do so tonight), but it seems that for exclusions Morphia picks up the default value and does not override it with null. This becomes a hindrance, since I have no way to provide default values for my instance variables without exposing them to clients...
Q3. Is this Morphia or a MongoD Java Driver behavior/issue?
And the final question...
Q4. Any known workaround for this?
I have googled around but did not come around any solution or explanation so far...
Thanks in advance...
SG
When Morphia reads your documents from the query results, the first thing it does is create a new instance of your entity, Student, by invoking the no-argument constructor; there's no magic involved. At that point the city field is initialized with its default value. Morphia then takes each key in the document returned from the database, finds the mapped field, and sets it. In your case there is no city key in the document, so that field is never set by Morphia, leaving the initialized value in place.
In general, initializing fields on entities like this is a bad practice. For every entity loaded from the database, the JVM has to initialize those fields to some value only to overwrite them later. And in cases such as yours, where certain fields don't come back in a query result, those initial values remain after Morphia returns the new instances to your application.
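To make the mechanics concrete, here is a hypothetical plain-Java sketch (not Morphia's actual code) of what such a mapping step effectively does: instantiate via the no-arg constructor, then set only the keys present in the returned document.

```java
import java.util.Map;

public class StudentSketch {
    String name;
    String city = "London"; // field initializer runs as part of construction

    static StudentSketch fromDocument(Map<String, Object> doc) {
        StudentSketch s = new StudentSketch();        // city is already "London" here
        if (doc.containsKey("name")) s.name = (String) doc.get("name");
        if (doc.containsKey("city")) s.city = (String) doc.get("city");
        return s;                                     // excluded keys keep their defaults
    }
}
```

When the projection strips the city key from the document, the branch that would assign city never runs, so the constructor's default survives; nothing ever writes null over it.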
I am facing a performance issue in hazelcast while using the Predicate on the hazelcast map.
So I have a model class as shown below:
public class MyAccount {
private String account;
private Date startDate;
private Date endDate;
private MyEnum accountTypeEnum;
// Overrides equals and hashCode using all the attributes
// Has getters and setters for all the attributes
}
Then I create a Hazelcast map of type IMap<MyAccount, String>, in which I save MyAccount objects as keys and an associated string as each value.
Point to note: I save these accounts in different maps (say local, state, national, and international).
Approximately 180,000 MyAccount objects are created and saved in Hazelcast, distributed across the maps according to each account's geographical position. Apart from these, Hazelcast stores another 50,000 string keys and values in different maps (excluding the maps mentioned above).
Then I have a method that uses predicate filters on the attributes account, startDate, and endDate to filter accounts. Let's call this method filter.
public static Predicate filter(String account, Date date) {
EntryObject entryObject = new PredicateBuilder().getEntryObject();
PredicateBuilder accountPredicate = entryObject.key().get(Constants.ACCOUNT).equal(account);
PredicateBuilder startDatePredicate = entryObject.key().get(Constants.START_DATE).isNull().or(entryObject.key().get(Constants.START_DATE).lessEqual(date));
PredicateBuilder endDatePredicate = entryObject.key().get(Constants.END_DATE).isNull().or(entryObject.key().get(Constants.END_DATE).greaterThan(date));
return accountPredicate.and(startDatePredicate.and(endDatePredicate));
}
private void addIndexesToHazelcast() {
Arrays.asList("LOCAL", "STATE", "NATIONAL", "INTERNATIONAL").forEach(location -> {
IMap<Object, Object> map = hazelcastInstance.getMap(location);
map.addIndex("__key." + "startDate", true);
map.addIndex("__key." + "endDate", true);
map.addIndex("__key." + "account", false);
});
}
Issue: For a particular map, say local, which holds around 80,000 objects, when I use the predicate to fetch the values from the map, it takes around 4 - 7 seconds which is unacceptable.
Predicate predicate = filter(account, date);
Collection<String> values = hazelcast.getMap(mapKey).values(predicate); // This line takes 4-7 secs
I am surprised that the cache takes 4-7 seconds to fetch the values for one single account, given that I have added indexes in the Hazelcast maps for those same attributes. This is a massive performance blow.
Could anybody please let me know why this is happening?