Drools Decision Table Action Execution Order - java

I have a Drools decision table (see below) in which Rule 2 has a condition that checks whether a nutrient score falls between certain thresholds and executes an action based on that condition. There is an initial rule (Rule 1) that performs a check and, in its action, updates the overall score that I want Rule 2 to use when evaluating its conditions.
What I expect/need:
Rule 1 runs; if its condition is met, it updates the overall score on $model (by executing its action). Rule 2 then runs, and its conditions use the updated score value set by Rule 1's action.
What's actually happening:
Rule 1 evaluates its condition, Rule 2 evaluates its condition, Rule 1's action runs, Rule 2's action runs. Rule 2 is evaluating its condition before Rule 1's action has run, and therefore uses an outdated score.
I've shown (I think), by changing the priority/salience value, that I can change the order in which the rules evaluate their conditions, but it seems that all rule conditions are evaluated before any actions run. I expected Rule 1's action to run before the next rule.
Have I fundamentally misunderstood the concept? Made an obvious mistake? If anyone has a suggested workaround, that would be great.
To clarify, this is a stateless KIE session.
Thanks in advance, this is driving me mad!

Drools works by taking all of the rules up front and evaluating whether their conditions are satisfied. Each pairing of a rule with facts that satisfy its conditions is called a "match". When you fire the rules, Drools collects all of the matches, orders them (either naturally or by salience), and then iterates through and executes them one by one.
As the rules are executed, they might change working memory like your example does. Unless you explicitly tell Drools that you're doing so, however, it won't re-evaluate the matches. The match phase has already been completed by the time the rules are executed.
It is possible to tell Drools that you are modifying working memory and that you need it to re-evaluate its rules based on the new data. To do this, you need to use one of the built-in methods:
Method - Explanation
insert - Put a new fact into working memory.
delete (or retract) - Remove an object (or objects) from working memory.
update - Update/replace a fact in working memory.
modify - Change fields inside a fact in working memory.
Which one you choose depends on what you're trying to do. Note that calling 'update' will cause all matches to be re-evaluated ... it's the equivalent of calling "fire rules" a second time with the new data (so the same rule might fire multiple times, which may or may not be intentional). In comparison, insert only causes the subsequent rules to be evaluated to determine whether or not they now match, based on the new data.
So if your intention is to cause other rules to fire or not by changing the data in working memory, you'll need to use one of these built-in methods to tell Drools that you're making a change that it should re-evaluate its matches for.
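For example, at the DRL level (which is what a decision table is compiled into), the key piece is the update(...) call in the first rule's consequence. Below is a minimal, self-contained sketch, not your actual rules: the fact type, field names, thresholds and the KieHelper loading code are all assumptions made for illustration.

    import org.kie.api.KieBase;
    import org.kie.api.definition.type.FactType;
    import org.kie.api.io.ResourceType;
    import org.kie.api.runtime.StatelessKieSession;
    import org.kie.internal.utils.KieHelper;

    public class UpdateExample {

        // Hypothetical rules; the important line is update( $m ) in "Rule 1".
        private static final String DRL =
            "package com.example.rules\n" +
            "declare NutritionModel\n" +   // fact type declared in DRL to keep the sketch self-contained
            "    energyKj : int\n" +
            "    score    : int\n" +
            "    category : String\n" +
            "end\n" +
            "rule \"Rule 1 - adjust score\" salience 10\n" +
            "when\n" +
            "    $m : NutritionModel( energyKj > 1000 )\n" +
            "then\n" +
            "    $m.setScore( $m.getScore() + 5 );\n" +
            "    update( $m );  // tells Drools to re-evaluate matches against the new score\n" +
            "end\n" +
            "rule \"Rule 2 - classify\"\n" +
            "when\n" +
            "    $m : NutritionModel( score >= 5 )\n" +
            "then\n" +
            "    $m.setCategory( \"HIGH\" );\n" +
            "end\n";

        public static void main(String[] args) throws Exception {
            KieBase kieBase = new KieHelper().addContent(DRL, ResourceType.DRL).build();
            FactType type = kieBase.getFactType("com.example.rules", "NutritionModel");

            Object model = type.newInstance();
            type.set(model, "energyKj", 1500);

            StatelessKieSession session = kieBase.newStatelessKieSession();
            session.execute(model);  // without update( $m ) in Rule 1, Rule 2 would never fire

            System.out.println(type.get(model, "category"));  // expected: HIGH
        }
    }

In a decision table the same idea applies: the ACTION column of the first rule needs to call update (or use a modify block) on the fact so that the later rules see the new score.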
I discuss this concept in more detail in this answer specifically about DRL. The same concepts apply to decision tables.

Related

Can a rules file have a dependency on other rules?

I'm using the Drools rules engine for my new service. I'm looking at cleaning up the rules and making them easier to write. I was hoping to use a rules-framework style of coding. That is, I want a rules file whose sole purpose is to validate the input data (e.g. the input list isn't null and contains a specific value). Then, when I write new rules files, I can just import that file and run the validation before all other rules.
Also, I know I can load multiple rules files into the KieSession. Is it possible to tell it which order to run the rules files in, or which files to skip for each use case? The idea behind this is performance. Let's say I load up an AWS Lambda function with the rules service: I want to have all rules loaded already and have it run the specific ones for the use case, instead of loading a rules file on each call.
Thanks for the help.
You asked
Is it possible to tell it which order to run the rules files in, or which files to skip for each use case?
The answer is yes. The mechanism for this is called salience; the linked article is a good source for learning about it. This matters because salience makes it possible to change the order in which rules are executed.
Hypothetically, let's say you have a program that processes transactions based on categories. If category == deposit, you want to add to the current balance. You have another rule that, if category == withdrawal, subtracts from the current balance. BUT you want to process deposits before withdrawals. Using salience, you can guarantee that the deposit rule fires before the withdrawal rule regardless of the order of the transactions (sketched below).
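For illustration, a DRL fragment along those lines, with Transaction and Account as hypothetical fact classes (imports omitted); only the salience values matter here:

    public class SalienceOrderingSketch {

        // Hypothetical rules: deposits are applied before withdrawals because of salience.
        static final String DRL =
            "rule \"Process deposits first\" salience 10\n" +
            "when\n" +
            "    $t : Transaction( category == \"deposit\" )\n" +
            "    $a : Account()\n" +
            "then\n" +
            "    $a.setBalance( $a.getBalance() + $t.getAmount() );\n" +
            "end\n" +
            "\n" +
            "rule \"Process withdrawals\" salience 5\n" +
            "when\n" +
            "    $t : Transaction( category == \"withdrawal\" )\n" +
            "    $a : Account()\n" +
            "then\n" +
            "    $a.setBalance( $a.getBalance() - $t.getAmount() );\n" +
            "end\n";
    }

Higher salience wins, so every deposit activation fires before any withdrawal activation, regardless of the order in which the transactions were inserted.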
Dependency is kind of related, but not the same. In Drools, this is known as forward or backward chaining, depending on the direction. It is all built up from a series of facts. For example, if I were to ask a system "is my house on planet Earth?", the conclusion is reached if the following facts exist:
My house is in Fort Worth
Fort Worth is a city in Texas
Texas is a state in the United States
United States is a country on Earth
A direct fact linking the house's location to this planet is asserted based on the rules that verify the enumerated facts above. This is done using chaining; forward or backward is just the direction in which the facts are processed (top-down or bottom-up). That's it in a nutshell, to the best of my recollection.
Yes, it is possible, by using salience to assign priorities to some rules so that they execute before others. We can also make use of the inference engine to create facts that make other rules eligible for execution (see the sketch below).
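As a rough sketch of that inference idea (the Located fact type and both rules are invented; the base facts, such as "my house" being in "Fort Worth", would be inserted from application code), each inserted fact can make further rules eligible to fire:

    public class ChainingSketch {

        // Hypothetical DRL: the first rule keeps inserting inferred facts until the
        // second rule's condition is satisfied.
        static final String DRL =
            "package com.example.rules\n" +
            "declare Located\n" +
            "    item  : String\n" +
            "    place : String\n" +
            "end\n" +
            "rule \"Location is transitive\"\n" +
            "when\n" +
            "    Located( $i : item, $p : place )\n" +
            "    Located( item == $p, $q : place )\n" +
            "    not Located( item == $i, place == $q )\n" +
            "then\n" +
            "    insert( new Located( $i, $q ) );  // inferred fact, may enable other rules\n" +
            "end\n" +
            "rule \"House is on Earth\"\n" +
            "when\n" +
            "    Located( item == \"my house\", place == \"Earth\" )\n" +
            "then\n" +
            "    System.out.println( \"Conclusion reached by chaining inserted facts\" );\n" +
            "end\n";
    }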

Detecting termination in CP-SAT

I was exploring the CP-SAT APIs for fetching all solutions for a given set of constraints.
As per the API documentation, the onSolutionCallback() function is called for every solution found. However, if I need to find all solutions for a given model, is there a way to detect the last solution, or that no more solutions are feasible, through the onSolutionCallback() function or by other means?
I found that the searchAllSolutions() API can be used and that termination conditions can be set based on time or the number of solutions. Assuming I can wait an unlimited amount of time, how do I detect that there are no more feasible solutions?
https://developers.google.com/optimization/cp/cp_tasks
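For reference, the pattern in question looks roughly like the sketch below (the tiny model is invented). As far as I can tell from the documentation, the CpSolverStatus returned by searchAllSolutions() is what signals completion: OPTIMAL means the enumeration was exhaustive and no more solutions exist, while FEASIBLE means a limit (time, number of solutions, ...) stopped the search before that could be proven. The callback itself is never told that a given solution is the last one.

    import com.google.ortools.Loader;
    import com.google.ortools.sat.CpModel;
    import com.google.ortools.sat.CpSolver;
    import com.google.ortools.sat.CpSolverSolutionCallback;
    import com.google.ortools.sat.CpSolverStatus;
    import com.google.ortools.sat.IntVar;

    public class EnumerateAllSolutionsSketch {
        public static void main(String[] args) {
            Loader.loadNativeLibraries();

            // Toy model: two small integer variables that must differ.
            CpModel model = new CpModel();
            IntVar x = model.newIntVar(0, 2, "x");
            IntVar y = model.newIntVar(0, 2, "y");
            model.addDifferent(x, y);

            CpSolver solver = new CpSolver();
            CpSolverStatus status = solver.searchAllSolutions(model, new CpSolverSolutionCallback() {
                @Override
                public void onSolutionCallback() {
                    // Called once per solution; it cannot tell whether more solutions remain.
                    System.out.println("x=" + value(x) + " y=" + value(y));
                }
            });

            // OPTIMAL: the search space was fully explored and every solution was reported.
            // FEASIBLE: a limit stopped the search early, so there may be more solutions.
            System.out.println("Final status: " + status);
        }
    }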
Another related question:
Is there any chance, however remote, of the CP-SAT solver getting into a non-deterministic state or running into an infinite loop (or the like), even when a feasible solution exists for the given set of constraints?
I plan to use CP-SAT for a production use case and hence would like to understand its determinism and the upper bounds on its execution.
Edit: Added the second question.

IntelliJ debugging - Watch for any variable containing a specific value

Is it possible to set a global watch or breakpoint, tied to no specific variable or line of code, with the aim of identifying the first occurrence of a value?
I'm debugging an old Java 6 web app that does a million calculations across multiple databases and dozens of classes with thousands of lines of code each. I'm not exactly sure which of these dozens of classes are being called within this project containing hundreds of classes.
Let's say I'm looking for where "the dog runs" appears in the flow of a calculation: is it possible to listen for the first appearance of that string, with the intention of finding which variable will contain the value?
I do not think it is possible to monitor where a value comes into existence, as I think you are asking.
However, you can set "field watchpoints", which are basically conditions like break when x = "the dog runs". When I have used these before, the program ran very, very slowly.
Refer here for details of how to set one: IntelliJ breakpoints.
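A rough sketch of the setup (class and field names are made up): put a field watchpoint on the field you suspect, and add a condition so the debugger only pauses when the interesting value shows up.

    public class Calculation {

        // Hypothetical field: add a field watchpoint on it in IntelliJ (suspend on
        // "field modification"), then set the watchpoint condition to:
        //     "the dog runs".equals(result)
        // Execution pauses on the assignment below only when that value is written.
        private String result;

        public void compute() {
            result = buildDescription();  // the watchpoint fires on this write
        }

        private String buildDescription() {
            return "the dog runs";
        }

        public static void main(String[] args) {
            new Calculation().compute();
        }
    }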
Another technique is to start near the top of where you think the value is being set to what you want. Step over each method call until you see the value set to what you are looking for. Then repeat the process inside the method where you saw it change (i.e. this time, step into the method where the value changed).
Within this "second-level method", you will probably see another call that sets the value you are looking for. So repeat the process again until you find the call that actually sets the value (e.g. a read from a database, a regex match, etc.).
It sounds tedious, but it is a "divide and conquer" approach: each pass eliminates a huge chunk of code that doesn't set the value you are looking for and narrows things down to the one method call that does. Then you divide and conquer the inner workings of that method. In practice, it doesn't take long to narrow it down.

How do I make a change set in RTC (eclipse) re-editable?

I've submitted a change set for review. Unfortunately, I forgot to refresh my sandbox first, which means I did not include some changes in that set.
So I have lost the option to add changes to my change set.
I don't want to discard that change set because it contains important changes. I also don't want to have to deliver two change sets, because the changes form atomic logic (logic that can't be split).
I have a feeling that the "reverse" option would get my change set back into an editable state, but I really have no idea what to do here.
To sum up: I need to make my change set editable again, so that I can merge it with another one.
Does anyone know how I would do this?
Thanks, you guys rule!
I don't think you can revert your change set to a mutable state if that change set was "completed" before being submitted for review.
In that case, a "reverse" (i.e. creating a new change set that cancels the previous one), followed by a new change set in which you redo your work and re-submit it for review, might be the only solution.
However, following this example of code review in RTC, change sets should be kept mutable during the review (so that the original programmer can check in new revisions of their files based on the feedback of the reviewers).
You should create a new change set.
I say this for two reasons:
1) The aesthetic argument for only having one change set per work item quickly breaks down in practice - it's easy to forget a change, and you may have to make amendments due to bugs or review comments.
2) Having multiple change sets makes your changes easier to understand. Each change set can contain a logical set of changes, so a single work item may have three change sets: "Refactor code", "Update copyrights", and "Changes from review". That way, when someone annotates the files in future, they'll get something at a finer granularity than the initial work item.
Regarding the "atomic logic" argument: it probably isn't an issue unless your team is in the habit of delivering/discarding individual change sets. On the RTC project, we regularly split logically discrete changes across multiple change sets and multiple components.
If you're concerned that you may deliver change sets that logically depend on changes in other components (as I occasionally do), I suggest you chime in on bug 150421. Bug 153907 describes a similar problem, but requires a much more complex solution (making it less likely to be implemented without customer pressure).
I ran into the same issue and decided to create a patch, discard my changes, and then create a new change set.

Do testing frameworks exist that allow a percentage of failure?

When reading a question about testing a Java program several times with random seeds, the word "testing" sparked an association with unit testing in my mind, but that might not have been the kind of testing going on there.
In my opinion, introducing randomness into a unit test would be considered bad practice, but then I started considering the case where a (small) percentage of failures might be acceptable for the moment.
For example, the code fails the unit test once every 10^n runs for some n > 3, and gradually you want n to go to infinity without the test going red (maybe yellowish).
Another example might be a system-wide test where most of the time things go right, but you still want to limit/know how often they might go wrong.
So my question is: are there any frameworks (across the full spectrum of testing) out there that can be persuaded to allow a percentage of failure over a huge/excessive number of repeated tests?
You can "persuade" most testing frameworks to allow partial failures by performing your "tests" not with the framework's assertions but with plain old conditionals (e.g. if-statements) and recording the percentage of failures. Then use the framework to assert that this percentage is below your threshold.
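A minimal JUnit 4 sketch of that idea (the "system under test" and the threshold are invented for illustration):

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class FailureRateTest {

        // Hypothetical system under test; a deterministic stand-in that fails for a
        // small fraction of seeds.
        private boolean runOnce(long seed) {
            return new java.util.Random(seed).nextInt(10000) != 0;
        }

        @Test
        public void failureRateStaysBelowThreshold() {
            int runs = 100000;
            int failures = 0;
            for (int i = 0; i < runs; i++) {
                if (!runOnce(i)) {      // plain conditional instead of an assertion per run
                    failures++;
                }
            }
            double failureRate = (double) failures / runs;
            // Only this final assertion uses the framework, so a handful of individual
            // failures does not turn the whole test red.
            assertTrue("failure rate was " + failureRate, failureRate < 0.001);
        }
    }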
If your tests are deterministic (which is generally considered to be a good thing) then another approach is to test for the current behavior even when it's wrong, but to comment the incorrect assertions (typically with what the "right" answer should be). If these tests ever fail you can check the comment. If the code has become "correct" then great, update that assertion. If not, then decide whether the change in behavior is better or worse than the old behavior, and act accordingly. This approach lets you sort of "tighten the screws" over time.
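And a tiny sketch of the second approach: pin down what the code currently does, and note the arguably "right" answer in a comment (the method under test is invented).

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class CharacterizationTest {

        // Hypothetical method under test: current behaviour truncates instead of rounding.
        private int roundPrice(double price) {
            return (int) price;
        }

        @Test
        public void documentsCurrentRoundingBehaviour() {
            // Pins the behaviour the code has today; if this ever fails, the behaviour
            // changed and the comment tells us what the "right" answer was thought to be.
            assertEquals(2, roundPrice(2.7)); // arguably should be 3
        }
    }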
