Drools FromNodeLeftTuple cannot be cast to ReactiveFromNodeLeftTuple

I was wondering if someone could give me advice. I am getting the exception below when trying to modify a property of a reactive model object from Java code.
java.lang.ClassCastException: org.drools.core.reteoo.FromNodeLeftTuple cannot be cast to org.drools.core.reteoo.ReactiveFromNodeLeftTuple
at org.drools.core.phreak.ReactiveObjectUtil.notifyModification(ReactiveObjectUtil.java:47)
at org.drools.core.phreak.ReactiveObjectUtil.notifyModification(ReactiveObjectUtil.java:42)
at org.drools.core.phreak.AbstractReactiveObject.notifyModification(AbstractReactiveObject.java:41)
at org.drools.compiler.oopath.model.Person.setAge(Person.java:50)
at org.drools.compiler.oopath.OOPathReactiveTests.testSetter2Rules(OOPathReactiveTests.java:127)
I created the following tests to reproduce the problem; the code can be inserted into org.drools.compiler.oopath.OOPathReactiveTests in the drools-compiler module in 7.1.0-SNAPSHOT.
It does not happen when there is only one rule (see testSetter1Rule()); it happens with more rules (testSetter2Rules()).
public class OOPathReactiveTests {

    @Test
    public void testSetter1Rule() {
        String header =
                "import org.drools.compiler.oopath.model.*;\n" +
                "global java.util.List list\n\n";
        String drl1 =
                "rule R1 when\n" +
                "  Man( $m: /wife[age == 25] )\n" +
                "then\n" +
                "  list.add($m.getName());\n" +
                "end\n\n";

        final KieSession ksession = new KieHelper()
                .addContent(header + drl1, ResourceType.DRL)
                .build()
                .newKieSession();

        final List<String> list = new ArrayList<>();
        ksession.setGlobal("list", list);

        final Man bob = new Man("John", 25);
        bob.setWife(new Woman("Jane", 25));

        ksession.insert(bob);
        ksession.fireAllRules();

        bob.getWife().setAge(26);
        ksession.fireAllRules();

        Assertions.assertThat(list).containsExactlyInAnyOrder("Jane");
    }

    @Test
    public void testSetter2Rules() {
        String header =
                "import org.drools.compiler.oopath.model.*;\n" +
                "global java.util.List list\n\n";
        String drl1 =
                "rule R1 when\n" +
                "  Man( $m: /wife[age == 25] )\n" +
                "then\n" +
                "  list.add($m.getName());\n" +
                "end\n\n";
        String drl2 =
                "rule R2 when\n" +
                "  Man( $m: /wife[age == 26] )\n" +
                "then\n" +
                "  list.add($m.getName());\n" +
                "end\n\n";

        final KieSession ksession = new KieHelper()
                .addContent(header + drl1 + drl2, ResourceType.DRL)
                .build()
                .newKieSession();

        final List<String> list = new ArrayList<>();
        ksession.setGlobal("list", list);

        final Man bob = new Man("John", 25);
        bob.setWife(new Woman("Jane", 25));

        ksession.insert(bob);
        ksession.fireAllRules();

        bob.getWife().setAge(26);
        ksession.fireAllRules();

        Assertions.assertThat(list).containsExactlyInAnyOrder("Jane", "Jane");
    }
}
Jane's leftTuples at the moment of the exception are:
leftTuples = {HashSet#3461} size = 2
0 = {FromNodeLeftTuple#3463} "[fact 0:1:1288815068:1288815068:1:DEFAULT:NON_TRAIT:org.drools.compiler.oopath.model.Man:John]"
1 = {ReactiveFromNodeLeftTuple#3469} "[fact 0:1:1288815068:1288815068:1:DEFAULT:NON_TRAIT:org.drools.compiler.oopath.model.Man:John]"
I wonder whether this is a bug or whether I am using it the wrong way.
Thank you very much.
Peter

After posting the problem to the drools-usage Google group (https://groups.google.com/forum/#!forum/drools-usage), it was fixed very quickly by the Drools developers: https://issues.jboss.org/browse/DROOLS-1589

Related

How to calculate statistical significance using the WEKA Java API?

I'm attempting to calculate the statistical significance of classifiers using the WEKA Java API. I was reading the documentation and see that I need to use calculateStatistics from PairedCorrectedTTester, but I'm not sure how to use it.
Any ideas?
public static void main(String[] args) throws Exception {
    ZeroR zr = new ZeroR();
    Bagging bg = new Bagging();

    Experiment exp = new Experiment();
    exp.setPropertyArray(new Classifier[0]);
    exp.setUsePropertyIterator(true);

    SplitEvaluator se = null;
    Classifier sec = null;
    se = new ClassifierSplitEvaluator();
    sec = ((ClassifierSplitEvaluator) se).getClassifier();

    CrossValidationResultProducer cvrp = new CrossValidationResultProducer();
    cvrp.setNumFolds(10);
    cvrp.setSplitEvaluator(se);

    PropertyNode[] propertyPath = new PropertyNode[2];
    propertyPath[0] = new PropertyNode(
            se,
            new PropertyDescriptor("splitEvaluator", CrossValidationResultProducer.class),
            CrossValidationResultProducer.class);
    propertyPath[1] = new PropertyNode(
            sec,
            new PropertyDescriptor("classifier", se.getClass()),
            se.getClass());

    exp.setResultProducer(cvrp);
    exp.setPropertyPath(propertyPath);

    // set classifiers here
    exp.setPropertyArray(new Classifier[]{zr, bg});

    DefaultListModel model = new DefaultListModel();
    File file = new File("dataset arff file");
    model.addElement(file);
    exp.setDatasets(model);

    InstancesResultListener irl = new InstancesResultListener();
    irl.setOutputFile(new File("output.csv"));
    exp.setResultListener(irl);

    exp.initialize();
    exp.runExperiment();
    exp.postProcess();

    PairedCorrectedTTester tester = new PairedCorrectedTTester();
    Instances result = new Instances(new BufferedReader(new FileReader(irl.getOutputFile())));
    tester.setInstances(result);
    tester.setSortColumn(-1);
    tester.setRunColumn(result.attribute("Key_Run").index());
    tester.setFoldColumn(result.attribute("Key_Fold").index());
    tester.setResultsetKeyColumns(
            new Range("" + (result.attribute("Key_Dataset").index() + 1)));
    tester.setDatasetKeyColumns(
            new Range(""
                    + (result.attribute("Key_Scheme").index() + 1) + ","
                    + (result.attribute("Key_Scheme_options").index() + 1) + ","
                    + (result.attribute("Key_Scheme_version_ID").index() + 1)));
    tester.setResultMatrix(new ResultMatrixPlainText());
    tester.setDisplayedResultsets(null);
    tester.setSignificanceLevel(0.05);
    tester.setShowStdDevs(true);
    tester.multiResultsetFull(0, result.attribute("Percent_correct").index());

    System.out.println("\nResult:");
    ResultMatrix matrix = tester.getResultMatrix();
    System.out.println(matrix.toStringMatrix());
}
Results from the code above (screenshot omitted).
What I want is similar to the statistical significance output shown in the WEKA Experimenter GUI (screenshot omitted).
Resources Used:
https://waikato.github.io/weka-wiki/experimenter/using_the_experiment_api/
http://sce.carleton.ca/~mehrfard/repository/Case_Studies_(No_instrumentation)/Weka/doc/weka/experiment/PairedCorrectedTTester.html
You have to swap the key columns for dataset and resultset if you want to statistically evaluate classifiers on datasets (rather than datasets on classifiers): the resultset key columns identify the systems being compared (the schemes), while the dataset key columns identify what they are compared on:
tester.setDatasetKeyColumns(
        new Range("" + (result.attribute("Key_Dataset").index() + 1)));
tester.setResultsetKeyColumns(
        new Range(""
                + (result.attribute("Key_Scheme").index() + 1) + ","
                + (result.attribute("Key_Scheme_options").index() + 1) + ","
                + (result.attribute("Key_Scheme_version_ID").index() + 1)));
That will give you something like this when using the UCI dataset anneal:
Result:
Dataset (1) rules.ZeroR '' | (2) meta.Baggin
--------------------------------------------------------------
anneal (100) 76.17(0.55) | 98.73(1.12) v
--------------------------------------------------------------
(v/ /*) | (1/0/0)
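In this matrix, the annotation next to each result carries the significance information: v marks a result that is significantly better than the baseline in column (1), * marks one that is significantly worse, and a blank means no significant difference at the chosen significance level (0.05 here). The (v/ /*) footer counts the wins, ties and losses per column.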

How to extract the best set of parameters from a TrainValidationSplitModel in Java?

I am using a ParamGridBuilder to construct a grid of parameters to search over and a TrainValidationSplit to determine the best model (a RandomForestClassifier), in Java. Now I want to know which parameters (maxDepth, numTrees) from the ParamGridBuilder produce the best model.
RandomForestClassifier rf = new RandomForestClassifier()
        .setLabelCol("label")
        .setFeaturesCol("features");

Pipeline pipeline = new Pipeline().setStages(new PipelineStage[]{
        new VectorAssembler()
                .setInputCols(new String[]{"a", "b"})
                .setOutputCol("features"),
        rf});

ParamMap[] paramGrid = new ParamGridBuilder()
        .addGrid(rf.maxDepth(), new int[]{10, 15})
        .addGrid(rf.numTrees(), new int[]{5, 10})
        .build();

BinaryClassificationEvaluator evaluator = new BinaryClassificationEvaluator().setLabelCol("label");

TrainValidationSplit trainValidationSplit = new TrainValidationSplit()
        .setEstimator(pipeline)
        .setEstimatorParamMaps(paramGrid)
        .setEvaluator(evaluator)
        .setTrainRatio(0.85);

TrainValidationSplitModel model = trainValidationSplit.fit(dataLog);

System.out.println("paramMap size: " + model.bestModel().paramMap().size());
System.out.println("defaultParamMap size: " + model.bestModel().defaultParamMap().size());
System.out.println("extractParamMap: " + model.bestModel().extractParamMap());
System.out.println("explainParams: " + model.bestModel().explainParams());
System.out.println("numTrees: " + model.bestModel().getParam("numTrees")); // NoSuchElementException: Param numTrees does not exist.
These attempts do not help:
paramMap size: 0
defaultParamMap size: 0
extractParamMap: {
}
explainParams:
I found a way:
Pipeline bestModelPipeline = (Pipeline) model.bestModel().parent();
RandomForestClassifier bestRf = (RandomForestClassifier) bestModelPipeline.getStages()[1];
System.out.println("maxDepth : " + bestRf.getMaxDepth());
System.out.println("numTrees : " + bestRf.getNumTrees());
System.out.println("maxBins : " + bestRf.getMaxBins());

OpenNLP classifier output

At the moment I'm using the following code to train a classifier model:
final String iterations = "1000";
final String cutoff = "0";
InputStreamFactory dataIn = new MarkableFileInputStreamFactory(new File("src/main/resources/trainingSets/classifierA.txt"));
ObjectStream<String> lineStream = new PlainTextByLineStream(dataIn, "UTF-8");
ObjectStream<DocumentSample> sampleStream = new DocumentSampleStream(lineStream);
TrainingParameters params = new TrainingParameters();
params.put(TrainingParameters.ITERATIONS_PARAM, iterations);
params.put(TrainingParameters.CUTOFF_PARAM, cutoff);
params.put(AbstractTrainer.ALGORITHM_PARAM, NaiveBayesTrainer.NAIVE_BAYES_VALUE);
DoccatModel model = DocumentCategorizerME.train("NL", sampleStream, params, new DoccatFactory());
OutputStream modelOut = new BufferedOutputStream(new FileOutputStream("src/main/resources/models/model.bin"));
model.serialize(modelOut);
return model;
This goes well, and after every run I get the following output:
Indexing events with TwoPass using cutoff of 0
Computing event counts... done. 1474 events
Indexing... done.
Collecting events... Done indexing in 0,03 s.
Incorporating indexed data for training...
done.
Number of Event Tokens: 1474
Number of Outcomes: 2
Number of Predicates: 4149
Computing model parameters...
Stats: (998/1474) 0.6770691994572592
...done.
Could someone explain what this output means, and whether it says anything about the accuracy?
Looking at the source, we can tell this output is produced by the NaiveBayesTrainer::trainModel method:
public AbstractModel trainModel(DataIndexer di) {
    // ...
    display("done.\n");

    display("\tNumber of Event Tokens: " + numUniqueEvents + "\n");
    display("\t    Number of Outcomes: " + numOutcomes + "\n");
    display("\t  Number of Predicates: " + numPreds + "\n");

    display("Computing model parameters...\n");
    MutableContext[] finalParameters = findParameters();
    display("...done.\n");
    // ...
}
If you take a look at the findParameters() code, you'll notice that it calls the trainingStats() method, which contains the code snippet that calculates the accuracy:
private double trainingStats(EvalParameters evalParams) {
    // ...
    double trainingAccuracy = (double) numCorrect / numEvents;
    display("Stats: (" + numCorrect + "/" + numEvents + ") " + trainingAccuracy + "\n");
    return trainingAccuracy;
}
TL;DR: the Stats: (998/1474) 0.6770691994572592 part of the output is the training-set accuracy you're looking for.
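Note that this number is measured on the training data itself, so it is usually optimistic. If you want an estimate on unseen data, OpenNLP also ships a DocumentCategorizerEvaluator that can be run against a held-out sample stream. A minimal sketch, assuming a hypothetical test file in the same one-document-per-line format as the training set:

// Hypothetical held-out file in the same format as the training data
InputStreamFactory testDataIn = new MarkableFileInputStreamFactory(
        new File("src/main/resources/trainingSets/classifierA-test.txt"));
ObjectStream<String> testLines = new PlainTextByLineStream(testDataIn, "UTF-8");
ObjectStream<DocumentSample> testSamples = new DocumentSampleStream(testLines);

// Evaluate the trained model against the held-out samples
DocumentCategorizerEvaluator evaluator =
        new DocumentCategorizerEvaluator(new DocumentCategorizerME(model));
evaluator.evaluate(testSamples);
System.out.println("Held-out accuracy: " + evaluator.getAccuracy());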

File diff against the last commit with JGit

I am trying to use JGit to get the differences of a file from the last commit to the most recent uncommitted changes. How can I do this with JGit? (On the command line it would be the output of git diff HEAD.)
Following several discussions (link1, link2) I came up with a piece of code that is able to find the uncommitted files, but I cannot get the differences within the files:
Repository db = new FileRepository("/path/to/git");
Git git = new Git(db);

AbstractTreeIterator oldTreeParser = this.prepareTreeParser(db, Constants.HEAD);
List<DiffEntry> diff = git.diff().setOldTree(oldTreeParser).call();
for (DiffEntry entry : diff) {
    System.out.println("Entry: " + entry + ", from: " + entry.getOldId() + ", to: " + entry.getNewId());
    DiffFormatter formatter = new DiffFormatter(System.out);
    formatter.setRepository(db);
    formatter.format(entry);
}
UPDATE
This issue was a long time ago. My code now does display the uncommitted changes. The current code that I am using, including prepareTreeParser, in the context of displaying the differences, is:
public void gitDiff() throws Exception {
    Repository db = new FileRepository("/path/to/git" + DEFAULT_GIT);
    Git git = new Git(db);

    ByteArrayOutputStream out = new ByteArrayOutputStream();
    DiffFormatter formatter = new DiffFormatter(out);
    formatter.setRepository(git.getRepository());

    AbstractTreeIterator commitTreeIterator = prepareTreeParser(git.getRepository(), Constants.HEAD);
    FileTreeIterator workTreeIterator = new FileTreeIterator(git.getRepository());
    List<DiffEntry> diffEntries = formatter.scan(commitTreeIterator, workTreeIterator);

    for (DiffEntry entry : diffEntries) {
        System.out.println("DIFF Entry: " + entry + ", from: " + entry.getOldId() + ", to: " + entry.getNewId());
        formatter.format(entry);
        String diffText = out.toString("UTF-8");
        System.out.println(diffText);
        out.reset();
    }

    git.close();
    db.close();

    // This code is untested. It is slightly different from the code I am using in production,
    // but it should be very easy to adapt it to your needs.
}

private static AbstractTreeIterator prepareTreeParser(Repository repository, String ref) throws Exception {
    Ref head = repository.getRef(ref);
    RevWalk walk = new RevWalk(repository);
    RevCommit commit = walk.parseCommit(head.getObjectId());
    RevTree tree = walk.parseTree(commit.getTree().getId());

    CanonicalTreeParser oldTreeParser = new CanonicalTreeParser();
    ObjectReader oldReader = repository.newObjectReader();
    try {
        oldTreeParser.reset(oldReader, tree.getId());
    } finally {
        oldReader.release();
    }
    return oldTreeParser;
}
The following setup works for me:
DiffFormatter formatter = new DiffFormatter(System.out);
formatter.setRepository(git.getRepository());

AbstractTreeIterator commitTreeIterator = prepareTreeParser(git.getRepository(), Constants.HEAD);
FileTreeIterator workTreeIterator = new FileTreeIterator(git.getRepository());

List<DiffEntry> diffEntries = formatter.scan(commitTreeIterator, workTreeIterator);
for (DiffEntry entry : diffEntries) {
    System.out.println("Entry: " + entry + ", from: " + entry.getOldId() + ", to: " + entry.getNewId());
    formatter.format(entry);
}
The uncommitted changes are made accessible through the FileTreeIterator. Using formatter.scan() instead of the DiffCommand has the advantage that the formatter is set up properly to handle the FileTreeIterator. Otherwise you will get MissingObjectExceptions as the formatter tries to locate changes from the work tree in the repository.
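For completeness, here is a self-contained sketch of the whole git diff HEAD flow with the same approach. It assumes a reasonably recent JGit (4.x or later, where these types are AutoCloseable) and a placeholder repository path:

// "/path/to/repo" is a placeholder for a working tree that contains .git
try (Git git = Git.open(new File("/path/to/repo"));
     RevWalk walk = new RevWalk(git.getRepository());
     ObjectReader reader = git.getRepository().newObjectReader();
     DiffFormatter formatter = new DiffFormatter(System.out)) {

    formatter.setRepository(git.getRepository());

    // Tree of the HEAD commit = the "old" side of the diff
    ObjectId headId = git.getRepository().resolve(Constants.HEAD);
    RevCommit headCommit = walk.parseCommit(headId);
    CanonicalTreeParser headTree = new CanonicalTreeParser();
    headTree.reset(reader, headCommit.getTree().getId());

    // Working tree = the "new" side of the diff
    FileTreeIterator workTree = new FileTreeIterator(git.getRepository());

    for (DiffEntry entry : formatter.scan(headTree, workTree)) {
        formatter.format(entry);
    }
}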

How to get "supportedControl" from LDAP with com.novell.ldap

I want to get the value of a control OID from LDAP. For example, when I use ldapsearch on Linux:
ldapsearch -H ldap://host:port -x -wsecret -D "cn=manager,managedElementId=HSS1"
-b "dn" "objectClass=ConfigOAM" -E"1.3.6.1.4.1.637.81.2.10.10"
I get results:
...
control: 1.3.6.1.4.1.637.81.2.10.10 false AgEB
objectClass: top
objectClass: ConfigOAM
confOAMId: 1
...
My Java code looks like this:
LDAPConnection connection = new LDAPConnection();
connection.connect(hostName, port);
connection.bind(LDAPConnection.LDAP_V3, userDN, password);

String returnedAttributes[] = {"+", "*"};
boolean attributeOnly = false;
String oid;

LDAPSearchResults results = connection.search("", LDAPConnection.SCOPE_BASE, "(objectClass=*)", returnedAttributes, attributeOnly);
LDAPEntry entry = results.next();
System.out.println("\n" + entry.getDN());
System.out.println("    Attributes: ");

LDAPAttributeSet attributeSet = entry.getAttributeSet();
Iterator allAttributes = attributeSet.iterator();
while (allAttributes.hasNext()) {
    LDAPAttribute attribute = (LDAPAttribute) allAttributes.next();
    String attrName = attribute.getName();
    System.out.println("    " + attrName);

    Enumeration allValues = attribute.getStringValues();
    while (allValues.hasMoreElements()) {
        oid = (String) allValues.nextElement();
        if ((attrName.equalsIgnoreCase("supportedExtension")) || (attrName.equalsIgnoreCase("supportedControl"))) {
            System.out.println("        " + oid);
        }
    }
}
and the result is:
...
supportedControl
2.16.840.1.113730.3.4.2
1.2.840.113556.1.4.319
1.2.826.0.1.3344810.2.3
1.3.6.1.1.12
1.3.6.1.4.1.637.81.2.10.11
1.3.6.1.4.1.637.81.2.10.10
1.3.6.1.4.1.637.81.2.10.9
1.3.6.1.4.1.637.81.2.10.6
...
Please advise how I can get the additional value "false AgEB" in Java, as I get it in ldapsearch.
You would need to add the control to the search request and be able to interpret the response.
There are some examples available:
http://www.novell.com/documentation/developer/samplecode/jldap_sample/
-jim
Thank you for your answer :)
I made something like this from the samples available on that site and from other sources:
lc.connect(ldapHost, ldapPort);
lc.bind(ldapVersion, loginDN, password.getBytes("UTF8"));

LDAPControl ldapCtrl = new LDAPControl("1.3.6.1.4.1.637.81.2.10.10", false, null);
LDAPSearchConstraints cons = lc.getSearchConstraints();
cons.setControls(ldapCtrl);
lc.setConstraints(cons);

LDAPSearchResults searchResults = lc.search("", LDAPConnection.SCOPE_BASE, "(objectclass=*)", returnedAttributes, attributeOnly, cons);
LDAPControl[] controls = searchResults.getResponseControls();
but my "controls" varaible is always null, even if supportedControls are listed
LDAPEntry entry1 = searchResults.next();
System.out.println("\n" + entry1.getDN());
System.out.println("    Attributes: ");

LDAPAttributeSet attributeSet1 = entry1.getAttributeSet();
Iterator allAttributes1 = attributeSet1.iterator();
while (allAttributes1.hasNext()) {
    LDAPAttribute attribute = (LDAPAttribute) allAttributes1.next();
    String attrName = attribute.getName();
    System.out.println("    " + attrName);

    Enumeration allValues1 = attribute.getStringValues();
    while (allValues1.hasMoreElements()) {
        oid = (String) allValues1.nextElement();
        if ((attrName.equalsIgnoreCase("supportedExtension")) || (attrName.equalsIgnoreCase("supportedControl"))) {
            System.out.println("        " + oid);
        }
    }
}
Maybe the searchResults options are wrong?
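One thing worth checking, as a guess based on how JLDAP delivers controls: getResponseControls() only returns controls once the corresponding result messages have been read, so calling it directly after search() can yield null. Reading the entries first and asking for the controls afterwards would look like this (the interpretation of AgEB as the Base64 of a BER-encoded INTEGER is an assumption based on the ldapsearch output above):

// Consume the search results first ...
while (searchResults.hasMore()) {
    LDAPEntry e = searchResults.next();
    System.out.println(e.getDN());
}

// ... then ask for the response controls
LDAPControl[] controls = searchResults.getResponseControls();
if (controls != null) {
    for (LDAPControl control : controls) {
        byte[] value = control.getValue();
        // "AgEB" from ldapsearch decodes to the BER bytes 02 01 01,
        // i.e. an INTEGER with value 1 (assumption based on the output above)
        System.out.println(control.getID()
                + " critical=" + control.isCritical()
                + " value=" + (value == null ? "null"
                        : java.util.Base64.getEncoder().encodeToString(value)));
    }
}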
