I have a rule LHS like this:
when
$location : Location()
$cabinets : ArrayList() from collect ( Cabinet() from $location.elements() )
then
and when I print the content of $cabinets in the RHS I see that it contains all elements (including those that are not of class Cabinet).
I want to collect only Cabinets from $location.elements().
What did I do wrong?
I think you would need something along the lines of
Cabinet(class == Cabinet.class)
I don't think Drools is doing an explicit type check on the Cabinet class in the pattern as you have it, as far as I know.
Your question is about the 'from' pattern, not the 'collect' one.
The following rule should help you test:
when
$location : Location()
$cabinet : Cabinet() from $location.elements()
then
This rule should fire for each Cabinet in location.
You can also try posting your question to the rule-users mailing list.
In the context of using the OWLAPI 4.0 API, the following line of code:
ontologyIRI = IRI.create(o.getOntologyID().getOntologyIRI().toString());
returns the following string:
"Optional.of(http://www.indytion.com/music/composition)".
What I need is the sole string "http://www.indytion.com/music/composition".
I tried to declare ontologyIRI as Optional and use the .get() method, .orElse(), etc., to no avail. The returned string still includes the 'Optional.of()' part.
My question is: how can I get the internal string?
Thank you very much for your help.
Edit: the full code of the method:
private void LoadOntology(String ontologyPath)
{
    OWLOntologyManager man = OWLManager.createOWLOntologyManager();
    OWLOntology o;
    File ontologyFile = new File(ontologyPath);
    Optional<IRI> ontologyIRI;
    try {
        o = man.loadOntologyFromOntologyDocument(ontologyFile);
        ontologyIRI = Optional.of(IRI.create(String.valueOf(o.getOntologyID().getOntologyIRI()).toString()));
        System.out.println("Ontology IRI is: " + ontologyIRI.get());
    } catch (OWLOntologyCreationException e) {
        e.printStackTrace();
    }
}
The System.out.println() prints exactly this string:
"Ontology IRI = Optional.of(http://www.indytion.com/music/composition)"
Use .get() instead of toString()
//Returns 'Optional[example]'
Optional.of("example").toString();
//Returns 'example'
Optional.of("example").get();
Short answer: Replace
Optional.of(IRI.create(String.valueOf(o.getOntologyID().getOntologyIRI()).toString()));
with
o.getOntologyID().getOntologyIRI().get();
Longer answer: you're doing an awful lot of back-and-forth that's pointless at best and actively harmful in some cases.
In no particular order:
others have already commented that IRI instances are immutable, so creating a new one from an existing one is kind of pointless (if harmless).
calling Optional.of() if you don't intend to actually return an Optional is almost always a bad idea.
String.valueOf() is used to get a string representation of some value and is usually most useful for debugging, but it should not be relied on to fully round-trip everything about an object (the same applies to toString()).
So basically what you're left with is this:
o.getOntologyID().getOntologyIRI() gives you an Optional<IRI>
you want an IRI.
Optional::get returns the value contained in the Optional, if one exists, so you simply need to call get().
If, however, the Optional is empty (i.e. there is no underlying value), then get() will throw a NoSuchElementException. This might or might not be what you want. To guard against this, either call isPresent() before calling get() to check whether a value exists, or use one of the other accessor methods that have the check built in.
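For illustration, here is a minimal sketch of the checked variant, assuming o is the OWLOntology loaded in the question:

// Minimal sketch, assuming 'o' is the OWLOntology from the question.
// getOntologyIRI() returns an Optional<IRI>; unwrap it only when a value is present.
Optional<IRI> maybeIri = o.getOntologyID().getOntologyIRI();
if (maybeIri.isPresent()) {
    IRI ontologyIRI = maybeIri.get();
    // Prints the bare IRI, without the "Optional.of(...)" wrapper.
    System.out.println("Ontology IRI is: " + ontologyIRI);
}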
Finally, it seems the problem was not in the code itself. Here is how the problem was solved, although I don't understand why this fixed it:
I copied/pasted (into the same file) the "shouldAddObjectPropertyAssertions()" example from the OWLAPI 4 examples -> this example code runs OK (but does not use the getOntologyID() method as I do).
Changed the SDK to another minor version, '1.8.0_61'.
Changed back to the initial and desired SDK, '1.8.0_131'.
Invalidated caches and restarted the IDE.
Problem solved. The exact same code:
ontologyIRI = o.getOntologyID().getOntologyIRI().get();
System.out.println("Ontology IRI is: " + ontologyIRI);
Now returns the expected string value: "http://www.indytion.com/music/composition" and not "Optional.of(http://www.indytion.com/music/composition)" anymore.
If someone can explain why this fixed it, I would be very glad.
Thank you again for your help.
I'm trying to do exactly what the title says -- I'd like to generate a method spec that looks something like:
public void doSomethingWithThis( Container<? extends ImportantInterface> argument ) {
//1. Collect UnderPants
//2. ...
//3. Profit
}
I understand I can just use the raw type, but the generated code will be consumed by others downstream, and having the type info pop up in their IDEs (and mine, for that matter :/ ) would make my bug-solving life easier down the line...
So, I'm a toolbox, and 7 more minutes of digging around found the path to the answer. The question in the comment points in the right direction, though it uses ParameterizedTypeName.create(), which is now ParameterizedTypeName.get().
Sample code, because someone else might find this useful:
ClassName containerClassName = ClassName.get(Container.class);
TypeName wildcardTypeName = WildcardTypeName.subtypeOf(ImportantInterface.class);
ParameterizedTypeName parameterTypeName = ParameterizedTypeName.get(containerClassName, wildcardTypeName);

classBuilder.addMethod(MethodSpec.constructorBuilder()
        .addModifiers(Modifier.PUBLIC)
        .addParameter(parameterTypeName, "cargo")
        .addStatement(CodeBlock.builder()
                .addStatement("//1. Collect Underpants")
                .addStatement("//2. ...")
                .addStatement("//3. Profit!!!")
                .build())
        .build());
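For completeness, here is a sketch of the same approach applied to the instance method from the question, using methodBuilder instead of constructorBuilder (Container and ImportantInterface stand in for your own types, and addComment assumes a recent JavaPoet version):

// Sketch only: generates
//   public void doSomethingWithThis(Container<? extends ImportantInterface> argument) { ... }
ClassName containerName = ClassName.get(Container.class);
TypeName wildcard = WildcardTypeName.subtypeOf(ImportantInterface.class);
ParameterizedTypeName paramType = ParameterizedTypeName.get(containerName, wildcard);

MethodSpec doSomething = MethodSpec.methodBuilder("doSomethingWithThis")
        .addModifiers(Modifier.PUBLIC)
        .returns(void.class)
        .addParameter(paramType, "argument")
        .addComment("1. Collect UnderPants")
        .addComment("2. ...")
        .addComment("3. Profit")
        .build();

classBuilder.addMethod(doSomething);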
I am looking for a way to write a custom index with Apache Lucene (PyLucene to be precise, but a Java answer is fine).
What I would like to do is the following : When adding a document to the index, Lucene will tokenize it, remove stop words, etc. This is usually done with the Analyzer if I am not mistaken.
What I would like to implement is the following : Before Lucene stores a given term, I would like to perform a lookup (say, in a dictionary) to check whether to keep the term or discard it (if the term is present in my dictionary, I keep it, otherwise I discard it).
How should I proceed?
Here is (in Python) my custom implementation of the Analyzer:
class CustomAnalyzer(PythonAnalyzer):

    def createComponents(self, fieldName, reader):
        source = StandardTokenizer(Version.LUCENE_4_10_1, reader)
        filter = StandardFilter(Version.LUCENE_4_10_1, source)
        filter = LowerCaseFilter(Version.LUCENE_4_10_1, filter)
        filter = StopFilter(Version.LUCENE_4_10_1, filter,
                            StopAnalyzer.ENGLISH_STOP_WORDS_SET)

        ts = tokenStream.getTokenStream()
        token = ts.addAttribute(CharTermAttribute.class_)
        offset = ts.addAttribute(OffsetAttribute.class_)

        ts.reset()
        while ts.incrementToken():
            startOffset = offset.startOffset()
            endOffset = offset.endOffset()
            term = token.toString()
            # accept or reject term
        ts.end()
        ts.close()

        # How to store the terms in the index now ?
        return ????
Thank you in advance for your guidance!
EDIT 1 : After digging into Lucene's documentation, I figured it had something to do with the TokenStreamComponents. It returns a TokenStream with which you can iterate through the Token list of the field you are indexing.
Now there is something to do with the Attributes that I do not understand. Or, more precisely, I can read the tokens, but have no idea how I should proceed afterward.
EDIT 2: I found this post where they mention the use of CharTermAttribute. However (in Python, at least) I cannot access or get a CharTermAttribute. Any thoughts?
EDIT 3: I can now access each term, see the updated code snippet. Now what is left to be done is actually storing the desired terms...
The way I was trying to solve the problem was wrong. This post and femtoRgon's answer were the solution.
By defining a filter extending PythonFilteringTokenFilter, I can make use of the accept() function (like the one used in StopFilter, for instance).
Here is the corresponding code snippet:
class MyFilter(PythonFilteringTokenFilter):

    def __init__(self, version, tokenStream):
        super(MyFilter, self).__init__(version, tokenStream)
        self.termAtt = self.addAttribute(CharTermAttribute.class_)

    def accept(self):
        term = self.termAtt.toString()
        accepted = False
        # Do whatever is needed with the term
        # accepted = ... (True/False)
        return accepted
Then just append the filter to the other filters (as in the code snippet of the question):
filter = MyFilter(Version.LUCENE_4_10_1, filter)
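Since the question says a Java answer is fine too, here is a rough Java sketch of the same idea, assuming Lucene 4.10's FilteringTokenFilter and a caller-supplied set of dictionary terms to keep (the class and parameter names are illustrative):

import java.io.IOException;
import java.util.Set;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.FilteringTokenFilter;

// Keeps only the terms that are present in the supplied dictionary.
public final class DictionaryTokenFilter extends FilteringTokenFilter {

    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final Set<String> dictionary;

    public DictionaryTokenFilter(TokenStream in, Set<String> dictionary) {
        super(in);
        this.dictionary = dictionary;
    }

    @Override
    protected boolean accept() throws IOException {
        // Accept the current token only if it appears in the dictionary.
        return dictionary.contains(termAtt.toString());
    }
}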
I'm working on a Java project and I want to create a list of Maps whose keys are of type Character and whose values are ArrayLists of Characters. I have written something like this:
List<Map<Character, ArrayList<Character>>>
but Eclipse says: Syntax error on token ">>>", VariableDeclarator expected after this token
How can I do it? Any ideas?
The compiler is expecting a variable name to comply with Java syntax:
List<Map<Character, List<Character>>> myList =
new ArrayList<Map<Character, List<Character>>>();
A variable name should be given, e.g.:
List<Map<Character, ArrayList<Character>>> list;
You can declare a variable as an interface (e.g. List) but to create an instance, you must choose an implementation (e.g. ArrayList):
List<Map<Character, ArrayList<Character>>> myList = new ArrayList<Map<Character, ArrayList<Character>>>();
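For illustration, a short sketch of how the structure can then be populated (the keys and values here are made up; it assumes the usual java.util imports):

// Hypothetical example: one map whose key 'a' points at a small list of characters.
Map<Character, ArrayList<Character>> firstMap = new HashMap<Character, ArrayList<Character>>();
firstMap.put('a', new ArrayList<Character>(Arrays.asList('b', 'c')));
myList.add(firstMap);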
In Play framework's (2.0) application controller I am creating a Java LinkedHashMap<String, List<MyObject>> to maintain the order in which I insert the string keys.
I tried iterating over this LinkedHashMap in the template like below:
#for(currentKey <- linkedHashMapInstance.keySet()){
....
loop myObjectList for the currentKey
....
}
I got a random order whenever I refreshed the screen.
Then I tried to change the looping logic to:
#for((currentKey , currentList) <- mapOfCards){
.. used the key and the list
}
Now I am getting a consistent order, but not the order in which I inserted the keys.
The debug log on the server side shows the correct order.
I was under the assumption that a Java LinkedHashMap would maintain insertion order even when it is rendered in a Scala template.
Am I doing something wrong here?
I faced the same issue a few months ago. As @nico_ekito pointed out, it's a problem related to the Java->Scala conversion.
To fix it, try something like this:
#for((currentKey , currentList) <- SortedMap.empty[String, String] ++ mapOfCards) {
}
replacing [String, String] with the types of your (currentKey, currentList).
Hope that helps, it worked for me.
It may be related to the Java->Scala conversion.
Try using .asScala like this:
#for((currentKey , currentList) <- mapOfCards.asScala){
..
}
Update:
It works with:
#for((currentKey , currentList) <- scala.collection.mutable.LinkedHashMap.empty[String, String] ++ mapOfCards) {
}