Using OntProperty and DatatypeProperty - Jena Ontology - java

OntModel onto = ModelFactory.createOntologyModel(
        OntModelSpec.OWL_MEM_MICRO_RULE_INF, null);
String inputFileName = "./src/test.xml";
InputStream in = FileManager.get().open(inputFileName);
if (in == null) {
    throw new IllegalArgumentException("File: " + inputFileName + " not found");
}
onto.read(new InputStreamReader(in), "");
// ns is the namespace...
OntClass userClass = onto.getOntClass(ns + "User");
Individual dada = onto.createIndividual(ns + "Daryl", userClass);
Property prefBathtub = onto.getProperty(ns + "prefersBathtub");
Property prefBathtubWt = onto.getProperty(ns + "prefersBathtubWeight");
dada.addLiteral(prefBathtub, true);
dada.addLiteral(prefBathtubWt, 0.30);
OutputStream out = new FileOutputStream("./src/test2.xml");
onto.write(out, "RDF/XML"); // readable rdf/xml
out.close();
How do I use OntProperty and/or DatatypeProperty instead of just Property?
By using Property do I get the same amount of expressiveness?

To get an ObjectProperty object from an ontology model, use OntModel.getObjectProperty(). Likewise for datatype properties, etc. The Ont classes are more expressive in the sense that they contain convenience API for getting, for example, the super-properties of a property, with one method call. However, as the convenience API only accesses the underlying triples in the graph, there is strictly speaking nothing you can do with an ObjectProperty that you can't do with a Property. It's just harder work!
Incidentally, Jena allows you to access other facets of an underlying RDF resource with the .as() method. So:
Property p = myModel.getProperty( "http://example.com/foo#p" );
OntProperty op = p.as( OntProperty.class );
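A minimal sketch of the typed accessors on OntModel, reusing the onto, ns and dada objects from the question (the likes object property here is hypothetical):
// Typed accessors (may return null if the property is not declared with that type).
DatatypeProperty prefBathtubDt = onto.getDatatypeProperty(ns + "prefersBathtub");
ObjectProperty likes = onto.getObjectProperty(ns + "likes"); // hypothetical object property

// OntProperty adds convenience calls such as getDomain(), getRange() and
// listSuperProperties(), which a plain Property does not expose directly.
OntProperty prefBathtubOnt = onto.getOntProperty(ns + "prefersBathtub");
if (prefBathtubOnt != null) {
    System.out.println("Range: " + prefBathtubOnt.getRange());
}

// The typed property objects slot into the same assertions as before:
dada.addLiteral(prefBathtubDt, true);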

Related

OWL replace object and data property value

I've written this code to replace an object property value:
public void changeObjectPropertyValue(String ind, String propertyFragment, String newValueFragment) {
    OWLNamedIndividual individualToReplaceValueOn = factory.getOWLNamedIndividual(prefix + ind);
    OWLNamedIndividual newValueInd = factory.getOWLNamedIndividual(prefix + newValueFragment);
    OWLObjectProperty theObjectProperty = factory.getOWLObjectProperty(prefix + propertyFragment);
    OWLIndividual theOldValue = EntitySearcher.getObjectPropertyValues(individualToReplaceValueOn, theObjectProperty, ont)
            .findFirst().get();
    OWLAxiom oldAxiom = factory.getOWLObjectPropertyAssertionAxiom(
            theObjectProperty, individualToReplaceValueOn, theOldValue);
    OWLAxiom newAxiom = factory.getOWLObjectPropertyAssertionAxiom(
            theObjectProperty, individualToReplaceValueOn, newValueInd);
    List<OWLOntologyChange> changes = new Vector<OWLOntologyChange>();
    changes.add(new RemoveAxiom(ont, oldAxiom));
    changes.add(new AddAxiom(ont, newAxiom));
    manager.applyChanges(changes);
}
I want to know if this is a correct way to replace a value, and whether there is a method in the OWL API library to do this.
This is correct - and the only way to make this sort of change in the OWL API. Axioms are immutable objects, so the only way to modify an axiom is to recreate it, changing the parts that need modifying in the process.
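For completeness, the same remove-then-add pattern works for data property assertions. A minimal sketch, assuming the same factory, ont, manager and prefix fields as in the question:
public void changeDataPropertyValue(String ind, String propertyFragment, OWLLiteral newValue) {
    OWLNamedIndividual individual = factory.getOWLNamedIndividual(IRI.create(prefix + ind));
    OWLDataProperty property = factory.getOWLDataProperty(IRI.create(prefix + propertyFragment));
    // find the current value (assumes exactly one assertion exists)
    OWLLiteral oldValue = EntitySearcher.getDataPropertyValues(individual, property, ont)
            .findFirst().get();
    List<OWLOntologyChange> changes = new ArrayList<>();
    changes.add(new RemoveAxiom(ont, factory.getOWLDataPropertyAssertionAxiom(property, individual, oldValue)));
    changes.add(new AddAxiom(ont, factory.getOWLDataPropertyAssertionAxiom(property, individual, newValue)));
    manager.applyChanges(changes);
}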

Apache POI can not add CustomProperty to Doc

I'm trying to add some Custom Properties to an existing document:
HWPFDocument document = new HWPFDocument(new FileInputStream(sourceFile));
DocumentSummaryInformation docSumInf = document.getDocumentSummaryInformation();
CustomProperties customProperties = docSumInf.getCustomProperties();
CustomProperty singleProp = null;
//...
singleProp = new CustomProperty();
singleProp.setName(entry.getKey());
singleProp.setType(Variant.VT_LPWSTR);
singleProp.setValue((String) entry.getValue());
//..
customProperties.put(entry.getKey(), singleProp);
docSumInf.setCustomProperties(customProperties);
return document;
However, the properties never make it to the file. I tried to
document.getDocumentSummaryInformation().getCustomProperties().putAll(customProperties);
I also tried
document.getDocumentSummaryInformation().getCustomProperties().put(entry.getKey(), singleProp);
System.out.println(document.getDocumentSummaryInformation().getCustomProperties().size() + " Elemente in Map");
in a loop. The printed size was always one.
With the first attempt (docSumInf.setCustomProperties(customProperties);) I printed out customProperties before setting it on docSumInf. It contained all of the new properties that go missing as soon as I set them on the document summary.
I don't see what I am missing...
Either entry.getKey() is null,
or entry.getKey() has the same value for all CustomProperties in the map,
and because of that you only ever have one element in the map of CustomProperties.
You need to set a non-null, unique name here:
singleProp.setName(entry.getKey());
The CustomProperty class represents custom properties in the document summary information stream. The difference from normal properties is that custom properties have an optional name. If the name is not null, it will be maintained in the section's dictionary.
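Not a verified fix, but a sketch of how the loop might look with unique, non-null names and with the document written back out so the properties actually persist (the source and target file names and the values map are placeholder assumptions):
HWPFDocument document = new HWPFDocument(new FileInputStream("source.doc"));
DocumentSummaryInformation docSumInf = document.getDocumentSummaryInformation();
CustomProperties customProperties = docSumInf.getCustomProperties();
if (customProperties == null) {
    customProperties = new CustomProperties();
}
for (Map.Entry<String, String> entry : values.entrySet()) { // 'values' is an assumed Map<String, String>
    CustomProperty singleProp = new CustomProperty();
    singleProp.setName(entry.getKey()); // must be non-null and unique per property
    singleProp.setType(Variant.VT_LPWSTR);
    singleProp.setValue(entry.getValue());
    customProperties.put(entry.getKey(), singleProp);
}
docSumInf.setCustomProperties(customProperties);
try (FileOutputStream out = new FileOutputStream("target.doc")) {
    document.write(out); // the properties only reach the file once the document is written
}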

OWL API, extracting a string from a URI

Given an arbitrary IRI, such as the main ontology or one of the ontologies it imports, I would like to extract the title but the code yields no annotations.
Here's an example of what I'm talking about, from the SKOS ontology:
<owl:Ontology rdf:about="http://www.w3.org/2004/02/skos/core">
<dct:title xml:lang="en">SKOS Vocabulary</dct:title>
How exactly would I extract "SKOS Vocabulary"?
Here is some code I am currently using from an OWL-API tutorial.
public void testingOWL() throws OWLOntologyCreationException, OWLOntologyStorageException {
    // Get hold of an ontology manager
    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    // Load an ontology from the Web. We load the ontology from a document IRI
    IRI docIRI = IRI.create("http://www.w3.org/2009/08/skos-reference/skos.rdf");
    OWLOntology skos = manager.loadOntologyFromOntologyDocument(docIRI);
    System.out.println("Loaded ontology: " + skos);
    System.out.println();

    // Save a local copy of the ontology. (Specify a path appropriate to your setup)
    File file = new File("e:/downloadAndSaveOWLFile.owl");
    manager.saveOntology(skos, IRI.create(file.toURI()));

    // Ontologies are saved in the format from which they were loaded.
    // We can get information about the format of an ontology from its manager
    OWLOntologyFormat format = manager.getOntologyFormat(skos);
    System.out.println(" format: " + format);
    System.out.println();

    // Save the ontology in OWL/XML format
    OWLXMLOntologyFormat owlxmlFormat = new OWLXMLOntologyFormat();
    // Some ontology formats support prefix names and prefix IRIs.
    // In our case we loaded the ontology from an RDF/XML format, which supports prefixes.
    // When we save the ontology in the new format we will copy the prefixes over
    // so that we have nicely abbreviated IRIs in the new ontology document
    if (format.isPrefixOWLOntologyFormat()) {
        owlxmlFormat.copyPrefixesFrom(format.asPrefixOWLOntologyFormat());
    }
    manager.saveOntology(skos, owlxmlFormat, IRI.create(file.toURI()));

    // Dump an ontology to System.out by specifying a different OWLOntologyOutputTarget
    // Note that we can write an ontology to a stream in a similar way
    // using the StreamOutputTarget class
    OWLOntologyDocumentTarget documentTarget = new SystemOutDocumentTarget();
    // Try another format - the Manchester OWL Syntax
    ManchesterOWLSyntaxOntologyFormat manSyntaxFormat = new ManchesterOWLSyntaxOntologyFormat();
    if (format.isPrefixOWLOntologyFormat()) {
        manSyntaxFormat.copyPrefixesFrom(format.asPrefixOWLOntologyFormat());
    }
    manager.saveOntology(skos, manSyntaxFormat, documentTarget);
}
EDIT: I updated the code based on the suggestion below, but it only returns one object, for rdfs:seeAlso.
public void getData() throws OWLOntologyCreationException {
    // Get hold of an ontology manager
    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    // Load an ontology from the Web. We load the ontology from a document IRI
    IRI docIRI = IRI.create("http://www.w3.org/2009/08/skos-reference/skos.rdf");
    OWLOntology skos = manager.loadOntologyFromOntologyDocument(docIRI);
    for (OWLAnnotation ann : skos.getAnnotations()) {
        System.out.println("ann: " + ann.getProperty());
        System.out.println();
    }
}
The annotation you're looking for is an ontology annotation, meaning the IRI that is its subject is the ontology IRI itself. This is accessed differently from standard annotations.
OWLOntology o = ... // init the ontology as usual
for (OWLAnnotation ann : o.getAnnotations()) {
    if (ann.getProperty().equals(dataFactory.getRDFSLabel())) {
        // here you have found an rdfs:label annotation, so you can use the value for your purposes
    }
}
Edit: Example of use
public static void main(String[] args) throws OWLOntologyCreationException {
    OWLOntologyManager m = OWLManager.createOWLOntologyManager();
    OWLOntology o = m.loadOntology(
            IRI.create("http://www.w3.org/2009/08/skos-reference/skos.rdf"));
    for (OWLAnnotation a : o.getAnnotations()) {
        System.out.println("TestSkos.main() " + a);
    }
}
Output:
TestSkos.main() Annotation(rdfs:seeAlso <http://www.w3.org/TR/skos-reference/>)
TestSkos.main() Annotation(<http://purl.org/dc/terms/creator> "Alistair Miles")
TestSkos.main() Annotation(<http://purl.org/dc/terms/description> "An RDF vocabulary for describing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, 'folksonomies', other types of controlled vocabulary, and also concept schemes embedded in glossaries and terminologies."#en)
TestSkos.main() Annotation(<http://purl.org/dc/terms/contributor> "Participants in W3C's Semantic Web Deployment Working Group.")
TestSkos.main() Annotation(<http://purl.org/dc/terms/creator> "Sean Bechhofer")
TestSkos.main() Annotation(<http://purl.org/dc/terms/contributor> "Nikki Rogers")
TestSkos.main() Annotation(<http://purl.org/dc/terms/title> "SKOS Vocabulary"#en)
TestSkos.main() Annotation(<http://purl.org/dc/terms/contributor> "Dave Beckett")
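Building on that output, one way to pull out just the dct:title value is to compare the annotation property against the dcterms title IRI (a sketch, reusing m and o from the example above):
OWLAnnotationProperty dctTitle = m.getOWLDataFactory().getOWLAnnotationProperty(
        IRI.create("http://purl.org/dc/terms/title"));
for (OWLAnnotation a : o.getAnnotations()) {
    if (a.getProperty().equals(dctTitle) && a.getValue() instanceof OWLLiteral) {
        // prints "SKOS Vocabulary"
        System.out.println(((OWLLiteral) a.getValue()).getLiteral());
    }
}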

Using a resource loader for FileWritingMessageHandler

When using a directory-expression for an <int-file:outbound-gateway> endpoint, the method below is called on org.springframework.integration.file.FileWritingMessageHandler:
private File evaluateDestinationDirectoryExpression(Message<?> message) {
    final File destinationDirectory;
    final Object destinationDirectoryToUse = this.destinationDirectoryExpression.getValue(
            this.evaluationContext, message);
    if (destinationDirectoryToUse == null) {
        throw new IllegalStateException(String.format("The provided " +
                "destinationDirectoryExpression (%s) must not resolve to null.",
                this.destinationDirectoryExpression.getExpressionString()));
    }
    else if (destinationDirectoryToUse instanceof String) {
        final String destinationDirectoryPath = (String) destinationDirectoryToUse;
        Assert.hasText(destinationDirectoryPath, String.format(
                "Unable to resolve destination directory name for the provided Expression '%s'.",
                this.destinationDirectoryExpression.getExpressionString()));
        destinationDirectory = new File(destinationDirectoryPath);
    }
    else if (destinationDirectoryToUse instanceof File) {
        destinationDirectory = (File) destinationDirectoryToUse;
    }
    else {
        throw new IllegalStateException(String.format("The provided " +
                "destinationDirectoryExpression (%s) must be of type " +
                "java.io.File or be a String.", this.destinationDirectoryExpression.getExpressionString()));
    }
    validateDestinationDirectory(destinationDirectory, this.autoCreateDirectory);
    return destinationDirectory;
}
Based on this code I see that if the directory to use evaluates to a String, it uses that String to create a new java.io.File object.
Is there a reason that a ResourceLoader couldn't/shouldn't be used instead of directly creating a new file?
I ask because my expression was evaluating to a String of the form 'file://path/to/file/' which of course is an invalid path for the java.io.File(String) constructor. I had assumed that Spring would treat the String the same way as it treats the directory attribute on <int-file:outbound-gateway> and pass it through a ResourceLoader.
Excerpt from my configuration file:
<int-file:outbound-gateway
request-channel="inputChannel"
reply-channel="updateTable"
directory-expression="
'${baseDirectory}'
+
T(java.text.MessageFormat).format('${dynamicPathPattern}', headers['Id'])
"
filename-generator-expression="headers.filename"
delete-source-files="true"/>
Where baseDirectory is a property that changes per-environment of the form 'file://hostname/some/path/'
There's no particular reason that this is the case, it probably just wasn't considered at the time of implementation.
The request sounds reasonable to me and will benefit others (even though you have found a work-around), by providing simpler syntax. Please open an 'Improvement' JIRA issue; thanks.
While not directly answering the question, I wanted to post the workaround that I used.
In my XML configuration, I changed the directory-expression to evaluate to a file through the DefaultResourceLoader instead of a String.
So this is what my new configuration looked like:
<int-file:outbound-gateway
request-channel="inputChannel"
reply-channel="updateTable"
directory-expression=" new org.springframework.core.io.DefaultResourceLoader().getResource(
'${baseDirectory}'
+
T(java.text.MessageFormat).format('${dynamicPathPattern}', headers['Id'])).getFile()
"
filename-generator-expression="headers.filename"
delete-source-files="true"/>
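For reference, the SpEL expression above does roughly the following in plain Java (the base directory, the pattern and the header value are placeholder assumptions, not the real configuration):
import java.io.File;
import java.text.MessageFormat;
import org.springframework.core.io.DefaultResourceLoader;

public class DirectoryExpressionSketch {
    public static void main(String[] args) throws Exception {
        String baseDirectory = "file://hostname/some/path/";     // assumed per-environment value
        String dynamicPath = MessageFormat.format("{0}/", "42"); // assumed pattern and headers['Id'] value
        // getResource() understands the file:// prefix, unlike the new File(String) constructor
        File destination = new DefaultResourceLoader()
                .getResource(baseDirectory + dynamicPath).getFile();
        System.out.println(destination);
    }
}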

ROME API to parse RSS/Atom

I'm trying to parse RSS/Atom feeds with the ROME library. I am new to Java, so I am not in tune with many of its intricacies.
Does ROME automatically use its modules to handle different feeds as it comes across them, or do I have to tell it to use them? If so, any direction on this would be appreciated.
How do I get to the correct 'source'? I was trying to use item.getSource(), but it is giving me fits. I guess I am using the wrong interface. Some direction would be much appreciated.
Here is the meat of what I have for collecting my data.
I noted two areas where I am having problems, both revolving around getting the source information of the feed. By source, I mean CNN, or Fox News, or whoever, not the author.
Judging from my reading, .getSource() is the correct method.
List<String> feedList = theFeeds.getFeeds();
List<FeedData> feedOutput = new ArrayList<FeedData>();
for (String sites : feedList) {
    URL feedUrl = new URL(sites);
    SyndFeedInput input = new SyndFeedInput();
    SyndFeed feed = input.build(new XmlReader(feedUrl));
    List<SyndEntry> entries = feed.getEntries();
    for (SyndEntry item : entries) {
        String title = item.getTitle();
        String link = item.getUri();
        Date date = item.getPublishedDate();
        Problem here --> ** SyndEntry source = item.getSource();
        String description;
        if (item.getDescription() == null) {
            description = "";
        } else {
            description = item.getDescription().getValue();
        }
        String cleanDescription = description.replaceAll("\\<.*?>", "").replaceAll("\\s+", " ");
        FeedData feedData = new FeedData();
        feedData.setTitle(title);
        feedData.setLink(link);
        And Here --> ** feedData.setSource(link);
        feedData.setDate(date);
        feedData.setDescription(cleanDescription);
        String preview = createPreview(cleanDescription);
        feedData.setPreview(preview);
        feedOutput.add(feedData);
        // lets print out my pieces.
        System.out.println("Title: " + title);
        System.out.println("Date: " + date);
        System.out.println("Text: " + cleanDescription);
        System.out.println("Preview: " + preview);
        System.out.println("*****");
    }
}
getSource() is definitely wrong - it returns the SyndFeed to which the entry in question belongs. Perhaps what you want is getContributors()?
As far as modules go, they should be selected automatically. You can even write your own and plug it in as described here
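If the "source" you are after is just the channel name (CNN, Fox News, and so on), the feed-level title is usually enough. A small sketch, reusing the feed and feedData variables from the question's loop:
// The channel's own title usually carries the publisher name ("CNN", "Fox News", ...),
// so it can be stored on FeedData instead of the item link.
String sourceName = feed.getTitle();
feedData.setSource(sourceName);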
What about trying to regex the source from the URL without using the API?
That was my first thought. Anyway, I checked the standardized RSS format itself to get an idea of whether this option is actually available at that level, and then tried to trace its implementation upwards...
In RSS 2.0 I found the source element; however, it appears that it doesn't exist in previous versions of the spec - not good news for us!
<source> is an optional sub-element of <item>.
Its value is the name of the RSS channel that the item came from, derived from its <title>. It has one required attribute, url, which links to the XMLization of the source.
