Within a .owl file, I'm declaring some prefixes like this:
Prefix(:=<http://default.ont/my_ont/>)
Prefix(ex:=<http://example.org/ex#>)
Prefix(ex2:=<http://example2.org/ex#>)
...
And using my ontology in a Java project like this:
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File(resourceFullPath(ontologyFilename)));
Now I want to build a Map<String, String> in a programmatic way with the following content:
{
"" -> "http://default.ont/my_ont/",
"ex" -> "http://example.org/ex#",
"ex2" -> "http://example2.org/ex#"
}
How can I do this with OWL API (i.e. without parsing the .owl file by myself)?
The prefixes found during parsing are held as part of the OWLDocumentFormat instance associated with the ontology:
OWLDocumentFormat format = manager.getOntologyFormat(ontology);
if (format.isPrefixOWLDocumentFormat()) {
    // this is the map you need
    Map<String, String> map = format.asPrefixOWLDocumentFormat().getPrefixName2PrefixMap();
}
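For example, you could print the bindings like this (just a usage sketch building on the snippet above; note that the OWL API keeps the trailing colon in the prefix names, so the default prefix shows up as ":" rather than ""):

OWLDocumentFormat format = manager.getOntologyFormat(ontology);
if (format != null && format.isPrefixOWLDocumentFormat()) {
    Map<String, String> prefixMap = format.asPrefixOWLDocumentFormat().getPrefixName2PrefixMap();
    // e.g. ":" -> "http://default.ont/my_ont/", "ex:" -> "http://example.org/ex#"
    prefixMap.forEach((name, iri) -> System.out.println(name + " -> " + iri));
}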
I am reading the attached university-bench ontology file (which I generated with UBA 1.7 from LUBM) in Java using the OWL API, but it does not read any axioms (subclass, etc.) and it does not give me any error either. Can anyone tell me what I am doing wrong? The code below is what I use to retrieve the subclass axioms from this ontology, but it returns nothing / blank output. I want to output subclass, disjoint class, sub property, disjoint property and anonymous superclass axioms, but currently I am unable to get anything out of the ontology.
When I use an ontology I created myself with Protégé, the code below works fine. But when I try to run it on the ontology generated by UBA 1.7, it gives me nothing.
public static void axioms(File ontologyFile) throws OWLOntologyCreationException {
    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    OWLOntology ontology = manager.loadOntologyFromOntologyDocument(ontologyFile);
    OWLDataFactory df = manager.getOWLDataFactory();
    for (final OWLSubClassOfAxiom subClassAxiom : ontology.getAxioms(AxiomType.SUBCLASS_OF))
    {
        if (subClassAxiom.getSuperClass() instanceof OWLClass && subClassAxiom.getSubClass() instanceof OWLClass)
        {
            System.out.println(subClassAxiom.getSubClass() + " extends " + subClassAxiom.getSuperClass());
        }
    }
}
I am trying to read a YAML template, replace certain fields in the template dynamically, and create a new YAML file. The resulting YAML file should match the template in all aspects, including the double quotes. But I am missing the double quotes on the required fields when I use SnakeYAML.
Can anyone please suggest how to resolve this issue?
Example:
My yaml template is as shown below:
version: snapshot-01
kind: sample
metadata:
  name: abc
groups:
  id: "1000B"
  category: category1
I am reading the above template and replacing the required fields dynamically as shown below.
Yaml yaml = new Yaml();
InputStream inputStream = this.getClass().getClassLoader().getResourceAsStream(yamlTemplateLocation);
Map<String, Object> yamlMap = yaml.load(inputStream);
Now I am replacing the required fields as shown below
yamlMap.put("version","v-1.0");
Map<String, Object> metadata = (Map<String, Object>) yamlMap.get("metadata");
metadata.put("name", "XYZ");
Map<String, Object> groups = (Map<String, Object>) yamlMap.get("groups");
groups.put("id","5000Z");
groups.put("category","newCategory");
DumperOptions options = new DumperOptions();
options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK);
options.setPrettyFlow(true);
Yaml outputYaml = new Yaml(options);
String output = outputYaml.dump(yamlMap);
System.out.println(output);
I am expecting output as shown below
Expected output:
version: v-1.0
kind: sample
metadata:
  name: XYZ
groups:
  id: "5000Z"
  category: newCategory
But I am actually getting output as below
version: v-1.0
kind: sample
metadata:
  name: XYZ
groups:
  id: 5000Z
  category: newCategory
My problem here is that the double quotes around the "id" value are missing in the new YAML file.
When I use options.setDefaultScalarStyle(ScalarStyle.DOUBLE_QUOTED), all fields get double quoted, which is not what I want. I need double quotes around the id field only.
Can anyone please advise how to resolve this issue?
Thanks
If your input is a template, it might be better to use a templating engine. As a simple example, MessageFormat would allow you to write id: "{0}" and then interpolate the actual value into it, keeping the double quotes. You could use more sophisticated templating depending on your use-case.
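For instance, a minimal sketch (the template string and placeholder indices here are just for illustration):

import java.text.MessageFormat;

String template = String.join("\n",
        "version: {0}",
        "kind: sample",
        "metadata:",
        "  name: {1}",
        "groups:",
        "  id: \"{2}\"",
        "  category: {3}");
// the double quotes around {2} are part of the template, so they survive interpolation
String output = MessageFormat.format(template, "v-1.0", "XYZ", "5000Z", "newCategory");
System.out.println(output);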
That being said, let's look at how to do it with SnakeYAML:
If you want to control how a single item is rendered as scalar, you have to define a class like this:
class QuotedString {
    public String value;

    public QuotedString(String value) {
        this.value = value;
    }
}
And then create a custom representer for it:
class MyRepresenter extends Representer {
    public MyRepresenter() {
        this.representers.put(QuotedString.class, new RepresentQuotedString());
    }

    private class RepresentQuotedString implements Represent {
        public Node representData(Object data) {
            QuotedString str = (QuotedString) data;
            return representScalar(
                    Tag.STR, str.value, DumperOptions.ScalarStyle.DOUBLE_QUOTED);
        }
    }
}
Modify your code to use the new class:
groups.put("id", new QuotedString("5000Z"));
And finally, instruct SnakeYAML to use your representer:
Yaml yaml = new Yaml(new MyRepresenter(), options);
This should do it.
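Putting it all together, assuming the QuotedString and MyRepresenter classes above, the yamlMap/groups maps from your question, and a SnakeYAML version where Representer has a no-argument constructor:

groups.put("id", new QuotedString("5000Z"));

DumperOptions options = new DumperOptions();
options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK);
options.setPrettyFlow(true);
Yaml yaml = new Yaml(new MyRepresenter(), options);
// id should now be emitted as "5000Z", everything else stays unquoted
System.out.println(yaml.dump(yamlMap));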
The title speaks for itself: I have a Config object (from https://github.com/typesafehub/config) and I want to pass it to a constructor which only accepts java.util.Properties as argument.
Is there an easy way to convert a Config to a Properties object?
Here is a way to convert a Typesafe Config object into a java.util.Properties object. I have only tested it in a simple case, for creating Kafka properties.
Given this configuration in application.conf
kafka-topics {
  my-topic {
    zookeeper.connect = "localhost:2181",
    group.id = "testgroup",
    zookeeper.session.timeout.ms = "500",
    zookeeper.sync.time.ms = "250",
    auto.commit.interval.ms = "1000"
  }
}
You can create the corresponding Properties object like this:
import com.typesafe.config.{Config, ConfigFactory}
import java.util.Properties
import kafka.consumer.ConsumerConfig
object Application extends App {

  def propsFromConfig(config: Config): Properties = {
    import scala.collection.JavaConversions._

    val props = new Properties()
    val map: Map[String, Object] = config.entrySet().map({ entry =>
      entry.getKey -> entry.getValue.unwrapped()
    })(collection.breakOut)

    props.putAll(map)
    props
  }

  val config = ConfigFactory.load()

  val consumerConfig = {
    val topicConfig = config.getConfig("kafka-topics.my-topic")
    val props = propsFromConfig(topicConfig)
    new ConsumerConfig(props)
  }

  // ...
}
The function propsFromConfig is what you are mainly interested in. The key points are the use of entrySet to get a flattened list of properties, and the call to unwrapped on the entry value, which gives an Object whose type depends on the configuration value.
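If you need the same conversion from Java rather than Scala, a minimal (untested) sketch would rely on the same two calls, Config.entrySet() and ConfigValue.unwrapped():

import com.typesafe.config.Config;
import com.typesafe.config.ConfigValue;
import java.util.Map;
import java.util.Properties;

public static Properties propsFromConfig(Config config) {
    Properties props = new Properties();
    // each entry key is the full path, e.g. "zookeeper.connect"
    for (Map.Entry<String, ConfigValue> entry : config.entrySet()) {
        props.setProperty(entry.getKey(), String.valueOf(entry.getValue().unwrapped()));
    }
    return props;
}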
You can try my Scala wrapper https://github.com/andr83/scalaconfig. Using it, converting a config object to java Properties is simple:
val properties = config.as[Properties]
As Typesafe Config / HOCON supports a much richer structure than java.util.Properties, it will be hard to get a safe conversion.
Put differently: since properties can only express a subset of HOCON, the conversion is not well defined and may lose information.
So if your configuration is rather flat and does not contain UTF-8, you could transform the HOCON to JSON and then extract the values.
A better solution would be to implement a config class, populate it with the values from the HOCON, and pass that to the class you want to configure.
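For example, a rough sketch of such a config class (the class name and the fields it reads are made up for illustration; the paths follow the application.conf example above):

import com.typesafe.config.Config;

public class KafkaSettings {
    public final String zookeeperConnect;
    public final String groupId;

    public KafkaSettings(Config config) {
        // read only the values you need, with their proper types
        this.zookeeperConnect = config.getString("zookeeper.connect");
        this.groupId = config.getString("group.id");
    }
}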
It is not possible directly through Typesafe Config. Even rendering the entire HOCON file into JSON does not produce truly valid JSON:
ex:
"play" : {
"filters" : {
"disabled" : ${?play.filters.disabled}[
"play.filters.hosts.AllowedHostsFilter"
],
"disabled" : ${?play.filters.disabled}[
"play.filters.csrf.CSRFFilter"
]
}
}
That format comes directly from Config.render.
As you can see, disabled is represented twice, with HOCON-style syntax.
I have also had problems with round-tripping hocon -> json -> hocon.
Example hocon:
http {
  port = "9000"
  port = ${?HTTP_PORT}
}
typesafe config would parse this to
{
  "http": {
    "port": "9000,${?HTTP_PORT}"
  }
}
However, if you try to parse that as HOCON, it throws a syntax error: the comma cannot be there.
The correct HOCON would be 9000${?HTTP_PORT}, with no comma between the values. I believe this is true for all array concatenation and substitution.
I am trying to create and store an ontology file in functional format using OWL API:
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLOntology ontology = manager.createOntology();
OWLDataFactory factory = manager.getOWLDataFactory();
PrefixManager pm = new FunctionalSyntaxDocumentFormat();
pm.setDefaultPrefix(" :");
OWLClass item = factory.getOWLClass(IRI.create("item"), pm);
manager.addAxiom(ontology, factory.getOWLDeclarationAxiom(item));
manager.saveOntology(ontology, new FunctionalSyntaxDocumentFormat(), new FileOutputStream("FileName"));
The result in the saved file for this axiom is this:
Declaration(Class(< :item>))
How do I get rid of the < > brackets around entities? It happens to all entities that I create, and it is preventing my file from being parsed correctly.
Two issues: there should not be a space in the default prefix, and the prefix manager you are setting the prefix on must be the same one used in the call to saveOntology(). You can just pass the first functional document format to the last method in your code.
Edit: After trying to run the code, I think there's a bit of a bug in the OWL API. It is necessary to set the format on the manager for the prefixes to be picked up properly. That should not be necessary. However, there's a workaround.
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLOntology ontology = manager.createOntology();
OWLDataFactory factory = manager.getOWLDataFactory();
FunctionalSyntaxDocumentFormat pm = new FunctionalSyntaxDocumentFormat();
pm.setPrefix(":", "http://test.owl/test#");
manager.setOntologyFormat(ontology, pm);
OWLClass item = factory.getOWLClass("item", pm);
manager.addAxiom(ontology, factory.getOWLDeclarationAxiom(item));
manager.saveOntology(ontology, System.out);
Given an arbitrary IRI, such as the main ontology or one of the ontologies it imports, I would like to extract the title, but the code yields no annotations.
Here's an example of what I'm talking about, from the SKOS ontology:
<owl:Ontology rdf:about="http://www.w3.org/2004/02/skos/core">
<dct:title xml:lang="en">SKOS Vocabulary</dct:title>
How exactly would I extract "SKOS Vocabulary"?
Here is some code I am currently using from an OWL-API tutorial.
public void testingOWL() throws OWLOntologyCreationException, OWLOntologyStorageException
{
    // Get hold of an ontology manager
    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    // Load an ontology from the Web. We load the ontology from a document IRI
    IRI docIRI = IRI.create("http://www.w3.org/2009/08/skos-reference/skos.rdf");
    OWLOntology skos = manager.loadOntologyFromOntologyDocument(docIRI);
    System.out.println("Loaded ontology: " + skos);
    System.out.println();
    // Save a local copy of the ontology. (Specify a path appropriate to your setup)
    File file = new File("e:/downloadAndSaveOWLFile.owl");
    manager.saveOntology(skos, IRI.create(file.toURI()));
    // Ontologies are saved in the format from which they were loaded.
    // We can get information about the format of an ontology from its manager
    OWLOntologyFormat format = manager.getOntologyFormat(skos);
    System.out.println(" format: " + format);
    System.out.println();
    // Save the ontology in owl/xml format
    OWLXMLOntologyFormat owlxmlFormat = new OWLXMLOntologyFormat();
    // Some ontology formats support prefix names and prefix IRIs.
    // In our case we loaded the pizza ontology from an rdf/xml format, which supports prefixes.
    // When we save the ontology in the new format we will copy the prefixes over
    // so that we have nicely abbreviated IRIs in the new ontology document
    if (format.isPrefixOWLOntologyFormat())
    {
        owlxmlFormat.copyPrefixesFrom(format.asPrefixOWLOntologyFormat());
    }
    manager.saveOntology(skos, owlxmlFormat, IRI.create(file.toURI()));
    // Dump an ontology to System.out by specifying a different OWLOntologyOutputTarget
    // Note that we can write an ontology to a stream in a similar way
    // using the StreamOutputTarget class
    OWLOntologyDocumentTarget documentTarget = new SystemOutDocumentTarget();
    // Try another format - The Manchester OWL Syntax
    ManchesterOWLSyntaxOntologyFormat manSyntaxFormat = new ManchesterOWLSyntaxOntologyFormat();
    if (format.isPrefixOWLOntologyFormat())
    {
        manSyntaxFormat.copyPrefixesFrom(format.asPrefixOWLOntologyFormat());
    }
    manager.saveOntology(skos, manSyntaxFormat, documentTarget);
}
EDIT: I updated the code based on the suggestion below, but it only returns one object, for rdfs:seeAlso.
public void getData() throws OWLOntologyCreationException
{
    // Get hold of an ontology manager
    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    // Load an ontology from the Web. We load the ontology from a document IRI
    IRI docIRI = IRI.create("http://www.w3.org/2009/08/skos-reference/skos.rdf");
    OWLOntology skos = manager.loadOntologyFromOntologyDocument(docIRI);
    for (OWLAnnotation ann : skos.getAnnotations())
    {
        System.out.println("ann: " + ann.getProperty());
        System.out.println();
    }
}
The annotation you're looking for is an ontology annotation, meaning the IRI that is its subject is the ontology IRI itself. This is accessed differently from standard annotations.
OWLOntology o = ... // init the ontology as usual
// dataFactory is the OWLDataFactory obtained from your ontology manager
for (OWLAnnotation ann : o.getAnnotations()) {
    if (ann.getProperty().equals(dataFactory.getRDFSLabel())) {
        // here you have found an rdfs:label annotation, so you can use the value for your purposes
    }
}
Edit: Example of use
public static void main(String[] args) throws OWLOntologyCreationException {
    OWLOntologyManager m = OWLManager.createOWLOntologyManager();
    OWLOntology o = m.loadOntology(IRI
            .create("http://www.w3.org/2009/08/skos-reference/skos.rdf"));
    for (OWLAnnotation a : o.getAnnotations()) {
        System.out.println("TestSkos.main() " + a);
    }
}
Output:
TestSkos.main() Annotation(rdfs:seeAlso <http://www.w3.org/TR/skos-reference/>)
TestSkos.main() Annotation(<http://purl.org/dc/terms/creator> "Alistair Miles")
TestSkos.main() Annotation(<http://purl.org/dc/terms/description> "An RDF vocabulary for describing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, 'folksonomies', other types of controlled vocabulary, and also concept schemes embedded in glossaries and terminologies."#en)
TestSkos.main() Annotation(<http://purl.org/dc/terms/contributor> "Participants in W3C's Semantic Web Deployment Working Group.")
TestSkos.main() Annotation(<http://purl.org/dc/terms/creator> "Sean Bechhofer")
TestSkos.main() Annotation(<http://purl.org/dc/terms/contributor> "Nikki Rogers")
TestSkos.main() Annotation(<http://purl.org/dc/terms/title> "SKOS Vocabulary"#en)
TestSkos.main() Annotation(<http://purl.org/dc/terms/contributor> "Dave Beckett")
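The dct:title annotation you are after is in that list; here is a sketch for pulling out just its value, comparing the annotation property IRI against http://purl.org/dc/terms/title (using the same ontology o as above):

IRI dctTitle = IRI.create("http://purl.org/dc/terms/title");
for (OWLAnnotation a : o.getAnnotations()) {
    if (a.getProperty().getIRI().equals(dctTitle) && a.getValue() instanceof OWLLiteral) {
        // prints: SKOS Vocabulary
        System.out.println(((OWLLiteral) a.getValue()).getLiteral());
    }
}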