I have a schema in an .xsd file. Once in a while a new version of the schema is created, and I need to update my .ecore (and .genmodel).
How do I update them without deleting them and regenerating them? I have made some manual modifications to the .ecore, and I want to keep these modifications.
Ido.
Use the Reload... action on the *.genmodel to update the *.ecore based on the new version of the *.xsd.
And don't change the .ecore directly; use ecore: annotations in the schema instead. See http://www.eclipse.org/modeling/emf/docs/overviews/XMLSchemaToEcoreMapping.pdf
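For illustration, a minimal sketch of such an annotation (the element name here is made up, not from your schema); ecore:name controls the name of the Ecore element derived from the schema, so the rename survives every Reload:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:ecore="http://www.eclipse.org/emf/2002/Ecore">
    <!-- ecore:name renames the derived Ecore feature without editing the .ecore -->
    <xsd:element name="purchase-order" type="xsd:string" ecore:name="purchaseOrder"/>
</xsd:schema>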
I've never tried this, but the XSD FAQ says this:
JAXB produces a simple Java API given an XML Schema and it does so using essentially a black box design. EMF produces an Ecore model given an XML Schema and then uses template-based generator technology to generate a rich Java API (of hand-written quality). The XML Schema to Ecore conversion can be tailored, the templates used to generate the Java API can be tailored, and the resulting Java API can be tailored. The generator supports merging regeneration so that it will preserve your hand-written changes. In other words, EMF is far richer and more flexible, and supports a broader subset of XML Schema (especially in 2.0, where wildcards and mixed content will be supported).
If I were you, I'd try some experiments to see how well this process works, and what the practical limitations are.
You can regenerate using the context menu options. To preserve your modifications:
If there is a method that has "Gen" added to the name -- e.g. setWhateverGen in addition to setWhatever -- new code will be generated into the "Gen" method. So leave the "Gen" method alone so that it can be overwritten, and call it from the non-Gen method, which you can modify.
All the generated methods are annotated with @generated in their Javadoc. If you add "NOT" -- @generated NOT -- the method will not be overwritten.
All other content should be merged. Go ahead and experiment -- that's what version control is for....
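As a hedged sketch of both mechanisms together (the method names are invented, not from a real model):

/**
 * Generated body; regeneration will keep overwriting this method.
 * @generated
 */
public void setWhateverGen(String newWhatever)
{
    whatever = newWhatever;
}

/**
 * Hand-written wrapper; the NOT tag tells the merge step to leave it alone.
 * @generated NOT
 */
public void setWhatever(String newWhatever)
{
    // custom logic goes here, then delegate to the regenerated code
    setWhateverGen(newWhatever);
}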
I am facing a problem and I am kind of desperate:
I am trying to transform an OCL constraint into a C# program. To do so, I define my OCL constraints in a Complete OCL document and save it as Abstract Syntax: POC.ocl.oclas. Then I use Acceleo with the Pivot metamodel ('http://www.eclipse.org/ocl/2015/Pivot').
However, common OCL operations (such as 'size') are defined in another model: the Library. So when I try to recover the operations used in my OCL model, nothing happens; I can only recover the operations I defined in my OCL document.
When I open POC.ocl.oclas, I see these two models:
the POC.ocl model and the Library model.
I defined this generation:
[comment encoding = UTF-8 /]
[module generate('http://www.eclipse.org/ocl/2015/Pivot','http://www.eclipse.org/ocl/2015/Library')]
[template public generateElement(aModel : Model)]
[comment #main/]
[file (aModel.name + 'xx', false, 'UTF-8')]
yo
[/file]
[/template]
And it only generates one file: "POC.oclxx", not "Library.oclxx".
That leads us to this question:
Is it possible in Acceleo to make a reference to another model (other than the main one)?
And if it is, how do you do that?
ANNEX:
The code I wrote:
[comment getCode() operation/]
[template public getCode(operationCallExp : pivot::OperationCallExp) post (trim())]
[operationCallExp.ownedSource.getCode()/]
[operationCallExp.referredOperation.name/][operationCallExp.ownedArguments -> getArguments()/]
[/template]
In theory, [operationCallExp.referredOperation.name/] gives me the name of the operation. In reality, it gives me nothing, except when I defined the operation myself (and thus when the operation doesn't come from the OCL Library).
Thank you in advance!
The zipped project: Archive_OCL_Acceleo
The POC folder contains the POC metamodel (POC.ecore), the OCL constraints on this metamodel (POC.ocl) and the associated Pivot model (POC.ocl.oclas). Files generated by Acceleo are in the files folder.
The POC_Acceleo folder contains the Acceleo transformation (generate.mtl).
From the *.oclas file extension, I take it that you are using the/my Pivot-based Eclipse OCL Abstract Syntax.
My first attempt at Java code generation from OCL used Acceleo, but I abandoned this for various reasons, not least of which is that the step from the OCL AS to Java code is far too big to perform in a single M2T step. While Java (and no doubt C#) is deceptively similar to OCL, making a simple text template-driven translation attractive, that approach is doomed to support only a modest language subset. Real code generation needs real analyses such as common subexpression elimination, and these introduce a conflict between preserved source and rewritten source, if you rewrite the source.
The current Eclipse OCL to Java Generator (my third attempt) uses an intermediate CG model where rewrites happen. It is intended to be retargetable to C (or C# or ...). I have many plans for a higher level of auto-generation in my next (fourth) attempt with a further Java (or C or C# or ...) intermediate model to separate the 'trivial' textual language serialization from the non-trivial language concept synthesis.
If you are interested in a serious rather than simplified example tool for C# generation, I strongly recommend you look at the Eclipse OCL CG. If you want to work collaboratively on making it better and are happy to make your contributions available under the EPL, then perhaps we can arrange something.
Are you using the latest code? I recall fixing a couple of bugs recently regarding missing 'cosmetic' AS model content.
I'm looking for a solution which automatically generates POJO class files from given .yaml files, but I have not found anything like this yet.
I cannot imagine that writing these classes yourself should be the only way.
The problem is that YAML describes objects, not classes. In general, you cannot automatically derive a POJO structure from a given YAML file. Take, for example, this YAML:
one: foo
two: bar
In YAML, this is a mapping with scalar keys and values. However, there are multiple possibilities to map it to Java. Here are two:
HashMap<String, String>
class Root {
    String one;
    String two;
}
To know which one is the right mapping, you would need a schema definition like those for XML. Sadly, YAML currently does not provide a standard way of defining a schema. Therefore, you define the schema by writing the class hierarchy your YAML should be deserialised into.
So, contrary to what you may think, writing the POJOs is not a superfluous action that could be automated; instead, it is a vital step for including YAML in your application.
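For example, with SnakeYAML (just one possible library choice, assumed here), the classes you write are precisely what tells the parser how to interpret the mapping above; a minimal sketch:

import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.Constructor;

public class Demo {
    // Root acts as the "schema": it tells SnakeYAML that both keys are
    // scalar String fields rather than, say, entries of a generic map.
    public static class Root {
        public String one;
        public String two;
    }

    public static void main(String[] args) {
        // (newer SnakeYAML versions also want a LoaderOptions argument here)
        Yaml yaml = new Yaml(new Constructor(Root.class));
        Root root = yaml.load("one: foo\ntwo: bar");
        System.out.println(root.one + " / " + root.two); // foo / bar
    }
}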
Note: In the case that you actually want to use YAML to define some data layout and then generate Java source code from it, that is of course possible. However, you'd need to be much more precise in your description to get help on that.
As pointed out in the comments by Jack Flamp, you can use an online tool (jsonschema2pojo) to convert a sample YAML file to its equivalent POJO classes. This tool can convert JSON or YAML data to corresponding POJO classes, and I have used it successfully in the past.
That being said, the tool is forced to make certain "assumptions" when you are using a YAML file (instead of a YAML schema). So, it would be a good idea to look at the generated classes carefully before you start using them.
You can find more information about how to use this online tool from its wiki page.
The accepted answer is incomplete.
You can try https://editor.swagger.io/
After importing a YAML file, you can generate a Java REST client project through the menu, with corresponding POJO classes.
I've inherited a project which essentially maps large documents from one structure to another. The source and target documents are POJOs, and there are a number of transformer classes that map the source POJO to the target by using the getters/setters of each field.
For example, say we have:
public void transform(SourceDocument source, TargetDocument target) {
    target.setField1(source.getField1());
    target.setField2(source.getField5());
    target.setField3(source.getField2());
    target.setField4(source.getField4());
    target.setField5(source.getField3());
}
We're looking to refactor large parts of this project, and as part of it, our customer has requested that we document these mappings before we look to refactor with a better implementation.
There are several hundred of these mappings, and before we get one of the team to go through them all by hand, does anyone know of any tools that could analyse this code and produce a simple mapping document? All mappings have been strictly performed using getters/setters with no direct property access.
We can't give the actual code to our customer due to IP restrictions, and also it's the (non-technical) business who need this information, so ideally we need a very simple output, describing the mapping from source to target.
These transformers are still occasionally updated while we work on a new version, so I'd really prefer something that could generate the documentation we're after directly from the code rather than comments/annotations that would have to be manually updated by developers.
I easily found JAXB for importing XML into Java code; however, after looking at it a bit more, I started wondering if it is more than I really need.
It should be rather simple XML that I or other users would create.
For example:
<Items>
    <Item>
        <Type>Armor Material</Type> <!-- could be various types of parent objects -->
        <Name>Steel</Name> <!-- object properties -->
        <Toughness>10</Toughness>
    </Item>
    <Item>
        <Type>Armor Material</Type>
        <Name>Iron</Name>
        <Toughness>7</Toughness>
    </Item>
</Items>
For the background on my problem: I have a game written in Java, and aim to have many Objects of certain types defined in the XML. I'm hoping to keep the XML as simple as possible for easy user-modding.
I know how to read from a file for creating my own custom solution - but I have never dealt with marshalling/unmarshalling and JAXB in general. I won't lie - something about it intimidates me, maybe because it seems like this "black box" which I don't quite understand.
Are there clear advantages that argue for learning how to get it to work, as opposed to implementing a solution I already know I can get to work?
You definitely want to use JAXB.
Whether your XML is simple or complex, write an XML Schema (.xsd) file. You want the schema file anyway, so you can validate the files you are reading. Use xjc (part of JAXB) to generate Java classes for all the elements of your XML schema (complete with setters/getters). Then, it is a one-liner to read or write an XML file.
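For illustration, here is roughly what that looks like; Items is a hypothetical root class that xjc would generate from your schema, and the file names are made up:

import java.io.File;
import javax.xml.bind.JAXBContext;

public class XmlIo {
    public static Items read(File file) throws Exception {
        // unmarshal: XML document -> generated Java object graph
        return (Items) JAXBContext.newInstance(Items.class)
                .createUnmarshaller().unmarshal(file);
    }

    public static void write(Items items, File file) throws Exception {
        // marshal: Java object graph -> XML document
        JAXBContext.newInstance(Items.class).createMarshaller().marshal(items, file);
    }
}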
Because the XML file is mapped to/from Java objects, it is very easy to manipulate these data structures (to create or consume them) in Java.
JAXB has a plugin architecture and there are quite a few open source plugins that you can use to enhance the generated classes. By default, JAXB generates all your setters/getters automatically, but there are plugins that will generate equals/hashCode, fluent-style methods, clone, etc. There is even a plugin (hyperjaxb3) that will put JPA annotations on the generated classes, so you can go XML->Java->database->Java->XML all based on the XML schema.
I have worked on projects that used JAXB to generate POJOs even though we didn't need XML - it was quicker to write and easier to maintain the XML schema than all the Java code for the POJOs.
If you're using Java 8, perhaps a dynamic style would be a good fit:
XmlDynamic xml = new XmlDynamic(
"<items>" +
"<item>" +
"<type>Armor Material</type>" +
"<name>Steel</name>" +
"<toughness>10</toughness>" +
"</item>" +
"<item>" +
"<type>Armor Material</type>" +
"<name>Iron</name>" +
"<toughness>7</toughness>" +
"</item>" +
"</items>"
);
xml.get("items|item|name").asString(); // "Steel"
xml.get("items|item[1]|toughness").convert().intoInteger(); // 7
see https://github.com/alexheretic/dynamics#xml-dynamics
I am attempting to update some xml parsers, and have hit a small snag. We have an xsd that we need to keep compatible with older versions of the xml, and we had to make some changes to it. We made the changes in a new version of the xsd, and we would like to use the same parser (as the changes are pretty small in general, and the parser can easily handle both). We are using the XMLReader property "http://java.sun.com/xml/jaxp/properties/schemaSource" to set the schema to the previous edition, using something like the following:
xmlReader.setProperty("http://java.sun.com/xml/jaxp/properties/schemaSource",
new InputSource(getClass().getResourceAsStream("/schema/my-xsd-1.0.xsd")));
This worked fine when we only had one version of the schema. Now we have a new version, and we want the system to use whichever version of the schema is defined in the incoming xml. Both schemas define a namespace, something like the following:
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.mycompany.com/my-xsd-1.0"
xmlns="http://www.mycompany.com/my-xsd-1.0"
elementFormDefault="unqualified" attributeFormDefault="unqualified">
and, for the new one:
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.mycompany.com/my-xsd-1.1"
xmlns="http://www.mycompany.com/my-xsd-1.1"
elementFormDefault="unqualified" attributeFormDefault="unqualified">
So, they have different namespaces and different schema "locations" defined. We don't want the schema to live on the 'net - we want it to be bundled with our system. Is there a way to use the setProperty mechanism to do this behavior, or is there a different way to handle this?
I tried putting both resources into an array of InputSources as the parameter, but that didn't work (I remember reading somewhere that this was a possible solution - although now I can't find the source, so it might have been wishful thinking).
So, it turns out what I had tried actually worked - we were accidentally using invalid xml! What works (for anyone else who is interested) is the following:
List<InputSource> inputs = new ArrayList<InputSource>();
inputs.add(new InputSource(getClass().getResourceAsStream("/schema/my-xsd-1.0.xsd")));
inputs.add(new InputSource(getClass().getResourceAsStream("/schema/my-xsd-1.1.xsd")));
xmlReader.setProperty("http://java.sun.com/xml/jaxp/properties/schemaSource",
inputs.toArray(new InputSource[inputs.size()]));
Personally I think it's generally a bad idea to change the namespace when you version a schema, unless the changes are radical - but views differ on that, and you seem to have made your decision, and you may as well reap the benefits.
Since you're using two different namespaces, the schemas are presumably disjoint, so you should be able to give the processor a schema that's the union of the two - I don't know if there's a better way, but one way of achieving this is to write a little stub schema that imports both, and supply this stub as your schemaSource property. The processor will use whichever schema declarations match the namespace of the elements in the source document.
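For reference, such a stub could look like this; the schemaLocation values are assumptions about where you bundle the files:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Union stub: pass this single document as the schemaSource property. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:import namespace="http://www.mycompany.com/my-xsd-1.0" schemaLocation="my-xsd-1.0.xsd"/>
    <xs:import namespace="http://www.mycompany.com/my-xsd-1.1" schemaLocation="my-xsd-1.1.xsd"/>
</xs:schema>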
(Using version-specific namespaces makes this task - validation - easier. But it makes subsequent processing of the XML, e.g. using XPath, harder, because it's hard to write code that works with both namespaces.)