JAXB: Generating classes for two XSDs which share a common XSD

I have two service XSD files, AService.xsd and BService.xsd, each with a different targetNamespace. Both of these use a common XSD called common.xsd. I use the JAXB Maven plugin to generate classes. Here's how:
<execution>
  <id>generate-package</id>
  <goals>
    <goal>generate</goal>
  </goals>
  <configuration>
    <extension>true</extension>
    <schemaIncludes>
      <include>schema/AService.xsd</include>
      <include>schema/BService.xsd</include>
    </schemaIncludes>
    <bindingIncludes>
      <include>schema/*.xjb</include>
    </bindingIncludes>
    <generatePackage>com.schema</generatePackage>
    <generateDirectory>src/main/java</generateDirectory>
  </configuration>
</execution>
When I try to run this I get the following error (ValidationType is defined in common.xsd):
org.xml.sax.SAXParseException: A class/interface with the same name "com.schema.ValidationType" is already in use. Use a class customization to resolve this conflict.
..........
org.xml.sax.SAXParseException: (Relevant to above error) another "ValidationType" is generated from here.
......
com.sun.istack.SAXParseException2: Two declarations cause a collision in the ObjectFactory class.
If I run the two service XSDs in two different executions generating into two different packages, I get the same ValidationType class in two different packages.
Any ideas on how to make JAXB recognize shared schemas?

You are facing a so-called "chameleon schema", which is considered bad practice. Unfortunately, there is no good solution due to the nature of JAXB. JAXB annotations bind bean properties to XML elements and attributes in specific namespaces (determined at schema compile time). So once your schema is compiled, there is no official good way to change the namespaces of the elements and attributes your properties are bound to.
However, this is exactly what you would need with "chameleon" schemas. Classes derived from common.xsd should somehow magically map to namespace A when used in A classes and to namespace B when used in B classes. I can imagine this magic, but I have never seen it in real life.
Since you essentially want A/common and B/common to be the "same thing", one way to resolve this is to generate A and B (each with common) in two executions, and to make the common classes implement a certain "common" interface. Then your software could process A/common and B/common in the same fashion, regardless of the fact that they are actually classes from different packages.
UPDATE:
From the comment I see that you don't have a chameleon schema, just normal importing. That is easy then: just compile common, A and B separately. See Separate schema compilation for the maven-jaxb2-plugin.
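A rough sketch of what that separate compilation might look like with the maven-jaxb2-plugin (all coordinates below are made up; recent plugin versions attach a META-INF/sun-jaxb.episode file to the jar automatically): the module owning common.xsd is compiled on its own, and the service modules reference it as an episode so that XJC reuses the existing classes instead of regenerating ValidationType.
<!-- Module that owns common.xsd (hypothetical artifact com.example:common-schema): -->
<configuration>
  <schemaIncludes>
    <include>schema/common.xsd</include>
  </schemaIncludes>
  <generatePackage>com.common.schema</generatePackage>
</configuration>

<!-- Module that owns AService.xsd (analogously for BService.xsd): -->
<configuration>
  <schemaIncludes>
    <include>schema/AService.xsd</include>
  </schemaIncludes>
  <episodes>
    <episode>
      <groupId>com.example</groupId>
      <artifactId>common-schema</artifactId>
    </episode>
  </episodes>
</configuration>
The service modules also need a regular Maven dependency on common-schema so the shared classes are on their compile classpath.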

I customized the packages as described here. So common.xsd goes into com.common.schema and is shared by AService.xsd and BService.xsd, which are generated into different packages themselves, since they are in different namespaces.
The generatePackage element is removed from the Maven configuration (the packages are now set in the .xjb binding files instead), so it looks like this:
<execution>
  <id>generate-package</id>
  <goals>
    <goal>generate</goal>
  </goals>
  <configuration>
    <extension>true</extension>
    <schemaIncludes>
      <include>schema/AService.xsd</include>
      <include>schema/BService.xsd</include>
    </schemaIncludes>
    <bindingIncludes>
      <include>schema/*.xjb</include>
    </bindingIncludes>
    <generateDirectory>src/main/java</generateDirectory>
  </configuration>
</execution>
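For reference, the package customization described above lives in the binding files. A minimal sketch of such an .xjb, assuming the schemas sit next to it (the per-service package names are made up):
<jaxb:bindings version="2.1"
               xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
               xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Shared types from common.xsd go to a single shared package -->
  <jaxb:bindings schemaLocation="common.xsd" node="/xs:schema">
    <jaxb:schemaBindings>
      <jaxb:package name="com.common.schema"/>
    </jaxb:schemaBindings>
  </jaxb:bindings>
  <!-- Each service schema gets its own package -->
  <jaxb:bindings schemaLocation="AService.xsd" node="/xs:schema">
    <jaxb:schemaBindings>
      <jaxb:package name="com.a.schema"/>
    </jaxb:schemaBindings>
  </jaxb:bindings>
  <jaxb:bindings schemaLocation="BService.xsd" node="/xs:schema">
    <jaxb:schemaBindings>
      <jaxb:package name="com.b.schema"/>
    </jaxb:schemaBindings>
  </jaxb:bindings>
</jaxb:bindings>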

Related

How to aggregate maven subproject javadoc output without regenerating javadoc

I have a largish multimodule Maven build. I need to generate the javadoc for all of the modules and produce an "aggregated" javadoc result that I can deploy to a box for consumption by users.
I did have this working perfectly fine for quite a while, until I tried implementing a custom taglet with specific features and requirements, which makes this more complicated to produce.
All of the submodules inherit a parent pom that is not the aggregator pom. In that parent pom I define the maven-javadoc-plugin. This is what it looked like before I added the custom taglet:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <version>2.10.4</version>
  <configuration>
    <additionalparam>-Xdoclint:none</additionalparam>
    <bottom>Unified Service Layer - bottom</bottom>
    <doctitle>Unified Service Layer - title</doctitle>
    <footer>Unified Service Layer - footer</footer>
    <groups></groups>
    <header>Unified Service Layer - header</header>
    <level>public</level>
    <packagesheader>Unified Service Layer - packagesheader</packagesheader>
    <top>Unified Server Layer - top</top>
    <windowtitle>Unified Service Layer - windowtitle</windowtitle>
  </configuration>
  <executions>
    <execution>
      <id>module-javadoc-jar</id>
      <phase>package</phase>
      <goals>
        <goal>jar</goal>
      </goals>
      <configuration>
        <show>protected</show>
        <detectLinks>false</detectLinks>
      </configuration>
    </execution>
    <execution>
      <id>aggregated-documentation</id>
      <phase>package</phase>
      <inherited>false</inherited>
      <goals>
        <goal>aggregate-jar</goal>
      </goals>
      <configuration>
        <show>protected</show>
        <detectLinks>false</detectLinks>
      </configuration>
    </execution>
  </executions>
</plugin>
With this, I could build all of the modules, and each generates its own javadoc (which I now know is just a validation step, since aggregate-jar doesn't use this output). I have a separate step I call from Jenkins that runs "javadoc:aggregate-jar" in the root project, which produces the aggregated javadoc jar that I deploy.
Again, this has been working fine until now.
I implemented a custom javadoc taglet which requires getting access to the Class object associated with the source file it is contained within. I got this to work, at least in the individual module builds by adding the following to the configuration above:
<taglets>
  <taglet>
    <tagletClass>com.att.det.taglet.ValidationConstraintsTaglet</tagletClass>
  </taglet>
  <taglet>
    <tagletClass>com.att.det.taglet.ValidationConstraintsCombinedTaglet</tagletClass>
  </taglet>
</taglets>
<tagletArtifacts>
  <tagletArtifact>
    <groupId>com.att.detsusl.taglets</groupId>
    <artifactId>validationJavadocTaglet</artifactId>
    <version>0.0.1-SNAPSHOT</version>
  </tagletArtifact>
</tagletArtifacts>
In order to have the taglet get access to the class file, I had to add a minimal plugin configuration to each subproject pom.xml, which looks like this:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <tagletArtifacts combine.children="append">
      <tagletArtifact>
        <groupId>com.att.detsusl</groupId>
        <artifactId>artifact-name</artifactId>
        <version>${current.pom.version}</version>
      </tagletArtifact>
    </tagletArtifacts>
  </configuration>
</plugin>
With these minimal changes, I could run the build in each module, generating the javadoc, and examining the generated javadoc output in each module, verifying that it all worked.
However, the problem is, when I run "javadoc:aggregate-jar" in the root project, all of that already built output is ignored. It reruns the javadoc generation for all of the subprojects, also ignoring the appended tagletArtifacts list in each subproject pom.xml file. As a result, I get ClassNotFound errors when it tries to get the class file.
I could "fix" this by putting all of the subproject GAVs into the top-level "tagletArtifacts" list, but I definitely do not want to do that. I liked the ability to specify this in the subproject pom.xml (with combine.children="append") to make it work.
What I need is an overall javadoc package for all of the subprojects, with the taglet able to get access to the class file, without forcing the parent pom to know about all of its subprojects. How can I do this?
I'm facing the same problem with all aggregate goals. I checked the source code of maven-javadoc-plugin, and it turns out that the aggregate goals work by traversing the submodules and collecting source files, nothing more, thus completely ignoring any form of configuration specified in the submodules.
During execution, every submodule is completely ignored (source):
if ( isAggregator() && !project.isExecutionRoot() ) {
    return;
}
And during collection of source files the submodules are traversed (source):
if ( isAggregator() && project.isExecutionRoot() ) {
    for ( MavenProject subProject : reactorProjects ) {
        if ( subProject != project ) {
            List<String> sourceRoots = getProjectSourceRoots( subProject );
            // ...
So at the moment, there is no way to do this.
This is not easy to fix either, since the whole plugin works by composing a single call to the actual javadoc tool. If you wanted to respect the settings in the submodules as well, you would have to merge their configuration blocks. While that would work in your case with tagletArtifacts, it does not work for all the settings you can specify, e.g. any form of filter, and can therefore not be done in a generic way.
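So for now the only workaround seems to be the one you already ruled out: merging the configuration by hand in the root pom. Roughly, with hypothetical coordinates beyond those from your post:
<!-- In the aggregator pom, under the maven-javadoc-plugin configuration: -->
<tagletArtifacts>
  <tagletArtifact>
    <groupId>com.att.detsusl.taglets</groupId>
    <artifactId>validationJavadocTaglet</artifactId>
    <version>0.0.1-SNAPSHOT</version>
  </tagletArtifact>
  <!-- plus one entry per subproject whose classes the taglets need (made up): -->
  <tagletArtifact>
    <groupId>com.att.detsusl</groupId>
    <artifactId>subproject-a</artifactId>
    <version>${project.version}</version>
  </tagletArtifact>
</tagletArtifacts>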

jacoco only shows coverage for classes in the same module

I have a somewhat large multi-module Maven project. I have the unit tests in each module being processed by Jacoco. I have a separate child module doing "merge" and "report-aggregate", and this appears to be generating data. I'm even using the generated data in SonarQube. Most of my tests are using PowerMock, and I'm using offline instrumentation.
However, after looking closer at the coverage data, I see that it is leaving out coverage data for classes and methods that I know are being executed during tests. The pattern I see in every module is that it only reports coverage for a single class in each module, which is a class actually in the current module. Almost all of the tests also call out to other classes in other modules in the build, and coverage for those classes are never reported.
The following plugin configurations are in the parent pom used by each child module:
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.7.8</version>
  <executions>
    <execution>
      <id>default-instrument</id>
      <goals>
        <goal>instrument</goal>
      </goals>
    </execution>
    <execution>
      <id>default-restore-instrumented-classes</id>
      <goals>
        <goal>restore-instrumented-classes</goal>
      </goals>
    </execution>
    <execution>
      <id>default-report</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.19.1</version>
  <configuration>
    <argLine>-Xmx1024m</argLine>
    <includes>
      <include>**/*Test.java</include>
    </includes>
    <systemPropertyVariables>
      <jacoco-agent.destfile>${project.build.directory}/jacoco.exec</jacoco-agent.destfile>
      <running-unit-test>true</running-unit-test>
    </systemPropertyVariables>
  </configuration>
</plugin>
When I inspect the generated HTML results for each module, I find that it only reports results for the single class in the current module, and not the data for classes in other modules. From this, I would assume that how I do "merge" and "report-aggregate" in the separate child module is probably irrelevant to this problem.
The generated "jacoco.exec" file is binary, but I tried "cat"-ing one from one module just to see what ASCII text was visible. It showed only one occurrence of anything that looked like a file name, and that was the only file name reported in the HTML coverage report for that module.
I'm not sure what other information I can report.
Update:
I guess I can see pretty clearly now that when surefire runs unit tests, it uses the instrumented classes from the current module, but the uninstrumented classes from the maven artifacts. This is why I only see coverage for classes in the current module.
So it seems like I need a way to specify that the "target/generated-classes/jacoco" folder of each module the current module depends on is prepended to the classpath that surefire uses. I don't see a way to do that.
Alternatively, I see that the "instrument" goal has an "includes" configuration element. Should I be specifying paths to all of the "target/classes" directories for each of the modules that the current module depends on?
Recording code coverage for a class requires its instrumentation. The instrument goal performs instrumentation of the classes of the current module only.
all of the tests also call out to other classes in other modules
so those are the ones that are not instrumented, and, if I understood correctly, exactly the ones for which you are missing coverage.
If you don't use PowerMock for classes that come from other modules, but only for classes in the current module, then you can combine offline instrumentation with on-the-fly instrumentation using the agent. But in this case, make sure that the classes instrumented offline are explicitly excluded from instrumentation by the agent; otherwise the agent will throw IllegalStateException: Class ... is already instrumented.
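A sketch of that combination, assuming the offline-instrumented classes of the current module live under a com.example.currentmodule package (the package name is made up):
<execution>
  <id>default-prepare-agent</id>
  <goals>
    <goal>prepare-agent</goal>
  </goals>
  <configuration>
    <!-- Keep the agent away from classes already instrumented offline,
         otherwise it fails with "Class ... is already instrumented" -->
    <excludes>
      <exclude>com.example.currentmodule.*</exclude>
    </excludes>
  </configuration>
</execution>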
If you use PowerMock for classes that come from other modules, then this becomes more complex due to the strictness of Maven regarding manipulation of classpaths and dependencies. I doubt this can be easily achieved with one mvn command, but it seems possible with more than one:
instrument and run tests, but don't use restore-instrumented-classes
restore classes and generate report(s)
Unfortunately you haven't provided a complete example (https://stackoverflow.com/help/mcve) and I don't have time to prepare a full example to test this approach right now.
As a side note: the inability to simply use the agent comes from the fact that PowerMock bypasses any agent and reads class files from disk.

Localization in a GWT multi-module project

I have a GWT maven webapp project that used to consist of a single module. As a result of requirements evolution, I need to extract some of the code into separate modules to make them reusable. So far, this process was going well until I decided to extract localization code in order to use it in another project.
What I have is MyAppConstants and MyAppMessages interfaces with corresponding .properties files, which are used in client code by means of GWT.create(). I moved them to a separate module, added a Localization.gwt.xml file and specified the following inside pom.xml:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>gwt-maven-plugin</artifactId>
  <configuration>
    <module>com.myapp.Localization</module>
    <!-- Do not compile source files, just check them -->
    <validateOnly>true</validateOnly>
    <!-- i18n -->
    <i18nConstantsBundle>com.myapp.client.MyAppConstants_ru</i18nConstantsBundle>
    <i18nMessagesBundle>com.myapp.client.MyAppMessages_ru</i18nMessagesBundle>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>i18n</goal>
        <goal>resources</goal>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
In the main application module I simply inherited Localization.gwt.xml. As a result of compilation, I can see that the .cache.html files do not contain the localized constants and messages (they look like \u0410\u043B...) which they used to have. I suppose this happens because the GWT compiler doesn't see the source files (e.g., com.myapp.client.MyAppConstants_ru.java) in the .generated folder, where they normally reside after successful execution of the i18n phase of the Maven plugin. Instead, they can be found in localization.jar.
I feel like I'm missing something, because this doesn't seem like a task that should be hard to solve. What would be the proper way of handling such a scenario?
It turns out that in order to have proper localization, you need to have the .properties files on the classpath at the time of GWT compilation. Initially I had filtered them out of localization.jar, because their presence caused GWT compilation failures with messages like this:
Rebind result 'com.myapp.client.MyAppConstants_ru' must be a class
I dug into the gwt-dev.jar contents and found out that the compiler actually checks for the presence of the localization .properties files on the classpath to determine rebind results.
So my problem was solved by:
removing <goal>i18n</goal> and corresponding configuration in localization module
making sure the .properties files make their way into localization.jar (a sketch of this follows)
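A minimal sketch of that second point, assuming the .properties files live next to the interfaces under src/main/java (as is common for GWT modules; adjust the directory if yours sit elsewhere):
<build>
  <resources>
    <!-- Ship the i18n .properties files inside localization.jar -->
    <resource>
      <directory>src/main/java</directory>
      <includes>
        <include>**/*.properties</include>
      </includes>
    </resource>
  </resources>
</build>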
Which makes me wonder, what's the use of i18n goal of gwt-maven-plugin?

Maven compiler, only compile annotated classes

I've created a custom Java annotation (code below) in a Maven 2 project I'm working on:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface MYANNOTATION {}
At one part of the Maven build, I only want to compile classes annotated with this annotation, e.g:
@MYANNOTATION
public class MyClass {
// Code here
}
I'm currently using the Maven Compiler Plugin to restrict compilation based on package structure. My pom.xml resembles the one below, restricting compilation to classes in com.foo.bar.stuff and com.baz.foo.more. This is unsatisfactory, because when I add annotated classes to com.xyz.bar.foo, I must remember to define it in the pom.
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <executions>
        <execution>
          <id>default-compile</id>
          <phase>compile</phase>
          <goals>
            <goal>compile</goal>
          </goals>
          <configuration>
            <includes>
              <include>**/com/foo/bar/stuff/**</include>
              <include>**/com/baz/foo/more/**</include>
            </includes>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
Is there any way to define Maven to compile only classes that have been annotated with this annotation, not depending on where they are located in the package hierarchy?
(I'm trying to generate a metamodel from domain model classes so I can point to fields & methods without defining the names as String constants - and changing them manually when I refactor)
Edit: I am already doing annotation processing in another part of the build phase. The system works like this:
Compile classes in the specified packages
Using JAnnocessor, build metamodels from classes with @MYANNOTATION
Compile the rest of the classes
Dependencies from other classes to the metamodel classes prevent compiling everything in one go, unless we move the annotated classes to a different project and add a dependency to it. That's one possibility but can add complexity, because the current project structure appears to form a logical whole.
You can do something similar to what you want with annotation processing. I don't think there's anything Maven-specific you need to do, but you need to write an annotation processor that must either be part of a separate library or compiled separately.
The concept of annotation processing is explained pretty well in this blog entry: Code Generation using Annotation Processors in the Java language – part 2: Annotation Processors
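As for the build wiring: if the processor jar is a regular compile-scope dependency of the project, javac discovers it via META-INF/services; with newer versions of the compiler plugin you can also name it explicitly. A sketch, where the processor class is hypothetical:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <!-- Hypothetical processor that generates the metamodel classes -->
    <annotationProcessors>
      <annotationProcessor>com.example.MyMetamodelProcessor</annotationProcessor>
    </annotationProcessors>
  </configuration>
</plugin>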

Annotating CXF (wsdl2java) generated package

I need to add a package-level annotation (an XmlJavaTypeAdapters type adapter). The problem is that when I run wsdl2java it generates a package-info.java file for that package.
When I try to add my own package-info.java I get the error: "the type package-info is already defined".
Is there a way to inject my annotation into package-info.java? Or any other ideas?
thanks
After some research I used an external mapping file. For all who have a similar problem to mine, I have described below what I found.
If you are using cxf-codegen-plugin for generating source code from a WSDL, you can't use the solution with package-info.java. This is because the generated code will probably already contain this file. You also cannot add the annotation to your class, because it is generated. The only solution is to provide your own mapper.
First of all you have to write a custom mapper. After that you should define an .xjb mapping file, and finally add additional configuration to your pom.xml. You can read about the first two steps here.
To add external mapping file to cxf-codegen-plugin you have to add something like this to configuration node in plugin definition:
<defaultOptions>
  <bindingFiles>
    <bindingFile>${basedir}/src/main/resources/mapping.xjb</bindingFile>
  </bindingFiles>
  <noAddressBinding>true</noAddressBinding>
</defaultOptions>
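The mapping.xjb itself might look roughly like this; the adapter class and the mapped types below are assumptions for illustration (the linked write-up covers writing the actual mapper):
<jaxb:bindings version="2.1"
               xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
               xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
               jaxb:extensionBindingPrefixes="xjc">
  <jaxb:globalBindings>
    <!-- Hypothetical adapter mapping xs:dateTime to java.util.Date -->
    <xjc:javaType name="java.util.Date"
                  xmlType="xs:dateTime"
                  adapter="com.example.DateAdapter"
                  xmlns:xs="http://www.w3.org/2001/XMLSchema"/>
  </jaxb:globalBindings>
</jaxb:bindings>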
Note that you should not pass extra parameters to xjc as described here because it will not work.
Hope this will help anybody :)
I've never tried this, but you could try adding an -xjc-npa flag to the wsdl2java command. In theory, that tells XJC to not generate a package-info.java and instead stick all the namespaces and such on all the other elements where it's needed.
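If you want to try that with cxf-codegen-plugin rather than on the command line, the flag would go into the wsdlOption's extra arguments; untested, as noted, and the WSDL path here is made up:
<wsdlOptions>
  <wsdlOption>
    <wsdl>${basedir}/src/main/resources/service.wsdl</wsdl>
    <extraargs>
      <!-- -xjc-* arguments are forwarded by wsdl2java to XJC;
           -npa suppresses package-info.java generation -->
      <extraarg>-xjc-npa</extraarg>
    </extraargs>
  </wsdlOption>
</wsdlOptions>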
You can supply JAXB "bindings", either inline in the WSDL or as a separate external binding file, and JAXB will generate the appropriate adapters and the required package-level annotations. See this question for an example.
I needed to add an annotation to generated code as well. I used the maven-replacer-plugin to do this just after the java classes were generated. You could use this solution to modify any file that comes out.
Here's the relevant pom.xml bit:
<plugin>
  <groupId>com.google.code.maven-replacer-plugin</groupId>
  <artifactId>replacer</artifactId>
  <version>${replacer.plugin.version}</version>
  <executions>
    <execution>
      <phase>process-sources</phase>
      <goals>
        <goal>replace</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <filesToInclude>target/generated-sources/cxf/com/BLAH/client/api/v4/*.java</filesToInclude>
    <filesToExclude>target/generated-sources/cxf/com/BLAH/client/api/v4/ObjectFactory.java,
                    target/generated-sources/cxf/com/BLAH/client/api/v4/package-info.java,
    </filesToExclude>
    <replacements>
      <replacement>
        <!-- Add @XmlRootElement in front of "public class Blah" -->
        <token>public class (\w*)</token>
        <value>@XmlRootElement(name="$1")${line.separator}public class $1</value>
      </replacement>
      <replacement>
        <!-- Add the import for the XmlRootElement annotation to the file -->
        <token>import javax.xml.bind.annotation.XmlType;</token>
        <value>import javax.xml.bind.annotation.XmlType;${line.separator}import javax.xml.bind.annotation.XmlRootElement;</value>
      </replacement>
    </replacements>
  </configuration>
</plugin>
Hope this helps!
