Multi-JDK Maven builds using classifiers

Maven docs explicitly suggest classifiers as a solution for multiple JDK support:
The classifier allows to distinguish artifacts that were built from the same POM but differ in their content. It is some optional and arbitrary string that - if present - is appended to the artifact name just after the version number. As a motivation for this element, consider for example a project that offers an artifact targeting JRE 1.5 but at the same time also an artifact that still supports JRE 1.4. The first artifact could be equipped with the classifier jdk15 and the second one with jdk14 such that clients can choose which one to use.
I have never seen a working example of this. Is the documentation wrong, or is it somehow possible to actually make Maven build the same artifact multiple times with different JDKs (and obviously distinct source directories, since they will have different syntax (e.g. diamond or lambdas)) and, most importantly, deploy them together?
Seems like this kind of thing would be a basic requirement for potential support of JEP 238, too.

The documentation is not wrong. It is just giving an example of how classifiers can be applied, in this case by targeting several JREs.
As for how this can be done, there are several ways. See How to configure Maven to build two versions of an artifact, each one for a different target JRE for a related problem. You can also trigger different executions with Maven profiles. In this case, each profile triggers a different configuration of the maven-jar-plugin with a different classifier:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <version>2.6</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>jar</goal>
      </goals>
      <configuration>
        <classifier>jdk14</classifier>
      </configuration>
    </execution>
  </executions>
</plugin>
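For completeness, here is a minimal sketch of how the surrounding profiles might look. The profile ids, the per-JDK source directory property, and the second classifier are illustrative assumptions rather than part of the original answer; note that a profile's <build> section cannot set <sourceDirectory> directly, so a property referenced from the top-level <build> is used instead:
<!-- in the top-level <build>: <sourceDirectory>${jdk.source.dir}</sourceDirectory> -->
<profiles>
  <profile>
    <id>jdk14</id>
    <properties>
      <!-- hypothetical directory holding the JDK 1.4-compatible sources -->
      <jdk.source.dir>src/main/java-jdk14</jdk.source.dir>
    </properties>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <version>2.6</version>
          <executions>
            <execution>
              <phase>package</phase>
              <goals>
                <goal>jar</goal>
              </goals>
              <configuration>
                <classifier>jdk14</classifier>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
  <!-- a second profile (id jdk15, its own source directory, classifier jdk15) mirrors this one -->
</profiles>
Running the build once per profile (mvn package -P jdk14, then mvn package -P jdk15) should then produce two jars that differ only in their classifier.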

Related

In a collection of Maven subprojects that have common dependencies, can Maven be set up to link those dependencies vs. copying them?

I've got a collection of projects that all have a large third-party dependency in common. It seems like a waste of space to copy this jar to all the projects during the build; is it possible to have Maven just create a hard or soft link to a single cached copy?
This is not a duplicate of Maven multi-module: aggregate common dependencies in a single one?, which relates to how to manage common dependencies from a POM perspective. This question is about how to avoid copying the same dependency files to the target of multiple projects, instead creating links to a single instance to save space.
A simpler version of the question is: is there an equivalent to the maven-dependency-plugin's copy-dependencies goal that creates links instead of copying the files?
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <version>2.4</version>
  <executions>
    <execution>
      <id>copy-dependencies</id>
      <phase>package</phase>
      <goals>
        <goal>copy-dependencies</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Yes, Maven already does this. It puts all your dependencies in your .m2/repository/... folder; on Windows that is C:\Users\[your_name]\.m2\repository. So there is no need for linking with symlinks or anything like that. Just have your Maven projects resolve their dependencies from your Maven repository and the dependencies will be reused by all projects that need them. If you use the same Maven installation all the time, the Maven settings are the same and the repository folder is the same for every project, so you're done already.
All the different jars are saved there, one for every version you depend on. If multiple projects depend on the same dependency at the same version, it's that one jar being used for all your projects.
You can remove all dependencies and re-import them: if you used an old version before and use a newer version now, you have simply deleted the old version, and the new one is back where it's needed.
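If the concern is specifically the copies that copy-dependencies places in each module's target directory, one option the answer above doesn't mention is the same plugin's build-classpath goal, which writes a classpath file whose entries point at the jars in the shared local repository instead of copying them (for example, mvn dependency:build-classpath -Dmdep.outputFile=classpath.txt); whether that fits depends on how the build consumes the dependencies.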

Apache Spark -- using spark-submit throws a NoSuchMethodError

To submit a Spark application to a cluster, their documentation notes:
To do this, create an assembly jar (or “uber” jar) containing your code and its dependencies. Both sbt and Maven have assembly plugins. When creating assembly jars, list Spark and Hadoop as provided dependencies; these need not be bundled since they are provided by the cluster manager at runtime. -- http://spark.apache.org/docs/latest/submitting-applications.html
So, I added the Apache Maven Shade Plugin to my pom.xml file. (version 3.0.0)
And I changed my Spark dependency's scope to provided. (version 2.1.0)
(I also added the Apache Maven Assembly Plugin to ensure I was wrapping all of my dependencies in the jar when I run mvn clean package. I'm unsure if it's truly necessary.)
This is how spark-submit fails. It throws a NoSuchMethodError for a dependency I have (note that the code works from a local instance when compiled inside IntelliJ, assuming provided is removed).
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.createStarted()Lcom/google/common/base/Stopwatch;
The line of code that throws the error is irrelevant--it's simply the first line in my main method that creates a Stopwatch, part of the Google Guava utilities. (version 21.0)
Other solutions online suggest that it has to do with version conflicts of Guava, but I haven't had any luck yet with those suggestions. Any help would be appreciated, thank you.
If you take a look at the /jars subdirectory of the Spark 2.1.0 installation, you will likely see guava-14.0.1.jar. Per the API for the Guava Stopwatch#createStarted method you are using, createStarted did not exist until Guava 15.0. What is most likely happening is that the Spark process Classloader is finding the Spark-provided Guava 14.0.1 library before it finds the Guava 21.0 library packaged in your uberjar.
One possible resolution is to use the class-relocation feature provided by the Maven Shade plugin (which you're already using to construct your uberjar). Via "class relocation", Maven-Shade moves the Guava 21.0 classes (needed by your code) during the packaging of the uberjar from a pattern location reflecting their existing package name (e.g. com.google.common.base) to an arbitrary shadedPattern location, which you specify in the Shade configuration (e.g. myguava123.com.google.common.base).
The result is that the older and newer Guava libraries no longer share a package name, avoiding the runtime conflict.
Most likely you're having a dependency conflict, yes.
First you can check whether you have a dependency conflict when you build your jar. A quick way is to look inside the jar directly to see if the Stopwatch.class file is there and, by inspecting the bytecode, whether the createStarted method is present.
Otherwise you can also list the dependency tree and work from there : https://maven.apache.org/plugins/maven-dependency-plugin/examples/resolving-conflicts-using-the-dependency-tree.html
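For example, running mvn dependency:tree -Dincludes=com.google.guava (the includes filter narrows the output to Guava) should show which Guava versions are pulled in and through which dependencies.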
If it's not an issue with your jar, you might have a dependency issue due to a conflict between your spark installation and your jar.
Look in the lib and jars folders of your Spark installation. There you can see whether you have jars that include an alternate version of Guava that wouldn't support the createStarted() method of Stopwatch.
Applying the above answers, the problem can be solved with the following configuration:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.1.0</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>shade.com.google.common</shadedPattern>
          </relocation>
          <relocation>
            <pattern>com.google.thirdparty.publicsuffix</pattern>
            <shadedPattern>shade.com.google.thirdparty.publicsuffix</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>

Using a Maven library in an Android project

I have a library project which does certain work. The library is built using Maven.
I now want to include this library in an android project. I added the library jar as a compile dependency in gradle and I can successfully use the library in my android code.
I have JDK 8 installed and I build the library with it. But, as I have read, Android uses Java 7. Since the library is built using JDK 8, can this cause a problem?
If it can cause problems, I don't think building the library using JDK 7 would solve it either, since the library depends on other Maven libraries from an external Maven repository. Is there anything I can do about it?
This is a common JDK lifecycle problem. The good news is there are things you can do to make sure that everything in your build is compliant with a certain JDK.
First off, make sure that your module is indeed compiled for the latest JDK version you are willing to accept. You can set the compiler plugin to generate only bytecode that is compliant with a certain version, for instance JDK 7.
For example:
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.3</version>
      <configuration>
        <source>THE_JDK_VERSION_YOU_WANT</source>
        <target>THE_JDK_VERSION_YOU_WANT</target>
      </configuration>
    </plugin>
  </plugins>
</build>
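For the Android case in the question, THE_JDK_VERSION_YOU_WANT would be 1.7 for both source and target.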
NOTE
There is a drawback to this: even when a specific source and target are set on the compiler, the code may still use JDK APIs that aren't available in the target environment's JRE. For example, with JAVA_HOME set to JDK 8 and source and target set to 1.7, the code can still use (say) ConcurrentHashMap.mappingCount(), which came in JDK 8.
The answer to this problem is to use the animal-sniffer plugin. This plugin will check your code for any usage of disallowed APIs, such as JDK 8 APIs, if you configure it that way. The plugin was previously hosted by Codehaus, but it will come up if you Google around a bit. I can provide you with a working example tomorrow when I get back to work.
Next, as you pointed out, you have your dependencies. Fortunately, the enforcer plugin can use an extended rule-set called <enforceBytecodeVersion>, which checks that all of your dependencies are also compliant with a specific bytecode version. All the details are available here.
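As an illustration, a minimal sketch of such an enforcer configuration (the plugin and rule-set versions here are illustrative; check for the current releases):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.4.1</version>
  <dependencies>
    <!-- the <enforceBytecodeVersion> rule ships in the extra-enforcer-rules artifact -->
    <dependency>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>extra-enforcer-rules</artifactId>
      <version>1.0-beta-6</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>enforce-bytecode-version</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <enforceBytecodeVersion>
            <!-- fail the build if any dependency contains bytecode newer than JDK 7 -->
            <maxJdkVersion>1.7</maxJdkVersion>
          </enforceBytecodeVersion>
        </rules>
        <fail>true</fail>
      </configuration>
    </execution>
  </executions>
</plugin>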
EDITED
Here comes the configuration for the animal-sniffer plugin. There's a newer version available (1.14), but it didn't work for me. Maybe you'll have better luck. The API signatures for the JDK are available at Maven Central.
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>animal-sniffer-maven-plugin</artifactId>
  <version>1.13</version>
  <executions>
    <execution>
      <id>check-for-jdk6-compliance</id>
      <phase>test</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <signature>
      <groupId>org.codehaus.mojo.signature</groupId>
      <artifactId>java16</artifactId>
      <version>1.1</version>
    </signature>
  </configuration>
</plugin>

Using Maven for non-Java projects (overriding clean/compile/install goals)

I have a good(ish) understanding of using Maven for Java/WebApp projects but only to the point of following the default goals/lifecycle.
However I now have a Backup project which is not a Java project at all. I was thinking of configuring it with Maven to keep some consistency but am not sure how I override the main Maven goals/phases for my bespoke processing.
The Backup project needs to do the following:
'build' - initially, the backup outputs will be a MySQL database dump file and a zip exported from an existing WebApp. But I want it to be flexible, so calling an Ant file to do the actual work (creating the dump, calling the WebApp, or doing whatever in the future) seems sensible. The output files could then be copied into the target directory.
'install' - publish the output files to a local repository, preferably with a date-timestamp version number instead of the usual 1.0.0-SNAPSHOT version. I'd like to think that Maven can cope with an artefact being a collection of files rather than a single jar/war, but I'm not sure about this.
My pom.xml declares the packaging as 'pom', as 'jar' and 'war' don't seem appropriate here.
I then want other projects to be able to have a dependency on this Backup project so they can get the latest backup artefacts if required.
1) how do I override the maven 'compile' goal to call an ant build file?
2) how do I override the maven 'install' goal to publish all files in the target directory but as a single artefact?
Any help/guidance appreciated.
You can use the maven-antrun-plugin to achieve this.
One example usage:
<plugin>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>bundle-virgo</id>
      <phase>package</phase>
      <configuration>
        <tasks>
          <ant antfile="<path to build.xml>" target="compile"/>
        </tasks>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
You can change the phase parameter to run it at different Maven phases, such as package or compile.
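The question's second part (publishing everything in target as a single artifact) isn't covered above. One possible sketch, assuming the Ant step first zips the dump and export into a single archive (the file name and type below are assumptions, not from the original answer), is to attach that archive with the build-helper-maven-plugin:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>1.9.1</version>
  <executions>
    <execution>
      <id>attach-backup</id>
      <phase>package</phase>
      <goals>
        <goal>attach-artifact</goal>
      </goals>
      <configuration>
        <artifacts>
          <artifact>
            <!-- hypothetical archive produced by the Ant build -->
            <file>${project.build.directory}/backup.zip</file>
            <type>zip</type>
          </artifact>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>
The attached zip is then installed and deployed alongside the pom, so other projects can depend on it by specifying <type>zip</type> in the dependency.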

Preprocessing source code as a part of a maven build

I have a lot of Java source code that requires custom pre-processing. I'd like to be rid of it, but that's not feasible right now, so I'm stuck with it. Given that I have an unfortunate problem that shouldn't have existed in the first place, how do I solve it using Maven?
(For the full story, I'm replacing a python-based build system with a maven one, so one improvement at a time please. Fixing the non-standard source code is harder, and will come later.)
Is it possible using any existing Maven plugins to actually alter the source files during compile time? (Obviously leaving the original, unprocessed code alone)
To be clear, by preprocessing I mean preprocessing in the same sense as antenna or a C compiler would preprocess the code, and by custom I mean that it's completely proprietary and looks nothing at all like C or antenna preprocessing.
There is a Java preprocessor with Maven support: java-comment-preprocessor.
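As an illustration, a minimal sketch of wiring it into a build; the coordinates and version here are from memory and the default phase binding is an assumption, so verify them against the project's documentation:
<plugin>
  <groupId>com.igormaznitsa</groupId>
  <artifactId>jcp</artifactId>
  <version>6.1.4</version>
  <executions>
    <execution>
      <id>preprocess-sources</id>
      <goals>
        <!-- preprocesses the sources before they reach the compiler -->
        <goal>preprocess</goal>
      </goals>
    </execution>
  </executions>
</plugin>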
This is something that is very doable and I've done something very similar in the past.
An example from a project of mine, where I used the antrun plug-in to execute an external program to process sources:
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <executions>
        <execution>
          <id>process-sources</id>
          <phase>process-sources</phase>
          <configuration>
            <tasks>
              <!-- Put the code to run the program here -->
            </tasks>
          </configuration>
          <goals>
            <goal>run</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
Note the <phase> tag, where I indicate the phase in which this runs. Documentation for the Maven lifecycles is here. Another option is to write your own Maven plug-in that does this. It's a little more complex, but also doable. You would still configure it similarly to what I have documented here.
Maven plugins can hook into the build process at pre-compile time, yes. As for whether any existing ones will help, I have no idea.
I wrote a Maven plugin a couple of years ago as part of a university project, though, and while the documentation was a bit lacking at the time, it wasn't too complicated. So you may look into rolling your own; there should be plenty of open-source projects you can take ideas or code from (ours was BSD-licensed, for instance...).
