The Java application I develop has to run on servers I have no direct access to. Sometimes dependency conflicts arise: on some servers the app works perfectly, while on others the same application fails, and the errors indicate a library version conflict. I would like the application to report the conflict rather than just crash with NoSuchFieldError, NoSuchMethodError, NoClassDefFoundError, etc.
I can obtain the list of libraries on the build platform with mvn dependency:tree.
So I need the application to read the library versions on the platform where it runs, compare them with the list from the build platform, and report any mismatches. How can the application discover its libraries at runtime? Or is there a more convenient way to automate dependency conflict detection?
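What I have in mind is something like the sketch below: Maven writes a pom.properties file under META-INF/maven/<groupId>/<artifactId>/ into every jar it builds, so the application could scan its own classpath and print groupId:artifactId:version for each library, ready to diff against the dependency:tree output. (This is only a rough sketch; it assumes the app runs from plain jars on the classpath - a fat jar collapses everything into one archive - and the class name is just illustrative.)

import java.io.File;
import java.io.InputStream;
import java.util.Properties;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class DependencyReport {
    public static void main(String[] args) throws Exception {
        // Scan every jar on the classpath for the pom.properties files
        // that Maven embeds under META-INF/maven/<groupId>/<artifactId>/.
        for (String path : System.getProperty("java.class.path")
                                 .split(File.pathSeparator)) {
            if (!path.endsWith(".jar")) continue;
            try (JarFile jar = new JarFile(path)) {
                jar.stream()
                   .filter(e -> e.getName().startsWith("META-INF/maven/")
                             && e.getName().endsWith("pom.properties"))
                   .forEach(e -> print(jar, e));
            }
        }
    }

    private static void print(JarFile jar, JarEntry entry) {
        try (InputStream in = jar.getInputStream(entry)) {
            Properties p = new Properties();
            p.load(in);
            // groupId:artifactId:version, one line per dependency
            System.out.println(p.getProperty("groupId") + ":"
                    + p.getProperty("artifactId") + ":"
                    + p.getProperty("version"));
        } catch (Exception ex) {
            System.err.println("could not read " + entry.getName() + ": " + ex);
        }
    }
}

But I don't know whether this approach is reliable, or whether a standard tool already does this - hence the question.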
I believe you need to resolve this kind of problem before deployment; you cannot rely on a runtime check to save you. So I would suggest solving it at a much earlier stage.
If you use Maven to manage dependencies, you can try the following:
<plugins>
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-enforcer-plugin</artifactId>
    <version>1.4.1</version>
    <configuration>
      <rules>
        <dependencyConvergence/>
      </rules>
    </configuration>
  </plugin>
</plugins>
Then run:
mvn enforcer:enforce
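Note that with only a <configuration> block, the rule runs when you invoke the goal by hand as above; if you additionally bind the enforce goal in an <executions> block, dependency convergence is checked automatically on every build.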
The maven-enforcer-plugin documentation is good material on this rule.
This is a really weird one. I have a Kotlin web service that was originally written as a hybrid app of both Kotlin and Java, but I've recently migrated it to pure Kotlin (although many of its libraries are still in Java). The framework I'm using is sparkjava, and I'm using Maven to manage dependencies and packaging. In the past the service was built with manually included dependencies as JAR files, using an IntelliJ configuration; this was horribly messy and difficult to reproduce, so I moved all the dependencies into Maven and set up a process for this. This is where things get weird:
I included this plugin in my pom.xml to manage the creation of the fat JAR; it looks like this:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>3.1.1</version>
  <configuration>
    <archive>
      <manifest>
        <mainClass>unifessd.MainKt</mainClass>
      </manifest>
    </archive>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
When I run this configuration, however, I get a JAR that won't execute. I didn't think this was a major problem, as running the "package" lifecycle in Maven does produce an executable JAR. The resulting JAR will happily run on my development machine (macOS Big Sur) and passes all my external testing scripts. However, when I deploy the very same JAR to my production environment, which is a FreeBSD server on AWS, it starts up correctly, but whenever I make a request I get the following error:
[qtp248514407-20] WARN org.eclipse.jetty.server.HttpChannel - //<redacted.com>/moderation/users/administrators
java.lang.NoClassDefFoundError: Could not initialize class de.mkammerer.argon2.jna.Argon2Library
    at de.mkammerer.argon2.BaseArgon2.hashBytes(BaseArgon2.java:267)
    at de.mkammerer.argon2.BaseArgon2.hashBytes(BaseArgon2.java:259)
    at de.mkammerer.argon2.BaseArgon2.hash(BaseArgon2.java:66)
    at de.mkammerer.argon2.BaseArgon2.hash(BaseArgon2.java:49)
    at [...]
I've truncated the stack trace to keep things concise, but all it's doing before that is opening the appropriate DAO and hashing the password attempt. The offending class is of course from de.mkammerer.argon2, which is a dependency I use to hash passwords using the argon2 algorithm. This has me really stumped for the following reasons:
When this dependency was linked in manually using a JAR in IntelliJ, it worked absolutely fine in production.
Even though the class fails to load in production, it works fine locally despite the packages being identical.
macOS and FreeBSD aren't exactly a million miles apart in terms of how they're put together, so why are they behaving so differently?
A few other points in my efforts to debug this:
I've tried linking in my argon2 library in the old way, and it's still failing in the same fashion.
IntelliJ isn't recognising the main class of my Kotlin app any more if I try to create an artifact without Maven. This is really weird: I can set up a Kotlin build-and-run configuration just fine by specifying unifessd.MainKt as my main class, but when it comes to building an artifact it's simply not having it. It doesn't appear in the artifact creation dialogue, and when I specify it as my Main-Class in MANIFEST.MF, IntelliJ tells me it's an invalid main class. What on Earth is going on here? It'll run just fine when I tell Maven that's my main class and package it in a JAR, even in the faulty production environment.
Robert and dan1st were correct: the problem was that my argon2 library had a dependency on JNA and native code that was incompatible with FreeBSD. I tested the JAR on an Ubuntu server to confirm that this was the case, and the program ran correctly.
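For anyone hitting the same thing: a fail-fast probe at startup would have surfaced this before the first real request. Here is a minimal sketch against the de.mkammerer argon2 library (the factory and hash signatures are as I recall them from that library; check them against the version you actually use):

import de.mkammerer.argon2.Argon2;
import de.mkammerer.argon2.Argon2Factory;

public class StartupChecks {
    // Fail fast if the argon2 native code cannot be loaded on this
    // platform, instead of discovering it on the first login request.
    public static void verifyArgon2() {
        try {
            Argon2 argon2 = Argon2Factory.create();
            // A throwaway hash forces the JNA-backed native library to load.
            argon2.hash(2, 1024, 1, "startup-check".toCharArray());
        } catch (Throwable t) { // NoClassDefFoundError is an Error, not an Exception
            throw new IllegalStateException(
                    "argon2 native library unavailable on this platform", t);
        }
    }
}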
To submit a Spark application to a cluster, their documentation notes:
To do this, create an assembly jar (or “uber” jar) containing your code and its dependencies. Both sbt and Maven have assembly plugins. When creating assembly jars, list Spark and Hadoop as provided dependencies; these need not be bundled since they are provided by the cluster manager at runtime. -- http://spark.apache.org/docs/latest/submitting-applications.html
So, I added the Apache Maven Shade Plugin to my pom.xml file. (version 3.0.0)
And I turned my Spark dependency's scope into provided. (version 2.1.0)
(I also added the Apache Maven Assembly Plugin to ensure I was wrapping all of my dependencies in the jar when I run mvn clean package. I'm unsure if it's truly necessary.)
This is how spark-submit fails. It throws a NoSuchMethodError for one of my dependencies (note that the code works from a local instance when compiled inside IntelliJ, assuming that provided is removed).
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.createStarted()Lcom/google/common/base/Stopwatch;
The line of code that throws the error is irrelevant--it's simply the first line in my main method that creates a Stopwatch, part of the Google Guava utilities. (version 21.0)
Other solutions online suggest that it has to do with version conflicts of Guava, but I haven't had any luck yet with those suggestions. Any help would be appreciated, thank you.
If you take a look at the /jars subdirectory of the Spark 2.1.0 installation, you will likely see guava-14.0.1.jar. Per the API for the Guava Stopwatch#createStarted method you are using, createStarted did not exist until Guava 15.0. What is most likely happening is that the Spark process Classloader is finding the Spark-provided Guava 14.0.1 library before it finds the Guava 21.0 library packaged in your uberjar.
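One quick way to confirm which Guava actually won at runtime is to log the code source of the class in question; a minimal sketch (run it from inside the submitted job, since that is where the Spark classloader applies - the class name here is illustrative):

import com.google.common.base.Stopwatch;

public class WhichGuava {
    public static void main(String[] args) {
        // Prints the jar Stopwatch was loaded from: Spark's bundled
        // guava-14.0.1.jar or the Guava 21.0 packaged in your uberjar.
        System.out.println(Stopwatch.class.getProtectionDomain()
                .getCodeSource().getLocation());
    }
}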
One possible resolution is to use the class-relocation feature provided by the Maven Shade plugin (which you're already using to construct your uberjar). Via "class relocation", Maven-Shade moves the Guava 21.0 classes (needed by your code) during the packaging of the uberjar from a pattern location reflecting their existing package name (e.g. com.google.common.base) to an arbitrary shadedPattern location, which you specify in the Shade configuration (e.g. myguava123.com.google.common.base).
The result is that the older and newer Guava libraries no longer share a package name, avoiding the runtime conflict.
Most likely you're having a dependency conflict, yes.
First you can look if you have a dependency conflict when you build your jar. A quick way is to look in your jar directly to see if the Stopwatch.class file is there, and if, by looking at the bytecode, it appears that the method createStarted is there.
Otherwise you can also list the dependency tree and work from there: https://maven.apache.org/plugins/maven-dependency-plugin/examples/resolving-conflicts-using-the-dependency-tree.html
If it's not an issue with your jar, you might have a dependency issue due to a conflict between your spark installation and your jar.
Look in the lib and jars folders of your Spark installation. There you can see whether you have jars that include an alternate version of Guava that wouldn't support the createStarted() method of Stopwatch.
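If you'd rather probe programmatically than read bytecode, a reflection check does the same job; a small sketch (class name is illustrative):

import com.google.common.base.Stopwatch;

public class GuavaProbe {
    public static void main(String[] args) {
        try {
            // createStarted() was only added in Guava 15.0, so this throws
            // on the Guava 14.0.1 that ships with Spark 2.1.0.
            Stopwatch.class.getMethod("createStarted");
            System.out.println("createStarted is present");
        } catch (NoSuchMethodException e) {
            System.out.println("this Guava predates 15.0: no createStarted()");
        }
    }
}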
Applying the above answers, I solved the problem with the following config:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.1.0</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>shade.com.google.common</shadedPattern>
          </relocation>
          <relocation>
            <pattern>com.google.thirdparty.publicsuffix</pattern>
            <shadedPattern>shade.com.google.thirdparty.publicsuffix</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
I have a Maven project, which uses JAR packaging. When I run the install phase, it will install both Project-1.0.jar and Project-1.0.pom files in my local repository.
Now I would like the JAR to be built with a classifier. This is easy enough: I just add the line to my jar plugin configuration:
<plugin>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <classifier>whatever</classifier>
    [...]
  </configuration>
</plugin>
Now, this works in that it installs Project-1.0-whatever.jar in my repo, but no longer installs a POM.
In case it matters, I want to use this feature in conjunction with profiles, i.e. I want to build JARs with different classifiers with different profiles.
The reason I want the POM is because I have other projects depending on this one. When I build one of these, it will try to find a POM for this dependency. If it can't, it will happily use the JAR, but that is not an acceptable solution for me for a couple of reasons:
It's bad enough that it will try to contact external repos and look for it, but even worse, we use a shared repo, so it will download the POM from the shared repo, which may not be what I want - for example if I just made changes to the POM and am trying to test them.
Is there a solution, or can anyone suggest a reasonable workaround?
EDIT: I just discovered that the issue affects Maven 2.2.1, but not Maven 3.0.5. This may therefore be a bug or a difference in features between versions. I would still be interested in solutions/workarounds for Maven 2, as migrating the project to Maven 3 is a complicated affair and not likely to happen.
The reason turned out to be nothing to do with Maven version as such, and everything to do with the version of maven-install-plugin. It turns out versions prior to 2.3 have this bug.
Old installations of Maven are somewhat likely to suffer from this issue: Maven 2 will use whatever version of a plugin it already has unless a version is explicitly specified in the POM, and since maven-install-plugin is included by default, it's quite possible for a POM not to specify it at all (as in my case).
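In other words, explicitly pinning maven-install-plugin to version 2.3 or later in the <build> section of the POM should work around the bug even on Maven 2.2.1, without migrating the project to Maven 3.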
I am really new to Maven and a bit confused about the dependency feature. I know that I can add a dependency in the POM file like this:
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.6.1</version>
</dependency>
What does this actually mean? Does it mean that I don't need to import the slf4j jar files into my project? If so, how does my project get access to those libraries?
I have read about dependencies on the Maven site, but it didn't help me much.
Can someone explain it in a simpler way?
Thanks
Nutshell: It means your project has a dependency on slf4j, version 1.6.1.
Furthermore:
If you build your project with Maven (or your IDE is Maven-aware), you don't have to do anything else in order to use slf4j. (Aside from normal source-code considerations, like a reasonable import statement, etc.)
slf4j v. 1.6.1 will be retrieved from a default Maven repository to your local repository, meaning...
... ~/.m2/repository is your repository. slf4j will be put in ~/.m2/repository/org/slf4j/slf4j-api/1.6.1 and will include (in general) a jar file, a pom file, and a hash file.
Slf4j's dependencies will be downloaded into your local repository as well.
Dependencies of those dependencies will be downloaded, ad infinitum/ad nauseam. (This is the source of "first use of a library downloads the internet" jokes when there are a lot of dependencies; that's not the case for slf4j.) This is "transitive dependency management" -- one of Maven's original purposes.
If you were not using maven, you would manually download and use the dependencies that you needed for your project. You would probably place them in a lib folder and specify this location in your IDE as well as your build tool.
maven manages these dependencies for you. You specify the dependency your project needs in the prescribed format and maven downloads them for you from the internet and manages them. When building your project, maven knows where it has placed these dependencies and uses them. Most IDEs also know where these dependencies are, when they discover that it is a maven project.
Why is this a big deal? Typically most open source libraries release newer versions on a regular basis. If your project uses these, then each time a newer version is needed, you would need to manually download it and manage it. More importantly, each dependency, in turn may have other dependencies (called transitive dependency). If you do not use maven, you would need to identify, download and manage these transitive dependencies as well.
This becomes more complex the more such dependencies your project uses. It is possible that two dependencies end up requiring different versions of a dependency common to them (for example, one library needing Guava 14 while another needs Guava 21).
When compiling your project, Maven will download the corresponding .jar file from a repository, usually the central repository (you can configure different repositories, either for mirroring or for your own libraries which aren't available on the central repositories).
If your IDE knows about Maven, it will parse the pom and either download the dependencies itself or ask Maven to do so. Then it will open the dependencies' jars, and this is how you get autocompletion: the IDE "imports" the jars for you behind the scenes.
The repository contains not only the ".jar" file for the dependency, but also a ".pom" file, which describes its dependencies. So, maven will recursively download its dependencies, and you will get all the jars you need to compile your software.
Then, when you try to run your software, you have to tell the JVM where to find these dependencies (i.e., you have to put them on the classpath).
What I usually do is copy the dependencies to a target/lib/ directory, so it is easy to deploy the software and to launch it. To do so, you can use the maven-dependency-plugin, which you specify in the <build>:
<build>
  <plugins>
    <plugin>
      <artifactId>maven-dependency-plugin</artifactId>
      <version>2.1</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>copy-dependencies</goal>
          </goals>
          <configuration>
            <outputDirectory>${project.build.directory}/lib</outputDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
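With the dependencies copied there, launching becomes a one-liner such as java -cp "target/myproject.jar:target/lib/*" com.example.Main (the artifact name and main class here are placeholders; use ; instead of : as the separator on Windows).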
There are a variety of servers on the internet that host artifacts (jars) that you can download as part of a Maven build. You can add dependencies as shown above to describe what jars you need in order to build your code. When Maven goes to build, it will contact one of these servers, download the jar to your computer, and place it in a local repository, usually
${user_home}/.m2/repository
The servers that Maven contacts can be configured in your Maven project POM file (the central repository is used by default), under a section like
<repositories>
  <repository>
  </repository>
</repositories>
The prototypical server can be seen at repo1.maven.org
The nice thing about Maven is that if a jar you list is needed, it will pull not only that jar but also any jars that that jar needs. And since the jars are pulled to your machine, it only downloads them when it can't find them locally, thus not slowing down your build every time (just the first time).
This question is not really about best practices or architecture, but about how to specifically configure Hudson and Maven to accomplish what I want. I'm a bit lost.
I have a Java application which uses SWT, and I need to build copies for different platforms. For now, all I need is Linux i386 and Linux amd64, but in the future, I need to add Windows x86/x64 as well, so I want to make sure I set it up "right" the first time around.
My application has all of the dependencies and other information listed in the Project pom.xml, including the different SWT jars to grab depending on OS, arch, and family.
My question is, how do I do builds for both linux i386 and linux amd64 with a minimal amount of configuration duplication? Right now I'm doing the following:
Project specifies all dependencies in pom.xml, and this project is set to build in Hudson and deploy the resulting .jar to Nexus
Builder-linux-i386 runs after Project and specifies any JNI files for i386 and uses the de.tarent maven-pkg-plugin to grab the project jar from Nexus and assemble it along with all dependencies into a single 'fat' jar file, and then into a .deb file for installation.
Builder-linux-amd64 does the same, but for amd64 files
I have been trying to specify which dependencies to use in the Builder projects by adding -P profilename to their Hudson projects, where profilename is a profile named in the Project pom. Maven doesn't seem to like this and prints that it is not activating that profile. It only uses the default profile from Project's pom.
What is the correct way to set this up? I want to have all of my dependencies specified in my Project pom, and have a Hudson project which compiles the jar for that project and deploy it to Nexus, and then independent projects which grab that jar and assemble it along with platform-specific files for release. I don't want to build the entire original project repeatedly, and I don't want to have a ton of duplicated configuration info or copy-pasted poms.
I have it working for unix-amd64 only because that's what the build machine is, so Maven targets that architecture. Also, I feel like the setup isn't as clean as it could be. Advice?
You have a syntax error. It needs to be -Pprofilename. It works for me this way.
Edit
Since the profile is read, there might be a syntax error in your profile configuration. I found a profile in one of the projects that I integrate into our CI environment. It defines some dependencies; it might help you.
<profile>
  <id>junit</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <skip>false</skip>
          <testNGArtifactName>none:none</testNGArtifactName>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.4</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</profile>
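You would activate it with mvn -Pjunit followed by your usual goals; mvn help:active-profiles is also handy for checking which profiles Maven actually picked up.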
Profiles should work the way you described (you could post another question about this).
But (at least for web applications) there is another way: try using classifiers instead of profiles to build for different environments. -- You can have a look at this blog: http://blog.jayway.com/2010/01/21/one-artifact-with-multiple-configurations-in-maven/
The purpose of this solution is that you are able to build for all environments at once, if you want (controlled by a profile).
The builder projects do not see the profiles from the main Project because it is not actually a parent. I cannot define it as a parent in the builder projects because my projects are not set up that way, and I'm building using variables like ${SVN_REVISION}, which Maven does not like.
I have given up and instead copy-pasted the profiles into the 'builder' projects. This isn't the prettiest solution, but for now it works.