I have a good(ish) understanding of using Maven for Java/WebApp projects but only to the point of following the default goals/lifecycle.
However I now have a Backup project which is not a Java project at all. I was thinking of configuring it with Maven to keep some consistency but am not sure how I override the main Maven goals/phases for my bespoke processing.
The Backup project needs to do the following:
'build' - initially, the backup outputs will be a mysql database dump file and a zip exported from an existing WebApp. But I want it to be flexible so calling an ant file to do the actual work (creating the dump, calling the WebApp, or doing whatever in the future) seems sensible. The output files could then be copied into the target directory.
'install' - publish the output files to a local repository, preferably providing a datetimestamp version number instead of the usual 1.0.0-SNAPSHOT version. I'd like to think that Maven can cope with an artefact being a collection of files, rather than a single jar/war, but not sure on this.
My pom.xml declares the packaging as 'pom', as 'jar' and 'war' don't seem appropriate here.
I then want other projects to be able to have a dependency on this Backup project so they can get the latest backup artefacts if required.
1) How do I override the Maven 'compile' goal to call an Ant build file?
2) How do I override the Maven 'install' goal to publish all files in the target directory, but as a single artefact?
Any help/guidance appreciated.
You can use the maven-antrun-plugin to achieve this.
One example usage:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>bundle-virgo</id>
      <phase>package</phase>
      <configuration>
        <tasks>
          <ant antfile="<path to build.xml>" target="compile"/>
        </tasks>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
You can change the phase parameter to bind the Ant call to a different Maven phase (package, compile, etc.).
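For the 'install' half of the question, one possible sketch (not the only approach) is to have Ant zip the backup outputs and then attach that zip with the build-helper-maven-plugin, so everything is installed and deployed as a single artifact. The file name backup.zip below is just an illustrative assumption.

```xml
<!-- Sketch: attach a zip of the backup outputs as this project's artifact.
     Assumes the Ant build has already produced target/backup.zip. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-backup</id>
      <phase>package</phase>
      <goals>
        <goal>attach-artifact</goal>
      </goals>
      <configuration>
        <artifacts>
          <artifact>
            <file>${project.build.directory}/backup.zip</file>
            <type>zip</type>
          </artifact>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Other projects can then depend on the backup with `<type>zip</type>` in their dependency declaration.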
Related
I've got a collection of projects that all have a large third party dependency in common, it seems like a waste of space to copy this jar to all the projects during the build, is it possible to have maven just create a hard or soft link to a single cached copy?
This is not a duplicate of "Maven multi-module: aggregate common dependencies in a single one?", which relates to how to manage common dependencies from a pom perspective. This is about how to avoid copying the same dependent files to the target of multiple projects, instead creating links to a single instance to save space.
Simpler version of the question: is there an equivalent to the maven-dependency-plugin's copy-dependencies goal that creates links instead of copying the files?
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <version>2.4</version>
  <executions>
    <execution>
      <id>copy-dependencies</id>
      <phase>package</phase>
      <goals>
        <goal>copy-dependencies</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Yes, Maven already does this. It puts all your dependencies in your .m2/repository folder; on Windows that's C:\Users\[your_name]\.m2\repository. So there is no need to link them with symlinks or anything like that. Just have your Maven project resolve its dependencies from your Maven repository, and they will be reused by all projects that need them. If you use the same Maven installation all the time, the Maven settings are the same and the repository folder is the same for every project, so you're done already.
All the different jars are saved there, so every single version you depend on. If more projects depend on the same dependency, with the same version, it's that one jar being used for all your projects.
You can remove all dependencies and re-import them: if you used an old version before and use a newer version now, you just delete the old version, and the new one is downloaded where it's needed.
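If the aim is specifically to skip the copying step of copy-dependencies, one alternative sketch is the same plugin's build-classpath goal, which writes out a classpath file pointing at the jars inside the local repository rather than copying them anywhere:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>build-classpath</id>
      <phase>package</phase>
      <goals>
        <goal>build-classpath</goal>
      </goals>
      <configuration>
        <!-- The jars stay in ~/.m2/repository; only their absolute
             paths are written to this file. -->
        <outputFile>${project.build.directory}/classpath.txt</outputFile>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Anything that launches the application can then read classpath.txt instead of pointing at copied jars, which gets you the "single cached copy" behaviour without links.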
I run an open source library and am considering having it fully embrace Maven and uploading it to a central repository so that people can easily add it to their projects.
The problem is that it depends on a couple of older libraries that do not exist on any Maven repos. Currently, that means a pom file has to use the system scope of the dependency. I've also read about creating a local repository for the project to install the 3rd party libraries.
However, my impression is that neither of these approaches will work well when I deploy my library to a Maven repository. That is, if it depends on external "system" or local repositories, then when someone adds my library to their pom file, they're not actually done. They also have to download the 3rd party libraries and manually install them or add them to their own local repository.
What I'd like to have happen is for these couple of 3rd party libraries to simply be included in the jar file that I deploy to the central repository. That way, if someone adds my library to their pom file, they really are done and don't have to worry about getting and installing the 3rd party libraries. Is it possible to do that?
First off, I'll start by saying that you should back away as far as possible from the system scope. You can refer to this answer for more information.
A way to circumvent your problem is indeed to include in the deployed JAR all the libraries that aren't present in Maven Central. (Let's say you have installed those libraries in your local repository.) You can do that with the maven-shade-plugin, which is a plugin used to create uber jars. The attribute artifactSet controls what will be included in the shaded JAR. In the following snippet, only the two dependencies groupId:artifactId and groupId2:artifactId2 will be included.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <artifactSet>
          <includes>
            <include>groupId:artifactId</include>
            <include>groupId2:artifactId2</include>
          </includes>
        </artifactSet>
      </configuration>
    </execution>
  </executions>
</plugin>
By default, the shaded JAR will replace your main artifact, and this is the JAR that will be deployed. The POM that is deployed will not contain dependency entries for the artifacts that were included in the shaded JAR; as such, a client depending on your deployed artifact won't transitively depend on them.
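If the embedded libraries could clash with copies your users already have on their classpath, the shade plugin can also relocate their packages inside the uber jar. In this sketch, org.thirdparty and com.mylib.shaded are purely illustrative names:

```xml
<configuration>
  <relocations>
    <relocation>
      <!-- Move the bundled third-party classes under your own namespace
           so they cannot conflict with another copy on the classpath. -->
      <pattern>org.thirdparty</pattern>
      <shadedPattern>com.mylib.shaded.org.thirdparty</shadedPattern>
    </relocation>
  </relocations>
</configuration>
```

The bytecode of the shaded classes is rewritten to reference the new package, so the embedded copy is fully isolated.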
Today I have just created a Java Library. I created it using a Main class, since IntelliJ IDEA 14 asked me to add one. However I want it to be a normal library, without any Main classes. Is it possible to create a jar file from such a project without having a single class with the main method? If so, how do you create such a jar.
It just seems a bit silly to have a main method if you never use it.
Use a build tool like Maven (no IDE dependencies, but it can be called from the IDE for convenience) with the shade plugin to create an 'uber' JAR (one that includes all needed dependencies in a single final JAR for the project)...
"pom.xml"
...
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <!-- Run shade goal on package phase -->
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Documentation to Shade plugin:
https://maven.apache.org/plugins/maven-shade-plugin/
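Note that the shaded JAR does not need a main class at all; a Main-Class manifest entry only matters if you want `java -jar` to work. If you ever do want an executable jar later, a sketch using shade's ManifestResourceTransformer would look like this (com.example.Main is a placeholder):

```xml
<configuration>
  <transformers>
    <!-- Only needed for an executable jar; a plain library omits this. -->
    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
      <mainClass>com.example.Main</mainClass>
    </transformer>
  </transformers>
</configuration>
```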
You can do it in a few ways, for example from the command line, from the IDE, or with Maven or another build tool. I'll describe two:
Command line:
You can create a jar file from the command line (without an IDE). Here is the reference: https://docs.oracle.com/javase/tutorial/deployment/jar/build.html
jar cf jar-file input-file(s)
where jar-file is the .jar file name you want and input-file(s) are the files you want to put inside your library (can be a wildcard, e.g. *.class).
IntelliJ IDEA:
Create an artifact as in this article, but without specifying a Main class: http://blog.jetbrains.com/idea/2010/08/quickly-create-jar-artifact/
Then click Build > Build artifact > Build.
This works even if there is no Main class.
I have a multi-module maven project. I'm using intellij-idea as my IDE.
I have Maven configured with the clover plugin to automatically instrument on build.
How can I get IntelliJ to recognize those changes and refresh its coverage data? (NOTE: having to click the "Refresh Coverage" toolbar button is fine.)
I've tried configuring maven-clover2-plugin like so:
<plugin>
  <groupId>com.atlassian.maven.plugins</groupId>
  <artifactId>maven-clover2-plugin</artifactId>
  <version>3.2.2</version>
  <configuration>
    <baseDir>${project.basedir}</baseDir>
    <cloverMergeDatabase>
      ${project.basedir}.clover\cloverMerge.db
    </cloverMergeDatabase>
  </configuration>
  <executions>
    <execution>
      <id>main</id>
      <phase>package</phase>
      <goals>
        <goal>instrument</goal>
        <goal>aggregate</goal>
        <goal>check</goal>
      </goals>
    </execution>
    <execution>
      <id>site</id>
      <phase>pre-site</phase>
      <goals>
        <goal>instrument</goal>
        <goal>aggregate</goal>
        <goal>check</goal>
      </goals>
    </execution>
    <execution>
      <id>clean</id>
      <phase>clean</phase>
      <goals>
        <goal>clean</goal>
      </goals>
    </execution>
  </executions>
</plugin>
I then configured my project settings to use .clover\cloverMerge.db and checked the "Relative to project directory" checkbox.
But that didn't work.
NOTE:
At the bottom of Configuring Instrumentation it says
Do not set these locations explicitly if you have a multi-module project.
So I also tried leaving the location as the default for both Maven and IDEA and that didn't work either.
Also in the Clover for IDEA installation GUIDE - Known Issues
If you are using the Maven build tool, you should avoid using the same IntelliJ output directory as Maven does. As Maven uses the target/classes and target/test-classes directories, avoid specifying these ones. The clover.db location for IntelliJ should also be distinct from that used by Maven.
WHY should they be distinct? Is there some file corruption issue? And if they're kept distinct, HOW can I get awesome coverage highlighting etc. without having to repeat builds in a completely separate process?
Well I finally figured out an answer. I'm leaving this here for posterity.
The solution is complicated and somewhat of a Hack but it WORKS.
Update the parent project's pom.xml file:
Clover DB:
<cloverDatabase>${project.basedir}.clover\clover.db</cloverDatabase>
Merge Clover DB:
<cloverMergeDatabase>
  ${project.basedir}.clover\cloverMerge.db
</cloverMergeDatabase>
Create your unit tests to run in IntelliJ IDEA.
Set up a Before launch - Run Maven Goal:
clean clover2:setup prepare-package -DskipTests
Create a Maven Run Configuration
Make the Unit-Tests a Before launch condition
In the command line, have Maven run clover2:aggregate
Update Intellij Project Settings for clover to point to the merge file
Make sure the "Relative to project directory" checkbox is checked.
Set the InitString to "User specified", with the same value as in your pom file;
in my case: .clover\cloverMergeDB
Once the command is run, just click the Refresh Coverage icon to see and work with the coverage data in IDEA.
If the tests fail you will also have the nice IntelliJ Test runner Tab to figure out why.
At the bottom of Configuring Instrumentation it says
Do not set these locations explicitly if you have a multi-module project.
The documentation actually says: "Do not set these locations explicitly (using an absolute path) if you have a multi-module project." The reason is simple: if you use an absolute path, then you will not have a separate clover.db for every module, but only a single clover.db file.
"If you are using the Maven build tool, you should avoid using the same IntelliJ output directory as Maven does. As Maven uses the target/classes and target/test-classes directories, avoid specifying these ones" [...] WHY should they be distinct is there some file corruption issue?
The problem is as follows: IntelliJ IDEA uses its own engine to compile sources. It means that it does not have to call the original project's build system (Maven, for instance) to compile sources.
It means that:
- if you have a Maven-based project and it has the Clover-for-Maven plugin installed and
- at the same time you have the Clover-for-IDEA installed in the IntelliJ IDE
- and these two Clover integrations use the same output folders for classes and databases
... then these two Clover integrations may start overwriting their files.
In most cases this is not desired behaviour, because any source code modification or project rebuild in IDEA will trigger source recompilation, which can delete results obtained previously by Clover-for-Maven.
Let us say I have a standard Maven project with the standard four directories:
src/main/java
src/main/resources
src/test/java
src/test/resources
Now let us suppose, I create a subdirectory named "clojure" under "src/main".
Are the source files under "src/main/clojure" then automatically compiled when a build is run, or do I somehow have to tell Maven, via configuration of some plugin (e.g. build-helper-maven-plugin), that it also has to compile the sources under "src/main/clojure"?
In other words, does the creation of any folder that is not ".../java" or ".../resources" require explicit configuration in the pom.xml so that the sources there are taken into account by Maven?
Any help would be appreciated.
Regards,
Horace
A Maven project is usually built with a single compiler, which looks for all its source files in those folders known as source folders to Maven. Depending on the project, such source folders may be added automatically, e.g. src/main/java. If a different compiler is used, additional folders may automatically be added, e.g. src/main/groovy.
Sometimes Maven integrations in IDEs (like Eclipse or IntelliJ) do not pick up folders for non-Java projects, even though the correct Maven plugins are in the POM, say e.g. for building a Groovy project.
So even though a build on the command line may run nicely with files in src/main/groovy, the folder may not be detected as a source folder when importing the project in an IDE. In such cases you may have to add the additional source folders, e.g.
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>1.5</version>
  <executions>
    <execution>
      <id>add-source</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>add-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>src/main/groovy</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>
Yes, Maven needs to "know" what those directories mean, though a Clojure build plugin may use that directory by convention - see for example: https://github.com/talios/clojure-maven-plugin
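For instance, a minimal sketch using the clojure-maven-plugin linked above might look like this (the version number is indicative; check for the latest release):

```xml
<plugin>
  <groupId>com.theoryinpractise</groupId>
  <artifactId>clojure-maven-plugin</artifactId>
  <version>1.8.1</version>
  <executions>
    <execution>
      <id>compile-clojure</id>
      <phase>compile</phase>
      <goals>
        <!-- The plugin looks in src/main/clojure by convention. -->
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```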
Apache Maven has a Standard Directory Layout which it understands out of the box.
To make Maven understand any structure other than the above, you'll have to override these settings in the pom.xml.
Look at this section of POM reference.
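As a sketch, the relevant override lives in the <build> section of the POM. Note that <sourceDirectory> replaces, rather than adds to, the default src/main/java, so it only suits projects with a single source root:

```xml
<build>
  <!-- Replaces src/main/java as the single compiled source root. -->
  <sourceDirectory>src/main/clojure</sourceDirectory>
</build>
```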