I'm building a project using Maven and I need to ensure it works on Java SE and on different Java EE containers. There are already integration tests written for the WildFly container, and now I'm moving to Java SE.
But faced with these multiple test environments, how should I handle them in Maven? Should I use <profile>, <module>, or something else?
<profile> is useful for switching between different build configurations, and each profile can have its own specific dependencies. So in my case there might be profiles like wildfly-embedded, wildfly-managed, java-se, etc. But I need to ensure the project works under every profile; is it possible to run all the profiles in one command?
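For illustration, I imagine each profile carrying its own test dependencies, roughly like this (the dependency coordinates are placeholders, not real artifacts):

    <profiles>
      <profile>
        <id>wildfly-embedded</id>
        <dependencies>
          <!-- placeholder coordinates for an embedded-container test harness -->
          <dependency>
            <groupId>org.example</groupId>
            <artifactId>wildfly-embedded-testkit</artifactId>
            <version>1.0</version>
            <scope>test</scope>
          </dependency>
        </dependencies>
      </profile>
      <profile>
        <id>java-se</id>
        <!-- plain Java SE test dependencies would go here -->
      </profile>
    </profiles>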
<module> can handle project inheritance. After reading the SO post "Why and when to create a multi-module Maven project?", I'm still confused about whether I should use it in my case.
Can somebody give me some ideas? That would be very helpful, thanks.
If you keep in mind that the resulting artifact should always be the same no matter which profiles are activated, then you should understand that a profile is not the correct solution (although profiles are very often abused for this; don't follow that advice!).
Configuration should live outside the artifact, so you can reuse the same artifact over and over again. Since many people ask for a proper solution with embedded configuration, Karl Heinz Marbaise created https://github.com/khmarbaise/multienv-maven-plugin. This is probably the closest you'll get to a valid Maven project setup.
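To answer the "one command" part of the question: Maven has no built-in way to run the whole build once per profile. If you do keep per-environment test profiles, you script consecutive invocations (profile ids taken from the question):

    mvn clean verify -Pwildfly-embedded
    mvn clean verify -Pwildfly-managed
    mvn clean verify -Pjava-se

A CI server would typically model these as separate jobs or matrix entries, which also keeps the produced artifact identical across environments.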
Related
We have many Maven projects, and one of these projects is included as a dependency in every other project. The problem is that when we deploy a new version of this dependency, every project gets the new version, which could lead to problems.
Of course, I could manually change the version every time I deploy the project, but that could lead to problems as well, e.g. when forgetting to change the version before deploying.
I also saw the solution of using a ${version} placeholder in the <version> tag, but that would mean I have to specify the version every time I run a Maven command.
Is there a solution for such problems, where you have a dependency used in many other projects and need a different version in each of these projects?
The first thing I see:

"The problem is that when we deploy a new version of this dependency, every project gets the new version, which could lead to problems."
This shows a big issue: you seem to be violating the foundational rule of immutable releases.
As a consequence, the version number is ultimately useless, because it is always the same and does not convey any information.
The first thing to say is that you should follow semantic versioning.
You should also use the maven-release-plugin to increment the version numbers automatically (that alone will not tell you whether a change is a minor or a major release; there are tools to identify such things). This should be handled by a CI/CD setup (Jenkins, etc.).
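For reference, the standard cycle with the maven-release-plugin is two goals: release:prepare tags the release in SCM and bumps the POMs to the next SNAPSHOT, and release:perform builds and deploys from that tag:

    mvn release:prepare
    mvn release:perform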
Tools to check changes (compatibility) between versions are things like RevAPI or JAPI-Checker.
Furthermore, you can handle this via different setups. The mentioned ${version} placeholder is simply wrong and will not work, for several reasons.
Upgrading a larger number of projects can be done with something like Renovate or Dependabot, or even with existing Maven plugins that can be run automatically (scheduled) via CI/CD, which you should do for security scans etc. anyway. In other words, automation is the keyword here.
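As one concrete building block for that automation, the versions-maven-plugin can report available dependency upgrades; a scheduled CI job can simply run:

    mvn versions:display-dependency-updates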
I'm adding unit tests to an existing codebase, and the application itself retrieves data from a server through REST. The URL of the server is hard-coded in the application.
However, developers are obviously not testing new features, bug fixes, etc. against the live environment, but rather against a development server. To accomplish this, the development build has a different server-URL string than the production build.
During development a non-production URL should be enforced; when creating a production build, a production URL should be enforced instead.
I'm looking for advice on how to implement a neat solution for this, since forgetting to change the URL can currently have devastating consequences.
A Maven build script only tests the production value, not both, and I haven't found any way to make build-specific unit tests. (Technologies used: Java, Git, Git-flow, Maven, JUnit.)
Application configuration is an interesting topic. What you've pointed out here is definitely a very practical need, but there's an even bigger issue: if you need to repackage (and possibly rebuild) for different environments, how do you truly know that what you deploy is the same thing that was actually tested and verified?
So load the configuration from a resource outside of the application package. A JVM option pointing to a file on the filesystem or a JNDI resource are both good options. You can also provide defaults for development by committing a config file and reading from it when the JVM option is not specified.
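A minimal sketch of the JVM-option variant (the option name config.file and the path are made up; any names work):

    java -Dconfig.file=/etc/myapp/app.properties -jar myapp.jar

In the code you then read System.getProperty("config.file") and fall back to the committed development defaults when it returns null.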
I have a test war file that contains many tests. Each test is packaged in a Maven project with a lot of dependencies. We use Maven for dependency management, but it comes with a problem: when a test updates a common library, it can break other tests that depend on the older version of the lib. How can I make each test run in a completely separate environment with its own set of library versions? I can't execute them in separate JVMs, because these tests need to be executed very frequently, every 30 seconds or so. Can OSGi help solve this problem?
Yes, OSGi can solve this problem, but it is not a step to be taken lightly. Use OSGi when you are ready to commit time and effort to isolating and managing dependencies, versioning them properly and, optionally, making your code and architecture more modular/reusable.
Bear in mind that adopting OSGi can be painful at first due to non-modular practices used by some legacy libraries.
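The mechanism that makes the isolation work is explicit package versioning: each test bundle declares the version range of the shared library it was built against (the package name below is illustrative), so several versions can coexist in the same JVM:

    Import-Package: com.example.common;version="[1.2,2.0)"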
We are trying to develop a web application framework and build implementations on top of it. The framework will be versioned in SVN and live its own life in parallel to those implementations. It will have lots of Spring config files, security config, and so on. We would like to reuse those in the implementations.
What structure should such a project have? Keep everything together? Link particular folders (the implementations) via svn:externals? We would like to use Maven and create an archetype for the implementations, but is it possible to update the implementation applications after the archetype has changed?
This is a good example: http://www.sonatype.com/books/mvnex-book/reference/web.html
That book is also a very useful resource when starting with Maven.
I also found this: http://www.avajava.com/tutorials/lessons/how-do-i-create-a-web-application-project-using-maven.html
I'd suggest you create your framework project as a simple jar project and include it in your implementations, which would be war projects. For the Spring config files you then have three options:
1. Package them into your framework jar. This would make it hard for the implementations to customize them. I would not recommend it, unless your configuration is definitively fixed.
2. Use svn:externals. I don't have much experience with that, but I think dependencies between svn repositories would be hard to manage.
3. Maintain these configuration files per implementation. An archetype would then help to get started with an initial configuration, and you maintain the configuration files as your framework evolves. This is what we do most of the time. The good thing about Spring configuration is that it rarely needs to be touched once you are confident with it.
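Creating a new implementation from such an archetype is then a one-liner (the archetype coordinates below are placeholders for your own):

    mvn archetype:generate -DarchetypeGroupId=com.example -DarchetypeArtifactId=framework-archetype -DarchetypeVersion=1.0

Note that an archetype only seeds a new project; regenerating from a newer archetype will not update an existing implementation, which is why option 3 relies on maintaining the files afterwards.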
In our infrastructure, we have lots of little Java projects built by Maven2. Each project has its own pom.xml that ultimately inherits from our one company "master" parent pom.
We've recently started adding small profiles to our parent pom, disabled by default, that, when enabled, execute a single plugin in a conventional manner.
Examples:
The 'sources' profile executes the maven-source-plugin to create the jar of project sources.
The 'clover' profile executes the maven-clover2-plugin to generate the Clover report. It also embeds our Clover license file so it need not be re-specified in child projects.
The 'fitnesse' profile executes the fitnesse-maven-plugin to run the fitnesse tests associated with the project. It contains the fitnesse server host and port and other information that need not be repeated.
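For reference, each of these profiles is little more than a conventional plugin execution; the 'sources' one, for example, looks roughly like this:

    <profile>
      <id>sources</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-source-plugin</artifactId>
            <executions>
              <execution>
                <id>attach-sources</id>
                <phase>package</phase>
                <goals>
                  <goal>jar-no-fork</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>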
This is being used to specify builds in our CI server like:
mvn test -P clover
mvn deploy site-deploy -P fitnesse,sources
and so on.
So far, this seems to provide a convenient composition of optional features.
However, are there any dangers or pitfalls in continuing on with this approach (obvious or otherwise)? Could this type of functionality be better implemented or expressed in another way?
The problem with this solution is that you may be creating a "pick and choose" model, which is a bit un-mavenesque. In the case of the profiles you're describing, you're sort of in between: if each profile produces a decent result by itself, you may be OK. The moment you start requiring specific combinations of profiles, I think you're heading for trouble.
Individual developers will typically run into consistency issues because they forget which set of profiles should be used for a given scenario. Your mileage may vary, but we had real problems with this. Half your developers will forget the "correct" combinations after only a short time and end up wasting hours on a regular basis because they run the wrong combinations at the wrong time.
The practical problem you'll have with this is that, AFAIK, there's no way to have a set of "meta" profiles that activate a set of sub-profiles. If there were a nice way to create an umbrella profile, this would be a really neat feature. Your "fitnesse" and "sources" profiles should really be private, activated by one or more meta-profiles. (You can activate a default set in settings.xml for each developer.)
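That settings.xml activation at least works today; a developer-local default set is just:

    <settings>
      <activeProfiles>
        <activeProfile>sources</activeProfile>
        <activeProfile>clover</activeProfile>
      </activeProfiles>
    </settings>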
There isn't a problem with having multiple profiles in Maven; in fact, I think they are an excellent way of allowing your build to enable and disable classes of functionality. I'd recommend naming them based on their function rather than the plugin, though, and consider grouping functionally related plugins in the same profile.
As a precedent for you to follow, the Maven super POM has a "release-profile" defined, which includes configurations for the source, javadoc, and deploy plugins.
You should consider following this approach, so your "fitnesse" profile would become "integration-test", and you could choose to define additional plugins in that profile if needed at a later date. Similarly the "clover" profile could be renamed "site", and you could define additional reports in that profile, e.g. configurations for the JDepend, JXR, PMD plugins.
You seem slightly suspicious of that approach, but you're not really sure why; after all, it is quite convenient. Anyway, that's what I feel about it: I don't really know why, but it seems somewhat odd.
Let's consider these two questions:
a) what are profiles meant for?
b) what are the alternative approaches we should compare your approach with?
Regarding a), I think profiles are meant for different build or execution environments. You may depend on locally installed software, where you would use a profile to define the path to the executable in the respective environment. Or you may have profiles for different runtime configurations, such as "development", "test", "production".
More about this can be found at http://maven.apache.org/guides/mini/guide-building-for-different-environments.html and http://maven.apache.org/guides/introduction/introduction-to-profiles.html.
As for b), here are the ideas that come to mind:
1. Triggering the plug-ins with command-line properties, such as mvn -Dfitnesse=true deploy, like the well-known -DdownloadSources=true for the eclipse plugin, or -Dmaven.test.skip=true for surefire. But that requires the plugin to have a flag that triggers its execution, and not all the plug-ins you need might have one (see the sketch after the second alternative for a workaround).
2. Calling the goals explicitly. You can call several goals on one command line, like "mvn clean package war:exploded". When fitnesse is executed automatically (using the respective profile), its execution is bound to a lifecycle phase; that is, whenever that phase of the lifecycle is reached, the plugin is executed. Rather than binding plugin executions to lifecycle phases, you should be able to include the plugin but only execute it when it is called explicitly. Your call would then look like "mvn fitnesse:run source:jar deploy".
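Coming back to alternative 1: when a plugin has no flag of its own, you can still fake one with a property-activated profile, so the execution is switched by -Dfitnesse=true rather than by -P (a sketch; the profile id is arbitrary):

    <profile>
      <id>fitnesse-on-demand</id>
      <activation>
        <property>
          <name>fitnesse</name>
          <value>true</value>
        </property>
      </activation>
      <!-- bind the fitnesse-maven-plugin execution here -->
    </profile>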
The answer to question a) might explain the "oddness". It is just not what profiles are meant for.
Therefore, I think alternative 2 could actually be a better approach. Using profiles might become problematic when "real" profiles for different execution or build environments come into play. You would end up with a possibly confusing mixture of profiles, where profiles mean very different things (e.g. "test" would denote an environment while "fitnesse" would denote a goal).
If you just called the goals explicitly, I think that would be very clear and flexible. Remembering the plugin/goal names should not be more difficult than remembering the profile names.