We have been using Ivy for a few months and have our own hosted "Ivy Repo" on a web server here in the office. All of our projects are configured to go to this repo to resolve dependencies.
We have several "commons" type JARs that are used by many of our projects. Because of this, and because we only have 1 repo, we're finding a lot of ugly overhead coming from the following scenario:
A developer is given a task to add a feature to Project 1 (which depends on a Common jar)
During the course of developing Project 1, the developer realizes he/she needs to make changes to the Common jar
Common jar changes are made
Common jar has to go through code review and normal code promotion
Build master publishes new Common jar
Project 1 can resume development now that Common jar has been updated
This is becoming ridiculous and painful for our team.
To me, the obvious solution is to provide ant targets in each project that allow the developer to publish/resolve locally (to and from their file system). That way they can break the Common jar nine ways to Sunday without losing 2-4 days waiting for Common to get published. The developer makes local changes to both Project 1 and Common, and the code goes through our promotion process all at once.
I know this is possible with Ivy, but I'm so new to it I wouldn't even know where to begin.
Currently we use a global ivy.settings file for all projects. In the settings file, we use a chain resolver that has one URL resolver inside it, which connects to our "ivy repo".
I believe the following is the only change that will be necessary, but I'm not 100% sure:
In ivy.settings we will need to add a local file system resolver before the url resolver gets called; this way we check the local file system for dependencies before moving on to the ivy repo (web server)
Configure each project's ivy.xml with an option somehow that allows local cache publishing
Tweak the Ant builds to have a publish-locally target that exercises the option mentioned above
I believe these changes will allow us to: (1) always look locally for dependencies before looking to the web server, (2) publish locally as a build option (target).
If this is not true, or if I am missing any steps, please advise! Otherwise, I can probably figure out how to add a file system resolver from the Ivy docs, but have no idea how to get the publish-locally target to work. Any ideas? Thanks in advance!
I too would prefer Mark's approach.
As for publish-locally, you can tell the publish task which resolver to use (resolver="local"). This way it can publish to the local file system or to any defined resolver.
<ivy:publish
    resolver="local"
    overwrite="true"
    revision="${project.version}">
    <artifacts pattern="dist/[artifact]-[revision].[type]" />
</ivy:publish>
And if you use a chain resolver you should set returnFirst="true" so that resolving stops as soon as something is found locally.
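For the resolver side, here is a minimal sketch of what the chain in your settings could look like, with a filesystem resolver checked before the existing URL resolver (the resolver names, patterns, and repository URL below are placeholders, not your actual configuration):
<ivysettings>
    <settings defaultResolver="main"/>
    <resolvers>
        <chain name="main" returnFirst="true">
            <!-- checked first: the developer's local file system repo -->
            <filesystem name="local">
                <ivy pattern="${user.home}/.ivy2/local/[organisation]/[module]/[revision]/ivy-[revision].xml"/>
                <artifact pattern="${user.home}/.ivy2/local/[organisation]/[module]/[revision]/[artifact]-[revision].[type]"/>
            </filesystem>
            <!-- fallback: the shared repo on the web server -->
            <url name="shared">
                <ivy pattern="http://your-ivy-repo/[organisation]/[module]/[revision]/ivy-[revision].xml"/>
                <artifact pattern="http://your-ivy-repo/[organisation]/[module]/[revision]/[artifact]-[revision].[type]"/>
            </url>
        </chain>
    </resolvers>
</ivysettings>
With something like this in place, anything a developer publishes with a publish-locally target is picked up first, and everything else still falls through to the shared repo.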
Ivy supports dynamic revisions:
Stable code would reference the latest approved version of the commons jar:
<dependency org="my-org" name="commons" rev="latest.release"/>
Unstable (in development) code would reference the latest unapproved version of the code
<dependency org="my-org" name="commons" rev="latest.integration"/>
So you need to change the build process for your commons module to have two publishing targets: one for unstable snapshots of your code, the other for formal releases.
(See the status attribute on the ivy publish task)
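As a rough sketch of what those two targets could look like in the Ant build (the target names, resolver names, and artifact pattern are illustrative, borrowed from the snippet earlier in this thread):
<target name="publish-integration" depends="jar">
    <!-- unstable snapshot: resolvable via rev="latest.integration" -->
    <ivy:publish resolver="local" status="integration" overwrite="true"
                 revision="${project.version}">
        <artifacts pattern="dist/[artifact]-[revision].[type]"/>
    </ivy:publish>
</target>

<target name="publish-release" depends="jar">
    <!-- approved release: resolvable via rev="latest.release" -->
    <ivy:publish resolver="shared" status="release"
                 revision="${project.version}">
        <artifacts pattern="dist/[artifact]-[revision].[type]"/>
    </ivy:publish>
</target>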
Note:
In Maven you have two types of repository: release and snapshot. Ivy's support for this concept is more subtle and more powerful, IMHO.
Is there a way to fail a build in Jenkins if a certain jar is used in a Java Maven Project?
For example, I know org.example:badartifact:1.0.1 has a security vulnerability. I told everyone about that, and they fixed their projects..., but maybe some third-party artifacts bring it along as a transitive dependency and nobody realizes that.
Or maybe someone down the line forgets this old bug...
So I would like to have a last check in Jenkins preferably, so that we don't end up with projects that have that special artifact included.
How do you handle situations like that? What tools do you use? (Whitelisting libs? Blacklisting libs? etc.)
Any suggestions are appreciated.
Possible Maven solution
You could have a company super POM (the parent POM of all Maven projects within the company/department/team) and in that super POM configure the Maven Enforcer Plugin and its bannedDependencies rule to ban any library, version or even scope. I have personally used this option even for trivial mistakes (i.e. junit not in test scope would make the build fail).
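A minimal sketch of that rule in the super POM might look like this (the plugin version is just an example, and the banned coordinates are the ones from the question):
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-enforcer-plugin</artifactId>
            <version>3.4.1</version>
            <executions>
                <execution>
                    <id>ban-vulnerable-artifacts</id>
                    <goals>
                        <goal>enforce</goal>
                    </goals>
                    <configuration>
                        <rules>
                            <bannedDependencies>
                                <excludes>
                                    <!-- matches the artifact even when it comes in transitively -->
                                    <exclude>org.example:badartifact:1.0.1</exclude>
                                </excludes>
                                <searchTransitive>true</searchTransitive>
                            </bannedDependencies>
                        </rules>
                        <fail>true</fail>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>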
This solution is centralized and as such easier to maintain; however, it requires all projects to have the same parent POM, and developers could at any time change the parent POM and thereby skip this governance. On the other hand, a centralized parent POM is really useful for dependency management, common profiles, reporting and so on.
Note: you cannot configure it in the Maven settings of the Jenkins server via an active-by-default profile (in order to have it applied to all Maven builds), because Maven limits customization of builds in profiles provided by the settings (it's a design choice, to limit external impact and as such have easier troubleshooting). I've tried it in the past and hit the wall.
Profiles in external files
Profiles specified in external files (i.e. in settings.xml or profiles.xml) are not portable in the strictest sense. Anything that seems to stand a high chance of changing the result of the build is restricted to the inline profiles in the POM. Things like repository lists could simply be a proprietary repository of approved artifacts, and won't change the outcome of the build. Therefore, you will only be able to modify the <repositories> and <pluginRepositories> sections, plus an extra <properties> section.
Possible Jenkins solution
If you want to have governance centralized in Jenkins directly, hence independently from the Maven builds, I have applied these solutions in the past (and they work perfectly):
Jenkins Text Finder Plugin: you can make the build fail if a regex or matching text is found in the build output. In your case, you could have a Jenkins build step that always executes mvn dependency:tree, so that the list of dependencies (even transitive ones) is part of the build output. A Text Finder rule matching your banned dependency will then match it and fail the build.
Fail The Build Jenkins Plugin: similar to the one above, but with centralized management of configured failure causes. Again, failures are based on matching text, but no per-build configuration is required: it will be applied by default to all builds.
Here is one solution to do the job :)
With the Maven License Plugin, you can scan the third-party dependencies of your Maven project and produce a THIRD-PARTY.txt report (in the target/generated-sources/license folder).
Maven command line:
mvn license:aggregate-add-third-party
Next, you can use the Text Finder plugin to search for the "unsafe" dependencies in the THIRD-PARTY.txt file (e.g. org.example:badartifact:1.0.1) and change the status of the build if needed.
Another solution is to use a 3rd party tool to do that.
I'm doing some investigation with this one: http://www.whitesourcesoftware.com/
This tool can provide a list of 3rd party dependencies with vulnerability issues.
We have started to implement Continuous Delivery for our Java builds, using Maven and TeamCity for CI and build automation.
We have a few common jars that are built as standalone jar artifacts and are consumed by web modules.
The frequency of change to these common modules is high; we have started to adopt the approach discussed in various forums ("What is the Maven way for automatic project versions when doing continuous delivery?") and in this blog:
http://blog.xebia.com/2012/09/30/continuous-releasing-of-maven-artifacts/ , namely to use Major.Minor.BugFix-${revision} for all the common jars.
The value for revision is set in the parent POM to SNAPSHOT for local development; for TeamCity builds it is set to ${BuildNumberCounter}-${SVNRepoRevisionNumber}, e.g. 1.0.0-10-233.
For a web module that needs to consume the jar and always wants to pick up the latest version, the dependency range is defined as [1.0.0,2.0.0). This seems to be working fine; however, to be honest, we have not yet used it in anger, so we will see if we hit challenges.
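As a sketch of how the pieces fit together (the group/artifact coordinates below are made up):
<!-- parent POM: default for local development, overridden by TeamCity via -Drevision=... -->
<properties>
    <revision>SNAPSHOT</revision>
</properties>

<!-- common jar POM: 1.0.0-SNAPSHOT locally, 1.0.0-10-233 on TeamCity -->
<version>1.0.0-${revision}</version>

<!-- web module POM: always resolves to the newest published 1.x version -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>common</artifactId>
    <version>[1.0.0,2.0.0)</version>
</dependency>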
The problem we hit straight away is that for local desktop development the dependency range in the web module always resolves to the latest numbered release rather than the snapshot build the developer created for locally testing the common jar with the web module. We believe it is valid for a developer to be able to test a change to the common jar against the web modules locally. The only way to achieve that currently is to commit the change and have TeamCity produce a new numbered release, which is not ideal, as it would potentially break the builds of all web modules that use that common jar.
I wonder if anyone has faced a similar problem and found a solution.
Quite new to Maven here, so let me explain first what I am trying to do:
We have certain JAR files which will not be added to the repo. This is because they are specific to Oracle ADF and are already placed on our application server. There is only one version to be used for all apps at any one time. In order to compile, though, we need to have these on the classpath. There are a LOT of these JARs, so if we were to upgrade to a newer version of ADF, we would have to go into every application and redefine some pretty redundant dependencies. So again, my goal is to just add these JARs to the classpath, since we will control what version is actually used elsewhere.
So basically, I want to add every JAR in a given network directory (which devs do not have permission to modify) to Maven's compile classpath, without putting any of these JAR files in a repository. And of course, these JARs are not to be packaged into any EAR/WAR.
edit:
Amongst other reasons why I do not want to add these to the corporate repo:
These JARs are not used by anything else. There are a lot of them, and they are uncommon and exclusive to Oracle.
There will only be one version of a given JAR used at any one time. There will never be a case where Application A depends on 1.0 and Application B depends on 1.1; both App A and App B will depend solely on either 1.1 or 1.2.
We are planning to maintain 100+ applications. That is a lot of pom.xml files, meaning that any time we upgrade Oracle ADF, if any dependency wasn't correctly specified (human error), we will have to fix each mistake across those 100+ pom.xml files.
I see three options:
Put the dependencies in a repository (could be a file repository as described in this answer) and declare them with a scope provided.
Use the dirty system scope trick (i.e. declare the dependencies with a system scope and set the path to the jars in your file system).
Little variation of #2: create a jar with a MANIFEST.MF referencing all the jars (using a relative path) and declare a dependency on this almost empty jar with a system scope.
The clean way is option #1, but the others would work too in your case. Option #3 seems to be the closest to what you're looking for.
Update: To clarify option #3
Let's say you have a directory with a.jar and b.jar. Create a c.jar with a Class-Path entry in its META-INF/MANIFEST.MF listing other jars, something like this:
Class-Path: ./a.jar ./b.jar
Then declare a dependency in your POM on c (and only on c) with a system scope; the other jars will become "visible" without having to explicitly list them in your POM (sure, you need to declare them in the manifest, but that can be very easily scripted).
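A sketch of the corresponding declaration (the coordinates and the adf.lib.dir property are hypothetical; only the path to c.jar matters):
<dependency>
    <groupId>com.example.adf</groupId>
    <artifactId>c</artifactId>
    <version>1.0</version>
    <scope>system</scope>
    <!-- points into the network directory containing a.jar, b.jar and c.jar -->
    <systemPath>${adf.lib.dir}/c.jar</systemPath>
</dependency>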
Although you explicitly stated you don't want them in the repository, your reasons are not justified. Here's my suggestion:
install these jars in your repository
add them as maven dependencies, with <scope>provided</scope>. This means that they are provided by your runtime (the application server) and will not be included in your artifacts (war/ear)
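For example (the coordinates below are purely illustrative), once such a jar is installed in the repository, the declaration is simply:
<dependency>
    <groupId>com.oracle.adf</groupId>
    <artifactId>adf-share</artifactId>
    <version>11.1.1</version>
    <!-- provided by the application server, so never packaged into the WAR/EAR -->
    <scope>provided</scope>
</dependency>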
Check this similar question
It is advisable for an organization that uses Maven extensively to have its own repository manager; see Nexus. Then you can install these jars in that repository and all developers will use them, rather than each developer keeping the jars only in their local repository.
(The "ugliest" option would be not to use maven at all, put put the jars on a relative location and add them to the classpath of the project, submitting the classpath properties file (depending on the IDE))
If you are developing ADF (10g/11g, I guess) components, I suppose you'll be using JDeveloper as the IDE. JDeveloper comes with a very rich library management tool that allows you to define which libraries are required for compiling and which ones should be packaged for deployment. I suppose you already know how to add libraries to projects and indicate in the deployment profile which ones should be picked up while packaging. If you want to keep your libraries out of Maven, maybe this is the best approach. Let's say the libraries you refer to are the "WebCenter" ones; this approach guarantees you have the adequate libraries, as JDeveloper comes with the right library versions.
Nevertheless, as you are using Maven, I would not recommend keeping some libraries outside its control and outside Maven repositories. I'd recommend choosing between Maven and Oracle JDeveloper library management. In our current project we are working with JDeveloper ADF 11g (and WebCenter) and we use Maven; it simply makes library management easier for us. At the end of the day, we have a large number of third-party libraries (say Apache, Spring, etc.) that are useful to manage with Maven, and not that many Oracle libraries really required for compiling in the IDE (as you would only need the API ones and not their implementations). Our approach has been to add the Oracle libraries to our Maven repository whenever they are required and let Maven control the whole dependency management.
As others say in their answers, if you don't want the dependencies to be included in any of your artifacts, use <scope>provided</scope>. Once you configure your development environment you will be grateful Maven does the work and you can (almost) forget about dependency management. To build the JDeveloper IDE files we are using the Maven JDev plugin, so mvn jdev:jdev generates our project files and sets up the dependencies on libraries, and among projects, so everything compiles properly.
Updated:
Of course, you need to refer to the ADF libraries in your POM files. In our project we just refer to the ones used by each application, say ADF tag libraries or a specific service, not the whole ADF/WebCenter stack. For this purpose use the "provided" scope. You can still let JDeveloper manage your libraries, but we have found it simpler to go either 100% JDeveloper library management or 100% Maven. If you go with the Maven approach it will take you some time to build your local repo at first, but once that's done it's very easy to maintain, and the whole cycle (development, build, test, packaging and deployment) will be simpler, with a more consistent configuration. It's true that in the future you'll have to update to later ADF versions, but as your repository structure will already be defined, it should be quick. For future upgrades I'd recommend keeping the ADF version as a property in the top POM, which will let you switch to a new version faster.
Is there any way to force Maven to use remote artifacts and not those installed on your machine? Since I worry about runtime errors rather than compilation errors, a build server is not a valid option.
P.S. I know I could delete or rename the .m2 folder, but I bet there is some more clever way of doing this. Maybe some plugin or special command param?
Having no local repository would mean your classpath consists almost entirely of URLs on remote servers. I can't see why this would be supported, as execution would be awful and any dropped connection would result in classloader issues. Having a local repository ensures the jars are available before compilation/execution begins.
Also consider that WAR and EAR projects (and many using the dependency plugin) rely on downloading the jars to complete their packaging. There would be a huge overhead if these had to be retrieved from a remote repository on every build. I'm pretty sure the managers of central would not be keen on dealing with that load.
Some alternatives for you to consider:
If you want to force a clean local repository on each build, you can use the purge goal of the dependency plugin.
If you want to keep builds isolated, you can use separate Maven settings by passing -Dorg.apache.maven.global-settings=/path/to/global/settings.xml
Alternatively you can override the local repository on a per build basis by passing -Dmaven.repo.local=/some/repo/path
If you want to avoid hitting remote repositories on each build, add <updatePolicy>never</updatePolicy> to your remote repository configurations (see the sketch further below). This means Maven will only check for updates if you force it to with a "-U" switch on the command line
If you want to take the latest version of a dependency, you can use the LATEST keyword in the version declaration (instead of the version number), though this can be risky if the dependency is incompatible.
If you want to take the current release version of a dependency, you can use the RELEASE keyword in the version declaration (instead of the version number). This is like LATEST, but resolves to the newest stable release rather than the newest build overall.
If you want to take the latest version of a dependency within a range, use Maven's version range notation, for example [1.0.0,2.0.0) means any version from 1.0.0 inclusive to 2.0.0 exclusive
For more details on LATEST and RELEASE, see section 9.3.1.3 of the Maven book.
If you use an internal repository manager (obligatory Nexus and Artifactory references here), the overhead of purging the local repository is greatly reduced - you'll just have an increased local network traffic load.
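For the updatePolicy alternative mentioned above, a sketch of the repository configuration could look like this (the repository id and URL are placeholders):
<repositories>
    <repository>
        <id>internal-repo</id>
        <url>https://repo.example.com/maven2</url>
        <releases>
            <updatePolicy>never</updatePolicy>
        </releases>
        <snapshots>
            <updatePolicy>never</updatePolicy>
        </snapshots>
    </repository>
</repositories>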
I don't think there's really a way to do what you are asking for. You could look into depending on SNAPSHOT releases (but that means changing the version strings of the upstream projects to SNAPSHOT versions).
Incidentally, this was discussed at length in a recent Java Posse episode (#268). I don't think they ended up with a solution, but you may get some good ideas there.
I also like some of Rich Seller's ideas, which I'll be looking into myself.
We have several products which have a lot of shared code and which must be maintained several versions back.
To handle this we use a lot of Eclipse projects: some contain library jars, and some contain shared source code (split into several projects to avoid one giant heap with numerous dependencies, while still being able to compile everything from scratch to ensure that source and binaries are consistent). We manage these with projectSet.psf files, as they can pull all projects directly out of CVS and leave a fully prepared workspace. We do not do Ant builds directly or use Maven.
We now want to be able to put all these projects and their various versions into a Continuous Integration tool (I like Hudson, but this is just a matter of taste), which essentially means that we need an automatic way to check out the projects to a fresh workspace and compile the source folders as described in the project files of each project. Hudson does not provide such an approach to building a project, so I have been considering the best way to approach this.
Ideas have been
Find or write an ant plugin/converter that understands projectSet.psf's and map to cvs-checkout and compile tags.
Create the build.xml files from within Eclipse and use those. I tried this and found the result to be verbose, with absolute locations, which does not work well with automatic tools that put files where they want to.
Write a Hudson plugin which understands projectSet.psf's to derive a configuration and build it.
Just bite the bullet and manually create and update the CI configuration whenever stuff breaks - I don't like this :)
I'd really like to hear about other people's experiences so I can decide how to approach this.
Edit: Another option might be using a CI tool which knows more about Eclipse projects and/or project sets. We are not religious about it; this is just a matter of getting stuff running without having to do everything ourselves. Would CruiseControl be a better option perhaps? Others?
Edit: Found that ant4eclipse has a "Team Project Set" facility. http://ant4eclipse.sourceforge.net/
Edit: Used the ant4eclipse and ant-contrib Ant extensions to build a complete workspace as a signed runnable jar file, similar to the Runnable JAR facility in Eclipse 3.5M6. I am still depending on Eclipse to create the initial empty workspace and extract the project set, so that is the next hurdle.
Edit: Ended up with a dual configuration, namely that Hudson extracts the same set of modules as listed in the projectSet.psf file from CVS (which needs to have the same tag), causing them to be located next to each other. Then ant4eclipse works well with the projectSet.psf file embedded in the main module. Caveat: the module list in Hudson must be manually updated, and it appears that a manual workspace cleanup is needed afterwards to let Hudson "discover" that there are more projects now than before. This has now worked well for us for a couple of months, but it was quite tedious to get everything working inside the Ant file.
Edit: The "Use Team Projects" with ant4eclipse and a Ctrl-A, Ctrl-C in Project Panel with a Ctrl-V in the CVS projects in Hudson has turned out to work well enough for us to live with (for mature projects this is very rarely changed). I am awaiting the release of ant4eclipse 1.0 - http://www.ant4eclipse.org/, currently in milestone 2 - to see how much homegrown functionality can be replaced with ant4eclipse things.
Edit: ant4eclipse is as of 20100609 in M4 so the schedule at http://www.ant4eclipse.org/node?page=1 is slipping somewhat.
Edit: My conclusion after using our ant4eclipse approach for a longer period is that the build script gets very gnarly and is hard to maintain. Also, the Team Project Set facility (which ant4eclipse uses to locate the projects) works well for CVS-based repositories, but not after we migrated to Git (which was a big thing in itself). New projects will most likely be based on Maven, as this has good support in Jenkins.
I'm not completely sure I understand the problem, but it sounds like the root issue is that you have many projects, some of which are dependent on others. Some of the projects that are closer to the "leaf" of the dependency tree need to be able to use "stable" (or previously "released") versions of the more "core" projects.
I solve exactly this problem using Hudson, Ant, and Ivy. I follow a pattern demonstrated by Clark in Pragmatic Project Automation (he doesn't demonstrate the dependency problems and solutions, and he uses CruiseControl rather than Hudson).
I have a hand-written Ant build file (we call it "cc-build.xml", because of our CruiseControl roots). This file is responsible for refreshing the project's working space from the CM repository and labeling the contents for future reference. It then hands off control to another hand-written Ant build file (build.xml) provided by each project's developers. That build file is responsible for the traditional build steps (compile, packaging, etc.) and is required to spit out the installable artifacts, unit test reports, etc., to the Hudson artifacts directory. It is my experience that automatically generated build files (by Eclipse or other similar IDEs) never get close to being sufficiently robust for use in a CI scenario.
Additionally, it uses Ivy to resolve its own dependencies. Ivy supports precisely specified dependency versions (e.g. "use version 1.1") and it supports "fuzzy" versions (e.g. "use version 1.1+" or "use the latest version in integration status"). Our projects typically start out specifying a very "fuzzy" version for internal projects under ongoing development, and as they get close to a release point, they "freeze" the dependency version so that stuff stops moving underneath them.
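For example (the module names here are made up), a dependency might start out as <dependency org="my-org" name="core" rev="1.1+"/> during active development and be frozen to <dependency org="my-org" name="core" rev="1.4"/> as the release approaches.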
The non-leaf projects (projects that are dependents for other projects) also use ivy to publish their artifacts to our internal ivy repository. That repository keeps all past builds of the dependents, so that any project can always depend on any other previous version.
Lastly, each project in Hudson is configured with a build trigger that causes a rebuild when any of the projects it depends on builds successfully. This causes it to get rebuilt with the (possibly) new versions of its Ivy dependencies.
It is worth noting that once you get this up and running, consistent automated "labeling" or "tagging" of an automated build's inputs is going to be critical for you - otherwise troubleshooting post-build problems is going to result in having to untangle a hornet's nest to find the original source.
Getting all of this set up for our environment took quite a bit of effort (primarily in setting up the Ivy repository and Ant build files), but it has paid for itself many times over in headaches saved from manually managing dependencies and in decreased troubleshooting effort.
Write a Hudson plugin which understands projectSet.psf's to derive a configuration and build it.
That seems like the winning answer to me.
I work with CruiseControl rather than Hudson, but in my experience, if you can create a plugin that solves your problem it will quickly pay off. And it is generally pretty easy to write a plugin that is custom fit for your situation, as opposed to one that needs to work for everyone in a similar situation.
I have tried both CruiseControl (CC) and Hudson for our CI solution. We (as a company) decided on Hudson. But for your question "Does CC support building Eclipse projects?", the answer is no as far as I know. CC supports many more build tools and source control systems, but it is a bit more difficult to configure and use; Hudson is simpler to configure and use. We developed custom plugins for both CC and Hudson for the parts of our build cycle that they do not provide out of the box. As for plugin development, if you know and use Maven, Hudson is simpler too. If you are not familiar with Maven, you first need to learn its basic usage to successfully develop a Hudson plugin, but once you understand that, plugin development, testing and even debugging are simpler in Hudson.
For your specific problem, I can think of a solution that makes use of Eclipse plugins as well. You can develop your own Eclipse plugin that, for instance, gets the psf files from a (configurable) folder and uses Eclipse internals to process them. I mean you can use existing Eclipse source code that takes a psf file, checks out its project definitions and compiles those projects. This Eclipse plugin of yours could have a preference page (accessible via Eclipse -> Window -> Preferences) to configure which folder it looks in for psf files. Your Eclipse plugin should also have a way to start psf processing without user interaction. For this, you can use IPC to trigger the process: your Eclipse plugin can listen on a port, and you can write another Java application that connects to your plugin through this port and triggers the processing. As for the CI part, you can use either CC or Hudson and their external process execution support. If you are using Windows, you can write a bat file (on Linux, an sh file) that first launches Eclipse with your plugin installed, then launches the Java application that communicates with your Eclipse plugin to trigger the processing. From your CI tool you will need to run that bat/sh file to trigger the process.