I'm currently working on a project that requires us to package a JRE with our application. I'm normally against this as it makes keeping the JRE patched quite difficult, but in this case it is necessary.
What are the best practices for packaging a JRE with an application
as part of an automated build process?
Where do you normally store the JRE files so that they can be picked
up by your build process? Shared file server? What about making it
an artifact in your maven repo?
Just trying to get a feel for what people do in this situation.
I currently do this for a desktop app I distribute. I just keep the JRE on the build server (which is really just some custom Perl scripts and a web server) in a folder; Ant copies it into the build tree that comes out of Subversion, and then everything gets consumed by Nullsoft to build the installer. It's not great, but it works. I should also say that at one time I used to check the JRE into source control, and I'm happier with what I do now.
Most applications keep the JRE in the root installation folder, and the startup scripts then use relative paths to launch with that JRE. JProfiler does this, for example.
You can also store the JRE as an artifact in your Maven repo.
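A minimal sketch of that approach, assuming you have first uploaded a zipped JRE to your repository with mvn deploy:deploy-file (the com.example:bundled-jre coordinates below are made up for illustration): the maven-dependency-plugin can then unpack it into the build output so the installer step picks it up.

    <!-- pom.xml fragment: unpack a JRE previously deployed to the
         corporate repo as a zip artifact (hypothetical coordinates) -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <id>unpack-jre</id>
          <phase>prepare-package</phase>
          <goals><goal>unpack</goal></goals>
          <configuration>
            <artifactItems>
              <artifactItem>
                <groupId>com.example</groupId>        <!-- placeholder -->
                <artifactId>bundled-jre</artifactId>  <!-- placeholder -->
                <version>1.6.0_21</version>
                <type>zip</type>
                <outputDirectory>${project.build.directory}/jre</outputDirectory>
              </artifactItem>
            </artifactItems>
          </configuration>
        </execution>
      </executions>
    </plugin>

With something like this, upgrading the bundled JRE becomes a one-line version change, and the build stays reproducible.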
I'm currently building a desktop java application in a very clumsy manner. The application is deployed on Windows, Mac and Linux. Here's my build process now:
On Windows:
Update local repository
Fire up Eclipse
Refresh the project
Double click the .jardesc file to generate an executable jar file
Commit the executable jar to source control
Open up the .nsi script and click the build button (I have the NSIS plugin installed) to produce the .exe installer
Upload installer to ftp server to publish
On Mac:
Update local repository
Run shell script to generate .dmg file using .jar in source control
Upload to ftp server to publish
On Linux:
Update local repository
Run shell script to generate .deb file using .jar in source control
Upload to ftp server to publish
I'd also like to include some extra steps in my build in the future, such as:
Setting build date
Setting the HEAD git commit-id
Performing some code obfuscation
Any suggestions on how I can streamline and speed up this process?
If you are serious about having a good build system, then I'd recommend learning and using Maven, which provides:
Comprehensive project build lifecycle management based on a declarative project definition (pom.xml)
A huge range of plugins, which I expect will be able to handle all the specific build steps you require
Very good integration with Eclipse
Full dependency management (including automatic resolution and download of dependencies)
This is not for the faint-hearted (Maven is a complex beast), but in the long run it is a great solution.
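As a taste of the declarative style, a minimal pom.xml for a desktop app might look like the sketch below (the coordinates are placeholders). The buildnumber-maven-plugin shown can record the SCM revision at build time, which covers the commit-id wish from the question; it assumes an <scm> section pointing at your repository.

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>       <!-- placeholder -->
      <artifactId>desktop-app</artifactId> <!-- placeholder -->
      <version>1.0.0</version>
      <packaging>jar</packaging>
      <build>
        <plugins>
          <!-- exposes the SCM revision as ${buildNumber} -->
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>buildnumber-maven-plugin</artifactId>
            <executions>
              <execution>
                <phase>validate</phase>
                <goals><goal>create</goal></goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </project>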
First step would be to just get everything building without Eclipse.
You might also want to consider using something like Jenkins to automate some of this. You'll still require build scripts.
A solution could look like this:
Update repository.
Jenkins detects update and builds the jar.
Jenkins saves the jar to some location.
Then you can have separate builds for each OS, also running in Jenkins. These could be triggered automatically on successful completion of the first build. These would each:
Pick up the jar from the previous build.
Publish the OS specific binary to an FTP site.
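For the "Jenkins builds the jar" step to work, the jar build has to run headlessly. A minimal Ant sketch that Jenkins could invoke (the paths and the main class are assumptions):

    <project name="desktop-app" default="jar">
      <!-- compile the sources checked out by Jenkins -->
      <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
      </target>
      <!-- package an executable jar; the Main-Class value is a placeholder -->
      <target name="jar" depends="compile">
        <mkdir dir="build/dist"/>
        <jar destfile="build/dist/app.jar" basedir="build/classes">
          <manifest>
            <attribute name="Main-Class" value="com.example.Main"/>
          </manifest>
        </jar>
      </target>
    </project>

Jenkins would then just run "ant jar" and archive build/dist/app.jar for the downstream OS-specific jobs.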
Ant is a good start, but you may also want to look at Apache Ivy or Maven, as these will help a bit with managing your build outputs and dependencies.
You should have a look at Ant: https://ant.apache.org/
Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications.
Also, a long list of build systems: https://en.wikipedia.org/wiki/List_of_build_automation_software
I have built a Java application that has some dependencies (~10). I would like to easily package this application up and deploy it as a single file to a CD or USB drive.
There doesn't seem to be any "nice" wizard to search the project, grab the dependencies and setup the classpath on the target computer. I have to do this manually.
Is there a better way? Something simple, easy and straightforward. A link to a tutorial on this would be great.
It seems to me that this should be a built-in feature in Eclipse. Deployment of a web application seems easy enough, but not of a Java application.
Have a look at the Fat Jar Plug-In.
That's because desktop deployment isn't well defined.
You are heading into release issues which is a huge can of worms.
I assume you have some form of version control like SVN or Git? If so, check out Maven with the release plugin and the maven-assembly-plugin.
It'll take a lot of work to set up, but once you get it going you'll be cross-linking and deploying distribution packages in no time!
Plus you'll have access to the vast Maven repositories already on the web.
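For the single-file requirement specifically, here is a sketch of the maven-assembly-plugin configuration that bundles all the dependencies into one runnable jar (the main class is a placeholder):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <descriptorRefs>
          <!-- built-in descriptor: merges all dependencies into the jar -->
          <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <archive>
          <manifest>
            <mainClass>com.example.Main</mainClass> <!-- placeholder -->
          </manifest>
        </archive>
      </configuration>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>single</goal></goals>
        </execution>
      </executions>
    </plugin>

Running mvn package then produces a single *-jar-with-dependencies.jar that can go straight onto the CD or USB drive.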
I am writing a web application with Maven in the Eclipse IDE, and use Tomcat servlet container.
So, I run Maven like this: mvn clean compile. It is reasonable that after this operation I must re-run Tomcat so it can reinitialize the context (Sysdeo Tomcat launcher helps a lot).
The problem is that the Maven execution and the subsequent Tomcat restart take a noticeable amount of time (10+ seconds for Maven and 20+ seconds for Tomcat, because of logging, O/R mappings, etc.) every time I do it.
Is there any automated and faster solution for these operations? As I see it, a much better approach would be to move only the recompiled classes to the target dir.
Well, the question is: why do you run clean each time? Doing an incremental compilation would already speed things up a lot.
Update: I agree with @Carl about Eclipse WTP, which provides very good support for Tomcat (I don't really see the added value of the Sysdeo plugin nowadays). Using Eclipse WTP for development, and running Maven before committing changes to check that you didn't break the continuous build, is a very typical workflow. And both the maven-eclipse-plugin and m2eclipse (the two alternatives for Maven and Eclipse integration) support WTP, i.e. they can get your project recognized as a dynamic project that can be Run on a Server.
You may want to have a look at JRebel. It reloads your classes in a running Tomcat, so your changes are near-instantaneous. I haven't used it much, but it appears to get good reviews.
Maven does two things: dependency handling and build management. I often find Maven's dependency management a big, time-wasting annoyance that I usually don't need, so I do my build management with Ant.
At the price of a hand-tuned build file, Ant gives you very good control over which files go where, and when. If you copy newly compiled classes to your WEB-INF/classes directory and touch web.xml to trigger a context reload, you don't have to stop and restart Tomcat. This brings my compile/reload time down to around one second.
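A sketch of what the relevant Ant targets can look like; ${deploy.dir} is a made-up property pointing at the exploded webapp inside Tomcat:

    <!-- compile changed sources straight into the exploded webapp;
         Ant's javac only recompiles files that are out of date -->
    <target name="compile">
      <javac srcdir="src" destdir="${deploy.dir}/WEB-INF/classes"
             includeantruntime="false"/>
    </target>

    <!-- touching web.xml makes Tomcat reload the context -->
    <target name="reload" depends="compile">
      <touch file="${deploy.dir}/WEB-INF/web.xml"/>
    </target>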
This is how I prefer to work. Some Maven fans will disagree violently.
EDIT: That said, there's another method that allows me to skirt the build issue completely: I develop in Eclipse using the WTP functionality included with the Java EE developer's edition. When I make a code change, I simply hit Ctrl-S to save the changed file, and Eclipse automatically copies the newly compiled class into the running Tomcat, so I can immediately refresh my browser and see the changed web app running. Thanks to Eclipse's incremental compilation, this method is probably unbeatable in terms of edit/run cycle time. Of course, if you really need Maven then this is not an alternative.
The Maven Tomcat plugin can help you here: just execute "mvn tomcat:redeploy" and Maven will compile the source, package it and deploy it to your configured Tomcat. See the Tomcat plugin documentation for more information.
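A sketch of the configuration this relies on (this is the old Codehaus tomcat-maven-plugin from that era; the URL, server id and context path below are assumptions to adjust to your setup):

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>tomcat-maven-plugin</artifactId>
      <configuration>
        <!-- Tomcat's manager application -->
        <url>http://localhost:8080/manager</url>
        <server>my-tomcat</server>  <!-- credentials entry in settings.xml -->
        <path>/myapp</path>         <!-- context path, placeholder -->
      </configuration>
    </plugin>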
Eventually, I've solved that by using Eclipse feature called «Build Automatically» (Project → Build Automatically checkbox).
Every time you save a resource, Eclipse compiles it and writes the resulting .class files to the output folder.
I need to have a Java EE project generate a WAR file automatically - preferably exploded - as opposed to choosing Export -> War file.
I have played with the various server definitions but have not been able to get either the Java EE preview or the HTTP server to work, and before installing each of the external container specific servers I'd like to hear if anybody has made this work.
So, question is: Which steps to take to have a WAR deployment automatically created and maintained by Eclipse?
EDIT: This is Eclipse 3.5 Java EE, and it is a Dynamic Web project in Eclipse. I want the WAR file/tree to be easily copyable to a network drive to be accessible for the target host. It runs an embedded Jetty, but I am interested in the generic WAR.
MyEclipse can do this, but we are standardizing on plain Eclipse.
EDIT: This particular web application will run inside an embedded Jetty. Since this question was asked, we have found empirically that we need the complete tree - the application with embedded Jetty, the WAR file (exploded) and all - built by the Hudson server, in order to avoid manual steps in the build-deploy process. The answer for us is therefore scripting with Ant (using ant4eclipse).
EDIT 2012: The ant4eclipse approach proved to be generally too inflexible and fragile in the long run, so we have switched to Maven. This solved very many problems, this one included.
Make an Ant task to build the WAR (and copy it wherever you like). Then add an Ant builder to the project (Project -> Properties -> Builders). As long as your project is configured to build automatically, the WAR will always be up to date.
This would work equally well with Maven, or pretty much any other build tool.
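A sketch of such an Ant target, assuming the default Dynamic Web Project layout and an existing compile target (adjust the paths to your project):

    <target name="war" depends="compile">
      <!-- assemble the deployable WAR from the web content and classes -->
      <war destfile="dist/myapp.war" webxml="WebContent/WEB-INF/web.xml">
        <fileset dir="WebContent" excludes="WEB-INF/web.xml"/>
        <classes dir="build/classes"/>
      </war>
    </target>

For the exploded layout mentioned in the question, a plain <copy> of the same trees to a target directory works just as well.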
You should be able to do this with "File" -> "Export", scroll down to "Web" -> "WAR File" and follow the instructions
Have a look at this question. It refers to version 3.2, but I believe it still holds, at least up to version 3.4. It seems there is no automatic way of doing the Export -> WAR thing.
Consider the solution given by Pablojim and drop the Export facility.
We have several products which have a lot of shared code and which must be maintained several versions back.
To handle this we use a lot of Eclipse projects: some contain library jars, and some contain shared source code (split across several projects, to avoid one giant heap with numerous dependencies, while still being able to compile everything from scratch to ensure that source and binaries are consistent). We manage these with projectSet.psf's, as they can pull all projects directly out of CVS and leave a fully prepared workspace. We do not do Ant builds directly or use Maven.
We now want to be able to put all these projects and their various versions into a Continuous Integration tool - I like Hudson, but this is just a matter of taste - which essentially means that we need an automatic way to check out the projects to a fresh workspace and compile the source folders as described in the project files in each project. Hudson does not provide such an approach to building a project, so I have been considering the best way to approach this.
Ideas have been
Find or write an Ant plugin/converter that understands projectSet.psf's and maps them to CVS checkout and compile tasks.
Create the build.xml files from within Eclipse and use those. I tried this and found the result verbose, with absolute paths, which does not play well with automated tools that put files where they want to.
Write a Hudson plugin which understands projectSet.psf's to derive a configuration and build it.
Just bite the bullet and manually create and update the CI configuration whenever stuff breaks - I don't like this :)
I'd really like to hear about other peoples experiences so I can decide how to approach this.
Edit: Another option might be using a CI tool which knows more about Eclipse projects and/or project sets. We are not religious - this is just a matter of getting stuff running without having to do everything ourselves. Would CruiseControl be a better option, perhaps? Others?
Edit: Found that ant4eclipse has a "Team Project Set" facility. http://ant4eclipse.sourceforge.net/
Edit: Used the ant4eclipse and ant-contrib Ant extensions to build a complete workspace as a signed runnable jar file, similar to the Runnable JAR facility in Eclipse 3.5M6. I am still depending on Eclipse to create the initial empty workspace and extract the ProjectSet, so that is the next hurdle.
Edit: Ended up with a dual configuration, namely that Hudson extracts the same set of modules as listed in the projectSet.psf file from CVS (which needs to have the same tag), causing them to be located next to each other. Then ant4eclipse works well with the projectSet.psf file embedded in the main module. Caveat: the module list in Hudson must be updated manually, and it appears that a manual workspace cleanup is needed afterwards to let Hudson "discover" that there are more projects than before. This has now worked well for us for a couple of months, but it was quite tedious to get everything working inside the Ant file.
Edit: The "Use Team Projects" with ant4eclipse and a Ctrl-A, Ctrl-C in Project Panel with a Ctrl-V in the CVS projects in Hudson has turned out to work well enough for us to live with (for mature projects this is very rarely changed). I am awaiting the release of ant4eclipse 1.0 - http://www.ant4eclipse.org/, currently in milestone 2 - to see how much homegrown functionality can be replaced with ant4eclipse things.
Edit: ant4eclipse is as of 20100609 in M4 so the schedule at http://www.ant4eclipse.org/node?page=1 is slipping somewhat.
Edit: My conclusion after using our ant4eclipse approach for a longer period is that the build script gets very gnarly and is hard to maintain. Also, the Team Project Set facility (which ant4eclipse uses to locate the projects) works well for CVS-based repositories, but not after we migrated to Git (which is a big thing in itself). New projects will most likely be based on Maven, as this has good support in Jenkins.
I'm not completely sure I understand the problem, but it sounds like the root issue is that you have many projects, some of which are dependent on others. Some of the projects that are closer to the "leaf" of the dependency tree need to be able to use "stable" (or previously "released") versions of the more "core" projects.
I solve exactly this problem using Hudson, Ant, and Ivy. I follow a pattern demonstrated by Clark in Pragmatic Project Automation (he doesn't cover the dependency problems and solutions, and he uses CruiseControl rather than Hudson).
I have a hand-written Ant build file (we call it "cc-build.xml", because of our CruiseControl roots). This file is responsible for refreshing the project's working space from the CM repository and labeling the contents for future reference. It then hands off control to another hand-written Ant build file (build.xml) provided by each project's developers. This build file is responsible for the traditional build steps (compile, packaging, etc.) and is required to put the installable artifacts, unit test reports, etc. into the Hudson artifacts directory. It is my experience that automatically generated build files (from Eclipse or similar IDEs) never come close to being sufficiently robust for use in a CI scenario.
Additionally, it uses ivy to resolve its own dependencies. Ivy supports precisely-specified dependency versions (e.g. "use version 1.1") and it supports "fuzzy versions" (e.g. "use version 1.1+" or "use the latest version in integration status.") Our projects typically start out specifying a very "fuzzy" version for internal projects under ongoing development, and as they get close to a release point, they "freeze" the dependency version so that stuff stops moving underneath them.
The non-leaf projects (projects that are dependents for other projects) also use ivy to publish their artifacts to our internal ivy repository. That repository keeps all past builds of the dependents, so that any project can always depend on any other previous version.
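An ivy.xml sketch showing both styles of revision (the organisation and module names are made up): a frozen version for a stable core project, and fuzzy versions for internal projects still under development:

    <ivy-module version="2.0">
      <info organisation="com.example" module="leaf-app"/>  <!-- placeholders -->
      <dependencies>
        <!-- frozen: exactly version 1.1 -->
        <dependency org="com.example" name="core-lib" rev="1.1"/>
        <!-- fuzzy: version 1.1 or any later one -->
        <dependency org="com.example" name="shared-util" rev="1.1+"/>
        <!-- fuzzy: latest published build with integration status -->
        <dependency org="com.example" name="shared-ui" rev="latest.integration"/>
      </dependencies>
    </ivy-module>

Freezing a dependency before a release is then just a matter of replacing the fuzzy rev with a fixed one.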
Lastly, each project in Hudson is configured with a build trigger that causes a rebuild when any of the projects it depends on builds successfully. This causes it to get built again with the (possibly) new Ivy dependency version.
It is worth noting that once you get this up and running, consistent automated "labeling" or "tagging" of an automated build's inputs is going to be critical for you - otherwise troubleshooting post-build problems is going to result in having to untangle a hornet's nest to find the original source.
Getting all of this set up for our environment took quite a bit of effort (primarily in setting up the Ivy repository and the Ant build files), but it has paid for itself many times over in saved headaches from manually managing the dependencies, and in decreased troubleshooting effort.
"Write a Hudson plugin which understands projectSet.psf's to derive a configuration and build it."
That seems like the winning answer to me.
I work with CruiseControl rather than Hudson, but in my experience, if you can create a plugin that solves your problem it will quickly pay off. And it is generally pretty easy to write a plugin that is custom-fit for your situation, as opposed to one that needs to work for everyone in a similar situation.
I have tried both CruiseControl (CC) and Hudson for our CI solution, and we (as a company) decided on Hudson. But to your question "does CC support building Eclipse projects?", the answer is no, as far as I know. CC supports many more build tools and source control systems, but it is a bit more difficult to configure and use; Hudson is simpler on both counts. We developed custom plugins for both CC and Hudson for the parts of our build cycle that they do not provide out of the box. As for plugin development, if you know and use Maven, Hudson is simpler there too. If you are not familiar with Maven, you first need to learn its basic usage to develop a Hudson plugin successfully, but once you do, plugin development, testing and even debugging are all simpler in Hudson.
For your specific problem, I can think of a solution that makes use of an Eclipse plugin as well. You can develop your own Eclipse plugin that, for instance, gets the psf files from a (configurable) folder and uses Eclipse internals to process them. I mean you can use the existing Eclipse source code that takes a psf file, checks out its project definitions and compiles those projects. This plugin of yours could have a preference page (accessible via Eclipse -> Window -> Preferences) to configure which folder it looks in for psf files.

Your Eclipse plugin should also have a way to start the psf processing without user interaction. For this, you can use IPC to trigger the process: your plugin can listen on a port, and you can write another Java application that connects to your plugin through this port and triggers its processing.

As for the CI part, you can use either CC or Hudson with their external process execution support. If you are using Windows, you can write a .bat file (an .sh file on Linux) that first launches an Eclipse that has your plugin installed, and then launches the Java application that communicates with your Eclipse plugin to trigger the process. From your CI tool you would then run that .bat / .sh file to kick off the whole thing.