I have a working Spring Boot 2.2.5 application built with mvn. As per this documentation I add:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>
From the documentation:
As DevTools monitors classpath resources, the only way to trigger a restart is to update the classpath. The way in which you cause the classpath to be updated depends on the IDE that you are using. In Eclipse, saving a modified file causes the classpath to be updated and triggers a restart. In IntelliJ IDEA, building the project (Build -> Build Project) has the same effect.
With the application running I tried a simple
touch /path/to/app.jar
expecting the application to restart but nothing happened.
Okay, so maybe it's doing something smarter. I modified some source .java, recompiled the .jar, and cp'd it to replace the running .jar file and... nothing happened.
Also from the documentation
DevTools relies on the application context’s shutdown hook to close it during a restart. It does not work correctly if you have disabled the shutdown hook (SpringApplication.setRegisterShutdownHook(false)).
I am not doing this.
DevTools needs to customize the ResourceLoader used by the ApplicationContext. If your application provides one already, it is going to be wrapped. Direct override of the getResource method on the ApplicationContext is not supported.
I am not doing this.
I am running this in a Docker container, if that matters. From the documentation:
Developer tools are automatically disabled when running a fully packaged application. If your application is launched from java -jar or if it is started from a special classloader, then it is considered a “production application”. If that does not apply to you (i.e. if you run your application from a container), consider excluding devtools or set the -Dspring.devtools.restart.enabled=false system property.
I don't understand what this means or if it is relevant.
I want to recompile a .jar and replace it in the running docker container and trigger an application restart without restarting the container. How can I do this?
EDIT: I am using mvn to rebuild the jar, then docker cp to replace it in the running container. (IntelliJ IDEA claims to rebuild the project, but the jar files are actually not touched, but that's another story.) I am looking for a non-IDE-specific solution.
Spring Boot DevTools offers Spring Boot applications functionality that is usually available in IDEs like IntelliJ: the ability to, for example, restart an application or force a live browser reload when certain classes or resources change. This can be very useful in the development phase of your application.
It is typically used in conjunction with an IDE: when detected on the classpath and not disabled, it is launched by Spring Boot along with the rest of your application.
Although you can configure it to monitor further resources, it will usually look for changes in your application code, i.e. in your classes and resources.
It is important to note that, AFAIK, DevTools monitors your classes and resources in exploded form; that is, the restart process will not work if you overwrite your whole application jar, only if you overwrite individual resources in your classes directory.
This functionality can be tested with Maven. Consider downloading a simple project skeleton from Spring Initializr with Spring Boot DevTools and Spring Web, for example, so that the application keeps running. From a terminal, in the directory that contains the pom.xml file, run your application with the help of the spring-boot-maven-plugin included in the pom.xml:
mvn spring-boot:run
The command will download the project dependencies, compile and run your application.
Now make any modification to your source code, either in your classes or in your resources, and, from another terminal in the same directory, recompile:
mvn compile
If you look at the first terminal window you will see that the application is restarted to reflect the changes.
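If you prefer to control exactly when the restart happens, DevTools also supports a trigger file through the spring.devtools.restart.trigger-file property. A minimal sketch, assuming the property is set to .reloadtrigger in application.properties (the file name and location are illustrative):
# Only changes to this one classpath file now cause a restart:
touch target/classes/.reloadtrigger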
If you are using Docker for your application deployment, reproducing this behavior can be tricky.
On one hand (I do not know if it makes sense in your case), you can try creating a Maven-based image and running your code inside it, just as described above. Your Dockerfile could look similar to this:
FROM maven:3.5-jdk-8 as maven
WORKDIR /app
# Copy project pom
COPY ./pom.xml ./pom.xml
# Fetch (and cache) dependencies
RUN mvn dependency:go-offline -B
# Copy source files
COPY ./src ./src
# Run your application when the container starts
CMD ["mvn", "spring-boot:run"]
With this setup, you can copy your resources with docker cp into the /app/target directory and that will trigger an application restart. As an alternative, consider mounting a volume in your container instead of using docker cp.
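For instance, a rough sketch of both options (container and image names are illustrative):
# Copy freshly compiled classes into the watched directory:
docker cp target/classes/. <container name>:/app/target/classes
# Or, instead, mount the target directory when starting the container:
docker run -v "$(pwd)/target:/app/target" <image name>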
Better still, taking into account that overwriting your application jar will probably not work, you can copy both your classes and your library dependencies and run your application in an exploded way. Consider the following Dockerfile:
FROM maven:3.5-jdk-8 as maven
WORKDIR /app
# Copy your project pom
COPY ./pom.xml ./pom.xml
# Fetch (and cache) dependencies
RUN mvn dependency:go-offline -B
# Copy source files
COPY ./src ./src
# Compile application and library dependencies
# The dependencies will, by default, be copied to target/dependency
RUN mvn clean compile dependency:copy-dependencies -Dspring-boot.repackage.skip=true
# Final run image (based on https://stackoverflow.com/questions/53691781/how-to-cache-maven-dependencies-in-docker)
FROM openjdk:8u171-jre-alpine
# OPTIONAL: copy dependencies so the thin jar won't need to re-download them
# COPY --from=maven /root/.m2 /root/.m2
# Change working directory
WORKDIR /app
# Copy classes from maven image
COPY --from=maven /app/target/classes ./classes
# Copy dependent libraries
COPY --from=maven /app/target/dependency ./lib
EXPOSE 8080
# Please, modify your main class name as appropriate
ENTRYPOINT ["java", "-cp", "/app/classes:/app/lib/*", "com.example.demo.DemoApplication"]
The important line in the Dockerfile is this:
mvn clean compile dependency:copy-dependencies -Dspring-boot.repackage.skip=true
It instructs Maven to compile your sources and copy the required libraries. The spring-boot.repackage.skip=true flag tells the spring-boot-maven-plugin not to repackage the application; strictly speaking it is redundant here, because the repackage goal normally runs in the package phase, which this command never reaches.
With this Dockerfile, build your image (let's tag it devtools-demo, for example):
docker build -t devtools-demo .
And run it:
docker run devtools-demo:latest
With this setup, if you now change your classes and/or resources and run mvn locally:
mvn compile
you should be able to force the restart mechanism in your container with the following docker cp command (the trailing /. copies the directory contents, rather than nesting a classes directory inside /app/classes):
docker cp target/classes/. <container name>:/app/classes
Again, please consider mounting a volume in your container instead of using docker cp, as sketched below.
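A sketch of that volume-based alternative, reusing the devtools-demo image from above (the host path is an assumption):
# Mount the compiled classes so a local mvn compile is picked up directly,
# with no docker cp step:
docker run -v "$(pwd)/target/classes:/app/classes" devtools-demo:latest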
I tested the setup and it worked properly.
The important thing to keep in mind is to replace your exploded resources, not the whole application jar.
As another option, you can take an approach similar to the one indicated in your comments and run your Devtools in remote mode:
FROM maven:3.5-jdk-8 as maven
WORKDIR /app
# Copy project pom
COPY ./pom.xml ./pom.xml
# Fetch (and cache) dependencies
RUN mvn dependency:go-offline -B
# Copy source files
COPY ./src ./src
# Build jar
RUN mvn package && cp target/your-app-version.jar app.jar
# Final run image (based on https://stackoverflow.com/questions/53691781/how-to-cache-maven-dependencies-in-docker)
FROM openjdk:8u171-jre-alpine
# OPTIONAL: copy dependencies so the thin jar won't need to re-download them
# COPY --from=maven /root/.m2 /root/.m2
# Change working directory
WORKDIR /app
# Copy artifact from the maven image
COPY --from=maven /app/app.jar ./app.jar
ENV JAVA_DOCKER_OPTS "-agentlib:jdwp=transport=dt_socket,server=y,address=*:8000,suspend=n"
ENV JAVA_OPTS "-Dspring.devtools.restart.enabled=true"
EXPOSE 8000
EXPOSE 8080
ENTRYPOINT ["/bin/bash", "-lc", "exec java $JAVA_DOCKER_OPTS $JAVA_OPTS -jar /app/app.jar"]
For the Spring Boot Devtools remote mode to work properly, you need several things (some of them pointed out by Opri as well in his/her answer).
First, you need to configure the spring-boot-maven-plugin to include the devtools in your application jar (it will be excluded otherwise, by default):
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <excludeDevtools>false</excludeDevtools>
    </configuration>
</plugin>
Then, you need to set a value for the configuration property spring.devtools.remote.secret. This property is part of how the DevTools remote support works.
The remote support consists of two parts, a client and a server. Basically, the client is a copy of your server code, and it uses the value of the spring.devtools.remote.secret configuration property to authenticate itself against the server.
This client code should be run from an IDE, and you attach your IDE debugging process to a local server exposed from that client.
Every change to the resources monitored by the client (remember, the same code as on your server) is pushed to the remote server and will trigger a restart if necessary.
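Although the client is usually launched from an IDE, it can also be started from a plain terminal, which fits the non-IDE requirement of the question. A rough sketch, assuming the project has been built locally and the container publishes port 8080 (the classpath construction is an assumption):
# Write the project's dependency classpath to a file:
mvn dependency:build-classpath -Dmdep.outputFile=cp.txt
# Launch the DevTools remote client against the running container:
java -cp "target/classes:$(cat cp.txt)" org.springframework.boot.devtools.RemoteSpringApplication http://localhost:8080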
As you can see, this functionality is again more appropriate from a development point of view.
If you actually need to restart your application by overwriting your application jar file, a better approach may be to configure your Docker container to run a shell script in its ENTRYPOINT or CMD. This shell script monitors a copy of your jar in a certain directory; when that file changes as a consequence of your docker cp, the script stops the currently running application version, replaces the current jar with the new one, and starts the new version (the application should run from a different location to avoid problems while the jar is being updated). Not the same thing, but please consider reading this related SO answer.
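A minimal sketch of such a script, with all paths and timings illustrative (docker cp overwrites /app/incoming/app.jar; the application always runs from a private copy, so the jar is never replaced while in use):
#!/bin/sh
while true; do
  cp /app/incoming/app.jar /app/run/app.jar
  java -jar /app/run/app.jar &
  PID=$!
  # stat -c %Y prints the modification time (GNU coreutils and busybox);
  # poll until the incoming jar changes, then restart.
  LAST=$(stat -c %Y /app/incoming/app.jar)
  while [ "$(stat -c %Y /app/incoming/app.jar)" = "$LAST" ]; do
    sleep 2
  done
  kill "$PID"
  wait "$PID" 2>/dev/null
done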
In any case, when you run an application in a container, you are trying to provide a consistent and platform-independent way of deploying it. From this perspective, instead of monitoring changes inside your Docker container, a more convenient approach may be to generate and deploy a new version of your container image with those changes. This process can be automated to a great extent with tools like Jenkins, Travis, etcetera. These tools let you define CI/CD pipelines that, in response to a code commit, for example, generate a Docker image with your code on the fly and, if configured accordingly, later deploy this image to services like some Docker flavor or Kubernetes, on premises or in the cloud. Some of them, especially Kubernetes, but Swarm and even Docker Compose as well, will allow you to perform rolling updates with no or minimal service interruption.
To conclude: it probably will not fit your needs, but be aware that you can also restart your application through management endpoints, using spring-boot-starter-actuator directly or together with Spring Boot Admin, for instance.
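Note that plain Actuator does not ship a restart endpoint out of the box; one is provided by spring-cloud-context when it is on the classpath and enabled with management.endpoint.restart.enabled=true. Under those assumptions, a restart could be triggered with:
curl -X POST http://localhost:8080/actuator/restart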
Finally, as already indicated in the Spring Boot DevTools documentation, you can try a different option, based not on restart but on application reload, i.e. hot swapping. This functionality is offered by commercial products like JRebel, although there are open source alternatives as well, mainly DCEVM and HotswapAgent. This related article provides some insight into how these last two products work, and this GitHub project provides complementary information about how to run them in Docker containers.
I had a similar problem when using IntelliJ IDEA; I saw somewhere that you have to use the Build button for it to work.
With JSP, the application reloads the files, but it is not completely automatic, because IntelliJ saves automatically. That is the default behavior, though I believe there is a way to change it: save manually, and then have it reload automatically.
This works for JSP apps only; if you try it with standard (Swing) apps it will create a double-frame execution.
I am not sure, because you do not say explicitly whether you tried these things, but:
try setting this to true: SpringApplication.setRegisterShutdownHook(true)
try adding the -Dspring.devtools.restart.enabled=true system property manually in the Dockerfile, as sketched below
I know the documentation says it should be true by default, but try setting it manually anyway.
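A minimal sketch of the second suggestion, assuming the jar is launched directly (the path and jar name are illustrative):
# Container start command with the system property passed explicitly:
java -Dspring.devtools.restart.enabled=true -jar /app/app.jar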
Maybe show us the Dockerfile.
Later Edit:
I saw this in the documentation:
Repackaged archives do not contain devtools by default. If you want to use certain remote devtools features, you'll need to disable the excludeDevtools build property to include it. The property is supported with both the Maven and Gradle plugins.
The Spring Boot developer tools are not just limited to local development. You can also use several features when running applications remotely. Remote support is opt-in; to enable it, you need to make sure that devtools is included in the repackaged archive:
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <excludeDevtools>false</excludeDevtools>
            </configuration>
        </plugin>
    </plugins>
</build>
Then you need to set a spring.devtools.remote.secret property, for example:
spring.devtools.remote.secret=mysecret
Remote devtools support is provided in two parts; there is a server side endpoint that accepts connections, and a client application that you run in your IDE. The server component is automatically enabled when the spring.devtools.remote.secret property is set. The client component must be launched manually.
(From the Spring documentation.)
In order to restart the app with DevTools you need to make sure of the following things:
Use an IDE or a build tool like Maven or Gradle to start the app.
Running java -jar will not work, since DevTools is disabled for a fully packaged app.
With Maven, you can run the app with mvn spring-boot:run.
Refer to the official documentation for more details.
I had a similar issue: after adding the dependency, Spring Boot was still not picking up the DevTools configuration, so I did the following steps in Eclipse:
installed Eclipse (assuming you already have it installed)
installed the STS plugin from the Eclipse marketplace (since I prefer the generic Eclipse distribution, I installed the STS plugin on top of it)
Project --> Build Automatically
Debug As --> Spring Boot App
done.
Related
I am working with a full-stack application (JSP and Java, Spring based) that has an embedded Tomcat server. Suppose I made some changes to the Tomcat source code relevant to the embedded Tomcat server (same Tomcat version) that I use in my application.
I need to debug the Tomcat source code when starting my application with the embedded Tomcat server.
Is there any way to achieve this?
Note: I use Apache Ant as the build tool.
To achieve what you want you need to substitute the jar file containing the embedded Tomcat (I guess this is org.apache.tomcat.embed:tomcat-embed-core). Please follow these steps:
First of all, build the jar from the sources that you've modified locally by running e.g. mvn clean install. This installs the locally built jar into your local Maven repository. Note that, in order to distinguish your build from the rest, you should specify a custom version in the pom.xml of the Tomcat sources (e.g. 9.0.0-my-custom-build).
As soon as your custom build is in your local .m2 repository, it can be used by your main application. In the <dependencyManagement> section of your pom.xml, specify this:
<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-core</artifactId>
    <version>9.0.0-my-custom-build</version>
</dependency>
This declaration forces Maven to use your custom Tomcat version, i.e. 9.0.0-my-custom-build.
Build your application and run it. At debug time you'll be able to see and debug your changes.
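To double-check which build actually ends up on the classpath, a quick verification (assuming the application itself builds with Maven):
# Show which tomcat-embed-core version the application resolves:
mvn dependency:tree -Dincludes=org.apache.tomcat.embed:tomcat-embed-core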
P.S. No matter what your build system is, the idea is the same: the jar built from the modified sources must substitute the default one in the classpath of your application.
I have a multi-module Maven project. It is quite ancient, and building it involves a special song and dance.
Project structure
root
|__api
|__build
|__flash
|__gwt
|__server
|__service
|__shared
|__target
|__toolset
To build such a project, I have a special script that needs to be executed while at the root of the project.
./build/build_and_deploy.sh
When building on Windows there are a lot of problems (problems with long paths, characters and line separators get mangled, etc.), so I want to build this project in Docker.
At first I wanted to hook up the docker-maven-plugin from io.fabric8 as a Maven plugin, but as I understand it, it cannot run the build itself inside Docker.
So I tried to write a Dockerfile and ran into the following problems:
I don't want to copy the .m2 folder into the image; there are a lot of dependencies there and it would take quite a long time.
I don't want to copy the project sources inside the container.
I couldn't run the script ./build/build_and_deploy.sh.
How I see the solution to this problem:
Create a Dockerfile based on Maven, Java 8 and bash.
Use volumes to mount the sources and the Maven repository.
Because I work through a VPN and the script also deploys the result, a solution is needed for that as well (proxy/port forwarding?).
If you have experience or examples of a similar setup, or competent advice, I will be glad to hear it.
You can perform the build with Maven inside Docker.
For that, you basically trigger something like docker build ., and the rest happens inside the Dockerfile:
Start from an image that has Maven, such as the official maven image.
Add your whole project structure
Run your build script
Save your build result
To save your build result, you might want to upload it to some repository, or store it in a mounted volume that remains available after the container run. Alternatively, copy it into the next stage if you use a multi-stage Docker build.
If you want to prevent repeated downloads of dependencies into the .m2 directory, mount it as a volume as well when running the container; see the sketch below.
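A minimal sketch of that volume-based variant, with image tag and paths as assumptions; the sources and the local repository are mounted, so nothing is copied into the image and dependency downloads are cached across runs:
docker run --rm -v "$(pwd):/usr/src/app" -v "$HOME/.m2:/root/.m2" -w /usr/src/app maven:3-jdk-8 ./build/build_and_deploy.sh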
I have several Java components (WARs); all of them expose web services, and they happen to use the same messaging objects (DTOs).
These components all share a common Maven dependency for the DTOs; let's call it "messaging-dtos.jar". This common dependency has a version number, for example messaging-dtos-1.2.3.jar, where 1.2.3 is the Maven version for that artifact, which is published in a Nexus repository and the like.
In the Maven world, Docker aside, it can get tedious to work with closed version dependencies. The solution for that is Maven SNAPSHOTs. When you use, for example, the Eclipse IDE and you set a dependency to a SNAPSHOT version, the IDE will take the version from your current workspace instead of Nexus, saving time by not having to close a version each time you make a small change.
Now, I don't know how to make this development cycle to work with docker and docker-compose. I have "Component A" which lives in its own git repo, and messaging-dtos.jar, which lives in another git repo, and it's published in nexus.
My Dockerfile simply does a RUN mvn clean install at some point, bringing in the closed version of this dependency (we use Dockerfiles for the actual deployments, but for local environments we use docker-compose). This works for closed versions, but not for SNAPSHOTs (at least not for local SNAPSHOTs; I could publish the SNAPSHOT in Nexus, but that creates another set of problems, with different content overwriting the same SNAPSHOT and such; been there, and I would rather not go back).
I've been thinking about using docker-compose volumes at some point, maybe to mount whatever is in my local .m2 so that ComponentA can find the snapshot dependency when it builds, but this doesn't feel "clean" enough; the build would depend partially on what is specified in the Dockerfile and partially on things built locally. I'm not sure that's the correct way.
Any ideas? Thanks!
I propose maintaining two approaches: one for your local development environment (i.e. your machine) and another for building in your current CI tool.
For your local dev environment:
A Dockerfile that provides the system needs for your War application (i.e. Tomcat)
docker-compose to mount a volume with the built war app, from Eclipse or whatever IDE.
For CI (not your dev environment):
A very similar Dockerfile but one that can build your application (with maven installed)
A practical example
I use the Docker multi-stage build feature.
A single Dockerfile for both the Dev and CI environments. It could be split in two, but I prefer to maintain only one:
FROM maven as build
ARG LOCAL_ENV=false
# Maven needs the pom as well as the sources
COPY ./pom.xml /app/pom.xml
COPY ./src /app/src
# Placeholder war so the final COPY succeeds even when the build is skipped
RUN mkdir -p /app/target/ && touch /app/target/app.war
WORKDIR /app
# Run the Maven build only if we are not in the Dev environment
# (a plain `RUN test ... && mvn ...` would fail the build when the test is false)
RUN if [ "$LOCAL_ENV" = "false" ]; then mvn clean install; fi
FROM tomcat
COPY --from=build /app/target/app.war /usr/local/tomcat/webapps/
The multi-stage build saves a lot of disk space by discarding everything from the build stage except what is COPY --from'ed.
Then, docker-compose.yml used in Dev env:
version: "3"
services:
app:
build:
context: .
args:
LOCAL_ENV: true
volumes:
- ./target/app.war:/usr/local/tomcat/webapps/app.war
Build in CI (not your local machine):
# Will run `mvn clean install`, fetching whatever it needs from Nexus and so on.
docker build .
Run in local env (your local machine):
# Will inject the war that should be available after a build from your IDE
docker-compose up
I have a current setup of IntelliJ 2016 which compiles my Java files on the fly. Thanks to some configuration in IntelliJ, any changes are propagated directly to Tomcat. This way I don't have to manually build a new application and deploy it to Tomcat, which increases productivity.
We want to replace Tomcat with WildFly 10 but keep the hot-deploy functionality. On top of that, the WildFly server will be hosted in a Docker container.
So what I did is mount wildfly/standalone/deployments/myapp.war in the container to my host directory myapp/target/myapp.war. In addition, I configured a JBoss remote server configuration with remote staging set to "same file system" and let Maven build an exploded war. This way, when a Maven build is performed, the contents of the target/myapp.war directory are directly available in my Docker container. When I run the container and perform a new Maven package, I do see that WildFly reports the new changes and that redeployment succeeded. Unfortunately this only goes well once or twice in a row.
So, coming from the Tomcat hot deploy, where no Maven build was involved and any changes were directly available in Tomcat, I'm wondering whether the same can be achieved with the setup IntelliJ, Maven, WildFly and Docker. That is: can a change to a Java file in IntelliJ be compiled and pushed to WildFly without a redeploy or a Maven build?
WildFly with Eclipse supports hot code replacement.
You have to start the web app in debug mode.
For every change in the Java code, just do a Maven install and refresh the target.
Limitations:
you can only replace statements inside method bodies;
you cannot change a class's structure or add new methods.
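A sketch of starting WildFly in debug mode so the IDE can attach a remote debugger (the port is an assumption):
# JDWP is enabled on the given port; attach Eclipse's remote debugger to it
# to get hot replacement of method bodies:
./bin/standalone.sh --debug 8787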
I will add 300 points as bounty
I have recently started to take a closer look at Docker and how I can use it to get new team members up and running with the development environment faster, as well as to ship new versions of the software to production.
I have some questions regarding how and at what stage I should add the Java EE application to the container. As I see it there are multiple ways of doing this.
This WAS the typical workflow (in my team) before Docker:
Developer writes code
Developer builds the code with Maven producing a WAR
Developer uploads the WAR in the JBoss admin console / or with Maven plugin
Now that Docker has come around, I am a little confused about whether I should create the images I need and configure them so that all that is left to do when you run the JBoss WildFly container is to deploy the application through the admin console on the web. Or should I create a new image each time I build the application with Maven, add the WAR with the ADD command in the Dockerfile, and then just run the container without ever deploying to it once it is started?
In production I guess the last approach is what is preferred? Correct me if I am wrong.
But in development how should it be done? Are there other workflows?
With the latest version of Docker, you can achieve that easily with Docker links, Docker volumes and Docker Compose. More information about these tools is available on the Docker site.
Back to the workflow you mentioned: for any typical Java EE application, an application server and a database server are required. Since you do not mention in your post how the database is set up, I will assume that your development environment has a separate database server for each developer.
Taking all of this into account, I would suggest the following workflow:
Start from the official base WildFly application server image, which you can fetch with the docker pull command.
Run the base application server with:
docker run -d -it -p 8080:8080 -p 9990:9990 --name baseWildfly jboss/wildfly
The application server is now running; you need to configure it to connect to your database server, set up the datasource, and apply any other configuration necessary to start your Java EE application.
For this, you need to log into a bash terminal inside the JBoss container:
docker exec -i -t baseWildfly /bin/bash
You are now in the container's terminal and can configure the application server as you would in any Linux environment.
You can test the configuration by manually deploying the WAR file to WildFly. This can be done easily with the admin console, the Maven plugin, or the ADD command, as you said. I usually do it with the admin console, just to test quickly. When you have verified that the configuration works, you can remove the WAR file and create a snapshot of your container:
docker commit --change "add base settings and configurations" baseWildfly yourRepository:tag
You can now push the created image to your private repository and share it with your developer team. They can pull the image and run the application server, ready to deploy right away.
We don't want to deploy the WAR file through the admin console for every Maven build, as that is too cumbersome, so the next task is to automate it with a Docker volume.
Assuming that you have configured Maven to build the WAR file into "../your_project/deployments/", you can link that directory to the deployment directory of the JBoss container as follows (note that the host side of a bind mount must be an absolute path):
docker run -d -p 8080:8080 -v /path/to/your_project/deployments:/opt/jboss/wildfly/standalone/deployments yourRepository:tag
Now, every time you rebuild the application with Maven, the application server will scan for changes and redeploy your WAR file.
It is also quite problematic to have a separate database server for each developer, as everyone has to configure it in the container themselves and they might have different settings (e.g. the db's url, username, password, etc.). So it's good to dockerize that eventually as well.
Assuming you use Postgres as your db server, you can pull it from the official postgres repository. When you have the image ready, you can run the db server:
docker run -d -p 5432:5432 -t --name postgresDB postgres
or run the database server with the "data" directory mounted from the host:
docker run -d -p 5432:5432 -v /path/to/your_postgres/data:/var/lib/postgresql -t --name postgresDB postgres
The first command keeps your data inside the container, while the latter keeps your data on the host.
Now you can link your database container with WildFly:
docker run -d -p 8080:8080 --link postgresDB:database -t baseWildfly
Now you can have the same environment for all members of the developer team, and they can start coding with minimal setup.
The same base images can be used for the production environment, so that whenever you want to release a new version, you just need to copy the WAR file to the "your_deployment" folder of the host.
The good thing about dockerizing the application server and the db server is that you can easily cluster them in the future, to scale out or to provide high availability.
I've used Docker with GlassFish extensively for a long time now, and wrote a blog on the subject a while ago here.
It's a great tool for Java EE development.
For your production image I prefer to bundle everything together, building off the static base image and layering in the new WAR. I like to use the CI server to do the work and have a CI configuration for production branches which will grab a base, layer in the release build, and then publish the artifact. Typically we deploy into production manually, but if you really want to get fancy you can even automate that, with the CI server deploying into a production environment and using proxy servers to ensure that new sessions coming in get the updated version.
In development I like to take the same approach when it comes time to locally run anything that relies on the container (e.g. Arquillian integration tests) prior to checking in code. That keeps the environment as close to production as possible, which I think is important when it comes to testing. That's one big reason I am against approaches like testing with embedded containers but deploying to non-embedded ones: I've seen plenty of cases where a test will pass in the embedded environment and fail in the production/non-embedded one.
During a develop/deploy/hand-test cycle, prior to committing code, I think the approach of deploying into a container (which is part of a base image) is more cost effective in terms of the speed of that dev cycle than building in your WAR each time. It's also a better approach if your dev environment uses a tool like JRebel or XRebel, where you can hot-deploy your code and simply refresh your browser to see the changes.
You might want to have a look at rhuss/docker-maven-plugin. It allows for seamless integration, using Docker as your deployment unit:
Use a standard Maven assembly descriptor for building images with docker:build, so your generated WAR file or your microservice can easily be added to a Docker image.
You can push the created image with docker:push.
With docker:start and docker:stop you can make use of your image during unit tests.
This plugin comes with comprehensive documentation; if there are any open questions, please open an issue.
And as you might have noticed, I'm the author of this plugin ;-). Frankly, there are other docker-maven-plugins out there as well, each with a slightly different focus. For a quick comparison, have a look at shootout-docker-maven, which provides sample configurations for the four most active docker-maven-plugins.
The workflow then simply shifts the artifact boundary from WAR/EAR files to Docker images. mvn docker:push moves them to a Docker registry, from where they are pulled during the various testing stages of a continuous delivery pipeline.
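For example, publishing the image as part of a normal build could look like this (goal names as provided by the plugin; the registry configuration lives in the pom):
# Build the image and publish it to the configured registry:
mvn clean package docker:build docker:push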
The way you would normally deploy anything with Docker is by producing a new image on top of a platform base image. This way you follow Docker's dependency-bundling philosophy.
In terms of Maven, you can produce a tarball assembly (let's say it's called jars.tar) and then call ADD jars.tar /app/lib in the Dockerfile. You might also implement a Maven plugin that generates the Dockerfile as well.
This is the most sane approach with Docker today; other approaches, such as building an image FROM scratch, are not really applicable to Java applications.
See also Java JVM on Docker/CoreOS.
The blog post about setting up JRebel with Docker by Arun Gupta would probably be handy here: http://blog.arungupta.me/configure-jrebel-docker-containers/
I have tried a similar scenario, using Docker to run my application. In my situation I wanted to start Docker with Tomcat running the war, and then, at the integration-test phase of Maven, run the Cucumber/PhantomJS integration tests against the Docker container.
The example implementation is documented at https://github.com/abroer/cucumber-integration-test. You could extend this example to push the Docker image to your private repo when the tests are successful. The pushed image can be used in any environment, from development to production.
For my current deployment process I use GlassFish and this trick, which works very nicely.
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>${plugin.exec.version}</version>
    <executions>
        <execution>
            <id>docker</id>
            <phase>package</phase>
            <goals>
                <goal>exec</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <executable>docker</executable>
        <arguments>
            <argument>cp</argument>
            <argument>${project.build.directory}/${project.build.finalName}</argument>
            <argument>glassfish:/glassfish4/glassfish/domains/domain1/autodeploy</argument>
        </arguments>
    </configuration>
</plugin>
Once you run mvn clean package, the container kicks in and starts deploying the latest war.