I have a Spring Boot Java application built with Gradle, run by GitLab CI on gitlab.com, that works just fine. But every CI run spends a lot of time downloading dependencies, because I'm using a shared runner from gitlab.com (a docker-auto-scale runner) and it doesn't cache anything for the next run.
My idea is to create a Docker image based on docker:latest (because the build jobs need to interact with the Docker daemon while running) and pre-install or add the Gradle caches to it, so that the image contains all the dependencies my app needs and the CI doesn't have to re-download them on every run.
Has anyone done it before?
I had the exact same idea, but for Maven.
During the Docker image build, I copy my project files into the image, run mvn clean install, and push the resulting image to my GitLab registry.
CI pipeline execution time is considerably reduced.
But of course you need to do this every time you have new dependencies, or at least whenever there is a large difference between what is already in the cache and the dependencies your app requires.
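A minimal sketch of such a warm-cache image (the base image tag, paths, and the -DskipTests flag are assumptions, not taken from the answer):

FROM maven:3.8-openjdk-11
WORKDIR /build
# Copy the project and run a full build once, so the image's
# local repository (~/.m2/repository) already holds every dependency
COPY pom.xml .
COPY src ./src
RUN mvn clean install -DskipTests

CI jobs that run inside this image then resolve dependencies from the baked-in local repository instead of the network.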
Related
I have a multi-module Maven project. It is quite ancient, and building it requires jumping through a few hoops.
Project structure
root
|__api
|__build
|__flash
|__gwt
|__server
|__service
|__shared
|__target
|__toolset
To build this project, I have a special script that needs to be executed from the root of the project.
./build/build_and_deploy.sh
When building on Windows there are a lot of problems (long paths, characters and line separators get mangled, etc.), so I want to build this project in Docker.
At first I wanted to use docker-maven-plugin from io.fabric8 as a Maven plugin, but as I understand it, it cannot run the Maven build itself inside Docker.
So I tried to write a Dockerfile and ran into the following problems:
I don't want to copy the .m2 folder into the image; there are a lot of dependencies there, and copying would take quite a long time.
I don't want to copy the project sources into the container.
I couldn't run the script ./build/build_and_deploy.sh.
Here is how I see the solution to this problem:
Create a Dockerfile based on an image that provides Maven, Java 8, and bash.
Use volumes to mount the sources and the Maven repository.
Because I work through a VPN and the script also deploys, I need the container to reach the network through it (proxy/port forwarding?).
If you have experience with a similar setup, example scripts, or solid advice, I will be glad to hear it.
You can perform the build with Maven inside Docker.
For that you basically trigger something like docker build ., and the rest happens inside the Dockerfile:
Start off from an image that has Maven, such as the official maven image.
Add your whole project structure
Run your build script
Save your build result
To save your build result, you might want to upload it to some repository, or store it in a mounted volume that remains available after the container run. Alternatively, copy it to the next stage if you use a multi-stage Docker build.
If you want to prevent repeated downloads into the .m2 directory, or have many other dependencies in there, mount it as a volume when running the container as well.
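A hedged sketch of the volume-mounting variant, which avoids copying either the sources or the .m2 folder into the image (the image tag and container paths are assumptions):

docker run --rm \
  -v "$PWD":/project \
  -v "$HOME/.m2":/root/.m2 \
  -w /project \
  maven:3-jdk-8 \
  ./build/build_and_deploy.sh

The host's source tree and local repository are mounted, so nothing is copied into the image and downloaded dependencies persist across runs. A container normally routes traffic through the host's network, so a VPN on the host should cover it; an explicit proxy can be passed with -e HTTP_PROXY / -e HTTPS_PROXY if needed.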
I'm trying to build an efficient Docker image that leverages Docker’s image layering to decrease the duration and the required bandwidth for uploading to or downloading from the repository. Is there any way to separate my compiled code and dependencies (external libs) using Gradle build tools?
My current Dockerfile copies the fat jar to the container, while the most significant part of the jar file is libraries that don’t change between releases.
My current Dockerfile:
FROM openjdk:11-jre-slim
VOLUME /tmp
WORKDIR /app
# Add the application's jar to the container
COPY /build/libs/app-*.jar app.jar
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","app.jar"]
Edit 1: I'm trying to achieve this goal without using any plugin (if it's possible).
Well, maybe a tool like https://github.com/GoogleContainerTools/jib is more what you are looking for? It has been made precisely for this kind of use case.
With "pure" Docker, the whole build is done in your Dockerfile: it starts with a command to download the dependencies, and only THEN adds your code to the image with a copy. This way, the first layers can be cached. You should probably start from an official Gradle image. And since your dependencies will be included, you can even get away with running your software by directly calling Gradle. The image will be bigger, but only your code is sent over the network each time, unless your dependencies change.
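A minimal sketch of that layering without any plugin (the base image tag is an assumption; Gradle has no official "download dependencies only" task, so running the dependencies task is used here as an approximation that forces resolution):

FROM gradle:7-jdk11
# Copy only the build scripts first, so this layer stays cached
# until the dependency declarations change
COPY --chown=gradle:gradle build.gradle settings.gradle /home/gradle/app/
WORKDIR /home/gradle/app
# Resolve dependencies to populate the Gradle cache in this layer
RUN gradle --no-daemon dependencies
# Source changes invalidate only the layers from here on
COPY --chown=gradle:gradle src /home/gradle/app/src
RUN gradle --no-daemon build
# Run directly through Gradle; bootRun assumes the Spring Boot plugin
CMD ["gradle", "--no-daemon", "bootRun"]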
There are a lot of other options to achieve something similar with Docker, but many of them break the reproducibility of the build, by using local caches, or downloading the dependencies directly from the target machine. They work, but then you lose the "universal container" approach of Docker.
I have a question about the correct flow for dockerizing a Java/Spring Boot/Maven application in order to use Docker layers.
Currently it looks like this: Maven updates the versions (using the maven-release-plugin) -> the Docker image is built with a custom Dockerfile.
However, the problem is that all of the dependencies are downloaded every time, even when the POMs are not modified, because every new version I build has a different version value in the POMs (it is updated by the plugin).
I would like to use Docker layers so that our builds are faster than they currently are. Is that possible? If so, how? Should I use a different plugin, or are there other options to take into account (maybe the whole flow is bad and should be done differently)?
One possibility is to set up a local proxy, like a Nexus or Artifactory instance, which you can use to cache content from the internet. You'll still be downloading the dependencies, but hopefully that will be faster.
The second possibility would be to create Docker layers that download the dependencies into the local repository (inside the Docker image) before the actual build runs. That would look something like:
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package
That way, your Docker image captures all of the dependencies and plugins in the local ~/.m2/repository cache, and the build then resolves them locally. When you re-run the build step, it should reuse the previously cached layer.
You'll need to rebuild the dependency layer over time as your project evolves, but it speeds up your build by not needing to download the dependencies again.
You might want a final step that creates a new runtime Docker image from the built contents, though, rather than depending on this layer for all your production content.
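A hedged sketch of that final step as a multi-stage build (base images and paths are assumptions):

FROM maven:3.8-openjdk-11 AS build
WORKDIR /build
# Dependency layer: cached as long as pom.xml is unchanged
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests

# Slim runtime image that carries only the built artifact
FROM openjdk:11-jre-slim
WORKDIR /app
COPY --from=build /build/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]

Note that if the release plugin bumps the version in pom.xml on every release, the COPY pom.xml layer is invalidated each time, so the dependency layer is only reused between builds of the same version.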
You might want to look at David Delabasee's presentation at QCon London last year:
https://www.infoq.com/presentations/openjdk-containers/
I recommend either mounting the local Maven repository as a volume and using it across Docker runs, or using the special local repository (/usr/share/maven/ref/), whose contents are copied into the container's local repository on startup.
The documentation of the official Maven Docker image also points out different ways to achieve better caching of dependencies.
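A sketch of the volume-mounting variant, following the pattern from the official image's documentation (the tag is an assumption):

docker run -it --rm \
  -v "$PWD":/usr/src/mymaven \
  -v "$HOME/.m2":/root/.m2 \
  -w /usr/src/mymaven \
  maven:3.8-openjdk-11 \
  mvn clean install

With the host's ~/.m2 mounted into the container, every run reuses the same local repository instead of downloading from scratch.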
I have a Docker container whose purpose is to run an image that implements a REST API. The REST API is created with Java (using the Eclipse IDE, Maven, and Spring Boot). The jar file that gets built is titled: workserver-0.0.1-SNAPSHOT.jar
The code is committed to a GitLab server. When this takes place, a Jenkins job is triggered. The Jenkins job pulls down the code from the GitLab repository, creates a .jar file, and then goes through the actions to turn the .jar file into a Docker image (or rather, a .zip version of the Docker image).
scp is used to move the zip file to a target system, where the .zip file is unpacked (revealing the Docker image) and a container is started. The thing is, the Docker image being used has a version of "latest" (ex: imagename:latest).
I would like to use versions in this scenario, starting from Eclipse (i.e. a pom.xml file targeting a workserver-2.2.13.jar file would eventually lead to a Docker image named imagename:2.2.13).
I have seen here how one can assign a version number in Docker:
Adding tags to docker image from jenkins
I have also seen that one can use tags and version numbers in Git, e.g.:
git tag -a v2.5 -m 'Version 2.5'
As mentioned above, the Maven pom.xml file contains instructions to produce a .jar file called:
workserver-0.0.1-SNAPSHOT.jar
The system is working fine. I can commit a change in Eclipse and, in a few minutes, a new version of the Docker container is spun up on the delivery system, ready for use.
The issue I have now is setting up the version numbers.
Any guidance in this area would be greatly appreciated.
TIA
I would recommend using the fabric8io docker-maven-plugin to create the Docker image with Maven. This plugin allows you to build, run, and push Docker images.
In particular, you can set up the name field in the plugin configuration to be:
<name>workserver:%l</name>
The %l will be resolved to the maven project version, which is the same as the jar version. You can run the plugin explicitly using:
mvn io.fabric8:docker-maven-plugin:build
Or you can set the packaging in the pom to be:
<packaging>docker-build</packaging>
This will build the image whenever you run mvn package, and will push the image on mvn deploy.
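A minimal sketch of the corresponding plugin configuration in the pom.xml (the plugin version and Dockerfile location are assumptions):

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.40.2</version>
  <configuration>
    <images>
      <image>
        <!-- %l is resolved from the project version -->
        <name>workserver:%l</name>
        <build>
          <dockerFileDir>${project.basedir}</dockerFileDir>
        </build>
      </image>
    </images>
  </configuration>
</plugin>

With this in place, bumping the version in the pom.xml flows through to the image tag automatically.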
I am trying to find a solution to the following puzzle. I have Java projects, managed by Maven, which need some native dependencies to work (to run unit and integration tests). Those are provided in the form of .deb packages, which need to be installed prior to running a build.
I use Jenkins for CI. The native dependencies cannot be installed on the Jenkins nodes, because they conflict with other builds and can change often. What I do now is create a Jenkins job of the 'freestyle' type rather than 'maven', and use pbuilder to create a clean sandbox, install everything necessary, and invoke the Maven build.
This is working great, but I am losing the Jenkins Maven goodies like automatic upstream projects, triggering a build when a dependency changes, etc. Jenkins simply does not know that Maven is there.
Finally, my question: is there a way to achieve both, i.e. isolate the build so the installed libraries do not affect other builds, and still leverage Jenkins's 'magic' applied to Maven builds and their dependencies?
You could split your build into three jobs, each triggering the next:
Create needed environment
Run maven job
Clean Up
Even a Freestyle job has an "Invoke top-level Maven targets" build step. You could use that to get the "maven goodies" while also having the ability to run other build steps.
There is an option to "use private Maven repository", which makes sure the job uses a .m2/repository folder located relative to the workspace. If you need to separate the work into multiple jobs, you can use a custom/shared workspace between those jobs.
Even in a Maven-style job, there is an option to use a private repository, so that one job does not affect another.
The problem can be solved by using distributed Jenkins builds. Slave agents can be configured to provision a clean environment (e.g. via VMs, Docker, ...) for each build and tear it down after the build is done. This way the Jenkins job can be of the Maven type, and any changes made by a pre-build step will not affect other builds.
More information can be found in the Jenkins documentation on distributed builds.
Consider Docker. It lets you run processes in isolated environments, just as you want, and it integrates easily with Jenkins.
As a benefit, you can also use that Docker container to run local builds in the same environment they run in on Jenkins.
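A hedged sketch of such a build image with the native .deb dependencies baked in (the image tag and package file name are placeholders):

FROM maven:3-jdk-8
# Install the native .deb packages the tests need;
# native-lib.deb stands in for the real dependencies
COPY native-lib.deb /tmp/
RUN apt-get update && \
    apt-get install -y /tmp/native-lib.deb && \
    rm -rf /var/lib/apt/lists/*

A Jenkins build step can then run Maven inside a fresh container, for example:

docker run --rm -v "$PWD":/src -v "$HOME/.m2":/root/.m2 -w /src build-image mvn verify

Each build gets its own container, so the installed libraries never leak into other builds on the node.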