Is it possible to download gradle dependencies using only build.gradle file?
What I am trying to accomplish is the following:
I have a set of unit tests and I want to execute them (as part of the CI process) inside a Docker container. Initially, I used the openjdk:8-jdk image from Docker Hub as the base image for my tests. So, the docker-compose file contains the following:
version: '3.2'
services:
  unit:
    image: openjdk:8-jdk
    volumes:
      - ..:/usr/test
    working_dir: /usr/test
    command: sh -c "exec ./gradlew junitPlatformTest -Punit -p moduleA/"
The whole project is mounted at /usr/test inside the container. When the container starts, it executes the junitPlatformTest task against moduleA. The problem with the openjdk:8-jdk image is that Gradle and its dependencies are downloaded every time I run the unit service.
To solve this, I decided to create a new image which would have gradle and my project dependencies already downloaded. The Dockerfile is the following:
FROM openjdk:8-jdk
COPY . /usr/test
WORKDIR /usr/test
RUN apt-get update && apt-get -y install wget unzip
RUN wget https://services.gradle.org/distributions/gradle-4.1-bin.zip
RUN mkdir /opt/gradle
RUN unzip -d /opt/gradle gradle-4.1-bin.zip
RUN /opt/gradle/gradle-4.1/bin/gradle dependencies
The build.gradle file is located in the same folder as the Dockerfile, so the command COPY . /usr/test copies it into the working directory.
However, executing the gradle dependencies command does not download the libraries. After building the image, running a container, and entering it (with docker exec), it seems that the ~/.gradle/caches/modules-2/files-2.1/ directory contains only POM files, not JARs.
I'm not sure whether gradle dependencies is the correct command. Any suggestions?
EDIT - Gradle file
apply plugin: 'java'

sourceCompatibility = 1.8

repositories {
    jcenter()
}

ext.versions = new Properties()
file("./versions.properties").withInputStream {
    stream -> ext.versions.load(stream)
}

dependencies {
    testCompile("org.junit.jupiter:junit-jupiter-api:$versions.junitJupiterVersion")
    testCompile("org.junit.jupiter:junit-jupiter-engine:$versions.junitJupiterVersion")
    testCompile("org.junit.jupiter:junit-jupiter-params:$versions.junitJupiterVersion")
    testCompile("org.mockito:mockito-core:$versions.mockitoCore")
    testCompile("org.junit.platform:junit-platform-launcher:1.0.0-RC3")
    compile("com.google.inject:guice:$versions.guice")
    ....
}
Add the following task to your Gradle file:
task download(type: Exec) {
    // Accessing the configuration's files forces Gradle to resolve
    // (and therefore download) the testCompile dependencies.
    configurations.testCompile.files
    commandLine 'echo', 'Downloaded all dependencies'
}
Also change
RUN /opt/gradle/gradle-4.1/bin/gradle dependencies
to
RUN /opt/gradle/gradle-4.1/bin/gradle
This will cache all dependencies in ~/.gradle when you run the gradle command. It will download the JARs as well, not just the POMs. And if you want to cache even further, you can use a named volume for the Gradle folder:
version: '3.2'
services:
  unit:
    image: openjdk:8-jdk
    volumes:
      - ..:/usr/test
      - gradlecache:/root/.gradle/
    working_dir: /usr/test
    command: sh -c "exec ./gradlew junitPlatformTest -Punit -p moduleA/"
volumes:
  gradlecache: {}
Your idea is right but the approach is wrong. First of all, there is no need to download and install Gradle manually; that's exactly what the Gradle Wrapper does for you. Second of all, there is no need to hack Gradle to force-download dependencies - it doesn't make any sense. Just check out/clone your test project from inside the container and run your tests with ./gradlew clean test. There is also no need for a local /usr/test directory, which flies in the face of CI, because it uses a relative location and hence only works when you've laid out files in an exact manner.
Edit:
If you don't want to download Gradle or the dependencies for every build, you can start the container with the $HOME/.m2 directory volume-mapped to the host, so that dependencies downloaded once stay in the Maven local cache. To avoid downloading Gradle itself, you can build your own Docker image with Gradle in it.
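For example, a minimal sketch of such a run (the paths are assumptions; for Gradle the dependency cache lives under $HOME/.gradle, the analogue of Maven's $HOME/.m2):
docker run --rm \
    -v "$HOME/.gradle":/root/.gradle \
    -v "$(pwd)":/usr/test \
    -w /usr/test \
    openjdk:8-jdk ./gradlew clean test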
I created the images locally and created a repository on Docker Hub called testName/backend.
The docker images command shows the created images.
REPOSITORY TAG IMAGE ID CREATED SIZE
testName/backend 0.0.1-SNAPSHOT 10fc47e065ff 25 minutes ago 459MB
backend 0.0.1-SNAPSHOT 10fc47e065ff 25 minutes ago 459MB
backend latest 2f5cc7be2b5d 41 minutes ago 479MB
alpine/git latest 22a2874e1112 3 weeks ago 39.5MB
hello-world latest feb5d9fea6a5 10 months ago 13.3kB
And I would like to run the container with a command in the format docker run yourImage:version. So I think I should call a command like: docker run 10fc47e065ff:0.0.1-SNAPSHOT
However, I get this message: Unable to find image '10fc47e065ff:0.0.1-SNAPSHOT' locally
docker: error response from daemon: pull access denied for 10fc47e065ff, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
I will add that I am logged in.
What am I doing wrong that I can't get the container to work?
By the way, I have a question about Docker in the context of the application. Generally, this image contains a Spring Boot + Angular application; I am packaging it into a container, and Spring + Angular are supposed to work together. The question I have is whether I will have to create a new image every time I modify the application code, or whether, if I run the application using the docker run command, the image will be automatically overwritten so that every time I run the application I get the latest version of it.
Dockerfile
FROM eclipse-temurin:17-jdk-alpine
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} backend.jar
ENTRYPOINT ["java","-jar","/backend-0.0.1-SNAPSHOT.jar"]
build.gradle
plugins {
    id 'org.springframework.boot' version '2.7.2'
    id 'io.spring.dependency-management' version '1.0.12.RELEASE'
    id 'java'
    id "com.palantir.docker" version "0.34.0"
    id "com.palantir.docker-run" version "0.34.0"
}

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '17'

configurations {
    compileOnly {
        extendsFrom annotationProcessor
    }
}

repositories {
    mavenCentral()
}

docker {
    name "${project.name}:${project.version}"
    dockerfile file('Dockerfile')
    copySpec.from(jar).rename(".*", "backend.jar")
    buildArgs(['JAR_FILE': "backend.jar"])
    files 'backend.jar'
    tag 'DockerHub', "testName/backend:${project.version}"
}

dockerRun {
    name "${project.name}"
    image "${project.name}:${project.version}"
    ports '8081:8081'
    clean true
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    compileOnly 'org.projectlombok:lombok'
    developmentOnly 'org.springframework.boot:spring-boot-devtools'
    annotationProcessor 'org.projectlombok:lombok'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

tasks.named('test') {
    useJUnitPlatform()
}
You are not using the correct image name. The image ID (10fc47e065ff) is a unique identifier, so you could use that and not need to refer to a tag at all, or else you use the image name (the repository, as shown in the list) in combination with the tag to reference a unique image, e.g. testName/backend:0.0.1-SNAPSHOT.
If your application code changes, you will need to build a new image. So if you needed to rebuild the Spring Boot application, you would also rebuild the image; this produces a new image with a new image ID. The image ID is unique to each image build, but you can name/rename and label/relabel any image at any time. The image ID is a unique identifier, but the name/tag combination is not. Typically you might use the tag latest to always get the most recently built image. Alternatively, if you want to know exactly what image you deploy/run, you would use a specific image name and tag that you have assigned to the image.
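In other words, either of these should work here (the first uses the image ID straight from the docker images output above):
docker run -d 10fc47e065ff
docker run -d testName/backend:0.0.1-SNAPSHOT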
Try using docker run -d backend:0.0.1-SNAPSHOT.
This should help you:
docker run -d testName/backend:0.0.1-SNAPSHOT
As for your second question... Yes, you have to rebuild the image every time you change the code. If you run docker build and an image with the same repository name and tag already exists locally, then the name and tag of the previous image will be set to <none>.
It looks like you should modify your Dockerfile as follows:
FROM eclipse-temurin:17-jdk-alpine
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} backend.jar
ENTRYPOINT ["java", "-jar", "backend.jar"]
I am not sure that you have to use the image ID. Do docker ps -a and try with docker run [container_id].
So I am learning about CI and pipelines and all that stuff, and I just added my YAML file to my repository. It built successfully and tested successfully, but when I made a change in the code (I removed a semicolon) it still said the build was successful, while it should have reported an error. I have no idea if my YAML file is not good, or what it is.
I suspect my YAML file is not as it should be, but I made another post about it and got no reaction.
My Yaml file:
My changed code:
Obviously, the code should produce an error since the semicolon is no longer there.
I also build this project with Gradle, and the YAML file there is:
build:
  script:
    - ./gradlew assemble

test:
  stage: test
  script:
    - ./gradlew test
  after_script:
    - echo "End CI"
This code works like it should (on the other project), so I searched for the Maven version of that code but could not find it.
Can anyone help me to confirm that the problem is in the yaml file and help me to what the yaml file should be (if that's the problem)?
Thanks!
Is the pipeline posted above your complete pipeline for your Maven build? Your pipeline does nothing except echo outputs.
If you need to build with Maven, you would need something like this:
image: maven:latest

stages:
  - build
  - test

build_a:
  stage: build
  script:
    - mvn clean install -DskipTests --batch-mode --errors --fail-at-end --show-version

build_b:
  stage: test
  script:
    - mvn test --batch-mode --errors --fail-at-end --show-version
Stages for the gitlab-ci.yml file:
Stage => Build_image, job: build_image => which will also build the project
Stage => Quality_Check, jobs: test_junit, lint_pmd, test_jacoco
Stage => Deploy, job: deploy_services
The Dockerfile needs to be built in such a way that commands like mvn test, lint, jacoco coverage, etc. can also be run from the built image.
I need to make this for a Java project which uses Maven to download dependencies.
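A minimal sketch of what such a pipeline could look like, assuming the image is built from your Dockerfile and the standard maven-pmd-plugin and jacoco-maven-plugin are configured in the project (the registry path and the exact goals are assumptions):
stages:
  - Build_image
  - Quality_Check
  - Deploy

build_image:
  stage: Build_image
  # assumes a runner that can run docker (e.g. via a docker:dind service)
  script:
    - docker build -t registry.example.com/myproject:latest .
    - docker push registry.example.com/myproject:latest

test_junit:
  stage: Quality_Check
  image: registry.example.com/myproject:latest
  script:
    - mvn test

lint_pmd:
  stage: Quality_Check
  image: registry.example.com/myproject:latest
  script:
    - mvn pmd:check

test_jacoco:
  stage: Quality_Check
  image: registry.example.com/myproject:latest
  script:
    - mvn verify jacoco:report

deploy_services:
  stage: Deploy
  script:
    - echo "deploy services here"  # replace with your actual deployment command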
On a new environment, a Gradle build takes quite a while because all dependencies have to be downloaded.
Is there a way to only download dependencies, in order to speed up the following build?
That way we could, for example, pre-fill a CI build environment.
Edit: Updated for Gradle 6+.
Some notes:
This new approach downloads jars into a folder, and then deletes the folder. So the result of having the jars in the Gradle cache is a side-effect.
It currently uses jars configured for the main source-set but could be generalized.
Even though it is neither efficient nor elegant, it can be useful if you actually want the jars (and transitive dependencies): simply comment-out the deletion of the runtime folder.
Consider this build.gradle (as an arbitrary, concrete example):
apply plugin: 'java'

dependencies {
    implementation 'org.apache.commons:commons-io:1.3.2'
    implementation 'org.kie.modules:org-apache-commons-lang3:6.2.0.Beta2'
}

repositories {
    jcenter()
}

task getDeps(type: Copy) {
    from sourceSets.main.runtimeClasspath
    into 'runtime/'

    doFirst {
        ant.delete(dir: 'runtime')
        ant.mkdir(dir: 'runtime')
    }

    doLast {
        ant.delete(dir: 'runtime')
    }
}
Example run:
$ find /Users/measter/.gradle/caches -name "commons-io*1.3.2.jar"
$ gradle getDeps
$ find /Users/measter/.gradle/caches -name "commons-io*1.3.2.jar"
/Users/measter/.gradle/caches/modules-2/files-2.1/commons-io/commons-io/1.3.2/[snip]/commons-io-1.3.2.jar
I've found ./gradlew dependencies (as suggested by this user) to be very handy for Docker builds.
You can create a custom task that resolves all the configurations (in doing so, it will also download the dependencies without building the project):
task downloadDependencies {
    doLast {
        configurations.findAll { it.canBeResolved }.each { it.resolve() }
    }
}
Run it with: ./gradlew downloadDependencies
My answer will favor the Gradle plugins and built-in tasks.
I would use gradle assemble on the command line.
It is a lighter version of gradle build: it assembles the outputs without running checks or tests.
This way, you may reduce the time of your preparations before running or building anything.
Check the link below for the documentation:
https://docs.gradle.org/current/userguide/java_plugin.html#lifecycle_tasks
In general, this is my recipe when I clone a new repository:
- gradle assemble
- do some coding
- gradle run (and basically test until done)
- gradle build (to make distributable files)
Note: this last step may need additional configuration for .jar files as outputs (depends on you).
I am trying to run DynamoDB Local for testing purposes. I followed the steps Amazon provides for setting it up, and running the JAR by itself works fine (link to Amazon's tutorial here). However, the tutorial doesn't go over running the JAR within your own project. I don't want all the other developers to have to grab a JAR and run it locally every time they test their code.
That is where my question comes in. I've had a really hard time finding any examples online of how to configure a Gradle project to run the DynamoDB Local server as part of my tests. I found the following Maven example https://github.com/awslabs/aws-dynamodb-examples/blob/master/src/test/java/com/amazonaws/services/dynamodbv2/DynamoDBLocalFixture.java#L32 and am trying to convert it to Gradle, but I am getting errors for all of the com.amazonaws.services.dynamodbv2.local import statements they are using. The errors say the resource cannot be found.
I went into their project's pom and put the following into my build.gradle file to emulate it.
//dynamodb local dependencies
testCompile('com.amazonaws:aws-java-sdk-dynamodb:1.10.42')
testCompile('com.amazonaws:aws-java-sdk-cloudwatch:1.10.42')
testCompile('com.amazonaws:aws-java-sdk:1.3.0')
testCompile('com.amazonaws:amazon-kinesis-client:1.6.1')
testCompile('com.amazonaws:amazon-kinesis-connectors:1.1.1')
testCompile('com.amazonaws:dynamodb-streams-kinesis-adapter:1.0.2')
testCompile('com.amazonaws:DynamoDBLocal:1.10.5.1')
The import statements still fail. Here is an example of one that fails.
import com.amazonaws.services.dynamodbv2.local.embedded.DynamoDBEmbedded;
TL;DR: Has anyone managed to get the DynamoDB Local JAR to execute as part of a Gradle project, or does anyone have a link to a good tutorial (it doesn't have to be the one I linked to)?
We have DynamoDB Local working with Gradle. Here's what you need to add to your build.gradle file:
For Gradle 4.x and below:
1) Add to the repositories section:
maven {
    url 'http://dynamodb-local.s3-website-us-west-2.amazonaws.com/release'
}
2) Add to the dependencies section (assuming you're using this for your tests):
testCompile group: 'com.amazonaws', name: 'DynamoDBLocal', version: '1.11.0'
3) These next two steps are the tricky part. First copy the native files to a directory:
task copyNativeDeps(type: Copy) {
    from(configurations.testCompile) {
        include "*.dylib"
        include "*.so"
        include "*.dll"
    }
    into 'build/libs'
}
4) Then make sure you include this directory (build/libs in our case) in the java library path like so:
test.dependsOn copyNativeDeps
test.doFirst {
    systemProperty "java.library.path", 'build/libs'
}
Now you should be able to run ./gradlew test and have your tests hit your local DynamoDB.
For Gradle 5.x, the solution below works:
maven {
    url 'http://dynamodb-local.s3-website-us-west-2.amazonaws.com/release'
}

configurations {
    dynamodb
}

dependencies {
    testImplementation 'com.amazonaws:DynamoDBLocal:1.11.477'
    dynamodb fileTree(dir: 'lib', include: ["*.dylib", "*.so", "*.dll"])
    dynamodb 'com.amazonaws:DynamoDBLocal:1.11.477'
}

task copyNativeDeps(type: Copy) {
    from configurations.dynamodb
    into "$project.buildDir/libs/"
}

test.dependsOn copyNativeDeps
test.doFirst {
    systemProperty "java.library.path", 'build/libs'
}
I ran into the same problem, and first I tried to add sqlite4java.library.path to the Gradle script, as mentioned in the other comments.
This worked from the command line but not when running the tests from the IDE (IntelliJ IDEA), so finally I came up with a simple init method that is called at the beginning of each integration test:
AwsDynamoDbLocalTestUtils.initSqLite();
AmazonDynamoDBLocal amazonDynamoDBLocal = DynamoDBEmbedded.create();
Implementation can be found here: https://github.com/redskap/aws-dynamodb-java-example-local-testing/blob/master/src/test/java/io/redskap/java/aws/dynamodb/example/local/testing/AwsDynamoDbLocalTestUtils.java
I put a whole example to GitHub, it might be helpful: https://github.com/redskap/aws-dynamodb-java-example-local-testing
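The core of such an init method is just pointing sqlite4java at the native libraries before the embedded server starts. A much-simplified sketch (the real utility class linked above does more work, locating the native libraries dynamically; the "build/libs" path is an assumption, point it at wherever your build copies the .dylib/.so/.dll files):
public final class AwsDynamoDbLocalTestUtils {

    private AwsDynamoDbLocalTestUtils() {
    }

    // sqlite4java reads this system property to find its native libraries;
    // it must be set before DynamoDBEmbedded.create() is first called.
    public static void initSqLite() {
        System.setProperty("sqlite4java.library.path", "build/libs");
    }
}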
In August 2018, Amazon announced a new Docker image with Amazon DynamoDB Local on board. It does not require downloading and running any JARs, nor using third-party OS-specific binaries like sqlite4java.
It is as simple as starting a Docker container before the tests:
docker run -p 8000:8000 amazon/dynamodb-local
You can do that manually for local development, as described above, or use it in your CI pipeline. Many CI services provide the ability to start additional containers during the pipeline that can provide dependencies for your tests. Here is an example for GitLab CI/CD:
test:
  stage: test
  image: openjdk:8-alpine
  services:
    - name: amazon/dynamodb-local
      alias: dynamodb-local
  script:
    - ./gradlew clean test
So, during the test task DynamoDB will be available on http://dynamodb-local:8000.
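In the tests themselves, the AWS SDK client just needs to be pointed at that endpoint. A minimal sketch using the v1 SDK (the class name is mine; the region value is arbitrary, since DynamoDB Local accepts any region):
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class LocalDynamoDbClientFactory {
    // Builds a client against the service container started by the CI job.
    public static AmazonDynamoDB create() {
        return AmazonDynamoDBClientBuilder.standard()
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration(
                                "http://dynamodb-local:8000", "us-east-1"))
                .build();
    }
}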
Another, more powerful tool is localstack. It supports about two dozen AWS services, DynamoDB being one of them. The usage is very similar: you have to start it before running the tests, and it will expose AWS-compatible APIs on the given ports:
test:
  stage: test
  image: openjdk:8-alpine
  services:
    - name: localstack/localstack
      alias: localstack
  script:
    - ./gradlew clean test
The idea is to move all the configuration out of your build tool and tests and provide the dependency externally. Think of it as dependency injection / IoC, but for the whole service, not just a single bean. This way, your code is cleaner and more maintainable. You can see that even in the examples above: you can switch the mock implementation from DynamoDB Local to localstack by simply changing the image!
The easiest way, in my opinion, is to:
Download the JAR from here:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html#DynamoDBLocal.DownloadingAndRunning
Then unzip the downloaded archive and add its content to the /libs folder in the project (creating the /libs folder first).
Finally, add to the build.gradle:
dependencies {
    runtime files('libs/DynamoDBLocal.jar')
}
I didn't want to create a specific configuration for Dynamo for Gradle 6+, so I tweaked the original answer's instructions. Also, this is in the Kotlin Gradle DSL rather than Groovy.
val copyNativeDeps by tasks.creating(Copy::class) {
    from(configurations.testRuntimeClasspath) {
        include("*.dylib")
        include("*.so")
        include("*.dll")
    }
    into("$buildDir/libs")
}

tasks.withType<Test> {
    dependsOn.add(copyNativeDeps)
    doFirst { systemProperty("java.library.path", "$buildDir/libs") }
}
By leveraging the testRuntimeClasspath configuration, Gradle is able to locate the relevant files for you without needing a custom configuration. Obviously this has the side effect that if your test runtime has many native deps, they will also be copied, which would make the custom-configuration approach more suitable.