How do I run a Docker container - java

I created the images locally and created one repository in DockerHub called testName/backend.
The docker images command shows the created images.
REPOSITORY         TAG              IMAGE ID       CREATED          SIZE
testName/backend   0.0.1-SNAPSHOT   10fc47e065ff   25 minutes ago   459MB
backend            0.0.1-SNAPSHOT   10fc47e065ff   25 minutes ago   459MB
backend            latest           2f5cc7be2b5d   41 minutes ago   479MB
alpine/git         latest           22a2874e1112   3 weeks ago      39.5MB
hello-world        latest           feb5d9fea6a5   10 months ago    13.3kB
I would like to run the container with a command in the format docker run yourImage:version, so I think I should call a command like this: docker run 10fc47e065ff:0.0.1-SNAPSHOT
However, I get this message: Unable to find image '10fc47e065ff:0.0.1-SNAPSHOT' locally
docker: error response from daemon: pull access denied for 10fc47e065ff, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
I will add that I am logged in.
What am I doing wrong that I can't get the container to work?
By the way, I have a question about Docker in the context of the application. This image contains a Spring Boot + Angular application; I am packaging it into a container, and Spring and Angular are supposed to work together. My question is whether I will have to create a new image every time I modify the application code, or whether, if I run the application with the docker run command, the image will be overwritten automatically so that every run gives me the latest version of the application.
Dockerfile
FROM eclipse-temurin:17-jdk-alpine
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} backend.jar
ENTRYPOINT ["java","-jar","/backend-0.0.1-SNAPSHOT.jar"]
build.gradle
plugins {
    id 'org.springframework.boot' version '2.7.2'
    id 'io.spring.dependency-management' version '1.0.12.RELEASE'
    id 'java'
    id "com.palantir.docker" version "0.34.0"
    id "com.palantir.docker-run" version "0.34.0"
}

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '17'

configurations {
    compileOnly {
        extendsFrom annotationProcessor
    }
}

repositories {
    mavenCentral()
}

docker {
    name "${project.name}:${project.version}"
    dockerfile file('Dockerfile')
    copySpec.from(jar).rename(".*", "backend.jar")
    buildArgs(['JAR_FILE': "backend.jar"])
    files 'backend.jar'
    tag 'DockerHub', "testName/backend:${project.version}"
}

dockerRun {
    name "${project.name}"
    image "${project.name}:${project.version}"
    ports '8081:8081'
    clean true
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    compileOnly 'org.projectlombok:lombok'
    developmentOnly 'org.springframework.boot:spring-boot-devtools'
    annotationProcessor 'org.projectlombok:lombok'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

tasks.named('test') {
    useJUnitPlatform()
}

You are not using the correct image name. The image ID (10fc47e065ff) is a unique identifier, so you can use that on its own without referring to a tag at all, or you can use the image name (the repository column in the list) combined with the tag to reference a unique image, e.g. testName/backend:0.0.1-SNAPSHOT.
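For example, given the image list above, either of these should work:
docker run 10fc47e065ff
docker run testName/backend:0.0.1-SNAPSHOT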
If your application code changes, you will need to build a new image. So if you rebuild the Spring Boot application, you would also rebuild the image - this produces a new image with a new image ID. The image ID is unique to each image build, but you can name/rename and label/relabel any image at any time; the name/tag combination is not unique. Typically you might use the tag latest to always get the most recently built image. Alternatively, if you want to know exactly which image you are deploying/running, use a specific image name and tag that you have assigned to it.
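As a sketch, rebuilding after a code change using the names from this question (the plain docker build here stands in for the Palantir plugin's docker task):
./gradlew build
docker build -t backend:0.0.1-SNAPSHOT -t testName/backend:0.0.1-SNAPSHOT .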

Try: docker run -d backend:0.0.1-SNAPSHOT

This should help you:
docker run -d testName/backend:0.0.1-SNAPSHOT
As for your second question... yes, you have to rebuild the image every time you change the code. If you run docker build and an image with the same repository name and tag already exists locally, the name and tag of the previous image will be set to <none>.
It looks like you should also modify your Dockerfile as follows, since COPY places the jar at /backend.jar but your ENTRYPOINT references /backend-0.0.1-SNAPSHOT.jar:
FROM eclipse-temurin:17-jdk-alpine
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} backend.jar
ENTRYPOINT ["java", "-jar", "backend.jar"]

I am not sure that you have to use the image ID. Run docker ps -a and, for an existing container, try docker start [container_id] (note that docker run expects an image, not a container ID).

Related

How to deploy 100s of jars in JFrog Artifactory

I was in the process of converting a Java project's build system from Ant to Maven, and there are literally 700+ dependency jar files lying in a folder without any version or package information.
I was able to figure out Maven coordinates for 400+ of those jar files using their hashes. For the remaining 300+ jar files, I am thinking of uploading them directly to a local repo in Artifactory and then generating Maven coordinates automatically.
As far as I have explored, the only way to achieve this is to deploy/upload every jar file manually via the Artifactory UI with the Deploy as Maven Artifact option enabled to generate coordinates automatically, but this is a very time-consuming process (I want to do this for 300+ files).
Is there any more efficient way to do it?
I see two ways to achieve what you want; unfortunately, neither is available "out of the box"...
1) Use the command line client to upload each JAR file to Artifactory. The main command to upload is:
jfrog rt upload foo.jar maven-local-repo
See https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory for more details.
Use a bash script to loop over the JAR files and, for each file, upload it to a separate location, generate a short POM (from a sample POM and some sed to replace groupId and artifactId with the filename), and upload it next to the JAR file - see the sketch after this list.
2) As Artifactory provides this option in its web app, create a Selenium client that loops over each JAR file, connects to the Artifactory UI, and uploads each file using the "Generate default POM" option.
See https://www.jfrog.com/confluence/display/JFROG/Deploying+Artifacts#DeployingArtifacts-DeployingMavenArtifacts
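A minimal sketch of the bash loop from option 1, assuming a pom-template.xml with @GROUP_ID@, @ARTIFACT_ID@, and @VERSION@ placeholders; the repo name, groupId, and version below are placeholders too, not anything Artifactory mandates:

#!/usr/bin/env bash
REPO=maven-local-repo            # assumed target repository
GROUP=org.example.thirdparty     # assumed groupId for all files
VERSION=1.0                      # assumed version for all files

for jar in *.jar; do
    base="${jar%.jar}"
    # generate a short POM from the template, using the filename as artifactId
    sed -e "s/@GROUP_ID@/$GROUP/" \
        -e "s/@ARTIFACT_ID@/$base/" \
        -e "s/@VERSION@/$VERSION/" pom-template.xml > "$base.pom"
    # upload the JAR and its POM next to each other in Maven repo layout
    path="${GROUP//.//}/$base/$VERSION"
    jfrog rt upload "$jar" "$REPO/$path/$base-$VERSION.jar"
    jfrog rt upload "$base.pom" "$REPO/$path/$base-$VERSION.pom"
done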
I'm sure you already figured out a solution, but for anyone else who has to do something similar, I ended up just using Gradle to do it for me. I created a bare Gradle project with the following build.gradle. It collects all the jars from the specified directory, loops through them, and creates a publication for each. We wanted to use the sub-folder structure as the groupId, so there's a bit of logic to format that in there.
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath "org.jfrog.buildinfo:build-info-extractor-gradle:4.21.0"
    }
}

apply plugin: 'java'
apply plugin: 'maven-publish'
apply plugin: "com.jfrog.artifactory"

version = '0.2021.0'
ext.thirdPartyLib = fileTree(dir: "$rootDir/../extrajars", include: ['**/*.jar'])

publishing {
    publications {
        thirdPartyLib.each { jar ->
            def fbase = jar.name.minus(".jar")
            "$fbase"(MavenPublication) {
                artifact jar
                artifactId fbase
                // the following was to use the folder structure as the groupId
                def path = jar.path.minus("\\" + jar.name)
                path = path.replaceAll("\\\\", ".")
                path = path.replaceAll("c:/pathtoDirectory", "")
                groupId = path
            }
        }
    }
}

artifactory {
    contextUrl = 'http://yourArtifactoryUrl'
    publish {
        repository {
            repoKey = 'yourRepo'
            username = 'username'
            password = 'password'
        }
        defaults {
            thirdPartyLib.each { jar ->
                def fbase = jar.name.minus(".jar")
                publications(fbase)
            }
        }
    }
}
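With this in place, uploading everything should be a matter of running the Artifactory plugin's publish task (assuming the plugin's standard task name):
./gradlew artifactoryPublish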

How to change the working directory using JavaFX with Gradle?

The title says it all. How do I change the working/runtime directory when using JavaFX with Gradle in Eclipse?
Basically, I have a project that requires log4j and initializes a basic logger which uses the logs/ directory relative to where the jar is run. This directory is being created in the root of the source tree, but I want it to be created in the run/ directory. I'm assuming other files that will be created will have this same issue.
My build.gradle is this:
// Plugins
plugins {
    id 'application'
    id 'org.openjfx.javafxplugin' version '0.0.7'
}

// Repositories
repositories {
    jcenter()
    mavenCentral()
}

// Dependencies
dependencies {
    implementation 'com.google.code.gson:gson:2.8.5'
    implementation 'org.apache.logging.log4j:log4j-core:2.12.0'
}

// JavaFX
javafx {
    version = '12'
    modules = ['javafx.controls']
}

mainClassName = 'net.protolauncher.backtest2.ProtoLauncher'
I am using Eclipse to run it, but this issue also occurs when just running the run task. I tried changing the Working Directory in the "Gradle Project" run configuration, but it didn't work at all (it just loaded forever).
To give an example, here's the directory of my source code: DirectoryX. I made a folder in it called run, like so: DirectoryX/run. When I run the program, I want my logs to go into DirectoryX/run/logs and similar files to go into the run directory. However, when running with Gradle, my log files are created in DirectoryX/logs.
This probably made no sense, but if it did, I'd really appreciate any help I can get.
After hours of searching online to no avail, I finally found a StackOverflow answer that solves the question. It turns out JavaExec is a complicated thing, and what I was doing was specific to that, NOT JavaFX.
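For reference, a minimal sketch of that kind of fix, assuming the application plugin's standard JavaExec run task (the directory name run is this question's convention, not anything Gradle mandates):

run {
    workingDir = file("$projectDir/run")
    doFirst { workingDir.mkdirs() } // make sure the directory exists before launch
}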

How to create a debian distro for a java project using gradle

I see someone has asked this question before. I would have loved to have seen the answer, but it was removed. At the risk of getting down-voted like that post..., I really need help with this, as I've spent a few days on it already and I'm thoroughly at a loss...
I have a java project that's fairly mature. We're preparing to go from alpha phase into a beta release. As a part of that, we want to release installable packages with a proper app with an icon, etc. Creating a (dmg) package for distribution on Mac was extremely easy using the macAppBundle gradle plugin and it works beautifully. I'm now attempting to address distribution on Linux. Ideally, the setupbuilder plugin would be the way to go, but there's a bug that's preventing me from creating a .deb or .rpm package. I submitted the bug to the developer and am currently trying to work around the issue by following this blog post.
I am running an Ubuntu 16.04.3 VM in VirtualBox on my Mac, and I can successfully create a working executable by running gradle debianPrepareappname. But when I try to run gradle debian to create the .deb file, the build always fails, currently with this error:
Process 'command 'debuild'' finished with non-zero exit value 255
When I run debuild manually, I see the following:
debuild: fatal error at line 679:
found debian/changelog in directory
/home/username/appname/build/debian/appname
but there's no debian/rules there! Are you in the source code tree?
No rules file is getting created by gradle. I know that the rules file is basically a makefile... and I'm not very familiar with makefiles in general, let alone creating .deb distros. I know makefiles do compilations and copy files to places in the system, but I don't know what needs to be done to create a .deb file or where things need to go. I mean, the necessary components are there and they work:
appname/build/debian/appname/debian/{bin,lib}
The bin has the working executable and the lib has all the necessary jar files. I just don't know what I need to do in the gradle build script to create the .deb file. Here's what I've got in the gradle build file (I've omitted the macAppBundle and setupbuilder stuff that's just vestigial in there right now, just to keep it simple):
apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'application'

defaultTasks "clean", "fatJar", "eclipse"

version = getVersionName()
sourceCompatibility = 1.7
targetCompatibility = 1.7

repositories {
    mavenCentral()
}

dependencies {
    compile 'com.miglayout:miglayout-swing:5.0'
    compile 'com.googlecode.plist:dd-plist:1.3'
    compile 'org.freehep:freehep-graphicsio:2.4'
    compile 'org.freehep:freehep-graphicsio-pdf:2.4'
    compile 'org.freehep:freehep-graphicsio-ps:2.4'
    compile 'org.freehep:freehep-graphicsio-svg:2.4'
    compile 'org.freehep:freehep-graphics2d:2.4'
    compile 'org.swinglabs.swingx:swingx-autocomplete:1.6.5-1'
}

sourceSets {
    main {
        java {
            srcDir 'src/main/java/'
        }
    }
}

task fatJar(type: Jar) {
    manifest {
        attributes 'Main-Class': 'com.placeholder.appname'
    }
    baseName = project.name + '-all'
    from { configurations.compile.collect { it.isDirectory() ? it : zipTree(it) } }
    with jar
}

def getVersionName() {
    def stdout = new ByteArrayOutputStream()
    exec {
        commandLine 'git', 'rev-parse', '--short', 'HEAD'
        standardOutput = stdout
    }
    return stdout.toString().trim()
}

String applicationVersionFull = getVersionName()

task debianClean(type: Delete) {
    delete 'build/debian'
}

tasks.addRule("Pattern: debianPrepare<distribution>") { String taskName ->
    if (taskName.startsWith("debianPrepare")) {
        task(taskName, dependsOn: [installDist, debianClean]) {
            String debianDistribution = (taskName - "debianPrepare").toLowerCase()
            String debianApplicationVersionFull = getVersionName()
            doLast {
                copy {
                    from rootProject.files("build/install/appname")
                    into rootProject.file("build/debian/appname")
                }
                copy {
                    from rootProject.files("gradle/debian/debian")
                    into rootProject.file("build/debian/appname/debian")
                }
            }
        }
    }
}

task debian { // depends on debianPrepare*
    doLast {
        exec {
            workingDir rootProject.file("build/debian/appname")
            commandLine "debuild -i -us -uc -b".split()
        }
    }
}
Everything I've read says this is supposed to be really easy with gradle. The macAppBundle was definitely very easy - it was like 5 lines of code. I barely had to read anything to figure it out and it creates a dmg that has an executable with an icon and everything. I just copied & edited the example in the macAppBundle readme. setupbuilder looked similarly easy, if not for the bug I encountered. Is there a similar example out there for building .deb packages for java projects that doesn't use setupbuilder? I've tried a couple other plugins with no success. I've been googling and I can't find anything straightforward other than the blog post I mentioned. I eventually would like to apply an icon to the executable and other niceties, but first thing is to just get it to build. So why does the rules file not get created? That seems like a good place to start.
I think what you're missing is a "debian" directory with all the related files already present. If you look at syncany's repo https://github.com/syncany/syncany/tree/74c737d871d21dff5283edaac8c187a42c020b20/gradle/debian/debian on GitHub, from the blog post you mentioned, you'll see it has 8 files.
At the end of the day, debuild just bundles a set of files into an installer. They all have to be there to begin with. The scripts don't create any of these files; they just modify some, such as the changelog.
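For what it's worth, the rules file that debuild is complaining about is usually tiny: a minimal debian/rules is just a makefile that hands everything to debhelper (a sketch - the file must be executable, the dh line must be indented with a tab, and the other files such as control and changelog still have to exist):

#!/usr/bin/make -f
%:
	dh $@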

Download dependencies given only the build.gradle file

Is it possible to download gradle dependencies using only build.gradle file?
What I am trying to accomplish is the following:
I have a set of unit tests and I want to execute them (as part of CI process) inside a docker container. Initially, I used the openjdk:8-jdk image from docker hub as base image for my tests. So, docker compose file contains the following:
version: '3.2'
services:
  unit:
    image: openjdk:8-jdk
    volumes:
      - ..:/usr/test
    working_dir: /usr/test
    command: sh -c "exec ./gradlew junitPlatformTest -Punit -p moduleA/"
Whole project is mounted on /usr/test directory inside the container. When the container starts, it executes the junitPlatformTest task against moduleA. The problem with openjdk:8-jdk image is that gradle and its dependencies are downloaded every time I run the unit service.
To solve this, I decided to create a new image which would have gradle and my project dependencies already downloaded. The Dockerfile is the following:
FROM openjdk:8-jdk
COPY . /usr/test
WORKDIR /usr/test
RUN apt-get -y install wget unzip
RUN wget https://services.gradle.org/distributions/gradle-4.1-bin.zip
RUN mkdir /opt/gradle
RUN unzip -d /opt/gradle gradle-4.1-bin.zip
RUN /opt/gradle/gradle-4.1/bin/gradle dependencies
The build.gradle file is located in same folder as Dockerfile so the command COPY . /usr/test copies it in the working directory.
However, executing the gradle dependencies command does not download the libraries. After building the image, running a container, and entering it (with docker exec), it seems that the ~/.gradle/caches/modules-2/files-2.1/ directory contains only pom files, not jars.
I'm not sure if gradle dependencies is the correct command. Any suggestions?
EDIT - Gradle file
apply plugin: 'java'

sourceCompatibility = 1.8

repositories {
    jcenter()
}

ext.versions = new Properties()
file("./versions.properties").withInputStream { stream ->
    ext.versions.load(stream)
}

dependencies {
    testCompile("org.junit.jupiter:junit-jupiter-api:$versions.junitJupiterVersion")
    testCompile("org.junit.jupiter:junit-jupiter-engine:$versions.junitJupiterVersion")
    testCompile("org.junit.jupiter:junit-jupiter-params:$versions.junitJupiterVersion")
    testCompile("org.mockito:mockito-core:$versions.mockitoCore")
    testCompile("org.junit.platform:junit-platform-launcher:1.0.0-RC3")
    compile("com.google.inject:guice:$versions.guice")
    ....
}
Add the task below to your gradle file:
task download(type: Exec) {
    // touching the configuration's files forces Gradle to resolve
    // and download all test dependencies (jars included)
    configurations.testCompile.files
    commandLine 'echo', 'Downloaded all dependencies'
}
Also change
RUN /opt/gradle/gradle-4.1/bin/gradle dependencies
to
RUN /opt/gradle/gradle-4.1/bin/gradle download
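Putting it together, the Dockerfile would look something like this (a sketch based on the one in the question; apt-get update is added so the install step works on a fresh image):

FROM openjdk:8-jdk
COPY . /usr/test
WORKDIR /usr/test
RUN apt-get update && apt-get -y install wget unzip
RUN wget https://services.gradle.org/distributions/gradle-4.1-bin.zip
RUN mkdir /opt/gradle && unzip -d /opt/gradle gradle-4.1-bin.zip
RUN /opt/gradle/gradle-4.1/bin/gradle download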
This will cache all dependencies in ~/.gradle when you run the gradle command - the jars as well, not just the poms. And if you want to cache further, you can even use a named volume for the gradle folder:
version: '3.2'
services:
  unit:
    image: openjdk:8-jdk
    volumes:
      - ..:/usr/test
      - gradlecache:/root/.gradle/
    working_dir: /usr/test
    command: sh -c "exec ./gradlew junitPlatformTest -Punit -p moduleA/"
volumes:
  gradlecache: {}
Your idea is right but the approach is wrong. First of all, there is no need to download and install Gradle manually; that's exactly what the Gradle Wrapper does for you. Second, there is no need to hack Gradle to force-download dependencies - it doesn't make any sense. Just check out/clone your test project inside the container and run your tests with ./gradlew clean test. There is no need for a local /usr/test directory, which flies in the face of CI, because it uses a relative location and hence only works when you've laid out files in an exact manner.
Edit:
If you don't want to download Gradle or the dependencies for every build, you can start the container with the $HOME/.gradle directory volume-mapped to the host, so that dependencies downloaded once stay in Gradle's local cache. To avoid downloading Gradle, you can build your own Docker image with Gradle in it.
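For example, a plain docker run equivalent of the compose volume shown earlier (a sketch; it assumes the project is the current directory):
docker run --rm -v "$HOME/.gradle":/root/.gradle -v "$PWD":/usr/test -w /usr/test openjdk:8-jdk ./gradlew clean test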

Run Dynamodb local as part of a Gradle Java project

I am trying to run DynamoDB Local for testing purposes. I followed the steps Amazon provides for setting it up, and running the jar by itself works fine (link to Amazon's tutorial here). However, the tutorial doesn't go over running the jar within your own project. I don't want all the other developers to have to grab a jar and run it locally every time they test their code.
That is where my question comes in. I've had a real hard time finding any examples online of how to configure a Gradle project to run the DynamoDB Local server as part of my tests. I found the following Maven example https://github.com/awslabs/aws-dynamodb-examples/blob/master/src/test/java/com/amazonaws/services/dynamodbv2/DynamoDBLocalFixture.java#L32 and am trying to convert it to Gradle, but am getting errors for all of the com.amazonaws.services.dynamodbv2.local import statements they are using. The errors say the resource cannot be found.
I went into their project's pom and put the following into my build.gradle file to emulate it.
//dynamodb local dependencies
testCompile('com.amazonaws:aws-java-sdk-dynamodb:1.10.42')
testCompile('com.amazonaws:aws-java-sdk-cloudwatch:1.10.42')
testCompile('com.amazonaws:aws-java-sdk:1.3.0')
testCompile('com.amazonaws:amazon-kinesis-client:1.6.1')
testCompile('com.amazonaws:amazon-kinesis-connectors:1.1.1')
testCompile('com.amazonaws:dynamodb-streams-kinesis-adapter:1.0.2')
testCompile('com.amazonaws:DynamoDBLocal:1.10.5.1')
The import statements still fail. Here is an example of one that fails.
import com.amazonaws.services.dynamodbv2.local.embedded.DynamoDBEmbedded;
TL;DR
Has anyone managed to get the DynamoDB Local JAR to execute as part of a Gradle project, or does anyone have a link to a good tutorial (it doesn't have to be the one I linked to)?
We have DynamoDB Local working with Gradle. Here's what you need to add to your build.gradle file:
For Gradle 4.x and below:
1) Add to the repositories section:
maven {
    url 'http://dynamodb-local.s3-website-us-west-2.amazonaws.com/release'
}
2) Add to the dependencies section (assuming you're using this for your tests):
testCompile group: 'com.amazonaws', name: 'DynamoDBLocal', version: '1.11.0'
3) These next two steps are the tricky part. First, copy the native files to a directory:
task copyNativeDeps(type: Copy) {
    from(configurations.testCompile) {
        include "*.dylib"
        include "*.so"
        include "*.dll"
    }
    into 'build/libs'
}
4) Then make sure you include this directory (build/libs in our case) in the java library path like so:
test.dependsOn copyNativeDeps
test.doFirst {
    systemProperty "java.library.path", 'build/libs'
}
Now you should be able to run ./gradlew test and have your tests hit your local DynamoDB.
For Gradle 5.x, the solution below works:
maven {
    url 'http://dynamodb-local.s3-website-us-west-2.amazonaws.com/release'
}

configurations {
    dynamodb
}

dependencies {
    testImplementation 'com.amazonaws:DynamoDBLocal:1.11.477'
    dynamodb fileTree(dir: 'lib', include: ["*.dylib", "*.so", "*.dll"])
    dynamodb 'com.amazonaws:DynamoDBLocal:1.11.477'
}

task copyNativeDeps(type: Copy) {
    from configurations.dynamodb
    into "$project.buildDir/libs/"
}

test.dependsOn copyNativeDeps
test.doFirst {
    systemProperty "java.library.path", 'build/libs'
}
I ran into the same problem. First I tried to add sqlite4java.library.path to the Gradle script, as mentioned in the other answers.
This worked from the command line, but not when running the tests from the IDE (IntelliJ IDEA), so finally I came up with a simple init method that is called at the beginning of each integration test:
AwsDynamoDbLocalTestUtils.initSqLite();
AmazonDynamoDBLocal amazonDynamoDBLocal = DynamoDBEmbedded.create();
Implementation can be found here: https://github.com/redskap/aws-dynamodb-java-example-local-testing/blob/master/src/test/java/io/redskap/java/aws/dynamodb/example/local/testing/AwsDynamoDbLocalTestUtils.java
I put a whole example to GitHub, it might be helpful: https://github.com/redskap/aws-dynamodb-java-example-local-testing
In August 2018, Amazon announced a new Docker image with Amazon DynamoDB Local on board. It does not require downloading and running any JARs, nor adding third-party OS-specific binaries like sqlite4java.
It is as simple as starting a Docker container before the tests:
docker run -p 8000:8000 amazon/dynamodb-local
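Your tests then just point the AWS SDK at the local endpoint; for example, with the v1 Java SDK used elsewhere in this question (a sketch - the class name is hypothetical, the region is arbitrary, and DynamoDB Local accepts any credentials):

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class DynamoDbLocalClientFactory {
    // builds a client that talks to the local container instead of real AWS
    public static AmazonDynamoDB create() {
        return AmazonDynamoDBClientBuilder.standard()
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
                .build();
    }
}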
You can do that manually for local development, as described above, or use it in your CI pipeline. Many CI services provide the ability to start additional containers during the pipeline that can provide dependencies for your tests. Here is an example for GitLab CI/CD:
test:
  stage: test
  image: openjdk:8-alpine
  services:
    - name: amazon/dynamodb-local
      alias: dynamodb-local
  script:
    - ./gradlew clean test
So, during the test task DynamoDB will be available on http://dynamodb-local:8000.
Another, more powerful tool is localstack. It supports two dozen AWS services, DynamoDB being one of them. The usage is very similar: you have to start it before running the tests, and it will expose AWS-compatible APIs on the given ports:
test:
  stage: test
  image: openjdk:8-alpine
  services:
    - name: localstack/localstack
      alias: localstack
  script:
    - ./gradlew clean test
The idea is to move all the configuration out of your build tool and tests and provide the dependency externally. Think of it as dependency injection / IoC, but for a whole service rather than a single bean. This way, your code is cleaner and more maintainable. You can see that even in the examples above: you can switch the mock implementation from DynamoDB Local to localstack by simply changing the image part!
The easiest way, in my opinion, is to:
Download the JAR from here:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html#DynamoDBLocal.DownloadingAndRunning
Then unzip the downloaded folder and add its content to the /libs folder in the project (create the /libs folder first)
Finally, add to the build.gradle:
dependencies {
    runtime files('libs/DynamoDBLocal.jar')
}
I didn't want to create a specific configuration for DynamoDB for Gradle 6+, so I tweaked the original answer's instructions. Also, this is in the Kotlin Gradle DSL rather than Groovy.
val copyNativeDeps by tasks.creating(Copy::class) {
    from(configurations.testRuntimeClasspath) {
        include("*.dylib")
        include("*.so")
        include("*.dll")
    }
    into("$buildDir/libs")
}

tasks.withType<Test> {
    dependsOn.add(copyNativeDeps)
    doFirst { systemProperty("java.library.path", "$buildDir/libs") }
}
By leveraging the testRuntimeClasspath configuration, Gradle is able to locate the relevant files for you without needing a custom configuration. This has the side effect that if your test runtime has many native deps, they will all be copied, which would make the custom-configuration approach more suitable.
