What is a clean and elegant way to copy a bunch of files via scp with Gradle?
Two ways I currently see are:
Using Apache Wagon, as described here: http://markmail.org/message/2tmtaffayhq25g4s
Executing scp via the command line with the Exec task (a rough sketch of this option is shown below)
Are there any better (more obvious) ways to approach this?
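To illustrate the second option, here is a minimal sketch using Gradle's built-in Exec task. The host, user and paths are placeholders, and it assumes scp is on the PATH with key-based authentication already set up:
```
// Sketch only: shells out to the local scp binary via an Exec task
task copyFilesToServer(type: Exec) {
    // placeholder source and destination; adjust to your project
    commandLine 'scp', '-r', "${buildDir}/libs", 'user@example.com:/opt/app/'
}
```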
A few years after the original question was asked, I like the Gradle SSH Plugin. A short quote from its extensive documentation:
We can describe SSH operations in the session closure.
session(remotes.web01) {
    // Execute a command
    def result = execute 'uptime'
    // Any Gradle methods or properties are available in a session closure
    copy {
        from "src/main/resources/example"
        into "$buildDir/tmp"
    }
    // Also Groovy methods or properties are available in a session closure
    println result
}
Following methods are available in a session closure.
execute - Execute a command.
executeBackground - Execute a command in background.
executeSudo - Execute a command with sudo support.
shell - Execute a shell.
put - Put a file or directory into the remote host.
get - Get a file or directory from the remote host.
...and allows for, for example:
task deploy(dependsOn: war) << {
    ssh.run {
        session(remotes.staging) {
            put from: war.archivePath.path, into: '/webapps'
            execute 'sudo service tomcat restart'
        }
    }
}
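For context, the remotes referenced in these snippets (web01, staging) also have to be defined in the build script. A minimal sketch based on the plugin's documented DSL; the version number and connection details are placeholders:
```
plugins {
    id 'org.hidetake.ssh' version '2.10.1' // example version; check the Gradle plugin portal
}

remotes {
    web01 {
        host = '192.168.1.101'    // placeholder address
        user = 'deployer'         // placeholder user
        identity = file('id_rsa') // private key used for authentication
    }
}
```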
From a project of mine that I use to SCP files to an EC2 server.
The jar files there are local files that are part of my project, I forget where I got them from. There's probably a more concise way of doing all this, but I like to be very explicit in my build scripts.
configurations {
    sshAntTask
}

dependencies {
    sshAntTask fileTree(dir: 'buildSrc/lib', include: 'jsch*.jar')
    sshAntTask fileTree(dir: 'buildSrc/lib', include: 'ant-jsch*.jar')
}

ant.taskdef(
    name: 'scp',
    classname: 'org.apache.tools.ant.taskdefs.optional.ssh.Scp',
    classpath: configurations.sshAntTask.asPath)

task uploadDbServer() {
    doLast {
        ant.scp(
            file: '...',
            todir: '...',
            keyfile: '...')
    }
}
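If the upload should only run after the artifact has been built, the task can be wired in with an ordinary task dependency. A small sketch; which task to depend on (build, war, jar, ...) varies per project:
```
// Assumption: the file referenced in ant.scp is produced by the regular build
uploadDbServer.dependsOn build
```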
Scope
I am trying to deploy a Quarkus-based application to a Raspberry Pi using some fancy technologies. My goal is to figure out an easy way to develop an application with the Quarkus framework and then deploy it as a native executable to a Raspberry Pi device with full GPIO pin access. Below are the requirements I set for myself and my environment settings, to give a better picture of the problem I faced.
Acceptance Criteria
Java 17
Build native executable using GraalVM
Execute the native executable in a micro image in Docker on the Raspberry Pi
Target platform can vary
Be able to use the GPIO, SPI, I2C, etc. interfaces of the Raspberry Pi
Environment
Development PC:
os: Ubuntu 22.04.1 LTS
platform: x86_64, linux/amd64

Raspberry Pi Model 3 B+:
os: DietPi
platform: aarch64, linux/arm64/v8
Prerequisites
Java: diozero, a device I/O library
Docker: working with buildx
Quarkus: build a native executable
How I built ARM based Docker Images for Raspberry Pi using buildx CLI Plugin on Docker Desktop?
Building Multi-Architecture Docker Images With Buildx
Application
source code on github
As the project base I used the getting-started application from
https://github.com/quarkusio/quarkus-quickstarts
Adding the diozero library to pom.xml:
<dependency>
    <groupId>com.diozero</groupId>
    <artifactId>diozero-core</artifactId>
    <version>1.3.3</version>
</dependency>
Creating a simple resource to test GPIO pins:
package org.acme.getting.started;

import com.diozero.devices.LED;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

@Path("led")
public class LedResource {

    @Path("on")
    public String turnOn(final @QueryParam("gpio") Integer gpio) {
        try (final LED led = new LED(gpio)) {
            led.on();
        } catch (final Throwable e) {
            return e.getMessage();
        }
        return "turn on led on gpio " + gpio;
    }

    @Path("off")
    public String turnOff(final @QueryParam("gpio") Integer gpio) {
        try (final LED led = new LED(gpio)) {
            led.off();
        } catch (final Throwable e) {
            return e.getMessage();
        }
        return "turn off led on gpio " + gpio;
    }
}
Dockerfile
```
# Stage 1 : build with maven builder image with native capabilities
FROM quay.io/quarkus/ubi-quarkus-native-image:22.0.0-java17-arm64 AS build
COPY --chown=quarkus:quarkus mvnw /code/mvnw
COPY --chown=quarkus:quarkus .mvn /code/.mvn
COPY --chown=quarkus:quarkus pom.xml /code/
USER quarkus
WORKDIR /code
RUN ./mvnw -B org.apache.maven.plugins:maven-dependency-plugin:3.1.2:go-offline
COPY src /code/src
RUN ./mvnw package -Pnative
# Stage 2 : create the docker final image
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6-902
WORKDIR /work/
COPY --from=build /code/target/*-runner /work/application
# set up permissions for user 1001
RUN chmod 775 /work /work/application \
&& chown -R 1001 /work \
&& chmod -R "g+rwX" /work \
&& chown -R 1001:root /work
EXPOSE 8080
USER 1001
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
```
Building image with native executable
The Dockerfile is based on the Quarkus docs; I changed the image of the build container to quay.io/quarkus/ubi-quarkus-native-image:22.0.0-java17-arm64 and the executor container to registry.access.redhat.com/ubi8/ubi-minimal:8.6-902, both of which are linux/arm64 compliant.
Since I am developing and building on linux/amd64 but want to target linux/arm64/v8, the executable must be created in a target-like environment. I can achieve that with the buildx feature, which enables cross-arch builds of Docker images.
Installing QEMU
sudo apt-get install -y qemu-user-static
sudo apt-get install -y binfmt-support
Initializing buildx for linux/arm64/v8 builds
sudo docker buildx create --platform linux/arm64/v8 --name arm64-v8
Use new driver
sudo docker buildx use arm64-v8
Bootstrap driver
sudo docker buildx inspect --bootstrap
Verify
sudo docker buildx inspect
Name: arm64-v8
Driver: docker-container
Nodes:
Name: arm64-v80
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/arm64*, linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
Now it looks like we're ready to run the build. I ended up with the following command:
sudo docker buildx build --push --progress plain --platform linux/arm64/v8 -f Dockerfile -t nanobreaker/agus:arm64 .
--push - since I need to deploy the final image somewhere
--platform linux/arm64/v8 - Docker requires the target platform to be defined
-t nanobreaker/agus:arm64 - my target repository for the final image
It took ~16 minutes to complete the build and push the image.
The target platform is linux/arm64, as needed.
The image size is 59.75 MB, good enough already (with a micro image I could achieve ~10 MB).
Then I connected to the Raspberry Pi, pulled the image and ran it:
docker run -p 8080:8080 nanobreaker/agus:arm64
Pretty nice. Let's execute an HTTP request to test the GPIO pins:
curl 192.168.0.20:8080/led/on?gpio=3
Okay, so I see that there are permission problems and that the diozero library is not on java.library.path.
We can fix the permission problems by adding an additional flag to the docker run command:
docker run --privileged -p 8080:8080 nanobreaker/agus:arm64
PROBLEM
From this point on, I do not know how to resolve the library load error in the native executable.
I've tried:
Pulling the native executable out of the final container and executing it on the Raspberry Pi host OS gave the same result, which makes me think the library was not included at GraalVM compile time?
Studying how the library gets loaded: https://github.com/mattjlewis/diozero/blob/main/diozero-core/src/main/java/com/diozero/util/LibraryLoader.java
UPDATE I
It looks like I have two options here:
Figure out a way to create a configuration for the diozero library so that it is properly resolved by GraalVM during native image compilation.
Add the library to the native image and pass it to the native executable.
UPDATE II
Further reading of the Quarkus docs landed me here: https://quarkus.io/guides/writing-native-applications-tips
By default, when building a native executable, GraalVM will not include any of the resources that are on the classpath into the native executable it creates. Resources that are meant to be part of the native executable need to be configured explicitly. Quarkus automatically includes the resources present in META-INF/resources (the web resources) but, outside this directory, you are on your own.
I reached out to Matt Lewis (the creator of diozero) and he was kind enough to share the configs he used to compile with GraalVM. Thank you, Matt!
Here’s the documentation on my initial tests: https://www.diozero.com/performance/graalvm.html
I stashed the GraalVM config here: https://github.com/mattjlewis/diozero/tree/main/src/main/graalvm/config
So, combining this knowledge, we can enrich pom.xml with an additional setting to tell GraalVM how to process our library:
<quarkus.native.additional-build-args>
    -H:ResourceConfigurationFiles=resource-config.json,
    -H:ReflectionConfigurationFiles=reflection-config.json,
    -H:JNIConfigurationFiles=jni-config.json,
    -H:+TraceServiceLoaderFeature,
    -H:+ReportExceptionStackTraces
</quarkus.native.additional-build-args>
I also added resource-config.json, reflection-config.json and jni-config.json to the resources folder of the project (src/main/resources).
First, I will try to create a native executable on my native OS: ./mvnw package -Dnative
Fatal error: org.graalvm.compiler.debug.GraalError: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException: No instances of java.lang.ProcessHandleImpl are allowed in the image heap as this class should be initialized at image runtime. To see how this object got instantiated use --trace-object-instantiation=java.lang.ProcessHandleImpl.
Okey, so it failed, but let's trace object instantiation as recommended, maybe we can do something in configs to get around this. I added --trace-object-instantiation=java.lang.ProcessHandleImpl to the additional build args.
Fatal error: org.graalvm.compiler.debug.GraalError: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException: No instances of java.lang.ProcessHandleImpl are allowed in the image heap as this class should be initialized at image runtime. Object has been initialized by the java.lang.ProcessHandleImpl class initializer with a trace:
at java.lang.ProcessHandleImpl.<init>(ProcessHandleImpl.java:227)
at java.lang.ProcessHandleImpl.<clinit>(ProcessHandleImpl.java:77)
. To fix the issue mark java.lang.ProcessHandleImpl for build-time initialization with --initialize-at-build-time=java.lang.ProcessHandleImpl or use the the information from the trace to find the culprit and --initialize-at-run-time=<culprit> to prevent its instantiation.
Something new at least; let's try to initialize it at build time with --initialize-at-build-time=java.lang.ProcessHandleImpl
Error: Incompatible change of initialization policy for java.lang.ProcessHandleImpl: trying to change BUILD_TIME from command line with 'java.lang.ProcessHandleImpl' to RERUN for JDK native code support via JNI
com.oracle.svm.core.util.UserError$UserException: Incompatible change of initialization policy for java.lang.ProcessHandleImpl: trying to change BUILD_TIME from command line with 'java.lang.ProcessHandleImpl' to RERUN for JDK native code support via JNI
Okay, so we're not able to change the initialization kind, and it looks like it wouldn't help anyway.
I found out that with -H:+PrintClassInitialization we can generate a CSV file with class initialization info.
Here we have two lines for java.lang.ProcessHandleImpl:
java.lang.ProcessHandleImpl, RERUN, for JDK native code support via JNI
java.lang.ProcessHandleImpl$Info, RERUN, for JDK native code support via JNI
So it says the class is marked as RERUN, but isn't that exactly what we're looking for? It makes no sense to me right now.
UPDATE III
With the GraalVM configs provided by Matt I was able to compile a native image, but it still fails at runtime with java.lang.UnsatisfiedLinkError, which makes me think the library was not embedded properly.
So it looks like we just need to build a proper configuration file. To do that, let's build the application without native compilation for now, run it on the Raspberry Pi, trigger the code related to diozero and collect the output configs.
./mvnw clean package -Dquarkus.package.type=uber-jar
Deploy it to the Raspberry Pi and run it with the GraalVM agent to generate the configs (https://www.graalvm.org/22.1/reference-manual/native-image/Agent/):
/$GRAALVM_HOME/bin/java -agentlib:native-image-agent=config-output-dir=config -jar ags-gateway-1.0.0-SNAPSHOT-runner.jar
Run some simple requests to trigger the diozero code (I connected an LED to the Raspberry Pi on GPIO 4 and could actually see it turn off/on):
curl -X POST 192.168.0.20:8080/blink/off?gpio=4
curl -X POST 192.168.0.20:8080/blink/on?gpio=4
I've published the project with the output configs.
One thing I noticed: with "pattern":"\\Qlib/linux-aarch64/libdiozero-system-utils.so\\E" the aarch64 library gets pulled in while running on the Pi, which is correct, but when I build on my native OS I should specify the 'amd64' platform instead.
Let's try to build a native executable with the new configs:
./mvnw package -Dnative
It compiled successfully; let's run and test:
./target/ags-gateway-1.0.0-SNAPSHOT-runner
curl -X POST localhost:8080/led/on?gpio=4
And here we have an error again:
ERROR [io.qua.ver.htt.run.QuarkusErrorHandler] (executor-thread-0) HTTP Request to /led/on?gpio=4 failed, error id: b0ef3f8a-6813-4ea8-886f-83f626eea3b5-1: java.lang.UnsatisfiedLinkError: com.diozero.internal.provider.builtin.gpio.NativeGpioDevice.openChip(Ljava/lang/String;)Lcom/diozero/internal/provider/builtin/gpio/GpioChip; [symbol: Java_com_diozero_internal_provider_builtin_gpio_NativeGpioDevice_openChip or Java_com_diozero_internal_provider_builtin_gpio_NativeGpioDevice_openChip__Ljava_lang_String_2]
So I finally managed to build a native image, but for some reason it didn't resolve the JNI bindings for the native library.
Any thoughts on how to properly embed the diozero library into the native executable?
UPDATE IV
With the help of Matthew Lewis we managed to build an aarch64 native executable on an amd64 OS. I updated the source project with the final configurations, but I must point out that this is not a final solution: it doesn't cover all of the library code and, according to Matt's comments, this might not be the only way to configure the GraalVM build.
I've created a very simple Quarkus app that exposes a single REST API to list the available GPIOs. Note that it currently uses the mock provider that will be introduced in v1.3.4 so that I can test and run locally without deploying to a Raspberry Pi.
Running on a Pi would be as simple as removing the dependency to diozero-provider-mock in the pom.xml - you would also currently need to change the dependency to 1.3.3 until 1.3.4 is released.
Basically you need to add this to the application.properties file:
quarkus.native.additional-build-args=\
-H:ResourceConfigurationFiles=resource-config.json,\
-H:JNIConfigurationFiles=jni-config.json,\
-H:ReflectionConfigurationFiles=reflect-config.json
These files were generated by running com.diozero.sampleapps.LEDTest with the GraalVM Java executable (with a few minor tweaks), i.e.:
$GRAALVM_HOME/bin/java -agentlib:native-image-agent=config-output-dir=config \
-cp diozero-sampleapps-1.3.4.jar:diozero-core-1.3.4.jar:tinylog-api-2.4.1.jar:tinylog-impl-2.4.1.jar \
com.diozero.sampleapps.LEDTest 18
Note that a lot of this was based on my prior experiments with GraalVM, as documented here and here.
The ProcessHandleImpl error appears to be related to the tinylog reflect config, which I have edited out.
Update 1
To make life easy for users of diozero, the library does a bit of static initialisation for things like detecting the local board. This causes issues with the logic that loads the most appropriate native library at most once (see LibraryLoader; you will notice it has a static Map of the libraries that have been loaded, which prevents the library from being loaded at runtime). To get around this I recommend adding this build property:
--initialize-at-run-time=com.diozero.sbc\\,com.diozero.util
Next, I have been unable to resolve the java.lang.ProcessHandleImpl issue, which prevents reenabling the service loader (diozero uses service loader quite a bit to enable flexibility and extensibility). It would be nice to be able to add this flag:
quarkus.native.auto-service-loader-registration=true
Instead I have specified relevant classes in resource-config.json.
I use Java Selenium WebDriver with TestNG for running my tests.
I am calling driver.quit() in my final test to make sure the files created in the /tmp/ folder are deleted.
However, if a run is aborted (via Jenkins), the contents of the /tmp/ folder are not deleted.
Is there a testng listener or any other way I can make sure that tmp folder is cleared even if the run is aborted mid-run?
If the issue is introduced at the Jenkins level (the abort action), I believe it is more effective to solve it at the Jenkins level as well, rather than with a TestNG listener.
For a declarative pipeline:
pipeline {
    agent any
    stages {
        ...
    }
    post {
        aborted {
            script {
                echo 'cleanup on abort'
                // this will clean workspace
                // cleanWs()
                // or just delete 'tmp' directory
                dir('tmp') {
                    deleteDir()
                }
            }
        }
    }
}
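For a scripted pipeline, a rough equivalent is a try/finally around the build steps. This is only a sketch, assuming the temporary files live in a tmp directory inside the workspace and that the test step shown is a placeholder:
```
node {
    try {
        stage('Tests') {
            sh './run-selenium-tests.sh' // placeholder for the real test step
        }
    } finally {
        // runs on success, failure and abort (the abort arrives as an interruption)
        dir('tmp') {
            deleteDir()
        }
    }
}
```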
Using Plugin
https://plugins.jenkins.io/postbuild-task/
Install the plugin and set up a shell script execution for the job, for example:
rm -rf tmp
I am trying to make sure my Jenkins instance is not exploitable via the latest Log4j exploit.
I have a pipeline script that runs, and I tried following these instructions:
https://community.jenkins.io/t/apache-log4j-2-vulnerability-cve-2021-44228/990
This is one of the stages of my pipeline script:
stage('Building image aaa') {
    steps {
        script {
            sh "echo executing"
            org.apache.logging.log4j.core.lookup.JndiLookup.class.protectionDomain.codeSource
            sh "docker build --build-arg SCRIPT_ENVIRONMENT=staging -t $IMAGE_REPO_NAME:$IMAGE_TAG ."
        }
    }
}
But I get a different error than what's described there, and I'm unsure if I'm checking this correctly. This is the error:
groovy.lang.MissingPropertyException: No such property: org for class: groovy.lang.Binding
at groovy.lang.Binding.getVariable(Binding.java:63)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:271)
at org.kohsuke.groovy.sandbox.impl.Checker$7.call(Checker.java:353)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:357)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:333)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:333)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:333)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:29)
at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
at WorkflowScript.run(WorkflowScript:31)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.get(PropertyishBlock.java:74)
at com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
....etc
This is probably the easiest way to check if your Jenkins has the Log4j vulnerability (through plugins or otherwise).
Go to https://your-jenkins.domain/script
Paste org.apache.logging.log4j.core.lookup.JndiLookup.class.protectionDomain.codeSource
If the output is groovy.lang.MissingPropertyException: No such property: org for class: Script1, you're good; otherwise you're not.
This way you don't have to change your pipeline or go through the script approval process mentioned in the other answer; you can just paste and verify without any additional configuration.
I don't think a class name would be directly interpreted as a Groovy codeSource argument in a declarative pipeline (as opposed to a scripted one).
Try the approach of "How to import a file of classes in a Jenkins Pipeline?", with:
node {
    def cl = load 'Classes.groovy'
    def a = cl.getProperty("org.apache.logging.log4j.core.lookup.JndiLookup").protectionDomain.codeSource
    ...
}
Note that getClassLoader() is disallowed by default and should require an In-process Script Approval from a Jenkins administrator.
I have a Spring Boot JAR on a Linux server, located at /dev/ExampleApplication.jar.
Is it possible for the Spring Boot application to monitor that specific JAR for an update (maybe a change in the checksum), and if there is an update, shut itself down, and start the updated JAR?
I want something like this in pseudo code:
@Scheduled(fixedDelay = 1000)
public void checkForUpdates() {
    long fileChecksum = getJarChecksum();
    if (checksum != fileChecksum) { // checksum: the previously recorded value
        command("java -jar /dev/ExampleApplication.jar");
        context.shutdown();
    }
}
You could use inotify-tools to re-launch the jar file any time it is deleted or altered.
You'll need to install inotify-tools on your Linux distro if it is not already installed.
while inotifywait -e close_write /dev/ExampleApplication.jar; do java -jar /dev/ExampleApplication.jar; done
The -e parameter takes an event type. You can also watch a directory to see if anything is deleted or created in that directory. See here for more on the inotifywait event types: https://linux.die.net/man/1/inotifywait
I want to debug some JVM instances that are running at the same time. I know that I can run gradle using --debug-jvm so that the JVM will wait until I start the IDE debugger so that it connects to the JVM but it uses port 5005 by default. That's fine for debugging one instance of JVM... but if I want to debug more than one instance, I'll need to define a different port from 5005. How can I achieve this with gradle?
In my case I wanted to debug a specific file, so I included the following code in build.gradle:
task execFile(type: JavaExec) {
    main = mainClass
    classpath = sourceSets.main.runtimeClasspath
    if (System.getProperty('debug', 'false') == 'true') {
        jvmArgs "-Xdebug", "-agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=y"
    }
    systemProperties System.getProperties()
}
and I can run with:
gradle execFile -PmainClass=com.MyClass -Dmyprop=somevalue -Ddebug=true
The custom execFile task receives:
-PmainClass=com.MyClass: the class with the main method I want to execute (in the script, main = mainClass)
-Dmyprop=somevalue: a property whose value can be retrieved in the application by calling System.getProperty("myprop") (in the script, systemProperties System.getProperties() was needed for that)
-Ddebug=true: a flag to enable debugging on port 8787 (in the script, see the if condition, and also address=8787, but the port could be changed, and this flag name also could be changed). Using suspend=y the execution is suspended until the debugger is attached to the port (if you don't want this behaviour, you could use suspend=n)
For your use case, you could try to apply the logic behind the line jvmArgs ... to your specific task (or use tasks.withType(JavaExec) { ... } to apply to all tasks of this type).
Using this solution, don't use the --debug-jvm option because you may receive an error about the property jdwp being defined twice.
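As an illustration of that idea, here is a sketch that applies the debug flag to every JavaExec task; the debugPort property name is made up for this example, not something Gradle defines:
```
// Enable remote debugging for all JavaExec tasks when -Ddebug=true is passed
tasks.withType(JavaExec) {
    if (System.getProperty('debug', 'false') == 'true') {
        // 'debugPort' is a hypothetical property; pick a different port per JVM instance
        def debugPort = System.getProperty('debugPort', '8787')
        jvmArgs "-agentlib:jdwp=transport=dt_socket,address=*:${debugPort},server=y,suspend=y"
    }
}
```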
Update (2020-08-10)
To make sure the code runs only when I execute the execFile task explicitly (so that it does not run when I just build the project, for example), I changed the code to:
task execFile {
    dependsOn 'build'
    doLast {
        tasks.create('execFileJavaExec', JavaExec) {
            main = mainClass
            classpath = sourceSets.main.runtimeClasspath
            if (System.getProperty('debug', 'false') == 'true') {
                jvmArgs "-Xdebug", "-agentlib:jdwp=transport=dt_socket,address=*:8787,server=y,suspend=y"
            }
            systemProperties System.getProperties()
        }.exec()
    }
}
See more at: Run gradle task only when called specifically
You could modify the GRADLE_OPTS environment variable and add the standard Java debugger syntax, e.g. to use port 8888:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8888
Option 1 - Directly pass the JVM arguments that start up the debugger
task exampleProgram(type: JavaExec) {
    classpath = sourceSets.main.runtimeClasspath
    description = "Your Description"
    main = 'your.package.YourMainClass' // fully qualified main class name (no .java extension)
    // Change `1805` to whatever port you want.
    jvmArgs = ["-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=1805"]
}
If it doesn't work right away, try stopping all existing Daemons with gradle --stop so Gradle isn't influenced by any past settings when building/running your project.
Option 2 - Use Gradle's debugOptions object
Alternatively, according to Gradle's documentation, the following should also do the trick; however, it didn't work for me. I'm including it for completeness and in the hope that it works in the future.
task runApp(type: JavaExec) {
    ...
    debugOptions {
        enabled = true
        port = 5566
        server = true
        suspend = false
    }
}
References:
JVM Debugger Args: https://docs.oracle.com/cd/E19146-01/821-1828/gdabx/index.html
Similar question: how to debug spring application with gradle