Logback and Graylog cannot communicate on a Mac using syslog - java

I want to send log messages from a Java application to Graylog, using slf4j on top of Logback with a Logback GELF appender on one side and a Graylog GELF input on the other. To test it, I'm running Graylog in a Docker container (using Docker for Mac) and running my Java application locally. The gist of my story is that the Graylog GELF input does not receive anything from the Java application; somehow the Java application and Graylog don't seem to be able to communicate. The same happens when I switch to a different appender/input combination (one based on syslog records). However, when I echo a message from the command line to a different Graylog input, namely the raw input listening on port 5555, that message is received fine.
Any idea what the problem is?
This is my setup using GELF:
Java app:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class LogDemo {
    public static void main(String[] args) {
        Logger logger = LoggerFactory.getLogger(LogDemo.class);
        logger.error("Hello World 2");
    }
}
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.example</groupId>
    <artifactId>logdemo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>de.appelgriepsch.logback</groupId>
            <artifactId>logback-gelf-appender</artifactId>
            <version>1.5</version>
        </dependency>
    </dependencies>
</project>
logback.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="GELF" class="de.appelgriepsch.logback.GelfAppender">
        <server>localhost</server>
        <port>12201</port>
        <protocol>TCP</protocol>
    </appender>
    <root level="error">
        <appender-ref ref="GELF"/>
    </root>
</configuration>
Graylog docker startup:
$ docker run --name mongo -d mongo:3
$ docker run --name elasticsearch \
-e "http.host=0.0.0.0" \
-e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
-d docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.10
$ docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 1514:1514 -p 5555:5555 \
-e GRAYLOG_HTTP_EXTERNAL_URI="http://127.0.0.1:9000/" \
-d graylog/graylog:3.3
Graylog GELF tcp input (running):
bind_address: 0.0.0.0
decompress_size_limit: 8388608
max_message_size: 2097152
number_worker_threads: 4
override_source: <empty>
port: 12201
recv_buffer_size: 1048576
tcp_keepalive: false
tls_cert_file: <empty>
tls_client_auth: disabled
tls_client_auth_cert_file: <empty>
tls_enable: false
tls_key_file: <empty>
tls_key_password:********
use_null_delimiter: true
As stated, when I run the Java app while Graylog is running as a Docker container in the background, Graylog does not receive the log message I sent. However, when I type the following on my command line (using Terminal on Mac), the message IS received by the Graylog raw input:
$ echo "Testmessage" | nc localhost 5555
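A similar hand-rolled test can target the GELF TCP input itself. A minimal sketch, assuming the input from the setup above is listening on localhost:12201: GELF over TCP expects each JSON message to be terminated by a null byte (matching use_null_delimiter: true), so a plain echo will not do.

```shell
# Build a minimal GELF 1.1 payload; the GELF TCP input expects each JSON
# message to be terminated by a null byte (hence use_null_delimiter: true).
msg='{"version":"1.1","host":"test","short_message":"Hello GELF"}'
# Send it to the input (prints a note instead of failing if nothing listens):
printf '%s\0' "$msg" | nc -w 2 localhost 12201 || echo "GELF input not reachable"
```

If this message shows up in Graylog but the appender's messages do not, the problem is on the appender side rather than in the Docker port mapping.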
Does somebody have a clue what I'm doing wrong?

I found a solution, though I'm not sure what the exact cause of the problem was. The solution was to use a different GELF appender. Instead of the one mentioned above, I'm now using the following one:
<dependency>
    <groupId>de.siegmar</groupId>
    <artifactId>logback-gelf</artifactId>
    <version>2.2.0</version>
</dependency>
That did the trick, but as I said, I'm unsure why the one I used earlier did not work.
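For reference, a minimal logback.xml for the de.siegmar appender might look as follows. This is a sketch: the class and element names are taken from the logback-gelf project's documentation, and host/port are assumed from the setup above, so verify against the version you use.

```
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- GelfTcpAppender sends null-delimited GELF messages over TCP -->
    <appender name="GELF" class="de.siegmar.logbackgelf.GelfTcpAppender">
        <graylogHost>localhost</graylogHost>
        <graylogPort>12201</graylogPort>
    </appender>
    <root level="error">
        <appender-ref ref="GELF"/>
    </root>
</configuration>
```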

Related

How Do I Containerize Eureka Server In Docker

I have been struggling to use my Eureka server container in Docker...
I have gone through previous solutions and still don't understand why I can't access the URL: http://localhost:8761/
I have changed my properties file several times, but none of the changes seem to work...
First, my application.properties file goes like this:
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
server.port=8761
spring.application.name=discovery-service
eureka.instance.prefer-ip-address=true
logging.level.org.springframework.cloud.commons.util.InetUtils=trace
spring.cloud.inetutils.timeout-seconds=10
And the dependencies section of my pom.xml goes like this:
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>
And I have also added @EnableEurekaServer to my application class...
When I created an image for it using the docker command docker build -t davidtega/eureka-layered -f Dockerfile.layered .
it worked perfectly, and I started a container using the docker command docker run -p 8761:8761 -t davidtega/eureka-layered
But when I try to access http://localhost:8761/, "this site cannot be reached" is the response I get every time...
Then I noticed my app was running on 0.0.0.0:8761, not 127.0.0.1:8761, and I was wondering how to change that.
I have two Docker files; the first one is the Dockerfile and the second one is Dockerfile.layered.
My Dockerfile contains the following:
FROM openjdk:17
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
And my Dockerfile.layered contains:
FROM eclipse-temurin:17.0.4.1_1-jre as builder
WORKDIR extracted
ADD target/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract
FROM eclipse-temurin:17.0.4.1_1-jre
WORKDIR application
COPY --from=builder extracted/dependencies/ ./
COPY --from=builder extracted/spring-boot-loader/ ./
COPY --from=builder extracted/snapshot-dependencies/ ./
COPY --from=builder extracted/application/ ./
EXPOSE 8761
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
Please, I request assistance; I am using Spring Cloud version 2.7 and Java 17... Thanks
Add to the config: eureka.instance.hostname=localhost and eureka.client.serviceUrl.defaultZone=http://${eureka.instance.hostname}:${server.port}/eureka/ (note that the placeholder refers to eureka.instance.hostname, so that is the key to set).
Make sure the port is mapped by doing docker run -p 8761:8761, and then check that the port is correctly listening with lsof -i -P -n | grep LISTEN
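Put together, the suggested change might look like this in application.properties (a sketch using the standard eureka.instance.hostname key; the defaultZone value is the one from the answer above):

```
# Hostname the Eureka server advertises and the client URL built from it
eureka.instance.hostname=localhost
eureka.client.serviceUrl.defaultZone=http://${eureka.instance.hostname}:${server.port}/eureka/
```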

install mysql and java jdk-11 in Dockerfile and run my spring boot jar file in container

I want to install MySQL and JDK 11 and run the jar file (a Spring Boot project) in a container. If anyone has experience in this area, please help.
Thanks
This is my SQL config:
host='localhost',
port=3306,
user='root',
passwd='password',
FROM ubuntu
RUN apt-get update
RUN apt-get -y install mysql-server
RUN apt-get -y install openjdk-11-jdk
COPY target/orderCodeBackEnd-0.0.1-SNAPSHOT.jar /usr/app/
WORKDIR /usr/app
ENTRYPOINT ["java", "-jar", "orderCodeBackEnd-0.0.1-SNAPSHOT.jar"]
//Dockerfile
FROM openjdk:11
ADD target/*.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]
// Dockerfile (does the same as the one above)
FROM openjdk:11
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
//docker-compose.yaml
services:
  yourapp:
    image: yourAppJarName:latest
    container_name: yourapp
    depends_on:
      - mysqldb
    restart: always
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "9090:9090"
    environment:
      MYSQL_HOST: mysqldb
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_PORT: 3306
  mysqldb:
    image: mysql:8.0.28
    restart: unless-stopped
    container_name: mysqldb
    ports:
      - "3307:3306"
    cap_add:
      - SYS_NICE
    environment:
      MYSQL_DATABASE: dbname
      MYSQL_ROOT_PASSWORD: root
//application.properties or yaml
spring:
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://${MYSQL_HOST:localhost}:${MYSQL_PORT:3306}/dbname
    username: root
    password: root
// customize your jar name in pom.xml
    </dependency>
    <dependency>
        ..........
    </dependency>
    <dependency>
        ..........
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
    <finalName>yourAppJarName</finalName>
</build>
</project>
Then right-click the project, choose "Run As", and click Maven "install".
You must also open your MySQL client and connect to port 3307, since 3307 is the exposed host port.
Create a container of MySQL / MariaDB by pulling image from MySQL Docker repository.
sudo docker run --detach --env MARIADB_PASSWORD=secret --env MARIADB_ROOT_PASSWORD=secret -p 3306:3306 --add-host=YOUR_DESIRED_HOSTNAME:YOUR_LOCAL_MACHINE_IP mariadb:latest
--detach
Will run the server in detached mode.
--env MARIADB_PASSWORD=secret --env MARIADB_ROOT_PASSWORD=secret
Setting up environment variables for your DB server passwords. You can define the password as you wish. For me, I set it to secret
-p 3306:3306
Port mapping: container-internal port 3306 will be mapped to port 3306 outside the container.
--add-host=YOUR_DESIRED_HOSTNAME:YOUR_LOCAL_MACHINE_IP
Don't forget to change YOUR_DESIRED_HOSTNAME and YOUR_LOCAL_MACHINE_IP values if you want to establish a remote connection with the docker host machine. Usually, the hostname can be localhost if you are running docker on the same development machine. In such case, we don't even need this --add-host flag.
Now you can use the following connection parameters for connecting your application with the database if you run it in local.
host: YOUR_LOCAL_MACHINE_IP
port: 3306
username: root
password: secret
However, if you want to access the DB from your Spring Boot application running inside a Docker container, you may have to use an additional tool, docker-compose. The reason is that your host machine's IP address may not work inside your Docker container.
I think the following git repository will be helpful for understanding how to write your first docker-compose file. It has a readme.md file you can take help from.
https://gitlab.com/mainul35/traver-auth
It is good practice to separate different services into independent containers, creating a loosely coupled architecture.
Next, on Docker Hub we can find useful, ready-to-use images.
We could pull all the images from the command line and create all the services by hand, but there is a better way: Docker Compose. The first file you need is docker-compose.yml:
version: '2'
services:
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=password
    ports:
      - 3306:3306
  app:
    build: .
    ports:
      - 8080:8080
    depends_on:
      - mysql
In this file we describe these two services:
mysql:
image: the official MySQL image from Docker Hub
environment: you can find all possible variables in the image docs
ports: here we specify which ports will be exposed
app:
build: the path to the Dockerfile
depends_on: create the mysql service before this one
Dockerfile for your app:
FROM openjdk:11-jre
COPY target/orderCodeBackEnd-0.0.1-SNAPSHOT.jar /usr/app/
WORKDIR /usr/app
ENTRYPOINT ["java", "-jar", "orderCodeBackEnd-0.0.1-SNAPSHOT.jar"]
now you can easily start these services from the terminal
docker compose up -d
You must be in the directory where docker-compose.yml is located, or specify it with the -f parameter.
The answer of @ConRed is not complete. I made many changes to it in my application (which is shared here: https://github.com/Aliuken/JobVacanciesApp_Java11).
These are my final files:
docker-compose.yaml:
version: "3.9"
services:
  app-db-service:
    image: mysql:latest
    container_name: app-db-container
    ports:
      - "3307:3306"
    environment:
      - MYSQL_DATABASE=job-vacancies-app-db
      - MYSQL_ROOT_PASSWORD=admin
    networks:
      - internal-net
    restart: on-failure
    volumes:
      - app-db-data:/var/lib/mysql
      - ./src/main/resources/META-INF/db_dumps_folder:/docker-entrypoint-initdb.d
    cap_add:
      - SYS_NICE
    healthcheck:
      test: "mysql -uroot -padmin -e 'select 1'"
      interval: 1s
      retries: 120
  app-service:
    image: job-vacancies-app:latest
    container_name: app-container
    ports:
      - "9090:8080"
    environment:
      - MYSQL_HOST=app-db-container
      - MYSQL_PORT=3306
      - MYSQL_USER=root
      - MYSQL_PASSWORD=admin
    networks:
      - internal-net
      - external-net
    restart: on-failure
    volumes:
      - /AppData:/AppData
    depends_on:
      app-db-service:
        condition: service_healthy
    build:
      context: .
      dockerfile: Dockerfile
networks:
  external-net:
    external: true
  internal-net:
    driver: bridge
volumes:
  app-db-data:
    driver: local
where:
./src/main/resources/META-INF/db_dumps_folder contains my db dump file: db-dump.sql.
/AppData is the folder in my PC (which is Linux) that has images and documents used in the application.
healthcheck and service_healthy are used together to delay the start of the Spring Boot application until the db-dump.sql file has been executed.
internal-net is used for communication between the Spring Boot application and the database.
external-net is used for communication between the Spring Boot application and the user.
Dockerfile:
FROM adoptopenjdk/openjdk11-openj9:alpine
USER root
RUN mkdir /opt/apps
RUN mkdir /opt/apps/jobVacanciesApp
ARG JAR_FILE=lib/*.jar
COPY ${JAR_FILE} /opt/apps/jobVacanciesApp/jobVacanciesApp.jar
RUN addgroup -S jobVacanciesAppGroup && adduser -S jobVacanciesAppUser -G jobVacanciesAppGroup
USER jobVacanciesAppUser:jobVacanciesAppGroup
CMD ["java", "-jar", "/opt/apps/jobVacanciesApp/jobVacanciesApp.jar"]
docker-compose-start.sh:
docker volume prune -f
docker network create "external-net"
docker-compose build
docker-compose up
docker-compose start
docker-compose-stop.sh:
docker-compose stop
docker-compose down
docker volume prune -f
docker network rm "external-net"
application.yaml:
spring:
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://${MYSQL_HOST:localhost}:${MYSQL_PORT:3306}/job-vacancies-app-db?useSSL=false&serverTimezone=Europe/Madrid&allowPublicKeyRetrieval=true
    username: root
    password: admin
pom.xml:
...
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <excludes>
                    <exclude>
                        <groupId>org.projectlombok</groupId>
                        <artifactId>lombok</artifactId>
                    </exclude>
                </excludes>
            </configuration>
        </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <archive>
                    <manifest>
                        <mainClass>com.aliuken.jobvacanciesapp.MainApplication</mainClass>
                    </manifest>
                </archive>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-dependency-plugin</artifactId>
            <executions>
                <execution>
                    <id>copy-installed</id>
                    <phase>install</phase>
                    <goals>
                        <goal>copy</goal>
                    </goals>
                    <configuration>
                        <artifactItems>
                            <artifactItem>
                                <groupId>${project.groupId}</groupId>
                                <artifactId>${project.artifactId}</artifactId>
                                <version>${project.version}</version>
                                <type>${project.packaging}</type>
                                <outputDirectory>lib</outputDirectory>
                                <destFileName>job-vacancies-app.jar</destFileName>
                            </artifactItem>
                        </artifactItems>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
    <finalName>job-vacancies-app</finalName>
</build>
...
To run the application:
Execute in a terminal: ./docker-compose-start.sh
Open in a web browser: http://localhost:9090
To stop the application:
Press in the terminal previously opened: Ctrl + C
Execute in the terminal: ./docker-compose-stop.sh

How to speedup java maven build on Google Cloud Build (100s of dependencies)

I am using Google Cloud Build to build a java project which has 100s of dependencies. By default the local maven repository cache will be empty and it downloads all dependencies each time there is a build.
The Google documentation only suggests "Caching directories with Google Cloud Storage" (https://cloud.google.com/cloud-build/docs/speeding-up-builds), but it takes a long time to sync 7000 files, which means the build is still slow.
A single dependency alone accounts for several files and directories:
repository/org/mockito
repository/org/mockito/mockito-core
repository/org/mockito/mockito-core/2.15.0
repository/org/mockito/mockito-core/2.15.0/mockito-core-2.15.0.jar
repository/org/mockito/mockito-core/2.15.0/mockito-core-2.15.0.jar.sha1
repository/org/mockito/mockito-core/2.15.0/mockito-core-2.15.0.pom
repository/org/mockito/mockito-core/2.15.0/mockito-core-2.15.0.pom.sha1
repository/org/mockito/mockito-core/2.15.0/_remote.repositories
An example cloudbuild.yaml file
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ['rsync', '-r', 'gs://my-mavencache-bucket/repository', '.']
- name: 'gcr.io/$PROJECT_ID/mvn'
  args: ['package']
...
I would like to mount gs://my-mavencache-bucket as a volume, but I don't see an option to do that.
After much experimentation, this solution seems to work quite well: google-storage-wagon. This Maven extension provides a mechanism to read and publish Maven artifacts from a Google Cloud Storage bucket.
Maven pom.xml contains
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    ...
    <repositories>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>gs://bucket-ave-build-artifact/external</url>
            <releases>
                <enabled>true</enabled>
                <!-- TODO figure out why checksums do not match when artifact pulled from GCP -->
                <checksumPolicy>ignore</checksumPolicy>
            </releases>
        </repository>
    </repositories>
    <distributionManagement>
        <snapshotRepository>
            <id>my-repo-bucket-snapshot</id>
            <url>gs://my-build-artifact-bucket/snapshot</url>
        </snapshotRepository>
        <repository>
            <id>my-repo-bucket-release</id>
            <url>gs://my-build-artifact-bucket/release</url>
        </repository>
    </distributionManagement>
    ...
    <extensions>
        <extension>
            <groupId>com.gkatzioura.maven.cloud</groupId>
            <artifactId>google-storage-wagon</artifactId>
            <!-- version 1.8 seems to produce exception, ticket logged -->
            <version>1.7</version>
        </extension>
    </extensions>
</build>
and cloudbuild.yaml is simply
steps:
- name: 'gcr.io/cloud-builders/mvn'
  # -X here simply for verbose maven debugging
  args: ['deploy', '-X']
This will:
publish Maven artifacts to the data bucket gs://my-build-artifact-bucket/release, and
download external dependencies from gs://my-build-artifact-bucket/external (if they exist in that directory).
I found the package google-storage-wagon very nice, but lacking in terms of authentication and timing of the synchronization.
I implemented it myself as follows. For more information about service accounts, refer to this answer: https://stackoverflow.com/a/56610260/1236401
So assuming you have your service account key.json file handy and you have the name of your SERVICE_ACCOUNT as well as a storage bucket BUCKET_PATH, this is the basic Dockerfile:
FROM maven:3.6.1-jdk-12

ENV MAVEN_PATH="/root/.m2" \
    BUCKET_PATH="gs://mugen-cache/maven"

COPY key.json /key.json

# install gcloud sdk
RUN mkdir -p $MAVEN_PATH && \
    yum install -y curl which && \
    curl https://sdk.cloud.google.com | bash > /dev/null

ENV PATH="${PATH}:/root/google-cloud-sdk/bin" \
    SERVICE_ACCOUNT="mugen-build@mugen.iam.gserviceaccount.com"

# authenticate service account and install crcmod - https://cloud.google.com/storage/docs/gsutil/addlhelp/CRC32CandInstallingcrcmod
RUN gcloud auth activate-service-account $SERVICE_ACCOUNT --key-file=/key.json && \
    yum install -y gcc python-devel python-setuptools redhat-rpm-config

RUN curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py" && \
    python get-pip.py && \
    pip uninstall -y crcmod && \
    pip install --no-cache-dir -U crcmod

RUN echo "Syncing m2 in..." && \
    gsutil -q -m rsync -r $BUCKET_PATH $MAVEN_PATH && \
    echo "Downloaded $(find $MAVEN_PATH -type f -name "*.pom" | wc -l) packages"

# ... build and stuff

RUN echo "Syncing m2 out..." && \
    gsutil -q -m rsync -r $MAVEN_PATH $BUCKET_PATH
Some of the instructions here are specific to the base image (which is RHEL-based Oracle Linux Server), but you should be able to extract the important details to make it work in your case.

Passing variables from parameterized Jenkins project using Jenkinsfile, Maven, and Java

I have a parameterized Pipeline Jenkins project connected to a Maven project that I forked from https://github.com/jenkins-docs/simple-java-maven-app. I am trying to pass a parameter called "Platform" that I have set in the Jenkins project.
Before implementing this on my own, larger project, I wanted to see if it was possible to pass a parameter from Jenkins to the Java application via Maven. I've tried some solutions seen in below code.
However, no matter what I try, I still get null when running System.getProperty("platform"). I'm not sure what I could be doing incorrectly. Am I missing something or is there some incorrect syntax I'm just not identifying?
Code snippets below:
Jenkinsfile
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh "mvn -Dplatform=${params.Platform} -B clean package"
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
            }
        }
    }
}
deliver.sh
I added echo "${env.platform}" to see what it returned and I get an error - ./jenkins/scripts/deliver.sh: line 2: ${env.platform}: bad substitution
#!/usr/bin/env bash
echo "${env.platform}"
set -x
mvn jar:jar install:install help:evaluate -Dexpression=project.name
set +x
echo 'The following complex command extracts the value of the <name/> element'
echo 'within <project/> of your Java/Maven project''s "pom.xml" file.'
set -x
NAME=`mvn help:evaluate -Dexpression=project.name | grep "^[^\[]"`
set +x
echo 'The following complex command behaves similarly to the previous one but'
echo 'extracts the value of the <version/> element within <project/> instead.'
set -x
VERSION=`mvn help:evaluate -Dexpression=project.version | grep "^[^\[]"`
set +x
echo 'The following command runs and outputs the execution of your Java'
echo 'application (which Jenkins built using Maven) to the Jenkins UI.'
set -x
java -jar target/${NAME}-${VERSION}.jar
Java main
public static void main(String[] args) {
    String test = System.getProperty("platform");
    System.out.println(test);
}
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.mycompany.app</groupId>
    <artifactId>my-app</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>my-app</name>
    <url>http://maven.apache.org</url>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <!-- Build an executable JAR -->
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>3.0.2</version>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                            <classpathPrefix>lib/</classpathPrefix>
                            <mainClass>com.mycompany.app.App</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
UPDATE - solution found:
I followed Илиян Михайлов's solution and it worked! Also, in the Java class, instead of using System.getProperty("platform") I had to use System.getenv("Platform").
You try to set the parameter in the wrong place (in the build step). In Maven, each run is independent and does not store any information about earlier parameters. The parameter must be passed where it is needed, i.e. on this line: mvn jar:jar install:install help:evaluate -Dexpression=project.name -Dplatform="$1". The value itself should be sent as an argument from the Jenkins job: sh "./jenkins/scripts/deliver.sh ${params.Platform}"
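The two pieces of this fix can be sketched together as follows. The function wrapper and the example value are illustrative; the mvn line is the one from deliver.sh, shown commented out so the sketch stays self-contained.

```shell
#!/usr/bin/env bash
# Sketch: the Jenkinsfile passes the parameter as a positional argument:
#   sh "./jenkins/scripts/deliver.sh ${params.Platform}"
# (note the double quotes, so Groovy interpolates ${params.Platform})
deliver() {
  local platform="$1"   # the value received from the Jenkins job
  echo "platform=${platform}"
  # Forward it to Maven as a system property:
  # mvn jar:jar install:install help:evaluate -Dexpression=project.name -Dplatform="${platform}"
}

deliver "linux-x64"     # example invocation; prints platform=linux-x64
```

Inside the Java application the value then arrives via System.getProperty("platform") when passed with -D, or via System.getenv(...) when exported as an environment variable.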

Maven ssh deploy using password, Permission denied

I'm trying to build a jar using maven and automatically scp it to a remote machine.
This is my pom.xml
<properties>
    <deploy.username>root</deploy.username>
    <deploy.host>10.10.4.10</deploy.host>
    <deploy.port>22</deploy.port>
    <deploy.dir>/root</deploy.dir>
</properties>
<distributionManagement>
    <repository>
        <id>repo1</id>
        <url>scpexe://${deploy.host}:${deploy.dir}</url>
    </repository>
</distributionManagement>
This is my settings.xml:
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <servers>
        <server>
            <id>repo1</id>
            <username>root</username>
            <password>root</password>
        </server>
    </servers>
</settings>
This the error log
Caused by: org.eclipse.aether.transfer.MetadataTransferException: Could not transfer metadata com.github.rssanders3.spark:spark_quick_start:1.0-SNAPSHOT/maven-metadata.xml from/to repo1 (scpexe://10.10.4.10:/root): Exit code: 1 - Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
at org.eclipse.aether.connector.basic.MetadataTransportListener.transferFailed(MetadataTransportListener.java:43)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:355)
at org.eclipse.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:67)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$DirectExecutor.execute(BasicRepositoryConnector.java:581)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector.get(BasicRepositoryConnector.java:222)
at org.eclipse.aether.internal.impl.DefaultDeployer.upload(DefaultDeployer.java:417)
... 28 more
Caused by: org.apache.maven.wagon.TransferFailedException: Exit code: 1 - Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
You can see the full output of mvn -X here.
I'm submitting using the following command:
mvn deploy -DskipTests --settings settings.xml
The username and password are correct; I'm able to ssh to the machine using these credentials. I even scp'd a file to the remote host without any problem.
I checked the Maven debug output; it is loading the user-defined settings.xml I created:
[DEBUG] Reading user settings from /Users/xuanyue/tmp/apache-spark-quickstart-project/settings.xml
And on the ssh server side, this is the only thing I get:
Feb 23 14:33:02 hadoop10 sshd[23804]: Connection closed by 192.168.100.26
I also tried replacing scpexe with scp. That didn't work either.
Try running it with the -X option:
mvn -X deploy -DskipTests --settings settings.xml
I suspect the problem may have to do with the user on the other side and their permissions (not necessarily file system permissions).
First, you could restart sshd on your server with -v (or -vvv for full debug) and thus get more logs in your security file.
Also, it looks to me like you are trying to scp to a server; have you tried using the org.apache.maven.wagon plugin?
See Mr Thivent's post: Uploading a File via SCP with Maven fails
You probably need the Maven Wagon Provider for SSH. Note that it must be added as a build extension.
<project>
    ...
    <build>
        <extensions>
            <extension>
                <groupId>org.apache.maven.wagon</groupId>
                <artifactId>wagon-ssh</artifactId>
                <version>2.10</version>
            </extension>
        </extensions>
    </build>
    ...
</project>
If you have this Wagon Provider, then the failure is likely due to your using an ancient version of Maven Wagon (you mentioned version 1.0-beta-6), which might work well with matching ancient versions (but certainly won't work well with the modern maven-dist-plugin).
Beta versions of Maven Wagon SSH do not need to be used. The current version as of now is 2.10.
Likewise, I don't believe you need to include the Maven Wagon Plugin, as the maven-dist-plugin doesn't use that plugin; it directly uses the more modern wagon-provider-api. You should only need the SSH Wagon extension.
Finally, you should run a remote sshd instance for testing, from the command line (so the information goes to the terminal) with enough debugging on to determine if you are reaching the test box, and if you are actually passing in valid credentials (as sshd considers them).
It is important to remember that sshd has specific configuration options which deny root logins, if they are set. If you have access to the remote sshd box, you might also want to verify the sshd settings.
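For example, the following sshd_config settings would have to permit the login attempted here, since the Maven server entry uses root with a password. The option names are standard OpenSSH ones; the values shown are illustrative, not a security recommendation.

```
# /etc/ssh/sshd_config on the target machine
PermitRootLogin yes          # root logins are often disabled by default
PasswordAuthentication yes   # password (not just publickey) auth must be allowed
```

After changing these, restart sshd and retry the deploy.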
