Jenkins-slave failing due to jarCache error - java

I have a Jenkins master running in one VM and Docker running on another VM, and I have set up everything required to use Docker as a build agent for the Jenkins master.
Now, when I run a simple job on the Docker build agent, I get the following error.
SSHLauncher{host='192.168.33.15', port=49162, credentialsId='dock-cont-pass', jvmOptions='', javaPath='', prefixStartSlaveCmd='', suffixStartSlaveCmd='', launchTimeoutSeconds=60, maxNumRetries=10, retryWaitTime=15, sshHostKeyVerificationStrategy=hudson.plugins.sshslaves.verifiers.NonVerifyingKeyVerificationStrategy, tcpNoDelay=true, trackCredentials=true}
[08/02/21 11:00:36] [SSH] Opening SSH connection to 192.168.33.15:49162.
[08/02/21 11:00:36] [SSH] WARNING: SSH Host Keys are not being verified. Man-in-the-middle attacks may be possible against this connection.
[08/02/21 11:00:37] [SSH] Authentication successful.
[08/02/21 11:00:37] [SSH] The remote user's environment is:
BASH=/usr/bin/bash
BASHOPTS=checkwinsize:cmdhist:complete_fullquote:extquote:force_fignore:globasciiranges:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=([0]="0")
BASH_ARGV=()
BASH_CMDS=()
BASH_EXECUTION_STRING=set
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="5" [1]="0" [2]="17" [3]="1" [4]="release" [5]="x86_64-pc-linux-gnu")
BASH_VERSION='5.0.17(1)-release'
DIRSTACK=()
EUID=1000
GROUPS=()
HOME=/home/jenkins
HOSTNAME=7bf4435f24c4
HOSTTYPE=x86_64
IFS=$' \t\n'
LOGNAME=jenkins
MACHTYPE=x86_64-pc-linux-gnu
MOTD_SHOWN=pam
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
PIPESTATUS=([0]="0")
PPID=72
PS4='+ '
PWD=/home/jenkins
SHELL=/bin/bash
SHELLOPTS=braceexpand:hashall:interactive-comments
SHLVL=1
SSH_CLIENT='192.168.33.10 37292 22'
SSH_CONNECTION='192.168.33.10 37292 172.17.0.3 22'
TERM=dumb
UID=1000
USER=jenkins
_=']'
Checking Java version in the PATH
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-8u292-b10-0ubuntu1~20.04-b10)
OpenJDK 64-Bit Server VM (build 25.292-b10, mixed mode)
[08/02/21 11:00:37] [SSH] Checking java version of /jdk/bin/java
Couldn't figure out the Java version of /jdk/bin/java
bash: /jdk/bin/java: No such file or directory
[08/02/21 11:00:37] [SSH] Checking java version of java
[08/02/21 11:00:37] [SSH] java -version returned 1.8.0_292.
[08/02/21 11:00:37] [SSH] Starting sftp client.
ERROR: [08/02/21 11:00:37] [SSH] SFTP failed. Copying via SCP.
java.io.IOException: The subsystem request failed.
at com.trilead.ssh2.channel.ChannelManager.requestSubSystem(ChannelManager.java:741)
at com.trilead.ssh2.Session.startSubSystem(Session.java:362)
at com.trilead.ssh2.SFTPv3Client.<init>(SFTPv3Client.java:100)
at com.trilead.ssh2.SFTPv3Client.<init>(SFTPv3Client.java:119)
at com.trilead.ssh2.jenkins.SFTPClient.<init>(SFTPClient.java:43)
at hudson.plugins.sshslaves.SSHLauncher.copyAgentJar(SSHLauncher.java:688)
at hudson.plugins.sshslaves.SSHLauncher.access$400(SSHLauncher.java:111)
at hudson.plugins.sshslaves.SSHLauncher$1.call(SSHLauncher.java:456)
at hudson.plugins.sshslaves.SSHLauncher$1.call(SSHLauncher.java:421)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: The server denied the request.
at com.trilead.ssh2.channel.ChannelManager.requestSubSystem(ChannelManager.java:737)
... 12 more
[08/02/21 11:00:37] [SSH] Copying latest remoting.jar...
Expanded the channel window size to 4MB
[08/02/21 11:00:37] [SSH] Starting agent process: cd "" && java -jar remoting.jar -workDir -jar-cache /remoting/jarCache
No argument is allowed: /remoting/jarCache
java -jar agent.jar [options...]
-agentLog FILE : Local agent error log destination (overrides
workDir)
-auth user:pass : If your Jenkins is security-enabled, specify
a valid user name and password.
-cert VAL : Specify additional X.509 encoded PEM
certificates to trust when connecting to
Jenkins root URLs. If starting with # then
the remainder is assumed to be the name of
the certificate file to read.
-connectTo HOST:PORT : make a TCP connection to the given host and
port, then start communication.
-cp (-classpath) PATH : add the given classpath elements to the
system classloader.
-failIfWorkDirIsMissing : Fails the initialization if the requested
workDir or internalDir are missing ('false'
by default) (default: false)
-help : Show this help message (default: false)
-internalDir VAL : Specifies a name of the internal files
within a working directory ('remoting' by
default) (default: remoting)
-jar-cache DIR : Cache directory that stores jar files sent
from the master
-jnlpCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP requests.
-jnlpUrl URL : instead of talking to the master via
stdin/stdout, emulate a JNLP client by
making a TCP connection to the master.
Connection parameters are obtained by
parsing the JNLP file.
-loggingConfig FILE : Path to the property file with
java.util.logging settings
-noKeepAlive : Disable TCP socket keep alive on connection
to the master. (default: false)
-noReconnect : Doesn't try to reconnect when a
communication fail, and exit instead
(default: false)
-proxyCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP authenticated proxy requests.
-secret HEX_SECRET : Agent connection secret to use instead of
-jnlpCredentials.
-tcp FILE : instead of talking to the master via
stdin/stdout, listens to a random local
port, write that port number to the given
file, then wait for the master to connect to
that port.
-text : encode communication with the master with
base64. Useful for running agent over 8-bit
unsafe protocol like telnet
-version : Shows the version of the remoting jar and
then exits (default: false)
-workDir FILE : Declares the working directory of the
remoting instance (stores cache and logs by
default) (default: -jar-cache)
Agent JVM has terminated. Exit code=0
[08/02/21 11:00:39] Launch failed - cleaning up connection
[08/02/21 11:00:39] [SSH] Connection closed.
How can I resolve this problem? I am using an SSH connection to connect to the Docker container.

I solved the problem above by changing the user in the Dockerfile for the build agent. Previously it was set to the root user; I have now set it to jenkins, so the Jenkins server uses "/home/jenkins/" as the agent's working directory (note in the log above that the agent was launched with cd "" and a -workDir option with no value, because no remote root directory was available).
You need to build an image from this Dockerfile and then point the Docker cloud configuration in Jenkins at that image.
You also need to change the credentials in Jenkins to match the new user.
FROM ubuntu:18.04
RUN mkdir -p /var/run/sshd
RUN apt -y update
RUN apt install -y openjdk-8-jdk
RUN apt install -y openssh-server
RUN apt install -y git
#RUN apt install -y maven
# Create the non-root jenkins user (UID 1000, home /home/jenkins) that the agent runs as
RUN useradd -m -d /home/jenkins -s /bin/bash -u 1000 jenkins
# Generate the host keys with the default key file paths and an empty passphrase
RUN ssh-keygen -A
ADD ./sshd_config /etc/ssh/sshd_config
RUN echo "jenkins:password123" | chpasswd
RUN chown -R jenkins:jenkins /home/jenkins/
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
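A minimal sketch of how to use it (the image tag jenkins-ssh-agent and the published port are illustrative; the Jenkins Docker cloud provisions the container itself and maps a random host SSH port, e.g. 49162 in the log above):
docker build -t jenkins-ssh-agent .
# optional manual check that the jenkins user can log in over SSH
docker run -d -p 2222:22 --name agent-test jenkins-ssh-agent
ssh -p 2222 jenkins@<docker-host-ip>
In the Jenkins Docker cloud template, reference this image, and update the dock-cont-pass credentials to the jenkins user with the password set via chpasswd.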

Related

Jenkins "Failed to install JDK" Exit code=-1

(Adding this here as I did not find an answer anywhere.)
I configured Jenkins to install the JDK automatically from the "Global Tools Configuration" > "JDK Installation" menu. The option works on all 14 nodes (a mix of Windows and Linux) except one: a Windows Server 2012 R2 (amd64) server with 20 executors, which has been running without issue for just under 3 years.
The log file referenced in the build's console (i.e. ...tools\hudson.model.JDK\install1873722508778839961log) is empty.
The build's console shows the following:
[EnvInject] - Loading node environment variables.
Installing E:\Jenkins_APA_8080\tools\hudson.model.JDK\Oracle_Java_8.0_191\jdk.exe
[Oracle_Java_8.0_191] $ E:\Jenkins_APA_8080\tools\hudson.model.JDK\Oracle_Java_8.0_191\jdk.exe /s ADDLOCAL="ToolsFeature" REBOOT=ReallySuppress INSTALLDIR=E:\Jenkins_APA_8080\tools\hudson.model.JDK\Oracle_Java_8.0_191 /L E:\Jenkins_APA_8080\tools\hudson.model.JDK\install1873722508778839961log
Failed to install JDK. Exit code=-1
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: hudson.AbortException
    at org.jenkinsci.plugins.envinject.util.RunHelper.getBuildVariables(RunHelper.java:137)
    at org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentWithoutJobPropertyObject(EnvInjectListener.java:235)
    at org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:51)
    at hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:542)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:462)
    at hudson.model.Run.execute(Run.java:1810)
    at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:543)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.AbortException
    at hudson.tools.JDKInstaller.install(JDKInstaller.java:292)
    at hudson.tools.JDKInstaller.performInstallation(JDKInstaller.java:157)
    at hudson.tools.InstallerTranslator.getToolHome(InstallerTranslator.java:72)
    at hudson.tools.ToolLocationNodeProperty.getToolHome(ToolLocationNodeProperty.java:109)
    at hudson.tools.ToolInstallation.translateFor(ToolInstallation.java:206)
    at hudson.model.JDK.forNode(JDK.java:148)
    at org.jenkinsci.plugins.envinject.util.RunHelper.getJDKVariables(RunHelper.java:111)
    at org.jenkinsci.plugins.envinject.util.RunHelper.getBuildVariables(RunHelper.java:135)
    ... 8 more
I logged onto the server as a local admin and attempted to run the JDK installation line shown in the build console:
E:\Jenkins_APA_8080\tools\hudson.model.JDK\Oracle_Java_8.0_191\jdk.exe /s ADDLOCAL="ToolsFeature" REBOOT=ReallySuppress INSTALLDIR=E:\Jenkins_APA_8080\tools\hudson.model.JDK\Oracle_Java_8.0_191 /L E:\Jenkins_APA_8080\tools\hudson.model.JDK
The installation seemed to run, and this time the log file contained text. I double-checked the owner and permissions on the Jenkins folders; they were owned by a local admin and not a domain admin (this is normal for our Jenkins installation).
However, the Log On credentials of the Jenkins service on this machine were set to a domain admin (not a local admin).
Changing the Jenkins service's Log On credentials resolved the issue. Even though this node had been running for a couple of years without issue, its credentials were not correct.
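If you need to do the same, the service account can be inspected and changed from an elevated command prompt (a sketch; the service name Jenkins and the account shown are placeholders for your installation), or via services.msc > Jenkins > Properties > Log On:
sc qc Jenkins
sc config Jenkins obj= ".\LocalAdminAccount" password= "********"
net stop Jenkins
net start Jenkins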

CentOS Java application error with SELinux

I have a CentOS box hosting a Drupal 7 site. I've attempted to run a Java application called Tika on it, to index files using Apache Solr search.
I keep running into an issue only when SELinux is enabled:
extract using tika: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f1ed9000000, 2555904, 1) failed; error='Permission denied' (errno=13)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 2555904 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/jvm-2356/hs_error.log
This does not happen if I disable SELinux. If I run the command over SSH it works fine, but not from the browser. This is the command being run:
java '-Dfile.encoding=UTF8' -cp '/var/www/drupal/sites/all/modules/contrib/apachesolr_attachments/tika' -jar '/var/www/drupal/sites/all/modules/contrib/apachesolr_attachments/tika/tika-app-1.11.jar' -t '/var/www/drupal/sites/all/modules/contrib/apachesolr_attachments/tests/test-tika.pdf'
Here is the log from SELinux at /var/log/audit/audit.log:
type=AVC msg=audit(1454636072.494:3351): avc: denied { execmem } for pid=11285 comm="java" scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:system_r:httpd_t:s0 tclass=process
type=SYSCALL msg=audit(1454636072.494:3351): arch=c000003e syscall=9 success=no exit=-13 a0=7fdfe5000000 a1=270000 a2=7 a3=32 items=0 ppid=2377 pid=11285 auid=506 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm="java" exe="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64/jre/bin/java" subj=unconfined_u:system_r:httpd_t:s0 key=(null)
Is there a way I can run this with SELinux enabled? I do not know the policy name of Tika (or should I use Java?) so I'm unsure where to go from here...
This worked for me...
I have tika at /var/apache-tika/tika-app-1.14.jar
setsebool -P httpd_execmem 1
chcon -t httpd_exec_t /var/apache-tika/tika-app-1.14.jar
Using the sealert tool (https://wiki.centos.org/HowTos/SELinux) helped track down the correct SELinux type.
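For reference, the sealert analysis that points to the relevant boolean/type can be generated directly from the audit log (sealert is provided by the setroubleshoot-server package):
sealert -a /var/log/audit/audit.log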
All of your context messages reference httpd_t, so I would run
/usr/sbin/getsebool -a | grep httpd
And experiment with enabling the booleans that show as off. It's been a while since I ran a database-backed website (Drupal, WordPress, etc.) on CentOS, but as I recall these two needed to be enabled:
httpd_can_network_connect
httpd_can_network_connect_db
To enable a boolean persistently, run
setsebool -P httpd_can_network_connect on
and so on.
The booleans you're looking for are:
httpd_execmem
httpd_read_user_content
How to find:
audit2why -i /var/log/audit/audit.log will tell you this.
Part of package: policycoreutils-python-utils
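If the booleans alone don't clear the denials, the same audit log can also be turned into a local policy module. A sketch, with an arbitrary module name my-tika:
grep java /var/log/audit/audit.log | audit2allow -M my-tika
semodule -i my-tika.pp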

Why does start-all.sh from root cause "failed to launch org.apache.spark.deploy.master.Master: JAVA_HOME is not set"?

I am trying to execute a Spark application, built in Scala IDE, through my standalone Spark service running on the Cloudera QuickStart VM 5.3.0.
My cloudera account's JAVA_HOME is /usr/java/default.
However, I am getting the error below when executing the start-all.sh command as the cloudera user:
[cloudera@localhost sbin]$ pwd
/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin
[cloudera@localhost sbin]$ ./start-all.sh
chown: changing ownership of `/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs': Operation not permitted
starting org.apache.spark.deploy.master.Master, logging to /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-cloudera-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/spark-daemon.sh: line 151: /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-cloudera-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out: Permission denied
failed to launch org.apache.spark.deploy.master.Master:
tail: cannot open `/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-cloudera-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out' for reading: No such file or directory
full log in /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-cloudera-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
cloudera@localhost's password:
localhost: chown: changing ownership of `/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs': Operation not permitted
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs/spark-cloudera-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
localhost: /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/spark-daemon.sh: line 151: /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs/spark-cloudera-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out: Permission denied
localhost: failed to launch org.apache.spark.deploy.worker.Worker:
localhost: tail: cannot open `/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs/spark-cloudera-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out' for reading: No such file or directory
localhost: full log in /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs/spark-cloudera-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
I had added export CMF_AGENT_JAVA_HOME=/usr/java/default to /etc/default/cloudera-scm-agent and ran sudo service cloudera-scm-agent restart. See How to set CMF_AGENT_JAVA_HOME.
I had also added export JAVA_HOME=/usr/java/default to the locate_java_home function definition in /usr/share/cmf/bin/cmf-server and restarted the cluster and the standalone Spark service.
But the error below keeps repeating when starting the Spark service as the root user:
[root@localhost spark]# sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
failed to launch org.apache.spark.deploy.master.Master:
JAVA_HOME is not set
full log in /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
root@localhost's password:
localhost: Connection closed by UNKNOWN
Can anybody suggest how to set JAVA_HOME so that the standalone Spark service starts on Cloudera Manager?
The solution turned out to be quite simple and straightforward: I added export JAVA_HOME=/usr/java/default to /root/.bashrc, and the Spark services then started successfully from the root user without the "JAVA_HOME is not set" error. Hope this helps somebody facing the same problem.
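For reference, the change boils down to the following (the JDK path is the one from the question; adjust it to your installation):
echo 'export JAVA_HOME=/usr/java/default' >> /root/.bashrc
source /root/.bashrc
sbin/start-all.sh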
Set the JAVA_HOME variable in ~/.bashrc as follows:
sudo gedit ~/.bashrc
Add this line to the file (the path of your installed JDK), exporting it so that child processes see it:
export JAVA_HOME="/usr/lib/jvm/java-11-openjdk-amd64"
Then run
source ~/.bashrc

Docker port isn't accessible from host

I have a new Spring Boot application that I just finished and am trying to deploy to Docker. Inside the container the application works fine. It uses port 9000 for user-facing requests and port 9100 for administrative tasks like health checks. When I start a Docker instance and try to access port 9000, I get the following error:
curl: (56) Recv failure: Connection reset by peer
After a lot of experimentation (via curl), I confirmed with several different configurations that the application functions fine inside the container, but when I try to map the ports to the host it doesn't connect. I've tried starting it with the following commands; none of them allow me to access the ports from the host.
docker run -P=true my-app
docker run -p 9000:9000 my-app
The workaround
The only approach that works is using the --net host option, but this doesn't allow me to run more than one container on that host.
docker run -d --net=host my-app
Experiments with ports and expose
I've used various versions of the Dockerfile exposing different ports such as 9000 and 9100 or just 9000. None of that helped. Here's my latest version:
FROM ubuntu
MAINTAINER redacted
RUN apt-get update
RUN apt-get install openjdk-7-jre-headless -y
RUN mkdir -p /opt/app
WORKDIR /opt/app
ADD ./target/oauth-authentication-1.0.0.jar /opt/app/service.jar
ADD config.properties /opt/app/config.properties
EXPOSE 9000
ENTRYPOINT java -Dext.properties.dir=/opt/app -jar /opt/app/service.jar
Hello World works
To make sure I can run a Spring Boot application, I tried Simplest-Spring-Boot-MVC-HelloWorld and it worked fine.
Nmap results
I used nmap to port-scan from the host and from inside the container:
From the host
root@my-docker-host:~# nmap 172.17.0.71 -p9000-9200
Starting Nmap 6.40 ( http://nmap.org ) at 2014-11-14 19:19 UTC
Nmap scan report for my-docker-host (172.17.0.71)
Host is up (0.0000090s latency).
Not shown: 200 closed ports
PORT STATE SERVICE
9100/tcp open jetdirect
MAC Address: F2:1A:ED:F4:07:7A (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 1.48 seconds
From the container
root@80cf20c0c1fa:/opt/app# nmap 127.0.0.1 -p9000-9200
Starting Nmap 6.40 ( http://nmap.org ) at 2014-11-14 19:20 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0000070s latency).
Not shown: 199 closed ports
PORT STATE SERVICE
9000/tcp open cslistener
9100/tcp open jetdirect
Nmap done: 1 IP address (1 host up) scanned in 2.25 seconds
The container is using Ubuntu
The hosts I've replicated this on are CentOS and Ubuntu.
This SO question seems similar but has very few details and no answers, so I thought I'd try to document my scenario a bit more.
I had a similar problem, in which specifying the host IP address as '127.0.0.1' meant the port wasn't properly forwarded to the host.
Setting the web server's bind address to '0.0.0.0' fixes the problem.
E.g. for my Node app, the following doesn't work:
app.listen(3000, '127.0.0.1')
Whereas the following does work:
app.listen(3000, '0.0.0.0')
Which I guess means that Docker, by default, exposes 0.0.0.0:containerPort -> local port.
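For the Spring Boot application in the question, the equivalent fix would be making sure the embedded server binds to all interfaces instead of the loopback address, e.g. by overriding the standard server.address property at startup (a sketch; the ports follow the question, and the override only matters if the address was pinned to 127.0.0.1 somewhere in config.properties):
java -Dext.properties.dir=/opt/app -jar /opt/app/service.jar --server.address=0.0.0.0 --server.port=9000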
You should run with docker run -P to publish all the ports declared with EXPOSE in the Dockerfile automatically; note that -P maps them to automatically assigned host ports, not necessarily the same numbers. Please see http://docs.docker.com/reference/run/#expose-incoming-ports
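With -P the host ports are assigned automatically, so docker port shows what was actually mapped (the container name my-app-1 is illustrative):
docker run -d -P --name my-app-1 my-app
docker port my-app-1 9000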

Deploying java application using capistrano

I am trying to deploy my Java application to a Tomcat server (on Windows) and I am getting the following error. Please give me some guidance on this connection error. I have admin privileges and the server is running locally.
C:\builds>cap local deploy
DL is deprecated, please use Fiddle
* 2013-04-01 14:19:06 executing `local'
* 2013-04-01 14:19:06 executing `deploy'
* 2013-04-01 14:19:06 executing `deploy:update'
** transaction: start
* 2013-04-01 14:19:06 executing `deploy:update_code'
* executing "xcopy C:/_Savita/app/my-app \"C:/builds/releases/20
130401084906\" /S/I/Y/Q/E && (echo > C:/builds/releases/20130401084906/REVISION
)"
servers: ["localhost"]
*** [deploy:update_code] rolling back
* executing "rm -rf C:/builds/releases/20130401084906; true"
servers: ["localhost"]
** [deploy:update_code] exception while rolling back: Capistrano::ConnectionErr
or, connection failed for: localhost (Errno::ECONNREFUSED: No connection could b
e made because the target machine actively refused it. - connect(2))
connection failed for: localhost (Errno::ECONNREFUSED: No connection could be ma
de because the target machine actively refused it. - connect(2))
Please find below the deploy script used
set :application, "myApp"
#set :scm, "git"
set :repository, "C:/_Savita/app/my-app"
#set :branch, "master"
default_run_options[:pty] = true
ssh_options[:forward_agent] = true

task :local do
  roles.clear
  server "localhost", :app
  set :user, "Savita Doddamani"
  set :java_home, "C:/Program Files (x86)/Java/jdk1.6.0_25"
  set :tomcat_home, "C:/Program Files (x86)/Apache Software Foundation/Tomcat 6.0"
  set :tomcat_manager, "user"
  set :tomcat_manager_password, "pwd"
  set :maven_home, "C:/_Savita/softwares/apache-maven-2.2.1"
  set :deploy_to, "C:/builds/"
  set :use_sudo, false

  namespace :tomcat do
    task :deploy do
      puts "==================Building with Maven======================" # Line 22
      run "export JAVA_HOME=#{java_home} && cd #{deploy_to}/ && #{maven_home}/bin/mvn clean install package -DskipTests"
      puts "==================Undeploy war======================" # Line 24
      run "curl --user #{tomcat_manager}:#{tomcat_manager_password} http://$CAPISTRANO:HOST$:8080/manager/text/undeploy?path=/#{application}"
      puts "==================Deploy war to Tomcat======================" # Line 26
      run "curl --upload-file #{deploy_to}/current/target/dist/local/#{application}*.war --user #{tomcat_manager}:#{tomcat_manager_password} http://$CAPISTRANO:HOST$:8080/manager/text/deploy?path=/#{application}"
    end
  end

  after "deploy", "tomcat:deploy" # Line 30
  after "tomcat:deploy", "deploy:cleanup" # keep only the last 5 releases
end
ECONNREFUSED is the error returned from the connect(2) system call. It means that the server process is not listening on TCP port 8080. Java takes time to start up, so you may be attempting to connect via curl too soon; or you have not configured Tomcat to listen on port 8080; or you have not started Tomcat at all.
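A quick sanity check before running the deploy is to hit Tomcat directly; if these also fail with connection refused, Tomcat is not running or not listening on 8080 (the manager URL and credentials mirror the deploy script above):
curl -I http://localhost:8080/
curl --user user:pwd http://localhost:8080/manager/text/list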
