I'm trying to deploy a Spark job on Kubernetes using kubectl apply -f <config_file.yml> (after building the Docker image from the Dockerfile below). The pod is successfully created on K8s, then quickly stops with a Failed status. Nothing in the logs helps to understand where the error comes from. Other jobs have been successfully deployed on the K8s cluster using the same Dockerfile and config file.
The Spark job is supposed to read data from a Kafka topic, parse it and output it to the console.
Any idea what might be causing the job to fail?
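For context, here is a minimal sketch of the kind of job described. It is illustrative only: the real entry point is spark.jobs.app.streaming.Main from the config below, the broker address and topic name are placeholders, and it assumes the spark-sql-kafka-0-10 package is bundled in myapp.jar.

package spark.jobs.app.streaming;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class Main {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("myapp")
                .getOrCreate();

        // Subscribe to the Kafka topic (broker and topic are placeholders)
        Dataset<Row> kafka = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "kafka:9092")
                .option("subscribe", "my-topic")
                .load();

        // "Parsing" here is just casting the binary key/value to strings
        Dataset<Row> parsed = kafka.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

        // Output each micro-batch to the console
        parsed.writeStream()
                .format("console")
                .outputMode("append")
                .start()
                .awaitTermination();
    }
}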
Dockerfile, built using docker build --rm -f "Dockerfile" -t xxxxxxxx:80/apache/myapp-test . && docker push xxxxxxxx:80/apache/myapp-test:
FROM xxxxxxxx:80/apache/spark:v2.4.4-gcs-prometheus
#USER root
ADD myapp.jar /jars
RUN adduser --no-create-home --system spark
RUN chown -R spark /prometheus /opt/spark
USER spark
config_file.yml:
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: myapp
  namespace: spark
  labels:
    app: myapp-test
    release: spark-2.4.4
spec:
  type: Java
  mode: cluster
  image: "xxxxxxxx:80/apache/myapp-test"
  imagePullPolicy: Always
  mainClass: spark.jobs.app.streaming.Main
  mainApplicationFile: "local:///jars/myapp.jar"
  sparkVersion: "2.4.4"
  restartPolicy:
    type: OnFailure
    onFailureRetries: 5
    onFailureRetryInterval: 30
    onSubmissionFailureRetries: 0
    onSubmissionFailureRetryInterval: 0
  driver:
    cores: 1
    memory: "1G"
    labels:
      version: 2.4.4
  monitoring:
    exposeDriverMetrics: true
    exposeExecutorMetrics: true
    prometheus:
      jmxExporterJar: "/prometheus/jmx_prometheus_javaagent-0.11.0.jar"
      port: 8090
  imagePullSecrets:
    - xxx
Logs:
++ id -u
+ myuid=100
++ id -g
+ mygid=65533
+ set +e
++ getent passwd 100
+ uidentry='spark:x:100:65533:Linux User,,,:/home/spark:/sbin/nologin'
+ set -e
+ '[' -z 'spark:x:100:65533:Linux User,,,:/home/spark:/sbin/nologin' ']'
+ SPARK_K8S_CMD=driver
+ case "$SPARK_K8S_CMD" in
+ shift 1
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -n '' ']'
+ PYSPARK_ARGS=
+ '[' -n '' ']'
+ R_ARGS=
+ '[' -n '' ']'
+ '[' '' == 2 ']'
+ '[' '' == 3 ']'
+ case "$SPARK_K8S_CMD" in
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /sbin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=192.168.225.14 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class spark.jobs.app.streaming.Main spark-internal
20/04/20 09:27:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.apache.spark.deploy.SparkSubmit$$anon$2).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Pod events as shown with kubectl describe pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned spark/myapp-driver to xxxxxxxx.preprod.local
Warning FailedMount 15m kubelet, xxxxxxxx.preprod.local MountVolume.SetUp failed for volume "spark-conf-volume" : configmap "myapp-1587388343593-driver-conf-map" not found
Warning DNSConfigForming 15m (x4 over 15m) kubelet, xxxxxxxx.preprod.local Search Line limits were exceeded, some search paths have been omitted, the applied search line is: spark.svc.cluster.local svc.cluster.local cluster.local preprod.local
Normal Pulling 15m kubelet, xxxxxxxx.preprod.local Pulling image "xxxxxxxx:80/apache/myapp-test"
Normal Pulled 15m kubelet, xxxxxxxx.preprod.local Successfully pulled image "xxxxxxxx:80/apache/myapp-test"
Normal Created 15m kubelet, xxxxxxxx.preprod.local Created container spark-kubernetes-driver
Normal Started 15m kubelet, xxxxxxxx.preprod.local Started container spark-kubernetes-driver
You have to review conf/spark-env.(sh|cmd). Start by configuring the logging.
Spark uses log4j for logging. You can configure it by adding a log4j.properties file in the conf directory. One way to start is to copy the existing log4j.properties.template located there.
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN
# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
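If the driver logs stay empty even after this, one hedged way to make the log4j configuration take effect in the image from the question (assuming Spark lives under /opt/spark, as the Dockerfile's chown suggests) is to bake the file into the image and rebuild:

# added to the Dockerfile from the question; the target path is an assumption
ADD log4j.properties /opt/spark/conf/log4j.properties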
I have a dataflow running in NiFi 1.12.0; the relevant properties from this installation are:
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
I am facing a migration issue in NiFi when I upgrade the base version from 1.12.0 to 1.16.3, which has the following properties.
nifi.sensitive.props.key=testPassword
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=NIFI_ARGON2_AES_GCM_256
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
I am getting the following exception when I execute the command to migrate the flow file.
nifi@nifi-service-0:/opt/nifi/nifi-toolkit-current/bin$ ./encrypt-config.sh -n $NIFI_HOME/conf/nifi.properties -f /opt/nifi/data/old_flow.xml.gz -s testPassword -x
[main] WARN org.apache.nifi.properties.ConfigEncryptionTool - The source nifi.properties and destination nifi.properties are identical [/opt/nifi/nifi-current/conf/nifi.properties] so the original will be overwritten
[main] WARN org.apache.nifi.properties.ConfigEncryptionTool - The source flow.xml.gz and destination flow.xml.gz are identical [/opt/nifi/data/old_flow.xml.gz] so the original will be overwritten
[main] WARN org.apache.nifi.properties.AbstractBootstrapPropertiesLoader - System Property [nifi.properties.file.path] not found: Using Relative Path [conf/nifi.properties]
[main] INFO org.apache.nifi.properties.NiFiPropertiesLoader - Loading Application Properties [/opt/nifi/nifi-current/conf/nifi.properties]
[main] INFO org.apache.nifi.properties.NiFiPropertiesLoader - Loading Application Properties [/opt/nifi/nifi-current/conf/nifi.properties]
[main] INFO org.apache.nifi.properties.ConfigEncryptionTool - Loaded NiFiProperties instance with 138 properties
[main] INFO org.apache.nifi.properties.NiFiPropertiesLoader - Loading Application Properties [/opt/nifi/nifi-current/conf/nifi.properties]
[main] INFO org.apache.nifi.properties.ConfigEncryptionTool - Migrating flow.xml file at /opt/nifi/data/old_flow.xml.gz. This could take a while if the flow XML is very large.
[main] ERROR org.apache.nifi.properties.ConfigEncryptionTool - Encountered an error: Decryption Failed with Algorithm [AES/GCM/NoPadding]
Encountered an error migrating flow content
usage: org.apache.nifi.properties.ConfigEncryptionTool [-h] [-v] [-n <file>] [-o <file>] [-l <file>] [-i <file>] [-a <file>] [-u <file>] [-f <file>] [-g <file>]
[-b <file>] [-S <protectionScheme>] [-k <keyhex>] [-e <keyhex>] [-H <protectionScheme>] [-p <password>] [-w <password>] [-r] [-m] [-x] [-s
<password|keyhex>] [-A <algorithm>] [-P <algorithm>] [-c]
This tool reads from a nifi.properties and/or login-identity-providers.xml file with plain sensitive configuration values, prompts the user for a root key, and
encrypts each value. It will replace the plain value with the protected value in the same file (or write to a new file if specified). It can also be used to
migrate already-encrypted values in those files or in flow.xml.gz to be encrypted with a new key.
Please help me solve this issue.
Solved it myself.
The issue can be resolved with the following steps:
1. Before migration, if you don't have nifi.sensitive.props.key set, set it using the following command: ${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh -f /opt/nifi/nifi-current/data/flow.xml.gz -p ${NIFI_HOME}/conf/nifi.properties -s <NEW_KEY_TO_SET> -x
2. Once the key is set, upgrade NiFi. Since the algorithm changed in the newer version, set it using the command: ${NIFI_HOME}/bin/nifi.sh set-sensitive-properties-algorithm <NEW_ALGORITHM>
3. Once the algorithm is set, encrypt again using the command: ${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh -f /opt/nifi/nifi-current/data/flow.xml.gz -p ${NIFI_HOME}/conf/nifi.properties -s <NEW_KEY_TO_SET> -x
Now all your files will be compatible with your latest version.
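Putting the steps together with the concrete values from the question (testPassword and NIFI_ARGON2_AES_GCM_256; ${NIFI_TOOLKIT_PAT} and ${NIFI_HOME} as defined in the original environment), the sequence would look roughly like this. It is a sketch, not verified against every NiFi release:

# 1. On 1.12.0, set the sensitive props key before upgrading
${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh -f /opt/nifi/nifi-current/data/flow.xml.gz \
    -p ${NIFI_HOME}/conf/nifi.properties -s testPassword -x
# 2. Upgrade NiFi from 1.12.0 to 1.16.3
# 3. Switch to the new default algorithm
${NIFI_HOME}/bin/nifi.sh set-sensitive-properties-algorithm NIFI_ARGON2_AES_GCM_256
# 4. Re-encrypt the flow with the same key
${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh -f /opt/nifi/nifi-current/data/flow.xml.gz \
    -p ${NIFI_HOME}/conf/nifi.properties -s testPassword -x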
I have been trying to install Elasticsearch, which for version 7.x seemed easy, whereas version 5.x is a pain in the neck. The whole ordeal exists because there is a slew of compatibility requirements between Elasticsearch, Django Haystack, Django CMS and other things. If someone has a nice table or a way to wrap one's head around that, I'd be happy to hear it.
As to the actual question: after installing ES 5.x, I cannot seem to get it working.
user@user-desktop:~/sites/project-web/project$ sudo systemctl restart elasticsearch
user@user-desktop:~/sites/project-web/project$ curl -X GET localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
user@user-desktop:~/sites/project-web/project$
The uncommented entries in /etc/elasticsearch/elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: project-search
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
transport.host: localhost
transport.tcp.port: 9300
#
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["0.0.0.0"]
#discovery.seed_hosts:["0.0.0.0"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
This is the status with which it fails:
user@user-desktop:~/sites/project-web/project$ systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2019-11-24 15:39:25 CST; 3min 54s ago
Docs: http://www.elastic.co
Process: 19098 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DI
Process: 19097 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 19098 (code=exited, status=1/FAILURE)
Nov 24 15:39:24 user-desktop systemd[1]: Starting Elasticsearch...
Nov 24 15:39:24 user-desktop systemd[1]: Started Elasticsearch.
Nov 24 15:39:24 user-desktop elasticsearch[19098]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Nov 24 15:39:25 user-desktop systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 15:39:25 user-desktop systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
In /var/log/elasticsearch/project-search.log I find the following error:
[2019-11-24T15:46:44,319][INFO ][o.e.n.Node ] [node-1] initializing ...
[2019-11-24T15:46:44,410][ERROR][o.e.b.Bootstrap ] Exception
org.elasticsearch.ElasticsearchException: java.io.IOException: failed to read [id:0, legacy:false, file:/var/lib/elasticsearch/nodes/0/_state/node-0.st]
at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:196) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:335) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeEnvironment.loadOrCreateNodeMetaData(NodeEnvironment.java:418) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:267) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.node.Node.<init>(Node.java:265) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.node.Node.<init>(Node.java:245) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:233) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:233) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.6.16.jar:5.6.16]
Caused by: java.io.IOException: failed to read [id:0, legacy:false, file:/var/lib/elasticsearch/nodes/0/_state/node-0.st]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:328) ~[elasticsearch-5.6.16.jar:5.6.16]
... 14 more
Caused by: java.lang.IllegalArgumentException: [node_meta_data] unknown field [node_version], parser not found
at org.elasticsearch.common.xcontent.ObjectParser.getParser(ObjectParser.java:399) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:159) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.xcontent.ObjectParser.apply(ObjectParser.java:183) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeMetaData$1.fromXContent(NodeMetaData.java:110) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeMetaData$1.fromXContent(NodeMetaData.java:94) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.read(MetaDataStateFormat.java:203) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:323) ~[elasticsearch-5.6.16.jar:5.6.16]
... 14 more
Could someone tell me what is going on? Any help on resolving this and getting ES to work would be appreciated.
Looks like an inconsistency issue between Elasticsearch versions. If you had data indexed previously with ES version 7.x, the data that instance left on disk is now incompatible with ES version 5.x.
Remove the elasticsearch data directory:
sudo rm -rf /var/lib/elasticsearch
and reinstall Elasticsearch.
Worked for me.
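If you would rather keep the old index data than delete it outright, a more cautious variant of the same idea (assuming the systemd setup from the question; the backup path is arbitrary) is:

sudo systemctl stop elasticsearch
# move the incompatible node state aside instead of deleting it
sudo mv /var/lib/elasticsearch /var/lib/elasticsearch.bak
sudo systemctl start elasticsearch
curl -X GET localhost:9200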
For Mac users using brew, first clean all brew files with
brew uninstall elasticsearch
rm -rf /usr/local/etc/elasticsearch
rm -rf /usr/local/var/lib/elasticsearch
Then reinstall your Elasticsearch version, for example
brew install elasticsearch@6
Make sure Elasticsearch is pointing to a compatible Java version
nano /usr/local/opt/elasticsearch@6/bin
Then change this line to point at your compatible Java version (openjdk@17 shown here):
JAVA_HOME="${JAVA_HOME:-/usr/local/opt/openjdk@17/libexec/openjdk.jdk/Contents/Home}" exec "/usr/local/Cellar/elasticsearch@6/6.8.23/libexec/bin/elasticsearch" "$@"
Run elasticsearch in your terminal:
elasticsearch
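To verify it came up, the same check from the question should now succeed:

curl -X GET localhost:9200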
I was following previous posts but am still not able to resolve the issue. I am trying to install ZooKeeper and start it to run Summingbird, which provides bolts/spouts to Storm for online and batch processing. I installed ZooKeeper version 3.4.6 first and was getting a ClassNotFoundException. After looking at the post
ClassNotFoundException for Zookeeper while building Storm
I downgraded the version to 3.3.6, and now I am not even able to start the ZooKeeper server. Any help will be really appreciated.
root@cp-1:/users/username/zookeeper-3.3.6/bin# ./zkServer.sh start
JMX enabled by default
Using config: /users/username/zookeeper-3.3.6/bin/../conf/zoo.cfg
Starting zookeeper ... ./zkServer.sh: 93: [: /tmp/zookeeper/: unexpected operator
./zkServer.sh: 103: ./zkServer.sh: cannot create /tmp/zookeeper/
The number of snapshots to retain in dataDir/zookeeper_server.pid: Directory nonexistent
FAILED TO WRITE PID
This is what my zoo.cfg file looks like:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper/
dataLogDir=/tmp/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=10.11.10.3:2888:3888
server.2=10.11.10.4:2888:3888
This is what the directory permissions look like:
drwxr-xr-x 2 username oppts-PG0 4096 Nov 25 14:35 zookeeper
drwxr-xr-x 3 root root 4096 Nov 25 14:46 logs
drwxr-xr-x 2 root root 4096 Nov 25 14:46 logs/zookeeper
As stated in the comments in zoo.cfg, you'd better not set dataDir to /tmp/zookeeper.
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
You can try setting dataDir to another directory that you created, and then restart zkServer.sh.
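A minimal sketch of that, assuming /var/lib/zookeeper and /var/log/zookeeper as the new locations. Note that with server.1/server.2 entries in zoo.cfg, each node also needs a myid file in its dataDir:

sudo mkdir -p /var/lib/zookeeper /var/log/zookeeper
sudo chown -R username:oppts-PG0 /var/lib/zookeeper /var/log/zookeeper
echo 1 | sudo tee /var/lib/zookeeper/myid    # write 2 on the second server
# then in conf/zoo.cfg:
#   dataDir=/var/lib/zookeeper
#   dataLogDir=/var/log/zookeeper
./zkServer.sh restart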
I have been referring to this Java Service Wrapper for Linux daemon guide. My wrapper.conf:
#encoding=UTF-8
# Configuration files must begin with a line specifying the encoding
# of the file.
#********************************************************************
# Wrapper License Properties (Ignored by Community Edition)
#********************************************************************
# Professional and Standard Editions of the Wrapper require a valid
# License Key to start. Licenses can be purchased or a trial license
# requested on the following pages:
# http://wrapper.tanukisoftware.com/purchase
# http://wrapper.tanukisoftware.com/trial
# Include file problems can be debugged by removing the first '#'
# from the following line:
##include.debug
# The Wrapper will look for either of the following optional files for a
# valid License Key. License Key properties can optionally be included
# directly in this configuration file.
#include ../conf/wrapper-license.conf
#include ../conf/wrapper-license-%WRAPPER_HOST_NAME%.conf
# The following property will output information about which License Key(s)
# are being found, and can aid in resolving any licensing problems.
#wrapper.license.debug=TRUE
#********************************************************************
# Wrapper Localization
#********************************************************************
# Specify the locale which the Wrapper should use. By default the system
# locale is used.
#wrapper.lang=en_US # en_US or ja_JP
# Specify the location of the Wrapper's language resources. If these are
# missing, the Wrapper will default to the en_US locale.
wrapper.lang.folder=../lang
#********************************************************************
# Wrapper Java Properties
#********************************************************************
# Java Application
# Locate the java binary on the system PATH:
wrapper.java.command=java
wrapper.working.dir =/home/badvelip/pras/JavaServiceWrapper/
# Specify a specific java binary:
#set.JAVA_HOME=/java/path
#wrapper.java.command=%JAVA_HOME%/bin/java
# Tell the Wrapper to log the full generated Java command line.
wrapper.java.command.loglevel=INFO
# Java Main class. This class must implement the WrapperListener interface
# or guarantee that the WrapperManager class is initialized. Helper
# classes are provided to do this for you. See the Integration section
# of the documentation for details.
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperJarApp
# Java Classpath (include wrapper.jar) Add class path elements as
# needed starting from 1
wrapper.java.classpath.2=/home/badvelip/pras/JavaServiceWrapper/bin/lib/wrapper.jar
# Java Library Path (location of Wrapper.DLL or libwrapper.so)
wrapper.java.library.path.1=/home/badvelip/pras/JavaServiceWrapper/bin/lib/libwrapper.so
# Java Bits. On applicable platforms, tells the JVM to run in 32 or 64-bit mode.
wrapper.java.additional.auto_bits=TRUE
# Java Additional Parameters
wrapper.java.additional.1=
# Initial Java Heap Size (in MB)
#wrapper.java.initmemory=3
# Maximum Java Heap Size (in MB)
#wrapper.java.maxmemory=64
# Application parameters. Add parameters as needed starting from 1
#wrapper.app.parameter.1=/home/badvelip/pras/JavaServiceWrapper/bin/myDaemon.jar
#********************************************************************
# Wrapper Logging Properties
#********************************************************************
# Enables Debug output from the Wrapper.
wrapper.debug=TRUE
# Format of output for the console. (See docs for formats)
wrapper.console.format=PM
# Log Level for console output. (See docs for log levels)
wrapper.console.loglevel=INFO
# Log file to use for wrapper output logging.
wrapper.logfile=../logs/wrapper.log
# Format of output for the log file. (See docs for formats)
wrapper.logfile.format=LPTM
# Log Level for log file output. (See docs for log levels)
wrapper.logfile.loglevel=INFO
# Maximum size that the log file will be allowed to grow to before
# the log is rolled. Size is specified in bytes. The default value
# of 0, disables log rolling. May abbreviate with the 'k' (kb) or
# 'm' (mb) suffix. For example: 10m = 10 megabytes.
wrapper.logfile.maxsize=0
# Maximum number of rolled log files which will be allowed before old
# files are deleted. The default value of 0 implies no limit.
wrapper.logfile.maxfiles=0
# Log Level for sys/event log output. (See docs for log levels)
wrapper.syslog.loglevel=NONE
wrapper.on_exit.default=RESTART
#********************************************************************
# Wrapper General Properties
#********************************************************************
# Allow for the use of non-contiguous numbered properties
wrapper.ignore_sequence_gaps=TRUE
# Do not start if the pid file already exists.
wrapper.pidfile.strict=TRUE
# Title to use when running as a console
wrapper.console.title=Test Wrapper Sample Application
And my log file shows this:
jvm 1 | WrapperJarApp Usage:
jvm 1 | java org.tanukisoftware.wrapper.WrapperJarApp {jar_file} [app_arguments]
jvm 1 |
jvm 1 | Where:
jvm 1 | jar_file: The jar file to run.
jvm 1 | app_arguments: The arguments that would normally be passed to the
jvm 1 | application.
jvm 1 | WrapperManager Debug: WrapperManager.stop(1) called by thread: main
jvm 1 | WrapperManager Debug: Backend not connected, not sending packet STOP : 1
jvm 1 | WrapperManager Debug: Pausing for 1,000ms to allow a clean shutdown...
jvm 1 | WrapperManager Debug: Startup runner thread started.
jvm 1 | WrapperManager Debug: Thread, main, handling the shutdown process.
jvm 1 | WrapperManager Debug: shutdownJVM(1) Thread: main
jvm 1 | WrapperManager Debug: wait for 0 shutdown locks to be released.
jvm 1 | WrapperManager Debug: Backend not connected, not sending packet STOPPED : 1
wrapper | Signal trapped. Details:
wrapper | signal number=17 (SIGCHLD), source="unknown"
wrapper | Received SIGCHLD, checking JVM process status.
wrapper | JVM process exited with a code of 1, setting the wrapper exit code to 1.
wrapper | JVM exited while loading the application.
jvm 1 | WrapperManager Debug: calling System.exit(1)
I have tried all possible ways to solve this. I have also seen the other queries related to this topic, but with no result.
If anyone has faced this problem, please help me with it.
The link which I have referred to is
http://opentodo.net/2013/03/deploying-java-unix-daemon-with-java-service-wrapper/
I tried all the steps exactly as described there.
Regards,
Rushita
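From the log above, WrapperJarApp printed its usage help, which suggests the jar file argument never reached it. A hedged guess rather than a confirmed fix: uncomment the application parameter that is commented out in the posted wrapper.conf, so the jar is actually passed to WrapperJarApp:

# was commented out in the posted wrapper.conf
wrapper.app.parameter.1=/home/badvelip/pras/JavaServiceWrapper/bin/myDaemon.jar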
I have an application called "Update.jar" that I'm trying to use with the Java Service Wrapper (JSW), but when I start the service (either from SERVICES.MSC or StartUpdate-NT.bat) the application doesn't run, even though the service shows as started in SERVICES.MSC. There should be an icon displayed in the system tray throughout the length of the runtime.
I've successfully launched the app:
by executing the .jar
by running Update.bat in [wrapper]/bin/ directory
by executing from the command line
Below is my wrapper.conf file:
#encoding=UTF-8
# Configuration files must begin with a line specifying the encoding
# of the file.
#********************************************************************
# Wrapper License Properties (Ignored by Community Edition)
#********************************************************************
# Professional and Standard Editions of the Wrapper require a valid
# License Key to start. Licenses can be purchased or a trial license
# requested on the following pages:
# http://wrapper.tanukisoftware.com/purchase
# http://wrapper.tanukisoftware.com/trial
# Include file problems can be debugged by removing the first '#'
# from the following line:
#include.debug
# The Wrapper will look for either of the following optional files for a
# valid License Key. License Key properties can optionally be included
# directly in this configuration file.
#include ../conf/wrapper-license.conf
#include ../conf/wrapper-license-%WRAPPER_HOST_NAME%.conf
# The following property will output information about which License Key(s)
# are being found, and can aid in resolving any licensing problems.
#wrapper.license.debug=TRUE
#********************************************************************
# Wrapper Localization
#********************************************************************
# Specify the locale which the Wrapper should use. By default the system
# locale is used.
#wrapper.lang=en_US # en_US or ja_JP
# Specify the location of the Wrapper's language resources. If these are
# missing, the Wrapper will default to the en_US locale.
wrapper.lang.folder=../lang
#********************************************************************
# Wrapper Java Properties
#********************************************************************
# Java Application
# Locate the java binary on the system PATH:
wrapper.java.command=java
# Specify a specific java binary:
#set.JAVA_HOME=/java/path
#wrapper.java.command=%JAVA_HOME%/bin/java
# Tell the Wrapper to log the full generated Java command line.
#wrapper.java.command.loglevel=INFO
# Java Main class. This class must implement the WrapperListener interface
# or guarantee that the WrapperManager class is initialized. Helper
# classes are provided to do this for you. See the Integration section
# of the documentation for details.
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp update.Tray
# Java Classpath (include wrapper.jar) Add class path elements as
# needed starting from 1
wrapper.java.classpath.1=../lib/wrapper.jar
wrapper.java.classpath.2=../lib/Update.jar
# Java Library Path (location of Wrapper.DLL or libwrapper.so)
wrapper.java.library.path.1=../lib
# Java Bits. On applicable platforms, tells the JVM to run in 32 or 64-bit mode.
wrapper.java.additional.auto_bits=TRUE
# Java Additional Parameters
wrapper.java.additional.1=
# Initial Java Heap Size (in MB)
#wrapper.java.initmemory=3
# Maximum Java Heap Size (in MB)
#wrapper.java.maxmemory=64
# Application parameters. Add parameters as needed starting from 1
#wrapper.app.parameter.1=update.Tray
#********************************************************************
# Wrapper Logging Properties
#********************************************************************
# Enables Debug output from the Wrapper.
# wrapper.debug=TRUE
# Format of output for the console. (See docs for formats)
wrapper.console.format=PM
# Log Level for console output. (See docs for log levels)
wrapper.console.loglevel=INFO
# Log file to use for wrapper output logging.
wrapper.logfile=../logs/wrapper.log
# Format of output for the log file. (See docs for formats)
wrapper.logfile.format=LPTM
# Log Level for log file output. (See docs for log levels)
wrapper.logfile.loglevel=INFO
# Maximum size that the log file will be allowed to grow to before
# the log is rolled. Size is specified in bytes. The default value
# of 0, disables log rolling. May abbreviate with the 'k' (kb) or
# 'm' (mb) suffix. For example: 10m = 10 megabytes.
wrapper.logfile.maxsize=0
# Maximum number of rolled log files which will be allowed before old
# files are deleted. The default value of 0 implies no limit.
wrapper.logfile.maxfiles=0
# Log Level for sys/event log output. (See docs for log levels)
wrapper.syslog.loglevel=NONE
#********************************************************************
# Wrapper General Properties
#********************************************************************
# Allow for the use of non-contiguous numbered properties
wrapper.ignore_sequence_gaps=TRUE
# Title to use when running as a console
wrapper.console.title=Test Wrapper Sample Application
#********************************************************************
# Wrapper JVM Checks
#********************************************************************
# Detect DeadLocked Threads in the JVM. (Requires Standard Edition)
wrapper.check.deadlock=TRUE
wrapper.check.deadlock.interval=10
wrapper.check.deadlock.action=RESTART
wrapper.check.deadlock.output=FULL
# Out Of Memory detection.
# (Ignore output from dumping the configuration to the console. This is only needed by the TestWrapper sample application.)
wrapper.filter.trigger.999=wrapper.filter.trigger.*java.lang.OutOfMemoryError
wrapper.filter.allow_wildcards.999=TRUE
wrapper.filter.action.999=NONE
# (Simple match)
wrapper.filter.trigger.1000=java.lang.OutOfMemoryError
# (Only match text in stack traces if -XX:+PrintClassHistogram is being used.)
#wrapper.filter.trigger.1000=Exception in thread "*" java.lang.OutOfMemoryError
#wrapper.filter.allow_wildcards.1000=TRUE
wrapper.filter.action.1000=RESTART
wrapper.filter.message.1000=The JVM has run out of memory.
#********************************************************************
# Wrapper Email Notifications. (Requires Professional Edition)
#********************************************************************
# Common Event Email settings.
#wrapper.event.default.email.debug=TRUE
#wrapper.event.default.email.smtp.host=<SMTP_Host>
#wrapper.event.default.email.smtp.port=25
#wrapper.event.default.email.subject=[%WRAPPER_HOSTNAME%:%WRAPPER_NAME%:%WRAPPER_EVENT_NAME%] Event Notification
#wrapper.event.default.email.sender=<Sender email>
#wrapper.event.default.email.recipient=<Recipient email>
# Configure the log attached to event emails.
#wrapper.event.default.email.attach_log=TRUE
#wrapper.event.default.email.maillog.lines=50
#wrapper.event.default.email.maillog.format=LPTM
#wrapper.event.default.email.maillog.loglevel=INFO
# Enable specific event emails.
#wrapper.event.wrapper_start.email=TRUE
#wrapper.event.jvm_prelaunch.email=TRUE
#wrapper.event.jvm_start.email=TRUE
#wrapper.event.jvm_started.email=TRUE
#wrapper.event.jvm_deadlock.email=TRUE
#wrapper.event.jvm_stop.email=TRUE
#wrapper.event.jvm_stopped.email=TRUE
#wrapper.event.jvm_restart.email=TRUE
#wrapper.event.jvm_failed_invocation.email=TRUE
#wrapper.event.jvm_max_failed_invocations.email=TRUE
#wrapper.event.jvm_kill.email=TRUE
#wrapper.event.jvm_killed.email=TRUE
#wrapper.event.jvm_unexpected_exit.email=TRUE
#wrapper.event.wrapper_stop.email=TRUE
# Specify custom mail content
wrapper.event.jvm_restart.email.body=The JVM was restarted.\n\nPlease check on its status.\n
#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
# using this configuration file has been installed as a service.
# Please uninstall the service before modifying this section. The
# service can then be reinstalled.
# Name of the service
wrapper.name=Auto-update
# Display name of the service
wrapper.displayname=Auto-update
# Description of the service
wrapper.description=Auto-update
# Service dependencies. Add dependencies as needed starting from 1
wrapper.ntservice.dependency.1=
# Mode in which the service is installed. AUTO_START, DELAY_START or DEMAND_START
wrapper.ntservice.starttype=AUTO_START
# Allow the service to interact with the desktop.
wrapper.ntservice.interactive=true
Wrapper.log contents:
STATUS | wrapper | 2011/08/10 10:31:56 | Auto-update service installed.
STATUS | wrapper | 2011/08/10 10:32:07 | Starting the Auto-update service...
STATUS | wrapper | 2011/08/10 10:32:07 | --> Wrapper Started as Service
STATUS | wrapper | 2011/08/10 10:32:07 | Java Service Wrapper Community Edition 32-bit 3.5.10
STATUS | wrapper | 2011/08/10 10:32:07 | Copyright (C) 1999-2011 Tanuki Software, Ltd. All Rights Reserved.
STATUS | wrapper | 2011/08/10 10:32:07 | http://wrapper.tanukisoftware.com
STATUS | wrapper | 2011/08/10 10:32:07 |
STATUS | wrapper | 2011/08/10 10:32:08 | Launching a JVM...
INFO | jvm 1 | 2011/08/10 10:32:08 | WrapperManager: Initializing...
STATUS | wrapper | 2011/08/10 10:32:11 | Auto-update started.
Could someone please point me in the right direction?
Can you set the log level to debug by setting
wrapper.debug=TRUE
then rerun your application as a service and post the log? From the log file you posted, your application seems to run... what happens after starting? Does it shut down?
What OS are you running?
Please note that starting with Windows Vista, all services run in an isolated desktop (session 0); because of that, you won't be able to see the tray icon on your user desktop...
A small correction (unrelated to your problem): please also change this in your conf file:
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp update.Tray
to
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp
wrapper.app.parameter.1=update.Tray
cheers,