Failed to write pid when installing ZooKeeper - java

I was following previous posts but am still not able to resolve the issue. I am trying to install ZooKeeper and start it in order to run Summingbird, which provides bolts/spouts to Storm for online and batch processing. I installed ZooKeeper version 3.4.6 first and was getting a ClassNotFoundException. After looking at the post
ClassNotFoundException for Zookeeper while building Storm
I downgraded the version to 3.3.6 and now I am not even able to start the zookeeper server. Any help will be really appreciated.
root@cp-1:/users/username/zookeeper-3.3.6/bin# ./zkServer.sh start
JMX enabled by default
Using config: /users/username/zookeeper-3.3.6/bin/../conf/zoo.cfg
Starting zookeeper ... ./zkServer.sh: 93: [: /tmp/zookeeper/: unexpected operator
./zkServer.sh: 103: ./zkServer.sh: cannot create /tmp/zookeeper/
The number of snapshots to retain in dataDir/zookeeper_server.pid: Directory nonexistent
FAILED TO WRITE PID
This is what my zoo.cfg file looks like:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper/
dataLogDir=/tmp/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=10.11.10.3:2888:3888
server.2=10.11.10.4:2888:3888
This is what the directory permissions look like:
drwxr-xr-x 2 username oppts-PG0 4096 Nov 25 14:35 zookeeper
drwxr-xr-x 3 root root 4096 Nov 25 14:46 logs
drwxr-xr-x 2 root root 4096 Nov 25 14:46 logs/zookeeper

As stated in the comments in zoo.cfg, you had better not set dataDir to /tmp/zookeeper.
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
You can try setting dataDir to another directory that you created, and then restart zkServer.sh.
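For example, a minimal sketch of that change (the directory path here is just an assumption; use any location the zookeeper process can write to):
# create a data directory outside /tmp
mkdir -p /users/username/zookeeper-data
# then in conf/zoo.cfg point dataDir at it, e.g.:
#   dataDir=/users/username/zookeeper-data
# and restart
./zkServer.sh stop
./zkServer.sh start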

Related

Spark job on kubernetes fails without specific error

I'm trying to deploy a Spark job on Kubernetes, using kubectl apply -f <config_file.yml> (after building a Docker image based on the Dockerfile). The pod is successfully created on K8s, then quickly stops with a Failed status. Nothing in the logs helps me understand where the error comes from. Other jobs have been successfully deployed on the K8s cluster using the same Dockerfile and config file.
The Spark job is supposed to read data from a Kafka topic, parse it and output it to the console.
Any idea what might be causing the job to fail?
Dockerfile, built using docker build --rm -f "Dockerfile" xxxxxxxx:80/apache/myapp-test . && docker push xxxxxxxx:80/apache/myapp-test :
FROM xxxxxxxx:80/apache/spark:v2.4.4-gcs-prometheus
#USER root
ADD myapp.jar /jars
RUN adduser --no-create-home --system spark
RUN chown -R spark /prometheus /opt/spark
USER spark
config_file.yml:
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: myapp
  namespace: spark
  labels:
    app: myapp-test
    release: spark-2.4.4
spec:
  type: Java
  mode: cluster
  image: "xxxxxxxx:80/apache/myapp-test"
  imagePullPolicy: Always
  mainClass: spark.jobs.app.streaming.Main
  mainApplicationFile: "local:///jars/myapp.jar"
  sparkVersion: "2.4.4"
  restartPolicy:
    type: OnFailure
    onFailureRetries: 5
    onFailureRetryInterval: 30
    onSubmissionFailureRetries: 0
    onSubmissionFailureRetryInterval: 0
  driver:
    cores: 1
    memory: "1G"
    labels:
      version: 2.4.4
  monitoring:
    exposeDriverMetrics: true
    exposeExecutorMetrics: true
    prometheus:
      jmxExporterJar: "/prometheus/jmx_prometheus_javaagent-0.11.0.jar"
      port: 8090
  imagePullSecrets:
    - xxx
Logs :
++ id -u
+ myuid=100
++ id -g
+ mygid=65533
+ set +e
++ getent passwd 100
+ uidentry='spark:x:100:65533:Linux User,,,:/home/spark:/sbin/nologin'
+ set -e
+ '[' -z 'spark:x:100:65533:Linux User,,,:/home/spark:/sbin/nologin' ']'
+ SPARK_K8S_CMD=driver
+ case "$SPARK_K8S_CMD" in
+ shift 1
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ + sed sort -t_ 's/[^=]*=\(.*\)/\1/g'-k4
-n
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -n '' ']'
+ PYSPARK_ARGS=
+ '[' -n '' ']'
+ R_ARGS=
+ '[' -n '' ']'
+ '[' '' == 2 ']'
+ '[' '' == 3 ']'
+ case "$SPARK_K8S_CMD" in
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /sbin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=192.168.225.14 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class spark.jobs.app.streaming.Main spark-internal
20/04/20 09:27:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.apache.spark.deploy.SparkSubmit$$anon$2).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Pod events as shown with kubectl describe pod :
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned spark/myapp-driver to xxxxxxxx.preprod.local
Warning FailedMount 15m kubelet, xxxxxxxx.preprod.local MountVolume.SetUp failed for volume "spark-conf-volume" : configmap "myapp-1587388343593-driver-conf-map" not found
Warning DNSConfigForming 15m (x4 over 15m) kubelet, xxxxxxxx.preprod.local Search Line limits were exceeded, some search paths have been omitted, the applied search line is: spark.svc.cluster.local svc.cluster.local cluster.local preprod.local
Normal Pulling 15m kubelet, xxxxxxxx.preprod.local Pulling image "xxxxxxxx:80/apache/myapp-test"
Normal Pulled 15m kubelet, xxxxxxxx.preprod.local Successfully pulled image "xxxxxxxx:80/apache/myapp-test"
Normal Created 15m kubelet, xxxxxxxx.preprod.local Created container spark-kubernetes-driver
Normal Started 15m kubelet, xxxxxxxx.preprod.local Started container spark-kubernetes-driver
You have to review conf/spark-env.(sh|cmd).
Start by configuring the logging:
Spark uses log4j for logging. You can configure it by adding a
log4j.properties file in the conf directory. One way to start is to
copy the existing log4j.properties.template located there.
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN
# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
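As a rough sketch of how to wire this in (assuming Spark lives under /opt/spark inside the image, as the trace above suggests; the paths are assumptions, not part of the original answer), copy the template and point the driver JVM at it when submitting:
# inside the image / container
cp /opt/spark/conf/log4j.properties.template /opt/spark/conf/log4j.properties
# pass the file to the driver JVM (spark.driver.extraJavaOptions is a standard Spark setting)
/opt/spark/bin/spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/opt/spark/conf/log4j.properties" \
  ...
With an appender configured, the driver should at least emit enough logging to show why it exits.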

Elasticsearch 5.x Fails to Start

I have been trying to install Elasticsearch, which for version 7.x seemed easy, whereas version 5.x is a pain in the neck. The whole ordeal exists because there is a slew of compatibility requirements between Elasticsearch, Django Haystack, Django CMS and other things. If someone has a nice table or a way to wrap their head around that, I'd be happy to hear it.
As to the actual question, after installing ES 5.x, I cannot seem to get it working.
user@user-desktop:~/sites/project-web/project$ sudo systemctl restart elasticsearch
user@user-desktop:~/sites/project-web/project$ curl -X GET localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
user@user-desktop:~/sites/project-web/project$
Here is /etc/elasticsearch/elasticsearch.yml; the uncommented entries are the ones I set:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: project-search
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
transport.host: localhost
transport.tcp.port: 9300
#
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["0.0.0.0"]
#discovery.seed_hosts:["0.0.0.0"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
This is the status with which it fails:
user@user-desktop:~/sites/project-web/project$ systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2019-11-24 15:39:25 CST; 3min 54s ago
Docs: http://www.elastic.co
Process: 19098 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DI
Process: 19097 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 19098 (code=exited, status=1/FAILURE)
Nov 24 15:39:24 user-desktop systemd[1]: Starting Elasticsearch...
Nov 24 15:39:24 user-desktop systemd[1]: Started Elasticsearch.
Nov 24 15:39:24 user-desktop elasticsearch[19098]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Nov 24 15:39:25 user-desktop systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 15:39:25 user-desktop systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
In /var/log/elasticsearch/project-search.log I find the following error:
[2019-11-24T15:46:44,319][INFO ][o.e.n.Node ] [node-1] initializing ...
[2019-11-24T15:46:44,410][ERROR][o.e.b.Bootstrap ] Exception
org.elasticsearch.ElasticsearchException: java.io.IOException: failed to read [id:0, legacy:false, file:/var/lib/elasticsearch/nodes/0/_state/node-0.st]
at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:196) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:335) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeEnvironment.loadOrCreateNodeMetaData(NodeEnvironment.java:418) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:267) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.node.Node.<init>(Node.java:265) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.node.Node.<init>(Node.java:245) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:233) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:233) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.6.16.jar:5.6.16]
Caused by: java.io.IOException: failed to read [id:0, legacy:false, file:/var/lib/elasticsearch/nodes/0/_state/node-0.st]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:328) ~[elasticsearch-5.6.16.jar:5.6.16]
... 14 more
Caused by: java.lang.IllegalArgumentException: [node_meta_data] unknown field [node_version], parser not found
at org.elasticsearch.common.xcontent.ObjectParser.getParser(ObjectParser.java:399) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:159) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.xcontent.ObjectParser.apply(ObjectParser.java:183) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeMetaData$1.fromXContent(NodeMetaData.java:110) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeMetaData$1.fromXContent(NodeMetaData.java:94) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.read(MetaDataStateFormat.java:203) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:323) ~[elasticsearch-5.6.16.jar:5.6.16]
... 14 more
Could someone tell me what is going on? Any help on resolving this and getting ES to work would be appreciated.
This looks like an inconsistency between Elasticsearch versions. If you previously indexed data with ES 7.x, the data that instance left on disk is incompatible with ES 5.x.
Remove the Elasticsearch data directory:
sudo rm -rf /var/lib/elasticsearch
Then reinstall Elasticsearch.
That worked for me.
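If you want to keep the old data around just in case, a slightly safer variant is to move it aside instead of deleting it (a sketch assuming the default Debian/Ubuntu package paths and systemd, which may differ on your system):
sudo systemctl stop elasticsearch
sudo mv /var/lib/elasticsearch /var/lib/elasticsearch.bak   # keep the 7.x data as a backup
sudo apt-get install --reinstall elasticsearch
sudo systemctl start elasticsearch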
For Mac users using Homebrew, first clean up all brew files:
brew uninstall elasticsearch
rm -rf /usr/local/etc/elasticsearch
rm -rf /usr/local/var/lib/elasticsearch
Then reinstall your Elasticsearch version, for example:
brew install elasticsearch@6
Make sure Elasticsearch is pointing to a compatible Java version:
nano /usr/local/opt/elasticsearch@6/bin
Then change this line to point at your compatible version:
JAVA_HOME="${JAVA_HOME:-/usr/local/opt/openjdk@YOUR_COMPATIBLE_VERSION/libexec/openjdk.jdk/Contents/Home}" exec "/usr/local/Cellar/elasticsearch@6/6.8.23/libexec/bin/elasticsearch" "$@"
Then run elasticsearch in your terminal:
elasticsearch

Unable to install Jetty9 as a service in Ubuntu

I've followed the docs in order to install Jetty 9 as a service, but whenever I run
service jetty start
it fails with no messages. My JETTY_HOME is /opt/jetty9 and contains the home distribution for version 9.4.14. I've also created my JETTY_BASE at /usr/share/jetty9 with my webapp and modules.
Both Jetty Home and Base are owned by the user jetty. I've then symlinked to my init.d folder as:
ln -s /opt/jetty9/bin/jetty.sh /etc/init.d/jetty
Then I created a /etc/default/jetty file with the following content:
# change to 1 to prevent Jetty from starting
NO_START=0
# change to 'no' or uncomment to use the default setting in /etc/default/rcS
VERBOSE=yes
# Run Jetty as this user ID (default: jetty)
# Set this to an empty string to prevent Jetty from starting automatically
JETTY_USER=jetty
# The home directory of the Java Runtime Environment (JRE). You need at least
# Java 6. If JAVA_HOME is not set, some common directories for OpenJDK and
# the Oracle JDK are tried.
#JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
# Extra options to pass to the JVM
#JAVA_OPTIONS="-Xmx256m -Djava.awt.headless=true"
# Timeout in seconds for the shutdown of all webapps
#JETTY_SHUTDOWN=30
# Additional arguments to pass to Jetty
#JETTY_ARGS=
# Jetty uses a directory to store temporary files like unpacked webapps
TMPDIR=/opt/jetty9/tmp
JETTY_HOME=/opt/jetty9
JETTY_BASE=/usr/share/jetty9
# Default for number of days to keep old log files in /var/log/jetty9/
#LOGFILE_DAYS=14
# If you run Jetty on port numbers that are all higher than 1023, then you
# do not need authbind. It is used for binding Jetty to lower port numbers.
# (yes/no, default: no)
#AUTHBIND=yes
JETTY_HOST=0.0.0.0
If I start Jetty using java -jar $JETTY_HOME/start.jar in my base folder, it works with no problem. Also, if I run
service jetty supervise
it also runs with no issues, but when I call start it fails with:
root#app:/usr/share/jetty9# service jetty start
Job for jetty.service failed because the control process exited with error code.
See "systemctl status jetty.service" and "journalctl -xe" for details.
root#app:/usr/share/jetty9# service jetty status
● jetty.service - LSB: Jetty start script.
Loaded: loaded (/etc/init.d/jetty; generated)
Active: failed (Result: exit-code) since Mon 2018-12-03 15:05:26 UTC; 14s ago
Docs: man:systemd-sysv-generator(8)
Process: 21162 ExecStop=/etc/init.d/jetty stop (code=exited, status=0/SUCCESS)
Process: 21202 ExecStart=/etc/init.d/jetty start (code=exited, status=1/FAILURE)
Dec 03 15:05:22 app systemd[1]: Stopped LSB: Jetty start script..
Dec 03 15:05:22 app systemd[1]: Starting LSB: Jetty start script....
Dec 03 15:05:26 app jetty[21202]: Starting Jetty: FAILED Mon Dec 3 15:05:26 UTC 2018
Dec 03 15:05:26 app systemd[1]: jetty.service: Control process exited, code=exited status=1
Dec 03 15:05:26 app systemd[1]: jetty.service: Failed with result 'exit-code'.
Dec 03 15:05:26 app systemd[1]: Failed to start LSB: Jetty start script..
This is the output of service jetty check:
root#app:/usr/share/jetty9# service jetty check
Jetty NOT running
JAVA = /usr/bin/java
JAVA_OPTIONS = -Djetty.home=/opt/jetty9 -Djetty.base=/usr/share/jetty9 -Djava.io.tmpdir=/opt/jetty9/tmp
JETTY_HOME = /opt/jetty9
JETTY_BASE = /usr/share/jetty9
START_D = /usr/share/jetty9/start.d
START_INI = /usr/share/jetty9/start.ini
JETTY_START = /opt/jetty9/start.jar
JETTY_CONF = /opt/jetty9/etc/jetty.conf
JETTY_ARGS = jetty.state=/usr/share/jetty9/jetty.state jetty-started.xml
JETTY_RUN = /var/run/jetty
JETTY_PID = /var/run/jetty/jetty.pid
JETTY_START_LOG = /var/run/jetty/jetty-start.log
JETTY_STATE = /usr/share/jetty9/jetty.state
JETTY_START_TIMEOUT = 60
RUN_CMD = /usr/bin/java -Djetty.home=/opt/jetty9 -Djetty.base=/usr/share/jetty9 -Djava.io.tmpdir=/opt/jetty9/tmp -jar /opt/jetty9/start.jar jetty.state=/usr/share/jetty9/jetty.state jetty-started.xml
Any ideas?
UPDATE
Changing the user in /etc/default/jetty to root solves the issue, but that is not a real solution, is it?
# Run Jetty as this user ID (default: jetty)
# Set this to an empty string to prevent Jetty from starting automatically
JETTY_USER=root
I finally got this working. The jetty user should have permissions on the following folders and /usr/sbin/nologin as its shell, as described here:
JETTY_HOME
JETTY_BASE
/var/run/jetty <-- couldn't find a reference to this folder in the docs
And add the following to your /etc/default/jetty:
JETTY_SHELL=/bin/sh
JETTY_LOGS=/usr/share/jetty9/logs
JETTY_START_LOG=/usr/share/jetty9/logs/jetty-start-log.log
Also, double-check that there are no remaining log files owned by a user other than jetty in those folders.
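A minimal sketch of those permission fixes, assuming the paths from the question (JETTY_HOME=/opt/jetty9, JETTY_BASE=/usr/share/jetty9) and a jetty group; adjust user/group names to whatever your system uses:
sudo usermod -s /usr/sbin/nologin jetty
sudo chown -R jetty:jetty /opt/jetty9 /usr/share/jetty9
sudo mkdir -p /var/run/jetty /usr/share/jetty9/logs
sudo chown -R jetty:jetty /var/run/jetty /usr/share/jetty9/logs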

Issue with Redis cluster-mode setup in Docker (Windows 7)

I am trying to set up Redis in cluster mode, and when I try to connect to Redis using the Jedis API, I am seeing the exception below.
Exception in thread "main" redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnection(JedisSlotBasedConnectionHandler.java:57)
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:74)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:116)
at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:31)
at redis.clients.jedis.JedisCluster.set(JedisCluster.java:103)
at com.redis.main.Main.main(Main.java:18)
I am using the command below to start Redis:
$ docker run -v /d/redis.conf:/usr/bin/redis.conf --name myredis redis redis-server /usr/bin/redis.conf
And my simple redis.conf looks like this:
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
And below are the Redis startup logs.
$ docker run -v /d/redis.conf:/usr/bin/redis.conf --name myredis redis redis-server /usr/bin/redis.conf
1:C 11 Oct 18:06:01.657 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 11 Oct 18:06:01.663 # Redis version=4.0.2, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 11 Oct 18:06:01.664 # Configuration loaded
1:M 11 Oct 18:06:01.685 * Running mode=standalone, port=6379.
1:M 11 Oct 18:06:01.690 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 11 Oct 18:06:01.692 # Server initialized
1:M 11 Oct 18:06:01.696 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 11 Oct 18:06:01.697 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 11 Oct 18:06:01.700 * Ready to accept connections
And below is the simple Java program:
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class Main {
    public static void main(String[] args) {
        Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>();
        jedisClusterNodes.add(new HostAndPort("127.0.0.1", 6379));
        JedisCluster jc = new JedisCluster(jedisClusterNodes);
        //Jedis jc = new Jedis("192.168.99.100");
        jc.set("prime", "1 is prime");
        String keyVal = jc.get("prime");
        System.out.println(keyVal);
    }
}
Not really sure what is going wrong here and will appreciate any help on this.
You need to expose the port when starting the Redis container:
docker run -v /d/redis.conf:/usr/bin/redis.conf -p 6379:6379 --name myredis redis redis-server /usr/bin/redis.conf
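Since the config above has cluster-enabled yes, the cluster bus port (the client port + 10000, i.e. 16379 here) may also need to be published for cluster traffic; a variant of the same command, as a sketch:
docker run -v /d/redis.conf:/usr/bin/redis.conf -p 6379:6379 -p 16379:16379 --name myredis redis redis-server /usr/bin/redis.conf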

Zookeeper cluster set up

I am able to set up a ZooKeeper cluster on one machine with three different ports, but when I do the same with different IPs to have ZooKeeper instances on different machines, it throws the following error:
2014-11-20 12:16:24,819 [myid:1] - INFO [main:QuorumPeerMain#127] - Starting quorum peer
2014-11-20 12:16:24,827 [myid:1] - INFO [main:NIOServerCnxnFactory#94] - binding to port 0.0.0.0/0.0.0.0:2181
2014-11-20 12:16:24,842 [myid:1] - INFO [main:QuorumPeer#959] - tickTime set to 2000
2014-11-20 12:16:24,842 [myid:1] - INFO [main:QuorumPeer#979] - minSessionTimeout set to -1
2014-11-20 12:16:24,842 [myid:1] - INFO [main:QuorumPeer#990] - maxSessionTimeout set to -1
2014-11-20 12:16:24,842 [myid:1] - INFO [main:QuorumPeer#1005] - initLimit set to 10
2014-11-20 12:16:24,857 [myid:1] - INFO [Thread-1:QuorumCnxManager$Listener#504] - My election bind port: /172.16.1.175:2223
2014-11-20 12:16:24,870 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer#714] - LOOKING
2014-11-20 12:16:24,873 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection#815] - New election. My id = 1, proposed zxid=0x0
2014-11-20 12:16:24,876 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection#597] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2014-11-20 12:16:24,881 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager#382] - Cannot open channel to 2 at election address /172.16.1.170:2223
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
at java.lang.Thread.run(Thread.java:744)
Have you started ZooKeeper on all three nodes? In a multi-node setup (assuming you have a distributed environment with multiple machines), every server knows about the other nodes present in the cluster, known as the ensemble. It does this by looking at the following lines in the zoo.cfg file:
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
The multi-server setup page of the docs says:
As long as a majority of the ensemble are up, the service will be available. Because Zookeeper requires a majority, it is best to use an odd number of machines. For example, with four machines ZooKeeper can only handle the failure of a single machine; if two machines fail, the remaining two machines do not constitute a majority. However, with five machines ZooKeeper can handle the failure of two machines
Unless you start the process on all three nodes, they won't be able to communicate with each other and will keep logging such errors. This might help you get somewhere.
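One quick way to verify that each node is actually up is to check its status locally and probe the client port from another machine; the address below is just the one from the log above, not a prescription:
# run on each node, from the ZooKeeper install directory
bin/zkServer.sh status
# from any machine, probe a node's client port; a healthy server replies "imok"
echo ruok | nc 172.16.1.170 2181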
How to set up ZooKeeper for multiple clusters or remote servers:
Step 1: Check that Java 1.8.0 or above is available on the system:
/opt/> java -version
Step 2: Download ZooKeeper 3.3.6 using the command below:
sudo wget http://redrockdigimark.com/apachemirror/zookeeper/zookeeper-3.3.6/zookeeper-3.3.6.tar.gz
Step 3: Extract the file using the command below:
sudo tar xzf zookeeper-3.3.6.tar.gz -C /opt/
Step 4: Symlink zookeeper-3.3.6 to zookeeper as below, then change into the conf directory:
/opt/> ln -s zookeeper-3.3.6 zookeeper
/opt/> cd zookeeper/conf
Step 5: Create the configuration file by copying zoo_sample.cfg to zoo.cfg:
/opt/zookeeper/conf/> cp zoo_sample.cfg zoo.cfg
Step 6: Edit zoo.cfg:
/opt/zookeeper/conf/> sudo vi zoo.cfg
Set the data directory as dataDir=/var/lib/zookeeper
Step 7: Create a file named myid (no extension) under /var/lib/zookeeper
and give it the unique id 1 for server1.
Add all the cluster servers at the bottom of zoo.cfg as:
server.1=0.0.0.0:2888:3888
server.2=184.72.205.209:2888:3888
server.3=34.207.92.20:2888:3888
Step 8: On server2, create the same myid file under /var/lib/zookeeper
and give it the unique id 2.
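A minimal sketch of Steps 7 and 8, run on each server (the id written into myid must match that server's own server.N line in zoo.cfg):
sudo mkdir -p /var/lib/zookeeper
echo 1 | sudo tee /var/lib/zookeeper/myid    # write 2 on server2, 3 on server3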
Step 9: The same configuration is applied on the second server, with its own entry set to 0.0.0.0:
server.1=34.229.138.19:2888:3888
server.2=0.0.0.0:2888:3888
server.3=34.207.92.20:2888:3888
Step 10: Install the nc and lsof packages as below:
sudo yum install nc
sudo yum install lsof
Step 11: Now start ZooKeeper on all servers:
sudo /opt/zookeeper/bin/zkServer.sh start
Step 12: To stop the ZooKeeper server:
sudo /opt/zookeeper/bin/zkServer.sh stop
To check the status of the ZooKeeper server:
sudo /opt/zookeeper/bin/zkServer.sh status
Important points to note:
1. ZooKeeper needs 2F+1 servers to tolerate F failures, i.e. to survive 1 failure you need (2*1)+1 = 3 servers, and to survive 2 failures you need (2*2)+1 = 5 servers; F stands for the number of failed servers you want to tolerate.
2. All servers should have the zoo.cfg configuration file, and the local server's own entry should use the IP 0.0.0.0.
3. ZooKeeper uses port 2888 to connect the follower nodes with the leader node.
4. Port 3888 is used for peer-to-peer communication (leader election).
5. Leader election is handled by ZooKeeper automatically; if the leader goes down, another leader is elected almost immediately and the follower information is shared.
6. In the zoo.cfg configuration file the client port is 2181 (the default).
