Elasticsearch - Java Permissions Issue - java

I'm trying to allow remote connections to Elasticsearch (7.17.7). Whenever I change network.host in /etc/elasticsearch/elasticsearch.yml to anything but the default value, I get the following error:
[2022-12-04T03:09:13,741][INFO ][o.e.x.m.p.NativeController] [ubuntu] Native controller process has stopped - no new native processes can be started
[2022-12-04T03:09:13,741][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [ubuntu] uncaught exception in thread [process reaper (pid 2635649)]
java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "modifyThread")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:485) ~[?:?]
at java.security.AccessController.checkPermission(AccessController.java:1068) ~[?:?]
at java.lang.SecurityManager.checkPermission(SecurityManager.java:411) ~[?:?]
at org.elasticsearch.secure_sm.SecureSM.checkThreadAccess(SecureSM.java:160) ~[?:7.17.7]
at org.elasticsearch.secure_sm.SecureSM.checkAccess(SecureSM.java:120) ~[?:7.17.7]
at java.lang.Thread.checkAccess(Thread.java:2360) ~[?:?]
at java.lang.Thread.setDaemon(Thread.java:2308) ~[?:?]
at java.lang.ProcessHandleImpl.lambda$static$0(ProcessHandleImpl.java:103) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:637) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:928) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1021) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1158) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
at java.lang.Thread.run(Thread.java:1589) [?:?]
at jdk.internal.misc.InnocuousThread.run(InnocuousThread.java:186) ~[?:?]
Java Version
openjdk 11.0.17 2022-10-18
OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)
OpenJDK 64-Bit Server VM (build 11.0.17+8-post-Ubuntu-1ubuntu222.04, mixed mode, sharing)
/etc/elasticsearch/elasticsearch.yml
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: search
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
# network.host: 192.168.1.100
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
action.destructive_requires_name: false
#
# ---------------------------------- Security ----------------------------------
#
# *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features.
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html
xpack.security.enabled: true
xpack.security.http.ssl.enabled: false
xpack.security.http.ssl.key: /etc/elasticsearch/config/es-key.pem
xpack.security.http.ssl.certificate: /etc/elasticsearch/config/es-cert.pem
xpack.security.authc.api_key.enabled: true
xpack:
  security:
    authc:
      realms:
        native:
          native1:
            order: 0
process.permissions.modifyThread: true
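The trace above shows Elasticsearch's own security manager (SecureSM) denying RuntimePermission "modifyThread" to the JVM's process reaper thread. This is reportedly a known incompatibility between the security manager in 7.17.x and a newer JDK than the node was tested against (the InnocuousThread frame suggests a recent JDK, not the system Java 11 shown above), rather than anything in elasticsearch.yml; process.permissions.modifyThread does not appear to be a recognized setting. Commonly suggested workarounds are upgrading to a later 7.17 patch release or pointing the service at a known-good JDK via ES_JAVA_HOME. A minimal sketch of the latter, assuming the .deb package, which reads environment overrides from /etc/default/elasticsearch (the JDK path is an assumption; any JDK supported by 7.17, such as 17, should do):
# Point the service at a specific JDK, then restart:
sudo sh -c 'echo "ES_JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64" >> /etc/default/elasticsearch'
sudo systemctl restart elasticsearch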

Related

Error while installing Elasticsearch: -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError

I'm trying to install Elasticsearch on my Windows machine. I downloaded the zip file, extracted it, and ran the start command.
I got some errors along the way but solved them; this is the last output I get and can't get past:
-XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m,
-Xms4020m, -Xmx4020m, -XX:MaxDirectMemorySize=2107637760, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=zip, --module-path=C:\elasticsearch-8.4.3\elasticsearch-8.4.3\lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
This is my jvm.options file:
################################################################
##
## JVM configuration
##
################################################################
##
## WARNING: DO NOT EDIT THIS FILE. If you want to override the
## JVM options in this file, or set any additional options, you
## should create one or more files in the jvm.options.d
## directory containing your adjustments.
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/8.4/jvm-options.html
## for more information.
##
################################################################
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## which should be named with .options suffix, and the min and
## max should be set to the same value. For example, to set the
## heap to 4 GB, create a new file in the jvm.options.d
## directory containing these lines:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/8.4/heap-size.html
## for more information
##
################################################################
################################################################
## Expert settings
################################################################
##
## All settings below here are considered expert settings. Do
## not adjust them unless you understand what you are doing. Do
## not edit them in this file; instead, create a new file in the
## jvm.options.d directory containing your adjustments.
##
################################################################
-XX:+UseG1GC
## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}
## heap dumps
# generate a heap dump when an allocation from the Java heap fails; heap dumps
# are created in the working directory of the JVM unless an alternative path is
# specified
-XX:+HeapDumpOnOutOfMemoryError
# exit right after heap dump on out of memory error
-XX:+ExitOnOutOfMemoryError
# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data
# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log
## GC logging
-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
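For reference, the heap override described in the header above is a new file under jvm.options.d rather than an edit to this file. A sketch (the heap.options file name is arbitrary as long as it ends in .options, and 4g is just the example value the header uses):
# config\jvm.options.d\heap.options (hypothetical file name)
-Xms4g
-Xmx4g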
This is my elasticsearch.yml file:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 18-10-2022 14:38:08
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
ingest.geoip.downloader.enabled: false
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["SAM-SAM"]
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
And I do have resources: 50 GB free on the C drive and 345 GB on D.
Where is the problem, and how can I solve it?
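The lines pasted as the error above are just the JVM argument list that Elasticsearch echoes at startup; the actual failure is normally written to the cluster log. A sketch for pulling it up, assuming the zip layout from the --module-path above and the default cluster name of "elasticsearch":
rem Print the startup log; the file is named after the cluster name
rem ("elasticsearch" by default).
type C:\elasticsearch-8.4.3\elasticsearch-8.4.3\logs\elasticsearch.log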

How do I install Hadoop YARN?

I'm trying to set up Hadoop YARN in pseudo-distributed mode on an Ubuntu 18.04 virtual machine. I'm getting the following error:
Starting resourcemanager
ERROR: Cannot set priority of resourcemanager process 19723
Starting nodemanagers
localhost: ERROR: Cannot set priority of nodemanager process 19875
pdsh@alex-VirtualBox: localhost: ssh exited with exit code 1
jps
18850 DataNode
18691 NameNode
19931 Jps
19103 SecondaryNameNode
java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
hadoop version
Hadoop 3.2.1
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r b3cbbb467e22ea829b3808f4b7b01d07e0bf3842
Compiled by rohithsharmaks on 2019-09-10T15:56Z
Compiled with protoc 2.5.0
From source with checksum 776eaf9eee9c0ffc370bcbc1888737
This command was run using /opt/hadoop/hadoop-3.2.1/share/hadoop/common/hadoop-common-3.2.1.jar
cat etc/hadoop/yarn-env.sh
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
## THIS FILE ACTS AS AN OVERRIDE FOR hadoop-env.sh FOR ALL
## WORK DONE BY THE yarn AND RELATED COMMANDS.
##
## Precedence rules:
##
## yarn-env.sh > hadoop-env.sh > hard-coded defaults
##
## YARN_xyz > HADOOP_xyz > hard-coded defaults
##
###
# Resource Manager specific parameters
###
# Specify the max heapsize for the ResourceManager. If no units are
# given, it will be assumed to be in MB.
# This value will be overridden by an Xmx setting specified in either
# HADOOP_OPTS and/or YARN_RESOURCEMANAGER_OPTS.
# Default is the same as HADOOP_HEAPSIZE_MAX
#export YARN_RESOURCEMANAGER_HEAPSIZE=
# Specify the JVM options to be used when starting the ResourceManager.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# Examples for a Sun/Oracle JDK:
# a) override the appsummary log file:
# export YARN_RESOURCEMANAGER_OPTS="-Dyarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY"
#
# b) Set JMX options
# export YARN_RESOURCEMANAGER_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=1026"
#
# c) Set garbage collection logs from hadoop-env.sh
# export YARN_RESOURCE_MANAGER_OPTS="${HADOOP_GC_SETTINGS} -Xloggc:${HADOOP_LOG_DIR}/gc-rm.log-$(date +'%Y%m%d%H%M')"
#
# d) ... or set them directly
# export YARN_RESOURCEMANAGER_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:${HADOOP_LOG_DIR}/gc-rm.log-$(date +'%Y%m%d%H%M')"
#
#
# export YARN_RESOURCEMANAGER_OPTS=
###
# Node Manager specific parameters
###
# Specify the max heapsize for the NodeManager. If no units are
# given, it will be assumed to be in MB.
# This value will be overridden by an Xmx setting specified in either
# HADOOP_OPTS and/or YARN_NODEMANAGER_OPTS.
# Default is the same as HADOOP_HEAPSIZE_MAX.
#export YARN_NODEMANAGER_HEAPSIZE=
# Specify the JVM options to be used when starting the NodeManager.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# See ResourceManager for some examples
#
#export YARN_NODEMANAGER_OPTS=
###
# TimeLineServer specific parameters
###
# Specify the max heapsize for the timelineserver. If no units are
# given, it will be assumed to be in MB.
# This value will be overridden by an Xmx setting specified in either
# HADOOP_OPTS and/or YARN_TIMELINESERVER_OPTS.
# Default is the same as HADOOP_HEAPSIZE_MAX.
#export YARN_TIMELINE_HEAPSIZE=
# Specify the JVM options to be used when starting the TimeLineServer.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# See ResourceManager for some examples
#
#export YARN_TIMELINESERVER_OPTS=
###
# TimeLineReader specific parameters
###
# Specify the JVM options to be used when starting the TimeLineReader.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# See ResourceManager for some examples
#
#export YARN_TIMELINEREADER_OPTS=
###
# Web App Proxy Server specifc parameters
###
# Specify the max heapsize for the web app proxy server. If no units are
# given, it will be assumed to be in MB.
# This value will be overridden by an Xmx setting specified in either
# HADOOP_OPTS and/or YARN_PROXYSERVER_OPTS.
# Default is the same as HADOOP_HEAPSIZE_MAX.
#export YARN_PROXYSERVER_HEAPSIZE=
# Specify the JVM options to be used when starting the proxy server.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# See ResourceManager for some examples
#
#export YARN_PROXYSERVER_OPTS=
###
# Shared Cache Manager specific parameters
###
# Specify the JVM options to be used when starting the
# shared cache manager server.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# See ResourceManager for some examples
#
#export YARN_SHAREDCACHEMANAGER_OPTS=
###
# Router specific parameters
###
# Specify the JVM options to be used when starting the Router.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# See ResourceManager for some examples
#
#export YARN_ROUTER_OPTS=
###
# Registry DNS specific parameters
###
# For privileged registry DNS, user to run as after dropping privileges
# This will replace the hadoop.id.str Java property in secure mode.
# export YARN_REGISTRYDNS_SECURE_USER=yarn
# Supplemental options for privileged registry DNS
# By default, Hadoop uses jsvc which needs to know to launch a
# server jvm.
# export YARN_REGISTRYDNS_SECURE_EXTRA_OPTS="-jvm server"
###
# YARN Services parameters
###
# Directory containing service examples
# export YARN_SERVICE_EXAMPLES_DIR = $HADOOP_YARN_HOME/share/hadoop/yarn/yarn-service-examples
# export YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE=true
export YARN_RESOURCEMANAGER_OPTS="--add-modules java.activation"
export YARN_NODEMANAGER_OPTS="--add-modules java.activation"
cat etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
The resourcemanager log shows the following:
cat logs/hadoop-hadoop-resourcemanager-alex-VirtualBox.log
sourceManager.java:1535)
Caused by: java.io.IOException: Unable to initialize WebAppContext
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1177)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:439)
... 4 more
Caused by: com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) Error injecting constructor, java.lang.NoClassDefFoundError: javax/activation/DataSource
at org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.<init>(JAXBContextResolver.java:41)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:54)
while locating org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver
1 error
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1025)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1051)
at com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory$GuiceInstantiatedComponentProvider.getInstance(GuiceComponentProviderFactory.java:345)
at com.sun.jersey.core.spi.component.ioc.IoCProviderFactory$ManagedSingleton.<init>(IoCProviderFactory.java:202)
at com.sun.jersey.core.spi.component.ioc.IoCProviderFactory.wrap(IoCProviderFactory.java:123)
at com.sun.jersey.core.spi.component.ioc.IoCProviderFactory._getComponentProvider(IoCProviderFactory.java:116)
at com.sun.jersey.core.spi.component.ProviderFactory.getComponentProvider(ProviderFactory.java:153)
at com.sun.jersey.core.spi.component.ProviderServices.getComponent(ProviderServices.java:278)
at com.sun.jersey.core.spi.component.ProviderServices.getProviders(ProviderServices.java:151)
at com.sun.jersey.core.spi.factory.ContextResolverFactory.init(ContextResolverFactory.java:83)
at com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1332)
at com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:180)
at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:799)
at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:795)
at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:795)
at com.sun.jersey.guice.spi.container.servlet.GuiceContainer.initiate(GuiceContainer.java:121)
at com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:339)
at com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:605)
at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:207)
at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:394)
at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:744)
at com.google.inject.servlet.FilterDefinition.init(FilterDefinition.java:112)
at com.google.inject.servlet.ManagedFilterPipeline.initPipeline(ManagedFilterPipeline.java:99)
at com.google.inject.servlet.GuiceFilter.init(GuiceFilter.java:220)
at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start(Server.java:427)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart(Server.java:394)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1155)
... 5 more
Caused by: java.lang.NoClassDefFoundError: javax/activation/DataSource
at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl.<clinit>(RuntimeBuiltinLeafInfoImpl.java:457)
at com.sun.xml.bind.v2.model.impl.RuntimeTypeInfoSetImpl.<init>(RuntimeTypeInfoSetImpl.java:65)
at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:133)
at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:85)
at com.sun.xml.bind.v2.model.impl.ModelBuilder.<init>(ModelBuilder.java:156)
at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.<init>(RuntimeModelBuilder.java:93)
at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:473)
at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319)
at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:262)
at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:249)
at javax.xml.bind.ContextFinder.find(ContextFinder.java:456)
at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:656)
at com.sun.jersey.api.json.JSONJAXBContext.<init>(JSONJAXBContext.java:255)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.<init>(JAXBContextResolver.java:68)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver$$FastClassByGuice$$6a7be7f6.newInstance(<generated>)
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:61)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:105)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:267)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1103)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:145)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
at com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1016)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1012)
... 49 more
Caused by: java.lang.ClassNotFoundException: javax.activation.DataSource
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
... 83 more
2020-07-24 09:15:25,931 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to standby state
2020-07-24 09:15:25,931 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to standby state
2020-07-24 09:15:25,932 FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:443)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1231)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1340)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1535)
Caused by: java.io.IOException: Unable to initialize WebAppContext
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1177)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:439)
... 4 more
Caused by: com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) Error injecting constructor, java.lang.NoClassDefFoundError: javax/activation/DataSource
at org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.<init>(JAXBContextResolver.java:41)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:54)
while locating org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver
1 error
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1025)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1051)
at com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory$GuiceInstantiatedComponentProvider.getInstance(GuiceComponentProviderFactory.java:345)
at com.sun.jersey.core.spi.component.ioc.IoCProviderFactory$ManagedSingleton.<init>(IoCProviderFactory.java:202)
at com.sun.jersey.core.spi.component.ioc.IoCProviderFactory.wrap(IoCProviderFactory.java:123)
at com.sun.jersey.core.spi.component.ioc.IoCProviderFactory._getComponentProvider(IoCProviderFactory.java:116)
at com.sun.jersey.core.spi.component.ProviderFactory.getComponentProvider(ProviderFactory.java:153)
at com.sun.jersey.core.spi.component.ProviderServices.getComponent(ProviderServices.java:278)
at com.sun.jersey.core.spi.component.ProviderServices.getProviders(ProviderServices.java:151)
at com.sun.jersey.core.spi.factory.ContextResolverFactory.init(ContextResolverFactory.java:83)
at com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1332)
at com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:180)
at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:799)
at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:795)
at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:795)
at com.sun.jersey.guice.spi.container.servlet.GuiceContainer.initiate(GuiceContainer.java:121)
at com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:339)
at com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:605)
at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:207)
at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:394)
at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:744)
at com.google.inject.servlet.FilterDefinition.init(FilterDefinition.java:112)
at com.google.inject.servlet.ManagedFilterPipeline.initPipeline(ManagedFilterPipeline.java:99)
at com.google.inject.servlet.GuiceFilter.init(GuiceFilter.java:220)
at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start(Server.java:427)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart(Server.java:394)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1155)
... 5 more
Caused by: java.lang.NoClassDefFoundError: javax/activation/DataSource
at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl.<clinit>(RuntimeBuiltinLeafInfoImpl.java:457)
at com.sun.xml.bind.v2.model.impl.RuntimeTypeInfoSetImpl.<init>(RuntimeTypeInfoSetImpl.java:65)
at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:133)
at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:85)
at com.sun.xml.bind.v2.model.impl.ModelBuilder.<init>(ModelBuilder.java:156)
at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.<init>(RuntimeModelBuilder.java:93)
at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:473)
at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319)
at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:262)
at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:249)
at javax.xml.bind.ContextFinder.find(ContextFinder.java:456)
at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:656)
at com.sun.jersey.api.json.JSONJAXBContext.<init>(JSONJAXBContext.java:255)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.<init>(JAXBContextResolver.java:68)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver$$FastClassByGuice$$6a7be7f6.newInstance(<generated>)
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:61)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:105)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:267)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1103)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:145)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
at com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1016)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1012)
... 49 more
Caused by: java.lang.ClassNotFoundException: javax.activation.DataSource
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
... 83 more
2020-07-24 09:15:25,997 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down ResourceManager at alex-VirtualBox/127.0.1.1
************************************************************/
Does anyone know how to solve this problem?
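Worth noting: the java.base/ frames in the trace mean the daemons are running on JDK 9 or newer, even though java -version reports 1.8.0_252, because Hadoop resolves JAVA_HOME from hadoop-env.sh. The --add-modules java.activation lines in yarn-env.sh only work on JDK 9 and 10; the java.activation module was removed outright in JDK 11, which is exactly what the NoClassDefFoundError: javax/activation/DataSource says. A sketch of two common fixes, with paths and the jar version as assumptions:
# Option A: pin Hadoop to the JDK 8 that `java -version` reports
# (path is an assumption; adjust to your install):
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> etc/hadoop/hadoop-env.sh
# Option B: stay on JDK 11+, remove the --add-modules lines from yarn-env.sh,
# and drop a standalone activation jar (e.g. javax.activation-api-1.2.0.jar
# from Maven Central) into share/hadoop/yarn/lib/ so the class is on the classpath.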

Elasticsearch 5.x Fails to Start

I have been trying to install Elasticsearch, which for version 7.x seemed easy, whereas version 5.x is a pain in the neck. The whole ordeal exists because there is a slew of compatibility requirements between Elasticsearch, Django Haystack, Django CMS, and other things. If someone has a nice table or another way to wrap my head around those, I'd be happy to hear it.
As to the actual question, after installing ES 5.x, I cannot seem to get it working.
user@user-desktop:~/sites/project-web/project$ sudo systemctl restart elasticsearch
user@user-desktop:~/sites/project-web/project$ curl -X GET localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
user@user-desktop:~/sites/project-web/project$
The uncommented entries in /etc/elasticsearch/elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: project-search
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
transport.host: localhost
transport.tcp.port: 9300
#
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["0.0.0.0"]
#discovery.seed_hosts:["0.0.0.0"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
This is the status with which it fails:
user@user-desktop:~/sites/project-web/project$ systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2019-11-24 15:39:25 CST; 3min 54s ago
Docs: http://www.elastic.co
Process: 19098 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DI
Process: 19097 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 19098 (code=exited, status=1/FAILURE)
Nov 24 15:39:24 user-desktop systemd[1]: Starting Elasticsearch...
Nov 24 15:39:24 user-desktop systemd[1]: Started Elasticsearch.
Nov 24 15:39:24 user-desktop elasticsearch[19098]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Nov 24 15:39:25 user-desktop systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 15:39:25 user-desktop systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
In /var/log/elasticsearch/project-search.log I find the following error:
[2019-11-24T15:46:44,319][INFO ][o.e.n.Node ] [node-1] initializing ...
[2019-11-24T15:46:44,410][ERROR][o.e.b.Bootstrap ] Exception
org.elasticsearch.ElasticsearchException: java.io.IOException: failed to read [id:0, legacy:false, file:/var/lib/elasticsearch/nodes/0/_state/node-0.st]
at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:196) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:335) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeEnvironment.loadOrCreateNodeMetaData(NodeEnvironment.java:418) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:267) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.node.Node.<init>(Node.java:265) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.node.Node.<init>(Node.java:245) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:233) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:233) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.6.16.jar:5.6.16]
Caused by: java.io.IOException: failed to read [id:0, legacy:false, file:/var/lib/elasticsearch/nodes/0/_state/node-0.st]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:328) ~[elasticsearch-5.6.16.jar:5.6.16]
... 14 more
Caused by: java.lang.IllegalArgumentException: [node_meta_data] unknown field [node_version], parser not found
at org.elasticsearch.common.xcontent.ObjectParser.getParser(ObjectParser.java:399) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:159) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.xcontent.ObjectParser.apply(ObjectParser.java:183) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeMetaData$1.fromXContent(NodeMetaData.java:110) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeMetaData$1.fromXContent(NodeMetaData.java:94) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.read(MetaDataStateFormat.java:203) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:323) ~[elasticsearch-5.6.16.jar:5.6.16]
... 14 more
Could someone tell me what is going on? Any help on resolving this and getting ES to work would be appreciated.
This looks like an inconsistency between Elasticsearch versions. If data was previously indexed with ES version 7.x, the data that instance left on disk is incompatible with ES version 5.x.
Remove the Elasticsearch data directory (note that this deletes all previously indexed data):
sudo rm -rf /var/lib/elasticsearch
and reinstall Elasticsearch.
Worked for me.
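If the old indices might still be needed, a less destructive variant (a sketch; the new path is arbitrary) is to point the 5.x node at a fresh data directory and leave the incompatible data where it is:
# /etc/elasticsearch/elasticsearch.yml -- hypothetical new location
path.data: /var/lib/elasticsearch-5x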
For Mac users using Homebrew, first clean out all Brew files:
brew uninstall elasticsearch
rm -rf /usr/local/etc/elasticsearch
rm -rf /usr/local/var/lib/elasticsearch
Then reinstall your Elasticsearch version, for example:
brew install elasticsearch@6
Make sure Elasticsearch is pointing to a compatible Java version:
nano /usr/local/opt/elasticsearch@6/bin
Then change this line to your compatible version:
JAVA_HOME="${JAVA_HOME:-/usr/local/opt/openjdk@YOUR_COMPATIBLE_VERSION/libexec/openjdk.jdk/Contents/Home}" exec "/usr/local/Cellar/elasticsearch@6/6.8.23/libexec/bin/elasticsearch" "$@"
Run elasticsearch in your terminal:
elasticsearch

Java Elastic Search Api: Unable to run simple example: org.elasticsearch.transport.NodeDisconnectedException:

I have downloaded the Elasticsearch binaries from https://www.elastic.co/downloads/elasticsearch and everything works okay.
Now I'm trying to run the simple example below. I first started the server with bin/elasticsearch, then ran this code:
package jp.gr.java_conf.uzresk.es.search;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHits;

public class Search {

    public static void main(String[] args) {
        new Search().run();
    }

    public void run() {
        Client client = null;
        TransportClient transportClient = null;
        try {
            transportClient = new TransportClient();
            client = transportClient.addTransportAddress(new InetSocketTransportAddress("192.168.1.40", 9300));
            SearchResponse response = client.prepareSearch("tms-allflat").setTypes("personal")
                    .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
                    .setQuery(QueryBuilders.termQuery("skill_1", "Java"))
                    .setPostFilter(FilterBuilders.rangeFilter("age").from(25).to(30)).setFrom(0).setSize(10)
                    .setExplain(true).execute().actionGet();
            SearchHits hits = response.getHits();
            hits.forEach(s -> System.out.println(s.getSourceAsString()));
        } finally {
            transportClient.close();
            client.close();
        }
    }
}
The stack trace thrown was this:
Oct 28, 2018 11:15:52 PM org.elasticsearch.plugins.PluginsService <init>
INFO: [Achelous] loaded [], sites []
Oct 28, 2018 11:15:53 PM org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler doSample
INFO: [Achelous] failed to get node info for [#transport#-1][dell-Inspiron-7577][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.NodeDisconnectedException: [][inet[localhost/127.0.0.1:9300]][cluster:monitor/nodes/info] disconnected
Exception in thread "main" org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:305)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:200)
at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:106)
at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:338)
at org.elasticsearch.client.transport.TransportClient.search(TransportClient.java:430)
at org.elasticsearch.action.search.SearchRequestBuilder.doExecute(SearchRequestBuilder.java:1112)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at jp.gr.java_conf.uzresk.es.search.Search.run(Search.java:30)
at jp.gr.java_conf.uzresk.es.search.Search.main(Search.java:16)
The github repo, with all the deps, can be found here: https://github.com/uzresk/elasticsearch-javaapi-examples
[1] What am I doing wrong?
[2] How do I fix this?
I tried googling around but couldn't find anything promising - I'd love a pointer or two in the right direction. :)
EDIT: My elasticsearch.yml is this:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Change the port from 9300 to 9200 as shown below:
try {
    transportClient = new TransportClient();
    client = transportClient.addTransportAddress(new InetSocketTransportAddress("192.168.1.40", 9200));
Edit your elasticsearch.yml file: add the setting below, then restart the server.
network.host: 192.168.1.40
The default list of hosts is ["127.0.0.1", "[::1]"].
You need to specify your host, if it's other than 127.0.0.1.
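Beyond the two suggestions above, the same NoNodeAvailableException also shows up when the client and server cluster names differ; with the 1.x-era TransportClient API used in the question, the name is passed in via Settings. A sketch, assuming the default cluster name "elasticsearch" (match whatever the server's elasticsearch.yml actually says):
// Sketch for the legacy 1.x TransportClient shown above; the cluster.name
// value must match the server's, otherwise node sampling fails.
// Settings types are from org.elasticsearch.common.settings.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "elasticsearch")
        .build();
TransportClient transportClient = new TransportClient(settings);
Client client = transportClient.addTransportAddress(
        new InetSocketTransportAddress("192.168.1.40", 9300));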

Elasticsearch server shuts down at random times throughout the day

For the past few weeks, we have been having an issue where the Elasticsearch server dies. I cannot tell what the problem is, and I'm not really sure where to even start.
We can restart the server and it will run ok for random lengths of time. Sometimes for the rest of the day, sometimes minutes, but it always crashes again eventually.
Here are some details that I hope someone will be able to process and point me in the right direction:
ElasticSearch Server info:
{
  "name" : "WIuGVV9",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "T2Vvt3hzQhSJa4ZFWtdMKA",
  "version" : {
    "number" : "5.5.1",
    "build_hash" : "19c13d0",
    "build_date" : "2017-07-18T20:44:24.823Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
$ cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)
$ java -version
openjdk version "1.8.0_141"
OpenJDK Runtime Environment (build 1.8.0_141-b16)
OpenJDK 64-Bit Server VM (build 25.141-b16, mixed mode)
$ free
total used free shared buff/cache available
Mem: 7915072 3650556 2827472 378156 1437044 3802152
Swap: 0 0 0
elasticsearch.log:
[2017-08-02T14:14:37,927][WARN ][o.e.b.Natives ] unable to load JNA native support library, native methods will be disabled.
java.lang.UnsatisfiedLinkError: /tmp/jna--1985354563/jna3117985363123958860.tmp: /tmp/jna--1985354563/jna3117985363123958860.tmp: failed to map segment from shared object: Operation not permitted
at java.lang.ClassLoader$NativeLibrary.load(Native Method) ~[?:1.8.0_141]
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941) ~[?:1.8.0_141]
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824) ~[?:1.8.0_141]
at java.lang.Runtime.load0(Runtime.java:809) ~[?:1.8.0_141]
at java.lang.System.load(System.java:1086) ~[?:1.8.0_141]
at com.sun.jna.Native.loadNativeDispatchLibraryFromClasspath(Native.java:947) ~[jna-4.4.0.jar:4.4.0 (b0)]
at com.sun.jna.Native.loadNativeDispatchLibrary(Native.java:922) ~[jna-4.4.0.jar:4.4.0 (b0)]
at com.sun.jna.Native.<clinit>(Native.java:190) ~[jna-4.4.0.jar:4.4.0 (b0)]
at java.lang.Class.forName0(Native Method) ~[?:1.8.0_141]
at java.lang.Class.forName(Class.java:264) ~[?:1.8.0_141]
at org.elasticsearch.bootstrap.Natives.<clinit>(Natives.java:45) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:194) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:351) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.cli.Command.main(Command.java:88) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.5.1.jar:5.5.1]
[2017-08-02T14:14:37,936][WARN ][o.e.b.Natives ] cannot check if running as root because JNA is not available
[2017-08-02T14:14:37,937][WARN ][o.e.b.Natives ] cannot install system call filter because JNA is not available
[2017-08-02T14:14:37,937][WARN ][o.e.b.Natives ] cannot register console handler because JNA is not available
[2017-08-02T14:14:37,941][WARN ][o.e.b.Natives ] cannot getrlimit RLIMIT_NPROC because JNA is not available
[2017-08-02T14:14:37,941][WARN ][o.e.b.Natives ] cannot getrlimit RLIMIT_AS beacuse JNA is not available
[2017-08-02T14:14:38,150][INFO ][o.e.n.Node ] [] initializing ...
[2017-08-02T14:14:38,341][INFO ][o.e.e.NodeEnvironment ] [WIuGVV9] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [7.8gb], net total_space [38.7gb], spins? [unknown], types [rootfs]
[2017-08-02T14:14:38,341][INFO ][o.e.e.NodeEnvironment ] [WIuGVV9] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-08-02T14:14:38,452][INFO ][o.e.n.Node ] node name [WIuGVV9] derived from node ID [WIuGVV9sS0mlLPTxRVKn0w]; set [node.name] to override
[2017-08-02T14:14:38,453][INFO ][o.e.n.Node ] version[5.5.1], pid[21555], build[19c13d0/2017-07-18T20:44:24.823Z], OS[Linux/3.10.0-327.4.4.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_141/25.141-b16]
[2017-08-02T14:14:38,453][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-08-02T14:14:40,409][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [aggs-matrix-stats]
[2017-08-02T14:14:40,410][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [ingest-common]
[2017-08-02T14:14:40,410][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [lang-expression]
[2017-08-02T14:14:40,410][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [lang-groovy]
[2017-08-02T14:14:40,410][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [lang-mustache]
[2017-08-02T14:14:40,410][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [lang-painless]
[2017-08-02T14:14:40,410][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [parent-join]
[2017-08-02T14:14:40,410][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [percolator]
[2017-08-02T14:14:40,410][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [reindex]
[2017-08-02T14:14:40,410][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [transport-netty3]
[2017-08-02T14:14:40,410][INFO ][o.e.p.PluginsService ] [WIuGVV9] loaded module [transport-netty4]
[2017-08-02T14:14:40,411][INFO ][o.e.p.PluginsService ] [WIuGVV9] no plugins loaded
[2017-08-02T14:14:42,797][INFO ][o.e.d.DiscoveryModule ] [WIuGVV9] using discovery type [zen]
[2017-08-02T14:14:43,759][INFO ][o.e.n.Node ] initialized
[2017-08-02T14:14:43,760][INFO ][o.e.n.Node ] [WIuGVV9] starting ...
[2017-08-02T14:14:44,061][INFO ][o.e.t.TransportService ] [WIuGVV9] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2017-08-02T14:14:44,088][WARN ][o.e.b.BootstrapChecks ] [WIuGVV9] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[2017-08-02T14:14:47,174][INFO ][o.e.c.s.ClusterService ] [WIuGVV9] new_master {WIuGVV9}{WIuGVV9sS0mlLPTxRVKn0w}{uZIN-61JT4KLP1xUSVTdLQ}{localhost}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-08-02T14:14:47,204][INFO ][o.e.h.n.Netty4HttpServerTransport] [WIuGVV9] publish_address {66.55.80.152:9200}, bound_addresses {[::]:9200}
[2017-08-02T14:14:47,204][INFO ][o.e.n.Node ] [WIuGVV9] started
[2017-08-02T14:14:47,589][INFO ][o.e.g.GatewayService ] [WIuGVV9] recovered [3] indices into cluster_state
[2017-08-02T14:14:48,268][INFO ][o.e.c.r.a.AllocationService] [WIuGVV9] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[orgs][4], [orgs][0]] ...]).
[2017-08-02T14:18:01,316][INFO ][o.e.c.m.MetaDataDeleteIndexService] [WIuGVV9] [orgs/ZcizOIAsRWqSXZY8ZiR0BA] deleting index
/etc/elasticsearch/elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
transport.host: localhost
transport.tcp.port: 9300
#network.bind_host: "0.0.0.0"
#network.publish_host: _non_loopback:ipv4_
#network.host: _local_
network.host: 0.0.0.0
#network.bind_host:
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
/etc/elasticsearch/jvm.options:
## JVM configuration
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms2g
-Xmx2g
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## optimizations
# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch
## basic
# force the server VM (remove on 32-bit client JVMs)
-server
# explicitly set the stack size (reduce to 320k on 32-bit client JVMs)
-Xss1m
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
-Djna.nosys=true
# use old-style file permissions on JDK9
-Djdk.io.permissionsUseCanonicalPath=true
# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Dlog4j.skipJansi=true
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${heap.dump.path}
## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${loggc}
# By default, the GC log file will not rotate.
# By uncommenting the lines below, the GC log file
# will be rotated every 128MB at most 32 times.
#-XX:+UseGCLogFileRotation
#-XX:NumberOfGCLogFiles=32
#-XX:GCLogFileSize=128M
# Elasticsearch 5.0.0 will throw an exception on unquoted field names in JSON.
# If documents were already indexed with unquoted fields in a previous version
# of Elasticsearch, some operations may throw errors.
#
# WARNING: This option will be removed in Elasticsearch 6.0.0 and is provided
# only for migration purposes.
#-Delasticsearch.json.allow_unquoted_field_names=true
/etc/sysconfig/elasticsearch:
################################
# Elasticsearch
################################
# Elasticsearch home directory
#ES_HOME=/usr/share/elasticsearch
# Elasticsearch Java path
#JAVA_HOME=
# Elasticsearch configuration directory
#CONF_DIR=/etc/elasticsearch
# Elasticsearch data directory
#DATA_DIR=/var/lib/elasticsearch
# Elasticsearch logs directory
#LOG_DIR=/var/log/elasticsearch
# Elasticsearch PID directory
#PID_DIR=/var/run/elasticsearch
# Additional Java OPTS
#ES_JAVA_OPTS=
# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true
################################
# Elasticsearch service
################################
# SysV init.d
#
# When executing the init script, this user will be used to run the elasticsearch service.
# The default value is 'elasticsearch' and is declared in the init.d file.
# Note that this setting is only used by the init script. If changed, make sure that
# the configured user can read and write into the data, work, plugins and log directories.
# For systemd service, the user is usually configured in file /usr/lib/systemd/system/elasticsearch.service
#ES_USER=elasticsearch
#ES_GROUP=elasticsearch
# The number of seconds to wait before checking if Elasticsearch started successfully as a daemon process
ES_STARTUP_SLEEP_TIME=5
################################
# System properties
################################
# Specifies the maximum file descriptor number that can be opened by this process
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/elasticsearch.service takes precedence
#MAX_OPEN_FILES=65536
# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option
# in elasticsearch.yml.
# When using Systemd, the LimitMEMLOCK property must be set
# in /usr/lib/systemd/system/elasticsearch.service
#MAX_LOCKED_MEMORY=unlimited
# Maximum number of VMA (Virtual Memory Areas) a process can own
# When using Systemd, this setting is ignored and the 'vm.max_map_count'
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
#MAX_MAP_COUNT=262144
Can anyone determine what my error is? I can't tell, and my best bet is either a permissions issue or a memory issue. We do have 8 GB, which barely makes the cut.
Thoughts?
Thanks a million.
[EDIT: New details]:
dmesg outputs a lot. I'm not sure if there is a way to get timestamps out of the output (see the note after the excerpt below), but over and over I see:
[10131180.901171] Out of memory: Kill process 13777 (java) score 295 or sacrifice child
[10131180.901186] Killed process 13777 (java) total-vm:5800372kB, anon-rss:2334928kB, file-rss:80kB
[10137088.438235] exim[7581]: segfault at 58 ip 000000000046bee7 sp 00007ffdd0dc63e0 error 4 in exim[400000+fa000]
[10138765.544389] exim[16401]: segfault at 58 ip 000000000046bee7 sp 00007ffede7cc3f0 error 4 in exim[400000+fa000]
[10162265.107101] exim[28217]: segfault at 58 ip 000000000046bee7 sp 00007fff84afb6e0 error 4 in exim[400000+fa000]
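(On the timestamp question: reasonably recent builds of dmesg, including the util-linux version shipped with CentOS 7, can print human-readable timestamps, e.g.:
$ dmesg -T | grep -i 'out of memory'
which makes it much easier to line the kills up with the Elasticsearch restarts.)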
I think this is the root cause of our issue, and when the boss gets in we will see about upgrading the server, as it only has 8 GB of RAM.
[EDIT: Even newer details]:
Running this command:
ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -5
After the ES server is killed:
6.3 0.0 767376 560 /usr/local/cpanel/3rdparty/bin/clamd
2.7 25.5 310612 5060 dovecot/lmtp
2.4 0.3 860920 24850 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/lib/mysql/gigenet.pwi.com.err --open-files-limit=10000 --pid-file=/var/lib/mysql/gigenet.pwi.com.pid
0.3 0.3 75216 17701 tailwatchd
0.3 0.0 393280 5384 /usr/sbin/named -u named
After the ES server is restarted:
9.3 45.5 4834304 6815 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Djdk.io.permissionsUseCanonicalPath=true -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -XX:+HeapDumpOnOutOfMemoryError -Xms4g -Xmx4g -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet -Edefault.path.logs=/var/log/elasticsearch -Edefault.path.data=/var/lib/elasticsearch -Edefault.path.conf=/etc/elasticsearch
6.3 0.0 767376 560 /usr/local/cpanel/3rdparty/bin/clamd
2.4 0.3 860920 24850 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/lib/mysql/gigenet.pwi.com.err --open-files-limit=10000 --pid-file=/var/lib/mysql/gigenet.pwi.com.pid
1.2 1.1 243444 23536 /usr/local/cpanel/3rdparty/perl/524/bin/perl -T -w /usr/local/cpanel/3rdparty/bin/spamd --max-spare=1 --max-children=3 --allowed-ips=127.0.0.1,::1 --pidfile=/var/run/spamd.pid --listen=5 --listen=6
1.2 0.5 244292 6618 spamd child
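One thing worth noting in the restarted java command line above: it contains both -Xms2g -Xmx2g and -Xms4g -Xmx4g. When a heap flag is repeated, the last occurrence wins, so this process is actually requesting a 4 GB heap on an 8 GB box that also runs MySQL, clamd, and spamd. You can confirm the effective heap through the nodes API (assuming the default local HTTP port):
$ curl -s 'localhost:9200/_nodes/jvm?pretty' | grep heap_max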
While trying to figure out why the OOM killer is killing Java, and based on the results in the answers to this question, I found this post:
Prevent elasticsearch from being killed by OOM killer
I have since changed my config to match the recommendations in its first answer, but the server still shuts down.
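For reference, the usual mitigation in that direction on a systemd machine looks something like the sketch below (this is the generic approach, not necessarily the exact steps from that answer):
$ sudo systemctl edit elasticsearch
# in the drop-in that opens, add:
[Service]
OOMScoreAdjust=-1000
$ sudo systemctl daemon-reload
$ sudo systemctl restart elasticsearch
Bear in mind this only steers the OOM killer toward other processes; if the box is genuinely out of memory, the real fix is a smaller heap or more RAM.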
Based on dmesg, the processes being killed are:
Out of memory: Kill process 24532 (java) score 293
Out of memory: Kill process 3408 (clamd) score 60
Out of memory: Kill process 1970 (lmtp) score 28
Out of memory: Kill process 17806 (mysqld) score 27
Is there anything in the Elasticsearch logs that indicates a clean shutdown? Can you paste the lines before the restart?
Can you check with dmesg whether the kernel OOM killer is stopping Elasticsearch? This happens when the operating system is advised to free memory at all costs. The OOM killer then usually picks the process with the most memory and kills it; most of the time this is Elasticsearch.
Are there any other services running on that machine that might trigger the OOM killer?
Have you been monitoring the system over time? Is memory eaten up? When the process is killed, do certain resources return to normal?
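(For the monitoring question, even a crude sampler helps, e.g. appending free output to a log once a minute:
$ while true; do { date; free -m; } >> /tmp/memwatch.log; sleep 60; done
and then reviewing the log around the time of the next kill.)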
