OpenLDAP and SSL - java

I am having trouble connecting to a secure OpenLDAP server which I have set up. On running my LDAP client with
java -Djavax.net.debug=ssl LDAPConnector
I get the following exception trace (Java version 1.6.0_17):
trigger seeding of SecureRandom
done seeding SecureRandom
%% No cached client session
*** ClientHello, TLSv1
RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 }
Session ID: {}
Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA]
Compression Methods: { 0 }
***
Thread-0, WRITE: TLSv1 Handshake, length = 73
Thread-0, WRITE: SSLv2 client hello message, length = 98
Thread-0, received EOFException: error
Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure
Thread-0, WRITE: TLSv1 Alert, length = 2
Thread-0, called closeSocket()
main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake]
at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source)
at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source)
at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source)
at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source)
at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source)
at javax.naming.spi.NamingManager.getInitialContext(Unknown Source)
at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source)
at javax.naming.InitialContext.init(Unknown Source)
at javax.naming.InitialContext.<init>(Unknown Source)
at javax.naming.directory.InitialDirContext.<init>(Unknown Source)
at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43)
at LDAPConnector.main(LDAPConnector.java:237)
Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source)
at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source)
at java.io.BufferedInputStream.fill(Unknown Source)
at java.io.BufferedInputStream.read1(Unknown Source)
at java.io.BufferedInputStream.read(Unknown Source)
at com.sun.jndi.ldap.Connection.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source)
... 9 more
However, I am able to connect to the same secure LDAP server if I use another version of Java (1.6.0_14).
I have created and installed the server certificates in the cacerts of both JREs, as described in this guide --> OpenLDAP with SSL
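For reference, the connection code in LDAPConnector is essentially the standard JNDI bind over ldaps (a trimmed sketch with illustrative values, not the exact code):
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LDAPConnector {
    public void CallSecureLDAPServer() throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldaps://ldap.natraj.com:636");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=localdomain");
        env.put(Context.SECURITY_CREDENTIALS, "secret"); // placeholder password
        DirContext ctx = new InitialDirContext(env);     // the simple bind fails here
        ctx.close();
    }
}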
When I run ldapsearch -x on the server I get
# extended LDIF
#
# LDAPv3
# base <dc=localdomain> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# localdomain
dn: dc=localdomain
objectClass: top
objectClass: dcObject
objectClass: organization
o: localdomain
dc: localdomain
# admin, localdomain
dn: cn=admin,dc=localdomain
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator
# search result
search: 2
result: 0 Success
# numResponses: 3
# numEntries: 2
On running openssl s_client -connect ldap.natraj.com:636 -showcerts, I obtain the self-signed certificate.
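A similar check can be done from Java itself (a rough sketch; it assumes the certificate has already been imported into that JRE's cacerts, otherwise the handshake itself fails):
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class ShowServerCert {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket("ldap.natraj.com", 636);
        socket.startHandshake(); // fails unless the cert is trusted and a cipher suite matches
        for (Certificate cert : socket.getSession().getPeerCertificates()) {
            System.out.println(((X509Certificate) cert).getSubjectDN());
        }
        socket.close();
    }
}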
My slapd.conf file is as follows
#######################################################################
# Global Directives:
# Features to permit
#allow bind_v2
# Schema and objectClass definitions
include /etc/ldap/schema/core.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/nis.schema
include /etc/ldap/schema/inetorgperson.schema
# Where the pid file is put. The init.d script
# will not stop the server if you change this.
pidfile /var/run/slapd/slapd.pid
# List of arguments that were passed to the server
argsfile /var/run/slapd/slapd.args
# Read slapd.conf(5) for possible values
loglevel none
# Where the dynamically loaded modules are stored
modulepath /usr/lib/ldap
moduleload back_hdb
# The maximum number of entries that is returned for a search operation
sizelimit 500
# The tool-threads parameter sets the actual amount of cpu's that is used
# for indexing.
tool-threads 1
#######################################################################
# Specific Backend Directives for hdb:
# Backend specific directives apply to this backend until another
# 'backend' directive occurs
backend hdb
#######################################################################
# Specific Backend Directives for 'other':
# Backend specific directives apply to this backend until another
# 'backend' directive occurs
#backend <other>
#######################################################################
# Specific Directives for database #1, of type hdb:
# Database specific directives apply to this databasse until another
# 'database' directive occurs
database hdb
# The base of your directory in database #1
suffix "dc=localdomain"
# rootdn directive for specifying a superuser on the database. This is needed
# for syncrepl.
rootdn "cn=admin,dc=localdomain"
# Where the database file are physically stored for database #1
directory "/var/lib/ldap"
# The dbconfig settings are used to generate a DB_CONFIG file the first
# time slapd starts. They do NOT override existing an existing DB_CONFIG
# file. You should therefore change these settings in DB_CONFIG directly
# or remove DB_CONFIG and restart slapd for changes to take effect.
# For the Debian package we use 2MB as default but be sure to update this
# value if you have plenty of RAM
dbconfig set_cachesize 0 2097152 0
# Sven Hartge reported that he had to set this value incredibly high
# to get slapd running at all. See http://bugs.debian.org/303057 for more
# information.
# Number of objects that can be locked at the same time.
dbconfig set_lk_max_objects 1500
# Number of locks (both requested and granted)
dbconfig set_lk_max_locks 1500
# Number of lockers
dbconfig set_lk_max_lockers 1500
# Indexing options for database #1
index objectClass eq
# Save the time that the entry gets modified, for database #1
lastmod on
# Checkpoint the BerkeleyDB database periodically in case of system
# failure and to speed slapd shutdown.
checkpoint 512 30
# Where to store the replica logs for database #1
# replogfile /var/lib/ldap/replog
# The userPassword by default can be changed
# by the entry owning it if they are authenticated.
# Others should not be able to see it, except the
# admin entry below
# These access lines apply to database #1 only
access to attrs=userPassword,shadowLastChange
by dn="cn=admin,dc=localdomain" write
by anonymous auth
by self write
by * none
# Ensure read access to the base for things like
# supportedSASLMechanisms. Without this you may
# have problems with SASL not knowing what
# mechanisms are available and the like.
# Note that this is covered by the 'access to *'
# ACL below too but if you change that as people
# are wont to do you'll still need this if you
# want SASL (and possible other things) to work
# happily.
access to dn.base="" by * read
# The admin dn has full write access, everyone else
# can read everything.
access to *
by dn="cn=admin,dc=localdomain" write
by * read
# For Netscape Roaming support, each user gets a roaming
# profile for which they have write access to
#access to dn=".*,ou=Roaming,o=morsnet"
# by dn="cn=admin,dc=localdomain" write
# by dnattr=owner write
#######################################################################
# Specific Directives for database #2, of type 'other' (can be hdb too):
# Database specific directives apply to this databasse until another
# 'database' directive occurs
#database <other>
# The base of your directory for database #2
#suffix "dc=debian,dc=org"
#######################################################################
# SSL:
# Uncomment the following lines to enable SSL and use the default
# snakeoil certificates.
#TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
#TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
TLSCipherSuite TLS_RSA_AES_256_CBC_SHA
TLSCACertificateFile /etc/ldap/ssl/server.pem
TLSCertificateFile /etc/ldap/ssl/server.pem
TLSCertificateKeyFile /etc/ldap/ssl/server.pem
My ldap.conf file is
#
# LDAP Defaults
#
# See ldap.conf(5) for details
# This file should be world readable but not world writable.
HOST ldap.natraj.com
PORT 636
BASE dc=localdomain
URI ldaps://ldap.natraj.com
TLS_CACERT /etc/ldap/ssl/server.pem
TLS_REQCERT allow
#SIZELIMIT 12
#TIMELIMIT 15
#DEREF never
Why is it that I can connect to the same server using one version of the JRE but not with the other?

Fixed the problem. The issue arose because the cipher suites offered by this JRE (version 1.6.0_17) did not match the cipher suites accepted by the server.
The server's slapd.conf contained the line
TLSCipherSuite TLS_RSA_AES_256_CBC_SHA
while this particular Java client was offering a set of suites that included TLS_RSA_WITH_AES_128_CBC_SHA but not the 256-bit suite the server insisted on. The problem was solved by simply commenting out the above-mentioned line in slapd.conf. What made it confusing was that the server simply dropped the connection (surfacing as an EOFException) when the real problem was the cipher suite mismatch.
JRE 1.6.0_14, however, was offering TLS_RSA_WITH_AES_256_CBC_SHA (OpenLDAP's TLS_RSA_AES_256_CBC_SHA) among its cipher suites, which is why the same code worked with that version.
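An easy way to see what a given JRE offers by default is to print the default cipher suites of its SSL socket factory and compare them against the TLSCipherSuite value in slapd.conf (note the naming difference: OpenLDAP's TLS_RSA_AES_256_CBC_SHA corresponds to Java's TLS_RSA_WITH_AES_256_CBC_SHA). A minimal sketch:
import javax.net.ssl.SSLSocketFactory;

public class ListCipherSuites {
    public static void main(String[] args) {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        // Cipher suites this JRE enables by default for new SSL sockets
        for (String suite : factory.getDefaultCipherSuites()) {
            System.out.println(suite);
        }
    }
}
Running this under each JRE shows immediately whether the suite configured on the server is in the client's default list.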

As you got this straight after sending the SSLv2-compatible ClientHello, you should try disabling SSLv2Hello. See the JSSE Reference Guide.
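One way to do that for a JNDI LDAP client (a sketch, not tested against this exact setup) is to plug a wrapper socket factory into the connection environment via the java.naming.ldap.factory.socket property and strip SSLv2Hello from the enabled protocols:
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;
import javax.net.SocketFactory;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Wrapper factory: delegates to the default SSLSocketFactory but removes
// "SSLv2Hello" from the enabled protocols of every socket it creates.
public class NoSSLv2HelloSocketFactory extends SSLSocketFactory {
    private final SSLSocketFactory delegate = (SSLSocketFactory) SSLSocketFactory.getDefault();

    // JNDI calls this reflectively when the class is named in java.naming.ldap.factory.socket
    public static SocketFactory getDefault() {
        return new NoSSLv2HelloSocketFactory();
    }

    private Socket strip(Socket socket) {
        SSLSocket ssl = (SSLSocket) socket;
        List<String> protocols = new ArrayList<String>();
        for (String p : ssl.getEnabledProtocols()) {
            if (!"SSLv2Hello".equals(p)) {
                protocols.add(p);
            }
        }
        ssl.setEnabledProtocols(protocols.toArray(new String[protocols.size()]));
        return ssl;
    }

    public String[] getDefaultCipherSuites() { return delegate.getDefaultCipherSuites(); }
    public String[] getSupportedCipherSuites() { return delegate.getSupportedCipherSuites(); }
    public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException {
        return strip(delegate.createSocket(s, host, port, autoClose));
    }
    public Socket createSocket(String host, int port) throws IOException {
        return strip(delegate.createSocket(host, port));
    }
    public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException {
        return strip(delegate.createSocket(host, port, localHost, localPort));
    }
    public Socket createSocket(InetAddress host, int port) throws IOException {
        return strip(delegate.createSocket(host, port));
    }
    public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
        return strip(delegate.createSocket(address, port, localAddress, localPort));
    }
}
Then, in the JNDI environment: env.put("java.naming.ldap.factory.socket", "NoSSLv2HelloSocketFactory"); (use the fully qualified class name if the factory lives in a package).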

Related

Elastic Search - Java Permissions Issue

I'm trying to allow remote connections to Elasticsearch (7.17.7). Whenever I change network.host in /etc/elasticsearch/elasticsearch.yml to anything but the default value, I get an error.
error
[2022-12-04T03:09:13,741][INFO ][o.e.x.m.p.NativeController] [ubuntu] Native controller process has stopped - no new native processes can be started
[2022-12-04T03:09:13,741][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [ubuntu] uncaught exception in thread [process reaper (pid 2635649)]
java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "modifyThread")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:485) ~[?:?]
at java.security.AccessController.checkPermission(AccessController.java:1068) ~[?:?]
at java.lang.SecurityManager.checkPermission(SecurityManager.java:411) ~[?:?]
at org.elasticsearch.secure_sm.SecureSM.checkThreadAccess(SecureSM.java:160) ~[?:7.17.7]
at org.elasticsearch.secure_sm.SecureSM.checkAccess(SecureSM.java:120) ~[?:7.17.7]
at java.lang.Thread.checkAccess(Thread.java:2360) ~[?:?]
at java.lang.Thread.setDaemon(Thread.java:2308) ~[?:?]
at java.lang.ProcessHandleImpl.lambda$static$0(ProcessHandleImpl.java:103) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:637) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:928) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1021) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1158) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
at java.lang.Thread.run(Thread.java:1589) [?:?]
at jdk.internal.misc.InnocuousThread.run(InnocuousThread.java:186) ~[?:?]
Java Version
openjdk 11.0.17 2022-10-18
OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)
OpenJDK 64-Bit Server VM (build 11.0.17+8-post-Ubuntu-1ubuntu222.04, mixed mode, sharing)
/etc/elasticsearch/elasticsearch.yml
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: search
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
# network.host: 192.168.1.100
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
action.destructive_requires_name: false
#
# ---------------------------------- Security ----------------------------------
#
# *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features.
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html
xpack.security.enabled: true
xpack.security.http.ssl.enabled: false
xpack.security.http.ssl.key: /etc/elasticsearch/config/es-key.pem
xpack.security.http.ssl.certificate: /etc/elasticsearch/config/es-cert.pem
xpack.security.authc.api_key.enabled: true
xpack:
  security:
    authc:
      realms:
        native:
          native1:
            order: 0
process.permissions.modifyThread: true

Error while installing Elasticsearch: -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError

I am trying to install Elasticsearch on my Windows machine. First I downloaded the zip file, extracted it, and then ran the command.
After that I got some errors, but I solved them. This is the last error I get and can't solve:
-XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m,
-Xms4020m, -Xmx4020m, -XX:MaxDirectMemorySize=2107637760, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=zip, --module-path=C:\elasticsearch-8.4.3\elasticsearch-8.4.3\lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
This is my jvm.options file:
################################################################
##
## JVM configuration
##
################################################################
##
## WARNING: DO NOT EDIT THIS FILE. If you want to override the
## JVM options in this file, or set any additional options, you
## should create one or more files in the jvm.options.d
## directory containing your adjustments.
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/8.4/jvm-options.html
## for more information.
##
################################################################
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## which should be named with .options suffix, and the min and
## max should be set to the same value. For example, to set the
## heap to 4 GB, create a new file in the jvm.options.d
## directory containing these lines:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/8.4/heap-size.html
## for more information
##
################################################################
################################################################
## Expert settings
################################################################
##
## All settings below here are considered expert settings. Do
## not adjust them unless you understand what you are doing. Do
## not edit them in this file; instead, create a new file in the
## jvm.options.d directory containing your adjustments.
##
################################################################
-XX:+UseG1GC
## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}
## heap dumps
# generate a heap dump when an allocation from the Java heap fails; heap dumps
# are created in the working directory of the JVM unless an alternative path is
# specified
-XX:+HeapDumpOnOutOfMemoryError
# exit right after heap dump on out of memory error
-XX:+ExitOnOutOfMemoryError
# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data
# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log
## GC logging
-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
This is my elasticsearch.yml file:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 18-10-2022 14:38:08
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
ingest.geoip.downloader.enabled: false
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["SAM-SAM"]
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
And I have enough space: 50GB free on the C disk and 345GB on D.
Where is the problem, and how can I solve it?

Elasticsearch 5.x Fails to Start

I have been trying to install Elasticsearch, which, for version 7.x, seemed easy, whereas for version 5.x it is a pain in the neck. The whole ordeal exists because there is a slew of compatibility requirements between Elasticsearch, Django Haystack, Django CMS and other things. If someone has a nice table or a way to wrap their head around that, I'd be happy to hear it.
As to the actual question, after installing ES 5.x, I cannot seem to get it working.
user@user-desktop:~/sites/project-web/project$ sudo systemctl restart elasticsearch
user@user-desktop:~/sites/project-web/project$ curl -X GET localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
user@user-desktop:~/sites/project-web/project$
Entries that are uncommented in /etc/elasticsearch/elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: project-search
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
transport.host: localhost
transport.tcp.port: 9300
#
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["0.0.0.0"]
#discovery.seed_hosts:["0.0.0.0"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
This is the status with which it fails:
user@user-desktop:~/sites/project-web/project$ systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2019-11-24 15:39:25 CST; 3min 54s ago
Docs: http://www.elastic.co
Process: 19098 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DI
Process: 19097 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 19098 (code=exited, status=1/FAILURE)
Nov 24 15:39:24 user-desktop systemd[1]: Starting Elasticsearch...
Nov 24 15:39:24 user-desktop systemd[1]: Started Elasticsearch.
Nov 24 15:39:24 user-desktop elasticsearch[19098]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Nov 24 15:39:25 user-desktop systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 15:39:25 user-desktop systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
In /var/log/elasticsearch/project-search.log I find the following error:
[2019-11-24T15:46:44,319][INFO ][o.e.n.Node ] [node-1] initializing ...
[2019-11-24T15:46:44,410][ERROR][o.e.b.Bootstrap ] Exception
org.elasticsearch.ElasticsearchException: java.io.IOException: failed to read [id:0, legacy:false, file:/var/lib/elasticsearch/nodes/0/_state/node-0.st]
at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:196) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:335) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeEnvironment.loadOrCreateNodeMetaData(NodeEnvironment.java:418) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:267) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.node.Node.<init>(Node.java:265) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.node.Node.<init>(Node.java:245) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:233) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:233) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.6.16.jar:5.6.16]
Caused by: java.io.IOException: failed to read [id:0, legacy:false, file:/var/lib/elasticsearch/nodes/0/_state/node-0.st]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:328) ~[elasticsearch-5.6.16.jar:5.6.16]
... 14 more
Caused by: java.lang.IllegalArgumentException: [node_meta_data] unknown field [node_version], parser not found
at org.elasticsearch.common.xcontent.ObjectParser.getParser(ObjectParser.java:399) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:159) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.xcontent.ObjectParser.apply(ObjectParser.java:183) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeMetaData$1.fromXContent(NodeMetaData.java:110) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.env.NodeMetaData$1.fromXContent(NodeMetaData.java:94) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.read(MetaDataStateFormat.java:203) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:323) ~[elasticsearch-5.6.16.jar:5.6.16]
... 14 more
Could someone tell me what is going on? Any help on resolving this and getting ES to work would be appreciated.
Looks like an inconsistency between Elasticsearch versions. If you previously had data indexed with ES version 7.x, the data that instance left on disk is incompatible with ES version 5.x.
Remove the Elasticsearch data directory:
sudo rm -rf /var/lib/elasticsearch
and reinstall Elasticsearch.
Worked for me.
For Mac users using brew, first clean out all the brew files:
brew uninstall elasticsearch
rm -rf /usr/local/etc/elasticsearch
rm -rf /usr/local/var/lib/elasticsearch
Then reinstall your Elasticsearch version, for example:
brew install elasticsearch@6
Make sure Elasticsearch is pointing to a compatible Java version:
nano /usr/local/opt/elasticsearch@6/bin/elasticsearch
Then change this line to your compatible version:
JAVA_HOME="${JAVA_HOME:-/usr/local/opt/openjdk@<YOUR_COMPATIBLE_VERSION>/libexec/openjdk.jdk/Contents/Home}" exec "/usr/local/Cellar/elasticsearch@6/6.8.23/libexec/bin/elasticsearch" "$@"
Run Elasticsearch in your terminal:
elasticsearch

Failed to write pid zookeeper installing zookeeper

I was following previous posts but am still not able to resolve the issue. I am trying to install ZooKeeper and start it in order to run Summingbird, which provides bolts/spouts to Storm for online and batch processing. I installed ZooKeeper version 3.4.6 first and was getting a ClassNotFoundException. After looking at the post
ClassNotFoundException for Zookeeper while building Storm
I downgraded to version 3.3.6, and now I am not even able to start the ZooKeeper server. Any help will be really appreciated.
root@cp-1:/users/username/zookeeper-3.3.6/bin# ./zkServer.sh start
JMX enabled by default
Using config: /users/username/zookeeper-3.3.6/bin/../conf/zoo.cfg
Starting zookeeper ... ./zkServer.sh: 93: [: /tmp/zookeeper/: unexpected operator
./zkServer.sh: 103: ./zkServer.sh: cannot create /tmp/zookeeper/
The number of snapshots to retain in dataDir/zookeeper_server.pid: Directory nonexistent
FAILED TO WRITE PID
This is what my zoo.cfg file looks like:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper/
dataLogDir=/tmp/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=10.11.10.3:2888:3888
server.2=10.11.10.4:2888:3888
This is what the directory permissions look like:
drwxr-xr-x 2 username oppts-PG0 4096 Nov 25 14:35 zookeeper
drwxr-xr-x 3 root root 4096 Nov 25 14:46 logs
drwxr-xr-x 2 root root 4096 Nov 25 14:46 logs/zookeeper
As stated in the comments in zoo.cfg, you had better not set dataDir to /tmp/zookeeper:
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
Try setting dataDir to another directory that you have created, and then restart zkServer.sh.

Zookeeper cluster set up

I am able to set up a ZooKeeper cluster on one machine with three different ports, but when I do the same with different IPs, to have ZooKeeper instances on different machines, it throws the following error:
2014-11-20 12:16:24,819 [myid:1] - INFO [main:QuorumPeerMain@127] - Starting quorum peer
2014-11-20 12:16:24,827 [myid:1] - INFO [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181
2014-11-20 12:16:24,842 [myid:1] - INFO [main:QuorumPeer@959] - tickTime set to 2000
2014-11-20 12:16:24,842 [myid:1] - INFO [main:QuorumPeer@979] - minSessionTimeout set to -1
2014-11-20 12:16:24,842 [myid:1] - INFO [main:QuorumPeer@990] - maxSessionTimeout set to -1
2014-11-20 12:16:24,842 [myid:1] - INFO [main:QuorumPeer@1005] - initLimit set to 10
2014-11-20 12:16:24,857 [myid:1] - INFO [Thread-1:QuorumCnxManager$Listener@504] - My election bind port: /172.16.1.175:2223
2014-11-20 12:16:24,870 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2014-11-20 12:16:24,873 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id = 1, proposed zxid=0x0
2014-11-20 12:16:24,876 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2014-11-20 12:16:24,881 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 2 at election address /172.16.1.170:2223
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
at java.lang.Thread.run(Thread.java:744)
Have you started ZooKeeper on all three nodes? In a multi-node setup (assuming you have a distributed environment with multiple machines), every server knows about the other nodes present in the cluster, known as the ensemble. It does this by looking at the following lines in the zoo.cfg file:
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
The clustered (multi-server) setup documentation says:
As long as a majority of the ensemble are up, the service will be available. Because Zookeeper requires a majority, it is best to use an odd number of machines. For example, with four machines ZooKeeper can only handle the failure of a single machine; if two machines fail, the remaining two machines do not constitute a majority. However, with five machines ZooKeeper can handle the failure of two machines
Unless you start the process on all three nodes, they won't be able to communicate with each other and will keep logging such errors. This will probably help you get somewhere.
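Once all three are started, you can also verify from Java that each node accepts client connections (a quick sketch using the ZooKeeper client API; the host and port are placeholders for your own nodes):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkPing {
    public static void main(String[] args) throws Exception {
        String connect = "172.16.1.170:2181"; // placeholder: check each node in turn
        final CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper(connect, 3000, new Watcher() {
            public void process(WatchedEvent event) {
                if (event.getState() == Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        if (connected.await(5, TimeUnit.SECONDS)) {
            System.out.println("Connected, root znodes: " + zk.getChildren("/", false));
        } else {
            System.out.println("Could not connect to " + connect);
        }
        zk.close();
    }
}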
How to Setup Zookeeper for Multiple Clusters or Remote servers?
Step 1: Check that Java 1.8.0 or above is available on the system (under /opt):
/opt/> java -version
Step 2: Download Zookeeper 3.3.6 using the command below:
sudo wget http://redrockdigimark.com/apachemirror/zookeeper/zookeeper-3.3.6/zookeeper-3.3.6.tar.gz
Step 3: Extract the file using the command below:
sudo tar xzf zookeeper-3.3.6.tar.gz -C /opt/
Step 4: Map zookeeper-3.3.6 to zookeeper with a symlink, then change into the conf directory:
/opt/> ln -s zookeeper-3.3.6 zookeeper
/opt/> cd zookeeper/conf
Step 5: Create the configuration file by copying zoo_sample.cfg to zoo.cfg:
/opt/zookeeper/conf/> cp zoo_sample.cfg zoo.cfg
Step 6: Edit zoo.cfg:
/opt/zookeeper/conf/> sudo vi zoo.cfg
Set the data directory: dataDir=/var/lib/zookeeper
Step 7: Create a file named myid (no extension) under /var/lib/zookeeper
and give it the unique id 1 for server1.
Add all the cluster servers at the bottom of zoo.cfg:
server.1=0.0.0.0:2888:3888
server.2=184.72.205.209:2888:3888
server.3=34.207.92.20:2888:3888
Step 8: On server2, create the same myid file (no extension) under /var/lib/zookeeper
and give it the unique id 2.
Step 9: Apply the same configuration on the second server, as below:
server.1=34.229.138.19:2888:3888
server.2=0.0.0.0:2888:3888
server.3=34.207.92.20:2888:3888
Step 10: Install the nc and lsof packages:
sudo yum install nc
sudo yum install lsof
Step 11: Now start ZooKeeper on all servers:
sudo /opt/zookeeper/bin/zkServer.sh start
Step 12: To stop the ZooKeeper server:
sudo /opt/zookeeper/bin/zkServer.sh stop
To check the status of the ZooKeeper server:
sudo /opt/zookeeper/bin/zkServer.sh status
Important points to note:
1. ZooKeeper needs 2F+1 servers to tolerate F failures, i.e. to survive the failure of 1 server you need (2*1)+1 = 3 servers, and to survive the failure of 2 servers you need (2*2)+1 = 5 servers.
2. All the servers should have the zoo.cfg configuration file, and the local server's own entry should use the IP 0.0.0.0.
3. ZooKeeper uses port 2888 for the follower nodes to connect to the leader node.
4. Port 3888 is for peer-to-peer communication (leader election).
5. Leader election is handled by ZooKeeper automatically; if the leader goes down, a new leader is elected very quickly and the follower information is shared with it.
6. In the zoo.cfg configuration file the client port is conventionally 2181.
