ElasticSearch Node Client on Tomcat - failed to read requesting data - java

I get an exception on the Tomcat console while using an embedded Elasticsearch instance. I have configured the embedded instance as a node client that joins the cluster when the application starts on Tomcat. Everything is working fine for this cluster; however, I get the following exception while starting the instance. I also get the same exception when I start another node or shut down an existing node in the same cluster.
Apr 07, 2015 4:13:28 PM org.elasticsearch.discovery.zen.ping.multicast
WARNING: [Base] failed to read requesting data from /10.4.1.94:54328
java.io.IOException: Expected handle header, got [15]
    at org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:65)
    at org.elasticsearch.cluster.ClusterName.readFrom(ClusterName.java:64)
    at org.elasticsearch.cluster.ClusterName.readClusterName(ClusterName.java:58)
    at org.elasticsearch.discovery.zen.ping.multicast.MulticastZenPing$Receiver.run(MulticastZenPing.java:402)
    at java.lang.Thread.run(Thread.java:745)
(The identical warning and stack trace are logged again every two seconds: 4:13:30 PM, 4:13:32 PM, and so on.)
From the exception it looks like a handshaking problem with the other cluster nodes; despite this issue the cluster remains healthy and happy to serve its payload. I'm using the same Elasticsearch version (1.4.4) for both the Java client and the external installations, so the answer to this question no longer applies (ElasticSearch - failed to read requesting data). Also note that I've checked this with an isolated node client (a Java main program) and I don't see this exception there.
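For reference, the embedded node client is created roughly as follows. This is a minimal sketch assuming the Elasticsearch 1.4.x NodeBuilder API; the cluster name "mycluster" is a placeholder for the real one.
import org.elasticsearch.client.Client;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

// Join the existing cluster as a client-only node (holds no data, never becomes master)
Node node = NodeBuilder.nodeBuilder()
        .clusterName("mycluster") // placeholder; must match the external cluster's name
        .client(true)
        .node();                  // builds and starts the node
Client client = node.client();    // use this Client for index/search requests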

Related

Connecting Java client to Hazelcast-Kubernetes fails

I'm running a Kubernetes cluster in which I am deploying a "cloud native Hazelcast" following the instructions on the kubernetes-hazelcast GitHub page. Once I have a number of Hazelcast instances running, I try to connect a Java client to one of the instances, but for some reason the connection fails.
Some background
Using a Kubernetes external endpoint I can connect to Hazelcast from outside the Kubernetes cluster. When I do a REST call with curl kubernetes-master:32469/hazelcast/rest/cluster, I get a correct response from Hazelcast with its cluster information. So I know my endpoint works.
The hazelcast-kubernetes deployment uses the hazelcast-kubernetes-bootstrapper, which allows some configuration by setting environment variables on the replication controller, but I'm using all the defaults. So my group and password are "someGroup" and "someSecret".
The java client
My Java client code is really straightforward:
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().setConnectionAttemptLimit(0);   // 0 = keep retrying forever
clientConfig.getNetworkConfig().setConnectionTimeout(10000);    // connection timeout in ms
clientConfig.getNetworkConfig().setConnectionAttemptPeriod(2000);
clientConfig.getNetworkConfig().addAddress("kubernetes-master:32469");
clientConfig.getGroupConfig().setName("someGroup");
clientConfig.getGroupConfig().setPassword("someSecret");
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
When I start my client, this is the log output of the Hazelcast container:
2016-07-05 12:54:38.143 INFO 5 --- [thread-Acceptor] com.hazelcast.nio.tcp.SocketAcceptor : [172.16.15.4]:5701 [someGroup] [3.5.2] Accepting socket connection from /172.16.29.0:54333
2016-07-05 12:54:38.143 INFO 5 --- [ cached4] c.h.nio.tcp.TcpIpConnectionManager : [172.16.15.4]:5701 [someGroup] [3.5.2] Established socket connection between /172.16.15.4:5701
2016-07-05 12:54:38.157 INFO 5 --- [.IO.thread-in-1] c.h.nio.tcp.SocketClientMessageReader : [172.16.15.4]:5701 [someGroup] [3.5.2] Unknown client type: <
And this is the console output of the client:
jul 05, 2016 2:54:37 PM com.hazelcast.core.LifecycleService
INFO: HazelcastClient[hz.client_0_someGroup][3.6.2] is STARTING
jul 05, 2016 2:54:38 PM com.hazelcast.core.LifecycleService
INFO: HazelcastClient[hz.client_0_someGroup][3.6.2] is STARTED
jul 05, 2016 2:54:48 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
WARNING: Unable to get alive cluster connection, try in 0 ms later, attempt 1 of 2147483647.
jul 05, 2016 2:54:58 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
WARNING: Unable to get alive cluster connection, try in 0 ms later, attempt 2 of 2147483647.
jul 05, 2016 2:55:08 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
etc...
The client just keeps trying to connect but no connection is ever established.
What am I missing?
So why won't my client connect to the Hazelcast instance? Is there some configuration I'm missing?
Not sure about the official Kubernetes support; however, Hazelcast has a Kubernetes discovery plugin (based on the new Discovery SPI) that works on both clients and nodes: https://github.com/noctarius/hazelcast-kubernetes-discovery
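Wiring that plugin into the client looks roughly like this. A hedged sketch only: it assumes Hazelcast 3.6+ (where the client-side Discovery SPI was introduced), the plugin's com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategyFactory, and a "service-dns" property as documented in the plugin's README; the DNS name is a placeholder.
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.DiscoveryStrategyConfig;
import com.hazelcast.core.HazelcastInstance;

ClientConfig clientConfig = new ClientConfig();
// Enable the Discovery SPI on the client
clientConfig.setProperty("hazelcast.discovery.enabled", "true");
// Point the discovery strategy at the Hazelcast service's DNS entry (placeholder value)
DiscoveryStrategyConfig strategy = new DiscoveryStrategyConfig(
        "com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategyFactory");
strategy.addProperty("service-dns", "my-hazelcast.default.svc.cluster.local");
clientConfig.getNetworkConfig().getDiscoveryConfig().addDiscoveryStrategyConfig(strategy);
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);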
Looking at the console logs, you have different Hazelcast versions on the node (3.5.2) and the client (3.6.2). Can you either update both to 3.6.4, i.e. the latest, or change the cluster to 3.6.2 to match the client? 3.6.x has many configuration changes and many bug fixes as well.

Does ojdbc still use CharacterConverter002e.glb?

In my web-based application, we've recently implemented language (UTF-8) support. Since we got it working, the following error message has been spamming my Tomcat output. I don't have to have the web page open, only the application deployed, and this starts printing to the output at exactly 30-second intervals, 5 times per interval.
Oct 09, 2013 4:26:46 PM org.apache.catalina.loader.WebappClassLoader findResourceInternal
INFO: Illegal access: this web application instance has been stopped already.
Could not load oracle/sql/converter_xcharset/CharacterConverter002e.glb. The
eventual following stack trace is caused by an error thrown for debugging purposes
as well as to attempt to terminate the thread which caused the illegal access, and
has no functional impact.
I am using ojdbc6 and see that there isn't anything like "CharacterConverter002e.glb" in the library. What is causing my application to look for this? Am I using the wrong ojdbc? It doesn't appear to keep anything from working, but it's bothersome.

How to remove errors UnsupportedOperationException and Possible memory leak

I have been searching a lot but have not found anything that helps me solve this problem.
I have web services and am generating the stub using JAX-WS.
To access the web service methods I have written a class in which all the methods are static, like:
public static String getLocation()
{
    // call to the web service
}
I am specifying static because I want to confirm this is not the root cause of my problem.
Now when I check the logs in the Tomcat directory, the catalina log shows something like this. The error occurs when I start up or shut down the Tomcat server:
Mar 18, 2010 11:13:07 PM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: stop: Stopping web application at '/testWeb'
Mar 18, 2010 11:13:07 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: A web application appears to have started a thread named [leakingThread] but has failed to stop it. This is very likely to create a memory leak.
Another error I am seeing is
SEVERE: Unable to determine string representation of value of type [com.sun.xml.stream.writers.XMLStreamWriterImpl]
java.lang.UnsupportedOperationException
at com.sun.xml.stream.writers.XMLStreamWriterImpl.entrySet(XMLStreamWriterImpl.java:2134)
at java.util.AbstractMap.toString(AbstractMap.java:478)
at org.apache.catalina.loader.WebappClassLoader.clearThreadLocalMap(WebappClassLoader.java:2433)
at org.apache.catalina.loader.WebappClassLoader.clearReferencesThreadLocals(WebappClassLoader.java:2349)
at org.apache.catalina.loader.WebappClassLoader.clearReferences(WebappClassLoader.java:1921)
at org.apache.catalina.loader.WebappClassLoader.stop(WebappClassLoader.java:1833)
at org.apache.catalina.loader.WebappLoader.stop(WebappLoader.java:740)
at org.apache.catalina.core.StandardContext.stop(StandardContext.java:4920)
at org.apache.catalina.core.ContainerBase.removeChild(ContainerBase.java:936)
Please, can anyone help me clear these errors?
Thanks in advance.
My diagnosis: you have a Map implementation in a thread-local, and this map doesn't support the entrySet operation, which is triggered by Map#toString. To be precise, your exception is thrown from this line of code in com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.
Tomcat's code that clears the thread local is quite unfortunately written to unconditionally call toString on objects just to be able to log them if the debug level is on.
If you can't get rid of using a thread-local for this, you may have quite some trouble working around this problem.
Your thread leak, by the way, is very probably the result of failed cleanup due to the above error.
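To make the failure mode concrete, here is a minimal self-contained sketch (all names hypothetical) of a map in a thread-local whose entrySet() is unsupported; calling toString() on it, as Tomcat's cleanup code does, fails in exactly this way:
import java.util.AbstractMap;
import java.util.Map;
import java.util.Set;

public class ThreadLocalMapDemo {

    // A Map whose entrySet() is unsupported; AbstractMap.toString() iterates
    // entrySet(), so any toString() call on this map throws.
    static class WriteOnlyMap extends AbstractMap<String, String> {
        @Override
        public Set<Map.Entry<String, String>> entrySet() {
            throw new UnsupportedOperationException();
        }
    }

    private static final ThreadLocal<Map<String, String>> HOLDER =
            ThreadLocal.withInitial(WriteOnlyMap::new);

    public static void main(String[] args) {
        // Same call chain Tomcat's WebappClassLoader performs when it logs
        // leftover thread-local values during shutdown:
        System.out.println(HOLDER.get().toString()); // throws UnsupportedOperationException
    }
}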

Maven repository blocking specific ip addresses?

I am trying to download the file http://repo.maven.apache.org/maven2/org/sonatype/aether/aether-api/1.13.1/aether-api-1.13.1.jar as part of a maven build by my ci server but I get the error:
Downloading: http://repo.maven.apache.org/maven2/org/sonatype/aether/aether-api/1.13.1/aether-api-1.13.1.jar
01-Nov-2012 08:44:26 Nov 01, 2012 8:44:26 AM org.apache.maven.wagon.providers.http.httpclient.impl.client.DefaultRequestDirector tryExecute
01-Nov-2012 08:44:26 INFO: I/O exception (java.net.SocketException) caught when processing request: Connection reset
01-Nov-2012 08:44:26 Nov 01, 2012 8:44:26 AM org.apache.maven.wagon.providers.http.httpclient.impl.client.DefaultRequestDirector tryExecute
01-Nov-2012 08:44:26 INFO: Retrying request
After locating this error in the logs, I tried to download this artefact by hand with wget, which didn't work either. Further investigation revealed that downloading from another server from this provider (different IP, same IP range) is not possible either.
Downloading this file to servers from other providers was successful at the same time.
I was able to ping repo.maven.apache.org so the server was reachable.
Is it possible that the ip-address of my ci-server is blocked for download?
Do I have to move my ci-server to a different provider?
(atm my ci-server is hosted at jiffybox/domainfactory, if that helps answering the question)
There are currently serious problems with Maven's CDN provider; see
the support forum at
https://getsatisfaction.com/sonatype/topics/unable_to_dowload_maven_jetty_plugin_version_6_1_26_from_central
and the current issue reports at
https://issues.sonatype.org/browse/MVNCENTRAL-257
https://issues.sonatype.org/browse/MVNCENTRAL-259
https://issues.sonatype.org/browse/MVNCENTRAL-260
So obviously they are working on it.
The only workaround for me was to download through a VPN tunnel.

Solr Read Timeout (only in production environment)

I am working with a Java application that uses SolrJ to index documents to a Solr server.
In my local test environment, I run a local Solr instance on a Tomcat server on my Windows XP box. When I run the Java app from a different Windows box, the indexing completes successfully and the Solr log files look normal.
However, when the same Java application is deployed on a Linux web server and communicates with another Linux web server running Solr, I receive "read timed out" messages after every Solr update command:
Jul 14, 2011 3:12:31 AM org.apache.solr.core.SolrCore execute
INFO: [] webapp=/solr path=/update params={wt=javabin&version=1} status=400 QTime=20020
Jul 14, 2011 3:12:51 AM org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: {} 0 20021
Jul 14, 2011 3:12:51 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: java.net.SocketTimeoutException: Read timed out
    at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:72)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
    at ...
Caused by: javax.xml.stream.XMLStreamException: java.net.SocketTimeoutException: Read timed out
Any idea why this might be happening? My suspicion is that something is closing these connections after they are initiated (e.g. web filtering software, firewall...), but the network admins at my workplace say that no traffic is being blocked.
Is it timing out only on updates, or on queries as well?
Check the settings on the Linux server machine to see whether it has a very low timeout value configured.
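If you need to raise the timeout on the client side instead, SolrJ exposes it on its HTTP client wrapper. A minimal sketch, assuming the SolrJ 1.x/3.x-era CommonsHttpSolrServer and a placeholder URL:
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class SolrClientTimeouts {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point this at your real Solr instance
        CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://solr-host:8080/solr");
        server.setConnectionTimeout(5000); // ms to establish the TCP connection
        server.setSoTimeout(60000);        // ms to wait for data (the read timeout)
    }
}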
