I'm running a Kubernetes cluster in which I'm deploying a "cloud native Hazelcast" following the instructions on the kubernetes-hazelcast GitHub page. Once I have a number of Hazelcast instances running, I try to connect a Java client to one of the instances, but the connection fails.
Some background
Using a Kubernetes external endpoint I can connect to Hazelcast from outside the Kubernetes cluster. When I do a REST call with curl kubernetes-master:32469/hazelcast/rest/cluster, I get a correct response from Hazelcast with its cluster information. So I know my endpoint works.
The hazelcast-kubernetes deployment uses the hazelcast-kubernetes-bootstrapper, which allows some configuration by setting environment variables on the replication controller, but I'm using all defaults. So my group and password are "someGroup" and "someSecret".
The Java client
My Java client code is really straightforward:
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().setConnectionAttemptLimit(0); // 0 = retry indefinitely
clientConfig.getNetworkConfig().setConnectionTimeout(10000);
clientConfig.getNetworkConfig().setConnectionAttemptPeriod(2000);
clientConfig.getNetworkConfig().addAddress("kubernetes-master:32469");
clientConfig.getGroupConfig().setName("someGroup");
clientConfig.getGroupConfig().setPassword("someSecret");
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
When I start my client, this is the log output of the Hazelcast container:
2016-07-05 12:54:38.143 INFO 5 --- [thread-Acceptor] com.hazelcast.nio.tcp.SocketAcceptor : [172.16.15.4]:5701 [someGroup] [3.5.2] Accepting socket connection from /172.16.29.0:54333
2016-07-05 12:54:38.143 INFO 5 --- [ cached4] c.h.nio.tcp.TcpIpConnectionManager : [172.16.15.4]:5701 [someGroup] [3.5.2] Established socket connection between /172.16.15.4:5701
2016-07-05 12:54:38.157 INFO 5 --- [.IO.thread-in-1] c.h.nio.tcp.SocketClientMessageReader : [172.16.15.4]:5701 [someGroup] [3.5.2] Unknown client type: <
And the console output of the client:
jul 05, 2016 2:54:37 PM com.hazelcast.core.LifecycleService
INFO: HazelcastClient[hz.client_0_someGroup][3.6.2] is STARTING
jul 05, 2016 2:54:38 PM com.hazelcast.core.LifecycleService
INFO: HazelcastClient[hz.client_0_someGroup][3.6.2] is STARTED
jul 05, 2016 2:54:48 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
WARNING: Unable to get alive cluster connection, try in 0 ms later, attempt 1 of 2147483647.
jul 05, 2016 2:54:58 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
WARNING: Unable to get alive cluster connection, try in 0 ms later, attempt 2 of 2147483647.
jul 05, 2016 2:55:08 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
etc...
The client just keeps trying to connect but no connection is ever established.
What am I missing?
So why won't my client connect to the Hazelcast instance? Is there some configuration I'm missing?
Not sure about the official Kubernetes support; however, Hazelcast has a Kubernetes discovery plugin (based on the new discovery SPI) that works on both clients and nodes: https://github.com/noctarius/hazelcast-kubernetes-discovery
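For illustration, wiring that plugin into the client would look roughly like the following. This is a sketch only: the factory class and the service-dns property are taken from that plugin's README and may differ between plugin versions, and the service DNS name is a placeholder.

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.DiscoveryStrategyConfig;
import com.hazelcast.core.HazelcastInstance;
import com.noctarius.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategyFactory;

public class DiscoveryClient {
    public static void main(String[] args) {
        ClientConfig config = new ClientConfig();
        // The discovery SPI is off by default in 3.6.x and must be enabled explicitly.
        config.setProperty("hazelcast.discovery.enabled", "true");

        DiscoveryStrategyConfig strategy =
                new DiscoveryStrategyConfig(new HazelcastKubernetesDiscoveryStrategyFactory());
        // Resolve cluster members through the Kubernetes service DNS name
        // (placeholder value -- substitute your own service).
        strategy.addProperty("service-dns", "hazelcast.default.svc.cluster.local");
        config.getNetworkConfig().getDiscoveryConfig().addDiscoveryStrategyConfig(strategy);

        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
    }
}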
Looking at the console logs, you have different Hazelcast versions on the node (3.5.2) and the client (3.6.2). Can you either update both to 3.6.4, i.e. the latest, or change the cluster to 3.6.2 to match the client? 3.6.x has many configuration changes and many bug fixes as well.
I would like to configure logging from Google Cloud Bigtable (gRPC) in the same way as the other modules of my application.
Logs via SLF4J/Log4j, with properties configured to suppress INFO logs from Spark:
2017-01-11 10:44:08 INFO algoServingLauncherTest$:12 - Starting algo Serving...
2017-01-11 10:44:09 INFO MongoDBAlgorithm$:12 - Retrieving all algorithms from mongodb://mongo:27017/mycompany/algorithms...
2017-01-11 10:44:09 INFO MongoDBAlgorithmHandler$:12 - Algorithms retrieval succeeded (1 algorithms)
2017-01-11 10:44:09 INFO MongoDBPredictor$:12 - Retrieving all predictors from mongodb://mongo:27017/mycompany/predictors...
2017-01-11 10:44:09 INFO MongoDBPredictorHandler$:12 - Predictors retrieval succeeded (13 predictors)
Logs from the Bigtable gRPC client:
Jan 11, 2017 10:44:23 AM com.google.bigtable.repackaged.io.grpc.internal.ManagedChannelImpl <init>
INFO: [ManagedChannelImpl@1cf6d1be] Created with target directaddress:///bigtable.googleapis.com/[omitted]:443
Jan 11, 2017 10:44:23 AM com.google.bigtable.repackaged.io.grpc.internal.ManagedChannelImpl <init>
INFO: [ManagedChannelImpl@4b29d1d2] Created with target directaddress:///bigtable.googleapis.com/[omitted]:443
Jan 11, 2017 10:44:23 AM com.google.bigtable.repackaged.io.grpc.internal.ManagedChannelImpl <init>
INFO: [ManagedChannelImpl@7f485fda] Created with target directaddress:///bigtable.googleapis.com/[omitted]:443
I would like to force logging to use SLF4J/Log4j.
What should I do?
Unfortunately there isn't an existing direct SLF4J integration in gRPC; gRPC currently uses java.util.logging, which has some complications.
You may want to read the open GitHub issue which discusses this in a lot more detail; in particular, this comment suggests that you may have some success with SLF4JBridgeHandler.
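A minimal sketch of that bridge setup, assuming the jul-to-slf4j artifact is on the classpath; call it once, as early as possible during application startup:

import org.slf4j.bridge.SLF4JBridgeHandler;

public class LoggingBootstrap {
    public static void init() {
        // Remove the default JUL handlers from the root logger so records
        // are not printed twice,
        SLF4JBridgeHandler.removeHandlersForRootLogger();
        // then install the bridge, which forwards java.util.logging records to SLF4J.
        SLF4JBridgeHandler.install();
        // From here on, gRPC's java.util.logging output is routed through
        // whatever backend SLF4J is bound to (Log4j in your case).
    }
}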
I'm working through an EJB tutorial where my client program invokes a method on a remote stateless EJB to add a book. Upon exit, the client retrieves and prints all the books from the EJB (I know it's not a good idea to store data in a list within a stateless EJB). All of this works fine, except that the initial RMI call also produces the following exception (I've included the full output from the client test as well).
Client output:
Nov 29, 2016 11:34:29 PM org.jboss.ejb.client.EJBClient <clinit>
INFO: JBoss EJB Client version 2.1.4.Final
**********************
Welcome to Book Store
**********************
Options
1. Add Book
2. Exit
Enter Choice: 1
Enter book name: Some book
Nov 29, 2016 11:34:44 PM org.xnio.Xnio <clinit>
INFO: XNIO version 3.4.0.Final
Nov 29, 2016 11:34:44 PM org.xnio.nio.NioXnio <clinit>
INFO: XNIO NIO Implementation Version 3.4.0.Final
Nov 29, 2016 11:34:44 PM org.jboss.remoting3.EndpointImpl <clinit>
INFO: JBoss Remoting version 4.0.21.Final
Nov 29, 2016 11:34:45 PM org.jboss.ejb.client.remoting.VersionReceiver handleMessage
INFO: EJBCLIENT000017: Received server version 2 and marshalling strategies [river]
Nov 29, 2016 11:34:45 PM org.jboss.ejb.client.remoting.RemotingConnectionEJBReceiver associate
INFO: EJBCLIENT000013: Successful version handshake completed for receiver context EJBReceiverContext{clientContext=org.jboss.ejb.client.EJBClientContext@4f7d0008, receiver=Remoting connection EJB receiver [connection=org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection@271053e1,channel=jboss.ejb,nodename=slave01:server01]} on channel Channel ID 87a6ebda (outbound) of Remoting connection 64bfbc86 to /127.0.0.1:8133 of endpoint "client-endpoint" <64bf3bbf>
Nov 29, 2016 11:34:45 PM org.jboss.ejb.client.remoting.RemotingConnectionClusterNodeManager getEJBReceiver
INFO: Could not create a connection for cluster node ClusterNode{clusterName='ejb', nodeName='slave01:server01', clientMappings=[ClientMapping{sourceNetworkAddress=/0:0:0:0:0:0:0:0, sourceNetworkMaskBits=0, destinationAddress='0.0.0.0', destinationPort=8080}], resolvedDestination=[Destination address=0.0.0.0, destination port=8080]} in cluster ejb
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.xnio.nio.WorkerThread$ConnectHandle.handleReady(WorkerThread.java:321)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:567)
at ...asynchronous invocation...(Unknown Source)
at org.jboss.remoting3.EndpointImpl.doConnect(EndpointImpl.java:294)
at org.jboss.remoting3.EndpointImpl.connect(EndpointImpl.java:430)
at org.jboss.ejb.client.remoting.NetworkUtil.connect(NetworkUtil.java:153)
at org.jboss.ejb.client.remoting.NetworkUtil.connect(NetworkUtil.java:133)
at org.jboss.ejb.client.remoting.ConnectionPool.getConnection(ConnectionPool.java:78)
at org.jboss.ejb.client.remoting.RemotingConnectionManager.getConnection(RemotingConnectionManager.java:51)
at org.jboss.ejb.client.remoting.RemotingConnectionClusterNodeManager.getEJBReceiver(RemotingConnectionClusterNodeManager.java:79)
at org.jboss.ejb.client.ClusterContext$EJBReceiverAssociationTask.call(ClusterContext.java:469)
at org.jboss.ejb.client.ClusterContext$EJBReceiverAssociationTask.call(ClusterContext.java:443)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
**********************
Welcome to Book Store
**********************
Options
1. Add Book
2. Exit
Enter Choice: 2
Book(s) entered so far: 2
1. test1
2. Some book
***Using second lookup to get library stateless object***
Book(s) entered so far: 2
1. test1
2. Some book
So everything with the client, other than the exception, appears to work correctly. I suspect this issue has something to do with the zeroed-out node addresses, but I'm not certain. The client properties file is below (in case that configuration is incorrect).
jboss-ejb-clients.properties:
endpoint.name=client-endpoint
remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
invocation.timeout=3000
reconnect.tasks.timeout=2000
# User Credentials
username=user
password=pass
# Remote Connections
remote.connections=h1,h2
remote.connection.h1.host=127.0.0.1
remote.connection.h1.port=8133
remote.connection.h1.username=user
remote.connection.h1.password=pass
remote.connection.h2.host=127.0.0.1
remote.connection.h2.port=8134
remote.connection.h2.username=user
remote.connection.h2.password=pass
# Cluster
remote.clusters=ejb
remote.cluster.ejb.connect.timeout=2500
remote.cluster.ejb.max-allowed-connected-nodes=2
remote.cluster.ejb.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.cluster.ejb.connect.options.org.xnio.Options.SSL_ENABLED=false
remote.cluster.ejb.username=user
remote.cluster.ejb.password=pass
After extensive research (and a good amount of trial and error with test code), I found a book on Safari (Java EE 7 Development with WildFly) that led me in the right direction. I had to drop the jboss-ejb-clients.properties file and add the ejb-client configuration found in the answer here to my main client class.
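In outline, the programmatic replacement for the properties file looks something like this, a sketch under the jboss-ejb-client 2.x API. The properties mirror the file above (trimmed to one connection); verify the class names against the client version you actually use.

import java.util.Properties;
import org.jboss.ejb.client.ContextSelector;
import org.jboss.ejb.client.EJBClientConfiguration;
import org.jboss.ejb.client.EJBClientContext;
import org.jboss.ejb.client.PropertiesBasedEJBClientConfiguration;
import org.jboss.ejb.client.remoting.ConfigBasedEJBClientContextSelector;

public class ClientBootstrap {
    public static void configure() {
        Properties props = new Properties();
        props.put("endpoint.name", "client-endpoint");
        props.put("remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", "false");
        props.put("remote.connections", "h1");
        props.put("remote.connection.h1.host", "127.0.0.1");
        props.put("remote.connection.h1.port", "8133");
        props.put("remote.connection.h1.username", "user");
        props.put("remote.connection.h1.password", "pass");

        // Build an EJB client context from the properties and make it the
        // active context for subsequent JNDI lookups/invocations.
        EJBClientConfiguration config = new PropertiesBasedEJBClientConfiguration(props);
        ContextSelector<EJBClientContext> selector = new ConfigBasedEJBClientContextSelector(config);
        EJBClientContext.setSelector(selector);
    }
}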
I am working on Spring Web MVC, using Spring Tool Suite and the Pivotal tc Server. I cannot find any server log messages in STS; the console only shows the server startup messages. I want to view all server messages, errors, and exceptions, but currently I cannot see any of them.
The console shows only messages like this:
Nov 19, 2016 1:08:21 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 1312 ms
Nov 19, 2016 1:08:26 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 5332 ms
I see the question is old, but I came across the same issue just now and didn't want to let this one go unanswered ;)
I found a solution from Pivotal themselves, but it has a little typo in it: the first line in the logging.properties is
handlers= java.util.logging.ConsoleHandler.level= INFO
but should rather be just
handlers= java.util.logging.ConsoleHandler
This should do the trick.
And if you still want to keep the default catalina, localhost, etc. log files of tc Server in addition to the console output, you can modify or copy the logging.properties file found inside your tc Server base instance ({pivotal-tc-server-folder}/base-instance/conf/).
There are many settings in there, but the one concerning the console output, which otherwise only ends up in catalina-{date}.log, is the line
.handlers = 1catalina.org.apache.juli.AsyncFileHandler
just change it to
.handlers = 1catalina.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler
and you'll have both: log files and console output.
I get an exception on the Tomcat console while using an embedded Elasticsearch instance. I have configured the embedded instance as a node client that joins the cluster when the application starts on Tomcat. Everything works fine for this cluster; however, I get the following exception while starting the instance. I also get the same exception when I start another node or shut down an existing node in the same cluster.
Apr 07, 2015 4:13:28 PM org.elasticsearch.discovery.zen.ping.multicast
WARNING: [Base] failed to read requesting data from /10.4.1.94:54328
java.io.IOException: Expected handle header, got [15]
        at org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:65)
        at org.elasticsearch.cluster.ClusterName.readFrom(ClusterName.java:64)
        at org.elasticsearch.cluster.ClusterName.readClusterName(ClusterName.java:58)
        at org.elasticsearch.discovery.zen.ping.multicast.MulticastZenPing$Receiver.run(MulticastZenPing.java:402)
        at java.lang.Thread.run(Thread.java:745)
(the same WARNING and stack trace repeat every two seconds)
From the exception it looks like a handshaking problem with the other cluster nodes; despite this issue, the cluster remains healthy and happy to serve its payload. I'm using the same Elasticsearch version (1.4.4) for both the Java client and the external installations, so the answer to this question is not valid anymore (ElasticSearch - failed to read requesting data). Also note that I've checked this with an isolated node client (a Java main program) and I don't see this exception there.
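For reference, one way to rule out stray multicast traffic on the subnet is to pin the embedded node to unicast discovery; under the 1.x API that looks roughly like this (a sketch; the cluster name and unicast host are placeholders):

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import static org.elasticsearch.node.NodeBuilder.nodeBuilder;

public class EmbeddedNodeClient {
    public static Client start() {
        // Client-only embedded node that skips multicast pings entirely.
        Node node = nodeBuilder()
                .clusterName("my-cluster")  // placeholder
                .client(true)               // joins the cluster, holds no data
                .settings(ImmutableSettings.settingsBuilder()
                        .put("discovery.zen.ping.multicast.enabled", false)
                        .put("discovery.zen.ping.unicast.hosts", "10.4.1.94:9300"))
                .node();
        return node.client();
    }
}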
I am working with a Java application that uses SolrJ to index documents to a Solr server.
In my local test environment, I run a local Solr instance on a Tomcat server on my Windows XP box. When I run the Java app from a different Windows box, the indexing completes successfully and the Solr log files look normal.
However, running the same Java application deployed on a Linux web server communicating with another Linux web server running Solr, I receive "read timed out" messages after every Solr update command:
Jul 14, 2011 3:12:31 AM org.apache.solr.core.SolrCore execute
INFO: [] webapp=/solr path=/update params={wt=javabin&version=1} status=400 QTime=20020
Jul 14, 2011 3:12:51 AM org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: {} 0 20021
Jul 14, 2011 3:12:51 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: java.net.SocketTimeoutException: Read timed out
        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:72)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
        at ...
Caused by: javax.xml.stream.XMLStreamException: java.net.SocketTimeoutException: Read timed out
Any idea why this might be happening? My suspicion is that something is closing these connections after they are initiated (e.g. web filtering software, firewall...), but the network admins at my workplace say that no traffic is being blocked.
Is it timing out only on updates, or on queries as well?
Check the server settings on the Linux server machine to see whether it has a very low timeout value configured.
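If a low client-side timeout turns out to be involved, the SolrJ client of that era (CommonsHttpSolrServer) also lets you raise its own timeouts; a sketch with placeholder values:

import java.net.MalformedURLException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class SolrClientFactory {
    public static CommonsHttpSolrServer create() throws MalformedURLException {
        // Placeholder URL -- point this at your Solr webapp.
        CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://solr-host:8080/solr");
        server.setConnectionTimeout(5000); // ms allowed to establish the TCP connection
        server.setSoTimeout(60000);        // ms allowed between bytes on an open socket
        return server;
    }
}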