I am getting this error repeatedly while traversing a result set of 100,000 rows. I have used
.withReconnectionPolicy(new ConstantReconnectionPolicy(1000))
.withRetryPolicy(DefaultRetryPolicy.INSTANCE)
.withQueryOptions(new QueryOptions().setFetchSize(2000))
in my connector. I am still getting this error; typically it fails after fetching about 80,000 rows.
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.16.12.143:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response), /172.16.12.141:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response))
    at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
    at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:259)
    at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:175)
    at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
    at com.payu.merchantAnalytics.FunnelHourlyUtilCql.groupByHourCql(FunnelHourlyUtilCql.java:87)
    at com.payu.merchantAnalytics.FunnelHourlyUtilCql.main(FunnelHourlyUtilCql.java:49)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
    at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /172.16.12.143:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response), /172.16.12.141:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response))
    at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108)
    at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
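For reference, the traversal is essentially the pattern below (the query and per-row handling are placeholders, not my exact code; the classes come from com.datastax.driver.core):
Statement stmt = new SimpleStatement("SELECT * FROM mykeyspace.mytable"); // placeholder query
ResultSet rs = session.execute(stmt);
for (Row row : rs) {
    // With setFetchSize(2000), the driver transparently requests the next
    // page of 2000 rows during iteration; each of those background page
    // requests can hit the "Timed out waiting for server response" above.
    process(row); // placeholder per-row handling
}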
Solved
Thanks Andy Tolbert for the answer. Setting the read timeout using SocketOptions.setReadTimeoutMillis(100000) worked.
Now the code looks like this:
SocketOptions socketOptions = new SocketOptions().setReadTimeoutMillis(100000);
Cluster cluster = Cluster.builder()
        .addContactPoints(nodes)
        .withReconnectionPolicy(new ConstantReconnectionPolicy(1000))
        .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
        .withQueryOptions(new QueryOptions().setFetchSize(2000))
        .withSocketOptions(socketOptions)
        .withCredentials(username, password)
        .build();
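For completeness, a usage sketch (the keyspace and table are placeholders):
Session session = cluster.connect();
for (Row row : session.execute("SELECT * FROM mykeyspace.mytable")) { // placeholder query
    // pages of 2000 rows are now fetched with a 100-second read timeout
}
session.close();
cluster.close();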
Thanks A Lot :)
Related
I'm trying to connect to HBase using the HBase client API in a Kerberized Cloudera cluster.
Sample code:
Configuration hbaseConf = HBaseConfiguration.create();
/*hbaseConf.set("hbase.master", "somenode.net:2181");
hbaseConf.set("hbase.client.scanner.timeout.period", "1200000");
hbaseConf.set("hbase.zookeeper.quorum",
"somenode.net,somenode2.net");
hbaseConf.set("zookeeper.znode.parent", "/hbase");*/
hbaseConf.setInt("timeout", 120000);
hbaseConf.set(TableInputFormat.INPUT_TABLE, tableName);
//hbaseConf.addResource("src/main/resources/hbase-site.xml");
UserGroupInformation.setConfiguration(hbaseConf);
UserGroupInformation.loginUserFromKeytab("principal", "keytab");
JavaPairRDD<ImmutableBytesWritable, Result> javaPairRdd = ctx
.newAPIHadoopRDD(hbaseConf, TableInputFormat.class,
ImmutableBytesWritable.class, Result.class);
I tried putting hbase-site.xml in the Maven project resources, and also passed it to the spark-submit command using --jars, but nothing works.
Error log:
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68545: row 'namespace:test,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hostname.net,60020,1511970022474, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:278)
at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:266)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:394)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:203)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
18/02/26 16:25:42 INFO spark.SparkContext: Invoking stop() from shutdown hook
The problem you are facing is because your environment is not properly set up.
I have answered my own question here.
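One common shape of that setup (assuming the linked fix is along these lines) is to load the cluster's real hbase-site.xml and core-site.xml before logging in; the paths, principal, and keytab below are placeholders:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

Configuration hbaseConf = HBaseConfiguration.create();
// Load the cluster's client configs instead of hard-coding properties;
// the paths below are placeholders for wherever your distribution keeps them.
hbaseConf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"));
hbaseConf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
UserGroupInformation.setConfiguration(hbaseConf);
// Placeholder principal and keytab path.
UserGroupInformation.loginUserFromKeytab("user@EXAMPLE.COM", "/path/to/user.keytab");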
Could someone help with this issue on NiFi 1.3.0 and Hive? I get the same error with Hive 1.2 and Hive 2.1.1. The Hive table is partitioned, bucketed, and stored in ORC format.
The partition is created on HDFS, but the data fails at the writing stage. Please check the logs below:
[5:07 AM] papesdiop: Failed connecting to EndPoint {metaStoreUri='thrift://localhost:9083', database='mydb', table='guys', partitionVals=[dev] }
[5:13 AM] papesdiop: I get the following in the log; hope it might help too:
[5:13 AM] papesdiop: Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://localhost:9083', database='mydb', table='guys', partitionVals=[dev] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:578)
FULL TRACE LOGS:
reconnect.
org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3906)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1863)
at sun.reflect.GeneratedMethodAccessor380.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy126.lock(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:573)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:547)
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:261)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:964)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:875)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$40(PutHiveStreaming.java:676)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$44(PutHiveStreaming.java:673)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2136)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2106)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:627)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$36(PutHiveStreaming.java:551)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:551)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2017-09-07 06:41:31,015 DEBUG [Timer-4] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=13ed53d2-015e-1000-c7b1-5af434c38751] Start sending heartbeat on all writers
2017-09-07 06:41:31,890 INFO [Timer-Driven Process Thread-3] hive.metastore Trying to connect to metastore with URI thrift://localhost:9083
2017-09-07 06:41:31,893 INFO [Timer-Driven Process Thread-3] hive.metastore Connected to metastore.
2017-09-07 06:41:31,911 ERROR [Timer-Driven Process Thread-3] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=13ed53d2-015e-1000-c7b1-5af434c38751] Failed to create HiveWriter for endpoint: {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=[dev] }: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=[dev] }
org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=[dev] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:964)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:875)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$40(PutHiveStreaming.java:676)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$44(PutHiveStreaming.java:673)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2136)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2106)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:627)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$36(PutHiveStreaming.java:551)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:551)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=[dev] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=[dev] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:578)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:547)
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:261)
... 25 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3906)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1863)
at sun.reflect.GeneratedMethodAccessor380.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy126.lock(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:573)
... 27 common frames omitted
2017-09-07 06:41:31,911 ERROR [Timer-Driven Process Thread-3] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=13ed53d2-015e-1000-c7b1-5af434c38751] Error connecting to Hive endpoint: table guys at thrift://localhost:9083
2017-09-07 06:41:31,911 DEBUG [Timer-Driven Process Thread-3] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=13ed53d2-015e-1000-c7b1-5af434c38751] has chosen to yield its resources; will not be scheduled to run again for 1000 milliseconds
2017-09-07 06:41:31,912 ERROR [Timer-Driven Process Thread-3] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=13ed53d2-015e-1000-c7b1-5af434c38751] Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://localhost:9083', database='default', table='guys', partitionVals=
The Hive table:
CREATE TABLE mydb.guys(
firstname string,
lastname string)
PARTITIONED BY (
job string)
CLUSTERED BY (
firstname)
INTO 10 BUCKETS
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS ORC
LOCATION
'hdfs://localhost:9000/user/papesdiop/guys'
TBLPROPERTIES ( 'transactional'='true')
Thanks in advance
If this is failing during the write to HDFS, perhaps your user does not have permissions to write to the target directory? If you have more information from the full stack trace please add it to your question, as it will help diagnose the problem. When I had this issue a while ago, it was because my NiFi user needed to be created on the target OS and added to the appropriate HDFS group(s) in order to get permission for PutHiveStreaming to write out to the ORC file(s) in HDFS.
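For example, a quick diagnostic sketch in Java to see who owns the table directory (the path comes from the LOCATION clause in your DDL):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf); // throws IOException
Path tableDir = new Path("hdfs://localhost:9000/user/papesdiop/guys"); // from the DDL above
FileStatus status = fs.getFileStatus(tableDir);
// The NiFi service user must be the owner, or in a group with write access.
System.out.println("owner=" + status.getOwner()
        + " group=" + status.getGroup()
        + " perms=" + status.getPermission());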
I am using the Elasticsearch 5.0.0 Java API to search an index, and the search results do come out. But at the end, the following problem also occurs.
Exception in thread "main" java.lang.IllegalStateException: failed to create a child event loop
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:88)
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58)
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47)
at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:58)
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:77)
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:72)
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:59)
at org.elasticsearch.transport.netty4.Netty4Transport.createBootstrap(Netty4Transport.java:200)
at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:171)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68)
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:182)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68)
at org.elasticsearch.client.transport.TransportClient.buildTemplate(TransportClient.java:169)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:228)
at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:69)
at org.elasticsearch.transport.client.PreBuiltTransportClient.<init>(PreBuiltTransportClient.java:65)
at util.ESUtil.getTransportClient(ESUtil.java:27)
at core.KKId2KKGeo.convertKKId2KKGeo(KKId2KKGeo.java:57)
at core.KKId2KKGeo.main(KKId2KKGeo.java:83)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: io.netty.channel.ChannelException: failed to open a new selector
at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:157)
at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:148)
at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:126)
at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:36)
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
... 23 more
Caused by: java.io.IOException: Unable to establish loopback connection
at sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:94)
at sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at sun.nio.ch.PipeImpl.<init>(PipeImpl.java:171)
at sun.nio.ch.SelectorProviderImpl.openPipe(SelectorProviderImpl.java:50)
at java.nio.channels.Pipe.open(Pipe.java:155)
at sun.nio.ch.WindowsSelectorImpl.<init>(WindowsSelectorImpl.java:127)
at sun.nio.ch.WindowsSelectorProvider.openSelector(WindowsSelectorProvider.java:44)
at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:155)
... 27 more
Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at java.nio.channels.SocketChannel.open(SocketChannel.java:189)
at sun.nio.ch.PipeImpl$Initializer$LoopbackConnector.run(PipeImpl.java:127)
at sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:76)
... 35 more
Here is some code that I wrote to search through the ES 5.0.0 Java API:
TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300));
QueryBuilder qb = QueryBuilders.queryStringQuery(d_kkid);
SearchResponse searchResponse = client.prepareSearch("kkinfo_index")
.setScroll(new TimeValue(60000))
.setQuery(qb)
.setSize(10).execute().actionGet();
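For reference, since the request opens a scroll, the usual pattern is to keep pulling pages with the scroll id until a page comes back empty (a sketch, reusing the same client; SearchHit is org.elasticsearch.search.SearchHit):
while (searchResponse.getHits().getHits().length > 0) {
    for (SearchHit hit : searchResponse.getHits().getHits()) {
        // handle each hit...
    }
    searchResponse = client.prepareSearchScroll(searchResponse.getScrollId())
            .setScroll(new TimeValue(60000))
            .execute().actionGet();
}
client.prepareClearScroll().addScrollId(searchResponse.getScrollId()).get(); // free the scroll context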
I have resolved my issue. It was caused by unreasonable code on my part: I was creating a new client on every loop iteration. My wrong code is below:
for (Object object : object_list) {
    client = getTransportClient();
    // do ES search...
}
The reasonable code creates the client once, outside the loop:
client = getTransportClient();
for (Object object : object_list) {
    // do ES search...
}
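Each TransportClient starts its own Netty event loops and selectors, which is exactly the resource the "failed to open a new selector / No buffer space available" trace shows running out, so creating one per iteration exhausts it. It also helps to close the single client when all searches are done:
client = getTransportClient(); // helper from my ESUtil class above
try {
    for (Object object : object_list) {
        // do ES search with the shared client...
    }
} finally {
    client.close(); // releases the Netty selectors and sockets
}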
We post messages to a queue from our Java application. Recently we moved to a new high-availability production server with the same configuration as our old one. But now we see a new issue whenever we try to post messages. After posting a few messages we get:
"MQJMS2005: failed to create MQQueueManager An MQException occurred:
Completion Code 2, Reason 2059 MQJE011: Socket connection attempt
refused"
We tried telnet and everything looks fine. The other odd part is that whenever our MQ team enables a trace to capture the error, it works fine.
org.springframework.jms.UncategorizedJmsException: Uncategorized exception occured during JMS processing; nested exception is javax.jms.JMSException: MQJMS2005: failed to create MQQueueManager for 'AXMQMTIMSPRDHA:AXMQMTIMSPRDHA_QM'; nested exception is com.ibm.mq.MQException: MQJE001: An MQException occurred: Completion Code 2, Reason 2059
MQJE011: Socket connection attempt refused
at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:316)
at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:168)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:469)
at org.springframework.jms.core.JmsTemplate.send(JmsTemplate.java:534)
at org.springframework.jms.core.JmsTemplate.convertAndSend(JmsTemplate.java:641)
at org.springframework.jms.core.JmsTemplate.convertAndSend(JmsTemplate.java:630)
at com.lowes.trf.rerate.jms.MessageSender.sendMessageAsXml(MessageSender.java:54)
at com.lowes.trf.rerate.jms.MessageSender$$FastClassByCGLIB$$b52d5402.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:698)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:96)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:260)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:631)
at com.lowes.trf.rerate.jms.MessageSender$$EnhancerByCGLIB$$c88e6908.sendMessageAsXml(<generated>)
at com.lowes.trf.rerate.service.ReRateService.sendMessageAsXml(ReRateService.java:151)
at com.lowes.trf.rerate.batch.controller.ReRateBatchController.postResponseToEsbAsXml(ReRateBatchController.java:300)
at com.lowes.trf.rerate.batch.controller.ReRateBatchController.execute(ReRateBatchController.java:229)
at com.lowes.trf.rerate.batch.controller.ReRateBatchController$$FastClassByCGLIB$$66bcc521.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:627)
at com.lowes.trf.rerate.batch.controller.ReRateBatchController$$EnhancerByCGLIB$$44b4ed47.execute(<generated>)
at com.lowes.trf.rerate.batch.controller.ReRateBatchController.main(ReRateBatchController.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:56)
at java.lang.reflect.Method.invoke(Method.java:620)
at org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
Caused by: javax.jms.JMSException: MQJMS2005: failed to create MQQueueManager for 'AXMQMTIMSPRDHA:AXMQMTIMSPRDHA_QM'
at com.ibm.mq.jms.services.ConfigEnvironment.newException(ConfigEnvironment.java:586)
at com.ibm.mq.jms.MQConnection.createQM(MQConnection.java:2110)
at com.ibm.mq.jms.MQConnection.createQMNonXA(MQConnection.java:1532)
at com.ibm.mq.jms.MQQueueConnection.<init>(MQQueueConnection.java:150)
at com.ibm.mq.jms.MQQueueConnectionFactory.createQueueConnection(MQQueueConnectionFactory.java:185)
at com.ibm.mq.jms.MQQueueConnectionFactory.createQueueConnection(MQQueueConnectionFactory.java:112)
at com.ibm.mq.jms.MQQueueConnectionFactory.createConnection(MQQueueConnectionFactory.java:1050)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:184)
at org.springframework.jms.core.JmsTemplate.access$500(JmsTemplate.java:85)
at org.springframework.jms.core.JmsTemplate$JmsTemplateResourceFactory.createConnection(JmsTemplate.java:1031)
at org.springframework.jms.connection.ConnectionFactoryUtils.doGetTransactionalSession(ConnectionFactoryUtils.java:297)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:453)
... 27 more
The IBM documentation says the remote queue manager is possibly down; make sure the channels you are using are fine. Also, when you see the connection being refused, run dspmq on the remote machine to ensure the queue manager is really up.
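If it helps while debugging, reason code 2059 can also be detected in code; a rough sketch (the connection factory setup is assumed to be the one configured in Spring):
import javax.jms.Connection;
import javax.jms.JMSException;
import com.ibm.mq.MQException;

try {
    Connection connection = connectionFactory.createConnection(); // assumed factory
} catch (JMSException e) {
    Throwable linked = e.getLinkedException();
    if (linked instanceof MQException && ((MQException) linked).reasonCode == 2059) {
        // MQRC_Q_MGR_NOT_AVAILABLE: verify the listener and channel, e.g.
        // with dspmq / DISPLAY LSSTATUS on the queue manager host.
    }
}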
I have a Java program that connects to MySQL 5.0 many times.
I use the mysql-connector-5.0.8.jar file to do it.
From time to time, with no specific period, I get the following exception:
Last packet sent to the server was 1 ms ago.
2014/01/09 11:43:50:536 ERROR [com.bnpparibas.peach.core.EventDispatcherThread:3] EventDispatcherThread - org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.hibernate.exception.JDBCConnectionException: Cannot open connection
org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.hibernate.exception.JDBCConnectionException: Cannot open connection
at org.springframework.orm.hibernate3.HibernateTransactionManager.doBegin(HibernateTransactionManager.java:599)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:374)
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:263)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:101)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy18.persistFeed(Unknown Source)
at com.bnpparibas.peach.core.FeedConfiguratorImpl.persistFeed(FeedConfiguratorImpl.java:102)
at com.bnpparibas.peach.core.EventFeedImpl.persistIfNeeded(EventFeedImpl.java:117)
at com.bnpparibas.peach.core.EventDispatcherThread.run(EventDispatcherThread.java:35)
Caused by: org.hibernate.exception.JDBCConnectionException: Cannot open connection
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:74)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:29)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:420)
at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:144)
at org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:129)
at org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:57)
at org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1290)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doBegin(HibernateTransactionManager.java:558)
... 9 more
Caused by: com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:
** BEGIN NESTED EXCEPTION **
java.net.UnknownHostException
MESSAGE: machine.fr.net.intra
STACKTRACE:
java.net.UnknownHostException: machine.fr.net.intra
at java.net.InetAddress.getAllByName0(InetAddress.java:1158)
at java.net.InetAddress.getAllByName(InetAddress.java:1084)
at java.net.InetAddress.getAllByName(InetAddress.java:1020)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:246)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:271)
at com.mysql.jdbc.Connection.createNewIO(Connection.java:2771)
at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
at org.apache.tomcat.dbcp.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
at org.apache.tomcat.dbcp.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
at org.apache.tomcat.dbcp.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1148)
at org.apache.tomcat.dbcp.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:82)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:417)
at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:144)
at org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:129)
at org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:57)
at org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1290)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doBegin(HibernateTransactionManager.java:558)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:374)
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:263)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:101)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy18.persistFeed(Unknown Source)
at com.bnpparibas.peach.core.FeedConfiguratorImpl.persistFeed(FeedConfiguratorImpl.java:102)
at com.bnpparibas.peach.core.EventFeedImpl.persistIfNeeded(EventFeedImpl.java:117)
at com.bnpparibas.peach.core.EventDispatcherThread.run(EventDispatcherThread.java:35)
** END NESTED EXCEPTION **
Do you know how I can solve this issue?
Or can you point me in a direction to solve it? ^^
It was indeed a DNS problem.
Changing to the IP address worked like a charm.
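For anyone who hits the same thing, a quick way to confirm it is DNS is to bypass the hostname in the JDBC URL (the IP, database, and credentials below are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;

// Placeholder values; the point is replacing the unresolvable
// machine.fr.net.intra hostname with its IP address.
String url = "jdbc:mysql://192.0.2.10:3306/mydb";
Connection conn = DriverManager.getConnection(url, "user", "password");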