Using a schemaless JSON converter with the Kafka HBase sink connector (Java)

I'm using the HBase sink connector for Kafka (https://github.com/mravi/kafka-connect-hbase), configured to use its JsonEventParser as the event parser class, as shown below:
{
"name": "test-hbase",
"config": {
"connector.class": "io.svectors.hbase.sink.HBaseSinkConnector",
"tasks.max": "1",
"topics": "hbase_connect",
"zookeeper.quorum": "xxxxx.xxxx.xx.xx,xxxxx.xxxx.xx.xx,xxxxx.xxxx.xx.xx",
"event.parser.class": "io.svectors.hbase.parser.JsonEventParser",
"hbase.hbase_connect.rowkey.columns": "id",
"hbase.hbase_connect.family": "col1",
}
}
And these are my Kafka Connect distributed worker properties:
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
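With value.converter.schemas.enable=false, the JsonConverter passes plain JSON values straight through. For comparison, if schemas.enable were true, each message value would need Kafka Connect's schema/payload envelope, roughly like this sketch (the field types here are assumptions for this record):
{
  "schema": {
    "type": "struct",
    "fields": [
      {"type": "string", "optional": false, "field": "id"},
      {"type": "string", "optional": true, "field": "name"}
    ]
  },
  "payload": {"id": "9", "name": "wis"}
}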
The problem is that when I produce a JSON message with no schema, the connector throws the NullPointerException below:
[2018-12-10 16:45:06,607] ERROR WorkerSinkTask{id=hbase_connect-0}
Task threw an uncaught and unrecoverable exception.
Task is being killed and will not recover until manually restarted.
(org.apache.kafka.connect.runtime.WorkerSinkTask:515)
java.lang.NullPointerException
at io.svectors.hbase.util.ToPutFunction.apply(ToPutFunction.java:78)
at io.svectors.hbase.sink.HBaseSinkTask.lambda$4(HBaseSinkTask.java:105)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at io.svectors.hbase.sink.HBaseSinkTask.lambda$3(HBaseSinkTask.java:105)
at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321)
at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1696)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at io.svectors.hbase.sink.HBaseSinkTask.put(HBaseSinkTask.java:104)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:288)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2018-12-10 16:45:06,607] ERROR WorkerSinkTask{id=hbase_connect-0}
Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:172)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:517)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:288)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
This is the message I produce:
{"id": "9","name": "wis"}
Any suggestions for this error?

It was an issue in the connector's handling of schemaless JSON.
It has since been fixed; see https://github.com/nishutayal/kafka-connect-hbase/issues/5
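For reference, a minimal Java producer to reproduce or verify with the same schemaless message (the broker address is a placeholder):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SchemalessJsonProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Plain JSON string value, no schema envelope -- matches schemas.enable=false
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("hbase_connect", "9", "{\"id\": \"9\",\"name\": \"wis\"}"));
            producer.flush();
        }
    }
}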

Related

Spring Boot 2 with StringRedisTemplate can't connect to Redis cluster in Docker

I set up a Redis cluster in Docker on AWS EC2, using the Docker swarm network. The node IPs run from 10.0.0.6:6379 to 10.0.0.11:6379 and are mapped to the EC2 static address as public address:5001 through public address:5006. I can access them successfully with a client tool like TablePlus using the public address. But when I use Spring Boot (version 2.1.1) and StringRedisTemplate to set a value in Redis, with this configuration file:
spring.redis.database=0
spring.redis.password=xxxxxx
spring.redis.cluster.nodes=public address:5001,public address:5002,public address:5003,public address:5004,public address:5005,public address:5006
spring.redis.timeout=5000
The pom is
<!-- redis -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-redis</artifactId>
</dependency>
I always get an error message like the one below:
2019-01-14 08:15:28,570 [http-nio-8081-exec-2] [org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:524)] - [INFO] Initializing Servlet 'dispatcherServlet'
2019-01-14 08:15:28,577 [http-nio-8081-exec-2] [org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:546)] - [INFO] Completed initialization in 7 ms
2019-01-14 08:15:52,589 [http-nio-8081-exec-8] [com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:110)] - [INFO] HikariPool-1 - Starting...
2019-01-14 08:15:56,831 [http-nio-8081-exec-8] [com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:123)] - [INFO] HikariPool-1 - Start completed.
2019-01-14 08:15:58,734 [http-nio-8081-exec-8] [io.lettuce.core.EpollProvider.<clinit>(EpollProvider.java:68)] - [INFO] Starting without optional epoll library
2019-01-14 08:15:58,736 [http-nio-8081-exec-8] [io.lettuce.core.KqueueProvider.<clinit>(KqueueProvider.java:70)] - [INFO] Starting without optional kqueue library
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-1] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.7:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.7:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.7:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-9] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.10:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.10:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.10:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-10] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.6:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.6:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.6:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-11] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.11:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.11:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.11:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-12] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.9:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.9:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.9:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-5] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.8:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.8:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.8:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:15,509 [lettuce-nioEventLoop-4-7] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.7:6379
2019-01-14 08:16:25,511 [lettuce-nioEventLoop-4-8] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.10:6379
2019-01-14 08:16:35,518 [lettuce-nioEventLoop-4-9] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.11:6379
2019-01-14 08:16:45,520 [lettuce-nioEventLoop-4-10] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.9:6379
2019-01-14 08:16:55,527 [lettuce-nioEventLoop-4-11] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.8:6379
2019-01-14 08:17:05,534 [lettuce-nioEventLoop-4-12] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.6:6379
Jan 14, 2019 8:17:05 AM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect] with root cause
io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.6:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
2019-01-14 08:21:56,268 [HikariPool-1 housekeeper] [com.zaxxer.hikari.pool.HikariPool$HouseKeeper.run(HikariPool.java:758)] - [WARN] HikariPool-1 - Retrograde clock change detected (housekeeper delta=29s293ms), soft-evicting connections from pool.
It looks like the client directly accesses the IP addresses inside the Docker swarm network rather than the exposed public addresses. I have spent a long time trying to find a solution but still can't fix this. Can anyone help me figure out how to solve this problem? Thank you in advance.
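The log shows the Lettuce client (Spring Boot 2's default Redis driver, used even with jedis on the classpath unless Lettuce is excluded) refreshing the cluster topology from the nodes themselves, which advertise their internal swarm addresses. The robust fix is usually on the Redis side, via cluster-announce-ip/cluster-announce-port on each container, so the cluster advertises reachable addresses. As a client-side sketch, assuming Spring Boot 2.1 / Lettuce 5.1 APIs (host names and password below are placeholders), you can pin the seed nodes and relax node-membership validation:

import java.time.Duration;
import java.util.Arrays;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.RedisPassword;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class RedisClusterConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // Seed with the publicly reachable addresses (placeholders).
        RedisClusterConfiguration cluster = new RedisClusterConfiguration(
                Arrays.asList("public-address:5001", "public-address:5002", "public-address:5003"));
        cluster.setPassword(RedisPassword.of("xxxxxx")); // placeholder password

        // Refresh the topology periodically and do not reject connections to nodes
        // outside the advertised topology; this helps only if the seed addresses
        // themselves are reachable from the application.
        ClusterTopologyRefreshOptions refresh = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofSeconds(30))
                .build();
        ClusterClientOptions options = ClusterClientOptions.builder()
                .topologyRefreshOptions(refresh)
                .validateClusterNodeMembership(false)
                .build();

        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .commandTimeout(Duration.ofSeconds(5))
                .clientOptions(options)
                .build();
        return new LettuceConnectionFactory(cluster, clientConfig);
    }
}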

Apache NiFi: PutHiveStreaming is not connecting

I have a simple process flow that follows this example: https://community.hortonworks.com/articles/52856/stream-data-into-hive-like-a-king-using-nifi.html
The flow files go through the entire process but then fail to write to the Hive DB in the "Stream CSV to Hive" processor.
When I look at the nifi-app.log, I get the following exception:
2018-03-03 00:51:33,942 ERROR [Timer-Driven Process Thread-8] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Error connecting to Hive endpoint: table olympics at thrift://master:9083
2018-03-03 00:51:33,947 ERROR [Timer-Driven Process Thread-8] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }: org.apache.nifi.processors.hive.PutHiveStreaming$ShouldRetryException: Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
org.apache.nifi.processors.hive.PutHiveStreaming$ShouldRetryException: Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onHiveRecordsError$1(PutHiveStreaming.java:527)
at org.apache.nifi.processor.util.pattern.ExceptionHandler$OnError.lambda$andThen$0(ExceptionHandler.java:54)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onHiveRecordError$2(PutHiveStreaming.java:545)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:148)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$12(PutHiveStreaming.java:677)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2174)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2144)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:631)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$4(PutHiveStreaming.java:555)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:555)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:968)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:879)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$8(PutHiveStreaming.java:680)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
... 19 common frames omitted
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:578)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:547)
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:261)
... 25 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3906)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1863)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy85.lock(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:573)
... 27 common frames omitted
I have the Hive metastore daemon running, the default port 9083 is open, and I can connect to it with telnet. Hive is running on the same machine as NiFi, which has an entry in /etc/hosts naming it master.
The exception I'm getting is:
org.apache.thrift.transport.TTransportException: null
which means a null message was passed up to the parent Exception class's constructor.
I've also tried older versions of Apache NiFi and Apache Hive. With NiFi 1.4 and Hive 1.2.2 I now get a different error:
2018-03-13 19:17:20,126 INFO [put-hive-streaming-0] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:20,220 INFO [put-hive-streaming-0] hive.metastore Connected to metastore.
2018-03-13 19:17:20,524 INFO [Timer-Driven Process Thread-6] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:20,525 INFO [Timer-Driven Process Thread-6] hive.metastore Connected to metastore.
2018-03-13 19:17:21,204 WARN [put-hive-streaming-0] o.a.h.h.m.RetryingMetaStoreClient MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.TApplicationException: Internal error processing open_txns
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_open_txns(ThriftHiveMetastore.java:3834)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.open_txns(ThriftHiveMetastore.java:3821)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openTxns(HiveMetaStoreClient.java:1841)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy122.openTxns(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.openTxnImpl(HiveEndPoint.java:520)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:504)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325)
at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$2(HiveWriter.java:259)
at org.apache.nifi.util.hive.HiveWriter.lambda$callWithTimeout$4(HiveWriter.java:365)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-03-13 19:17:22,205 INFO [put-hive-streaming-0] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:22,207 INFO [put-hive-streaming-0] hive.metastore Connected to metastore.
2018-03-13 19:17:22,214 ERROR [Timer-Driven Process Thread-6] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Failed to create HiveWriter for endpoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:968)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:879)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$8(PutHiveStreaming.java:680)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$12(PutHiveStreaming.java:677)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2174)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2144)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:631)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$4(PutHiveStreaming.java:555)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:555)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionBatchUnAvailable: Unable to acquire transaction batch on end point: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:511)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325)
at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$2(HiveWriter.java:259)
at org.apache.nifi.util.hive.HiveWriter.lambda$callWithTimeout$4(HiveWriter.java:365)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 common frames omitted
Caused by: org.apache.thrift.TApplicationException: Internal error processing open_txns
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_open_txns(ThriftHiveMetastore.java:3834)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.open_txns(ThriftHiveMetastore.java:3821)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openTxns(HiveMetaStoreClient.java:1841)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy122.openTxns(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.openTxnImpl(HiveEndPoint.java:520)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:504)
... 9 common frames omitted
What version of NiFi/HDF are you using, and what kind/version of Hadoop cluster are you connecting to (e.g. HDP)? It is a known issue that Apache NiFi will not work with HDP Hive starting with HDP 2.5 (or maybe 2.4); you would have to download the NiFi-only distribution of Hortonworks Data Flow (HDF) to make it work.
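As an aside on the earlier "Internal error processing open_txns": the metastore typically throws that when Hive is not configured for ACID transactions, which Hive Streaming requires. A sketch of the usual hive-site.xml settings (typical values, not taken from this setup; the target table must also be bucketed, stored as ORC, and created with 'transactional'='true'):

<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>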

Unable to add artifact to the governance registry using the WSO2 Governance REST API; it throws an exception. Can somebody help resolve this issue?

I am trying to add the artifact through SoapUI using the standard JSON format described in the WSO2 G-Reg documentation.
Below is the exception:
ERROR {org.wso2.carbon.governance.api.common.GovernanceArtifactManager} - Failed to add artifact: artifact id: 34d84d58-9102-48db-903a-932d7a0141f8, path: /trunk/restservices/1.0.0/R1_Demo2. An exception occurred while executing handler chain. null {org.wso2.carbon.governance.api.common.GovernanceArtifactManager}
org.wso2.carbon.registry.core.exceptions.RegistryException: An exception occurred while executing handler chain. null
at org.wso2.carbon.registry.core.jdbc.handlers.HandlerManager.get(HandlerManager.java:2466)
at org.wso2.carbon.registry.core.jdbc.handlers.HandlerLifecycleManager.get(HandlerLifecycleManager.java:910)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.get(EmbeddedRegistry.java:512)
at org.wso2.carbon.registry.extensions.utils.CommonUtil.getDefaultLifecycle(CommonUtil.java:794)
at org.wso2.carbon.registry.extensions.handlers.utils.RESTServiceUtils.addServiceToRegistry(RESTServiceUtils.java:406)
at org.wso2.carbon.registry.extensions.handlers.RESTServiceMediaTypeHandler.put(RESTServiceMediaTypeHandler.java:198)
at org.wso2.carbon.registry.core.jdbc.handlers.HandlerManager.put(HandlerManager.java:2503)
at org.wso2.carbon.registry.core.jdbc.handlers.HandlerLifecycleManager.put(HandlerLifecycleManager.java:957)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.put(EmbeddedRegistry.java:697)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.put(CacheBackedRegistry.java:591)
at org.wso2.carbon.registry.core.session.UserRegistry.putInternal(UserRegistry.java:828)
at org.wso2.carbon.registry.core.session.UserRegistry.access$1000(UserRegistry.java:61)
at org.wso2.carbon.registry.core.session.UserRegistry$11.run(UserRegistry.java:804)
at org.wso2.carbon.registry.core.session.UserRegistry$11.run(UserRegistry.java:801)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.put(UserRegistry.java:801)
at org.wso2.carbon.governance.api.common.GovernanceArtifactManager.addGovernanceArtifact(GovernanceArtifactManager.java:208)
at org.wso2.carbon.governance.api.generic.GenericArtifactManager.addGenericArtifact(GenericArtifactManager.java:218)
at org.wso2.carbon.governance.rest.api.Asset.createGovernanceAsset(Asset.java:505)
at org.wso2.carbon.governance.rest.api.Asset.createAsset(Asset.java:172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:188)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:104)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:204)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:101)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:58)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:94)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:249)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:248)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:222)

UnresolvedAddressException with MLlib/Spark

When I run MLlib on files with more than one partition in our cluster, I get the following exception:
16/08/14 12:43:23 WARN TaskSetManager: Lost task 2.0 in stage 2.1 (TID 49, da06.qcri.org): FetchFailed(BlockManagerId(3, da08.qcri.org, 33322), shuffleId=0, mapId=5, reduceId=2, message=
org.apache.spark.shuffle.FetchFailedException: Failed to connect to da08.qcri.org:33322
at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:323)
at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:300)
at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:51)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at org.apache.spark.util.collection.ExternalAppendOnlyMap.insertAll(ExternalAppendOnlyMap.scala:152)
at org.apache.spark.Aggregator.combineCombinersByKey(Aggregator.scala:58)
at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:83)
at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:98)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to connect to ***.org:33322
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:90)
at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
at org.apache.spark.network.shuffle.RetryingBlockFetcher.access$200(RetryingBlockFetcher.java:43)
at org.apache.spark.network.shuffle.RetryingBlockFetcher$1.run(RetryingBlockFetcher.java:170)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:123)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:621)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:209)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:207)
at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1097)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:438)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:908)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:203)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
In the slaves configuration file, I have the nodes listed by IP, not by hostname. In addition, when I ping the machines from the master node by hostname, there is no issue.
Has anyone had a similar issue, or does anyone have an idea how to solve it?
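For what it's worth, java.nio.channels.UnresolvedAddressException thrown from sun.nio.ch.Net.checkAddress means the connecting node could not resolve the peer hostname (here da08.qcri.org) at all. Since shuffle fetches run worker-to-worker, resolving the name from the master is not enough; every worker must resolve it too. One common check/workaround is an identical hosts mapping on every node, sketched here with placeholder addresses:

# /etc/hosts on every master and worker node (addresses are placeholders)
10.0.0.x   da06.qcri.org
10.0.0.y   da08.qcri.org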

Error starting Camel subsystem in WildFly 8.2.0.Final

I get the following exception in standalone/log/server.log when I follow the instructions in http://blog.eisele.net/2014/12/wildfly-camel-subsystem-for-wildfly-integrates-javaee-getting-started.html to use the Camel subsystem in WildFly 8.2.0.Final.
The issue seems to be resolved in a later version of Camel (https://issues.apache.org/jira/browse/CAMEL-7992), so I tried changing the org/apache/camel/core module to use camel-core-2.14.1.jar, but I still see the same exception. Any pointers on how I can resolve this? I am eager to try out the Camel subsystem.
2014-12-23 14:35:09,950 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) JBAS014613: Operation ("deploy") failed - address: ([("deployment" => "wildfly-camel.war")]) - failure description: {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"wildfly-camel.war\".WeldStartService" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"wildfly-camel.war\".WeldStartService: Failed to start service
Caused by: org.jboss.weld.exceptions.DeploymentException: Exception List with 1 exceptions:
Exception 0 :
org.jboss.weld.exceptions.IllegalStateException: WELD-000143: Container lifecycle event method invoked outside of extension observer method invocation.
at org.jboss.weld.bootstrap.events.ContainerEvent.checkWithinObserverNotification(ContainerEvent.java:61)
at org.jboss.weld.bootstrap.events.ProcessAnnotatedTypeImpl.getAnnotatedType(ProcessAnnotatedTypeImpl.java:56)
at org.apache.camel.cdi.internal.CamelContextConfig.configure(CamelContextConfig.java:47)
at org.apache.camel.cdi.internal.CamelContextBean.configureCamelContext(CamelContextBean.java:131)
at org.apache.camel.cdi.internal.CamelExtension.startConsumeBeans(CamelExtension.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.jboss.weld.injection.MethodInjectionPoint.invokeOnInstanceWithSpecialValue(MethodInjectionPoint.java:90)
at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:271)
at org.jboss.weld.event.ExtensionObserverMethodImpl.sendEvent(ExtensionObserverMethodImpl.java:121)
at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:258)
at org.jboss.weld.event.ObserverMethodImpl.notify(ObserverMethodImpl.java:237)
at org.jboss.weld.event.ObserverNotifier.notifyObserver(ObserverNotifier.java:174)
at org.jboss.weld.event.ObserverNotifier.notifyObservers(ObserverNotifier.java:133)
at org.jboss.weld.event.ObserverNotifier.fireEvent(ObserverNotifier.java:107)
at org.jboss.weld.bootstrap.events.AbstractContainerEvent.fire(AbstractContainerEvent.java:54)
at org.jboss.weld.bootstrap.events.AbstractDeploymentContainerEvent.fire(AbstractDeploymentContainerEvent.java:35)
at org.jboss.weld.bootstrap.events.AfterDeploymentValidationImpl.fire(AfterDeploymentValidationImpl.java:28)
at org.jboss.weld.bootstrap.WeldStartup.validateBeans(WeldStartup.java:439)
at org.jboss.weld.bootstrap.WeldBootstrap.validateBeans(WeldBootstrap.java:90)
at org.jboss.as.weld.WeldStartService.start(WeldStartService.java:93)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
"}}
You need to upgrade the camel-cdi jar to 2.14.1 as well. Please make sure all of your Camel jars are on the same version.
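For illustration, upgrading the jar inside a WildFly module means editing that module's descriptor. A sketch assuming the standard JBoss module layout (the path, module name, and dependency list here are assumptions, not taken from this setup):

<!-- modules/.../org/apache/camel/cdi/main/module.xml (path assumed) -->
<module xmlns="urn:jboss:module:1.1" name="org.apache.camel.cdi">
    <resources>
        <!-- must match the camel-core version, 2.14.1 in this case -->
        <resource-root path="camel-cdi-2.14.1.jar"/>
    </resources>
    <dependencies>
        <module name="org.apache.camel.core"/>
        <module name="javax.api"/>
    </dependencies>
</module>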
