How to fix NoNode error - storm kafka? - java

I am working with Storm and Kafka, and I am using this project.
Note: I am running this project locally. It throws the following error:
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/clickstreamlog/partitions
at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:81) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
at storm.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:42) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
at storm.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:57) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
at storm.kafka.KafkaSpout.open(KafkaSpout.java:87) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.daemon.executor$fn__3284$fn__3299.invoke(executor.clj:520) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:429) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:679) [na:1.6.0_27]
Caused by: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/clickstreamlog/partitions
at storm.kafka.DynamicBrokersReader.getNumPartitions(DynamicBrokersReader.java:94) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:65) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
... 7 common frames omitted
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/clickstreamlog/partitions
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) ~[zookeeper-3.4.5.jar:3.4.5-1392090]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[zookeeper-3.4.5.jar:3.4.5-1392090]
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1586) ~[zookeeper-3.4.5.jar:3.4.5-1392090]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214) ~[curator-framework-2.4.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203) ~[curator-framework-2.4.0.jar:na]
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) ~[curator-client-2.4.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:199) ~[curator-framework-2.4.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191) ~[curator-framework-2.4.0.jar:na]
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38) ~[curator-framework-2.4.0.jar:na]
at storm.kafka.DynamicBrokersReader.getNumPartitions(DynamicBrokersReader.java:91) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
... 8 common frames omitted
Has anyone faced the same issue using the storm-kafka-0.8-plus-test project?
Any suggestions would be appreciated.

In the current context, this error signifies that ZooKeeper is not able to automatically create any/all of the following nodes:
storm-kafka, storm, zookeeper
You may try creating the nodes manually in ZooKeeper; I think that will work for you.
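Before creating anything by hand, it may help to confirm what actually exists under the broker path the spout queries. Below is a minimal sketch using the Curator client (the same library the spout uses, per the stack trace); the connect string is an assumption for a default local setup.
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkBrokerPathCheck {
    public static void main(String[] args) throws Exception {
        // Assumed local ZooKeeper; adjust the connect string to your setup.
        CuratorFramework zk = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        zk.start();

        // The exact path from the NoNodeException above.
        String path = "/brokers/topics/clickstreamlog/partitions";
        if (zk.checkExists().forPath(path) == null) {
            // A missing path usually means the topic was never created,
            // or the spout is pointed at the wrong ZooKeeper/zkRoot.
            System.out.println(path + " does not exist");
        } else {
            System.out.println("partitions: " + zk.getChildren().forPath(path));
        }
        zk.close();
    }
}
If the path is missing, creating the Kafka topic (so the broker registers it in ZooKeeper itself) is usually a cleaner fix than writing the znodes by hand.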

Related

Springboot2 use StringRedisTemplate can't connect to redis cluster in docker

I set up a Redis cluster in Docker on AWS EC2. The containers use the Docker swarm network; the node IPs run from 10.0.0.6:6379 to 10.0.0.11:6379 and are mapped to the EC2 static address (public address:5001 to public address:5006). I can access them successfully using the public address with a client tool like TablePlus. But when I use Spring Boot (version 2.1.1) and StringRedisTemplate to set a value in Redis, it fails. The configuration file is:
spring.redis.database=0
spring.redis.password=xxxxxx
spring.redis.cluster.nodes=public address:5001,public address:5002,public address:5003,public address:5004,public address:5005,public address:5006
spring.redis.timeout=5000
The pom is:
<!-- redis -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
</dependency>
I always got the error message below:
2019-01-14 08:15:28,570 [http-nio-8081-exec-2] [org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:524)] - [INFO] Initializing Servlet 'dispatcherServlet'
2019-01-14 08:15:28,577 [http-nio-8081-exec-2] [org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:546)] - [INFO] Completed initialization in 7 ms
2019-01-14 08:15:52,589 [http-nio-8081-exec-8] [com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:110)] - [INFO] HikariPool-1 - Starting...
2019-01-14 08:15:56,831 [http-nio-8081-exec-8] [com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:123)] - [INFO] HikariPool-1 - Start completed.
2019-01-14 08:15:58,734 [http-nio-8081-exec-8] [io.lettuce.core.EpollProvider.<clinit>(EpollProvider.java:68)] - [INFO] Starting without optional epoll library
2019-01-14 08:15:58,736 [http-nio-8081-exec-8] [io.lettuce.core.KqueueProvider.<clinit>(KqueueProvider.java:70)] - [INFO] Starting without optional kqueue library
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-1] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.7:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.7:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.7:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-9] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.10:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.10:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.10:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-10] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.6:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.6:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.6:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-11] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.11:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.11:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.11:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-12] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.9:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.9:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.9:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:09,935 [lettuce-nioEventLoop-4-5] [io.lettuce.core.cluster.topology.ClusterTopologyRefresh.lambda$getConnections$2(ClusterTopologyRefresh.java:228)] - [WARN] Unable to connect to 10.0.0.8:6379
java.util.concurrent.CompletionException: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.8:6379
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$initializeChannelAsync0$4(AbstractRedisClient.java:329)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.8:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
... 8 more
2019-01-14 08:16:15,509 [lettuce-nioEventLoop-4-7] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.7:6379
2019-01-14 08:16:25,511 [lettuce-nioEventLoop-4-8] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.10:6379
2019-01-14 08:16:35,518 [lettuce-nioEventLoop-4-9] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.11:6379
2019-01-14 08:16:45,520 [lettuce-nioEventLoop-4-10] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.9:6379
2019-01-14 08:16:55,527 [lettuce-nioEventLoop-4-11] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.8:6379
2019-01-14 08:17:05,534 [lettuce-nioEventLoop-4-12] [io.lettuce.core.cluster.RedisClusterClient.lambda$connect$13(RedisClusterClient.java:621)] - [WARN] io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.6:6379
Jan 14, 2019 8:17:05 AM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect] with root cause
io.netty.channel.ConnectTimeoutException: connection timed out: /10.0.0.6:6379
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:466)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
2019-01-14 08:21:56,268 [HikariPool-1 housekeeper] [com.zaxxer.hikari.pool.HikariPool$HouseKeeper.run(HikariPool.java:758)] - [WARN] HikariPool-1 - Retrograde clock change detected (housekeeper delta=29s293ms), soft-evicting connections from pool.
It looks like the client directly accesses the IP addresses inside the Docker swarm network rather than the exposed public addresses. I have spent a long time trying to find a solution but still can't fix this. Can anyone help me figure out how to solve this problem? Thank you in advance.
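For what it's worth, this symptom (the client discovering the swarm-internal addresses from the cluster topology and then timing out on them) is what the cluster-announce settings in redis.conf were introduced for in Redis 4.0. A minimal sketch, assuming each node knows its own public address and mapped port; the values below are placeholders, not taken from this setup:
# redis.conf for the node mapped to public address:5001
cluster-announce-ip <public address>
cluster-announce-port 5001
cluster-announce-bus-port 15001
With these set, CLUSTER NODES reports the announced addresses instead of the internal 10.0.0.x ones, so the topology that Lettuce refreshes contains endpoints reachable from outside the swarm network. The cluster bus port must be published through Docker as well.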

Integrating spark and spring boot

After fighting with logger dependencies, I finally managed to start the Spring Boot application with the usual "java -jar" command.
The application contains a REST service that uses Spark to extract data from Oracle and MongoDB.
When I call this REST service I get this exception:
Driver stacktrace:
Job 0 failed: treeAggregate at MongoInferSchema.scala:80, took 0.233175 s
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 172.16.212.49, executor 0): java.lang.ClassNotFoundException: com.mongodb.spark.rdd.partitioner.MongoPartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:313)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:] with root cause
java.lang.ClassNotFoundException: com.mongodb.spark.rdd.partitioner.MongoPartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:313)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Closing MongoClient: [127.0.0.1:27017]
The pom.xml contains the mongodb dependencies:
<dependency>
    <groupId>org.mongodb.spark</groupId>
    <artifactId>mongo-spark-connector_2.11</artifactId>
    <version>2.3.0</version>
</dependency>
And the compiled Jar contains the mongodb libraries:
....
825351 Mon Jul 30 14:42:22 CEST 2018 BOOT-INF/lib/mongo-spark-connector_2.11-2.3.0.jar
1897919 Mon May 28 23:33:28 CEST 2018 BOOT-INF/lib/mongo-java-driver-3.6.4.jar
....
I tried to add the libraries to the classpath too, but with no result.
Does anyone have an idea how to get Spark to see the jars it needs?
EDIT:
Following the suggestion of @Ramdev, I added this code:
JavaSparkContext context = new JavaSparkContext(sparkSession.sparkContext());
context.addJar("/home/user/.m3/repository/org/mongodb/spark/mongo-spark-connector_2.11/2.3.0/mongo-spark-connector_2.11-2.3.0.jar");
context.addJar("/home/user/.m3/repository/org/mongodb/mongo-java-driver/3.8.1/mongo-java-driver-3.8.1.jar");
The result is that Spark now sees the jars, but they seem to conflict with the ones in the application jar (see the version note after the code below):
2018-09-25 11:39:51 ERROR [dispatcherServlet]:182 - Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Handler dispatch failed; nested exception is java.lang.NoSuchMethodError: com.mongodb.client.MongoCollection.countDocuments(Lorg/bson/conversions/Bson;)J] with root cause
java.lang.NoSuchMethodError: com.mongodb.client.MongoCollection.countDocuments(Lorg/bson/conversions/Bson;)J
at com.mongodb.spark.rdd.partitioner.MongoSamplePartitioner$$anonfun$7.apply(MongoSamplePartitioner.scala:88)
at com.mongodb.spark.rdd.partitioner.MongoSamplePartitioner$$anonfun$7.apply(MongoSamplePartitioner.scala:88)
at com.mongodb.spark.MongoConnector$$anonfun$withCollectionDo$1.apply(MongoConnector.scala:186)
at com.mongodb.spark.MongoConnector$$anonfun$withCollectionDo$1.apply(MongoConnector.scala:184)
at com.mongodb.spark.MongoConnector$$anonfun$withDatabaseDo$1.apply(MongoConnector.scala:171)
at com.mongodb.spark.MongoConnector$$anonfun$withDatabaseDo$1.apply(MongoConnector.scala:171)
at com.mongodb.spark.MongoConnector.withMongoClientDo(MongoConnector.scala:154)
at com.mongodb.spark.MongoConnector.withDatabaseDo(MongoConnector.scala:171)
at com.mongodb.spark.MongoConnector.withCollectionDo(MongoConnector.scala:184)
at com.mongodb.spark.rdd.partitioner.MongoSamplePartitioner.partitions(MongoSamplePartitioner.scala:88)
at com.mongodb.spark.rdd.partitioner.DefaultMongoPartitioner.partitions(DefaultMongoPartitioner.scala:34)
at com.mongodb.spark.rdd.MongoRDD.getPartitions(MongoRDD.scala:139)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:91)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$.prepareShuffleDependency(ShuffleExchangeExec.scala:318)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.prepareShuffleDependency(ShuffleExchangeExec.scala:91)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:128)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:119)
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:119)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:371)
at org.apache.spark.sql.execution.SortExec.inputRDDs(SortExec.scala:121)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.InputAdapter.doExecute(WholeStageCodegenExec.scala:363)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.joins.SortMergeJoinExec.inputRDDs(SortMergeJoinExec.scala:386)
at org.apache.spark.sql.execution.ProjectExec.inputRDDs(basicPhysicalOperators.scala:41)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:150)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.prepareShuffleDependency(ShuffleExchangeExec.scala:92)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:128)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:119)
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:119)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:371)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:150)
at org.apache.spark.sql.execution.BaseLimitExec$class.inputRDDs(limit.scala:62)
at org.apache.spark.sql.execution.LocalLimitExec.inputRDDs(limit.scala:97)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:337)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3273)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2484)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2484)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2484)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2698)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
at org.apache.spark.sql.Dataset.show(Dataset.scala:723)
at org.apache.spark.sql.Dataset.show(Dataset.scala:682)
at org.apache.spark.sql.Dataset.show(Dataset.scala:691)
at my.app.common.spark.SparkSpringBootHandler.querySpark(SparkSpringBootHandler.java:92)
Lines 85-92 of SparkSpringBootHandler.java:
String queryJ = "select count(s.idlocalizator) "
+ "from installationOnBoard i join storicpos s on s.installation_uuid = i.uuid ";
result += sdf.format(new Date()) + " - ***************** QUERY ***************** Start...\n";
Dataset<Long> counter = sparkSession.sql(queryJ).as(Encoders.LONG());
counter.show();
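The NoSuchMethodError above is consistent with a driver version mismatch rather than a missing jar: the boot jar bundles mongo-java-driver-3.6.4 (see the jar listing earlier), while MongoCollection.countDocuments only exists from driver 3.8 onward, which is what the connector code in the trace is calling. As a hedged sketch of one direction (versions are assumptions based on the jars mentioned in this question, not a confirmed fix): pin the newer driver in the pom so the fat jar ships a single, matching copy, and drop the addJar calls.
<!-- sketch: align the driver version with what the connector expects -->
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>3.8.1</version>
</dependency>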
I am not sure how you are integrating Spark jobs with Spring Boot; I am sharing my views based on what I did in one project.
We had a separate project for Spark/Scala and built a fat jar with all dependencies using sbt assembly.
On the Spring Boot side, we called the Spark job through the Apache Livy REST API and tracked the status of the job using the batch id that Livy generates.
Apache Livy is available for both Spark 1.x and Spark 2.x:
https://livy.incubator.apache.org/docs/latest/rest-api.html
I hope it helps in some direction.
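As a rough illustration of that setup (not code from the project above; host, jar path, and class name are placeholders): submitting a batch through Livy's REST API is a single POST to /batches, and the id in the response is what you poll afterwards.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LivySubmitExample {
    public static void main(String[] args) throws Exception {
        // Placeholder jar and main class; Livy hands these to spark-submit.
        String body = "{\"file\": \"hdfs:///jobs/spark-job-assembly.jar\", "
                + "\"className\": \"com.example.SparkJob\"}";

        HttpClient http = HttpClient.newHttpClient();
        HttpRequest submit = HttpRequest.newBuilder()
                .uri(URI.create("http://livy-host:8998/batches")) // 8998 is Livy's default port
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The response JSON contains an "id" field; poll GET /batches/{id}
        // to track the job state (starting, running, success, dead).
        HttpResponse<String> response =
                http.send(submit, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}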

Apache Nifi: PutHiveStreaming is not connecting

I have a simple process flow that follows the example: https://community.hortonworks.com/articles/52856/stream-data-into-hive-like-a-king-using-nifi.html
The flow looks like this: [flow diagram omitted]
The flow files go through the entire process but then fail to write to the Hive DB in the "Stream CSV to Hive" processor.
When I look at the nifi-app.log, I get the following exception:
2018-03-03 00:51:33,942 ERROR [Timer-Driven Process Thread-8] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Error connecting to Hive endpoint: table olympics at thrift://master:9083
2018-03-03 00:51:33,947 ERROR [Timer-Driven Process Thread-8] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }: org.apache.nifi.processors.hive.PutHiveStreaming$ShouldRetryException: Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
org.apache.nifi.processors.hive.PutHiveStreaming$ShouldRetryException: Hive Streaming connect/write error, flow file will be penalized and routed to retry. org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onHiveRecordsError$1(PutHiveStreaming.java:527)
at org.apache.nifi.processor.util.pattern.ExceptionHandler$OnError.lambda$andThen$0(ExceptionHandler.java:54)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onHiveRecordError$2(PutHiveStreaming.java:545)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:148)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$12(PutHiveStreaming.java:677)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2174)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2144)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:631)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$4(PutHiveStreaming.java:555)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:555)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:968)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:879)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$8(PutHiveStreaming.java:680)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
... 19 common frames omitted
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:578)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:547)
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:261)
... 25 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3906)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1863)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy85.lock(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:573)
... 27 common frames omitted
I have the Hive metastore daemon running, the default port 9083 is open, and I can connect to it with telnet. Hive is running on the same machine as NiFi and has an entry in /etc/hosts naming it master.
I'm getting the following exception:
org.apache.thrift.transport.TTransportException: null
which means a null message was passed to the parent Exception class's constructor.
I've also tried using older versions of Apache NiFi and Apache Hive. I just tried NiFi version 1.4 and Hive version 1.2.2. I'm now getting a different error:
2018-03-13 19:17:20,126 INFO [put-hive-streaming-0] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:20,220 INFO [put-hive-streaming-0] hive.metastore Connected to metastore.
2018-03-13 19:17:20,524 INFO [Timer-Driven Process Thread-6] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:20,525 INFO [Timer-Driven Process Thread-6] hive.metastore Connected to metastore.
2018-03-13 19:17:21,204 WARN [put-hive-streaming-0] o.a.h.h.m.RetryingMetaStoreClient MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.TApplicationException: Internal error processing open_txns
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_open_txns(ThriftHiveMetastore.java:3834)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.open_txns(ThriftHiveMetastore.java:3821)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openTxns(HiveMetaStoreClient.java:1841)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy122.openTxns(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.openTxnImpl(HiveEndPoint.java:520)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:504)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325)
at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$2(HiveWriter.java:259)
at org.apache.nifi.util.hive.HiveWriter.lambda$callWithTimeout$4(HiveWriter.java:365)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-03-13 19:17:22,205 INFO [put-hive-streaming-0] hive.metastore Trying to connect to metastore with URI thrift://master:9083
2018-03-13 19:17:22,207 INFO [put-hive-streaming-0] hive.metastore Connected to metastore.
2018-03-13 19:17:22,214 ERROR [Timer-Driven Process Thread-6] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=e88d5c4e-0161-1000-1713-79d402d400b2] Failed to create HiveWriter for endpoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }: org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:79)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:968)
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:879)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$8(PutHiveStreaming.java:680)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$12(PutHiveStreaming.java:677)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2174)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2144)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:631)
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$4(PutHiveStreaming.java:555)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:555)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:264)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:73)
... 24 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionBatchUnAvailable: Unable to acquire transaction batch on end point: {metaStoreUri='thrift://master:9083', database='default', table='olympics', partitionVals=[] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:511)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325)
at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$2(HiveWriter.java:259)
at org.apache.nifi.util.hive.HiveWriter.lambda$callWithTimeout$4(HiveWriter.java:365)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 common frames omitted
Caused by: org.apache.thrift.TApplicationException: Internal error processing open_txns
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_open_txns(ThriftHiveMetastore.java:3834)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.open_txns(ThriftHiveMetastore.java:3821)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openTxns(HiveMetaStoreClient.java:1841)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
at com.sun.proxy.$Proxy122.openTxns(Unknown Source)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.openTxnImpl(HiveEndPoint.java:520)
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:504)
... 9 common frames omitted
What version of NiFi/HDF are you using, and what kind/version of Hadoop cluster are you connecting to (HDP, e.g.)? It is a known issue that Apache NiFi will not work with HDP Hive starting with HDP 2.5 (or maybe 2.4); you would have to download the NiFi-only distro of Hortonworks Data Flow in order to make it work.

karaf with orient db gives "java.lang.NoClassDefFoundError: com/sun/jna/Platform"

I am trying to build an application with
karaf 4.0.5
orientdb 2.1.16
but the OrientDB service isn't loading, and the following error is in the log:
java.lang.NoClassDefFoundError: com/sun/jna/Platform
at com.orientechnologies.nio.OCLibraryFactory.<clinit>(OCLibraryFactory.java:35)
at com.orientechnologies.nio.OJNADirectMemory.<clinit>(OJNADirectMemory.java:35)
at com.orientechnologies.common.directmemory.ODirectMemoryFactory.<clinit>(ODirectMemoryFactory.java:59)
at com.orientechnologies.common.directmemory.ODirectMemoryPointerFactory.<init>(ODirectMemoryPointerFactory.java:28)
at com.orientechnologies.common.directmemory.ODirectMemoryPointerFactory.<clinit>(ODirectMemoryPointerFactory.java:24)
at com.orientechnologies.orient.core.storage.impl.local.paginated.wal.ODiskWriteAheadLog$LogSegment.initPageCache(ODiskWriteAheadLog.java:558)
at com.orientechnologies.orient.core.storage.impl.local.paginated.wal.ODiskWriteAheadLog$LogSegment.init(ODiskWriteAheadLog.java:279)
at com.orientechnologies.orient.core.storage.impl.local.paginated.wal.ODiskWriteAheadLog.<init>(ODiskWriteAheadLog.java:692)
at com.orientechnologies.orient.core.storage.impl.local.paginated.wal.ODiskWriteAheadLog.<init>(ODiskWriteAheadLog.java:616)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.initWalAndDiskCache(OLocalPaginatedStorage.java:292)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.open(OAbstractPaginatedStorage.java:169)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.open(ODatabaseDocumentTx.java:248)
at com.tinkerpop.blueprints.impls.orient.OrientGraphFactory.getDatabase(OrientGraphFactory.java:125)
at com.tinkerpop.blueprints.impls.orient.OrientGraphFactory.getDatabase(OrientGraphFactory.java:106)
at com.tinkerpop.blueprints.impls.orient.OrientGraphFactory.getTx(OrientGraphFactory.java:66)
at xx.yy.zz.orientdb.OrientDatabase.__M_validate(OrientDatabase.java:30)
at xx.yy.zz.orientdb.OrientDatabase.validate(OrientDatabase.java)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_91]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_91]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_91]
at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_91]
at org.apache.felix.ipojo.util.Callback.call(Callback.java:233)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.util.Callback.call(Callback.java:193)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.handlers.lifecycle.callback.LifecycleCallback.call(LifecycleCallback.java:86)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.handlers.lifecycle.callback.LifecycleCallbackHandler.__M_stateChanged(LifecycleCallbackHandler.java:162)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.handlers.lifecycle.callback.LifecycleCallbackHandler.stateChanged(LifecycleCallbackHandler.java)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.InstanceManager.setState(InstanceManager.java:560)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.InstanceManager.start(InstanceManager.java:440)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.ComponentFactory.createInstance(ComponentFactory.java:179)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.IPojoFactory.createComponentInstance(IPojoFactory.java:319)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.IPojoFactory.createComponentInstance(IPojoFactory.java:240)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.extender.internal.linker.ManagedType$InstanceSupport$1.call(ManagedType.java:312)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.extender.internal.linker.ManagedType$InstanceSupport$1.call(ManagedType.java:306)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.extender.internal.queue.JobInfoCallable.call(JobInfoCallable.java:114)[28:org.apache.felix.ipojo:1.12.1]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_91]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_91]
Caused by: java.lang.ClassNotFoundException: com.sun.jna.Platform not found by com.orientechnologies.orientdb-core [13]
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1574)
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2018)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)[:1.8.0_91]
... 38 more
2016-05-06 19:54:58,171 | ERROR | pool-1-thread-1 | orientdb | 21 - xx.yy.zz.orientdb - 1.0.0.SNAPSHOT | [ERROR] : java.lang.NoClassDefFoundError: com/sun/jna/Platform
java.lang.IllegalStateException: java.lang.NoClassDefFoundError: com/sun/jna/Platform
at org.apache.felix.ipojo.handlers.lifecycle.callback.LifecycleCallbackHandler.__M_stateChanged(LifecycleCallbackHandler.java:171)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.handlers.lifecycle.callback.LifecycleCallbackHandler.stateChanged(LifecycleCallbackHandler.java)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.InstanceManager.setState(InstanceManager.java:560)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.InstanceManager.start(InstanceManager.java:440)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.ComponentFactory.createInstance(ComponentFactory.java:179)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.IPojoFactory.createComponentInstance(IPojoFactory.java:319)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.IPojoFactory.createComponentInstance(IPojoFactory.java:240)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.extender.internal.linker.ManagedType$InstanceSupport$1.call(ManagedType.java:312)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.extender.internal.linker.ManagedType$InstanceSupport$1.call(ManagedType.java:306)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.extender.internal.queue.JobInfoCallable.call(JobInfoCallable.java:114)[28:org.apache.felix.ipojo:1.12.1]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_91]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_91]
Caused by: java.lang.NoClassDefFoundError: com/sun/jna/Platform
at com.orientechnologies.nio.OCLibraryFactory.<clinit>(OCLibraryFactory.java:35)
at com.orientechnologies.nio.OJNADirectMemory.<clinit>(OJNADirectMemory.java:35)
at com.orientechnologies.common.directmemory.ODirectMemoryFactory.<clinit>(ODirectMemoryFactory.java:59)
at com.orientechnologies.common.directmemory.ODirectMemoryPointerFactory.<init>(ODirectMemoryPointerFactory.java:28)
at com.orientechnologies.common.directmemory.ODirectMemoryPointerFactory.<clinit>(ODirectMemoryPointerFactory.java:24)
at com.orientechnologies.orient.core.storage.impl.local.paginated.wal.ODiskWriteAheadLog$LogSegment.initPageCache(ODiskWriteAheadLog.java:558)
at com.orientechnologies.orient.core.storage.impl.local.paginated.wal.ODiskWriteAheadLog$LogSegment.init(ODiskWriteAheadLog.java:279)
at com.orientechnologies.orient.core.storage.impl.local.paginated.wal.ODiskWriteAheadLog.<init>(ODiskWriteAheadLog.java:692)
at com.orientechnologies.orient.core.storage.impl.local.paginated.wal.ODiskWriteAheadLog.<init>(ODiskWriteAheadLog.java:616)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.initWalAndDiskCache(OLocalPaginatedStorage.java:292)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.open(OAbstractPaginatedStorage.java:169)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.open(ODatabaseDocumentTx.java:248)
at com.tinkerpop.blueprints.impls.orient.OrientGraphFactory.getDatabase(OrientGraphFactory.java:125)
at com.tinkerpop.blueprints.impls.orient.OrientGraphFactory.getDatabase(OrientGraphFactory.java:106)
at com.tinkerpop.blueprints.impls.orient.OrientGraphFactory.getTx(OrientGraphFactory.java:66)
at xx.yy.zz.orientdb.OrientDatabase.__M_validate(OrientDatabase.java:30)
at xx.yy.zz.orientdb.OrientDatabase.validate(OrientDatabase.java)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_91]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_91]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_91]
at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_91]
at org.apache.felix.ipojo.util.Callback.call(Callback.java:233)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.util.Callback.call(Callback.java:193)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.handlers.lifecycle.callback.LifecycleCallback.call(LifecycleCallback.java:86)[28:org.apache.felix.ipojo:1.12.1]
at org.apache.felix.ipojo.handlers.lifecycle.callback.LifecycleCallbackHandler.__M_stateChanged(LifecycleCallbackHandler.java:162)[28:org.apache.felix.ipojo:1.12.1]
... 13 more
Caused by: java.lang.ClassNotFoundException: com.sun.jna.Platform not found by com.orientechnologies.orientdb-core [13]
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1574)
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2018)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)[:1.8.0_91]
... 38 more
The Maven dependencies look like this:
<dependency>
    <groupId>com.orientechnologies</groupId>
    <artifactId>orientdb-graphdb</artifactId>
    <version>2.1.16</version>
</dependency>
<dependency>
    <groupId>net.java.dev.jna</groupId>
    <artifactId>jna-platform</artifactId>
    <version>4.2.2</version>
</dependency>
adding "com.sun.jna" and "com.sun.jna.platform" to the property "org.osgi.framework.system.packages.extra" in the custom.properties file doesnt helps either
As the Karaf/Felix base I am using:
https://github.com/thometal/ipojokaraf
Why does the orientdb module not recognize that the "jna" bundles are there?
Thank you in advance

How to deploy an EJB application to Glassfish server

When I run a JSP file I get this error:
SEVERE: Exception while loading the app
java.lang.RuntimeException: EJB Container initialization error
at org.glassfish.ejb.startup.EjbApplication.loadContainers(EjbApplication.java:219)
at org.glassfish.ejb.startup.EjbDeployer.load(EjbDeployer.java:197)
at org.glassfish.ejb.startup.EjbDeployer.load(EjbDeployer.java:63)
at org.glassfish.internal.data.ModuleInfo.load(ModuleInfo.java:175)
at org.glassfish.internal.data.ApplicationInfo.load(ApplicationInfo.java:216)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:338)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183)
at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:310)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235)
at org.glassfish.deployment.autodeploy.AutoOperation.run(AutoOperation.java:141)
at org.glassfish.deployment.autodeploy.AutoDeployer.deploy(AutoDeployer.java:573)
at org.glassfish.deployment.autodeploy.AutoDeployer.deployAll(AutoDeployer.java:459)
at org.glassfish.deployment.autodeploy.AutoDeployer.run(AutoDeployer.java:391)
at org.glassfish.deployment.autodeploy.AutoDeployer.run(AutoDeployer.java:376)
at org.glassfish.deployment.autodeploy.AutoDeployService$1.run(AutoDeployService.java:195)
at java.util.TimerThread.mainLoop(Timer.java:512)
at java.util.TimerThread.run(Timer.java:462)
Caused by: java.lang.RuntimeException: Error while binding JNDI name ejbExample.stateful.Account#ejbExample.stateful.Account for EJB : AccountBean
at com.sun.ejb.containers.BaseContainer.initializeHome(BaseContainer.java:1530)
at com.sun.ejb.containers.StatefulSessionContainer.initializeHome(StatefulSessionContainer.java:214)
at com.sun.ejb.containers.ContainerFactoryImpl.createContainer(ContainerFactoryImpl.java:161)
at org.glassfish.ejb.startup.EjbApplication.loadContainers(EjbApplication.java:207)
... 20 more
Caused by: javax.naming.NameAlreadyBoundException: Use rebind to override
at com.sun.enterprise.naming.impl.TransientContext.doBindOrRebind(TransientContext.java:275)
at com.sun.enterprise.naming.impl.TransientContext.bind(TransientContext.java:214)
at com.sun.enterprise.naming.impl.SerialContextProviderImpl.bind(SerialContextProviderImpl.java:79)
at com.sun.enterprise.naming.impl.LocalSerialContextProviderImpl.bind(LocalSerialContextProviderImpl.java:81)
at com.sun.enterprise.naming.impl.SerialContext.bind(SerialContext.java:586)
at com.sun.enterprise.naming.impl.SerialContext.bind(SerialContext.java:602)
at javax.naming.InitialContext.bind(InitialContext.java:404)
at com.sun.enterprise.naming.impl.GlassfishNamingManagerImpl.publishObject(GlassfishNamingManagerImpl.java:206)
at com.sun.enterprise.naming.impl.GlassfishNamingManagerImpl.publishObject(GlassfishNamingManagerImpl.java:187)
at com.sun.ejb.containers.BaseContainer$JndiInfo.publish(BaseContainer.java:5533)
at com.sun.ejb.containers.BaseContainer.initializeHome(BaseContainer.java:1515)
... 23 more
WARNING: [AutoDeploy] Autodeploy failed : C:\glassfishv3\glassfish\domains\domain1\autodeploy\StatefullSessionBeanExample.jar.
How is this caused and how can I solve it?
Try to undeploy the project, do a clean build, and redeploy it.
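The NameAlreadyBoundException means the bean's JNDI name is still bound from a previous deployment, so a fresh deploy collides with it. A hedged sketch of the undeploy/redeploy cycle with GlassFish's asadmin tool (the application name is inferred from the jar file named in the log and may differ in your setup):
asadmin undeploy StatefullSessionBeanExample
asadmin deploy StatefullSessionBeanExample.jar
Since the jar sits in the autodeploy directory, deleting it from that folder (so the auto-deployer undeploys it) and copying the freshly built jar back in achieves the same result.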
