Gremlin-Driver update causes NoSuchMethodError - java

I have a Java Spring Boot application that connects to an Amazon Neptune graph database running on engine version 1.1.1.0.
After upgrading the gremlin-driver and TinkerPop dependencies from 3.4.6 (which was working) to 3.5.2, the application can no longer make a connection to the graph database on AWS, and it throws this exception:
io.netty.channel.ChannelInitializer : Failed to initialize a channel. Closing: [id: 0xf213a752]ecs/XYZ
java.lang.NoSuchMethodError: io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13.<init>(Ljava/net/URI;Lio/netty/handler/codec/http/websocketx/WebSocketVersion;Ljava/lang/String;ZLio/netty/handler/codec/http/HttpHeaders;IZZJ)V
I haven't made any changes to the builder besides the compile-breaking changes to one of the imports and one method name. Did I miss something in this update?
This is the builder configuration that I am using, carried over from 3.4.6:
import org.apache.tinkerpop.gremlin.driver.Channelizer;
import org.apache.tinkerpop.gremlin.driver.Cluster;

Cluster.Builder builder = Cluster.build();
builder.addContactPoints(gremlinProperties.getContactPoints());
builder.port(gremlinProperties.getPort());
builder.nioPoolSize(gremlinProperties.getNioPoolSize());
builder.workerPoolSize(gremlinProperties.getWorkerPoolSize());
builder.minConnectionPoolSize(gremlinProperties.getMinConnectionPoolSize());
builder.maxConnectionPoolSize(gremlinProperties.getMaxConnectionPoolSize());
builder.minSimultaneousUsagePerConnection(gremlinProperties.getMinSimultaneousUsagePerConnection());
builder.maxSimultaneousUsagePerConnection(gremlinProperties.getMaxSimultaneousUsagePerConnection());
builder.maxInProcessPerConnection(gremlinProperties.getMaxInProcessPerConnection());
builder.minInProcessPerConnection(gremlinProperties.getMinInProcessPerConnection());
builder.maxWaitForConnection(gremlinProperties.getMaxWaitForConnection());
builder.maxWaitForClose(gremlinProperties.getMaxWaitForSessionClose());
builder.maxContentLength(gremlinProperties.getMaxContentLength());
builder.reconnectInterval(gremlinProperties.getReconnectInterval());
builder.resultIterationBatchSize(gremlinProperties.getResultIterationBatchSize());
builder.keepAliveInterval(gremlinProperties.getKeepAliveInterval());
builder.channelizer(Channelizer.WebSocketChannelizer.class);
builder.enableSsl(gremlinProperties.isEnableSsl());
return builder.create();
The values are extracted from a property file.
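For reference, the returned Cluster is then consumed in the standard gremlin-driver way; a minimal usage sketch (createCluster() here just stands in for the factory method above):
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.ResultSet;

// Minimal usage sketch; createCluster() stands in for the builder code above.
Cluster cluster = createCluster();
Client client = cluster.connect();
ResultSet results = client.submit("g.V().limit(1)");
results.all().join(); // force the round trip so connection errors surface here
cluster.close();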

Since the code that handles the Gremlin connection and queries lives in a dependency jar project, the netty.version declared in the main project that uses that jar overrides the Netty version used in the jar project. I just had to declare a netty.version property in the main project's pom so that it matches the Netty version used by the dependency.
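That override is a one-liner; a sketch (the version shown is illustrative, take the real one from mvn dependency:tree on the jar project):
<!-- In the main project's pom.xml: pin the managed Netty version to the one
     the gremlin-driver jar actually needs. 4.1.61.Final is illustrative;
     check `mvn dependency:tree` for the real value. -->
<properties>
    <netty.version>4.1.61.Final</netty.version>
</properties>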

Related

Missing dependency on EmbeddedKafka on integration test with groovy and spock

I've been trying to create an integration test using EmbeddedKafka, but I'm running into a missing-dependency problem when I try to run it. This is the error:
Unable to load class org.springframework.kafka.test.EmbeddedKafkaBroker due to missing dependency org/I0Itec/zkclient/serialize/ZkSerializer
I saw some posts saying that this is related to my dependencies, so here they are:
springBootVersion = '2.3.5.RELEASE'
compile("org.springframework.boot:spring-boot-starter-web:${springBootVersion}")
compile("org.springframework.kafka:spring-kafka:${springBootVersion}")
testCompile("org.springframework.boot:spring-boot-starter-test:${springBootVersion}",
'org.spockframework:spock-core:1.2-groovy-2.4',
'org.spockframework:spock-spring:1.2-groovy-2.4',
'com.microsoft.azure:spring-data-cosmosdb:2.3.0',
'com.nimbusds:oauth2-oidc-sdk:5.64.4',
)
testCompile("org.springframework.kafka:spring-kafka-test:${springBootVersion}")
So, my question is, am I missing something?
EDIT
After changing the versions as indicated, I got a different error:
Error creating bean with name 'embeddedKafka': Invocation of init
method failed; nested exception is java.lang.NoClassDefFoundError:
scala/math/Ordering$$anon$7
I've added the Scala dependencies, but I'm still having the same issue:
testImplementation("org.scala-lang:scala-library:2.12.11")
testImplementation("org.scala-lang:scala-reflect:2.12.11")
You somehow have mismatched kafka vs. kafka-clients jars on the classpath; they must all be the same version.
You generally should not specify versions on Boot's managed dependencies; use its dependency management instead.
You are pulling in spring-kafka 2.3.5, whereas Spring Boot 2.3.5 requires spring-kafka 2.5.7.
spring-kafka 2.5.x uses kafka-clients 2.5.1.
See the Spring for Apache Kafka documentation for how to override the versions of the kafka jars when you need a version different from the one Boot prescribes.
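For example, a sketch of the dependency block once Boot manages the versions (this assumes the Spring Boot Gradle plugin and its dependency management are applied):
// Sketch: omit explicit versions so Boot 2.3.5 selects the matching
// spring-kafka (2.5.x) and kafka-clients (2.5.1) for you.
compile("org.springframework.boot:spring-boot-starter-web")
compile("org.springframework.kafka:spring-kafka")
testCompile("org.springframework.boot:spring-boot-starter-test")
testCompile("org.springframework.kafka:spring-kafka-test")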
The Kafka client libraries for a time inlined a particular version of the Scala library. This caused problems for those of us wanting to use the Kafka client library with a slightly different Scala version than the inlined one.
In this case the Scala version they were inlining was 2.12.10.
They removed this inlining in later versions, and the fix was backported (the earliest release carrying it being 2.8.0; see https://archive.apache.org/dist/kafka/2.8.0/RELEASE_NOTES.html).
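If you do need a client newer than the one Boot prescribes, a sketch of the override in build.gradle (this relies on Boot's dependency-management plugin honouring the BOM's version property; check the spring-kafka compatibility matrix first):
// Sketch: override the BOM-managed kafka-clients version.
// 2.8.0 is the earliest release the answer above says dropped the inlined Scala.
ext['kafka.version'] = '2.8.0'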

How to use org.apache.httpcomponents inside a spark job on Hadoop/Spark?

I am trying to run a Spark job on a Hadoop cluster; the job also makes an HTTP request to another server. I am using org.apache.httpcomponents to make this request, which works fine locally on my machine. However, the moment I submit the job to the cluster (managed by Cloudera), it fails with the following error:
User class threw exception: java.lang.NoSuchFieldError: INSTANCE
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.<clinit>(SSLConnectionSocketFactory.java:151)
at org.apache.http.impl.client.HttpClientBuilder.build(HttpClientBuilder.java:977)
at org.apache.http.impl.client.HttpClients.createDefault(HttpClients.java:56)
From all the reading I have done, this error is caused by multiple versions of the Apache HttpClient jar. It appears that the Hadoop/Spark engine has its own dependency on Apache HttpClient, and it is a different version than the one I am using. Because my jar runs as part of the Hadoop/Spark engine, the classpath ends up including both my version of HttpClient and the one Hadoop requires.
If I add 'compileOnly' for org.apache.httpcomponents in my build.gradle and submit, I get this error instead:
User class threw exception: java.lang.NoClassDefFoundError: org/apache/http/impl/client/HttpClients
Is there a way for me to configure this in Gradle so that when I build my jar, it uses the version that already exists on Hadoop? I.e., a way to declare a temporary dependency (when running locally, download and use the latest version, but when building the uber-jar, drop the dependency)?
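Something like this sketch is the idea (the HttpClient version is illustrative):
dependencies {
    // Compile against HttpClient but keep it out of the uber-jar,
    // so the cluster's own copy is the only one on the classpath.
    compileOnly 'org.apache.httpcomponents:httpclient:4.5.13'
    // Still have it on the classpath for local runs and tests.
    testRuntimeOnly 'org.apache.httpcomponents:httpclient:4.5.13'
}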
UPDATE
I decided to try swapping to a different HTTP library (okhttp3) to see if that would resolve the issue. However, I get a very similar exception when running through the cluster there too:
User class threw exception: java.lang.NoSuchFieldError: Companion
at okhttp3.internal.Util.<clinit>(Util.kt:70)
at okhttp3.OkHttpClient.<init>(OkHttpClient.kt:959)
Looks like Cloudera also supplies a version of okhttp with its spark2 client, which is unfortunate.
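One way to sidestep this whole class of conflict, not tried above but commonly used for it, is to relocate the bundled packages under a private namespace with the Gradle Shadow plugin; a sketch (plugin version and relocation prefix are illustrative):
plugins {
    id 'com.github.johnrengelman.shadow' version '7.1.2'
}

shadowJar {
    // Rename the bundled copies so they cannot clash with the versions
    // that ship with the Hadoop/Spark runtime.
    relocate 'org.apache.http', 'myapp.shaded.org.apache.http'
    relocate 'okhttp3', 'myapp.shaded.okhttp3'
}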

Apache Cayenne "DI container has no binding for key ObjectContextFactory" error

I'm using Apache Cayenne 4.0 BETA and I'm getting the following runtime error:
org.apache.cayenne.di.DIRuntimeException: DI container has no binding for key <BindingKey: org.apache.cayenne.configuration.ObjectContextFactory>
at org.apache.cayenne.di.spi.DefaultInjector.getProvider(DefaultInjector.java:158)
at org.apache.cayenne.di.spi.DefaultInjector.getProvider(DefaultInjector.java:144)
at org.apache.cayenne.di.spi.DefaultInjector.getInstance(DefaultInjector.java:134)
at org.apache.cayenne.configuration.CayenneRuntime.newContext(CayenneRuntime.java:124)
As Cayenne is modular, I've included only these dependencies (see pic).
What library needs to be included?
Thanks!
(The backend DB is Postgres, but I don't think this is relevant to this error.)
I resolved this problem myself. It turns out that I needed to build the artifact with "copy to output directory and link via manifest" (using IntelliJ IDEA) and upload the entire generated _jar directory to the server.
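For reference, the ObjectContextFactory binding normally comes from Cayenne's server runtime module, so that artifact has to end up on the classpath one way or another; a Maven sketch (the beta version shown is illustrative):
<dependency>
    <groupId>org.apache.cayenne</groupId>
    <artifactId>cayenne-server</artifactId>
    <!-- illustrative 4.0 beta; use the one you are on -->
    <version>4.0.B2</version>
</dependency>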

Elastic Search with Java for Standalone Application

I'm getting the below error:
StackTrace:
Exception in thread "main" java.lang.NoSuchMethodError: io.netty.buffer.CompositeByteBuf.addComponents(ZLjava/lang/Iterable;)Lio/netty/buffer/CompositeByteBuf;
at org.elasticsearch.transport.netty4.Netty4Utils.toByteBuf(Netty4Utils.java:117)
at org.elasticsearch.transport.netty4.Netty4Transport.sendMessage(Netty4Transport.java:395)
at org.elasticsearch.transport.netty4.Netty4Transport.sendMessage(Netty4Transport.java:94)
at org.elasticsearch.transport.TcpTransport.internalSendMessage(TcpTransport.java:1125)
at org.elasticsearch.transport.TcpTransport.sendRequestToChannel(TcpTransport.java:1107)
at org.elasticsearch.transport.TcpTransport.executeHandshake(TcpTransport.java:1622)
at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:556)
at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:117)
at org.elasticsearch.transport.TransportService.openConnection(TransportService.java:334)
at org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:408)
at org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:358)
at org.elasticsearch.client.transport.TransportClientNodesService.addTransportAddresses(TransportClientNodesService.java:199)
at org.elasticsearch.client.transport.TransportClient.addTransportAddress(TransportClient.java:322)
I am using ES 5.4.2 and Lucene 6.5.1, with the netty-all 4.0.9, netty-buffer 4.1.11 and netty-common 4.1.11 jars.
My Java code is as below:
Settings settings = Settings.builder().put("cluster.name", "my-application").build();
TransportClient client = new PreBuiltTransportClient(settings);
TransportAddress address = new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300);
client.addTransportAddress(address);
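For completeness, the imports this snippet relies on in the 5.x API (verify against your exact version; PreBuiltTransportClient ships in the org.elasticsearch.client:transport artifact):
import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;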
The problem is caused by a Netty version conflict as far as I can tell, because the code itself is error-free.
This problem is caused by conflicting versions of Netty being used by different dependencies in your project.
Basically, the ES 5 Transport API requires Netty 4, while some "Dependency X" in your tree may still use Netty 3. That combination causes this problem.
Try, in order (see the sketch after this list):
Add Netty 4 as a dependency in your project
Create an independent project for the use of the ES 5 Transport Client
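For the first option, a sketch of aligning everything on a single Netty line, assuming a Maven build (4.1.11.Final matches the netty-buffer/netty-common jars already listed above; drop the old netty-all 4.0.9 jar entirely):
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.11.Final</version>
</dependency>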
There was no issue with the netty3 jar; there was some other jar issue, and it is resolved now. I have included the jars as depicted in the screenshot.

Java jetty servlet container fails to resolve class dependencies

15:28:38.716 [qtp1588771273-32] WARN o.e.jetty.servlet.ServletHandler - /tp/gremlin/execute
java.lang.IllegalArgumentException: Could not resolve dependency of type:javax.transaction.TransactionManager
at org.neo4j.graphdb.DependencyResolver$Adapter$1.select(DependencyResolver.java:87) ~[neo4j-kernel-2.2.9.jar:2.2.9]
at org.neo4j.kernel.extension.KernelExtensions.resolveDependency(KernelExtensions.java:112) ~[neo4j-kernel-2.2.9.jar:2.2.9]
This is from Neo4j 2.x (the Gremlin plug-in). The package, when built and deployed as instructed at https://github.com/thinkaurelius/neo4j-gremlin-plugin, does contain the jar file that provides this class, and Maven did download it and install it there. But when the server attempts to load and execute the extension, the dependency is not resolved.
Why?
