I'm trying to implement code that uses the Azure Java SDK to list the SQL Server instances running on my Azure subscription.
I followed the examples posted and wrote the following code:
Azure azure = Azure.authenticate(credentials).withDefaultSubscription();
SqlServers sqlServers = azure.sqlServers();
PagedList<SqlServer> list = sqlServers.list();
But the last line throws a java.lang.NoSuchMethodError with the following stack trace:
Exception in thread "main" java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.type.TypeBindings.create(Ljava/lang/Class;[Lcom/fasterxml/jackson/databind/JavaType;)Lcom/fasterxml/jackson/databind/type/TypeBindings;
at com.microsoft.rest.serializer.JacksonAdapter.constructJavaType(JacksonAdapter.java:119)
at com.microsoft.rest.serializer.JacksonAdapter.deserialize(JacksonAdapter.java:131)
at com.microsoft.rest.ServiceResponseBuilder.buildBody(ServiceResponseBuilder.java:216)
at com.microsoft.rest.ServiceResponseBuilder.build(ServiceResponseBuilder.java:110)
at com.microsoft.azure.AzureResponseBuilder.build(AzureResponseBuilder.java:56)
at com.microsoft.azure.management.sql.implementation.ServersInner.listDelegate(ServersInner.java:553)
at com.microsoft.azure.management.sql.implementation.ServersInner.access$400(ServersInner.java:42)
at com.microsoft.azure.management.sql.implementation.ServersInner$17.call(ServersInner.java:539)
at com.microsoft.azure.management.sql.implementation.ServersInner$17.call(ServersInner.java:535)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:69)
at retrofit2.adapter.rxjava.RxJavaCallAdapterFactory$RequestArbiter.request(RxJavaCallAdapterFactory.java:173)
at rx.Subscriber.setProducer(Subscriber.java:211)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(OnSubscribeMap.java:102)
at retrofit2.adapter.rxjava.RxJavaCallAdapterFactory$CallOnSubscribe.call(RxJavaCallAdapterFactory.java:152)
at retrofit2.adapter.rxjava.RxJavaCallAdapterFactory$CallOnSubscribe.call(RxJavaCallAdapterFactory.java:138)
at rx.Observable.unsafeSubscribe(Observable.java:10142)
at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48)
at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33)
at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
at rx.Observable.subscribe(Observable.java:10238)
at rx.Observable.subscribe(Observable.java:10205)
at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:444)
at rx.observables.BlockingObservable.single(BlockingObservable.java:341)
at com.microsoft.azure.management.sql.implementation.ServersInner.list(ServersInner.java:488)
at com.microsoft.azure.management.resources.fluentcore.arm.collection.implementation.TopLevelModifiableResourcesImpl.list(TopLevelModifiableResourcesImpl.java:116)
at AzureDiscoveryClient.main(AzureDiscoveryClient.java:19)
This looks like a defect, since the SDK allows this operation and should return a reasonable error if something goes wrong.
Still, is there any other way to list the SQL Server instances running in my Azure subscription? Or perhaps I'm doing something wrong in my code?
Please refer to my working code below:
import com.microsoft.azure.AzureEnvironment;
import com.microsoft.azure.PagedList;
import com.microsoft.azure.credentials.ApplicationTokenCredentials;
import com.microsoft.azure.management.Azure;
import com.microsoft.azure.management.sql.SqlServer;
import com.microsoft.azure.management.sql.SqlServers;
import java.io.IOException;
public class ListSqlInstance {
    public static void main(String[] args) throws IOException {
        ApplicationTokenCredentials credentials = new ApplicationTokenCredentials(
                "{client Id}",
                "{tenant Id}",
                "{secret}",
                AzureEnvironment.AZURE);
        Azure.Authenticated azureAuth = Azure.authenticate(credentials);
        Azure azure = azureAuth.withDefaultSubscription();
        SqlServers sqlServers = azure.sqlServers();
        PagedList<SqlServer> list = sqlServers.list();
        System.out.println(list.size());
    }
}
My SDK version:
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure</artifactId>
    <version>1.12.0</version>
</dependency>
Don't forget to grant your client access permissions to the SQL Server.
Hope it helps you.
I am not sure if you are looking for a powershell cmdlet solution. But this Powershell cmdlet should list all the sql server instances in your subscription.
Assuming prior to running this powershell cmdlet you already logged in to your azure account in powershell and set the current subscription in the current powershell session if you have multiple subscriptions mapped to your account.
PS C:\azure> get-azurermresourcegroup | get-azurermsqlserver
ResourceGroupName : DbRG
ServerName : testSqlserver
Location : East US
SqlAdministratorLogin : XXXX
SqlAdministratorPassword : XXX
ServerVersion : 12.0
Tags :
Problem solved!
The error came from an incompatibility between the Azure SDK and the jackson-databind library.
One of the methods the SDK uses from this library is relatively new, and the version I had in my project (a clean project with just one dependency, the Azure SDK) did not have this method.
Imported the latest jackson-databind as follows:
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.5</version>
</dependency>
This solved the problem.
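For anyone hitting the same error: a quick way to confirm which jackson-databind ends up on the classpath is a small reflection probe like the one below. This is a diagnostic sketch of my own, not part of the Azure SDK; the method signature it checks for is taken from the stack trace above.

import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.type.TypeBindings;

public class JacksonCheck {
    public static void main(String[] args) {
        try {
            // The overload the Azure SDK calls (see the NoSuchMethodError above).
            TypeBindings.class.getMethod("create", Class.class, JavaType[].class);
            System.out.println("TypeBindings.create(Class, JavaType[]) is available");
        } catch (NoSuchMethodException e) {
            // An older jackson-databind won; print which jar it was loaded from.
            System.out.println("Method missing; TypeBindings loaded from: "
                    + TypeBindings.class.getProtectionDomain().getCodeSource().getLocation());
        }
    }
}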
Thanks everyone for the help.
This question is about the Java library of Siddhi CEP.
Description:
I tried to establish an HTTP source to receive data. There was no error creating the runtime and starting it:
[nioEventLoopGroup-2-1] INFO org.wso2.transport.http.netty.listener.ServerConnectorBootstrap$HTTPServerConnector - HTTP(S) Interface starting on host localhost and port 9056
[main] INFO org.wso2.extension.siddhi.io.http.source.HttpConnectorPortBindingListener - siddhi: started HTTP server connector localhost:9056
[main] INFO org.wso2.extension.siddhi.io.http.source.HttpSourceListener - Source Listener has created for url http://localhost:9056/endpoints/
However, when I send a POST request to the designated address, I get an error:
[nioEventLoopGroup-3-1] ERROR org.wso2.extension.siddhi.io.http.source.HTTPConnectorListener - Error in http server connector
java.lang.NoSuchMethodError: io.netty.handler.codec.http.HttpRequest.method()Lio/netty/handler/codec/http/HttpMethod;
at org.wso2.transport.http.netty.listener.CustomHttpContentCompressor.decode(CustomHttpContentCompressor.java:44)
at org.wso2.transport.http.netty.listener.CustomHttpContentCompressor.decode(CustomHttpContentCompressor.java:14)
at io.netty.handler.codec.MessageToMessageCodec$2.decode(MessageToMessageCodec.java:81)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:354)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:748)
Could anyone suggest what I might have done wrong? Thank you in advance.
Affected Product Version:
4.1.17
OS, DB, other environment details and versions:
IntelliJ IDEA 2017.3.5 (Community Edition)
Build #IC-173.4674.33, built on March 6, 2018
JRE: 1.8.0_152-release-1024-b15 amd64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Windows 10 10.0
Steps to reproduce:
The test code I wrote:
import org.wso2.siddhi.core.SiddhiAppRuntime;
import org.wso2.siddhi.core.SiddhiManager;
import org.wso2.siddhi.core.event.Event;
import org.wso2.siddhi.core.stream.output.StreamCallback;
import org.wso2.siddhi.core.util.EventPrinter;
//import org.wso2.extension.siddhi.io.http.source.*;
public class httpTest {
    public static void main(String[] args) {
        String siddhiString = "@App:name(\"haha\") " +
                "@App:description(\"fasd\") " +
                "@App:statistics(reporter = \"jmx\", interval = \"30\") " +
                "@source(type=\"http\",receiver.url=\"http://localhost:9056/endpoints/\",@map(type=\"text\",fail.on.missing.attribute=\"true\",regex.A=\"(.*)\",@attributes(data=\"A\"))) " +
                "@sink(type=\"mqtt\",url=\"tcp://120.78.71.179:1883\",topic=\"34\",@map(type=\"text\")) " +
                "define stream a4P068X5YCK(data String);";
        SiddhiManager siddhiManager = new SiddhiManager();
        SiddhiAppRuntime siddhiAppRuntime = siddhiManager.createSiddhiAppRuntime(siddhiString);
        siddhiAppRuntime.addCallback("a4P068X5YCK", new StreamCallback() {
            @Override
            public void receive(Event[] events) {
                EventPrinter.print(events);
            }
        });
        siddhiAppRuntime.start();
    }
}
Then I send a POST request to http://localhost:9056/endpoints/. It returns the exception posted above.
Update:
I went back and checked the Siddhi-io-http GitHub documentation page. I found that it says:
... This extension only works inside the WSO2 Data Analytic Server and cannot be run with standalone siddhi.
I guess this might suggest that HTTP is not supported by the standalone Siddhi library at the moment. I have submitted an issue on the Siddhi repository page to ask for confirmation.
Update 2:
I have changed my Siddhi query so that it copies the source stream into a separate sink stream. The other parts of the code remain the same:
String siddhiString = "@App:name(\"haha\") " +
        "@App:description(\"fasd\") " +
        "@App:statistics(reporter = \"jmx\", interval = \"30\") " +
        "@source(type=\"http\",receiver.url=\"http://localhost:9056/endpoints/\",@map(type=\"text\",fail.on.missing.attribute=\"true\",regex.A=\"(.*)\",@attributes(data=\"A\"))) " +
        "define stream a4P068X5YCK(data String); " +
        "@sink(type=\"mqtt\",url=\"tcp://120.78.71.179:1883\",topic=\"34\",@map(type=\"text\")) " +
        "define stream pout(data String); " +
        "from a4P068X5YCK " +
        "select * " +
        "insert into pout; " +
        "";
The same problem still exists. I tried the WSO2 processor and it works fine. Now my guesses are:
1. version mismatch
2. lack of some packages in the WSO2 processor dependencies
I will try to investigate in those two directions and will update here and on the issue page as soon as I find something new.
Update 3:
As I keep adding updates the formatting seems to have some problems, but fortunately this issue also comes to an end. I included all the dependencies from the WSO2 processor source code and my test program started working. Therefore I assume there is a component in the WSO2 processor that the standalone Siddhi library is lacking.
I then deleted the dependencies one by one to see if my test program still worked, and finally found the package. With the package below my code works well.
<dependency>
    <groupId>org.wso2.msf4j</groupId>
    <artifactId>org.wso2.msf4j.feature</artifactId>
    <version>${msf4j.version}</version>
    <type>zip</type>
</dependency>
As I am a beginner to coding, I am not exactly sure what the problem was. I would be grateful if someone could explain the reason behind it. I appreciate all the help received in this process, and it has been a great learning experience for me.
Update 4: @Grainier I tried the sample code you posted and it actually worked! Although I still have no idea why: I copied your exact code into a new .java file in my project and it still won't work. Therefore I guess it has something to do with the POM file.
Something I noticed is that when I ran your sample code there were a few more WARNINGs printed in the console. SMALL UPDATE: I have found that the warnings appeared because I was using JDK 10. As soon as I switched back to 1.8 the warnings disappeared, and the code still works, so maybe this is not the reason.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.ReflectionUtil (file:/C:/Users/ktz001/.m2/repository/io/netty/netty-common/4.1.16.Final/netty-common-4.1.16.Final.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.ReflectionUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
The second difference is in the POM file: yours has one more repository added compared to mine.
<repository>
    <id>wso2-nexus</id>
    <name>WSO2 internal Repository</name>
    <url>http://maven.wso2.org/nexus/content/groups/wso2-public/</url>
    <releases>
        <enabled>true</enabled>
        <updatePolicy>daily</updatePolicy>
        <checksumPolicy>ignore</checksumPolicy>
    </releases>
</repository>
It would be great if you could suggest any reason.
Thank you for all of your work! It has been really helpful.
There seems to be an issue with the documentation... This should work with standalone Siddhi. All you have to do is add the following dependencies to your project (plus mqtt, which I haven't included below):
<dependencies>
    <dependency>
        <groupId>org.wso2.siddhi</groupId>
        <artifactId>siddhi-core</artifactId>
        <version>${siddhi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.siddhi</groupId>
        <artifactId>siddhi-annotations</artifactId>
        <version>${siddhi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.siddhi</groupId>
        <artifactId>siddhi-query-compiler</artifactId>
        <version>${siddhi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.extension.siddhi.io.http</groupId>
        <artifactId>siddhi-io-http</artifactId>
        <version>${siddhi.io.http.version}</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.extension.siddhi.map.text</groupId>
        <artifactId>siddhi-map-text</artifactId>
        <version>${siddhi.mapper.text.version}</version>
    </dependency>
</dependencies>
However, there's also an issue with your query: you have defined a @source and a @sink on a single stream, which is wrong. If you want to make it a passthrough, you have to define two streams (one for the source and one for the sink) and write a query to insert events from the source stream into the sink stream, as in the sketch below.
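For illustration, a minimal passthrough along those lines might look like this. The stream names are placeholders of mine, and the source/sink configuration is copied from the question; the asker's Update 2 query has the same shape.

import org.wso2.siddhi.core.SiddhiAppRuntime;
import org.wso2.siddhi.core.SiddhiManager;

public class PassthroughSketch {
    public static void main(String[] args) {
        // One stream carries the @source, a second stream carries the @sink,
        // and a query copies events from the former into the latter.
        String siddhiApp =
                "@App:name('passthrough') " +
                "@source(type='http', receiver.url='http://localhost:9056/endpoints/', " +
                "@map(type='text', fail.on.missing.attribute='true', regex.A='(.*)', @attributes(data='A'))) " +
                "define stream InputStream (data string); " +
                "@sink(type='mqtt', url='tcp://120.78.71.179:1883', topic='34', @map(type='text')) " +
                "define stream OutputStream (data string); " +
                "from InputStream select * insert into OutputStream;";

        SiddhiManager siddhiManager = new SiddhiManager();
        SiddhiAppRuntime runtime = siddhiManager.createSiddhiAppRuntime(siddhiApp);
        runtime.start();
    }
}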
UPDATE:
A sample can be found here; please try that and see whether it works.
I'm trying to use WireMock in my JUnit tests to mock calls to an external API.
public class ExampleWiremockTest {
    @Rule
    public WireMockRule wireMockRule = new WireMockRule(9999);

    @Before
    public void setUp() {
        stubFor(get(urlEqualTo("/bin/sillyServlet"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBody("Hello WireMock!")
                )
        );
    }

    @Test
    public void testNothing() throws URISyntaxException, IOException {
        URI uri = new URIBuilder().setScheme("http")
                .setHost("localhost")
                .setPort(9999)
                .setPath("/bin/sillyServlet")
                .build();
        HttpGet httpGet = new HttpGet(uri);
        CloseableHttpClient httpClient = HttpClients.createDefault();
        CloseableHttpResponse response = httpClient.execute(httpGet);
        HttpEntity entity = response.getEntity();
        String body = EntityUtils.toString(entity);
        assertThat(body, is("Hello WireMock!"));
    }
}
The code compiles but when I run my test, WireMock throws an HTTP 500 that seems to be caused by an inconsistency in the underlying Servlet API version.
Running com.example.core.ExampleWiremockTest
[main] INFO wiremock.org.eclipse.jetty.util.log - Logging initialized #1030ms
[main] INFO wiremock.org.eclipse.jetty.server.Server - jetty-9.2.z-SNAPSHOT
[main] INFO wiremock.org.eclipse.jetty.server.handler.ContextHandler - Started w.o.e.j.s.ServletContextHandler#ef9296d{/__admin,null,AVAILABLE}
[main] INFO wiremock.org.eclipse.jetty.server.handler.ContextHandler - Started w.o.e.j.s.ServletContextHandler#659a969b{/,null,AVAILABLE}
[main] INFO wiremock.org.eclipse.jetty.server.NetworkTrafficServerConnector - Started NetworkTrafficServerConnector#723d73e1{HTTP/1.1}{0.0.0.0:9999}
[main] INFO wiremock.org.eclipse.jetty.server.Server - Started #1168ms
[qtp436546048-16] INFO /__admin - RequestHandlerClass from context returned com.github.tomakehurst.wiremock.http.AdminRequestHandler. Normalized mapped under returned 'null'
[qtp436546048-20] INFO / - RequestHandlerClass from context returned com.github.tomakehurst.wiremock.http.StubRequestHandler. Normalized mapped under returned 'null'
[qtp436546048-20] WARN wiremock.org.eclipse.jetty.servlet.ServletHandler - Error for /bin/sillyServlet
java.lang.NoSuchMethodError: javax.servlet.http.HttpServletResponse.getHeader(Ljava/lang/String;)Ljava/lang/String;
at wiremock.org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:322)
at wiremock.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at wiremock.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at wiremock.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at wiremock.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at wiremock.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at wiremock.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at wiremock.org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at wiremock.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at wiremock.org.eclipse.jetty.server.Server.handle(Server.java:499)
at wiremock.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at wiremock.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at wiremock.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at wiremock.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at wiremock.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
[qtp436546048-20] WARN wiremock.org.eclipse.jetty.server.HttpChannel - /bin/sillyServlet
java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.isAsyncStarted()Z
at wiremock.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:684)
at wiremock.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at wiremock.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at wiremock.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at wiremock.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at wiremock.org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at wiremock.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at wiremock.org.eclipse.jetty.server.Server.handle(Server.java:499)
at wiremock.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at wiremock.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at wiremock.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at wiremock.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at wiremock.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
[qtp436546048-20] WARN wiremock.org.eclipse.jetty.server.HttpChannel - Could not send response error 500: java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.isAsyncStarted()Z
[main] INFO wiremock.org.eclipse.jetty.server.NetworkTrafficServerConnector - Stopped NetworkTrafficServerConnector#723d73e1{HTTP/1.1}{0.0.0.0:9999}
[main] INFO wiremock.org.eclipse.jetty.server.handler.ContextHandler - Stopped w.o.e.j.s.ServletContextHandler#659a969b{/,null,UNAVAILABLE}
[main] INFO wiremock.org.eclipse.jetty.server.handler.ContextHandler - Stopped w.o.e.j.s.ServletContextHandler#ef9296d{/__admin,null,UNAVAILABLE}
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.577 sec <<< FAILURE! - in com.example.core.ExampleWiremockTest
I do have other libraries in my classpath that rely on different versions of Jetty, which I suppose is the cause of the problem.
WireMock uses Jetty 9.2.13; I also have a transitive dependency on Cobertura, which relies on 6.1.14.
I was originally trying to use the following dependency:
<dependency>
    <groupId>com.github.tomakehurst</groupId>
    <artifactId>wiremock</artifactId>
    <version>2.6.0</version>
</dependency>
I switched to the standalone JAR version, hoping it would help me avoid the conflict, but the result is exactly the same.
<dependency>
    <groupId>com.github.tomakehurst</groupId>
    <artifactId>wiremock-standalone</artifactId>
    <version>2.6.0</version>
</dependency>
Check your servlet-api version; you are likely using an old one.
Both of those methods were added in Servlet 3.0:
HttpServletResponse.getHeader(String name)
HttpServletRequest.isAsyncStarted()
You probably have the Servlet 2.5 (or Servlet 2.4) jar in your classpath.
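If you want to confirm which servlet-api wins on the test classpath, a probe like the following can help. It's a diagnostic sketch of my own; the class and method names come straight from the stack traces above.

public class ServletApiCheck {
    public static void main(String[] args) throws Exception {
        Class<?> resp = Class.forName("javax.servlet.http.HttpServletResponse");
        // Which jar the interface was actually loaded from:
        System.out.println(resp.getProtectionDomain().getCodeSource().getLocation());
        try {
            // Added in Servlet 3.0; missing from 2.4/2.5 jars.
            resp.getMethod("getHeader", String.class);
            System.out.println("Servlet 3.0+ API present");
        } catch (NoSuchMethodException e) {
            System.out.println("A pre-3.0 servlet-api is first on the classpath");
        }
    }
}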
As mentioned in the question, one of the libraries I was using had a dependency on Cobertura, which in turn introduced a dependency on Servlet API 2.5 (both as a transitive dependency via an older version of Jetty and as a direct dependency).
Excluding the artifact from the original dependency (the one depending on Cobertura) allowed me to run my test successfully.
<dependency>
    <groupId>com.cognifide.slice</groupId>
    <artifactId>slice-core-api</artifactId>
    <version>${slice.version}</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>servlet-api-2.5</artifactId>
        </exclusion>
    </exclusions>
</dependency>
I'm using CUPS4J for my project, which depends on http-client, http-core, and slf4j.
To resolve dependencies we use Maven, and I have defined the dependencies as follows:
<dependency>
    <groupId>cups4j</groupId>
    <artifactId>cups4j</artifactId>
    <version>0.6.4</version>
</dependency>
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.0.3</version>
</dependency>
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpcore</artifactId>
    <version>4.1</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.7</version>
</dependency>
The cups4j dependency is on our Artifactory server (I couldn't find it online).
Everything works like a charm if I create a sample main method to print some document and launch it as a Java application.
When I publish my classes to the WebSphere server and call that method from a webpage, it generates a java.lang.LinkageError.
This is the relevant part of the stacktrace:
Caused by: java.lang.LinkageError: loader constraint violation: loader "org/eclipse/osgi/internal/baseadaptor/DefaultClassLoader#208c132" previously initiated loading for a different type with name "org/apache/http/client/methods/HttpUriRequest" defined by loader "com/ibm/ws/classloader/CompoundClassLoader#1e0f797"
at java.lang.ClassLoader.defineClassImpl(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:260)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.defineClass(DefaultClassLoader.java:188)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.defineClass(ClasspathManager.java:580)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findClassImpl(ClasspathManager.java:550)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClassImpl(ClasspathManager.java:481)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass_LockClassName(ClasspathManager.java:460)
at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:447)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:393)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:469)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:612)
at org.apache.http.impl.client.AbstractHttpClient.determineTarget(AbstractHttpClient.java:584)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:708)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:700)
at org.cups4j.operations.IppOperation.sendRequest(IppOperation.java:207)
at org.cups4j.operations.IppOperation.request(IppOperation.java:76)
at org.cups4j.CupsPrinter.print(CupsPrinter.java:113)
at it.dropcomp.tasks.print.PrinterService.printPDF(PrinterService.java:160)
This is the method that prints the PDF (inside it.dropcomp.tasks.print.PrinterService):
public void printPDF() throws RemoteServiceException {
    /*
     * generatedPDF is defined as File, and it's properly initialized
     * before calling this method.
     */
    if (generatedPDF == null) {
        throw new RemoteServiceException("You must generate a file first!");
    }
    try {
        CupsPrinter selectedPrinter = new CupsPrinter(
                new URL(Constants.PRINTER_FULL_URL),
                Constants.PRINTER_NAME, true
        );
        InputStream is = new FileInputStream(generatedPDF);
        PrintJob pj = new PrintJob.Builder(is).build();
        selectedPrinter.print(pj); // this is line 160
    } catch (Exception e) {
        LOG.error("Exception", e);
        throw new RemoteServiceException(e);
    }
}
It seems that HttpUriRequest is already loaded and conflicts with the one provided by the Apache httpclient library, but if I try removing that dependency from pom.xml, I get a NoClassDefFoundError for that class.
If it matters, my IDE is Eclipse Luna.
How can I solve this exception?
WebSphere also uses an httpclient library of its own, which may conflict with the one you are providing.
Try creating an isolated shared library in the admin console via Environment > Shared Libraries. Put the http-*, slf4j, and cups4j jars there and associate that shared library with your application.
Just downloaded and installed Elasticsearch 1.3.2 in the past hour.
Opened iptables to port 9200 and 9300:9400.
Set my computer name and IP in /etc/hosts.
Head module and Paramedic installed and running smoothly.
curl on localhost works flawlessly.
Copied all jars from the download into Eclipse, so the client is the same version.
--Java--
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.QueryBuilders;
public class Test {
    public static void main(String[] args) {
        Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", "elastictest").build();
        TransportClient transportClient = new TransportClient(settings);
        Client client = transportClient.addTransportAddress(new InetSocketTransportAddress("143.79.236.xxx", 9300)); // just masking ip with xxx for SO question
        try {
            SearchResponse response = client.prepareSearch().setQuery(QueryBuilders.matchQuery("url", "twitter")).setSize(5).execute().actionGet(); // bunch of urls indexed
            String output = response.toString();
            System.out.println(output);
        } catch (Exception e) {
            e.printStackTrace();
        }
        client.close();
    }
}
--Output--
log4j:WARN No appenders could be found for logger (org.elasticsearch.plugins).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:298)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:214)
at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:105)
at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:330)
at org.elasticsearch.client.transport.TransportClient.search(TransportClient.java:421)
at org.elasticsearch.action.search.SearchRequestBuilder.doExecute(SearchRequestBuilder.java:1097)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at Test.main(Test.java:20)
Update: Now I am REALLY confused. I just pressed run in Eclipse 3 times: twice I received the error above, and once the search worked!? Brand new CentOS 6.5 VPS, brand new JDK installed. Then installed Elasticsearch; I have done nothing else to the box.
Update: Output after running ./bin/elasticsearch in the console:
[2014-09-18 08:56:13,694][INFO ][node ] [Acrobat] version[1.3.2], pid[2978], build[dee175d/2014-08-13T14:29:30Z]
[2014-09-18 08:56:13,695][INFO ][node ] [Acrobat] initializing ...
[2014-09-18 08:56:13,703][INFO ][plugins ] [Acrobat] loaded [], sites [head, paramedic]
[2014-09-18 08:56:15,941][WARN ][common.network ] failed to resolve local host, fallback to loopback
java.net.UnknownHostException: elasticsearchtest: elasticsearchtest: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
at org.elasticsearch.common.network.NetworkUtils.<clinit>(NetworkUtils.java:54)
at org.elasticsearch.transport.netty.NettyTransport.<init>(NettyTransport.java:204)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:54)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:200)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:830)
at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)
at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)
at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)
at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:59)
at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:192)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:70)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:203)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.net.UnknownHostException: elasticsearchtest: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
... 62 more
[2014-09-18 08:56:16,937][INFO ][node ] [Acrobat] initialized
[2014-09-18 08:56:16,937][INFO ][node ] [Acrobat] starting ...
[2014-09-18 08:56:17,110][INFO ][transport ] [Acrobat] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/143.79.236.31:9300]}
[2014-09-18 08:56:17,126][INFO ][discovery ] [Acrobat] elastictest/QvSNFajjQ9SFjU7WOdjaLw
[2014-09-18 08:56:20,145][INFO ][cluster.service ] [Acrobat] new_master [Acrobat][QvSNFajjQ9SFjU7WOdjaLw][localhost][inet[/143.79.236.31:9300]], reason: zen-disco-join (elected_as_master)
[2014-09-18 08:56:20,212][INFO ][http ] [Acrobat] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/143.79.236.31:9200]}
[2014-09-18 08:56:20,214][INFO ][node ] [Acrobat] started
--cluster config in elasticsearch.yml--
################################### Cluster ###################################
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: elastictest
Possible problems:
1. wrong port: if you use a Java or Scala client, the correct port is 9300, not 9200 (see the sketch after this list)
2. wrong cluster name: make sure the cluster name you set in your code is the same as the cluster.name you set in $ES_HOME/config/elasticsearch.yml
3. the sniff option: setting client.transport.sniff to true while being unable to connect to all nodes of the ES cluster will cause this problem too; the ES docs explain why
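Putting the first two points together for the 1.x-era client used in the question, a minimal sketch might look like this (the IP stays masked as in the question, and the cluster name must match elasticsearch.yml):

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class ClientSketch {
    public static void main(String[] args) {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "elastictest") // must match cluster.name in elasticsearch.yml
                .build();
        Client client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("143.79.236.xxx", 9300)); // transport port, not the 9200 HTTP port
        // ... use the client, then:
        client.close();
    }
}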
Elasticsearch settings are in $ES_HOME/config/elasticsearch.yml. There, if the cluster.name setting is commented out, it means ES will accept just about any cluster name. So, in your code, the cluster.name "elastictest" might be the problem. Try this:
Client client = new TransportClient()
        .addTransportAddress(new InetSocketTransportAddress(
                "143.79.236.xxx",
                9300));
You should check the node's transport port; you can do that using the head plugin.
These ports are not the same. For example:
the web URL you can open is localhost:9200,
but the node's transport port is 9300, so none of the configured nodes are available if you use 9200 as the port.
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{UfB9geJCR--spyD7ewmoXQ}{192.168.1.245}{192.168.1.245:9300}]]
In my case it was a difference in versions. If you check the logs in the Elasticsearch cluster, you will see:
Elasticsearch logs
[node1] exception caught on transport layer [NettyTcpChannel{localAddress=/192.168.1.245:9300, remoteAddress=/172.16.1.47:65130}], closing connection
java.lang.IllegalStateException: Received message from unsupported version: [5.0.0] minimal compatible version is: [5.6.0]
I was using elasticsearch client and transport version 5.1.1, and my Elasticsearch cluster version was 6.x, so I changed my library version to 5.4.3.
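A quick way to compare versions at runtime is to print the client jar's version constant and check it against the version.number field that http://host:9200/ returns. This is a small diagnostic of my own, assuming the transport-client-era elasticsearch artifact is on the classpath:

import org.elasticsearch.Version;

public class EsClientVersion {
    public static void main(String[] args) {
        // Version of the elasticsearch client jar on the classpath.
        System.out.println(Version.CURRENT);
    }
}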
Faced a similar issue; here is the solution.
Example:
In elasticsearch.yml, add the below properties:
cluster.name: production
node.name: node1
network.bind_host: 10.0.1.22
network.host: 0.0.0.0
transport.tcp.port: 9300
Add the following in the Java Elastic API for bulk push (just a code snippet). For the IP address, add the public IP address of the Elasticsearch machine:
Client client;
BulkRequestBuilder requestBuilder;
try {
    client = TransportClient.builder()
            .settings(Settings.builder()
                    .put("cluster.name", "production")
                    .put("node.name", "node1"))
            .build()
            .addTransportAddress(new InetSocketTransportAddress(
                    InetAddress.getByName(""), 9300)); // public IP of the Elasticsearch machine goes here
    requestBuilder = client.prepareBulk();
} catch (Exception e) {
    // handle or at least log the failure rather than swallowing it
}
Open the firewall ports 9200 and 9300.
I spent days trying to figure out this issue. I know it's late, but this might be helpful:
I resolved this issue by changing to compatible/stable versions of:
Spring boot: 2.1.1
Spring Data Elastic: 2.1.4
Elastic: 6.4.0 (default)
Maven:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.1.RELEASE</version>
</parent>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
    <version>2.1.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
</dependency>
You don't need to mention the Elastic version; by default it is 6.4.0. But if you want to add a specific version, use the below snippet inside the properties tag, and use the compatible version of Spring Boot and Spring Data (if required):
<properties>
    <elasticsearch.version>6.8.0</elasticsearch.version>
</properties>
Also, I used the Rest High Level client in ElasticConfiguration :
@Value("${elasticsearch.host}")
public String host;

@Value("${elasticsearch.port}")
public int port;

@Bean(destroyMethod = "close")
public RestHighLevelClient restClient1() {
    final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
    RestClientBuilder builder = RestClient.builder(new HttpHost(host, port));
    RestHighLevelClient client = new RestHighLevelClient(builder);
    return client;
}
Important note:
Elasticsearch uses port 9300 for communication between nodes and 9200 for the HTTP client. In application properties:
elasticsearch.host=10.40.43.111
elasticsearch.port=9200
spring.data.elasticsearch.cluster-nodes=10.40.43.111:9300 (customized Elastic server)
spring.data.elasticsearch.cluster-name=any-cluster-name (customized cluster name)
From Postman, you can use: http://10.40.43.111:9200/[indexname]/_search
Happy coding :)
For completion's sake, here's the snippet that creates the transport client, passing a proper InetAddress to InetSocketTransportAddress:
Client esClient = TransportClient.builder()
        .settings(settings)
        .build()
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("143.79.236.xxx"), 9300));
If the advice above does not work for you, change the log level of your logging framework configuration (log4j, logback...) to INFO and re-check the output.
The logger may be hiding messages like:
INFO org.elasticsearch.client.transport.TransportClientNodesService - failed to get node info for...
Caused by: ElasticsearchSecurityException: missing authentication token for action...
(in the example above, the X-Pack plugin was installed in Elasticsearch, which requires authentication)
For other users getting this problem:
You may get this error if you are running a newer Elasticsearch (5.5 or later) with a Spring Boot version below 2.
The recommendation is to use the REST client, since the Java client will be deprecated.
Another workaround is to upgrade to Spring Boot 2, since that should be compatible.
See https://discuss.elastic.co/t/spring-data-elasticsearch-cant-connect-with-elasticsearch-5-5-0/94235 for more information.
Since most of the answers here seem to be outdated, here is the setup that worked for me:
Elasticsearch-Version: 7.2.0 (OSS) running on Docker
Java-Version: JDK-11
elasticsearch.yml:
cluster.name: production
node.name: node1
network.host: 0.0.0.0
transport.tcp.port: 9300
cluster.initial_master_nodes: node1
Setup:
client = new PreBuiltTransportClient(Settings.builder().put("cluster.name", "production").build());
client.addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));
Since PreBuiltTransportClient is deprecated, you should use RestHighLevelClient for Elasticsearch version 7.3.0: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-high-level-client/7.3.0/index.html
If you are using the Java transport client:
1. Check that port 9300 is accessible/open.
2. Check the node and cluster name; they should be correct. You can check the node and cluster name by typing ip:port in your browser.
3. Check that the version of your client jar matches the installed ES version.
This one worked for me on ES 1.7.5:
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

import java.io.IOException;
import java.util.Date;

import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.xcontent.XContentBuilder;

public class IndexTest {

    public static void main(String[] args) throws IOException {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("client.transport.sniff", true)
                .put("cluster.name", "elasticcluster").build();
        Client client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("[ipaddress]", 9300));
        XContentBuilder builder = null;
        try {
            builder = jsonBuilder().startObject()
                    .field("user", "testdata")
                    .field("postdata", new Date())
                    .field("message", "testmessage")
                    .endObject();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println(builder.string());
        IndexResponse response = client.prepareIndex("twitter", "tweet", "1")
                .setSource(builder).execute().actionGet();
        client.close();
    }
}
Check your elasticsearch.yml: the "transport.host" property must be "0.0.0.0", not "127.0.0.1" or "localhost".
This means we were not able to instantiate the ES TransportClient, and it throws this exception. There are a couple of possibilities that can cause this issue:
1. The cluster name is incorrect. Open the ES_HOME_DIR/config/elasticsearch.yml file and check the cluster name value, or use this command: curl -XGET 'http://localhost:9200/_nodes'
2. Check the ports: 9200 is the HTTP port, but the Elasticsearch service uses TCP port 9300 [by default]. Verify that port 9300 is not blocked.
3. Authentication issue: set the header in the TransportClient's context for authentication:
client.threadPool().getThreadContext()
        .putHeader("Authorization", "Basic " + encodeBase64String(basicHeader.getBytes()));
If you are still facing this issue, then add the following property:
put("client.transport.ignore_cluster_name", true)
The basic code below works fine for me:
Settings settings = Settings.builder()
        .put("cluster.name", "my-application")
        .put("client.transport.sniff", true)
        .put("client.transport.ignore_cluster_name", false).build();
TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
I know I'm a bit late. In case the above answers didn't work, I recommend checking the logs in the Elasticsearch terminal. I found that the error message said I needed to update my version from 5.0.0-rc1 to 6.8.0, and I resolved it by updating my Maven dependencies to:
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.8.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>transport</artifactId>
    <version>6.8.0</version>
</dependency>
This changed my code as well, since InetSocketTransportAddress is deprecated; I had to change it to TransportAddress:
TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)
        .addTransportAddress(new TransportAddress(InetAddress.getByName(host), port));
And you also need to add this to your config/elasticsearch.yml file (use your host address):
transport.host: localhost
Check the ES server logs:
sudo tail -f /var/log/elasticsearch/elasticsearch.log
I was using an outdated client:
Received message from unsupported version: [5.0.0] minimal compatible version is: [6.8.0]
I had the same problem: the version of my dependency conflicted with the Elasticsearch version. Check the version at ip:9200 and use the dependency version that matches it.
This is a common issue for ES versions 5.6.10+. Elasticsearch had the TransportClient (built via PreBuiltTransportClient), which has been deprecated since that version. The alternative (which is the solution here in case you are using ES 7.14 or earlier) is to use the Java High Level REST Client. See the documentation (they also have a great migration guide for moving an application from the TransportClient to the REST client).
From version 7.15 they dropped the Java High Level REST Client in favor of the Java API Client, and there are migration guides for that too. They did this mainly to reduce the client's dependencies. See the docs.
If you use the same version of the client as the cluster, and the right client library, the issue should be resolved. A rough sketch of the newer client setup follows.
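For reference, here is how connecting with the newer Java API Client mentioned above looks. Host and port are placeholders of mine, and the exact artifact and package names should be checked against the docs for your version:

import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import co.elastic.clients.transport.ElasticsearchTransport;
import co.elastic.clients.transport.rest_client.RestClientTransport;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

public class NewClientSketch {
    public static void main(String[] args) throws Exception {
        // The low-level REST client talks to the HTTP port (9200), not 9300.
        RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
        // Transport layer with JSON mapping, then the typed client on top of it.
        ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
        ElasticsearchClient esClient = new ElasticsearchClient(transport);
        // ... use esClient, then close the transport when done:
        transport.close();
    }
}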
You should check the logs.
If you see something like the below:
"stacktrace": ["java.lang.IllegalStateException: Received message from unsupported version: [6.4.3] minimal compatible version is: [6.8.0]"
You can check this link
https://discuss.elastic.co/t/java-client-or-spring-boot-for-elasticsearch-7-3-1/199778
You have to explicitly declare the ES version.