I am forming a SearchRequest and using the Elasticsearch RestHighLevelClient to fetch documents from Elasticsearch, but while searching for documents in ES I get the error below.
Please find the stack trace below:
`18-Sep-2018 06:35:55.819 SEVERE [Thread-10] com.demo.searchengine.dao.DocumentSearch.getDocumentByName listener timeout after waiting for [30000] ms
java.io.IOException: listener timeout after waiting for [30000] ms
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:663)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:222)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:194)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:443)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:429)
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:368)
at com.demo.searchengine.dao.DocumentSearch.getDocumentByName(DocumentSearch.java:76)
at com.demo.searchengineservice.mqservice.Service.searchByDocuments(Service.java:43)
at com.demo.searchengineservice.mqservice.Consumer.consume(Consumer.java:27)
at com.demo.utils.Consumer$1$1.run(Consumer.java:89)
at java.lang.Thread.run(Unknown Source)`
Please find my code below:
public class SearchEngineClient {

    private static PropertiesFile propertiesFile = PropertiesFile.getInstance();
    private final static String elasticHost = propertiesFile.extractPropertiesFile().getProperty("ELASTIC_HOST");

    private static RestHighLevelClient instance = new RestHighLevelClient(RestClient.builder(
            new HttpHost(elasticHost, 9200, "http"),
            new HttpHost(elasticHost, 9201, "http")));

    public static RestHighLevelClient getInstance() {
        return instance;
    }
}
I am using the client instance below to get the response from ES.
searchResponse = SearchEngineClient.getInstance().search(contentSearchRequest);
It looks like a problem with your Elasticsearch server not being reachable from the outside. By default the ES server binds only to localhost, which means it is not reachable from other machines.
So on your remote ES server, find the elasticsearch.yml configuration file. In that file, set network.host to your IP address (or 0.0.0.0 to listen on all interfaces). After that change you need to restart ES.
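For illustration, the relevant change in elasticsearch.yml looks roughly like this (a sketch; use your server's actual address instead of 0.0.0.0 if you only want to listen on one interface):
# Bind to all interfaces instead of localhost only
network.host: 0.0.0.0
Keep in mind that binding to a non-loopback address exposes the node beyond localhost, so restrict access appropriately.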
I'm trying to upload a document from a Lambda script; however, I'm stuck because I keep getting the following error whenever the Lambda script starts:
com.mongodb.MongoSocketException: cluster0-whnfd.mongodb.net: No address associated with hostname
The error seems obvious; however, I can connect using that same URL via MongoDB Compass. The Java class I'm using looks like this:
public class MongoStore {

    private final static String MONGO_ADDRESS = "mongodb+srv://<USERNAME>:<PASSWORD>@cluster0-whnfd.mongodb.net/test";

    private MongoCollection<Document> collection;

    public MongoStore() {
        final MongoClientURI uri = new MongoClientURI(MONGO_ADDRESS);
        final MongoClient mongoClient = new MongoClient(uri);
        final MongoDatabase database = mongoClient.getDatabase("test");
        this.collection = database.getCollection("test");
    }

    public void save(String payload) {
        Document document = new Document();
        document.append("message", payload);
        collection.insertOne(document);
    }
}
Have I just misconfigured my Java class, or is there something more tricky going on here?
I had the same problem with a freshly created MongoDB Atlas database, when I started migrating my Python web application from Heroku.
I realised the DNS name cluster0.hgmft.mongodb.net just doesn't exist as a plain hostname: mongodb+srv:// connection strings are resolved via DNS SRV records instead.
The magic happened when I installed the library dnspython (my app is written in Python); with that library the MongoDB client was able to connect to my database in MongoDB Atlas.
I'm trying to define a Braid server in Java like in this repo. The following is my BootstrapBraidService class:
@CordaService
public class BootstrapBraidService extends SingletonSerializeAsToken {

    private AppServiceHub appServiceHub;
    private BraidConfig braidConfig;

    public BootstrapBraidService(AppServiceHub appServiceHub) {
        this.appServiceHub = appServiceHub;
        this.braidConfig = new BraidConfig();
        // Include a flow on the Braid server.
        braidConfig.withFlow(ExtendedStatusFlow.IssueFlow.class);
        // Include a service on the Braid server.
        braidConfig.withService("myService", new BraidService(appServiceHub));
        // The port the Braid server listens on.
        braidConfig.withPort(3001);
        // Using http instead of https.
        braidConfig.withHttpServerOptions(new HttpServerOptions().setSsl(false));
        // Start the Braid server.
        braidConfig.bootstrapBraid(this.appServiceHub, Object::notify);
    }
}
However, the node starts up without my settings; for example, the port uses the default (8080) instead of my setting (3001).
And the Node.js server fails to get the services descriptor:
{ Error: failed to get services descriptor from
http://localhost:8080/api/
at createHangUpError (_http_client.js:331:15)
at Socket.socketOnEnd (_http_client.js:423:23)
at emitNone (events.js:111:20)
at Socket.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1064:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9) code: 'ECONNRESET', url: 'http://localhost:8080/api/' }
Can somebody tell me how to fix this problem? Thanks.
Update:
the node shell screenshot
This isn't working because BraidConfig is an immutable class with a fluent API, but your code is using it as a classic mutable POJO, which means none of your changes are actually applied to the BraidConfig.
The following should work fine:
@CordaService
public class BootstrapBraidService extends SingletonSerializeAsToken {

    private AppServiceHub appServiceHub;
    private BraidConfig braidConfig;

    public BootstrapBraidService(AppServiceHub appServiceHub) {
        this.appServiceHub = appServiceHub;
        this.braidConfig = new BraidConfig()
                // Include a flow on the Braid server.
                .withFlow(ExtendedStatusFlow.IssueFlow.class)
                // Include a service on the Braid server.
                .withService("myService", new BraidService(appServiceHub))
                // The port the Braid server listens on.
                .withPort(3001)
                // Using http instead of https.
                .withHttpServerOptions(new HttpServerOptions().setSsl(false));
        // Start the Braid server.
        braidConfig.bootstrapBraid(this.appServiceHub, null);
    }
}
regards,
Fuzz
I am able to access Elasticsearch via http://127.0.0.1:9200; however, when trying to connect from the same machine via RestHighLevelClient I get java.net.ConnectException: Connection refused.
try {
    final BulkResponse response = this.restHighLevelClient.bulk(bulkRequest);
}
catch (final IOException exn) {
    LOG.error("Bulk insert failed", exn);
}
The configuration class for the Elasticsearch client is shown below.
@Bean
public RestHighLevelClient restClient() {
    return new RestHighLevelClient(RestClient.builder(new HttpHost("localhost", 9200, "http")));
}
I have retained the default settings in the elasticsearch.yml file and debugged to be sure that the host and port are correct.
Any ideas please?
I had the same issue but my problem was that I was connecting to the wrong host by mistake.
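If you hit the same exception, a quick way to rule out host/port mix-ups is a connectivity check before the bulk call. A rough sketch (assuming the pre-7.x RestHighLevelClient API, where ping() takes no arguments, and reusing the restHighLevelClient and LOG from the question):
try {
    // Sanity check: does the client reach the cluster at all?
    boolean reachable = restHighLevelClient.ping();
    LOG.info("Elasticsearch reachable: " + reachable);
} catch (final IOException exn) {
    LOG.error("Cannot reach Elasticsearch on the configured host/port", exn);
}
If this already fails, the problem is the host/port configuration (or the server binding), not the bulk request itself.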
I have a problem with the Vert.x HttpClient.
Here's code that tests a GET using Vert.x and plain Java.
Vertx vertx = Vertx.vertx();
HttpClientOptions options = new HttpClientOptions()
        .setTrustAll(true)
        .setSsl(false)
        .setDefaultPort(80)
        .setProtocolVersion(HttpVersion.HTTP_1_1)
        .setLogActivity(true);
HttpClient client = vertx.createHttpClient(options);

client.getNow("google.com", "/", response -> {
    System.out.println("Received response with status code " + response.statusCode());
});

System.out.println(getHTML("http://google.com"));
Where getHTML() is from here: How do I do a HTTP GET in Java?
This is my output:
<!doctype html><html... etc <- correct output from plain java
Feb 08, 2017 11:31:21 AM io.vertx.core.http.impl.HttpClientRequestImpl
SEVERE: java.net.UnknownHostException: failed to resolve 'google.com'. Exceeded max queries per resolve 3
But vertx can't connect. What's wrong here? I'm not using any proxy.
For reference: a solution, as described in this question and in tsegismont's comment here, is to set the flag vertx.disableDnsResolver to true:
-Dvertx.disableDnsResolver=true
in order to fall back to the JVM DNS resolver as explained here:
sometimes it can be desirable to use the JVM built-in resolver, the JVM system property -Dvertx.disableDnsResolver=true activates this behavior
I observed this DNS resolution issue with a redis client in a kubernetes environment.
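If you'd rather not change the launch command, a minimal sketch (assuming the property is read when the first Vertx instance is created, which is my understanding of how the flag works) is to set the system property programmatically before building Vertx:
// Assumption: vertx.disableDnsResolver is read during Vert.x startup,
// so it must be set before the first Vertx.vertx() call.
System.setProperty("vertx.disableDnsResolver", "true");
Vertx vertx = Vertx.vertx(); // falls back to the JVM built-in resolver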
I had this issue, what caused it for me was stale DNS servers being picked up by the Java runtime, i.e. servers registered for a network the machine was no longer connected to. The issue is first in the Sun JNDI implementation, it also exists in Netty which uses JNDI to bootstrap its list of name servers on most platforms, then finally shows up in VertX.
I think a good place to fix this would be in the Netty layer where the set of default DNS servers is bootstrapped. I have raised a ticket with the Netty project so we'll see if they agree with me! Here is the Netty ticket
In the meantime, a fairly basic workaround is to filter the default DNS servers detected by Netty based on whether they are reachable or not. Here is a code sample in Kotlin to apply before constructing the main VertX instance.
import io.netty.resolver.dns.DefaultDnsServerAddressStreamProvider
import io.vertx.core.Vertx
import io.vertx.core.VertxOptions
import java.util.stream.Collectors

// The default set of name servers provided by JNDI can contain stale entries
// This default set is picked up by Netty and in turn by VertX
// To work around this, we filter for only reachable name servers on startup
val nameServers = DefaultDnsServerAddressStreamProvider.defaultAddressList()
val reachableNameServers = nameServers.stream()
        .filter { ns -> ns.address.isReachable(NS_REACHABLE_TIMEOUT) }
        .map { ns -> ns.address.hostAddress }
        .collect(Collectors.toList())
if (reachableNameServers.size == 0)
    throw StartupException("There are no reachable name servers available")

val opts = VertxOptions()
opts.addressResolverOptions.servers = reachableNameServers

// The primary Vertx instance
val vertx = Vertx.vertx(opts)
A little more detail in case it is helpful. I have a company machine, which at some point was connected to the company network by a physical cable. Details of the company's internal name servers were set up by DHCP on the physical interface. Using the wireless interface at home, the DNS for the wireless interface gets set to my home DNS while the config for the physical interface is not updated. This is fine since that device is not active; ipconfig /all does not show the internal company DNS servers. However, looking in the registry they are still there:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
They get picked up by the JNDI mechanism, which feeds Netty and in turn VertX. Since they are not reachable from my home location, DNS resolution fails. I can imagine this home/office situation is not unique to me! I don't know whether something similar could occur with multiple virtual interfaces on containers or VMs, it could be worth looking at if you are having problems.
Here is the sample code which works for me.
public class TemplVerticle extends HttpVerticle {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Create the web client and enable SSL/TLS with a trust store
        WebClient client = WebClient.create(vertx,
                new WebClientOptions()
                        .setSsl(true)
                        .setTrustAll(true)
                        .setDefaultPort(443)
                        .setKeepAlive(true)
                        .setDefaultHost("www.w3schools.com")
        );
        client.get("www.w3schools.com")
                .as(BodyCodec.string())
                .send(ar -> {
                    if (ar.succeeded()) {
                        HttpResponse<String> response = ar.result();
                        System.out.println("Got HTTP response body");
                        System.out.println(response.body().toString());
                    } else {
                        ar.cause().printStackTrace();
                    }
                });
    }
}
Try using WebClient instead of HttpClient; here is an example (with rx):
private val client: WebClient = WebClient.create(vertx, WebClientOptions()
.setSsl(true)
.setTrustAll(true)
.setDefaultPort(443)
.setKeepAlive(true)
)
open fun <T> get(uri: String, marshaller: Class<T>): Single<T> {
return client.getAbs(host + uri).rxSend()
.map { extractJson(it, uri, marshaller) }
}
Another option is to use getAbs with an absolute URI, as in the snippet above.
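If you are on plain Java without Rx, a roughly equivalent sketch would look like this (not from the answer above; the vertx variable and the URL are just illustrative assumptions):
WebClient client = WebClient.create(vertx, new WebClientOptions()
        .setSsl(true)
        .setTrustAll(true));
// getAbs takes the full URL, so no default host/port needs to be configured.
client.getAbs("https://www.google.com/")
        .as(BodyCodec.string())
        .send(ar -> {
            if (ar.succeeded()) {
                System.out.println("Status: " + ar.result().statusCode());
            } else {
                ar.cause().printStackTrace();
            }
        });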
I created an AWS Lambda package (Java) with a function that reads some files from Amazon S3 and pushes the data to the AWS Elasticsearch Service. Since I'm using AWS Elasticsearch, I can't use the Transport client, so I'm working with the Jest client to push via REST. The issue is with the Jest client.
Here's my Jest client instance:
public JestClient getClient() throws InterruptedException {
    final Supplier<LocalDateTime> clock = () -> LocalDateTime.now(ZoneOffset.UTC);
    DefaultAWSCredentialsProviderChain awsCredentialsProvider = new DefaultAWSCredentialsProviderChain();
    final AWSSigner awsSigner = new AWSSigner(awsCredentialsProvider, REGION, SERVICE, clock);

    JestClientFactory factory = new JestClientFactory() {
        @Override
        protected HttpClientBuilder configureHttpClient(HttpClientBuilder builder) {
            builder.addInterceptorLast(new AWSSigningRequestInterceptor(awsSigner));
            return builder;
        }

        @Override
        protected HttpAsyncClientBuilder configureHttpClient(HttpAsyncClientBuilder builder) {
            builder.addInterceptorLast(new AWSSigningRequestInterceptor(awsSigner));
            return builder;
        }
    };

    factory.setHttpClientConfig(
            new HttpClientConfig.Builder(URL)
                    .discoveryEnabled(true)
                    .multiThreaded(true).build());

    JestClient jestClient = factory.getObject();
    return jestClient;
}
Since the AWS Elasticsearch domain is protected by an IAM access policy, I sign the requests for them to be authorized by AWS (example here). I use POJOs to index documents.
The problem I face is that I am not able to execute more than one action with the Jest client instance. For example, if I create the index first:
client.execute(new CreateIndex.Builder(indexName).build());
and later on I wanted to, for example, do some bulk indexing:
for (Object object : listOfObjects) {
    bulkIndexBuilder.addAction(new Index.Builder(object)
            .index(INDEX_NAME).type(DOC_TYPE).build());
}
client.execute(bulkIndexBuilder.build());
only the first action will be executed and the second will fail. Why is that? Is it possible to execute more than one action?
Moreover, using the provided code, I'm not able to execute more than 20 bulk operations when I want to index documents. Basically, around 20 is fine, but for anything more than that, client.execute(bulkIndexBuilder.build()); just does not execute and the client shuts down.
Any help or suggestion would be appreciated.
UPDATE:
It seems that AWS Elasticsearch does not allow connecting to individual nodes. Simply turning off node discovery in the Jest client with .discoveryEnabled(false) solved all the problems. This answer helped.
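For completeness, a minimal sketch of the factory configuration with discovery turned off (the same setup as in the question, with only the discoveryEnabled flag changed):
// Node discovery disabled: the AWS Elasticsearch domain sits behind a single endpoint,
// so the client must not try to talk to individual nodes directly.
factory.setHttpClientConfig(
        new HttpClientConfig.Builder(URL)
                .discoveryEnabled(false)
                .multiThreaded(true)
                .build());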