I am using Elasticsearch with the Java transport client.
Elasticsearch: 1.1.0
Java client: 1.1.0
I am making search queries using the transport client, and it works smoothly for a while. But after some time (a few minutes), every request from my Java client starts failing with NoNodeAvailableException, while curl on the same machine still gets responses.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("client.transport.ping_timeout", "50s") // wait up to 50s for a ping response from the node
        .build();
client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(host, port));
This is how I am connecting to the node.
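For illustration, the fuller form of this setup is sketched below; the cluster name, the sniff flag, and wrapping it in a helper method are assumptions for the sketch, not my actual settings.

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// Illustrative 1.x transport-client setup.
TransportClient buildClient(String host, int port) {
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", "elasticsearch")        // must match the server's cluster name
            .put("client.transport.ping_timeout", "50s") // tolerate slow ping responses
            .put("client.transport.sniff", true)         // discover the rest of the cluster
            .build();
    return new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress(host, port));
}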
Problem: after some time, requests start failing with NoNodeAvailableException.
After a few minutes, it starts responding properly again.
I have also integrated Elasticsearch with Couchbase via the transport plugin, so Couchbase keeps pushing updates in parallel (not heavily, but occasionally). Could that be a factor, since Elasticsearch is also indexing data from Couchbase at the same time?
An application that fetches details from an external system was developed and deployed on IIB 10.0.0.10. Later, the application was moved to a new server running IIB 10.0.0.14. The issue is that the application gives a proper response right after deployment or an EG restart, but gives a parsing error after that. A Java compute node is used for connecting to the external system, and the parsing error occurs at the line below:
String response = getStringValue(detailsObject.getLastChild().getFirstElementByPath("./element1/element2/element3"));
The same service works fine on the old server (IIB 10.0.0.10). Also, the first hit of the service after a deployment gives a proper response on the new server.
Code Requirements:
The user hits the service with a URL pattern of /database//collection//entities.
Java attempts to connect to that specific database and collection via Gremlin. If the connection fails, the error is returned to the user.
If the connection is successful, Java runs a pre-built query and returns the results to the user.
Issue I am facing: using the tutorial at https://github.com/Azure-Samples/azure-cosmos-db-graph-java-getting-started/blob/master/src/GetStarted/Program.java, I build a Cluster and then a Client object with the correct credentials, and when all the configuration values are correct it works without any issues. However, if I change any parameter (DATABASE_ID, COLLECTION_ID, or PASSWORD), the code continues past building the Cluster and past cluster.connect(), and only when it attempts to run client.submit(query) does it throw a NullPointerException.
Question: is there a method built into the Cluster or Client object that indicates whether it has successfully authenticated?
CODE CONSOLE:
DATABSE_ID:PURPOSELY_WRONG_DB
COLLECTION_ID:PURPOSELY_WRONG_COLLECTION
PASSWORD:PURPOSELY_WRONG_PASSWORD_TO_TEST_IF_CONNECTION_THROWS_ERROR
QUERY:g.V().count()
START QUERYING GREMLIN SERVER
AT THIS POINT I HAVE PASSED CLIENT.CONNECT()
ABOUT TO SUBMIT THE QUERY.....
java.lang.NullPointerException: null
at org.apache.tinkerpop.gremlin.driver.Handler$GremlinResponseHandler.channelRead0(Handler.java:239)
at org.apache.tinkerpop.gremlin.driver.Handler$GremlinResponseHandler.channelRead0(Handler.java:195)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
The Gremlin Server protocol uses SASL-based authentication, and the auth handshake begins when the first request is sent.
The basic handshake sequence is as follows:
The server receives a request on a new connection.
The server sends an authentication challenge.
The client sends an authentication response with credentials.
The server checks the credentials and either responds with the results of the original request or returns an invalid-credentials response.
However, the null pointer exception is not expected.
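If you need to fail fast on bad credentials in the meantime, one workaround is to force the handshake eagerly at startup. A minimal sketch (the endpoint, the Cosmos-style credential format, and the probe query are illustrative assumptions):

import java.util.concurrent.ExecutionException;
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;

// Placeholders for illustration only.
Cluster cluster = Cluster.build("your-account.gremlin.cosmosdb.azure.com")
        .port(443)
        .enableSsl(true)
        .credentials("/dbs/DATABASE_ID/colls/COLLECTION_ID", "PASSWORD")
        .create();
Client client = cluster.connect();
try {
    // connect() is lazy; submitting a cheap query forces the SASL handshake,
    // so bad credentials surface here instead of deeper in the driver.
    client.submit("g.V().limit(1)").all().get();
    System.out.println("Authenticated successfully");
} catch (InterruptedException | ExecutionException e) {
    // On an auth failure the cause is typically a ResponseException
    // describing the rejected credentials.
    System.err.println("Connection/auth check failed: " + e.getCause());
} finally {
    cluster.close();
}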
Can you provide:
The version of the gremlin-java client you are using?
A sample that reproduces the issue?
If possible, the response message that GremlinResponseHandler is attempting to read?
See also the Gremlin request/response reference here.
My source system provides a SOAP URL (hosted on an IIS server) which we use to pull data, but lately the pull fails with nothing more than this message on my side:
"org.apache.axiom.om.OMException: SOAP message MUST NOT contain a Document Type Declaration(DTD)"
When the issue was debugged on the other side, we got the following error:
DEBUG httpclient.333.content [main] << " [0x9]IIS received the request; however, an internal error occurred during the processing of the request. The root cause of this error depends on which module handles the request and what was happening in the worker process when this error occurred."
The team that developed this service cannot provide any useful information either, such as where their app fails. A bizarre scenario also occurs every now and then, wherein a data pull succeeds on a test server but fails on the prod server, even though both point to the same SOAP URL.
Everything worked fine as long as they hosted it on Apache Tomcat; things worsened after they moved to IIS.
I want to know which settings to look at to resolve the issue.
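For what it's worth, Axiom raises this error when the response body starts with a Document Type Declaration, which is what an HTML error page from IIS would look like. A sketch for dumping the raw response to check this (the endpoint URL and SOAP envelope are placeholders):

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RawSoapDump {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and envelope; substitute the real SOAP URL and body.
        URL url = new URL("http://example.com/service.svc");
        String envelope = "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                + "<soapenv:Body/></soapenv:Envelope>";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes(StandardCharsets.UTF_8));
        }

        // Read the error stream too: IIS error pages come back on non-2xx codes.
        System.out.println("HTTP " + conn.getResponseCode());
        InputStream body = conn.getResponseCode() < 400
                ? conn.getInputStream() : conn.getErrorStream();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(body, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}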
So I'm running a Hadoop query that requires information from a field in an Elasticsearch index running on an Amazon EC2 instance. The issue is that I keep getting the "None of the configured nodes are available" error. Even more frustrating is that I had this working a couple of days ago, and then it quit in the middle of the query because of a lack of CPU ops. But my partner didn't know that, so his attempts to figure out why it lost the connection mid-query seem to have caused this problem. And he doesn't remember what he did.
I know this question has been asked before, but I'm certain my cluster name is right, and the query I'm running on ES shouldn't cause timeouts (and didn't when it was running before). Additionally, there shouldn't be firewall issues, because I am running the program directly on the EC2 instance. It's a pseudo-distributed single-node cluster using YARN. The EC2 instance has an associated Elastic IP (meaning its public IP stays the same) and is running Amazon's Ubuntu image.
Here's the Java code (identifying information removed):
public static String getAccountNumber(int fieldValue) {
    // Tried it without the Settings, but still no dice.
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", "elasticsearch")
            .build();
    TransportClient client = new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress(
                    "ec2-<ELASTIC_IP>.compute-1.amazonaws.com", 9300));

    // boolFilter() and termFilter() are static imports from
    // org.elasticsearch.index.query.FilterBuilders.
    FilterBuilder filter = boolFilter()
            .should(termFilter("objectName1.field", fieldValue))
            .should(termFilter("objectName2.field", fieldValue));

    SearchResponse response = client.prepareSearch("indexName")
            .setTypes("type")
            .setPostFilter(filter)
            .setSize(1000)
            .execute()
            .actionGet();

    // other logic
Please let me know if you need me to provide my core-site.xml, hdfs-site.xml, or whatever.
Solved it! In the "other logic" part, I had client.close(). Commenting that out resolved my issues.
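That fits the usual pattern: the TransportClient is meant to be long-lived, so closing it inside a per-request method kills the connection for every later call. A sketch of keeping one shared client instead (same ES 1.x API as above; the holder class itself is illustrative):

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public final class EsClientHolder {
    // One TransportClient per JVM; it is thread-safe and pools its connections.
    private static final TransportClient CLIENT = createClient();

    private static TransportClient createClient() {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "elasticsearch")
                .build();
        return new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(
                        "ec2-<ELASTIC_IP>.compute-1.amazonaws.com", 9300));
    }

    public static TransportClient client() {
        return CLIENT;
    }

    // Close only once, when the application shuts down.
    public static void shutdown() {
        CLIENT.close();
    }
}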
I'm using Vagrant, and I installed ES on it using the Debian package:
elasticsearch-1.1.1.deb
In my web app, I am using the jar:
org.elasticsearch:elasticsearch:1.1.1
I am creating my client like this:
val node = nodeBuilder.client(true).node
val client: Client = node.client
When I try to index a document:
val response = client.prepareIndex("articles", "article", article.id.toString).setSource(json).execute.actionGet
The error I get is:
[MasterNotDiscoveredException: waited for [1m]]
I can see my ES instance is working fine by going to:
http://localhost:9200
I ran some test queries from the README file earlier and they worked fine, but now for some reason this isn't working either:
http://localhost:9200/twitter/user/kimchy?pretty=true
I get the error:
{
"error" : "ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];]",
"status" : 503
}
My Vagrantfile has two ports open for Elasticsearch:
config.vm.network "forwarded_port", guest: 9200, host: 9200 # ES
config.vm.network "forwarded_port", guest: 9300, host: 9300 # ES
What seems to be the problem?
Note: my web application isn't using an elasticsearch.yml file because, from what I understand, it just connects to the default localhost:9200.
Normally you connect to ES from outside over HTTP (normally; other protocols are also available) and then talk REST/JSON. So your web app should use a Scala/Java ES client (see http://www.elasticsearch.org/guide/en/elasticsearch/client/community/current/clients.html) and connect via HTTP to the host running ES on port 9200. Port 9300 is only for inter-node communication (ES is a distributed, clustered system). But there is another way to talk to ES remotely: power up a node that joins the cluster, and then query through that node's internal client. But:
In your question above, you try to connect to ES through the internal Java client (internal transport), which starts a node and then tries to join the cluster. That fails because the master node could not be found, possibly due to networking issues. Try including an elasticsearch.yml on the classpath, or use REST as described above. There is also a third option, the TransportClient, sketched below; see http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/client.html#transport-client
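A minimal sketch of that third option, assuming the default cluster name "elasticsearch" and the forwarded port 9300 from your Vagrantfile:

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// Connects to a remote node without joining the cluster itself.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "elasticsearch") // must match the server's cluster name
        .build();
Client client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));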
See also http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-transport.html and http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html and http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-memcached.html
Since you are creating your client node with .client(true), that disables both data storage and master eligibility on the node, if I understand the docs correctly (the source is not very helpful either).
Note that any ES cluster needs at least one master node.
First, to clarify the config situation: your main elasticsearch.yml (see the reference config) lives under /etc/elasticsearch/. You can also put a second elasticsearch.yml in your src/main/resources folder, which will apply to the nodes you create in your app. I'd recommend doing this, as it's much clearer than using the mysterious nodeBuilder methods.
Can you show the response you get when you query http://localhost:9200/_nodes right after starting ES up?
Specifically, whether you have
"attributes": {
"master": "true"
},
set on one of the nodes. If so, then it looks like a networking problem, as your client node is unable to contact the master node. I actually had a similar issue when I was setting things up, and the solution was to set network.host: 127.0.0.1 in the app's elasticsearch.yml (wish I knew why).
Uncomment discovery.zen.ping.multicast.enabled: false in /etc/elasticsearch/elasticsearch.yml.
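Put together, the relevant elasticsearch.yml lines might look like this (the unicast host entry is an assumption for a single local node):

network.host: 127.0.0.1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1"]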