Elasticsearch: java.net.ConnectException when connecting via RestHighLevelClient - java

I am able to access Elasticsearch via http://127.0.0.1:9200, but when trying to connect from the same machine via RestHighLevelClient I get java.net.ConnectException: Connection refused.
try {
    final BulkResponse response = this.restHighLevelClient.bulk(bulkRequest);
}
catch (final IOException exn) {
    LOG.error("Bulk insert failed", exn);
}
The configuration class for the Elasticsearch client is shown below.
@Bean
public RestHighLevelClient restClient() {
    // HttpHost expects the port as an int, not a String
    return new RestHighLevelClient(RestClient.builder(new HttpHost("localhost", 9200, "http")));
}
I have retained the default settings in the elasticsearch.yml file and debugged to be sure that the host and port are correct.
Any ideas please?

I had the same issue but my problem was that I was connecting to the wrong host by mistake.
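If the host and port do look right, a quick connectivity check from the client side can confirm whether the node is reachable at all. A minimal sketch, assuming a 7.x RestHighLevelClient where ping(RequestOptions) is available (the class name EsConnectivityCheck is just for illustration):
import java.io.IOException;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class EsConnectivityCheck {

    public static void main(String[] args) throws IOException {
        // Same host/port as the browser test; the port is an int here
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("127.0.0.1", 9200, "http")))) {
            // ping() returns true if the cluster responds on this host/port
            boolean reachable = client.ping(RequestOptions.DEFAULT);
            System.out.println("Elasticsearch reachable: " + reachable);
        }
    }
}
If this prints false or throws ConnectException while the browser test succeeds, the client is most likely pointing at a different host, port, or scheme than the one that actually answers.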

Related

How to establish an FTPS data connection to a FileZilla Server 1.2.0

Using the Java FTPSClient of Apache commons-net with session resumption is a known problem. Session resumption is a security feature which an FTPS server can require for data connections. The Apache FTPSClient does not support it, and the JDK APIs make it hard to build a custom implementation. There are a couple of workarounds using reflection; see e.g. this answer and this commons-net bug entry.
I use such a workaround (see snippet below) in JDK 11 and tested it against a local FileZilla Server. It works with FileZilla Server 0.9.6, but it doesn't with FileZilla Server 1.2.0, which is the latest version at the time of writing. With that version, when trying to establish a data connection, the server responds with:
425 Unable to build data connection: TLS session of data connection not resumed.
As I said, FileZilla Server 0.9.6 is fine with how I do session resumption, and I made sure that the setting for requiring session resumption is activated.
In FileZilla Server 1.2.0, such settings are now set implicitly and cannot be changed via the GUI, maybe not at all. Are there some server settings that I can tweak for this to work? Or is it an issue with how I implemented the workaround? Does anyone experience similar issues?
This is the workaround I am using:
import java.io.IOException;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.net.InetAddress;
import java.net.Socket;
import java.util.Locale;

import javax.net.ssl.SSLSession;
import javax.net.ssl.SSLSessionContext;
import javax.net.ssl.SSLSocket;

import org.apache.commons.net.ftp.FTPSClient;

public class FTPSClientWithSessionResumption extends FTPSClient {

    static {
        System.setProperty("jdk.tls.useExtendedMasterSecret", "false");
        System.setProperty("jdk.tls.client.enableSessionTicketExtension", "false");
    }

    // Flag referenced by the override below; assumed to be enabled for this workaround
    private final boolean useSessionResumption = true;

    @Override
    protected void _connectAction_() throws IOException {
        super._connectAction_();
        execPBSZ(0);
        execPROT("P");
    }

    @Override
    protected void _prepareDataSocket_(Socket socket) throws IOException {
        if (useSessionResumption && socket instanceof SSLSocket) {
            // The control socket is an SSLSocket; reuse its TLS session for the data connection
            final SSLSession session = ((SSLSocket) _socket_).getSession();
            if (session.isValid()) {
                final SSLSessionContext context = session.getSessionContext();
                try {
                    final Field sessionHostPortCache = context.getClass().getDeclaredField("sessionHostPortCache");
                    sessionHostPortCache.setAccessible(true);
                    final Object cache = sessionHostPortCache.get(context);
                    final Method putMethod = cache.getClass().getDeclaredMethod("put", Object.class, Object.class);
                    putMethod.setAccessible(true);
                    Method getHostMethod;
                    try {
                        getHostMethod = socket.getClass().getMethod("getPeerHost");
                    }
                    catch (NoSuchMethodException e) {
                        // Running in IKVM
                        getHostMethod = socket.getClass().getDeclaredMethod("getHost");
                    }
                    getHostMethod.setAccessible(true);
                    Object peerHost = getHostMethod.invoke(socket);
                    InetAddress iAddr = socket.getInetAddress();
                    int port = socket.getPort();
                    // Register the control session under every key the JSSE cache might look up
                    putMethod.invoke(cache, String.format("%s:%s", peerHost, port).toLowerCase(Locale.ROOT), session);
                    putMethod.invoke(cache, String.format("%s:%s", iAddr.getHostName(), port).toLowerCase(Locale.ROOT), session);
                    putMethod.invoke(cache, String.format("%s:%s", iAddr.getHostAddress(), port).toLowerCase(Locale.ROOT), session);
                }
                catch (Exception e) {
                    throw new IOException(e);
                }
            }
            else {
                throw new IOException("Invalid SSL Session");
            }
        }
    }
}
The key under which the session is cached is built from getPeerHost, getInetAddress().getHostName(), and getInetAddress().getHostAddress(). I tried several combinations of including or omitting these three, but I always get the same result.
Edit:
Here is a screenshot of the server logs of the full session:
As stated in this Stack Overflow post, it is possible to tell the JVM that only TLS 1.2 should be used.
Here is the link to the original answer which worked for me: command for java to use TLS1.2 only
You have to add a command line parameter when starting the JVM; in this case it is: java -Djdk.tls.client.protocols=TLSv1.2 -jar ... <rest of command line here>
This simple parameter worked for me; now I can connect and transfer data from an FTP server which runs FileZilla FTP Server 1.3.0.
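If changing the launch command is not an option, the same restriction can be applied from code, in the same spirit as the jdk.tls properties set in the static initializer above. A minimal sketch (the class name FtpsClientLauncher is illustrative) that sets the standard JDK property programmatically before the first TLS handshake:
public class FtpsClientLauncher {

    public static void main(String[] args) {
        // Equivalent to -Djdk.tls.client.protocols=TLSv1.2 on the command line;
        // must be set before the first TLS socket is created in the JVM
        System.setProperty("jdk.tls.client.protocols", "TLSv1.2");

        // ... create and connect the FTPS client as usual ...
    }
}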

Elasticsearch HighLevelRestClient SearchRequest timeout issue

I am forming a SearchRequest and using the Elasticsearch RestHighLevelClient to fetch documents from Elasticsearch. But while searching documents in ES I am getting the below error.
Please find the stack trace below :
18-Sep-2018 06:35:55.819 SEVERE [Thread-10] com.demo.searchengine.dao.DocumentSearch.getDocumentByName listener timeout after waiting for [30000] ms
java.io.IOException: listener timeout after waiting for [30000] ms
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:663)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:222)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:194)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:443)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:429)
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:368)
at com.demo.searchengine.dao.DocumentSearch.getDocumentByName(DocumentSearch.java:76)
at com.demo.searchengineservice.mqservice.Service.searchByDocuments(Service.java:43)
at com.demo.searchengineservice.mqservice.Consumer.consume(Consumer.java:27)
at com.demo.utils.Consumer$1$1.run(Consumer.java:89)
at java.lang.Thread.run(Unknown Source)
Please find my code below :
public class SearchEngineClient {

    private static PropertiesFile propertiesFile = PropertiesFile.getInstance();
    private final static String elasticHost = propertiesFile.extractPropertiesFile().getProperty("ELASTIC_HOST");

    private static RestHighLevelClient instance = new RestHighLevelClient(RestClient.builder(
            new HttpHost(elasticHost, 9200, "http"),
            new HttpHost(elasticHost, 9201, "http")));

    public static RestHighLevelClient getInstance() {
        return instance;
    }
}
I am using the client instance below to get the response from ES.
searchResponse = SearchEngineClient.getInstance().search(contentSearchRequest);
It looks like a problem with your Elasticsearch server not being reachable from the outside. By default the ES server binds only to localhost, which means it is not reachable from other machines.
So on your remote ES server, find the elasticsearch.yml configuration file. In this file, find network.host and change it to your IP address, or to 0.0.0.0 to listen on all interfaces. After that change you need to restart ES.
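Separately, if the node turns out to be reachable but simply slow, the "listener timeout after waiting for [30000] ms" ceiling can be raised on the client side. A minimal sketch against the 6.x builder API (the exact methods changed in later versions, so treat this as an assumption about that client line; the helper class name is chosen here for illustration, and elasticHost is the value from the question's code):
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class SearchEngineClientWithTimeouts {

    public static RestHighLevelClient build(String elasticHost) {
        return new RestHighLevelClient(
                RestClient.builder(new HttpHost(elasticHost, 9200, "http"))
                        .setRequestConfigCallback(requestConfig -> requestConfig
                                .setConnectTimeout(5_000)     // TCP connect
                                .setSocketTimeout(60_000))    // waiting for the response
                        .setMaxRetryTimeoutMillis(60_000));   // controls the "listener timeout" in 6.x
    }
}
This only masks the symptom, though; if the searches never complete, fixing reachability or query performance is the real remedy.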

java.net.ConnectException: Connection timed out: connect in Eclipse

I am trying to consume the public web service below using Eclipse.
http://www.webservicex.com/globalweather.asmx?wsdl
When I execute the Java client it gives the error:
java.net.ConnectException: Connection timed out: connect
Below is the simple client program:
public class ClientTest1
{
    public static void main(String[] args)
    {
        GlobalWeatherSoapProxy obj1 = new GlobalWeatherSoapProxy();
        try
        {
            System.out.println(obj1.getCitiesByCountry("Japan"));
        }
        catch (Exception e1)
        {
            System.out.println(e1.getMessage());
        }
    }
}
However, strangely, this works fine when consumed through SoapUI. Hence I assume this has something to do with the Eclipse configuration.
Thank you in advance for any help.
Eclipse has nothing to do with it. Your code is executed by the JVM, even if your development environment is Eclipse. A connection timeout means that your client is not able to connect to the endpoint.
You have auto-generated the client proxy in some way, resulting in GlobalWeatherSoapProxy. This class obtains the reference to the endpoint by loading the WSDL. Alternatively, the URL can be provided in code. Review the content of that class to see how the endpoint URL is loaded.
You should see something like (check this full example)
URL url = new URL("http://localhost:9999/ws/hello?wsdl");
QName qname = new QName("http://ws.mkyong.com/", "HelloWorldImplService");
Service service = Service.create(url, qname);
HelloWorld hello = service.getPort(HelloWorld.class);
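If the client happens to be a JAX-WS proxy rather than an Axis-style *Proxy class, the endpoint address can also be overridden at runtime on the port object. A small sketch (the helper class name is illustrative; pass in the port returned by the generated Service class):
import javax.xml.ws.BindingProvider;

public final class EndpointOverride {

    // Point an already-created JAX-WS port proxy at an explicit endpoint URL
    public static void setEndpoint(Object port, String url) {
        ((BindingProvider) port)
                .getRequestContext()
                .put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, url);
    }
}
This makes it easy to confirm whether the failure is caused by a stale address embedded in the WSDL or by the network path itself.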

Check MongoDB server is running and credentials are valid in Java

I am programming a UI where a user should be able to put in the URL and port to check whether a MongoDB server is running. Furthermore, they should be able to provide credentials when necessary.
If the server is not running or the credentials are wrong, I want to provide a message for each case. Similar questions have been answered here:
Check MongoDB authentication with Java 3.0 driver
how to check from a driver, if mongoDB server is running
Unfortunately they use older versions of the Java driver for Mongo. I'm using the 3.2+ version of the MongoDB Java driver, where e.g. getDB() is deprecated.
My "solution" for the problem looks somewhat like this:
try {
    String database = "test";
    MongoClient client = null;
    if (StringUtils.isNotBlank(username) && StringUtils.isNotBlank(password)) {
        MongoCredential credentials = MongoCredential.createCredential(username, database, password.toCharArray());
        client = new MongoClient(new ServerAddress(url, Integer.parseInt(port)), Arrays.asList(credentials));
    }
    else {
        client = new MongoClient(url, Integer.parseInt(port));
    }
    MongoDatabase db = client.getDatabase(database);
    db.listCollectionNames().first();
    client.close();
    return true;
}
catch (MongoCommandException | MongoSecurityException e) {
    // program does not get in here when credentials are wrong,
    // only when no credentials are provided, but necessary
}
catch (MongoSocketOpenException | MongoTimeoutException e) {
    // only get in here after db.listCollectionNames().first() caused a timeout
}
How can I manage to:
Find out when the MongoDB server is not running?
Find out whether the credentials are correct, when necessary?
Edit:
When the credentials are wrong (username and/or password), the method catches only the MongoTimeoutException. It's the same when the wrong URL, port, or database is provided. To be clear, there are other exceptions printed out, but not caught. The only difference is that when providing no password and no username, even though the database requires them, the MongoCommandException is caught.
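One way to try to separate the two failure modes is to shorten the driver's server selection timeout and issue an explicit command, then inspect which exception surfaces. A minimal sketch against the 3.x driver (the timeout value, the ping command, and the message inspection are assumptions, not a verified fix for the wrapped-exception behaviour described above):
import java.util.Collections;

import org.bson.Document;

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.MongoCredential;
import com.mongodb.MongoSecurityException;
import com.mongodb.MongoTimeoutException;
import com.mongodb.ServerAddress;

public class MongoCheck {

    // Returns "OK", "SERVER_DOWN" or "BAD_CREDENTIALS" for the given settings
    public static String check(String url, int port, String username, String password, String database) {
        MongoClientOptions options = MongoClientOptions.builder()
                .serverSelectionTimeout(3_000) // fail fast instead of the default 30 s
                .build();
        MongoCredential credential = MongoCredential.createCredential(username, database, password.toCharArray());
        MongoClient client = new MongoClient(new ServerAddress(url, port),
                Collections.singletonList(credential), options);
        try {
            // Forces a round trip that requires successful authentication
            client.getDatabase(database).runCommand(new Document("ping", 1));
            return "OK";
        } catch (MongoSecurityException e) {
            return "BAD_CREDENTIALS"; // thrown directly on some driver paths
        } catch (MongoTimeoutException e) {
            // The 3.x driver often wraps the auth error in the timeout message,
            // so inspect it before assuming the server is down
            String msg = String.valueOf(e.getMessage());
            return msg.contains("MongoSecurityException") || msg.contains("Authentication failed")
                    ? "BAD_CREDENTIALS"
                    : "SERVER_DOWN";
        } finally {
            client.close();
        }
    }
}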

Communicating with AMQP 1.0 broker over SSL using Qpid

I am using ActiveMQ 5.8.0, which supports AMQP 1.0, as a queue broker. I am trying to communicate with it from a Java client using the Qpid AMQP 1.0 JMS client library, but do not see a method of specifying keystore and truststore information.
I have successfully configured a client by passing in the SSL credentials via the Java VM options (e.g. -Djavax.net.ssl.keyStore), however this is not an acceptable method for my final solution... I need to be able to specify this information from within the code.
I am currently using the createFromURL method to generate the connection from a URL that includes SSL parameters as defined here, but the keystore information (and potentially failover params) do not appear to be parsed from the URL.
String connectionUrl = "amqps://localhost/?brokerlist='tcp://localhost:5671?ssl='true'&key_store='C:/apache-activemq-5.8.0/conf/client.ks'&key_store_password='password'&trust_store='C:/apache-activemq-5.8.0/conf/client.ts'&trust_store_password='password'";
ConnectionFactoryImpl connectionFactory = ConnectionFactoryImpl.createFromURL(connectionUrl);
Does anyone know a better way of providing the security information into the connection?
Update:
Right, so after doing some digging through the API, I have identified that the library uses the default SSLSocketFactory.
See: org.apache.qpid.amqp_1_0.client.Connection
final Socket s;
if (ssl)
{
    s = SSLSocketFactory.getDefault().createSocket(address, port);
}
Therefore, there seems to be no way of specifying this information other than using JVM options to set the default values... at least in the current version of the Qpid client library.
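Given that finding, one stopgap (a sketch only, and effectively the same mechanism as the -D flags, just set from code) is to populate the standard JSSE system properties before the first connection is opened, so that SSLSocketFactory.getDefault() picks up the desired stores:
public final class QpidSslDefaults {

    // Must run before the first SSL socket is created anywhere in the JVM
    public static void configure(String keyStore, String keyStorePassword,
                                 String trustStore, String trustStorePassword) {
        System.setProperty("javax.net.ssl.keyStore", keyStore);
        System.setProperty("javax.net.ssl.keyStorePassword", keyStorePassword);
        System.setProperty("javax.net.ssl.trustStore", trustStore);
        System.setProperty("javax.net.ssl.trustStorePassword", trustStorePassword);
    }
}
This still configures process-wide defaults rather than per-connection settings, so it may not be acceptable either if different connections need different stores.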
The connection URL parameters for the Qpid JMS AMQP 1.0 client are a little different from the parameters for the previous AMQP versions.
Here is an example for a connection URL that works for the 1.0 client:
amqp://myhost:myport?ssl=true&ssl-cert-alias=myalias&clientid=myclientid&remote-host=default&sync-publish=false&trust-store=C:/trusstore.ts&trust-store-password=mytrustkeypass&key-store=C:/keystore.ks&key-store-password=mykeypass
see also this link
Is the URL the right place to put the SSL parameters?
Should the ConnectionFactory not be getting a javax.net.ssl.SSLContext and then creating the connection?
(I'm not familiar with the particulars of the ActiveMQ API.)
For version 0.9.0 of Qpid, which supports AMQP 1.0, the client configuration page at Qpid can also help with doing this programmatically.
I've also provided sample code of a successful program (NOTE: config is a class I created that stores all my configuration values):
String ampqProtocol = "amqp";
List<String> queryVariables = new ArrayList<String>();
if (config.isUseSSL()) {
    queryVariables.add("transport.keyStoreLocation=" + config.getKeyStorePath());
    queryVariables.add("transport.keyStorePassword=" + config.getKeyStorePassword());
    queryVariables.add("transport.trustStoreLocation=" + config.getTrustStorePath());
    queryVariables.add("transport.trustStorePassword=" + config.getTrustStorePassword());
    queryVariables.add("transport.keyAlias=" + config.getKeyStoreAlias());
    queryVariables.add("transport.contextProtocol=" + config.getSslProtocol());
    queryVariables.add("transport.verifyHost=" + !config.isDontValidateSSLHostname());
    ampqProtocol = "amqps";
}
String connectionString = ampqProtocol + "://" + config.getAddress() + ":" + config.getPort();
if (!queryVariables.isEmpty()) {
    try {
        connectionString += "?" + URLEncoder.encode(StringUtils.join(queryVariables, "&"), StandardCharsets.UTF_8.name());
    } catch (UnsupportedEncodingException e) {
        e.printStackTrace();
    }
}
Hashtable<Object, Object> env = new Hashtable<Object, Object>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.qpid.jms.jndi.JmsInitialContextFactory");
env.put("connectionfactory.myFactoryLookup", connectionString);
Context context = null;
ConnectionFactory connectionFactory = null;
try {
    context = new InitialContext(env);
    connectionFactory = (ConnectionFactory) context.lookup("myFactoryLookup");
} catch (NamingException e) {
    e.printStackTrace();
}
Connection connection = null;
try {
    connection = connectionFactory.createConnection();
    connection.start();
} catch (JMSException e) {
    e.printStackTrace();
}
