I'm trying to upload a document from a Lambda function, but I'm stuck: I keep getting the following error whenever the Lambda function starts:
com.mongodb.MongoSocketException: cluster0-whnfd.mongodb.net: No address associated with hostname
The error seems obvious, but I can connect using that same URL via MongoDB Compass. The Java class I'm using looks like this:
public class MongoStore {

    private final static String MONGO_ADDRESS = "mongodb+srv://<USERNAME>:<PASSWORD>@cluster0-whnfd.mongodb.net/test";

    private MongoCollection<Document> collection;

    public MongoStore() {
        final MongoClientURI uri = new MongoClientURI(MONGO_ADDRESS);
        final MongoClient mongoClient = new MongoClient(uri);
        final MongoDatabase database = mongoClient.getDatabase("test");
        this.collection = database.getCollection("test");
    }

    public void save(String payload) {
        Document document = new Document();
        document.append("message", payload);
        collection.insertOne(document);
    }
}
Have I just misconfigured my Java class, or is there something more tricky going on here?
I had the same problem with a freshly created MongoDB Atlas database when I started migrating my Python web application from Heroku.
I realised that the DNS name cluster0.hgmft.mongodb.net simply doesn't resolve as a regular hostname.
The magic happened when I installed the dnspython library (my app is written in Python): with it, the MongoDB client was able to perform the SRV lookup that mongodb+srv:// addresses require and connect to my database in Mongo Atlas.
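For the Java question above, a common workaround when SRV resolution is unavailable in the runtime environment (as it can be inside a Lambda VPC) is to use the standard, non-SRV connection string that Atlas also displays (the "Java driver 3.4 and earlier" option). A minimal sketch — the shard host names, replica set name, and credentials below are hypothetical placeholders, not values from the original question:

```java
// Sketch: building a standard (non-SRV) Atlas connection string, which
// avoids the DNS SRV lookup that "mongodb+srv://" requires.
// All host names, the replica set name, and credentials are placeholders.
public class NonSrvUriSketch {

    static String buildUri(String user, String pass) {
        return String.format(
            "mongodb://%s:%s@cluster0-shard-00-00-whnfd.mongodb.net:27017,"
                + "cluster0-shard-00-01-whnfd.mongodb.net:27017,"
                + "cluster0-shard-00-02-whnfd.mongodb.net:27017"
                + "/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin",
            user, pass);
    }

    public static void main(String[] args) {
        // The resulting string is passed to new MongoClientURI(...)
        // exactly as in the MongoStore class above.
        System.out.println(buildUri("<USERNAME>", "<PASSWORD>"));
    }
}
```

The string lists the replica set members explicitly, so the driver only needs ordinary A-record lookups.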
Has anyone managed to connect a java program to AWS DocumentDB where the java program is running outside of AWS and DocumentDB has tls enabled? Any examples or guidance provided would be greatly appreciated.
This is what I've done so far =>
I've been following AWS's developer guide, and I understand that to do this I need an SSH tunnel set up to a jump box (EC2 instance) and from there to the DB cluster. I have done this and connected from my laptop.
I have then created the required .jks file from AWS's rds-combined-ca-bundle.pem file and referenced it in a basic Java main class. From the Java main class I have referenced the cluster as localhost:27017, as this is where I've set up the SSH tunnel.
My test code is following the AWS example for Java and I get the following error when I run the program =>
Caused by: javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching localhost found.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class CertsTestMain {

    public static void main(String[] args) {
        String template = "mongodb://%s:%s@%s/test?ssl=true&replicaSet=rs0&readpreference=%s";
        String username = "dummy";
        String password = "dummy";
        String clusterEndpoint = "localhost:27017";
        String readPreference = "secondaryPreferred";
        String connectionString = String.format(template, username, password, clusterEndpoint, readPreference);

        String truststore = "C:/Users/eclipse-workspace/certs/certs/rds-truststore.jks";
        String truststorePassword = "test!";
        System.setProperty("javax.net.ssl.trustStore", truststore);
        System.setProperty("javax.net.ssl.trustStorePassword", truststorePassword);

        MongoClient mongoClient = MongoClients.create(connectionString);
        MongoDatabase testDB = mongoClient.getDatabase("test");
        MongoCollection<Document> bookingCollection = testDB.getCollection("booking");

        MongoCursor<Document> cursor = bookingCollection.find().iterator();
        try {
            while (cursor.hasNext()) {
                System.out.println(cursor.next().toJson());
            }
        } finally {
            cursor.close();
        }
    }
}
So, for me, to make this work I only had to alter the template to:
String template = "mongodb://%s:%s@%s/test?ssl=true&tlsAllowInvalidHostnames&readpreference=%s";
As long as you have created your .jks file correctly (I did this simply by using a Linux environment and running the script AWS provides for Java under Point 2 of https://docs.aws.amazon.com/documentdb/latest/developerguide/connect_programmatically.html), and you have a fully working SSH tunnel as described in https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html, then the above code will work.
I am unable to connect to Cloud SQL from inside a custom DoFn while running in Cloud Dataflow. The errors that show up in the log are:
Connecting to Cloud SQL instance [] via ssl socket.
[Docbuilder-worker-exception]: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed
to initialize pool: Could not create connection to database server.
The same code and config work fine when connecting to Cloud SQL from the App Engine handler.
I have explicitly given the Compute Engine service account (<project-number>-compute@developer.gserviceaccount.com) the Cloud SQL Client, Cloud SQL Viewer, and Editor roles.
Any help to troubleshoot this is greatly appreciated!
To connect to Cloud SQL from external applications, there are several methods you can follow. In the document How to connect to Cloud SQL from external applications [1] you can find the alternatives and the steps to achieve your goal.
[1] https://cloud.google.com/sql/docs/postgres/connect-external-app
I've also run into a lot of issues when trying to use connection pooling from Cloud Dataflow to Cloud SQL with a custom DoFn. I don't remember whether my error was the same as yours, but my solution was to create an @Setup method in the DoFn class, like this:
static class ProcessDatabaseEvent extends DoFn<String, String> {

    private static HikariDataSource pool;

    @Setup
    public void createConnectionPool() throws IOException {
        final Properties properties = new Properties();
        properties.load(Thread.currentThread().getContextClassLoader().getResourceAsStream("config.properties"));

        final String JDBC_URL = properties.getProperty("jdbc.url");
        final String JDBC_USER = properties.getProperty("jdbc.username");
        final String JDBC_PASS = properties.getProperty("jdbc.password");

        final HikariConfig config = new HikariConfig();
        config.setMinimumIdle(5);
        config.setMaximumPoolSize(50);
        config.setConnectionTimeout(10000);
        config.setIdleTimeout(600000);
        config.setMaxLifetime(1800000);
        config.setJdbcUrl(JDBC_URL);
        config.setUsername(JDBC_USER);
        config.setPassword(JDBC_PASS);

        pool = new HikariDataSource(config);
    }

    @ProcessElement
    public void processElement(final ProcessContext context) throws IOException, SQLException {
        //Your DoFn code here...
    }
}
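Once the pool exists, the per-element code can borrow a connection and return it automatically. A hedged sketch of that usage, written against only the standard javax.sql.DataSource interface so it applies to HikariDataSource as well — the events table and payload column are made-up names for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch: borrowing a pooled connection per element. `pool` stands for
// the HikariDataSource created in the setup method above; any DataSource
// behaves the same way. Table and column names are hypothetical.
public class PooledWriteSketch {

    static void writeEvent(DataSource pool, String payload) throws SQLException {
        // try-with-resources returns the connection to the pool (and closes
        // the statement) even if the insert throws.
        try (Connection conn = pool.getConnection();
             PreparedStatement stmt =
                 conn.prepareStatement("INSERT INTO events (payload) VALUES (?)")) {
            stmt.setString(1, payload);
            stmt.executeUpdate();
        }
    }
}
```

Closing connections promptly matters on Dataflow because many worker threads share the pool; holding a connection across elements exhausts it quickly.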
I am building a SearchRequest and using the Elasticsearch RestHighLevelClient to fetch documents from Elasticsearch, but while searching I get the error below.
Please find the stack trace below:
`18-Sep-2018 06:35:55.819 SEVERE [Thread-10] com.demo.searchengine.dao.DocumentSearch.getDocumentByName listener timeout after waiting for [30000] ms
java.io.IOException: listener timeout after waiting for [30000] ms
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:663)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:222)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:194)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:443)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:429)
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:368)
at com.demo.searchengine.dao.DocumentSearch.getDocumentByName(DocumentSearch.java:76)
at com.demo.searchengineservice.mqservice.Service.searchByDocuments(Service.java:43)
at com.demo.searchengineservice.mqservice.Consumer.consume(Consumer.java:27)
at com.demo.utils.Consumer$1$1.run(Consumer.java:89)
at java.lang.Thread.run(Unknown Source)`
Please find my code below:
public class SearchEngineClient {

    private static PropertiesFile propertiesFile = PropertiesFile.getInstance();
    private final static String elasticHost = propertiesFile.extractPropertiesFile().getProperty("ELASTIC_HOST");

    private static RestHighLevelClient instance = new RestHighLevelClient(RestClient.builder(
            new HttpHost(elasticHost, 9200, "http"),
            new HttpHost(elasticHost, 9201, "http")));

    public static RestHighLevelClient getInstance() {
        return instance;
    }
}
I am using the client instance below to get the response from ES:
searchResponse = SearchEngineClient.getInstance().search(contentSearchRequest);
It looks like a problem with your Elasticsearch server not being reachable from the outside. By default the ES server binds only to localhost, which means it is not reachable from other machines.
So on your remote ES server, find the elasticsearch.yml configuration file. In this file, find network.host and change it to your IP address, or to 0.0.0.0 to listen on all interfaces. After that change you need to restart ES.
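For example, in elasticsearch.yml (the address below is a placeholder; note that binding to a non-loopback address switches Elasticsearch into production mode with stricter bootstrap checks):

```yaml
# elasticsearch.yml
# Listen on all interfaces; alternatively use a specific address
# such as 192.168.1.10 to expose only one network.
network.host: 0.0.0.0
```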
I deployed my Java web application on Jelastic. I created a node for GlassFish and one for MongoDB on Jelastic, but I am not able to connect to the deployed database from my code.
I used the following to connect to the database:
Mongo mongo = new Mongo("http://arpitsolanki.jelastic.servint.net/", 27017);
but it throws a NumberFormatException:
java.lang.NumberFormatException: For input string: "//arpitsolanki.jelastic.servint.net/"
What is the right way of connecting to the database?
I have used MongoClient for the connection and it works fine for me:
// import com.mongodb.MongoClient;
// import com.mongodb.DBObject;

// Note: pass a bare hostname - no "http://" prefix or trailing slash.
MongoClient mongoClient = new MongoClient("URL", PORT);
DB db = mongoClient.getDB("dbName");
boolean auth = db.authenticate("username", password.toCharArray());
I'm using Solr in my web application as search engine. I use the DataImportHandler to automatically import data from my database into the search index. When the DataImportHandler adds new data, the data is successfully added to the index, but it isn't returned when I query the index using SolrJ: I have to restart my application server for the data to be found by SolrJ. Is there some kind of caching going on? I used SolrJ in embedded mode. Here's my SolrJ code:
private static final SolrServer solrServer = initSolrServer();

private static SolrServer initSolrServer() {
    try {
        CoreContainer.Initializer initializer = new CoreContainer.Initializer();
        CoreContainer coreContainer = initializer.initialize();
        EmbeddedSolrServer server = new EmbeddedSolrServer(coreContainer, "");
        return server;
    } catch (Exception ex) {
        logger.log(Level.SEVERE, "Error initializing SOLR server", ex);
        return null;
    }
}
Then to query I do the following:
SolrQuery query = new SolrQuery(keyword);
QueryResponse response = solrServer.query(query);
As you can see, my SolrServer is declared as static. Should I create a new EmbeddedSolrServer for each query instead? I'm afraid that will incur a big performance penalty.
The standard configuration of a Solr server doesn't enable auto-commit. If you have a solrconfig.xml file, look for the commented-out "autoCommit" tag. Alternatively, you can call server.commit(); after each document you add, although with a large stream of documents this could prove a big performance issue, as commit is a relatively heavy operation.
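For reference, the relevant section of solrconfig.xml looks roughly like this once uncommented — the thresholds below are example values to tune for your load, not defaults:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Automatically commit pending documents after 10000 queued docs
       or 15 seconds, whichever comes first. -->
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>15000</maxTime>
  </autoCommit>
</updateHandler>
```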
If you are using it in a web application, I'd advise deploying solr-x.x.war instead of using EmbeddedSolrServer. This will provide you with a rich HTTP interface for updating, administering and searching the index.