Everything using MongoDB must throw UnknownHostException - java

I'm working on a plugin for a Bukkit (Minecraft) server. I want to be able to write to my MongoDB database, but any method that creates a MongoClient must throw UnknownHostException, as well as everything it's nested in. For example: the listener class listens for a player login, which triggers the login-utilities class, which triggers the database class, and all of them need to throw the exception. The problem is that adding the exception to all of them creates this error (or maybe something else is causing it): server log
This is a portion of the database class, if it helps:
public static boolean checkForPlayer(String playername) throws UnknownHostException {
    BasicDBObject query = new BasicDBObject();
    query.put("username", playername);
    // create client
    MongoClient mongo = new MongoClient("some_address", 27017);
    // get database
    DB db = mongo.getDB("test");
    // get collection
    DBCollection table = db.getCollection("test");
    // create cursor
    DBCursor cursor = table.find(query);
    if (!cursor.hasNext()) {
        return false;
    }
    return true;
}
I'm not very good at Java, so the problem might be something silly :/
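(A minimal sketch of how the checked exception could be contained inside the database class, assuming the 2.x driver where the MongoClient constructor declares UnknownHostException: catch it there and rethrow unchecked, so the listener and utility classes need no throws clauses. It also closes the client, which the snippet above never does.)
public static boolean checkForPlayer(String playername) {
    try {
        MongoClient mongo = new MongoClient("some_address", 27017);
        try {
            BasicDBObject query = new BasicDBObject("username", playername);
            return mongo.getDB("test").getCollection("test").find(query).hasNext();
        } finally {
            mongo.close(); // always release the connection
        }
    } catch (UnknownHostException e) {
        // wrap the checked exception so callers don't need a throws clause
        throw new RuntimeException("Cannot resolve MongoDB host", e);
    }
}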

You have to add the MongoDB Java driver to the classpath: http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-java-driver/#getting-started-with-java-driver
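If you build with Maven, the dependency looks roughly like this (the legacy uber-jar artifact; the version is only an example, pick the one you develop against):
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>2.11.3</version>
</dependency>
For a Bukkit plugin the driver must also be available at runtime, e.g. by shading it into your plugin jar.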

Related

Implementing Spring + Apache Flink project with Postgres

I have a Spring Boot Gradle project using Apache Flink to process datastream signals. When a new signal comes through the datastream, I would like to look up (i.e. findById()) its details using an ID in a Postgres database table which is already created, in order to get additional information about the signal and enrich the data. I would like to avoid using Spring dependencies to perform the lookup (i.e. autowiring a repository) and want to stick with a Flink implementation for the lookup.
Where can I specify the Postgres connection config information such as port, database, URL, username, password, etc.? (For simplicity, assume the Postgres DB is local on my machine.) Is it as simple as adding the configuration to the application.properties file? If so, how can I write the query method to look up the record in the Postgres table when searching by a non-primary-key value?
Some online sources suggest using this skeleton code, but I am not sure how/if it fits my use case. (I have an EventEntity model created which contains all the params/columns from the table I'm looking up.)
Like so:
public class DatabaseMapper extends RichFlatMapFunction<String, EventEntity> {
    // Declare DB connection & query statements

    public void open(Configuration parameters) throws Exception {
        // Initialize DB connection
        // prepare query statements
    }

    @Override
    public void flatMap(String value, Collector<EventEntity> out) throws Exception {
    }
}
Your sample code is correct. You can put all your custom initialization and preparation code for PostgreSQL in the open() method, then use those pre-configured fields in your flatMap() function.
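A minimal sketch of that pattern with plain JDBC, assuming a hypothetical events table and an EventEntity(long id, String name) constructor; the URL, credentials and column names are placeholders to adapt:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class DatabaseMapper extends RichFlatMapFunction<String, EventEntity> {
    private transient Connection connection;
    private transient PreparedStatement lookup;

    @Override
    public void open(Configuration parameters) throws Exception {
        // open the connection once per task instance, not once per record
        connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
        // prepared statement for a non-primary-key lookup
        lookup = connection.prepareStatement(
                "SELECT id, name FROM events WHERE signal_id = ?");
    }

    @Override
    public void flatMap(String signalId, Collector<EventEntity> out) throws Exception {
        lookup.setString(1, signalId);
        try (ResultSet rs = lookup.executeQuery()) {
            if (rs.next()) {
                out.collect(new EventEntity(rs.getLong("id"), rs.getString("name")));
            }
        }
    }

    @Override
    public void close() throws Exception {
        if (lookup != null) lookup.close();
        if (connection != null) connection.close();
    }
}
Note that the connection settings live in the function (or are passed into its constructor), not in application.properties: the function is serialized and run on Flink task managers, where Spring's property injection does not apply.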
Here is one sample for Redis operations. I have used RichAsyncFunction here and I suggest you do the same, as it is recommended as best practice. Read here for more: https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/stream/operators/asyncio.html
You can pass configuration parameters to your constructor and use them in your initialization process:
public static class AsyncRedisOperations extends RichAsyncFunction<Object, Object> {
    private static final Logger logger = LoggerFactory.getLogger(AsyncRedisOperations.class);
    private transient JedisPool jedisPool;
    private final Configuration redisConf;

    public AsyncRedisOperations(Configuration redisConf) {
        this.redisConf = redisConf;
    }

    @Override
    public void open(Configuration parameters) {
        JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
        jedisPoolConfig.setMaxTotal(this.redisConf.getInteger("pool", 8));
        jedisPoolConfig.setMaxIdle(this.redisConf.getInteger("pool", 8));
        jedisPoolConfig.setMaxWaitMillis(this.redisConf.getInteger("maxWait", 0));
        JedisPool jedisPool = new JedisPool(jedisPoolConfig,
                this.redisConf.getString("host", "192.168.10.10"),
                this.redisConf.getInteger("port", 6379), 5000);
        try {
            this.jedisPool = jedisPool;
            logger.info("Redis connected: " + jedisPool.getResource().isConnected());
        } catch (Exception e) {
            logger.error("Exception while connecting Redis", e);
        }
    }

    @Override
    public void asyncInvoke(Object in, ResultFuture<Object> out) {
        try (Jedis jedis = this.jedisPool.getResource()) {
            // look up the incoming key and hand the result back to Flink
            String value = jedis.get(in.toString());
            logger.info("Redis value: " + value);
            out.complete(Collections.singleton(value));
        }
    }
}
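(For completeness, a sketch of wiring such a function into a pipeline; inputStream and redisConf are placeholders, and AsyncDataStream.unorderedWait is the standard Flink operator for async enrichment:)
DataStream<Object> enriched = AsyncDataStream.unorderedWait(
        inputStream,                          // upstream DataStream<Object>
        new AsyncRedisOperations(redisConf),
        1000, TimeUnit.MILLISECONDS,          // per-request timeout
        100);                                 // max in-flight async requests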

How to provision throughput at the database level in java code using azure-documentdb

I wish to create Azure Cosmos databases from Java code using com.microsoft.azure:azure-documentdb:2.4.1. I can only find the option to set offerThroughput, which is for collections created in the database.
Does anyone know how to do this?
Please use the code below:
public static void main(String[] args) throws DocumentClientException {
    DocumentClient client = new DocumentClient(
            YOUR_COSMOS_DB_ENDPOINT,
            YOUR_COSMOS_DB_MASTER_KEY,
            new ConnectionPolicy(),
            ConsistencyLevel.Session);

    RequestOptions requestOptions = new RequestOptions();
    requestOptions.setOfferThroughput(500);

    Database database = new Database();
    database.setId("testdb");
    client.createDatabase(database, requestOptions);
}
If you want to update RU settings, you could refer to my previous cases:
1. Reducing Provisioned Throughput for CosmosDB
2. Cosmos Db Throughput
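For reference, a rough sketch of how the database-level throughput could be changed later with the same SDK's Offer API; treat the query string and the offerThroughput content key as assumptions to verify against those two answers:
// read the database to obtain its resource id
Database db = client.readDatabase("/dbs/testdb", null).getResource();

// find the offer bound to that database and rewrite its throughput
Offer offer = client.queryOffers(
        String.format("SELECT * FROM o WHERE o.offerResourceId = '%s'", db.getResourceId()),
        null).getQueryIterable().toList().get(0);
offer.getContent().put("offerThroughput", 1000);
client.replaceOffer(offer);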

Java MongoDB connection pool

I am using Java with MongoDB. Here I am opening a MongoClient in each method; I only need to open it once for the whole class and close it once.
public class A {
    public String name() {
        MongoClient mongo = new MongoClient(host, port);
        DB db = mongo.getDB(database);
        DBCollection coll = db.getCollection(collection);
        BasicDBObject doc = new BasicDBObject("john", e.getName());
    }

    public String age() {
        MongoClient mongo = new MongoClient(host, port);
        DB db = mongo.getDB(database);
        DBCollection coll = db.getCollection(collection);
        BasicDBObject doc = new BasicDBObject("age", e.getAge());
    }
}
You can use the Singleton pattern to guarantee only one instance of the MongoClient class per application. Once you obtain the instance of MongoClient, you can perform your operations and don't need to explicitly call MongoClient.close(), as this object manages connection pooling automatically.
In your example, you can initialize the MongoClient in a static variable.
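A minimal sketch of that idea, reusing your host/port/database/collection values and assuming a 3.x driver (whose constructor no longer throws a checked UnknownHostException):
public final class MongoConnection {
    // one client, and therefore one connection pool, for the whole application
    private static final MongoClient CLIENT = new MongoClient(host, port);

    private MongoConnection() {}

    public static DBCollection getCollection() {
        return CLIENT.getDB(database).getCollection(collection);
    }
}
Your methods then just call MongoConnection.getCollection() instead of building a new MongoClient each time.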

ArangoDB java driver on executing AQL sometimes return NULL and other times the correct result

I am unable to wrap my head around this peculiar issue.
I am using ArangoDB 3.0.10 and arangodb-java-driver 3.0.4.
I am executing a very simple AQL fetch query (see code below). All my unit tests pass every time, and the problem never arises when debugging. The problem does not occur all the time (around half the time). It gets even stranger: the most frequent manifestation is a NullPointerException at
return cursor.getUniqueResult();
but I also once got a ConcurrentModificationException.
Questions:
1. Do I have to manage the database connection, e.g. close the driver connection after each use?
2. Am I doing something completely wrong with the ArangoDB query?
Any hint in the right direction is appreciated.
Error 1:
java.lang.NullPointerException
at org.xworx.sincapp.dao.UserDAO.get(UserDAO.java:41)
Error 2:
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)
at java.util.HashMap$EntryIterator.next(HashMap.java:1471)
at java.util.HashMap$EntryIterator.next(HashMap.java:1469)
at com.google.gson.internal.bind.MapTypeAdapterFactory$Adapter.write(MapTypeAdapterFactory.java:206)
at com.google.gson.internal.bind.MapTypeAdapterFactory$Adapter.write(MapTypeAdapterFactory.java:145)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:68)
at com.google.gson.internal.bind.MapTypeAdapterFactory$Adapter.write(MapTypeAdapterFactory.java:208)
at com.google.gson.internal.bind.MapTypeAdapterFactory$Adapter.write(MapTypeAdapterFactory.java:145)
at com.google.gson.Gson.toJson(Gson.java:593)
at com.google.gson.Gson.toJson(Gson.java:572)
at com.google.gson.Gson.toJson(Gson.java:527)
at com.google.gson.Gson.toJson(Gson.java:507)
at com.arangodb.entity.EntityFactory.toJsonString(EntityFactory.java:201)
at com.arangodb.entity.EntityFactory.toJsonString(EntityFactory.java:165)
at com.arangodb.impl.InternalCursorDriverImpl.getCursor(InternalCursorDriverImpl.java:94)
at com.arangodb.impl.InternalCursorDriverImpl.executeCursorEntityQuery(InternalCursorDriverImpl.java:79)
at com.arangodb.impl.InternalCursorDriverImpl.executeAqlQuery(InternalCursorDriverImpl.java:148)
at com.arangodb.ArangoDriver.executeAqlQuery(ArangoDriver.java:2158)
at org.xworx.sincapp.dao.UserDAO.get(UserDAO.java:41)
ArangoDBConnector
public abstract class ArangoDBConnector {
    protected static ArangoDriver driver;
    protected static ArangoConfigure configure;

    public ArangoDBConnector() {
        final ArangoConfigure configure = new ArangoConfigure();
        configure.loadProperties(ARANGODB_PROPERTIES);
        configure.init();
        final ArangoDriver driver = new ArangoDriver(configure);
        ArangoDBConnector.configure = configure;
        ArangoDBConnector.driver = driver;
    }
}
UserDAO
@Named
public class UserDAO extends ArangoDBConnector {
    private Map<String, Object> bindVar = new HashMap<>();

    public UserDAO() {}

    public User get(@NotNull String objectId) {
        bindVar.clear();
        bindVar.put("uuid", objectId);
        String fetchUserByObjectId = "FOR user IN User FILTER user.uuid == @uuid RETURN user";
        CursorResult<User> cursor = null;
        try {
            cursor = driver.executeAqlQuery(fetchUserByObjectId, bindVar, driver.getDefaultAqlQueryOptions(), User.class);
        } catch (ArangoException e) {
            new ArangoDaoException(e.getErrorMessage());
        }
        return cursor.getUniqueResult();
    }
}
As AntJavaDev said, you access bindVar from more than one thread at the same time: when one thread modifies bindVar while another reads it to build the AQL call, you get the ConcurrentModificationException.
The NullPointerException results from an AQL call with no result, e.g. when you clear bindVar and, directly after that, another thread executes the AQL with no content in bindVar.
To your questions:
1. No, you do not have to close the driver connection after each call.
2. Besides the shared bindVar, everything looks correct.
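A minimal sketch of the fix: keep the bind variables local to the call so nothing is shared between threads (assuming ArangoDaoException is an unchecked exception; note it also has to be thrown, not just constructed):
public User get(@NotNull String objectId) {
    // a local map per call: concurrent calls can no longer interfere
    Map<String, Object> bindVar = new HashMap<>();
    bindVar.put("uuid", objectId);
    String fetchUserByObjectId = "FOR user IN User FILTER user.uuid == @uuid RETURN user";
    try {
        CursorResult<User> cursor = driver.executeAqlQuery(
                fetchUserByObjectId, bindVar, driver.getDefaultAqlQueryOptions(), User.class);
        return cursor.getUniqueResult();
    } catch (ArangoException e) {
        throw new ArangoDaoException(e.getErrorMessage());
    }
}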

Openshift can't connect to mongoDB from java code, time out

I've got a MongoDB cartridge installed on OpenShift and I'm having trouble connecting to it from Java code. The IP address, port and credentials are taken from OpenShift's RockMongo cartridge. The following method invocation:
public Document insert(String audio, String username) {
    Document document = new Document();
    document.put("username", username);
    document.put("audio", audio);
    document.put("timestamp", new Date());
    collection.insertOne(document);
    return document;
}
and this mongo client configuration:
private static MongoClient build() throws UnknownHostException {
    if (mongoClient == null) {
        mongoClient = new MongoClient(
                new MongoClientURI("mongodb://admin:password@X.X.X.X:27017/dbName"));
    }
    return mongoClient;
}

public static MongoCollection<Document> getCollection(String collectionName) {
    try {
        build();
    } catch (UnknownHostException e) {
        // swallowed: mongoClient stays null if the host cannot be resolved
    }
    MongoDatabase db = mongoClient.getDatabase(dbName);
    MongoCollection<Document> collection = db.getCollection(collectionName);
    return collection;
}
results in INFO: No server chosen by PrimaryServerSelector from cluster description ClusterDescription, followed by the exception: Timed out after 30000 ms while waiting for a server that matches PrimaryServerSelector.
EDIT: I can't connect to the MongoDB service on OpenShift via the mongo terminal application either ("exception: connect failed"), so I think it's an OpenShift configuration issue. Port forwarding and the service itself are started.
I suppose you have not correctly configured the cluster (the message in the logs points to this problem). I'm not sure how the OpenShift cartridge works, but I recommend you check whether MongoDB is actually started: log in via an SSH client and use the mongo shell to check its status. Take a look at this question: Java MongoClient cannot connect to primary; I suppose it gives you some idea of where the problem is.
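While you are diagnosing, a small sketch that makes the driver fail fast instead of waiting 30 seconds, assuming the 3.x driver (the URI is a placeholder):
MongoClientOptions.Builder options = MongoClientOptions.builder()
        .serverSelectionTimeout(5000); // fail after 5 s instead of the default 30 s
MongoClient client = new MongoClient(
        new MongoClientURI("mongodb://admin:password@X.X.X.X:27017/dbName", options));
This does not fix the connectivity problem, but it shortens the feedback loop while you check the cartridge, port forwarding and credentials.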
