Connection to MongoDB using Morphia (Java)

I am developing a Java RESTful web application and am planning to use MongoDB with Morphia as the ODM. As I am new to MongoDB, I need a few suggestions.
From what I understand, the best way to handle DB connections is to use a connection pool, which MongoClient takes care of internally.
Morphia morphia = new Morphia();
ServerAddress addr = new ServerAddress("127.0.0.1", 27017);
String databaseName = "test";
MongoClient mongoClient = new MongoClient(addr);
Datastore datastore = morphia.createDatastore(mongoClient, databaseName);
So I need to reuse the above datastore and not create a new instance on every request, as that would waste resources and hurt performance. Should I implement the above as a singleton class? Can someone help me through this?
Also, can someone explain how to configure the connections, such as max connections per host and connection timeout, in Morphia using MongoClientOptions?
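Something like the following is what I had in mind for the options, though I am not sure whether these values make sense:
// Rough sketch only; the values here are placeholders, not recommendations.
MongoClientOptions options = MongoClientOptions.builder()
        .connectionsPerHost(100)   // max connections per host
        .connectTimeout(15000)     // connection timeout in ms
        .build();
MongoClient mongoClient = new MongoClient(new ServerAddress("127.0.0.1", 27017), options);
Datastore datastore = new Morphia().createDatastore(mongoClient, "test");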

Check out the config in this demo: github.com/xeraa/morphia-demo
/**
 * MongoDB providing the database connection for main.
 */
public class MongoDB {

    public static final String DB_HOST = "127.0.0.1";
    public static final int DB_PORT = 27017;
    public static final String DB_NAME = "morphia_demo";

    private static final Logger LOG = Logger.getLogger(MongoDB.class.getName());
    private static final MongoDB INSTANCE = new MongoDB();

    private final Datastore datastore;

    private MongoDB() {
        MongoClientOptions mongoOptions = MongoClientOptions.builder()
                .socketTimeout(60000) // Wait 1m for a query to finish, https://jira.mongodb.org/browse/JAVA-1076
                .connectTimeout(15000) // Try the initial connection for 15s, http://blog.mongolab.com/2013/10/do-you-want-a-timeout/
                .maxConnectionIdleTime(600000) // Keep idle connections for 10m, so we discard failed connections quickly
                .readPreference(ReadPreference.primaryPreferred()) // Read from the primary, if not available use a secondary
                .build();

        MongoClient mongoClient = new MongoClient(new ServerAddress(DB_HOST, DB_PORT), mongoOptions);
        mongoClient.setWriteConcern(WriteConcern.SAFE);

        datastore = new Morphia().mapPackage(BaseEntity.class.getPackage().getName())
                .createDatastore(mongoClient, DB_NAME);
        datastore.ensureIndexes();
        datastore.ensureCaps();
        LOG.info("Connection to database '" + DB_HOST + ":" + DB_PORT + "/" + DB_NAME + "' initialized");
    }

    public static MongoDB instance() {
        return INSTANCE;
    }

    // Creating the mongo connection is expensive - (re)use a singleton for performance reasons.
    // Both the underlying Java driver and Datastore are thread safe.
    public Datastore getDatabase() {
        return datastore;
    }
}
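Using it is then just a matter of asking the singleton for the shared Datastore wherever one is needed, for example (Employee here is a hypothetical mapped entity, not part of the demo):
// Reuse the shared Datastore instead of creating a new one per request.
Datastore datastore = MongoDB.instance().getDatabase();
datastore.save(new Employee("John Doe"));                         // hypothetical mapped entity
List<Employee> employees = datastore.createQuery(Employee.class).asList();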

You can do it like this:
private MongoClient mongoClient = null;
private String mongoUrl = "xxxx"; // You can also use Spring injection: @Value(value = "${db.mongo.url}")
private static final Morphia MORPHIA = new Morphia();

public Datastore getMORPHIADB(String dbName) {
    if (mongoClient == null) {
        // initialize the client lazily
        init();
    }
    return MORPHIA.createDatastore(mongoClient, dbName);
}

@PostConstruct
public void init() {
    try {
        // We leave the other parameters at their defaults; the connection pool is configured
        // through MongoClientOptions.Builder
        MongoClientOptions.Builder options = new MongoClientOptions.Builder()
                .connectionsPerHost(30); // connection pool size (defaults to 100; older versions used 10)
        MongoClientURI mongoClientURI = new MongoClientURI(mongoUrl, options);
        mongoClient = new MongoClient(mongoClientURI);
    } catch (MongoClientException e) {
        LOGGER.error("Failed to create MongoClient");
    }
}

@PreDestroy
public synchronized void closeConnection() {
    if (mongoClient != null) {
        mongoClient.close();
    }
}
When you need to use it, you can do this:
private Datastore datastore = mongoConnService.getMORPHIADB("xxx");
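Note that getMORPHIADB above creates a new Datastore on every call. If it is called frequently or for several databases, one possible variation (a sketch, not part of the original answer) is to cache one Datastore per database name:
// Hypothetical variation: reuse one Datastore per database name.
private final Map<String, Datastore> datastores = new ConcurrentHashMap<>();

public Datastore getMORPHIADB(String dbName) {
    if (mongoClient == null) {
        init();
    }
    return datastores.computeIfAbsent(dbName, name -> MORPHIA.createDatastore(mongoClient, name));
}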

Related

SMBJ and DFS and "Nested Session"

I have a project where I am given an ID, and then using that ID I look up file paths and process them. These files are on various mounted drives, so I am using the SMBJ Java library to access them.
The problem I am having is that some (most) of the files are on a DFS mount point. Now, this in and of itself is not a problem per se, but the SMBJ library appears to create nested sessions for each distinct DFS location. So even though I am closing the actual file after I am done reading it, the DiskSession object holds onto all these nested sessions, and eventually, either through the DFS config settings or through these libraries, I hit some point where it just blows up and stops allowing more sessions to be created.
I am processing hundreds of thousands of records, and the "crash" appears to happen somewhere around 500 or so records (sessions) processed. I do not see anything obvious in the code for explicitly closing these nested sessions; in fact, I see no external access to them at all from the DiskShare object.
Is there some sort of setting I am missing that limits how many sessions are held onto? Other than managing some sort of counter myself and closing and reopening sessions/connections, I am at a loss as to how to handle this.
Does anyone know what I am missing here?
Code below:
public class Smb {
private static SMBClient client;
private static String[] DFSMounts = {"DFS1","dfs1"};
private static final Logger Log = LoggerFactory.getLogger(Smb.class);
private static HashMap<String,DiskShare> shares = new HashMap<>();
private static HashMap<String,Connection> connections = new HashMap<>();
private static HashMap<Connection,Session> sessions = new HashMap<>();
private synchronized static SMBClient getClient(){
if (client == null){
SmbConfig cfg = SmbConfig.builder().withDfsEnabled(true).build();
client = new SMBClient(cfg);
}
return client;
}
private synchronized static Connection getConnection(String realDomainName) throws IOException{
Log.info("DOMAIN NAME "+realDomainName);
Connection connection = (connections.get(realDomainName) == null) ? client.connect(realDomainName) : connections.get(realDomainName);
if(!connection.isConnected()) {
connection.close();
sessions.remove(connection);
connection = client.connect(realDomainName);
}
// connection = client.connect(realDomainName);
connections.put(realDomainName,connection);
return connection;
}
private synchronized static Session getSession(Connection connection,SMBClient client){
Session session = sessions.get(connection);
if(session==null) {
PropertiesCache props = PropertiesCache.getInstance();
String sambaUsername = props.getProperty("smb.user");
String sambaPass = props.getProperty("smb.password");
String sambaDomain = props.getProperty("smb.domain");
Log.info("CLIENT " + client);
session = (sessions.get(connection) != null) ? sessions.get(connection) : connection.authenticate(new AuthenticationContext(sambaUsername, sambaPass.toCharArray(), sambaDomain));
sessions.put(connection, session);
}
return session;
}
@SuppressWarnings("UnusedReturnValue")
public synchronized static DiskShare getShare(String domainName, String shareName) throws SmbException
{
DiskShare share = shares.get(domainName+"/"+shareName);
if((share!=null)&&(!share.isConnected())) share=null;
if(share == null){
try {
PropertiesCache props = PropertiesCache.getInstance();
String sambaUsername = props.getProperty("smb.user");
String sambaPass = props.getProperty("smb.password");
String sambaDomain = props.getProperty("smb.domain");
String dfsIP = props.getProperty("smb.sambaIP");
SMBClient client = getClient();
String realDomainName = (Arrays.stream(DFSMounts).anyMatch(domainName::equals)) ? dfsIP: domainName;
Connection connection = getConnection(realDomainName);
Session session = getSession(connection,client);
share = (DiskShare) session.connectShare(shareName);
shares.put(domainName+"/"+shareName,share);
}
catch (Exception e){
Log.info("EXCEPTION E "+e);
Log.info("EX "+e.getMessage());
throw new SmbException();
}
}
return(share);
}
public static String fixFilename(String filename){
String[] parts = filename.split("\\\\");
ArrayList<String> partsList = new ArrayList<>(Arrays.asList(parts));
partsList.remove(0);
partsList.remove(0);
partsList.remove(0);
partsList.remove(0);
return String.join("/",partsList);
}
public static File open(String filename) throws SmbException {
String[] parts = filename.split("\\\\");
String domainName = parts[2];
String shareName = parts[3];
DiskShare share = getShare(domainName,shareName);
Set<SMB2ShareAccess> s = new HashSet<>();
s.add(SMB2ShareAccess.ALL.iterator().next());
filename = fixFilename(filename);
return(share.openFile(filename, EnumSet.of(AccessMask.GENERIC_READ), null, s, SMB2CreateDisposition.FILE_OPEN, null));
}
}
And here is how open is being used (to show that the file is closed after use):
String filename = documents.get(0).getUNCPath();
try (File f = Smb.open(filename)){
    // ... process the file ...
f.closeSilently();
}
And:
while(i.hasNext()){
String filename = (String)i.next();
Log.info("FILENAME "+filename);
try(File f = Smb.open(filename)){
    // ... process the file here ...
}
}
I have created a PR for SMBJ which changes this: it will reuse the nested session for the same host. I have successfully used it myself to avoid the exact same problem you are having. https://github.com/hierynomus/smbj/pull/489
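Until that change is available in a release, one possible stop-gap (a sketch against the Smb helper above, not part of the PR) is to periodically tear down everything the helper caches, e.g. every few hundred records, so the accumulated nested sessions get released:
// Close and forget all cached shares, sessions, and connections so they are rebuilt on demand.
public synchronized static void reset() {
    for (DiskShare share : shares.values()) {
        try { share.close(); } catch (Exception e) { Log.info("Error closing share: " + e); }
    }
    for (Session session : sessions.values()) {
        try { session.close(); } catch (Exception e) { Log.info("Error closing session: " + e); }
    }
    for (Connection connection : connections.values()) {
        try { connection.close(); } catch (Exception e) { Log.info("Error closing connection: " + e); }
    }
    shares.clear();
    sessions.clear();
    connections.clear();
}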

Mongo Connection is created multiple times in RESTful API and never released

I have written a RESTful API using Apache Jersey. I am using MongoDB as my backend, and Morphia (v1.3.4) to map and persist POJOs to the database. I tried to follow "1 application, 1 connection" in my API, as recommended everywhere, but I am not sure I succeeded. I run my API in Tomcat 8. I also ran mongostat to see the connection details. At start, mongostat showed 1 connection to the MongoDB server. I tested my API using Postman and it was working fine. I then created a load test in SoapUI where I simulated 100 users per second, and saw the update in mongostat: there were 103 connections. Here is the gif which shows this behaviour.
I am not sure why there are so many connections. The interesting fact is that the number of Mongo connections is directly proportional to the number of users I simulate in SoapUI. Why is that? I found other similar questions, but I think I have already implemented their suggestions:
Mongo connection leak with morphia
Spring data mongodb not closing mongodb connections
My code looks like this.
DatabaseConnection.java
// Some imports
public class DatabaseConnection {
private static volatile MongoClient instance;
private static String cloudhost="localhost";
private DatabaseConnection() { }
public synchronized static MongoClient getMongoClient() {
if (instance == null ) {
synchronized (DatabaseConnection.class) {
if (instance == null) {
ServerAddress addr = new ServerAddress(cloudhost, 27017);
List<MongoCredential> credentialsList = new ArrayList<MongoCredential>();
MongoCredential credentia = MongoCredential.createCredential(
"test", "test", "test".toCharArray());
credentialsList.add(credentia);
instance = new MongoClient(addr, credentialsList);
}
}
}
return instance;
}
}
PourService.java
@Secured
@Path("pours")
public class PourService {
final static Logger logger = Logger.getLogger(Pour.class);
private static final int POUR_SIZE = 30;
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createPour(String request)
{
WebApiResponse response = new WebApiResponse();
Gson gson = new GsonBuilder().setDateFormat("dd/MM/yyyy HH:mm:ss").create();
String message = "Pour was not created.";
HashMap<String, Object> data = null;
try
{
Pour pour = gson.fromJson(request, Pour.class);
// Storing the pour to
PourRepository pourRepository = new PourRepository();
String id = pourRepository.createPour(pour);
data = new HashMap<String, Object>();
if ("" != id && null != id)
{
data.put("id", id);
message = "Pour was created successfully.";
logger.debug(message);
return response.build(true, message, data, 200);
}
logger.debug(message);
return response.build(false, message, data, 500);
}
catch (Exception e)
{
message = "Error while creating Pour.";
logger.error(message, e);
return response.build(false, message, new Object(),500);
}
}
}
PourDao.java
public class PourDao extends BasicDAO<Pour, String>{
public PourDao(Class<Pour> entityClass, Datastore ds) {
super(entityClass, ds);
}
}
PourRepository.java
public class PourRepository {
private PourDao pourDao;
final static Logger logger = Logger.getLogger(PourRepository.class);
public PourRepository ()
{
try
{
MongoClient mongoClient = DatabaseConnection.getMongoClient();
Datastore ds = new Morphia().map(Pour.class)
.createDatastore(mongoClient, "tilt45");
pourDao = new PourDao(Pour.class,ds);
}
catch (Exception e)
{
logger.error("Error while creating PourDao", e);
}
}
public String createPour (Pour pour)
{
try
{
return pourDao.save(pour).getId().toString();
}
catch (Exception e)
{
logger.error("Error while creating Pour.", e);
return null;
}
}
}
When I work with Mongo+Morphia I get better results using a Factory pattern for the Datastore and not for the MongoClient, for instance, check the following class:
public class DatastoreFactory {
    private final Datastore datastore;

    public DatastoreFactory(String dbHost, int dbPort, String dbName) {
        final Morphia morphia = new Morphia();
        MongoClientOptions.Builder options = MongoClientOptions.builder().socketKeepAlive(true);
        morphia.getMapper().getOptions().setStoreEmpties(true);
        final Datastore store = morphia.createDatastore(new MongoClient(new ServerAddress(dbHost, dbPort), options.build()), dbName);
        store.ensureIndexes();
        this.datastore = store;
    }

    public Datastore getDatastore() {
        return datastore;
    }
}
With that approach, every time you need a datastore you can use the one provided by the factory. Of course, this can be implemented better if you use a framework/library that supports the factory pattern (e.g. HK2 with org.glassfish.hk2.api.Factory) along with singleton binding.
Besides, you can check the documentation of MongoClientOptions's builder methods; perhaps you can find better connection control options there.
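For illustration, a rough sketch of what that HK2 factory plus singleton binding could look like in a Jersey 2 application (the class name and database name here are just examples, reusing the DatabaseConnection and Pour classes from the question):
// Sketch: expose a single Datastore through an HK2 Factory bound as a singleton.
public class MorphiaDatastoreFactory implements org.glassfish.hk2.api.Factory<Datastore> {
    private final Datastore datastore;

    public MorphiaDatastoreFactory() {
        Morphia morphia = new Morphia().map(Pour.class);
        MongoClient mongoClient = DatabaseConnection.getMongoClient(); // reuse the single client
        this.datastore = morphia.createDatastore(mongoClient, "tilt45");
        this.datastore.ensureIndexes();
    }

    @Override
    public Datastore provide() {
        return datastore;
    }

    @Override
    public void dispose(Datastore instance) {
        // nothing to do here; the MongoClient owns the connection pool
    }
}

// Registered once in the Jersey ResourceConfig:
// register(new org.glassfish.hk2.utilities.binding.AbstractBinder() {
//     @Override
//     protected void configure() {
//         bindFactory(MorphiaDatastoreFactory.class).to(Datastore.class).in(javax.inject.Singleton.class);
//     }
// });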

Define Max connection pool size in MongoDB + Java

For my application, I am trying to set the minimum and maximum pool size for the connection. Can anyone help me do this with MongoClient?
Also, I saw these options on MongoClientURI, but is there another way to set them with MongoClientOptions or MongoClient?
My Current Code:
public void buildMongoClient() {
mongoClient = new MongoClient(dbHostName, dbPort);
mongoDatabase = mongoClient.getDatabase(DATABASE);
}
You can try something like this.
public void buildMongoClient() {
    MongoClientOptions.Builder clientOptions = new MongoClientOptions.Builder();
    clientOptions.minConnectionsPerHost(10);   // min pool size (example value)
    clientOptions.connectionsPerHost(100);     // max pool size (example value)
    mongoClient = new MongoClient(new ServerAddress(dbHostName, dbPort), clientOptions.build());
    mongoDatabase = mongoClient.getDatabase(DATABASE);
}
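Since the question mentions MongoClientURI: the same pool bounds can also be set through the connection string, which can be handy when the URL comes from configuration (the values below are just examples):
// minPoolSize / maxPoolSize are standard connection-string options understood by the Java driver.
MongoClientURI uri = new MongoClientURI(
        "mongodb://" + dbHostName + ":" + dbPort + "/?minPoolSize=10&maxPoolSize=100");
MongoClient mongoClient = new MongoClient(uri);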

Java MongoDB connection pool

I am using Java with MongoDB. Here I am opening a MongoClient in each method. I only need to open it once throughout the class and close it once.
public class A
{
public String name()
{
MongoClient mongo = new MongoClient(host, port);
DB db = mongo.getDB(database);
DBCollection coll = db.getCollection(collection);
BasicDBObject doc = new BasicDBObject("john", e.getName())
}
public String age()
{
MongoClient mongo = new MongoClient(host, port);
DB db = mongo.getDB(database);
DBCollection coll = db.getCollection(collection);
BasicDBObject doc = new BasicDBObject("age", e.getAge())
}
}
You can use a Singleton pattern to guarantee only one instance of MongoClient class per application. Once you obtain the instance of MongoClient, you can perform your operations and don't need to explicitly manage operations like MongoClient.close, as this object manages connection pooling automatically.
In your example, you can initialize the MongoClient in a static variable.
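A minimal sketch of what that could look like for class A (host, port, database, and collection names below are placeholders):
public class A {
    // One MongoClient (and therefore one connection pool) shared by the whole application.
    private static final MongoClient MONGO = new MongoClient("localhost", 27017);
    private static final DB DATABASE = MONGO.getDB("mydb");

    public void name(String name) {
        DBCollection coll = DATABASE.getCollection("people");
        coll.insert(new BasicDBObject("name", name));
    }

    public void age(int age) {
        DBCollection coll = DATABASE.getCollection("people");
        coll.insert(new BasicDBObject("age", age));
    }
}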

connect to local cassandra nodes using datastax java driver?

I am using datastax java driver 3.1.0 to connect to cassandra cluster and my cassandra cluster version is 2.0.10.
Below is the singleton class I am using to connect to cassandra cluster.
public class CassUtil {
private static final Logger LOGGER = Logger.getInstance(CassUtil.class);
private Session session;
private Cluster cluster;
private static class Holder {
private static final CassUtil INSTANCE = new CassUtil();
}
public static CassUtil getInstance() {
return Holder.INSTANCE;
}
private CassUtil() {
List<String> servers = TestUtils.HOSTNAMES;
String username =
TestUtils.loadCredentialFile().getProperty(TestUtils.USERNAME);
String password =
TestUtils.loadCredentialFile().getProperty(TestUtils.PASSWORD);
// is this right setting?
PoolingOptions poolingOptions = new PoolingOptions();
poolingOptions.setConnectionsPerHost(HostDistance.LOCAL, 4, 10).setConnectionsPerHost(
HostDistance.REMOTE, 2, 4);
Builder builder = Cluster.builder();
cluster =
builder
.addContactPoints(servers.toArray(new String[servers.size()]))
.withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
.withPoolingOptions(poolingOptions)
.withReconnectionPolicy(new ConstantReconnectionPolicy(100L))
.withLoadBalancingPolicy(
DCAwareRoundRobinPolicy
.builder()
.withLocalDc(
!TestUtils.isProduction() ? "DC2" : TestUtils.getCurrentLocation()
.get().name().toLowerCase()).build())
.withCredentials(username, password).build();
try {
session = cluster.connect("testkeyspace");
StringBuilder sb = new StringBuilder();
Set<Host> allHosts = cluster.getMetadata().getAllHosts();
for (Host host : allHosts) {
sb.append("[");
sb.append(host.getDatacenter());
sb.append(host.getRack());
sb.append(host.getAddress());
sb.append("]");
}
LOGGER.logInfo("connected: " + sb.toString());
} catch (NoHostAvailableException ex) {
LOGGER.logError("error= ", ExceptionUtils.getStackTrace(ex));
} catch (Exception ex) {
LOGGER.logError("error= " + ExceptionUtils.getStackTrace(ex));
}
}
public void shutdown() {
LOGGER.logInfo("Shutting down the whole cassandra cluster");
if (null != session) {
session.close();
}
if (null != cluster) {
cluster.close();
}
}
public Session getSession() {
if (session == null) {
throw new IllegalStateException("No connection initialized");
}
return session;
}
public Cluster getCluster() {
return cluster;
}
}
What settings do I need to use to connect to local Cassandra nodes first, and only talk to remote nodes if the local ones are down? Also, are the pooling configuration options I am using in the above code right?
By default the DataStax drivers will only connect to nodes in the local DC. If you do not use withLocalDc, the driver will attempt to discern the local datacenter from the DC of the contact point it is able to connect to.
If you want the driver to fail over to hosts in remote data center(s), you should use withUsedHostsPerRemoteDc, i.e.:
cluster.builder()
.withLoadBalancingPolicy(DCAwareRoundRobinPolicy.builder()
.withLocalDc("DC1")
.withUsedHostsPerRemoteDc(3).build())
With this configuration, the driver will establish connections to 3 hosts in each remote DC, and only send queries to them if all hosts in the local datacenter are down.
There are other strategies for failing over to remote data centers. For example, you could run your application clients in the same physical data centers as your C* data centers, and then when a physical data center fails, fail over at a higher level (such as at your load balancer).
As for whether the pooling configuration options in the above code are right:
I think what you have is fine. The defaults are fine too.
