I am trying to implement a singleton pattern for connecting to my MongoDB database, so as to make sure I have only one connection. I have written the following code:
public enum MongoConnector {
CONNECTION;
private MongoClient client = null;
/**
* This constructor creates the single instance of the MongoDB connector.
* Connection pooling is handled internally by the MongoDB driver.
*/
private MongoConnector() {
try {
client = new MongoClient();
} catch (Exception e) {
e.printStackTrace();
}
}
public MongoClient getClient() {
if (client == null) {
throw new RuntimeException("MongoClient was not initialized");
}
return client;
}
}
So I want to know whether this ensures the singleton pattern. If not, please let me know how it should be done. Thank you.
Yes, the MongoConnector is a singleton. The enum guarantees that CONNECTION is instantiated exactly once per JVM, and enum initialization is thread safe per the Java Language Specification, so a single MongoClient is created and shared.
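For illustration, a minimal usage sketch of the enum above (the database name is a placeholder, and getClient is the accessor defined in the question):
// Every caller receives the same MongoClient instance.
MongoClient client = MongoConnector.CONNECTION.getClient();
DB db = client.getDB("test"); // 2.x driver API; on the 3.x driver use getDatabase(...)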
I have written a RESTful API using Apache Jersey. I am using MongoDB as my backend and Morphia (v1.3.4) to map and persist POJOs to the database. I tried to follow "1 application, 1 connection" in my API as recommended everywhere, but I am not sure I am successful. I run my API in Tomcat 8. I also ran mongostat to see the details and connections. At the start, mongostat showed 1 connection to the MongoDB server. I tested my API using Postman and it was working fine. I then created a load test in SoapUI where I simulated 100 users per second. I saw the update in mongostat: there were 103 connections. Here is the gif which shows this behaviour.
I am not sure why there are so many connections. The interesting fact is that the number of Mongo connections is directly proportional to the number of users I create in SoapUI. Why is that? I found other similar questions, but I think I have implemented their suggestions.
Mongo connection leak with morphia
Spring data mongodb not closing mongodb connections
My code looks like this.
DatabaseConnection.java
// Some imports
public class DatabaseConnection {
private static volatile MongoClient instance;
private static String cloudhost="localhost";
private DatabaseConnection() { }
public synchronized static MongoClient getMongoClient() {
if (instance == null ) {
synchronized (DatabaseConnection.class) {
if (instance == null) {
ServerAddress addr = new ServerAddress(cloudhost, 27017);
List<MongoCredential> credentialsList = new ArrayList<MongoCredential>();
MongoCredential credentia = MongoCredential.createCredential(
"test", "test", "test".toCharArray());
credentialsList.add(credentia);
instance = new MongoClient(addr, credentialsList);
}
}
}
return instance;
}
}
PourService.java
@Secured
@Path("pours")
public class PourService {
final static Logger logger = Logger.getLogger(Pour.class);
private static final int POUR_SIZE = 30;
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Response createPour(String request)
{
WebApiResponse response = new WebApiResponse();
Gson gson = new GsonBuilder().setDateFormat("dd/MM/yyyy HH:mm:ss").create();
String message = "Pour was not created.";
HashMap<String, Object> data = null;
try
{
Pour pour = gson.fromJson(request, Pour.class);
// Storing the pour to the database
PourRepository pourRepository = new PourRepository();
String id = pourRepository.createPour(pour);
data = new HashMap<String, Object>();
if (id != null && !id.isEmpty())
{
data.put("id", id);
message = "Pour was created successfully.";
logger.debug(message);
return response.build(true, message, data, 200);
}
logger.debug(message);
return response.build(false, message, data, 500);
}
catch (Exception e)
{
message = "Error while creating Pour.";
logger.error(message, e);
return response.build(false, message, new Object(),500);
}
}
}
PourDao.java
public class PourDao extends BasicDAO<Pour, String>{
public PourDao(Class<Pour> entityClass, Datastore ds) {
super(entityClass, ds);
}
}
PourRepository.java
public class PourRepository {
private PourDao pourDao;
final static Logger logger = Logger.getLogger(PourRepository.class);
public PourRepository ()
{
try
{
MongoClient mongoClient = DatabaseConnection.getMongoClient();
Datastore ds = new Morphia().map(Pour.class)
.createDatastore(mongoClient, "tilt45");
pourDao = new PourDao(Pour.class,ds);
}
catch (Exception e)
{
logger.error("Error while creating PourDao", e);
}
}
public String createPour (Pour pour)
{
try
{
return pourDao.save(pour).getId().toString();
}
catch (Exception e)
{
logger.error("Error while creating Pour.", e);
return null;
}
}
}
When I work with Mongo + Morphia I get better results using a factory pattern for the Datastore rather than for the MongoClient. For instance, check the following class:
public class DatastoreFactory {
    private final Datastore datastore;
    public DatastoreFactory(String dbHost, int dbPort, String dbName) {
        final Morphia morphia = new Morphia();
        MongoClientOptions.Builder options = MongoClientOptions.builder().socketKeepAlive(true);
        morphia.getMapper().getOptions().setStoreEmpties(true);
        final Datastore store = morphia.createDatastore(
                new MongoClient(new ServerAddress(dbHost, dbPort), options.build()), dbName);
        store.ensureIndexes();
        this.datastore = store;
    }
    public Datastore getDatastore() {
        return datastore;
    }
}
With that approach, every time you need a Datastore you can use the one provided by the factory. Of course, this can be implemented better if you use a framework/library that supports the factory pattern (e.g. HK2 with org.glassfish.hk2.api.Factory), along with singleton binding.
Besides, you can check the documentation of MongoClientOptions's builder method; perhaps you can find better connection control there.
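For example, a minimal sketch of tightening connection control through the builder (assuming the 3.x Java driver; the option values are illustrative assumptions, not recommendations):
MongoClientOptions options = MongoClientOptions.builder()
        .connectionsPerHost(20)          // cap the connection pool size per host
        .maxConnectionIdleTime(60_000)   // close connections idle longer than a minute
        .connectTimeout(10_000)          // fail fast when the server is unreachable
        .build();
MongoClient client = new MongoClient(new ServerAddress("localhost", 27017), options);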
Hello, I am working with the MongoDB Java driver. In their documentation, they mention that:
The MongoClient class is designed to be thread safe and shared among threads.
Typically you create only 1 instance for a given database cluster and use it across
your application.
So I want to make this object available for every user. How can I do this?
The best way to do this is to use the Singleton design pattern. This is the code:
public class MongoDBManager {
public MongoClient mongoClient = null;
String host = "127.0.0.1";
static MongoDBManager mongo = new MongoDBManager();
private MongoDBManager() {
try {
mongoClient = new MongoClient(host, 27017);
} catch (UnknownHostException e) {
System.err.println("Connection errors");
e.printStackTrace();
}
}
public static MongoDBManager getInstance(){
return mongo;
}
}
Call MongoDBManager.getInstance() whenever you need the connection; only one object will ever be created.
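For illustration, a short usage sketch (the database name is a placeholder; the public field access matches the answer's code, though a getter would be more conventional):
MongoClient client = MongoDBManager.getInstance().mongoClient;
DB db = client.getDB("mydb"); // every caller shares the same client and its connection pool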
We use Hazelcast in client-server mode. The Hazelcast cluster contains 2 nodes, and we have about 25 clients connected to the cluster.
What I am looking for now is a simple check that tries to figure out whether the cluster is still alive. It should be a rather cheap operation, because this check will occur on every client quite frequently (once every second, I could imagine).
What is the best way to do so?
The simplest way would be to register a LifecycleListener on the client HazelcastInstance:
HazelcastInstance client = HazelcastClient.newHazelcastClient();
client.getLifecycleService().addLifecycleListener(new LifecycleListener() {
    @Override
    public void stateChanged(LifecycleEvent event) {
        // react to lifecycle state changes here
    }
});
The client uses a periodic heartbeat to detect if the cluster is still running.
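As a sketch (assuming the Hazelcast 3.x client API and java.util.concurrent.atomic.AtomicBoolean), the listener can flip a flag that your per-second check then reads cheaply:
final AtomicBoolean clusterUp = new AtomicBoolean(true);
client.getLifecycleService().addLifecycleListener(event -> {
    if (event.getState() == LifecycleEvent.LifecycleState.CLIENT_DISCONNECTED) {
        clusterUp.set(false); // heartbeat lost, cluster unreachable
    } else if (event.getState() == LifecycleEvent.LifecycleState.CLIENT_CONNECTED) {
        clusterUp.set(true);  // (re)connected
    }
});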
You can use the LifecycleService.isRunning() method as well:
HazelcastInstance hzInstance = HazelcastClient.newHazelcastClient();
hzInstance.getLifecycleService().isRunning()
As isRunning() may be true even if the cluster is down, I'd go for the following approach (a mixture of @konstantin-zyubin's answer and this). This doesn't need an event listener, which is an advantage in my setup:
if (!hazelcastInstance.getLifecycleService().isRunning()) {
return Health.down().build();
}
int parameterCount;
LocalTopicStats topicStats;
try {
parameterCount = hazelcastInstance.getMap("parameters").size();
topicStats = hazelcastInstance.getTopic("myTopic").getLocalTopicStats();
} catch (Exception e) {
// instance may run but cluster is down:
Health.Builder builder = Health.down();
builder.withDetail("Error", e.getMessage());
return builder.build();
}
Health.Builder builder = Health.up();
builder.withDetail("parameterCount", parameterCount);
builder.withDetail("receivedMsgs", topicStats.getReceiveOperationCount());
builder.withDetail("publishedMsgs", topicStats.getPublishOperationCount());
return builder.build();
I have found a more reliable way to check Hazelcast availability, because
client.getLifecycleService().isRunning()
always returns true when you use async reconnection mode, as was mentioned.
@Slf4j
public class DistributedCacheServiceImpl implements DistributedCacheService {
private HazelcastInstance client;
@Autowired
protected ConfigLoader<ServersConfig> serversConfigLoader;
@PostConstruct
private void initHazelcastClient() {
ClientConfig config = new ClientConfig();
if (isCacheEnabled()) {
ServersConfig.Hazelcast hazelcastConfig = getWidgetCacheSettings().getHazelcast();
config.getGroupConfig().setName(hazelcastConfig.getName());
config.getGroupConfig().setPassword(hazelcastConfig.getPassword());
for (String address : hazelcastConfig.getAddresses()) {
config.getNetworkConfig().addAddress(address);
}
config.getConnectionStrategyConfig()
.setAsyncStart(true)
.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
config.getNetworkConfig()
.setConnectionAttemptLimit(0) // infinite (Integer.MAX_VALUE) attempts to reconnect
.setConnectionTimeout(5000);
client = HazelcastClient.newHazelcastClient(config);
}
}
@Override
public boolean isCacheEnabled() {
ServersConfig.WidgetCache widgetCache = getWidgetCacheSettings();
return widgetCache != null && widgetCache.getEnabled();
}
@Override
public boolean isCacheAlive() {
boolean aliveResult = false;
if (isCacheEnabled() && client != null) {
try {
IMap<Object, Object> defaultMap = client.getMap("default");
if (defaultMap != null) {
defaultMap.size(); // will throw Hazelcast exception if cluster is down
aliveResult = true;
}
} catch (Exception e) {
log.error("Connection to hazelcast cluster is lost. Reason : {}", e.getMessage());
}
}
return aliveResult;
}
}
I am trying to create a pool of channels/connections to a queue server and was trying to use ObjectPool, but I am having trouble using it from the example on their site.
So far I have threads that do work, but I want each of them to grab a channel from the pool and then return it. I understand how to use it (borrowObject/returnObject) but am not sure how to create the initial pool.
Here's how channels are made in rabbitmq:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
and my code just uses channel to do stuff. I'm confused because the only example I could find (on their site) starts it like this:
private ObjectPool<StringBuffer> pool;
public ReaderUtil(ObjectPool<StringBuffer> pool) {
this.pool = pool;
}
Which does not make sense to me. I realized this is common to establishing database connections, so I tried to find tutorials using databases and ObjectPool, but they seem to use DBCP, which is specific to databases (and I can't seem to reuse the logic for my queue server).
Any suggestions on how to use it? Or is there a another approach used for pools in java?
They create a class that creates objects and knows what to do when they are returned. That might be something like this for you:
public class PoolConnectionFactory extends BasePoolableObjectFactory<Connection> {
    private final ConnectionFactory factory;

    public PoolConnectionFactory() {
        factory = new ConnectionFactory();
        factory.setHost("localhost");
    }

    // for makeObject we'll simply return a new Connection
    @Override
    public Connection makeObject() throws Exception {
        return factory.newConnection();
    }

    // when an object is returned to the pool, we'll clear it out
    // (I don't know what cleanup, if any, a queue Connection needs here)
    @Override
    public void passivateObject(Connection con) throws Exception {
        // con.I_don't_know_what_to_do();
    }

    // for all other methods, the no-op implementation
    // in BasePoolableObjectFactory will suffice
}
now you create a ObjectPool<Connection> somewhere:
ObjectPool<Connection> pool = new StackObjectPool<Connection>(new PoolConnectionFactory());
then you can use pool inside your threads like
Connection c = pool.borrowObject();
try {
    c.doSomethingWithMe();
} finally {
    pool.returnObject(c); // always hand the connection back to the pool
}
The lines that didn't make sense to you are a way to pass the pool object to a different class. See the last line: they create the pool while creating the reader.
new ReaderUtil(new StackObjectPool<StringBuffer>(new StringBufferFactory()))
You'll need a custom implementation of PoolableObjectFactory to create, validate, and destroy the objects you want to pool. Then pass an instance of your factory to an ObjectPool's constructor and you're ready to start borrowing objects.
Here's some sample code. You can also look at the source code for commons-dbcp, which uses commons-pool.
import org.apache.commons.pool.BasePoolableObjectFactory;
import org.apache.commons.pool.ObjectPool;
import org.apache.commons.pool.PoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;
public class PoolExample {
public static class MyPooledObject {
public MyPooledObject() {
System.out.println("hello world");
}
public void sing() {
System.out.println("mary had a little lamb");
}
public void destroy() {
System.out.println("goodbye cruel world");
}
}
public static class MyPoolableObjectFactory extends BasePoolableObjectFactory<MyPooledObject> {
@Override
public MyPooledObject makeObject() throws Exception {
return new MyPooledObject();
}
@Override
public void destroyObject(MyPooledObject obj) throws Exception {
obj.destroy();
}
// PoolableObjectFactory has other methods you can override
// to validate, activate, and passivate objects.
}
public static void main(String[] args) throws Exception {
PoolableObjectFactory<MyPooledObject> factory = new MyPoolableObjectFactory();
ObjectPool<MyPooledObject> pool = new GenericObjectPool<MyPooledObject>(factory);
// Other ObjectPool implementations with special behaviors are available;
// see the JavaDoc for details
try {
for (int i = 0; i < 2; i++) {
MyPooledObject obj;
try {
obj = pool.borrowObject();
} catch (Exception e) {
// failed to borrow object; you get to decide how to handle this
throw e;
}
try {
// use the pooled object
obj.sing();
} catch (Exception e) {
// this object has failed us -- never use it again!
pool.invalidateObject(obj);
obj = null; // don't return it to the pool
// now handle the exception however you want
} finally {
if (obj != null) {
pool.returnObject(obj);
}
}
}
} finally {
pool.close();
}
}
}
I am creating a Java application that connects to multiple databases. A user will be able to select the database they want to connect to from a drop-down box.
The program then connects to the database by passing the name to a method that creates an initial context so it can talk with an Oracle WebLogic data source.
public class dbMainConnection {
private static dbMainConnection conn = null;
private static java.sql.Connection dbConn = null;
private static javax.sql.DataSource ds = null;
private static Logger log = LoggerUtil.getLogger();
private dbMainConnection(String database) {
try {
Context ctx = new InitialContext();
if (ctx == null) {
log.info("JNDI problem, cannot get InitialContext");
}
database = "jdbc/" + database;
log.info("This is the database string in dbMainConnection: " + database);
ds = (javax.sql.DataSource) ctx.lookup (database);
} catch (Exception ex) {
log.error("eMTSLogin: Error in dbMainConnection while connecting to the database : " + database, ex);
}
}
public Connection getConnection() {
try {
return ds.getConnection();
} catch (Exception ex) {
log.error("Error in main getConnection while connecting to the database : ", ex);
return null;
}
}
public static dbMainConnection getInstance(String database) {
if (dbConn == null) {
conn = new dbMainConnection(database);
}
return conn;
}
public void freeConnection(Connection c) {
try {
c.close();
log.info(c + " is now closed");
} catch (SQLException sqle) {
log.error("Error in main freeConnection : ", sqle);
}
}
}
My problem is: what happens if someone forgets to create the data source for a database but still adds it to the drop-down box? Right now, if I try to connect to a database that doesn't have a data source, it errors saying it cannot get a connection, which is what I want. But if I first connect to a database that does have a data source, which works, and then try to connect to the database that doesn't have one, it errors with
javax.naming.NameNotFoundException: Unable to resolve 'jdbc.peterson'. Resolved 'jdbc'; remaining name 'peterson'.
Which, again, I would expect. But what confuses me is that it then grabs the last good connection, which is for a different database, and processes everything as if nothing happened.
Does anyone know why that is? Is WebLogic caching the connection or something as a fail-safe? Is it a bad idea to create connections this way?
You're storing a unique datasource (and connection, and dbMainConnection) in a static variable of your class. Each time someone asks for a datasource, you replace the previous one with the new one. If an exception occurs while getting a datasource from JNDI, the static datasource stays as it is. You should not store anything in a static variable. Since your dbMainConnection class is constructed with the name of a database, and there are several database names, it makes no sense to make it a singleton.
Just use the following code to access the datasource:
public final class DataSourceUtil {
/**
* Private constructor to prevent unnecessary instantiations
*/
private DataSourceUtil() {
}
public static DataSource getDataSource(String name) {
try {
Context ctx = new InitialContext();
String database = "jdbc/" + name;
return (javax.sql.DataSource) ctx.lookup (database);
}
catch (NamingException e) {
    throw new IllegalStateException("Error accessing JNDI and getting the database named " + name, e);
}
}
}
And let the callers get a connection from the datasource and close it when they have finished using it.
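For illustration, a caller sketch under those rules (the database name comes from the question's error message; the query is hypothetical; try-with-resources closes the connection even on error):
DataSource ds = DataSourceUtil.getDataSource("peterson");
try (Connection conn = ds.getConnection();
     PreparedStatement ps = conn.prepareStatement("SELECT 1 FROM DUAL")) {
    ps.executeQuery(); // use the connection, then let try-with-resources clean up
} catch (SQLException e) {
    // handle or rethrow as appropriate
}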
You're catching the JNDI exception upon lookup of the nonexistent datasource, but your singleton still keeps the reference to the previously looked-up datasource. As A.B. Cade says, null the ds reference upon exception, or even before the lookup.
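A minimal sketch of that fix inside the question's own constructor (clearing the stale static reference when the lookup fails):
} catch (Exception ex) {
    ds = null; // don't keep serving the previous database's datasource
    log.error("eMTSLogin: Error in dbMainConnection while connecting to the database : " + database, ex);
}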
On a more general note, perhaps using a singleton is not the best idea here.