MongoDB open connection issue - Java

I have the following log in my mongo console:
Tue Jul 23 17:20:01.301 [initandlisten] waiting for connections on port 27017
Tue Jul 23 17:20:01.401 [websvr] admin web console waiting for connections on port 28017
Tue Jul 23 17:20:01.569 [initandlisten] connection accepted from 127.0.0.1:58090 #1 (1 connection now open)
Tue Jul 23 17:20:01.570 [initandlisten] connection accepted from 127.0.0.1:58089 #2 (2 connections now open)
Tue Jul 23 17:20:21.799 [initandlisten] connection accepted from 127.0.0.1:58113 #3 (3 connections now open)
....
....
....
The log goes on like this, and the connection count is now at 112. This happens every time I start the mongo server. I only have a singleton connection in my code. What can be the issue here?
public static DB getConnection(String databaseName) throws AppConnectionException {
    if (null != db) {
        Logger.debug("Returning existing db connection...!");
        return db;
    }
    Logger.debug("Creating new db connection...!");
    final String connStr = PropertyRetreiver.getPropertyFromConfigurationFile("rawdata.url");
    try {
        final MongoClientURI uri = new MongoClientURI(connStr);
        final MongoClient client = new MongoClient(uri);
        db = client.getDB(databaseName);
    } catch (UnknownHostException e) {
        throw new AppConnectionException("Unable to connect to the given host / port.");
    }
    return db;
}

MongoClient has an internal connection pool. The maximum number of connections can be configured (the default is 100). You can set it using MongoClientOptions like this:
MongoClientOptions options = MongoClientOptions.builder()
        .connectionsPerHost(100)
        .autoConnectRetry(true)
        .build();
Then pass these options to MongoClient (checked against the Mongo Java API v2.11.1).
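For example, a minimal sketch (the host and port are illustrative placeholders):
MongoClient client = new MongoClient(new ServerAddress("some.mongohost.com", 27017), options);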
Connections in the pool are kept open (opening and closing a connection is usually an expensive operation) so that they can be reused later.
I would also refactor your MongoDB client singleton, using an enum for example, to avoid putting synchronized on this method.
Here is a sketch of what I mean:
public enum MongoDB {
    INSTANCE;

    private static final String MONGO_DB_HOST = "some.mongohost.com";
    private Mongo mongo;
    private DB someDB;

    MongoDB() {
        MongoClientOptions options = MongoClientOptions.builder()
                .connectionsPerHost(100)
                .autoConnectRetry(true)
                .readPreference(ReadPreference.secondaryPreferred())
                .build();
        try {
            mongo = new MongoClient(MONGO_DB_HOST, options);
        } catch (UnknownHostException e) {
            e.printStackTrace();
        }
        someDB = mongo.getDB("someDB");
        // authenticate if needed
        // boolean auth = someDB.authenticate("username", "password".toCharArray());
        // if (!auth) {
        //     System.out.println("Error Connecting To DB");
        // }
    }

    public DB getSomeDB() {
        return someDB;
    }

    // call it on your shutdown hook, for example
    public void close() {
        mongo.close();
    }
}
Then, you can access your database via
MongoDB.INSTANCE.getSomeDB().getCollection("someCollection").count();


com.mysql.cj.jdbc.exceptions.CommunicationsException: The last packet successfully received from the server was XXXXXXXXXXXX milliseconds ago

I have a Spring Boot application that does not use a connection pool, and we didn't want to open a DB connection on every request.
So here is what we have in a class called MySQLService, which has the methods with DB queries:
@Autowired
@Qualifier("mysqlDB")
private Connection connection;
This connection object is always used in all of the methods with queries.
In the MySQLConnection class:
@Bean(name = "mysqlDB")
public Connection getConnection() {
    Connection connection = null;
    try {
        Class.forName(mysqlDriver);
        LOGGER.debug("get mysql connection...");
        connection = DriverManager.getConnection(jdbcUrl, user, password);
    } catch (Exception exception) {
        LOGGER.error("ERROR :: {}", exception);
    }
    return connection;
}
So we never really close the connection; it is managed by the Spring context, but since we are not using JdbcTemplate, it does not get closed. We have autoReconnect set to true in the connection string.
In a day or two, we get the exception:
com.mysql.cj.jdbc.exceptions.CommunicationsException: The last packet successfully received from the server was 61,183,452 milliseconds ago.
I understand this is because the MySQL server has a connection lifetime set, so it expires the connection, but what is a way to handle this without using a connection pool?
Schedule a ping to the MySQL server every 6 hours or so, executing this query: select 1 from dual. For that, you need to enable scheduling:
@Configuration
@EnableScheduling
public class SpringConfig {
    //...
}
then:
@Scheduled(cron = "0 0 */6 * * *") // every 6 hours; Spring cron expressions take six fields (seconds first)
public void schedulePingMySQL() {
    // execute `select 1 from dual`
}
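Fleshed out, the ping might look like this (a minimal sketch reusing the injected connection from the question; the logger name and error handling are illustrative assumptions, and it needs java.sql.Statement and java.sql.SQLException):
@Scheduled(cron = "0 0 */6 * * *")
public void schedulePingMySQL() {
    // a cheap statement that keeps the connection from idling out
    try (Statement stmt = connection.createStatement()) {
        stmt.execute("select 1 from dual");
    } catch (SQLException e) {
        LOGGER.error("MySQL keep-alive ping failed", e);
    }
}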
Anyway, using a connection pool is the recommended way. In that case the code may look like:
@Autowired
private DataSource dataSource;

public void save(Dto dto) throws SQLException {
    // try-with-resources returns the pooled connection when done
    try (Connection con = dataSource.getConnection()) {
        // ... run queries with con ...
    }
}

How to reduce the timeout when the AD host is unreachable, using the UnboundID SDK?

I am using the UnboundID SDK for AD server authentication:
public static Boolean adAuthentication(String ldapUrl, int ldapPort,
        String bindUserName, String bindPassword) {
    SSLUtil sslUtil = new SSLUtil(null, new TrustAllTrustManager());
    SocketFactory socketFactory;
    LDAPConnection ldapConnection = null;
    Boolean isAuthentic = null;
    try {
        socketFactory = sslUtil.createSSLSocketFactory();
        LDAPConnectionOptions options = new LDAPConnectionOptions();
        options.setAbandonOnTimeout(true);
        options.setResponseTimeoutMillis(10000);
        options.setConnectTimeoutMillis(10000);
        ldapConnection = new LDAPConnection(socketFactory, ldapUrl, ldapPort);
        ldapConnection.setConnectionOptions(options);
        if (ldapConnection.isConnected()) {
            final BindRequest bindRequest = new SimpleBindRequest(bindUserName, bindPassword);
            final BindResult bindResult = ldapConnection.bind(bindRequest);
            final ResultCode resultCode = bindResult.getResultCode();
            isAuthentic = resultCode.equals(ResultCode.SUCCESS);
        }
    } catch (LDAPException ldapException) {
        logger.error("AD Host Exception ::: " + ldapException);
    } catch (GeneralSecurityException exception) {
        logger.error("AD Security Exception ::: " + exception);
    } finally {
        if (ldapConnection != null)
            ldapConnection.close();
    }
    return isAuthentic;
}
This code works fine when the AD server is reachable. But when the AD server is unreachable, it throws the following error after 60 seconds:
"AD Host Exception ::: LDAPException(resultCode=91 (connect error), errorMessage='An error occurred while attempting to connect to server 000.000.00.00:369: java.io.IOException: Unable to establish a connection to server 000.000.00.00:369 within the configured timeout of 60000 milliseconds.')"
But I need the error to be thrown within 20 seconds. I set the timeout limits as follows, but it had no effect:
options.setResponseTimeoutMillis(20000);
options.setConnectTimeoutMillis(20000);
Thanks.
Since you're providing connection information in the constructor, the constructor is establishing the connection. However, you're not setting the connection options until after the constructor has run, so the constructor is using the default connection options.
Rather than using the LDAPConnection(SocketFactory,String,int) constructor, you should use the LDAPConnection(SocketFactory,LDAPConnectionOptions,String,int) constructor. This will cause connection establishment to use the provided connection options instead of the defaults.
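A minimal sketch of that change, reusing the names from the code above:
LDAPConnectionOptions options = new LDAPConnectionOptions();
options.setAbandonOnTimeout(true);
options.setResponseTimeoutMillis(20000);
options.setConnectTimeoutMillis(20000);
// passing the options at construction time makes them apply to the connect itself
LDAPConnection ldapConnection = new LDAPConnection(socketFactory, options, ldapUrl, ldapPort);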

OpenShift can't connect to MongoDB from Java code, time out

I've got a MongoDB cartridge installed on OpenShift and I'm having trouble connecting to it from Java code. The IP address, port, and credentials are taken from OpenShift's RockMongo cartridge. The following method invocation:
public Document insert(String audio, String username) {
    Document document = new Document();
    document.put("username", username);
    document.put("audio", audio);
    document.put("timestamp", new Date());
    collection.insertOne(document);
    return document;
}
and this mongo client configuration:
private static MongoClient build() throws UnknownHostException {
    if (mongoClient == null) {
        mongoClient = new MongoClient(
                new MongoClientURI("mongodb://admin:password@X.X.X.X:27017/dbName"));
    }
    return mongoClient;
}

public static MongoCollection<Document> getCollection(String collectionName) {
    try {
        build();
    } catch (UnknownHostException e) {
    }
    MongoDatabase db = mongoClient.getDatabase(dbName);
    MongoCollection<Document> collection = db.getCollection(collectionName);
    return collection;
}
results in INFO: No server chosen by PrimaryServerSelector from cluster description ClusterDescription, and the exception: Timed out after 30000 ms while waiting for a server that matches PrimaryServerSelector.
EDIT: I can't connect to the MongoDB service on OpenShift via the mongo shell either ("exception: connect failed"), so I think it's an OpenShift configuration issue. Port forwarding and the service itself are started.
I suppose you have not correctly configured the cluster (the message in the logs points to this problem). I'm not sure how the OpenShift cartridge works, but I recommend you check whether MongoDB actually started correctly: connect via an SSH client and check the status of the mongod process. Take a look at this question: Java MongoClient cannot connect to primary; it should give you some idea of where the problem is.
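As a debugging aid (not a fix for the server-side configuration), you can also make the client fail faster than the default 30 seconds by lowering the server selection timeout. A sketch assuming the 3.x Java driver, with the same placeholder URI as above:
MongoClientOptions.Builder builder = MongoClientOptions.builder()
        .serverSelectionTimeout(5000); // fail after 5 seconds instead of 30
MongoClient mongoClient = new MongoClient(
        new MongoClientURI("mongodb://admin:password@X.X.X.X:27017/dbName", builder));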

Cassandra single node connection error

I am trying to use Cassandra as the database for an app I am working on. The app is a NetBeans Platform app.
In order to start the Cassandra server on my localhost I issue Runtime.getRuntime().exec(command),
where command is the string that starts the Cassandra server, and then I connect to the Cassandra server with the DataStax driver. However, I get the error:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:80)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1154)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:318)
at org.dhviz.boot.DatabaseClient.connect(DatabaseClient.java:43)
at org.dhviz.boot.Installer.restored(Installer.java:67)
....
I figured that the server requires some time to start, so I added the line Thread.sleep(MAX_DELAY_SERVER), which seems to resolve the problem.
Is there a more elegant way to sort out this issue?
Thanks.
The code is below.
public class Installer extends ModuleInstall {

    private final int MAX_DELAY_SERVER = 12000;
    //private static final String pathSrc = "/org/dhviz/resources";

    @Override
    public void restored() {
        DatabaseClient d = new DatabaseClient();
        // launch an instance of the cassandra server
        d.loadDatabaseServer();
        // wait for MAX_DELAY_SERVER milliseconds before launching the other instructions
        try {
            Thread.sleep(MAX_DELAY_SERVER);
            Logger.getLogger(Installer.class.getName()).log(Level.INFO, "wait for MAX_DELAY_SERVER milliseconds before the connect database");
        } catch (InterruptedException ex) {
            Exceptions.printStackTrace(ex);
            Logger.getLogger(Installer.class.getName()).log(Level.INFO, "exception in thread sleep");
        }
        d.connect("127.0.0.1");
    }
}
public class DatabaseClient {

    private Cluster cluster;
    private Session session;
    private ShellCommand shellCommand;
    private final String defaultKeyspace = "dhviz";

    final private String LOAD_CASSANDRA = "launchctl load /usr/local/Cellar/cassandra/2.1.2/homebrew.mxcl.cassandra.plist";
    final private String UNLOAD_CASSANDRA = "launchctl unload /usr/local/Cellar/cassandra/2.1.2/homebrew.mxcl.cassandra.plist";

    public DatabaseClient() {
        shellCommand = new ShellCommand();
    }

    public void connect(String node) {
        // this connects to the cassandra database
        cluster = Cluster.builder()
                .addContactPoint(node).build();
        // cluster.getConfiguration().getSocketOptions().setConnectTimeoutMillis(12000);
        Metadata metadata = cluster.getMetadata();
        System.out.printf("Connected to cluster: %s\n", metadata.getClusterName());
        for (Host host : metadata.getAllHosts()) {
            System.out.printf("Datacenter: %s; Host: %s; Rack: %s\n",
                    host.getDatacenter(), host.getAddress(), host.getRack());
        }
        session = cluster.connect();
        Logger.getLogger(DatabaseClient.class.getName()).log(Level.INFO, "connected to server");
    }

    public void loadDatabaseServer() {
        if (shellCommand == null) {
            shellCommand = new ShellCommand();
        }
        shellCommand.executeCommand(LOAD_CASSANDRA);
        Logger.getLogger(DatabaseClient.class.getName()).log(Level.INFO, "database cassandra loaded");
    }

    public void unloadDatabaseServer() {
        if (shellCommand == null) {
            shellCommand = new ShellCommand();
        }
        shellCommand.executeCommand(UNLOAD_CASSANDRA);
        Logger.getLogger(DatabaseClient.class.getName()).log(Level.INFO, "database cassandra unloaded");
    }
}
If you are calling cassandra without any parameters in Runtime.getRuntime().exec(command), it's likely that this spawns cassandra as a background process and returns before the cassandra node has fully started and is listening.
I'm not sure why you are attempting to embed cassandra in your app, but you may find cassandra-unit useful for providing a mechanism to embed cassandra in your app. It's primarily used for running tests that require a cassandra instance, but it may also meet your use case.
The wiki provides a helpful example on how to start an embedded cassandra instance using cassandra-unit:
EmbeddedCassandraServerHelper.startEmbeddedCassandra();
In my experience, cassandra-unit will wait until the server is up and listening before returning. You could also write a method that waits until a socket is in use, using logic opposite to this answer, as in the sketch below.
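A minimal sketch of such a wait, assuming Cassandra's native transport on 127.0.0.1:9042 (the host, port, and timeouts are illustrative; it needs java.net.Socket, java.net.InetSocketAddress, and java.io.IOException):
public static boolean waitForPort(String host, int port, long maxWaitMillis)
        throws InterruptedException {
    long deadline = System.currentTimeMillis() + maxWaitMillis;
    while (System.currentTimeMillis() < deadline) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 1000);
            return true; // something is listening on the port
        } catch (IOException e) {
            Thread.sleep(500); // not up yet; retry shortly
        }
    }
    return false; // gave up waiting
}
Calling waitForPort("127.0.0.1", 9042, MAX_DELAY_SERVER) before connect(...) would replace the fixed sleep with a bounded poll.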
I have changed the code to the following, taking inspiration from the answers above. Thanks for your help!
cluster = Cluster.builder()
        .addContactPoint(node).build();
cluster.getConfiguration().getSocketOptions().setConnectTimeoutMillis(50000);

boolean serverConnected = false;
while (serverConnected == false) {
    try {
        try {
            Thread.sleep(MAX_DELAY_SERVER);
        } catch (InterruptedException ex) {
            Exceptions.printStackTrace(ex);
        }
        cluster = Cluster.builder()
                .addContactPoint(node).build();
        cluster.getConfiguration().getSocketOptions().setConnectTimeoutMillis(50000);
        session = cluster.connect();
        serverConnected = true;
    } catch (NoHostAvailableException ex) {
        Logger.getLogger(DatabaseClient.class.getName()).log(Level.INFO, "trying connection to cassandra server...");
        serverConnected = false;
    }
}

Getting Stale Connection using OracleDataSource with OCI driver

I am getting a stale connection error when there are no requests to the database from my Java application for a couple of hours.
It's a simple Java application run on a Linux box with the OCI (type 2) driver. Don't ask me why OCI and not thin. I am using OracleDataSource and OracleConnectionCacheManager to maintain the cache of connection objects. Here is the code snippet:
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;

import oracle.jdbc.pool.OracleConnectionCacheManager;
import oracle.jdbc.pool.OracleDataSource;

import org.apache.log4j.Logger;

import com.exception.DataException;

public class ConnectionManager {

    private static OracleDataSource poolDataSource = null;
    private final static String CACHE_NAME = "CONNECTION_POOL_CACHE";
    private static OracleConnectionCacheManager occm = null;

    public static void init(String url, String userId, String password) throws DataException {
        Properties cacheProps = null;
        try {
            poolDataSource = new OracleDataSource();
            poolDataSource.setURL(url);
            poolDataSource.setUser(userId);
            poolDataSource.setPassword(password);
            cacheProps = new Properties();
            cacheProps.setProperty("MinLimit", "1");
            cacheProps.setProperty("MaxLimit", "5");
            cacheProps.setProperty("InitialLimit", "1");
            cacheProps.setProperty("ValidateConnection", "true");
            poolDataSource.setConnectionCachingEnabled(true);
            occm = OracleConnectionCacheManager.getConnectionCacheManagerInstance();
            occm.createCache(CACHE_NAME, poolDataSource, cacheProps);
            occm.enableCache(CACHE_NAME);
        } catch (SQLException se) {
            throw new DataException("SQL Exception while initializing connection pool");
        } catch (Exception e) {
            throw new DataException("Exception while initializing connection pool");
        }
    }

    public static Connection getConnection() throws DataException {
        try {
            if (poolDataSource == null) {
                throw new SQLException("OracleDataSource is null.");
            }
            occm.refreshCache(CACHE_NAME, OracleConnectionCacheManager.REFRESH_INVALID_CONNECTIONS);
            Connection connection = poolDataSource.getConnection();
            return connection;
        } catch (SQLException se) {
            se.printStackTrace();
            throw new DataException("Exception while getting Connection object");
        } catch (Exception e) {
            e.printStackTrace();
            throw new DataException("Exception while getting Connection object");
        }
    }

    public static void closePooledConnections() {
        try {
            if (poolDataSource != null) {
                poolDataSource.close();
            }
        } catch (SQLException se) {
        } catch (Exception e) {
        }
    }
}
The error is as follows:
ConnectionManager.java:getConnection:87 - Exception while getting Connection object:
java.sql.SQLException: Invalid or Stale Connection found in the Connection Cache
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:208)
at oracle.jdbc.pool.OracleImplicitConnectionCache.getConnection(OracleImplicitConnectionCache.java:390)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:404)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:189)
What am I missing?
Maybe you need to turn keep-alives on? What this does is periodically, when the connection is not in use, send a ping to the database server basically saying "I am still here, don't close me out." This is not that fun to debug, though. The problem could be a setting on your database server, such as a maximum connection age or a timeout that kills idle connections. There could also be settings in your pool that check for this and simply fetch a new connection when it happens. I wish I could be more help, but I have not worked with Oracle.
Instead of using OracleDataSource + OracleConnectionCacheManager, I would recommend using OracleOCIConnectionPool, which was specifically designed for caching OCI connections.
It is a drop-in replacement for OracleDataSource, except that the PoolConfig properties for OracleDataSource and OracleOCIConnectionPool differ a bit.
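A rough sketch of the swap (the URL and credentials are placeholders; the CONNPOOL_* keys are constants defined on oracle.jdbc.pool.OracleOCIConnectionPool):
OracleOCIConnectionPool pool = new OracleOCIConnectionPool();
pool.setURL("jdbc:oracle:oci:@myTnsAlias"); // placeholder OCI URL
pool.setUser("user");
pool.setPassword("password");

Properties poolConfig = new Properties();
poolConfig.setProperty(OracleOCIConnectionPool.CONNPOOL_MIN_LIMIT, "1");
poolConfig.setProperty(OracleOCIConnectionPool.CONNPOOL_MAX_LIMIT, "5");
poolConfig.setProperty(OracleOCIConnectionPool.CONNPOOL_INCREMENT, "1");
pool.setPoolConfig(poolConfig);

Connection connection = pool.getConnection();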
You will get the "Invalid or Stale Connection" error when a connection in the connection pool is no longer actively connected to the database. Below are a few scenarios which can lead to this:
- The connection was manually aborted from the database by a DBA, for example killed using "ALTER SYSTEM KILL SESSION".
- A connection sat in the connection pool without being used for a long time and was disconnected due to the timeouts enforced by the database (IDLE_TIME).
- A database restart.
- A network event caused the connection to drop, probably because the network became unavailable or a firewall dropped a connection that had been open for too long.
Run the query below to determine the IDLE_TIME enforced by the database:
select * from dba_profiles dp, dba_users du
where dp.profile = du.profile and du.username = 'YOUR_JDBC_USER_NAME';
Now try with the below configuration:
Properties cacheProps = new Properties();
cacheProps.setProperty("MinLimit", "0");
cacheProps.setProperty("MaxLimit", "5");
cacheProps.setProperty("InitialLimit", "1");
cacheProps.setProperty("ValidateConnection", "true");
// something lower than the DB IDLE_TIME
cacheProps.setProperty("InactivityTimeout", "17000");
// something lower than the InactivityTimeout, to make sure that connections which were
// inactive for more than InactivityTimeout are always removed from the pool
cacheProps.setProperty("PropertyCheckInterval", "16000");
