I am trying to insert data into a table in SQL Server hosted on AWS RDS.
It was working fine until I suddenly started getting the error below. It seems intermittent, but I am unable to see why it is happening:
Fail to read any response from the server, the underlying connection might get lost unexpectedly.
This is how I am creating the database connection:
public static MSSQLPool createMssqlDbPool(Vertx vertx, ConfigModel configModel) {
    MSSQLConnectOptions connectOptions = new MSSQLConnectOptions()
            .setHost(System.getenv().getOrDefault("DB_HOST", configModel.getDbConfig().getHost()))
            .setPort(Integer.parseInt(System.getenv().getOrDefault("DB_PORT", configModel.getDbConfig().getPort())))
            .setDatabase(System.getenv().getOrDefault("DB_NAME", configModel.getDbConfig().getDatabase()))
            .setUser(System.getenv().getOrDefault("DB_USER", configModel.getDbConfig().getUser()))
            .setPassword(System.getenv().getOrDefault("DB_PASSWORD", configModel.getDbConfig().getPassword()));

    // Pool options
    PoolOptions poolOptions = new PoolOptions()
            .setMaxSize(4);

    LOG.info("DB connection : {}", connectOptions.toJson());
    return MSSQLPool.pool(vertx, connectOptions, poolOptions);
}
I have read threads on GitHub about adding a timeout, but they are not definitive.
If you see this error, it is likely that a pooled connection sat idle for too long and was closed by some intermediate proxy.
Change your pool options to close idle connections eagerly:
// Pool options
PoolOptions poolOptions = new PoolOptions()
        .setMaxSize(4)
        .setIdleTimeout(5)
        .setIdleTimeoutUnit(TimeUnit.MINUTES);
5 minutes is just an example; pick a value shorter than the idle timeout of whatever proxy sits between your application and the database, so the pool retires an idle connection before the proxy can silently drop it.
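Applied to the factory method from the question, it is a two-line change on the pool options. A minimal sketch (the five-minute value is illustrative, and TimeUnit is java.util.concurrent.TimeUnit):

import java.util.concurrent.TimeUnit;

public static MSSQLPool createMssqlDbPool(Vertx vertx, ConfigModel configModel) {
    MSSQLConnectOptions connectOptions = ...; // unchanged from the question

    PoolOptions poolOptions = new PoolOptions()
            .setMaxSize(4)
            .setIdleTimeout(5)                     // recycle connections idle for 5 minutes...
            .setIdleTimeoutUnit(TimeUnit.MINUTES); // ...before an intermediary can drop them

    return MSSQLPool.pool(vertx, connectOptions, poolOptions);
}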
I have created a Storm topology which connects to a Redis Cluster using the Jedis library. The Storm component always expects Redis to be up and running; only then does it connect to Redis and subscribe to events. Currently we use Redis's pub-sub strategy.
Below is a code sample that shows my Jedis connectivity to Redis inside Storm.
try {
    jedis.psubscribe(listener, pattern);
} catch (Exception ex) {
    // catch statement here
} finally {
    pool.returnResource(jedis);
}
....
pool = new JedisPool(new JedisPoolConfig(), host, port); // redis host, port
ListenerThread listener = new ListenerThread(queue, pool, pattern);
listener.start();
EXPECTED BEHAVIOUR
Once Redis dies and comes back online, Storm is expected to detect that Redis is available again. It must not need a restart when Redis dies and comes back online.
ACTUAL BEHAVIOUR
Whenever Redis restarts for any reason, I always have to restart the Storm topology as well; only then does it start listening to Redis again.
QUESTION
How can I make Storm listen and reconnect to Redis after Redis is restarted? Any guidance would be appreciated, e.g. docs or forum answers.
Catch the exception for the connection-lost error and set the pool to null.
(Assuming you are doing this in a Spout.) Use an if-else to check whether pool is null; if it is, create a new JedisPool() instance and assign it to pool, as in your code:
pool = new JedisPool(new JedisPoolConfig(), host, port); //redis host port
If pool is not null (meaning you are connected), continue your work; a sketch of the whole guard follows.
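Putting that together, a minimal sketch reusing the names from the question (the exception type assumes a reasonably recent Jedis; adjust for your version):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.exceptions.JedisConnectionException;

if (pool == null) {                              // a previous failure cleared the pool
    pool = new JedisPool(new JedisPoolConfig(), host, port);
}
try {
    Jedis jedis = pool.getResource();
    jedis.psubscribe(listener, pattern);         // blocks while the subscription is alive
} catch (JedisConnectionException ex) {          // connection lost
    pool.destroy();                              // discard the stale pool...
    pool = null;                                 // ...so the next pass recreates it
}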
This is a common issue with apache-storm where the connection thread stays alive in a stale state even though the source you are consuming from is down or has been restarted. Ideally it should retry to create a new connection thread instead of reusing the existing one. Hence the idea is to automate this by detecting the exception (e.g. JMSConnectionError in the case of JMS).
Refer to this Failover Consumer Example, which will give you a brief idea of what to do in such cases. (P.S. that example is JMS, which would be Redis in your case.)
The steps would be something like this (a sketch follows the list):
1. Catch the exception on an error or lost connection.
2. Re-initialize the connection from the catch block (if it was not voluntarily closed by the program).
3. If the exception occurs again, go to step 1.
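For the Jedis case, a minimal sketch of that loop (the running flag is illustrative, and in practice you would add a short back-off sleep before re-initializing):

while (running) {
    Jedis jedis = null;
    try {
        jedis = pool.getResource();              // may itself fail while Redis is down
        jedis.psubscribe(listener, pattern);     // blocks until the subscription ends
        pool.returnResource(jedis);              // clean exit: hand the resource back
    } catch (JedisConnectionException ex) {      // step 1: error / connection lost
        if (jedis != null) {
            pool.returnBrokenResource(jedis);    // never put a dead resource back
        }
        pool.destroy();                          // step 2: drop the stale pool...
        pool = new JedisPool(new JedisPoolConfig(), host, port); // ...and re-initialize
    }                                            // step 3: the loop repeats from step 1
}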
I am looking for a Java database connection pool that allows me to use AWS IAM Database Authentication for my Aurora MySQL. The pool should be usable from a Tomcat context.xml file.
I have looked at Tomcat DBCP, dbcp2, HikariCP, and c3p0, but they all seem to assume that the username and password are known at application startup and do not change over the lifetime of the application.
With IAM database authentication the credentials change every 15 minutes, so the pool needs to ask AWS IAM for new credentials whenever it creates new connections (the credentials could be cached for a few minutes).
Is this implemented in any Java connection pool? Or do you have an idea on how get this to work?
I've also had to face this problem, using a Node.js lambda and MySQL RDS. We were using a mysql connection pool, so we implemented a solution that created a future date-time, set to 15 minutes (minus some jitter) after the pool was initialized, and checked it whenever a connection was requested to see whether the connections were about to expire.
So getting the connection pool (to get a connection) would look like:
const getPool = async (): Promise<DbConnectionPool> => {
    if (isRdsIamTokenCloseToExpiring()) {
        await poolHolder.lock.acquire();
        try {
            // if, after having acquired the lock, the token is still about to expire...
            if (isRdsIamTokenCloseToExpiring()) {
                await closeConnectionsInPool();
                await initializeConnectionPool();
            }
        } finally {
            poolHolder.lock.release();
        }
    }
    if (!poolHolder.pool) {
        throw new Error('pool holder is null - this should never happen');
    } else {
        return poolHolder.pool;
    }
};
Because we had multiple concurrent async callers trying to get a connection, we had to introduce a semaphore to control the pool re-initialization. All in all, this was more cumbersome than using a username and password, but it is more secure.
To answer Isen Ng's comment above (I don't have the rep to answer directly): connections whose RDS IAM token has expired will stop working.
I had the same problem recently. I use the HikariCP connection pool, and so far it has no built-in support for this. Fortunately I found a PR that adds it:
https://github.com/brettwooldridge/HikariCP/pull/1335
I recommend you fork the project and use that until the official repository accepts the PR.
My implementation looks like this:
public DataSource setup() throws Exception {
    Supplier<String> passwordSupplier = () -> this.generateAuthToken(host, port, user);

    com.zaxxer.hikari.HikariDataSource dataSource = new com.zaxxer.hikari.HikariDataSource();
    dataSource.setPasswordSupplier(passwordSupplier);
    ...
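The generateAuthToken method referenced above can be backed by the AWS SDK. A minimal sketch using the v1 SDK's RdsIamAuthTokenGenerator (the region and credentials provider are assumptions; adjust for your setup):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;

private String generateAuthToken(String host, int port, String user) {
    RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
            .credentials(new DefaultAWSCredentialsProviderChain())
            .region("eu-west-1")                    // assumption: your cluster's region
            .build();
    // The returned token is then used as the database password for up to ~15 minutes
    return generator.getAuthToken(GetIamAuthTokenRequest.builder()
            .hostname(host)
            .port(port)
            .userName(user)
            .build());
}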
It's very important to include this in your pool configuration:
dataSource.setMaxLifetime(15 * 60 * 1000);
because with RDS IAM auth your pooled connections can't live for more than 15 minutes.
Good luck.
I know this is an older question, but after some searching I found a pretty easy way you can now do this using the MariaDB driver. In version 2.5 they added an AWS IAM credential plugin to the driver. It handles generating, caching, and refreshing the token automatically. You can activate it like this:
jdbc:mariadb://host/db?credentialType=AWS-IAM&useSsl&serverSslCert=/somepath/rds-combined-ca-bundle.pem
I've tested it with the HikariCP connection pool and it is working for me. Make sure you are using the MariaDB driver (not MySQL) and set maxLifetime to 600000 ms (the driver caches tokens for 10 minutes).
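For reference, a minimal HikariCP wiring of that URL (a sketch; the username and certificate path are placeholders for your own values):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mariadb://host/db?credentialType=AWS-IAM"
        + "&useSsl&serverSslCert=/somepath/rds-combined-ca-bundle.pem");
config.setUsername("iam_db_user");   // placeholder: your IAM-enabled database user
config.setMaxLifetime(600_000);      // stay inside the driver's 10-minute token cache
HikariDataSource dataSource = new HikariDataSource(config);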
Hope this helps someone else - most examples I found online involved custom code, background threads, etc - but using the new driver feature is much easier!
I am using withRemote to connect my Java application to a Gremlin Server running in AWS with the DynamoDB storage backend. I am getting a connection timeout after a few seconds (~3.3 seconds):
org.apache.tinkerpop.gremlin.process.remote.RemoteConnectionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.nio.channels.ClosedChannelException]]
I need to figure out how to reconnect, which means detecting that the connection is closed; I am not sure how to detect that. I only get the above exception when I use the graph traversal. Is there a way to discover the problem earlier and reconnect, or is there a configuration option that reconnects automatically (like creating a new connection before this one closes) so my application is always connected?
In case you need it, this is how I am creating the connection; currently the connection part is a singleton built when the application starts:
this.graph = EmptyGraph.instance();
GryoMessageSerializerV1d0 gryoMessageSerializerV1d0 = new GryoMessageSerializerV1d0(
        GryoMapper.build().addRegistry(JanusGraphIoRegistry.getInstance()));
this.cluster = Cluster.build().serializer(gryoMessageSerializerV1d0)
        .addContactPoint(configuration.getString("graphDb.host", "localhost"))
        .port(configuration.getInt("graphDb.port", 8182))
        .create();
this.graphTraversalSource = this.graph.traversal().withRemote(DriverRemoteConnection.using(cluster));
I feel like this problem is already solved by the connection.keepAlive configuration option. It defaults to 180 seconds, which is longer than the 60-second idle timeout on your load balancer, so the load balancer drops the connection before the driver ever sends a keep-alive; lower it below the load balancer's timeout.
That said, the driver should be reconnecting on its own. It constantly tries to do that, given the connectionPool.reconnectInterval, but perhaps there is a condition where you're quickly exhausting all the connections to the point of getting that error... not sure. Either way, hopefully the keepAlive setting resolves it.
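A sketch of tuning those knobs on the question's own builder chain (method names per the TinkerPop Java driver; the values are illustrative, not prescriptive):

this.cluster = Cluster.build().serializer(gryoMessageSerializerV1d0)
        .addContactPoint(configuration.getString("graphDb.host", "localhost"))
        .port(configuration.getInt("graphDb.port", 8182))
        .keepAliveInterval(30_000)  // send keep-alives well inside the LB's 60 s idle limit
        .reconnectInterval(1_000)   // how often (ms) the driver retries a dead host
        .create();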
I'm connecting my Java application via the JDBC driver and Tomcat configuration. I used this class to define my configuration, but sometimes I get the following exceptions:
com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
java.sql.SQLException: Query execution was interrupted
java.sql.BatchUpdateException: Statement cancelled due to timeout or client request
There is not much load on the database, though, so I think the problem is with my configuration. Here are some of my settings:
maxActive = 100
minIdle = 10
initialSize = 10
maxWait = 10000
maxIdle = 15
These serve a heavily loaded system, so I need to observe whether my pool size is sufficient, along with things like the available connection count at any given time. Is there a nice way of monitoring the inside of a connection pool?
For the timeout exceptions, consider the following approach:

if (!connection.isValid(5)) {          // 5-second validation timeout
    connection = getNewConnection();   // e.g. re-acquire from your pool
}
ResultSet rs = connection.createStatement().executeQuery(qry);
Basically, check whether your connection has timed out before you execute the query.
If your query was interrupted, there's not much you can do other than roll back the transaction and try again.
Enable JMX and Tomcat will register your data sources with its MBean server. You can then use jconsole to look at the connection pool details.
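If you are on Tomcat's jdbc-pool specifically, you can also read the same counters in code by casting to its DataSource implementation. A sketch (method names per org.apache.tomcat.jdbc.pool; dsFromJndi is a placeholder for the data source you look up):

import org.apache.tomcat.jdbc.pool.DataSource;

DataSource pool = (DataSource) dsFromJndi;   // the javax.sql.DataSource from JNDI
System.out.printf("size=%d active=%d idle=%d waiting=%d%n",
        pool.getSize(),        // connections created so far
        pool.getActive(),      // connections currently handed out
        pool.getIdle(),        // connections sitting idle in the pool
        pool.getWaitCount());  // threads blocked waiting for a connection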
A console application executing under:
1). Multiple threads
2). Connection pooling (as the database connections could range from 5 to 30) of type Microsoft Access, using DBCP.
While executing this application on my machine (I have not tested the database limit) it works fine, but whenever I run the same application on another machine it generates an error.
I'm wondering why this is happening when the only difference is the machine; it works perfectly on mine.
I don't know much about connection pooling, but it seems I have implemented whatever I have understood, as follows:
public class TestDatabases implements Runnable {
    public static Map<String, Connection> correctDatabases;

    @Override
    public void run() {
        // validating the databases using DBCP
        datasource.getConnection(); // obtaining the java.sql.Connection from the DataSource
        // if validated successfully, putting them in correctDatabases
    }
}
The above is run on an ExecutorService sized to the number of databases.
Finally, I'm trying to put the connections into a static collection of type Map<String, Connection> and make use of it throughout the application. In other words, I'm collecting the connection string along with the Connection in a Map.
In other parts of my application I'm simply dealing with multiple threads coming along with the connection URL. So, to perform any database operation, I'm calling:
Connection con = TestDatabases.correctDatabases.get(connectUrl);
On the other machine, this application works fine for around ~5 databases, and the error is always generated when I fire a query using the above Connection (con) via stmt.executeQuery(query);
As I'm not able to reproduce this issue on my machine, it seems something is going wrong with the connection pooling, or I have not configured my application to deal with connection pooling correctly.
Just for your information, I'm correctly closing the Connection in a finally block where my application terminates, and this application uses the Quartz Scheduler as well. For connection pooling, the following is called from the TestDatabases class for setup:
public synchronized DataSource setUp() throws Exception {
    Class.forName(RestConnectionValidator.prop.getProperty("driverClass")).newInstance();
    log.debug("Class loaded.");

    // Object pool that holds the physical connections
    connectionPool = new GenericObjectPool();
    log.debug("Connection pool made.");
    connectionPool.setMaxActive(100);

    // Factory that opens the raw JDBC connections (empty username/password)
    ConnectionFactory cf = new DriverManagerConnectionFactory(
            RestConnectionValidator.prop.getProperty("connectionUrl") + get().toString().trim(),
            "", "");

    // Wires the factory into the pool: no statement pooling, no validation query,
    // defaultReadOnly=false, defaultAutoCommit=true
    PoolableConnectionFactory pcf = new PoolableConnectionFactory(cf, connectionPool,
            null, null, false, true);

    return new PoolingDataSource(connectionPool);
}
Following is the error I'm getting (on the other machine):
java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] System resource exceeded.
Following is the Database Path:
jdbc:odbc:DRIVER= {Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\\DataSources\\PR01.mdb
Each of those databases is not very heavy (~5 to 15 MB in total size).
So, I'm left with the following solutions:
1). Fix the connection pooling, or migrate to a newer pool such as c3p0, DBPool, or BoneCP.
2). Introduce a batch concept, in which I schedule my application for each group of 4 databases. That could be very expensive to deal with, as at any time another schedule may also collapse.
I'm pretty sure this is a Java-related error, but I can't fathom why.
Just finished the migration to BoneCP, which solved my problem. I guess that, in the multi-threaded environment, DBCP was not handing out connections from the pool but rather was trying to hit the database again and again. Maybe I could have solved the DBCP issue, but migrating to BoneCP also brings a performance advantage.
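For anyone landing here, a minimal sketch of the equivalent BoneCP setup (class names per the com.jolbox.bonecp API; the pool sizes mirror the 5-30 connection range from the question):

import com.jolbox.bonecp.BoneCPConfig;
import com.jolbox.bonecp.BoneCPDataSource;

BoneCPConfig config = new BoneCPConfig();
config.setJdbcUrl("jdbc:odbc:DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\\DataSources\\PR01.mdb");
config.setMinConnectionsPerPartition(5);    // lower bound from the question
config.setMaxConnectionsPerPartition(30);   // upper bound from the question
config.setPartitionCount(1);
BoneCPDataSource dataSource = new BoneCPDataSource(config);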