AWS IAM Database Authentication using a Java connection pool

I am looking for a Java database connection pool that allows me to use AWS IAM Database Authentication for my Aurora MySQL instance. The pool should be configurable through a Tomcat context.xml file.
I have looked at Tomcat DBCP, dbcp2, HikariCP and c3p0, but they all seem to assume that the username and password are known at application startup and do not change during the lifetime of the application.
For IAM database authentication the credentials change every 15 minutes, so the pool needs to request new credentials from AWS IAM whenever it creates new connections (the credentials could be cached for a few minutes).
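For reference, the token itself can be generated with the AWS SDK for Java; this is a rough sketch (assuming the v1 SDK's aws-java-sdk-rds module; the region, host and user are placeholders):
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;

public class IamTokenSource {
    private static final RdsIamAuthTokenGenerator GENERATOR = RdsIamAuthTokenGenerator.builder()
            .credentials(new DefaultAWSCredentialsProviderChain())
            .region("eu-west-1") // placeholder region
            .build();

    // Returns a short-lived token that is passed to the driver in place of the password.
    static String freshToken(String host, int port, String user) {
        return GENERATOR.getAuthToken(GetIamAuthTokenRequest.builder()
                .hostname(host)
                .port(port)
                .userName(user)
                .build());
    }
}
So the missing piece is a pool that calls something like freshToken() each time it opens a physical connection.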
Is this implemented in any Java connection pool? Or do you have an idea of how to get this to work?

I've also had to face this problem, using a Node.js Lambda and MySQL RDS. We were using a MySQL connection pool, so we implemented a solution that created a future date-time we could check, whenever a connection was requested from the pool, to see if the connections were about to expire. This date-time was 15 minutes (minus some jitter) after the connection pool was initialized.
So getting the connection pool (to get a connection) would look like:
const getPool = async (): Promise<DbConnectionPool> => {
  if (isRdsIamTokenCloseToExpiring()) {
    await poolHolder.lock.acquire();
    try {
      // if, after having acquired the lock, the token is still about to expire...
      if (isRdsIamTokenCloseToExpiring()) {
        await closeConnectionsInPool();
        await initializeConnectionPool();
      }
    } finally {
      poolHolder.lock.release();
    }
  }
  if (!poolHolder.pool) {
    throw new Error('pool holder is null - this should never happen');
  } else {
    return poolHolder.pool;
  }
};
Because we had multiple concurrent async operations trying to get a connection, we had to introduce a semaphore to control the pool re-initialization. All in all, having to do this was more cumbersome than using a username & password, but it is more secure.
To answer Isen Ng's comment above (I don't have the rep to answer directly): connections whose RDS IAM token has expired will stop working.

I had the same problem recently. I use the HikariCP connection pool, and so far it has no built-in support for this. Fortunately I found a PR that adds it:
https://github.com/brettwooldridge/HikariCP/pull/1335
I recommend making a fork of the project and using it until the official repository accepts this PR.
My implementation looks like this:
public DataSource setup() throws Exception {
    Supplier<String> passwordSupplier = () -> {
        return this.generateAuthToken(host, port, user);
    };
    com.zaxxer.hikari.HikariDataSource dataSource = new com.zaxxer.hikari.HikariDataSource();
    dataSource.setPasswordSupplier(passwordSupplier); // setPasswordSupplier comes from the PR above
    ...
It's very important to include this in your pool configuration:
dataSource.setMaxLifetime(15 * 60 * 1000);
because with RDS IAM auth your pooled connections can't live longer than 15 minutes.
Good luck.

I know this is an older question, but after some searching I found a pretty easy way you can now do this using the MariaDB driver. In version 2.5 they added an AWS IAM credential plugin to the driver, which handles generating, caching and refreshing the token automatically. You can activate it like this:
jdbc:mariadb://host/db?credentialType=AWS-IAM&useSsl&serverSslCert=/somepath/rds-combined-ca-bundle.pem
I've tested this with the HikariCP connection pool and it is working for me. Make sure you are using the MariaDB driver (not MySQL) and set maxLifetime to 600000 ms (the driver caches tokens for 10 minutes).
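For completeness, wiring this up programmatically (rather than via context.xml) would look roughly like this; a sketch, with the JDBC URL taken from above and a placeholder IAM database user:
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class IamPool {
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setDriverClassName("org.mariadb.jdbc.Driver"); // MariaDB driver, not MySQL
        config.setJdbcUrl("jdbc:mariadb://host/db?credentialType=AWS-IAM"
                + "&useSsl&serverSslCert=/somepath/rds-combined-ca-bundle.pem");
        config.setUsername("iam_db_user"); // placeholder IAM database user
        config.setMaxLifetime(600_000);    // match the driver's 10-minute token cache
        return new HikariDataSource(config);
    }
}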
Hope this helps someone else - most examples I found online involved custom code, background threads, etc - but using the new driver feature is much easier!

Related

Vert.x issue while inserting data into SQL Server

I am trying to insert data into a table in SQL Server hosted on AWS RDS.
It was working fine, and suddenly I started getting an issue. It seems to be intermittent, but I am unable to see why it is happening:
Fail to read any response from the server, the underlying connection might get lost unexpectedly.
This is how I am creating the database connection:
public static MSSQLPool createMssqlDbPool(Vertx vertx, ConfigModel configModel) {
    MSSQLConnectOptions connectOptions = new MSSQLConnectOptions()
        .setHost(System.getenv().getOrDefault("DB_HOST", configModel.getDbConfig().getHost()))
        .setPort(Integer.parseInt(System.getenv().getOrDefault("DB_PORT", configModel.getDbConfig().getPort())))
        .setDatabase(System.getenv().getOrDefault("DB_NAME", configModel.getDbConfig().getDatabase()))
        .setUser(System.getenv().getOrDefault("DB_USER", configModel.getDbConfig().getUser()))
        .setPassword(System.getenv().getOrDefault("DB_PASSWORD", configModel.getDbConfig().getPassword()));
    // Pool options
    PoolOptions poolOptions = new PoolOptions()
        .setMaxSize(4);
    LOG.info("DB connection : {}", connectOptions.toJson());
    return MSSQLPool.pool(vertx, connectOptions, poolOptions);
}
I have read threads on GitHub about adding a timeout, but they are not definitive.
If you see this error, it is likely that a pooled connection has been idle for too long and was closed by some intermediate proxy.
Change your pool options to close idle connections eagerly:
// Pool options
PoolOptions poolOptions = new PoolOptions()
    .setMaxSize(4)
    .setIdleTimeout(5)
    .setIdleTimeoutUnit(TimeUnit.MINUTES);
5 minutes is just an example; the idea is to keep the idle timeout shorter than the timeout of whatever intermediary sits between your application and the database.

Why won't my Java JDBC to Firebird database connection disconnect?

I am using the following code to connect to a Firebird database:
public static Connection dbStatic;
...
public void getConnection() {
    FBWrappingDataSource DataSource = new FBWrappingDataSource();
    DataSource.setDatabase("localhost/3050:C:/MyDatabase.FDB");
    DataSource.setDescription("TNS Development Database");
    DataSource.setType("TYPE4");
    DataSource.setEncoding("ISO8859_1");
    DataSource.setLoginTimeout(10);
    try {
        dbStatic = DataSource.getConnection("UserName", "Password");
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
...and the following to disconnect:
...
dbStatic.close();
...
I am using Firebird 2.1 running on a Windows 7 32-bit machine, with Java version 1.7, Jaybird version 2.2.8 and Tomcat version 7.xx, also on Win7 32-bit. The browser is Chrome (a newish version) running on Win XP SP3.
I use a third party tool called IBExpert to look at the number of connections and/or I run this statement:
select * from mon$attachments;
When I look at the number of connections to the database after the .close() statement runs, the number does not decrease. Why is that? The number of connections does decrease if I wait long enough, or if the Tomcat server is restarted. Closing the browser does not affect the connections.
As Andreas already pointed out in the comments, FBWrappingDataSource is a connection pool. This means that the pool keeps physical connections open, and it hands out logical connections backed by the physical connections in the connection pool. Once you call close() on that logical connection, the physical connection is returned to the pool as available for reuse. The physical connection remains open.
If you want to close all connections, you need to call FBWrappingDataSource.shutdown(). This closes all physical connections that are not currently in use(!), and marks the data source as shutdown.
However, everything in the org.firebirdsql.pool package should be considered deprecated; it will be removed in Jaybird 3. See Important changes to Datasources.
If you just want a data source, use org.firebirdsql.pool.FBSimpleDataSource (with Jaybird 3 you will need to use org.firebirdsql.ds.FBSimpleDataSource instead).
If you want connection pooling, use a third party connection pool library like HikariCP, DBCP or c3p0.
That said, I want to point out several things you should consider:
Jaybird 2.2.8 is not the latest version, consider upgrading to 2.2.12, the current latest release of Jaybird.
Using a static field for a connection is generally not a good idea (especially with a web application); consider whether that is really what your design needs. You might be better off making the data source the static field, and obtaining (and closing!) connections for a unit of work (i.e. one request); see the sketch after this list. It might also indicate that it would be simpler for you to just use DriverManager to create the connection.
Naming conventions: your variable DataSource should be called dataSource if you follow the common Java conventions.
setLoginTimeout(Integer.parseInt(10)) should lead to a compilation error, as there is no Integer.parseInt method that takes an int, and setLoginTimeout itself already accepts an int.
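A minimal sketch of the static-data-source suggestion, assuming Jaybird 2.2's FBSimpleDataSource (the database path and credentials are the ones from the question):
import java.sql.Connection;
import java.sql.SQLException;
import org.firebirdsql.pool.FBSimpleDataSource;

public class Database {
    // The data source, not a connection, is the static field.
    private static final FBSimpleDataSource DATA_SOURCE = new FBSimpleDataSource();

    static {
        DATA_SOURCE.setDatabase("localhost/3050:C:/MyDatabase.FDB");
        DATA_SOURCE.setType("TYPE4");
        DATA_SOURCE.setEncoding("ISO8859_1");
    }

    // One connection per unit of work; try-with-resources closes it when the work is done.
    public static void doUnitOfWork() throws SQLException {
        try (Connection connection = DATA_SOURCE.getConnection("UserName", "Password")) {
            // ... execute statements for this request ...
        }
    }
}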

Play framework resource starvation after a few days

I am experiencing an issue in Play 2.5.8 (Java) where database-related service endpoints start timing out after a few days, even though the server CPU & memory usage look fine. Endpoints that do not access the DB continue to work perfectly.
The application runs on a t2.medium EC2 instance with a t2.medium MySQL RDS, both in the same availability zone. Most HTTP calls do lookups/updates to the database at around 8-12 requests per second, and there are also ±800 WebSocket connections/actors with ±8 requests per second (90% of the WebSocket messages do not access the database). DB operations are mostly simple lookups & updates taking around 100 ms.
When using only the default thread pool it took about 2 days to reach the deadlock; after moving the database requests to a separate thread pool as per https://www.playframework.com/documentation/2.5.x/ThreadPools#highly-synchronous, it improved, but only to about 4 days.
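For reference, this is roughly how the blocking DB calls are dispatched onto that separate pool (a sketch; the "akka.db-dispatcher" lookup path is my assumption of how it matches the config below):
import akka.actor.ActorSystem;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executor;

public class DbExecution {
    // Runs a blocking JDBC lookup on the dedicated db-dispatcher instead of the default pool.
    public static CompletionStage<String> findName(ActorSystem system, long userId) {
        Executor dbExecutor = system.dispatchers().lookup("akka.db-dispatcher");
        return CompletableFuture.supplyAsync(() -> blockingJdbcLookup(userId), dbExecutor);
    }

    private static String blockingJdbcLookup(long userId) {
        return "..."; // placeholder for the actual ~100 ms JDBC query
    }
}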
This is my current thread config in application.conf:
akka {
  actor {
    guardian-supervisor-strategy = "actors.RootSupervisionStrategy"
  }
  loggers = ["akka.event.Logging$DefaultLogger",
             "akka.event.slf4j.Slf4jLogger"]
  loglevel = WARNING
  ## This pool handles all HTTP & WebSocket requests
  default-dispatcher {
    executor = "thread-pool-executor"
    throughput = 1
    thread-pool-executor {
      fixed-pool-size = 64
    }
  }
  db-dispatcher {
    type = Dispatcher
    executor = "thread-pool-executor"
    throughput = 1
    thread-pool-executor {
      fixed-pool-size = 210
    }
  }
}
Database configuration:
play.db.pool="default"
play.db.prototype.hikaricp.maximumPoolSize=200
db.default.driver=com.mysql.jdbc.Driver
I have played around with the number of connections in the DB pool and with the sizes of the default & db-dispatcher pools, but it doesn't seem to make any difference. It feels like I'm missing something fundamental about Play's thread pools & configuration, as I don't think this load should be an issue for Play to handle.
After more investigation I found that the issue is not related to thread pool configuration at all, but rather to TCP connections that build up, due to WebSocket reconnections, until the server (or the Play framework) cannot accept any more connections. When this happens, only established TCP connections are serviced, which mostly means the established WebSocket connections.
I have not yet determined why the connections are not managed/closed properly.
My issue relates to this question:
Play 2.5 WebSocket Connection Build

Error: System Resource Exceeded

A console application executing under:
1). multiple threads, and
2). connection pooling (the number of database connections can range from 5 to 30), against Microsoft Access databases, using DBCP.
While executing this application at my end it works fine (I have not tested the database limit). But whenever I deploy the same application on one of the other machines, it generates an error.
I'm wondering why this is happening, as the only difference here is the machine; it works perfectly at my end.
I don't know much about connection pooling, but it seems I have implemented whatever I understood of it, as follows:
public class TestDatabases implements Runnable {
    public static Map<String, Connection> correctDatabases;

    @Override
    public void run() {
        // validating the databases using DBCP:
        // obtain the java.sql.Connection from the DataSource
        datasource.getConnection();
        // if validated successfully, put it in correctDatabases
    }
}
The above is implemented using an ExecutorService whose size equals the number of databases.
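Roughly like this (a sketch; connectionUrls and a TestDatabases variant that takes the URL are simplifications of my actual code):
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One validation task per database; the pool is sized to the number of databases.
List<String> connectionUrls = loadConnectionUrls(); // hypothetical helper
ExecutorService executor = Executors.newFixedThreadPool(connectionUrls.size());
for (String url : connectionUrls) {
    executor.submit(new TestDatabases(url)); // each task validates one database
}
executor.shutdown();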
Finally, I'm putting them in a static collection of type Map<String, Connection> and making use of it throughout the application. In other words, I'm collecting the connection string along with its Connection in a map.
In other parts of my application I'm simply dealing with multiple threads coming along with the connection URL. So, to perform any database operations, I'm calling:
Connection con = TestDatabases.correctDatabases.get(connectUrl);
On that machine, this application works fine for around ~5 databases, and the error is always generated when I try to fire a query using the above Connection (con), as stmt.executeQuery(query);
As I'm not able to reproduce this issue at my end, it seems something is going wrong with the connection pooling, or I have not configured my application to deal with connection pooling correctly.
Just for your information: I'm correctly closing the Connection in a finally block where my application terminates, and this application uses the Quartz Scheduler as well. For connection pooling, the following setUp method from the TestDatabases class is called:
public synchronized DataSource setUp() throws Exception {
    Class.forName(RestConnectionValidator.prop.getProperty("driverClass")).newInstance();
    log.debug("Class Loaded.");
    connectionPool = new GenericObjectPool();
    log.debug("Connection pool made.");
    connectionPool.setMaxActive(100);
    ConnectionFactory cf = new DriverManagerConnectionFactory(
        RestConnectionValidator.prop.getProperty("connectionUrl") + new String(get().toString().trim()),
        "", "");
    PoolableConnectionFactory pcf =
        new PoolableConnectionFactory(cf, connectionPool, null, null, false, true);
    return new PoolingDataSource(connectionPool);
}
Following is the error I'm getting (on the other machine):
java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] System resource exceeded.
Following is the Database Path:
jdbc:odbc:DRIVER= {Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\\DataSources\\PR01.mdb
Each of those databases is not very heavy (~5 to 15 MB in total size).
So, I'm left with the following solutions:
1). fix the connection pooling, or migrate to a newer pool such as c3p0, DBPool or BoneCP;
2). introduce a batch concept, in which I schedule my application for each group of 4 databases. That could be very expensive to deal with, as at any time the other schedule may also collapse.
I'm pretty sure this is a Java-related error, but I can't fathom why.
I have since migrated to BoneCP, which solved my problem. I guess that, due to the multi-threaded environment, DBCP was not providing connections from the pool but was instead trying to hit the database again and again. Maybe I could have solved the DBCP issue, but migrating to BoneCP also brings a performance advantage.
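For anyone making the same move, the replacement for the setUp() method above looked roughly like this (a sketch; the pool sizes are placeholders):
import com.jolbox.bonecp.BoneCP;
import com.jolbox.bonecp.BoneCPConfig;

public synchronized BoneCP setUp() throws Exception {
    Class.forName(RestConnectionValidator.prop.getProperty("driverClass")).newInstance();
    BoneCPConfig config = new BoneCPConfig();
    config.setJdbcUrl(RestConnectionValidator.prop.getProperty("connectionUrl"));
    config.setMinConnectionsPerPartition(2);  // placeholder sizing
    config.setMaxConnectionsPerPartition(10);
    config.setPartitionCount(1);
    return new BoneCP(config); // replaces the GenericObjectPool + PoolableConnectionFactory setup
}
Connections are then obtained per task with pool.getConnection() and closed in a finally block, as before.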

Handle HikariCP Oracle connection attempts

Suppose I have a close-to-default HikariCP configuration with maximumPoolSize(5).
The problem I'm faced with is that there are many attempts to connect to the database even after the first one fails. For instance, if the password I use to connect to Oracle is wrong and the connection fails, there are still further attempts to connect, which lock the account as a result.
Question: which HikariCP setting can be used to limit the number of connection attempts to 1?
Thanks for any information!
UPDATE
env.conf:
jdbc {
  test1 {
    datasourceClassName = "oracle.jdbc.pool.OracleDataSource"
    dataSourceUrl = .....jdbc url
    dataSourceUser = USER
    dataSourcePassword = password
    setMaximumPoolSize = 5
    setJdbc4ConnectionTest = true
  }
}
The conf file is read by means of ConfigFactory, and a HikariConfig is created based on it (setDriverClassName etc.).
Output of HikariConfig:
autoCommit.....................true
connectionTimeOut..............30000
idleTimeOut....................600000
initializationFailFast.........false
isolateInternalQueries.........false
jdbc4ConnectionTest............test
maxLifetime....................1800000
minimumIdle....................5
As explained at the end of https://github.com/brettwooldridge/HikariCP/issues/312, HikariCP will keep trying to acquire a connection. The acquireRetries parameter was removed deliberately, so the way forward is to configure the right username/password, since the DB only locks the account after authentication failures.
Here's an extract from the issue; HikariCP intends to retry forever:
Back to acquireRetries... Without a concept of acquireRetries, how
long does the dedicated thread continue to try to create a new
connection? Forever. The background creation thread will continue to
try to add a connection to the pool forever, or until one of three
conditions is met:
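Separately, if the goal is only to make pool startup fail fast instead of retrying in the background (note: this does not cap retries at runtime), newer HikariCP versions expose initializationFailTimeout. A sketch, assuming HikariCP 2.6+ and the settings from the question:
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setDataSourceClassName("oracle.jdbc.pool.OracleDataSource");
config.setUsername("USER");
config.setPassword("password");
// ... url and other data source properties as in env.conf above ...
// Try to obtain an initial connection at startup and abort on failure,
// instead of retrying in the background.
config.setInitializationFailTimeout(1);
HikariDataSource dataSource = new HikariDataSource(config);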
