I am using the following code to connect to a Firebird database:
public static Connection dbStatic;
...
public void getConnection() {
    FBWrappingDataSource DataSource = new FBWrappingDataSource();
    DataSource.setDatabase("localhost/3050:C:/MyDatabase.FDB");
    DataSource.setDescription("TNS Development Database");
    DataSource.setType("TYPE4");
    DataSource.setEncoding("ISO8859_1");
    DataSource.setLoginTimeout(10);
    try {
        dbStatic = DataSource.getConnection("UserName", "Password");
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
...and the following to disconnect:
...
dbStatic.close();
...
I am using Firebird 2.1 running on a Windows 7 32-bit machine, with Java version 1.7, Jaybird version 2.2.8, and Tomcat version 7.xx running on Win7 32-bit. The browser is Chrome (some newish version) running on Win XP SP3.
I use a third party tool called IBExpert to look at the number of connections and/or I run this statement:
select * from mon$attachments;
When I look at the number of connections to the database after the .close() statement runs, the number does not decrease. Why is that? The number of connections does decrease if I wait long enough, or if the Tomcat server is restarted. Closing the browser does not affect the connections.
As Andreas already pointed out in the comments, FBWrappingDataSource is a connection pool. This means that the pool keeps physical connections open, and it hands out logical connections backed by the physical connections in the connection pool. Once you call close() on that logical connection, the physical connection is returned to the pool as available for reuse. The physical connection remains open.
If you want to close all connections, you need to call FBWrappingDataSource.shutdown(). This closes all physical connections that are not currently in use(!), and marks the data source as shutdown.
However, everything in package org.firebirdsql.pool should be considered deprecated; it will be removed in Jaybird 3. See Important changes to Datasources
If you just want a data source, use org.firebirdsql.pool.FBSimpleDataSource (with Jaybird 3 you will need to use org.firebirdsql.ds.FBSimpleDataSource instead); a short sketch follows below.
If you want connection pooling, use a third party connection pool library like HikariCP, DBCP or c3p0.
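For example, a plain (non-pooling) setup with FBSimpleDataSource could look roughly like the following. This is only a sketch based on the properties used in the question; the helper class name is illustrative, and the database path, encoding and credentials need to be adjusted to your environment:

import java.sql.Connection;
import java.sql.SQLException;
import org.firebirdsql.pool.FBSimpleDataSource; // org.firebirdsql.ds.FBSimpleDataSource in Jaybird 3

public class FirebirdConnectionHelper {
    private static final FBSimpleDataSource dataSource = new FBSimpleDataSource();

    static {
        dataSource.setDatabase("localhost/3050:C:/MyDatabase.FDB");
        dataSource.setEncoding("ISO8859_1");
    }

    public static Connection openConnection() throws SQLException {
        // Every call opens a new physical connection; close() really closes it.
        return dataSource.getConnection("UserName", "Password");
    }
}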
That said, I want to point out several things you should consider:
Jaybird 2.2.8 is not the latest version, consider upgrading to 2.2.12, the current latest release of Jaybird.
Using a static field for a connection is generally not a good idea (especially with a web application); consider whether that is really what you need. You might be better off making the data source the static field, and obtaining (and closing!) connections for a unit of work (i.e. one request); see the sketch after this list. It might also indicate that it would be simpler for you to just use DriverManager to create the connection.
Naming conventions: your variable DataSource should be called dataSource if you follow the common Java conventions.
setLoginTimeout(Integer.parseInt(10)) should lead to a compilation error, as there is no method Integer.parseInt that takes an int, and the method itself already accepts an int.
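To illustrate the per-request pattern mentioned above (static data source, short-lived connections), a minimal sketch could look like this; the DAO class and the customer table are made up for the example:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class CustomerDao {
    // The long-lived, shared object is the data source, not a connection.
    private static DataSource dataSource; // initialized once at application startup

    public int countCustomers() throws SQLException {
        // Obtain a connection per unit of work; try-with-resources closes it
        // (returning it to the pool, if the data source is pooled) even on exceptions.
        try (Connection con = dataSource.getConnection();
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("select count(*) from customer")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}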
Related
I am looking for a Java database connection pool that allows me to use AWS IAM Database Authentication for my Aurora MySQL database. The pool should be able to work with a Tomcat context.xml file.
I have looked at Tomcat DBCP, dbcp2, HikariCP and c3p0, but they all seem to assume the username and password are known at application startup and do not change during the lifetime of the application.
For IAM database authentication the credentials change every 15 minutes, so the pool needs to ask AWS IAM for new credentials whenever it creates new connections (the credentials could be cached for a few minutes).
Is this implemented in any Java connection pool? Or do you have an idea of how to get this to work?
I've also had to face this problem using a Node.js Lambda and MySQL RDS. We were using a mysql connection pool, so we implemented a solution that created a future date-time we could check, whenever a connection was requested from the pool, to see if the connections were about to expire. This date-time was 15 minutes (minus some jitter) after the connection pool was initialized.
So getting the connection pool (to get a connection) would look like:
const getPool = async (): Promise<DbConnectionPool> => {
  if (isRdsIamTokenCloseToExpiring()) {
    await poolHolder.lock.acquire();
    try {
      // if, after having acquired the lock, the token is still about to expire...
      if (isRdsIamTokenCloseToExpiring()) {
        await closeConnectionsInPool();
        await initializeConnectionPool();
      }
    } finally {
      poolHolder.lock.release();
    }
  }
  if (!poolHolder.pool) {
    throw new Error('pool holder is null - this should never happen');
  } else {
    return poolHolder.pool;
  }
};
Because we had multiple concurrent async threads trying to get a connection, we had to introduce a semaphore to control the pool re-initialization. All in all, having to do this was more cumbersome than using a username & password, but it is more secure.
To answer Isen Ng's comment above (I don't have the rep to answer directly), connections whose RDS IAM token expires will stop working.
I had the same problem recently... I use the HikariCP connection pool and, as of now, it does not have support for this. Fortunately I found a PR that adds this feature:
https://github.com/brettwooldridge/HikariCP/pull/1335
I recommend you fork the project and use the fork until the official repository accepts this PR.
My implementation of this:
public DataSource setup() throws Exception {
    Supplier<String> passwordSupplier = () -> {
        return this.generateAuthToken(host, port, user);
    };
    com.zaxxer.hikari.HikariDataSource dataSource = new com.zaxxer.hikari.HikariDataSource();
    dataSource.setPasswordSupplier(passwordSupplier);
    ...
It's very important to include this in your pool configuration:
dataSource.setMaxLifetime(15 * 60 * 1000);
because your pool connections can't live for more than 15 minutes with RDS IAM auth.
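For reference, the generateAuthToken helper used by the password supplier could be implemented roughly like this with the AWS SDK for Java v1; the region and the credentials provider chain are assumptions for the example:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;

public class RdsIamTokenHelper {
    // Generates a short-lived IAM auth token to be used as the database password.
    public static String generateAuthToken(String host, int port, String user) {
        RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
                .credentials(new DefaultAWSCredentialsProviderChain())
                .region("us-east-1") // assumption: the region of your Aurora cluster
                .build();
        return generator.getAuthToken(GetIamAuthTokenRequest.builder()
                .hostname(host)
                .port(port)
                .userName(user)
                .build());
    }
}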
Good luck.
I know this is an older question, but after some searching I found a pretty easy way you can now do this using the MariaDB driver. In version 2.5 they added an AWS IAM credential plugin to the driver. It will handle generating, caching and refreshing the token automatically. You can activate it like this:
jdbc:mariadb://host/db?credentialType=AWS-IAM&useSsl&serverSslCert=/somepath/rds-combined-ca-bundle.pem
I've tested using HikariCP connection pool and it is working for me. Make sure you are using the MariaDB driver (not MySQL) and set maxLifetime to 600000ms (driver caches tokens for 10 minutes).
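For illustration, a HikariCP configuration along these lines might look like the following sketch; the host, database, certificate path and user name are placeholders:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class MariaDbIamPoolExample {
    public static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        // The MariaDB driver generates, caches and refreshes the IAM token itself.
        config.setJdbcUrl("jdbc:mariadb://myhost/mydb?credentialType=AWS-IAM"
                + "&useSsl&serverSslCert=/somepath/rds-combined-ca-bundle.pem");
        config.setUsername("iam_db_user"); // placeholder: your IAM-enabled DB user
        config.setMaxLifetime(600000);     // 10 minutes, matching the driver's token cache
        return new HikariDataSource(config);
    }
}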
Hope this helps someone else - most examples I found online involved custom code, background threads, etc - but using the new driver feature is much easier!
I'm using the latest Derby, 10.11.1.1.
Doing something like this:
DriverManager.registerDriver(new org.apache.derby.jdbc.EmbeddedDriver());
java.sql.Connection connection = DriverManager.getConnection("jdbc:derby:filePath", ...);
Statement stmt = connection.createStatement();
stmt.setQueryTimeout(2); // shall stop the query after 2 seconds, but it does nothing
stmt.executeQuery(strSql);
stmt.cancel(); // this would in fact run in another thread
I get exception "java.sql.SQLFeatureNotSupportedException: Caused by: ERROR 0A000: Feature not implemented: cancel"
Do you know if there is a way to make this work? Or is it really not implemented in Derby, so that I would need to use a different embedded database? Any tip for a free DB which I could use instead of Derby and which supports SQL timeouts?
As given in the Java docs:
void cancel() throws SQLException
Cancels this Statement object if both the DBMS and driver support aborting an SQL statement. This method can be used by one thread to cancel a statement that is being executed by another thread.
and it throws
SQLFeatureNotSupportedException - if the JDBC driver does not support this method
You can go with MySQL.
There are many embedded databases available; you can go through this list:
embedded database
If you get Feature not implemented: cancel, then that is definitive: cancel is not supported.
From this post by H2's author it looks like H2 supports two ways to timeout your queries, both through the JDBC API and through a setting on the JDBC URL.
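For example, the JDBC API route with H2 would look something like this sketch; the database URL and the query are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class H2TimeoutExample {
    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection("jdbc:h2:./testdb");
             Statement stmt = connection.createStatement()) {
            stmt.setQueryTimeout(2); // cancel the query if it runs longer than 2 seconds
            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM my_table")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }
}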
Actually, I found that there is a deadlock timeout in Derby as well; it is just set to 60 seconds by default, and I never have the patience to reach it :).
So the correct answer would be:
stmt.setQueryTimeout(2); truly seems not to work
stmt.cancel(); truly seems not to be implemented
But luckily a timeout does exist in the database manager, and it is set to 60 seconds. See Derby dead-locks.
The time can be changed using this command:
statement.executeUpdate("CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY(" +
"'derby.locks.waitTimeout', '5')");
And it works :)
A console application executing under:
1). Multiple threads
2). Connection pooling (the number of database connections could range from 5 to 30) of Microsoft Access databases, using DBCP.
While executing this application at my end (I have not tested the database limit) it works fine, but whenever I try to run the same application on another machine it generates an error.
I'm wondering why this is happening, as the only difference here is the machine; it works perfectly at my end.
I don't know much about connection pooling, but it seems I have implemented whatever I have understood, as:
public class TestDatabases implements Runnable {
    public static Map<String, Connection> correctDatabases;

    @Override
    public void run() {
        // validating the databases using DBCP
        datasource.getConnection(); // obtaining the java.sql.Connection from the DataSource
        // if validated successfully, putting them in correctDatabases
    }
}
The above case is implemented using an ExecutorService whose size equals the number of databases.
Finally, I'm trying to put them in a static collection of type Map<String, Connection> and make use of it throughout the application. In other words, I'm trying to collect the connection string along with the Connection in a Map.
In other parts of my application I'm simply dealing with multiple threads coming along with the connection URL. So, to perform any database operation I'm calling:
Connection con = TestDatabases.correctDatabases.get(connectUrl);
For that machine, this application works fine for around ~5 databases, and the error is always generated when I try to fire a query using the above Connection (con) as stmt.executeQuery(query);
As I'm not able to reproduce this issue at my end, it seems something is going wrong with the connection pooling, or I have not configured my application to deal with connection pooling correctly.
Just for your information, I'm correctly performing Connection close in a finally block where my application terminates, and this application uses the Quartz Scheduler as well. For connection pooling, the following setUp method from the TestDatabases class is called:
public synchronized DataSource setUp() throws Exception {
    Class.forName(RestConnectionValidator.prop.getProperty("driverClass")).newInstance();
    log.debug("Class Loaded.");
    connectionPool = new GenericObjectPool();
    log.debug("Connection pool made.");
    connectionPool.setMaxActive(100);
    ConnectionFactory cf = new DriverManagerConnectionFactory(
            RestConnectionValidator.prop.getProperty("connectionUrl") + new String(get().toString().trim()),
            "", "");
    // Wires the connection factory into the pool; the instance itself is not used directly.
    PoolableConnectionFactory pcf =
            new PoolableConnectionFactory(cf, connectionPool, null, null, false, true);
    return new PoolingDataSource(connectionPool);
}
Following is the error I'm getting (on the other machine):
java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] System resource exceeded.
Following is the Database Path:
jdbc:odbc:DRIVER= {Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\\DataSources\\PR01.mdb
Each of those databases is not very heavy (~5 to 15 MB total size).
So, I'm left with the following solutions:
1). Correct the connection pooling, or migrate to a newer pool such as c3p0, DBPool or BoneCP.
2). Introduce a batch concept, in which I schedule my application for each group of 4 databases. This could be very expensive to deal with, as at any time the other schedule may also collapse.
I'm pretty sure that this is a Java-related error, but I can't fathom why.
I have just done the migration to BoneCP, which solved my problem. I guess that, due to the multi-threaded environment, DBCP was not providing connections from the pool but rather was trying to hit the database again and again. Maybe I could have solved the DBCP issue, but migrating to BoneCP also brings a performance advantage.
I have created a desktop application and connected it to a MySQL database with a database connection (bean/class), and I can do CRUD. I have seen on the NetBeans site that they create a connection pool for a web application.
Is a connection pool the same as the class/bean in a desktop application?
Does this mean that I create a bean/class like in a desktop application that is connected to the DB model (MVC), or do I have to do something else?
On a Glassfish server you set up the connection pool with a wizard; on Apache you do not. Do I have to create the DB connection bean for Apache?
What are the practices (beans, something else?) to connect a DB to a web application?
I have also read about Hibernate, but I don't understand the use of it. Where can Hibernate help? I mean, it's an ORM, but what can Hibernate do for me so that my code is easier? I think I'm missing the point of ORM.
Hibernate will help you with your transaction management. It will enable you to open several different connections to the database, and it will also give you warnings when you are using unavailable objects (like beans that get pulled in from different threads).
A concrete example of where Hibernate's ORM will make your code easier is when you are querying the DB. Instead of writing the standard SQL queries as strings, you can use Criteria-queries.
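For instance, fetching all Pojo rows with a given value could be written against the classic Criteria API roughly like this; this is only a sketch and assumes the Pojo class shown further down is mapped as an entity:

import java.util.List;
import org.hibernate.Session;
import org.hibernate.criterion.Restrictions;

public class PojoQueries {
    // Finds all Pojo rows whose value column equals the given string.
    @SuppressWarnings("unchecked")
    public static List<Pojo> findByValue(Session session, String value) {
        return session.createCriteria(Pojo.class)
                .add(Restrictions.eq("value", value))
                .list();
    }
}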
In Java, DB connections always use a JDBC driver. No DB that I know of allows running more than a single SQL command over a single connection at the same time, so each connection becomes a bottleneck if your application can run several SQL commands at the same time (the usual case for web servers, where hundreds of users can interact with the database at the same time).
UPDATE: What I'm saying is: you can easily daisy-chain commands over a single connection (like UPDATE ...; COMMIT) but you can't send two UPDATE commands at the same time; you always have to wait for the first command to complete before you can send the next. Some databases allow sending several commands in a single query, but they are executed one after the other, not all at the same time. Think about it: if you could run several commands concurrently over a single connection, how would you know in which order they were executed?
On top of that, creating DB connections is expensive for most DBs. Hence they are created in advance during application startup and held in a pool. As soon as you "connect" to the database with the pooled JDBC driver, it picks an unused connection from the pool and returns it. That (almost) takes no time. When you "close" the connection, it's returned to the pool.
As an additional benefit, the pool can keep the connections alive. So you never need to worry about connection errors when you need a new connection (well, as long as the DB is running).
From the application side, this is mostly transparent (most JDBC drivers today either pool internally or offer a pooling API). If your JDBC driver doesn't, you can always use a pool like DBCP. The pool handles all the nasty details, and you write your application against the pool API instead of using JDBC directly. The docs will tell you how to do it.
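For illustration, a minimal DBCP (1.x) setup might look like this sketch; the driver class, URL and credentials are placeholders:

import java.sql.Connection;
import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;

public class PoolExample {
    public static DataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");  // placeholder driver
        ds.setUrl("jdbc:mysql://localhost/mydb");        // placeholder URL
        ds.setUsername("user");
        ds.setPassword("password");
        ds.setMaxActive(20); // at most 20 physical connections in the pool
        return ds;
    }

    public static void main(String[] args) throws Exception {
        DataSource ds = createDataSource();
        try (Connection con = ds.getConnection()) {
            // "closing" the connection just returns it to the pool
        }
    }
}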
Hibernate, however, is a different beast. Hibernate is a layer on top of JDBC that can transform POJOs into SQL and back.
So instead of saying INSERT INTO data(ID, VALUE) values (?, ?), you can say
class Pojo { long id; String value; }
Pojo demo = new Pojo();
demo.value = "Test";
session.persist(demo);
and Hibernate will create the SQL for you and send it to the DB. At this stage, it doesn't make your life easier. Hibernate starts to shine when you change your Pojos:
class Pojo { long id; String value;
String name; // Oops ... forget the name
}
Pojo demo = new Pojo();
demo.name = "John";
demo.value = "Test";
session.persist(demo);
Hibernate will change the DB definition accordingly and update all the SQL commands it needs to load and save the objects.
Sometimes when I call connect() on a third-party proprietary JDBC driver, it never returns and a stack trace shows that it is stuck waiting for a socket read (usually). Is there a generic way to forcibly cancel this operation from another thread? It's a blocking I/O call so Thread.interrupt() won't work, and I can't directly close the socket because I don't have access to it since it's created inside the proprietary code.
I'm looking for generic solutions because I have a heterogeneous DB environment (Oracle, MySQL, Sybase, etc.), but driver-specific suggestions are also welcome. Thanks.
There is no standard JDBC interface to set connection or read timeouts, so you are bound to use proprietary extensions, if the JDBC driver supports timeouts at all. For the Oracle JDBC thin driver, you can, for example, set the system properties "oracle.net.CONNECT_TIMEOUT" and/or "oracle.jdbc.ReadTimeout", or pass a Properties instance with these properties set to DriverManager.getConnection. Although not particularly well documented, the Oracle-specific properties are listed in the API documentation.
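For example, passing those properties to DriverManager.getConnection could look like the following sketch; the URL, credentials and the concrete timeout values are placeholders, and both timeouts are in milliseconds:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class OracleTimeoutExample {
    public static Connection connect() throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "scott");       // placeholder credentials
        props.setProperty("password", "tiger");
        props.setProperty("oracle.net.CONNECT_TIMEOUT", "10000"); // 10 s connect timeout
        props.setProperty("oracle.jdbc.ReadTimeout", "30000");    // 30 s socket read timeout
        return DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/ORCL", props);
    }
}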
For other JDBC drivers, the documentation should contain the relevant references.
Ah ... the joys of using closed-source libraries ...
If interrupt() doesn't work, and you cannot set some kind of timeout, then I think there is no safe way to do it. Calling Thread.stop() might do the job, but that method is deprecated because it is horribly unsafe, and this is the kind of scenario where the unsafeness of Thread.stop() could come back and bite you.
I suggest that you simply code your application to abandon the stuck thread. Assuming that your application doesn't repeatedly try to connect to the DB, a stuck thread isn't a huge overhead.
Alternatively use a better JDBC driver. (And on your way out of the door, complain to the supplier about their driver being too inflexible. There is a slight chance that someone might listen to you ...)
At least one JDBC driver (not one of those you listed, though) will cleanly close the connection if the thread this connection attempt is running on is interrupted. I don't know if this will work for all drivers though.
This is a problem with Java, not the JDBC driver. In certain circumstances, the socket connect call ignores the timeout parameters and can take minutes to return. This happens to us when a firewall blocks the port, and it happens to all TCP connections (HTTP, RMI).
The only solution I found is to open the connection in a different thread, like this:
private static final ExecutorService THREADPOOL
        = Executors.newCachedThreadPool();

private static <T> T call(Callable<T> c, long timeout, TimeUnit timeUnit)
        throws InterruptedException, ExecutionException, TimeoutException
{
    FutureTask<T> t = new FutureTask<T>(c);
    THREADPOOL.execute(t);
    return t.get(timeout, timeUnit);
}

try {
    Data data = call(new Callable<Data>() {
        public Data call() throws Exception
        {
            // Open connection, get data here
            return data;
        }
    }, 2, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    System.err.println("Data call timed-out");
}