Why is BoneCP causing Derby ShutdownException after instantiation? - java

Basically I do:
CONNECTION_POOL = new BoneCP(config);
And then right after:
CONNECTION_POOL.getConnection().createStatement().executeQuery("SELECT * FROM myTable");
which will sometimes (not always) throw an org.apache.derby.iapi.error.ShutdownException. I suspect there is some kind of race condition or threading issue with BoneCP and its instantiation, but I can't find anything anywhere. I did see something about lazy instantiation, but setting it to true or false makes no difference.
Any help would be greatly appreciated.

An embedded Derby database is shut down by calling getConnection with the shutdown property set to true. This shuts down the database AND throws a ShutdownException. Nothing is wrong, but according to JDBC, getConnection is only allowed to return a valid connection or throw an exception. Since there cannot be a valid connection to a database that has been shut down, an exception must be thrown. So the real question here is whether the framework you use is supposed to catch this exception, or whether you are supposed to handle it.
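A minimal sketch of handling the expected shutdown exception yourself (the database name myDb is illustrative; 08006 and XJ015 are the SQLStates Derby reports for a clean single-database and engine-wide shutdown respectively):

import java.sql.DriverManager;
import java.sql.SQLException;

public class DerbyShutdown {
    public static void shutDownDatabase() {
        try {
            // Requesting shutdown of a single embedded database; Derby always
            // completes this call by throwing an SQLException.
            DriverManager.getConnection("jdbc:derby:myDb;shutdown=true");
        } catch (SQLException e) {
            // 08006 = single database shut down cleanly, XJ015 = whole engine shut down
            if ("08006".equals(e.getSQLState()) || "XJ015".equals(e.getSQLState())) {
                System.out.println("Derby shut down cleanly: " + e.getMessage());
            } else {
                throw new IllegalStateException("Shutdown failed", e);
            }
        }
    }
}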

Calling Thread.interrupt will cause derby to shutdown in some cases. To quote the Derby Guide: "As a rule, do not use Thread.interrupt() calls to signal possibly waiting threads that are also accessing a database, because Derby may catch the interrupt and close the connection to the database. Use wait and notify calls instead."
This is unfortunate, because some libraries call Thread.interrupt() as part of their normal operation, and in some cases interrupting threads is good practice depending on what you're doing. In my opinion this is a big design flaw of the database engine.
In any case, the only solution I found is that if you're going to call Thread.interrupt(), you need to adjust the Derby source code to ignore these thread interruptions, specifically the method noteAndClearInterrupt(...) in the class org.apache.derby.iapi.util.InterruptStatus.
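If you control the signalling code, the interrupt can often be replaced with a flag plus wait/notify, as the Derby guide recommends. A rough sketch, with made-up class and method names:

// Hypothetical worker that polls a Derby database; stop() signals it via
// wait/notify instead of Thread.interrupt(), so Derby never observes an
// interrupted thread and therefore does not close the connection.
public class DbWorker implements Runnable {
    private final Object signal = new Object();
    private volatile boolean stopped = false;

    @Override
    public void run() {
        while (!stopped) {
            doDatabaseWork();                  // JDBC calls against Derby
            synchronized (signal) {
                try {
                    if (!stopped) {
                        signal.wait(5_000);    // pause between polls, wakes early on stop()
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    public void stop() {
        stopped = true;
        synchronized (signal) {
            signal.notifyAll();                // wake the worker without interrupting it
        }
    }

    private void doDatabaseWork() { /* query or update via a Derby connection */ }
}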

Related

JDBC Connection close vs abort

I asked this question (How do I call java.sql.Connection::abort?) and it led me to another question.
With
java.sql.Connection conn = ... ;
What is the difference between
conn.close();
and
conn.abort(...);
?
You use Connection.close() for a normal, synchronous, close of the connection. The abort method on the other hand is for abruptly terminating a connection that may be stuck.
In most cases you will need to use close(), but close() may not always complete promptly; for example, it could block if the connection is currently busy (e.g. executing a long-running query or update, or waiting for a lock).
The abort method is for that situation: the driver marks the connection as closed (hopefully) immediately, the method returns, and the driver can then use the provided Executor to asynchronously perform the necessary cleanup work (e.g. making sure the statement that is stuck gets aborted, cleaning up other resources, etc).
I hadn't joined the JSR-221 (JDBC specification) Expert Group yet when this method was defined, but as far as I'm aware, the primary intended users of this method are not so much application code as connection pools, transaction managers and other connection-management code that may want to forcibly end connections that have been in use too long or are 'stuck'.
That said, application code can use abort as well. It may be faster than close (depending on the implementation), but you won't get notified of problems during the asynchronous clean up, and you may abort current operations in progress.
However keep in mind, an abort is considered an abrupt termination of the connection, so it may be less graceful than a close, and it could lead to unspecified behaviour. Also, I'm not sure how well it is supported in drivers compared to a normal close().
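To make the distinction concrete, here is a small sketch (the class and method names are invented, and the single-thread executor is just one possible choice):

import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

public final class ConnectionRelease {

    private static final Executor CLEANUP_EXECUTOR = Executors.newSingleThreadExecutor();

    // Normal, synchronous close: any problem is reported directly to the caller.
    public static void release(Connection conn) throws SQLException {
        conn.close();
    }

    // Forceful termination for a connection that appears stuck: the connection is
    // marked closed right away and the driver performs the remaining cleanup
    // asynchronously on the supplied executor.
    public static void forceRelease(Connection conn) throws SQLException {
        conn.abort(CLEANUP_EXECUTOR);
    }
}

A pool's watchdog thread might call forceRelease on a connection whose checkout time has exceeded some limit, while ordinary code paths keep using release.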
Consulting the java docs seems to indicate that abort is more thorough than close, which is interesting.
abort...
Terminates an open connection. Calling abort results in:
- The connection marked as closed
- Closes any physical connection to the database
- Releases resources used by the connection
- Insures that any thread that is currently accessing the connection will either progress to completion or throw an SQLException

close...
Releases this Connection object's database and JDBC resources immediately instead of waiting for them to be automatically released. Calling the method close on a Connection object that is already closed is a no-op.
So it seems if you are only concerned with releasing the objects, use close. If you want to make sure it's somewhat more "thread safe", using abort appears to provide a more graceful disconnect.
Per Mark Rotteveel's comment (which gives an accurate summary of the practical difference), my interpretation was incorrect.
Reference: https://docs.oracle.com/javase/8/docs/api/java/sql/Connection.html#close--

In what cases can a statement.close throw exception? What does it mean for the connection?

Everywhere we see that the Statement.close() is 'handled' by eating up the exception that it throws. What are the cases where it can throw an exception in the first place? And what does it mean for the connection with which this statement was created?
In other words, when does statement.close() throw an exception and would the connection still be 'healthy' to be used for creating new statements?
Also, what happens if resultset.close() throws?
First, consider what the close() method might need to do, and what might cause an exception.
E.g. a PreparedStatement might have created a stored procedure, which needs to be deleted by the close() method. executeQuery() may have opened a cursor, which is used by the ResultSet, and close() needs to close that cursor.
The exception could of course be an internal error, but it is most likely a communication error that prevented the close operation from succeeding.
So, what does that mean? It means that resources are not being explicitly cleaned up. Since your operation is already complete, it's generally ok to ignore those close() exceptions, since resources will be reclaimed eventually anyway.
However, since the cause is probably a communication error, the connection is likely broken, which means that you'll just get another exception on whatever you try next, making it even less likely that your handling of the close() exception matters.
To be safe, an exception means that something is wrong, and unless you examine the exception to understand how bad it is, you should abort whatever you're doing. A new connection should be established if you want to try again.
But, as already mentioned, ignoring close() exceptions isn't really a big issue. It may lead to resource leaks, but if the problem is bad, you're just going to get another exception on your next action anyway.
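As a sketch of that "log it, but don't let it break the happy path" approach (the helper name and the use of java.util.logging are my own choices, not from the answer):

import java.sql.Statement;
import java.util.logging.Level;
import java.util.logging.Logger;

final class Jdbc {
    private static final Logger LOG = Logger.getLogger(Jdbc.class.getName());

    // Close a statement, logging (but not propagating) any failure.
    static void closeQuietly(Statement stmt) {
        if (stmt == null) {
            return;
        }
        try {
            stmt.close();
        } catch (Exception e) {
            // Usually harmless because the real work is already done, but worth
            // logging: a burst of these often means the connection itself is broken.
            LOG.log(Level.WARNING, "Failed to close statement", e);
        }
    }
}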
Simplest case for such an exception: the connection that handled the statement was closed before you tried to close the statement, or the statement was somehow already closed. Speaking for JDBC, the connection should still be healthy.
In general: as Peter stated, if the documentation of the driver does not contain any recommendations on how to handle such an exception, you can only log or debug it. Maybe you could re-establish the connection to be sure it is healthy.
When you close a statement, a lot of things can happen. These basic things can happen when closing a statement:
The open result set - if any - is closed, which may require communication to the database
The statement handle on the database server is released, which requires communication to the database
Given this involves communication with the database, all kinds of errors can occur: file system errors, network connection problems, etc. These may be safe to ignore, but could also indicate something very wrong with your application or database.
A secondary effect of a statement close can be a transaction completion (a commit or rollback). This could - for example - happen in auto-commit mode when you execute a data-modifying statement that produces a result set: the transaction ends when the result set is closed by the close of the statement. If this transaction commit fails, and you ignore it, your application may have just had a data-loss event (because the data was not persisted), and you just went ahead and ignored it.
In other words: you should not just ignore or swallow exceptions from Statement.close() unless you are absolutely sure there will be no detrimental effects. At minimum log them so you can trace them in your logs (and maybe define alerts on the number of exceptions logged), but always consider if you need to wrap them in application-specific exceptions and throw them higher up the call chain for handling, or - for commit failures - if you need to retry anything.
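For the stricter handling described here, a try-with-resources sketch (DataAccessException is a hypothetical application exception). A close failure on its own propagates, while a close failure during another failure is attached as a suppressed exception:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

final class StrictClose {

    // Hypothetical application exception used to surface failures to the caller.
    static final class DataAccessException extends RuntimeException {
        DataAccessException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    static void runAndClose(Statement stmt, String sql) {
        // try-with-resources closes the ResultSet, then the Statement. A close
        // failure is not swallowed, because in auto-commit mode it may mean the
        // commit triggered by the close failed as well.
        try (Statement s = stmt; ResultSet rs = s.executeQuery(sql)) {
            while (rs.next()) {
                // process the row
            }
        } catch (SQLException e) {
            throw new DataAccessException("Query or close failed for: " + sql, e);
        }
    }
}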

Using ThreadPoolExecutor and connection pool without random blocking method call

I've been using Stack Overflow for a long time now and have always found existing answers, but this time I couldn't find any information about what I'm trying to do.
Using Java, I have a process composed of about 10 different tasks that gather distinct data from the database using plain JDBC (no EJB/JPA here). Each task (callable) can run concurrently and is responsible for obtaining its own connection, which is what we are doing. However, we're randomly running into trouble with the connection pool (accessed via JNDI): sometimes we're blocked because the pool doesn't have any available connection.
To solve this, I thought we could change the way we obtain connections. Instead of letting each callable open and close a connection (according to the number of tasks to execute and the number of threads used by the ThreadPoolExecutor), I would like to create some kind of local connection pool dedicated to this process, so that we're sure nothing will block later (and if we can't acquire all the requested connections, we would adapt the number of threads to launch, with a minimum of 1).
My colleagues approve of this idea, but what surprises me is that I can't find any similar approach or discussion on the web (maybe I'm not using the right keywords).
I would like to know what you think about this idea, whether you have already tried something similar, or if I'm missing something important.
Thank you in advance.
You have not mentioned which connection pool is used. If it is not HikariCP and you are allowed to switch, I recommend it (disclosure: I have contributed to it).
HikariCP does seem rather interesting, I'll have to look into it further. But this isn't directly related to the question :)
Just a little return of experience: my idea is working, with one caveat. I couldn't get rid of one downcast from a Runnable to my own implementation, on which I call .setConnection() in the beforeExecute() of my executor. And all tasks must be given to the executor with the execute() method, otherwise the Runnable is automatically wrapped in a FutureTask with no way to access the inner Runnable. Maybe one of you knows how to do this correctly?
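For the record, a rough sketch of the "reserve the connections up front, then share them among the tasks" idea from the question (all names are illustrative; a real version would probably acquire with a timeout, fall back to fewer threads if the shared pool is short, and track borrowed connections so close() can reach them too):

import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import javax.sql.DataSource;

// Small, process-local set of connections reserved up front for one batch job.
final class LocalConnectionSet implements AutoCloseable {
    private final BlockingQueue<Connection> available;

    LocalConnectionSet(DataSource ds, int requested) throws SQLException {
        this.available = new ArrayBlockingQueue<>(requested);
        for (int i = 0; i < requested; i++) {
            available.add(ds.getConnection());   // reserve everything before the tasks start
        }
    }

    int size() {
        return available.size();                 // use this to size the ThreadPoolExecutor
    }

    Connection borrow() throws InterruptedException {
        return available.take();                 // blocks only among this job's own tasks
    }

    void giveBack(Connection conn) {
        available.add(conn);
    }

    @Override
    public void close() throws SQLException {
        // Assumes every borrowed connection has been given back by now.
        for (Connection conn : available) {
            conn.close();                        // hand them back to the shared pool
        }
    }
}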

How to use Java.sql.Connection.setNetworkTimeout?

I ran into the exact issue that setNetworkTimeout is supposed to solve according to Oracle. A query got stuck in socket.read() for several minutes.
But I have little idea what the first parameter for this method needs to be. Submitting a null causes an AbstractMethodError exception, so... does the implementation actually need some sort of thread pool just to set a network timeout?
Is there some way to achieve the same effect without running a thread pool just for this one condition?
It seems like the documentation explains this horribly, but without looking at any code behind the class my guess would be that you are expected to pass an Executor instance to the method so that implementations can spawn jobs/threads in order to check on the status of the connection.
Since connection reads will block, in order to implement any sort of timeout logic it's necessary to have another thread besides the reading one which can check on the status of the connection.
It sounds like a design decision was made that, instead of the JDBC driver implementing the logic of how and when to spawn threads internally, the API wants you as the client to pass in an Executor that will be used to check on the timeouts. This way you as the client can control things like how often the check executes, preventing it from spawning more threads in your container than you would like, etc.
If you don't already have an Executor instance around you can just create a default one:
conn.setNetworkTimeout(Executors.newFixedThreadPool(numThreads), yourTimeout);
As far as the Postgres JDBC driver is concerned (postgresql-42.2.2.jar), the setNetworkTimeout implementation does not make use of the Executor parameter. It simply sets the specified timeout as the underlying socket's timeout using Socket.setSoTimeout.
It looks like the java.sql.Connection interface is trying not to make any assumptions about the implementation and provides for an executor that may be used if the implementation needs it.
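Putting the pieces together, a small sketch of setting the timeout before running a possibly slow query (the 30-second value and the single-thread executor are arbitrary choices):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class NetworkTimeoutExample {

    static void queryWithNetworkTimeout(Connection conn, String sql) throws SQLException {
        ExecutorService timeoutExecutor = Executors.newSingleThreadExecutor();
        try {
            // If a socket read stalls for more than 30 seconds, the driver may close
            // the connection and the blocked call fails with an SQLException instead
            // of hanging in socket.read() for minutes.
            conn.setNetworkTimeout(timeoutExecutor, 30_000);
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    // process the row
                }
            }
        } finally {
            timeoutExecutor.shutdown();
        }
    }
}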

Database Pooler

Hello, I am trying to implement a database-object ("connection") pooler for BerkeleyDB.
I decided to use a singleton EJB, or probably an enum-based singleton implementation, for this.
A final ConcurrentHashMap would store the database objects together with a timestamp.
The getConnection() method would use double-checked locking, as long as the value in the map is volatile. No performance issues, I believe (the Java connection pooler's getConnection is synchronized!).
The database is spread across 100 files plus the daily ones (the application was designed in the mid seventies, 1976).
So far everything is fine, but I want to close unused handles daily.
So I decided to use a Timer to run a cleanup routine every 24 hours.
The problem is: how can I ensure that a connection about to be closed during cleanup isn't requested at the same time?
Pseudo algorithm
void cleanup() {
    long now = System.currentTimeMillis();
    for (Map.Entry<String, Database> entry : map.entrySet()) {
        Database db = entry.getValue();
        if (now - db.getLastAccess() > TimeUnit.HOURS.toMillis(24)) {
            boolean removed = map.remove(entry.getKey(), db);  // remove only this exact instance
            if (removed) {
                db.close();
            }
        }
    }
}
I know that the above isn't thread-safe. How could I block getConnection()? Many things could go wrong: the if condition may be true, but before the db object is removed its last access time could already have changed! Cleanup would be called by a single thread, though.
Is there any solution to block getConnection() somehow so that cleanup can work, or any other solution?
I am not sure whether you currently do this, but if you have a way to determine whether a connection is in use, this becomes slightly easier. One thing you can do is iterate over the connections in your pool. When you find one that matches your criteria for being closed, try to mark it as in use (assuming that a connection that is in use will not be handed out as an open connection). If you succeed, close it. Otherwise, keep checking it until it becomes free and you are able to mark it as in use; once you have managed that, you can close it.
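A compare-and-set sketch of that idea (PooledHandle is a hypothetical wrapper you would store in the map instead of the raw Database handle):

import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical wrapper the pool stores instead of the raw Database handle.
final class PooledHandle {
    private final AtomicBoolean inUse = new AtomicBoolean(false);
    private volatile long lastAccess = System.currentTimeMillis();

    // getConnection() path: succeeds only if the handle was free.
    boolean tryAcquire() {
        if (inUse.compareAndSet(false, true)) {
            lastAccess = System.currentTimeMillis();
            return true;
        }
        return false;
    }

    void release() {
        inUse.set(false);
    }

    // Cleanup path: claim the handle exactly like a caller would, then retire it.
    boolean tryRetire(long maxIdleMillis) {
        return System.currentTimeMillis() - lastAccess > maxIdleMillis
                && inUse.compareAndSet(false, true);
        // on true, the caller removes the entry from the map and closes the Database
    }
}

Because acquiring and retiring go through the same compare-and-set, the race between the idle check and the close disappears.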
Each connection would have a lock associated with it; in order for the connection to be returned by the getConnection method, the corresponding lock would have to be acquired. The cleanup method would also need to acquire the lock before closing a connection. Take a look at the java.util.concurrent.locks package.
Maybe a Semaphore is a better solution. Follow the link for an example.
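A sketch of the per-handle lock approach (GuardedHandle is an invented wrapper; "idle" is still determined by the last access time, as in the question):

import java.util.concurrent.locks.ReentrantLock;

// Per-handle lock shared by getConnection() and the cleanup timer.
final class GuardedHandle {
    private final ReentrantLock lock = new ReentrantLock();
    private volatile long lastAccess = System.currentTimeMillis();
    private boolean closed = false;                 // guarded by lock
    // private final Database db = ...;             // underlying BerkeleyDB handle

    // getConnection() path: returns false if the handle was already retired.
    boolean markUsed() {
        lock.lock();
        try {
            if (closed) {
                return false;
            }
            lastAccess = System.currentTimeMillis();
            return true;
        } finally {
            lock.unlock();
        }
    }

    // Cleanup path: retires the handle only while holding the same lock.
    boolean retireIfIdle(long maxIdleMillis) {
        lock.lock();
        try {
            if (!closed && System.currentTimeMillis() - lastAccess > maxIdleMillis) {
                closed = true;
                // db.close() would go here, safely: no getConnection() call can
                // hand this handle out while the lock is held.
                return true;    // caller then removes the entry from the map
            }
            return false;
        } finally {
            lock.unlock();
        }
    }
}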
I've never worked with BerkeleyDB, but I assume it has a JDBC interface. Can't you use an out-of-the-box solution like DBCP or c3p0? Also check the Pool Component; it is a generic pool interface.
