How does HikariCP handle incomplete JDBC transactions? - java

Yesterday I came across HikariCP and spent the whole night studying it. I'm really impressed with the amount of detail and effort put into fine-tuning its implementation and design.
Straight to the point, I could not determine how it actually deals with connections that are checked back into the pool with their autoCommit set to false while neither commit() nor rollback() has been issued on them, for example due to an exception. This can potentially be the source of serious transactional problems for the next requester, which expects a fresh connection but instead receives one with a dangling transaction state.
While C3P0 and Tomcat's JDBC pool have so-called knobs for this very purpose (through configuration or interception), I could not find anything in HikariCP's documentation or support group. Please correct me if I'm wrong, but writing a simple unit test showed me that the pool does nothing about this.
I need to know if this observation is actually correct and I'm not missing anything about it. Also, if there is any plan for addressing this in HikariCP since it is critical for me.
Thanks.

I am one of the authors of HikariCP. HikariCP does not automatically execute either commit or rollback if auto-commit is turned off. It is generally expected that an application that explicitly turns off auto-commit is prepared to handle commit and rollback itself (a finally block is recommended), as in this example from the official JDBC documentation.
We are willing to add automatic "rollback" behavior to HikariCP (but not automatic "commit") if a connection is returned to the pool with auto commit set to false. Please open a feature request if you wish this behavior.
UPDATE: HikariCP 1.2.2 and above perform an automatic "rollback" for closed connections with auto-commit set to 'false'. Additionally, it will reset transaction isolation level to the configured default, and as noted in comments below will of course close open Statements, etc.
UPDATE: HikariCP 2.3.x and above now additionally track transaction state when auto-commit is set to false, and will bypass the automatic rollback operation if the transaction state is clean.
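To make the expected application-side pattern concrete, here is a minimal sketch of the finally-style handling the answer refers to (the DataSource, table, and column names are placeholders, not anything taken from HikariCP itself):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TransferDao {
    private final DataSource dataSource; // e.g. a HikariDataSource

    public TransferDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void transfer(long fromId, long toId, long amount) throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement debit = con.prepareStatement(
                     "UPDATE account SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = con.prepareStatement(
                     "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                debit.setLong(1, amount);
                debit.setLong(2, fromId);
                debit.executeUpdate();
                credit.setLong(1, amount);
                credit.setLong(2, toId);
                credit.executeUpdate();
                con.commit();            // explicit commit on success
            } catch (SQLException e) {
                con.rollback();          // roll back on any failure before rethrowing
                throw e;
            } finally {
                con.setAutoCommit(true); // restore the default before the connection goes back to the pool
            }
        }
    }
}

With this in place the connection is always returned in a clean state, regardless of the pool's own safeguards.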

Related

Unable to acquire JDBC Connection with JPA/Hibernate under high concurrency

Under high concurrency with JPA/Hibernate, the project reports an “Unable to acquire JDBC Connection” error after running for a while.
After I added the HikariCP database connection pool, the problem went away. Why does this happen, and is there another way to solve it?
It depends on what pool you used before.
HikariCP's maxLifetime default is 30 minutes. After that, the connection is given back to the DBMS, which normally limits the maximum number of connections.
DBCP's default is unlimited.
If you did not use a pool at all, nobody closes the connections unless you do it yourself.
That might be why you no longer get the exceptions. But be aware that a memory leak may still be present: there may be Hibernate sessions held somewhere in your code that are never used and never closed.
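As a hedged illustration of how those limits are set with HikariCP (the JDBC URL, credentials, and pool size below are placeholders, not recommendations):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/app"); // placeholder URL
        config.setUsername("app");                            // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10);          // stay under the DBMS connection limit
        config.setMaxLifetime(30 * 60 * 1000L); // 30 minutes in milliseconds, the documented default
        return new HikariDataSource(config);
    }
}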

Abandoned connection cleanup in mariadb (compared to mysql)?

Switching from mysql-connector to mariadb client library:
What is the equivalent of the mysql class com.mysql.cj.jdbc.AbandonedConnectionCleanupThread.checkedShutdown()?
If there is any at all?
(I'm also using hikari connection pool).
I don't believe there is an equivalent; it looks like this feature was not carried over to MariaDB. It would be more prudent to fix the connection leak in the application instead.
As explained by the HikariCP pool author in this message, force-closing abandoned connections has a number of problems:
Yes, we have considered it (removing abandoned connections), but ultimately we decided to pass. The problem with closing leaked connections is severalfold. Some thread is possibly using that connection, and it's going to blow up (in production) somewhere if we close it. Or nothing is using that connection, and closing it has no negative impact, but now we've just covered up a leak that will cause constant cycling of connections in the pool.
Applications are responsible for cleaning up resources. Java developers tend to get lazy compared to C/C++ programmers. This is a leak just like a memory leak, and both can and rightfully should eventually kill your application. How else would you 1) know a problem exists, and 2) be motivated to track it down and fix it?
We do appreciate all input, even if not adopted. In this case, users looking for a library to defensively cover up coding errors should probably look to tomcat-jdbc.
Note, leak detection can be run in production, and can be enabled at runtime through a JMX console, so there's not a lot of justification for adding proactive connection reclamation.
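By way of a minimal sketch, that leak detection can also be enabled up front through HikariCP's leakDetectionThreshold property (the URL and threshold below are placeholders):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class LeakDetectionSetup {
    public static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mariadb://localhost:3306/app"); // placeholder URL
        // Log a warning with the borrowing stack trace if a connection is
        // held for more than 60 seconds without being returned to the pool.
        config.setLeakDetectionThreshold(60_000);
        return new HikariDataSource(config);
    }
}

This reports the leak so it can be fixed in the application, rather than silently reclaiming the connection.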

How to keep jdbc to postgres alive

So I've been tracking a bug for a day or two now which happens out on a remote server that I have little control over. The setup is that I provide a jar file to our UI team, which wraps Postgres and provides storage for data that users import. The import process is very slow for multiple reasons, one of which is that users import unpredictable, large amounts of data (which we can't really cut down on). This has led to a whole plethora of timeout issues.
After some preliminary investigation, I've narrowed it down to the JDBC connection to the Postgres database timing out. I had a lot of trouble replicating this on my local test setup, but finally managed to by reducing the 'socketTimeout' of the connection properties to 10s (there's more than 10s between each call made on the connection).
My question now is: what is the best way to keep this connection alive? I've set 'tcpKeepAlive' to true, but this doesn't seem to have an effect. Do I need to poll the connection manually or something? From what I've read, I'm assuming that keep-alive polling is automatic and controlled by the OS. If that's true, I don't really have control of the OS settings in the run environment, so what would be the best way to handle this?
I was considering testing the connection each time it is used and creating a new one if it has timed out. Would this be the correct course of action, or is there a better way to keep the connection alive? I've just taken a look at this post where people are suggesting that you should open and close a connection per query:
When my app loses connection, how should I recover it?
In my situation, I have a series of sequential inserts which take place on a single thread, if a single one fails, they all fail. To achieve this I've used transactions:
m_Connection.setAutoCommit(false);
m_TransactionSave = m_Connection.setSavepoint();
// Do something
m_Connection.commit();
m_TransactionSave = null;
m_Connection.setAutoCommit(true);
If I do keep reconnecting, or use a connection pool like PGBouncer (like someone suggested in comments), how do I persist this transaction across them?
JDBC connections to Postgres can be configured with a keep-alive setting. An issue was raised against this functionality here: JDBC keep alive issue. Additionally, there's the parameter help page.
From the notes on that, you can add the following to your connection parameters for the JDBC connection:
tcpKeepAlive=true;
Reducing the socketTimeout should make things worse, not better. The socketTimeout is a measure of how long a connection should wait for data it expects but has not yet received. Making it longer, not shorter, would be my instinct.
Is it possible that you are using PGBouncer? That process will actively kill connections from the server side if there is no activity.
Finally, if you are running on Linux, you can change the TCP keep alive settings with: keep alive settings. I am sure something similar exists for Windows.
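As a rough sketch of how those driver parameters are passed when opening the connection (the host, database, and credentials are placeholders; tcpKeepAlive and socketTimeout are PostgreSQL JDBC driver connection properties):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class PgConnect {
    public static Connection open() throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "app");          // placeholder credentials
        props.setProperty("password", "secret");
        props.setProperty("tcpKeepAlive", "true"); // ask the driver to enable TCP keep-alive on the socket
        props.setProperty("socketTimeout", "0");   // seconds; 0 disables the read timeout entirely
        return DriverManager.getConnection("jdbc:postgresql://db.example.com:5432/imports", props);
    }
}

The same parameters can instead be appended to the JDBC URL as a query string, e.g. jdbc:postgresql://host/db?tcpKeepAlive=true.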

Long running transactions with Spring and Hibernate?

The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set.
To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction).
This worked fine in my development environment. However, in production I got the following exception:
java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts or updates into temporary tables. All access to non-temporary tables is through selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case.
So my first question, is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long running transaction?
If the long running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
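For context, the single-transaction setup described in the question looks roughly like this with Spring's TransactionTemplate and a TransactionCallbackWithoutResult (the batch loop, counts, and method names below are placeholders, not the asker's actual code):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public class TemporaryTableBatchTask {
    private final TransactionTemplate txTemplate;

    public TemporaryTableBatchTask(PlatformTransactionManager txManager) {
        this.txTemplate = new TransactionTemplate(txManager);
    }

    public void runAllBatches() {
        // Everything inside the callback runs in one transaction and, with a
        // DataSource-backed transaction manager, on one connection, so the
        // temporary tables created by earlier batches stay visible to later ones.
        txTemplate.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                for (int batch = 0; batch < 100; batch++) { // placeholder batch count
                    callStoredProcedureForBatch(batch);     // placeholder for the JDBC stored-procedure call
                }
            }
        });
    }

    private void callStoredProcedureForBatch(int batch) {
        // placeholder: execute the stored procedure for this batch via JDBC
    }
}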
I consider keeping a transaction open for an extended time evil. During my career the definition of "extended" has shrunk from seconds to milliseconds.
It is an unending source of non-repeatable, head-scratching problems.
I would bite the bullet in this case and keep a 'work log' in software that you can replay in reverse to clean up if the batch fails.
When you say your table is temporary, is it transaction-scoped? That might lead to other transactions (perhaps on a different connection) not being able to see or access it. Perhaps a join involving a real table and a temporary table somehow locks the real table.
Root cause: Have you tried using the MySQL tools to determine what is locking the connection? It might be something like next-key locking. I don't know the MySQL tools that well, but on Oracle you can see which connections are blocking other connections.
Transaction timeout: You should create a second connection pool/data source with a much longer timeout. Use that connection pool for your long running task. I think your production environment is 'trying' to help you out by detecting stuck connections.
As mentioned by Justin regarding transaction timeout, I recently faced a problem in which the connection pool (in my case Tomcat DBCP in Tomcat 7) had settings which were supposed to mark long-running connections as abandoned and then close them. After tweaking those parameters I could avoid that issue.
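As a rough sketch of that kind of tuning, assuming the commons-dbcp 1.x API that Tomcat 7 bundles (in Tomcat these properties are usually set on the Resource element rather than in code, and DBCP 2 renames them):

import org.apache.commons.dbcp.BasicDataSource;

public class BatchPoolSetup {
    public static BasicDataSource createBatchPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/app"); // placeholder URL
        ds.setUsername("app");                        // placeholder credentials
        ds.setPassword("secret");
        ds.setRemoveAbandoned(true);        // reclaim connections that look stuck...
        ds.setRemoveAbandonedTimeout(3600); // ...but only after an hour, so long-running batches survive
        ds.setLogAbandoned(true);           // log a stack trace when a connection is reclaimed
        return ds;
    }
}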

Putting timeouts on Prepared statements

I am trying to use prepared statements which operate on a database located quite far away, there is considerable lag and unreliability involved in the network connection used to access this database.
Downtimes of up to a minute are common.
The problem is that in case of such a failure, if my program tries to execute any prepared statement, the whole thread goes into infinite wait. It never times out and just remains stuck, waiting for a response from the database.
I tried using the method setQueryTimeout() to explicitly put a timeout on the execution, but it seems there is some problem with this method wherein it can't work properly if the network fails.
Is there any alternative way around this ?
To my knowledge, there is no such alternative if the network itself fails.
The exact details of setQueryTimeout involve the JDBC driver being instructed to send an out-of-band signal (at least in the case of the Oracle JDBC driver) to the database to halt the execution of the PreparedStatement; this part is important, as it depends on the support built into the driver and the database. Following this, it is up to the database to schedule execution of this 'cancel' operation; this could take a while if things have to be rolled back or if other transactions have to be executed, etc.
Given the nature of the first part of the implementation, it is quite unlikely that a "clean" implementation of a timeout feature can be established. You might want to investigate the use of a transaction manager here (JTA perhaps), but I'm not sure if you will encounter another set of bizarre exceptions (think heuristic exceptions).
Addendum
Using a thread monitor that monitors the execution of other threads and kills the 'stuck' threads might be a bad idea. This SO question would help answer why such an activity should be avoided. This is also the tactic chosen by WebLogic Server.
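For reference, a minimal sketch of the setQueryTimeout usage discussed above (the connection, SQL, and timeout value are placeholders). If the cancel signal cannot reach the server because the network is down, the call may still block, which is exactly the limitation described in the answer:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.SQLTimeoutException;

public class RemoteQuery {
    public int countOrders(Connection con) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM orders")) {
            ps.setQueryTimeout(30); // ask the driver to cancel the statement after 30 seconds
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        } catch (SQLTimeoutException e) {
            // the driver managed to cancel the statement in time; retry or fail as appropriate
            throw e;
        }
    }
}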
