Putting timeouts on prepared statements - Java

I am trying to use prepared statements that operate on a database located quite far away; there is considerable lag and unreliability in the network connection used to access this database.
Downtimes of up to a minute are common.
The problem is that during such a failure, if my program tries to execute any prepared statement, the whole thread blocks indefinitely. It never times out and just remains stuck, waiting for a response from the database.
I tried using the method setQueryTimeout() to explicitly put a timeout on the execution, but it seems there is some problem with this method wherein it cannot work properly if the network fails.
Is there any alternative way around this?

To my knowledge, there is no such alternative if the network itself fails.
The exact details of setQueryTimeout involve the JDBC driver being instructed to send an out-of-band signal (at least in the case of the Oracle JDBC driver) to the database to halt execution of the prepared statement; this part is important, as it depends on the support built into the driver and the database. Following this, it is up to the database to schedule execution of this 'cancel' operation; this could take a while if things have to be rolled back, if other transactions have to be executed, etc.
Given the nature of the implementation, it is quite unlikely that a "clean" implementation of a timeout feature can be established. You might want to investigate the use of a transaction manager here (JTA, perhaps), but I'm not sure whether you will encounter another set of bizarre exceptions (think heuristic exceptions).
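For what it's worth, a minimal sketch of combining the two available knobs, assuming the Oracle thin driver and JDBC 4.1 or later (the URL, credentials, table and timeout values are placeholders): setQueryTimeout relies on the database-side cancel described above, while setNetworkTimeout bounds the wait on the socket itself when the network goes away.

import java.sql.*;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class QueryWithTimeouts {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//remote-host:1521/SVC", "user", "pass");
        // JDBC 4.1: give up if the socket is silent for 30s, covering the
        // case where the network itself has died (driver support varies).
        ExecutorService netTimeoutExec = Executors.newSingleThreadExecutor();
        conn.setNetworkTimeout(netTimeoutExec, 30_000);
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT * FROM orders WHERE id = ?")) {
            ps.setQueryTimeout(10); // seconds; needs driver/database support
            ps.setLong(1, 42L);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* process the row */ }
            }
        } catch (SQLTimeoutException e) {
            // One of the timeouts fired instead of blocking forever.
        } finally {
            conn.close();
            netTimeoutExec.shutdown();
        }
    }
}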
Addendum
Using a monitor thread that watches the execution of other threads and kills any that appear 'stuck' might be a bad idea. This SO question helps explain why such an activity should be avoided. (Detecting, rather than killing, stuck threads is also the tactic chosen by WebLogic Server.)

Related

How to keep jdbc to postgres alive

So I've been tracking a bug for a day or two now, which happens out on a remote server that I have little control over. The ins and outs of my code are: I provide a jar file to our UI team, which wraps Postgres and provides storage for data that users import. The import process is very slow for multiple reasons, one of which is that the users are importing unpredictable, large amounts of data (which we can't really cut down on). This has led to a whole plethora of timeout issues.
After some preliminary investigation, I've narrowed it down to the JDBC connection to the Postgres database timing out. I had a lot of trouble replicating this on my local test setup, but have finally managed to by reducing the 'socketTimeout' of the connection properties to 10s (there's more than 10s between each call made on the connection).
My question now is, what is the best way to keep this connection alive? I've set 'tcpKeepAlive' to true, but this doesn't seem to have an effect; do I need to poll the connection manually or something? From what I've read, I'm assuming that polling is automatic and is controlled by the OS. If this is true, I don't really have control of the OS settings in the run environment; what would be the best way to handle this?
I was considering testing the connection each time it is used and, if it has timed out, just creating a new one. Would this be the correct course of action, or is there a better way to keep the connection alive? I've just taken a look at this post, where people are suggesting that you should open and close a connection per query:
When my app loses connection, how should I recover it?
In my situation, I have a series of sequential inserts which take place on a single thread; if a single one fails, they all fail. To achieve this I've used transactions:
m_Connection.setAutoCommit(false);
m_TransactionSave = m_Connection.setSavepoint();
try {
    // Do the sequential inserts
    m_Connection.commit();
} catch (SQLException e) {
    m_Connection.rollback(m_TransactionSave); // all-or-nothing
    throw e;
} finally {
    m_TransactionSave = null;
    m_Connection.setAutoCommit(true);
}
If I do keep reconnecting, or use a connection pool like PgBouncer (as someone suggested in the comments), how do I persist this transaction across connections?
JDBC connections to Postgres can be configured with a keep-alive setting. An issue was raised against this functionality here: JDBC keep alive issue. Additionally, there's the parameter help page.
From the notes on that, you can add the following to your connection parameters for the JDBC connection:
tcpKeepAlive=true
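For example, a minimal sketch of passing that parameter programmatically via Properties (the host, database and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class PgConnect {
    static Connection open() throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "app");          // placeholder
        props.setProperty("password", "secret");   // placeholder
        props.setProperty("tcpKeepAlive", "true"); // enable TCP keep-alive probes
        return DriverManager.getConnection(
                "jdbc:postgresql://db-host:5432/mydb", props);
    }
}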
Reducing the socketTimeout should make things worse, not better. The socketTimeout is a measure of how long a connection should wait when it expects data to arrive and it has not. Making that longer, not shorter, would be my instinct.
Is it possible that you are using PgBouncer? That process will actively kill connections from the server side if there is no activity.
Finally, if you are running on Linux, you can change the OS-level TCP keep-alive settings with: keep alive settings. I am sure something similar exists for Windows.
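As for the asker's validate-before-use idea, a minimal sketch using the standard JDBC Connection.isValid check (PgConnect.open() is the placeholder factory from the sketch above):

// Validate before each use; reconnect if the link has died.
static Connection ensureLive(Connection conn) throws SQLException {
    if (conn == null || !conn.isValid(5)) { // 5-second validation timeout
        if (conn != null) {
            try { conn.close(); } catch (SQLException ignored) { }
        }
        conn = PgConnect.open(); // placeholder reconnect factory
    }
    return conn;
}

Note that this cannot rescue an in-flight transaction: if the connection died mid-transaction, the savepoint is gone and the whole batch of inserts has to be replayed from the start on the new connection. The same applies to pooling; a transaction cannot be persisted across connections.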

Manage SQLite connections in Java for performance in a single-threaded environment

What is the best (highest-performance) way in Java to manage reading/writing to an SQLite file?
1. Create one connection object
Create and share a single Connection object on app startup. In this case, do the calls need to be synchronised/serialised? Java programmers seem to like always closing connections to prevent bad code leaving a connection in a bad state or with unclosed statements, etc.
2. Open and close a connection for every transaction
This avoids the above-mentioned problem, and would allow the code to be run in a multi-threaded environment if need be in the future. I also read that some mobile versions of Java may require this behaviour.
I am hoping someone else already has experience in this area and can share it; otherwise I am going to have to learn the hard way. I am using the Xerial JDBC driver, if that makes a difference.
SQLite supports three different threading modes:
Single-thread. In this mode, all mutexes are disabled and SQLite is unsafe to use in more than a single thread at once.
Multi-thread. In this mode, SQLite can be safely used by multiple threads provided that no single database connection is used simultaneously in two or more threads.
Serialized. In serialized mode, SQLite can be safely used by multiple threads with no restriction.
For details see http://www.sqlite.org/threadsafe.html
Note one multi-thread problem I've encountered: closing a connection in one thread while another thread is still reading from a ResultSet. If you use the native library, this will cause a fatal access violation error and crash the JVM process. You would need to detect this condition in your code and avoid reading from, or even closing, the ResultSet.
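If option 1 is chosen, a minimal sketch of serializing access in Java (the jdbc:sqlite: URL scheme is the Xerial driver's; the table and methods are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Option 1: one shared connection, with access serialized in Java, so it is
// safe even if the underlying SQLite build uses multi-thread rather than
// serialized mode.
public final class SqliteStore {
    private final Connection conn;

    public SqliteStore(String file) throws SQLException {
        conn = DriverManager.getConnection("jdbc:sqlite:" + file);
    }

    public synchronized void addName(String name) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO names(name) VALUES (?)")) {
            ps.setString(1, name);
            ps.executeUpdate();
        }
    }

    public synchronized void close() throws SQLException {
        conn.close();
    }
}

Synchronizing every entry point also sidesteps the close-while-reading crash described above, since no thread can close the connection while another is mid-query.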

What is the best way for (potentially) hundreds of mobile clients to access a MySQL database?

So, here is the deal.
I'm developing an Android application (although it could just as easily be any other mobile platform) that will occasionally send queries to a server (which is written in Java). This server will then search a MySQL database for the query and send the results back to the Android. Although this sounds fairly generic, here are some specifics:
The Android will make a new TCP connection to the server every time it queries. The server is geographically close, the Android could potentially be moving around a lot, and, since the Android app might run for hours while only sending a few queries, this seemed the best use of resources.
The server could potentially have hundreds (or possibly even thousands) of these queries at once.
Since each query runs in its own Thread, each query will at least need its own Statement (and could have its own Connection).
Right now, the server is set up to make one Connection to the database, and then create a new Statement for each query. My questions for those of you with some database experience (MySQL in particular, since it is a MySQL database) are:
a) Is it thread safe to create one Statement per Thread from a single Connection? From what I understand it is; I'm just looking for confirmation.
b) Is there any thread safe way for multiple threads to use a single PreparedStatement? These queries will all be pretty much identical, and since each thread will execute only one query and then return, this would be ideal.
c) Should I be creating a new Connection for each Thread, or is it better to spawn new Statements from a single Connection? I think a single Connection would be better performance-wise, but I have no idea what the overhead for establishing a DB Connection is.
d) Is it best to use stored SQL procedures for all this?
Any hints / comments / suggestions from your experience in these matters are greatly appreciated.
EDIT:
Just to clarify, the Android sends queries over the network to the server, which then queries the database. The Android does not directly communicate with the database. I am mainly wondering about best practices for the server-database connection here.
Just because a Connection object is thread safe does not mean it is thread efficient. As a best practice, you should use a Connection pool to avoid potential blocking issues. But in answer to your question, yes, you can share a Connection object between multiple threads.
You do need to create new Statements/PreparedStatements in each thread that will be accessing the database; they are NOT thread safe. I would highly recommend using PreparedStatements, as you will gain efficiency and protection against SQL injection attacks.
Stored procedures will speed up your database queries, since the execution plan is compiled already and saved - highly recommended to use if you can.
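For what it's worth, a minimal sketch of calling one from JDBC (the procedure name and parameter are placeholders):

// conn is an open java.sql.Connection; find_items is a placeholder procedure.
try (CallableStatement cs = conn.prepareCall("{call find_items(?)}")) {
    cs.setString(1, searchTerm);
    try (ResultSet rs = cs.executeQuery()) {
        while (rs.next()) { /* build the response */ }
    }
}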
Have you looked at caching your database data? Take a look at spymemcached if you can; it's a great product for reducing the number of calls to your data store.
From my experience, you should devote a little time to wrap the database in a web service. This accomplishes two things:
You are forced to examine the data for wider consumption
You make it easier for new consumers to consume the data
A bit more development time, but direct connections to a database over an open network (the Internet) are more problematic than specifying what can be accessed through a method.
Use a connection pool such as Commons DBCP. It will handle all the stuff you're worrying about out of the box.
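A minimal sketch with the Commons DBCP 2 API (URL, credentials, pool size and query are placeholders): the data source is created once, and each query thread borrows its own connection and creates its own PreparedStatement.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource ds = new BasicDataSource(); // created once, shared by all threads
ds.setUrl("jdbc:mysql://localhost:3306/appdb");
ds.setUsername("app");
ds.setPassword("secret");
ds.setMaxTotal(100); // cap concurrent connections below MySQL's own limit

// In each query thread:
try (Connection c = ds.getConnection(); // borrows from the pool
     PreparedStatement ps =
         c.prepareStatement("SELECT name FROM items WHERE id = ?")) {
    ps.setInt(1, 42);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) { /* send rs.getString("name") back to the client */ }
    }
} // try-with-resources close() returns the connection to the pool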

Have Oracle automatically roll back abandoned sessions?

Is there any way to guarantee that an application won't fail to release row locks in Oracle? If I make sure to put commit statements in finally blocks, that handles the case of unexpected errors, but what if the app process just suddenly dies before it commits (or someone kicks the power cord or LAN cable out)?
Is there a way to have Oracle automatically roll back idle sessions after X amount of time? Or roll back when it somehow detects that the connection was lost?
From the experiments I've done, if I terminate an app process before it commits, the row locks stay forever until I log into the database and manually kill the session.
Thanks.
Try setting SQLNET.EXPIRE_TIME in your sqlnet.ora.
SQLNET.EXPIRE_TIME=10
From the documentation:
Purpose
To specify a time interval, in minutes, to send a check to verify that client/server connections are active.
COMMIT inside finally is probably the last thing you should do, since you should (almost) never commit work that threw an exception.
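The usual shape is the other way around: commit on success, roll back in the catch, release the session in finally. A minimal sketch (doWork is a placeholder for the actual updates):

try {
    doWork(conn);    // placeholder for the inserts/updates
    conn.commit();
} catch (SQLException e) {
    conn.rollback(); // never commit work that just threw
    throw e;
} finally {
    conn.close();    // ending the session releases its row locks
}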
I am not a DBA so I am sure you can find a better solution...
but there are certain deadlock conditions that will not roll back on their own. My last DBA had a process that ran every minute and killed anything that had been running for more than 10 minutes.

Long running transactions with Spring and Hibernate?

The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set.
To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction).
This worked fine in my development environment. However, in production I got the following exception:
java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts or updates into temporary tables. All access to non-temporary tables is via selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case.
So my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction?
If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
I consider keeping a transaction open for an extended time evil. During my career, the definition of "extended" has descended from seconds to milliseconds.
It is an unending source of non-reproducible, head-scratching problems.
I would bite the bullet in this case and keep a 'work log' in software, which you can replay in reverse to clean up if the batch fails.
When you say your table is temporary, is it transaction-scoped? That might lead to other transactions (perhaps on a different connection) not being able to see/access it. Perhaps a join involving a real table and a temporary table somehow locks the real table.
Root cause: Have you tried to use the MySQL tools to determine what is locking the connection? It might be something like next-key locking. I don't know the MySQL tools that well, but on Oracle you can see which connections are blocking other connections.
Transaction timeout: You should create a second connection pool/data source with a much longer timeout and use that connection pool for your long-running task. I think your production environment is 'trying' to help you out by detecting stuck connections.
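A minimal sketch of what that could look like with Spring's TransactionTemplate (the txManager wiring and the 45-minute figure are assumptions; the template would be backed by the dedicated long-timeout pool):

import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

// txManager is assumed to be a PlatformTransactionManager bound to the
// dedicated long-timeout data source.
TransactionTemplate longTx = new TransactionTemplate(txManager);
longTx.setTimeout(45 * 60); // seconds; generous enough for the whole batch

longTx.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // Run every stored-procedure batch here, on one connection, so the
        // temporary tables stay visible from start to finish.
    }
});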
As mentioned by Justin regarding the transaction timeout, I recently faced a problem in which the connection pool (in my case Tomcat DBCP in Tomcat 7) had settings that were supposed to mark long-running connections as abandoned and then close them. After tweaking those parameters I could avoid that issue.
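For reference, a sketch of the relevant knobs, assuming the tomcat-jdbc pool API (the values are illustrative; removeAbandonedTimeout must exceed the longest legitimate task):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

PoolProperties p = new PoolProperties();
p.setRemoveAbandoned(true);        // reclaim connections that look stuck
p.setRemoveAbandonedTimeout(1800); // seconds; longer than the slowest task
p.setLogAbandoned(true);           // log the stack trace of the holder
DataSource ds = new DataSource(p);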
