I am trying to run a web application, but it is unable to proceed. It seems to be "locked" somehow (it stays at the "Start in progress..." step until it crashes).
After thinking about what could be the cause, I remembered that I didn't stop a transaction.
I just entered the command
entityManager.getTransaction().begin();
but I never committed or rolled back the transaction.
So my question is:
Does a transaction end when the program is closed?
And if not, what is the easiest way to end it?
PS: Please correct any grammar mistakes I've made - I like questions that are beautifully formulated.
The database is going to see a connection being closed, so the transaction is definitely going to end. What's not defined is whether the transaction is committed or rolled back.
If I'm not mistaken, the more popular option is to roll back any uncommitted transaction (after all, an uncommitted transaction suggests something went wrong and you'd prefer a rollback), but this can depend on the database being used.
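Rather than relying on that implicit driver behavior, the usual discipline is to pair every begin() with an explicit commit() or rollback(), with the rollback in a finally block. Below is a minimal, runnable sketch of that control flow; TxLike and RecordingTx are hypothetical stand-ins for JPA's EntityTransaction so the pattern can be shown without a database:

```java
import java.util.ArrayList;
import java.util.List;

public class TxSketch {

    // Hypothetical stand-in for javax.persistence.EntityTransaction.
    public interface TxLike {
        void begin();
        void commit();
        void rollback();
    }

    // Records which calls were made, purely for illustration.
    public static class RecordingTx implements TxLike {
        public final List<String> calls = new ArrayList<>();
        public void begin()    { calls.add("begin"); }
        public void commit()   { calls.add("commit"); }
        public void rollback() { calls.add("rollback"); }
    }

    // The pattern: never leave begin() without a matching commit() or
    // rollback(), even when the work throws.
    public static void runInTransaction(TxLike tx, Runnable work) {
        tx.begin();
        boolean committed = false;
        try {
            work.run();
            tx.commit();
            committed = true;
        } finally {
            if (!committed) {
                tx.rollback(); // the transaction never dangles
            }
        }
    }
}
```

With a real EntityManager the same shape applies: entityManager.getTransaction() plays the role of TxLike.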
Related
I have a problem with sleeping transaction locks on MS SQL Server 2008.
Sometimes the transaction is not really sleeping, or it has not completed and there is a real lock, but that is not my case.
We have Tomcat 6 with a Java app on it. When I perform an update from Java and then try to select the updated records: no luck.
If I use WITH (NOLOCK), it helps; I see the correctly changed rows, but after some period of time the update is rolled back.
If I do the update through Studio, it works fine, with an immediate result.
I have a similar situation in many places in my application.
For example, I have a nightly job that recalculates big chunks of data into bitfields. At the end it removes the old bitfields and copies the new ones in. But when the job completes, a rollback happens. I tried saving these bitfields manually into temporary tables and then replacing the old ones with the new ones through Studio; that worked.
All statements and connections are closed; I verified this. I also tried making all these changes within the scope of a single transaction.
This application ran for a long time without any trouble, but now something has happened.
I tried to find the reason using sp_who, sp_who2, and various scripts that show locking queries; I also did extensive monitoring with SQL Server Profiler and searched for a solution on the Internet. All I found was
Error: 1222, Severity: 16, State: 18
which is the result of this problem, not the cause. Maybe I'm moving in the wrong direction. To me it looks as if something changed in the SQL Server configuration and now, for some reason, it holds the connection, and all changes made within its scope, forever. When the connection is killed, everything is rolled back.
If you have any ideas I would appreciate it.
Thanks in advance for any help.
UPDATE: I've searched and found one article: https://support.microsoft.com/en-us/kb/137983
which mentions one possibility:
If the Windows NT Server computer has successfully closed the connection, but the client process still exists on the SQL Server as indicated by sp_who, then it may indicate a problem with SQL Server's connection management. In this case, you should work with your primary support provider to resolve this issue.
Maybe this is my case. I will investigate it further.
Yesterday I came across HikariCP and spent the whole night studying it. I'm really impressed with the amount of detail and effort put into fine tuning its implementation and design.
Straight to the point: I could not determine how it actually deals with connections that are checked back into the pool with autoCommit set to false while neither commit() nor rollback() has been issued on them, for example due to an exception. This can potentially be the source of serious transactional problems for the next requester, which expects a fresh connection but instead receives one with a dangling transaction state.
While C3P0 and Tomcat's JDBC pool have knobs for this very purpose (through configuration or interception), I could not find anything in HikariCP's documentation or support group. Please correct me if I'm wrong, but a simple unit test showed me that the pool does nothing about this.
I need to know whether this observation is correct and I'm not missing anything. Also, is there any plan to address this in HikariCP? It is critical for me.
Thanks.
I am one of the authors of HikariCP. HikariCP does not automatically execute either commit or rollback if auto-commit is turned off. It is generally expected that an application that explicitly turns off auto-commit is prepared to handle commit and rollback properly (recommended in a finally block), as in this example from the official JDBC documentation.
We are willing to add automatic "rollback" behavior to HikariCP (but not automatic "commit") if a connection is returned to the pool with auto commit set to false. Please open a feature request if you wish this behavior.
UPDATE: HikariCP 1.2.2 and above perform an automatic "rollback" for closed connections with auto-commit set to 'false'. Additionally, it will reset transaction isolation level to the configured default, and as noted in comments below will of course close open Statements, etc.
UPDATE: HikariCP 2.3.x and above now additionally track transaction state when auto-commit is set to false, and will bypass the automatic rollback operation if the transaction state is clean.
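Purely as an illustration of the 2.3.x behavior described above (this is a sketch, not HikariCP's actual code), the dirty-tracking idea can be shown in plain Java: a hypothetical pool wrapper marks a connection dirty once a statement is created, and on return to the pool issues rollback() only when auto-commit is off and the connection is dirty:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical pool wrapper -- NOT HikariCP's real implementation.
public class DirtyTrackingSketch {

    public static class PooledConnection {
        private final Connection real;
        private boolean dirty = false;

        public PooledConnection(Connection real) {
            this.real = real;
        }

        // Creating a statement means work may have started.
        public Statement createStatement() throws SQLException {
            dirty = true;
            return real.createStatement();
        }

        // Called when the application "closes" the connection and the
        // pool reclaims it.
        public void returnToPool() throws SQLException {
            if (!real.getAutoCommit() && dirty) {
                real.rollback(); // only dirty transactions pay this cost
            }
            real.setAutoCommit(true); // reset to the pool's default
            dirty = false;
        }
    }
}
```

A clean connection (auto-commit off but no statement executed) skips the rollback round trip, which is the optimization the update describes.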
Is there any way to guarantee that an application won't fail to release row locks in Oracle? If I make sure to put commit statements in finally blocks, that handles the case of unexpected errors, but what if the app process just suddenly dies before it commits (or someone kicks out the power cord / LAN cable)?
Is there a way to have Oracle automatically roll back idle sessions after X amount of time? Or roll back when it somehow detects that the connection was lost?
From the experiments I've done, if I terminate an app process before it commits, the row locks stay forever until I log into the database and manually kill the session.
Thanks.
Try setting SQLNET.EXPIRE_TIME in your sqlnet.ora.
SQLNET.EXPIRE_TIME=10
From the documentation:
Purpose
To specify a time interval, in minutes, to send a check to verify that client/server connections are active.
COMMIT inside finally is probably the last thing you should do, since you should (almost) never commit anything that threw an exception.
I am not a DBA so I am sure you can find a better solution...
but there are certain deadlock conditions that seem to happen that will not roll back on their own. My last DBA had a process that ran every minute and killed anything that had been running for more than 10 minutes.
The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set.
To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction).
This worked fine in my development environment. However, in production I got the following exception:
java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts or updates into temporary tables. All access to non-temporary tables is select-only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case.
So my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction?
If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and how can I prevent the transaction from locking them?
I consider keeping a transaction open for an extended time evil. Over the course of my career, the definition of "extended" has descended from seconds to milliseconds.
It is an unending source of non-repeatable, head-scratching problems.
In this case I would bite the bullet and keep a 'work log' in software, which you can replay in reverse to clean up if the batch fails.
When you say your table is temporary, is it transaction-scoped? That might lead to other transactions (perhaps on a different connection) not being able to see or access it. Perhaps a join involving a real table and a temporary table somehow locks the real table.
Root cause: have you tried using the MySQL tools to determine what is locking the connection? It might be something like next-row locking. I don't know the MySQL tools that well, but on Oracle you can see which connections are blocking other connections.
Transaction timeout: You should create a second connection pool/data source with a much longer timeout. Use that connection pool for your long running task. I think your production environment is 'trying' to help you out by detecting stuck connections.
As mentioned by Justin regarding transaction timeouts, I recently faced a problem in which the connection pool (in my case Tomcat DBCP in Tomcat 7) had settings that were supposed to mark long-running connections as abandoned and then close them. After tweaking those parameters I could avoid the issue.
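Building on the "second pool with a longer timeout" suggestion, a hypothetical Spring XML fragment might look like the following; the bean names, URL, and timeout values are illustrative assumptions, not taken from the question:

```xml
<!-- A dedicated pool for the long-running batch job. -->
<bean id="batchDataSource" class="org.apache.tomcat.jdbc.pool.DataSource"
      destroy-method="close">
    <property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
    <!-- Reclaim abandoned connections only after an hour. -->
    <property name="removeAbandoned" value="true"/>
    <property name="removeAbandonedTimeout" value="3600"/> <!-- seconds -->
</bean>

<!-- A TransactionTemplate whose timeout matches the batch job. -->
<bean id="batchTxTemplate"
      class="org.springframework.transaction.support.TransactionTemplate">
    <property name="transactionManager" ref="batchTransactionManager"/>
    <property name="timeout" value="3600"/> <!-- seconds -->
</bean>
```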
I am trying to use prepared statements that operate on a database located quite far away; there is considerable lag and unreliability in the network connection used to access this database.
Downtimes of up to a minute are common.
The problem is that in the case of such a failure, if my program tries to execute any prepared statement, the whole thread goes into an infinite wait. It never times out and just remains stuck, waiting for a response from the database.
I tried using setQueryTimeout() to put an explicit timeout on the execution, but there seems to be a problem with this method: it can't work properly if the network fails.
Is there any alternative way around this?
To my knowledge, there is no such alternative if the network itself fails.
The exact details of setQueryTimeout involve the JDBC driver being instructed to send an out-of-band signal (at least in the case of the Oracle JDBC driver) to the database to halt execution of the PreparedStatement; this part is important, as it depends on support built into the driver and the database. Following this, it is up to the database to schedule execution of this 'cancel' operation; this could take a while if things have to be rolled back or other transactions have to be executed, etc.
Given this aspect of the implementation, it is quite unlikely that a "clean" timeout feature can be established. You might want to investigate the use of a transaction manager here (JTA, perhaps), but I'm not sure whether you will encounter another set of bizarre exceptions (think heuristic exceptions).
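If the driver-level timeout cannot be relied on, one client-side workaround is to impose your own deadline around the blocking call. The sketch below uses only the JDK's ExecutorService; the Callable is a stand-in for the statement execution, and with a real PreparedStatement you would also call stmt.cancel() on the timeout path. Note that a thread blocked in a socket read may ignore the interrupt, so this bounds the caller's wait rather than guaranteeing the worker thread is freed:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class DeadlineSketch {

    // Run the (possibly hanging) call, but wait at most timeoutMillis.
    public static <T> T callWithDeadline(Callable<T> query, long timeoutMillis)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<T> result = pool.submit(query);
        try {
            return result.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            result.cancel(true); // interrupt; with JDBC, also stmt.cancel()
            throw e;
        } finally {
            pool.shutdownNow(); // do not leak the worker thread
        }
    }
}
```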
Addendum
Using a thread monitor that watches the execution of other threads and kills the 'stuck' ones might be a bad idea. This SO question helps explain why such an activity should be avoided. This is also the tactic chosen by WebLogic Server.