We're having some problems with connections getting 'stuck' on SQL Server 2008 when issued from Hibernate, running on a Glassfish instance.
Occasionally there will be 10 or 20 calls to a stored procedure, all 'sleeping' and holding open a number of transactions.
When I use DBCC INPUTBUFFER to find out more about them, it shows:
Name : implicit_transaction
Does this mean the Java app is setting 'SET IMPLICIT_TRANSACTIONS ON' as part of the batch? The only way I was able to replicate that transaction name on the DB side was to use that syntax.
It appears that the Java app is hanging on to the connection but somehow losing the context of the call itself, so it never comes back to commit the transaction.
The Java developers say they are not explicitly defining any connections, and are not aware of setting any other connection properties on purpose. The calls are all being made under the READ COMMITTED isolation level.
How can I find out whether there's some hidden attribute set, or whether some Hibernate setting is causing this annoying behaviour?
You can ask Hibernate to log the sql statements it's executing.
This article describes two ways to do that.
SessionFactory sf = new Configuration()
.setProperty("hibernate.show_sql", "true")
// ...
.buildSessionFactory();
or in the log4j configuration
log4j.logger.org.hibernate.SQL=DEBUG, SQL_APPENDER
log4j.additivity.org.hibernate.SQL=false
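If you configure Hibernate through hibernate.cfg.xml instead of programmatically, the same logging can be switched on there (a sketch using standard Hibernate property names):

```xml
<hibernate-configuration>
  <session-factory>
    <!-- echo executed SQL to stdout -->
    <property name="hibernate.show_sql">true</property>
    <!-- pretty-print the logged SQL -->
    <property name="hibernate.format_sql">true</property>
    <!-- add comments showing which HQL/entity produced each statement -->
    <property name="hibernate.use_sql_comments">true</property>
  </session-factory>
</hibernate-configuration>
```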
Related
I have a WAR service with three of my custom JARs inside. Each JAR uses its own datasource, so I have three different datasources defined in my JBoss configuration and corresponding persistence.xml files that refer to them. I can depict it as follows:
There are three services, SM, RE, and RA, and each one has its own datasource. SM and RE use their datasources to read and write, but RA only reads from the DB. The order of these interactions with the datasources is shown by the numbers next to the arrows in the picture. So, initially SM reads from the DB, and in the end RE writes to the DB.
My question is: what do I need to use if I want to write data to both datasources at the end, but in one transaction?
There are two possible answers, but neither satisfies me:
Use UserTransaction from JBoss in both SM and RE when I want to write data, and manually handle beginning and then committing the transaction. But here I hit too many additional issues, like WFTXN0001: A transaction is already in progress, even though I never began one. I can't understand how it works, and I can't find concise, clear documentation or an example that clarifies UserTransaction usage. So the first answer is "it is too difficult to use, so let's use something else".
Reject distributed transactions altogether and commit changes to both databases sequentially. I know that it is a trap, but it works if I avoid using transactions and then add the following property to the WildFly configuration:
<system-properties>
<property name="com.arjuna.ats.arjuna.allowMultipleLastResources" value="true"/>
</system-properties>
It works! However, there is a warning message in the log:
16:09:23,581 WARN [com.arjuna.ats.arjuna] (default task-1) ARJUNA012141: Multiple last resources have been added to the current transaction. This is transactionally unsafe and should not be relied upon. Current resource is LastResourceRecord(XAOnePhaseResource(LocalXAResourceImpl#dbaabf3[connectionListener=767c209 connectionManager=78f5d4e2 warned=false currentXid=< formatId=131077, gtrid_length=29, bqual_length=36, tx_uid=0:ffff0ac03a90:74af3477:5c7fb46c:343, node_name=1, branch_uid=0:ffff0ac03a90:74af3477:5c7fb46c:35f, subordinatenodename=null, eis_name=java:/db3 > productName=PostgreSQL productVersion=9.6.2 jndiName=java:/db3]))
...
16:09:23,631 WARN [com.arjuna.ats.arjuna] (default task-1) ARJUNA012141: Multiple last resources have been added to the current transaction. This is transactionally unsafe and should not be relied upon. Current resource is LastResourceRecord(XAOnePhaseResource(LocalXAResourceImpl#57a1383c[connectionListener=5ddce49c connectionManager=c66a93f warned=false currentXid=null productName=PostgreSQL productVersion=9.6.9 jndiName=java:/db2]))
I want to run away from these terrible messages, but I can't, because JBoss restricts me. Maybe you know some other way to use distributed transactions on JBoss without the pain?
Thanks.
P.S. I don't use Spring. I'm forbidden to use it.
Well, not using Spring is not such a bad thing.
Your diagram has a WAR file that accesses three databases via the various jars. What persistence service (if any) are you using?
What is your transaction demarcation? You should be making the database calls in the same transaction context. You should not need to start a user transaction. If you use the @Transactional annotation on your web service class, JBoss will start a transaction and the database accesses should join it.
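For multiple datasources to enlist in that one transaction without the allowMultipleLastResources warning shown above, they generally need to be declared as XA datasources. A sketch of one such declaration in the WildFly datasources subsystem (host, port, credentials, and JNDI names are placeholders; PostgreSQL assumed, as in the question's log):

```xml
<datasources>
    <xa-datasource jndi-name="java:/db2" pool-name="db2XA" enabled="true">
        <xa-datasource-property name="ServerName">dbhost</xa-datasource-property>
        <xa-datasource-property name="PortNumber">5432</xa-datasource-property>
        <xa-datasource-property name="DatabaseName">db2</xa-datasource-property>
        <driver>postgresql</driver>
        <security>
            <user-name>app</user-name>
            <password>secret</password>
        </security>
    </xa-datasource>
    <drivers>
        <driver name="postgresql" module="org.postgresql">
            <!-- the XA-capable datasource class, not the plain Driver -->
            <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
        </driver>
    </drivers>
</datasources>
```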
My project connects to a database using hibernate, getting connections from a connection pool on JBoss. I want to replace some of the reads/writes to tables with publish/consume from queues. I built a working example that uses OracleAQ, however, I am connecting to the DB using:
AQjmsFactory.getQueueConnectionFactory followed by createQueueConnection,
then using createQueueSession to get a (JMS) QueueSession on which I can call createProducer and createConsumer.
So I know how to do what I want using a jms.QueueSession. But using hibernate, I get a hibernate.session, which doesn't have those methods.
I don't want to open a new connection every time I perform an action on a queue - which is what I am doing now in my working example. Is there a way to perform queue operations from a hibernate.session? Only with SQL queries?
I think you're confusing a JMS (message queue) session with a Hibernate (database) session. The Hibernate framework doesn't have any overlap with JMS, so it can't be used to do both things.
You'll need 2 different sessions for this to work:
A Hibernate Session (org.hibernate.Session) for DB work
A JMS Session (javax.jms.Session) to do JMS/queue work
Depending on your use case, you may also want an XA transaction manager to do a proper two-phase commit across both sessions and maintain transactional integrity.
I was also looking for some "sane" way to use a JMS connection to manipulate database data. There isn't one. Dean is right: you have to use two different connections to the same data and have a distributed XA transaction between them.
This solution opens up a world of problems never seen before. In real life, distributed transactions can be quite non-trivial. Surprisingly, in some situations Oracle can detect that two connections point to the same database, and the two-phase commit can then be bypassed, even when using XA.
What would be the best way to set up, design, or simply configure a Hibernate-based Java web application so that it can start (i.e. initialize its SessionFactory) even if database connectivity is not yet available, but will be, albeit at a much later time?
In other words, is there an easy way to handle out-of-order initialization between a Hibernate server application and its database?
As far as I know, if you use an external connection pool (so that Hibernate is not responsible for making the connections) and hbm2ddl is set to none, then Hibernate should not connect to the database until you open a session.
In any case, if opening a session fails because there is no connection, it will succeed in opening a new session as soon as there is database connectivity.
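As a concrete illustration of that setup (property names are standard Hibernate settings; the JNDI name and dialect are placeholders), a hibernate.cfg.xml along these lines delegates connection management to the container's pool and disables schema tooling, so no connection is needed while the SessionFactory is built:

```xml
<hibernate-configuration>
  <session-factory>
    <!-- obtain connections from the container-managed pool via JNDI -->
    <property name="hibernate.connection.datasource">java:comp/env/jdbc/MyDS</property>
    <!-- do not validate or export the schema at startup -->
    <property name="hibernate.hbm2ddl.auto">none</property>
    <!-- naming the dialect explicitly avoids a metadata lookup at startup -->
    <property name="hibernate.dialect">org.hibernate.dialect.SQLServerDialect</property>
  </session-factory>
</hibernate-configuration>
```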
I have a Tomcat servlet that incorporates Hibernate. It works fine normally. When the servlet starts, I initialize Hibernate and create a session factory. I then use this session factory to generate sessions when performing various database transactions. So far so good. My problem comes after a long period of inactivity on the servlet (say, when the users go home for the night and then try to log in the next morning). Suddenly, I am unable to communicate with the database. In the logs I see:
org.hibernate.exception.JDBCConnectionException: Could not execute query
If I stop and restart Tomcat, reinitializing my servlet and rebuilding my session factory, everything works fine. It is almost like the session factory itself is timing out?
Any ideas?
Thanks,
Elliott
If I stop and restart Tomcat, reinitializing my servlet and rebuilding my session factory, everything works fine. It is almost like the session factory itself is timing out?
It's not the session factory but the connections used by the session factory (e.g. MySQL is well known to timeout connections after 8 hours of inactivity by default). Either:
use a connection pool that is able to validate connections on borrow and to renew them ~or~
increase the idle timeout on the database side
OK. Suppose I use a C3P0 connection pool. How do I specify in the hibernate.cfg.xml file that I want to "validate connections on borrow", or does it do this by default?
The various options when using C3P0 are documented in Configuring Connection Testing. My advice would be to use the idleConnectionTestPeriod parameter:
The most reliable time to test Connections is on check-out. But this is also the most costly choice from a client-performance perspective. Most applications should work quite reliably using a combination of idleConnectionTestPeriod and testConnectionsOnCheckIn. Both the idle test and the check-in test are performed asynchronously, which leads to better performance, both perceived and actual.
Note that for many applications, high performance is more important than the risk of an occasional database exception. In its default configuration, c3p0 does no Connection testing at all. Setting a fairly long idleConnectionTestPeriod, and not testing on checkout and check-in at all, is an excellent, high-performance approach.
To configure C3P0 with Hibernate, be sure to read the relevant instructions (and to use the appropriate properties, and the appropriate files).
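For example (an illustrative sketch, not a recommended production tuning): the C3P0 settings that Hibernate maps go in the Hibernate configuration, while settings with no hibernate.c3p0.* equivalent, such as testConnectionsOnCheckIn, belong in a c3p0.properties file on the classpath:

```properties
# hibernate.properties -- presence of hibernate.c3p0.* keys activates
# Hibernate's C3P0 connection provider
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20
# maps to C3P0's idleConnectionTestPeriod (seconds)
hibernate.c3p0.idle_test_period=300

# c3p0.properties -- settings Hibernate does not map go here
c3p0.testConnectionsOnCheckIn=true
c3p0.preferredTestQuery=SELECT 1
```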
I'm using a vendor API to obtain a JDBC connection to the application's database. The API works when running in the application server or when running in a stand-alone mode. I want to run a series of SQL statements in a single transaction. I'm fine with them occurring in the context of the JTA transaction if it exists. However, if it doesn't then I need to use the JDBC transaction demarcation methods. (Calling these methods on a JDBC connection that is participating in a JTA transaction causes a SQLException.)
So I need to be able to determine whether the Connection came from the JTA enabled DataSource or if it's just a straight JDBC connection.
Is there a straight forward way to make this determination?
Thanks!
Even if it's straight JDBC, you can have a JTA transaction enabled. Checking the autoCommit flag will NOT help in this regard. You can be in a transaction, distributed or otherwise, with autoCommit set to false. autoCommit set to true would tell you you're not in a distributed transaction but a value of false just means you won't auto-commit... it could be in any kind of transaction.
I think you're going to have to call UserTransaction.getStatus() and verify that it is not equal to Status.STATUS_NO_TRANSACTION. That would tell you whether you're in a JTA transaction.
What Thilo says does make sense.
Otherwise, I'm not sure of a straight way, but I will give you a "hack":
write a bad SQL statement that you know will cause a DB exception.
That will produce a stack trace, and from the stack trace you can find out whether or not it is a JTA-derived connection.
You could try to check the Connection's autoCommit flag to see if it is in a transaction (regardless of where it came from).
(Apparently, see the accepted answer, this does not work too well. I am not deleting this answer because the following still stands: )
But I think you should really modify your API to depend on external transactions exclusively. If you still want to support plain JDBC, wrap it into a separate API that just starts the transaction.
Update: Just re-read your question and saw that you are not providing an API, but want to use a container-managed connection. But still, can you just mandate (as part of your application's requirements) that JTA be in effect? If not, you could provide a configuration option to fall back to manually managed transactions. For such a critical feature it seems reasonable to require the proper configuration (as opposed to try to guess what would be appropriate).
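One way to implement that fallback (a sketch with invented names, not a library or vendor API): wrap the work in a helper that applies JDBC demarcation only when no JTA transaction is in effect, so the same code runs in both environments. The jtaActive flag would come from the UserTransaction.getStatus() check in the accepted answer:

```java
import java.sql.Connection;
import java.sql.SQLException;

// Hypothetical helper (names invented for illustration): run a unit of work,
// using JDBC transaction demarcation only when no JTA transaction is active.
class LocalTxRunner {

    interface Work<T> {
        T run(Connection c) throws SQLException;
    }

    static <T> T inTransaction(Connection c, boolean jtaActive, Work<T> work)
            throws SQLException {
        if (jtaActive) {
            // The JTA transaction manager owns commit/rollback; just do the work.
            return work.run(c);
        }
        // No JTA transaction: demarcate a plain JDBC transaction ourselves.
        boolean oldAutoCommit = c.getAutoCommit();
        c.setAutoCommit(false);
        try {
            T result = work.run(c);
            c.commit();
            return result;
        } catch (SQLException | RuntimeException e) {
            c.rollback();
            throw e;
        } finally {
            c.setAutoCommit(oldAutoCommit);
        }
    }
}
```

Callers then never touch commit() directly, so the SQLException from calling JDBC demarcation methods on a JTA-enlisted connection cannot occur.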