We are facing the following exception in WebLogic Server v10.3.2.0, running on JRockit JRE 6.0.
We have around 6-7 XA datasources involved in every server request. We hit this exception just as processing on the last datasource begins.
Could someone please advise?
java.sql.SQLException: Unexpected exception while enlisting XAConnection
java.sql.SQLException: Transaction rolled back: setRollbackOnly called on transaction
at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1616)
at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1503)
at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:446)
at weblogic.jdbc.jta.DataSource.connect(DataSource.java:403)
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:364)
at com.ibatis.sqlmap.engine.transaction.jta.JtaTransaction.init(JtaTransaction.java:68)
at com.ibatis.sqlmap.engine.transaction.jta.JtaTransaction.getConnection(JtaTransaction.java:131)
at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryForObject(MappedStatement.java:120)
at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:518)
at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:493)
at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForObject(SqlMapSessionImpl.java:106)
at com.ibatis.sqlmap.engine.impl.SqlMapClientImpl.queryForObject(SqlMapClientImpl.java:82)
As you wrote, the cause is unknown in this sample.
We can see the transaction has been marked as "must roll back", probably by one of the previous datasources when something went wrong.
Maybe you can check the logs for the previous datasources to find the cause?
You say that it is the last datasource - have you read this? http://muness.blogspot.com/2005/09/distributed-transactions-and-timeouts.html
If you need more info, can you replace iBATIS with a version with a hacked com.ibatis.sqlmap.engine.transaction.jta.JtaTransaction.init()? Add some logging there and you'll probably learn more.
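A minimal sketch of the kind of logging that could go there, assuming the patched init() can reach the JTA UserTransaction (the helper class and wiring below are illustrative, not the actual iBATIS internals):

import java.sql.Connection;
import javax.sql.DataSource;
import javax.transaction.Status;
import javax.transaction.UserTransaction;

// Hypothetical helper mirroring what a patched JtaTransaction.init() might log
// just before the connection is fetched and enlisted.
public class EnlistLogger {

    public static Connection getConnectionLogged(DataSource ds, UserTransaction utx)
            throws Exception {
        int status = utx.getStatus();
        if (status == Status.STATUS_MARKED_ROLLBACK) {
            // Some earlier participant already called setRollbackOnly();
            // any new enlistment is doomed to fail exactly as in the stacktrace.
            System.err.println("Transaction already marked rollback-only before enlist");
        } else {
            System.err.println("JTA status before enlist: " + status);
        }
        return ds.getConnection();
    }
}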
If I had to guess, I would say the last datasource is not configured properly as an XA datasource, doesn't have the XA driver installed, or doesn't support XA.
Are you doing any unusual exception handling here that would truncate the stack (catching and re-throwing but only keeping the top set of stack frames), or using a custom exception-handling library? If you are, I would abandon it. There should be a "Caused by:" with an additional lower-level stack trace related to your datasource's drivers that would reveal more information.
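To illustrate the kind of truncation I mean, here is a minimal sketch (class and messages are made up):

import java.sql.SQLException;

public class RethrowExample {

    static void lossyRethrow() {
        try {
            throw new SQLException("XA enlist failed");
        } catch (SQLException e) {
            // Bad: only the message survives; the driver-level frames are gone,
            // so the log shows no "Caused by:" section.
            throw new RuntimeException(e.getMessage());
        }
    }

    static void chainedRethrow() {
        try {
            throw new SQLException("XA enlist failed");
        } catch (SQLException e) {
            // Good: the original exception travels along as the cause and
            // its full stack appears under "Caused by:" in the log.
            throw new RuntimeException("query failed", e);
        }
    }
}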
If that isn't the case and this is the only info you're getting, it might be time to crank your server's logging up to debug or trace and get down and dirty with how WebLogic gets things done.
Alternatively, if you have support, I would verify your driver versions and configurations with your vendor. If you don't, you need to track down the documentation and verify them yourself.
I have a method annotated with @Transactional which obtains messages and performs some operations on the database to persist things correctly.
Sometimes this method fails with a deadlock, with the following stacktrace:
Caused by: org.hibernate.TransactionException: Unable to commit against JDBC Connection
at org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor.commit(AbstractLogicalConnectionImplementor.java:87)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.commit(JdbcResourceLocalTransactionCoordinatorImpl.java:272)
at org.hibernate.engine.transaction.internal.TransactionImpl.commit(TransactionImpl.java:104)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:532)
... 34 common frames omitted
Caused by: com.mysql.cj.jdbc.exceptions.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:123)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ConnectionImpl.commit(ConnectionImpl.java:813)
at com.zaxxer.hikari.pool.ProxyConnection.commit(ProxyConnection.java:361)
at com.zaxxer.hikari.pool.HikariProxyConnection.commit(HikariProxyConnection.java)
at org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor.commit(AbstractLogicalConnectionImplementor.java:81)
... 37 common frames omitted
The problem I am facing is that on the database side, MySQL is not registering any deadlock happening at that moment.
I've enabled the innodb_print_all_deadlocks variable and I verified that it is working as expected by manually raising deadlocks which are immediately logged by the db.
I've also verified that when a real deadlock happens within the application (real meaning that MySQL logs the deadlock on its side), the stacktrace is different, and the deadlock is usually detected by the database server well before the commit, for example when the application calls ClientPreparedStatement.executeUpdateInternal.
It looks weird to me that the deadlock is raised on ConnectionImpl.commit. I've checked the ConnectionImpl code and I can't understand why a deadlock could be thrown there (at the line in the stacktrace, there's just return;).
Is it possible that JPA is somehow raising a deadlock on the app side?
Leaving this here as it might help other people stuck with the same weird problem.
The reason for my problem was that the application was using the MariaDB driver instead of the MySQL one. For some reason, the MariaDB driver was throwing spurious deadlock exceptions, so the MySQL server was correct in not logging any deadlocks.
After switching to the correct driver, all problems stopped.
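If you want to verify which drivers are actually on the classpath at runtime, a small check like this can help (a sketch; note that older MariaDB drivers also accept jdbc:mysql:// URLs, so they can silently take over such connections):

import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Enumeration;

public class DriverCheck {
    public static void main(String[] args) {
        // Lists every JDBC driver registered with DriverManager. If
        // org.mariadb.jdbc.Driver appears here, it may be the one serving
        // your jdbc:mysql:// URL instead of com.mysql.cj.jdbc.Driver.
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver d = drivers.nextElement();
            System.out.printf("%s %d.%d%n", d.getClass().getName(),
                    d.getMajorVersion(), d.getMinorVersion());
        }
    }
}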
I see this exception
org.hibernate.QueryException: could not resolve property
in Dynatrace exception logs, thrown from a specific Hibernate query fired when an action is performed. I am trying to replicate this error in my local workspace (Eclipse Mars with WebSphere 8.5) in order to debug and fix it, but I don't get this error in my server logs. I have set hibernate.show_sql = true in hibernate.cfg.xml, but this only prints the HQL statements. Are there other properties I would have to set in order to see this exception in my server logs?
Dynatrace will also capture exceptions that don't make it to your log files, because Dynatrace captures exceptions when the Exception objects are created, not when they are logged to disk. This is why you typically see more exceptions in Dynatrace than in log files.
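Whether the exception also reaches your server logs depends on whether your code (or a framework layer) actually logs it after catching it; no Hibernate property will surface exceptions your application swallows. That said, raising the Hibernate logging categories can show more detail than show_sql. A sketch, assuming log4j is the logging backend (adjust for your WebSphere logging setup):

# show_sql only echoes statements; these categories log more internals.
log4j.logger.org.hibernate=INFO
# HQL parsing/translation is typically where "could not resolve property" originates.
log4j.logger.org.hibernate.hql=DEBUG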
What you could do is to use Dynatrace on your local workstation. There is a free for life version for local workstations - https://www.dynatrace.com/en/products/dynatrace-personal-license.html?utm_medium=blog&utm_source=dynatrace&utm_campaign=devops&utm_term=agrabner
Andi
I'm using Wildfly 8.2 and fire a series of DB requests when a certain web page is opened. All queries are invoked through the JPA Criteria API and return results as expected - and none of them produces a warning, error, or exception. It all runs on Parallels Plesk.
Now, I noticed that within 2 to 3 days an error appears and the site becomes unresponsive. I restart, and then wait approximately another 3 days until it happens again (depending on the number of requests I get).
I checked the tcpsndbuf on my Linux server and noticed it is constantly at its maximum unless I restart Wildfly. Apparently Wildfly fails to release the connections.
The connections are managed by JPA/Hibernate and the Wildfly container. I don't do any special or custom transaction handling (e.g. open, close, etc.); I leave it all to Wildfly.
The MySQL driver I'm using is 5.1.21 (mysql-connector-java-5.1.21-bin.jar).
In standalone.xml I have defined the following datasource values (among others):
<transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation>
<pool>
    <min-pool-size>3</min-pool-size>
    <max-pool-size>10</max-pool-size>
</pool>
<statement>
    <prepared-statement-cache-size>32</prepared-statement-cache-size>
    <shared-prepared-statements>true</shared-prepared-statements>
</statement>
Has anyone experienced the same rise in tcpsndbuf values (or this error)? In case you require more config or log files, let me know. Thanks!
UPDATE
Despite additional timeout settings, it still runs into the hang; whenever the max tcpsndbuf is reached, it then uses 100% CPU time.
Try adding this Hibernate property:
<property name="hibernate.connection.release_mode">after_transaction</property>
By default, JTA mandates that the connection be released after each statement, which is undesirable for most use cases. Most drivers don't allow multiplexing a connection over multiple XA transactions anyway.
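For JPA on an application server, the property usually goes into persistence.xml; a sketch (the persistence-unit name is illustrative):

<persistence-unit name="myPU" transaction-type="JTA">
  <properties>
    <!-- release the JDBC connection at transaction end instead of after each statement -->
    <property name="hibernate.connection.release_mode" value="after_transaction"/>
  </properties>
</persistence-unit>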
Do you use OpenVZ? I think this question should be asked on Server Fault, as it is related to Linux configuration. You can read up on tcpsndbuf. You should count the open sockets and check the barrier/limit condition described there.
Our PostgreSQL application is getting a Hibernate error: org.hibernate.util.JDBCExceptionReporter - ERROR: deadlock detected. One of the recommended ways to deal with this problem is setting a transaction timeout (Hibernate Reference Documentation):
sess.getTransaction().setTimeout(3)
How is this value of 3 seconds defined?
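For reference, the call shown above comes from plain Hibernate transaction handling; a minimal sketch (the sessionFactory wiring is assumed). The value is in seconds, and setTimeout() must be called before begin() to take effect:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class TimeoutExample {
    static void runWithTimeout(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.getTransaction();
        tx.setTimeout(3); // seconds; must be set before begin()
        tx.begin();
        try {
            // ... queries ...
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}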
If your queries are deadlocking, look into why they are deadlocking and fix that. The error message in the PostgreSQL server error log tells you about the transactions that deadlocked. If that alone isn't enough, set log_statement = 'all', add a log_line_prefix to identify transactions or use csv logging, and analyze the log files to see what's happening.
If you're stuck you can hand-recreate the deadlock and take a look at pg_locks for additional information about what's going on; see the lock monitoring wiki article.
If the deadlock detection timeout is too long for you, lower PostgreSQL's deadlock detection timeout; don't add statement-timeout hacks in the application. See the documentation on lock management.
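A sketch of the relevant postgresql.conf settings (values are illustrative):

log_statement = 'all'             # log every statement
log_line_prefix = '%m [%p] %x '   # timestamp, process id, transaction id
deadlock_timeout = 200ms          # check for deadlocks sooner (default is 1s)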
I'm curious about something... is it possible for an Oracle 11 instance to be configured so that it does not return any ORA-?????? error messages?
I've issued many invalid queries where I've misspelled column names, table names... things where I would expect an ORA error message.
Say, for security purposes, a stray Java stack trace got exposed to a browser: could you force Oracle to always show the same bogus error message in the stack trace?
I always get this one: java.sql.SQLException: IO Error: Size Data Unit (SDU) mismatch
I've googled that error up and down, and I do not have any connection or database configuration issues at all! I get it on a per-query basis.
Not a direct solution, but I was having the same problem with the SDU mismatch masking the real error. I found a link (http://www.rajivnarula.com/blog/2013/03/13/table-not-found-or-error-not-found/) that describes an indirect way to expose it:
I tried swapping the JDBC driver with the older ojdbc14.jar and voilà! The real error was exposed: good old ORA-00942 (table or view does not exist). Once I put the table in, everything worked fine, with ojdbc14.jar as well as ojdbc6.jar.
Obviously a pain, but useful until someone posts a way to get the underlying error with the newer driver...
Basically your setup is not correct. The SDU size has been set on the client, on the server, or on both, and the values do not match between client and server. On the client, the SDU size can be set in
the sqlnet.ora file, or
the connect descriptor.
On the server, it can be set in
the sqlnet.ora file,
the DISPATCHERS init.ora parameter, or
the listener.ora file.
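For example (a sketch; 8192 and the address values are illustrative, and the client and server values must match):

# client sqlnet.ora
DEFAULT_SDU_SIZE=8192

# or per connection, in the connect descriptor
(DESCRIPTION=(SDU=8192)
  (ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=orcl)))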
If you are still not convinced, trace the TNS traffic to verify this. Client-side tracing can be enabled by adding the following settings to the sqlnet.ora file:
trace_level_client = 10
trace_unique_client = on
trace_file_client = sqlnet.trc
trace_directory_client = <path_to_trace_dir>
Server-side tracing can be enabled with the following settings:
trace_level_server = 10
trace_file_server = server.trc
trace_directory_server = <path_to_trace_dir>
If level 10 is not sufficient, set the level to 16. This will create a trace file that you can analyze.
You can try upgrading the Oracle 11g JDBC driver to a version newer than 11.2.0.3.0, as described here.
There is another possibility: the table in the query may not exist in the database. Check the table name in the query, and make sure you are not trying to reach an Oracle database using the MySQL driver.
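A quick way to confirm is to connect with an explicit Oracle URL and print which driver served the connection; a sketch (host, port, service name, and credentials are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class OracleUrlCheck {
    public static void main(String[] args) throws SQLException {
        // An Oracle database must be reached through an Oracle JDBC URL;
        // a jdbc:mysql:// URL would hand the connection to the MySQL driver.
        Connection c = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "user", "password");
        System.out.println("Connected via: " + c.getMetaData().getDriverName());
        c.close();
    }
}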