Azure SQL: error code for "The connection is closed" exception - Java

I am occasionally getting the exception below in my Java code that connects to Azure SQL Server. For this, I need to implement retry logic, i.e. when I hit this exception I will 1) create a new connection, 2) re-execute the SQL query, and 3) commit the transaction.
However, I am unable to find the Azure SQL Server error code for this error. What is the error code? I do not see it when I run the following query:
SELECT * FROM sys.messages WHERE language_id = 1033
Exception:
com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:227)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.checkClosed(SQLServerConnection.java:796)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.commit(SQLServerConnection.java:2681)
Also, is it good practice to call e.getMessage(), check whether the returned string is "The connection is closed.", and then retry?
I do not see this error code in the SQL error code documentation either.

This exception seems to be related to an attempt to convert from a supported data type to an unsupported one.
Try another version of the JDBC driver.
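For the retry itself, here is a minimal sketch along the lines described in the question, keyed off the SQLSTATE class rather than the message text. The helper name and attempt count are illustrative, and the driver does not always populate a SQLSTATE for driver-side errors such as "The connection is closed.", so treat the check as an assumption:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RetryingWriter {

    private static final int MAX_ATTEMPTS = 3;

    // Re-runs the whole unit of work (new connection, re-execute, commit)
    // when a connection-class failure (SQLSTATE 08xxx) is reported.
    public void updateWithRetry(DataSource dataSource, String sql, Object... params)
            throws SQLException {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                con.setAutoCommit(false);
                for (int i = 0; i < params.length; i++) {
                    ps.setObject(i + 1, params[i]);
                }
                ps.executeUpdate();
                con.commit();
                return;                              // success, stop retrying
            } catch (SQLException e) {
                String state = e.getSQLState();      // class "08" = connection exception
                boolean connectionProblem = state != null && state.startsWith("08");
                if (!connectionProblem || attempt == MAX_ATTEMPTS) {
                    throw e;                         // not retryable, or out of attempts
                }
                // otherwise loop around and retry on a fresh connection
            }
        }
    }
}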

Related

Best way to solve conflict between EclipseLink @Version timestamp field and DB2

In the project I'm currently working on, we use EclipseLink as the JPA provider, with a DB2 database in the background.
We want to introduce optimistic locking with the restriction that only timestamp fields may be used. Furthermore, these are CURRENT TIMESTAMP columns, so when an entity is updated the database sets the value itself. This is necessary to guarantee the reliability of the whole system.
When annotating the attribute of an entity with @Version, I get the following error message:
Internal Exception: com.ibm.websphere.ce.cm.StaleConnectionException: THE SQL STATEMENT IS NOT SUPPORTED. SQLCODE=-142, SQLSTATE=42612, DRIVER=4.22.29
Error Code: -142
Call: VALUES CURRENT TIMESTAMP
Query: ValueReadQuery(sql="VALUES CURRENT TIMESTAMP")
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl$1.handleException(EntityManagerSetupImpl.java:767)
at org.eclipse.persistence.transaction.AbstractSynchronizationListener.handleException(AbstractSynchronizationListener.java:275)
[…]
Note: I only get this error message when using the mentioned annotation.
My research resulted in the following:
Internal Exception: com.ibm.websphere.ce.cm.StaleConnectionException:
--> A StaleConnectionException is an exception that is generated by the WebSphere Application Server database connection code when a JDBC driver returns a fatal error from a connection request or operation. In WebSphere Application Server, the StaleConnectionException is issued when the database vendor issues an exception indicating that a connection currently in the connection pool is no longer valid. ( https://developer.ibm.com/answers/questions/205910/how-to-resolve-the-staleconnectionexception-in-web/ )
THE SQL STATEMENT IS NOT SUPPORTED. SQLCODE=-142,
--> An SQL statement was detected that is not supported by the database. The statement might be valid for other IBM® relational database products or it might be valid in another context. For example, statements such as VALUES and SIGNAL or RESIGNAL SQLSTATE can be used only in certain contexts, such as in a trigger body or in an SQL Procedure. ( https://www.ibm.com/support/knowledgecenter/en/SSEPEK_10.0.0/codes/src/tpc/n142.html )
SQLSTATE=42612,
--> Seems to be internal, not relevant for my case.
After getting more into the topic, I came across the following link, which appears as the most encouraging so far:
https://www.idug.org/p/fo/et/thread=36380
In summary, the suggested way is to extend org.eclipse.persistence.platform.database.DB2MainframePlatform and manipulate the generated SQL statement.
My expectation is that there should be a better way, since this should be a standard case for the framework. If not, is there a more proper way to solve this problem?
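For reference, a minimal sketch of the platform-override approach from the IDUG thread, assuming the failing VALUES CURRENT TIMESTAMP comes from the platform's timestamp query; the subclass name is hypothetical:

import org.eclipse.persistence.platform.database.DB2MainframePlatform;
import org.eclipse.persistence.queries.ValueReadQuery;

// Hypothetical platform subclass that replaces the VALUES CURRENT TIMESTAMP
// probe with a statement DB2 accepts outside trigger/procedure contexts.
public class SysdummyTimestampPlatform extends DB2MainframePlatform {

    @Override
    public ValueReadQuery getTimestampQuery() {
        if (timestampQuery == null) {
            timestampQuery = new ValueReadQuery(
                    "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1");
        }
        return timestampQuery;
    }
}

The subclass would then be registered through the eclipselink.target-database persistence-unit property, using its fully qualified class name.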

Postgres JDBC: Specific error code of PSQLException?

When writing Java code that uses an Oracle database, one can always catch SQLException and read a specific Oracle error with e.getErrorCode(). For example, error 28001 means an expired password, 28000 is a locked account, 1017 is an invalid username/password, etc.
That way I can handle different errors appropriately.
But with PostgreSQL databases, e.getErrorCode() always returns 0, even when catching the Postgres-specific PSQLException.
The Question
Is there a way I don't know of to get a specific error code for a Postgres database exception in Java, other than trying to parse the error message (which, by the way, could be in any localized language)?
Have you tried looking at getSQLState() instead? See also: http://www.postgresql.org/docs/9.3/static/errcodes-appendix.html
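For illustration, a small sketch of branching on the SQLSTATE; the connection details are placeholders, and the two states shown (23505 unique_violation, 28P01 invalid_password) are just examples from the appendix linked above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PgErrorHandling {

    public static void main(String[] args) {
        String url = "jdbc:postgresql://localhost:5432/mydb";   // placeholder
        try (Connection con = DriverManager.getConnection(url, "user", "secret");
             PreparedStatement ps = con.prepareStatement("INSERT INTO accounts(id) VALUES (?)")) {
            ps.setLong(1, 1L);
            ps.executeUpdate();
        } catch (SQLException e) {
            // PostgreSQL reports a five-character SQLSTATE rather than a vendor error code
            String state = e.getSQLState();
            if ("23505".equals(state)) {            // unique_violation
                System.err.println("Duplicate key: " + e.getMessage());
            } else if ("28P01".equals(state)) {     // invalid_password
                System.err.println("Authentication failed");
            } else {
                throw new RuntimeException(e);
            }
        }
    }
}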

java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1 ORA-16000: database open for read-only access

I have a Spring Batch application which reads data from a database and writes the result to a .dat file.
The job runs fine against a DB with read and write permissions. But if I run the job against a DB with read-only access, I get the error below:
org.springframework.dao.DataAccessResourceFailureException:
Could not obtain sequence value;nested exception is java.sql.SQLException:
ORA-00604: error occurred at recursive SQL level 1 ORA-16000:
database open for read-only access
My query is a simple SELECT statement. I don't know what the root cause of this error is. Please suggest.
You did not post your query, so I will assume that you are selecting from a non-local table via a DB link, and Oracle starts a transaction just in case.
Try setting your transaction read-only as well, before executing your query:
set transaction read only;
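As a rough JDBC sketch of that suggestion (the table and DB-link names are hypothetical placeholders):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class ReadOnlyLinkedQuery {

    public void readFromLinkedTable(DataSource dataSource) throws Exception {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            try (Statement stmt = con.createStatement()) {
                // Declare the transaction read-only before touching the DB link
                stmt.execute("SET TRANSACTION READ ONLY");
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT id, name FROM remote_table@remote_link")) {
                    while (rs.next()) {
                        System.out.println(rs.getLong(1) + " " + rs.getString(2));
                    }
                }
            }
            con.commit();   // ends the read-only transaction
        }
    }
}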

Spring Batch - connection closed when processing is done in an external process

I have a job that is built of several steps; one of the steps is a tasklet that triggers processing in Pentaho.
I pass Pentaho the parameters it needs to connect to the DB on its own, and that works OK.
The issue starts when the processing time in Pentaho is long.
Pentaho completes successfully, and the code in the tasklet that triggered it completes OK, but the job mechanism that wraps it fails when it tries to update the job execution table in the DB, because the connection it holds has already been closed:
o.s.j.s.SQLErrorCodesFactory: Error while extracting database product name - falling back to empty error codes
org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData;
nested exception is
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:296)
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:320)
at org.springframework.jdbc.support.SQLErrorCodesFactory.getErrorCodes(SQLErrorCodesFactory.java:214)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.setDataSource(SQLErrorCodeSQLExceptionTranslator.java:141)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.<init>(SQLErrorCodeSQLExceptionTranslator.java:104)
at org.springframework.jdbc.support.JdbcAccessor.getExceptionTranslator(JdbcAccessor.java:99)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:603)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:812)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:868)
at org.springframework.batch.core.repository.dao.JdbcExecutionContextDao.persistSerializedContext(JdbcExecutionContextDao.java:230)
at org.springframework.batch.core.repository.dao.JdbcExecutionContextDao.updateExecutionContext(JdbcExecutionContextDao.java:159)
at org.springframework.batch.core.repository.support.SimpleJobRepository.updateExecutionContext(SimpleJobRepository.java:203)
...
14:21:37.143 UTC [ERROR] jobScheduler_Worker-2 T:b U: o.s.t.i.TransactionInterceptor: Application exception overridden by rollback exception
org.springframework.dao.RecoverableDataAccessException: PreparedStatementCallback; SQL [UPDATE BAT_STEP_EXECUTION_CONTEXT SET SHORT_CONTEXT = ?, SERIALIZED_CONTEXT = ? WHERE STEP_EXECUTION_ID = ?]; Communications link failure
It looks like the connection that the job repository received when the job started was abandoned, and I'm trying to understand whether there is a way to make it obtain a new connection or give it some keep-alive command.
I tried the following workarounds:
changing the step status in a job listener so the job will complete - fails with the same DB error
marking this exception as skippable - fails with the same DB error:
<batch:no-rollback-exception-classes>
<batch:include class="com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException" />
<batch:include class="org.springframework.jdbc.support.MetaDataAccessException" />
</batch:no-rollback-exception-classes>
Any ideas how I can work around this?
Can I configure a job listener that will restart the job from the step that follows the Pentaho step?
Additional info
I think that the issue is here:
org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSource)
This line:
ConnectionHolder conHolder = (ConnectionHolder) TransactionSynchronizationManager.getResource(dataSource);
assumes that the connection is valid.
So I guess the solution will be to call org.springframework.transaction.support.TransactionSynchronizationManager.unbindResource(Object), and the question is how I can get the data source object to pass to this method.
I will try querying org.springframework.transaction.support.TransactionSynchronizationManager.getResourceMap() and see where it gets me.
Update
No luck - getResourceMap() gives me just the repositories I'm using, not the data source. Still digging...
Another update
I'm debugging the process, and it seems that the problem is indeed in org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSource): the connection holder is holding a connection that is closed, but the code there doesn't check whether the connection is open; it only checks that the connection isn't null. If it were some weak reference, maybe that would be enough, but in this case it just proceeds with the closed connection instead of requesting a new one.
Add this to the tasklet definition:
<batch:transaction-attributes propagation="NEVER" />
Since the tasklet is doing external processing and doesn't need a Spring Batch transaction, you need to tell Spring Batch not to open a transaction for this tasklet.
See:
http://www.javabeat.net/transaction-management-in-spring-batch-components/
http://forum.spring.io/forum/spring-projects/batch/91158-legacy-integration-tasklet-transaction
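If the job is defined with Java config rather than XML, a rough equivalent might look like the sketch below; the bean names and the injected tasklet are hypothetical, and the assumption is that the step builder's transactionAttribute() with PROPAGATION_NEVER mirrors the XML attribute above:

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.interceptor.DefaultTransactionAttribute;

@Configuration
public class PentahoStepConfig {

    @Bean
    public Step pentahoStep(StepBuilderFactory steps, Tasklet pentahoTasklet) {
        // Tell Spring Batch not to open a transaction around this tasklet,
        // matching <batch:transaction-attributes propagation="NEVER" />
        DefaultTransactionAttribute noTransaction =
                new DefaultTransactionAttribute(TransactionDefinition.PROPAGATION_NEVER);
        return steps.get("pentahoStep")
                .tasklet(pentahoTasklet)
                .transactionAttribute(noTransaction)
                .build();
    }
}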

PL/SQL: numeric or value error: character string buffer too small when running query multiple times

I've got a webapp in Java EE, and sometimes I see the following error in the logs:
org.apache.ibatis.exceptions.PersistenceException:
Error querying database. Cause: java.sql.SQLException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
I turned on statement logging, and if I run the query from SQL Developer it runs successfully, without any errors.
However, if I run the same query a few times, I get this error. I guess the cause is on the DB server; any ideas?
You need to trace the session to get the exact SQL statement and bind variables that caused the error. Set the following in your code right after connecting, to create a trace file on the database server:
alter session set events '10046 trace name context forever, level 8';
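A small sketch of enabling that trace from JDBC right after getting the connection (the helper name and the tracefile identifier are just illustrative):

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class TracedConnections {

    // Turns on a level-8 10046 trace for the session so the failing statement
    // and its bind values end up in a trace file on the database server.
    public Connection openTraced(DataSource dataSource) throws Exception {
        Connection con = dataSource.getConnection();
        try (Statement stmt = con.createStatement()) {
            stmt.execute("alter session set tracefile_identifier = 'ora6502_debug'");
            stmt.execute("alter session set events '10046 trace name context forever, level 8'");
        }
        return con;
    }
}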
