In a batch script I use a loop to execute a bunch of SQL (HQL) statements against
a Teradata database. After some iterations I receive the following error:
Teradata database: 3130 Response limit exceeded
The documentation suggests (as does the answer to this question) that this is due to too many open result sets for the same session.
The session and the ResultSet are managed by the EntityManager, and I wonder if there is a way to avoid closing and reopening the connection in this case via JPA/Hibernate.
I have tried entityManager.clear() and flush() without any effect.
Is there a way to handle this better? Maybe I am missing something. My "batch" runs under Spring 2.5 in a "CLI" mode.
In my case it turned out to be a row with large BLOB data. After refining the retrieval steps I could fetch the data without 3130 popping up.
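If you hit the same 3130 error because open responses accumulate, one pattern that can help is committing and clearing the persistence context in chunks, so the session never holds more than a handful of open result sets. A minimal sketch, assuming resource-local JPA transactions and update-style statements; the class name and chunk size are illustrative, not anything from the original question:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;

public class ChunkedHqlRunner {

    private static final int CHUNK_SIZE = 50; // assumed; tune for your workload

    // Executes HQL statements in chunks, committing and clearing between
    // chunks so server-side responses are released instead of piling up.
    public void run(EntityManager em, List<String> hqlStatements) {
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        int count = 0;
        for (String hql : hqlStatements) {
            em.createQuery(hql).executeUpdate();
            if (++count % CHUNK_SIZE == 0) {
                tx.commit(); // releases resources held by the transaction
                em.clear();  // detach everything; keeps the context small
                tx.begin();
            }
        }
        tx.commit();
    }
}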
Related
How can I check that a connection to the DB is active or lost using Spring Data JPA?
Is the only way to run a query like "SELECT 1"?
Nothing. Just execute your query. If the connection has died, either your JDBC driver will reconnect (if it supports it, and you enabled it in your connection string--most don't support it) or else you'll get an exception.
If you check the connection is up, it might fall over before you actually execute your query, so you gain absolutely nothing by checking.
That said, a lot of connection pools validate a connection by doing something like SELECT 1 before handing connections out. But this is nothing more than just executing a query, so you might just as well execute your business query.
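For example, most pools expose this as a test-query setting. A sketch using HikariCP (an assumption; whatever pool you use will have an equivalent knob):

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {

    // Builds a pool that validates connections with SELECT 1 before handing
    // them out. With JDBC4 drivers, leaving the test query unset makes
    // HikariCP validate via Connection.isValid() instead.
    public static DataSource newPool(String jdbcUrl) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(jdbcUrl);
        config.setConnectionTestQuery("SELECT 1");
        return new HikariDataSource(config);
    }
}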
Your best chance is to just perform a simple query against one table, e.g.:
select 1 from SOME_TABLE;
Also note java.sql.Connection#isValid(int): https://docs.oracle.com/javase/6/docs/api/java/sql/Connection.html#isValid%28int%29
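If your driver implements it, isValid(int) performs that check for you without a hand-written query. A minimal sketch (the helper class is illustrative, not a library API):

import java.sql.Connection;
import java.sql.SQLException;

public final class ConnectionCheck {

    // Returns true if the connection answers a validity check within the
    // given timeout; never throws, so callers can use it freely.
    public static boolean isAlive(Connection conn, int timeoutSeconds) {
        try {
            return conn != null && conn.isValid(timeoutSeconds);
        } catch (SQLException e) {
            return false; // treat any failure as "connection is gone"
        }
    }
}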
If you can use Spring Boot, Spring Boot Actuator is useful here.
Actuator configures a health check automatically; once it is active,
you can get the database status by requesting the "health" endpoint:
http://[CONTEXT_ROOT]/health
and it will return the database status like below:
{"status":"UP","db":{"status":"UP","database":"PostgreSQL","hello":1}}
I've created a mariadb cluster and I'm trying to get a Java application to be able to failover to another host when one of them dies.
I've created an application that creates a connection with "jdbc:mysql:sequential://host1,host2,host3/database?socketTimeout=2000&autoReconnect=true". The application makes a query in a loop every second. If I kill the node where the application is currently executing the query (Statement.executeQuery()) I get a SQLException because of a timeout. I can catch the exception and re-execute the statement and I see that the request is being sent to another server, so failover in that case works ok. But I was expecting that executeQuery() would not throw an exception and silently retry another server automatically.
Am I wrong in assuming that I shouldn't have to handle an exception and explicitly retry the query? Is there something more I need to configure for that to happen?
It is dangerous to auto reconnect for the following reason. Let's say you have this code:
BEGIN;
SELECT ... FROM tbl WHERE ... FOR UPDATE;
(line 3)
UPDATE tbl ... WHERE ...;
COMMIT;
Now let's say the server crashes at (line 3). The transaction will be rolled back. In my fabricated example, that only involves releasing the lock on tbl.
Now let's say that some other connection succeeds in performing the same transaction on the same row while you are auto-reconnecting.
Now, with auto-reconnect, the first thread is oblivious that the first half of the transaction was rolled back and proceeds to do the UPDATE based on data that is now out of date.
You need to get an exception so that you can go back to the BEGIN so that you can be "transaction safe".
You need this anyway. With Galera, and no crashes, a similar thing could happen: two threads performing that transaction on two different nodes at the same time... Each succeeds until it gets to the COMMIT, at which point the Galera magic happens and one of the COMMITs is told to fail. The 'right' response is to replay the entire transaction on the server that was chosen for failure.
Note that Galera, unlike non-Galera, requires checking for errors on COMMIT.
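A sketch of what "replay the entire transaction" can look like at the JDBC level; the callback shape and retry count are my assumptions, not anything Galera-specific:

import java.sql.Connection;
import java.sql.SQLException;

public class TransactionReplay {

    interface TxBody { void run(Connection conn) throws SQLException; }

    // Re-runs the whole BEGIN..COMMIT body on failure rather than resuming
    // mid-transaction. maxAttempts must be >= 1. A dead connection can make
    // rollback() itself throw, which ends the loop; fine for a sketch.
    public void runWithReplay(Connection conn, TxBody body, int maxAttempts)
            throws SQLException {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                conn.setAutoCommit(false);
                body.run(conn);   // the SELECT ... FOR UPDATE / UPDATE section
                conn.commit();    // Galera may reject exactly here
                return;
            } catch (SQLException e) {
                last = e;
                conn.rollback();  // discard the half-done work, then replay
            }
        }
        throw last;
    }
}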
More Galera tips (aimed at devs and dbas migrating from non-Galera)
Failover doesn't mean that the application doesn't have to handle exceptions.
The driver will try to reconnect to another server when the connection is lost.
If the driver fails to reconnect to another server, a SQLNonTransientConnectionException will be thrown; pools will automatically discard those connections.
If the connection is recovered, there are some marginal cases where relaunching the query is safe: for example, when the query is not in a transaction and the connection is currently in read-only mode (using Spring @Transactional(readOnly = true)). In those cases, the MariaDB Java connector will relaunch the query automatically, no exception will be thrown, and failover is transparent.
The driver cannot re-execute the current query during a transaction.
Even without a transaction, if the query is an UPDATE command, the driver cannot know whether the last request was received by the database server and executed.
In that case the driver will throw an SQLException (with SQLState beginning with "25" = INVALID_TRANSACTION_STATE), and it's up to the application to handle those cases.
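So for a plain read like the one in the question, a small application-level retry loop remains the safe approach. A sketch (table name and retry count are assumptions):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FailoverRetry {

    // Retries a plain read outside any transaction; safe to replay because
    // a SELECT has no side effects. maxAttempts must be >= 1.
    public int countRows(Connection conn, int maxAttempts) throws SQLException {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM tbl")) {
                rs.next();
                return rs.getInt(1);
            } catch (SQLException e) {
                last = e; // the driver may have failed over; try again
            }
        }
        throw last;
    }
}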
I have configured the Mongo URI in a property file as below:
spring.data.mongodb.uri=mongodb://db1.dev.com,db2.dev.com,db3.dev.com
spring.data.mongodb.database=mydb
I use mongoowl as a monitoring tool.
When I do a GET request, it shows hits on every MongoDB node, which ideally should show up in only one DB, right?
No. You are actually opening a replica set cluster connection; with this connection type, Spring connects to all three hosts to maintain failover and to fulfill the "read from secondary" option (hence you see hits on all three databases). However, read and write operations happen only on the primary unless you have explicitly configured reads from a secondary.
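If you do want reads to go to secondaries, that is opt-in. For example (assuming eventually consistent reads are acceptable for your use case), a readPreference option can be added to the URI:
spring.data.mongodb.uri=mongodb://db1.dev.com,db2.dev.com,db3.dev.com/?readPreference=secondaryPreferred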
I'm using the latest Derby, 10.11.1.1.
Doing something like this:
DriverManager.registerDriver(new org.apache.derby.jdbc.EmbeddedDriver());
java.sql.Connection connection = DriverManager.getConnection("jdbc:derby:filePath", ...);
Statement stmt = connection.createStatement();
stmt.setQueryTimeout(2); // should stop the query after 2 seconds, but it does nothing
stmt.executeQuery(strSql);
stmt.cancel(); // this would in fact run in another thread
I get the exception "java.sql.SQLFeatureNotSupportedException: Caused by: ERROR 0A000: Feature not implemented: cancel".
Do you know if there is a way to make it work? Or is it really not implemented in Derby, so that I would need to use a different embedded database? Any tip for a free DB which I could use instead of Derby and which supports SQL timeouts?
As I found in the Java docs:
void cancel() throws SQLException
Cancels this Statement object if both the DBMS and driver support aborting an SQL statement. This method can be used by one thread to cancel a statement that is being executed by another thread.
and it throws:
SQLFeatureNotSupportedException - if the JDBC driver does not support this method
You can go with MySQL.
There are many embedded databases available; you can go through this list:
embedded database
If you get Feature not implemented: cancel, then that is definitive: cancel is not supported.
From this post by H2's author it looks like H2 supports two ways to timeout your queries, both through the JDBC API and through a setting on the JDBC URL.
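For comparison, a sketch of the same attempt against H2, where setQueryTimeout is honored (the in-memory URL and trivial query are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class H2TimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement stmt = conn.createStatement()) {
            stmt.setQueryTimeout(2); // seconds; H2 enforces this
            try (ResultSet rs = stmt.executeQuery("SELECT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }
}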
Actually I found that there is a lock wait timeout in Derby as well; it is just set to 60 seconds by default, and I never had the patience to reach it :).
So the correct answer would be:
stmt.setQueryTimeout(2); truly does not seem to work
stmt.cancel(); truly is not implemented
But luckily a timeout exists in the database manager, and it is set to 60 seconds by default. See Derby dead-locks.
The timeout (in seconds) can be changed using this command:
statement.executeUpdate("CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY(" +
"'derby.locks.waitTimeout', '5')");
And it works :)
I have a job that is built of several steps. One of the steps is a tasklet that triggers processing in Pentaho.
I pass Pentaho the parameters it needs in order to connect to the DB on its own, and that works OK.
The issue starts when the processing time in Pentaho is long:
Pentaho completes successfully, and the code in the tasklet that activated it completes OK, but in the job mechanism that wraps it I get an error when it tries to update the job execution table in the DB, because the connection it holds has already been closed:
o.s.j.s.SQLErrorCodesFactory: Error while extracting database product name - falling back to empty error codes
org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData;
nested exception is
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:296)
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:320)
at org.springframework.jdbc.support.SQLErrorCodesFactory.getErrorCodes(SQLErrorCodesFactory.java:214)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.setDataSource(SQLErrorCodeSQLExceptionTranslator.java:141)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.<init>(SQLErrorCodeSQLExceptionTranslator.java:104)
at org.springframework.jdbc.support.JdbcAccessor.getExceptionTranslator(JdbcAccessor.java:99)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:603)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:812)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:868)
at org.springframework.batch.core.repository.dao.JdbcExecutionContextDao.persistSerializedContext(JdbcExecutionContextDao.java:230)
at org.springframework.batch.core.repository.dao.JdbcExecutionContextDao.updateExecutionContext(JdbcExecutionContextDao.java:159)
at org.springframework.batch.core.repository.support.SimpleJobRepository.updateExecutionContext(SimpleJobRepository.java:203)
...
14:21:37.143 UTC [ERROR] jobScheduler_Worker-2 T:b U: o.s.t.i.TransactionInterceptor: Application exception overridden by rollback exception
org.springframework.dao.RecoverableDataAccessException: PreparedStatementCallback; SQL [UPDATE BAT_STEP_EXECUTION_CONTEXT SET SHORT_CONTEXT = ?, SERIALIZED_CONTEXT = ? WHERE STEP_EXECUTION_ID = ?]; Communications link failure
It looks like the connection that the job repository received when the job started was abandoned, and I'm trying to understand if there is a way to order it to get a new connection or send it some keep-alive command.
I tried the following workarounds:
changing the step status in a job listener so the job will complete - fails with the same DB error
marking this exception as skippable - fails with the same DB error:
<batch:no-rollback-exception-classes>
<batch:include class="com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException" />
<batch:include class="org.springframework.jdbc.support.MetaDataAccessException" />
</batch:no-rollback-exception-classes>
Any ideas how I can work around this?
Can I configure a job listener that will restart the job from the step that follows the Pentaho step?
Additional info
I think that the issue is here:
org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSource)
This line:
ConnectionHolder conHolder = (ConnectionHolder) TransactionSynchronizationManager.getResource(dataSource);
thinks that the connection is valid,
so I guess the solution will be to call org.springframework.transaction.support.TransactionSynchronizationManager.unbindResource(Object),
and the question is how I can get the data source object to pass to this method.
I will try querying
org.springframework.transaction.support.TransactionSynchronizationManager.getResourceMap() and see where it gets me.
Update
No luck - the resource map gives me just the repositories I'm using, not the data source. Still digging...
Another update
I'm debugging the process, and it seems that the problem is indeed in org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSource): the connection holder is holding a connection that is closed, but the code here doesn't check whether the connection is open; it only checks that the connection isn't null. If it were some weak reference, maybe that would be enough, but in this use case it just proceeds with the closed connection instead of requesting a new one.
Add this to the tasklet definition:
<batch:transaction-attributes propagation="NEVER" />
Since the tasklet is doing external processing and doesn't need a Spring Batch transaction, you need to tell Spring Batch not to open a transaction for this tasklet.
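If you configure the job in Java instead of XML, a sketch of the equivalent; the configuration class, bean names, and step name are illustrative, but transactionAttribute is the Spring Batch builder hook for this:

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.interceptor.DefaultTransactionAttribute;

@Configuration
public class PentahoStepConfig {

    @Bean
    public Step pentahoStep(StepBuilderFactory steps, Tasklet pentahoTasklet) {
        // PROPAGATION_NEVER is the Java equivalent of propagation="NEVER"
        DefaultTransactionAttribute attribute = new DefaultTransactionAttribute();
        attribute.setPropagationBehavior(TransactionDefinition.PROPAGATION_NEVER);
        return steps.get("pentahoStep")
                .tasklet(pentahoTasklet)
                .transactionAttribute(attribute)
                .build();
    }
}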
See:
http://www.javabeat.net/transaction-management-in-spring-batch-components/
http://forum.spring.io/forum/spring-projects/batch/91158-legacy-integration-tasklet-transaction