InvalidStateException while trying to enter data into DB - java

I have a method that returns the entity manager for a particular DB. The first time I use the method to get an entity manager, everything works fine: I can save data into any of the tables A, B, C using the entity manager. Now say I get an exception while saving into table B.
When I try to perform any operation on the DB after getting the exception above, the same code now fails while updating table A itself. I can see the following exception:
<openjpa-1.2.2-SNAPSHOT-r422266:778978M-OPENJPA-975 nonfatal user error> org.apache.openjpa.persistence.InvalidStateException: The factory has been closed. The stack trace at which the factory was closed is available if Runtime=TRACE logging is enabled.
at org.apache.openjpa.kernel.AbstractBrokerFactory.assertOpen(AbstractBrokerFactory.java:673)
at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:182)
at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:142)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:192)
at ..

Somewhere in your code you (or a framework) close your EntityManagerFactory. Make sure you didn't call close() on it.
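For reference, a minimal sketch of the intended lifecycle, assuming a hypothetical persistence unit name: the EntityManagerFactory is created once and stays open for the life of the application, while each unit of work opens and closes its own EntityManager.

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class PersistenceUtil {
    // Created once; closed only on application shutdown.
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("myPu"); // hypothetical unit name

    public static void save(Object entity) {
        EntityManager em = EMF.createEntityManager();
        try {
            em.getTransaction().begin();
            em.persist(entity);
            em.getTransaction().commit();
        } catch (RuntimeException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            throw e;
        } finally {
            em.close(); // close the EntityManager, never the factory
        }
    }
}

If an error-handling path closes the factory instead of (or in addition to) the EntityManager, every later createEntityManager() call fails with exactly this InvalidStateException.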

Using Galera MariaDB with JDBC

I've created a MariaDB cluster and I'm trying to get a Java application to fail over to another host when one of them dies.
I've created an application that opens a connection with "jdbc:mysql:sequential://host1,host2,host3/database?socketTimeout=2000&autoReconnect=true". The application runs a query in a loop every second. If I kill the node where the application is currently executing the query (Statement.executeQuery()), I get a SQLException because of a timeout. I can catch the exception and re-execute the statement, and I see that the request is then sent to another server, so failover in that case works OK. But I was expecting that executeQuery() would not throw an exception and would silently retry on another server automatically.
Am I wrong in assuming that I shouldn't have to handle an exception and explicitly retry the query? Is there something more I need to configure for that to happen?
It is dangerous to auto-reconnect, for the following reason. Let's say you have this code:
BEGIN;
SELECT ... FROM tbl WHERE ... FOR UPDATE;
(line 3)
UPDATE tbl ... WHERE ...;
COMMIT;
Now let's say the server crashes at (line 3). The transaction will be rolled back. In my fabricated example, that only involves releasing the lock on tbl.
Now let's say that some other connection succeeds in performing the same transaction on the same row while you are auto-reconnecting.
Now, with auto-reconnect, the first thread is oblivious that the first half of the transaction was rolled back and proceeds to do the UPDATE based on data that is now out of date.
You need to get an exception so that you can go back to the BEGIN so that you can be "transaction safe".
You need this anyway. With Galera, even with no crashes, a similar thing can happen: two threads perform that transaction on two different nodes at the same time, and each succeeds until it gets to the COMMIT, at which point the Galera magic happens and one of the COMMITs is told to fail. The 'right' response is to replay the entire transaction on the node that was chosen for failure.
Note that Galera, unlike non-Galera, requires checking for errors on COMMIT.
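For illustration, a hedged JDBC sketch of that replay loop; the table, columns, and retry count are made up:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ReplayExample {
    // Sketch: replay the whole transaction from BEGIN when any statement
    // (including COMMIT) fails, instead of resuming mid-transaction.
    static void runTransactionWithReplay(DataSource ds, long id) throws SQLException {
        final int maxAttempts = 3;
        for (int attempt = 1; ; attempt++) {
            try (Connection con = ds.getConnection()) {
                con.setAutoCommit(false); // BEGIN
                try {
                    try (PreparedStatement sel = con.prepareStatement(
                            "SELECT qty FROM tbl WHERE id = ? FOR UPDATE")) {
                        sel.setLong(1, id);
                        try (ResultSet rs = sel.executeQuery()) {
                            rs.next(); // read current state under the row lock
                        }
                    }
                    try (PreparedStatement upd = con.prepareStatement(
                            "UPDATE tbl SET qty = qty - 1 WHERE id = ?")) {
                        upd.setLong(1, id);
                        upd.executeUpdate();
                    }
                    con.commit(); // Galera certification may reject it here
                    return;       // success
                } catch (SQLException e) {
                    try { con.rollback(); } catch (SQLException ignored) { }
                    if (attempt >= maxAttempts) {
                        throw e; // give up after a few replays
                    }
                    // fall through: loop around and replay from the BEGIN
                }
            }
        }
    }
}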
More Galera tips (aimed at devs and DBAs migrating from non-Galera).
Failover doesn't mean that the application doesn't have to handle exceptions.
The driver will try to reconnect to another server when the connection is lost.
If the driver fails to reconnect to another server, a SQLNonTransientConnectionException will be thrown; pools will automatically discard those connections.
If the connection is recovered, there are some marginal cases where relaunching the query is safe: when the query is not in a transaction and the connection is currently in read-only mode (for example via Spring's @Transactional(readOnly = true)). In those particular cases the MariaDB Java connector will relaunch the query automatically, no exception is thrown, and failover is transparent.
The driver cannot re-execute the current query during a transaction.
Even without a transaction, if the query is an UPDATE command, the driver cannot know whether the last request was received by the database server and executed.
In those cases the driver will throw an SQLException (with an SQLState beginning with "25" = INVALID_TRANSACTION_STATE), and it's up to the application to handle them.
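A small sketch of telling those cases apart by SQLState; the method and query are hypothetical:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.SQLNonTransientConnectionException;
import java.sql.Statement;

public class FailoverHandling {
    // Sketch: classify a post-failover exception before deciding to retry.
    static void runQuery(Statement statement, String sql) throws SQLException {
        try (ResultSet rs = statement.executeQuery(sql)) {
            while (rs.next()) {
                // consume the result normally
            }
        } catch (SQLNonTransientConnectionException e) {
            // Reconnection to every host failed: let the pool discard this
            // connection and obtain a fresh one before retrying.
            throw e;
        } catch (SQLException e) {
            String state = e.getSQLState();
            if (state != null && state.startsWith("25")) {
                // INVALID_TRANSACTION_STATE: the connection recovered, but the
                // driver could not safely replay the statement on its own.
                // Replay the transaction (or the query) at the application level.
            } else {
                throw e;
            }
        }
    }
}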

Mongo URI for Spring Boot GET request

I have configured the Mongo URI in the property file as below:
spring.data.mongodb.uri=mongodb://db1.dev.com,db2.dev.com,db3.dev.com
spring.data.mongodb.database=mydb
I use mongoowl as a monitoring tool.
When I do a GET request, it shows hits on every MongoDB host, which ideally should show up on only one DB, right?
No, you are actually opening a replica-set cluster connection. With this connection type Spring connects to all three hosts in order to handle failover and to support the "read from secondary" option (hence you see hits on all three databases). The read and write operations, however, go only to the primary unless you have explicitly configured reads from a secondary.
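If you do want reads to go to a secondary, a read preference can be declared in the connection string itself. A sketch with the hosts from the question (readPreference is standard MongoDB connection-string syntax):
spring.data.mongodb.uri=mongodb://db1.dev.com,db2.dev.com,db3.dev.com/?readPreference=secondaryPreferred
spring.data.mongodb.database=mydb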

When to call flush() and commit() in Hibernate?

I have the following case:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
try
{
    ...
    session.saveOrUpdate(obj_1);
    ...
    session.saveOrUpdate(obj_2);
    ...
    session.saveOrUpdate(obj_3);
    session.flush();
    tx.commit();
}
catch (Exception e)
{
    tx.rollback();
}
finally
{
    session.close();
}
The first call to saveOrUpdate(obj_1) will fail due to a duplicate-entry error. But this error does not actually surface until the database is accessed, at the end, in session.flush().
Is there a way to make this kind of error happen earlier, so that I have more chance to handle it correctly?
I also don't want to cut the transaction into smaller ones, because they are hard to roll back.
My Hibernate entities are reverse-engineered from an existing database schema.
I have disabled the cache in my configuration file:
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>
EDIT:
My question is how I can get the error as early as possible, if there is one. The error itself doesn't matter to me.
Right now the error surfaces at the end, but it is actually caused by the first access.
You can execute session.flush() after each session.saveOrUpdate() to force Hibernate to execute the update or insert on your database immediately. Without the flush, the actual database operations are deferred and will only be processed right before the transaction ends.
This way you don't have to change the size of your transaction.
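A minimal sketch of that approach, reusing the session, tx, and obj_* from the question:
session.saveOrUpdate(obj_1);
session.flush(); // a duplicate-entry error for obj_1 surfaces here, not at commit
session.saveOrUpdate(obj_2);
session.flush();
session.saveOrUpdate(obj_3);
session.flush();
tx.commit(); // still one transaction, so a failure above can roll everything back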

java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1 ORA-16000: database open for read-only access

I have a Spring Batch application which reads data from a database and writes the result to a .dat file.
The job runs fine against a DB with read and write permissions. But if I run the job against a DB with read-only access, I get the error below:
org.springframework.dao.DataAccessResourceFailureException:
Could not obtain sequence value;nested exception is java.sql.SQLException:
ORA-00604: error occurred at recursive SQL level 1 ORA-16000:
database open for read-only access
My query is a simple SELECT statement. I don't know the root cause of this error. Please suggest.
You did not post your query, so I will just assume that you are selecting from a non-local table via a DB_LINK, in which case Oracle starts a transaction, just in case.
Try to set your transaction read-only too, before executing your query:
set transaction read only;
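A hedged JDBC sketch of the same idea; the query is a placeholder (Connection.setReadOnly(true) has a similar effect with many drivers):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class ReadOnlyQuery {
    // Sketch: open a read-only transaction before querying a read-only database.
    static void query(DataSource dataSource) throws SQLException {
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement()) {
            con.setAutoCommit(false); // keep the setting scoped to one transaction
            st.execute("SET TRANSACTION READ ONLY");
            try (ResultSet rs = st.executeQuery("SELECT 1 FROM dual")) { // placeholder query
                while (rs.next()) {
                    // process rows
                }
            }
            con.commit(); // ends the read-only transaction
        }
    }
}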

Spring Batch - Connection closed in when processing is done in external process

I have a job that is built of several steps. One of the steps is a tasklet that activates processing in Pentaho.
I pass Pentaho the parameters it needs in order to connect to the DB on its own, and that works OK.
The issue starts when the processing time in Pentaho is long.
Pentaho completes successfully, and the code in the tasklet that activated it completes OK, but in the job mechanism that wraps it I get an error when it tries to update the job execution table in the DB, because the connection it holds was already closed:
o.s.j.s.SQLErrorCodesFactory: Error while extracting database product name - falling back to empty error codes
org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData;
nested exception is
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:296)
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:320)
at org.springframework.jdbc.support.SQLErrorCodesFactory.getErrorCodes(SQLErrorCodesFactory.java:214)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.setDataSource(SQLErrorCodeSQLExceptionTranslator.java:141)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.<init>(SQLErrorCodeSQLExceptionTranslator.java:104)
at org.springframework.jdbc.support.JdbcAccessor.getExceptionTranslator(JdbcAccessor.java:99)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:603)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:812)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:868)
at org.springframework.batch.core.repository.dao.JdbcExecutionContextDao.persistSerializedContext(JdbcExecutionContextDao.java:230)
at org.springframework.batch.core.repository.dao.JdbcExecutionContextDao.updateExecutionContext(JdbcExecutionContextDao.java:159)
at org.springframework.batch.core.repository.support.SimpleJobRepository.updateExecutionContext(SimpleJobRepository.java:203)
...
14:21:37.143 UTC [ERROR] jobScheduler_Worker-2 T:b U: o.s.t.i.TransactionInterceptor: Application exception overridden by rollback exception
org.springframework.dao.RecoverableDataAccessException: PreparedStatementCallback; SQL [UPDATE BAT_STEP_EXECUTION_CONTEXT SET SHORT_CONTEXT = ?, SERIALIZED_CONTEXT = ? WHERE STEP_EXECUTION_ID = ?]; Communications link failure
It looks like the connection that the job repository received when the job started was abandoned, and I'm trying to understand if there is a way to order it to get a new connection or to send it some keep-alive command.
I tried the following workarounds:
change the step status in a job listener so the job will complete - fails with the same DB error
mark this exception as if it can be skipped - fails with the same DB error
<batch:no-rollback-exception-classes>
<batch:include class="com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException" />
<batch:include class="org.springframework.jdbc.support.MetaDataAccessException" />
</batch:no-rollback-exception-classes>
Any ideas how I can work around this?
Can I configure a job listener that will restart the job from the step that follows the Pentaho step?
Additional info
I think that the issue is here -
org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSource)
This
ConnectionHolder conHolder = (ConnectionHolder) TransactionSynchronizationManager.getResource(dataSource);
thinks that the connection is valid
so I guess the solution will be to call org.springframework.transaction.support.TransactionSynchronizationManager.unbindResource(Object),
and the question is how I can get the data source object to pass to this method.
I will try querying org.springframework.transaction.support.TransactionSynchronizationManager.getResourceMap() and see where it gets me.
Update:
No luck. The resource map gives me just the repositories I'm using, not the data source. Still digging...
Another update:
I'm debugging the process, and it seems that the problem is indeed in org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSource): the connection holder is holding a connection that is closed, but the code here doesn't check whether the connection is open; it only checks that the connection isn't null. If the holder kept some weak reference, maybe that would be enough, but in this use case it just proceeds with the closed connection instead of requesting a new one.
Add this to the tasklet definition:
<batch:transaction-attributes propagation="NEVER" />
Since the tasklet is doing external processing and doesn't need a Spring Batch transaction, it needs to tell Spring Batch not to open a transaction for this tasklet.
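For context, a sketch of where the attribute sits; the step and bean ids are placeholders:
<batch:step id="pentahoStep">
    <batch:tasklet ref="pentahoTasklet">
        <batch:transaction-attributes propagation="NEVER" />
    </batch:tasklet>
</batch:step>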
See:
http://www.javabeat.net/transaction-management-in-spring-batch-components/
http://forum.spring.io/forum/spring-projects/batch/91158-legacy-integration-tasklet-transaction
