Our backend app is a Spring/Java-based batch server, and our backend database is Oracle.
I am writing unit tests that use HSQLDB instead of Oracle. I inserted some test data and am running a SELECT query on it. The query does not use any parameters.
Using a NamedParameterJdbcTemplate, I call the
query(String sql, RowCallbackHandler rch) method to run the query and read the results.
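For reference, the call looks roughly like this (a sketch; the DataSource wiring and the table/column names are placeholders, not my real test):

import javax.sql.DataSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;

public class SelectSmokeTest {
    // Sketch of the failing call; table and column names are placeholders.
    static void runQuery(DataSource dataSource) {
        NamedParameterJdbcTemplate template = new NamedParameterJdbcTemplate(dataSource);
        template.query("SELECT id, name FROM test_table", rs -> {
            // RowCallbackHandler: invoked once per row of the result set
            System.out.println(rs.getLong("id") + " | " + rs.getString("name"));
        });
    }
}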
I get the following error:
org.springframework.jdbc.UncategorizedSQLException:
PreparedStatementCallback; uncategorized SQLException for SQL.... SQL
state [null]; error code [0]; A problem occurred while trying to
acquire a cached PreparedStatement in a background thread.; nested
exception is java.sql.SQLException: A problem occurred while trying to
acquire a cached PreparedStatement in a background thread
I wonder whether this is caused by some config setting. My settings are below:
hibernate.hbm2ddl.auto=create
hibernate.connection.driver_class=org.hsqldb.jdbc.JDBCDriver
hibernate.connection.url=jdbc:hsqldb:mem:mers;sql.syntax_ora=true
hibernate.connection.username=mers
hibernate.connection.password=mers
hibernate.dialect=org.hibernate.dialect.HSQLDialect
hibernate.ejb.naming_strategy=org.hibernate.cfg.ImprovedNamingStrategy
hibernate.cache.use_query_cache=false
hibernate.c3p0.minPoolSize=3
hibernate.c3p0.maxPoolSize=25
hibernate.c3p0.timeout=300
#hibernate.c3p0.maxStatements=50000
hibernate.c3p0.maxStatements=50000
#hibernate.c3p0.maxStatementsPerConnection=1000
hibernate.c3p0.maxStatementsPerConnection=1000
hibernate.c3p0.idleConnectionTestPeriod=1800
hibernate.c3p0.acquireIncrement=3
hibernate.cache.provider_class=
hibernate.cache.use_second_level_cache=false
hibernate.event.merge.entity_copy_observer=allow
showSql=true
Any leads on what could be causing this? Is it because of prepared statement caching?
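If prepared statement caching is the culprit, I assume one quick experiment would be to turn the c3p0 statement cache off in the same properties file and re-run the test:

hibernate.c3p0.maxStatements=0
hibernate.c3p0.maxStatementsPerConnection=0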
We are using Informix version 12.10. We are deleting multiple rows across 54 tables from a Java batch job, using Callable tasks for multi-threading.
Please refer to the code below:
SampleImpl.java:
Callable<Integer> callable = null;
List<Callable<Integer>> taskList = new ArrayList<>(); // was null; add() below would throw a NullPointerException
List<Future<Integer>> futureList = null;
for (Map.Entry<String, String> entry : datas.entrySet()) {
    callable = new Callable<Integer>() {
        public Integer call() throws Exception {
            return sampleDel.callSqlDelete();
        }
    };
    taskList.add(callable);
}
SampleDaoImpl.java:
public int callSqlDelete() throws SQLException {
    // Returns a count so the Callable<Integer> above compiles (the original was void).
    connection.setAutoCommit(false);
    Statement stmt = connection.createStatement();
    stmt.execute("SET LOCK MODE TO WAIT");
    // The original sub-queries were missing a select list; col1 is an assumed column.
    stmt.addBatch("DELETE FROM TABLE1 WHERE col1 IN (SELECT col1 FROM tableAAA WHERE id = 101)");
    stmt.addBatch("DELETE FROM TABLE2 WHERE col1 IN (SELECT col1 FROM tableAAA WHERE id = 101)");
    int[] delCnt = stmt.executeBatch();
    connection.commit();
    return delCnt.length;
}
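For completeness, the tasks are submitted to a thread pool roughly like this (a sketch; the pool size is an assumption, not our real configuration):

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DeleteRunner {
    // Submits the delete tasks built above and surfaces any failure.
    static void runAll(List<Callable<Integer>> taskList) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(10); // pool size is an assumption
        try {
            for (Future<Integer> future : executor.invokeAll(taskList)) {
                future.get(); // rethrows the BatchUpdateException wrapped in an ExecutionException
            }
        } finally {
            executor.shutdown();
        }
    }
}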
In our Java code we have already set the lock mode to wait for an infinite time interval, but we still get the exception below:
java.sql.BatchUpdateException: Could not do a physical-order read to fetch next row.
at com.informix.jdbc.IfxStatement.executeBatch(IfxStatement.java:1650)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at oracle.ucp.jdbc.proxy.StatementProxyFactory.invoke(StatementProxyFactory.java:272)
at com.sun.proxy.$Proxy1.executeBatch(Unknown Source)
at com.sample.samplereport.dao.impl.SampleDAOPurgeImpl.processDelByStmts(SampleDAOPurgeImpl.java:1305)
at com.sample.samplereport.util.SamplePlSqlDeleter.callSqlDelete(SamplePlSqlDeleter.java:58)
at com.sample.samplereport.dao.impl.SampleDAOPurgeImpl$1.call(SampleDAOPurgeImpl.java:298)
at com.sample.samplereport.dao.impl.SampleDAOPurgeImpl$1.call(SampleDAOPurgeImpl.java:1)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Please help with the above issue.
For this type of error it is usually helpful to see the ISAM error code that the Informix engine also provides. This gives more information on why the operation failed, in this case why it was unable to read the next row. One way to get the ISAM error is to set the environment variable APPENDISAM in the client Java environment; there may well be other ways to achieve this. You can find further information in the Informix JDBC Driver documentation at https://www.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.jdbc_pg.doc/ids_jdbc_040.htm
For this problem I suspect the ISAM error may be 143, "deadlock detected". This occurs when one thread needs to wait on a lock held by a second thread that is in turn waiting on a lock already held by the first. Since you have set lock mode to wait without a timeout, the threads would wait forever, so the server returns a deadlock error instead.
To help avoid the problem, check that row-level locking is used in preference to page-level locking for TABLE1 and TABLE2. You may also want to check the isolation level in use: with Repeatable Read isolation, or if the database is MODE ANSI, the SELECT in the sub-query will place a lock on every row it considers, although these locks should be minimized if there is an index on the "id" column.
At the application level, deadlock is frequently handled by rolling back the transaction and retrying it.
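A minimal sketch of that pattern in Java (the class name follows the question's stack trace; the retry limit, and retrying on any SQLException rather than only on a deadlock, are simplifications):

import java.sql.Connection;
import java.sql.SQLException;

public class DeadlockRetry {
    private static final int MAX_ATTEMPTS = 3; // arbitrary; tune for your workload

    // Re-runs the batched delete when it fails, up to a fixed number of attempts.
    static int deleteWithRetry(Connection connection, SamplePlSqlDeleter sampleDel) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                return sampleDel.callSqlDelete(); // commits on success (see code above)
            } catch (SQLException e) {
                connection.rollback(); // undo partial work before retrying
                // Ideally inspect the ISAM error here and retry only on deadlock (ISAM 143, per the above).
                if (attempt >= MAX_ATTEMPTS) {
                    throw e; // give up and surface the original error
                }
            }
        }
    }
}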
I have implemented DataNucleus JDO 5.0.10 with an Informix database. I manage to get the PMF, and my query compiles, but no results are returned.
I have tried changing the DataNucleus version, both upgrading and downgrading, but the error still comes up.
Here is the stack trace I get:
javax.jdo.JDOUserException: Exception thrown while loading remaining rows of query
at org.datanucleus.api.jdo.JDOAdapter.getUserExceptionForException(JDOAdapter.java:670)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.closingConnection(ForwardQueryResult.java:310)
at org.datanucleus.store.query.AbstractQueryResult.disconnect(AbstractQueryResult.java:105)
at org.datanucleus.store.rdbms.query.AbstractRDBMSQueryResult.disconnect(AbstractRDBMSQueryResult.java:248)
at org.datanucleus.store.rdbms.query.JDOQLQuery$2.managedConnectionPreClose(JDOQLQuery.java:734)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.close(ConnectionFactoryImpl.java:519)
at org.datanucleus.store.connection.AbstractManagedConnection.release(AbstractManagedConnection.java:83)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.release(ConnectionFactoryImpl.java:353)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:809)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1926)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1815)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:431)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:318)
at com.swanretail.server.service.LocalClientPM.execute(LocalClientPM.java:143)
at com.swanretail.service.Context.execute(Context.java:408)
at com.swanretail.service.QueryBuilder.execute(QueryBuilder.java:286)
at com.swanretail.service.SystemService.getLicencedUsers(SystemService.java:1930)
at com.swanretail.jdo.Main.main(Main.java:99)
NestedThrowablesStackTrace:
javax.jdo.JDODataStoreException: Failed to read the result set : ResultSet not open, operation 'next' not permitted. Verify that autocommit is OFF
at org.datanucleus.api.jdo.JDOAdapter.getDataStoreExceptionForException(JDOAdapter.java:681)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.nextResultSetElement(ForwardQueryResult.java:238)
at org.datanucleus.store.rdbms.query.ForwardQueryResult$QueryResultIterator.next(ForwardQueryResult.java:416)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.processNumberOfResults(ForwardQueryResult.java:143)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.advanceToEndOfResultSet(ForwardQueryResult.java:171)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.closingConnection(ForwardQueryResult.java:298)
at org.datanucleus.store.query.AbstractQueryResult.disconnect(AbstractQueryResult.java:105)
at org.datanucleus.store.rdbms.query.AbstractRDBMSQueryResult.disconnect(AbstractRDBMSQueryResult.java:248)
at org.datanucleus.store.rdbms.query.JDOQLQuery$2.managedConnectionPreClose(JDOQLQuery.java:734)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.close(ConnectionFactoryImpl.java:519)
at org.datanucleus.store.connection.AbstractManagedConnection.release(AbstractManagedConnection.java:83)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.release(ConnectionFactoryImpl.java:353)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:809)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1926)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1815)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:431)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:318)
at com.swanretail.server.service.LocalClientPM.execute(LocalClientPM.java:143)
at com.swanretail.service.Context.execute(Context.java:408)
at com.swanretail.service.QueryBuilder.execute(QueryBuilder.java:286)
at com.swanretail.service.SystemService.getLicencedUsers(SystemService.java:1930)
at com.swanretail.jdo.Main.main(Main.java:99)
NestedThrowablesStackTrace:
java.sql.SQLException: ResultSet not open, operation 'next' not permitted. Verify that autocommit is OFF
at com.informix.util.IfxErrMsg.buildException(IfxErrMsg.java:480)
at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:449)
at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:400)
at com.informix.jdbc.IfxResultSet.next(IfxResultSet.java:442)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.nextResultSetElement(ForwardQueryResult.java:220)
at org.datanucleus.store.rdbms.query.ForwardQueryResult$QueryResultIterator.next(ForwardQueryResult.java:416)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.processNumberOfResults(ForwardQueryResult.java:143)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.advanceToEndOfResultSet(ForwardQueryResult.java:171)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.closingConnection(ForwardQueryResult.java:298)
at org.datanucleus.store.query.AbstractQueryResult.disconnect(AbstractQueryResult.java:105)
at org.datanucleus.store.rdbms.query.AbstractRDBMSQueryResult.disconnect(AbstractRDBMSQueryResult.java:248)
at org.datanucleus.store.rdbms.query.JDOQLQuery$2.managedConnectionPreClose(JDOQLQuery.java:734)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.close(ConnectionFactoryImpl.java:519)
at org.datanucleus.store.connection.AbstractManagedConnection.release(AbstractManagedConnection.java:83)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.release(ConnectionFactoryImpl.java:353)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:809)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1926)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1815)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:431)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:318)
at com.swanretail.server.service.LocalClientPM.execute(LocalClientPM.java:143)
at com.swanretail.service.Context.execute(Context.java:408)
at com.swanretail.service.QueryBuilder.execute(QueryBuilder.java:286)
at com.swanretail.service.SystemService.getLicencedUsers(SystemService.java:1930)
at com.swanretail.jdo.Main.main(Main.java:99)
We use HibernateTemplate to get an object, but it causes an update. Our database is read-only, because we don't need to update data.
Why does that happen? Please help me.
My code:
public <T> T get(Class<T> clasz, Serializable id) {
    return (T) getHibernateTemplate().get(clasz, id);
}
The error message is as follows:
org.springframework.jdbc.UncategorizedSQLException: Hibernate operation: Could not execute JDBC batch update; uncategorized SQLException for SQL [update kytv_resource_station set area=?, create_id=?, create_time=?, display=?, modify_id=?, modify_time=?, nature=?, partner=?, showDomain=?, status=?, tv_character=?, tv_desc=?, tv_elogo=?, tv_elogo_id=?, tv_keyword=?, tv_logo=?, tv_logo_id=?, tv_name=?, tv_order=?, tv_short_name=?, tv_show_name=?, tv_tvm_name=? where id=?]; SQL state [HY000]; error code [1290]; The MySQL server is running with the --read-only option so it cannot execute this statement; nested exception is java.sql.BatchUpdateException: The MySQL server is running with the --read-only option so it cannot execute this statement
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.translate(SQLStateSQLExceptionTranslator.java:124)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.translate(SQLErrorCodeSQLExceptionTranslator.java:322)
at org.springframework.orm.hibernate3.HibernateAccessor.convertJdbcAccessException(HibernateAccessor.java:424)
at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:410)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:424)
at org.springframework.orm.hibernate3.HibernateTemplate.executeWithNativeSession(HibernateTemplate.java:374)
at org.springframework.orm.hibernate3.HibernateTemplate.get(HibernateTemplate.java:522)
at org.springframework.orm.hibernate3.HibernateTemplate.get(HibernateTemplate.java:516)
Your program fails because of this query:
update kytv_resource_station set area=?, create_id=?, create_time=?, display=?, modify_id=?, modify_time=?, nature=?, partner=?, showDomain=?, status=?, tv_character=?, tv_desc=?, tv_elogo=?, tv_elogo_id=?, tv_keyword=?, tv_logo=?, tv_logo_id=?, tv_name=?, tv_order=?, tv_short_name=?, tv_show_name=?, tv_tvm_name=? where id=?
Your program reports MySQL error 1290, which the MySQL documentation describes as:
The MySQL server is running with the %s option so it cannot execute this statement.
You say your database is read-only, so why does your code perform an update operation? If you want a precise answer, edit your question to include the code that triggers the update.
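One common way an apparently read-only get() ends up issuing an UPDATE is Hibernate auto-flushing an entity that its dirty checking considers modified (for example, a property changed on load by your code or a type mapping). Whether that is the cause here is an assumption, but you can test it by stopping the template from flushing:

import java.io.Serializable;
import org.springframework.orm.hibernate3.HibernateTemplate;
import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

public class ReadOnlyDao extends HibernateDaoSupport {

    // Disable auto-flush so dirty-checked entities are never written back to the read-only DB.
    public void disableAutoFlush() {
        getHibernateTemplate().setFlushMode(HibernateTemplate.FLUSH_NEVER);
    }

    @SuppressWarnings("unchecked")
    public <T> T get(Class<T> clasz, Serializable id) {
        return (T) getHibernateTemplate().get(clasz, id);
    }
}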
I'm using PostgreSQL 8.2.9, Solr 3.1, and Tomcat 5.5.
I have the following problem:
When I execute a delta-import (/dataimport?command=delta-import), any update queries to the database do not respond for about 30 seconds.
I can easily reproduce this behaviour (using psql or Hibernate):
PSQL:
Execute delta-import
Immediately afterwards, in psql, run the SQL query 'UPDATE table SET ... WHERE id = 1;' several times
The second/third/... time, I must wait ~30 seconds for the query to return
Hibernate:
In the logs, Hibernate waits ~30 seconds in the method 'doExecuteBatch(...)' after setting the query parameters
No other queries are executed while I'm testing this. On the other hand, when I execute other commands (like full-import, etc.), everything works perfectly.
In Solr's dataconfig.xml:
I have the readOnly attribute set to true on the PostgreSQL dataSource (sketched below).
deltaImportQuery, deltaQuery, ... on the entity tags don't lock the database (they are simple SELECTs)
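For reference, the dataSource element in dataconfig.xml looks like this (the URL and user are placeholders, and the password is redacted):

<dataSource type="JdbcDataSource" driver="org.postgresql.Driver"
            url="jdbc:postgresql://localhost:5432/mydb"
            user="solr" password="..." readOnly="true"/>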
Web app (using hibernate) logs:
2012-01-08 18:54:52,403 DEBUG my.package.Class1... Executing method: save
2012-01-08 18:55:26,707 DEBUG my.package.Class1... Executing method: list
Solr logs:
INFO: [] webapp=/search path=/dataimport params={debug=query&command=delta-import&commit=true} status=0 QTime=1
2012-01-08 18:54:50 org.apache.solr.handler.dataimport.DataImporter doDeltaImport
INFO: Starting Delta Import
...
2012-01-08 18:54:50 org.apache.solr.handler.dataimport.JdbcDataSource$1 call
INFO: Time taken for getConnection(): 4
...
FINE: Executing SQL: select ... where last_edit_date > '2012-01-08 18:51:43'
2012-01-08 18:54:50 org.apache.solr.core.Config getNode
...
FINEST: Time taken for sql :4
...
INFO: Import completed successfully
2012-01-08 18:54:50 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start commit(optimize=true,waitFlush=false,waitSearcher=true,expungeDeletes=false)
2012-01-08 18:54:53 org.apache.solr.core.Config getNode
...
2012-01-08 18:54:53 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: end_commit_flush
...
2012-01-08 18:54:53 org.apache.solr.handler.dataimport.DocBuilder execute
INFO: Time taken = 0:0:2.985
There are no 'SELECT ... FOR UPDATE / LOCK / etc.' queries in the above logs.
I have enabled statement logging for PostgreSQL and there are no locks. The sessions are even set to:
Jan 11 14:33:07 test postgres[26201]: [3-1] <26201> LOG: execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY
Jan 11 14:33:07 test postgres[26201]: [4-1] <26201> LOG: execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
Why is this happening? It looks like some kind of database lock, but then why, when the import completes in 2 seconds, are queries still waiting for 30 seconds?
The UPDATE is waiting for the SELECT statement to complete before executing. There is not a lot you can do about that, as far as I'm aware. We get around the issue by doing our indexing in batches. Multiple SELECT statements are fine, but UPDATE and DELETE statements affect the records and won't execute until they can lock the table.
OK, it was hard to find the solution to this problem.
The cause was the underlying platform: disk-write saturation. There were too many small disk writes, which consumed all of the available disk-write throughput.
We now have a new agreement with our service layer provider.
Test query:
while true ; do echo "UPDATE table_name SET type='P' WHERE type='P'" | psql -U user -d database_name ; sleep 1 ; done
plus making changes via our other application, plus updating the index, all simultaneously.
This was before the platform change: [chart omitted]
And here is how it works now: [chart omitted]