preparedStatement.executeUpdate() not inserting values - java

I am working in a non-Maven JBehave framework, and I am running an insert statement that is stored in a text file and passed to my method as qry. The insert statement is:
Insert into croutreach.ACME_OUTREACH_EMT_STG
(SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT)
values (49,<EID>,'new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
The code below executes and prints the following line:
Executing to run insert query
But after this it does not proceed to the next step, and as a result the connection never gets closed. It gets stuck at preparedStatement.executeUpdate().
private Connection dbConnection = null;

public boolean runInsertQuery(String qry) {
    int result;
    System.out.println("Preparing to run insert query");
    try {
        PreparedStatement preparedStatement = dbConnection.prepareStatement(qry);
        System.out.println("Executing to run insert query");
        result = preparedStatement.executeUpdate();
        System.out.println("Completed to run insert query");
        return result == 1;
    } catch (SQLException e) {
        LOG.fatal("Error in insertion into table :", e);
        return false;
    } finally {
        this.disconnectDb();
    }
}
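For diagnosis, a variant of the method above (not the asker's original code) can set a JDBC query timeout so a blocked insert fails with an exception instead of hanging indefinitely; the 30-second value is an arbitrary assumption:

public boolean runInsertQueryWithTimeout(String qry) {
    try (PreparedStatement ps = dbConnection.prepareStatement(qry)) {
        ps.setQueryTimeout(30); // seconds; the driver cancels the statement if it blocks longer
        return ps.executeUpdate() == 1;
    } catch (SQLException e) {
        LOG.fatal("Error in insertion into table :", e);
        return false;
    } finally {
        this.disconnectDb();
    }
}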
My output is as follows:
Method public org.jbehave.core.configuration.Configuration com.TEA.framework.Functional.OnlineCoachingStory.configuration() has a #Test annotation but also a return value: ignoring it. Use <suite allow-return-values="true"> to fix this
Method public org.jbehave.core.steps.InjectableStepsFactory com.TEA.framework.Functional.OnlineCoachingStory.stepsFactory() has a #Test annotation but also a return value: ignoring it. Use <suite allow-return-values="true"> to fix this
[TestNG] Running:
C:\Users\M41974\AppData\Local\Temp\testng-eclipse-1062921629\testng-customsuite.xml
Processing system properties {}
Using controls EmbedderControls[batch=false,skip=false,generateViewAfterStories=true,ignoreFailureInStories=false,ignoreFailureInView=false,verboseFailures=false,verboseFiltering=false,storyTimeoutInSecs=900000000,threads=1]
(BeforeStories)
Running story com/TEA/framework/Story/Copy of OnlineCoaching.story
(com/TEA/framework/Story/Copy of OnlineCoaching.story)
Scenario: Setup the Environment
2015-11-03 04:46:49,425 FATAL pool-1-thread-1- ACME.SAT.Server.Path not found in the Config.properties file
2015-11-03 04:46:49,440 INFO pool-1-thread-1- Host:- lsh1042a.sys.cigna.com Username:- ank_1 password:- abc_123 path:- null
Given Set the environment to SAT
2015-11-03 04:46:49,768 INFO pool-1-thread-1- DB Connection successful
Insert into croutreach.ACME_OUTREACH_EMT_STG (SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT) values (49,'1234567890','new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
Preparing to run insert query
Executing to run insert query
|
Previously I had kept my SQL Developer window open; when I closed it and ran the same script again, it showed the fatal error ORA-00001: unique constraint violated.
I am facing three different issues:
When SQL Developer was open, running the script got stuck at the insert.
While it was stuck, I closed the SQL Developer session, and after that the output was as below:
2015-11-03 05:01:35,061 INFO main- DB Connection successful
Insert into croutreach.ACME_OUTREACH_EMT_STG (SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT) values (49,'02123456789','new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
Preparing to run insert query
Executing to run insert query
Completed to run insert query
2015-11-03 05:02:11,128 INFO main- DB Disconnected!!
Insert complete
After closing the SQL Developer window, running the script again completed, but with the fatal error below:
2015-11-03 05:13:08,177 INFO main- DB Connection successful
Insert into croutreach.ACME_OUTREACH_EMT_STG (SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT) values (49,'02123456789','new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
Preparing to run insert query
Executing to run insert query
2015-11-03 05:13:08,319 FATAL main- Error in insertion into table :
java.sql.SQLException: ORA-00001: unique constraint (CROUTREACH.ACME_OUTREACH_EMT_STG_PK) violated
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:955)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1168)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3316)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3400)
at com.TEA.framework.Technology.DatabaseClass.runInsertQuery(DatabaseClass.java:169)
at com.TEA.framework.Business.AcmeDB.insertACME_OUTREACH_EMT_STG(AcmeDB.java:186)
at com.TEA.framework.Technology.MainClass.DBinsertTest(MainClass.java:123)
at com.TEA.framework.Technology.MainClass.main(MainClass.java:24)
2015-11-03 05:13:08,350 INFO main- DB Disconnected!!
2015-11-03 05:13:08,365 INFO main- DB Disconnected!!

You're just seeing one session block another. When you do the insert in SQL Developer and leave the session running without committing or rolling back, you have a lock on the newly-inserted row and a (sort of pending) index entry.
If you run your program and do the same insert again, that happens in a different session. It cannot see the data you inserted through SQL Developer, but it can see the lock, and it waits for that lock to be cleared. While it's blocked it just sits there waiting, which you're interpreting as it being stuck.
If you roll back the SQL Developer insert, the lock is cleared and your program's blocked insert continues successfully. The new record is inserted, and will either be committed or rolled back depending on what your code does next, or on the connection's autocommit setting.
It looks like it commits, since running it again tries to repeat the insert, and now it does see the existing committed data - hence the ORA-00001 error, as you're trying to create a second entry with the same primary key value. You would have seen the same thing if you had committed the SQL Developer session instead of rolling back.
If you want to run your program a second time you will need to delete the row (and commit that change!) before trying the insert again.
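A minimal sketch of that cleanup, assuming SEQ is the column behind the ACME_OUTREACH_EMT_STG_PK constraint and that the connection is not in autocommit mode:

void clearPreviousRow(Connection dbConnection) throws SQLException {
    try (PreparedStatement ps = dbConnection.prepareStatement(
            "DELETE FROM croutreach.ACME_OUTREACH_EMT_STG WHERE SEQ = ?")) {
        ps.setInt(1, 49);      // the duplicated key value from the question
        ps.executeUpdate();
        dbConnection.commit(); // commit so other sessions can see the delete
    }
}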

Related

Testing using HSQL DB in Spring - cached PreparedStatement error

Our backend app is a Spring/Java-based batch server. Our backend DB is Oracle.
I am writing unit tests that use HSQLDB instead of Oracle. I inserted some test data and am running a SELECT query on it. It does not use any parameters.
Using a NamedParameterJdbcTemplate, I call the query(String sql, RowCallbackHandler rch) method to run the query and read the results.
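A minimal sketch of that call, assuming a NamedParameterJdbcTemplate wired to the HSQLDB DataSource; the table and column names are placeholders, and an empty parameter map stands in for the unparameterized call:

public void readTestData(DataSource dataSource) {
    NamedParameterJdbcTemplate jdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
    jdbcTemplate.query("SELECT id, name FROM test_data",
            new HashMap<String, Object>(),          // no bind parameters
            new RowCallbackHandler() {
                public void processRow(ResultSet rs) throws SQLException {
                    System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                }
            });
}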
I get the error
org.springframework.jdbc.UncategorizedSQLException:
PreparedStatementCallback; uncategorized SQLException for SQL.... SQL
state [null]; error code [0]; A problem occurred while trying to
acquire a cached PreparedStatement in a background thread.; nested
exception is java.sql.SQLException: A problem occurred while trying to
acquire a cached PreparedStatement in a background thread
I wonder if this is because of some config settings. My settings are below:
hibernate.hbm2ddl.auto=create
hibernate.connection.driver_class=org.hsqldb.jdbc.JDBCDriver
hibernate.connection.url=jdbc:hsqldb:mem:mers;sql.syntax_ora=true
hibernate.connection.username=mers
hibernate.connection.password=mers
hibernate.dialect=org.hibernate.dialect.HSQLDialect
hibernate.ejb.naming_strategy=org.hibernate.cfg.ImprovedNamingStrategy
hibernate.cache.use_query_cache=false
hibernate.c3p0.minPoolSize=3
hibernate.c3p0.maxPoolSize=25
hibernate.c3p0.timeout=300
hibernate.c3p0.maxStatements=50000
hibernate.c3p0.maxStatementsPerConnection=1000
hibernate.c3p0.idleConnectionTestPeriod=1800
hibernate.c3p0.acquireIncrement=3
hibernate.cache.provider_class=
hibernate.cache.use_second_level_cache=false
hibernate.event.merge.entity_copy_observer=allow
showSql=true
Any leads on what could be causing it? Is it because of prepared statement caching?
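If statement caching is the suspect, one hedged experiment is to disable the c3p0 statement cache entirely and re-run the test (in c3p0, a value of 0 turns the cache off):

hibernate.c3p0.maxStatements=0
hibernate.c3p0.maxStatementsPerConnection=0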

Java Multithreading: Informix 12.10 - Could not do a physical-order read to fetch next row

We are using Informix version 12.10. We are deleting multiple rows across 54 tables from a Java batch, using a Callable-based multithreading strategy.
Please refer to the below code:
SampleImpl.java:
Callable<Integer> callable = null;
List<Callable<Integer>> taskList = new ArrayList<Callable<Integer>>(); // was null; must be initialized before add()
List<Future<Integer>> futureList = null;                               // never populated in the snippet as posted
for (Map.Entry<String, String> entry : datas.entrySet()) {
    callable = new Callable<Integer>() {
        public Integer call() throws Exception {
            return sampleDel.callSqlDelete(); // must return a value; see the DAO below
        }
    };
    taskList.add(callable);
}
SampleDaoImpl:
public int callSqlDelete() throws SQLException {
    connection.setAutoCommit(false); // run both deletes in a single transaction
    Statement stmt = connection.createStatement();
    try {
        stmt.execute("SET LOCK MODE TO WAIT");
        // The subquery column is assumed; the original statement omitted it.
        stmt.addBatch("DELETE FROM TABLE1 WHERE col1 IN (SELECT col1 FROM tableAAA WHERE id=101)");
        stmt.addBatch("DELETE FROM TABLE2 WHERE col1 IN (SELECT col1 FROM tableAAA WHERE id=101)");
        int[] delCnt = stmt.executeBatch();
        connection.commit();
        return delCnt[0] + delCnt[1]; // total rows deleted, matching Callable<Integer>
    } finally {
        stmt.close();
    }
}
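The futureList in the first snippet is never populated; a minimal sketch of running the tasks on a thread pool (the pool size is an arbitrary assumption):

int runAllDeletes(List<Callable<Integer>> taskList)
        throws InterruptedException, ExecutionException {
    ExecutorService pool = Executors.newFixedThreadPool(4); // pool size is an arbitrary choice
    try {
        int total = 0;
        for (Future<Integer> f : pool.invokeAll(taskList)) { // blocks until every task finishes
            total += f.get();                                // rethrows any task failure
        }
        return total;
    } finally {
        pool.shutdown();
    }
}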
In our Java code we have already set the lock mode to wait indefinitely, but we are still getting the exception below:
java.sql.BatchUpdateException: Could not do a physical-order read to fetch next row.
at com.informix.jdbc.IfxStatement.executeBatch(IfxStatement.java:1650)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at oracle.ucp.jdbc.proxy.StatementProxyFactory.invoke(StatementProxyFactory.java:272)
at com.sun.proxy.$Proxy1.executeBatch(Unknown Source)
at com.sample.samplereport.dao.impl.SampleDAOPurgeImpl.processDelByStmts(SampleDAOPurgeImpl.java:1305)
at com.sample.samplereport.util.SamplePlSqlDeleter.callSqlDelete(SamplePlSqlDeleter.java:58)
at com.sample.samplereport.dao.impl.SampleDAOPurgeImpl$1.call(SampleDAOPurgeImpl.java:298)
at com.sample.samplereport.dao.impl.SampleDAOPurgeImpl$1.call(SampleDAOPurgeImpl.java:1)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Please help with the above issue.
For this type of error it is usually helpful to see the ISAM error code that the Informix engine also provides; it gives more information on why the operation failed, in this case why it was unable to read the next row. One way to get the ISAM error is to set the environment variable APPENDISAM in the client Java environment; there may well be other ways to achieve this. You can find further information in the Informix JDBC Driver documentation at https://www.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.jdbc_pg.doc/ids_jdbc_040.htm
For this problem I suspect the ISAM error may be 143, "deadlock detected". This results when one thread needs to wait on a lock that is held by another thread, which in turn is waiting on a lock already held by the first thread. Since you have set the lock mode to wait without a timeout, the threads would otherwise wait forever, so the server returns a deadlock error instead.
To help avoid the problem, check that row-level locking is used in preference to page-level locking for TABLE1 and TABLE2, as sketched below. You may also want to check the isolation level used: with Repeatable Read isolation, or in a MODE ANSI database, the SELECT in the sub-query will place a lock on every row it considers, although these locks should be minimized if there is an index on the id column.
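Switching the tables to row-level locking is plain Informix DDL, which could be issued over the same JDBC connection (table names taken from the question):

Statement ddl = connection.createStatement();
try {
    ddl.execute("ALTER TABLE TABLE1 LOCK MODE (ROW)"); // lock individual rows, not whole pages
    ddl.execute("ALTER TABLE TABLE2 LOCK MODE (ROW)");
} finally {
    ddl.close();
}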
At the application-code level, deadlock is frequently handled by rolling back the transaction and repeating it, for example:
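A minimal retry sketch; runDeleteBatch is a hypothetical wrapper around the addBatch/executeBatch logic above, and the attempt count and backoff are arbitrary choices:

void deleteWithRetry(Connection connection) throws SQLException, InterruptedException {
    for (int attempt = 1; ; attempt++) {
        try {
            runDeleteBatch(connection);   // hypothetical: executes the DELETE batch and commits
            return;
        } catch (SQLException e) {
            connection.rollback();        // release this transaction's locks
            if (attempt >= 3) {
                throw e;                  // give up after a few attempts
            }
            Thread.sleep(100L * attempt); // brief backoff before retrying
        }
    }
}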

Datanucleus JDO Informix query not returning results

I have implemented DataNucleus JDO 5.0.10 with an Informix database. I do manage to get the PMF, and my query is compiled, but no results are returned.
I have tried changing the DataNucleus version, both upgrading and downgrading, but the error still comes up.
Here is the stack trace I get:
javax.jdo.JDOUserException: Exception thrown while loading remaining rows of query at org.datanucleus.api.jdo.JDOAdapter.getUserExceptionForException(JDOAdapter.java:670)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.closingConnection(ForwardQueryResult.java:310)
at org.datanucleus.store.query.AbstractQueryResult.disconnect(AbstractQueryResult.java:105)
at org.datanucleus.store.rdbms.query.AbstractRDBMSQueryResult.disconnect(AbstractRDBMSQueryResult.java:248)
at org.datanucleus.store.rdbms.query.JDOQLQuery$2.managedConnectionPreClose(JDOQLQuery.java:734)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.close(ConnectionFactoryImpl.java:519)
at org.datanucleus.store.connection.AbstractManagedConnection.release(AbstractManagedConnection.java:83)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.release(ConnectionFactoryImpl.java:353)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:809)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1926)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1815)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:431)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:318)
at com.swanretail.server.service.LocalClientPM.execute(LocalClientPM.java:143)
at com.swanretail.service.Context.execute(Context.java:408)
at com.swanretail.service.QueryBuilder.execute(QueryBuilder.java:286)
at com.swanretail.service.SystemService.getLicencedUsers(SystemService.java:1930)
at com.swanretail.jdo.Main.main(Main.java:99)
NestedThrowablesStackTrace:
javax.jdo.JDODataStoreException: Failed to read the result set : ResultSet not open, operation 'next' not permitted. Verify that autocommit is OFF
at org.datanucleus.api.jdo.JDOAdapter.getDataStoreExceptionForException(JDOAdapter.java:681)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.nextResultSetElement(ForwardQueryResult.java:238)
at org.datanucleus.store.rdbms.query.ForwardQueryResult$QueryResultIterator.next(ForwardQueryResult.java:416)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.processNumberOfResults(ForwardQueryResult.java:143)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.advanceToEndOfResultSet(ForwardQueryResult.java:171)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.closingConnection(ForwardQueryResult.java:298)
at org.datanucleus.store.query.AbstractQueryResult.disconnect(AbstractQueryResult.java:105)
at org.datanucleus.store.rdbms.query.AbstractRDBMSQueryResult.disconnect(AbstractRDBMSQueryResult.java:248)
at org.datanucleus.store.rdbms.query.JDOQLQuery$2.managedConnectionPreClose(JDOQLQuery.java:734)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.close(ConnectionFactoryImpl.java:519)
at org.datanucleus.store.connection.AbstractManagedConnection.release(AbstractManagedConnection.java:83)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.release(ConnectionFactoryImpl.java:353)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:809)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1926)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1815)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:431)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:318)
at com.swanretail.server.service.LocalClientPM.execute(LocalClientPM.java:143)
at com.swanretail.service.Context.execute(Context.java:408)
at com.swanretail.service.QueryBuilder.execute(QueryBuilder.java:286)
at com.swanretail.service.SystemService.getLicencedUsers(SystemService.java:1930)
at com.swanretail.jdo.Main.main(Main.java:99)
NestedThrowablesStackTrace:
java.sql.SQLException: ResultSet not open, operation 'next' not permitted. Verify that autocommit is OFF
at com.informix.util.IfxErrMsg.buildException(IfxErrMsg.java:480)
at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:449)
at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:400)
at com.informix.jdbc.IfxResultSet.next(IfxResultSet.java:442)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.nextResultSetElement(ForwardQueryResult.java:220)
at org.datanucleus.store.rdbms.query.ForwardQueryResult$QueryResultIterator.next(ForwardQueryResult.java:416)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.processNumberOfResults(ForwardQueryResult.java:143)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.advanceToEndOfResultSet(ForwardQueryResult.java:171)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.closingConnection(ForwardQueryResult.java:298)
at org.datanucleus.store.query.AbstractQueryResult.disconnect(AbstractQueryResult.java:105)
at org.datanucleus.store.rdbms.query.AbstractRDBMSQueryResult.disconnect(AbstractRDBMSQueryResult.java:248)
at org.datanucleus.store.rdbms.query.JDOQLQuery$2.managedConnectionPreClose(JDOQLQuery.java:734)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.close(ConnectionFactoryImpl.java:519)
at org.datanucleus.store.connection.AbstractManagedConnection.release(AbstractManagedConnection.java:83)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.release(ConnectionFactoryImpl.java:353)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:809)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1926)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1815)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:431)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:318)
at com.swanretail.server.service.LocalClientPM.execute(LocalClientPM.java:143)
at com.swanretail.service.Context.execute(Context.java:408)
at com.swanretail.service.QueryBuilder.execute(QueryBuilder.java:286)
at com.swanretail.service.SystemService.getLicencedUsers(SystemService.java:1930)
at com.swanretail.jdo.Main.main(Main.java:99)
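The nested SQLException's hint, "Verify that autocommit is OFF", points at a general Informix JDBC behaviour: with autocommit on, each statement commits immediately, and the driver can close a ResultSet before it has been fully read. A plain-JDBC sketch of the safe pattern (the URL, credentials, and table name are placeholders):

void readAllRows(String url, String user, String password) throws SQLException {
    try (Connection con = DriverManager.getConnection(url, user, password)) {
        con.setAutoCommit(false);                 // keep the transaction open while iterating
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id FROM licenced_users")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1)); // consume every row before committing
            }
        }
        con.commit();
    }
}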

Hibernate get action causes an update action

We use HibernateTemplate to get an object, but it causes an update action. Our database is read-only, because we don't need to update data.
Why does that happen? Please help me.
My code:
public <T> T get(Class<T> clasz, Serializable id) {
    return (T) getHibernateTemplate().get(clasz, id);
}
The error message is as follows:
org.springframework.jdbc.UncategorizedSQLException: Hibernate operation: Could not execute JDBC batch update; uncategorized SQLException for SQL [update kytv_resource_station set area=?, create_id=?, create_time=?, display=?, modify_id=?, modify_time=?, nature=?, partner=?, showDomain=?, status=?, tv_character=?, tv_desc=?, tv_elogo=?, tv_elogo_id=?, tv_keyword=?, tv_logo=?, tv_logo_id=?, tv_name=?, tv_order=?, tv_short_name=?, tv_show_name=?, tv_tvm_name=? where id=?]; SQL state [HY000]; error code [1290]; The MySQL server is running with the --read-only option so it cannot execute this statement; nested exception is java.sql.BatchUpdateException: The MySQL server is running with the --read-only option so it cannot execute this statement
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.translate(SQLStateSQLExceptionTranslator.java:124)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.translate(SQLErrorCodeSQLExceptionTranslator.java:322)
at org.springframework.orm.hibernate3.HibernateAccessor.convertJdbcAccessException(HibernateAccessor.java:424)
at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:410)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:424)
at org.springframework.orm.hibernate3.HibernateTemplate.executeWithNativeSession(HibernateTemplate.java:374)
at org.springframework.orm.hibernate3.HibernateTemplate.get(HibernateTemplate.java:522)
at org.springframework.orm.hibernate3.HibernateTemplate.get(HibernateTemplate.java:516)
Your program has an error because of this query:
update kytv_resource_station set area=?, create_id=?, create_time=?, display=?, modify_id=?, modify_time=?, nature=?, partner=?, showDomain=?, status=?, tv_character=?, tv_desc=?, tv_elogo=?, tv_elogo_id=?, tv_keyword=?, tv_logo=?, tv_logo_id=?, tv_name=?, tv_order=?, tv_short_name=?, tv_show_name=?, tv_tvm_name=? where id=?
Your program displays MySQL error 1290, which the MySQL documentation describes as:
The MySQL server is running with the %s option so it cannot execute this statement.
You say your database is read-only, so why are you performing an update operation? If you want a more precise answer, edit your question to include more of your code.
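For what it's worth, a common cause of a plain get() triggering an UPDATE is Hibernate's automatic dirty checking: if a loaded entity appears modified (for example via a type-mapping mismatch), the session flushes an UPDATE when it closes. A hedged one-line mitigation using the HibernateTemplate API; whether this matches the asker's root cause is an assumption:

// Assumption: the UPDATE comes from Hibernate flushing an entity it
// considers dirty. FLUSH_NEVER stops HibernateTemplate from writing
// pending changes back, which suits a read-only database.
getHibernateTemplate().setFlushMode(HibernateTemplate.FLUSH_NEVER);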

Apache Solr - waiting on sql queries when delta-import command is executed

I'm using PostgreSQL 8.2.9, Solr 3.1, Tomcat 5.5
I have following problem:
When I execute a delta-import (/dataimport?command=delta-import), any update queries to the database stop responding for about 30 seconds.
I can easily reproduce this behaviour (using psql or Hibernate):
PSQL:
Execute delta-import
Immediately in psql - run SQL query: 'UPDATE table SET ... WHERE id = 1;' several times
The second/third/... time, I must wait ~30 seconds for the query to return
Hibernate:
In the logs, Hibernate waits ~30 seconds in the 'doExecuteBatch(...)' method after setting the query parameters
No other queries are executed while I'm testing this. On the other hand, when I execute other commands (like full-import, etc.), everything works perfectly fine.
In Solr's dataconfig.xml:
I have the readOnly attribute set to true on the PostgreSQL dataSource.
deltaImportQuery, deltaQuery, ... on the entity tags don't lock the database (they are simple SELECTs)
Web app (using hibernate) logs:
2012-01-08 18:54:52,403 DEBUG my.package.Class1... Executing method: save
2012-01-08 18:55:26,707 DEBUG my.package.Class1... Executing method: list
Solr logs:
INFO: [] webapp=/search path=/dataimport params={debug=query&command=delta-import&commit=true} status=0 QTime=1
2012-01-08 18:54:50 org.apache.solr.handler.dataimport.DataImporter doDeltaImport
INFO: Starting Delta Import
...
2012-01-08 18:54:50 org.apache.solr.handler.dataimport.JdbcDataSource$1 call
INFO: Time taken for getConnection(): 4
...
FINE: Executing SQL: select ... where last_edit_date > '2012-01-08 18:51:43'
2012-01-08 18:54:50 org.apache.solr.core.Config getNode
...
FINEST: Time taken for sql :4
...
INFO: Import completed successfully
2012-01-08 18:54:50 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start commit(optimize=true,waitFlush=false,waitSearcher=true,expungeDeletes=false)
2012-01-08 18:54:53 org.apache.solr.core.Config getNode
...
2012-01-08 18:54:53 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: end_commit_flush
...
2012-01-08 18:54:53 org.apache.solr.handler.dataimport.DocBuilder execute
INFO: Time taken = 0:0:2.985
There are no 'SELECT ... FOR UPDATE / LOCK / etc.' queries in the above logs.
I have enabled logging for PostgreSQL: there are no locks. The sessions are even set to:
Jan 11 14:33:07 test postgres[26201]: [3-1] <26201> LOG: execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY
Jan 11 14:33:07 test postgres[26201]: [4-1] <26201> LOG: execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
Why is this happening? This looks like some kind of database lock, but then why, when the import completes in ~2 seconds, are queries still waiting for ~30 seconds?
The UPDATE is waiting for the SELECT statement to complete before executing. There is not a lot you can do about that that I'm aware of. We get around the issue by doing our indexing in batches. Multiple SELECT statements are fine, but UPDATE and DELETE affect the records and won't execute until they can lock the table.
OK, it was hard to find the solution to this problem.
The reason was the underlying platform: disk-write saturation. There were too many small disk writes, and together they consumed all of the available disk-write capacity.
We now have a new agreement with our service-layer provider.
Test query:
while true ; do echo "UPDATE table_name SET type='P' WHERE type='P'" | psql -U user -d database_name ; sleep 1 ; done
plus making changes via our other application, plus updating the index simultaneously.
Monitoring graphs (not reproduced here) compared the disk-write behaviour before the platform change with how it works now.
