I have set up DataNucleus JDO 5.0.10 against an Informix database. I can obtain the PMF and my query compiles, but no results are returned.
I have tried changing the DataNucleus version, both upgrading and downgrading, but the error still comes up.
Here is the stack trace I get:
javax.jdo.JDOUserException: Exception thrown while loading remaining rows of query
at org.datanucleus.api.jdo.JDOAdapter.getUserExceptionForException(JDOAdapter.java:670)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.closingConnection(ForwardQueryResult.java:310)
at org.datanucleus.store.query.AbstractQueryResult.disconnect(AbstractQueryResult.java:105)
at org.datanucleus.store.rdbms.query.AbstractRDBMSQueryResult.disconnect(AbstractRDBMSQueryResult.java:248)
at org.datanucleus.store.rdbms.query.JDOQLQuery$2.managedConnectionPreClose(JDOQLQuery.java:734)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.close(ConnectionFactoryImpl.java:519)
at org.datanucleus.store.connection.AbstractManagedConnection.release(AbstractManagedConnection.java:83)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.release(ConnectionFactoryImpl.java:353)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:809)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1926)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1815)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:431)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:318)
at com.swanretail.server.service.LocalClientPM.execute(LocalClientPM.java:143)
at com.swanretail.service.Context.execute(Context.java:408)
at com.swanretail.service.QueryBuilder.execute(QueryBuilder.java:286)
at com.swanretail.service.SystemService.getLicencedUsers(SystemService.java:1930)
at com.swanretail.jdo.Main.main(Main.java:99)
NestedThrowablesStackTrace:
javax.jdo.JDODataStoreException: Failed to read the result set : ResultSet not open, operation 'next' not permitted. Verify that autocommit is OFF
at org.datanucleus.api.jdo.JDOAdapter.getDataStoreExceptionForException(JDOAdapter.java:681)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.nextResultSetElement(ForwardQueryResult.java:238)
at org.datanucleus.store.rdbms.query.ForwardQueryResult$QueryResultIterator.next(ForwardQueryResult.java:416)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.processNumberOfResults(ForwardQueryResult.java:143)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.advanceToEndOfResultSet(ForwardQueryResult.java:171)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.closingConnection(ForwardQueryResult.java:298)
at org.datanucleus.store.query.AbstractQueryResult.disconnect(AbstractQueryResult.java:105)
at org.datanucleus.store.rdbms.query.AbstractRDBMSQueryResult.disconnect(AbstractRDBMSQueryResult.java:248)
at org.datanucleus.store.rdbms.query.JDOQLQuery$2.managedConnectionPreClose(JDOQLQuery.java:734)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.close(ConnectionFactoryImpl.java:519)
at org.datanucleus.store.connection.AbstractManagedConnection.release(AbstractManagedConnection.java:83)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.release(ConnectionFactoryImpl.java:353)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:809)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1926)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1815)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:431)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:318)
at com.swanretail.server.service.LocalClientPM.execute(LocalClientPM.java:143)
at com.swanretail.service.Context.execute(Context.java:408)
at com.swanretail.service.QueryBuilder.execute(QueryBuilder.java:286)
at com.swanretail.service.SystemService.getLicencedUsers(SystemService.java:1930)
at com.swanretail.jdo.Main.main(Main.java:99)
NestedThrowablesStackTrace:
java.sql.SQLException: ResultSet not open, operation 'next' not permitted. Verify that autocommit is OFF
at com.informix.util.IfxErrMsg.buildException(IfxErrMsg.java:480)
at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:449)
at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:400)
at com.informix.jdbc.IfxResultSet.next(IfxResultSet.java:442)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.nextResultSetElement(ForwardQueryResult.java:220)
at org.datanucleus.store.rdbms.query.ForwardQueryResult$QueryResultIterator.next(ForwardQueryResult.java:416)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.processNumberOfResults(ForwardQueryResult.java:143)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.advanceToEndOfResultSet(ForwardQueryResult.java:171)
at org.datanucleus.store.rdbms.query.ForwardQueryResult.closingConnection(ForwardQueryResult.java:298)
at org.datanucleus.store.query.AbstractQueryResult.disconnect(AbstractQueryResult.java:105)
at org.datanucleus.store.rdbms.query.AbstractRDBMSQueryResult.disconnect(AbstractRDBMSQueryResult.java:248)
at org.datanucleus.store.rdbms.query.JDOQLQuery$2.managedConnectionPreClose(JDOQLQuery.java:734)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.close(ConnectionFactoryImpl.java:519)
at org.datanucleus.store.connection.AbstractManagedConnection.release(AbstractManagedConnection.java:83)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.release(ConnectionFactoryImpl.java:353)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:809)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1926)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1815)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:431)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:318)
at com.swanretail.server.service.LocalClientPM.execute(LocalClientPM.java:143)
at com.swanretail.service.Context.execute(Context.java:408)
at com.swanretail.service.QueryBuilder.execute(QueryBuilder.java:286)
at com.swanretail.service.SystemService.getLicencedUsers(SystemService.java:1930)
at com.swanretail.jdo.Main.main(Main.java:99)
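The nested "ResultSet not open ... Verify that autocommit is OFF" error usually means rows are being fetched lazily after the transaction (and with it the connection) has already been closed. One thing worth checking is whether the query runs inside an explicit transaction and whether the results are fully consumed before it commits. A minimal sketch, assuming a `pmf` variable and a hypothetical `User` candidate class (neither is from the original code):

```java
// Sketch only: 'pmf' and the User candidate class are placeholders.
PersistenceManager pm = pmf.getPersistenceManager();
Transaction tx = pm.currentTransaction();
try {
    tx.begin();
    Query q = pm.newQuery("SELECT FROM com.example.User");
    @SuppressWarnings("unchecked")
    List<User> lazy = (List<User>) q.execute();
    // Copy the results while the transaction (and its connection) is still
    // open, so no row is fetched after commit has closed the ResultSet.
    List<User> results = new ArrayList<>(lazy);
    tx.commit();
    // 'results' is safe to use here
} finally {
    if (tx.isActive()) {
        tx.rollback();
    }
    pm.close();
}
```

Copying the lazy result list before commit forces all rows to be read while the underlying ResultSet is still open, which is exactly the step that is failing in the trace above.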
I have a table into which I am inserting records using the record.insert() method. I believe this method does an insert and then a select, but in different transactions. At the same time I have another thread which polls this table for records, processes them and then deletes them.
In some cases I am getting the exception below:
org.jooq.exception.NoDataFoundException: Exactly one row expected for refresh. Record does not exist in database.
at org.jooq.impl.UpdatableRecordImpl.refresh(UpdatableRecordImpl.java:345)
at org.jooq.impl.TableRecordImpl.getReturningIfNeeded(TableRecordImpl.java:232)
at org.jooq.impl.TableRecordImpl.storeInsert0(TableRecordImpl.java:208)
at org.jooq.impl.TableRecordImpl$1.operate(TableRecordImpl.java:169)
My workaround was to use DSL.using(configuration()).insertInto(...) instead of record.insert().
My question is: shouldn't the insert and the fetch be done in the same transaction?
UPDATE:
This is a dropwizard app that is using jooqbundle: com.bendb.dropwizard:dropwizard-jooq.
The configuration is injected in the DAO, the insert is as follows:
R object = // jooq record
object.attach(configuration);
object.insert();
On the second thread I am just selecting some records from this table, processing them and then deleting them.
The jOOQ logs clearly show that the two queries are not run in the same transaction:
14:07:09.550 [main] DEBUG org.jooq.tools.LoggerListener - -> with bind values : insert into "queue"
....
14:07:09.083', 1)
14:07:09.589 [main] DEBUG org.jooq.tools.LoggerListener - Affected row(s) : 1
14:07:09.590 [main] DEBUG org.jooq.tools.StopWatch - Query executed : Total: 47.603ms
14:07:09.591 [main] DEBUG org.jooq.tools.StopWatch - Finishing : Total: 48.827ms, +1.223ms
14:07:09.632 [main] DEBUG org.jooq.tools.LoggerListener - Executing query : select "queue"."
I do not see the "autocommit off" or "savepoint" statements in the logs, which jOOQ generally prints when queries are run inside a transaction. I hope this helps; let me know if you need more info.
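For what it's worth, whether record.insert() runs in a transaction depends entirely on the ConnectionProvider/TransactionProvider behind the Configuration; with a plain auto-commit pool, each statement commits on its own, which matches the logs above. If the insert and its fetch-back need to be atomic with respect to the polling thread, one option is an explicit jOOQ transaction. A sketch, where QueueRecord, QUEUE and setPayload are placeholders for the real generated classes and columns:

```java
// Run the insert (and the implicit refresh that record.insert() performs)
// on one connection, inside one jOOQ-managed transaction.
// QueueRecord / QUEUE / setPayload are placeholders.
DSL.using(configuration).transaction(ctx -> {
    QueueRecord record = DSL.using(ctx).newRecord(QUEUE);
    record.setPayload("...");
    record.insert();
});
```

The lambda receives a derived Configuration bound to the transaction's connection, so every statement attached through it shares that connection until commit.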
UPDATE 2:
jOOQ version: 3.9.1
MySQL version: 5.6.23
The database and jooq entries from the YAML config file:
database:
  driverClass: com.mysql.jdbc.Driver
  user: ***
  password: ***
  url: jdbc:mysql://localhost:3306/mySchema
  properties:
    charSet: UTF-8
    characterEncoding: UTF-8
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 1s
  # the SQL query to run when validating a connection's liveness
  validationQuery: "SELECT 1"
  # the timeout before a connection validation query fails
  validationQueryTimeout: 3s
  # initial number of connections
  initialSize: 25
  # the minimum number of connections to keep open
  minSize: 25
  # the maximum number of connections to keep open
  maxSize: 25
  # whether or not idle connections should be validated
  checkConnectionWhileIdle: true
  # the amount of time to sleep between runs of the idle connection validation, abandoned cleaner and idle pool resizing
  evictionInterval: 10s
  # the minimum amount of time a connection must sit idle in the pool before it is eligible for eviction
  minIdleTime: 1 minute
jooq:
  # The flavor of SQL to generate. If not specified, it will be inferred from the JDBC connection URL. (default: null)
  dialect: MYSQL
  # Whether to write generated SQL to a logger before execution. (default: no)
  logExecutedSql: no
  # Whether to include schema names in generated SQL. (default: yes)
  renderSchema: yes
  # How names should be rendered in generated SQL. One of QUOTED, AS_IS, LOWER, or UPPER. (default: QUOTED)
  renderNameStyle: QUOTED
  # How keywords should be rendered in generated SQL. One of LOWER, UPPER. (default: UPPER)
  renderKeywordStyle: UPPER
  # Whether generated SQL should be pretty-printed. (default: no)
  renderFormatted: no
  # How parameters should be represented. One of INDEXED, NAMED, or INLINE. (default: INDEXED)
  paramType: INDEXED
  # How statements should be generated; one of PREPARED_STATEMENT or STATIC_STATEMENT. (default: PREPARED_STATEMENT)
  statementType: PREPARED_STATEMENT
  # Whether internal jOOQ logging should be enabled. (default: no)
  executeLogging: no
  # Whether optimistic locking should be enabled. (default: no)
  executeWithOptimisticLocking: yes
  # Whether returned records should be 'attached' to the jOOQ context. (default: yes)
  attachRecords: yes
  # Whether primary-key fields should be updatable. (default: no)
  updatablePrimaryKeys: no
I have included the jOOQ bundle in the Application class as described in https://github.com/benjamin-bader/droptools/tree/master/dropwizard-jooq.
I am using https://github.com/xvik/dropwizard-guicey to inject the configuration into each DAO.
The Guice module has the following binding:
bind(Configuration.class).toInstance(jooqBundle.getConfiguration());
I am working in a JBehave (non-Maven) framework and running an insert statement that is read from a text file and passed to my method as qry. The insert statement is:
Insert into croutreach.ACME_OUTREACH_EMT_STG
(SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT)
values (49,<EID>,'new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
The code below executes and prints the statement
Executing to run insert query
but after this it does not proceed to the next step, which is why the connection never gets closed. It is getting stuck at preparedStatement.executeUpdate().
private Connection dbConnection = null;

public boolean runInsertQuery(String qry) {
    System.out.println("Preparing to run insert query");
    // try-with-resources ensures the PreparedStatement is closed even on failure
    try (PreparedStatement preparedStatement = dbConnection.prepareStatement(qry)) {
        System.out.println("Executing to run insert query");
        int result = preparedStatement.executeUpdate();
        System.out.println("Completed to run insert query");
        return result == 1;
    } catch (SQLException e) {
        LOG.fatal("Error in insertion into table :", e);
        return false;
    } finally {
        this.disconnectDb();
    }
}
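Since executeUpdate() can block indefinitely while another session holds a conflicting lock, it may help to set a JDBC query timeout so a hang surfaces as an exception instead of an apparent freeze. A small sketch (the 30-second value is an arbitrary choice, not from the original code):

```java
PreparedStatement preparedStatement = dbConnection.prepareStatement(qry);
// Fail after ~30 seconds instead of blocking forever, e.g. while another
// session holds a lock on the row being inserted. The driver reports the
// timeout as a SQLException (SQLTimeoutException on JDBC 4.1+ drivers).
preparedStatement.setQueryTimeout(30);
int result = preparedStatement.executeUpdate();
```

This does not fix the underlying lock contention, but it turns a silent hang into a diagnosable error.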
My output is as follows:
Method public org.jbehave.core.configuration.Configuration com.TEA.framework.Functional.OnlineCoachingStory.configuration() has a #Test annotation but also a return value: ignoring it. Use <suite allow-return-values="true"> to fix this
Method public org.jbehave.core.steps.InjectableStepsFactory com.TEA.framework.Functional.OnlineCoachingStory.stepsFactory() has a #Test annotation but also a return value: ignoring it. Use <suite allow-return-values="true"> to fix this
[TestNG] Running:
C:\Users\M41974\AppData\Local\Temp\testng-eclipse-1062921629\testng-customsuite.xml
Processing system properties {}
Using controls EmbedderControls[batch=false,skip=false,generateViewAfterStories=true,ignoreFailureInStories=false,ignoreFailureInView=false,verboseFailures=false,verboseFiltering=false,storyTimeoutInSecs=900000000,threads=1]
(BeforeStories)
Running story com/TEA/framework/Story/Copy of OnlineCoaching.story
(com/TEA/framework/Story/Copy of OnlineCoaching.story)
Scenario: Setup the Environment
2015-11-03 04:46:49,425 FATAL pool-1-thread-1- ACME.SAT.Server.Path not found in the Config.properties file
2015-11-03 04:46:49,440 INFO pool-1-thread-1- Host:- lsh1042a.sys.cigna.com Username:- ank_1 password:- abc_123 path:- null
Given Set the environment to SAT
2015-11-03 04:46:49,768 INFO pool-1-thread-1- DB Connection successful
Insert into croutreach.ACME_OUTREACH_EMT_STG (SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT) values (49,'1234567890','new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
Preparing to run insert query
Executing to run insert query
(execution hangs at this point)
Previously I had kept my SQL Developer window open; when I closed it and ran the same script again, it showed the fatal error ORA-00001: unique constraint violated.
I am facing 3 different issues:
While SQL Developer was open, running the script got stuck at the insert.
While it was stuck I closed the SQL Developer window session, after which the output was as below:
2015-11-03 05:01:35,061 INFO main- DB Connection successful
Insert into croutreach.ACME_OUTREACH_EMT_STG (SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT) values (49,'02123456789','new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
Preparing to run insert query
Executing to run insert query
Completed to run insert query
2015-11-03 05:02:11,128 INFO main- DB Disconnected!!
Insert complete
After closing the SQL Developer window, running the script again completed, but with the fatal error below:
2015-11-03 05:13:08,177 INFO main- DB Connection successful
Insert into croutreach.ACME_OUTREACH_EMT_STG (SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT) values (49,'02123456789','new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
Preparing to run insert query
Executing to run insert query
2015-11-03 05:13:08,319 FATAL main- Error in insertion into table :
java.sql.SQLException: ORA-00001: unique constraint (CROUTREACH.ACME_OUTREACH_EMT_STG_PK) violated
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:955)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1168)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3316)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3400)
at com.TEA.framework.Technology.DatabaseClass.runInsertQuery(DatabaseClass.java:169)
at com.TEA.framework.Business.AcmeDB.insertACME_OUTREACH_EMT_STG(AcmeDB.java:186)
at com.TEA.framework.Technology.MainClass.DBinsertTest(MainClass.java:123)
at com.TEA.framework.Technology.MainClass.main(MainClass.java:24)
2015-11-03 05:13:08,350 INFO main- DB Disconnected!!
2015-11-03 05:13:08,365 INFO main- DB Disconnected!!
You're just seeing one session block another. When you do the insert in SQL Developer and leave the session open without committing or rolling back, you hold a lock on the newly-inserted row and a (sort of pending) index entry.
If you then run your program and do the same insert, that happens in a different session. It cannot see the data you inserted through SQL Developer, but it can see the lock, and it waits for that lock to be released. While it's blocked it just sits there waiting, which you're interpreting as it being stuck.
If you roll back the SQL Developer insert then the lock is released and your program's blocked insert continues, successfully. The new record is inserted, and will either be committed or rolled back depending on what your code does next, or on the connection's autocommit setting.
It looks like it commits, since running it again tries to repeat the insert, and this time it does see the existing committed data - hence the ORA-00001 error, as you're trying to create a second row with the same primary key value. You would have seen the same thing if you had committed the SQL Developer session instead of rolling it back.
If you want to run your program a second time you will need to delete the row (and commit that change!) before trying the insert again.
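The sequence can be illustrated with two sessions (column lists abbreviated and values shortened; this is a stand-in for the original insert, not its exact text):

```sql
-- Session 1 (SQL Developer): insert, but neither commit nor roll back.
-- This session now holds a row lock and a pending unique-index entry for SEQ = 49.
INSERT INTO croutreach.acme_outreach_emt_stg (seq, eid, ...) VALUES (49, '0212...', ...);

-- Session 2 (the JBehave program): same key. This statement blocks until
-- session 1 commits or rolls back.
INSERT INTO croutreach.acme_outreach_emt_stg (seq, eid, ...) VALUES (49, '0212...', ...);

-- If session 1 commits:    session 2 fails with ORA-00001 (duplicate key).
-- If session 1 rolls back: session 2's insert completes.
```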
I'm working on an Integrator for Hibernate (background on Integrators: https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch14.html#objectstate-decl-security) that uses listeners to take my data from how it's stored in the DB and convert it into a different form for processing at runtime. This works great when saving the data using .persist(), however there's an odd behavior involving transactions. The following code is from Hibernate's own quickstart tutorial:
// now lets pull events from the database and list them
entityManager = entityManagerFactory.createEntityManager();
entityManager.getTransaction().begin();
List<Event> result = entityManager.createQuery( "from Event", Event.class ).getResultList();
for ( Event event : result ) {
System.out.println( "Event (" + event.getDate() + ") : " + event.getTitle() );
}
entityManager.getTransaction().commit();
entityManager.close();
Notice the unusual transaction begin/commit wrapping the query to select the data. Running this gives the following output after the query completes:
01:01:59.111 [main] DEBUG org.hibernate.engine.transaction.spi.AbstractTransactionImpl.commit(175) - committing
01:01:59.112 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.prepareEntityFlushes(149) - Processing flush-time cascades
01:01:59.112 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.prepareCollectionFlushes(189) - Dirty checking collections
01:01:59.114 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.logFlushResults(123) - Flushed: 0 insertions, 2 updates, 0 deletions to 2 objects
01:01:59.114 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.logFlushResults(130) - Flushed: 0 (re)creations, 0 updates, 0 removals to 0 collections
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(114) - Listing entities:
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(121) - org.hibernate.tutorial.em.Event{date=2015-07-28 01:01:57.776, id=1, title=Our very first event!}
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(121) - org.hibernate.tutorial.em.Event{date=2015-07-28 01:01:58.746, id=2, title=A follow up event}
01:01:59.115 [main] DEBUG org.hibernate.SQL.logStatement(109) - update EVENTS set EVENT_DATE=?, title=? where id=?
Hibernate: update EVENTS set EVENT_DATE=?, title=? where id=?
01:01:59.119 [main] DEBUG org.hibernate.SQL.logStatement(109) - update EVENTS set EVENT_DATE=?, title=? where id=?
Hibernate: update EVENTS set EVENT_DATE=?, title=? where id=?
01:01:59.120 [main] DEBUG org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doCommit(113) - committed JDBC Connection
01:01:59.120 [main] DEBUG org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.close(201) - HHH000420: Closing un-released batch
01:01:59.121 [main] DEBUG org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.releaseConnection(246) - Releasing JDBC connection
01:01:59.121 [main] DEBUG org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.releaseConnection(264) - Released JDBC connection
01:01:59.121 [main] DEBUG org.hibernate.internal.SessionFactoryImpl.close(1339) - HHH000031: Closing
It appears that since the Integrator modifies the entity in question, the entity gets marked as "dirty", and upon committing this odd transaction Hibernate bypasses my event listeners and writes the value back in the wrong format! I did some digging in the code: it turns out that org.hibernate.event.internal.AbstractFlushingEventListener.flushEntities(FlushEvent, PersistenceContext) gets called above and looks up listeners for EventType.FLUSH_ENTITY. Unfortunately, a listener added for this EventType in my Integrator is never called. How can I write my Integrator so that it behaves correctly in this case, i.e. so that I can "undo" the conversion applied to my entities at runtime and not flush the wrong value out?
Ultimately the problem came down to the EventTypes of the event listeners added via the EventListenerRegistry. What worked was using EventType.POST_LOAD for all the read operations, combined with EventType.PRE_UPDATE and EventType.PRE_INSERT for writes, with both write listeners calling a shared helper method so they are handled the same way.
To prevent unneeded writes after making your entity updates, it's a good idea to reset the data Hibernate uses for dirty tracking: the loadedState field of EntityEntry. This is a private field in Hibernate 4, so you'll need to use reflection; in Hibernate 5 it's available via the getLoadedState() method. One more gotcha: you also need to update the values of the "state" array that is actually flushed to the database, which can be retrieved from the getState() method defined on both PreInsertEvent and PreUpdateEvent.
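For reference, the registration described above can be sketched as follows. This is a Hibernate 4-style Integrator; MyConversionListener is a placeholder for a class implementing PostLoadEventListener, PreInsertEventListener and PreUpdateEventListener, and the remaining Integrator methods are elided:

```java
public class ConversionIntegrator implements Integrator {

    @Override
    public void integrate(Configuration configuration,
                          SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        EventListenerRegistry registry =
                serviceRegistry.getService(EventListenerRegistry.class);

        // One listener instance handles reads (POST_LOAD) and routes both
        // write events (PRE_INSERT, PRE_UPDATE) through a shared helper.
        MyConversionListener listener = new MyConversionListener();
        registry.appendListeners(EventType.POST_LOAD, listener);
        registry.appendListeners(EventType.PRE_INSERT, listener);
        registry.appendListeners(EventType.PRE_UPDATE, listener);
    }

    // integrate(MetadataImplementor, ...) and disintegrate(...) elided
}
```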
I'm using PostgreSQL 8.2.9, Solr 3.1, Tomcat 5.5
I have the following problem:
When I execute a delta-import - /dataimport?command=delta-import - any update queries to the database do not respond for about 30 seconds.
I can easily repeat this behaviour (using psql or hibernate):
PSQL:
Execute delta-import
Immediately in psql - run SQL query: 'UPDATE table SET ... WHERE id = 1;' several times
The second/third/... time, I must wait ~30 seconds for the query to return
Hibernate:
In the logs, Hibernate waits ~30 seconds in the 'doExecuteBatch(...)' method after setting query parameters.
No other queries are executed while I'm testing this. On the other hand, when I execute other commands (like full-import, etc.) everything works perfectly fine.
In Solr's dataconfig.xml:
I have the readOnly attribute set to true on the PostgreSQL dataSource.
deltaImportQuery, deltaQuery, ... on the entity tags don't lock the database (they are simple SELECTs)
Web app (using hibernate) logs:
2012-01-08 18:54:52,403 DEBUG my.package.Class1... Executing method: save
2012-01-08 18:55:26,707 DEBUG my.package.Class1... Executing method: list
Solr logs:
INFO: [] webapp=/search path=/dataimport params={debug=query&command=delta-import&commit=true} status=0 QTime=1
2012-01-08 18:54:50 org.apache.solr.handler.dataimport.DataImporter doDeltaImport
INFO: Starting Delta Import
...
2012-01-08 18:54:50 org.apache.solr.handler.dataimport.JdbcDataSource$1 call
INFO: Time taken for getConnection(): 4
...
FINE: Executing SQL: select ... where last_edit_date > '2012-01-08 18:51:43'
2012-01-08 18:54:50 org.apache.solr.core.Config getNode
...
FINEST: Time taken for sql :4
...
INFO: Import completed successfully
2012-01-08 18:54:50 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start commit(optimize=true,waitFlush=false,waitSearcher=true,expungeDeletes=false)
2012-01-08 18:54:53 org.apache.solr.core.Config getNode
...
2012-01-08 18:54:53 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: end_commit_flush
...
2012-01-08 18:54:53 org.apache.solr.handler.dataimport.DocBuilder execute
INFO: Time taken = 0:0:2.985
There are no 'SELECT ... FOR UPDATE / LOCK / etc.' queries in the above logs.
I have enabled logging for PostgreSQL - there are no locks. The sessions are even set to:
Jan 11 14:33:07 test postgres[26201]: [3-1] <26201> LOG: execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY
Jan 11 14:33:07 test postgres[26201]: [4-1] <26201> LOG: execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
Why is this happening? It looks like some kind of database lock, but then why, when the import completes after 2 seconds, do queries still wait for 30 seconds?
The UPDATE is waiting for the SELECT statement to complete before executing. There's not a lot you can do about that, as far as I'm aware. We get around the issue by doing our indexing in batches. Multiple SELECT statements are fine, but UPDATE and DELETE affect the records and won't execute until they can lock the table.
OK, it was hard to find the solution to this problem.
The reason was the underlying platform: disk-write saturation. There were too many small disk writes, which consumed all of the available disk-write capacity.
We now have a new agreement with our service-layer provider.
Test query:
while true ; do echo "UPDATE table_name SET type='P' WHERE type='P'" | psql -U user -d database_name ; sleep 1 ; done
plus making changes via our other application, plus updating the index simultaneously.
(Two monitoring graphs followed here, showing disk-write behaviour before and after the platform change; the images are not included.)