Why is my pessimistic locking in JPA with Oracle not working? - java

I'm trying to implement a kind of semaphore for cron jobs that run on different JBoss nodes. I'm trying to use the database (Oracle 11g) as the locking mechanism, with one table used to synchronize the cron jobs across the different nodes.
The table is very simple:
CREATE TABLE SYNCHRONIZED_CRON_JOB_TASK
(
  ID NUMBER(10) NOT NULL,
  CRONJOBTYPE VARCHAR2(255 Byte),
  CREATIONDATE TIMESTAMP(6) NOT NULL,
  RUNNING NUMBER(1)
);
ALTER TABLE SYNCHRONIZED_CRON_JOB_TASK
  ADD CONSTRAINT PK_SYNCHRONIZED_CRON_JOB_TASK
  PRIMARY KEY (ID);
So when a job starts, it searches the table for an entry of its cronjobtype and checks whether it is already running. If not, it updates the entry, setting the running flag to true. This first select is made with the JPA Criteria API using Hibernate and a pessimistic lock:
query.setLockMode(javax.persistence.LockModeType.PESSIMISTIC_WRITE);
All those operations are made within one transaction; a sketch of the flow is shown below.
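A minimal sketch of that flow, assuming a SynchronizedCronJobTask entity mapped to the table above (the entity class, its accessors, and the jobType variable are assumptions, not code from the question):
EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();
try {
    CriteriaBuilder cb = em.getCriteriaBuilder();
    CriteriaQuery<SynchronizedCronJobTask> cq = cb.createQuery(SynchronizedCronJobTask.class);
    Root<SynchronizedCronJobTask> root = cq.from(SynchronizedCronJobTask.class);
    cq.select(root).where(cb.equal(root.get("cronJobType"), jobType));

    TypedQuery<SynchronizedCronJobTask> query = em.createQuery(cq);
    query.setLockMode(javax.persistence.LockModeType.PESSIMISTIC_WRITE);
    query.setMaxResults(1); // produces the "rownum <= ?" wrapper seen in the log

    SynchronizedCronJobTask task = query.getSingleResult();
    if (!task.isRunning()) {
        task.setRunning(true); // flushed as the UPDATE seen in the log
    }
    em.getTransaction().commit();
} finally {
    em.close();
}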
When one process runs, the queries it executes are the following:
[Server:server-two] 10:38:00,049 INFO [stdout] (scheduler-2) 2015-04-30 10:38:00,048 WARN (Loader.java:264) - HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query executes
[Server:server-two] 10:38:00,049 INFO [stdout] (scheduler-2) Hibernate: select * from ( select distinct synchroniz0_.id as id1_127_, synchroniz0_.creationDate as creation2_127_, synchroniz0_.running as running3_127_, synchroniz0_.CRONJOBTYPE as CRONJOBT4_127_ from SYNCHRONIZED_CRON_JOB_TASK synchroniz0_ where synchroniz0_.CRONJOBTYPE=? ) where rownum <= ?
[Server:server-two] 10:38:00,053 INFO [stdout] (scheduler-2) Hibernate: select id from SYNCHRONIZED_CRON_JOB_TASK where id =? for update
[Server:server-two] 10:38:00,056 INFO [stdout] (scheduler-2) Hibernate: update SYNCHRONIZED_CRON_JOB_TASK set creationDate=?, running=?, CRONJOBTYPE=? where id=?
The warning itself is not a problem: you can see a first select and then a select ... for update, so Oracle should block other locking reads of this row.
But that is exactly the point: the queries are not being blocked, so two jobs can both make the select and the update without any problem. The lock is not working, as we can see if we run two cron jobs simultaneously:
[Server:server-one] 10:38:00,008 INFO [stdout] (scheduler-3) 2015-04-30 10:38:00,008 WARN (Loader.java:264) - HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query executes
[Server:server-two] 10:38:00,008 INFO [stdout] (scheduler-2) 2015-04-30 10:38:00,008 WARN (Loader.java:264) - HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query executes
[Server:server-two] 10:38:00,009 INFO [stdout] (scheduler-2) Hibernate: select * from ( select distinct synchroniz0_.id as id1_127_, synchroniz0_.creationDate as creation2_127_, synchroniz0_.running as running3_127_, synchroniz0_.CRONJOBTYPE as CRONJOBT4_127_ from SYNCHRONIZED_CRON_JOB_TASK synchroniz0_ where synchroniz0_.CRONJOBTYPE=? ) where rownum <= ?
[Server:server-one] 10:38:00,009 INFO [stdout] (scheduler-3) Hibernate: select * from ( select distinct synchroniz0_.id as id1_127_, synchroniz0_.creationDate as creation2_127_, synchroniz0_.running as running3_127_, synchroniz0_.CRONJOBTYPE as CRONJOBT4_127_ from SYNCHRONIZED_CRON_JOB_TASK synchroniz0_ where synchroniz0_.CRONJOBTYPE=? ) where rownum <= ?
[Server:server-two] 10:38:00,013 INFO [stdout] (scheduler-2) Hibernate: select id from SYNCHRONIZED_CRON_JOB_TASK where id =? for update
[Server:server-one] 10:38:00,014 INFO [stdout] (scheduler-3) Hibernate: select id from SYNCHRONIZED_CRON_JOB_TASK where id =? for update
[Server:server-two] 10:38:00,016 INFO [stdout] (scheduler-2) 2015-04-30 10:38:00,015 DEBUG (SynchronizedCronJobService.java:65) - Task read SynchronizedCronJobTask [id=185, type=AlertMailTaskExecutor, creationDate=2015-04-25 07:11:33.0, running=false]
[Server:server-two] 10:38:00,018 INFO [stdout] (scheduler-2) Hibernate: update SYNCHRONIZED_CRON_JOB_TASK set creationDate=?, running=?, CRONJOBTYPE=? where id=?
[Server:server-one] 10:38:00,022 INFO [stdout] (scheduler-3) 2015-04-30 10:38:00,022 DEBUG (SynchronizedCronJobService.java:65) - Task read SynchronizedCronJobTask [id=185, type=AlertMailTaskExecutor, creationDate=2015-04-25 07:11:33.0, running=false]
[Server:server-one] 10:38:00,024 INFO [stdout] (scheduler-3) Hibernate: update SYNCHRONIZED_CRON_JOB_TASK set creationDate=?, running=?, CRONJOBTYPE=? where id=?
I've tried this select ... for update in a SQL tool (SQL Workbench/J) with two connections, and the blocking works fine within that tool. But if I run the select ... for update in the SQL tool and then launch the cron jobs, they are not blocked and run without problems.
I think the problem comes from JPA, Hibernate or the Oracle driver, but I'm not sure. Any idea where the problem is? Should I use another strategy?
Thanks in advance.

Finally I managed to make it work, but with some modifications. The idea is to use LockModeType.PESSIMISTIC_FORCE_INCREMENT instead of PESSIMISTIC_WRITE. With this lock mode the cron jobs behave as follows:
When the first job makes the select for update, everything goes as expected, but the version on the object changes.
If another job tries to make the same select while the first is still in its transaction, JPA throws an OptimisticLockException, so if you catch that exception you can be sure it was thrown because of the read lock.
This solution has several drawbacks:
SynchronizedCronJobTask must have a version field and be under version control with @Version
You need to handle OptimisticLockException, and it should be caught outside the transactional service method so that the rollback happens when the lock is hit.
IMHO it is an inelegant solution, much worse than a simple lock where the cron jobs wait for the previous job to finish. A minimal sketch of the approach follows.
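A minimal sketch of the working variant, assuming a simplified entity and a hypothetical service method (acquireAndRun) that performs the locking read inside its own transaction:
import javax.persistence.*;

@Entity
public class SynchronizedCronJobTask {
    @Id
    private Long id;

    @Version // required for the FORCE_INCREMENT lock modes
    private Long version;

    private boolean running;
    // getters and setters omitted
}

// Caller, outside the transactional service method, so the rollback has
// already happened by the time the exception is handled:
try {
    // internally uses query.setLockMode(LockModeType.PESSIMISTIC_FORCE_INCREMENT)
    service.acquireAndRun(cronJobType);
} catch (OptimisticLockException e) {
    // another node holds the lock for this cron job type; skip this run
}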

I can confirm Ricardo's observation. I tested several lock modes with an H2 database, and all worked as expected. Neither of the pessimistic lock modes worked correctly in combination with an Oracle database. I did not try optimistic locking, but it's astonishing that there's a lock mode that doesn't work with the top dog at all.

Set the locking mode to PESSIMISTIC_READ, because you need the second server to know about the changes of the first server before it changes the data.
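In JPA terms that suggestion is a one-line change to the query shown in the question:
query.setLockMode(javax.persistence.LockModeType.PESSIMISTIC_READ);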

Related

jooq insert throws an exception when another thread is reading from same table

I have a table where I am inserting records using the record.insert() method. I believe this method does an insert and then a select, but in different transactions. At the same time I have another thread which polls this table for records, processes them, and then deletes them.
In some cases I am getting the below exception:
org.jooq.exception.NoDataFoundException: Exactly one row expected for refresh. Record does not exist in database.
at org.jooq.impl.UpdatableRecordImpl.refresh(UpdatableRecordImpl.java:345)
at org.jooq.impl.TableRecordImpl.getReturningIfNeeded(TableRecordImpl.java:232)
at org.jooq.impl.TableRecordImpl.storeInsert0(TableRecordImpl.java:208)
at org.jooq.impl.TableRecordImpl$1.operate(TableRecordImpl.java:169)
My solution was to use DSL.using(configuration()).insertInto instead of record.insert().
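Roughly, the workaround looks like this (QUEUE and its fields stand in for the actual generated jOOQ classes):
DSL.using(configuration())
   .insertInto(QUEUE, QUEUE.ID, QUEUE.PAYLOAD, QUEUE.CREATED_AT)
   .values(id, payload, createdAt)
   .execute(); // a plain INSERT, with no refresh/SELECT afterwards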
My question is: shouldn't the insert and the fetch be done in the same transaction?
UPDATE:
This is a Dropwizard app that is using the jOOQ bundle com.bendb.dropwizard:dropwizard-jooq.
The configuration is injected into the DAO; the insert is as follows:
R object = // jooq record
object.attach(configuration);
object.insert();
On the second thread I am just selecting some records from this table, processing them, and then deleting them.
The jOOQ logs clearly show that the two queries are not run in the same transaction:
14:07:09.550 [main] DEBUG org.jooq.tools.LoggerListener - -> with bind values : insert into "queue"
....
14:07:09.083', 1)
14:07:09.589 [main] DEBUG org.jooq.tools.LoggerListener - Affected row(s) : 1
14:07:09.590 [main] DEBUG org.jooq.tools.StopWatch - Query executed : Total: 47.603ms
14:07:09.591 [main] DEBUG org.jooq.tools.StopWatch - Finishing : Total: 48.827ms, +1.223ms
14:07:09.632 [main] DEBUG org.jooq.tools.LoggerListener - Executing query : select "queue"."
I do not see the "autocommit off" or "savepoint" statements in the logs, which jOOQ generally prints when queries are run in a transaction. I hope this helps; let me know if you need more info.
UPDATE 2:
jOOQ version is 3.9.1
MySQL version is 5.6.23
Database and jooq entries from the YAML file:
database:
  driverClass: com.mysql.jdbc.Driver
  user: ***
  password: ***
  url: jdbc:mysql://localhost:3306/mySchema
  properties:
    charSet: UTF-8
    characterEncoding: UTF-8
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 1s
  # the SQL query to run when validating a connection's liveness
  validationQuery: "SELECT 1"
  # the timeout before connection validation queries fail
  validationQueryTimeout: 3s
  # initial number of connections
  initialSize: 25
  # the minimum number of connections to keep open
  minSize: 25
  # the maximum number of connections to keep open
  maxSize: 25
  # whether or not idle connections should be validated
  checkConnectionWhileIdle: true
  # the amount of time to sleep between runs of the idle connection validation, abandoned cleaner and idle pool resizing
  evictionInterval: 10s
  # the minimum amount of time a connection must sit idle in the pool before it is eligible for eviction
  minIdleTime: 1 minute
jooq:
  # The flavor of SQL to generate. If not specified, it will be inferred from the JDBC connection URL. (default: null)
  dialect: MYSQL
  # Whether to write generated SQL to a logger before execution. (default: no)
  logExecutedSql: no
  # Whether to include schema names in generated SQL. (default: yes)
  renderSchema: yes
  # How names should be rendered in generated SQL. One of QUOTED, AS_IS, LOWER, or UPPER. (default: QUOTED)
  renderNameStyle: QUOTED
  # How keywords should be rendered in generated SQL. One of LOWER, UPPER. (default: UPPER)
  renderKeywordStyle: UPPER
  # Whether generated SQL should be pretty-printed. (default: no)
  renderFormatted: no
  # How parameters should be represented. One of INDEXED, NAMED, or INLINE. (default: INDEXED)
  paramType: INDEXED
  # How statements should be generated; one of PREPARED_STATEMENT or STATIC_STATEMENT. (default: PREPARED_STATEMENT)
  statementType: PREPARED_STATEMENT
  # Whether internal jOOQ logging should be enabled. (default: no)
  executeLogging: no
  # Whether optimistic locking should be enabled. (default: no)
  executeWithOptimisticLocking: yes
  # Whether returned records should be 'attached' to the jOOQ context. (default: yes)
  attachRecords: yes
  # Whether primary-key fields should be updatable. (default: no)
  updatablePrimaryKeys: no
I have included the jooq bundle in the Application class as described in https://github.com/benjamin-bader/droptools/tree/master/dropwizard-jooq.
I am using https://github.com/xvik/dropwizard-guicey to inject the configuration into each DAO.
The Guice module has the following binding:
bind(Configuration.class).toInstance(jooqBundle.getConfiguration());

preparedStatement.executeUpdate() not inserting values

I am working in a JBehave non-Maven framework, and I am running an insert statement which is inside a text file that I pass to my method as qry. The insert statement is:
Insert into croutreach.ACME_OUTREACH_EMT_STG
(SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT)
values (49,<EID>,'new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
The code below executes and prints the statement
Executing to run insert query
but after this it does not proceed to the next step, and that is why the connection is not getting closed. It gets stuck at preparedStatement.executeUpdate().
private Connection dbConnection = null;

public boolean runInsertQuery(String qry) {
    int result;
    System.out.println("Preparing to run insert query");
    try {
        PreparedStatement preparedStatement = dbConnection.prepareStatement(qry);
        System.out.println("Executing to run insert query");
        // blocks here while another session holds a lock on the conflicting row
        result = preparedStatement.executeUpdate();
        System.out.println("Completed to run insert query");
        return result == 1;
    } catch (SQLException e) {
        LOG.fatal("Error in insertion into table :", e);
        return false;
    } finally {
        // note: disconnects without an explicit commit or rollback
        this.disconnectDb();
    }
}
My output looks like this:
Method public org.jbehave.core.configuration.Configuration com.TEA.framework.Functional.OnlineCoachingStory.configuration() has a #Test annotation but also a return value: ignoring it. Use <suite allow-return-values="true"> to fix this
Method public org.jbehave.core.steps.InjectableStepsFactory com.TEA.framework.Functional.OnlineCoachingStory.stepsFactory() has a #Test annotation but also a return value: ignoring it. Use <suite allow-return-values="true"> to fix this
[TestNG] Running:
C:\Users\M41974\AppData\Local\Temp\testng-eclipse-1062921629\testng-customsuite.xml
Processing system properties {}
Using controls EmbedderControls[batch=false,skip=false,generateViewAfterStories=true,ignoreFailureInStories=false,ignoreFailureInView=false,verboseFailures=false,verboseFiltering=false,storyTimeoutInSecs=900000000,threads=1]
(BeforeStories)
Running story com/TEA/framework/Story/Copy of OnlineCoaching.story
(com/TEA/framework/Story/Copy of OnlineCoaching.story)
Scenario: Setup the Environment
2015-11-03 04:46:49,425 FATAL pool-1-thread-1- ACME.SAT.Server.Path not found in the Config.properties file
2015-11-03 04:46:49,440 INFO pool-1-thread-1- Host:- lsh1042a.sys.cigna.com Username:- ank_1 password:- abc_123 path:- null
Given Set the environment to SAT
2015-11-03 04:46:49,768 INFO pool-1-thread-1- DB Connection successful
Insert into croutreach.ACME_OUTREACH_EMT_STG (SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT) values (49,'1234567890','new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
Preparing to run insert query
Executing to run insert query
|
Previously I had kept my SQL Developer window open; when I closed it and ran the same script again, it showed the fatal error ORA-00001: unique constraint violated.
I am facing 3 different issues:
When SQL Developer was open, running the script got stuck at the insert.
While it was stuck, I closed the SQL Developer window session, and after that the output was as below:
2015-11-03 05:01:35,061 INFO main- DB Connection successful
Insert into croutreach.ACME_OUTREACH_EMT_STG (SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT) values (49,'02123456789','new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
Preparing to run insert query
Executing to run insert query
Completed to run insert query
2015-11-03 05:02:11,128 INFO main- DB Disconnected!!
Insert complete
After closing the SQL Developer window, running the script again completed, but with a fatal error as below:
2015-11-03 05:13:08,177 INFO main- DB Connection successful
Insert into croutreach.ACME_OUTREACH_EMT_STG (SEQ,EID,EVENT_TYPE_HINT,LOADER_TYPE,CLIENT_ID_LIST,ACCT_NUM_LIST,PROCS_DT) values (49,'02123456789','new','CI',null,null,'02-SEP-13 06.55.41.952000000 PM')
Preparing to run insert query
Executing to run insert query
2015-11-03 05:13:08,319 FATAL main- Error in insertion into table :
java.sql.SQLException: ORA-00001: unique constraint (CROUTREACH.ACME_OUTREACH_EMT_STG_PK) violated
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:955)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1168)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3316)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3400)
at com.TEA.framework.Technology.DatabaseClass.runInsertQuery(DatabaseClass.java:169)
at com.TEA.framework.Business.AcmeDB.insertACME_OUTREACH_EMT_STG(AcmeDB.java:186)
at com.TEA.framework.Technology.MainClass.DBinsertTest(MainClass.java:123)
at com.TEA.framework.Technology.MainClass.main(MainClass.java:24)
2015-11-03 05:13:08,350 INFO main- DB Disconnected!!
2015-11-03 05:13:08,365 INFO main- DB Disconnected!!
You're just seeing one session block another. When you do the insert in SQL Developer and leave the session running without committing or rolling back you have a lock on the newly-inserted row and a (sort of pending) index entry.
If you run your program and do the same insert again, then that is in a different session. It cannot see the data you inserted through SQL Developer, but it can see the lock, and it waits for that lock to be cleared. While it's blocked it just sits there waiting, which you're interpreting as it being stuck.
If you roll back the SQL Developer insert then the lock is cleared and your program's blocked insert then continues, successfully. The new record is inserted, and will either be committed or rolled back depending on what your code does next, or the connection's autocommit setting.
It looks like it commits, since running it again tries to repeat the insert, and now it does see the existing committed data - hence the ORA-00001 error, as you're trying to create a second entry with the same primary key value. You would have seen the same thing if you had committed the SQL Developer session instead of rolling back.
If you want to run your program a second time you will need to delete the row (and commit that change!) before trying the insert again.
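A minimal sketch of that cleanup, reusing the dbConnection from the question and assuming SEQ is the column behind the ACME_OUTREACH_EMT_STG_PK constraint (49 matches the test insert):
try (PreparedStatement ps = dbConnection.prepareStatement(
        "DELETE FROM croutreach.ACME_OUTREACH_EMT_STG WHERE SEQ = ?")) {
    ps.setInt(1, 49); // the SEQ used by the test insert
    ps.executeUpdate();
    if (!dbConnection.getAutoCommit()) {
        dbConnection.commit(); // make the delete visible to other sessions
    }
}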

Hibernate Integrator Causes Flush When Using JPA Transactions Around Queries

I'm working on an Integrator for Hibernate (background on Integrators: https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch14.html#objectstate-decl-security) that, by using listeners, is supposed to take my data from how it's stored in the DB and convert it into a different form for processing at runtime. This works great when saving the data using .persist(); however, there's an odd behavior involving transactions. The following code is from Hibernate's own quickstart tutorial:
// now lets pull events from the database and list them
entityManager = entityManagerFactory.createEntityManager();
entityManager.getTransaction().begin();
List<Event> result = entityManager.createQuery( "from Event", Event.class ).getResultList();
for ( Event event : result ) {
    System.out.println( "Event (" + event.getDate() + ") : " + event.getTitle() );
}
entityManager.getTransaction().commit();
entityManager.close();
Notice the unusual transaction begin/commit wrapping the query to select the data. Running this gives the following output after the query completes:
01:01:59.111 [main] DEBUG org.hibernate.engine.transaction.spi.AbstractTransactionImpl.commit(175) - committing
01:01:59.112 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.prepareEntityFlushes(149) - Processing flush-time cascades
01:01:59.112 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.prepareCollectionFlushes(189) - Dirty checking collections
01:01:59.114 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.logFlushResults(123) - Flushed: 0 insertions, 2 updates, 0 deletions to 2 objects
01:01:59.114 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.logFlushResults(130) - Flushed: 0 (re)creations, 0 updates, 0 removals to 0 collections
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(114) - Listing entities:
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(121) - org.hibernate.tutorial.em.Event{date=2015-07-28 01:01:57.776, id=1, title=Our very first event!}
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(121) - org.hibernate.tutorial.em.Event{date=2015-07-28 01:01:58.746, id=2, title=A follow up event}
01:01:59.115 [main] DEBUG org.hibernate.SQL.logStatement(109) - update EVENTS set EVENT_DATE=?, title=? where id=?
Hibernate: update EVENTS set EVENT_DATE=?, title=? where id=?
01:01:59.119 [main] DEBUG org.hibernate.SQL.logStatement(109) - update EVENTS set EVENT_DATE=?, title=? where id=?
Hibernate: update EVENTS set EVENT_DATE=?, title=? where id=?
01:01:59.120 [main] DEBUG org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doCommit(113) - committed JDBC Connection
01:01:59.120 [main] DEBUG org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.close(201) - HHH000420: Closing un-released batch
01:01:59.121 [main] DEBUG org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.releaseConnection(246) - Releasing JDBC connection
01:01:59.121 [main] DEBUG org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.releaseConnection(264) - Released JDBC connection
01:01:59.121 [main] DEBUG org.hibernate.internal.SessionFactoryImpl.close(1339) - HHH000031: Closing
It appears that since the Integrator modifies the entity in question, the entity gets marked as "dirty", and upon committing this odd transaction Hibernate bypasses my event listeners and writes the value back in the wrong format! I did some digging in the code, and it turns out that org.hibernate.event.internal.AbstractFlushingEventListener.flushEntities(FlushEvent, PersistenceContext) gets called above and tries to get listeners for EventType.FLUSH_ENTITY. Unfortunately, a listener added for this EventType is never called in my Integrator. How can I write my Integrator to behave correctly in this case, so that I can "undo" the conversion that has happened with my entities at runtime and not flush the wrong value out?
Ultimately the problem was due to the EventTypes of the event listeners added with the EventListenerRegistry. What worked was using EventType.POST_LOAD for all the read operations, combined with EventType.PRE_UPDATE and EventType.PRE_INSERT for the writes, all calling a helper method that handles both cases the same way.
To prevent unneeded writes after making your entity updates, it's a good idea to reset the data used for tracking whether the entity is dirty, held in the EntityEntry field called loadedState. This is a private field in Hibernate 4, so you'll need to use reflection; in Hibernate 5 it's available via the getLoadedState() method. One more gotcha: you also need to update the values of the "state" that is actually flushed to the database, carried by the PreInsertEvent and PreUpdateEvent and retrievable from the getState() method defined on each.
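A sketch of that listener wiring, using the Hibernate 5 Integrator API (the conversion helpers are assumptions, and the loadedState reset described above is elided):
import org.hibernate.boot.Metadata;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;
import org.hibernate.event.spi.PostLoadEventListener;
import org.hibernate.event.spi.PreInsertEventListener;
import org.hibernate.event.spi.PreUpdateEventListener;
import org.hibernate.integrator.spi.Integrator;
import org.hibernate.service.spi.SessionFactoryServiceRegistry;

public class ConversionIntegrator implements Integrator {

    @Override
    public void integrate(Metadata metadata,
                          SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        EventListenerRegistry registry =
                serviceRegistry.getService(EventListenerRegistry.class);

        // DB form -> runtime form after every load
        registry.appendListeners(EventType.POST_LOAD,
                (PostLoadEventListener) event -> toRuntimeForm(event.getEntity()));

        // runtime form -> DB form just before the values are flushed out
        registry.appendListeners(EventType.PRE_INSERT,
                (PreInsertEventListener) event -> {
                    toDatabaseForm(event.getEntity(), event.getState());
                    return false; // do not veto the insert
                });
        registry.appendListeners(EventType.PRE_UPDATE,
                (PreUpdateEventListener) event -> {
                    toDatabaseForm(event.getEntity(), event.getState());
                    return false; // do not veto the update
                });
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory,
                             SessionFactoryServiceRegistry serviceRegistry) {
        // nothing to clean up
    }

    private void toRuntimeForm(Object entity) { /* conversion elided */ }

    private void toDatabaseForm(Object entity, Object[] state) { /* conversion elided */ }
}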

Apache Solr - waiting on sql queries when delta-import command is executed

I'm using PostgreSQL 8.2.9, Solr 3.1, Tomcat 5.5
I have the following problem:
When I execute a delta-import - /dataimport?command=delta-import - any update queries to the database get no response for about 30 seconds.
I can easily reproduce this behaviour (using psql or Hibernate):
PSQL:
Execute delta-import
Immediately in psql, run the SQL query 'UPDATE table SET ... WHERE id = 1;' several times
The second/third/... time, I must wait ~30 seconds for the query to return
Hibernate:
In the logs, Hibernate waits ~30 seconds in the method 'doExecuteBatch(...)' after setting the query parameters
No other queries are executed while I'm testing this. On the other hand, when I execute other commands (like full-import, etc.), everything works perfectly fine.
In Solr's dataconfig.xml:
I have the readOnly attribute set to true on the PostgreSQL dataSource.
deltaImportQuery, deltaQuery, ... on the entity tags don't lock the database (simple SELECTs)
Web app (using Hibernate) logs:
2012-01-08 18:54:52,403 DEBUG my.package.Class1... Executing method: save
2012-01-08 18:55:26,707 DEBUG my.package.Class1... Executing method: list
Solr logs:
INFO: [] webapp=/search path=/dataimport params={debug=query&command=delta-import&commit=true} status=0 QTime=1
2012-01-08 18:54:50 org.apache.solr.handler.dataimport.DataImporter doDeltaImport
INFO: Starting Delta Import
...
2012-01-08 18:54:50 org.apache.solr.handler.dataimport.JdbcDataSource$1 call
INFO: Time taken for getConnection(): 4
...
FINE: Executing SQL: select ... where last_edit_date > '2012-01-08 18:51:43'
2012-01-08 18:54:50 org.apache.solr.core.Config getNode
...
FINEST: Time taken for sql :4
...
INFO: Import completed successfully
2012-01-08 18:54:50 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start commit(optimize=true,waitFlush=false,waitSearcher=true,expungeDeletes=false)
2012-01-08 18:54:53 org.apache.solr.core.Config getNode
...
2012-01-08 18:54:53 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: end_commit_flush
...
2012-01-08 18:54:53 org.apache.solr.handler.dataimport.DocBuilder execute
INFO: Time taken = 0:0:2.985
There are no 'SELECT ... FOR UPDATE / LOCK / etc.' queries in the above logs.
I have set up logging for PostgreSQL - there are no locks. The sessions are even set to:
Jan 11 14:33:07 test postgres[26201]: [3-1] <26201> LOG: execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY
Jan 11 14:33:07 test postgres[26201]: [4-1] <26201> LOG: execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
Why is this happening? This looks like some kind of database lock, but then why, when the import completes in 2 seconds, are queries still waiting for 30 seconds?
The UPDATE is waiting for the SELECT statement to complete before executing. There's not a lot you can do about that, as far as I'm aware. We get around the issue by doing our indexing in batches. Multiple SELECT statements are fine, but UPDATE and DELETE affect the records and won't execute until they can lock the table.
OK, it was hard to find the solution to this problem.
The reason was the underlying platform: disk-write saturation. There were too many small disk writes, which consumed all of the available disk-write throughput.
We now have a new agreement with our service-layer provider.
Test query:
while true ; do echo "UPDATE table_name SET type='P' WHERE type='P'" | psql -U user -d database_name ; sleep 1 ; done
plus making changes via our other application, plus updating the index simultaneously.
[The answer originally included two graphs comparing response times before and after the platform change.]

SELECT 1 from DUAL: MySQL

In looking over my Query log, I see an odd pattern that I don't have an explanation for.
After practically every query, I have "select 1 from DUAL".
I have no idea where this is coming from, and I'm certainly not making the query explicitly.
The log basically looks like this:
10 Query SELECT some normal query
10 Query select 1 from DUAL
10 Query SELECT some normal query
10 Query select 1 from DUAL
10 Query SELECT some normal query
10 Query select 1 from DUAL
10 Query SELECT some normal query
10 Query select 1 from DUAL
10 Query SELECT some normal query
10 Query select 1 from DUAL
...etc...
Has anybody encountered this problem before?
MySQL Version: 5.0.51
Driver: Java 6 app using JDBC. mysql-connector-java-5.1.6-bin.jar
Connection Pool: commons-dbcp 1.2.2
The validationQuery was set to "select 1 from DUAL" (obviously) and apparently the connection pool defaults testOnBorrow and testOnReturn to true when a validation query is non-null.
One further question that this brings up for me is whether or not I actually need to have a validation query, or if I can maybe get a performance boost by disabling it or at least reducing the frequency with which it is used. Unfortunately, the developer who wrote our "database manager" is no longer with us, so I can't ask him to justify it for me. Any input would be appreciated. I'm gonna dig through the API and google for a while and report back if I find anything worthwhile.
EDIT: added some more info
EDIT2: Added info that was asked for in the correct answer for anybody who finds this later
It could be coming from the connection pool your application is using. We use a simple query to test the connection.
I just had a quick look at the source of mysql-connector-j, and it isn't coming from there.
The most likely cause is the connection pool.
Common connection pools:
commons-dbcp has a configuration property validationQuery; combined with testOnBorrow and testOnReturn, this could cause the statements you see.
c3p0 has preferredTestQuery, testConnectionOnCheckin, testConnectionOnCheckout and idleConnectionTestPeriod.
For what it's worth, I tend to configure connection testing on checkout/borrow even if it means a little extra network chatter.
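For example, an illustrative commons-dbcp setup that would produce exactly the logged pattern (the URL and credentials are placeholders):
import org.apache.commons.dbcp.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("com.mysql.jdbc.Driver");
ds.setUrl("jdbc:mysql://localhost:3306/mySchema"); // placeholder
ds.setUsername("user");
ds.setPassword("secret");
ds.setValidationQuery("select 1 from DUAL"); // the query seen in the log
ds.setTestOnBorrow(true); // validate before handing a connection out
ds.setTestOnReturn(true); // validate again when it comes back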
I performed 100 inserts/deletes and tested on both DBCP and c3p0.
DBCP: testOnBorrow=true increases the response time by more than 4x.
c3p0: testConnectionOnCheckout=true increases the response time by more than 3x.
Here are the results :
DBCP – BasicDataSource
Average time for 100 transactions ( insert operation )
testOnBorrow=false :: 219.01 ms
testOnBorrow=true :: 1071.56 ms
Average time for 100 transactions ( delete operation )
testOnBorrow=false :: 223.4 ms
testOnBorrow=true :: 1067.51 ms
c3p0 – ComboPooledDataSource
Average time for 100 transactions ( insert operation )
testConnectionOnCheckout=false :: 220.08 ms
testConnectionOnCheckout=true :: 661.44 ms
Average time for 100 transactions ( delete operation )
testConnectionOnCheckout=false :: 216.52 ms
testConnectionOnCheckout=true :: 648.29 ms
Conclusion: setting testOnBorrow=true in DBCP or testConnectionOnCheckout=true in c3p0 impacts the performance by 3-4x. Is there any other setting that will enhance the performance?
-Durga Prasad
The "dual" table/object name is an Oracle construct, which MySQL supports for compatibility - or to provide a target for queries that dont have a target but people want one to feel all warm and fuzzy. E.g.
select curdate()
can be
select curdate() from dual
Someone could be sniffing you to see if you're running Oracle.
