Hibernate + enabling SQLite Pragmas to gain speed on Windows 7 machine - java

Used software:
hibernate 3.6
sqlite jdbc 3.6.0
java jre 1.6.X
I have a problem with transferring data over a TCP connection (20,000 entries):
create a SQLite database with the help of Hibernate
use hibernateview and Hibernate annotations to create queries
Hibernate properties are also used
storing 20,000 entries with Hibernate and NO SQLite pragmas enabled takes nearly 6 minutes (~330 sec) on Windows 7
storing 20,000 entries without Hibernate and all relevant SQLite pragmas enabled takes about 2 minutes (~109 sec) on Windows 7
tests with Hibernate and SQLite without pragmas on Windows XP and Windows Vista run fast, but on Windows 7 it takes nearly 3 times as long (~330 sec) as on the XP machine
on Windows 7 we want to activate SQLite pragmas to gain a speed boost
The relevant pragmas are:
PRAGMA cache_size = 400000;
PRAGMA synchronous = OFF;
PRAGMA count_changes = OFF;
PRAGMA temp_store = MEMORY;
PRAGMA auto_vacuum = NONE;
Problem: we must use Hibernate (no NHibernate!)
Questions:
How can these pragmas be enabled on the Hibernate SQLite connection, if that is possible at all?
Is it possible to do so using Hibernate?

I was also looking for a way to set another pragma, PRAGMA foreign_keys = ON, for Hibernate connections. I didn't find anything on the subject, and the only solution I came up with was to decorate the SQLite JDBC driver and set the required pragma each time a new connection is retrieved. See the sample code below:
@Override
public Connection connect(String url, Properties info) throws SQLException {
    final Connection connection = originalDriver.connect(url, info);
    initPragmas(connection);
    return connection;
}

private void initPragmas(Connection connection) throws SQLException {
    // Enabling foreign keys
    connection.prepareStatement("PRAGMA foreign_keys = ON;").execute();
}
The full sample is here: https://gist.github.com/52dbc7066787684de634. Then, when initializing the hibernate.connection.driver_class property, just set it to your package.DriverDecorator.
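For reference, here is a rough self-contained sketch of what such a decorator can look like (the class name DriverDecorator and the pragma list are illustrative, and org.sqlite.JDBC assumes the Xerial driver; the gist linked above has the full version):

import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.util.Properties;

public class DriverDecorator implements Driver {
    // The real driver that does the actual work
    private final Driver originalDriver = new org.sqlite.JDBC();

    static {
        try {
            DriverManager.registerDriver(new DriverDecorator());
        } catch (SQLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        Connection connection = originalDriver.connect(url, info);
        if (connection != null) {   // connect() returns null for foreign URLs
            initPragmas(connection);
        }
        return connection;
    }

    private void initPragmas(Connection connection) throws SQLException {
        // Apply every pragma the application needs, once per new connection
        connection.prepareStatement("PRAGMA foreign_keys = ON;").execute();
        connection.prepareStatement("PRAGMA synchronous = OFF;").execute();
    }

    @Override
    public boolean acceptsURL(String url) throws SQLException {
        return originalDriver.acceptsURL(url);
    }

    @Override
    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info)
            throws SQLException {
        return originalDriver.getPropertyInfo(url, info);
    }

    @Override
    public int getMajorVersion() { return originalDriver.getMajorVersion(); }

    @Override
    public int getMinorVersion() { return originalDriver.getMinorVersion(); }

    @Override
    public boolean jdbcCompliant() { return originalDriver.jdbcCompliant(); }
}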

Inserts one-by-one can be very slow; you may want to consider batching. Please see my answer to this other post.
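For illustration, a minimal sketch of batched saves in Hibernate 3.6 (the batch size of 50, the saveInBatches helper, and the Entry entity are assumptions; hibernate.jdbc.batch_size should be set to the same value in the Hibernate properties):

import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

// Sketch: assumes hibernate.jdbc.batch_size=50 and a mapped Entry entity
static void saveInBatches(SessionFactory sessionFactory, List<Entry> entries) {
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    for (int i = 0; i < entries.size(); i++) {
        session.save(entries.get(i));
        if (i % 50 == 0) {    // same value as hibernate.jdbc.batch_size
            session.flush();  // execute the current batch of inserts
            session.clear();  // detach saved objects to cap memory use
        }
    }
    tx.commit();
    session.close();
}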

For the PRAGMA foreign_keys = ON equivalent:
hibernate.connection.foreign_keys=true
or
<property name="connection.foreign_keys">true</property>
depending on your configuration strategy. Hibernate passes hibernate.connection.* properties (minus the prefix) through to the JDBC driver as connection properties.
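Relatedly, newer Xerial sqlite-jdbc builds expose org.sqlite.SQLiteConfig, which turns pragma settings into exactly such connection properties; a sketch for a plain (non-Hibernate) JDBC connection, assuming a driver version that ships this class (the old 3.6.0 driver may not):

import java.sql.Connection;
import java.sql.DriverManager;
import org.sqlite.SQLiteConfig;

SQLiteConfig config = new SQLiteConfig();
config.enforceForeignKeys(true);   // emitted as PRAGMA foreign_keys = ON
Connection conn = DriverManager.getConnection(
        "jdbc:sqlite:sample.db", config.toProperties());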

Related

JDBC oracle connection error: ORA-12519, TNS:no appropriate service handler found

In my project I am using JDBC to connect to an Oracle 12c instance in a multi-threaded environment. Earlier we had an Oracle 9i instance, and we were using ojdbc6 and it was working perfectly, but we recently got this Oracle 12c instance, which gave the following error at the JDBC connection point.
java.sql.SQLException: Listener refused the connection with the following error:
ORA-12519, TNS:no appropriate service handler found
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:774)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:688)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:39)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:691)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
So I thought it might be because of the older driver version we had, so I switched to ojdbc8, which I found online to be compatible with 12c, but the above error is still there. My JDK version is 1.8.
I'd appreciate any input on resolving this issue. Thanks in advance.
Sid, I think the bug occurred due to an Oracle initialization parameter setting problem.
Please run the command line below:
SQL> SHOW PARAMETER SESSION
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
java_max_sessionspace_size           integer     0
java_soft_sessionspace_limit         integer     0
license_max_sessions                 integer     0
license_sessions_warning             integer     0
session_cached_cursors               integer     50
session_max_open_files               integer     10
sessions                             integer     600
shared_server_sessions               integer
and the other command line:
SQL> SHOW PARAMETER PROCESS
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     0
db_writer_processes                  integer     2
gcs_server_processes                 integer     2
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     150
According to the Oracle documentation, the SESSIONS and TRANSACTIONS initialization parameters should be derived from PROCESSES; by the default setting, SESSIONS = (1.1 * PROCESSES) + 5.
SESSIONS is currently set to 600, while PROCESSES has not been changed and is still 150. This allows far more user sessions to connect to Oracle than it has background processes to support.
The direct solution is to set PROCESSES to an appropriate value.
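For example, a sketch (600 is an illustrative value; PROCESSES is a static parameter, so the change is written to the spfile and takes effect only after an instance restart):
SQL> ALTER SYSTEM SET PROCESSES=600 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP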

Cassandra NoHostAvailableException: All host(s) tried for query failed in Production

We have 10 Cassandra nodes in production running Cassandra-2.1.8. We recently upgraded to the 2.1.8 version. Previously we were using only 3 nodes running Cassandra-2.1.2. First we upgraded the initial 3 nodes from 2.1.2 to 2.1.8 (following the procedure described in Upgrading Cassandra). Then we added 7 more nodes running Cassandra-2.1.8 to the cluster. Then we started our client programs. For the first few hours everything worked fine, but after a few hours we saw some errors in the client program logs like
Thread-0 [29/07/15 17:41:23.356] ERROR com.cleartrail.entityprofiling.engine.InterpretationWriter - Error:com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:259)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:175)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.cleartrail.entityprofiling.engine.InterpretationWriter.WriteInterpretation(InterpretationWriter.java:430)
at com.cleartrail.entityprofiling.engine.Profiler.buildProfile(Profiler.java:1042)
at com.cleartrail.messageconsumer.consumer.KafkaConsumer.run(KafkaConsumer.java:336)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:102)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Now, I double-checked the firewall (as suggested in a few posts), the ports, and the timeouts in the client as well as on the nodes, and they are all correct.
I am also not closing the connection anywhere in between. I am using batch queries with a batch size of 1000, and the queries are update queries that update counters in my table with three columns
entity, twfwv, cvalue
where entity and twfwv are text columns forming the primary key and cvalue is a counter column.
I even restarted all my nodes (because this trick helped me in my dev environment when I faced the same exception), but it's not helping. Please suggest what the probable problem could be here.
My issue was resolved by checking the errors collection of NoHostAvailableException, as advised by Olivier Michallat in the comments. For me it was the protocol version in the cluster configuration: mine was null, and setting it to 3 fixed the problem.
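As a sketch, pinning the version with the DataStax Java driver looks like the following (the contact point and port are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolVersion;

// Pin the native protocol version instead of leaving it null (auto-negotiated)
Cluster cluster = Cluster.builder()
        .addContactPoint("172.50.33.161")        // placeholder contact point
        .withPort(9041)
        .withProtocolVersion(ProtocolVersion.V3) // matches the fix described above
        .build();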
My issue was resolved by removing/using a property to set or unset the custom load balancing TokenAwarePolicy my connection was using, and relying on the default.
Specifically, I was trying to get a local spring boot app talking to a single dockerized Cassandra instance.
Cluster.Builder builder = Cluster.builder()
        .addContactPoints(cassandraProperties.getHosts())
        .withPort(cassandraProperties.getPort())
        .withProtocolVersion(ProtocolVersion.V4)
        .withRetryPolicy(new LoggingRetryPolicy(DefaultRetryPolicy.INSTANCE))
        .withCredentials(cassandraProperties.getUsername(), cassandraProperties.getPassword())
        .withCodecRegistry(codecRegistry);

if (loadBalanced) {
    builder.withLoadBalancingPolicy(
            new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().withLocalDc(localDc).build()));
}

Why does my Oracle PreparedStatement sometimes never return despite no contention?

Update 2/5/2014:
Problem was resolved by rebooting the Linux server hosting the Oracle database. The server had not been booted since May of last year even though Oracle itself had been restarted on a regular basis.
I have a couple of Java 1.6 programs that use an Oracle 11.2 database and the 11.2.0.3.0 ojdbc6.jar Oracle driver. At seemingly random points it will apparently hang, never returning control from PreparedStatement.executeUpdate().
Frequently my program binds data to a BLOB column and in this case (again at random times) it may hang at a call to OutputStream.flush(), where my OutputStream is a wrapper for the OracleBlobOutputStream.
In both cases the thread is stuck waiting forever trying to read a socket for an Oracle response before it will continue.
Monitoring sessions in the Oracle database for my JDBC Thin Client with SQL Developer, I can see that the session is waiting, as shown by Seconds In Wait. In the particular case of flushing a blob, the Active SQL tab shows No Text Available. In the case of hanging at PreparedStatement.executeUpdate(), that tab will show the full text of my insert statement. In either case the Waits tab will show "SQL*Net more data from client", which to me indicates that the Oracle server is waiting for more data to complete the client request.
So I can see that the Oracle server seems to be waiting for the client to finish his request. The client seems to have completed the request and is waiting for the server to return a response.
Could network errors be the cause of this? I would think the client and server would be protected by the retry logic of a TCP/IP stream. I frequently use this application over a VPN connection on the internet (against test instances of the database) where I'd expect more errors but I never see a problem in that context.
I've seen fixes for a getNextPacket() issue in the Oracle driver but as shown above we're using the latest driver and should have those.
The Contention tab never indicates anything, as I would expect. From everything I can tell competing transactions are not the issue here. And the program will still fail at night, when there's hardly any other activity than my program.
This code works flawlessly in my test environment. It also works in a test environment at my client's site. But in the production environment it fails. It may insert 50-100K rows of data before failing.
In some cases it does not hang. It throws inconsistent exceptions such as one about how you can only bind a LONG value to a LONG column. This too I never see in testing on four different databases and the problem moves around from one table to another with no discernible pattern.
To the best of my knowledge dynamic SQL will work and the problem is specific to prepared statements. But I can't be certain of that.
This production database is bigger than any of the test instances. It is sized to handle about two terabytes of data and is probably 1/3 on the way to that goal. All of the tablespaces have plenty of space and the rollback segment was recently enlarged by a factor of 3 and is very underutilized.
I'm not aware of a hang in auto-commit mode and it seems to hang only after a transaction accumulates a good bit of data. But with the problem so random I can't conclusively say that.
This program worked for months without problem and then this started a couple weeks ago without any change to the software whatsoever. The client's database has been steadily getting bigger, so that's a change. And I hear the client installed some network monitoring software about that time but I don't have any specifics on that.
Sometimes JDBC batching is in play, other times not and it still fails.
I'm pulling my hair out over this one, something I have so little of to work with!
Any insight from my friends at stackoverflow?
Here is a call stack where I waited to see Seconds In Wait at the server and then paused my client program in the Eclipse debugger. Everything from OracleBlobOutputStream on up is ojdbc6.jar code.
Thread [GraphicsTranslator:1] (Suspended)
owns: T4CConnection (id=26)
owns: Input (id=27)
SocketInputStream.socketRead0(FileDescriptor, byte[], int, int, int) line: not available [native method]
SocketInputStream.read(byte[], int, int) line: 129
DataPacket(Packet).receive() line: 293
DataPacket.receive() line: 92
NetInputStream.getNextPacket() line: 174
NetInputStream.read(byte[], int, int) line: 119
NetInputStream.read(byte[]) line: 94
NetInputStream.read() line: 79
T4CSocketInputStreamWrapper.readNextPacket() line: 122
T4CSocketInputStreamWrapper.read() line: 78
T4CMAREngine.unmarshalUB1() line: 1040
T4CMAREngine.unmarshalSB1() line: 1016
T4C8TTIBlob(T4C8TTILob).receiveReply() line: 847
T4C8TTIBlob(T4C8TTILob).write(byte[], long, byte[], long, long) line: 243
T4CConnection.putBytes(BLOB, long, byte[], int, int) line: 2078
BLOB.setBytes(long, byte[], int, int) line: 698
OracleBlobOutputStream.flushBuffer() line: 215
OracleBlobOutputStream.flush() line: 167
ISOToDBWriter.bindElementBuffer(ParameterBinding, SpatialObject, boolean) line: 519
ISOToDBWriter.writePrimitive(SpatialObject, boolean) line: 1720
ISOToDBWriter.writeDgnElement(SpatialObject, Properties, String, boolean) line: 1427
ISOToDBWriter.write(SpatialObject) line: 1405
ISOHandler.inputObject(InputEvent) line: 864
InputEventMulticaster.inputObject(InputEvent) line: 87
Input(Input).notifyInput(Object, Object) line: 198
Input(Input).notifyInput(Object) line: 157
Input.readElement(int) line: 468
Input.readElement() line: 403
Input.run() line: 741
GraphicsTranslator.processAllDgnFiles() line: 1190
GraphicsTranslator.run() line: 1364
Thread.run() line: 662
Update 2/3/2014:
I've been able to do more testing at the client's site. Apparently the problem is caused by network errors. I wrote a small test program with straight JDBC calls, and it fails too. It only fails against this specific database instance. The test program binds increasingly long strings into a prepared statement that it keeps executing, and ultimately rolls back its transaction (if it gets that far). Rather than hang, the test program sometimes randomly throws an exception as follows:
java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:208)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1046)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1336)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3613)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3694)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1354)
at com.byers.test.outage.TestPreparedInsert.insertThenRollback(TestPreparedInsert.java:81)
at com.byers.test.outage.TestPreparedInsert.runTest(TestPreparedInsert.java:54)
at com.byers.test.outage.TestPreparedInsert.main(TestPreparedInsert.java:28)
The test program inserts thousands of rows and runs at a pretty good clip till the insert strings get longer than about 1,300 bytes. Then it gets increasingly slow and by the time the strings are around 1,500 bytes a single insert will take 30 seconds or more. I suspect the problems start when the request exceeds a packet in size.
I ran WireShark and captured all IP packets going between me and the Oracle server. Then I see lots of TCP ACKed unseen segment, TCP Previous Segment not captured, TCP Dup ACK 3#1, TCP Dup ACK 3#2, etc. I'm no network expert but I'm smart enough to say "this is not good".
Unlike my production system, my test program does not actually cause Oracle to "hang" so far. The Oracle session does not show Seconds In Wait and if I wait long enough the program continues (even though my patience with that has been limited). I've also not seen the above exception thrown unless I run more than one instance of the program at the same time, although that too may be a matter of not waiting long enough?
Invocations of the below code such as:
insertThenRollback(con, 50, 2000, 0);
are pretty good at producing the errors. Interestingly, starting out with big insert strings like 3000 bytes does not lead to errors until the program recycles at 4000 and counts back up into the 1300+ range.
private static void insertThenRollback(Connection con, int delayMs, int rowCount, int startCharCount)
    throws SQLException, InterruptedException
{
    System.out.println("Batch " + (++batchCount) + ". Insert " + rowCount + " rows with "
        + delayMs + "ms. delay between, then rollback");
    String sql = "Insert Into config (name,value) values(?,?)";
    PreparedStatement stmt = con.prepareStatement(sql);
    String insString = "";
    for (int c = 0; c < startCharCount; ++c)
    {
        int randomChar = (int) (Math.random() * DATA_PALLET.length());
        insString += DATA_PALLET.charAt(randomChar);
    }
    try
    {
        for (int i = 0; i < rowCount; ++i)
        {
            if (insString.length() > MAX_INSERT_LEN - 1)
                insString = "";
            int randomChar = (int) (Math.random() * DATA_PALLET.length());
            insString += DATA_PALLET.charAt(randomChar);
            String randomName = "randomName--" + UUID.randomUUID();
            System.out.println("Row " + (i + 1) + "->" + randomName + '/' + insString.length()
                + " chars");
            stmt.setString(1, randomName);
            stmt.setString(2, insString);
            stmt.executeUpdate();
            Thread.sleep(delayMs);
        }
    }
    finally
    {
        System.out.println("Rollback");
        con.rollback();
        stmt.close();
    }
}
This seems to put me on solid footing to tell the client that the problem is with their network. Would you all agree? Is it not also true that the client should be able to monitor their network somehow for these kinds of errors? It seems almost silly to me that we would invest hundreds of hours of collective effort chasing a problem like this just to find out it is hardware or some kind of invasive software. Are there ways to detect a high degree of these kinds of network errors by monitoring of some kind?
Recently we had the same behavior in our production application; after restarting the application, database persistence came back to work.
Looking for a clue about what could have happened, I ended up at this article (Understanding JDBC Internals & Timeout Configuration), which explains in detail how JDBC works and its different types of timeouts.
We will try to configure a timeout on JDBC (using oracle.net.CONNECT_TIMEOUT /
oracle.jdbc.ReadTimeout) to try to avoid this issue in the future.
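A sketch of wiring those two properties into a plain JDBC connection (the URL, credentials, and millisecond values are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

Properties props = new Properties();
props.setProperty("user", "appuser");                     // placeholder
props.setProperty("password", "secret");                  // placeholder
props.setProperty("oracle.net.CONNECT_TIMEOUT", "10000"); // TCP connect limit, ms
props.setProperty("oracle.jdbc.ReadTimeout", "60000");    // socket read limit, ms

Connection con = DriverManager.getConnection(
        "jdbc:oracle:thin:@//dbhost:1521/ORCL", props);   // placeholder URL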

Apache Solr - waiting on sql queries when delta-import command is executed

I'm using PostgreSQL 8.2.9, Solr 3.1, Tomcat 5.5
I have following problem:
When I execute delta-import - /dataimport?command=delta-import - any update queries to the database stop responding for about 30 seconds.
I can easily repeat this behaviour (using psql or hibernate):
PSQL:
Execute delta-import
Immediately in psql - run SQL query: 'UPDATE table SET ... WHERE id = 1;' several times
The second/third/... time - I must wait ~30 seconds for the query to return
Hibernate:
In logs - hibernate waits ~30 seconds on method 'doExecuteBatch(...)' after setting query parameters
No other queries are executed when I'm testing this. On the other hand, when I'm executing other commands (like full-import, etc.), everything works perfectly fine.
In Solr's dataconfig.xml:
I have the readOnly attribute set to true on the PostgreSQL dataSource.
The deltaImportQuery, deltaQuery, ... attributes on entity tags don't lock the database (simple SELECTs)
Web app (using hibernate) logs:
2012-01-08 18:54:52,403 DEBUG my.package.Class1... Executing method: save
2012-01-08 18:55:26,707 DEBUG my.package.Class1... Executing method: list
Solr logs:
INFO: [] webapp=/search path=/dataimport params={debug=query&command=delta-import&commit=true} status=0 QTime=1
2012-01-08 18:54:50 org.apache.solr.handler.dataimport.DataImporter doDeltaImport
INFO: Starting Delta Import
...
2012-01-08 18:54:50 org.apache.solr.handler.dataimport.JdbcDataSource$1 call
INFO: Time taken for getConnection(): 4
...
FINE: Executing SQL: select ... where last_edit_date > '2012-01-08 18:51:43'
2012-01-08 18:54:50 org.apache.solr.core.Config getNode
...
FINEST: Time taken for sql :4
...
INFO: Import completed successfully
2012-01-08 18:54:50 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start commit(optimize=true,waitFlush=false,waitSearcher=true,expungeDeletes=false)
2012-01-08 18:54:53 org.apache.solr.core.Config getNode
...
2012-01-08 18:54:53 org.apache.solr.update.DirectUpdateHandler2 commit
INFO: end_commit_flush
...
2012-01-08 18:54:53 org.apache.solr.handler.dataimport.DocBuilder execute
INFO: Time taken = 0:0:2.985
There are no 'SELECT ... FOR UPDATE / LOCK / etc.' queries in the above logs.
I have enabled logging for PostgreSQL; there are no locks. The sessions are even set to:
Jan 11 14:33:07 test postgres[26201]: [3-1] <26201> LOG: execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY
Jan 11 14:33:07 test postgres[26201]: [4-1] <26201> LOG: execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
Why is this happening? This looks like some kind of database lock, but then why, when the import completes in about 2 seconds, are queries still waiting for about 30 seconds?
The UPDATE is waiting for the SELECT statement to complete before executing. There's not a lot you can do about that that I'm aware of. We get around the issue by doing our indexing in batches. Multiple SELECT statements are fine, but UPDATE and DELETE affect the records and won't execute until they can lock the table.
OK. It was hard to find the solution to this problem.
The reason was the underlying platform: disk-write saturation. There were too many small disk writes, which consumed all the available disk-write capacity.
Now we have a new agreement with our service-layer provider.
Test query:
while true ; do echo "UPDATE table_name SET type='P' WHERE type='P'" | psql -U user -d database_name ; sleep 1 ; done
plus making changes via our other application, plus updating the index simultaneously.
This was before the platform change, and here is how it works now (the before/after charts from the original post are not reproduced here).

SELECT 1 from DUAL: MySQL

In looking over my Query log, I see an odd pattern that I don't have an explanation for.
After practically every query, I have "select 1 from DUAL".
I have no idea where this is coming from, and I'm certainly not making the query explicitly.
The log basically looks like this:
10 Query SELECT some normal query
10 Query select 1 from DUAL
10 Query SELECT some normal query
10 Query select 1 from DUAL
10 Query SELECT some normal query
10 Query select 1 from DUAL
10 Query SELECT some normal query
10 Query select 1 from DUAL
10 Query SELECT some normal query
10 Query select 1 from DUAL
...etc...
Has anybody encountered this problem before?
MySQL Version: 5.0.51
Driver: Java 6 app using JDBC. mysql-connector-java-5.1.6-bin.jar
Connection Pool: commons-dbcp 1.2.2
The validationQuery was set to "select 1 from DUAL" (obviously) and apparently the connection pool defaults testOnBorrow and testOnReturn to true when a validation query is non-null.
One further question that this brings up for me is whether or not I actually need to have a validation query, or if I can maybe get a performance boost by disabling it or at least reducing the frequency with which it is used. Unfortunately, the developer who wrote our "database manager" is no longer with us, so I can't ask him to justify it for me. Any input would be appreciated. I'm gonna dig through the API and google for a while and report back if I find anything worthwhile.
EDIT: added some more info
EDIT2: Added info that was asked for in the correct answer for anybody who finds this later
It could be coming from the connection pool your application is using. We use a simple query to test the connection.
Just had a quick look at the source of mysql-connector-j, and it isn't coming from there.
The most likely cause is the connection pool.
Common connection pools:
commons-dbcp has a configuration property validationQuery; this, combined with testOnBorrow and testOnReturn, could cause the statements you see.
c3p0 has preferredTestQuery, testConnectionOnCheckin, testConnectionOnCheckout and idleConnectionTestPeriod
For what it's worth, I tend to configure connection testing on checkout/borrow even if it means a little extra network chatter.
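For illustration, a commons-dbcp 1.2.2 configuration along those lines might look like this (the URL and credentials are placeholders; the commented-out lines show the cheaper idle-validation alternative):

import org.apache.commons.dbcp.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("com.mysql.jdbc.Driver");
ds.setUrl("jdbc:mysql://localhost:3306/mydb");   // placeholder
ds.setUsername("user");                          // placeholder
ds.setPassword("secret");                        // placeholder

// A non-null validationQuery makes the pool validate connections; with
// testOnBorrow/testOnReturn enabled it runs around nearly every statement,
// which is exactly the "select 1 from DUAL" pattern seen in the query log.
ds.setValidationQuery("select 1 from DUAL");
ds.setTestOnBorrow(true);
ds.setTestOnReturn(false);

// Cheaper alternative: validate idle connections in the background instead.
// ds.setTestOnBorrow(false);
// ds.setTestWhileIdle(true);
// ds.setTimeBetweenEvictionRunsMillis(30000L);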
I performed 100 inserts/deletes and tested on both DBCP and C3P0.
DBCP :: testOnBorrow=true increases the response time more than 4-fold.
C3P0 :: testConnectionOnCheckout=true increases the response time more than 3-fold.
Here are the results:
DBCP – BasicDataSource
Average time for 100 transactions (insert operation)
testOnBorrow=false :: 219.01 ms
testOnBorrow=true :: 1071.56 ms
Average time for 100 transactions (delete operation)
testOnBorrow=false :: 223.4 ms
testOnBorrow=true :: 1067.51 ms
C3P0 – ComboPooledDataSource
Average time for 100 transactions (insert operation)
testConnectionOnCheckout=false :: 220.08 ms
testConnectionOnCheckout=true :: 661.44 ms
Average time for 100 transactions (delete operation)
testConnectionOnCheckout=false :: 216.52 ms
testConnectionOnCheckout=true :: 648.29 ms
Conclusion: setting testOnBorrow=true in DBCP or testConnectionOnCheckout=true in C3P0 degrades performance 3-4-fold. Is there any other setting that will enhance the performance?
-Durga Prasad
The "dual" table/object name is an Oracle construct, which MySQL supports for compatibility - or to provide a target for queries that dont have a target but people want one to feel all warm and fuzzy. E.g.
select curdate()
can be
select curdate() from dual
Someone could be sniffing you to see if you're running Oracle.
