I was fetching records from a .csv file using Smooks. The file is around 30 MB and contains about 3 lakhs (300,000) records, all of which are read into a List.
I then divide the List into sublists using subList, with a partition size of 2000.
I want to flush each batch of 2000 records in a single transaction, but my code does not allow that.
I am using Seam 2.1.2, JPA with Hibernate, EntityManager, and JTA transactions.
components.xml
<core:init debug="false" jndi-pattern="#jndiPattern#" />
<core:manager concurrent-request-timeout="2000"
    conversation-id-parameter="cid" conversation-timeout="120000"
    parent-conversation-id-parameter="pid" />
<web:hot-deploy-filter url-pattern="/*.mobee" />
<persistence:entity-manager-factory
    installed="#seamBootstrapsPu#" name="entityManagerFactory"
    persistence-unit-name="mobeeadmin" />
<persistence:managed-persistence-context
    auto-create="true" entity-manager-factory="#seamEmfRef#" name="entityManager"
    persistence-unit-jndi-name="#puJndiName#" />
<async:quartz-dispatcher />
<security:identity authenticate-method="#{authenticator.authenticate}" />
<web:rewrite-filter view-mapping="*.mobee" />
<web:multipart-filter create-temp-files="true" max-request-size="28672000" url-pattern="*.seam" />
<event type="org.jboss.seam.security.notLoggedIn">
    <action execute="#{redirect.captureCurrentView}" />
</event>
<event type="org.jboss.seam.security.loginSuccessful">
    <action execute="#{redirect.returnToCapturedView}" />
</event>
<mail:mail-session host="localhost" port="25" />
Java Code:
private List<DoTempCustomers> doTempCustomers;
int partitionSize = 2000;
for (int i = 0; i < doTempCustomers.size(); i += partitionSize) {
    String message = tempCustomerMigration(
            doTempCustomers.subList(i, Math.min(i + partitionSize, doTempCustomers.size())));
}
@Begin(join=true)
public String tempCustomerMigration(List<DoTempCustomers> list) {
    PersistenceProvider.instance().setFlushModeManual(getEntityManager());
    TempCustomers temp = null;
    for (DoTempCustomers tempCustomers : list) {
        try {
            temp = new TempCustomers();
            BeanUtils.copyProperties(temp, tempCustomers);
            getEntityManager().persist(temp);
            getEntityManager().flush();
        } catch (Exception e) {
            // exception handling truncated in the original post
        }
    }
    // rest of the method truncated in the original post
}
I have tried many times, but I never found a solution for how to flush the records to the DB in each transaction before sending the server response to the GUI.
Otherwise, after some processing time, I get this exception:
2012-12-06 17:09:56,380 WARN [com.arjuna.ats.arjuna.logging.arjLoggerI18N] [com.arjuna.ats.arjuna.coordinator.BasicAction_58] - Abort of action id -53eff40e:f2db:50c0a356:7d invoked while multiple threads active within it.
2012-12-06 17:09:56,380 WARN [com.arjuna.ats.arjuna.logging.arjLoggerI18N] [com.arjuna.ats.arjuna.coordinator.CheckedAction_2] - CheckedAction::check - atomic action -53eff40e:f2db:50c0a356:7d aborting with 1 threads active!
2012-12-06 17:09:56,522 DEBUG [org.jboss.util.NestedThrowable] org.jboss.util.NestedThrowable.parentTraceEnabled=true
2012-12-06 17:09:56,522 DEBUG [org.jboss.util.NestedThrowable] org.jboss.util.NestedThrowable.nestedTraceEnabled=false
2012-12-06 17:09:56,522 DEBUG [org.jboss.util.NestedThrowable] org.jboss.util.NestedThrowable.detectDuplicateNesting=true
2012-12-06 17:09:56,524 INFO [STDOUT] [Mobee]- WARN 2012-12-06 17:09:56,524 [] JDBCExceptionReporter - SQL Error: 0, SQLState: null
2012-12-06 17:09:56,525 INFO [STDOUT] [Mobee]-ERROR 2012-12-06 17:09:56,524 [] JDBCExceptionReporter - Transaction is not active: tx=TransactionImple < ac, BasicAction: -53eff40e:f2db:50c0a356:7d status: ActionStatus.ABORTING >; - nested throwable: (javax.resource.ResourceException: Transaction is not active: tx=TransactionImple < ac, BasicAction: -53eff40e:f2db:50c0a356:7d status: ActionStatus.ABORTING >)
2012-12-06 17:09:56,545 INFO [STDOUT] [Mobee]-ERROR 2012-12-06 17:09:56,527 [] AbstractFlushingEventListener - Could not synchronize database state with session
org.hibernate.exception.GenericJDBCException: Cannot open connection
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:103)
For this exception, the solution I found on Google was to increase the TransactionTimeout in jboss-service.xml, but even increasing that timeout parameter did not help.
You could create a new Seam component that is an EJB session bean and use a UserTransaction to perform your updates/inserts in batches; UserTransaction also lets you specify the transaction timeout. You would inject this new component into the component you're using above. Here is an example - see the 5th post. Otherwise, if you don't want to use an EJB, I think your approach would require a nested Seam conversation, since it appears that you are using a Seam-managed persistence context that is scoped to a single conversation.
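A minimal sketch of the EJB approach, reusing the entity and DTO names from the question (the component name batchMigrator, the 120-second timeout, and the business-interface wiring are assumptions, not the linked example's code):

import java.util.List;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.UserTransaction;

import org.apache.commons.beanutils.BeanUtils;
import org.jboss.seam.annotations.Name;

@Stateless
@Name("batchMigrator") // visible to Seam for injection into other components
@TransactionManagement(TransactionManagementType.BEAN)
public class BatchMigrator {

    @PersistenceContext
    private EntityManager entityManager;

    @Resource
    private UserTransaction utx;

    public void migrateBatch(List<DoTempCustomers> batch) throws Exception {
        utx.setTransactionTimeout(120); // seconds; size this to your batch
        utx.begin();
        try {
            for (DoTempCustomers dto : batch) {
                TempCustomers temp = new TempCustomers();
                BeanUtils.copyProperties(temp, dto);
                entityManager.persist(temp);
            }
            entityManager.flush(); // push this batch's inserts to the DB
            entityManager.clear(); // keep the persistence context small
            utx.commit();          // one committed transaction per 2000-row batch
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}

Calling migrateBatch once per sublist from the partition loop above gives you one committed transaction per 2000 records, and a failure only rolls back the current batch.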
Related
I am getting a lot of SQL warnings in the logs with Spring Boot and Sybase.
o.h.engine.jdbc.spi.SqlExceptionHelper : [] SQL Warning Code: 0, SQLState: 010SK
o.h.engine.jdbc.spi.SqlExceptionHelper : [] 010SK: Database cannot set connection option SET_READONLY_TRUE.
o.h.engine.jdbc.spi.SqlExceptionHelper : [] 010SK: Database cannot set connection option SET_READONLY_FALSE.
Could anyone explain the reason behind this?
Solution 1:
java.sql.Connection has a setReadOnly(boolean) method that is meant to notify the database of the type of result set being requested so that it can perform any optimizations. However, Sybase ASE doesn't require any optimizations, so setReadOnly() produces a SQLWarning.
In order to suppress the message you'll need to update the spt_mda table in the MASTER database.
update spt_mda set querytype = 4, query = '0'
where mdinfo = 'SET_READONLY_FALSE'
and
update spt_mda set querytype = 4, query = '0'
where mdinfo = 'SET_READONLY_TRUE'
These two entries (they are the only ones) are set to a querytype of 3 by default, which means "not supported", and that explains the SQLWarning.
Changing them to querytype 4 (meaning boolean values) with a query of '0' basically causes the JDBC driver to return false without the warning.
Solution 2:
You can turn logging on or off for specific Hibernate logging modules. Here are the different configurations:
# Hibernate logging
# Log everything (a lot of information, but very useful for troubleshooting)
log4j.logger.org.hibernate=FATAL
# Log all SQL DML statements as they are executed
log4j.logger.org.hibernate.SQL=INHERITED
# Log all JDBC parameters
log4j.logger.org.hibernate.type=INHERITED
# Log all SQL DDL statements as they are executed
log4j.logger.org.hibernate.tool.hbm2ddl=INHERITED
# Log the state of all entities (max 20 entities) associated with the session at flush time
log4j.logger.org.hibernate.pretty=INHERITED
# Log all second-level cache activity
log4j.logger.org.hibernate.cache=INHERITED
# Log all OSCache activity - used by Hibernate
log4j.logger.com.opensymphony.oscache=INHERITED
# Log transaction related activity
log4j.logger.org.hibernate.transaction=INHERITED
# Log all JDBC resource acquisition
log4j.logger.org.hibernate.jdbc=INHERITED
# Log all JAAS authorization requests
log4j.logger.org.hibernate.secure=INHERITED
Possible values:
OFF
FATAL
ERROR
WARN
INFO
DEBUG
TRACE
ALL
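In this particular case, the warnings are emitted by org.hibernate.engine.jdbc.spi.SqlExceptionHelper (visible in the log lines above), so if the goal is only to silence them, raising that one logger above WARN should be enough. A minimal example, assuming log4j is the backend in use:

# Hide SQLWarning output from Hibernate's JDBC warning reporter
log4j.logger.org.hibernate.engine.jdbc.spi.SqlExceptionHelper=ERROR

With Spring Boot's default logging setup, the equivalent application.properties entry would be logging.level.org.hibernate.engine.jdbc.spi.SqlExceptionHelper=ERROR.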
I am using MyBatis 3.4.6 along with org.xerial:sqlite-jdbc 3.28.0. Below is my configuration for using an in-memory database with shared-cache mode enabled:
db.driver=org.sqlite.JDBC
db.url=jdbc:sqlite:file::memory:?cache=shared
The db.url is correct according to this test class.
And I managed to set up the correct transaction isolation level with the MyBatis configuration below, though there is a typo in the property read_uncommitted according to this issue, which was reported by me as well:
<environment id="${db.env}">
<transactionManager type="jdbc"/>
<dataSource type="POOLED">
<property name="driver" value="${db.driver}" />
<property name="url" value="${db.url}"/>
<property name="username" value="${db.username}" />
<property name="password" value="${db.password}" />
<property name="defaultTransactionIsolationLevel" value="1" />
<property name="driver.synchronous" value="OFF" />
<property name="driver.transaction_mode" value="IMMEDIATE"/>
<property name="driver.foreign_keys" value="ON"/>
</dataSource>
</environment>
This line of configuration
<property name="defaultTransactionIsolationLevel" value="1" />
does the trick of setting the correct value of PRAGMA read_uncommitted.
I am pretty sure of this, since I debugged the underlying code that initializes the connection and checked that the value has been set correctly.
However, with the above settings, my program still encounters SQLITE_LOCKED_SHAREDCACHE intermittently while reading, which I think shouldn't happen according to the description highlighted in the red rectangle of the screenshot below. I want to know the reason and how to resolve it, even though the probability of this error occurring is low.
Any ideas would be appreciated!
The debug configuration is below:
===CONFINGURATION==============================================
jdbcDriver org.sqlite.JDBC
jdbcUrl jdbc:sqlite:file::memory:?cache=shared
jdbcUsername
jdbcPassword ************
poolMaxActiveConnections 10
poolMaxIdleConnections 5
poolMaxCheckoutTime 20000
poolTimeToWait 20000
poolPingEnabled false
poolPingQuery NO PING QUERY SET
poolPingConnectionsNotUsedFor 0
---STATUS-----------------------------------------------------
activeConnections 5
idleConnections 5
requestCount 27
averageRequestTime 7941
averageCheckoutTime 4437
claimedOverdue 0
averageOverdueCheckoutTime 0
hadToWait 0
averageWaitTime 0
badConnectionCount 0
===============================================================
Attachments:
The exception is below
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
### The error may exist in mapper/MsgRecordDO-sqlmap-mappering.xml
### The error may involve com.super.mock.platform.agent.dal.daointerface.MsgRecordDAO.getRecord
### The error occurred while executing a query
### Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
I finally resolved this issue myself, and I am sharing the workaround below in case someone else encounters a similar issue in the future.
First of all, we were able to get the complete call stack of the exception, shown below.
Going through the source code indicated by the call stack, we made the findings below.
SQLite has auto-commit enabled by default, which contradicts MyBatis, which disables auto-commit by default since we're using SqlSessionManager.
MyBatis overrides the auto-commit property during connection initialization using the method setDesiredAutoCommit, which finally invokes SQLiteConnection#setAutoCommit.
SQLiteConnection#setAutoCommit issues a begin immediate operation against the database, which is actually exclusive; check the source code screenshots below for a detailed explanation. This is because we configured our transaction mode to be IMMEDIATE:
<property name="driver.transaction_mode" value="IMMEDIATE"/>
So an apparent solution is to change the transaction mode to DEFERRED. Furthermore, we also considered making the auto-commit setting the same between MyBatis and SQLite, but did not adopt that approach, since there is no way to set the auto-commit of SQLiteConnection during the initialization stage; there would always be a switch (from true to false or vice versa), and that switch would probably cause the above error if the transaction mode is not set properly.
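Concretely, the workaround is the one-line change below to the dataSource block posted earlier (all other properties stay as they were):

<property name="driver.transaction_mode" value="DEFERRED"/>

With DEFERRED, the begin issued by SQLiteConnection#setAutoCommit no longer acquires a write lock up front, so the auto-commit switching no longer contends with concurrent readers on the shared cache.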
I have the following scenario for saving my object:
// Started transaction
User objUser = getUser("123");// get user from DB
objUser.set(...)
.
.
UserAddress objUserAddress = objUser.getUserAddress();
objUserAddress.set(..);
.
.
hibernateSession.flush(); //#Line 1
hibernateSession.saveOrUpdate(objUserAddress); //#Line 2
hibernateSession.flush(); //#Line 3
hibernateSession.saveOrUpdate(objUser); //#Line 4
// Commit transaction
Here is the Hibernate mapping between the User and Address classes:
<class name="com.service.core.bo.impl.User" table="USERS">
.
.
<many-to-one name="userAddress" class="com.service.core.bo.impl.UserAddress"
column="ADDRESS_ID" not-null="false" unique="true" cascade="save-update"
lazy="false" />
.
.
</class>
At some point I received a deadlock at #Line 1. Here is the exception stack:
[2017-12-12 11:15:02.131 GMT] WARN [] [] [] [] [] [] [] [] [] http-bio-8280-exec-14 org.hibernate.util.JDBCExceptionReporter - SQL Error: 60, SQLState: 61000
[2017-12-12 11:15:02.131 GMT] ERROR [] [] [] [] [] [] [] [] [] http-bio-8280-exec-14 org.hibernate.util.JDBCExceptionReporter - ORA-00060: deadlock detected while waiting for resource
[2017-12-12 11:15:02.131 GMT] ERROR [] [] [] [] [] [] [] [] [] http-bio-8280-exec-14 org.hibernate.event.def.AbstractFlushingEventListener - Could not synchronize database state with session
org.hibernate.exception.LockAcquisitionException: Could not execute JDBC batch update
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:87)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:253)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:172)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
at sun.reflect.GeneratedMethodAccessor706.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
.
.
Caused by: java.sql.BatchUpdateException: ORA-00060: deadlock detected while waiting for resource
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:12296)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:246)
at sun.reflect.GeneratedMethodAccessor553.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.ucp.jdbc.proxy.StatementProxyFactory.invoke(StatementProxyFactory.java:353)
at oracle.ucp.jdbc.proxy.PreparedStatementProxyFactory.invoke(PreparedStatementProxyFactory.java:178)
at com.sun.proxy.$Proxy66.executeBatch(Unknown Source)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:48)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:246)
... 71 more
The problem is that this error is not consistent. It may occur once or several times in one day, and then not at all the next.
I am not sure why flush() causes "Could not synchronize database state with session" and "ORA-00060: deadlock detected while waiting for resource". I found some links like Could not synchronize database state with session about session state, but my actual cause is a deadlock, as per the above exception.
"Could not synchronize database state with session" is a generic Hibernate error that can have multiple root causes.
The error that you should focus on is:
ORA-00060: deadlock detected while waiting for resource
This is Oracle-specific, and it happens when updates to the same data (a row in the database) come from more than one connection. Whenever Oracle (and just about any database, in fact) updates a row, it locks it for the duration of the update. If another update is attempted on the same row while it is locked, this error happens.
Here is the official error explanation from Oracle:
ORA-00060: deadlock detected while waiting for resource
Cause: Transactions deadlocked one another while waiting for resources.
Action: Look at the trace file to see the transactions and resources involved. Retry if necessary.
One way to address this problem is to use versioning:
https://www.intertech.com/Blog/versioning-optimistic-locking-in-hibernate/
This adds a version column to the table, which is automatically incremented when the row is updated. Before the update, the version is checked; if it is higher than the one expected, the row is not updated at all and a specific error is thrown, which you can handle. Usually handling involves reloading the entity's data from the database, resetting its values to what you want, and then saving.
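As a sketch of what versioning looks like with annotations (the field name version is an assumption; the linked article covers the details):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class User {

    @Id
    private Long id;

    // Hibernate increments this column on every update and rejects the
    // update with an optimistic-locking failure if the row is stale
    @Version
    private Long version;

    // other fields, getters and setters omitted
}

With the hbm.xml mapping style used in the question, the equivalent is a <version name="version" column="VERSION"/> element placed directly after <id> in the <class> mapping.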
I'm working on an Integrator for Hibernate (background on Integrators: https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch14.html#objectstate-decl-security) that, by using listeners, is supposed to take my data from how it's stored in the DB and convert it into a different form for processing at runtime. This works great when saving the data using .persist(); however, there's an odd behavior involving transactions. The following code is from Hibernate's own quickstart tutorial:
// now lets pull events from the database and list them
entityManager = entityManagerFactory.createEntityManager();
entityManager.getTransaction().begin();
List<Event> result = entityManager.createQuery( "from Event", Event.class ).getResultList();
for ( Event event : result ) {
System.out.println( "Event (" + event.getDate() + ") : " + event.getTitle() );
}
entityManager.getTransaction().commit();
entityManager.close();
Notice the unusual transaction begin/commit wrapping the query to select the data. Running this gives the following output after the query completes:
01:01:59.111 [main] DEBUG org.hibernate.engine.transaction.spi.AbstractTransactionImpl.commit(175) - committing
01:01:59.112 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.prepareEntityFlushes(149) - Processing flush-time cascades
01:01:59.112 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.prepareCollectionFlushes(189) - Dirty checking collections
01:01:59.114 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.logFlushResults(123) - Flushed: 0 insertions, 2 updates, 0 deletions to 2 objects
01:01:59.114 [main] DEBUG org.hibernate.event.internal.AbstractFlushingEventListener.logFlushResults(130) - Flushed: 0 (re)creations, 0 updates, 0 removals to 0 collections
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(114) - Listing entities:
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(121) - org.hibernate.tutorial.em.Event{date=2015-07-28 01:01:57.776, id=1, title=Our very first event!}
01:01:59.114 [main] DEBUG org.hibernate.internal.util.EntityPrinter.toString(121) - org.hibernate.tutorial.em.Event{date=2015-07-28 01:01:58.746, id=2, title=A follow up event}
01:01:59.115 [main] DEBUG org.hibernate.SQL.logStatement(109) - update EVENTS set EVENT_DATE=?, title=? where id=?
Hibernate: update EVENTS set EVENT_DATE=?, title=? where id=?
01:01:59.119 [main] DEBUG org.hibernate.SQL.logStatement(109) - update EVENTS set EVENT_DATE=?, title=? where id=?
Hibernate: update EVENTS set EVENT_DATE=?, title=? where id=?
01:01:59.120 [main] DEBUG org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doCommit(113) - committed JDBC Connection
01:01:59.120 [main] DEBUG org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.close(201) - HHH000420: Closing un-released batch
01:01:59.121 [main] DEBUG org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.releaseConnection(246) - Releasing JDBC connection
01:01:59.121 [main] DEBUG org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.releaseConnection(264) - Released JDBC connection
01:01:59.121 [main] DEBUG org.hibernate.internal.SessionFactoryImpl.close(1339) - HHH000031: Closing
It appears that since the Integrator modifies the entity in question, the entity gets marked as "dirty", and upon committing this odd transaction Hibernate bypasses my event listeners and writes the value back in the wrong format! I did some digging in the code, and it turns out that org.hibernate.event.internal.AbstractFlushingEventListener.flushEntities(FlushEvent, PersistenceContext) gets called above and tries to get listeners for EventType.FLUSH_ENTITY. Unfortunately, a listener added for this EventType is never called in my Integrator. How can I write my Integrator so that it behaves correctly in this case, "undoing" the conversion that happened with my entities at runtime instead of flushing the wrong value out?
Ultimately the problem was due to the EventTypes of the event listeners added with the EventListenerRegistry. What worked was using EventType.POST_LOAD for all the read operations, combined with EventType.PRE_UPDATE and EventType.PRE_INSERT for writes, both of which call a helper method that handles them the same way.
To prevent unneeded writes after making your entity updates, it's a good idea to reset the data used for tracking whether the entity is dirty: the EntityEntry field called loadedState. This is a private field in Hibernate 4, so you'll need to use reflection; in Hibernate 5 it's available via the getLoadedState() method. One more gotcha: you need to update the values of the "state" that is actually flushed to the database by the PreInsertEvent and PreUpdateEvent, which can be retrieved from the getState() method defined on each, as in the sketch below.
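As a sketch of the write-side listener described above (the class name ConvertOnWriteListener and the toDatabaseForm helper are assumptions; the read side would be a POST_LOAD listener doing the reverse conversion):

import org.hibernate.event.spi.PreInsertEvent;
import org.hibernate.event.spi.PreInsertEventListener;
import org.hibernate.event.spi.PreUpdateEvent;
import org.hibernate.event.spi.PreUpdateEventListener;
import org.hibernate.persister.entity.EntityPersister;

public class ConvertOnWriteListener
        implements PreInsertEventListener, PreUpdateEventListener {

    @Override
    public boolean onPreInsert(PreInsertEvent event) {
        convert(event.getPersister(), event.getState());
        return false; // false = do not veto the insert
    }

    @Override
    public boolean onPreUpdate(PreUpdateEvent event) {
        convert(event.getPersister(), event.getState());
        return false;
    }

    // getState() is the property array Hibernate actually flushes, so
    // rewriting its entries changes what gets written to the database.
    private void convert(EntityPersister persister, Object[] state) {
        String[] names = persister.getPropertyNames();
        for (int i = 0; i < state.length; i++) {
            state[i] = toDatabaseForm(names[i], state[i]);
        }
    }

    // hypothetical helper standing in for the real runtime-to-DB conversion
    private Object toDatabaseForm(String property, Object value) {
        return value;
    }
}

Inside the Integrator, the listener is registered through the EventListenerRegistry service, e.g. serviceRegistry.getService(EventListenerRegistry.class).appendListeners(EventType.PRE_INSERT, listener).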
The setup of my project is -
Spring JDBC for persistence
Apache DBCP 1.4 for connection pooling
Mysql 5 on Linux
Here is the log of my application that captures the interactions with the database.
2013-01-29 15:52:21,549 DEBUG http-bio-8080-exec-3 org.springframework.jdbc.core.JdbcTemplate - Executing SQL query [SELECT id from emp]
2013-01-29 15:52:21,558 DEBUG http-bio-8080-exec-3 org.springframework.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
2013-01-29 15:52:31,878 INFO http-bio-8080-exec-3 jdbc.connection - 1. Connection opened org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
2013-01-29 15:52:31,878 DEBUG http-bio-8080-exec-3 jdbc.connection - open connections: 1 (1)
2013-01-29 15:52:31,895 INFO http-bio-8080-exec-3 jdbc.connection - 1. Connection closed org.apache.commons.dbcp.DelegatingConnection.close(DelegatingConnection.java:247)
2013-01-29 15:52:31,895 DEBUG http-bio-8080-exec-3 jdbc.connection - open connections: none
2013-01-29 15:52:41,950 INFO http-bio-8080-exec-3 jdbc.connection - 2. Connection opened org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
2013-01-29 15:52:41,950 DEBUG http-bio-8080-exec-3 jdbc.connection - open connections: 2 (1)
2013-01-29 15:52:52,001 INFO http-bio-8080-exec-3 jdbc.connection - 3. Connection opened org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
2013-01-29 15:52:52,002 DEBUG http-bio-8080-exec-3 jdbc.connection - open connections: 2 3 (2)
2013-01-29 15:53:02,058 INFO http-bio-8080-exec-3 jdbc.connection - 4. Connection opened org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
2013-01-29 15:53:02,058 DEBUG http-bio-8080-exec-3 jdbc.connection - open connections: 2 3 4 (3)
2013-01-29 15:53:03,403 DEBUG http-bio-8080-exec-3 org.springframework.jdbc.core.BeanPropertyRowMapper - Mapping column 'id' to property 'id' of type int
2013-01-29 15:53:04,494 DEBUG http-bio-8080-exec-3 org.springframework.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
Two things are clear from the log -
The connection pool only starts creating connections when the first request to execute a query is received.
A pool of 4 connections takes nearly 30 seconds to initialize.
My questions are -
How should one configure DBCP to initialize on startup automatically?
Should it really take that long to create connections?
Note: Please don't suggest switching to C3P0 or Tomcat connection pool. I'm aware of those solutions. I'm more interested in understanding the problem at hand than just a quick fix.
Besides I'm sure something so basic should be possible with DBCP as well.
Contents of dbcontext -
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${db.driver}" />
<property name="url" value="${db.jdbc.url}" />
<property name="username" value="${db.user}" />
<property name="password" value="${db.password}" />
<property name="maxActive" value="20" />
<property name="initialSize" value="4" />
<property name="testOnBorrow" value="true" />
<property name="validationQuery" value="SELECT 1" />
</bean>
The initialSize doesn't take effect until you first request a connection. From the Javadocs for BasicDataSource#setInitialSize:
Sets the initial size of the connection pool.
Note: this method currently has no effect once the pool has been
initialized. The pool is initialized the first time one of the
following methods is invoked: getConnection, setLogwriter,
setLoginTimeout, getLoginTimeout, getLogWriter.
Try adding init-method="getLoginTimeout" to your bean to confirm this, as in the snippet below.
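For example, the bean definition above would become (only the init-method attribute is new; the properties stay as posted):

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
    destroy-method="close" init-method="getLoginTimeout">
    <!-- same properties as above -->
</bean>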
For a web application, you can implement the ServletContextListener.contextInitialized() method and fire a test query (e.g. SELECT id FROM emp LIMIT 1) using your data access layer. This initializes your connection pool and makes it ready before your application starts serving real users from the web.
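A minimal sketch of such a listener, assuming a Spring-managed DAO (EmpDao and findFirstId() are hypothetical names standing in for your data access layer):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.springframework.web.context.support.WebApplicationContextUtils;

public class PoolWarmupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Fire a trivial query through the data access layer so DBCP
        // creates its initialSize connections before real traffic arrives.
        EmpDao empDao = WebApplicationContextUtils
                .getRequiredWebApplicationContext(sce.getServletContext())
                .getBean(EmpDao.class);
        empDao.findFirstId(); // e.g. "SELECT id FROM emp LIMIT 1"
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}

Register it in web.xml with a <listener> element (or @WebListener on Servlet 3.0+).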
Have a look at the initialSize property, especially the part about when the pool is initialized. As sbridges points out, you can use the init-method attribute on beans to call one of those methods and trigger pool creation.
Also, you should look into why it takes 7.5 seconds on average to create a connection...
Cheers,