Connection Reset using Spring + Hibernate - java

I am using Spring + Hibernate on my JavaEE project.
In this project the user can upload an XLS file which I should import into my database. Before importing, I have to validate the file by checking its integrity against the other entities in my database. So I have more or less the following:
// The importer
@Component("importer")
public class Importer {

    @Autowired
    FirstDAO firstDao;

    @Autowired
    SecondDAO secondDao;

    // Read and open the file (65,000 lines, for example)
    public void validate() {
        foreach (line in the file) {   // pseudocode
            firstDao.has(line[col1]);
            secondDao.has(line[col2]);
        }
        // It stores the valid objects in a List and persists them at the end
    }
}
// The DAO
@Repository
public class FirstDao {

    @PersistenceContext
    protected EntityManager entityManager;

    @Transactional(propagation = Propagation.NOT_SUPPORTED)
    public boolean has(String name) {
        List<Object> result = entityManager
                .createQuery("from FIRST_TABLE where name = :name")
                .setParameter("name", name)
                .getResultList();
        return !result.isEmpty();
    }
}
// The PersistenceContext/Hibernate configuration
<!-- Data Source -->
<jee:jndi-lookup id="myDS" jndi-name="jdbc/my-DS" cache="true" proxy-interface="javax.sql.DataSource" />
<!-- Entity Manager -->
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property value="classpath:META-INF/my_persistence.xml" name="persistenceXmlLocation"/>
<property name="dataSource" ref="myDS"/>
<property name="persistenceUnitName" value="myPersistenceUnit" />
<!--
<property name="loadTimeWeaver">
<bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/>
</property>
-->
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="database" value="ORACLE" />
<property name="showSql" value="false" />
</bean>
</property>
</bean>
After enabling logging on the application I have noticed:
For each query (the has method on my DAO) a connection is opened and closed with my database.
The memory on the server is being flooded (probably a memory leak).
After a lot of opening and closing of connections I get a connection reset from the database. I don't know why. And if I keep requesting connections, the DataSource is suspended.
I have read some things about the EntityManager, but I still don't know if I am doing it right, so:
Is it right to execute the validation in a for loop that way? (One connection for each check, meaning 130,000 connections opened and closed for a 65,000-line file.)
I have read about a stateless persistence context for the EntityManager. I suspect the memory leak may be there; maybe Hibernate is keeping a lot of objects in the PersistenceContext. How do I tell the EntityManager not to cache those objects when validating?
Thanks in advance.

First of all, you really shouldn't do that line by line unless you have a very, very good reason. Even if the data is bigger than your memory, you should process it 1,000 lines at a time or so, but definitely not one by one, because one of the most important optimizations for database usage is reducing the number of database hits.
Secondly, you should not retrieve the data just to check whether it exists.
You should use a basic "select count" query. That way you avoid all the overhead of consuming IO to read the data, transferring it over the network to your server, and spending memory just to get the number of objects in that list.
If you follow my first advice and check the existence of records not one at a time but 1,000 at a time, you can select just the names instead of whole rows; a sketch of this follows below.
By the way, as far as I can see you are using a DataSource; if it is properly configured (maximum number of connections, etc.) you shouldn't have to worry about the number of database connections.
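For illustration, here is a minimal sketch of that idea, assuming a mapped JPA entity named FirstEntity with a name property (both names are assumptions, since the original query uses FIRST_TABLE directly). It checks existence in batches and returns only the names that are already present, instead of issuing one query per line:

import java.util.*;
import javax.persistence.EntityManager;

public class BatchValidator {

    // Returns the subset of the given names that already exist in the database,
    // querying in batches instead of once per line.
    public Set<String> findExistingNames(EntityManager entityManager, List<String> names) {
        Set<String> existing = new HashSet<>();
        int batchSize = 1000;
        for (int i = 0; i < names.size(); i += batchSize) {
            List<String> batch = names.subList(i, Math.min(i + batchSize, names.size()));
            existing.addAll(entityManager
                    .createQuery("select e.name from FirstEntity e where e.name in :names", String.class)
                    .setParameter("names", batch)
                    .getResultList());
        }
        return existing;
    }
}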

Related

How to handle 2 databases in one method?

I have a Spring Boot RESTful CRUD service in the car rental domain.
From a high-level overview, it's a simple CRUD app with a SQL database and entities such as Car, Client, Lease, etc.
Now I have to introduce a report generation feature aimed at processing lease data, calculating some statistics based on the data in the SQL DB, and persisting the report into MongoDB.
I've already implemented it by creating a ReportGenerationService that depends on OriginDataService and MongoService.
ReportGenerationService generates the report based on data returned by OriginDataService. In turn, OriginDataService has a method getData() that does a number of calls to the DAO layer and is thus annotated with @Transactional(isolation = Isolation.REPEATABLE_READ); I want the returned data to be consistent. After getting the data, ReportGenerationService generates a report and persists it by invoking MongoService's persist(Report) method.
In my implementation I get data -> generate report -> persist report.
But what if the base data and report can't fit into RAM?
The solution is to select it little by little, generate a part of the report, persist that part, and after all rows of data are processed, merge the report.
It means that one method should read the data, process it, and persist it.
I also want my method to read data with the Repeatable Read isolation level, so I have to annotate the method with @Transactional(isolation = Isolation.REPEATABLE_READ). But since two DBs are used in the method, the @Transactional will span both of them, and I want only the SQL one to use it.
How can I gradually read from and write to different DBs?
Refer to the links and code sample below; they may help you resolve your issue.
https://www.javaworld.com/article/2077963/distributed-transactions-in-spring--with-and-without-xa.html?page=2
Transaction management for multiple databases using Spring & Hibernate:
<bean id="transactionManager" class="com.springsource.open.db.ChainedTransactionManager">
<property name="transactionManagers">
<list>
<bean
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<bean
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="otherDataSource" />
</bean>
</list>
</property>
</bean>
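If all you need is for the Repeatable Read guarantee to apply to the SQL side only, another option is to bind @Transactional explicitly to the SQL transaction manager, so the MongoDB write is not enlisted in that transaction. This is a sketch under assumptions, not taken from the links above: the transactionManager attribute requires Spring 4.2+, and the sqlTxManager bean name, LeaseDao and LeaseData types are hypothetical.

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OriginDataService {

    private final LeaseDao leaseDao; // hypothetical DAO over the SQL database

    public OriginDataService(LeaseDao leaseDao) {
        this.leaseDao = leaseDao;
    }

    // Only this read is bound to the SQL transaction manager; the MongoDB
    // persistence step runs outside of it.
    @Transactional(transactionManager = "sqlTxManager",
                   isolation = Isolation.REPEATABLE_READ, readOnly = true)
    public List<LeaseData> getDataPage(int page, int pageSize) {
        return leaseDao.findPage(page, pageSize); // paged read to keep memory use bounded
    }
}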

How to verify whether a connection pool has been set up in a Spring MVC web app?

In one of my questions asked earlier I got to know that DriverManagerDataSource is NOT intended for production use, so I changed my configuration. I know I am using DBCP, which is also outdated, and that a lot of other connection pools are available, like HikariCP and BoneCP, but
I wish to understand how to verify whether a pool has been set up or not.
After searching a lot I got some answers at the following link:
How would you test a Connection Pool
but I didn't get a way to verify it programmatically. Also, I cannot debug the jar files used for connection pooling because no source code is available. I don't know why, but I can't change my jars for official reasons.
The following are my configuration (OLD and NEW)
OLD
<bean id="webLogicXADataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="#[csa.db.driver]" />
<property name="url" value="#[csa.db.url]" />
<property name="username" value="#[csa.db.username]" />
<property name="password" value="#[csa.db.password]" />
</bean>
NEW
Using DBCP connection pool
<bean id="webLogicXADataSource"
class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="#[csa.db.driver]" />
<property name="url" value="#[csa.db.url]" />
<property name="username" value="#[csa.db.username]" />
<property name="password" value="#[csa.db.password]" />
</bean>
OTHER ELEMENTS (so far I have kept them the same as they were earlier):
Place holder
<bean id="placeholderConfig"
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<list>
<value>file:${DB_PROPERTIES}</value>
</list>
</property>
<property name="placeholderPrefix" value="#[" />
<property name="placeholderSuffix" value="]" />
</bean>
Transaction Manager
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="webLogicXADataSource" />
<qualifier value="inventoryTxManager"/>
</bean>
DAOIMPL SAMPLE BEAN
<bean id="inventoryDao"
class="com.lxnx.fab.ce.icce.inventoryRoutingInvoice.dao.InventoryDaoImpl">
<property name="dataSource" ref="webLogicXADataSource" />
<property name="transactionManager" ref="transactionManager" />
</bean>
Right now all the DAO classes in my project are singletons (no prototype scope is set for any of the beans).
The following is the sample java code of the DAOImpl.java class where I need to do all the transactions:
DAOImpl.java
public class InventoryDaoImpl implements InventoryDao {

    private final static ISmLog iSmLog = Instrumentation
            .getSmLog(LNConstants.SYSTEM_LOG_NAME);

    private JdbcTemplate jdbcTemplate;
    private DataSource dataSource;
    private PlatformTransactionManager transactionManager;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
        this.dataSource = dataSource;
    }

    public void setTransactionManager(
            PlatformTransactionManager transactionManager) {
        this.transactionManager = transactionManager;
    }

    @Transactional
    private void insertRelatedInfoData(InventoryModel inventoryModel) {
        final List<String> relatedLniList = inventoryModel.getArrRelatedLni();
        final String documentLni = inventoryModel.getDocumentLNI();
        String sql = "INSERT INTO SCSMD_REPO.INV_RELATED_INFO(LNI, RELATED_LNI) VALUES(?,?)";
        jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                String relatedLni = relatedLniList.get(i);
                ps.setString(1, documentLni);
                ps.setString(2, relatedLni);
            }

            @Override
            public int getBatchSize() {
                return relatedLniList.size();
            }
        });
    }
}
I am not getting any errors; I just want to verify whether a pool has been set up.
Are all my configurations fine, or did I miss something? Please help me out with your valuable answers. Thanks.
If you don't have logging enabled then you can't verify it. However, there is one more crude trick:
Every database server has a timeout facility: if the DB is not hit by the application for some time, the connection is dropped. For example, a MySQL server breaks its connection to the application after 8 hours if there is no activity. You could lower that timeout to a small value (say 30 minutes) in the MySQL config file and check whether, after 30 minutes of inactivity, you get a connection-closed exception in your application when you hit the DB.
The easiest way, as explained, would be to examine the logs. It's quite likely that a connection pool will log something, at least if your logging level is low enough.
Another way would be to examine the class of the Connection that the datasource returns. If you're dealing with a connection pool, the class will be a wrapper or a proxy class for that pool. The wrapper/proxy class makes sure that when you call close() the connection isn't really closed, it's just returned to the pool for further use. For example if you were to use HikariCP as your pool, you could check if(connection instanceof IHikariConnectionProxy) to see if the pool is being used.
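As a rough illustration of that check (a sketch; the exact wrapper class names depend on the pool and its version), you can print the runtime class of a connection handed out by the DataSource and see whether it is the raw driver class or a pool wrapper:

import java.sql.Connection;
import javax.sql.DataSource;

public class PoolCheck {

    // Prints the concrete class of the connection; a pooling DataSource
    // typically returns a wrapper/proxy class rather than the raw driver class.
    public static void printConnectionClass(DataSource dataSource) throws Exception {
        try (Connection connection = dataSource.getConnection()) {
            System.out.println("Connection class: " + connection.getClass().getName());
        }
    }
}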
Adding that kind of code in your software would be a bad idea in practically all cases. If you don't know whether a connection pool is being used or not, it's not something you solve with code. It's something you solve by reading and studying more.
You've also named your bean webLogicXADataSource even though nothing seems to support it being an XA datasource. Are you perhaps working on things a bit too advanced for you?

Does a Java web application contain only one Hibernate session, and how do I clear this Hibernate session?

I am having an issue saving and retrieving objects from the database within just one request.
I want to clear the cache of our hibernate session to get the updated entity in our database.
My code looks like this:
public class SampleController {

    protected ModelAndView onSubmit(HttpServletRequest request, HttpServletResponse response, Object command, BindException errors)
            throws Exception {
        myServiceOne.doAllotsOfSaving(parameters);
        // some code enhancements to remove the cache in the hibernate session
        // without affecting the sessions of other logged-in users.
        // Some fields in the MyEntity class contain old values, but the actual data in the database is already updated.
        MyEntity entity = myServiceTwo.getMyEntityByOrderNo(orderNo);
    }
}
--configurations
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation" value="classpath:hibernate.cfg.xml" />
<property name="hibernateProperties">
<ref local="hibernateProperties"/>
</property>
<property name="entityInterceptor">
<ref bean="auditLogInterceptor" />
</property>
</bean>
<bean id="myServiceOne" class="com.test.service.impl.MyServiceOneImpl">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<bean id="myServiceTwo" class="com.test.service.impl.MyServiceTwoImpl">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
--configurations
Does a Java web application contain only one Hibernate session, and how do I clear this Hibernate session?
No. Any Hibernate-based application must use multiple sessions, and each of these sessions must be closed once it has performed its task. Hibernate can manage sessions for you if you configure that in Hibernate's configuration file.
However, you should have only one instance of SessionFactory per application.
To clear the session you can call the session.clear() method. It clears the session-level cache.
without affecting the session of other user logged in
Since you have a web application, you have a different thread for each user's database transactions. This means each user will have a different Hibernate session, so you won't have to worry about this. If by some means you're using the same session for all users, you're doing it wrong and the results can be catastrophic: after some time you'll get an OutOfMemoryError because of the session-level cache.
Note that you cannot disable the Hibernate session-level cache. For this purpose you may use a StatelessSession.
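For illustration, a minimal sketch of keeping the session-level cache small during a long-running job by periodically flushing and clearing it (the batch size of 50 is an arbitrary assumption, and the list of items is whatever your job produces):

import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class BatchSaver {

    // Saves a list of entities while periodically flushing and clearing the
    // session-level cache so memory use stays bounded.
    public void saveAll(SessionFactory sessionFactory, List<?> items) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        for (int i = 0; i < items.size(); i++) {
            session.save(items.get(i));
            if (i % 50 == 0) {       // assumed batch size
                session.flush();     // push pending SQL to the database
                session.clear();     // detach managed instances so they can be garbage collected
            }
        }
        tx.commit();
        session.close();
    }
}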
The SessionFactory is a long-lived, thread-safe object; usually one SessionFactory should be created per database.
A Session is used to get a physical connection to the database. The Session object is lightweight and designed to be instantiated each time an interaction with the database is needed.
The main function of the Session is to offer CRUD operations for instances of mapped entity classes. Instances may exist in one of the following three states at a given point in time:
transient: A new instance of a persistent class which is not associated with a Session, has no representation in the database and no identifier value is considered transient by Hibernate.
persistent: You can make a transient instance persistent by associating it with a Session. A persistent instance has a representation in the database, an identifier value and is associated with a Session.
detached: Once we close the Hibernate Session, the persistent instance will become a detached instance.
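A tiny sketch of those three states, assuming an open Hibernate Session and the MyEntity class from the question:

MyEntity entity = new MyEntity(); // transient: no Session, no identifier, no row in the database
session.save(entity);             // persistent: now has an identifier and is tracked by the Session
session.close();                  // detached: the object still exists, but the Session no longer tracks it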

How to run native SQL queries in the same Hibernate transaction?

We have a service which is @Stateful. Most of the data operations are atomic, but within a certain set of functions we want to run multiple native queries within one transaction.
We injected the EntityManager with a transaction scoped persistence context. When creating a "bunch" of normal Entities, using em.persist() everything is working fine.
But when using native queries (some tables are not represented by any @Entity), Hibernate does not run them within the same transaction but basically uses ONE transaction per query.
So I already tried to use manual START TRANSACTION; and COMMIT; statements, but that seems to interfere with the transactions Hibernate is using to persist entities when mixing native queries and persistence calls.
@Stateful
class Service {

    @PersistenceContext(unitName = "service")
    private EntityManager em;

    public void doSth() {
        this.em.createNativeQuery("blabla").executeUpdate();
        this.em.persist(someEntity);
        this.em.createNativeQuery("blablubb").executeUpdate();
    }
}
Everything inside this method should happen within one transaction. Is this possible with Hibernate?
When debugging, it is clearly visible that every statement happens "independently" of any transaction (i.e. changes are flushed to the database right after every statement).
I've tested the example given below with a minimal setup in order to eliminate any other factors (the Strings are just for breakpoints, to review the database after each query):
@Stateful
@TransactionManagement(value = TransactionManagementType.CONTAINER)
@TransactionAttribute(value = TransactionAttributeType.REQUIRED)
public class TestService {

    @PersistenceContext(name = "test")
    private EntityManager em;

    public void transactionalCreation() {
        em.createNativeQuery("INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('a','b','c')").executeUpdate();
        String x = "test";
        em.createNativeQuery("INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('a','c','b')").executeUpdate();
        String y = "test2";
        em.createNativeQuery("INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('c','b','a')").executeUpdate();
    }
}
Hibernate is configured like this:
<persistence-unit name="test">
<provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
<jta-data-source>java:jboss/datasources/test</jta-data-source>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5InnoDBDialect" />
<property name="hibernate.transaction.jta.platform"
value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform" />
<property name="hibernate.archive.autodetection" value="true" />
<property name="hibernate.jdbc.batch_size" value="20" />
<property name="connection.autocommit" value="false"/>
</properties>
</persistence-unit>
And the outcome is the same as with autocommit mode: After every native query, the database (reviewing content from a second connection) is updated immediately.
The idea of using the transaction in a manual way leads to the same result:
public void transactionalCreation() {
    Session s = em.unwrap(Session.class);
    Session s2 = s.getSessionFactory().openSession();
    s2.setFlushMode(FlushMode.MANUAL);
    s2.getTransaction().begin();
    s2.createSQLQuery("INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('a','b','c')").executeUpdate();
    String x = "test";
    s2.createSQLQuery("INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('a','c','b')").executeUpdate();
    String y = "test2";
    s2.createSQLQuery("INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('c','b','a')").executeUpdate();
    s2.getTransaction().commit();
    s2.close();
}
In case you don't use container-managed transactions, then you need to add the transaction policy too:
@Stateful
@TransactionManagement(value = TransactionManagementType.CONTAINER)
@TransactionAttribute(value = REQUIRED)
I have only seen this phenomenon in two situations:
the DataSource is running in auto-commit mode, hence each statement is executed in a separate transaction
the EntityManager was not configured with @Transactional, but then only queries can be run, since any DML operation would end up throwing a transaction-required exception.
Let's recap: make sure you have set the following Hibernate properties:
hibernate.current_session_context_class=JTA
transaction.factory_class=org.hibernate.transaction.JTATransactionFactory
jta.UserTransaction=java:comp/UserTransaction
Where the final property must be set with your Application Server UserTransaction JNDI naming key.
You could also use the:
hibernate.transaction.manager_lookup_class=org.hibernate.transaction.JBossTransactionManagerLookup
or some other strategy according to your current Java EE Application Server.
After reading about the topic for another bunch of hours and playing around with every configuration property and/or annotation, I found a working solution for my use case. It might not be the best or only solution, but since the question has received some bookmarks and upvotes, I'd like to share what I have so far:
At first, there was no way to get it working as expected when running the persistence unit in managed mode (<persistence-unit name="test" transaction-type="JTA"> - JTA is the default if no value is given).
I decided to add another persistence unit to the persistence.xml, which is configured to run in unmanaged mode: <persistence-unit name="test2" transaction-type="RESOURCE_LOCAL">.
(Note: the warning about multiple persistence units is just because Eclipse can't handle it; it has no functional impact at all.)
The unmanaged persistence context requires local configuration of the database, since it is no longer container-provided:
<persistence-unit name="test2" transaction-type="RESOURCE_LOCAL">
<provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
<class>test.AEntity</class>
<properties>
<property name="hibernate.connection.url" value="jdbc:mysql://localhost/test"/>
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5InnoDBDialect" />
<property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>
<property name="hibernate.connection.password" value="1234"/>
<property name="hibernate.connection.username" value="root"/>
<property name="hibernate.hbm2ddl.auto" value="update" />
<property name="hibernate.show_sql" value="true" />
<property name="hibernate.archive.autodetection" value="true" />
<property name="hibernate.jdbc.batch_size" value="20" />
<property name="hibernate.connection.autocommit" value="false" />
</properties>
</persistence-unit>
A change now required in the project is that you add a unitName whenever you use the @PersistenceContext annotation to retrieve a managed instance of the EntityManager.
But be aware that you can only use @PersistenceContext for the managed persistence unit. For the unmanaged one, you could implement a simple producer and inject the EntityManager using CDI whenever required:
@ApplicationScoped
public class Resources {

    private static EntityManagerFactory emf;

    static {
        emf = Persistence.createEntityManagerFactory("test2");
    }

    @Produces
    public static EntityManager createEm() {
        return emf.createEntityManager();
    }
}
Now, in the example given in the original post, you need to inject the EntityManager and manually take care of transactions.
@Stateful
public class TestService {

    @Inject
    private EntityManager em;

    public void transactionalCreation() throws Exception {
        em.getTransaction().begin();
        try {
            em.createNativeQuery(
                    "INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('a','b','a')")
                    .executeUpdate();
            em.createNativeQuery(
                    "INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('a','b','b')")
                    .executeUpdate();
            em.createNativeQuery(
                    "INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('a','b','c')")
                    .executeUpdate();
            em.createNativeQuery(
                    "INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('a','b','d')")
                    .executeUpdate();

            AEntity a = new AEntity();
            a.setName("TestEntity1");
            em.persist(a);

            // force unique key violation, rollback should appear.
            // em.createNativeQuery(
            //         "INSERT INTO `ttest` (`name`,`state`,`constraintCol`) VALUES ('a','b','d')")
            //         .executeUpdate();

            em.getTransaction().commit();
        } catch (Exception e) {
            em.getTransaction().rollback();
        }
    }
}
My tests so far have shown that mixing native queries and persistence calls leads to the desired result: either everything is committed or the transaction is rolled back as a whole.
For now, the solution seems to work. I will continue to validate its functionality in the main project and check whether there are any other side effects.
Other things I need to verify are whether it would be safe to:
Inject both versions of the EM into one bean and mix their usage. (First checks seem to work, even when using both EMs at the same time on the same table(s).)
Have both versions of the EM operating on the same data source. (The same data source would most likely be no problem; the same tables, I assume, could lead to unexpected problems.)
PS: This is draft 1. I will continue to improve the answer and point out problems and/or drawbacks I find.
You have to add <property name="hibernate.connection.release_mode" value="after_transaction" /> to your properties. After a restart the transaction handling should work.

What's the proper way to handle JDBC connections with Spring and DBCP?

I'm using Spring MVC to build a thin layer on top of a SQL Server database. When I began testing, it seemed that it didn't handle stress very well :). I'm using Apache Commons DBCP to handle connection pooling and the data source.
When I first attempted ~10-15 simultaneous connections, it used to hang and I'd have to restart the server (for dev I'm using Tomcat, but I'm going to have to deploy on WebLogic eventually).
These are my Spring bean definitions:
<bean id="dataSource" destroy-method="close"
class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
<property name="url" value="[...]"/>
<property name="username" value="[...]" />
<property name="password" value="[...]" />
</bean>
<bean id="partnerDAO" class="com.hp.gpl.JdbcPartnerDAO">
<constructor-arg ref="dataSource"/>
</bean>
<!-- + other beans -->
And this is how I use them:
// in the DAO
public JdbcPartnerDAO(DataSource dataSource) {
jdbcTemplate = new JdbcTemplate(dataSource);
}
// in the controller
@Autowired
private PartnerDAO partnerDAO;
// in the controller method
Collection<Partner> partners = partnerDAO.getPartners(...);
After reading around a little bit, I found the maxWait, maxActive and maxIdle properties for the BasicDataSource (from GenericObjectPool). Here comes the problem. I'm not sure how I should set them, performance-wise. From what I know, Spring should be managing my connections so I shouldn't have to worry about releasing them.
<bean id="dataSource" destroy-method="close"
class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
<property name="url" value="[...]"/>
<property name="username" value="[...]" />
<property name="password" value="[...]" />
<property name="maxWait" value="30" />
<property name="maxIdle" value="-1" />
<property name="maxActive" value="-1" />
</bean>
First, I set maxWait, so that it wouldn't hang and instead throw an exception when no connection was available from the pool. The exception message was:
Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object
There are some long-running queries, but the exception was thrown regardless of the query complexity.
Then, I set maxActive and maxIdle so that it wouldn't throw the exceptions in the first place. The default values are 8 for maxActive and maxIdle (I don't understand why); if I set them to -1 there are no more exceptions thrown and everything seems to work fine.
Considering that this app should support a large number of concurrent requests is it ok to leave these settings to infinite? Will Spring actually manage my connections, considering the errors I was receiving? Should I switch to C3P0 considering it's kinda dead?
The DBCP maxWait parameter is defined in milliseconds. 30 ms is a very low value; consider increasing it to 30000 ms and trying again.
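For illustration, the equivalent programmatic DBCP configuration (a sketch with assumed pool limits, not recommended production values), with maxWait expressed in milliseconds and bounded pool sizes instead of -1:

import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {

    public static BasicDataSource createDataSource() {
        BasicDataSource dataSource = new BasicDataSource();
        dataSource.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        dataSource.setUrl("jdbc:sqlserver://..."); // placeholder URL, as in the question
        dataSource.setMaxWait(30000);  // wait up to 30 s for a free connection
        dataSource.setMaxActive(50);   // assumed upper bound; a bounded pool is usually safer than -1
        dataSource.setMaxIdle(10);     // assumed
        return dataSource;
    }
}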
As you already found out, the default DBCP connection pool size is 8 connections, so if you want to run 9 simultaneous queries, one of them will be blocked. I suggest you connect to your database and run exec sp_who2, which will show you what is connected and active, and whether any queries are being blocked. You can then confirm whether the issue is in the DB or in your code.
As long as you are using Spring's JdbcTemplate family of objects your connections will be managed as you expect, and if you want to use a raw DataSource make sure you use DataSourceUtils to obtain a Connection.
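For example, a minimal sketch of that pattern: obtaining and releasing a raw Connection through DataSourceUtils so it still participates in Spring-managed transactions:

import java.sql.Connection;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class RawJdbcExample {

    public void doRawWork(DataSource dataSource) {
        // Returns the transaction-bound connection if one exists, otherwise a new one.
        Connection connection = DataSourceUtils.getConnection(dataSource);
        try {
            // ... use the connection directly ...
        } finally {
            // Only closes the connection if it is not bound to an ongoing transaction.
            DataSourceUtils.releaseConnection(connection, dataSource);
        }
    }
}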
One other suggestion - prior to Spring 3, don't ever use JdbcTemplate; stick to SimpleJdbcTemplate. You can still access the same methods using SimpleJdbcTemplate.getJdbcOperations(), but you should find yourself writing much nicer code using generics, and it removes the need to ever create JdbcTemplate/NamedParameterJdbcTemplate instances.
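A small hedged example of the kind of code that suggestion leads to (the PARTNER table and REGION column are made-up names; note that SimpleJdbcTemplate was itself deprecated in later Spring versions in favor of JdbcTemplate):

import javax.sql.DataSource;
import org.springframework.jdbc.core.simple.SimpleJdbcTemplate;

public class PartnerQueries {

    private final SimpleJdbcTemplate template;

    public PartnerQueries(DataSource dataSource) {
        this.template = new SimpleJdbcTemplate(dataSource);
    }

    // Varargs parameter binding instead of Object[] arrays.
    public int countPartnersInRegion(String region) {
        return template.queryForInt("select count(*) from PARTNER where REGION = ?", region);
    }
}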
Let's change the perspective.
"but the exception was thrown regardless of the query complexity"
It could be that the table or the records you are querying against have been locked (by some other active transaction), and hence the request times out.
Try running the same query from a SQL Server client; if it takes a long time, then you can be sure that it is a table or record lock that is causing this.
