I am using Hibernate for database connectivity. I need the READ_UNCOMMITTED transaction isolation level, but it is not working for me. Can anyone tell me why? I have used this code:
@org.springframework.transaction.annotation.Transactional(
        isolation = Isolation.READ_UNCOMMITTED)
public void addRank() {
    // here I want to change the status of the rank with id 1, so I call the private method addChangeStatus with id 1
    addChangeStatus(1);
    // here I want to get the rank with id 1 and status true, but I get null
}

private void addChangeStatus(int rankId) {
    // first get the rank entity from the database using the given id
    Rank rank = dao.getRankById("query");
    // set the status of the rank to true
    rank.setStatus(true);
}
In the DAO, the getRankById method is:
Rank getRankById(String queryName) {
    // get the current session from the session factory and run the named query
    return (Rank) getCurrentSession().getNamedQuery(queryName).uniqueResult();
}
Can anyone tell me why READ_UNCOMMITTED is not working? In the XML file I have also configured the transaction manager as:
<bean id="transactionManager"
class="org.springframework.orm.hibernate5.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
Can anyone tell me how I can get the updated rank entity within the same transaction?
You cannot test READ_UNCOMMITTED from the same transaction.
What you tried to test there does not work, but for a different reason. If you were retrieving with EntityManager#find(), you would get the result back from the first-level cache, which already holds your not-yet-persisted change, without a database round-trip. The first-level cache is not consulted by queries.
So:
If you want to store and read within the same transaction, then you don't need READ_UNCOMMITTED. Just change your retrieval to EntityManager#find() or Session#load() so it is served from the first-level cache (see the sketch below).
If you really need READ_UNCOMMITTED, revise your test so that it involves multiple transactions.
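For the first case, here is a minimal sketch, assuming a Rank entity with an Integer identifier, a boolean status property with the usual accessors, and an injected SessionFactory (all of that is inferred, not taken from your code):

@Transactional
public void addRank() {
    addChangeStatus(1);
    // Session#get returns the instance already held in the first-level cache,
    // so the status change made above is visible without committing anything.
    Rank rank = (Rank) sessionFactory.getCurrentSession().get(Rank.class, 1);
    // rank.isStatus() is true here, even though the transaction has not committed yet
}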
Related
I'm a newbie to JPA/Hibernate first level caching.
I have the following repository class
Each time I call the findByState method (within the same transaction), I see the Hibernate SQL query being output to the console:
public interface PersonRepository extends JpaRepository<PersonEntity, Long> {

    @Query("select p from PersonEntity p where p.state = ?1")
    List<PersonEntity> findByState(String state);
    ....
}
I expected the results to be cached by the first level cache and the database not be repeatedly queried.
What am I doing wrong?
There is often a misunderstanding about caching.
Hibernate does not cache queries and query results by default. The first-level cache only comes into play when you call EntityManager.find(): in that case you will not see a SQL query executing. The cache is also used to avoid constructing a new object when the entity instance has already been loaded into the session.
What you are looking for is called "query cache".
This can be enabled by setting hibernate.cache.use_query_cache=true
Please read more about this topic in the official documentation:
https://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#caching-query
The query will always go to the database. The first-level cache only contains the constructed entities. Its purpose is to ensure that the same database id is mapped to the same entity object (within a session).
It's also possible to use a query cache, but you have to enable it per query. Check the docs: https://docs.jboss.org/hibernate/core/4.0/devguide/en-US/html/ch06.html
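As a hedged sketch of enabling it per query with Spring Data JPA (this assumes hibernate.cache.use_query_cache=true and a configured second-level cache provider; the entity and field names are taken from the question, the id type is assumed):

import java.util.List;
import javax.persistence.QueryHint;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.jpa.repository.QueryHints;

public interface PersonRepository extends JpaRepository<PersonEntity, Long> {

    // The "org.hibernate.cacheable" hint marks this query's results as cacheable
    // in Hibernate's query cache.
    @QueryHints(@QueryHint(name = "org.hibernate.cacheable", value = "true"))
    @Query("select p from PersonEntity p where p.state = ?1")
    List<PersonEntity> findByState(String state);
}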
For example, I have a method in my CRUD interface which deletes a user from the database:
public interface CrudUserRepository extends JpaRepository<User, Integer> {

    @Transactional
    @Modifying
    @Query("DELETE FROM User u WHERE u.id = :id")
    int delete(@Param("id") int id, @Param("userId") int userId);
}
This method will only work with the @Modifying annotation. But why is the annotation needed here? Why can't Spring analyze the query and understand that it is a modifying query?
CAUTION!
Using @Modifying(clearAutomatically=true) will drop any pending updates on the managed entities in the persistence context. Spring states the following:
Doing so triggers the query annotated to the method as an updating
query instead of selecting one. As the EntityManager might contain
outdated entities after the execution of the modifying query, we do
not automatically clear it (see the JavaDoc of EntityManager.clear()
for details), since this effectively drops all non-flushed changes
still pending in the EntityManager. If you wish the EntityManager to
be cleared automatically, you can set the #Modifying annotation’s
clearAutomatically attribute to true.
Fortunately, starting from Spring Boot 2.0.4.RELEASE, Spring Data added a flushAutomatically flag (https://jira.spring.io/browse/DATAJPA-806) to automatically flush any managed entities in the persistence context before executing the modifying query; see the reference: https://docs.spring.io/spring-data/jpa/docs/2.0.4.RELEASE/api/org/springframework/data/jpa/repository/Modifying.html#flushAutomatically
So the safest way to use @Modifying is:
@Modifying(clearAutomatically=true, flushAutomatically=true)
What happens if we don't use those two flags??
Consider the following code:

public interface UserRepository extends JpaRepository<User, Integer> {

    @Modifying
    @Query("delete from User u where u.active = false")
    void deleteInActiveUsers();
}
Scenario 1: why flushAutomatically
service {
    User johnUser = userRepo.findById(1); // stored in the first-level cache
    johnUser.setActive(false);
    repo.save(johnUser);

    repo.deleteInActiveUsers(); // BAM, it won't delete JOHN right away
    // JOHN still exists, since John with active=false was not
    // flushed to the database when @Modifying kicked in.
    // So imagine that after the `deleteInActiveUsers` line you called a native
    // query or started a new transaction: in both cases John
    // was not deleted, which can lead to faulty business logic.
}
Scenario 2: why clearAutomatically
In the following, consider that johnUser.active is already false:
service {
    User johnUser = userRepo.findById(1); // stored in the first-level cache
    repo.deleteInActiveUsers(); // you think that John is deleted now

    System.out.println(userRepo.findById(1).isPresent()); // TRUE!!!
    System.out.println(userRepo.count()); // 1 !!!

    // JOHN still exists, since in this transaction's persistence context
    // John's object was not cleared when the @Modifying query executed,
    // so John's object is still fetched from the first-level cache.
    // `clearAutomatically` takes care of clearing the objects affected by
    // the modifying query from the current transaction's persistence context.
}
So if, in the same transaction, you are working with modified objects before or after the line that carries @Modifying, then use clearAutomatically and flushAutomatically; if not, you can skip these flags.
By the way, this is another reason why you should always put the @Transactional annotation on the service layer: that way you have a single persistence context for all your managed entities in the same transaction.
Since the persistence context is bound to the Hibernate session, you need to know that a session can contain several transactions; see this answer for more info: https://stackoverflow.com/a/5409180/1460591
The way Spring Data works is that it joins the transactions together (known as transaction propagation) into one transaction (the default propagation is REQUIRED); see this answer for more info: https://stackoverflow.com/a/25710391/1460591
To connect things together: if you have multiple isolated transactions (e.g. no transactional annotation on the service), then, following the way Spring Data works, you have multiple sessions and therefore multiple persistence contexts (aka first-level caches). That means that even with flushAutomatically, an entity you delete or modify in one persistence context might already have been fetched and cached in another transaction's persistence context, which would cause wrong business decisions due to stale or un-synced data.
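A minimal sketch of that recommendation, assuming the UserRepository above (the service class, field and method names are illustrative): both calls run inside one transaction and therefore share one session and one persistence context, which is exactly where flushAutomatically and clearAutomatically matter.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    @Autowired
    private UserRepository repo;

    @Transactional // one transaction -> one session -> one persistence context
    public void deactivateAndPurge(int id) {
        User john = repo.findById(id).orElseThrow(IllegalStateException::new);
        john.setActive(false);      // change is only tracked in the first-level cache so far
        repo.deleteInActiveUsers(); // without flushAutomatically the pending change above is not flushed first
    }
}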
This will trigger the query annotated to the method as an updating query instead of a selecting one. As the EntityManager might contain outdated entities after the execution of the modifying query, we automatically clear it (see the JavaDoc of EntityManager.clear() for details). This effectively drops all non-flushed changes still pending in the EntityManager. If you don't wish the EntityManager to be cleared automatically, you can set the @Modifying annotation's clearAutomatically attribute to false.
For further detail you can follow this link:
http://docs.spring.io/spring-data/jpa/docs/1.3.4.RELEASE/reference/html/jpa.repositories.html
Queries that require a @Modifying annotation include INSERT, UPDATE, DELETE, and DDL statements.
Adding the @Modifying annotation indicates that the query is not a SELECT query.
When you use only the @Query annotation, you should use SELECT queries.
With the @Modifying annotation you can use INSERT, DELETE and UPDATE queries on the method.
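As a hedged illustration, assuming a User entity with active and lastLogin fields (both made up for this example), the same @Query annotation can carry an UPDATE once @Modifying is present:

import java.time.LocalDateTime;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.transaction.annotation.Transactional;

public interface UserRepository extends JpaRepository<User, Integer> {

    @Transactional
    @Modifying
    @Query("update User u set u.active = false where u.lastLogin < :cutoff")
    int deactivateInactiveSince(@Param("cutoff") LocalDateTime cutoff); // returns the number of rows updated
}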
I use Hibernate 3.6.8, Ehcache 2.4.5 (I also tried the latest 2.8.0) and JVM 1.6.0_22 on a high-traffic site, and sometimes I get
ObjectNotFoundException: No row with the given identifier exists: [com.example.Foo#123]
when a new Foo (in this case with id 123) is created via the simplest code possible:
Foo foo = new Foo();
session.save(foo);
The reason is that on all pages of this high-traffic site I load all Foos like this:
session.createQuery("from Foo").setCacheable(true).list();
The table storing Foos contains 1000 rows, and the entity is cached in ehcache:
<class-cache class="com.example.Foo" usage="read-write" />
Other possibly relevant parts of my Hibernate configuration are:
<property name="connection.url">jdbc:mysql://localhost:3306/example?characterEncoding=UTF-8</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.idle_test_period">60</property>
<property name="hibernate.c3p0.min_size">10</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.max_statements">0</property>
<property name="hibernate.c3p0.timeout">0</property>
<property name="hibernate.c3p0.acquireRetryAttempts">1</property>
<property name="hibernate.c3p0.acquireRetryDelay">1</property>
<property name="hibernate.show_sql">true</property>
<property name="hibernate.use_sql_comments">true</property>
<property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.jdbc.use_scrollable_resultset">true</property>
<property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</property>
<property name="net.sf.ehcache.configurationResourceName">/ehcache.xml</property>
<property name="hibernate.cache.use_query_cache">true</property>
The error happens once and then goes away. I suspect that the Ehcache query cache is updated with the new entity id (123), but the entity cache is not yet updated with the contents of that entity. I can reproduce this fairly easily locally using JMeter.
Any idea on how to solve this?
On Foo creation the ObjectNotFoundException is thrown once. If, on the other hand, I delete an instance of Foo, then I constantly (and forever) get an ObjectNotFoundException for each execution of .list(). The stack trace can be seen at http://pastebin.com/raw.php?i=dp3HBgDB
The read-write strategy does not guarantee transactionality between the database and the cache, so I think this is what happens when the write occurs:
The new Foo object is attached to the Hibernate session handling the write.
A lazy-loading proxy is inserted into the second-level cache by the Hibernate session associated with the write request.
The new Foo will be inserted into the database by that same session, but the insert takes a certain time to be built, flushed and committed.
Meanwhile another request has hit the query that loads all Foos. It finds the lazy-loading proxy in the cache (see DefaultLoadEventListener.proxyOrLoad() in the stack trace) and decides to load the object (DefaultLoadEventListener.load()).
This triggers a Hibernate load() of a Foo that has not yet been inserted into the database by the write thread.
No Foo with that id is found in the database, so an ObjectNotFoundException is thrown.
To confirm this, put an exception breakpoint in your IDE and check that, at the moment the exception is thrown, the object has not yet been inserted into the DB. One way to solve it would be to use the transactional cache strategy; a sketch of that follows.
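For instance, if Foo were mapped with annotations rather than the XML mapping, the strategy could be switched like this (a hedged sketch: it assumes a JTA environment and a cache provider that actually supports transactional caches, and the region name is illustrative):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL, region = "com.example.Foo")
public class Foo {

    @Id
    @GeneratedValue
    private Long id;

    // other fields omitted
}

With the XML mapping from the question, the equivalent change would be usage="transactional" in the <class-cache> element.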
To mitigate the case where an entity is deleted and .list() then stops working entirely, I catch ObjectNotFoundException at a higher level, and when this happens I do:
session.getSessionFactory().getCache().evictCollectionRegions();
session.getSessionFactory().getCache().evictDefaultQueryRegion();
session.getSessionFactory().getCache().evictQueryRegions();
Clearing the second-level cache makes the site work again. This of course doesn't prevent the problem from occurring, but it removes the site-wide downtime.
Taken from the documentation on Cache Configuration:
The following attributes and elements are optional.
timeToIdleSeconds:
Sets the time to idle for an element before it expires.
i.e. The maximum amount of time between accesses before an element expires
Is only used if the element is not eternal.
Optional attribute. A value of 0 means that an Element can idle for infinity.
The default value is 0.
timeToLiveSeconds:
Sets the time to live for an element before it expires.
i.e. The maximum time between creation time and when an element expires.
Is only used if the element is not eternal.
Optional attribute. A value of 0 means that an Element can live for infinity.
The default value is 0.
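For example, under the SingletonEhCacheProvider configuration from the question, the same attributes can also be adjusted programmatically with the Ehcache 2.x API. This is only a hedged sketch: the region name is assumed to match the entity class and the timeout values are illustrative.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;

public class FooCacheTuning {

    public static void tuneFooRegion() {
        // The SingletonEhCacheProvider uses the singleton CacheManager.
        CacheManager cacheManager = CacheManager.getInstance();
        Cache fooRegion = cacheManager.getCache("com.example.Foo");

        CacheConfiguration cfg = fooRegion.getCacheConfiguration();
        cfg.setTimeToIdleSeconds(300); // expire entries that have been idle for 5 minutes
        cfg.setTimeToLiveSeconds(600); // expire entries 10 minutes after creation
    }
}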
Or you can also go with the alternate options:
Data Freshness and Expiration
I have a PostgreSQL 8.4 database with some tables and views which are essentially joins on some of the tables. I used NetBeans 7.2 (as described here) to create REST based services derived from those views and tables and deployed those to a Glassfish 3.1.2.2 server.
There is another process which asynchronously updates contents in some of the tables used to build the views. I can directly query the views and tables and see that these changes have occurred correctly. However, when pulled from the REST-based services, the values are not the same as those in the database. I am assuming this is because JPA has cached local copies of the database contents on the Glassfish server and JPA needs to refresh the associated entities.
I have tried adding a couple of methods to the AbstractFacade class NetBeans generates:
public abstract class AbstractFacade<T> {
private Class<T> entityClass;
private String entityName;
private static boolean _refresh = true;
public static void refresh() { _refresh = true; }
public AbstractFacade(Class<T> entityClass) {
this.entityClass = entityClass;
this.entityName = entityClass.getSimpleName();
}
private void doRefresh() {
if (_refresh) {
EntityManager em = getEntityManager();
em.flush();
for (EntityType<?> entity : em.getMetamodel().getEntities()) {
if (entity.getName().contains(entityName)) {
try {
em.refresh(entity);
// log success
}
catch (IllegalArgumentException e) {
// log failure ... typically complains entity is not managed
}
}
}
_refresh = false;
}
}
...
}
I then call doRefresh() from each of the find methods NetBeans generates. What normally happens is that the IllegalArgumentException is thrown, stating something like Can not refresh not managed object: EntityTypeImpl#28524907:MyView [ javaType: class org.my.rest.MyView descriptor: RelationalDescriptor(org.my.rest.MyView --> [DatabaseTable(my_view)]), mappings: 12].
So I'm looking for suggestions on how to correctly refresh the entities associated with the views so they are up to date.
UPDATE: It turns out my understanding of the underlying problem was not correct. It is somewhat related to another question I posted earlier: the view had no single field which could be used as a unique identifier. NetBeans required that I select an ID field, so I just chose one part of what should have been a multi-part key. This produced the behaviour where all records with a particular ID field appeared identical, even though the database held records with the same ID field whose other columns differed. JPA looked no further than what I told it was the unique identifier and simply pulled the first record it found.
I resolved this by adding a unique identifier field (never was able to get the multipart key to work properly).
I recommend adding an #Startup #Singleton class that establishes a JDBC connection to the PostgreSQL database and uses LISTEN and NOTIFY to handle cache invalidation.
Update: Here's another interesting approach, using pgq and a collection of workers for invalidation.
Invalidation signalling
Add a trigger on the table that's being updated that sends a NOTIFY whenever an entity is updated. On PostgreSQL 9.0 and above this NOTIFY can contain a payload, usually a row ID, so you don't have to invalidate your entire cache, just the entity that has changed. On older versions where a payload isn't supported you can either add the invalidated entries to a timestamped log table that your helper class queries when it gets a NOTIFY, or just invalidate the whole cache.
Your helper class now LISTENs on the NOTIFY events the trigger sends. When it gets a NOTIFY event, it can invalidate individual cache entries (see below), or flush the entire cache. You can listen for notifications from the database with PgJDBC's listen/notify support. You will need to unwrap any connection pooler managed java.sql.Connection to get to the underlying PostgreSQL implementation so you can cast it to org.postgresql.PGConnection and call getNotifications() on it.
As an alternative to LISTEN and NOTIFY, you could poll a change-log table on a timer, and have a trigger on the problem table append changed row IDs and change timestamps to that change-log table. This approach is portable, apart from needing a different trigger for each DB type, but it is inefficient and less timely: it requires frequent polling and still has a time delay that the listen/notify approach does not. In PostgreSQL you can use an UNLOGGED table to reduce the cost of this approach a little.
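A hedged sketch of the @Startup @Singleton listener described above. The JNDI datasource name, the foo_changed channel, the entity class and the polling interval are all assumptions for illustration; it keeps one dedicated connection open for LISTEN and evicts the matching second-level cache entry for each notification.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.annotation.Resource;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.sql.DataSource;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

@Startup
@Singleton
public class MyViewCacheInvalidator {

    @Resource(lookup = "jdbc/myPostgresDs") // assumed JNDI name of the pooled datasource
    private DataSource dataSource;

    @PersistenceUnit
    private EntityManagerFactory emf;

    private Connection listenConnection; // dedicated connection kept open for LISTEN

    @PostConstruct
    void start() {
        try {
            listenConnection = dataSource.getConnection();
            try (Statement st = listenConnection.createStatement()) {
                st.execute("LISTEN foo_changed"); // channel the trigger is assumed to NOTIFY on
            }
        } catch (SQLException e) {
            throw new IllegalStateException("Could not register LISTEN", e);
        }
    }

    // Poll the dedicated connection for pending notifications every few seconds.
    @Schedule(hour = "*", minute = "*", second = "*/5", persistent = false)
    void poll() {
        try {
            // Issue a harmless query so the driver reads pending async messages from the socket.
            try (Statement st = listenConnection.createStatement()) {
                st.executeQuery("SELECT 1");
            }
            // Unwrap any pool-managed wrapper to reach the PostgreSQL driver API.
            PGConnection pg = listenConnection.unwrap(PGConnection.class);
            PGNotification[] notifications = pg.getNotifications();
            if (notifications == null) {
                return;
            }
            for (PGNotification n : notifications) {
                String payload = n.getParameter(); // row id sent by the trigger (PostgreSQL 9.0+)
                if (payload == null || payload.isEmpty()) {
                    emf.getCache().evictAll(); // no payload: flush the whole shared cache
                } else {
                    emf.getCache().evict(MyView.class, Long.valueOf(payload)); // entity and key type assumed
                }
            }
        } catch (SQLException e) {
            // log and retry on the next tick
        }
    }

    @PreDestroy
    void stop() {
        try {
            listenConnection.close();
        } catch (SQLException ignored) {
        }
    }
}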
Cache levels
EclipseLink/JPA has a couple of levels of caching.
The 1st level cache is at the EntityManager level. If an entity is attached to an EntityManager by persist(...), merge(...), find(...), etc, then the EntityManager is required to return the same instance of that entity when it is accessed again within the same session, whether or not your application still has references to it. This attached instance won't be up-to-date if your database contents have since changed.
The 2nd level cache, which is optional, is at the EntityManagerFactory level and is a more traditional cache. It isn't clear whether you have the 2nd level cache enabled. Check your EclipseLink logs and your persistence.xml. You can get access to the 2nd level cache with EntityManagerFactory.getCache(); see Cache.
@thedayofcondor showed how to flush the 2nd level cache with:
em.getEntityManagerFactory().getCache().evictAll();
but you can also evict individual objects with the evict(java.lang.Class cls, java.lang.Object primaryKey) call:
em.getEntityManagerFactory().getCache().evict(theClass, thePrimaryKey);
which you can use from your #Startup #Singleton NOTIFY listener to invalidate only those entries that have changed.
The 1st level cache isn't so easy, because it's part of your application logic. You'll want to learn about how the EntityManager, attached and detached entities, etc work. One option is to always use detached entities for the table in question, where you use a new EntityManager whenever you fetch the entity. This question:
Invalidating JPA EntityManager session
has a useful discussion of handling invalidation of the entity manager's cache. However, it's unlikely that an EntityManager cache is your problem, because a RESTful web service is usually implemented using short EntityManager sessions. This is only likely to be an issue if you're using extended persistence contexts, or if you're creating and managing your own EntityManager sessions rather than using container-managed persistence.
You can either disable caching entirely (see: http://wiki.eclipse.org/EclipseLink/FAQ/How_to_disable_the_shared_cache%3F ), but be prepared for a fairly large performance loss.
Otherwise, you can perform a clear cache programmatically with
em.getEntityManagerFactory().getCache().evictAll();
You can map it to a servlet so you can call it externally. This is better if your database is modified externally very seldom and you just want to be sure JPA will pick up the new version.
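A hedged sketch of what such a servlet could look like (the URL pattern and the class name are illustrative):

import java.io.IOException;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/flush-cache")
public class FlushCacheServlet extends HttpServlet {

    @PersistenceUnit
    private EntityManagerFactory emf;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Evict everything from the shared (second-level) cache so subsequent
        // queries go back to the database.
        emf.getCache().evictAll();
        resp.getWriter().println("JPA shared cache evicted");
    }
}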
Just a thought, but how do you receive your EntityManager/Session/whatever?
If you queried the entity in one session, it will be detached in the next one and you will have to merge it back into the persistence context to get it managed again.
Trying to work with detached entities may result in those not-managed exceptions; you should re-query the entity, or you could try merge() (or similar methods).
JPA doesn't do any caching by default; you have to explicitly configure it. I believe this is a side effect of the architectural style you have chosen: REST. I think caching is happening at the web servers, proxy servers, etc. I suggest you read up on this and debug further.
When I try to delete an entry from a db, using
session.delete(object)
then I see the following:
1) If the row is present in the DB then two SQL queries are executed: a SELECT and then a DELETE.
2) If the row is not present in the DB then only the SELECT query is executed.
But this is not the case for update: irrespective of the presence of the DB row, only the UPDATE query is executed.
Please let me know why delete behaves this way. Isn't it a performance issue, since two queries are hit rather than one?
Edit:
I am using hibernate 3.2.5
Sample code:
SessionFactory sessionFactory = new Configuration().configure("student.cfg.xml").buildSessionFactory();
Session session = sessionFactory.openSession();
Student student = new Student();
student.setFirstName("AAA");
student.setLastName("BBB");
student.setCity("CCC");
student.setState("DDD");
student.setCountry("EEE");
student.setId("FFF");
session.delete(student);
session.flush();
session.close();
cfg.xml
<property name="hibernate.connection.username">system</property>
<property name="hibernate.connection.password">XXX</property>
<property name="hibernate.connection.driver_class">oracle.jdbc.OracleDriver</property>
<property name="hibernate.connection.url">jdbc:oracle:thin:#localhost:1521/orcl</property>
<property name="hibernate.jdbc.batch_size">30</property>
<property name="hibernate.dialect">org.hibernate.dialect.OracleDialect</property>
<property name="hibernate.cache.use_query_cache">false</property>
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.connection.release_mode">after_transaction</property>
<property name="hibernate.connection.autocommit">true</property>
<property name="hibernate.connection.pool_size">0</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.show_sql">true</property>
<property name="hibernate.hbm2ddl.auto">update</property>
hbm.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
<class name="com.infy.model.Student" table="STUDENT">
<id name="id" column="ID">
<generator class="assigned"></generator>
</id>
<property name="firstName" type="string" column="FIRSTNAME"></property>
<property name="lastName" type="string" column="LASTNAME"></property>
<property name="city" type="string" column="CITY"></property>
<property name="state" type="string" column="STATE"></property>
<property name="country" type="string" column="COUNTRY"></property>
</class>
</hibernate-mapping>
The reason is that for deleting an object, Hibernate requires the object to be in the persistent state. Thus, Hibernate first fetches the object (SELECT) and then removes it (DELETE).
Why does Hibernate need to fetch the object first? The reason is that Hibernate interceptors might be enabled (http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/events.html), and the object must be passed through these interceptors to complete its lifecycle. If rows were deleted directly in the database, the interceptors wouldn't run.
On the other hand, it's possible to delete entities in one single SQL DELETE statement using bulk operations:
Query q = session.createQuery("delete Entity where id = X");
q.executeUpdate();
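For instance, with the Student mapping from the question, a hedged parameterized variant could look like this (the id value is just an example):

Query q = session.createQuery("delete from Student s where s.id = :id");
q.setParameter("id", "FFF");
int deleted = q.executeUpdate(); // single DELETE, no prior SELECT; bypasses interceptors and cascade settings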
To understand this peculiar behaviour of Hibernate, it is important to understand a few Hibernate concepts:
Hibernate Object States
Transient - An object is in transient status if it has been
instantiated and is still not associated with a Hibernate session.
Persistent - A persistent instance has a representation in the
database and an identifier value. It might just have been saved or
loaded, however, it is by definition in the scope of a Session.
Detached - A detached instance is an object that has been persistent,
but its Session has been closed.
http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/objectstate.html#objectstate-overview
Transaction Write-Behind
The next thing to understand is 'transaction write-behind'. When objects attached to a Hibernate session are modified, they are not immediately propagated to the database. Hibernate does this for at least two reasons:
To perform batch inserts and updates.
To propagate only the last change. If an object is updated more than once, it still fires only one update statement.
http://learningviacode.blogspot.com/2012/02/write-behind-technique-in-hibernate.html
First Level Cache
Hibernate has something called the 'first-level cache'. Whenever you pass an object to save(), update() or saveOrUpdate(), and whenever you retrieve an object using load(), get(), list(), iterate() or scroll(), that object is added to the internal cache of the Session. This is where Hibernate tracks changes to the various objects.
Hibernate Interceptors and Object Lifecycle Listeners -
The Interceptor interface and listener callbacks from the session to the application, allow the application to inspect and/or manipulate properties of a persistent object before it is saved, updated, deleted or loaded.
http://docs.jboss.org/hibernate/orm/4.0/hem/en-US/html/listeners.html#d0e3069
This section Updated
Cascading
Hibernate allows applications to define cascade relationships between associations. For example, 'cascade-delete' from parent to child association will result in deletion of all children when a parent is deleted.
So, why are these important?
To perform transaction write-behind, to track multiple changes to objects (object graphs) and to execute lifecycle callbacks, Hibernate needs to know whether the object is transient or detached, and it needs to have the object in its first-level cache before it makes any changes to the underlying object and its associated relationships.
That's why Hibernate (sometimes) issues a SELECT statement to load the object (if it's not already loaded) into its first-level cache before it makes changes to it.
Why does Hibernate issue the SELECT statement only sometimes?
Hibernate issues a SELECT statement to determine what state the object is in. If the SELECT returns an object, the object is in the detached state; if it does not return an object, the object is in the transient state.
Coming to your scenario -
Delete - the delete issued a SELECT statement because Hibernate needs to know whether the object exists in the database. If it does, Hibernate considers the object detached, re-attaches it to the session and processes the delete lifecycle.
Update - since you are explicitly calling update instead of saveOrUpdate, Hibernate blindly assumes that the object is in the detached state, re-attaches the given object to the session's first-level cache and processes the update lifecycle. If it turns out that the object does not exist in the database, contrary to Hibernate's assumption, an exception is thrown when the session flushes (see the sketch below).
SaveOrUpdate - if you call saveOrUpdate, Hibernate has to determine the state of the object, so it uses a SELECT statement to find out whether the object is transient or detached. If the object is transient, it processes the insert lifecycle; if it is detached, it processes the update lifecycle.
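To make the update case concrete, here is a hedged sketch using the Student mapping from the question (assigned identifier, no versioning; the id value is made up):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

Student ghost = new Student();
ghost.setId("NO-SUCH-ID"); // assigned id that does not exist in the STUDENT table
ghost.setFirstName("AAA");

session.update(ghost); // no SELECT is issued; Hibernate assumes the row exists
tx.commit();           // fails here with a StaleStateException (0 rows updated instead of 1)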
I'm not sure but:
If you call the delete method with a non-transient object, it means you first fetched the object from the DB, so it is normal to see a SELECT statement. Perhaps in the end you see two SELECTs plus one DELETE?
If you call the delete method with a transient object, then it is possible that you have a cascade="delete" or something similar which requires the object to be retrieved first, so that the "nested actions" can be performed if required.
Edit:
Calling delete() with a transient instance means doing something like this:
MyEntity entity = new MyEntity();
entity.setId(1234);
session.delete(entity);
This will delete the row with id 1234, even if the object is a plain POJO that was not retrieved by Hibernate, is not present in its session cache and is not managed by Hibernate at all.
If you have an entity association, Hibernate probably has to fetch the full entity so that it knows whether the delete should be cascaded to associated entities.
Instead of using
session.delete(object)
use
getHibernateTemplate().delete(object)
In both places, for the select query and also for the delete, use getHibernateTemplate().
For the select query you have to use DetachedCriteria or Criteria.
Example of a select query:
List<foo> fooList = new ArrayList<foo>();
DetachedCriteria queryCriteria = DetachedCriteria.forClass(foo.class);
queryCriteria.add(Restrictions.eq("Column_name",restriction));
fooList = getHibernateTemplate().findByCriteria(queryCriteria);
In Hibernate, avoid using the session directly; I am not sure, but the problem may occur just because of how the session is used.