I use Hibernate 3.6.8, ehcache 2.4.5 (also tried with latest 2.8.0), jvm 1.6.0_22 on a high traffic site, and sometimes I experience
ObjectNotFoundException: No row with the given identifier exists: [com.example.Foo#123]
when a new Foo (in this case with id 123) is created via the simplest code possible:
Foo foo = new Foo();
session.save(foo);
The reason is that in all pages of this high traffic site I get all Foos like this:
session.createQuery("from Foo").setCacheable(true).list();
The table storing Foos contains 1000 rows, and the entity is cached in ehcache:
<class-cache class="com.example.Foo" usage="read-write" />
Other possibly relevant parts of my Hibernate configuration are:
<property name="connection.url">jdbc:mysql://localhost:3306/example?characterEncoding=UTF-8</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.idle_test_period">60</property>
<property name="hibernate.c3p0.min_size">10</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.max_statements">0</property>
<property name="hibernate.c3p0.timeout">0</property>
<property name="hibernate.c3p0.acquireRetryAttempts">1</property>
<property name="hibernate.c3p0.acquireRetryDelay">1</property>
<property name="hibernate.show_sql">true</property>
<property name="hibernate.use_sql_comments">true</property>
<property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.jdbc.use_scrollable_resultset">true</property>
<property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</property>
<property name="net.sf.ehcache.configurationResourceName">/ehcache.xml</property>
<property name="hibernate.cache.use_query_cache">true</property>
The error happens once and then goes away. I suspect that the ehcache query cache is updated with the new entity id (123), but the entity cache is not yet updated with the contents of that entity. I can reproduce this fairly easily locally using JMeter.
Any idea on how to solve this?
On Foo creation the ObjectNotFoundException is thrown once. If, on the other hand, I delete an instance of Foo, then I constantly (and forever) get ObjectNotFoundException on each execution of .list(). The stacktrace can be seen at http://pastebin.com/raw.php?i=dp3HBgDB
The read-write strategy does not guarantee transactionality between the database and the cache, so I think this is what happens when the write occurs:
the new object Foo is attached to the hibernate session of the write request.
a lazy loading proxy is inserted into the second level cache by the hibernate session associated with the write request.
The new Foo will be inserted in the database by that same session, but the insert will take a certain time to be built, flushed and committed.
meanwhile another request has hit the proxy while loading all Foos. It finds the lazy loading proxy in the cache (see DefaultLoadEventListener.proxyOrLoad() in the stacktrace) and decides to load the object (DefaultLoadEventListener.load()).
This triggers a Hibernate load() of a Foo not yet inserted in the database by the write thread.
No Foo with that id is found in the database, so an ObjectNotFoundException is thrown.
To confirm this, put an exception breakpoint in your IDE to see that, at the moment the exception is thrown, the object has not yet been inserted in the DB. One way to solve it would be to use the transactional strategy.
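For instance, the entity's cache concurrency strategy in the mapping would change from read-write to transactional. This is only a sketch: the transactional strategy requires a JTA environment and a cache provider that actually supports it, which a plain ehcache setup may not.
<class-cache class="com.example.Foo" usage="transactional" />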
To mitigate the case where an entity is deleted and then list() does not work at all I've caught ObjectNotFoundException at a higher level and when this happens I do:
session.getSessionFactory().getCache().evictCollectionRegions();
session.getSessionFactory().getCache().evictDefaultQueryRegion();
session.getSessionFactory().getCache().evictQueryRegions();
Clearing the 2nd level cache makes the site work again. This of course doesn't prevent the problem from occurring, but it solves the downtime problem for the whole site.
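As a sketch, the higher-level catch looks roughly like this (loadAllFoos() is a hypothetical wrapper around the cacheable "from Foo" query):
List<Foo> foos;
try {
    foos = loadAllFoos();
} catch (ObjectNotFoundException e) {
    // a stale id in the query cache points at a row that no longer exists:
    // flush the query/collection regions and retry once
    Cache cache = session.getSessionFactory().getCache();
    cache.evictCollectionRegions();
    cache.evictDefaultQueryRegion();
    cache.evictQueryRegions();
    foos = loadAllFoos();
}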
Taken from the documentation on Cache Configuration:
The following attributes and elements are optional.
timeToIdleSeconds:
Sets the time to idle for an element before it expires.
i.e. The maximum amount of time between accesses before an element expires
Is only used if the element is not eternal.
Optional attribute. A value of 0 means that an Element can idle for infinity.
The default value is 0.
timeToLiveSeconds:
Sets the time to live for an element before it expires.
i.e. The maximum time between creation time and when an element expires.
Is only used if the element is not eternal.
Optional attribute. A value of 0 means that an Element can live for infinity.
The default value is 0.
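Applied to the entity region above, a hypothetical ehcache.xml entry would look like this (the numbers are illustrative only, not recommendations):
<cache name="com.example.Foo"
       maxElementsInMemory="2000"
       eternal="false"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"/>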
Or you can also go with the alternate options:
Data Freshness and Expiration
Related
I wanted to update my database schema by adding new tables, but Hibernate doesn't do anything. My database stays the same every time, even when I change the property to create. It won't change anything, and it's the first time this has happened.
I have added the annotations and everything that is needed, and nothing happens.
First of all, make sure that your persistence file has your entity classes listed within <class> elements:
<class>path.EntityName</class>
Secondly, the persistence file should also have the update property set:
<property name="hibernate.hbm2ddl.auto" value="update"/>
However, the above property does not work in the following cases:
hibernate.hbm2ddl.auto=update will add a db column that doesn't already exist, but it will not delete a db column that is removed/no longer in your entity.
hibernate.hbm2ddl.auto=update will not modify a db column that has already been created, i.e. it won't change existing table column definitions.
You'll need to back up the table data, drop the table, and restart your application to get that table's schema back in sync with your entity, then reload your data. Or you can do it manually through SQL queries on the database tables.
Add hibernate configuration:
<prop key="hibernate.hbm2ddl.auto">create</prop>
I am using Hibernate for database connectivity. Now I need the read-uncommitted transaction isolation level, but it is not working for me. Can anyone help me find the reason? I have used this code:
@org.springframework.transaction.annotation.Transactional(
        isolation = Isolation.READ_UNCOMMITTED)
public void addRank() {
    // change the status of the rank with id 1 via the private method addChangeStatus
    addChangeStatus(1);
    // here I want to get the rank with id 1 and status true, but I get null
}

private void addChangeStatus(int rankId) {
    // first get the rank entity from the database using the given id
    Rank rank = dao.getRankById(rankId);
    // set the status of the rank to true
    rank.setStatus(true);
}

In the dao, the getRankById method:

Rank getRankById(int rankId) {
    // get the current session from the session factory and run the named query
    return (Rank) getCurrentSession().getNamedQuery(queryName)
            .setParameter("id", rankId)
            .uniqueResult();
}
Can anyone tell me the reason why read uncommitted is not working? In the XML file I have also configured the transaction manager as:
<bean id="transactionManager"
class="org.springframework.orm.hibernate5.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
Can anyone help me understand how I can get the updated rank entity within the same transaction?
You cannot test READ_UNCOMMITTED from the same transaction.
What you tried to test there is not working, but for a different reason. If you were trying to retrieve using EntityManager#find(), you'd get back the result from the 1st level cache, which already holds your not-yet-persisted change, without a DB round-trip. The 1st level cache is not consulted by queries.
So:
If you want to store and read in the same transaction then you don't need READ_UNCOMMITTED. Just change your retrieval to either EntityManager#find() or Session#load() to fetch it from the 1st level cache (see the sketch after this list).
If you really do need READ_UNCOMMITTED, revise your test so that it involves multiple transactions.
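A minimal sketch of the first option, assuming a Rank entity with an Integer id of 1 and a boolean status property (names taken from the question):
@Transactional // default isolation is enough: everything happens inside one transaction
public void addRank() {
    Rank rank = entityManager.find(Rank.class, 1); // SELECT, instance becomes managed
    rank.setStatus(true);                          // in-memory change, tracked by the session
    // find() with the same id returns the managed instance from the
    // 1st level cache, so the uncommitted change is visible here:
    Rank same = entityManager.find(Rank.class, 1);
    assert same.isStatus();
}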
When I try to delete an entry from a db, using
session.delete(object)
then I see the following:
1) If the row is present in the DB, then two SQL queries are executed: a select and then a delete.
2) If the row is not present in the DB, then only the select query is executed.
But this is not the case for update: irrespective of the presence of the DB row, only the update query is executed.
Please let me know why the delete operation behaves this way. Isn't it a performance issue, since two queries are hit rather than one?
Edit:
I am using hibernate 3.2.5
Sample code:
SessionFactory sessionFactory = new Configuration().configure("student.cfg.xml").buildSessionFactory();
Session session = sessionFactory.openSession();
Student student = new Student();
student.setFirstName("AAA");
student.setLastName("BBB");
student.setCity("CCC");
student.setState("DDD");
student.setCountry("EEE");
student.setId("FFF");
session.delete(student);
session.flush();
session.close();
cfg.xml
<property name="hibernate.connection.username">system</property>
<property name="hibernate.connection.password">XXX</property>
<property name="hibernate.connection.driver_class">oracle.jdbc.OracleDriver</property>
<property name="hibernate.connection.url">jdbc:oracle:thin:#localhost:1521/orcl</property>
<property name="hibernate.jdbc.batch_size">30</property>
<property name="hibernate.dialect">org.hibernate.dialect.OracleDialect</property>
<property name="hibernate.cache.use_query_cache">false</property>
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.connection.release_mode">after_transaction</property>
<property name="hibernate.connection.autocommit">true</property>
<property name="hibernate.connection.pool_size">0</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.show_sql">true</property>
<property name="hibernate.hbm2ddl.auto">update</property>
hbm.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
<class name="com.infy.model.Student" table="STUDENT">
<id name="id" column="ID">
<generator class="assigned"></generator>
</id>
<property name="firstName" type="string" column="FIRSTNAME"></property>
<property name="lastName" type="string" column="LASTNAME"></property>
<property name="city" type="string" column="CITY"></property>
<property name="state" type="string" column="STATE"></property>
<property name="country" type="string" column="COUNTRY"></property>
</class>
</hibernate-mapping>
The reason is that, for deleting an object, Hibernate requires the object to be in the persistent state. Thus, Hibernate first fetches the object (SELECT) and then removes it (DELETE).
Why does Hibernate need to fetch the object first? The reason is that Hibernate interceptors might be enabled (http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/events.html), and the object must be passed through these interceptors to complete its lifecycle. If rows were deleted directly in the database, the interceptors wouldn't run.
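For illustration, a sketch of such an interceptor (a hypothetical auditing class extending Hibernate's EmptyInterceptor); it can only do its job if Hibernate hands it the loaded entity state, which is what the SELECT provides:
import java.io.Serializable;
import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class AuditInterceptor extends EmptyInterceptor {
    @Override
    public void onDelete(Object entity, Serializable id,
                         Object[] state, String[] propertyNames, Type[] types) {
        // 'state' holds the entity's loaded property values, which is why
        // Hibernate SELECTs the row before issuing the DELETE
        System.out.println("deleting " + entity.getClass().getSimpleName() + "#" + id);
    }
}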
On the other hand, it's possible to delete entities in one single SQL DELETE statement using bulk operations:
// bulk delete: one DELETE statement, bypassing the session cache and interceptors
Query q = session.createQuery("delete Entity where id = :id");
q.setParameter("id", entityId); // bind the id of the row to remove
q.executeUpdate();
To understand this peculiar behavior of hibernate, it is important to understand a few hibernate concepts -
Hibernate Object States
Transient - An object is in the transient state if it has been instantiated and is not yet associated with a Hibernate session.
Persistent - A persistent instance has a representation in the database and an identifier value. It might just have been saved or loaded; in any case, it is by definition in the scope of a Session.
Detached - A detached instance is an object that has been persistent, but whose Session has been closed.
http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/objectstate.html#objectstate-overview
Transaction Write-Behind
The next thing to understand is 'transaction write-behind'. When objects attached to a Hibernate session are modified, they are not immediately propagated to the database. Hibernate does this for at least two reasons:
To perform batch inserts and updates.
To propagate only the last change. If an object is updated more than once, Hibernate still fires only one update statement.
http://learningviacode.blogspot.com/2012/02/write-behind-technique-in-hibernate.html
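A small sketch of the effect, reusing the Student entity from the earlier question (and assuming a row with id "FFF" exists): two changes to the same managed object result in a single UPDATE at flush time.
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

Student student = (Student) session.get(Student.class, "FFF"); // one SELECT
student.setCity("GGG");   // no SQL yet: the change is queued in the session
student.setState("HHH");  // still no SQL: changes accumulate in memory

tx.commit();              // a single UPDATE carrying both changes is flushed here
session.close();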
First Level Cache
Hibernate has something called 'First Level Cache'. Whenever you pass an object to save(), update() or saveOrUpdate(), and whenever you retrieve an object using load(), get(), list(), iterate() or scroll(), that object is added to the internal cache of the Session. This is where it tracks changes to various objects.
Hibernate Interceptors and Object Lifecycle Listeners -
The Interceptor interface and listener callbacks from the session to the application allow the application to inspect and/or manipulate properties of a persistent object before it is saved, updated, deleted or loaded.
http://docs.jboss.org/hibernate/orm/4.0/hem/en-US/html/listeners.html#d0e3069
Cascading
Hibernate allows applications to define cascade relationships between associations. For example, 'cascade-delete' from parent to child association will result in deletion of all children when a parent is deleted.
So, why are these important?
To be able to do transaction write-behind, to track multiple changes to objects (object graphs), and to execute lifecycle callbacks, Hibernate needs to know whether the object is transient or detached, and it needs to have the object in its first level cache before it makes any changes to the underlying object and its associated relationships.
That's why Hibernate (sometimes) issues a SELECT statement to load the object (if it's not already loaded) into its first level cache before it makes changes to it.
Why does hibernate issue the 'SELECT' statement only sometimes?
Hibernate issues a SELECT statement to determine what state the object is in. If the select statement returns an object, the object is in the detached state; if it does not return one, the object is in the transient state.
Coming to your scenario -
Delete - The delete issued a SELECT statement because Hibernate needs to know whether the object exists in the database. If it does, Hibernate considers the object detached, re-attaches it to the session, and processes the delete lifecycle.
Update - Since you are explicitly calling 'Update' instead of 'SaveOrUpdate', Hibernate blindly assumes that the object is in the detached state, re-attaches the given object to the session's first level cache, and processes the update lifecycle. If it turns out that the object does not exist in the database, contrary to Hibernate's assumption, an exception is thrown when the session flushes.
SaveOrUpdate - If you call 'SaveOrUpdate', hibernate has to determine the state of the object, so it uses a SELECT statement to determine if the object is in Transient/Detached state. If the object is in transient state, it processes the 'insert' lifecycle and if the object is in detached state, it processes the 'Update' lifecycle.
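A short sketch of the update case described above (the id value is made up): the failure surfaces at flush, not at the update() call, and in Hibernate it typically appears as a StaleStateException.
Student ghost = new Student();
ghost.setId("ZZZ");      // assume no row with this id exists in STUDENT
session.update(ghost);   // no SQL yet: Hibernate trusts that the object is detached
session.flush();         // UPDATE ... WHERE ID='ZZZ' affects 0 rows
                         // -> Hibernate throws StaleStateException here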
I'm not sure, but:
If you call the delete method with a non-transient object, this means you first fetched the object from the DB, so it is normal to see a select statement. Perhaps in the end you see two selects plus one delete?
If you call the delete method with a transient object, then it is possible that you have a cascade="delete" or something similar, which requires retrieving the object first so that the 'nested actions' can be performed if required.
Edit:
Calling delete() with a transient instance means doing something like this:
MyEntity entity = new MyEntity();
entity.setId(1234);
session.delete(entity);
This will delete the row with id 1234, even if the object is a simple pojo not retrieved by Hibernate, not present in its session cache, not managed at all by Hibernate.
If you have an entity association, Hibernate probably has to fetch the full entity so that it knows whether the delete should be cascaded to associated entities.
Instead of using
session.delete(object)
use
getHibernateTemplate().delete(object)
In both places, for the select query and also for the delete, use getHibernateTemplate().
For the select query you have to use DetachedCriteria or Criteria.
Example for select query
List<foo> fooList = new ArrayList<foo>();
DetachedCriteria queryCriteria = DetachedCriteria.forClass(foo.class);
queryCriteria.add(Restrictions.eq("Column_name",restriction));
fooList = getHibernateTemplate().findByCriteria(queryCriteria);
In Hibernate, avoid using the raw session. I am not sure, but the problem may occur just because of that session use.
I am implementing an Entity Attribute Value based persistence mechanism. All DB access is done via Hibernate.
I have a table that contains paths for nodes; it is extremely simple, just an id and a path (string). The paths would be small in number, around a few thousand.
The main table has millions of rows, and rather than repeating the paths, I've normalized the paths into their own table. The following is the behaviour I want when inserting into the main table:
1) Check if the path exists in the paths table (query via the entity manager, using the path value as a parameter).
2) If it does not exist, insert it and get its id (persist via the entity manager).
3) Put the id as the foreign key value in the main table row, and insert that row into the main table.
This is going to happen thousands of times for a set of domain objects, which correspond to lots of rows in the main table and some other tables. So the steps above are repeated in a single transaction like this:
EntityTransaction t = entityManager.getTransaction();
t.begin();
//perform steps given above, check, and then persist etc..
t.commit();
When I perform step 2, it introduces a huge performance drop in the total operation. It is begging for caching, because after a while that table will have at most 10-20k entries with very rare new inserts. I've tried to do this with Hibernate and lost almost two days.
I'm using Hibernate 4.1, with JPA annotations and ehcache. I've tried to enable query caching, even using the same query object throughout the inserts, as shown below:
Query call = entityManager.createQuery("select pt from NodePath pt " +
    "where pt.path = :pathStr");
call.setHint("org.hibernate.cacheable", true);
call.setParameter("pathStr", pPath);
List<NodePath> paths = call.getResultList();
if(paths.size() > 1)
throw new Exception("path table should have unique paths");
else if (paths.size() == 1){
NodePath path = paths.get(0);
return path.getId();
}
else {//paths null or has zero size
NodePath newPath = new NodePath();
newPath.setPath(pPath);
entityManager.persist(newPath);
return newPath.getId();
}
The NodePath entity is annotated as follows:
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
@Table(name = "node_path", schema = "public")
public class NodePath implements java.io.Serializable {
The query cache is being used, as far as I can see from the statistics, but no use for second level cache is reported:
queries executed to database=1
query cache puts=1
query cache hits=689
query cache misses=1
....
second level cache puts=0
second level cache hits=0
second level cache misses=0
entities loaded=1
....
A simple hand-written hashtable used as a cache works as expected, cutting the total time drastically. I guess I'm failing to trigger Hibernate's caching due to the nature of my operations.
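For comparison, a sketch of such a hand-written cache (pathCache and queryOrInsertPath() are hypothetical names, the latter wrapping the query-or-insert logic shown above; unlike the 2nd level cache, this ignores eviction and multi-node consistency):
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

private final ConcurrentMap<String, UUID> pathCache =
        new ConcurrentHashMap<String, UUID>();

private UUID getOrCreatePathId(String pPath) throws Exception {
    UUID id = pathCache.get(pPath);
    if (id == null) {
        id = queryOrInsertPath(pPath); // the query-or-insert logic shown above
        pathCache.putIfAbsent(pPath, id);
    }
    return id;
}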
How do I use hibernate's second level cache with this setup? For the record, this is my persistence xml:
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                 http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
             version="2.0">
<persistence-unit name="...">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<class>...</class>
<exclude-unlisted-classes>true</exclude-unlisted-classes>
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
<properties>
<property name="hibernate.connection.driver_class" value="org.postgresql.Driver" />
<property name="hibernate.connection.password" value="zyx" />
<property name="hibernate.connection.url" value="jdbc:postgresql://192.168.0.194:5432/testdbforml" />
<property name="hibernate.connection.username" value="postgres"/>
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
<property name="hibernate.search.autoregister_listeners" value="false"/>
<property name="hibernate.jdbc.batch_size" value="200"/>
<property name="hibernate.connection.autocommit" value="false"/>
<property name="hibernate.generate_statistics" value="true"/>
<property name="hibernate.cache.use_structured_entries" value="true"/>
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.use_query_cache" value="true"/>
<property name="hibernate.cache.region.factory_class" value="org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory"/>
</properties>
</persistence-unit>
</persistence>
Ok, I found it.
My problem was that the cached query was keeping only the ids of the query results in the cache, and it was (probably) going back to the db to get the actual values rather than getting them from the second level cache.
The problem, of course, is that the query did not put those values into the second level cache, since they were not selected by primary id. So the solution is to use a method that does put values into the second level cache, and with Hibernate 4.1 I've managed to do this with a natural id. Here is the function that either inserts a new path or returns its id from the cache, just in case it helps anybody else:
private UUID persistPath(String pPath) throws Exception{
org.hibernate.Session session = (Session) entityManager.getDelegate();
NodePath np = (NodePath) session.byNaturalId(NodePath.class).using("path", pPath).load();
if(np != null)
return np.getId();
else {//no such path entry, so let's create one
NodePath newPath = new NodePath();
newPath.setPath(pPath);
entityManager.persist(newPath);
return newPath.getId();
}
}
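For byNaturalId() to work, the path property has to be mapped as a natural id. A minimal sketch of the entity (mapping details assumed; in Hibernate 4.1 the additional @NaturalIdCache annotation should let the natural-id resolution itself be cached):
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
@NaturalIdCache
@Table(name = "node_path", schema = "public")
public class NodePath implements java.io.Serializable {

    @Id
    private UUID id; // id mapping/generator as in the original entity

    // the byNaturalId() lookup in persistPath() resolves through this mapping
    @NaturalId
    @Column(name = "path", unique = true)
    private String path;

    // getters and setters omitted
}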
I turned off the EclipseLink cache because I'm modifying data externally and don't want the hassle of having to manually refresh everything. Apparently, this is the correct way to switch off the cache in persistence.xml to avoid object identity issues:
<properties>
<property name="eclipselink.cache.shared.default" value="false"/>
</properties>
And here's the exception:
Exception [EclipseLink-6094] (Eclipse Persistence Services - 2.3.0.v20110604-r9504): org.eclipse.persistence.exceptions.QueryException
Exception Description: The parameter name [patient_id] in the query's selection criteria does not match any parameter name defined in the query.
Query: ReadAllQuery(name="file:/C:/dev/repsitory/trunk/java/server/myapp-server/myapp-server-ear/target/gfdeploy/au.com.myapp_myapp-server-ear_ear_1.0-SNAPSHOT/myapp-server-ejb-1.0-SNAPSHOT_jar/_myappPU590288694" referenceClass=PatientRecord sql="SELECT active, new_patient, patient_id_external, rank, patient_id, clinic_system_id FROM postgres.myapp.patient_record WHERE (patient_id = ?)")
I can't even understand the exception message. It's talking about parameter names in the query, but JDBC parameters aren't named.
Any idea how to work around this without switching the cache back on?
As it turns out, I had created an instance of PatientRecord that included one or two detached objects (many-to-one from PatientRecord's perspective). This wasn't a problem with caching on because those objects never became detached.
I merged the objects first, and then it worked.
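A minimal sketch of that fix (Clinic and the setter are assumed names, based on the clinic_system_id column in the query above): merge the detached reference back into the persistence context before wiring it to the new PatientRecord.
// 'detachedClinic' was loaded in an earlier persistence context and is now detached;
// merge() returns a managed copy that is safe to reference
Clinic managedClinic = em.merge(detachedClinic);

PatientRecord record = new PatientRecord();
record.setClinic(managedClinic); // reference the managed instance, not the detached one
em.persist(record);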