I have a Hibernate query (Hibernate 3) that only reads data from the database. The database is updated by a separate application, but the query result does not reflect the changes in the database.
After a bit of research, I think it may have something to do with the Hibernate L2 cache (I don't think it's the L1 cache, since I always open a new session and close it when I'm done):
Session session = sessionFactoryWrapper.getSession();
List<FlowCount> result = session.createSQLQuery(flowCountSQL).list();
session.close();
I tried disabling the second-level cache in the Hibernate config file, but it's not working:
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>
<property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
I also added session.setCacheMode(CacheMode.REFRESH); after Session session = sessionFactoryWrapper.getSession(); to force a refresh of the L1 cache, but it's still not working...
Is there another way to pick up the changes in the database? Am I doing something wrong on how to disable the cache? Thanks.
Update:
I did another experiment by monitoring the database query log:
Run the code the 1st time. Check the log: the query shows up.
Wait a few minutes. The data has been changed by another application; I verified this through MySQL Workbench. To distinguish it from the previous query, I added a dummy condition.
Run the code the 2nd time. Check the log: the query shows up again.
Both times I'm using the same query, and since the data has changed, the results should be different, but somehow they're not...
In order to force an L1 cache refresh you can use the refresh(Object) method of Session.
From the Hibernate docs:
Re-read the state of the given instance from the underlying database. It is inadvisable to use this to implement long-running sessions that span many business tasks. This method is, however, useful in certain special circumstances. For example:
- where a database trigger alters the object state upon insert or update
- after executing direct SQL (eg. a mass update) in the same session
- after inserting a Blob or Clob
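A minimal sketch of that (the FlowCount entity and the id variable are assumptions based on your query, not code from your post):
Session session = sessionFactoryWrapper.getSession();
try {
    // this read may be served from the session (L1) cache...
    FlowCount flow = (FlowCount) session.get(FlowCount.class, id);
    // ...so force a re-read of this instance's state from the database
    session.refresh(flow);
} finally {
    session.close();
}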
Moreover, you mentioned that you added session.setCacheMode(CacheMode.REFRESH) to force a refresh of the L1 cache. This won't work because CacheMode has nothing to do with the L1 cache. From the Hibernate docs again:
CacheMode controls how the session interacts with the second-level cache and query cache.
Without a second-level cache and query cache, Hibernate will always fetch all data from the database in a new session.
You can check exactly which query Hibernate executes by enabling the DEBUG log level for the org.hibernate package (and the TRACE level for org.hibernate.type if you want to see bound variables).
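For example, with log4j (commonly used with Hibernate 3; adapt to whatever logging framework you actually use):
log4j.logger.org.hibernate=DEBUG
log4j.logger.org.hibernate.type=TRACE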
How old a change is the query reflecting? If it is showing the changes after some time, it might have to do with how you obtain your session.
I am not familiar with the SessionFactoryWrapper class; is this a custom class that you wrote? Are you somehow caching the session object longer than necessary? If so, the query will reuse the objects that have already been loaded in the session. This is the idea behind the repeatable-read semantics that Hibernate guarantees.
You can clear the session before running your query and it will then return the latest data.
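If SessionFactoryWrapper really is handing back a cached, long-lived session, a sketch of that workaround:
Session session = sessionFactoryWrapper.getSession();
// detach all cached instances so the query re-reads them from the database
session.clear();
List<FlowCount> result = session.createSQLQuery(flowCountSQL).list();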
Hibernate's built-in connection pooling mechanism is buggy (and not intended for production use anyway).
Replace it with a production-quality alternative like c3p0.
I had the exact same issue, where stale data was returned, until I started using c3p0.
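A typical Hibernate 3 + c3p0 configuration looks something like this (the pool sizes are illustrative, not recommendations):
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.idle_test_period">3000</property>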
Just in case it IS the 1st-level cache:
Can you show the query you make?
See the following bugs:
https://hibernate.atlassian.net/browse/HHH-9367
https://jira.grails.org/browse/GRAILS-11645
Additional:
http://howtodoinjava.com/2013/07/01/understanding-hibernate-first-level-cache-with-example/
http://www.dineshonjava.com/p/cacheing-in-hibernate-first-level-and.html#.VhZ7o3VElhE
Repeatable finder problem caused by Hibernate's 1st-level cache
To be clear, both tests succeed, which is not logical at all:
userByEmail('foo@bar.com').email != 'foo@bar.com'
Complete Test
@Issue('https://jira.grails.org/browse/GRAILS-11645')
class FirstLevelCacheSpec extends IntegrationSpec {

    def sessionFactory

    def setup() {
        User.withNewSession {
            User user = new User(email: 'test@test.org', password: 'test-password')
            user.save(flush: true, failOnError: true)
        }
    }

    private void updateObjectInNewSession() {
        User.withNewSession {
            def u = User.findByEmail('test@test.org', [cache: false])
            u.email = 'foo@bar.com'
            u.save(flush: true, failOnError: true)
        }
    }

    private User userByEmail(String email) {
        User.findByEmail(email, [cache: false])
    }

    def "test first update"() {
        when: 'changing the object in another session'
        updateObjectInNewSession()

        then: 'retrieving the object by changed identifier (no 2nd level cache)'
        userByEmail('foo@bar.com').email == 'foo@bar.com'
    }

    def "test stale object in 1st level"() {
        when: 'changing the object after pulling objects to cache by finder'
        userByEmail('test@test.org')
        updateObjectInNewSession()

        then: 'retrieving the object by changed identifier (no 2nd level cache)'
        userByEmail('foo@bar.com').email != 'foo@bar.com'
    }
}
Related
I've been struggling for a few hours with this one and could do with some help.
A client sends an object that contains a list;
One of the objects in the list has been modified on the client;
In some cases I don't want that modified entity to be persisted to the database; I want to keep the original database values.
I have tried the following, along with various attempts to clear(), refresh() and flush() the session:
List<Integer> notToModifyIds = dao.getDoNotModifyIds(parentEntity.getId());
MyEntity entityFromClient, entityFromDb;

for (Integer notToModifyId : notToModifyIds) {
    ListIterator<MyEntity> iterator = parentEntity.getEntities().listIterator();
    while (iterator.hasNext()) {
        entityFromClient = iterator.next();
        if (Objects.equals(entityFromClient.getId(), notToModifyId)) {
            // detach the client's version so Hibernate won't flush its changes
            dao.evict(entityFromClient);
            // re-load the original values from the database
            entityFromDb = (MyEntity) dao.get(MyEntity.class, notToModifyId);
            // ListIterator.remove() takes no argument: it removes the element last returned by next()
            iterator.remove();
            iterator.add(entityFromDb);
        }
    }
}
However, no matter what I try, the values from the client are always persisted to the database. When I add a breakpoint after iterator.add() I can check that the database value has not been updated at that point, so I know that if I could load the entity from the DB I would have the value I want.
I'm feeling a little stupid!
I don't know if I've got the whole scenario here. Are those modified "entitiesFromClient" attached to the Hibernate session? If they are, the changes were probably automatically flushed to the database before you evicted them.
Setting a MANUAL flush mode would help you avoid the automatic behaviour.
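With a plain Hibernate session that is a one-liner (with JPA you would configure the equivalent on the underlying session):
// changes are no longer flushed automatically; they only reach the
// database when you call session.flush() explicitly
session.setFlushMode(FlushMode.MANUAL);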
First of all, I would enable the Hibernate SQL logging to see more precisely what is happening. See Enable Hibernate logging.
Checking the database in another session (while stopped at the breakpoint) will not help if this code is running within a transaction. Even if the change was already flushed to the database, you wouldn't see it until the transaction is committed.
I use MyBatis 3.1.
I have two use cases where I need to bypass the MyBatis local cache and hit the DB directly.
Since the MyBatis configuration file only has global settings, it is not applicable to my case, because I need this as an exception, not as a default. The attributes of the MyBatis <select> XML statement do not seem to include this option.
Use case 1: 'select sysdate from dual'.
MyBatis caching causes this one to always return the same value within a MyBatis session. This causes an issue in my integration test when I try to replicate a situation with an outdated entry.
My workaround was just to use a plain JDBC call.
Use case 2: a 'select' from one thread does not always see the value written by another thread.
Thread 1:
SomeObject stored = dao.insertSomeObject(obj);
runInAnotherThread(stored.getId());
//complete and commit
Thread 2:
// 'id' received as the argument provided to 'runInAnotherThread(...)'
SomeObject stored = dao.findById(id);
int count = 0;
while (stored == null && count < 300) {
    ++count;
    Thread.sleep(1000);
    stored = dao.findById(id);
}
if (stored == null) {
    throw new MyException("There is no SomeObject with id=" + id);
}
I occasionally get MyException errors on a server, but can't reproduce them on my local machine. In all cases the object is actually in the DB, so I guess the error depends on whether the stored object was in the MyBatis local cache the first time, and waiting for 5 minutes does not help, since the loop never checks the actual DB.
So my question is: how do I solve the above use cases within MyBatis, without falling back to plain JDBC?
Being able to somehow signal MyBatis not to use a cached value in a specific call (the best) or in all calls to a specific query would be the preferred option, but I will consider any workaround as well.
I don't know a way to bypass the local cache outright, but there are two options for achieving what you need.
The first option is to set flushCache="true" on the select. This clears the cache after the statement executes, so the next query will hit the database:
<select id="getCurrentDate" resultType="date" flushCache="true">
    SELECT SYSDATE FROM DUAL
</select>
Another option is to use a STATEMENT-level local cache. By default the local cache lives for the duration of a SESSION (which typically translates to a transaction). This is specified by the localCacheScope option and is set per session factory, so it affects all queries going through that MyBatis session factory.
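In mybatis-config.xml that setting looks like this:
<settings>
    <setting name="localCacheScope" value="STATEMENT"/>
</settings>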
Let me summarize.
The solution from the previous answer, the flushCache="true" option on the query, works and solves both use cases. It flushes the cache after every such 'select', so the next 'select' statement will hit the DB. Although the flush happens after the 'select' statement executes, that's fine, since the cache is empty anyway before the first 'select'.
Another solution is to start a new session. I use Spring, so it's enough to mark a method with @Transactional(propagation = Propagation.REQUIRES_NEW). Since the MyBatis session is tied to the Spring transaction, this causes a new MyBatis session with a fresh cache to be created every time the method is called.
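A sketch of that, where the service class and method names are made up for illustration (the DAO is the one from my question):
@Service
public class SomeObjectService {

    @Autowired
    private SomeObjectDao dao;

    // REQUIRES_NEW suspends any current transaction and starts a new one,
    // so this call runs in a fresh MyBatis session with an empty local cache
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public SomeObject findByIdFresh(long id) {
        return dao.findById(id);
    }
}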
For some reason, the MyBatis option useCache="false" on the query does not work.
The following Options annotation can be used:
@Options(useCache=false, flushCache=FlushCachePolicy.TRUE)
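In context on a mapper method it looks like this (note that FlushCachePolicy requires MyBatis 3.3+; in 3.1 the flushCache attribute is a plain boolean, and the mapper below is a made-up example):
public interface DateMapper {
    // don't cache this statement's result, and flush the local cache around it
    @Options(useCache = false, flushCache = Options.FlushCachePolicy.TRUE)
    @Select("SELECT SYSDATE FROM DUAL")
    Date getCurrentDate();
}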
Apart from the answers by Roman and Alexander, there is one more solution for this:
Configuration configuration = MyBatisUtil.getSqlSessionFactory().getConfiguration();
Collection<Cache> caches = configuration.getCaches();

// If you have multiple caches and want a particular one to be cleared:
// Cache cache = configuration.getCache("PPL"); // namespace of the particular XML

for (Cache cache : caches) {
    Lock w = cache.getReadWriteLock().writeLock();
    w.lock();
    try {
        cache.clear();
    } finally {
        w.unlock();
    }
}
I have an application with a table; when you click on an item in the table, it fills in a group of text fields with the item's data (FieldGroup), and you then have the option of saving the changes. I was wondering how I would save the changes the user makes to my Postgres database. I am using Vaadin and Hibernate for this application. So far I have tried to do
editorField.commit() // after the user clicks the save button
I have tried
editorField.commit()
hbsession.persist(editorField) //hbsession is the name of my Session
and I have also tried
editorField.commit();
hbsession.save(editorField);
The last two give me the following error:
Caused by: org.hibernate.SessionException: Session is closed!
Well, the first thing you need to realize is that Vaadin differs from a conventional request/response web framework. Vaadin is actually an event-driven framework, very similar to Swing. It builds an application context from the user's very first click and holds it for the whole visit. The problem is that there is no entry request point where you can open a Hibernate session and no response point where you can close it; there are tons of requests during a single button click.
So the entitymanager-per-request pattern is completely useless here. It is better to use one standalone EM, or an EM-per-session pattern with hibernate.connection.release_mode set to after_transaction, to keep the connection pool small.
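In hibernate.cfg.xml that release mode is:
<property name="hibernate.connection.release_mode">after_transaction</property>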
As for JPAContainer, it is not usable once you need to refresh the container or handle beans with relations. I also did not manage to get it working with batch loading, so every read of an entry or relation costs one select against the DB. It does not support lazy loading.
All you need is an open EM/session. Try the suggested patterns, or open an EM/session for every transaction and merge your bean first.
Your question is quite complex and hard to answer, but I hope these links help you get started:
Pojo binding strategy for hibernate
https://vaadin.com/forum#!/thread/39712
MVP-lite
https://vaadin.com/directory#addon/mvp-lite (stick with event driven pattern)
I have figured out how to make changes to the database; here is some code to demonstrate:
try {
    /** define the session and begin the transaction **/
    hbsession = HibernateUtil.getSessionFactory().getCurrentSession();
    hbsession.beginTransaction();

    /** Table is the name of the bean class mapped to the corresponding SQL table;
        bind the value as a parameter instead of concatenating it into the string **/
    Query q = hbsession.createQuery("UPDATE Table SET name = :name");
    q.setParameter("name", textfield.getValue());

    q.executeUpdate(); /** This command saves changes or deletes an entry in the table **/

    hbsession.getTransaction().commit();
} catch (RuntimeException rex) {
    hbsession.getTransaction().rollback();
    throw rex;
}
I'm trying to update all 4000 of my objects in ProfileEntity, but I am getting the following exception:
javax.persistence.QueryTimeoutException: The datastore operation timed out, or the data was temporarily unavailable.
This is my code:
public synchronized static void setX4all() {
    em = EMF.get().createEntityManager();
    Query query = em.createQuery("SELECT p FROM ProfileEntity p");
    List<ProfileEntity> usersList = query.getResultList();
    int a, b, x;
    for (ProfileEntity profileEntity : usersList) {
        a = profileEntity.getA();
        b = profileEntity.getB();
        x = func(a, b);
        profileEntity.setX(x);
        em.getTransaction().begin();
        em.persist(profileEntity);
        em.getTransaction().commit();
    }
    em.close();
}
I'm guessing that it takes too long to query all of the records from ProfileEntity.
How should I do it?
I'm using Google App Engine so no UPDATE queries are possible.
Edited 18/10
In these 2 days I have tried:
using Backends, as Thanos Makris suggested, but got to a dead end. You can see my question here.
reading the DataNucleus suggestion on Map-Reduce, but really got lost.
I'm looking for a different direction. Since I'm only going to do this update once, maybe I can manually update every 200 objects or so.
Is it possible to query for the first 200 objects, then the second 200 objects, and so on?
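Something like this, perhaps? A rough sketch using JPA's setFirstResult/setMaxResults (note that on the App Engine datastore large offsets get slow, so query cursors would be the more robust choice):
int pageSize = 200;
int offset = 0;
while (true) {
    // fetch the next slice of 200 profiles
    List<ProfileEntity> page = em.createQuery("SELECT p FROM ProfileEntity p")
            .setFirstResult(offset)
            .setMaxResults(pageSize)
            .getResultList();
    if (page.isEmpty()) break;
    em.getTransaction().begin();
    for (ProfileEntity p : page) {
        p.setX(func(p.getA(), p.getB()));
    }
    em.getTransaction().commit();
    offset += pageSize;
}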
Given your scenario, I would advise running a native update query:
Query query = em.createNativeQuery("update ProfileEntity pe set pe.X = 'x'");
query.executeUpdate();
Please note: here the query string is SQL, i.e. update table_name set ...
This will work better.
Change the update process to use something like Map-Reduce, so that everything is done in the datastore. The only problem is that appengine-mapreduce is not fully released yet (though you can easily build the jar yourself and use it in your GAE app; many others have done so).
If you want to set x for all objects, it is better to use an update statement (i.e. native SQL) through the JPA entity manager instead of fetching all objects and updating them one by one.
Maybe you should consider using the Task Queue API, which enables you to execute tasks of up to 10 minutes. If you need to update so many entities that Task Queues do not fit, you could also consider the use of Backends.
Put the transaction outside of the loop:
em.getTransaction().begin();
for (ProfileEntity profileEntity : usersList) {
    ...
}
em.getTransaction().commit();
Your class does not behave very well: JPA is not suitable for bulk updates done this way. You are just starting a lot of transactions in rapid sequence and producing a lot of load on the database. A better solution for your use case would be a bulk update query that sets all the objects without loading them into the JVM first (depending on your object structure and laziness, you could be loading much more data than you think).
See the Hibernate reference:
http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/batch.html#batch-direct
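For reference, a DML-style bulk update in JPQL looks like the sketch below. It only applies when x can be expressed in the query itself (not when it comes from arbitrary Java code like func(a, b)), and as noted in the question, GAE does not support UPDATE queries at all:
em.getTransaction().begin();
// one UPDATE statement in the database; no entities are loaded into the JVM
int updated = em.createQuery("UPDATE ProfileEntity p SET p.x = :x")
                .setParameter("x", someConstant) // someConstant is a placeholder
                .executeUpdate();
em.getTransaction().commit();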
I'm having some trouble with EclipseLink. My program has to interact with a database (representing a building). I've written a little input test mode where I can manually insert stuff through the console.
My problem: a normal getByID operation works just fine if I retrieve an entity I previously inserted through EclipseLink itself (by commit()), but it throws a NoResultException when trying to select a row that was manually inserted via an SQL script (building -> lots of rooms -> script).
This (oversimplified) works fine:
main() {
    MyRoom r = new MyRoom();
    r.setID("floor1-roomnr4");

    em.getTransaction().begin(); // em is the entity manager
    em.persist(r);
    em.getTransaction().commit();

    DAO.getRoomByID("floor1-roomnr4"); // works
}
The combination of generation script + a simple getRoomByID(), however, throws an exception.
If I try it in SQL Developer, I get the result I want for the exact select statement that just threw a NoResultException. I also only get this problem in the input mode; otherwise, selecting the generated rows works fine too.
Does EclipseLink have some cache mechanism I'm unaware of that is causing this problem?
Are you sure EclipseLink and SQL Developer are connected to the same database? Please verify the connection information for both. Is the generation script committing the changes with the "commit" command?
If EclipseLink works similarly to Hibernate, then yes, there is a cache. The "first level cache" guarantees that you get the exact same instance within one transaction, which makes sense. If you know EclipseLink/transactions, try to evict all loaded instances or commit the transaction, and then try your DAO again. This would force EclipseLink to fetch the data from the database again.
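A sketch of both options using the standard JPA API (EclipseLink also has its own JpaCache interface, not shown here):
// empty the persistence context ("first level cache"): all managed
// instances become detached and subsequent reads hit the database
em.clear();

// additionally evict EclipseLink's shared (second-level) cache
em.getEntityManagerFactory().getCache().evictAll();

MyRoom room = DAO.getRoomByID("floor1-roomnr4"); // now re-read from the DB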
See Answer to similar question