Get all session-cached objects in Hibernate - java

I have a method which executes HQL via org.hibernate.Query.executeUpdate() and changes some rows in the database. If some of the rows affected by the query were previously loaded into the current session (e.g. via Session.get()), they are now stale and need refreshing.
But I want that method to be independent of any previous work done with the session, and not to have to track all loaded objects that might be affected in order to refresh them afterwards. Is it possible with Hibernate to retrieve and iterate over the objects in the first-level cache?

I've found the following solution, which works for me:
Map.Entry<Object, EntityEntry>[] entities = ((SessionImplementor) session).getPersistenceContext().reentrantSafeEntityEntries();
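Building on that line, a minimal sketch of refreshing every entity currently tracked by the session after a bulk update (assuming Hibernate 5.x, where SessionImplementor exposes getPersistenceContext(); it requires the Hibernate jars on the classpath):

```java
import java.util.Map;

import org.hibernate.Session;
import org.hibernate.engine.spi.EntityEntry;
import org.hibernate.engine.spi.SessionImplementor;

final class SessionCacheUtil {
    // Refresh every entity held in the session's first-level cache, so
    // rows changed by a bulk executeUpdate() are re-read from the database.
    static void refreshAllManaged(Session session) {
        Map.Entry<Object, EntityEntry>[] entries =
                ((SessionImplementor) session).getPersistenceContext()
                                              .reentrantSafeEntityEntries();
        for (Map.Entry<Object, EntityEntry> entry : entries) {
            session.refresh(entry.getKey()); // the map key is the entity instance itself
        }
    }
}
```

Note that refreshing every managed entity issues one SELECT per instance, so this is only reasonable for sessions holding a modest number of objects.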

Related

JAVA Swing: JPA controller function not return updated data from database

I am using Java Swing to build a panel with a JTable showing information about books from a SQL Server database. When I click the book button, the panel displays the data.
My problem is that after the first display, if I modify the data in the database and click the book button again (to reload the panel and display the change), the data does not change.
When I looked into the problem, I found that the function responsible for retrieving the data from the database returns the same result on every click, even though the data in the database has changed.
This is the function (line 227, with a breakpoint) and what it does (it is auto-generated by the JPA controller; I have not modified anything):
try {
    CriteriaQuery cq = em.getCriteriaBuilder().createQuery();
    cq.select(cq.from(Books.class));
    Query q = em.createQuery(cq);
    if (!all) {
        q.setMaxResults(maxResults);
        q.setFirstResult(firstResult);
    }
    return q.getResultList();
} finally {
    em.close();
}
My understanding is that each time I click the button to reload the JTable, this function is called, connects to the database, and retrieves the current data. In fact, although it is called, it still returns the old data rather than the current data from the database.
Can anyone explain this situation?
I faced the same issue in one of my Spring Boot implementations.
I solved it using entityManager.refresh(entity).
You need to refresh the loaded data using the EntityManager.
The EntityManager.refresh() operation is used to refresh an object's state from the database. This will revert any non-flushed changes made in the current transaction to the object and refresh its state to what is currently defined on the database. If a flush has occurred, it will refresh to what was flushed. Refresh must be called on a managed object, so you may first need to find the object with the active EntityManager if you have a non-managed instance.
Another solution is to clear the cache using
entityManager.getEntityManagerFactory().getCache().evictAll();
This empties the cache, so objects changed outside the entity manager are fetched with an actual database query instead of being served from an outdated cached value.
Difference between evict and refresh
evict: Mark an instance as no longer needed in the cache.
refresh: Refresh the state of the instance from the database, overwriting changes made to the entity
Remember, this is a classic case where one thread reads a snapshot of the data while another thread updates it, resulting in inconsistency; any such sharing should be synchronized.
In your case a transaction is already in progress and someone else modified the data in the database, so you have to refresh the entities again before you will see the modified data.
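The two options above can be sketched as follows, assuming a plain JPA EntityManager; the entity class Books and the variable bookId are illustrative, taken from the question:

```java
import javax.persistence.EntityManager;

// Assumes 'em' is an open EntityManager and Books is the mapped entity.
Books book = em.find(Books.class, bookId);

// Option 1: re-read this one managed instance from the database,
// discarding any stale first-level-cache state for it.
em.refresh(book);

// Option 2: drop everything from the shared (second-level) cache, so the
// next query hits the database instead of a stale cached copy.
em.getEntityManagerFactory().getCache().evictAll();
```

Option 1 is the targeted fix for a single known-stale instance; option 2 is the blunt instrument when data may have changed behind JPA's back anywhere.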

JDO query returns entity that doesn't match the filter condition

I ran the following JDO query, intended to get a list of entities with an empty "flow id". However, the returned list contained an entity with a non-empty flow id.
PersistenceManager pm = pmf.getPersistenceManagerProxy();
String flowId = "";
Map<String, Object> params = new HashMap<>();
params.put("flowId", flowId);
List<MyEntity> entities = pm.newQuery(MyEntity.class, " this.flowId == :flowId ").setNamedParameters(params).executeList();
It doesn't always happen, but when it does, there is always an update to that entity from another process, clearing the "flow id" at around the same time. Yet the result I got from the above query included that entity with a non-empty flow id. I also checked the JDO object state of the unexpectedly returned entity: it is persistent-clean. The query runs within an active transaction.
Here is the SQL compiled by JDOQLQuery:
SELECT 'com.example.MyEntity' AS "NUCLEUS_TYPE","A0"."CREATE_TIME","A0"."DATA_MAX_TIMESTAMP","A0"."DATA_MIN_TIMESTAMP","A0"."ID","A0"."OBSERVATION_ID","A0"."PARTITION_VALUE","A0"."PARTITION_CYCLE","A0"."PARTITION_TIMESTAMP","A0"."FLOW_ID","A0"."PROCESSING_STAGE","A0"."PROCESSING_STATUS","A0"."RECORD_COUNT","A0"."UPDATE_TIME" FROM "MY_ENTITIES" "A0" WHERE "A0"."FLOW_ID" = ?
Although I don't think it is relevant: the isolation level is read-committed, the entity is detachable, and the query above runs within a transaction. Please help, thanks!
Update
After I changed the isolation level to repeatable-read, it never happened again, so it is very likely related to the isolation level. I'm not sure whether this is a bug. My DataNucleus version is 4.1.6. Any thoughts would help.
I turned off both the level-1 and level-2 caches. It seems caching in DataNucleus does not work correctly in some rare cases where multiple processes update the same row under the read-committed isolation level.
When the datastore applies the filter condition, it sees the latest update from the other process (setting flow_id to empty in my example), so it returns that row. But when DataNucleus looks up the row's fields by its id, it checks the cache first, and a potentially stale value in the cache for that entity appears to be the root cause of this issue.
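Turning both caches off can be done via DataNucleus persistence properties; the property names below are taken from the DataNucleus configuration reference, so double-check them against the 4.1.x documentation for your version:

```properties
# Disable the per-PersistenceManager (level-1) cache
datanucleus.cache.level1.type=none
# Disable the shared (level-2) cache
datanucleus.cache.level2.type=none
```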

How to keep a java list in memory synced with a table in database?

I want to search for an input in a list. That list resides in a database. I see two options for doing this:
Hit the database for each search and return the result.
Keep a copy in memory, synced with the table; search in memory and return the result.
I like the second option, as it will be faster. However, I am unsure how to keep the list in sync with the table.
Example: I have a list L = [12, 11, 14, 42, 56] and I receive the input 14.
I need to return whether the input exists in the list. The list can be updated by other applications, and I need to keep it in sync with the table.
What would be the most efficient approach here, and how do I keep the list in sync with the database?
Is there any way my application can be informed of changes in the table so that I can reload the list on demand?
Instead of recreating your own implementation of something that already exists, I would leverage Hibernate's second-level cache (2LC) with an implementation such as EhCache.
With a 2LC you can specify a time-to-live expiration for your entities; once they expire, any query reloads them from the database. If the cached entity has not yet expired, Hibernate hydrates it from the 2LC application cache rather than from the database.
If you are using Spring, you might also want to take a look at @Cacheable. It operates at the component/bean tier, allowing Spring to cache a result set in a named region. See the Spring documentation for more details.
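The time-to-live idea can be sketched in plain Java without any caching library; the Supplier here is a hypothetical stand-in for the actual database query:

```java
import java.util.List;
import java.util.function.Supplier;

// Cache the list in memory and reload it from the database (via 'loader')
// whenever the cached copy is older than ttlMillis.
class TtlCachedList {
    private final Supplier<List<Integer>> loader; // hypothetical DB loader
    private final long ttlMillis;
    private List<Integer> cached;
    private long loadedAt = Long.MIN_VALUE;

    TtlCachedList(Supplier<List<Integer>> loader, long ttlMillis) {
        this.loader = loader;
        this.ttlMillis = ttlMillis;
    }

    synchronized boolean contains(int value) {
        long now = System.currentTimeMillis();
        if (cached == null || now - loadedAt > ttlMillis) {
            cached = loader.get(); // reload from the database
            loadedAt = now;
        }
        return cached.contains(value);
    }
}
```

Between reloads the list may be stale for at most ttlMillis, which is the same trade-off the 2LC expiration policy gives you.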
To satisfy your requirement, you should control reads and writes in one place; otherwise there will always be cases where the data is out of sync.

How can I stop Java or Hibernate Caching

I have an app that retrieves data from a database, and I monitor the time my app takes to retrieve the data.
But I have an issue: when I use the same input data set to retrieve data with my app, the second retrieval takes much less time.
I assume Java or Hibernate has some cache or temp file that saves the data, so the second run is fast, but I don't want that to happen. I need to monitor the time it actually takes, not the time to retrieve from a cache or temp file.
I tried to forbid cache and temp file generation in the Java Control Panel, and I tried to disable the Hibernate caches (first and second level). But these did not solve my problem: the second run still takes less time than it should.
Any idea what causes the second run to be faster? It is just a simple app that retrieves data from a DB.
The Hibernate first-level cache cannot be disabled (see "How to disable hibernate caching"). You need to understand Hibernate's session cache if you want to force Hibernate to query the database.
Lokesh Gupta has a good tutorial at http://howtodoinjava.com/2013/07/01/understanding-hibernate-first-level-cache-with-example/:
First level cache is associated with the "session" object, and other session objects in the application cannot see it.
The scope of the cached objects is the session. Once the session is closed, the cached objects are gone forever.
First level cache is enabled by default and you cannot disable it.
When we query an entity for the first time, it is retrieved from the database and stored in the first level cache associated with the Hibernate session.
If we query the same object again with the same session object, it is loaded from the cache and no SQL query is executed.
A loaded entity can be removed from the session using the evict() method. The next load of this entity will again make a database call if it has been removed using evict().
The whole session cache can be removed using the clear() method. It removes all the entities stored in the cache.
You should therefore use either the evict() or the clear() method to force a query to the database.
To verify this, you can turn on SQL output using the hibernate.show_sql configuration property (see https://docs.jboss.org/hibernate/orm/5.0/manual/en-US/html/ch03.html#configuration-optional).
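A sketch of the two approaches, assuming a Hibernate Session and an illustrative Book entity (names are not from the question):

```java
import org.hibernate.Session;

// Assumes 'session' is an open Hibernate Session and Book is a mapped entity.
Book book = session.get(Book.class, bookId);

session.evict(book); // drop just this entity from the first-level cache
// ...or...
session.clear();     // drop everything from the first-level cache

// This load now hits the database again, which you can confirm in the
// log output with hibernate.show_sql=true.
Book reloaded = session.get(Book.class, bookId);
```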
Have you tried disabling the cache in the database itself?
I believe the Hibernate first- and second-level caches are Hibernate-specific, but the database will still cache under the hood. See:
MySQL - force not to use cache for testing speed of query

Getting up-to-date data from entity that has One-To-One relation and data is updated externally

Instead of explaining my question, I will try to give an example:
Let's say I have a User entity and an Item entity, and User has a one-to-one relation to Item.
Let's say that at some point my server updates the table using a SQL update query.
My question is: the next time I do something like
Item item = user.getItem();
how can I make sure that the data is up to date, and not the old data that was initially read from the DB when the User instance was first queried?
Hope my question is clear...
Thank you!
You can be certain entities are updated by flushing the entity manager after the DML commands and then querying the object again.
If you do it in a new session, then it will be up to date (provided you don't use the L2 cache for the entities in question).
If you use L2 cache, then it will not be up to date (if the data is updated in the database without Hibernate being aware of it). In this case, if it is ok for your use cases to use stale data for a specific time interval, you can configure expiration policy for User and Item entities, so that their lifespan in the cache is limited. After they expire from the cache, updated data will be fetched from the database.
If you can properly invalidate the affected second-level cache entries upon changing the data in the background, then you can avoid using the stale data entirely (or reduce the possible time interval in which they will be used as stale).
If you do it in the existing session instance, then both Item and User instances will be in the first-level cache, so you will always get the data that were initially fetched. This is almost always the desired behavior. However, you can manually evict an entity instance (session.evict(item); session.evict(user)) or clear the entire session (session.clear()) to evict all the instances from the current session and then re-read them again from the database.
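The per-session options above can be sketched as follows, assuming Hibernate's Session API and the User/Item entities from the question (userId is illustrative):

```java
import org.hibernate.Session;

// Option 1: refresh the managed instance in place. Whether the associated
// Item is re-read too depends on the mapping's cascade settings.
session.refresh(user);
Item item = user.getItem();

// Option 2: evict the stale instance and re-read it from the database.
session.evict(user);
User fresh = session.get(User.class, userId);
Item freshItem = fresh.getItem();
```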
