I have an app that retrieves data from a database, and I monitor the time the app takes to retrieve it.
The problem is that when I run the same input set a second time, retrieval takes much less time.
I assume Java or Hibernate keeps some cache or temp file, so the second run is fast, but I don't want that. I need to measure the time the query actually takes, not the time it takes to read from a cache or temp file.
I tried forbidding cache and temp file generation in the Java Control Panel, and I tried disabling the Hibernate cache (first and second level), but neither solved the problem. The second run still takes less time than it should.
Any idea what causes the second run to be faster? It is just a simple app that retrieves data from the DB.
The Hibernate first-level cache cannot be disabled (see How to disable hibernate caching). You need to understand Hibernate's session cache if you want to force Hibernate to query the database.
Lokesh Gupta has a good tutorial on http://howtodoinjava.com/2013/07/01/understanding-hibernate-first-level-cache-with-example/
First level cache is associated with the "session" object; other session objects in the application cannot see it.
The scope of cached objects is the session. Once the session is closed, the cached objects are gone forever.
First level cache is enabled by default and you cannot disable it.
When we query an entity the first time, it is retrieved from the database and stored in the first level cache associated with the Hibernate session.
If we query the same object again with the same session object, it is loaded from the cache and no SQL query is executed.
A loaded entity can be removed from the session using the evict() method. The next load of this entity will make a database call again if it has been removed using evict().
The whole session cache can be removed using the clear() method. It will remove all the entities stored in the cache.
You should therefore use either the evict() or the clear() method to force a query to the database.
In order to verify this, you can turn on SQL output using the hibernate.show_sql configuration property (see https://docs.jboss.org/hibernate/orm/5.0/manual/en-US/html/ch03.html#configuration-optional).
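A minimal sketch of the above, assuming a configured `SessionFactory` and a hypothetical mapped `Animal` entity with a `Long` id (names are illustrative, not from the question):

```java
// Sketch only: assumes a configured SessionFactory and a mapped Animal entity.
Session session = sessionFactory.openSession();

Animal a1 = session.get(Animal.class, 1L);   // hits the database, entity cached in session
Animal a2 = session.get(Animal.class, 1L);   // served from the first-level cache, no SQL

session.evict(a1);                           // remove this one entity from the session cache
Animal a3 = session.get(Animal.class, 1L);   // hits the database again

session.clear();                             // or drop the entire session cache
Animal a4 = session.get(Animal.class, 1L);   // hits the database again

session.close();
```

With `hibernate.show_sql=true` you should see a SELECT logged for the first, third, and fourth `get()` calls, but not for the second.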
Have you tried disabling the cache in the database itself?
Hibernate's first- and second-level caches are Hibernate-specific, but the database will still cache under the hood.
MySQL - force not to use cache for testing speed of query
I have a data table like "animals" and I want to search for 5 animals at once (opening the connection once), get their respective IDs, and then close the connection only once, for the sake of performance, instead of opening a connection, searching for one animal, closing the connection, and repeating this five times.
Is there a way to do this?
I think you are looking for a caching solution. When you use an ORM library like Hibernate, it automatically enables a first-level cache that is available through the particular Hibernate session. If you need it, you can also use a second-level cache (e.g. EhCache) that is available throughout the entire application (this requires manual setup).
If an application-level cache is not possible, you can go with a database cache like Redis. But in your case, I guess an application-level cache will do the trick.
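Independently of caching, the five lookups can also be collapsed into a single round trip with an `in` clause. A sketch, assuming a configured `SessionFactory`, the Hibernate 5.2+ query API, and a hypothetical `Animal` entity with `id` and `name` fields:

```java
// Sketch only: Animal is a hypothetical mapped entity.
Session session = sessionFactory.openSession();
try {
    List<String> names = Arrays.asList("cat", "dog", "fox", "owl", "bat");

    // One query, one connection: fetch all matching ids at once
    List<Long> ids = session
        .createQuery("select a.id from Animal a where a.name in (:names)", Long.class)
        .setParameterList("names", names)
        .getResultList();
} finally {
    session.close(); // connection released once, after all lookups
}
```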
I want to search for inputs in a list that resides in a database. I see two options for doing that:
Hit the DB for each search and return the result.
Keep an in-memory copy synced with the table, search that copy, and return the result.
I like the second option, as it will be faster. However, I am confused about how to keep the list in sync with the table.
Example: I have a list L = [12, 11, 14, 42, 56] and I receive the input 14.
I need to return whether the input exists in the list. The list can be updated by other applications, so I need to keep it in sync with the table.
What would be the most optimized approach here, and how do I keep the list in sync with the database?
Is there any way my application can be informed of changes in the table, so that I can reload the list on demand?
Instead of recreating your own implementation of something that already exists, I would leverage Hibernate's Second Level Cache (2LC) with an implementation such as EhCache.
With a 2LC, you can specify a time-to-live for your entities; once they expire, any query will reload them from the database. If an entity's cache entry has not yet expired, Hibernate will hydrate it from the 2LC application cache rather than from the database.
If you are using Spring, you might also want to take a look at @Cacheable. This operates at the component/bean tier, allowing Spring to cache a result set in a named region. See their documentation for more details.
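A sketch of the Spring approach, assuming caching is enabled via `@EnableCaching` and a cache region named `animals` is configured; the service, repository, and cache names are illustrative:

```java
@Service
public class AnimalService {

    private final AnimalRepository repository; // hypothetical repository

    public AnimalService(AnimalRepository repository) {
        this.repository = repository;
    }

    // First call with a given id hits the database; subsequent calls
    // with the same id are served from the "animals" cache region.
    @Cacheable("animals")
    public Animal findById(long id) {
        return repository.findById(id);
    }

    // Keep the cache in sync when data changes through this service.
    @CacheEvict(value = "animals", key = "#animal.id")
    public void update(Animal animal) {
        repository.save(animal);
    }
}
```

Note that, as with Hibernate's 2LC, this cannot see changes made to the database by other applications; those still require expiration or explicit eviction.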
To satisfy your requirement, you should control reads and writes in one place; otherwise, there will always be cases where the data is out of sync.
Instead of explaining my question, I will try to give an example:
Let's say I have a User entity and an Item entity, and User has a one-to-one relation to Item.
Let's say that at some point my server updates the table using a SQL update query.
My question is: the next time I do something like
Item item = user.getItem();
how can I make sure that the data is up to date, and not the old data that was initially read from the DB when the User instance was first queried?
Hope my question is clear...
Thank you!
You can be certain of getting updated entities by flushing the entity manager after the DML commands and then querying the object again.
If you do it in the new session, then it will be up to date (if you don't use L2 cache for the entities in question).
If you use L2 cache, then it will not be up to date (if the data is updated in the database without Hibernate being aware of it). In this case, if it is ok for your use cases to use stale data for a specific time interval, you can configure expiration policy for User and Item entities, so that their lifespan in the cache is limited. After they expire from the cache, updated data will be fetched from the database.
If you can properly invalidate the affected second-level cache entries upon changing the data in the background, then you can avoid using the stale data entirely (or reduce the possible time interval in which they will be used as stale).
If you do it in the existing session instance, then both Item and User instances will be in the first-level cache, so you will always get the data that were initially fetched. This is almost always the desired behavior. However, you can manually evict an entity instance (session.evict(item); session.evict(user)) or clear the entire session (session.clear()) to evict all the instances from the current session and then re-read them again from the database.
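A sketch of the eviction options described above, assuming an open `session` and already-loaded `user` and `item` instances (the identifiers are illustrative):

```java
// Option 1: evict the specific instances, then re-read them
session.evict(item);
session.evict(user);
user = session.get(User.class, userId);   // fresh database read

// Option 2: clear the entire first-level cache
session.clear();
user = session.get(User.class, userId);   // fresh database read

// Option 3: refresh an instance in place from the database
session.refresh(user);
```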
I have a method that executes HQL using org.hibernate.Query.executeUpdate() and changes some rows in a database. If some rows affected by the query were previously loaded into the current session (e.g. using Session.get()), they are now stale and need refreshing.
But I want that method to be independent of a previous work with the session, and not to track all loaded objects that might be affected in order to refresh them afterwards. Is it possible with Hibernate to retrieve and iterate through objects in the 1-level cache?
I've found the following solution for the issue, which works for me:
Map.Entry<Object, EntityEntry>[] entities = ((SessionImplementor) session).getPersistenceContext().reentrantSafeEntityEntries();
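Expanded into a small loop (a sketch against Hibernate's internal SPI, so package names and signatures may differ between versions; calling `refresh()` on each entity is just one possible action):

```java
import java.util.Map;
import org.hibernate.Session;
import org.hibernate.engine.spi.EntityEntry;
import org.hibernate.engine.spi.SessionImplementor;

// Sketch: iterate over every entity currently held in the first-level cache
Map.Entry<Object, EntityEntry>[] entities =
        ((SessionImplementor) session).getPersistenceContext()
                                      .reentrantSafeEntityEntries();

for (Map.Entry<Object, EntityEntry> entry : entities) {
    Object entity = entry.getKey();
    session.refresh(entity); // re-read the possibly stale entity from the database
}
```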
http://www.grails.org/1.3.1+Release+Notes
Improved Query Caching
The findAll query method now supports taking advantage of the 2nd level cache.
Book.findAll("from Book as b where b.author=:author", [author:'Dan Brown'], [cache: true])
What are the advantages and disadvantages of using the 2nd level cache?
I'm developing a web server for an iPhone application, so I have a lot of parallel connections, DB queries, etc.
Generally, the 2nd level cache holds application data previously retrieved from the database. The advantage is that you can make big savings by avoiding database calls for the same data. Whether the 2nd level cache will be efficient depends on how your app works with the data and also on the size of the data you can keep in memory. Probably the only major disadvantage is that the cache needs to be invalidated when data is updated in the database. When that happens from your application, some frameworks can handle it automatically (e.g. a write-through cache), but if the database is changed externally you can only rely on cache expiration.
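For reference, a typical minimal configuration for enabling the second-level cache and query cache with EhCache; the property names follow Hibernate 4/5 and the region factory class is an assumption to verify against your Hibernate version:

```properties
hibernate.cache.use_second_level_cache=true
hibernate.cache.use_query_cache=true
hibernate.cache.region.factory_class=org.hibernate.cache.ehcache.EhCacheRegionFactory
```

Entities still need to be individually marked cacheable (e.g. with `@Cache` or `<cache>` mapping elements) before they are stored in the 2LC.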