Cache with fixed expiry time in Java

My Java web application (Tomcat) gets all of its data from an SQL database. However, large parts of this database are only updated once a day via a batch job. Since queries on these tables tend to be rather slow, I want to cache the results.
Before rolling my own solution, I wanted to check out existing cache solutions for java. Obviously, I searched stackoverflow and found references and recommendations for ehcache.
But looking through the documentation, it seems it only allows setting the lifetime of cached objects as a duration (e.g. expire 1 hour after being added), while I want an expiry based on a fixed time (e.g. expire at 0:30 am).
Does anyone know a cache library that allows such expiry behaviour? Or how to do this with ehcache if that's possible?

EhCache allows you to programmatically set the expiry duration on an individual cache element when you create it. The values configured in ehcache.xml are just defaults.
If you know the specific absolute time at which the element should expire, you can calculate the difference in seconds between then and "now" (i.e. the time you add the element to the cache), and set that as the time-to-live duration using Element.setTimeToLive().
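For example, to expire at a fixed wall-clock time such as 00:30, you can compute the remaining seconds with java.time and pass that as the TTL. A minimal sketch: the TTL calculation is plain JDK, and the commented-out Ehcache 2.x calls show where it would be applied (cache and element names are placeholders).

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;

public class FixedExpiry {

    // Seconds from 'now' until the next occurrence of 'expiryTime' (e.g. 00:30).
    static int secondsUntil(LocalDateTime now, LocalTime expiryTime) {
        LocalDateTime next = now.toLocalDate().atTime(expiryTime);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // today's expiry already passed, use tomorrow's
        }
        return (int) Duration.between(now, next).getSeconds();
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2024, 1, 1, 23, 0);
        int ttl = secondsUntil(now, LocalTime.of(0, 30));
        System.out.println(ttl); // 5400 seconds = 1.5 hours until 00:30 the next day

        // With Ehcache 2.x, this TTL would then be applied per element:
        //   Element element = new Element(key, value);
        //   element.setTimeToLive(ttl);
        //   cache.put(element);
    }
}
```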

Do you need a full-blown cache solution? You could use standard Maps and then have a job scheduled to clear them at the required time.

Related

Hazelcast - Evicting an entry from an IMap after a fixed period of time, no matter how many times it is updated

I have the following use case where I need an entry to be evicted from an IMap, no matter how many times it is updated. My key is a String and my value is a Java object.
If for example, an entry is added on 12th May, it needs to be evicted after 14 days, i.e. 26th May, no matter how many times it is updated.
Hazelcast has a tag in its configuration called time-to-live-seconds, where you can configure how long an entry can stay in a map.
So from Hazelcast Documentation,
"Maximum time in seconds for each entry to stay on the map. If it is not 0, entries that are older than this time and not updated for this time are evicted automatically. Valid values are integers between 0 and Integer.MAX_VALUE. Its default value is 0, which means infinite. If it is not 0, entries are evicted regardless of the set eviction-policy."
So, given the above, if you consider the example again, an entry added originally on 12th May and then updated on 24th May will be removed 14 days after 24th May, not on 26th May.
Hence, to work around this, I am using the following approach: when I have to update an entry, I first get the EntryView from the map and obtain its expiration time. I then take the difference between the expiration time and the current time, and update the value with that difference as the time-to-live.
Employee employee = iMap.get("A12");
employee.setDescr("loasfdeff");
// Read the remaining TTL from the entry's metadata
EntryView<String, Employee> entryView = iMap.getEntryView("A12");
long expirationTime = entryView.getExpirationTime();
long remaining = expirationTime - System.currentTimeMillis();
// Re-set the value with the remaining TTL so the original expiry is preserved
iMap.set("A12", employee, remaining, TimeUnit.MILLISECONDS);
I have tested the above approach and it works. However, I would like to explore other alternatives to see whether Hazelcast provides anything out of the box to solve this use case.
Any help is much appreciated!
EDIT-
GITHUB ISSUE- https://github.com/hazelcast/hazelcast/issues/13012
You are correct in how the TTL operates. A simple update of an entry is essentially the same as putting a new entry, so the system can’t interpret the intention. However, this would be a nice enhancement: adding a switch to preserve the expiration datetime.
I have a couple of alternative approaches:
1) Consider adding a timestamp field to the value object and setting this to the current time on the original object put. Once that’s present you could write an executor service to run at intervals and invalidate the objects based on the overall TTL you want. You can index this field as well to make it more performant.
2) You can write a custom eviction policy by extending the MapCustomEvictionPolicy class and applying that to your map. You would most likely still need to add a timestamp in the value (or key if you wanted to make that a custom object). You would then have a blank slate for how you want this to work.
I’ll create a product enhancement request for this in the meantime. Could probably get it in the next release as it doesn’t seem too hard of an add.
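Approach 1 can be sketched without any Hazelcast specifics: wrap the value with its original insertion timestamp, preserve that timestamp on updates, and periodically evict by absolute age. All class and method names below are illustrative; with Hazelcast, the eviction pass would run from the executor service described above rather than over a local map.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AbsoluteTtlStore<K, V> {

    // Value wrapper carrying the original insertion time, preserved across updates.
    static final class Timestamped<V> {
        final V value;
        final long insertedAtMillis;
        Timestamped(V value, long insertedAtMillis) {
            this.value = value;
            this.insertedAtMillis = insertedAtMillis;
        }
    }

    private final Map<K, Timestamped<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    AbsoluteTtlStore(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    void put(K key, V value, long nowMillis) {
        // Keep the first insertion time if the key already exists,
        // so an update does not push back the expiry.
        map.compute(key, (k, old) ->
            new Timestamped<>(value, old == null ? nowMillis : old.insertedAtMillis));
    }

    V get(K key) {
        Timestamped<V> t = map.get(key);
        return t == null ? null : t.value;
    }

    // Would be run at intervals (e.g. from a ScheduledExecutorService).
    void evictExpired(long nowMillis) {
        map.entrySet().removeIf(e -> nowMillis - e.getValue().insertedAtMillis >= ttlMillis);
    }
}
```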

hibernate cache expiration after days

The application is using Hibernate 3 and needs to cache data while saving to the database.
Is it possible to cache only a few columns of an entity, instead of all?
Is it possible to set the expiration time of an entity to a few days? For example, setting it to 3 days, so that records are auto-removed from the cache after 3 days?
I have started looking into the caching process but am not sure whether the above two needs can be fulfilled.
No, it is not possible to cache only a subset of columns for an entity in the Hibernate second-level cache, as otherwise Hibernate would need to go to the database to fetch the remaining data when assembling such entity instances anyway, thus defeating the purpose of the cache.
Yes, it is possible to set an expiration time for cached data. Hibernate leaves it up to the L2 cache provider to manage expiration and eviction policy, so you need to configure it there (consult the documentation of the cache provider you use).
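For example, with EhCache 2.x as the second-level cache provider, a per-entity region TTL of 3 days (259200 seconds) could be configured in ehcache.xml along these lines (the entity class name here is a placeholder; by default, the region name is the fully qualified entity class name):

```xml
<ehcache>
  <cache name="com.example.MyEntity"
         maxElementsInMemory="10000"
         timeToLiveSeconds="259200"
         eternal="false"/>
</ehcache>
```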
1) Yes: make the other fields transient, so they will not take part in second-level caching.

Update item without modifying the expiry

Is there a way to update an item in Couchbase without altering its expiration time? I am using the Java SDK and Couchbase 3.
No, this is not possible right now. The simple reason is that the underlying protocol does not allow for it: every time the document is modified, its expiration time is reset.
The only reasonable workaround I can think of right now can be used when your expiration times are long and a small change won't matter: when you create a view, you can grab the TTL as part of the meta information. So you load the current TTL and write the new document with that TTL (perhaps even subtracting the time your business processing took). This approximates it (and it can also work with N1QL).

Java cache with expiration since first write

I have events which should be accumulated into a persistent key-value store. 24 hours after a key is first inserted, the accumulated record should be processed and removed from the store.
Expired-data processing is distributed among multiple nodes, so using a database introduces synchronization problems. I don't want to use any SQL database.
The best fit for me is probably a cache with an expiration policy configurable to my needs. Is there any? Or can this be solved with some NoSQL database?
It should be possible with products like Infinispan or Hazelcast.
Both are JSR107 compatible.
With a JSR-107 compatible cache API, a possible approach is to set your 24-hour expiry via the CreatedExpiryPolicy. Next, you implement and register a CacheEntryExpiredListener to get a callback when an entry expires.
The call to the CacheEntryExpiredListener may be lenient and implementation-dependent. Actually, the event is triggered on the eviction due to expiry: for example, one implementation may do a periodic scan and remove expired entries every 30 minutes. However, I think that lag time is adjustable in most implementations, so you will be able to operate within defined bounds.
Also check whether there are some resource constraints for the event callbacks you may run into, like thread pools.
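The wiring described above might look roughly like the following with the javax.cache (JSR-107) API. This is a sketch, not runnable on its own: the cache name, key/value types, and listener body are placeholders, and a JSR-107 provider (such as Infinispan or Hazelcast) must be on the classpath for Caching.getCachingProvider() to resolve.

```java
import java.io.Serializable;
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.configuration.MutableCacheEntryListenerConfiguration;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryExpiredListener;
import javax.cache.event.CacheEntryListenerException;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

public class ExpiryProcessing {

    // Called when entries expire; this is where the accumulated record is processed.
    public static class ExpiredProcessor
            implements CacheEntryExpiredListener<String, String>, Serializable {
        @Override
        public void onExpired(Iterable<CacheEntryEvent<? extends String, ? extends String>> events)
                throws CacheEntryListenerException {
            for (CacheEntryEvent<? extends String, ? extends String> event : events) {
                System.out.println("Processing expired entry: " + event.getKey());
            }
        }
    }

    public static void main(String[] args) {
        CacheManager manager = Caching.getCachingProvider().getCacheManager();
        MutableConfiguration<String, String> config = new MutableConfiguration<String, String>()
            // Expire 24 hours after creation; updates do not reset this policy's clock.
            .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_DAY))
            .addCacheEntryListenerConfiguration(new MutableCacheEntryListenerConfiguration<>(
                FactoryBuilder.factoryOf(ExpiredProcessor.class),
                null,    // no event filter
                true,    // old value required, so the listener sees the accumulated record
                false)); // asynchronous delivery
        Cache<String, String> cache = manager.createCache("events", config);
        cache.put("key-1", "accumulated-record");
    }
}
```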
I mentioned Infinispan and Hazelcast for two reasons:
You may need the distribution capabilities.
Since you do long-running processing and store data that is not recoverable, you may need the persistence and fault-tolerance features. So I would say a simple in-memory cache like Google Guava is out of scope.
Good luck!

Memcache getItemCount() counting expired keys?

I want to get the count of "alive" keys at any given time. According to the API documentation, getItemCount() is meant to return this.
However, it doesn't: expired keys do not reduce the value returned by getItemCount(). Why is this? How can I accurately get a count of all "active" or "alive" keys that have not expired?
Here's my put code;
syncCache.put(uid, cachedUID, Expiration.byDeltaSeconds(3), SetPolicy.SET_ALWAYS);
Now that should expire keys after 3 seconds. It does expire them, but getItemCount() does not reflect the true count of keys.
UPDATE:
It seems memcache might not be what I should be using, so here's what I'm trying to do.
I wish to write a Google App Engine server/app that works as a "users online" feature for a desktop application. The desktop application makes an HTTP request to the app with a unique ID as a parameter. The app stores this UID along with a timestamp. This is done every 3 minutes.
Every 5 minutes, any entries with a timestamp outside of that 5-minute window are removed. Then you count how many entries you have, and that's how many users are "online".
The expiry feature seemed perfect, as then I wouldn't even need to worry about timestamps or clearing expired entries.
It might be a problem in the documentation; the Python documentation does not mention anything about counting only live keys, and the behaviour is reproducible in Python as well.
See also this related post: How does the lazy expiration mechanism in memcached operate?
getItemCount() might count expired keys because that is simply how memcache works, as do many other caches.
Memcache can help you do what you describe, but not in the way you tried. Consider the completely opposite situation: you put online users in memcache, and then App Engine wipes them out because of a lack of free memory. The cache doesn't give you any guarantee that your items will be stored for any particular period. What memcache can do is decrease the number of requests to the datastore and reduce latency.
One of the ways to do it:
Maintain a sorted map of (user-id, last-login-refreshed) entries stored in the datastore (or in memcache if you do not need it to be very precise). On every login/refresh, update the value for that key, and have a periodic cron job remove old users from the map. The size of the map is then the number of logged-in users at that moment.
Make sure the map fits in 1 MB, which is the limit for both memcache and datastore entries.
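Leaving the datastore/memcache API calls aside, the core bookkeeping of that approach can be sketched with a plain map. All names here are illustrative; on App Engine, the map contents would live in the datastore (or memcache) and the pruning would run from the cron job.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OnlineUsers {

    // uid -> timestamp of the last heartbeat request
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final long windowMillis;

    OnlineUsers(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Called on each HTTP heartbeat (every 3 minutes from the desktop app).
    void heartbeat(String uid, long nowMillis) {
        lastSeen.put(uid, nowMillis);
    }

    // Called from the cron job: prune stale entries, then count what remains.
    int countOnline(long nowMillis) {
        lastSeen.values().removeIf(ts -> nowMillis - ts > windowMillis);
        return lastSeen.size();
    }
}
```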
