Is there a way to update an item in couchbase without altering its expiration time? I am using Java SDK and Couchbase 3
No, this is not possible right now. The simple reason is that the underlying protocol does not allow for it: every time the document is modified, its expiration time is reset.
The only reasonable workaround I can think of right now applies when your expiration times are long and a small drift won't matter: a view can expose the TTL as part of the document's meta information. So you load the current TTL and write the new document with that TTL (perhaps even subtracting the time your business processing took). This only approximates the original expiry (and the same idea also works with N1QL).
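To make that concrete, here is a rough sketch against the Java SDK 2.x API. It assumes a view (hypothetical design document "ttl", view "by_id") whose map function emits meta.expiration, i.e. the absolute Unix timestamp in seconds at which the document expires; the names and error handling are illustrative only, not an official recipe.

import java.util.Iterator;

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;
import com.couchbase.client.java.view.ViewQuery;
import com.couchbase.client.java.view.ViewRow;

// Sketch only: re-applies the (approximate) remaining lifetime when rewriting a document.
public class TtlPreservingUpdate {

    static void updatePreservingTtl(Bucket bucket, String id, JsonObject newContent) {
        // The view's map function is assumed to be:
        //   function (doc, meta) { emit(meta.id, meta.expiration); }
        Iterator<ViewRow> rows = bucket.query(ViewQuery.from("ttl", "by_id").key(id)).rows();
        if (!rows.hasNext()) {
            bucket.upsert(JsonDocument.create(id, newContent)); // no TTL known, write without one
            return;
        }
        long expiresAt = ((Number) rows.next().value()).longValue();
        long remaining = expiresAt - (System.currentTimeMillis() / 1000);
        if (expiresAt == 0 || remaining <= 0) {
            bucket.upsert(JsonDocument.create(id, newContent)); // no expiry set, or already past it
            return;
        }
        // Write the new content with roughly the remaining lifetime as its expiry.
        bucket.upsert(JsonDocument.create(id, (int) remaining, newContent));
    }
}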
Related
I have a use case where I need an entry to be evicted from an IMap a fixed time after it was first added, no matter how many times it is updated. My key is a String and my value is a Java object.
If for example, an entry is added on 12th May, it needs to be evicted after 14 days, i.e. 26th May, no matter how many times it is updated.
Hazelcast has a tag in its configuration called time-to-live-seconds, where you can configure how much time an entry can stay in a map.
From the Hazelcast documentation:
"Maximum time in seconds for each entry to stay on the map. If it is not 0, entries that are older than this time and not updated for this time are evicted automatically. Valid values are integers between 0 and Integer.MAX VALUE. Its default value is 0, which means infinite. If it is not 0, entries are evicted regardless of the set eviction-policy."
So, with that setting, an entry originally added on 12th May and then updated on 24th May will be removed 14 days after 24th May, not on 26th May.
Hence, to solve this, I am using the following approach: when I have to update an entry, I first get its EntryView from the map and read its expiration time. I then take the difference between that expiration time and the current time, and update the value with that difference as the new time-to-live.
// Read the current value and modify it.
Employee employee = iMap.get("A12");
employee.setDescr("loasfdeff");

// Read the entry's current expiration time and compute its remaining lifetime.
EntryView<String, Employee> entryView = iMap.getEntryView("A12");
long expirationTime = entryView.getExpirationTime();
long currentTime = System.currentTimeMillis();
long difference = expirationTime - currentTime;

// Write the updated value back with the remaining lifetime as its TTL.
iMap.set("A12", employee, difference, TimeUnit.MILLISECONDS);
I have tested the above approach and it works. However, I would like to explore other alternatives to see whether Hazelcast provides anything out of the box that solves this use case.
Any help is much appreciated!
EDIT:
GitHub issue: https://github.com/hazelcast/hazelcast/issues/13012
You are correct in how the TTL operates. A simple update of an entry is essentially the same as putting a new entry, so the system can’t interpret the intention. However, this would be a nice enhancement: adding a switch to preserve the expiration datetime.
I have a couple of alternative approaches:
1) Consider adding a timestamp field to the value object and setting it to the current time when the object is first put. Once that's present, you could write an executor service that runs at intervals and invalidates objects based on the overall TTL you want (see the sketch after this list). You can index this field as well to make it more performant.
2) You can write a custom eviction policy by extending the MapCustomEvictionPolicy class and applying that to your map. You would most likely still need to add a timestamp in the value (or key if you wanted to make that a custom object). You would then have a blank slate for how you want this to work.
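For option 1, a rough sketch could look like the following. It assumes the question's Employee value class carries an added createdAt field (epoch millis of the original put) that Hazelcast can query; the class name, field name, and the hourly schedule are my own placeholders.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.hazelcast.core.IMap;
import com.hazelcast.query.Predicates;

public class FixedLifetimeEvictor {

    private static final long LIFETIME_MILLIS = TimeUnit.DAYS.toMillis(14);

    public static void schedule(IMap<String, Employee> employees) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            long cutoff = System.currentTimeMillis() - LIFETIME_MILLIS;
            // Evict entries whose original insert time is older than the cutoff,
            // regardless of how many times they have been updated since.
            for (String key : employees.keySet(Predicates.lessThan("createdAt", cutoff))) {
                employees.delete(key);
            }
        }, 1, 1, TimeUnit.HOURS);
    }
}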
I’ll create a product enhancement request for this in the meantime. Could probably get it in the next release as it doesn’t seem too hard of an add.
I'm transitioning from Ehcache 2.x to Ehcache 3.3.1 and I can't find a way to read the time-to-live configuration for a cache at run time. Previously I used:
cache.getCacheConfiguration().getTimeToLiveSeconds()
Now, it looks like I need to do something akin to:
cache.getRuntimeConfiguration().getExpiry().getExpiryForCreation().getLength()
but getExpiryForCreation() requires a key/value pair for a specific element and appears to return the duration for that element.
Am I missing something in the API or docs?
I will post here the same answer as on the ehcache mailing list.
An Expiry implementation can be very dynamic and select the expiry time using a given cached key and value.
If you know that you did something like
Expirations.timeToLiveExpiration(Duration.of(20, TimeUnit.SECONDS))
to create it, then it won't be dynamic. So you can call
cache.getRuntimeConfiguration().getExpiry().getExpiryForCreation(null, null)
to get the duration of a cache entry after creation.
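In other words, for the time-to-live expiry created above, something like this hypothetical snippet would recover the 20 seconds:

org.ehcache.expiry.Duration ttl =
    cache.getRuntimeConfiguration().getExpiry().getExpiryForCreation(null, null);
long ttlSeconds = ttl.getTimeUnit().toSeconds(ttl.getLength()); // 20 for the example above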
If you then want to change the TTL dynamically, that is possible, but you will need to provide your own Expiry implementation (not really hard to do) with a setter for the TTL.
However, the new value will only apply to newly added entries. Existing entries won't see their TTL change, because the expiration timestamp is calculated once, when the entry is added, rather than re-applying the duration every time, for performance reasons.
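A minimal sketch of such an Expiry, written against the Ehcache 3.3.x interfaces (the class and method bodies are my own illustration; double-check the signatures against the Expiry javadoc for your exact version):

import java.util.concurrent.TimeUnit;

import org.ehcache.ValueSupplier;
import org.ehcache.expiry.Duration;
import org.ehcache.expiry.Expiry;

// Hypothetical TTL expiry whose duration can be changed at run time.
// The new TTL only affects entries created or updated after the change.
public class MutableTtlExpiry<K, V> implements Expiry<K, V> {

    private volatile Duration ttl;

    public MutableTtlExpiry(long length, TimeUnit unit) {
        this.ttl = Duration.of(length, unit);
    }

    public void setTtl(long length, TimeUnit unit) {
        this.ttl = Duration.of(length, unit);
    }

    @Override
    public Duration getExpiryForCreation(K key, V value) {
        return ttl;
    }

    @Override
    public Duration getExpiryForAccess(K key, ValueSupplier<? extends V> value) {
        return null; // null means "leave the current expiry unchanged"
    }

    @Override
    public Duration getExpiryForUpdate(K key, ValueSupplier<? extends V> oldValue, V newValue) {
        return ttl;
    }
}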
I'm currently developing an application in Java that connects to a MySQL database using JDBC and displays records in a JTable. The application will be run by more than one user at a time, and I'm trying to implement a way to detect when the table has been modified, e.g. user one changes a column such as stock level, and then user two tries to change the same record based on the level it had before user one's change.
At the moment I'm storing the checksum of the table being displayed in a variable, and when a user tries to modify a record I check whether the stored checksum matches one generated just before the edit.
As I'm new to this, I'm not sure whether this is a correct way to do it.
Calculating the checksum of an entire table is a very heavy-handed solution and definitely something that wouldn't scale in the long term. There are multiple ways of handling this, but the core theme is to do as little work as possible so that you can scale as the number of users increases. Imagine running the checksum-based solution on a table with a million rows that is continuously updated by hundreds of users!
One solution (which requires minimal rework) would be to "check" the stock against which the value is being updated. In the background, you fire a query to see whether the data for that particular stock has changed since the table was populated. If it has, you can warn the user or mark the updated cell as dirty to indicate that the value has changed. The problem here is that the query won't be fired until the user tries to save the updated value. You could poll the database to avoid that, but that is hardly an efficient solution either.
As a more robust solution, I would recommend using a database that implements native "push notifications" to all connected clients. Redis is a NoSQL database that comes to mind for this.
Another tried and tested technique is to forgo the direct database connection and use a middleware layer such as a message queue (e.g. RabbitMQ). Message queues enable the design of systems that communicate using messages. For example, every update to a stock value in the JTable would be sent as a message to an "update database" queue. Once the update is done, a message would be sent to an "update notification" queue to which all clients are subscribed, so each of them knows that the value of a given stock has been updated and can act accordingly. The advantage of this solution is that you keep your existing stack (Java, MySQL) and get notifications without polling the database and killing it.
A checksum is a way to see if data has changed.
Anyway, I would suggest you add a "last_update_date" column that is updated on every update of the record.
You then just have to store this date (with datetime precision) and check against it.
You can also add a version-number column: a simple counter incremented by 1 on each update (sketched below).
Note:
You can add an ON UPDATE trigger to maintain last_update_date, which makes it 100% reliable; you may not need a trigger if you control all the updates yourself.
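To illustrate the version-number idea, here is a rough JDBC sketch of an optimistic update (the table and column names are made up for the example): the UPDATE only succeeds if nobody changed the row since it was loaded into the JTable.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class StockDao {

    // Returns true if the update was applied; false if another user changed the row first.
    static boolean updateStockLevel(Connection con, long stockId, int newLevel, long versionSeenByUser)
            throws SQLException {
        String sql = "UPDATE stock SET level = ?, version = version + 1, last_update_date = NOW() "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, newLevel);
            ps.setLong(2, stockId);
            ps.setLong(3, versionSeenByUser);
            return ps.executeUpdate() == 1; // 0 rows updated => stale data, reload and warn the user
        }
    }
}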
In network communication:
A checksum is a count of the number of bits in a transmission unit that is included with the unit so that the receiver can check to see whether the same number of bits arrived. If the counts match, it's assumed that the complete transmission was received.
The same idea can be applied to check whether two objects differ, so your approach is correct.
I want to get the count of "alive" keys at any given time. Now according to the API documentation getItemCount() is meant to return this.
However, it does not: expired keys do not reduce the value returned by getItemCount(). Why is this? How can I accurately count all "active" or "alive" keys that have not expired?
Here's my put code:
syncCache.put(uid, cachedUID, Expiration.byDeltaSeconds(3), SetPolicy.SET_ALWAYS);
That should expire keys after 3 seconds. It does expire them, but getItemCount() does not reflect the true count of keys.
UPDATE:
It seems memcache might not be what I should be using, so here's what I'm trying to do.
I wish to write a Google App Engine server/app that works as a "users online" feature for a desktop application. The desktop application makes an HTTP request to the app with a unique ID as a parameter. The app stores this UID along with a timestamp. This is done every 3 minutes.
Every 5 minutes, any entries with a timestamp outside that 5-minute window are removed. Then you count how many entries remain, and that's how many users are "online".
The expire feature seemed perfect as then I wouldn't even need to worry about timestamps or clearing expired entries.
It might be a problem in the documentation; the Python documentation does not mention anything about counting only live items, and the behaviour is reproducible in Python as well.
See also this related post: How does the lazy expiration mechanism in memcached operate?
getItemCount() may include expired keys because that is simply how memcache (and many other caches) works: expired items are removed lazily rather than at the moment they expire.
Memcache can help you do what you describe, but not in the way you tried. Consider the completely opposite situation: you put online users in memcache and then App Engine wipes them out because of a lack of free memory. The cache gives you no guarantee that your items will be stored for any particular period. What memcache does give you is fewer requests to the datastore and lower latency.
One of the ways to do it:
Maintain a map of (user-id, last-login-refreshed) entries stored in the datastore (or in memcache if it does not need to be very precise). On every login/refresh you update the value for that key, and a periodic cron job removes old users from the map. The size of the map is then the number of logged-in users at any particular moment.
Make sure the map fits in 1 MB, which is the size limit for both memcache values and datastore entities.
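For example, a very rough sketch with App Engine's MemcacheService; the single "online-users" key and the unsynchronized read-modify-write are deliberate simplifications, so a real implementation would want getIdentifiable/putIfUntouched (or the datastore) for accuracy:

import java.util.HashMap;
import java.util.Map;

import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class OnlineUsers {

    private static final MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
    private static final String KEY = "online-users";
    private static final long WINDOW_MILLIS = 5 * 60 * 1000;

    // Called from the handler that receives the 3-minute ping.
    @SuppressWarnings("unchecked")
    public static void refresh(String uid) {
        Map<String, Long> seen = (Map<String, Long>) cache.get(KEY);
        if (seen == null) {
            seen = new HashMap<>();
        }
        seen.put(uid, System.currentTimeMillis());
        cache.put(KEY, seen);
    }

    // Called from a cron job every 5 minutes: prune stale entries and report the count.
    @SuppressWarnings("unchecked")
    public static int countOnline() {
        Map<String, Long> seen = (Map<String, Long>) cache.get(KEY);
        if (seen == null) {
            return 0;
        }
        long cutoff = System.currentTimeMillis() - WINDOW_MILLIS;
        seen.values().removeIf(lastSeen -> lastSeen < cutoff);
        cache.put(KEY, seen);
        return seen.size();
    }
}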
My Java web application (Tomcat) gets all of its data from an SQL database. However, large parts of this database are only updated once a day via a batch job. Since queries on these tables tend to be rather slow, I want to cache the results.
Before rolling my own solution, I wanted to check out existing cache solutions for java. Obviously, I searched stackoverflow and found references and recommendations for ehcache.
But looking through the documentation, it seems it only allows the lifetime of cached objects to be set as a duration (e.g. expire 1 hour after being added), while I want expiry at a fixed time of day (e.g. expire at 0:30 am).
Does anyone know a cache library that allows such expiry behaviour? Or how to do this with ehcache if that's possible?
EhCache allows you to programmatically set the expiry duration on an individual cache element when you create it. The values configured in ehcache.xml are just defaults.
If you know the specific absolute time at which the element should expire, you can calculate the difference in seconds between then and "now" (i.e. the time you add the element to the cache), and set that as the time-to-live duration using Element.setTimeToLive().
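For instance, to make entries expire at the next 0:30 am (the batch-job example from the question), something along these lines should work with Ehcache 2.x; the method and class names are just placeholders:

import java.util.Calendar;

import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

public class FixedTimeExpiryPut {

    static void putUntilNextHalfPastMidnight(Cache cache, Object key, Object value) {
        Calendar next = Calendar.getInstance();
        next.set(Calendar.HOUR_OF_DAY, 0);
        next.set(Calendar.MINUTE, 30);
        next.set(Calendar.SECOND, 0);
        next.set(Calendar.MILLISECOND, 0);
        if (next.getTimeInMillis() <= System.currentTimeMillis()) {
            next.add(Calendar.DAY_OF_MONTH, 1); // already past 0:30 today, so use tomorrow
        }
        int secondsToLive = (int) ((next.getTimeInMillis() - System.currentTimeMillis()) / 1000);

        Element element = new Element(key, value);
        element.setTimeToLive(secondsToLive); // per-element TTL overrides the ehcache.xml default
        cache.put(element);
    }
}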
Do you need a full-blown cache solution? You could use standard Maps and schedule a job to clear them at the required time.
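A bare-bones sketch of that idea (a daily job that clears a shared map shortly after the batch job runs; the 0:30 time and the map itself are just placeholders):

import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DailyClearedCache {

    public static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

    public static void scheduleDailyClear() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        LocalDateTime now = LocalDateTime.now();
        LocalDateTime next = now.toLocalDate().atTime(LocalTime.of(0, 30));
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // already past 0:30 today
        }
        long initialDelaySeconds = Duration.between(now, next).getSeconds();
        // Clear the whole map every 24 hours, starting at the next 0:30.
        scheduler.scheduleAtFixedRate(CACHE::clear, initialDelaySeconds,
                TimeUnit.DAYS.toSeconds(1), TimeUnit.SECONDS);
    }
}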