I'm using Jedis client to store geo coordinates in Redis.
Is there any way to set an expiry time for an individual member in Redis? I know I can set an expiry time for a key.
For example, I added the three coordinates below, and now I want the "Bahn" member to expire in 10 seconds.
redis.geoadd(key, 8.6638775, 49.5282537, "Weinheim");
redis.geoadd(key, 8.3796281, 48.9978127, "EFS9");
redis.geoadd(key, 8.665351, 49.553302, "Bahn");
Behind the scenes, GEOADD uses a ZSET to store its data.
You can store the same data (without geolocation) in a second ZSET, with a unix timestamp as score this time, using a regular ZADD command.
ZADD expirationzset <expiration date> <data>
You can get the expired data from this second ZSET, using
ZRANGEBYSCORE expirationzset -inf <current unix timestamp>
Then you have to remove them from both ZSETs, using ZREM for the geolocation ZSET, and ZREMRANGEBYSCORE for the expiration zset:
ZREM geolocationzset <expired_data1> <expired_data2> <expired_data3>...
ZREMRANGEBYSCORE expirationzset -inf <current unix timestamp>
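The two-ZSET pattern above can be sketched in Jedis. This is a sketch, not Jedis API: the helper names `geoAddWithTtl` and `purgeExpired`, and the key names, are my own, and it assumes Jedis 4.x.

```java
import redis.clients.jedis.Jedis;

public class GeoExpiry {
    // Score stored in the expiration ZSET: the unix time (in seconds)
    // at which the member should expire.
    static long expirationScore(long nowSeconds, long ttlSeconds) {
        return nowSeconds + ttlSeconds;
    }

    // Add a member to both the geo ZSET and the expiration ZSET.
    static void geoAddWithTtl(Jedis redis, String geoKey, String expKey,
                              double lon, double lat, String member, long ttlSeconds) {
        redis.geoadd(geoKey, lon, lat, member);
        long now = System.currentTimeMillis() / 1000;
        redis.zadd(expKey, expirationScore(now, ttlSeconds), member);
    }

    // Cleanup to run periodically (or before reads): remove every member
    // whose expiration score is in the past from both ZSETs.
    static void purgeExpired(Jedis redis, String geoKey, String expKey) {
        long now = System.currentTimeMillis() / 1000;
        var expired = redis.zrangeByScore(expKey, 0, now);
        if (!expired.isEmpty()) {
            redis.zrem(geoKey, expired.toArray(new String[0]));
            redis.zremrangeByScore(expKey, 0, now);
        }
    }
}
```

With this, expiring "Bahn" in 10 seconds is `geoAddWithTtl(redis, key, key + ":exp", 8.665351, 49.553302, "Bahn", 10)` followed by periodic `purgeExpired` calls.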
As I come from an RDBMS background, I am a bit confused about how to write this query in DynamoDB.
Problem: I need to filter out data that is more than 15 minutes old.
I have created a GSI with hash key materialType and range key createTime (the create time is stored as Instant.now().toEpochMilli()).
Now I have to write a Java query which returns the values from the last 15 minutes.
Here is an example using the CLI.
:v1 should be the material type id that you are searching on. :v2 should be the epoch time in milliseconds for 15 minutes ago, which you will have to calculate.
aws dynamodb query \
    --table-name mytable \
    --index-name myindex \
    --key-condition-expression "materialType = :v1 AND createTime > :v2" \
    --expression-attribute-values '{
        ":v1": {"S": "some id"},
        ":v2": {"N": "766677876567"}
    }'
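The same query can be sketched with the AWS SDK for Java v2. The table name, index name, and helper names below are placeholders taken from the CLI example, not real resources:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;

public class RecentMaterials {
    // Epoch-millis cutoff: "now" minus the window (15 minutes in the question).
    static long cutoffMillis(Instant now, long minutes) {
        return now.minus(minutes, ChronoUnit.MINUTES).toEpochMilli();
    }

    static QueryResponse queryLast15Minutes(DynamoDbClient ddb, String materialType) {
        long cutoff = cutoffMillis(Instant.now(), 15);
        QueryRequest request = QueryRequest.builder()
                .tableName("mytable")
                .indexName("myindex")
                .keyConditionExpression("materialType = :v1 AND createTime > :v2")
                .expressionAttributeValues(Map.of(
                        ":v1", AttributeValue.builder().s(materialType).build(),
                        ":v2", AttributeValue.builder().n(Long.toString(cutoff)).build()))
                .build();
        return ddb.query(request);
    }
}
```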
I have a function that uses Lettuce to talk to a Redis cluster.
In this function, I insert data into a stream data structure.
import io.lettuce.core.cluster.SlotHash;
...
public void addData(Map<String, String> dataMap) {
    var sync = SlotHash.getSlot(key).sync();
    sync.xadd(key, dataMap);
}
I also want to set the TTL when I insert a record for the first time, because part of the requirement is to expire the structure after a fixed length of time, in this case 10 hours.
Unfortunately the XADD command does not accept an extra parameter to set the TTL the way SET does.
So now I am setting the ttl this way:
public void addData(Map<String, String> dataMap) {
    var sync = SlotHash.getSlot(key).sync();
    sync.xadd(key, dataMap);
    sync.expire(key, 36000 /* 10 hours, in seconds */);
}
What is the best way to ensure that I set the expiry time only once (i.e. when the stream structure is first created)? I should not set the TTL on every call, because each xadd would be followed by an expire, effectively postponing the expiry indefinitely.
I could always check the number of items in the stream, but that is an overhead. I don't want to keep flags on the Java application side, because the app could be restarted and that information would be lost from memory.
You may want to try a Lua script. The sample below sets the expiry only if one is not already set for the key, and works with any type of Redis key.
eval "local ttl = redis.call('ttl', KEYS[1]); if ttl == -1 then redis.call('expire', KEYS[1], ARGV[1]); end; return ttl;" 1 mykey 12
The script also returns the remaining time-to-live in seconds.
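From the Java side, the same script can be invoked through Lettuce's `eval`. This is a sketch: the class and helper names are my own, and it assumes a synchronous cluster connection like the one in the question:

```java
import java.util.Map;
import java.util.concurrent.TimeUnit;
import io.lettuce.core.ScriptOutputType;
import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands;

public class StreamTtl {
    // Sets the TTL only when the key has none yet (TTL == -1),
    // so repeated calls never postpone the expiry.
    static final String SET_TTL_IF_ABSENT =
            "local ttl = redis.call('ttl', KEYS[1]); " +
            "if ttl == -1 then redis.call('expire', KEYS[1], ARGV[1]); end; " +
            "return ttl;";

    // 10 hours expressed in seconds, as EXPIRE expects.
    static long ttlSeconds(long hours) {
        return TimeUnit.HOURS.toSeconds(hours);
    }

    static void addWithTtl(RedisAdvancedClusterCommands<String, String> sync,
                           String key, Map<String, String> dataMap) {
        sync.xadd(key, dataMap);
        sync.eval(SET_TTL_IF_ABSENT, ScriptOutputType.INTEGER,
                  new String[] { key }, String.valueOf(ttlSeconds(10)));
    }
}
```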
I use @Cacheable(cacheNames = "rates", key = "#car.name")
Can I set up a TTL for this cache? And is the TTL keyed by car.name?
for example
I want to set name = "rates" TTL 60 secs
running the java:
time: 0 car.name = 1, return "11"
time: 30 car.name = 2, return "22"
time: 60 car.name = 1 key should be gone.
time: 90 car.name = 2 key should be gone.
And I want to set different TTLs for different cache names, e.g.
name = "rates2", TTL 90 secs.
You can't. @Cacheable is static configuration, and what you want is more on the dynamic side. Keep in mind that Spring just provides an abstraction that is supposed to fit all providers. You should either specify different regions for the different entries, or run a background process that invalidates the keys needing invalidation.
When statically configured, time-to-live is set on a per-region basis.
If you walk away from static configuration, you can set the expiration while inserting an entry, but then you are leaving Spring (one size fits all, remember) and entering the territory of the caching provider, which can be anything: Redis, Hazelcast, Ehcache, Infinispan, and each has a different contract.
Here is example contract of IMap interface from Hazelcast:
IMap::put(Key, Value, TTL, TimeUnit)
But this has nothing to do with spring.
With Spring, this means you can do the following:
@Cacheable(cacheNames = "floatingRates")
List<Rate> getFloatingRates();
@Cacheable(cacheNames = "fixedRates")
List<Rate> getFixedRates();
and then define a TTL for each cache region.
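With Redis as the provider, for instance, those per-region TTLs can be declared statically on the cache manager. A sketch assuming spring-data-redis; the cache names and durations come from the question:

```java
import java.time.Duration;
import java.util.Map;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

public class CacheConfig {
    // One TTL per cache region, as requested: rates -> 60 s, rates2 -> 90 s.
    static final Map<String, Duration> TTLS = Map.of(
            "rates", Duration.ofSeconds(60),
            "rates2", Duration.ofSeconds(90));

    public RedisCacheManager cacheManager(RedisConnectionFactory factory) {
        RedisCacheManager.RedisCacheManagerBuilder builder = RedisCacheManager.builder(factory);
        TTLS.forEach((name, ttl) -> builder.withCacheConfiguration(
                name, RedisCacheConfiguration.defaultCacheConfig().entryTtl(ttl)));
        return builder.build();
    }
}
```

Note this is still per cache region, not per key, which is exactly the limitation described above.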
I am using set to put values into an IMap, where I also set the TTL.
The problem I am trying to solve: when I read a key from the map, I want to be able to get the corresponding TTL. I am new to Hazelcast and would appreciate some help.
val testMap: IMap[String, String] = hc.getNativeInstance().getMap(testhcMap)
if (!testMap.containsKey(key)) {
  val duration = TimeUnit.HOURS
  val ttlLen: Long = 1
  testMap.set(key, event, ttlLen, duration)
  return true
}
The above snippet sets the values. Before inserting data into the IMap I want to add one more check: whether the remaining TTL is less than an hour, and take some action based on that.
This should help you out:
IMap<String, String> foo;
foo.getEntryView(key).getExpirationTime();
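Building on that, the remaining time-to-live can be derived from the entry's absolute expiration time. A sketch: `remainingMillis` and `expiresWithinAnHour` are my own helpers, while `getEntryView` and `getExpirationTime` are Hazelcast `IMap`/`EntryView` API:

```java
import com.hazelcast.core.EntryView;
import com.hazelcast.map.IMap;

public class TtlCheck {
    // Milliseconds left until expiry, given the entry's absolute expiration time.
    static long remainingMillis(long expirationTimeMillis, long nowMillis) {
        return expirationTimeMillis - nowMillis;
    }

    static boolean expiresWithinAnHour(IMap<String, String> map, String key) {
        EntryView<String, String> view = map.getEntryView(key);
        if (view == null) {
            return false; // no such entry
        }
        long remaining = remainingMillis(view.getExpirationTime(), System.currentTimeMillis());
        return remaining < 60L * 60L * 1000L;
    }
}
```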
You cannot access the TTL value directly. You would have to store the deadline (currentTime + timeout = deadline) in either the key or the value before you actually store it in Hazelcast. The easiest way might be an envelope-like class that holds the actual value plus the deadline.
I have an action in Struts2 that queries the database for an object and then copies it with a few changes. It then needs to retrieve the new objectID from the copy and create a file called objectID.txt.
Here is the relevant code:
Action Class:
ObjectVO objectVOcopy = objectService.searchObjects(objectId);
//Set the ID to 0 so a new row is added, instead of the current one being updated
objectVOcopy.setObjectId(0);
Date today = new Date();
Timestamp currentTime = new Timestamp(today.getTime());
objectVOcopy.setTimeStamp(currentTime);
//Add copy to database
objectService.addObject(objectVOcopy);
//Get the copy object's ID from the database
int newObjectId = objectService.findObjectId(currentTime);
File inboxFile = new File(parentDirectory.getParent()+"\\folder1\\folder2\\"+newObjectId+".txt");
ObjectDAO:
//Retrieve identifying ID of copy object from database
List<ObjectVO> object = getHibernateTemplate().find("from ObjectVO where timeStamp = ?", currentTime);
return object.get(0).getObjectId();
The problem is that more often than not, the ObjectDAO search method will not return anything. When debugging I've noticed that the Timestamp currentTime passed to it is usually about 1-2ms off the value in the database. I have worked around this bug changing the hibernate query to search for objects with a timestamp within 3ms of the one passed, but I'm not sure where this discrepancy is coming from. I'm not recalculating the currentTime; I'm using the same one to retrieve from the database as I am to write to the database. I'm also worried that when I deploy this to another server the discrepancy might be greater. Other than the objectID, this is the only unique identifier so I need to use it to get the copy object.
Does anyone know why this is occurring, and is there a better workaround than just searching through a range? I'm using Microsoft SQL Server 2008 R2, btw.
Thanks.
Precision in SQL Server's DATETIME data type does not match what you can generate in other languages: DATETIME is only accurate to 1/300 of a second, so values are rounded to increments of .000, .003, or .007 seconds. This is why you can say:
DECLARE #d DATETIME = '20120821 23:59:59.997';
SELECT #d;
Result:
2012-08-21 23:59:59.997
Then try:
DECLARE #d DATETIME = '20120821 23:59:59.999';
SELECT #d;
Result:
2012-08-22 00:00:00.000
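That rounding can be emulated to see which millisecond values survive a round-trip. A sketch, assuming only that DATETIME stores time in ticks of 1/300 second:

```java
public class DatetimePrecision {
    // Round a millisecond value to SQL Server DATETIME precision (1/300-second ticks).
    static long roundMillis(long millis) {
        double ticks = millis * 300.0 / 1000.0;       // convert ms to 1/300-second ticks
        return Math.round(Math.round(ticks) * 1000.0 / 300.0);
    }

    public static void main(String[] args) {
        System.out.println(roundMillis(997)); // stays 997
        System.out.println(roundMillis(998)); // snaps to 997
        System.out.println(roundMillis(999)); // rolls over to 1000 (the next second)
    }
}
```

This is exactly why a Java timestamp is "1-2 ms off" after the database round-trip.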
Since you are using SQL Server 2008 R2, you should make sure to use the DATETIME2 data type instead of DATETIME.
That said, @RedFilter makes a good point: why are you relying on the timestamp when you can use the generated ID instead?
This feels wrong.
Other than the objectID, this is the only unique identifier
Databases have the concept of a unique identifier for a reason. You should really use that to retrieve an instance of your object.
You can use the get method on the Hibernate session and take advantage of the session and second level caches as well.
With your approach, you execute a query every time you retrieve your object.