I use @Cacheable(cacheNames = "rates", key = "#car.name").
Can I set up a TTL for this cache? And can the TTL apply per car.name key?
For example, I want to set name = "rates" with a TTL of 60 seconds. Running the Java code:
time 0: car.name = 1, returns "11"
time 30: car.name = 2, returns "22"
time 60: the car.name = 1 key should be gone.
time 90: the car.name = 2 key should be gone.
And I want to set multiple TTLs for multiple cache names, e.g. name = "rates2" with a TTL of 90 seconds.
You can't: @Cacheable is static configuration, and what you want is more on the dynamic side. Keep in mind that Spring just provides an abstraction that is supposed to fit all providers. You should either specify different cache regions for the different entries, or run a background process that invalidates the keys that need invalidation.
The time-to-live setting is per region when statically configured.
If you walk away from static configuration, you can set the expiration while inserting an entry, but then you are leaving Spring (one size fits all, remember) and entering the territory of the caching provider, which can be anything: Redis, Hazelcast, Ehcache, Infinispan. Each will have a different contract.
Here is an example contract from Hazelcast's IMap interface:
IMap::put(key, value, ttl, timeUnit)
But this has nothing to do with Spring.
With Spring, you can do the following:
@Cacheable(cacheNames = "floatingRates")
public List<Rate> getFloatingRates() { ... }

@Cacheable(cacheNames = "fixedRates")
public List<Rate> getFixedRates() { ... }
and then define a TTL for each cache name in the provider's configuration.
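If Redis happens to be the provider, per-cache TTLs can be wired into the cache manager. A minimal sketch, assuming Spring Data Redis and an existing RedisConnectionFactory bean (the class name CacheConfig is made up):

```java
import java.time.Duration;
import java.util.Map;

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
@EnableCaching
class CacheConfig {

    @Bean
    RedisCacheManager cacheManager(RedisConnectionFactory factory) {
        // Each cache name gets its own TTL; entries then expire independently per key.
        return RedisCacheManager.builder(factory)
                .withInitialCacheConfigurations(Map.of(
                        "rates", RedisCacheConfiguration.defaultCacheConfig()
                                .entryTtl(Duration.ofSeconds(60)),
                        "rates2", RedisCacheConfiguration.defaultCacheConfig()
                                .entryTtl(Duration.ofSeconds(90))))
                .build();
    }
}
```

Because Redis expires each stored key independently, this also gives the per-`car.name` behavior the question asks for, without leaving the Spring abstraction.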
I have a function that uses Lettuce to talk to a Redis cluster. In this function, I insert data into a stream data structure:
import io.lettuce.core.cluster.SlotHash;
...
public void addData(Map<String, String> dataMap) {
    var sync = SlotHash.getSlot(key).sync();
    sync.xadd(key, dataMap);
}
I also want to set the TTL when I insert a record for the first time, because part of the user requirement is to expire the structure after a fixed length of time; in this case it is 10 hours.
Unfortunately, the XADD command does not accept an extra parameter to set the TTL the way the SET command does.
So for now I am setting the TTL this way:
public void addData(Map<String, String> dataMap) {
    var sync = SlotHash.getSlot(key).sync();
    sync.xadd(key, dataMap);
    sync.expire(key, 36000 /* 10 hours in seconds */);
}
What is the best way to ensure that I set the expiry time only once (i.e. when the stream structure is first created)? I should not set the TTL on every call, because each xadd would then be followed by an expire, effectively postponing the expiry indefinitely.
I could always check the number of items in the stream, but that is overhead. I don't want to keep flags on the Java application side, because the app could be restarted and that information would be lost from memory.
You may want to try a Lua script; the sample below sets the expiry only if it is not already set for the key, and works with any type of Redis key:
eval "local ttl = redis.call('ttl', KEYS[1]); if ttl == -1 then redis.call('expire', KEYS[1], ARGV[1]); end; return ttl;" 1 mykey 12
The script also returns the remaining TTL of the key in seconds.
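The script's branch is easy to get backwards, so here is the same expire-once rule modeled in plain Java against an in-memory stand-in for Redis (the class and method names are made up for illustration, not a Lettuce API):

```java
import java.util.HashMap;
import java.util.Map;

/** In-memory stand-in mimicking Redis TTL semantics: -1 means "no expiry set". */
class FakeRedis {
    private final Map<String, Long> ttls = new HashMap<>();

    long ttl(String key) {
        return ttls.getOrDefault(key, -1L);
    }

    void expire(String key, long seconds) {
        ttls.put(key, seconds);
    }

    /** Mirrors the Lua script: set the expiry only when none exists; return the prior TTL. */
    long expireOnce(String key, long seconds) {
        long current = ttl(key);
        if (current == -1L) {
            expire(key, seconds);
        }
        return current;
    }
}
```

The first call returns -1 and installs the TTL; every later call leaves the existing TTL untouched, which is exactly why the script avoids postponing the expiry.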
Using the following model:
@RedisHash("positions")
public class Position {

    @Id
    private String id;

    @GeoIndexed
    private Point coordinates;

    @TimeToLive(unit = TimeUnit.MINUTES)
    protected int ttl;

    // ...
}
I noticed that some data remains persisted after the time to live expires. Notice the difference in the keys * output before and after the expire event:
Before
127.0.0.1:6379> keys *
1) "positions:336514e6-3e52-487a-a88b-98b110ec1c28"
2) "positions:coordinates"
3) "positions:336514e6-3e52-487a-a88b-98b110ec1c28:idx"
4) "positions"
5) "positions:336514e6-3e52-487a-a88b-98b110ec1c28:phantom"
After
127.0.0.1:6379> keys *
1) "positions:coordinates"
2) "positions:336514e6-3e52-487a-a88b-98b110ec1c28:idx"
3) "positions"
4) "positions:336514e6-3e52-487a-a88b-98b110ec1c28:phantom"
Only the positions:336514e6-3e52-487a-a88b-98b110ec1c28 entry was deleted.
I also noticed that, after some more time, the *:phantom entry is deleted as well, but not the rest. Is this a bug, or is it required to configure/implement something more?
Your application needs to stay active: Redis Repositories use keyspace events to get notified about expiration, so that Spring Data Redis can clean up the index structures.
Redis supports expiry on top-level keys only; it does not support expiry on list/set elements.
The :phantom key has a slightly longer expiration, which is why it expires after the original key. It is used to provide the expired hash values for index cleanup and the like.
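One sketch of that wiring, assuming Spring Data Redis: keyspace-event listening can be enabled on the repository configuration so that expirations are observed from the moment the application starts (the class name RedisRepositoryConfig is made up):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.core.RedisKeyValueAdapter.EnableKeyspaceEvents;
import org.springframework.data.redis.repository.configuration.EnableRedisRepositories;

@Configuration
@EnableRedisRepositories(enableKeyspaceEvents = EnableKeyspaceEvents.ON_STARTUP)
class RedisRepositoryConfig {
    // With ON_STARTUP, Spring Data Redis subscribes to Redis expiration events
    // immediately, so the :idx and secondary-index entries are removed when a
    // hash expires, instead of lingering as in the keys * listing above.
}
```

Note this still requires the application to be running when the key expires; events fired while no listener is subscribed are lost.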
I am using set to put values into an IMap, where I set the TTL.
The problem I am trying to solve: when I read a key from the map, I want to be able to get its corresponding TTL. I am new to Hazelcast and would appreciate some help.
val testMap: IMap[String, String] = hc.getNativeInstance().getMap(testhcMap)
if (!testMap.containsKey(key)) {
  val duration = TimeUnit.HOURS
  val ttlLen: Long = 1
  testMap.set(key, event, ttlLen, duration)
  return true
}
The above snippet sets the values. I want to add one more check before inserting data into the IMap: check whether the remaining TTL is less than an hour, and act based on that.
This should help you out:
IMap<String, String> foo;
foo.getEntryView(key).getExpirationTime();
You cannot access the TTL value directly. You would have to store the deadline (currentTime + timeout = deadline) in either the key or the value before you actually store the entry in Hazelcast. The easiest way might be an envelope-like class holding the actual value plus the deadline.
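A minimal sketch of such an envelope (the class and method names are made up for illustration, not a Hazelcast API); the deadline travels with the value, so the remaining TTL can be recomputed on every read:

```java
import java.io.Serializable;
import java.util.concurrent.TimeUnit;

/** Wraps a cached value together with its absolute expiry deadline. */
class Envelope<V> implements Serializable {
    private final V value;
    private final long deadlineMillis; // absolute expiry time in epoch millis

    Envelope(V value, long ttl, TimeUnit unit, long nowMillis) {
        this.value = value;
        this.deadlineMillis = nowMillis + unit.toMillis(ttl);
    }

    V value() {
        return value;
    }

    /** Remaining TTL in milliseconds at the given clock reading; 0 if already expired. */
    long remainingMillis(long nowMillis) {
        return Math.max(0, deadlineMillis - nowMillis);
    }
}
```

Store `new Envelope<>(v, 1, TimeUnit.HOURS, System.currentTimeMillis())` with the same TTL you pass to `IMap.set`, then on read compare `remainingMillis(System.currentTimeMillis())` against one hour.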
I'm using Jedis client to store geo coordinates in Redis.
Is there any way to set an expiry time for a single member in Redis? I know I can set an expiry time for a whole key.
For example, I added the three coordinates below, and now I want to expire the "Bahn" member in 10 seconds.
redis.geoadd(key, 8.6638775, 49.5282537, "Weinheim");
redis.geoadd(key, 8.3796281, 48.9978127, "EFS9");
redis.geoadd(key, 8.665351, 49.553302, "Bahn");
Behind the scenes, GEOADD uses a ZSET to store its data.
You can store the same data (without the geolocation) in a second ZSET, this time with a Unix timestamp as the score, using a regular ZADD command:
ZADD expirationzset <expiration date> <data>
You can fetch the expired data from this second ZSET using:
ZRANGEBYSCORE expirationzset -inf <current unix timestamp>
Then you have to remove the expired members from both ZSETs, using ZREM for the geolocation ZSET and ZREMRANGEBYSCORE for the expiration ZSET:
ZREM geolocationzset <expired_data1> <expired_data2> <expired_data3> ...
ZREMRANGEBYSCORE expirationzset -inf <current unix timestamp>
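The bookkeeping is easy to mis-order, so here is the same two-ZSET scheme modeled in plain Java, with in-memory maps standing in for the two ZSETs (the class and method names are made up for illustration, not a Jedis API):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** In-memory model of the geolocation ZSET + expiration ZSET pair. */
class ExpiringGeoSet {
    private final Map<String, double[]> geo = new HashMap<>();    // member -> lon/lat
    private final Map<String, Long> expiration = new HashMap<>(); // member -> unix ts

    void add(String member, double lon, double lat, long expiresAt) {
        geo.put(member, new double[] {lon, lat}); // models GEOADD
        expiration.put(member, expiresAt);        // models ZADD expirationzset
    }

    /** Models ZRANGEBYSCORE -inf now, then ZREM + ZREMRANGEBYSCORE. */
    List<String> purgeExpired(long now) {
        List<String> expired = expiration.entrySet().stream()
                .filter(e -> e.getValue() <= now)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
        expired.forEach(m -> { geo.remove(m); expiration.remove(m); });
        return expired;
    }

    boolean contains(String member) {
        return geo.containsKey(member);
    }
}
```

In a real deployment, `purgeExpired` would be a periodic job issuing the ZRANGEBYSCORE / ZREM / ZREMRANGEBYSCORE commands against Redis; the important invariant is that a member is removed from both sets together.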
Here is what I'm trying to do:
I have a list of Twitter user IDs; for each one of them I need to retrieve the complete list of its follower IDs and friend IDs. I don't need anything else, no screen names, etc.
I'm using twitter4j, btw.
Here is how I'm doing it: for each user, I execute the following code in order to get the complete list of follower IDs:
long lCursor = -1;
do {
    IDs response = t.getFollowersIDs(id, lCursor);
    long[] tab = response.getIDs();
    for (long val : tab) {
        myIdList.add(val);
    }
    lCursor = response.getNextCursor();
} while (lCursor != 0);
My problem: according to this page: https://dev.twitter.com/docs/api/1.1/get/followers/ids
the request rate limit for getFollowersIDs() is 15. Considering this method returns at most 5000 IDs per call, it means it is only possible to get 15*5000 IDs (or 15 users, if each has fewer than 5000 followers).
This is really not enough for what I'm trying to do.
Am I doing something wrong? Are there any solutions to improve this? (even slightly)
Thanks for your help :)
The rate limit for that endpoint in v1.1 is 15 calls per 15 minutes per access token. See https://dev.twitter.com/docs/rate-limiting/1.1 for more information about the limits.
With that in mind, if you have an access token for each of your users, you should be able to fetch up to 75,000 (15*5000) follower IDs every 15 minutes for each access token.
If you only have one access token, you'll unfortunately be limited in the manner you described: you will have to detect when your application hits the rate limit and resume processing once the 15 minutes are up.
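A back-of-the-envelope helper for planning, based on the limits described above (15 calls per 15-minute window per token, up to 5000 IDs per call; the class and method names are made up for illustration):

```java
/** Rough planning math for the v1.1 followers/ids endpoint limits. */
class RateLimitMath {
    static final int CALLS_PER_WINDOW = 15;
    static final int IDS_PER_CALL = 5000;
    static final int WINDOW_MINUTES = 15;

    /** Upper bound, in minutes, on fetching totalIds follower IDs with the given tokens. */
    static long minutesToFetch(long totalIds, int tokens) {
        long calls = (totalIds + IDS_PER_CALL - 1) / IDS_PER_CALL;    // ceil division
        long callsPerWindow = (long) CALLS_PER_WINDOW * tokens;
        long windows = (calls + callsPerWindow - 1) / callsPerWindow; // ceil division
        return windows * WINDOW_MINUTES;
    }
}
```

For example, 75,000 IDs fit in one window with a single token, but one ID more pushes the job into a second 15-minute window; doubling the tokens halves the number of windows needed.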