Hazelcast: get TTL of key in IMap - Java

I am using set() to put values into an IMap, where I also set the TTL.
The problem I am trying to solve: when I read a key from the map, I want to be able to get the corresponding TTL. I am new to Hazelcast and would appreciate some help.
val testMap: IMap[String, acp_event] = hc.getNativeInstance().getMap(testhcMap)
if (!testMap.containsKey(key)) {
  val duration = TimeUnit.HOURS
  val ttlLen: Long = 1
  testMap.set(key, event, ttlLen, duration)
  return true
}
The above snippet sets the values. I want to add one more check before inserting data into the IMap: if the existing entry's TTL is less than an hour, I want to take some action based on that.

This should help you out:
IMap<String, String> foo;
foo.getEntryView(key).getExpirationTime();
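A minimal sketch building on this, assuming Hazelcast 3.x package names and the map/key names from the question: EntryView.getExpirationTime() returns the absolute expiration time in epoch millis, so the remaining TTL can be derived and compared against one hour.

import com.hazelcast.core.EntryView;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

import java.util.concurrent.TimeUnit;

public final class TtlCheck {

    // returns true when the entry for the given key expires within the next hour
    public static boolean expiresWithinAnHour(HazelcastInstance hazelcastInstance, String key) {
        IMap<String, String> map = hazelcastInstance.getMap("testhcMap");
        EntryView<String, String> view = map.getEntryView(key);
        if (view == null) {
            return false; // no such entry
        }
        long remainingMillis = view.getExpirationTime() - System.currentTimeMillis();
        return remainingMillis < TimeUnit.HOURS.toMillis(1);
    }
}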

You cannot access the TTL value directly. You would have to store the deadline (currentTime + timeout = deadline) in either the key or the value before you actually store the entry in Hazelcast. The easiest way might be to use an envelope-like class that stores the actual value plus the deadline.
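A minimal sketch of such an envelope class; the class name and fields are illustrative and not part of any Hazelcast API.

import java.io.Serializable;

public class Envelope<T> implements Serializable {

    private final T value;
    private final long deadlineMillis; // epoch millis at which the entry is considered expired

    public Envelope(T value, long ttlMillis) {
        this.value = value;
        this.deadlineMillis = System.currentTimeMillis() + ttlMillis;
    }

    public T getValue() {
        return value;
    }

    // remaining lifetime; negative once the deadline has passed
    public long remainingMillis() {
        return deadlineMillis - System.currentTimeMillis();
    }
}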

Related

How to ensure the expiry of a stream data structure in redis is set once only?

I have a function that uses Lettuce to talk to a Redis cluster.
In this function, I insert data into a stream data structure.
import io.lettuce.core.cluster.SlotHash;
...
public void addData(Map<String, String> dataMap) {
    var sync = SlotHash.getSlot(key).sync();
    sync.xadd(key, dataMap);
}
I also want to set the TTL when I insert a record for the first time, because part of the user requirement is to expire the structure after a fixed length of time, in this case 10 hours.
Unfortunately, the XADD command does not accept an extra parameter to set the TTL the way SET does.
So for now I am setting the TTL this way:
public void addData(Map<String, String> dataMap) {
    var sync = SlotHash.getSlot(key).sync();
    sync.xadd(key, dataMap);
    sync.expire(key, 36000 /* 10 hours, in seconds */);
}
What is the best way to ensure that I set the expiry time only once (i.e. when the stream structure is first created)? I should not set the TTL on every call, because each xadd would be followed by a call to expire, which effectively postpones the expiry indefinitely.
I could always check the number of items in the stream data structure, but that is extra overhead. I don't want to keep flags on the Java application side, because the app could be restarted and that information would be lost from memory.
You may want to try a Lua script. The sample script below sets the expiry only if none is set for the key yet, and works with any type of key in Redis.
eval "local ttl = redis.call('ttl', KEYS[1]); if ttl == -1 then redis.call('expire', KEYS[1], ARGV[1]); end; return ttl;" 1 mykey 12
The script also returns the remaining expiry time of the key in seconds.
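A minimal sketch of wiring this into the Java code from the question, assuming Lettuce 5.1+ (which exposes both the stream and scripting commands on the cluster sync API); the class and field names are illustrative.

import io.lettuce.core.ScriptOutputType;
import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands;

import java.util.Map;

public class StreamWriter {

    // sets the expiry only when the key has no TTL yet, then returns the previous TTL
    private static final String EXPIRE_IF_UNSET =
            "local ttl = redis.call('ttl', KEYS[1]); " +
            "if ttl == -1 then redis.call('expire', KEYS[1], ARGV[1]); end; " +
            "return ttl;";

    private final RedisAdvancedClusterCommands<String, String> sync;

    public StreamWriter(RedisAdvancedClusterCommands<String, String> sync) {
        this.sync = sync;
    }

    public void addData(String key, Map<String, String> dataMap) {
        sync.xadd(key, dataMap);
        // 36000 seconds = 10 hours; EXPIRE only fires the first time this key is seen
        Long previousTtl = sync.eval(EXPIRE_IF_UNSET, ScriptOutputType.INTEGER,
                new String[] { key }, "36000");
    }
}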

Can I set a TTL for @Cacheable by key?

I use @Cacheable(cacheNames = "rates", key = "#car.name").
Can I set up a TTL for this cache, and have the TTL apply per car.name?
For example, I want to set the "rates" cache to a TTL of 60 seconds. Running the Java code:
time: 0  car.name = 1, returns "11"
time: 30 car.name = 2, returns "22"
time: 60 car.name = 1 key should be gone.
time: 90 car.name = 2 key should be gone.
I also want to set different TTLs for different cache names, e.g. "rates2" with a TTL of 90 seconds.
You can't: @Cacheable is static configuration, and what you want is on the dynamic side. Keep in mind that Spring just provides an abstraction that is supposed to fit all providers. You should either specify different regions (cache names) for the different entries, or run a background process that invalidates the keys that need invalidation.
Time to live is a per-region setting when statically configured.
If you walk away from static configuration, you can set the expiration while inserting an entry, but then you are moving away from Spring (one size fits all, remember) and into the territory of the caching provider, which can be anything: Redis, Hazelcast, Ehcache, Infinispan. Each has a different contract.
Here is an example from Hazelcast's IMap contract:
IMap::put(Key, Value, TTL, TimeUnit)
But this has nothing to do with Spring. With Spring, you can do the following:
#Cacheable(name="floatingRates")
List<Rate> floatingRates;
#Cacheable(name="fixedRates")
List<Rate> fixedRates;
and then define a TTL for each cache region, as sketched below.
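A minimal sketch of the per-region TTL, assuming Hazelcast 3.x as the provider behind Spring's cache abstraction (Spring's HazelcastCacheManager backs each cache name with an IMap, so the TTL can be declared on the corresponding map config). With Redis, Ehcache, or Infinispan the equivalent setting lives in that provider's own configuration.

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;

public class CacheTtlConfig {

    public static Config hazelcastConfig() {
        Config config = new Config();
        // entries in the "floatingRates" region expire 60 seconds after being written
        config.addMapConfig(new MapConfig("floatingRates").setTimeToLiveSeconds(60));
        // a second region with its own, independent TTL
        config.addMapConfig(new MapConfig("fixedRates").setTimeToLiveSeconds(90));
        return config;
    }
}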

How to change timestamp of records?

I'm using Fluentd (v.12, the last stable version) to send messages to Kafka. But Fluentd uses an old KafkaProducer, so the record timestamp is always set to -1.
Thus I have to use the WallclockTimestampExtractor to set the timestamp of the record to the point in time when the message arrives in Kafka.
Is there a Kafka Streams-specific solution?
The timestamp I'm really interested in is sent by Fluentd within the message:
"timestamp":"1507885936","host":"V.X.Y.Z."
The record representation in Kafka:
offset = 0, timestamp= - 1, key = null, value = {"timestamp":"1507885936","host":"V.X.Y.Z."}
I would like to have a record like this in Kafka:
offset = 0, timestamp= 1507885936, key = null, value = {"timestamp":"1507885936","host":"V.X.Y.Z."}
My workaround would look like this:
write a consumer to extract the timestamp (https://kafka.apache.org/0110/javadoc/org/apache/kafka/streams/processor/TimestampExtractor.html)
write a producer to produce a new record with the timestamp set (ProducerRecord(String topic, Integer partition, Long timestamp, K key, V value))
I would prefer a Kafka Streams solution, if there is one.
You can write a very simple Kafka Streams application like this:
KStreamBuilder builder = new KStreamBuilder();
builder.stream("input-topic").to("output-topic");
and configure the application with a custom TimestampExtractor that extracts the timestamp from the record value and returns it.
Kafka Streams will use the returned timestamps when writing the records back to Kafka.
Note: if you have out-of-order data (i.e. timestamps that are not strictly ordered), the result will contain out-of-order timestamps, too. Kafka Streams uses the returned timestamps when writing back to Kafka (i.e. whatever the extractor returns is used as the record metadata timestamp). Also note that on write, the timestamp of the currently processed input record is used for all generated output records; this holds for version 1.0 but might change in future releases.
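A minimal sketch of such an extractor, assuming Kafka 0.11+/1.0 (the two-argument extract signature) and the Fluentd payload shown above. Parsing with a regex is only for illustration; a real implementation would use a JSON library.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PayloadTimestampExtractor implements TimestampExtractor {

    private static final Pattern TS = Pattern.compile("\"timestamp\"\\s*:\\s*\"?(\\d+)\"?");

    @Override
    public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
        Matcher m = TS.matcher(String.valueOf(record.value()));
        if (m.find()) {
            // the payload carries epoch seconds; Kafka record timestamps are epoch millis
            return Long.parseLong(m.group(1)) * 1000L;
        }
        return previousTimestamp; // fall back when the payload has no usable field
    }
}

The extractor is then registered via the Streams timestamp extractor configuration property (default.timestamp.extractor in recent versions).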
Update:
In general, you can modify timestamps via the Processor API: when calling context.forward(), you can set the output record timestamp by passing To.all().withTimestamp(...) as a parameter to forward(). A sketch follows.
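A minimal sketch of that Processor API route, assuming Kafka Streams 2.0+ (where To.all().withTimestamp(...) is available); the class name and the hard-coded timestamp are placeholders.

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.To;

public class TimestampRewriter implements Transformer<String, String, KeyValue<String, String>> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        long newTimestamp = 1507885936000L; // placeholder: derive this from the value in real code
        context.forward(key, value, To.all().withTimestamp(newTimestamp));
        return null; // the record has already been forwarded explicitly
    }

    @Override
    public void close() { }
}

The transformer would be attached to the topology via stream.transform(TimestampRewriter::new) before writing to the output topic.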

How to get feature records for plan estimate changes using the Lookback API

I am using the Rally Lookback API with Java. I am trying to fetch historical data for features; the sample code I am using is shown below.
LookbackApi lookbackApi = new LookbackApi();
lookbackApi.setCredentials("username", "password");
lookbackApi.setWorkspace("47903209423");
lookbackApi.setServer("https://rally1.rallydev.com");
//lookbackApi.setWorkspace("90432948");
LookbackQuery query = lookbackApi.newSnapshotQuery();
query.addFindClause("_TypeHierarchy", "PortfolioItem/Feature");
query.setPagesize(200) // set pagesize to 200 instead of the default 20k
     .setStart(200) // ask for the second page of data
     .requireFields("ScheduleState", // A useful set of fields for defects, add any others you may want
                    "ObjectID",
                    "State",
                    "Project",
                    "PlanEstimate",
                    "_ValidFrom",
                    "_ValidTo")
     .sortBy("_UnformattedID")
     .hydrateFields("ScheduleState", "State", "PlanEstimate", "Project"); // ScheduleState will come back as an OID if it doesn't get hydrated
LookbackResult resultSet = query.execute();
int resultCount = resultSet.Results.size();
Map<String, Object> firstSnapshot = resultSet.Results.get(0);
Iterator<Map<String, Object>> iterator = resultSet.getResultsIterator();
while (iterator.hasNext()) {
    Map<String, Object> snapshot = iterator.next();
}
I need a way to add a condition so that the query fetches all the records from history where the plan estimate changed, but ignores other history for any feature and the underlying user stories. I need it this way so that we can track plan estimate changes while avoiding fetching unnecessary data and reducing the time this takes.
I'm not familiar with the Java toolkit, but using the raw Lookback API, you would accomplish this with a filter clause like {"_PreviousValues.PlanEstimate": {"$exists": true}}.
Map<String, Object> ifExist = new HashMap<>();
ifExist.put("$exists", true);
// Note: true is a Java boolean; be careful, the string "true" will not work.
query.addFindClause("_PreviousValues.PlanEstimate", ifExist);
Additionally, you need to consider adding "_PreviousValues.PlanEstimate" to .requireFields() in case you need the previous value returned; only "PlanEstimate" itself needs to be hydrated. A combined sketch follows.
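A minimal sketch that combines the points above with the query from the question, reusing the lookbackApi instance and the toolkit calls already shown there; the field choices are illustrative.

Map<String, Object> ifExist = new HashMap<>();
ifExist.put("$exists", true); // Java boolean, not the string "true"

LookbackQuery query = lookbackApi.newSnapshotQuery();
query.addFindClause("_TypeHierarchy", "PortfolioItem/Feature");
query.addFindClause("_PreviousValues.PlanEstimate", ifExist);
query.requireFields("ObjectID", "PlanEstimate", "_PreviousValues.PlanEstimate",
                    "_ValidFrom", "_ValidTo")
     .hydrateFields("PlanEstimate");

LookbackResult resultSet = query.execute();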

DynamoDB's withLimit clause with DynamoDBMapper.query

I am using DynamoDBMapper for a class, let's say "User" (username being the hash key), which has a "Status" field. It is a hash + range key table, and every time a user's status changes (changes are extremely infrequent), we add a new entry to the table along with the timestamp (which is the range key). To fetch the current status, this is what I am doing:
DynamoDBQueryExpression expr =
        new DynamoDBQueryExpression(new AttributeValue().withS(userName))
                .withScanIndexForward(false)
                .withLimit(1);
PaginatedQueryList<User> result = this.getMapper().query(User.class, expr);
if (result == null || result.size() == 0) {
    return null;
}
for (final User user : result) {
    System.out.println(user.getStatus());
}
For some reason, this prints all the statuses the user has had so far. I have set scanIndexForward to false so the results are in descending order, and I set a limit of 1, so I expect this to return only the latest entry in the table for that username.
However, even when I look at the wire logs, I see a huge number of entries being returned, far more than 1. For now, I am using:
final String currentStatus = result.get(0).getStatus();
What I am trying to understand is: what is the point of the withLimit clause in this case, or am I doing something wrong?
In March 2013, a user on the AWS forums complained about the same problem.
A representative from Amazon pointed him to the queryPage function.
It seems the limit is not a limit on the number of elements returned, but rather a limit on the chunk of elements retrieved in a single API call, so queryPage might help; see the sketch below.
You could also look into the pagination loading strategy configuration.
Also, you can always open a GitHub issue for the team.
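A minimal sketch of the queryPage approach, assuming the AWS SDK for Java v1 DynamoDBMapper and a setter on the mapped User class for the hash key (the setter name is an assumption).

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBQueryExpression;
import com.amazonaws.services.dynamodbv2.datamodeling.QueryResultPage;

public class LatestStatusFetcher {

    private final DynamoDBMapper mapper;

    public LatestStatusFetcher(DynamoDBMapper mapper) {
        this.mapper = mapper;
    }

    public User fetchLatestStatus(String userName) {
        User hashKey = new User();
        hashKey.setUsername(userName); // assumed setter for the hash key attribute

        DynamoDBQueryExpression<User> expr = new DynamoDBQueryExpression<User>()
                .withHashKeyValues(hashKey)
                .withScanIndexForward(false) // newest first (descending range key)
                .withLimit(1);               // at most one item per page request

        // queryPage issues a single request, so the limit of 1 is honored
        QueryResultPage<User> page = mapper.queryPage(User.class, expr);
        return page.getResults().isEmpty() ? null : page.getResults().get(0);
    }
}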
