Sorry for the poor title but I didn't know how else to phrase my use case.
I'm trying to use a Guava cache to load user profile objects keyed by their IDs. The catch is that the profiles may change over time, so I need to key the request by the date as well. Further, I'd only like to cache a single profile for a single user (instead of 7 different profiles for every day of the week for a single user).
Is there any way to replace existing cache entries with newly loaded ones only if the date changes, instead of adding a new cache entry for the new unique key?
For clarity:
A sample key would look like <user id, date>
If I have a cached entry that is keyed by <123, "2013-02-13">, and a request comes in for <123, "2013-02-14">, there should only be one entry in the cache for user 123 after loading the new profile.
Thanks!
It sounds like what you should be doing is to have a Cache<UserId, DateAndProfile>, and then to check yourself if the DateAndProfile needs to be overwritten. The Guava caching API isn't going to let you treat different keys as "sort of the same" in any fancy way.
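For example, here is a minimal sketch of that approach; Profile, DateAndProfile, and loadProfile(...) are placeholders for your own types and loading logic:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class ProfileCache {

    interface Profile {} // placeholder for your profile type

    static final class DateAndProfile {
        final String date;     // e.g. "2013-02-14"
        final Profile profile;
        DateAndProfile(String date, Profile profile) {
            this.date = date;
            this.profile = profile;
        }
    }

    private final Cache<Long, DateAndProfile> cache =
            CacheBuilder.newBuilder().maximumSize(10_000).build();

    public Profile get(long userId, String date) {
        DateAndProfile cached = cache.getIfPresent(userId);
        if (cached == null || !cached.date.equals(date)) {
            // date changed (or nothing cached): load and overwrite, so each
            // user has at most one entry no matter how many dates come in
            cached = new DateAndProfile(date, loadProfile(userId, date));
            cache.put(userId, cached);
        }
        return cached.profile;
    }

    private Profile loadProfile(long userId, String date) {
        throw new UnsupportedOperationException("fetch from your backing store");
    }
}

With this, a request for <123, "2013-02-14"> simply replaces the <123, "2013-02-13"> entry rather than sitting beside it.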
I'm transitioning from Ehcache 2.x to Ehcache 3.3.1 and I can't find a way to get the time-to-live configuration for a cache at runtime. Previously I used:
cache.getCacheConfiguration().getTimeToLiveSeconds()
Now, it looks like I need to do something akin to:
cache.getRuntimeConfiguration().getExpiry().getExpiryForCreation().getLength()
but, getExpiryForCreation() requires a key, value pair for a specific element and appears to return the duration for that element.
Am I missing something in the API or docs?
I will post here the same answer as on the ehcache mailing list.
An Expiry implementation can be very dynamic and select the expiry time using a given cached key and value.
If you know that you did something like
Expirations.timeToLiveExpiration(Duration.of(20, TimeUnit.SECONDS))
to create it, then it won't be dynamic. So you can do
cache.getRuntimeConfiguration().getExpiry().getExpiryForCreation(null, null)
to get the duration of a cache entry after creation.
If you then want to dynamically change the TTL, that is possible, but you will need to provide your own Expiry implementation (not really hard to do) with a setter for the TTL.
However, the new value will only apply to newly added entries. Existing entries won't see their TTLs change, because for performance reasons we calculate the expiration timestamp when the entry is added instead of reapplying the duration every time.
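For illustration, a minimal sketch of such an Expiry with a TTL setter, written against the Ehcache 3.3 Expiry and ValueSupplier interfaces (the class name and field are my own):

import java.util.concurrent.TimeUnit;
import org.ehcache.ValueSupplier;
import org.ehcache.expiry.Duration;
import org.ehcache.expiry.Expiry;

public class MutableTtlExpiry<K, V> implements Expiry<K, V> {

    private volatile Duration ttl;

    public MutableTtlExpiry(long length, TimeUnit unit) {
        this.ttl = Duration.of(length, unit);
    }

    // Takes effect for entries added after this call; existing entries
    // keep the expiration timestamp computed when they were created.
    public void setTtl(long length, TimeUnit unit) {
        this.ttl = Duration.of(length, unit);
    }

    @Override
    public Duration getExpiryForCreation(K key, V value) {
        return ttl;
    }

    @Override
    public Duration getExpiryForAccess(K key, ValueSupplier<? extends V> value) {
        return null; // null means "leave the expiry unchanged"
    }

    @Override
    public Duration getExpiryForUpdate(K key, ValueSupplier<? extends V> oldValue, V newValue) {
        return ttl;
    }
}

You would pass an instance to withExpiry(...) when building the cache configuration and keep a reference to it so you can call the setter later.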
I want to search for an input in a list. That list resides in a database. I see two options for doing that:
Hit the DB for each search and return the result.
Keep a copy in memory, synced with the table, search in memory, and return the result.
I like the second option as it will be faster. However, I am confused about how to keep the list in sync with the table.
Example: I have a list L = [12, 11, 14, 42, 56]
and I receive an input: 14
I need to return whether or not the input exists in the list. The list can be updated by other applications, and I need to keep it in sync with the table.
What would be the most efficient approach here, and how do I keep the list in sync with the database?
Is there any way my application can be informed of changes in the table, so that I can reload the list on demand?
Instead of recreating your own implementation of something that already exists, I would leverage Hibernate's Second Level Cache (2LC) with an implementation such as EhCache.
By using a 2LC, you can specify the time-to-live expiration time for your entities and once they expire, any query would reload them from the database. If the entity cache has not yet expired, Hibernate will hydrate them from the 2LC application cache rather than the database.
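As a rough sketch, marking an entity cacheable might look like this (the entity, region name, and strategy are illustrative; the TTL itself is set in the cache provider's configuration, e.g. ehcache.xml):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "listEntries")
public class ListEntry {

    @Id
    private Long id;

    // the value other applications may update
    private Integer value;
}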
If you are using Spring, you might also want to take a look at @Cacheable. This operates at the component/bean tier, allowing Spring to cache a result set into a named region. See their documentation for more details.
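A minimal sketch, assuming caching is enabled with @EnableCaching and a cache region named "idLookups" is configured (the table and column names are made up):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class IdLookupService {

    private final JdbcTemplate jdbcTemplate;

    public IdLookupService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Cacheable("idLookups")
    public boolean exists(int id) {
        // hits the database only on a cache miss; repeated calls with the
        // same id are answered from the cache until the entry is evicted
        Integer count = jdbcTemplate.queryForObject(
                "SELECT COUNT(*) FROM my_list WHERE id = ?", Integer.class, id);
        return count != null && count > 0;
    }
}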
To satisfy your requirement, you should control reads and writes in one place; otherwise, there will always be cases where the data is out of sync.
I get a lot of logs from my API. I analyse those logs to get interesting information, like how many users the API had this month or what types of activities they perform.
All of the analysis I do depends on a period, so the timestamp is very important for me.
Currently I use indexes on the timestamp. The problem is that the timestamp is continuous.
My question is: which database is the most appropriate for my use case?
I've heard about key/value databases; would it make sense to use the timestamp as a key?
Thanks.
This is a two-year-old article from IBM that talks more about SQL implementation, but it is also possibly something to keep in mind when you do a NoSQL implementation:
"Why CURRENT TIMESTAMP produces poor primary keys" - https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/current_timestamp?lang=en
Of course, your app would be different; I'm not sure of the granularity of your time-stamping, but it is possible to have two items logged at the same timestamp.
You might be better off creating some other form of unique key algorithm for your key-value store, adding some sort of serialization per timestamp. So the first item at a timestamp is ".1", the second ".2", etc. So you'd have some sort of timestamp.serialid format.
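A minimal in-memory sketch of that timestamp.serialid idea (the names are illustrative, and a real version would need to prune old timestamps):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class LogKeyGenerator {

    // per-timestamp counters; note this map grows without bound as written
    private final ConcurrentMap<Long, AtomicLong> serials = new ConcurrentHashMap<>();

    // first entry in a millisecond gets "<ts>.1", the second "<ts>.2", etc.
    public String nextKey(long timestampMillis) {
        long serial = serials
                .computeIfAbsent(timestampMillis, t -> new AtomicLong())
                .incrementAndGet();
        return timestampMillis + "." + serial;
    }
}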
The other thought I have is: are you merging API log files from multiple applications/processes or machines? You might be able to do some sort of elementid.appid.timestamp.serialid to make a unique key.
It all depends on your use case, so I can't say more for sure. I also wonder what you want to do with your key-value store in terms of reads/analysis after the fact, as that might significantly alter your NoSQL solution. If you are planning to do a lot of log analysis, then yes, there's a good reason to put that into a NoSQL database, especially if you want fast analysis of recent data while pushing some of the older items back to disk for storage.
As for databases, obviously each vendor will stick up for their product; but choose the best tool for the job. Best to try before you buy, and test things out for your specific setup. I'm from Aerospike, so I'm obviously biased towards it as a Key-Value store: http://www.aerospike.com/
Talked to a Very Smart Guy today, and he also suggested that you might want to use something like "milliseconds since date-time 'x'" as a primary key. Depending on what you are logging, there might still be a chance of collision with that as a primary key.
Therefore, another suggestion would be to take all entries for that primary key (ex: all log entries for that millisecond) and load them into the same record, in a kind of "bucket." You'd need application logic to parse out the multiple log entries under the same primary key, but that's another way to skin the cat.
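An in-memory sketch of that bucket idea, with illustrative names:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class MillisecondBuckets {

    private final ConcurrentMap<Long, List<String>> buckets = new ConcurrentHashMap<>();

    // all entries sharing a timestamp land in one "record"
    public void append(long timestampMillis, String logEntry) {
        buckets.computeIfAbsent(timestampMillis, t -> new CopyOnWriteArrayList<>())
               .add(logEntry);
    }

    // the application parses the multiple entries back out on read
    public List<String> entriesAt(long timestampMillis) {
        return buckets.getOrDefault(timestampMillis, Collections.emptyList());
    }
}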
I am storing some data in a hash map. Now I want to modify the values associated with a key based on user input, and store them this way permanently.
To make myself more clear, I have a hashmap like this:
public static HashMap<String, Integer> mymap = new HashMap<String, Integer>();
mymap.put("Hi",2);
mymap.put("Hello",3);
I will take feedback from the user, and if they want, I will, say, store 4 against "Hello". I want these changes to be saved for future reference.
I have heard about the Reflection API in Java, but I am not sure whether that will serve the purpose.
The Reflection API allows one to manipulate/access data that is not accessible otherwise, or data on a class that is unknown at compile time.
Here, it is really not needed. All you need is to put() the element into the map; it will "remove" the old value for the key you just inserted (if it is already there) and associate the key with the newly added value.
So, basically, all you need to do is myMap.put(key, newValue), and the implementation of the Map (assuming it is a correct one, of course) will take care of the rest.
If you want to store the data between runs of the program, you will have to save the map to disk. In order to do so, you can use serialization, or Properties in some cases.
Make sure that you load the map from disk once the program starts, or you will not see the values you stored.
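For example, a minimal serialization sketch (the file name is made up):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;

public class MapStore {

    private static final File FILE = new File("mymap.ser");

    // call this after every user update (or on shutdown)
    public static void save(HashMap<String, Integer> map) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(FILE))) {
            out.writeObject(map);
        }
    }

    // call this once at startup so stored values are visible
    @SuppressWarnings("unchecked")
    public static HashMap<String, Integer> load() throws IOException, ClassNotFoundException {
        if (!FILE.exists()) {
            return new HashMap<>(); // first run: nothing stored yet
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(FILE))) {
            return (HashMap<String, Integer>) in.readObject();
        }
    }
}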
Just say mymap.put(key, value);. It will update the value for a matching key; if the key is not there, it will insert a new entry, e.g.
mymap.put("Hello",4);
If you don't want to insert a value for a new key, e.g. "World", you can add a check like this:
if (mymap.containsKey(key)) {
    mymap.put(key, value);
} else {
    // no existing key found
}
The Preferences API makes it easy to store a small amount of data on disk. It's usually used to store configuration data. It's similar to the Windows registry.
Here's an introduction: http://docs.oracle.com/javase/1.4.2/docs/guide/lang/preferences.html
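A minimal sketch for this use case (the node path is made up):

import java.util.prefs.Preferences;

public class PrefsExample {

    public static void main(String[] args) {
        Preferences prefs = Preferences.userRoot().node("myapp/wordcounts");

        // store the updated value; it survives program restarts
        prefs.putInt("Hello", 4);

        // read it back later, with a default if it was never stored
        System.out.println("Hello -> " + prefs.getInt("Hello", 0));
    }
}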
I'm working with an enterprise level Java back end application and I need to build in token based user authentication. The front end utilizes PHP and communicates with the Java back end via SOAP.
I thought about using Guava's HashBiMap to help me with the problem. It would be useful to me because I could generate UUID tokens as the keys and store User objects as the values in a static HashBiMap. When a User first successfully logs in, the User will be added to the HashBiMap and the login response will return the generated UUID token. Subsequent SOAP requests for the same user will be made using the token only.
The problem I'm facing now is I need some sort of eviction logic that would allow these tokens to be evicted after 30 minutes of inactivity. In my research it appears that the HashBiMap does not natively support eviction like Guava's MapMaker does.
Does anyone have any recommendations on how I could use the HashBiMap and support eviction for inactivity? If this approach is not ideal, I'm open to other strategies.
Update:
I think I need to use a HashBiMap because I want to be able to lookup a User object in the map and get their already existing token if the User is still in the map. For example, if a User closes their browser within the 30 minute window and a few minutes later returns and logs back in again, I need to check to see if the User already exists in the map so I can return their existing token (since it technically is still valid).
The simplest answer is that no, you can't have a HashBiMap with automatic eviction. The maps that MapMaker makes are specialized concurrent maps. HashBiMap is basically just a wrapper around two HashMaps.
One option might be to store the UUID to User mapping in a MapMaker-created map with eviction and store the User to UUID mapping in another MapMaker-created map that has weak values. When the entry in the map with eviction is evicted, the entry in the inverse map should be invalidated soon after because of the UUID weak reference being cleared (assuming no references to the UUID are held elsewhere). Even if that mapping were still there when the user goes to log in again, when you look up the UUID in the map with eviction and discover no entry for it, you know you need to generate a new UUID and create new mappings.
Of course, you probably need to consider any potential concurrency issues when doing all this.
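A minimal sketch of that two-map idea; it uses CacheBuilder for the evicting side (MapMaker's expiration methods moved there in later Guava versions), and User is a placeholder for your own type with proper equals/hashCode:

import java.util.UUID;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.collect.MapMaker;

public class TokenRegistry {

    interface User {} // placeholder: your User type, with equals/hashCode

    // token -> user, evicted 30 minutes after last access
    private final Cache<UUID, User> userByToken = CacheBuilder.newBuilder()
            .expireAfterAccess(30, TimeUnit.MINUTES)
            .build();

    // user -> token; the UUID is held weakly, so this entry can go away
    // once the token is evicted above and referenced nowhere else
    private final ConcurrentMap<User, UUID> tokenByUser =
            new MapMaker().weakValues().makeMap();

    public synchronized UUID tokenFor(User user) {
        UUID token = tokenByUser.get(user);
        if (token == null || userByToken.getIfPresent(token) == null) {
            // no valid token: generate a new one and create both mappings
            token = UUID.randomUUID();
            tokenByUser.put(user, token);
            userByToken.put(token, user);
        }
        return token;
    }

    public User lookup(UUID token) {
        return userByToken.getIfPresent(token);
    }
}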
To echo @ColinD's answer, HashBiMap is a non-lazy map wrapper; as such, you're not going to automatically see changes from the MapMaker map reflected in the BiMap.
All is not lost, though. @ColinD suggested using two maps. To take this a step further, why not wrap these two maps in a custom BiMap implementation that is view-based rather than copying the source map (as HashBiMap does)? This would give you the expressive API of BiMap with the custom functionality you require.