I'm working on an enterprise-level Java back-end application, and I need to build in token-based user authentication. The front end is written in PHP and communicates with the Java back end via SOAP.
I thought about using Guava's HashBiMap to help me with the problem. It would be useful to me because I could generate UUID tokens as the keys and store User objects as the values in a static HashBiMap. When a User first successfully logs in, the User will be added to the HashBiMap and the login response will return the generated UUID token. Subsequent SOAP requests for the same user will be made using the token only.
The problem I'm facing now is that I need some sort of eviction logic that allows these tokens to be evicted after 30 minutes of inactivity. From my research, it appears that HashBiMap does not natively support eviction the way Guava's MapMaker does.
Does anyone have any recommendations on how I could use the HashBiMap and support eviction for inactivity? If this approach is not ideal, I'm open to other strategies.
Update:
I think I need to use a HashBiMap because I want to be able to look up a User object in the map and get their existing token if the User is still in the map. For example, if a User closes their browser within the 30-minute window and returns a few minutes later to log in again, I need to check whether the User already exists in the map so I can return their existing token (since it is technically still valid).
The simplest answer is that no, you can't have a HashBiMap with automatic eviction. The maps that MapMaker makes are specialized concurrent maps. HashBiMap is basically just a wrapper around two HashMaps.
One option might be to store the UUID-to-User mapping in a MapMaker-created map with eviction, and store the User-to-UUID mapping in another MapMaker-created map that has weak values. When an entry in the map with eviction is evicted, the entry in the inverse map should be invalidated soon afterward because the weak reference to the UUID is cleared (assuming no references to the UUID are held elsewhere). Even if that mapping were still there when the user goes to log in again, looking up the UUID in the map with eviction and finding no entry tells you that you need to generate a new UUID and create new mappings.
Of course, you probably need to consider any potential concurrency issues when doing all this.
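To make that concrete, here is a minimal sketch of the two-map approach. It uses CacheBuilder (the successor to MapMaker's expiration features) for the evicting side; the generic type U stands in for your User class, and the class and method names are my invention, not a library API:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.collect.MapMaker;
import java.util.UUID;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;

// Sketch only: U stands in for your User type (it needs sane equals/hashCode).
public final class TokenRegistry<U> {
    // token -> user; entries expire after 30 minutes of inactivity.
    private final Cache<String, U> tokenToUser = CacheBuilder.newBuilder()
            .expireAfterAccess(30, TimeUnit.MINUTES)
            .build();

    // user -> token, with weak values so an entry can vanish once the
    // token string is no longer referenced by the eviction map.
    private final ConcurrentMap<U, String> userToToken =
            new MapMaker().weakValues().makeMap();

    public synchronized String login(U user) {
        String token = userToToken.get(user);
        if (token != null && tokenToUser.getIfPresent(token) != null) {
            return token; // session still valid; the lookup refreshed its timer
        }
        token = UUID.randomUUID().toString(); // stale or absent: issue a new token
        tokenToUser.put(token, user);
        userToToken.put(user, token);
        return token;
    }

    public U lookup(String token) {
        return tokenToUser.getIfPresent(token); // also counts as activity
    }
}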
To echo @ColinD's answer, HashBiMap is a non-lazy map wrapper; as such, you're not going to automatically see changes from the MapMaker map reflected in the BiMap.
All is not lost, though. @ColinD suggested using two maps. To take this a step further, why not wrap those two maps in a custom BiMap implementation that is view-based rather than one that copies the source maps (as HashBiMap does)? This would give you the expressive API of BiMap with the custom functionality you require.
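As a rough sketch of the view idea (this is not a full com.google.common.collect.BiMap implementation, and the method set is invented for the example):

import java.util.Map;

// A live two-way view over the two underlying maps. Because every call
// delegates, evictions in the backing maps are visible immediately,
// unlike HashBiMap, which copies its source at construction time.
public final class BiMapView<K, V> {
    private final Map<K, V> forward;   // e.g. the eviction map (via Cache.asMap())
    private final Map<V, K> backward;  // e.g. the weak-value inverse map

    public BiMapView(Map<K, V> forward, Map<V, K> backward) {
        this.forward = forward;
        this.backward = backward;
    }

    public V get(K key) { return forward.get(key); }
    public K getInverse(V value) { return backward.get(value); }

    public void put(K key, V value) { // keep the two sides in step
        forward.put(key, value);
        backward.put(value, key);
    }
}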
Related
I need a persistent cache that holds up to several million six-character base-36 strings and has the following behavior:
- When clients retrieve N strings from the cache, they are retrieved in order of base-36 value, e.g. AAAAAA, then AAAAAB, etc.
- When strings are retrieved, they are also removed from the cache, so no other client will receive the same strings.
I am currently using MapDB as my persistent cache (I'd use Ehcache, but it requires a license for persistent storage).
MapDB gives me a Map that I can put/get elements from, and it handles persisting them to disk.
I have noticed that Java's ConcurrentSkipListMap class would help with my problem, since it keeps entries ordered and its pollFirstEntry method lets me retrieve and remove entries in order.
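For illustration, here is an in-memory sketch (with made-up payloads) of the retrieve-in-order-and-remove behavior I want:

import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class OrderedConsumeSketch {
    public static void main(String[] args) {
        // For fixed-length uppercase strings, lexicographic order matches
        // base-36 numeric order, so plain String keys sort correctly.
        ConcurrentSkipListMap<String, String> cache = new ConcurrentSkipListMap<>();
        cache.put("AAAAAB", "payload-2");
        cache.put("AAAAAA", "payload-1");

        // Retrieve N strings: smallest key first, removed as it is handed out.
        int n = 2;
        for (int i = 0; i < n; i++) {
            Map.Entry<String, String> e = cache.pollFirstEntry(); // atomic retrieve-and-remove
            if (e == null) break;
            System.out.println(e.getKey()); // AAAAAA, then AAAAAB
        }
    }
}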
I am not sure how I can use this with MapDB though. Does anyone have any advice that can help me achieve the behavior that I have outlined?
Thanks
What you're describing doesn't sound like what most people would consider a cache. A cache is essentially a shared Map, with keys mapping to values, and you'd never remove on a read because you want your cache to contain the most popular items (that's what it's for).
What you're describing (ordered set of items, consumed by clients in a fixed order) is much more like a work queue. Rather than looking at cache solutions try persistent queues like RabbitMQ, Kafka, bigqueue, etc.
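To show the work-queue shape of the problem, here is a hedged sketch using a recent Kafka consumer client. It assumes a producer writes the strings in sorted order to a single-partition topic named base36-strings (the topic and group names are my invention); with that setup, each consumer group sees the strings in base-36 order and each string is delivered once per group:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class StringConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "string-clients"); // one delivery per group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("base36-strings"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println(r.value()); // AAAAAA, AAAAAB, ... in enqueue order
                }
            }
        }
    }
}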
Currently I am using Hazelcast as a distributed cache for my application. It takes a key and gives me the value.
But it would be more helpful in my application if the cache could accept multiple keys and return the corresponding values in one function call.
Can Hazelcast do this? Or is there an alternative solution, like Ehcache or Redis?
Hazelcast's IMap has the getAll API for this. Basically,
Map<K, V> getAll(Set<K> keys);
gives you the key-value pairs for the given set of keys.
See the javadoc for details.
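For example, here is a hedged sketch (the map name and values are invented, and it assumes the Hazelcast 3.x-style instance API):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class GetAllExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> map = hz.getMap("numbers");
        map.put("a", 1);
        map.put("b", 2);

        Set<String> keys = new HashSet<>(Arrays.asList("a", "b"));
        Map<String, Integer> values = map.getAll(keys); // one call, many keys
        System.out.println(values); // {a=1, b=2} (iteration order not guaranteed)

        hz.shutdown();
    }
}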
I am not sure about Redis or Hazelcast, but Ehcache has this. Check this out:
http://ehcache.org/apidocs/net/sf/ehcache/Ehcache.html
It has the method Map getAll(Collection keys), plus a bunch more bulk-operation methods.
Check this out as well for some more explanation:
http://dancing-devil.blogspot.com/2011/04/ehcache-bulk-operation-apis.html
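For reference, a rough sketch against the Ehcache 2.x bulk API (cache setup simplified to library defaults; the cache name is arbitrary):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import java.util.Arrays;
import java.util.Map;

public class EhcacheBulk {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create(); // default configuration
        manager.addCache("users");                    // built from defaultCache settings
        Cache cache = manager.getCache("users");
        cache.put(new Element("a", 10));
        cache.put(new Element("b", 20));

        Map<Object, Element> hits = cache.getAll(Arrays.asList("a", "b")); // bulk read
        System.out.println(hits.get("a").getObjectValue()); // 10

        manager.shutdown();
    }
}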
The upcoming JSR-107 / JCache standard has bulk operations defined, so every standards-compliant cache will have this.
Redis can help you do this via the MGET command; in addition, it gives you plenty of data structures through which you can get the values for many keys at once.
SET a 10
SET b 20
MGET a b
1) "10"
2) "20"

HSET "hash name" "a" 10
HSET "hash name" "b" 20
HGETALL "hash name"
1) "a"
2) "10"
3) "b"
4) "20"
The above examples show how you can use Redis to do what you need.
Yes, the standard JCache API supports this. See: https://github.com/jsr107/jsr107spec/blob/master/src/main/java/javax/cache/Cache.java
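For illustration, a sketch of the bulk read as it appears in the javax.cache API (the cache name and values are arbitrary, and a JCache provider must be on the classpath):

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class JCacheBulk {
    public static void main(String[] args) {
        CacheManager manager = Caching.getCachingProvider().getCacheManager();
        Cache<String, Integer> cache =
                manager.createCache("users", new MutableConfiguration<String, Integer>());
        cache.put("a", 10);
        cache.put("b", 20);

        Set<String> keys = new HashSet<>(Arrays.asList("a", "b"));
        Map<String, Integer> values = cache.getAll(keys); // bulk read
        System.out.println(values); // {a=10, b=20}
    }
}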
The only implementation of JCache that I'm aware of today is Oracle Coherence; see: http://docs.oracle.com/middleware/1213/coherence/develop-applications/jcache_part.htm
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.
Redis has a JCache API (JSR-107) implementation through the Redisson framework.
Sorry for the poor title but I didn't know how else to phrase my use case.
I'm trying to use a Guava cache to load user profile objects keyed by their IDs. The catch is that the profiles may change over time, so I need to key the request by the date as well. Further, I'd only like to cache a single profile for a single user (instead of 7 different profiles for every day of the week for a single user).
Is there any way to replace existing cache entries with newly loaded ones only if the date changes, instead of adding a new cache entry for the new unique key?
For clarity:
A sample key would look like <user id, date>
If I have a cached entry that is keyed by <123, "2013-02-13">, and a request comes in for <123, "2013-02-14">, there should only be one entry in the cache for user 123 after loading the new profile.
Thanks!
It sounds like what you should be doing is to have a Cache<UserId, DateAndProfile>, and then to check yourself if the DateAndProfile needs to be overwritten. The Guava caching API isn't going to let you treat different keys as "sort of the same" in any fancy way.
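Here is a minimal sketch of that approach, with a hypothetical user-ID representation, a stand-in profile type, and an invented loader:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.time.LocalDate;

public class ProfileCache {
    // Hypothetical value type pairing a profile with the date it was loaded for.
    static final class DateAndProfile {
        final LocalDate date;
        final Object profile; // stand-in for your Profile type
        DateAndProfile(LocalDate date, Object profile) {
            this.date = date;
            this.profile = profile;
        }
    }

    private final Cache<Long, DateAndProfile> cache =
            CacheBuilder.newBuilder().maximumSize(10_000).build();

    Object getProfile(long userId, LocalDate date) {
        DateAndProfile cached = cache.getIfPresent(userId);
        if (cached == null || !cached.date.equals(date)) {
            // Date changed (or nothing cached): load and overwrite, so there
            // is never more than one entry per user.
            cached = new DateAndProfile(date, loadProfile(userId, date));
            cache.put(userId, cached);
        }
        return cached.profile;
    }

    private Object loadProfile(long userId, LocalDate date) {
        return "profile-" + userId + "-" + date; // made-up loader
    }
}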
Inside my system I have data with a short lifetime: it stays current only briefly, but it should still be persisted in the data store.
This data may also change frequently for each user, for instance every minute.
The number of users is potentially large, and I want to speed up the put/get process for this data by using memcache with delayed persistence to Bigtable.
There is no problem just putting/getting objects by key. But for some use cases I need to retrieve all the data from the cache that is still alive, and the API allows me to get data only by key. Hence I need some key holder that knows all the keys of the data inside memcache. But any object may be evicted, and then I need to remove its key from the global key registry (and such an eviction listener doesn't exist in GAE). Storing all these objects in a list or a map is not acceptable for my solution because each object needs its own eviction time.
Could somebody recommend me in which way I should move?
It sounds like what you are really attempting to do is have some sort of queue for the data that you will be persisting. Memcache is not a good choice for this since, as you've said, it is not reliable (nor is it meant to be). Perhaps you would be better off using Task Queues?
Memcache isn't designed for exhaustive access, and if you need it, you're probably using it the wrong way. Memcache is a sharded hashtable, and as such really isn't designed to be enumerated.
It's not clear from your description exactly what you're trying to do, but it sounds like at the least you need to restructure your data so you're aware of the expected keys at the time you want to write it to the datastore.
I am encountering the very same problem. I might solve it by building a decorator function and wrapping the evicting function with it, so that the entity's key is automatically deleted from the key directory/placeholder in memcache when you call for eviction.
Something like this:
from google.appengine.api import memcache
from google.appengine.ext import db

def decorate_evict_decorator(key_prefix):
    def evict_decorator(evict):
        def wrapper(cls, entity_name_or_id):  # bound as a classmethod below
            mem = memcache.Client()
            # The placeholder maps key_prefix + "|" + entity_name to key_or_id.
            placeholder = mem.get("placeholder") or {}  # could use gets/cas for safety
            evict(cls, entity_name_or_id)
            # Drop the evicted entity's key from the key directory.
            placeholder.pop(key_prefix + "|" + entity_name_or_id, None)
            mem.set("placeholder", placeholder)
        return wrapper
    return evict_decorator

class car(db.Model):
    car_model = db.StringProperty(required=True)
    company = db.StringProperty(required=True)
    color = db.StringProperty(required=True)
    engine = db.StringProperty()

    @classmethod
    @decorate_evict_decorator("car")
    def evict(cls, car_model):
        pass  # delete process goes here

class engine(db.Model):
    model = db.StringProperty(required=True)
    cylinders = db.IntegerProperty(required=True)
    litres = db.FloatProperty(required=True)
    manufacturer = db.StringProperty(required=True)

    @classmethod
    @decorate_evict_decorator("engine")
    def evict(cls, engine_model):
        pass  # delete process goes here
You could improve on this according to your data structure and flow; see the Python documentation for more on decorators.
You might also want to add a cron job to keep your datastore in sync with memcache at a regular interval.
I wonder what an effective way would be to add/remove items from a really large list when your storage is memcached-like. Maybe there is some distributed storage with a Java interface that handles this problem well?
Someone may recommend Terracotta. I know about it, but that's not exactly what I need. ;)
Hazelcast 1.6 will have a distributed MultiMap implementation, where a key can be associated with a set of values.
MultiMap<String, String> multimap = Hazelcast.getMultiMap("mymultimap");
multimap.put("1", "a");
multimap.put("1", "b");
multimap.put("1", "c");
multimap.put("2", "x");
multimap.put("2", "y");
Collection<String> values = multimap.get("1"); // contains a, b, c
Hazelcast is an open source transactional, distributed/partitioned implementation of queue, topic, map, set, list, lock and executor service. It is super easy to work with; just add hazelcast.jar into your classpath and start coding. Almost no configuration is required.
Hazelcast is released under Apache license and enterprise grade support is also available. Code is hosted at Google Code.
Maybe you should also have a look at Scalaris!
You can use a key-value store to model most data structures if you ignore concurrency issues. Your requirements aren't entirely clear, so I'm going to make some assumptions about your use case. Hopefully if they are incorrect you can generalize the approach.
You can trivially create a linked list in the storage by having a known root node (let's call it 'node_root') which points to a value tuple of {data, prev_key, next_key}. The prev_key and next_key elements are key names that follow the convention 'node_foo', where foo is a UUID (ideally you can generate these sequentially; if not, use some other type of UUID). This gives you ordered access to your data.
Now, if you need O(1) removal of a key, you can add a second index to the structure with key 'data' and value 'node_foo' for the right foo. Then you can perform the removal just as you would with a linked list in memory, removing the index node when you're done.
Now, keep in mind that concurrent modification of this list is just as bad as concurrent modification of any shared data structure. If you're using something like BDBs, you can use their (excellent) transaction support to avoid this. For something without transactions or concurrency control, you'll want to provide external locking or serialize accesses to a single thread.
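Here is a rough in-memory sketch of that structure, using plain Maps as a stand-in for the key-value store and ignoring concurrency, as noted above; the node keys follow the 'node_foo' convention from this answer:

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class KvLinkedList {
    // Each stored value is the {data, prev_key, next_key} tuple described above.
    static final class Node {
        String data, prevKey, nextKey;
        Node(String data, String prevKey, String nextKey) {
            this.data = data;
            this.prevKey = prevKey;
            this.nextKey = nextKey;
        }
    }

    private final Map<String, Node> store = new HashMap<>();   // stand-in KV store
    private final Map<String, String> index = new HashMap<>(); // data -> node key, for O(1) removal

    public KvLinkedList() {
        // Circular sentinel: an empty list points back at itself.
        store.put("node_root", new Node(null, "node_root", "node_root"));
    }

    public void append(String data) {
        String key = "node_" + UUID.randomUUID();
        Node root = store.get("node_root");
        String tailKey = root.prevKey;
        store.put(key, new Node(data, tailKey, "node_root"));
        store.get(tailKey).nextKey = key; // in a real store, re-put the updated tuple
        root.prevKey = key;
        index.put(data, key);
    }

    public void remove(String data) {
        String key = index.remove(data);
        if (key == null) return;
        Node node = store.remove(key);
        store.get(node.prevKey).nextKey = node.nextKey; // unlink, as with an in-memory list
        store.get(node.nextKey).prevKey = node.prevKey;
    }

    public void printInOrder() {
        for (String k = store.get("node_root").nextKey; !k.equals("node_root");
                k = store.get(k).nextKey) {
            System.out.println(store.get(k).data);
        }
    }

    public static void main(String[] args) {
        KvLinkedList list = new KvLinkedList();
        list.append("alpha");
        list.append("beta");
        list.remove("alpha");
        list.printInOrder(); // prints: beta
    }
}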