I'm trying to find an alternative to java.util.TreeMap in a threaded environment, because of the memory TreeMap consumes and doesn't free, using Sun JDK 1.6. We have a constantly resizing TreeMap, which needs to stay sorted by key:
public class WKey implements Comparable<Object> {

    private Long ms = null;
    private Long id = null;

    public WKey(Long ms, Long id) {
        this.ms = ms;
        this.id = id;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((id == null) ? 0 : id.hashCode());
        result = prime * result + ((ms == null) ? 0 : ms.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        WKey other = (WKey) obj;
        if (id == null) {
            if (other.id != null)
                return false;
        } else if (!id.equals(other.id))
            return false;
        if (ms == null) {
            if (other.ms != null)
                return false;
        } else if (!ms.equals(other.ms))
            return false;
        return true;
    }

    // Orders by timestamp first, then by id; assumes ms and id are non-null here.
    @Override
    public int compareTo(Object arg0) {
        WKey k = (WKey) arg0;
        if (this.ms < k.ms)
            return -1;
        else if (this.ms.equals(k.ms)) {
            if (this.id < k.id)
                return -1;
            else if (this.id.equals(k.id)) {
                return 0;
            }
        }
        return 1;
    }
}
Thread 1
-------------------------
Iterator<WKey> it = result.keySet().iterator();
if (it.hasNext()) {
    WKey key = it.next();
    // Some processing here
    result.remove(key);
}
Constantly retrieves the first element within the TreeMap and then
removes it.
Threads 2, 3, and 4
-------------------------
for (Object r : rs) {
    Object[] row = (Object[]) r;
    Long ms = ((Calendar) row[1]).getTimeInMillis();
    Long id = (Long) row[0];
    WKey key = new WKey(ms, id);
    result.put(key, row);
}
These are bulk-processing threads which handle results returned from various services; the results are generally basic POJOs. Each POJO gets a key generated from its id and timestamp, using the key class above. I cannot modify the POJO to implement Comparable, so I must use this key. After keys have been built and processed, the entries are inserted into a shared tree map, from which they are pulled off in sorted order by the processing thread.
We were using:
Map<WKey, Object[]> result =
Collections.synchronizedMap(new TreeMap<WKey, Object[]>());
We also tried using ConcurrentSkipListMap:
SortedMap<WKey, Object[]> result =
new ConcurrentSkipListMap<WKey, Object[]>();
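As an aside, ConcurrentSkipListMap implements ConcurrentNavigableMap, which exposes atomic retrieval methods such as pollFirstEntry(). A minimal sketch of how that could replace the iterator/remove pattern in Thread 1 (assuming result is declared as ConcurrentNavigableMap rather than SortedMap, so the method is visible):

// Atomically removes and returns the lowest-key entry, or null if the map is empty.
Map.Entry<WKey, Object[]> first = result.pollFirstEntry();
if (first != null) {
    Object[] row = first.getValue();
    // Some processing here
}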
We are experimenting with big data and need a collection which uses memory efficiently whenever remove or put is called in a threaded environment. We are inserting records by the hundred-thousands and removing elements from the top on an as-needed basis. We need a container which can scale. The problem with TreeMap is that it never releases memory unless you recreate the container with new Collections.synchronizedMap(new TreeMap()), and that is an expensive operation to perform in a threaded environment every time an entry is removed.
Alternatively, I've been experimenting with Javolution. It has a FastSortedMap, which seems to fit in nicely. However, I find their implementation and usage of the collection rather quirky and lacking sufficient documentation and examples.
They do have a few examples listed in the doc, which relate to the classes FastSortedMap is derived from, but nothing seems to work:
A high-performance hash map with real-time behavior. Related to FastCollection, fast map supports various views.
atomic() - Thread-safe view for which all reads are mutex-free and map updates (e.g. putAll) are atomic.
shared() - View allowing concurrent modifications.
parallel() - A view allowing parallel processing including updates.
sequential() - View disallowing parallel processing.
unmodifiable() - View which does not allow any modifications.
entrySet() - FastSet view over the map entries allowing entries to be added/removed.
keySet() - FastSet view over the map keys allowing keys to be added (map entry with null value).
values() - FastCollection view over the map values (add not supported).
I instantiated the following collection as a replacement for TreeMap:
private FastMap<WKey, Object[]> result =
new FastSortedMap<WKey, Object[]>().shared();
However, once another thread touches the container, all the member functions start to fail. I still encounter null values returned from result.iterator().next(), size() sometimes hangs, result.keySet().min() is very sluggish, and result.get returns null. None of the examples in the doc really show how the concurrent views listed above are used. It's really frustrating.
I've looked at Apache Collections, but I'm afraid I might hit the same issue, as many of their sorted collections are derived from the java.util HashMaps and TreeMaps. I looked into Guava as well, but their sorted containers require you to implement Comparable on both key and value. I was trying to avoid implementing Comparable on the 'value'; I don't need to sort both objects. If I implemented Comparable on the value, I would just use a sorted list, queue, or table. Highscale and Trove don't have ordered maps. Fastutil may be a candidate, but I'd have to synchronize everything manually, and I'm trying to save time.
I've reviewed others listed in the Stack Overflow benchmark post, but the projects listed previously seem to be my best alternatives.
So far, I'm not convinced Javolution is everything they advertise on their site. My experience is that their implementation is very inconsistent, lacking in documentation, and performs rather sluggishly in threaded environments. TreeMap performs great; I just wish it wouldn't allocate in such large bursts and GC every now and then. However, I'm hoping there might be somebody out there to prove me wrong, and maybe even demonstrate appropriate usage of Javolution's collections in a threaded environment.
Otherwise, if somebody knows a way around resizing TreeMaps without using 'new', or has solved similar or alternative problems with threading and sorted maps, any info would be greatly appreciated!
Related
My requirement is to find all the objects matching given criteria.
So what I did was create a custom Predicate, passing in the matching condition and the source IDs that should be matched against the objects. If the condition matches, I internally populate a map to hold the details.
When the stream is completed, this map has all the details for the matching criteria.
So I use this map for later computation.
Most of the time it works, but it starts failing randomly.
My custom Predicate is:
public class IdentifierMapPredicate<T extends Identifiable, V> implements Predicate<T> {

    private static Logger logger = Logger.getLogger();

    private Collection<V> sourceIds;
    private BiPredicate<T, V> matchingCondition;
    private Map<V, Collection<Identifier>> identifierMap;

    public Map<V, Collection<Identifier>> getIdentifierMap() {
        return identifierMap;
    }

    public IdentifierMapPredicate(Collection<V> sourceIds,
            BiPredicate<T, V> matchingCondition) {
        this.sourceIds = sourceIds;
        this.matchingCondition = matchingCondition;
        identifierMap = new HashMap<>(sourceIds.size());
    }

    @Override
    public boolean test(T t) {
        for (V id : sourceIds) {
            if (matchingCondition.test(t, id)) {
                //logger.debug("t :{}, id :{}", t.getIdentifier(), id);
                Collection<Identifier> ids = identifierMap.get(id);
                if (ids == null) {
                    ids = new HashSet<>();
                    identifierMap.put(id, ids);
                }
                ids.add(t.getIdentifier());
                logger.debug("identifierMap :{}", identifierMap);
                return true;
            }
        }
        return false;
    }
}
I added log statements where the criteria is matched and we update the map; sometimes the map has only 1 element (even though I am expecting 2), and sometimes it has 0.
2019-11-24 09:31:52.574 PST DEBUG ForkJoinPool.commonPool-worker-5 IdentifierMapPredicate:52 - identifierMap :{}
2019-11-24 09:31:52.574 PST DEBUG ForkJoinPool.commonPool-worker-7 IdentifierMapPredicate:52 - identifierMap :{}
Is there anything wrong with the above implementation?
Would changing it to ConcurrentHashMap help?
Most of the time it works, but starts failing randomly
Yeah, classical race condition.
You're using IdentifierMapPredicate in a multi-threaded context. If you access IdentifierMapPredicate.test concurrently, threads will also use identifierMap simultaneously, which is not thread-safe.
It looks like a ConcurrentHashMap will solve the issue. Alternatively, you could mark the test method as synchronized, but I'd assume that gives you less throughput/more locking. On the other hand, it's also less headache; it's a trade-off where I'd gladly accept less headache if there aren't tight performance requirements. Just my personal preference.
Btw. nowadays you can use Map.computeIfAbsent to create new entries instead of
Collection<Identifier> ids = identifierMap.get(id);
if (ids == null) {
    ids = new HashSet<>();
    identifierMap.put(id, ids);
}
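For example, a minimal sketch of the combined fix, assuming identifierMap is switched to a ConcurrentHashMap and the value collections are made concurrent as well (with ConcurrentHashMap, computeIfAbsent is atomic):

// Atomically creates the value set on first use, then records the identifier.
identifierMap
        .computeIfAbsent(id, k -> ConcurrentHashMap.newKeySet())
        .add(t.getIdentifier());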
I have a TreeSet and a custom comparator.
I get the values from the server according to the changes in the stock.
For example: if time=0 then the server will send all the entries in the stock (unsorted);
if time=200 then the server will send entries added or deleted after time 200 (unsorted).
On the client side I am sorting the entries. My question is: which is more efficient?
1> fetch all entries first and then call the addAll method, or
2> add them one by one.
There can be millions of entries.
/////////updated///////////////////////////////////
private static Map<Integer, KeywordInfo> hashMap = new HashMap<Integer, KeywordInfo>();

// Note: the comparator must be declared before sortedSet; otherwise the TreeSet
// would be constructed from an illegal forward reference to the comparator field.
private static final Comparator<Integer> comparator = new Comparator<Integer>() {
    public int compare(Integer o1, Integer o2) {
        int integerCompareValue = o1.compareTo(o2);
        if (integerCompareValue == 0) return integerCompareValue;
        KeywordInfo k1 = hashMap.get(o1);
        KeywordInfo k2 = hashMap.get(o2);
        if (null == k1.getKeyword()) {
            if (null == k2.getKeyword())
                return integerCompareValue;
            else
                return -1;
        } else {
            if (null == k2.getKeyword())
                return 1;
            else {
                int compareString = AlphaNumericCmp.COMPARATOR.compare(
                        k1.getKeyword().toLowerCase(), k2.getKeyword().toLowerCase());
                //int compareString = k1.getKeyword().compareTo(k2.getKeyword());
                if (compareString == 0)
                    return integerCompareValue;
                return compareString;
            }
        }
    }
};

private static Set<Integer> sortedSet = new TreeSet<Integer>(comparator);
Now there is an event handler which gives me an ArrayList of updated entries; after adding them to my hashMap I am calling
final Map<Integer, KeywordInfo> mapToReturn = new SubMap<Integer, KeywordInfo>(sortedSet, hashMap);
I think your bottleneck is probably more network-related than CPU-related. A bulk operation fetching all the new entries at once would be more network-efficient.
With regard to CPU, the time required to populate a TreeSet does not differ significantly between multiple add()s and addAll(). The reason is that TreeSet relies on AbstractCollection's addAll() (http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b27/java/util/AbstractCollection.java#AbstractCollection.addAll%28java.util.Collection%29), which in turn creates an iterator and calls add() multiple times.
So, my advice on the CPU side is: choose the way that keeps your code cleaner and more readable. This is probably obtained through addAll().
In general there is less memory overhead when data is stored as it is being loaded, and this should be time-efficient too, perhaps using small buffers; memory allocation costs time as well.
However, time both solutions in a separate prototype. You really have to test with huge numbers, as network traffic costs a lot too. That is a bit of test-driven development, and it gives QA both quantitative statistics and a check on the correctness of the implementation.
The actual implementation is a red-black tree (TreeSet is backed by a TreeMap), so adding one by one will be fast if you do it right, and I don't think this behaviour will change in the near future.
For your problem, a stateful comparator may help:
// Snippet; a sketch only. Returning a constant from compare() violates the
// Comparator contract, so lookups/removals are unreliable while "anarchy" is on.
public class NaturalComparator<A> implements Comparator<A> {

    private boolean anarchy = false;
    private final Comparator<A> parentComparator;

    NaturalComparator(Comparator<A> parent) {
        this.parentComparator = parent;
    }

    public void setAnarchy(boolean anarchy) {
        this.anarchy = anarchy;
    }

    public int compare(A a, A b) {
        if (anarchy) return 1; // every new element sorts after the existing ones
        else return parentComparator.compare(a, b);
    }
}
...
NaturalComparator<Integer> naturalComparator = new NaturalComparator<Integer>(comparator);
Set<Integer> sortedSet = new TreeSet<Integer>(naturalComparator);
naturalComparator.setAnarchy(true);
sortedSet.addAll(sorted);
naturalComparator.setAnarchy(false);
I am relatively new to Struts and Java. I have been trying to understand the following piece of code.
List<LabelValueBean> dbList = getCardProductList();
ConcurrentMap<Integer, ProductItem> ret = new ConcurrentHashMap<Integer, ProductItem>();
for (LabelValueBean lb : dbList) {
    ProductItem pi = new ProductItem();
    pi.setId(Integer.valueOf(lb.getId()));
    pi.setCode(lb.getCode());
    pi.setName(lb.getDescription());
    LabelValueBeanAuxCol[] aux = lb.getLabelvaluebeanauxcol();
    pi.setTypeProduct(Boolean.TRUE);
    if (null != aux) {
        for (LabelValueBeanAuxCol element : aux) {
            if (null != element
                    && "PRDCT_SVC_IND".equals(element.getName())) {
                pi.setTypeProduct(Boolean.valueOf("Y".equals(element
                        .getValue())));
            }
        }
    }
    pi.setNeedSetup(Boolean.TRUE);
    ret.put(pi.getId(), pi);
}
return Himms2LookupUtil
        .<ConcurrentMap<Integer, ProductItem>> setValueInCache(
                Himms2Names.CARD_SERVICE_PRODUCT_LIST, ret);
}
With respect to the code block around "PRDCT_SVC_IND", how would a column name be mapped to the LabelValueBean?
Though I have an idea of the ConcurrentMap and key-value pair functionality, I am pretty unsure about most of the concepts here and have tried searching the internet without much luck. I would like a clearer overview of what the above lines actually mean (in general, of course), in terms of the concepts used here, like the ConcurrentHashMap, List<LabelValueBean>, etc.
Any inputs would be greatly appreciated.
The code is doing the following things:
1) Getting the card product list in the first line and storing the reference in a dbList object:
List<LabelValueBean> dbList = getCardProductList();
2) Creating a ConcurrentMap of key-value pairs.
3) Iterating over the list and performing the following operations on each LabelValueBean:
a) Creates a ProductItem object.
b) Sets the LabelValueBean's values (id, code, description) on the ProductItem.
c) Sets ProductItem.typeProduct to TRUE.
d) Gets Labelvaluebeanauxcol and stores it in a LabelValueBeanAuxCol[] array called aux.
e) If aux is not null, iterates over the aux array, and for each element that is not null AND whose name equals "PRDCT_SVC_IND":
sets ProductItem.typeProduct to TRUE if element.value equals "Y",
ELSE sets ProductItem.typeProduct to FALSE.
f) Sets ProductItem.needSetup to TRUE.
g) Puts ProductItem.id and the ProductItem into the ConcurrentMap.
4) Stores the ConcurrentMap in the cache.
I am writing an application where memory, and to a lesser extent speed, are vital. I have found from profiling that I spend a great deal of time in Map and Set operations. While I look at ways to call these methods less, I am wondering whether anyone out there has written, or come across, implementations that significantly improve on access time or memory overhead? or at least, that can improve these things given some assumptions?
From looking at the JDK source I can't believe that it can't be made faster or leaner.
I am aware of Commons Collections, but I don't believe it has any implementation whose goal is to be faster or leaner. Same for Google Collections.
Update: Should have noted that I do not need thread safety.
Normally these methods are pretty quick.
There are a couple of things you should check: are your hashCode methods well implemented? Are they sufficiently uniform? Otherwise you'll get rubbish performance.
http://trove4j.sourceforge.net/ <-- this is a bit quicker and saves some memory. I saved a few ms on 50,000 updates
Are you sure that you're using maps/sets correctly, i.e. not trying to iterate over all the values or something similar? Also, for example, don't do a contains and then a remove; just check the return value of remove.
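A quick sketch of that last point:

// Redundant: two lookups where one would do.
if (set.contains(x)) {
    set.remove(x);
    // ... handle the removal ...
}

// Better: remove() already reports whether the element was present.
if (set.remove(x)) {
    // ... handle the removal ...
}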
Also check whether you're using Double vs. double. I noticed a few ms of performance improvement on tens of thousands of checks.
Have you also set up the initial capacity correctly/appropriately?
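For example (a hypothetical sketch; the figure is illustrative): HashMap rehashes once its size exceeds capacity times the load factor, so sizing it up front avoids rehashing entirely:

int expected = 100000; // expected number of entries
// Dividing by the default load factor (0.75) guarantees no resize.
Map<String, Integer> map = new HashMap<String, Integer>((int) (expected / 0.75f) + 1);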
Have you looked at Trove4J ? From the website:
Trove aims to provide fast, lightweight implementations of the java.util.Collections API.
Benchmarks provided here.
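For a flavor of the API, a minimal sketch using one of Trove's primitive maps (assuming trove4j 3.x on the classpath): no Integer boxing, lower memory per entry.

import gnu.trove.map.hash.TIntIntHashMap;

TIntIntHashMap counts = new TIntIntHashMap();
counts.put(42, 1);
counts.adjustOrPutValue(42, 1, 1); // add 1 if the key exists, else insert 1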
Here are the ones I know, in addition to Google and Commons Collections:
http://trove4j.sourceforge.net/
http://javolution.org/
http://fastutil.dsi.unimi.it/
Of course you can always implement your own data structures which are optimized for your use cases. To be able to help better, we would need to know your access patterns and what kind of data you store in the collections.
Try improving the performance of your equals and hashCode methods; this could help speed up the standard containers' use of your objects.
You can extend AbstractMap and/or AbstractSet as a starting point. I did this not too long ago to implement a binary trie based map (the key was an integer, and each "level" on the tree was a bit position. left child was 0 and right child was 1). This worked out well for us because the key was EUI-64 identifiers, and for us most of the time the top 5 bytes were going to be the same.
To implement an AbstractMap, you need to at the very least implement the entrySet() method, to return a set of Map.Entry, each of which is a key/value pair.
To implement a set, you extend AbstractSet and supply implementations of size() and iterator().
That's at the very least, however. You will want to also implement get and put, since the default map is unmodifiable, and the default implementation of get iterates through the entrySet looking for a match.
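To make that skeleton concrete, here is a minimal illustrative sketch (my own example, not the trie-based map described above); the class name and linear-scan storage are assumptions chosen for brevity:

import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Array-backed map: one flat list, linear scans, minimal per-entry overhead.
public class ArrayBackedMap<K, V> extends AbstractMap<K, V> {

    private final List<Map.Entry<K, V>> entries = new ArrayList<Map.Entry<K, V>>();

    // The one method AbstractMap requires: a Set view over the entries.
    @Override
    public Set<Map.Entry<K, V>> entrySet() {
        return new AbstractSet<Map.Entry<K, V>>() {
            @Override
            public Iterator<Map.Entry<K, V>> iterator() {
                return entries.iterator();
            }

            @Override
            public int size() {
                return entries.size();
            }
        };
    }

    // Without this override the map is unmodifiable; get() falls back to
    // AbstractMap's entrySet() scan, which is also O(n) here.
    @Override
    public V put(K key, V value) {
        for (Map.Entry<K, V> e : entries) {
            if (e.getKey().equals(key)) {
                return e.setValue(value);
            }
        }
        entries.add(new SimpleEntry<K, V>(key, value));
        return null;
    }
}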
You can possibly save a little on memory by:
(a) using a stronger, wider hash code, and thus avoiding having to store the keys;
(b) by allocating yourself from an array, avoiding creating a separate object per hash table entry.
In case it's useful, here's a no-frills Java implementation of the Numerical Recipes hash table that I've sometimes found useful. You can key directly on a CharSequence (including Strings), or else you must come up with a strong-ish 64-bit hash function for your objects yourself.
Remember, this implementation doesn't store the keys, so if two items have the same hash code (which you'd expect after hashing on the order of 2^32, i.e. a couple of billion, items if you have a good hash function), then one item will overwrite the other:
public class CompactMap<E> implements Serializable {
    static final long serialVersionUID = 1L;

    private static final int MAX_HASH_TABLE_SIZE = 1 << 24;
    private static final int MAX_HASH_TABLE_SIZE_WITH_FILL_FACTOR = 1 << 20;

    private static final long[] byteTable;
    private static final long HSTART = 0xBB40E64DA205B064L;
    private static final long HMULT = 7664345821815920749L;

    static {
        byteTable = new long[256];
        long h = 0x544B2FBACAAF1684L;
        for (int i = 0; i < 256; i++) {
            for (int j = 0; j < 31; j++) {
                h = (h >>> 7) ^ h;
                h = (h << 11) ^ h;
                h = (h >>> 10) ^ h;
            }
            byteTable[i] = h;
        }
    }

    private int maxValues;
    private int[] table;
    private int[] nextPtrs;
    private long[] hashValues;
    private E[] elements;
    private int nextHashValuePos;
    private int hashMask;
    private int size;

    @SuppressWarnings("unchecked")
    public CompactMap(int maxElements) {
        int sz = 128;
        int desiredTableSize = maxElements;
        if (desiredTableSize < MAX_HASH_TABLE_SIZE_WITH_FILL_FACTOR) {
            desiredTableSize = desiredTableSize * 4 / 3;
        }
        desiredTableSize = Math.min(desiredTableSize, MAX_HASH_TABLE_SIZE);
        while (sz < desiredTableSize) {
            sz <<= 1;
        }
        this.maxValues = maxElements;
        this.table = new int[sz];
        this.nextPtrs = new int[maxValues];
        this.hashValues = new long[maxValues];
        // elements is indexed in parallel with nextPtrs/hashValues, so it is
        // sized to maxValues rather than to the table size.
        this.elements = (E[]) new Object[maxValues];
        Arrays.fill(table, -1);
        this.hashMask = sz - 1;
    }

    public int size() {
        return size;
    }

    public E put(CharSequence key, E val) {
        return put(hash(key), val);
    }

    public E put(long hash, E val) {
        int hc = (int) hash & hashMask;
        int[] table = this.table;
        int k = table[hc];
        if (k != -1) {
            int lastk;
            do {
                if (hashValues[k] == hash) {
                    E old = elements[k];
                    elements[k] = val;
                    return old;
                }
                lastk = k;
                k = nextPtrs[k];
            } while (k != -1);
            k = nextHashValuePos++;
            nextPtrs[lastk] = k;
        } else {
            k = nextHashValuePos++;
            table[hc] = k;
        }
        if (k >= maxValues) {
            throw new IllegalStateException("Hash table full (size " + size + ", k " + k + ")");
        }
        hashValues[k] = hash;
        nextPtrs[k] = -1;
        elements[k] = val;
        size++;
        return null;
    }

    public E get(long hash) {
        int hc = (int) hash & hashMask;
        int[] table = this.table;
        int k = table[hc];
        if (k != -1) {
            do {
                if (hashValues[k] == hash) {
                    return elements[k];
                }
                k = nextPtrs[k];
            } while (k != -1);
        }
        return null;
    }

    public E get(CharSequence hash) {
        return get(hash(hash));
    }

    public static long hash(CharSequence cs) {
        if (cs == null) return 1L;
        long h = HSTART;
        final long hmult = HMULT;
        final long[] ht = byteTable;
        for (int i = cs.length() - 1; i >= 0; i--) {
            char ch = cs.charAt(i);
            h = (h * hmult) ^ ht[ch & 0xff];
            h = (h * hmult) ^ ht[(ch >>> 8) & 0xff];
        }
        return h;
    }
}
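A hypothetical usage sketch, to make the no-stored-keys behaviour concrete:

CompactMap<String> map = new CompactMap<String>(1000);
map.put("alpha", "A");
map.put("beta", "B");
// Lookups go through the same 64-bit hash; the key itself was never stored,
// so a (very unlikely) hash collision would silently return the wrong value.
String a = map.get("alpha"); // "A"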
Check out GNU Trove:
http://trove4j.sourceforge.net/index.html
There is at least one implementation in commons-collections that is specifically built for speed: Flat3Map. It's pretty specific, in that it'll be really quick as long as there are no more than 3 elements.
I suspect that you may get more mileage by following @thaggie's advice and looking at the equals/hashCode method times.
You said you profiled some classes but have you done any timings to check their speed? I'm not sure how you'd check their memory usage. It seems like it would be nice to have some specific figures at hand when you're comparing different implementations.
There are some notes here and links to several alternative data-structure libraries: http://www.leepoint.net/notes-java/data/collections/ds-alternatives.html
I'll also throw in a strong vote for fastutil. (mentioned in another response, and on that page) It has more different data structures than you can shake a stick at, and versions optimized for primitive types as keys or values. (A drawback is that the jar file is huge, but you can presumably trim it to just what you need)
I went through something like this a couple of years ago -- very large Maps and Sets as well as very many of them. The default Java implementations consumed way too much space. In the end I rolled my own, but only after I examined the actual usage patterns that my code required. For example, I had a known large set of objects that were created early on and some Maps were sparse while others were dense. Other structures grew monotonically (no deletes) while in other places it was faster to use a "collection" and do the occasional but harmless extra work of processing duplicate items than it was to spend the time and space on avoiding duplicates. Many of the implementations I used were array-backed and exploited the fact that my hashcodes were sequentially allocated and thus for dense maps a lookup was just an array access.
Take away messages:
look at your algorithm,
consider multiple implementations, and
remember that most of the libraries out there are catering for general purpose use (eg insert and delete, a range of sizes, neither sparse nor dense, etc) so they're going to have overheads that you can probably avoid.
Oh, and write unit tests...
At times when I have seen Map and Set operations using a high percentage of CPU, it has indicated that I have overused Map and Set, and restructuring my data has almost eliminated collections from the top 10% of CPU consumers.
See if you can avoid copies of collections, iterating over collections and any other operation which results in accessing most of the elements of the collection and creating objects.
It's probably not so much the Map or Set which causing the problem, but the objects behind them. Depending upon your problem, you might want a more database-type scheme where "objects" are stored as a bunch of bytes rather than Java Objects. You could embed a database (such as Apache Derby) or do your own specialist thing. It's very dependent upon what you are actually doing. HashMap isn't deliberately big and slow...
Commons Collections has FastArrayList, FastHashMap and FastTreeMap but I don't know what they're worth...
Commons Collections has an identity map which compares through ==, and should be faster.
Joda Primitives has primitive collections, as does Trove. I experimented with Trove and found that its memory usage is better.
I was mapping collections of many small objects with a few Integers. Altering these to ints saved nearly half the memory (although it required some messier application code to compensate).
It seems reasonable to me that sorted trees should consume less memory than hash maps, because they don't require the load factor (although if anyone can confirm this, or has a reason why it's actually dumb, please post in the comments).
Which version of the JVM are you using?
If you are not on 6 (although I suspect you are) then a switch to 6 may help.
If this is a server application running on Windows, try using -server to select the correct HotSpot implementation.
I use the following package (Koloboke) for an int-int hashmap, because it supports primitive types and it stores two ints in one long variable, which is cool for me. koloboke
I know it's simple to implement, but I want to reuse something that already exists.
The problem I want to solve is that I load configuration (from XML, so I want to cache it) for different pages, roles, etc., so the combination of inputs can grow quite large (but in 99% of cases will not). To handle this 1%, I want some maximum number of items in the cache...
Till now I have found org.apache.commons.collections.map.LRUMap in Apache Commons, and it looks fine, but I want to check something else as well. Any recommendations?
You can use a LinkedHashMap (Java 1.4+) :
// Create cache
final int MAX_ENTRIES = 100;
Map cache = new LinkedHashMap(MAX_ENTRIES + 1, .75F, true) {
    // This method is called just after a new entry has been added
    public boolean removeEldestEntry(Map.Entry eldest) {
        return size() > MAX_ENTRIES;
    }
};

// Add to cache
Object key = "key";
cache.put(key, object);

// Get object
Object o = cache.get(key);
if (o == null && !cache.containsKey(key)) {
    // Object not in cache. If null is not a possible value in the cache,
    // the call to cache.containsKey(key) is not needed
}

// If the cache is to be used by multiple threads,
// the cache must be wrapped with code to synchronize the methods
cache = (Map) Collections.synchronizedMap(cache);
This is an old question, but for posterity I wanted to list ConcurrentLinkedHashMap, which is thread safe, unlike LRUMap. Usage is quite easy:
ConcurrentMap<K, V> cache = new ConcurrentLinkedHashMap.Builder<K, V>()
.maximumWeightedCapacity(1000)
.build();
And the documentation has some good examples, like how to make the LRU cache size-based instead of number-of-items based.
Here is my implementation which lets me keep an optimal number of elements in memory.
The point is that I do not need to keep track of what objects are currently being used since I'm using a combination of a LinkedHashMap for the MRU objects and a WeakHashMap for the LRU objects.
So the cache capacity is no less than MRU size plus whatever the GC lets me keep. Whenever objects fall off the MRU they go to the LRU for as long as the GC will have them.
public class Cache<K, V> {

    final Map<K, V> MRUdata;
    final Map<K, V> LRUdata;

    public Cache(final int capacity) {
        LRUdata = new WeakHashMap<K, V>();
        MRUdata = new LinkedHashMap<K, V>(capacity + 1, 1.0f, true) {
            protected boolean removeEldestEntry(Map.Entry<K, V> entry) {
                if (this.size() > capacity) {
                    // Demote the eldest MRU entry to the weakly-referenced LRU tier.
                    LRUdata.put(entry.getKey(), entry.getValue());
                    return true;
                }
                return false;
            }
        };
    }

    public synchronized V tryGet(K key) {
        V value = MRUdata.get(key);
        if (value != null)
            return value;
        value = LRUdata.get(key);
        if (value != null) {
            // Promote back to the MRU tier on a hit.
            LRUdata.remove(key);
            MRUdata.put(key, value);
        }
        return value;
    }

    public synchronized void set(K key, V value) {
        LRUdata.remove(key);
        MRUdata.put(key, value);
    }
}
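A hypothetical usage sketch of the class above:

Cache<String, byte[]> cache = new Cache<String, byte[]>(100);
byte[] bytes = new byte[] { 1, 2, 3 };
cache.set("config", bytes);          // lands in the MRU tier
byte[] hit = cache.tryGet("config"); // MRU hit; LRU hits are promoted back
// Once 100 newer entries arrive, "config" is demoted to the weak LRU tier
// and survives only until the GC clears it.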
I also had the same problem and I haven't found any good libraries... so I've created my own.
simplelrucache provides thread-safe, very simple, non-distributed LRU caching with TTL support. It provides two implementations:
Concurrent based on ConcurrentLinkedHashMap
Synchronized based on LinkedHashMap
You can find it here.
Here is a very simple and easy to use LRU cache in Java.
Although it is short and simple it is production quality.
The code is explained (look at the README.md) and has some unit tests.