What is the fastest way to do a containsAny check? - java

In Something like 'contains any' for Java set? there are several solutions:
Collections.disjoint(A, B)
setA.stream().anyMatch(setB::contains)
Sets.intersection(set1, set2).isEmpty()
CollectionUtils.containsAny()
In my case, set1 is new ConcurrentHashMap<>().keySet() and set2 is an ArrayList.
set1 can contain up to 100 entries, set2 fewer than 10.
Or will they all do the same thing and perform similarly?

public static void main(String[] args) {
    Map<String, String> map = new ConcurrentHashMap<>();
    List<String> list = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        map.put(RandomStringUtils.randomNumeric(5), RandomStringUtils.randomNumeric(5));
    }
    for (int i = 0; i < 10; i++) {
        list.add(RandomStringUtils.randomNumeric(5));
    }
    Set<String> set = new HashSet<>(list);
    List<Runnable> methods = new ArrayList<>();
    methods.add(() -> { Collections.disjoint(map.keySet(), list); });
    methods.add(() -> { Collections.disjoint(list, map.keySet()); });
    methods.add(() -> { map.keySet().stream().anyMatch(list::contains); });
    methods.add(() -> { list.stream().anyMatch(map.keySet()::contains); });
    methods.add(() -> { Sets.intersection(map.keySet(), set).isEmpty(); });
    methods.add(() -> { Sets.intersection(set, map.keySet()).isEmpty(); });
    methods.add(() -> { CollectionUtils.containsAny(map.keySet(), list); });
    methods.add(() -> { CollectionUtils.containsAny(list, map.keySet()); });
    for (Runnable method : methods) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 100000; i++) {
            method.run();
        }
        long end = System.currentTimeMillis();
        System.out.println("took " + (end - start));
    }
}
And the winner is Collections.disjoint:
took 15
took 32
took 484
took 62
took 157
took 47
took 24
took 32

setA.stream().anyMatch(setB::contains) will be best, because all the other options use non-lazy evaluation and are performed on all the elements.
The stream is lazily evaluated and returns as soon as any match is found.
Also, from the documentation of CollectionUtils.containsAny():
In other words, this method returns true iff the intersection(java.lang.Iterable, java.lang.Iterable) of coll1 and coll2 is not empty.
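For reference, the short-circuiting behaviour is easy to check with a minimal, self-contained sketch (plain JDK collections; the helper name containsAny is illustrative):

```java
import java.util.Collections;
import java.util.List;
import java.util.Set;

public class ContainsAnyDemo {
    // True if the two collections share at least one element.
    // Collections.disjoint stops at the first common element it finds,
    // so passing the small list together with a hash-based key set
    // keeps each contains() probe cheap.
    static boolean containsAny(List<String> small, Set<String> large) {
        return !Collections.disjoint(small, large);
    }

    public static void main(String[] args) {
        Set<String> keys = Set.of("a", "b", "c");
        System.out.println(containsAny(List.of("x", "b"), keys)); // true
        System.out.println(containsAny(List.of("x", "y"), keys)); // false
    }
}
```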


Adding more threads to executorservice only makes it slower

I have this code, where I have my own homemade array class that I want to use to test the speed of some different concurrency tools in Java:
public class LongArrayListUnsafe {
    private static final ExecutorService executor
        = Executors.newFixedThreadPool(1);

    public static void main(String[] args) {
        LongArrayList dal1 = new LongArrayList();
        int n = 100_000_000;
        Timer t = new Timer();
        List<Callable<Void>> tasks = new ArrayList<>();
        tasks.add(() -> {
            for (int i = 0; i <= n; i += 2) {
                dal1.add(i);
            }
            return null;
        });
        tasks.add(() -> {
            for (int i = 0; i < n; i++) {
                dal1.set(i, i + 1);
            }
            return null;
        });
        tasks.add(() -> {
            for (int i = 0; i < n; i++) {
                dal1.get(i);
            }
            return null;
        });
        tasks.add(() -> {
            for (int i = n; i < n * 2; i++) {
                dal1.add(i + 1);
            }
            return null;
        });
        try {
            executor.invokeAll(tasks);
        } catch (InterruptedException exn) {
            System.out.println("Interrupted: " + exn);
        }
        executor.shutdown();
        try {
            executor.awaitTermination(1000, TimeUnit.MILLISECONDS);
        } catch (Exception e) {
            System.out.println("what?");
        }
        System.out.println("Using toString(): " + t.check() + " ms");
    }
}

class LongArrayList {
    // Invariant: 0 <= size <= items.length
    private long[] items;
    private int size;

    public LongArrayList() {
        reset();
    }

    public static LongArrayList withElements(long... initialValues) {
        LongArrayList list = new LongArrayList();
        for (long l : initialValues) list.add(l);
        return list;
    }

    public void reset() {
        items = new long[2];
        size = 0;
    }

    // Number of items in the list
    public int size() {
        return size;
    }

    // Return item number i
    public long get(int i) {
        if (0 <= i && i < size)
            return items[i];
        else
            throw new IndexOutOfBoundsException(String.valueOf(i));
    }

    // Replace item number i, if any, with x
    public long set(int i, long x) {
        if (0 <= i && i < size) {
            long old = items[i];
            items[i] = x;
            return old;
        } else
            throw new IndexOutOfBoundsException(String.valueOf(i));
    }

    // Add item x to end of list
    public LongArrayList add(long x) {
        if (size == items.length) {
            long[] newItems = new long[items.length * 2];
            for (int i = 0; i < items.length; i++)
                newItems[i] = items[i];
            items = newItems;
        }
        items[size] = x;
        size++;
        return this;
    }

    public String toString() {
        return Arrays.stream(items, 0, size)
            .mapToObj(Long::toString)
            .collect(Collectors.joining(", ", "[", "]"));
    }
}

public class Timer {
    private long start, spent = 0;
    public Timer() { play(); }
    public double check() { return (System.nanoTime() - start + spent) / 1e9; }
    public void pause() { spent += System.nanoTime() - start; }
    public void play() { start = System.nanoTime(); }
}
The implementation of the LongArrayList class is not so important; it's not thread-safe.
The driver code with the executor service performs a bunch of operations on the array list, with 4 different tasks doing it, each 100_000_000 times.
The problem is that when I give the thread pool more threads ("Executors.newFixedThreadPool(2);"), it only becomes slower.
For example, for one thread a typical timing is 1.0366974 ms, but if I run it with 3 threads, the time ramps up to 5.7932714 ms.
What is going on? Why are more threads so much slower?
EDIT:
To boil the issue down, I made this much simpler drivercode, that has four tasks that simply add elements:
ExecutorService executor
    = Executors.newFixedThreadPool(2);
LongArrayList dal1 = new LongArrayList();
int n = 100_000_00;
Timer t = new Timer();
for (int i = 0; i < 4; i++) {
    executor.execute(new Runnable() {
        @Override
        public void run() {
            for (int j = 0; j < n; j++)
                dal1.add(j);
        }
    });
}
executor.shutdown();
try {
    executor.awaitTermination(1000, TimeUnit.MILLISECONDS);
} catch (Exception e) {
    System.out.println("what?");
}
System.out.println("Using toString(): " + t.check() + " ms");
Here it still does not seem to matter how many threads I allocate; there is no speedup at all. Could this simply be because of overhead?
There are some problems with your code that make it hard to reason about why the time increases with more threads.
btw
public double check() { return (System.nanoTime()-start+spent)/1e9; }
gives you back seconds not milliseconds, so change this:
System.out.println("Using toString(): " + t.check() + " ms");
to
System.out.println("Using toString(): " + t.check() + "s");
First problem:
LongArrayList dal1 = new LongArrayList();
dal1 is shared among all threads, and those threads are updating that shared variable without any mutual exclusion around it, consequently, leading to race conditions. Moreover, this can also lead to cache invalidation, which can increase your overall execution time.
The other thing is that you may have load balancing problems. You have 4 parallel tasks, but clearly the last one
tasks.add(() -> {
for (int i = n; i < n * 2; i++) {
dal1.add(i + 1);
}
return null;});
is the most computing-intensive task. Even if the 4 tasks run in parallel, and without the problems that I have mentioned (i.e., lack of synchronization around the shared data), the last task will dictate the overall execution time.
Not to mention that parallelism does not come for free; it adds overhead (e.g., scheduling the parallel work and so on), which might be high enough to make parallelizing the code not worthwhile in the first place. In your code, there is at least the overhead of waiting for the tasks to be completed, and also the overhead of shutting down the pool of executors.
Another possibility that would also explain why you are not getting ArrayIndexOutOfBoundsException all over the place is that the first 3 tasks are so small that they are being executed by the same thread. This would again make your overall execution time very dependent on the last task, and on the overhead of executor.shutdown() and executor.awaitTermination. However, even if that is the case, the order of execution of tasks, and which threads will execute them, is typically non-deterministic, and consequently, is not something that your application should rely upon. Funnily enough, when I changed your code to immediately execute the tasks (i.e., executor.execute), I got ArrayIndexOutOfBoundsException all over the place.
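If the goal is to actually get a speedup from more threads, the usual fix is to remove the sharing: give each task its own private data and merge once at the end. A rough sketch with plain JDK types (the names and workload here are illustrative, not your LongArrayList):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerTaskArraysDemo {
    // Each task fills its own private array: no shared mutable state,
    // so no races and no cache-line contention between threads.
    static long run(int n, int taskCount) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(taskCount);
        List<Callable<long[]>> jobs = new ArrayList<>();
        for (int t = 0; t < taskCount; t++) {
            final long base = (long) t * n;
            jobs.add(() -> {
                long[] local = new long[n];
                for (int i = 0; i < n; i++) local[i] = base + i;
                return local;
            });
        }
        long total = 0;
        for (Future<long[]> f : executor.invokeAll(jobs)) {
            total += f.get().length; // merge step: combine results once, at the end
        }
        executor.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("elements produced: " + run(1_000_000, 4));
    }
}
```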

Right way to combine group of collections

I have written some code to combine, in parallel, groups of collections containing pairs [String, Integer]. Example:
Thread 1
[Car,1][Bear,1][Car,1]
Thread 2
[River,1][Car,1][River,1]
Result should be collections of each unique pair key (sorted alphabetically)
[Bear,1]
[Car,1][Car,1][Car,1]
[River,1][River,1][River,1]
My solution is shown below, but sometimes I don't get the expected result, or a ConcurrentModificationException gets thrown from the list that contains the result collections.
List<Collection<Pair<String, Integer>>> combiningResult = new ArrayList<>();

private void startMappingPhase() throws Exception {
    SimpleDateFormat formatter = new SimpleDateFormat("HH:mm:ss.SSS");
    Invoker invoker = new Invoker(mappingClsPath, "Mapping", "mapper");
    List<Callable<Integer>> tasks = new ArrayList<>();
    for (String line : fileLines) {
        tasks.add(() -> {
            try {
                combine((Collection<Pair<String, Integer>>) invoker.invoke(line));
            } catch (Exception e) {
                e.printStackTrace();
                executor.shutdownNow();
                errorOccurred = true;
                return 0;
            }
            return 1;
        });
        if (errorOccurred)
            Utils.showFatalError("Some error occurred, see log for more details");
    }
    long start = System.nanoTime();
    System.out.println(tasks.size() + " Tasks");
    System.out.println("Started at " + formatter.format(new Date()) + "\n");
    executor.invokeAll(tasks);
    long elapsedTime = System.nanoTime() - start;
    partitioningResult.forEach(c -> {
        System.out.println(c.size() + "\n" + c);
    });
    System.out.print("\nFinished in " + (elapsedTime / 1_000_000_000.0) + " seconds\n");
}

private void partition(Collection<Pair<String, Integer>> pairs) {
    Set<Pair<String, Integer>> uniquePairs = new LinkedHashSet<>(pairs);
    for (Pair<String, Integer> uniquePair : uniquePairs) {
        int pFrequencyCount = Collections.frequency(pairs, uniquePair);
        Optional<Collection<Pair<String, Integer>>> collResult =
            combiningResult.stream().filter(c -> c.contains(uniquePair)).findAny();
        if (collResult.isPresent()) {
            collResult.ifPresent(c -> {
                for (int i = 0; i < pFrequencyCount; i++)
                    c.add(uniquePair);
            });
        } else {
            Collection<Pair<String, Integer>> newColl = new ArrayList<>();
            for (int i = 0; i < pFrequencyCount; i++)
                newColl.add(uniquePair);
            combiningResult.add(newColl);
        }
    }
}
I tried CopyOnWriteArrayList instead of ArrayList, but sometimes it gets an incomplete result like
[Car,1][Car,1] instead of three entries. My question:
Is there a way to achieve what I'm trying to do without getting ConcurrentModificationException and incomplete results?
If you are trying to modify a single collection from multiple threads, you will need to add a synchronized block or use one of the JDK classes supporting concurrency. The latter will typically perform better than a synchronized block.
https://docs.oracle.com/javase/tutorial/essential/concurrency/collections.html
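For instance, the combining step could lean on ConcurrentHashMap.merge(), which is atomic, instead of mutating a shared ArrayList. A sketch that only models the pair counts (the key-to-count representation is an assumption; your Pair type would need mapping onto it):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentCombineDemo {
    // Combine partial results from several threads into one count per key.
    // merge() is atomic, so no external locking is needed.
    static Map<String, Integer> combine(List<List<String>> partials) throws InterruptedException {
        Map<String, Integer> combined = new ConcurrentHashMap<>();
        ExecutorService executor = Executors.newFixedThreadPool(partials.size());
        for (List<String> partial : partials) {
            executor.execute(() -> partial.forEach(
                key -> combined.merge(key, 1, Integer::sum)));
        }
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        return new TreeMap<>(combined); // sorted alphabetically, as required
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(combine(List.of(
            List.of("Car", "Bear", "Car"),
            List.of("River", "Car", "River")))); // {Bear=1, Car=3, River=3}
    }
}
```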

Find number of elements in range from map object

Map structure and data is given below
Map<String, BigDecimal>
A, 12
B, 23
C, 67
D, 99
Now I want to group the values into ranges; the output has the range as key and the number of elements in it as value, like below:
0-25, 2
26-50, 0
51-75, 1
76-100, 1
How can we do this using Java streams?
You can do it like that:
public class MainClass {
    public static void main(String[] args) {
        Map<String, BigDecimal> aMap = new HashMap<>();
        aMap.put("A", new BigDecimal(12));
        aMap.put("B", new BigDecimal(23));
        aMap.put("C", new BigDecimal(67));
        aMap.put("D", new BigDecimal(99));
        Map<String, Long> o = aMap.entrySet().stream().collect(Collectors.groupingBy(a -> {
            // Do the logic here to return the group-by key
            // (>= / <= so the boundary values fall into a range too)
            if (a.getValue().compareTo(new BigDecimal(0)) >= 0 &&
                a.getValue().compareTo(new BigDecimal(25)) <= 0)
                return "0-25";
            if (a.getValue().compareTo(new BigDecimal(26)) >= 0 &&
                a.getValue().compareTo(new BigDecimal(50)) <= 0)
                return "26-50";
            if (a.getValue().compareTo(new BigDecimal(51)) >= 0 &&
                a.getValue().compareTo(new BigDecimal(75)) <= 0)
                return "51-75";
            if (a.getValue().compareTo(new BigDecimal(76)) >= 0 &&
                a.getValue().compareTo(new BigDecimal(100)) <= 0)
                return "76-100";
            return "not-found";
        }, Collectors.counting()));
        System.out.print("Result=" + o);
    }
}
The result is: Result={0-25=2, 76-100=1, 51-75=1}
I couldn't find a better way to do that check for BigDecimals, but you can think about how to improve it :) Maybe extract an external method that does that trick.
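That external method could look something like this (a sketch; the inclusive bounds on both ends are an assumption about the intended ranges):

```java
import java.math.BigDecimal;

public class RangeCheckDemo {
    // True if lo <= value <= hi, saving the repeated compareTo chains.
    static boolean between(BigDecimal value, int lo, int hi) {
        return value.compareTo(BigDecimal.valueOf(lo)) >= 0
            && value.compareTo(BigDecimal.valueOf(hi)) <= 0;
    }

    public static void main(String[] args) {
        System.out.println(between(new BigDecimal(12), 0, 25)); // true
        System.out.println(between(new BigDecimal(25), 0, 25)); // true (boundary included)
        System.out.println(between(new BigDecimal(26), 0, 25)); // false
    }
}
```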
You may use a solution for regular ranges, e.g.
BigDecimal range = BigDecimal.valueOf(25);
inputMap.values().stream()
    .collect(Collectors.groupingBy(
        bd -> bd.subtract(BigDecimal.ONE).divide(range, 0, RoundingMode.DOWN),
        TreeMap::new, Collectors.counting()))
    .forEach((group, count) -> {
        group = group.multiply(range);
        System.out.printf("%3.0f - %3.0f: %s%n",
            group.add(BigDecimal.ONE), group.add(range), count);
    });
which will print:
1 - 25: 2
51 - 75: 1
76 - 100: 1
(not using the irregular range 0 - 25)
or a solution with explicit ranges:
TreeMap<BigDecimal, String> ranges = new TreeMap<>();
ranges.put(BigDecimal.ZERO,         " 0 - 25");
ranges.put(BigDecimal.valueOf(26),  "26 - 50");
ranges.put(BigDecimal.valueOf(51),  "51 - 75");
ranges.put(BigDecimal.valueOf(76),  "76 - 99");
ranges.put(BigDecimal.valueOf(100), ">= 100 ");
inputMap.values().stream()
    .collect(Collectors.groupingBy(
        bd -> ranges.floorEntry(bd).getValue(), TreeMap::new, Collectors.counting()))
    .forEach((group, count) -> System.out.printf("%s: %s%n", group, count));
0 - 25: 2
51 - 75: 1
76 - 99: 1
which can also get extended to print the absent ranges:
Map<BigDecimal, Long> groupToCount = inputMap.values().stream()
.collect(Collectors.groupingBy(bd -> ranges.floorKey(bd), Collectors.counting()));
ranges.forEach((k, g) -> System.out.println(g+": "+groupToCount.getOrDefault(k, 0L)));
0 - 25: 2
26 - 50: 0
51 - 75: 1
76 - 99: 1
>= 100 : 0
But note that putting numeric values into ranges like, e.g. “0 - 25” and “26 - 50” only makes sense if we’re talking about whole numbers, precluding values between 25 and 26, raising the question why you’re using BigDecimal instead of BigInteger. For decimal numbers, you would normally use ranges like “0 (inclusive) - 25 (exclusive)” and “25 (inclusive) - 50 (exclusive)”, etc.
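A sketch of that half-open convention, grouping each value v into [25k, 25(k+1)) by flooring the quotient directly (no subtract(ONE) needed once the lower bound is inclusive):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class HalfOpenRangesDemo {
    // Group index k = floor(v / range); each key k stands for [k*range, (k+1)*range).
    static Map<BigDecimal, Long> group(List<BigDecimal> values, BigDecimal range) {
        return values.stream()
            .collect(Collectors.groupingBy(
                v -> v.divide(range, 0, RoundingMode.FLOOR),
                TreeMap::new, Collectors.counting()));
    }

    public static void main(String[] args) {
        BigDecimal range = BigDecimal.valueOf(25);
        List<BigDecimal> values = List.of(
            new BigDecimal("12"), new BigDecimal("23"),
            new BigDecimal("25.5"), new BigDecimal("99"));
        // Decimal values like 25.5 land cleanly in [25, 50)
        group(values, range).forEach((k, c) -> System.out.printf("[%s - %s): %d%n",
            k.multiply(range), k.add(BigDecimal.ONE).multiply(range), c));
    }
}
```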
If you have a Range like this:
class Range {
    private final BigDecimal start;
    private final BigDecimal end;

    public Range(BigDecimal start, BigDecimal end) {
        this.start = start;
        this.end = end;
    }

    public boolean inRange(BigDecimal val) {
        return val.compareTo(start) >= 0 && val.compareTo(end) <= 0;
    }

    @Override
    public String toString() {
        return start + "-" + end;
    }
}
You can do this:
Map<String, BigDecimal> input = new HashMap<>();
input.put("A", BigDecimal.valueOf(12));
input.put("B", BigDecimal.valueOf(23));
input.put("C", BigDecimal.valueOf(67));
input.put("D", BigDecimal.valueOf(99));
List<Range> ranges = new ArrayList<>();
ranges.add(new Range(BigDecimal.valueOf(0), BigDecimal.valueOf(25)));
ranges.add(new Range(BigDecimal.valueOf(26), BigDecimal.valueOf(50)));
ranges.add(new Range(BigDecimal.valueOf(51), BigDecimal.valueOf(75)));
ranges.add(new Range(BigDecimal.valueOf(76), BigDecimal.valueOf(100)));
Map<Range, Long> result = new HashMap<>();
ranges.forEach(r -> result.put(r, 0L)); // Add all ranges with a count of 0
input.values().forEach( // For each value in the map
    bd -> ranges.stream()
        .filter(r -> r.inRange(bd)) // Find ranges it is in (can be in multiple)
        .forEach(r -> result.put(r, result.get(r) + 1)) // And increment their count
);
System.out.println(result); // {51-75=1, 76-100=1, 26-50=0, 0-25=2}
I also had a solution with the groupingBy collector, but it was twice as big and couldn't deal with overlapping ranges or values that weren't in any range, so I think a solution like this will be better.
You can also use a NavigableMap:
Map<String, BigDecimal> dataSet = new HashMap<>();
dataSet.put("A", new BigDecimal(12));
dataSet.put("B", new BigDecimal(23));
dataSet.put("C", new BigDecimal(67));
dataSet.put("D", new BigDecimal(99));
// Map(k=MinValue, v=Count)
NavigableMap<BigDecimal, Integer> partitions = new TreeMap<>();
partitions.put(new BigDecimal(0), 0);
partitions.put(new BigDecimal(25), 0);
partitions.put(new BigDecimal(50), 0);
partitions.put(new BigDecimal(75), 0);
partitions.put(new BigDecimal(100), 0);
for (BigDecimal d : dataSet.values()) {
    Entry<BigDecimal, Integer> e = partitions.floorEntry(d);
    partitions.put(e.getKey(), e.getValue() + 1);
}
partitions.forEach((k, count) -> System.out.println(k + ": " + count));
// 0: 2
// 25: 0
// 50: 1
// 75: 1
// 100: 0
If only RangeMap from Guava had methods like replace or computeIfPresent/computeIfAbsent, like the Java 8 additions to Map, this would have been a breeze to do. Otherwise it's a bit cumbersome:
Map<String, BigDecimal> left = new HashMap<>();
left.put("A", new BigDecimal(12));
left.put("B", new BigDecimal(23));
left.put("C", new BigDecimal(67));
left.put("D", new BigDecimal(99));
RangeMap<BigDecimal, Long> ranges = TreeRangeMap.create();
ranges.put(Range.closedOpen(new BigDecimal(0), new BigDecimal(25)), 0L);
ranges.put(Range.closedOpen(new BigDecimal(25), new BigDecimal(50)), 0L);
ranges.put(Range.closedOpen(new BigDecimal(50), new BigDecimal(75)), 0L);
ranges.put(Range.closedOpen(new BigDecimal(75), new BigDecimal(100)), 0L);
left.values()
    .stream()
    .forEachOrdered(x -> {
        Entry<Range<BigDecimal>, Long> e = ranges.getEntry(x);
        ranges.put(e.getKey(), e.getValue() + 1);
    });
System.out.println(ranges);
Here is the code which you can use:
public static void groupByRange() {
    List<MyBigDecimal> bigDecimals = new ArrayList<MyBigDecimal>();
    for (int i = 0; i <= 10; i++) {
        MyBigDecimal md = new MyBigDecimal();
        if (i > 0 && i <= 2)
            md.setRange(1);
        else if (i > 2 && i <= 5)
            md.setRange(2);
        else if (i > 5 && i <= 7)
            md.setRange(3);
        else
            md.setRange(4);
        md.setValue(i);
        bigDecimals.add(md);
    }
    Map<Integer, List<MyBigDecimal>> result = bigDecimals.stream()
        .collect(Collectors.groupingBy(e -> e.getRange(),
            Collector.of(
                ArrayList::new,
                (list, elem) -> {
                    if (list.size() < 2)
                        list.add(elem);
                },
                (list1, list2) -> {
                    list1.addAll(list2);
                    return list1;
                }
            )));
    for (Entry<Integer, List<MyBigDecimal>> en : result.entrySet()) {
        int in = en.getKey();
        List<MyBigDecimal> cours = en.getValue();
        System.out.println("Key Range = " + in + " , List Size : " + cours.size());
    }
}

class MyBigDecimal {
    private int range;
    private int value;

    public int getValue() {
        return value;
    }
    public void setValue(int value) {
        this.value = value;
    }
    public int getRange() {
        return range;
    }
    public void setRange(int range) {
        this.range = range;
    }
}
This will give you a similar result.
public static void main(String[] args) {
    Map<String, Integer> resMap = new HashMap<>();
    int range = 25;
    Map<String, BigDecimal> aMap = new HashMap<>();
    aMap.put("A", new BigDecimal(12));
    aMap.put("B", new BigDecimal(23));
    aMap.put("C", new BigDecimal(67));
    aMap.put("D", new BigDecimal(99));
    aMap.values().forEach(v -> {
        int lower = v.divide(new BigDecimal(range)).intValue();
        // get the lower bound & add the range to get the higher one
        String key = lower * range + "-" + (lower * range + range - 1);
        resMap.put(key, resMap.getOrDefault(key, 0) + 1);
    });
    resMap.entrySet().forEach(e -> System.out.println(e.getKey() + " = " + e.getValue()));
}
Though there are some differences from what you asked:
Ranges are inclusive on both ends: 0-24 instead of 0-25, so that 25 falls into 25-49.
Your range 0-25 contains 26 possible values, while all the other ranges contain 25. This implementation's output has ranges of size 25 (configurable via the range variable).
You can decide on the range.
Output (you may want to iterate the map's keys in sorted order):
75-99 = 1
0-24 = 2
50-74 = 1
Assuming your range has the value BigDecimal.valueOf(26), you can do the following to get a Map<BigDecimal, Long> where each key represents the group id (0 for [0-25], 1 for [26, 51], ...), and each corresponding value represents the group count of elements.
content.values()
    .stream()
    .collect(Collectors.groupingBy(n -> n.divide(range, BigDecimal.ROUND_FLOOR), Collectors.counting()))

Merge two sorted linked lists in java

I need to merge two sorted linked lists into one sorted list. I've been trying to do so for hours, but when I reach the end of one of the lists I always have some trouble. This is the best I could do. My filaA and filaB are linked lists of type Long.
LinkedList<Long> filafusion = new LinkedList<Long>();
iterA = filaA.listIterator();
iterB = filaB.listIterator();
while (iterA.hasNext() && iterB.hasNext()) {
    n = iterA.next();
    m = iterB.next();
    if (n <= m) {
        filafusion.add(n);
        n = iterA.next();
    } else {
        filafusion.add(m);
        m = iterB.next();
    }
}
if (iterA.hasNext()) {
    while (iterA.hasNext()) {
        filafusion.add(iterA.next());
    }
} else {
    while (iterB.hasNext()) {
        filafusion.add(iterB.next());
    }
}
iterfusion = filafusion.listIterator();
while (iterfusion.hasNext()) {
    System.out.print(iterfusion.next());
}
The general idea here is to compare one element at a time and then advance the corresponding iterator. But they are both moving at the same time, so I'm only comparing first with first, second with second, and so on.
I also tried moving n = iterA.next(); m = iterB.next(); before the while loop, which makes it work much better, but then I don't know which list runs out of elements first. It only works if the lists are the same length, and even then one of the elements won't enter the result.
I've seen many solutions for this here, but they all use Nodes and recursion and things I'm not familiar with. I think using iterators will make it more efficient, but that's what's got me so confused; I'm not iterating where I should :(
Any suggestions will be appreciated.
You can use the standard java.util.TreeSet to do the job.
Here is a full example:
LinkedList<Long> filaA = new LinkedList<>();
filaA.add(1l);
filaA.add(3l);
filaA.add(5l);
LinkedList<Long> filaB = new LinkedList<>();
filaB.add(2l);
filaB.add(4l);
filaB.add(6l);
Set<Long> result = new TreeSet<>();
result.addAll(filaA);
result.addAll(filaB);
System.out.println(result);
TreeSet uses natural ordering. Note that a Set discards duplicate elements, so this approach only works when the merged lists contain no repeated values.
I just adapted your code. If you are able to use Java 8, then I have a much shorter solution below.
Iterator<Long> iterA = filaA.listIterator();
Iterator<Long> iterB = filaB.listIterator();
Long n = iterA.next();
Long m = iterB.next();
while (true) {
    if (n <= m) {
        filafusion.add(n);
        if (iterA.hasNext()) {
            n = iterA.next();
        } else {
            filafusion.add(m);
            while (iterB.hasNext()) {
                filafusion.add(iterB.next());
            }
            break;
        }
    } else {
        filafusion.add(m);
        if (iterB.hasNext()) {
            m = iterB.next();
        } else {
            filafusion.add(n);
            while (iterA.hasNext()) {
                filafusion.add(iterA.next());
            }
            break;
        }
    }
}
Iterator<Long> iterfusion = filafusion.listIterator();
while (iterfusion.hasNext()) {
    System.out.println(iterfusion.next());
}
Here is the Java 8 way to do it. It also works for unsorted input lists:
Stream<Long> stream = Stream.concat(filaA.stream(), filaB.stream());
stream.sorted().forEach(System.out::println);
public static <T extends Comparable<T>> List<T> mergeSortedLists(List<T> list1, List<T> list2) {
    List<T> result = new ArrayList<>();
    Iterator<T> iterator1 = list1.iterator();
    Iterator<T> iterator2 = list2.iterator();
    boolean hasNext1 = iterator1.hasNext();
    boolean hasNext2 = iterator2.hasNext();
    T next1 = hasNext1 ? iterator1.next() : null;
    T next2 = hasNext2 ? iterator2.next() : null;
    while (hasNext1 || hasNext2) {
        if (!hasNext1) {
            result.add(next2);
            hasNext2 = iterator2.hasNext();
            next2 = hasNext2 ? iterator2.next() : null;
        } else if (!hasNext2) {
            result.add(next1);
            hasNext1 = iterator1.hasNext();
            next1 = hasNext1 ? iterator1.next() : null;
        } else {
            if (next1.compareTo(next2) < 0) {
                result.add(next1);
                hasNext1 = iterator1.hasNext();
                next1 = hasNext1 ? iterator1.next() : null;
            } else {
                result.add(next2);
                hasNext2 = iterator2.hasNext();
                next2 = hasNext2 ? iterator2.next() : null;
            }
        }
    }
    return result;
}

guava CacheBuilder performance

The following micro benchmark uses Guava CacheBuilder. It performs an order of magnitude slower than ConcurrentHashMap. Am I using CacheBuilder correctly?
final Cache<Long, Object> c = CacheBuilder.newBuilder().concurrencyLevel(10).maximumSize(100).build();
int num = 10;
final java.util.concurrent.CountDownLatch startSignal = new java.util.concurrent.CountDownLatch(1);
final java.util.concurrent.CountDownLatch doneSignal = new java.util.concurrent.CountDownLatch(num);
final Long[] pairs = new Long[] { new Long(5),
    new Long(324235342L), new Long(3242385842L), new Long(8463242363642L),
    new Long(3244532342L), new Long(54654L), new Long(7332742342L),
    new Long(32425345342L), new Long(32453662342L), new Long(63573242342L) };
Object state = new Object();
for (Long p : pairs) {
    c.put(p, state);
}
Thread[] threads = new Thread[num];
for (int k = 0; k < num; ++k) {
    final int z = k;
    threads[k] = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                startSignal.await();
                for (int j = 0; j < 100000; ++j) {
                    c.getIfPresent(pairs[z]);
                }
                doneSignal.countDown();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
}
for (Thread t : threads) {
    t.start();
}
startSignal.countDown();
c.getIfPresent(pairs[1]);
long t = System.currentTimeMillis();
doneSignal.await();
System.out.println("done in " + (System.currentTimeMillis() - t));
ConcurrentHashMap gives me 9 ms. CacheBuilder gives me 90 ms. This is after looping the same code for several minutes.
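For comparison, the ConcurrentHashMap variant of the same benchmark presumably just swaps the lookup call. A self-contained sketch (loop counts and structure are illustrative; only the hot-loop call differs from the Cache version):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class MapBaselineDemo {
    // The hot loop: same shape as the Cache version, but with Map.get
    // instead of Cache.getIfPresent. Returns the number of hits.
    static long lookups(Map<Long, Object> m, Long key, int n) {
        long hits = 0;
        for (int j = 0; j < n; j++) {
            if (m.get(key) != null) hits++;
        }
        return hits;
    }

    public static void main(String[] args) throws InterruptedException {
        final Map<Long, Object> m = new ConcurrentHashMap<>();
        final Long[] pairs = { 5L, 324235342L, 3242385842L, 8463242363642L,
            3244532342L, 54654L, 7332742342L, 32425345342L,
            32453662342L, 63573242342L };
        Object state = new Object();
        for (Long p : pairs) m.put(p, state);

        int num = pairs.length;
        CountDownLatch startSignal = new CountDownLatch(1);
        CountDownLatch doneSignal = new CountDownLatch(num);
        for (int k = 0; k < num; k++) {
            final int z = k;
            new Thread(() -> {
                try {
                    startSignal.await();
                    lookups(m, pairs[z], 100_000);
                    doneSignal.countDown();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
        long t = System.currentTimeMillis();
        startSignal.countDown();
        doneSignal.await();
        System.out.println("done in " + (System.currentTimeMillis() - t) + " ms");
    }
}
```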
