Third-Party Packages
import java.util.Collections;
import java.util.Comparator;
import org.joda.time.DateTime;
My Comparator
public static Comparator<Task> TASK_PRIORITY = new Comparator<Task>() {
public int compare(Task task1, Task task2) {
if (task1 == null && task2 == null) return 0;
if (task1 == null) return +1; //null last
if (task2 == null) return -1; //null last
// Only consider retries after a task is retried 5+ times
if (task1.getRetries() >= 5 || task2.getRetries() >= 5) {
// Primary sort: retry count (ascending)
int retriesCompare = Integer.compare(task1.getRetries(), task2.getRetries());
if (retriesCompare != 0) return retriesCompare;
}
// Secondary sort: creation time (ascending, null first)
int creationCompare = compareTimeNullFirst(task1.getCreationTime(), task2.getCreationTime());
if (creationCompare != 0) return creationCompare;
// Tertiary sort: load time (ascending, null last)
int loadCompare = compareTimeNullLast(task1.getLoadTime(), task2.getLoadTime());
if (loadCompare != 0) return loadCompare;
return 0;
}
};
private static int compareTimeNullLast(DateTime time1, DateTime time2) {
if (time1 == null && time2 == null) return 0;
if (time1 == null) return +1;
if (time2 == null) return -1;
if (time1.isBefore(time2)) return -1;
if (time1.isAfter(time2)) return +1;
return 0;
}
private static int compareTimeNullFirst(DateTime time1, DateTime time2) {
if (time1 == null && time2 == null) return 0;
if (time1 == null) return -1;
if (time2 == null) return +1;
if (time1.isBefore(time2)) return -1;
if (time1.isAfter(time2)) return +1;
return 0;
}
Using My Comparator
//tasks is a List<Task>
Collections.sort(tasks, TASK_PRIORITY);
My Problem
I sometimes get an IllegalArgumentException: "Comparison method violates its general contract!". I can reliably reproduce the Exception with live data if the application runs long enough, but I'm not sure how to fix the actual cause of the problem.
My Question
What is wrong with my comparator? (Specifically, which part of the contract am I violating?) How do I fix it without covering up the Exception?
Notes
I'm using Java 7 and cannot upgrade without a major rewrite.
I could possibly cover up the Exception by setting java.util.Arrays.useLegacyMergeSort to true, but that is not a desirable solution.
I tried to create tests to randomly generate data and verify each of the contract conditions. I wasn't able to get the exception to be thrown.
I tried removing the condition around the retry comparison, but I still eventually got the Exception.
This line throws the exception: Collections.sort(tasks, TASK_PRIORITY);
Let us start at the beginning. My reading of your code is that the logic of your Comparator is sound. (I would have avoided having null Task and DateTime values, but that is not relevant to your problem.)
The other thing that can cause this exception is the compare method giving inconsistent results because the Task objects are changing. Indeed, it looks like it is semantically meaningful for (at least) the retry counts to change. If another thread is changing the Task fields that can affect the ordering ... while the current thread is sorting ... that could cause the IllegalArgumentException.
(Part of the comparison contract is that pairwise ordering does not change while you are sorting the collection.)
You then say this:
I use ImmutableSet.copyOf to copy the list before sorting, and I do that under a read lock in java.util.concurrent.locks.ReadWriteLock.
Copying a collection doesn't make copies of the elements of the collection. It is a shallow copy. So you will end up with two collections that contain the same objects. If the other thread mutates any of the objects (e.g. by increasing retry counts), that could change the ordering of objects.
The locking makes sure that you have a consistent copy, but that is not where the problem is.
What is the solution? I can think of a couple:
You could lock something to block all updates to the collections AND the element objects while you copy and sort.
You could deep-copy the collection; i.e. create a new collection containing copies of the elements of the original collection.
You could create light-weight objects that contain a snapshot of the fields of the Task objects that are relevant to sorting; e.g.
public class Key implements Comparable<Key> {
private int retries;
private DateTime creation;
private DateTime load;
private Task task;
public Key(Task task) {
this.task = task;
this.retries = task.getRetries();
...
}
public int compareTo(Key other) {
// compare using retries, creation, load
}
}
This has the potential advantages that you are copying less information, and you can go from the sorted collection of Key objects to the original Task objects.
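For illustration, a rough sketch of how the snapshot approach might be wired up, assuming Task exposes getRetries(), getCreationTime() and getLoadTime() as in your code, that Key has a getTask() accessor (an assumption, not shown above), and that lock is the ReadWriteLock you mention:
// Take the snapshot under the read lock so every Key sees a consistent Task state.
List<Key> keys = new ArrayList<>();
lock.readLock().lock();
try {
    for (Task t : tasks) {
        keys.add(new Key(t)); // copies retries/creation/load at this instant
    }
} finally {
    lock.readLock().unlock();
}

// Sorting now only reads the immutable snapshots, so later Task mutations
// cannot violate the comparison contract mid-sort.
Collections.sort(keys);

List<Task> sortedTasks = new ArrayList<>();
for (Key k : keys) {
    sortedTasks.add(k.getTask()); // getTask() assumed; maps back to the original objects
}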
Note all of these alternatives are slower than what you are currently doing. I don't think there is a way to avoid this.
Related
I have two ArrayLists with 1000 objects in each of them. I need to remove all elements in ArrayList 1 which are also in ArrayList 2. Currently I am running two nested loops, which results in 1000 x 1000 operations in the worst case.
List<DataClass> dbRows = object1.get("dbData");
List<DataClass> modifiedData = object1.get("dbData");
List<DataClass> dbRowsForLog = object2.get("dbData");
for (DataClass newDbRows : dbRows) {
boolean found=false;
for (DataClass oldDbRows : dbRowsForLog) {
if (newDbRows.equals(oldDbRows)) {
found=true;
modifiedData.remove(oldDbRows);
break;
}
}
}
public class DataClass{
private int categoryPosition;
private int subCategoryPosition;
private Timestamp lastUpdateTime;
private String lastModifiedUser;
// + so many other variables
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
DataClass dataClassRow = (DataClass) o;
return categoryPosition == dataClassRow.categoryPosition
&& subCategoryPosition == dataClassRow.subCategoryPosition && (lastUpdateTime.compareTo(dataClassRow.lastUpdateTime)==0?true:false)
&& stringComparator(lastModifiedUser,dataClassRow.lastModifiedUser);
}
public String toString(){
return "DataClass[categoryPosition="+categoryPosition+",subCategoryPosition="+subCategoryPosition
+",lastUpdateTime="+lastUpdateTime+",lastModifiedUser="+lastModifiedUser+"]";
}
public static boolean stringComparator(String str1, String str2){
return (str1 == null ? str2 == null : str1.equals(str2));
}
public int hashCode() {
int hash = 7;
hash = 31 * hash + (int) categoryPosition;
hash = 31 * hash + (int) subCategoryPosition;
hash = 31 * hash + (lastModifiedUser == null ? 0 : lastModifiedUser.hashCode());
return hash;
}
}
The best workaround I could think of is to create two sets of strings by calling the toString() method of DataClass and comparing the strings. It will result in 1000 (for making set 1) + 1000 (for making set 2) + 1000 (searching in the set) = 3000 operations. I am stuck on Java 7. Is there any better way to do this? Thanks.
Let Java's builtin collections classes handle most of the optimization for you by taking advantage of a HashSet. The complexity of its contains method is O(1). I would highly recommend looking up how it achieves this because it's very interesting.
List<DataClass> a = object1.get("dbData");
HashSet<DataClass> b = new HashSet<>(object2.get("dbData"));
a.removeAll(b);
return a;
And it's all done for you.
EDIT: caveat
In order for this to work, DataClass needs to implement Object::hashCode. Otherwise, you can't use any of the hash-based collection algorithms.
EDIT 2: implementing hashCode
An object's hash code does not need to change every time an instance variable changes. The hash code only needs to reflect the instance variables that determine equality.
For example, imagine each object had a unique field private final UUID id. In this case, you could determine if two objects were the same by simply testing the id value. Fields like lastUpdateTime and lastModifiedUser would provide information about the object, but two instances with the same id would refer to the same object, even if the lastUpdateTime and lastModifiedUser of each were different.
The point is that if you really want to optimize this, include as few fields as possible in the hash computation. From your example, it seems like categoryPosition and subCategoryPosition might be enough.
Whatever fields you choose to include, the simplest way to compute a hash code from them is to use Objects::hash rather than running the numbers yourself.
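For example, a minimal sketch along those lines for DataClass, assuming you decide that categoryPosition and subCategoryPosition are enough (java.util.Objects is available from Java 7 on):
@Override
public int hashCode() {
    // Only a subset of the equals() fields participates; that is legal, because
    // objects equal per equals() still end up with equal hash codes.
    return Objects.hash(categoryPosition, subCategoryPosition);
}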
It is a set A - B operation (only retain elements in Set A that are not in Set B = A - B).
If using a Set is fine, then we can do it like below. We could use an ArrayList in place of a Set, but in the ArrayList case every remove/retain check has to scan the entire other list.
Set<DataClass> a = new HashSet<>(object1.get("dbData"));
Set<DataClass> b = new HashSet<>(object2.get("dbData"));
a.removeAll(b);
If ordering is needed, use TreeSet.
Try to return a set from object1.get("dbData") and object2.get("dbData"); that skips one more intermediate collection creation.
I have a question about Java collections such as Set or List, and more generally about objects that you can use in a for-each loop. Is there any requirement that their elements actually have to be stored somewhere in a data structure, or can they be described only by some sort of rule and calculated on the fly when you need them? It feels like this should be possible, but I don't see any of the Java standard collection classes doing anything like this. Am I breaking any sort of contract here?
The thing I'm thinking about using these for is mainly mathematics. Say for example I want to have a set representing all prime numbers under 1 000 000. It might not be a good idea to save these in memory but to instead have a method check if a particular number is in the collection or not.
I'm also not at all an expert at java streams, but I feel like these should be usable in java 8 streams since the objects have very minimal state (the objects in the collection doesn't even exist until you try to iterate over them or check if a particular object exists in the collection).
Is it possible to have Collections or Iterators with virtually infinitely many elements, for example "all numbers of the form 6*k+1", "all primes above 10" or "all vectors spanned by this basis"? Another thing I'm thinking about is combining two sets, for example intersecting the set of all primes below 1 000 000 with the set of all integers of the form 2^n-1 to list the Mersenne primes below 1 000 000. I feel like it would be easier to reason about certain mathematical objects this way, with the elements not created explicitly until they are actually needed. Maybe I'm wrong.
Here are two mockup classes I wrote to try to illustrate what I want to do. They don't act exactly as I would expect (see output), which makes me think I am breaking some kind of contract with the Iterable interface or implementing it wrong. Feel free to point out what I'm doing wrong here if you see it, or whether this kind of code is even allowed under the collections framework.
import java.util.AbstractSet;
import java.util.Iterator;
public class PrimesBelow extends AbstractSet<Integer>{
int max;
int size;
public PrimesBelow(int max) {
this.max = max;
}
@Override
public Iterator<Integer> iterator() {
return new SetIterator<Integer>(this);
}
@Override
public int size() {
if(this.size == -1){
System.out.println("Calculating size");
size = calculateSize();
}else{
System.out.println("Accessing calculated size");
}
return size;
}
private int calculateSize() {
int c = 0;
for(Integer p: this)
c++;
return c;
}
public static void main(String[] args){
PrimesBelow primesBelow10 = new PrimesBelow(10);
for(int i: primesBelow10)
System.out.println(i);
System.out.println(primesBelow10);
}
}
import java.util.Iterator;
import java.util.NoSuchElementException;
public class SetIterator<T> implements Iterator<Integer> {
int max;
int current;
public SetIterator(PrimesBelow pb) {
this.max= pb.max;
current = 1;
}
@Override
public boolean hasNext() {
if(current < max) return true;
else return false;
}
@Override
public Integer next() {
while(hasNext()){
current++;
if(isPrime(current)){
System.out.println("returning "+current);
return current;
}
}
throw new NoSuchElementException();
}
private boolean isPrime(int a) {
if(a<2) return false;
for(int i = 2; i < a; i++) if((a%i)==0) return false;
return true;
}
}
Main function gives the output
returning 2
2
returning 3
3
returning 5
5
returning 7
7
Exception in thread "main" java.util.NoSuchElementException
at SetIterator.next(SetIterator.java:27)
at SetIterator.next(SetIterator.java:1)
at PrimesBelow.main(PrimesBelow.java:38)
edit: spotted an error in the next() method. Corrected it and changed the output to the new one.
Well, as you see with your (now fixed) example, you can easily do it with Iterables/Iterators. Instead of having a backing collection, the example would've been nicer with just an Iterable that takes the max number you wish to calculate primes to. You just need to make sure that you handle the hasNext() method properly so you don't have to throw an exception unnecessarily from next().
Java 8 streams can be used more easily to do these kinds of things nowadays, but there's no reason you can't have a "virtual collection" that's just an Iterable. If you start implementing Collection it becomes harder, but even then it wouldn't be completely impossible, depending on the use cases: e.g. you could implement contains() that checks for primes, but you'd have to calculate it and it would be slow for large numbers.
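A minimal sketch of that Iterable-only idea (the class name is made up): the iterator looks ahead for the next prime, so hasNext() only reports true when another prime actually exists below max.
import java.util.Iterator;
import java.util.NoSuchElementException;

public class PrimesBelowIterable implements Iterable<Integer> {
    private final int max;

    public PrimesBelowIterable(int max) {
        this.max = max;
    }

    @Override
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private int next = findNextPrime(1); // look ahead to the first prime

            @Override
            public boolean hasNext() {
                return next != -1;
            }

            @Override
            public Integer next() {
                if (next == -1) {
                    throw new NoSuchElementException();
                }
                int result = next;
                next = findNextPrime(result); // precompute the following prime
                return result;
            }
        };
    }

    // Smallest prime strictly greater than 'from' and below 'max', or -1 if there is none.
    private int findNextPrime(int from) {
        for (int candidate = from + 1; candidate < max; candidate++) {
            if (isPrime(candidate)) {
                return candidate;
            }
        }
        return -1;
    }

    private static boolean isPrime(int a) {
        if (a < 2) return false;
        for (int i = 2; i < a; i++) {
            if (a % i == 0) return false;
        }
        return true;
    }
}
With this, for (int p : new PrimesBelowIterable(10)) prints 2, 3, 5 and 7 and then stops cleanly instead of throwing NoSuchElementException.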
A (somewhat convoluted) example of a semi-infinite set of odd numbers that is immutable and stores no values.
public class OddSet implements Set<Integer> {
public boolean contains(Integer o) {
return o % 2 == 1;
}
public int size() {
return Integer.MAX_VALUE;
}
public boolean add(Integer i) {
throw new UnsupportedOperationException();
}
public boolean equals(Object o) {
return o instanceof OddSet;
}
// etc. etc.
}
As DwB stated, this is not possible to do with Java's Collections API, as every element must be stored in memory. However, there is an alternative: this is precisely why Java's Stream API was implemented!
Streams allow you to iterate across an infinite number of objects that are not stored in memory unless you explicitly collect them into a Collection.
From the documentation of IntStream#iterate:
Returns an infinite sequential ordered IntStream produced by iterative application of a function f to an initial element seed, producing a Stream consisting of seed, f(seed), f(f(seed)), etc.
The first element (position 0) in the IntStream will be the provided seed. For n > 0, the element at position n, will be the result of applying the function f to the element at position n - 1.
Here are some examples that you proposed in your question:
public class Test {
    public static void main(String[] args) {
        IntStream.iterate(1, k -> k + 6);                                 // 1, 7, 13, ... all numbers of the form 6*k + 1
        IntStream.iterate(10, i -> i + 1).filter(Test::isPrime);          // all primes above 10
        IntStream.iterate(1, n -> 2 * n + 1).filter(i -> i < 1_000_000);  // 1, 3, 7, 15, ... (2^n - 1) below 1 000 000
    }

    private static boolean isPrime(int a) {
        if (a < 2) {
            return false;
        }
        for (int i = 2; i < a; i++) {
            if ((a % i) == 0) {
                return false;
            }
        }
        return true;
    }
}
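None of those lines produce values by themselves, since streams are lazy; you need a short-circuiting terminal operation to materialize anything from an infinite stream. A sketch, assuming the static isPrime above and the usual java.util / java.util.stream imports:
// The first ten primes above 10, collected into a list.
List<Integer> primes = IntStream.iterate(11, i -> i + 1)
        .filter(Test::isPrime)
        .limit(10)
        .boxed()
        .collect(Collectors.toList());
System.out.println(primes); // [11, 13, 17, 19, 23, 29, 31, 37, 41, 43]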
I have a class Product, which has three variables:
class Product implements Comparable<Product>{
private Type type; // Type is an enum
Set<Attribute> attributes; // Attribute is a regular class
ProductName name; // ProductName is another enum
}
I used Eclipse to automatically generate the equals() and hashCode() methods:
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((attributes == null) ? 0 : attributes.hashCode());
result = prime * result + ((type == null) ? 0 : type.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
Product other = (Product) obj;
if (attributes == null) {
if (other.attributes != null)
return false;
} else if (!attributes.equals(other.attributes))
return false;
if (type != other.type)
return false;
return true;
}
Now in my application I need to sort a Set of Product, so I need to implement the Comparable interface and compareTo method:
@Override
public int compareTo(Product other){
int diff = type.hashCode() - other.getType().hashCode();
if (diff > 0) {
return 1;
} else if (diff < 0) {
return -1;
}
diff = attributes.hashCode() - other.getAttributes().hashCode();
if (diff > 0) {
return 1;
} else if (diff < 0) {
return -1;
}
return 0;
}
Does this implementation make sense? And what if I just want to sort the products based on the String values of the "type" and "attributes" fields? How would I implement that?
Edit:
The reason I want to sort a Set of Product is that I have a JUnit test which asserts on the string value of a HashSet. My goal is to get the same order of output by sorting the set; otherwise, even if the Set's values are the same, the assertion can fail due to the unpredictable iteration order of a set.
Edit2:
Through the discussion, it's clear that asserting on the String value of a HashSet isn't good practice in unit tests. For my situation I wrote a sort() function that sorts the HashSet's String values in natural ordering, so it consistently outputs the same String value for my unit tests, and that suffices for now. Thanks all.
From all the comments here, it looks like you don't need to use a Comparator at all, because:
1) You are using a HashSet, which does not work with a Comparator. It is not ordered.
2) You just need to make sure that two HashSets containing Products are equal, meaning they are the same size and contain the same set of Products.
Since you already added hashCode and equals methods to Product, all you need to do is call the equals method on those HashSets.
HashSet<Product> set1 = ...
HashSet<Product> set2 = ...
assertTrue( set1.equals(set2) );
This implementation does not seem to be consistent. You have no control over what the hash codes look like. If you have obj1 < obj2 according to compareTo on one run, the next time you start your JVM it could be the other way around, obj1 > obj2.
The only thing that you really know is that if diff == 0 then the objects are considered to be equal. However you can also just use the equals method for that check.
It is now up to you how you define when obj1 < obj2 or obj1 > obj2. Just make sure that it is consistent.
By the way, are you aware that the current implementation does not include ProductName name in the equals check? I don't know whether that is intended, hence the remark.
The question is, what do you know about those attributes? Maybe they implement Comparable (for example if they are Numbers); then you can order according to their compareTo method. If you know nothing at all about the objects, it will be hard to build up a consistent ordering.
If you just want them to be ordered consistently but the ordering itself does not play any role, you could give them ids at creation time and sort by those. At this point you could indeed use the hash codes, but only if it does not matter that the ordering can change between JVM runs.
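If what you really want is a stable ordering based on the textual form of the fields, a sketch for Product could look like this. It assumes type is never null and that Attribute has a meaningful, deterministic toString(); note that this ordering is only consistent with equals if those strings distinguish different attribute sets:
@Override
public int compareTo(Product other) {
    // Order by the enum constant's name first.
    int byType = this.type.name().compareTo(other.type.name());
    if (byType != 0) {
        return byType;
    }
    // Then by the attributes' string forms, sorted so the result does not
    // depend on the set's iteration order.
    return sortedAttributeString(this.attributes)
            .compareTo(sortedAttributeString(other.attributes));
}

private static String sortedAttributeString(Set<Attribute> attributes) {
    if (attributes == null) {
        return "";
    }
    List<String> parts = new ArrayList<>();
    for (Attribute a : attributes) {
        parts.add(String.valueOf(a));
    }
    Collections.sort(parts);
    return parts.toString();
}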
I know that AtomicReference has compareAndSet, but I feel like what I want to do is this
private final AtomicReference<Boolean> initialized = new AtomicReference<>( false );
...
atomicRef.compareSetAndDo( false, true, () -> {
// stuff that only happens if false
});
This would probably work too; it might be better:
atomicRef.compareAndSet( false, () -> {
// stuff that only happens if false
// if I die still false.
return true;
});
I've noticed there are some new functional constructs, but I'm not sure if any of them are what I'm looking for.
Can any of the new constructs do this? If so, please provide an example.
update
To attempt to simplify my problem: I'm trying to find a less error-prone way to guard code in a "do once for object" or (really) lazy-initializer fashion, and I know that some developers on my team find compareAndSet confusing.
guard code in a "do once for object"
How exactly to implement that depends on what you want other threads that attempt to execute the same thing to do in the meantime. If you just let them run past the CAS, they may observe things in an intermediate state while the one thread that succeeded does its action.
or (really) lazy initializer fashion
That construct is not thread-safe if you're using it for lazy initializers, because the "is initialized" boolean may be set to true by one thread which then executes the block, while another thread observes the true state but reads an empty result.
You can use AtomicReference::updateAndGet if multiple concurrent/repeated initialization attempts are acceptable, with one object winning in the end and the others being discarded by GC. The update function should be side-effect-free.
Otherwise you should just use the double-checked locking pattern with a volatile reference field.
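A sketch of that pattern with a volatile reference, where Foo stands in for whatever is being lazily created:
private volatile Foo instance;

public Foo getFoo() {
    Foo result = instance;              // single volatile read on the fast path
    if (result == null) {
        synchronized (this) {
            result = instance;          // re-check under the lock
            if (result == null) {
                result = new Foo();     // runs at most once
                instance = result;      // publish only after construction completes
            }
        }
    }
    return result;
}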
Of course you can always package any of these into a higher order function that returns a Runnable or Supplier which you then assign to a final field.
// == FunctionalUtils.java
/** @param mayRunMultipleTimes must be side-effect-free */
public static <T> Supplier<T> instantiateOne(Supplier<T> mayRunMultipleTimes) {
AtomicReference<T> ref = new AtomicReference<>(null);
return () -> {
T val = ref.get(); // fast-path if already initialized
if(val != null)
return val;
return ref.updateAndGet(v -> v == null ? mayRunMultipleTimes.get() : v);
};
}
// == ClassWithLazyField.java
private final Supplier<Foo> lazyInstanceVal = FunctionalUtils.instantiateOne(() -> new Foo());
public Foo getFoo() {
return lazyInstanceVal.get();
}
You can easily encapsulate various custom control-flow and locking patterns this way.
compareAndSet returns true if the update was done, and false if the actual value was not equal to the expected value.
So just use
if (ref.compareAndSet(expectedValue, newValue)) {
...
}
That said, I don't really understand your examples, since you're passing true and false to a method taking object references as argument. And your second example doesn't do the same thing as the first one. If the second is what you want, I think what you're after is
ref.getAndUpdate(value -> {
if (value.equals(expectedValue)) {
return someNewValue(value);
}
else {
return value;
}
});
You’re over-complicating things. Just because there are now lambda expressions, you don’t need to solve everything with lambdas:
private volatile boolean initialized;
…
if(!initialized) synchronized(this) {
if(!initialized) {
// stuff to be done exactly once
initialized=true;
}
}
Double-checked locking might not have a good reputation, but for non-static properties, there are few alternatives.
If you consider multiple threads accessing it concurrently in the uninitialized state and want a guarantee that the action runs only once, and that it has completed before dependent code is executed, an Atomic… object won’t help you.
There’s only one thread that can successfully perform compareAndSet(false,true), but since failure implies that the flag already has the new value, i.e. is initialized, all other threads will proceed as if the “stuff to be done exactly once” has been done while it might still be running. The alternative would be reading the flag first, conditionally performing the stuff, and doing compareAndSet afterwards, but that allows multiple concurrent executions of “stuff”. This is also what happens with updateAndGet or accumulateAndGet and its provided function.
To guarantee exactly one execution before proceeding, threads must get blocked if the “stuff” is currently being executed. The code above does this. Note that once the “stuff” has been done, there will be no locking anymore, and the performance characteristics of the volatile read are the same as for the Atomic… read.
The only solution which is simpler in programming, is to use a ConcurrentMap:
private final ConcurrentHashMap<String,Boolean> initialized=new ConcurrentHashMap<>();
…
initialized.computeIfAbsent("dummy", ignore -> {
// stuff to do exactly once
return true;
});
It might look a bit oversized, but it provides exactly the required performance characteristics. It will guard the initial computation using synchronized (or well, an implementation dependent exclusion mechanism) but perform a single read with volatile semantics on subsequent queries.
If you want a more lightweight solution, you may stay with the double checked locking shown at the beginning of this answer…
I know this is old, but I've found there is no perfect way to achieve this, more specifically this:
trying to find a less error prone way to guard code in a "do (anything) once..."
I'll add to this "while respecting a happens-before relationship", which is required for instantiating singletons in your case.
IMO the best way to achieve this is by means of a synchronized function:
public<T> T transaction(Function<NonSyncObject, T> transaction) {
synchronized (lock) {
return transaction.apply(nonSyncObject);
}
}
This allows you to perform atomic "transactions" on the given object.
Other options are double-check spin-locks:
for (;;) {
T t = atomicT.get();
T newT = new T();
if (atomicT.compareAndSet(t, newT)) return;
}
With this one, new T() will get executed repeatedly until the value is set successfully, so it is not really a "do something once".
This would only work for copy-on-write transactions, and could help with "instantiating objects once" (which in reality instantiates many but ends up referencing the same one) by tweaking the code.
The final option is a worse-performing version of the first one, but this one is a true happens-before AND ONCE (as opposed to the double-check spin-lock):
public void doSomething(Runnable r) {
while (!atomicBoolean.compareAndSet(false, true)) {}
// Do some heavy stuff ONCE
r.run();
atomicBoolean.set(false);
}
The reason why the first one is the better option is that it is doing what this one does, but in a more optimized way.
As a side note, in my projects I've actually used the code below (similar to @the8472's answer), which at the time I thought was safe, and it may be:
public T get() {
T res = ref.get();
if (res == null) {
res = builder.get();
if (ref.compareAndSet(null, res))
return res;
else
return ref.get();
} else {
return res;
}
}
The thing about this code is that, like the copy-on-write loop, it generates multiple instances, one for each contending thread, but only one is cached (the first one); all the other constructions eventually get GC'd.
Looking at the putIfAbsent method I see the benefit is the skipping of 17 lines of code and then a synchronized body:
/** Implementation for put and putIfAbsent */
final V putVal(K key, V value, boolean onlyIfAbsent) {
if (key == null || value == null) throw new NullPointerException();
int hash = spread(key.hashCode());
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
if (casTabAt(tab, i, null,
new Node<K,V>(hash, key, value, null)))
break; // no lock when adding to empty bin
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
V oldVal = null;
synchronized (f) {
if (tabAt(tab, i) == f) {
And then the synchronized body itself is another 34 lines:
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f;; ++binCount) {
K ek;
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
oldVal = e.val;
if (!onlyIfAbsent)
e.val = value;
break;
}
Node<K,V> pred = e;
if ((e = e.next) == null) {
pred.next = new Node<K,V>(hash, key,
value, null);
break;
}
}
}
else if (f instanceof TreeBin) {
Node<K,V> p;
binCount = 2;
if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
value)) != null) {
oldVal = p.val;
if (!onlyIfAbsent)
p.val = value;
}
}
}
}
The pro(s) of using a ConcurrentHashMap is that it will undoubtedly work.
I'm facing some really strange problems while implementing a kind of Kademlia bucket in Java 8 (OpenJDK).
I need to get at least a specific number of items from so-called Buckets.
But that's not the problem.
Somehow, I sometimes get a ConcurrentModificationException while doing closest.addAll() on the ArrayList, even though it is used only in a single thread and I'm not iterating over it or doing anything like that.
Do you know how to help me?
Here is my code (I know it's a mess!):
List<Neighbour> getClosest(Node n, int num) {
ArrayList<Neighbour> closest = new ArrayList<>();
int missing;
int walkDown = n.getBucket(me);
int walkUp = walkDown + 1;
boolean pleaseBreak = true;
while (true) {
missing = num - closest.size();
if (missing <= 0) {
return closest;
}
if (walkUp >= 0 && walkUp < 160) {
List<Neighbour> l = buckets[walkUp].getClosest(missing);
closest.addAll(l);
if (closest.size() >= missing) {
return closest;
}
walkUp++;
pleaseBreak = false;
}
if (walkDown >= 0 && walkDown < 160) {
List<Neighbour> l = buckets[walkDown].getClosest(missing);
closest.addAll(l);
if (closest.size() >= missing) {
return closest;
}
walkDown--;
pleaseBreak = false;
}
if (pleaseBreak) {
return closest;
}
pleaseBreak = true;
}
}
ConcurrentModificationException actually means that you are breaking the rules of iteration by somehow modifying the list while iterating it.
Note that this exception does not always indicate that an object has been concurrently modified by a different thread. If a single thread issues a sequence of method invocations that violates the contract of an object, the object may throw this exception. For example, if a thread modifies a collection directly while it is iterating over the collection with a fail-fast iterator, the iterator will throw this exception.
That said, it is fairly clear what could be causing this issue. As closest is a new List being filled by the method, it must be l that is being modified.
There are two options:
Another thread is doing it.
You already have an iterator open across the list l.
Assuming it is not 1 (or you would probably have mentioned that) I will go for:
Your getClosest method is returning a sublist of a list that is being iterated over and/or modified, and addAll is also attempting to iterate over it.
To fix, make getClosest return a copy of the sublist.
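For example, something along these lines inside the bucket class (a sketch; neighbours stands for whatever backing list the bucket keeps):
List<Neighbour> getClosest(int num) {
    List<Neighbour> view = neighbours.subList(0, Math.min(num, neighbours.size()));
    // Hand back an independent copy instead of the subList view, so callers can
    // add to and iterate over the result while the bucket keeps changing.
    return new ArrayList<>(view);
}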