When I have an observable list and want to do something whenever an element is added to or removed from it, I find myself writing the following pattern very frequently:
ObservableList<String> myDynamicList = FXCollections.observableArrayList();
myDynamicList.add("My string 1");
myDynamicList.add("My string 2");
<...>

Consumer<String> onElementAdded = s -> {
    logger.info("Added: " + s);
};
Consumer<String> onElementRemoved = s -> {
    logger.info("Removed: " + s);
};

// Follow list insertions & removals in the future
myDynamicList.addListener((ListChangeListener<? super String>) c -> {
    while (c.next()) {
        c.getAddedSubList().forEach(onElementAdded);
        c.getRemoved().forEach(onElementRemoved);
    }
});

// "Synchronize" the initial state - call the same handler for all already inserted elements
myDynamicList.forEach(onElementAdded);
Problems with this approach:
Code clutter: I have to define (at least) the onElementAdded consumer as a variable so it can be used in both the addListener and forEach calls. That is, I cannot define the behavior directly in the addListener argument, which would be more concise.
There are two calls that need to be kept consistent (i.e., when changing onElementAdded in addListener, I must not forget to do the same in forEach).
I always doubt whether to put the addListener or the forEach call first. I reckon this doesn't matter if the list is only modified by one thread, and if it isn't, this will break anyway. But still...
So the question is: is there a better way of binding a listener to an observable list and calling the same listener for all existing elements, as though they had just been added to the list?
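To make the intent concrete, here is roughly the kind of single call I would like to end up with (a hypothetical helper I sketched myself, not an existing JavaFX API; it assumes the usual javafx.collections and java.util.function imports):

// Hypothetical helper: register the listener and immediately "replay" the
// current elements through the same added-handler.
static <T> void subscribeAndSync(ObservableList<T> list,
                                 Consumer<? super T> onElementAdded,
                                 Consumer<? super T> onElementRemoved) {
    list.addListener((ListChangeListener<T>) c -> {
        while (c.next()) {
            c.getAddedSubList().forEach(onElementAdded);
            c.getRemoved().forEach(onElementRemoved);
        }
    });
    list.forEach(onElementAdded); // synchronize the initial state
}

// Desired usage: one call, behavior defined inline, no ordering to worry about
subscribeAndSync(myDynamicList,
        s -> logger.info("Added: " + s),
        s -> logger.info("Removed: " + s));

That is essentially the two calls from above folded into one method.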
Related
I have a specific use case for data processing where I am returning a future of type Future<List<SamplePOJO>>. I have multiple such futures which I am adding to a List.
But CompositeFuture.join() doesn't work on this list as it is asking for a List<Future> instead of a List<Future<List<SamplePOJO>>>. Is there any workaround for this?
You can collect all those Future<List<SamplePOJO>> into a List<Future> instead of a List<Future<List<SamplePOJO>>>.
That will make the CompositeFuture.all method accept it.
Future<List<String>> f = getFuture();
List<Future> futures = new ArrayList<>();
futures.add(f);
CompositeFuture.all(futures);
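For completeness, a sketch of reading the joined results back afterwards (getFuture() is the placeholder from above; onSuccess, size and resultAt are the Vert.x 4 Future/CompositeFuture methods):

List<Future> futures = new ArrayList<>();
futures.add(getFuture()); // each element is a Future<List<SamplePOJO>>
futures.add(getFuture());

CompositeFuture.all(futures).onSuccess(cf -> {
    for (int i = 0; i < cf.size(); i++) {
        // resultAt is generic, so the raw-typed list just forces an implicit cast here
        List<SamplePOJO> pojos = cf.resultAt(i);
        // use pojos ...
    }
});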
Here's an expanded set of example code (that I mistakenly wrote for another question and moved here).
So there exists a bug in Vert.x that causes issues with CompositeFuture.all(listoffutures), etc., at least in JDK 17, if listoffutures is of type List<Future<SomeType>> (or List<Future<?>>).
This bug might get fixed in Vert.x 5.
I got some success with the code below. The contrived example here is that I want to turn a List<Future<File>> into a Future<List<File>>.
@SuppressWarnings("rawtypes")
static List<Future> ltol(List<Future<File>> sa) {
    List<Future> l = new ArrayList<>();
    l.addAll(sa);
    return l;
}
// A contrived example of what I was doing, which uses .compose and returns
// a Future of the list of results (File objects in my case)
Future<List<File>> mymethodcall(List<Future<File>> attachments) {
    return CompositeFuture.all(ltol(attachments)).compose(files -> {
        // Note we're reading the result of the .all call in the compose
        List<File> mb = new ArrayList<>();
        files.list().stream().forEach(o -> {
            // Do whatever you need to do here with the results but they'll likely
            // need to be cast (to File, in this case).
            mb.add((File) o);
        });
        return Future.succeededFuture(mb);
    });
}
The important step is getting your List<Future<T>> into a List<Future>, if you need to. I did it by gross brute force in the static method above.
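For what it's worth, the same raw-typed conversion can also be done with just the ArrayList copy constructor, since a Future<File> is already a subtype of the raw type Future; a minimal variant of the helper above:

@SuppressWarnings("rawtypes")
static List<Future> ltol(List<Future<File>> sa) {
    // No per-element work needed: every Future<File> already is a (raw) Future.
    return new ArrayList<Future>(sa);
}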
I'm facing weird behavior in my Java code using List.
The code is very simple: I have a List of objects called AccessRequest that comes from a database, and I'm using this first List to create a new one, filtered to select only a few objects.
Here is the code:
private void updateCommentIfNeeded() {
    List<AccessRequest> accessRequestList = getAllRequest();
    List<AccessRequest> commentsList = getCommentsListProcessedManually(accessRequestList);
}

public List<AccessRequest> getCommentsListProcessedManually(List<AccessRequest> accessRequests) {
    accessRequests.removeIf(ar -> !ar.getComment().equals("To be processed manually"));
    if (accessRequests.size() != 0) {
        SQLServerConnection sqlServerConnection = new SQLServerConnection(sqlServerUrl);
        accessRequests.removeIf(ar -> !sqlServerConnection.emailExists(ar.getEmail()));
    }
    return accessRequests;
}
I'm supposed to get a second List containing only the objects whose comment is To be processed manually, which I do. But the weird part is that the first List also takes the value of the second, as if I had written accessRequestList = commentsList, yet there is no such statement and I'm using local variables.
Example:
I have 3 objects in my first List, but only one containing the required comment.
Both lists end up containing only the one object with the required comment.
I'm kind of lost here, if anyone has an idea!
Your method getCommentsListProcessedManually modifies the list you're passing. I believe you're operating under the assumption that passing the list as a parameter somehow creates a copy of the list, whereas what is actually happening is that a reference to the list is passed by value.
There are several ways to solve this, but the easiest is to simply create a copy of your input list at the start of your method:
public List<AccessRequest> getCommentsListProcessedManually(List<AccessRequest> input) {
    List<AccessRequest> accessRequests = new ArrayList<>(input);
    accessRequests.removeIf(ar -> !ar.getComment().equals("To be processed manually"));
    if (accessRequests.size() != 0) {
        SQLServerConnection sqlServerConnection = new SQLServerConnection(sqlServerUrl);
        accessRequests.removeIf(ar -> !sqlServerConnection.emailExists(ar.getEmail()));
    }
    return accessRequests;
}
You could also use the Stream API for this (using the filter operation), but that's quite a bit trickier in this situation.
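For illustration, a stream-based variant might look roughly like this (a sketch only; it reuses the SQLServerConnection, sqlServerUrl and emailExists names from the question and assumes java.util.stream.Collectors is imported):

public List<AccessRequest> getCommentsListProcessedManually(List<AccessRequest> input) {
    // First pass: keep only the requests flagged for manual processing.
    List<AccessRequest> toProcess = input.stream()
            .filter(ar -> ar.getComment().equals("To be processed manually"))
            .collect(Collectors.toList());
    if (toProcess.isEmpty()) {
        return toProcess;
    }
    // Second pass: keep only the requests whose email exists in SQL Server.
    SQLServerConnection sqlServerConnection = new SQLServerConnection(sqlServerUrl);
    return toProcess.stream()
            .filter(ar -> sqlServerConnection.emailExists(ar.getEmail()))
            .collect(Collectors.toList());
}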
You are passing a reference of the list to the method getCommentsListProcessedManually.
So accessRequestList and the one passed as a parameter are the same, hence any operation done to the list is done to the same list.
You can create a copy of the list before passing it as a parameter:
List<AccessRequest> newList = new ArrayList<AccessRequest>(accessRequestList);
What is the correct change order?
In the documentation linked here, we can see the following example:
ObservableList<Item> theList = ...;

theList.addListener(new ListChangeListener<Item>() {
    public void onChanged(Change<Item> c) {
        while (c.next()) {
            if (c.wasPermutated()) {
                for (int i = c.getFrom(); i < c.getTo(); ++i) {
                    //permutate
                }
            } else if (c.wasUpdated()) {
                //update item
            } else {
                for (Item remitem : c.getRemoved()) {
                    remitem.remove(Outer.this);
                }
                for (Item additem : c.getAddedSubList()) {
                    additem.add(Outer.this);
                }
            }
        }
    }
});
BUT below the sample code there is:
Note: in case the change contains multiple changes of different type, these changes must be in the following order: permutation change(s), add or remove changes, update changes. This is because permutation changes cannot go after add/remove changes as they would change the position of added elements. And on the other hand, update changes must go after add/remove changes because they refer with their indexes to the current state of the list, which means with all add/remove changes applied.
Is the example wrong then? Or maybe I'm missing something?
There are three types of changes that can be fired by an ObservableList:
Permutation (a change in order)
Addition/removal (or replacement, which is just a simultaneous addition and removal)
An update (a change in an element's property, requires an "extractor")
A single change can be only one of those types. However, a single Change instance can carry multiple changes. That's why you have to iterate over the Change instance by calling next() in a while loop.
The documentation you quote:
Note: in case the change contains multiple changes of different type, these changes must be in the following order: permutation change(s), add or remove changes, update changes.
Is regarding the order of changes returned by said next() method. Here the order is important because the Change must report the list as it currently exists.
That documentation does not dictate the order you query the type of change. If the change was not a permutation then c.wasPermutated() simply returns false and the code moves on. Same with the other types of changes. Note that, because of how the API is designed, if a change is neither a permutation nor an update then it must be an addition or removal (or replacement).
So the example is not wrong. If you wanted you could write it as:
while (c.next()) {
    if (c.wasPermutated()) {
        // process permutation
    } else if (c.wasRemoved() || c.wasAdded()) {
        // process removal, addition, or replacement
    } else {
        // process update
    }
}
But that does not change the behavior of the code.
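As an aside, update changes (the third type listed above) are only fired if the list was created with an extractor; a minimal sketch, where nameProperty() is just an assumed property on Item:

// The extractor tells the list which per-element properties to watch for updates.
ObservableList<Item> theList = FXCollections.observableArrayList(
        item -> new Observable[] { item.nameProperty() });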
I'm learning Java and wonder whether the item in this code line:
useResult(result, item);
will be overwritten by the next call coming from the
doItem(item);
Here's the example:
public void doSomeStuff() {
    // List with 100 items
    for (Item item : list) {
        doItem(item);
    }
}

private void doItem(final Item item) {
    someAsyncCall(item, new SomeCallback() {
        @Override
        public void onSuccess(final Result result) {
            useResult(result, item);
        }
    });
}
The SomeCallback() is invoked some time in the future, on another thread.
I mean, will the item in useResult(result, item); be the same when the callback returns?
Please advise what happens here.
I mean, will the item in useResult(result, item); be the same when the callback returns?
Of course it will, what would the utility of that be otherwise?
What you are doing is creating 100 different SomeCallback instances, each of which will process a different Item object.
A skeleton for your someAsyncCall may look like this:
public static void someAsyncCall(Item i, Callback callback) {
    CompletableFuture.runAsync(() -> { // new thread
        Result result = computeResult(i);
        callback.onSuccess(result, i);
    });
}
The point is: the Callback, at the moment of instantiation, doesn't know anything about the Item it will get as a parameter. It only knows it when Callback::onSuccess is executed in the future.
So, will Item i change (be assigned a new object)?
No, because it is effectively final within someAsyncCall (the reference is never reassigned).
You can't even assign i = new Item(), as the compiler will complain about the anonymous callback accessing a variable that is not effectively final.
You could of course create a new Item and pass it to the callback
Item i2 = new Item();
callback.onSuccess(result, i2);
but then it would become one hell of a nasty library...
Nobody forbids you to do i.setText("bla") though, unless your Item class is immutable (its member fields are final themselves).
EDIT
If your question is how Java handles objects in method parameters, then the answer is: the method receives a copy of the reference, not a copy of the object.
You could try it with a simple swap method, void swap(Item i1, Item i2): you'll notice the references are effectively swapped only within the function, and as soon as you return, the caller's variables still point to their original instances.
So it's a copy, but a copy that still refers to the original instance.
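A quick demonstration (illustrative only; the no-arg Item constructor is an assumption):

static void swap(Item i1, Item i2) {
    Item tmp = i1;
    i1 = i2;   // reassigns only the local copies of the references
    i2 = tmp;
}

Item a = new Item();
Item b = new Item();
swap(a, b);
// a and b in the caller are unchanged: the reassignment inside swap()
// affected only the parameter copies, never the caller's variables.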
Coming back to your example: imagine your someAsyncCall waits 10000 ms before executing the callback.
In your for loop, after you call doItem, you also do item.setText("bla");.
When you print item.getText() within useResult you will get bla, even though the text was changed after the async function was called.
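In code, that scenario looks roughly like this (illustrative; setText/getText are assumed accessors on Item):

for (Item item : list) {
    doItem(item);           // schedules the async call; the callback runs ~10 s later
    item.setText("bla");    // mutates the very same instance the callback captured
}

// Later, inside useResult(result, item):
// item.getText() returns "bla", because the loop and the callback share the instance.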
This question already has answers here:
Limit a stream by a predicate
(19 answers)
Closed 8 years ago.
I have a set and a method:
private static Set<String> set = ...;

public static String method() {
    final String[] returnVal = new String[1];
    set.forEach((String str) -> {
        returnVal[0] += str;
        // if something: goto mark
    });
    // mark
    return returnVal[0];
}
Can I terminate the forEach inside the lambda (with or without using exceptions)?
Should I use an anonymous class?
I could do this:
set.forEach((String str) -> {
    if (someConditions()) {
        returnVal[0] += str;
    }
});
but it wastes time.
Implementation using Stream.reduce:
return set.parallelStream().reduce((output, next) -> {
    return someConditions() ? next : output;
}).get(); // should avoid empty set before
I am looking for the fastest solution, so exceptions and a "real" for-each loop are acceptable if they are fast enough.
I'm reluctant to answer this since I'm not entirely sure what you're attempting to accomplish, but the simple answer is no, you can't terminate a forEach when it's halfway through processing elements.
The official Javadoc states that it is a terminal operation that applies against all elements in the stream.
Performs an action for each element of this stream.
This is a terminal operation.
If you want to gather the results into a single result, you want to use reduction instead.
Be sure to consider what a stream is doing. It acts on all elements contained in it, and if it's filtered along the way, each step in the chain can be said to act on all elements in its stream, even if that's a subset of the original.
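For example, gathering only the matching elements into one string could look like this (a sketch; someCondition(str) stands in for the OP's someConditions() as a per-element test, and Collectors is java.util.stream.Collectors):

public static String method() {
    // Filter first, then reduce the surviving elements into a single string.
    return set.stream()
            .filter(str -> someCondition(str))
            .collect(Collectors.joining());
}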
In case you were curious as to why simply putting a return wouldn't have any effect, here's the implementation of forEach.
default void forEach(Consumer<? super T> action) {
    Objects.requireNonNull(action);
    for (T t : this) {
        action.accept(t);
    }
}
The consumer is explicitly passed in, and this is done independently of the actual iteration going on. I imagine you could throw an exception, but that would be tacky when more elegant solutions likely exist.