I'm learning Java and I wonder whether the item in this code line:
useResult(result, item);
will be overwritten by the next call coming from
doItem(item);
Here's the example:
public void doSomeStuff() {
    // list with 100 items
    for (Item item : list) {
        doItem(item);
    }
}
private void doItem(final Item item) {
    someAsyncCall(item, new SomeCallback() {
        @Override
        public void onSuccess(final Result result) {
            useResult(result, item);
        }
    });
}
The SomeCallback() fires some time in the future, on another thread.
I mean, will the item in useResult(result, item); still be the same when the callback returns?
Please advise: what happens here?
I mean, will the item in useResult(result, item); still be the same when the callback returns?
Of course it will; what would be the use of it otherwise?
What you are doing is creating 100 different SomeCallback instances (one anonymous class instance per iteration), each of which will process a different Item object.
A skeleton for your someAsyncCall may look like this:
public static void someAsyncCall(Item i, Callback callback) {
CompletableFuture.runAsync( () -> { // new thread
Result result = computeResult(i);
callback.onSuccess(result, i);
});
}
The point is: the Callback, at the moment of instantiation, doesn't know anything about the Item it will receive as a parameter. It will only know it when Callback::onSuccess is executed in the future.
So, will Item i change (be assigned a new object)?
No, because it is effectively final within someAsyncCall (the reference is never reassigned).
You can't even write i = new Item(); the compiler will complain about the lambda accessing a variable that is not effectively final.
You could of course create a new Item and pass it to the callback
Item i2 = new Item();
callback.onSuccess(result, i2);
but then it would become one hell of a nasty library...
Nobody forbids you from doing i.setText("bla") though, unless your Item class is immutable (its member fields are themselves final).
EDIT
If your question is how Java handles objects in method parameters, then the answer is: references are passed by value, so each parameter is a copy of the original reference.
You could try it with a simple swap method, void swap(Item i1, Item i2); you'll notice the references are swapped only within the function, and as soon as it returns the variables still point to their original instances.
But it's a copy of the reference, so it still refers to the original instance.
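For illustration, a minimal sketch of that swap test (assuming Item has a no-argument constructor, which is purely an assumption here):

static void swap(Item i1, Item i2) {
    Item tmp = i1;
    i1 = i2; // only the local copies of the references are swapped
    i2 = tmp;
}

Item a = new Item();
Item b = new Item();
swap(a, b);
// a and b still refer to their original instances here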
Coming back to your example: imagine your someAsyncCall waits 10000 ms before executing the callback.
In your for loop, right after you call doItem, you also do item.setText("bla");.
When you print item.getText() within useResult, you will get "bla", even though the text was changed after the async call was made.
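A rough sketch of that scenario (assuming Item exposes setText/getText and that someAsyncCall delays the callback, as described above):

for (Item item : list) {
    doItem(item);        // schedules the async call for this item
    item.setText("bla"); // mutate the very same instance right afterwards
}

// roughly 10000 ms later, inside onSuccess -> useResult(result, item):
// item.getText() returns "bla" here, because the callback captured a
// reference to the same Item instance that was mutated above.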
I am trying to implement logic that will allow me to update an array in one thread, using Sun's Unsafe.compareAndSwapObject utility, while safely iterating over that same array in a different thread. I believe that CopyOnWriteArrayList does what I am searching for; however, it uses locking for the update, and I am trying to develop a solution that does not have any locks.
The compare and swap logic is as follows:
public void add(final Object toAdd) {
    Object[] currentObjects;
    Object[] newObjects;
    do {
        currentObjects = this.objects;
        newObjects = ArrayUtil.add(currentObjects, toAdd);
    } while (!UNSAFE.compareAndSwapObject(this, OBJECTS_OFFSET, currentObjects, newObjects));
}
While the iteration logic is as follows (the toString() is a placeholder):
public void doWork() {
    Object[] currentObjects = this.objects;
    for (final Object object : currentObjects) {
        object.toString();
    }
}
My questions are:
Is this code safe?
Does this give me the same snapshot behaviour that CopyOnWriteArrayList does?
If it does, when is the iteration snapshot formed?
Does the fact that I'm creating a local variable have anything to do with this?
If it does, how does the JVM know to not optimise this away?
Have I essentially created a variable on the stack that has a reference to the most up to date array object?
Lastly, to follow up on the third point above about "snapshot" creation, would the following code work the same way:
public void doWork() {
    actuallyDoWork(this.objects);
}

public void actuallyDoWork(Object[] currentObjects) {
    for (final Object object : currentObjects) {
        object.toString();
    }
}
I have cache refresh logic and want to make sure that it's thread-safe and done the correct way.
public class Test {
    Set<Integer> cache = Sets.newConcurrentHashSet();

    public boolean contain(int num) {
        return cache.contains(num);
    }

    public void refresh() {
        cache.clear();
        cache.addAll(getNums());
    }
}
So I have a background thread refreshing the cache by periodically calling refresh, while multiple threads are calling contain at the same time. I was trying to avoid putting synchronized on the method signatures, because refresh could take some time (imagine that getNums makes network calls and parses huge amounts of data) and contain would then be blocked.
I think this code is not good enough, because if contain is called between clear and addAll, it will wrongly return false.
What is the best way to refresh the cache without adding significant latency to the contain call?
The best way would be to use a functional programming approach with immutable state (in this case a Set): instead of adding and removing elements in that set, you create an entirely new Set every time you want to add or remove elements. Java 9's immutable collection factories (e.g. Set.of) support this style.
It can be a bit awkward or infeasible to retrofit this onto legacy code, however. So instead, what you could do is keep the Set that contain reads from in a volatile field, and assign a new instance to that field in the refresh method.
public class Test {
    volatile Set<Integer> cache = new HashSet<>();

    public boolean contain(int num) {
        return cache.contains(num);
    }

    public void refresh() {
        Set<Integer> privateCache = new HashSet<>();
        privateCache.addAll(getNums());
        cache = privateCache;
    }
}
Edit: we don't want or need a concurrent set here. That would only matter if you wanted to add and remove individual elements concurrently, which in my opinion is a pretty useless thing to do in this case. What you want is to switch the old Set for a new one, which is why a volatile field is enough: it guarantees that readers see the fully built new set once the reference is assigned.
But, as I mentioned at the start of my answer, if you never modify collections and instead build a new one each time you want to update (which can be a cheap operation when the implementation shares structure with the old set), you never need to worry about concurrency, because there is no shared mutable state between threads.
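A rough sketch of that immutable-swap idea (assuming Java 10+ for Set.copyOf; getNums is the method from the question, stubbed here only to make the sketch self-contained):

import java.util.Set;

public class Test {
    // readers always see either the old or the fully built new set
    private volatile Set<Integer> cache = Set.of();

    public boolean contain(int num) {
        return cache.contains(num);
    }

    public void refresh() {
        // build a brand-new immutable set, then publish it in one assignment
        cache = Set.copyOf(getNums());
    }

    private Set<Integer> getNums() {
        // placeholder for the expensive call described in the question
        return Set.of(1, 2, 3);
    }
}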
How would you make sure your cache doesn't contain stale entries when contains is called? Furthermore, you'd need to call refresh every time getNums() changes, which is pretty inefficient. It would be better to control the changes to getNums() yourself and update the cache accordingly. The cache might look like:
public class MyCache {
    // it's a ConcurrentHashMap to be able to use putIfAbsent
    final ConcurrentHashMap<Integer, Boolean> cache = new ConcurrentHashMap<>();

    public boolean contains(Integer num) {
        return cache.containsKey(num);
    }

    public void add(Integer num) {
        cache.putIfAbsent(num, true);
    }

    public void clear() {
        cache.clear();
    }

    public void remove(Integer num) {
        cache.remove(num);
    }
}
Update
As @schmosel made me realize, mine was a wasted effort: it is in fact enough to initialize a complete new HashSet<> with your values in the refresh method, assuming of course that the cache field is marked volatile. In short, @Snickers3192's answer points out what you seek.
Old answer
You can also use a slightly different system.
Keep two Set<Integer> instances, one of which will always be empty. When you refresh the cache, you can asynchronously re-initialize the second one and then just switch the pointers. Other threads accessing the cache won't see any particular overhead in this.
From an external point of view, they will always be accessing the same cache.
private volatile int currentCache; // 0 or 1
private final Set<Integer> caches[] = new HashSet[2]; // use two caches; either one will always be empty, so not much memory consumed
private volatile Set<Integer> cachePointer = null; // just a pointer to the current cache, must be volatile

// initialize
{
    this.caches[0] = new HashSet<>(0);
    this.caches[1] = new HashSet<>(0);
    this.currentCache = 0;
    this.cachePointer = caches[this.currentCache]; // point to cache one from the beginning
}
Your refresh method may look like this:
public void refresh() {
    // store current cache pointer
    final int previousCache = this.currentCache;
    final int nextCache = getNextPointer();
    // you can easily compute it asynchronously;
    // in the meantime, external threads will still access the normal cache
    CompletableFuture.runAsync(() -> {
        // fill the unused cache
        caches[nextCache].addAll(getNums());
        // then switch the pointer to the just-filled cache;
        // from this point on, threads are accessing the new cache
        switchCachePointer();
        // empty the other cache, still on the async thread
        caches[previousCache].clear();
    });
}
where the utility methods are:
public boolean contains(final int num) {
    return this.cachePointer.contains(num);
}

private int getNextPointer() {
    return (this.currentCache + 1) % this.caches.length;
}

private void switchCachePointer() {
    // make cachePointer point to a new cache
    this.currentCache = this.getNextPointer();
    this.cachePointer = caches[this.currentCache];
}
When I have an observable list and want to do some stuff whenever an element is added to or removed from it, I find myself using the following code pattern very frequently:
ObservableList<String> myDynamicList = FXCollections.observableArrayList();
myDynamicList.add("My string 1");
myDynamicList.add("My string 2");
<...>

Consumer<String> onElementAdded = s -> {
    logger.info("Added: " + s);
};
Consumer<String> onElementRemoved = s -> {
    logger.info("Removed: " + s);
};

// Follow list insertions & removals in the future
myDynamicList.addListener((ListChangeListener<? super String>) c -> {
    while (c.next()) {
        c.getAddedSubList().forEach(onElementAdded);
        c.getRemoved().forEach(onElementRemoved);
    }
});

// "Synchronize" the initial state - call the same handler for all already inserted elements
myDynamicList.forEach(onElementAdded);
Problems with this approach:
Code clutter: I have to define (at least) the onElementAdded consumer as a variable, to be used in both the addListener and forEach calls. I.e., I cannot define the behavior directly in the addListener argument (which would be more concise).
There are two calls that need to be kept consistent (i.e., when changing onElementAdded in addListener, don't forget to do the same in forEach).
I always have a doubt whether to put the addListener or the forEach call first. I reckon this doesn't matter if the list is only modified by one thread, and if not, this will break anyway. But still...
So the question is: is there a better way of binding a listener to an observable list and calling the same listener for all existing elements, as if they had just been added to the list?
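One hypothetical way to factor the pattern out is a small helper that registers the listener and immediately replays the existing elements; this is just a sketch, not a built-in JavaFX API (same imports as the snippet above):

static <T> void observeAdditions(ObservableList<T> list,
                                 Consumer<? super T> onAdded,
                                 Consumer<? super T> onRemoved) {
    // register for future insertions and removals
    list.addListener((ListChangeListener<? super T>) c -> {
        while (c.next()) {
            c.getAddedSubList().forEach(onAdded);
            c.getRemoved().forEach(onRemoved);
        }
    });
    // replay elements that are already present, as if they had just been added
    list.forEach(onAdded);
}

Usage then becomes a single call: observeAdditions(myDynamicList, onElementAdded, onElementRemoved); the two handlers can also be passed as inline lambdas.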
I'm trying to use Observable in my code and there is a problem giving me a hard time.
public class observeState extends Observable {
    private int selectedTransaction;
    private Log theLog;

    public void setSelectedTransaction(int idx) {
        if (selectedTransaction != idx) {
            this.selectedTransaction = idx;
            setChanged();
            notifyObservers("setSelectedTransaction");
            System.out.println("Observers : " + this.countObservers());
        }
    }

    public void setLog(Log log) {
        if (theLog != log) {
            theLog = log;
            System.out.println(theLog.getLogTransactions().size() + "setLog");
            setChanged();
            notifyObservers("setLog");
            System.out.println("Observers : " + this.countObservers());
        }
    }
}
There are two observers observing this observable class, and it does send out notifyObservers when the setSelectedTransaction method is called, as shown by the test line "Observers : 2". However, the next method, setLog, does not seem to have any observers, giving "Observers : 0". I don't think the Observable can only be used once.
The most likely cause of this issue is that you are not calling the methods on the same object. It is a common mistake to assume two objects are the same because they have the same name, or some other confusion. I would print out the hashCode of each object, or use a debugger, to ensure you really are calling methods on the same object.
BTW, you can try making the calls in the opposite order, or more than once, to test your theory.
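For example, a quick check along those lines (observable stands for whatever reference each call site actually holds; the variable name is illustrative):

// print an identity for the object each call site uses;
// different numbers mean you are notifying different instances
System.out.println("observable@" + System.identityHashCode(observable)
        + ", observers=" + observable.countObservers());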
Either the objects you are using to call setSelectedTransaction and setLog are different, or the observers are removing themselves as observers in their update method.
Is there a simple way to clear all fields of an instance? I mean, I would like to remove all values assigned to the fields of an instance.
ADDED
From the main thread I start a window and another thread which controls the state of the window (the latter thread, for example, displays certain panels for a certain period of time). I have a class which contains the state of the window (which stage the user is on, which buttons he has already clicked).
In the end, the user may want to start the whole process from the beginning (it is a game). So I decided that, if everything is executed from the beginning, I would like all parameters to be clean (fresh, unassigned).
ADDED
The main thread creates the new object, which is executed in a new thread (and the old thread finishes). So I cannot create a new object from the old thread; I just have a loop in the second thread.
I don't get it. How would you programmatically decide how to clear the various fields?
For normal attributes it can be easy (var = null), but what about composite things or collections? Should it be collection = null, or collection.removeAll()?
This question is looking for syntactic sugar that wouldn't make much sense.
The best way is to write your own reset() method to customize the behaviour for every single object; maybe you can patternize it using an
interface Resettable {
    void reset();
}
but nothing more than that..
Is there a simple way to clear all fields of an instance? I mean, I would like to remove all values assigned to the fields of an instance.
Yes, just assign a default value to each one of them. It would take you about 20-30 minutes and will run well forever* (YMMV).
Create a reset method and invoke it:
class YourClass {
    int a;
    int b;
    boolean c;
    double d;
    String f;
    // and so on...

    public void method1() {}
    public void method2() {}
    public void method3() {}
    // etc.

    // Magic method: reset all the attributes of your instance...
    public void reset() {
        a = 0;
        b = 0;
        c = false;
        d = 0.0;
        f = "";
    }
}
And then just invoke it in your code:
....
YourClass object = new YourClass();
Thread thread = new YourSpecificNewThread(object);
thread.start();

... // Later on, when you decide you have to reset the object, just call your method:
object.reset(); // like new
I don't really see where the problem is with this approach.
You may use reflection:
Try something like this:
Field[] fields = object.getClass().getDeclaredFields();
for (Field f : fields) {
    f.setAccessible(true);
    try {
        f.set(object, null); // throws IllegalArgumentException for primitive fields
    } catch (IllegalAccessException e) {
        e.printStackTrace();
    }
}
It's not a beautiful solution, but it may work for you.
There is no other way than setting all of them to null.
As an aside, I find that a particularly weird idea. You would be better off creating a new instance instead of trying to reset your old one.
If you want to clear a filter (Serializable) whose null fields your application can handle, you can use BeanUtils (Apache Commons):
Field[] fields = filter.getClass().getDeclaredFields();
for (Field f : fields) {
    if (f.getName().endsWith("serialVersionUID")) {
        continue;
    }
    try {
        BeanUtils.setProperty(filter, f.getName(), null);
    } catch (IllegalAccessException | InvocationTargetException e) {
        FacesUtils.handleError(LOG, "Error clearing the filter...", e);
    }
}
I hope it can help you.