How to allow threads to modify/get info between each other? - java

My question is more about a programming pattern than about a specific case.
I want to know how to better manage interactions between concurrent threads.
Say I have this, for example:
class Ocean implements Runnable {
    Boat myBoat;
    // standard stuff

    @Override
    public void run() {
        // the boat navigates through the seas…
    }
}
And the following, which is a different thread because it has to run at the same time:
class Radar implements Runnable {
    // standard stuff

    public int scanOcean() {
        // return boat.position();
    }
}
Both of those classes are instantiated from my main method, for example.
Now the question is: how can I access methods that live inside another thread? I looked it up, but I couldn't find any consistent and practical answer…
Some sites refer to the volatile keyword for fields that might be used by another thread, some talk about event listeners, others about event handlers… Should I use the standard Observer/Subject pattern?
Thanks!
Silver Duck

I have had good experiences with an intermediate helper object that holds only the data to be shared between threads: status info, abort flags, etc. That will not fit all cases, but it does quite often for me.
The helper instance should implement (and encapsulate) locking in its methods, getters and setters as required, so you don't have to deal with it on the outside.
Consistent, thread safe, easy to use.
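A minimal sketch of such a helper for the Ocean/Radar example above (the BoatStatus name and its fields are illustrative, not from the question):

public class BoatStatus {
    private int position;           // guarded by "this"
    private boolean abortRequested; // guarded by "this"

    public synchronized int getPosition() {
        return position;
    }

    public synchronized void setPosition(int position) {
        this.position = position;
    }

    public synchronized void requestAbort() {
        abortRequested = true;
    }

    public synchronized boolean isAbortRequested() {
        return abortRequested;
    }
}

Both Ocean and Radar would receive the same BoatStatus instance (e.g. via their constructors); since all access goes through synchronized methods, neither thread has to deal with locking itself.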

Related

Attempt execution uniqueness in a java project?

I am working on a Java library which has a singleton class with two methods: createTask() and addPointsToTask().
The library is meant to be used in any Java service that executes multiple requests.
The service should be able to call createTask() only once during its processing of a single request. Any further calls to createTask() in the same thread execution should fail. addPointsToTask() can be called any number of times.
As the library owner, how can I restrict this method to being called only once per thread?
I have explored ThreadLocal, but I don't think it fits my purpose.
One solution is to ask the service that uses the library to set a unique id in a ThreadLocal, but as this 'set-to-thread-local' step happens outside the boundary of the library, it is not a foolproof solution.
Any hints?
Short answer: you won't get a "foolproof" solution; i.e. a solution that someone can't subvert.
Unless you are running your library on a JVM platform that you control, users of your library will be able to find a way to subvert your "only once per thread" restriction if they try hard enough. For example:
They could use reflection to access the private state of the objects or classes that implement the restriction.
They could use bytecode injection to subvert your code.
They could decompile and replace your code.
They could modify their JVM to do something funky with your code. (The OpenJDK source code is available to anyone.)
Ask yourself the following:
Is this restriction reasonable from the perspective of the programmer you are trying to restrict?
Would a sensible programmer have good reason to try to break it?
Have you considered possible use-cases for your library where it would be reasonable to call createTask() multiple times? For example, use-cases that involve using thread pools?
If you are doing this because you think allowing multiple createTask() calls will break your library, my advice would be:
Tell the programmer via the javadocs and other documentation what is likely to break if they do the thing that you are trying to prevent.
Implement a "soft" check, and provide an easy way for a programmer to disable the check. (But do the check by default, if you think that is appropriate.)
The point is that a sensible programmer won't knowingly subvert restrictions unless they have good reason to. If they do, and they hurt themselves, that is not your problem.
On the other hand, if you are implementing this restriction for "business reasons" or to stop "cheating" or something like that, my advice is to recognize that a determined user will be able to subvert any restriction you attempt to embed in your code when they run it on their platform. If this fundamentally breaks your model, look for a different model.
You will not be able to prohibit multiple calls from the same request, simply because your library has no concept of what a "request" actually is. That very much depends on the service using the library: some services may use a single thread per request, but others may not. Using thread-locals is error-prone, especially in multi-threaded or reactive applications where the code processing a request can execute on multiple parallel threads.
If your requirement is that addPointsToTask is only called for a task that was actually started by the code processing the current request, you could design your API around that: e.g. createTask could return a context object that is required to call addPointsToTask later.
public TaskContext createTask() {
    // create the task and hand back an opaque context for it
}

public void addPointsToTask(TaskContext context, ....) {
    // add points to the task identified by the context
}
This way you can track the task context even across multiple threads executing code for the same request, and points will not get added to a task created by another request. A sketch of this idea follows.
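A minimal sketch of that context-object approach (TaskLibrary, TaskContext and the points map are illustrative names, not an existing API):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public final class TaskLibrary {

    // Opaque handle returned by createTask(); only the library can construct it.
    public static final class TaskContext {
        private final long id;
        private TaskContext(long id) { this.id = id; }
    }

    private final AtomicLong ids = new AtomicLong();
    private final ConcurrentHashMap<Long, Long> pointsByTask = new ConcurrentHashMap<>();

    public TaskContext createTask() {
        TaskContext context = new TaskContext(ids.incrementAndGet());
        pointsByTask.put(context.id, 0L);
        return context;
    }

    public void addPointsToTask(TaskContext context, long points) {
        // Points are attributed to the task behind the context, regardless of the calling thread.
        pointsByTask.merge(context.id, points, Long::sum);
    }
}

Because the context travels with the request rather than with a thread, this keeps working even when a single request is processed on several threads.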
You could add a method to your singleton which runs a piece of service code in the context of a request.
Dummy implementation:
package stackoverflow;

import java.util.concurrent.Callable;

public enum YourLibrarySingleton {
    INSTANCE;

    // Task is assumed to be an interface or class defined elsewhere in the library.
    private final ThreadLocal<Task> threadLocalTask;

    YourLibrarySingleton() {
        this.threadLocalTask = new ThreadLocal<>();
    }

    public void createTask() {
        this.threadLocalTask.set(new Task() {});
    }

    public void addPointsToTask() {
        Task task = this.threadLocalTask.get();
        // add points to that task
    }

    public <T> T handleRequest(Callable<T> callable) throws Exception {
        try {
            return callable.call();
        } finally {
            // clear the per-thread task once the request is done
            this.threadLocalTask.remove();
        }
    }
}
Which could be used like this:
package stackoverflow;

public class ServiceCode {

    public void handleRequest() throws Exception {
        YourLibrarySingleton.INSTANCE.handleRequest(() -> {
            YourLibrarySingleton.INSTANCE.createTask();
            YourLibrarySingleton.INSTANCE.addPointsToTask();
            YourLibrarySingleton.INSTANCE.addPointsToTask();
            return "result";
        });
    }
}

customizing synchronized in java

I know this is kind of a weird requirement, and I could achieve it with the various locks available in Java, but I want to minimize the development effort.
Requirement: My existing code base uses the synchronized keyword at the method level for thread safety in various places. Now the same code base can be used by multiple tenants, so we have to make the synchronization tenant-aware as well.
Possible Solutions:
Change the code to use a different lock for each tenant, and change every method to acquire the lock at the start and release it at the end.
Somehow customize the synchronized keyword into a customSynchronized keyword that behaves in a tenant-aware manner.
I know solution 1 will definitely work, but it would mean a huge amount of code changes, so I'd like to know from the experts whether solution 2 is possible at all, even if it is complex.
Now:
public synchronized void method1() {
    // some processing on a shared object
}
Trying to make:
public customSynchronized void method1() {
    // some processing on a shared object
}

Does grouping functions like start/stop and open/close violate the Single Responsibility Principle?

For example:
class Engine {
    private EventExecutor executor;

    public void start() {
        executor.submit(...);
        executor.submit(...);
        //...
    }

    public void stop() {
        executor.shutdown();
    }
}
Submitting different events requires modifying start but not stop. Changing how the executor terminates requires modifying stop but not start.
Those methods have two separate reasons for modification (submitting events and tuning termination), so should they be separated as in the example below?
class Engine {
    private EventExecutor executor;
    private EngineStarter starter;
    private EngineStopper stopper;

    public void start() {
        starter.start(executor);
    }

    public void stop() {
        stopper.stop(executor);
    }
}

interface EngineStarter {
    void start(EventExecutor executor);
}

interface EngineStopper {
    void stop(EventExecutor executor);
}
Does the first example violate SRP? Should the behaviors be defined in a different class?
The Single Responsibility Principle is not violated as long as you have only one start and one stop. If you have to vary the behavior behind these methods, using interfaces as you did is the right way.
Now, in the first example, if your engine has to send various events when it starts, that is not a responsibility problem but a strong-coupling problem. The responsibility for sending the events is still the engine's, so there is no violation here, but you create strong coupling with the various events, and that can backfire as your code grows larger.
For that kind of system, using an Observer pattern is usually the best way: objects register to listen for your engine to start, and when it does, they execute the right events themselves. A sketch follows.
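A minimal sketch of that Observer approach (the EngineListener name is illustrative):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface EngineListener {
    void onStarted();
}

class Engine {
    // Interested parties register themselves; the engine no longer knows about concrete events.
    private final List<EngineListener> listeners = new CopyOnWriteArrayList<>();

    public void addListener(EngineListener listener) {
        listeners.add(listener);
    }

    public void start() {
        // ... bring the engine up ...
        for (EngineListener listener : listeners) {
            listener.onStarted();
        }
    }
}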
The Engine class here maps directly onto the canonical Modem example, where Robert Martin describes four methods having two distinct responsibilities:
The dial and hangup functions manage the connection of the modem, while the send and recv functions communicate data.
If we presume that dial == start and hangup == stop, then these represent one responsibility. (In terms of a software engine, we might call it the life-cycle responsibility.) Conversely, the events submitted would map to the send function, which is a different responsibility. This leads to the conclusion that start should be separated from any specific list of events, rather than separated from stop.
Finally, note that in Martin's conclusion the modem implementation remains in one class, violating SRP, but each responsibility is represented by a different interface. This conforms to the Interface Segregation Principle, as @Steven commented. So you may wish the engine to implement both a life-cycle interface and an event-submitter interface.

Combining Handler and AsyncTask in Android - Obvious Flaws?

I have a simple Android app which uses AsyncTasks for I/O. A frequent pattern:
User clicks a button
In response, an onClick handler instantiates and .execute()s an AsyncTask
Once the AsyncTask completes, the UI should be updated in some way
According to the documentation for AsyncTask, the correct way to accomplish the UI updates is to override onPostExecute in the AsyncTask class - this will be invoked back on the UI thread after execution and thus can touch the widgets, etc.
However, it seems wrong to me that onPostExecute should have any sort of hard reference to a UI element. I would prefer to keep my I/O tasks and UI code separate. Instead, this seems the obvious situation where I should pass an opaque callback to the AsyncTask - the callback retains a reference to the UI elements and thus we maintain isolation and reusability in the code. A classic delegate pattern (or perhaps listener, event, etc, many options here).
As an example, the code below seems wrong to me:
class QueryJobsDBTask extends AsyncTask<Void, Void, ArrayList<ContentValues>> {

    // doInBackground omitted for brevity

    @Override
    protected void onPostExecute(ArrayList<ContentValues> freshJobsData) {
        someList.clear();
        someList.addAll(freshJobsData);
        // BUG why does my DB query class hold UI references?
        someAdapter.notifyDataSetChanged();
    }
}
After some research, it looks like the Handler class is the most straightforward and lightweight way to accomplish a delegate pattern here. I can write reusable AsyncTasks for I/O and specify contextual UI update callbacks on a per-instance basis via Handler instances. So I have implemented this new Handler-enabled base class
import android.os.AsyncTask;
import android.os.Handler;
import android.os.Message;

public abstract class HandlerAsyncTask<Params, Progress, Result> extends AsyncTask<Params, Progress, Result> {

    private Handler preExecuteHandler, postExecuteHandler;

    public void setPreExecuteHandler(Handler preExecuteHandler) {
        this.preExecuteHandler = preExecuteHandler;
    }

    public void setPostExecuteHandler(Handler postExecuteHandler) {
        this.postExecuteHandler = postExecuteHandler;
    }

    @Override
    protected void onPreExecute() {
        if (preExecuteHandler != null) {
            preExecuteHandler.sendMessage(Message.obtain());
        }
    }

    @Override
    protected void onPostExecute(Result result) {
        if (postExecuteHandler != null) {
            Message msg = Message.obtain();
            msg.obj = result;
            postExecuteHandler.sendMessage(msg);
        }
    }
}
And voila, all of my I/O tasks are now properly partitioned from the UI - and I can still specify simple UI update callbacks when needed via Handler instances. This seems straightforward, flexible, and superior to me ... so of course I wonder what I'm missing.
How is the current framework approach superior? Is there some major pitfall to this approach? To my knowledge the topology of code execution and threads is exactly the same at runtime; only the code coupling is looser (plus a few extra frames on the stack).
This is an elegant solution for segregating UI and background tasks in small projects, although passing Runnables is even more elegant. Keep in mind that AsyncTask is a wrapper around Thread/Handler, so you're doubling up on the thread messaging that's already going on behind the scenes. The flaw here is that if you design the AsyncTasks to be reusable, you'll need to make sure the I/O you're running is thread-safe, as there's no communication between the various AsyncTasks about who is active or which resources are being accessed. An IntentService might be more appropriate if you need to queue background tasks rather than just fire them. A sketch of the callback-passing variant is below.
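A minimal sketch of that callback-passing idea (CallbackAsyncTask and ResultCallback are illustrative names; the callback runs on the UI thread inside onPostExecute):

import android.os.AsyncTask;

public abstract class CallbackAsyncTask<Params, Progress, Result> extends AsyncTask<Params, Progress, Result> {

    // Illustrative callback interface: invoked on the UI thread with the task's result.
    public interface ResultCallback<R> {
        void onResult(R result);
    }

    private ResultCallback<Result> callback;

    public void setCallback(ResultCallback<Result> callback) {
        this.callback = callback;
    }

    @Override
    protected void onPostExecute(Result result) {
        if (callback != null) {
            callback.onResult(result);
        }
    }
}

The Activity supplies the callback (which may touch its widgets), while the task subclass stays free of UI references.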
It's not so much a matter of superiority as of purpose and use case. AsyncTasks are usually written as private classes (or declared anonymously inline) within Activities, and as such already have access to the Activity's references to whatever UI elements need updating.
If an AsyncTask is of sufficient size and/or complexity that it should be pulled out into its own class, and can be reused by other classes, then using Handlers for better decoupling is a great idea. It's just that it's often not necessary, as the AsyncTask is accomplishing something specific to the Activity in which it was defined, and for simple ones the corresponding handler code could even be larger than the entire AsyncTask itself.

ability to get the progress on a Future<T> object

Looking at the java.util.concurrent package and the Future interface, I notice (unless I am mistaken) that the ability to start a lengthy task and query its progress only comes with the SwingWorker implementing class.
This raises the following question:
Is there a way, in a non-GUI, non-Swing application (imagine a console application), to start a lengthy task in the background and allow other threads to inspect its progress? It seems to me that there is no reason why this capability should be limited to Swing/GUI applications. Otherwise, the only available option, the way I see it, is to go through ExecutorService::submit, which returns a Future object. However, the base Future interface does not allow monitoring the progress.
Obviously, the Future object would only be good for blocking and then receiving the result.
The Runnable or Callable object that you submit would have to know how to provide this progress (percentage complete, count of attempts, a status enum, etc.) and expose it either as an API call on the object itself or by posting it to some lookup resource (an in-memory map, or a database if necessary). For simplicity I tend to prefer the object itself, especially since you will most likely need either a handle (id) to look the object up or a reference to the object itself.
This does mean that you have three threads operating: one for the actual work, one that is blocked waiting for the result, and one that monitors the progress. The last one could be shared, depending on your requirements. A sketch of the first approach follows.
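A minimal sketch of the "ask the object itself" approach, publishing progress through an AtomicInteger (all names are illustrative):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class ProgressDemo {

    // A Callable that exposes its own progress as a percentage.
    static class LengthyTask implements Callable<String> {
        private final AtomicInteger progress = new AtomicInteger(0);

        public int progress() {
            return progress.get();
        }

        @Override
        public String call() throws Exception {
            for (int i = 1; i <= 100; i++) {
                Thread.sleep(50);   // simulate a unit of work
                progress.set(i);    // publish progress for other threads
            }
            return "done";
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        LengthyTask task = new LengthyTask();
        Future<String> future = pool.submit(task);

        // Monitoring thread: keep a reference to the task itself, not just the Future.
        while (!future.isDone()) {
            System.out.println("progress: " + task.progress() + "%");
            Thread.sleep(500);
        }
        System.out.println("result: " + future.get());
        pool.shutdown();
    }
}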
In my case I passed a HashSet with the objects to process as a parameter to the method; the set was created as an instance variable of the calling class. Since the asynchronous method removes each object after processing it, the calling method can check the size of the remaining set to gauge progress. I think that, in general, passing objects by reference solves the problem.
I was hoping there was a standard concurrency-framework way to stay updated on the progress of a long-running task without requiring the client program to worry about orchestrating and synchronizing everything correctly. It seemed to me that one could imagine an extended version of the Future<T> interface that would support
public short progress();
in addition to the usual isDone() and get() methods.
Obviously the implementation of progress() would then need to poll the object directly, so maybe Future<T> would need to be specified as Future<T extends CanReportProgress>, where CanReportProgress is the following interface:
public interface CanReportProgress {
    public short progress();
}
This raises the question of why one would bother to go through the Future object rather than calling the object itself to get the progress. I don't know; I'll have to give it more thought. It could be argued that it is closer to the current contract/semantics, whereby the Callable object is not itself accessed again by the client programmer after the call to ExecutorService::submit/execute.
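One way to approximate that idea with today's API is a thin wrapper that pairs the Future with the progress source; a minimal sketch, assuming the CanReportProgress interface above (ProgressFuture is an illustrative name):

import java.util.concurrent.Future;

public class ProgressFuture<T> {

    private final Future<T> future;
    private final CanReportProgress progressSource;

    public ProgressFuture(Future<T> future, CanReportProgress progressSource) {
        this.future = future;
        this.progressSource = progressSource;
    }

    public boolean isDone() {
        return future.isDone();
    }

    public T get() throws Exception {
        return future.get();
    }

    public short progress() {
        // Delegates to the task object, which is where the progress actually lives.
        return progressSource.progress();
    }
}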
