Remove blocking from a method - java

This is homework.
I do not want the solution, just a small number of links or ideas.
Simply speaking, what I want to do is shown in this simple example:
public class Example
{
    public void method()
    {
        int x = doThat();
        // Call other methods which do not depend on x
        return;
    }
}
doThat() is a method that is known to be time-consuming, so my program blocks until its results are back. I want to keep using other methods of this object, but the program is frozen until doThat() is finished. Those other methods do not necessarily have to be invoked from the method() used in this example; they may be called from outside the object.
I thought about using threads, but if I have a huge number of objects (1000+) this probably won't be very efficient (please correct me if I am wrong). I guess if I use threads I have to use one thread per object?
Is there any other way besides threads to make the invoking object not block when calling doThat()? If threading is the only way, could you provide a link?
Knowing that questions like this get downvoted, I will accept any downvotes, but even just a link would be more than great.
Thanks in advance. I hope the question is in line with the rules.

I'd also use threads for this, but I simply wanted to add that it would probably be interesting to look at java.util.concurrent.Executors (to create thread pools, since you have a number of objects) and the java.util.concurrent.Future and java.util.concurrent.Callable classes, which will allow you to launch tasks that can return a value.
Take a look at the concurrency tutorial for more info.
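For example, a minimal sketch along those lines, reusing the names from the question (the pool size and the Integer return type are assumptions):

import java.util.concurrent.*;

public class Example {
    // One shared pool sized for the machine, rather than one thread per object
    private static final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public void method() throws Exception {
        // Submit the slow call; it runs on a pool thread
        Future<Integer> futureX = pool.submit(new Callable<Integer>() {
            @Override
            public Integer call() {
                return doThat();
            }
        });

        // Call other methods which do not depend on x here...

        // Block only at the point where the result is actually needed
        int x = futureX.get();
    }

    private int doThat() { /* slow work, placeholder result */ return 42; }
}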

I recommend creating a class that implements Runnable, whose run method does what doThat() does in your sample. You can then invoke it on a separate Thread in a simple way: Java's Thread class has a constructor that takes a Runnable. Use the start and join methods.
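A minimal sketch of that idea (the DoThatTask name is just illustrative):

public class RunnableDemo {
    // Hypothetical task that stands in for the slow doThat() work
    static class DoThatTask implements Runnable {
        @Override
        public void run() {
            // time-consuming work goes here
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new DoThatTask());
        worker.start();   // run() executes on the new thread
        // ... other methods that do not depend on the result run here ...
        worker.join();    // wait for the worker only when you must
    }
}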
Cheers
Matthias

Of course threads are the only solution for handling jobs in the background, but you are not forced to use one thread per operation.
You can use a single thread that maintains a queue of operations to be performed, so that every call to doThat() adds a new entry to the queue.
Design patterns such as "Strategy" may help you generalize the concept of an operation to be performed, so that you can store "operation objects" in the thread's queue.
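A rough sketch of that single-worker-with-a-queue idea, using a BlockingQueue of Runnable "operation objects" (all names here are illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OperationQueue {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public OperationQueue() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    queue.take().run();   // blocks until an operation is available
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // allow clean shutdown
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called instead of invoking doThat() directly; returns immediately
    public void submit(Runnable operation) {
        queue.add(operation);
    }
}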

You want to perform several things concurrently, so using threads is indeed the way to go. The Java tutorial concurrency lesson will probably help you.
1000 concurrent threads will impose a heavy memory load, because a certain amount of stack memory is allocated for each thread (2 MB?). If, however, you can somehow make sure there is only one thread running at a time, you can still take the thread-per-object approach. This would require you to ensure that doThat() is only called once the thread produced by a former invocation on another object has finished.
If you cannot ensure that easily, the other approach would be to construct one worker thread that reads from a double-ended queue which object to work on. The doThat() method would then just add the object to the end of the queue, from which the worker thread will later extract it. You have to synchronize properly when accessing any data structure from concurrent threads. And the main thread should somehow notify the worker thread that it will not add any more objects to the queue, so the worker thread can terminate cleanly.
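Something along these lines, sketched with a blocking deque and a "poison pill" object as the termination signal (the names and the string payloads are just placeholders):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;

public class WorkerDemo {
    // Special marker telling the worker that no more work will arrive
    private static final Object POISON_PILL = new Object();
    private static final BlockingQueue<Object> queue = new LinkedBlockingDeque<>();

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Object item;
                while ((item = queue.take()) != POISON_PILL) {
                    // process the object here (the real doThat() work)
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        queue.put("object 1");       // stands in for doThat() enqueueing work
        queue.put("object 2");
        queue.put(POISON_PILL);      // main thread signals "no more objects"

        worker.join();               // worker terminates cleanly
    }
}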

Related

Java: why do we need to use synchronization instead of using a single thread?

While reading about Java's synchronized keyword, I wondered: if the processing has to be synchronized, why not just create a single thread (not the main thread) and process things one by one instead of creating multiple threads?
Because of 'synchronized', all other threads will just be waiting except the single running thread. It seems as if only a single thread is doing the work at any given time.
Please advise me on what I'm missing.
I would very much appreciate it if you could give some use cases.
I read an example about accessing a bank account from 2 ATM devices, but it made me more confused; I think the blocking (lock) should be done on the database side, and I don't think 'synchronized' would work across multiple EC2 instances.
If my thinking is wrong, please correct me.
If all the code you run with several threads is within a synchronized block, then indeed it makes no difference vs. using a single thread.
However, in general your code contains parts which can run on several threads in parallel and parts which can't. The latter need synchronization, but the former do not. By using several threads you can speed up the "parallelisable" bits.
Let's consider the following use case:
Your application is an Internet browser game. Every player has a score and can click a button. Every time a player clicks the button, their score is increased and their opponent's is decreased. The first player to reach 10 wins.
Given the nature of the game, and to single out a unique winner, you have to perform the two counter updates (and the check for the winner) atomically.
You'll have each player send click events on their own thread, and every event is translated into an increase of the owner's counter, a check on whether the counter reached 10, and a decrease of the opponent's counter.
This is very easily done by synchronizing the method which modifies the counters: every concurrent thread will try to obtain the lock, and when it does, it executes the code (and finally releases the lock).
The locking mechanism is pretty lightweight and only requires a single keyword.
If we follow your suggestion to implement another thread that handles the execution, we'd have to implement the whole thread-management logic (more code) and initialize that thread (more resources), and even so, to guarantee fairness in the handling of events, you would still need a way for your client threads to pass events to your executor thread. The only way I see to do so is to implement a BlockingQueue, which is also synchronized to prevent the race condition that naturally occurs when trying to add elements from two other threads.
I honestly don't see a way to solve this very simple use case without synchronization (or implementing your own locking algorithm, which basically does the same thing).
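A bare-bones sketch of that game logic, assuming two players identified by the indices 0 and 1 (all names are made up for illustration):

public class ClickGame {
    private final int[] scores = new int[2];   // index 0 = player A, 1 = player B
    private int winner = -1;                   // -1 means no winner yet

    // Each player's click handler calls this from its own thread.
    // synchronized makes the increase, decrease and winner check atomic.
    public synchronized void click(int player) {
        if (winner != -1) {
            return;                            // game already decided
        }
        int opponent = 1 - player;
        scores[player]++;
        scores[opponent]--;
        if (scores[player] >= 10) {
            winner = player;                   // exactly one winner is singled out
        }
    }

    public synchronized int getWinner() {
        return winner;
    }
}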
You can have a single thread and process one-by-one (and this is done), but there are considerable overheads in doing so and it does not remove the need for synchronization.
You are in a situation where you are starting with multiple threads (for example, you have lots of simultaneous web sessions). You want to do part of the processing in a single thread - let's say updating some common structure with some new data. You need to pass the new data to the single thread - how do you get it there? You would have to use some kind of message queue (or an equivalent) and have the single thread pick requests off that queue, and that would have to be synchronized anyway. On top of that there is the overhead of managing the queue, plus the issue that you need to get a reply back from the single thread asynchronously. So you are back to square one.
This technique is used where the processing you need to do is considerable and you don't want to block your main threads for a long time.
In summary: having a single thread does not remove the need for synchronization.
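To make the "single thread plus message queue" idea concrete, here is a rough sketch in which each request carries a CompletableFuture so the caller can get its reply back asynchronously (class and method names are invented for illustration):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

public class SingleUpdater {
    // Each request carries the new data plus a future for the reply
    static class Request {
        final String data;
        final CompletableFuture<String> reply = new CompletableFuture<>();
        Request(String data) { this.data = data; }
    }

    private final BlockingQueue<Request> queue = new LinkedBlockingQueue<>();

    public SingleUpdater() {
        Thread updater = new Thread(() -> {
            try {
                while (true) {
                    Request r = queue.take();               // the queue is the synchronized hand-off
                    String result = "updated: " + r.data;   // update the common structure here
                    r.reply.complete(result);               // asynchronous reply to the caller
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        updater.setDaemon(true);
        updater.start();
    }

    // Called from the many web-session threads; returns immediately
    public CompletableFuture<String> submit(String data) {
        Request r = new Request(data);
        queue.add(r);
        return r.reply;
    }
}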

Most efficient Java threading technique?

There seem to be a number of different ways in which one can create threads (Runnable vs Thread class) and also ThreadPools.
Are there any difference in terms of efficiency and which are the most efficient (in terms of performance) techniques for creating and pooling threads in Java?
If you need to handle many short and frequent requests, it is better to use a ThreadPool so you can reuse threads that are already open and assign them Runnable tasks.
But when you need to launch a thread for a single task, or instantiate a daemon thread that runs for the whole application lifetime or for a long, specific period, then it could be better to create a single thread and terminate it when you don't need it anymore.
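Roughly, the two situations look like this (the pool size and the task bodies are placeholders):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolVsSingleThread {
    public static void main(String[] args) {
        // Many short, frequent requests: reuse threads from a pool
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            final int request = i;
            pool.submit(() -> System.out.println("handled request " + request));
        }
        pool.shutdown();   // stop accepting tasks, let queued ones finish

        // Single long-lived background job: one dedicated (daemon) thread is enough
        Thread daemon = new Thread(() -> {
            // long-running background work
        });
        daemon.setDaemon(true);
        daemon.start();
    }
}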
At the end of the day, they're all relying on the same underlying Thread-based mechanism to actually do the work. That means that if you are asking "what is the most efficient way to start a single thread?" the answer is, create a Thread object and call start() on it, because any other method will take some other steps before it eventually creates a Thread object and calls start() on it.
That doesn't mean that this is the best way to spawn threads, it just means that it is the most low-level way to do it from Java code. What the other ways to create threads give you is different types of infrastructure to manage the underlying Threads, so your choice of method should depend on the amount and kind of infrastructure you need.
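In other words, the most low-level form is just something like this:

public class StartThread {
    public static void main(String[] args) {
        // Create a Thread object and call start() on it
        Thread t = new Thread(() -> {
            // work to do on the new thread
        });
        t.start();   // pools and executors ultimately end up doing this too
    }
}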

Implementing a Mutex in Java

I have a multi-threaded application (a web app in Tomcat to be exact). In it there is a class that almost every thread will have its own instance of. In that class there is a section of code in one method that only ONE thread (user) can execute at a time. My research has led me to believe that what I need here is a mutex (which is a semaphore with a count of 1, it would seem).
So, after a bit more research, I think what I should do is the following. It is important to note that my lock object is static.
Am I doing it correctly?
public class MyClass {
    private static Object lock = new Object();

    public void myMethod() {
        // Stuff that multiple threads can execute simultaneously.
        synchronized (MyClass.lock) {
            // Stuff that only one thread may execute at a time.
        }
    }
}
In your code, myMethod may be executed in any thread, but only in one at a time. That means that there can never be two threads executing this method at the same time. I think that's what you want - so: Yes.
Typically, the multithreading problem comes from mutability - where two or more threads are accessing the same data structure and one or more of them modifies it.
The first instinct is to control the access order using locking, as you've suggested. However, you can quickly run into lock contention, where your application loses a lot of processing time to context switching as your threads are parked on lock monitors.
You can get rid of most of the problem by moving to immutable data structures - so you return a new object from the setters rather than modifying the existing one - as well as utilising concurrent collections such as ConcurrentHashMap / CopyOnWriteArrayList.
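For instance, a small sketch of both ideas: an immutable value class whose "setter" returns a new object, plus a ConcurrentHashMap (the Point class is just an example):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ImmutabilityDemo {
    // Immutable value class: a "setter" returns a new object instead of mutating
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        Point withX(int newX) { return new Point(newX, y); }
    }

    public static void main(String[] args) {
        // Concurrent collection: safe to update from many threads without explicit locks
        Map<String, Point> points = new ConcurrentHashMap<>();
        points.put("origin", new Point(0, 0));
        points.computeIfPresent("origin", (k, p) -> p.withX(5));
        System.out.println(points.get("origin").x);   // prints 5
    }
}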
Concurrent programming is something you'll need to get your head around, especially as throughput comes from parallelisation in today's modern computing world.
This will allow one thread at a time through the block. Other threads will wait, but there is no queue as such: there is no guarantee that threads will acquire the lock in a fair order. In fact, with biased locking, it's unlikely to be fair. ;)
Your lock should be final. If there is any reason it can't be, that's probably a bug. BTW: you might be able to use synchronized(MyClass.class) instead.
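Putting those suggestions together, the question's code would look roughly like this:

public class MyClass {
    // final: the lock reference can never be reassigned
    private static final Object LOCK = new Object();

    public void myMethod() {
        // Stuff that multiple threads can execute simultaneously.
        synchronized (LOCK) {                // or: synchronized (MyClass.class)
            // Stuff that only one thread may execute at a time.
        }
    }
}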

SwingWorker synchronized method queue blocking or what?

Theoretical question. Suppose I have two SwingWorkers and an outputObject with the method
public synchronized void outputToPane(String output)
If each SwingWorker has a loop in it as shown:
// SwingWorker1
while (true) {
    outputObject.outputToPane("garbage");
}

// SwingWorker2
Integer i = 0;
while (true) {
    outputObject.outputToPane(i.toString());
    i++;
}
How would those interact? Does the outputToPane method receive an argument from one thread and block the other thread until it has finished with the first, or does it build a queue of tasks that execute in the order received, or some other option?
The reason I ask:
I have two threads that will be doing some heavy number crunching, one working from a non-pausable data stream and the other from a file. I would like them both to output to a central messaging area when they hit certain milestones; however, I CANNOT risk the data stream getting blocked while it waits for the other thread to finish with the output, because then I risk losing data.
synchronized only guarantees mutual exclusion. It is not fair, which in practice means that your workers might alternate quite nicely, or the first one might get precedence and block the second one completely until it finishes, or anything in between.
See the ReentrantLock docs for more about fairness. Maybe you could consider using it instead of synchronized. Probably an even better alternative would be to use a queue.
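For reference, a fair ReentrantLock version of the output method might look roughly like this (fairness is requested via the constructor argument; the method body is a placeholder):

import java.util.concurrent.locks.ReentrantLock;

public class FairOutput {
    // true requests a fair lock: waiting threads acquire it roughly in arrival order
    private final ReentrantLock lock = new ReentrantLock(true);

    public void outputToPane(String output) {
        lock.lock();
        try {
            // append output to the pane here
        } finally {
            lock.unlock();
        }
    }
}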
I would advise you to have two output objects in your messaging area, because if one thread starts to modify the output, the other one will have to wait for it to finish. Even if you can optimize it to be fast enough, the actual display of info would make your threads slow each other down over time.
Although you might try to synchronize them, the result might not always be 100% safe.

Java: patterns for monitoring worker threads?

Excuse the lack of knowledge on multithreaded apps; I am new to the field.
Is there a pattern or commonly used methodology for monitoring the 'job completion' or 'job status' of worker threads from a monitor (a class that acts as a monitor)?
What I have currently done is create a list of workers and create one thread for each worker. After all threads have started, I loop over the worker list and 'check their status' by calling a method.
At that time I couldn't come up with a different solution, but being new to the field, I don't know if this is the way to go, or if there are other solutions or patterns that I should study.
Depending on what you want, there are many ways that you can do this.
If you just want to wait until all the threads finish (i.e. all you care about is having everything finish before moving on), you can use Thread.join():
try {
    for (Thread t : threadsIWaitOn)
        t.join();
} catch (InterruptedException iex) {
    /* ... handle error ... */
}
If you want more fine-grained control over the thread status and want to be able, at any time, to know what the threads are doing, you can use the Thread.getState() method. This returns a Thread.State value that describes whether the thread is running, blocked, new, etc., and the Javadoc specifically says that it's designed for monitoring the state of a thread rather than for synchronizing on it. This might be what you want to do.
If you want even more information than that - say, a progress indicator for each thread that counts up from 0 to 100 as the thread progresses - then another option might be to create a Map from Threads to AtomicIntegers associating each thread with a counter, and pass the AtomicInteger into the constructor of each thread. That way, each thread can continuously increment its counter, and you can have another thread that continuously polls the progress.
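A small sketch of that counter approach (the number of threads and the work loop are placeholders):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ProgressDemo {
    public static void main(String[] args) {
        Map<Thread, AtomicInteger> progress = new ConcurrentHashMap<>();

        for (int i = 0; i < 3; i++) {
            AtomicInteger counter = new AtomicInteger(0);
            Thread worker = new Thread(() -> {
                for (int step = 1; step <= 100; step++) {
                    // ... do a slice of work ...
                    counter.set(step);            // report progress 0..100
                }
            });
            progress.put(worker, counter);
            worker.start();
        }

        // A monitoring thread (here: the main thread) polls the counters
        progress.forEach((t, c) ->
                System.out.println(t.getName() + ": " + c.get() + "%"));
    }
}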
In short, you have a lot of options based on what it is that you're trying to accomplish. Hopefully something in here helps out!
Use a thread pool and an Executor; then you get a Future<> and can poll for completion, plus some other nice features. I can recommend this book for you: Java Concurrency in Practice.
Try to use some kind of synchronization. For example, wait on a monitor/semaphore until the job is done, or whatever you need.
