Thread pool using same object - java

I created a thread pool where each thread takes an object from a queue and handles it. I'm not sure I implemented it in the right way. Here's the code:
public class HandlerThreadsPool<T> {
    private BlockingQueue<T> queue;
    private IQueueObjectHandler<T> objectHandler;

    private class ThreadClass implements Runnable {
        @Override
        public void run() {
            while (true) {
                objectHandler.handleItem(queue.take());
            }
        }
    }

    public HandlerThreadsPool(int numberOfThreads, BlockingQueue<T> queue, IQueueObjectHandler<T> dataHandler) {
        this.queue = queue;
        this.objectHandler = dataHandler;
        ExecutorService service = Executors.newFixedThreadPool(numberOfThreads);
        for (int i = 0; i < numberOfThreads; i++)
            service.execute(new ThreadClass());
        service.shutdown();
    }
}
The dataHandler processes the object, doing some work on it. Is this the right way to do it?
Thanks

First, it is not good practice to create, submit to, and shut down an ExecutorService inside a constructor.
Look at the shutdown() javadoc:
Initiates an orderly shutdown in which previously submitted tasks are
executed, but no new tasks will be accepted. Invocation has no
additional effect if already shut down.
You didn't post IQueueObjectHandler, but it looks like your ThreadClass jobs will run forever, unless of course you stop them by explicitly throwing some unchecked exception inside objectHandler.handleItem(..), which would be the wrong approach. You can also have problems with JVM termination because of these endlessly running non-daemon threads (see the JVM graceful termination conditions).
Also, you don't catch InterruptedException around queue.take(), which causes a compile-time error. Handling InterruptedException properly would also let you stop on a possible shutdownNow().
So:
Don't shut down the pool in the constructor; this will lead to problems. Use Runtime.getRuntime().addShutdownHook(..) if you don't want to perform the shutdown somewhere else.
Use shutdownNow() to actually stop the executor's threads if they are in infinite loops, and handle InterruptedException inside ThreadClass for this. Alternatively, you can stop them with a volatile boolean or AtomicBoolean flag that indicates the running/stopped status: check the flag in the loop and change it whenever you need to shut the jobs down.
Make ExecutorService service an instance variable, not a local one. Losing the reference to a running ExecutorService looks bad, and keeping it can help you elsewhere.
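Here is a minimal sketch applying these points, assuming IQueueObjectHandler<T> exposes the handleItem(T) method from your code and that a caller invokes stop() when the pool is no longer needed:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HandlerThreadsPool<T> {
    private final BlockingQueue<T> queue;
    private final IQueueObjectHandler<T> objectHandler;
    private final ExecutorService service;            // keep the reference as an instance field

    private class ThreadClass implements Runnable {
        @Override
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    objectHandler.handleItem(queue.take());   // take() blocks until an item arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();           // restore the flag and let the worker end
            }
        }
    }

    public HandlerThreadsPool(int numberOfThreads, BlockingQueue<T> queue, IQueueObjectHandler<T> dataHandler) {
        this.queue = queue;
        this.objectHandler = dataHandler;
        this.service = Executors.newFixedThreadPool(numberOfThreads);
        for (int i = 0; i < numberOfThreads; i++) {
            service.execute(new ThreadClass());
        }
        // no shutdown() here: the pool keeps running until stop() is called
    }

    public void stop() {
        service.shutdownNow();                                // interrupts the blocked take() calls
    }
}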

Related

Executor service interrupt handling

I have an application with multiple threads. I want them to execute in order, so I chose an ExecutorService for multi-threading. If any one thread (run method) hits an error, I want to move on to the next thread, so that by the end I can know how many threads completed successfully (a count is needed). My sample code:
The Main class:
public class MySampleClass {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        for (int i = 0; i <= 100; i++) {
            executor.submit(new ThreadClass());
        }
        // After all tasks are submitted, shut down the executor
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MILLISECONDS);
    }
}
My sample ThreadClass:
public class ThreadClass implements Runnable {
    @Override
    public void run() {
        boolean isCompleted = doAction();
        if (!isCompleted) {
            // I want to stop this thread only here.. what to do?
            // executor.shutdown() would stop all other threads
        }
    }
}
Any suggestion what to do? Am I doing it the wrong way?
You shouldn't stop a thread. There is a reason Thread.stop is deprecated. Instead you can interrupt the current thread:
Thread.currentThread().interrupt();
You can use Callable instead of Runnable. If you do that, the submit method returns a Future (http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Future.html) instance on which you can verify whether the callable did its work the right way. The documentation explains it:
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html#submit(java.util.concurrent.Callable)
Hope I explained it the right way.
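For illustration, a minimal sketch of that idea, assuming your doAction() returns a boolean success flag as in your ThreadClass; the count of successful tasks is gathered from the Futures at the end:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MySampleClass {

    // doAction() is assumed to exist and report success/failure, as in the question
    static boolean doAction() { return Math.random() > 0.1; }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();  // tasks run in submission order
        List<Future<Boolean>> results = new ArrayList<Future<Boolean>>();

        for (int i = 0; i <= 100; i++) {
            results.add(executor.submit(new Callable<Boolean>() {
                @Override
                public Boolean call() {
                    return doAction();            // a failure here does not affect the other tasks
                }
            }));
        }
        executor.shutdown();

        int completed = 0;
        for (Future<Boolean> f : results) {
            try {
                if (f.get()) {                     // blocks until this task has finished
                    completed++;
                }
            } catch (ExecutionException e) {
                // the task threw an exception; count it as not completed and move on
            }
        }
        System.out.println(completed + " tasks completed successfully");
    }
}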

How to stop next thread from running in a ScheduledThreadPoolExecutor

I have a ScheduledThreadPoolExecutor which has one thread and runs a task every 30 seconds.
Now, if the currently executing task throws some exception, I need to make sure that the next task does not run and that the ScheduledThreadPoolExecutor is shut down.
How do I achieve this?
Catch the exception and call the shutdown/shutdownNow API on the ExecutorService.
shutdown()
Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted. Invocation has no additional effect if already shut down.
This method does not wait for previously submitted tasks to complete execution. Use awaitTermination to do that.
shutdownNow()
Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution.
This method does not wait for actively executing tasks to terminate. Use awaitTermination to do that.
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks. For example, typical implementations will cancel via Thread.interrupt(), so any task that fails to respond to interrupts may never terminate.
Refer to this post for more details and working code:
How to forcefully shutdown java ExecutorService
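As a rough sketch of that approach (runOnce() and the 30-second period are placeholders for your actual job):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SingleFailureScheduler {
    public static void main(String[] args) {
        final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    runOnce();                  // placeholder for the real 30-second job
                } catch (Exception e) {
                    scheduler.shutdown();       // no further runs will be scheduled
                }
            }
        }, 0, 30, TimeUnit.SECONDS);
    }

    static void runOnce() {
        // real work goes here; may throw
    }
}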
As a cleaner way, you can simply use a statically accessed class to set/check the execution availability.
import java.util.concurrent.atomic.AtomicBoolean;

class ThreadManager {
    private static final AtomicBoolean shouldStop = new AtomicBoolean(false);

    public static void setExceptionThrown(boolean val) {
        shouldStop.set(val);
    }

    public static boolean shouldExecuteTask() {
        return !shouldStop.get();
    }
}
And a custom Runnable implementation that allows you to check whether the task may execute:
abstract class ModdedRunnable implements Runnable {
    @Override
    public void run() {
        if (ThreadManager.shouldExecuteTask()) {
            try {
                runImpl();
            } catch (Exception e) {
                ThreadManager.setExceptionThrown(true);
            }
        }
    }

    public abstract void runImpl() throws Exception;
}
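A possible usage sketch, scheduling such a task every 30 seconds (the runImpl body is a placeholder):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ModdedRunnableDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        scheduler.scheduleAtFixedRate(new ModdedRunnable() {
            @Override
            public void runImpl() throws Exception {
                // real work goes here; if it throws, ThreadManager flags the failure
                // and every later run becomes a no-op
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}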

Is there a way to put tasks back in the executor queue

I have a series of tasks (i.e. Runnables) to be executed by an Executor.
Each task requires a certain condition to be valid in order to proceed. I would be interested to know if there is a way to somehow configure the Executor to move a task to the end of the queue and try to execute it later, when the condition is valid and the task is able to execute and finish.
So the behavior would be something like:
1. Thread-1 takes a task from the queue and run is called.
2. Inside run the condition is not yet valid.
3. The task stops, and Thread-1 places the task at the end of the queue and gets the next task to execute.
4. Later on, Thread-X (from the thread pool) picks the task from the queue again, the condition is valid, and the task is executed.
In Java 6, the ThreadPoolExecutor constructor takes a BlockingQueue<Runnable>, which is used to store the queued tasks. You can implement a blocking queue which overrides poll() so that if an attempt is made to remove and execute a "ready" job, the poll proceeds as normal; otherwise the runnable is placed at the back of the queue and you attempt to poll again, possibly after a short timeout.
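A rough sketch of that idea, assuming a hypothetical ReadyAware interface on the tasks; note that a ThreadPoolExecutor's worker threads use both take() and the timed poll(), so both are overridden here:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical marker interface for tasks that know when they may run.
interface ReadyAware {
    boolean isReady();
}

class RequeueingBlockingQueue extends LinkedBlockingQueue<Runnable> {

    private boolean ready(Runnable r) {
        // Tasks that don't implement ReadyAware are always considered ready.
        return !(r instanceof ReadyAware) || ((ReadyAware) r).isReady();
    }

    @Override
    public Runnable take() throws InterruptedException {
        while (true) {
            Runnable r = super.take();
            if (ready(r)) {
                return r;
            }
            offer(r);                        // not ready yet: rotate it to the back
            TimeUnit.MILLISECONDS.sleep(50); // small pause to avoid a busy spin
        }
    }

    @Override
    public Runnable poll(long timeout, TimeUnit unit) throws InterruptedException {
        Runnable r = super.poll(timeout, unit);
        if (r == null || ready(r)) {
            return r;
        }
        offer(r);                            // rotate and report "nothing ready" this round
        return null;
    }
}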
Unless you want busy waiting, you can add a repeating task to a ScheduledExecutorService with an appropriate polling interval, and cancel or kill it after it has become "valid" and run.
ScheduledExecutorService ses = ...
ses.scheduleAtFixedRate(new Runnable() {
    public void run() {
        if (!isValid()) return;
        performTask();
        throw new RuntimeException("Last run");
    }
}, PERIOD, PERIOD, TimeUnit.MILLISECONDS);
Create the executor first.
You have several possibilities.
If I suppose that your tasks implement a simple interface to query their status (something like an enum with 'NeedReschedule' or 'Completed' values), then implement a wrapper (implementing Runnable) for your tasks which takes the task and the executor as instantiation parameters. This wrapper will run the task it is bound to, check its status afterwards, and if necessary reschedule a copy of itself on the executor before terminating.
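A hedged sketch of this status-based variant, using hypothetical TaskStatus and StatusTask names:

import java.util.concurrent.ExecutorService;

enum TaskStatus { COMPLETED, NEED_RESCHEDULE }

// Hypothetical task interface: a Runnable that can report whether it needs another attempt.
interface StatusTask extends Runnable {
    TaskStatus status();
}

class StatusTaskWrapper implements Runnable {
    private final ExecutorService executor;
    private final StatusTask task;

    StatusTaskWrapper(ExecutorService executor, StatusTask task) {
        this.executor = executor;
        this.task = task;
    }

    @Override
    public void run() {
        task.run();
        if (task.status() == TaskStatus.NEED_RESCHEDULE) {
            executor.execute(new StatusTaskWrapper(executor, task)); // put a copy back in the queue
        }
    }
}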
Alternatively, you could use an exception mechanism to signal the wrapper that the task must be rescheduled.
This solution is simpler, in the sense that it doesn't require a particular interface for your tasks, so plain Runnables can be thrown into the system without trouble. However, exceptions incur more computation time (object construction, stack traces etc.).
Here's a possible implementation of the wrapper using the exception signaling mechanism.
You need to implement the RescheduleException class (extending RuntimeException, so that the wrapped runnable can throw it from run() without a throws clause); no more specific interface is needed for the task in this setup. You could also use a plain RuntimeException as proposed in another answer, but then you would have to test the message string to know whether it is the exception you are waiting for.
public class TaskWrapper implements Runnable {
    private final ExecutorService executor;
    private final Runnable task;

    public TaskWrapper(ExecutorService e, Runnable t) {
        executor = e;
        task = t;
    }

    @Override
    public void run() {
        try {
            task.run();
        } catch (RescheduleException e) {
            executor.execute(this);
        }
    }
}
Here's a very simple application firing up 200 wrapped tasks that randomly ask for a reschedule.
class Task implements Runnable {
    @Override
    public void run() {
        if (Math.random() > 0.5)
            throw new RescheduleException();
    }
}

public class Main {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(10);
        int i = 200;
        while (i-- > 0)
            executor.execute(new TaskWrapper(executor, new Task()));
    }
}
You could also have a dedicated thread to monitor the other threads' results (using a message queue) and reschedule if necessary, but you lose one thread compared to the other solution.

Pattern for executing concurrent tasks just once

I am working on a Java server which dispatches XMPP messages, and workers execute the tasks from my clients.
private static ExecutorService threadpool = Executors.newCachedThreadPool();
DispatchWorker worker = new DispatchWorker(connection, packet);
threadpool.execute(worker);
This works fine, but I need a bit more than that.
I don't want to execute the same request multiple times.
My worker may also start another thread with a background task, which is likewise only allowed to run once at a time: a thread pool inside the worker threads.
I can identify the requests by a string, and I can also give the background tasks an id to identify them.
My solution would be a synchronized HashMap where my running tasks are registered with their id. The reference to the map would be passed to the worker threads so that they remove their entry when they finish.
This solution feels a bit clumsy, so I wanted to know whether there are more elegant patterns/best practices.
best regards, m
This is exactly what Quartz does (although it does a lot more, like scheduling jobs in the future).
You can use a singleton thread pool or pass the thread pool as an argument (I would make the pool final).
You can use a HashSet to guard against adding duplicate tasks.
I believe using a Map is okay for this. But instead of a synchronized HashMap you can also use ConcurrentHashMap, which lets you specify the concurrency level, i.e. how many threads can work with the map at the same time. It also has an atomic putIfAbsent operation.
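A hedged sketch of the putIfAbsent idea, with hypothetical names (OnceOnlyDispatcher, dispatch) standing in for your DispatchWorker setup:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OnceOnlyDispatcher {
    private final ExecutorService threadpool = Executors.newCachedThreadPool();
    // request id -> marker; an entry means "this request is currently being worked on"
    private final ConcurrentMap<String, Boolean> running = new ConcurrentHashMap<String, Boolean>();

    public void dispatch(final String requestId, final Runnable work) {
        // putIfAbsent is atomic: only the first caller for a given id gets null back
        if (running.putIfAbsent(requestId, Boolean.TRUE) != null) {
            return;                            // same request already running, skip it
        }
        threadpool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    work.run();                // your DispatchWorker logic would go here
                } finally {
                    running.remove(requestId); // allow the id to be dispatched again later
                }
            }
        });
    }
}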
I would use queues and daemon worker threads that are always running and wait for something to arrive in the queue. This way it is guaranteed that only one worker is working on a request.
If you only want one thread to run, turn POOLSIZE down to 1, or use newSingleThreadExecutor.
I do not quite understand your second requirement: do you mean only one thread is allowed to run as a background task? If so, you could create another single-thread executor and use that for the background task. Then it would not make much sense to have POOLSIZE > 1, unless the work done in the background thread is very short compared to the work done in the worker itself.
private static interface Request {};

private final int POOLSIZE = 10;
private final int QUEUESIZE = 1000;
private final BlockingQueue<Request> queue = new LinkedBlockingQueue<Request>(QUEUESIZE);

public void startWorkers() {
    ExecutorService threadPool = Executors.newFixedThreadPool(POOLSIZE);
    for (int i = 0; i < POOLSIZE; i++) {
        threadPool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    while (true) {
                        final Request request = queue.take();
                        doStuffWithRequest(request);
                    }
                } catch (InterruptedException e) {
                    // LOG
                    // Shut down this worker thread.
                }
            }
        });
    }
}

public void handleRequest(Request request) {
    if (!queue.offer(request)) {
        // Cancel request, queue is full
    }
}
At startup time, startWorkers starts the workers (surprise!).
handleRequest handles requests coming from a webservice, servlet or whatever.
Of course you need to adapt "Request" and "doStuffWithRequest" to your need, and add some additional logic for shutdown etc.
We originally wrote our own utilities to handle this, but if you want the results memoised, then Guava's ComputingMap encapsulates the initialisation by one and only one thread (with other threads blocking and waiting for the result), and the memoisation.
It also supports various expiration strategies.
Usage is simple: you construct it with an initialisation function:
Map<String, Foo> cache = new MapMaker().makeComputingMap(new Function<String, Foo>() {
    public Foo apply(String key) {
        return … // init with expensive calculation
    }
});
and then just call it:
Foo foo = cache.get("key");
The first thread to ask for "key" will be the one that performs the initialisation.

sequential event processing via executorservice

I have an event queue to process. A thread adds events to the queue.
I have created a runnable Task that in the run method does all which is necessary to process the event.
I have declared an Executors.newCachedThreadPool() and I execute each Task with it.
public class EventHandler {
    private static final ExecutorService handlers = Executors.newCachedThreadPool();

    public void handleNextEvent(AnEvent event) {
        handlers.execute(new Task(event));
    }

    public class Task implements Runnable {
        private final AnEvent event;

        public Task(AnEvent event) {
            this.event = event;
        }

        @Override
        public void run() {
            // Event processing
        }
    }
}

public class AnotherClass {
    private final EventHandler eventHandler = new EventHandler();

    public void passEvent(AnEvent evt) { // This is called by another thread
        eventHandler.handleNextEvent(evt);
    }
}
My problem is that if I call execute on the executor, my code will get the next event and run the next Runnable via the executor.
My purpose is to process the next event from the queue only after the previous task has ended.
How would I know that the previous task has finished, so that I know I can call handleNextEvent again?
Is having some status field updated by the Task a good idea?
Thanks
Executors.newCachedThreadPool() will create new threads on demand, so it's not what you want. You want something like Executors.newSingleThreadExecutor(), which will process the events one at a time, and queue up the rest.
See javadoc:
Creates an Executor that uses a single worker thread operating off an unbounded queue. (Note however that if this single thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks.) Tasks are guaranteed to execute sequentially, and no more than one task will be active at any given time.
I think Executors.newSingleThreadExecutor() and the submit() method are the solution to your problem: http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ExecutorService.html
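Concretely, the only change to the EventHandler above would be the pool it uses. A sketch, keeping AnEvent and Task as defined in the question:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EventHandler {
    // A single worker thread: tasks are queued and run strictly one after another,
    // so the next event is processed only after the previous Task has finished.
    private static final ExecutorService handlers = Executors.newSingleThreadExecutor();

    public void handleNextEvent(AnEvent event) {
        handlers.execute(new Task(event));   // no status field needed; the executor serializes the tasks
    }
}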
