I'm asking this question because I am creating a lot of executor services. While I may already have a memory leak somewhere that needs to be investigated, I think a recent change to the following code actually made it worse, so I am trying to confirm what is going on:
@FunctionalInterface
public interface BaseConsumer extends Consumer<Path> {
@Override
default void accept(final Path path) {
String name = path.getFileName().toString();
ExecutorService service = Executors.newSingleThreadExecutor(runnable -> {
Thread thread = new Thread(runnable, "documentId=" + name);
thread.setDaemon(true);
return thread;
});
Future<?> future = service.submit(() -> {
baseAccept(path);
return null;
});
try {
future.get();
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
} catch (ExecutionException ex) {
throw new RuntimeException(ex);
}
}
void baseAccept(final Path path) throws Exception;
}
Then this Consumer<Path> gets called on another thread pool with (usually) N=2 threads; I am not sure if that is relevant.
The question is: Does the ExecutorService service go out of scope and get garbage collected once BaseConsumer#accept has finished?
Does the ExecutorService service go out of scope and get garbage collected once BaseConsumer.accept() has finished?
Yes.
Indeed, the associated thread pool should also be garbage collected ... eventually.
The ExecutorService that is created by Executors.newSingleThreadExecutor() is an instance of FinalizableDelegatedExecutorService. That class has a finalize() method that calls shutdown() on the wrapped ExecutorService object. Provided that all outstanding tasks actually terminate, the service object will shut down its thread pool.
(AFAIK, this is not specified. But it is what is implemented according to the source code, in Java 6 onwards.)
Does adding a finally { service.shutdown(); } to the try-catch around future.get() help reclaim resources more quickly? (Not necessarily garbage collecting the service object.)
Yes it does. Calling shutdown() causes the threads to be released as soon as the outstanding tasks complete. That procedure starts immediately, whereas if you just left it to the garbage collector it wouldn't start until the finalizer was called.
Now if the resources were just "ordinary" Java objects, this wouldn't matter. But in this case, the resource you are reclaiming is a Java thread, and that has associated operating system resources (e.g. a native thread) and a non-trivial chunk of off-heap memory. So it may be worthwhile to do this.
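For illustration, here is the accept() method from the question with only the suggested finally block added (a sketch; everything else is unchanged):

default void accept(final Path path) {
    String name = path.getFileName().toString();
    ExecutorService service = Executors.newSingleThreadExecutor(runnable -> {
        Thread thread = new Thread(runnable, "documentId=" + name);
        thread.setDaemon(true);
        return thread;
    });
    Future<?> future = service.submit(() -> {
        baseAccept(path);
        return null;
    });
    try {
        future.get();
    } catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
    } catch (ExecutionException ex) {
        throw new RuntimeException(ex);
    } finally {
        service.shutdown(); // lets the worker thread terminate as soon as the task completes
    }
}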
But if you are looking to optimize this, maybe you should be creating a long-lived ExecutorService object, and sharing it across multiple "consumer" instances.
I want the execution to take place on a named thread because it makes logging easier. In either case, this code should work.
You can do this in a much simpler and faster way:
Thread t = Thread.currentThread();
String name = t.getName();
try {
t.setName("My new thread name for this task");
// do task
} finally {
t.setName(name);
}
This way you can use a named thread without creating a new one.
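If you combine the two ideas, a rough sketch could look like the following (purely illustrative: the SHARED_SERVICE name and the pool size of 2 are assumptions, not part of the original code); each task renames the pooled thread while it runs and restores the name afterwards:

@FunctionalInterface
public interface BaseConsumer extends Consumer<Path> {

    // Interface fields are implicitly public static final: one long-lived pool shared
    // by every consumer instance (the size of 2 is only an illustration).
    ExecutorService SHARED_SERVICE = Executors.newFixedThreadPool(2);

    @Override
    default void accept(final Path path) {
        String documentId = path.getFileName().toString();
        Future<?> future = SHARED_SERVICE.submit(() -> {
            Thread worker = Thread.currentThread();
            String originalName = worker.getName();
            try {
                worker.setName("documentId=" + documentId); // named thread while the task runs
                baseAccept(path);
                return null;
            } finally {
                worker.setName(originalName); // restore the pooled thread's name for reuse
            }
        });
        try {
            future.get();
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        } catch (ExecutionException ex) {
            throw new RuntimeException(ex);
        }
    }

    void baseAccept(final Path path) throws Exception;
}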
Related
Do I need to handle runtime exceptions in an ExecutorService? I tried an example in a Spring Boot web application, and the code still executes despite the exception.
Here is the code:
@RestController
class WelcomeController {
ExecutorService es = Executors.newSingleThreadExecutor();
@GetMapping("/sayhi")
public String sayHi() {
es.submit(() -> {
System.out.println("hello");
int a = 0;
if (10 / a == 1) {
}
});
return "hi";
}
}
When an exception is thrown in one thread it doesn’t propagate to other threads unless you do something to make it do that (like using a Future). Here the thread causes an exception and dies, but the rest of the program isn’t affected.
The executor creates a replacement for the lost thread; see the API doc:
Creates an Executor that uses a single worker thread operating off an unbounded queue, and uses the provided ThreadFactory to create a new thread when needed. Unlike the otherwise equivalent newFixedThreadPool(1, threadFactory) the returned executor is guaranteed not to be reconfigurable to use additional threads.
It would seem to me like a good idea to have tasks handle exceptions that they cause. Otherwise the thread dies and the pool has to start a new one to replace it. This is basically what the article linked in the comments says.
You are supposed to keep the returned Future and use Future.get() to find out whether there was an unhandled exception.
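For example, a sketch of the controller above with the Future kept (the exception handling shown here is only one option):

@GetMapping("/sayhi")
public String sayHi() {
    Future<?> future = es.submit(() -> {
        System.out.println("hello");
        int a = 0;
        if (10 / a == 1) { // throws ArithmeticException
        }
    });
    try {
        future.get(); // re-throws the task's exception, wrapped in an ExecutionException
    } catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
    } catch (ExecutionException ex) {
        ex.getCause().printStackTrace(); // the ArithmeticException from the task
    }
    return "hi";
}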
I created a thread pool where each thread takes an object from a queue and handles it. I'm not sure I implemented it in the right way. Here's the code:
public class HandlerThreadsPool<T> {
private BlockingQueue<T> queue;
private IQueueObjectHandler<T> objectHandler;
private class ThreadClass implements Runnable {
@Override
public void run() {
while (true) {
objectHandler.handleItem(queue.take());
}
}
}
public HandlerThreadsPool(int numberOfThreads, BlockingQueue<T> queue, IQueueObjectHandler<T> dataHandler){
this.queue = queue;
this.objectHandler = dataHandler;
ExecutorService service = Executors.newFixedThreadPool(numberOfThreads);
for (int i = 0; i < numberOfThreads; i++)
service.execute(new ThreadClass());
service.shutdown();
}
}
The dataHandler does some work with the object. Is this implemented correctly?
Thanks
First, it is not good practice to create, submit to, and shut down an ExecutorService inside a constructor.
Look at shutdown() javadoc
Initiates an orderly shutdown in which previously submitted tasks are
executed, but no new tasks will be accepted. Invocation has no
additional effect if already shut down.
You didn't post IQueueObjectHandler, but it seems to me that your ThreadClass jobs will run forever, unless of course you stop them by explicitly throwing some unchecked exception inside objectHandler.handleItem(..), which would be wrong. You can also run into problems with JVM termination because of these endlessly running non-daemon threads (see the JVM's graceful termination conditions).
Also, you don't catch the InterruptedException thrown by queue.take(), which is a compile-time error. Handling InterruptedException properly would also let the workers stop when shutdownNow() is eventually called.
So
Don't shut down the pool in the constructor; this will lead to problems. Use Runtime.getRuntime().addShutdownHook(..) if you don't want to perform the shutdown somewhere else.
Use shutdownNow() to actually stop the executor's threads if they are in infinite loops, and handle InterruptedException inside ThreadClass for this. Alternatively you can stop them with a volatile boolean or AtomicBoolean flag that indicates the running/stopped status: check the flag in the loop and flip it whenever you need to stop the jobs.
Make ExecutorService service an instance variable, not a local. Losing the reference to a running ExecutorService is a bad sign, and keeping it can help you elsewhere, as in the sketch below.
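Putting those points together, a rough sketch of how the class might look (the stop() method name is just an illustration), with the executor held in an instance variable and interrupt-aware workers:

public class HandlerThreadsPool<T> {

    private final BlockingQueue<T> queue;
    private final IQueueObjectHandler<T> objectHandler;
    private final ExecutorService service; // instance variable, so it can be shut down later

    private class ThreadClass implements Runnable {
        @Override
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    objectHandler.handleItem(queue.take()); // take() throws InterruptedException
                }
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt(); // stop cleanly when interrupted
            }
        }
    }

    public HandlerThreadsPool(int numberOfThreads, BlockingQueue<T> queue, IQueueObjectHandler<T> dataHandler) {
        this.queue = queue;
        this.objectHandler = dataHandler;
        this.service = Executors.newFixedThreadPool(numberOfThreads);
        for (int i = 0; i < numberOfThreads; i++) {
            service.execute(new ThreadClass());
        }
    }

    public void stop() {
        service.shutdownNow(); // interrupts the workers blocked in take()
    }
}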
In my application, when the user logs in, the following method is called with all the symbols the user owns:
public void sendSymbol(String commaDelimitedSymbols) {
try {
// further logic
} catch (Throwable t) {
}
}
My question: since this task of sending the symbols can complete slowly but must complete, is there any way I can make it a background task? Is this possible? Please share your views.
Something like this is what you're looking for.
ExecutorService service = Executors.newFixedThreadPool(4);
service.submit(new Runnable() {
public void run() {
sendSymbol(commaDelimitedSymbols);
}
});
Create an executor service. This will keep a pool of threads for reuse. Much more efficient than creating a new Thread each time for each asynchronous method call.
If you need a higher degree of control over your ExecutorService, use ThreadPoolExecutor. As far as configuring this service, it will depend on your use case. How often are you calling this method? If very often, you probably want to keep one thread in the pool at all times at least. I wouldn't keep more than 4 or 8 at maximum.
As you are only calling sendSymbol once every half second, one thread should be plenty, given that sendSymbol is not an extremely time-consuming routine. I would configure a fixed thread pool with 1 thread. You could even reuse this thread pool to submit other asynchronous tasks.
As long as you don't submit too many, it would be responsive when you call sendSymbol.
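If you do want the finer control of ThreadPoolExecutor, a minimal sketch might look like this (all sizes and the keep-alive time are illustrative, and sendSymbol/commaDelimitedSymbols are taken from the question):

ExecutorService service = new ThreadPoolExecutor(
        1,                          // core pool size: one thread kept warm
        4,                          // maximum pool size, used only once the queue fills up
        60L, TimeUnit.SECONDS,      // idle non-core threads are reclaimed after this
        new ArrayBlockingQueue<Runnable>(100)); // bounded work queue

service.submit(() -> sendSymbol(commaDelimitedSymbols));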
There is no really simple solution. Basically you need another thread which runs the method, but you also have to care about synchronization and thread-safety.
new Thread(new Runnable() {
public void run() {
sendSymbol(commaDelimitedSymbols);
}
}).start();
Maybe a better way would be to use Executors
But you will need to care about thread-safety. This is not really a simple task.
It sure is possible. Threading is the way to go here. In Java, you can launch a new thread like this
Runnable backGroundRunnable = new Runnable() {
public void run(){
//Do something. Like call your function.
}};
Thread sampleThread = new Thread(backGroundRunnable);
sampleThread.start();
When you call start(), it launches a new thread. That thread will start running the run() function. When run() is complete, the thread terminates.
Be careful: if you are calling this from a Swing app, then you need to use SwingUtilities instead. Google that up, sir.
Hope that works.
Sure, just use Java threads and join them to get the results (or another appropriate synchronization method, depending on your requirements).
You need to spawn a separate thread to perform this activity concurrently. Although this will not be a separate process, you can keep performing other tasks while the symbols are being sent.
The following is an example of how to use threads. You simply implement Runnable in a class that holds your data and the code you want to run in the thread. Then you create a Thread with that runnable object as the parameter. Calling start() on the thread will run the Runnable object's run method.
public class MyRunnable implements Runnable {
private String commaDelimitedSymbols;
public MyRunnable(String commaDelimitedSymbols) {
this.commaDelimitedSymbols = commaDelimitedSymbols;
}
public void run() {
// Your code
}
}
public class Program {
public static void main(String args[]) {
MyRunnable myRunnable = new MyRunnable("...");
Thread t = new Thread(myRunnable);
t.start();
}
}
My multi-threaded application has a main class that creates multiple threads and then waits. The runnable class I created gets a file list, gets a file, and removes a file by calling a web service. After a thread is done it notifies the main class to run again. My problem is that this works for a while, but after an hour or so the log shows a thread reaching the bottom of its run method and then nothing more happens. The Java process is still running, but judging from the log it is not doing anything.
Main class methods:
Main method
while (true) {
// Removed the code here, it was just calling a web service to get a list of companies
// Removed code here was creating the threads and calling the start method for threads
mainClassInstance.waitMainClass();
}
public final synchronized void waitMainClass() throws Exception {
// synchronized (this) {
this.wait();
// }
}
public final synchronized void notifyMainClass() throws Exception {
// synchronized (this) {
this.notify();
// }
}
I originally synchronized on the instance but changed it to synchronized methods. No errors are being recorded in the web service log or the client log. My assumption is that I got the wait/notify wrong or am missing some piece of information.
Runnable Thread Code:
At the end of the run method
// This is a class member variable in the runnable thread class
mainClassInstance.notifyMainClass();
The reason I used a wait-and-notify scheme is that I do not want the main class to run unless there is a need to create another thread.
The purpose of the main class is to spawn threads. The class has an infinite loop to run forever creating and finishing threads.
The purpose of the infinite loop is to continually update the company list.
I'd suggest moving from the tricky wait/notify to one of the higher-level concurrency facilities in the Java platform. The ExecutorService probably offers the functionality you require out of the box. (CountDownLatch could also be used, but it's more plumbing)
Let's try to sketch an example using your code as a template:
ExecutorService execSvc = Executors.newFixedThreadPool(THREAD_COUNT);
while (true) {
// Removed the code here, it was just calling a web service to get a list of companies
List<FileProcessingTask> tasks = new ArrayList<FileProcessingTask>();
for (Company comp:companyList) {
tasks.add(new FileProcessingTask(comp));
}
List<Future<FileResult>> results = execSvc.invokeAll(tasks); // This call will block until all tasks are executed.
// for each Future<FileResult> in results: check the result (see the sketch below)
}
class FileProcessingTask implements Callable<FileResult> { // like Runnable, but it can return a value -> very useful for gathering results after the multi-threaded execution
    public FileResult call() {...}
}
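The "check result" step from the comment above could look roughly like this (a sketch; FileResult is still the placeholder result type):

try {
    List<Future<FileResult>> results = execSvc.invokeAll(tasks); // blocks until every task has run
    for (Future<FileResult> future : results) {
        try {
            FileResult result = future.get(); // the tasks are already done, so this returns immediately
            // inspect or log the result here
        } catch (ExecutionException ex) {
            // the task threw; ex.getCause() is the original exception
        }
    }
} catch (InterruptedException ex) {
    Thread.currentThread().interrupt();
}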
------- edit after comments ------
If your getCompanies() call can give you all companies at once, and there's no requirement to check that list continuously while processing, you could simplify the process by creating all work items first and submitting them to the executor service all at once.
List<FileProcessingTask> tasks = new ArrayList<FileProcessingTask>();
for (Company comp:companyList) {
tasks.add(new FileProcessingTask(comp));
}
The important thing to understand is that the executorService will use the provided collection as an internal queue of tasks to execute. It takes the first task, gives it to a thread of the pool, gathers the result, places the result in the result collection and then takes the next task in the queue.
If you don't have a producer/consumer scenario (cf. the comments), where new work is produced at the same time that tasks are executed (consumed), then this approach should be sufficient to parallelize the processing work among a number of threads in a simple way.
If you have additional requirements why the lookup of new work should happen interleaved from the processing of the work, you should make it clear in the question.
I am working on a Java server that dispatches XMPP messages, and workers execute the tasks from my clients.
private static ExecutorService threadpool = Executors.newCachedThreadPool();
DispatchWorker worker = new DispatchWorker(connection, packet);
threadpool.execute(worker);
This works fine, but I need a bit more than that.
I don't want to execute the same request multiple times.
My worker may start another thread with a background task that is also only allowed to run once at a time: a thread pool inside the worker threads.
I can identify the requests by a string, and I can also give the background tasks an id to identify them.
My solution would be a synchronized hash map in which the running tasks are registered with their id. A reference to the map is passed to the worker threads so that they remove their entry when they have finished.
This solution feels a bit clumsy, so I wanted to know whether there are more elegant patterns/best practices.
best regards, m
This is exactly what Quartz does (although it does a lot more, like scheduling jobs in the future).
You can use a singleton thread pool or pass the thread pool as an argument. (I would make the pool final.)
You can use a HashSet to guard against adding duplicate tasks.
I believe using a Map is okay for this. But instead of a synchronized HashMap you can use ConcurrentHashMap, which lets you specify a concurrency level, i.e. how many threads can work with the map at the same time. It also has an atomic putIfAbsent operation, as sketched below.
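For example, a sketch of the putIfAbsent approach (dispatch, requestId and work are made-up names; threadpool is the cached pool from your code):

// ids of requests that are currently being processed
ConcurrentHashMap<String, Boolean> running = new ConcurrentHashMap<>();

void dispatch(String requestId, Runnable work) {
    // putIfAbsent is atomic: only the first caller for a given id sees null and submits
    if (running.putIfAbsent(requestId, Boolean.TRUE) == null) {
        threadpool.execute(() -> {
            try {
                work.run();
            } finally {
                running.remove(requestId); // allow this id to be submitted again later
            }
        });
    }
}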
I would use queues and daemon worker threads that are always running and wait for something to arrive in the queue. This way it is guaranteed that only one worker works on a given request.
If you only want one thread to run, turn POOLSIZE down to 1, or use newSingleThreadExecutor.
I do not quite understand your second requirement: do you mean only one thread is allowed to run as the background task? If so, you could create another SingleThreadExecutor and use that for the background task. Then it would not make much sense to have POOLSIZE > 1, unless the work done in the background thread is very short compared to that done in the worker itself.
private static interface Request {}
private final int POOLSIZE = 10;
private final int QUEUESIZE = 1000;
BlockingQueue<Request> queue = new LinkedBlockingQueue<Request>(QUEUESIZE);
public void startWorkers() {
    ExecutorService threadPool = Executors.newFixedThreadPool(POOLSIZE);
    for (int i = 0; i < POOLSIZE; i++) {
        threadPool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    while (true) {
                        final Request request = queue.take(); // blocks until a request arrives
                        doStuffWithRequest(request);
                    }
                } catch (InterruptedException ex) {
                    // LOG
                    // Shutdown worker thread.
                }
            }
        });
    }
}
public void handleRequest(Request request) {
    if (!queue.offer(request)) {
        // Cancel request, queue is full;
    }
}
At startup time, startWorkers starts the workers (surprise!).
handleRequest handles requests coming from a webservice, servlet or whatever.
Of course you need to adapt "Request" and "doStuffWithRequest" to your need, and add some additional logic for shutdown etc.
We originally wrote our own utilities to handle this, but if you want the results memoised, then Guava's ComputingMap encapsulates the initialisation by one and only one thread (with other threads blocking and waiting for the result), and the memoisation.
It also supports various expiration strategies.
Usage is simple, you construct it with an initialisation function:
Map<String, Foo> cache = new MapMaker().makeComputingMap(new Function<String, Foo>() {
    public Foo apply(String key) {
        return … // init with expensive calculation
    }
});
and then just call it:
Foo foo = cache.get("key");
The first thread to ask for "key" will be the one that performs the initialisation.