I want to change the fixed thread pool size after already declaring it. How would I go about doing this? I tried this, but it doesn't work:
ExecutorService pool = Executors.newFixedThreadPool(3);
// pool has 21 tasks in its queue
// do some stuff
// the network sends a "change thread limit" signal when 12 tasks are left in the queue
// redeclare:
pool = Executors.newFixedThreadPool(4);
But it doesn't work; only 3 tasks continue to be executed at a time.
I think I know what the problem is: the executor keeps the 21 tasks in its queue and won't change the fixed thread pool size until the queue is cleared.
I don't want this. How would I make the change in thread pool size affect the entire queue? Would I need to reload the entire queue (and if so, how would I do that)?
Creating a new pool does not get rid of the old one; unless you shut it down, you now have two pools, one with three threads and one with four.
The simplest thing to do is to create a ThreadPoolExecutor directly (which is the implementing class here). This will allow you to change the core pool size and the maximum pool size.
// start with a pool with a min/max size of 3.
ThreadPoolExecutor executor = new ThreadPoolExecutor(3, 3, 60, TimeUnit.MINUTES, new LinkedBlockingQueue<Runnable>());
// set the min/max size to 4.
executor.setMaximumPoolSize(4);
executor.setCorePoolSize(4);
Another option is to make the pool size 4 from the start if you know that 4 is reasonable.
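To make the resizing concrete, here is a minimal, self-contained sketch of the approach above: queue up more work than the pool can run at once, then grow the pool while tasks are still waiting. The class name and the sleep-based tasks are illustrative placeholders, not part of the original question.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ResizePoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Start with a pool with a min/max size of 3 and an unbounded work queue.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                3, 3, 60, TimeUnit.MINUTES, new LinkedBlockingQueue<Runnable>());

        // Submit more work than the pool can run at once; the rest queues up.
        for (int i = 0; i < 21; i++) {
            executor.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            });
        }

        // Grow the pool while tasks are still queued: raise the maximum
        // before the core size so the two bounds never cross.
        executor.setMaximumPoolSize(4);
        executor.setCorePoolSize(4);

        System.out.println("core=" + executor.getCorePoolSize()
                + " max=" + executor.getMaximumPoolSize());

        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

Because the queue is a LinkedBlockingQueue, the resize takes effect immediately: a fourth worker is started to help drain the queued tasks.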
Related
I have been reading up on how the settings of Spring's ThreadPoolTaskExecutor work together and how the thread pool and queue work. This stackoverflow answer as well as this and this article from Baeldung have been useful to me.
As far as I understand so far, corePoolSize threads are kept alive at all times (assuming allowCoreThreadTimeOut is not set to true). If all of these threads are currently in use, any additional requests will be put on the queue. Once queueCapacity is reached, the thread pool size will be increased until maxPoolSize is reached.
Intuitively, I would have thought it would instead work as follows:
corePoolSize threads are kept alive at all times (again assuming allowCoreThreadTimeOut is not set to true). If all of these threads are currently in use and new requests come in, the pool size will be increased until maxPoolSize is reached. If there are then still more requests coming in, they will be put on the queue until queueCapacity is reached.
I wonder what would be the reasoning behind it working the way it is?
The first reference you should check is the documentation.
Right from the documentation for ThreadPoolExecutor (ThreadPoolTaskExecutor is "just" a wrapper):
A ThreadPoolExecutor will automatically adjust the pool size (see getPoolSize()) according to the bounds set by corePoolSize (see getCorePoolSize()) and maximumPoolSize (see getMaximumPoolSize()). When a new task is submitted in method execute(Runnable), if fewer than corePoolSize threads are running, a new thread is created to handle the request, even if other worker threads are idle. Else if fewer than maximumPoolSize threads are running, a new thread will be created to handle the request only if the queue is full. [...]
If the pool currently has more than corePoolSize threads, excess threads will be terminated if they have been idle for more than the keepAliveTime (see getKeepAliveTime(TimeUnit)). This provides a means of reducing resource consumption when the pool is not being actively used. If the pool becomes more active later, new threads will be constructed. [...]
(You haven't mentioned the parameter for the BlockingQueue, but I suggest you read about it as well. It's very interesting.)
Why do the parameters not work like you've suggested they should?
If the pool size were increased up to maximumPoolSize before tasks are queued (as you've proposed), you'd have one problem: you'd have removed the thread pool's ability to determine when a new worker is worth it.
The corePoolSize is the number of workers that stay in the pool. The benefit is that you don't have to create, terminate, create, terminate, create ... new workers for a given workload. If you can determine how much work there will always be, it's a smart idea to set the corePoolSize accordingly.
The maximumPoolSize determines the maximum number of workers in the pool. You want to have control over that, as you could have multiple thread pools, hardware restrictions, or just a specific program where you don't need as many workers.
Now why does the work queue get filled up first? Because the queue capacity is an indicator of when the amount of work is so high that it's worth creating new workers. As long as the queue is not full, the core workers are supposed to be enough to handle the given work. If the capacity is reached, then new workers are created to handle further work.
With this mechanism the thread pool dynamically creates workers when there is a need for them and only keeps as many workers as are usually needed. This is the point of a thread pool.
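The queue-before-threads rule can be observed directly. The sketch below (class name and latch-based blocker tasks are illustrative) builds a pool with core 1, max 3, and a bounded queue of capacity 2, then watches the worker count: a new thread appears only once the queue is full.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueBeforeThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        // core=1, max=3, queue capacity=2
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 3, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));

        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        };

        pool.execute(blocker);   // taken by the single core thread
        pool.execute(blocker);   // queued (1/2)
        pool.execute(blocker);   // queued (2/2)
        Thread.sleep(100);       // let the pool settle
        System.out.println("after 3 tasks: threads=" + pool.getPoolSize()
                + " queued=" + pool.getQueue().size());

        pool.execute(blocker);   // queue full -> a second worker is created
        Thread.sleep(100);
        System.out.println("after 4 tasks: threads=" + pool.getPoolSize()
                + " queued=" + pool.getQueue().size());

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

With three tasks the pool stays at one thread with two tasks queued; the fourth task finds the queue full and triggers a second worker, exactly as the documentation describes.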
In my application, I'm using
Executor executionContext = Executors.newFixedThreadPool(10);
to fix the pool at 10 threads.
I don't want to fix the number of threads; I want the number of threads to be dynamic, so the pool processes with however many threads it requires.
How can this be done?
The 10 in your question is the maximum size of the thread pool, not the number of threads created up front. You say that you want:
The number of threads it requires, it will process.
Then your code will work as you want: it will use as many threads as it needs until it reaches the pool's maximum. You can also choose what happens when it reaches the maximum by setting a rejection policy.
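This lazy behavior is easy to verify. In the sketch below (class name and blocker tasks are illustrative), a fixed pool of 10 starts with zero worker threads and grows only as tasks are submitted:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LazyFixedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        ThreadPoolExecutor impl = (ThreadPoolExecutor) pool;

        // No tasks submitted yet: no worker threads exist.
        System.out.println("before: " + impl.getPoolSize());

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        Thread.sleep(100);
        // Only as many workers as the tasks demanded, up to the cap of 10.
        System.out.println("after 3 tasks: " + impl.getPoolSize());

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

The pool reports 0 threads before any work and 3 threads after 3 concurrent tasks; it would stop growing at 10.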
I have a class that uses an ExecutorService to make URL requests so that they can run in parallel, and I limit it to a max pool size of 20:
private static ExecutorService getCachedPool(ThreadFactory threadFactory)
{
return new ThreadPoolExecutor(20, 20,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>(),
threadFactory);
}
However I use it as follows:
List<Future<ArtistCoverImage>> results = ArtistArtworkOnlineLookup.getExecutorService()
.invokeAll(lookups);
But if I make the call and lookups is larger than the available pool size, it fails. I don't understand this: the pool size only specifies the maximum threads that can be used, and it uses a SynchronousQueue rather than a BlockingQueue, so why aren't the extra lookups just added to the queue?
If I simply change the max pool size to Integer.MAX_VALUE,
private static ExecutorService getCachedPool(ThreadFactory threadFactory)
{
return new ThreadPoolExecutor(20, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>(),
threadFactory);
}
there is no problem, but then I'm creating rather more threads than I would like. This is a potential concern because I'm trying to improve performance on a low-powered black-box machine and want to minimize extra work.
There seems to be a misconception on your end regarding the capability/behavior of the queue class you picked. That queue isn't unbounded or non-blocking; quite the contrary.
The javadoc tells us:
New tasks submitted in method execute(java.lang.Runnable) will be rejected when the Executor has been shut down, and also when the Executor uses finite bounds for both maximum threads and work queue capacity, and is saturated.
But more importantly, the javadoc for SynchronousQueue says:
A blocking queue in which each insert operation must wait for a corresponding remove operation by another thread, and vice versa.
When you feed 20 long-running tasks into it, each one goes to a thread. When they are all still running as #21 comes in, the pool is fully used, and the queue immediately says: "I am full, too." Everything is saturated, and the new job isn't accepted any more.
Solution: pick a different kind of queue.
I want to know whether there is a way to increase the number of threads in a FixedThreadPool or ScheduledThreadPool when all the threads are in use.
For example: suppose we have a FixedThreadPool of size 5. When all the threads are in use, any additional task has to wait for a thread to become free. I need to avoid this; the number of threads should increase dynamically, without using a CachedThreadPool.
Is there any way of doing so?
Example2:
ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
I have limited the number of threads in example 2 above to 1. Now when this thread is busy doing some task, the other tasks have to wait, right?
Now I want this fixed number of threads to increase dynamically.
Is there any way of doing so?
There is actually only one class that implements ScheduledExecutorService: ScheduledThreadPoolExecutor. It has two parameters that define the number of threads it can have: the core pool size (or minimum size) and the maximum pool size.
The fixed thread pool factory method creates an instance of this class where the minimum and maximum are the same.
The cached thread pool factory method creates an instance with a minimum of 0 and no real maximum.
If you want a dynamic thread pool size, I suggest you set a high maximum and a minimum you think is appropriate. Whether you create the instance directly or use the fixed/cached thread pool factory methods doesn't make much difference.
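A minimal sketch of growing a scheduled pool at runtime (the class name is illustrative; note that a ScheduledThreadPoolExecutor runs on exactly corePoolSize threads, since its internal delay queue is unbounded and maximumPoolSize is therefore never consulted):

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class GrowScheduledPool {
    public static void main(String[] args) throws InterruptedException {
        // Start with a single scheduling thread.
        ScheduledThreadPoolExecutor execService = new ScheduledThreadPoolExecutor(1);
        System.out.println("core=" + execService.getCorePoolSize());

        // Grow the pool when more concurrency is needed; corePoolSize is
        // the knob that matters for this executor type.
        execService.setCorePoolSize(4);
        System.out.println("core=" + execService.getCorePoolSize());

        execService.shutdown();
        execService.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

So "dynamic" here means adjusting corePoolSize yourself when the load changes, rather than the pool discovering it on its own.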
ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
No. Here the ScheduledExecutorService will not spawn new threads.
You have to go for newCachedThreadPool instead.
I have a long-running process that listens to events and does some intense processing.
Currently I use Executors.newFixedThreadPool(x) to throttle the number of jobs that run concurrently, but depending on the time of day and various other factors, I would like to be able to dynamically increase or decrease the number of concurrent threads.
If I decrease the number of concurrent threads, I want the currently running jobs to finish nicely.
Is there a Java library that lets me control and dynamically increase or decrease the number of concurrent threads running in a thread pool? (The class must implement ExecutorService.)
Do I have to implement it myself?
Have a look at the following API in ThreadPoolExecutor:
public void setCorePoolSize(int corePoolSize)
Sets the core number of threads. This overrides any value set in the constructor.
If the new value is smaller than the current value, excess existing threads will be terminated when they next become idle.
If larger, new threads will, if needed, be started to execute any queued tasks.
Initialization:
ExecutorService service = Executors.newFixedThreadPool(5);
When needed, resize the thread pool using the API below:
((ThreadPoolExecutor) service).setCorePoolSize(newLimit); // newLimit is the new size of the pool
Important note:
If the queue is full and the new number of threads is greater than or equal to the maxPoolSize defined earlier, tasks will be rejected.
So set the values of maxPoolSize and corePoolSize appropriately.
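One practical wrinkle (my addition, not from the original answer): corePoolSize must never exceed maximumPoolSize, so when growing a fixed pool you should raise the maximum first, and when shrinking, lower the core first. A hypothetical helper sketching this ordering:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ResizeFixedPool {
    // Resize a fixed pool safely: raise the maximum before the core when
    // growing, and lower the core before the maximum when shrinking, so
    // corePoolSize never exceeds maximumPoolSize.
    static void resize(ThreadPoolExecutor pool, int newLimit) {
        if (newLimit > pool.getMaximumPoolSize()) {
            pool.setMaximumPoolSize(newLimit);
            pool.setCorePoolSize(newLimit);
        } else {
            pool.setCorePoolSize(newLimit);
            pool.setMaximumPoolSize(newLimit);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService service = Executors.newFixedThreadPool(5);
        ThreadPoolExecutor pool = (ThreadPoolExecutor) service;

        resize(pool, 8);
        System.out.println("grown: core=" + pool.getCorePoolSize()
                + " max=" + pool.getMaximumPoolSize());
        resize(pool, 2);
        System.out.println("shrunk: core=" + pool.getCorePoolSize()
                + " max=" + pool.getMaximumPoolSize());

        service.shutdown();
        service.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

When shrinking, the excess workers are not killed mid-task; as the documentation quoted above says, they are terminated when they next become idle, so running jobs finish nicely.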