I want to create something like an ExecutorService that starts with a single thread and, based on the workload given to it, gradually increases the thread count up to a certain limit, say 50 for example. I could not find any way to do that.
Is there any way to make this happen in Netty NIO?
Appreciate your help. Thanks.
You can use a ThreadPoolExecutor. You don't really add threads yourself; rather, the pool instantiates new threads as needed according to the load, so you start with the core threads and new threads are added automatically as needed up to maxThreads. Threads above the core count are terminated once they have been idle for the timeout period.
import java.util.concurrent.*;
....
ThreadPoolExecutor tpe = new ThreadPoolExecutor(core, maxThreads,
        timeout, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>(),
        threadFactory);
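For the scenario in the question (start with one thread and grow to 50 under load), filling in the parameters might look like this; the 60-second keep-alive and the default thread factory are just example choices:

ThreadPoolExecutor pool = new ThreadPoolExecutor(
        1, 50,                              // 1 core thread, up to 50 under load
        60L, TimeUnit.SECONDS,              // extra threads die after 60 s idle
        new SynchronousQueue<Runnable>(),   // direct hand-off: forces new threads when all are busy
        Executors.defaultThreadFactory());

pool.submit(() -> System.out.println("handled by " + Thread.currentThread().getName()));

Note that with a SynchronousQueue, a submission made while all 50 threads are busy is rejected unless you also supply a RejectedExecutionHandler.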
In Netty, ChannelHandlerContext.executor() returns Netty's EventExecutor. You can submit your Runnables to Netty's executor.
ctx.executor().submit(aRunnable);
If you only need to run work in response to an event, you might want to avoid creating your own executor service and rely on the executor Netty provides instead.
Related
I'm looking for a thread pool implementation that will not wait when no threads are immediately available in the pool, and will instead either fall back to serial execution or otherwise let the caller know that some of the scheduled tasks would have to wait until the pool gets back threads that are currently busy.
The net effect I'm trying to achieve is parallel execution when possible; when the thread pool is fully saturated, the remaining tasks should execute serially in the current thread, that is, where the scheduling attempt was made, without waiting for the pool to free up another thread.
While this seems doable, I still feel that doing it right would be a significant amount of work, perhaps mostly testing-wise: making sure that all corner cases are properly handled, and so on.
I'd like to avoid reinventing the wheel if this support is readily available in one form or another in one of the JDK's threading-related library classes.
If some other widely known library such as Guava or Apache Commons offers that functionality, I'd be curious to know that too.
You can achieve what you want with the standard ThreadPoolExecutor:
To stop tasks from queueing when no threads are available, use SynchronousQueue as the queue implementation.
To have the caller run the submitted task when no thread is available in the pool, use ThreadPoolExecutor.CallerRunsPolicy as the task rejection policy.
These two parameters can be set by calling the constructor:
ExecutorService executor = new ThreadPoolExecutor(corePoolSize, maximumPoolSize, keepAliveTime, timeUnit,
        new SynchronousQueue<>(), new ThreadPoolExecutor.CallerRunsPolicy());
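As a rough, self-contained illustration of the behaviour (the pool size of 2 and the sleep are arbitrary): once both pool threads are busy, the third task runs on the submitting thread.

import java.util.concurrent.*;

public class CallerRunsDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.SECONDS,
                new SynchronousQueue<>(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 3; i++) {
            executor.execute(() -> {
                // The third task prints the main thread's name: it was rejected by the
                // saturated pool and executed by the caller instead.
                System.out.println("running on " + Thread.currentThread().getName());
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}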
When my application launches, an executor service object (created with Executors.newFixedThreadPool(maxThreadNum) from java.util.concurrent) is created. When requests come in, the executor service creates threads to handle them.
Because creating threads at run time takes time, I want the threads to be available when the application launches, so that requests take less time to process when they arrive.
What I did is following:
executorService = Executors.newFixedThreadPool(200);
for (int i = 0; i < 200; i++) {
    executorService.execute(new Runnable() {
        @Override
        public void run() {
            System.out.println("Start thread in pool");
        }
    });
}
It creates 200 threads in the executorService pool when the application launches.
Just wonder is this a correct way of creating threads when application starts?
Or is there a better way of doing it?
You are missing shutdown(). It is very important to shut down the executor service once the operation is completed, so use a try/catch/finally block:
try {
    executorService.execute(...);
} catch (Exception e) {
    ...
} finally {
    executorService.shutdown(); // mandatory
}
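If you also need to wait for in-flight tasks to finish, a common pattern (the 60-second timeout is just an example) is to combine shutdown() with awaitTermination():

executorService.shutdown();                          // stop accepting new tasks
try {
    // Give running tasks up to 60 s to finish, then force-cancel the rest.
    if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
        executorService.shutdownNow();
    }
} catch (InterruptedException e) {
    executorService.shutdownNow();
    Thread.currentThread().interrupt();
}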
If you can use a ThreadPoolExecutor directly rather than an ExecutorService from Executors1, then there's perhaps a more standard/supported way to start all the core threads immediately.
int nThreads = 200;
ThreadPoolExecutor executor = new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
executor.prestartAllCoreThreads();
The above uses prestartAllCoreThreads().
Note that, currently, the implementation of Executors.newFixedThreadPool(int) creates a ThreadPoolExecutor in the exact same manner as above. This means you could technically cast the ExecutorService returned by the factory method to a ThreadPoolExecutor. There's nothing in the documentation that guarantees it will be a ThreadPoolExecutor, however.
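If you do decide to rely on that implementation detail anyway, a defensive sketch of the cast might look like this:

ExecutorService service = Executors.newFixedThreadPool(200);
if (service instanceof ThreadPoolExecutor) {
    // Works with the current JDK implementation, but the factory method only
    // promises an ExecutorService, so guard the cast.
    ((ThreadPoolExecutor) service).prestartAllCoreThreads();
}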
1. ThreadPoolExecutor implements ExecutorService but provides more functionality. Also, many of the factory methods in Executors either return a ThreadPoolExecutor directly or a wrapper that delegates to one. Some, like newWorkStealingPool, use a ForkJoinPool. Again, the concrete classes returned by these factory methods are implementation details, so don't rely on them too much.
The number of threads that can run in parallel depends on the number of processor cores you have. Unless you have 200 cores, it would be pretty pointless to make a thread pool of 200 threads.
A great way to find out how many processors cores you have is:
int cores = Runtime.getRuntime().availableProcessors();
Moreover, the overhead of creating a new thread and executing it is unavoidable, so unless the task is computationally heavy it is not worth creating a separate thread for it.
But all in all, your code is totally fine so far.
Your code is totally fine if it works for your scenario. Since we don't know your use case, only you can answer your question with enough tests and benchmark.
However, do take note that a thread pool may reclaim idle threads after some time (a fixed thread pool keeps its core threads unless allowCoreThreadTimeOut(true) is set). That may bite you if you don't pay attention to it.
Just wonder is this a correct way of creating threads when application starts?
Yes. That's a correct way of creating threads.
Or is there a better way of doing it?
Maybe. Under some workloads you might want to use a Thread pool with a variable number of threads (unlike the one created by newFixedThreadPool) - one that removes from the pool threads that have been idle for some time.
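A sketch of such a pool (all sizes and the keep-alive value here are illustrative): the core threads stay alive, and the extra threads are reclaimed once they have been idle for a while.

// Keeps 10 threads around, grows to 200 under load, and reclaims the extra
// threads after 30 s of idleness.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        10, 200,
        30L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>());
pool.prestartAllCoreThreads();   // optional: warm up the core threads at startup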
When should one use singleThreadExecutor in java? Also, when should one use cachedThreadpool?
It is stated both in the documentation and in books that singleThreadExecutor is preferred over fixedThreadPool(1) because, unlike the latter, it does not allow the number of threads to be changed, but what are the scenarios in which it is advisable to use singleThreadExecutor?
newSingleThreadExecutor() is good when you know that one additional thread doing jobs in the background is enough in your case (meaning there won't be lots of jobs waiting in the queue), and you don't need or want to extend Thread or implement Runnable and do all the job hand-off work yourself. Tasks are guaranteed to execute sequentially, which can also be useful if you know that parallel task execution could cause a deadlock or a data race.
newCachedThreadPool() - just look at the source code:
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
It creates new threads on demand and keeps idle threads around for no longer than one minute. And, as the docs say:
.. These pools will typically improve the performance of programs that
execute many short-lived asynchronous tasks. Calls to execute
will reuse previously constructed threads if available. If no existing
thread is available, a new thread will be created and added to the
pool. ..
But there is no upper bound on the number of threads, so I would prefer to construct the pool by hand with a maximumPoolSize much smaller than Integer.MAX_VALUE, e.g. 128.
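A hand-built variant along those lines (128 is just the example bound from above; the rest mirrors newCachedThreadPool()):

// Same shape as newCachedThreadPool(), but capped at 128 threads.
// With a SynchronousQueue, submissions made while all 128 threads are busy are
// rejected, so you may also want a RejectedExecutionHandler such as CallerRunsPolicy.
ExecutorService boundedCachedPool = new ThreadPoolExecutor(
        0, 128,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>());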
Does it make sense to use a thread pool with a pool size of just 1, basically to recycle that one thread over and over again for different uses in the application? Rather than doing new Thread(runnable) etc. and then letting the garbage collector handle the removal of the thread, I thought it would be more efficient to just use that one thread for different jobs that don't need to run together.
This is what I am currently doing to define a thread pool with a pool size of 1:
private static int poolSize = 1;
private static int maxPoolSize = 1;
private static long keepAliveTime= 10;
private static final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(100);
private static ThreadPoolExecutor threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize, keepAliveTime, TimeUnit.SECONDS, queue);
There is nothing wrong with a single-threaded thread pool if it fits with how your application should function. For example, in an application I work on we have a number of services where we need to ensure that data is strictly processed in order of arrival. To do this we simply execute tasks on a single-threaded executor.
Also, using Executors means that it is easy to adjust the thread pool parameters in the future if you need to.
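A minimal sketch of that pattern (the loop and message numbering are made up): tasks submitted to a single-threaded executor run one at a time, in submission order.

ExecutorService ordered = Executors.newSingleThreadExecutor();
for (int i = 0; i < 5; i++) {
    final int messageId = i;
    // Each task runs only after the previous one has finished.
    ordered.execute(() -> System.out.println("processing message " + messageId));
}
ordered.shutdown();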
With new Thread(Runnable) you can execute N threads concurrently. It may be an advantage, but it also may bring synchronization issues.
With reusing one Thread you lose the ability to execute tasks in parallel, but you are spared the sync/concurrency issues.
Defining a one-thread pool this way is perfectly compatible with modern coding standards. Its only drawback is that it does not let you parallelize any part of the code. However, I guess that's what you wanted.
One advantage of using the ThreadPoolExecutor is that once the thread is created it gets reused, instead of a new Thread being created every time as with new Thread.
Have you tried it without a thread? Threads are not efficient unless they are really needed and you have a lot of I/O-specific work to do in parallel. If what you are looking for is a simple internal message queue, then it is fine.
I am using ExecutorService for ease of concurrent multithreaded program. Take following code:
while (xxx) {
    ExecutorService exService = Executors.newFixedThreadPool(NUMBER_THREADS);
    ...
    Future<..> ... = exService.submit(..);
    ...
}
In my case the problem is that submit() does not block when all NUMBER_THREADS are occupied. The consequence is that the task queue gets flooded with tasks, which in turn means that shutting down the executor service with ExecutorService.shutdown() takes ages (ExecutorService.isTerminated() stays false for a long time), because the task queue is still quite full.
For now my workaround is to use a semaphore to prevent too many entries from piling up in the task queue of the ExecutorService:
...
Semaphore semaphore = new Semaphore(NUMBER_THREADS);
while (xxx) {
    ExecutorService exService = Executors.newFixedThreadPool(NUMBER_THREADS);
    ...
    semaphore.acquire();
    // internally the task calls a finish callback, which invokes semaphore.release()
    // -> now another task is added to the queue
    Future<..> ... = exService.submit(..);
    ...
}
I am sure there is a better more encapsulated solution?
The trick is to use a fixed queue size and:
new ThreadPoolExecutor.CallerRunsPolicy()
I also recommend using Guava's ListeningExecutorService.
Here is an example with consumer/producer queues:
private ListeningExecutorService producerExecutorService = MoreExecutors.listeningDecorator(newFixedThreadPoolWithQueueSize(5, 20));
private ListeningExecutorService consumerExecutorService = MoreExecutors.listeningDecorator(newFixedThreadPoolWithQueueSize(5, 20));

private static ExecutorService newFixedThreadPoolWithQueueSize(int nThreads, int queueSize) {
    return new ThreadPoolExecutor(nThreads, nThreads,
            5000L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(queueSize, true), new ThreadPoolExecutor.CallerRunsPolicy());
}
If you need anything better than that, you might want to consider an MQ like RabbitMQ or ActiveMQ, as they have QoS technology.
A true blocking ThreadPoolExecutor has been on the wishlist of many; there's even a JDC bug opened on it.
I'm facing the same problem, and came across this:
http://today.java.net/pub/a/today/2008/10/23/creating-a-notifying-blocking-thread-pool-executor.html
It's an implementation of a BlockingThreadPoolExecutor, built around a rejection policy that uses offer() to add the task to the queue, waiting for the queue to have room. It looks good.
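The core idea can be sketched in a few lines (this is an illustrative handler, not the article's exact code): a RejectedExecutionHandler that blocks the submitting thread until the bounded queue has room.

import java.util.concurrent.*;

class BlockWhenFullPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable task, ThreadPoolExecutor executor) {
        if (executor.isShutdown()) {
            throw new RejectedExecutionException("Executor has been shut down");
        }
        try {
            executor.getQueue().put(task);   // blocks until the queue has room
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("Interrupted while waiting to enqueue", e);
        }
    }
}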
You can call ThreadPoolExecutor.getQueue().size() to find out the size of the waiting queue. If the queue is too long, you can take action; I suggest running the task in the current thread to slow down the producer (if that is appropriate).
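That suggestion could look something like this (threadPoolExecutor and task are placeholder names, and the threshold of 100 is arbitrary):

// If too much work is already queued, run the task in the producer's thread
// to slow the producer down; otherwise hand it to the pool.
if (threadPoolExecutor.getQueue().size() > 100) {
    task.run();
} else {
    threadPoolExecutor.execute(task);
}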
You're better off creating the ThreadPoolExecutor yourself (which is what Executors.newXXX() does anyway).
In the constructor, you can pass in a BlockingQueue for the Executor to use as its task queue. If you pass in a size constrained BlockingQueue (like LinkedBlockingQueue), it should achieve the effect you want.
ExecutorService exService = new ThreadPoolExecutor(NUMBER_THREADS, NUMBER_THREADS, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(workQueueSize));
You can add another blocking queue with a limited size to control the size of the internal queue in the ExecutorService; it works a bit like a semaphore, but is very simple.
Before submitting to the executor you put() into it, and when the task completes you take() from it. The take() must be inside the task's code.
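Spelled out, that idea might look like this (all names and sizes are illustrative): the bounded queue acts purely as a throttle, so at most 10 tasks are queued or running at once.

import java.util.concurrent.*;

public class ThrottledSubmit {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Object> throttle = new ArrayBlockingQueue<>(10);
        ExecutorService exService = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 100; i++) {
            throttle.put(new Object());          // blocks the producer while 10 tasks are in flight
            exService.submit(() -> {
                try {
                    // ... the actual work ...
                } finally {
                    try {
                        throttle.take();         // free a slot once the task is done
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        exService.shutdown();
    }
}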
I know this is old, but it might be useful for other developers, so I'm submitting one possible solution.
Since you asked for a better encapsulated solution: it can be done by extending ThreadPoolExecutor and overriding the submit method.
A BoundedThreadPoolExecutor can be implemented using a Semaphore. The Java executor service throws RejectedExecutionException when the task queue becomes full, and using an unbounded queue may result in an out-of-memory error. Both can be avoided by controlling the number of tasks submitted to the executor service, either with a semaphore or by implementing a RejectedExecutionHandler.
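A stripped-down sketch of that idea (the class and field names are made up, and it overrides execute(), which submit() delegates to, rather than submit() itself):

import java.util.concurrent.*;

class BoundedThreadPoolExecutor extends ThreadPoolExecutor {
    private final Semaphore permits;

    BoundedThreadPoolExecutor(int nThreads, int bound) {
        super(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        this.permits = new Semaphore(bound);
    }

    @Override
    public void execute(Runnable task) {
        try {
            permits.acquire();               // block the submitter while the bound is reached
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("Interrupted while waiting to submit", e);
        }
        try {
            super.execute(task);
        } catch (RuntimeException | Error e) {
            permits.release();               // submission failed, give the permit back
            throw e;
        }
    }

    @Override
    protected void afterExecute(Runnable task, Throwable t) {
        super.afterExecute(task, t);
        permits.release();                   // task finished, allow another submission
    }
}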