Difference between ThreadPoolTaskExecutor and ThreadPoolExecutor - Java

I want to know the main difference between ThreadPoolTaskExecutor and ThreadPoolExecutor. Which one should I choose and why?

Have a look at the documentation for each to understand the differences clearly.
ThreadPoolExecutor
An ExecutorService that executes each submitted task using one of possibly several pooled threads, normally configured using Executors factory methods.
Thread pools address two different problems: they usually provide improved performance when executing large numbers of asynchronous tasks, due to reduced per-task invocation overhead, and they provide a means of bounding and managing the resources, including threads, consumed when executing a collection of tasks.
ThreadPoolTaskExecutor
JavaBean that allows for configuring a ThreadPoolExecutor in bean style (through its "corePoolSize", "maxPoolSize", "keepAliveSeconds", "queueCapacity" properties) and exposing it as a Spring TaskExecutor.
This class is also well suited for management and monitoring (e.g. through JMX), providing several useful attributes: "corePoolSize", "maxPoolSize", "keepAliveSeconds" (all supporting updates at runtime); "poolSize", "activeCount" (for introspection only).

They're basically identical in terms of functionality. The difference is whether you want to initialise it through a constructor (recommended if created in Java code) or through setters (recommended if created in Spring).
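A minimal sketch of both styles (the pool sizes and queue capacity below are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class PoolConfigExamples {

    // Plain Java: everything is passed to the constructor up front.
    static ExecutorService plainJavaPool() {
        return new ThreadPoolExecutor(
                5, 10, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));
    }

    // Spring: bean-style configuration through setters, then initialize().
    static ThreadPoolTaskExecutor springPool() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(10);
        executor.setKeepAliveSeconds(60);
        executor.setQueueCapacity(100);
        executor.initialize();
        return executor;
    }
}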

Related

Java support for three different concurrency models

I am going through the different concurrency models in a multi-threading environment (http://tutorials.jenkov.com/java-concurrency/concurrency-models.html).
The article highlights three concurrency models.
Parallel Workers
The first concurrency model is what I call the parallel worker model. Incoming jobs are assigned to different workers.
Assembly Line
The workers are organized like workers at an assembly line in a factory. Each worker only performs a part of the full job. When that part is finished the worker forwards the job to the next worker.
Each worker is running in its own thread, and shares no state with other workers. This is also sometimes referred to as a shared nothing concurrency model.
Functional Parallelism
The basic idea of functional parallelism is that you implement your program using function calls. Functions can be seen as "agents" or "actors" that send messages to each other, just like in the assembly line concurrency model (AKA reactive or event driven systems). When one function calls another, that is similar to sending a message.
Now I want to map Java API support to these three concepts:
Parallel Workers: Is it the ExecutorService, ThreadPoolExecutor, CountDownLatch APIs?
Assembly Line: Sending an event to a messaging system like JMS and using its messaging concepts of queues and topics.
Functional Parallelism: ForkJoinPool to some extent, and Java 8 streams. ForkJoinPool is easier to understand than streams.
Am I correct in mapping these concurrency models? If not please correct me.
Each of those models describes how the work is done/split from a general point of view, but when it comes to implementation, it really depends on your exact problem. Generally I see it like this:
Parallel Workers: a producer creates new jobs somewhere (e.g. in a BlockingQueue) and many threads (via an ExecutorService) process those jobs in parallel. Of course, you could also use a CountDownLatch, but that means you want to trigger an action after exactly N subproblems have been processed (e.g. you know your big problem can be split into N smaller problems, check the second example here).
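A minimal sketch of the parallel-worker idea (thread count and job count are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelWorkers {
    public static void main(String[] args) {
        // Four workers; each one processes a whole job on its own, sharing nothing.
        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int jobId = 0; jobId < 20; jobId++) {
            final int id = jobId;
            workers.submit(() -> System.out.println(
                    Thread.currentThread().getName() + " processed job " + id));
        }
        workers.shutdown();
    }
}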
Assembly Line: for every intermediate step, you have a BlockingQueue and one Thread or an ExecutorService. On each step the jobs are taken from one BlockingQueue and put into the next one, to be processed further. Regarding your idea with JMS: JMS is meant to connect distributed components, is part of Java EE, and was not designed for highly concurrent contexts (messages are usually kept on disk before being processed).
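A rough sketch of such a pipeline with two stages connected by a BlockingQueue (stage names, queue capacity and job count are invented here):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AssemblyLine {
    public static void main(String[] args) {
        BlockingQueue<String> parsed = new ArrayBlockingQueue<>(100);

        ExecutorService stage1 = Executors.newSingleThreadExecutor();
        ExecutorService stage2 = Executors.newSingleThreadExecutor();

        // Stage 1: does its part of the job and forwards the result to the next queue.
        stage1.submit(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    parsed.put("job-" + i + "-parsed");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Stage 2: picks up where stage 1 left off and finishes the job.
        stage2.submit(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println(parsed.take() + "-stored");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        stage1.shutdown();
        stage2.shutdown();
    }
}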
Functional Parallelism: ForkJoinPool is a good example of how you could implement this.
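For instance, a small RecursiveTask that sums an array (the threshold of 1,000 elements is arbitrary):

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private final long[] values;
    private final int from;
    private final int to;

    SumTask(long[] values, int from, int to) {
        this.values = values;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= 1_000) {                  // small enough: sum directly
            long sum = 0;
            for (int i = from; i < to; i++) {
                sum += values[i];
            }
            return sum;
        }
        int mid = (from + to) / 2;                 // otherwise: fork into two subtasks
        SumTask left = new SumTask(values, from, mid);
        SumTask right = new SumTask(values, mid, to);
        left.fork();
        return right.compute() + left.join();
    }

    public static void main(String[] args) {
        long[] values = new long[10_000];
        Arrays.fill(values, 1L);
        System.out.println(new ForkJoinPool().invoke(new SumTask(values, 0, values.length)));
    }
}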
An excellent question to which the answer might not be quite as satisfying. The concurrency models listed show some of the ways you might want to go about implementing a concurrent system. The API provides tools for implementing any of these models.
Let's start with ExecutorService. It allows you to submit tasks to be executed in a non-blocking way. The ThreadPoolExecutor implementation then limits the maximum number of threads available. The ExecutorService does not require the task to perform the complete process as you might expect of a parallel worker. The task may be limited to a specific part of the process and send a message upon completion that starts the next step in an assembly line.
The CountDownLatch and the ExecutorService provide a means to block until all workers have completed, which may come in handy if a certain process has been divided into different concurrent sub-tasks.
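A minimal sketch of that blocking pattern (the number of sub-tasks is arbitrary):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        int subTasks = 3;
        CountDownLatch latch = new CountDownLatch(subTasks);
        ExecutorService pool = Executors.newFixedThreadPool(subTasks);
        for (int i = 0; i < subTasks; i++) {
            pool.submit(() -> {
                // ... do one part of the divided process ...
                latch.countDown();
            });
        }
        latch.await();                       // blocks until every sub-task has counted down
        System.out.println("all sub-tasks done");
        pool.shutdown();
    }
}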
The point of JMS is to provide a means for messaging between components. It does not enforce a specific model for concurrency. Queues and topics denote how a message is sent from a publisher to a subscriber. When you use queues the message is sent to exactly one subscriber. Topics on the other hand broadcast the message to all subscribers of the topic.
Similar behavior could be achieved within a single component by for example using the observer pattern.
ForkJoinPool is actually one implementation of ExecutorService (which might highlight the difficulty of matching a model to an implementation detail). It just happens to be optimized for working with a large number of small tasks.
Summary: There are multiple ways to implement a certain concurrency model in the Java environment. The interfaces, classes and frameworks used in implementing a program may vary regardless of the concurrency model chosen.
The actor model is another example of an assembly line, e.g. Akka.

Shared Objects Design Pattern

I am a bit confused about how to solve the following problem:
I have a big (Java SE) application, which is based on the producer-consumer model and works mostly multithreaded. E.g. 10 threads are fetching messages and 40 threads are consuming messages. Now I have objects which need to be shared across all threads, like a ThreadPoolExecutor. Pseudo code:
ExecutorService execService =
    new ThreadPoolExecutor(10, 10, 1, TimeUnit.SECONDS, some_queue);
execService.submit(new Consumer(sharedEntityManagerFactory));
These consumer threads submit every fetched message to another ThreadPoolExecutor, which has threads to process this message.
Now my question is: how do I effectively share objects across all threads (for example an EntityManagerFactory object, which I think is supposed to be a singleton, for DataAccessObjects)? That's only an example; it could also be a simple list, or a more complex POJO.
Would a possible (good) solution be to do this with dependency injection (Java SE)? As far as I know it would be a great solution, but the objects are only created once, and the threads only hold a reference, not a truly new object.
The details vary, based on the dependency injection library you plan to use. But most/all of them let you specify that an injected object is a singleton, that is: the library will only create it once, and the same instance will be injected into all the clients.
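For example, with Guice (just one possible DI library; the class names below are made up for illustration), the sketch would look roughly like this:

import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;
import com.google.inject.Singleton;

// Marking the shared object as a singleton: the container creates it exactly once
// and hands the same instance to every consumer that asks for it.
@Singleton
class SharedEntityManagerFactory {
    // ... would wrap the real EntityManagerFactory ...
}

class MessageConsumer implements Runnable {
    private final SharedEntityManagerFactory factory;

    @Inject
    MessageConsumer(SharedEntityManagerFactory factory) {
        this.factory = factory;    // same instance in every MessageConsumer
    }

    @Override
    public void run() {
        // ... create an EntityManager per unit of work from the shared factory ...
    }
}

class Main {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector();
        MessageConsumer a = injector.getInstance(MessageConsumer.class);
        MessageConsumer b = injector.getInstance(MessageConsumer.class);
        // Two distinct consumers, but both hold a reference to the identical factory.
        System.out.println(a != b);
    }
}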

How to use ObjectPool in detached thread?

The principle of object pooling is very interesting.
To me it seems it can't really pay off without multi-threaded execution.
For example, I tried the library furious-objectpool.
Debugging shows that the create/passivate methods are executed in the same request thread. How could I take advantage of this principle by using the pool in another thread?
Object pools are rather discouraged in Java. They are quite an expensive concept, usually way more expensive than just creating an object (the new operator requires ~10 instructions, while acquire/release in pools typically needs MUCH more).
Also, such long-lived objects in Java tend to get in the way of the GC being able to clean up resources.
I would really encourage you to use some DI container with some nice stateless beans. It is both super fast (usually only 1 object per type) and nicely manageable.
However, if you really need to use a pool, make sure that you use it for an object that has a very expensive construction process - typically some sort of network connections (database connections are the most common example).
As for the other-thread question: such pools are always thread-safe (otherwise, what would be the point?). A typical usage scenario involves some sort of server (like a REST service) that accepts and executes plenty of user requests per minute.
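I can't speak for furious-objectpool's API specifically, but with any thread-safe pool (Apache Commons Pool 2 used here as a stand-in) the usage from worker threads looks roughly like this:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

public class PoolAcrossThreads {

    // Stand-in for an expensive-to-create resource (e.g. a connection).
    static class Connection { }

    public static void main(String[] args) {
        GenericObjectPool<Connection> pool = new GenericObjectPool<>(
                new BasePooledObjectFactory<Connection>() {
                    @Override
                    public Connection create() {
                        return new Connection();
                    }

                    @Override
                    public PooledObject<Connection> wrap(Connection c) {
                        return new DefaultPooledObject<>(c);
                    }
                });

        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            workers.submit(() -> {
                try {
                    Connection c = pool.borrowObject();   // safe from any thread
                    try {
                        // ... use the connection ...
                    } finally {
                        pool.returnObject(c);             // always hand it back
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        workers.shutdown();
    }
}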
Edit:
And please: don't use a technology or library just because it looks cool. More often than not it will bring you trouble in the long run.

Resizeable ListeningExecutorService

I've gotten into the habit of wrapping ExecutorServices in a listeningDecorator to make a ListeningExecutorService. I understand this is the Guava team's recommendation, and it seems to always be worthwhile.
I am running into an issue here, however. My executors are invariably based on a standard ThreadPoolExecutor, and I would like to give control of that thread pool's size to my application (and, particularly, to expose it to administrators supporting the application). With an undecorated ThreadPoolExecutor, the methods needed to do this are exposed, but the wrapper is hiding the delegate from me.
So, what would I need to do to get back to the API exposed by the ThreadPoolExecutor without giving up the listeningDecorator?
A couple of ideas I had were:
Make a new ListeningDecorator that exposes the delegate
Keep a reference to the delegate as well as to the decorated Executor
Only keep a reference to the ThreadPoolExecutor, and wrap it as an ExecutorService only when it is requested
Reflect my way in to the delegate and manipulate the thread pool size from there
Guava team member here.
I would write a new ListeningThreadPoolExecutor class that's basically a ListeningDecorator variant wrapping a ThreadPoolExecutor, but instead of exposing the delegate itself, I'd expose methods like setCorePoolSize(int size) from the ListeningThreadPoolExecutor that forward to the delegate ThreadPoolExecutor.
That approach exposes even fewer internal details than option 1, but failing that, I'd fall back to option 1 as you've described it.
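A rough sketch of what such a class could look like (the class name, constructor parameters and queue choice are illustrative, not part of Guava):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import com.google.common.util.concurrent.ForwardingListeningExecutorService;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

// A ListeningExecutorService that keeps the raw ThreadPoolExecutor hidden
// but forwards just the sizing knobs an administrator needs.
public class ListeningThreadPoolExecutor extends ForwardingListeningExecutorService {

    private final ThreadPoolExecutor threadPool;
    private final ListeningExecutorService listeningDelegate;

    public ListeningThreadPoolExecutor(int corePoolSize, int maxPoolSize) {
        this.threadPool = new ThreadPoolExecutor(
                corePoolSize, maxPoolSize, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        this.listeningDelegate = MoreExecutors.listeningDecorator(threadPool);
    }

    @Override
    protected ListeningExecutorService delegate() {
        return listeningDelegate;
    }

    // Only the pool-size controls are exposed, never the delegate itself.
    public void setCorePoolSize(int size) {
        threadPool.setCorePoolSize(size);
    }

    public void setMaximumPoolSize(int size) {
        threadPool.setMaximumPoolSize(size);
    }

    public int getPoolSize() {
        return threadPool.getPoolSize();
    }
}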

How to manage executors

It's not infrequent in my practice that software I develop grows big and complex, and various parts of it use executors in their own way. From the performance point of view it would be better to use a different thread pool configuration in each part. But from the maintainability and code-usability points of view it would be preferable if all things related to threads, concurrency and CPU utilization were kept and configured in one centralized place.
Having each class which needs some concurrent execution or scheduling create its own thread pool is not OK, because it is hard to control their life-cycles and overall number of threads.
Creating some kind of ExecutorManager and passing one thread pool around the application is not OK either, because, depending on the type of task and the submission rate, an inappropriately configured combination of work queue and thread pool size can really harm performance.
So the question is: are there some common approaches that address this issue?
I would create 2 or 3 thread pools that can be configured differently depending on the tasks they execute; if there are more than 3 different kinds of concurrent actions, you have a bigger problem.
The pools can be injected where needed (e.g. by name). Additionally, I would create an annotation to execute a defined method with a specific pool/executor using AOP (e.g. AspectJ).
The annotation resolver should have access to all the pools/executors and submit the task using the one specified in the annotation.
For example:
@Concurrent("pool1")
public void taskOfTypeOne() {
}

@Concurrent("pool2")
public void taskOfTypeTwo() {
}
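For completeness, a rough sketch of what the annotation and the resolver behind it could look like (names are illustrative, assuming Spring AOP or annotation-style AspectJ):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Map;
import java.util.concurrent.ExecutorService;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// The annotation simply names the pool a method should run on.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Concurrent {
    String value();
}

// The resolver intercepts annotated methods and submits them to the named pool.
@Aspect
class ConcurrentAspect {

    private final Map<String, ExecutorService> pools;  // all pools, keyed by name, injected once

    ConcurrentAspect(Map<String, ExecutorService> pools) {
        this.pools = pools;
    }

    @Around("@annotation(concurrent)")
    public Object runInPool(ProceedingJoinPoint joinPoint, Concurrent concurrent) {
        // Fire-and-forget: the annotated (void) method runs asynchronously on its pool.
        pools.get(concurrent.value()).submit(() -> {
            try {
                joinPoint.proceed();
            } catch (Throwable t) {
                throw new RuntimeException(t);
            }
        });
        return null;
    }
}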
What you are looking for is Dependency Injection, or Inversion of Control. One of the most popular DI frameworks for Java is Spring. You build ordinary Java objects and wire them together either with specific annotations or by configuring them in XML. This way, you can configure your different ExecutorService instances in one place, and request that they be injected (possibly by name) into the client classes which need them.
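A minimal sketch of that idea with Spring Java config (bean names and pool sizes invented for the example):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

// All pools are declared and sized in one central place.
@Configuration
class ExecutorConfig {

    @Bean(name = "ioPool", destroyMethod = "shutdown")
    ExecutorService ioPool() {
        return Executors.newFixedThreadPool(40);   // many threads for blocking I/O work
    }

    @Bean(name = "cpuPool", destroyMethod = "shutdown")
    ExecutorService cpuPool() {
        return Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    }
}

// A client class simply asks for the pool it needs, by name.
@Service
class MessageFetcher {

    private final ExecutorService pool;

    MessageFetcher(@Qualifier("ioPool") ExecutorService pool) {
        this.pool = pool;
    }

    void fetchAsync() {
        pool.submit(() -> {
            // ... fetch messages ...
        });
    }
}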
