What is the difference between ManagedExecutorService and ExecutorService in Java?

I have a requirement to submit tasks to an executor service in my WildFly Java EE application.
The current code is as follows:
ExecutorService jobExecutorService = Executors.newSingleThreadExecutor();
jobExecutorService.submit(new Task(request));
On each request, the same piece of code runs and submits the task to the single-threaded executor.
But I am not sure whether the newly created thread is managed by the container, or whether this is the correct way to submit tasks for an async flow in a Java EE application.
If I need to start a thread that is managed by the container, do I need to use ManagedExecutorService, or is there some other implementation?
I need some guidance on this.

To answer the question in the title:
ManagedExecutorService is part of the Java EE specification while ExecutorService is part of the Java SE specification.
The main difference between these two interfaces is that ManagedExecutorService is essentially a container-managed version of an ExecutorService.
Since you should not spawn any unmanaged threads in a Java EE environment, you should only use the managed variants there, while the unmanaged ones are perfectly fine for Java SE applications.
The proper way to get a ManagedExecutorService in a Java EE application is to inject it with the @Resource annotation:
@Resource
ManagedExecutorService managedExecutorService;
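A minimal sketch of how the injected executor could then be used for the task from the question (the surrounding stateless EJB and the Request type are assumed for illustration; Task comes from the question):
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedExecutorService;

// Illustrative EJB; the container injects the default managed executor
// (java:comp/DefaultManagedExecutorService in Java EE 7).
@Stateless
public class JobSubmitter {

    @Resource
    private ManagedExecutorService managedExecutorService;

    public void submitJob(Request request) {
        // Runs on a container-managed thread instead of one created
        // via Executors.newSingleThreadExecutor().
        managedExecutorService.submit(new Task(request));
    }
}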

ExecutorService doesn't need any web container, whereas ManagedExecutorService is used in the context of an application deployed to an application server, where thread pools are created and their life cycles are maintained by the container.

Related

Executor service shutdown is not supported

I am using the executor service provided by IBM WebSphere 8.5.5:
ExecutorService es = (ExecutorService) new InitialContext().lookup("wm/default");
When I call the es.shutdown() method, I get the error:
java.lang.IllegalStateException: ASYN0093E: The operation shutdown is not supported.
Why does WebSphere not support the shutdown method? Should I not call that method?
WebSphere Application Server rejects the shutdown method in order to comply with the following requirement of the Concurrency Utilities for Java EE Specification, Section 3.1.6 (Lifecycle), which states:
The lifecycle of ManagedExecutorService instances are centrally managed by the application server and cannot be changed by an application.
And more explicitly, Section 3.1.6.1 (Java EE Product Provider Requirements) states:
The lifecycle of a ManagedExecutorService is managed by an application server. All lifecycle operations on the ManagedExecutorService interface will throw a java.lang.IllegalStateException exception. This includes the following methods that are defined in the java.util.concurrent.ExecutorService interface: awaitTermination(), isShutdown(), isTerminated(), shutdown(), and shutdownNow().
It seems likely this requirement exists to prevent applications from interfering with each other when both use the same executor.
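In practice this means an application should only submit work to the managed executor and leave lifecycle management to the server; a small illustrative sketch (class and resource names are only for illustration):
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;

public class ShutdownExample {

    // Container-provided managed executor.
    @Resource
    private ManagedExecutorService executor;

    public void doWork(Runnable work) {
        // Submitting work is allowed.
        executor.submit(work);

        // Lifecycle operations are reserved for the container; per the spec
        // they throw IllegalStateException (ASYN0093E on WebSphere):
        // executor.shutdown();
    }
}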

Is EJB @Schedule synchronous or asynchronous?

As @BalusC has explained in Spawning threads in a JSF managed bean for scheduled tasks using a timer:
EJB available? Use @Schedule
If you target Java EE 6 or newer (e.g. JBoss AS, GlassFish, TomEE, etc., and thus not a barebones JSP/Servlet container such as Tomcat), then use a @Singleton EJB with a @Schedule method instead. This way the container will worry itself about pooling and destroying threads via ScheduledExecutorService.
So I am curious to know whether, by using @Schedule, the background process will run asynchronously on container-managed threads (magically), or whether it is like a java.util.Timer, which creates a single thread on which all processing runs.
If @Schedule creates only a single thread just to manage the scheduler, would it be safe to use a further ScheduledExecutorService within the @Schedule method, with that ScheduledExecutorService running further runnable tasks on multiple threads?
I have a long-running process involving file manipulation, data processing and email generation; should I really rely only on this single @Schedule annotation without using any executor services or creating a further thread pool? BTW, I am using GlassFish.
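No answer to this sub-question is included above, but a common approach (a sketch only; class and member names are hypothetical) is to keep the @Schedule method short and hand the long-running work to a ManagedExecutorService, so the container manages both the timer and the worker threads:
import javax.annotation.Resource;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.enterprise.concurrent.ManagedExecutorService;

// Sketch: the container fires the timer; the heavy work runs on managed threads.
@Singleton
public class NightlyBatch {

    @Resource
    private ManagedExecutorService executor;

    @Schedule(hour = "2", minute = "0", persistent = false)
    public void trigger() {
        // Each long-running step is submitted as its own task.
        executor.submit(this::processFiles);
        executor.submit(this::generateEmails);
    }

    private void processFiles() { /* file manipulation and data processing */ }

    private void generateEmails() { /* email generation */ }
}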

Controlling number of Threads for ManagedExecutorServices / Java EE 7

In Java SE one can use constructs like
ExecutorService es1 = Executors.newSingleThreadExecutor();
ExecutorService es2 = Executors.newFixedThreadPool(10);
to control the number of threads available to the executor service. In Java EE 7 it's possible to inject executor services:
@Resource
private ManagedExecutorService mes;
But how can I control the number of threads available to the managed executor service? For example, in the application I'm writing, there is an executor service that has to run its tasks on a single thread, so I can't just let the platform choose its preferred number of threads.
Actually, this should be configured in the server settings, through the admin console (in GlassFish, for example), or when creating the service:
asadmin create-managed-executor-service --corepoolsize=10 --maximumpoolsize=20 concurrent/mes
See Create ManagedExecutorService, ManagedScheduledExecutorService, ManagedThreadFactory, ContextService in GlassFish 4.
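Once the resource exists, the application can inject that specific executor by its JNDI name instead of the default one (a sketch; the lookup name matches the asadmin example above, and for the single-thread case the same command with --corepoolsize=1 --maximumpoolsize=1 would apply):
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;

public class SingleLaneService {

    // Binds to the executor created with asadmin above rather than
    // java:comp/DefaultManagedExecutorService.
    @Resource(lookup = "concurrent/mes")
    private ManagedExecutorService mes;
}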

How to reliably kill @Scheduled threads across servers?

I'm building a plugin that is implemented as a Spring MVC application. This plugin is deployed on 3-6 Tomcat servers via a GUI on one of the servers. Each instance of the plugin has an @Scheduled method to collect information about the server and store it in a central database.
My issue is that the GUI interface for uninstalling the plugin leaves some of the @Scheduled threads running.
For example, I have an environment that has servers 1-3. I install and enable the plugin via the GUI on server 1. There are now 3 instances of the application running @Scheduled threads on servers 1-3. If I go back to server 1 and uninstall the plugin, the thread is reliably killed on server 1 but not on servers 2 or 3.
I've implemented the following but the behavior persists:
@Component
public class ContextClosedListener implements ApplicationListener<ContextClosedEvent> {

    @Autowired
    ThreadPoolTaskExecutor executor;

    @Autowired
    ThreadPoolTaskScheduler scheduler;

    public void onApplicationEvent(ContextClosedEvent event) {
        scheduler.shutdown();
        executor.shutdown();
    }
}
Additionally, I've thought of implementing this as a context listener rather than an @Scheduled method, but I'd rather stick to Spring for maintenance and extensibility reasons.
How can I reliably kill threads in an environment like this?
A couple of thoughts. ThreadPoolTaskExecutor has a method setThreadNamePrefix, which allows you to set the prefix of the thread names. You could set the prefix to something unique, then find and kill those threads at runtime. You can also set the thread group using the setThreadGroup method on the same object, then just stop the threads in that thread group.
The better, and safer, solution would be to create a break-out method in your scheduled jobs. This is the preferred way to stop a Thread, instead of the old "shoot it in the head" approach of calling Thread.stop(). You could get a reference to those Runnables either by setting a common prefix or by using the thread group as described above.
The next question is: how do you stop the threads easily? That depends on how your application is implemented. Since I deal mainly with Spring MVC apps, my first solution would be to write a Controller to handle admin tasks. If this were JBoss, or some other large app server that has JMX (Tomcat can be configured to provide JMX, I believe, but I don't think it's configured that way out of the box), I might write a JMX-enabled bean to let me stop the threads via the app server's console. Basically, give yourself a way to trigger the stopping of the threads.
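A minimal sketch of the break-out idea (all names are hypothetical, assuming a reasonably recent Spring MVC): the scheduled job checks a flag on every run and returns early once an admin endpoint on that server flips it:
import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@Component
public class CollectorJob {

    // Flipped by the admin controller below, checked by the job on every run.
    private final AtomicBoolean stopped = new AtomicBoolean(false);

    public void stop() {
        stopped.set(true);
    }

    @Scheduled(fixedDelay = 60000)
    public void collect() {
        if (stopped.get()) {
            return; // break out instead of being killed externally
        }
        // ... collect server information and store it in the central database
    }
}

// Hypothetical admin endpoint used to trigger the break-out on each server.
@RestController
class UninstallController {

    private final CollectorJob job;

    UninstallController(CollectorJob job) {
        this.job = job;
    }

    @PostMapping("/admin/stop-collector")
    public void stopCollector() {
        job.stop();
    }
}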

Unmanaged Threads Spring Quartz Websphere Hibernate

It appears that our implementation using Quartz with a JDBCJobStore, along with Spring, Hibernate and WebSphere, is spawning unmanaged threads.
I have done some reading and found a tech article from IBM stating that the usage of Quartz with Spring will cause that. They suggest using CommonJ to address this issue.
I have done some further research, and the only examples I have seen so far all deal with the plain old JobStore that is not in a database.
So, I was wondering if anyone has an example of the solution for this issue.
Thanks
We have a working solution for this (two actually).
1) Alter the Quartz source code to use a WorkManager daemon thread for the main scheduler thread. It works, but requires changing Quartz. We didn't use this, though, since we didn't want to maintain a hacked version of Quartz. (That reminds me, I was going to submit this to the project but completely forgot.)
2) Create a WorkManagerThreadPool to be used as the quartz threadpool. Implement the interface for the quartz ThreadPool, so that each task that is triggered within quartz is wrapped in a commonj Work object that will then be scheduled in the WorkManager. The key is that the WorkManager in the WorkManagerThreadPool has to be initialized before the scheduler is started, from a Java EE thread (such as servlet initialization). The WorkManagerThreadPool must then create a daemon thread which will handle all the scheduled tasks by creating and scheduling the new Work objects. This way, the scheduler (on its own thread) is passing the tasks to a managed thread (the Work daemon).
Not simple, and unfortunately I do not have code readily available to include.
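For illustration, a rough sketch of such a WorkManagerThreadPool follows. It is simplified compared to the daemon-thread handoff described above (each Quartz task is scheduled on the WorkManager directly), and the Quartz ThreadPool and commonj signatures are assumptions against Quartz 1.8-era APIs that should be checked against the versions actually in use:
import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkManager;
import org.quartz.SchedulerConfigException;
import org.quartz.spi.ThreadPool;

public class WorkManagerThreadPool implements ThreadPool {

    private WorkManager workManager;

    // Must run on a container thread (e.g. servlet init) before the scheduler
    // starts, so the WorkManager lookup happens in a managed context.
    public void initialize() throws SchedulerConfigException {
        try {
            workManager = (WorkManager) new InitialContext().lookup("wm/default");
        } catch (Exception e) {
            throw new SchedulerConfigException("WorkManager lookup failed", e);
        }
    }

    public boolean runInThread(final Runnable runnable) {
        try {
            // Wrap the Quartz task in a commonj Work so it runs on a managed thread.
            workManager.schedule(new Work() {
                public void run() { runnable.run(); }
                public boolean isDaemon() { return false; }
                public void release() { }
            });
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public int blockForAvailableThreads() {
        return 1; // simplified: let the WorkManager do the real throttling
    }

    public int getPoolSize() {
        return 1; // not meaningful when delegating to the WorkManager
    }

    public void shutdown(boolean waitForJobsToComplete) {
        // lifecycle belongs to the container's WorkManager
    }
}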
Adding another answer to the thread, since I finally found a solution for this.
My environment: WAS 8.5.5, Quartz 1.8.5, no Spring.
The problem I had was the (above-stated) unmanaged thread causing a NamingException from ctx.lookup(myJndiUrl), which was instead working correctly in other application servers (JBoss, WebLogic); in fact, WebSphere was firing an "incident" with the following message:
javax.naming.ConfigurationException: A JNDI operation on a "java:" name cannot be completed because the server runtime is not able to associate the operation's thread with any J2EE application component. This condition can occur when the JNDI client using the "java:" name is not executed on the thread of a server application request. Make sure that a J2EE application does not execute JNDI operations on "java:" names within static code blocks or in threads created by that J2EE application. Such code does not necessarily run on the thread of a server application request and therefore is not supported by JNDI operations on "java:" names.
The following steps solved the problem:
1) upgraded to Quartz 1.8.6 (no code changes, just the Maven POM)
2) added the following dependency to the classpath (in my case, the EAR's /lib folder) to make the new WorkManagerThreadExecutor available:
<dependency>
    <groupId>org.quartz-scheduler</groupId>
    <artifactId>quartz-commonj</artifactId>
    <version>1.8.6</version>
</dependency>
Note: neither QTZ-113 nor the official Quartz documentation (1.x, 2.x) mentions how to activate this fix.
3) added the following to quartz.properties ("wm/default" was the JNDI name of the already configured DefaultWorkManager in my WAS 8.5.5; see Resources -> Asynchronous Beans -> Work Managers in the WAS console):
org.quartz.threadExecutor.class=org.quartz.custom.WorkManagerThreadExecutor
org.quartz.threadExecutor.workManagerName=wm/default
Note: the right class is org.quartz.custom.WorkManagerThreadExecutor for quartz-scheduler-1.8.6 (tested), or org.quartz.commonj.WorkManagerThreadExecutor from 2.1.1 onward (not tested, but verified against the actual quartz-commonj jars on the Maven repositories).
4) moved the JNDI lookup into the empty constructor of the Quartz job (thanks to m_klovre's "Thread outside of the J2EE container"); that is, the constructor was being invoked by reflection (the newInstance() method) from the very same J2EE context as my application and had access to the java:global namespace, while the execute(JobExecutionContext) method was still running in a poorer context, which was missing all of my application's EJBs.
Hope this helps.
P.S. As a reference, you can find here an example of the quartz.properties file I was using above.
Check this article:
http://www.ibm.com/developerworks/websphere/techjournal/0609_alcott/0609_alcott.html
Basically, set the taskExecutor property on SchedulerFactoryBean to use an org.springframework.scheduling.commonj.WorkManagerTaskExecutor, which will use container-managed threads.
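A sketch of that wiring in Java configuration (assuming a Spring version that still ships org.springframework.scheduling.commonj.WorkManagerTaskExecutor; the WorkManager JNDI name is the one used elsewhere in this thread):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.commonj.WorkManagerTaskExecutor;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class QuartzConfig {

    @Bean
    public WorkManagerTaskExecutor taskExecutor() {
        WorkManagerTaskExecutor executor = new WorkManagerTaskExecutor();
        // JNDI name of the container's WorkManager, e.g. wm/default as above.
        executor.setWorkManagerName("wm/default");
        return executor;
    }

    @Bean
    public SchedulerFactoryBean scheduler(WorkManagerTaskExecutor taskExecutor) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        // Quartz hands its jobs to container-managed WorkManager threads.
        factory.setTaskExecutor(taskExecutor);
        return factory;
    }
}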
Just a note: the QUARTZ-708 link above is not valid anymore.
This new issue (in a new Jira) seems to be addressing the problem: http://jira.terracotta.org/jira/browse/QTZ-113 (fixVersion = 1.8.6, 2.0.2)
I have recently encountered this problem. Practically, you need to:
Implement a thread pool that delegates work to the WebSphere Work Manager (Quartz provides only SimpleThreadPool, which runs jobs on unmanaged threads), and tell Quartz to use this thread pool via the org.quartz.threadPool.class property.
Tell Quartz to use WorkManagerThreadExecutor (or implement a custom one) via the org.quartz.threadExecutor.class property.
Have a bit of patience with cumbersome legacy web containers :)
Here is a GitHub demo of using Quartz with WebSphere (and also Tomcat).
Hope it helps someone.
You can check the JIRA issue below, raised against Quartz regarding this.
http://jira.opensymphony.com/browse/QUARTZ-708
This has the required WebSphereThreadPool implementation, which can be used with the quartz.properties changes mentioned there to meet your requirements. Hope this helps.
You will have to use WebSphere's managed thread pools. You can do this via Spring and CommonJ. CommonJ has a task executor that will create managed threads. You can even use a reference to a JNDI-managed thread resource. You can then inject the CommonJ task executor into the Spring-based Quartz SchedulerFactoryBean.
Please see http://open.bekk.no/boss/spring-scheduling-in-websphere/ and scroll to "Quartz with CommonJ" section for more details.
The proposal from PaoloC for WAS 8.5 and Quartz 1.8.6 also works on WAS 8.0 (and Quartz 1.8.6) and does not need Spring. (In my setup Spring 2.5.5 is present, but not in use in that context.)
That way I was able to override SimpleJobFactory with my own variant, using an InjectionHelper to apply CDI to every newly created job. Injection works for both @EJB (with a JNDI lookup of the annotated EJB remote business interface) and @Inject (with a JNDI lookup of the CDI BeanManager using a new InitialContext first, and then using this newly fetched BeanManager to look up the CDI bean itself).
Thank you PaoloC for that answer! (I hope this text will appear as an "answer to PaoloC" and not as an answer to the main topic. Found no way to differentiate between these.)
