How to reliably kill @Scheduled threads across servers? - java

I'm building a plugin that is implemented as a Spring MVC application. This plugin is deployed on 3 - 6 Tomcat servers via a GUI on one of the servers. Each instance of the plugin has an @Scheduled method that collects information about the server and stores it in a central database.
My issue is that the GUI interface for uninstalling the plugin leaves some of the @Scheduled threads running.
For example, I have an environment that has servers 1 - 3. I install and enable the plugin via the GUI on server 1. There are now 3 instances of the application running @Scheduled threads on servers 1 - 3. If I go back to server 1 and uninstall the plugin, the thread is reliably killed on server 1 but not on servers 2 or 3.
I've implemented the following but the behavior persists:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextClosedEvent;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.stereotype.Component;

@Component
public class ContextClosedListener implements ApplicationListener<ContextClosedEvent> {

    @Autowired
    ThreadPoolTaskExecutor executor;
    @Autowired
    ThreadPoolTaskScheduler scheduler;

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        scheduler.shutdown();
        executor.shutdown();
    }
}
Additionally, I've thought of implementing this as a context listener rather than an @Scheduled method, but I'd rather stick to Spring for maintenance and extensibility reasons.
How can I reliably kill threads in an environment like this?

A couple of thoughts. ThreadPoolTaskExecutor has a setThreadNamePrefix method, which lets you set the prefix of the threads it creates. You could set the prefix to something unique, then find and kill those threads at runtime. You could also set the thread group using the setThreadGroup method on the same object, then stop the threads in that thread group.
The better, and safer, solution would be to create a break-out method in your scheduled jobs. This is the preferred way of stopping a Thread, rather than the old "shoot it in the head" approach of calling Thread.stop(). You could get a reference to those Runnables either by setting a common prefix or by using the thread group as described above.
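For example, a cooperative break-out flag might look like this (a minimal sketch; the StatsCollector name, the fixed delay and the stop() method are placeholders, not part of your code):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class StatsCollector {

    // volatile so the flag set by the stopping thread is visible to the scheduled thread
    private volatile boolean running = true;

    @Scheduled(fixedDelay = 60000)
    public void collect() {
        if (!running) {
            return; // break out cleanly instead of being killed mid-run
        }
        // ... collect server information and write it to the central database ...
    }

    public void stop() {
        running = false;
    }
}

The scheduler keeps firing, but each run exits immediately once the flag is cleared; combined with shutting the scheduler down, nothing is left doing work.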
The next question is: how do you stop the threads easily? That depends on how your application is implemented. Since I deal mainly with Spring MVC apps, my first solution would be to write a Controller to handle admin tasks. If this were JBoss, or some other large app server with JMX (Tomcat can be configured to provide JMX, I believe, but I don't think it's configured that way out of the box), I might write a JMX-enabled bean that lets me stop the threads via the app server's console. Basically, give yourself a way to trigger the stopping of the threads.
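For instance, a sketch of such an admin endpoint in Spring MVC (the URL and class name are made up) could simply shut the scheduler down on the instance that receives the call:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class SchedulerAdminController {

    @Autowired
    private ThreadPoolTaskScheduler scheduler;

    // hypothetical admin endpoint; call it on each server as part of the uninstall flow
    @RequestMapping("/admin/stop-scheduler")
    @ResponseBody
    public String stopScheduler() {
        scheduler.shutdown();
        return "scheduler stopped";
    }
}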

Related

Moving from quarkus-resteasy to quarkus-resteasy-reactive what about application using ThreadLocal?

As stated in the documentation, moving from quarkus-resteasy to quarkus-resteasy-reactive is as simple as changing the Maven dependency, and it should just work.
In our Quarkus project, we create a session in a @PreMatching filter and save it in a ThreadLocal. In other filters we update the session with additional information. The session information is then used in all services and resources.
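For context, the setup described is roughly this kind of filter (a minimal sketch: the class name, the header and the session map are invented, and the javax.ws.rs imports may be jakarta.ws.rs depending on the Quarkus version):

import java.util.HashMap;
import java.util.Map;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class SessionInitFilter implements ContainerRequestFilter {

    // hypothetical holder: keeps the session for the current request thread
    public static final ThreadLocal<Map<String, Object>> SESSION = ThreadLocal.withInitial(HashMap::new);

    @Override
    public void filter(ContainerRequestContext requestContext) {
        SESSION.get().put("requestId", requestContext.getHeaderString("X-Request-Id"));
    }
}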
After some performance tests, even though we have more RAM and CPU, we see only 200 threads used in Prometheus. We have the option of raising max-thread in the properties file or of moving to reactive programming.
As a starting point we switched to quarkus-resteasy-reactive, and we experienced the following problems:
The first thread that intercepts the request is an event loop thread. It then delegates execution to a worker thread if the method is annotated with @Blocking; otherwise it continues execution on the event loop thread.
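As a sketch of that dispatching rule (the resource and method names are made up): a method annotated with @Blocking is moved to a worker thread, while a method returning Uni stays on the event loop thread.

import io.smallrye.common.annotation.Blocking;
import io.smallrye.mutiny.Uni;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/example")
public class ExampleResource {

    @GET
    @Path("/blocking")
    @Blocking // dispatched to a worker thread
    public String blockingLookup() {
        return "worker thread";
    }

    @GET
    @Path("/reactive")
    public Uni<String> reactiveLookup() {
        // stays on the event loop thread
        return Uni.createFrom().item("event loop thread");
    }
}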
Problem 1) Blocking method:
All information added to the session in the @PreMatching filter is lost. However, information added to the session by other filters with a priority such as @Priority(Priorities.AUTHENTICATION - 10) is present in the worker thread.
Problem 2) Non-blocking method:
With a few requests everything works fine, but with 100 parallel requests we randomly lose session information. After googling, it seems the event loop thread uses a different context (the Vert.x thread context), and I haven't found any documentation that explains how to move from ThreadLocal to the Vert.x context when moving from Quarkus to Quarkus reactive.
Problem 3) Transforming the method to return Uni: the ThreadLocal information is propagated, but I haven't tested it under load yet. [UPDATE] same as Problem 2)
Problem 4) Can't find how to run integration tests on a Vert.x thread
Any help on how to move a project that uses ThreadLocal to something else when migrating to Quarkus reactive would be appreciated.

Standard way to run a Java program continuously

What is the standard (industry standard) way to keep a Java program running continuously?
Use case (real-time processing):
A continuous running Kafka producer
A continuous running Kafka consumer
A continuous running service to process a stream of objects
I found a few questions on Stack Overflow, for example:
https://stackoverflow.com/a/29930409/2653389
But my question is specifically about what the industry standard is to achieve this.
First of all, there is no specified standard.
Possible options:
Java EE web application
Spring web application
Application with Spring Kafka (@KafkaListener)
The Kafka producer will potentially accept some commands. In real-life scenarios I have worked with applications which run continuously with listeners; on receiving requests they trigger jobs, batches, etc.
It could be achieved using, for example:
A web server accepting HTTP requests
A standalone Spring application with @KafkaListener
The consumer could be a Spring application with @KafkaListener:
@KafkaListener(topics = "${some.topic}")
public void accept(Message message) {
    // process
}
A Spring application with @KafkaListener will run indefinitely by default. The listener containers created for @KafkaListener annotations are registered with an infrastructure bean of type KafkaListenerEndpointRegistry. This bean manages the containers' lifecycle; it auto-starts any containers that have autoStartup set to true. KafkaMessageListenerContainer uses a TaskExecutor to run the main KafkaConsumer loop.
See the documentation for more information.
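As a rough illustration (the listener id, topic placeholder and method names are assumptions), a container registered through @KafkaListener can be stopped and restarted at runtime via that registry:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    // autoStartup = "false" keeps the container from starting with the context;
    // it is started and stopped explicitly through the registry instead
    @KafkaListener(id = "orderListener", topics = "${some.topic}", autoStartup = "false")
    public void accept(String message) {
        // process
    }

    public void start() {
        registry.getListenerContainer("orderListener").start();
    }

    public void stop() {
        registry.getListenerContainer("orderListener").stop();
    }
}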
If you decide to go without any frameworks or application servers, a possible solution is to create a listener in a separate thread:
public class ConsumerListener implements Runnable {

    private final Consumer<String, String> consumer = new KafkaConsumer<>(properties);

    @Override
    public void run() {
        try {
            consumer.subscribe(topics);
            while (true) {
                // consume, e.g. poll the consumer and process the records
            }
        } finally {
            consumer.close();
        }
    }
}
When you start your program with "java -jar" it will run until you stop it. That is OK for simple personal usage and for testing your code.
Also, on UNIX systems there is an app called "screen"; you can use it to run your Java jar as a daemon.
The industry standard is application servers: from simple Jetty to enterprise WebSphere or WildFly (formerly JBoss). Application servers allow you to run an application continuously, communicate with the front end if necessary, and so on.

EJB @Schedule is synchronous or asynchronous?

As @Balus has explained in Spawning threads in a JSF managed bean for scheduled tasks using a timer:
EJB available? Use @Schedule
If you target Java EE 6 or newer (e.g. JBoss AS, GlassFish, TomEE, etc., and thus not a barebones JSP/Servlet container such as Tomcat), then use a @Singleton EJB with a @Schedule method instead. This way the container will worry itself about pooling and destroying threads via ScheduledExecutorService.
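A minimal sketch of that pattern (the bean name and the schedule are arbitrary):

import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class ReportJob {

    // invoked by the container on a container-managed thread every day at 02:00
    @Schedule(hour = "2", minute = "0", persistent = false)
    public void generateReports() {
        // long-running work goes here
    }
}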
So I am curious to know: when using @Schedule, does the background process run asynchronously on container-managed threads (magically), or is it like a java.util.Timer, which creates a single thread with all processing running within that thread?
If @Schedule creates only a single thread just to manage the scheduler, would it be safe to additionally use a ScheduledExecutorService within the @Schedule method, with that ScheduledExecutorService running further Runnable tasks on multiple threads?
I have a long-running process including file manipulation, data processing and email generation; should I really rely on this single @Schedule annotation alone, without using any executor services or creating a further thread pool? BTW, I am using GlassFish.

How can I monitor (listen to) a job I scheduled with @Scheduled using the Spring framework?

I needed to schedule a job which should run indefinitely. I found this nice article (although it has a Spring 3 example) and followed it:
http://krams915.blogspot.com/2011/01/spring-3-task-scheduling-via.html
While this is a good starting point, I wasn't able to find out how I can monitor the status of the running job. I'd like to know if there's a way to monitor scheduled jobs so I can be notified if something goes wrong and a job just dies (such as the job throwing an exception and exiting).
@Scheduled is a shortcut that uses a bunch of defaults, including the queue. I believe it is backed by an ExecutorService, so if you just @Autowire that, you can interrogate it for status.
For more control (such as deciding how to handle jobs that throw exceptions) you can create a bean that is your own ExecutorService and put it in the context.
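One hedged way to implement that "own scheduler bean" idea with Spring is a ThreadPoolTaskScheduler with an error handler attached, so a job that throws is at least reported instead of dying silently (the pool size and handler below are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
@EnableScheduling
public class SchedulerConfig {

    // @Scheduled methods run on this scheduler instead of the default single-threaded one
    @Bean
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(5);
        // called whenever a scheduled task throws an exception
        scheduler.setErrorHandler(t -> System.err.println("Scheduled job failed: " + t));
        return scheduler;
    }
}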

How do I timeout a blocking call inside an EJB?

I am in the process of developing an EJB that makes 10+ calls to other components (EJBs, web services, etc.) as part of its business logic. In my case, performance is a huge concern. This EJB will be servicing a few million requests a day.
My question is: for each of those 10+ calls, how can I enforce a timeout?
I cannot wait more than 'n' seconds for any one of the calls to return. If a call takes longer than 'n' seconds, I will use a default response for processing.
I would normally use an Executor to solve this problem but, from what I understand, one shouldn't spawn threads from within an EJB as it may potentially interfere with the EJB's lifecycle.
How can I enforce a timeout?
The EJB 3.1 specification provides the possibility to set a timeout using the @AccessTimeout annotation, which applies to serialized client calls that have to wait while a session bean instance is busy executing a previous request.
Clearly (and as explicitly described in the specification) this applies to stateful and singleton session beans, although it can also come into play for stateless beans when the bean pool runs out of available instances.
Note that once the client-invoked business method is in progress, this timeout no longer applies.
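A minimal sketch of @AccessTimeout on a singleton (the bean name and the timeout value are arbitrary):

import java.util.concurrent.TimeUnit;
import javax.ejb.AccessTimeout;
import javax.ejb.Singleton;

@Singleton
public class InventoryService {

    // callers queued behind a busy instance give up after 5 seconds
    // (the waiting caller receives a ConcurrentAccessTimeoutException)
    @AccessTimeout(value = 5, unit = TimeUnit.SECONDS)
    public void refresh() {
        // business logic
    }
}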
Another possibility, which is not part of the specification but is supported by several servers (see the JBoss example), is to define a timeout on the remote client side. If the client invocation takes longer than the configured timeout, the client is informed; however, the server-side execution is not interrupted, which is not good enough.
Setting a transaction timeout is not a good option either, because there is no guarantee that the thread executing the business logic will be interrupted when the transaction timeout expires.
I would normally use an Executor to solve this problem but, from what I understand, one shouldn't spawn threads from within an EJB...
Instead you could use the ManagedExecutorService, an Executor extension that is suitable for use within an EJB container.
Additionally, to implement asynchronous calls within an EJB, take a look at the @Asynchronous annotation, which provides a high-level abstraction for the multithreading issue you are facing.
The cancel() method of the Future interface allows you to interrupt a thread's execution if you consider that the process has taken too long.
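Putting those pieces together, the timeout pattern could look roughly like this (a sketch only; the bean, the n-second limit, the downstream call and the default response are placeholders):

import java.util.concurrent.Callable;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedExecutorService;

@Stateless
public class OrchestratorBean {

    @Resource
    private ManagedExecutorService executor;

    public String callWithTimeout(long seconds) {
        Callable<String> task = this::callOtherComponent; // the slow downstream call
        Future<String> future = executor.submit(task);
        try {
            return future.get(seconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the worker if it is still running
            return defaultResponse();
        } catch (Exception e) {
            return defaultResponse();
        }
    }

    private String callOtherComponent() { /* EJB / web service call */ return "real result"; }

    private String defaultResponse() { return "default"; }
}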
Since you are not providing much detail about your environment:
Use bean-managed transactions and set the transaction timeout
EE 7: provides a managed executor service
EE 6: a custom executor service as a JCA connector
