What is the standard (industry-standard) way to keep a Java program running continuously?
Use case (real-time processing):
A continuously running Kafka producer
A continuously running Kafka consumer
A continuously running service to process a stream of objects
I found a few related questions on Stack Overflow, for example:
https://stackoverflow.com/a/29930409/2653389
But my question is specifically about the industry standard for achieving this.
First of all, there is no specified standard.
Possible options:
Java EE web application
Spring web application
Application with spring-kafka (@KafkaListener)
A Kafka producer will potentially accept some commands. In real-life scenarios I have worked with applications that run continuously with listeners: on receiving requests, they trigger jobs, batches, etc.
It could be achieved using, for example:
Web-server accepting HTTP requests
Standalone Spring application with @KafkaListener
The consumer could be a Spring application with @KafkaListener:
@KafkaListener(topics = "${some.topic}")
public void accept(Message message) {
    // process the message
}
A Spring application with @KafkaListener will run indefinitely by default. The listener containers created for @KafkaListener annotations are registered with an infrastructure bean of type KafkaListenerEndpointRegistry. This bean manages the containers' lifecycles; it will auto-start any containers that have autoStartup set to true. KafkaMessageListenerContainer uses a TaskExecutor to run the main KafkaConsumer loop.
See the documentation for more information.
If you decide to go without any frameworks or application servers, a possible solution is to create the listener in a separate thread:
public class ConsumerListener implements Runnable {

    private final Consumer<String, String> consumer = new KafkaConsumer<>(properties);

    @Override
    public void run() {
        try {
            consumer.subscribe(topics);
            while (true) {
                // consume, e.g. consumer.poll(...)
            }
        } finally {
            consumer.close();
        }
    }
}
When you start your program with "java -jar", it will run until you stop it. That is fine for simple personal usage and for testing your code.
On UNIX systems there is also the "screen" utility, which lets you run your jar detached from the terminal, like a daemon.
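For a cleaner exit in the standalone case, one option is a JVM shutdown hook that flips a flag the loop checks, so the consumer can close its resources when the process receives SIGTERM. A minimal plain-Java sketch (the Kafka consumer is replaced by placeholder comments, since the point here is only the shutdown mechanics):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class GracefulLoop {

    private static final AtomicBoolean running = new AtomicBoolean(true);

    public static void main(String[] args) throws InterruptedException {
        // The hook runs when the JVM receives SIGTERM/SIGINT or exits normally.
        Thread hook = new Thread(() -> running.set(false), "shutdown-hook");
        Runtime.getRuntime().addShutdownHook(hook);

        while (running.get()) {
            // poll the consumer / do work here
            Thread.sleep(100);
        }
        // close resources here, e.g. consumer.close()
    }
}
```

The loop then ends on its own instead of being killed mid-iteration.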
The industry standard is application servers: from the simple Jetty up to the enterprise-grade WebSphere or WildFly (formerly JBoss). Application servers allow you to run an application continuously, communicate with a front end if necessary, and so on.
Related
I have a Kafka consumer built using Spring Boot and spring-kafka. It is not a web application (only the spring-boot-starter dependency), so the application exposes no port, and I do not want to expose a port just for the sake of health checks.
This Kafka consumer application is packaged as a Docker image. The CI/CD pipeline has a stage that verifies that the container is up and the service has started. One option I considered was to check for an active Java process running the service jar file:
ps aux | grep java ...
But the catch here is that a Kafka consumer can keep running for a while when the Kafka broker is not up, and eventually stop with errors. So the process-based approach is not always reliable.
Are there any other alternative options to find out if the application is up and running fine, given that the application is a standalone non-web app?
You could schedule a job in the Spring Boot application that checks whatever needs to be checked and writes the health-check result to a file in the container. You can then have a cron job at container level that reads the file written by the Spring application and makes the final decision about the health status of the container.
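The file-writing part might be sketched like this in plain Java (in the actual application the report method would be invoked from a Spring @Scheduled method, and the class name and file path are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileHealthReporter {

    // Writes "UP" or "DOWN" to the given file; a container-level cron job
    // can then read the file and decide whether the container is healthy.
    public static void report(boolean healthy, Path file) throws IOException {
        Files.writeString(file, healthy ? "UP" : "DOWN");
    }
}
```

The container-level check could then be as simple as grepping the file for "UP".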
A popular way to check an application's health is the Spring Boot Actuator module, which checks different aspects of the application. You could use this module and implement a custom endpoint for checking your application's health:
Health Indicators in Spring Boot
I don't have ready-made source code for calling the actuator methods manually, but you can try this:
Define a command-line argument for running the actuator health check.
Disable the actuator endpoints:
management.endpoints.enabled-by-default=false
Call the actuator health check:
@Autowired
private HealthEndpoint healthEndpoint;

public Health getAlive() {
    return healthEndpoint.health();
}
Parse the returned Health object and print a string on the command line that indicates the application's health status.
Grab the printed health-status string with the grep command.
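The last two steps could look roughly like this in plain Java (the APP_HEALTH= marker is an arbitrary choice; in the real application the status string would come from the Health object returned by healthEndpoint.health()):

```java
public class HealthStatusPrinter {

    // Formats the status so a CI script can detect it with: grep "APP_HEALTH=UP"
    public static String format(String status) {
        return "APP_HEALTH=" + status;
    }

    public static void main(String[] args) {
        // In the real app, derive this from healthEndpoint.health().getStatus()
        String status = args.length > 0 ? args[0] : "UNKNOWN";
        System.out.println(format(status));
    }
}
```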
As outlined in the Spring Boot reference documentation, you can use the built-in liveness and readiness events.
You could add a custom listener for readiness state events to your application. As soon as your application is ready (after startup), you could create a file (and write stuff to it).
@Component
public class MyReadinessStateExporter {

    @EventListener
    public void onStateChange(AvailabilityChangeEvent<ReadinessState> event) {
        switch (event.getState()) {
            case ACCEPTING_TRAFFIC:
                // create file /tmp/healthy
                break;
            case REFUSING_TRAFFIC:
                // remove file /tmp/healthy
                break;
        }
    }
}
As explained in the same section, you can publish an AvailabilityChangeEvent from any component - the exporter will delete the file and let other systems know that it's not healthy.
This is related to Spring Batch Scheduling Problem.
I have 3 UAT servers (UAT1, UAT2 & UAT3). The task is to run the scheduled batch job on the UAT3 server only, but when I run a batch job manually by hitting the endpoint URL, it should run on all 3 servers.
We have used the old Batch Executor framework approach: an endpoint Java class exposing the endpoint URL for manual runs, an executor class for the batch run, and scheduling configured in a batch-context.xml file. As this is rather tightly coupled, any change affects both the manual and the scheduled run.
How can I modify the approach to solve the above problem using Spring Batch concepts?
Use separate YAML configurations based on the environment profile, and control the job with a property. For example, something like this:
# yaml file (e.g. only in the UAT3 profile)
jobs.importXMLJob.enable: true
@Value("${jobs.importXMLJob.enable}")
private boolean jobXMLEnable;

@Scheduled(cron = "${batch.cron}")
public void myJob() {
    if (jobXMLEnable) {
        // run the UAT3-only job here
    }
}
I am working on a Spring Boot application that we deploy through Kubernetes. My requirement is to run some logic when the pod crashes, is removed, or is intentionally shut down. Currently I am using @PreDestroy to run my logic on exit:
@Component
public class EngineShutDownHook {

    private static final Logger LOGGER = LoggerFactory.getLogger(EngineShutDownHook.class);

    @PreDestroy
    public void onExit() {
        LOGGER.info("Shutting down engine.");
        System.out.println("Engine is stopping.");
    }
}
However, I am not sure whether this code will run in all possible exit scenarios. I have also learnt about Spring's ExitCodeGenerator. Can you please suggest the best way to achieve this?
Use Kubernetes container lifecycle hooks:
PreStop: This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others.
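A preStop hook in the pod spec might look like this (a sketch; the container name, image, and script path /app/shutdown.sh are illustrative, and the script would contain whatever cleanup your application needs before SIGTERM is sent):

```yaml
# pod spec fragment (names are illustrative)
containers:
  - name: engine
    image: my-registry/engine:latest
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/app/shutdown.sh"]
```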
Edited Question
I have a Spring Boot application running Spring Boot 2.1 and Spring Cloud Finchley.M2. The application is integrated with RabbitMQ and consumes messages sent to it by other services. The integration with RabbitMQ is achieved using Spring Cloud Stream's @StreamListener and @EnableBinding abstractions, as shown below:
@EnableBinding(CustomChannels.class)
public class IncomingChannelHandler {

    private final Gson gson = new Gson();

    @StreamListener("inboundChannel")
    public void handleIncoming(String incoming) {
        final IncomingActionDTO dto = gson.fromJson(incoming, IncomingActionDTO.class);
        handleIncoming(dto);
    }
}
My goal is to be able to stop and start being a consumer of a RabbitMQ queue programmatically.
I tried the solution with the RabbitListenerEndpointRegistry, but the result was not what I needed, because after the stop the application was still registered as a consumer on the queue. I also tried to stop it through the Lifecycle interface, which did not work either.
Is there a way to tell the queue to stop considering you a consumer until the application registers again as one?
I'm building a plugin that is implemented as a Spring MVC application. This plugin is deployed on 3 to 6 Tomcat servers via a GUI on one of the servers. Each instance of the plugin has an @Scheduled method that collects information on the server and stores it in a central database.
My issue is that the GUI interface for uninstalling the plugin leaves some of the @Scheduled threads running.
For example, I have an environment with servers 1 to 3. I install and enable the plugin via the GUI on server 1. There are now 3 instances of the application running @Scheduled threads on servers 1 to 3. If I go back to server 1 and uninstall the plugin, the thread is reliably killed on server 1 but not on servers 2 or 3.
I've implemented the following but the behavior persists:
@Component
public class ContextClosedListener implements ApplicationListener<ContextClosedEvent> {

    @Autowired
    ThreadPoolTaskExecutor executor;

    @Autowired
    ThreadPoolTaskScheduler scheduler;

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        scheduler.shutdown();
        executor.shutdown();
    }
}
Additionally, I've thought of implementing this as a context listener rather than an @Scheduled method, but I'd rather stick with Spring for maintenance and extensibility reasons.
How can I reliably kill threads in an environment like this?
A couple of thoughts. ThreadPoolTaskExecutor has a setThreadNamePrefix method, which lets you set the prefix of each thread's name. You could set the prefix to something unique, then find and stop those threads at runtime. You can also set the thread group using the setThreadGroup method on the same object, then stop the threads in that thread group.
The better, and safer, solution would be to create a break-out method in your scheduled jobs. This is the preferred way to stop a Thread, as opposed to the old "shoot it in the head" approach of calling Thread.stop(). You could get references to those Runnables either by setting a common prefix or by using the thread group as described above.
The next question is: how do you stop the threads easily? That depends on how your application is implemented. Since I deal mainly with Spring MVC apps, my first solution would be to write a Controller to handle admin tasks. If this were JBoss, or some other large app server with JMX (Tomcat can be configured to provide JMX, I believe, but I don't think it's configured that way out of the box), I might write a JMX-enabled bean that lets me stop the threads via the app server's console. Basically, give yourself a method to trigger the stopping of the threads.
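A break-out flag in a scheduled job might be sketched like this in plain Java (the class name is illustrative; the admin controller or JMX bean described above would call stop()):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class StoppableJob implements Runnable {

    private final AtomicBoolean active = new AtomicBoolean(true);

    // Called from an admin endpoint / JMX operation to ask the job to stop.
    public void stop() {
        active.set(false);
    }

    public boolean isActive() {
        return active.get();
    }

    @Override
    public void run() {
        // The scheduled work checks the flag instead of being killed from outside.
        if (!active.get()) {
            return; // the job has been told to stop; do nothing
        }
        // ... do the scheduled work ...
    }
}
```

The job then exits cooperatively on its next invocation, rather than being force-killed mid-run.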