Edited Question
I have a Spring Boot application running Spring Boot 2.1 and Spring Cloud Finchley.M2. The application is integrated with RabbitMQ and consumes messages sent to it by other services. The integration with RabbitMQ has been achieved using Spring Cloud Stream's @StreamListener and @EnableBinding abstractions, as shown below:
@EnableBinding(CustomChannels.class)
public class IncomingChannelHandler {

    private final Gson gson = new Gson();

    @StreamListener("inboundChannel")
    public void handleIncoming(String incoming) {
        final IncomingActionDTO dto = gson.fromJson(incoming, IncomingActionDTO.class);
        handleIncoming(dto); // delegates to an overload that handles the DTO (not shown)
    }
}
My goal is to be able to programmatically stop and start being a consumer of a RabbitMQ queue.
I tried the solution with the RabbitListenerEndpointRegistry, but the result was not what I needed: after stopping, the application was still registered as a consumer on the queue. I also tried stopping it through the lifecycle, which did not work either.
Is there a way to tell the queue to stop considering the application a consumer until it registers as one again?
Related
I have a Kafka consumer built using Spring Boot and spring-kafka. It is not a web application (only the spring-boot-starter dependency), so there is no port exposed by the application, and I do not want to expose a port just for the sake of health checks.
This Kafka consumer application is packaged as a Docker image. The CI/CD pipeline has a stage that verifies that the container is up and the service has started. One option I considered was to check for an active Java process that uses the service jar file:
ps aux | grep java ...
But the catch here is that a Kafka consumer can keep running for a while if the Kafka broker is not up and eventually stop with errors, so the process-based approach is not always reliable.
Are there any other options for finding out whether the application is up and running fine, given that it is a standalone, non-web app?
You need to schedule a job in the Spring Boot application that checks whatever needs to be checked and writes the health check result to a file in the container. You can then have a cron job at the container level that reads that file and makes the final decision about the health status of the container.
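For instance, a minimal sketch of that idea, assuming a /tmp/health file location and a fixed 30-second interval (the actual check is a placeholder):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class FileHealthReporter {

    // file location is an assumption; the container-level cron job reads this file
    private static final Path HEALTH_FILE = Paths.get("/tmp/health");

    // requires @EnableScheduling on a configuration class
    @Scheduled(fixedDelay = 30_000)
    public void reportHealth() throws IOException {
        String status = isConsumerHealthy() ? "UP" : "DOWN";
        Files.write(HEALTH_FILE, status.getBytes(StandardCharsets.UTF_8));
    }

    // placeholder for whatever needs to be checked, e.g. that the Kafka listener is running
    private boolean isConsumerHealthy() {
        return true;
    }
}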
A popular way of checking an application's health is the Spring Boot Actuator module, which checks different aspects of the application. It seems that you should use this module and implement a custom endpoint for checking your application's health:
Health Indicators in Spring Boot
I don't have ready-made source code for calling the Actuator methods manually, but you can try this:
1. Define a command line argument for running the Actuator health check.
2. Disable the Actuator endpoints:
management.endpoints.enabled-by-default=false
3. Call the Actuator health check:
@Autowired
private HealthEndpoint healthEndpoint;

public Health getAlive() {
    return healthEndpoint.health();
}
4. Parse the returned Health object and print a string on the command line that indicates the health status of the application.
5. Grab the printed health status string with the grep command (see the sketch after this list).
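For example, a rough sketch of steps 1 and 3-5 combined in an ApplicationRunner; the --healthcheck argument name and the APPLICATION_HEALTH= prefix are made up for this sketch:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.actuate.health.HealthEndpoint;
import org.springframework.boot.actuate.health.Status;
import org.springframework.stereotype.Component;

@Component
public class HealthCheckRunner implements ApplicationRunner {

    @Autowired
    private HealthEndpoint healthEndpoint;

    @Override
    public void run(ApplicationArguments args) {
        // "--healthcheck" is a hypothetical argument that triggers the check
        if (args.containsOption("healthcheck")) {
            Status status = healthEndpoint.health().getStatus();
            // print a line that the CI script can grep for
            System.out.println("APPLICATION_HEALTH=" + status.getCode());
        }
    }
}

The CI stage could then run something like java -jar app.jar --healthcheck | grep APPLICATION_HEALTH and act on the result.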
As outlined in the Spring Boot reference documentation, you can use the built-in liveness and readiness events.
You could add a custom listener for readiness state events to your application. As soon as your application is ready (after startup), you could create a file (and write stuff to it).
@Component
public class MyReadinessStateExporter {

    @EventListener
    public void onStateChange(AvailabilityChangeEvent<ReadinessState> event) {
        switch (event.getState()) {
        case ACCEPTING_TRAFFIC:
            // create file /tmp/healthy
            break;
        case REFUSING_TRAFFIC:
            // remove file /tmp/healthy
            break;
        }
    }
}
As explained in the same section, you can publish an AvailabilityChangeEvent from any component - the exporter will delete the file and let other systems know that it's not healthy.
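For instance, a hedged sketch of publishing such an event from an arbitrary component; the onFatalError hook is hypothetical:

import org.springframework.boot.availability.AvailabilityChangeEvent;
import org.springframework.boot.availability.ReadinessState;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;

@Component
public class BrokerWatchdog {

    private final ApplicationEventPublisher eventPublisher;

    public BrokerWatchdog(ApplicationEventPublisher eventPublisher) {
        this.eventPublisher = eventPublisher;
    }

    // call this from wherever a fatal condition is detected (hypothetical hook)
    public void onFatalError(Exception cause) {
        // the exporter above reacts to REFUSING_TRAFFIC and removes /tmp/healthy
        AvailabilityChangeEvent.publish(eventPublisher, cause, ReadinessState.REFUSING_TRAFFIC);
    }
}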
I am developing a microservice that consumes messages from Kafka, processes them, and stores the output in MongoDB.
I am new to Kafka and I have run into a problem with losing messages.
The scenario is pretty simple:
If MongoDB is offline, the microservice receives a message, tries to save the output to Mongo, gets an error saying that Mongo is offline, and the message is lost.
My question: is there any mechanism in Kafka that stops sending messages in that case? Should I manually commit the offset in Kafka? What are the best practices for handling errors in Kafka consumers?
For this kind of scenario you should commit the offset manually, and only commit it if your message processing was successful. You commit it as shown below. However, you should note that messages have a retention time (TTL), so they are automatically deleted from the Kafka broker once it elapses.
consumer.commitSync();
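A minimal plain-consumer sketch of that approach with auto-commit disabled; the broker address, group id, topic name and the Mongo call are placeholders:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mongo-writer");            // assumption
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // no auto commit

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // assumption
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    saveToMongo(record.value()); // throws if MongoDB is down, so no commit happens
                }
                // commit only after the whole batch was stored successfully
                consumer.commitSync();
            }
        }
    }

    private static void saveToMongo(String value) {
        // hypothetical persistence call
    }
}

If saveToMongo throws, the offsets are never committed, so the records are re-delivered after the consumer restarts (within the retention period).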
I think that rather than committing manually, you should use Kafka Streams and Kafka Connect. Managing a transaction between two systems, Apache Kafka and MongoDB, might not be so easy, so it is better to use tools that are already developed and tested (you can read more about Kafka Connect here: https://kafka.apache.org/documentation/#connect, https://docs.confluent.io/current/connect/index.html).
Your scenario might be something like this:
1. Process your message using Kafka Streams and send the result to a new topic (Kafka Streams supports exactly-once semantics); see the sketch after this list.
2. Use Kafka Connect (a sink connector) to save the data in MongoDB: https://www.confluent.io/connector/kafka-connect-mongodb-sink/
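For step 1, a minimal Kafka Streams sketch; the topic names, broker address and the transformation itself are placeholders:

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class ProcessingTopology {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "message-processor");  // assumption
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumption
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String())) // assumption
               .mapValues(ProcessingTopology::process)
               .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));   // assumption

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }

    // placeholder for the actual processing logic
    private static String process(String value) {
        return value;
    }
}

The output topic then becomes the source for the MongoDB sink connector from step 2.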
One way you can do this is by using the pause and resume methods on MessageListenerContainer (but you have to use spring-kafka >= 2.1.x): spring-kafka-docs
#KafkaListener Lifecycle Management
The listener containers created for #KafkaListener annotations are not beans in the application context. Instead, they are registered with an infrastructure bean of type KafkaListenerEndpointRegistry. This bean is automatically declared by the framework and manages the containers' lifecycles; it will auto-start any containers that have autoStartup set to true.
So autowire the KafkaListenerEndpointRegistry in your application:
@Autowired
private KafkaListenerEndpointRegistry registry;
Get the MessageListenerContainer from the registry: spring-kafka-docs
public MessageListenerContainer getListenerContainer(java.lang.String id)
Return the MessageListenerContainer with the specified id or null if no such container exists.
Parameters:
id - the id of the container
On the MessageListenerContainer you can use the pause or resume methods: spring-kafka-docs
default void pause()
Pause this container before the next poll().
default void resume()
Resume this container, if paused, after the next poll().
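Putting these pieces together, a minimal sketch; the listener id "myListener" is an assumption and must match the id attribute of your @KafkaListener:

import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Service;

@Service
public class ConsumerLifecycleService {

    private final KafkaListenerEndpointRegistry registry;

    public ConsumerLifecycleService(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    public void pauseConsumer() {
        // "myListener" is the id set on the @KafkaListener annotation (assumption)
        MessageListenerContainer container = registry.getListenerContainer("myListener");
        if (container != null) {
            container.pause();
        }
    }

    public void resumeConsumer() {
        MessageListenerContainer container = registry.getListenerContainer("myListener");
        if (container != null) {
            container.resume();
        }
    }
}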
What is the standard (industry standard) way to keep a Java program running continuously?
Use case (real-time processing):
A continuous running Kafka producer
A continuous running Kafka consumer
A continuous running service to process a stream of objects
Found a few questions on Stack Overflow, for example:
https://stackoverflow.com/a/29930409/2653389
But my question is specifically about the industry standard for achieving this.
First of all, there is no specified standard.
Possible options:
Java EE WEB application
Spring WEB application
Application with Spring-kafka (#KafkaListener)
A Kafka producer will potentially accept some commands. In real-life scenarios I have worked with applications which run continuously with listeners; on receiving requests they trigger jobs, batches, and so on.
It could be achieved using, for example:
Web-server accepting HTTP requests
Standalone Spring application with #KafkaListener
The consumer could be a Spring application with @KafkaListener.
@KafkaListener(topics = "${some.topic}")
public void accept(Message message) {
    // process
}
A Spring application with @KafkaListener will run indefinitely by default. The listener containers created for @KafkaListener annotations are registered with an infrastructure bean of type KafkaListenerEndpointRegistry. This bean manages the containers' lifecycles; it will auto-start any containers that have autoStartup set to true. KafkaMessageListenerContainer uses a TaskExecutor to run the main KafkaConsumer loop.
See the documentation for more information.
If you decide to go without any frameworks or application servers, a possible solution is to create the listener in a separate thread:
public class ConsumerListener implements Runnable {

    // properties and topics are assumed to be provided elsewhere
    private final Consumer<String, String> consumer = new KafkaConsumer<>(properties);

    @Override
    public void run() {
        try {
            consumer.subscribe(topics);
            while (true) {
                // consume
            }
        } finally {
            consumer.close();
        }
    }
}
When you start your program with "java -jar", it will run until you stop it. That is OK for simple personal usage and for testing your code.
Also, on UNIX systems there is an app called "screen" with which you can run your java jar as a daemon.
The industry standard is application servers, from simple Jetty to enterprise WebSphere or WildFly (formerly JBoss). Application servers allow you to run an application continuously, communicate with the front end if necessary, and so on.
I have just started with Spring Boot and RabbitMQ. I would like to know how to configure the producer code and the consumer code separately with Spring Boot RabbitMQ (annotation config). I mean to say: what if I want to write the RabbitMQ producer code in Spring Boot and the consumer code in Python, or vice versa, the consumer code in Spring Boot and the producer code in Python? I found no separate producer and consumer configurations in Spring Boot. For example, in the case of Spring XML configuration, at the sender side we only have the exchange name and routing key available; there is no information at the producer side regarding the queue name or the type of exchange. But in contrast to this, in the case of Spring Boot, the queue is configured at the sender side only, including the exchange binding. Can you please help me with separate sender and receiver configurations using Spring Boot? I am working on cross-technology RabbitMQ, so I would like to know the minimum sender and receiver configurations required. Please help.
For example, in https://github.com/Civilis317/spring-amqp, the queue is configured at the producer side, in the configuration file. But in the case of XML configuration, the producer had no idea about the queue. I would like to know what the minimum configuration required at the sender is in the case of Spring Boot RabbitMQ.
I mean to say: in the XML configuration, the exchange-queue binding details were found in the consumer-side XML file, but in Spring Boot the exchange-queue binding is found in the sender config files only. Is that how it is written?
But in contrast to this, in the case of Spring Boot, the queue is configured at the sender side only, including the exchange binding.
That is not correct. What is leading you to that conclusion?
Messages are sent to an exchange with a routing key; the producer knows nothing about the queue(s) that are bound to the exchange.
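To illustrate that split, a hedged sketch of the minimum on each side; all names (exchange, routing key, queue) are placeholders.

Producer side (its own application):

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderPublisher {

    private final RabbitTemplate rabbitTemplate;

    public OrderPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void publish(String payload) {
        // the producer only knows the exchange name and the routing key
        rabbitTemplate.convertAndSend("orders.exchange", "orders.created", payload);
    }
}

Consumer side (its own application), which declares the queue and binds it to the exchange:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConsumerConfig {

    @Bean
    Queue ordersQueue() {
        return new Queue("orders.queue", true);
    }

    @Bean
    TopicExchange ordersExchange() {
        return new TopicExchange("orders.exchange");
    }

    @Bean
    Binding ordersBinding() {
        return BindingBuilder.bind(ordersQueue()).to(ordersExchange()).with("orders.created");
    }

    @RabbitListener(queues = "orders.queue")
    public void receive(String payload) {
        // process the message
    }
}

A Python producer would likewise only publish to "orders.exchange" with routing key "orders.created", and a Python consumer would declare and bind the queue itself in the same way.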
I am trying to make a queue with ActiveMQ and Spring Boot using this link, and it looks fine. What I am unable to do is make this queue persistent after the application goes down. I think that the SimpleJmsListenerContainerFactory should be durable to achieve that, but when I set factory.setSubscriptionDurable(true) and factory.setClientId("someid") I am unable to receive messages any more. I would be grateful for any suggestions.
I guess you are embedding the broker in your application. While this is OK for integration tests and proofs of concept, you should consider having a broker somewhere in your infrastructure and connecting to it. If you choose that, refer to the ActiveMQ documentation and you should be fine.
If you insist on embedding it, you need to provide a brokerUrl that enables message persistence.
Having said that, it looks like you are confusing a durable subscriber with message persistence. The latter is achieved by having a broker that actually stores the content of the queue somewhere, so that if the broker is stopped and restarted it can restore the content of its queue. The former is about being able to receive a message even if the listener was not active during some period of time.
You can enable persistence of messages using the ActiveMQConnectionFactory.
As mentioned in the Spring Boot link you provided, this ActiveMQConnectionFactory gets created automatically by Spring Boot, so you can create this bean manually in your application configuration and set various properties on it as well:
ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
Here is the link http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
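For example, a minimal sketch of declaring that connection factory bean manually, reusing the same embedded, persistent broker URL:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JmsConfig {

    @Bean
    public ActiveMQConnectionFactory connectionFactory() {
        // embedded broker with persistence enabled, so messages survive restarts
        return new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
    }
}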