Is there a configuration I can use to instruct Spring to continue with startup and initialize the beans even if the Kafka connection fails?
I am using Spring Framework 5.2.3 and Spring Kafka 2.5.3.RELEASE.
If your application needs the Kafka beans to work in every use case, then continuing with startup when there is no Kafka connection makes no sense: your application will not be able to do anything without Kafka.
But if some parts of your application do not need Kafka and you would like to use only those parts, you can either mark the Kafka-related beans as lazy or make all beans lazy by default. In that case Spring creates a bean only when it is actually needed, so even if no Kafka connection is available, the parts of your app that do not need Kafka will still work.
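A minimal sketch of the first option, assuming a plain @Configuration class; the broker address, serializers, and bean names are illustrative. (If you are on Spring Boot 2.2 or later, the second option is a single property: spring.main.lazy-initialization=true.)

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Lazy;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaConfig {

    // @Lazy defers bean creation (and hence any broker interaction)
    // until the first time another bean actually asks for it.
    @Bean
    @Lazy
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    @Lazy
    public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf);
    }
}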
Related
I have an Apache Camel route which calls an Apache Kafka topic (producer), like the one below, in my Spring Boot application (deployed on a Tomcat server):
from("timer://foo?period=1000").to("kafka:myTopic?brokers=localhost:9092");
This Spring Boot app is a REST API that is expected to handle around 300 TPS.
Q1) I know that a single Tomcat server thread serves each request coming into my Spring Boot app. Will the same thread be used by the above line of Apache Camel code to invoke myTopic? Or does Apache Camel use some connection pooling internally, just like RestTemplate?
Q2) Because the load will increase to 500 TPS in the near future, does it make any sense to introduce pooling for the above line of code? I believe that if I use connection pooling, my application's performance will improve. However, I am not able to find the code to enable connection pooling for the above line.
If anyone has any idea, please let me know. Please note that I am not looking for parallel processing, so seda or multicast in Camel is not an option. I have only a single call to the Kafka topic, as shown above, so I am just looking for how to enable connection pooling on this line of code.
Thanks.
I'm writing a unit test with Spring Kafka 2.4 to prove that my Spring Boot setup is correct. I'm validating that SeekToCurrentBatchErrorHandler works as expected, which requires sending an incorrect message that should be retried. Unfortunately this incorrect message breaks other tests because it will be retried forever.
Because of the above I'd like to ensure that each test is correctly isolated. I need to either:
Delete and recreate the Kafka topic with AdminClient
Seek to the end of the existing Kafka topic and commit new offsets
I was trying option 2 with the Consumer.seekToEnd() method, but Spring Kafka hides the created consumers behind a few layers of internal framework classes. I'm also not 100% sure whether this method can be called from the test thread, which is different from the listener thread.
What is the recommended way to clear topics in tests with Spring Kafka?
Best practice is to use unique topic names in each test to provide complete isolation; alternatively, you can stop the container(s), create a new Consumer with the same group.id, and perform the seeks there.
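A rough sketch of the second approach; the container id "myListener", the topic and group names, and the injected ConsumerFactory are illustrative:

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.core.ConsumerFactory;

public class TopicResetSupport {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Autowired
    private ConsumerFactory<String, String> consumerFactory;

    void skipToEnd() {
        registry.getListenerContainer("myListener").stop();
        try (Consumer<String, String> consumer =
                consumerFactory.createConsumer("myGroup", "test-seek")) {
            consumer.subscribe(Collections.singletonList("myTopic"));
            consumer.poll(Duration.ofSeconds(5)); // join the group and get an assignment
            consumer.seekToEnd(consumer.assignment());
            Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
            for (TopicPartition tp : consumer.assignment()) {
                // position() resolves the lazy seek into a concrete offset
                offsets.put(tp, new OffsetAndMetadata(consumer.position(tp)));
            }
            consumer.commitSync(offsets); // commit the end offsets for the group
        }
        registry.getListenerContainer("myListener").start();
    }
}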
How do I connect to multiple bootstrap servers (DEV, STAGE and PROD) from a microservice (Admin MS) with security in place?
I want to connect to all the Kafka servers and create/manage topics, create ACLs, etc.
I am using the Spring Kafka AdminClient, configured via Spring Boot properties in application.yml, to connect to DEV right now. But now I want to connect to all environments.
Is there an easier and better approach than writing a properties hash map and putting the config values in it? Does Spring Cloud Stream help?
Is this similar to connecting multiple databases to a microservice?
You can do it by creating multiple child boot applications, each with its own environment containing the properties.
But it's probably easier to bypass Boot's auto configuration and wire up your own AdminClients with their own properties.
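A rough sketch of that second suggestion, with one AdminClient per environment; the broker addresses and security settings are placeholders:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MultiClusterAdminConfig {

    @Bean(destroyMethod = "close")
    public AdminClient devAdminClient() {
        return adminClient("dev-broker:9092");
    }

    @Bean(destroyMethod = "close")
    public AdminClient stageAdminClient() {
        return adminClient("stage-broker:9092");
    }

    @Bean(destroyMethod = "close")
    public AdminClient prodAdminClient() {
        return adminClient("prod-broker:9092");
    }

    private AdminClient adminClient(String bootstrapServers) {
        Map<String, Object> props = new HashMap<>();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        // per-cluster security settings go here, e.g.:
        // props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        return AdminClient.create(props);
    }
}

You can then inject whichever client you need by name, e.g. with @Qualifier("devAdminClient").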
With Spring JDBC templates, you can initialize connections lazily with a simple flag. Is there a similar capability for Kafka container factories in Spring Boot 1.5.x / Spring Kafka 1.3.x deployments?
The best answer I have seen so far is to disable autoStartup and manage the start on your own, catching any exceptions that may occur during startup, as described here: How to start spring application even if Kafka listener (spring-kafka) doesn't initialize
Is this the only way, and are there any caveats when using KafkaListenerEndpointRegistry to self-manage the lifecycle of the container(s)?
Would the @Lazy annotation work with @KafkaListener, a @Configuration class for the Kafka configuration, or a similar component class? I'm asking because there does not appear to be a documented approach for handling this, and I am trying some of these approaches in parallel to gather feedback.
How does this change (if at all) between Spring Boot 1.5.x and Spring Boot 2.1.x (or above) with their compatible Spring Kafka versions?
Lazy initialization of a listener container makes no sense. What would trigger the instantiation?
The KafkaTemplate lazily creates its producer on the first operation.
autoStartup and starting/stopping using the registry is the correct approach.
The version makes no difference.
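A minimal sketch of that approach. Note that the autoStartup attribute on @KafkaListener itself only exists in newer Spring Kafka versions; on 1.3.x you would set it on the container factory instead, as shown. The listener id, topic, and handler are illustrative.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Configuration
public class ListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setAutoStartup(false); // containers are created but not started
        return factory;
    }
}

Then start the container yourself via the registry, catching any startup failure so the rest of the application keeps running:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
class MyListener {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @KafkaListener(id = "myListener", topics = "myTopic")
    public void listen(String message) {
        // handle the message
    }

    public void startListening() {
        try {
            registry.getListenerContainer("myListener").start();
        }
        catch (Exception e) {
            // container failed to start; keep the app running and retry later
        }
    }
}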
I am trying to make a queue with ActiveMQ and Spring Boot using this link, and it looks fine. What I am unable to do is make this queue persistent after the application goes down. I think that SimpleJmsListenerContainerFactory should be durable to achieve that, but when I set factory.setSubscriptionDurable(true) and factory.setClientId("someid") I am unable to receive messages any more. I would be grateful for any suggestions.
I guess you are embedding the broker in your application. While this is OK for integration tests and proofs of concept, you should consider having a broker somewhere in your infrastructure and connecting to it. If you choose that, refer to the ActiveMQ documentation and you should be fine.
If you insist on embedding it, you need to provide a brokerUrl that enables message persistence.
Having said that, it looks like you are confusing durable subscriptions with message persistence. The latter is achieved by having a broker that actually stores the contents of the queue somewhere, so that if the broker is stopped and restarted it can restore them. The former is about being able to receive a message even if the listener was not active when it was sent.
You can enable persistence of messages using the ActiveMQConnectionFactory.
As mentioned in the Spring Boot link you provided, this ActiveMQConnectionFactory is created automatically by Spring Boot, so you can create this bean manually in your application configuration and set various properties on it as well.
ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
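For example, a minimal sketch of declaring that bean yourself (the vm:// URL embeds the broker, and broker.persistent=true turns its persistence on):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JmsConfig {

    @Bean
    public ActiveMQConnectionFactory connectionFactory() {
        // embedded broker with message persistence enabled
        return new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
    }
}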
Here is the link http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html