How to auto-configure programmatically? - java

I have a Spring Boot Kafka application. My brokers are recycled every few days: the old brokers are deprovisioned and new brokers are provisioned.
I have a scheduler which checks for brokers every few hours. I would like to make sure that as soon as we have new brokers,
we reload all the Spring Kafka related beans. Very similar to KafkaAutoConfiguration, except I want a trigger on a broker value change that loads the auto-configuration programmatically.
How do I call the auto-configuration programmatically whenever the old brokers are replaced with new ones?

Your requirement sounds like the Config Server in Spring Cloud (https://cloud.spring.io/spring-cloud-static/Greenwich.SR2/multi/multi__spring_cloud_config_2.html#_spring_cloud_config_2) with its @RefreshScope feature: https://cloud.spring.io/spring-cloud-static/Greenwich.SR2/multi/multi__spring_cloud_context_application_context_services.html#refresh-scope.
So, you need to define your own beans and mark them with that annotation:
@Bean
@RefreshScope
public ConsumerFactory<?, ?> kafkaConsumerFactory() {
    return new DefaultKafkaConsumerFactory<>(this.properties.buildConsumerProperties());
}

@Bean
@RefreshScope
public ProducerFactory<?, ?> kafkaProducerFactory() {
    DefaultKafkaProducerFactory<?, ?> factory = new DefaultKafkaProducerFactory<>(
            this.properties.buildProducerProperties());
    String transactionIdPrefix = this.properties.getProducer().getTransactionIdPrefix();
    if (transactionIdPrefix != null) {
        factory.setTransactionIdPrefix(transactionIdPrefix);
    }
    return factory;
}
These two beans rely on the configuration properties for the connection to the Apache Kafka brokers, and that is enough to make them refreshable. Whenever the refresh scope is refreshed, these beans are re-initialized with fresh configuration properties.
I think the ConsumerFactory consumers (MessageListenerContainer and KafkaListenerEndpointRegistry) have to be restarted on that event as well. The point is that a MessageListenerContainer runs a long-living process and therefore caches a KafkaConsumer instance for polling purposes.
The ProducerFactory consumers don't need to be restarted. Even though the KafkaProducer is cached in the DefaultKafkaProducerFactory, it is going to be re-initialized during the @RefreshScope refresh.
UPDATE
I don't use a Config Server. I get the new hosts from the Consul catalog service.
Right, I didn't say that you use a Config Server; it just looks similar to me. So, at a high level, I would take a look at a Config Client implementation as a model for your Consul catalog solution.
Nevertheless, you can still emit a RefreshEvent, which will trigger all your @RefreshScope'd beans to be reloaded. For that purpose you need to implement ApplicationEventPublisherAware and emit that event whenever you get an update from Consul. Remember: the Kafka listener containers must be restarted. For that purpose you can listen for the RefreshScopeRefreshedEvent, since you are really only interested in the restart after all the @RefreshScope beans have been refreshed. A sketch of both pieces follows the link below.
More about refresh scope: https://gist.github.com/dsyer/a43fe5f74427b371519af68c5c4904c7
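A minimal sketch of that wiring, assuming Spring Cloud Context is on the classpath. The ConsulBrokerWatcher name and the scheduler hook are assumptions; only RefreshEvent, RefreshScopeRefreshedEvent, and KafkaListenerEndpointRegistry come from the answer above:
import org.springframework.cloud.context.scope.refresh.RefreshScopeRefreshedEvent;
import org.springframework.cloud.endpoint.event.RefreshEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationEventPublisherAware;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

// Publishes a RefreshEvent whenever the Consul check finds new brokers.
@Component
public class ConsulBrokerWatcher implements ApplicationEventPublisherAware {

    private ApplicationEventPublisher publisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    // Call this from your scheduler when the Consul catalog reports new broker hosts.
    public void onBrokersChanged() {
        publisher.publishEvent(new RefreshEvent(this, null, "Kafka brokers changed"));
    }
}

// (In its own file.) Restarts the listener containers only after all
// @RefreshScope beans have been rebuilt.
@Component
public class KafkaContainerRestarter {

    private final KafkaListenerEndpointRegistry registry;

    public KafkaContainerRestarter(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @EventListener(RefreshScopeRefreshedEvent.class)
    public void onRefresh(RefreshScopeRefreshedEvent event) {
        registry.stop();
        registry.start();
    }
}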

Related

Create several JMS listeners dynamically at server startup with Spring Boot

I know how to create a single JMS listener with Spring Boot annotations. But now I want to create, at server startup, several JMS listeners listening to several brokers that send the same kind of messages, reading the broker properties from a file.
How can I achieve this? Is it possible to ask Spring Boot to create the beans with Java statements instead of annotations? With a factory or something like that?
I know there won't be more than 10 brokers in the system. Is there a solution to statically define 10 JMS listeners with annotations but deactivate the ones that are not used, so that they don't cause errors?
My answer relates to "Is there a solution to statically define 10 JMS listeners with annotations but deactivate the ones that are not used?" and not to the dynamic portion of creating JMS listeners on the fly.
You can use @ConditionalOnProperty to enable/disable your consumer and use profiles to specify when they are enabled.
Example:
@Slf4j
@Service
@ConditionalOnProperty(name = "adapter-app-config.jms.request.enable-consumer", havingValue = "true")
public class RequestMessageConsumer {

    @Autowired
    private AdapterConfig adapterConfig;

    @Autowired
    private RequestMessageHandler requestMessageHandler;

    @JmsListener(concurrency = "1", destination = "${adapter-app-config.jms.request.InQueue}")
    @Transactional(rollbackFor = { Exception.class })
    public void onMessage(TextMessage message) {
        requestMessageHandler.handleMessage(message);
    }
}
application.yml:
adapter-app-config:
  jms:
    sync:
      enable-consumer: true
    request:
      enable-consumer: false
For the dynamic part, please see:
Adding Dynamic Number of Listeners (Spring JMS)
https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#jms-annotated-programmatic-registration
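For the programmatic route, here is a minimal sketch of a JmsListenerConfigurer. The broker URLs, the queue name, and the ActiveMQ connection factory are assumptions; substitute your own provider and the properties loaded from your file:
import java.util.Arrays;
import java.util.List;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.annotation.JmsListenerConfigurer;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.config.JmsListenerEndpointRegistrar;
import org.springframework.jms.config.SimpleJmsListenerEndpoint;

@Configuration
@EnableJms
public class DynamicJmsListenerConfig implements JmsListenerConfigurer {

    // Hypothetical: in practice, read these from your properties file.
    private final List<String> brokerUrls =
            Arrays.asList("tcp://broker1:61616", "tcp://broker2:61616");

    @Override
    public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
        int i = 0;
        for (String url : brokerUrls) {
            SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
            endpoint.setId("listener-" + i++);
            endpoint.setDestination("myQueue"); // assumed queue name
            endpoint.setMessageListener(message -> {
                // handle the incoming message
            });
            registrar.registerEndpoint(endpoint, containerFactory(url));
        }
    }

    // One container factory per broker, each with its own connection factory.
    private DefaultJmsListenerContainerFactory containerFactory(String brokerUrl) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(new ActiveMQConnectionFactory(brokerUrl));
        return factory;
    }
}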

Why is using containerGroup preventing my other listeners from working?

My application is listening to several topics.
Some of them are compacted topics used to load some data into memory.
I wanted to load that data first, so I used a SmartLifecycle to manually start those containers before the other containers.
It works great, but for simplicity I tried to use a containerGroup:
@KafkaListener(id = "myId", containerGroup = "compacted", ...)
Then in the SmartLifecycle bean I used:
Collection<MessageListenerContainer> compactedListenerContainers = applicationContext.getBean("compacted", Collection.class);
But once I do that, after the start method is finished, the other containers are never started.
If I replace this line with:
Collection<MessageListenerContainer> compactedListenerContainers = Arrays.asList(registry.getListenerContainer("myId"));
it works.
Any idea why getting the bean for a containerGroup prevents all the other listeners from working? Note that all the other @KafkaListeners are just defined by:
@KafkaListener(topics = "myTopic")
Edit
After further investigation, the problem is related to the KafkaListenerEndpointRegistry.
If the SmartLifecycle bean is created with the KafkaListenerEndpointRegistry as a dependency, the application works, even if I'm not using the registry at all.
But if the SmartLifecycle bean is created without this registry, the application fails.
You need to show your container factory.
I presume you have autoStartup set to false, since you are manually starting the containers.
So the others won't start either; since you want to start them after your compacted topics are loaded, simply call start() on the endpoint registry and it will start the others, as sketched below.
Or you can put the others in another containerGroup.
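A minimal sketch of that approach, assuming all containers are declared with autoStartup = false; the caught-up check is hypothetical and application-specific:
import org.springframework.context.SmartLifecycle;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Component;

// Starts the compacted-topic container first, waits until it has loaded its
// data, then starts all remaining containers via the registry.
@Component
public class CompactedTopicsFirst implements SmartLifecycle {

    private final KafkaListenerEndpointRegistry registry;
    private volatile boolean running;

    public CompactedTopicsFirst(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @Override
    public void start() {
        MessageListenerContainer compacted = registry.getListenerContainer("myId");
        compacted.start();
        waitUntilCaughtUp(compacted); // hypothetical: block until the compacted topic is loaded
        registry.start();             // then start all the other containers
        running = true;
    }

    @Override
    public void stop() {
        registry.stop();
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    private void waitUntilCaughtUp(MessageListenerContainer container) {
        // application-specific: e.g. compare consumed offsets to end offsets
    }
}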

Hazelcast - Right place to register a MapListener with an IMap

I have the following use case:
I have two Spring Boot applications on two separate machines. One application runs with an embedded Hazelcast, and the other application connects to that embedded Hazelcast.
I have two maps: one IMap and one MultiMap. I want to add an EntryEvictedListener
to the IMap. What I want to do is, on the eviction of an entry from the IMap, go to the MultiMap and remove the corresponding entry from it.
I am using Spring Java configuration. I wanted advice on where I should register the listener with the IMap. The class which implements the EntryEvictedListener interface (which will be registered with the IMap) is a Spring-managed bean and also has other Spring-managed beans autowired inside of it.
I was planning to register the MapListener in the Spring Boot application which connects to the embedded Hazelcast running in the other Spring Boot application. I was planning to do it inside a @PostConstruct method, so it runs only once.
Is this a good approach?
Thank you in advance.
EDIT:
class CustomListener implements HazelcastInstanceAware, EntryEvictedListener<String, String> {

    private HazelcastInstance hazelcastInstance;

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
        this.hazelcastInstance = hazelcastInstance;
    }

    @Override
    public void entryEvicted(EntryEvent<String, String> event) {
        // get the MultiMap from the Hazelcast instance
        // and remove the corresponding value
    }
}
The above works!
@indraneel-bende, please check this: http://docs.hazelcast.org/docs/latest-development/manual/html/Distributed_Events/Distributed_Object_Events/Listening_for_Map_Events.html#page_Registering+Map+Listeners
If you use Hazelcast-Spring config, you can add the listener in the Hazelcast Config, either XML config like in the doc or Java config, and that's it. Make sure that your MapListener is a Spring bean.
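For the Java-config variant, a minimal sketch; the map name "myMap" is an assumption, and CustomListener is the Spring-managed listener from the edit above:
import com.hazelcast.config.Config;
import com.hazelcast.config.EntryListenerConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastConfiguration {

    // Register the Spring-managed listener on the map via the Hazelcast Config,
    // so it is attached once, when the instance is created.
    @Bean
    public Config hazelcastConfig(CustomListener customListener) {
        Config config = new Config();
        config.getMapConfig("myMap")
              .addEntryListenerConfig(new EntryListenerConfig(customListener, false, true));
        return config;
    }
}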

Detecting refreshing of RefreshScope beans

It is my understanding that when you use Spring Cloud's RefreshScope annotation, a Proxy to the data is injected, and the proxy is automatically updated if the backing information is changed. Unfortunately, I need to find a way to be alerted when that refresh occurs, so that my code can re-read the data from the refresh-scoped bean.
Simple example: A scheduled task whose schedule is stored in Cloud Config. Unless you wait until the next execution of the task (which could take a while) or regularly poll the configuration (which seems wasteful), there's no way to know if the configuration has changed.
EnvironmentChangeEvent is fired when there is a change in the Environment. In terms of Spring Cloud Config, it is triggered when the /env actuator endpoint is called.
RefreshScopeRefreshedEvent is fired when a refresh of @RefreshScope beans has been initiated, e.g. when the /refresh actuator endpoint is called.
That means you need to register an ApplicationListener<RefreshScopeRefreshedEvent> like this:
@Configuration
public class AppConfig {

    @EventListener(RefreshScopeRefreshedEvent.class)
    public void onRefresh(RefreshScopeRefreshedEvent event) {
        // Your code goes here...
    }
}
When the refresh occurs, an EnvironmentChangeEvent is raised in your config client, as the documentation states:
"The application will listen for an EnvironmentChangeEvent and react to the change in a couple of standard ways (additional ApplicationListeners can be added as @Beans by the user in the normal way)."
So, you can define your own listener for this event:
public class YourEventListener implements ApplicationListener<EnvironmentChangeEvent> {

    @Override
    public void onApplicationEvent(EnvironmentChangeEvent event) {
        // do stuff
    }
}
I think an approach can be to annotate with @RefreshScope all your beans that have properties externalized in the configuration and injected via the @Value("${your.prop.key}") annotation.
These properties are updated when they change in the configuration.
More specifically, after the refresh of the properties and of the application context beans under the RefreshScope, a RefreshScopeRefreshedEvent is fired. You can register a listener for it with the guarantee that the properties have finished updating, so you will only capture updated values. See the sketch below for the scheduled-task example from the question.
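A minimal sketch for that scheduled-task example, assuming a TaskScheduler bean is available (e.g. a ThreadPoolTaskScheduler) and using a hypothetical property name task.delay-ms:
import java.time.Duration;
import java.util.concurrent.ScheduledFuture;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.cloud.context.scope.refresh.RefreshScopeRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.core.env.Environment;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.stereotype.Component;

@Component
public class ReschedulingTask {

    private final TaskScheduler scheduler;
    private final Environment environment;
    private ScheduledFuture<?> future;

    public ReschedulingTask(TaskScheduler scheduler, Environment environment) {
        this.scheduler = scheduler;
        this.environment = environment;
    }

    // Re-read the (possibly changed) delay and reschedule on startup and on
    // every refresh; the stale schedule is cancelled first.
    @EventListener({ ApplicationReadyEvent.class, RefreshScopeRefreshedEvent.class })
    public void reschedule() {
        if (future != null) {
            future.cancel(false);
        }
        long delayMs = environment.getProperty("task.delay-ms", Long.class, 60_000L);
        future = scheduler.scheduleWithFixedDelay(this::run, Duration.ofMillis(delayMs));
    }

    private void run() {
        // the actual task
    }
}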

How to make the app server start even if a database is down?

I am using Spring and Hibernate. My application has 3 modules, and each module has a specific database, so the application deals with 3 databases. On server startup, if any one of the databases is down, the server does not start. My requirement is that even if one of the databases is down, the server should still start; since the other modules' databases are up, the user can work with the other two modules. Please suggest how I can achieve this.
I am using Spring 3.x and Hibernate 3.x. I am also using c3p0 connection pooling.
The app server is Tomcat.
Thanks!
I would use the @Configuration annotation to make an object whose job it is to construct the beans and deal with the DB-down scenario. When constructing the beans, test whether the DB connections are up; if not, return a dummy version of your bean. This dummy will get injected into the relevant objects, and its job is really just to throw an "unavailable" exception when called. If your app can deal with these unavailable exceptions for certain functions, show them to the user, and continue to function where the other data sources are used, you should be fine.
@Configuration
public class DataAccessConfiguration {

    @Bean
    public DataSource dataSource() {
        try {
            // create the data source to your database
            ....
            return realDataSource;
        } catch (Exception e) {
            // create a dummy data source
            ....
            return dummyDataSource;
        }
    }
}
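A minimal sketch of what such a dummy could look like; the class name UnavailableDataSource is hypothetical, and every use fails fast with a clear error instead of preventing startup:
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.logging.Logger;
import javax.sql.DataSource;

public class UnavailableDataSource implements DataSource {

    @Override
    public Connection getConnection() throws SQLException {
        throw new SQLException("Database is currently unavailable");
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return getConnection();
    }

    // The remaining methods are boilerplate required by the interface.
    @Override
    public PrintWriter getLogWriter() { return null; }

    @Override
    public void setLogWriter(PrintWriter out) { }

    @Override
    public void setLoginTimeout(int seconds) { }

    @Override
    public int getLoginTimeout() { return 0; }

    @Override
    public Logger getParentLogger() { return Logger.getGlobal(); }

    @Override
    public <T> T unwrap(Class<T> iface) throws SQLException {
        throw new SQLException("Not a wrapper");
    }

    @Override
    public boolean isWrapperFor(Class<?> iface) { return false; }
}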
This was originally a comment:
Have you tried it? You wouldn't know whether a database is down until you connect to it, so unless c3p0 prevalidates all its connections, you wouldn't know that a particular database is down until you try to use it. By that time your application will have already started.
