How to start a Kafka listener manually?

I have some methods annotated with @KafkaListener, but I want to start only some of them manually (depending on some conditions).
@KafkaListener(id = "consumer1", topics = "topic-name", clientIdPrefix = "client-prefix", autoStartup = "false")
public void consumer1(String message) {
    // consume
}

@PostConstruct
private void startConsumers() {
    if (true) { // placeholder for the actual conditions
        kafkaListenerEndpointRegistry.getListenerContainer("consumer1").start();
    }
}
But at this moment kafkaListenerEndpointRegistry.getListenerContainers() returns an empty list and kafkaListenerEndpointRegistry.getListenerContainer("consumer1") returns null. So perhaps the moment the @PostConstruct method is called is too early, and the listeners are not yet registered.
I tried annotating the startConsumers() method with @Scheduled(fixedDelay = 100), and by then the listeners are available. But using @Scheduled is not a good solution for something I want to call once after the application starts.

You can't do it in @PostConstruct - it's too early in the application context life cycle.
Implement SmartLifecycle, set the phase to Integer.MAX_VALUE, and start the container in the start() method.
Or use an @EventListener and listen for the ApplicationStartedEvent (if using Spring Boot) or ContextRefreshedEvent for a non-Boot Spring application.
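For illustration, here is a minimal sketch of the @EventListener route in a Spring Boot application; the shouldStart() check stands in for whatever conditions you need:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.context.event.ApplicationStartedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class ConsumerStarter {

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    @EventListener
    public void onApplicationStarted(ApplicationStartedEvent event) {
        // By the time this event fires, all listener containers are registered,
        // so the lookup below no longer returns null.
        if (shouldStart()) { // hypothetical condition
            kafkaListenerEndpointRegistry.getListenerContainer("consumer1").start();
        }
    }

    private boolean shouldStart() {
        return true; // placeholder for your actual condition
    }
}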

Related

How to make method transactional to manipulate entities in Liferay 7.1

I need to do multiple writes to the DB under a single transaction using Liferay 7.1. Basically, my question is: would this work?
@Component(service = MyService.class)
public class MyService {

    private OrganizationLocalService localService;

    @Reference(unbind = "-")
    protected void setOrganizationLocalService(OrganizationLocalService localService) {
        this.localService = localService;
    }

    @Transactional(rollbackFor = IllegalArgumentException.class)
    public void doInTransaction() {
        try {
            localService.createOrganization(...);
            localService.updateOrganization(...);
            // more
        } catch (IllegalArgumentException e) {
            // rollback logic
        }
    }
}
There are also Liferay event listeners built to run as part of the service calls used to manipulate Liferay entities. Those event listeners do additional work, such as sending messages to Kafka topics, and I am not sure whether introducing transactions would disrupt them.
By default in Liferay, every method at the LocalService level is transactional.
You therefore have to collect all the tasks in a single local service method to ensure a single transactional environment.
The @Transactional annotation is not effective the way you have tried to use it; this is not a Spring environment.
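As a rough illustration of that advice, here is a hedged sketch; the entity name, the method, and the way organizationLocalService gets wired in are all assumptions, and the details depend on your service builder setup:
// Hypothetical custom service-builder implementation class.
public class OrgBatchLocalServiceImpl extends OrgBatchLocalServiceBaseImpl {

    // Liferay wraps every local service entry point in a transaction by default,
    // so when this method is invoked from outside, both writes below run in the
    // same transaction and roll back together if a system exception escapes.
    public void createAndUpdateOrganizations(Organization toCreate, Organization toUpdate) {
        // organizationLocalService is assumed to be wired in (for example, a
        // generated service reference); addOrganization/updateOrganization are
        // the CRUD methods service builder generates.
        organizationLocalService.addOrganization(toCreate);
        organizationLocalService.updateOrganization(toUpdate);
    }
}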

Scope of @KafkaListener

I just want to understand what the scope of @KafkaListener is: prototype or singleton. In the case of multiple consumers of a single topic, does it return a single instance or multiple instances? In my case, multiple customers are subscribed to a single topic and get reports. I just wanted to know what would happen if multiple customers query for the report at the same time. I am closing the container after successful consumption of the messages, but if at the same time some other person wants to fetch reports, the container should be open.
Also, how do I change the scope to prototype (if it is not already), associated with the IDs of the containers, so that a separate instance can be generated each time?
@KafkaListener(id = "id1", topics = "testTopic")
public void listen() {
    // code goes here
}
A single listener instance is invoked for all consuming threads.
The annotation @KafkaListener is not prototype scoped, and prototype scope is not possible with this annotation either.
4.1.10. Thread Safety
When using a concurrent message listener container, a single listener instance is invoked on all consumer threads. Listeners, therefore, need to be thread-safe, and it is preferable to use stateless listeners. If it is not possible to make your listener thread-safe or adding synchronization would significantly reduce the benefit of adding concurrency, you can use one of a few techniques:
Use n containers with concurrency=1 with a prototype-scoped MessageListener bean so that each container gets its own instance (this is not possible when using @KafkaListener).
Keep the state in ThreadLocal<?> instances.
Have the singleton listener delegate to a bean that is declared in SimpleThreadScope (or a similar scope).
To facilitate cleaning up thread state (for the second and third items in the preceding list), starting with version 2.2, the listener container publishes a ConsumerStoppedEvent when each thread exits. You can consume these events with an ApplicationListener or #EventListener method to remove ThreadLocal<?> instances or remove() thread-scoped beans from the scope. Note that SimpleThreadScope does not destroy beans that have a destruction interface (such as DisposableBean), so you should destroy() the instance yourself.
By default, the application context’s event multicaster invokes event listeners on the calling thread. If you change the multicaster to use an async executor, thread cleanup is not effective.
https://docs.spring.io/spring-kafka/reference/html/
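To make the cleanup hook from the quoted docs concrete, here is a minimal sketch, where THREAD_STATE is a hypothetical ThreadLocal your listener might populate:
import org.springframework.context.event.EventListener;
import org.springframework.kafka.event.ConsumerStoppedEvent;
import org.springframework.stereotype.Component;

@Component
public class ConsumerThreadStateCleaner {

    // Hypothetical per-thread state populated by the listener.
    public static final ThreadLocal<Object> THREAD_STATE = new ThreadLocal<>();

    @EventListener
    public void onConsumerStopped(ConsumerStoppedEvent event) {
        // With the default synchronous event multicaster this runs on the
        // consumer thread that is exiting, so remove() clears its state.
        THREAD_STATE.remove();
    }
}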
=== Edited ===
Let's take their third option (declaring a SimpleThreadScope and delegating to it).
Register the SimpleThreadScope. It is not picked up automatically; you need to register it like below:
@Bean
public static BeanFactoryPostProcessor beanFactoryPostProcessor() {
    return new BeanFactoryPostProcessor() {
        @Override
        public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
            beanFactory.registerScope("thread", new SimpleThreadScope());
        }
    };
}
Create a component with scopeName = "thread":
@Component
@Scope(scopeName = "thread", proxyMode = ScopedProxyMode.TARGET_CLASS)
public class KafkaDelegate {

    public void handleMessageFromKafkaListener(String message) {
        // Do some stuff here with the message
    }
}
Create a @Service:
@Service
public class KafkaListenerService {

    @Autowired
    private KafkaDelegate kafkaDelegate;

    @KafkaListener(id = "id1", topics = "testTopic")
    public void listen(String message) {
        kafkaDelegate.handleMessageFromKafkaListener(message);
    }
}
Another example: How to implement a stateful message listener using Spring Kafka?
See this answer for an example of how to use a prototype-scoped @KafkaListener bean.

Spring Boot Async Programming

I'm new to software development. I'm working to understand async programming in Spring Boot. As seen in the code below, I set the thread pool size to 2. When I request the same URL three times, one after another, two of my requests run asynchronously and the third one waits. This is OK. But when I don't use the asynchronous feature (neither the @Async annotation nor the thread pool), it still serves the requests concurrently, as before. So I'm confused. Does a Spring Boot REST controller behave asynchronously by default? Why do we use @Async in Spring Boot? Or did I misunderstand something?
@Service
public class TenantService {

    @Autowired
    private TenantRepository tenantRepository;

    @Async("threadPoolTaskExecutor")
    public Future<List<Tenant>> getAllTenants() {
        System.out.println("Execute method asynchronously - "
                + Thread.currentThread().getName());
        try {
            List<Tenant> allTenants = tenantRepository.findAll();
            Thread.sleep(5000);
            return new AsyncResult<>(allTenants);
        } catch (InterruptedException e) {
            // interrupted; fall through and return null
        }
        return null;
    }
}
@Configuration
@EnableAsync
public class AsyncConfig {

    @Bean(name = "threadPoolTaskExecutor")
    public Executor threadPoolTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);
        executor.setMaxPoolSize(2);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("AsynchThread-");
        executor.initialize();
        return executor;
    }

    @Bean(name = "threadPoolTaskExecutor2")
    public Executor threadPoolTaskExecutor2() {
        return new ThreadPoolTaskExecutor();
    }
}
I'm assuming you are using the default embedded Tomcat from Spring Boot. If that's the case, then you are not misunderstanding: Tomcat does indeed work asynchronously by default, meaning it serves every request on its own thread from its pool (see this for more on that).
The @Async annotation does not aim to replace the functionality that Tomcat provides in this case. Instead, that annotation allows executing any method of a bean in a separate thread. For your particular use case, it might be enough to let Tomcat serve every request on its own thread, but sometimes you might want to parallelize work further.
An example of when you would probably want to use both is when a request must trigger some heavy computation, but the response does not depend on it. By using the @Async annotation, you can start the heavy computation on another thread and let the request finish sooner (effectively allowing the server to handle other requests while the heavy computation runs independently on another thread).
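As a sketch of that pattern (ReportService, generateHeavyReport, and the endpoint are made-up names), the controller hands the heavy work off to the executor and returns immediately:
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@Service
class ReportService {

    @Async("threadPoolTaskExecutor") // the executor bean from the configuration above
    public void generateHeavyReport() {
        // long-running work; the HTTP request does not wait for it
    }
}

@RestController
class ReportController {

    private final ReportService reportService;

    ReportController(ReportService reportService) {
        this.reportService = reportService;
    }

    @PostMapping("/reports")
    public String triggerReport() {
        reportService.generateHeavyReport(); // returns immediately
        return "Report generation started";
    }
}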

Under which circumstances would ApplicationEventPublisher.publishEvent not fire?

I am trying to build a DiscoveryClient and I want it to fire an event when there is a change to the routes. I am using
publisher.publishEvent(new InstanceRegisteredEvent<>(this, "serviceName"));
However, the event does not actually fire, even if it is the same object. I suspect it is because it runs on a different thread, but @Scheduled methods also run on a different thread and the event fires successfully there.
The circumstance I hit was that I was using the ApplicationEventPublisher that was provided during the BootstrapAutoConfiguration phase of the application. Because of that, the events I published did not get propagated as expected.
To get around it, I had to replace the ApplicationEventPublisher that had been injected during bootstrap with one from the main application context, by adding another AutoConfiguration that executes during the AutoConfiguration phase rather than the Bootstrap phase.
I also added ApplicationEventPublisherAware (though it is optional) to the class, in my case DockerSwarmDiscovery:
@Configuration
@ConditionalOnBean(DockerSwarmDiscovery.class)
@Slf4j
public class DockerSwarmDiscoveryWatchAutoConfiguration {

    @Autowired
    private DockerSwarmDiscovery dockerSwarmDiscovery;

    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @PostConstruct
    public void injectPublisher() {
        dockerSwarmDiscovery.setApplicationEventPublisher(applicationEventPublisher);
    }
}
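For completeness, the ApplicationEventPublisherAware side could look roughly like this; a sketch, where publishRouteChange and the moment it is invoked are assumptions:
import org.springframework.cloud.client.discovery.event.InstanceRegisteredEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationEventPublisherAware;

public class DockerSwarmDiscovery implements ApplicationEventPublisherAware {

    private ApplicationEventPublisher publisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    // Hypothetical hook invoked when the discovered routes change.
    public void publishRouteChange() {
        publisher.publishEvent(new InstanceRegisteredEvent<>(this, "serviceName"));
    }
}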

Spring cache using @Cacheable during @PostConstruct does not work

Related to this commit in Spring Framework: https://github.com/spring-projects/spring-framework/commit/5aefcc802ef05abc51bbfbeb4a78b3032ff9eee3
The initialisation was moved to a later stage, from afterPropertiesSet() to afterSingletonsInstantiated().
In short:
This prevents caching from working when it is used in a @PostConstruct use case.
Longer version:
This prevents the use case where you would
create serviceB with @Cacheable on a methodB, and
create serviceA with @PostConstruct calling serviceB.methodB:
@Component
public class ServiceA {

    @Autowired
    private ServiceB serviceB;

    @PostConstruct
    public void init() {
        List<String> list = serviceB.loadSomething();
    }
}
This results in org.springframework.cache.interceptor.CacheAspectSupport not yet being initialised, and thus the result is not cached.
protected Object execute(CacheOperationInvoker invoker, Object target, Method method, Object[] args) {
    // check whether aspect is enabled
    // to cope with cases where the AJ is pulled in automatically
    if (this.initialized) {
        // >>>>>>>>>>>> NOT being called
        Class<?> targetClass = getTargetClass(target);
        Collection<CacheOperation> operations = getCacheOperationSource().getCacheOperations(method, targetClass);
        if (!CollectionUtils.isEmpty(operations)) {
            return execute(invoker, new CacheOperationContexts(operations, method, args, target, targetClass));
        }
    }
    // >>>>>>>>>>>> being called
    return invoker.invoke();
}
My workaround is to manually call the initialisation method:
@Configuration
public class SomeConfigClass {

    @Inject
    private CacheInterceptor cacheInterceptor;

    @PostConstruct
    public void init() {
        cacheInterceptor.afterSingletonsInstantiated();
    }
}
This of course fixes my issue, but does it have side effects other than just being called twice (once manually and once by the framework, as intended)?
My question is:
"Is this a safe workaround, given that the initial committer seemed to have an issue with just using afterPropertiesSet()?"
As Marten said already, you are not supposed to use any of those services in the @PostConstruct phase because you have no guarantee that the proxy interceptor has fully started at that point.
Your best shot at pre-loading your cache is to listen for ContextRefreshedEvent (more support coming in 4.2) and do the work there. That being said, I understand that it may not be clear that such usage is forbidden, so I've created SPR-12700 to improve the documentation. I am not sure what javadoc you were referring to.
To answer your question: no, it's not a safe workaround. What you were using before worked by side effect (i.e. it wasn't supposed to work; if your bean was initialized before the CacheInterceptor, you would have the same problem with an older version of the framework). Don't call such low-level infrastructure in your own code.
I just had the exact same problem as the OP, and listening to ContextRefreshedEvent caused my initialization method to be called twice. Listening to ApplicationReadyEvent worked best for me.
Here is the code I used:
@Component
public class MyInitializer implements ApplicationListener<ApplicationReadyEvent> {

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        // doing things
    }
}
Autowire the ApplicationContext and invoke the method using:
applicationContext.getBean(RBService.class).getRawBundle(bundleName, DEFAULT_REQUEST_LANG);
where getRawBundle is the @Cacheable method.
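Put together, that suggestion might look like the sketch below; BundlePreloader and the argument values are made up, while RBService and getRawBundle come from the answer above:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationContext;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class BundlePreloader {

    @Autowired
    private ApplicationContext applicationContext;

    @EventListener
    public void preload(ApplicationReadyEvent event) {
        // Looking the bean up through the context returns the proxied instance,
        // and by this point the cache infrastructure is initialised, so the
        // result of getRawBundle is cached.
        applicationContext.getBean(RBService.class)
                .getRawBundle("myBundle", "en"); // hypothetical arguments
    }
}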
