Delegate processing during Tomcat startup with Spring - java

I have defined a bean which needs to do some heavy processing during the @PostConstruct lifecycle phase (during startup).
As it stands, I submit a new Callable to an executor service with each iteration of the processing loop. I keep a list of the Future objects returned from these submissions in a member variable.
@Component
@Scope("singleton")
public class StartupManager implements ApplicationListener<ContextRefreshedEvent> {

    @Autowired
    private ExecutorService executorService;

    private final Map<Class<?>, Optional<Action>> actionMappings = new ConcurrentHashMap<>();
    private final List<Future> processingTasks = Collections.synchronizedList(new ArrayList<>());

    @PostConstruct
    public void init() throws ExecutionException, InterruptedException {
        this.controllers.getHandlerMethods().entrySet().stream().forEach(handlerItem -> {
            processingTasks.add(executorService.submit(() -> {
                // processing
            }));
        });
    }
}
This same bean implements the ApplicationListener interface, so it can listen for a ContextRefreshedEvent which allows me to detect when the application has finished starting up. I use this handler to loop through the list of Futures and invoke the blocking get method which ensures that all of the processing has occurred before the application continues.
@Override
public void onApplicationEvent(ContextRefreshedEvent applicationEvent) {
    for (Future task : this.processingTasks) {
        try {
            task.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e.getMessage());
        }
    }
}
My first question: is changing the actionMapping stream to a parallelStream going to achieve the same thing as submitting a task to the executor service? Is there a way I can pass an existing executor service into a parallel stream so that it uses the thread pool size I've defined for the bean?
Secondly, as part of the processing the actionMappings map is read and entries are put into it. Is making this map a ConcurrentHashMap sufficient to make it thread-safe in this scenario?
And thirdly, is implementing the ApplicationListener interface and listening for the ContextRefreshedEvent the best way to detect when the application has started up, and therefore to force completion of the unprocessed tasks by blocking? Or can this be done another way?
Thanks.

About using parallelStream(): No, and this is precisely the main drawback of that method: you have no control over which thread pool it uses. It should be used only when the thread pool size doesn't matter, so I think your ExecutorService-based approach is fine.
Since you are working with Java 8, you could also use the CompletableFuture.supplyAsync() method, which has an overload that takes an Executor. Since ExecutorService extends Executor, you can pass it your ExecutorService and you're done!
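For instance, here is a minimal sketch of your init() rewritten with CompletableFuture. Since your tasks don't return a value, runAsync() (which also has an Executor overload) is used instead of supplyAsync(); everything else mirrors the code from the question:

private final List<CompletableFuture<Void>> processingTasks = new ArrayList<>();

@PostConstruct
public void init() {
    this.controllers.getHandlerMethods().forEach((mapping, handlerMethod) ->
            processingTasks.add(CompletableFuture.runAsync(() -> {
                // processing
            }, executorService)));
}

// wherever you currently block on the Futures, wait for everything at once:
CompletableFuture.allOf(processingTasks.toArray(new CompletableFuture[0])).join();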
I think a ConcurrentHashMap is fine. It ensures thread safety in all its operations, especially when comes the time to add or modify entries.
When is a ContextRefreshedEvent fired? According to the Javadoc:
Event raised when an ApplicationContext gets initialized or refreshed.
which doesn't guarantee that your onApplicationEvent() method is called once and only once, that is, exactly when your bean is properly initialized, which includes execution of the @PostConstruct-annotated method.
I suggest you implement the BeanPostProcessor interface and put your Future-checkup logic in the postProcessAfterInitialization() method. The two BeanPostProcessor methods are called before and after the InitializingBean.afterPropertiesSet() method (if present), respectively.
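A rough sketch of that idea; awaitProcessing() is a hypothetical method on StartupManager that wraps the Future.get() loop from your onApplicationEvent() handler:

@Component
public class StartupManagerPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        return bean; // nothing to do before initialization
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof StartupManager) {
            // block until all submitted tasks have finished;
            // awaitProcessing() is assumed to wrap the Future.get() loop
            ((StartupManager) bean).awaitProcessing();
        }
        return bean;
    }
}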
I hope this will be helpful...
Cheers,
Jeff

Related

scope of @KafkaListener

I just want to understand what the scope of @KafkaListener is, prototype or singleton. In the case of multiple consumers of a single topic, does it return a single instance or multiple instances? In my case, I have multiple customers subscribed to a single topic to get reports. I just wanted to know what would happen if multiple customers want to query for the report at the same time. In my case, I am closing the container after successful consumption of messages, but at the same time, if some other person wants to fetch reports, the container should be open.
How can I change the scope to prototype (if it is not already), tied to the IDs of the container, so that a separate instance can be generated each time?
@KafkaListener(id = "id1", topics = "testTopic")
public void listen() {
    // code goes here
}
A single listener instance is invoked for all consuming threads.
The @KafkaListener bean is not prototype scoped, and it is not possible to make it so with this annotation either.
4.1.10. Thread Safety
When using a concurrent message listener container, a single listener instance is invoked on all consumer threads. Listeners, therefore, need to be thread-safe, and it is preferable to use stateless listeners. If it is not possible to make your listener thread-safe or adding synchronization would significantly reduce the benefit of adding concurrency, you can use one of a few techniques:
Use n containers with concurrency=1 with a prototype scoped MessageListener bean so that each container gets its own instance (this is not possible when using @KafkaListener).
Keep the state in ThreadLocal<?> instances.
Have the singleton listener delegate to a bean that is declared in SimpleThreadScope (or a similar scope).
To facilitate cleaning up thread state (for the second and third items in the preceding list), starting with version 2.2, the listener container publishes a ConsumerStoppedEvent when each thread exits. You can consume these events with an ApplicationListener or @EventListener method to remove ThreadLocal<?> instances or remove() thread-scoped beans from the scope. Note that SimpleThreadScope does not destroy beans that have a destruction interface (such as DisposableBean), so you should destroy() the instance yourself.
By default, the application context’s event multicaster invokes event listeners on the calling thread. If you change the multicaster to use an async executor, thread cleanup is not effective.
https://docs.spring.io/spring-kafka/reference/html/
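For the ThreadLocal option, such a cleanup hook might look roughly like this (a sketch only; SomeState and the field name are placeholders for whatever per-thread state your listener keeps):

@Component
public class ListenerStateCleanup {

    // hypothetical per-thread state used by the listener
    public static final ThreadLocal<SomeState> STATE = new ThreadLocal<>();

    @EventListener
    public void onConsumerStopped(ConsumerStoppedEvent event) {
        // with the default synchronous multicaster this runs on the consumer thread
        // that is exiting, so remove() clears exactly that thread's entry
        STATE.remove();
    }
}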
=== Edited ===
Let's take their 3rd option (declaring a SimpleThreadScope and delegating to it).
Register the SimpleThreadScope. It is not picked up automatically; you need to register it like below:
@Bean
public static BeanFactoryPostProcessor beanFactoryPostProcessor() {
    return new BeanFactoryPostProcessor() {
        @Override
        public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
            beanFactory.registerScope("thread", new SimpleThreadScope());
        }
    };
}
Create a component with scopeName = "thread"
@Component
@Scope(scopeName = "thread", proxyMode = ScopedProxyMode.TARGET_CLASS)
public class KafkaDelegate {

    public void handleMessageFromKafkaListener(String message) {
        // Do some stuff here with the message
    }
}
Create a @Service:
@Service
public class KafkaListenerService {

    @Autowired
    private KafkaDelegate kafkaDelegate;

    @KafkaListener(id = "id1", topics = "testTopic")
    public void listen(String message) {
        kafkaDelegate.handleMessageFromKafkaListener(message);
    }
}
Another example: How to implement a stateful message listener using Spring Kafka?
See this answer for an example of how to use a prototype scoped @KafkaListener bean.

Project Reactor create multiple scheduler from the same Executor

In my application I have a single Java Executor. Can it create issues if I create multiple Schedulers from the same Executor instance, with something like:
public class MyServiceController {

    @Autowired
    private Executor mainExecutor;

    public Object something() {
        return Flux.from(somethingElse())
                .publishOn(Schedulers.fromExecutor(mainExecutor))
                .toFuture();
    }
}
(Or having multiple classes implementing this pattern, all of them having the same instance of mainExecutor)
It should be fine in both cases: the same Executor will back all the Scheduler.Worker instances spawned by each Scheduler you create from it, and the same is true if it is an ExecutorService (albeit with a different Scheduler implementation for the wrapper).
For clarity, I'd still consider making the Scheduler a singleton next to the Executor.
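For example, a sketch of a shared Scheduler bean created once from the existing executor (assuming Java config and that the Executor is itself a Spring bean; the names are illustrative):

@Configuration
public class SchedulerConfig {

    @Bean
    public Scheduler mainScheduler(Executor mainExecutor) {
        // wrap the shared Executor once and reuse this Scheduler everywhere
        return Schedulers.fromExecutor(mainExecutor);
    }
}

Controllers can then inject the Scheduler directly and call .publishOn(mainScheduler) instead of wrapping the executor on every request.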

How to manage shutdown of ExecutorService when we allow to inject it?

Suppose I am writing a service which needs an executor service/separate thread. I provide a factory method so callers don't have to worry about the executor service, but I still want to allow passing in an existing executor service (dependency injection).
How can I manage calling executorService.shutdown()?
Example code:
public class ThingsScheduler {

    private final ExecutorService executorService;

    public ThingsScheduler(ExecutorService executorService) {
        this.executorService = executorService;
    }

    public static ThingsScheduler createDefaultSingleThreaded() {
        return new ThingsScheduler(Executors.newSingleThreadExecutor());
    }

    public void scheduleThing() {
        executorService.submit(new SomeTask());
    }

    // implement Closeable?
    // @PreDestroy?
    // .shutdown() + JavaDoc?
}
There are several problems:
We should have the ability to shut down an internally created executor, or ideally handle it automatically (Spring @PreDestroy, or in the worst case finalize()).
We should rather not shut down the executor if it's externally managed (injected).
We could add an attribute stating whether the executor was created by our class or injected, and then shut it down from finalize()/@PreDestroy/a shutdown hook, but that doesn't feel elegant to me.
Maybe we should drop the factory method entirely and always require injection, pushing executor lifecycle management to the client?
You may create an instance of an anonymous subclass from your default factory, as shown below. The subclass defines a close/@PreDestroy method which will be called by your DI container.
e.g.
public class ThingsScheduler {

    final ExecutorService executorService;

    public ThingsScheduler(ExecutorService executorService) {
        this.executorService = executorService;
    }

    /**
     * assuming you are using this method as a factory method to make the returned
     * bean managed by your DI container
     */
    public static ThingsScheduler createDefaultSingleThreaded() {
        return new ThingsScheduler(Executors.newSingleThreadExecutor()) {
            @PreDestroy
            public void close() {
                System.out.println("closing the bean");
                executorService.shutdown();
            }
        };
    }
}
I would say that the solution is fully up to you. Third-party libraries like Spring widely use a dedicated attribute to understand who should release a particular resource, depending on its creator: mongoInstanceCreated in SimpleMongoDbFactory, localServer in SimpleHttpServerJaxWsServiceExporter, etc. But they do it because these classes are created only for external usage. If your class is only used in your application code, then you can either inject the executorService and not care about releasing it, or create and release it inside the class which uses it. This choice depends on your class/application design (does your class work with any executorService, is the executorService shared and used by other classes, etc.). Otherwise I don't see any option other than the dedicated flag.
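A sketch of that dedicated-flag approach applied to ThingsScheduler (the ownsExecutor name is just illustrative):

public class ThingsScheduler {

    private final ExecutorService executorService;
    private final boolean ownsExecutor; // true only when we created the executor ourselves

    public ThingsScheduler(ExecutorService executorService) {
        this(executorService, false); // injected: the caller manages its lifecycle
    }

    private ThingsScheduler(ExecutorService executorService, boolean ownsExecutor) {
        this.executorService = executorService;
        this.ownsExecutor = ownsExecutor;
    }

    public static ThingsScheduler createDefaultSingleThreaded() {
        return new ThingsScheduler(Executors.newSingleThreadExecutor(), true);
    }

    @PreDestroy
    public void close() {
        if (ownsExecutor) {
            executorService.shutdown();
        }
    }
}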
More "elegant" solution would be to extend your ExecutorService and in it override shutdown method (whichever you choose). In case of injection, you would return that extended type and it would have it's own shutdown logic. In case of factory - you still have original logic.
After some more thinking I came up with some conclusions:
do not think about shutting it down if it's injected: someone else created it, someone else will manage its lifecycle
an executor factory could be injected instead of the Executor; then we create the instance using the factory and manage closing it ourselves, since we manage the lifecycle (in that case the answers from other users apply); see the sketch below
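A minimal sketch of that factory-injection variant, using Supplier<ExecutorService> as the factory type (the names are illustrative):

public class ThingsScheduler {

    private final ExecutorService executorService;

    // the factory is injected; since we create the instance, we also own shutting it down
    public ThingsScheduler(Supplier<ExecutorService> executorFactory) {
        this.executorService = executorFactory.get();
    }

    public void scheduleThing() {
        executorService.submit(new SomeTask());
    }

    @PreDestroy
    public void close() {
        executorService.shutdown();
    }
}

// usage: the client decides what kind of executor gets created
ThingsScheduler scheduler = new ThingsScheduler(Executors::newSingleThreadExecutor);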

Ejb Timer throwing javax.ejb.ConcurrentAccessTimeoutException: Unable to get write lock on

My application is running on TomEE, and I have an EJB timer triggering the timeout method every two minutes. The timer triggered the timeout method the first time, and it was still running when the timer tried to trigger the same method a second time. It then threw the following exception:
javax.ejb.ConcurrentAccessTimeoutException: Unable to get write lock on 'timeout' method for: com.abc.xyz
at org.apache.openejb.core.singleton.SingletonContainer.aquireLock(SingletonContainer.java:298)
at org.apache.openejb.core.singleton.SingletonContainer._invoke(SingletonContainer.java:217)
at org.apache.openejb.core.singleton.SingletonContainer.invoke(SingletonContainer.java:197)
at org.apache.openejb.core.timer.EjbTimerServiceImpl.ejbTimeout(EjbTimerServiceImpl.java:769)
at org.apache.openejb.core.timer.EjbTimeoutJob.execute(EjbTimeoutJob.java:39)
at org.quartz.core.JobRunShell.run(JobRunShell.java:207)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:560)
All my log is filled up with the same stack trace, and it continues to occur until I stop the server.
Can we make the timer service not trigger the method if it is already running?
Or is there a way to time out the first call before it is triggered again?
Thanks,
Is your timed EJB a singleton bean?
By default, singletons use container-managed concurrency with write locks that guarantee exclusive access for all methods.
The openejb.xml configures the AccessTimeout for a singleton EJB; after that timeout, the exception you have seen is thrown. Please see here as well: http://tomee.apache.org/singleton-beans.html
Solutions might be:
Use a stateless session bean as the timer bean
Define a read lock on the timer method (see the sketch after this list)
Don't use a repeating timer but schedule the next execution of your timer at the end of the current execution.
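A minimal sketch of the read-lock option; note that this allows overlapping executions of the timeout method, so the method must be thread-safe (class and schedule here are illustrative):

@Singleton
public class ReadLockedTimer {

    @Lock(LockType.READ) // allow concurrent invocations instead of the default write lock
    @Schedule(minute = "*/2", hour = "*", persistent = false)
    public void doWork() {
        // ... long-running work; a second timeout may now run while this one is still busy
    }
}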
If you want to avoid running multiple times in parallel, but also want to avoid having the scheduled runs queue up, then I have another proposal.
This way I let scheduled runs be "skipped" if the previous one is still running:
@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class Example
{
    private final AtomicBoolean alreadyRunning = new AtomicBoolean(false);

    @Schedule(minute = "*", hour = "*", persistent = false)
    public void doWork()
    {
        if (alreadyRunning.getAndSet(true)) return;
        try
        {
            // ... your code
        }
        finally
        {
            alreadyRunning.set(false);
        }
    }
}

How to Wire Dependent @Async methods

I have two Spring-based async thread pools and methods in the same Spring bean. doWork() uses the default Spring thread pool and holdAndReprocess() uses its own Spring thread pool.
I currently have my class set up like below, where doWork() processes some work and then, if a failure occurs, parks the thread in the holdAndReprocess() "queue" thread pool, which waits and then reprocesses by calling doWork() again. With my current setup, the call to holdAndReprocess() and then the call back to doWork() is synchronous. Any ideas on how to wire this so that all communication between doWork() and holdAndReprocess() is asynchronous?
I'm using XML-backed configuration, not pure annotation-driven Spring beans.
public class AsyncSampleImpl implements AsyncSample {

    @Async
    public void doWork() {
        holdAndReprocess();
    }

    @Async("queue")
    @Transactional(propagation = Propagation.REQUIRED)
    public void holdAndReprocess() {
        // sleeps thread for static amount of time and then reprocesses
        doWork();
    }
}
Read https://stackoverflow.com/a/4500353/516167
As you're calling your @Async method from another method in the same object, you're probably bypassing the async proxy code and just calling your plain method, i.e. within the same thread.
Split this bean into two beans and invoke holdAndReprocess() from a separate bean.
The same rule also applies to @Transactional annotations.
Read about this: https://stackoverflow.com/a/5109419/516167
From Spring Reference Documentation Section 11.5.6, "Using @Transactional":
In proxy mode (which is the default), only 'external' method calls coming in through the proxy will be intercepted. This means that 'self-invocation', i.e. a method within the target object calling some other method of the target object, won't lead to an actual transaction at runtime even if the invoked method is marked with @Transactional!
Draft
public class AsyncSampleImpl implements AsyncSample {

    public void doWork() {
        reprocessor.holdAndReprocess();
    }

    public void holdAndReprocess() {
        // sleeps thread for static amount of time and then reprocesses
        worker.doWork();
    }
}
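A fuller sketch of that split into two beans; the class names AsyncWorker and AsyncReprocessor are illustrative, @Lazy on one injection point keeps the circular reference between the two singletons resolvable, and with XML-backed configuration you would declare the two beans in XML rather than with @Component:

@Component
public class AsyncWorker {

    @Autowired
    private AsyncReprocessor reprocessor;

    @Async // runs on the default Spring executor
    public void doWork() {
        // ... do the work; on failure, hand off through the reprocessor's async proxy
        reprocessor.holdAndReprocess();
    }
}

// in its own file
@Component
public class AsyncReprocessor {

    @Autowired
    @Lazy // breaks the circular dependency between the two beans
    private AsyncWorker worker;

    @Async("queue") // runs on the "queue" executor
    @Transactional(propagation = Propagation.REQUIRED)
    public void holdAndReprocess() {
        // sleep for the back-off period, then go back through the worker's async proxy
        worker.doWork();
    }
}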
