I have a Spring Boot app that implements an AMQP MessageListener. This listener invokes an @Async method managed by a ThreadPoolTaskExecutor with a fixed pool size. The problem occurs when there are many incoming messages: messages are lost because there are no asynchronous workers available.
I am using Spring Core 5.0.7.RELEASE and Java 8.
This is my code:
AsyncConfigurator:
@EnableAsync
@Configuration
public class AsyncConfiguration extends AsyncConfigurerSupport {

    @Override
    @Bean("docThreadPoolTaskExecutor")
    public Executor getAsyncExecutor() {
        final ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.setThreadNamePrefix("DocWorkerThread-");
        executor.initialize();
        return executor;
    }
}
My backend service (MyAsyncService):
#Async("docThreadPoolTaskExecutor")
#Override
public void generaDocumento(String idUser, int year) {
//... some heavy and slow process
}
My message listener:
...
@Autowired
private MyAsyncService myAsyncService;

@Override
@EntryPoint
public void onMessage(Message message) {
    try {
        final String mensaje = new String(message.getBody(), StandardCharsets.UTF_8);
        final MyPojo payload = JsonUtils.readFromJson(mensaje, MyPojo.class);
        myAsyncService.generaDocumento(payload.getIdUser(), Integer.valueOf(payload.getYear()));
    } catch (Throwable t) {
        throw new AmqpRejectAndDontRequeueException(t);
    }
}
Can anyone give me an idea of how to solve this?
ThreadPoolTaskExecutor has a queue by default (effectively unbounded unless you set queueCapacity). Tasks are added to the queue once corePoolSize threads are busy; only when the queue is full are additional workers started, up to maxPoolSize. You can check the current queue size with executor.getThreadPoolExecutor().getQueue().size() to verify whether the messages are really lost or just waiting in the queue.
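For example, a small sketch of such a check (assuming the docThreadPoolTaskExecutor bean from the question is injected; the method name and output are illustrative only):

@Autowired
@Qualifier("docThreadPoolTaskExecutor")
private Executor docExecutor;

public void logExecutorState() {
    ThreadPoolTaskExecutor executor = (ThreadPoolTaskExecutor) docExecutor;
    // Threads currently busy with @Async tasks
    int active = executor.getActiveCount();
    // Tasks waiting in the queue (effectively unbounded unless queueCapacity is set)
    int queued = executor.getThreadPoolExecutor().getQueue().size();
    System.out.println("docThreadPoolTaskExecutor: active=" + active + ", queued=" + queued);
}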
I'm trying to start a Spring Boot application that has a JmsListener connecting to an external ActiveMQ broker; the application needs to start even if ActiveMQ is down.
I'm using the failover transport protocol in the connection, but when I start the application the main thread is blocked while JMS tries to establish the connection, and the application doesn't start until the first connection is made.
I tried the following solution, where I extend DefaultMessageListenerContainer and override the start method to run in a new thread, which frees the main thread.
Consumer.java
@SpringBootApplication
public class Consumer {

    public static void main(String[] args) {
        SpringApplication.run(Consumer.class, args);
        System.out.println("Consumer Start");
    }
}
Receiver
@Component
public class Receiver {

    @Bean
    public JmsListenerContainerFactory<?> myFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        DefaultJmsListenerContainerFactory factory = new MyListenerContainerFactory();
        configurer.configure(factory, connectionFactory);
        return factory;
    }

    @JmsListener(destination = "mailbox", containerFactory = "myFactory")
    public void receiveMessage(String email) {
        System.out.println("Received <" + email + ">");
    }
}
MyListenerContainer.java
public class MyListenerContainer extends DefaultMessageListenerContainer {

    @Override
    public void start() throws JmsException {
        new Thread(() -> {
            super.start();
        }).start();
    }
}
MyListenerContainerFactory.java
public class MyListenerContainerFactory extends DefaultJmsListenerContainerFactory {

    @Override
    protected DefaultMessageListenerContainer createContainerInstance() {
        return new MyListenerContainer();
    }
}
1st question:
This solution works, but I'm not sure whether it breaks anything in the startup of the Spring beans, since the start method finishes before the connection is created. Can this break the bean creation process in Spring Boot?
2nd question:
In addition to this, if I have a JmsListener and a JmsProducer in the same application, will they try to use the same connection or will they use different ones?
If someone has a different solution that is better, please share.
Thank you for your time and best regards,
Bernardo Neves
It's probably cleaner to use factory.setAutoStartup(false), which will prevent Spring from attempting to start the container; call container.start() when you are ready (on whatever thread you want).
@Autowired
private JmsListenerEndpointRegistry registry;
...
registry.start(); // starts all containers
// or
registry.getListenerContainer(myContainerId).start();
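A minimal sketch of the factory with auto-startup disabled (adapted from the myFactory bean in the question; the custom container subclass is no longer needed in this case):

@Bean
public JmsListenerContainerFactory<?> myFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    // Spring will no longer start the container during context refresh;
    // start it later via the JmsListenerEndpointRegistry shown above.
    factory.setAutoStartup(false);
    return factory;
}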
They will use the same connection.
I don't know if I've used the correct title, but I will try to explain the problem below.
I have the following method to check the payment status from an external service:
@Autowired
PaymentService paymentService;

public void checkPaymentStatus(Payment p){
    // sending
    String status = paymentService.getPaymentStatus(p.getId());
}
It works in the main thread, and there might be hundreds of requests per second, so I decided to use CompletableFuture to run the tasks asynchronously in separate threads.
@Autowired
PaymentService paymentService;

public void checkPaymentStatus(Payment p){
    // sending
    CompletableFuture<Response> status = paymentService.getPaymentStatus(p.getId());
}
PaymentService.class
@Service
public class PaymentService{

    private final RestTemplate restTemplate;

    @Async
    public CompletableFuture<Response> getPaymentStatus(long id){
        Response results = restTemplate.getForObject(url, Response.class);
        return CompletableFuture.completedFuture(results);
    }
}
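For reference, the caller in checkPaymentStatus could attach a callback to the returned future instead of just assigning it; a minimal sketch (the handling logic is a placeholder):

paymentService.getPaymentStatus(p.getId())
        .thenAccept(response -> System.out.println("payment status received: " + response));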
Configuration
@Bean
public Executor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(2);
    executor.setMaxPoolSize(2);
    executor.setQueueCapacity(500);
    executor.setThreadNamePrefix("payment-service-");
    executor.initialize();
    return executor;
}
It works perfectly, but I have an additional requirement: every request must wait 30 seconds before being sent to the external service.
How can I solve this?
Update:
In this method I could use Thread.sleep, but that is not the correct solution, because it blocks the thread for 30 seconds; the next task might then run after 60 seconds, and so on.
@Async
public CompletableFuture<Response> getPaymentStatus(long id){
    // I might use Thread.sleep here, but it is not the correct solution,
    // as it blocks the thread for 30 seconds and the next task might run after 60 secs, etc.
    Response results = restTemplate.getForObject(url, Response.class);
    return CompletableFuture.completedFuture(results);
}
I have a web app (Spring/Spring Boot) running on Tomcat 7. There are some ExecutorService instances defined like this:
public static final ExecutorService TEST_SERVICE = new ThreadPoolExecutor(10, 100, 60L,
        TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1000), new ThreadPoolExecutor.CallerRunsPolicy());
The tasks are important and must complete properly. I catch the exceptions and save them to the db for retry, like this:
try {
    ThreadPoolHolder.TEST_SERVICE.submit(new Runnable() {
        @Override
        public void run() {
            try {
                boolean isSuccess = false;
                int tryCount = 0;
                while (++tryCount < CAS_COUNT_LIMIT) {
                    isSuccess = doWork(param);
                    if (isSuccess) {
                        break;
                    }
                    Thread.sleep(1000);
                }
                if (!isSuccess) {
                    saveFail(param);
                }
            } catch (Exception e) {
                log.error("test error! param : {}", param, e);
                saveFail(param);
            }
        }
    });
} catch (Exception e) {
    log.error("test error! param:{}", param, e);
    saveFail(param);
}
So, when Tomcat shuts down, what happens to the threads of the pool (running or waiting in the queue)? How can I make sure that all tasks are either completed properly before shutdown or saved to the db for retry?
Tomcat has built-in thread leak detection, so you should get an error when the application is undeployed. As a developer it is your responsibility to tie any object you create to the web application's lifecycle; this means you should never have static state that is not a constant.
If you are using Spring Boot, your Spring context is already tied to the application's lifecycle, so the best way is to create your executor as a Spring bean and let Spring shut it down when the application stops. Here is an example you can put in any @Configuration class.
@Bean(destroyMethod = "shutdownNow", name = "MyExecutorService")
public ThreadPoolExecutor executor() {
    ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(10, 100, 60L,
            TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1000),
            new ThreadPoolExecutor.CallerRunsPolicy());
    return threadPoolExecutor;
}
As you can see, the @Bean annotation allows you to specify a destroy method which will be executed when the Spring context is closed. In addition, I have added the name property, because Spring typically creates a number of ExecutorServices for things like async web processing. When you need to use the executor, just autowire it as you would any other Spring bean.
@Autowired
@Qualifier(value = "MyExecutorService")
ThreadPoolExecutor executor;
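For example, submitting work to it (a sketch; the task body is a placeholder):

executor.submit(() -> {
    // do the work that must complete properly or be saved to the db for retry
});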
Remember: static is EVIL. You should only use static for constants and potentially immutable objects.
EDIT
If you need to block Tomcat's shutdown procedure until the tasks have been processed, you need to wrap the executor in a component for more control, like this.
@Component
public class ExecutorWrapper implements DisposableBean {

    private final ThreadPoolExecutor threadPoolExecutor;

    public ExecutorWrapper() {
        threadPoolExecutor = new ThreadPoolExecutor(10, 100, 60L,
                TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1000), new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public <T> Future<T> submit(Callable<T> task) {
        return threadPoolExecutor.submit(task);
    }

    public void submit(Runnable runnable) {
        threadPoolExecutor.submit(runnable);
    }

    @Override
    public void destroy() throws Exception {
        threadPoolExecutor.shutdown();
        boolean terminated = threadPoolExecutor.awaitTermination(1, TimeUnit.MINUTES);
        if (!terminated) {
            List<Runnable> runnables = threadPoolExecutor.shutdownNow();
            // log the runnables that were not executed
        }
    }
}
With this code you call shutdown first so no new tasks can be submitted, then wait some time for the executor to finish the current tasks and drain the queue. If it does not finish in time, you call shutdownNow to interrupt the running tasks and get the list of unprocessed tasks.
Note: DisposableBean does the trick, but the best solution is actually to implement the SmartLifecycle interface. You have to implement a few more methods, but you get greater control, because no threads are started until all beans have been instantiated and the entire bean hierarchy is wired together; it even allows you to specify in which order components should be started.
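As a rough sketch of that approach (this is not code from the original answer; the class name, pool parameters and phase are illustrative):

@Component
public class ExecutorLifecycleWrapper implements SmartLifecycle {

    private ThreadPoolExecutor threadPoolExecutor;
    private volatile boolean running;

    @Override
    public void start() {
        // Called only after the whole bean hierarchy has been wired together
        threadPoolExecutor = new ThreadPoolExecutor(10, 100, 60L,
                TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1000),
                new ThreadPoolExecutor.CallerRunsPolicy());
        running = true;
    }

    @Override
    public void stop() {
        threadPoolExecutor.shutdown();
        try {
            if (!threadPoolExecutor.awaitTermination(1, TimeUnit.MINUTES)) {
                List<Runnable> skipped = threadPoolExecutor.shutdownNow();
                // log or persist the skipped runnables for retry
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        running = false;
    }

    @Override
    public void stop(Runnable callback) {
        stop();
        callback.run();
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }

    @Override
    public int getPhase() {
        return Integer.MAX_VALUE; // started last, stopped first
    }
}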
Tomcat, like any Java application, will not exit until all non-daemon threads have ended. The ThreadPoolExecutor in the example above uses the default thread factory and will create non-daemon threads.
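For illustration only (this is not part of the original answer, and daemon threads would let the JVM exit without finishing queued tasks), a ThreadFactory that produces daemon threads would look roughly like this:

// Threads created by this factory will not keep the JVM alive on shutdown.
ThreadFactory daemonFactory = runnable -> {
    Thread t = Executors.defaultThreadFactory().newThread(runnable);
    t.setDaemon(true);
    return t;
};

ExecutorService service = new ThreadPoolExecutor(10, 100, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(1000), daemonFactory,
        new ThreadPoolExecutor.CallerRunsPolicy());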
I have a Spring REST controller which calls an asynchronous method using Spring's @Async mechanism and immediately returns an HTTP 202 (Accepted) code to the client. (The asynchronous job is heavy and could lead to a timeout.)
So, at the end of the asynchronous task, I'm sending an email to the client telling him the status of his request.
Everything works just fine, but I'm asking myself what I can do if my server/JVM crashes or is shut down. My client would receive a 202 code but would never receive the status email.
Is there a way to synchronize (in real time) a ThreadPoolTaskExecutor with a database, or even a file, so the server can recover at startup, without managing this on my own with complex rules and status tracking?
Here is my Executor configuration
@Configuration
@EnableAsync
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("asyncTaskExecutor-");
        executor.setAwaitTerminationSeconds(120);
        executor.setKeepAliveSeconds(30);
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.initialize();
        return executor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new SimpleAsyncUncaughtExceptionHandler();
    }
}
The controller that launches the async task:
@RequestMapping(value = "/testAsync", method = RequestMethod.GET)
public void testAsync() throws InterruptedException{
    businessService.doHeavyThings();
}
The async method called:
@Async
public void doHeavyThings() throws InterruptedException {
    LOGGER.error("Start doHeavyThings with configured executor - " + Thread.currentThread().getName() + " at " + new Date());
    Thread.sleep(5000L);
    LOGGER.error("Stop doHeavyThings with configured executor - " + Thread.currentThread().getName() + " at " + new Date());
}
Thx
For a web server shutdown, the application lifecycle in a Java web application will notify a ServletContextListener. If you provide an implementation of ServletContextListener, you can put your "what has already been processed" logic in the contextDestroyed method.
When the web server or the application is started again, the listener can be used to recover and re-process the unprocessed items of your job using the contextInitialized method.
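A minimal sketch of such a listener (the recovery and persistence steps are placeholders for your own tracking logic; @WebListener registration is one option on a Servlet 3.0+ container):

@WebListener
public class JobRecoveryListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // On startup: reload the items persisted as unprocessed and re-submit them (placeholder).
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // On shutdown: persist which items have not been processed yet (placeholder).
    }
}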
Another option would be to use Spring destruction callbacks and place the logic there.
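For instance, a destruction callback on a Spring bean (a sketch; the class name and the persistence logic are placeholders):

@Component
public class TaskStatePersister {

    @PreDestroy
    public void onShutdown() {
        // Persist the status of tasks that are still queued or running,
        // so they can be re-submitted on the next startup (placeholder).
    }
}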
HTH
I'm trying to implement a UDP server with Netty. The idea is to bind only once (therefore creating only one Channel). This Channel is initialized with only one handler that dispatches processing of incoming datagrams among multiple threads via an ExecutorService.
@Configuration
public class SpringConfig {

    @Autowired
    private Dispatcher dispatcher;

    private String host;
    private int port;

    @Bean
    public Bootstrap bootstrap() throws Exception {
        Bootstrap bootstrap = new Bootstrap()
                .group(new NioEventLoopGroup(1))
                .channel(NioDatagramChannel.class)
                .option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
                .handler(dispatcher);
        ChannelFuture future = bootstrap.bind(host, port).await();
        if (!future.isSuccess())
            throw new Exception(String.format("Fail to bind on [host = %s , port = %d].", host, port), future.cause());
        return bootstrap;
    }
}
@Component
@Sharable
public class Dispatcher extends ChannelInboundHandlerAdapter implements InitializingBean {

    private int workerThreads;
    private ExecutorService executorService;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        DatagramPacket packet = (DatagramPacket) msg;
        final Channel channel = ctx.channel();
        executorService.execute(new Runnable() {
            @Override
            public void run() {
                //Process the packet and produce a response packet (below)
                DatagramPacket responsePacket = ...;
                ChannelFuture future;
                try {
                    future = channel.writeAndFlush(responsePacket).await();
                } catch (InterruptedException e) {
                    return;
                }
                if (!future.isSuccess())
                    log.warn("Failed to write response packet.");
            }
        });
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        executorService = Executors.newFixedThreadPool(workerThreads);
    }
}
I have the following questions:
Should the DatagramPacket received by the channelRead method of the Dispatcher class be duplicated before being used by the worker thread? I wonder if this packet is destroyed after the channelRead method returns, even if a reference is kept by the worker thread.
Is it safe to share the Channel among all the worker threads and let them call writeAndFlush concurrently?
Thanks!
Nope. If you need the object to live longer, you either turn it into something else or use ReferenceCountUtil.retain(datagram), and then ReferenceCountUtil.release(datagram) once you're done with it. You also shouldn't call await() in the executor service; instead, register a listener for whatever should happen when the write completes.
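A rough sketch of channelRead along those lines (assuming the Dispatcher stays the last inbound handler, so it owns the packet; the process(...) helper is a placeholder):

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final DatagramPacket packet = (DatagramPacket) msg;
    final Channel channel = ctx.channel();
    // If the packet were also forwarded down the pipeline, call ReferenceCountUtil.retain(packet)
    // here; as the last handler we simply keep our reference and release it when the worker is done.
    executorService.execute(new Runnable() {
        @Override
        public void run() {
            try {
                DatagramPacket responsePacket = process(packet); // placeholder for your processing
                channel.writeAndFlush(responsePacket).addListener(new ChannelFutureListener() {
                    @Override
                    public void operationComplete(ChannelFuture future) {
                        if (!future.isSuccess()) {
                            log.warn("Failed to write response packet.");
                        }
                    }
                });
            } finally {
                ReferenceCountUtil.release(packet); // free the datagram's buffer
            }
        }
    });
}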
Yes, Channel objects are thread-safe; they can be called from many different threads.