Spring Data Redis Template ConvertAndSend Duplicate Messages - java

I have a messaging application built on Redis. However, I noticed that the Spring Data RedisTemplate's convertAndSend may be duplicating messages: the message listener receives a duplicate message roughly one in every three trials.
As you can imagine, this is not acceptable for certain applications; in my case, the secondary storage is complaining about duplicate keys.
I register the message listener in a @Configuration-annotated class as:
@Bean
RedisMessageListenerContainer container(JobsListener receiver, RedisConnectionFactory connectionFactory) {
    MessageListenerAdapter jobsMessageListener = new MessageListenerAdapter(receiver);
    RedisMessageListenerContainer container = new RedisMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.addMessageListener(jobsMessageListener, new PatternTopic(RedisCacheService.JOBS_KEY));
    return container;
}
And in the JobsListener implementation, I override the onMessage method:
@Override
public void onMessage(Message message, byte[] pattern) {
    System.out.println(new String(message.getBody()));
    Job job = cacheService.processNextJob();
    if (job != null) {
        logger.debug("Job id processed is " + job.getId() + " " + Thread.currentThread().getId());
        update(job);
    } else {
        logger.debug("Job id processed is null");
    }
}
However, if I add synchronized to the onMessage method, it seems to fix this.
Is there a reason why synchronized helps? It smells like some concurrency issue under the hood.
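For reference, a possible direction rather than a definitive answer: RedisMessageListenerContainer dispatches deliveries to listeners through a task executor, so onMessage can be invoked concurrently, and a race inside processNextJob() would be masked by synchronized. A minimal sketch, assuming that race is the cause (an assumption, not a confirmed diagnosis), serializes delivery with a single-threaded executor instead of locking:

@Bean
RedisMessageListenerContainer container(JobsListener receiver, RedisConnectionFactory connectionFactory) {
    RedisMessageListenerContainer container = new RedisMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    // Hypothetical mitigation: a java.util.concurrent single-thread executor makes
    // the container invoke listeners one at a time, serializing processNextJob()
    // the same way the synchronized keyword does.
    container.setTaskExecutor(Executors.newSingleThreadExecutor());
    container.addMessageListener(new MessageListenerAdapter(receiver), new PatternTopic(RedisCacheService.JOBS_KEY));
    return container;
}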

Related

Republish message to same queue with updated headers after automatic nack in Spring AMQP

I am trying to configure my Spring AMQP ListenerContainer to allow for a certain type of retry flow that's backwards compatible with a custom rabbit client previously used in the project I'm working on.
The protocol works as follows:
1. A message is received on a channel.
2. If processing fails, the message is nacked with the requeue flag set to false.
3. A copy of the message with additional/updated headers (a retry counter) is published to the same queue.
The headers are used for filtering incoming messages, but that's not important here.
I would like the behaviour to happen on an opt-in basis, so that more standardised Spring retry flows can be used in cases where compatibility with the old client isn't a concern, and the listeners should be able to work without requiring manual acking.
I have implemented a working solution, which I'll get back to below. Where I'm struggling is publishing the new message after signalling to the container that it should nack the current message, because I can't really find any good hooks after the nack or before the next message.
Reading the documentation, it feels like I'm looking for something analogous to the behaviour of RepublishMessageRecoverer used as the final step of a retry interceptor. The main difference in my case is that I need to republish immediately on failure, not as a final recovery step. I tried to look at the implementation of RepublishMessageRecoverer, but the many layers of indirection made it hard for me to understand where the republishing is triggered, and whether a nack happens before that.
My working implementation looks as follows. Note that I'm using a ThrowsAdvice, but I think an error handler could also be used with nearly identical logic.
/*
MyConfig.class, configuring the container factory
*/
@Configuration
public class MyConfig {
    @Bean
    // NB: bean name is important, overwrites autoconfigured bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory,
            Jackson2JsonMessageConverter messageConverter,
            RabbitTemplate rabbitTemplate
    ) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(messageConverter);
        // AOP
        var a1 = new CustomHeaderInspectionAdvice();
        var a2 = new MyThrowsAdvice(rabbitTemplate);
        Advice[] adviceChain = {a1, a2};
        factory.setAdviceChain(adviceChain);
        return factory;
    }
}
/*
MyThrowsAdvice.class, hooking into the exception flow from the listener
*/
public class MyThrowsAdvice implements ThrowsAdvice {
    private static final Logger logger = LoggerFactory.getLogger(MyThrowsAdvice.class);
    private final AmqpTemplate amqpTemplate;

    public MyThrowsAdvice(AmqpTemplate amqpTemplate) {
        this.amqpTemplate = amqpTemplate;
    }

    public void afterThrowing(Method method, Object[] args, Object target, ListenerExecutionFailedException ex) {
        var message = message(args);
        var cause = ex.getCause();
        // opt-in to old protocol by throwing an instance of BusinessException in business logic
        if (cause instanceof BusinessException) {
            /*
            NB: Since we want to trigger execution after the current method fails
            with an exception, we need to schedule it in another thread and delay
            execution until the nack has happened.
            */
            new Thread(() -> {
                try {
                    Thread.sleep(1000L);
                    var messageProperties = message.getMessageProperties();
                    var count = getCount(messageProperties);
                    messageProperties.setHeader("xb-count", count + 1);
                    var routingKey = messageProperties.getReceivedRoutingKey();
                    var exchange = messageProperties.getReceivedExchange();
                    amqpTemplate.send(exchange, routingKey, message);
                    logger.info("Sent!");
                } catch (InterruptedException e) {
                    logger.error("Sleep interrupted", e);
                }
            }).start();
            // NB: Produce the desired nack.
            throw new AmqpRejectAndDontRequeueException("Business logic exception, message will be re-queued with updated headers", cause);
        }
    }

    private static long getCount(MessageProperties messageProperties) {
        try {
            Long c = messageProperties.getHeader("xb-count");
            return c == null ? 0 : c;
        } catch (Exception e) {
            return 0;
        }
    }

    private static Message message(Object[] args) {
        try {
            return (Message) args[1];
        } catch (Exception e) {
            logger.info("Bad cast parse", e);
            throw new AmqpRejectAndDontRequeueException(e);
        }
    }
}
Now, as you can imagine, I'm not particularly pleased with the indeterminism of scheduling a new thread with a delay.
So my question is simply: is there any way I could produce a deterministic solution to my problem using the provided hooks of the ListenerContainer?
Your current solution risks message loss, since you are publishing on a different thread after a delay. If the server crashes during that delay, the message is lost.
It would be better to publish immediately to another queue with a TTL and a dead-letter configuration to republish the expired message back to the original queue.
Using the RepublishMessageRecoverer with retries set to maxAttempts=1 should do what you need, as sketched below.
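A minimal sketch of that wiring, assuming placeholder exchange and routing-key names; RepublishMessageRecoverer republishes on the listener thread before the container rejects the original delivery, and its protected additionalHeaders hook can carry the retry counter (the xb-count header here mirrors the question's protocol and is otherwise hypothetical):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // maxAttempts(1): no in-memory retries; the first failure goes straight
    // to the recoverer, which republishes before the message is rejected.
    factory.setAdviceChain(RetryInterceptorBuilder.stateless()
            .maxAttempts(1)
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "", "my.request.queue") {
                @Override
                protected Map<? extends String, ? extends Object> additionalHeaders(Message message, Throwable cause) {
                    // Hypothetical counter header, mirroring the xb-count logic above.
                    Long count = message.getMessageProperties().getHeader("xb-count");
                    return Map.of("xb-count", (count == null ? 0L : count) + 1);
                }
            })
            .build());
    return factory;
}

With the default exchange ("") and the queue name as the routing key, the republished copy lands back on the original queue, which matches the old protocol's behaviour.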

Using Spring Integration v5.5.14, lots of queued tasks are increasing

After upgrading to Spring Boot v2.7.1, we are seeing lots of queued tasks; we never saw queued tasks increase like this on the previous version we were using, v2.2.2.
Our team has tried to check things in v2.7.1 but couldn't find anything in this version.
Can anyone please review the code and let us know what we are missing or have written wrong that is causing the issue? We are using Spring Integration to pull emails from a client server, and for that we have added a TaskExecutor to have concurrent polling.
Versions that we use:
Spring Boot = 2.7.1
Spring Integration = 5.5.14
Earlier we were using:
Spring Boot = 2.2.2 release
Spring Integration = 5.2.3 release
I've attached the code below.
Configuration class for Imap Integration
@Configuration
@EnableIntegration
public class ImapIntegrationConfig {
    private final ApplicationContext applicationContext;

    @Autowired
    public ImapIntegrationConfig(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }

    @Bean("mailTaskExecutor")
    public ThreadPoolTaskExecutor mailTaskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(1000);
        taskExecutor.setCorePoolSize(100);
        taskExecutor.setTaskDecorator(new SecurityAwareTaskDecorator(applicationContext));
        taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
        taskExecutor.setAwaitTerminationSeconds(Integer.MAX_VALUE);
        return taskExecutor;
    }

    @Bean("imapMailChannel")
    public ExecutorChannelSpec imapMailChannel() {
        return MessageChannels.executor(mailTaskExecutor());
    }

    @Bean
    public HeaderMapper<MimeMessage> mailHeaderMapper() {
        return new DefaultMailHeaderMapper();
    }
}
ImapListener Class to register the flow
public void registerImapFlow(ImapSetting imapSetting) {
    ImapMailReceiver mailReceiver = createImapMailReceiver(imapSetting);
    // create the flow for an email process
    //@formatter:off
    StandardIntegrationFlow flow = IntegrationFlows
            .from(Mail.imapInboundAdapter(mailReceiver),
                    consumer -> consumer.autoStartup(true)
                            .poller(Pollers.fixedDelay(Duration.ofSeconds(5), Duration.ofMinutes(2))
                                    .taskExecutor(taskExecutor)
                                    .errorHandler(t -> logger.error("Error while polling emails for address " + imapSetting.getUsername(), t))
                                    .maxMessagesPerPoll(10)))
            .enrichHeaders(Map.of(CONCERN_CODE, imapSetting.getConcernCode(), IMAP_CONFIG_ID, imapSetting.getImapSettingId()))
            .channel(imapMailChannel).get();
    //@formatter:on
    // give the bean a unique name to avoid clashes with multiple imap settings
    String flowId = concernIdentifier.getConcernIdentifier() + "-" + imapSetting.getImapSettingId();
    IntegrationFlowContext.IntegrationFlowRegistration existingFlow = integrationFlowContext.getRegistrationById(flowId);
    if (existingFlow != null) {
        // destroy the previous beans
        existingFlow.destroy();
    }
    // register the new flow
    integrationFlowContext.registration(flow).id(flowId).useFlowIdAsPrefix().register();
}
Process message method
@ServiceActivator(inputChannel = "imapMailChannel")
public void processMessage(Message<?> message) throws InvalidMessageException {
    String concern = (String) message.getHeaders().get(CONCERN_CODE);
    if (isEmpty(concern)) {
        logger.error("Received null concern!");
    }
    Long imapConfigId = (Long) message.getHeaders().get(IMAP_CONFIG_ID);
    String logMessage = null;
    String messageId = null;
    try {
        Object payload = message.getPayload();
        if (payload instanceof MimeMultipart) {
            //.......................//
        }
        else if (payload instanceof String) {
            //......................//
        }
    }
    catch (Exception e) {
        logger.error("Error while processing " + logMessage, e);
        if (concern != null) {
            metricUtil.emailFailed(concern);
        }
        throw new MaxxtonException("CCM-MessageID: Exception in processMessage() method", e, MessageErrorCode.UNABLE_TO_PROCESS_EMAIL);
    }
    metricUtil.emailProcessed(concern);
}
ImapMailReceiver method
private ImapMailReceiver createImapMailReceiver(ImapSetting imapSettings) {
    String url = String.format(imapSettings.getImapUrl(), URLEncoder.encode(imapSettings.getUsername(), UTF_8), URLEncoder.encode(imapSettings.getPassword(), UTF_8));
    ImapMailReceiver receiver = new ImapMailReceiver(url);
    receiver.setSimpleContent(true);
    Properties mailProperties = new Properties();
    mailProperties.put("mail.debug", "false");
    mailProperties.put("mail.imap.connectionpoolsize", "5");
    mailProperties.put("mail.imap.fetchsize", 4194304);
    mailProperties.put("mail.imap.connectiontimeout", 15000);
    mailProperties.put("mail.imap.timeout", 30000);
    mailProperties.put("mail.imaps.connectionpoolsize", "5");
    mailProperties.put("mail.imaps.fetchsize", 4194304);
    mailProperties.put("mail.imaps.connectiontimeout", 15000);
    mailProperties.put("mail.imaps.timeout", 30000);
    receiver.setJavaMailProperties(mailProperties);
    receiver.setSearchTermStrategy(this::notSeenTerm);
    receiver.setAutoCloseFolder(false);
    receiver.setShouldDeleteMessages(false);
    receiver.setShouldMarkMessagesAsRead(true);
    receiver.setHeaderMapper(mailHeaderMapper);
    receiver.setEmbeddedPartsAsBytes(false);
    return receiver;
}
I've attached a screenshot (not reproduced here) taken from Grafana of active and queued tasks after we upgraded to Spring Boot v2.7.1 and Spring Integration v5.5.14.
At a glance it all looks OK, unless you really don't close that folder manually somewhere else, since you use receiver.setAutoCloseFolder(false); see the sketch below.
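For reference, a minimal sketch of what that manual close looks like (placing it at the end of processMessage above is an assumption; with autoCloseFolder=false the open folder travels with the message as a closeable-resource header):

try {
    // The IMAP folder the adapter left open is exposed via a message header.
    Closeable closeable = StaticMessageHeaderAccessor.getCloseableResource(message);
    if (closeable != null) {
        closeable.close();
    }
} catch (IOException e) {
    logger.error("Failed to close mail folder", e);
}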
There is no reason for that .taskExecutor(taskExecutor), since you use MessageChannels.executor(mailTaskExecutor()) immediately after producing a message from the Mail.imapInboundAdapter().
I remember that in Gitter I suggested you check how it works with spring.task.scheduling.pool.size=10 placed into application.properties. This is the only obvious difference between the mentioned versions: https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.spring-integration.
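For reference, that is a single line in application.properties (10 is the suggested starting point, not a tuned value):

# Enlarge the auto-configured TaskScheduler, which otherwise defaults to one thread
spring.task.scheduling.pool.size=10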
Your screenshot doesn't prove that the problem is exactly with Spring Integration. Perhaps tasks are queued somehow by the tool which exports metrics to Grafana. I believe you have upgraded more than just Spring Integration in your project...

Spring - RabbitMQ listener needs to be paused and resumed during database change

Hey, the requirement is to pause the RabbitMQ listeners from processing messages during a change to the backend tables. This change is limited to just my application, so I don't want to bring down the entire RabbitMQ instance. Once the process is complete, I want to kickstart the listeners again.
The issue I'm facing:
I have two listeners connected to two separate queues sharing a consumerConnectionFactory. When I killed the connection, only the one without any open channels got killed, and when I resumed the connection I got an extra connection which was not there earlier. Can you please help?
I'm sharing my Java configs below.
@Bean
public SimpleMessageListenerContainer auditMessageListenerContainer(AuditMessageListener auditMessageListener) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(consumerConnectionFactory);
    container.setQueueNames(messagingAuditQueue);
    container.setMessageListener(auditMessageListener);
    container.setMaxConcurrentConsumers(5);
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    container.setDefaultRequeueRejected(false);
    container.setMissingQueuesFatal(false);
    container.setForceCloseChannel(true);
    container.setExclusive(false);
    return container;
}

@Bean
public SimpleMessageListenerContainer accessMessageListenerContainer(AccessLogListener accessLogListener) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(consumerConnectionFactory);
    container.setQueueNames(accessAuditQueue);
    container.setMessageListener(accessLogListener);
    container.setMaxConcurrentConsumers(5);
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    container.setDefaultRequeueRejected(false);
    container.setMissingQueuesFatal(false);
    container.setForceCloseChannel(true);
    container.setExclusive(false);
    return container;
}
This is how I did the Java config for the Listeners.
Below is the RestController to start and stop the listeners
@RestController
@RequestMapping(MESSAGE_AUDIT_ROOT)
public class RestartController {
    @Autowired
    private List<MessageListenerContainer> listenerContainers;
    @Autowired
    private List<ConnectionFactory> connectionFactories;

    @GetMapping("/stop")
    public String stopMessageListenerContainer() {
        connectionFactories.forEach(conFactory -> {
            CachingConnectionFactory cConFactory = (CachingConnectionFactory) conFactory;
            cConFactory.resetConnection();
        });
        listenerContainers.forEach(container -> {
            SimpleMessageListenerContainer smlc = (SimpleMessageListenerContainer) container;
            smlc.shutdown();
        });
        listenerContainers.forEach(container -> System.out
                .println("Container: " + container.toString() + " is running? " + container.isRunning()));
        return "done - stop";
    }

    @GetMapping("/start")
    public String startMessageListenerContainer() {
        connectionFactories.forEach(conFactory -> {
            CachingConnectionFactory cConFactory = (CachingConnectionFactory) conFactory;
            cConFactory.createConnection();
        });
        listenerContainers.forEach(container -> {
            SimpleMessageListenerContainer smlc = (SimpleMessageListenerContainer) container;
            smlc.start();
        });
        listenerContainers.forEach(container -> System.out
                .println("Container: " + container.toString() + " is running? " + container.isRunning()));
        return "done - start";
    }
}
Below are the images (omitted here) of the behavior I see locally:
1. Initial connections list
2. After the connection-stop REST call; 2.1 the queue connection is still active
3. After the connection-start REST call
With the default cache mode (CHANNEL), there should only be one connection at all times, unless you configure a RabbitTemplate with usePublisherConnection set to true, in which case, the connection name would be api-audit.publisher.
Since you have two connections with the name api-audit, there is something very odd going on. I suspect you somehow have two connection factories loaded, perhaps one is in a child application context? You can't have two beans with the same name in a single application context.
i.e. you are calling resetConnection on one of them but not the other.
I suggest you put a breakpoint in createConnection to see who's using a second CF.
By the way, you should really reset the connection after the container is stopped; otherwise the container will go into recovery mode and might re-open the connection, depending on timing.
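A minimal sketch of that ordering against the controller above (same autowired beans assumed): stop the containers first, then reset the connection.

@GetMapping("/stop")
public String stopMessageListenerContainer() {
    // Stop the consumers first so no listener thread is left to trigger recovery...
    listenerContainers.forEach(MessageListenerContainer::stop);
    // ...then close the cached connection(s).
    connectionFactories.forEach(cf -> ((CachingConnectionFactory) cf).resetConnection());
    return "done - stop";
}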

RabbitListener does not pick up every message sent with AsyncRabbitTemplate

I am using a Spring-Boot project on Spring-Boot version 1.5.4, with spring-boot-starter-amqp, spring-boot-starter-web-services and spring-ws-support v2.4.0.
So far, I have successfully created a @RabbitListener component, which does exactly what it is supposed to do when a message is sent to the broker via rabbitTemplate.sendAndReceive(uri, message). I tried to see what would happen if I used AsyncRabbitTemplate for this, as the message processing might take a while, and I don't want to block my application while waiting for a response.
The problem is: the first message I put in the queue is not even picked up by the listener. The callback just reports a success with the published message instead of the returned message.
Listener:
@RabbitListener(queues = KEY_MESSAGING_QUEUE)
public Message processMessage(@Payload byte[] payload, @Headers Map<String, Object> headers) {
    try {
        byte[] resultBody = messageProcessor.processMessage(payload, headers);
        MessageBuilder builder = MessageBuilder.withBody(resultBody);
        if (resultBody.length == 0) {
            builder.setHeader(HEADER_NAME_ERROR_MESSAGE, "Error occurred during processing.");
        }
        return builder.build();
    } catch (Exception ex) {
        return MessageBuilder.withBody(EMPTY_BODY)
                .setHeader(HEADER_NAME_ERROR_MESSAGE, ex.getMessage())
                .setHeader(HEADER_NAME_STACK_TRACE, ex.getStackTrace())
                .build();
    }
}
When I am executing my tests, one test fails and the second succeeds. The class is annotated with @RunWith(SpringJUnit4ClassRunner.class) and @SpringBootTest(classes = { Application.class, Test.TestConfiguration.class }) and has a @ClassRule of BrokerRunning.isRunningWithEmptyQueues(QUEUE_NAME).
TestConfiguration (inner class):
public static class TestConfiguration {
    @Bean // referenced in the tests as art
    public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames(QUEUE_NAME);
        return new AsyncRabbitTemplate(rabbitTemplate, container);
    }

    @Bean
    public MessageListener messageListener() {
        return new MessageListener();
    }
}
Tests:
@Test
public void shouldListenAndReplyToQueue() throws Exception {
    doReturn(RESULT_BODY)
            .when(innerMock)
            .processMessage(any(byte[].class), anyMapOf(String.class, Object.class));
    Message msg = MessageBuilder
            .withBody(MESSAGE_BODY)
            .setHeader("header", "value")
            .setHeader("auth", "entication")
            .build();
    RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
    pendingReply.addCallback(new ListenableFutureCallback<Message>() {
        @Override
        public void onSuccess(Message result) { }

        @Override
        public void onFailure(Throwable ex) {
            throw new RuntimeException(ex);
        }
    });
    while (!pendingReply.isDone()) {}
    result = pendingReply.get();
    // assertions omitted
}
Test 2:
@Test
public void shouldReturnExceptionToCaller() throws Exception {
    doThrow(new SSLSenderInstantiationException("I am a message", new Exception()))
            .when(innerMock)
            .processMessage(any(byte[].class), anyMapOf(String.class, Object.class));
    Message msg = MessageBuilder
            .withBody(MESSAGE_BODY)
            .setHeader("header", "value")
            .setHeader("auth", "entication")
            .build();
    RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
    pendingReply.addCallback(/*same as above*/);
    while (!pendingReply.isDone()) {}
    result = pendingReply.get();
    // assertions omitted
}
When I run both tests together, the test that is executed first fails, while the second call succeeds.
When I run both tests separately, both fail.
When I add an @Before method which uses the AsyncRabbitTemplate art to put any message into the queue, both tests MAY pass, or the second test MAY not pass; so in addition to being unexpected, the behaviour is inconsistent as well.
The interesting thing is that the callback passed to the method reports a success before the listener is invoked, and reports the sent message as the result.
The only class missing from this is the general configuration class, which is annotated with @EnableRabbit and has this content:
@Bean
public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrentConsumers(10);
    return factory;
}
Other Things I have tried:
specifically create AsyncRabbitTemplate myself, start and stop it manually before and after every message process -> both tests succeeded
increase / decrease receive timeout -> no effect
remove and change the callback -> no effect
explicitly created the queue again with an injected RabbitAdmin -> no effect
extracted the callback to a constant -> tests didn't even start correctly
As stated above, I used RabbitTemplate directly, which worked exactly as intended
If anyone has any ideas what is missing, I'd be very happy to hear.
You can't use the same queue for requests and replies...
@Bean // referenced in the tests as art
public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames(QUEUE_NAME);
    return new AsyncRabbitTemplate(rabbitTemplate, container);
}
will listen for replies on QUEUE_NAME, so...
RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
...simply sends a message to itself. It looks like you intended...
RabbitMessageFuture pendingReply = art.sendAndReceive(KEY_MESSAGING_QUEUE, msg);
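Put differently, a minimal corrected sketch of the test configuration, where REPLY_QUEUE_NAME is a placeholder for a second, pre-declared queue: requests go to the listener's queue, and replies come back on the template's own queue.

@Bean // referenced in the tests as art
public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    // The reply container listens on its own queue, distinct from KEY_MESSAGING_QUEUE.
    container.setQueueNames(REPLY_QUEUE_NAME);
    return new AsyncRabbitTemplate(rabbitTemplate, container);
}

and in the tests:

RabbitMessageFuture pendingReply = art.sendAndReceive(KEY_MESSAGING_QUEUE, msg);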

Loss of data in case of multiple consumers for redis messaging

In my application I am using multiple consumers to receive messages from a Redis publisher. But now the issue is loss of data and duplicate data; I mean multiple consumers are receiving the same message. How can I solve this issue in Redis, and can you provide an example in Java? I am new to Redis messaging; please help me.
Here is my receiver
@Configuration
@EnableScheduling
public class ScheduledRecevierService {
    private static final Logger LOGGER = LoggerFactory.getLogger(Application.class);
    private static final SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");

    @Bean
    RedisConnectionFactory redisConnectionFactory() {
        LOGGER.info("in redisConnectionFactory");
        JedisConnectionFactory redis = new JedisConnectionFactory();
        redis.setHostName("ipaddress");
        redis.setPort(6379);
        return redis;
    }

    @Bean
    StringRedisTemplate template(RedisConnectionFactory connectionFactory) {
        LOGGER.info("in template");
        return new StringRedisTemplate(connectionFactory);
    }

    @Scheduled(fixedRate = 1000)
    public void getScheduledMessage() {
        StringRedisTemplate template = template(redisConnectionFactory());
        System.out.println("The time is now " + dateFormat.format(new Date()));
        LOGGER.info("Sending messages...");
        String message = template.opsForList().leftPop("point-to-point-test"); // latch.await();
        // template.convertAndSend("chat", "Hello from Redis! count: " + i);
        LOGGER.info("Got message " + message + " from chat1 channel");
    }
}
I am running this application on multiple consumer instances. My queue "point-to-point-test" has 1000 messages; what I observed in multiple servers' logs is the same message being read.
Can we implement point-to-point communication in Redis using Java?
Will the RPOPLPUSH command in Redis solve this issue? If yes, please post an example in Java.
For the past few days I have struggled to fix these issues in Redis messaging; please help me.
Use Redis transactions to ensure that all your commands are executed sequentially: http://redis.io/topics/transactions
Use Jedis as the Java client library; see its tests for usage of transactions: https://github.com/xetorthio/jedis/blob/master/src/test/java/redis/clients/jedis/tests/commands/TransactionCommandsTest.java
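As for the RPOPLPUSH part of the question, a minimal Jedis sketch of the reliable-queue pattern it enables (host and list names are placeholders; an illustration, not a vetted implementation):

try (Jedis jedis = new Jedis("ipaddress", 6379)) {
    // Atomically move the next message to a processing list; because the move
    // is a single Redis command, no two consumers can receive the same item.
    String message = jedis.rpoplpush("point-to-point-test", "point-to-point-test:processing");
    if (message != null) {
        // process(message) ...
        // Acknowledge by removing the item once processing has succeeded.
        jedis.lrem("point-to-point-test:processing", 1, message);
    }
}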
