Manually NACK a message from AmqpInboundChannelAdapter - java

This is my current code:
@Bean
public IntegrationFlow someFlow() {
    return IntegrationFlows
            .from(someInboundAdapter())
            .transform(new JsonToObjectTransformer(SomeObject.class))
            .filter((SomeObject s) -> s.getId() != null && s.getId().isRealId(), f -> f.discardChannel(manualNackChannel()))
            .channel(amqpInputChannel())
            .get();
}

@ServiceActivator(inputChannel = "manualNackChannel")
public void manualNack(@Header(AmqpHeaders.CHANNEL) Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) Long tag) throws IOException {
    channel.basicNack(tag, false, false);
}

@Bean
public AmqpInboundChannelAdapter someInboundAdapter() {
    AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(someListenerContainer());
    adapter.setErrorChannel(manualNackChannel()); // NOT WORKING
    return adapter;
}

@Bean
public SimpleMessageListenerContainer someListenerContainer() {
    SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer(commonConfig.connectionFactory());
    listenerContainer.setQueues(someQueue());
    listenerContainer.setConcurrentConsumers(4);
    listenerContainer.setMessageConverter(jackson2JsonConverter());
    listenerContainer.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    listenerContainer.setConsumerTagStrategy(consumerTagStrategy());
    listenerContainer.setAfterReceivePostProcessors(new GUnzipPostProcessor());
    listenerContainer.setAdviceChain(commonConfig.retryInterceptor()); // retries 3 times, then RejectAndDontRequeueRecoverer
    return listenerContainer;
}
Here I use MANUAL acknowledgement, since I want to ACK/NACK the message only if it was processed successfully in the last part of the IntegrationFlow.
In case the message cannot be deserialized, the retryInterceptor is invoked, but after exhausting all the retries I need to be able to manually NACK the message. I expected to do this with the setErrorChannel method on the adapter, but I cannot get the AMQP channel headers in manualNack.
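For reference, a retryInterceptor like the one referenced in the comment above could be built with RetryInterceptorBuilder (a sketch; the backoff values are illustrative assumptions):

@Bean
public RetryOperationsInterceptor retryInterceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(3) // "retries 3 times" per the comment above
            .backOffOptions(1000, 2.0, 10000) // initial interval, multiplier, max interval (ms)
            .recoverer(new RejectAndDontRequeueRecoverer())
            .build();
}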
Is this the proper way to manually NACK a message from AmqpInboundChannelAdapter?
UPDATE
I guess this is my current solution, but I don't know if it is good enough:
private ErrorMessageStrategy nackStrategy() {
    return (throwable, attributes) -> {
        Object inputMessage = attributes.getAttribute(ErrorMessageUtils.INPUT_MESSAGE_CONTEXT_KEY);
        return new ErrorMessage(throwable, ((Message) inputMessage).getHeaders());
    };
}

@Bean
public AmqpInboundChannelAdapter someInboundAdapter() {
    AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(someListenerContainer());
    adapter.setRecoveryCallback(new ErrorMessageSendingRecoverer(manualNackChannel(), nackStrategy()));
    adapter.setRetryTemplate(commonConfig.retryTemplate());
    return adapter;
}

in case that message cannot be deserialized
Since the AMQP message cannot be deserialized, the Spring Message isn't created and therefore there is no AmqpHeaders.CHANNEL header.
I'm not sure, though, how that ErrorMessageSendingRecoverer can help you here, because deserialization really happens at the SimpleMessageListenerContainer level, a bit earlier than onMessage() in the AmqpInboundChannelAdapter.
Not sure yet how to help you, but maybe you can share some simple Spring Boot project to play with from our side? Thanks
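As a side note, one way to sidestep this (a sketch and an assumption, not part of the answer above): leave the container on its default message converter so the payload reaches the flow as a raw byte[], and let the JsonToObjectTransformer inside the flow do the JSON parsing; a parse failure then happens after the Spring Message with its AMQP headers exists:

@Bean
public SimpleMessageListenerContainer someListenerContainer() {
    SimpleMessageListenerContainer listenerContainer =
            new SimpleMessageListenerContainer(commonConfig.connectionFactory());
    listenerContainer.setQueues(someQueue());
    listenerContainer.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    // no Jackson2JsonMessageConverter here: the adapter produces a Message with a
    // byte[] payload plus the AMQP headers, and JsonToObjectTransformer in the flow
    // performs the deserialization, so its failures can reach the error path
    return listenerContainer;
}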

Here is the full working code for this example. You can test ACK/NACK via three REST endpoints:
http://localhost:8080/sendForAck -> sends a SomeObject to the queue proba, transforms it, forwards it to the exchange probaEx and ACKs it after that
http://localhost:8080/sendForNack -> sends a malformed byte[] message which cannot be deserialized and will be NACK-ed
http://localhost:8080/sendForNack2 -> creates a malformed JSON message which will be NACK-ed with an InvalidFormatException
@Controller
@EnableAutoConfiguration
@Configuration
public class SampleController {

    @Autowired
    public RabbitTemplate rabbitTemplate;

    @RequestMapping("/sendForAck")
    @ResponseBody
    String sendForAck() {
        SomeObject s = new SomeObject();
        s.setId(2);
        rabbitTemplate.convertAndSend("", "proba", s);
        return "Sent for ACK!";
    }

    @RequestMapping("/sendForNack")
    @ResponseBody
    String sendForNack() {
        rabbitTemplate.convertAndSend("", "proba", new byte[]{1, 2, 3});
        return "Sent for NACK!";
    }

    @RequestMapping("/sendForNack2")
    @ResponseBody
    String sendForNack2() {
        MessageProperties p = new MessageProperties();
        p.getHeaders().put("__TypeId__", "SampleController$SomeObject");
        p.setDeliveryMode(MessageDeliveryMode.PERSISTENT);
        p.setPriority(0);
        p.setContentEncoding("UTF-8");
        p.setContentType("application/json");
        rabbitTemplate.send("", "proba", new org.springframework.amqp.core.Message("{\"id\":\"abc\"}".getBytes(), p));
        return "Sent for NACK2!";
    }

    static class SomeObject {

        private Integer id;

        public Integer getId() { return id; }

        public void setId(Integer id) { this.id = id; }

        @Override
        public String toString() {
            return "SomeObject{" +
                    "id=" + id +
                    '}';
        }
    }
    @Bean
    public IntegrationFlow someFlow() {
        return IntegrationFlows
                .from(someInboundAdapter())
                .transform(new JsonToObjectTransformer(SomeObject.class))
                .filter((SomeObject s) -> s.getId() != null, f -> f.discardChannel(manualNackChannel()))
                .transform((SomeObject s) -> { s.setId(s.getId() * 2); return s; })
                .handle(amqpOutboundEndpoint())
                .get();
    }

    @Bean
    public MessageChannel manualNackChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel manualAckChannel() {
        return new DirectChannel();
    }

    @ServiceActivator(inputChannel = "manualNackChannel")
    public void manualNack(@Header(AmqpHeaders.CHANNEL) Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) Long tag, @Payload Object p) throws IOException {
        channel.basicNack(tag, false, false);
        System.out.println("NACKED " + p);
    }

    @ServiceActivator(inputChannel = "manualAckChannel")
    public void manualAck(@Header(AmqpHeaders.CHANNEL) Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) Long tag, @Payload Object p) throws IOException {
        channel.basicAck(tag, false);
        System.out.println("ACKED " + p);
    }

    private ErrorMessageStrategy nackStrategy() {
        return (throwable, attributes) -> {
            Message inputMessage = (Message) attributes.getAttribute(ErrorMessageUtils.INPUT_MESSAGE_CONTEXT_KEY);
            return new ErrorMessage(throwable, inputMessage.getHeaders());
        };
    }

    @Bean
    public AmqpInboundChannelAdapter someInboundAdapter() {
        AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(someListenerContainer());
        adapter.setRecoveryCallback(new ErrorMessageSendingRecoverer(manualNackChannel(), nackStrategy()));
        adapter.setRetryTemplate(retryTemplate());
        return adapter;
    }

    @Bean
    public RetryTemplate retryTemplate() {
        RetryTemplate template = new RetryTemplate();
        ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
        backOffPolicy.setInitialInterval(10);
        backOffPolicy.setMaxInterval(5000);
        backOffPolicy.setMultiplier(4);
        template.setBackOffPolicy(backOffPolicy);
        SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
        retryPolicy.setMaxAttempts(4);
        template.setRetryPolicy(retryPolicy);
        return template;
    }
    @Bean
    public AmqpOutboundEndpoint amqpOutboundEndpoint() {
        AmqpOutboundEndpoint outboundEndpoint = new AmqpOutboundEndpoint(ackTemplate());
        outboundEndpoint.setConfirmAckChannel(manualAckChannel());
        outboundEndpoint.setConfirmCorrelationExpressionString("#root");
        outboundEndpoint.setExchangeName("probaEx");
        return outboundEndpoint;
    }

    @Bean
    public MessageConverter jackson2JsonConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public RabbitTemplate ackTemplate() {
        RabbitTemplate ackTemplate = new RabbitTemplate(connectionFactory());
        ackTemplate.setMessageConverter(jackson2JsonConverter());
        return ackTemplate;
    }

    @Bean
    public Queue someQueue() {
        return QueueBuilder.nonDurable("proba").build();
    }

    @Bean
    public Exchange someExchange() {
        return ExchangeBuilder.fanoutExchange("probaEx").build();
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory();
        factory.setHost("10.10.121.137");
        factory.setPort(35672);
        factory.setUsername("root");
        factory.setPassword("123456");
        factory.setPublisherConfirms(true);
        return factory;
    }

    @Bean
    public SimpleMessageListenerContainer someListenerContainer() {
        SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer(connectionFactory());
        listenerContainer.setQueues(someQueue());
        listenerContainer.setMessageConverter(jackson2JsonConverter());
        listenerContainer.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        return listenerContainer;
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(SampleController.class, args);
    }
}
Still, the question remains whether this private ErrorMessageStrategy nackStrategy() could be written in a better way.
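One modest hardening, as a sketch (not a confirmed improvement): guard against the input-message attribute being absent, so the cast cannot throw a NullPointerException:

private ErrorMessageStrategy nackStrategy() {
    return (throwable, attributes) -> {
        Object inputMessage = attributes == null ? null
                : attributes.getAttribute(ErrorMessageUtils.INPUT_MESSAGE_CONTEXT_KEY);
        if (inputMessage instanceof Message) {
            // normal case: propagate the original headers (channel, delivery tag)
            return new ErrorMessage(throwable, ((Message<?>) inputMessage).getHeaders());
        }
        // fallback: no input message was captured, so send the bare error
        return new ErrorMessage(throwable);
    };
}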

Related

Client hangs on sending messages to RabbitMQ with Java client

I've implemented a RabbitMQ publisher and consumer in a reactive manner with Java, but my publishing functionality hangs the channel. The queue itself works fine: declaring a queue and consuming both succeed, and I've verified this with the admin management UI. When attempting to send more messages I don't see any more log lines like "queue declare success" or "delivering message to exchange...". By the way, I know I do not need declareQueue in deliver(), but I added it to verify whether communication on this particular channel works.
My code is:
@Slf4j
@Component
public class RabbitConfigurator {

    private TasksQueueConfig cfg;
    private ReceiverOptions recOpts;
    private List<Address> addresses;
    private Utils.ExceptionFunction<ConnectionFactory, Connection> connSupplier;

    public RabbitConfigurator(TasksQueueConfig cfg) {
        this.cfg = cfg;
        addresses = cfg
                .getHosts()
                .stream()
                .map(Address::new)
                .collect(Collectors.toList());
        connSupplier = cf -> {
            LOG.info("initializing new RabbitMQ connection");
            return cf.newConnection(addresses, "dmTasksProc");
        };
    }

    @Bean
    public ConnectionFactory rabbitMQConnectionFactory() {
        ConnectionFactory cf = new ConnectionFactory();
        cf.setHost(cfg.getHosts().get(0));
        cf.setPort(5672);
        cf.setUsername(cfg.getUsername());
        cf.setPassword(cfg.getPassword());
        return cf;
    }

    @Bean
    public Sender sender(ConnectionFactory connFactory) {
        SenderOptions sendOpts = new SenderOptions()
                .connectionClosingTimeout(Duration.parse(cfg.getConnectionTimeout()))
                .connectionFactory(connFactory)
                .connectionSupplier(connSupplier)
                .connectionSubscriptionScheduler(Schedulers.elastic());
        return RabbitFlux.createSender(sendOpts);
    }

    @Bean
    public Receiver receiver(ConnectionFactory connFactory) {
        ReceiverOptions recOpts = new ReceiverOptions()
                .connectionClosingTimeout(Duration.parse(cfg.getConnectionTimeout()))
                .connectionFactory(connFactory)
                .connectionSupplier(connSupplier)
                .connectionSubscriptionScheduler(Schedulers.elastic());
        return RabbitFlux.createReceiver(recOpts);
    }

    @Bean
    public Flux<Delivery> deliveryFlux(Receiver receiver) {
        return receiver.consumeAutoAck(cfg.getName(), new ConsumeOptions().qos(cfg.getPrefetchCount()));
    }

    @Bean
    public AmqpAdmin rabbitAmqpAdmin(ConnectionFactory connFactory) {
        return new RabbitAdmin(new CachingConnectionFactory(connFactory));
    }
}
and the consumer/publisher:
@Slf4j
@Service
public class TasksQueue implements DisposableBean {

    private TasksQueueConfig cfg;
    private ObjectMapper mapper;
    private Flux<Delivery> deliveryFlux;
    private Receiver receiver;
    private Sender sender;
    private Disposable consumer;

    public TasksQueue(TasksQueueConfig cfg, AmqpAdmin amqpAdmin, ObjectMapper mapper, Flux<Delivery> deliveryFlux,
                      Receiver receiver, Sender sender) {
        this.cfg = cfg;
        this.mapper = mapper;
        this.deliveryFlux = deliveryFlux;
        this.receiver = receiver;
        this.sender = sender;
        amqpAdmin.declareQueue(new Queue(cfg.getName(), false, false, false));
        consumer = consume();
    }

    public Mono<Void> deliver(Flux<Task> tasks) {
        var pub = sender.sendWithPublishConfirms(
                tasks.map(task -> {
                    try {
                        String exchange = "";
                        LOG.debug("delivering message to exchange='{}', routingKey='{}'", exchange, cfg.getName());
                        return new OutboundMessage(exchange, cfg.getName(), mapper.writeValueAsBytes(task));
                    } catch (JsonProcessingException ex) {
                        throw Exceptions.propagate(ex);
                    }
                }));
        return sender.declareQueue(QueueSpecification.queue(cfg.getName()))
                .flatMap(declareOk -> {
                    LOG.info("queue declare success");
                    return Mono.just(declareOk);
                })
                .thenMany(pub)
                .doOnError(JsonProcessingException.class, ex -> LOG.error("Cannot prepare queue message:", ex))
                .doOnError(ex -> LOG.error("Failed to send task to the queue:", ex))
                .map(res -> {
                    if (res.isAck()) {
                        LOG.info("Message {} sent successfully", new String(res.getOutboundMessage().getBody()));
                        return res;
                    } else {
                        LOG.info("todo");
                        return res;
                    }
                })
                .then();
    }

    private Disposable consume() {
        return deliveryFlux
                .retryWhen(Retry.fixedDelay(10, Duration.ofSeconds(1)))
                .doOnError(err -> {
                    LOG.error("tasks consumer error", err);
                })
                .subscribe(m -> {
                    LOG.info("Received message {}", new String(m.getBody()));
                });
    }

    @Override
    public void destroy() throws Exception {
        LOG.info("Cleaning up tasks queue resources");
        consumer.dispose();
        receiver.close();
        sender.close();
    }
}
Five minutes after attempting to send a message I get these log lines:
r.r.ChannelCloseHandlers$SenderChannelCloseHandler:47: closing channel 1 by signal cancel
r.r.ChannelCloseHandlers$SenderChannelCloseHandler:53: Channel 1 didn't close normally: null
Big thanks in advance for any input!

How to use AmazonSQS listener with two accounts

I have an application with two worker classes. I want them to pull from AWS SQS, but from two different accounts.
I am using @SQSListener to achieve this. I am having trouble setting the right AmazonSQS client for each queue. I tried to use a custom destinationResolver, but it also cannot access the right AmazonSQS client bean.
I'm using AmazonSQSAsync, maybe this is part of the problem. With the custom destination resolver I am getting access denied for one of the queues.
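For context, the two workers consume roughly like this (a sketch; the queue names and class names are assumptions):

@Component
public class WorkerOne {

    // expected to poll the queue in the first AWS account
    @SqsListener("tl-account-one-queue")
    public void onMessage(String payload) {
        // process message from account one
    }
}

@Component
public class WorkerTwo {

    // expected to poll the queue in the second AWS account
    @SqsListener("account-two-queue")
    public void onMessage(String payload) {
        // process message from account two
    }
}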
My config code:
@Bean(destroyMethod = "shutdown")
@Primary
public AmazonSQSAsync amazonSQS() {
    AmazonSQSAsync amazonSQSAsyncClient = new AmazonSQSAsyncClient(new AWSCredentialsProvider() {
        public void refresh() {}

        public AWSCredentials getCredentials() {
            return new AWSCredentials() {
                public String getAWSSecretKey() { return secretKey; }
                public String getAWSAccessKeyId() { return accessKey; }
            };
        }
    });
    QueueBufferConfig config = new QueueBufferConfig();
    config.setMaxBatchOpenMs(maxBatchOpenMs);
    config.setMaxBatchSize(maxBatchSize);
    LOGGER.info("SQS Client Initialized Successfully");
    return new AmazonSQSBufferedAsyncClient(amazonSQSAsyncClient, config);
}

@Bean(destroyMethod = "shutdown")
@Qualifier("workerSQS")
public AmazonSQSAsync workerSQS() {
    final ClientConfiguration cc = new ClientConfiguration();
    cc.setConnectionTimeout(listenerConnectionTimeout);
    cc.setSocketTimeout(listenerSocketTimeout);
    cc.setMaxConnections(listenerMaxConnection);
    cc.setRequestTimeout(listenerRequestTimeout);
    cc.setUseReaper(true);
    //cc.setConnectionMaxIdleMillis();
    AWSCredentialsProvider awsCredentialsProvider = new AWSCredentialsProvider() {
        public void refresh() {}

        public AWSCredentials getCredentials() {
            return new AWSCredentials() {
                public String getAWSSecretKey() { return routingSecretKey; }
                public String getAWSAccessKeyId() { return routingAccessKey; }
            };
        }
    };
    AmazonSQSAsync amazonSQSAsyncClient = AmazonSQSAsyncClientBuilder.standard()
            .withCredentials(awsCredentialsProvider)
            .withRegion(Regions.US_EAST_1)
            .withClientConfiguration(cc)
            .build();
    // See https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-client-side-buffering-request-batching.html
    // for QueueBufferConfig configuration parameters
    QueueBufferConfig config = new QueueBufferConfig();
    config.setLongPoll(true);
    return new AmazonSQSBufferedAsyncClient(amazonSQSAsyncClient, config);
}

@Bean
public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory() {
    SimpleMessageListenerContainerFactory msgListenerContainerFactory = new SimpleMessageListenerContainerFactory();
    msgListenerContainerFactory.setBackOffTime(listenerBackOffTime);
    msgListenerContainerFactory.setWaitTimeOut(listenerWaitTimeOut);
    msgListenerContainerFactory.setVisibilityTimeout(listenerVisibilityTimeOut);
    msgListenerContainerFactory.setMaxNumberOfMessages(listenerMaxMessagesPerPoll);
    msgListenerContainerFactory.setDestinationResolver(destinationResolver());
    return msgListenerContainerFactory;
}

@Bean
public CustomDestinationResolver destinationResolver() {
    return new CustomDestinationResolver();
}

@Component
public static class CustomDestinationResolver implements DestinationResolver {

    @Autowired
    private AmazonSQS amazonSQS;

    @Autowired
    @Qualifier("workerSQS")
    private AmazonSQSAsync amazonSQSAsync;

    @Override
    public String resolveDestination(String name) throws DestinationResolutionException {
        String queueName = name;
        if (queueName.startsWith("tl")) {
            try {
                GetQueueUrlResult getQueueUrlResult = amazonSQSAsync.getQueueUrl(new GetQueueUrlRequest(name));
                return getQueueUrlResult.getQueueUrl();
            } catch (QueueDoesNotExistException var4) {
                throw new DestinationResolutionException(var4.getMessage(), var4);
            }
        } else {
            try {
                GetQueueUrlResult getQueueUrlResult = amazonSQS.getQueueUrl(new GetQueueUrlRequest(name));
                return getQueueUrlResult.getQueueUrl();
            } catch (QueueDoesNotExistException var4) {
                throw new DestinationResolutionException(var4.getMessage(), var4);
            }
        }
    }
}
I was not able to do it with the SQS listener, so I tried the JMS listener instead and it worked.
I simply created two JMS listener container factories and used them; each listener has a different AWS account.
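A sketch of that two-factory JMS setup (bean names, queue names and the client wiring are assumptions; it relies on the SQSConnectionFactory from amazon-sqs-java-messaging-lib):

@Bean
public DefaultJmsListenerContainerFactory accountAFactory(
        @Qualifier("amazonSQSAccountA") AmazonSQSAsync sqsAccountA) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(new SQSConnectionFactory(new ProviderConfiguration(), sqsAccountA));
    return factory;
}

@Bean
public DefaultJmsListenerContainerFactory accountBFactory(
        @Qualifier("amazonSQSAccountB") AmazonSQSAsync sqsAccountB) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(new SQSConnectionFactory(new ProviderConfiguration(), sqsAccountB));
    return factory;
}

// each listener then selects its factory explicitly:
@JmsListener(destination = "queue-a", containerFactory = "accountAFactory")
public void receiveFromAccountA(String message) { /* ... */ }

@JmsListener(destination = "queue-b", containerFactory = "accountBFactory")
public void receiveFromAccountB(String message) { /* ... */ }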

Spring Integration FTP remove local files after use (Spring Boot)

I am trying to write a program that can take a file from one server via FTP and place it on another server via FTP. However, I am having issues deleting the local file after it has been written. Being able to save it locally is not an issue as long as it is temporary. I have tried using an ExpressionEvaluatingRequestHandlerAdvice with an onSuccessExpression, but I could not get it to actually use the expression. The code is here:
@Configuration
@EnableConfigurationProperties(FTPConnectionProperties.class)
public class FTPConfiguration {

    private FTPConnectionProperties ftpConnectionProperties;

    public FTPConfiguration(FTPConnectionProperties ftpConnectionProperties) {
        this.ftpConnectionProperties = ftpConnectionProperties;
    }

    @Bean
    public SessionFactory<FTPFile> ftpInputSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost(ftpConnectionProperties.getInputServer());
        sf.setUsername(ftpConnectionProperties.getInputFtpUser());
        sf.setPassword(ftpConnectionProperties.getInputFtpPassword());
        return new CachingSessionFactory<>(sf);
    }

    @Bean
    public SessionFactory<FTPFile> ftpOutputSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost(ftpConnectionProperties.getOutputServer());
        sf.setUsername(ftpConnectionProperties.getOutputFtpUser());
        sf.setPassword(ftpConnectionProperties.getOutputFtpPassword());
        return new CachingSessionFactory<>(sf);
    }

    @Bean
    public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
        FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(ftpInputSessionFactory());
        fileSynchronizer.setDeleteRemoteFiles(true);
        fileSynchronizer.setRemoteDirectory(ftpConnectionProperties.getInputDirectory());
        fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.TIF"));
        return fileSynchronizer;
    }

    @Bean
    @InboundChannelAdapter(channel = "input", poller = @Poller(fixedDelay = "5000"))
    public MessageSource<File> ftpMessageSource() {
        FtpInboundFileSynchronizingMessageSource source = new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
        source.setLocalDirectory(new File("ftp-inbound"));
        source.setAutoCreateLocalDirectory(true);
        source.setLocalFilter(new FileSystemPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), ""));
        return source;
    }

    @Bean
    @ServiceActivator(inputChannel = "input")
    public MessageHandler handler() {
        FtpMessageHandler handler = new FtpMessageHandler(ftpOutputSessionFactory());
        handler.setRemoteDirectoryExpression(new LiteralExpression(ftpConnectionProperties.getOutputDirectory()));
        handler.setFileNameGenerator(message -> {
            if (message.getPayload() instanceof File) {
                return ((File) message.getPayload()).getName();
            } else {
                throw new IllegalArgumentException("File expected as payload.");
            }
        });
        return handler;
    }
}
It is handling the remote files exactly as expected, deleting the remote file from the source and putting it into the output, but it is not removing the local file after use.
I would suggest you make that input channel a PublishSubscribeChannel and add one more simple subscriber:
@Bean
public PublishSubscribeChannel input() {
    return new PublishSubscribeChannel();
}

@Bean
@ServiceActivator(inputChannel = "input")
public MessageHandler handler() {
    ...
}

@Bean
@ServiceActivator(inputChannel = "input")
public MessageHandler deleteLocalFileService() {
    return m -> ((File) m.getPayload()).delete();
}
This way the same message with the File payload is going to be sent first to your FtpMessageHandler and only after that to this new deleteLocalFileService, which removes the local file based on the payload.
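For completeness, the ExpressionEvaluatingRequestHandlerAdvice route the question mentions would look roughly like this (a sketch; the advice only fires if it is actually attached to the handler's endpoint, e.g. via the adviceChain attribute, and depending on the Spring Integration version the setter may be setOnSuccessExpressionString or setOnSuccessExpression):

@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteFileAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    // evaluated against the request message after a successful transfer;
    // the payload is the local java.io.File, so this deletes it
    advice.setOnSuccessExpressionString("payload.delete()");
    return advice;
}

@Bean
@ServiceActivator(inputChannel = "input", adviceChain = "deleteFileAdvice")
public MessageHandler handler() {
    ... // the FtpMessageHandler from above
}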
A simple solution to fetch a file from an SFTP server and then move that file to another folder with a different name:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    if (sftpServerProperties.getSftpPrivateKey() != null) {
        factory.setPrivateKey(sftpServerProperties.getSftpPrivateKey());
        factory.setPrivateKeyPassphrase(sftpServerProperties.getSftpPrivateKeyPassphrase());
    } else {
        factory.setPassword(sftpServerProperties.getPassword());
    }
    factory.setHost(sftpServerProperties.getSftpHost());
    factory.setPort(sftpServerProperties.getSftpPort());
    factory.setUser(sftpServerProperties.getSftpUser());
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setRemoteDirectory(sftpServerProperties.getSftpRemoteDirectoryDownload());
    fileSynchronizer.setFilter(new SftpSimplePatternFileListFilter(sftpServerProperties.getSftpRemoteDirectoryDownloadFilter()));
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "fromSftpChannel", poller = @Poller(cron = "*/10 * * * * *"))
public MessageSource<File> sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source = new SftpInboundFileSynchronizingMessageSource(
            sftpInboundFileSynchronizer());
    source.setLocalDirectory(util.createDirectory(Constants.FILES_DIRECTORY));
    source.setAutoCreateLocalDirectory(true);
    return source;
}

@Bean
@ServiceActivator(inputChannel = "fromSftpChannel")
public MessageHandler resultFileHandler() {
    return (Message<?> message) -> {
        String csvFilePath = util.getDirectory(Constants.FILES_DIRECTORY) + Constants.INSIDE + message.getHeaders().get("file_name");
        util.readCSVFile(csvFilePath, String.valueOf(message.getHeaders().get("file_name")));
        File file = (File) message.getPayload();
        File newFile = new File(file.getPath() + System.currentTimeMillis());
        try {
            FileUtils.copyFile(file, newFile);
            sftpGateway.sendToSftp(newFile);
        } catch (Exception e) {
            e.printStackTrace();
        }
        if (file.exists()) {
            file.delete();
        }
        if (newFile.exists()) {
            newFile.delete();
        }
    };
}

@Bean
@ServiceActivator(inputChannel = "toSftpChannelDest")
public MessageHandler handlerOrderBackUp() {
    SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    handler.setAutoCreateDirectory(true);
    handler.setRemoteDirectoryExpression(new LiteralExpression(sftpServerProperties.getSftpRemoteBackupDirectory()));
    return handler;
}

@MessagingGateway
public interface SFTPGateway {

    @Gateway(requestChannel = "toSftpChannelDest")
    void sendToSftp(File file);
}

How to mock WebFluxRequestExecutingMessageHandler with MockIntegrationContext.substituteMessageHandlerFor

I have implemented an IntegrationFlow where I want to do the following tasks:
Poll for files from a directory
Transform the file content to a string
Send the string via WebFluxRequestExecutingMessageHandler to a REST endpoint and use an AdviceChain to handle success and error responses
Implementation
@Configuration
@Slf4j
public class JsonToRestIntegration {

    @Autowired
    private LoadBalancerExchangeFilterFunction lbFunction;

    @Value("${json_folder}")
    private String jsonPath;

    @Value("${json_success_folder}")
    private String jsonSuccessPath;

    @Value("${json_error_folder}")
    private String jsonErrorPath;

    @Value("${rest-service-url}")
    private String restServiceUrl;

    @Bean
    public DirectChannel httpResponseChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel successChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel failureChannel() {
        return new DirectChannel();
    }

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata poller() {
        return Pollers.fixedDelay(1000).get();
    }

    @Bean
    public IntegrationFlow jsonFileToRestFlow() {
        return IntegrationFlows
                .from(fileReadingMessageSource(), e -> e.id("fileReadingEndpoint"))
                .transform(org.springframework.integration.file.dsl.Files.toStringTransformer())
                .enrichHeaders(s -> s.header("Content-Type", "application/json; charset=utf8"))
                .handle(reactiveOutbound())
                .log()
                .channel(httpResponseChannel())
                .get();
    }

    @Bean
    public FileReadingMessageSource fileReadingMessageSource() {
        FileReadingMessageSource source = new FileReadingMessageSource();
        source.setDirectory(new File(jsonPath));
        source.setFilter(new SimplePatternFileListFilter("*.json"));
        source.setUseWatchService(true);
        source.setWatchEvents(FileReadingMessageSource.WatchEventType.CREATE);
        return source;
    }

    @Bean
    public MessageHandler reactiveOutbound() {
        WebClient webClient = WebClient.builder()
                .baseUrl("http://jsonservice")
                .filter(lbFunction)
                .build();
        WebFluxRequestExecutingMessageHandler handler = new WebFluxRequestExecutingMessageHandler(restServiceUrl, webClient);
        handler.setHttpMethod(HttpMethod.POST);
        handler.setCharset(StandardCharsets.UTF_8.displayName());
        handler.setOutputChannel(httpResponseChannel());
        handler.setExpectedResponseType(String.class);
        handler.setAdviceChain(singletonList(expressionAdvice()));
        return handler;
    }

    public Advice expressionAdvice() {
        ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
        advice.setTrapException(true);
        advice.setSuccessChannel(successChannel());
        advice.setOnSuccessExpressionString("payload + ' war erfolgreich'");
        advice.setFailureChannel(failureChannel());
        advice.setOnFailureExpressionString("payload + ' war nicht erfolgreich'");
        return advice;
    }

    @Bean
    public IntegrationFlow loggingFlow() {
        return IntegrationFlows.from(httpResponseChannel())
                .handle(message -> {
                    String originalFileName = (String) message.getHeaders().get(FileHeaders.FILENAME);
                    log.info("some log");
                })
                .get();
    }

    @Bean
    public IntegrationFlow successFlow() {
        return IntegrationFlows.from(successChannel())
                .handle(message -> {
                    MessageHeaders messageHeaders = ((AdviceMessage) message).getInputMessage().getHeaders();
                    File originalFile = (File) messageHeaders.get(ORIGINAL_FILE);
                    String originalFileName = (String) messageHeaders.get(FILENAME);
                    if (originalFile != null && originalFileName != null) {
                        File jsonSuccessFolder = new File(jsonSuccessPath);
                        File jsonSuccessFile = new File(jsonSuccessFolder, originalFileName);
                        try {
                            Files.move(originalFile.toPath(), jsonSuccessFile.toPath());
                        } catch (IOException e) {
                            log.error("some log", e);
                        }
                    }
                })
                .get();
    }

    @Bean
    public IntegrationFlow failureFlow() {
        return IntegrationFlows.from(failureChannel())
                .handle(message -> {
                    Message<?> failedMessage = ((MessagingException) message.getPayload()).getFailedMessage();
                    if (failedMessage != null) {
                        File originalFile = (File) failedMessage.getHeaders().get(FileHeaders.ORIGINAL_FILE);
                        String originalFileName = (String) failedMessage.getHeaders().get(FileHeaders.FILENAME);
                        if (originalFile != null && originalFileName != null) {
                            File jsonErrorFolder = new File(jsonErrorPath);
                            File jsonErrorFile = new File(jsonErrorFolder, originalFileName);
                            try {
                                Files.move(originalFile.toPath(), jsonErrorFile.toPath());
                            } catch (IOException e) {
                                log.error("some log", e);
                            }
                        }
                    }
                })
                .get();
    }
}
So far it seems to work in production. In the test I want to do the following steps:
Copy JSON files to the input directory
Start the polling for the JSON files
Do assertions on the HTTP response from the WebFluxRequestExecutingMessageHandler, which is routed through my advice chain
But I'm struggling in the test with the following tasks:
Mocking the WebFluxRequestExecutingMessageHandler with the MockIntegrationContext.substituteMessageHandlerFor() method
Manually starting the polling of the JSON files
Test
@RunWith(SpringRunner.class)
@SpringIntegrationTest()
@Import({JsonToRestIntegration.class})
@JsonTest
public class JsonToRestIntegrationTest {

    @Autowired
    public DirectChannel httpResponseChannel;

    @Value("${json_folder}")
    private String jsonPath;

    @Value("${json_success_folder}")
    private String jsonSuccessPath;

    @Value("${json_error_folder}")
    private String jsonErrorPath;

    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @Autowired
    private MessageHandler reactiveOutbound;

    @Before
    public void setUp() throws Exception {
        Files.createDirectories(Paths.get(jsonPath));
        Files.createDirectories(Paths.get(jsonSuccessPath));
        Files.createDirectories(Paths.get(jsonErrorPath));
    }

    @After
    public void tearDown() throws Exception {
        FileUtils.deleteDirectory(new File(jsonPath));
        FileUtils.deleteDirectory(new File(jsonSuccessPath));
        FileUtils.deleteDirectory(new File(jsonErrorPath));
    }

    @Test
    public void shouldSendJsonToRestEndpointAndReceiveOK() throws Exception {
        File jsonFile = new ClassPathResource("/test.json").getFile();
        Path targetFilePath = Paths.get(jsonPath + "/" + jsonFile.getName());
        Files.copy(jsonFile.toPath(), targetFilePath);
        httpResponseChannel.subscribe(httpResponseHandler());
        this.mockIntegrationContext.substituteMessageHandlerFor("", reactiveOutbound);
    }

    private MessageHandler httpResponseHandler() {
        return message -> Assert.assertThat(message.getPayload(), is(notNullValue()));
    }

    @Configuration
    @Import({JsonToRestIntegration.class})
    public static class TestConfig { // renamed: an inner class cannot share the enclosing class's name

        @Autowired
        public MessageChannel httpResponseChannel;

        @Bean
        public MessageHandler reactiveOutbound() {
            ArgumentCaptor<Message<?>> messageArgumentCaptor = ArgumentCaptor.forClass(Message.class);
            MockMessageHandler mockMessageHandler = mockMessageHandler(messageArgumentCaptor).handleNextAndReply(m -> m);
            mockMessageHandler.setOutputChannel(httpResponseChannel);
            return mockMessageHandler;
        }
    }
}
Updated working example with a mocked WebFlux web client:
Implementation
public class JsonToRestIntegration {

    private final LoadBalancerExchangeFilterFunction lbFunction;
    private final BatchConfigurationProperties batchConfigurationProperties;

    @Bean
    public DirectChannel httpResponseChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel errorChannel() {
        return new DirectChannel();
    }

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata poller() {
        return Pollers.fixedDelay(100, TimeUnit.MILLISECONDS).get();
    }

    @Bean
    public IntegrationFlow jsonFileToRestFlow() {
        return IntegrationFlows
                .from(fileReadingMessageSource(), e -> e.id("fileReadingEndpoint"))
                .transform(org.springframework.integration.file.dsl.Files.toStringTransformer("UTF-8"))
                .enrichHeaders(s -> s.header("Content-Type", "application/json; charset=utf8"))
                .handle(reactiveOutbound())
                .channel(httpResponseChannel())
                .get();
    }

    @Bean
    public FileReadingMessageSource fileReadingMessageSource() {
        FileReadingMessageSource source = new FileReadingMessageSource();
        source.setDirectory(new File(batchConfigurationProperties.getJsonImportFolder()));
        source.setFilter(new SimplePatternFileListFilter("*.json"));
        source.setUseWatchService(true);
        source.setWatchEvents(FileReadingMessageSource.WatchEventType.CREATE);
        return source;
    }

    @Bean
    public WebFluxRequestExecutingMessageHandler reactiveOutbound() {
        WebClient webClient = WebClient.builder()
                .baseUrl("http://service")
                .filter(lbFunction)
                .build();
        WebFluxRequestExecutingMessageHandler handler = new WebFluxRequestExecutingMessageHandler(batchConfigurationProperties.getServiceUrl(), webClient);
        handler.setHttpMethod(HttpMethod.POST);
        handler.setCharset(StandardCharsets.UTF_8.displayName());
        handler.setOutputChannel(httpResponseChannel());
        handler.setExpectedResponseType(String.class);
        handler.setAdviceChain(singletonList(expressionAdvice()));
        return handler;
    }

    public Advice expressionAdvice() {
        ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
        advice.setTrapException(true);
        advice.setFailureChannel(errorChannel());
        return advice;
    }

    @Bean
    public IntegrationFlow responseFlow() {
        return IntegrationFlows.from(httpResponseChannel())
                .handle(message -> {
                    MessageHeaders messageHeaders = message.getHeaders();
                    File originalFile = (File) messageHeaders.get(ORIGINAL_FILE);
                    String originalFileName = (String) messageHeaders.get(FILENAME);
                    if (originalFile != null && originalFileName != null) {
                        File jsonSuccessFolder = new File(batchConfigurationProperties.getJsonSuccessFolder());
                        File jsonSuccessFile = new File(jsonSuccessFolder, originalFileName);
                        try {
                            Files.move(originalFile.toPath(), jsonSuccessFile.toPath());
                        } catch (IOException e) {
                            log.error("Could not move file", e);
                        }
                    }
                })
                .get();
    }

    @Bean
    public IntegrationFlow failureFlow() {
        return IntegrationFlows.from(errorChannel())
                .handle(message -> {
                    Message<?> failedMessage = ((MessagingException) message.getPayload()).getFailedMessage();
                    if (failedMessage != null) {
                        File originalFile = (File) failedMessage.getHeaders().get(ORIGINAL_FILE);
                        String originalFileName = (String) failedMessage.getHeaders().get(FILENAME);
                        if (originalFile != null && originalFileName != null) {
                            File jsonErrorFolder = new File(batchConfigurationProperties.getJsonErrorFolder());
                            File jsonErrorFile = new File(jsonErrorFolder, originalFileName);
                            try {
                                Files.move(originalFile.toPath(), jsonErrorFile.toPath());
                            } catch (IOException e) {
                                log.error("Could not move file {}", originalFileName, e);
                            }
                        }
                    }
                })
                .get();
    }
}
Test
@RunWith(SpringRunner.class)
@SpringIntegrationTest(noAutoStartup = "fileReadingEndpoint")
@Import({JsonToRestIntegration.class, BatchConfigurationProperties.class})
@JsonTest
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
public class JsonToRestIntegrationIT {

    private static final FilenameFilter JSON_FILENAME_FILTER = (dir, name) -> name.endsWith(".json");

    @Autowired
    private BatchConfigurationProperties batchConfigurationProperties;

    @Autowired
    private ObjectMapper om;

    @Autowired
    private MessageHandler reactiveOutbound;

    @Autowired
    private DirectChannel httpResponseChannel;

    @Autowired
    private DirectChannel errorChannel;

    @Autowired
    private FileReadingMessageSource fileReadingMessageSource;

    @Autowired
    private SourcePollingChannelAdapter fileReadingEndpoint;

    @MockBean
    private LoadBalancerExchangeFilterFunction lbFunction;

    private String jsonImportPath;
    private String jsonSuccessPath;
    private String jsonErrorPath;

    @Before
    public void setUp() throws Exception {
        jsonImportPath = batchConfigurationProperties.getJsonImportFolder();
        jsonSuccessPath = batchConfigurationProperties.getJsonSuccessFolder();
        jsonErrorPath = batchConfigurationProperties.getJsonErrorFolder();
        Files.createDirectories(Paths.get(jsonImportPath));
        Files.createDirectories(Paths.get(jsonSuccessPath));
        Files.createDirectories(Paths.get(jsonErrorPath));
    }

    @After
    public void tearDown() throws Exception {
        FileUtils.deleteDirectory(new File(jsonImportPath));
        FileUtils.deleteDirectory(new File(jsonSuccessPath));
        FileUtils.deleteDirectory(new File(jsonErrorPath));
    }

    @Test
    public void shouldMoveJsonFileToSuccessFolderWhenHttpResponseIsOk() throws Exception {
        final CountDownLatch latch = new CountDownLatch(1);
        httpResponseChannel.addInterceptor(new ChannelInterceptorAdapter() {
            @Override
            public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
                latch.countDown();
                super.postSend(message, channel, sent);
            }
        });
        errorChannel.addInterceptor(new ChannelInterceptorAdapter() {
            @Override
            public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
                fail();
            }
        });
        ClientHttpConnector httpConnector = new HttpHandlerConnector((request, response) -> {
            response.setStatusCode(HttpStatus.OK);
            response.getHeaders().setContentType(MediaType.APPLICATION_JSON_UTF8);
            DataBufferFactory bufferFactory = response.bufferFactory();
            String valueAsString = null;
            try {
                valueAsString = om.writeValueAsString(new ResponseDto("1"));
            } catch (JsonProcessingException e) {
                fail();
            }
            return response.writeWith(Mono.just(bufferFactory.wrap(valueAsString.getBytes())))
                    .then(Mono.defer(response::setComplete));
        });
        WebClient webClient = WebClient.builder()
                .clientConnector(httpConnector)
                .build();
        new DirectFieldAccessor(this.reactiveOutbound)
                .setPropertyValue("webClient", webClient);
        File jsonFile = new ClassPathResource("/test.json").getFile();
        Path targetFilePath = Paths.get(jsonImportPath + "/" + jsonFile.getName());
        Files.copy(jsonFile.toPath(), targetFilePath);
        fileReadingEndpoint.start();
        assertThat(latch.await(12, TimeUnit.SECONDS), is(true));
        File[] filesInJsonImportFolder = new File(jsonImportPath).listFiles(JSON_FILENAME_FILTER);
        assertThat(filesInJsonImportFolder, is(notNullValue()));
        assertThat(filesInJsonImportFolder.length, is(0));
        File[] filesInJsonSuccessFolder = new File(jsonSuccessPath).listFiles(JSON_FILENAME_FILTER);
        assertThat(filesInJsonSuccessFolder, is(notNullValue()));
        assertThat(filesInJsonSuccessFolder.length, is(1));
        File[] filesInJsonErrorFolder = new File(jsonErrorPath).listFiles(JSON_FILENAME_FILTER);
        assertThat(filesInJsonErrorFolder, is(notNullValue()));
        assertThat(filesInJsonErrorFolder.length, is(0));
    }

    @Test
    public void shouldMoveJsonFileToErrorFolderWhenHttpResponseIsNotOk() throws Exception {
        final CountDownLatch latch = new CountDownLatch(1);
        errorChannel.addInterceptor(new ChannelInterceptorAdapter() {
            @Override
            public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
                latch.countDown();
                super.postSend(message, channel, sent);
            }
        });
        httpResponseChannel.addInterceptor(new ChannelInterceptorAdapter() {
            @Override
            public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
                fail();
            }
        });
        ClientHttpConnector httpConnector = new HttpHandlerConnector((request, response) -> {
            response.setStatusCode(HttpStatus.BAD_REQUEST);
            response.getHeaders().setContentType(MediaType.APPLICATION_JSON_UTF8);
            DataBufferFactory bufferFactory = response.bufferFactory();
            return response.writeWith(Mono.just(bufferFactory.wrap("SOME BAD REQUEST".getBytes())))
                    .then(Mono.defer(response::setComplete));
        });
        WebClient webClient = WebClient.builder()
                .clientConnector(httpConnector)
                .build();
        new DirectFieldAccessor(this.reactiveOutbound)
                .setPropertyValue("webClient", webClient);
        File jsonFile = new ClassPathResource("/error.json").getFile();
        Path targetFilePath = Paths.get(jsonImportPath + "/" + jsonFile.getName());
        Files.copy(jsonFile.toPath(), targetFilePath);
        fileReadingEndpoint.start();
        assertThat(latch.await(11, TimeUnit.SECONDS), is(true));
        File[] filesInJsonImportFolder = new File(jsonImportPath).listFiles(JSON_FILENAME_FILTER);
        assertThat(filesInJsonImportFolder, is(notNullValue()));
        assertThat(filesInJsonImportFolder.length, is(0));
        File[] filesInJsonSuccessFolder = new File(jsonSuccessPath).listFiles(JSON_FILENAME_FILTER);
        assertThat(filesInJsonSuccessFolder, is(notNullValue()));
        assertThat(filesInJsonSuccessFolder.length, is(0));
        File[] filesInJsonErrorFolder = new File(jsonErrorPath).listFiles(JSON_FILENAME_FILTER);
        assertThat(filesInJsonErrorFolder, is(notNullValue()));
        assertThat(filesInJsonErrorFolder.length, is(1));
    }
}
this.mockIntegrationContext.substituteMessageHandlerFor("", reactiveOutbound);
The first parameter of this method is an endpoint id. (I guess we are just missing Javadocs there on those methods...).
So, what you need is something like this:
.handle(reactiveOutbound(), e -> e.id("webFluxEndpoint"))
And then in that test-case you do:
this.mockIntegrationContext.substituteMessageHandlerFor("webFluxEndpoint", reactiveOutbound);
You don't need to override the bean in the test class config. The MockMessageHandler can just be used in the test method body.
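For example, roughly like this in the test method (a sketch; the endpoint id follows the suggestion above):

@Test
public void shouldCaptureRequestMessage() {
    ArgumentCaptor<Message<?>> captor = MockIntegration.messageArgumentCaptor();
    MessageHandler mockHandler = MockIntegration.mockMessageHandler(captor)
            .handleNextAndReply(m -> m); // echo the request back as the reply
    this.mockIntegrationContext.substituteMessageHandlerFor("webFluxEndpoint", mockHandler);
    // ... trigger the flow, then assert on captor.getValue() ...
}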
You poll files via .from(fileReadingMessageSource()). To control it manually, you need it stopped at the beginning. For this purpose you add an endpoint id again:
.from(fileReadingMessageSource(), e -> e.id("fileReadingEndpoint"))
And then in the test configuration you do this:
#SpringIntegrationTest(noAutoStartup = "fileReadingEndpoint")
Another approach for the WebFlux would be via customized WebClient to mock request to the server. For example:
ClientHttpConnector httpConnector = new HttpHandlerConnector((request, response) -> {
    response.setStatusCode(HttpStatus.OK);
    response.getHeaders().setContentType(MediaType.TEXT_PLAIN);
    DataBufferFactory bufferFactory = response.bufferFactory();
    return response.writeWith(Mono.just(bufferFactory.wrap("FOO\nBAR\n".getBytes())))
            .then(Mono.defer(response::setComplete));
});
WebClient webClient = WebClient.builder()
        .clientConnector(httpConnector)
        .build();
new DirectFieldAccessor(this.reactiveOutbound)
        .setPropertyValue("webClient", webClient);

Spring AMQP - Message re-queuing using dead letter mechanism with TTL

It's like "Houston, we have a problem here", where I need to schedule/delay a message for 5 minutes after it fails on the first attempt to process an event.
I have implemented a dead letter exchange for this scenario.
On failure, the messages route to the DLX --> retry queue and come back to the work queue after a TTL of 5 minutes for another attempt.
Here is the configuration I am using:
public class RabbitMQConfig {

    @Bean(name = "work")
    @Primary
    Queue workQueue() {
        return new Queue(WORK_QUEUE, true, false, false, null);
    }

    @Bean(name = "workExchange")
    @Primary
    TopicExchange workExchange() {
        return new TopicExchange(WORK_EXCHANGE, true, false);
    }

    @Bean
    Binding workBinding(Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(workQueue()).to(workExchange()).with("#");
    }

    @Bean(name = "retryExchange")
    FanoutExchange retryExchange() {
        return new FanoutExchange(RETRY_EXCHANGE, true, false);
    }

    @Bean(name = "retry")
    Queue retryQueue() {
        Map<String, Object> args = new HashMap<String, Object>();
        args.put("x-dead-letter-exchange", WORK_EXCHANGE);
        args.put("x-message-ttl", RETRY_DELAY); // delay of 5 min
        return new Queue(RETRY_QUEUE, true, false, false, args);
    }

    @Bean
    Binding retryBinding(Queue queue, FanoutExchange exchange) {
        return BindingBuilder.bind(retryQueue()).to(retryExchange());
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        return factory;
    }

    @Bean
    Consumer receiver() {
        return new Consumer();
    }

    @Bean
    MessageListenerAdapter listenerAdapter(Consumer receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }
}
Producer.java:
@GetMapping(path = "/hello")
public String sayHello() {
    // Producer operation
    String messages[];
    messages = new String[] {" hello "};
    for (int i = 0; i < 5; i++) {
        String message = util.getMessage(messages) + i;
        rabbitTemplate.convertAndSend("WorkExchange", "", message);
        System.out.println(" Sent '" + message + "'");
    }
    return "hello";
}
Consumer.java:
public class Consumer {

    @RabbitListener(queues = "WorkQueue")
    public void receiveMessage(String message, Channel channel,
            @Header(AmqpHeaders.DELIVERY_TAG) Long tag) throws IOException, InterruptedException {
        try {
            System.out.println("message to be processed: " + message);
            doWorkTwo(message);
            channel.basicAck(tag, false);
        } catch (Exception e) {
            System.out.println("In the exception catch block");
            System.out.println("message in dead letter exchange: " + message);
            channel.basicPublish("RetryExchange", "", null, message.getBytes());
        }
    }

    private void doWorkTwo(String task) throws InterruptedException {
        int c = 0;
        int b = 5;
        int d = b / c; // always throws ArithmeticException to simulate a processing failure
    }
}
Is this the correct way to use a dead letter exchange for my scenario? After waiting once in the RETRY queue for 5 minutes, on the second attempt the message does not wait 5 minutes in the RETRY queue (I set the TTL to 5 minutes) but moves to the WORK queue immediately.
I am running this application by hitting the localhost:8080/hello URL.
Here is my updated configuration.
RabbitMQConfig.java:
@EnableRabbit
public class RabbitMQConfig {

    final static String WORK_QUEUE = "WorkQueue";
    final static String RETRY_QUEUE = "RetryQueue";
    final static String WORK_EXCHANGE = "WorkExchange"; // Dead Letter Exchange
    final static String RETRY_EXCHANGE = "RetryExchange";
    final static int RETRY_DELAY = 60000; // in ms (1 min)

    @Bean(name = "work")
    @Primary
    Queue workQueue() {
        Map<String, Object> args = new HashMap<String, Object>();
        args.put("x-dead-letter-exchange", RETRY_EXCHANGE);
        return new Queue(WORK_QUEUE, true, false, false, args);
    }

    @Bean(name = "workExchange")
    @Primary
    DirectExchange workExchange() {
        return new DirectExchange(WORK_EXCHANGE, true, false);
    }

    @Bean
    Binding workBinding(Queue queue, DirectExchange exchange) {
        return BindingBuilder.bind(workQueue()).to(workExchange()).with("");
    }

    @Bean(name = "retryExchange")
    DirectExchange retryExchange() {
        return new DirectExchange(RETRY_EXCHANGE, true, false);
    }

    // Messages will drop off RetryQueue into WorkExchange for re-processing
    // All messages in the queue will expire at the same rate
    @Bean(name = "retry")
    Queue retryQueue() {
        Map<String, Object> args = new HashMap<String, Object>();
        //args.put("x-dead-letter-exchange", WORK_EXCHANGE);
        //args.put("x-message-ttl", RETRY_DELAY);
        return new Queue(RETRY_QUEUE, true, false, false, null);
    }

    @Bean
    Binding retryBinding(Queue queue, DirectExchange exchange) {
        return BindingBuilder.bind(retryQueue()).to(retryExchange()).with("");
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setDefaultRequeueRejected(false);
        /*factory.setAdviceChain(new Advice[] {
                org.springframework.amqp.rabbit.config.RetryInterceptorBuilder
                        .stateless()
                        .maxAttempts(2).recoverer(new RejectAndDontRequeueRecoverer())
                        .backOffOptions(1000, 2, 5000)
                        .build()
        });*/
        return factory;
    }

    @Bean
    Consumer receiver() {
        return new Consumer();
    }

    @Bean
    MessageListenerAdapter listenerAdapter(Consumer receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }
}
Consumer.java:
public class Consumer {

    @RabbitListener(queues = "WorkQueue")
    public void receiveMessage(String message, Channel channel,
            @Header(AmqpHeaders.DELIVERY_TAG) Long tag,
            @Header(required = false, name = "x-death") HashMap<String, String> xDeath)
            throws IOException, InterruptedException {
        doWorkTwo(message);
        channel.basicAck(tag, false);
    }

    private void doWorkTwo(String task) {
        int c = 0;
        int b = 5;
        if (c < b) {
            throw new AmqpRejectAndDontRequeueException(task);
        }
    }
}
If you reject the message so the broker routes it to a DLQ, you can examine the x-death header. In this scenario, I have a DLQ with a TTL of 5 seconds, and the consumer of the message from the main queue rejects it; the broker routes it to the DLQ, then it expires and is routed back to the main queue. The x-death header shows the number of re-routing operations.
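Inspecting that header from a listener could look like this (a sketch; in the AMQP client the x-death value is a list of tables whose count entry tracks how many times the message has been dead-lettered):

@RabbitListener(queues = "WorkQueue")
public void receiveMessage(String message,
        @Header(name = "x-death", required = false) List<Map<String, ?>> xDeath) {
    if (xDeath != null && !xDeath.isEmpty()) {
        long count = (Long) xDeath.get(0).get("count");
        if (count >= 3) {
            // give up: e.g. throw ImmediateAcknowledgeAmqpException
            // or route the message to a parking-lot queue
            return;
        }
    }
    // normal processing ...
}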
