I have developed what I think is a good solution in Spring Boot and Spring Integration, using the Java DSL, with DirectChannel and QueueChannel beans. This is based upon the example code RouterTests#testMethodInvokingRouter2.
Now I want to move it onto ActiveMQ. If I import ActiveMQAutoConfiguration I get an instance of ConnectionFactory, but how do I replace the following beans with their JMS equivalents?
@Bean(name = "failed-channel")
public MessageChannel failedChannel() {
    return new DirectChannel();
}

@Bean(name = "retry-channel")
public MessageChannel retryChannel() {
    return new QueueChannel();
}

@Bean(name = "exhausted-channel")
public MessageChannel exhaustedChannel() {
    return new QueueChannel();
}
Is there any easy way to do this or am I barking up the wrong tree?
Complete code below
@ContextConfiguration
@RunWith(SpringJUnit4ClassRunner.class)
@DirtiesContext
public class RetryRouterTests {

    /** Failed download attempts are sent to this channel to be routed by {@link ContextConfiguration#failedDownloadRouting()} */
    @Autowired
    @Qualifier("failed-channel")
    private MessageChannel failed;

    /** Retry attempts for failed downloads are sent to this channel by {@link ContextConfiguration#failedDownloadRouting()} */
    @Autowired
    @Qualifier("retry-channel")
    private PollableChannel retryChannel;

    /** Failed download attempts which will not be retried are sent to this channel by {@link ContextConfiguration#failedDownloadRouting()} */
    @Autowired
    @Qualifier("exhausted-channel")
    private PollableChannel exhaustedChannel;

    /**
     * Unit test of {@link ContextConfiguration#failedDownloadRouting()} and {@link RetryRouter}.
     */
    @Test
    public void retryRouting() {
        final int limit = 2;
        Message<?> message = failed(0, limit);
        for (int attempt = 0; attempt <= limit * 2; attempt++) {
            this.failed.send(message);
            if (attempt < limit) {
                message = this.retryChannel.receive();
                assertEquals(payload(0), message.getPayload());
                assertNull(this.exhaustedChannel.receive(0));
            } else {
                assertEquals(payload(0), this.exhaustedChannel.receive().getPayload());
                assertNull(this.retryChannel.receive(0));
            }
        }
    }

    private Message<String> failed(int attempt, int limit) {
        return MessageBuilder
                .withPayload(payload(attempt))
                .setHeader("limit", limit)
                .build();
    }

    private String payload(int attempt) {
        return "download attempt " + attempt;
    }
    @Configuration
    @Import({ /*ActiveMQAutoConfiguration.class,*/ IntegrationAutoConfiguration.class })
    public static class ContextConfiguration {

        @Bean(name = "failed-channel")
        public MessageChannel failedChannel() {
            return new DirectChannel();
        }

        @Bean(name = "retry-channel")
        public MessageChannel retryChannel() {
            return new QueueChannel();
        }

        @Bean(name = "exhausted-channel")
        public MessageChannel exhaustedChannel() {
            return new QueueChannel();
        }

        /**
         * Decides if a failed download attempt can be retried or not, based upon the number of attempts already made
         * and the limit to the number of attempts that may be made. Logic is in {@link RetryRouter}.
         * <p>
         * The number of download attempts already made is maintained as a header {@link #attempts},
         * and the limit to the number of attempts is another header {@link #retryLimit}, which is originally set up upstream
         * as a header by {@link DownloadDispatcher} from retry configuration.
         * <p>
         * Messages for failed download attempts are listened to on channel {@link #failedChannel()},
         * and are routed to {@link #retryChannel()} for another attempt or to {@link #exhaustedChannel()} when there are no more retries to be made.
         * <p>
         * Refer to http://stackoverflow.com/questions/34693248/how-to-increment-a-message-header for how to increment the attempts header.
         *
         * @return the {@link IntegrationFlow} defining retry routing message flows
         */
        @Bean
        public IntegrationFlow failedDownloadRouting() {
            return IntegrationFlows.from("failed-channel")
                    .handle(logMessage("failed"))
                    // adds the download attempt counter when it is absent, which is the first iteration only
                    .enrichHeaders(h -> h.header("attempts", new AtomicInteger(0)))
                    // increments the download attempt counter for every retry
                    .handle(new GenericHandler<Message<String>>() {
                        @Override
                        public Object handle(Message<String> payload, Map<String, Object> headers) {
                            ((AtomicInteger) headers.get("attempts")).getAndIncrement();
                            return payload;
                        }
                    })
                    .handle(logMessage("incremented"))
                    .route(new RetryRouter())
                    .get();
        }
        /**
         * Decides if a failed download attempt can be retried or not, based upon the number of attempts already made
         * and the limit to the number of attempts that may be made.
         */
        private static class RetryRouter {

            /**
             * @param attempts current accumulated number of failed download attempts
             * @param limit maximum number of download attempts
             * @return String channel name into which the message will be routed
             */
            @Router
            public String route(@Header("attempts") AtomicInteger attempts, @Header("limit") Integer limit) {
                if (attempts.intValue() <= limit.intValue()) {
                    return "retry-channel";
                }
                return "exhausted-channel";
            }
        }
    }
}
It's not clear what you mean by "replace" in this context.
If you mean channels equivalent to those, backed by a JMS queue (for persistence reasons only), then use Jms.channel(cf) for a SubscribableChannel (a DirectChannel is a SubscribableChannel) and Jms.pollableChannel(cf) for a PollableChannel (a QueueChannel is a PollableChannel).
For completeness, a PublishSubscribeChannel can be replaced by Jms.publishSubscribeChannel(cf).
However, if you simply want to send the contents of those channels to JMS for distribution to other systems, it's better to use a Jms.outboundAdapter() subscribed to the DirectChannel or polling the QueueChannel.
JMS-backed channels are not intended for message distribution; they are intended to provide message persistence in order to prevent data loss.
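For illustration only, here is a rough sketch of both options, not your actual flow: the ConnectionFactory is assumed to come from the ActiveMQ auto-configuration, and the destination names ("failed", "retry", "exhausted") are made up.

@Bean
public IntegrationFlow failedDownloadRouting(ConnectionFactory connectionFactory) {
    // JMS-backed replacements used directly in the flow:
    // "failed-channel" becomes a JMS-backed subscribable channel and
    // "retry-channel" a JMS-backed pollable channel (destination names are illustrative)
    return IntegrationFlows
            .from(Jms.channel(connectionFactory).destination("failed"))
            // ... header enrichment, attempt increment and routing as in the original flow ...
            .channel(Jms.pollableChannel(connectionFactory).destination("retry"))
            .get();
}

// Alternative: keep the in-memory channels and bridge their contents out to JMS
@Bean
public IntegrationFlow exhaustedToJms(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from("exhausted-channel")
            .handle(Jms.outboundAdapter(connectionFactory).destination("exhausted"),
                    e -> e.poller(Pollers.fixedDelay(1000))) // a QueueChannel needs a poller
            .get();
}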
Related
I have a channel that stores messages. When new messages arrive, if the server has not yet processed all the messages that are still in the queue, I need to clear the queue (for example, by rerouting all of its data into another channel). For this I used a router. But the problem is that when a new message arrives, not only the old messages but also the new one are rerouted into the other channel. New messages must remain in the queue. How can I solve this problem?
This is my code:
@Bean
public IntegrationFlow integerFlow() {
    return IntegrationFlows.from("input")
            .bridge(e -> e.poller(Pollers.fixedDelay(500, TimeUnit.MILLISECONDS, 1000).maxMessagesPerPoll(1)))
            .route(r -> {
                if (flag) {
                    return "mainChannel";
                } else {
                    return "garbageChannel";
                }
            })
            .get();
}
@Bean
public IntegrationFlow outFlow() {
    return IntegrationFlows.from("mainChannel")
            .handle(m -> {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(m.getPayload() + "\tmainFlow");
            })
            .get();
}
@Bean
public IntegrationFlow outGarbage() {
    return IntegrationFlows.from("garbageChannel")
            .handle(m -> System.out.println(m.getPayload() + "\tgarbage"))
            .get();
}
The flag value is changed through a @Gateway by pressing the "q" and "e" keys.
I would suggest you take a look at the purge() API of the QueueChannel:
/**
 * Remove any {@link Message Messages} that are not accepted by the provided selector.
 * @param selector The message selector.
 * @return The list of messages that were purged.
 */
List<Message<?>> purge(@Nullable MessageSelector selector);
This way, with a custom MessageSelector you will be able to remove the old messages from the queue. Consult the timestamp message header to decide which messages are old. With the result of this method you can then do whatever you need to do with the purged messages.
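For example, a small sketch of that approach. The 5-second cut-off, the queueChannel reference and the garbageChannel target are assumptions for illustration:

// keep only messages that arrived after the cut-off; everything older is purged
long cutoff = System.currentTimeMillis() - 5_000; // hypothetical "old" threshold
MessageSelector newerThanCutoff = message -> {
    Long timestamp = message.getHeaders().getTimestamp();
    return timestamp != null && timestamp >= cutoff;
};

// purge() returns the messages that were removed from the queue
List<Message<?>> purged = queueChannel.purge(newerThanCutoff);
// reroute the old messages wherever they need to go, e.g. the garbage channel
purged.forEach(garbageChannel::send);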
We have a Spring application with dynamic queue listeners connecting to queues in RabbitMQ. Let's say I have a total of 5 listener consumers connected to 5 queues from my Spring application to RabbitMQ.
Now, whenever a network fluctuation/failure happens, the first of my 5 connected queues stops retrying the connection to RabbitMQ.
I have debugged through the spring-amqp classes and found that, when creating a connection to RabbitMQ (while the network failure is happening), it fails to connect and throws org.springframework.amqp.AmqpIOException; this particular exception is not handled in the retry function, so that queue is removed from the list of queues being retried.
My Main class:
@Slf4j
@SpringBootApplication(exclude = {ClientAutoConfiguration.class})
@EnableTransactionManagement
@EnableJpaRepositories(basePackages = "com.x.x.repositories")
@EntityScan(basePackages = "com.x.x.entities")
public class Main
{
    @PostConstruct
    void configuration()
    {
        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
    }

    /**
     * The main method.
     *
     * @param args the arguments
     */
    public static void main(String[] args)
    {
        ConfigurableApplicationContext context = SpringApplication.run(Main.class, args);
        RabbitMQListenerUtil queueRegisterUtil = context.getBean(RabbitMQListenerUtil.class);
        try
        {
            queueRegisterUtil.registerSpecifiedListenerForAllInstance();
        }
        catch (Exception e)
        {
            log.error(e.getMessage(), e);
        }
    }
}
Class which is used to create the 5 consumers/listeners:
/**
 * The Class RabbitMQListenerUtil.
 */
@Component
@Slf4j
public class RabbitMQListenerUtil
{
    @Autowired
    private ApplicationContext applicationContext;

    public void registerSpecifiedListenerForAllInstance()
    {
        try
        {
            log.debug("New listener has been registered for instance name : ");
            Thread.sleep(5000);
            registerNewListener("temp1");
            registerNewListener("temp2");
            registerNewListener("temp3");
            registerNewListener("temp4");
            registerNewListener("temp5");
        }
        catch (Exception e)
        {
        }
    }

    /**
     * This method will add a new listener bean for the given queue name at runtime
     *
     * @param queueName - Queue name
     */
    public void registerNewListener(String queueName)
    {
        AnnotationConfigApplicationContext childAnnotaionConfigContext = new AnnotationConfigApplicationContext();
        childAnnotaionConfigContext.setParent(applicationContext);
        ConfigurableEnvironment environmentConfig = childAnnotaionConfigContext.getEnvironment();
        Properties listenerProperties = new Properties();
        listenerProperties.setProperty("queue.name", queueName + "_queue");
        PropertiesPropertySource pps = new PropertiesPropertySource("props", listenerProperties);
        environmentConfig.getPropertySources().addLast(pps);
        childAnnotaionConfigContext.register(RabbitMQListenerConfig.class);
        childAnnotaionConfigContext.refresh();
    }
}
Class which creates a dynamic listener/consumer for each queue:
/**
 * The Class RabbitMQListenerConfig.
 */
@Configuration
@Slf4j
@EnableRabbit
public class RabbitMQListenerConfig
{
    /** The Constant ALLOW_MESSAGE_REQUEUE. */
    private static final boolean ALLOW_MESSAGE_REQUEUE = true;

    /** The Constant MULTIPLE_MESSAGE_FALSE. */
    private static final boolean MULTIPLE_MESSAGE_FALSE = false;

    /**
     * Listen.
     *
     * @param msg the msg
     * @param channel the channel
     * @param queue the queue
     * @param deliveryTag the delivery tag
     * @throws IOException Signals that an I/O exception has occurred.
     */
    @RabbitListener(queues = "${queue.name}")
    public void listen(Message msg, Channel channel, @Header(AmqpHeaders.CONSUMER_QUEUE) String queue, @Header(AmqpHeaders.DELIVERY_TAG) long deliveryTag) throws IOException
    {
        int msgExecutionStatus = 0;
        try
        {
            String message = new String(msg.getBody(), StandardCharsets.UTF_8);
            log.info(message);
        }
        catch (Exception e)
        {
            log.error(e.toString());
            log.error(e.getMessage(), e);
        }
        finally
        {
            ackMessage(channel, deliveryTag, msgExecutionStatus);
        }
    }

    /**
     * Ack message.
     *
     * @param channel the channel
     * @param deliveryTag the delivery tag
     * @param msgExecutionStatus the msg execution status
     * @throws IOException Signals that an I/O exception has occurred.
     */
    protected void ackMessage(Channel channel, long deliveryTag, int msgExecutionStatus) throws IOException
    {
        if (msgExecutionStatus == Constants.MESSAGE_DELETE_FOUND_EXCEPTION)
        {
            channel.basicNack(deliveryTag, MULTIPLE_MESSAGE_FALSE, ALLOW_MESSAGE_REQUEUE);
        }
        else
        {
            channel.basicAck(deliveryTag, MULTIPLE_MESSAGE_FALSE);
        }
    }

    /**
     * A Queue bean will be created with the given name.
     *
     * @param name - Queue name
     * @return the queue
     */
    @Bean
    public Queue queue(@Value("${queue.name}") String name)
    {
        return new Queue(name);
    }

    /**
     * A RabbitAdmin instance will be created, which is required to create a new Queue.
     *
     * @param cf - Connection factory
     * @return the rabbit admin
     */
    @Bean
    public RabbitAdmin admin(ConnectionFactory cf)
    {
        return new RabbitAdmin(cf);
    }
}
Application log:
https://pastebin.com/NQWdmdTH
I have tested this multiple times, and each time my first connected queue stops reconnecting.
========================= UPDATE 1 =============================
Code to reconnect a stopped consumer:
https://pastebin.com/VnUrhdLP
Caused by: java.net.UnknownHostException: rabbitmqaind1.hqdev.india
There is something wrong with your network.
I am trying to consume an AWS queue using Spring Boot with JMS, and I am having a problem throwing exceptions in my consumer method.
Every time I try to throw a custom exception in my consumer method, to be logged by an Aspect, the following message is returned:
errorCause=java.lang.IllegalStateException: No thread-bound request
found: Are you referring to request attributes outside of an actual
web request, or processing a request outside of the originally
receiving thread? If you are actually operating within a web request
and still receive this message, your code is probably running outside
of DispatcherServlet/DispatcherPortlet: In this case, use
RequestContextListener or RequestContextFilter to expose the current
request., errorMessage=Error listener queue,
date=2018-06-29T17:45:26.290, type=InvoiceRefuseConsumer]
I have already created a RequestContextListener bean, but that did not help.
Could someone tell me what might be causing this error?
Here is my code:
Module 1 - Queue consumer
@Service
public class InvoiceRefuseConsumer extends AbstractQueue implements IQueueConsumer {

    @Autowired
    private InvoiceRefuseService invoiceRefuseService;

    @JmsListener(destination = "${amazon.sqs.queue-to-be-consumed}")
    @Override
    public void listener(@Payload String message) throws ApplicationException {
        try {
            // Convert the payload received from the queue to an InvoiceRefuseParam object
            InvoiceRefuseParam param = convertToPojo(message, InvoiceRefuseParam.class);
            // Set the type and reason of the refused invoice
            param.setType(InvoiceRefuseType.INVOICE_TREATMENT.getId());
            if (param.getReasonCode().equals(InvoiceRefuseTypeOperationType.TYPE_OPERATION_INSERT.getDesc())) {
                // Persist data information
                invoiceRefuseService.save(param);
            } else if (param.getReasonCode().equals(InvoiceRefuseTypeOperationType.TYPE_OPERATION_DELETE.getDesc())) {
                // Remove refused invoice
                invoiceRefuseService.delete(param.getKeyAccess(), param.getType());
            }
        } catch (Exception e) {
            throw new ApplicationException("Error listener queue", e);
        }
    }
}
Module 2 - Service operations
@Service
public class InvoiceRefuseService {

    /**
     * automatically initiates the InvoiceRefuseCrud
     */
    @Autowired
    private InvoiceRefuseCrud invoiceRefuseCrud;

    /**
     * automatically initiates the SupplierCrud
     */
    @Autowired
    private SupplierCrud supplierCrud;

    /**
     * automatically initiates the SequenceDao
     */
    @Autowired
    private SequenceDao sequenceDao;

    /**
     * automatically initiates the InvoiceRefuseDao
     */
    @Autowired
    private InvoiceRefuseDao invoiceRefuseDao;

    /**
     * automatically initiates the OrderConsumerService
     */
    @Autowired
    private OrderConsumerService orderConsumerService;

    /**
     * automatically initiates the InvoiceOrderService
     */
    @Autowired
    private InvoiceOrderService invoiceOrderService;

    /**
     * automatically initiates the BranchWarehouseTypeDao
     */
    @Autowired
    private BranchWarehouseTypeDao branchWarehouseTypeDao;

    /**
     * Method created to delete an invoice refuse
     * @param key
     * @param type
     * @throws ApplicationException
     */
    @Transactional
    public void delete(String key, int type) throws ApplicationException {
        try {
            // Search for the refused invoices
            List<InvoiceRefuseModel> lsInvoiceRefuseModel = invoiceRefuseCrud.findBykeyAccessAndType(key, type);
            if (ApplicationUtils.isEmpty(lsInvoiceRefuseModel)) {
                throw new FieldValidationException(getKey("key.notfound"));
            }
            // Remove refused invoice and cascade to the scheduling order
            invoiceRefuseCrud.deleteAll(lsInvoiceRefuseModel);
        } catch (Exception e) {
            throw new ApplicationException(getKey("api.delete.error"), e);
        }
    }
    /**
     * Method created to save a new invoice refuse
     * @param param
     * @throws ApplicationException
     */
    @OneTransaction
    public void save(InvoiceRefuseParam param) throws ApplicationException {
        try {
            for (String orderNumber : param.getOrderNumbers()) {
                // Verify if the invoice refused key already exists
                Optional.ofNullable(invoiceRefuseCrud.findBykeyAccessAndType(param.getKeyAccess(), param.getType()))
                        .filter(invoiceRefuses -> invoiceRefuses.isEmpty())
                        .orElseThrow(() -> new ApplicationException(getKey("invoice.alread.exists")));
                // Convert to model
                InvoiceRefuseModel model = convertToSaveModel(param, orderNumber);
                // Save data on database
                InvoiceRefuseModel result = invoiceRefuseCrud.save(model);
                // Associate new refused invoice with the scheduled order
                associateInvoiceRefusedToSchedulingOrder(result);
            }
        } catch (Exception e) {
            throw new ApplicationException(getKey("api.save.error"), e);
        }
    }

    /**
     * Method created to associate a refused invoice to the scheduling order
     * @param invoiceRefuseModel
     * @throws ApplicationException
     */
    public void associateInvoiceRefusedToSchedulingOrder(InvoiceRefuseModel invoiceRefuseModel) throws ApplicationException {
        // Search for the scheduled order
        List<InvoiceOrderModel> lsInvoiceOrderModel = invoiceOrderService.findByNuOrder(invoiceRefuseModel.getNuOrder());
        for (InvoiceOrderModel orderModel : lsInvoiceOrderModel) {
            // Verify if it is a SAP order
            boolean isOrderSap = Optional
                    .ofNullable(branchWarehouseTypeDao.findByIdBranch(orderModel.getNuReceiverPlant()))
                    .filter(branch -> branch.getNaLoadPoint() != null)
                    .isPresent();
            if (isOrderSap) {
                // Update the order status
                invoiceOrderService.updateStatus(orderModel);
            }
        }
    }

    /**
     * Method created to convert from param to model
     * @param param
     * @param orderNumber
     * @return InvoiceRefuseModel
     * @throws ApplicationException
     */
    private InvoiceRefuseModel convertToSaveModel(InvoiceRefuseParam param, String orderNumber) throws ApplicationException {
        OrderParam orderParam = new OrderParam();
        orderParam.getLsOrdeNumber().add(orderNumber);
        // Search for SAP orders
        OrderDataPojo orderSap = Optional.ofNullable(orderConsumerService.findAll(orderParam))
                .filter(ordersSap -> ordersSap.getOrders().size() > 0)
                .orElseThrow(() -> new ApplicationException(getKey("ordersap.notfound")));
        // Convert to model
        InvoiceRefuseModel model = new InvoiceRefuseModel();
        model.setNuOrder(orderNumber);
        model.setCdCompany(BranchMatrixType.MATRIX.getCdCompany());
        model.setDsMessage(param.getReasonDescription());
        model.setDtIssue(param.getIssueDate());
        model.setKeyAccess(param.getKeyAccess());
        model.setNuGuid(param.getGuid());
        model.setNuInvoice(param.getInvoiceNumber() + param.getInvoiceSerialNumber());
        model.setTsCreation(new Date());
        model.setNuInvoiceSerial(param.getInvoiceSerialNumber());
        model.setNuIssuerPlant(orderSap.getOrders().stream().map(o -> o.getHeader().getIssuerPlant()).findFirst().get());
        model.setNuReceiverPlant(orderSap.getOrders().stream().map(o -> o.getHeader().getReceiverPlant()).findFirst().get());
        model.setType(param.getType());
        model.setCdInvoiceRefuseMessage(param.getReasonCode());
        // Passing these fields is required for refused invoices, but they are not received for notes in treatment
        if (param.getType().equals(InvoiceRefuseType.INVOICE_REFUSED.getId())) {
            model.setIsEnableReturn(BooleanType.getByBool(param.getIsEnableReturn()).getId());
            model.setDtRefuse(param.getRefuseDate());
        }
        // Search for the issuing supplier
        SupplierModel supplierModelIssuer = findSupplier(param.getDocumentIdIssuer());
        model.setCdSupplierIssuer(supplierModelIssuer.getCdSupplier());
        // Search for the receiver supplier
        SupplierModel supplierModelReceiver = findSupplier(param.getDocumentIdIssuer());
        model.setCdSupplierReceiver(supplierModelReceiver.getCdSupplier());
        // Set the primary key
        InvoiceRefuseModelId id = new InvoiceRefuseModelId();
        id.setCdInvoiceRefuse(sequenceDao.nextIntValue(SequenceName.SQ_INVOICE_REFUSE));
        model.setId(id);
        return model;
    }
    /**
     * Method created to search for a supplier
     * @param documentId
     * @return SupplierModel
     * @throws ApplicationException
     */
    private SupplierModel findSupplier(String documentId) throws ApplicationException {
        // Search for the supplier
        SupplierModel model = supplierCrud.findTop1ByNuDocumentIdAndCdCompany(documentId, BranchMatrixType.MATRIX.getCdCompany());
        if (model == null) {
            throw new ApplicationException(getKey("supplier.notfound"));
        }
        return model;
    }

    /**
     * Method created to find a refused invoice and return the result by page
     * @param param
     * @param pageable
     * @return Page<InvoiceRefuseModel>
     * @throws ApplicationException
     */
    public Page<InvoiceRefuseModel> findRefuseInvoice(InvoiceRefuseFilterParam param, Pageable pageable) throws ApplicationException {
        return invoiceRefuseDao.findRefuseInvoice(param, pageable);
    }

    /**
     * Method created to find a refused invoice and return the result by list
     * @param param
     * @return List<InvoiceRefuseModel>
     * @throws ApplicationException
     */
    public List<InvoiceRefuseModel> findRefuseInvoice(InvoiceRefuseFilterParam param) throws ApplicationException {
        return invoiceRefuseDao.findRefuseInvoice(param);
    }

    /**
     * Method created to find a refused invoice by order number and return the result by list
     * @param nuOrder
     * @return List<InvoiceRefuseModel>
     */
    public List<InvoiceRefuseModel> findByNuOrder(String nuOrder) {
        return invoiceRefuseDao.findByNuOrder(nuOrder);
    }
}
I would like to achieve the following scenario in my application:
If a business error occurs, the message should be sent from the incomingQueue to the deadLetter Queue and delayed there for 10 seconds
Step number 1 should be repeated 3 times
The message should then be published to the parkingLot Queue
I am able (see the code below) to delay the message for a certain amount of time in the deadLetter Queue, and the message loops infinitely between the incoming Queue and the deadLetter Queue. So far so good.
The main question: How can I intercept the process and manually route the message (as described in step 3) to the parkingLot Queue for later analysis?
A secondary question: Can I achieve the same process with only one exchange?
Here is a shortened version of my two classes:
Configuration class
@Configuration
public class MailRabbitMQConfig {

    @Bean
    TopicExchange incomingExchange() {
        TopicExchange incomingExchange = new TopicExchange(incomingExchangeName);
        return incomingExchange;
    }

    @Bean
    TopicExchange dlExchange() {
        TopicExchange dlExchange = new TopicExchange(deadLetterExchangeName);
        return dlExchange;
    }

    @Bean
    Queue incomingQueue() {
        return QueueBuilder.durable(incomingQueueName)
                .withArgument(
                        "x-dead-letter-exchange",
                        dlExchange().getName()
                )
                .build();
    }

    @Bean
    public Queue parkingLotQueue() {
        return new Queue(parkingLotQueueName);
    }

    @Bean
    Binding incomingBinding() {
        return BindingBuilder
                .bind(incomingQueue())
                .to(incomingExchange())
                .with("#");
    }

    @Bean
    public Queue dlQueue() {
        return QueueBuilder
                .durable(deadLetterQueueName)
                .withArgument("x-message-ttl", 10000)
                .withArgument("x-dead-letter-exchange", incomingExchange().getName())
                .build();
    }

    @Bean
    Binding dlBinding() {
        return BindingBuilder
                .bind(dlQueue())
                .to(dlExchange())
                .with("#");
    }

    @Bean
    public Binding bindParkingLot(
            Queue parkingLotQueue,
            TopicExchange dlExchange
    ) {
        return BindingBuilder.bind(parkingLotQueue)
                .to(dlExchange)
                .with(parkingLotRoutingKeyName);
    }
}
Consumer class
@Component
public class Consumer {

    private final Logger logger = LoggerFactory.getLogger(Consumer.class);

    @RabbitListener(queues = "${mail.rabbitmq.queue.incoming}")
    public Boolean receivedMessage(MailDataExternalTemplate mailDataExternalTemplate) throws Exception {
        try {
            // business logic here
        } catch (Exception e) {
            throw new AmqpRejectAndDontRequeueException("Failed to handle a business logic");
        }
        return Boolean.TRUE;
    }
}
I know I could define an additional listener for the deadLetter Queue in the Consumer class like this:
@RabbitListener(queues = "${mail.rabbitmq.queue.deadletter}")
public void receivedMessageFromDlq(Message failedMessage) throws Exception {
    // Logic to count the x-retries header value and manually send a failed message
    // to the parkingLot Queue
}
However, it does not work as expected, because this listener is called as soon as the message arrives at the head of the deadLetter Queue, without being delayed.
Thank you in advance.
EDIT: With @ArtemBilan's and @GaryRussell's help I was able to solve the problem. The main solution hints are within their comments in the accepted answer. Thank you guys for the help! Below you will find a new diagram that shows the messaging process, plus the Configuration and Consumer classes. The main changes were:
The definition of the routes between the incoming exchange -> incoming queue and the dead letter exchange -> dead letter queue in the MailRabbitMQConfig class.
The loop handling with the manual publishing of the message to the parking lot queue in the Consumer class
Configuration class
@Configuration
public class MailRabbitMQConfig {

    @Autowired
    public MailConfigurationProperties properties;

    @Bean
    TopicExchange incomingExchange() {
        TopicExchange incomingExchange = new TopicExchange(properties.getRabbitMQ().getExchange().getIncoming());
        return incomingExchange;
    }

    @Bean
    TopicExchange dlExchange() {
        TopicExchange dlExchange = new TopicExchange(properties.getRabbitMQ().getExchange().getDeadletter());
        return dlExchange;
    }

    @Bean
    Queue incomingQueue() {
        return QueueBuilder.durable(properties.getRabbitMQ().getQueue().getIncoming())
                .withArgument(
                        properties.getRabbitMQ().getQueue().X_DEAD_LETTER_EXCHANGE_HEADER,
                        dlExchange().getName()
                )
                .withArgument(
                        properties.getRabbitMQ().getQueue().X_DEAD_LETTER_ROUTING_KEY_HEADER,
                        properties.getRabbitMQ().getRoutingKey().getDeadLetter()
                )
                .build();
    }

    @Bean
    public Queue parkingLotQueue() {
        return new Queue(properties.getRabbitMQ().getQueue().getParkingLot());
    }

    @Bean
    Binding incomingBinding() {
        return BindingBuilder
                .bind(incomingQueue())
                .to(incomingExchange())
                .with(properties.getRabbitMQ().getRoutingKey().getIncoming());
    }

    @Bean
    public Queue dlQueue() {
        return QueueBuilder
                .durable(properties.getRabbitMQ().getQueue().getDeadLetter())
                .withArgument(
                        properties.getRabbitMQ().getMessages().X_MESSAGE_TTL_HEADER,
                        properties.getRabbitMQ().getMessages().getDelayTime()
                )
                .withArgument(
                        properties.getRabbitMQ().getQueue().X_DEAD_LETTER_EXCHANGE_HEADER,
                        incomingExchange().getName()
                )
                .withArgument(
                        properties.getRabbitMQ().getQueue().X_DEAD_LETTER_ROUTING_KEY_HEADER,
                        properties.getRabbitMQ().getRoutingKey().getIncoming()
                )
                .build();
    }

    @Bean
    Binding dlBinding() {
        return BindingBuilder
                .bind(dlQueue())
                .to(dlExchange())
                .with(properties.getRabbitMQ().getRoutingKey().getDeadLetter());
    }

    @Bean
    public Binding bindParkingLot(
            Queue parkingLotQueue,
            TopicExchange dlExchange
    ) {
        return BindingBuilder.bind(parkingLotQueue)
                .to(dlExchange)
                .with(properties.getRabbitMQ().getRoutingKey().getParkingLot());
    }
}
Consumer class
@Component
public class Consumer {

    private final Logger logger = LoggerFactory.getLogger(Consumer.class);

    @Autowired
    public MailConfigurationProperties properties;

    @Autowired
    protected EmailClient mailJetEmailClient;

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = "${mail.rabbitmq.queue.incoming}")
    public Boolean receivedMessage(
            @Payload MailDataExternalTemplate mailDataExternalTemplate,
            Message amqpMessage
    ) {
        logger.info("Received message");
        try {
            final EmailTransportWrapper emailTransportWrapper = mailJetEmailClient.convertFrom(mailDataExternalTemplate);
            mailJetEmailClient.sendEmailUsing(emailTransportWrapper);
            logger.info("Successfully sent an E-Mail");
        } catch (Exception e) {
            int count = getXDeathCountFromHeader(amqpMessage);
            logger.debug("x-death count: " + count);
            if (count >= properties.getRabbitMQ().getMessages().getRetryCount()) {
                this.rabbitTemplate.send(
                        properties.getRabbitMQ().getExchange().getDeadletter(),
                        properties.getRabbitMQ().getRoutingKey().getParkingLot(),
                        amqpMessage
                );
                return Boolean.TRUE;
            }
            throw new AmqpRejectAndDontRequeueException("Failed to send an E-Mail");
        }
        return Boolean.TRUE;
    }

    private int getXDeathCountFromHeader(Message message) {
        Map<String, Object> headers = message.getMessageProperties().getHeaders();
        if (headers.get(properties.getRabbitMQ().getMessages().X_DEATH_HEADER) == null) {
            return 0;
        }
        //noinspection unchecked
        ArrayList<Map<String, ?>> xDeath = (ArrayList<Map<String, ?>>) headers
                .get(properties.getRabbitMQ().getMessages().X_DEATH_HEADER);
        Long count = (Long) xDeath.get(0).get("count");
        return count.intValue();
    }
}
To delay a message before it becomes available in the queue, you should consider using a Delayed Exchange: https://docs.spring.io/spring-amqp/docs/2.0.2.RELEASE/reference/html/_reference.html#delayed-message-exchange.
As for manually sending to the parkingLot queue, it is easy to use a RabbitTemplate and send the message using the queue's name as the routing key:
/**
 * Send a message to a default exchange with a specific routing key.
 *
 * @param routingKey the routing key
 * @param message a message to send
 * @throws AmqpException if there is a problem
 */
void send(String routingKey, Message message) throws AmqpException;
All the queues are bound to the default exchange via their names as routing keys.
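So, for illustration, a minimal sketch of that call using the property-based names from the code above (the exact property accessors come from your own configuration class, so treat them as assumptions):

// publish the exhausted message to the parking lot queue via the default exchange,
// using the queue name itself as the routing key
rabbitTemplate.send(properties.getRabbitMQ().getQueue().getParkingLot(), amqpMessage);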
Here is the spring-integration-aws project. It provides an example of the Inbound Channel Adapter:
@SpringBootApplication
public static class MyConfiguration {

    @Autowired
    private AmazonSQSAsync amazonSqs;

    @Bean
    public PollableChannel inputChannel() {
        return new QueueChannel();
    }

    @Bean
    public MessageProducer sqsMessageDrivenChannelAdapter() {
        SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(this.amazonSqs, "myQueue");
        adapter.setOutputChannel(inputChannel());
        return adapter;
    }
}
OK, the channel and the SqsMessageDrivenChannelAdapter are defined, but what is next? Let's say that I have a Spring bean like this:
import com.amazonaws.services.sqs.model.Message;

@Component
public class MyComponent {

    public void onMessage(Message message) throws Exception {
        // handle sqs message
    }
}
How do I tell Spring to pass all messages from myQueue to this component?
Is there any additional configuration needed to process messages one by one? For example, after receiving a message, SQS marks it as being processed and it is not visible to other clients, so I would need to fetch only one message, process it, and then fetch the next one. Is this behavior enabled by default?
You should read the Spring Integration Reference Manual.
@Component
public class MyComponent {

    @ServiceActivator(inputChannel = "inputChannel")
    public void onMessage(Message message) throws Exception {
        // handle sqs message
    }
}
Answering your second question:
/**
 * Configure the maximum number of messages that should be retrieved during one poll to the Amazon SQS system. This
 * number must be a positive, non-zero number that has a maximum number of 10. Values higher than 10 are currently
 * not supported by the queueing system.
 *
 * @param maxNumberOfMessages
 *            the maximum number of messages (between 1-10)
 */
public void setMaxNumberOfMessages(Integer maxNumberOfMessages) {
    this.maxNumberOfMessages = maxNumberOfMessages;
}
By default it is 10.
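So, to fetch one message at a time, a small sketch applied to the adapter bean shown above could look like this (assuming the adapter exposes the same setMaxNumberOfMessages option quoted here):

@Bean
public MessageProducer sqsMessageDrivenChannelAdapter() {
    SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(this.amazonSqs, "myQueue");
    adapter.setOutputChannel(inputChannel());
    // fetch a single message per poll instead of the default 10
    adapter.setMaxNumberOfMessages(1);
    return adapter;
}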
Your question about marking messages as processing can be addressed with the SqsMessageDeletionPolicy option:
/**
 * Never deletes messages automatically. The receiving listener method must acknowledge each message manually by using
 * the acknowledgment parameter.
 * <p><b>IMPORTANT</b>: When using this policy the listener method must take care of the deletion of the messages.
 * If not, it will lead to an endless loop of messages (poison messages).</p>
 *
 * @see Acknowledgment
 */
NEVER,
Such an Acknowledgment object is placed into the AwsHeaders.ACKNOWLEDGMENT message header, which you can access from your onMessage() method and acknowledge whenever you need.
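A rough sketch of what that manual acknowledgment could look like in the handler (the @Header parameter style is an assumption, and the adapter must be configured with SqsMessageDeletionPolicy.NEVER for the acknowledgment to be required):

@Component
public class MyComponent {

    @ServiceActivator(inputChannel = "inputChannel")
    public void onMessage(Message message,
            @Header(AwsHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) throws Exception {
        // handle the sqs message
        // ...
        // delete the message from SQS only after successful processing
        acknowledgment.acknowledge();
    }
}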