How can I configure the inbound channel adapter via annotations instead of the regular XML configuration file? I was able to define the bean for the session factory as follows:
@Bean
public DefaultFtpSessionFactory ftpSessionFactory() {
    DefaultFtpSessionFactory ftpSessionFactory = new DefaultFtpSessionFactory();
    ftpSessionFactory.setHost(host);
    ftpSessionFactory.setPort(port);
    ftpSessionFactory.setUsername(username);
    ftpSessionFactory.setPassword(password);
    return ftpSessionFactory;
}
How can I configure the inbound channel adapter shown below via annotations?
<int-ftp:inbound-channel-adapter id="ftpInbound"
channel="ftpChannel"
session-factory="ftpSessionFactory"
filename-pattern="*.xml"
auto-create-local-directory="true"
delete-remote-files="false"
remote-directory="/"
local-directory="ftp-inbound"
local-filter="acceptOnceFilter">
<int:poller fixed-delay="60000" max-messages-per-poll="-1">
<int:transactional synchronization-factory="syncFactory" />
</int:poller>
</int-ftp:inbound-channel-adapter>
@Artem Bilan
The modified code is as follows:
@EnableIntegration
@Configuration
public class FtpConfiguration {

    @Value("${ftp.host}")
    private String host;

    @Value("${ftp.port}")
    private Integer port;

    @Value("${ftp.username}")
    private String username;

    @Value("${ftp.password}")
    private String password;

    @Value("${ftp.fixed.delay}")
    private Integer fixedDelay;

    @Value("${ftp.local.directory}")
    private String localDirectory;

    private static final Logger LOGGER = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sessionFactory = new DefaultFtpSessionFactory();
        sessionFactory.setHost(host);
        sessionFactory.setPort(port);
        sessionFactory.setUsername(username);
        sessionFactory.setPassword(password);
        return new CachingSessionFactory<FTPFile>(sessionFactory);
    }

    @Bean
    public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
        FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(ftpSessionFactory());
        fileSynchronizer.setDeleteRemoteFiles(false);
        fileSynchronizer.setRemoteDirectory("/");
        fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.xml"));
        return fileSynchronizer;
    }

    @Bean
    @InboundChannelAdapter(value = "ftpChannel",
            poller = @Poller(fixedDelay = "60000", maxMessagesPerPoll = "-1"))
    public MessageSource<File> ftpMessageSource() {
        FtpInboundFileSynchronizingMessageSource source =
                new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
        source.setLocalDirectory(new File(localDirectory));
        source.setAutoCreateLocalDirectory(true);
        source.setLocalFilter(new AcceptOnceFileListFilter<File>());
        return source;
    }
}
While running this, I get the following exception:
No bean named 'ftpChannel' is defined
Please note that the 'channel' attribute is not available when wiring the inbound channel adapter; it is 'value' instead.
I tried wiring the channel as a PollableChannel, but that was also in vain. It is as follows:
@Bean
public MessageChannel ftpChannel() {
    return new PollableChannel() {

        @Override
        public Message<?> receive() {
            return this.receive();
        }

        @Override
        public Message<?> receive(long l) {
            return null;
        }

        @Override
        public boolean send(Message<?> message) {
            return false;
        }

        @Override
        public boolean send(Message<?> message, long l) {
            return false;
        }
    };
}
I got an error "failed to send message within timeout: -1". Am I still doing something wrong?
What I'm looking for is to wire up all the beans on application start-up, and then expose a method to start polling the server, process the files, and then delete them locally, something like this:
public void startPollingTheServer() {
    getPollableChannel().receive();
}
where getPollableChannel() gives me the channel bean I had wired for polling.
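For example, something along these lines (just a rough sketch of the idea, not working code; getPollableChannel() would return whatever pollable channel bean ends up being wired):
public void startPollingTheServer() {
    Message<?> message;
    // receive(0) returns null immediately when the channel is empty
    while ((message = getPollableChannel().receive(0)) != null) {
        File file = (File) message.getPayload();
        // process the downloaded file, then delete the local copy
        file.delete();
    }
}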
There is an @InboundChannelAdapter for you.
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
    FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(ftpSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setRemoteDirectory("/");
    fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.xml"));
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "ftpChannel")
public MessageSource<File> ftpMessageSource() {
    FtpInboundFileSynchronizingMessageSource source =
            new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
    source.setLocalDirectory(new File("ftp-inbound"));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    return source;
}
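If there is no ftpChannel bean defined yet, it has to be declared as well; a minimal sketch, assuming a queue-backed channel is wanted so it can be polled on demand:
@Bean
public PollableChannel ftpChannel() {
    // QueueChannel is Spring Integration's standard pollable channel implementation
    return new QueueChannel();
}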
Also take a look at the Reference Manual.
And pay attention to the Java DSL for Spring Integration, where the same might look like:
@Bean
public IntegrationFlow ftpInboundFlow() {
    return IntegrationFlows
            .from(s -> s.ftp(this.ftpSessionFactory)
                            .preserveTimestamp(true)
                            .remoteDirectory("ftpSource")
                            .regexFilter(".*\\.txt$")
                            .localFilename(f -> f.toUpperCase() + ".a")
                            .localDirectory(this.ftpServer.getTargetLocalDirectory()),
                    e -> e.id("ftpInboundAdapter").autoStartup(false))
            .channel(MessageChannels.queue("ftpInboundResultChannel"))
            .get();
}
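Since the adapter in that flow is registered with autoStartup(false), it can later be started on demand and the queue channel drained; a rough sketch (looking the endpoint up by its id ftpInboundAdapter as a Lifecycle bean is an assumption based on the id configured above):
@Autowired
@Qualifier("ftpInboundAdapter")
private Lifecycle ftpInboundAdapter;

@Autowired
private PollableChannel ftpInboundResultChannel;

public void startPollingTheServer() {
    this.ftpInboundAdapter.start();                                // begin polling the FTP server
    Message<?> next = this.ftpInboundResultChannel.receive(5000);  // block up to 5 seconds
    // process the payload (the locally synchronized file) here
}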
Related
My application currently uses IBM MQ and has its queue configuration set up and working fine with JMS, e.g.
@EnableJms
@Configuration
public class IBMQueueConfig {

    @Bean("defaultContainer")
    public JmsListenerContainerFactory containerFactory(final ConnectionFactory connectionFactory,
                                                        final ErrorHandler errorHandler) {
        final DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setErrorHandler(errorHandler);
        return factory;
    }
}
I can receive message and process as follows:
@Service
public class ProcessMessageReceive {

    @JmsListener(destination = "${queue}", concurrency = "${threads}", containerFactory = "defaultContainer")
    public Message processMessage(@Payload final String message) {
        //do stuff
    }
}
I need to use RabbitMQ for testing and require additional configuration. I have the following class:
@Configuration
@ConfigurationProperties(prefix = "spring.rabbitmq")
@EnableRabbit
public class RabbitMQConfiguration {

    private String host;
    private int port;
    private String username;
    private String password;
    private String virtualHost;

    @Bean
    public DirectExchange exchange() {
        return new DirectExchange(exchange);
    }

    @Bean("defaultContainer")
    public JmsListenerContainerFactory containerFactory(@Qualifier("rabbit-connection-factory") final ConnectionFactory connectionFactory) {
        final DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory); // ERROR
        return factory;
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(@Qualifier("rabbit-connection-factory") final ConnectionFactory connectionFactory,
            @Value("spring.rabbitmq.listener.simple.concurrency") final int concurrency,
            @Value("spring.rabbitmq.listener.simple.max-concurrency") final int maxConcurrency) {
        final SimpleRabbitListenerContainerFactory containerFactory = new SimpleRabbitListenerContainerFactory();
        containerFactory.setConnectionFactory(connectionFactory);
        containerFactory.setConcurrentConsumers(concurrency);
        containerFactory.setMaxConcurrentConsumers(maxConcurrency);
        containerFactory.setDefaultRequeueRejected(false);
        return containerFactory;
    }

    @Bean(name = "rabbit-connection-factory")
    public ConnectionFactory connectionFactory() {
        final CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setHost(host);
        connectionFactory.setPort(port);
        connectionFactory.setUsername(username);
        connectionFactory.setPassword(password);
        connectionFactory.setVirtualHost(virtualHost);
        return connectionFactory;
    }

    @Bean
    public Queue inboundQueue() {
        return new Queue(fixInboundQueue, true);
    }

    @Bean
    public Binding inboundQueueBinding() {
        return bind(inboundQueue())
                .to(exchange())
                .with(routingKey);
    }
}
I get an error on the line factory.setConnectionFactory(connectionFactory); because it expects a javax.jms.ConnectionFactory, but the RabbitMQ one is provided.
Is there a way I can wire in the RabbitMQ ConnectionFactory? I know it is possible if I use RMQConnectionFactory, but I am looking to see if I can achieve it with the Spring Rabbit dependency.
The objective is to avoid writing another processMessage() specifically for RabbitMQ and to re-use what I already have.
Alternatively, can I use both annotations? In that case I would use a Spring profile to enable the one I need depending on prod or test.
@RabbitListener(queues = "${app.rabbitmq.queue}")
@JmsListener(destination = "${queue}", concurrency = "${threads}", containerFactory = "defaultContainer")
public Message processMessage(@Payload final String message) {
    //do stuff
}
You have to use @RabbitListener instead of @JmsListener if you want to talk to RabbitMQ over AMQP.
You can add both annotations if you want to use JMS in production and RabbitMQ in tests.
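A rough sketch of how the two could be combined (the profile names here are illustrative, not from your setup): keep the single processing method annotated for both brokers, and let a Spring profile decide which listener infrastructure is actually configured.
@Service
public class ProcessMessageReceive {

    // picked up by whichever listener container infrastructure the active profile provides
    @RabbitListener(queues = "${app.rabbitmq.queue}")
    @JmsListener(destination = "${queue}", concurrency = "${threads}", containerFactory = "defaultContainer")
    public void processMessage(@Payload final String message) {
        // do stuff
    }
}

@Profile("prod")
@EnableJms
@Configuration
public class IBMQueueConfig {
    // JMS container factory beans as shown above
}

@Profile("test")
@EnableRabbit
@Configuration
public class RabbitMQConfiguration {
    // Rabbit container factory and connection factory beans as shown above
}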
I'm unable to make the queue listener work with Spring Boot and SQS
(the message is sent and appears in the SQS UI).
The @MessageMapping or @SqsListener method is not invoked.
Java: 11
Spring Boot: 2.1.7
Dependency: spring-cloud-aws-messaging
This is my config
@Configuration
@EnableSqs
public class SqsConfig {

    @Value("#{'${env.name:DEV}'}")
    private String envName;

    @Value("${cloud.aws.region.static}")
    private String region;

    @Value("${cloud.aws.credentials.access-key}")
    private String awsAccessKey;

    @Value("${cloud.aws.credentials.secret-key}")
    private String awsSecretKey;

    @Bean
    public Headers headers() {
        return new Headers();
    }

    @Bean
    public MessageQueue queueMessagingSqs(Headers headers,
                                          QueueMessagingTemplate queueMessagingTemplate) {
        Sqs queue = new Sqs();
        queue.setQueueMessagingTemplate(queueMessagingTemplate);
        queue.setHeaders(headers);
        return queue;
    }

    private ResourceIdResolver getResourceIdResolver() {
        return queueName -> envName + "-" + queueName;
    }

    @Bean
    public DestinationResolver destinationResolver(AmazonSQSAsync amazonSQSAsync) {
        DynamicQueueUrlDestinationResolver destinationResolver = new DynamicQueueUrlDestinationResolver(
                amazonSQSAsync,
                getResourceIdResolver());
        destinationResolver.setAutoCreate(true);
        return destinationResolver;
    }

    @Bean
    public QueueMessagingTemplate queueMessagingTemplate(AmazonSQSAsync amazonSQSAsync,
                                                         DestinationResolver destinationResolver) {
        return new QueueMessagingTemplate(amazonSQSAsync, destinationResolver, null);
    }

    @Bean
    public QueueMessageHandlerFactory queueMessageHandlerFactory() {
        QueueMessageHandlerFactory factory = new QueueMessageHandlerFactory();
        MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
        messageConverter.setStrictContentTypeMatch(false);
        factory.setArgumentResolvers(Collections.singletonList(new PayloadArgumentResolver(messageConverter)));
        return factory;
    }

    @Bean
    public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSqs) {
        SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
        factory.setAmazonSqs(amazonSqs);
        factory.setMaxNumberOfMessages(10);
        factory.setWaitTimeOut(2);
        return factory;
    }
}
I also notice that org.springframework.cloud.aws.messaging.config.SimpleMessageListenerContainerFactory and org.springframework.cloud.aws.messaging.config.annotation.SqsConfiguration run on startup.
And my test
@RunWith(SpringJUnit4ClassRunner.class)
public class ListenTest {

    @Autowired
    private MessageQueue queue;

    private static final String queueName = "test-queue-receive";

    private String result = null;

    @Test
    public void test_listen() {
        // given
        String data = "abc";

        // when
        queue.send(queueName, data).join();

        // then
        Awaitility.await()
                .atMost(10, TimeUnit.SECONDS)
                .until(() -> Objects.nonNull(result));
        Assertions.assertThat(result).isEqualTo(data);
    }

    @MessageMapping(value = queueName)
    public void receive(String data) {
        this.result = data;
    }
}
Do you think something is wrong?
I created a repo as an example: https://github.com/mmaryo/java-sqs-test
In the test folder, change the AWS credentials in 'application.yml',
then run the tests.
I had the same issue when using the spring-cloud-aws-messaging package, but then I used the queue URL in the @SqsListener annotation instead of the queue name and it worked.
@SqsListener(value = { "https://full-queue-URL" }, deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void receive(String message) {
    // do something
}
It seems you can use the queue name when using the spring-cloud-starter-aws-messaging package. I believe there is some configuration that allows usage of the queue name instead of URL if you don't want to use the starter package.
EDIT: I noticed the region was being defaulted to us-west-2 despite me listing us-east-1 in my properties file. I then created a RegionProvider bean and set the region to us-east-1 there, and now when I use the queue name in the @SqsListener annotation it is found and correctly resolved to the URL in the framework code.
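For reference, a rough sketch of the kind of region bean I mean (assuming spring-cloud-aws's StaticRegionProvider is on the classpath; adjust to however your region is configured):
@Bean
public RegionProvider regionProvider() {
    // force queue-name resolution to happen against us-east-1 instead of the default region
    return new StaticRegionProvider("us-east-1");
}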
You'll need to leverage the @Primary annotation; this is what worked for me:
@Autowired(required = false)
private AWSCredentialsProvider awsCredentialsProvider;

@Autowired
private AppConfig appConfig;

@Bean
public QueueMessagingTemplate getQueueMessagingTemplate() {
    return new QueueMessagingTemplate(sqsClient());
}

@Primary
@Bean
public AmazonSQSAsync sqsClient() {
    AmazonSQSAsyncClientBuilder builder = AmazonSQSAsyncClientBuilder.standard();
    if (this.awsCredentialsProvider != null) {
        builder.withCredentials(this.awsCredentialsProvider);
    }
    if (appConfig.getSqsRegion() != null) {
        builder.withRegion(appConfig.getSqsRegion());
    } else {
        builder.withRegion(Regions.DEFAULT_REGION);
    }
    return builder.build();
}
build.gradle needs these deps:
implementation("org.springframework.cloud:spring-cloud-starter-aws:2.2.0.RELEASE")
implementation("org.springframework.cloud:spring-cloud-aws-messaging:2.2.0.RELEASE")
I am new to Spring Boot and trying to use the SQS listener to poll a test-queue in LocalStack. I can push messages into my LocalStack queue, and I then want to poll the same queue and log the contents of the message. However, no message is logged to the console by the SQS listener.
application-local.properties
cloud.aws.region=us-east-1
cloud.local.sqs=http://localstack:4576
cloud.local.s3=http://localstack:4572
app.sqs.maxmessages=1
app.sqs.input=http://localstack:4576/queue/test-queue
app.sqs.output=http://localstack:4576/queue/test-queue
AppController
@Log4j2
@RestController
public class AppController {

    private AmazonS3 s3;
    private SQSOutput output;

    @Autowired
    public AppController(AmazonS3 s3, SQSOutput output) {
        this.s3 = s3;
        this.output = output;
    }

    @RequestMapping("/send")
    public Map<String, String> sendMessage() {
        output.send("Test Message!");
        Map<String, String> response = new HashMap<>();
        response.put("message", "Message sent!");
        return response;
    }

    @SqsListener(value = "${app.sqs.input}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
    public void getMessage(String message) {
        log.info("Received message: " + message);
    }
}
SetupBeans
@Component
public class SetupBeans {

    @Value("${cloud.aws.region}")
    private String region;

    @Value("${cloud.local.s3}")
    private String localCloudS3;

    @Value("${cloud.local.sqs}")
    private String localCloudSQS;

    @Value("${app.sqs.output}")
    private String outputUrl;

    @Value("${app.sqs.maxmessages}")
    private int maxMessages;

    @Bean
    @Primary
    private AWSCredentialsProvider credProvider() {
        return DefaultAWSCredentialsProviderChain.getInstance();
    }

    @Bean
    @Primary
    public AmazonS3 amazonS3() {
        AwsClientBuilder.EndpointConfiguration endpointConfiguration = new AwsClientBuilder
                .EndpointConfiguration(localCloudS3, region);
        return AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(endpointConfiguration)
                .build();
    }

    @Bean
    @Primary
    private AmazonSQSAsync amazonSQSAsync() {
        AmazonSQSAsyncClientBuilder.EndpointConfiguration endpointConfiguration = new AwsClientBuilder
                .EndpointConfiguration(localCloudSQS, region);
        return AmazonSQSAsyncClientBuilder.standard()
                .withEndpointConfiguration(endpointConfiguration)
                .build();
    }

    @Bean
    private QueueMessagingTemplate queueMessagingTemplate() {
        return new QueueMessagingTemplate(amazonSQSAsync());
    }

    @Bean
    public QueueMessageHandlerFactory queueMessageHandlerFactory() {
        QueueMessageHandlerFactory factory = new QueueMessageHandlerFactory();
        MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
        // set strict content type match to false
        messageConverter.setStrictContentTypeMatch(false);
        factory.setArgumentResolvers(Collections.<HandlerMethodArgumentResolver>singletonList(new PayloadArgumentResolver(messageConverter)));
        return factory;
    }

    @Bean
    public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSQS) {
        SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
        factory.setAmazonSqs(amazonSQS);
        factory.setMaxNumberOfMessages(maxMessages);
        return factory;
    }

    @Bean
    private QueueMessageChannel getQueueMessageChannel() {
        return new QueueMessageChannel(amazonSQSAsync(), outputUrl);
    }

    @Bean
    public SQSOutput getSQSOutput() {
        return new SQSOutput(queueMessagingTemplate(), getQueueMessageChannel());
    }
}
I was using org.springframework.cloud:spring-cloud-aws-messaging but needed to use org.springframework.cloud:spring-cloud-starter-aws-messaging.
How can I acknowledge messages manually without using auto acknowledgement?
Is there a way to do this with the @RabbitListener and @EnableRabbit style of configuration?
Most of the documentation tells us to use SimpleMessageListenerContainer along with ChannelAwareMessageListener.
However, using that we lose the flexibility provided by the annotations.
I have configured my service as below:
@Service
public class EventReceiver {

    @Autowired
    private MessageSender messageSender;

    @RabbitListener(queues = "${eventqueue}")
    public void receiveMessage(Order order) throws Exception {
        // code for processing order
    }
}
My RabbitConfiguration is as below
@EnableRabbit
public class RabbitApplication implements RabbitListenerConfigurer {

    public static void main(String[] args) {
        SpringApplication.run(RabbitApplication.class, args);
    }

    @Bean
    public MappingJackson2MessageConverter jackson2Converter() {
        MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
        return converter;
    }

    @Bean
    public SimpleRabbitListenerContainerFactory myRabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(rabbitConnectionFactory());
        factory.setMaxConcurrentConsumers(5);
        factory.setMessageConverter((MessageConverter) jackson2Converter());
        factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        return factory;
    }

    @Bean
    public ConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setHost("localhost");
        return connectionFactory;
    }

    @Override
    public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
        registrar.setContainerFactory(myRabbitListenerContainerFactory());
    }

    @Autowired
    private EventReceiver receiver;
}
Any help on how to adapt manual channel acknowledgement to the above style of configuration will be appreciated.
If we implement ChannelAwareMessageListener, then the onMessage signature will change.
Can we implement ChannelAwareMessageListener on a service?
Add the Channel to the @RabbitListener method...
@RabbitListener(queues = "${eventqueue}")
public void receiveMessage(Order order, Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws Exception {
    ...
}
and use the tag in the basicAck, basicReject.
EDIT
@SpringBootApplication
@EnableRabbit
public class So38728668Application {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(So38728668Application.class, args);
        context.getBean(RabbitTemplate.class).convertAndSend("", "so38728668", "foo");
        context.getBean(Listener.class).latch.await(60, TimeUnit.SECONDS);
        context.close();
    }

    @Bean
    public Queue so38728668() {
        return new Queue("so38728668");
    }

    @Bean
    public Listener listener() {
        return new Listener();
    }

    public static class Listener {

        private final CountDownLatch latch = new CountDownLatch(1);

        @RabbitListener(queues = "so38728668")
        public void receive(String payload, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag)
                throws IOException {
            System.out.println(payload);
            channel.basicAck(tag, false);
            latch.countDown();
        }
    }
}
application.properties:
spring.rabbitmq.listener.acknowledge-mode=manual
In case you need to use onMessage() from the ChannelAwareMessageListener class, you can do it this way.
@Component
public class MyMessageListener implements ChannelAwareMessageListener {

    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        log.info("Message received.");
        // do something with the message
        channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
    }
}
And for the Rabbit configuration:
@Configuration
public class RabbitConfig {

    public static final String topicExchangeName = "exchange1";
    public static final String queueName = "queue1";
    public static final String routingKey = "queue1.route.#";

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setUsername("xxxx");
        connectionFactory.setPassword("xxxxxxxxxx");
        connectionFactory.setPort(5672);
        connectionFactory.setVirtualHost("vHost1");
        return connectionFactory;
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        return new RabbitTemplate(connectionFactory());
    }

    @Bean
    Queue queue() {
        return new Queue(queueName, true);
    }

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(topicExchangeName);
    }

    @Bean
    Binding binding(Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(routingKey);
    }

    @Bean
    public SimpleMessageListenerContainer listenerContainer(MyMessageListener myRabbitMessageListener) {
        SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer();
        listenerContainer.setConnectionFactory(connectionFactory());
        listenerContainer.setQueueNames(queueName);
        listenerContainer.setMessageListener(myRabbitMessageListener);
        listenerContainer.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        listenerContainer.setConcurrency("4");
        listenerContainer.setPrefetchCount(20);
        return listenerContainer;
    }
}
Thanks for Gary's help; I finally solved the issue. I am documenting this for the benefit of others.
This should be covered in the Spring AMQP reference documentation.
The service class is as below.
@Service
public class Consumer {

    @RabbitListener(queues = "${eventqueue}")
    public void receiveMessage(Order order, Channel channel) throws Exception {
        // the above method name can be anything, but it should have Channel as the second parameter
        channel.basicConsume(eventQueue, false, channel.getDefaultConsumer());
        // Get the delivery tag
        long deliveryTag = channel.basicGet(eventQueue, false).getEnvelope().getDeliveryTag();
        try {
            // code for processing order
        } catch (Exception e) {
            // handle exception
            channel.basicReject(deliveryTag, true);
        }
        // If all logic is successful
        channel.basicAck(deliveryTag, false);
    }
}
The configuration has also been modified as below:
public class RabbitApplication implements RabbitListenerConfigurer {

    private static final Logger log = LoggerFactory.getLogger(RabbitApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(RabbitApplication.class, args);
    }

    @Bean
    public MappingJackson2MessageConverter jackson2Converter() {
        MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
        return converter;
    }

    @Bean
    public DefaultMessageHandlerMethodFactory myHandlerMethodFactory() {
        DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
        factory.setMessageConverter(jackson2Converter());
        return factory;
    }

    @Autowired
    private Consumer consumer;

    @Override
    public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
        registrar.setMessageHandlerMethodFactory(myHandlerMethodFactory());
    }

    ...
}
Note: there is no need to configure the Rabbit connection factory or container factory, etc., since the annotations implicitly take care of all this.
Code:
RabbitMQListener:
@Component
public class ServerThroughRabbitMQ implements ServerThroughAMQPBroker {

    private static final AtomicLong ID_COUNTER = new AtomicLong();
    private final long instanceId = ID_COUNTER.incrementAndGet();

    @Autowired
    public ServerThroughRabbitMQ(UserService userService, LoginService loginService....) {
        ....
    }

    @Override
    @RabbitListener(queues = "#{registerQueue.name}")
    public String registerUserAndLogin(String json) {
        .....
    }
}
ServerConfig:
@Configuration
public class ServerConfig {

    @Value("${amqp.broker.exchange-name}")
    private String exchangeName;

    @Value("${amqp.broker.host}")
    private String ampqBrokerHost;

    @Value("${amqp.broker.quidco.queue.postfix}")
    private String quidcoQueuePostfix;

    @Value("${amqp.broker.quidco.queue.durability:true}")
    private boolean quidcoQueueDurability;

    @Value("${amqp.broker.quidco.queue.autodelete:false}")
    private boolean quidcoQueueAutodelete;

    private String registerAndLoginQuequName;

    @PostConstruct
    public void init() {
        registerAndLoginQuequName = REGISTER_AND_LOGIN_ROUTING_KEY + quidcoQueuePostfix;
    }

    public String getRegisterAndLoginQueueName() {
        return registerAndLoginQuequName;
    }

    public String getLoginAndCheckBonusQueueName() {
        return loginAndCheckBonusQuequName;
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory(ampqBrokerHost);
        return connectionFactory;
    }

    @Bean
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(connectionFactory());
    }

    @Bean
    public TopicExchange topic() {
        return new TopicExchange(exchangeName);
    }

    @Bean(name = "registerQueue")
    public Queue registerQueue() {
        return new Queue(registerAndLoginQuequName, quidcoQueueDurability, false, quidcoQueueAutodelete);
    }

    @Bean
    public Binding bindingRegisterAndLogin() {
        return BindingBuilder.bind(registerQueue()).to(topic()).with(REGISTER_AND_LOGIN_ROUTING_KEY);
    }
}
TestConfig:
@EnableRabbit
@TestPropertySource("classpath:test.properties")
public class ServerThroughAMQPBrokerRabbitMQIntegrationTestConfig {

    private final ExecutorService executorService = Executors.newCachedThreadPool();
    private LoginService loginServiceMock = mock(LoginService.class);
    private UserService userServiceMock = mock(UserService.class);

    @Bean
    public ExecutorService executor() {
        return executorService;
    }

    @Bean
    public LoginService getLoginServiceMock() {
        return loginServiceMock;
    }

    @Bean
    public UserService getUserService() {
        return userServiceMock;
    }

    @Bean
    @Autowired
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMaxConcurrentConsumers(5);
        return factory;
    }

    @Bean
    @Autowired
    public RabbitTemplate getRabbitTemplate(ConnectionFactory connectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        return rabbitTemplate;
    }

    @Bean
    public ServerThroughRabbitMQ getServerThroughRabbitMQ() {
        return new ServerThroughRabbitMQ(userServiceMock, loginServiceMock,...);
    }
}
Integration tests:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = {ServerConfig.class, ServerThroughAMQPBrokerRabbitMQIntegrationTestConfig.class})
@Category({IntegrationTest.class})
@TestPropertySource("classpath:test.properties")
public class ServerThroughAMQPBrokerRabbitMQIntegrationTest {

    final private ObjectMapper jackson = new ObjectMapper();

    @Autowired
    private ExecutorService executor;

    @Autowired
    private ServerThroughRabbitMQ serverThroughRabbitMQ;

    @Autowired
    private RabbitTemplate template;

    @Autowired
    private TopicExchange exchange;

    @Autowired
    UserService userService;

    @Autowired
    LoginService loginService;

    @Autowired
    private AmqpAdmin amqpAdmin;

    @Autowired
    private ServerConfig serverConfig;

    final String username = "username";
    final String email = "email@email.com";
    final Integer tcVersion = 1;
    final int quidcoUserId = 1;
    final String jwt = ProcessLauncherForJwtPhpBuilderUnitWithCxtTest.EXPECTED_JWT;

    @Before
    public void cleanAfterOthersForMyself() {
        cleanTestQueues();
    }

    @After
    public void cleanAfterMyselfForOthers() {
        cleanTestQueues();
    }

    private void cleanTestQueues() {
        amqpAdmin.purgeQueue(serverConfig.getRegisterAndLoginQueueName(), false);
    }

    @Test
    @Category({SlowTest.class, IntegrationTest.class})
    public void testRegistrationAndLogin() throws TimeoutException {
        final Waiter waiter = new Waiter();
        when(userService.register(anyString(), anyString(), anyString())).thenReturn(...);
        when(loginService....()).thenReturn(...);

        executor.submit(() -> {
            final RegistrationRequest request = new RegistrationRequest(username, email, tcVersion);
            final String response;
            try {
                //@todo: converter to convert RegistrationRequest inside next method to json
                response = (String) template.convertSendAndReceive(exchange.getName(), REGISTER_AND_LOGIN_ROUTING_KEY.toString(), jackson.writeValueAsString(request));
                waiter.assertThat(response, not(isEmptyString()));
                final RegistrationResponse registrationResponse = jackson.readValue(response, RegistrationResponse.class);
                waiter.assertThat(...);
                waiter.assertThat(...);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
            waiter.resume();
        });
        waiter.await(5, TimeUnit.SECONDS);
    }
}
When I run that test separately, everything works fine, but when I run it with other tests the mocked ServerThroughRabbitMQ isn't being used; some Spring caching forces the old RabbitMQ listener to be used.
I tried to debug it and I can see that the correct bean is being autowired into the test, but for some reason the old listener is used (the old bean has field instanceId=1, the new mocked bean has instanceId=3) and the test fails. I'm not sure how this is possible; if the old bean still existed I would expect an autowiring exception.
I tried to use @DirtiesContext BEFORE_CLASS, but faced another problem (see here).
RabbitMQ and integration testing can be hard, since RabbitMQ keeps state between tests:
- messages from previous tests still sitting in queues
- listeners from previous tests still listening on queues
There are several approaches:
- Purge all queues before you start the test (that might be what you mean by cleanTestQueues()).
- Delete all queues (or use temporary queues) and recreate them before each test (see the sketch below).
- Use the RabbitMQ management REST API to kill listeners or connections left over from previous tests.
- Delete the vhost and recreate the infrastructure for each test (the most brutal way).
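For the second approach, a minimal sketch (reusing the AmqpAdmin and ServerConfig beans from the test above; the queue arguments are assumptions) of deleting and re-declaring the queue before each test:
@Before
public void recreateQueues() {
    // drop the queue together with any leftover messages and consumers,
    // then declare it again so each test starts from a clean broker state
    amqpAdmin.deleteQueue(serverConfig.getRegisterAndLoginQueueName());
    amqpAdmin.declareQueue(new Queue(serverConfig.getRegisterAndLoginQueueName(), true, false, false));
}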