I'm having difficulty finding a Spring way to initialize an exchange that sends the incoming message to more than one queue in my Spring Boot application:
I can't find a good way to define a second exchange-queue binding.
I'm using RabbitTemplate as the producer client.
The six RabbitMQ tutorial pages don't really help with that, since:
they only initialize several temporary queues from the Consumer on demand (while I need the Producer to do the binding, to persistent queues);
the examples are for basic Java usage, not using Spring capabilities.
I also failed to find how to implement it via the Spring AMQP pages.
What I've got so far is an attempt to inject the basic Java binding into the Spring way of doing it, but it's not working:
@Bean
public ConnectionFactory connectionFactory() throws IOException {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
    connectionFactory.setUsername("guest");
    connectionFactory.setPassword("guest");
    Connection conn = connectionFactory.createConnection();
    Channel channel = conn.createChannel(false);
    channel.exchangeDeclare(SPRING_BOOT_EXCHANGE, "fanout");
    channel.queueBind(queueName, SPRING_BOOT_EXCHANGE, "");  // first bind
    channel.queueBind(queueName2, SPRING_BOOT_EXCHANGE, ""); // second bind
    return connectionFactory;
}
Any help would be appreciated
Edit
I think the problem arises from the fact that every time I restart my server it tries to redefine the exchange-queue binding, while these persist in the broker...
I managed to define them manually via the broker's UI console, so the Producer is only aware of the exchange name and the Consumer is only aware of its relevant queue.
Is there a way to define those elements programmatically, but in such a way that they won't be redefined/overwritten if they already exist from previous restarts?
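(From what I can tell, AMQP declarations are idempotent as long as all the arguments match, and the client also has a passive variant that only checks for existence without creating anything. A minimal sketch against the raw client, under those assumptions:)
// Passive declaration only verifies that the exchange exists; it never creates
// or alters it. If the exchange is missing, the broker closes the channel with
// a 404, so a fresh channel is needed before declaring it actively.
try {
    channel.exchangeDeclarePassive(SPRING_BOOT_EXCHANGE);
} catch (IOException e) {
    channel = conn.createChannel(false);
    channel.exchangeDeclare(SPRING_BOOT_EXCHANGE, "fanout", true);
}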
We use an approach similar to the following to send data from one specific input channel to several input queues of other consumers:
@Bean
public IntegrationFlow integrationFlow(final RabbitTemplate rabbitTemplate, final AmqpHeaderMapper amqpHeaderMapper) {
    return IntegrationFlows
            .from("some-input-channel")
            .handle(Amqp.outboundAdapter(rabbitTemplate)
                    .headerMapper(amqpHeaderMapper))
            .get();
}
@Bean
public AmqpHeaderMapper amqpHeaderMapper() {
    final DefaultAmqpHeaderMapper headerMapper = new DefaultAmqpHeaderMapper();
    headerMapper.setRequestHeaderNames("*");
    return headerMapper;
}
@Bean
public ConnectionFactory rabbitConnectionFactory() {
    return new CachingConnectionFactory();
}
@Bean
public RabbitAdmin rabbitAdmin(final ConnectionFactory rabbitConnectionFactory) {
    final RabbitAdmin rabbitAdmin = new RabbitAdmin(rabbitConnectionFactory);
    rabbitAdmin.afterPropertiesSet();
    return rabbitAdmin;
}
@Bean
public RabbitTemplate rabbitTemplate(final ConnectionFactory rabbitConnectionFactory, final RabbitAdmin rabbitAdmin) {
    final RabbitTemplate rabbitTemplate = new RabbitTemplate();
    rabbitTemplate.setConnectionFactory(rabbitConnectionFactory);
    final FanoutExchange fanoutExchange = new FanoutExchange(MY_FANOUT.getFanoutName());
    fanoutExchange.setAdminsThatShouldDeclare(rabbitAdmin);
    for (final String queueName : MY_FANOUT.getQueueNames()) {
        final Queue queue = new Queue(queueName, true);
        queue.setAdminsThatShouldDeclare(rabbitAdmin);
        final Binding binding = BindingBuilder.bind(queue).to(fanoutExchange);
        binding.setAdminsThatShouldDeclare(rabbitAdmin);
    }
    rabbitTemplate.setExchange(fanoutExchange.getName()); // setExchange() takes the exchange name, not the object
    return rabbitTemplate;
}
and for completeness, here's the enum for the fanout declaration:
public enum MyFanout {

    MY_FANOUT(Lists.newArrayList("queue1", "queue2"), "my-fanout");

    private final List<String> queueNames;
    private final String fanoutName;

    MyFanout(final List<String> queueNames, final String fanoutName) {
        this.queueNames = requireNonNull(queueNames, "queue must not be null!");
        this.fanoutName = requireNonNull(fanoutName, "exchange must not be null!");
    }

    public List<String> getQueueNames() {
        return this.queueNames;
    }

    public String getFanoutName() {
        return this.fanoutName;
    }
}
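One caveat: RabbitAdmin only (re)declares queues, exchanges, and bindings that it finds as beans in the application context, so the declarables created inside the rabbitTemplate method above still need to be registered somewhere. A minimal sketch of the bean-based variant, assuming a Spring AMQP version recent enough (2.1+) to ship the Declarables container:
// Registering the declarables as beans lets RabbitAdmin declare them on the
// first connection; re-declaring with identical arguments is a no-op on the broker.
@Bean
public Declarables fanoutDeclarables() {
    FanoutExchange exchange = new FanoutExchange(MY_FANOUT.getFanoutName());
    List<Declarable> declarables = new ArrayList<>();
    declarables.add(exchange);
    for (String queueName : MY_FANOUT.getQueueNames()) {
        Queue queue = new Queue(queueName, true);
        declarables.add(queue);
        declarables.add(BindingBuilder.bind(queue).to(exchange));
    }
    return new Declarables(declarables);
}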
Hope it helps!
Thanks!
That was the answer I was looking for.
Also, for the sake of completeness, I found a way to do it 'the Java way' inside a Spring bean:
@Bean
public ConnectionFactory connectionFactory() throws IOException {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
    connectionFactory.setUsername("guest");
    connectionFactory.setPassword("guest");
    Connection conn = connectionFactory.createConnection();
    Channel channel = conn.createChannel(false);
    // declare exchange
    AMQP.Exchange.DeclareOk resEx = channel.exchangeDeclare(AmqpTemp.SPRING_BOOT_EXCHANGE_test, ExchangeTypes.FANOUT, true, false, false, null);
    // declare queues
    AMQP.Queue.DeclareOk resQ = channel.queueDeclare(AmqpTemp.Q2, true, false, false, null);
    resQ = channel.queueDeclare(AmqpTemp.Q3, true, false, false, null);
    // declare bindings
    AMQP.Queue.BindOk resB = channel.queueBind(AmqpTemp.Q2, AmqpTemp.SPRING_BOOT_EXCHANGE_test, "");
    resB = channel.queueBind(AmqpTemp.Q3, AmqpTemp.SPRING_BOOT_EXCHANGE_test, "");
    return connectionFactory;
}
The problems I was having before were due to the fact that I created some queues in my initial play with the code, and when I tried to reuse the same queue names it caused an exception, since they were initially defined differently. Lesson learnt: rename the queues from the names you used when you 'played' with the code.
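(For anyone hitting the same thing: the broker rejects a re-declaration whose arguments differ from the existing queue with a channel-level PRECONDITION_FAILED (reply code 406). If you'd rather not rename, deleting the stale queue and re-declaring it also works; a minimal sketch with a hypothetical queue name:)
// Delete the queue that was declared with the old arguments, then re-declare
// it with the new ones. Note this drops any messages still in the queue.
channel.queueDelete("my-old-queue");
channel.queueDeclare("my-old-queue", true, false, false, null);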
I have found out that an IntegrationFlow I had written using the Java DSL wasn't very testable, so I followed Configuring with Java Configuration and split it into @Bean definitions.
In my unit test I used a 3rd-party in-memory SFTP server and tried triggering the InboundChannelAdapter and then calling receive() on the channel.
I had a problem finding out the type of Channel to use, as Channel usage is not mentioned anywhere in the SFTP Adapters documentation, but I ultimately found what I think is correct (QueueChannel) in the testing examples repository.
My problem is that the unit test I wrote hangs on the channel's receive() method. Through debugging I determined that the session factory's getSession() never gets called.
What am I doing wrong?
@Bean
public PollableChannel sftpChannel() {
    return new QueueChannel();
}
@Bean
@EndpointId("sftpInboundAdapter")
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "1000"))
public SftpInboundFileSynchronizingMessageSource sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source =
            new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
    source.setLocalDirectory(new File("/local"));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    source.setMaxFetchSize(6);
    return source;
}
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(testSftpSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setPreserveTimestamp(true);
    fileSynchronizer.setRemoteDirectory("/remote");
    List<String> filterFileNameList = List.of("1.txt");
    fileSynchronizer.setFilter(new FilenameListFilter(filterFileNameList));
    return fileSynchronizer;
}
@Bean
public DefaultSftpSessionFactory testSftpSessionFactory() {
    // @Bean methods must not be private; the original parameters were unused, so they are dropped here
    DefaultSftpSessionFactory defaultSftpSessionFactory = new DefaultSftpSessionFactory();
    defaultSftpSessionFactory.setPassword("password");
    defaultSftpSessionFactory.setUser("username");
    defaultSftpSessionFactory.setHost("localhost");
    defaultSftpSessionFactory.setPort(777);
    defaultSftpSessionFactory.setAllowUnknownKeys(true);
    Properties config = new Properties();
    config.put("StrictHostKeyChecking", "no");
    defaultSftpSessionFactory.setSessionConfig(config);
    return defaultSftpSessionFactory;
}
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = {IntegrationFlowTestSupport.class, Synchronizer.class, Channel.class, Activator.class})
public class IntegrationFlowConfigTest {

    private static final String CONTENTS = "abcdef 1234567890";

    @Autowired
    PollableChannel sftpChannel;

    @Autowired
    DefaultSftpSessionFactory testSftpSessionFactory;

    @Autowired
    SftpInboundFileSynchronizer sftpInboundFileSynchronizer;

    @Autowired
    SftpInboundFileSynchronizingMessageSource sftpMessageSource;

    @Autowired
    SourcePollingChannelAdapter sftpInboundAdapter;

    @Test
    public void test() throws Exception {
        FileEntry f1 = new FileEntry("/remote/1.txt", CONTENTS);
        FileEntry f2 = new FileEntry("/remote/2.txt", CONTENTS);
        FileEntry f3 = new FileEntry("/remote/3.txt", CONTENTS);
        withSftpServer(server -> {
            server.setPort(777);
            server.addUser("username", "password");
            server.putFile(f1.getPath(), f1.createInputStream());
            server.putFile(f2.getPath(), f2.createInputStream());
            sftpInboundAdapter.start();
            Message<?> message = sftpChannel.receive();
        });
    }
}
First of all, it is wrong to rewrite your production code just to satisfy unit test expectations. We have spent more than a few hours thinking about how to divide testing concerns from production code.
See respective documentation: https://docs.spring.io/spring-integration/docs/current/reference/html/testing.html#test-context.
For your use-case it might be better to mock that @ServiceActivator instead of using a QueueChannel and a competing consumer in your test. What I mean is that you already have a consumer in your configuration with that @ServiceActivator, so there is no guarantee that your manual sftpChannel.receive() would get a message from the queue, since it could be consumed by your @ServiceActivator subscriber.
The fixedDelay = "0" looks suspicious. Isn't that too often to ask an SFTP server for new files? How do you expect your system to be stable enough if you put it under so much stress with such a short delay?
We don't know what withSftpServer(server -> { is, and it is also not clear what testSftpSessionFactory is. So it is not yet clear how you start an SFTP server and connect to it from your code.
I also see sftpMessageSource.start();, but nowhere in your code is it stopped. Plus, I guess you really meant to start an endpoint, not the source. The endpoint in your case is a SourcePollingChannelAdapter created for that @InboundChannelAdapter. You can use an @EndpointId if it is not autowired automatically by type.
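If you do keep the QueueChannel approach, something like this at least avoids the test hanging forever and stops the endpoint afterwards. This is only a sketch reusing your withSftpServer and FileEntry helpers (whose exact API I'm assuming from your snippet):
@Test
public void receivesFirstFileFromSftp() throws Exception {
    FileEntry f1 = new FileEntry("/remote/1.txt", CONTENTS);
    withSftpServer(server -> {
        server.setPort(777);
        server.addUser("username", "password");
        server.putFile(f1.getPath(), f1.createInputStream());
        sftpInboundAdapter.start();
        try {
            // receive(timeout) returns null instead of blocking indefinitely
            Message<?> message = sftpChannel.receive(10_000);
            assertNotNull(message);
        } finally {
            sftpInboundAdapter.stop();
        }
    });
}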
In our tests we use the Apache MINA SSHD library: https://github.com/spring-projects/spring-integration/blob/main/spring-integration-sftp/src/test/java/org/springframework/integration/sftp/SftpTestSupport.java#L64-L76
I have a legacy Spring 4.2.1.RELEASE application that connects to ActiveMQ 5.x as a listener, and now we're adding connectivity to ActiveMQ Artemis. For Artemis we're using durable subscriptions, because we don't want message loss on a topic when the subscribers go down, and shared subscriptions, because we wanted the option of clustering or using concurrency to asynchronously process the messages in the subscription. I have separate ConnectionFactorys and ListenerContainers, but from this WARN log that keeps repeating it looks like the Artemis DMLC can't start, due to the following NPE:
java.lang.NullPointerException
at org.springframework.jms.listener.AbstractMessageListenerContainer.createConsumer(AbstractMessageListenerContainer.java:856)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.createListenerConsumer(AbstractPollingMessageListenerContainer.java:213)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.initResourcesIfNecessary(DefaultMessageListenerContainer.java:1173)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1149)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1142)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1039)
at java.lang.Thread.run(Unknown Source)
On the surface it looks like it can't find the method createSharedDurableConsumer: the NPE means the cached Method reference is null, i.e. ClassUtils.getMethodIfAvailable did not find it on the Session class at startup. Looking at the AbstractMessageListenerContainer I have, line 856 is the method.invoke call:
/** The JMS 2.0 Session.createSharedDurableConsumer method, if available */
private static final Method createSharedDurableConsumerMethod = ClassUtils.getMethodIfAvailable(
        Session.class, "createSharedDurableConsumer", Topic.class, String.class, String.class);
...
Method method = (isSubscriptionDurable() ?
        createSharedDurableConsumerMethod : createSharedConsumerMethod);
try {
    return (MessageConsumer) method.invoke(session, destination, getSubscriptionName(), getMessageSelector());
}
Artemis configuration:
@Configuration
public class ArtemisConfig {

    @Autowired
    private Environment env;

    @Bean
    public ConnectionFactory artemisConnectionFactory() {
        ActiveMQConnectionFactory artemisConnectionFactory = ActiveMQJMSClient
                .createConnectionFactoryWithHA(JMSFactoryType.CF, createTransportConfigurations());
        artemisConnectionFactory.setUser(env.getRequiredProperty("artemis.username"));
        artemisConnectionFactory.setPassword(env.getRequiredProperty("artemis.password"));
        artemisConnectionFactory.setCallTimeout(env.getRequiredProperty("artemis.call.timeout.millis", Long.class));
        artemisConnectionFactory.setConnectionTTL(env.getRequiredProperty("artemis.connection.ttl.millis", Long.class));
        artemisConnectionFactory
                .setCallFailoverTimeout(env.getRequiredProperty("artemis.call.failover.timeout.millis", Long.class));
        artemisConnectionFactory.setInitialConnectAttempts(
                env.getRequiredProperty("artemis.connection.attempts.initial", Integer.class));
        artemisConnectionFactory
                .setReconnectAttempts(env.getRequiredProperty("artemis.connection.attempts.reconnect", Integer.class));
        artemisConnectionFactory.setRetryInterval(env.getRequiredProperty("artemis.retry.interval.millis", Long.class));
        artemisConnectionFactory
                .setRetryIntervalMultiplier(env.getRequiredProperty("artemis.retry.interval.multiplier", Double.class));
        artemisConnectionFactory.setBlockOnAcknowledge(true);
        artemisConnectionFactory.setBlockOnDurableSend(true);
        artemisConnectionFactory.setCacheDestinations(true);
        artemisConnectionFactory.setConsumerWindowSize(0);
        artemisConnectionFactory.setMinLargeMessageSize(1024 * 1024);
        CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(artemisConnectionFactory);
        cachingConnectionFactory
                .setSessionCacheSize(env.getRequiredProperty("artemis.session.cache.size", Integer.class));
        cachingConnectionFactory.setReconnectOnException(true);
        return cachingConnectionFactory;
    }
    @Bean
    public DefaultJmsListenerContainerFactory artemisContainerFactory(ConnectionFactory artemisConnectionFactory,
            JmsTransactionManager artemisJmsTransactionManager,
            MappingJackson2MessageConverter mappingJackson2MessageConverter) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER);
        factory.setConnectionFactory(artemisConnectionFactory);
        factory.setDestinationResolver(new DynamicDestinationResolver());
        factory.setMessageConverter(mappingJackson2MessageConverter);
        factory.setSubscriptionDurable(Boolean.TRUE);
        factory.setSubscriptionShared(Boolean.TRUE);
        factory.setSessionAcknowledgeMode(Session.SESSION_TRANSACTED);
        factory.setSessionTransacted(Boolean.TRUE);
        factory.setTransactionManager(artemisJmsTransactionManager);
        return factory;
    }
    private TransportConfiguration[] createTransportConfigurations() {
        String connectorFactoryFqcn = NettyConnectorFactory.class.getName();
        Map<String, Object> primaryTransportParameters = new HashMap<>(2, 1F);
        String primaryHostname = env.getRequiredProperty("artemis.primary.hostname");
        Integer primaryPort = env.getRequiredProperty("artemis.primary.port", Integer.class);
        primaryTransportParameters.put("host", primaryHostname);
        primaryTransportParameters.put("port", primaryPort);
        // the backup parameters were elided from the original snippet; rebuilt here
        // analogously, with assumed property names
        Map<String, Object> backupTransportParameters = new HashMap<>(2, 1F);
        backupTransportParameters.put("host", env.getRequiredProperty("artemis.backup.hostname"));
        backupTransportParameters.put("port", env.getRequiredProperty("artemis.backup.port", Integer.class));
        return new TransportConfiguration[] {
                new TransportConfiguration(connectorFactoryFqcn, primaryTransportParameters),
                new TransportConfiguration(connectorFactoryFqcn, backupTransportParameters) };
    }
}
My pom uses version 2.10.0 of Artemis.
How do I fix this?
The JMS 2.0 spec is backwards compatible with JMS 1.1 so make sure you only have the JMS 2 spec on your classpath. My hunch is that the reflection calls in the Spring code are getting messed up because they're hitting the JMS 1.1 spec classes instead of the proper JMS 2 spec classes.
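A quick way to verify which spec jar wins on the classpath is to probe for the JMS 2.0 method directly. A minimal standalone sketch (the class name is just for illustration):
import javax.jms.Session;
import javax.jms.Topic;

public class JmsSpecCheck {

    public static void main(String[] args) {
        try {
            // This overload exists only in the JMS 2.0 API; it is absent from the JMS 1.1 spec classes.
            Session.class.getMethod("createSharedDurableConsumer",
                    Topic.class, String.class, String.class);
            System.out.println("JMS 2.0 Session is on the classpath");
        } catch (NoSuchMethodException e) {
            System.out.println("A JMS 1.1 Session is winning on the classpath");
        }
    }
}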
I have written a Spring application that implements RabbitMQ manually with the RabbitMQ Client API.
The ConnectionFactory and Connection are set up similarly to the tutorial:
public class Recv {

    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv)
            throws java.io.IOException,
                   java.lang.InterruptedException {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("122.34.1.1");
        factory.setPort(5672);
        factory.setUsername("user");
        factory.setPassword("password");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        System.out.println(" [*] Waiting for messages. To exit press CTRL+C");
        ...
    }
}
The connection works correctly. However, if I turn off my host server, my application seems to get stuck in some sort of loop where it tries to auto-recover by continually pinging the turned-off host.
Because of this, my console gets filled with tons of stack traces that say "UnknownHostException". The exact location, according to the stack traces, is the line:
Connection connection = factory.newConnection();
I have tried to put a try-catch block around this line, but that doesn't seem to work at all.
If a traditional try-catch block can't handle the exception coming from the connection, what is the proper way to catch it and stop the auto-recovery from creating this loop?
Thanks.
Try setting up your Rabbit like this (on both ends):
@Bean
public ConnectionFactory connectionFactory() {
    final CachingConnectionFactory connectionFactory = new CachingConnectionFactory(server);
    connectionFactory.setUsername("user");
    connectionFactory.setPassword("pass");
    return connectionFactory;
}

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
    final SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    return factory;
}
The port should be set by default. If not, it will show on startup and you can change it. The Queue should be declared and set by the consumer (host or whatever you call it).
And put an @EnableRabbit annotation on your configuration class if you haven't already.
Consumer declares a Queue:
@RabbitListener(queues = "myQueue")
@Component
public class RabbitListener {

    @Bean
    public Queue getQueue() {
        return new Queue("myQueue");
    }

    @RabbitHandler
    public void getElementFromMyQueue(@Payload Object object) {
        // handle object as you want
    }
}
After that, just @Autowired the RabbitTemplate on the sender and you should be good to go. The queue should remain idle but 'active', even when the host is down.
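For completeness: if you stay with the raw client instead of Spring AMQP, the retry loop that follows a dropped connection comes from the client's automatic connection recovery, which can be switched off on its ConnectionFactory. A minimal sketch (note this does not cover the initial newConnection() call, which you still need to guard yourself):
// com.rabbitmq.client.ConnectionFactory
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("122.34.1.1");
// disable automatic recovery so a lost connection does not trigger endless reconnect attempts
factory.setAutomaticRecoveryEnabled(false);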
I am using Spring Boot with RabbitMQ. Everything is working: processing messages works, and after losing the connection it automatically tries to reconnect. However,
I have only one problem:
When the Rabbit server is switched off (no possibility to establish a connection) and I try to launch the Spring Boot server, it can't start. I can't check now (no access to the machine) what the exact content of the exception is, but it was about a problem with the instantiation of beans. Can you help me?
@Configuration
public class RabbitConfig {

    private String queueName = "myQueue";
    private String exchangeName = "myExchange";

    @Bean
    public FanoutExchange exchange(RabbitAdmin rabbitAdmin) {
        FanoutExchange exch = new FanoutExchange(exchangeName);
        rabbitAdmin.declareExchange(exch);
        return exch;
    }

    @Bean
    public Queue queue(FanoutExchange exchange, RabbitAdmin rabbitAdmin) {
        HashMap<String, Object> args = new HashMap<String, Object>();
        args.put("x-message-ttl", 20);
        args.put("x-dead-letter-exchange", "dlx_exchange_name");
        Queue queue = new Queue(queueName, true, false, false, args);
        rabbitAdmin.declareQueue(queue);
        rabbitAdmin.declareBinding(BindingBuilder.bind(queue).to(exchange));
        return queue;
    }
}
Edit
I must edit, because I was not aware of the fact that it is important here.
In my case the last argument is not null, it is a HashMap (it is important for me). I edited my code above.
Moreover, I don't understand your answer exactly. Could you be more precise?
To make sure that I was sufficiently clear: I would like to take advantage of the automatic reconnection (it is working now). Additionally, if the Rabbit broker is shut down while the Spring Boot server is starting, the server should still start and cyclically try to reconnect (at the moment the application doesn't start).
Edit2
@Configuration
public class RabbitConfig {

    private String queueName = "myQueue";
    private String exchangeName = "myExchange";

    @Bean
    public FanoutExchange exchange(RabbitAdmin rabbitAdmin) {
        FanoutExchange exch = new FanoutExchange(exchangeName);
        //rabbitAdmin.declareExchange(exch);
        return exch;
    }

    @Bean
    public Queue queue(FanoutExchange exchange, RabbitAdmin rabbitAdmin) {
        HashMap<String, Object> args = new HashMap<String, Object>();
        args.put("x-message-ttl", 20);
        args.put("x-dead-letter-exchange", "dlx_exchange_name");
        Queue queue = new Queue(queueName, true, false, false, args);
        //rabbitAdmin.declareQueue(queue);
        //rabbitAdmin.declareBinding(BindingBuilder.bind(queue).to(exchange));
        return queue;
    }

    // EDIT 3: now we also have to create the binding bean
    @Autowired
    Queue queue; // inject the beans defined above

    @Autowired
    FanoutExchange exchange;

    @Bean
    public Binding binding() {
        return BindingBuilder.bind(queue).to(exchange);
    }
}
That's correct. You are trying to register broker entities manually:
rabbitAdmin.declareExchange(exch);
...
rabbitAdmin.declareQueue(queue);
rabbitAdmin.declareBinding(BindingBuilder.bind(queue).to(exchange));
You should rely here on the built-in auto-declaration mechanism of the Framework.
In other words: you're good to declare those beans (including a Binding, BTW), but you must not call rabbitAdmin.declare* at all, at least not from the bean definition phase.
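A minimal sketch of that bean-only style, reusing the names from the question. RabbitAdmin picks these beans up and declares them on the first successful connection, so a broker that is down at startup no longer prevents the application context from starting:
@Configuration
public class RabbitConfig {

    private final String queueName = "myQueue";
    private final String exchangeName = "myExchange";

    @Bean
    public FanoutExchange exchange() {
        return new FanoutExchange(exchangeName);
    }

    @Bean
    public Queue queue() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-message-ttl", 20);
        args.put("x-dead-letter-exchange", "dlx_exchange_name");
        return new Queue(queueName, true, false, false, args);
    }

    @Bean
    public Binding binding(Queue queue, FanoutExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange);
    }
}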
I have a Spring application that has to consume messages from some JMS queues. The number of queues has to be configurable, and because of this we manually create the consumers by reading a config file. So I can have x queues of type1 and y queues of type2, and all the connection details are specified in this config file.
It is rather complicated code, and I need to point out the following facts: I manually create the Spring DefaultMessageListenerContainer and call start and stop on it, and the transaction manager is shared between the JMS and JDBC resources. Also, the application runs on WebLogic and the JMS queues are in WebLogic too.
The flow is that the app reads messages from the queues and tries to put them into the database; if the database is down, the transaction (shared between JMS and JDBC) is rolled back, so the message is put back into the queue. This is the failover mechanism for when the database is down.
The issue I am experiencing is that when I stop the application while it performs this failover mechanism, some JMS consumer threads are not stopped. This leaks threads and overloads the system.
So my question is: how can I make sure that when the application stops, it stops all the consumer threads? Calling stop on the message listener container doesn't seem to do the job.
Below are some code snippets:
config:
[
{
"factoryInitial": "weblogic.jndi.WLInitialContextFactory",
"providerUrl": "t3://localhost:7001",
"securityPrincipal": "user",
"securityCredentials": "password",
"connectionFactory": "jms/QCF",
"channels": {
"type1": "jms/queue1"
}
}
]
java:
public class JmsConfig {

    private Map<String, List<DefaultMessageListenerContainer>> channels = new HashMap<>();
    private Map<String, MessageListener> messageConsumers;
    private PlatformTransactionManager transactionManager;

    public JmsConfig(Map<String, MessageListener> messageConsumers, PlatformTransactionManager transactionManager) throws Exception {
        this.messageConsumers = messageConsumers;
        this.transactionManager = transactionManager;
        List<JmsServerConfiguration> serverConfigurationList = readJsonFile();
        for (JmsServerConfiguration jmsServerConfiguration : serverConfigurationList) {
            Properties environment = createEnvironment(jmsServerConfiguration);
            JndiTemplate jndiTemplate = new JndiTemplate();
            jndiTemplate.setEnvironment(environment);
            ConnectionFactory connectionFactory = createConnectionFactory(jndiTemplate, jmsServerConfiguration);
            populateMessageListenerContainers(jmsServerConfiguration, jndiTemplate, connectionFactory);
        }
    }

    @PreDestroy
    public void stopListenerContainers() {
        for (Map.Entry<String, List<DefaultMessageListenerContainer>> channel : channels.entrySet()) {
            for (DefaultMessageListenerContainer listenerContainer : channel.getValue()) {
                listenerContainer.stop();
            }
        }
    }

    private void populateMessageListenerContainers(
            JmsServerConfiguration jmsServerConfiguration,
            JndiTemplate jndiTemplate, ConnectionFactory connectionFactory) throws Exception {
        Set<Map.Entry<String, String>> channelsEntry = jmsServerConfiguration.getChannels().entrySet();
        for (Map.Entry<String, String> channel : channelsEntry) {
            Destination destination = createDestination(jndiTemplate, channel.getValue());
            DefaultMessageListenerContainer listenerContainer =
                    createListenerContainer(connectionFactory, destination, messageConsumers.get(channel.getKey()));
            if (!channels.containsKey(channel.getKey())) {
                channels.put(channel.getKey(),
                        new ArrayList<DefaultMessageListenerContainer>());
            }
            channels.get(channel.getKey()).add(listenerContainer);
        }
    }

    private Properties createEnvironment(JmsServerConfiguration jmsServerConfiguration) {
        Properties properties = new Properties();
        properties.setProperty("java.naming.factory.initial", jmsServerConfiguration.getFactoryInitial());
        properties.setProperty("java.naming.provider.url", jmsServerConfiguration.getProviderUrl());
        properties.setProperty("java.naming.security.principal", jmsServerConfiguration.getSecurityPrincipal());
        properties.setProperty("java.naming.security.credentials", jmsServerConfiguration.getSecurityCredentials());
        return properties;
    }

    private ConnectionFactory createConnectionFactory(JndiTemplate jndiTemplate,
            JmsServerConfiguration jmsServerConfiguration) throws Exception {
        JndiObjectFactoryBean connectionFactory = new JndiObjectFactoryBean();
        connectionFactory.setJndiTemplate(jndiTemplate);
        connectionFactory.setJndiName(jmsServerConfiguration.getConnectionFactory());
        connectionFactory.setExpectedType(ConnectionFactory.class);
        connectionFactory.afterPropertiesSet();
        return (ConnectionFactory) connectionFactory.getObject();
    }

    private Destination createDestination(JndiTemplate jndiTemplate, String jndiName) throws Exception {
        JndiObjectFactoryBean destinationFactory = new JndiObjectFactoryBean();
        destinationFactory.setJndiTemplate(jndiTemplate);
        destinationFactory.setJndiName(jndiName);
        destinationFactory.setExpectedType(Destination.class);
        destinationFactory.afterPropertiesSet();
        return (Destination) destinationFactory.getObject();
    }

    private DefaultMessageListenerContainer createListenerContainer(
            ConnectionFactory connectionFactory, Destination destination,
            MessageListener messageListener) {
        DefaultMessageListenerContainer listenerContainer = new DefaultMessageListenerContainer();
        listenerContainer.setConcurrentConsumers(3);
        listenerContainer.setConnectionFactory(connectionFactory);
        listenerContainer.setDestination(destination);
        listenerContainer.setMessageListener(messageListener);
        listenerContainer.setTransactionManager(transactionManager);
        listenerContainer.setSessionTransacted(true);
        listenerContainer.afterPropertiesSet();
        listenerContainer.start();
        return listenerContainer;
    }
}
So the issue was solved by calling listenerContainer.shutdown() instead of stop().
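For reference, the changed hook would look like this (shutdown() actually tears the consumer threads down and closes their resources, while stop() only pauses message reception and keeps the consumers around):
@PreDestroy
public void stopListenerContainers() {
    for (List<DefaultMessageListenerContainer> containers : channels.values()) {
        for (DefaultMessageListenerContainer listenerContainer : containers) {
            // shutdown() destroys the consumer threads; stop() merely pauses them
            listenerContainer.shutdown();
        }
    }
}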