I have written a Spring application that implements RabbitMQ manually with the RabbitMQ Client API.
The ConnectionFactory and Connection are set up similarly to the tutorial:
public class Recv {
private final static String QUEUE_NAME = "hello";
public static void main(String[] argv)
throws java.io.IOException,
java.lang.InterruptedException {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("122.34.1.1");
factory.setPort(5672);
factory.setUsername("user");
factory.setPassword("password");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
System.out.println(" [*] Waiting for messages. To exit press CTRL+C");
...
}
}
The connection works correctly. However, if I turn off my host server, my application seems to get stuck in some sort of loop where it tries to auto-recover by repeatedly pinging the turned-off host.
Because of this, my console gets filled with tons of stack traces that say "UnknownHostException". The exact location, according to the stack traces, is the line:
Connection connection = factory.newConnection();
I have tried to put a try-catch block around this line, but that doesn't seem to work at all.
If the traditional try-catch block can't handle the exception coming from the connection, what is the proper way to catch the exception and stop the auto-recovery from creating this loop?
Thanks.
Try setting up your Rabbit like this (on both ends):
@Bean
public ConnectionFactory connectionFactory() {
final CachingConnectionFactory connectionFactory = new CachingConnectionFactory(server);
connectionFactory.setUsername("user");
connectionFactory.setPassword("pass");
return connectionFactory;
}
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
final SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
return factory;
}
The port is set by default. If not, it will show on startup and you can change it. The queue should be declared by the consumer (the host, or whatever you call it).
Also put an @EnableRabbit annotation on your configuration class if you haven't already.
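For reference, a minimal sketch of such a configuration class (the class name is just illustrative):
@Configuration
@EnableRabbit
public class RabbitConfig {
// the connectionFactory() and rabbitListenerContainerFactory() beans shown above go here
}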
Consumer declares a Queue:
@RabbitListener(queues = "myQueue")
@Component
public class MyQueueListener {
@Bean
public Queue getQueue() {
return new Queue("myQueue");
}
@RabbitHandler
public void getElementFromMyQueue(@Payload Object object) {
// handle the object as you want
}
}
After that, just @Autowire the RabbitTemplate on the sender and you should be good to go. The queue should remain idle but 'active', even when the host is down.
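For example, a hypothetical sender component might look like this (class and method names are illustrative; it assumes a RabbitTemplate bean is available, which Spring Boot auto-configures):
@Component
public class Sender {
@Autowired
private RabbitTemplate rabbitTemplate;
public void send(Object payload) {
// routes via the default exchange straight to the queue named "myQueue"
rabbitTemplate.convertAndSend("myQueue", payload);
}
}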
Related
I have a problem: I do not know how to set the host dynamically and perform RPC operations against different hosts.
Here is the situation
I have multiple RabbitMQ brokers running on different servers and networks (i.e. 192.168.1.0/24, 192.168.2.0/24).
The desired behavior is: I have a list of IP addresses against which I will perform an RPC.
So, for each entry in the IP address list, I want to perform a convertSendAndReceive, process the reply, and so on.
I tried some code from the documentation, but it does not seem to work: even an invalid address (one that doesn't have a valid RabbitMQ running, or doesn't even exist on the network, for example 1.1.1.1) gets received by a valid RabbitMQ (running on 192.168.1.1, for example).
Note: I can successfully perform an RPC call against a correct address; however, I can also successfully perform an RPC call against an invalid address, which I'm not supposed to be able to.
Does anyone have any idea about this?
Here is my source
TaskSchedulerConfiguration.java
@Configuration
@EnableScheduling
public class TaskSchedulerConfiguration {
@Autowired
private IpAddressRepo ipAddressRepo;
@Autowired
private RemoteProcedureService remote;
@Scheduled(fixedDelayString = "5000", initialDelay = 2000)
public void scheduledTask() {
ipAddressRepo.findAll().stream()
.forEach(ipaddress -> {
boolean status = false;
try {
remote.setIpAddress(ipaddress);
remote.doSomeRPC();
} catch (Exception e) {
logger.debug("Unable to Connect to licenser server: {}", license.getIpaddress());
logger.debug(e.getMessage(), e);
}
});
}
}
RemoteProcedureService.java
@Service
public class RemoteProcedureService {
@Autowired
private RabbitTemplate template;
@Autowired
private DirectExchange exchange;
// fields used below but not shown in the original snippet:
private final CachingConnectionFactory factory = new CachingConnectionFactory();
@Autowired
private LicenseVisualizationProperties prop;
public boolean doSomeRPC() throws JsonProcessingException {
//I pass this.factory.getHost() so that I can see which host actually receives the message on the other side
//at this point, the other side receives messages aimed at the invalid IP address, which it is not supposed to receive
boolean response = (Boolean) template.convertSendAndReceive(exchange.getName(), "rpc", this.factory.getHost());
return response;
}
public void setIpAddress(String host) {
factory.setHost(host);
factory.setCloseTimeout(prop.getRabbitMQCloseConnectTimeout());
factory.setPort(prop.getRabbitMQPort());
factory.setUsername(prop.getRabbitMQUsername());
factory.setPassword(prop.getRabbitMQPassword());
template.setConnectionFactory(factory);
}
}
AmqpConfiguration.java
@Configuration
public class AmqpConfiguration {
public static final String topicExchangeName = "testExchange";
public static final String queueName = "rpc";
@Autowired
private LicenseVisualizationProperties prop;
//Commented this out since this will only be assigned once
//I need to set it dynamically in order to send to different hosts
//so I put it in RemoteProcedureService.java, but it never worked
// @Bean
// public ConnectionFactory connectionFactory() {
// CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
// connectionFactory.setCloseTimeout(prop.getRabbitMQCloseConnectTimeout());
// connectionFactory.setPort(prop.getRabbitMQPort());
// connectionFactory.setUsername(prop.getRabbitMQUsername());
// connectionFactory.setPassword(prop.getRabbitMQPassword());
// return connectionFactory;
// }
@Bean
public DirectExchange exchange() {
return new DirectExchange(topicExchangeName);
}
}
UPDATE 1
It seems that, during the loop, once a valid IP is set in the CachingConnectionFactory, the succeeding IP addresses in the loop, regardless of whether they are valid or invalid, get received by the first valid IP set in the CachingConnectionFactory.
UPDATE 2
I found out that once it has established a connection successfully, it will not create a new one. How do you force RabbitTemplate to establish a new connection?
It's a rather strange use case and won't perform very well; you would be better off with a pool of connection factories and templates.
However, to answer your question:
Call resetConnection() to close the connection.
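Applied to the setIpAddress method from the question, a sketch (assuming the CachingConnectionFactory field the question's code references) would be:
public void setIpAddress(String host) {
factory.resetConnection(); // closes the cached connection so the next operation reconnects
factory.setHost(host);
factory.setPort(prop.getRabbitMQPort());
factory.setUsername(prop.getRabbitMQUsername());
factory.setPassword(prop.getRabbitMQPassword());
template.setConnectionFactory(factory);
}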
In this code, I am using setJMSExpiration(1000) to expire the message after one second in the queue, from the publisher side. But on the consumer side, the message is still returned properly after 1 second, instead of null.
public class RegistrationPublisher extends Thread{
public void run() {
publisherQueue("Registration.Main.*");
}
public void publisherQueue(String server){
try {
String url="tcp://192.168.20.49:61616";
// Create a ConnectionFactory
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(url);
Connection connection = connectionFactory.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue(server);
MessageProducer producer = session.createProducer(destination);
producer.setDeliveryMode(DeliveryMode.PERSISTENT);
String text = "Test";
TextMessage message = session.createTextMessage(text);
message.setJMSExpiration(1000); // intended to expire the message after one second
producer.send(message);
producer.close();
session.close();
connection.close();
}
catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String args[]) throws IOException{
RegistrationPublisher registrationPublisher=new RegistrationPublisher();
registrationPublisher.start();
}
}
You do this by configuring the JMS MessageProducer to do it for you, either via the send method that accepts a TTL or by calling setTimeToLive on the producer, which applies the same TTL to all sent messages. The JMS API docs for the message-level setter are clear that calling the setters on the message has no effect.
void setJMSExpiration(long expiration) throws JMSException
Sets the message's expiration value.
This method is for use by JMS providers only to set this field when a message is sent. This message cannot be used by clients to configure the expiration time of the message. This method is public to allow a JMS provider to set this field when sending a message whose implementation is not its own.
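Applied to the code in the question, that means setting the TTL on the producer instead; a sketch using the plain JMS API:
// apply a one-second TTL to every message this producer sends
producer.setTimeToLive(1000);
producer.send(message);
// or pass the TTL per send:
producer.send(message, DeliveryMode.PERSISTENT, Message.DEFAULT_PRIORITY, 1000);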
I also first thought that it was possible to set the expiration directly on the message in the post-processor, but as Tim Bish said above, this is not the intended way to do it, and the value gets reset to 0 afterwards. I couldn't access the producer directly either to set a time to live, because that object lives in the org.springframework.jms library (I was following this documentation).
One thing I could do was to set the time to live on the jmsTemplate:
import org.springframework.jms.core.JmsTemplate;
@Service
public class MyJmsServiceImpl implements MyJmsService {
@Inject
private JmsTemplate jmsTemplate;
private void convertAndSendToResponseQueue(String targetQueueName, String correlationid, Object message) {
// Set time to live
jmsTemplate.setExplicitQosEnabled(true);
jmsTemplate.setTimeToLive(5000);
jmsTemplate.convertAndSend(targetQueueName, message, new JmsResponsePostProcessor(correlationid));
}
}
I have implemented an ActiveMQ message broker in my application, but whenever I'm connected to the ActiveMQ server I always see the messages below in my console logs:
10:28:05.282 [ActiveMQ InactivityMonitor WriteCheckTimer] DEBUG o.a.a.t.AbstractInactivityMonitor - WriteChecker: 10000ms elapsed since last write check.
10:28:05.282 [ActiveMQ InactivityMonitor Worker] DEBUG o.a.a.t.AbstractInactivityMonitor - Running WriteCheck[tcp://10.211.127.203:61616]
It looks like it keeps polling the listener queue. I want my listener to stay active so that whenever a message arrives on the queue, the application can process it. At the same time, I do not want these messages piling up in my log file.
My message configuration:
@Configuration
@EnableJms
@ImportResource("classpath*:beans.xml")
public class MessagingConfiguration {
@Autowired
MongoCredentialEncryptor encryptor;
#Value("${activemq.broker.url}")
private String BROKER_URL = "tcp://localhost:61616";
#Value("${activemq.request.queue}")
private String REQUEST_QUEUE = "test.request";
#Value("${activemq.response.queue}")
private String RESPONSE_QUEUE = "test.response";
#Value("${activemq.borker.username}")
private String BROKER_USERNAME = "admin";
#Value("${activemq.borker.password}")
private String BROKER_PASSWORD = "admin";
@Autowired
ListenerClass messageListener;
@Autowired
JmsExceptionListener jmsExceptionListener;
@Bean
public ActiveMQConnectionFactory connectionFactory() {
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
connectionFactory.setBrokerURL(BROKER_URL);
connectionFactory.setUserName(BROKER_USERNAME);
//connectionFactory.setPassword(BROKER_PASSWORD);
connectionFactory.setPassword(encryptor.decrypt(BROKER_PASSWORD));
connectionFactory.setTrustAllPackages(true);
connectionFactory.setRedeliveryPolicy(redeliveryPolicy());
return connectionFactory;
}
@Bean
public RedeliveryPolicy redeliveryPolicy() {
RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
redeliveryPolicy.setBackOffMultiplier(3); // Wait 5 seconds first re-delivery, then 15, 45 seconds
redeliveryPolicy.setInitialRedeliveryDelay(5000);
redeliveryPolicy.setMaximumRedeliveries(3);
redeliveryPolicy.setUseExponentialBackOff(true);
return redeliveryPolicy;
}
@Bean
public ConnectionFactory cachingConnectionFactory() {
CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
connectionFactory.setTargetConnectionFactory(connectionFactory());
connectionFactory.setExceptionListener(jmsExceptionListener);
connectionFactory.setSessionCacheSize(100);
connectionFactory.setCacheConsumers(false);
connectionFactory.setCacheProducers(false);
return connectionFactory;
}
@Bean
public JmsTemplate jmsTemplate() {
JmsTemplate template = new JmsTemplate();
template.setConnectionFactory(connectionFactory());
template.setDefaultDestinationName(REQUEST_QUEUE);
template.setSessionAcknowledgeModeName("CLIENT_ACKNOWLEDGE");
template.setMessageConverter(converter());
return template;
}
@Bean
public DefaultMessageListenerContainer jmsListenerContainer() {
DefaultMessageListenerContainer factory = new DefaultMessageListenerContainer();
factory.setConnectionFactory(connectionFactory());
factory.setConcurrency("1-1");
factory.setDestinationName(RESPONSE_QUEUE);
factory.setMessageListener(messageListener);
factory.setExceptionListener(jmsExceptionListener);
return factory;
}
@Bean
MessageConverter converter() {
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
converter.setTargetType(MessageType.TEXT);
converter.setTypeIdPropertyName("_type");
return converter;
}
}
Solution:
I found a solution to my problem by disabling the InactivityMonitor, following this link: ActiveMQ InactivityMonitor. I tested it: the connection stays alive and it no longer keeps polling the queue. But I would like to know, is there any downside to disabling the InactivityMonitor? Is there a better solution for this problem?
activemq.broker.url=failover:tcp://localhost:61616?wireFormat.maxInactivityDuration=0
The fix is simple: change your logging settings to not write at debug level, or filter that one logger so it is not at debug level. The logs are keeping you updated on the fact that the client is successfully pinging the remote broker to ensure the connection is alive. If you disable the monitoring, you can miss connection drop events and things like half-closed sockets etc.
In production you really don't want to turn off connection-checking features; doing so can cause your code to miss the fact that it is never going to receive anything.
Example configuration modification:
<logger name="org.apache.activemq.transport" level="WARN"/>
There is no harm in this log being printed; the InactivityMonitor just checks whether the connection between broker and client is active or not.
If it finds that the connection has been inactive for the given time, the InactivityMonitor closes the connection between client and broker.
We can set the timeout for the InactivityMonitor by specifying wireFormat.maxInactivityDuration="<time in ms>" in the tcp URLs in the transportConnectors section of the activemq.xml file.
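For example, a broker-side snippet in activemq.xml might look like this (the URI and timeout value are illustrative):
<transportConnectors>
<!-- close connections that have been inactive for more than 30 seconds -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?wireFormat.maxInactivityDuration=30000"/>
</transportConnectors>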
For detailed information visit this : InactivityMonitor
I'm having difficulty finding a Spring way to initialize an exchange that sends the incoming message to more than one queue, in my Spring Boot application:
I can't find a good way to define a second exchange-queue binding.
I'm using RabbitTemplate as the producer client.
The six RabbitMQ tutorial pages don't really help with that, since:
they only initialize several temporary queues from the consumer on demand (while I need the producer to do the binding, to persistent queues)
the examples are for basic Java usage, not using Spring capabilities.
I also failed to find how to implement it via the Spring AMQP pages.
What I have so far is an attempt to inject the basic Java binding into the Spring way of doing it, but it's not working:
@Bean
public ConnectionFactory connectionFactory() throws IOException {
CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
connectionFactory.setUsername("guest");
connectionFactory.setPassword("guest");
Connection conn = connectionFactory.createConnection();
Channel channel = conn.createChannel(false);
channel.exchangeDeclare(SPRING_BOOT_EXCHANGE, "fanout");
channel.queueBind(queueName, SPRING_BOOT_EXCHANGE, ""); //first bind
channel.queueBind(queueName2, SPRING_BOOT_EXCHANGE, "");// second bind
return connectionFactory;
}
Any help would be appreciated
Edited
I think the problem arises from the fact that every time I restart my server it tries to redefine the exchange-queue bindings, while they already persist in the broker...
I managed to define them manually via the broker's UI console, so the producer is only aware of the exchange name and the consumer is only aware of its relevant queue.
Is there a way to define those elements programmatically, but in such a way that they won't be redefined/overwritten if they already exist from previous restarts?
We use an approach similar to the following to send data from one specific input channel to several input queues of other consumers:
@Bean
public IntegrationFlow integrationFlow(final RabbitTemplate rabbitTemplate, final AmqpHeaderMapper amqpHeaderMapper) {
return IntegrationFlows
.from("some-input-channel")
.handle(Amqp.outboundAdapter(rabbitTemplate)
.headerMapper(amqpHeaderMapper))
.get();
}
@Bean
public AmqpHeaderMapper amqpHeaderMapper() {
final DefaultAmqpHeaderMapper headerMapper = new DefaultAmqpHeaderMapper();
headerMapper.setRequestHeaderNames("*");
return headerMapper;
}
@Bean
public ConnectionFactory rabbitConnectionFactory() {
return new CachingConnectionFactory();
}
@Bean
public RabbitAdmin rabbitAdmin(final ConnectionFactory rabbitConnectionFactory) {
final RabbitAdmin rabbitAdmin = new RabbitAdmin(rabbitConnectionFactory);
rabbitAdmin.afterPropertiesSet();
return rabbitAdmin;
}
@Bean
public RabbitTemplate rabbitTemplate(final ConnectionFactory rabbitConnectionFactory, final RabbitAdmin rabbitAdmin) {
final RabbitTemplate rabbitTemplate = new RabbitTemplate();
rabbitTemplate.setConnectionFactory(rabbitConnectionFactory);
final FanoutExchange fanoutExchange = new FanoutExchange(MY_FANOUT.getFanoutName());
fanoutExchange.setAdminsThatShouldDeclare(rabbitAdmin);
for (final String queueName : MY_FANOUT.getQueueNames()) {
final Queue queue = new Queue(queueName, true);
queue.setAdminsThatShouldDeclare(rabbitAdmin);
final Binding binding = BindingBuilder.bind(queue).to(fanoutExchange);
binding.setAdminsThatShouldDeclare(rabbitAdmin);
}
rabbitTemplate.setExchange(fanoutExchange.getName());
return rabbitTemplate;
}
and for completeness here's the enum for the fanout declaration:
public enum MyFanout {
MY_FANOUT(Lists.newArrayList("queue1", "queue2"), "my-fanout");
private final List<String> queueNames;
private final String fanoutName;
MyFanout(final List<String> queueNames, final String fanoutName) {
this.queueNames = requireNonNull(queueNames, "queue must not be null!");
this.fanoutName = requireNonNull(fanoutName, "exchange must not be null!");
}
public List<String> getQueueNames() {
return this.queueNames;
}
public String getFanoutName() {
return this.fanoutName;
}
}
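With that in place, a single publish through the template should reach every queue bound to the fanout; a minimal, hypothetical usage:
// the routing key is ignored by a fanout exchange; the default exchange was set on the template above
rabbitTemplate.convertAndSend("", somePayload);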
Hope it helps!
Thanks!
That was the answer I was looking for.
Also, for the sake of completeness, I found a way to do it 'the Java way' inside a Spring bean:
@Bean
public ConnectionFactory connectionFactory() throws IOException {
CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
connectionFactory.setUsername("guest");
connectionFactory.setPassword("guest");
Connection conn = connectionFactory.createConnection();
Channel channel = conn.createChannel(false);
// declare exchange
AMQP.Exchange.DeclareOk resEx = channel.exchangeDeclare(AmqpTemp.SPRING_BOOT_EXCHANGE_test, ExchangeTypes.FANOUT, true, false, false, null);
// declare queues
AMQP.Queue.DeclareOk resQ = channel.queueDeclare(AmqpTemp.Q2, true, false, false, null);
resQ = channel.queueDeclare(AmqpTemp.Q3, true, false, false, null);
// declare binding
AMQP.Queue.BindOk resB = channel.queueBind(AmqpTemp.Q2, AmqpTemp.SPRING_BOOT_EXCHANGE_test, "");
resB = channel.queueBind(AmqpTemp.Q3, AmqpTemp.SPRING_BOOT_EXCHANGE_test, "");
// channel.queueBind(queueName2, SPRING_BOOT_EXCHANGE, "");
return connectionFactory;
}
The problems I was having before were due to the fact that I had created some queues in my initial experiments with the code, and when I tried to reuse the same queue names it caused exceptions, since they had originally been declared with different settings. So, lesson learnt: rename the queues from the names you used when you 'played' with the code.
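If you are not sure whether a queue already exists with different arguments, a passive declare checks for the queue without redeclaring it; a sketch reusing the constants from the snippet above:
// queueDeclarePassive only checks that the queue exists (it throws an IOException and closes the
// channel if it does not); it never fails with PRECONDITION_FAILED over mismatched arguments,
// because nothing is redeclared
AMQP.Queue.DeclareOk ok = channel.queueDeclarePassive(AmqpTemp.Q2);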
I am looking to extend AbstractJavaSamplerClient so that I can fire messages to RabbitMQ. The current setup I have is:
Have connection and channel objects as instance members
Create the connection and channel in setupTest()
Send the messages in runTest()
Clean up the connection in teardownTest()
Code:
package com.the.package.samplers;
import ...
...
public final class TheSampler extends AbstractJavaSamplerClient {
private final ConnectionFactory factory = new ConnectionFactory();
private Connection connection = null;
private Channel channel = null;
...
@Override
public Arguments getDefaultParameters() {
Arguments parameters = new Arguments();
...
return parameters;
}
@Override
public void setupTest(JavaSamplerContext context) {
...
factory.setHost(host);
factory.setVirtualHost(vhost);
factory.setPort(port);
factory.setUsername(username);
factory.setPassword(password);
routingKey = queue;
try {
connection = factory.newConnection();
channel = connection.createChannel();
channel.exchangeDeclare(exchange, EXCHANGE_TYPE, true);
channel.queueDeclare(queue, true, false, false, null);
channel.queueBind(queue, exchange, routingKey);
}
catch(IOException e) {
...
}
}
@Override
public SampleResult runTest(JavaSamplerContext context) {
...
channel.basicPublish(exchange, routingKey, null, message.getBytes());
...
}
@Override
public void teardownTest(JavaSamplerContext context) {
try {
channel.close();
connection.close();
}
catch(IOException e) {
...
}
}
}
After running the JMeter test with 5 threads for some time, the message rate drops and I start seeing the following exception (repeated indefinitely):
ERROR - jmeter.threads.JMeterThread: Error while processing sampler 'Java Request' : com.rabbitmq.client.AlreadyClosedException: connection is already closed due to connection error; cause: java.net.SocketException: Connection reset
at com.rabbitmq.client.impl.AMQChannel.ensureIsOpen(AMQChannel.java:190)
at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:291)
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:647)
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:630)
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:621)
at com.the.package.samplers.TheSampler.runTest(TheSampler.java:102)
at org.apache.jmeter.protocol.java.sampler.JavaSampler.sample(JavaSampler.java:191)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:434)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:261)
at java.lang.Thread.run(Unknown Source)
I tried to be a bit safer by creating and closing the connection and channel objects in runTest(), but that incurred a huge performance hit (it fires at a maximum of 50 messages per second, whereas previously it was in the thousands).
Is there a way to safely create a connection to RabbitMQ when extending AbstractJavaSamplerClient and running with multiple threads?
I don't see any issue in your code from what you show.
Where is JMeter located relative to the RabbitMQ server? i.e. is there a firewall between them?
You should check this:
https://www.rabbitmq.com/reliability.html
Detecting Dead TCP Connections with Heartbeats
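A sketch of what that could look like in the sampler's setupTest(), assuming the existing factory field (the values are illustrative):
factory.setRequestedHeartbeat(30); // seconds; lets the client detect dead TCP connections
factory.setConnectionTimeout(10000); // milliseconds to wait when establishing the connection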