RabbitMQ: test whether a consumer is alive - Java

I have a Java application which creates consumers that listen to RabbitMQ. I need to know whether a started consumer is still working fine and, if not, restart it.
Is there any way I can do that? Currently my main application creates an executor thread pool and passes this executor while creating a new connection:
    ExecutorService executor = Executors.newFixedThreadPool(30);
    Connection connection = factory.newConnection(executor);
The main method then creates 30 ConsumerApp objects, each constructed with a new channel, and calls their listen() method:
    for (int i = 0; i < 30; i++) {
        ConsumerApp consumer = new ConsumerApp(i, connection.createChannel());
        consumer.listen();
    }
The listen() method in ConsumerApp subscribes to a queue and starts a DefaultConsumer that simply prints each received message:
    void listen() {
        try {
            channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        } catch (IOException e) {
            System.out.println("Exception on creating queue");
        }

        Consumer consumer = new DefaultConsumer(this.channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                String message = new String(body, "UTF-8");
                System.out.println(" [x] Received message in consumer '" + consumerId + " " + message + "'");
            }
        };

        // Now starting the consumer
        try {
            channel.basicConsume(QUEUE_NAME, true, consumer);
        } catch (ShutdownSignalException | IOException ex) {
            ex.printStackTrace();
        }
    }
I want to know whether there is any way I can check that a consumer is active. My idea is to catch the ShutdownSignalException, recreate the consumer object, and call listen() again. Is this necessary, given that RabbitMQ auto-recovers and reconnects? But how can I verify that? Is this achievable in any way using the thread pool passed to the RabbitMQ connection?
I am using the latest version of the RabbitMQ client, 5.3.0.

The Consumer interface has several callbacks that help you track the state of your consumer; you're most likely interested in handleConsumeOk and handleCancel.
Automatic connection recovery will indeed re-register consumers after a connection failure, but that doesn't prevent you from tracking their state manually, e.g. to expose the information over JMX.
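As a minimal sketch (the class name and the flag are assumptions, not part of the original code), those callbacks can flip a flag that the rest of the application can poll:

    import com.rabbitmq.client.*;

    import java.io.IOException;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch: a DefaultConsumer that tracks whether its subscription is active.
    public class TrackedConsumer extends DefaultConsumer {

        private final AtomicBoolean active = new AtomicBoolean(false);

        public TrackedConsumer(Channel channel) {
            super(channel);
        }

        @Override
        public void handleConsumeOk(String consumerTag) {
            super.handleConsumeOk(consumerTag);
            active.set(true);   // broker confirmed basicConsume
        }

        @Override
        public void handleCancel(String consumerTag) throws IOException {
            active.set(false);  // consumer cancelled unexpectedly (e.g. queue deleted)
        }

        @Override
        public void handleShutdownSignal(String consumerTag, ShutdownSignalException sig) {
            active.set(false);  // channel or connection closed
        }

        @Override
        public void handleDelivery(String consumerTag, Envelope envelope,
                                   AMQP.BasicProperties properties, byte[] body) throws IOException {
            System.out.println(" [x] Received '" + new String(body, "UTF-8") + "'");
        }

        public boolean isActive() {
            return active.get();
        }
    }

A watchdog thread could then periodically poll isActive() and call listen() again when it returns false, which matches the restart idea from the question.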

Related

Spark process doesn't finish when publishing to RabbitMQ

At the end of my Spark app (containerized Spark v2.4.7), I publish a message to RabbitMQ. The app runs successfully and the message is even published to my containerized RabbitMQ. The problem is that the process doesn't really finish: I need to Ctrl-C from the terminal to abort the process.
Log lines that I write to the console after publishing the message do appear, which means the process isn't stuck in the RabbitMQ client.
I tried to close the RabbitMQ channel and connection, but it didn't help.
Also, I tried to close the SparkSession at the end, but it didn't help either.
I wrote a test with JUnit where I create the same queue with the same configuration and write the same message with the same client configuration, and it finishes successfully, without getting stuck.
My RabbitMQ publishing implementation:
    private Connection getConnection() throws KeyManagementException, NoSuchAlgorithmException,
            URISyntaxException, IOException, TimeoutException {
        ConnectionFactory factory = new ConnectionFactory();
        String uri = getURI();
        factory.setUri(new URI(uri));
        factory.setConnectionTimeout(10000);
        return factory.newConnection();
    }

    @Override
    public void publish(String msg) {
        try {
            Connection connection = getConnection();
            Channel channel = connection.createChannel();
            channel.queueDeclare("rabbitQueue", true, false, false, null);
            channel.basicPublish("exchange", "rabbitKey", null, msg.getBytes());
        } catch (Exception ex) {
            logger.error("Error, failed to create connection to RabbitMQ", ex);
        }
    }
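A variant that closes both resources via try-with-resources would look like the sketch below (Connection and Channel are both AutoCloseable in the 5.x client; getConnection() and logger are from the class above). As noted, closing reportedly did not fix the hang here:

    @Override
    public void publish(String msg) {
        // Sketch: try-with-resources closes the channel, then the connection,
        // even if publishing throws.
        try (Connection connection = getConnection();
             Channel channel = connection.createChannel()) {
            channel.queueDeclare("rabbitQueue", true, false, false, null);
            channel.basicPublish("exchange", "rabbitKey", null, msg.getBytes());
        } catch (Exception ex) {
            logger.error("Error, failed to publish to RabbitMQ", ex);
        }
    }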
What could be the reason that my process doesn't finish?
Thanks!

How to get a RabbitMQ message when a task completes?

I'm using RabbitMQ (and Celery) with Java. Here is my code to get a message from RabbitMQ, based on a tutorial I am reading:
    QueueingConsumer consumer = new QueueingConsumer(channel);
    channel.basicConsume(QUEUE_NAME, true, consumer);
    while (true) {
        QueueingConsumer.Delivery delivery = consumer.nextDelivery();
        String message = new String(delivery.getBody());
        System.out.println(" [x] Received '" + message + "'");
    }
But I only get a message when the task begins, whereas I would like to get a message when the task is complete. Any help?
You should not be using QueueingConsumer, since it's considered deprecated, as explained here: https://www.rabbitmq.com/releases/rabbitmq-java-client/current-javadoc/com/rabbitmq/client/QueueingConsumer.html
Instead, you should create your own consumer that implements the Consumer interface from the RabbitMQ library. There is a method you will have to implement, handleDelivery, which is called every time you get a message. Then, to start the consumer, call channel.basicConsume(QUEUE_NAME, true, consumer).
Example:
    channel.basicConsume(queueName, autoAck, "myConsumerTag", new DefaultConsumer(channel) {
        @Override
        public void handleDelivery(String consumerTag,
                                   Envelope envelope,
                                   AMQP.BasicProperties properties,
                                   byte[] body) throws IOException {
            // your code here, e.g. print the message body
            String message = new String(body, "UTF-8");
            System.out.println(" [x] Received '" + message + "'");
        }
    });
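For context, here is a self-contained sketch wiring this up end to end (the host and queue name are assumptions):

    import com.rabbitmq.client.*;

    import java.io.IOException;

    public class TaskResultConsumer {
        private static final String QUEUE_NAME = "task-results"; // assumed name

        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed host
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            channel.queueDeclare(QUEUE_NAME, false, false, false, null);

            channel.basicConsume(QUEUE_NAME, true, "myConsumerTag", new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                                           AMQP.BasicProperties properties, byte[] body) throws IOException {
                    System.out.println(" [x] Received '" + new String(body, "UTF-8") + "'");
                }
            });
            // basicConsume returns immediately; deliveries arrive on the connection's thread pool.
        }
    }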

How can I use a separate task executor for blocking sub tasks in netty?

Situation: Given the telnet client & server example of the official repo (https://github.com/netty/netty/tree/4.0/example/src/main/java/io/netty/example/telnet), I've changed this a little using a fake blocking task: https://github.com/knalli/netty-with-bio-task/tree/master/src/main/java/de/knallisworld/poc
This is Netty 4!
Instead of echoing the message back (like the telnet server demo does), the channel handler blocks the thread for some time (in the real world this would be something like JDBC or JSch):
    try { Thread.sleep(3000); } catch (InterruptedException e) {}
    future = ctx.write("Task finished at " + new Date());
    future.addListener(ChannelFutureListener.CLOSE);
This actually works: I'm testing it with echo "Hello" | nc localhost $port, and the thread is blocked (and nc waits) until the reply arrives 3 seconds later.
However, this means I'm blocking a thread of Netty's event loop worker group with an unrelated task.
Therefore, I've changed the channel registration and applied a custom executor:
    public class TelnetServerInitializer extends ChannelInitializer<SocketChannel> {

        private static final StringDecoder DECODER = new StringDecoder();
        private static final StringEncoder ENCODER = new StringEncoder();

        private TelnetServerHandler serverHandler;
        private EventExecutorGroup executorGroup;

        public TelnetServerInitializer() {
            executorGroup = new DefaultEventExecutorGroup(10);
            serverHandler = new TelnetServerHandler();
        }

        @Override
        protected void initChannel(final SocketChannel ch) throws Exception {
            ChannelPipeline pipeline = ch.pipeline();
            pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
            pipeline.addLast("decoder", DECODER);
            pipeline.addLast("encoder", ENCODER);
            // THIS!
            pipeline.addLast(executorGroup, "handler", serverHandler);
        }
    }
Unfortunately, with this configuration the socket is closed immediately after the handler's channelRead0() exits. I can see that the task itself is processed, including calls to the handler's event methods, but the channel is already disconnected from the client (my nc command has already exited).
How does integrating another executor work? Am I missing a detail?
Your Netty server is working as expected; it's the echo | nc command you are testing with that exits early.
Try telnet localhost 3000 for an interactive session with your test server, enter some text, and you'll see that the correct response is written after a delay, then the channel is closed.
Or just use nc -v -w10 localhost 3000, write some text, hit Enter, and again you'll see the expected output after a delay before the channel is closed.
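For completeness, a sketch of the handler side along the lines of the question's snippet (the class name is an assumption; the original telnet example flushes in channelReadComplete(), so this sketch uses writeAndFlush() to make the flush explicit):

    import io.netty.channel.*;

    import java.util.Date;

    // Annotated @Sharable because one instance is added to every pipeline.
    @ChannelHandler.Sharable
    public class BlockingTelnetHandler extends SimpleChannelInboundHandler<String> {

        @Override
        protected void channelRead0(ChannelHandlerContext ctx, String msg) {
            // Simulated blocking work (JDBC, JSch, ...). Because this handler
            // was registered with a DefaultEventExecutorGroup, it blocks a
            // task thread, not the event loop.
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            ChannelFuture future = ctx.writeAndFlush("Task finished at " + new Date() + "\r\n");
            future.addListener(ChannelFutureListener.CLOSE);
        }
    }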

Server-sent events with Jersey: EventOutput is not closed after client drops

I am using Jersey to implement an SSE scenario.
The server keeps connections alive and pushes data to clients periodically.
In my scenario there is a connection limit: only a certain number of clients can subscribe to the server at the same time.
So when a new client tries to subscribe, I check EventOutput.isClosed() to see whether any old connections are no longer active, so they can make room for new connections.
But the result of EventOutput.isClosed() is always false unless the client explicitly calls close on its EventSource. This means that if a client drops accidentally (power outage or internet cutoff), it is still hogging a connection slot, and new clients cannot subscribe.
Is there a workaround for this?
Is there a work around for this?
@CuiPengFei,
In my travels trying to find an answer to this myself, I stumbled upon a repository that shows how to gracefully clean up the connections of disconnected clients.
They encapsulate all of the SSE EventOutput logic in a service/manager. In it they spin up a thread that checks whether each EventOutput has been closed by the client; if so, they formally close the connection (EventOutput#close()). If not, they try to write to the stream: if that throws an exception, the client has disconnected without closing, and the manager closes it; if the write succeeds, the EventOutput is kept in the pool as a still-active connection.
The repo (and the actual class) are available here. I've also included the class, without imports, below in case the repo is ever removed.
Note that they bind this as a singleton; the store should be globally unique.
    public class SseWriteManager {

        private final ConcurrentHashMap<String, EventOutput> connectionMap = new ConcurrentHashMap<>();
        private final ScheduledExecutorService messageExecutorService;
        private final Logger logger = LoggerFactory.getLogger(SseWriteManager.class);

        public SseWriteManager() {
            messageExecutorService = Executors.newScheduledThreadPool(1);
            messageExecutorService.scheduleWithFixedDelay(new MessageProcessor(), 0, 5, TimeUnit.SECONDS);
        }

        public void addSseConnection(String id, EventOutput eventOutput) {
            logger.info("adding connection for id={}.", id);
            connectionMap.put(id, eventOutput);
        }

        private class MessageProcessor implements Runnable {
            @Override
            public void run() {
                try {
                    Iterator<Map.Entry<String, EventOutput>> iterator = connectionMap.entrySet().iterator();
                    while (iterator.hasNext()) {
                        boolean remove = false;
                        Map.Entry<String, EventOutput> entry = iterator.next();
                        EventOutput eventOutput = entry.getValue();
                        if (eventOutput != null) {
                            if (eventOutput.isClosed()) {
                                remove = true;
                            } else {
                                try {
                                    logger.info("writing to id={}.", entry.getKey());
                                    eventOutput.write(new OutboundEvent.Builder()
                                            .name("custom-message")
                                            .data(String.class, "EOM")
                                            .build());
                                } catch (Exception ex) {
                                    logger.info(String.format("write failed to id=%s.", entry.getKey()), ex);
                                    remove = true;
                                }
                            }
                        }
                        if (remove) {
                            // We are removing the eventOutput; close it if it is not already closed.
                            if (!eventOutput.isClosed()) {
                                try {
                                    eventOutput.close();
                                } catch (Exception ex) {
                                    // do nothing.
                                }
                            }
                            iterator.remove();
                        }
                    }
                } catch (Exception ex) {
                    logger.error("MessageProcessor.run threw exception.", ex);
                }
            }
        }

        public void shutdown() {
            if (messageExecutorService != null && !messageExecutorService.isShutdown()) {
                logger.info("SseWriteManager.shutdown: calling messageExecutorService.shutdown.");
                messageExecutorService.shutdown();
            } else {
                logger.info("SseWriteManager.shutdown: messageExecutorService == null || messageExecutorService.isShutdown().");
            }
        }
    }
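For illustration, a hypothetical Jersey resource that registers each new subscriber with the manager (the resource path, injection style, and id parameter are assumptions, not from the repo):

    import org.glassfish.jersey.media.sse.EventOutput;
    import org.glassfish.jersey.media.sse.SseFeature;

    import javax.inject.Inject;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;

    @Path("/events")
    public class SseResource {

        @Inject
        private SseWriteManager sseWriteManager; // bound as a singleton

        // Each GET opens a long-lived SSE stream; the manager periodically
        // writes to it and evicts it once a write fails or isClosed() is true.
        @GET
        @Path("/{id}")
        @Produces(SseFeature.SERVER_SENT_EVENTS)
        public EventOutput subscribe(@PathParam("id") String id) {
            EventOutput eventOutput = new EventOutput();
            sseWriteManager.addSseConnection(id, eventOutput);
            return eventOutput;
        }
    }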
Wanted to provide an update on this:
What was happening is that the EventSource on the client side (JS) never got into readyState '1' unless we did a broadcast as soon as a new subscription was added. Even in that state the client could receive data pushed from the server. Adding a call to broadcast a simple "OK" message helped kick the EventSource into readyState 1.
On closing the connection from the client side: to be proactive in cleaning up resources, just closing the EventSource on the client side doesn't help; we must make another AJAX call to the server to force a broadcast. When the broadcast is forced, Jersey will clean up the connections that are no longer alive and will in turn release their resources (connections in CLOSE_WAIT). Otherwise a connection will linger in CLOSE_WAIT until the next broadcast happens.

Open MQ - help with asynchronous consumption

I'm testing Open MQ for sending and receiving messages in my project. I have no problem configuring it to send a synchronous message, but I can't find any way in the official documentation to configure a message to be consumed 15 minutes after the producer sends it, and to keep calling the consumer if an error occurs.
Official documentation: http://dlc.sun.com/pdf/819-7757/819-7757.pdf
My method that sends a message:
    public void sendMessage(EntradaPrecomven entrada) {
        try {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");
            env.put(Context.PROVIDER_URL, "file:///C:/mqteste");

            // Create the initial context.
            Context ctx = new InitialContext(env);

            // Look up the connection factory object in the JNDI object store.
            autenticisFactory = (ConnectionFactory) ctx.lookup(CF_LOOKUP_NAME);
            mdbConn = autenticisFactory.createConnection();
            mdbSession = mdbConn.createSession(false, Session.AUTO_ACKNOWLEDGE);

            Destination destination = (Destination) ctx.lookup(DEST_LOOKUP_NAME);
            MessageProducer myProducer = mdbSession.createProducer(destination);
            ObjectMessage outMsg = mdbSession.createObjectMessage(entrada);
            outMsg.setJMSRedelivered(Boolean.TRUE);
            myProducer.send(outMsg);

            consumidor = mdbSession.createConsumer(destination);
            MessageMDB myListener = new MessageMDB();
            consumidor.setMessageListener(myListener);

            mdbConn.start();
            mdbConn.close();
        } catch (Exception e) {
            try {
                mdbSession.rollback();
            } catch (JMSException e1) {}
        }
    }
My listener:
    @Override
    public void onMessage(Message msg) {
        ObjectMessage objMessage = (ObjectMessage) msg;
        try {
            System.out.println("Received Phone Call:" + objMessage.getJMSRedelivered());
            throw new JMSException("TESTE");
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
So, when I call mdbConn.start(), the listener is invoked immediately, but I want it to be invoked 15 minutes after the call. And whatever sendMessage() does, the message is always removed from the queue. How can I keep the message in the queue so it can be consumed later?
Thanks!
The message is removed from the broker's queue because the session you are using is set to auto-acknowledge:
    mdbSession = mdbConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
This automatically sends an acknowledgement to the broker once the onMessage() method has executed to completion, telling it that the listener has received the message for the consumer it is associated with. That in turn removes the message from the queue.
If you take over the acknowledgement process manually, you can choose to acknowledge receipt of the message at a time of your choosing (be that 15 minutes later or whatever criteria your consuming client has).
Creating the session with Session.CLIENT_ACKNOWLEDGE allows you to do this, but then you have to acknowledge manually in your consumer code, by calling msg.acknowledge() inside the onMessage() method of your listener. This acknowledges receipt of the messages consumed within that session and removes them from the queue.
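As a sketch (reusing the MessageMDB listener from the question; the delay logic itself is up to you), the relevant changes would look like this:

    // Create the session in client-acknowledge mode instead of auto-acknowledge.
    mdbSession = mdbConn.createSession(false, Session.CLIENT_ACKNOWLEDGE);

    // ...

    public class MessageMDB implements MessageListener {
        @Override
        public void onMessage(Message msg) {
            try {
                ObjectMessage objMessage = (ObjectMessage) msg;
                System.out.println("Received Phone Call:" + objMessage.getJMSRedelivered());

                // Do the work; only acknowledge once it has succeeded.
                // Until acknowledge() is called, the broker keeps the message.
                msg.acknowledge();
            } catch (JMSException e) {
                // No acknowledge on failure: the message stays on the queue
                // and can be redelivered later.
                e.printStackTrace();
            }
        }
    }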
Pages 46 and 65 in the PDF you quoted are useful for more information on this, as is the API documentation.
