Consuming a queue based on its consumer count in Spring AMQP - java

I want a queue to be consumed by only one subscriber at a time, so that if one subscriber drops, another one gets the chance to subscribe.
I am looking for the correct way of doing this in Spring AMQP. I did it in pure Java, based on the example on RabbitMQ's website: I passively declare the queue, check its consumer count, and if it is 0, start consuming.
Here's the code.
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

int count = channel.queueDeclarePassive(QUEUE_NAME).getConsumerCount();
System.out.println("count is " + count);
if (count == 0) {
    channel.queueDeclare(QUEUE_NAME, false, false, false, null);
    System.out.println(" [*] Waiting for messages. To exit press CTRL+C");
    DeliverCallback deliverCallback = (consumerTag, delivery) -> {
        String message = new String(delivery.getBody(), StandardCharsets.UTF_8);
        System.out.println(" [x] Received '" + message + "'");
    };
    channel.basicConsume(QUEUE_NAME, true, deliverCallback, consumerTag -> { });
} else {
    System.out.println("subscribed by some other processor(s)");
}
I can also check the consumer count in Spring AMQP this way, but by then it is too late, because the listener is already consuming from the queue.
@RabbitListener(queues = "q1")
public void receivedMessageQ1(String message, Channel channel) {
    try {
        int q1 = channel.queueDeclarePassive("q1").getConsumerCount();
        // do something.
    } catch (IOException e) {
        System.out.println("exception occurred");
    }
}
In a nutshell, I want to consume a queue based on its consumer count. I hope I am clear.

Set the exclusive flag on the @RabbitListener; RabbitMQ will only allow one instance to consume. The other instance(s) will attempt to listen every 5 seconds (by default). To increase the interval, set the container factory's recoveryBackOff.
@SpringBootApplication
public class So56319999Application {

    public static void main(String[] args) {
        SpringApplication.run(So56319999Application.class, args);
    }

    @RabbitListener(queues = "so56319999", exclusive = true)
    public void listen(String in) {
    }

    @Bean
    public Queue queue() {
        return new Queue("so56319999");
    }
}
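To change the retry interval, a container factory bean along these lines should work (a minimal sketch, assuming Spring Boot auto-configures the ConnectionFactory; the 30-second interval is just an example):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // retry acquiring the exclusive consumer every 30 seconds
    // instead of the default 5 seconds
    factory.setRecoveryBackOff(new FixedBackOff(30000, FixedBackOff.UNLIMITED_ATTEMPTS));
    return factory;
}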

Related

Java program keeps running: Telegram bot scheduled by ScheduledExecutorService keeps sending messages even when terminated

I have a bot that first creates an array of messages and then sends those messages to a channel. The program uses ScheduledExecutorService, and there are two Runnables and two threads: the first thread populates the array, the second sends the requests if the array is not empty. After a request, the message is removed from the array. Each message contains both text and a gif:
public class BotTest {
    private static final ScheduledExecutorService scheduler =
            Executors.newScheduledThreadPool(2);

    public static void main(String[] args) throws Exception {
        final MyBot bot = new MyBot();
        ArrayList<Message> messages = new ArrayList<>();

        Runnable createMessages = new Runnable() {
            public void run() {
                //...creating messages
            }
        };
        scheduler.scheduleAtFixedRate(createMessages, 0, 1, TimeUnit.MINUTES);

        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                //if (messages.size()<=0) {
                while (messages.size() <= 0) {
                    System.out.println("Array is empty");
                }
                //else {
                try {
                    final Message message = messages.get(0);
                    messages.remove(0);
                    bot.sendMessage(message);
                } catch (Throwable e) {
                    e.printStackTrace();
                }
                //}
            }
        }, 0, 1, TimeUnit.SECONDS);
    }
}
sendMessage:
public int sendMessage(Message message) throws Exception {
    boolean result = false;
    String linkText = rootLink + token + "/sendMessage?chat_id=@MyChat&text=" + message.getText();
    String linkGif = rootLink + token + "/sendAnimation?chat_id=@MyChat&animation=" + URLEncoder.encode(message.getUrl(), "UTF-8");
    int responseCode1 = new TelegramRequest(linkText).send().getCode();
    int responseCode2 = new TelegramRequest(linkGif).send().getCode();
    //...logging in text file
    return responseCode1 + responseCode2;
}
My computer (Windows 7, IntelliJ IDE) keeps sending messages even though I terminated the process hours ago. If I turn off the Internet connection on my computer, the messages stop being sent, until I turn it on again. What is going on, and how do I stop it?
I also have logging: for each message, an entry is made in a text file. However, no entries are being made for these continuous requests. This makes me think there is some caching going on, and those requests were somehow put into some queue and are only now being sent...

Is it valid to pass netty channels to a queue and use them for writes on a different thread later on?

I have the following setup. There is a message distributor that spreads inbound client messages across a configured number of message queues (LinkedBlockingQueues in my case), based on a unique identifier called appId (one per connected client):
public class MessageDistributor {

    private final List<BlockingQueue<MessageWrapper>> messageQueueBuckets;

    public MessageDistributor(List<BlockingQueue<MessageWrapper>> messageQueueBuckets) {
        this.messageQueueBuckets = messageQueueBuckets;
    }

    public void handle(String appId, MessageWrapper message) {
        // map the appId hash onto one of the buckets
        int index = hash(appId) % messageQueueBuckets.size();
        try {
            messageQueueBuckets.get(index).offer(message);
        } catch (Exception e) {
            // handle exception
        }
    }
}
As I also need to answer the message later on, I wrap the message object and the netty channel inside a MessageWrapper:
public class MessageWrapper {

    private final Channel channel;
    private final Message message;

    public MessageWrapper(Channel channel, Message message) {
        this.channel = channel;
        this.message = message;
    }

    public Channel getChannel() {
        return channel;
    }

    public Message getMessage() {
        return message;
    }
}
Furthermore, there is a message consumer, which implements Runnable and takes new messages from the assigned blocking queue. This consumer performs expensive/blocking operations that I want to keep off the main netty event loop, and which should also not block operations for other connected clients too much; hence the use of several queues:
public class MessageConsumer implements Runnable {

    private final BlockingQueue<MessageWrapper> messageQueue;

    public MessageConsumer(BlockingQueue<MessageWrapper> messageQueue) {
        this.messageQueue = messageQueue;
    }

    @Override
    public void run() {
        while (true) {
            try {
                MessageWrapper msgWrap = messageQueue.take();
                Channel channel = msgWrap.getChannel();
                Message msg = msgWrap.getMessage();
                doSthExpensiveOrBlocking(channel, msg);
            } catch (Exception e) {
                // handle exception
            }
        }
    }

    public void doSthExpensiveOrBlocking(Channel channel, Message msg) {
        // some expensive/blocking operations
        channel.writeAndFlush(someResultObj);
    }
}
The setup of all classes looks like the following (the messageExecutor is a DefaultEventExecutorGroup with a size of 8):
int nrOfWorkers = config.getNumberOfClientMessageQueues();

List<BlockingQueue<MessageWrapper>> messageQueueBuckets = new ArrayList<>(nrOfWorkers);
for (int i = 0; i < nrOfWorkers; i++) {
    messageQueueBuckets.add(new LinkedBlockingQueue<>());
}

MessageDistributor distributor = new MessageDistributor(messageQueueBuckets);

List<MessageConsumer> consumers = new ArrayList<>(nrOfWorkers);
for (BlockingQueue<MessageWrapper> messageQueueBucket : messageQueueBuckets) {
    MessageConsumer consumer = new MessageConsumer(messageQueueBucket);
    consumers.add(consumer);
    messageExecutor.submit(consumer);
}
My goal with this approach is to isolate connected clients from each other (not fully, but at least a bit) and also to execute expensive operations on different threads.
Now my question is: Is it valid to wrap the netty channel object inside this MessageWrapper for later use and access its write method in some other thread?
UPDATE
Instead of building additional message distribution mechanics on top of netty, I decided to simply go with a separate EventExecutorGroup for my blocking channel handlers and see how it works.
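For reference, wiring blocking handlers onto a separate executor group might look like the following sketch (MessageDecoder, MessageEncoder, and BlockingMessageHandler are hypothetical stand-ins for the actual handlers):

EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(8);

ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
        // codec handlers keep running on the channel's event loop
        ch.pipeline().addLast(new MessageDecoder(), new MessageEncoder());
        // this handler's callbacks run on blockingGroup's threads,
        // so blocking work no longer stalls the event loop
        ch.pipeline().addLast(blockingGroup, "blockingHandler", new BlockingMessageHandler());
    }
});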
Yes, it is valid to call Channel.* methods from other threads. That said, the methods perform best when they are called from the EventLoop thread that belongs to the Channel.
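In practice, a small helper along these lines keeps cross-thread writes correct while still favoring the event loop when already on it (a sketch; someResultObj stands in for whatever the blocking work produces):

void writeResult(Channel channel, Object someResultObj) {
    if (channel.eventLoop().inEventLoop()) {
        // already on this channel's event loop: write directly
        channel.writeAndFlush(someResultObj);
    } else {
        // hand the write over to the channel's own event loop
        channel.eventLoop().execute(() -> channel.writeAndFlush(someResultObj));
    }
}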

Not able to nack messages in RabbitMQ using Spring Boot

I have the following Consumer class that listens for incoming messages on a queue and then acks or nacks them. The ack part is working fine, but the nack is not: all the messages get acked for some reason.
application.properties
spring.rabbitmq.host=192.168.99.100
spring.rabbitmq.port=5677
spring.rabbitmq.username=abc
spring.rabbitmq.password=def
spring.rabbitmq.listener.acknowledge-mode=manual
Producer.java
@Component
public class Producer implements CommandLineRunner {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Autowired
    private Queue queue;

    @Override
    public void run(String... args) throws Exception {
        for (int i = 0; i < 100; i++) {
            this.rabbitTemplate.convertAndSend(this.queue.getName(), "Hello World !");
        }
    }
}
Consumer.java
@Component
public class Consumer {

    private final CountDownLatch latch = new CountDownLatch(1);
    int ctr = 0;

    @RabbitListener(queues = "producer-consumer-nack2.q")
    public void receiveQueue(String text, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag)
            throws IOException, InterruptedException {
        ctr++;
        // nack every 10th, 20th, 30th and so on message
        if (ctr % 10 == 0) {
            System.out.println("Nack Message #" + ctr + ": " + text);
            channel.basicNack(tag, false, true);
        } else {
            System.out.println("Ack Message #" + ctr + ": " + text);
            channel.basicAck(tag, true);
        }
        latch.countDown();
    }

    public CountDownLatch getLatch() {
        return latch;
    }
}
Since all the messages got consumed, the queue is empty.
I believe the way nack works is that it rejects the message and then requeues it for redelivery at the same point in the queue (see the RabbitMQ documentation here).
Therefore, since you are checking at the end of processing, the message will have been rejected and then reprocessed later on.
I'd suggest debugging the code with a breakpoint (or a print) in your nack condition to see if it hits that code block. Then, if you pause right after the nack but before the next message is processed and check your queue, I think you'll see a nacked message.
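One way to make the redeliveries visible is to log the redelivered flag, which RabbitMQ sets on messages that were previously requeued (a sketch of the listener with the extra header; only the logging is new):

@RabbitListener(queues = "producer-consumer-nack2.q")
public void receiveQueue(String text, Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) long tag,
        @Header(AmqpHeaders.REDELIVERED) boolean redelivered) throws IOException {
    // redelivered is true for a delivery that was previously
    // requeued, e.g. by basicNack(tag, false, true)
    System.out.println("redelivered=" + redelivered + ": " + text);
    // ... same ack/nack logic as above
}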

How to get a RabbitMQ message when a task completes?

I'm using RabbitMQ (and Celery) with Java. Here is my code to get a message from RabbitMQ, based on a tutorial I am reading:
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume(QUEUE_NAME, true, consumer);
while (true) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    String message = new String(delivery.getBody());
    System.out.println(" [x] Received '" + message + "'");
}
But I only get a message when the task begins, whereas I would like to get a message when the task is complete. Any help?
You should not be using QueueingConsumer since it's considered deprecated as explained here: https://www.rabbitmq.com/releases/rabbitmq-java-client/current-javadoc/com/rabbitmq/client/QueueingConsumer.html
Instead, you should create your own consumer that implements the Consumer interface from the RabbitMQ client library. There is a method you will have to implement, handleDelivery, which will be called every time you get a message. Then, to start consuming, call channel.basicConsume(QUEUE_NAME, true, consumer).
Example:
channel.basicConsume(queueName, autoAck, "myConsumerTag", new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag,
                               Envelope envelope,
                               AMQP.BasicProperties properties,
                               byte[] body) throws IOException {
        // your code here
    }
});

Server-sent events with Jersey: EventOutput is not closed after client drops

I am using Jersey to implement an SSE scenario.
The server keeps connections alive and pushes data to clients periodically.
In my scenario, there is a connection limit: only a certain number of clients can subscribe to the server at the same time.
So when a new client tries to subscribe, I do a check (EventOutput.isClosed) to see if any old connections are no longer active, so they can make room for new connections.
But the result of EventOutput.isClosed is always false unless the client explicitly calls close on its EventSource. This means that if a client drops accidentally (power outage or internet cutoff), it is still hogging the connection, and new clients cannot subscribe.
Is there a work around for this?
@CuiPengFei,
So in my travels trying to find an answer to this myself, I stumbled upon a repository that explains how to gracefully clean up connections from disconnected clients.
They encapsulate all of the SSE EventOutput logic in a service/manager. In it, they spin up a thread that checks whether the EventOutput has been closed by the client. If so, they formally close the connection (EventOutput#close()). If not, they try to write to the stream; if that throws an exception, the client has disconnected without closing, and the manager closes it. If the write succeeds, the EventOutput is returned to the pool as a still-active connection.
The repo (and the actual class) are available here. I've also included the class, without imports, below in case the repo is ever removed.
Note that they bind this to a Singleton. The store should be globally unique.
public class SseWriteManager {

    private final ConcurrentHashMap<String, EventOutput> connectionMap = new ConcurrentHashMap<>();
    private final ScheduledExecutorService messageExecutorService;
    private final Logger logger = LoggerFactory.getLogger(SseWriteManager.class);

    public SseWriteManager() {
        messageExecutorService = Executors.newScheduledThreadPool(1);
        messageExecutorService.scheduleWithFixedDelay(new messageProcessor(), 0, 5, TimeUnit.SECONDS);
    }

    public void addSseConnection(String id, EventOutput eventOutput) {
        logger.info("adding connection for id={}.", id);
        connectionMap.put(id, eventOutput);
    }

    private class messageProcessor implements Runnable {
        @Override
        public void run() {
            try {
                Iterator<Map.Entry<String, EventOutput>> iterator = connectionMap.entrySet().iterator();
                while (iterator.hasNext()) {
                    boolean remove = false;
                    Map.Entry<String, EventOutput> entry = iterator.next();
                    EventOutput eventOutput = entry.getValue();
                    if (eventOutput != null) {
                        if (eventOutput.isClosed()) {
                            remove = true;
                        } else {
                            try {
                                logger.info("writing to id={}.", entry.getKey());
                                eventOutput.write(new OutboundEvent.Builder().name("custom-message").data(String.class, "EOM").build());
                            } catch (Exception ex) {
                                logger.info(String.format("write failed to id=%s.", entry.getKey()), ex);
                                remove = true;
                            }
                        }
                    }
                    if (remove) {
                        // we are removing the eventOutput; close it if it is not already closed.
                        if (!eventOutput.isClosed()) {
                            try {
                                eventOutput.close();
                            } catch (Exception ex) {
                                // do nothing.
                            }
                        }
                        iterator.remove();
                    }
                }
            } catch (Exception ex) {
                logger.error("messageProcessor.run threw exception.", ex);
            }
        }
    }

    public void shutdown() {
        if (messageExecutorService != null && !messageExecutorService.isShutdown()) {
            logger.info("SseWriteManager.shutdown: calling messageExecutorService.shutdown.");
            messageExecutorService.shutdown();
        } else {
            logger.info("SseWriteManager.shutdown: messageExecutorService == null || messageExecutorService.isShutdown().");
        }
    }
}
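For context, a resource that hands new connections to the manager might look like this (a hypothetical sketch; SseResource and the id query parameter are illustrative, and SseWriteManager is assumed to be injected as the singleton mentioned above):

@Path("/events")
public class SseResource {

    @Inject
    private SseWriteManager sseWriteManager; // the singleton-bound manager

    @GET
    @Produces(SseFeature.SERVER_SENT_EVENTS)
    public EventOutput subscribe(@QueryParam("id") String id) {
        EventOutput eventOutput = new EventOutput();
        // register the connection so the manager's scheduled task can
        // write to it and evict it once it is closed or a write fails
        sseWriteManager.addSseConnection(id, eventOutput);
        return eventOutput;
    }
}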
Wanted to provide an update on this:
What was happening is that the eventSource on the client side (js) never got into readyState '1' unless we did a broadcast as soon as a new subscription was added. Even in this state the client could receive data pushed from the server. Adding a call to broadcast a simple "OK" message helped kick the eventSource into readyState 1.
Regarding closing the connection from the client side: to be proactive in cleaning up resources, just closing the eventSource on the client side doesn't help. We must make another ajax call to the server to force the server to do a broadcast. When the broadcast is forced, Jersey will clean up the connections that are no longer alive and will in turn release resources (connections in CLOSE_WAIT). Otherwise, a connection will linger in CLOSE_WAIT until the next broadcast happens.
