Not able to nack messages in RabbitMQ using Spring Boot - Java

I have the following Consumer class that listens to incoming messages on a queue and then either acks or nacks them. The ack part is working fine, but the nack is not: all the messages get acked for some reason.
application.properties
spring.rabbitmq.host=192.168.99.100
spring.rabbitmq.port=5677
spring.rabbitmq.username=abc
spring.rabbitmq.password=def
spring.rabbitmq.listener.acknowledge-mode=manual
Producer.java
@Component
public class Producer implements CommandLineRunner {
@Autowired
private RabbitTemplate rabbitTemplate;
@Autowired
private Queue queue;
@Override
public void run(String... args) throws Exception {
for (int i = 0; i < 100; i++) {
this.rabbitTemplate.convertAndSend(this.queue.getName(), "Hello World !");
}
}
}
Consumer.java
@Component
public class Consumer {
private final CountDownLatch latch = new CountDownLatch(1);
int ctr = 0;
@RabbitListener(queues = "producer-consumer-nack2.q")
public void receiveQueue(String text, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws IOException, InterruptedException {
ctr++;
//nack every 10th, 20th, 30th and so on message
if (ctr % 10 == 0) {
System.out.println("Nack Message #" + ctr + ": " + text);
channel.basicNack(tag, false, true);
} else {
System.out.println("Ack Message #" + ctr + ": " + text);
channel.basicAck(tag, true);
}
latch.countDown();
}
public CountDownLatch getLatch() {
return latch;
}
}
Since all the messages got consumed, the queue is empty (screenshot of the empty queue omitted).

I believe the way nack works is that it rejects the message and then re-queues it for redelivery at the same position in the queue. (See the RabbitMQ documentation here.)
Therefore, by the time you look at the queue at the end of processing, the nacked message will already have been rejected and then reprocessed.
I'd suggest debugging the code with a breakpoint in your nack condition (or a print statement) to confirm that it hits that block. If you then pause right after the nack, but before the next message is processed, and check your queue, I think you'll see the nacked message.
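If you want the effect of the rejection to stay visible instead of the message being silently re-consumed, here is a minimal sketch (assuming the same queue and manual ack mode as above; this is not the poster's original code) that nacks with requeue=false and logs the redelivered flag:
@RabbitListener(queues = "producer-consumer-nack2.q")
public void receiveQueue(String text,
                         Channel channel,
                         @Header(AmqpHeaders.DELIVERY_TAG) long tag,
                         @Header(name = AmqpHeaders.REDELIVERED, required = false) Boolean redelivered) throws IOException {
    if (Boolean.TRUE.equals(redelivered)) {
        // A message that was previously nacked with requeue=true comes back with this flag set.
        System.out.println("Redelivered message: " + text);
    }
    // requeue=false drops the message (or dead-letters it if the queue has a DLX configured),
    // so the rejection is observable rather than the message being redelivered and acked later.
    channel.basicNack(tag, false, false);
}
With requeue=true, as in the question, checking the queue only after all messages have been processed will always show it empty.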

Related

Producer-Consumer multithreading FIFO in Java

Community, I'm trying to solve this producer/consumer problem with 10 threads, and I got stuck at one point in the implementation.
The problem looks like this:
[Problem scheme diagram]
The program should loop, inserting messages containing (id, timeout) in ascending order by id (1, 2, 3, 4...), and should simply print the id of each message that comes out, in the same order it entered, like a queue.
For example, in the diagram above, Message(1, 200), Message(2, 1000) and Message(3, 20) are the first three messages that the producer will produce.
Although the thread assigned Message(3, 20) would finish first (it has the lowest timeout, 20 ms), I want it to wait for the first message (200 ms timeout) to print, then for message 2 (1000 ms) to print, and only then print itself. So everything prints in increasing order (maybe using the id as the ordering number?).
So far I've implemented this:
public class Main {
private static BlockingQueue<Message> queue = new ArrayBlockingQueue<>(5);
public static void main(String[] args) throws InterruptedException {
Thread t1 = new Thread(() -> {
try {
producer();
} catch (InterruptedException exception) {
exception.printStackTrace();
}
});
Thread t2 = new Thread(() -> {
try {
consumer();
} catch (InterruptedException exception) {
exception.printStackTrace();
}
});
t1.start();
t2.start();
t1.join();
t2.join();
}
public static void producer() throws InterruptedException {
while (true) {
queue.put(new Message());
}
}
public static void consumer() throws InterruptedException {
ExecutorService executorService = Executors.newFixedThreadPool(10);
for (int i = 0; i < 1000; i++) {
executorService.submit(queue.take());
}
executorService.shutdown();
}
}
and I have my Message class here:
public class Message implements Runnable {
public static int totalIds = 0;
public int id;
public int timeout;
public Random random = new Random();
public Message() {
this.id = totalIds;
totalIds++;
this.timeout = random.nextInt(5000);
}
@Override
public String toString() {
return "Message{" +
"id=" + id +
", timeout=" + timeout +
'}';
}
@Override
public void run() {
System.out.println(Thread.currentThread().getName() + "[RECEIVED] Message = " + toString());
try {
Thread.sleep(timeout);
} catch (InterruptedException exception) {
exception.printStackTrace();
}
System.out.println(Thread.currentThread().getName() + "[DONE] Message = " + toString() + "\n");
}
}
So far it does everything OK except the part where the threads should wait for the one with the lower id, so to speak... Here is the first part of the output:
All tasks submitted
pool-1-thread-9[RECEIVED] Message = Message{id=13, timeout=1361}
pool-1-thread-10[RECEIVED] Message = Message{id=14, timeout=92}
pool-1-thread-3[RECEIVED] Message = Message{id=7, timeout=3155}
pool-1-thread-5[RECEIVED] Message = Message{id=9, timeout=562}
pool-1-thread-2[RECEIVED] Message = Message{id=6, timeout=4249}
pool-1-thread-1[RECEIVED] Message = Message{id=0, timeout=1909}
pool-1-thread-7[RECEIVED] Message = Message{id=11, timeout=2468}
pool-1-thread-4[RECEIVED] Message = Message{id=8, timeout=593}
pool-1-thread-8[RECEIVED] Message = Message{id=12, timeout=3701}
pool-1-thread-6[RECEIVED] Message = Message{id=10, timeout=806}
pool-1-thread-10[DONE] Message = Message{id=14, timeout=92}
pool-1-thread-10[RECEIVED] Message = Message{id=15, timeout=846}
pool-1-thread-5[DONE] Message = Message{id=9, timeout=562}
pool-1-thread-5[RECEIVED] Message = Message{id=16, timeout=81}
pool-1-thread-4[DONE] Message = Message{id=8, timeout=593}
pool-1-thread-4[RECEIVED] Message = Message{id=17, timeout=4481}
pool-1-thread-5[DONE] Message = Message{id=16, timeout=81}
pool-1-thread-5[RECEIVED] Message = Message{id=18, timeout=2434}
pool-1-thread-6[DONE] Message = Message{id=10, timeout=806}
pool-1-thread-6[RECEIVED] Message = Message{id=19, timeout=10}
pool-1-thread-6[DONE] Message = Message{id=19, timeout=10}
pool-1-thread-6[RECEIVED] Message = Message{id=20, timeout=3776}
pool-1-thread-10[DONE] Message = Message{id=15, timeout=846}
pool-1-thread-10[RECEIVED] Message = Message{id=21, timeout=2988}
pool-1-thread-9[DONE] Message = Message{id=13, timeout=1361}
pool-1-thread-9[RECEIVED] Message = Message{id=22, timeout=462}
pool-1-thread-9[DONE] Message = Message{id=22, timeout=462}
pool-1-thread-9[RECEIVED] Message = Message{id=23, timeout=3074}
pool-1-thread-1[DONE] Message = Message{id=0, timeout=1909}
pool-1-thread-1[RECEIVED] Message = Message{id=24, timeout=725}
pool-1-thread-7[DONE] Message = Message{id=11, timeout=2468}
One of my friends told me it should be done with semaphores (I've never worked with them), but I really don't know how to use semaphores to achieve what I want.
Appreciate any leads on solving this!
As far as I understand, you need two things:
Start all producer's worker threads together and let them run in parallel, but...
wait for the threads to finish in a FIFO order (according to their creation id).
So you can start the threads one by one to let them run in parallel, but also maintain a FIFO queue ordered by ascending id, and join each thread in the sequence it was added to that queue.
Here is some code demonstrating how you can do it:
import java.util.LinkedList;
import java.util.Objects;
import java.util.Random;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
public class Main {
private static class Message implements Runnable {
private final TimeUnit sleepUnit;
private final long sleepAmount;
private final int id;
public Message(final int id,
final TimeUnit sleepUnit,
final long sleepAmount) {
this.sleepUnit = Objects.requireNonNull(sleepUnit);
this.sleepAmount = sleepAmount;
this.id = id;
}
@Override
public void run() {
try {
System.out.println(toString() + " started and waiting...");
sleepUnit.sleep(sleepAmount);
}
catch (final InterruptedException ix) {
System.out.println(toString() + " interrupted: " + ix);
}
}
@Override
public String toString() {
return "Message{" + id + ", " + sleepUnit + "(" + sleepAmount + ")}";
}
}
private static class Producer {
private final int parallelism;
private final Consumer<? super Producer> consumer;
public Producer(final int parallelism,
final Consumer<? super Producer> consumer) {
this.parallelism = parallelism;
this.consumer = Objects.requireNonNull(consumer);
}
public void produceWithExecutor() {
System.out.println("Producing with Executor...");
final Random rand = new Random();
final ExecutorService service = Executors.newFixedThreadPool(parallelism);
final LinkedList<Future<Message>> q = new LinkedList<>();
for (int i = 0; i < parallelism; ++i) {
final Message msg = new Message(i, TimeUnit.MILLISECONDS, 500 + rand.nextInt(3000));
q.addLast(service.submit(msg, msg));
}
service.shutdown();
while (!q.isEmpty())
try {
System.out.println(q.removeFirst().get().toString() + " joined."); //Will wait for completion of each submitted task (in FIFO sequence).
}
catch (final InterruptedException ix) {
System.out.println("Interrupted: " + ix);
}
catch (final ExecutionException xx) {
System.out.println("Execution failed: " + xx);
}
consumer.accept(this);
}
public void produceWithPlainThreads() throws InterruptedException {
System.out.println("Producing with Threads...");
final Random rand = new Random();
final LinkedList<Thread> q = new LinkedList<>();
for (int i = 0; i < parallelism; ++i) {
final Message msg = new Message(i, TimeUnit.MILLISECONDS, 500 + rand.nextInt(3000));
final Thread t = new Thread(msg, msg.toString());
t.start();
q.add(t);
}
while (!q.isEmpty()) {
final Thread t = q.removeFirst();
t.join(); //Will wait for completion of each submitted task (in FIFO sequence).
System.out.println(t.getName() + " joined.");
}
consumer.accept(this);
}
}
public static void main(final String[] args) throws InterruptedException {
final Consumer<Producer> consumer = producer -> System.out.println("Consuming.");
final int parallelism = 10;
new Producer(parallelism, consumer).produceWithExecutor();
new Producer(parallelism, consumer).produceWithPlainThreads();
}
}
As you can see, there are two producing implementations here: one with an ExecutorService running all the submitted threads, and one with plain threads which are started (almost) at the same time.
This results in an output like so:
Producing with Executor...
Message{1, MILLISECONDS(692)} started and waiting...
Message{2, MILLISECONDS(1126)} started and waiting...
Message{0, MILLISECONDS(3403)} started and waiting...
Message{3, MILLISECONDS(1017)} started and waiting...
Message{4, MILLISECONDS(2861)} started and waiting...
Message{5, MILLISECONDS(2735)} started and waiting...
Message{6, MILLISECONDS(2068)} started and waiting...
Message{7, MILLISECONDS(947)} started and waiting...
Message{8, MILLISECONDS(1091)} started and waiting...
Message{9, MILLISECONDS(1599)} started and waiting...
Message{0, MILLISECONDS(3403)} joined.
Message{1, MILLISECONDS(692)} joined.
Message{2, MILLISECONDS(1126)} joined.
Message{3, MILLISECONDS(1017)} joined.
Message{4, MILLISECONDS(2861)} joined.
Message{5, MILLISECONDS(2735)} joined.
Message{6, MILLISECONDS(2068)} joined.
Message{7, MILLISECONDS(947)} joined.
Message{8, MILLISECONDS(1091)} joined.
Message{9, MILLISECONDS(1599)} joined.
Consuming.
Producing with Threads...
Message{0, MILLISECONDS(3182)} started and waiting...
Message{1, MILLISECONDS(2271)} started and waiting...
Message{2, MILLISECONDS(2861)} started and waiting...
Message{3, MILLISECONDS(2942)} started and waiting...
Message{4, MILLISECONDS(2714)} started and waiting...
Message{5, MILLISECONDS(1228)} started and waiting...
Message{6, MILLISECONDS(2000)} started and waiting...
Message{7, MILLISECONDS(2372)} started and waiting...
Message{8, MILLISECONDS(764)} started and waiting...
Message{9, MILLISECONDS(587)} started and waiting...
Message{0, MILLISECONDS(3182)} joined.
Message{1, MILLISECONDS(2271)} joined.
Message{2, MILLISECONDS(2861)} joined.
Message{3, MILLISECONDS(2942)} joined.
Message{4, MILLISECONDS(2714)} joined.
Message{5, MILLISECONDS(1228)} joined.
Message{6, MILLISECONDS(2000)} joined.
Message{7, MILLISECONDS(2372)} joined.
Message{8, MILLISECONDS(764)} joined.
Message{9, MILLISECONDS(587)} joined.
Consuming.
You can see in the output that in both cases the threads are started (almost) together via a loop, but are joined in FIFO order. In the first case the threads may start in a different order, which is a side effect of how threads get scheduled. In the second case, with the plain threads, all the threads happened to enter their run methods in the order in which they were created and started, because this all happens in such a short amount of time. But the joining of each thread will always be in ascending id order with this code. If you run it multiple times, thread 2 may well print inside its run method before thread 1 in both cases, but the order in which we wait for the threads to finish in the Producer's methods will always be ascending id order.
The threads will exit/finish their run methods in ascending sleep-time order, not in ascending id order. But the "joined" output will always be in ascending id order, because of the way we iterate over the queue and wait for each thread to join in turn.
So if you want to obtain the result of each thread in ascending id order, the corresponding code has to live in your Producer's produce methods (after you join each thread) and not at the end of each Message's run method (this avoids extra synchronization and inter-thread communication).
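For illustration, here is a minimal, self-contained sketch (class and variable names are mine, not from the code above) of collecting each task's result in ascending id order by reading the Futures in submission order:
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class OrderedResults {
    public static void main(String[] args) throws Exception {
        final ExecutorService service = Executors.newFixedThreadPool(10);
        final Random rand = new Random();
        final List<Future<String>> results = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            final int id = i;
            final long sleepMillis = 500 + rand.nextInt(3000);
            results.add(service.submit(() -> {
                Thread.sleep(sleepMillis); // simulated per-message work, like Message.run() above
                return "result of message " + id;
            }));
        }
        service.shutdown();
        for (Future<String> f : results) {
            System.out.println(f.get()); // blocks per task, so the results print in ascending id order
        }
    }
}
The tasks still run (and finish) in whatever order their sleep times dictate; only the collection of results is forced into id order.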

Consuming a queue based on its consumer count in Spring AMQP

I want a queue to be consumed by only one subscriber at a time. So if one subscriber drops, another one gets the chance to subscribe.
I am looking for the correct way of doing this in Spring AMQP. I did it in pure Java, based on the example on RabbitMQ's website: I passively declare the queue, check its consumer count, and if it is 0, start consuming it.
Here's the code.
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
int count = channel.queueDeclarePassive(QUEUE_NAME).getConsumerCount();
System.out.println("count is "+count);
if (count == 0) {
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
System.out.println(" [*] Waiting for messages. To exit press CTRL+C");
DeliverCallback deliverCallback = (consumerTag, delivery) -> {
String message = new String(delivery.getBody(), StandardCharsets.UTF_8);
System.out.println(" [x] Received '" + message + "'");
};
channel.basicConsume(QUEUE_NAME, true, deliverCallback, consumerTag -> { });
} else{
System.out.println("subscribed by some other processor(s)");
}
I can also check the consumer count in Spring AMQP this way, but by then it is too late, because the listener is already attached to the queue.
@RabbitListener(queues = "q1")
public void receivedMessageQ1(String message, Channel channel){
try {
int q1 = channel.queueDeclarePassive("q1").getConsumerCount();
// do something.
} catch (IOException e) {
System.out.println("exception occurred");
}
}
In a nutshell, I want to consume a queue based on its consumer count. I hope I am clear.
Set the exclusive flag on the @RabbitListener; RabbitMQ will only allow one instance to consume. The other instance(s) will attempt to listen every 5 seconds (by default). To increase the interval, set the container factory's recoveryBackOff.
@SpringBootApplication
public class So56319999Application {
public static void main(String[] args) {
SpringApplication.run(So56319999Application.class, args);
}
@RabbitListener(queues = "so56319999", exclusive = true)
public void listen (String in) {
}
@Bean
public Queue queue() {
return new Queue("so56319999");
}
}
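A minimal sketch (the bean wiring is assumed; it is not part of the answer above) of raising that interval by giving the container factory a FixedBackOff:
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // The passive instances retry obtaining the exclusive consumer every 30 seconds instead of every 5.
    factory.setRecoveryBackOff(new FixedBackOff(30_000L, FixedBackOff.UNLIMITED_ATTEMPTS));
    return factory;
}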

Java program keeps running: Telegram bot scheduled by ScheduledExecutorService keeps sending messages even when terminated

I have a bot that first creates an array of messages and then sends those messages to a channel. The program uses a ScheduledExecutorService and there are two Runnables and two threads: the first thread populates the array, the second sends the requests if the array is not empty. After a request, the message is removed from the array. Each message contains both text and a gif:
public class BotTest {
private static final ScheduledExecutorService scheduler =
Executors.newScheduledThreadPool(2);
public static void main(String[] args) throws Exception {
final MyBot bot = new MyBot();
ArrayList<Message> messages = new ArrayList<>();
Runnable createMessages =
new Runnable(){
public void run(){
//...creating messages
}
};
scheduler.scheduleAtFixedRate(createMessages, 0, 1, TimeUnit.MINUTES);
scheduler.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
//if (messages.size()<=0) {
while (messages.size()<=0) {
System.out.println("Array is empty");
}
//else {
try {
final Message message = messages.get(0);
messages.remove(0);
bot.sendMessage(message);
}
catch (Throwable e) {
e.printStackTrace();
}
//}
}
}, 0, 1, TimeUnit.SECONDS);
}
}
sendMessage:
public int sendMessage(Message message) throws Exception {
boolean result = false;
String linkText = rootLink + token + "/sendMessage?chat_id=@MyChat&text=" + message.getText();
String linkGif = rootLink + token + "/sendAnimation?chat_id=@MyChat&animation=" + URLEncoder.encode(message.getUrl(), "UTF-8");
int responseCode1 = new TelegramRequest(linkText).send().getCode();
int responseCode2 = new TelegramRequest(linkGif).send().getCode();
//...logging in text file
return responseCode1 + responseCode2;
}
My computer (Windows 7, IntelliJ IDE) keeps sending messages even though I terminated the process long ago (hours ago). If I turn off the Internet connection on my computer, the messages stop being sent - until I turn it back on. What is going on, and how do I stop it?
I also have logging going on: for each message an entry is made in a text file. However, no entries are being made for these continuous requests. It makes me think there is some caching going on and those requests were somehow put into some queue and only now are being sent...

Is it valid to pass netty channels to a queue and use it for writes on a different thread later on?

I have the following setup. There is a message distributor that spreads inbound client messages across a configured number of message queues (LinkedBlockingQueues in my case), based on a unique identifier called appId (per connected client):
public class MessageDistributor {
private final List<BlockingQueue<MessageWrapper>> messageQueueBuckets;
public MessageDistributor(List<BlockingQueue<MessageWrapper>> messageQueueBuckets) {
this.messageQueueBuckets = messageQueueBuckets;
}
public void handle(String appId, MessageWrapper message) {
int index = (messageQueueBuckets.size() - 1) % hash(appId);
try {
messageQueueBuckets.get(index).offer(message);
} catch (Exception e) {
// handle exception
}
}
}
As I also need to answer the message later on, I wrap the message object and the netty channel inside a MessageWrapper:
public class MessageWrapper {
private final Channel channel;
private final Message message;
public MessageWrapper(Channel channel, Message message) {
this.channel = channel;
this.message = message;
}
public Channel getChannel() {
return channel;
}
public Message getMessage() {
return message;
}
}
Furthermore, there is a message consumer, which implements a Runnable and takes new messages from the assigned blocking queue. This guy performs some expensive/blocking operations that I want to have outside the main netty event loop and which should also not block operations for other connected clients too much, hence the usage of several queues:
public class MessageConsumer implements Runnable {
private final BlockingQueue<MessageWrapper> messageQueue;
public MessageConsumer(BlockingQueue<MessageWrapper> messageQueue) {
this.messageQueue = messageQueue;
}
@Override
public void run() {
while (true) {
try {
MessageWrapper msgWrap = messageQueue.take();
Channel channel = msgWrap.getChannel();
Message msg = msgWrap.getMessage();
doSthExpensiveOrBlocking(channel, msg);
} catch (Exception e) {
// handle exception
}
}
}
public void doSthExpensiveOrBlocking(Channel channel, Message msg) {
// some expensive/blocking operations
channel.writeAndFlush(someResultObj);
}
}
The setup of all classes looks like the following (the messageExecutor is a DefaultEventExecutorGroup with a size of 8):
int nrOfWorkers = config.getNumberOfClientMessageQueues();
List<BlockingQueue<MessageWrapper>> messageQueueBuckets = new ArrayList<>(nrOfWorkers);
for (int i = 0; i < nrOfWorkers; i++) {
messageQueueBuckets.add(new LinkedBlockingQueue<>());
}
MessageDistributor distributor = new MessageDistributor(messageQueueBuckets);
List<MessageConsumer> consumers = new ArrayList<>(nrOfWorkers);
for (BlockingQueue<MessageWrapper> messageQueueBucket : messageQueueBuckets) {
MessageConsumer consumer = new MessageConsumer(messageQueueBucket);
consumers.add(consumer);
messageExecutor.submit(consumer);
}
My goal with this approach is to isolate connected clients from each other (not fully, but at least a bit) and also to execute expensive operations on different threads.
Now my question is: Is it valid to wrap the netty channel object inside this MessageWrapper for later use and access its write method in some other thread?
UPDATE
Instead of building additional message distribution mechanics on top of netty, I decided to simply go with a separate EventExecutorGroup for my blocking channel handlers and see how it works.
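For reference, a minimal sketch of that approach (the handler and group names here are hypothetical): handlers added to the pipeline together with an EventExecutorGroup run on that group's threads instead of on the I/O event loop.
EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(8);
ServerBootstrap bootstrap = new ServerBootstrap()
        .group(bossGroup, workerGroup)
        .channel(NioServerSocketChannel.class)
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
                ch.pipeline().addLast("decoder", new MessageDecoder()); // hypothetical codec
                // This handler may block; it runs on blockingGroup's threads, not on the worker event loop.
                ch.pipeline().addLast(blockingGroup, "blockingHandler", new BlockingMessageHandler());
            }
        });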
Yes, it is valid to call Channel.* methods from other threads. That said, these methods perform best when they are called from the EventLoop thread that belongs to the Channel.
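For example, the blocking consumer above could finish its work and then hand the write back like this (a sketch; computeExpensively stands in for the blocking work):
public void doSthExpensiveOrBlocking(Channel channel, Message msg) {
    Object someResultObj = computeExpensively(msg); // the blocking part stays on the consumer thread
    // Calling writeAndFlush directly from this thread is valid; Netty hands the write over to the
    // channel's event loop anyway. Scheduling it explicitly makes that hand-off obvious:
    channel.eventLoop().execute(() -> channel.writeAndFlush(someResultObj));
}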

Thread pool to process messages in parallel, but preserve order within conversations

I need to process messages in parallel, but preserve the processing order of messages with the same conversation ID.
Example:
Let's define a Message like this:
class Message {
Message(long id, long conversationId, String someData) {...}
}
Suppose the messages arrive in the following order:
Message(1, 1, "a1"), Message(2, 2, "a2"), Message(3, 1, "b1"), Message(4, 2, "b2").
I need the message 3 to be processed after the message 1, since messages 1 and 3 have the same conversation ID (similarly, the message 4 should be processed after 2 by the same reason).
I don't care about the relative order between e.g. 1 and 2, since they have different conversation IDs.
I would like to reuse the java ThreadPoolExecutor's functionality as much as possible to avoid having to replace dead threads manually in my code etc.
Update: The number of possible 'conversation-ids' is not limited, and there is no time limit on a conversation. (I personally don't see it as a problem, since I can have a simple mapping from a conversationId to a worker number, e.g. conversationId % totalWorkers).
Update 2: There is one problem with a solution with multiple queues, where the queue number is determined by e.g. 'index = Objects.hash(conversationId) % total': if it takes a long time to process some message, all messages with the same 'index' but different 'conversationId' will wait even though other threads are available to handle it. That is, I believe solutions with a single smart blocking queue would be better, but it's just an opinion, I am open to any good solution.
Do you see an elegant solution for this problem?
I had to do something very similar some time ago, so here is an adaptation.
(See it in action online)
It's actually the exact same base need, but in my case the key was a String, and more importantly the set of keys was not growing indefinitely, so here I had to add a "cleanup scheduler". Other than that it's basically the same code, so I hope I have not lost anything serious in the adaptation process. I tested it, looks like it works. It's longer than other solutions, though, perhaps more complex...
Base idea:
MessageTask wraps a message into a Runnable, and notifies queue when it is complete
ConvoQueue: blocking queue of messages, for a conversation. Acts as a prequeue that guarantees desired order. See this trio in particular: ConvoQueue.runNextIfPossible() → MessageTask.run() → ConvoQueue.complete() → …
MessageProcessor has a Map<Long, ConvoQueue>, and an ExecutorService
messages are processed by any thread in the executor, the ConvoQueues feed the ExecutorService and guarantee message order per convo, but not globally (so a "difficult" message will not block other conversations from being processed, unlike some other solutions, and that property was critically important in our case -- if it's not that critical for you, maybe a simpler solution is better)
cleanup with ScheduledExecutorService (takes 1 thread)
Visually:
ConvoQueues ExecutorService's internal queue
(shared, but has at most 1 MessageTask per convo)
Convo 1 ########
Convo 2 #####
Convo 3 ####### Thread 1
Convo 4 } → #### → {
Convo 5 ### Thread 2
Convo 6 #########
Convo 7 #####
(Convo 4 is about to be deleted)
Below are all the classes (MessageProcessorTest can be executed directly):
// MessageProcessor.java
import java.util.*;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import static java.util.concurrent.TimeUnit.SECONDS;
public class MessageProcessor {
private static final long CLEANUP_PERIOD_S = 10;
private final Map<Long, ConvoQueue> queuesByConvo = new HashMap<>();
private final ExecutorService executorService;
public MessageProcessor(int nbThreads) {
executorService = Executors.newFixedThreadPool(nbThreads);
ScheduledExecutorService cleanupScheduler = Executors.newScheduledThreadPool(1);
cleanupScheduler.scheduleAtFixedRate(this::removeEmptyQueues, CLEANUP_PERIOD_S, CLEANUP_PERIOD_S, SECONDS);
}
public void addMessageToProcess(Message message) {
ConvoQueue queue = getQueue(message.getConversationId());
queue.addMessage(message);
}
private ConvoQueue getQueue(Long convoId) {
synchronized (queuesByConvo) {
return queuesByConvo.computeIfAbsent(convoId, p -> new ConvoQueue(executorService));
}
}
private void removeEmptyQueues() {
synchronized (queuesByConvo) {
queuesByConvo.entrySet().removeIf(entry -> entry.getValue().isEmpty());
}
}
}
// ConvoQueue.java
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
class ConvoQueue {
private Queue<MessageTask> queue;
private MessageTask activeTask;
private ExecutorService executorService;
ConvoQueue(ExecutorService executorService) {
this.executorService = executorService;
this.queue = new LinkedBlockingQueue<>();
}
private void runNextIfPossible() {
synchronized(this) {
if (activeTask == null) {
activeTask = queue.poll();
if (activeTask != null) {
executorService.submit(activeTask);
}
}
}
}
void complete(MessageTask task) {
synchronized(this) {
if (task == activeTask) {
activeTask = null;
runNextIfPossible();
}
else {
throw new IllegalStateException("Attempt to complete task that is not supposed to be active: "+task);
}
}
}
boolean isEmpty() {
return queue.isEmpty();
}
void addMessage(Message message) {
add(new MessageTask(this, message));
}
private void add(MessageTask task) {
synchronized(this) {
queue.add(task);
runNextIfPossible();
}
}
}
// MessageTask.java
public class MessageTask implements Runnable {
private ConvoQueue convoQueue;
private Message message;
MessageTask(ConvoQueue convoQueue, Message message) {
this.convoQueue = convoQueue;
this.message = message;
}
@Override
public void run() {
try {
processMessage();
}
finally {
convoQueue.complete(this);
}
}
private void processMessage() {
// Dummy processing with random delay to observe reordered messages & preserved convo order
try {
Thread.sleep((long) (50*Math.random()));
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println(message);
}
}
// Message.java
class Message {
private long id;
private long conversationId;
private String data;
Message(long id, long conversationId, String someData) {
this.id = id;
this.conversationId = conversationId;
this.data = someData;
}
long getConversationId() {
return conversationId;
}
String getData() {
return data;
}
public String toString() {
return "Message{" + id + "," + conversationId + "," + data + "}";
}
}
// MessageProcessorTest.java
public class MessageProcessorTest {
public static void main(String[] args) {
MessageProcessor test = new MessageProcessor(2);
for (int i=1; i<100; i++) {
test.addMessageToProcess(new Message(1000+i,i%7,"hi "+i));
}
}
}
Output (for each convo ID (2nd field) order is preserved):
Message{1002,2,hi 2}
Message{1001,1,hi 1}
Message{1004,4,hi 4}
Message{1003,3,hi 3}
Message{1005,5,hi 5}
Message{1006,6,hi 6}
Message{1009,2,hi 9}
Message{1007,0,hi 7}
Message{1008,1,hi 8}
Message{1011,4,hi 11}
Message{1010,3,hi 10}
...
Message{1097,6,hi 97}
Message{1095,4,hi 95}
Message{1098,0,hi 98}
Message{1099,1,hi 99}
Message{1096,5,hi 96}
The test above gave me enough confidence to share it, but I'm slightly worried that I might have forgotten details for pathological cases. It has been running in production for years without hitches (although with more code that lets us inspect it live when we need to see what's happening, or why a certain queue is taking time, etc. -- never a problem with the system above in itself, but sometimes with the processing of a particular task).
Edit: click here to test online. Alternative: copy that gist in there, and press "Compile & Execute".
I'm not sure how you want messages to be processed. For convenience, each message here is a Runnable, which is where the execution takes place.
The solution is to have a number of Executors which are submitted to a parallel ExecutorService. Use the modulo operation to calculate which Executor an incoming message needs to be distributed to. Obviously, for the same conversation id it is always the same Executor, so you get parallel processing overall but sequential processing for the same conversation id. It's not guaranteed that messages with different conversation ids will always execute in parallel (after all, you are bounded, at least, by the number of physical cores in your system).
public class MessageExecutor {
public interface Message extends Runnable {
long getId();
long getConversationId();
String getMessage();
}
private static class Executor implements Runnable {
private final LinkedBlockingQueue<Message> messages = new LinkedBlockingQueue<>();
private volatile boolean stopped;
void schedule(Message message) {
messages.add(message);
}
void stop() {
stopped = true;
}
@Override
public void run() {
while (!stopped) {
try {
Message message = messages.take();
message.run();
} catch (Exception e) {
System.err.println(e.getMessage());
}
}
}
}
private final Executor[] executors;
private final ExecutorService executorService;
public MessageExecutor(int poolCount) {
executorService = Executors.newFixedThreadPool(poolCount);
executors = new Executor[poolCount];
IntStream.range(0, poolCount).forEach(i -> {
Executor executor = new Executor();
executorService.submit(executor);
executors[i] = executor;
});
}
public void submit(Message message) {
final int executorNr = Objects.hash(message.getConversationId()) % executors.length;
executors[executorNr].schedule(message);
}
public void stop() {
Arrays.stream(executors).forEach(Executor::stop);
executorService.shutdown();
}
}
You can then create the message executor with a pool size and submit messages to it.
public static void main(String[] args) {
MessageExecutor messageExecutor = new MessageExecutor(Runtime.getRuntime().availableProcessors());
messageExecutor.submit(new Message() {
@Override
public long getId() {
return 1;
}
@Override
public long getConversationId() {
return 1;
}
@Override
public String getMessage() {
return "abc1";
}
@Override
public void run() {
System.out.println(this.getMessage());
}
});
messageExecutor.submit(new Message() {
@Override
public long getId() {
return 1;
}
@Override
public long getConversationId() {
return 2;
}
@Override
public String getMessage() {
return "abc2";
}
@Override
public void run() {
System.out.println(this.getMessage());
}
});
messageExecutor.stop();
}
When I run with a pool count of 2 and submit a number of messages:
Message with conversation id [1] is scheduled on scheduler #[0]
Message with conversation id [2] is scheduled on scheduler #[1]
Message with conversation id [3] is scheduled on scheduler #[0]
Message with conversation id [4] is scheduled on scheduler #[1]
Message with conversation id [22] is scheduled on scheduler #[1]
Message with conversation id [22] is scheduled on scheduler #[1]
Message with conversation id [22] is scheduled on scheduler #[1]
Message with conversation id [22] is scheduled on scheduler #[1]
Message with conversation id [1] is scheduled on scheduler #[0]
Message with conversation id [2] is scheduled on scheduler #[1]
Message with conversation id [3] is scheduled on scheduler #[0]
Message with conversation id [3] is scheduled on scheduler #[0]
Message with conversation id [4] is scheduled on scheduler #[1]
When the same amount of messages runs with a pool count of 3:
Message with conversation id [1] is scheduled on scheduler #[2]
Message with conversation id [2] is scheduled on scheduler #[0]
Message with conversation id [3] is scheduled on scheduler #[1]
Message with conversation id [4] is scheduled on scheduler #[2]
Message with conversation id [22] is scheduled on scheduler #[2]
Message with conversation id [22] is scheduled on scheduler #[2]
Message with conversation id [22] is scheduled on scheduler #[2]
Message with conversation id [22] is scheduled on scheduler #[2]
Message with conversation id [1] is scheduled on scheduler #[2]
Message with conversation id [2] is scheduled on scheduler #[0]
Message with conversation id [3] is scheduled on scheduler #[1]
Message with conversation id [3] is scheduled on scheduler #[1]
Message with conversation id [4] is scheduled on scheduler #[2]
Messages get distributed nicely among the pool of Executors :).
EDIT: the Executor's run() catches all exceptions, to ensure it does not break when one message fails.
You essentially want the work to be done sequentially within a conversation. One solution would be to synchronize on a mutex that is unique to that conversation. The drawback of that solution is that if conversations are short lived and new conversations start on a frequent basis, the "mutexes" map will grow fast.
For brevity's sake I've omitted the executor shutdown, actual message processing, exception handling etc.
public class MessageProcessor {
private final ExecutorService executor;
private final ConcurrentMap<Long, Object> mutexes = new ConcurrentHashMap<> ();
public MessageProcessor(int threadCount) {
executor = Executors.newFixedThreadPool(threadCount);
}
public static void main(String[] args) throws InterruptedException {
MessageProcessor p = new MessageProcessor(10);
BlockingQueue<Message> queue = new ArrayBlockingQueue<> (1000);
//some other thread populates the queue
while (true) {
Message m = queue.take();
p.process(m);
}
}
public void process(Message m) {
Object mutex = mutexes.computeIfAbsent(m.getConversationId(), id -> new Object());
executor.submit(() -> {
synchronized(mutex) {
//That's where you actually process the message
}
});
}
}
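If the growth of the mutexes map is a concern, one possible variation (my sketch, not part of the answer above) is to synchronize on a fixed array of lock stripes; memory stays bounded, at the cost of unrelated conversations occasionally sharing a lock:
private final Object[] locks = new Object[64]; // fixed number of stripes
{
    for (int i = 0; i < locks.length; i++) {
        locks[i] = new Object();
    }
}

public void process(Message m) {
    // Conversations that hash to the same stripe serialize behind each other, but the map never grows.
    Object mutex = locks[Math.floorMod(Long.hashCode(m.getConversationId()), locks.length)];
    executor.submit(() -> {
        synchronized (mutex) {
            //That's where you actually process the message
        }
    });
}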
I had a similar problem in my application. My first solution was sorting them using a java.util.ConcurrentHashMap. So in your case, this would be a ConcurrentHashMap with conversationId as key and a list of messages as value. The problem was that the HashMap got too big, taking too much space.
My current solution is the following:
One thread receives the messages and stores them in a java.util.ArrayList. After receiving N messages it pushes the list to a second thread. This thread sorts the messages with ArrayList.sort, by conversationId and then id. It then iterates through the sorted list and searches for blocks which can be processed. Each block which can be processed is taken out of the list. To process a block, you create a runnable with that block and push it to an executor service. The messages which could not be processed remain in the list and will be checked in the next round.
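A small sketch of the sort-and-group step described above (it assumes Message exposes getId() and getConversationId() as longs, which the question's class does not show):
import java.util.Collection;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.stream.Collectors;

class ConversationBatcher {
    // Sort one received batch by conversationId, then id, and split it into per-conversation blocks.
    // Each block keeps its internal order, so submitting a whole block as one runnable preserves
    // ordering within a conversation while different conversations run in parallel.
    static Collection<List<Message>> toBlocks(List<Message> batch) {
        batch.sort(Comparator.comparingLong(Message::getConversationId)
                             .thenComparingLong(Message::getId));
        return batch.stream()
                .collect(Collectors.groupingBy(Message::getConversationId,
                                               LinkedHashMap::new,
                                               Collectors.toList()))
                .values();
    }
}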
For what it's worth, the Kafka Streams API provides most of this functionality. Partitions preserve ordering. It's a larger buy-in than an ExecutorService but could be interesting, especially if you happen to use Kafka already.
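The same partitioning idea can be sketched with the plain Kafka producer API (topic name, broker address and the Message accessors are placeholders): keying every record by the conversation id sends all messages of a conversation to the same partition, where ordering is preserved.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

class ConversationProducer {
    private final KafkaProducer<Long, String> producer;
    ConversationProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.LongSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }
    void send(Message message) {
        // Same key -> same partition; Kafka preserves ordering within a partition.
        producer.send(new ProducerRecord<>("messages", message.getConversationId(), message.getData()));
    }
}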
I would use three ExecutorServices (one for receiving messages, one for sorting messages, one for processing messages). I would also use one queue for all received messages and another queue for the messages once they are sorted and grouped (sorted by conversationId, then grouped so that messages sharing the same conversationId stay together). Finally: one thread for receiving messages, one thread for sorting messages, and all remaining threads for processing messages.
see below:
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.stream.Collectors;
public class MultipleMessagesExample {
private static int MAX_ELEMENTS_MESSAGE_QUEUE = 1000;
private BlockingQueue<Message> receivingBlockingQueue = new LinkedBlockingDeque<>(MAX_ELEMENTS_MESSAGE_QUEUE);
private BlockingQueue<List<Message>> prioritySortedBlockingQueue = new LinkedBlockingDeque<>(MAX_ELEMENTS_MESSAGE_QUEUE);
public static void main(String[] args) {
MultipleMessagesExample multipleMessagesExample = new MultipleMessagesExample();
multipleMessagesExample.doTheWork();
}
private void doTheWork() {
int totalCores = Runtime.getRuntime().availableProcessors();
int totalSortingProcesses = 1;
int totalMessagesReceiverProcess = 1;
int totalMessagesProcessors = totalCores - totalSortingProcesses - totalMessagesReceiverProcess;
ExecutorService messagesReceiverExecutorService = Executors.newFixedThreadPool(totalMessagesReceiverProcess);
ExecutorService sortingExecutorService = Executors.newFixedThreadPool(totalSortingProcesses);
ExecutorService messageProcessorExecutorService = Executors.newFixedThreadPool(totalMessagesProcessors);
MessageReceiver messageReceiver = new MessageReceiver(receivingBlockingQueue);
messagesReceiverExecutorService.submit(messageReceiver);
MessageSorter messageSorter = new MessageSorter(receivingBlockingQueue, prioritySortedBlockingQueue);
sortingExecutorService.submit(messageSorter);
for (int i = 0; i < totalMessagesProcessors; i++) {
MessageProcessor messageProcessor = new MessageProcessor(prioritySortedBlockingQueue);
messageProcessorExecutorService.submit(messageProcessor);
}
}
}
class Message {
private Long id;
private Long conversationId;
private String someData;
public Message(Long id, Long conversationId, String someData) {
this.id = id;
this.conversationId = conversationId;
this.someData = someData;
}
public Long getId() {
return id;
}
public Long getConversationId() {
return conversationId;
}
public String getSomeData() {
return someData;
}
}
class MessageReceiver implements Callable<Void> {
private BlockingQueue<Message> bloquingQueue;
public MessageReceiver(BlockingQueue<Message> bloquingQueue) {
this.bloquingQueue = bloquingQueue;
}
@Override
public Void call() throws Exception {
System.out.println("receiving messages...");
bloquingQueue.add(new Message(1L, 1000L, "conversation1 data fragment 1"));
bloquingQueue.add(new Message(2L, 2000L, "conversation2 data fragment 1"));
bloquingQueue.add(new Message(3L, 1000L, "conversation1 data fragment 2"));
bloquingQueue.add(new Message(4L, 2000L, "conversation2 data fragment 2"));
return null;
}
}
/**
* sorts messages. group together same conversation IDs
*/
class MessageSorter implements Callable<Void> {
private BlockingQueue<Message> receivingBlockingQueue;
private BlockingQueue<List<Message>> prioritySortedBlockingQueue;
private List<Message> intermediateList = new ArrayList<>();
private MessageComparator messageComparator = new MessageComparator();
private static int BATCH_SIZE = 10;
public MessageSorter(BlockingQueue<Message> receivingBlockingQueue, BlockingQueue<List<Message>> prioritySortedBlockingQueue) {
this.receivingBlockingQueue = receivingBlockingQueue;
this.prioritySortedBlockingQueue = prioritySortedBlockingQueue;
}
@Override
public Void call() throws Exception {
while (true) {
boolean messagesReceivedQueueIsEmpty = false;
intermediateList = new ArrayList<>();
for (int i = 0; i < BATCH_SIZE; i++) {
try {
Message message = receivingBlockingQueue.remove();
intermediateList.add(message);
} catch (NoSuchElementException e) {
// this is expected when queue is empty
messagesReceivedQueueIsEmpty = true;
break;
}
}
Collections.sort(intermediateList, messageComparator);
if (intermediateList.size() > 0) {
Map<Long, List<Message>> map = intermediateList.stream().collect(Collectors.groupingBy(message -> message.getConversationId()));
map.forEach((k, v) -> prioritySortedBlockingQueue.add(new ArrayList<>(v)));
System.out.println("new batch of messages was sorted and is ready to be processed");
}
if (messagesReceivedQueueIsEmpty) {
System.out.println("message processor is waiting for messages...");
Thread.sleep(1000); // no need to use CPU if there are no messages to process
}
}
}
}
/**
* process groups of messages with same conversationID
*/
class MessageProcessor implements Callable<Void> {
private BlockingQueue<List<Message>> prioritySortedBlockingQueue;
public MessageProcessor(BlockingQueue<List<Message>> prioritySortedBlockingQueue) {
this.prioritySortedBlockingQueue = prioritySortedBlockingQueue;
}
@Override
public Void call() throws Exception {
while (true) {
List<Message> messages = prioritySortedBlockingQueue.take(); // blocks if no message is available
messages.stream().forEach(m -> processMessage(m));
}
}
private void processMessage(Message message) {
System.out.println(message.getId() + " - " + message.getConversationId() + " - " + message.getSomeData());
}
}
class MessageComparator implements Comparator<Message> {
@Override
public int compare(Message o1, Message o2) {
return (int) (o1.getConversationId() - o2.getConversationId());
}
}
Create an executor class extending Executor. On submit you can put code like the following.
public void execute(Runnable command) {
final int key= command.getKey();
//Some code to check if it is running
final int index = key != Integer.MIN_VALUE ? Math.abs(key) % size : 0;
workers[index].execute(command);
}
Create a worker with a queue, so that tasks which must run sequentially are executed one after another by the same worker:
private final AtomicBoolean scheduled = new AtomicBoolean(false);
private final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>(maximumQueueSize);
public void execute(Runnable command) {
long timeout = 0;
TimeUnit timeUnit = TimeUnit.SECONDS;
if (command instanceof TimeoutRunnable) {
TimeoutRunnable timeoutRunnable = ((TimeoutRunnable) command);
timeout = timeoutRunnable.getTimeout();
timeUnit = timeoutRunnable.getTimeUnit();
}
boolean offered;
try {
if (timeout == 0) {
offered = workQueue.offer(command);
} else {
offered = workQueue.offer(command, timeout, timeUnit);
}
} catch (InterruptedException e) {
throw new RejectedExecutionException("Thread is interrupted while offering work");
}
if (!offered) {
throw new RejectedExecutionException("Worker queue is full!");
}
schedule();
}
private void schedule() {
//if it is already scheduled, we don't need to schedule it again.
if (scheduled.get()) {
return;
}
if (!workQueue.isEmpty() && scheduled.compareAndSet(false, true)) {
try {
executor.execute(this);
} catch (RejectedExecutionException e) {
scheduled.set(false);
throw e;
}
}
}
public void run() {
try {
Runnable r;
do {
r = workQueue.poll();
if (r != null) {
r.run();
}
}
while (r != null);
} finally {
scheduled.set(false);
schedule();
}
}
This library should help: https://github.com/jano7/executor
ExecutorService underlyingExecutor = Executors.newCachedThreadPool();
KeySequentialRunner<String> runner = new KeySequentialRunner<>(underlyingExecutor);
Message message = retrieveMessage();
Runnable task = new Runnable() {
@Override
public void run() {
// process the message
}
};
runner.run(message.conversationId, task);
