Kubernetes Watch - How to update? (Java API)

I have two tasks that I run in parallel threads, and I'm pulling my hair out over why the Watch functionality doesn't work. Please let me know if you have any insights.
Task 1: Get the status of the pods and show the current status.
Task 2: Keep a watch on new events. This is what I'm trying to understand better.
Each of these tasks is executed every 30 seconds with scheduleAtFixedRate().
Expected behavior:
Task 1: I should get the list of all the pods with their current status (this works).
Task 2: I should get a list of new events as they happen.
Observed behavior:
Task 1: It works fine. I get updated status of the pods every 30 seconds.
Task 2: It dumps the events from the first request, but it never picks up any new events after that.
Code:
Task 1:
@Component
@Scope(value = org.springframework.beans.factory.config.ConfigurableBeanFactory.SCOPE_SINGLETON)
public class Task1 implements Runnable {

    private ScheduledExecutorService scheduledExecutorService;
    private CommandInvoker commandInvoker;

    private static final int INITIAL_DELAY = 15;
    private static final int POLLING_INTERVAL = 30;

    @Autowired
    public Task1(CommandInvoker commandInvoker,
                 ScheduledExecutorService scheduledExecutorService) {
        this.commandInvoker = commandInvoker;
        this.scheduledExecutorService = scheduledExecutorService;
        this.scheduledExecutorService.scheduleAtFixedRate(this, INITIAL_DELAY, POLLING_INTERVAL, TimeUnit.SECONDS);
    }

    @Override
    public void run() {
        System.out.println("===== STARTING TASK 1 POD HEALTH CHECK =======");
        commandInvoker.getPodStatus();
    }
}
Task 2:
@Component
@Scope(value = org.springframework.beans.factory.config.ConfigurableBeanFactory.SCOPE_SINGLETON)
public class Task2 implements Runnable {

    private ScheduledExecutorService scheduledExecutorService;
    private CommandInvoker commandInvoker;

    @Autowired
    public Task2(CommandInvoker commandInvoker,
                 ScheduledExecutorService scheduledExecutorService) {
        this.commandInvoker = commandInvoker;
        this.scheduledExecutorService = scheduledExecutorService;
        this.scheduledExecutorService.scheduleAtFixedRate(this, 30, 30, TimeUnit.SECONDS);
    }

    @Override
    public void run() {
        System.out.println("===== STARTING TASK 2 EVENT WATCH UPDATE =======");
        commandInvoker.getWatchUpdates();
    }
}
CommandInvoker:
@Component
public class CommandInvoker {

    public void getPodStatus() {
        try {
            CoreV1Api api = new CoreV1Api();
            V1PodList list = api.listPodForAllNamespaces(null,
                    null, null, null, null, null, null, null, null);
            for (V1Pod pod : list.getItems()) {
                // THIS WORKS //
            }
        } catch (ApiException e) {
            throw new WhateverException("Failed to handle watchlist event", e);
        }
    }

    public void getWatchUpdates() {
        CoreV1Api api = new CoreV1Api();
        try {
            Watch<V1Event> watch = Watch.createWatch(
                    apiClient,
                    api.listEventForAllNamespacesCall(null, null, null, null,
                            null, null, null, null, true, null, null),
                    new TypeToken<Watch.Response<V1Event>>() {}.getType());
            watch.forEach(response -> {
                V1Event event = response.object;
                // THIS ONLY DUMPS EVENTS FROM FIRST CALL BUT NEVER GETS EXECUTED AGAIN
            });
            // I NEVER REACH HERE BUT I DON'T GET ANY UPDATES
        } catch (ApiException e) {
            throw new K8ServerException("Failed to handle watchlist event", e);
        }
    }
}
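For reference, the Java client's Watch is a blocking iteration over one streaming HTTP response, so it is normally consumed on a single dedicated, long-lived thread and closed when finished, rather than recreated from a fixed-rate scheduler. A rough sketch of that pattern follows; it is not a verified fix, it simply reuses the same apiClient field and CoreV1Api call from the snippet above and leaves out read-timeout tuning and reconnect handling.
ExecutorService watchExecutor = Executors.newSingleThreadExecutor();
watchExecutor.submit(() -> {
    CoreV1Api api = new CoreV1Api();
    // try-with-resources: Watch implements Closeable, so the connection is released when the loop ends
    try (Watch<V1Event> watch = Watch.createWatch(
            apiClient,
            api.listEventForAllNamespacesCall(null, null, null, null,
                    null, null, null, null, true, null, null),
            new TypeToken<Watch.Response<V1Event>>() {}.getType())) {
        for (Watch.Response<V1Event> response : watch) {
            V1Event event = response.object; // events stream in here until the server closes the connection
        }
    } catch (ApiException | IOException e) {
        // log and decide whether to re-establish the watch
    }
});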

Related

Thread pool to process messages in parallel, but preserve order within conversations

I need to process messages in parallel, but preserve the processing order of messages with the same conversation ID.
Example:
Let's define a Message like this:
class Message {
Message(long id, long conversationId, String someData) {...}
}
Suppose the messages arrive in the following order:
Message(1, 1, "a1"), Message(2, 2, "a2"), Message(3, 1, "b1"), Message(4, 2, "b2").
I need message 3 to be processed after message 1, since messages 1 and 3 have the same conversation ID (similarly, message 4 should be processed after message 2 for the same reason).
I don't care about the relative order between e.g. 1 and 2, since they have different conversation IDs.
I would like to reuse the java ThreadPoolExecutor's functionality as much as possible to avoid having to replace dead threads manually in my code etc.
Update: The number of possible 'conversation-ids' is not limited, and there is no time limit on a conversation. (I personally don't see it as a problem, since I can have a simple mapping from a conversationId to a worker number, e.g. conversationId % totalWorkers).
Update 2: There is one problem with a solution based on multiple queues, where the queue number is determined by e.g. 'index = Objects.hash(conversationId) % total': if it takes a long time to process some message, all messages with the same 'index' but different 'conversationId' will wait even though other threads are available to handle them. That is, I believe solutions with a single smart blocking queue would be better, but that's just an opinion; I am open to any good solution.
Do you see an elegant solution for this problem?
I had to do something very similar some time ago, so here is an adaptation.
(See it in action online)
It's actually the exact same base need, but in my case the key was a String, and more importantly the set of keys was not growing indefinitely, so here I had to add a "cleanup scheduler". Other than that it's basically the same code, so I hope I have not lost anything serious in the adaptation process. I tested it, looks like it works. It's longer than other solutions, though, perhaps more complex...
Base idea:
MessageTask wraps a message into a Runnable, and notifies queue when it is complete
ConvoQueue: blocking queue of messages, for a conversation. Acts as a prequeue that guarantees desired order. See this trio in particular: ConvoQueue.runNextIfPossible() → MessageTask.run() → ConvoQueue.complete() → …
MessageProcessor has a Map<Long, ConvoQueue>, and an ExecutorService
messages are processed by any thread in the executor, the ConvoQueues feed the ExecutorService and guarantee message order per convo, but not globally (so a "difficult" message will not block other conversations from being processed, unlike some other solutions, and that property was critically important in our case -- if it's not that critical for you, maybe a simpler solution is better)
cleanup with ScheduledExecutorService (takes 1 thread)
Visually:
ConvoQueues                ExecutorService's internal queue
                           (shared, but has at most 1 MessageTask per convo)

Convo 1  ########
Convo 2  #####
Convo 3  #######                            Thread 1
Convo 4             }  →  ####  →  {
Convo 5  ###                                Thread 2
Convo 6  #########
Convo 7  #####

(Convo 4 is about to be deleted)
Below are all the classes (MessageProcessorTest can be executed directly):
// MessageProcessor.java
import java.util.*;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import static java.util.concurrent.TimeUnit.SECONDS;
public class MessageProcessor {
private static final long CLEANUP_PERIOD_S = 10;
private final Map<Long, ConvoQueue> queuesByConvo = new HashMap<>();
private final ExecutorService executorService;
public MessageProcessor(int nbThreads) {
executorService = Executors.newFixedThreadPool(nbThreads);
ScheduledExecutorService cleanupScheduler = Executors.newScheduledThreadPool(1);
cleanupScheduler.scheduleAtFixedRate(this::removeEmptyQueues, CLEANUP_PERIOD_S, CLEANUP_PERIOD_S, SECONDS);
}
public void addMessageToProcess(Message message) {
ConvoQueue queue = getQueue(message.getConversationId());
queue.addMessage(message);
}
private ConvoQueue getQueue(Long convoId) {
synchronized (queuesByConvo) {
return queuesByConvo.computeIfAbsent(convoId, p -> new ConvoQueue(executorService));
}
}
private void removeEmptyQueues() {
synchronized (queuesByConvo) {
queuesByConvo.entrySet().removeIf(entry -> entry.getValue().isEmpty());
}
}
}
// ConvoQueue.java
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
class ConvoQueue {
private Queue<MessageTask> queue;
private MessageTask activeTask;
private ExecutorService executorService;
ConvoQueue(ExecutorService executorService) {
this.executorService = executorService;
this.queue = new LinkedBlockingQueue<>();
}
private void runNextIfPossible() {
synchronized(this) {
if (activeTask == null) {
activeTask = queue.poll();
if (activeTask != null) {
executorService.submit(activeTask);
}
}
}
}
void complete(MessageTask task) {
synchronized(this) {
if (task == activeTask) {
activeTask = null;
runNextIfPossible();
}
else {
throw new IllegalStateException("Attempt to complete task that is not supposed to be active: "+task);
}
}
}
boolean isEmpty() {
return queue.isEmpty();
}
void addMessage(Message message) {
add(new MessageTask(this, message));
}
private void add(MessageTask task) {
synchronized(this) {
queue.add(task);
runNextIfPossible();
}
}
}
// MessageTask.java
public class MessageTask implements Runnable {
private ConvoQueue convoQueue;
private Message message;
MessageTask(ConvoQueue convoQueue, Message message) {
this.convoQueue = convoQueue;
this.message = message;
}
@Override
public void run() {
try {
processMessage();
}
finally {
convoQueue.complete(this);
}
}
private void processMessage() {
// Dummy processing with random delay to observe reordered messages & preserved convo order
try {
Thread.sleep((long) (50*Math.random()));
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println(message);
}
}
// Message.java
class Message {
private long id;
private long conversationId;
private String data;
Message(long id, long conversationId, String someData) {
this.id = id;
this.conversationId = conversationId;
this.data = someData;
}
long getConversationId() {
return conversationId;
}
String getData() {
return data;
}
public String toString() {
return "Message{" + id + "," + conversationId + "," + data + "}";
}
}
// MessageProcessorTest.java
public class MessageProcessorTest {
public static void main(String[] args) {
MessageProcessor test = new MessageProcessor(2);
for (int i=1; i<100; i++) {
test.addMessageToProcess(new Message(1000+i,i%7,"hi "+i));
}
}
}
Output (for each convo ID (2nd field) order is preserved):
Message{1002,2,hi 2}
Message{1001,1,hi 1}
Message{1004,4,hi 4}
Message{1003,3,hi 3}
Message{1005,5,hi 5}
Message{1006,6,hi 6}
Message{1009,2,hi 9}
Message{1007,0,hi 7}
Message{1008,1,hi 8}
Message{1011,4,hi 11}
Message{1010,3,hi 10}
...
Message{1097,6,hi 97}
Message{1095,4,hi 95}
Message{1098,0,hi 98}
Message{1099,1,hi 99}
Message{1096,5,hi 96}
The test above gave me enough confidence to share it, but I'm slightly worried that I might have forgotten details for pathological cases. It has been running in production for years without hitches (although with more code that lets us inspect it live when we need to see what's happening, why a certain queue takes time, etc. -- never a problem with the system above in itself, but sometimes with the processing of a particular task).
Edit: click here to test online. Alternative: copy that gist in there, and press "Compile & Execute".
Not sure how you want messages to be processed. For convenience, each message is of type Runnable, so the message itself carries the code to execute.
The solution is to have a number of Executors which are submitted to a parallel ExecutorService. Use the modulo operation to calculate which Executor an incoming message is distributed to. Messages with the same conversation id always map to the same Executor, hence you get parallel processing overall but sequential processing per conversation id. It's not guaranteed that messages with different conversation ids will always execute in parallel (among other things, you are bounded by the number of physical cores in your system).
public class MessageExecutor {
public interface Message extends Runnable {
long getId();
long getConversationId();
String getMessage();
}
private static class Executor implements Runnable {
private final LinkedBlockingQueue<Message> messages = new LinkedBlockingQueue<>();
private volatile boolean stopped;
void schedule(Message message) {
messages.add(message);
}
void stop() {
stopped = true;
}
@Override
public void run() {
while (!stopped) {
try {
Message message = messages.take();
message.run();
} catch (Exception e) {
System.err.println(e.getMessage());
}
}
}
}
private final Executor[] executors;
private final ExecutorService executorService;
public MessageExecutor(int poolCount) {
executorService = Executors.newFixedThreadPool(poolCount);
executors = new Executor[poolCount];
IntStream.range(0, poolCount).forEach(i -> {
Executor executor = new Executor();
executorService.submit(executor);
executors[i] = executor;
});
}
public void submit(Message message) {
final int executorNr = Objects.hash(message.getConversationId()) % executors.length;
executors[executorNr].schedule(message);
}
public void stop() {
Arrays.stream(executors).forEach(Executor::stop);
executorService.shutdown();
}
}
You can then start the message executor with a pool size and submit messages to it.
public static void main(String[] args) {
MessageExecutor messageExecutor = new MessageExecutor(Runtime.getRuntime().availableProcessors());
messageExecutor.submit(new Message() {
@Override
public long getId() {
return 1;
}
@Override
public long getConversationId() {
return 1;
}
@Override
public String getMessage() {
return "abc1";
}
@Override
public void run() {
System.out.println(this.getMessage());
}
});
messageExecutor.submit(new Message() {
@Override
public long getId() {
return 1;
}
@Override
public long getConversationId() {
return 2;
}
@Override
public String getMessage() {
return "abc2";
}
@Override
public void run() {
System.out.println(this.getMessage());
}
});
messageExecutor.stop();
}
When I run with a pool count of 2 and submit a number of messages:
Message with conversation id [1] is scheduled on scheduler #[0]
Message with conversation id [2] is scheduled on scheduler #[1]
Message with conversation id [3] is scheduled on scheduler #[0]
Message with conversation id [4] is scheduled on scheduler #[1]
Message with conversation id [22] is scheduled on scheduler #[1]
Message with conversation id [22] is scheduled on scheduler #[1]
Message with conversation id [22] is scheduled on scheduler #[1]
Message with conversation id [22] is scheduled on scheduler #[1]
Message with conversation id [1] is scheduled on scheduler #[0]
Message with conversation id [2] is scheduled on scheduler #[1]
Message with conversation id [3] is scheduled on scheduler #[0]
Message with conversation id [3] is scheduled on scheduler #[0]
Message with conversation id [4] is scheduled on scheduler #[1]
When the same amount of messages runs with a pool count of 3:
Message with conversation id [1] is scheduled on scheduler #[2]
Message with conversation id [2] is scheduled on scheduler #[0]
Message with conversation id [3] is scheduled on scheduler #[1]
Message with conversation id [4] is scheduled on scheduler #[2]
Message with conversation id [22] is scheduled on scheduler #[2]
Message with conversation id [22] is scheduled on scheduler #[2]
Message with conversation id [22] is scheduled on scheduler #[2]
Message with conversation id [22] is scheduled on scheduler #[2]
Message with conversation id [1] is scheduled on scheduler #[2]
Message with conversation id [2] is scheduled on scheduler #[0]
Message with conversation id [3] is scheduled on scheduler #[1]
Message with conversation id [3] is scheduled on scheduler #[1]
Message with conversation id [4] is scheduled on scheduler #[2]
Messages get distributed nicely among the pool of Executors :).
EDIT: the Executor's run() catches all exceptions, so it does not break when a single message fails.
You essentially want the work to be done sequentially within a conversation. One solution would be to synchronize on a mutex that is unique to that conversation. The drawback of that solution is that if conversations are short lived and new conversations start on a frequent basis, the "mutexes" map will grow fast.
For brevity's sake I've omitted the executor shutdown, actual message processing, exception handling etc.
public class MessageProcessor {
private final ExecutorService executor;
private final ConcurrentMap<Long, Object> mutexes = new ConcurrentHashMap<> ();
public MessageProcessor(int threadCount) {
executor = Executors.newFixedThreadPool(threadCount);
}
public static void main(String[] args) throws InterruptedException {
MessageProcessor p = new MessageProcessor(10);
BlockingQueue<Message> queue = new ArrayBlockingQueue<> (1000);
//some other thread populates the queue
while (true) {
Message m = queue.take();
p.process(m);
}
}
public void process(Message m) {
Object mutex = mutexes.computeIfAbsent(m.getConversationId(), id -> new Object());
executor.submit(() -> {
synchronized(mutex) {
//That's where you actually process the message
}
});
}
}
I had a similar problem in my application. My first solution was sorting them using a java.util.ConcurrentHashMap. So in your case, this would be a ConcurrentHashMap with conversationId as key and a list of messages as value. The problem was that the HashMap got too big, taking too much space.
My current solution is the following:
One thread receives the messages and stores them in a java.util.ArrayList. After receiving N messages it pushes the list to a second thread. This thread sorts the messages using the ArrayList.sort method, by conversationId and then id. Then the thread iterates through the sorted list and searches for blocks which can be processed. Each block which can be processed is taken out of the list. To process a block you can create a runnable with this block and push it to an executor service. The messages which could not be processed remain in the list and will be checked in the next round.
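As a rough, assumption-laden sketch of that batching idea (my own illustration, not the poster's code: the Message fields, BATCH_SIZE and the worker pool size are made up, and it simplifies the "leftover messages stay for the next round" part by dispatching every block immediately):
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Rough sketch of the "batch, sort, dispatch blocks" idea described above.
public class BatchSortProcessor {

    static class Message { // minimal stand-in for the poster's Message
        final long id, conversationId; final String data;
        Message(long id, long conversationId, String data) {
            this.id = id; this.conversationId = conversationId; this.data = data;
        }
    }

    private static final int BATCH_SIZE = 100; // the "N" from the description, value made up
    private final BlockingQueue<Message> incoming = new LinkedBlockingQueue<>();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Called by the receiving thread for every arriving message.
    void receive(Message m) { incoming.add(m); }

    // Run repeatedly by the sorting thread: drain up to a batch, sort by (conversationId, id),
    // group into per-conversation blocks and hand each block to the worker pool.
    void sortAndDispatchOnce() throws InterruptedException {
        List<Message> batch = new ArrayList<>();
        batch.add(incoming.take()); // block until at least one message arrives
        incoming.drainTo(batch, BATCH_SIZE - 1);
        batch.sort(Comparator.comparingLong((Message m) -> m.conversationId)
                .thenComparingLong(m -> m.id));
        Map<Long, List<Message>> blocks = new LinkedHashMap<>();
        for (Message m : batch) {
            blocks.computeIfAbsent(m.conversationId, k -> new ArrayList<>()).add(m);
        }
        for (List<Message> block : blocks.values()) {
            // each block keeps its internal order; blocks for different conversations run in parallel
            workers.submit(() -> block.forEach(m ->
                    System.out.println(m.conversationId + " -> " + m.id)));
        }
    }
}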
For what it's worth, the Kafka Streams API provides most of this functionality. Partitions preserve ordering. It's a larger buy-in than an ExecutorService but could be interesting, especially if you happen to use Kafka already.
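If that route is of interest, a minimal sketch might look like the following (assumptions, not from the answer: a "messages" topic whose records are produced keyed by conversationId, a local broker, and Long/String serdes; the per-partition ordering is what preserves per-conversation order):
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;

public class ConversationStreamSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "conversation-processor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        // Records with the same key (conversationId) land in the same partition,
        // and records within a partition are consumed in order.
        builder.stream("messages", Consumed.with(Serdes.Long(), Serdes.String()))
                .foreach((conversationId, payload) ->
                        System.out.println(conversationId + " -> " + payload));

        new KafkaStreams(builder.build(), props).start(); // a real app would also manage shutdown
    }
}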
I would use three executor services (one for receiving messages, one for sorting messages, one for processing messages). I would also use one queue for all messages received and another queue for messages sorted and grouped (sorted by conversationId, then grouped so that messages sharing the same conversationId stay together). Finally: one thread for receiving messages, one thread for sorting messages, and all remaining threads used for processing messages.
see below:
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.stream.Collectors;
public class MultipleMessagesExample {
private static int MAX_ELEMENTS_MESSAGE_QUEUE = 1000;
private BlockingQueue<Message> receivingBlockingQueue = new LinkedBlockingDeque<>(MAX_ELEMENTS_MESSAGE_QUEUE);
private BlockingQueue<List<Message>> prioritySortedBlockingQueue = new LinkedBlockingDeque<>(MAX_ELEMENTS_MESSAGE_QUEUE);
public static void main(String[] args) {
MultipleMessagesExample multipleMessagesExample = new MultipleMessagesExample();
multipleMessagesExample.doTheWork();
}
private void doTheWork() {
int totalCores = Runtime.getRuntime().availableProcessors();
int totalSortingProcesses = 1;
int totalMessagesReceiverProcess = 1;
int totalMessagesProcessors = totalCores - totalSortingProcesses - totalMessagesReceiverProcess;
ExecutorService messagesReceiverExecutorService = Executors.newFixedThreadPool(totalMessagesReceiverProcess);
ExecutorService sortingExecutorService = Executors.newFixedThreadPool(totalSortingProcesses);
ExecutorService messageProcessorExecutorService = Executors.newFixedThreadPool(totalMessagesProcessors);
MessageReceiver messageReceiver = new MessageReceiver(receivingBlockingQueue);
messagesReceiverExecutorService.submit(messageReceiver);
MessageSorter messageSorter = new MessageSorter(receivingBlockingQueue, prioritySortedBlockingQueue);
sortingExecutorService.submit(messageSorter);
for (int i = 0; i < totalMessagesProcessors; i++) {
MessageProcessor messageProcessor = new MessageProcessor(prioritySortedBlockingQueue);
messageProcessorExecutorService.submit(messageProcessor);
}
}
}
class Message {
private Long id;
private Long conversationId;
private String someData;
public Message(Long id, Long conversationId, String someData) {
this.id = id;
this.conversationId = conversationId;
this.someData = someData;
}
public Long getId() {
return id;
}
public Long getConversationId() {
return conversationId;
}
public String getSomeData() {
return someData;
}
}
class MessageReceiver implements Callable<Void> {
private BlockingQueue<Message> bloquingQueue;
public MessageReceiver(BlockingQueue<Message> bloquingQueue) {
this.bloquingQueue = bloquingQueue;
}
@Override
public Void call() throws Exception {
System.out.println("receiving messages...");
bloquingQueue.add(new Message(1L, 1000L, "conversation1 data fragment 1"));
bloquingQueue.add(new Message(2L, 2000L, "conversation2 data fragment 1"));
bloquingQueue.add(new Message(3L, 1000L, "conversation1 data fragment 2"));
bloquingQueue.add(new Message(4L, 2000L, "conversation2 data fragment 2"));
return null;
}
}
/**
* sorts messages. group together same conversation IDs
*/
class MessageSorter implements Callable<Void> {
private BlockingQueue<Message> receivingBlockingQueue;
private BlockingQueue<List<Message>> prioritySortedBlockingQueue;
private List<Message> intermediateList = new ArrayList<>();
private MessageComparator messageComparator = new MessageComparator();
private static int BATCH_SIZE = 10;
public MessageSorter(BlockingQueue<Message> receivingBlockingQueue, BlockingQueue<List<Message>> prioritySortedBlockingQueue) {
this.receivingBlockingQueue = receivingBlockingQueue;
this.prioritySortedBlockingQueue = prioritySortedBlockingQueue;
}
@Override
public Void call() throws Exception {
while (true) {
boolean messagesReceivedQueueIsEmpty = false;
intermediateList = new ArrayList<>();
for (int i = 0; i < BATCH_SIZE; i++) {
try {
Message message = receivingBlockingQueue.remove();
intermediateList.add(message);
} catch (NoSuchElementException e) {
// this is expected when queue is empty
messagesReceivedQueueIsEmpty = true;
break;
}
}
Collections.sort(intermediateList, messageComparator);
if (intermediateList.size() > 0) {
Map<Long, List<Message>> map = intermediateList.stream().collect(Collectors.groupingBy(message -> message.getConversationId()));
map.forEach((k, v) -> prioritySortedBlockingQueue.add(new ArrayList<>(v)));
System.out.println("new batch of messages was sorted and is ready to be processed");
}
if (messagesReceivedQueueIsEmpty) {
System.out.println("message processor is waiting for messages...");
Thread.sleep(1000); // no need to use CPU if there are no messages to process
}
}
}
}
/**
* process groups of messages with same conversationID
*/
class MessageProcessor implements Callable<Void> {
private BlockingQueue<List<Message>> prioritySortedBlockingQueue;
public MessageProcessor(BlockingQueue<List<Message>> prioritySortedBlockingQueue) {
this.prioritySortedBlockingQueue = prioritySortedBlockingQueue;
}
@Override
public Void call() throws Exception {
while (true) {
List<Message> messages = prioritySortedBlockingQueue.take(); // blocks if no message is available
messages.stream().forEach(m -> processMessage(m));
}
}
private void processMessage(Message message) {
System.out.println(message.getId() + " - " + message.getConversationId() + " - " + message.getSomeData());
}
}
class MessageComparator implements Comparator<Message> {
@Override
public int compare(Message o1, Message o2) {
    // Long.compare avoids the overflow risk of subtracting the two ids and casting to int
    return Long.compare(o1.getConversationId(), o2.getConversationId());
}
}
Create an executor class implementing Executor. In its execute() method you can put code like the following.
public void execute(Runnable command) {
    // pseudo-code: assumes the submitted task is a custom Runnable type exposing a routing key via getKey()
    final int key = command.getKey();
    // some code to check if it is already running
    final int index = key != Integer.MIN_VALUE ? Math.abs(key) % size : 0;
    workers[index].execute(command);
}
Create each worker with its own queue so that tasks routed to the same worker run sequentially.
private final AtomicBoolean scheduled = new AtomicBoolean(false);
private final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>(maximumQueueSize);
public void execute(Runnable command) {
long timeout = 0;
TimeUnit timeUnit = TimeUnit.SECONDS;
if (command instanceof TimeoutRunnable) {
TimeoutRunnable timeoutRunnable = ((TimeoutRunnable) command);
timeout = timeoutRunnable.getTimeout();
timeUnit = timeoutRunnable.getTimeUnit();
}
boolean offered;
try {
if (timeout == 0) {
offered = workQueue.offer(command);
} else {
offered = workQueue.offer(command, timeout, timeUnit);
}
} catch (InterruptedException e) {
throw new RejectedExecutionException("Thread is interrupted while offering work");
}
if (!offered) {
throw new RejectedExecutionException("Worker queue is full!");
}
schedule();
}
private void schedule() {
//if it is already scheduled, we don't need to schedule it again.
if (scheduled.get()) {
return;
}
if (!workQueue.isEmpty() && scheduled.compareAndSet(false, true)) {
try {
executor.execute(this);
} catch (RejectedExecutionException e) {
scheduled.set(false);
throw e;
}
}
}
public void run() {
try {
Runnable r;
do {
r = workQueue.poll();
if (r != null) {
r.run();
}
}
while (r != null);
} finally {
scheduled.set(false);
schedule();
}
}
This library should help: https://github.com/jano7/executor
ExecutorService underlyingExecutor = Executors.newCachedThreadPool();
KeySequentialRunner<String> runner = new KeySequentialRunner<>(underlyingExecutor);
Message message = retrieveMessage();
Runnable task = new Runnable() {
@Override
public void run() {
// process the message
}
};
runner.run(message.conversationId, task);

Blocking Queue Take() does not retrieve the item

I have the code below:
@Override
public boolean start() {
boolean b = false;
if (status != RUNNING) {
LOGGER.info("Starting Auto Rescheduler Process...");
try {
b = super.start();
final ThreadFactory threadFactory = new ThreadFactoryBuilder().setNameFormat("Rescheduler-Pool-%d").build();
ExecutorService exServ = Executors.newSingleThreadExecutor(threadFactory);
service = MoreExecutors.listeningDecorator(exServ);
} catch (Exception e) {
LOGGER.error("Error starting Auto Rescheduler Process! {}", e.getMessage());
LOGGER.debug("{}", e);
b = false;
}
} else {
LOGGER.info("Asked to start Auto Rescheduler Process but it had already started. Ignoring...");
}
return b;
}
The AutoRescheduler is the runnable below:
private class AutoScheduler implements Runnable {
private static final String DEFAULT_CONFIGURABLE_MINUTES_VALUE = "other";
private static final long DEFAULT_DELAY_MINUTES = 60L;
@Override
public void run() {
try {
while (!Thread.currentThread().isInterrupted()) {
//BLOCKS HERE UNTIL A FINISHED EVENT IS PUT IN QUEUE
final FinishedEvent fEvent = finishedEventsQueue.take();
LOGGER.info("Received a finished Event for {} and I am going to reschedule it", fEvent);
final MyTask task = fEvent.getSource();
final LocalDateTime nextRunTime = caclulcateNextRightTime(task);
boolean b = scheduleEventService.scheduleEventANew(task, nextRunTime);
if (b) {
cronController.loadSchedule();
LOGGER.info("Rescheduled event {} for {}", task, nextRunTime);
}
}
} catch (InterruptedException e) {
LOGGER.error("Interrupted while waiting for a new finishedEventQueue");
Thread.currentThread().interrupt();
}
}
}
I see events being caught and put in the queue. Normally I then see them being rescheduled by the AutoRescheduler.
However, from time to time I stop seeing them being rescheduled, which leads me to believe that the rescheduling thread dies silently. After this happens no more events are taken from the queue until I restart the process (I have a GUI that allows me to call the stop() and start() methods of the public class). After I restart it, though, the blocked events are rescheduled normally, which means that they are indeed in the queue.
Does anyone have an idea?
EDIT
I have reproduced the error in Eclipse. The thread does not die (I have tested with the ExecutorService as well). However, take() still does not take the item from the queue although it is placed there.

JavaFX Task running intermittently

I'm running FX8 (JDK8_u25) and have the following code
private void initializeBrokerCommunication()
{
System.out.println("0");
try {
Task<List<Integer>> initTask = new Task<List<Integer>>() {
@Override
protected List<Integer> call() throws Exception
{
System.out.println("1");
Thread.sleep(500);
DateTime upperBound = new DateTime(masterXAxis.getUpperBound());
DateTime lowerBound = new DateTime(masterXAxis.getLowerBound());
int seconds = Seconds.secondsBetween(lowerBound, upperBound).getSeconds();
lowerBound = lowerBound.minusSeconds(seconds);
System.out.println("2");
return broker.getDeviceData(null, upperBound.toDate(), lowerBound.toDate(), true);
}
};
// setOnSucceeded ensures that the method processHistoricData runs on the Fx thread.
initTask.setOnSucceeded(event -> {
System.out.println("3");
processHistoricData(initTask.valueProperty().getValue(), true);
});
// submit the task and the scheduler decides when to run it.
scheduler.submit(initTask);
}
catch (Exception e) {
e.printStackTrace();
}
System.out.println("4");
}
scheduler is a ScheduledExecutorService object and is initialized as follows
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(5);
The initializeBrokerCommunication() method is wrapped in a class that extends the Application class. Multiple runs of my class produce different output each time.
0-4-1-2-3 is what I expect, but sometimes I see only 0-4 being printed and don't see 1-2-3 printed at all.
Any pointers as to why the task runs intermittently?

ThreadPoolExecutor runs even after ScheduledFuture is cancelled

I'm looking to create a ScheduledThreadPoolExecutor with an unknown pool size. The pool size is determined at run-time, will likely be between 1 and 5, and for this example I used size 2. We use a custom Task that simply executes a method every so often, but that method will eventually throw an exception (which I've simulated with a simple numTimes variable and an if statement). If an exception is thrown, I only want to cancel execution of THAT specific thread! If all threads are cancelled, I want to shut down the ScheduledThreadPoolExecutor. Once numTimes == 5 I simulate the exception to cancel the thread, and I can cancel the thread a number of ways, but they just don't feel right.
As a side note, I placed ScheduledFuture everywhere just to play around with cancelling it.
public class Test
{
static ScheduledThreadPoolExecutor stpe = new ScheduledThreadPoolExecutor(2);
public static void main(String[] args)
{ stpe.scheduleWithFixedDelay(new UpdateTask(1), 0, 1000, TimeUnit.MILLISECONDS);
stpe.scheduleWithFixedDelay(new UpdateTask(2), 0, 5000, TimeUnit.MILLISECONDS);
// stpe.shutdown();
}
public static class UpdateTask implements Runnable
{
int id;
int numTimes = 0;
ScheduledFuture<?> t;
public UpdateTask(int id)
{ this.id = id;
}
public void run()
{ System.out.println("Hello " + id + " num: " + numTimes);
String fn = "C:\\lib" + id;
if (numTimes++ == 5)
{ File f = new File(fn);
f.mkdir();
t.cancel(false);
}
}
}
}
Calling t.cancel() from run() or from main() has the same effect, in that the thread stops executing but the program does not stop running. Naturally, this is because the ThreadPoolExecutor is still doing stuff, despite both threads no longer being scheduled.
I tried invoking shutdown on stpe, but it doesn't finish thread execution. Two directories are created with stpe.shutdown commented out, and they are not created otherwise.
I can't figure out an elegant way to cancel ScheduledFuture, then ScheduledThreadPoolExecutor when all ScheduledFuture's are cancelled.
Final approach
I was not able to get s1.get() to work as described in the answer below, so I simply created my own class to handle it.
public class Test
{
static ScheduledThreadPoolExecutor stpe = new ScheduledThreadPoolExecutor(2);
static CancelUpdateTasks canceller;
public static void main(String[] args)
{ Test t = new Test();
canceller.add(0, stpe.scheduleWithFixedDelay(new UpdateTask(0), 0, 1000, TimeUnit.MILLISECONDS));
canceller.add(1, stpe.scheduleWithFixedDelay(new UpdateTask(1), 0, 5000, TimeUnit.MILLISECONDS));
canceller.waitForSchedules();
stpe.shutdown();
}
public Test()
{ canceller = new CancelUpdateTasks();
}
public static class UpdateTask implements Runnable
{
int id;
int numTimes = 0;
public UpdateTask(int id)
{ this.id = id;
}
public void run()
{ System.out.println("Hello " + id + " num: " + numTimes);
if (numTimes++ == 5)
{ canceller.cancel(id);
}
}
}
public class CancelUpdateTasks
{ List<ScheduledFuture<?>> scheduler;
boolean isScheduled;
public CancelUpdateTasks()
{ scheduler = new ArrayList<ScheduledFuture<?>>();
isScheduled = false;
}
public void waitForSchedules()
{ int schedId = 0;
while(isScheduled)
{ ScheduledFuture<?> schedule = scheduler.get(schedId);
if (schedule.isCancelled())
{ if (schedId == scheduler.size() - 1)
return;
schedId++;
}
else
{ try
{ Thread.sleep(1000);
}
catch (InterruptedException e)
{ e.printStackTrace();
}
}
}
}
public void add(int id, ScheduledFuture<?> schedule)
{ scheduler.add(id, schedule);
if (!isScheduled)
isScheduled = true;
}
public void cancel(int id)
{ scheduler.get(id).cancel(false);
}
public void cancelNow(int id)
{ scheduler.get(id).cancel(true);
}
}
}
You'll want to issue a shutdown on the pool. The JVM will continue to run until there are only daemon threads alive. A ThreadPoolExecutor by default will create non-daemon threads.
Just invoke stpe.shutdown();
Edit: based on the OP's update.
shutdown admittedly works differently for a ScheduledThreadPoolExecutor than for a plain ThreadPoolExecutor. In this case shutdown prevents any scheduled task from being rescheduled. To make it work correctly you will have to wait for the futures to complete. You can do so by calling get() on each ScheduledFuture:
ScheduledFuture sf1 = stpe.scheduleWithFixedDelay(new UpdateTask(1), 0, 1000, TimeUnit.MILLISECONDS);
ScheduledFuture sf2 = stpe.scheduleWithFixedDelay(new UpdateTask(2), 0, 5000, TimeUnit.MILLISECONDS);
sf1.get();
sf2.get();
stpe.shutdown();
In this case both tasks run asynchronously; the main thread first waits for sf1 to complete, then waits for sf2 to complete, and finally shuts down the pool.

Executor/Queue process last known task only

I'm looking to write some concurrent code which will process an event. This processing can take a long time.
While that event is being processed it should record incoming events and then process the last incoming event when it is free to run again. (The other events can be thrown away.) This is a little bit like a FILO queue, but I only need to store one element in the queue.
Ideally I would like to plug in my new Executor into my event processing architecture shown below.
public class AsyncNode<I, O> extends AbstractNode<I, O> {
private static final Logger log = LoggerFactory.getLogger(AsyncNode.class);
private Executor executor;
public AsyncNode(EventHandler<I, O> handler, Executor executor) {
super(handler);
this.executor = executor;
}
@Override
public void emit(O output) {
if (output != null) {
for (EventListener<O> node : children) {
node.handle(output);
}
}
}
@Override
public void handle(final I input) {
executor.execute(new Runnable() {
@Override
public void run() {
try{
emit(handler.process(input));
}catch (Exception e){
log.error("Exception occured whilst processing input." ,e);
throw e;
}
}
});
}
}
I wouldn't do either. I would have an AtomicReference to the event you want to process and add a task to process it in a destructive way.
final AtomicReference<Event> eventRef = new AtomicReference<>();

public void processEvent(Event event) {
    eventRef.set(event);
    executor.submit(new Runnable() {
        public void run() {
            Event e = eventRef.getAndSet(null);
            if (e == null) return;
            // process event
        }
    });
}
This will only ever process the next event when the executor is free, without customising the executor or queue (which can be used for other things).
This also scales to keyed events, i.e. where you want to process only the last event for each key.
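A hypothetical sketch of that keyed variant (the class name and pool size are mine, not from the answer): keep one "latest event" slot per key and let each submitted task drain only its own key's slot.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the keyed variant: one "latest event" slot per key.
public class LatestEventPerKey<K, E> {
    private final ConcurrentMap<K, AtomicReference<E>> latestByKey = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newFixedThreadPool(4); // pool size is arbitrary here

    public void processEvent(K key, E event) {
        AtomicReference<E> slot = latestByKey.computeIfAbsent(key, k -> new AtomicReference<>());
        slot.set(event); // overwrite anything for this key that has not been picked up yet
        executor.submit(() -> {
            E latest = slot.getAndSet(null); // claim the most recent event for this key, if any is left
            if (latest != null) {
                handle(key, latest);
            }
        });
    }

    protected void handle(K key, E event) {
        System.out.println(key + " -> " + event); // stand-in for real processing
    }
}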
I think the key to this is the "discard policy" you need to apply to your Executor. If you only want to handle the latest task then you need a queue size of one and a discard policy that throws away the oldest. Here is an example of an Executor that will do this:
Executor latestTaskExecutor = new ThreadPoolExecutor(1, 1, // Single threaded
30L, TimeUnit.SECONDS, // Keep alive, not really important here
new ArrayBlockingQueue<>(1), // Single element queue
new ThreadPoolExecutor.DiscardOldestPolicy()); // When new work is submitted discard oldest
Then when your tasks come in, just submit them to this executor; if there is already a queued job it will be replaced with the new one:
latestTaskExecutor.execute(() -> doUpdate());
Here is an example app showing this working:
import java.util.Random;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
public class LatestUpdate {
private static final Executor latestTaskExecutor = new ThreadPoolExecutor(1, 1, // Single threaded
30L, TimeUnit.SECONDS, // Keep alive, not really important here
new ArrayBlockingQueue<>(1), // Single element queue
new ThreadPoolExecutor.DiscardOldestPolicy()); // When new work is submitted discard oldest
private static final AtomicInteger counter = new AtomicInteger(0);
private static final Random random = new Random();
public static void main(String[] args) {
LatestUpdate latestUpdate = new LatestUpdate();
latestUpdate.run();
}
private void doUpdate(int number) {
System.out.println("Latest number updated is: " + number);
try { // Wait a random amount of time up to 5 seconds. Processing the update takes time...
Thread.sleep(random.nextInt(5000));
} catch (InterruptedException e) {
e.printStackTrace();
}
}
private void run() {
// Updates a counter every second and schedules an update event
Thread counterUpdater = new Thread(() -> {
while (!Thread.currentThread().isInterrupted()) {
try {
Thread.sleep(1000L); // Wait one second
} catch (InterruptedException e) {
e.printStackTrace();
}
counter.incrementAndGet();
// Schedule this update will replace any existing update waiting
latestTaskExecutor.execute(() -> doUpdate(counter.get()));
System.out.println("New number is: " + counter.get());
}
});
counterUpdater.start(); // Run the thread
}
}
This also covers the case for GUIs where once updates stop arriving you want the GUI to become eventually consistent with the last event received.
public class LatestTaskExecutor implements Executor {
private final AtomicReference<Runnable> lastTask =new AtomicReference<>();
private final Executor executor;
public LatestTaskExecutor(Executor executor) {
super();
this.executor = executor;
}
@Override
public void execute(Runnable command) {
lastTask.set(command);
executor.execute(new Runnable() {
@Override
public void run() {
Runnable task=lastTask.getAndSet(null);
if(task!=null){
task.run();
}
}
});
}
}
@RunWith(MockitoJUnitRunner.class)
public class LatestTaskExecutorTest {
@Mock private Executor executor;
private LatestTaskExecutor latestExecutor;
@Before
public void setup(){
latestExecutor=new LatestTaskExecutor(executor);
}
@Test
public void testRunSingleTask() {
Runnable run=mock(Runnable.class);
latestExecutor.execute(run);
ArgumentCaptor<Runnable> captor=ArgumentCaptor.forClass(Runnable.class);
verify(executor).execute(captor.capture());
captor.getValue().run();
verify(run).run();
}
@Test
public void discardsIntermediateUpdates(){
Runnable run=mock(Runnable.class);
Runnable run2=mock(Runnable.class);
latestExecutor.execute(run);
latestExecutor.execute(run2);
ArgumentCaptor<Runnable> captor=ArgumentCaptor.forClass(Runnable.class);
verify(executor,times(2)).execute(captor.capture());
for (Runnable runnable:captor.getAllValues()){
runnable.run();
}
verify(run2).run();
verifyNoMoreInteractions(run);
}
}
This answer is a modified version of the one from DD which minimizes submission of superfluous tasks.
An atomic reference is used to keep track of the latest event. A custom task is submitted to the queue for potentially processing an event; only the task that gets to read the latest event actually goes ahead and does useful work before clearing out the atomic reference to null. When other tasks get a chance to run and find no event available to process, they just do nothing and pass away silently. Submitting superfluous tasks is avoided by tracking the number of available tasks in the queue. If there is at least one task pending in the queue, we can avoid submitting a new task, as the event will be handled when an already queued task is dequeued.
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
public class EventExecutorService implements Executor {
private final Executor executor;
// the field which keeps track of the latest available event to process
private final AtomicReference<Runnable> latestEventReference = new AtomicReference<>();
private final AtomicInteger activeTaskCount = new AtomicInteger(0);
public EventExecutorService(final Executor executor) {
this.executor = executor;
}
@Override
public void execute(final Runnable eventTask) {
// update the latest event
latestEventReference.set(eventTask);
// read count _after_ updating event
final int activeTasks = activeTaskCount.get();
if (activeTasks == 0) {
// there is definitely no other task to process this event, create a new task
final Runnable customTask = new Runnable() {
@Override
public void run() {
// decrement the count for available tasks _before_ reading event
activeTaskCount.decrementAndGet();
// find the latest available event to process
final Runnable currentTask = latestEventReference.getAndSet(null);
if (currentTask != null) {
// if such an event exists, process it
currentTask.run();
} else {
// somebody stole away the latest event. Do nothing.
}
}
};
// increment tasks count _before_ submitting task
activeTaskCount.incrementAndGet();
// submit the new task to the queue for processing
executor.execute(customTask);
}
}
}
Though I like James Mudd's solution, it still enqueues a second task while the previous one is running, which might be undesirable. If you want to always ignore/discard an arriving task while the previous one has not completed, you can write a wrapper like this:
public class DiscardingSubmitter {
private final ExecutorService es = Executors.newSingleThreadExecutor();
private Future<?> future = CompletableFuture.completedFuture(null); //to avoid null check
public void submit(Runnable r){
if (future.isDone()) {
future = es.submit(r);
}else {
//Task skipped, log if you want
}
}
}
