Akka/Java here, although I have a basic understanding of Scala. New to Akka. I have a Master class that starts up when the actor system fires up, which manages three children: Fizz, Buzz and Foo.
When Master starts up, that call to doSomething() can throw a NoSuchElementException. If it does, I would like the Master to shut down its three children, kill itself, shut down the actor system as a whole, and then invoke a custom system shutdown hook. My best attempt thus far:
public class MyApp {
public static void main(String[] args) {
ActorSystem actorSystem = ActorSystem.create();
ActorRef master = actorSystem.actorOf(Props.create(Master.class));
master.tell(new Init(), ActorRef.noSender());
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
System.out.println("Shutting down!");
}
});
}
}
public class Master extends AbstractActor {
private Logger log = LoggerFactory.getLogger(this.getClass());
private ActorRef fizz;
private ActorRef buzz;
private ActorRef foo;
public Master() {
super();
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Init.class, init -> {
try {
fizz = context().actorOf(Props.create(Fizz.class));
buzz = context().actorOf(Props.create(Buzz.class));
foo = context().actorOf(Props.create(Foo.class));
long metric = doSomething();
log.info("After all the children started up \"metric\" was: {}", metric);
} catch(NoSuchElementException ex) {
self().tell(PoisonPill.getInstance(), self());
}
}).build();
}
}
My thinking here is:
Since Master is the top-most actor, I can't define a SupervisorStrategy to handle the thrown NoSuchElementException for me, so I have to put a try-catch in there to handle it
My understanding of PoisonPill is that it shuts down the receiving actor's children and then shuts the actor down
However I'm still fuzzy as to whether PoisonPill shuts the actor system down if the actor happens to be the root/top-level actor, and I'm also not seeing how I can wire the PoisonPill to not only shut the actor system down, but also engage the JVM's shutdown hook
When I run this code I don't see any evidence of the actor system shutting down; it just hangs. Any ideas how I can wire all this together to achieve the desired effect?
One way to achieve the desired behavior is to have the master actor call getContext().getSystem().terminate() and register a callback that contains the shutdown logic with ActorSystem.registerOnTermination:
actorSystem.registerOnTermination(new Runnable() {
@Override
public void run() {
System.out.println("Shutting down!");
}
});
// ...
try {
// ...
} catch (NoSuchElementException ex) {
getContext().getSystem().terminate();
}
Coordinated shutdown is available for shutdown procedures that are more involved.
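For reference, a minimal sketch of registering a task with CoordinatedShutdown, assuming Akka 2.5+ and the actorSystem reference from the question's main method (the phase and task name here are illustrative choices):
import akka.Done;
import akka.actor.CoordinatedShutdown;
import java.util.concurrent.CompletableFuture;

// Register a cleanup task to run in a specific phase of coordinated shutdown.
CoordinatedShutdown.get(actorSystem).addTask(
        CoordinatedShutdown.PhaseBeforeActorSystemTerminate(), "log-shutdown",
        () -> {
            System.out.println("Shutting down!");
            return CompletableFuture.completedFuture(Done.getInstance());
        });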
You can change the supervision strategy for the user guardian:
https://doc.akka.io/docs/akka/current/general/supervision.html#user-the-guardian-actor
If you configure the user guardian to escalate exceptions and then let the NoSuchElementException propagate, the exception will end up at the root guardian, with the result that your system will exit.
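A sketch of what that could look like, assuming a custom configurator class (the class and package names are illustrative):
import akka.actor.OneForOneStrategy;
import akka.actor.SupervisorStrategy;
import akka.actor.SupervisorStrategyConfigurator;
import akka.japi.pf.DeciderBuilder;

// Escalates every failure from the user guardian's children to the root guardian.
public class EscalatingGuardianStrategy implements SupervisorStrategyConfigurator {
    @Override
    public SupervisorStrategy create() {
        return new OneForOneStrategy(
                DeciderBuilder.matchAny(ex -> SupervisorStrategy.escalate()).build());
    }
}
Then point the guardian at it in application.conf:
akka.actor.guardian-supervisor-strategy = "com.example.EscalatingGuardianStrategy"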
I need to create an RMI service which can notify events to clients.
Each client registers itself on the server; a client can emit an event and the server will broadcast it to all other clients.
The program works, but the client reference on the server is never garbage collected, and the thread which the server uses to check the client reference never terminates.
So each time a client connects to the server, a new thread is created and never terminated.
The Notifier class can register and unregister a listener.
The broadcast method call each registered listener and send the message back.
public class Notifier extends UnicastRemoteObject implements INotifier{
private List<IListener> listeners = Collections.synchronizedList(new ArrayList<>());
public Notifier() throws RemoteException {
super();
}
@Override
public void register(IListener listener) throws RemoteException{
listeners.add(listener);
}
@Override
public void unregister(IListener listener) throws RemoteException{
boolean remove = listeners.remove(listener);
if(remove) {
System.out.println(listener+" removed");
} else {
System.out.println(listener+" NOT removed");
}
}
@Override
public void broadcast(String msg) throws RemoteException {
for (IListener listener : listeners) {
try {
listener.onMessage(msg);
} catch (RemoteException e) {
e.printStackTrace();
}
}
}
}
The listener is just printing each received message.
public class ListenerImpl extends UnicastRemoteObject implements IListener {
public ListenerImpl() throws RemoteException {
super();
}
@Override
public void onMessage(String msg) throws RemoteException{
System.out.println("Received: "+msg);
}
}
The RunListener client registers a listener, waits a few seconds to receive a message, and then terminates.
public class RunListener {
public static void main(String[] args) throws Exception {
Registry registry = LocateRegistry.getRegistry();
INotifier notifier = (INotifier) registry.lookup("Notifier");
ListenerImpl listener = new ListenerImpl();
notifier.register(listener);
Thread.sleep(6000);
notifier.unregister(listener);
UnicastRemoteObject.unexportObject(listener, true);
}
}
The RunNotifier just publishes the service and periodically sends a message.
public class RunNotifier {
static AtomicInteger counter = new AtomicInteger();
public static void main(String[] args) throws RemoteException, AlreadyBoundException, NotBoundException {
Registry registry = LocateRegistry.createRegistry(1099);
INotifier notifier = new Notifier();
registry.bind("Notifier", notifier);
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
executor.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
try {
int n = counter.incrementAndGet();
System.out.println("Broadcasting "+n);
notifier.broadcast("Hello ("+n+ ")");
} catch (RemoteException e) {
e.printStackTrace();
}
}
},5 , 5, TimeUnit.SECONDS);
try {
System.in.read();
} catch (IOException e) {
}
executor.shutdown();
registry.unbind("Notifier");
UnicastRemoteObject.unexportObject(notifier, true);
}
}
I've seen many Q&As on Stack Overflow about RMI, but none addresses this kind of problem.
I guess I'm making some very big mistake, but I can't spot it.
As you can see in the picture, a new RMI RenewClean thread is created for each incoming connection, and this thread will never terminate.
Once the client disconnects and terminates, the RenewClean thread silently swallows every ConnectionException thrown and keeps polling a client which will never reply.
As a side note, I even tried keeping just a weak reference to the IListener in the Notifier class, and the results are still the same.
This may not be very helpful if you are stuck on JDK 1.8, but when I test on JDK 17, the multiple RMI server threads created for each incoming client (RMI RenewClean-[IPADDRESS:PORT]) are cleaned up on the server and do not show the "will never terminate" behaviour you observed on JDK 1.8. It may be a JDK 1.8 issue, or simply that you are not waiting long enough for the threads to end.
For quicker cleanup, try adjusting the system property for the client thread garbage collection interval, shown here at its default (3600000 ms = 1 hour):
java -Dsun.rmi.dgc.client.gcInterval=3600000 ...
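If you prefer to set it in code, the property can presumably also be set early in main, before any remote objects are exported, since the RMI runtime reads it when its classes initialize (an assumption worth verifying on your JDK):
// Assumption: must run before the RMI runtime classes are first loaded.
System.setProperty("sun.rmi.dgc.client.gcInterval", "600000"); // e.g. 10 minutes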
On my server I added this in one of the API callbacks:
Function<Thread,String> toString = t -> t.getName()+(t.isDaemon() ? " DAEMON" :"");
Set<Thread> threads = Thread.getAllStackTraces().keySet();
System.out.println("-".repeat(40)+" Threads x "+threads.size());
threads.stream().map(toString).forEach(System.out::println);
After RMI server startup it printed names of threads and no instances of "RMI RenewClean":
---------------------------------------- Threads x 12
After connecting many times from a client, the server reported corresponding instances of "RMI RenewClean":
---------------------------------------- Threads x 81
Leaving the RMI server for a while, these gradually shrank back - not to 12 threads -, but low enough to suggest that RMI thread handling is not filling up with many unnecessary daemon threads:
---------------------------------------- Threads x 20
After about an hour all the remaining "RMI RenewClean" were removed - probably due to housekeeping performed at the interval defined by the VM setting sun.rmi.dgc.client.gcInterval=3600000:
---------------------------------------- Threads x 13
Note also that RMI server shutdown is instant at any point - the "RMI RenewClean" daemon threads do not hold up rmi server shutdown.
I have an AutoCloseable whose close() method is being called prematurely. The AutoCloseable is ProcessQueues below. I don't want the close() method to be called when it is currently being called. I'm considering the removal of "implements AutoCloseable" to accomplish that. But then how do I know when to call ProcessQueues.close()?
public class ProcessQueues implements AutoCloseable {
private ArrayList<MessageQueue> queueObjects = new ArrayList<MessageQueue>();
public ProcessQueues() {
queueObjects.add(new FFE_DPVALID_TO_SSP_EXCEPTION());
queueObjects.add(new FFE_DPVALID_TO_SSP_ESBEXCEPTION());
...
}
private void scheduleProcessRuns() {
try {
for (MessageQueue obj : queueObjects) {
monitorTimer.schedule(obj, new Date(), 1); // NOT THE ACTUAL ARGUMENTS
}
}
catch (Exception ex) {
// NOT THE ACTUAL EXCEPTION HANDLER
}
}
public static void main(String[] args) {
try (ProcessQueues pq = new ProcessQueues()) {
pq.scheduleProcessRuns();
} catch (Exception e) {
// NOT THE ACTUAL EXCEPTION HANDLER
}
}
@Override
public void close() throws Exception {
for (MessageQueue queue : queueObjects) {
queue.close();
}
}
}
I want ProcessQueues.close() to be called, but not until the task execution threads of all Timer objects terminate. As written, ProcessQueues.close() will be called as soon as the tasks are scheduled. I can easily solve that by removing "implements AutoCloseable" from the ProcessQueues class (and removing the @Override annotation). But then I have to call ProcessQueues.close() myself. How do I know when the task execution threads of all Timer objects have terminated? That's when I want to call ProcessQueues.close().
Note that MessageQueue isn't instantiated in the resource specification header of a try-with-resources block, so although MessageQueue also implements AutoCloseable, the feature isn't utilized here. I'm explicitly calling MessageQueue.close(). It is in MessageQueue.close() that I release resources. Releasing those resources prematurely causes the task execution threads to fail to complete their tasks.
I'm considering an explicit call to ProcessQueues.close() after rewriting the code to prevent automatic resource deallocation, but again I don't know how to discover the right time for that explicit call.
I considered overriding ProcessQueues.finalize(), but "Java: How to Program", Eleventh Edition advises against that. "You should never use method finalize, because it can cause many problems and there's uncertainty as to whether it will ever get called before a program terminates... Now it's considered better practice for any class that uses system resources... to provide a method that programmers can call to release resources when they're no longer needed in a program." I have such a method. It's ProcessQueues.close(). But when should I call it?
You have conflicting lifecycle issues here.
You have Timer whose lifecycle is 100% in your control. You start it, you stop it, and that's it. But you have no direct introspection in to the status of the threads being managed by the Timer. So, you can't ask it if it has anything currently running, for example.
Then you have your MessageQueue, which is invoked by the Timer. This is the lifecycle you're interested in. You want to wait for all of the MessageQueues to be "done", for assorted values of done. But since the queues are constantly being rescheduled (given the Timer.schedule method that you're using), they're NEVER "done". They process their contents and go off and run again.
So, how is anyone to know when "done" means "done"?
Is it up to the MessageQueue? Or is it up to the ProcessQueues? Who's in command here?
Notice, nothing ever cancels the Timer. It just runs on and on and on.
So, how can one know when MessageQueue can be closed?
If MessageQueue is the real driver here, then you should add lifecycle methods to the MessageQueue that ProcessQueues can monitor to know when to shut things down. For example, you could create a CountDownLatch set for however many MessageQueues are in your list, and then subscribe to a new lifecycle method on the MessageQueue that it calls when it's finished. The callback method can then decrement the CountDownLatch, and the ProcessQueues.close method simply waits on the latch to countdown before closing everything.
public class ProcessQueues implements AutoCloseable, MessageQueueListener {
private ArrayList<MessageQueue> queueObjects = new ArrayList<MessageQueue>();
CountDownLatch latch;
public ProcessQueues() {
queueObjects.add(new FFE_DPVALID_TO_SSP_EXCEPTION());
queueObjects.add(new FFE_DPVALID_TO_SSP_ESBEXCEPTION());
...
queueObjects.forEach((mq) -> {
mq.setListener(this);
});
latch = new CountDownLatch(queueObjects.size());
}
private void scheduleProcessRuns() {
try {
for (MessageQueue obj : queueObjects) {
monitorTimer.schedule(obj, new Date(), 1); // NOT THE ACTUAL ARGUMENTS
}
} catch (Exception ex) {
// NOT THE ACTUAL EXCEPTION HANDLER
}
}
public static void main(String[] args) {
try (ProcessQueues pq = new ProcessQueues()) {
pq.scheduleProcessRuns();
} catch (Exception e) {
// NOT THE ACTUAL EXCEPTION HANDLER
}
}
@Override
public void close() throws Exception {
latch.await();
for (MessageQueue queue : queueObjects) {
queue.close();
}
monitorTimer.cancel();
}
@Override
public void messageQueueDone() {
latch.countDown();
}
}
public interface MessageQueueListener {
public void messageQueueDone();
}
public class MessageQueue extends TimerTask {
MessageQueueListener listener;
public void setListener(MessageQueueListener listener) {
this.listener = listener;
}
private boolean isMessageQueueReallyDone() {
...
}
public void run() {
...
if (isMessageQueueReallyDone() && listener != null) {
listener.messageQueueDone();
}
}
}
Mind you, this means that your try-with-resources block will block waiting on all of the MessageQueues. If that's what you want, then you're good to go.
It also crassly assumes that your MessageQueue.run() knows when to shut down, which goes back to that "who's in control here" thing.
I could terminate the Timer, but having it run perpetually is intentional. The question is in consideration of what happens when something else terminates the Timer and the MessageQueue objects are no longer needed. It is at that point that I would like to call ProcessQueues.close().
If I were to use the Executor framework, rather than Timer, then I could use ExecutorService.awaitTermination(long timeout, TimeUnit unit)
TimerTask is a Runnable, and MessageQueue is already a TimerTask, so MessageQueue need not change.
'ExecutorService.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS)' would effectively wait forever for termination.
public static void main(String[] args) {
try (ProcessQueues pq = new ProcessQueues()) {
pq.scheduleProcessRuns();
// Don't take this literally.
ExecutorService.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
} catch (Exception e) {
// NOT THE ACTUAL EXCEPTION HANDLER
}
}
Of course, awaitTermination isn't a static method, so I'll have to have an ExecutorService, but you get the idea.
After termination, the AutoCloseable feature is leveraged and ProcessQueues.close() is implicitly called.
All that remains is to start the threads for perpetually repeated calls to each TimerTask, using the Executor framework. The answer to that question is ScheduledExecutorService.
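A minimal sketch of that arrangement inside ProcessQueues, assuming MessageQueue extends TimerTask (and is therefore a Runnable) as in the question; the pool size and period are placeholders:
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);

private void scheduleProcessRuns() {
    for (MessageQueue obj : queueObjects) {
        // TimerTask implements Runnable, so MessageQueue can be submitted as-is.
        scheduler.scheduleAtFixedRate(obj, 0, 1, TimeUnit.SECONDS); // NOT THE ACTUAL ARGUMENTS
    }
}

@Override
public void close() throws Exception {
    // Returns only after something else calls scheduler.shutdown()
    // and the running tasks finish; effectively waits forever until then.
    scheduler.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
    for (MessageQueue queue : queueObjects) {
        queue.close();
    }
}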
I think this will work.
In my Spring application, there is a scheduler for executing some task. The @Scheduled annotation is not used there because the schedule is quite complicated: it is dynamic and uses some data from the database. So a simple endless loop with thread sleeping is used, and the sleep interval is changed according to some rules. Maybe all of this could be done with the @Scheduled annotation, but the question is not about that.
Below is simple example:
@Service
public class SomeService {
@PostConstruct
void init() {
new Thread(() -> {
while (true) {
System.out.println(new Date());
try {
Thread.sleep(1000);
} catch (Exception ex) {
System.out.println("end");
return;
}
}
}).start();
}
}
The code works fine, but there is some trouble with killing that new thread. When I stop the application from Tomcat, the new thread keeps running: on the Tomcat manager page I see that the application is stopped, but in the Tomcat log files I still see the output from the thread.
So what is the problem? How should I change the code so that the thread is killed when the application is stopped?
Have you tried implementing a @PreDestroy method, which will be invoked before the WebApplicationContext is closed, to change a boolean flag used in your loop? Though it seems strange that your objects are not discarded even when the application is stopped...
class Scheduler {
private AtomicBoolean booleanFlag = new AtomicBoolean(true);
@PostConstruct
private void init() {
new Thread(() -> {
while (booleanFlag.get()) {
// do whatever you want
}
}).start();
}
@PreDestroy
private void destroy() {
booleanFlag.set(false);
}
}
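Since the loop in the question already sleeps and returns on an exception, interrupting the thread from @PreDestroy is a variation that wakes it promptly instead of waiting for the next flag check; a sketch under that assumption:
@Service
public class SomeService {
    private Thread worker;

    @PostConstruct
    void init() {
        worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                System.out.println(new Date());
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ex) {
                    System.out.println("end");
                    return; // interrupted during sleep: exit the loop
                }
            }
        });
        worker.start();
    }

    @PreDestroy
    void destroy() {
        worker.interrupt();
    }
}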
Question: How can I find out if an actor was stopped gracefully (e.g. through its parent stopping) or through an exception?
Context: With the following deathwatch setup I only get the Terminated.class message in the good test, where I explicitly call stop. I expected a Terminated.class message only in the bad case. Using a supervisorStrategy that stops the child that threw an exception would make no difference, as this leads to the behaviour of the good test. And there I can't find a way to decide if it was caused by an exception or not.
My test setup is the following:
DeathWatch
public class DeathWatch extends AbstractActor {
private final Logger log = LoggerFactory.getLogger(getClass());
@Override
public Receive createReceive() {
return receiveBuilder()
.matchAny(this::logTerminated)
.build();
}
private <P> void logTerminated(final P p) {
log.info("terminated: {}", p);
}
}
Actor
public class MyActor extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("good", s -> { getContext().stop(self()); })
.matchEquals("bad", s -> { throw new Exception("baaaad"); })
.build();
}
}
Test
public class Test {
private TestActorRef<Actor> actor;
@Before
public void setUp() throws Exception {
ActorSystem system = ActorSystem.create();
actor = TestActorRef.create(system, Props.create(MyActor.class), "actor");
TestActorRef.create(system, Props.create(DeathWatch.class), "deathwatch").watch(actor);
}
@Test
public void good() throws Exception {
actor.tell("good", ActorRef.noSender());
}
@Test
public void bad() throws Exception {
actor.tell("bad", ActorRef.noSender());
}
}
Update: Adding the following supervisor, leads to a second logging of "terminated", but yields no further context information.
public class Supervisor extends AbstractActor {
private final ActorRef child = getContext().actorOf(Props.create(MyActor.class), "child");
@Override
public Receive createReceive() {
return receiveBuilder()
.match(String.class, s -> child.tell(s, getSelf()))
.build();
}
@Override
public SupervisorStrategy supervisorStrategy() {
return new OneForOneStrategy(DeciderBuilder.match(Exception.class, e -> SupervisorStrategy.stop()).build());
}
}
The Terminated message is behaving as expected. From the documentation:
In order to be notified when another actor terminates (i.e. stops permanently, not temporary failure and restart), an actor may register itself for reception of the Terminated message dispatched by the other actor upon termination.
And here:
Termination of an actor proceeds in two steps: first the actor suspends its mailbox processing and sends a stop command to all its children, then it keeps processing the internal termination notifications from its children until the last one is gone, finally terminating itself (invoking postStop, dumping mailbox, publishing Terminated on the DeathWatch, telling its supervisor)....
The postStop() hook is invoked after an actor is fully stopped.
The Terminated message isn't reserved for the scenario in which an actor is stopped due to an exception or error; it comes into play whenever an actor is stopped, including scenarios in which the actor is stopped "normally." Let's go through each scenario in your test case:
"Good" case without an explicit supervisor: MyActor stops itself, calls postStop (which isn't overridden, so nothing happens in postStop), and sends a Terminated message to the actor that's watching it (your DeathWatch actor).
"Good" case with an explict supervisor: same as 1.
"Bad" case without an explicit supervisor: The default supervision strategy is used, which is to restart the actor. A restart does not trigger the sending of a Terminated message.
"Bad" case with an explicit supervisor: the supervisor handles the Exception, then stops MyActor, again launching the termination chain described above, resulting in a Termination message sent to the watching actor.
So how does one distinguish between the "good" and "bad" cases when an actor is stopped? Look at the logs. The SupervisorStrategy, by default, logs Stop failures at the ERROR level.
When an exception is thrown, if you want to do more than log the exception, consider restarting the actor instead of stopping it. A restart, unlike a stop, always indicates that something went wrong (as mentioned earlier, a restart is the default strategy when an exception is thrown). You could place post-exception logic inside the preRestart or postRestart hook.
Note that when an exception is thrown while an actor is processing a message, that message is lost, as described here. If you want to do something with that message, you have to catch the exception.
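For illustration, a sketch of overriding the restart hook in MyActor, using the AbstractActor Java API signature; the logging is a placeholder:
import java.util.Optional;

// Called on the failing actor instance before it is replaced by a fresh one.
@Override
public void preRestart(Throwable reason, Optional<Object> message) throws Exception {
    // `message` is the message being processed when the exception was thrown, if any.
    System.out.println("Restarting due to: " + reason + "; failed message: " + message);
    super.preRestart(reason, message);
}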
If you have an actor that you want to inform whenever an exception is thrown, you can send a message to this monitor actor from within the parent's supervisor strategy (the parent of the actor that can throw an exception). This assumes that the parent actor has a reference to this monitor actor. If the strategy is declared inside the parent and not in the parent's companion object, then the body of the strategy has access to the actor in which the exception was thrown (via sender). ErrorMessage below is a made-up class:
override val supervisorStrategy =
OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1 minute) {
case t: Throwable =>
val problemActor = sender
monitorActor ! ErrorMessage(t, problemActor)
Stop
}
I am trying to use the new feature of the KCL library in Java for AWS Kinesis to do a graceful shutdown, by registering a shutdown hook to stop all the record processors and then the worker gracefully. The new library provides a new interface which record processors need to implement. But how does it get invoked?
I tried invoking first worker.requestShutdown() and then worker.shutdown(), and it works. But is that the intended way to use it? What is the point of using both, and what is the benefit of each?
Starting a consumer
As you might know, when you create a Worker, it:
1) creates the consumer offset table in DynamoDB
2) creates leases, and schedules the lease taker and lease renewer at a configured interval
If you have two partitions, then there will be two records in the same DynamoDB table, meaning each partition needs a lease.
eg.
{
"checkpoint": "TRIM_HORIZON",
"checkpointSubSequenceNumber": 0,
"leaseCounter": 38,
"leaseKey": "shardId-000000000000",
"leaseOwner": "ComponentTest_Consumer_With_Two_Partitions_Consumer_192.168.1.83",
"ownerSwitchesSinceCheckpoint": 0
}
{
"checkpoint": "49570828493343584144205257440727957974505808096533676050",
"checkpointSubSequenceNumber": 0,
"leaseCounter": 40,
"leaseKey": "shardId-000000000001",
"leaseOwner": "ComponentTest_Consumer_With_Two_Partitions_Consumer_192.168.1.83",
"ownerSwitchesSinceCheckpoint": 0
}
The schedule for taking and renewing leases is handled by the lease coordinator's ScheduledExecutorService (called leaseCoordinatorThreadPool).
3) Then, for each partition in the stream, the Worker creates an internal PartitionConsumer, which actually fetches the events and dispatches them to your RecordProcessor#processRecords. See ProcessTask#call.
4) On your question: you have to register your IRecordProcessorFactory implementation with the Worker, which will hand one processor created by the factory to each PartitionConsumer.
For example, see the code below, which might be helpful:
KinesisClientLibConfiguration streamConfig = new KinesisClientLibConfiguration(
"consumerName", "streamName", getAuthProfileCredentials(), "consumerName-" + "consumerInstanceId")
.withKinesisClientConfig(getHttpConfiguration())
.withInitialPositionInStream(InitialPositionInStream.TRIM_HORIZON); // "TRIM_HORIZON" = from the tip of the stream
Worker consumerWorker = new Worker.Builder()
.recordProcessorFactory(new DavidsEventProcessorFactory())
.config(streamConfig)
.dynamoDBClient(new DynamoDB(new AmazonDynamoDBClient(getAuthProfileCredentials(), getHttpConfiguration())))
.build();
public class DavidsEventProcessorFactory implements IRecordProcessorFactory {
private Logger logger = LogManager.getLogger(DavidsEventProcessorFactory.class);
@Override
public IRecordProcessor createProcessor() {
logger.info("Creating an EventProcessor.");
return new DavidsEventPartitionProcessor();
}
}
class DavidsEventPartitionProcessor implements IRecordProcessor {
private Logger logger = LogManager.getLogger(DavidsEventPartitionProcessor.class);
//TODO add consumername ?
private String partitionId;
private ShutdownReason RE_PARTITIONING = ShutdownReason.TERMINATE;
public DavidsEventPartitionProcessor() {
}
@Override
public void initialize(InitializationInput initializationInput) {
this.partitionId = initializationInput.getShardId();
logger.info("Initialised partition {} for streaming.", partitionId);
}
@Override
public void processRecords(ProcessRecordsInput recordsInput) {
recordsInput.getRecords().forEach(nativeEvent -> {
String eventPayload = new String(nativeEvent.getData().array());
logger.info("Processing an event {} : {}" , nativeEvent.getSequenceNumber(), eventPayload);
//update offset after configured amount of retries
try {
recordsInput.getCheckpointer().checkpoint();
logger.debug("Persisted the consumer offset to {} for partition {}",
nativeEvent.getSequenceNumber(), partitionId);
} catch (InvalidStateException e) {
logger.error("Cannot update consumer offset to the DynamoDB table.", e);
e.printStackTrace();
} catch (ShutdownException e) {
logger.error("Consumer Shutting down", e);
e.printStackTrace();
}
});
}
@Override
public void shutdown(ShutdownInput shutdownReason) {
logger.debug("Shutting down event processor for {}", partitionId);
if(shutdownReason.getShutdownReason() == RE_PARTITIONING) {
try {
shutdownReason.getCheckpointer().checkpoint();
} catch (InvalidStateException e) {
logger.error("Cannot update consumer offset to the DynamoDB table.", e);
e.printStackTrace();
} catch (ShutdownException e) {
logger.error("Consumer Shutting down", e);
e.printStackTrace();
}
}
}
}
// then start a consumer
consumerWorker.run();
Stopping a consumer
Now, when you want to stop your consumer instance (Worker), you don't need to deal with each PartitionConsumer; that is taken care of by the Worker once you ask it to shut down.
With shutdown, the Worker asks the leaseCoordinatorThreadPool, which was responsible for renewing and taking leases, to stop, and then awaits its termination.
requestShutdown, on the other hand, cancels the lease taker AND notifies the PartitionConsumers about the shutdown.
More importantly, with requestShutdown you can get notified on your RecordProcessor by implementing IShutdownNotificationAware as well. That way, in the case of a race condition where your RecordProcessor is processing an event while the Worker is about to shut down, you can still commit your offset and then shut down.
requestShutdown returns a ShutdownFuture, which then calls back worker.shutdown.
You will have to implement the following method on your RecordProcessor to get notified on requestShutdown:
class DavidsEventPartitionProcessor implements IRecordProcessor, IShutdownNotificationAware {
private Logger logger = LogManager.getLogger(DavidsEventPartitionProcessor.class);
private String partitionId;
// few implementations
@Override
public void shutdownRequested(IRecordProcessorCheckpointer checkpointer) {
logger.debug("Shutdown requested for {}", partitionId);
}
}
But if you lose the lease before being notified, the method might not be called.
Summary of your questions
The new library provides a new interface which record processors needs
to be implemented. But how does it get invoked?
Implement IRecordProcessorFactory and IRecordProcessor,
then wire your RecordProcessorFactory to your Worker.
Tried invoking first the worker.requestShutdown() then
worker.shutdown() and it works. But is it any intended way to use it?
You should use requestShutdown() for graceful shutdown, which will take care of the race condition. It was introduced in kinesis-client 1.7.1.
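To tie this back to the shutdown hook in the question, a hedged sketch of the wiring, assuming the 1.7.x API where requestShutdown returns a Future as described above; the timeout is a placeholder:
// Assumption: `worker` is the Worker built earlier; requestShutdown returns a Future.
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    try {
        // Wait for the graceful shutdown (lease taker cancelled, processors notified).
        worker.requestShutdown().get(30, TimeUnit.SECONDS);
    } catch (Exception e) {
        // Fall back to an immediate stop if graceful shutdown fails or times out.
        worker.shutdown();
    }
}));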