How can I find out, in the supervisor, which child actor threw the exception?
Basically, I want to do some processing, such as logging the failure to the DB,
before stopping the failing actor. But for this I need to know exactly which actor
had the exception.
My supervisorStrategy code block looks like this:
/* stop task actor on unhandled exception */
private static SupervisorStrategy strategy = new OneForOneStrategy(
        1,
        Duration.create(1, TimeUnit.MINUTES),
        new Function<Throwable, SupervisorStrategy.Directive>() {
            @Override
            public SupervisorStrategy.Directive apply(Throwable t) throws Exception {
                return SupervisorStrategy.stop();
            }
        }
);
@Override
public SupervisorStrategy supervisorStrategy() {
    return strategy;
}
If you read the Fault Tolerance documentation, you can see that, within a supervision strategy, you can get a reference to the failing child actor, according to this piece of info:
If the strategy is declared inside the supervising actor (as opposed
to a separate class) its decider has access to all internal state of
the actor in a thread-safe fashion, including obtaining a reference to
the currently failed child (available as the getSender of the failure
message).
So if you use getSender() inside your supervision strategy, you should be able to determine which child produced the exception and act accordingly.
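As a minimal sketch of that (the only additions are an illustrative logFailureToDb helper and moving the strategy to an instance field, which is what gives the decider access to getSender()):
/* stop task actor on unhandled exception, logging which child failed first */
private final SupervisorStrategy strategy = new OneForOneStrategy(
        1,
        Duration.create(1, TimeUnit.MINUTES),
        new Function<Throwable, SupervisorStrategy.Directive>() {
            @Override
            public SupervisorStrategy.Directive apply(Throwable t) throws Exception {
                ActorRef failedChild = getSender(); // the currently failed child
                logFailureToDb(failedChild.path().name(), t); // hypothetical DB-logging helper
                return SupervisorStrategy.stop();
            }
        }
);
@Override
public SupervisorStrategy supervisorStrategy() {
    return strategy;
}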
If you watch the child, you will receive a Terminated message with an actor field and other info. See also What Lifecycle Monitoring Means. You can also process the failure inside the child actor itself, by overriding its preRestart/postRestart methods.
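A rough sketch of that approach, assuming the supervisor creates the child itself (TaskActor and the child name are illustrative):
// when creating the child, start watching it
ActorRef child = getContext().actorOf(Props.create(TaskActor.class), "task-1");
getContext().watch(child);

// later, in onReceive, the supervisor is told which child was stopped
@Override
public void onReceive(Object message) throws Exception {
    if (message instanceof Terminated) {
        ActorRef stoppedChild = ((Terminated) message).getActor(); // the child that terminated
        // log the failure to the DB, clean up, etc.
    } else {
        unhandled(message);
    }
}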
We are using Activiti BPMN diagrams to run our workflows.
In our main process we're running an additional process (innerProcess)
inside a service task, MyServiceTask; see below.
The issue is that if an exception is thrown in the innerProcess process, I won't get it in MyServiceTask; the exception only bubbles up after the main process has finished.
But I want to be able to catch the exception in MyServiceTask when it happens.
Can you help?
public class MyServiceTask implements JavaDelegate
{
    @Inject
    private RuntimeService runtimeService;

    public void execute(DelegateExecution context) throws Exception
    {
        runtimeService.startProcessInstanceByKey("innerProcess", paramMap);
    }
}
Based on your code, you are not running a second "Activiti". Rather you are initiating a new process instance. All process instances are isolated and errors are associated with a specific instance. The only exception to that rule is when a process instance is a "sub process". In this case, errors can bubble up to the parent process instance.
I would modify your logic to start a sub process instance either via a signal (probably the easiest way) or directly from within the service.
Sub process instances differ only in that they have a parent process instance id which can be set on initialization.
I read the Google docs, but they only cover non-deferred tasks, where you create an XML file with params and can specify a retry count.
But I use deferred tasks:
public static class ExpensiveOperation implements DeferredTask
{
    @Override
    public void run()
    {
        System.out.println("Doing an expensive operation...");
        // expensive operation to be backgrounded goes here
    }
}
and create it like this:
Queue queue = QueueFactory.getDefaultQueue();
queue.add(TaskOptions.Builder.withPayload(new ExpensiveOperation(/*different params*/)));
How do I specify that I don't want it to be retried in case of failure?
I'm not a Java user, but I see this in the DeferredTask interface documentation, which I think you may be able to use:
Normal return from this method is considered success and will not
retry unless DeferredTaskContext.markForRetry() is called.
Exceptions thrown from this method will indicate a failure and will be
processed as a retry attempt unless
DeferredTaskContext.setDoNotRetry(boolean) was set to true.
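Based on that, a minimal sketch (assuming the standard com.google.appengine.api.taskqueue classes) would be to call setDoNotRetry at the top of run(), so that even an exception later in the method does not trigger a retry:
import com.google.appengine.api.taskqueue.DeferredTask;
import com.google.appengine.api.taskqueue.DeferredTaskContext;

public static class ExpensiveOperation implements DeferredTask
{
    @Override
    public void run()
    {
        // mark this task as non-retryable: if the code below throws,
        // the task queue will not re-run it
        DeferredTaskContext.setDoNotRetry(true);
        System.out.println("Doing an expensive operation...");
        // expensive operation to be backgrounded goes here
    }
}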
Lately I tried to write some unit tests for Akka actors, to test the message flow between actors. I observed some strange behaviour in my tests:
Fields:
private TestActorRef<Actor> sut;
private ActorSystem system;
JavaTestKit AnotherActor;
JavaTestKit YetAnotherActor;
The system and actors are created in a method annotated with @Before:
@Before
public void setup() throws ClassNotFoundException {
    system = ActorSystem.apply();
    AnotherActor = new JavaTestKit(system);
    YetAnotherActor = new JavaTestKit(system);
    Props props = MyActor.props(someReference);
    this.sut = TestActorRef.create(system, props, "MyActor");
}
Next
@Test
public void shouldDoSth() throws Exception {
    // given actor
    MyActor actor = (MyActor) sut.underlyingActor();
    // when
    SomeMessage message = new SomeMessage(Collections.emptyList());
    sut.tell(message, AnotherActor.getRef());
    // then
    YetAnotherActor.expectMsgClass(
            FiniteDuration.apply(1, TimeUnit.SECONDS),
            YetSomeMessage.class);
}
In my code I have:
private void processMessage(SomeMessage message) {
    final List<Entity> entities = message.getEntities();
    if (entities.isEmpty()) {
        YetAnotherActor.tell(new YetSomeMessage(), getSelf());
        // return;
    }
    if (entities.size() > workers.size()) {
        throw new IllegalStateException("too many tasks to be started!");
    }
}
Basically, sometimes (very rarely) such a test fails (on another OS), and the exception from the processMessage method is thrown (an IllegalStateException due to the business logic).
Mostly the test passes, as the YetSomeMessage message is received by YetAnotherActor, despite the fact that the IllegalStateException is thrown as well and logged in the stack trace.
As I assume from akka TestActorRef documentation:
This special ActorRef is exclusively for use during unit testing in a single-threaded environment. Therefore, it
overrides the dispatcher to CallingThreadDispatcher and sets the receiveTimeout to None. Otherwise,
it acts just like a normal ActorRef. You may retrieve a reference to the underlying actor to test internal logic.
my system is using only a single thread to process messages received by the actor. Could someone explain to me why, despite the proper assertion, the test fails?
Of course, in proper code I would return after sending YetSomeMessage, but I do not understand how processing on another thread can lead to a test failure.
Since you are using TestActorRef, you are basically doing synchronous testing. As a general rule of thumb, don't use TestActorRef unless you really need it. That thing uses the CallingThreadDispatcher, i.e. it will steal the caller's thread to execute the actor. So the solution to your mystery is that the actor runs on the same thread as your test, and therefore the exception ends up on the test thread.
Fortunately, this test case of yours does not need TestActorRef at all. You can just create the actor as an ordinary one, and everything should work (i.e. the actor will be on a proper separate thread). Please try to do everything with the asynchronous test support: http://doc.akka.io/docs/akka/2.4.0/scala/testing.html#Asynchronous_Integration_Testing_with_TestKit
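A rough sketch of the asynchronous style, reusing the fixtures from the question (the only assumption is that MyActor is created with a plain actorOf instead of TestActorRef):
@Test
public void shouldDoSth() throws Exception {
    // given an ordinary actor running on its own dispatcher
    ActorRef sut = system.actorOf(MyActor.props(someReference), "MyActor");
    // when
    SomeMessage message = new SomeMessage(Collections.emptyList());
    sut.tell(message, AnotherActor.getRef());
    // then: the assertion still works, but a failure inside the actor
    // no longer ends up on the test thread
    YetAnotherActor.expectMsgClass(
            FiniteDuration.apply(1, TimeUnit.SECONDS),
            YetSomeMessage.class);
}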
Please note: I am a Java developer with no working knowledge of Scala (sadly). I would ask that any code examples provided in the answer would be using Akka's Java API.
I am brand-spanking-new to Akka and actors, and am trying to set up a fairly simple actor system:
So a DataSplitter actor runs and splits up a rather large chunk of binary data, say 20GB, into 100 KB chunks. For each chunk, the data is stored in the DataCache via the DataCacher. In the background, a DataCacheCleaner rummages through the cache and finds data chunks that it can safely delete. This is how we prevent the cache from becoming 20GB in size.
After sending the chunk off to the DataCacher for caching, the DataSplitter then notifies the ProcessorPool of the chunk which now needs to be processed. The ProcessorPool is a router/pool consisting of tens of thousands of different ProcessorActors. When each ProcessorActor receives a notification to "process" a 100KB chunk of data, it then fetches the data from the DataCacher and does some processing on it.
If you're wondering why I am bothering even caching anything here (hence the DataCacher, DataCache and DataCacheCleaner), my thinking was that 100KB is still a fairly large message to pass around to tens of thousands of actor instances (100KB * 1,000 = 100MB), so I am trying to just store the 100KB chunk once (in a cache) and then let each actor access it by reference through the cache API.
There is also a Mailman actor that subscribes to the event bus and intercepts all DeadLetters.
So, altogether, 6 actors:
DataSplitter
DataCacher
DataCacheCleaner
ProcessorPool
ProcessorActor
Mailman
The Akka docs preach that you should decompose your actor system based on dividing up subtasks rather than purely by function, but I'm not exactly seeing how this applies here. The problem at hand is that I'm trying to organize a supervisor hierarchy between these actors and I'm not sure what the best/correct approach is. Obviously ProcessorPool is a router that needs to be the parent/supervisor to the ProcessorActors, so we have this known hierarchy:
/user/processorPool/
processorActors
But other than that known/obvious relationship, I'm not sure how to organize the rest of my actors. I could make them all "peers" under one common/master actor:
/user/master/
dataSplitter/
dataCacher/
dataCacheCleaner/
processorPool/
processorActors/
mailman/
Or I could omit a master (root) actor and try to make things more vertical around the cache:
/user/
dataSplitter/
cacheSupervisor/
dataCacher/
dataCacheCleaner/
processorPool/
processorActors/
mailman/
Being so new to Akka I'm just not sure what the best course of action is, and if someone could help with some initial hand-holding here, I'm sure the lightbulbs will all turn on. Just as important as organizing this hierarchy: I'm not even sure which API constructs I can use to actually create the hierarchy in the code.
Organising them under one master makes it easier to manage since you can access all the actors watched by the supervisor (in this case master).
One hierarchical implementation can be:
Master Supervisor Actor
class MasterSupervisor extends UntypedActor {

    private final LoggingAdapter log = Logging.getLogger(context().system(), this);
    private ActorRef dataSplitter;

    // declared as an instance field so the decider can use the actor's logger
    private final SupervisorStrategy strategy = new AllForOneStrategy(2,
            Duration.create(5, TimeUnit.MINUTES),
            new Function<Throwable, Directive>() {
                @Override
                public Directive apply(Throwable t) {
                    if (t instanceof SQLException) {
                        log.error("Error: SQLException");
                        return SupervisorStrategy.restart();
                    } else if (t instanceof IllegalArgumentException) {
                        log.error("Error: IllegalArgumentException");
                        return SupervisorStrategy.stop();
                    } else {
                        log.error("Error: GeneralException");
                        return SupervisorStrategy.stop();
                    }
                }
            });

    @Override
    public SupervisorStrategy supervisorStrategy() { return strategy; }

    @Override
    public void onReceive(Object message) throws Exception {
        if (message.equals("SPLIT")) {
            // CREATE A CHILD OF THE MASTER SUPERVISOR
            if (dataSplitter == null) {
                dataSplitter = context().actorOf(
                        FromConfig.getInstance().props(Props.create(DataSplitter.class)), "DataSplitter");
                // WATCH THE CHILD
                context().watch(dataSplitter);
                log.info("{} has created, is watching and sent JobId = {} message to DataSplitter",
                        self().path(), message);
            }
            // do something with the message, such as forwarding it
            dataSplitter.forward(message, context());
        }
    }
}
DataSplitter Actor
class DataSplitter extends UntypedActor {

    private final LoggingAdapter log = Logging.getLogger(context().system(), this);

    // Inject a service to do the main operation
    DataSplitterService dataSplitterService;

    @Override
    public void onReceive(Object message) throws Exception {
        if (message.equals("SPLIT")) {
            log.info("{} received message: {} from {}", self().path(), message, sender());
            // delegate the actual splitting work to the service
            dataSplitterService.splitData();
        }
    }
}
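To actually wire this hierarchy up under /user, a minimal sketch (names are illustrative, and it assumes any router deployment referenced by FromConfig is present in the configuration):
ActorSystem system = ActorSystem.create("DataSystem");
// MasterSupervisor sits directly under /user and creates/watches
// DataSplitter (and, by extension, the other actors) as its children
ActorRef master = system.actorOf(Props.create(MasterSupervisor.class), "master");
master.tell("SPLIT", ActorRef.noSender());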
I have a Play Framework 2 application that also uses Akka. I have an actor that receives messages from a remote system; the number of such messages can be very large. After a message is received, I log it to the database (using the built-in Ebean ORM) and then continue to process it. I don't care how fast this database logging is, but it definitely should not block the further processing. Here is a simplified code sample:
public class MessageReceiver extends UntypedActor {

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof ServerMessage) {
            ServerMessage serverMessage = (ServerMessage) message;
            ServerMessageModel serverMessageModel = new ServerMessageModel(serverMessage);
            serverMessageModel.save();
            // now send the message to another actor for further processing
        } else {
            unhandled(message);
        }
    }
}
As I understand it, the database insert is blocking in this implementation, so it does not meet my needs. But I can't figure out how to make it non-blocking. I've read about the Future class, but I can't get it to work, since it should return some value, and serverMessageModel.save() returns void. I understand that writing a lot of messages one by one into the database is inefficient, but that is not the issue at the moment.
Am I right that this implementation is blocking? If it is, how can I make it run asynchronously?
The Future solution seems good to me. I haven't used Futures from Java, but you can just return an arbitrary Integer or String if you definitely need some return value.
Another option is to send the message to some other actor that does the saving to the DB. Then you should make sure that that actor's mailbox does not overflow.
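A minimal sketch of that second option, assuming a hypothetical DbWriterActor that owns all the blocking Ebean calls (often such an actor is also given its own dispatcher so blocking DB calls don't starve the default one):
public class DbWriterActor extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof ServerMessage) {
            // the blocking save happens here, on this actor's thread,
            // so MessageReceiver is never blocked by it
            new ServerMessageModel((ServerMessage) message).save();
        } else {
            unhandled(message);
        }
    }
}
In MessageReceiver.onReceive you would then replace serverMessageModel.save() with something like dbWriter.tell(serverMessage, self()), where dbWriter was created earlier with context().actorOf(...).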
Have you considered akka-persistence for this? Maybe that would suit your use-case.
If you wish to use a Future: construct an Akka Future with a Callable (anonymous class), whose call() will actually implement the DB save code. You can put all of this (future creation and call()) in your ServerMessageModel class and call it, say, asyncSave(). Your Future may be a Future<Status>, where Status is the result of asyncSave...
public Future<Status> asyncSave(...) { /* should the params be ServerMessageModel? */
    return future(new Callable<Status>() {
        public Status call() {
            /* do db work here and return the resulting Status */
        }
    }, executionContext); // Futures.future(...) also needs an ExecutionContext to run on
}
In your onReceive you can go ahead with tell to the other actor. NOTE: if you want to make sure that you are firing the tell to the other actor after this future returns, then you could use Future's onSuccess.
Future<Status> f = serverMessageModel.asyncSave();
f.onSuccess(new OnSuccess<Status>() {
    public void onSuccess(Status status) {
        otherActor.tell(serverMessage, self());
    }
}, executionContext);
You can also do failure handling... see http://doc.akka.io/docs/akka/2.3.4/java/futures.html for further details.
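For completeness, a rough sketch of the failure side (same f and executionContext as above; the body is just illustrative):
f.onFailure(new OnFailure() {
    public void onFailure(Throwable failure) {
        // e.g. log the failed DB write or notify some supervising actor
    }
}, executionContext);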
Hope that helps.
Persist actor state with Martin Krasser's akka-persistence extension and my JDBC persistence provider, akka-persistence-jdbc: https://github.com/dnvriend/akka-persistence-jdbc