Exception is not received in the main Activiti process from the inner Activiti process - java

We are using Activiti BPMN diagrams to run our workflows.
In our main process we run an additional process (innerProcess)
inside a service task, MyServiceTask. See below.
The issue is that if an exception is thrown in the innerProcess process, I don't get it in MyServiceTask; it only bubbles up after the main process has finished.
But I want to be able to catch the exception in MyServiceTask when it happens.
Can you help?
import javax.inject.Inject;

import org.activiti.engine.RuntimeService;
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

public class MyServiceTask implements JavaDelegate
{
    @Inject
    private RuntimeService runtimeService;

    public void execute(DelegateExecution context) throws Exception
    {
        // paramMap is populated elsewhere in our code
        runtimeService.startProcessInstanceByKey("innerProcess", paramMap);
    }
}

Based on your code, you are not running a second "Activiti"; rather, you are initiating a new process instance. All process instances are isolated, and errors are associated with a specific instance. The only exception to that rule is when a process instance is a "sub process": in that case, errors can bubble up to the parent process instance.
I would modify your logic to start a sub process instance either via a signal (probably the easiest way) or directly from within the service.
Sub process instances differ only in that they have a parent process instance id, which can be set on initialization.
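If the inner process executes synchronously (Activiti's default unless activities are marked async), a minimal sketch is to wrap the start call and translate the failure yourself; BpmnError is part of the Activiti API, but the error code shown here is illustrative:

public void execute(DelegateExecution context) throws Exception
{
    try {
        runtimeService.startProcessInstanceByKey("innerProcess", paramMap);
    } catch (Exception e) {
        // react here (logging, compensation), then optionally surface the
        // failure as a BPMN error the surrounding process can catch
        throw new org.activiti.engine.delegate.BpmnError("INNER_PROCESS_FAILED");
    }
}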

Related

Unit testing akka actors with JUnit

Lately I have been trying to write some unit tests for Akka actors to test the actors' message flow, and I have observed some strange behaviour in my tests:
Fields:
private TestActorRef<Actor> sut;
private ActorSystem system;
JavaTestKit AnotherActor;
JavaTestKit YetAnotherActor;
The system and actors are created in an @Before-annotated method:
@Before
public void setup() throws ClassNotFoundException {
    system = ActorSystem.apply();
    AnotherActor = new JavaTestKit(system);
    YetAnotherActor = new JavaTestKit(system);
    Props props = MyActor.props(someReference);
    this.sut = TestActorRef.create(system, props, "MyActor");
}
Next, the test:
@Test
public void shouldDoSth() throws Exception {
    // given actor
    MyActor actor = (MyActor) sut.underlyingActor();
    // when
    SomeMessage message = new SomeMessage(Collections.emptyList());
    sut.tell(message, AnotherActor.getRef());
    // then
    YetAnotherActor.expectMsgClass(
            FiniteDuration.apply(1, TimeUnit.SECONDS),
            YetSomeMessage.class);
}
In my code I have:
private void processMessage(SomeMessage message) {
    final List<Entity> entities = message.getEntities();
    if (entities.isEmpty()) {
        YetAnotherActor.tell(new YetSomeMessage(), getSelf());
        // return;
    }
    if (entities.size() > workers.size()) {
        throw new IllegalStateException("too many tasks to be started!");
    }
}
Basically, sometimes (very rarely, and on another OS) the test fails, and the exception from the processMessage method is thrown (an IllegalStateException due to the business logic).
Mostly the test passes, as the YetSomeMessage message is received by YetAnotherActor despite the fact that the IllegalStateException is thrown as well and logged in the stack trace.
As I assume from akka TestActorRef documentation:
This special ActorRef is exclusively for use during unit testing in a single-threaded environment. Therefore, it
overrides the dispatcher to CallingThreadDispatcher and sets the receiveTimeout to None. Otherwise,
it acts just like a normal ActorRef. You may retrieve a reference to the underlying actor to test internal logic.
my system is using only a single thread to process the messages received by the actor. Could someone explain to me why, despite the proper assertion, the test fails?
Of course, in proper code I would return after sending YetSomeMessage, but I do not understand how another thread's processing can lead to the test failure.
Since you are using TestActorRef, you are basically doing synchronous testing. As a general rule of thumb, don't use TestActorRef unless you really need it. That thing uses the CallingThreadDispatcher, i.e. it will steal the caller's thread to execute the actor. So the solution to your mystery is that the actor runs on the same thread as your test, and therefore the exception ends up on the test thread.
Fortunately, this test case of yours does not need TestActorRef at all. You can just create the actor as an ordinary one, and everything should work (i.e. the actor will be on a proper separate thread). Please try to do everything with the asynchronous test support: http://doc.akka.io/docs/akka/2.4.0/scala/testing.html#Asynchronous_Integration_Testing_with_TestKit
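A minimal sketch of the asynchronous style, reusing the names from the question (how YetAnotherActor's ref gets wired into MyActor is assumed to happen via its props):

@Test
public void shouldDoSthAsynchronously() {
    // an ordinary actor ref: the actor runs on its own dispatcher thread,
    // so exceptions stay inside the actor instead of the test thread
    ActorRef actor = system.actorOf(MyActor.props(someReference), "MyActorAsync");
    actor.tell(new SomeMessage(Collections.emptyList()), AnotherActor.getRef());
    // the probe should still observe the reply within the timeout
    YetAnotherActor.expectMsgClass(
            FiniteDuration.apply(1, TimeUnit.SECONDS),
            YetSomeMessage.class);
}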

Akka : How to get actor causing exception on supervisor

How can I find out, from the supervisor, which child actor had an exception?
Basically, I want to do other things, like logging the failure to the DB, before stopping the failing actor. But for that I have to know exactly which actor had the exception.
My supervisorStrategy code block looks like:
/* stop task actor on unhandled exception */
private static SupervisorStrategy strategy = new OneForOneStrategy(
        1,
        Duration.create(1, TimeUnit.MINUTES),
        new Function<Throwable, SupervisorStrategy.Directive>() {
            @Override
            public SupervisorStrategy.Directive apply(Throwable t) throws Exception {
                return SupervisorStrategy.stop();
            }
        }
);

@Override
public SupervisorStrategy supervisorStrategy() {
    return strategy;
}
If you read the Akka documentation's Fault Tolerance section, you can see that, within a supervision strategy, you can get the failing child's actor ref, according to this piece of info:

If the strategy is declared inside the supervising actor (as opposed to a separate class) its decider has access to all internal state of the actor in a thread-safe fashion, including obtaining a reference to the currently failed child (available as the getSender of the failure message).

So if you use getSender inside of your supervision strategy, you should be able to determine which child produced the exception and act accordingly.
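As a hedged sketch of that approach (the strategy must be an instance member declared inside the supervising actor, not static, for getSender to be accessible; logFailureToDb is a placeholder for your own DB logging):

private final SupervisorStrategy strategy = new OneForOneStrategy(
        1,
        Duration.create(1, TimeUnit.MINUTES),
        new Function<Throwable, SupervisorStrategy.Directive>() {
            @Override
            public SupervisorStrategy.Directive apply(Throwable t) throws Exception {
                // getSender() of the failure message is the failed child
                ActorRef failingChild = getSender();
                logFailureToDb(failingChild, t); // placeholder
                return SupervisorStrategy.stop();
            }
        }
);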
As you watch the child, you will receive a Terminated message with an actor field and other info. See also What Lifecycle Monitoring Means. You can also process the failure inside the child actor itself, by overriding its preRestart/postRestart methods.
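A minimal death-watch sketch, assuming an UntypedActor supervisor and a hypothetical ChildActor:

public class Supervisor extends UntypedActor {
    private final ActorRef child;

    public Supervisor() {
        // ChildActor is a placeholder for your own child actor
        child = getContext().actorOf(ChildActor.props(), "child");
        getContext().watch(child); // register for lifecycle monitoring
    }

    @Override
    public void onReceive(Object message) {
        if (message instanceof Terminated) {
            // tells you exactly which actor terminated
            ActorRef dead = ((Terminated) message).actor();
            // e.g. log the failure to the DB here
        } else {
            unhandled(message);
        }
    }
}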

Java shutdown hook across different JVM

Can I attach a Java shutdown hook across JVMs?
I mean, can I attach a shutdown hook from my JVM to a WebLogic server running in a different JVM?
The shutdown hook part is in Runtime.
The across-JVM part you'll have to implement yourself, because only you know how your JVMs can discover and identify each other.
It could be as simple as creating a listening socket at JVM1 startup and having JVM2 register its own port number with it. JVM1 would then send a shutdown notification to JVM2 (to that registered port) in its shutdown hook.
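A minimal sketch of the notifying side, assuming the other JVM's host and port are already known (both values here are illustrative):

import java.io.IOException;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ShutdownNotifier {
    public static void register(final String host, final int port) {
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                try (Socket socket = new Socket(host, port)) {
                    // the receiving JVM reacts to this message however it likes
                    socket.getOutputStream()
                          .write("SHUTDOWN\n".getBytes(StandardCharsets.UTF_8));
                } catch (IOException e) {
                    // the other JVM may already be down; nothing more to do
                }
            }
        }));
    }
}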
The short answer is: you can, but not out of the box, and there are some pitfalls, so please read the Pitfalls section at the end.
A shutdown hook must be a Thread object (see Runtime.addShutdownHook(Thread)) that the JVM can access; thus it must be instantiated within that JVM.
The only way I see to do it is to implement a Runnable that is also Serializable, together with some kind of remote service (e.g. RMI) to which you can pass the SerializableRunnable. This service must then create a Thread, pass the SerializableRunnable to that Thread's constructor, and add it as a shutdown hook to the Runtime.
But there is also another problem in this case: the SerializableRunnable has no references to objects within the remote service's JVM, and you have to find a way for the SerializableRunnable to obtain them or get them injected. So you have the choice between a service locator and a dependency injection mechanism. I will use the service locator pattern for the following examples.
I would suggest defining an interface like this:
public interface RemoteRunnable extends Runnable, Serializable {
    /**
     * Called after de-serialization from a remote invocation to give the
     * RemoteRunnable a chance to obtain service references of the JVM it has
     * been de-serialized in.
     */
    public void initialize(ServiceLocator sl);
}
The remote service method could then look like this:
public class RemoteShutdownHookService {
    public void addShutdownHook(RemoteRunnable rr) {
        // Since an instance of a RemoteShutdownHookService is an object of the
        // remote JVM, it can provide a mechanism that gives access to objects
        // in that JVM, either through a service locator...
        ServiceLocator sl = ...;
        rr.initialize(sl);
        // ...or through dependency injection, in which case the initialize
        // method of RemoteRunnable can be omitted. A short Spring example:
        //
        // AutowireCapableBeanFactory beanFactory = .....;
        // beanFactory.autowireBean(rr);
        Runtime.getRuntime().addShutdownHook(new Thread(rr));
    }
}
and your RemoteRunnable might look like this:
public class SomeRemoteRunnable implements RemoteRunnable {
    private static final long serialVersionUID = 1L;

    private SomeServiceInterface someService;

    @Override
    public void run() {
        // call someService on shutdown
        someService.doSomething();
    }

    @Override
    public void initialize(ServiceLocator sl) {
        someService = sl.getService(SomeServiceInterface.class);
    }
}
Pitfalls
There is only one problem with this approach that is not obvious: the RemoteRunnable implementation class must be available on the remote service's classpath. Thus you cannot just create a new RemoteRunnable class and pass an instance of it to the remote service; you always have to add it to the remote JVM's classpath.
So this approach only makes sense if the RemoteRunnable implements an algorithm that can be configured by the state of the RemoteRunnable.
If you want to dynamically add arbitrary shutdown hook code to the remote JVM without the need to modify the remote JVM's classpath, you must use a dynamic language and pass that script to the remote service, e.g. Groovy.
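For example, here is a hedged sketch of a remote service method that accepts the hook as script text, assuming Groovy is already on the remote JVM's classpath:

import groovy.lang.GroovyShell;

public class RemoteScriptShutdownHookService {
    public void addShutdownHookScript(final String groovyScript) {
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                // the script travels between JVMs as a plain String,
                // so no extra class is needed on the remote classpath
                new GroovyShell().evaluate(groovyScript);
            }
        }));
    }
}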

Add Quartz Source Java Files on the Fly

I have looked around and around, but I have not been able to find a good answer. I would like to create a system based on Quartz that allows people to schedule their own tasks. I will use a pseudo example.
Let's say the main method of my Quartz program is in quartz.java.
Then I have a file called sweep.java that implements the Quartz Job interface.
So in quartz.java, I schedule sweep.java to run every hour. I run quartz.java, and it works fine. GREAT. Now I want to add a dust.java to the Quartz scheduler; however, since this is a production service, I don't want to have to stop quartz.java, add in dust.java, and recompile and run quartz.java again. That downtime would be unacceptable.
Does anyone have any ideas on how I could accomplish this? It seems impossible, because how could you ever feed another Java file into the program without recompiling, linking, etc.?
I hope that this example is clear. Please let me know if I need to clarify any part of it.
Partial answer: it is possible to compile, and then instantiate, a class programmatically.
Here are links to example code:
how to compile from a String;
CompilerOutput;
CompilerOutputDirectory.
The generated class is grabbed in the third source file (see method getGeneratedClass, which returns a Class<?> object).
HOWEVER: keep in mind that doing this is potentially dangerous. One problem, which can be quite serious if you are not careful, is that when you dynamically instantiate a class, its static initialization blocks are executed, and these can potentially wreak havoc on your application. So, in addition, you'll have to create an appropriate SecurityContext.
In the code above, I actually only ever get the Class<?> object and never instantiate it in any way, so no code is executed. But your usage scenario is quite different.
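As a rough illustration of the mechanism (a hedged sketch, not the linked code; the jobs/ directory and SweepJob name are made up for the example):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class JobCompiler {
    public static Class<?> compileAndLoad() throws Exception {
        // compile jobs/SweepJob.java in place (requires a JDK, not just a JRE)
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int result = compiler.run(null, null, null, "jobs/SweepJob.java");
        if (result != 0) {
            throw new IllegalStateException("compilation failed");
        }
        URLClassLoader loader = URLClassLoader.newInstance(
                new URL[] { new File("jobs/").toURI().toURL() });
        // initialize=false defers static initializer blocks, per the warning above
        return Class.forName("SweepJob", false, loader);
    }
}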
I have not tried any of these, but they are worth trying.
1) Consider using the Quartz Camel endpoint.
If my understanding is right, Apache Camel lets you create Camel routes on the fly.
It just needs the camel-context.xml deployed into a container, taking into consideration that the required classes would already be available on the classpath of the container.
2) Quartz lets you create a job declaratively, i.e. with an XML configuration of the job and trigger.
You can find more information here.
3) Now this one requires some effort ;-)
Create an interface with a method that you will execute as part of the job. Let's say it has a method called executeThisAsPartOfJob:
public interface MyDynamicJob
{
    public void executeThisAsPartOfJob();
}
Create your implementations of the job interface.
public class EmailJob implements MyDynamicJob
{
    @Override
    public void executeThisAsPartOfJob()
    {
        System.out.println("Sending Email");
    }
}
Now in your main scheduler engine, use the Observer pattern to store/initiate jobs dynamically.
Something like:
Map<String, MyDynamicJob> jobs = new HashMap<String, MyDynamicJob>();

// Call this method to add a job dynamically.
// If you add a job after the scheduler engine has started, find a way here
// to reiterate over this map without shutting down the scheduler :-).
public void addJob(String someJobName, MyDynamicJob job)
{
    jobs.put(someJobName, job);
}

public void initiateScheduler()
{
    // Iterate over the jobs map to get all registered jobs. Create JobDetail
    // instances dynamically for each job entry, and add your custom job
    // instance to the job data map.
    JobDetail jd1 = JobBuilder.newJob(GenericJob.class)
            .withIdentity("FirstJob", "First Group").build();
    JobDataMap jobDataMap = jd1.getJobDataMap();
    jobDataMap.put("dynamicjob", jobs.get("dynamicjob1"));
}
public class GenericJob implements Job {
    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        System.out.println("Executing job");
        JobDataMap jdm = arg0.getJobDetail().getJobDataMap();
        MyDynamicJob mdj = (MyDynamicJob) jdm.get("dynamicjob");
        // Now execute your custom job method here.
        mdj.executeThisAsPartOfJob();
        System.out.println("Job Execution complete");
    }
}
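To tie the sketch together, a hedged example of actually scheduling the GenericJob built above (jd1 is the JobDetail from initiateScheduler; the hourly schedule is illustrative):

Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
scheduler.start();

Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("FirstTrigger", "First Group")
        .withSchedule(SimpleScheduleBuilder.repeatHourlyForever())
        .build();

scheduler.scheduleJob(jd1, trigger);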

Why is my multi-threaded application being paused?

My multi-threaded application has a main class that creates multiple threads and then waits after it has started some of them. The runnable class I created gets a file list, gets a file, and removes a file by calling a web service. After a thread is done, it notifies the main class to run again. My problem is that it works for a while, but after perhaps an hour it reaches the bottom of the run method (based on the output I see in the log) and then does nothing more. The Java process is still running, but based on the log it is not doing anything.
Main class methods:
Main method:
while (true) {
    // Removed the code here, it was just calling a web service to get a list of companies
    // Removed code here was creating the threads and calling the start method for threads
    mainClassInstance.waitMainClass();
}

public final synchronized void waitMainClass() throws Exception {
    // synchronized (this) {
    this.wait();
    // }
}

public final synchronized void notifyMainClass() throws Exception {
    // synchronized (this) {
    this.notify();
    // }
}
I originally did the synchronization on the instance but changed it to the method. Also, no errors are being recorded in the web service log or the client log. My assumption is that I did the wait and notify wrong, or I am missing some piece of information.
Runnable Thread Code:
At the end of the run method:
// mainClassInstance is a class member variable of the runnable thread class
mainClassInstance.notifyMainClass();
The reason I did a wait and notify process is that I do not want the main class to run unless there is a need to create another thread.
The purpose of the main class is to spawn threads; it has an infinite loop so that it runs forever, creating and finishing threads.
The purpose of the infinite loop is to continually update the company list.
I'd suggest moving from the tricky wait/notify to one of the higher-level concurrency facilities of the Java platform. The ExecutorService probably offers the functionality you require out of the box. (A CountDownLatch could also be used, but it's more plumbing.)
Let's try to sketch an example using your code as a template:
ExecutorService execSvc = Executors.newFixedThreadPool(THREAD_COUNT);
while (true) {
    // Removed the code here, it was just calling a web service to get a list of companies
    List<FileProcessingTask> tasks = new ArrayList<FileProcessingTask>();
    for (Company comp : companyList) {
        tasks.add(new FileProcessingTask(comp));
    }
    List<Future<FileResult>> results = execSvc.invokeAll(tasks); // this call blocks until all tasks are executed
    // for each Future<FileResult> in results: check the result
}

// just like Runnable, but you can return a value -> very useful for
// gathering results after the multi-threaded execution
class FileProcessingTask implements Callable<FileResult> {
    public FileResult call() { ... }
}
------- edit after comments ------
If your getCompanies() call can give you all companies at once, and there's no requirement to check that list continuously while processing, you can simplify the process by creating all work items first and submitting them to the executor service all at once.
List<FileProcessingTask> tasks = new ArrayList<FileProcessingTask>();
for (Company comp : companyList) {
    tasks.add(new FileProcessingTask(comp));
}
The important thing to understand is that the executor service will use the provided collection as an internal queue of tasks to execute. It takes the first task, gives it to a thread of the pool, gathers the result, places the result in the result collection, and then takes the next task in the queue.
If you don't have a producer/consumer scenario (cf. the comments), where new work is produced at the same time that tasks are executed (consumed), then this approach should be sufficient to parallelize the processing work among a number of threads in a simple way.
If you have additional requirements for why the lookup of new work should happen interleaved with the processing of the work, you should make that clear in the question.
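If new work can appear while earlier tasks are still running, a hedged variation is to submit tasks individually as they are discovered, instead of collecting them for invokeAll:

List<Future<FileResult>> results = new ArrayList<Future<FileResult>>();
for (Company comp : companyList) {
    // submit() enqueues the task immediately and returns without blocking
    results.add(execSvc.submit(new FileProcessingTask(comp)));
}
for (Future<FileResult> f : results) {
    FileResult r = f.get(); // blocks until that particular task completes
}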
