Let me come directly to the use case.
I have a number of work items in my process, say A, B, C. They execute in the order A ---> B ---> C.
In my case, B is a call to a third-party web service. C should run only if B succeeds. If the call to the web service fails, the system should retry after 5 minutes, with the number of retries limited to 3.
How can I achieve this using jBPM 6?
Some options I understand from the documentation are:
1) I can use a work item handler. Inside the handler, I would start another thread that does the retries and finally calls the completeWorkItem() method. But in this case my process engine thread will wait unnecessarily for the completeWorkItem() call.
2) I can use a command for the retry. But if I call a command, it will execute in another thread while the process thread moves on to C, which is not desirable.
How can I create a process so that B executes in the background and notifies the engine when it can continue with C?
Please advise.
Thanks in advance.
Please comment if my question is not clear enough to answer.
Your question is not completely clear; however, here is an answer that will hopefully provide some clarity:
For asynchronous execution, you should follow the guidelines in the documentation: jBPM 6.0 Async Documentation
Given your process flow, if you use a Command and a process defined as A -> B -> C, C will not start until the command completes.
To have commands run in parallel, you use parallel branches. For example, if Script1 and Script2 were commands on parallel branches, they would execute in parallel, and an Email task after the joining gateway would only execute once both scripts complete.
A command signals completion simply by returning from its execute method:
import org.kie.api.executor.Command;
import org.kie.api.executor.CommandContext;
import org.kie.api.executor.ExecutionResults;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class MyCommand implements Command {
    private static final Logger logger = LoggerFactory.getLogger(MyCommand.class);
    @Override
    public ExecutionResults execute(CommandContext ctx) throws Exception {
        // Set results if they exist; otherwise return an empty ExecutionResults.
        ExecutionResults results = new ExecutionResults();
        // The key would match the name of an output parameter for the work item.
        // results.setData("result", "result data");
        logger.info("Command finished execution: " + this.getClass());
        logger.debug("Results of executing command: {}", results);
        return results;
    }
}
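If you go the command route, the jBPM 6 executor runs commands scheduled through the "async" work item handler. A registration sketch along the lines of the docs (the executorService variable is assumed to be a running jBPM ExecutorService, and PrintOutCommand is the built-in sample command; substitute your own command class):
// Register the handler once; "async" tasks are then executed by the jBPM
// executor, and the process only moves past the node once the command completes.
ksession.getWorkItemManager().registerWorkItemHandler("async",
        new AsyncWorkItemHandler(executorService, "org.jbpm.executor.commands.PrintOutCommand"));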
Add an XOR gateway after node B. Add a script to node B that sets the status and retry count of the web service call (on success, status_b = true; on failure, status_b = false and retry_count++). The XOR gateway routes to C if retry_count >= 3 or status_b == true; otherwise it routes back to B. A sketch of that script follows.
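A minimal sketch of that script, assuming the Java dialect (where jBPM exposes the kcontext process context) and that the service call stores its outcome in a hypothetical ws_success process variable:
// Script in node B (Java dialect); kcontext is jBPM's ProcessContext.
Boolean success = (Boolean) kcontext.getVariable("ws_success"); // hypothetical flag
Integer retryCount = (Integer) kcontext.getVariable("retry_count");
if (retryCount == null) {
    retryCount = 0;
}
kcontext.setVariable("status_b", Boolean.TRUE.equals(success));
if (!Boolean.TRUE.equals(success)) {
    kcontext.setVariable("retry_count", retryCount + 1);
}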
So I have a function that deletes some object, thing. This can take a while, half an hour or so, and I want to check whether it was successfully deleted.
@Test
public void successfulThingDelete() {
    Thing thing = new Thing();
    deleteThing(thing);
    if ("deleted".equals(thing.getStatus())) {
        // pass
    } else {
        fail("thing was not deleted");
    }
}
I want to be able to continually check the status of thing (i.e. thing.getStatus()) and pass the test once it is deleted. But if a certain time elapses and it's not deleted, then the code has failed and the test should fail. I'm assuming I need to introduce a new thread for this pinging of the status, but I'm not sure how to add that within this method. Thanks for any help!
I would go for Awaitility. With that you can write tests like:
@Test
public void updatesCustomerStatus() throws Exception {
// Publish an asynchronous event:
publishEvent(updateCustomerStatusEvent);
// Awaitility lets you wait until the asynchronous operation completes:
await().atMost(5, SECONDS).until(customerStatusIsUpdated());
...
}
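Applied to your case, a sketch might look like this (assuming Thing.getStatus() returns a String; await and atMost are from the Awaitility API, package org.awaitility in recent versions):
import static java.util.concurrent.TimeUnit.MINUTES;
import static org.awaitility.Awaitility.await;

@Test
public void successfulThingDelete() {
    Thing thing = new Thing();
    deleteThing(thing);
    // Poll getStatus() until it reports "deleted"; fail if 30 minutes pass first.
    await().atMost(30, MINUTES).until(() -> "deleted".equals(thing.getStatus()));
}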
I'm guessing that this is a JUnit test. The @Test annotation allows a timeout attribute in the form @Test(timeout=1000) - the value is in milliseconds. So calculate the milliseconds in thirty minutes - 1800000 - and use that. JUnit will fail the test if it isn't finished in that time.
@Test(timeout=1800000)
public void successfulThingDelete(){...
If the test runs its course and finishes before the time limit, then the usual coded assertions happen and the test ends. If the test actions take longer, then JUnit will interrupt whatever's running and fail the test overall.
Ref - https://github.com/junit-team/junit4/wiki/timeout-for-tests
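Combined with a simple polling loop, a sketch of the whole test might look like this (assuming the deleteThing() and getStatus() methods from your snippet):
@Test(timeout = 1800000) // 30 minutes in milliseconds
public void successfulThingDelete() throws InterruptedException {
    Thing thing = new Thing();
    deleteThing(thing);
    // Poll once per second; JUnit interrupts the loop and fails the test
    // if the timeout elapses before the status flips to "deleted".
    while (!"deleted".equals(thing.getStatus())) {
        Thread.sleep(1000);
    }
}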
I have a situation where I want to execute a system process on each worker within Spark. I want this process to be run on each machine once. Specifically, this process starts a daemon which needs to be running before the rest of my program executes. Ideally this should execute before I've read any data in.
I'm on Spark 2.0.2 and using dynamic allocation.
You may be able to achieve this with a combination of a lazy val and a Spark broadcast. It will be something like the below. (I have not compiled the code below; you may have to change a few things.)
object ProcessManager {
  // Runs once per JVM, the first time `start` is referenced.
  lazy val start: Unit = {
    // start your process here.
  }
}
You can broadcast this object at the start of your application before you do any transformations.
val pm = sc.broadcast(ProcessManager)
Now, you can access this object inside your transformation like you do with any other broadcast variables and invoke the lazy val.
rdd.mapPartitions { itr =>
  pm.value.start // touching the lazy val triggers the process once per executor JVM
  // Other stuff here.
  itr
}
An object with static initialization which invokes your system process should do the trick.
object SparkStandIn extends App {
object invokeSystemProcess {
import sys.process._
val errorCode = "echo Whatever you put in this object should be executed once per jvm".!
def doIt(): Unit = {
// this object will construct once per jvm, but objects are lazy in Scala,
// so nothing runs until the object is first referenced;
// another way to make sure instantiation happens is to check that the errorCode does not represent an error
}
}
invokeSystemProcess.doIt()
invokeSystemProcess.doIt() // even if doIt is invoked multiple times, the static initialization happens once
}
A specific answer for a specific use case: I have a cluster with 50 nodes and I wanted to know which ones have the CET timezone set:
import spark.implicits._ // assumed in scope, needed for .toDS on a local collection

(1 until 100).toSeq.toDS
  .mapPartitions { itr =>
    // Run the shell command once per partition, on whichever executor hosts it.
    sys.process.Process(
      Seq("bash", "-c", "echo $(hostname && date)")
    ).lines.toIterator
  }
  .collect()
  .filter(_.contains(" CET "))
  .distinct
  .sorted
  .foreach(println)
Notice that I don't think it's guaranteed you'll get a partition on every node, so the command might not run on every node, even when using a 100-element Dataset in a cluster with 50 nodes as in the previous example.
I would like to diagnose an error. I don't believe I need to describe the whole scenario to get a good solution to my question. So: I would like to create some debug information on the workers and display it on the driver, possibly in real time.
I read somewhere that issuing a System.out.println("DEBUG: ...") on a worker produces a line in the executor log, but currently I'm having trouble retrieving those logs. Aside from that, it would still be useful if I could see some debug noise on the driver as the calculation runs.
(I also figured out a workaround, but I don't know if I should apply it or not. At the end of each worker task I could append elements to a sequence file and I could monitor that, or check it at the end.)
One way I can think of doing this is to (ab)use a custom accumulator to send messages from the workers to the driver. This will get arbitrary String messages from the workers to the driver, where you'd print the contents to collect the info. It's not as real-time as wished for, since it depends on the program's execution.
import org.apache.spark.AccumulatorParam

// Merges worker messages by concatenating them with newlines.
object DebugInfoAccumulatorParam extends AccumulatorParam[String] {
  def zero(value: String): String = value
  def addInPlace(s1: String, s2: String): String = s1 + "\n" + s2
}

val debugInfo = sparkContext.accumulator("", "debug info")(DebugInfoAccumulatorParam)
rdd.map { elem =>
  ...
  ...
  // this happens on each worker
  debugInfo += "something happened here"
}
//this happens on the driver
println(debugInfo)
Not sure why you cannot access the worker logs - that would be the most straightforward solution BTW.
I have a small problem with creating threads in EJB. OK, I understand why I cannot use them in EJB, but I don't know how to replace them while keeping the same functionality. I am trying to download 30-40 web pages/files, and I need to start downloading all of them at (approximately) the same time. This is needed because if I run them in a single thread, one after another, it takes more than 3 minutes.
I tried the @Asynchronous annotation, but nothing happened.
public void execute(String lang2, String lang1, int number) {
    Stopwatch timer = new Stopwatch().start();
    htmlCodes.add(URL2String(URLs.get(number)));
    timer.stop();
    System.out.println(number + ":" + Thread.currentThread().getName()
            + timer.elapsedMillis() + " milliseconds");
}

private void findMatches(String searchedWord, String lang1, String lang2) {
    articles = search(searchedWord);
    for (int i = 0; i < articles.size(); i++) {
        execute(lang1, lang2, i);
    }
}
Here are two really good SO answers that can help. The first gives you your options, and the second explains why you shouldn't spawn threads in an EJB. The problem with the first answer is that it doesn't say much about the EJB 3.0 options. So, here's a tutorial on using @Asynchronous.
No offense, but I don't see any evidence in your code that you've read this tutorial yet. Your asynchronous method should return a Future. As the tutorial says:
The client may retrieve the result using one of the Future.get methods. If processing hasn’t been completed by the session bean handling the invocation, calling one of the get methods will result in the client halting execution until the invocation completes. Use the Future.isDone method to determine whether processing has completed before calling one of the get methods.
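A minimal sketch of what that could look like, assuming an EJB 3.1 container and your existing URL2String helper (the bean and method names here are hypothetical):
import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class PageDownloader {

    @Asynchronous
    public Future<String> download(String url) {
        // Runs on a container-managed thread; the caller gets a Future back immediately.
        String html = URL2String(url); // your existing helper, assumed accessible here
        return new AsyncResult<String>(html);
    }
}
The caller invokes download() once per URL, collects the Futures, and only then calls get() on each one, so all the downloads run concurrently.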
Is there an easy way to retrieve a job and check e.g. its status with Play?
I have a few encoding/downloading jobs which run for a long time. In some cases I want to cancel them.
Is there a way to retrieve a list of Jobs or something?
E.g. one Job calls the FFMPEG encoder using the ProcessBuilder. I would like to be able to get this Job and kill the process if it is no longer required (e.g. the wrong file was uploaded and I don't want to wait an hour for it to finish). If I can get a handle to that Job, then I can get to the process as well.
I am using Play 1.2.4
See JobsPlugin.java to see how to list all the scheduledJobs.
Getting the currently executing task is trickier, but you can find your jobs in the JobsPlugin.scheduledJobs list by checking each entry's class, and then call a method on your custom Job to tell it to cancel.
Something like
for (Job<?> job : JobsPlugin.scheduledJobs) {
if (job instanceof MyJob) {
((MyJob) job).cancelWork();
}
}
where cancelWork is your custom method. A sketch of what such a job might look like is below.
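A minimal sketch of a cancellable job, assuming Play 1.x (play.jobs.Job) and hypothetical inputFile/outputFile fields:
import play.jobs.Job;

public class MyJob extends Job {

    private final String inputFile;
    private final String outputFile;
    private volatile Process process;

    public MyJob(String inputFile, String outputFile) {
        this.inputFile = inputFile;
        this.outputFile = outputFile;
    }

    @Override
    public void doJob() throws Exception {
        // Start FFMPEG and block this job thread until it exits.
        process = new ProcessBuilder("ffmpeg", "-i", inputFile, outputFile).start();
        process.waitFor();
    }

    // Called from the scheduledJobs loop above to kill the encoder mid-run.
    public void cancelWork() {
        Process p = process;
        if (p != null) {
            p.destroy();
        }
    }
}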