I have a BPMN process that, once started, continues its execution forever based on a timer cycle event. There is no end event for it.
I recently made a few changes to the workflow and redeployed it to Camunda. Since the existing process instances are still running, I need a way to stop them, which I am finding difficult to do through the workflow.
How can I stop the existing execution when a new workflow starts its execution? Can we achieve that using the workflow itself? REST / Java coding cannot be used to achieve this.
I have another question, regarding an order-by query in Camunda.
From the above scenario, I ended up seeing quite a few similar variables in the variable table. How can I get the latest variable out of them? orderByActivityInstanceId is the only option I saw, which I feel is not reliable.
You can use other events (conditional, message, or signal) to react to the situation in which you want to stop the looping process. For instance, you can add an event sub-process with an interrupting message start event to your process model.
To your second point: https://docs.camunda.org/manual/7.15/reference/rest/history/activity-instance/get-activity-instance-query/
sortBy — Sort the results by a given criterion. Valid values are activityInstanceId, instanceId, executionId, activityId, activityName, activityType, startTime, endTime, duration, definitionId, occurrence and tenantId. Must be used in conjunction with the sortOrder parameter.
https://docs.camunda.org/manual/7.15/reference/rest/variable-instance/get/
is another option
To stop all the active process instances in Camunda, you can call the Camunda REST API or use the Java API.
Using REST API
Activate/Suspend Process Instance By Id
Using Java
Suspend Process Instances
If you would like to suspend all process instances of a given process definition, you can use the method suspendProcessDefinitionById(...) of the RepositoryService and specify the suspendProcessInstances option.
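For example, a minimal sketch using the Java API (the engine lookup is standard; the process definition id is hypothetical):

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.RepositoryService;

public class SuspendExample {
    public static void main(String[] args) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        RepositoryService repositoryService = engine.getRepositoryService();
        repositoryService.suspendProcessDefinitionById(
                "invoice:1:abc123", // process definition id (hypothetical)
                true,               // suspendProcessInstances: also suspend running instances
                null);              // suspensionDate: null suspends immediately
    }
}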
Thanks a lot, I appreciate your responses @amine & @rob.
I got it resolved using a signal event. Every time a new process is deployed, it triggers a signal event that stops the recursion.
To sort the data there are options within Camunda, but I did it differently.
If there is more than one variable, I fetch the latest one using the versionTag from the process definition table, roughly as in the sketch below.
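A sketch of that lookup (the definition key "myProcess" is hypothetical):

// Resolve the latest deployed definition and read its version tag
ProcessDefinition latest = repositoryService.createProcessDefinitionQuery()
        .processDefinitionKey("myProcess")
        .latestVersion()
        .singleResult();
String versionTag = latest.getVersionTag(); // then pick the variables tied to this version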
I need to know if there is some pattern for the following scenario:
I need to call, via REST API, some long-running job which returns a response containing a job instance id and the current status of the job (STARTED, PROCESSING, SUCCESS, FAILED, CANCELLED, and so on).
After this I need to call another endpoint with the job instance id from the previous call to check whether my job has finished or not.
The second step will be executed many times in a loop, with some delay and, of course, with a defined maximum number of checks.
This routine will always be called from the current thread, and the method will block until some final status is reached, the maximum check count is exceeded, or some exception occurs during monitoring.
My idea is that a developer would implement some interface, ExecutableMonitoredJob, with two methods: the first executes some operation, and the second determines whether we are finished or not. I would like to keep it very abstract, because the operation might not be only a REST call but, for example, also some DB stored procedure or file creation, so the second method (monitoring the status) can have different implementations. Furthermore, I would like to be able to access the return value of the first operation in the form of some context.
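A rough sketch of what I have in mind (all names are hypothetical):

import java.util.concurrent.TimeUnit;

// execute() starts the operation and returns a context (e.g. the job
// instance id); isFinished() uses that context to check the status.
interface ExecutableMonitoredJob<C> {
    C execute() throws Exception;
    boolean isFinished(C context) throws Exception;
}

// Blocking driver: runs the job, then polls with a fixed delay up to a
// maximum number of checks.
class JobPoller {
    static <C> C runAndAwait(ExecutableMonitoredJob<C> job,
                             long delayMillis, int maxChecks) throws Exception {
        C context = job.execute();
        for (int i = 0; i < maxChecks; i++) {
            if (job.isFinished(context)) {
                return context; // final status reached
            }
            TimeUnit.MILLISECONDS.sleep(delayMillis);
        }
        throw new IllegalStateException("Job did not finish within " + maxChecks + " checks");
    }
}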
I looked at some tutorials on Spring Batch, but I think that would be a bit of an overkill for now.
Does a suitable solution for my problem exist?
Thanks.
I have a BPMN process with one end event reachable by two paths. These two paths finish with approximately the same automatic task. In one case everything is fine, but not on the second path.
On this second path the end event is correctly fired (I checked the act_hi_actinst table using my proc_inst_id_ value), but the end_act_id_ column is not updated at all. Same for end_time_ and duration_.
I really need these columns to be updated so I can check which processes are over or not. I don't know if it's important, but I have some multi-instance tasks in my process (cancelled by passing through certain tasks).
Thanks for your help!
[Image: end event in my Activiti table]
I found the problem.
It was the multi-instance tasks in my project that fooled me. I had put boundary signal events on my multi-instance tasks to cancel them when certain tasks complete. I had linked those boundary signals to my last inclusive gateway, but that was a mistake: some instances of my multi-instance tasks did not finish properly.
I simply linked my boundary signals to the end of the workflow instead, and it's working well now.
I am trying to create a custom processor using the Luwak Lucene indexer, so I can run queries on incoming flow files. What I am trying to figure out is the best way to update the query indexes that exist inside of the Luwak monitor (example code below).
EDIT - More Usage Context
By update, I mean allowing an outside user to add / update / remove the queries that are being run against the incoming flow files. We would start with a fixed set of queries, but then want to allow one or more users to change the queries being executed against the incoming messages. Herein lies the challenge: changing the queries that are being executed.
Are there any other options I should consider? It appears to take about 20 seconds to update the queries if there are 10k of them. This would most likely be rare, but reload / startup time is something I am trying to consider.
Options I have considered:
Use an UpdateAttribute processor and update on every flow file. Not ideal, especially if there are a lot of queries to index.
Use HTTP, AWS SQS, etc. to send a high-priority flow file that triggers an update (higher priority than any other source). Not terrible, but still doesn't seem right.
Use the NiFi API to start / stop the processor on update. Doesn't seem like a very efficient way to perform the updates, especially if they happen quite frequently.
Instantiate Monitor:
Monitor monitor = new Monitor(new LuceneQueryParser("field"), new TermFilteredPresearcher());
Add Queries - What I am trying to optimize:
//Add queries to the monitor
for (Map.Entry<String, String> entry : bucketList.entrySet()) {
    MonitorQuery q = new MonitorQuery(entry.getKey(), entry.getValue());
    monitor.update(q);
}
When your processor starts, you could start a background timer thread that periodically builds a new Monitor and then replaces the one being used by the processor.
You would probably want to make a member variable in your processor like:
AtomicReference<Monitor> monitorHolder = new AtomicReference<Monitor>();
Then in @OnScheduled you can build the initial Monitor and set it in the holder.
Then in onTrigger you always first get the Monitor:
Monitor localMonitor = monitorHolder.get();
Then in the background thread you can call monitorHolder.set(newMonitor) which won't affect the current execution of the processor, but will take effect the next time onTrigger is called.
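Putting it together, a minimal sketch (the refresh interval and the buildMonitor() helper are assumptions, not part of NiFi or Luwak):

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.nifi.annotation.lifecycle.OnScheduled;
import org.apache.nifi.annotation.lifecycle.OnStopped;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

import uk.co.flax.luwak.Monitor;
import uk.co.flax.luwak.presearcher.TermFilteredPresearcher;
import uk.co.flax.luwak.queryparsers.LuceneQueryParser;

public class LuwakQueryProcessor extends AbstractProcessor {

    private final AtomicReference<Monitor> monitorHolder = new AtomicReference<>();
    private ScheduledExecutorService refresher;

    @OnScheduled
    public void setup(final ProcessContext context) throws IOException {
        monitorHolder.set(buildMonitor()); // initial monitor with the current query set
        refresher = Executors.newSingleThreadScheduledExecutor();
        // Periodically rebuild the monitor in the background and swap it in;
        // in-flight onTrigger calls keep using the monitor they already fetched.
        refresher.scheduleAtFixedRate(() -> {
            try {
                monitorHolder.set(buildMonitor());
            } catch (IOException e) {
                getLogger().error("Failed to rebuild Luwak monitor", e);
            }
        }, 5, 5, TimeUnit.MINUTES);
    }

    @OnStopped
    public void teardown() {
        if (refresher != null) {
            refresher.shutdownNow();
        }
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        Monitor localMonitor = monitorHolder.get(); // always read the latest swapped-in monitor
        // ... match the incoming flow file against localMonitor ...
    }

    // Hypothetical helper: create a Monitor and load the query set from
    // wherever the user-managed queries live (DB, file, etc.).
    private Monitor buildMonitor() throws IOException {
        return new Monitor(new LuceneQueryParser("field"), new TermFilteredPresearcher());
    }
}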
Suppose I need to execute N tasks on the same thread. The tasks may sometimes need some values from an external storage. I have no idea in advance which task may need such a value and when. It is much faster to fetch M values in one go than to fetch the same M values in M separate queries to the external storage.
Note that I cannot expect cooperation from the tasks themselves; they can be considered as nothing more than java.lang.Runnable objects.
Now, the ideal procedure, as I see it, would look like this:
Execute all tasks in a loop. If a task requests an external value, remember this, suspend the task and switch to the next one.
Fetch the values requested at the previous step, all at once.
Remove all completed tasks (suspended ones don't count as completed).
If there are still tasks left, go to step 1, but instead of executing a task, continue its execution from the suspended state.
As far as I can see, the only way to "suspend" and "resume" something would be to remove its related frames from the JVM stack, store them somewhere, and later push them back onto the stack and let the JVM continue.
Is there any standard (not involving hacking at lower level than JVM bytecode) way to do this?
Or can you maybe suggest another possible way to achieve this (other than starting N threads or making tasks cooperate in some way)?
It's possible using something like Quasar, which does stack slicing via an agent. Some degree of cooperation from the tasks is helpful, but it is possible to use AOP to insert suspension points from outside.
(IMO it's better to be explicit about what's going on (using e.g. Future and ForkJoinPool). If some plain code runs on one thread for a while and is then "magically" suspended and jumps to another thread, this can be very confusing to debug or reason about. With modern languages and libraries the overhead of being explicit about the asynchronicity boundaries should not be overwhelming. If your tasks are written in terms of generic types then it's fairly easy to pass-through something like scalaz Future. But that wouldn't meet your requirements as given).
As mentioned, Quasar does exactly that (it usually schedules N fibers on M threads, but you can set M to 1), using bytecode transformations. It even gives each task (AKA "fiber") its own stack trace, so you can dump it and get a complete stack trace without any interference from any other task sharing the thread.
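A rough sketch of that setup (assumes the Quasar agent is attached to the JVM for bytecode instrumentation):

import java.util.concurrent.Executors;

import co.paralleluniverse.fibers.Fiber;
import co.paralleluniverse.fibers.FiberExecutorScheduler;
import co.paralleluniverse.strands.SuspendableRunnable;

// Schedule fibers on a single-threaded executor (M = 1); a fiber that
// suspends (e.g. via Fiber.sleep) frees the thread for the other fibers.
public class SingleThreadFibers {
    public static void main(String[] args) throws Exception {
        FiberExecutorScheduler scheduler =
                new FiberExecutorScheduler("single", Executors.newSingleThreadExecutor());

        Fiber<Void> f1 = new Fiber<>(scheduler, (SuspendableRunnable) () -> {
            System.out.println("task 1: before suspension");
            Fiber.sleep(100); // suspension point: the shared thread is released
            System.out.println("task 1: after suspension");
        }).start();

        Fiber<Void> f2 = new Fiber<>(scheduler, (SuspendableRunnable) () ->
                System.out.println("task 2 runs while task 1 is suspended")).start();

        f1.join();
        f2.join();
    }
}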
Well, you could try this.
You need:
A mechanism to save the current state of the task, because when the task returns, its frame is popped from the call stack. Based on the return value (or something like that) you can determine whether it completed or not; since you would need to re-execute it from the point where it left off, you need to preserve that state information.
Create a request data structure (DS) for each task. Whenever a task wants to request something, it logs the request there; the data structure should support all the possible requests a task can make.
Store these DSs in a map. At the end of the loop you can query each DS to determine the kind of resource required by each task.
Get the resource and put it in the DS. Restart the task from the state where it returned.
The task queries the DS and gets the resource.
The task should use this DS whenever it wants to use an external resource.
You would need to design the method by which a resource is requested with special consideration, since when you re-execute the task you will need to call this method yourself so that the task can continue from where it left off. A rough sketch of this scheme follows.
*DS -> data structure
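(All names hypothetical; here the DS is just the key a task requests plus a shared map of fetched values.)

import java.util.*;
import java.util.function.Function;

// Each task either completes, or records the key of the external value it
// needs and returns, so the driver can batch-fetch before resuming it.
interface ResumableTask {
    /** @return null when finished, or the key of a value the task still needs. */
    String step(Map<String, Object> fetchedValues);
}

class BatchingDriver {
    static void runAll(List<ResumableTask> tasks,
                       Function<Set<String>, Map<String, Object>> batchFetch) {
        Map<String, Object> values = new HashMap<>();
        List<ResumableTask> pending = new ArrayList<>(tasks);
        while (!pending.isEmpty()) {
            Set<String> requested = new LinkedHashSet<>();
            Iterator<ResumableTask> it = pending.iterator();
            while (it.hasNext()) {
                String need = it.next().step(values); // run or resume the task
                if (need == null) {
                    it.remove();          // completed: drop it from the loop
                } else {
                    requested.add(need);  // suspended: remember its request
                }
            }
            if (!requested.isEmpty()) {
                values.putAll(batchFetch.apply(requested)); // one fetch for all requests
            }
        }
    }
}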
Hope it helps.
I'm trying to make a listener (or something like that?) that will start a specific event when a date field in a database row matches the current time. Of course I could poll every second to check whether the date/time matches the current time, but I think that is quite expensive. There should be a better alternative.
What I am trying to do is the following:
I have several (for example football) matches scheduled in my database. At the specific time a match should start, I want to start an event in my Java app. This could be one or more matches at that time.
I understand that you are trying to schedule the execution of future events in the Java app, not in the database.
You should consider using the ScheduledExecutorService method schedule to delay execution of a task to a specific point in time.
The only problem you have to solve is how to synchronize the tasks in the database with those in the scheduler.
EDIT:
If you keep a map of taskID -> ScheduledFuture objects (as returned by schedule), you can easily call cancel on the future to remove a task. But you have to add some kind of last-modification column to detect new and updated tasks, and query the database periodically to check whether there are any new tasks. A minimal sketch of that approach is below.
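(The MatchScheduler class and its method names are hypothetical.)

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Each database row becomes a task delayed until its start time; the map of
// ScheduledFutures allows cancelling or rescheduling updated rows.
public class MatchScheduler {

    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();
    private final Map<Long, ScheduledFuture<?>> scheduled = new ConcurrentHashMap<>();

    public void schedule(long matchId, long startEpochMillis, Runnable startEvent) {
        long delay = Math.max(startEpochMillis - System.currentTimeMillis(), 0);
        ScheduledFuture<?> future =
                executor.schedule(startEvent, delay, TimeUnit.MILLISECONDS);
        scheduled.put(matchId, future);
    }

    public void cancel(long matchId) {
        ScheduledFuture<?> future = scheduled.remove(matchId);
        if (future != null) {
            future.cancel(false); // don't interrupt if it has already started
        }
    }
}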