Activiti: what's the difference between Deployments, Definitions, Instances, Tasks and Jobs? - java

I'm just looking at the Activiti admin app and wondering what the differences are between Deployments, Definitions, Instances, Tasks and Jobs.
I've had a go at explaining what I think these do:
Any help much appreciated.
Deployments - instances of Activiti Engine?
Definitions - ??
Instances - ??
Tasks - Outline of different tasks that can be applied to various processes, such as Decision Tables or User Tasks?
Jobs - List of current jobs/processes in action?

The Activiti app provides, out of the box, some generic UIs for the generic "Tasks" that are usually required in a BPM system.
So here are some very simple answers to your questions:
Deployments: applications being deployed; they all run on top of the same engine (we are changing that in Activiti Cloud). Applications are logical groups of Process Definitions, Decision Tables, Forms, etc.
Definitions: process, decision table and form definitions.
Process Instances: running business processes.
Tasks: user tasks generated by process instances (every time a business process hits a UserTask node, a new Task is created here). Tasks are always assigned to real people or groups of people.
Jobs: async jobs created by async nodes inside the process definitions, and also used for timers. Think of a database-backed scheduler (the default), similar to Quartz, performing the async executions. Jobs are usually used for system-to-system interactions: when you have a long-running system-to-system interaction you may need to execute it asynchronously, and that is where jobs come into action.
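For reference, here is a rough sketch (not part of the original answer) of how those five entities show up in the Activiti Java API; it assumes a default activiti.cfg.xml on the classpath and is only meant to illustrate which engine service owns which entity:

import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngines;

public class ActivitiEntityOverview {
    public static void main(String[] args) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();

        // Deployments: units of deployment (apps / BPMN resources) known to the engine
        long deployments = engine.getRepositoryService().createDeploymentQuery().count();

        // Definitions: the process definitions contained in those deployments
        long definitions = engine.getRepositoryService().createProcessDefinitionQuery().count();

        // Instances: business processes that are currently running
        long instances = engine.getRuntimeService().createProcessInstanceQuery().count();

        // Tasks: user tasks waiting for a person or group to complete them
        long tasks = engine.getTaskService().createTaskQuery().count();

        // Jobs: async executions and timers handled by the job executor
        long jobs = engine.getManagementService().createJobQuery().count();

        System.out.printf("deployments=%d definitions=%d instances=%d tasks=%d jobs=%d%n",
                deployments, definitions, instances, tasks, jobs);
    }
}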
Hope that helps

Related

Synchronize Batch Jobs across multiple Application Instances

I am writing a spring batch application which should only run one Job Instance at a time. This should also be true if multiple application instances are started. Sadly, the jobs can’t be parallelized and are invoked at random.
So, what I am looking for is a spring boot configuration which allows me to synchronize the job execution within one processor as well as in the distributed case. I have already found some approaches like the JobLauncherSynchronizer (https://docs.spring.io/spring-batch-admin/trunk/apidocs/org/springframework/batch/admin/launch/JobLauncherSynchronizer.html) but all the solutions I have found work either only on one processor or protect just a fraction of the job execution.
Is there any spring boot configuration which prevents multiple concurrent executions of the same job, even across multiple concurrently running application instances (which share the same database)?
Thank you in advance.
Is there any spring boot configuration which prevents multiple concurrent executions of the same job, even across multiple concurrently running application instances (which share the same database)?
Not to my knowledge. If you really want global synchronization at the job level (i.e. a single job instance at a time), you need a global synchronizer like the JobLauncherSynchronizer you linked to.
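As an illustration of what such a global synchronizer could look like (a sketch only, not the JobLauncherSynchronizer itself), you could take a database-backed lock before launching, for example with Spring Integration's JdbcLockRegistry. This assumes the Spring Integration JDBC module and its INT_LOCK table are available in the shared database; the bean and method names are made up:

import java.util.concurrent.locks.Lock;
import javax.sql.DataSource;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jdbc.lock.DefaultLockRepository;
import org.springframework.integration.jdbc.lock.JdbcLockRegistry;
import org.springframework.stereotype.Component;

@Configuration
class LockConfig {

    @Bean
    DefaultLockRepository lockRepository(DataSource dataSource) {
        // lock rows live in the shared database, so every application instance sees them
        return new DefaultLockRepository(dataSource);
    }

    @Bean
    JdbcLockRegistry lockRegistry(DefaultLockRepository lockRepository) {
        return new JdbcLockRegistry(lockRepository);
    }
}

@Component
class SynchronizedJobLauncher {

    private final JobLauncher jobLauncher;
    private final JdbcLockRegistry lockRegistry;

    SynchronizedJobLauncher(JobLauncher jobLauncher, JdbcLockRegistry lockRegistry) {
        this.jobLauncher = jobLauncher;
        this.lockRegistry = lockRegistry;
    }

    void launchIfNotRunning(Job job, JobParameters params) throws Exception {
        Lock lock = lockRegistry.obtain(job.getName()); // same key on every instance
        if (!lock.tryLock()) {
            return; // another instance is currently running this job
        }
        try {
            jobLauncher.run(job, params); // with the default synchronous launcher this blocks until done
        } finally {
            lock.unlock();
        }
    }
}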

Spring-compatible mechanism to manage background jobs

I am working on a new functionality for a multi-tenancy web-app, which allows the admin to start a potentially very long running process (ca. 1 - 5 min) by the click of a button in the admin panel.
However it is crucial that such a task can only be executed ONCE at a time for each tenant. Of course we can disable the button after a click, but we cannot prevent the admin (or another admin) from opening another browser tab and clicking the button again.
Is there any existing library which allows us to:
Uniquely identify a job (e.g. by an id like "tenant_001_activation_task")
Start the task in the background
Query if such a task is already running in the background and if so reject any further calls to this function.
I already had a look into Quartz and the Spring TaskExecutor. However, these two seem to mainly focus on scheduling tasks at a given time (like a cron job). What I'm looking for is a solution for running and monitoring a background job programmatically at any time.
If you decide to use Quartz, you can simply annotate the relevant job implementation classes with the @DisallowConcurrentExecution annotation.
Please note that this annotation is effective at the Quartz job detail level, not at the job implementation class level. Let us say you have a job implementation class com.foo.MyTenantTask and you annotate this class with @DisallowConcurrentExecution. Then you register 2 jobs that use this job implementation class - tenant_001_task and tenant_002_task.
If you run tenant_001_task and tenant_002_task, they will be allowed to run concurrently because they are different jobs (job details). However, if you attempt to run multiple instances of tenant_001_task concurrently, Quartz will only execute the first instance and the other instances will be queued up and wait for the first instance to finish executing. Then Quartz will pick one queued instance of tenant_001_task and execute it and so on until all queued up instances of tenant_001_task have been executed.
On the other hand, Quartz will not prevent concurrent execution of tenant_001_task and tenant_002_task instances since these represent different jobs (job details).
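A small sketch of the above (the job names come from the example; the trigger setup is illustrative):

import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

// Concurrency is disallowed per job detail (job key), not per class.
@DisallowConcurrentExecution
public class MyTenantTask implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        // long-running per-tenant work goes here
    }
}

class TenantTaskSetup {
    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        // Two job details built from the same implementation class: tenant_001_task and
        // tenant_002_task may run at the same time, but two instances of tenant_001_task never will.
        JobDetail tenant1 = JobBuilder.newJob(MyTenantTask.class).withIdentity("tenant_001_task").build();
        JobDetail tenant2 = JobBuilder.newJob(MyTenantTask.class).withIdentity("tenant_002_task").build();

        scheduler.scheduleJob(tenant1, TriggerBuilder.newTrigger().startNow().build());
        scheduler.scheduleJob(tenant2, TriggerBuilder.newTrigger().startNow().build());
    }
}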
Quartz provides various (local, JMX, RMI) APIs that allow you to obtain the list of currently executing jobs, list of all registered jobs and their triggers etc. It will certainly allow you to implement the scheduling logic you described.
If you are building an app to manage and monitor your Quartz jobs, triggers etc., I recommend that you take a quick look into our product called QuartzDesk. It is a management and monitoring GUI for all types of Java Quartz-based applications. There is a public online demo and if you want to experiment locally, you can request a 30-day trial license key. If you need to interact with your Quartz schedulers programmatically (and possibly remotely), you can use various JAX-WS service APIs provided by QuartzDesk.

Parallel Processing in Weblogic using WorkManager

I have three servers and am scheduling one task using an application WorkManager. At a time only one node processes this task. Now I want to run this task in parallel, i.e. in 3 threads. Parallelizing this task on one server/JVM is easy, but I am not able to find a way to schedule a task/work on a remote JVM. For example, the task is divided into 3 sub-tasks and all 3 JVMs run these in parallel.
I tried creating a Global WorkManager and targeted another server (Server2). I ran the main job on Server1 and scheduled work using the Global Work Manager. But that did not work and the work was scheduled on Server1 only.
There is a RemoteWorkItem interface provided by commonj, but I am not sure whether WebLogic provides an implementation of this interface. I am using WebLogic 10.3. https://docs.oracle.com/cd/E13222_01/wls/docs90/javadocs/commonj/work/RemoteWorkItem.html
Is there a way to do this using WorkManager, or do I have to go with a messaging solution?
Work Managers are used to constrain how many work requests a particular WLS instance will execute at once. They're designed more for server stability than for forcing distributed workloads, as evidenced by the fact that you can (and should) set work managers even on single node clusters.
I think you'll have to focus more on how the work items are distributed to each server in the cluster, since the solution is load balancing within the constraints of the work managers.

Clustered Quartz scheduler configuration

I'm working on an application that uses Quartz for scheduling Jobs. The Jobs to be scheduled are created programmatically by reading a properties file. My question is: if I have a cluster of several nodes which of these should create schedules programmatically? Only one of these? Or maybe all?
I have used Quartz in a web app where users, among other things, could create Quartz jobs that performed certain tasks.
We have had no problems in that app provided that at least the job names are different for each job. You can also have different group names, and if I remember correctly the job group + job name combination forms a job key.
Anyway, we had no problem with creating and running the jobs from different nodes, but Quartz at the time (some 6 months ago; I do not believe this has changed, but I am not sure) did not offer the possibility to stop jobs across the cluster, it could only stop jobs on the node where the stop command was executed.
If instead you just want to create a fixed number of jobs when the application starts, you had better delegate that job to one of the nodes, as the job names/groups will be read from the same properties file on each node, and conflicts will arise.
Have you tried creating them on all of them? I think you would get some conflict because of duplicate names.
So I think one of the members should create the schedules during startup.
You should only have one system scheduling jobs for the cluster if they are predefined in properties like you say. If all of the systems did it you would needlessly recreate the jobs and maybe put them in a weird state if every server made or deleted the same jobs and triggers.
You could simply only deploy the properties for the jobs to one server and then only one server would try to create them.
You could make a separate app that has the purpose of scheduling the jobs and only run it once.
If these are web servers you could make a simple secured REST API that triggers the scheduling process. Then you could write an automated script to access the API and kick off the scheduling of jobs as part of a deployment or whenever else you desired. If you have multiple servers behind a load balancer it should go to only one server and schedule the jobs which quartz would save to the database backed jobstore. The other nodes in the cluster would receive them the next time they update from the database.
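If, for whatever reason, every node does end up running the creation code, one defensive pattern (a sketch with made-up job names, assuming a clustered JDBC job store) is to check the shared job store before registering:

import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobKey;
import org.quartz.ObjectAlreadyExistsException;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

public class StartupScheduling {

    public static class PropertiesDrivenJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // work defined by the properties file goes here
        }
    }

    // Called by every node at startup; only jobs missing from the shared job store are registered.
    public static void scheduleIfAbsent(Scheduler scheduler, String name, String cron) throws SchedulerException {
        JobKey key = new JobKey(name, "startup-jobs");
        if (scheduler.checkExists(key)) {
            return; // some other node already created this job
        }
        JobDetail job = JobBuilder.newJob(PropertiesDrivenJob.class).withIdentity(key).build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withSchedule(CronScheduleBuilder.cronSchedule(cron))
                .build();
        try {
            scheduler.scheduleJob(job, trigger);
        } catch (ObjectAlreadyExistsException e) {
            // another node won the race between checkExists and scheduleJob; safe to ignore
        }
    }
}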

What framework to use for advanced job scheduling in Java?

In my application I need to have periodically run background tasks (which I can easily do with Quartz - i.e. schedule a given job to be run at a specific time periodically).
But I would like to have a little bit more control. In particular I need to:
have the system rerun a task that wasn't run at its scheduled time (i.e. the server was down and because of this the task was not run. In such a situation I want the 'late' task to be run ASAP)
it would be nice to easily control tasks - i.e. run a task on demand or see when a given task was last run or reschedule a given task to be run at a different time
It seems to me that the above points can be achieved with Spring Batch Admin, but I don't have much experience in this area yet. Also, I've seen numerous posts on how Spring Batch is not a scheduling tool, so I'm beginning to have doubts about what the right tool for the job is here.
So my question is: can the above be achieved with Spring Batch Admin? Or perhaps Quartz is enough but needs configuring to do the above? Or maybe I need both? Or something else?
Thanks a lot :)
Peter
have the system rerun a task that wasn't run at its scheduled time
This feature in Quartz is called Misfire Instructions and does exactly what you need - but it is a lot more flexible. All you need is to configure a JDBCJobStore so that the schedule state survives a restart.
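As an illustration (the trigger name and cron expression are made up), the misfire instruction is attached to the trigger's schedule; combined with a JDBCJobStore, a firing missed while the server was down is detected and handled on restart:

import org.quartz.CronScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

public class MisfireExample {

    static Trigger nightlyTrigger() {
        return TriggerBuilder.newTrigger()
                .withIdentity("nightly-report-trigger")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 2 * * ?")
                        // if the 02:00 firing was missed (e.g. server was down), fire once as soon
                        // as the scheduler is back up, then continue with the normal schedule
                        .withMisfireHandlingInstructionFireAndProceed())
                .build();
    }
}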
it would be nice to easily control tasks - i.e. run a task on demand or see when a given task was last run or reschedule a given task to be run at a different time
You can use Quartz JMX to access various information (like previous and next run times) or query the Quartz database tables directly. There are also free and commercial management tools based on these interfaces. I believe you can also manually run jobs there.
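Alternatively, the same information is available directly on the Scheduler API; a small sketch (the job key is made up):

import java.util.List;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;

public class JobControl {

    // When did a job last run, and when will it run next?
    static void printFireTimes(Scheduler scheduler) throws SchedulerException {
        JobKey key = new JobKey("nightly-report");
        List<? extends Trigger> triggers = scheduler.getTriggersOfJob(key);
        for (Trigger t : triggers) {
            System.out.println(key + " last=" + t.getPreviousFireTime() + " next=" + t.getNextFireTime());
        }
    }

    // Run a job on demand, outside its normal schedule.
    static void runNow(Scheduler scheduler) throws SchedulerException {
        scheduler.triggerJob(new JobKey("nightly-report"));
    }
}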
Spring Batch can be integrated with Quartz, but not replace it.
