Triggering a scheduled job only once in a multiple-node environment - java

I am using Docker swarm mode to run multiple instances of a Java (Spring Boot) application, and I would like to run a scheduled job twice a day, but it needs to be triggered by only one instance of the application.
Is there any mechanism to configure the Spring Boot application and Docker swarm so that the scheduled task runs just once?
I've seen this property in Jive:
<property name="allNodes" value="false"/>
and now I am wondering if I can do something similar on my infrastructure.
The application instances are on the same network, so I suppose network discovery wouldn't be a problem.

You can designate one node as the master node so that the scheduled job runs only on the master. On failure, another node is promoted to master and becomes eligible to run the job.
Or you can create a distributed lock (Hazelcast supports distributed locks). Every node calls tryLock(); the node that wins the lock is allowed to run the job, as sketched below.
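A minimal sketch of the lock-based approach, assuming Hazelcast 3.x (where HazelcastInstance.getLock() returns an ILock) and Spring's @Scheduled; the lock name, cron expression, and class names are illustrative:

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class TwiceDailyJob {

    private final HazelcastInstance hazelcast;

    public TwiceDailyJob(HazelcastInstance hazelcast) {
        this.hazelcast = hazelcast;
    }

    // fires at 06:00 and 18:00 on every instance, but only the lock winner runs
    @Scheduled(cron = "0 0 6,18 * * *")
    public void run() {
        ILock lock = hazelcast.getLock("twice-daily-job"); // cluster-wide lock
        if (lock.tryLock()) { // non-blocking: the losing instances simply skip
            try {
                doWork(); // hypothetical job logic
            } finally {
                lock.unlock();
            }
        }
    }

    private void doWork() {
        // job logic goes here
    }
}

Note that if the job finishes quickly, a slower node could still acquire the lock after the winner releases it within the same trigger window; recording the last run time (for example in a Hazelcast map) guards against that double execution.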

Related

How to unblock jobs whose trigger state is ERROR in a Quartz cluster in Java

My context is as follows:
My environment is composed of two machines, with Spring Boot and Quartz configured in cluster mode on both.
I have a master job with the @DisallowConcurrentExecution annotation that runs every 30 seconds, reads the records with a fixed id from the DB, and schedules the slave jobs (also annotated with @DisallowConcurrentExecution) that perform a defined logic. In anomalous cases (for example, a sudden machine shutdown), it seems that some of the jobs are unable to finish their flow and remain in the ERROR state.
How can I resume or unblock the jobs whose trigger state is ERROR, using the Quartz objects in Java?
Currently, those job ids can no longer be scheduled by Quartz.
Application log says:
org.quartz.ObjectAlreadyExistsException: Unable to store Job : 'sre.153', because one already exists with this identification.
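A hedged sketch of one recovery approach: newer Quartz 2.x releases expose Scheduler.resetTriggerFromErrorState(), so you can scan for triggers stuck in ERROR and put them back into the WAITING state (the group matcher below scans every trigger group):

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger.TriggerState;
import org.quartz.TriggerKey;
import org.quartz.impl.matchers.GroupMatcher;

public class ErrorTriggerRecovery {

    // scans all trigger groups and resets any trigger stuck in ERROR
    public static void resetErrorTriggers(Scheduler scheduler) throws SchedulerException {
        for (TriggerKey key : scheduler.getTriggerKeys(GroupMatcher.anyTriggerGroup())) {
            if (scheduler.getTriggerState(key) == TriggerState.ERROR) {
                scheduler.resetTriggerFromErrorState(key); // back to WAITING
            }
        }
    }
}

For the ObjectAlreadyExistsException, storing the job with scheduler.addJob(jobDetail, true) (replace = true) avoids the failure when a job with that identity already exists.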

How to identify a terminating pod in Kubernetes using Java

I am using fabric8 to get the status of a Kubernetes pod with the following code:
KubernetesHelper.getPodStatusText(pod);
I deploy an app within a container, and there is a one-to-one mapping between a container and a pod. My requirement is to redeploy the application. So after deleting the pod I check its status, and the method returns "Running" while the pod is being deleted.
I am unable to tell that the pod has been deleted, since the newly deployed app also returns a status of "Running". Are there any other fields of a pod that can be used to distinguish a healthy pod from a terminating one?
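One signal Kubernetes itself provides is metadata.deletionTimestamp: it is set as soon as a pod starts terminating, even while the phase still reads "Running". A sketch using the fabric8 kubernetes-client (the namespace and pod name are placeholders):

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class PodTerminationCheck {

    // a pod that is being deleted has deletionTimestamp set,
    // even while its phase still reports "Running"
    public static boolean isTerminating(Pod pod) {
        return pod.getMetadata().getDeletionTimestamp() != null;
    }

    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            Pod pod = client.pods().inNamespace("default").withName("my-app").get();
            if (pod == null) {
                System.out.println("pod is gone");
            } else {
                System.out.println("terminating: " + isTerminating(pod));
            }
        }
    }
}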
One way of doing this is to perform a rolling upgrade. This ensures that your deployed application incurs no downtime (new pods are started before old pods are stopped). One caveat is that you must be using a replication controller or replica set to do so. Most rolling deployments simply involve updating the container's image to the new version of the software.
You can do this through Java via fabric8's Kubernetes Java client. Here's an example:
// assumes an existing fabric8 KubernetesClient instance named "client"
client.replicationControllers()
      .inNamespace("thisisatest")
      .withName("nginx-controller")
      .rolling()
      .updateImage("nginx");
You can change any configuration of the replication controller (replicas, environment variables, etc.). The call returns when the pods running the new version are Ready and the old replication controller and its pods have been stopped and deleted.

JBPM6: How to resume a process from the last successful node after a server crash?

I'm trying to implement a failover strategy for executing jBPM6 processes. My setup is the following:
I'm using jbpm6.2.0-Final (latest stable release) with persistence enabled
I'm constructing an instance of org.kie.spring.factorybeans.RuntimeManagerFactoryBean with type SINGLETON to get a KSession, which I use to start/abort processes and complete/abort work items
all beans are wired by Spring 3.2
DB2 is used as the database engine
I use Tomcat 7.0.27
In the positive scenario everything works as I expect. But I would like to know how to resume a process in the case of a server crash. To reproduce it, I started my process (described as a BPMN2 file), reached some middle step, and killed the Tomcat process. After that I see an uncompleted process instance in the PROCESS_INSTANCE_INFO table and an uncompleted work item in the WORK_ITEM_INFO table. There is also a session in the SESSION_INFO table.
My question is: could you show me an example of code that would take that remaining process and resume it from the last node (if that is possible)?
Update
I forgot to mention that I'm not using jbpm-console; I'm embedding jBPM into my Java EE application.
If you initialize your RuntimeManager on startup of your application server, it should take care of reloading and resuming the processes.
You need not worry about reloading them yourself.
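A minimal sketch of that startup wiring with the plain jBPM 6 API (the persistence unit name and the process resource are placeholders; with the Spring factory beans from the question, the equivalent wiring happens in the application context):

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.manager.RuntimeEnvironment;
import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;
import org.kie.internal.io.ResourceFactory;

public class JbpmBootstrap {

    public static RuntimeManager init() {
        // standard jBPM persistence unit; must point at the same DB2 datasource
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

        RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
                .newDefaultBuilder()
                .entityManagerFactory(emf)
                .addAsset(ResourceFactory.newClassPathResource("process.bpmn2"), // placeholder
                          ResourceType.BPMN2)
                .get();

        // a SINGLETON manager pointed at the same datasource reloads the
        // persisted sessions and process instances left over from the crash
        return RuntimeManagerFactory.Factory.get()
                .newSingletonRuntimeManager(environment);
    }
}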

quartz scheduler jobs that execute on all servers in a cluster

Once I enable clustering in Quartz, it will distribute the cron jobs to the various servers in the cluster. That's fine normally, but there's actually one job that I'd like to have executed on every server in the cluster every time it's scheduled to run.
Is there a way to mark a quartz cron job to indicate that it should run on all servers in a cluster?
I solved this by having two Quartz schedulers running. One was configured as a clustered scheduler. The other was configured as a local scheduler. Jobs that are supposed to run on all machines get added to the local scheduler, as sketched below.
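A sketch of that two-scheduler setup, assuming two properties files on the classpath (the file names are illustrative; the clustered file sets org.quartz.jobStore.isClustered = true, while the local file uses the in-memory RAMJobStore):

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class DualSchedulers {

    public static void main(String[] args) throws SchedulerException {
        // clustered scheduler: JDBC job store, each fire handled by one node
        Scheduler clustered =
                new StdSchedulerFactory("quartz-clustered.properties").getScheduler();

        // local scheduler: RAMJobStore, no clustering; a job added here runs
        // on this server only, so adding it on every server runs it everywhere
        Scheduler local =
                new StdSchedulerFactory("quartz-local.properties").getScheduler();

        clustered.start();
        local.start();
    }
}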

Same quartz job running twice because of two server instances

I have this problem: my app has a Quartz scheduler that runs a task every X minutes. The app is deployed on two server instances, so each instance executes the task at the same time. I want the task to be executed only once at a time.
We have configured Quartz with Spring, and our application server is WAS.
Which options do you suggest?
You could set up a Quartz cluster with the JDBC job store - then every job fire will be executed by only one cluster node. You can find more information on that topic in the Quartz documentation.
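A minimal sketch of that configuration built programmatically (a quartz.properties file works just as well; the data source name is an assumption, and the Quartz DDL scripts must already be applied to the shared database):

import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerFactory {

    public static Scheduler create() throws SchedulerException {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceName", "MyClusteredScheduler");
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO"); // unique id per node
        props.setProperty("org.quartz.threadPool.threadCount", "5");
        props.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.setProperty("org.quartz.jobStore.dataSource", "myDS"); // assumed data source name
        props.setProperty("org.quartz.jobStore.isClustered", "true"); // one node per job fire
        props.setProperty("org.quartz.jobStore.clusterCheckinInterval", "20000");
        // the "myDS" connection details go in org.quartz.dataSource.myDS.* properties
        return new StdSchedulerFactory(props).getScheduler();
    }
}

With isClustered = true and all nodes sharing the same database tables, Quartz's row locking guarantees that each trigger firing is picked up by exactly one node.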
