I have a startup singleton EJB that initializes a scheduler and stores an EJB in the JobDataMap. The trigger goes to the ERROR state when I store the EJB.
@Singleton
@Startup
@Lock(LockType.READ)
public class NotificationScheduler {

    private static final Logger logger = LoggerFactory.getLogger(NotificationScheduler.class);

    private Scheduler scheduler;

    @Inject
    private QuartzProperties quartzProperties;

    @Inject
    private NotificationMailProcess notificationMailProcess;

    @PostConstruct
    public void init() {
        try {
            Properties properties = new Properties();
            properties.put(StdSchedulerFactory.PROP_SCHED_INSTANCE_NAME, "NotificationScheduler");
            properties.put(StdSchedulerFactory.PROP_SCHED_INSTANCE_ID, StdSchedulerFactory.AUTO_GENERATE_INSTANCE_ID);
            properties.put(StdSchedulerFactory.PROP_THREAD_POOL_CLASS, quartzProperties.getQuartzThreadPoolClass());
            properties.put(StdSchedulerFactory.PROP_THREAD_POOL_PREFIX + ".threadCount", quartzProperties.getQuartzThreadPoolCount());
            properties.put(StdSchedulerFactory.PROP_JOB_STORE_CLASS, quartzProperties.getQuartzJobStoreClass());
            properties.put(StdSchedulerFactory.PROP_JOB_STORE_PREFIX + ".driverDelegateClass",
                    quartzProperties.getQuartzJobStoreDriverDelegateClass());
            properties.put(StdSchedulerFactory.PROP_JOB_STORE_PREFIX + ".dataSource", quartzProperties.getQuartzJobStoreDataSource());
            properties.put("org.quartz.dataSource." + quartzProperties.getQuartzJobStoreDataSource() + ".jndiURL",
                    quartzProperties.getQuartzJobStoreDataSource());
            properties.put(StdSchedulerFactory.PROP_JOB_STORE_PREFIX + ".nonManagedTXDataSource",
                    quartzProperties.getQuartzJobStoreNonManagedTXDataSource());
            properties.put("org.quartz.dataSource." + quartzProperties.getQuartzJobStoreNonManagedTXDataSource() + ".jndiURL",
                    quartzProperties.getQuartzJobStoreNonManagedTXDataSource());
            properties.put(StdSchedulerFactory.PROP_JOB_STORE_PREFIX + ".isClustered", quartzProperties.getQuartzJobStoreIsClustered());
            properties.put(StdSchedulerFactory.PROP_JOB_STORE_PREFIX + ".clusterCheckinInterval",
                    quartzProperties.getQuartzJobStoreClusterCheckinInterval());

            StdSchedulerFactory schedulerFactory = new StdSchedulerFactory();
            schedulerFactory.initialize(properties);
            scheduler = schedulerFactory.getScheduler();
            scheduler.start();

            JobDataMap jobDataMap = new JobDataMap();
            jobDataMap.put("process", notificationMailProcess);
            JobDetail notificationMailJobDetail = newJob(NotificationMailJob.class)
                    .withIdentity(Constants.JOB_NAME, Constants.JOB_GROUP)
                    .usingJobData(jobDataMap)
                    .build();
            String cronExpression = Constants.CONFIGURATION_DEFAULT_JOB_CRON;
            Trigger trigger = newTrigger().withIdentity(Constants.JOB_TRIGGER, Constants.JOB_GROUP)
                    .withSchedule(cronSchedule(cronExpression)).build();
            scheduler.scheduleJob(notificationMailJobDetail, new HashSet<>(Arrays.asList(trigger)), true);
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
    }
}
The NotificationMailProcess is a dummy EJB class annotated with @Stateless that implements the Serializable interface.
I finally found that it was a serialization problem when the injected EJB was persisted as part of the job data. The solution was to alter the implementation and fetch the EJB inside the Job via a JNDI lookup.
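A minimal sketch of what that lookup could look like inside the job (the portable JNDI name and the business method are assumptions; they depend on the actual application, module, and bean names):

public class NotificationMailJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            // Hypothetical portable JNDI name: java:global/<app>/<module>/<bean>
            NotificationMailProcess process = (NotificationMailProcess) new InitialContext()
                    .lookup("java:global/notificationApp/notificationModule/NotificationMailProcess");
            process.sendPendingNotifications(); // hypothetical business method
        } catch (NamingException e) {
            throw new JobExecutionException(e);
        }
    }
}

This keeps the JobDataMap free of non-serializable EJB proxies, so a JDBC job store can persist the job detail without errors.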
I am trying to put some data into the Quartz job data map and access it in the class that implements Job. But it gives me a NullPointerException. When the application is run without the code that accesses the job data map, it runs fine.
I use a cron trigger to execute a scheduled job. In this example I configured it to run every 20 seconds.
@Bean
public Trigger simpleJobTrigger(@Qualifier("simpleJobDetail") JobDetail jobDetail) {
    CronTriggerFactoryBean factoryBean = new CronTriggerFactoryBean();
    factoryBean.setJobDetail(jobDetail);
    factoryBean.setStartDelay(0L);
    factoryBean.setName("test-trigger");
    factoryBean.setStartTime(LocalDateTime.now().toDate());
    factoryBean.setCronExpression("0/20 * * * * ?");
    factoryBean.setMisfireInstruction(SimpleTrigger.MISFIRE_INSTRUCTION_FIRE_NOW);
    try {
        factoryBean.afterPropertiesSet();
    } catch (ParseException e) {
        e.printStackTrace();
    }
    return factoryBean.getObject();
}
The following is my simpleJobDetail bean.
@Bean
public JobDetailFactoryBean simpleJobDetail() {
    JobDetailFactoryBean factoryBean = new JobDetailFactoryBean();
    factoryBean.setJobClass(Executor.class);
    factoryBean.setDurability(true);
    factoryBean.setName("test-job");
    factoryBean.getJobDataMap().put("caller", "James");
    return factoryBean;
}
This is my Executor class with its execute method.
public class Executor implements Job {

    @Autowired
    ScheduledTaskService scheduledTaskService;

    @Override
    public void execute(JobExecutionContext jobExecutionContext) {
        JobDataMap jobDataMap = null;
        try {
            jobDataMap = jobExecutionContext.getTrigger().getJobDataMap();
            String caller = jobDataMap.get("caller").toString();
            System.out.println("This is called by the user " + caller);
        } catch (Exception e) {
            //e.printStackTrace();
            System.out.println("UNABLE TO ACCESS THE JOB DATA MAP " + e);
        }
        scheduledTaskService.doThePayment();
    }
}
When I run the application, it prints the log given in the catch clause.
UNABLE TO ACCESS THE JOB DATA MAP java.lang.NullPointerException
Why does the execute method fail to access the JobDataMap? Is there any configuration or property I should set? Why is the job data map not available at this point?
How can I get this resolved?
Found the issue in my code.
I was accessing the JobDataMap in the following way:
jobDataMap = jobExecutionContext.getTrigger().getJobDataMap();
Instead, in my case, I should access it from the JobDetail, not from the trigger:
jobDataMap = jobExecutionContext.getJobDetail().getJobDataMap();
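As a side note beyond the original answer, Quartz also offers JobExecutionContext.getMergedJobDataMap(), which combines the JobDetail's map with the trigger's map (trigger entries win on key collisions), so the entry is found regardless of which map it was placed in:

// Merged view of the JobDetail map and the trigger map
JobDataMap merged = jobExecutionContext.getMergedJobDataMap();
String caller = merged.getString("caller");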
I've been wrestling with Quartz for days.
I need to create some triggers and job details when the app starts.
So, this is my Job
@DisallowConcurrentExecution
public class TimeoutJob extends QuartzJobBean {

    public final String ID = "idInterruttore";
    private final Logger logger = Logger.getLogger(TimeoutJob.class);

    @Autowired
    InterruttoreService interruttoreService;

    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
        JobDataMap dataMap = context.getJobDetail().getJobDataMap();
        int idInterruttore = dataMap.getIntFromString(ID);
        Interruttore interruttore = interruttoreService.findById(idInterruttore);
        logger.debug("Job reached for " + interruttore.getNomeInterruttore());
    }
}
Then I configure some beans in QuartzConfiguration.java:
@Configuration
@ComponentScan("it.besmart")
public class QuartzConfiguration {

    @Autowired
    ApplicationContext applicationContext;

    @Bean
    public SchedulerFactoryBean scheduler() {
        SchedulerFactoryBean schedulerFactory = new SchedulerFactoryBean();
        schedulerFactory.setJobFactory(springBeanJobFactory());
        return schedulerFactory;
    }

    @Bean
    public SpringBeanJobFactory springBeanJobFactory() {
        AutoWiringSpringBeanJobFactory jobFactory = new AutoWiringSpringBeanJobFactory();
        jobFactory.setApplicationContext(applicationContext);
        return jobFactory;
    }
}
Now I have a JobManager class which manages the job details and triggers:
#Service("jobManager")
public class JobManager {
private final Logger logger = Logger.getLogger(JobManager.class);
#Autowired
SchedulerFactoryBean scheduler;
#Autowired
InterruttoreService interruttoreService;
#PostConstruct
public void createInitialJobs() {
logger.debug("Start ut jobs to create");
List<Interruttore> interruttori = interruttoreService.findAllSwitches();
Date now = new Date();
for (int i = 0; i < interruttori.size(); i++) {
Interruttore interruttore = interruttori.get(i);
if (interruttore.getTimeoutDate().after(now) && interruttore.isStato()) {
// JobDetail and Trigger creation
createJob(interruttore, interruttore.getTimeoutDate());
}
}
}
public void createJob(Interruttore interruttore, Date richiesta) {
JobDetailFactoryBean jobDetail = new JobDetailFactoryBean();
jobDetail.setJobClass(TimeoutJob.class);
jobDetail.setName("Job detail for " + interruttore.getNomeInterruttore());
jobDetail.setDescription("Job Description");
jobDetail.setDurability(true);
Map<String, Integer> map = new HashMap<String,Integer>();
map.put("idInterruttore", interruttore.getIdInterruttore());
jobDetail.setJobDataAsMap(map);
long future = richiesta.getTime() - new Date().getTime();
logger.debug("next timeout is " + future / 1000 / 60 + " minuti for " + interruttore.getNomeInterruttore());
//trigger creation
SimpleTriggerFactoryBean trigger = new SimpleTriggerFactoryBean();
trigger.setName("myTrigger"+interruttore.getNomeInterruttore());
trigger.setGroup("timeoutTriggers");
trigger.setJobDetail(jobDetail.getObject());
trigger.setStartDelay(0);
trigger.setRepeatCount(1);
trigger.setRepeatInterval(future);
trigger.afterPropertiesSet();
logger.debug("Trigger for " + interruttore.getNomeInterruttore());
logger.debug("Trigger object is :" + trigger.getObject());
logger.debug("Next Trigger date " + trigger.getObject().getFinalFireTime());
try {
scheduler.getScheduler().scheduleJob(jobDetail.getObject(), trigger.getObject());
} catch (SchedulerException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
When launching the app, the @PostConstruct method tries to create the triggers, but I'm getting an exception when the jobManager bean is created:
Application startup failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jobManager': Invocation of init method failed; nested exception is java.lang.NullPointerException
caused by
Caused by: java.lang.NullPointerException
at org.springframework.scheduling.quartz.SimpleTriggerFactoryBean.afterPropertiesSet(SimpleTriggerFactoryBean.java:231)
at it.besmart.quartz.JobManager.createJob(JobManager.java:85)
at it.besmart.quartz.JobManager.createInitialJobs(JobManager.java:54)
which is

trigger.afterPropertiesSet();

so my triggers are not created...
There is a bug in version 4.2.5 of the spring-context-support jar:

sti.setJobKey(this.jobDetail.getKey());

Here jobDetail can be null, which is what causes the NullPointerException.
It is fixed in newer versions; I checked version 4.3.2, so you can use 4.3.2 or later.
In version 4.3.2:

if (this.jobDetail != null) {
    sti.setJobKey(this.jobDetail.getKey());
}
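Worth adding as an observation beyond the original answer: the reason jobDetail ends up null in the code above is that JobDetailFactoryBean.getObject() returns null until its afterPropertiesSet() has been invoked. When driving these factory beans programmatically, outside Spring's bean lifecycle as JobManager.createJob() does, initializing the JobDetail factory first avoids the NPE even on the old version; a minimal sketch (names are hypothetical):

// Initialize the JobDetailFactoryBean before pulling its product out;
// without this call getObject() returns null.
JobDetailFactoryBean jobDetail = new JobDetailFactoryBean();
jobDetail.setJobClass(TimeoutJob.class);
jobDetail.setName("someJobName"); // hypothetical name
jobDetail.afterPropertiesSet();

SimpleTriggerFactoryBean trigger = new SimpleTriggerFactoryBean();
trigger.setName("someTrigger");   // hypothetical name
trigger.setJobDetail(jobDetail.getObject()); // now non-null
trigger.afterPropertiesSet();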
Setup: a Spring application deployed on WebLogic 12c, using a JNDI lookup to get a datasource to the Oracle database.
We have multiple services which will be polling the database regularly for new jobs. In order to prevent two services picking up the same job, we are using a native SELECT FOR UPDATE query in a CrudRepository. The application then takes the resulting job and updates it to PROCESSING instead of WAITING using the CrudRepository.save() method.
The problem is that I can't seem to get save() to work within the FOR UPDATE transaction (at least this is my current working theory of what goes wrong), and as a result the entire polling freezes until the default 10-minute timeout occurs. I have tried putting @Transactional (with various propagation flags) basically everywhere, but I'm not able to get it to work (@EnableTransactionManagement is activated and working).
Obviously there must be some basic knowledge I'm missing. Is this even a possible setup? Unfortunately, just using @Transactional with a non-native CrudRepository SELECT query is not possible, as it apparently first makes a SELECT to see whether the row is locked or not, and only then makes a new SELECT that locks it. Another service could very well pick up the same job in the meantime, which is why we need it to lock immediately.
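(An aside, not from the original post: Spring Data JPA can also acquire the row lock declaratively, which avoids hand-written FOR UPDATE SQL. A sketch under the assumption of a Job entity with status, priority, and created fields; with most JPA providers on Oracle this emits SELECT ... FOR UPDATE as a single statement inside the surrounding transaction:)

public interface LockingJobRepository extends CrudRepository<Job, Integer> {

    // PESSIMISTIC_WRITE makes the provider lock the rows in the initial SELECT,
    // closing the read-then-lock window described above.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("SELECT j FROM Job j WHERE j.status = :status ORDER BY j.priority ASC, j.created ASC")
    List<Job> findByStatusForUpdate(@Param("status") String status);
}

(Both this query and the subsequent save() must run inside the same @Transactional method, otherwise the lock is released before the status update.)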
Update in relation to @M. Deinum's comment: I should perhaps also mention that it's a setup wherein the central component doing the polling is a library used by all the other services (therefore the library has @SpringBootApplication, as does each service using it, so double component scanning is certainly present). Furthermore, the service has two separate classes for polling depending on the type of service, with a lot of common code shared in an AbstractTransactionHelper class. Below I've aggregated some code for the sake of brevity.
The library's main class:
@SpringBootApplication
@EnableTransactionManagement
@EnableJpaRepositories
public class JobsMain {

    public static void initializeJobsMain() {
        PersistenceProviderResolverHolder.setPersistenceProviderResolver(new PersistenceProviderResolver() {
            @Override
            public List<PersistenceProvider> getPersistenceProviders() {
                return Collections.singletonList(new HibernatePersistenceProvider());
            }

            @Override
            public void clearCachedProviders() {
                //Not quite sure what this should do...
            }
        });
    }

    @Bean
    public JtaTransactionManager transactionManager() {
        return new WebLogicJtaTransactionManager();
    }

    public DataSource dataSource() {
        final JndiDataSourceLookup dsLookup = new JndiDataSourceLookup();
        dsLookup.setResourceRef(true);
        DataSource dataSource = dsLookup.getDataSource("Jobs");
        return dataSource;
    }
}
The repository (we're returning a set with only one job as we had some other issues when returning a single object):
public interface JobRepository extends CrudRepository<Job, Integer> {

    @Query(value = "SELECT * FROM JOB WHERE JOB.ID IN "
            + "(SELECT ID FROM "
            + "(SELECT * FROM JOB WHERE "
            + "JOB.STATUS = :status1 OR "
            + "JOB.STATUS = :status2 "
            + "ORDER BY JOB.PRIORITY ASC, JOB.CREATED ASC) "
            + "WHERE ROWNUM <= 1) "
            + "FOR UPDATE", nativeQuery = true)
    public Set<Job> getNextJob(@Param("status1") String status1, @Param("status2") String status2);
}
The transaction handling class:
@Service
public class JobManagerTransactionHelper extends AbstractTransactionHelper {

    @Transactional
    @Override
    public Job getNextJobToProcess() {
        Set<Job> jobs = null;
        try {
            jobs = jobRepo.getNextJob(Status.DONE.name(), Status.FAILED.name());
        } catch (Exception ex) {
            logger.error(ex);
        }
        return extractSingleJobFromSet(jobs);
    }
}
Update 2: Some more code.
AbstractTransactionHelper:
@Service
public abstract class AbstractTransactionHelper {

    @Autowired
    JobRepository jobRepo;

    @Autowired
    ArchivedJobRepository archive;

    protected Job extractSingleJobFromSet(Set<Job> jobs) {
        Job job = null;
        if (jobs != null && !jobs.isEmpty()) {
            for (Job candidate : jobs) {
                if (this instanceof JobManagerTransactionHelper) {
                    updateJob(candidate);
                }
                job = candidate;
            }
        }
        return job;
    }

    protected void updateJob(Job job) {
        updateJob(job, Status.PROCESSING, null);
    }

    protected void updateJob(Job job, Status status, String serviceMessage) {
        if (job != null) {
            if (status != null) {
                job.setStatus(status);
            }
            if (serviceMessage != null) {
                job.setServiceMessage(serviceMessage);
            }
            saveJob(job);
        }
    }

    protected void saveJob(Job job) {
        jobRepo.save(job);
        archive.save(Job.convertJobToArchivedJob(job));
    }
}
Update 4: Threading. newJob() is implemented by each service that uses the library.
@Service
public class JobManager {

    private final Logger logger = Logger.getLogger(JobManager.class);

    @Autowired
    private JobManagerTransactionHelper transactionHelper;

    @Autowired
    JobListener jobListener;

    @Autowired
    Config config;

    protected final AtomicInteger atomicThreadCounter = new AtomicInteger(0);
    protected boolean keepPolling;
    protected Future<?> futurePoller;
    protected ScheduledExecutorService pollService;
    protected ThreadPoolExecutor threadPool;

    public boolean start() {
        if (!keepPolling) {
            ThreadFactory pollServiceThreadFactory = new ThreadFactoryBuilder()
                    .setNamePrefix(config.getService() + "ScheduledPollingPool-Thread").build();
            ThreadFactory threadPoolThreadFactory = new ThreadFactoryBuilder()
                    .setNamePrefix(config.getService() + "ThreadPool-Thread").build();
            keepPolling = true;
            pollService = Executors.newSingleThreadScheduledExecutor(pollServiceThreadFactory);
            threadPool = (ThreadPoolExecutor) Executors.newFixedThreadPool(getConfig().getThreadPoolSize(), threadPoolThreadFactory);
            futurePoller = pollService.scheduleWithFixedDelay(getPollTask(), 0, getConfig().getPollingFrequency(), TimeUnit.MILLISECONDS);
            return true;
        } else {
            return false;
        }
    }

    protected Runnable getPollTask() {
        return new Runnable() {
            public void run() {
                try {
                    while (atomicThreadCounter.get() < threadPool.getMaximumPoolSize() &&
                            threadPool.getActiveCount() < threadPool.getMaximumPoolSize() &&
                            keepPolling == true) {
                        Job job = transactionHelper.getNextJobToProcess();
                        if (job != null) {
                            threadPool.submit(getJobHandler(job));
                            atomicThreadCounter.incrementAndGet(); // threadPool.getActiveCount() isn't updated fast enough the first loop
                        } else {
                            break;
                        }
                    }
                } catch (Exception e) {
                    logger.error(e);
                }
            }
        };
    }

    protected Runnable getJobHandler(final Job job) {
        return new Runnable() {
            public void run() {
                try {
                    atomicThreadCounter.decrementAndGet();
                    jobListener.newJob(job);
                } catch (Exception e) {
                    logger.error(e);
                }
            }
        };
    }
}
As it turns out, the problem was the WebLogicJtaTransactionManager. My guess is that the FOR UPDATE resulted in a JPA transaction, but upon updating the object in the database, the WebLogicJtaTransactionManager was used, and it failed to find an ongoing JTA transaction. Since we're deploying on WebLogic, we had wrongly assumed we had to use the WebLogicJtaTransactionManager.
Either way, exchanging the transaction manager for a JpaTransactionManager (and explicitly setting the EntityManagerFactory and DataSource on it) basically solved all the problems.
@Bean
public PlatformTransactionManager transactionManager() {
    JpaTransactionManager jpaTransactionManager = new JpaTransactionManager(entityManagerFactory().getObject());
    jpaTransactionManager.setDataSource(dataSource());
    jpaTransactionManager.setJpaDialect(new HibernateJpaDialect());
    return jpaTransactionManager;
}
This assumes you have also added an EntityManagerFactory bean, which is needed if you want to use multiple datasources in the same project (which we're doing, though not within single transactions, so there is no need for JTA):
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
    HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
    LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean();
    factoryBean.setDataSource(dataSource());
    factoryBean.setJpaVendorAdapter(vendorAdapter);
    factoryBean.setPackagesToScan("my.model");
    return factoryBean;
}
Is it possible to get a list of the defined jobs in Spring Batch at runtime without using the db? Maybe it's possible to get this metadata from the jobRepository bean or some similar object?
It is possible to retrieve the list of all job names using JobExplorer.getJobNames().
You first have to define the jobExplorer bean using JobExplorerFactoryBean:
<bean id="jobExplorer" class="org.springframework.batch.core.explore.support.JobExplorerFactoryBean">
<property name="dataSource" ref="dataSource"/>
</bean>
and then you can inject this bean when you need it.
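For example, once the bean is defined, a minimal usage sketch (field injection shown for brevity) could look like this:

@Autowired
private JobExplorer jobExplorer;

public List<String> listJobNames() {
    // Returns the distinct job names recorded in the job repository
    return jobExplorer.getJobNames();
}

Note that JobExplorer reads from the job repository's store, so this lists jobs known to the repository rather than only those defined as beans.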
To list the jobs defined as beans, you can just let the Spring context inject all beans of type Job into a list, as below:
@Autowired
private List<? extends Job> jobs;
..
// You can then launch a job given its name.
As an alternative strategy to get the list of job names that are configured as beans, one can use the ListableJobLocator.
@Autowired
ListableJobLocator jobLocator;
....
jobLocator.getJobNames();
This does not require a job repository.
I use this code to list and execute jobs:
private String jobName = "";
private JobLauncher jobLauncher = null;
private String selectedJob;
private String statusJob = "Exit Status : ";
private Job job;
ApplicationContext context;
private String[] lstJobs;

/**
 * Constructor: look up the available jobs and the launcher.
 */
public ExecuteJobBean() {
    this.context = ApplicationContextProvider.getApplicationContext();
    this.lstJobs = context.getBeanNamesForType(Job.class);
    if (jobLauncher == null)
        jobLauncher = (JobLauncher) context.getBean("jobLauncher");
}

/**
 * Execute the selected job.
 */
public void executeJob() {
    setJob((Job) context.getBean(this.selectedJob));
    try {
        statusJob = "Exit Status : ";
        JobParameters jobParameters = new JobParametersBuilder().addLong("time", System.currentTimeMillis()).toJobParameters();
        JobExecution execution = jobLauncher.run(getJob(), jobParameters);
        this.statusJob = execution.getStatus() + ", ";
    } catch (Exception e) {
        e.printStackTrace();
        this.statusJob = "Error, " + e.getMessage();
    }
    this.statusJob += " Done!!";
}
In our REST service we want to implement a job that checks something every 10 seconds. We thought we could use Quartz to make a job that covers this. But the problem is that we need to inject a singleton that is used inside the job, and the job does not seem to run in the context of our service, so the injected class is always null (NullPointerException).
So is there another possible solution to achieve such a job without using Quartz? We already tried to write our own JobFactory that connects the job with the BeanManager, but it didn't work at all.
This is the code for the job that is not working:
@Stateless
public class GCEStatusJob implements Job, Serializable {

    private Logger log = LoggerFactory.getLogger(GCEStatusJob.class);

    @Inject
    SharedMemory sharedMemory;

    @Override
    public void execute(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        GoogleComputeEngineFactory googleComputeEngineFactory = new GoogleComputeEngineFactory();
        List<HeartbeatModel> heartbeatList = new ArrayList<>(sharedMemory.getAllHeartbeats());
        List<GCE> gceList = googleComputeEngineFactory.listGCEs();
        List<String> ipAddressList = gceList.stream().map(GCE::getIp).collect(Collectors.toList());
        for (HeartbeatModel heartbeat : heartbeatList) {
            if (ipAddressList.contains(heartbeat.getIpAddress())) {
                long systemTime = System.currentTimeMillis();
                if (systemTime - heartbeat.getSystemTime() > 10000) {
                    log.info("Compute Engine with IP " + heartbeat.getIpAddress() + " is no longer responding. Restarting it!");
                    String name = gceList.stream().filter((i) -> i.getIp().equals(heartbeat.getIpAddress())).findFirst().get().getName();
                    googleComputeEngineFactory.resetGCE(name);
                }
            }
        }
    }
}
SharedMemory is always null.
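(An aside, not from the original thread: since this runs in a Java EE container, an EJB timer avoids Quartz entirely and keeps CDI injection working, because the timer bean is container-managed. A minimal sketch using javax.ejb.Schedule; the class and method names are hypothetical:)

@Singleton
@Startup
public class GCEStatusTimer {

    @Inject
    SharedMemory sharedMemory; // injection works; the container manages this bean

    // Container-managed timer that fires every 10 seconds; no Quartz, no JobFactory needed
    @Schedule(hour = "*", minute = "*", second = "*/10", persistent = false)
    public void checkHeartbeats() {
        // ...same heartbeat-checking logic as in GCEStatusJob.execute()...
    }
}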
I have used the Scheduler context map to achieve this. You can try the following.
In the REST API, when we create a Scheduler we can use the context map to pass parameters to the Job:
#Path("job")
public class RESTApi {
private String _userID;
public String get_userID() {
return _userID;
}
public void set_userID(String _userID) {
this._userID = _userID;
}
#GET
#Path("/start/{userId}")
public void startJob(#PathParam("userId") String userID) {
_userID = userID;
try {
SimpleTrigger trigger = new SimpleTrigger();
trigger.setName("updateTrigger");
trigger.setStartTime(new Date(System.currentTimeMillis() + 1000));
trigger.setRepeatCount(SimpleTrigger.REPEAT_INDEFINITELY);
trigger.setRepeatInterval(1000);
JobDetail job = new JobDetail();
job.setName("updateJob");
job.setJobClass(GCEStatusJob.class);
Scheduler scheduler = new StdSchedulerFactory().getScheduler();
scheduler.getContext().put("apiClass", this);
scheduler.start();
scheduler.scheduleJob(job, trigger);
} catch (Exception e) {
e.printStackTrace();
}
}
}
The Job implementation:
public class GCEStatusJob implements Job {

    @Override
    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        RESTApi apiClass;
        try {
            apiClass = ((RESTApi) arg0.getScheduler().getContext().get("apiClass"));
            System.out.println("User name is " + apiClass.get_userID());
        } catch (SchedulerException e) {
            e.printStackTrace();
        }
    }
}
Correct me if my understanding is wrong.