What I want to be able to do is read a file listing a bunch of batch files (and their args) and create a Quartz job for each of those entries. I know I'm missing something obvious, but I can't seem to find how to do this anywhere on Google. Everything I'm finding says a new class has to be coded for each job, and that the class can't be externally constructed. I can't seem to find how to create an instance of a class that I can pass into the scheduler.
public class MyJob implements Job {
    private String[] jobArgs = null;

    public MyJob(String[] jobArgs) {
        this.jobArgs = jobArgs;
    }

    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        ExternalProcess ep = new ExternalProcess();
        try {
            ep.runExecutableCommand(jobArgs);
        } catch (Exception e) {...}
    }
}
public class JobScheduler {
    ...
    List<String[]> jobArgList = loadJobListFromDisk();
    List<MyJob> myJobs = new ArrayList<MyJob>();
    for (String[] jobArgs : jobArgList) {
        MyJob myJob = new MyJob(jobArgs);
        // Is it possible to pass in a reference to an instance somehow
        // instead of letting the scheduler create the instance based on
        // the class definition? I know this syntax doesn't work, but this
        // is the general idea of what I'm trying to do.
        JobDetail jobDetail = JobBuilder.newJob(myJob).withIdentity...
    }
}
In Quartz 2, you need to use a JobDataMap (stored in the JobDetail) to pass parameters to the execute method. It is available there via context.getJobDetail().getJobDataMap().
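A minimal sketch of that approach, reusing the classes from the question (the "jobArgs" key and the identity names are illustrative; note that with a persistent job store only simple values such as Strings serialize safely):

public class MyJob implements Job {
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Read the parameters back out of the JobDetail's JobDataMap
        String jobArgs = context.getJobDetail().getJobDataMap().getString("jobArgs");
        // split jobArgs and run the external command as before...
    }
}

JobDetail jobDetail = JobBuilder.newJob(MyJob.class)
        .withIdentity("myJob", "batchGroup")             // illustrative identity
        .usingJobData("jobArgs", "script.bat arg1 arg2") // illustrative key/value
        .build();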
From the quality time I've had with the Quartz Scheduler, or rather its JobBuilder, it is written in such a dumb way that it only accepts a Class object as a parameter. I was not able to find a way to pass a Job object into the builder.
It's a very, very bad design, and it causes problems when writing generic solutions for Quartz, in version 1.x at least.
I have this code in my Main() function:
DataStream<OutputObject> asyncResultStream = AsyncDataStream.orderedWait(
        listOfData,
        new CustomAsyncConnector(),
        5,
        TimeUnit.SECONDS,
        10).setParallelism(3).startNewChain().uid("customUid");
This is the basic format for using AsyncDataStream in Flink 1.2, and the code in CustomAsyncConnector is, at its core, just like every example you will find:
public class CustomAsyncConnector extends RichAsyncFunction<CustomObject, ResultObject> {

    private transient Session client;

    @Override
    public void open(Configuration parameters) throws Exception {
        client = Cluster.builder().addContactPoint("<CustomUrl>")
                .withPort(1234)
                .build()
                .connect("<thisKeyspace>");
    }

    @Override
    public void close() throws Exception {
        client.close();
    }

    @Override
    public void asyncInvoke(final CustomObject ctsa, final AsyncCollector<ResultObject> asyncCollector) throws Exception {
        // Custom code here...
    }
}
Now here are my questions:
1.) What is the proper way to pass "parameters" to the open() function in CustomAsyncConnector from where it is called in my Main() function?
2.) How are the parameters supposed to be used to set up the connection to the client in the open() function?
My guess on the first question is to create a new CustomAsyncConnector() instance in main, call its open() function directly, pass the parameters object to it, and then put that instance in the AsyncDataStream call. However, I am not sure whether that is the best way or, more importantly, the proper way to set the fields in a Configuration-type object (again, I assume something like configParameters.setString("contactPointUrl", "127.0.0.1") is right, but I am not sure). And this leads to my second, and honestly most important, question.
Regarding that second question: the parameters I want to pass to the open() function are the contact point URL, the port number, and the keyspace to be put in .connect(). However, I cannot seem to access them with something like .addContactPoint(parameters.getString("contactPointUrl")). I also tried to see if or where I should call Cluster.builder().getConfiguration(parameters), but I am shooting in the dark as to where that even belongs, if anywhere, whether the parameter names have to be something specific, and so on.
So I hope I didn't word that too poorly, but any and all help would be greatly appreciated.
Thanks in advance!
Here is what ended up working. Still not sure how to pass the configuration parameters to the .open() method, but oh well.
Added this to the CustomAsyncConnector class:
private final CustomProps props;

public CustomAsyncConnector(CustomProps props) {
    super();
    this.props = props;
}
And what I pass in the main() method:
AsyncDataStream
        .unorderedWait(
                dataToProcess,
                new CustomAsyncConnector(props),
                5,
                TimeUnit.SECONDS,
                10);
And I use the props in the .open() method the way I originally wanted to use the parameters.
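For reference, a sketch of what .open() looks like with the injected props; the CustomProps getters are assumed accessors, not part of any Flink API:

@Override
public void open(Configuration parameters) throws Exception {
    // The values come from the constructor-injected props,
    // not from the Configuration argument
    client = Cluster.builder()
            .addContactPoint(props.getContactPointUrl()) // assumed getter
            .withPort(props.getPort())                   // assumed getter
            .build()
            .connect(props.getKeyspace());               // assumed getter
}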
Using Play Framework 1.2.7, I have a class that extends play.jobs.Job and performs database writes (MongoDB, using the Play Morphia plugin).
Here's an abbreviated example:
/* controller */
public static void doThings(@Required String id) {
    User me = User.findById(id);
    notFoundIfNull(me);
    new MyJob(me).now();
}
/* MyJob */
public class MyJob extends Job {
    private final User me;

    public MyJob(User me) {
        this.me = me;
    }

    @Override
    public void doJob() {
        int newValue = me.someInt;
        newValue++;
        me.someInt = newValue;
        me.save();
    }
}
Here's the weird part (weird to me anyway):
The write in the doJob() method does happen the first time the job is executed, and sometimes a second time, but for any additional instantiations of this job the write never occurs. No exceptions are thrown.
If I just remove extends Job from MyJob, instantiate the class myself, and call doJob() directly, it works every time:
/* controller */
public static void doThings(@Required String id) {
    User me = User.findById(id);
    notFoundIfNull(me);
    new MyJob(me).doJob(); // assumes this class no longer extends Job
}
I've been using Play now for 4+ years and have never seen this kind of behavior, and i'm at a loss as to what actually is happening.
I'm not sure, but I think it could be an (unhandled) conflict in the Morphia plugin, specifically around its Context.
I'm fairly sure there is something very similar in the JPA model for Play 1, where there are two contexts.
From your code I notice the object is loaded in the Controller context but saved in the Job context.
When you run it without a Job, Morphia still uses the Controller context.
Try passing only the id and reloading the object inside the Job, or try JPDA remote debugging: trap every call (inside the controller and the job), step into the Play framework, and compare the context objects.
Good luck.
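A minimal sketch of the first suggestion, reusing the classes from the question: pass only the id into the job and reload the entity inside doJob(), so that both the load and the save happen in the Job context:

/* controller */
public static void doThings(@Required String id) {
    notFoundIfNull(User.findById(id));
    new MyJob(id).now();
}

/* MyJob */
public class MyJob extends Job {
    private final String userId;

    public MyJob(String userId) {
        this.userId = userId;
    }

    @Override
    public void doJob() {
        User me = User.findById(userId); // reloaded in the Job context
        me.someInt++;
        me.save();
    }
}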
For my batch application, I have a handful of steps I need to take prior to the execution of the Spring Batch job. For instance, I need to do a specific query and store data in a property - a List with a complex type (List<ComplexType>) - so that it can be used and manipulated throughout the Spring Batch job (primarily in the ItemReader).
I've tried autowiring in my list and accessing it in the step, but I can't access the data in the list that way. I get a null ComplexType in my List, no matter what values have been added to my autowired List property prior to the job.
I have also tried passing data using ExecutionContext, but I don't think this is supposed to work outside the Spring Batch job execution.
What I want to know is: what is the best way to populate an object prior to executing a Spring Batch job and keep that object available throughout the lifecycle of the job?
If the best way is one of the approaches I've already attempted, any guidance on common mistakes with those approaches is appreciated. Thanks.
Thanks Luca Basso Ricci for the JobExecutionListener pointer. I ended up creating my own StepExecutionListener where my pre-step processing would happen.
I followed this example from Mkyong which goes over different types of Spring Batch Listeners.
I created a custom listener like this one in the Java code:
public class CustomStepListener implements StepExecutionListener {

    @Autowired
    private CustomObject customObject;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // initialize customObject and do other pre-step setup
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return null;
    }
}
I initialize the autowired CustomObject in beforeStep. The CustomObject class is a custom bean that simply contains my List of type ComplexType:
@Component
public class CustomObject {

    private List<ComplexType> customObjectList;

    public List<ComplexType> getCustomObjectList() {
        return customObjectList;
    }

    public void setCustomObjectList(List<ComplexType> customObjectList) {
        this.customObjectList = customObjectList;
    }
}
Finally, in my job configuration 'batch-job-context.xml' I added my new listener:
<!-- ... -->
<beans:bean id="customStepListener"
    class="com.robotsquidward.CustomStepListener"/>

<job id="robotsquidwardJob"
    job-repository="jobRepository"
    incrementer="runIdIncrementer">
    <step id="robotsquidwardStep">
        <tasklet task-executor="taskExecutor" throttle-limit="1">
            <chunk
                reader="robotsquidwardReader"
                processor="robotsquidwardProcessor"
                writer="robotsquidwardWriter"
                commit-interval="1"/>
            <listeners>
                <listener ref="customStepListener"/>
            </listeners>
        </tasklet>
    </step>
</job>
When I followed these steps, I was able to initialize my List of ComplexType within the beforeStep function and access its values within my job's reader class:
@Component
@Scope(value = "step")
public class RobotsquidwardReader implements ItemReader<ComplexType> {

    @Autowired
    private CustomObject customObject;

    @Override
    public ComplexType read() throws Exception, UnexpectedInputException,
            ParseException, NonTransientResourceException {
        if (customObject.getCustomObjectList() != null) {
            return customObject.getCustomObjectList().remove(0);
        } else {
            return null;
        }
    }
}
Easy as that. All it took is two new classes, a config change, and a major headache :)
You can do this :)
Try this trick with the job parameters incrementer:
<j:job id="Your_job" incrementer="incrementerBean">
and
<bean id="incrementerBean" class="com.whatever.IncrementerClass"/>
The incrementer class:
class IncrementerClass implements JobParametersIncrementer {

    @Override
    public JobParameters getNext(JobParameters parameters) {
        Map<String, JobParameter> map = new HashMap<String, JobParameter>(
                parameters.getParameters());
        ...
        // You can put your list here, if it fits the constraint:
        // only the following types can be job parameters:
        // String, Long, Date, and Double.

        // Make some query
        List<String> listStrings = Query.getYourQuery();
        // Join the query results into a single string, something like this:
        map.put("listOfSomething", new JobParameter("abc, abc, abc"));
        ...
        return new JobParameters(map);
    }
}
And that's all.
You can then use this parameter, for example in some processing bean:

@Value("#{jobParameters['listOfSomething']}")
private String yourList;

You can rebuild your list from the string, and that's all :)
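For instance, a minimal sketch of the rebuild (the key and the comma separator match the incrementer above; Arrays and List come from java.util):

@Value("#{jobParameters['listOfSomething']}")
private String joinedList;

// Split the comma-joined job parameter back into a list
public List<String> asList() {
    return Arrays.asList(joinedList.split(",\\s*"));
}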
good luck
I have a method in my bean, which executes periodically:
@Scheduled(fixedRate = xx)
public void runPeriodically() {
    // do smt...
}
Now I want to find out the time of its previous execution. How can I do that? I have read about the Trigger interface, but it's not clear to me how to use it for this.
I might be missing something, but wouldn't a simple instance variable do the job?
private Date lastRun;

@Scheduled(fixedRate = xx)
public void runPeriodically() {
    // do smt...
    lastRun = new Date();
}
As for the Trigger interface: you can't use @Scheduled in combination with the Trigger interface. At least not out of the box. If you want to use Trigger, you need to use a TaskScheduler and "feed" it with Trigger objects. E.g.
scheduler.schedule(task, new CronTrigger("0 15 9-17 * * MON-FRI"));
CronTrigger obviously implements Trigger, so you have all your interface methods there.
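A minimal sketch with Spring's ThreadPoolTaskScheduler (in a real application you would usually declare the scheduler as a bean rather than build it inline):

ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
scheduler.initialize();
scheduler.schedule(
        () -> System.out.println("task ran"),      // your Runnable
        new CronTrigger("0 15 9-17 * * MON-FRI")); // quarter past every hour, 9-17, Mon-Fri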
I have an object, nicely configured with everything it needs to do its job. If I could just call run() on it once a day, my life would be complete.
To be clear, I know how to create a schedule and a trigger. But the methods to schedule a job all take a JobDetail, which wants to create a new instance of my class. How do I use the one that I have?
In short, is there a nice way without Spring to call a method on my object using Quartz?
If you are using Quartz with Spring you can do the following :
Sample code
MethodInvokingJobDetailFactoryBean jobDetailfactory = new MethodInvokingJobDetailFactoryBean();
jobDetailfactory.setTargetObject(configuredObject);
jobDetailfactory.setTargetMethod("methodName");
Here configuredObject is your nicely configured object and methodName is the name of the method to be invoked. You can autowire the configuredObject into this class.
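To actually hand the job to the scheduler, the factory bean has to be initialized first. A sketch, assuming a Quartz Scheduler and Trigger built elsewhere (when used as a Spring bean, the container calls afterPropertiesSet() for you):

jobDetailfactory.setName("myJobName");  // illustrative job name
jobDetailfactory.afterPropertiesSet();  // builds the underlying JobDetail
JobDetail jobDetail = jobDetailfactory.getObject();
scheduler.scheduleJob(jobDetail, trigger);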
You can use the Quartz JobBuilder to build a Quartz JobDetail object from your own job-details class, if I understand you correctly. Let me know if this is not what you need.
Suppose jobInfo is an instance of your own class holding the job details. Then you can use it like this:
JobDataMap jobDataMap = new JobDataMap();
Map<String, Object> jobParams = jobInfo.getJobParams();
for (String paramKey : jobParams.keySet()) {
    jobDataMap.put(paramKey, jobParams.get(paramKey));
}

JobBuilder jobBuilder = JobBuilder.newJob(); // obtain a builder first
jobBuilder.ofType((Class<? extends Job>) Class.forName(jobInfo.getJobClass()))
        .withIdentity(jobInfo.getName(), jobInfo.getGroup())
        .withDescription(jobInfo.getDescription())
        .storeDurably(jobInfo.isStoreDurably())
        .usingJobData(jobDataMap);
JobDetail jobDetail = jobBuilder.build();
Here's some code (Kotlin):
// jobDataKey is a String constant (not shown in the original snippet)
fun createJobDetail(jobName: String, function: () -> Unit) = JobBuilder.newJob(MyJob::class.java)
        .withIdentity(jobName)
        .usingJobData(JobDataMap(mapOf(jobDataKey to function)))
        .build()
@DisallowConcurrentExecution
class MyJob : Job {
    @Suppress("UNCHECKED_CAST")
    override fun execute(context: JobExecutionContext) {
        try {
            (context.jobDetail.jobDataMap[jobDataKey] as () -> Unit)()
        } catch (e: Exception) {
            throw JobExecutionException(e)
        }
    }
}
Instead of using Quartz, you might be better off using the built-in java.util.concurrent.ScheduledExecutorService and its scheduleAtFixedRate() method.
For example:
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1,
        new ThreadFactory() {
            @Override
            public Thread newThread(Runnable runnable) {
                Thread t = Executors.defaultThreadFactory().newThread(runnable);
                t.setDaemon(true);
                return t;
            }
        });

scheduler.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        myLovelyObject.run();
    }
}, 0, 24, TimeUnit.HOURS);
If you need to use Quartz, you could always store a reference to your object in a static field in the Job class. Not elegant, but not exactly the end of the world either.
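A sketch of that workaround; MyLovelyObject stands in for your configured object, and the static field must be set before the scheduler first fires the job:

public class RunMyObjectJob implements Job {
    // Set once at startup, e.g. right before scheduling the job
    public static volatile MyLovelyObject instance;

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        if (instance != null) {
            instance.run();
        }
    }
}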