How can an interface type appear as a parameter in Spring Batch's JobLauncher? - java

public interface JobLauncher {
    public JobExecution run(Job job, JobParameters jp) throws JobExecutionAlreadyRunningException,
            JobRestartException, JobInstanceAlreadyCompleteException, JobParametersInvalidException;
}
This is part of the Spring Batch JobLauncher interface.
The run method has to be implemented by an implementing class.
Does "job" in run(Job job, JobParameters jp) represent an object of the interface Job? But by Java's rules you cannot create an object of an interface...
Please explain with an example using the Job interface.

Does "job" in run(Job job, JobParameters jp) represents an object of
interface Job? but with Java's logic you cannot create object of any
interface..
You are right, you cannot instantiate an interface. The reason the parameter's type is Job is so that you can pass the method any object whose class implements the Job interface. You can create multiple Job objects, each doing different things, and use the one JobLauncher to launch them all.
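For illustration, here is a minimal hand-rolled class implementing Job (a sketch only; real jobs are normally built for you, as shown below):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParametersIncrementer;
import org.springframework.batch.core.JobParametersValidator;
import org.springframework.batch.core.job.DefaultJobParametersValidator;

// A hypothetical hand-rolled Job: JobLauncher.run(Job, JobParameters) accepts
// it because its class implements the Job interface.
public class PrintingJob implements Job {

    @Override
    public String getName() {
        return "printingJob";
    }

    @Override
    public boolean isRestartable() {
        return false;
    }

    @Override
    public void execute(JobExecution execution) {
        // Real jobs delegate to steps and update the execution's status;
        // this toy implementation just prints a line.
        System.out.println("Running " + getName());
    }

    @Override
    public JobParametersIncrementer getJobParametersIncrementer() {
        return null; // no incrementer needed for this example
    }

    @Override
    public JobParametersValidator getJobParametersValidator() {
        return new DefaultJobParametersValidator();
    }
}

jobLauncher.run(new PrintingJob(), jobParameters) would then compile and run, because a PrintingJob is-a Job.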
You can use a JobBuilderFactory to create a Job that can include many steps.
A good example of this can be found here: https://spring.io/guides/gs/batch-processing/
@Bean
public Job importUserJob(JobCompletionNotificationListener listener) {
    return jobBuilderFactory.get("importUserJob")
        .incrementer(new RunIdIncrementer())
        .listener(listener)
        .flow(step1())
        .end()
        .build();
}
Using the above will create a FlowJob object in Spring Batch, which is compatible with JobLauncher's run method because FlowJob implements Job.
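Wiring it together might look like this (a sketch, assuming the importUserJob bean above and an autowired JobLauncher):

@Autowired
private JobLauncher jobLauncher;

@Autowired
private Job importUserJob; // the FlowJob built by the @Bean method above

public void launch() throws Exception {
    // Unique parameters so each launch creates a new JobInstance
    JobParameters params = new JobParametersBuilder()
        .addLong("time", System.currentTimeMillis())
        .toJobParameters();
    jobLauncher.run(importUserJob, params); // accepts any Job implementation
}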

When and why do we need ApplicationRunner and Runner interface?

I'm learning Spring Boot. What are some typical use cases for ApplicationRunner or any runner interface?
import org.junit.jupiter.api.Test;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class PersistencedemoApplicationTests implements ApplicationRunner {

    @Test
    void contextLoads() {
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        // load initial data in test DB
    }
}
This is one case I'm aware of. Anything else?
These runners are used to run logic on application startup. For example, Spring Boot has ApplicationRunner, a functional interface with a run method.
ApplicationRunner's run() executes just after the ApplicationContext is created and before the Spring Boot application finishes starting up.
ApplicationRunner takes an ApplicationArguments parameter, which has convenient methods like getOptionNames(), getOptionValues() and getSourceArgs().
CommandLineRunner is also a functional interface with a run method. Its run() executes at the same point in startup and accepts the raw arguments passed at server startup.
Both provide the same functionality; the only difference between CommandLineRunner and ApplicationRunner is that CommandLineRunner.run() accepts a String array whereas ApplicationRunner.run() accepts an ApplicationArguments argument.
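For instance, a minimal sketch of an ApplicationRunner that exercises those ApplicationArguments methods (the class name is illustrative):

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

// Run with e.g. `java -jar app.jar --mode=fast sourceFile.txt`
@Component
public class ArgsLoggingRunner implements ApplicationRunner {

    @Override
    public void run(ApplicationArguments args) throws Exception {
        System.out.println("option names: " + args.getOptionNames());       // [mode]
        System.out.println("mode values: " + args.getOptionValues("mode")); // [fast]
        System.out.println("non-options: " + args.getNonOptionArgs());      // [sourceFile.txt]
        System.out.println("raw args: " + String.join(" ", args.getSourceArgs()));
    }
}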
To use ApplicationRunner or CommandLineRunner, create a Spring bean that implements one of the two interfaces; both behave the same way, and your Spring application will detect the bean automatically.
In addition, you can create multiple ApplicationRunner or CommandLineRunner beans and control their ordering by either:
implementing the org.springframework.core.Ordered interface, or
applying the org.springframework.core.annotation.Order annotation, as in the sketch below.
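A minimal sketch of two ordered runners (class names are illustrative):

import org.springframework.boot.CommandLineRunner;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;

@Component
@Order(1) // lower values run first
class FirstRunner implements CommandLineRunner {
    @Override
    public void run(String... args) {
        System.out.println("first");
    }
}

@Component
@Order(2)
class SecondRunner implements CommandLineRunner {
    @Override
    public void run(String... args) {
        System.out.println("second");
    }
}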
Use cases:
You might wish to log some command line arguments.
You could print some instructions for the user once the application has started.
Consider:
@Component
public class MyBean implements CommandLineRunner {

    private static final Logger logger = LoggerFactory.getLogger(MyBean.class);

    @Override
    public void run(String... args) throws Exception {
        logger.info("App started with arguments: " + Arrays.toString(args));
    }
}
Details on ApplicationRunner
ApplicationRunner and CommandLineRunner are two interfaces Spring Boot provides to run custom code just before the application has fully started.
Spring Batch is a batch processing framework; it uses CommandLineRunner to register and start batch jobs at application startup.
You can also use these interfaces to load master data into a cache or perform health checks.
The use case varies from application to application.
Or like this:
@Bean
ApplicationRunner appStarted() {
    return args -> {
        logger.info("Ready to go - args: {}", args.getNonOptionArgs());
    };
}

Spring Batch + Spring API REST

I have set up a Java batch project with Spring Batch that persists the rows of a CSV file into a database table.
I would like to know whether it is possible, with a Spring REST API, to trigger the batch via a POST method that attaches the necessary CSV.
Thank you in advance.
You can do that using a controller with a JobLauncher and a Job. The bare bones of the controller would look like this:
@RestController
public class MyController {

    // Usually provided by Spring Batch
    private JobLauncher jobLauncher;
    // Your Job
    private Job job;

    // Ctor
    public MyController(JobLauncher jobLauncher, Job job, ...) {}

    @PostMapping("/")
    public String launchJob(...) {
        ...
        // Create JobParameters and launch
        JobParameters jobParameters = new JobParameters();
        jobLauncher.run(job, jobParameters);
        ...
    }
}
SimpleJobLauncher, the default implementation of JobLauncher, uses a synchronous task executor by default; you'll probably want to change it to an asynchronous one, depending on your requirements.
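For example, a minimal sketch of such an asynchronous launcher, defined in a @Configuration class (this assumes the jobRepository bean that a standard Spring Batch setup provides):

@Bean
public JobLauncher asyncJobLauncher(JobRepository jobRepository) throws Exception {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
    jobLauncher.setJobRepository(jobRepository);
    // Launch jobs on a separate thread so the POST request returns immediately
    jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
    jobLauncher.afterPropertiesSet(); // validates the configuration
    return jobLauncher;
}

With this in place, run() returns a JobExecution right away while the job keeps running in the background.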

How to create and launch spring batch jobs at runtime

We have a requirement to move data from one database to another and are exploring Spring Batch for this purpose. The user of our application selects the source and target datasources along with the list of tables for which the data needs to be moved.
We need help with the following:
The information necessary to build a job arrives at runtime from our web application - it includes the datasource details and the list of table names. We would like to create a new job by sending these details to a job builder module and launch it using JobLauncher. How do we write this job builder module?
We may have multiple users raising data movement requests in parallel, so we need a way to create multiple jobs and run them in a suitable order.
We have used Java-based configuration to create a job and launch it from a web container. The configuration is as follows:
@Bean
public Job loadDataJob(JobCompletionNotificationListener listener) {
    RunIdIncrementer inc = new RunIdIncrementer();
    inc.setKey(new Date().toString());
    JobBuilder builder = jobBuilderFactory.get("loadDataJob")
        .incrementer(inc)
        .listener(listener);
    SimpleJobBuilder simpleBuilder = builder.start(preExecute());
    for (String s : getTables()) {
        simpleBuilder.next(etlTable(s));
    }
    simpleBuilder.next(postExecute());
    return simpleBuilder.build();
}

@Bean
@Scope("prototype")
public Step etlTable(String tableName) {
    return stepBuilderFactory.get(tableName)
        .<Map<String, Object>, Map<String, Object>>chunk(1000)
        .reader(dbDataReader(tableName))
        .processor(processor())
        .writer(dbDataWriter(tableName))
        .build();
}
Currently we have hardcoded the source and target datasource details into the respective beans. getTables() returns a (hardcoded) list of tables for which the data needs to be moved.
The RestController that launches the job:
@RestController
public class MyController {

    @Autowired
    JobLauncher jobLauncher;

    @Autowired
    Job job;

    @RequestMapping("/launchjob")
    public String handle() throws Exception {
        try {
            JobParameters jobParameters = new JobParametersBuilder()
                .addLong("time", new Date().getTime())
                .toJobParameters();
            jobLauncher.run(job, jobParameters);
        } catch (Exception e) {
            // swallowed in the original; at least log the failure
        }
        return "Done";
    }
}
Concerning your first question, you definitely have to use Java configuration. Moreover, you shouldn't define your steps as Spring beans if you want to create a job with a dynamic number of steps (for instance, one step per table you have to copy).
I've written a couple of answers to questions about how to create jobs dynamically. Have a look at them; they might be helpful:
Spring batch execute dynamically generated steps in a tasklet
Spring batch repeat step ending up in never ending loop
Spring Batch - How to generate parallel steps based on params created in a previous step
Spring Batch - Looping a reader/processor/writer step
Edited
Some remarks concerning your second question:
First, you are using a normal JobLauncher, and I assume you instantiate the SimpleJobLauncher. This means you can provide a job together with job parameters, as shown in your code above. However, the provided Job does not have to be a Spring bean, so you don't have to autowire it; you can use create methods as I suggested in the answers to the questions mentioned above.
Second, if you create your Job instance dynamically for every request, there is no need to pass the whole configuration as job parameters: you can pass the configuration properties, like the datasource and the tables to be copied, directly as parameters to your createJob method. You could even create your DataSource instances on the fly if you don't know all possible datasources in advance.
Third, I would treat every request as a single run which cannot be restarted. Hence, I'd just put some meta information into the job parameters, such as the user, date/time, datasource names (URLs) and the list of tables to be copied. I would use this information only as logging/auditing of which requests were issued; I wouldn't use the JobParameters instances as control parameters inside the job itself. Since you pass the values of these parameters to your create methods at construction time of the job and steps, the structure of the job is created according to your parameters, and at runtime there is nothing left to decide based on the job parameters.
Finally, if a request fails (meaning the job exits with an error), simply execute a new request in order to retry. This would be a completely new request, not a restart of an already executed launch: since I would add the request time to my job parameters, every launch is unique.
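A sketch of such audit-only parameters (the variable names requestUser, sourceUrl and tables are illustrative):

JobParameters jobParameters = new JobParametersBuilder()
    .addString("user", requestUser)                     // who issued the request
    .addLong("requestTime", System.currentTimeMillis()) // makes every launch unique
    .addString("sourceDatasource", sourceUrl)           // audit only, not control flow
    .addString("tables", String.join(",", tables))      // audit only, not control flow
    .toJobParameters();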
Edited 2:
Not creating the Job as a bean doesn't mean giving up autowiring. Here is an example of how I would structure my beans:
@Component
@EnableBatchProcessing
@Import() // list with imports as needed
public class JobCreatorComponent {

    @Autowired
    private StepBuilderFactory stepBuilder;

    @Autowired
    private JobBuilderFactory jobBuilder;

    public Job createJob(all the parameters you need) {
        return jobBuilder.get(). ....
    }
}
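As a hedged sketch (not the author's actual code), the elided createJob body might look something like this, with one dynamically built step per table; etlStep, reader, processor and writer are illustrative helpers, not Spring beans:

public Job createJob(String jobName, List<String> tables) {
    SimpleJobBuilder builder = jobBuilder.get(jobName + "-" + System.currentTimeMillis())
        .start(etlStep(tables.get(0)));
    for (String table : tables.subList(1, tables.size())) {
        builder = builder.next(etlStep(table)); // one step per table, built on the fly
    }
    return builder.build();
}

private Step etlStep(String table) {
    return stepBuilder.get("etl-" + table)
        .<Map<String, Object>, Map<String, Object>>chunk(1000)
        .reader(reader(table))
        .processor(processor())
        .writer(writer(table))
        .build();
}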
@RestController
@Import(JobCreatorComponent.class)
public class MyController {

    @Autowired
    JobLauncher jobLauncher;

    @Autowired
    JobCreatorComponent jobCreator;

    @RequestMapping("/launchjob")
    public String handle() throws Exception {
        try {
            Job job = jobCreator.createJob(... params ...);
            JobParameters jobParameters = new JobParametersBuilder()
                .addLong("time", new Date().getTime())
                .toJobParameters();
            jobLauncher.run(job, jobParameters);
        } catch (Exception e) {
            // log/handle the failure
        }
        return "Done";
    }
}
By using @JobScope on the ItemReader, there is no need to do things manually at runtime: just annotate your reader with @JobScope, and on each interaction with the controller you get a fresh reader processing the records.
This is an on-demand kind of job, where you can execute the job for goals like a DB migration or producing a specific report.
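A sketch of such a job-scoped reader, pulling the table name from the job parameters (the parameter name tableName is illustrative):

@Bean
@JobScope
public JdbcCursorItemReader<Map<String, Object>> tableReader(
        @Value("#{jobParameters['tableName']}") String tableName,
        DataSource dataSource) {
    JdbcCursorItemReader<Map<String, Object>> reader = new JdbcCursorItemReader<>();
    reader.setDataSource(dataSource);
    reader.setSql("SELECT * FROM " + tableName); // a fresh reader is created per job execution
    reader.setRowMapper(new ColumnMapRowMapper());
    return reader;
}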

Running a spring batch job multiple times

I have created a Spring Batch job with Spring Boot using the tutorial below:
https://spring.io/guides/gs/batch-processing/
The job is reading a file and writing to a database as expected.
However, I now have a use case to run this job multiple times. I have an ArrayList of parameters.
What changes do I need to make so that I can run the job once for each element of my ArrayList?
You can kick-start your batch job manually like this:
@Component
class SomeClass {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private Job job;

    public void someFunction() throws Exception {
        jobLauncher.run(job, new JobParameters());
    }
}
The only thing is that you cannot restart a batch job that has already completed; it throws an error saying the status is COMPLETED. For repeated runs with the same parameters to work, you have to set the allowStartIfComplete property to true. This is done in your batch step configuration, something like this:
stepBuilderFactory.get("step1")
.<Person, Person> chunk(10)
.reader(reader())
.processor(processor())
.writer(writer())
.allowStartIfComplete(true)
.build();
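With that in place, a sketch of driving the job from your list (the parameter names are illustrative); each distinct parameter set yields its own JobInstance:

public void runForAll(List<String> inputs) throws Exception {
    for (String input : inputs) {
        JobParameters params = new JobParametersBuilder()
            .addString("input", input)
            .addLong("time", System.currentTimeMillis()) // guarantees uniqueness
            .toJobParameters();
        jobLauncher.run(job, params);
    }
}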

Correct way to persist Quartz triggers in database

I'm quite new to Quartz, and now I need to schedule some jobs in a Spring web application.
I know about Spring + Quartz integration (I'm using Spring v3.1.1), but I'm wondering if this is the right way to go.
In particular, I need to persist my scheduled tasks in a DB so I can re-initialize them when the application is restarted.
Are there utilities provided by the Spring scheduling wrapper to do this?
Can you suggest a well-known approach to follow?
Here is one way I handle this scenario.
First, in my Spring configuration, I specify a SchedulerFactoryBean from which I can inject the Scheduler into other beans:
<bean name="SchedulerFactory"
class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="applicationContextSchedulerContextKey">
<value>applicationContext</value>
</property>
</bean>
Then, when I create a job in my application, I store the details of the job in the database. This service is called by one of my controllers, and it schedules the job:
@Component
public class FollowJobService {

    @Autowired
    private FollowJobRepository followJobRepository;

    @Autowired
    Scheduler scheduler;

    @Autowired
    ListableBeanFactory beanFactory;

    @Autowired
    JobSchedulerLocator locator;

    public FollowJob findByClient(Client client) {
        return followJobRepository.findByClient(client);
    }

    public void saveAndSchedule(FollowJob job) {
        job.setJobType(JobType.FOLLOW_JOB);
        job.setCreatedDt(new Date());
        job.setIsEnabled(true);
        job.setIsCompleted(false);
        JobContext context = new JobContext(beanFactory, scheduler, locator, job);
        job.setQuartzGroup(context.getQuartzGroup());
        job.setQuartzName(context.getQuartzName());
        followJobRepository.save(job);
        JobSchedulerUtil.schedule(new JobContext(beanFactory, scheduler, locator, job));
    }
}
The JobContext I build contains details about the job and is eventually passed to a utility for scheduling jobs. Here is the code for the utility method that actually schedules the job. Notice that in my service I autowire the Scheduler and pass it to the JobContext, and that I store the job in the database using my repository.
/**
 * Schedules a DATA_MINING_JOB for the client. The job will attempt to enter
 * followers of the target into the database.
 */
@Override
public void schedule(JobContext context) {
    Client client = context.getNetworkSociallyJob().getClient();
    this.logScheduleAttempt(context, client);
    JobDetail jobDetails = JobBuilder.newJob(this.getJobClass())
        .withIdentity(context.getQuartzName(), context.getQuartzGroup()).build();
    jobDetails.getJobDataMap().put("job", context.getNetworkSociallyJob());
    jobDetails.getJobDataMap().put("repositories", context.getRepositories());
    Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity(context.getQuartzName() + "-trigger", context.getQuartzGroup())
        .withSchedule(cronSchedule(this.getSchedule())).build();
    try {
        context.getScheduler().scheduleJob(jobDetails, trigger);
        this.logSuccess(context, client);
    } catch (SchedulerException e) {
        this.logFailure(context, client);
        e.printStackTrace();
    }
}
So after all of this code executes, two things have happened: my job is stored in the database, and it has been scheduled using the Quartz scheduler. Now if the application restarts, I want to reschedule my jobs with the scheduler. To do this, I register a bean that implements ApplicationListener<ContextRefreshedEvent>, which is called by Spring each time the container is started or restarted.
<bean id="jobInitializer" class="com.network.socially.web.jobs.JobInitializer"/>
JobInitializer.class
public class JobInitializer implements ApplicationListener<ContextRefreshedEvent> {

    Logger logger = LoggerFactory.getLogger(JobInitializer.class);

    @Autowired
    DataMiningJobRepository repository;

    @Autowired
    ApplicationJobRepository jobRepository;

    @Autowired
    Scheduler scheduler;

    @Autowired
    JobSchedulerLocator locator;

    @Autowired
    ListableBeanFactory beanFactory;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        logger.info("Job initializer started.");
        // TODO: Modify this call to only pull completed & enabled jobs
        for (ApplicationJob applicationJob : jobRepository.findAll()) {
            if (applicationJob.getIsEnabled()
                    && (applicationJob.getIsCompleted() == null || !applicationJob.getIsCompleted())) {
                JobSchedulerUtil.schedule(new JobContext(beanFactory, scheduler, locator, applicationJob));
            }
        }
    }
}
This class autowires the scheduler and a repository that grabs instances of each of my jobs implementing the ApplicationJob interface. Using the information from these database records, I can use my scheduler utility to reconstruct my jobs.
So basically, I manually store the jobs in my database and manually schedule them by injecting an instance of the Scheduler into the appropriate beans. To reschedule them after restarts or starts of the container, I query my database and schedule the jobs from the ApplicationListener.
There is quite a lot of documentation available for Spring and Quartz JDBC job store integration; for instance:
If you're using annotation configuration: http://java.dzone.com/articles/configuring-quartz
If you're using XML configuration: http://arkuarku.wordpress.com/2011/01/06/spring-quartz-using-jdbcjobstore-simple/
Quartz Job Stores: http://quartz-scheduler.org/documentation/quartz-2.x/tutorials/tutorial-lesson-09
Spring 3.x SchedulerFactoryBean for Quartz
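For reference, a minimal Java-config sketch of a persistent scheduler (this assumes the standard Quartz QRTZ_ tables have been created from the DDL scripts shipped with Quartz):

@Bean
public SchedulerFactoryBean schedulerFactory(DataSource dataSource) {
    SchedulerFactoryBean factory = new SchedulerFactoryBean();
    // Setting a DataSource makes Spring configure Quartz with its JDBC job store,
    // so jobs and triggers survive application restarts.
    factory.setDataSource(dataSource);
    factory.setOverwriteExistingJobs(true);
    factory.setApplicationContextSchedulerContextKey("applicationContext");
    return factory;
}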
