I am new to Spring Batch, and I'm encountering an issue when using multiple data sources in my batch.
Let me explain.
I am using 2 databases in my server with Spring Boot.
So far everything worked fine with my implementation of RoutingDataSource.
#Component("dataSource")
public class RoutingDataSource extends AbstractRoutingDataSource {
#Autowired
#Qualifier("datasourceA")
DataSource datasourceA;
#Autowired
#Qualifier("datasourceB")
DataSource datasourceB;
#PostConstruct
public void init() {
setDefaultTargetDataSource(datasourceA);
final Map<Object, Object> map = new HashMap<>();
map.put(Database.A, datasourceA);
map.put(Database.B, datasourceB);
setTargetDataSources(map);
}
#Override
protected Object determineCurrentLookupKey() {
return DatabaseContextHolder.getDatabase();
}
}
The implementation requires a DatabaseContextHolder; here it is:
public class DatabaseContextHolder {

    private static final ThreadLocal<Database> contextHolder = new ThreadLocal<>();

    public static void setDatabase(final Database dbConnection) {
        contextHolder.set(dbConnection);
    }

    public static Database getDatabase() {
        return contextHolder.get();
    }
}
When I receive a request on my server, a basic interceptor sets the current database, based on some input in the request, with the method DatabaseContextHolder.setDatabase(db);. Everything works fine with my existing controllers.
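For illustration, a minimal sketch of such an interceptor, assuming the database is chosen from a request header (the header name and the fallback are made up for this sketch):

// Hypothetical interceptor: resolves the target database from a request header
// and stores it in the ThreadLocal before the controller runs.
public class DatabaseRoutingInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        String db = request.getHeader("X-Database"); // assumed header name
        DatabaseContextHolder.setDatabase(db != null ? Database.valueOf(db) : Database.A);
        return true;
    }
}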
It gets more complicated when I try to run a job with one tasklet.
One of my controllers starts an async task like this:
#GetMapping("/batch")
public void startBatch() {
return jobLauncher.run("myJob", new JobParameters());
}
@EnableBatchProcessing
@Configuration
public class MyBatch extends DefaultBatchConfigurer {

    @Autowired private JobBuilderFactory jobs;
    @Autowired private StepBuilderFactory steps;
    @Autowired private MyTasklet tasklet;

    @Bean
    public Job job(Step step) {
        return jobs.get("myJob").start(step).build();
    }

    @Bean
    protected Step registeredDeliveryTask() {
        return steps.get("myTask").tasklet(tasklet).build();
    }

    /** Overriding the JobLauncher getter to make it asynchronous. */
    @Override
    public JobLauncher getJobLauncher() {
        try {
            SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
            jobLauncher.setJobRepository(super.getJobRepository());
            jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
            jobLauncher.afterPropertiesSet();
            return jobLauncher;
        } catch (Exception e) {
            throw new BatchConfigurationException(e);
        }
    }
}
And my Tasklet:
@Component
public class MyTasklet implements Tasklet {

    @Autowired
    private UserRepository repository;

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        // Do stuff with the repository.
        return RepeatStatus.FINISHED;
    }
}
But the RoutingDataSource doesn't work, even if I set my context before starting the job. For example, if I set my database to B, the repository still works on database A. It is always the default datasource that is selected (because of the line setDefaultTargetDataSource(datasourceA);).
I tried to set the database inside the tasklet, by passing the value through the job parameters, but I still got the same issue.
#GetMapping("/batch")
public void startBatch() {
Map<String, JobParameter> parameters = new HashMap<>();
parameters.put("database", new JobParameter(DatabaseContextHolder.getCircaDatabase().toString()));
return jobLauncher.run("myJob", new JobParameters(parameters));
}
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
    String database =
            chunkContext.getStepContext().getStepExecution().getJobParameters().getString("database");
    DatabaseContextHolder.setDatabase(Database.valueOf(database));
    // Do stuff with the repository.
    return RepeatStatus.FINISHED;
}
I feel like the problem is that the database was set in a different thread: because my job is asynchronous, it cannot see the database that was set before launching the job. But I couldn't find any solution so far.
Regards
Your routing datasource is being used for Spring Batch's meta-data, which means the job repository will interact with a different database depending on the thread processing the request. Routing is not needed (nor wanted) for the batch meta-data: you need to configure Spring Batch to work with a fixed data source.
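A minimal sketch of that fix, assuming Spring Batch 4.x, where DefaultBatchConfigurer can be constructed with the datasource to use for meta-data (the class name and qualifier here are illustrative):

// Sketch: pin the job repository to a concrete datasource instead of the routing proxy.
@Configuration
public class FixedBatchConfigurer extends DefaultBatchConfigurer {

    public FixedBatchConfigurer(@Qualifier("datasourceA") DataSource batchDataSource) {
        super(batchDataSource); // batch meta-data tables always live in database A
    }
}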
Related
I am facing some issues while writing integration tests for Spring Batch jobs. The main problem is that an exception is thrown whenever a transaction is started inside the batch job.
Well, first things first. Imagine this is the step of a simple job. A Tasklet for the sake of simplicity. Of course, it is used in a proper batch config (MyBatchConfig) which I also omit for brevity.
@Component
public class SimpleTask implements Tasklet {

    private final MyRepository myRepository;

    public SimpleTask(MyRepository myRepository) {
        this.myRepository = myRepository;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        myRepository.deleteAll(); // or maybe saveAll() or some other @Transactional method
        return RepeatStatus.FINISHED;
    }
}
MyRepository is a very unspecial CrudRepository.
Now, to test that job I use the following test class.
@SpringBatchTest
@EnableAutoConfiguration
@SpringJUnitConfig(classes = {
        H2DataSourceConfig.class, // <-- this is a configuration bean for an in-memory testing database
        MyBatchConfig.class
})
public class MyBatchJobTest {

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Autowired
    private JobRepositoryTestUtils jobRepositoryTestUtils;

    @Autowired
    private MyRepository myRepository;

    @Test
    public void testJob() throws Exception {
        var testItems = List.of(
                new MyTestItem(1),
                new MyTestItem(2),
                new MyTestItem(3)
        );
        myRepository.saveAll(testItems); // <-- works perfectly well
        jobLauncherTestUtils.launchJob();
    }
}
When it comes to the tasklet execution, and more precisely to the deleteAll() method call, this exception is fired:
org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is java.lang.IllegalStateException: Already value [org.springframework.jdbc.datasource.ConnectionHolder@68f48807] for key [org.springframework.jdbc.datasource.DriverManagerDataSource@49a6f486] bound to thread [SimpleAsyncTaskExecutor-1]
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:448)
...
Do you have any ideas why this is happening?
As a workaround I currently mock the repository with @MockBean and back it with an ArrayList, but this is not what the inventor intended, I guess.
Any advice?
Kind regards
Update 1.1 (includes solution)
The mentioned data source configuration class is
@Configuration
@EnableJpaRepositories(
        basePackages = {"my.project.persistence.repository"},
        entityManagerFactoryRef = "myTestEntityManagerFactory",
        transactionManagerRef = "myTestTransactionManager"
)
@EnableTransactionManagement
public class H2DataSourceConfig {

    @Bean
    public DataSource myTestDataSource() {
        var dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:mem:myDb;DB_CLOSE_DELAY=-1");
        return dataSource;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean myTestEntityManagerFactory() {
        var emFactory = new LocalContainerEntityManagerFactoryBean();
        var adapter = new HibernateJpaVendorAdapter();
        adapter.setDatabasePlatform("org.hibernate.dialect.H2Dialect");
        adapter.setGenerateDdl(true);
        emFactory.setDataSource(myTestDataSource());
        emFactory.setPackagesToScan("my.project.persistence.model");
        emFactory.setJpaVendorAdapter(adapter);
        return emFactory;
    }

    @Bean
    public PlatformTransactionManager myTestTransactionManager() {
        return new JpaTransactionManager(myTestEntityManagerFactory().getObject());
    }

    @Bean
    public BatchConfigurer testBatchConfigurer() {
        return new DefaultBatchConfigurer() {
            @Override
            public PlatformTransactionManager getTransactionManager() {
                return myTestTransactionManager();
            }
        };
    }
}
By default, when you declare a datasource in your application context, Spring Batch will use a DataSourceTransactionManager to drive step transactions, but this transaction manager knows nothing about your JPA context.
If you want to use another transaction manager, you need to override BatchConfigurer#getTransactionManager and return the transaction manager you want to use to drive step transactions. In your case, you are only declaring a transaction manager bean in the application context, which is not enough. Here is a quick example:
@Bean
public BatchConfigurer batchConfigurer() {
    return new DefaultBatchConfigurer() {
        @Override
        public PlatformTransactionManager getTransactionManager() {
            return new JpaTransactionManager(myTestEntityManagerFactory().getObject());
        }
    };
}
For more details, please refer to the reference documentation.
I have a Spring Batch application that gets a file from a Samba server and generates a new file in a different folder on the same server. However, only the ItemReader is called in the flow. What is the problem? Thanks.
BatchConfiguration:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration extends BaseConfiguration {

    @Bean
    public ValeTrocaItemReader reader() {
        return new ValeTrocaItemReader();
    }

    @Bean
    public ValeTrocaItemProcessor processor() {
        return new ValeTrocaItemProcessor();
    }

    @Bean
    public ValeTrocaItemWriter writer() {
        return new ValeTrocaItemWriter();
    }

    @Bean
    public Job importUserJob(JobCompletionNotificationListener listener) throws Exception {
        return jobBuilderFactory()
                .get("importUserJob")
                .incrementer(new RunIdIncrementer())
                .repository(getJobRepository())
                .listener(listener)
                .start(this.step1())
                .build();
    }

    @Bean
    public Step step1() throws Exception {
        return stepBuilderFactory()
                .get("step1")
                .<ValeTroca, ValeTroca>chunk(10)
                .reader(this.reader())
                .processor(this.processor())
                .writer(this.writer())
                .build();
    }
}
BaseConfiguration:
public class BaseConfiguration implements BatchConfigurer {

    @Bean
    @Override
    public PlatformTransactionManager getTransactionManager() {
        return new ResourcelessTransactionManager();
    }

    @Bean
    @Override
    public SimpleJobLauncher getJobLauncher() throws Exception {
        final SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
        simpleJobLauncher.setJobRepository(this.getJobRepository());
        return simpleJobLauncher;
    }

    @Bean
    @Override
    public JobRepository getJobRepository() throws Exception {
        return new MapJobRepositoryFactoryBean(this.getTransactionManager()).getObject();
    }

    @Bean
    @Override
    public JobExplorer getJobExplorer() {
        MapJobRepositoryFactoryBean repositoryFactory = this.getMapJobRepositoryFactoryBean();
        return new SimpleJobExplorer(repositoryFactory.getJobInstanceDao(), repositoryFactory.getJobExecutionDao(),
                repositoryFactory.getStepExecutionDao(), repositoryFactory.getExecutionContextDao());
    }

    @Bean
    public MapJobRepositoryFactoryBean getMapJobRepositoryFactoryBean() {
        return new MapJobRepositoryFactoryBean(this.getTransactionManager());
    }

    @Bean
    public JobBuilderFactory jobBuilderFactory() throws Exception {
        return new JobBuilderFactory(this.getJobRepository());
    }

    @Bean
    public StepBuilderFactory stepBuilderFactory() throws Exception {
        return new StepBuilderFactory(this.getJobRepository(), this.getTransactionManager());
    }
}
ValeTrocaItemReader:
@Configuration
public class ValeTrocaItemReader implements ItemReader<ValeTroca> {

    @Value(value = "${url}")
    private String url;

    @Value(value = "${user}")
    private String user;

    @Value(value = "${password}")
    private String password;

    @Value(value = "${domain}")
    private String domain;

    @Value(value = "${inputDirectory}")
    private String inputDirectory;

    @Bean
    @Override
    public ValeTroca read() throws MalformedURLException, SmbException, IOException, Exception {
        File tempOutputFile = getInputFile();
        DefaultLineMapper<ValeTroca> lineMapper = new DefaultLineMapper<>();
        lineMapper.setLineTokenizer(new DelimitedLineTokenizer() {
            {
                setDelimiter(";");
                setNames(new String[]{"id_participante", "cpf", "valor"});
            }
        });
        lineMapper.setFieldSetMapper(
                new BeanWrapperFieldSetMapper<ValeTroca>() {
                    {
                        setTargetType(ValeTroca.class);
                    }
                });
        FlatFileItemReader<ValeTroca> itemReader = new FlatFileItemReader<>();
        itemReader.setLinesToSkip(1);
        itemReader.setResource(new FileUrlResource(tempOutputFile.getCanonicalPath()));
        itemReader.setLineMapper(lineMapper);
        itemReader.open(new ExecutionContext());
        tempOutputFile.deleteOnExit();
        return itemReader.read();
    }
}
Sample of ItemProcessor:
public class ValeTrocaItemProcessor implements ItemProcessor<ValeTroca, ValeTroca> {

    @Override
    public ValeTroca process(ValeTroca item) {
        // Do anything
        ValeTroca item2 = item;
        System.out.println(item2.getCpf());
        return item2;
    }
}
EDIT:
- Spring Boot 2.1.2.RELEASE
- Spring Batch 4.1.1.RELEASE
Looking at your configuration, here are a couple of notes:
BatchConfiguration looks good. That's a typical job with a single chunk-oriented step.
BaseConfiguration is actually the default configuration you get when using @EnableBatchProcessing without providing a datasource, so this class can be removed.
Adding @Configuration on ValeTrocaItemReader and marking the method read() with @Bean is not correct. It means you are declaring a bean named read of type ValeTroca in your application context. Moreover, your custom reader uses a FlatFileItemReader but adds no value over a plain FlatFileItemReader. You can declare your reader as a FlatFileItemReader and configure it as needed (resource, line mapper, etc.); see the sketch below. This will also avoid the mistake of opening the execution context in the read method, which should be done when initializing the reader, or in the ItemStream#open method if the reader implements ItemStream.
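For illustration, a minimal sketch of that suggestion, reusing the getInputFile() helper and field names from the question (both are assumptions here), with the reader declared step-scoped so the step itself drives open/read/close:

// Sketch: declare the reader as a FlatFileItemReader bean and let the step
// register it as an ItemStream; no manual open() or read() calls needed.
@Bean
@StepScope
public FlatFileItemReader<ValeTroca> valeTrocaItemReader() throws IOException {
    File tempOutputFile = getInputFile(); // assumed helper that fetches the file from the Samba server

    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer(";");
    tokenizer.setNames("id_participante", "cpf", "valor");

    BeanWrapperFieldSetMapper<ValeTroca> fieldSetMapper = new BeanWrapperFieldSetMapper<>();
    fieldSetMapper.setTargetType(ValeTroca.class);

    DefaultLineMapper<ValeTroca> lineMapper = new DefaultLineMapper<>();
    lineMapper.setLineTokenizer(tokenizer);
    lineMapper.setFieldSetMapper(fieldSetMapper);

    FlatFileItemReader<ValeTroca> reader = new FlatFileItemReader<>();
    reader.setLinesToSkip(1);
    reader.setResource(new FileSystemResource(tempOutputFile));
    reader.setLineMapper(lineMapper);
    return reader;
}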
Other than that, I don't see from what you shared why the processor and writer are not called.
SOLVED: The problem was that even though I'm not using any database for my own data, Spring Batch, although configured to keep the JobRepository in memory, still needs a database (usually H2) to save its configuration tables, jobs, etc.
In this case, the JDBC and H2 dependencies were missing from pom.xml. Adding them to the project solved the problem!
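For reference, the dependencies in question would look roughly like this in pom.xml (the exact artifacts are an assumption based on the description above; versions come from the Spring Boot parent):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>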
In Spring Integration we have a Setup that looks something like this:
(dispatcher) Messages --> Gateway ----> QueueChannel ---> MessageHandler (worker)
                                                     ---> MessageHandler (worker)
                                                     ---> MessageHandler (worker)
So we have one dispatcher thread that takes messages from an MQTT broker and forwards them into the queue. The poller for the queue is provided with a TaskExecutor, so the consumer side is multithreaded.
We have already implemented all of the functionality just described.
Now, to guarantee no data loss, we want two things:
1. We want our queue to persist the data, so that when the program shuts down ungracefully, all the data in the queue will still be there. This already works for us; we are using MongoDB as the database because we read somewhere in your docs that this is the recommended way to do it.
2. We want to ensure that the worker threads operate transactionally: only if a worker thread returns correctly should the message be permanently deleted from the queue (and therefore from the persistent MessageStore). If the program shuts down during the processing of a message (by a worker thread), the message should still be in the queue at the next startup. Also, if the worker, for example, throws an exception during the processing of the message, it should be put back into the queue.
Our implementation:
As explained before, the basic setup of the program is already implemented. We then extended the basic implementation with a message store implementation for the queue.
QueueChannel:
@Bean
public PollableChannel inputChannel(BasicMessageGroupStore mongoDbChannelMessageStore) {
    return new QueueChannel(new MessageGroupQueue(mongoDbChannelMessageStore, "inputChannel"));
}
backed by a MessageStore:
@Bean
public BasicMessageGroupStore mongoDbChannelMessageStore(MongoDbFactory mongoDbFactory) {
    MongoDbChannelMessageStore store = new MongoDbChannelMessageStore(mongoDbFactory);
    store.setPriorityEnabled(true);
    return store;
}
the matching Poller:
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata poller() {
    PollerMetadata poll = Pollers.fixedDelay(10).get();
    poll.setTaskExecutor(consumer);
    return poll;
}
Executor:
private Executor consumer = Executors.newFixedThreadPool(5);
What have we tried?
As explained, we now want to extend this implementation with transactional functionality. We tried using setTransactionSynchronizationFactory as explained here, but it wasn't working (we didn't get errors or anything, but the behavior was still the same as before we added the TransactionSynchronizationFactory):
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata poller() {
    PollerMetadata poll = Pollers.fixedDelay(10).get();
    poll.setTaskExecutor(consumer);
    BeanFactory factory = mock(BeanFactory.class);
    ExpressionEvaluatingTransactionSynchronizationProcessor etsp = new ExpressionEvaluatingTransactionSynchronizationProcessor();
    etsp.setBeanFactory(factory);
    etsp.setAfterRollbackChannel(inputChannel());
    etsp.setAfterRollbackExpression(new SpelExpressionParser().parseExpression("#bix"));
    etsp.setAfterCommitChannel(inputChannel());
    etsp.setAfterCommitExpression(new SpelExpressionParser().parseExpression("#bix"));
    DefaultTransactionSynchronizationFactory dtsf = new DefaultTransactionSynchronizationFactory(etsp);
    poll.setTransactionSynchronizationFactory(dtsf);
    return poll;
}
What would be the best way to realize our requirements in spring integration?
EDIT:
As recommended in the answer, I chose to do this with the JdbcChannelMessageStore. So I tried converting the XML implementation described here (18.4.2) into Java. I wasn't quite sure how to do it; this is what I have tried so far:
I created an H2 database and ran the script shown here on it.
Created a JdbcChannelMessageStore bean:
@Bean
public JdbcChannelMessageStore store() {
    JdbcChannelMessageStore ms = new JdbcChannelMessageStore();
    ms.setChannelMessageStoreQueryProvider(queryProvider());
    ms.setUsingIdCache(true);
    ms.setDataSource(dataSource);
    return ms;
}
Created an H2ChannelMessageStoreQueryProvider:
@Bean
public ChannelMessageStoreQueryProvider queryProvider() {
    return new H2ChannelMessageStoreQueryProvider();
}
Adapted the poller:
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata poller() throws Exception {
    PollerMetadata poll = Pollers.fixedDelay(10).get();
    poll.setTaskExecutor(consumer);
    poll.setAdviceChain(Collections.singletonList(transactionInterceptor()));
    return poll;
}
Autowired my PlatformTransactionManager:

@Autowired
PlatformTransactionManager transactionManager;
And created a TransactionInterceptor from the TransactionManager:
@Bean
public TransactionInterceptor transactionInterceptor() {
    return new TransactionInterceptorBuilder(true)
            .transactionManager(transactionManager)
            .isolation(Isolation.READ_COMMITTED)
            .propagation(Propagation.REQUIRED)
            .build();
}
If you need the queue to be transactional, you should definitely take a look at a transactional MessageStore. Only the JDBC one is like that, simply because only JDBC supports transactions: when we perform a DELETE, it is final only if the TX is committed.
Neither MongoDB nor any other NoSQL database supports such a model, therefore you can only push the failed messages back to the DB on rollback using a TransactionSynchronizationFactory.
UPDATE
@RunWith(SpringRunner.class)
@DirtiesContext
public class So47264688Tests {

    private static final String MESSAGE_GROUP = "transactionalQueueChannel";

    private static EmbeddedDatabase dataSource;

    @BeforeClass
    public static void init() {
        dataSource = new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .addScript("classpath:/org/springframework/integration/jdbc/schema-drop-h2.sql")
                .addScript("classpath:/org/springframework/integration/jdbc/schema-h2.sql")
                .build();
    }

    @AfterClass
    public static void destroy() {
        dataSource.shutdown();
    }

    @Autowired
    private PollableChannel transactionalQueueChannel;

    @Autowired
    private JdbcChannelMessageStore jdbcChannelMessageStore;

    @Autowired
    private PollingConsumer serviceActivatorEndpoint;

    @Autowired
    private CountDownLatch exceptionLatch;

    @Test
    public void testTransactionalQueueChannel() throws InterruptedException {
        GenericMessage<String> message = new GenericMessage<>("foo");
        this.transactionalQueueChannel.send(message);
        assertTrue(this.exceptionLatch.await(10, TimeUnit.SECONDS));
        this.serviceActivatorEndpoint.stop();
        assertEquals(1, this.jdbcChannelMessageStore.messageGroupSize(MESSAGE_GROUP));
        Message<?> messageFromStore = this.jdbcChannelMessageStore.pollMessageFromGroup(MESSAGE_GROUP);
        assertNotNull(messageFromStore);
        assertEquals(message, messageFromStore);
    }

    @Configuration
    @EnableIntegration
    public static class ContextConfiguration {

        @Bean
        public PlatformTransactionManager transactionManager() {
            return new DataSourceTransactionManager(dataSource);
        }

        @Bean
        public ChannelMessageStoreQueryProvider queryProvider() {
            return new H2ChannelMessageStoreQueryProvider();
        }

        @Bean
        public JdbcChannelMessageStore jdbcChannelMessageStore() {
            JdbcChannelMessageStore jdbcChannelMessageStore = new JdbcChannelMessageStore(dataSource);
            jdbcChannelMessageStore.setChannelMessageStoreQueryProvider(queryProvider());
            return jdbcChannelMessageStore;
        }

        @Bean
        public PollableChannel transactionalQueueChannel() {
            return new QueueChannel(new MessageGroupQueue(jdbcChannelMessageStore(), MESSAGE_GROUP));
        }

        @Bean
        public TransactionInterceptor transactionInterceptor() {
            return new TransactionInterceptorBuilder()
                    .transactionManager(transactionManager())
                    .isolation(Isolation.READ_COMMITTED)
                    .propagation(Propagation.REQUIRED)
                    .build();
        }

        @Bean
        public TaskExecutor threadPoolTaskExecutor() {
            ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
            threadPoolTaskExecutor.setCorePoolSize(5);
            return threadPoolTaskExecutor;
        }

        @Bean(name = PollerMetadata.DEFAULT_POLLER)
        public PollerMetadata poller() {
            return Pollers.fixedDelay(10)
                    .advice(transactionInterceptor())
                    .taskExecutor(threadPoolTaskExecutor())
                    .get();
        }

        @Bean
        public CountDownLatch exceptionLatch() {
            return new CountDownLatch(2);
        }

        @ServiceActivator(inputChannel = "transactionalQueueChannel")
        public void handle(Message<?> message) {
            System.out.println(message);
            try {
                throw new RuntimeException("Intentional for rollback");
            }
            finally {
                exceptionLatch().countDown();
            }
        }
    }
}
Thanks to Artem Bilan for the great support. I finally found the solution. It turned out there were other beans named transactionManager and transactionInterceptor active. This resulted in the strange behavior that my transaction manager was never initialized; instead the other transaction manager (null) was used for the TransactionInterceptor and the PollingConsumer. That's why the transaction manager in the PollingConsumer was null, and why my transactions never worked.
The solution was to rename all my beans; for some beans I also used the @Primary annotation to tell Spring to always use that specific bean when autowiring.
I also downgraded to 4.3, just to make sure this wasn't an error related to version 5. I haven't tested whether it would work with version 5 yet, but I think it should.
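As an illustration of that fix, the disambiguated transaction manager might look like this (the bean name is an assumption for this sketch):

// A distinctly named, primary transaction manager, so autowiring by type
// resolves to this bean instead of a stray "transactionManager" bean.
@Bean
@Primary
public PlatformTransactionManager queueTransactionManager(DataSource dataSource) {
    return new DataSourceTransactionManager(dataSource);
}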
So I have a problem in Spring Batch 3.0.7.RELEASE and Spring 4.3.2.RELEASE where the listeners are not running in my ItemProcessor class. Regular injection at the @StepScope level is working for @Value("#{jobExecutionContext['" + Constants.SECURITY_TOKEN + "']}"), as seen below, but neither beforeProcess nor beforeStep is invoked; I have tried both the annotation version and the interface version. I'm almost 100% sure this was working at some point, but I can't figure out why it's stopped.
Any ideas? Does it look like I have configured it wrong?
AppBatchConfiguration.java
@Configuration
@EnableBatchProcessing
@ComponentScan(basePackages = "our.org.base")
public class AppBatchConfiguration {

    private final static SimpleLogger LOGGER = SimpleLogger.getInstance(AppBatchConfiguration.class);

    private final static String OUTPUT_XML_FILE_PATH_PLACEHOLDER = null;
    private final static String INPUT_XML_FILE_PATH_PLACEHOLDER = null;

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Bean(name = "cimAppXmlReader")
    @StepScope
    public <T> ItemStreamReader<T> appXmlReader(@Value("#{jobParameters[inputXmlFilePath]}") String inputXmlFilePath) {
        LOGGER.info("Job Parameter => App XML File Path :" + inputXmlFilePath);
        StaxEventItemReader<T> reader = new StaxEventItemReader<T>();
        reader.setResource(new FileSystemResource(inputXmlFilePath));
        reader.setUnmarshaller(mecaUnMarshaller());
        reader.setFragmentRootElementNames(getAppRootElementNames());
        reader.setSaveState(false);
        // Make the StaxEventItemReader thread-safe
        SynchronizedItemStreamReader<T> synchronizedItemStreamReader = new SynchronizedItemStreamReader<T>();
        synchronizedItemStreamReader.setDelegate(reader);
        return synchronizedItemStreamReader;
    }

    @Bean
    @StepScope
    public ItemStreamReader<JAXBElement<AppIBTransactionHeaderType>> appXmlTransactionHeaderReader(@Value("#{jobParameters[inputXmlFilePath]}") String inputXmlFilePath) {
        LOGGER.info("Job Parameter => App XML File Path for Transaction Header :" + inputXmlFilePath);
        StaxEventItemReader<JAXBElement<AppIBTransactionHeaderType>> reader = new StaxEventItemReader<>();
        reader.setResource(new FileSystemResource(inputXmlFilePath));
        reader.setUnmarshaller(mecaUnMarshaller());
        String[] fragmentRootElementNames = new String[] {"AppIBTransactionHeader"};
        reader.setFragmentRootElementNames(fragmentRootElementNames);
        reader.setSaveState(false);
        return reader;
    }

    @Bean
    public Unmarshaller mecaUnMarshaller() {
        Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
        marshaller.setPackagesToScan(ObjectFactory.class.getPackage().getName());
        return marshaller;
    }

    @Bean
    public Marshaller uberMarshaller() {
        Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
        marshaller.setClassesToBeBound(ServiceRequestType.class);
        marshaller.setSupportJaxbElementClass(true);
        return marshaller;
    }

    @Bean(destroyMethod = "") // To stop multiple close calls, see: http://stackoverflow.com/a/23089536
    @StepScope
    public ResourceAwareItemWriterItemStream<JAXBElement<ServiceRequestType>> writer(@Value("#{jobParameters[outputXmlFilePath]}") String outputXmlFilePath) {
        SyncStaxEventItemWriter<JAXBElement<ServiceRequestType>> writer = new SyncStaxEventItemWriter<JAXBElement<ServiceRequestType>>();
        writer.setResource(new FileSystemResource(outputXmlFilePath));
        writer.setMarshaller(uberMarshaller());
        writer.setSaveState(false);
        HashMap<String, String> rootElementAttribs = new HashMap<String, String>();
        rootElementAttribs.put("xmlns:ns1", "http://some.org/corporate/message/2010/1");
        writer.setRootElementAttributes(rootElementAttribs);
        writer.setRootTagName("ns1:SetOfServiceRequests");
        return writer;
    }

    @Bean
    @StepScope
    public <T> ItemProcessor<T, JAXBElement<ServiceRequestType>> appNotificationProcessor() {
        return new AppBatchNotificationItemProcessor<T>();
    }

    @Bean
    public ItemProcessor<JAXBElement<AppIBTransactionHeaderType>, Boolean> appBatchCreationProcessor() {
        return new AppBatchCreationItemProcessor();
    }

    public String[] getAppRootElementNames() {
        // get list of App Transaction Element Names
        return AppProcessorEnum.getValues();
    }

    @Bean
    public Step AppStep() {
        // INPUT_XML_FILE_PATH_PLACEHOLDER and OUTPUT_XML_FILE_PATH_PLACEHOLDER will be overridden
        // by injected jobParameters using late binding (StepScope)
        return stepBuilderFactory.get("AppStep")
                .<Object, JAXBElement<ServiceRequestType>> chunk(10)
                .reader(appXmlReader(INPUT_XML_FILE_PATH_PLACEHOLDER))
                .processor(appNotificationProcessor())
                .writer(writer(OUTPUT_XML_FILE_PATH_PLACEHOLDER))
                .taskExecutor(concurrentTaskExecutor())
                .throttleLimit(1)
                .build();
    }

    @Bean
    public Step BatchCreationStep() {
        return stepBuilderFactory.get("BatchCreationStep")
                .<JAXBElement<AppIBTransactionHeaderType>, Boolean>chunk(1)
                .reader(appXmlTransactionHeaderReader(INPUT_XML_FILE_PATH_PLACEHOLDER))
                .processor(appBatchCreationProcessor())
                .taskExecutor(concurrentTaskExecutor())
                .throttleLimit(1)
                .build();
    }

    @Bean
    public Job AppJob() {
        return jobBuilderFactory.get("AppJob")
                .incrementer(new RunIdIncrementer())
                .listener(AppJobCompletionNotificationListener())
                .flow(AppStep())
                .next(BatchCreationStep())
                .end()
                .build();
    }

    @Bean
    public JobCompletionNotificationListener AppJobCompletionNotificationListener() {
        return new JobCompletionNotificationListener();
    }

    @Bean
    public TaskExecutor concurrentTaskExecutor() {
        SimpleAsyncTaskExecutor taskExecutor = new SimpleAsyncTaskExecutor();
        taskExecutor.setConcurrencyLimit(1);
        return taskExecutor;
    }
}
AppBatchNotificationItemProcessor.java
@StepScope
public class AppBatchNotificationItemProcessor<E> extends AppAbstractItemProcessor<E, JAXBElement<ServiceRequestType>> implements ItemProcessor<E, JAXBElement<ServiceRequestType>>, StepExecutionListener {

    // This is populated correctly
    @Value("#{jobExecutionContext['" + Constants.SECURITY_TOKEN + "']}")
    private SecurityToken securityToken;

    @Autowired
    private AppProcessorService processor;

    @Override
    public JAXBElement<ServiceRequestType> process(E item) throws BPException {
        // Do Stuff
        return srRequest;
    }

    @BeforeProcess
    public void beforeProcess(E item) {
        System.out.println("Doesn't execute");
    }

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // Doesn't execute
        System.out.println("Doesn't execute");
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        // Doesn't execute
        System.out.println("Doesn't execute");
        return null;
    }
}
This is due to the fact that you are returning interfaces instead of implementations in your @Bean methods. IMHO, you should return the most specific type possible when using Java configuration in Spring. Here's why:
When configuring via XML, you provide the class in the XML configuration. This exposes the implementation to Spring so that any interfaces the class implements can be discovered and handled appropriately. When using Java configuration, the return type of the @Bean method serves as the replacement for that information, and there is the issue: if your return type is an interface, Spring only knows about that specific interface and not all the interfaces the implementation may implement. By returning the concrete type where you can, you give Spring insight into what you're actually returning, so it can better handle the various registration and wiring use cases for you.
For your specific example, since you're returning an ItemProcessor and it's step scoped (therefore proxied), all Spring knows about are the methods/behaviors of the ItemProcessor interface. If you return the implementation (AppBatchNotificationItemProcessor), other behaviors can be autoconfigured.
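A sketch of what that change might look like for the step-scoped processor bean from the question (same body, only the declared return type changes):

// Returning the concrete type lets Spring see StepExecutionListener (and the
// @BeforeProcess method) behind the step-scoped proxy.
@Bean
@StepScope
public <T> AppBatchNotificationItemProcessor<T> appNotificationProcessor() {
    return new AppBatchNotificationItemProcessor<T>();
}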
As far as I remember, you have to register a reader, writer, or processor directly as a listener on the step if you use StepScope.
StepScope prevents the framework from figuring out what kind of interfaces, resp. @annotations (e.g. @BeforeProcess), the proxy actually implements/defines, and therefore it is not able to register it as a listener automatically.
So, I assume that if you add
return stepBuilderFactory.get("AppStep")
.<Object, JAXBElement<ServiceRequestType>> chunk(10)
.reader(appXmlReader(INPUT_XML_FILE_PATH_PLACEHOLDER))
.processor(appNotificationProcessor())
.writer(writer(OUTPUT_XML_FILE_PATH_PLACEHOLDER))
.listener(appNotificationProcessor())
.taskExecutor(concurrentTaskExecutor())
.throttleLimit(1)
.build();
it will work.
I've been trying for some time to set up a little program that uses Spring and Quartz together to schedule a task. I followed some other similar answers with no luck.
At the moment I think I have everything configured correctly: I see no more exceptions, but my job looks like it's not kicking off.
In the log.out that Spring generates, I see the following messages at the end:
2015-06-04T15:46:57.928 DEBUG [org.springframework.core.env.PropertySourcesPropertyResolver] Searching for key 'spring.liveBeansView.mbeanDomain' in [systemProperties]
2015-06-04T15:46:57.929 DEBUG [org.springframework.core.env.PropertySourcesPropertyResolver] Searching for key 'spring.liveBeansView.mbeanDomain' in [systemEnvironment]
2015-06-04T15:46:57.929 DEBUG [org.springframework.core.env.PropertySourcesPropertyResolver] Could not find key 'spring.liveBeansView.mbeanDomain' in any property source. Returning [null]
I will show you my code...
This is the class from which I start the scheduler:
public class JobRunner {

    public static void main(String[] args) throws SchedulerException {
        ApplicationContext applicationContext = new AnnotationConfigApplicationContext(WhatsTheTimeConfiguration.class);

        AutowiringSpringBeanJobFactory autowiringSpringBeanJobFactory = new AutowiringSpringBeanJobFactory();
        autowiringSpringBeanJobFactory.setApplicationContext(applicationContext);

        SpringBeanJobFactory springBeanJobFactory = new SpringBeanJobFactory();
        SchedulerFactoryBean schedulerFactoryBean = new SchedulerFactoryBean();
        schedulerFactoryBean.setTriggers(trigger());
        schedulerFactoryBean.setJobFactory(springBeanJobFactory);
        schedulerFactoryBean.start();
    }

    private static SimpleTrigger trigger() {
        return newTrigger()
                .withIdentity("whatsTheTimeJobTrigger", "jobsGroup1")
                .startNow()
                .withSchedule(simpleSchedule()
                        .withIntervalInSeconds(1)
                        .repeatForever())
                .build();
    }
}
I want to mention that if I use the method schedulerFactoryBean.getScheduler().start(), it throws a NullPointerException on the scheduler; that's why I'm calling start() on the factory.
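That NullPointerException is expected when the factory is driven by hand: SchedulerFactoryBean is an InitializingBean, so outside the Spring lifecycle the inner Scheduler only exists once afterPropertiesSet() has run. A minimal sketch of the manual sequence (only relevant if you keep creating the factory with new):

// getScheduler() returns null until afterPropertiesSet() has created the Scheduler
schedulerFactoryBean.afterPropertiesSet(); // main() would need to declare throws Exception
schedulerFactoryBean.getScheduler().start();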
The class AutowiringSpringBeanJobFactory was copy-pasted from another answer here on Stack Overflow. I decided to do that since all the other answers I found only showed configuration done via XML, and I don't want to use XML.
public final class AutowiringSpringBeanJobFactory extends SpringBeanJobFactory implements ApplicationContextAware {

    private transient AutowireCapableBeanFactory beanFactory;

    @Override
    public void setApplicationContext(final ApplicationContext context) {
        beanFactory = context.getAutowireCapableBeanFactory();
    }

    @Override
    protected Object createJobInstance(final TriggerFiredBundle bundle) throws Exception {
        final Object job = super.createJobInstance(bundle);
        beanFactory.autowireBean(job);
        return job;
    }
}
This is the class that represents the Job that I want to trigger:
@Component
public class WhatsTheTimeManager extends QuartzJobBean {

    @Autowired
    private WhatsTheTime usecase;

    @Autowired
    private LocationRetriever locationDataProvider;

    public WhatsTheTimeManager() {
    }

    @Override
    protected void executeInternal(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        usecase.tellMeWhatsTheTimeIn(locationDataProvider.allLocations());
    }

    public void setUsecase(WhatsTheTime usecase) {
        this.usecase = usecase;
    }

    public void setLocationDataProvider(LocationRetriever locationDataProvider) {
        this.locationDataProvider = locationDataProvider;
    }
}
My Spring configuration is doing component scanning; it's very simple:
@Configuration
@ComponentScan(basePackages = "com.springpractice")
public class WhatsTheTimeConfiguration {
}
From this point, everything I have is just some interfaces, components, and a domain object, but I will paste them also, just in case I forgot something:
public interface LocationRetriever {
    List<String> allLocations();
}

public interface TimeOutputRenderer {
    TimeReport renderReport(String timeInLocation, String location);
}

public interface TimeRetriever {
    String timeFor(String location);
}
@Component
public class LocationRetrieverDataProvider implements LocationRetriever {

    public LocationRetrieverDataProvider() {
    }

    @Override
    public List<String> allLocations() {
        return asList("Europe/London", "Europe/Madrid", "Europe/Moscow", "Asia/Tokyo", "Australia/Melbourne", "America/New_York");
    }
}

@Component
public class TimeOutputRendererDataProvider implements TimeOutputRenderer {

    public TimeOutputRendererDataProvider() {
    }

    @Override
    public TimeReport renderReport(String location, String time) {
        System.out.println(location + " time is " + time);
        return new TimeReport(location, time);
    }
}

@Component
public class TimeRetrieverDataProvider implements TimeRetriever {

    public TimeRetrieverDataProvider() {
    }

    @Override
    public String timeFor(String location) {
        SimpleDateFormat timeInLocation = new SimpleDateFormat("dd-M-yyyy hh:mm:ss a");
        timeInLocation.setTimeZone(TimeZone.getTimeZone(location));
        return timeInLocation.format(new Date());
    }
}
Just one last detail that may be of interest.
The versions I am using in my libraries are the following:
quartz 2.2.1
spring 4.1.6.RELEASE
When I run the application, I expect the times for those locations to be printed every second, but it doesn't happen.
If you want to clone the code and try for yourself and see, you can find it at this git repo(Feel free to fork if you want): https://github.com/SFRJ/cleanarchitecture
The main error in your code is that you're not letting Spring handle the scheduling for you.
While you can use Quartz directly from code like any other library, the idea of the integration with Spring is to tell Spring about the work you want done and let Spring do the hard work for you.
In order to allow Spring to run the Quartz scheduling, you need to declare the Job, the JobDetail and the Trigger as beans.
Spring only handles beans if they are created through the Spring life-cycle (i.e. using annotations or XML), not if the objects are created in code with a new statement.
The following code needs to be removed from JobRunner.java:
SpringBeanJobFactory springBeanJobFactory = new SpringBeanJobFactory();
SchedulerFactoryBean schedulerFactoryBean = new SchedulerFactoryBean();
schedulerFactoryBean.setTriggers(trigger());
schedulerFactoryBean.setJobFactory(springBeanJobFactory);
schedulerFactoryBean.start();
...
private static SimpleTrigger trigger() {
    return newTrigger()
            .withIdentity("whatsTheTimeJobTrigger", "jobsGroup1")
            .startNow()
            .withSchedule(simpleSchedule()
                    .withIntervalInSeconds(1)
                    .repeatForever())
            .build();
}
That code will have to be re-written into WhatsTheTimeConfiguration.java, and here's how it looks now:
@Configuration
@ComponentScan(basePackages = "com.djordje.cleanarchitecture")
public class WhatsTheTimeConfiguration {

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean() {
        SchedulerFactoryBean schedulerFactoryBean = new SchedulerFactoryBean();
        schedulerFactoryBean.setTriggers(trigger());
        schedulerFactoryBean.setJobDetails(jobDetail());
        schedulerFactoryBean.setJobFactory(springBeanJobFactory());
        return schedulerFactoryBean;
    }

    @Bean
    public SpringBeanJobFactory springBeanJobFactory() {
        return new AutowiringSpringBeanJobFactory();
    }

    @Bean
    public JobDetail jobDetail() {
        JobDetailImpl jobDetail = new JobDetailImpl();
        jobDetail.setKey(new JobKey("WhatsTheTime"));
        jobDetail.setJobClass(WhatsTheTimeManager.class);
        jobDetail.setDurability(true);
        return jobDetail;
    }

    @Bean
    public SimpleTrigger trigger() {
        return newTrigger()
                .forJob(jobDetail())
                .withIdentity("whatsTheTimeJobTrigger", "jobsGroup1")
                .startNow()
                .withSchedule(simpleSchedule()
                        .withIntervalInSeconds(1)
                        .repeatForever())
                .build();
    }
}
SchedulerFactoryBean is now a bean and will be handled and initialized by Spring, and so are SimpleTrigger and AutowiringSpringBeanJobFactory.
I also added the JobDetail, which was missing, and the necessary wiring to SimpleTrigger and SchedulerFactoryBean: they both need to know about the JobDetail, which is the only place that knows which class is the job class to be triggered.