Spring Integration transaction management using Atomikos

I am planning to create a Spring Integration Spring Boot application to
1. Poll messages from a DB
2. Do some processing on them
3. Publish messages to an EMS queue
using Atomikos for transaction management. My question: will the above configuration be transactional once all the required JTA configuration is done? Also, I have read somewhere that if multiple threads are created in Spring Integration, e.g. by using a splitter, then the context won't be transactional. How do I overcome this?

If you configure the poller as transactional, the flow will run in a transaction, as long as you don't hand off to another thread (via an ExecutorChannel or QueueChannel, for example).
Adding a splitter will not break the transaction boundary, since each split is processed on the same thread.
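For illustration, here is a minimal sketch of such a single-threaded flow (the entity name and bean names are assumptions; it reuses the pollerMetadata() bean shown in the next answer, and assumes the JmsTemplate has a default destination configured):

@Bean
public IntegrationFlow dbToEmsFlow(EntityManagerFactory entityManagerFactory, JmsTemplate jmsTemplate) {
    return IntegrationFlows
            .from(Jpa.inboundAdapter(entityManagerFactory).jpaQuery("from MessageEntity"),
                    e -> e.poller(pollerMetadata()))
            // split() passes each element to the next endpoint on the SAME (polling) thread,
            // so everything below still runs inside the poller's transaction
            .split()
            .handle(Jms.outboundAdapter(jmsTemplate))
            .get();
}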

Spring Integration drives the transaction from the poller, so you need to pass a transaction manager in the poller metadata, for example:
@Bean
public PollerMetadata pollerMetadata() throws NamingException {
    return Pollers.fixedDelay(Long.valueOf(env.getProperty("poller.interval")))
            .transactional(transactionManager).get();
}
With
@Autowired
private PlatformTransactionManager transactionManager;
And putting:
@InboundChannelAdapter(channel = "jpaInputChannel", poller = @Poller(value = "pollerMetadata"))
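For the Atomikos part of the question, a minimal sketch of where such a transactionManager could come from (assuming the Atomikos JTA jars are on the classpath; the timeout value is illustrative):

@Bean(initMethod = "init", destroyMethod = "close")
public UserTransactionManager atomikosTransactionManager() {
    // com.atomikos.icatch.jta.UserTransactionManager
    UserTransactionManager userTransactionManager = new UserTransactionManager();
    userTransactionManager.setForceShutdown(false);
    return userTransactionManager;
}

@Bean
public PlatformTransactionManager transactionManager() throws SystemException {
    UserTransactionImp userTransaction = new UserTransactionImp();
    userTransaction.setTransactionTimeout(300);
    // Spring's JtaTransactionManager delegating to Atomikos
    return new JtaTransactionManager(userTransaction, atomikosTransactionManager());
}

For the poll-process-publish flow to be atomic, both resources must also be XA-capable, i.e. the DB pool wrapped in an AtomikosDataSourceBean and the EMS ConnectionFactory in an AtomikosConnectionFactoryBean.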

Related

Spring Boot Quartz schema other than public doesn't work

I am not able to use a schema other than public for the Quartz tables. This is my Quartz setup:
spring.quartz.job-store-type=jdbc
spring.quartz.jdbc.initialize-schema=always
spring.quartz.properties.org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
spring.quartz.properties.org.quartz.jobStore.isClustered=true
spring.quartz.properties.org.quartz.jobStore.clusterCheckinInterval=2000
spring.quartz.properties.org.quartz.scheduler.instanceId=AUTO
spring.quartz.properties.org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
spring.quartz.properties.org.quartz.jobStore.useProperties=false
spring.quartz.properties.org.quartz.jobStore.tablePrefix=QRTZ_
And the config class:
@Bean
public SchedulerFactoryBean schedulerFactory(ApplicationContext applicationContext, DataSource dataSource, QuartzProperties quartzProperties) {
    SchedulerFactoryBean schedulerFactoryBean = new SchedulerFactoryBean();
    AutowireCapableBeanJobFactory jobFactory = new AutowireCapableBeanJobFactory(applicationContext.getAutowireCapableBeanFactory());
    Properties properties = new Properties();
    properties.putAll(quartzProperties.getProperties());
    schedulerFactoryBean.setOverwriteExistingJobs(true);
    schedulerFactoryBean.setDataSource(dataSource);
    schedulerFactoryBean.setQuartzProperties(properties);
    schedulerFactoryBean.setJobFactory(jobFactory);
    return schedulerFactoryBean;
}

@Bean
public Scheduler scheduler(ApplicationContext applicationContext, DataSource dataSource, QuartzProperties quartzProperties)
        throws SchedulerException {
    Scheduler scheduler = schedulerFactory(applicationContext, dataSource, quartzProperties).getScheduler();
    scheduler.start();
    return scheduler;
}
This works fine, and the tables are getting created. However, I would like to have the tables in a different schema, so I set Quartz to use the 'quartz' schema:
spring.quartz.properties.org.quartz.jobStore.tablePrefix=quartz.QRTZ_
This is the error I'm getting:
[ClusterManager: Error managing cluster: Failure obtaining db row lock: ERROR: current transaction is aborted, commands ignored until end of transaction block] [org.quartz.impl.jdbcjobstore.LockException: Failure obtaining db row lock: ERROR: current transaction is aborted, commands ignored until end of transaction block
Any ideas on how to solve it?
It was a bold hope that tablePrefix could also adjust the DB schema (there is no documented property concerning the schema), but you may have more luck if you configure it on the data source.
That is, you would introduce/configure different (Spring) DataSource beans for every user/schema used by your application (like here: Spring Boot Configure and Use Two DataSources, or here), and then wire the scheduler factory with the appropriate (quartz) data source:
schedulerFactoryBean.setDataSource(quartzDataSource);
Or via @Autowired parameter injection, or method invocation: @Bean initialization - difference between parameter injection vs. direct method access?
UPDATE (regarding "wiring"):
...from the current Spring Boot docs:
To have Quartz use a DataSource other than the application's main DataSource, declare a DataSource bean, annotating its @Bean method with @QuartzDataSource. Doing so ensures that the Quartz-specific DataSource is used by both the SchedulerFactoryBean and for schema initialization.
Similarly, to have Quartz use a TransactionManager other than the application's main ... declare a TransactionManager bean, ... @QuartzTransactionManager.
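A minimal sketch of that wiring (the app.datasource.quartz property prefix is an assumption):

@Bean
@QuartzDataSource
@ConfigurationProperties("app.datasource.quartz")
public DataSource quartzDataSource() {
    // a second pool, pointed at the schema/user that owns the QRTZ_ tables
    return DataSourceBuilder.create().build();
}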
You can take even more control by customizing:
spring.quartz.jdbc.initialize-schema: Database schema initialization mode. Default: embedded (embedded|always|never)
spring.quartz.jdbc.schema: Path to the SQL file to use to initialize the database schema. Default: classpath:org/quartz/impl/jdbcjobstore/tables_@@platform@@.sql
...properties, where @@platform@@ refers to your DB vendor.
But it is useless for your requirement: looking at the original schema scripts (linked in REFS) and complying with them, they seem schema-independent. (So the data source approach looks more promising here.)
REFS:
https://www.quartz-scheduler.org/documentation/quartz-2.3.0/configuration/ConfigJobStoreTX.html
Spring Boot Configure and Use Two DataSources
https://stackoverflow.com/a/42360877/592355
https://www.baeldung.com/spring-annotations-resource-inject-autowire
#Bean initialization - difference between parameter injection vs. direct method access?
spring.quartz.jdbc.initialize-schema
What are the possible values of spring.datasource.initialization-mode?
spring.quartz.jdbc.schema
https://github.com/quartz-scheduler/quartz/tree/master/quartz-core/src/main/resources/org/quartz/impl/jdbcjobstore
https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.quartz
https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/autoconfigure/quartz/QuartzDataSource.html
So the idea is that Quartz doesn't create the tables using spring.quartz.properties.org.quartz.jobStore.tablePrefix; the table names are static, e.g. qrtz_triggers, as @xerx593 pointed out.
What we can do is create the tables (manually, or via Flyway or Liquibase) in a different schema, update tablePrefix=schema.qrtz_, and it will work.
Tested with Postgres.

Properly setting the transaction manager with a multitenant database configuration in Spring Boot

I have a multitenant database setup in Spring Boot. I store multiple Spring JDBC templates (based on Tomcat data sources, configured manually) in a map (an immutable bean), and I choose the proper data source based on a UUID in the request (one connection pool per database). I have disabled the standard configuration in Spring Boot with:
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
What is the proper way to configure the transaction manager? With a single data source I can use PlatformTransactionManager, but how should it be done with multiple JDBC templates/data sources in Spring? It would be best if I could set everything up dynamically. Thanks in advance.
Here is a solution for using multiple data sources:
http://www.baeldung.com/spring-data-jpa-multiple-databases
Configure Two DataSources
If you need to configure multiple data sources, you can apply the same tricks that are described in the previous section. You must, however, mark one of the DataSource beans as @Primary, as various auto-configurations down the road expect to be able to get one by type.
If you create your own DataSource, the auto-configuration backs off. In the example below, we provide the exact same feature set as what the auto-configuration provides on the primary data source:
@Bean
@Primary
@ConfigurationProperties("app.datasource.foo")
public DataSourceProperties fooDataSourceProperties() {
    return new DataSourceProperties();
}

@Bean
@Primary
@ConfigurationProperties("app.datasource.foo")
public DataSource fooDataSource() {
    return fooDataSourceProperties().initializeDataSourceBuilder().build();
}

@Bean
@ConfigurationProperties("app.datasource.bar")
public BasicDataSource barDataSource() {
    return (BasicDataSource) DataSourceBuilder.create()
            .type(BasicDataSource.class).build();
}
fooDataSourceProperties has to be flagged @Primary so that the database initializer feature uses your copy (should you use that).
app.datasource.foo.type=com.zaxxer.hikari.HikariDataSource
app.datasource.foo.maximum-pool-size=30
app.datasource.bar.url=jdbc:mysql://localhost/test
app.datasource.bar.username=dbuser
app.datasource.bar.password=dbpass
app.datasource.bar.max-total=30
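Since the tenant is chosen per request anyway, another option (a rough sketch, not from the answer above; the TENANT holder and the injected map of pools are assumptions) is Spring's AbstractRoutingDataSource, which lets a single DataSourceTransactionManager serve all tenant pools:

@Configuration
public class TenantTxConfig {

    // hypothetical ThreadLocal holder; a web filter would set it from the request UUID
    public static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    @Bean
    public DataSource routingDataSource(Map<String, DataSource> tenantDataSources) {
        AbstractRoutingDataSource routing = new AbstractRoutingDataSource() {
            @Override
            protected Object determineCurrentLookupKey() {
                return TENANT.get(); // picks the tenant pool for the current request
            }
        };
        routing.setTargetDataSources(new HashMap<>(tenantDataSources));
        return routing;
    }

    @Bean
    public PlatformTransactionManager transactionManager(DataSource routingDataSource) {
        // one transaction manager for all tenants, routed per thread
        return new DataSourceTransactionManager(routingDataSource);
    }
}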

Spring Boot XA transaction with DataSource and JMS

I am making a POC with spring-boot-starter-data-jpa and spring-boot-starter-activemq. I would like to push the JMS message to the broker (ActiveMQ) only when the JPA transaction is committed.
My code:
UtilisateurService, which holds the "main" transaction:
@Service
public class UtilisateurService {

    @Autowired
    private UtilisateurRepository utilisateurRepository;

    @Autowired
    private SendMessage sendMessage;

    @Transactional(rollbackOn = java.lang.Exception.class)
    public Utilisateur create(Utilisateur utilisateur) throws Exception {
        final Utilisateur result = utilisateurRepository.save(utilisateur);
        sendMessage.send("creation utilisateur : " + result.getId());
        throw new Exception("rollback");
        //return result;
    }
}
The SendMessage class, which manages the JMS messages:
@Component
public class SendMessage {

    @Autowired
    private JmsMessagingTemplate jmsMessagingTemplate;

    @Value("${jms.queue.destination}")
    private String destinationQueue;

    public void send(String msg) {
        this.jmsMessagingTemplate.convertAndSend(destinationQueue, msg);
    }
}
My main class:
@SpringBootApplication
@EnableJms
@EnableTransactionManagement
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
The JMS message is pushed to the ActiveMQ broker before the exception is thrown, so there is no rollback on the broker.
How can I configure this so that an XA transaction runs?
Is your JmsTemplate transacted?
jmsTemplate.setSessionTransacted(true);
https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/support/JmsAccessor.html#setSessionTransacted-boolean-
public void setSessionTransacted(boolean sessionTransacted)
Set the transaction mode that is used when creating a JMS Session. Default is "false". Note that within a JTA transaction, the parameters passed to the create(Queue/Topic)Session(boolean transacted, int acknowledgeMode) method are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since the accessor operates on an existing JMS Session in this case.
Setting this flag to "true" will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.
http://www.javaworld.com/article/2077963/open-source-tools/distributed-transactions-in-spring--with-and-without-xa.html
30.2.5 Transaction management
Spring provides a JmsTransactionManager that manages transactions for a single JMS ConnectionFactory. This allows JMS applications to leverage the managed transaction features of Spring as described in Chapter 17, Transaction Management. The JmsTransactionManager performs local resource transactions, binding a JMS Connection/Session pair from the specified ConnectionFactory to the thread. JmsTemplate automatically detects such transactional resources and operates on them accordingly.
In a Java EE environment, the ConnectionFactory will pool Connections and Sessions, so those resources are efficiently reused across transactions. In a standalone environment, using Spring's SingleConnectionFactory will result in a shared JMS Connection, with each transaction having its own independent Session. Alternatively, consider the use of a provider-specific pooling adapter such as ActiveMQ's PooledConnectionFactory class.
JmsTemplate can also be used with the JtaTransactionManager and an XA-capable JMS ConnectionFactory for performing distributed transactions. Note that this requires the use of a JTA transaction manager as well as a properly XA-configured ConnectionFactory! (Check your Java EE server's / JMS provider's documentation.)
Reusing code across a managed and unmanaged transactional environment can be confusing when using the JMS API to create a Session from a Connection. This is because the JMS API has only one factory method to create a Session and it requires values for the transaction and acknowledgment modes. In a managed environment, setting these values is the responsibility of the environment's transactional infrastructure, so these values are ignored by the vendor's wrapper to the JMS Connection. When using the JmsTemplate in an unmanaged environment you can specify these values through the use of the properties sessionTransacted and sessionAcknowledgeMode. When using a PlatformTransactionManager with JmsTemplate, the template will always be given a transactional JMS Session.
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/jms.html#jms-tx
Hassen gave the solution, so I changed the SendMessage class to:
@Component
public class SendMessage {

    private final JmsMessagingTemplate jmsMessagingTemplate;

    @Value("${jms.queue.destination}")
    private String destinationQueue;

    @Autowired
    public SendMessage(JmsMessagingTemplate jmsMessagingTemplate) {
        this.jmsMessagingTemplate = jmsMessagingTemplate;
        this.jmsMessagingTemplate.getJmsTemplate().setSessionTransacted(true);
    }

    public void send(String msg) {
        this.jmsMessagingTemplate.convertAndSend(destinationQueue, msg);
    }
}
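Note that setSessionTransacted(true) gives a synchronized local JMS transaction ("best effort"), not true XA: there is still a small window where the JDBC commit succeeds but the JMS commit is lost (see the JavaWorld article linked above). If genuine XA is required, pair a JtaTransactionManager with an XA-capable ConnectionFactory; a rough sketch using Atomikos and ActiveMQ (broker URL and resource name are illustrative):

@Bean(initMethod = "init", destroyMethod = "close")
public AtomikosConnectionFactoryBean xaConnectionFactory() {
    ActiveMQXAConnectionFactory activeMq = new ActiveMQXAConnectionFactory("tcp://localhost:61616");
    // wraps the XA-capable factory so Atomikos can enlist it in the JTA transaction
    AtomikosConnectionFactoryBean connectionFactory = new AtomikosConnectionFactoryBean();
    connectionFactory.setUniqueResourceName("activemq-xa");
    connectionFactory.setXaConnectionFactory(activeMq);
    return connectionFactory;
}

Older Spring Boot versions could auto-configure most of this via the spring-boot-starter-jta-atomikos starter.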

Configuring Spring transactions in Spring Integration DSL

I'm currently configuring Spring Integration using spring-integration-dsl as follows:
@Bean
public IntegrationFlow flow() {
    return IntegrationFlows.from(inboundServer())
            .transform(Transformers.objectToString())
            .transform(...)
            .route(...)
            .transform(Transformers.toJson())
            .channel(...)
            .get();
}

@Bean
public PlatformTransactionManager transactionManager() {
    ....
}
I don't know how I can configure the flow to use the transaction manager I've configured.
Actually, Spring Integration Java DSL supports all the transaction features that are available for the XML components.
Please provide more info on where you want to start the transaction, and keep in mind that TX support is restricted to thread boundaries. So you can start a TX from the poller or from a JMS (AMQP) message-driven channel adapter.
Or use a TransactionInterceptor as an advice on any endpoint within the flow. But in that case the TX is restricted to just the AbstractReplyProducingMessageHandler.handleRequestMessage.
UPDATE
Starting a TX for just some part of a flow isn't such a standard task; it can be achieved as a unit of work, a transactional black box. For this purpose we have a component like the Gateway. So you specify some interface, mark it with @MessagingGateway, add @IntegrationComponentScan alongside @EnableIntegration, and mark the method of that interface with @Transactional. The requestChannel of this gateway should send the message to some separate flow with JDBC and Jackson conversion and wait for the result to continue in the main flow. The TX will be finished on the return from that gateway's method invocation.
And call that gateway as a regular service activator from the main flow, e.g. .handle("myGateway", "getData").
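A minimal sketch of that gateway approach (interface name, channel name, and the JDBC query are illustrative):

@MessagingGateway
public interface TxGateway {

    @Transactional
    @Gateway(requestChannel = "txFlow.input")
    String getData(String id);
}

@Bean
public IntegrationFlow txFlow(JdbcTemplate jdbcTemplate) {
    // runs inside the TX opened by the gateway method above
    return f -> f
            .handle((payload, headers) ->
                    jdbcTemplate.queryForObject("select data from my_table where id = ?", String.class, payload))
            .transform(Transformers.toJson());
}

The main flow would then call it with .handle("txGateway", "getData"): the transaction starts when the gateway method is entered and commits when it returns.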

How to propagate Spring transaction to another thread?

Perhaps I am doing something wrong, but I can't find a good way out of the following situation.
I would like to unit test a service that uses Spring Batch underneath to execute jobs. The jobs are executed via a pre-configured AsyncTaskExecutor in separate threads. In my unit test I would like to:
Create few domain objects and persist them via DAO
Invoke the service method to launch the job
Wait until the job is completed
Use DAO to retrieve domain objects and check their state
Obviously, all above should be executed within one transaction, but unfortunately, transactions are not propagated to new threads (I understand the rationale behind this).
Ideas that came to my mind:
Commit transaction #1 after step (1). This is not good, as the DB state should be rolled back after the unit test.
Use Isolation.READ_UNCOMMITTED in job configuration. But this requires two different configurations for test and for production.
I think the simplest solution would be to configure the JobLauncher with a SyncTaskExecutor during test execution: this way the job is executed in the same thread as the test and shares the transaction.
The task executor configuration can be moved to a separate Spring configuration XML file. Have two versions of it: one with a SyncTaskExecutor that is used during testing, and the other with an AsyncTaskExecutor that is used for production runs. A profile-based sketch follows.
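A sketch of that split, using Spring profiles rather than two XML files (the profile and bean names are assumptions):

@Bean
@Profile("test")
public TaskExecutor jobTaskExecutor() {
    return new SyncTaskExecutor(); // job runs on the calling thread: shares the test transaction
}

@Bean
@Profile("!test")
public TaskExecutor asyncJobTaskExecutor() {
    return new SimpleAsyncTaskExecutor(); // production: job runs on its own thread
}

The JobLauncher (e.g. SimpleJobLauncher.setTaskExecutor(...)) then receives whichever executor is active for the current profile.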
Although this is not a true solution to your question, I found it possible to manually start a new transaction inside a worker thread. In some cases this might be sufficient.
Source: Spring programmatic transactions.
Example:
@PersistenceContext
private EntityManager entityManager;

@Autowired
private PlatformTransactionManager txManager;

/* in a worker thread... */
public void run() {
    TransactionStatus tx = txManager.getTransaction(new DefaultTransactionDefinition());
    try {
        entityManager.find(...)
        ...
        entityManager.flush(...)
        etc...
        txManager.commit(tx);
    } catch (RuntimeException e) {
        txManager.rollback(tx);
    }
}
If you do want separate configurations, I'd recommend templating the isolation policy in your configuration and getting its value out of a property file so that you don't wind up with a divergent set of Spring configs for testing and prod.
But I agree that using the same policy production uses is best. How vast is your fixture data, and how bad would it be to have a setUp() step that blew away and rebuilt your data (maybe from a snapshot, if it's a lot of data) so that you don't have to rely on rollbacks?
