I need to do multiple writes to the DB under a single transaction using Liferay 7.1. Basically, my question is: would this work?
@Component(service = MyService.class)
public class MyService {

    private OrganizationLocalService localService;

    @Reference(unbind = "-")
    protected void setOrganizationLocalService(OrganizationLocalService localService) {
        this.localService = localService;
    }

    @Transactional(rollbackFor = IllegalArgumentException.class)
    public void doInTransaction() {
        try {
            localService.createOrganization(...);
            localService.updateOrganization(...);
            // more
        } catch (IllegalArgumentException e) {
            // rollback logic
        }
    }
}
There are also Liferay event listeners that run as part of the service calls used to manipulate Liferay entities. Those event listeners do additional work such as sending messages to Kafka topics, and I am not sure whether introducing my own transaction would disrupt the work of these listeners.
By default in Liferay, every method at the LocalService level is transactional.
So you have to collect all the work in a single local service method to ensure a single transactional environment.
The @Transactional annotation is not effective the way you have tried to use it; this is not a Spring environment.
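For illustration, a minimal sketch of such a custom Service Builder method (the entity and class names are hypothetical, and the OrganizationLocalService calls are only indicative):

import com.liferay.portal.kernel.model.Organization;

// Hypothetical custom Service Builder service; FooLocalServiceBaseImpl is the generated base class.
public class FooLocalServiceImpl extends FooLocalServiceBaseImpl {

    // Liferay wraps every local service method in a single transaction, so if the second
    // call throws an unchecked exception, the first one is rolled back as well.
    public void createAndUpdateOrganizations(Organization newOrg, Organization changedOrg) {
        // assumes organizationLocalService is available here (e.g. via a service reference)
        organizationLocalService.addOrganization(newOrg);
        organizationLocalService.updateOrganization(changedOrg);
    }
}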
Related
I have to improve the atomicity of a process in a Spring Boot web application. The idea is the following:
class X {
    @Transactional
    void a() {
        b("STARTING"); // transactional method
        try {
            c(); // transactional method
        } catch (Exception e) {
            b("FAILED"); // transactional method
        }
    }
}

@Transactional
class Z {
    void b(String status) {
        // change status
    }

    void c() {
        // do stuff
    }

    // ... more transactional methods
}
I am working on class X. The problem is that when I call a() from another class to run the whole transactional process, only the change from b("STARTING") is performed and the rest is completely ignored. I am using Hibernate, so I cannot use NESTED propagation. I have tried a few other things, but I am not even sure why this happens; as I understand it, the default REQUIRED propagation should merge everything into a single transaction.
When I start the application in DEBUG mode, I can see the code flowing as expected, but at the end, when the COMMIT executes, the transaction is incomplete. Any ideas?
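One way to see what is actually happening is to log the transaction state at the start of a(), b() and c(); a minimal diagnostic sketch using Spring's TransactionSynchronizationManager (the helper name is illustrative):

import org.springframework.transaction.support.TransactionSynchronizationManager;

// Prints whether a physical transaction is active and which logical transaction name (if any)
// the current thread is bound to; comparing the output across a(), b() and c() shows whether
// they really join one REQUIRED transaction or run in separate ones.
static void logTxState(String where) {
    System.out.println(where
            + ": active=" + TransactionSynchronizationManager.isActualTransactionActive()
            + ", name=" + TransactionSynchronizationManager.getCurrentTransactionName());
}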
I need to perform some work when the Spring application is ready, something similar to @Scheduled, but I want it to run only once.
I found some ways to do it, such as using @PostConstruct on a bean, using @EventListener or InitializingBean; however, none of these matches my need. If something goes wrong during the execution of this logic, I want to ignore it so the application starts anyway. With these methods the application crashes instead.
Of course, I can surround the logic with a try-catch and it will work. But is there a more elegant way?
We faced a similar issue with our microservices. In order to run code just after startup, we added a component that
implements ApplicationListener<ApplicationReadyEvent>
and calls the services right after application startup; this worked for us.
@Component
public class ApplicationStartup implements ApplicationListener<ApplicationReadyEvent> {

    @Autowired
    YourService yourService;

    @Override
    public void onApplicationEvent(final ApplicationReadyEvent event) {
        System.out.println("ApplicationReadyEvent: application is up");
        try {
            // some code to call yourService with property-driven or constant inputs
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
When you use @PostConstruct to implement the logic, the application is not ready yet, so it somewhat contradicts your requirement. Spring initializes the beans one by one (respecting the dependencies between them)
and only after all of that builds up the application context.
When the application context is fully initialized, Spring indeed lets listeners run, so a listener is the way to go: when the listener is invoked, the application is ready.
In both cases (@PostConstruct, @EventListener), as long as you are not using a try/catch block, the application context will fail to start, because Spring waits until all the listeners are done.
You can use @Async if you don't want the application context to wait for listener execution. In that case the exception handling is done by the task executor. See here
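If you go the @Async route, a minimal sketch could look like this (it assumes @EnableAsync is declared on some configuration class; the class and method names are illustrative):

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;

@Component
public class StartupTasks {

    // Runs once when the context is ready; because it is @Async, it executes on a task
    // executor thread, so an exception here is routed to the async exception handler
    // instead of failing application startup.
    @Async
    @EventListener(ApplicationReadyEvent.class)
    public void runOnce() {
        // call your service here
    }
}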
Personally, I don't see any issue with the try/catch approach.
You can use @PostConstruct (as you said), but you must wrap your business logic in a try/catch and ignore the exception when one is thrown.
Sample Code
@PostConstruct
void init() {
    try {
        // Your business logic
    } catch (Exception e) {
        // Do nothing, or just log it
    }
}
I have some methods annotated with @KafkaListener, but I want to start some of them only manually (depending on some conditions).
@KafkaListener(id = "consumer1", topics = "topic-name", clientIdPrefix = "client-prefix", autoStartup = "false")
public void consumer1(String message) {
    // consume
}

@PostConstruct
private void startConsumers() {
    if (true) {
        kafkaListenerEndpointRegistry.getListenerContainer("consumer1").start();
    }
}
But at this moment kafkaListenerEndpointRegistry.getListenerContainers() returns an empty list and kafkaListenerEndpointRegistry.getListenerContainer("consumer1") returns null. So maybe the moment when the @PostConstruct method is called is too early and the listeners are not registered yet.
I tried annotating the startConsumers() method with @Scheduled(fixedDelay = 100) and by then the listeners are available. But using @Scheduled is not a good fit for something I want to call only once after the application starts.
You can't do it in @PostConstruct; it's too early in the application context life cycle.
Implement SmartLifecycle, set the phase to Integer.MAX_VALUE, and start the container in the start() method.
Or use an @EventListener and listen for the ApplicationStartedEvent (if using Spring Boot) or ContextRefreshedEvent for a non-Boot Spring application.
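For the SmartLifecycle option, a minimal sketch could look like this (it assumes Spring 5+, so the remaining lifecycle methods have defaults; the class name and condition are placeholders):

import org.springframework.context.SmartLifecycle;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class ConditionalConsumerStarter implements SmartLifecycle {

    private final KafkaListenerEndpointRegistry registry;
    private volatile boolean running;

    public ConditionalConsumerStarter(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @Override
    public int getPhase() {
        // Run as late as possible, after the listener containers have been registered.
        return Integer.MAX_VALUE;
    }

    @Override
    public void start() {
        if (shouldStartConsumer1()) {
            registry.getListenerContainer("consumer1").start();
        }
        running = true;
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    private boolean shouldStartConsumer1() {
        return true; // placeholder for the real condition
    }
}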
I'm using Java EE 7 + GlassFish and need to perform some operation against a number of JPA entities from a stateless bean.
@Stateless
public class JobRunner {

    @EJB
    private EntityFacade facade; // the facade that loads the entities (type name is a placeholder)

    public void runJob() {
        for (Entity entity : facade.findAll()) {
            // do some work against entity
        }
    }
}
This JobRunner bean is injected into the servlet and I invoke the runJob() method from the web UI.
The issue is that all entities are changed within one transaction, so if one fails everything is rolled back, which is not desirable. Is there a way to start and close a new transaction for each entity (i.e. for each iteration of the loop)?
I could write an external client and loop there, calling the stateless bean for each entity, but that doesn't really work for me as I prefer to keep the app monolithic. Can I somehow manage transactions from inside the container?
Maybe JMS helps? If I implement the doer as a message listener and send a message for each entity, will it start a new transaction for each one?
@Stateless
public class JobRunner {

    public void runJob() {
        for (Entity entity : facade.findAll()) {
            sendMessageToRealDoer(entity);
        }
    }
}
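The receiving side I have in mind would be a message-driven bean, roughly like this (the queue name and class name are placeholders):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "java:/jms/queue/EntityJobs"),
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class EntityJobListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // look up the entity from the message and do the per-entity work here
    }
}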
Create another bean, specifying @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) at method or bean level:
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class JobWork {

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void doWork(Entity entity) {
        // do what you would do in the loop with the Entity
        // this runs in a new transaction
    }
}
I wish I could tell you that you only need to annotate a method of the same bean (JobRunner) and simply call it. This is not possible without workarounds (see the comment from Steve C), because when you call methods on the same object, in both EJBs and CDI beans the interceptors do not get called, and transactions are implemented with interceptors in both cases.
Some notes:
If the total duration of the operations in the loop is expected to be long, you will hit a timeout on the outer transaction that is implicitly started for the JobRunner stateless EJB. You will want to make sure that no "outer" transaction is started (see the sketch after these notes).
Sending the data to a queue will work too, but queues process messages asynchronously, meaning that execution will most probably return to the servlet calling JobRunner.runJob() before all items have been processed.
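For completeness, a sketch of how the calling bean could look with these notes applied (EntityFacade and Entity are placeholders from the question; the error handling is only indicative):

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class JobRunner {

    @EJB
    private EntityFacade facade; // placeholder facade from the question

    @EJB
    private JobWork jobWork;

    // NOT_SUPPORTED suspends any incoming transaction, so the loop itself runs outside a
    // transaction and cannot hit the container's transaction timeout; each iteration gets
    // its own transaction via JobWork's REQUIRES_NEW method.
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public void runJob() {
        for (Entity entity : facade.findAll()) {
            try {
                jobWork.doWork(entity); // failure here rolls back only this entity's transaction
            } catch (Exception e) {
                // log and continue with the next entity
            }
        }
    }
}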
I'm using a JPA EntityListener to do some additional audit work and am injecting a Spring-managed AuditService into my AuditEntryListener using @Configurable. The AuditService generates a collection of AuditEntry objects. The AuditService is itself a singleton-scoped bean, and I'd like to gather all the AuditEntry objects under a common key that can then be accessed by the outermost service layer (the one that invoked the persist call which in turn triggered the EntityListener).
I'm looking at using Spring's TransactionSynchronizationManager to set a specific transaction name (using UID() or some other unique strategy) at the beginning of the transaction, and then using that name as a key within the AuditService that will allow me to group all AuditEntry objects created within that transaction.
Does mixing declarative and programmatic transaction management have the potential for trouble? (Though I'm doing nothing more than setting the transaction name.) Is there a better way to associate the generated AuditEntry objects with the current transaction? This solution does work for me, but given that the TransactionSynchronizationManager isn't intended for application use, I'd like to make sure that my use of it won't cause some unforeseen problems.
Related Question
Finally, a related, but not immediately pertinent question: I know that the documentation for JPA EntityListeners cautions against using the current EntityManager, but if I did want to use it to diff an object against its persisted self, would I be safe using a @Transactional(propagation = REQUIRES_NEW) annotation around my preUpdate() method?
Prototype Code:
Service Class
@Transactional
public void create(MyEntity e) {
    TransactionSynchronizationManager.setCurrentTransactionName(new UID().toString());
    this.em.persist(e);
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
        @Override
        public void afterCommit() {
            Set<AuditEntry> entries = auditService.getAuditEntries(TransactionSynchronizationManager.getCurrentTransactionName());
            if (entries != null) {
                for (AuditEntry entry : entries) {
                    // do some stuff....
                    LOG.info(entry.toString());
                }
            }
        }
    });
}
JPA EntityListener
@Configurable
public class AuditEntryListener {

    @Autowired
    private AuditService service;

    @PreUpdate
    public void preUpdate(Object entity) {
        service.auditUpdate(TransactionSynchronizationManager.getCurrentTransactionName(), entity);
    }

    public void setService(AuditService service) {
        this.service = service;
    }

    public AuditService getService() {
        return service;
    }
}
AuditService
@Service
public class AuditService {

    private Map<String, Set<AuditEntry>> auditEntryMap = new HashMap<String, Set<AuditEntry>>();

    public void auditUpdate(String key, Object entity) {
        // do some audit work, building an AuditEntry
        AuditEntry ae = ...;
        // add the audit entry to the map under the transaction key
        this.auditEntryMap.get(key).add(ae);
    }
}
@Filip
As far as I understand, your requirement is:
Have a unique token generated within each transaction (database transaction, of course)
Keep this unique token easily accessible across all layers
So naturally you're thinking about the TransactionSynchronizationManager provided by Spring as a facility to store the unique token (in this case, a UID).
Be very careful with this approach: the TransactionSynchronizationManager is the main storage helper that manages all the @Transactional processing for Spring. Under the @Transactional hood, Spring creates an appropriate EntityManager and an appropriate Synchronization object and attaches them to a thread local using TransactionSynchronizationManager.
In your service class code, inside a @Transactional method, you are tampering with the Synchronization object, which can end up in undesirable behavior.
I've done an in-depth analysis of how @Transactional works here, have a look: http://doanduyhai.wordpress.com/2011/11/20/spring-transactional-explained/
Now back to your needs. What you can do is:
Add a ThreadLocal to the AuditService, containing the unique token when entering the @Transactional method, and destroy it when exiting the method. Within this method call, you can access the unique token in any layer. An explanation of ThreadLocal usage can be found here: http://doanduyhai.wordpress.com/2011/12/04/threadlocal-explained/
Create a new annotation, let's say @Auditable(uid = "AuditScenario1"), to annotate methods that need to be audited, and use Spring AOP to intercept these method calls and manage the ThreadLocal processing for you
Example:
Modified AuditService
@Service
public class AuditService {

    public static final ThreadLocal<String> uidThreadLocal = new ThreadLocal<String>();
    ...
    ...
}
Auditable annotation
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface Auditable {
    String uid();
}
Usage of the @Auditable annotation
@Auditable(uid = "AuditScenario1")
@Transactional
public void myMethod() {
    // Something
}
Spring AOP part
@Around("execution(public * *(..)) && @annotation(auditableAnnotation)")
public Object manageAuditToken(ProceedingJoinPoint jp, Auditable auditableAnnotation) throws Throwable {
    ...
    ...
    AuditService.uidThreadLocal.set(auditableAnnotation.uid());
    ...
}
Hope this will help.
You can come up with a solution using the TransactionSynchronizationManager. We register a "TransactionInterceptorEntityListener" with JPA as an entity listener. What we wanted to achieve is the ability to listen to CRUD events with a Spring-managed "listener" whose lifecycle is tied to the current transaction (i.e., Spring-managed, but one instance per transaction). We subclass JpaTransactionManager and introduce, in the prepareSynchronization() method, a hook to set up a "TransactionInterceptorSynchronizer." We also use the same hook to allow code (in programmatic transactions) to associate and retrieve arbitrary objects with the current transaction and to register jobs that run before/after the transaction commits.
The overall code is complex, but definitely doable. If you use JpaTemplate for programmatic transactions, it is tough to achieve this, so we rolled our own template that simply calls the JPA template after taking care of the interceptor work. We plan to open-source our JPA library (written on top of Spring's classes) soon.
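To give an idea of the hook, here is a rough sketch (not our actual code) of subclassing the JPA transaction manager and registering an extra synchronization whenever a new transaction-scoped synchronization is set up; the class name is illustrative:

import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.DefaultTransactionStatus;
import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public class InterceptingJpaTransactionManager extends JpaTransactionManager {

    @Override
    protected void prepareSynchronization(DefaultTransactionStatus status,
                                          TransactionDefinition definition) {
        // Let Spring set up the normal transaction synchronization first.
        super.prepareSynchronization(status, definition);
        if (status.isNewSynchronization()) {
            // Attach an extra synchronization to the current transaction.
            TransactionSynchronizationManager.registerSynchronization(
                new TransactionSynchronizationAdapter() {
                    @Override
                    public void afterCommit() {
                        // run per-transaction work here, e.g. flush collected audit entries
                    }
                });
        }
    }
}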
You can see a pattern of adding custom transactions and hooks to Spring-managed transactions in the following library for PostgreSQL