Reactive calls in @PostConstruct - Java

I'm trying to understand the right way to implement @PostConstruct methods in Spring WebFlux.
On startup of the application I need to read data from the DB (I have an R2dbcRepository configured), perform some logic, and save the result in a bean's fields.
So I have a findAll() method returning a Flux. How should this be done?
I tried using .block() and an AtomicBoolean flag; none of these worked.

First of all, never use the block() method. Use it in tests at most, and even there StepVerifier is the better tool. (If you use Kotlin, there are await-prefixed extension functions that work like block() without actually blocking.)
If you need the data at launch, that suggests questionable design to me: if no user ever asks for it, what do you do with it? It is better to run the query when you need the data, add the result to a cache, and reuse it the next time you need it. With WebFlux you can prepare a Mono that queries the database and append .cache() at the end of the chain. The Spring bean then holds this Mono, which runs when it is first subscribed to.
Of course, in the example below, repo.find() is never called unless Service.function() runs.
https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#cache--
@Configuration
public class Config {

    private final R2dbcRepository repo;

    public Config(R2dbcRepository repo) {
        this.repo = repo;
    }

    @Bean
    public Mono<Data> myCachedDbData() {
        return repo.find(...)
                .map(it -> new Data(it))
                .cache();
    }
}
@Service
public class Service {

    private final Mono<Data> data;

    public Service(Mono<Data> data) {
        this.data = data;
    }

    public Object function() {
        return data.flatMap(...);
    }
}
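To illustrate what .cache() buys you here, a minimal standalone sketch (the counter is hypothetical, standing in for the database call): the supplier runs once on the first subscription, and later subscribers get the replayed value.

import java.util.concurrent.atomic.AtomicInteger;
import reactor.core.publisher.Mono;

public class CacheDemo {
    public static void main(String[] args) {
        AtomicInteger queries = new AtomicInteger(); // stands in for the DB call
        Mono<String> cached = Mono.fromSupplier(() -> "row-" + queries.incrementAndGet())
                .cache();

        cached.subscribe(System.out::println); // first subscription runs the supplier: row-1
        cached.subscribe(System.out::println); // replayed from the cache: row-1 again
        System.out.println("queries executed: " + queries.get()); // 1
    }
}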


How to add SynchronizationCallbacks to #TransactionalEventListener during spring boot application startup?

I have a Spring Boot application that uses a few @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT) listeners. I noticed that Spring Boot doesn't do any exception logging for them when they end with an exception being thrown.
Because of this I wanted to add a generic logging facility for such exceptions. I found that TransactionalApplicationListener.SynchronizationCallback is the interface I need to implement. However, registering these callbacks seems complicated: I didn't find any call of TransactionalApplicationListener#addCallback in the Spring dependencies that would achieve this.
Trying to get a list of TransactionalApplicationListeners and the SynchronizationCallback injected, and then calling addCallback in a @PostConstruct, didn't get me further, because no listeners were ever injected even though the application did make successful use of them.
So how do I add SynchronizationCallbacks to TransactionalApplicationListeners during Spring Boot application startup?
The first thing to note is that TransactionalApplicationListeners, like all ApplicationListeners, are not beans in the Spring context. They live somewhat outside of it (see org.springframework.context.ConfigurableApplicationContext#addApplicationListener), so the application context cannot inject them.
While debugging and looking through the Spring sources, one finds that these listeners are created by org.springframework.transaction.event.TransactionalEventListenerFactory. And that is where my solution steps in: we decorate that factory with another one that is aware of SynchronizationCallbacks:
import java.lang.reflect.Method;
import java.util.Collection;
import java.util.List;
import javax.inject.Provider;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.EventListenerFactory;
import org.springframework.core.Ordered;
import org.springframework.transaction.event.TransactionalApplicationListener;
import org.springframework.transaction.event.TransactionalApplicationListener.SynchronizationCallback;
import org.springframework.transaction.event.TransactionalEventListenerFactory;

public class SynchronizationCallbackAwareFactory implements EventListenerFactory, Ordered {

    private final TransactionalEventListenerFactory delegate;
    private final Provider<List<SynchronizationCallback>> synchronizationCallbacks;
    private final int order;

    public SynchronizationCallbackAwareFactory(TransactionalEventListenerFactory transactionalEventListenerFactory,
            Provider<List<SynchronizationCallback>> synchronizationCallbacks,
            int order) {
        this.delegate = transactionalEventListenerFactory;
        this.synchronizationCallbacks = synchronizationCallbacks;
        this.order = order;
    }

    @Override
    public boolean supportsMethod(Method method) {
        return delegate.supportsMethod(method);
    }

    @Override
    public ApplicationListener<?> createApplicationListener(String beanName, Class<?> type, Method method) {
        ApplicationListener<?> applicationListener = delegate.createApplicationListener(beanName, type, method);
        if (applicationListener instanceof TransactionalApplicationListener) {
            TransactionalApplicationListener<?> listener = (TransactionalApplicationListener<?>) applicationListener;
            Collection<SynchronizationCallback> callbacks = this.synchronizationCallbacks.get();
            callbacks.forEach(listener::addCallback);
        }
        return applicationListener;
    }

    @Override
    public int getOrder() {
        return order;
    }
}
Note that I use a javax.inject.Provider here to defer retrieval of the callbacks until the latest possible moment.
The decorator has to be Ordered because Spring uses the first factory it comes across that supports the method. The order of an instance of this class therefore has to have higher precedence than the order value 50 of TransactionalEventListenerFactory.
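The answer doesn't show how the decorating factory gets registered. A minimal sketch of one plausible wiring (the bean method name and the order value 40 are assumptions, not from the original answer): Spring's EventListenerMethodProcessor picks up all EventListenerFactory beans from the context, so a plain @Bean definition should suffice.

import java.util.List;
import javax.inject.Provider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.event.TransactionalApplicationListener.SynchronizationCallback;
import org.springframework.transaction.event.TransactionalEventListenerFactory;

@Configuration
public class ListenerFactoryConfig {

    @Bean
    public SynchronizationCallbackAwareFactory synchronizationCallbackAwareFactory(
            Provider<List<SynchronizationCallback>> callbacks) {
        // order 40 wins over TransactionalEventListenerFactory's order of 50,
        // so our decorator is asked first
        return new SynchronizationCallbackAwareFactory(
                new TransactionalEventListenerFactory(), callbacks, 40);
    }
}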
I had a similar problem with code like this:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public class SomeListenerFacade {

    @TransactionalEventListener
    public void onSomething(SomeEvent event) {
        throw new RuntimeException("some cause");
    }
}
I followed your solution and it worked. Along the way I found an alternative that at least makes the exception visible in the log file:
# application.properties
logging.level.org.springframework.transaction.support.TransactionSynchronizationUtils = DEBUG

Create bean instance at runtime for interface

I am kind of stuck on a problem with creating beans, or probably I've got the wrong intention. Maybe you can help me solve it:
I have an application which takes in requests for batch processing. For every batch I need to create its own context, depending on the parameters issued by the request.
I will try to simplify it with the following example:
I receive a request to process FunctionA in a batch. FunctionA is an implementation of my Function_I interface and has the sub-implementations FunctionA_DE and FunctionA_AT.
Something like this:
public interface Function_I {
    String doFunctionStuff();
}

public abstract class FunctionA implements Function_I {

    FunctionConfig funcConfig;

    public FunctionA(FunctionConfig funcConfig) {
        this.funcConfig = funcConfig;
    }

    public String doFunctionStuff() {
        // some code
        String result = callSpecificFunctionStuff();
        // more code
        return result;
    }

    protected abstract String callSpecificFunctionStuff();
}

public class FunctionA_DE extends FunctionA {

    public FunctionA_DE(FunctionConfig funcConf) {
        super(funcConf);
    }

    protected String callSpecificFunctionStuff() {
        // do some specific stuff
        return result;
    }
}

public class FunctionA_AT extends FunctionA {

    public FunctionA_AT(FunctionConfig funcConf) {
        super(funcConf);
    }

    protected String callSpecificFunctionStuff() {
        // do some specific stuff
        return result;
    }
}
What would be the Spring Boot way of creating an instance of FunctionA_DE and getting it as a Function_I for the calling part of the application? And what should it look like when I add FunctionB with FunctionB_DE / FunctionB_AT to my classes?
I thought it could be something like:
PSEUDO CODE
@Configuration
public class FunctionFactory {

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) // I need a new instance every time I call it
    public Function_I createFunctionA(FunctionConfiguration funcConfig) {
        // create the Function depending on funcConfig: either FunctionA_DE or FunctionA_AT
    }
}
and I would call it by autowiring the FunctionFactory into my calling class and using it with
someSpringFactory.createFunction(functionConfiguration);
but I can't figure out how to create a prototype bean for the function while passing a parameter. And I can't really find a solution to my question by browsing through SO; maybe I just have the wrong search terms. Or my approach to this issue is totally wrong (maybe stupid), and nobody would solve it the Spring Boot way but would stick to factories.
Appreciate your help!
You could use Spring's application context. Create a bean for each of the implementations but annotate it with a specific profile, e.g. "Function-A-AT". Now when you have to invoke it, you can configure Spring's active profiles accordingly and the right bean should be used by Spring.
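For the prototype-with-parameter part of the question, a hedged sketch of one way this can work: ObjectProvider.getObject(Object...) can pass explicit arguments through to a prototype-scoped @Bean method. The names below follow the question's pseudo code (assuming FunctionConfig and FunctionConfiguration refer to the same configuration type); the getCountry() accessor is hypothetical.

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Service;

@Configuration
class FunctionFactory {

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    Function_I createFunctionA(FunctionConfiguration funcConfig) {
        // decide on the concrete implementation from the passed-in configuration
        return "DE".equals(funcConfig.getCountry()) // hypothetical accessor
                ? new FunctionA_DE(funcConfig)
                : new FunctionA_AT(funcConfig);
    }
}

@Service
class CallingClass {

    private final ObjectProvider<Function_I> functionProvider;

    CallingClass(ObjectProvider<Function_I> functionProvider) {
        this.functionProvider = functionProvider;
    }

    String process(FunctionConfiguration funcConfig) {
        // each call yields a fresh prototype instance; funcConfig is handed
        // to the @Bean factory method as an explicit argument
        Function_I function = functionProvider.getObject(funcConfig);
        return function.doFunctionStuff();
    }
}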
Hello everyone and thanks for reading my question.
After a discussion with a friend who is well versed in the Spring framework, I came to the conclusion that my approach, or my favoured solution, was not what I was searching for and is not how Spring should be used. Because the Function_I instance depends on the configuration loaded for the specific batch, it is not recommended to manage all these instances as @Beans.
In the end I decided not to manage the instances of my Function_I with Spring. Instead I built a controller/factory, which is a @Controller class, and let this class build the instance I need, with the passed parameters driving the decision at runtime.
This is how it looks (Pseudo-Code)
@Controller
public class FunctionController {

    private final SomeSpringManagedClass ssmc;

    public FunctionController(SomeSpringManagedClass ssmc) {
        this.ssmc = ssmc;
    }

    public Function_I createFunction(FunctionConfiguration funcConf) {
        boolean funcA, funcB, cntryDE;
        // code to decide the function
        if (funcA && cntryDE) {
            return new FunctionA_DE(funcConf);
        } else if (funcB && cntryDE) {
            return new FunctionB_DE(funcConf);
        } // maybe more else if...
        throw new IllegalArgumentException("no matching function");
    }
}

How to transfer data via reactor's subscriber context?

I'm new to Project Reactor, but I have a task to send some information from a classic Spring REST controller to a service which interacts with a different system. The whole project is developed with Project Reactor.
Here is my REST controller:
@RestController
public class Controller {

    @Autowired
    Service service;

    @PostMapping("/path")
    public Mono<String> test(@RequestHeader Map<String, String> headers) throws Exception {
        service.saveHeader(headers.get("header"));
        return service.getData();
    }
}
And here is my service:
@Service
public class Service {

    private Mono<String> monoHeader;
    private InteractionService interactor;

    public Mono<String> getData() {
        return Mono.fromSupplier(() -> interactor.interact(monoHeader.block()));
    }

    public void saveHeader(String header) {
        String key = "header";
        monoHeader = Mono.just("")
                .flatMap(s -> Mono.subscriberContext()
                        .map(ctx -> s + ctx.get(key)))
                .subscriberContext(ctx -> ctx.put(key, header));
    }
}
Is this an acceptable solution?
First off, I don't think you need the Context here. It is useful for implicitly passing data to a Flux or a Mono that you don't create yourself (e.g. one that a database driver creates for you). But here you're in charge of creating the Mono<String>.
Does the saveHeader method really achieve something? The call seems transient in nature: you always immediately call the interactor with the last saved header. (There is also a possible race condition, where two parallel calls to your endpoint overwrite each other's headers.)
If you really want to store the headers, you could add a list or map to your service, but the most logical path would be to add the header as a parameter of getData().
This eliminates the monoHeader field and the saveHeader method.
Then getData() itself: you never need to block() on a Mono if you aim at returning a Mono. Adding an input parameter lets you rewrite the method as:
public Mono<String> getData(String header) {
    return Mono.fromSupplier(() -> interactor.interact(header));
}
Last but not least: blocking.
The interactor seems to be an external service or library that is not reactive in nature. If the operation involves some latency (which it probably does) or blocks for more than a few milliseconds, it should run on a separate thread.
Mono.fromSupplier runs in whatever thread subscribes to it. In this case Spring WebFlux subscribes to it, so it runs on a Netty event-loop thread. If you block that thread, no other request can be serviced in the whole application!
So you want to execute the interactor on a dedicated thread, which you can do with subscribeOn(Schedulers.boundedElastic()).
All in all:
@RestController
public class Controller {

    @Autowired
    Service service;

    @PostMapping("/path")
    public Mono<String> test(@RequestHeader Map<String, String> headers) throws Exception {
        return service.getData(headers.get("header"));
    }
}

@Service
public class Service {

    private InteractionService interactor;

    public Mono<String> getData(String header) {
        return Mono.fromSupplier(() -> interactor.interact(header))
                .subscribeOn(Schedulers.boundedElastic());
    }
}
How to transfer data via reactor's subscriber context?
Is it an acceptable solution?
No.
Your saveHeader() method is equivalent to simply:
public void saveHeader(String header) {
    monoHeader = Mono.just(header);
}
A subscriberContext is needed if you consume the value elsewhere, i.e. if the Mono is constructed elsewhere. In your case, where all the code is before your eyes in the same method, just use the actual value.
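To make that distinction concrete, a minimal sketch (using the same pre-3.4 Reactor API as the question; names are illustrative): the Mono is built in one place without knowing the header, and the caller supplies it through the context at subscription time.

// built elsewhere: this Mono reads the header from the subscriber context
Mono<String> builtElsewhere = Mono.subscriberContext()
        .map(ctx -> "header=" + ctx.get("header"));

// at the call site, the caller attaches the actual value downstream
builtElsewhere
        .subscriberContext(ctx -> ctx.put("header", "abc"))
        .subscribe(System.out::println); // prints header=abc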
BTW, there are many ways to implement your getData() method.
One, as suggested by Simon Baslé, is to get rid of the separate saveHeader() method.
Another, if you have to keep your monoHeader field, could be:
public Mono<String> getData() {
    return monoHeader.publishOn(Schedulers.boundedElastic())
            .map(header -> interactor.interact(header));
}

How to use CompletableFuture.thenCompose() when returning entities from repositories?

I started working with CompletableFuture in Spring Boot, and I'm seeing in some places that the usual repository methods return CompletableFuture<Entity> instead of Entity.
I don't know what is happening, but when I return instances of CompletableFuture from repositories, the code runs perfectly. However, when I return entities, the code does not work asynchronously and always returns null.
Here is an example:
@Service
public class AsyncServiceImpl {

    /** .. Init repository instances .. **/

    @Async(AsyncConfiguration.TASK_EXECUTOR_SERVICE)
    public CompletableFuture<Token> getTokenByUser(Credential credential) {
        return userRepository.getUser(credential)
                .thenCompose(s -> tokenRepository.getToken(s));
    }
}

@Repository
public class UserRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_REPOSITORY)
    public CompletableFuture<User> getUser(Credential credentials) {
        return CompletableFuture.supplyAsync(() ->
                new User(credentials.getUsername())
        );
    }
}

@Repository
public class TokenRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_REPOSITORY)
    public CompletableFuture<Token> getToken(User user) {
        return CompletableFuture.supplyAsync(() ->
                new Token(user.getUserId())
        );
    }
}
The previous code runs perfectly but the following code doesn't run asynchronously and the result is always null.
@Service
public class AsyncServiceImpl {

    /** .. Init repository instances .. **/

    @Async(AsyncConfiguration.TASK_EXECUTOR_SERVICE)
    public CompletableFuture<Token> requestToken(Credential credential) {
        return CompletableFuture.supplyAsync(() -> userRepository.getUser(credential))
                .thenCompose(s ->
                        CompletableFuture.supplyAsync(() -> tokenRepository.getToken(s)));
    }
}

@Repository
public class UserRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_REPOSITORY)
    public User getUser(Credential credentials) {
        return new User(credentials.getUsername());
    }
}

@Repository
public class TokenRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_SERVICE)
    public Token getToken(User user) {
        return new Token(user.getUserId());
    }
}
Why doesn't this second version work?
As per the Spring @Async Javadoc:
the return type is constrained to either void or Future
and it is also further detailed in the reference documentation:
In the simplest case, the annotation may be applied to a void-returning method.
[…]
Even methods that return a value can be invoked asynchronously. However, such methods are required to have a Future typed return value. This still provides the benefit of asynchronous execution so that the caller can perform other tasks prior to calling get() on that Future.
In your second example, your @Async-annotated methods do not return a Future (or ListenableFuture / CompletableFuture, which are also supported). However, Spring still has to run your methods asynchronously. It can thus only behave as if they had a void return type, and so the caller gets null.
As a side note, when you use @Async your method already runs asynchronously, so you shouldn't use CompletableFuture.supplyAsync() inside it. Simply compute the result and return it, wrapped in CompletableFuture.completedFuture() if necessary. And if a method only composes futures (like your service, which simply composes asynchronous repository results), it probably doesn't need the @Async annotation at all. See also the example from the Getting Started guide.
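Putting the answer's two remarks together, a hedged sketch of how the second example could be fixed (same class and executor names as in the question; repository fields are assumed to be injected as before):

@Repository
public class UserRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_REPOSITORY)
    public CompletableFuture<User> getUser(Credential credentials) {
        // already running on the configured executor: compute, then wrap
        return CompletableFuture.completedFuture(new User(credentials.getUsername()));
    }
}

@Repository
public class TokenRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_REPOSITORY)
    public CompletableFuture<Token> getToken(User user) {
        return CompletableFuture.completedFuture(new Token(user.getUserId()));
    }
}

@Service
public class AsyncServiceImpl {

    // no @Async needed: this method only composes already-asynchronous results
    public CompletableFuture<Token> requestToken(Credential credential) {
        return userRepository.getUser(credential)
                .thenCompose(user -> tokenRepository.getToken(user));
    }
}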

Best practice to 'rollback' REST method calls inside method

The title might be incorrect, but I will try to explain my issue. My project is a Spring Boot project. I have services which make calls to external REST endpoints.
I have a service method which contains several calls to other services of mine. Every individual call can succeed or fail: each one goes to a REST endpoint, and the web service may be unavailable or may throw an unknown exception in rare cases. Whatever happens, I need to be able to track which calls were successful, and if any one of them fails, I want to roll back to the original state as if nothing happened, a bit like the @Transactional annotation. All REST calls are different endpoints, need to be called separately, and belong to an external party I have no influence on. Example:
public class MyServiceImpl implements MyService {

    @Autowired
    private Process1Service process1Service;
    @Autowired
    private Process2Service process2Service;
    @Autowired
    private Process3Service process3Service;
    @Autowired
    private Process4Service process4Service;

    public void bundledProcess() {
        process1Service.createFileRESTcall();
        process2Service.addFilePermissionsRESTcall();
        process3Service.addFileMetadataRESTcall();  // might fail, for example
        process4Service.addFileTimestampRESTcall();
    }
}
If, for example, process3Service.addFileMetadataRESTcall() fails, I want to do something like an undo (in reverse order) of every step before process3:
process2Service.removeFilePermissionsRESTcall();
process1Service.deleteFileRESTcall();
I read about the Command pattern, but that seems to be used for undo actions inside an application, as a sort of history of performed actions, not inside a Spring web application. Is it correct for my use case too, or should I track per method/web-service call whether it was successful? Is there a best practice for this?
I guess that however I track it, I need to know which call failed and from there perform my 'undo' REST calls, although in theory even those calls might fail, of course.
My main goal is to not have files created (in my example) on which the further processing steps have not been performed. It should either be all successful or nothing: a sort of transaction.
Update 1: improved pseudo implementation based on the comments:
public class Process1ServiceImpl implements Process1Service {

    public void createFileRESTcall() throws MyException {
        // Call an external REST API, pseudo code:
        if (REST-call fails) {
            throw new MyException("External REST api failed");
        }
    }
}
public class BundledProcessEvent {

    private boolean createFileSuccess;
    private boolean addFilePermissionsSuccess;
    private boolean addFileMetadataSuccess;
    private boolean addFileTimestampSuccess;

    // Getters and setters
}
public class MyServiceImpl implements MyService {

    @Autowired
    private Process1Service process1Service;
    @Autowired
    private Process2Service process2Service;
    @Autowired
    private Process3Service process3Service;
    @Autowired
    private Process4Service process4Service;
    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @Transactional(rollbackOn = MyException.class)
    public void bundledProcess() {
        BundledProcessEvent bundledProcessEvent = new BundledProcessEvent();
        this.applicationEventPublisher.publishEvent(bundledProcessEvent);

        process1Service.createFileRESTcall();
        bundledProcessEvent.setCreateFileSuccess(true);
        process2Service.addFilePermissionsRESTcall();
        bundledProcessEvent.setAddFilePermissionsSuccess(true);
        process3Service.addFileMetadataRESTcall();
        bundledProcessEvent.setAddFileMetadataSuccess(true);
        process4Service.addFileTimestampRESTcall();
        bundledProcessEvent.setAddFileTimestampSuccess(true);
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void rollback(BundledProcessEvent bundledProcessEvent) {
        // If the last process step succeeded, we should not even be in this
        // rollback method:
        //if (bundledProcessEvent.isAddFileTimestampSuccess()) {
        //    // remove timestamp
        //}
        if (bundledProcessEvent.isAddFileMetadataSuccess()) {
            // remove metadata
        }
        if (bundledProcessEvent.isAddFilePermissionsSuccess()) {
            // remove file permissions
        }
        if (bundledProcessEvent.isCreateFileSuccess()) {
            // remove file
        }
    }
}
Your operation looks like a transaction, so you can use the @Transactional annotation. From your code I can't really tell how you are handling the HTTP responses for each of those operations, but you should consider having your service methods return them and then roll back depending on the response codes. You can create an array of steps like below; how exactly your logic should look is up to you.
private Process[] restCalls = new Process[] {
        () -> process1Service.createFileRESTcall(),
        () -> process2Service.addFilePermissionsRESTcall(),
        () -> process3Service.addFileMetadataRESTcall(),
        () -> process4Service.addFileTimestampRESTcall(),
};

interface Process {
    void call();
}
@Transactional(rollbackOn = Exception.class)
public void bundledProcess() {
    restCalls[0].call();
    ... // say, see which process returned a wrong response code
}

@TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
public void rollback() {
    // handle rollback according to the failed method's index
}
Check this article. Might come in handy.
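As an illustration of the "undo in reverse order" idea from the question, a minimal saga-style compensation sketch, independent of @Transactional. The removeFileMetadataRESTcall() undo method is hypothetical; the question only names undo calls for the first two steps.

import java.util.ArrayDeque;
import java.util.Deque;

public class CompensatingBundledProcess {

    // process*Service fields as in the question, injected elsewhere

    public void bundledProcess() {
        // after each successful step, record how to undo it
        Deque<Runnable> compensations = new ArrayDeque<>();
        try {
            process1Service.createFileRESTcall();
            compensations.push(process1Service::deleteFileRESTcall);

            process2Service.addFilePermissionsRESTcall();
            compensations.push(process2Service::removeFilePermissionsRESTcall);

            process3Service.addFileMetadataRESTcall();
            compensations.push(process3Service::removeFileMetadataRESTcall); // hypothetical undo

            process4Service.addFileTimestampRESTcall();
        } catch (RuntimeException e) {
            // push() adds to the front of the deque, so forEach undoes the
            // most recently completed step first
            compensations.forEach(Runnable::run);
            throw e;
        }
    }
}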
The answer to this question is quite broad: there are too many ways to do distributed transactions to go through them all here. However, since you are using Java and Spring, your best bet is something like JTA (the Java Transaction API), which enables distributed transactions across multiple services/instances/etc. Fortunately, Spring Boot supports JTA using either Atomikos or Bitronix. You can read the doc here.
One approach to enabling distributed transactions is a message broker such as JMS, RabbitMQ, Kafka, or ActiveMQ, together with a protocol like XA transactions (two-phase commit). For external services that do not support distributed transactions, one approach is to write a wrapper service that understands XA transactions in front of the external service.
