I've implemented a patch() method in the service layer using Mutiny; it patches an entity (changes one field). Before the change I run some checks.
The problem is that after the entity is patched (an UPDATE SQL statement) in the DB, the get() method doesn't wait for the result and returns the previous entity state.
How can I make get() depend on the result of patch()? I need to return the changed state of the entity after the patch.
@Override
public <T> Uni<List<ConfigParamDto<T>>> getAll() {
    return configRepository.findAll().list()
            .map(params -> params.stream().map(ConfigParamDto<T>::new).collect(Collectors.toList()));
}

@Override
@ReactiveTransactional
public <T> Uni<ConfigParamDto<Object>> patch(String name, T value) throws ConfigParamNotFoundException {
    return get(name).map(param -> someCheck(param))
            .chain(ignored -> configRepository.patch(name, String.valueOf(value)))
            .chain(ignored -> get(name));
}
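One hedged alternative (my sketch, not from the original post; it assumes configRepository.patch(...) emits the updated entity, which may not match the actual repository method): build the returned DTO from the value that was just written instead of re-reading the row, so the result cannot observe a stale state:

@Override
@ReactiveTransactional
public <T> Uni<ConfigParamDto<Object>> patch(String name, T value) throws ConfigParamNotFoundException {
    return get(name)
            .map(param -> someCheck(param))
            // assumption: patch(...) returns a Uni carrying the updated entity
            .chain(ignored -> configRepository.patch(name, String.valueOf(value)))
            .map(updated -> new ConfigParamDto<>(updated));
}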
I am creating a web application for learning purposes using Spring WebFlux. I have a function that first checks whether a record exists and then updates it; otherwise it throws a custom NotFoundException. The issue is that when I return Mono<Void> the controller responds with a 404 error, but when I return the updated object it runs fine, and I don't want to return the whole object.
The following code runs fine
public Mono<Application> publish(String id,boolean publish)
{
return appRepository.findById(id).flatMap( a -> {
a.setPublished(publish);
return appRepository.save(a);
}).switchIfEmpty( Mono.error(new NotFoundException("Application Not Found")));
}
and below code where 404 error occurs
public Mono<Void> publish(String id,boolean publish)
{
return appRepository.findById(id).flatMap( a -> {
a.setPublished(publish);
appRepository.save(a);
return Mono.empty().then();
}).switchIfEmpty( Mono.error(new NotFoundException("Application Not Found")));
}
I have extended the repository from ReactiveMongoRepository and controller class is just calling the service function
@PutMapping(APP_ROOT_URL + "/{id}/publish")
public Mono<Void> publish(@PathVariable("id") String id)
{
    return appService.publish(id, true);
}
The first method doesn't return 404 because:
appRepository.save(a) returns the persisted entity, not an empty Mono, so the switchIfEmpty clause is not triggered.
This is from the Javadoc of ReactiveCrudRepository (one of the parent interfaces of ReactiveMongoRepository):
/**
 * Saves a given entity. Use the returned instance for further operations as the save operation might have changed the
 * entity instance completely.
 *
 * @param entity must not be {@literal null}.
 * @return {@link Mono} emitting the saved entity.
 * @throws IllegalArgumentException in case the given {@literal entity} is {@literal null}.
 */
<S extends T> Mono<S> save(S entity);
In the second method, you are explicitly returning an empty Mono. That is why the switchIfEmpty clause is triggered.
One more thing I would like to point out:
The switchIfEmpty clause is not placed correctly. Since findById returns an empty Mono if no record is found for the given id, switchIfEmpty should come right after it. If you place switchIfEmpty after save, save will never return an empty Mono.
So you should have something like this:
public Mono<Application> publish(String id,boolean publish)
{
return appRepository.findById(id)
.switchIfEmpty( Mono.error(new NotFoundException("Application Not Found")))
.flatMap( a -> {
a.setPublished(publish);
return appRepository.save(a);
});
}
And if you want the return type of the method to be Mono<Void> then simply have something like this:
public Mono<Void> publish(String id, boolean publish)
{
return appRepository.findById(id)
.switchIfEmpty( Mono.error(new NotFoundException("Application Not Found")))
.flatMap( a -> {
a.setPublished(publish);
return appRepository.save(a);
})
.then();
}
I'm new to Project Reactor, but I have a task to send some information from a classic Spring REST controller to a service that interacts with a different system. The whole project is developed with Project Reactor.
Here is my rest controller:
@RestController
public class Controller {

    @Autowired
    Service service;

    @PostMapping("/path")
    public Mono<String> test(@RequestHeader Map<String, String> headers) throws Exception {
        service.saveHeader(headers.get("header"));
        return service.getData();
    }
}
And here is my service:
@Service
public class Service {

    private Mono<String> monoHeader;

    private InteractionService interactor;

    public Mono<String> getData() {
        return Mono.fromSupplier(() -> interactor.interact(monoHeader.block()));
    }

    public void saveHeader(String header) {
        String key = "header";
        monoHeader = Mono.just("")
                .flatMap(s -> Mono.subscriberContext()
                        .map(ctx -> s + ctx.get(key)))
                .subscriberContext(ctx -> ctx.put(key, header));
    }
}
Is this an acceptable solution?
First off, I don't think you need the Context here. It is useful for implicitly passing data to a Flux or a Mono that you don't create yourself (e.g. one that a database driver creates for you). But here you're in charge of creating the Mono<String>.
Does the saveHeader service method really achieve anything? The call seems transient in nature: you always immediately call the interactor with the last saved header (and there could be a side effect where two parallel calls to your endpoint end up overwriting each other's headers).
If you really want to store the headers, you could add a list or map to your service, but the most logical path would be to add the header as a parameter of getData().
This eliminates the monoHeader field and the saveHeader method.
Then getData() itself: you never need to block() on a Mono if you aim at returning a Mono. Adding an input parameter lets you rewrite the method as:
public Mono<String> getData(String header) {
return Mono.fromSupplier(() -> interactor.interact(header));
}
Last but not least, blocking.
The interactor seems to be an external service or library that is not reactive in nature. If the operation involves some latency (which it probably does) or blocks for more than a few milliseconds, then it should run on a separate thread.
Mono.fromSupplier runs in whatever thread is subscribing to it. In this case, Spring WebFlux will subscribe to it, and it will run in the Netty eventloop thread. If you block that thread, it means no other request can be serviced in the whole application!
So you want to execute the interactor in a dedicated thread, which you can do by using subscribeOn(Schedulers.boundedElastic()).
All in all:
@RestController
public class Controller {

    @Autowired
    Service service;

    @PostMapping("/path")
    public Mono<String> test(@RequestHeader Map<String, String> headers) throws Exception {
        return service.getData(headers.get("header"));
    }
}

@Service
public class Service {

    private InteractionService interactor;

    public Mono<String> getData(String header) {
        return Mono.fromSupplier(() -> interactor.interact(header))
                .subscribeOn(Schedulers.boundedElastic());
    }
}
How to transfer data via reactor's subscriber context?
Is this an acceptable solution?
No.
Your saveHeader() method is equivalent to the much simpler
public void saveHeader(String header) {
monoHeader = Mono.just(header);
}
A subscriberContext is needed if you consume the value elsewhere, i.e. when the Mono is constructed elsewhere. In your case (where all the code is in front of you in the same method) just use the actual value.
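For illustration, a minimal sketch (using the same Mono.subscriberContext() API as above) of the case where the Context does help: the Mono is assembled in one place and the value is only supplied by whoever subscribes:

// Assembled somewhere that has no access to the header value:
Mono<String> greeting = Mono.subscriberContext()           // reads the subscriber's Context
        .map(ctx -> "Hello, " + ctx.get("header"));

// The caller attaches the value at subscription time:
greeting.subscriberContext(ctx -> ctx.put("header", "world"))
        .subscribe(System.out::println);                    // prints "Hello, world"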
By the way, there are many ways to implement your getData() method.
One is, as suggested by Simon Baslé, to get rid of the separate saveHeader() method.
Another way, if you have to keep your monoHeader field, could be
public Mono<String> getData() {
return monoHeader.publishOn(Schedulers.boundedElastic())
.map(header -> interactor.interact(header));
}
I have a state machine
@EnableStateMachine
@Configuration
public class StateMachineConfiguration extends EnumStateMachineConfigurerAdapter<Status, Event> {

    @Override
    public void configure(StateMachineStateConfigurer<Status, Event> states) throws Exception {
        states.withStates()
                .initial(Status.DRAFT)
                .states(EnumSet.allOf(Status.class));
    }

    @Override
    public void configure(StateMachineTransitionConfigurer<Status, Event> transitions) throws Exception {
        transitions
                .withExternal()
                .target(Status.INVITATION).source(Status.DRAFT)
                .event(Event.INVITED)
                .guard(new Guard())
                .action(new ActionInvited())
                .and()
                .withExternal()
                .target(Status.DECLINED).source(Status.INVITATION)
                .event(Event.DECLINED)
                .action(new ActionDeclined());
    }

    @Override
    public void configure(StateMachineConfigurationConfigurer<Status, Event> config) throws Exception {
        config.withConfiguration().autoStartup(true);
    }
}
and I have a model, for example Order.
The model is persisted in the DB. I load it from the DB and it now has status Order.status == INVITATION. I want to continue processing the model with the state machine, but a new state machine instance starts processing from the initial state DRAFT, while I need to continue from status INVITATION. In other words, I want to execute
stateMachine.sendEvent(MessageBuilder
.withPayload(Event.DECLINED)
.setHeader("orderId", order.id)
.build()
)
and have the action ActionDeclined() executed. I don't want to persist the state machine context in the DB; I want to set the state of the stateMachine to the state of my model at runtime. What is the right way to do that? Using the DefaultStateContext constructor, or is there another, nicer way?
One possible approach is to create the StateMachine on the fly and rehydrate it from the DB using the state of the Order.
In this case you need to do the following steps:
Reset the StateMachine in all regions
Load the Order status from the DB
Create a new DefaultStateMachineContext and populate it accordingly
Let's assume you have a build method, which returns new state machines for processing order events (using a StateMachineFactory), but for an existing order, it will rehydrate the state from the database.
StateMachine<Status, Event> build(long orderId) {
    return orderService.getOrder(orderId) // returns Optional
            .map(order -> {
                StateMachine<Status, Event> sm = stateMachineFactory.getStateMachine(Long.toString(orderId));
                sm.stop();
                rehydrateState(sm, sm.getExtendedState(), order.getStatus());
                sm.start();
                return sm;
            })
            .orElseGet(() -> createNewStateMachine(orderId));
}

void rehydrateState(StateMachine<Status, Event> newStateMachine, ExtendedState extendedState, Status orderStatus) {
    newStateMachine.getStateMachineAccessor().doWithAllRegions(sma ->
            sma.resetStateMachine(new DefaultStateMachineContext<>(orderStatus, null, null, extendedState)));
}
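With such a build method, sending the event to the rehydrated machine could look roughly like this (an illustrative sketch; the order variable with a getId() accessor is assumed, not part of the original answer):

// Rebuild the machine for an existing order; it is reset to the order's persisted status.
StateMachine<Status, Event> sm = build(order.getId());

// Fire the event; since the machine is now in INVITATION, ActionDeclined() runs on DECLINED.
sm.sendEvent(MessageBuilder
        .withPayload(Event.DECLINED)
        .setHeader("orderId", order.getId())
        .build());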
I started working with CompletableFuture in Spring Boot, and I'm seeing in some places that the usual repository methods return CompletableFuture<Entity> instead of Entity.
I don't know what is happening, but when I return instances of CompletableFuture from the repositories, the code runs perfectly. However, when I return entities, the code does not run asynchronously and always returns null.
Here is an example:
@Service
public class AsyncServiceImpl {

    /** .. Init repository instances .. **/

    @Async(AsyncConfiguration.TASK_EXECUTOR_SERVICE)
    public CompletableFuture<Token> getTokenByUser(Credential credential) {
        return userRepository.getUser(credential)
                .thenCompose(s -> tokenRepository.getToken(s));
    }
}

@Repository
public class UserRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_REPOSITORY)
    public CompletableFuture<User> getUser(Credential credentials) {
        return CompletableFuture.supplyAsync(() ->
                new User(credentials.getUsername())
        );
    }
}

@Repository
public class TokenRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_REPOSITORY)
    public CompletableFuture<Token> getToken(User user) {
        return CompletableFuture.supplyAsync(() ->
                new Token(user.getUserId())
        );
    }
}
The previous code runs perfectly but the following code doesn't run asynchronously and the result is always null.
@Service
public class AsyncServiceImpl {

    /** .. Init repository instances .. **/

    @Async(AsyncConfiguration.TASK_EXECUTOR_SERVICE)
    public CompletableFuture<Token> requestToken(Credential credential) {
        return CompletableFuture.supplyAsync(() -> userRepository.getUser(credential))
                .thenCompose(s ->
                        CompletableFuture.supplyAsync(() -> tokenRepository.getToken(s)));
    }
}

@Repository
public class UserRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_REPOSITORY)
    public User getUser(Credential credentials) {
        return new User(credentials.getUsername());
    }
}

@Repository
public class TokenRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_SERVICE)
    public Token getToken(User user) {
        return new Token(user.getUserId());
    }
}
Why doesn't this second code work?
As per the Spring @Async Javadoc:
the return type is constrained to either void or Future
and it is also further detailed in the reference documentation:
In the simplest case, the annotation may be applied to a void-returning method.
[…]
Even methods that return a value can be invoked asynchronously. However, such methods are required to have a Future typed return value. This still provides the benefit of asynchronous execution so that the caller can perform other tasks prior to calling get() on that Future.
In your second example, your @Async-annotated methods do not return a Future (or a ListenableFuture or CompletableFuture, which are also supported). However, Spring still has to run your method asynchronously. It can thus only behave as if your method had a void return type, and so it returns null.
As a side note, when you use @Async your method already runs asynchronously, so you shouldn't use CompletableFuture.supplyAsync() inside it. You should simply compute your result and return it, wrapped in CompletableFuture.completedFuture() if necessary. If your method only composes futures (like your service, which simply composes asynchronous repository results), then you probably don't need the @Async annotation at all. See also the example from the Getting Started guide.
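To make that concrete, here is a minimal sketch (mine, not from the original answer) of how the second example's UserRepository could be rewritten so that the @Async method returns a future, as Spring requires; the class, executor, and entity names are reused from the question:

@Repository
public class UserRepository {

    @Async(AsyncConfiguration.TASK_EXECUTOR_REPOSITORY)
    public CompletableFuture<User> getUser(Credential credentials) {
        // Spring already runs this method on the configured executor,
        // so just compute the value and wrap it in an already-completed future.
        return CompletableFuture.completedFuture(new User(credentials.getUsername()));
    }
}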
I have a method that pulls in a bunch of data. This has the potential to take a decent amount of time due to the large data set and the amount of computation required. The method that does this call will be used many times. The result list should return the same results each time. With that being said, I want to cache the results, so I only have to do that computation once. I'm supposed to use the CacheBuilder class. The script I have is essentially something like:
class CheckValidValues implements AValidValueInterface {

    private ADataSourceInterface dataSource;

    public CheckValidValues(ADataSourceInterface dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void validate(String value) {
        List<?> validValues = dataSource.getValidValues();
        if (!validValues.contains(value)) {
            // throw an exception
        }
    }
}
So I'm not even sure where I should be putting the caching code (i.e. in the CheckValidValues class or in the getValidValues() method of the dataSource). Also, I'm not entirely sure how you can add the code to one of those methods without instantiating the cache multiple times. Here's the route I'm trying to take, but I have no idea if it's correct. Adding the following above the List<?> validValues = dataSource.getValidValues() line:
LoadingCache<String, List<?>> validValuesCache = CacheBuilder.newBuilder()
        .expireAfterAccess(30, TimeUnit.SECONDS)
        .build(
                new CacheLoader<String, List<?>>() {
                    public List<?> load(@Nonnull String validValues) {
                        return dataSource.getValidValues();
                    }
                }
        );
Then later, I'd think I could get that value with:
validValuesCache.get("validValues");
What I think should happen is that it calls getValidValues() and stores the result in the cache. However, if this method is called multiple times then, as far as I can tell, it would create a new cache each time.
Any idea what I should do for this? I simply want to add the results of the getValidValues() method to cache so that it can be used in the next iteration without having to redo any computations.
You only want to cache a single value, the list of valid values. Use Guava's Suppliers.memoizeWithExpiration(Supplier delegate, long duration, TimeUnit unit).
Each valid value exists only once, so your List is essentially a Set. Back it with a HashSet (or a more efficient variant from Guava); that way contains() is a hash table lookup instead of a sequential search through the list.
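Putting both points together, a minimal sketch (an illustration only, reusing the interfaces from the question and assuming Guava is on the classpath; not code from the original answer):

import java.util.Set;
import java.util.concurrent.TimeUnit;

import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import com.google.common.collect.ImmutableSet;

class CheckValidValues implements AValidValueInterface {

    // Memoized snapshot of the valid values; recomputed at most every 30 seconds.
    // Created once per instance, so the cache is not rebuilt on every validate() call.
    private final Supplier<Set<?>> validValues;

    public CheckValidValues(ADataSourceInterface dataSource) {
        this.validValues = Suppliers.memoizeWithExpiration(
                () -> ImmutableSet.copyOf(dataSource.getValidValues()),
                30, TimeUnit.SECONDS);
    }

    @Override
    public void validate(String value) {
        if (!validValues.get().contains(value)) {
            // throw an exception, as in the original snippet
        }
    }
}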
We use Guava and Spring-Caching in a couple of projects where we defined the beans via Java configuration like this:
@Configuration
@EnableCaching
public class GuavaCacheConfig {

    ...

    @Bean(name="CacheEnabledService")
    public SomeService someService() {
        return new CacheableSomeService();
    }

    @Bean(name="guavaCacheManager")
    public CacheManager cacheManager() {
        // if different caching strategies should occur use this technique:
        // http://www.java-allandsundry.com/2014/10/spring-caching-abstraction-and-google.html
        GuavaCacheManager guavaCacheManager = new GuavaCacheManager();
        guavaCacheManager.setCacheBuilder(cacheBuilder());
        return guavaCacheManager;
    }

    @Bean(name = "expireAfterAccessCacheBuilder")
    public CacheBuilder<Object, Object> cacheBuilder() {
        return CacheBuilder.newBuilder()
                .recordStats()
                .expireAfterAccess(5, TimeUnit.SECONDS);
    }

    @Bean(name = "keyGenerator")
    public KeyGenerator keyGenerator() {
        return new CustomKeyGenerator();
    }

    ...
}
Note that the code above was taken from one of our integration tests.
The service whose return values should be cached is defined as depicted below:
@Component
@CacheConfig(cacheNames="someCache", keyGenerator=CustomKeyGenerator.NAME, cacheManager="guavaCacheManager")
public class CacheableService {

    public final static String CACHE_NAME = "someCache";

    ...

    @Cacheable
    public <E extends BaseEntity> E findEntity(String id) {
        ...
    }

    ...

    @CachePut
    public <E extends BaseEntity> ObjectId persist(E entity) {
        ...
    }

    ...
}
As Spring-Caching uses an AOP approach, when a @Cacheable-annotated method is invoked Spring first checks whether a previously stored return value is already available in the cache for that method (depending on the cache key; we use a custom key generator for this). If no value is available yet, Spring invokes the actual service method and stores the return value in the local cache, where it is available for subsequent calls.
@CachePut will always execute the service method and put the return value into the cache. This is useful if an existing cached value should be replaced by a new one, for example after an update.
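For illustration, a hypothetical caller (the cacheableService reference and the literal id are mine, not part of the original example) showing the effect with the 5-second expireAfterAccess configured above:

// First call executes findEntity and stores the result in the "someCache" cache.
BaseEntity first = cacheableService.findEntity("42");

// A second call with the same key within the 5-second expireAfterAccess window
// is answered from the cache; the method body is not executed again.
BaseEntity second = cacheableService.findEntity("42");

// @CachePut: persist always executes and puts its return value into the cache
// (under the key produced by the custom key generator).
cacheableService.persist(first);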