Flux into Mono List Object - project reactor - java

I'm using Project Reactor and I have the following issue:
I have a method that returns a Mono<CustomerResponse> containing a list of CustomerDto. Each customer has several attributes, one of which is a payment list, but this payment list is null.
I have another method that receives a client id and returns a Flux<PaymentDto> of payments for that client.
These are the models:
public class CustomerResponse {
    private List<CustomerDto> customers;
}

public class CustomerDto {
    private int id;
    private String fullname;
    private String documentNumber;
    private List<PaymentDto> payments;
}
These are the interfaces:

public interface CustomerService {
    Mono<CustomerResponse> customerSearch(CustomerRequest request);
}

public interface PaymentService {
    Flux<PaymentDto> getPayments(int clientId);
}
This is my method:

public Mono<CustomerResponse> getCustomer(CustomerRequest request) {
    return customerService.customerSearch(request)
        .map(resp -> resp.getCustomers())
        .flatMap(customerList -> {
            List<CustomerDto> newCustomerList = customerList.parallelStream()
                .map(customer -> {
                    Flux<PaymentDto> paymentFlux = paymentService.getPayments(customer.getId());
                    // Here: java.lang.IllegalStateException: block()/blockFirst()/blockLast()
                    customer.setPayments(paymentFlux.collectList().block());
                    return customer;
                })
                .collect(Collectors.toList());
            return Mono.just(new CustomerResponse(newCustomerList));
        });
}
I get the following exception:
java.lang.IllegalStateException: block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-http-nio-4
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:83) ~[reactor-core-3.3.6.RELEASE.jar:3.3.6.RELEASE]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
I would like to know if there is a non-blocking or more optimal way to do this.

You can refactor your code like this to avoid the blocking call:
public Mono<CustomerResponse> getCustomer(CustomerRequest request) {
    Flux<CustomerDto> customerDtoFluxEnriched = customerService.customerSearch(request)
        .map(CustomerResponse::getCustomers)
        .flatMapMany(Flux::fromIterable)
        .flatMap(customerDto -> {
            Flux<PaymentDto> paymentFlux = paymentService.getPayments(customerDto.getId());
            Mono<List<PaymentDto>> paymentListMono = paymentFlux.collectList();
            return paymentListMono.map(paymentList -> {
                customerDto.setPayments(paymentList);
                return customerDto;
            });
        });
    return customerDtoFluxEnriched.collectList().map(customerList -> {
        CustomerResponse customerResponse = new CustomerResponse();
        customerResponse.setCustomers(customerList);
        return customerResponse;
    });
}
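One caveat: flatMap subscribes to the inner payment lookups concurrently and emits customers in whatever order they complete, so the enriched list may not preserve the original customer order. If ordering matters, a minimal variation (a sketch assuming the same CustomerService and PaymentService interfaces as above) is flatMapSequential, which keeps the concurrency but emits in source order:

    Flux<CustomerDto> customerDtoFluxEnriched = customerService.customerSearch(request)
        .map(CustomerResponse::getCustomers)
        .flatMapMany(Flux::fromIterable)
        .flatMapSequential(customerDto ->
            paymentService.getPayments(customerDto.getId())
                .collectList()
                .map(paymentList -> {
                    customerDto.setPayments(paymentList);
                    return customerDto; // enriched customer, emitted in original order
                }));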

Related

Kafka throws an exception when trying to consume an event

I am building a Domain-Driven Design microservice web application in Java Spring Boot, and I have a problem where Kafka might help. I am new to Kafka, and what I'm trying to accomplish is simply to do something after an event occurs. The event is updating an order (changing it from isApproved = false to isApproved = true). After updating the order status, I publish to a topic that another package called pet-catalog listens to; when that package consumes the topic, it updates the Pet entity (from isAdopted = false to isAdopted = true).
This is what I have so far:
The domain event publisher (package com.ddd.sharedkernel.infra):

public interface DomainEventPublisher {
    void publish(DomainEvent event);
}
The domain event (package com.ddd.sharedkernel.domain.events):

@Getter
public class DomainEvent {
    private String topic;
    private Instant occurredOn;

    public DomainEvent(String topic) {
        this.occurredOn = Instant.now();
        this.topic = topic;
    }

    public String toJson() {
        ObjectMapper objectMapper = new ObjectMapper();
        String output = null;
        try {
            output = objectMapper.writeValueAsString(this);
        } catch (JsonProcessingException e) {
            // NOTE: the exception is swallowed, so toJson() silently returns null on failure
        }
        return output;
    }

    public String topic() {
        return topic;
    }

    public static <E extends DomainEvent> E fromJson(String json, Class<E> eventClass) throws JsonProcessingException {
        ObjectMapper objectMapper = new ObjectMapper();
        return objectMapper.readValue(json, eventClass);
    }
}
A class whose constructor is called when an order is approved (package com.ddd.sharedkernel.domain.events.orders):

@Getter
public class OrderApproved extends DomainEvent {
    private final String orderId;

    public OrderApproved(String orderId) {
        super(TopicHolder.TOPIC_ORDER_APPROVED);
        this.orderId = orderId;
    }
}
Topic holder (package com.ddd.sharedkernel.domain.config):

public class TopicHolder {
    public static final String TOPIC_ORDER_APPROVED = "order-approved";
}
Domain event publisher implementation (package com.ddd.ordermanagement.infra):

@Service
@AllArgsConstructor
public class DomainEventPublisherImpl implements DomainEventPublisher {
    private final KafkaTemplate<String, String> kafkaTemplate;

    @Override
    public void publish(DomainEvent event) {
        this.kafkaTemplate.send(event.topic(), event.toJson());
    }
}
Publishing an event after updating the order (package com.ddd.ordermanagement.service.impl):

@Override
@Transactional
public void approveOrder(OrderId orderId) {
    Order order = orderRepository.getById(orderId);
    order.approveOrder();
    // Now that we have approved the order, we delete all other orders made
    // for the same pet: the pet is adopted, so the remaining orders for it
    // are no longer needed.
    List<Order> orderList = orderRepository.findAll();
    for (Order orderToBeDeleted : orderList) {
        if (!orderToBeDeleted.isApproved() && orderToBeDeleted.getPetId().getId().equals(order.getPetId().getId())) {
            orderRepository.deleteById(orderToBeDeleted.getId());
        }
    }
    // After deleting the remaining orders, we update the adopted pet
    // (change its status from false to true). The pet ID is hard-coded here,
    // but the listener throws an exception even before it reads the ID.
    domainEventPublisher.publish(new OrderApproved("e49b4057-582b-42fa-beed-e3b9e6811cdc"));
    // The exception is thrown in the code below (PetEventListener)
}
And finally the event listener (package com.ddd.petcatalog.xport.events):

@Service
@AllArgsConstructor
public class PetEventListener {
    private final PetService petService;

    @KafkaListener(topics = TopicHolder.TOPIC_ORDER_APPROVED, groupId = "petCatalog")
    public void consumeOrderApproved(@Payload(required = false) String jsonMessage) {
        try {
            System.out.println("Topic consumed successfully"); // Prints
            OrderApproved event = DomainEvent.fromJson(jsonMessage, OrderApproved.class);
            System.out.println("Print after creating an event"); // Doesn't print
            petService.updatePet(event.getOrderId());
            System.out.println("event.getOrderId(): " + event.getOrderId()); // Doesn't print
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
This is the exception output:
Topic consumed successfully
java.lang.IllegalArgumentException: argument "content" is null
at com.fasterxml.jackson.databind.ObjectMapper._assertNotNull(ObjectMapper.java:4757)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3515)
at mk.ukim.finki.emt.sharedkernel.domain.events.DomainEvent.fromJson(DomainEvent.java:37)
at mk.ukim.finki.emt.petcatalog.xport.events.PetEventListener.consumeOrderApproved(PetEventListener.java:27)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:171)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:120)
at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:56)
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:347)
at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:92)
at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:53)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2323)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2304)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:2218)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:2143)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2025)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1707)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1274)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1266)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1161)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:831)
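The stack trace shows ObjectMapper.readValue being called with a null string, i.e. the record value reaching the listener is null (the signature explicitly allows this via @Payload(required = false)). Note that toJson() above returns null whenever writeValueAsString throws, because the exception is swallowed, so a failed serialization on the producer side would arrive here as exactly this null payload. A minimal defensive sketch (the same listener as above, with a hypothetical log message) that surfaces the problem before Jackson is involved:

    @KafkaListener(topics = TopicHolder.TOPIC_ORDER_APPROVED, groupId = "petCatalog")
    public void consumeOrderApproved(@Payload(required = false) String jsonMessage) {
        if (jsonMessage == null) {
            // Null record value: either the producer published null
            // (e.g. toJson() failed) or deserialization produced null.
            System.err.println("Null payload on topic " + TopicHolder.TOPIC_ORDER_APPROVED + ", skipping record");
            return;
        }
        try {
            OrderApproved event = DomainEvent.fromJson(jsonMessage, OrderApproved.class);
            petService.updatePet(event.getOrderId());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }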

Resilience4j context propagator not able to propagate thread-local values

I am trying to migrate my circuit breaker code from Hystrix to Resilience4j. The communication is between two applications: one is an artifact containing all the Resilience4j config in the Java code itself, and the other, a microservice, uses it directly.
There's a RequestId that is generated in the microservice and propagated to the artifact context, where it gets printed in the logs. With Hystrix this worked perfectly fine, but ever since I moved to Resilience4j I am getting null for the RequestId.
Below is my config for the bulkhead and context propagator:
ThreadPoolBulkheadConfig bulkheadConfig = ThreadPoolBulkheadConfig.custom()
    .maxThreadPoolSize(maxThreadPoolSize)
    .coreThreadPoolSize(coreThreadPoolSize)
    .queueCapacity(queueCapacity)
    .contextPropagator(new DummyContextPropagator())
    .build();

// Bulkhead registry
ThreadPoolBulkheadRegistry bulkheadRegistry = ThreadPoolBulkheadRegistry.of(bulkheadConfig);

// Create bulkhead
ThreadPoolBulkhead bulkhead = bulkheadRegistry.bulkhead(name, bulkheadConfig);
Dummy context propagator:

public class DummyContextPropagator implements ContextPropagator {
    private static final Logger log = LoggerFactory.getLogger(DummyContextPropagator.class);

    @Override
    public Supplier<Optional<Object>> retrieve() {
        return DummyContextHolder::get;
    }

    @Override
    public Consumer<Optional<Object>> copy() {
        return (t) -> t.ifPresent(e -> {
            DummyContextHolder.clear();
            DummyContextHolder.put(e);
        });
    }

    @Override
    public Consumer<Optional<Object>> clear() {
        return (t) -> DummyContextHolder.clear();
    }

    public static class DummyContextHolder {
        private static final ThreadLocal<Object> threadLocal = new ThreadLocal<>();

        private DummyContextHolder() {
        }

        public static void put(Object context) {
            if (threadLocal.get() != null) {
                clear();
            }
            threadLocal.set(context);
        }

        public static void clear() {
            if (threadLocal.get() != null) {
                threadLocal.set(null);
                threadLocal.remove();
            }
        }

        public static Optional<Object> get() {
            return Optional.ofNullable(threadLocal.get());
        }
    }
}
However, nothing seems to work, and I still cannot get the RequestId.
Am I doing everything right, or is there another way to do this?
I think you want to read values from the parent thread's ThreadLocal while running in a sub-thread. In Hystrix this works because it uses a command model to decorate the callable task.
In Resilience4j I think you can fix it like this:
@Resource
DispatcherServlet dispatcherServlet;

@PostConstruct
public void changeThreadLocalModel() {
    dispatcherServlet.setThreadContextInheritable(true);
}
I found that my previous answer may lead to problems: dispatcherServlet.setThreadContextInheritable(true) may pollute your custom thread pool's ThreadLocal map.
So here is my final solution, which only works with Resilience4j:
@Resource
Resilience4jBulkheadProvider resilience4jBulkheadProvider;

@PostConstruct
public void concurrentThreadContextStrategy() {
    ThreadPoolBulkheadConfig threadPoolBulkheadConfig = ThreadPoolBulkheadConfig.custom()
        .contextPropagator(new CustomInheritContextPropagator())
        .build();
    resilience4jBulkheadProvider.configureDefault(id -> new Resilience4jBulkheadConfigurationBuilder()
        .bulkheadConfig(BulkheadConfig.ofDefaults())
        .threadPoolBulkheadConfig(threadPoolBulkheadConfig)
        .build());
}
private static class CustomInheritContextPropagator implements ContextPropagator<RequestAttributes> {

    @Override
    public Supplier<Optional<RequestAttributes>> retrieve() {
        // Take the RequestAttributes reference from the ThreadLocal.
        // This method is called by the web-container thread (Tomcat, Jetty,
        // or Undertow, depending on what you use).
        return () -> Optional.ofNullable(RequestContextHolder.getRequestAttributes());
    }

    @Override
    public Consumer<Optional<RequestAttributes>> copy() {
        // Load the RequestAttributes into the real calling thread.
        // This method is called by the Resilience4j bulkhead thread.
        return requestAttributes -> requestAttributes.ifPresent(context -> {
            RequestContextHolder.resetRequestAttributes();
            RequestContextHolder.setRequestAttributes(context);
        });
    }

    @Override
    public Consumer<Optional<RequestAttributes>> clear() {
        // Clean up the request context at the end.
        // This method is called by the Resilience4j bulkhead thread.
        return requestAttributes -> RequestContextHolder.resetRequestAttributes();
    }
}
I got the same problem with Spring Boot 2.5 and Spring Cloud 2020.0.6, and I solved it with an implementation of ContextPropagator:
public class SleuthPropagator implements ContextPropagator<TraceContext> {

    ThreadLocal<ScopedSpan> scopedSpanThreadLocal = new ThreadLocal<>();

    @Override
    public Supplier<Optional<TraceContext>> retrieve() {
        return this::getCurrentcontext;
    }

    @Override
    public Consumer<Optional<TraceContext>> copy() {
        return c -> {
            if (!c.isPresent()) {
                return;
            }
            TraceContext traceContext = c.get();
            ScopedSpan resilience4jSpan = getTracer()
                .map(t -> t.startScopedSpanWithParent("Resilience4j", traceContext))
                .orElse(null);
            scopedSpanThreadLocal.set(resilience4jSpan);
        };
    }

    @Override
    public Consumer<Optional<TraceContext>> clear() {
        return t -> {
            try {
                ScopedSpan resilience4jSpan = scopedSpanThreadLocal.get();
                if (resilience4jSpan != null) {
                    resilience4jSpan.finish();
                }
            } finally {
                scopedSpanThreadLocal.remove();
            }
        };
    }

    private static Optional<Tracer> getTracer() {
        return Optional.ofNullable(Tracing.current())
            .map(Tracing::tracer);
    }

    private Optional<TraceContext> getCurrentcontext() {
        return getTracer()
            .map(Tracer::currentSpan)
            .map(Span::context);
    }
}
And use the propagator by adding this to your application.properties:
resilience4j.thread-pool-bulkhead.instances.YOUR_BULKHEAD_CONFIG.context-propagators=com.your.package.SleuthPropagator
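If you prefer wiring it in code rather than properties, the equivalent (a sketch using the same ThreadPoolBulkheadConfig builder as in the question above) would be:

    ThreadPoolBulkheadConfig config = ThreadPoolBulkheadConfig.custom()
        .contextPropagator(new SleuthPropagator()) // propagate the Sleuth trace context
        .build();
    ThreadPoolBulkheadRegistry registry = ThreadPoolBulkheadRegistry.of(config);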

Convert sequential Monos to Flux

I have a web service where I want to retrieve the elements of a tree up to the root node.
I have a WebFlux interface that returns a Mono on each call:
public interface WebService {
    Mono<Node> fetchNode(String nodeId);
}

public class Node {
    public String id;
    public String parentId; // null if this is the root node
}
Let's assume there is a tree:

      1
    /   \
   2     3
        / \
       4   5
I want to create the following method:

public interface ParentNodeResolver {
    Flux<Node> getNodeChain(String nodeId);
}
which on getNodeChain("5") would give me a Flux with the nodes 5, 3, and 1, and then complete.
Unfortunately, I don't quite understand how I can combine Monos sequentially without blocking. With Flux.generate(), I think I would need to block on each Mono to check whether it has a next element. Other methods I've found seem to combine only a fixed number of Monos, not in this recursive fashion.
Here is sample code that simulates the network request with some delay:
public class MonoChaining {

    ExecutorService executorService = Executors.newFixedThreadPool(5);

    @Test
    void name() {
        var nodeChain = generateNodeChainFlux("5")
            .collectList()
            .block();
        assertThat(nodeChain).isNotEmpty();
    }

    private Flux<Node> generateNodeChainFlux(String nodeId) {
        // TODO
        return Flux.empty();
    }

    public Mono<Node> getSingleNode(String nodeId) {
        var future = CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(2000); // Simulate delay
                if ("5".equals(nodeId)) {
                    return new Node("5", "3");
                } else if ("3".equals(nodeId)) {
                    return new Node("3", "1");
                } else if ("1".equals(nodeId)) {
                    return new Node("1", null);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return null;
        }, executorService);
        return Mono.fromFuture(future);
    }

    public static class Node {
        public String id;
        public String parentId;

        public Node(String id, String parentId) {
            this.id = id;
            this.parentId = parentId;
        }
    }
}
Is there a way to achieve this?
Thanks!
The operator you are looking for is Mono#expand. It is used for recursively expanding sequences.
In your case:
private Flux<Node> generateNodeChainFlux(String nodeId) {
    return getSingleNode(nodeId).expand(node -> getSingleNode(node.parentId));
}
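One refinement worth considering: when the root is reached, node.parentId is null, and the expansion above would issue one more lookup with a null id (the sample service happens to return nothing there, but a real web service would more likely error). A sketch that stops the expansion explicitly at the root:

    private Flux<Node> generateNodeChainFlux(String nodeId) {
        return getSingleNode(nodeId)
            .expand(node -> node.parentId == null
                ? Mono.empty()                    // root reached: stop expanding
                : getSingleNode(node.parentId));
    }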
Using recursion with flatMap to get the parent node, and concat to append the current node to the resulting Flux, might also work. Try the code below:
public Flux<Node> getNodeChain(String nodeId) {
    return fetchNode(nodeId).flatMapMany(node -> {
        if (node.parentId != null) {
            Flux<Node> nodeChain = getNodeChain(node.parentId);
            return Flux.concat(Flux.just(node), nodeChain);
        }
        return Flux.just(node);
    });
}
Here I'm using flatMapMany to convert Mono to Flux.

Reactor Mono - execute parallel tasks

I am new to the Reactor framework and trying to use it in one of our existing implementations. LocationProfileService and InventoryService both return a Mono, are executed in parallel, and have no dependency on each other (from the MainService). Within LocationProfileService there are four queries issued, and the last two queries depend on the first one.
What is a better way to write this? I see the calls getting executed sequentially, while some of them should be executed in parallel. What is the right way to do it?
public class LocationProfileService {

    static final Cache<String, String> customerIdCache; // define cache

    @Override
    public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
        // These 2 are not interdependent and can be executed immediately
        Mono<String> customerAccountMono = getCustomerAccount(customerId, location)
            .subscribeOn(Schedulers.parallel())
            .switchIfEmpty(Mono.error(new CustomerNotFoundException(location, customerId)))
            .log();
        Mono<LocationProfile> locationProfileMono = Mono.fromFuture(/* location query */)
            .subscribeOn(Schedulers.parallel()).log();
        // Should block be called, or is there a better way?
        String custAccount = customerAccountMono.block(); // The value from this is needed for the next 2 calls
        Mono<Customer> customerMono = Mono.fromFuture(/* query uses custAccount from earlier step */)
            .subscribeOn(Schedulers.parallel()).log();
        Mono<Result<LocationPricing>> locationPricingMono = Mono.fromFuture(/* query uses custAccount from earlier step */)
            .subscribeOn(Schedulers.parallel()).log();
        return Mono.zip(locationProfileMono, customerMono, locationPricingMono).flatMap(tuple -> {
            LocationProfileInfo locationProfileInfo = new LocationProfileInfo();
            // populate values from tuple
            return Mono.just(locationProfileInfo);
        });
    }

    private Mono<String> getCustomerAccount(String conversationId, String customerId, String location) {
        return CacheMono.lookup((Map) customerIdCache.asMap(), customerId)
            .onCacheMissResume(Mono.fromFuture(/* query */)
                .subscribeOn(Schedulers.parallel())
                .map(x -> x.getAccountNumber()));
    }
}
public class InventoryService {

    @Override
    public Mono<InventoryInfo> getInventoryInfo(String inventoryId) {
        Mono<Inventory> inventoryMono = Mono.fromFuture(/* inventory query */)
            .subscribeOn(Schedulers.parallel()).log();
        Mono<List<InventorySale>> isMono = Mono.fromFuture(/* inventory sale query */)
            .subscribeOn(Schedulers.parallel()).log();
        return Mono.zip(inventoryMono, isMono).flatMap(tuple -> {
            InventoryInfo inventoryInfo = new InventoryInfo();
            // populate values from tuple
            return Mono.just(inventoryInfo);
        });
    }
}
public class MainService {

    @Autowired
    LocationProfileService locationProfileService;

    @Autowired
    InventoryService inventoryService;

    public void mainService(String customerId, String location, String inventoryId) {
        Mono<LocationProfileInfo> locationProfileMono = locationProfileService.getProfileInfoByLocationAndCustomer(....);
        Mono<InventoryInfo> inventoryMono = inventoryService.getInventoryInfo(....);
        // Is using block fine, or is there a better way?
        Mono.zip(locationProfileMono, inventoryMono).subscribeOn(Schedulers.parallel()).block();
    }
}
You don't need to block in order to pass that parameter along; your code is very close to the solution. I wrote the code using the class names that you provided. Just replace all the Mono.just(...) calls with calls to the correct service.
public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
    Mono<String> customerAccountMono = Mono.just("customerAccount");
    Mono<LocationProfile> locationProfileMono = Mono.just(new LocationProfile());

    return Mono.zip(customerAccountMono, locationProfileMono)
        .flatMap(tuple -> {
            Mono<Customer> customerMono = Mono.just(new Customer(tuple.getT1()));
            Mono<Result<LocationPricing>> result = Mono.just(new Result<LocationPricing>());
            Mono<LocationProfile> locationProfile = Mono.just(tuple.getT2());
            return Mono.zip(customerMono, result, locationProfile);
        })
        .map(LocationProfileInfo::new);
}

public static class LocationProfileInfo {
    public LocationProfileInfo(Tuple3<Customer, Result<LocationPricing>, LocationProfile> tuple) {
        // do whatever
    }
}

public static class LocationProfile {}

private static class Customer {
    public Customer(String customerAccount) {
    }
}

private static class Result<T> {}

private static class LocationPricing {}
Please remember that the first zip is not necessary; I wrote it that way to match your solution. I would solve the problem a little differently, which would be clearer:
public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
    return Mono.just("customerAccount") // call the service
        .flatMap(customerAccount -> {
            // declare the call to get the customer
            Mono<Customer> customerMono = Mono.just(new Customer(customerAccount));
            // declare the call to get the location pricing
            Mono<Result<LocationPricing>> result = Mono.just(new Result<LocationPricing>());
            // declare the call to get the location profile
            Mono<LocationProfile> locationProfileMono = Mono.just(new LocationProfile());
            // in the zip call, all the services are actually executed
            return Mono.zip(customerMono, result, locationProfileMono);
        })
        .map(LocationProfileInfo::new);
}
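As for the block() in MainService: in a fully reactive pipeline you would normally return the composed Mono instead of blocking, and only subscribe (or block) at the outermost edge of the application. A sketch, assuming the same service signatures as in the question:

    public Mono<Tuple2<LocationProfileInfo, InventoryInfo>> mainService(String customerId, String location, String inventoryId) {
        Mono<LocationProfileInfo> locationProfileMono =
            locationProfileService.getProfileInfoByLocationAndCustomer(customerId, location);
        Mono<InventoryInfo> inventoryMono = inventoryService.getInventoryInfo(inventoryId);
        // zip subscribes to both Monos, so the two independent calls run concurrently
        return Mono.zip(locationProfileMono, inventoryMono);
    }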

RxJava onErrorReturn: returning a different type

An Observable is tied to a type. When onError fires, I don't want to return the same type but a different object, for example a Response object with status = 400. How can I achieve this?
public class Test {

    @Autowired
    private Server server;

    public Response getResponse(String id) {
        Observable<Person> personObservable = server.get(id);
        ExecutorService executorService = Executors.newFixedThreadPool(100);
        List<Person> persons = new ArrayList<Person>();
        personObservable.onErrorReturn(new Func1<Throwable, Person>() {
            @Override
            public Person call(Throwable throwable) {
                // I would like to return an HttpResponseObject taking the message
                // from the throwable's error information. How do I do it?
                // How would I use transform() in this case?
                return null;
            }
        }).subscribeOn(Schedulers.from(executorService)).subscribe(new Action1<Person>() {
            // If I use subscribe(), will it not be async?
            // I think subscribe still runs on the main thread, so is this
            // use of subscribeOn fine?
            @Override
            public void call(Person person) {
                // Is it fine to use the list outside the observable?
                persons.add(person);
            }
        });
        Response r = new Response();
        r.addPersons(persons);
        return r;
    }
}
Use onErrorResumeNext:

Observable<Person> personObservable = ...;
return personObservable
    .toList()
    .map(persons -> new Response(persons))
    .onErrorResumeNext(error -> Observable.just(new Response(error.getMessage())))
    .toBlocking().single();
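Equivalently, once the stream has been mapped to Response, onErrorReturn itself works, because the fallback is now the same type as the stream (this assumes the same hypothetical Response constructors as the answer above):

    return personObservable
        .toList()
        .map(persons -> new Response(persons))
        .onErrorReturn(error -> new Response(error.getMessage())) // fallback is also a Response
        .toBlocking().single();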
