Aggregate finished threads and send the response after a timeout in RxJava

I have a use case where I need to aggregate the responses from multiple Observable objects once they finish, and return the combined result to the client. My question is how to achieve this using RxJava. I have written the code snippet below, but the issue is that it doesn't return anything after the timeout.
Observable<AggregateResponse> aggregateResponse = Observable
        .zip(callServiceA(endpoint), callServiceB(endpoint), callServiceC(endpoint),
            (Mashup resultA, Mashup resultB, Mashup resultC) -> {
                AggregateResponse result = new AggregateResponse();
                result.setResult(resultA.getName() + " " + resultB.getName() + " " + resultC.getName());
                return result;
            })
        .timeout(5, TimeUnit.SECONDS);
Subscriber:
aggregateResponse.subscribe(new Subscriber<AggregateResponse>() {
    @Override
    public void onCompleted() {
    }

    @Override
    public void onError(Throwable throwable) {
        // The timeout executes this rather than aggregating the finished tasks
        System.out.println(throwable.getMessage());
        System.out.println(throwable.getClass());
    }

    @Override
    public void onNext(AggregateResponse response) {
        asyncResponse.resume(response);
    }
});

You need to put the timeout operator on each Observable. zip waits for all Observables to emit a value before emitting a result, so if one of them takes longer while the others have already emitted, the timeout will cut down the stream (with onError) before the zipped Observable has a chance to emit.
What you should do, assuming you want to ignore timed-out sources while keeping the rest, is add a timeout operator to each Observable, and also add error handling such as onErrorReturn to each one. The error handler can return some kind of 'empty' result (you can't use null in RxJava 2), and when you aggregate the results you ignore those empty results:
Observable<AggregateResponse> aggregateResponse = Observable
        .zip(callServiceA(endpoint)
                .timeout(5, TimeUnit.SECONDS)
                .onErrorReturn(throwable -> new Mashup()),
             callServiceB(endpoint)
                .timeout(5, TimeUnit.SECONDS)
                .onErrorReturn(throwable -> new Mashup()),
             callServiceC(endpoint)
                .timeout(5, TimeUnit.SECONDS)
                .onErrorReturn(throwable -> new Mashup()),
             (Mashup resultA, Mashup resultB, Mashup resultC) -> {
                 AggregateResponse result = new AggregateResponse();
                 result.setResult(resultA.getName() + " " + resultB.getName() + " " + resultC.getName());
                 return result;
             });
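Note that the combiner above still concatenates names from the placeholder Mashup objects. A minimal sketch of a combiner that skips empty placeholders, using plain Java stand-ins for the Mashup and AggregateResponse classes from the question (only getName()/setResult() are assumed):

```java
// Sketch of an aggregation step that skips 'empty' placeholder results.
// Mashup and AggregateResponse are minimal stand-ins for the classes in the
// question; a null name marks the placeholder produced by onErrorReturn.
import java.util.ArrayList;
import java.util.List;

public class AggregateSketch {
    static class Mashup {
        private final String name;
        Mashup() { this.name = null; }           // 'empty' placeholder from onErrorReturn
        Mashup(String name) { this.name = name; }
        String getName() { return name; }
    }

    static class AggregateResponse {
        private String result;
        void setResult(String result) { this.result = result; }
        String getResult() { return result; }
    }

    // Combiner that ignores placeholders instead of concatenating nulls.
    static AggregateResponse aggregate(Mashup... results) {
        List<String> names = new ArrayList<>();
        for (Mashup m : results) {
            if (m.getName() != null) {           // skip timed-out services
                names.add(m.getName());
            }
        }
        AggregateResponse response = new AggregateResponse();
        response.setResult(String.join(" ", names));
        return response;
    }

    public static void main(String[] args) {
        // Service B timed out and was replaced by an empty Mashup.
        AggregateResponse r = aggregate(new Mashup("a"), new Mashup(), new Mashup("c"));
        System.out.println(r.getResult()); // prints "a c"
    }
}
```

The same logic can be dropped into the zip lambda in place of the plain string concatenation.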

Related

Reactor Sink that emits only 1 event at a time?

I am playing with replaying Reactor Sinks, trying to achieve a mix of a unicast and a replay processor. I would like it to emit to only one subscriber at a time (UnicastProcessor), but also to emit a default value on subscribe (ReplayProcessor). Here is something similar to the real case:
Flux<Boolean> monoC = Sinks.many().replay().latestOrDefault(true)
        .asFlux()
        .doOnNext(integer -> System.out.println(new Date() + " - " + Thread.currentThread().getName() + " emiting next"));
for (int i = 0; i < 5; i++) {
    new Thread(() -> {
        monoC.flatMap(unused ->
            webClientBuilder.build()
                .get()
                .uri("https://www.google.com")
                .retrieve()
                .toEntityFlux(String.class)
                .doOnSuccess(stringResponseEntity -> {
                    System.out.println(new Date() + " - " + Thread.currentThread().getName() + " finished processing");
                })
        ).subscribe();
    }).start();
}
That is printing:
emiting next
...
emiting next
finished processing
...
finished processing
Instead, I would like it to print:
emiting next
finished processing
...
emiting next
finished processing
Update, some more clarifications on the real case scenario:
The real case scenario is: I have a Spring WebFlux application that acts like a relay; it receives a request on a specific endpoint A and relays it to another microservice B. This microservice can reply with a 429 if I go too fast, along with a header saying how long I have to wait before retrying. The retrying part I have already achieved with a .retry operator and a Mono.delay, but in the meantime I can receive another request on endpoint A, which has to be blocked until the Mono.delay finishes.
I am trying to achieve this with a replay Sink, so that after receiving a 429 I emit a false to the sink, and after the Mono.delay is over it emits a true; if in the meantime I receive any further request on A, it can filter out all the falses and wait for a true to be emitted.
The problem I have on top of that is that when I receive too many requests to relay on A, microservice B starts responding slowly and gets overloaded. Therefore, I would like to limit the rate at which the Sink emits. To be precise, I would like the publisher to emit a value, but not emit any more until the subscriber hits onComplete.
If I understood your issue correctly, you want the requests to B to be processed sequentially. In that case you should have a look at https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#flatMap-java.util.function.Function-int-
public final <V> Flux<V> flatMap(Function<? super T, ? extends Publisher<? extends V>> mapper, int concurrency)
I think your case should look like
// sinks should be a global variable in your controller, initialized in @PostConstruct
var sinks = Sinks
        // unsafe is required for multithreading
        .unsafe()
        .many()
        .replay()
        .latest();
sinks.asFlux()
        .doOnNext(it -> System.out.printf("%s is emitting %s\n", Thread.currentThread().getName(), it))
        .flatMap(counter -> {
            return webClientBuilder.build()
                    .get()
                    .uri("https://www.google.com")
                    .retrieve()
                    .toEntityFlux(String.class)
                    .doOnSuccess(stringResponseEntity -> {
                        System.out.println(counter + " " + new Date() + " - " + Thread.currentThread().getName() + " finished processing with " + stringResponseEntity.getStatusCode());
                    })
                    .then(Mono.just(counter));
        // concurrency = 1 causes the flatMap to be handled only once in parallel
        }, 1)
        .doOnError(Throwable::printStackTrace)
        // this subscription also must be done in @PostConstruct
        .subscribe(counter -> System.out.printf("%s completed in %s\n", counter, Thread.currentThread().getName()));
// and this is your endpoint method
for (int i = 0; i < 5; i++) {
    int counter = i;
    new Thread(() -> {
        var result = sinks.tryEmitNext(counter);
        if (result.isFailure()) {
            // maybe in that case you should retry
            System.out.printf("%s emitted %s with fail: %s\n", Thread.currentThread().getName(), counter, result);
        } else {
            System.out.printf("%s successfully emitted %s\n", Thread.currentThread().getName(), counter);
        }
    }).start();
}
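The effect of flatMap(mapper, 1) — start the next unit of work only after the previous one completes — can be illustrated outside Reactor with a single-threaded executor. This is a plain-JDK analogy of the concurrency = 1 behaviour, not the Reactor API:

```java
// Plain-JDK analogy of flatMap(..., concurrency = 1): tasks are submitted from
// many threads, but executed strictly one at a time, each finishing before the
// next starts.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SequentialDispatch {
    static List<String> run() throws InterruptedException {
        ExecutorService sequential = Executors.newSingleThreadExecutor();
        List<String> log = new CopyOnWriteArrayList<>();

        Thread[] producers = new Thread[5];
        for (int i = 0; i < 5; i++) {
            int counter = i;
            // Many producer threads may "emit" concurrently...
            producers[i] = new Thread(() -> sequential.submit(() -> {
                // ...but this block never overlaps with itself.
                log.add("start " + counter);
                log.add("end " + counter);
            }));
            producers[i].start();
        }
        for (Thread t : producers) t.join();
        sequential.shutdown();
        sequential.awaitTermination(5, TimeUnit.SECONDS);
        return log;
    }

    public static void main(String[] args) throws InterruptedException {
        // Every "start N" is immediately followed by its matching "end N".
        System.out.println(run());
    }
}
```

The order of the counters is arbitrary (whichever producer wins the race), but each start/end pair stays adjacent, which is the "emit, then wait for completion" behaviour the question asks for.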

RxJava approach to get the last Observable result from each thread

I'm thinking about how to use RxJava for the scenario described below.
I have a List<Object>; each object will be sent to k8s and its status checked until the response returns true, so my polling action is this:
private Observable<Boolean> startPolling(String content) {
    log.info("start polling " + content);
    return Observable.interval(2, TimeUnit.SECONDS)
            .take(3)
            .observeOn(Schedulers.newThread())
            .flatMap(aLong -> Observable.just(new CheckSvcStatus().check(content)))
            .takeUntil(checkResult -> checkResult)
            .timeout(3000L, TimeUnit.MILLISECONDS, Observable.just(false));
}
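The intent of the chain above (check at most 3 times, stop early once the check returns true, fall back to false otherwise) can be sketched in plain Java. The BooleanSupplier here is a hypothetical stand-in for CheckSvcStatus.check(content):

```java
// Plain-Java sketch of the polling logic: poll up to maxAttempts times,
// stop early when the check succeeds, and report false otherwise.
// The BooleanSupplier stands in for CheckSvcStatus.check(content).
import java.util.function.BooleanSupplier;

public class PollUntilTrue {
    static boolean poll(BooleanSupplier check, int maxAttempts, long intervalMillis)
            throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (check.getAsBoolean()) {
                return true;               // takeUntil(checkResult -> checkResult)
            }
            Thread.sleep(intervalMillis);  // Observable.interval(...)
        }
        return false;                      // fallback, like Observable.just(false)
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Succeeds on the third check.
        boolean ok = poll(() -> ++calls[0] >= 3, 5, 10L);
        System.out.println(ok + " after " + calls[0] + " checks"); // true after 3 checks
    }
}
```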
Function for the send action:
Observable<Compo> sentYamlAndGet() {
    log.info("sent yaml");
    sentYaml();
    return Observable.just(content);
}
I try to use forEach to get each object's status, like this:
public void rxInstall() throws JsonProcessingException {
    List<Boolean> observables = Lists.newArrayList();
    Observable.from(list)
            .subscribeOn(Schedulers.newThread())
            .concatMap(s -> sentYamlAndGet())
            .timeout(3000L, TimeUnit.MILLISECONDS)
            .subscribe();
    Observable.from(list).forEach(s ->
            observables.add(Observable.just(s)
                    .flatMap(this::startPolling)
                    .toBlocking()
                    .last()));
    System.out.println(new ObjectMapper().writeValueAsString(observables));
}
The objects of the output list are: {"o1","o2","o3","o4","o5"}
The last status of each object, which is what I want, is: [false,true,false,false,true].
The style above is not very 'ReactX'; the status checks of the objects do not affect each other.
How can I get rid of the forEach? I tried toIterable() and toList() but failed.
Observable.from(list)
        .concatMap(s -> sentYamlAndGet())
        .concatMap(this::startPolling)
        ...
I wanted to know whether it's good practice to do that, and what the best way to do it would be.
Thanks in advance.
PS: currently I'm using RxJava 1 (<version>1.2.0</version>) but can change to 2 (´▽`)ノ

Vert.x future handler setting

Please have a look on following piece of code located inside class extending AbstractVerticle:
@Override
public void start(Future<Void> serverStartFuture) throws Exception {
    log.info("Deploying " + this.getClass().toString() + " verticle...");
    // TODO: Handler is not being called.
    serverStartFuture.setHandler(event -> {
        if (event.succeeded()) {
            log.info("Deploying " + this.getClass().toString() + " verticle SUCCESS");
        } else if (event.failed()) {
            log.error("Deploying " + this.getClass().toString() + " verticle FAIL:");
            log.error(event.cause());
        }
    });
    /* To follow the future compose pattern later */
    Future<Void> initSteps = this.initHttpServ();
    initSteps.setHandler((AsyncResult<Void> asyncResult) -> {
        if (asyncResult.succeeded()) {
            serverStartFuture.complete();
        } else if (asyncResult.failed()) {
            serverStartFuture.fail(asyncResult.cause());
        }
    });
}
Assume that initHttpServ always returns a completed future:
private Future<Void> initHttpServ() {
    Future<Void> httpServerFuture = Future.future();
    httpServerFuture.complete();
    return httpServerFuture;
}
Why is serverStartFuture's handler never called in my case?
I understand the concept this way:
Create future f
Set f's handler
Forget about it
Somewhere else in the code, set f to complete/fail
After f's result is set, the handler is called
But my piece of code seems to negate this approach.
Am I doing something wrong?
You're not supposed to set the serverStartFuture handler. It is set by Vert.x when the verticle is deployed. You're supposed to either complete the future when your verticle starts successfully, or fail otherwise.
See Asynchronous Verticle start and stop in the Vert.x core documentation.
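The mental model in the question (create a future, attach a handler, complete it later, the handler fires) is correct in general; it just doesn't apply to serverStartFuture, whose handler Vert.x owns. The general pattern can be shown with the JDK's CompletableFuture, used here only as an analogy for Vert.x's Future:

```java
// JDK CompletableFuture analogy of the future/handler pattern described in the
// question: attach a handler, complete later, handler fires. This is not the
// Vert.x API, just the same idea in plain Java.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

public class FutureHandlerDemo {
    public static void main(String[] args) {
        CompletableFuture<Void> startFuture = new CompletableFuture<>();
        AtomicReference<String> outcome = new AtomicReference<>("not called");

        // 1. Set the handler and "forget about it".
        startFuture.whenComplete((v, err) ->
                outcome.set(err == null ? "SUCCESS" : "FAIL: " + err.getMessage()));

        // 2. Somewhere else, complete the future; the handler fires now.
        startFuture.complete(null);

        System.out.println(outcome.get()); // prints "SUCCESS"
    }
}
```

In the verticle, the equivalent of step 2 is calling serverStartFuture.complete() or fail(); Vert.x itself plays the role of step 1.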

How to create a TCP receiver using Akka Java API that doesn't respond

I am using this example as my starting point.
In that example, server responds with the modified text but in my case, server doesn't need to respond.
Looking at other similar questions, I am aware that I can either pass an empty ByteString or use filter(p -> false) to not send anything back. However, in that case the problem is that my whenComplete block doesn't get executed, i.e. the exception gets swallowed. Is there a way to avoid this? Help appreciated!
connections.runForeach(connection -> {
    System.out.println("New connection from: " + connection.remoteAddress());
    final Flow<ByteString, ByteString, NotUsed> echo = Flow.of(ByteString.class)
            .via(Framing.delimiter(ByteString.fromString("\n"), 256, FramingTruncation.DISALLOW))
            .map(ByteString::utf8String)
            .map(s -> s + "!!!\n")
            .map(ByteString::fromString);
    connection.handleWith(echo, mat);
}, mat).whenComplete((done, throwable) -> {
    // exception handling
});
Your analysis is correct: right now you are reacting to the server shutting down rather than to each connection.
Reacting to individual connections completing can be done in the flow passed to the connection, something like this:
final Tcp tcp = Tcp.get(system);
tcp.bind("127.0.0.1", 6789).runForeach((connection) -> {
    final Flow<ByteString, ByteString, NotUsed> echo = Flow.of(ByteString.class)
            .via(Framing.delimiter(ByteString.fromString("\n"), 256, FramingTruncation.DISALLOW))
            .map(ByteString::utf8String)
            .map(s -> s + "!!!\n")
            .map(ByteString::fromString)
            .watchTermination((notUsed, completionStage) -> {
                completionStage.whenComplete((done, exception) -> {
                    System.out.println("Connection from " + connection.remoteAddress() + " completed");
                });
                return notUsed;
            });
    connection.handleWith(echo, materializer);
}, materializer);

Using a Commonj Work Manager to send Asynchronous HTTP calls

I switched from making sequential HTTP calls to 4 REST services to making 4 simultaneous calls using a commonj4 work manager task executor. I'm using WebLogic 12c. This new code works in my development environment, but in our test environment the results map is not populated with all of the results under load conditions, and occasionally while not under load. The logging suggests that each work item did receive its results back, though. Could this be a problem with the ConcurrentHashMap?

In this example from IBM, they use their own version of Work and there's a getData() method, although it doesn't look like that method really exists in their class definition. I had followed a different example that just used the Work class, but it didn't demonstrate how to get the data out of those threads into the main thread. Should I be using execute() instead of schedule()? The API doesn't appear to be well documented. The stuck thread timeout is sufficiently high. component.processInbound() actually contains the code for the HTTP call, but the problem isn't there, because I can switch back to the synchronous version of the class below and not have any issues.
http://publib.boulder.ibm.com/infocenter/wsdoc400/v6r0/index.jsp?topic=/com.ibm.websphere.iseries.doc/info/ae/asyncbns/concepts/casb_workmgr.html
My code:
public class WorkManagerAsyncLinkedComponentRouter implements
        MessageDispatcher<Object, Object> {
    private List<Component<Object, Object>> components;
    protected ConcurrentHashMap<String, Object> workItemsResultsMap;
    protected ConcurrentHashMap<String, Exception> componentExceptionsInThreads;
    ...
    // components is populated at this point with one component for each REST call to be made.
    public Object route(final Object message) throws RouterException {
        ...
        try {
            workItemsResultsMap = new ConcurrentHashMap<String, Object>();
            componentExceptionsInThreads = new ConcurrentHashMap<String, Exception>();
            final String parentThreadID = Thread.currentThread().getName();
            List<WorkItem> producerWorkItems = new ArrayList<WorkItem>();
            for (final Component<Object, Object> component : this.components) {
                producerWorkItems.add(workManagerTaskExecutor.schedule(new Work() {
                    public void run() {
                        //ExecuteThread th = (ExecuteThread) Thread.currentThread();
                        //th.setName(component.getName());
                        LOG.info("Child thread " + Thread.currentThread().getName() + " Parent thread: " + parentThreadID + " Executing work item for: " + component.getName());
                        try {
                            Object returnObj = component.processInbound(message);
                            if (returnObj == null)
                                LOG.info("Object returned to work item is null, not adding to producer components results map, for this producer: "
                                        + component.getName());
                            else {
                                LOG.info("Added producer component thread result for: "
                                        + component.getName());
                                workItemsResultsMap.put(component.getName(), returnObj);
                            }
                            LOG.info("Finished executing work item for: " + component.getName());
                        } catch (Exception e) {
                            componentExceptionsInThreads.put(component.getName(), e);
                        }
                    }
                    ...
                }));
            } // end loop over producer components
            // Block until all items are done
            workManagerTaskExecutor.waitForAll(producerWorkItems, stuckThreadTimeout);
            LOG.info("Finished waiting for all producer component threads.");
            if (componentExceptionsInThreads != null
                    && componentExceptionsInThreads.size() > 0) {
                ...
            }
            List<Object> resultsList = new ArrayList<Object>(workItemsResultsMap.values());
            if (resultsList.size() == 0)
                throw new RouterException(
                        "The producer thread results are all empty. The threads were likely not created. In testing this was observed when either 1) the system was almost out of memory (perhaps there is not enough memory to create a new thread for each producer for this REST request), or 2) timeouts were reached for all producers.");
            // The problem is identified here: the ConcurrentHashMap doesn't contain the expected number of results.
            if (workItemsResultsMap.size() != this.components.size()) {
                StringBuilder sb = new StringBuilder();
                for (String str : workItemsResultsMap.keySet()) {
                    sb.append(str + " ");
                }
                throw new RouterException(
                        "Did not receive results from all threads within the thread timeout period. Only retrieved: "
                                + sb.toString());
            }
            LOG.info("Returning " + String.valueOf(resultsList.size()) + " results.");
            LOG.debug("List of returned feeds: " + String.valueOf(resultsList));
            return resultsList;
        }
        ...
    }
}
I ended up cloning the DOM document used as a parameter. There must be some downstream code that has side effects on the parameter.
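The root cause described here, a shared mutable parameter handed to concurrent work items, can be reproduced and fixed with plain JDK threads. In this sketch, a StringBuilder stands in for the DOM Document and an ExecutorService stands in for the work manager; both are assumptions, not the original WebLogic API:

```java
// Sketch of the fix: give each concurrent task its own copy of a mutable
// parameter instead of sharing one instance. StringBuilder stands in for the
// DOM Document from the answer; ExecutorService stands in for the work manager.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DefensiveCopyDemo {
    static Map<String, Integer> runTasks(boolean cloneParam) throws InterruptedException {
        StringBuilder shared = new StringBuilder("payload");
        Map<String, Integer> results = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            String name = "component-" + i;
            // The fix: each task gets its own copy, so no task sees another's mutations.
            StringBuilder param = cloneParam ? new StringBuilder(shared) : shared;
            pool.submit(() -> {
                param.append("-mutated");          // side effect on the parameter
                results.put(name, param.length()); // record a per-task result
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return results;
    }

    public static void main(String[] args) throws InterruptedException {
        // With a defensive copy, every task sees the same pristine input:
        // each length is "payload" (7) + "-mutated" (8) = 15.
        System.out.println(runTasks(true));
    }
}
```

With cloneParam = false, the tasks race on the shared instance and the recorded lengths become nondeterministic, which mirrors the intermittent missing results described in the question.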
