How to stream from a capped collection with Spring Data MongoDB Reactive (Java)

I'm trying to use this interesting repository method:
@Tailable
Flux<Movie> findWithTailableCursorBy();
by exposing it in a controller, to stream newly saved documents from a capped collection.
This is the DataAppInitializr:
@EventListener(ApplicationReadyEvent.class)
public void run(ApplicationReadyEvent evt) {
    operations.collectionExists(Movie.class)
        .flatMap(exists -> exists ? operations.dropCollection(Movie.class) : Mono.just(exists))
        .then(operations.createCollection(Movie.class, CollectionOptions.empty()
            .size(256 * 256)
            .maxDocuments(10)
            .capped()))
        .thenMany(operations.insertAll(
            Flux.just("Jeyda", "Kaf Efrit").map(title -> new Movie(title)).collectList()))
        .subscribe();
}
This is the controller method:
@GetMapping(value = "/tail", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
Flux<Movie> allTail() {
    return movieRepository.findWithTailableCursorBy();
}
I get no exception; I just see a blank page in the browser and no stream of new documents. Am I missing a step?
Thanks in advance!

There are two aspects of your question that do not fit what you want to achieve:
Your code contains blocking bits: block(). Do not call .block() in initializers and event handlers during startup, or when receiving events triggered by reactive infrastructure. Blocking is the easiest way to disrupt any functionality and can render your application defunct.
Browsers aren't the ideal tool for consuming streams as a page view. Use cURL instead.
Besides that, you seem to have a mismatch between Flux<Person> and Flux<Movie>.
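For a quick check outside the browser, something like the following WebClient snippet should print each new document as it is appended to the capped collection (a minimal sketch; the localhost:8080 base URL is an assumption, and a plain curl -N http://localhost:8080/tail works equally well):
// Hypothetical client-side consumer of the SSE endpoint; assumes the
// application runs on localhost:8080 and Movie is on the classpath.
WebClient client = WebClient.create("http://localhost:8080");
client.get()
        .uri("/tail")
        .accept(MediaType.TEXT_EVENT_STREAM)
        .retrieve()
        .bodyToFlux(Movie.class)
        .subscribe(movie -> System.out.println("received: " + movie));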

The issue came from the SecurityWebFilterChain provided by spring-security-webflux. I will contact the people concerned to notify them.
Thank you for your support!
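Since the culprit turned out to be the SecurityWebFilterChain, a minimal sketch of explicitly permitting the streaming endpoint might look like this (assuming Spring Security 5.2+ lambda-style configuration; the exact rules depend on your setup):
// Hypothetical security config: let the SSE connection through so the
// default filter chain does not intercept the stream.
@Bean
SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
    return http
        .authorizeExchange(exchanges -> exchanges
            .pathMatchers("/tail").permitAll()
            .anyExchange().authenticated())
        .build();
}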

Related

SpringBoot functional Web MVC, missing a way to return CompletableFuture<ResponseEntity<String>>

I have a working production Spring Boot application, and part of it is getting a do-over. It would be very beneficial for me to delete the old @RequestMapping from the ResponseEntity<String> foo()s of my world, keeping the old code as a duplicate while we roll out the new functionality behind a feature gate. All production tenants go through my no-longer-declarative foo() function, while all my test and automation tenants can start to tinker with a brand new EntityResponse<String> bar().
The way to implement the change was so clear in my mind:
class Router {

    @Bean
    RouterFunction<ServerResponse> helloWorldRouterFunction(OldHelloWorldService oldHelloWorldService) {
        return RouterFunctions.route()
            .route(RequestPredicates.path("/helloWorld/{option}"), x -> {
                String option = x.pathVariable("option");
                if (FeatureManager.isActive()) {
                    return ServerResponse.ok().body(String.format("New implementation of Hello World! your option is: %s", option));
                } else {
                    // FutureServerResponse is my own bad implementation of the ServerResponse interface
                    return FutureServerResponse.from(oldHelloWorldService.futureFoo(Integer.parseInt(option)));
                }
            })
            .build();
    }
}
Here's the implementation for OldHelloWorldService::futureFoo
@RestController
static class OldHelloWorldService {

    @RequestMapping("/specialCase")
    ResponseEntity<String> specialCase() {
        // some business logic
        return ResponseEntity.ok().body("Special case for Hello World with option 2");
    }

    /**
     * Old declarative implementation, routed via functional {@link ServerRouteConfiguration}
     * to allow dynamic choice based on {@link FeatureManager#isActive()}
     * <p>
     * As you can see, before the change this function was a {@link RequestMapping} and it handled the
     * completable future; we could return both concrete OK responses with a body and FOUND responses with a location.
     */
    // @RequestMapping("/helloWorld/{option}")
    CompletableFuture<ResponseEntity<String>> futureFoo(
            // @PathVariable
            int option) {
        return CompletableFuture.supplyAsync(() -> {
            if (option == 2) {
                return ResponseEntity.status(HttpStatus.FOUND)
                    .location(URI.create("/specialCase"))
                    .build();
            } else {
                return ResponseEntity.ok().body(String.format("Old implementation of Hello World! your option is: %s", option));
            }
        });
    }
}
This feature lets my backend code decide, in the future, what kind of ResponseEntity it will send. As you can see, a smart function might, for instance, decide either to show a String message with an OK status, or to set a Location header and declare a FOUND status without giving a String body at all. Because the result type was a full, fluid ResponseEntity, I had the power to do whatever I wanted.
Now, with an EntityResponse you may still use a CompletionStage, but only as the actual entity. While building the EntityResponse, I am required to give it a definitive final status. If it was OK, I can't decide it will be FOUND once my CompletionStage has run its course.
The only problem with the above code is that org.springframework.web.servlet.function does not contain the FutureServerResponse implementation I need. I created my own, and it works, but it feels hacky, and I wouldn't want it in my production code.
Now I feel like the functionality should still be there somewhere. Why isn't there a FutureServerResponse that can decide in the future what it is? Is there a workaround to this problem, maybe somehow (ab)using views?
To state the maybe not-so-obvious: I am contemplating a move to reactive and WebFlux, but changing the entire runtime would have more dramatic implications for current production tenants, and making that move behind a feature gate would be impossible because the URLs would have to be shared between MVC and WebFlux.
It's a niche problem, and functional Web MVC has few resources, so I will greatly appreciate any help I can get.
I have created a GitHub companion for this question.
I had to patch Spring Web to get what I needed. I pushed a pull request to Spring Web with my patch, and in the end something similar was created and shipped in the 5.3 release of Spring.
If anybody else is looking for the async behavior described in the question, the ServerResponse.async function in Spring 5.3.0+ (Spring Boot 2.4.0+) solves the issue.
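For reference, a rough sketch of what the router from the question might look like with ServerResponse.async; the adapter from ResponseEntity to ServerResponse is an illustrative assumption, not the only way to write it:
@Bean
RouterFunction<ServerResponse> helloWorldRouterFunction(OldHelloWorldService oldHelloWorldService) {
    return RouterFunctions.route()
        .route(RequestPredicates.path("/helloWorld/{option}"), request -> {
            String option = request.pathVariable("option");
            if (FeatureManager.isActive()) {
                return ServerResponse.ok()
                    .body(String.format("New implementation of Hello World! your option is: %s", option));
            }
            // Wrap the CompletableFuture in an async ServerResponse; the final
            // status, headers and body are decided only when the future completes.
            return ServerResponse.async(
                oldHelloWorldService.futureFoo(Integer.parseInt(option))
                    .thenApply(entity -> {
                        ServerResponse.BodyBuilder builder = ServerResponse.status(entity.getStatusCode())
                            .headers(headers -> headers.addAll(entity.getHeaders()));
                        return entity.hasBody() ? builder.body(entity.getBody()) : builder.build();
                    }));
        })
        .build();
}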

Using CompletableFuture.allOf on a variable number of objects

Please bear with me, I don't usually use Spring and haven't used newer versions of Java (when I say newer, I mean anything past probably 1.4).
Anyway, I have an issue where I have to make REST calls to do a search using multiple parallel requests. I've been looking around online and I see you can use CompletableFuture.
So I created my method to get the objects I need from the REST call, like:
@Async
public CompletableFuture<QueryObject[]> queryObjects(String url) {
    QueryObject[] objects = restTemplate.getForObject(url, QueryObject[].class);
    return CompletableFuture.completedFuture(objects);
}
Now I need to call that with something like:
CompletableFuture<QueryObject[]> page1 = queryController.queryObjects("http://myrest.com/ids=[abc, def, ghi]");
CompletableFuture<QueryObject[]> page2 = queryController.queryObjects("http://myrest.com/ids=[jkl, mno, pqr]");
The problem is that the call needs to handle only three IDs at a time, and there can be a variable number of IDs. So I parse the ID list and create query strings like the ones above. The problem I'm having is that while I can issue the queries, I don't end up with separate objects on which I can then call CompletableFuture.allOf.
Can anyone tell me the way to do this? I've been at it for a while now and I'm not getting any further than where I am.
Happy to provide more info if the above isn't sufficient.
You are not getting any benefit from CompletableFuture in the way you're using it right now.
The restTemplate method you're using is synchronous, so it has to finish and return a result before proceeding. Because of that, wrapping the final result in a CompletableFuture doesn't cause it to be executed asynchronously (nor in parallel); you just wrap a response that you have already retrieved.
If you want to benefit from async execution, you can use, for example, the AsyncRestTemplate or the WebClient.
A simplified code example:
public ListenableFuture<ResponseEntity<QueryObject[]>> queryForObjects(String url) {
    return asyncRestTemplate.getForEntity(url, QueryObject[].class);
}

public List<QueryObject> queryForList(String[] elements) {
    return Arrays.stream(elements)
        .map(element -> queryForObjects("http://myrest.com/ids=[" + element + "]"))
        .map(future -> future.completable().join())
        .filter(Objects::nonNull)
        .map(HttpEntity::getBody)
        .flatMap(Arrays::stream)
        .collect(Collectors.toList());
}
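One caveat with the stream above: because Java streams are lazy, each future is joined before the next request is created, so the calls still run one after another. To actually execute the batches in parallel, and to use CompletableFuture.allOf on a variable number of futures as the question asks, you can fire all requests first and only then wait. A sketch under the same assumptions (partitioning the IDs into batches of three is left out):
public List<QueryObject> queryForListInParallel(String[] elements) {
    // fire all requests up front, collecting the futures
    List<CompletableFuture<ResponseEntity<QueryObject[]>>> futures = Arrays.stream(elements)
        .map(element -> queryForObjects("http://myrest.com/ids=[" + element + "]").completable())
        .collect(Collectors.toList());

    // allOf takes a vararg array, so it works for any number of futures
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();

    return futures.stream()
        .map(CompletableFuture::join)   // already completed after allOf
        .map(HttpEntity::getBody)
        .filter(Objects::nonNull)
        .flatMap(Arrays::stream)
        .collect(Collectors.toList());
}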

"sharing" parts of a reactive stream over multiple rest calls

I have this Spring WebFlux controller:
@RestController
public class Controller {

    @PostMapping("/doStuff")
    public Mono<Response> doStuff(@RequestBody Mono<Request> request) {
        ...
    }
}
Now, say I wanted to relate separate requests coming to this controller from different clients to group processing based on some property of the Request object.
Take 1:
@PostMapping("/doStuff")
public Mono<Response> doStuff(@RequestBody Mono<Request> request) {
    return request.flux()
        .groupBy(r -> r.someProperty())
        .flatMap(gf -> gf.map(r -> doStuff(r)));
}
This will not work, because every call gets its own instance of the stream. The whole flux() call doesn't make sense: there will only ever be one Request object going through the stream, even if many of those streams are fired at the same time as a result of simultaneous calls coming from clients. What I need, I gather, is some part of the stream that is shared between all requests, where I could do my grouping, which led me to this slightly over-engineered code.
Take 2:
private AtomicReference<FluxSink<Request>> sink = new AtomicReference<>();
private Flux<Response> serializingStream;

public Controller() {
    this.serializingStream =
        Flux.<Request>create(fluxSink -> sink.set(fluxSink), ERROR)
            .groupBy(r -> r.someProperty())
            .flatMap(gf -> gf.map(r -> doStuff(r)))
            .publish()
            .autoConnect();
    this.serializingStream.subscribe().dispose(); // dummy subscription to set the sink
}
@PostMapping("/doStuff")
public Mono<Response> doStuff(@RequestBody Request request) {
    request.setReqId(UUID.randomUUID().toString());
    return serializingStream
        .doOnSubscribe(__ -> sink.get().next(request))
        .filter(resp -> resp.getReqId().equals(request.getReqId()))
        .take(1)
        .single();
}
And this kind of works, though it looks like I am doing things I shouldn't (or at least they don't feel right), like leaking the FluxSink and then injecting a value through it while subscribing, and adding a request ID so that I can filter out the right response. Also, if an error happens in the serializingStream, it breaks everything for everyone, though I guess I could try to isolate the errors to keep things going.
The question is: is there a better way of doing this that doesn't feel like open-heart surgery?
Also, a related question for a similar scenario. I was thinking about using Akka Persistence to implement event sourcing and having it triggered from inside that Reactor stream. I was reading about Akka Streams, which allow wrapping an actor, and there are ways of converting that into something that can be hooked up with Reactor (i.e. a Publisher or Subscriber). But if every request gets its own stream, I am effectively losing backpressure and risking an OOME from flooding the persistent actor's mailbox, so I guess that problem falls into the same category as the one I described above.
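For comparison, a rough sketch of the same pattern using Reactor's Sinks API (Reactor 3.4+), which avoids leaking the FluxSink through an AtomicReference and the dummy subscription. doStuffInternal stands in for the actual per-request processing, and the error-isolation caveat from above still applies:
// Multicast sink replacing the leaked FluxSink; autoCancel=false keeps it
// alive even when subscriber count temporarily drops to zero.
private final Sinks.Many<Request> sink =
        Sinks.many().multicast().onBackpressureBuffer(256, false);

private final Flux<Response> serializingStream = sink.asFlux()
        .groupBy(r -> r.someProperty())
        .flatMap(gf -> gf.map(r -> doStuffInternal(r)))
        .publish()
        .autoConnect();

@PostMapping("/doStuff")
public Mono<Response> doStuff(@RequestBody Request request) {
    request.setReqId(UUID.randomUUID().toString());
    return serializingStream
        .filter(resp -> resp.getReqId().equals(request.getReqId()))
        .doOnSubscribe(s -> sink.tryEmitNext(request)) // emit only once this caller is listening
        .next();
}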

Is there a way to successfully execute nested flux operations without actually blocking your code?

While working with Spring WebFlux, I'm trying to insert some data into the Realm Object Server, which interacts with Java apps via a REST API. Basically I have a set of students, each of whom has a set of subjects, and my objective is to persist those subjects in a non-blocking manner. So I use a microservice exposed via a REST endpoint which provides me with a Flux of student roll numbers; for that Flux, I use another microservice exposed via a REST endpoint that gets me the Flux of subjects; and for each of these subjects, I want to persist them in the Realm server via another REST endpoint. I wanted to make this all non-blocking, which is why I wanted my code to look like this.
void foo() {
    studentService.getAllRollnumbers().flatMap(rollnumber -> {
        return subjectDirectory.getAllSubjects().map(subject -> {
            return dbService.addSubject(subject);
        });
    });
}
But this doesn't work for some reason. Yet once I call block() on things, they fall into place, something like this:
Flux<Done> foo() {
    List<Integer> rollNumbers = studentService.getAllRollnumbers().collectList().block();
    rollNumbers.forEach(rollNumber -> {
        List<Subject> subjects = subjectDirectory.getAllSubjects().collectList().block();
        subjects.forEach(subject -> dbService.addSubject(subject).block());
    });
    return Flux.just(new NotUsed());
}
getAllRollnumbers() returns a Flux of integers,
getAllSubjects() returns a Flux of subjects,
and addSubject() returns a Mono of a DBResponse POJO.
What I can make of it is that the thread executing this function expires before much of it gets triggered. Please help me make this code work in an async, non-blocking manner.
You are not subscribing to the Publisher at all in the first instance; that is why it is not executing. You can do this:
studentService.getAllRollnumbers()
    .flatMap(rollnumber -> subjectDirectory.getAllSubjects()
        // flatMap rather than map, so the Mono returned by addSubject is subscribed and actually runs
        .flatMap(subject -> dbService.addSubject(subject)))
    .subscribe();
However, it is usually better to let the framework take care of the subscription; without seeing the rest of the code I can't advise further.
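For completeness, a sketch of what letting the framework subscribe could look like here, returning the pipeline instead of calling subscribe() (the Mono<Void> return type is an assumption):
public Mono<Void> foo() {
    return studentService.getAllRollnumbers()
        .flatMap(rollnumber -> subjectDirectory.getAllSubjects())
        .flatMap(subject -> dbService.addSubject(subject))
        .then(); // completes once every insert has completed
}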

Reactor / WebFlux implement a reactive http news ticker

I have a request that is rather simple to formulate, but I cannot pull it off without leaking resources.
I want to return a response of type application/stream+json, featuring news events someone posted. I do not want to use WebSockets; not because I don't like them, I just want to know how to do it with a stream.
For this I need to return a Flux<News> from my REST controller that is continuously fed with news once someone posts any.
My attempt at this was creating a Publisher:
public class UpdatePublisher<T> implements Publisher<T> {

    private List<Subscriber<? super T>> subscribers = new ArrayList<>();

    @Override
    public void subscribe(Subscriber<? super T> s) {
        subscribers.add(s);
    }

    public void pushUpdate(T message) {
        subscribers.forEach(s -> s.onNext(message));
    }
}
And a simple News object:
public class News {
    String message;
    // Constructor, getters, some properties omitted for readability...
}
And endpoints to publish news and to get the stream of news:
// ...
private UpdatePublisher<String> updatePublisher = new UpdatePublisher<>();

@GetMapping(value = "/news/ticker", produces = "application/stream+json")
public Flux<News> getUpdateStream() {
    return Flux.from(updatePublisher).map(News::new);
}

@PutMapping("/news")
public void putNews(@RequestBody News news) {
    updatePublisher.pushUpdate(news.getMessage());
}
This WORKS, but I cannot unsubscribe or access any given subscription again, so once a client disconnects, the updatePublisher will just continue to push onto a growing number of dead channels, as I have no way to call the onComplete() handler on the subscriptions.
TL;DR:
Can one push messages onto a possibly endless Flux from a different thread and still terminate the Flux on demand, without relying on a "connection reset by peer" exception or something along those lines?
You should never try to implement the Publisher interface yourself, as it boils down to getting the Reactive Streams implementation right. That is exactly the issue you're facing here.
Instead, you should use one of the generator operators provided by Reactor itself (this is actually a Reactor question, nothing specific to Spring WebFlux).
In this case, Flux.create or Flux.push are probably the best candidates, given your code uses some type of event listener to push events down the stream. See the Reactor project reference documentation on that.
Without more details, it's hard to give you a concrete code sample that solves your problem. Here are a few pointers, with a short sketch after them:
you might want to .share() the stream of events for all subscribers if you'd like a multicast-like communication pattern
pay attention to the push/pull/push+pull model you'd like to have here: how is backpressure supposed to work? What if we produce more events than the subscribers can handle?
this model would only work on a single application instance; if you'd like this to work on multiple application instances, you might want to look into messaging patterns using a broker
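A minimal sketch along those lines, using Flux.create plus share(); the listener registry and its names are illustrative assumptions, not an existing API:
private final List<Consumer<String>> listeners = new CopyOnWriteArrayList<>();

private final Flux<News> newsStream = Flux.<String>create(sink -> {
            Consumer<String> listener = sink::next;
            listeners.add(listener);
            // deregister when the subscriber cancels, so disconnected clients don't leak
            sink.onDispose(() -> listeners.remove(listener));
        })
        .map(News::new)
        .share();

@GetMapping(value = "/news/ticker", produces = "application/stream+json")
public Flux<News> getUpdateStream() {
    return newsStream;
}

@PutMapping("/news")
public void putNews(@RequestBody News news) {
    listeners.forEach(listener -> listener.accept(news.getMessage()));
}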
