Mutiny - Discarding Failures when Combining Results of Several Unis - java

I'm using Mutiny to try to fetch data from several external sources. Each call produces a list of results, and I am currently combining these results into one list in a Uni as follows:
List<Uni<List<Result>>> unis = new ArrayList<>();
for (Source source : sources) {
    unis.add(source.getResults());
}
return Uni.combine().all().unis(unis).combinedWith(
    responses -> {
        List<Result> res = new ArrayList<>();
        for (List<Result> response : (List<List<Result>>) responses) {
            res.addAll(response);
        }
        return res;
    }
);
When one of these Unis fails, though, the entire final Uni fails.
I want to be able to get the combined list of results from all of the calls that do not fail, and just log failures or something, but I can't figure out how to do this from the Mutiny documentation. Any help would be much appreciated.

You should have an error-handling strategy for each Uni then, so the combination only sees succeeding Unis.
See:
https://quarkus.io/blog/mutiny-failure-handling/
https://smallrye.io/smallrye-mutiny/2.0.0/tutorials/handling-failures/
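Per the linked docs, one way is to recover each failing Uni to an empty list before combining, e.g. source.getResults().onFailure().invoke(t -> log(t)).onFailure().recoverWithItem(List.of()). The same discard-failures pattern, sketched with plain CompletableFuture so it runs without Mutiny on the classpath (class and source names are illustrative):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class DiscardFailures {

    // Recover each future to an empty list (logging the failure) so that
    // combining only sees successes -- the same idea as recovering each
    // Uni with an empty list before Uni.combine().
    public static List<String> combine(List<CompletableFuture<List<String>>> futures) {
        return futures.stream()
                .map(f -> f.exceptionally(t -> {
                    System.err.println("source failed: " + t.getMessage());
                    return List.of();
                }))
                .map(CompletableFuture::join)
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<CompletableFuture<List<String>>> futures = List.of(
                CompletableFuture.completedFuture(List.of("a", "b")),
                CompletableFuture.<List<String>>failedFuture(new RuntimeException("source down")),
                CompletableFuture.completedFuture(List.of("c")));
        System.out.println(combine(futures)); // [a, b, c]
    }
}
```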

Related

how to convert Flux<pojo> to ArrayList<String>

In my Spring Boot service class, I have written the following code, which is not working as desired:
Service class:
Flux<Workspace> mWorkspace = webClient.get().uri(WORKSPACEID)
        .retrieve().bodyToFlux(Workspace.class);
ArrayList<String> newmWorkspace = new ArrayList();
newmWorkspace = mWorkspace.blockLast();
return newmWorkspace;
Could someone please help me convert the list of JSON values into an ArrayList?
Json
[
    {
        "id": "123abc"
    },
    {
        "id": "123abc"
    }
]
Why is the code not working as desired
mWorkspace is a publisher of one or many items of type Workspace.
Calling mWorkspace.blockLast() will get a Workspace from that Publisher:
an object of type Workspace, not of type ArrayList<String>.
That's why you get: Type mismatch: cannot convert from Workspace to ArrayList<String>
Converting from Flux to an ArrayList
First of all, in reactive programming, a Flux is not meant to be blocked; the blockXxx methods are meant for testing purposes. If you find yourself using them, you may not need reactive logic.
In your service, you could try this:
//initialize the list
ArrayList<String> newmWorkspace = new ArrayList<>();
Flux<Workspace> mWorkspace = webClient.get().uri(WORKSPACEID)
        .retrieve().bodyToFlux(Workspace.class)
        .map(workspace -> {
            //feed the list
            newmWorkspace.add(workspace.getId());
            return workspace;
        });
//this line will trigger the publication of items, hence feeding the list
mWorkspace.subscribe();
Just in case you want to convert a JSON String to a POJO:
String responseAsjsonString = "[{\"id\": \"123abc\"},{\"id\": \"123cba\"}] ";
Workspace[] workspaces = new ObjectMapper().readValue(responseAsjsonString, Workspace[].class);
You would usually want to avoid blocking in a non-blocking application. However, if you are migrating from blocking to non-blocking step by step (and not mixing blocking and non-blocking in your production code), or using a servlet-stack app where you only want to use the WebFlux client, it should be fine.
With that being said, a Flux is a Publisher that represents an asynchronous sequence of 1..n emitted items. When you do a blockLast you wait until the last signal completes, which resolves to a Workspace object.
You want to collect each resolved item to a list and return that. For this purpose, there is a useful method called collectList, which does this job without blocking the stream. You can then block the Mono<List<Workspace>> returned by this method to retrieve the list.
So this should give you the result you want:
List<Workspace> workspaceList = workspaceFlux.collectList().block();
If you must use a blocking call in the reactive stack, to avoid blocking the event loop you should subscribe to it on a different scheduler. For I/O purposes, use the boundedElastic scheduler. You almost never want to call block in a reactive stack; instead, subscribe, or better, let WebFlux handle the subscription by returning the publisher from your controller (or handler).
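A minimal sketch of both points, assuming reactor-core on the classpath (Flux.just stands in for the WebClient call):

```java
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class CollectListExample {
    public static void main(String[] args) throws InterruptedException {
        Flux<String> ids = Flux.just("123abc", "123cba");

        // collectList gathers the emitted items into a Mono<List<String>>;
        // block() only waits at the very edge of the pipeline.
        List<String> all = ids.collectList().block();
        System.out.println(all); // [123abc, 123cba]

        // If you must block inside a reactive stack, shift the work to the
        // boundedElastic scheduler so the event loop is not blocked.
        ids.collectList()
           .subscribeOn(Schedulers.boundedElastic())
           .subscribe(list -> System.out.println("got " + list.size() + " ids"));
        Thread.sleep(200); // give the async subscription time to print
    }
}
```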

Writing unit tests for Java 8 streams

I have a list and I'm streaming this list to get some filtered data as:
List<Future<Accommodation>> submittedRequestList =
        list.stream().filter(Objects::nonNull)
            .map(config -> taskExecutorService.submit(() -> requestHandler
                .handle(jobId, config)))
            .collect(Collectors.toList());
When I wrote tests, I tried to return some data using a when():
List<Future<Accommodation>> submittedRequestList = mock(LinkedList.class);
when(list.stream().filter(Objects::nonNull)
        .map(config -> executorService.submit(() -> requestHandler
            .handle(JOB_ID, config)))
        .collect(Collectors.toList())).thenReturn(submittedRequestList);
I'm getting org.mockito.exceptions.misusing.WrongTypeOfReturnValue:
LinkedList$$EnhancerByMockitoWithCGLIB$$716dd84d cannot be returned by submit() error. How may I resolve this error by using a correct when()?
You can only mock individual method calls, not entire fluent interface cascades.
E.g., you could do
Stream<Future> fs = mock(Stream.class);
when(requestList.stream()).thenReturn(fs);
Stream<Future> filtered = mock(Stream.class);
when(fs.filter(Objects::nonNull)).thenReturn(filtered);
and so on.
IMO it's really not worth mocking the whole thing; just verify that all filters were called and check the contents of the result list.
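As an alternative to mocking the pipeline at all, you can extract it and exercise it with a small real list; a minimal sketch (the executor/requestHandler interaction would still be verified separately with Mockito):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class StreamFilterTest {

    // The pipeline under test, extracted so it can be run over real data
    // instead of a cascade of stream mocks.
    static List<String> nonNullOnly(List<String> input) {
        return input.stream()
                .filter(Objects::nonNull)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("a", null, "b", null);
        System.out.println(nonNullOnly(input)); // [a, b]
    }
}
```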

JMeter as code assertions are not being considered in test results

I'm using JMeter as a code (programmatic approach instead of GUI, with a Java Maven project) in order to stress-test an AWS Lambda Serverless API.
I've already developed a test plan, thread group, HTTPSamplerProxy and so on...
The execution of the calls to the API works perfectly, but that is not the case for, e.g., the DurationAssertion I've added to the HTTP Sampler.
I've also set a CSV file for the output, where after execution I see everything OK (status code 200...), but the test should fail because it exceeds the DurationAssertion threshold I've configured (in addition to other assertion test elements).
I thought that perhaps I had to set "enabled" = true in the DurationAssertion object, but it had no effect. Also, I've tried to access the JMeter context this way:
JMeterContextService.getContext().getPreviousResult()
I expected the above code to retrieve a SampleResult (which has an AssertionResult collection), but the SampleResult is null.
A test plan with test elements (a DurationAssertion in this case) but without analysis of the results of those assertions makes no sense. I want to see a failure message for each call that exceeds a certain threshold. If I were using the JMeter GUI, I would add a View Results Tree, which shows a Sampler Result view with details of the request, response, and associated assertions. In addition to the assertion result (per request), I want to see the request payload, full response, and headers, but in programmatic mode (without using the GUI).
So I would highly appreciate it if anyone could give me a hint on how to accomplish this goal in code.
UPDATE 1: I share a GitHub gist with the entire source code, as user UBIK LOAD PACK suggested:
https://gist.github.com/svillarreal/5eb90a66b8972633b95c249abb3566da
UPDATE 2: Inspection of the context object (evaluated after the JMeter engine finished its run) - all null inside.
UPDATE 3
i) I've recently found a jmeter.properties file, where I've configured the following properties:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.assertion_results=all
And now the output, as XML instead of CSV, shows at least the sent request payload and the response data, which is VERY useful for analysing error cases.
ii) I inspected JMeterContextService.getContext() during the JMeterEngine run instead of after it finished, and realized that there is one context per thread group; during the run the object is fully populated, so it is now clear why all the properties in UPDATE 2 were null.
Best regards and thanks!
I can think of at least one use case where your approach will not work: JMeter didn't receive a response from the server at all.
For example, if your server gets overloaded, it might be the case that JMeter never gets a response back; your DurationAssertion will then simply not be applied, as PostProcessors, Listeners and Assertions are not fired when the SampleResult is null.
So, in order to be on the safe side, I would recommend applying connect and response timeouts to your HTTP Request sampler(s):
HTTPSamplerProxy httpSampler = new HTTPSamplerProxy();
httpSampler.setConnectTimeout("3000");
httpSampler.setResponseTimeout("3000");
//etc.
If you have > 1 HTTP Request sampler in Test Plan it makes sense to go for HTTP Request Defaults instead of setting the timeouts individually.
Finally I could fix this. The issue was that I was managing the tree passed to the StandardJMeterEngine erroneously.
In JMeter everything is based on this tree, and, as in the GUI, we must take care of how the elements are positioned in its hierarchy.
By analysing the library and debugging it intensively, I came to understand more deeply how JMeter works, and that everything is managed starting from the HashTree. So the solution was to add the DurationAssertion and ResponseAssertion as children of the HTTPSamplerProxy node instead of putting them in as the HTTPSamplerProxy's test elements.
In particular, the method that collects the assertions to check after execution is the following (and it showed me how to manage the HashTree):
// org.apache.jmeter.threads.TestCompiler
private void saveSamplerConfigs(Sampler sam) {
    List<ConfigTestElement> configs = new LinkedList<>();
    List<Controller> controllers = new LinkedList<>();
    List<SampleListener> listeners = new LinkedList<>();
    List<Timer> timers = new LinkedList<>();
    List<Assertion> assertions = new LinkedList<>();
    LinkedList<PostProcessor> posts = new LinkedList<>();
    LinkedList<PreProcessor> pres = new LinkedList<>();
    for (int i = stack.size(); i > 0; i--) {
        addDirectParentControllers(controllers, stack.get(i - 1));
        List<PreProcessor> tempPre = new LinkedList<>();
        List<PostProcessor> tempPost = new LinkedList<>();
        List<Assertion> tempAssertions = new LinkedList<>();
        for (Object item : testTree.list(stack.subList(0, i))) {
            if (item instanceof ConfigTestElement) {
                configs.add((ConfigTestElement) item);
            }
            if (item instanceof SampleListener) {
                listeners.add((SampleListener) item);
            }
            if (item instanceof Timer) {
                timers.add((Timer) item);
            }
            if (item instanceof Assertion) {
                tempAssertions.add((Assertion) item);
            }
            if (item instanceof PostProcessor) {
                tempPost.add((PostProcessor) item);
            }
            if (item instanceof PreProcessor) {
                tempPre.add((PreProcessor) item);
            }
        }
        assertions.addAll(0, tempAssertions);
        pres.addAll(0, tempPre);
        posts.addAll(0, tempPost);
    }
    SamplePackage pack = new SamplePackage(configs, listeners, timers, assertions,
            posts, pres, controllers);
    pack.setSampler(sam);
    pack.setRunningVersion(true);
    samplerConfigMap.put(sam, pack);
}
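For reference, the building side of the fix looks roughly like this: a sketch with illustrative variable names, assuming the standard JMeter API classes. The key point is that the DurationAssertion is added as a child of the sampler's HashTree node, not as a sub-element of the sampler:

```java
import org.apache.jmeter.assertions.DurationAssertion;
import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;
import org.apache.jmeter.testelement.TestPlan;
import org.apache.jmeter.threads.ThreadGroup;
import org.apache.jorphan.collections.HashTree;

public class TreeLayoutSketch {
    public static HashTree buildTree() {
        TestPlan testPlan = new TestPlan("plan");
        ThreadGroup threadGroup = new ThreadGroup();
        HTTPSamplerProxy httpSampler = new HTTPSamplerProxy();

        DurationAssertion durationAssertion = new DurationAssertion();
        durationAssertion.setAllowedDuration(1500); // fail samples slower than 1.5 s

        HashTree testPlanTree = new HashTree();
        HashTree threadGroupTree = testPlanTree.add(testPlan).add(threadGroup);
        // The assertion is a CHILD of the sampler's tree node,
        // not a sub-element of the sampler itself.
        HashTree samplerTree = threadGroupTree.add(httpSampler);
        samplerTree.add(durationAssertion);
        return testPlanTree;
    }
}
```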
Also I had to activate the following property:
jmeter.save.saveservice.assertion_results_failure_message=true
As a consequence, my CSV report now includes the assertion result messages in a dedicated column.
Well, issue resolved. **I've updated the GitHub gist with the final solution.** Many thanks to all who read this post and tried to collaborate.
Best regards,

how to run multiple synchronous functions asynchronously?

I am writing in Java on the Vertx framework, and I have an architecture question regarding blocking code.
I have a JsonObject which consists of 10 objects, like so:
{
    "system":"CD0",
    "system":"CD1",
    "system":"CD2",
    "system":"CD3",
    "system":"CD4",
    "system":"CD5",
    "system":"CD6",
    "system":"CD7",
    "system":"CD8",
    "system":"CD9"
}
I also have a synchronous function that gets an object from the JsonObject and consumes a SOAP web service, sending the object to it.
The SOAP web service receives the content (e.g. CD0) and, after a few seconds, returns an enum.
I then want to take the returned enum value and save it in some data structure (like a hash table).
What I ultimately want is a function that iterates over all the JsonObject's objects and, for each one, runs the blocking code in parallel.
I want it to run in parallel so that even if one of the calls needs to wait 20 seconds, it won't block the other calls.
How can I do such a thing in Vert.x?
P.S.: I would appreciate it if you corrected any mistakes I made.
Why not use RxJava and zip the separate calls? Vert.x has great support for RxJava too. Assuming you are calling the same method 10 times with a different String argument and returning another String, you could do something like this:
private Single<String> callWs(String arg) {
    return Single.fromCallable(() -> {
        //DO CALL WS
        return "yourResult";
    });
}
and then just use it with some array of arguments:
String[] array = new String[10]; //get your arguments
List<Single<String>> wsCalls = new ArrayList<>();
for (String s : array) {
    wsCalls.add(callWs(s));
}
Single.zip(wsCalls, r -> r).subscribe(allYourResults -> {
    // do whatever you like with the results
});
More about zip function and reactive programming in general: reactivex.io
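If you'd rather not pull in RxJava, the same fan-out can be sketched with plain CompletableFuture on a worker pool (supplyAsync keeps the blocking WS call off the event loop; in Vert.x you could also use executeBlocking). The callWs body and "ENUM_" result here are illustrative stand-ins:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class ParallelCalls {

    // Stand-in for the blocking SOAP call.
    static String callWs(String system) {
        return "ENUM_" + system;
    }

    public static Map<String, String> callAll(List<String> systems) {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, systems.size()));
        try {
            // Start all calls first so they run in parallel...
            Map<String, CompletableFuture<String>> futures = systems.stream()
                    .collect(Collectors.toMap(s -> s,
                            s -> CompletableFuture.supplyAsync(() -> callWs(s), pool)));
            // ...then join: one slow call no longer blocks the others from running.
            return futures.entrySet().stream()
                    .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue().join()));
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(callAll(List.of("CD0", "CD1")));
    }
}
```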

Fetching a list with Lagom framework

I'm VERY new to the Lagom framework and I have absolutely no idea what I'm doing. I have a simple CRUD Lagom application that does work, but I can't figure out how to retrieve a list.
So this is what I have for now:
@Override
public ServiceCall<NotUsed, Source<Movie, ?>> getMovies() {
    return request -> {
        CompletionStage<Source<Movie, ?>> movieFuture = session.selectAll("SELECT * FROM movies")
                .thenApply(rows -> rows.stream()
                        .map(row -> Movie.builder()
                                .id(row.getString("id"))
                                .name(row.getString("name"))
                                .genre(row.getString("genre"))
                                .build()));
                //.thenApply(TreePVector::from));
                //.thenApply(is -> is.collect(Collectors.toList()))
        return movieFuture;
    };
}
but I'm getting a [Java] Type mismatch: cannot convert from Stream<Object> to Source<Movie,?> error on the rows.stream() line.
Any help would be appreciated.
Thanks in advance.
It looks like the return type should be a Source (from Akka Streams), but you are building a Java 8 Stream.
The problem can be easily solved if you use select instead of selectAll when querying the database. Lagom's CassandraSession provides two families of methods to query the DB: (1) select(...), which immediately returns a Source<Row, NotUsed>, a reactive stream; and (2) selectAll(...), which gathers all rows in memory and completes with a List<Row>. The latter could take your server down, because it will try to put all the info in memory. The former uses reactive streams to deliver items, adapting the pace to the speed of your consuming end (backpressure) and keeping a very low memory footprint.
Your code can be rewritten as:
public ServiceCall<NotUsed, Source<GreetingMessage, ?>> getGreetings() {
    return request ->
        CompletableFuture.completedFuture(
            session.select("SELECT * FROM greetings")
                .map(row -> new GreetingMessage(row.getString(0)))
        );
}
Using select creates a Source<>. You can map items individually on that Source<> using the lambda you already developed.
