When using a Java Lambda function to do a Kinesis Data Firehose transformation, I am getting the error below. This is what my transformed JSON looks like:
{
"records": [
{
"recordId": "49586022990098427206724983301551059982279766660054253570000000",
"result": "Ok",
"data": "ZXlKMGFXTnJaWEpmYzNsdFltOXNJam9pVkVWVFZEY2lMQ0FpYzJWamRHOXlJam9pU0VWQlRGUklRMEZTUlNJc0lDSmphR0Z1WjJVaQ0KT2kwd0xqQTFMQ0FpY0hKcFkyVWlPamcwTGpVeGZRbz0="
}
]
}
The error in the Kinesis console is:
Invalid output structure: Please check your function and make sure the processed records contain valid result status of Dropped, Ok, or ProcessingFailed
Does anyone have an idea about this? I could not find example code using Java for the Kinesis data transformation.
https://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html
This document describes the required output structure.
I just finished struggling through this in Scala (Java compatible). The key is to use the return type com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsInputPreprocessingResponse:
import java.nio.ByteBuffer
import com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsInputPreprocessingResponse._
import com.amazonaws.services.lambda.runtime.events.{KinesisAnalyticsInputPreprocessingResponse, KinesisFirehoseEvent}
import com.amazonaws.services.lambda.runtime.{Context, LambdaLogger, RequestHandler}
import scala.collection.JavaConversions._
import scala.language.implicitConversions
class Handler extends RequestHandler[KinesisFirehoseEvent, KinesisAnalyticsInputPreprocessingResponse] {
override def handleRequest(in: KinesisFirehoseEvent, context: Context): KinesisAnalyticsInputPreprocessingResponse = {
val logger: LambdaLogger = context.getLogger
val records = in.getRecords
val transformed = records.flatMap(record => {
try {
val changed = record.getData.array()
//do some sort of transform
val rec = new Record(record.getRecordId, Result.Ok, ByteBuffer.wrap(changed))
Some(rec)
} catch {
case e: Exception => {
logger.log(e.toString)
Some(new Record(record.getRecordId, Result.Dropped, record.getData))
}
}
})
val response = new KinesisAnalyticsInputPreprocessingResponse()
response.setRecords(transformed.toList)
response
}
}
A Java example:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsInputPreprocessingResponse;
import com.amazonaws.services.lambda.runtime.events.KinesisFirehoseEvent;
import com.fasterxml.jackson.databind.ObjectMapper;
import lombok.RequiredArgsConstructor;
import lombok.extern.log4j.Log4j2;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import java.nio.charset.StandardCharsets;
@Log4j2
@RequiredArgsConstructor
public class FirehoseHandler implements RequestHandler<KinesisFirehoseEvent, KinesisAnalyticsInputPreprocessingResponse> {
private final ObjectMapper mapper;
@Override
public KinesisAnalyticsInputPreprocessingResponse handleRequest(KinesisFirehoseEvent kinesisFirehoseEvent, Context context) {
return Flux.fromIterable(kinesisFirehoseEvent.getRecords())
.flatMap(this::transformRecord)
.collectList()
.map(KinesisAnalyticsInputPreprocessingResponse::new)
.block();
}
private Mono<KinesisAnalyticsInputPreprocessingResponse.Record> transformRecord(KinesisFirehoseEvent.Record record) {
return Mono.just(record.getData())
.map(StandardCharsets.UTF_8::decode)
.flatMap(data -> Mono.fromCallable(() -> doYourOwnThing(data)))
.map(StandardCharsets.UTF_8::encode)
.map(data -> new KinesisAnalyticsInputPreprocessingResponse.Record(record.getRecordId(), KinesisAnalyticsInputPreprocessingResponse.Result.Ok, data))
.onErrorResume(e -> Mono.just(new KinesisAnalyticsInputPreprocessingResponse.Record(record.getRecordId(), KinesisAnalyticsInputPreprocessingResponse.Result.ProcessingFailed, record.getData())));
}
}
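The doYourOwnThing call above stands in for whatever per-record logic you need. As a rough sketch only (the method name, the CharBuffer parameter, and the trailing-newline handling are assumptions for illustration, not part of the AWS API), it might look like this:
// Requires: import java.nio.CharBuffer;
// Hypothetical per-record transform: receives the UTF-8-decoded payload as a
// CharBuffer and returns the String that is re-encoded to bytes in the chain above.
private String doYourOwnThing(CharBuffer data) {
    String payload = data.toString();
    // ... parse / enrich the payload here ...
    // A trailing newline keeps records line-delimited in the delivery stream.
    return payload + "\n";
}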
I am developing a Quarkus service-based application for which I am adding OpenAPI-based annotations such as @ExampleObject. For this, I would like to add the resources file contents as an example that can appear in the Swagger UI.
I am getting the following error when I add the reference to the files from the resources folder:
Errors
Resolver error at paths./api/generateTestData.post.requestBody.content.application/json.examples.Example1 Schema.$ref
Could not resolve reference: Could not resolve pointer: /Example1.json does not exist in document
Resolver error at paths./api/generateTestData.post.requestBody.content.application/json.examples.Example2 Schema.$ref
Could not resolve reference: Could not resolve pointer: /Example2.json does not exist in document
Following is my Quarkus-based Java code:
@RequestBody(description = "InputTemplate body",
content = @Content(schema = @Schema(implementation = InputTemplate.class), examples = {
@ExampleObject(name = "Example-1",
description = "Example-1 for InputTemplate.",
ref = "#/resources/Example1.json"), // externalValue = "#/resources/Example2.json"
@ExampleObject(name = "Example-2",
description = "Example-2 for InputTemplate.",
ref = "#/resources/Example1.json") // externalValue = "#/resources/Example1.json"
}))
Note:
I am able to add the String as a value, but the content of these examples is very large, so I would like to read it from files instead, hence this approach.
Is there any way I can access the resources file and add it as a ref within my @ExampleObject?
A working example below:
Create an OASModelFilter class which implements OASFilter:
package org.acme;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.eclipse.microprofile.openapi.OASFactory;
import org.eclipse.microprofile.openapi.OASFilter;
import org.eclipse.microprofile.openapi.models.Components;
import org.eclipse.microprofile.openapi.models.OpenAPI;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import org.eclipse.microprofile.openapi.models.examples.Example;
public class OASModelFilter implements OASFilter {
ObjectMapper objectMapper = new ObjectMapper();
@Override
public void filterOpenAPI(OpenAPI openAPI) {
//openApi.getComponents() will result in NULL as we don't have any openapi.yaml file.
Components defaultComponents = OASFactory.createComponents();
if(openAPI.getComponents() == null){
openAPI.setComponents(defaultComponents);
}
generateExamples().forEach(openAPI.getComponents()::addExample);
}
Map<String, Example> generateExamples() {
Map<String, Example> examples = new LinkedHashMap<>();
try {
//loop over your Example JSON Files,..
//In this case, the example is only for 1 file.
ClassLoader loader = Thread.currentThread().getContextClassLoader();
InputStream userJsonFileInputStream = loader.getResourceAsStream("user.json");
String fileJSONContents = new String(userJsonFileInputStream.readAllBytes(), StandardCharsets.UTF_8);
//Create a unique example for each File/JSON
Example createExample = OASFactory.createExample()
.description("User JSON Description")
.value(objectMapper.readValue(fileJSONContents, ObjectNode.class));
// Save your Example with a Unique Map Key.
examples.put("createExample", createExample);
} catch (IOException ioException) {
System.out.println("An error occured" + ioException);
}
return examples;
}
}
The controller using createExample as its @ExampleObject ref:
#Path("/hello")
public class GreetingResource {
#GET
#Produces(MediaType.TEXT_PLAIN)
#APIResponses(
value = {
#APIResponse(responseCode = "200", content = #Content(
mediaType = "*/*",
examples = {
#ExampleObject(name = "boo",
summary = "example of boo",
ref = "createExample")
}
))
}
)
public String hello() {
return "Hello RESTEasy";
}
}
In your application.properties, specify the following. Note that it references the fully qualified class name of the filter:
mp.openapi.filter=org.acme.OASModelFilter
Contents of user.json file:
{
"hello": "world",
"my": "json",
"testing": "manually adding resource JSONs as examples"
}
The JSON file used is located directly under resources. Of course you can change that path, but you need to update your InputStream.
mvn clean install
mvn quarkus:dev
Go to http://localhost:8080/q/swagger-ui/ and you will now see your user.json file contents displayed.
Hope this helps you.
References for my investigation:
https://github.com/labcabrera/rolemaster-core/blob/c68331c10ef358f6288518350c79d4868ff60d2c/src/main/java/org/labcabrera/rolemaster/core/config/OpenapiExamplesConfig.java
https://github.com/bf2fc6cc711aee1a0c2a/kafka-admin-api/blob/54496dd67edc39a81fa7c6da4c966560060c7e3e/kafka-admin/src/main/java/org/bf2/admin/kafka/admin/handlers/OASModelFilter.java
The below works, but as you can see I am creating the paths programmatically, and you still need to know what the path/address is in order to create them.
It might help you think about approaching the problem in a different way.
If you are considering modifying the @APIResponses/@APIResponse annotations directly, it won't work.
package org.acme;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.eclipse.microprofile.openapi.OASFactory;
import org.eclipse.microprofile.openapi.OASFilter;
import org.eclipse.microprofile.openapi.models.Components;
import org.eclipse.microprofile.openapi.models.OpenAPI;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import org.eclipse.microprofile.openapi.models.examples.Example;
import io.quarkus.logging.Log;
public class CustomOASFilter implements OASFilter {
ObjectMapper objectMapper = new ObjectMapper();
@Override
public void filterOpenAPI(OpenAPI openAPI) {
//openApi.getComponents() will result in NULL as we don't have any openapi.yaml file.
Components defaultComponents = OASFactory.createComponents();
if (openAPI.getComponents() == null) {
openAPI.setComponents(defaultComponents);
}
generateExamples().forEach(openAPI.getComponents()::addExample);
openAPI.setPaths(OASFactory.createPaths()
.addPathItem(
"/hello/customer", OASFactory.createPathItem()
.GET(
OASFactory.createOperation()
.operationId("hello-customer-get")
.summary("A simple get call")
.description("Getting customer information")
.responses(
OASFactory.createAPIResponses()
.addAPIResponse(
"200", OASFactory.createAPIResponse()
.content(OASFactory.createContent()
.addMediaType("application/json", OASFactory.createMediaType()
.examples(generateExamples()))))))));
}
Map<String, Example> generateExamples() {
Map<String, Example> examples = new LinkedHashMap<>();
try {
ClassLoader loader = Thread.currentThread().getContextClassLoader();
String userJSON = new String(loader.getResourceAsStream("user.json").readAllBytes(), StandardCharsets.UTF_8);
String customerJson = new String(loader.getResourceAsStream("customer.json").readAllBytes(), StandardCharsets.UTF_8);
Example userExample = OASFactory.createExample()
.description("User JSON Example Description")
.value(objectMapper.readValue(userJSON, ObjectNode.class));
Example customerExample = OASFactory.createExample()
.description("Customer JSON Example Description")
.value(objectMapper.readValue(customerJson, ObjectNode.class));
examples.put("userExample", userExample);
examples.put("customerExample", customerExample);
} catch (IOException ioException) {
Log.error(ioException);
}
return examples;
}
}
EDIT: This is working well in Spring Boot.
The above answer might work, but it requires a lot of code to get working.
Instead, you can use the externalValue field to reference the JSON file.
For example,
@ExampleObject(
summary = "temp",
name = "A 500 error",
externalValue = "/response.json"
)
Now you can create your JSON file under /resources/static, as shown below:
Swagger doc screenshot
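For illustration only, a hypothetical response.json placed under src/main/resources/static could contain something like this (the contents are just an example, nothing about them is prescribed):
{
"status": 500,
"message": "Internal server error example"
}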
And that's all you need. You don't need to write any manual code here.
Hope this will help you fix the issue.
In the route setup we have a call to WebClient.builder() being made before the route is declared:
@Override
public void configure() {
createSubscription(activeProfile.equalsIgnoreCase("RESTART"));
from(String.format("reactive-streams:%s", streamName))
.to("log:camel.proxy?level=INFO&groupInterval=500000")
.to(String.format("kafka:%s?brokers=%s", kafkaTopic, kafkaBrokerUrls));
}
private void createSubscription(boolean restart) {
WebClient.builder()
.defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.TEXT_XML_VALUE)
.build()
.post()
.uri(initialRequestUri)
.body(BodyInserters.fromObject(restart ? String.format(restartRequestBody, ZonedDateTime.now(ZoneId.of("UTC")).toString().replace("[UTC]", "")) : initialRequestBody))
.retrieve()
.bodyToMono(String.class)
.map(initResp ->
new JSONObject(initResp)
.getJSONObject("RESPONSE")
.getJSONArray("RESULT")
.getJSONObject(0)
.getJSONObject("INFO")
.getString("SSEURL")
)
.flatMapMany(url -> {
log.info(url);
return WebClient.create()
.get()
.uri(url)
.retrieve()
.bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<String>>() {
})
.flatMap(sse -> {
val data = new JSONObject(sse.data())
.getJSONObject("RESPONSE")
.getJSONArray("RESULT")
.getJSONObject(0)
.getJSONArray(apiName);
val list = new ArrayList<String>();
for (int i = 0; i < data.length(); i++) {
list.add(data.getJSONObject(i).toString());
}
return Flux.fromIterable(list);
}
);
}
)
.onBackpressureBuffer()
.flatMap(msg -> camelReactiveStreamsService.toStream(streamName, msg, String.class))
.doFirst(() -> log.info(String.format("Reactive stream %s was %s", streamName, restart ? "restarted" : "started")))
.doOnError(err -> {
log.error(String.format("Reactive stream %s has terminated with error, restarting", streamName), err);
createSubscription(true);
})
.doOnComplete(() -> {
log.warn(String.format("Reactive stream %s has completed, restarting", streamName));
createSubscription(true);
})
.subscribe();
}
To my understanding, the WebClient setup is made for the whole Spring Boot app and not for the specific Apache Camel route (it isn't bound to a specific route id or URL), which is why new routes using new reactive streams for other URLs, with other header/initial-message needs, would get this setup too, which isn't wanted.
So, the question is: is it possible to make a specific WebClient setup that is associated not with the whole application but with a specific route, and have it applied only to that route?
Is this configuration possible with Spring DSL?
The way to apply it is rather complex:
Create two routes. The first one is executed first and only once, and it triggers a specific method of a specific bean, passing the setup for the WebClient.builder() via method parameters and executing the subscription for the WebFlux stream. And yes, that reactive streams setup is done within the Spring Boot app's Spring context, not the Apache Camel context, so it has no direct association with the route other than being called for setup when the specific route is started. The route looks like this:
<?xml version="1.0" encoding="UTF-8"?>
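<!-- Illustrative sketch only: the route ids, timer name, endpoint URIs and the
     bean method arguments below are hypothetical placeholders, not values from
     the original project. The bean name webFluxSetUp refers to the component
     shown further down. -->
<routes xmlns="http://camel.apache.org/schema/spring">
    <!-- Route 1: fires exactly once on startup and triggers the WebFluxSetUp bean,
         which builds the WebClient and subscribes to the SSE stream. -->
    <route id="webflux-subscription-setup">
        <from uri="timer:webfluxSetup?repeatCount=1"/>
        <bean ref="webFluxSetUp"
              method="executeWebfluxSetup(false, 'https://example.org/init', 'restartBody', 'initialBody', 'API_NAME', 'myStream')"/>
    </route>
    <!-- Route 2: consumes the reactive stream fed by that subscription and forwards
         the messages to Kafka, mirroring the original Java DSL route. -->
    <route id="sse-to-kafka">
        <from uri="reactive-streams:myStream"/>
        <to uri="log:camel.proxy?level=INFO&amp;groupInterval=500000"/>
        <to uri="kafka:myTopic?brokers=localhost:9092"/>
    </route>
</routes>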
Provide the bean. I have put it into the Spring Boot app, not the Apache Camel context, as below. The drawback here is that I have to provide it whether or not the specific route will ever run, so it is always in memory.
import org.apache.camel.CamelContext;
import org.apache.camel.component.reactive.streams.api.CamelReactiveStreamsService;
import org.json.JSONArray;
import org.json.JSONObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.codec.ServerSentEvent;
import org.springframework.stereotype.Component;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.ArrayList;
@Component
public class WebFluxSetUp {
private final Logger logger = LoggerFactory.getLogger(WebFluxSetUp.class);
private final CamelContext camelContext;
private final CamelReactiveStreamsService camelReactiveStreamsService;
WebFluxSetUp(CamelContext camelContext, CamelReactiveStreamsService camelReactiveStreamsService) {
this.camelContext = camelContext;
this.camelReactiveStreamsService = camelReactiveStreamsService;
}
public void executeWebfluxSetup(boolean restart, String initialRequestUri, String restartRequestBody, String initialRequestBody, String apiName, String streamName) {
{
WebClient.builder()
        .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.TEXT_XML_VALUE)
        .build()
        .post()
        .uri(initialRequestUri)
        .body(BodyInserters.fromObject(restart ? String.format(restartRequestBody, ZonedDateTime.now(ZoneId.of("UTC")).toString().replace("[UTC]", "")) : initialRequestBody))
        .retrieve()
        .bodyToMono(String.class)
        .map(initResp -> new JSONObject(initResp)
                .getJSONObject("RESPONSE")
                .getJSONArray("RESULT")
                .getJSONObject(0)
                .getJSONObject("INFO")
                .getString("SSEURL"))
        .flatMapMany(url -> {
logger.info(url);
return WebClient.create().get().uri(url).retrieve().bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<String>>() {
}).flatMap(sse -> {
JSONArray data = new JSONObject(sse.data()).getJSONObject("RESPONSE").getJSONArray("RESULT").getJSONObject(0).getJSONArray(apiName);
ArrayList<String> list = new ArrayList<String>();
for (int i = 0; i < data.length(); i++) {
list.add(data.getJSONObject(i).toString());
}
return Flux.fromIterable(list);
});
})
        .onBackpressureBuffer()
        .flatMap(msg -> camelReactiveStreamsService.toStream(streamName, msg, String.class))
        .doFirst(() -> logger.info(String.format("Reactive stream %s was %s", streamName, restart ? "restarted" : "started")))
        .doOnError(err -> {
logger.error(String.format("Reactive stream %s has terminated with error, restarting", streamName), err);
executeWebfluxSetup(true, initialRequestUri, restartRequestBody, initialRequestBody, apiName, streamName);
}).doOnComplete(() -> {
logger.warn(String.format("Reactive stream %s has completed, restarting", streamName));
executeWebfluxSetup(true, initialRequestUri, restartRequestBody, initialRequestBody, apiName, streamName);
}).subscribe();
}
}
}
Another drawback is that when the route is stopped, the WebFlux client keeps hitting the reactive stream URL, and there is no route-associated API/event handler to stop it without hard-coding that to the specific route.
Hi, I have a file listener that is reading files in parallel (more than one at a time):
package com.example.demo.flow;
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.*;
import org.springframework.integration.dsl.channel.MessageChannels;
import org.springframework.integration.file.dsl.Files;
import org.springframework.stereotype.Component;
import java.io.File;
import java.util.concurrent.Executors;
/**
* Created by muhdk on 03/01/2020.
*/
@Component
@Slf4j
public class TestFlow {
@Bean
public StandardIntegrationFlow errorChannelHandler() {
return IntegrationFlows.from("testChannel")
.handle(o -> {
log.info("Handling error....{}", o);
}).get();
}
@Bean
public IntegrationFlow testFile() {
IntegrationFlowBuilder testChannel = IntegrationFlows.from(Files.inboundAdapter(new File("d:/input-files/")),
e -> e.poller(Pollers.fixedDelay(5000L).maxMessagesPerPoll(5)
.errorChannel("testChannel")))
.channel(MessageChannels.executor(Executors.newFixedThreadPool(5)))
.transform(o -> {
throw new RuntimeException("Failing on purpose");
}).handle(o -> {
});
return testChannel.get();
}
}
It's not going to my custom error channel, but if I remove the line
.channel(MessageChannels.executor(Executors.newFixedThreadPool(5)))
then it does go to the error channel.
How can I make it work so that it goes to my custom error channel with the executor?
It looks like, when using executor services with multiple messages, it doesn't work with the normal errorChannel, and I have no idea why.
I made a change like this:
@Bean
public IntegrationFlow testFile() {
IntegrationFlowBuilder testChannel = IntegrationFlows.from(Files.inboundAdapter(new File("d:/input-files/")),
e -> e.poller(Pollers.fixedDelay(5000L).maxMessagesPerPoll(10)
))
.enrichHeaders(h -> h.header(MessageHeaders.ERROR_CHANNEL, "testChannel"))
.channel(MessageChannels.executor(Executors.newFixedThreadPool(5)))
.transform(o -> {
throw new RuntimeException("Failing on purpose");
}).handle(o -> {
});
return testChannel.get();
}
The key line here is
.enrichHeaders(h -> h.header(MessageHeaders.ERROR_CHANNEL, "testChannel"))
The rest remains the same, and it works. The reason is that once the message is handed off to the executor channel, any downstream exception is thrown on an executor thread rather than the poller thread, so the poller's errorChannel setting no longer applies; the error is instead routed to the channel named in the message's errorChannel header, which is exactly what the header enricher sets.
I'm trying to rewrite the Wikipedia edit stream analysis from the Apache Flink tutorial at https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/run_example_quickstart.html in Scala.
The code from the tutorial is:
import org.apache.flink.api.common.functions.FoldFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.wikiedits.WikipediaEditEvent;
import org.apache.flink.streaming.connectors.wikiedits.WikipediaEditsSource;
public class WikipediaAnalysis {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment see = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<WikipediaEditEvent> edits = see.addSource(new WikipediaEditsSource());
KeyedStream<WikipediaEditEvent, String> keyedEdits = edits
.keyBy(new KeySelector<WikipediaEditEvent, String>() {
@Override
public String getKey(WikipediaEditEvent event) {
return event.getUser();
}
});
DataStream<Tuple2<String, Long>> result = keyedEdits
.timeWindow(Time.seconds(5))
.fold(new Tuple2<>("", 0L), new FoldFunction<WikipediaEditEvent, Tuple2<String, Long>>() {
@Override
public Tuple2<String, Long> fold(Tuple2<String, Long> acc, WikipediaEditEvent event) {
acc.f0 = event.getUser();
acc.f1 += event.getByteDiff();
return acc;
}
});
result.print();
see.execute();
}
}
Below is my attempt in Scala:
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.streaming.connectors.wikiedits.{WikipediaEditEvent, WikipediaEditsSource}
import org.apache.flink.streaming.api.windowing.time.Time
object WikipediaAnalytics extends App{
val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
val edits = env.addSource(new WikipediaEditsSource());
val keyedEdits = edits.keyBy(event => event.getUser)
val result = keyedEdits.timeWindow(Time.seconds(5)).fold(("", 0L), (we: WikipediaEditEvent, t: (String, Long)) =>
(we.getUser, t._2 + we.getByteDiff))
}
which is more or less a word-for-word conversion to Scala. Based on this, the type of the val result should be DataStream[(String, Long)], but the actual type inferred after fold() is nowhere close.
Please help identify what is wrong with the Scala code.
EDIT1: I made the changes below, using the curried form of fold[R], and the type now conforms to the expected type, but I still couldn't figure out the reason.
val result_1: (((String, Long), WikipediaEditEvent) => (String, Long)) => DataStream[(String, Long)] =
keyedEdits.timeWindow(Time.seconds(5)).fold(("", 0L))
val result_2: DataStream[(String, Long)] = result_1((t: (String, Long), we: WikipediaEditEvent ) =>
(we.getUser, t._2 + we.getByteDiff))
The problem seems to be with the fold: you have to have a closing bracket after your accumulator initialValue, because the Scala API's fold takes the initial value and the fold function as separate (curried) argument lists. When you fix that, the code will fail to compile because it doesn't have TypeInformation available for WikipediaEditEvent. The easiest way to resolve that is to import more of the Flink Scala API, whose wildcard import brings the implicit TypeInformation derivation into scope. See below for a full example:
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.wikiedits.WikipediaEditsSource
import org.apache.flink.streaming.api.windowing.time.Time
object WikipediaAnalytics extends App {
val see: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
val edits = see.addSource(new WikipediaEditsSource())
val userEditsVolume: DataStream[(String, Int)] = edits
.keyBy(_.getUser)
.timeWindow(Time.seconds(5))
.fold(("", 0))((acc, event) => (event.getUser, acc._2 + event.getByteDiff))
userEditsVolume.print()
see.execute("Wikipedia User Edit Volume")
}
This is in relation to a question on Stack Overflow.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
ExecutorService ec = Executors.newFixedThreadPool(100);
public CompletionStage<Result> test() {
return CompletableFuture.supplyAsync(() -> {
return Ebean.createSqlQuery(sqlQuery).findList();
}, ec) // <-- 'ec' is the ExecutorService you want to use
.thenApply(rows -> {
ObjectMapper mapper = new ObjectMapper();
//do a lot of computation over rows
return ok(f(rows));
});
}
Does the program use the default execution context to do the computation f(rows), or does it use the execution context ec?
If I want to set settings similar to an Akka execution context, like
my-context {
fork-join-executor {
parallelism-factor = 20.0
parallelism-max = 200
}
}
How do I do it?
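For what it's worth, here is a minimal, hedged sketch covering both points, assuming a Play/Akka setup where an ActorSystem is available for injection; the class name, the "rows" placeholder work, and the wiring are only illustrative, while the dispatcher name my-context matches the config above:
import akka.actor.ActorSystem;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executor;

public class ExecutorContextSketch {

    private final ActorSystem actorSystem; // injected by the framework in a real app

    public ExecutorContextSketch(ActorSystem actorSystem) {
        this.actorSystem = actorSystem;
    }

    public CompletionStage<String> test() {
        // Look up the dispatcher configured as "my-context" in application.conf;
        // an Akka MessageDispatcher implements java.util.concurrent.Executor,
        // so it can be handed straight to CompletableFuture.
        Executor myContext = actorSystem.dispatchers().lookup("my-context");

        return CompletableFuture
                // supplyAsync runs the supplier on the executor you pass in.
                .supplyAsync(() -> "rows", myContext)
                // Plain thenApply gives no executor guarantee: it may run on the
                // thread that completed the previous stage or on the calling thread.
                // Use thenApplyAsync(fn, executor) to pin the computation instead.
                .thenApplyAsync(rows -> rows.toUpperCase(), myContext);
    }
}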