Creating DB thread pools in Java Play - java

This is in relation to a question on Stack Overflow.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
ExecutorService ec = Executors.newFixedThreadPool(100);
public CompletionStage<Result> test() {
    return CompletableFuture.supplyAsync(() -> {
        return Ebean.createSqlQuery(sqlQuery).findList();
    }, ec) // <-- 'ec' is the ExecutorService you want to use
    .thenApply(rows -> {
        ObjectMapper mapper = new ObjectMapper();
        // do a lot of computation over rows
        return ok(f(rows));
    });
}
Does the program use the default execution context to do the computation f(rows), or does it use the execution context ec?
If I want to configure settings similar to an Akka execution context, like:
my-context {
  fork-join-executor {
    parallelism-factor = 20.0
    parallelism-max = 200
  }
}
How do I do it?
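A minimal sketch of one way to do it, assuming Play 2.6+ where play.libs.concurrent.CustomExecutionContext is available: keep the my-context block above in application.conf, wrap that dispatcher in a CustomExecutionContext, and inject it where the controller needs an Executor.
import akka.actor.ActorSystem;
import javax.inject.Inject;
import play.libs.concurrent.CustomExecutionContext;

// Exposes the "my-context" dispatcher from application.conf as an Executor
// that can be injected into a controller and passed to supplyAsync(...).
public class MyExecutionContext extends CustomExecutionContext {
    @Inject
    public MyExecutionContext(ActorSystem actorSystem) {
        super(actorSystem, "my-context"); // name must match the config block
    }
}
As for the first question: thenApply runs its function either on the thread that completed the previous stage or on the calling thread, not necessarily on ec; to force f(rows) onto your pool, use thenApplyAsync(fn, ec).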

Related

How to set up several different WebFlux client properties for the different Apache Camel routes?

In the route setup we have a call to WebClient.builder() being made before the route is declared:
@Override
public void configure() {
createSubscription(activeProfile.equalsIgnoreCase("RESTART"));
from(String.format("reactive-streams:%s", streamName))
.to("log:camel.proxy?level=INFO&groupInterval=500000")
.to(String.format("kafka:%s?brokers=%s", kafkaTopic, kafkaBrokerUrls));
}
private void createSubscription(boolean restart) {
WebClient.builder()
.defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.TEXT_XML_VALUE)
.build()
.post()
.uri(initialRequestUri)
.body(BodyInserters.fromObject(restart ? String.format(restartRequestBody, ZonedDateTime.now(ZoneId.of("UTC")).toString().replace("[UTC]", "")) : initialRequestBody))
.retrieve()
.bodyToMono(String.class)
.map(initResp ->
new JSONObject(initResp)
.getJSONObject("RESPONSE")
.getJSONArray("RESULT")
.getJSONObject(0)
.getJSONObject("INFO")
.getString("SSEURL")
)
.flatMapMany(url -> {
log.info(url);
return WebClient.create()
.get()
.uri(url)
.retrieve()
.bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<String>>() {
})
.flatMap(sse -> {
val data = new JSONObject(sse.data())
.getJSONObject("RESPONSE")
.getJSONArray("RESULT")
.getJSONObject(0)
.getJSONArray(apiName);
val list = new ArrayList<String>();
for (int i = 0; i < data.length(); i++) {
list.add(data.getJSONObject(i).toString());
}
return Flux.fromIterable(list);
}
);
}
)
.onBackpressureBuffer()
.flatMap(msg -> camelReactiveStreamsService.toStream(streamName, msg, String.class))
.doFirst(() -> log.info(String.format("Reactive stream %s was %s", streamName, restart ? "restarted" : "started")))
.doOnError(err -> {
log.error(String.format("Reactive stream %s has terminated with error, restarting", streamName), err);
createSubscription(true);
})
.doOnComplete(() -> {
log.warn(String.format("Reactive stream %s has completed, restarting", streamName));
createSubscription(true);
})
.subscribe();
}
To my understanding, the WebClient setup is made for the whole Spring Boot app and not for a specific Apache Camel route (it isn't bound to a specific route id or url), which is why new routes using new reactive streams for other urls, with other header/initial-message needs, will get this setup too, which isn't wanted.
So, the question here: is it possible to make a specific WebClient setup that is associated not with the whole application but with a specific route, and have it applied only to that route?
Is this configuration possible with Spring DSL?
The way I ended up applying it is rather complex:
Create 2 routes. The first one is executed first and only once, and triggers a specific method of a specific bean, passing the setup for WebClient.builder() via method parameters and executing the subscription for WebFlux. And yes, that reactive streams setup is done within the Spring Boot app's Spring context, not the Apache Camel context, so it has no direct association with the route other than being called for setup when the specific route is started. The route looks like:
<?xml version="1.0" encoding="UTF-8"?>
Provide the bean. I have put it into the Spring Boot app, not the Apache Camel context, like below. The drawback here is that I have to put it there no matter whether the specific route will run or not, so it is always in memory.
import org.apache.camel.CamelContext;
import org.apache.camel.component.reactive.streams.api.CamelReactiveStreamsService;
import org.json.JSONArray;
import org.json.JSONObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.codec.ServerSentEvent;
import org.springframework.stereotype.Component;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.ArrayList;
@Component
public class WebFluxSetUp {
private final Logger logger = LoggerFactory.getLogger(WebFluxSetUp.class);
private final CamelContext camelContext;
private final CamelReactiveStreamsService camelReactiveStreamsService;
WebFluxSetUp(CamelContext camelContext, CamelReactiveStreamsService camelReactiveStreamsService) {
this.camelContext = camelContext;
this.camelReactiveStreamsService = camelReactiveStreamsService;
}
public void executeWebfluxSetup(boolean restart, String initialRequestUri, String restartRequestBody, String initialRequestBody, String apiName, String streamName) {
        WebClient.builder().defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.TEXT_XML_VALUE).build()
                .post().uri(initialRequestUri)
                .body(BodyInserters.fromObject(restart
                        ? String.format(restartRequestBody, ZonedDateTime.now(ZoneId.of("UTC")).toString().replace("[UTC]", ""))
                        : initialRequestBody))
                .retrieve().bodyToMono(String.class)
                .map(initResp -> new JSONObject(initResp).getJSONObject("RESPONSE").getJSONArray("RESULT")
                        .getJSONObject(0).getJSONObject("INFO").getString("SSEURL"))
                .flatMapMany(url -> {
                    logger.info(url);
                    return WebClient.create().get().uri(url).retrieve()
                            .bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<String>>() {})
                            .flatMap(sse -> {
                                JSONArray data = new JSONObject(sse.data()).getJSONObject("RESPONSE")
                                        .getJSONArray("RESULT").getJSONObject(0).getJSONArray(apiName);
                                ArrayList<String> list = new ArrayList<>();
                                for (int i = 0; i < data.length(); i++) {
                                    list.add(data.getJSONObject(i).toString());
                                }
                                return Flux.fromIterable(list);
                            });
                })
                .onBackpressureBuffer()
                .flatMap(msg -> camelReactiveStreamsService.toStream(streamName, msg, String.class))
                .doFirst(() -> logger.info(String.format("Reactive stream %s was %s", streamName, restart ? "restarted" : "started")))
                .doOnError(err -> {
                    logger.error(String.format("Reactive stream %s has terminated with error, restarting", streamName), err);
                    executeWebfluxSetup(true, initialRequestUri, restartRequestBody, initialRequestBody, apiName, streamName);
                })
                .doOnComplete(() -> {
                    logger.warn(String.format("Reactive stream %s has completed, restarting", streamName));
                    executeWebfluxSetup(true, initialRequestUri, restartRequestBody, initialRequestBody, apiName, streamName);
                })
                .subscribe();
}
}
Another drawback: when the route is stopped, the WebFlux client keeps trying to spam the reactive stream url, and there is no route-associated api/event handler to stop it without hard-coding it to the specific route.
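One possible mitigation (my sketch, not part of the original setup): capture the reactor.core.Disposable returned by .subscribe() (e.g. have executeWebfluxSetup return it) and dispose it from a Camel RoutePolicy attached to the route. This assumes Camel 3.x, where RoutePolicySupport lives in org.apache.camel.support; the policy class name is mine.
import org.apache.camel.Route;
import org.apache.camel.support.RoutePolicySupport;
import reactor.core.Disposable;

// Hypothetical route policy: cancels the WebFlux subscription when the
// Camel route it is attached to stops, so the SSE client stops re-polling.
public class WebFluxStopPolicy extends RoutePolicySupport {
    private final Disposable subscription; // the value returned by .subscribe()

    public WebFluxStopPolicy(Disposable subscription) {
        this.subscription = subscription;
    }

    @Override
    public void onStop(Route route) {
        if (!subscription.isDisposed()) {
            subscription.dispose();
        }
    }
}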

Spring Integration DSL Custom Error Channel Issue with Executor

Hi, I have a file listener that is reading files in parallel / more than one at a time:
package com.example.demo.flow;
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.*;
import org.springframework.integration.dsl.channel.MessageChannels;
import org.springframework.integration.file.dsl.Files;
import org.springframework.stereotype.Component;
import java.io.File;
import java.util.concurrent.Executors;
/**
* Created by muhdk on 03/01/2020.
*/
@Component
@Slf4j
public class TestFlow {
@Bean
public StandardIntegrationFlow errorChannelHandler() {
return IntegrationFlows.from("testChannel")
.handle(o -> {
log.info("Handling error....{}", o);
}).get();
}
@Bean
public IntegrationFlow testFile() {
IntegrationFlowBuilder testChannel = IntegrationFlows.from(Files.inboundAdapter(new File("d:/input-files/")),
e -> e.poller(Pollers.fixedDelay(5000L).maxMessagesPerPoll(5)
.errorChannel("testChannel")))
.channel(MessageChannels.executor(Executors.newFixedThreadPool(5)))
.transform(o -> {
throw new RuntimeException("Failing on purpose");
}).handle(o -> {
});
return testChannel.get();
}
}
It's not going to my custom error channel,
but if I remove the line
.channel(MessageChannels.executor(Executors.newFixedThreadPool(5)))
then it goes to the error channel.
How can I make it go to my custom error channel with the executor in place?
It looks like, when using executor channels with multiple messages, errors don't reach the normal errorChannel, and I have no idea why.
I made a change like this
@Bean
public IntegrationFlow testFile() {
IntegrationFlowBuilder testChannel = IntegrationFlows.from(Files.inboundAdapter(new File("d:/input-files/")),
e -> e.poller(Pollers.fixedDelay(5000L).maxMessagesPerPoll(10)
))
.enrichHeaders(h -> h.header(MessageHeaders.ERROR_CHANNEL, "testChannel"))
.channel(MessageChannels.executor(Executors.newFixedThreadPool(5)))
.transform(o -> {
throw new RuntimeException("Failing on purpose");
}).handle(o -> {
});
return testChannel.get();
}
The key is the line
.enrichHeaders(h -> h.header(MessageHeaders.ERROR_CHANNEL, "testChannel"))
which sets the errorChannel header on each message before the executor hand-off. The poller's errorChannel only applies to errors thrown on the poller's own thread; once the message crosses to the executor's thread, failures are routed to the channel named in that header instead. The rest remains the same, and it works.

CompletableFuture, main never exits

I'm learning Java 8 and, more in detail, the "CompletableFuture".
Following this interesting tutorial:
https://www.callicoder.com/java-8-completablefuture-tutorial/
I wrote the following Java class:
package parallels;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.core.Response;
import org.jboss.resteasy.client.jaxrs.ResteasyClient;
import org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder;
import org.jboss.resteasy.client.jaxrs.ResteasyWebTarget;
public class Test {
private static final String USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0";
private static final Executor executor = Executors.newFixedThreadPool(100);
public static void main(String[] args) {
List<String> webPageLinks= new ArrayList<String>();
for (int i=0;i<30;i++) {
webPageLinks.add("http://jsonplaceholder.typicode.com/todos/1");
}
// Download contents of all the web pages asynchronously
List<CompletableFuture<String>> pageContentFutures = webPageLinks.stream()
.map(webPageLink -> downloadWebPage(webPageLink))
.collect(Collectors.toList());
// Create a combined Future using allOf()
CompletableFuture<Void> allFutures = CompletableFuture.allOf(
pageContentFutures.toArray(new CompletableFuture[pageContentFutures.size()])
);
// When all the Futures are completed, call `future.join()` to get their results and collect the results in a list -
CompletableFuture<List<String>> allPageContentsFuture = allFutures.thenApply(v -> {
return pageContentFutures.stream()
.map(pageContentFuture -> pageContentFuture.join())
.collect(Collectors.toList());
});
}
private static CompletableFuture<String> downloadWebPage(String pageLink) {
CompletableFuture<String> completableFuture = CompletableFuture.supplyAsync(() -> getRequest(pageLink),executor);
return completableFuture;
}
public static String getRequest(String url) {
System.out.println("getRequest");
String resp =null;
try {
ResteasyClient client = new ResteasyClientBuilder().build();
ResteasyWebTarget target = client.target(url);
target.register((ClientRequestFilter) requestContext -> {
requestContext.getHeaders().add("User-Agent",USER_AGENT);
});
Response response = target.request().get();
resp= response.readEntity(String.class);
System.out.println(resp);
response.close();
client.close();
System.out.println("End getRequest");
}catch(Throwable t) {
t.printStackTrace();
}
return resp;
}
}
(In order to run that code you need the "resteasy-client" library.)
But I don't understand why, even when all the responses are collected, the main method doesn't terminate...
Did I miss something?
Is there some "complete" method to call somewhere, and if yes, where?
Your main method completes, but the program continues running because you have created other threads which are still alive. The best solution is to call shutdown() on your ExecutorService once you've submitted all your tasks to it.
Alternatively, you could create an ExecutorService which uses daemon threads (see the Thread documentation), or a ThreadPoolExecutor with allowCoreThreadTimeOut(true), or just call System.exit at the end of your main method.
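For example, a minimal sketch based on the code above: wait for the combined future, then shut the pool down so its non-daemon worker threads can exit. This assumes the executor field is declared as ExecutorService rather than Executor, so that shutdown() is visible:
private static final ExecutorService executor = Executors.newFixedThreadPool(100);

public static void main(String[] args) {
    // ... build pageContentFutures and allPageContentsFuture exactly as above ...
    List<String> allPageContents = allPageContentsFuture.join(); // block until every download finishes
    System.out.println("Downloaded " + allPageContents.size() + " pages");
    executor.shutdown(); // no new tasks accepted; idle workers terminate and the JVM can exit
}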

Kinesis firehose data transformation using Java

When using a Java Lambda function to do a Kinesis Data Firehose transformation, I'm getting the below error. This is what my transformed JSON looks like:
{
  "records": [
    {
      "recordId": "49586022990098427206724983301551059982279766660054253570000000",
      "result": "Ok",
      "data": "ZXlKMGFXTnJaWEpmYzNsdFltOXNJam9pVkVWVFZEY2lMQ0FpYzJWamRHOXlJam9pU0VWQlRGUklRMEZTUlNJc0lDSmphR0Z1WjJVaQ0KT2kwd0xqQTFMQ0FpY0hKcFkyVWlPamcwTGpVeGZRbz0="
    }
  ]
}
The error in the Kinesis console is:
Invalid output structure: Please check your function and make sure the processed records contain valid result status of Dropped, Ok, or ProcessingFailed
Does anyone have an idea on this? I could not find example code using Java for Kinesis data transformation.
https://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html
This document describes the required output structure.
I just got done struggling through this in Scala (Java compatible). The key is to use the return type com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsInputPreprocessingResponse:
import java.nio.ByteBuffer
import com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsInputPreprocessingResponse._
import com.amazonaws.services.lambda.runtime.events.{KinesisAnalyticsInputPreprocessingResponse, KinesisFirehoseEvent}
import com.amazonaws.services.lambda.runtime.{Context, LambdaLogger, RequestHandler}
import scala.collection.JavaConversions._
import scala.language.implicitConversions
class Handler extends RequestHandler[KinesisFirehoseEvent, KinesisAnalyticsInputPreprocessingResponse] {
override def handleRequest(in: KinesisFirehoseEvent, context: Context): KinesisAnalyticsInputPreprocessingResponse = {
val logger: LambdaLogger = context.getLogger
val records = in.getRecords
val tranformed = records.flatMap(record => {
try {
val changed = record.getData.array()
//do some sort of transform
val rec = new Record(record.getRecordId, Result.Ok, ByteBuffer.wrap(changed))
Some(rec)
} catch {
case e: Exception => {
logger.log(e.toString)
Some(new Record(record.getRecordId, Result.Dropped, record.getData))
}
}
})
val response = new KinesisAnalyticsInputPreprocessingResponse()
response.setRecords(tranformed.toList)
response
}
}
A Java example:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsInputPreprocessingResponse;
import com.amazonaws.services.lambda.runtime.events.KinesisFirehoseEvent;
import com.fasterxml.jackson.databind.ObjectMapper;
import lombok.RequiredArgsConstructor;
import lombok.extern.log4j.Log4j2;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import java.nio.charset.StandardCharsets;
@Log4j2
@RequiredArgsConstructor
public class FirehoseHandler implements RequestHandler<KinesisFirehoseEvent, KinesisAnalyticsInputPreprocessingResponse> {
    private final ObjectMapper mapper;
    @Override
public KinesisAnalyticsInputPreprocessingResponse handleRequest(KinesisFirehoseEvent kinesisFirehoseEvent, Context context) {
return Flux.fromIterable(kinesisFirehoseEvent.getRecords())
.flatMap(this::transformRecord)
.collectList()
.map(KinesisAnalyticsInputPreprocessingResponse::new)
.block();
}
private Mono<KinesisAnalyticsInputPreprocessingResponse.Record> transformRecord(KinesisFirehoseEvent.Record record) {
return Mono.just(record.getData())
.map(StandardCharsets.UTF_8::decode)
.flatMap(data -> Mono.fromCallable(() -> doYourOwnThing(data)))
.map(StandardCharsets.UTF_8::encode)
.map(data -> new KinesisAnalyticsInputPreprocessingResponse.Record(record.getRecordId(), KinesisAnalyticsInputPreprocessingResponse.Result.Ok, data))
.onErrorResume(e -> Mono.just(new KinesisAnalyticsInputPreprocessingResponse.Record(record.getRecordId(), KinesisAnalyticsInputPreprocessingResponse.Result.ProcessingFailed, record.getData())));
}
}

Java SchedulerExecutor

Recently I wrote code that had to limit request throughput. I used ScheduledExecutorService.scheduleAtFixedRate and I believed it would do the job (it did!), but I wrote a test to check the timing of the scheduled tasks and I was amazed: the first few tasks weren't scheduled at n*period as the javadoc explains. Can anyone explain what I am missing?
If it works that way, why is it not mentioned in the javadoc? And then the question is: how exactly does the scheduler work?
I would like to avoid looking into the sources :)
Example:
import java.time.Duration;
import java.time.LocalTime;
import java.time.temporal.ChronoUnit;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
public class ExecutorTest {
Executor executor;
ScheduledExecutorService schedulingExecutor;
BlockingQueue<LocalTime> times;
public static void main(String[] args) throws InterruptedException {
new ExecutorTest().start();
}
public ExecutorTest() {
schedulingExecutor = Executors.newScheduledThreadPool(1);
executor = Executors.newCachedThreadPool();
times = new LinkedBlockingQueue<>();
}
public void start() throws InterruptedException {
schedulingExecutor.scheduleAtFixedRate(this::executeTask, 0, 50, TimeUnit.MILLISECONDS);
LocalTime nextEvaluatedTime = times.take();
LocalTime time = nextEvaluatedTime;
while (true) {
System.out.println(String.format(String.join(" ", "recorded time: %d", "calculated proper time: %d", "diff: %d"),
time.toNanoOfDay(),
nextEvaluatedTime.toNanoOfDay(),
Duration.between(nextEvaluatedTime, time).toNanos()));
nextEvaluatedTime = time.plus(50, ChronoUnit.MILLIS);
time = times.take();
}
}
private void executeTask() {
executor.execute(() -> {
times.add(LocalTime.now());
});
}
}
If you run this program you can see that the first few times weren't recorded as expected. Why?
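One thing worth checking (a guess based on the test's structure, not a definitive diagnosis): the timestamp is taken inside a task submitted to a second, cached thread pool, so the first few measurements include the cost of creating those pool threads plus the hand-off, on top of whatever the scheduler itself does. A minimal change that measures only the scheduler would be to record the time on the scheduling thread, before the hand-off:
private void executeTask() {
    times.add(LocalTime.now()); // recorded on the scheduler thread, before the executor hand-off
    executor.execute(() -> {
        // ... the actual rate-limited work ...
    });
}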
