CompletableFuture, main never exits - java

I'm learning Java 8, and in particular CompletableFuture.
Following this interesting tutorial:
https://www.callicoder.com/java-8-completablefuture-tutorial/
I wrote the following Java class:
package parallels;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.core.Response;

import org.jboss.resteasy.client.jaxrs.ResteasyClient;
import org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder;
import org.jboss.resteasy.client.jaxrs.ResteasyWebTarget;

public class Test {

    private static final String USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0";
    private static final Executor executor = Executors.newFixedThreadPool(100);

    public static void main(String[] args) {
        List<String> webPageLinks = new ArrayList<String>();
        for (int i = 0; i < 30; i++) {
            webPageLinks.add("http://jsonplaceholder.typicode.com/todos/1");
        }

        // Download contents of all the web pages asynchronously
        List<CompletableFuture<String>> pageContentFutures = webPageLinks.stream()
                .map(webPageLink -> downloadWebPage(webPageLink))
                .collect(Collectors.toList());

        // Create a combined Future using allOf()
        CompletableFuture<Void> allFutures = CompletableFuture.allOf(
                pageContentFutures.toArray(new CompletableFuture[pageContentFutures.size()])
        );

        // When all the Futures are completed, call `future.join()` to get their results and collect the results in a list
        CompletableFuture<List<String>> allPageContentsFuture = allFutures.thenApply(v -> {
            return pageContentFutures.stream()
                    .map(pageContentFuture -> pageContentFuture.join())
                    .collect(Collectors.toList());
        });
    }

    private static CompletableFuture<String> downloadWebPage(String pageLink) {
        CompletableFuture<String> completableFuture = CompletableFuture.supplyAsync(() -> getRequest(pageLink), executor);
        return completableFuture;
    }

    public static String getRequest(String url) {
        System.out.println("getRequest");
        String resp = null;
        try {
            ResteasyClient client = new ResteasyClientBuilder().build();
            ResteasyWebTarget target = client.target(url);
            target.register((ClientRequestFilter) requestContext -> {
                requestContext.getHeaders().add("User-Agent", USER_AGENT);
            });
            Response response = target.request().get();
            resp = response.readEntity(String.class);
            System.out.println(resp);
            response.close();
            client.close();
            System.out.println("End getRequest");
        } catch (Throwable t) {
            t.printStackTrace();
        }
        return resp;
    }
}
(In order to run this code you need the "resteasy-client" library.)
But I don't understand why the main method doesn't terminate even after all the responses have been collected...
Did I miss something?
Is there some "complete" method to call somewhere, and if so, where?

Your main method completes, but the program continues running because you have created other threads which are still alive. The best solution is to call shutdown on your ExecutorService once you've submitted all your tasks to it.
Alternatively, you could create an ExecutorService that uses daemon threads (see the Thread documentation), or a ThreadPoolExecutor with allowCoreThreadTimeOut(true), or just call System.exit at the end of your main method.
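For example, a minimal, self-contained sketch of the shutdown approach (the downloading is replaced by a stand-in supplier); in the question's code the same idea means declaring executor as an ExecutorService and calling executor.shutdown() after allPageContentsFuture.join():

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ShutdownDemo {
    // Declared as ExecutorService (not Executor) so that shutdown() is available.
    private static final ExecutorService executor = Executors.newFixedThreadPool(100);

    public static void main(String[] args) {
        CompletableFuture<String> result =
                CompletableFuture.supplyAsync(() -> "pretend this is a downloaded page", executor);
        System.out.println(result.join()); // wait for the work submitted to the pool
        executor.shutdown();               // let the idle worker threads die so the JVM can exit
    }
}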

Related

How to set up several different WebFlux client properties for the different Apache Camel routes?

In the route setup we have a call to WebClient.builder() that is made before the route is declared:
@Override
public void configure() {
createSubscription(activeProfile.equalsIgnoreCase("RESTART"));
from(String.format("reactive-streams:%s", streamName))
.to("log:camel.proxy?level=INFO&groupInterval=500000")
.to(String.format("kafka:%s?brokers=%s", kafkaTopic, kafkaBrokerUrls));
}
private void createSubscription(boolean restart) {
WebClient.builder()
.defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.TEXT_XML_VALUE)
.build()
.post()
.uri(initialRequestUri)
.body(BodyInserters.fromObject(restart ? String.format(restartRequestBody, ZonedDateTime.now(ZoneId.of("UTC")).toString().replace("[UTC]", "")) : initialRequestBody))
.retrieve()
.bodyToMono(String.class)
.map(initResp ->
new JSONObject(initResp)
.getJSONObject("RESPONSE")
.getJSONArray("RESULT")
.getJSONObject(0)
.getJSONObject("INFO")
.getString("SSEURL")
)
.flatMapMany(url -> {
log.info(url);
return WebClient.create()
.get()
.uri(url)
.retrieve()
.bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<String>>() {
})
.flatMap(sse -> {
val data = new JSONObject(sse.data())
.getJSONObject("RESPONSE")
.getJSONArray("RESULT")
.getJSONObject(0)
.getJSONArray(apiName);
val list = new ArrayList<String>();
for (int i = 0; i < data.length(); i++) {
list.add(data.getJSONObject(i).toString());
}
return Flux.fromIterable(list);
}
);
}
)
.onBackpressureBuffer()
.flatMap(msg -> camelReactiveStreamsService.toStream(streamName, msg, String.class))
.doFirst(() -> log.info(String.format("Reactive stream %s was %s", streamName, restart ? "restarted" : "started")))
.doOnError(err -> {
log.error(String.format("Reactive stream %s has terminated with error, restarting", streamName), err);
createSubscription(true);
})
.doOnComplete(() -> {
log.warn(String.format("Reactive stream %s has completed, restarting", streamName));
createSubscription(true);
})
.subscribe();
}
As I understand it, this WebClient setup applies to the whole Spring Boot app, not to the specific Apache Camel route (it isn't bound to a specific route id or URL), so new routes that use new reactive streams for other URLs, with different headers or initial messages, will pick up this setup too, which isn't wanted.
So the question is: is it possible to make a specific WebClient setup that is associated not with the whole application but with a specific route, and have it applied only to that route?
Is this configuration possible with Spring DSL?
The way I got this to work is rather involved:
Create two routes. The first one is executed first and only once, and triggers a specific method of a specific bean, passing the setup for WebClient.builder() via method parameters and executing the subscription for WebFlux. Note that this reactive streams setup is done within the Spring Boot app's Spring context, not the Apache Camel context, so it has no direct association with the route other than being called for setup when that route is started. The route looks like:
<?xml version="1.0" encoding="UTF-8"?>
Provide the bean. I have put it into the Spring Boot app, not the Apache Camel context, as shown below. The drawback here is that I have to keep it there whether or not the specific route is ever used, so it is always in memory.
import org.apache.camel.CamelContext;
import org.apache.camel.component.reactive.streams.api.CamelReactiveStreamsService;
import org.json.JSONArray;
import org.json.JSONObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.codec.ServerSentEvent;
import org.springframework.stereotype.Component;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.ArrayList;
@Component
public class WebFluxSetUp {
private final Logger logger = LoggerFactory.getLogger(WebFluxSetUp.class);
private final CamelContext camelContext;
private final CamelReactiveStreamsService camelReactiveStreamsService;
WebFluxSetUp(CamelContext camelContext, CamelReactiveStreamsService camelReactiveStreamsService) {
this.camelContext = camelContext;
this.camelReactiveStreamsService = camelReactiveStreamsService;
}
public void executeWebfluxSetup(boolean restart, String initialRequestUri, String restartRequestBody, String initialRequestBody, String apiName, String streamName) {
{
WebClient.builder().defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.TEXT_XML_VALUE).build().post().uri(initialRequestUri).body(BodyInserters.fromObject(restart ? String.format(restartRequestBody, ZonedDateTime.now(ZoneId.of("UTC")).toString().replace("[UTC]", "")) : initialRequestBody)).retrieve().bodyToMono(String.class).map(initResp -> new JSONObject(initResp).getJSONObject("RESPONSE").getJSONArray("RESULT").getJSONObject(0).getJSONObject("INFO").getString("SSEURL")).flatMapMany(url -> {
logger.info(url);
return WebClient.create().get().uri(url).retrieve().bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<String>>() {
}).flatMap(sse -> {
JSONArray data = new JSONObject(sse.data()).getJSONObject("RESPONSE").getJSONArray("RESULT").getJSONObject(0).getJSONArray(apiName);
ArrayList<String> list = new ArrayList<String>();
for (int i = 0; i < data.length(); i++) {
list.add(data.getJSONObject(i).toString());
}
return Flux.fromIterable(list);
});
}).onBackpressureBuffer().flatMap(msg -> camelReactiveStreamsService.toStream(streamName, msg, String.class)).doFirst(() -> logger.info(String.format("Reactive stream %s was %s", streamName, restart ? "restarted" : "started"))).doOnError(err -> {
logger.error(String.format("Reactive stream %s has terminated with error, restarting", streamName), err);
executeWebfluxSetup(true, initialRequestUri, restartRequestBody, initialRequestBody, apiName, streamName);
}).doOnComplete(() -> {
logger.warn(String.format("Reactive stream %s has completed, restarting", streamName));
executeWebfluxSetup(true, initialRequestUri, restartRequestBody, initialRequestBody, apiName, streamName);
}).subscribe();
}
}
}
Another drawback is that when the route is stopped, the WebFlux client keeps hitting the reactive stream URL, and there is no route-associated API/event handler to stop it without hard-coding it to the specific route.
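One possible way to address that last drawback (a sketch only, not something from the original setup): keep the Disposable that Reactor's subscribe() returns and dispose it from whatever hook fires when the route is stopped. Here the SSE Flux is replaced by a stand-in interval Flux:

import java.time.Duration;
import reactor.core.Disposable;
import reactor.core.publisher.Flux;

public class SubscriptionHolder {
    private volatile Disposable subscription;

    // Start (or restart) the stream and keep the handle so it can be cancelled later.
    public void start() {
        Flux<Long> stream = Flux.interval(Duration.ofSeconds(1)); // stand-in for the SSE Flux
        subscription = stream.subscribe(v -> System.out.println("event " + v));
    }

    // Call this when the Camel route is stopped, e.g. from a RoutePolicy callback.
    public void stop() {
        if (subscription != null && !subscription.isDisposed()) {
            subscription.dispose(); // cancels the upstream subscription, stopping the SSE requests
        }
    }
}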

Java Function Timeout after 20 seconds

I'm getting started with a Lambda function for Java, and I am working through the HelloWorldFunction that was generated by sam init.
When running the sample I get only timeouts.
What should I check? What have I missed?
$ sam local invoke HelloWorldFunction --no-event
Invoking helloworld.App::handleRequest (java8)
2019-09-18 12:07:23 Found credentials in shared credentials file: ~/.aws/credentials
Fetching lambci/lambda:java8 Docker container image......
Mounting /Users/********/Documents/github/sam-app/.aws-sam/build/HelloWorldFunction as
/var/task:ro,delegated inside runtime container
START RequestId: 8a420a00-ef81-4921-9a9e-508111fc5c8a Version: $LATEST
Function 'HelloWorldFunction' timed out after 20 seconds
It's the sample that is generated with sam init.
package helloworld;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.IOException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
/**
* Handler for requests to Lambda function.
*/
public class App implements RequestHandler<Object, Object> {
public Object handleRequest(final Object input, final Context context) {
Map<String, String> headers = new HashMap<>();
headers.put("Content-Type", "application/json");
headers.put("X-Custom-Header", "application/json");
try {
final String pageContents = this.getPageContents("https://checkip.amazonaws.com");
String output = String.format("{ \"message\": \"hello world\", \"location\": \"%s\" }", pageContents);
return new GatewayResponse(output, headers, 200);
} catch (IOException e) {
return new GatewayResponse("{}", headers, 500);
}
}
private String getPageContents(String address) throws IOException{
URL url = new URL(address);
try(BufferedReader br = new BufferedReader(new InputStreamReader(url.openStream()))) {
return br.lines().collect(Collectors.joining(System.lineSeparator()));
}
}
}
I believe my issue is the proxy in my corporate environment.
How do I set the proxies in the java code?
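For example, one way to route the call through a proxy is to pass a java.net.Proxy to URL.openConnection. This is only a sketch: proxy.example.com:8080 is a placeholder for your corporate proxy, not something from the question. Alternatively, the JVM-wide -Dhttps.proxyHost/-Dhttps.proxyPort system properties can be set.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.util.stream.Collectors;

public class ProxyFetchDemo {
    private static String getPageContents(String address) throws IOException {
        // Placeholder proxy host/port; replace with your corporate proxy settings.
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy.example.com", 8080));
        URL url = new URL(address);
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(url.openConnection(proxy).getInputStream()))) {
            return br.lines().collect(Collectors.joining(System.lineSeparator()));
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(getPageContents("https://checkip.amazonaws.com"));
    }
}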

How do you read and print a chunked HTTP response using java.net.http as chunks arrive?

Java 11 introduces a new package, java.net.http, for making HTTP requests. For general usage, it's pretty straightforward.
My question is: how do I use java.net.http to handle chunked responses as each chunk is received by the client?
java.net.http contains a reactive BodySubscriber which appears to be what I want, but I can't find an example of how it's used.
http_get_demo.py
Below is a Python implementation that prints chunks as they arrive; I'd like to do the same thing with java.net.http:
import argparse
import requests
def main(url: str):
with requests.get(url, stream=True) as r:
for c in r.iter_content(chunk_size=1):
print(c.decode("UTF-8"), end="")
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Read from a URL and print as text as chunks arrive")
parser.add_argument('url', type=str, help="A URL to read from")
args = parser.parse_args()
main(args.url)
HttpGetDemo.java
Just for completeness, here's a simple example of making a blocking request using java.net.http:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpResponse;
import java.net.http.HttpRequest;
public class HttpGetDemo {
public static void main(String[] args) throws Exception {
var request = HttpRequest.newBuilder()
.uri(URI.create(args[0]))
.build();
var bodyHandler = HttpResponse.BodyHandlers
.ofString();
var client = HttpClient.newHttpClient();
var response = client.send(request, bodyHandler);
System.out.println(response.body());
}
}
HttpAsyncGetDemo.java
And here's the example making a non-blocking/async request:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpResponse;
import java.net.http.HttpRequest;
/**
* ReadChunked
*/
public class HttpAsyncGetDemo {
public static void main(String[] args) throws Exception {
var request = HttpRequest.newBuilder()
.uri(URI.create(args[0]))
.build();
var bodyHandler = HttpResponse.BodyHandlers
.ofString();
var client = HttpClient.newHttpClient();
client.sendAsync(request, bodyHandler)
.thenApply(HttpResponse::body)
.thenAccept(System.out::println)
.join();
}
}
The Python code does not ensure that the response body data is made available one HTTP chunk at a time. It just provides small amounts of data to the application, thus reducing the amount of memory consumed at the application level (it could be buffered lower in the stack). The Java 11 HTTP Client supports streaming through one of the streaming body handlers in HttpResponse.BodyHandlers: ofInputStream, ofByteArrayConsumer, ofLines, etc.
Or write your own handler / subscriber as demonstrated:
https://www.youtube.com/watch?v=qiaC0QMLz5Y
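For instance, here is a minimal sketch using BodyHandlers.ofInputStream(), which hands back an InputStream whose bytes can be read and printed as the body arrives (read sizes won't necessarily line up with HTTP chunk boundaries):

import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse.BodyHandlers;
import java.nio.charset.StandardCharsets;

public class HttpStreamingGetDemo {
    public static void main(String[] args) throws Exception {
        var request = HttpRequest.newBuilder()
                .uri(URI.create(args[0]))
                .build();
        var client = HttpClient.newHttpClient();
        var response = client.send(request, BodyHandlers.ofInputStream());
        try (InputStream in = response.body()) {
            byte[] buffer = new byte[64];
            int n;
            while ((n = in.read(buffer)) != -1) {
                // Print whatever has arrived so far.
                System.out.print(new String(buffer, 0, n, StandardCharsets.UTF_8));
            }
        }
    }
}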
You can print ByteBuffers as they come, but there's no guarantee that a ByteBuffer corresponds to a chunk. Chunks are handled by the stack. One ByteBuffer slice will be pushed for every chunk - but if there isn’t enough space remaining in the buffer, then a partial chunk will be pushed. All the consumer sees is a stream of ByteBuffers that contain the data.
So what you can do is print those ByteBuffers as they come, but you have no guarantee that they each correspond to exactly one chunk as sent by the server.
Note: If the body of your request is text based, then you can use
BodyHandlers.fromLineSubscriber(Subscriber<? super String> subscriber) with a custom Subscriber<String> that will print each line as it comes.
BodyHandlers.fromLineSubscriber does the hard work of decoding bytes into chars using the charset indicated in the response headers, buffering bytes if needed until they can be decoded (a ByteBuffer might end in the middle of an encoding sequence if the text contains chars encoded over multiple bytes), and splitting them at line boundaries. The Subscriber::onNext method will be invoked once for each line in the text. See https://download.java.net/java/early_access/jdk11/docs/api/java.net.http/java/net/http/HttpResponse.BodyHandlers.html#fromLineSubscriber(java.util.concurrent.Flow.Subscriber) for more info.
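As an illustration, a minimal line-printing subscriber might look like this (a sketch against the Java 11 API; the URL is taken from the command line as in the other examples):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse.BodyHandlers;
import java.util.concurrent.Flow;

public class LinePrinterDemo {
    // Prints each decoded line as soon as the HTTP client delivers it.
    static class LinePrinter implements Flow.Subscriber<String> {
        private Flow.Subscription subscription;

        @Override public void onSubscribe(Flow.Subscription subscription) {
            this.subscription = subscription;
            subscription.request(1); // request the first line
        }
        @Override public void onNext(String line) {
            System.out.println(line);
            subscription.request(1); // request the next line
        }
        @Override public void onError(Throwable t) { t.printStackTrace(); }
        @Override public void onComplete() { System.out.println("-- done"); }
    }

    public static void main(String[] args) throws Exception {
        var request = HttpRequest.newBuilder(URI.create(args[0])).build();
        HttpClient.newHttpClient()
                  .send(request, BodyHandlers.fromLineSubscriber(new LinePrinter()));
    }
}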
Thanks to @pavel and @chegar999 for their partial answers. They led me to my solution.
Overview
The solution I came up with is below. Basically, it uses a custom java.net.http.HttpResponse.BodySubscriber. A BodySubscriber contains reactive methods (onSubscribe, onNext, onError, and onComplete) and a getBody method that returns a CompletableFuture which will eventually produce the body of the HTTP response. Once you have your BodySubscriber in hand you can use it like:
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create(uri))
.build();
return client.sendAsync(request, responseInfo -> new StringSubscriber())
.whenComplete((r, t) -> System.out.println("--- Status code " + r.statusCode()))
.thenApply(HttpResponse::body);
Note the line:
client.sendAsync(request, responseInfo -> new StringSubscriber())
That's where we register our custom BodySubscriber; in this case, my custom class is named StringSubscriber.
CustomSubscriber.java
This is a complete working example. Using Java 11, you can run it without compiling it. Just paste it into a file named CustomSubscriber.java, then run the command java CustomSubscriber.java <some url>. It prints the contents of each chunk as it arrives. It also collects them and returns them as the body when the response has completed.
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpResponse.BodyHandlers;
import java.net.http.HttpResponse.BodySubscriber;
import java.net.URI;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Flow;
import java.util.stream.Collectors;
import java.util.List;
public class CustomSubscriber {
public static void main(String[] args) {
CustomSubscriber cs = new CustomSubscriber();
String body = cs.get(args[0]).join();
System.out.println("--- Response body:\n: ..." + body + "...");
}
public CompletableFuture<String> get(String uri) {
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create(uri))
.build();
return client.sendAsync(request, responseInfo -> new StringSubscriber())
.whenComplete((r, t) -> System.out.println("--- Status code " + r.statusCode()))
.thenApply(HttpResponse::body);
}
static class StringSubscriber implements BodySubscriber<String> {
final CompletableFuture<String> bodyCF = new CompletableFuture<>();
Flow.Subscription subscription;
List<ByteBuffer> responseData = new CopyOnWriteArrayList<>();
@Override
public CompletionStage<String> getBody() {
return bodyCF;
}
@Override
public void onSubscribe(Flow.Subscription subscription) {
this.subscription = subscription;
subscription.request(1); // Request first item
}
@Override
public void onNext(List<ByteBuffer> buffers) {
System.out.println("-- onNext " + buffers);
try {
System.out.println("\tBuffer Content:\n" + asString(buffers));
}
catch (Exception e) {
System.out.println("\tUnable to print buffer content");
}
buffers.forEach(ByteBuffer::rewind); // Rewind after reading
responseData.addAll(buffers);
subscription.request(1); // Request next item
}
@Override
public void onError(Throwable throwable) {
bodyCF.completeExceptionally(throwable);
}
@Override
public void onComplete() {
bodyCF.complete(asString(responseData));
}
private String asString(List<ByteBuffer> buffers) {
return new String(toBytes(buffers), StandardCharsets.UTF_8);
}
private byte[] toBytes(List<ByteBuffer> buffers) {
int size = buffers.stream()
.mapToInt(ByteBuffer::remaining)
.sum();
byte[] bs = new byte[size];
int offset = 0;
for (ByteBuffer buffer : buffers) {
int remaining = buffer.remaining();
buffer.get(bs, offset, remaining);
offset += remaining;
}
return bs;
}
}
}
Trying it out
To test this solution, you'll need a server that sends a response using Transfer-Encoding: chunked and sends it slowly enough to watch the chunks arrive. I've created one at https://github.com/hohonuuli/demo-chunk-server; you can spin it up using Docker like so:
docker run -p 8080:8080 hohonuuli/demo-chunk-server
Then run the CustomSubscriber.java code using java CustomSubscriber.java http://localhost:8080/chunk/10
There is now a new Java library to address this kind of requirement:
RxSON: https://github.com/rxson/rxson
It utilizes JsonPath with RxJava to read JSON chunks streamed from the response as soon as they arrive and parse them into Java objects.
Example:
String serviceURL = "https://think.cs.vt.edu/corgis/datasets/json/airlines/airlines.json";
HttpRequest req = HttpRequest.newBuilder(URI.create(serviceURL)).GET().build();
RxSON rxson = new RxSON.Builder().build();
String jsonPath = "$[*].Airport.Name";
Flowable<String> airportStream = rxson.create(String.class, req, jsonPath);
airportStream
.doOnNext(it -> System.out.println("Received new item: " + it))
//Just for test
.toList()
.blockingGet();

creating dbthreadpools in java play

This is in relation to a question on Stack Overflow.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
ExecutorService ec = Executors.newFixedThreadPool(100);
public CompletionStage<Result> test() {
    return CompletableFuture.supplyAsync(() -> {
        return Ebean.createSqlQuery(sqlQuery).findList();
    }, ec) // <-- 'ec' is the ExecutorService you want to use
    .thenApply(rows -> {
        // do a lot of computation over rows
        ObjectMapper mapper = new ObjectMapper();
        return ok(f(rows));
    });
}
Does the program use the default execution context to do the computation f(rows), or does it use the execution context ec?
If I want to use settings similar to an Akka execution context, like
my-context {
fork-join-executor {
parallelism-factor = 20.0
parallelism-max = 200
}
}
How do I do it?
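Regarding the first part, and leaving the Play/Akka wiring aside: supplyAsync(..., ec) runs the query on ec, while thenApply runs its function on whichever thread completed the previous stage (often one of ec's threads, but possibly the caller); to pin the mapping to ec explicitly, use thenApplyAsync(fn, ec). A minimal sketch of that distinction, with the Ebean query replaced by a stand-in list:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorChoiceDemo {
    private static final ExecutorService ec = Executors.newFixedThreadPool(100);

    static CompletionStage<Integer> test() {
        return CompletableFuture
                .supplyAsync(() -> Arrays.asList("row1", "row2"), ec) // "query" runs on ec
                .thenApplyAsync(List::size, ec);                      // computation also forced onto ec
    }

    public static void main(String[] args) {
        System.out.println(test().toCompletableFuture().join());
        ec.shutdown();
    }
}

For the Akka-style configuration block shown above, the usual approach in Play is to define the dispatcher in application.conf and look it up through the actor system's dispatchers to obtain an Executor to pass in place of ec; the exact wiring depends on your Play version.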

HTTP Request Object

Is there an object within standard Java SE that can accept an HTTP request from a socket? I have found how to create and send one, but I have not found a way to read an HTTP request object from a socket. I could write one myself, but I would rather rely on a heavily tested implementation.
This seems like something that would be readily available given the structure of JSP.
There is a small HTTP server in the Java 6 SDK (not sure if it will be in the JRE or in non-Sun JVM's).
From http://www.java2s.com/Code/Java/JDK-6/LightweightHTTPServer.htm :
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import java.util.concurrent.Executors;
import com.sun.net.httpserver.Headers;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
public class HttpServerDemo {
public static void main(String[] args) throws IOException {
InetSocketAddress addr = new InetSocketAddress(8080);
HttpServer server = HttpServer.create(addr, 0);
server.createContext("/", new MyHandler());
server.setExecutor(Executors.newCachedThreadPool());
server.start();
System.out.println("Server is listening on port 8080" );
}
}
class MyHandler implements HttpHandler {
public void handle(HttpExchange exchange) throws IOException {
String requestMethod = exchange.getRequestMethod();
if (requestMethod.equalsIgnoreCase("GET")) {
Headers responseHeaders = exchange.getResponseHeaders();
responseHeaders.set("Content-Type", "text/plain");
exchange.sendResponseHeaders(200, 0);
OutputStream responseBody = exchange.getResponseBody();
Headers requestHeaders = exchange.getRequestHeaders();
Set<String> keySet = requestHeaders.keySet();
Iterator<String> iter = keySet.iterator();
while (iter.hasNext()) {
String key = iter.next();
List values = requestHeaders.get(key);
String s = key + " = " + values.toString() + "\n";
responseBody.write(s.getBytes());
}
responseBody.close();
}
}
}
Yeah, you make a new HTTP Request object from what you accept on the socket. What you do after that is up to you, but it should probably involve an HTTP Response.
import java.io.*;
import java.net.*;
import java.util.*;
public final class WebServer {
public static void main(String args[]) throws Exception {
int PORT = 8080;
ServerSocket listenSocket = new ServerSocket(PORT);
while(true) {
HttpRequest request = new HttpRequest(listenSocket.accept());
Thread thread = new Thread(request);
thread.start();
}
}
}
From: http://www.devhood.com/tutorials/tutorial_details.aspx?tutorial_id=396
There's some more work to be done in the tutorial, but it does look nice.
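The HttpRequest class used above comes from that tutorial and isn't part of the JDK; conceptually it is a Runnable that parses the request from the socket and writes a response. A stripped-down, purely hypothetical stand-in might look like this:

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.Socket;

// Hypothetical stand-in for the tutorial's HttpRequest class.
final class HttpRequest implements Runnable {
    private final Socket socket;

    HttpRequest(Socket socket) {
        this.socket = socket;
    }

    public void run() {
        try (Socket s = socket;
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
            String requestLine = in.readLine(); // e.g. "GET /index.html HTTP/1.1"
            System.out.println("Received: " + requestLine);
            String body = "Hello from WebServer";
            out.writeBytes("HTTP/1.1 200 OK\r\n");
            out.writeBytes("Content-Type: text/plain\r\n");
            out.writeBytes("Content-Length: " + body.length() + "\r\n\r\n");
            out.writeBytes(body);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}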
It looks like you are looking for a Servlet. A servlet is an API that lets you receive and respond to an HTTP request.
Your servlet gets deployed in a container, which is basically the actual web server that takes care of all the protocol complexities. (The most popular are Tomcat and Jetty.)
