"url": "https://asia-east2-jsondoc.cloudfunctions.net/function-1?delay=1000" // URL that takes 1000 ms to return
"isParallel": true,
"count": "3"
isParallel = true means make parallel calls; false means make sequential calls. count represents the number of parallel or sequential calls to make.
I have to call the above endpoint, and the total time taken should be about 1 second.
How can I call the REST endpoint with concurrent requests? I know how to make single-threaded calls using RestTemplate.
Use RestTemplate with ExecutorService
Using ExecutorService to perform 3 concurrent calls with RestTemplate:
String url = "https://asia-east2-jsondoc.cloudfunctions.net/function-1?delay=1000";
RestTemplate restTemplate = new RestTemplate();
ExecutorService executor = Executors.newFixedThreadPool(3);
Future<String> future1 = executor.submit(() -> restTemplate.getForObject(url, String.class));
Future<String> future2 = executor.submit(() -> restTemplate.getForObject(url, String.class));
Future<String> future3 = executor.submit(() -> restTemplate.getForObject(url, String.class));
String response1 = future1.get();
String response2 = future2.get();
String response3 = future3.get();
executor.shutdown();
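Since the three calls run in parallel, the total wall-clock time should stay close to a single call's latency (about 1 second), not 3 seconds. A minimal, self-contained sketch of that timing behavior, where a hypothetical Thread.sleep stands in for the real HTTP call so the example runs without network access:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelTimingDemo {

    // Runs `count` tasks that each block for `delayMs` ms (standing in for the
    // 1000 ms endpoint) and returns the total wall-clock time in ms.
    static long runParallel(int count, long delayMs) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(count);
        long start = System.currentTimeMillis();
        List<Future<String>> futures = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            futures.add(executor.submit(() -> {
                Thread.sleep(delayMs); // stand-in for restTemplate.getForObject(url, String.class)
                return "response";
            }));
        }
        for (Future<String> f : futures) {
            f.get(); // wait for each call to complete
        }
        executor.shutdown();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws Exception {
        // With 3 threads, total time is close to the single-call delay
        System.out.println("Elapsed: " + runParallel(3, 1000) + " ms");
    }
}
```

If the pool had only one thread, the same three tasks would run sequentially and the total would be roughly three times the delay.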
Use reactive WebClient
Using reactive WebClient to perform 3 concurrent calls and display the response in subscribe callback:
String url = "https://asia-east2-jsondoc.cloudfunctions.net/function-1?delay=1000";
WebClient webClient = WebClient.builder().build();
Mono<String> mono1 = webClient.get().uri(url).retrieve().bodyToMono(String.class);
Mono<String> mono2 = webClient.get().uri(url).retrieve().bodyToMono(String.class);
Mono<String> mono3 = webClient.get().uri(url).retrieve().bodyToMono(String.class);
Flux.merge(mono1, mono2, mono3).subscribe(System.out::println);
I am new to vertx and RxJava. I am trying to implement a simple test program. However, I am not able to understand the dynamics of this program. Why do some requests take more than 10 seconds to respond?
Below is my sample Test application
public class Test {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        WebClient webClient = WebClient.create(vertx);
        Observable<Object> google = hitURL("www.google.com", webClient);
        Observable<Object> yahoo = hitURL("www.yahoo.com", webClient);
        for (int i = 0; i < 100; i++) {
            google.repeat(100).subscribe(timeTaken -> {
                if ((Long) timeTaken > 10000) {
                    System.out.println(timeTaken);
                }
            }, error -> {
                System.out.println(error.getMessage());
            });
            yahoo.repeat(100).subscribe(timeTaken -> {
                if ((Long) timeTaken > 10000) {
                    System.out.println(timeTaken);
                }
            }, error -> {
                System.out.println(error.getMessage());
            });
        }
    }

    public static Observable<Object> hitURL(String url, WebClient webClient) {
        return Observable.create(emitter -> {
            Long l1 = System.currentTimeMillis();
            webClient.get(80, url, "").send(ar -> {
                if (ar.succeeded()) {
                    Long elapsedTime = (System.currentTimeMillis() - l1);
                    emitter.onNext(elapsedTime);
                } else {
                    emitter.onError(ar.cause());
                }
                emitter.onComplete();
            });
        });
    }
}
What I want to know is, what is making my response time slow?
The problem here seems to be in the way you are using WebClient and/or the way you are measuring "response" times (depending on what you are trying to achieve here).
Vert.x's WebClient, like most HTTP clients, uses a limited-size connection pool under the hood to send requests. In other words, calling .send(...) does not necessarily start the HTTP request immediately - instead, it may wait in a queue for an available connection. Your measurements include this potential waiting time.
You are using the default pool size, which seems to be 5 (at least in the latest version of Vert.x - it's defined here), and you are starting hundreds of HTTP requests almost immediately. It's not surprising that most of the time your requests are waiting for an available connection.
You might try increasing the pool size if you want to test if I'm right:
WebClient webClient = WebClient.create(vertx, new WebClientOptions().setMaxPoolSize(...));
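The queueing effect described above can be reproduced with a plain JDK thread pool, no Vert.x required: submit many jobs to a small pool and measure "latency" from submission time, the way the original code measures from before send(). The 50 ms sleep below is a hypothetical stand-in for a network call; the point is that later tasks' measured times include the time spent waiting for a free worker:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolQueueingDemo {

    // Submits `tasks` jobs of `jobMs` ms each to a pool of `poolSize` threads and
    // returns each job's latency measured from submission (includes queue wait).
    static List<Long> measure(int poolSize, int tasks, long jobMs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < tasks; i++) {
            long submittedAt = System.currentTimeMillis();
            futures.add(pool.submit(() -> {
                Thread.sleep(jobMs); // stand-in for the actual HTTP exchange
                return System.currentTimeMillis() - submittedAt;
            }));
        }
        List<Long> latencies = new ArrayList<>();
        for (Future<Long> f : futures) {
            latencies.add(f.get());
        }
        pool.shutdown();
        return latencies;
    }

    public static void main(String[] args) throws Exception {
        List<Long> latencies = measure(5, 20, 50);
        // The first ~5 tasks measure ~50 ms; the last ones measure several times
        // that, because they waited for a free "connection" (thread) first.
        System.out.println("first: " + latencies.get(0) + " ms, last: "
                + latencies.get(latencies.size() - 1) + " ms");
    }
}
```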
I am using spring boot 2.x and making two async call using webclient, I am getting proper response with one call while other call encounter some exception.
I want to zip both responses together using the zip method, but when using block with zip it throws an exception and control flows to the catch block. I want both responses zipped, along with the exception in one or both. Please guide me on how to do that.
Mono<BookResponse> bookResponseMono = webClient.get()
        .uri("/getBooking/" + bookingId)
        .headers(headers -> headers.addAll(args))
        .retrieve()
        .bodyToMono(BookResponse.class); // returns a proper response
Mono<Address> addressResponseMono = webClient.get()
        .uri("/getAddress/" + bookingId)
        .headers(headers -> headers.addAll(args))
        .retrieve()
        .bodyToMono(Address.class); // encounters a read timeout exception

Tuple2<BookResponse, Address> resp = bookResponseMono.zipWith(addressResponseMono).block(); // throws the exception, but
I want to zip both responses along with the exception.
onErrorResume worked for me for the above problem.
bookResponseMono = webClient.get()
        .uri("/getBooking/" + bookingId)
        .headers(headers -> headers.addAll(args))
        .retrieve()
        .bodyToMono(BookResponse.class)
        .onErrorResume(err -> {
            BookResponse bookResponse = new BookResponse();
            bookResponse.setError(setError(err));
            return Mono.just(bookResponse);
        });
addressResponseMono = webClient.get()
        .uri("/getAddress/" + bookingId)
        .headers(headers -> headers.addAll(args))
        .retrieve()
        .bodyToMono(Address.class)
        .onErrorResume(err -> {
            Address address = new Address();
            address.setError(setError(err));
            return Mono.just(address);
        });
Zipping at the end:
bookAndAddressResponse = bookResponseMono
.zipWith(addressResponseMono, BookAndAddressResponse::new)
.block();
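The same fall-back-then-combine pattern can be sketched with plain CompletableFutures (JDK only): each call recovers to a default object via exceptionally (the JDK analogue of onErrorResume), and thenCombine plays the role of zipWith. The Booking and Address records here are simplified, hypothetical stand-ins for the BookResponse and Address types above:

```java
import java.util.concurrent.CompletableFuture;

public class ZipWithFallbackDemo {

    // Simplified stand-ins for BookResponse / Address from the question.
    record Booking(String value, String error) {}
    record Address(String value, String error) {}

    static CompletableFuture<Booking> fetchBooking(boolean fail) {
        return CompletableFuture.supplyAsync(() -> {
            if (fail) throw new RuntimeException("read timeout");
            return new Booking("booking-42", null);
        // exceptionally receives a CompletionException wrapping the cause
        }).exceptionally(err -> new Booking(null, err.getCause().getMessage()));
    }

    static CompletableFuture<Address> fetchAddress(boolean fail) {
        return CompletableFuture.supplyAsync(() -> {
            if (fail) throw new RuntimeException("read timeout");
            return new Address("address-7", null);
        }).exceptionally(err -> new Address(null, err.getCause().getMessage()));
    }

    // Combine both results; a failure on either side no longer aborts the pair.
    static String zip(boolean failBooking, boolean failAddress) {
        return fetchBooking(failBooking)
                .thenCombine(fetchAddress(failAddress), (b, a) -> b + " + " + a)
                .join(); // analogous to block()
    }

    public static void main(String[] args) {
        System.out.println(zip(false, true));
    }
}
```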
I have a servlet request that basically requests data given by an input date. As I have multiple dates, I have to send multiple requests, and then aggregate the results. For example:
List<Result> results = new ArrayList<>();
for (LocalDate date : dates) {
ServletReq req = new ServletReq(date);
try {
ServletRsp rsp = webservice.send(req);
results.addAll(rsp.getResults());
} catch (SpecificException e) {
//just ignore this result and continue
}
}
Question: how can I parallelize the code above? That is: send multiple ServletReq asynchronously, collect the results into the list, wait for all requests to finish (maybe with a timeout), and ignore any SpecificException.
I started as follows, but I don't know whether this is the right direction, and I haven't managed to fully translate the code above - especially regarding the exception that should be ignored.
ExecutorService service = Executors.newCachedThreadPool();
List<CompletableFuture<ServletRsp>> futures = new ArrayList<>();
for (LocalDate date : dates) {
ServletReq req = new ServletReq(date);
CompletableFuture future = CompletableFuture.supplyAsync(() -> webservice.send(req), service);
futures.add(future);
}
CompletableFuture.allOf(futures.toArray(new CompletableFuture[futures.size()])).join();
So far, but: How can I call rsp.getResults() on the async result, and put everything into the list. And how can I ignore the SpecificException during the async execution? (I cannot modify the webservice.send() method!).
Catch them within the supplier and return e.g. null. Only do that if you would really do nothing with the exception anyway. To get the results at future.get(), you then have to deal with null and ExecutionExceptions.
E.g.
CompletableFuture<ServletRsp> future = CompletableFuture.supplyAsync(() -> {
try {
return webservice.send(new ServletReq(date));
} catch (SpecificException e) {
return null;
}
});
Rethrow them as a (custom?) RuntimeException so you don't lose them. Now you deal with just exceptions in the end, but some are double-wrapped.
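A sketch of the rethrow option, with a hypothetical checked SpecificException and a stubbed send() standing in for the real webservice.send():

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class RethrowDemo {

    // Hypothetical checked exception from the question.
    static class SpecificException extends Exception {
        SpecificException(String msg) { super(msg); }
    }

    // Stub for webservice.send(new ServletReq(date)).
    static String send(boolean fail) throws SpecificException {
        if (fail) throw new SpecificException("backend unavailable");
        return "rsp";
    }

    static CompletableFuture<String> sendAsync(boolean fail) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                return send(fail);
            } catch (SpecificException e) {
                // Wrap the checked exception so it survives the lambda boundary;
                // callers see it double-wrapped inside a CompletionException.
                throw new RuntimeException(e);
            }
        });
    }

    public static void main(String[] args) {
        try {
            sendAsync(true).join();
        } catch (CompletionException e) {
            // unwrap: CompletionException -> RuntimeException -> SpecificException
            System.out.println(e.getCause().getCause().getMessage());
        }
    }
}
```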
Manually complete the future.
E.g.
CompletableFuture<ServletRsp> future = new CompletableFuture<>();
service.execute(() -> {
    try {
        future.complete(webservice.send(new ServletReq(date)));
    } catch (SpecificException e) {
        future.completeExceptionally(e);
    }
});
futures.add(future);
No more wrapping besides the ExecutionException. CompletableFuture.supplyAsync does about exactly that, but has no code to deal with checked exceptions.
Just use the good old ExecutorService#submit(Callable<T> callable) method which accepts code that throws:
e.g.
List<Callable<ServletRsp>> tasks = dates.stream()
    .map(d -> (Callable<ServletRsp>) () -> webservice.send(new ServletReq(d)))
    .collect(Collectors.toList());
List<Future<ServletRsp>> completed = service.invokeAll(tasks);
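To then collect the successful responses while ignoring the tasks that failed with the exception, one can inspect each Future's ExecutionException cause. A sketch with hypothetical stub tasks in place of the real ServletReq/webservice calls:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllDemo {

    // Stand-in for the question's SpecificException.
    static class SpecificException extends Exception {}

    static List<String> runAll(List<Callable<String>> tasks) throws InterruptedException {
        ExecutorService service = Executors.newCachedThreadPool();
        List<String> results = new ArrayList<>();
        // invokeAll blocks until every task has completed (or failed)
        for (Future<String> f : service.invokeAll(tasks)) {
            try {
                results.add(f.get());
            } catch (ExecutionException e) {
                if (!(e.getCause() instanceof SpecificException)) {
                    throw new RuntimeException(e.getCause()); // unexpected failure
                }
                // SpecificException: just skip this result, as in the original loop
            }
        }
        service.shutdown();
        return results;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Callable<String>> tasks = List.of(
                () -> "ok-1",
                () -> { throw new SpecificException(); },
                () -> "ok-2");
        System.out.println(runAll(tasks)); // prints [ok-1, ok-2]
    }
}
```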
I think you're on a good path there.
The issue is that there is no mechanism to nicely collect the results, except doing it yourself:
ExecutorService service = Executors.newCachedThreadPool();
List<CompletableFuture<Void>> futures = new ArrayList<>(); // these are only references to tell you when the request finishes
Queue<ServletRsp> results = new ConcurrentLinkedQueue<>(); // this has to be thread-safe
for (LocalDate date : dates) {
ServletReq req = new ServletReq(date);
CompletableFuture<Void> future = CompletableFuture
.supplyAsync(() -> webservice.send(req), service)
.thenAcceptAsync(results::add);
futures.add(future);
}
CompletableFuture.allOf(futures.toArray(new CompletableFuture[futures.size()])).join();
// do stuff with results
I've tried to keep most of the code as you've written it. Maybe it's a bit cleaner with streams:
List<CompletableFuture<Void>> collect = dates.stream()
.map(date -> CompletableFuture
.supplyAsync(() -> webservice.send(new ServletReq(date)), service)
.thenAcceptAsync(results::add))
.collect(Collectors.toList());
// wait for all requests to finish
CompletableFuture.allOf(collect.toArray(new CompletableFuture[collect.size()])).thenAcceptAsync(ignored -> {
//you can also handle the response async.
});
I have the following problem: in my service I am building an object X, but in order to build it I need to make a few HTTP calls to get all the required data (each REST call fills a certain part of the object). To keep performance high, I thought it would be nice to make the calls asynchronously and return the object to the caller once all calls are done. It looks something like this:
ListenableFuture<ResponseEntity<String>> future1 = asycTemp.exchange(url, method, requestEntity, responseType);
future1.addCallback({
    // process response and set fields
    complexObject.field1 = "PARSED RESPONSE"
}, {
    // in case of failure, fill defaults or take some other action
})
I don't know how to wait for all the futures to be done. I guess there are some standard Spring ways of solving this kind of issue. Thanks in advance for any suggestions. Spring version: 4.2.4.RELEASE.
Best regards
Adapted from Waiting for callback for multiple futures.
This example simply requests the Google and Microsoft homepages. When the response is received in the callback, and I've done my processing, I decrement a CountDownLatch. I await the CountDownLatch, "blocking" the current thread until the CountDownLatch reaches 0.
It's important that you decrement whether your call fails or succeeds, as the latch must reach 0 for the method to continue!
public static void main(String[] args) throws Exception {
String googleUrl = "http://www.google.com";
String microsoftUrl = "http://www.microsoft.com";
AsyncRestTemplate asyncRestTemplate = new AsyncRestTemplate();
ListenableFuture<ResponseEntity<String>> googleFuture = asyncRestTemplate.exchange(googleUrl, HttpMethod.GET, null, String.class);
ListenableFuture<ResponseEntity<String>> microsoftFuture = asyncRestTemplate.exchange(microsoftUrl, HttpMethod.GET, null, String.class);
final CountDownLatch countDownLatch = new CountDownLatch(2);
ListenableFutureCallback<ResponseEntity<String>> listenableFutureCallback = new ListenableFutureCallback<ResponseEntity<String>>() {
public void onSuccess(ResponseEntity<String> stringResponseEntity) {
System.out.println(String.format("[Thread %d] Status Code: %d. Body size: %d",
Thread.currentThread().getId(),
stringResponseEntity.getStatusCode().value(),
stringResponseEntity.getBody().length()
));
countDownLatch.countDown();
}
public void onFailure(Throwable throwable) {
System.err.println(throwable.getMessage());
countDownLatch.countDown();
}
};
googleFuture.addCallback(listenableFutureCallback);
microsoftFuture.addCallback(listenableFutureCallback);
System.out.println(String.format("[Thread %d] This line executed immediately.", Thread.currentThread().getId()));
countDownLatch.await();
System.out.println(String.format("[Thread %d] All responses received.", Thread.currentThread().getId()));
}
The output from my console:
[Thread 1] This line executed immediately.
[Thread 14] Status Code: 200. Body size: 112654
[Thread 13] Status Code: 200. Body size: 19087
[Thread 1] All responses received.
It says in Apache Spark documentation "within each Spark application, multiple “jobs” (Spark actions) may be running concurrently if they were submitted by different threads". Can someone explain how to achieve this concurrency for the following sample code?
SparkConf conf = new SparkConf().setAppName("Simple_App");
JavaSparkContext sc = new JavaSparkContext(conf);
JavaRDD<String> file1 = sc.textFile("/path/to/test_doc1");
JavaRDD<String> file2 = sc.textFile("/path/to/test_doc2");
System.out.println(file1.count());
System.out.println(file2.count());
These two jobs are independent and must run concurrently.
Thank You.
Try something like this:
final JavaSparkContext sc = new JavaSparkContext("local[2]","Simple_App");
ExecutorService executorService = Executors.newFixedThreadPool(2);
// Start thread 1
Future<Long> future1 = executorService.submit(new Callable<Long>() {
@Override
public Long call() throws Exception {
JavaRDD<String> file1 = sc.textFile("/path/to/test_doc1");
return file1.count();
}
});
// Start thread 2
Future<Long> future2 = executorService.submit(new Callable<Long>() {
@Override
public Long call() throws Exception {
JavaRDD<String> file2 = sc.textFile("/path/to/test_doc2");
return file2.count();
}
});
// Wait thread 1
System.out.println("File1:"+future1.get());
// Wait thread 2
System.out.println("File2:"+future2.get());
Using the Scala parallel collections feature:
Range(0,10).par.foreach {
project_id =>
{
spark.table("store_sales").selectExpr(project_id+" as project_id", "count(*) as cnt")
.write
.saveAsTable(s"counts_$project_id")
}
}
PS: The above launches up to 10 parallel Spark jobs, but it could be fewer depending on the number of available cores on the Spark driver. The Futures-based method above by GQ is more flexible in this regard.