I am trying to figure out how to determine if all async HTTP GET requests I've made have completed, so that I can execute another method. For context, I have something similar to the code below:
public void init() throws IOException {
Map<String, CustomObject> mapOfObjects = new HashMap<String, CustomObject>();
ObjectMapper mapper = new ObjectMapper();
// some code to populate the map
mapOfObjects.forEach((k,v) -> {
HttpClient.asyncGet("https://fakeurl1.com/item/" + k, createCustomCallbackOne(k, mapper));
// HttpClient is just a wrapper class for your standard OkHTTP3 calls,
// e.g. client.newCall(request).enqueue(callback);
HttpClient.asyncGet("https://fakeurl2.com/item/" + k, createCustomCallbackTwo(k, mapper));
});
}
private Callback createCustomCallbackOne(String id, ObjectMapper mapper) {
return new Callback() {
@Override
public void onResponse(Call call, Response response) throws IOException {
if (response.isSuccessful()) {
try (ResponseBody body = response.body()) {
CustomObject co = mapOfObjects.get(id);
if (co != null) {
co.setFieldOne(mapper.readValue(body.byteStream(), FieldOne.class));
}
} // implicitly closes the response body
}
}
@Override
public void onFailure(Call call, IOException e) {
// log error
}
};
}
// createCustomCallbackTwo does more or less the same thing,
// just sets a different field and then performs another
// async GET in order to set an additional field
So what would be the best/correct way to monitor all these asynchronous calls, ensure they have completed, and then go about performing another method on the objects stored inside the map?
The simplest way would be to keep a count of how many requests are 'in flight'. Increment it for each request enqueued, decrement it at the end of each callback. When the count reaches 0, all requests are done. Using a semaphore or counting lock, you can wait for it to reach 0 without polling.
Note that the callbacks run on separate threads, so you must provide some kind of synchronization.
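For illustration, here is a minimal sketch of that counting approach using a CountDownLatch (my addition, not the asker's code; it assumes the callback factories are extended to accept the latch and that the total number of requests is known up front; note that the asker's second callback fires an additional GET, which would also have to be counted):
// java.util.concurrent.CountDownLatch sized to the total number of requests (two per map entry here).
CountDownLatch latch = new CountDownLatch(mapOfObjects.size() * 2);
mapOfObjects.forEach((k, v) -> {
    HttpClient.asyncGet("https://fakeurl1.com/item/" + k, createCustomCallbackOne(k, mapper, latch));
    HttpClient.asyncGet("https://fakeurl2.com/item/" + k, createCustomCallbackTwo(k, mapper, latch));
});
latch.await();          // blocks the calling thread until every callback has counted down
processMapOfObjects();  // hypothetical follow-up method

// Inside each callback, count down in both onResponse and onFailure, ideally in a finally block:
// try { ...handle the response... } finally { latch.countDown(); }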
If you want to create a new callback for every request, you could use something like this:
public class WaitableCallback implements Callback {
private boolean done;
private IOException exception;
private final Object[] signal = new Object[0];
@Override
public void onResponse(Call call, Response response) throws IOException {
...
synchronized (this.signal) {
done = true;
signal.notifyAll();
}
}
@Override
public void onFailure(Call call, IOException e) {
synchronized (signal) {
done = true;
exception = e;
signal.notifyAll();
}
}
public void waitUntilDone() throws InterruptedException {
synchronized (this.signal) {
while (!this.done) {
this.signal.wait();
}
}
}
public boolean isDone() {
synchronized (this.signal) {
return this.done;
}
}
public IOException getException() {
synchronized (this.signal) {
return exception;
}
}
}
Create an instance for every request and put it into e.g. a List<WaitableCallback> pendingRequests.
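For example (a sketch against the asker's init() loop, assuming each WaitableCallback's onResponse also does the field-setting work shown in the question):
List<WaitableCallback> pendingRequests = new ArrayList<>();
mapOfObjects.forEach((k, v) -> {
    WaitableCallback cb1 = new WaitableCallback();
    HttpClient.asyncGet("https://fakeurl1.com/item/" + k, cb1);
    pendingRequests.add(cb1);

    WaitableCallback cb2 = new WaitableCallback();
    HttpClient.asyncGet("https://fakeurl2.com/item/" + k, cb2);
    pendingRequests.add(cb2);
});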
Then you can just wait for all requests to be done:
for ( WaitableCallback cb : pendingRequests ) {
cb.waitUntilDone();
}
// At this point, all requests have been processed.
However, you probably should not create a new identical callback object for every request. Callback's methods get the Call passed as a parameter so that the code can examine it to figure out which request it is processing; and in your case, it seems you don't even need that. So use a single Callback instance for the requests that should be handled identically.
If the function asyncGet calls your function createCustomCallbackOne then it's easy.
For each key you are calling two pages, "https://fakeurl1.com/item/" and "https://fakeurl2.com/item/" (leaving out the + k).
So you need a map to track that, and a single callback function is enough.
Use a map with a key identifying each call:
static final Map<String, Integer> trackerOfAsyncCalls = Collections.synchronizedMap(new HashMap<>()); // callbacks run on OkHttp's worker threads, so the tracker must be thread-safe
public void init() throws IOException {
Map<String, CustomObject> mapOfObjects = new HashMap<String, CustomObject>();
//need to keep a track of the keys in some object
ObjectMapper mapper = new ObjectMapper();
trackerOfAsyncCalls.clear();
// some code to populate the map
mapOfObjects.forEach((k,v) -> {
HttpClient.asyncGet("https://fakeurl1.com/item/" + k, createCustomCallback(k,1 , mapper));
// HttpClient is just a wrapper class for your standard OkHTTP3 calls,
// e.g. client.newCall(request).enqueue(callback);
HttpClient.asyncGet("https://fakeurl2.com/item/" + k, createCustomCallback(k, 2, mapper));
trackerOfAsyncCalls.put(k + "-2", null);
});
}
// final is important so the anonymous callback can capture the parameters
private Callback createCustomCallback(final String idOuter, final int which, ObjectMapper mapper) {
return new Callback() {
final String myId = idOuter + "-" + which;
{ trackerOfAsyncCalls.put(myId, null); } // instance initializer: registers this call as started
@Override
public void onResponse(Call call, Response response) throws IOException {
if (response.isSuccessful()) {
trackerOfAsyncCalls.put(myId, 1);
// or put it outside the if, if you don't care whether the call succeeded, failed, or partially completed
}
}
@Override
public void onFailure(Call call, IOException e) {
trackerOfAsyncCalls.put(myId, -1); // record the failure so the tracker still reaches a final state
}
};
}
Now set up a thread, or better a scheduler, that runs every 5 seconds and checks all keys in mapOfObjects and trackerOfAsyncCalls to see whether every call has been started and a final success, timeout, or error status has been recorded for each.
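A minimal sketch of such a scheduler (my own addition; it assumes trackerOfAsyncCalls is thread-safe, as above, and onAllCallsFinished() is a hypothetical method you would supply):
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(() -> {
    int expected = mapOfObjects.size() * 2;   // two calls per key
    boolean allFinished;
    synchronized (trackerOfAsyncCalls) {      // iterate the synchronized map safely
        allFinished = trackerOfAsyncCalls.size() == expected
                && trackerOfAsyncCalls.values().stream().allMatch(Objects::nonNull);
    }
    if (allFinished) {
        scheduler.shutdown();
        onAllCallsFinished();                 // hypothetical follow-up on the finished map entries
    }
}, 5, 5, TimeUnit.SECONDS);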
I'm implementing a GraphQL client in a Java application using Apollo's auto generation of queries, and so far I've been able to chain calls and I also get the data I want. The issue is that Apollo makes me implement the anonymous method ApolloCall.Callback<>() which overrides void onResponse(Response response) and void onFailure(), but I'm unable to find a way to get a hold of this Response object, which I want to collect and make sure I have.
This is a Spring Boot project on Java 11, I've tried to make use of CompletableFuture but with limited knowledge of it and how to use it for this particular problem I feel out of luck. I've also tried to implement the RxJava support that Apollo is supposed to have but I couldn't resolve dependency issues with that approach.
I'm pretty sure that futures will solve it but again I don't know how.
public void getOwnerIdFromClient() {
client
.query(getOwnerDbIdQuery)
.enqueue(
new ApolloCall.Callback<>() {
@Override
public void onResponse(@Nonnull Response<Optional<GetOwnerDbIdQuery.Data>> response) {
int ownerId =
response
.data()
.get()
.entities()
.get()
.edges()
.get()
.get(0)
.node()
.get()
.ownerDbId()
.get();
System.out.println("OwnerId = " + ownerId);
}
@Override
public void onFailure(@Nonnull ApolloException e) {
logger.error("Could not retrieve response from GetOwnerDbIdQuery.", e);
}
});
}
Since I wish to work with this int ownerId outside of the onResponse this isn't a sufficient solution. I'd actually like to make this call x amount of times, and create a list of all the id's I actually got, since this might return a null id as well, which means I need some way to wait for them all to finish.
You are right, this can be done using Futures:
change return type to Future
complete the future in onResponse
Approximately:
public Future<Integer> getOwnerIdFromClient(){
CompletableFuture<Integer> result = new CompletableFuture<>();
client
.query(getOwnerDbIdQuery)
.enqueue(
new ApolloCall.Callback<>(){
@Override
public void onResponse(@Nonnull Response<Optional<GetOwnerDbIdQuery.Data>> response) {
// get owner Id
System.out.println("OwnerId = "+ownerId);
result.complete(ownerId);
}
@Override
public void onFailure(@Nonnull ApolloException e) {
logger.error("Could not retrieve response from GetOwnerDbIdQuery.", e);
result.completeExceptionally(e);
}
});
return result;
}
If anyone else is coming across this, it took me quite a while to figure out the generics, but you can do this in a generic manner (to avoid the copy/paste for all your different query types) by using the following function as a separate class or wrapper:
private <D extends Operation.Data, T, V extends Operation.Variables> CompletableFuture<T> execute(Query<D, T, V> query) {
CompletableFuture<T> future = new CompletableFuture<>();
client.query(query).enqueue(new ApolloCall.Callback<>() {
@Override
public void onResponse(@NotNull Response<T> response) {
if (response.hasErrors()) {
String errors = Objects.requireNonNull(response.getErrors()).stream().map(Object::toString).collect(Collectors.joining(", "));
future.completeExceptionally(new ApolloException(errors));
return;
}
future.complete(response.getData());
}
@Override
public void onFailure(@NotNull ApolloException e) {
future.completeExceptionally(e);
}
});
return future;
}
Then it should just be a case of calling
Integer myResult = execute(getOwnerDbIdQuery).get();
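And since the question mentions making the call several times and collecting the ids that actually come back, the returned futures can be gathered with CompletableFuture.allOf. A rough sketch, where n and extractOwnerId are placeholders of my own:
List<CompletableFuture<Optional<GetOwnerDbIdQuery.Data>>> futures = new ArrayList<>();
for (int i = 0; i < n; i++) {                  // n = however many calls you need to make
    futures.add(execute(getOwnerDbIdQuery));
}

// Wait for all of them, then pull out the ids, skipping responses that had none.
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
List<Integer> ownerIds = futures.stream()
        .map(CompletableFuture::join)          // already completed, so join() does not block
        .map(data -> extractOwnerId(data))     // hypothetical helper that may return null
        .filter(Objects::nonNull)
        .collect(Collectors.toList());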
I have never really worked with asynchronous programming in Java and got very confused about which practice is the best one.
I have this method:
public static CompletableFuture<Boolean> restoreDatabase(){
DBRestorerWorker dbWork = new DBRestorerWorker();
dbWork.run();
return "someresult" ;
}
and then this one, which calls the first one:
@POST
@Path("{backupFile}")
@Consumes("application/json")
public void createOyster(@PathParam("backupFile") String backupFile) {
RestUtil.restoreDatabase("utv_johan", backupFile);
//.then somemethod()
//.then next method()
}
What I want to do is first call the restoreDatabase() method, which calls dbWork.run() (a void method), and when that is done I want createOyster to do the next step, and so forth until all the steps are done. Does anyone have a guideline on where to start with this? Which practice is best in today's Java?
As you already use CompletableFuture, you can build your async execution pipeline like this:
CompletableFuture.supplyAsync(new Supplier<String>() {
@Override
public String get() {
DBRestorerWorker dbWork = new DBRestorerWorker();
dbWork.run();
return "someresult";
}
}).thenComposeAsync((Function<String, CompletionStage<String>>) s -> {
CompletableFuture<String> future = new CompletableFuture<>();
try {
//createOyster
future.complete("oyster created");
} catch (Exception ex) {
future.completeExceptionally(ex);
}
return future;
});
As you can see, you can call thenComposeAsync or thenCompose to build a chain of CompletionStages, performing each task with the result of the previous step, or using Void if a stage has nothing to return.
Here's a very good guide
You can use AsyncResponse:
import javax.ws.rs.container.AsyncResponse;
public static CompletableFuture<String> restoreDatabase(){
DBRestorerWorker dbWork = new DBRestorerWorker();
dbWork.run();
return CompletableFuture.completedFuture("someresult");
}
and this
@POST
@Path("{backupFile}")
@Consumes("application/json")
public void createOyster(@PathParam("backupFile") String backupFile,
@Suspended AsyncResponse ar) {
RestUtil.restoreDatabase("utv_johan", backupFile)
.thenCompose(result -> doSomeAsyncCall())
.thenApply(result -> doSomeSyncCall())
.whenComplete(onFinish(ar));
//.then next method()
}
And a utility function to send the response:
static <R> BiConsumer<R, Throwable> onFinish(AsyncResponse ar) {
return (R ok, Throwable ex) -> {
if (ex != null) {
// do something with exception
ar.resume(ex);
}
else {
ar.resume(ok);
}
};
}
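One detail worth noting: as written above, restoreDatabase still runs dbWork.run() synchronously on the calling thread before returning an already-completed future. A hedged sketch of pushing the blocking work off the request thread (the same idea as the supplyAsync pipeline in the previous answer):
public static CompletableFuture<String> restoreDatabase() {
    // Run the blocking restore on the common ForkJoinPool instead of the calling thread.
    return CompletableFuture.supplyAsync(() -> {
        DBRestorerWorker dbWork = new DBRestorerWorker();
        dbWork.run();
        return "someresult";
    });
}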
I am using the Spring Framework StringRedisTemplate to update an entry, which happens from multiple threads.
public void processSubmission(final String key, final Map<String, String> submissionDTO) {
final String hashKey = String.valueOf(Hashing.MURMUR_HASH.hash(key));
this.stringRedisTemplate.expire(key, 60, TimeUnit.MINUTES);
final HashOperations<String, String, String> ops = this.stringRedisTemplate.opsForHash();
Map<String, String> data = findByKey(key);
String json;
if (data != null) {
data.putAll(submissionDTO);
json = convertSubmission(data);
} else {
json = convertSubmission(submissionDTO);
}
ops.put(key, hashKey, json);
}
The JSON entry looks like this:
key (assignmentId) -> value (submissionId, status)
As seen in the code, before updating the cache entry I fetch the current entry, add the new data to it, and put it all back. But since this operation can run on multiple threads, there can be a race condition that leads to data loss. I could synchronize the method above, but then it would become a bottleneck for the parallel processing power of the RxJava implementation, where processSubmission is called via RxJava on two asynchronous threads.
class ProcessSubmission{
#Override
public Observable<Boolean> processSubmissionSet1(List<Submission> submissionList, HttpHeaders requestHeaders) {
return Observable.create(observer -> {
for (final Submission submission : submissionList) {
//Cache entry insert method invoke via this call
final Boolean status = processSubmissionExecutor.processSubmission(submission, requestHeaders);
observer.onNext(status);
}
observer.onCompleted();
});
}
#Override
public Observable<Boolean> processSubmissionSet2(List<Submission> submissionList, HttpHeaders requestHeaders) {
return Observable.create(observer -> {
for (final Submission submission : submissionList) {
//Cache entry insert method invoke via this call
final Boolean status = processSubmissionExecutor.processSubmission(submission, requestHeaders);
observer.onNext(status);
}
observer.onCompleted();
});
}
}
The above is called from the service API below.
class MyService{
public void handleSubmissions(){
final Observable<Boolean> statusObser1 = processSubmission.processSubmissionSet1(subListDtos.get(0), requestHeaders)
.subscribeOn(Schedulers.newThread());
final Observable<Boolean> statusObser2 = processSubmission.processSubmissionSet2(subListDtos.get(1), requestHeaders)
.subscribeOn(Schedulers.newThread());
statusObser1.subscribe();
statusObser2.subscribe();
}
}
So handleSubmissions is called from multiple threads, one per assignment id. Each of those main threads then creates two RxJava threads and processes the submission lists associated with that assignment.
What would be the best approach to prevent the Redis entry race condition while keeping the performance of the RxJava implementation? Is there a way I could do this Redis operation more efficiently?
It looks like you're only using the ops variable to do a put operation at the end, and you could isolate that point, which is where you need to synchronize.
In the short research that I did, I couldn't determine whether HashOperations is already thread-safe.
But an example of how you could isolate just the part you're concerned about is to do something like:
public void processSubmission(final String key, final Map<String, String> submissionDTO) {
final String hashKey = String.valueOf(Hashing.MURMUR_HASH.hash(key));
this.stringRedisTemplate.expire(key, 60, TimeUnit.MINUTES);
Map<String, String> data = findByKey(key);
String json;
if (data != null) {
data.putAll(submissionDTO);
json = convertSubmission(data);
} else {
json = convertSubmission(submissionDTO);
}
putThreadSafeValue(key, hashKey, json);
}
And have a method that is synchronized just for the put operation:
private synchronized void putThreadSafeValue(String key, String hashKey, String json) {
final HashOperations<String, String, String> ops = this.stringRedisTemplate.opsForHash();
ops.put(key, hashKey, json);
}
There are a number of ways to do this, but it looks like you could restrict the thread contention down to that put operation.
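One of those other ways (my own sketch, not part of the answer above) is to lock per key, so the whole read-modify-write is atomic for a given assignment id while submissions for different keys still run in parallel on the RxJava threads:
// Hypothetical per-key locking: only calls for the same key contend on the same lock.
private final ConcurrentHashMap<String, Object> keyLocks = new ConcurrentHashMap<>();

public void processSubmission(final String key, final Map<String, String> submissionDTO) {
    final Object lock = keyLocks.computeIfAbsent(key, k -> new Object());
    synchronized (lock) {
        final String hashKey = String.valueOf(Hashing.MURMUR_HASH.hash(key));
        this.stringRedisTemplate.expire(key, 60, TimeUnit.MINUTES);
        Map<String, String> data = findByKey(key);
        String json;
        if (data != null) {
            data.putAll(submissionDTO);
            json = convertSubmission(data);
        } else {
            json = convertSubmission(submissionDTO);
        }
        this.stringRedisTemplate.opsForHash().put(key, hashKey, json);
    }
}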
I am writing a controller that I need to make asynchronous. How can I deal with a list of ListenableFuture? I have a list of URLs that I need to send GET requests to, one by one; what is the best solution for this?
@RequestMapping(value = "/repositories", method = RequestMethod.GET)
private void getUsername(@RequestParam(value = "username") String username) {
System.out.println(username);
List<ListenableFuture> futureList = githubRestAsync.getRepositoryLanguages(username);
System.out.println(futureList.size());
}
In the service I use List<ListenableFuture>, which does not seem to work: since it is asynchronous, in the controller method I cannot get the size of futureList to run a for loop over it for the callbacks.
public List<ListenableFuture> getRepositoryLanguages(String username){
return getRepositoryLanguages(username, getUserRepositoriesFuture(username));
}
private ListenableFuture getUserRepositoriesFuture(String username) throws HttpClientErrorException {
HttpEntity entity = new HttpEntity(httpHeaders);
ListenableFuture future = restTemplate.exchange(githubUsersUrl + username + "/repos", HttpMethod.GET, entity, String.class);
return future;
}
private List<ListenableFuture> getRepositoryLanguages(final String username, ListenableFuture<ResponseEntity<String>> future) {
final List<ListenableFuture> futures = new ArrayList<>();
future.addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
@Override
public void onSuccess(ResponseEntity<String> response) {
ObjectMapper mapper = new ObjectMapper();
try {
repositories = mapper.readValue(response.getBody(), new TypeReference<List<Repositories>>() {
});
HttpEntity entity = new HttpEntity(httpHeaders);
System.out.println("Repo size: " + repositories.size());
for (int i = 0; i < repositories.size(); i++) {
futures.add(restTemplate.exchange(githubReposUrl + username + "/" + repositories.get(i).getName() + "/languages", HttpMethod.GET, entity, String.class));
}
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void onFailure(Throwable throwable) {
System.out.println("FAILURE in getRepositoryLanguages: " + throwable.getMessage());
}
});
return futures;
}
Should I use something like ListenableFuture<List> instead of List<ListenableFuture> ?
It seems like you have a List<ListenableFuture<Result>>, but you want a ListenableFuture<List<Result>>, so you can take one action when all of the futures are complete.
public static <T> ListenableFuture<List<T>> allOf(final List<? extends ListenableFuture<? extends T>> futures) {
// we will return this ListenableFuture, and modify it from within callbacks on each input future
final SettableListenableFuture<List<T>> groupFuture = new SettableListenableFuture<>();
// use a defensive shallow copy of the futures list, to avoid errors that could be caused by
// someone inserting/removing a future from `futures` list after they call this method
final List<? extends ListenableFuture<? extends T>> futuresCopy = new ArrayList<>(futures);
// Count the number of completed futures with an AtomicInt (to avoid race conditions)
final AtomicInteger resultCount = new AtomicInteger(0);
for (int i = 0; i < futuresCopy.size(); i++) {
futuresCopy.get(i).addCallback(new ListenableFutureCallback<T>() {
@Override
public void onSuccess(final T result) {
int thisCount = resultCount.incrementAndGet();
// if this is the last result, build the ArrayList and complete the GroupFuture
if (thisCount == futuresCopy.size()) {
List<T> resultList = new ArrayList<T>(futuresCopy.size());
try {
for (ListenableFuture<? extends T> future : futuresCopy) {
resultList.add(future.get());
}
groupFuture.set(resultList);
} catch (Exception e) {
// this should never happen, but future.get() forces us to deal with this exception.
groupFuture.setException(e);
}
}
}
@Override
public void onFailure(final Throwable throwable) {
groupFuture.setException(throwable);
// if one future fails, don't waste effort on the others
for (ListenableFuture future : futuresCopy) {
future.cancel(true);
}
}
});
}
return groupFuture;
}
I'm not quite sure whether you are starting a new project or working on a legacy one, but if the main requirement is a non-blocking, asynchronous REST service, I would suggest you have a look at the upcoming Spring Framework 5 and its integration with reactive streams. In particular, Spring 5 will allow you to create fully reactive and asynchronous web services with very little code.
For example, a fully functional version of your code can be written with this small snippet.
@RestController
public class ReactiveController {
@GetMapping(value = "/repositories")
public Flux<String> getUsername(@RequestParam(value = "username") String username) {
WebClient client = WebClient.create(new ReactorClientHttpConnector());
ClientRequest<Void> listRepoRequest = ClientRequest.GET("https://api.github.com/users/{username}/repos", username)
.accept(MediaType.APPLICATION_JSON).header("user-agent", "reactive.java").build();
return client.exchange(listRepoRequest).flatMap(response -> response.bodyToFlux(Repository.class)).flatMap(
repository -> client
.exchange(ClientRequest
.GET("https://api.github.com/repos/{username}/{repo}/languages", username,
repository.getName())
.accept(MediaType.APPLICATION_JSON).header("user-agent", "reactive.java").build())
.map(r -> r.bodyToMono(String.class)))
.concatMap(Flux::merge);
}
static class Repository {
private String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
}
To run this code locally, just clone the spring-boot-starter-web-reactive project and copy the code into it.
The result is something like {"Java":50563,"JavaScript":11541,"CSS":1177}{"Java":50469}{"Java":130182}{"Shell":21222,"Makefile":7169,"JavaScript":1156}{"Java":30754,"Shell":7058,"JavaScript":5486,"Batchfile":5006,"HTML":4865} still you can map it to something more usable in asynchronous way :)
In my GWT application I'm often referring several times to the same server results. I also don't know which code is executed first. I therefore want to cache my asynchronous (client-side) results.
I want to use an existing caching library; I'm considering guava-gwt.
I found this example of a Guava synchronous cache (in guava's documentation):
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
.build(
new CacheLoader<Key, Graph>() {
public Graph load(Key key) throws AnyException {
return createExpensiveGraph(key);
}
});
This is how I'm trying to use a Guava cache asynchronously (I have no clue about how to make this work):
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
.build(
new CacheLoader<Key, Graph>() {
public Graph load(Key key) throws AnyException {
// I want to do something asynchronous here, I cannot use Thread.sleep in the browser/JavaScript environment.
service.createExpensiveGraph(key, new AsyncCallback<Graph>() {
public void onFailure(Throwable caught) {
// how to tell the cache about the failure???
}
public void onSuccess(Graph result) {
// how to fill the cache with that result???
}
});
return // I cannot provide any result yet. What can I return???
}
});
GWT is missing many classes from the default JRE (especially concerning threads and concurrency).
How can I use guava-gwt to cache asynchronous results?
As I understand it, what you want to achieve is not just an asynchronous cache but also a lazy one, and GWT is not the best place to build it: implementing client-side asynchronous execution in a GWT app is awkward because GWT lacks client-side implementations of Futures and/or Rx components (though there are some RxJava ports for GWT). In plain Java, what you want could be achieved like this:
LoadingCache<String, Future<String>> graphs = CacheBuilder.newBuilder().build(new CacheLoader<String, Future<String>>() {
public Future<String> load(String key) {
ExecutorService executor = Executors.newSingleThreadExecutor();
return executor.submit(() -> service.createExpensiveGraph(key));
}
});
Future<String> value = graphs.get("Some Key");
if(value.isDone()){
// value.get() would block until the data is loaded; isDone() tells us it is already available
String success = value.get();
}
But as GWT has no Future implementation, you need to create one yourself, something like:
public class FutureResult<T> implements AsyncCallback<T> {
private enum State {
SUCCEEDED, FAILED, INCOMPLETE;
}
private State state = State.INCOMPLETE;
private LinkedHashSet<AsyncCallback<T>> listeners = new LinkedHashSet<AsyncCallback<T>>();
private T value;
private Throwable error;
public T get() {
switch (state) {
case INCOMPLETE:
// Do not block browser so just throw ex
throw new IllegalStateException("The server response did not yet recieved.");
case FAILED: {
throw new IllegalStateException(error);
}
case SUCCEEDED:
return value;
}
throw new IllegalStateException("Something very unclear");
}
public void addCallback(AsyncCallback<T> callback) {
if (callback == null) return;
listeners.add(callback);
}
public boolean isDone() {
return state == State.SUCCEEDED;
}
public void onFailure(Throwable caught) {
state = State.FAILED;
error = caught;
for (AsyncCallback<T> callback : listeners) {
callback.onFailure(caught);
}
}
public void onSuccess(T result) {
this.value = result;
state = State.SUCCEEDED;
for (AsyncCallback<T> callback : listeners) {
callback.onSuccess(value);
}
}
}
And your implementation will become:
LoadingCache<String, FutureResult<String>> graphs = CacheBuilder.newBuilder().build(new CacheLoader<String, FutureResult<String>>() {
public FutureResult<String> load(String key) {
FutureResult<String> result = new FutureResult<String>();
service.createExpensiveGraph(key, result); // the async GWT call completes the FutureResult via its callback
return result;
}
});
FutureResult<String> value = graphs.get("Some Key");
// add a custom handler
value.addCallback(new AsyncCallback<String>() {
public void onSuccess(String result) {
// do something
}
public void onFailure(Throwable caught) {
// do something
}
});
// or see if it is already loaded / do not wait
if (value.isDone()) {
String success = value.get();
}
When using the FutureResult you will not just cache the execution but also get some laziness, so you can show a loading screen while the data is loaded into the cache.
If you just need to cache the asynchronous call results, you can go for a non-loading Cache instead of a LoadingCache.
In this case you use the put and getIfPresent methods to store and retrieve records from the cache.
String v = cache.getIfPresent("one");
// returns null
cache.put("one", "1");
v = cache.getIfPresent("one");
// returns "1"
Alternatively, a new value can be loaded from a Callable on cache misses:
String v = cache.get(key,
new Callable<String>() {
public String call() {
return key.toLowerCase();
}
});
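In the asker's GWT setting, a minimal sketch of combining this with the async call (my own addition, assuming cache is a plain Cache<Key, Graph> built without a loader, and useGraph is a hypothetical consumer of the result):
Graph cached = cache.getIfPresent(key);
if (cached != null) {
    useGraph(cached);
} else {
    service.createExpensiveGraph(key, new AsyncCallback<Graph>() {
        public void onSuccess(Graph result) {
            cache.put(key, result);   // fill the cache with the asynchronous result
            useGraph(result);
        }
        public void onFailure(Throwable caught) {
            // nothing is cached, so a later call will simply retry
        }
    });
}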
For further reference: https://guava-libraries.googlecode.com/files/JavaCachingwithGuava.pdf