Does Feign client have a native implementation of throttling? - java

I am facing the following situation, and to my surprise I couldn't find much documentation on it:
There is a service that only exposes a REST call for item details, returning them one by one.
There are 1k+ items in total.
For responsiveness reasons, I would like to persist this data on my end rather than fetch it lazily.
To keep my API key from being locked, I would like to limit my calls to X calls per second.
I could not find any support for this in the Feign documentation.
Does anybody know if there is any? Or do you have any suggestions on how to go about implementing this?

There is no built-in throttling capability in Feign; that is delegated to the underlying Client implementation. That said, you can define your own client by extending one of the provided ones: Apache HttpClient, OkHttp, or Ribbon.
One solution is to extend the Client to use a ScheduledThreadPoolExecutor, as outlined in this answer:
Apache HttpClient: Limit total calls per second
To use this with the provided ApacheHttpClient in Feign, you can extend it and provide your own implementation of the execute method that hands the call off to the executor.
public class ThrottledHttpClient extends ApacheHttpClient {
    // One scheduler thread; the pool size and the per-request delay together bound the call rate.
    private final ScheduledExecutorService throttledQueue = Executors.newScheduledThreadPool(1);
    private static final long DELAY_MS = 200; // minimum delay before each scheduled call

    @Override
    public Response execute(Request request, Request.Options options) throws IOException {
        // Hand the actual call to the executor instead of running it inline.
        ScheduledFuture<Response> future =
                throttledQueue.schedule(() -> super.execute(request, options), DELAY_MS, TimeUnit.MILLISECONDS);
        try {
            return future.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new IOException("Throttled request failed", e);
        }
    }
}
Set the appropriate thread pool size and delay to achieve the throughput you desire.
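Plugging the throttled client into Feign could then look roughly like this (ItemApi and the base URL are hypothetical placeholders):
ItemApi api = Feign.builder()
        .client(new ThrottledHttpClient()) // the throttled client from above
        .target(ItemApi.class, "https://example.com/api");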

Related

Calling async methods (Vert.x, Java) from necessarily synchronous ones

We have a set of Java applications that were originally written using normal synchronous methods but have largely been converted to asynchronous Vert.x (the regular API, not Rx) wherever it makes sense. We're having some trouble at the boundaries between sync and async code, especially when we have a method that must be synchronous (reasoning explained below) and we want to invoke an async method from it.
There are many similar questions asked previously on Stack Overflow, but practically all of them are in a C# context and the answers do not appear to apply.
Among other things we are using Geotools and Apache Shiro. Both provide customization through extension using APIs they have defined that are strictly synchronous. As a specific example, our custom authorization realm for Shiro needs to access our user data store, for which we have created an async DAO API. The Shiro method we have to write is called doGetAuthorizationInfo; it is expected to return an AuthorizationInfo. But there does not appear to be a reliable way to access the authorization data from the other side of the async DAO API.
In the specific case that the thread was not created by Vert.x, using a CompletableFuture is a workable solution: the synchronous doGetAuthorizationInfo would push the async work over to a Vert.x thread and then block the current thread in CompletableFuture.get() until the result becomes available.
Unfortunately the Shiro (or Geotools, or whatever) method may be invoked on a Vert.x thread. In that case it is extremely bad to block the current thread: if it's the event loop thread then we're breaking the Golden Rule, while if it's a worker thread (say, via Vertx.executeBlocking) then blocking it will prevent the worker from picking up anything more from its queue - meaning the blocking will be permanent.
Is there a "standard" solution to this problem? It seems to me that it will crop up anytime Vert.x is being used under an extensible synchronous library. Is this just a situation that people avoid?
EDIT
... with a bit more detail. Here is a snippet from org.apache.shiro.realm.AuthorizingRealm:
/**
 * Retrieves the AuthorizationInfo for the given principals from the underlying data store. When returning
 * an instance from this method, you might want to consider using an instance of
 * {@link org.apache.shiro.authz.SimpleAuthorizationInfo SimpleAuthorizationInfo}, as it is suitable in most cases.
 *
 * @param principals the primary identifying principals of the AuthorizationInfo that should be retrieved.
 * @return the AuthorizationInfo associated with this principals.
 * @see org.apache.shiro.authz.SimpleAuthorizationInfo
 */
protected abstract AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principals);
Our data access layer has methods like this:
void loadUserAccount(String id, Handler<AsyncResult<UserAccount>> handler);
How can we invoke the latter from the former? If we knew doGetAuthorizationInfo was being invoked in a non-Vert.x thread, then we could do something like this:
@Override
protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principals) {
    CompletableFuture<UserAccount> completable = new CompletableFuture<>();
    vertx.<UserAccount>executeBlocking(vertxFuture -> {
        loadUserAccount((String) principals.getPrimaryPrincipal(), vertxFuture);
    }, res -> {
        if (res.failed()) {
            completable.completeExceptionally(res.cause());
        } else {
            completable.complete(res.result());
        }
    });
    // Block until the Vert.x worker thread provides its result.
    UserAccount ua;
    try {
        ua = completable.get();
    } catch (InterruptedException | ExecutionException e) {
        throw new AuthorizationException(e);
    }
    // Build up authorization info from the user account
    return new SimpleAuthorizationInfo(/* etc...*/);
}
But if doGetAuthorizationInfo is called in a Vert.x thread then things are completely different. The trick above will block an event loop thread, so that's a no-go. Or if it's a worker thread then the executeBlocking call will put the loadUserAccount task onto the queue for that same worker (I believe), so the subsequent completable.get() will block permanently.
I bet you know the answer already, but are wishing it wasn't so -- If a call to GeoTools or Shiro will need to block waiting for a response from something, then you shouldn't be making that call on a Vert.x thread.
You should create an ExecutorService with a thread pool that you should use to execute those calls, arranging for each submitted task to send a Vert.x message when it's done.
You may have some flexibility in the size of the chunks you move into the thread pool. Instead of tightly wrapping those calls, you can move something larger higher up the call stack. You will probably make this decision based on how much code you will have to change. Since making a method asynchronous usually implies changing all the synchronous methods in its call stack anyway (that's the unfortunate fundamental problem with this kind of async model), you will probably want to do it high on the stack.
You will probably end up with an adapter layer that provides Vert.x APIs for a variety of synchronous services.
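A minimal sketch of that adapter layer, assuming Vert.x 3.8+ (for the Promise API); BlockingServiceAdapter, runBlocking and the pool size are illustrative choices, not part of the question's code. One way to arrange the hand-back is to complete a Vert.x Promise on the original context (publishing a message on the event bus would work similarly):
import io.vertx.core.Context;
import io.vertx.core.Future;
import io.vertx.core.Promise;
import io.vertx.core.Vertx;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingServiceAdapter {
    private final Vertx vertx;
    // Dedicated, non-Vert.x threads for the synchronous Shiro/GeoTools calls.
    private final ExecutorService blockingPool = Executors.newFixedThreadPool(8);

    public BlockingServiceAdapter(Vertx vertx) {
        this.vertx = vertx;
    }

    // Run a blocking call off the Vert.x threads and deliver the result back on the caller's context.
    public <T> Future<T> runBlocking(Callable<T> blockingCall) {
        Context context = vertx.getOrCreateContext();
        Promise<T> promise = Promise.promise();
        blockingPool.submit(() -> {
            try {
                T result = blockingCall.call();
                context.runOnContext(v -> promise.complete(result));
            } catch (Exception e) {
                context.runOnContext(v -> promise.fail(e));
            }
        });
        return promise.future();
    }
}
Inside the blocking call itself, the CompletableFuture.get() trick from the question becomes safe again, because the thread being blocked belongs to this dedicated pool rather than to Vert.x.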

When does the AsyncRestTemplate send request?

Today I did some experiments on AsyncRestTemplate. Below is a piece of sample code:
ListenableFuture<ResponseEntity<MyObject[]>> result
        = asyncRestTemplate.getForEntity(uri, MyObject[].class);
List<MyObject> objects = Arrays.asList(result.get().getBody());
To my surprise, the request was not sent to the uri on the first line (i.e. when getForEntity was called) but only after result.get() was called.
Isn't it a synchronous way of doing stuff?
The whole idea of making an async request is that either you do not want to wait for the async task to start/complete, or you want the main thread to do some other work before asking for the result from the Future instance. Internally, AsyncRestTemplate prepares an AsyncClientHttpRequest and calls its executeAsync method:
AsyncClientHttpRequest request = createAsyncRequest(url, method);
if (requestCallback != null) {
requestCallback.doWithRequest(request);
}
ListenableFuture<ClientHttpResponse> responseFuture = request.executeAsync();
There are two different implementations: HttpComponentsAsyncClientHttpRequest (which uses the high-performance async support in the Apache HttpComponents library) and SimpleBufferingAsyncClientHttpRequest (which uses the facilities provided by plain JDK classes). HttpComponentsAsyncClientHttpRequest internally has its own thread factory (which is not Spring-managed, AFAIK), whereas SimpleBufferingAsyncClientHttpRequest allows a Spring-managed AsyncListenableTaskExecutor to be supplied. The point is that in all cases there is an executor of some kind to run the task asynchronously. As is natural with such thread pools, the actual start time of the task is indeterminate, depends on many factors such as load and available CPUs, and should not be relied upon.
When you call future.get() you're essentially turning an asynchronous operation into a synchronous one by waiting for the result.
It doesn't matter when the actual request is performed, the important thing is that since it's asynchronous, you don't need to worry about it unless/until you need the result.
The advantage is obvious when you need to perform other work before processing the result, or when you're not waiting for a result at all.
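To make that concrete, here is a sketch using addCallback on the returned ListenableFuture (ListenableFutureCallback is from org.springframework.util.concurrent, Spring 4.x), so the calling thread never blocks:
ListenableFuture<ResponseEntity<MyObject[]>> result =
        asyncRestTemplate.getForEntity(uri, MyObject[].class);
result.addCallback(new ListenableFutureCallback<ResponseEntity<MyObject[]>>() {
    @Override
    public void onSuccess(ResponseEntity<MyObject[]> response) {
        List<MyObject> objects = Arrays.asList(response.getBody());
        // process the objects here, off the caller's thread
    }
    @Override
    public void onFailure(Throwable t) {
        // handle the failure, e.g. log it
    }
});
// the current thread is free to do other work here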

Java / Scala Future driven by a callback

Short Version:
How can I create a Promise<Result> that is completed when a callback is triggered?
Long Version:
I am working on an application that deals with third-party SOAP services. A request from a user fans out to multiple SOAP services simultaneously, aggregates the results and sends them back to the user.
The system needs to be scalable and should allow multiple concurrent users. Since each user request ends up triggering about 10 web service calls, and each call blocks for about 1 second, the system needs to be designed around non-blocking I/O.
I am using Apache CXF within Play Framework (Java) for this system. I have managed to generate the Asynchronous WS Client proxies and enable the async transport. What I am unable to figure out is how to return a Future to Play's Thread when I have delegated to multiple Web Service proxies and the results will be obtained as callbacks.
Option 1: Using async method calls returning Java Future.
As described in the scala.concurrent.Future wrapper for java.util.concurrent.Future thread, there is no non-blocking way to convert a Java Future into a Scala Future: the only way to get a result out of a java.util.concurrent.Future is Future.get(), which blocks the caller. Since CXF's generated proxies return a Java Future, this option is ruled out.
Option 2: Use Scala Future.
Since CXF generates the proxy interfaces, I am not sure if there is any way I can intervene and return a Scala Future (AFAIK Akka uses Scala Futures) instead of Java Future?
Option 3: Use the callback approach.
The async methods generated by CXF that return a Java Future also take a callback object, which I suppose will be invoked when the result is ready. To use this approach, I would need to return a Future that completes when I receive that callback.
I think Option 3 is the most promising, although I have no idea how to return a Promise that is completed upon receiving the callback. I could possibly have a thread spinning in a while(true) loop until the result is available, but I don't see how to do that kind of waiting without blocking a thread.
In a nutshell, I am trying to build a system that makes a lot of SOAP web service calls, where each call blocks for a significant time. The system could easily run out of threads with many concurrent web service calls, so I am looking for a non-blocking-I/O-based solution that allows many web service calls to be in flight at the same time.
Option 3 looks good :) A couple of imports to start with...
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration.Duration
and, just to illustrate the point, here's a mocked CXF API that takes the callback:
def fetch(url: String, callback: String => Unit) = {
callback(s"results for $url")
}
Create a promise, call API with promise as callback:
val promise = Promise[String]()
fetch("http://corp/api", result => promise.success(result))
Then you can pass promise.future, which is an instance of Future, into your Play app.
To test it, you can do this:
Await.result(promise.future, Duration.Inf)
which will block awaiting the result, at which point you should see "results for http://corp/api" in the console.
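For the Java side (Java 8+), the same pattern works with a CompletableFuture completed from the JAX-WS AsyncHandler callback that CXF-generated async methods accept; port, request, getQuoteAsync, GetQuoteResponse and aggregate below are hypothetical names standing in for a generated proxy:
CompletableFuture<GetQuoteResponse> promise = new CompletableFuture<>();
port.getQuoteAsync(request, res -> {
    try {
        promise.complete(res.get()); // res.get() does not block here: the response has already arrived
    } catch (Exception e) {
        promise.completeExceptionally(e);
    }
});
promise.thenAccept(response -> aggregate(response)); // non-blocking continuation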

What's the level of asynchronism in Play! framework

Play! touts its asynchronous HTTP handling feature, though it is not very clear to me what else is truly async (non-blocking and without thread switching). In the asynchronous examples I have read, like the one below taken from the Play! Framework Cookbook:
public static void generateInvoice(Long orderId) {
Order order = Order.findById(orderId); // #a
InputStream is = await(new OrderAsPdfJob(order).now()); // #b
renderBinary(is);
}
They focus on the long/expensive "business logic" step at #b, but my concern is the DB call at #a. In fact, the majority of controller methods in many apps just do multiple CRUD operations against the DB, like:
public static void generateInvoice(Long orderId) {
Order order = Order.findById(orderId); // #a
render(order);
}
I'm particularly concerned about the claim of using a "small number of threads" when serving this DB access pattern.
So the questions are
Will Play! block on the JDBC calls?
If we wrap such calls in a future/promise/await, it will cause thread switching (besides the inconvenience due to the pervasiveness of DB calls), right?
In light of this, how does its asynchronism compare to a servlet server with an NIO connector (e.g. Tomcat + NIO connector, but without using the new event handler) when serving this DB access pattern?
Is there any plan to support asynchronous DB driver, like http://code.google.com/p/adbcj/ ?
Play will block on JDBC calls--there's no magic to prevent that.
To wrap a j.u.c.Future in an F.Promise for Play, a polling loop is needed (a generic sketch follows this answer). This can result in a lot of context switches.
Servlet containers can use NIO e.g. to keep connections open between requests without tying up threads for inactive connections. But a JDBC call in request handling code will block and tie up a thread just the same.
ADBCJ implements j.u.c.Future, but also supports callbacks, which can be tied to an F.Promise, see https://groups.google.com/d/topic/play-framework/c4DOOtGF50c/discussion.
I'm not convinced Play's async feature is worthwhile, given how much it complicates the code and testing. Maybe if you need to handle thousands of requests per second on a single machine while calling slow services.
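For illustration, a minimal, library-agnostic sketch of such a Future-to-callback bridge; the Consumer stands in for completing an F.Promise, and the 10 ms polling interval is arbitrary. Every check that finds the Future unfinished costs a reschedule and a context switch:
import java.util.concurrent.*;
import java.util.function.Consumer;

static <T> void bridge(Future<T> future, Consumer<T> onDone, ScheduledExecutorService poller) {
    poller.schedule(() -> {
        if (future.isDone()) {
            try {
                onDone.accept(future.get()); // returns immediately once isDone() reports true
            } catch (Exception e) {
                // surface the failure to the promise instead
            }
        } else {
            bridge(future, onDone, poller);  // not ready yet: poll again shortly
        }
    }, 10, TimeUnit.MILLISECONDS);
}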

Alternative to multithreading in Java

I have a question that has bothered me for a while.
For example, I have a multithreaded server: when it receives a request, it passes the request to a handler, and the handler processes it. One reason we make a server multithreaded is:
if it were not multithreaded, then while the server is processing one request, another request arriving in the meantime would be dropped, because the server is not available at that moment.
So I wonder: is there an alternative to a multithreaded server? For example, could we create a queue for a single-threaded server, so that it fetches the next request from the queue once it finishes the current one?
Yes, you can have an event-based server. This capability is offered by the java.nio package, though you could use a framework like netty rather than do it from scratch.
However, note that while this used to be considered a way to get better performance, it seems like a regular multithreaded server actually offers better performance with today's hardware and operating systems.
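For a sense of what the java.nio route looks like, here is a bare-bones, single-threaded accept/read loop (a sketch only; port 8080 is arbitrary and error handling is omitted):
Selector selector = Selector.open();
ServerSocketChannel server = ServerSocketChannel.open();
server.bind(new InetSocketAddress(8080));
server.configureBlocking(false);
server.register(selector, SelectionKey.OP_ACCEPT);

while (true) {
    selector.select(); // blocks until at least one channel is ready
    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
    while (keys.hasNext()) {
        SelectionKey key = keys.next();
        keys.remove();
        if (key.isAcceptable()) {
            SocketChannel client = server.accept();
            client.configureBlocking(false);
            client.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) {
            // read the request and hand it to a handler, registering interest in OP_WRITE if needed
        }
    }
}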
Yes you can. Have you considered SEDA-like techniques (i.e. event-driven techniques)? You may want to investigate the Netty library too. It does most of the job for you when it comes to using NIO.
You can still have a single-threaded engine behind a multi-threaded server.
Consider the following skeleton: if you have an Engine that runs, it can be completely single-threaded, just handling requests in the order they're received. This allows you to use non-thread-safe components in the business logic, and you've managed to separate your networking layer from your business logic layer! It's a win-win scenario.
class Engine implements Runnable {
    private final Object requestLock = new Object();
    private final Queue<Request> requests = new LinkedList<Request>();
    private volatile boolean running = true;

    /** Called by the (multi-threaded) networking layer to enqueue work. */
    public void submit(Request request) {
        synchronized (requestLock) {
            requests.add(request);
            requestLock.notify();
        }
    }
    private Request nextRequest() throws InterruptedException {
        synchronized (requestLock) {
            while (requests.isEmpty()) {
                requestLock.wait(); // block instead of busy-spinning on an empty queue
            }
            return requests.poll();
        }
    }
    /** The engine is single threaded. It doesn't care about server connections. */
    public void run() {
        while (running) {
            try {
                Request request = nextRequest();
                // handle your request as normal
                // also consider making a mechanism to send Responses
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                running = false;
            }
        }
    }
}
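Wiring it up is then straightforward (a sketch, using the submit method shown above):
Engine engine = new Engine();
new Thread(engine, "engine").start();
// later, from any connection-handling thread:
engine.submit(request);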
