Make client-side streaming synchronous/blocking in gRPC Java application

I would like to make client-side streaming blocking. The definition of that protocol can look like this:
rpc RecordRoute(stream Point) returns (RouteSummary) {}
As the documentation says, for certain types of streaming calls, it's only possible to use the async stub:
a non-blocking/asynchronous stub that makes non-blocking calls to the server, where the response is returned asynchronously. You can make certain types of streaming call only using the asynchronous stub.
Then how can I make that call blocking/synchronous? Is it possible?

The blocking stub can only be used for RPCs where the client sends a single request. For client-streaming calls, you can only use the async stub: the generated blocking stub does not contain methods for client-streaming or bidi-streaming RPCs.
If you want to avoid excessive buffering due to async requests, you can use the CallStreamObserver API to do manual flow control. With some external synchronization, such as a CountDownLatch, the async API can behave synchronously. See how gRPC's manual flow control example works.
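The CountDownLatch idea can be sketched as follows. This is self-contained, not real gRPC: the StreamObserver interface and the recordRoute method below are stand-ins for gRPC's generated async stub, with Point and RouteSummary simplified to Integer and String.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class BlockingClientStream {
    // Stand-in for gRPC's StreamObserver, so the sketch is self-contained.
    interface StreamObserver<T> {
        void onNext(T value);
        void onError(Throwable t);
        void onCompleted();
    }

    // Stand-in for the async stub's recordRoute: counts points and delivers
    // a summary asynchronously via the response observer.
    static StreamObserver<Integer> recordRoute(StreamObserver<String> responseObserver) {
        return new StreamObserver<Integer>() {
            int count = 0;
            @Override public void onNext(Integer point) { count++; }
            @Override public void onError(Throwable t) { responseObserver.onError(t); }
            @Override public void onCompleted() {
                // Simulate the server replying on another thread.
                new Thread(() -> {
                    responseObserver.onNext("visited " + count + " points");
                    responseObserver.onCompleted();
                }).start();
            }
        };
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        AtomicReference<String> summary = new AtomicReference<>();

        StreamObserver<Integer> requests = recordRoute(new StreamObserver<String>() {
            @Override public void onNext(String s) { summary.set(s); }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onCompleted() { done.countDown(); }
        });

        for (int p : List.of(1, 2, 3)) {
            requests.onNext(p);  // stream the requests
        }
        requests.onCompleted();

        // Block the caller until the async callbacks have fired,
        // making the overall call behave synchronously.
        if (!done.await(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("RPC timed out");
        }
        System.out.println(summary.get()); // visited 3 points
    }
}
```

With the real API, the same shape applies: build the latch and response observer, call `asyncStub.recordRoute(responseObserver)`, send the points, and `await()` the latch.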

Related

Blocking inside of StreamObserver.onNext()

I couldn't find anything in the gRPC documentation about this. Does gRPC expect my implementation of StreamObserver.onNext() to be non-blocking? What are the implications for gRPC if it does block (e.g. does it reject new requests, queue them up, etc.)?
You can block if you need to block.
Since the callbacks for an RPC are not called concurrently (they are serialized), blocking will delay other callbacks until you return. That includes the setOnReadyHandler and setOnCancelHandler callbacks in ClientCallStreamObserver and ServerCallStreamObserver.
In streaming RPCs, gRPC automatically requests another message after you return from your onNext(), so if you block gRPC will avoid receiving too many more messages. gRPC will still allow some messages to be buffered, however.
Blocking has no impact on new RPCs.
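The "blocking delays other callbacks" point can be illustrated with a single-threaded executor standing in for gRPC's per-RPC callback serialization (this is not gRPC API, just the scheduling behavior):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SerializedCallbacks {
    public static void main(String[] args) throws InterruptedException {
        // Mimic gRPC delivering callbacks for one RPC in serial order.
        ExecutorService rpcCallbacks = Executors.newSingleThreadExecutor();
        List<String> log = new ArrayList<>();

        rpcCallbacks.submit(() -> {
            log.add("onNext start");
            try { Thread.sleep(200); } catch (InterruptedException ignored) {} // blocking work
            log.add("onNext end");
        });
        // An onReady-style callback queued while onNext() blocks
        // cannot run until onNext() returns:
        rpcCallbacks.submit(() -> log.add("onReady"));

        rpcCallbacks.shutdown();
        rpcCallbacks.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(log); // [onNext start, onNext end, onReady]
    }
}
```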

Do we have to write reactive code inside WebFlux?

I know that a reactive controller should return a Flux<T> or Mono<T>, which means it's reactive at the HTTP-handler level.
But what if we make an external HTTP call in a controller with non-reactive code, which has to wait a long time on I/O for the response? What happens if 10,000 users call this controller at the same time? Assuming there is only one thread handling the code inside the controller, will other requests be handled during that I/O wait?
If not, do we have to use reactive code such as WebClient and a ReactiveRepository to call external HTTP APIs and do CRUD on the DB?
If so, how is that implemented? It's just lines of non-reactive code, so how does Java know "hey, this is waiting for a response, let's handle another event first"?
Doing blocking I/O within a reactive pipeline (e.g. in a reactive controller method) is forbidden; it can lead to serious problems at runtime, or even to outright errors if a block() operator is used.
Reactor provides infrastructure to wrap blocking calls and schedule that work on dedicated threads. See the "How do I wrap a synchronous, blocking call?" section in the Reactor reference documentation. Doing that will work, but it is likely to have a negative effect on performance.
I ran an experiment myself; it looks like we do have to use reactive code inside WebFlux, so that everything is reactive and performance stays high.
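In Reactor the wrapping looks like `Mono.fromCallable(this::blockingCall).subscribeOn(Schedulers.boundedElastic())`. The same offloading idea can be sketched in plain JDK terms (the slow call and pool below are illustrative, not Reactor API): the blocking work runs on a dedicated pool so the calling thread stays free.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadBlockingCall {
    // Pretend this is a slow, blocking HTTP or DB call.
    static String blockingCall() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) {}
        return "response";
    }

    public static void main(String[] args) {
        // Dedicated pool for blocking work, analogous to Schedulers.boundedElastic().
        ExecutorService blockingPool = Executors.newFixedThreadPool(4);

        // The calling (event-loop) thread is not blocked; the result arrives async.
        CompletableFuture<String> result =
                CompletableFuture.supplyAsync(OffloadBlockingCall::blockingCall, blockingPool)
                        .thenApply(r -> "handled " + r);

        System.out.println(result.join()); // join() only for the demo
        blockingPool.shutdown();
    }
}
```

As the answer notes, this works but still ties up a thread per in-flight blocking call, which is why fully reactive clients perform better.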

Does Undertow support async I/O from an async source?

I have a scenario where I’m attempting to serve data in a non-blocking fashion which is sourced by a RxJava Observable (also non-blocking). I’m using the WriteListener callback provided by ServletOutputStream. I’m running into an issue where the write is throwing an IllegalStateException (java.lang.IllegalStateException: UT010035: Stream in async mode was not ready for IO operation) immediately after a successful isReady() check on the ServletOutputStream.
While looking deeper, I noticed this comment in the Undertow implementation of ServletOutputStream:
Once the write listener has been set operations must only be invoked on this stream from the write listener callback. Attempting to invoke from a different thread will result in an IllegalStateException.
Given that my data source is asynchronous, there are scenarios where the onWritePossible() callback will reach a state where there is no data immediately available, and I would need to wait for more to be received from the source. In those cases I would need to interact with the stream from the callback of my data source, which runs on a different thread. The only other option would be to suspend the thread used to call onWritePossible() and wait for more data to arrive, but that is a blocking operation, which defeats the whole purpose.
Is there another approach that I'm missing? The single-thread requirement of Undertow doesn't seem to be required by the Servlet 3.1 spec. From what I've read, other implementations appear to tolerate the multi-threaded approach, provided the application coordinates synchronization of stream access.
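One generic coordination pattern for this kind of constraint (a sketch only, not Undertow-specific API; the StringBuilder stands in for the output stream) is to make a single thread own all stream interaction, and have data-source callbacks hop their writes onto it instead of touching the stream directly:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SerializedStreamWrites {
    public static void main(String[] args) throws Exception {
        // One thread owns all interaction with the (simulated) output stream.
        ExecutorService streamThread = Executors.newSingleThreadExecutor();
        StringBuilder stream = new StringBuilder(); // stand-in for the output stream

        // Data-source callbacks arrive on arbitrary threads...
        Runnable dataArrived = () ->
                // ...but every write is hopped onto the owning thread.
                streamThread.submit(() -> stream.append("chunk;"));

        Thread source = new Thread(() -> { dataArrived.run(); dataArrived.run(); });
        source.start();
        source.join();

        streamThread.shutdown();
        streamThread.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(stream); // chunk;chunk;
    }
}
```

Whether this satisfies Undertow's "only from the write listener callback" rule depends on the container; the sketch only shows how to avoid raw multi-threaded access to the stream.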

Java / Scala Future driven by a callback

Short Version:
How can I create a Promise<Result> which is completed on a trigger of a callback?
Long Version:
I am working on an application that deals with third-party SOAP services. A request from a user is delegated to multiple SOAP services simultaneously; the results are aggregated and sent back to the user.
The system needs to be scalable and should allow multiple concurrent users. As each user request ends up triggering about 10 web service calls, and each call blocks for about a second, the system needs to be designed around non-blocking I/O.
I am using Apache CXF within Play Framework (Java) for this system. I have managed to generate the Asynchronous WS Client proxies and enable the async transport. What I am unable to figure out is how to return a Future to Play's Thread when I have delegated to multiple Web Service proxies and the results will be obtained as callbacks.
Option 1: Using async method calls returning Java Future.
As described in the scala.concurrent.Future wrapper for java.util.concurrent.Future thread, there is no non-blocking way to convert a Java Future to a Scala Future. The only way to get a result from a Java Future is Future.get(), which blocks the caller. Since CXF's generated proxies return Java Futures, this option is ruled out.
Option 2: Use Scala Future.
Since CXF generates the proxy interfaces, I am not sure there is any way I can intervene and return a Scala Future (AFAIK Akka uses Scala Futures) instead of a Java Future.
Option 3: Use the callback approach.
The async methods generated by CXF, which return a Java Future, also take a callback object that I suppose is invoked when the result is ready. To use this approach, I would need to return a Future that completes when I receive that callback.
I think Option 3 is the most promising, although I have no idea how to return a Promise that is completed on receiving a callback. I could possibly have a thread spinning in a while(true) loop, checking whether the result is available, but then how do I wait without blocking a thread?
In a nutshell, I am trying to build a system that makes a lot of SOAP web service calls, where each call blocks for a significant time. The system could easily run out of threads under many concurrent web service calls. I am looking for a solution based on non-blocking I/O that allows many web service calls to be in flight at the same time.
Option 3 looks good :) A couple of imports to start with...
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration.Duration
and, just to illustrate the point, here's a mocked CXF API that takes the callback:
def fetch(url: String, callback: String => Unit) = {
callback(s"results for $url")
}
Create a promise, call API with promise as callback:
val promise = Promise[String]()
fetch("http://corp/api", result => promise.success(result))
Then you can take promise.future which is an instance of Future into your Play app.
To test it, you can do this:
Await.result(promise.future, Duration.Inf)
which will block awaiting the result, at which point you should see "results for http://corp/api" in the console.
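Since the question mentions Java as well: the same callback-to-future bridge exists in plain Java as CompletableFuture, where the callback simply completes the future. The fetch method below is a mock of a callback-style async API like CXF's, not a real CXF call.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class CallbackToFuture {
    // Mocked async API that reports its result through a callback
    // (here completing on another thread, like a real async client would).
    static void fetch(String url, Consumer<String> callback) {
        new Thread(() -> callback.accept("results for " + url)).start();
    }

    public static void main(String[] args) {
        CompletableFuture<String> promise = new CompletableFuture<>();
        fetch("http://corp/api", promise::complete); // callback completes the future

        // Non-blocking consumers would chain thenApply/thenAccept;
        // join() here is only to show the result in the demo.
        System.out.println(promise.join());
    }
}
```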

Example of Android USB Host Asynchronous Bulk Transfer

I am working with Android USB Host mode and would like to perform an asynchronous bulk transfer. I have so far been successfully using synchronous bulk transfers, but am having a little trouble grasping how the pieces come together for an asynchronous transfer. From the UsbRequest documentation (bold mine):
Requests on bulk endpoints can be sent synchronously via bulkTransfer(UsbEndpoint, byte[], int, int) or asynchronously via queue(ByteBuffer, int) and requestWait() [a UsbDeviceConnection method].
Ok, so does this mean I call queue() from the existing thread of execution and then requestWait() somewhere else, in another thread? Where does requestWait() get my logic to execute when the request completes? Most of the async work I have done has been in languages like JavaScript and Python, generally by passing a callback function as an argument. In Java I expected perhaps to pass an object that implements a specific method as a callback, but I can't see that happening anywhere. Perhaps my mental model of the whole thing is wrong.
Can someone provide an isolated example of sending an asynchronous bulk transfer?
Basically, the requestWait() method returns once a queued UsbRequest has completed. You can call it on the same thread or on another. Use the setClientData() and getClientData() methods to determine which request has just completed, assuming you had more than one outstanding!
You can queue multiple UsbRequests across multiple endpoints and then consume their completion status by repeatedly calling requestWait() until you have no more outstanding requests.
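The queue()-then-requestWait() control flow can be mimicked without Android hardware (this is an analogue, not the android.hardware.usb API): a BlockingQueue stands in for the connection, put() for the device completing a transfer, and take() for requestWait(), with a tag playing the role of setClientData()/getClientData().

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueRequestWait {
    record Request(int clientData, String result) {} // clientData mimics setClientData()

    public static void main(String[] args) throws InterruptedException {
        // Completed requests land here; take() stands in for requestWait().
        BlockingQueue<Request> completed = new ArrayBlockingQueue<>(8);

        // "queue()" two transfers; the device completes them asynchronously.
        for (int i = 1; i <= 2; i++) {
            int tag = i;
            new Thread(() -> {
                try { completed.put(new Request(tag, "transfer " + tag + " done")); }
                catch (InterruptedException ignored) {}
            }).start();
        }

        // Consume completions until no requests are outstanding,
        // using clientData to tell which request finished first.
        for (int outstanding = 2; outstanding > 0; outstanding--) {
            Request r = completed.take(); // blocks like requestWait()
            System.out.println("request " + r.clientData() + ": " + r.result());
        }
    }
}
```

Note the completion order is whatever the "device" delivers, which is exactly why the client-data tag is needed to match completions back to requests.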
