Example of Android USB Host Asynchronous Bulk Transfer - java

I am working with Android USB Host mode and would like to perform an asynchronous bulk transfer. I have so far been successfully using synchronous bulk transfers, but am having a little trouble grasping how the pieces come together for an asynchronous transfer. From the UsbRequest documentation (bold mine):
Requests on bulk endpoints can be sent synchronously via bulkTransfer(UsbEndpoint, byte[], int, int) or asynchronously via queue(ByteBuffer, int) and requestWait() [a UsbDeviceConnection method].
Ok, so does this mean I call queue() from the existing thread of execution and then requestWait() somewhere else, in another thread? Where does requestWait() get my logic from, to execute when the request completes? Most of the async work I have done has been in languages like JavaScript and Python, generally by passing a callback function as an argument. In Java I expected perhaps to pass an object that implements a specific method as a callback, but I can't see that happening anywhere. Perhaps my mental model of the whole thing is wrong.
Can someone provide an isolated example of sending an asynchronous bulk transfer?

Basically the requestWait() method is going to return once the queued UsbRequest has completed. You can do this on the same thread or on another. Use the setClientData() AND getClientData() methods to determine which request has just completed, assuming that you had more than one outstanding!
You can queue multiple UsbRequests across multiple endpoints and then consume their completion status by repeatedly calling requestWait() until you have no more outstanding requests.
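For example, here is a minimal sketch of one queue()/requestWait() round trip. It assumes you already have an open UsbDeviceConnection (connection) and a claimed bulk IN endpoint (endpoint); the "usb-read" tag is just an illustrative client-data value:

ByteBuffer buffer = ByteBuffer.allocate(endpoint.getMaxPacketSize());

UsbRequest request = new UsbRequest();
request.initialize(connection, endpoint);
request.setClientData("usb-read");            // tag so we can recognize it later

// queue() returns immediately; the transfer happens in the background
request.queue(buffer, buffer.capacity());

// ... do other work, or call requestWait() from another thread ...

// requestWait() blocks until *some* queued request on this connection completes
UsbRequest finished = connection.requestWait();
if (finished != null && "usb-read".equals(finished.getClientData())) {
    // for an IN endpoint the received bytes are now at the start of "buffer"
}
request.close();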

Related

Does Undertow support async I/O from an async source?

I have a scenario where I’m attempting to serve data in a non-blocking fashion which is sourced by an RxJava Observable (also non-blocking). I’m using the WriteListener callback provided by ServletOutputStream. I’m running into an issue where the write is throwing an IllegalStateException (java.lang.IllegalStateException: UT010035: Stream in async mode was not ready for IO operation) immediately after a successful isReady() check on the ServletOutputStream.
While looking deeper, I noticed this comment in the Undertow implementation of ServletOutputStream:
Once the write listener has been set operations must only be invoked on this stream from the write listener callback. Attempting to invoke from a different thread will result in an IllegalStateException.
Given that my data source is asynchronous, there are scenarios where the onWritePossible() callback will reach a state where there is no data immediately available and I would need to wait for more to be received from the source. In these cases, I would need to interact with the stream from the callback of my data source, which is going to be a different thread. The only other option would be to suspend the thread used to call onWritePossible() and wait for more data to arrive, but that would be a blocking operation, which defeats the whole purpose.
Is there another approach that I’m missing? The single thread requirement of Undertow doesn’t seem to be required by the Servlet 3.1 spec. From what I’ve read, other implementations appear to tolerate the multi-threaded approach given that the application coordinates the stream access synchronization.
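For reference, this is roughly the non-blocking write setup I'm describing (Servlet 3.1 API; pollNextChunk() is a placeholder for pulling whatever the Observable has already delivered into a buffer):

AsyncContext async = request.startAsync();               // servlet must be asyncSupported
ServletOutputStream out = response.getOutputStream();

out.setWriteListener(new WriteListener() {
    @Override
    public void onWritePossible() throws IOException {
        // Undertow expects writes to happen only from this callback's thread
        while (out.isReady()) {
            byte[] chunk = pollNextChunk();               // placeholder: data buffered from the source
            if (chunk == null) {
                return;                                   // nothing buffered yet - this is the problematic case
            }
            out.write(chunk);
        }
    }

    @Override
    public void onError(Throwable t) {
        async.complete();
    }
});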

How Do Applications handle Asynchronous Responses - via Callback

I have been doing Java for a few years but I have not had much experience with Asynchronous programming.
I am working on an application that makes SOAP web service calls to some synchronous web services, and currently the implementation of my consuming application is synchronous as well, i.e. my application's threads block while waiting for the response.
I am trying to learn how to handle these SOAP calls in an asynchronous way - just for the hell of it - but I have some high-level questions which I can't seem to find any answers to.
I am using CXF but my question is not specifically about CXF or SOAP, but higher-level, in terms of asynchronous application architecture I think.
What I want to know (working through a scenario) - at a high level - is:
So I have a Thread (A) running in my JVM that makes a call to a remote web service
It registers a callback method and returns a Future
Thread (A) has done its bit and gets returned to its pool once it has returned the Future
The remote web service response returns and Thread (B) gets allocated and calls the callback method (which generally populates the Future with a result I believe)
Q1. I can't get my head out of the blocking thread model - if Thread (A) is no longer listening on that network socket, then how does the response that comes back from the remote service get Thread (B) allocated to it - is it simply treated as a new request coming into the server/container, which then allocates a thread to service it?
Q2. Closely related to Q1 I imagine: if no Thread has the Future, or handler (with its callback method) on its stack, then how does the response from the remote web service get associated with the callback method it needs to call?
Or, in another way of asking, how does Thread B (now dealing with the response) get given a reference to the Future/Callback object?
Very sorry my question is so long - and thanks to anyone who gave their time to read through it! :)
I don't see why you'd add all this complexity by using asynchronous threading.
The way to design an asynchronous soap service:
You have one service sending out a response to a given client / clients.
Those clients work on the response given asynchronously.
When done, they would call another soap method to return their response.
The response would just be stored in a queue (e.g. a database table), without any extra logic. You'd have a "worker" service working on the incoming tasks. If a response is needed, another method on the other remote service would be called. I would store the requests as events in the database, to be handled asynchronously later by an event handler. See
Hexagonal Architecture:
https://www.youtube.com/watch?v=fGaJHEgonKg
Your Q1 and Q2 seem to have more to do with multithreading than they have to do with asynchronous calls.
The magic of asynchronous web service calls is that you don't have to worry about multithreading to handle blocking while waiting for a response.
It's a bit unclear from the question what the specific problem statement is (i.e., what you are hoping to have your application do while blocking or rather than blocking), but here are a couple ways that you could use asynchronous web service calls that will allow you to do other work.
For the following cases, assume that the dispatch() method calls Dispatch.invokeAsync(T msg, AsyncHandler handler) and returns a Future:
1) Dispatch multiple web service requests, so that they run in parallel:
If you have multiple services to consume and they can all execute independently, dispatch them all at once and process the responses when you have received them all.
ArrayList<Future<?>> futures = new ArrayList<Future<?>>();
futures.add(serviceToConsume1.dispatch());
futures.add(serviceToConsume2.dispatch());
futures.add(serviceToConsume3.dispatch());
// now wait until all services return
for (Future<?> f : futures) {
f.get();
}
// now use responses to continue processing
2) Polling:
Future<?> f = serviceToConsume.dispatch();
while(!f.isDone()) {
// do other work here
}
// now use response to continue processing
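For the callback style you asked about, the AsyncHandler overload can be used directly instead of polling. A rough sketch, assuming a JAX-WS Dispatch<Source> client (the payload type and dispatch setup are placeholders):

Future<?> f = dispatch.invokeAsync(requestPayload, new AsyncHandler<Source>() {
    @Override
    public void handleResponse(Response<Source> res) {
        try {
            Source result = res.get();   // does not block here - the response has already arrived
            // continue processing on the callback thread
        } catch (InterruptedException | ExecutionException e) {
            // a fault from the service surfaces here
        }
    }
});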

Write to GAE datastore asynchronously

In my Java app, sometimes my users do some work that requires a datastore write, but I don't want to keep the user waiting while the datastore is writing. I want to immediately return a response to the user while the data is stored in the background.
It seems fairly clear that I could do this by using GAE task queues, enqueueing a task to store the data. But I also see that there's an Async datastore API, which seems like it would be much easier than dealing with task queues.
Can I just call AsyncDatastoreService.put() and then return from my servlet? Will that API store my data without keeping my users waiting?
I think you are right that the Async calls seem easier. However, the docs for AsyncDatastore mention one caveat that you should consider:
Note: Exceptions are not thrown until you call the get() method. Calling this method allows you to verify that the asynchronous operation succeeded.
The "get" in that note is being called on the Future object returned by the async call. If you just return from your servlet without ever calling get on the Future object, you might not know for sure whether your put() worked.
With a queued task, you can handle the error cases more explicitly, or just rely on the automatic retries. If all you want to queue is datastore puts, you should be able to create (or find) a utility class that does most of the work for you.
Unfortunately, there aren't any really good solutions here. You can enqueue a task, but there are several big problems with that:
Task payloads are limited in size, and that size is smaller than the entity size limit.
Writing a record to the datastore is actually pretty fast, in wall-clock time. A significant part of the cost, too, is serializing the data, which you have to do to add it to the task queue anyway.
By using the task queue, you're creating more eventual consistency - the user may come back and not see their changes applied, because the task has not yet executed. You may also be introducing transaction issues - how do you handle concurrent updates?
If something fails, it could take an arbitrarily long time to apply the user's updates. In such situations, it probably would have been better to simply return an error to the user.
My recommendation would be to use the async API where possible, but to always write to the datastore directly. Note that you need to wait on all your outstanding API calls, as Peter points out, or you won't know if they failed - and if you don't wait on them yourself, the app server will wait on them before returning a response to the user.
If all you need is for the user to have a responsive interface while stuff churns away in the database, all you have to do is make an asynchronous call at the client level, i.e. do some Ajax that sends the write request, immediately updates the user's display, and then updates the view again in the Ajax callback with whatever you wish.
You can easily add GWT support to your GAE project (either via the Eclipse plugin or the Maven GAE plugin) and have the time of your life doing asynchronous stuff.

Java - networking - Best Practice - mixed synchronous / asynchronous commands

I'm developing a small client-server program in Java.
The client and the server are connected over one tcp-connection. Most parts of the communication are asynchronous (can happen at any time) but some parts I want to be synchronous (like ACKs for a sent command).
I use a Thread that reads commands from the socket's InputStream and raises an onCommand() event. The command itself is processed using the Command design pattern.
What would be a best-practice approach (in Java) to enable waiting for an ACK without missing other commands that could arrive at the same time?
con.sendPacket(new Packet("ABC"));
// wait for ABC_ACK
edit1
Think of it like an FTP connection, except that both data and control commands are on the same connection. I want to catch the response to a control command while the data flow in the background is running.
edit2
Everything is sent in blocks to enable multiple (different) transmissions over the same TCP connection (multiplexing).
Block:
1 byte - block's type
2 bytes - block's payload length
n bytes - block's payload
In principle, you need a registry of blocked threads (or better, the locks on which they are waiting), keyed with some identifier which will be sent by the remote side.
For asynchronous operation, you simply send the message and proceed.
For synchronous operation, after sending the message, your sending thread (or the thread which initiated this) creates a lock object, adds it with some key to the registry, and then waits on the lock until notified.
The reading thread, when it receives some answer, looks up the lock object in the registry, adds the answer to it, and calls notify(). Then it goes on to read the next input.
The hard work here is the proper synchronization to avoid deadlocks as well as missing a notification (because it comes back before we added ourselves to the registry).
I did something like this when I implemented the remote method calling protocol for our Fencing-applet. In principle RMI works the same way, just without the asynchronous messages.
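A rough sketch of that registry idea (Packet and sendPacket() are from your code; the request/ACK ids and the timeout are just illustrative):

class PendingReply {
    private Packet reply;

    synchronized Packet await(long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (reply == null) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) break;                    // timed out
            wait(remaining);
        }
        return reply;                                     // null if we timed out
    }

    synchronized void complete(Packet p) {
        reply = p;
        notifyAll();
    }
}

Map<Integer, PendingReply> pending = new ConcurrentHashMap<>();

// sending thread: register *before* sending, so an early reply cannot be missed
PendingReply slot = new PendingReply();
pending.put(requestId, slot);
con.sendPacket(new Packet("ABC"));
Packet ack = slot.await(5000);

// reading thread, inside onCommand(), when it sees an ACK block
PendingReply waiter = pending.remove(ackId);
if (waiter != null) waiter.complete(ackPacket);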
@Paulo's solution is one I have used before. However, there may be a simpler solution.
Say you don't have a background thread reading results from the connection. What you can do instead is use the current thread to read any results.
// Asynchronous call
conn.sendMessage("Async-request");
// server sends no reply.
// Synchronous call.
conn.sendMessage("Sync-request");
String reply = conn.readMessage();

How to implement blocking request-reply using Java concurrency primitives?

My system consists of a "proxy" class that receives "request" packets, marshals them and sends them over the network to a server, which unmarshals them, processes, and returns some "response packet".
My "submit" method on the proxy side should block until a reply is received to the request (packets have ids for identification and referencing purposes) or until a timeout is reached.
If I was building this in early versions of Java, I would likely implement in my proxy a collection of "pending messages ids", where I would submit a message, and wait() on the corresponding id (with a timeout). When a reply was received, the handling thread would notify() on the corresponding id.
Is there a better way to achieve this using an existing library class, perhaps in java.util.concurrency?
If I went with the solution described above, what is the correct way to deal with the potential race condition where a reply arrives before wait() is invoked?
The simple way would be to have a Callable that talks to the server and returns the Response.
// does not block
Future<Response> response = executorService.submit(makeCallable(request));
// wait for the result (blocks)
Response r = response.get();
Managing the request queue, assigning threads to the requests, and notifying the client code is all hidden away by the utility classes.
The level of concurrency is controlled by the executor service.
Every network call blocks one thread in there.
For better concurrency, one could look into using java.nio as well (but since you are talking to same server for all requests, a fixed number of concurrent connections, maybe even just one, seems to be sufficient).
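A slightly fuller sketch of the same idea with the timeout from the question (makeCallable() wraps the marshal/send/receive round trip, as above):

ExecutorService executorService = Executors.newFixedThreadPool(4);   // level of concurrency

Future<Response> response = executorService.submit(makeCallable(request));
try {
    Response r = response.get(5, TimeUnit.SECONDS);   // blocks for at most 5 seconds
    // use r
} catch (TimeoutException e) {
    response.cancel(true);                            // give up on this request
} catch (InterruptedException | ExecutionException e) {
    // transport or marshalling failures surface here
}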
