Asynchronous programming using Java

Where can I find asynchronous programming examples using Java? I'm interested in finding patterns in asynchronous programming for building applications that are responsive (preventing applications that periodically hang and stop responding to user input, and server applications that fail to respond to client requests in a timely fashion) and scalable.
In particular, it would be helpful to see a sample that performs I/O operations (such as file reads/writes, web requests, and database queries) and also involves a lot of CPU processing, like a shopping suggester in a web page.
Which Java libraries can help in determining when an application's responsiveness becomes unpredictable because the application's thread performs I/O requests? While a thread waits on an I/O request, the application is basically giving up control of that thread's processing to the I/O device (a hard drive, a network, or whatever).

In a GUI, you could use threads to perform background tasks.
Java supports non-blocking I/O in the New I/O API (NIO).
If your question is more architecturally oriented, this book offers an in-depth discussion of asynchronous patterns: Patterns of Enterprise Application Architecture, by Martin Fowler.
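To make the GUI point concrete, here is a minimal sketch (not taken from the original answer) of pushing slow work off the Event Dispatch Thread with SwingWorker; the window, label text, and simulated delay are placeholders:

import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
import javax.swing.SwingWorker;

// Minimal sketch: the slow work runs on a worker thread, the UI stays responsive.
public class BackgroundTaskDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Async demo");
            JLabel label = new JLabel("Loading...");
            frame.add(label);
            frame.setSize(300, 100);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);

            new SwingWorker<String, Void>() {
                @Override
                protected String doInBackground() throws Exception {
                    Thread.sleep(2000); // stands in for slow I/O or heavy computation
                    return "Result is ready";
                }

                @Override
                protected void done() {
                    try {
                        label.setText(get()); // runs back on the Event Dispatch Thread
                    } catch (Exception e) {
                        label.setText("Failed: " + e);
                    }
                }
            }.execute();
        });
    }
}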

For examples performing asynchronous operations with emphasis on non-blocking IO on files, you may check some samples here: https://github.com/javasync/idioms (disclaimer I am the author).
I use these samples in an introduction to asynchronous programming in Java, where we explore callback-based approaches, CompletableFuture, and finally reactive streams.

Which Java libraries can help in determining when an application's responsiveness becomes unpredictable because the application's thread performs I/O requests? While a thread waits on an I/O request, the application is basically giving up control of that thread's processing to the I/O device (a hard drive, a network, or whatever).
If I understand you correctly, you are asking for some library that examines other threads to determine if they are blocked in I/O calls.
I don't think that this is possible in a standard JVM. Furthermore, I don't think that this would necessarily be sufficient to guarantee "responsiveness".

If you are using some kind of blocking I/O operation (for example, read on an InputStream), you can put it into a thread; the simplest solution is then to join on the thread with a timeout:
MyThread myThread = new MyThread(); // the thread performing the blocking I/O
myThread.start();
myThread.join(10000); // wait at most 10 seconds for the thread to finish
This join will then wait for at most 10 seconds. After that time you can just ignore the thread, ...
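A slightly fuller, self-contained sketch of this timed-join idea (the worker class and the use of System.in are just illustrative stand-ins for whatever blocking read you have):

import java.io.InputStream;

// Illustrative worker thread that performs a blocking read.
class BlockingReadThread extends Thread {
    private final InputStream in;
    volatile int firstByte = -1;

    BlockingReadThread(InputStream in) { this.in = in; }

    @Override
    public void run() {
        try {
            firstByte = in.read(); // may block indefinitely
        } catch (Exception e) {
            // ignored for the sketch
        }
    }
}

public class TimedJoinDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingReadThread worker = new BlockingReadThread(System.in);
        worker.start();
        worker.join(10_000); // wait at most 10 seconds
        if (worker.isAlive()) {
            System.out.println("Still blocked after 10 seconds, giving up on the result");
        } else {
            System.out.println("Read byte: " + worker.firstByte);
        }
    }
}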
You can also use the Decorator pattern. You can read more here.

In a web environment, you can make use of the Java EE 6 asynchronous feature.
Take a look at
http://docs.oracle.com/javaee/6/tutorial/doc/gkkqg.html
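For illustration only, a minimal sketch of what that Java EE 6 feature looks like in an EJB (the bean and method names are made up, not taken from the tutorial):

import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

// Minimal sketch: the container runs this method on a separate thread and the
// caller immediately gets a Future instead of waiting for the result.
@Stateless
public class ReportBean {

    @Asynchronous
    public Future<String> generateReport(String reportId) {
        String result = "report-" + reportId; // stands in for the real work
        return new AsyncResult<>(result);
    }
}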

Related

Asynchronous File I/O via POSIX AIO or Windows Overlapped IO in Java

System.IO.File in .NET and .NET Core has a family of Read...Async() methods, all of which return either Task<byte[]> or Task<string> (Task<T> is .NET's equivalent of Java's Future<T>).
This looks largely equivalent to AsynchronousFileChannel APIs (which either consume a CompletionHandler or return a Future), with one major difference.
AsynchronousFileChannel uses a managed background thread to perform asynchronous I/O (the thread may be provided either by the default thread pool (sun.nio.ch.ThreadPool) or by the ExecutorService explicitly specified during channel creation).
FileStream implementation in .NET, on the other hand, passes FileOptions.Asynchronous flag to the underlying operating system (see also Synchronous and Asynchronous I/O), doesn't spawn any managed background threads and uses what is called an Overlapped I/O.
Questions:
Is there any (existing or planned) file I/O API in Java which would use Overlapped I/O on Windows and POSIX AIO on Unices? Update: the Windows-specific Java runtime features sun.nio.ch.WindowsAsynchronousFileChannelImpl, which is exactly an abstraction layer on top of Overlapped I/O.
Are there any plans to provide java.nio.channels.SelectableChannel implementations for File I/O? If no, what are the technical limitations?
It is not really possible; the whole I/O API would have to be re-implemented. NIO means non-blocking I/O, and it is not the same as asynchronous I/O. Non-blocking I/O is what Java implements, and, long story short, that means the OS has no way of notifying the runtime that an operation has completed. Instead, Java uses the select() or poll() system calls to check whether data is available.
I could talk about it at length, but the usual comparison diagram of blocking, non-blocking and asynchronous I/O says it better than words.
That is why in Java a separate thread is required to constantly poll: check, check, check, check...
I don't know the .NET platform, but if what you posted is correct it is utilizing asynchronous I/O, the last of those models. (But I don't trust anything that comes from Microsoft.)
Hope this answers your question. Here is also some additional reading material:
https://stackoverflow.com/a/2625565/8951886
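For reference, the completion-handler style that this thread contrasts with select()/poll()-based NIO looks roughly like this (a minimal sketch; the file path, buffer size, and the sleep at the end are placeholders for demo purposes):

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Minimal sketch of AsynchronousFileChannel with a CompletionHandler callback.
public class AsyncFileReadDemo {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("data.txt"); // placeholder path
        AsynchronousFileChannel channel =
                AsynchronousFileChannel.open(path, StandardOpenOption.READ);
        ByteBuffer buffer = ByteBuffer.allocate(4096);

        channel.read(buffer, 0, buffer, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer bytesRead, ByteBuffer buf) {
                System.out.println("Read " + bytesRead + " bytes");
            }

            @Override
            public void failed(Throwable exc, ByteBuffer buf) {
                exc.printStackTrace();
            }
        });

        Thread.sleep(1000); // keep the JVM alive long enough for the callback in this demo
        channel.close();
    }
}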

Why Java Socket doesn't support interruption handling?

I have been thinking about why JDBC offers only blocking operations and why I can't register a listener with some hypothetical event handler like onResultSetArrived(ResultSet rs). Why do I have to block one thread for each JDBC query?
After a while I dove into Java Sockets (I suppose JDBC is built on top of them) and realised that there isn't any event handling there either. The only way to get a non-blocking read is through the available() method, but this is very inefficient as it has to be checked periodically in a loop.
As far as I'm aware, interrupts are a fundamental mechanism in a PC; they go all the way from the hardware up through the operating system. In Java they could be surfaced as an event-driven approach to reading a value from a Socket.
Now, my question is: am I missing something and some workaround exists, or is the current architecture in Java really one thread per blocking operation? And if so, isn't that inefficient?
In Java, you can have many threads. A thread does its work until it is blocked somewhere (typically on a mutex or an I/O operation). Of course, this does not block other threads.
The fundamental scenario of multithreaded applications is that you use multiple threads when waiting for a blocked thread would introduce too much waiting. The definition of "too much" here depends entirely on you, but in general this is how you achieve better performance through better utilization of resources.
There are some limitations in how threads in Java work, however. Most, if not all, of them arise when the thread is blocked somewhere "outside" of Java, such as in an OS call or an external (native) library. Theoretically, if native code blocks a thread, Java can not do anything about it. Normally, this should not be a problem unless the native code has a bug.
So in the case of a blocking JDBC response, you would create a new thread which would do other work while first thread is waiting for database to complete. Alternatively, you could make a thread just for doing JDBC. You could make it exactly like you want (with listeners etc.) except for limitations imposed by OS. So it's possible, but it's probably not provided out-of-the-box by JDBC drivers. There is a lot of infrastructure already in core Java which you might find useful (thread pools, workers, synchronized collections). But as with any multithreading, you need to be very careful with accessing data from different threads simultaneously.
Since Java 7, there is also support for asynchronous I/O (NIO.2 asynchronous channels). This is almost exactly what you are describing: I/O is offloaded to the OS, so your operations return immediately and you get a callback when the operation is finished. However, not all libraries support it. For my work, I have never had a reason to use it, because I could always implement the same stuff with my own threads at least as well.
If the question is whether the "current architecture in Java really is one thread per one blocking operation", and by "blocking operation" you mean "database operation", then the answer is no. Most database drivers available for Java are currently JDBC-based and do work that way. But there are usable alternatives (https://spring.io/blog/2016/11/28/going-reactive-with-spring-data) and more on the way (https://blogs.oracle.com/java/jdbc-next:-a-new-asynchronous-api-for-connecting-to-a-database , https://dzone.com/articles/spring-5-webflux-and-jdbc-to-block-or-not-to-block). For how this works, see How is ReactiveMongo implemented so that it is considered non-blocking?
For JDBC there are also ways to wrap the JDBC calls (Wrapping blocking I/O in Project Reactor, Spring WebFlux and reading from database) and projects pursuing this approach (https://dzone.com/articles/myth-asynchronous-jdbc).
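One common shape of the wrapping approach mentioned above is to offload the blocking JDBC call to a dedicated executor and hand back a CompletableFuture. A minimal sketch, with the JDBC URL, query, and pool size purely illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch: the JDBC call still blocks, but only a thread of this dedicated
// pool, never the caller. URL, query, and pool size are illustrative.
public class AsyncJdbcDemo {
    private static final ExecutorService JDBC_POOL = Executors.newFixedThreadPool(10);

    static CompletableFuture<List<String>> findUserNames() {
        return CompletableFuture.supplyAsync(() -> {
            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test");
                 PreparedStatement ps = con.prepareStatement("SELECT name FROM users");
                 ResultSet rs = ps.executeQuery()) {
                List<String> names = new ArrayList<>();
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
                return names;
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }, JDBC_POOL);
    }

    public static void main(String[] args) {
        findUserNames()
                .thenAccept(names -> System.out.println("Got " + names.size() + " names"))
                .join(); // only so the demo waits before exiting
        JDBC_POOL.shutdown();
    }
}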

Callable / Runnable Controller methods: What's the point?

In Spring, you can let controllers return a Callable<T> instead of T, which will immediately release the request-processing thread and compute the result in an MvcAsync thread managed by the WebAsyncManager. You just need to wrap the controller method body in return () -> {... return result; };. Very easy!
But what is the point? What is the difference between
a) having 500 request processing threads and letting them do all the work and
b) having just a few request processing threads and executing all requests in Callables with a concurrencyLimit of 500?
The second option b) actually looks worse to me, since there is overhead involved in managing the whole MvcAsync magic.
I get how you can leverage @Async methods to execute two methods at once and return a result once both have finished, but I obviously haven't understood Callable controller methods.
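For concreteness, the wrapping the question describes looks roughly like this; a minimal Spring MVC sketch in which the endpoint path and the simulated slow work are made up:

import java.util.concurrent.Callable;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal sketch: returning Callable releases the servlet thread right away;
// the body runs later on an MvcAsync thread managed by the WebAsyncManager.
@RestController
public class ReportController {

    @GetMapping("/report")
    public Callable<String> report() {
        return () -> {
            Thread.sleep(5000); // stands in for slow I/O
            return "report ready";
        };
    }
}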
Suppose you have a Tomcat server that has 10 threads listening for client requests. If you have a client that invokes an endpoint that takes 5 seconds to respond, that client holds that thread for those 5 seconds. Add a few concurrent clients and you will soon run out of threads during those 5 seconds.
The situation is even worse, because during most of those 5 seconds your request is doing mostly I/O, which means you just block your thread to do nothing but waiting.
So, the ability of Spring to use Callable, CompletableFuture or ListenableFuture as the return types of controllers is precisely there to allow programmers to overcome this kind of problem to a certain extent.
Fundamentally, just returning one of these types only releases the web server thread, making it available to be used by another client. So you get to serve more clients in the same amount of time. However, that by itself may not be enough to implement a non-blocking I/O (aka NIO) API.
Most of these features come from the core functionality offered by Servlet API and Servlet Async IO, which Spring should probably use under the hood. You may want to take a look at the following interesting videos that helped me understand this from the ground up:
Scale your Web Applications with Servlet 3.1 Async I/O, Part 1
Scale your Web Applications with Servlet 3.1 Async I/O, Part 2
Into the Wild with Servlet Async IO
Those videos explain the idea behind Servlet Async I/O and the final goal of implementing NIO Web apps as well.
The holy grail here is to reach a point in which the threads in your thread pool are never blocked waiting for some I/O to happen. They are either doing some CPU bound work, or they're back in the thread pool where they can be used by some other client. When you do I/O you don't introduce wait, you register some form of callback that will be used to tell you when the results are ready, and in the meantime you can use your valuable CPU cores to work on something else. If you think it over, a Callable, a CompletableFuture or a ListenableFuture are that sort of callback objects that Spring infrastructure is using under the hood to invoke their functionality to attend a request in a separate thread.
This increases your throughput, since you can serve more clients concurrently simply by optimizing the use of your valuable CPU resources, particularly if you do it in a NIO way. As you can imagine, just moving the request to another thread, although beneficial (since you free a valuable Tomcat thread), would still be blocking, and therefore you'd just be moving your problem to another thread pool.
I believe this fundamental principle is also behind a good part of the work that the Spring team is currently doing in Project Reactor since in order to leverage the power of this type of features you need to introduce asynchronous programming in your APIs and that is hard to do.
That's also the reason for the proliferation of frameworks like Netty, RxJava, Reactive Streams Initiative and the Project Reactor. They all are seeking to promote this type of optimization and programming model.
There is also an interesting movement of new frameworks that leverage these powerful features and try to compete with, or even complement, Spring's still-limited functionality in that area. I'm talking of interesting projects like Vert.x and Ratpack, and, now that we're at it, this feature is one of the major selling points of Node.js as well.

Advantage of asynchronous libraries

I was going through the twitter finagle library which is an asynchronous service framework in scala and I have some question regarding asynchronous libraries in general.
So as I understand it, the advantage of an asynchronous library using a callback is that the application thread is freed and the library calls the callback as soon as the request completes over the network. And in general the application threads might not have a 1:1 mapping with the library threads.
The service call in the library thread is blocking, right?
If that's the case, then we are just making the blocking call on some other thread. This frees the application thread, but some other thread is doing the same work. Can't we just increase the number of application threads to get that advantage?
It's possible that I misunderstand how asynchronous libraries are implemented in Java/Scala or on the JVM in general. Can anyone help me understand how this works?
The async approach is a useful abstraction: your CPU-intensive thread offloads a long-running IO operation to a dedicated thread (possibly one belonging to a library). When the IO is done, some other thread will receive the IO result.
Using the blocking approach, you waste CPU ticks on threads that are sitting in a blocking IO call. And adding more threads to ensure there is always a free thread to do CPU work means wasting OS/JVM resources on re-scheduling.
Blocking IO is used because it's simpler to program (no need to synchronize caller and callback).
Actually, IO is only one use case where the async style is useful. In general, whenever you feel the task at hand would benefit from being split into several activities that can run concurrently and communicate with each other, that is a case for the async style. Examples not connected to IO:
GUI: GUI event loop thread passes user input to background threads, and they do necessary processing;
Utilizing modern multi-core CPUs: if your task can be split in several subtasks, you can run these in separate threads, utilizing all available cores. Naturally, you'll need to gather results of subtasks, and you'll need async style here.
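A minimal sketch of that style in plain Java, using CompletableFuture to offload the blocking part and attach a callback (the pool size and the simulated network delay are arbitrary):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch: the blocking "network call" runs on a dedicated pool and the
// callbacks fire when the result is ready, so the calling thread never parks waiting.
public class AsyncStyleDemo {
    public static void main(String[] args) {
        ExecutorService ioPool = Executors.newFixedThreadPool(4);

        CompletableFuture<Void> pipeline = CompletableFuture
                .supplyAsync(() -> {
                    sleep(500); // stands in for a network call
                    return "payload";
                }, ioPool)
                .thenApply(String::toUpperCase) // CPU work on the result
                .thenAccept(r -> System.out.println("Got: " + r));

        System.out.println("Caller is free to do other work in the meantime");
        pipeline.join(); // only so the demo waits before exiting
        ioPool.shutdown();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}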

threading concepts in java

I am an intermediate Java developer. I have recently joined a company, and they have asked me to give a session on "threading concepts in Java with a real example". As I don't have hands-on experience with threading, all I can prepare so far is slides on how threads can be implemented using the Thread class or the Runnable interface.
Can anybody please help me out with a real-world scenario for threads and its implementation?
Thanks in advance
I would recommend Brian Goetz's "Java Concurrency in Practice". It talks about features added to the JDK above and beyond Thread and Runnable that will make your life better.
In 2016 it's a better idea to dig into the java.util.concurrent package, JDK 8 lambdas, and parallel streams. No one should be trying to write multithreaded code with raw Thread unless they know what they're doing. We've been given better abstractions - use them.
I'd probably start with Sun's (now Oracle's) excellent documentation on concurrency.
As an example, you can create something like a banking application where you have a shared data structure (accounts), and you have multiple threads operating on the account (performing withdrawals and deposits).
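A minimal sketch of that banking example (amounts, iteration counts, and class names are arbitrary):

// Minimal sketch: one shared account, two threads depositing and withdrawing
// concurrently; synchronized methods keep the balance consistent.
public class BankDemo {
    static class Account {
        private long balance;

        synchronized void deposit(long amount) {
            balance += amount;
        }

        synchronized boolean withdraw(long amount) {
            if (balance < amount) {
                return false;
            }
            balance -= amount;
            return true;
        }

        synchronized long balance() {
            return balance;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account account = new Account();
        Thread depositor = new Thread(() -> {
            for (int i = 0; i < 1000; i++) account.deposit(1);
        });
        Thread withdrawer = new Thread(() -> {
            for (int i = 0; i < 1000; i++) account.withdraw(1);
        });

        depositor.start();
        withdrawer.start();
        depositor.join();
        withdrawer.join();

        System.out.println("Final balance: " + account.balance());
    }
}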
The question is too generic to answer, but here are a few details. Threads are used to run multiple things in parallel (in theory only; in practice it depends on a lot of other factors like the number of CPUs, the number of cores, etc.). Multi-threading has gone through a lot of improvements in the JDK since its inception. You can read the tutorial here: http://download.oracle.com/javase/tutorial/essential/concurrency/
A real-life example is a telecom SCP (service control point) application that receives a lot of requests (on the order of 400 per second). The application that handles the requests employs a master-slave configuration. There is a thread pool, each thread of which is waiting for a signal to run.
The master thread receives the request, the request data is posted into some object that the worker thread reads, and then the thread is signalled to run. When the processing is finished, the worker thread is returned to the thread pool.
There can be a flag that reports the status of a thread, for example busy, idle, bad, etc.
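A rough sketch of that arrangement using a thread pool, with the request count, pool size, and the simulated processing purely illustrative:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Rough sketch: the main thread plays the master, receiving requests and handing
// each one to a pooled worker thread for processing.
public class MasterWorkerDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(8);

        for (int i = 0; i < 400; i++) { // stands in for incoming requests
            final int requestId = i;
            workers.submit(() -> {
                // a worker thread handles the request
                System.out.println(Thread.currentThread().getName()
                        + " handled request " + requestId);
            });
        }

        workers.shutdown();
        workers.awaitTermination(10, TimeUnit.SECONDS);
    }
}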
