selecting among multiple sockets that are ready to be read from - java

I am writing a server-client application. I have a server that holds several sockets obtained from the accept() method of ServerSocket. I want to read from these sockets, but I don't necessarily know which socket is ready to be read from. I need some kind of selector that will select one of the sockets that are ready to be read from, so that I can read the data it sends.
Thanks.

You have basically two options to make it work:
Have a dedicated thread per accepted socket. This is necessary because 'regular' socket I/O is blocking: you cannot selectively handle multiple sockets from a single thread, and since there is no 'peek' functionality, you always risk getting blocked when you invoke read. With a dedicated thread per socket you are interested in reading, a blocking read will not block any other operations (threads).
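A minimal sketch of the thread-per-socket approach (the class name, port, and handling logic are illustrative only):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Thread-per-socket: each accepted connection gets its own thread,
    // so a blocking read only stalls that one connection.
    public class ThreadPerSocketServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9000)) {      // port is arbitrary
                while (true) {
                    Socket client = server.accept();                  // blocks until a client connects
                    new Thread(() -> handle(client)).start();
                }
            }
        }

        private static void handle(Socket client) {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {              // blocking read; only this thread waits
                    System.out.println(client.getRemoteSocketAddress() + ": " + line);
                }
            } catch (IOException e) {
                // connection dropped; nothing more to do in this sketch
            }
        }
    }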
Use NIO. NIO allows for non-blocking I/O operations and provides basically exactly what you asked for: a Selector.
If you do decide to go the NIO way, I would recommend checking out MINA and Netty. I've found them much easier to work with than plain NIO. Not only do you get a nicer API to work with, but MINA, at least, also had workarounds for some nasty NIO bugs.
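If you go the Selector route with plain NIO, the core loop looks roughly like this (port and buffer size are arbitrary, and error handling is omitted):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // One thread services every connection, reading only from channels
    // the Selector reports as ready.
    public class SelectorServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(9000));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(4096);
            while (true) {
                selector.select();                                    // blocks until something is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int n = client.read(buffer);                  // will not block: the key said it is readable
                        if (n == -1) {
                            key.cancel();
                            client.close();
                        } else {
                            buffer.flip();
                            // ... process buffer ...
                        }
                    }
                }
            }
        }
    }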

Related

Why does the SSLSocket write operation not have a timeout?

In Java, the write operation on the SSLSocket API is blocking, and the write operation does not support a timeout either.
Can someone please explain?
Can there be a situation where the write operation blocks a thread forever? I checked on the Internet and it seems that there is a possibility of blocking forever.
How can I add a timeout for the write operation?
My application creates two threads: one for reading and one for writing.
1. Can there be a situation where the write operation blocks a thread forever? I checked on the Internet and it seems that there is a possibility of blocking forever.
Yes, there can. Though not literally forever :-)
2. Can someone please suggest how we can add a timeout for the write operation?
You cannot do it with Java's implementation of sockets / SSL sockets, etcetera. Java sockets support connect timeouts and read timeouts, but not write timeouts.
See also: How can I set Socket write timeout in Java?
(Why? Well, socket write timeouts were requested in bug ID JDK-4031100 back in 1997, but the bug was closed with status "WontFix". Read the link for the details.)
The alternatives include:
Use a Timer to implement the timeout, and interrupt the thread or close the Socket if the timer goes off. Note that both interrupting and closing will leave you in a state where you need to abandon the socket.
Use NIO selectors and non-blocking I/O.
Because:
If such a facility is needed at all, it is needed at the TCP level, not just the SSL level.
There is no API for it at the TCP level, and I don't mean just in Java: there is no C level API for it either, except maybe on a couple of platforms.
If you added it at the SSL level, a write timeout would leave the connection in an indeterminate state: you couldn't know how much data had been transmitted, so you couldn't maintain integrity at the SSL level, which means the connection would have to be closed.
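To illustrate the first alternative listed above (a Timer that closes the Socket), here is a rough sketch; the method name and timeout value are invented for the example, and as noted, the socket has to be abandoned once the timer has fired:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.util.Timer;
    import java.util.TimerTask;

    // A "write timeout" simulated with a Timer: if the write has not completed
    // within the deadline, the socket is closed, which makes the blocked write
    // fail with an IOException. The socket cannot be reused afterwards.
    public class TimedWrite {
        public static void writeWithTimeout(Socket socket, byte[] data, long timeoutMillis)
                throws IOException {
            Timer timer = new Timer(true);
            TimerTask killer = new TimerTask() {
                @Override
                public void run() {
                    try {
                        socket.close();              // unblocks the stuck write
                    } catch (IOException ignored) {
                    }
                }
            };
            timer.schedule(killer, timeoutMillis);
            try {
                OutputStream out = socket.getOutputStream();
                out.write(data);                      // may block; the timer is the escape hatch
                out.flush();
            } finally {
                killer.cancel();
                timer.cancel();
            }
        }
    }

Interrupting the writing thread instead of closing the socket has the same consequence: the connection state is unknown afterwards, so the socket must be abandoned either way.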
To address your specific questions:
Can there be a situation where the write operation blocks a thread forever? I checked on the Internet and it seems that there is a possibility of blocking forever.
Yes. I've seen an application blocked for several days in such a situation. Although not, as @StephenC rightly says, forever. We haven't lived that long yet.
How can I add a timeout for the write operation?
You can do it at the TCP level with non-blocking I/O and a Selector, and you can layer an SSLEngine on top of that to get SSL, but it is a tedious and highly error-prone exercise that many have tried: few have succeeded. Not for the faint-hearted.

Thread-per-request tcp server

I am just trying to understand how to write a thread-per-request TCP server in Java.
I have already written a thread-per-connection server, that runs serverSocket.accept() and creates a new thread each time a new connection comes in.
How could this be modified into a thread-per-request server?
I suppose the incoming connections could be put into some sort of queue, but how would you know which one has issued a request & is ready for service?
I suspect that NIO is necessary here, but I am not sure.
Thanks.
[edit]
To be clear - The original "server" is just a loop that I have written that waits for a connection and then passes it to a new thread.
The lecturer has mentioned "thread-per-request" architecture, and I was wondering how it worked "under the hood".
My first idea about how it works may be completely wrong.
You can use a Selector to achieve your goal. Here is a good example you can refer to.
You can use plain IO or blocking NIO (or non-blocking NIO, or asynchronous NIO.2). You can have multiple threads per connection (or a shared worker thread pool), but unless those threads are waiting for slow services such as databases, this might not be any faster (and it can be much slower if you want low latency).
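One way to picture "thread-per-request" over blocking IO is a thin reader per connection that hands each complete request to a shared worker pool; a simplified sketch, assuming a one-line-per-request protocol (names and pool size are illustrative):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Thread-per-request: a lightweight reader per connection pulls whole
    // requests (one line each here) off the socket, and every request is
    // handed to a shared worker pool instead of tying up a connection thread.
    public class ThreadPerRequestServer {
        private static final ExecutorService WORKERS = Executors.newFixedThreadPool(8);

        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9000)) {
                while (true) {
                    Socket client = server.accept();
                    new Thread(() -> readRequests(client)).start();
                }
            }
        }

        private static void readRequests(Socket client) {
            try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String request;
                while ((request = in.readLine()) != null) {
                    String r = request;
                    WORKERS.submit(() -> out.println("processed: " + r));   // one task per request
                }
            } catch (IOException e) {
                // client went away
            }
        }
    }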

What's the point of using UDP with NIO?

NIO and TCP make a great pair for handling many connections. Since a new connection needs to be opened for each new client, each of those clients would typically need its own thread for blocking I/O operations. NIO addresses that problem by allowing data to be read when it is available, rather than blocking until it arrives. But what about UDP?
I mean, connectionless UDP does not have the blocking nature of TCP associated with it, because of how the protocol is designed (send it and forget it, basically). If I decide to send some data to some address, it will be sent without delay (on the server end). Likewise, if I want to read data, I can just receive individual packets from different sources. I don't need many connections to many places, with many threads to deal with each of them.
So, how does NIO and selectors enhance UDP? More specifically, when would one prefer to use UDP with NIO rather than the ol' java.net package?
Well, the DatagramSocket.receive(...) method is documented as a blocking operation. So, for instance, if you had one thread trying to handle packets from N different sockets, you would need to use NIO and selectors. Similarly, if the thread had to multiplex checking for new packets with other activities, you might do this.
If you don't have these or similar requirements, then selectors won't help. But that's no different to the TCP case. You shouldn't use selectors with TCP if you don't need them, because it potentially adds an extra system call.
(On Linux, in the datagram case, you'd do a select syscall followed by a recv ... instead of just a recv.)
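As an illustration of the multi-socket case, one thread watching several DatagramChannels through a Selector might look roughly like this (the ports and buffer size are arbitrary):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.SocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.DatagramChannel;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.util.Iterator;

    // One thread multiplexes reads across several UDP sockets, which is the
    // case where selectors buy you something over a blocking receive() per socket.
    public class UdpSelectorExample {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            for (int port : new int[] {9001, 9002, 9003}) {
                DatagramChannel channel = DatagramChannel.open();
                channel.bind(new InetSocketAddress(port));
                channel.configureBlocking(false);
                channel.register(selector, SelectionKey.OP_READ);
            }

            ByteBuffer buffer = ByteBuffer.allocate(1500);
            while (true) {
                selector.select();                                    // wait until any socket has a datagram
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    DatagramChannel channel = (DatagramChannel) key.channel();
                    buffer.clear();
                    SocketAddress sender = channel.receive(buffer);   // does not block: a packet is waiting
                    buffer.flip();
                    System.out.println("got " + buffer.remaining() + " bytes from " + sender);
                }
            }
        }
    }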
But if you're only dealing with one DatagramSocket, wouldn't the receive method read packets immediately as they arrive, regardless of the fact that they're from a different computer?
If you are listening on one socket for datagrams from "everyone" then yes. If you have different sockets for different computers then no.
And as for the TCP comment: sometimes the use of a selector is justified simply by the fact that it is very resource-demanding to have thousands of threads, as would be required by a blocking TCP server.
We weren't discussing that case. But yes, that is true. And the same is true if you have thousands of threads blocking on UDP receives.
My point was that if you don't have lots of threads, or if it doesn't matter whether a thread blocks, then NIO doesn't help. In fact, it may reduce performance.
NIO removes the necessity for threads altogether. It lets you handle all your clients in one thread, including both TCP and UDP clients.
connectionless UDP does not have the blocking nature of TCP associated with it
That's not true. Receives still block, and so can sends, at least in theory.

Benefits of Netty over basic ServerSocket server?

I need to create a relatively simple Java tcp/ip server and I'm having a little trouble determining if I should use something like Netty or just stick with simple ServerSocket and InputStream/OutputStream.
We really just need to listen for a request, then pass the new client Socket off to some processing code in a new thread. That thread will terminate once the processing is complete and the response is sent.
I like the idea of pipelines, decoders, etc. in Netty, but for such a simple scenario it doesn't seem worth the added up-front development time. It seems like a bit of overkill for our initial requirements, but I'm a little nervous that there are lots of things I'm not considering. What, if any, are the benefits of Netty for such simple requirements? What am I failing to consider?
The main advantage of Netty over simply reading from and writing to sockets using streams is that Netty supports non-blocking, asynchronous I/O (using Java's NIO API). When you use streams to read and write from sockets (and you start a new thread for each connection accepted from a ServerSocket), you are using blocking, synchronous I/O.
The Netty approach scales much better, which is important if your system needs to be able to handle many (thousands) of connections at the same time. If your system does not need to scale to many simultaneous connections, it might not be worth the trouble to use a framework like Netty.
Some more background information: Threads are relatively expensive resources in an operating system. Each thread needs memory for the stack (which can be for example 2 MB in size). When you create thousands of threads, this is going to cost a lot of memory; also, operating systems have limits on the number of threads that can be created. So you don't want to start a new thread for each accepted connection. The idea of asynchronous I/O is to decouple the threads from the connections (no one-to-one relation). There can be many more connections than threads, and whenever some event happens on one of the connections (for example, data is received), a thread from a thread pool is temporarily used to handle the event.
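For comparison, a minimal echo server written against Netty (4.x assumed) looks roughly like this; the boss/worker event-loop groups are what decouple threads from connections as described above, and the port and handler are illustrative:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.handler.codec.string.StringDecoder;
    import io.netty.handler.codec.string.StringEncoder;

    // A small, fixed number of event-loop threads serves all connections
    // instead of one thread per socket.
    public class NettyEchoServer {
        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup boss = new NioEventLoopGroup(1);       // accepts connections
            EventLoopGroup workers = new NioEventLoopGroup();     // handles I/O events
            try {
                ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(
                                new StringDecoder(),
                                new StringEncoder(),
                                new SimpleChannelInboundHandler<String>() {
                                    @Override
                                    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                        ctx.writeAndFlush("echo: " + msg);
                                    }
                                });
                        }
                    });
                bootstrap.bind(9000).sync().channel().closeFuture().sync();
            } finally {
                boss.shutdownGracefully();
                workers.shutdownGracefully();
            }
        }
    }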
I think the benefits of using Netty are not immediate but actually come later, when requirements change and maintenance becomes more complex for your project. Netty brings a built-in understanding of the HTTP protocol, so you can provide simple RESTful web services. You also have the option of using the asynchronous request processing that Netty provides as a framework, so you can potentially get better performance and service several orders of magnitude more concurrent requests.
First, write the logic of your service so that it's independent of your communication layer.
As Victor Sorokin said, there's a learning advantage to doing it yourself. So it ought to be worthwhile to write it with sockets. It will involve less effort to get started, and if it works well enough then you're off to the races.
If you find that you need more scalability/robustness later, you can switch to Netty. Just write a new Netty layer that communicates with your service-logic layer and swap the layers out.

How does one establish multiple IO streams between a client and server?

I'm creating a client/server pair in Java that, for now, only supports interleaved text communication via PrintWriters and BufferedReaders wrapped around both the server's and the client's IO streams.
I would like to implement a function that uses Image[Input/Output]Stream to send a BufferedImage from the server to the client at a set interval.
The problem is that I want the BufferedImages to be sent/received in separate threads so that the client/server can still send/receive text commands.
Can I create multiple streams or sockets? If so, is that the best way?
One way to accomplish this with a single socket is to multiplex the individual streams over a single byte stream connected to the socket; a good implementation of this approach is BEEP.
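If BEEP is more than you need, a hand-rolled variant is to prefix each frame with a channel id and a payload length on a single DataOutputStream; the frame layout below is invented purely for illustration:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // Hand-rolled multiplexing: each frame carries a channel id (say 0 = text
    // command, 1 = image bytes) and a payload length, so text and images can
    // share one socket. Writes are synchronized so frames never interleave.
    public final class FrameIO {
        public static final int TEXT_CHANNEL = 0;
        public static final int IMAGE_CHANNEL = 1;

        public static final class Frame {
            public final int channel;
            public final byte[] payload;
            Frame(int channel, byte[] payload) {
                this.channel = channel;
                this.payload = payload;
            }
        }

        public static synchronized void writeFrame(DataOutputStream out, int channel, byte[] payload)
                throws IOException {
            out.writeByte(channel);        // which logical stream this frame belongs to
            out.writeInt(payload.length);  // how many payload bytes follow
            out.write(payload);
            out.flush();
        }

        public static Frame readFrame(DataInputStream in) throws IOException {
            int channel = in.readUnsignedByte();
            byte[] payload = new byte[in.readInt()];
            in.readFully(payload);         // blocks until the whole frame has arrived
            return new Frame(channel, payload);
        }
    }

With this, a single reader thread can dispatch each frame by its channel id (text commands to one handler, image bytes to another), while writers on separate threads share the one socket safely.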
Yes, sure, you can create as many threads and sockets as you need. Just be careful: do not forget to close the sockets, and keep thread creation under control: too many threads will not improve your performance and may even bring your system to a halt.
You should probably use a thread pool, but that depends on your application. Take a look at the java.util.concurrent package.
If you have more specific questions do not hesitate to ask them.
A multiplexing stream should maintain multiple buffers.
A reader should be given its own buffer by the multiplexing stream. The multiplexing stream should grow every buffer during a write operation, and shrink the desired buffer during a read operation.
A single rewind buffer is harder to manage, since the readers need to be stateful, but is generally more scalable, if not performant.
The specific connection protocol used is an implementation detail. Network sockets are just buffers, and can be used to implement a multiplexing stream. The network becomes the bottleneck in this case.
