How does one establish multiple IO streams between a client and server? - java

I'm creating a client/server pair in Java that, for now, only supports interleaved text communication via PrintWriters and BufferedReaders wrapped around both the server's and the client's I/O streams.
I would like to implement a function that uses Image[Input/Output]Stream to send a BufferedImage from the server to the client at a set interval.
The problem is that I want the BufferedImages to be sent/received in separate threads so that the client/server can still send/receive text commands.
Can I create multiple streams or sockets? If so, is that the best way?

One way to accomplish this with a single socket is to multiplex the individual streams over a single byte stream connected to the socket; a good implementation of this approach is BEEP.
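As a rough illustration of the framing involved (not BEEP itself), here is a minimal sketch assuming a hypothetical frame layout of one channel-id byte plus a length-prefixed payload:

    import java.io.*;

    // A minimal sketch of channel-id + length-prefixed framing over one socket stream.
    // Hypothetical frame layout: [1 byte channel id][4 byte payload length][payload].
    final class Frame {
        final int channel;
        final byte[] payload;
        Frame(int channel, byte[] payload) { this.channel = channel; this.payload = payload; }
    }

    final class FrameMultiplexer {
        private final DataOutputStream out;

        FrameMultiplexer(OutputStream socketOut) {
            out = new DataOutputStream(new BufferedOutputStream(socketOut));
        }

        // Synchronized so frames written from different threads never interleave mid-payload.
        synchronized void send(int channel, byte[] payload) throws IOException {
            out.writeByte(channel);
            out.writeInt(payload.length);
            out.write(payload);
            out.flush();
        }
    }

    final class FrameDemultiplexer {
        private final DataInputStream in;

        FrameDemultiplexer(InputStream socketIn) {
            in = new DataInputStream(new BufferedInputStream(socketIn));
        }

        // Blocks until a complete frame arrives; the caller dispatches on frame.channel.
        Frame read() throws IOException {
            int channel = in.readUnsignedByte();
            byte[] payload = new byte[in.readInt()];
            in.readFully(payload);
            return new Frame(channel, payload);
        }
    }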

Yes, you can create as many threads and sockets as you need. Just be careful: do not forget to close the sockets, and keep thread creation under control: too many threads will not improve your performance and may even bring your system to a halt.
You should probably use a thread pool, but that depends on your application. Take a look at the java.util.concurrent package.
If you have more specific questions, do not hesitate to ask them.
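For example, a minimal sketch of the multiple-socket approach with a bounded pool from java.util.concurrent; the ports (5000 for text, 5001 for images) and the handler bodies are placeholders:

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Sketch: one listening socket for text commands, a second one for image frames,
    // with connection handling delegated to a fixed-size thread pool.
    public class TwoChannelServer {
        public static void main(String[] args) throws IOException {
            ExecutorService workers = Executors.newFixedThreadPool(8);

            ServerSocket textServer  = new ServerSocket(5000); // text commands
            ServerSocket imageServer = new ServerSocket(5001); // periodic BufferedImage frames

            // One accept thread per listening socket; per-connection work goes to the pool.
            new Thread(() -> acceptLoop(textServer, workers, true)).start();
            new Thread(() -> acceptLoop(imageServer, workers, false)).start();
        }

        private static void acceptLoop(ServerSocket server, ExecutorService workers, boolean text) {
            while (!server.isClosed()) {
                try {
                    Socket client = server.accept();
                    workers.submit(() -> {
                        try (Socket s = client) {                  // always close the socket
                            if (text) handleText(s); else streamImages(s);
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    });
                } catch (IOException e) {
                    return; // accept failed or the listening socket was closed
                }
            }
        }

        private static void handleText(Socket s) throws IOException   { /* BufferedReader / PrintWriter */ }
        private static void streamImages(Socket s) throws IOException { /* ImageIO.write(...) at an interval */ }
    }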

A multiplexing stream should maintain multiple buffers.
A reader should be given its own buffer by the multiplexing stream. The multiplexing stream should grow every buffer during a write operation and shrink the buffer being read during a read operation.
A single rewindable buffer is harder to manage, since the readers need to be stateful, but it is generally more scalable, if not more performant.
The specific connection protocol used is an implementation detail. Network sockets are just buffers, and can be used to implement a multiplexing stream. The network becomes the bottleneck in this case.
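One way that per-reader-buffer idea might look in Java, reusing the hypothetical Frame/FrameDemultiplexer framing sketched earlier and giving each logical channel its own pipe (writes grow a channel's buffer, its reader shrinks it):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.PipedInputStream;
    import java.io.PipedOutputStream;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch: one pipe (i.e. one buffer) per logical channel. The demultiplexing
    // thread "grows" a channel's buffer by writing into its pipe; the reader that
    // owns that channel "shrinks" it by reading from its own PipedInputStream.
    final class ChannelRegistry {
        private final Map<Integer, PipedOutputStream> writeEnds = new HashMap<>();
        private final Map<Integer, PipedInputStream> readEnds = new HashMap<>();

        synchronized InputStream readerFor(int channel) throws IOException {
            if (!readEnds.containsKey(channel)) {
                PipedOutputStream writeEnd = new PipedOutputStream();
                readEnds.put(channel, new PipedInputStream(writeEnd, 64 * 1024));
                writeEnds.put(channel, writeEnd);
            }
            return readEnds.get(channel);
        }

        // Note: write() blocks when that channel's buffer is full, which gives simple
        // backpressure but can stall the demultiplexing thread.
        synchronized void deliver(int channel, byte[] payload) throws IOException {
            readerFor(channel);                     // make sure the pipe exists
            writeEnds.get(channel).write(payload);  // grow that channel's buffer
        }
    }

    // The single demultiplexing thread: pulls frames off the socket and routes each
    // payload to the pipe for its channel.
    final class DemuxLoop implements Runnable {
        private final FrameDemultiplexer frames;
        private final ChannelRegistry channels;

        DemuxLoop(FrameDemultiplexer frames, ChannelRegistry channels) {
            this.frames = frames;
            this.channels = channels;
        }

        @Override
        public void run() {
            try {
                while (true) {
                    Frame f = frames.read();
                    channels.deliver(f.channel, f.payload);
                }
            } catch (IOException e) {
                // socket closed; readers will eventually see EOF once the pipes are closed
            }
        }
    }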

Related

Why Java NIO can be superior to standard Java sockets?

Recently I was playing with Java sockets and NIO for writing a server. It is still not really clear to me why Java NIO would be superior to standard sockets. When writing a server using either of these technologies, in most cases it comes down to having a dispatcher thread that accepts connections and passes them on to worker threads.
I have read that in the threaded model we need a dedicated thread per connection, but we can still create a thread pool of a fixed size and reuse threads to handle different connections (so that the cost of creating and tearing down threads is reduced).
But with Java NIO it looks similar. We have one thread that accepts requests and some worker thread(s) processing data when it is received.
An example I found where Java NIO would be better is a server that maintains many non-busy connections, like a chat or HTTP server. But I can't really understand why.
There are several distinct reasons.
Using multiplexed I/O with a Selector can save you a lot of threads, which saves you a lot of thread stacks, which save you a lot of memory. On the other hand it moves scheduling from the operating system into your program, so it can cost you a bit of CPU, and it will also cost you a lot of programming complication. Given that select() was designed when the alternative was more processes, not more threads, it is in fact debatable whether the extra complication is really worth it, as against using threads and spending the programming money saved on more memory.
MappedByteBuffers are a slightly faster way of reading files than either java.io or using java.nio.channels with ByteBuffers.
If you are just copying from one channel to another, using 'direct' buffers saves you from having to copy data from the native JNI space into the JVM space and back again; or using the FileChannel.transferTo() method can save you from copying data from kernel space into user space.
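As a rough sketch of the transferTo() point above, a file-to-socket copy that lets the kernel move the bytes instead of pulling them through a user-space buffer (the file name and port are placeholders):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;

    // Sketch: copy a file to a socket with transferTo(), avoiding an explicit
    // read-into-buffer / write-from-buffer cycle in user space.
    public class ZeroCopySend {
        public static void main(String[] args) throws IOException {
            try (FileChannel file = new FileInputStream("big.dat").getChannel();
                 SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9000))) {
                long position = 0, size = file.size();
                while (position < size) {
                    // transferTo may move fewer bytes than requested, so loop until done.
                    position += file.transferTo(position, size - position, socket);
                }
            }
        }
    }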
Even though NIO supports the dispatcher model, NIO sockets are blocking by default, and when you use them that way they can be faster than either plain IO or non-blocking NIO for a small number (< 100) of connections. I also find blocking NIO simpler to work with than non-blocking NIO.
I use non-blocking NIO when I want busy waiting. This allows me to have a thread that never gives up the CPU, but it is only useful in rare cases, i.e. where latency is critical.
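A sketch of that busy-waiting pattern on a single non-blocking channel; the endpoint and the process() handler are placeholders:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    // Sketch: spin on a non-blocking read so the thread never yields the CPU.
    // Only worth it when latency matters more than burning a core.
    public class BusyWaitReader {
        public static void main(String[] args) throws IOException {
            SocketChannel ch = SocketChannel.open(new InetSocketAddress("localhost", 9000));
            ch.configureBlocking(false);
            ByteBuffer buf = ByteBuffer.allocateDirect(4096);
            while (true) {
                int n = ch.read(buf);          // returns 0 immediately if nothing has arrived
                if (n < 0) break;              // peer closed the connection
                if (n > 0) {
                    buf.flip();
                    process(buf);              // placeholder for real handling
                    buf.clear();
                }
                Thread.onSpinWait();           // hint that this is a spin loop (Java 9+)
            }
            ch.close();
        }

        private static void process(ByteBuffer data) { /* ... */ }
    }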
From my benchmarks the real strength (besides the threading model) is that it consumes less memory bandwidth (kernel <=> Java). For example, if you open several UDP NIO multicast channels and have high traffic, you will notice that past a certain number of processes, each new process lowers the throughput of all running UDP receivers. With the traditional socket API I can start 3 receiving processes at full throughput. If I start a 4th, I hit a limit and the received data per second drops across all the running processes. With NIO I can start about 6 processes before this effect kicks in.
I think this is mostly because NIO bridges more or less directly to native or kernel memory, while the old socket API copies buffers into the VM process space.
This is important in grid computing and high-load server apps (10 GBit networks or InfiniBand).

What's the point of using UDP with NIO?

NIO and TCP make a great pair for many connections. Since a new connection needs to be opened for each new client, each of these clients would typically need their own thread for blocking I/O operations. NIO addresses that problem by allowing data to be read when it can, rather than blocking until it is available. But what about UDP?
I mean, connectionless UDP does not have the blocking nature of TCP associated with it because of how the protocol is designed (send it and forget it, basically). If I decide to send some data to some address, it gets sent without delay (on the server end). Likewise, if I want to read data, I can just receive individual packets from different sources. I don't need to maintain many connections to many places, with many threads to deal with each of them.
So, how does NIO and selectors enhance UDP? More specifically, when would one prefer to use UDP with NIO rather than the ol' java.net package?
Well the DatagramSocket.receive(...) method is documented as a blocking operation. So for instance if you had one thread that is trying to handle packets from N different sockets, you would need to use NIO and selectors. Similarly, if the thread had to multiplex checking for new packets with other activities, you might do this.
If you don't have these or similar requirements, then selectors won't help. But that's no different to the TCP case. You shouldn't use selectors with TCP if you don't need them, because it potentially adds an extra system call.
(On Linux, in the datagram case, you'd do a select syscall followed by a recv ... instead of just a recv.)
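For the one-thread, N-sockets case, a sketch using DatagramChannel with a Selector (the ports are arbitrary):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.SocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.DatagramChannel;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.util.Iterator;

    // Sketch: one thread servicing datagrams from several UDP sockets via a Selector.
    public class MultiPortUdpReceiver {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            for (int port : new int[] {7001, 7002, 7003}) {   // arbitrary example ports
                DatagramChannel ch = DatagramChannel.open();
                ch.bind(new InetSocketAddress(port));
                ch.configureBlocking(false);                  // required before registering
                ch.register(selector, SelectionKey.OP_READ);
            }

            ByteBuffer buf = ByteBuffer.allocate(1500);
            while (true) {
                selector.select();                            // blocks until some socket is readable
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    DatagramChannel ch = (DatagramChannel) key.channel();
                    buf.clear();
                    SocketAddress sender = ch.receive(buf);   // non-blocking: one datagram, if any
                    buf.flip();
                    System.out.println(buf.remaining() + " bytes from " + sender);
                }
            }
        }
    }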
But if you're only dealing with one DatagramSocket, wouldn't the receive method read packets immediately as they arrive, regardless of the fact that they're from a different computer?
If you are listening on one socket for datagrams from "everyone" then yes. If you have different sockets for different computers then no.
And as for the TCP comment, sometimes the use of a selector is justified simply by the fact that it is very resource-demanding to have thousands of threads, as would be required by a blocking TCP server.
We weren't discussing that case. But yes, that is true. And the same is true if you have thousands of threads blocking on UDP receives.
My point was that if you don't have lots of threads, or if it doesn't matter whether a thread blocks, then NIO doesn't help. In fact, it may reduce performance.
NIO removes the necessity for threads altogether. It lets you handle all your clients in one thread, including both TCP and UDP clients.
"connectionless UDP does not have the blocking nature of TCP associated with it"
That's not true. Receives still block, and so can sends, at least in theory.

Sending File Data To Multiple Clients?

I'm trying to figure out the best way to go about writing data transfer code for a client/server system that handles multiple clients at once.
I'm already keeping a List of clients who connect (I'm using the non-blocking NIO framework, by the way).
Isn't it costly, performance-wise, to iterate through every client on each read/write pass and write the buffer data to each channel? Is there a better/more efficient way of doing it?
I've been thinking about dividing up the buffer size based on the number of clients. Is that a viable solution?
Using selectors (as you seem to be doing) really pays off when you're handling a really large number of clients (and why optimize for the case in which you don't have a large number of clients? ;))
The bottleneck in such a system is rarely the CPU doing the iteration but the I/O anyway, so I wouldn't worry if I were you.
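For what it's worth, a sketch of the broadcast step itself: duplicating the ByteBuffer per client keeps a single copy of the data while giving each channel its own read position (the client list and error handling are simplified assumptions):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.util.List;

    // Sketch: write the same data to every connected client without copying it per client.
    // duplicate() shares the underlying bytes but gives each write its own position/limit.
    final class Broadcaster {
        void broadcast(ByteBuffer data, List<SocketChannel> clients) {
            for (SocketChannel client : clients) {
                ByteBuffer view = data.duplicate();
                try {
                    // With non-blocking channels write() may move 0 bytes; a real server
                    // would register OP_WRITE with its selector rather than loop here.
                    while (view.hasRemaining()) {
                        client.write(view);
                    }
                } catch (IOException e) {
                    // sketch-level error handling: drop the client on write failure
                }
            }
        }
    }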

Benefits of Netty over basic ServerSocket server?

I need to create a relatively simple Java tcp/ip server and I'm having a little trouble determining if I should use something like Netty or just stick with simple ServerSocket and InputStream/OutputStream.
We really just need to listen for a request, then pass the new client Socket off to some processing code in a new thread. That thread will terminate once the processing is complete and the response is sent.
I like the idea of pipelines, decoders, etc. in Netty, but for such a simple scenario it doesn't seem worth the added up-front development time. It seems like a bit of overkill for our initial requirements, but I'm a little nervous that there are lots of things I'm not considering. What, if any, are the benefits of Netty for such simple requirements? What am I failing to consider?
The main advantage of Netty over simply reading from and writing to sockets using streams is that Netty supports non-blocking, asynchronous I/O (using Java's NIO API); when you use streams to read and write from sockets (and start a new thread for each connection accepted from a ServerSocket) you are using blocking, synchronous I/O.
The Netty approach scales much better, which is important if your system needs to handle many (thousands of) connections at the same time. If your system does not need to scale to many simultaneous connections, it might not be worth the trouble to use a framework like Netty.
Some more background information: Threads are relatively expensive resources in an operating system. Each thread needs memory for the stack (which can be for example 2 MB in size). When you create thousands of threads, this is going to cost a lot of memory; also, operating systems have limits on the number of threads that can be created. So you don't want to start a new thread for each accepted connection. The idea of asynchronous I/O is to decouple the threads from the connections (no one-to-one relation). There can be many more connections than threads, and whenever some event happens on one of the connections (for example, data is received), a thread from a thread pool is temporarily used to handle the event.
I think that the benefits of using Netty are not immediate but come later, when requirements change and maintenance becomes more complex for your project. Netty brings a built-in understanding of the HTTP protocol, so you can provide simple RESTful web services. You also have the option of using the asynchronous request processing that Netty provides as a framework, so you can potentially get better performance and service several orders of magnitude more concurrent requests.
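To give a sense of the up-front cost being weighed, here is a minimal sketch assuming the Netty 4.x API; the handler simply echoes whatever arrives:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    // Minimal Netty 4.x sketch: a small, fixed set of event-loop threads serves
    // every connection instead of one thread per accepted socket.
    public class MinimalNettyServer {
        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup boss = new NioEventLoopGroup(1);    // accepts connections
            EventLoopGroup workers = new NioEventLoopGroup();  // handles I/O for all clients
            try {
                ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                @Override
                                public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                    ctx.writeAndFlush(msg);    // echo the received bytes back
                                }
                            });
                        }
                    });
                bootstrap.bind(9000).sync().channel().closeFuture().sync();
            } finally {
                boss.shutdownGracefully();
                workers.shutdownGracefully();
            }
        }
    }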
First, write the logic of your service so that it's independent of your communication layer.
As Victor Sorokin said, there's a learning advantage to doing it yourself. So it ought to be worthwhile to write it with sockets. It will involve less effort to get started, and if it works well enough then you're off to the races.
If you find that you need more scalability/robustness later, you can switch to Netty. Just write a new Netty layer that talks to your service-logic layer and swap the transports out.
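A rough sketch of that separation, assuming a hypothetical RequestHandler interface so the business logic never touches sockets and the transport can later be swapped for Netty:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical service interface: the business logic only sees strings.
    interface RequestHandler {
        String handle(String request);
    }

    // Plain-socket transport layer; this is the part a Netty layer could replace later.
    public class SimpleSocketServer {
        private final RequestHandler handler;
        private final ExecutorService pool = Executors.newFixedThreadPool(16);

        public SimpleSocketServer(RequestHandler handler) {
            this.handler = handler;
        }

        public void serve(int port) throws IOException {
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    Socket client = server.accept();
                    pool.submit(() -> {
                        try (Socket s = client;
                             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                            String request = in.readLine();       // one request per connection
                            out.println(handler.handle(request)); // delegate to the service layer
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    });
                }
            }
        }

        public static void main(String[] args) throws IOException {
            new SimpleSocketServer(request -> "echo: " + request).serve(9000);
        }
    }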

selecting among multiple sockets that are ready to be read from

I am writing a server-client application. I have a server that holds several sockets obtained from the accept() method of a ServerSocket. I want to read from these sockets, but I don't necessarily know which socket is ready to be read from. I need some kind of selector that will select one of the sockets that is ready to be read from, so I can read the data it sends.
Thanks.
You have basically two options to make it work:
Have a dedicated thread per accepted socket. This is because 'regular' socket I/O is blocking: you cannot selectively handle multiple sockets using a single thread, and as there is no 'peeking' functionality, you always risk getting blocked when you invoke read. By having a thread per socket you are interested in reading, blocking reads will not block any other operations (threads).
Use NIO. NIO allows for asynchronous I/O operations and provides exactly what you asked for: a Selector.
If you do decide to go the NIO way, I would recommend checking out MINA and Netty. I've found them much easier to work with than plain NIO. Not only will you get a nicer API to work with, but MINA at least had workarounds for some nasty NIO bugs, too.
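A minimal sketch of option 2 with plain NIO (the port is arbitrary): one Selector tells a single thread which accepted sockets currently have data to read.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // Sketch: one thread, one Selector; the selected keys identify the sockets
    // that are ready to be read from right now.
    public class SelectingServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(9000));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buf = ByteBuffer.allocate(4096);
            while (true) {
                selector.select();                               // blocks until something is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();  // register new clients for reads
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buf.clear();
                        int n = client.read(buf);                // will not block: data is ready
                        if (n < 0) { key.cancel(); client.close(); }
                        else { buf.flip(); /* handle buf for this client */ }
                    }
                }
            }
        }
    }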
