As far as I know, NIO helps a server handle a lot of requests because it does not use the one-thread-per-request model.
But the client is the one creating the connection to the server; usually there are not that many connections, and the client can handle them easily.
I've seen some client libraries use NIO, and I'm not sure why. So why bother with NIO on the client side, and is there any performance improvement?
There's no great reason to use NIO at the client, unless possibly you are writing something like a web crawler that has hundreds or thousands of outbound connections.
There's arguably not much reason to use it at the server either. The select() model was designed for the days when the alternative was forking a process. Now that we have threads, the whole model is moot IMHO.
Related
We are designing a TCP server to listen/bind to thousands of ports in Linux.
The traffic per second going through each port would be very light, so we are focusing on efficiently selecting from sockets.
Just binding to these ports and using poll or epoll on each socket may be OK. However, we are pursuing a more resource-efficient design, for example a carefully designed I/O thread model that can handle tens of thousands of sockets without costing too much CPU.
Any suggestions are welcome!
PS:
We are using Java, and Netty seems to be overkill in this situation. Is there a more lightweight network framework that would be suitable?
iptables is off the table; we simply can't use it.
When working with a lot of ports, a really resource-efficient solution is select. The select function is a syscall that allows processing multiple input sources and handling only those that have received data. Java exposes this mechanism through the Selector class in the java.nio package (available since JDK 1.4).
You may combine it with an ExecutorService so that only the channels that are actually ready get processed, while keeping the number of threads small.
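A rough sketch of that combination, assuming a handful of ports and a small fixed worker pool (the port numbers, pool size, and buffer size are placeholders):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SelectorServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ExecutorService workers = Executors.newFixedThreadPool(4);

        // Bind several ports and register them all with one Selector.
        // A real deployment would bind thousands; three keeps the sketch short.
        for (int port : new int[] {9001, 9002, 9003}) {
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(port));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);
        }

        while (true) {
            selector.select();                       // blocks until some channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);        // read on the selector thread, hand off the rest
                    if (n == -1) {
                        key.cancel();
                        client.close();
                    } else if (n > 0) {
                        buf.flip();
                        workers.submit(() -> process(buf)); // only ready channels reach the pool
                    }
                }
            }
        }
    }

    private static void process(ByteBuffer data) {
        // application-specific handling would go here
    }
}
```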
Read further:
Select in Java
Select man page
Executor Services
I need to create a relatively simple Java tcp/ip server and I'm having a little trouble determining if I should use something like Netty or just stick with simple ServerSocket and InputStream/OutputStream.
We really just need to listen for a request, then pass the new client Socket off to some processing code in a new thread. That thread will terminate once the processing is complete and the response is sent.
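Concretely, the plain-socket version we have in mind is roughly this sketch (the port number and the empty handler body are placeholders):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SimpleTcpServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newCachedThreadPool();
        try (ServerSocket server = new ServerSocket(8080)) {   // port is a placeholder
            while (true) {
                Socket client = server.accept();               // blocks until a client connects
                pool.submit(() -> handle(client));             // one worker per connection
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            // read the request from c.getInputStream(),
            // write the response to c.getOutputStream()
        } catch (IOException e) {
            // log and drop the connection
        }
    }
}
```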
I like the idea of pipelines, decoders, etc. in Netty, but for such a simple scenario it doesn't seem worth the added up-front development time. It seems like overkill for our initial requirements, but I'm a little nervous that there are lots of things I'm not considering. What, if any, are the benefits of Netty for such simple requirements? What am I failing to consider?
The main advantage of Netty over simply reading from and writing to sockets using streams is that Netty supports non-blocking, asynchronous I/O (using Java's NIO API). When you use streams to read and write from sockets (and start a new thread for each connection accepted from a ServerSocket), you are using blocking, synchronous I/O.
The Netty approach scales much better, which is important if your system needs to be able to handle many (thousands of) connections at the same time. If your system does not need to scale to many simultaneous connections, it might not be worth the trouble to use a framework like Netty.
Some more background information: Threads are relatively expensive resources in an operating system. Each thread needs memory for the stack (which can be for example 2 MB in size). When you create thousands of threads, this is going to cost a lot of memory; also, operating systems have limits on the number of threads that can be created. So you don't want to start a new thread for each accepted connection. The idea of asynchronous I/O is to decouple the threads from the connections (no one-to-one relation). There can be many more connections than threads, and whenever some event happens on one of the connections (for example, data is received), a thread from a thread pool is temporarily used to handle the event.
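For comparison, here is roughly what a minimal Netty 4 server looks like; the port and the trivial echo handler are placeholders, just to show the event-loop structure rather than a real protocol:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyEchoServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup workers = new NioEventLoopGroup(); // handles I/O for all connections
        try {
            ServerBootstrap b = new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                            @Override
                            public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                ctx.writeAndFlush(msg); // echo the bytes back
                            }
                        });
                    }
                });
            ChannelFuture f = b.bind(8080).sync();        // port is a placeholder
            f.channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```

A small, fixed number of event-loop threads services every connection here, instead of one thread per socket.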
I think the benefits of using Netty are not immediate but actually come later, when requirements change and maintenance becomes more complex for your project. Netty brings a built-in understanding of the HTTP protocol, so you can provide simple RESTful web services. You also have the option of using the asynchronous request processing Netty provides as a framework, so you can potentially get better performance and service several orders of magnitude more concurrent requests.
First, write the logic of your service so that it's independent of your communication layer.
As Victor Sorokin said, there's a learning advantage to doing it yourself. So it ought to be worthwhile to write it with sockets. It will involve less effort to get started, and if it works well enough then you're off to the races.
If you find that you need more scalability/robustness later, you can switch to Netty. Just write a new Netty layer that handles communication for your service logic layer and swap them out.
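A rough sketch of what that separation could look like (the interface and class names are made up for illustration):

```java
import java.io.IOException;
import java.net.Socket;

// Service logic knows nothing about sockets, streams, or Netty.
interface RequestProcessor {
    byte[] handle(byte[] request);
}

// A blocking-socket transport used initially; a Netty handler could later
// delegate to the very same RequestProcessor without touching the logic.
class SocketTransport {
    private final RequestProcessor processor;

    SocketTransport(RequestProcessor processor) {
        this.processor = processor;
    }

    void serve(Socket client) throws IOException {
        byte[] request = client.getInputStream().readAllBytes(); // naive framing, just for the sketch
        client.getOutputStream().write(processor.handle(request));
    }
}
```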
I would like to ask what would be more appropriate to choose when developing a server similar to SmartFoxServer. I intend to develop a similar yet different server. In the benchmarks published by the developers of that server, they handled something like 10,000 concurrent clients.
I did a bit of research on the cost of using too many threads (>500) but cannot decide which way to go. I once made a server in Java, but that was for a small application and had nothing to do with heavy loads.
Thanks
Take a look at Apache MINA. They've done a lot of the heavy lifting required to use NIO effectively in a networking application. Whether or not NIO increases your ability to process concurrent connections really depends on your implementation, but the performance boosts in Tomcat, JBoss and Jetty are plenty of evidence in the positive.
I'm not familiar with SmartFoxServer, so I can only speak generically (which is not always good :P, but here I go).
I think those are two different questions. On one hand, there's the I/O performance of native Java sockets vs. native sockets written in C (as Tomcat can use).
The other question is how to scale up to that kind of concurrency level. Other than that, I'd always choose native sockets (i.e., C).
Now, how to scale: it's not a good idea to have a lot of threads running at the same time (OS constraints, etc.), so I'd choose to scale horizontally, meaning add a load balancer that can send requests to different servers, which are linked by messaging (using JMS, like RabbitMQ or ActiveMQ, or even a protocol like STOMP or AMQP).
Another solution is a cloud environment that allows you to grow your installation as you need.
In most benchmarks that test 10K or 100K connections, the server is doing no work; unless your server does next to nothing, these tests are unrealistic.
You need to have a clear idea of how many concurrent connections you want to support.
If you have fewer than 1K connections, using a thread per connection will work fine, and it is the simplest approach to take. A dispatcher model with NIO will work better if your requests are very simple; otherwise it won't matter much.
If you have more than 1K connections, it is likely you will want more than one server, since each connection would be getting less than 1% of a core, and the cost of a basic server is relatively low these days.
I have built a Java TCP server using ServerSocketChannel, running on one port. However, it is not very scalable, as it attends to only one incoming socket (blocking mode).
I want to extend this server to service multiple incoming sockets (at most 10). As such, I'm wondering if I should implement the server using non-blocking I/O or threads plus blocking I/O.
Paul Tyma recently compared the two approaches, generating diverse discussion. Under certain circumstances, modern threading libraries can outperform java.nio.channels.Selector. As the result is somewhat counter-intuitive, you may have to prototype both to get a definitive answer.
JBoss Netty and Apache MINA are NIO frameworks that provide a lot of what you need to implement your own server. I'm now using Netty and am happy with it.
With only 10 incoming sockets, I don't think the difference will be noticeable. What you should do is focus on the upper layers (protocol, application) and leave the low-level network implementation to a framework. I would recommend Apache MINA for that job. As you will see, it does the job very well with blocking or non-blocking I/O (you choose), and it leaves open interfaces for you to implement the protocol and application.
I would use threads and blocking I/O until you know you have at least 1000 concurrent connections. That also gives you an easy way to get it working. When and if you get to 1000, evaluate.
I am designing an application where many clients connect to a central server. This server keeps these connections open, sending keep-alives every half-hour. The server has an embedded HTTP server, which provides an interface to the client connections (e.g. http://server/isClientConnected?id=id). I was wondering what is the best way to go about this. My current implementation is in Java, and I just have a Map with IDs as the keys, but a thread is started for each connection, and I don't know if this is really the best way to do it. Any pointers would be appreciated.
Thanks,
Isaac Waller
Use the java.nio package, as described on this page: Building Highly Scalable Servers with Java NIO. Also read this page very carefully: Architecture of a Highly Scalable NIO-Based Server.
Personally, I wouldn't bother with the NIO internals and would use a framework like Apache MINA or xSocket. NIO is complicated and easy to get wrong in very obscure ways. If you want it to "just work", use a framework.
With a single thread per connection you can usually scale up to about 10,000 connections on a single machine. On a 32-bit Windows machine, you will probably hit a limit at around 1,000 connections.
To avoid this, you can either change the design of your program or scale out (horizontally). You have to weigh the cost of development against the cost of hardware.
A single thread per user, with a single continuous connection, is usually the easiest programming model. I would stick with this model until you reach the limits of your current hardware. At that point, I would decide either to change the code or add more hardware.
If the clients will be connected for long periods of time, allocating a thread per client can be problematic. Each thread on the server requires a certain amount of resources (memory for the stack, for example).
You could use Jetty Continuations to handle the client request with fewer threads by using asynchronous servlets.
Read more about the Reactor pattern. There is a Java implementation of it (which uses channels instead of a thread per client).
It is easy to implement and very efficient.
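A rough sketch of the reactor idea applied to your connection-tracking case, with one selector thread for all clients and a map the embedded HTTP handler can query (the class name, port, and ID scheme based on the remote address are placeholders):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One selector thread tracks all client connections; the connected-clients
// map can be read safely from the embedded HTTP server's threads.
public class ConnectionRegistry implements Runnable {
    private final Map<String, SocketChannel> connected = new ConcurrentHashMap<>();
    private final Selector selector;

    public ConnectionRegistry(int port) throws IOException {
        selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(port));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public boolean isClientConnected(String id) {     // what /isClientConnected?id=... would call
        return connected.containsKey(id);
    }

    @Override
    public void run() {
        try {
            while (true) {
                selector.select();
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
                        client.configureBlocking(false);
                        String id = client.getRemoteAddress().toString(); // placeholder ID scheme
                        client.register(selector, SelectionKey.OP_READ, id);
                        connected.put(id, client);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        if (client.read(ByteBuffer.allocate(256)) == -1) { // client went away
                            connected.remove((String) key.attachment());
                            key.cancel();
                            client.close();
                        }
                    }
                }
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```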