I have 150 threads.
Each Thread has Netty Client and it is connected to server.
Should I use 150 more threads to send?
Should I use 75 threads to send?
Should I use no thread to send?
My local test is not meaningful (I can't run the server with more than 50 connections locally).
please help me.
There is no golden rule for this. Depending on your application, you may find that:
just one connection with one thread is enough to use all the resources of the machine.
Using somewhere between the number of CPUs and 2 * the number of CPUs worth of threads is enough to use all the CPU of the machine.
If you have synchronous requests (instead of asynchronous ones) and a high network latency you might find that you are spending most of the time waiting for data in which case more connections would help mitigate this latency.
My preference is to allow asynchronous messaging/requests and let a single connection use all the CPU/resources on the machine if it makes sense, because while you might get better results when you test with 150 busy connections, in the real world they might not all be active at once or to the same degree.
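For illustration, a minimal sketch of an asynchronous request over a single already-connected channel, assuming a Netty 4 style API (the class name and the request-correlation scheme are hypothetical; on Netty 3 the idea is the same with Channel.write):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;

public final class AsyncSender {
    // Fire a request on an already-connected channel without blocking the caller.
    // Many in-flight requests can share this one channel; matching replies to
    // requests is left to the protocol (e.g. a request id), which is not shown.
    public static void send(Channel channel, Object request) {
        ChannelFuture future = channel.writeAndFlush(request);
        future.addListener((ChannelFutureListener) f -> {
            if (!f.isSuccess()) {
                // Handle the failure asynchronously instead of blocking a sender thread.
                f.cause().printStackTrace();
            }
        });
    }
}
```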
Related
I'm creating a Java server for a quite big simulation and I have a couple of high level design questions.
Some background:
The server will run a simulation.
Clients will connect to the server via TCP connections from mobile devices and interact with data structures in the simulation. Initially I will try to use a simple polling scheme in the clients. I find it hard to maintain long-lived TCP connections between mobile devices and the server and I'm not yet sure whether the clients will try to keep an open TCP connection or whether they will set it up and tear it down for each transmission.
When a client is active on a mobile device, I would like to have the client poll the server at least a few times a minute.
The simulation will keep running regardless of whether clients are connected or not.
The total number of existing clients could get very large, many thousands.
Clients mostly poll the server for simulation state, but also sometimes issue control commands to the simulation.
All messages are small in size.
I expect the server to run under Linux on multi-core CPU server hardware.
Currently I have the following idea for threading model in the server:
The simulation logic is executed by a few threads. The simulation logic threads both read and write from/to the simulation data structures.
For each client there is a Java thread performing a blocking read call to the socket for that client. When a poll command is received from a client, the corresponding client thread reads info from the simulation data structures (one client poll would typically be interested in a small subset of the total data structures) and sends a reply to the client on the client's socket. Thus, access to the data structures would need to be synchronized between the client threads and the simulation threads (I would try to have the locks on smaller subsets of the data). If a control command is received from the client, the client thread would write to the data structures.
For small number of clients, I think this would work fine.
Question 1: Would this threading model hold for a large number (thousands) of connected clients? I'm not familiar with what memory/CPU overhead there would be in such a Java implementation.
Question 2: I would like to avoid having the server asynchronously send messages to the clients but in certain scenarios I may need to have the server send "update yourself now" messages asynchronously to some or many clients and I'm not quite sure how to do that. Having the simulation logic thread(s) send those messages doesn't seem right... maybe some "client notification thread pool" concept?
You ask two questions; I'll answer the first.
I've previously written an application that involved thousands of threads in one application. We did once run into a problem with the maximum number of threads on the Linux server; for us, I think the limit was about 1000 threads. This affected our Java application because Java threads use native threads. We set the limit higher, and the application scaled to about 2000 threads, which was what we needed without an issue; I don't know what would have happened had we needed to scale it much higher.
The fact that the default maximum number of threads was 1000 suggests that it might not be wise to run too many thousands of threads on a single Linux server. I believe the primary issue is that sufficient memory for a stack needs to be allocated for each thread.
Our intended long-term fix was to change to an architecture where threads from a thread pool each serviced multiple sockets. This really isn't too much of an issue; for each socket, the thread just has to process any pending messages before going on to the next socket. You would have to be careful about synchronizing memory access, but your application already needs to do that, since the simulation already interacts with multiple threads, so that part would not be a huge change.
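That "process pending messages, then move on" design could be sketched roughly like this (the class and method names are hypothetical, not from the original application, and it assumes plain blocking sockets whose pending bytes can be checked with available()):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// One thread from a small pool; it owns a subset of the client sockets and
// round-robins over them, draining whatever input is already pending.
final class ClientGroupWorker implements Runnable {
    private final List<Socket> clients = new CopyOnWriteArrayList<>();

    // Called from the accept thread to hand a new client to this worker.
    void adopt(Socket socket) {
        clients.add(socket);
    }

    @Override
    public void run() {
        byte[] buf = new byte[1024];
        while (!Thread.currentThread().isInterrupted()) {
            for (Socket socket : clients) {
                try {
                    InputStream in = socket.getInputStream();
                    // Only read what is already there, so one slow client
                    // cannot block the whole group.
                    while (in.available() > 0) {
                        in.read(buf, 0, Math.min(buf.length, in.available()));
                        // Decode the poll/control message, touch the (synchronized)
                        // simulation data structures, and write the reply here.
                    }
                } catch (IOException e) {
                    clients.remove(socket);   // drop broken connections
                }
            }
            try {
                Thread.sleep(5);              // avoid a pure busy loop
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```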
In a Java servlet environment, what are the factors that are the bottleneck for number of simultaneous users.
Number of HTTP connections the server can allow per port
Number of HTTP connections the server can allow across several ports (I can have multiple WAS profiles on several HTTP ports)
Number of servlets in pool
Number of threads configured for WAS to use to service connections
RAM available to the server (is there any correlation with the number of service threads, assuming zero memory leaks in the application)
Are there any other factors?
Edited:
To leave business logic out of the picture, assume we have only one servlet printing one line to Log4j.
Can my Tomcat server handle 6000 simultaneous HTTP connections? Why not (file handles? CPU time per request?)?
Can I have a thread pool size of 5000 (do idle threads cost CPU/RAM)?
Can I have an Oracle connection pool size of 500 connections (do idle connections cost CPU/RAM)?
Does the amount of garbage generated for each connection have an impact? For example, if for each HTTP connection 20 KB of objects are created and left behind by Tomcat, then by the time 2500 requests are processed 100 MB of heap would be used, and this may trigger a GC pause of 300 ms.
Can we say something like this: if Tomcat uses 0.2 sec of CPU time for processing a single HTTP request, then it would be able to handle roughly 500 http connections in a second. So, 6000 connections would need 5 seconds.
Interesting question. If we leave aside all the performance-deciding attributes, it finally boils down to how much work you are doing in the servlet, or how much time it takes when it is heaviest in I/O, CPU and memory. Now let's go down your list with the above statement in mind:
Number of HTTP connections the server can allow per port
There is a limit on file descriptors, but it is again governed by how much time a servlet takes to complete a request, or how much time it takes from receiving the first byte of the request to finishing sending the entire response. If it takes only 1 ms and you are using Netty with persistent connections, you can go really high, well above 6000.
Number of servlets in pool
Theoretically well above 6000. But how many threads are processing your requests? Is there a thread pool burning through your requests? So you want to increase the thread count, but by how much; say 2000 concurrent threads. Is your CPU behaving poorly because of context switching? Is it I/O bound? If yes, it makes sense to context switch, but then you will be hitting those network limits because a lot of threads are waiting on network I/O. Ultimately, it comes back to how much time you spend on a piece of work.
DB
If it is Oracle, good luck with connection management; you definitely need rigorous monitoring here. This is just another limiting factor and can be considered just another form of blocking I/O. By definition of I/O, latency/throughput matters and becomes a bottleneck the moment it becomes bigger than the smallest piece of work.
So, finally, you need to break down the following attributes (and more) for all the servlets:
Is it CPU bound? If yes, how many cycles does it take, or can that be converted safely to some time unit, e.g. 1 ms for just the compute piece of the work.
Is it I/O bound? If yes, similarly find the unit.
and others
A long list of what you have, e.g. CPU, Memory, GB/s
Now you know how much work needs to be done, and all you do is divide by what you have and keep tuning, so that you find the optimum, and also find out which attributes you have not yet considered and consider them.
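As a toy back-of-envelope example of that division (all the numbers below are made up, not measured):

```java
public final class CapacityEstimate {
    public static void main(String[] args) {
        // Assumed, not measured: 5 ms CPU per request, 20 ms blocking I/O per request.
        double cpuMsPerRequest = 5.0;
        double ioMsPerRequest = 20.0;
        int cores = 8;
        int threads = 200;

        // CPU ceiling: each core delivers 1000 ms of compute per second.
        double cpuBoundRps = cores * 1000.0 / cpuMsPerRequest;                        // 1600 req/s
        // Thread ceiling: each thread is occupied for (cpu + io) ms per request.
        double threadBoundRps = threads * 1000.0 / (cpuMsPerRequest + ioMsPerRequest); // 8000 req/s

        System.out.printf("CPU-bound ceiling:    %.0f req/s%n", cpuBoundRps);
        System.out.printf("Thread-bound ceiling: %.0f req/s%n", threadBoundRps);
        System.out.printf("Estimated capacity:   %.0f req/s%n",
                Math.min(cpuBoundRps, threadBoundRps));
    }
}
```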
The biggest bottleneck I have experienced is the time it takes to process the request.
The faster you can service a request, the more connections you can handle.
It's a difficult question to answer due to every application being different.
To figure this out for an application I support, I created a unit test that spawns many threads, and I watch the memory usage in VisualVM in Eclipse.
You can see how your memory consumption changes with the number of threads in use.
And you should be able to get a thread dump and see how much memory the thread is using.
You can extrapolate an average out to understand how much RAM you might need for N number of users.
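A minimal version of that kind of test might look like the following (the thread count and sleep are arbitrary placeholders; attach VisualVM, or take a thread dump, while the program is sleeping):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public final class ThreadMemoryProbe {
    public static void main(String[] args) throws InterruptedException {
        int users = Integer.parseInt(args.length > 0 ? args[0] : "1000");
        CountDownLatch hold = new CountDownLatch(1);
        List<Thread> threads = new ArrayList<>();

        for (int i = 0; i < users; i++) {
            Thread t = new Thread(() -> {
                try {
                    hold.await();           // keep the thread (and its stack) alive
                } catch (InterruptedException ignored) {
                }
            }, "simulated-user-" + i);
            t.start();
            threads.add(t);
        }

        Runtime rt = Runtime.getRuntime();
        System.out.printf("%d threads up, heap in use: %d MB%n",
                users, (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
        // Attach VisualVM / take a thread dump now, then release the threads.
        Thread.sleep(60_000);
        hold.countDown();
        for (Thread t : threads) t.join();
    }
}
```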
The bottleneck will be a moving target since you'll optimize one area until you can scale larger, then another area will become your bottleneck.
If the response time of the servlet is a bottleneck, you could use some queueing mathematics to determine how many requests can be queued optimally, based on the average response time.
http://www4.ncsu.edu/~hp/SSME_QueueingTheory.pdf
Hope this helps.
Updated to address your additional questions:
Can my Tomcat server handle 6000 simultaneous HTTP connections? Why not (file handles? CPU time per request?)?
It's possible but probably not. Also you should probably add a web layer in front of the application server if you plan on doing high volume.
Suppose you have 6000 users all pounding away on your application. Each request a user sends only exists on the server for a moment [hopefully], and your peak thread count may never reach 20.
I'd recommend setting up some monitoring to understand how your application performs under real use cases. Check out http://Hawt.io which uses Jolokia to grab JMX metrics via http.
If you're serious about analytics, I'd recommend using something like Graphite to aggregate your JMX metrics. https://github.com/graphite-project/graphite-web
I've written a collector for Jolokia to send metrics to Carbon/Graphite, and may be able to open-source it with approval from my management. Let me know if you are interested.
Can I have thread pool size as 5000 (do idle threads cost CPU/RAM)?
Idle threads are not much to worry about, though setting your thread pool too high could allow your application server to accept too many requests. If this happens, you may end up flooding your DB with connections it can't handle, or your memory allocation may not be enough to handle so many requests. This could cause overall application performance degradation.
Set it too low, and your app server could start queuing requests, again causing performance degradation.
It's normal to have some queuing during spikes or high-volume times, but you don't want to overload your application server. Check out queueing theory to understand more about this.
Also, this is where having a web server in front of the app server could help you. If you have Apache serve your static content, only dynamic requests will reach the application servers in most cases.
Tuning is very specific to your individual application. I'd recommend staying with the defaults and just optimize your code until you can gather enough data to know which knob should be turned.
Can I have oracle connection pool size as 500 connections (do idle connections cost CPU/RAM)?
Same situation as the application thread pool size. Though your pool size for DB should be much smaller than the app thread count.
500 would be too high for most web applications unless you have very high volume, in which case you may need a DB cluster environment like Oracle RAC.
If the pool is set too high and you start using a lot of connections, your DB hardware will not be able to keep up and you will end up with performance problems on the database server.
The time it takes for a query to return may increase, in turn causing your application response time to increase. The "log jam" effect.
Use profiling or metrics to determine the avg number of active DB connections under normal use, and use that as a baseline for determining the max allowed.
Does the amount of garbage generated for each connection have an impact? For example, if for each HTTP connection 20 KB of objects are created and left behind by Tomcat, then by the time 2500 requests are processed 100 MB of heap would be used, and this may trigger a GC pause of 300 ms.
The numbers would be different, but yes. Also remember that full GCs are more of a concern. The incremental GCs will not pause your application for nearly as long. Check out "concurrent mark and sweep" and "garbage first" (G1).
Can we say something like this: if Tomcat uses 0.2 sec of CPU time for processing a single HTTP request, then it would be able to handle roughly 500 http connections in a second. So, 6000 connections would need 5 seconds.
It's not quite that easy: as each request is coming in, there are also some being processed and completed. Check out queueing theory to understand this better.
http://www4.ncsu.edu/~hp/SSME_QueueingTheory.pdf
There is another common bottleneck: the size of the database connection pool. But I have an additional remark: when you exhaust the number of allowed HTTP connections, or the number of threads allowed to serve requests, you will only reject some requests. But when you exhaust memory (too many sessions with too much data, for example), you can crash the whole application.
The difference is that in the case of heavy load for a short time, when the load later falls off:
in the first case, the application is up and can serve requests normally
in the second case, the application is down and must be restarted
EDIT :
I forgot to mention real use cases. The biggest problem I have ever found for serving numerous concurrent connections is the quality of the database queries (assuming you use a database). There is no direct impact, since there is no maximum number, but you can easily hog all the database server's resources. Common examples of poor database queries:
no index on a table with a large number of rows
a query (on a big table) that makes no use of any index
the N+1 syndrome: with an ORM, when you map a one-to-many relation to a collection lazily (not eagerly) but you always need the data from the collection (a sketch follows below)
the load-the-full-database syndrome: with an ORM, when you map all relations as eager, any single query ends up loading a large quantity of dependent data.
What is worse with those problems is that they can cause no harm in tests when the database is young, because there are not that many rows, but with time and an increasing number of rows, performance falls, giving an unusable application with only a few users.
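As an illustration of the N+1 case, here is a hedged JPA sketch; the entity and field names (Invoice, InvoiceLine, lines) are made up for the example:

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import java.util.List;

@Entity
class Invoice {
    @Id Long id;
    @OneToMany(mappedBy = "invoice", fetch = FetchType.LAZY)
    List<InvoiceLine> lines;
}

@Entity
class InvoiceLine {
    @Id Long id;
    @ManyToOne Invoice invoice;
}

public final class InvoiceQueries {
    // N+1 pattern: one query for the invoices, then one extra query per invoice
    // the first time each invoice's lines collection is touched.
    static List<Invoice> nPlusOne(EntityManager em) {
        return em.createQuery("select i from Invoice i", Invoice.class).getResultList();
    }

    // Single-query alternative: fetch the collection in the same statement.
    static List<Invoice> joinFetch(EntityManager em) {
        return em.createQuery(
                "select distinct i from Invoice i join fetch i.lines", Invoice.class)
                .getResultList();
    }
}
```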
Number of HTTP connections the server can allow per port
Unlimited except by kernel resources, e.g. FDs, socket buffer space, etc.
Number of HTTP connections the server can allow across several ports (I can have multiple WAS profiles on several HTTP ports)
As the number of connections per port is unlimited, this is irrelevant.
Number of servlets in pool
Irrelevant except insofar as it increases the rate of incoming requests.
Number of threads configured for WAS to use to service connections
Relevant in an indirect way, see below.
RAM available to the server (is there any correlation with the number of service threads, assuming zero memory leaks in the application)
Relevant if it limits the number of threads below the configured number of threads mentioned above.
The fundamental limitation is request service time. The shorter, the better. The longer it is, the longer the thread is tied up in that request, the longer wait queues get, ... Queuing theory dictates that the 'sweet spot' is no more than 70% server utilization. Beyond that, wait times grow rapidly with increasing utilization.
So anything that contributes to request service time is significant: for example, thread pool size, connection pool size, concurrency bottlenecks, ...
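To make the utilization point concrete, here is a toy calculation with the textbook M/M/1 queue formula, a deliberate simplification of a real app server (the 10 ms service time is made up):

```java
public final class Mm1ResponseTime {
    public static void main(String[] args) {
        double serviceMs = 10.0;                 // assumed mean service time per request
        double mu = 1000.0 / serviceMs;          // service rate: 100 req/s

        for (double utilization : new double[]{0.5, 0.7, 0.9, 0.95}) {
            double lambda = utilization * mu;    // arrival rate
            // M/M/1 mean response time: W = 1 / (mu - lambda)
            double responseMs = 1000.0 / (mu - lambda);
            System.out.printf("utilization %.0f%% -> mean response %.0f ms%n",
                    utilization * 100, responseMs);
        }
    }
}
```

With these numbers the mean response time is 20 ms at 50% utilization, about 33 ms at 70%, 100 ms at 90% and 200 ms at 95%, which is exactly the "wait times grow rapidly beyond the sweet spot" behaviour described above.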
You should also consider that the use case itself is limiting the amount of concurrency. Imagine a collaborative environment where the order of actions matters. This forces you to synchronize actions - even if you would have been able to process all of them at once.
In Java land, this could be something as simple as sharing a single resource which uses blocking access (e.g. shared Random number generators (not per thread), shared Vectors, concurrent structures like ConcurrentHashMap, etc.).
The more synchronization the less you will be able to fully utilize your server hardware.
So, apart from running out of memory, saturating the CPU, or hitting the garbage collection limit, this synchronization might be a problem which not only needs to be solved in your code but may even require you to soften some requirements of the high-level workflow.
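A small illustration of the shared-resource point (a sketch, not a proper benchmark): java.util.Random funnels every caller through one atomically updated seed, so threads contend on it, while ThreadLocalRandom gives each thread its own state.

```java
import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.IntStream;

public final class SharedRandomDemo {
    private static final Random SHARED = new Random();

    public static void main(String[] args) {
        int perThread = 10_000_000;

        // All threads contend on the single shared seed.
        long t0 = System.nanoTime();
        IntStream.range(0, 8).parallel()
                 .forEach(i -> { for (int n = 0; n < perThread; n++) SHARED.nextInt(); });
        long shared = System.nanoTime() - t0;

        // Each thread uses its own generator: no contention.
        long t1 = System.nanoTime();
        IntStream.range(0, 8).parallel()
                 .forEach(i -> { for (int n = 0; n < perThread; n++) ThreadLocalRandom.current().nextInt(); });
        long local = System.nanoTime() - t1;

        System.out.printf("shared Random: %d ms, ThreadLocalRandom: %d ms%n",
                shared / 1_000_000, local / 1_000_000);
    }
}
```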
Regarding point 6, you can use these tools to see whether your hardware is the bottleneck: assuming you're on Linux, you can use vmstat to see some statistics on your RAM usage, top or atop (depending on your distro) to see which processes are taking a toll on your CPU and RAM, nload and iftop to see what is consuming network bandwidth, and iotop to see what is reading from and writing to your disk.
I run multiple game servers and I want to develop a custom application to manage them. Basically, all the game servers will connect to the application to exchange data. I don't want any of this data getting lost, so I think it would be best to use TCP. I have looked into networking and understand how it works, but I have a question about CPU usage. More servers are being added, and in the next few months the count could potentially reach around 100-200 and will continue to grow as needed. Will a new thread for each server use a lot of CPU, and is it a good idea to do this? Does anyone have any suggestions on how to go about this? Thanks.
You should have a look at non-blocking I/O. With blocking I/O, each socket consumes one thread, and the number of threads in a system is limited. Even if you can create 1000+, it is a questionable approach.
With non-blocking I/O, you can serve multiple sockets with a single thread. This is a more scalable approach, plus you control how many threads are running at any given moment.
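A bare-bones sketch of what that looks like with java.nio: a single thread, one Selector, and non-blocking channels (the port is arbitrary, the echo is a placeholder, and error handling is mostly omitted):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public final class SingleThreadServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                       // one thread waits on all sockets
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) == -1) {
                        key.cancel();
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer);        // echo back as a placeholder
                    }
                }
            }
        }
    }
}
```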
More servers are being added and in the next few months it could potentially reach around 100 - 200 and will continue to grow as needed. Will new threads for each server use a lot of cpu and is it a good idea to do this?
The standard answer is to caution you away from hundreds of threads and toward the NIO solution. However, it is important to note that the NIO approach has a significantly more complex implementation. Isolating the interaction with a server connection to a single thread has its advantages from a code standpoint.
Modern OSes can spawn thousands of threads with little overhead aside from the stack memory. If you are sure of your scaling factors (i.e. you're not going to reach 10k connections or something) and you have the memory, then I would say that a thread per TCP connection could work very well. I've very successfully run applications with thousands of threads and have not seen fall-offs in performance due to context switching, which used to be the case with earlier processors/kernels.
I'm writing a Netty application. The application is running on a 64-bit, eight-core Linux box.
The Netty application is a simple router that accepts requests (incoming pipeline) reads some metadata from the request and forwards the data to a remote service (outgoing pipeline).
This remote service will return one or more responses to the outgoing pipeline. The Netty application will route the responses back to the originating client (the incoming pipeline)
There will be thousands of clients. There will be thousands of remote services.
I'm doing some small-scale testing (ten clients, ten remote services) and I don't see the sub-10-millisecond performance I'm expecting at the 99.9th percentile. I'm measuring latency from both the client side and the server side.
I'm using a fully async protocol that is similar to SPDY. I capture the time (I just use System.nanoTime()) when we process the first byte in the FrameDecoder. I stop the timer just before we call channel.write(). I am measuring sub-millisecond time (99.9 percentile) from the incoming pipeline to the outgoing pipeline and vice versa.
I also measured the time from the first byte in the FrameDecoder to when a ChannelFutureListener callback was invoked on the (above) message.write(). The time was a high tens of milliseconds (99.9 percentile) but I had trouble convincing myself that this was useful data.
My initial thought was that we had some slow clients. I watched channel.isWritable() and logged when this returned false. This method did not return false under normal conditions.
Some facts:
We are using the NIO factories. We have not customized the worker size
We have disabled Nagle's algorithm (tcpNoDelay=true)
We have enabled keep alive (keepAlive=true)
CPU is idle 90+% of the time
Network is idle
The GC (CMS) is being invoked every 100 seconds or so for a very short amount of time
Is there a debugging technique that I could follow to determine why my Netty application is not running as fast as I believe it should?
It feels like channel.write() adds the message to a queue and we (application developers using Netty) don't have transparency into this queue. I don't know if the queue is a Netty queue, an OS queue, a network card queue, or what. Anyway, I'm reviewing examples of existing applications and I don't see any anti-patterns that I'm following.
Thanks for any help/insight
Netty creates Runtime.getRuntime().availableProcessors() * 2 worker threads by default, 16 in your case. That means you can handle up to 16 channels simultaneously; other channels will wait until you return from the ChannelUpstreamHandler.handleUpstream / SimpleChannelHandler.messageReceived handlers, so don't do heavy operations in these (I/O) threads, otherwise you can stall the other channels.
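One way to keep the I/O threads free, assuming the Netty 3.x API those handler names come from, is to hand heavy work to your own executor. This is only a sketch; the pool size and the handle() method are placeholders:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

public class OffloadingHandler extends SimpleChannelHandler {
    // Separate pool for application work so the Netty worker (I/O) threads
    // only do decoding and routing.
    private static final ExecutorService APP_POOL = Executors.newFixedThreadPool(32);

    @Override
    public void messageReceived(final ChannelHandlerContext ctx, MessageEvent e) {
        final Object request = e.getMessage();
        APP_POOL.execute(new Runnable() {
            @Override
            public void run() {
                Object response = handle(request);       // the expensive part
                ctx.getChannel().write(response);        // write back asynchronously
            }
        });
    }

    private Object handle(Object request) {
        return request;                                  // placeholder business logic
    }
}
```

Netty 3 also ships an ExecutionHandler that can be added to the pipeline for the same purpose and, if I remember correctly, preserves per-channel ordering, which the naive pool above does not.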
You haven't specified your Netty version, but it sounds like Netty 3.
Netty 4 is now stable, and I would advise that you update to it as soon as possible.
You have specified that you want ultra-low latency, as well as tens of thousands of clients and services. These don't really mix well. NIO is inherently somewhat more latent than OIO. However, the pitfall here is that OIO probably won't be able to reach the number of clients you are hoping for. Nonetheless, I would use an OIO event loop / factory and see how it goes.
I myself have a TCP server which takes around 30 ms on localhost to send, receive and process a few TCP packets (measured from the time the client opens a socket until the server closes it). If you really do require such low latencies, I suggest you switch away from TCP because of the SYN/ACK exchange required to open each connection; this is going to use a large part of your 10 ms.
Measuring time in a multi-threaded environment is very difficult if you are using simple things like System.nanoTime(). Imagine the following on a 1 core system:
Thread A is woken up and begins processing the incoming request.
Thread B is woken up and begins processing the incoming request. But since we are working on a 1 core machine, this ultimately requires that Thread A is put on pause.
Thread B is done and performed perfectly fast.
Thread A resumes and finishes, but appears to have taken twice as long as Thread B, because you actually measured the time it took to finish Thread A + Thread B.
There are two approaches on how to measure correctly in this case:
You can enforce that only one thread is used at all times.
This allows you to measure the exact performance of the operation, provided the OS does not interfere, because in the above example Thread B could be outside of your program as well. A common approach in this case is to take the median to filter out the interference, which gives you an estimation of the speed of your code. You can, however, assume that on an otherwise idle multi-core system there will be another core to process background tasks, so your measurement will usually not be interrupted. Setting this thread to high priority helps as well.
You use a more sophisticated tool that plugs into the JVM to actually measure the atomic executions and the time they took, which will remove outside interference almost completely. One tool would be VisualVM, which is already integrated into NetBeans and available as a plugin for Eclipse.
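A lighter-weight middle ground, if you want to stay in code, is to compare wall-clock time against per-thread CPU time, which excludes the time the thread spent scheduled out (a sketch; the workload is a placeholder):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public final class TimingProbe {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        long wallStart = System.nanoTime();
        long cpuStart = threads.getCurrentThreadCpuTime();   // CPU time of this thread only

        busyWork();                                           // placeholder workload

        long wallNanos = System.nanoTime() - wallStart;
        long cpuNanos = threads.getCurrentThreadCpuTime() - cpuStart;
        // A large gap between the two means this thread was descheduled or blocked.
        System.out.printf("wall: %.2f ms, cpu: %.2f ms%n",
                wallNanos / 1e6, cpuNanos / 1e6);
    }

    private static void busyWork() {
        long acc = 0;
        for (int i = 0; i < 50_000_000; i++) acc += i;
        if (acc == 42) System.out.println();                  // defeat dead-code elimination
    }
}
```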
As general advice: it is not a good idea to use more threads than cores, unless you know that those threads will frequently be blocked by some operation. This is not the case when using non-blocking NIO for I/O operations, as there is no blocking.
Therefore, in your special case, you would actually reduce the performance for clients, as explained above, because communication would be put on hold for up to 50% of the time under high load. In the worst case, that could even cause a client to run into a timeout, as there is no guarantee of when a thread is actually resumed (unless you explicitly request fair scheduling).
I've read several posts about java.net vs java.nio here on StackOverflow and on some blogs. But I still cannot get a clear idea of when one should prefer NIO over threaded sockets. Can you please examine my conclusions below and tell me which ones are incorrect and which ones are missing?
Since in the threaded model you need to dedicate a thread to each active connection, and each thread takes something like 250 kilobytes of memory for its stack, with the thread-per-socket model you will quickly run out of memory at a large number of concurrent connections. Unlike NIO.
In modern operating systems and processors a large number of active threads and context switch time can be considered almost insignificant for performance
NIO throughput can be lower because the select() and poll() calls used by asynchronous NIO libraries in high-load environments are more expensive than waking threads up and putting them to sleep.
NIO has always been slower but it allows you to process more concurrent connections. It's essentially a time/space trade-off: traditional IO is faster but has a heavier memory footprint; NIO is slower but uses fewer resources.
Java has a hard limit of 15000 / 30000 concurrent threads depending on the JVM, and this will limit the thread-per-connection model to that number of concurrent connections at most, but JVM 7 will have no such limit (I cannot confirm this).
So, as a conclusion, you can have this:
If you have tens of thousands of concurrent connections, NIO is a better choice unless request processing speed is a key factor for you.
If you have fewer than that, thread-per-connection is a better choice (given that you can afford the amount of RAM needed to hold the stacks of all concurrent threads up to the maximum).
With Java 7 you may want to look at NIO 2.0 in either case.
Am I correct?
That seems right to me, except for the part about Java limiting the number of threads – that is typically limited by the OS it's running on (see How many threads can a Java VM support? and Can't get past 2542 Threads in Java on 4GB iMac OSX 10.6.3 Snow Leopard (32bit)).
To reach that many threads you'll probably need to adjust the stack size of the JVM.
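For instance, the two usual knobs are the -Xss JVM flag, which sets the default stack size, and the Thread constructor that takes a stack-size hint (the 256 KB figure and class name below are just an example, not a recommendation):

```java
// Launch example (shell): java -Xss256k MyServer   <- smaller default stack per thread

public final class SmallStackThreads {
    public static void main(String[] args) throws InterruptedException {
        Runnable idle = () -> {
            try {
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException ignored) {
            }
        };
        // The fourth constructor argument is a stack-size hint in bytes;
        // the JVM is free to ignore it, so -Xss is the more reliable knob.
        Thread t = new Thread(null, idle, "small-stack-thread", 256 * 1024);
        t.start();
        t.interrupt();
        t.join();
    }
}
```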
I still think the context-switch overhead for the threads in traditional IO is significant. At a high level, you only gain performance from using multiple threads if they don't contend much for the same resources, or if the time they spend on those resources is much higher than the context-switch overhead.
The reason for bringing this up is that with new storage technologies like SSDs, your threads come back to contend for the CPU much more quickly.
There is not a single "best" way to build NIO servers, but the preponderance of this particular question on SO suggests that people think there is! Your question summarizes the use cases that are suited to both options well enough to help you make the decision that is right for you.
Hybrid solutions are possible too! You could hand a channel off to a thread when it is going to do something worthy of the thread's expense, and stick to NIO when it is better.
I would say start with thread-per-connection and adapt from there if you run into problems.
If you really need to handle a million connections, you should consider writing (or finding) a simple request broker in C (or whatever) that will use far less memory per connection than any Java implementation can. The broker can receive requests asynchronously and queue them to backend workers written in your language of choice.
The backends thus only need a thread per active request, and you can just have a fixed number of them, so the memory and database use is predetermined to some degree. When large numbers of requests are running in parallel, the extra requests simply wait a bit longer.
Thus I think you should never have to resort to NIO select channels or asynchronous I/O (NIO 2) on 64-bit systems. The thread-per-connection model works well enough and you can do your scaling to "tens or hundreds of thousands" of connections using some more appropriate low-level technology.
It is always helpful to avoid premature optimization (i.e. writing NIO code before you really have massive numbers of connections coming in) and don't reinvent the wheel (Jetty, nginx, etc.) if possible.
What is most often overlooked is that NIO allows zero-copy handling. E.g. if you listen to the same multicast traffic from within multiple processes using old-school sockets on one single server, every multicast packet is copied from the network/kernel buffer to each listening application. So if you build a grid of, say, 20 processes, you get memory bandwidth issues. With NIO you can examine the incoming buffer without having to copy it into application space; the process then copies only the parts of the incoming traffic it is interested in.
For another application example, see http://www.ibm.com/developerworks/java/library/j-zerocopy/ .
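The classic file-serving case from that article looks roughly like this: transferTo lets the kernel move the bytes to the socket without copying them through a Java heap buffer (a minimal sketch, error handling omitted):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public final class ZeroCopySend {
    // Stream an entire file to an already-connected socket without
    // pulling the data into user-space buffers.
    static void send(Path file, SocketChannel socket) throws IOException {
        try (FileChannel in = FileChannel.open(file, StandardOpenOption.READ)) {
            long position = 0;
            long remaining = in.size();
            while (remaining > 0) {
                long sent = in.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}
```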