I have a basic Java messaging application that sends Java objects to a remote server for processing. I use Spring support on both sides of the wire and ActiveMQ as my JMS provider. It works well, and we have experienced no real problems with 10 clients sending messages concurrently.
However, we now really want to scale. The number of clients is likely to increase to circa 500. Also, the bandwidth used by each client is more of an issue than was initially anticipated.
I was wondering whether anyone thinks ActiveMQ is the right tool for this job, or whether socket-based TCP/UDP would help us scale better. We are not well versed in some of the 'advanced' features of AMQ, since we are using it with basic Spring JmsTemplate support.
Any comments / thoughts would be appreciated.
Thanks
Without knowing what quality of service and SLAs you're trying to achieve, the basic rules I follow when evaluating messaging implementations for any application are the following...
Performance over Reliability
Fast messaging
Intermittent message loss acceptable
Messages are non-persistent
Little to no cost
In this case, a product like ZeroMQ (and others like it) would suffice: it works at the socket level, is decentralized, offers extremely low latency, scales well in large distributed systems, and is open source, so cost is negligible. If some use cases require persistence and reliability, be prepared to implement custom versions of the features traditional messaging middleware offers out of the box (persistence, durability, replication, etc.).
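To make that concrete, here is a minimal sketch of this style of messaging using JeroMQ, a pure-Java ZeroMQ implementation; the socket pattern and port are arbitrary choices for illustration, and note there is no broker and no persistence:

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Fire-and-forget PUSH/PULL pipeline: brokerless and non-persistent,
// so messages in flight are lost if a peer dies. Port is arbitrary.
public class ZmqSketch {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket push = ctx.createSocket(SocketType.PUSH);
            push.bind("tcp://*:5555");

            ZMQ.Socket pull = ctx.createSocket(SocketType.PULL);
            pull.connect("tcp://localhost:5555");

            push.send("tick");                  // no broker ack, no disk
            System.out.println(pull.recvStr()); // prints "tick"
        }
    }
}
```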
Balance Performance and Reliability
Performance and reliability equally important
Lost messages not acceptable
Messages are persistent
Need for some support
Relatively low cost
Here's where products like ActiveMQ, RabbitMQ, etc. come into play. Broker-based middleware addresses reliability and persistence concerns while providing good performance and scalability. Support costs are usually low enough for small and mid-sized companies to afford without breaking the bank. It's safe to say most messaging needs fall into this category: you get access to both performance and reliability, and as your application matures you can trade one off against the other without swapping out the entire messaging infrastructure because of a bad choice made years earlier.
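Since the original question already uses Spring's JmsTemplate against ActiveMQ, two cheap knobs when the client count grows are connection pooling and asynchronous sends. A minimal sketch, with an illustrative broker URL, queue name, and pool size:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

// Pooled connections shared across client threads, plus async sends
// so each send() doesn't block waiting for a broker acknowledgement.
public class ScaledSender {
    public static void main(String[] args) {
        ActiveMQConnectionFactory amq =
                new ActiveMQConnectionFactory("failover:(tcp://broker:61616)");
        amq.setUseAsyncSend(true); // trades a little reliability for throughput

        PooledConnectionFactory pooled = new PooledConnectionFactory(amq);
        pooled.setMaxConnections(8);

        JmsTemplate template = new JmsTemplate(pooled);
        template.convertAndSend("orders.inbound", "payload");
        pooled.stop();
    }
}
```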
Reliability over Performance
Reliability is most important
Messages can't be dropped
Message redelivery out of the box
Clustering, HA, replication, etc. available out of the box
Need enterprise-class, global support and professional services
High Cost
Financial firms, trading systems, banking applications, etc. typically have such requirements, where messaging reliability has a dollar value attached to it: when things don't work, money is lost. Therefore message persistence, HA/fault tolerance, and failover are all extremely important. If cost is not an issue, look at products like WebLogic, WebSphere, SonicMQ, or TIBCO. They're expensive, but they all offer solid reliability, enterprise support, and good performance under load. I've used SonicMQ; it's a great product, very fast and reliable, but it costs an arm and a leg.
Hope it helps...
I am looking to optimize a microservice architecture that currently uses HTTP/REST for internal node-to-node communication.
One option is implementing backpressure capability into the services, e.g., by integrating something like Quasar into the stack. This would no doubt improve things, but I see a couple of challenges. One is that the async client threads are transient (in memory), so on a client failure (crash) those retry threads are lost. The second is that, in theory, if a target server is down for some time, the client could eventually hit OOM from retrying, because even Quasar fibers are ultimately limited.
I know it's a little paranoid, but I'm wondering if a queue-based alternative would be more advantageous at very large scale.
It would still work asynchronously like Quasar/fibers, except that a) the queue is centrally managed and off the client JVM, and b) the queue can be durable, so that if the client and/or target servers go down, no in-flight messages are lost.
The downside to a queue, of course, is that there are more hops, which slows the system down. But I'm thinking there is probably a sweet spot where the Quasar ROI peaks and a centralized, durable queue becomes more critical for scale and HA.
My question is:
Has this tradeoff been discussed? Are there any papers on using a centralized external queue / router approach for intraservice communication?
TL;DR: I just realized I could probably phrase this question as: "When is it appropriate to use message-bus-based intraservice communication as opposed to direct HTTP within a microservice architecture?"
I've seen three general protocol design patterns in microservice architectures running at scale:
Message bus architecture, using a central broker such as ActiveMQ or Apache Qpid.
"Resilient" HTTP, where some additional logic is built on HTTP to make it more resilient. Typical approaches here are Hystrix (Java), or SmartStack/Baker St (smart proxy).
Point-to-point asynchronous messaging using something like NSQ, ZMQ, or Qpid Proton.
By far the most common design pattern is #2, with a little bit of #1 mixed in when a queue is desirable.
In theory, #3 offers the best of all worlds (resiliency AND scale AND performance), but the technologies are all somewhat immature. It turns out that with #2 you can get very far (e.g., Netflix uses Hystrix everywhere).
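To give a feel for pattern #2, here is a minimal sketch of a downstream HTTP call wrapped in a Hystrix command; the group name and the HTTP helper are hypothetical:

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// The circuit breaker opens after repeated failures, and getFallback()
// returns a degraded answer instead of letting callers pile up on a dead peer.
public class InventoryCommand extends HystrixCommand<String> {
    private final String itemId;

    public InventoryCommand(String itemId) {
        super(HystrixCommandGroupKey.Factory.asKey("InventoryService")); // hypothetical group
        this.itemId = itemId;
    }

    @Override
    protected String run() throws Exception {
        return callInventoryServiceOverHttp(itemId); // hypothetical HTTP helper
    }

    @Override
    protected String getFallback() {
        return "unknown"; // served while the circuit is open or the call fails
    }

    private String callInventoryServiceOverHttp(String id) throws Exception {
        throw new UnsupportedOperationException("stub for illustration");
    }
}
```

Usage would be `new InventoryCommand("42").execute()`, which blocks with a timeout, or `queue()` for an async Future.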
To answer your question directly, I'd say that #1 is very rarely used as an exclusive design pattern because it creates a single bottleneck for your entire system. #1 is common for a subset of the system. For most people, I'd recommend #2 today.
I'm going to develop a little Android game with a multiplayer feature. I've made a server framework in C++ using the eNet library, and I would like to use this framework to build the server.
So, is there any networking library like eNet that is compatible with both Java and C++? I know jEnet exists, but it is very out of date, and the java-enet-wrapper (https://github.com/csm/java-enet-wrapper) is immature.
Check out https://github.com/julienr/libenet-android.
ENet is much more advisable than UDT in your case, as UDT can be processor-intensive and a gaming service will presumably want many connections. The difference comes from UDT's implementation of congestion control, which has relatively high CPU demand. UDT is awesome, but it is designed for large, high-bandwidth transfers over long distances rather than the small, latency-sensitive transactions desired in gaming.
Also note that mainstream congestion control algorithms do not do well with small transactions. They work by monitoring the RTT of each packet in a transaction and/or the packet loss rate within a transaction, which is moot when each transaction is only 1-2 packets on average. The additional demands of the congestion control protocol will affect latency even though the congestion control itself is unlikely ever to be engaged if transfers are kept small.
You might try out UDT: http://udt.sourceforge.net/
I have used it before with good success to communicate between Java and C++ processes.
I would like to ask what would be more appropriate to choose when developing a server similar to SmartFoxServer. I intend to develop a similar yet different server. In the benchmarks published by the developers of the above server, they handled something like 10,000 concurrent clients.
I did a bit of research regarding the cost of using too many threads (>500) but cannot decide which way to go. I once made a server in Java, but that was for a small application and had nothing to do with heavy loads.
Thanks
Take a look at Apache MINA. They've done a lot of the heavy lifting required to use NIO effectively in a networking application. Whether or not NIO increases your ability to process concurrent connections really depends on your implementation, but the performance boosts in Tomcat, JBoss, and Jetty are already good evidence in its favor.
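As a rough sketch of what MINA buys you, here is a minimal echo server in which a small pool of NIO threads services every connection; the port is arbitrary:

```java
import java.net.InetSocketAddress;

import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

// One acceptor plus a shared NIO processor pool handles all clients;
// there is no thread-per-connection anywhere in this code.
public class EchoServer {
    public static void main(String[] args) throws Exception {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();
        acceptor.setHandler(new IoHandlerAdapter() {
            @Override
            public void messageReceived(IoSession session, Object message) {
                session.write(message); // echo the raw buffer back
            }
        });
        acceptor.bind(new InetSocketAddress(9123));
    }
}
```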
I'm not familiar with SmartFoxServer, so I can only speak generically (which is not always good :P but here I go).
I think those are two different questions. On one hand, there's the I/O performance of native Java sockets vs. sockets written in C (as in Tomcat's native APR connector).
The other question is how to scale up to that kind of concurrency level. All else being equal, I'd choose native sockets (i.e., C).
Now, how to scale: it's not a good idea to have a lot of threads running at the same time (OS constraints, etc.), so I'd scale horizontally, adding a load balancer that sends requests to different servers, which can be linked by messaging (JMS with a broker such as ActiveMQ, or RabbitMQ via a protocol like STOMP or AMQP).
Another solution is a cloud environment that lets you grow your installation as needed.
In most benchmarks that test 10K or 100K connections, the server is doing no work; unless your server does next to nothing, these tests are unrealistic.
You need to have a clear idea of how many concurrent connections you want to support.
If you have fewer than 1K connections, using a thread per connection will work fine, and it is the simplest approach to take. A dispatcher model with NIO will work better if your requests are very simple; otherwise it won't matter much.
If you have more than 1K connections, it is likely you will want more than one server, since each connection then gets less than 1% of a core, and a basic server is relatively cheap these days.
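For reference, the thread-per-connection approach described above is about this simple, which is its main virtue; the port and reply are illustrative:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Blocking I/O, one thread per client: fine below roughly 1K connections.
public class ThreadPerConnectionServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(9123);
        while (true) {
            Socket client = server.accept();
            new Thread(() -> handle(client)).start(); // one thread per socket
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            c.getOutputStream().write("hello\n".getBytes());
        } catch (IOException e) {
            // client disconnected; nothing to clean up beyond the socket
        }
    }
}
```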
I'm pretty new to web programming and am currently developing a web back end for a mobile application. Currently I have users log in via servlet interactions, and once they have full access to the application I need to open a socket connection so that I can provide server pushes. The problem I'm running into is how people handle thousands of concurrent socket connections. I've seen people talk about thread pools, which seem pretty easy to implement, and NIO. Is there some framework I can work with to ensure my servers can handle at least 20-30k concurrent connections? I could also forget TCP connections and go for long polling, but from my understanding TCP is the best option resource-wise.
@Steve - I'm looking at the former: one ServerSocket with thousands of connections.
I would look into clustering the web end immediately and use that as your primary scaling mechanism. 30k connections is quite a lot, and you don't have much room for growth before you hit a server limit of some kind. If the I/O itself isn't onerous, I would just use lots of threads and servers with lots of horsepower and memory. Get it working that way so you can ship, and have a fallback plan to switch to multiplexed NIO if performance or scaling becomes a problem, but be warned that it's a radical overhaul and about ten times as complex to program as java.net.
After several years' consideration, I am more and more wondering whether NIO to economize on threads is really worth it. It adds several new problems of its own: a need for push parsing; synchronization issues with the selector if there are worker threads that need to change the registration state of channels; lots of ways to get the code wrong; and the fact that the scheduling overhead moves out of the OS into your application, where you only have linear set-iterator data structures to deal with it unless you engage in yet another level of complexity.
It's worth remembering that select() was invented for Unix to allow economizing on processes, which are expensive. Threads are pretty cheap really, and they provide a very simple programming model with built-in context for handling a single connection. NIO barely manages this at all except via disciplined use of selection-key attachments, much less naturally.
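To illustrate the overhaul being described, here is a stripped-down sketch of a multiplexed NIO selector loop; even with no protocol logic at all, the extra machinery (keys, readiness sets, manual buffers) is visible. The port is arbitrary:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// One selector thread services every channel; all I/O is non-blocking.
public class SelectorLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9123));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    if (client.read(buf) < 0) {
                        key.cancel();   // peer closed; drop the registration
                        client.close();
                    }
                    // push-parse whatever partial bytes arrived here
                }
            }
            selector.selectedKeys().clear(); // must clear by hand each pass
        }
    }
}
```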
I've worked in embedded systems and systems programming for hardware interfaces to date. For fun and personal knowledge, I've recently been trying to learn more about server programming after getting my hands wet with Erlang. I've been going back and thinking about servers from a C++/Java perspective, and now I wonder how scalable systems can be built with technology like C++ or Java.
I've read that due to context switching and limited memory, a per-client thread handler isn't realistic. Usually a thread pool is created, and a mix of worker threads and asynchronous I/O is used to handle requests. I wonder, first of all, how one determines the thread pool size. Does one simply have to measure and find the optimal balance? Eventually, as the system scales, more than one server may be needed to handle requests. How are requests managed across multiple servers handling a large client base?
I am just looking for some direction on where I might read more and find answers to my questions. What area of computer science would I look into for more information in this area? Are there any design patterns for this area of computing?
Your question is too general to have a nice answer. The answer depends greatly on the context, on how much processing any one Thread does, on how rapidly requests arrive, on the CPU family being used, on the web container being used, and on many other factors.
For C++ I've used boost::asio; it's very modern C++ and quite pleasant to work with. Also, the proposed C++0x networking libraries are based on ASIO's implementation, so it's valuable knowledge.
As for designs: one thread per client doesn't work, as you've already learned. For high-performance multithreading the best number of threads seems to be about 2x the number of cores, but servers do a lot of I/O per request, which means a lot of idle waiting. From experience, looking at Apache, MySQL, and Oracle, the thread count is about 10x the cores for database servers and about 40x the cores for web servers. I'm not saying these are the ideals, but they seem to be patterns of successful systems, so if your system can be balanced to work optimally with similar numbers, at least you'll know your design isn't completely lousy.
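As a sketch of sizing from those ratios (in Java for brevity; the multiplier is a starting point to measure against, not a rule):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Derive the worker pool size from the core count: ~2x for CPU-bound
// work, larger (10x-40x) for I/O-heavy servers per the ratios above.
public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        int multiplier = 10; // illustrative: I/O-heavy, database-server-like
        ExecutorService workers = Executors.newFixedThreadPool(cores * multiplier);
        workers.submit(() ->
                System.out.println("handled on " + Thread.currentThread().getName()));
        workers.shutdown();
    }
}
```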
C++ Network Programming: Mastering Complexity Using ACE and Patterns and C++ Network Programming: Systematic Reuse with ACE and Frameworks are very good books that describe many design patterns and their use with the highly portable ACE library.
Like Lothar, we use the ACE library which contains reactor and proactor patterns for handling asynchronous events and asynchronous I/O with C++ code. We use sizable worker thread pools that grow as needed (to a configurable maximum) and shrink over time.
One of the tricks with C++ is how you are going to propagate exceptions and error situations across network boundaries (which isn't handled by the language). I know that there are ways with .NET to throw exceptions across these network boundaries.
One thing you may consider is looking into SOA (Service-Oriented Architecture) for dealing with higher-level distributed system issues. ACE is really for running at the bare metal of the machine.