I was comparing the performance of Scala actors vs. Java threads.
I was amazed by the difference: on my system I was able to spawn a maximum of only ~2,000 threads (live at a time), but on the same system I was able to spawn ~500,000 Scala actors.
Both programs used around 81 MB of JVM heap memory.
Can you explain why Java threads are so much more heavyweight than Scala/Akka actors?
What is the key factor that makes Scala actors so lightweight?
If I want to achieve the best scalability, should I go for an actor-based web server instead of a traditional Java web/app server like JBoss or Tomcat?
Thanks.
Scala actors (including the Akka variety) use Java threads. There's no magic: more than a few thousand threads running simultaneously is a problem for most desktop machines.
The Actor model allows for awake-on-demand actors which do not occupy a thread unless they have work to do. Some problems can be modeled effectively as lots of sleeping agents waiting to get some work, who will do it relatively quickly and then go back to sleep. In that case, actors are a very efficient way to use Java threading to get your work done, especially if you have a library like Akka where performance has been a high priority.
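For a rough illustration of lots of cheap actors (a minimal sketch using the classic Akka Java API; the actor class, names, message, and count here are all made up):

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class SleepyWorker extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
            // Handle one message, then release the thread back to the pool
            .match(String.class, msg -> System.out.println("processed: " + msg))
            .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        // Creating many actors is cheap: each is just a small object
        // plus a mailbox, not an OS thread
        for (int i = 0; i < 100_000; i++) {
            ActorRef worker = system.actorOf(Props.create(SleepyWorker.class), "worker-" + i);
            worker.tell("hello", ActorRef.noSender());
        }
    }
}
```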
The Akka docs explain the basics pretty well.
All reasonably scalable web servers have to solve this sort of problem one way or another; you probably ought not base your choice of web server primarily on whether actors are used under the hood, and regardless of what you use you can always add actors yourself.
An Akka actor is not equivalent to a thread. It is more like a Callable that is executed on a threadpool.
When a message is dispatched to an actor, that actor is placed on a threadpool to process the message. When it is done, the pooled thread can be used to execute other actors.
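To make the analogy concrete, here is a plain-Java sketch (no Akka involved; doSomeWork and the task count are placeholders): a small fixed pool of real threads services a huge number of lightweight tasks, much as a dispatcher schedules actors onto pooled threads one message at a time.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ActorAnalogy {
    public static void main(String[] args) {
        // A small, fixed pool of real threads...
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        // ...services a very large number of lightweight tasks
        for (int i = 0; i < 500_000; i++) {
            final int id = i;
            pool.submit(() -> doSomeWork(id));
        }
        pool.shutdown();
    }

    private static void doSomeWork(int id) {
        // stand-in for an actor processing one message
    }
}
```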
In a project of mine, I decided to use Vert.x for the HTTP APIs, given its proven performance record. Then, because the application uses event queues internally, I started wondering whether I should use the Vert.x event bus and verticles instead of my usual ArrayBlockingQueue. I am still quite new to Vert.x, so I don't know how suitable it would be. I have experience with Akka and actors, and those would fit the bill very well, but I'm not sure whether the Vert.x event bus is designed to scale to 100k events per second.
I have worked with Vert.x since version 3 and have done some projects with it (it has been my main stack for a couple of years now). I have never run into a situation where the event bus was the limiting factor. The event bus is designed to handle that volume of events and much more. As @injecteer mentioned, the limiting factor is basically the hardware, and how many events can be processed depends on what you do with them and how you scale your code.
Vert.x consistently follows a non-blocking programming model, and you should follow it as well: never block. Vert.x achieves loose coupling by partitioning code into "verticles" (https://vertx.io/docs/vertx-core/java/#_verticles). You can deploy/start multiple instances of those verticles (your pieces of code). Another base concept is event loop threads (default count: cores * 2).
Each deployed verticle instance runs on a specific event loop thread, and ALL registered handlers (event bus, HTTP server, etc.) are called on that specific event loop thread, at any time. This way you are able to scale your code in a "per thread" fashion, according to your needs. Events over the event bus are distributed round robin between the verticle instances (and the handlers within the verticles); by the way, handlers of HTTP requests are also distributed round robin.
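A sketch of that model (assuming Vert.x 4.x; the verticle class and the address echo.addr are made up):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class EchoVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // Registered on this verticle instance's event loop thread
        vertx.eventBus().consumer("echo.addr", msg -> msg.reply(msg.body()));
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Four instances -> the consumer is registered four times; events
        // sent to "echo.addr" are distributed round robin among them
        vertx.deployVerticle(EchoVerticle.class.getName(),
                new DeploymentOptions().setInstances(4),
                deployed -> vertx.eventBus().request("echo.addr", "ping",
                        reply -> System.out.println("got: " + reply.result().body())));
    }
}
```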
Clustered mode is a bit different. How you (de)serialize DTOs (JSON, Protobuf, etc.) can make a significant difference in terms of performance. A clustered event bus has TCP sockets between all nodes, which means events are sent point-to-point. The cluster manager (Hazelcast is the default), on the other hand, determines which node an event should be sent to (round robin at the cluster level), but events are NOT sent through the cluster manager. E.g. the cluster manager knows which nodes have consumers registered on the event bus (and on which address).
Since Vert.x 4 milestone 5, the cluster manager SPI provides an entry point where you can implement your own alternative to round robin, e.g. load-specific distribution.
There are some basic concepts, like event loop threads, non-blocking programming, and verticles (which are not mandatory, but recommended). Once those concepts are clear, you get a very flexible base for nearly any kind of application. I personally love it, and I have never seen any other framework/technology that reaches comparable performance (with proper scaling that fits the load).
I benchmarked the Vert.x event bus (using pure Vert.x for pub and sub) and found it to max out at around 100K msg/s per CPU (on a high-end Xeon CPU). Interestingly, the performance was comparable to Vert.x's WebSockets implementation, so I agree it's not the bottleneck if you do:
WS -> Event Bus
But if you do 10 hops on the Event Bus then it could be the bottleneck.
I observed the performance of the LMAX Disruptor to be much higher, but once you introduce I/O, the I/O becomes the bottleneck with the Disruptor. The problem with the Disruptor is that you can't use it with Netty.
From my understanding, all libraries running in a single JVM have comparable performance levels and are limited by your hardware and settings.
So, a local event bus would perform as well as any other local tech.
Things start getting interesting when you scale your system across different JVMs and/or different machines. This is where the Vert.x event bus shines, as you don't have to change the code of your verticles!
You replace the local event bus with a clustered one, which is a matter of adding dependencies and configuring the cluster; none of the original event-bus code has to be changed. The other way around also works just fine, if you want to squeeze several verticles into the same JVM.
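A minimal bootstrap sketch of that swap (assuming a cluster manager such as the vertx-hazelcast dependency is on the classpath; the class name is made up):

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class ClusteredBootstrap {
    public static void main(String[] args) {
        // Local event bus would be: Vertx vertx = Vertx.vertx();
        // Clustered event bus: same verticle code, different bootstrap
        Vertx.clusteredVertx(new VertxOptions(), ar -> {
            if (ar.succeeded()) {
                Vertx vertx = ar.result();
                // deploy the exact same verticles here, unchanged
            } else {
                ar.cause().printStackTrace();
            }
        });
    }
}
```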
Clustering the event bus of course has its price, but its performance has more to do with the underlying clustering technologies, like Hazelcast (the default) or Infinispan, than with Vert.x itself.
What I understood from the Vert.x documentation (and a little bit of coding in it) is that Vert.x is single-threaded and executes events in the event pool. It doesn't wait for I/O or any network operation(s); rather, it gives time to another event (something no Java multi-threaded framework did before).
But I couldn't understand the following:
How is a single thread better than multi-threaded? What if there are millions of incoming HTTP requests? Won't it be slower than other multi-threaded frameworks?
Verticles depend on CPU cores: as many CPU cores as you have, that many verticles can run in parallel. How can a language that runs on a virtual machine make use of CPUs as needed? As far as I know, the JVM is an application that runs as just another OS process (my understanding of the OS and the JVM is limited here, so my question might be naive).
If a single-threaded, non-blocking concept is so effective, then why can't we have the same non-blocking concept in a multi-threaded environment? Won't it be faster? Or, again, is it because the CPU can execute one thread at a time?
What I understood from the Vert.x documentation (and a little bit of coding in it) is that Vert.x is single-threaded and executes events in the event pool.
It is event-driven, callback-based. It isn't single-threaded:
Instead of a single event loop, each Vertx instance maintains several event loops. By default we choose the number based on the number of available cores on the machine, but this can be overridden.
It doesn't wait for I/O or any network operation(s)
It uses non-blocking or asynchronous I/O; it isn't clear which. Use of the Reactor pattern suggests non-blocking, but it may not be.
rather, it gives time to another event (something no Java multi-threaded framework did before).
This is meaningless.
How is a single thread better than multi-threaded?
It isn't.
What if there are millions of incoming HTTP requests? Won't it be slower than other multi-threaded frameworks?
Yes.
Verticles depend on CPU cores: as many CPU cores as you have, that many verticles can run in parallel. How can a language that runs on a virtual machine make use of CPUs as needed? As far as I know, the JVM is an application that runs as just another OS process (my understanding of the OS and the JVM is limited here, so my question might be naive).
It uses a thread per core, as per the quotation above, or whatever you choose by overriding that.
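For illustration, a sketch of overriding that default via VertxOptions (the pool size 16 here is arbitrary):

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class EventLoopConfig {
    public static void main(String[] args) {
        // Default event loop pool size is 2 * available cores;
        // it can be overridden explicitly
        Vertx vertx = Vertx.vertx(new VertxOptions().setEventLoopPoolSize(16));
    }
}
```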
If a single-threaded, non-blocking concept is so effective, then why can't we have the same non-blocking concept in a multi-threaded environment?
You can.
Won't it be faster?
Yes.
Or, again, is it because the CPU can execute one thread at a time?
A multi-core CPU can execute more than one thread at a time. I don't know what 'it' in 'is it because' refers to.
First of all, Vert.x isn't single-threaded by any means. It just doesn't spawn more threads than it needs.
Second, and this is not related to Vert.x at all, the JVM maps threads to native OS threads.
Third, we can have non-blocking behavior in a multithreaded environment. It's not one thread per CPU, but one thread per core.
But then the question is: what are those threads doing? Because usually, to be useful, they need other resources: network, DB, filesystem, memory. And here it becomes tricky. When you're single-threaded, you don't have race conditions: the only one accessing the memory at any point in time is you. But if you're multi-threaded, you need to concern yourself with mutexes, or any other way to keep your data consistent.
Q:
How is a single thread better than multi-threaded? What if there are millions of incoming HTTP requests? Won't it be slower than other multi-threaded frameworks?
A:
Vert.x isn't a single-threaded framework; it does, however, make sure that a "verticle", which is something you deploy within your application and register with Vert.x, is mostly single-threaded.
The reason for this is that concurrency with multiple threads brings complications: locks, synchronisation, and other concepts that have to be taken care of in multi-threaded communication.
While verticles are single-threaded, they do use something called an event loop, which is the true power behind this paradigm, called the reactor pattern (or the multi-reactor pattern in Vert.x's case). Multiple verticles can be registered within one application; communication between these verticles runs through an event bus, which lets verticles use an event-based transfer protocol internally, though this can also be distributed using some other technology to manage the clustering.
Event loops handle events coming in on one thread, but everything is async, so computation gets handled by the loop, and when it's done, a signal notifies that a result can be used.
So all computation is either callback-based or uses something like ReactiveX / fibers / coroutines / channels and the like.
Due to the simpler communication model for concurrency, and other nice features of Vert.x, it can actually be faster than a lot of the blocking and purely multi-threaded models out there.
Q:
If a single-threaded, non-blocking concept is so effective, then why can't we have the same non-blocking concept in a multi-threaded environment? Won't it be faster? Or, again, is it because the CPU can execute one thread at a time?
A:
As I said for the first question, it's not really single-threaded. When you know something is blocking, you have to register the computation with a method called executeBlocking, which will make it run multi-threaded on an ExecutorService managed by Vert.x.
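A sketch of what that looks like (assuming the Vert.x 4 handler-style API; slowCall is a made-up placeholder for real blocking work):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class BlockingOffload extends AbstractVerticle {
    @Override
    public void start() {
        // Known-blocking work is moved to Vert.x's internal worker pool,
        // so the event loop thread stays free for other events
        vertx.<String>executeBlocking(
            promise -> promise.complete(slowCall()), // e.g. JDBC, file system
            res -> System.out.println("result: " + res.result()));
    }

    private String slowCall() {
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        return "done";
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new BlockingOffload());
    }
}
```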
The other reason Vert.x's model is mostly faster is that event loops make better use of CPU computation features and constraints. This is mostly powered by the Netty project.
The overhead of multi-threading, with its locks and syncs, imposes too much strain to outdo Vert.x and its multi-reactor pattern.
I am trying to figure out how Akka works and what the best practices are for using the actor model.
I have a couple of questions regarding the same.
Questions:
What are the deciding factors to keep in mind when configuring the total number of actors and threads for the scenarios mentioned below?
Scenarios:
a. Only tell is invoked on an actor (fire and forget).
b. ask is invoked (futures and promises).
What are the advantages/disadvantages of using a router, e.g. RoundRobinRouter(X), over manual actor creation?
How does the dispatcher orchestrate the mailbox, actors, and threads for message processing?
Futures and promises can be used independently of actors and routers. The Alvin Alexander link below also does a great job comparing futures/promises to threads (which are the same as in Java).
The type of routing you should use will depend on your specific application's needs. In general, you should choose a routing technique that mirrors the real-world problem you are trying to solve, e.g. whether it is more like a mailbox, a bus/broadcast, or a round robin.
If you don't use the built-in routers offered by Akka, you might be tempted to write your own. However, it might be hard to improve on the Akka library: in the akka.io docs below, they explain that some of the routing work is delegated to the actors by the library, to deal with the fact that the router itself is single-threaded.
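As an illustration of the built-in routing (a sketch using RoundRobinPool, the newer Akka name for the round-robin router, with the classic Java API; the actor class and counts are made up):

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.routing.RoundRobinPool;

public class Worker extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .matchAny(msg -> System.out.println(
                        getSelf().path().name() + " got " + msg))
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("routing");
        // One router actor fronting five routees; each tell is forwarded
        // round robin, with no manual bookkeeping of worker actors
        ActorRef router = system.actorOf(
                new RoundRobinPool(5).props(Props.create(Worker.class)), "router");
        for (int i = 0; i < 10; i++) {
            router.tell("job-" + i, ActorRef.noSender());
        }
    }
}
```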
A typical computer will let you launch thousands of threads or actors if you have several gigabytes of RAM. However, at any one moment, the number of threads actually running won't be more than the number of cores in your CPU.
Here are some articles that might help you decide which techniques to use and how many threads and actors are appropriate:
http://doc.akka.io/docs/akka/2.4.10/scala/routing.html
Akka messaging mechanisms by example
How many actors can be launched in scala?
http://alvinalexander.com/scala/differences-java-thread-vs-scala-future
How many threads can ran on a CPU at a time
My application is supposed to have "realtime with pause" functionality. The user can pause execution, do some things that modify what's going to happen, then unpause and let stuff happen. Stuff happens at regular intervals specified by the user, and can be slow or fast.
My goal in using threading here is to improve performance on multicore systems. The amount of data the application is supposed to crunch in those time intervals can be arbitrarily large (I expect lots and lots of loops over collections, modifying object properties, and generating random numbers, but precious little disk access). I don't want the application to be constrained by the capacity of a single core if it can use more to run faster.
Will this actually work this way?
I've run some tests (made a program crunch numbers a lot and watched CPU usage during its activity), but they're not really conclusive: usage is certainly in the proximity of 100% on my dual-core machine, but hardly ever exactly 100%. Does a single-threaded (main only) Java application use all available cores for computation?
Does a single-threaded (main only) Java application use all available cores for computation?
No, it will normally use a single core.
Making a program do computations in parallel with multiple threads may make it faster, but it's not a magical solution for every kind of problem. Whether this is a suitable solution for your program depends on what your program is doing exactly, and on whether the algorithm can be parallelized. If, for example, you are doing lots of computations where the next computation depends on the result of the previous one, then making it multi-threaded will not help much, because you can't do the computations at the same time: the next one first has to wait for the answer of the previous one. So, you first have to think about which computations in your program could run in parallel.
Java has a lot of support for multi-threading. You can program with threads directly, or use an executor service, or use the fork/join framework. Whatever is appropriate depends on what exactly you want to do.
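As a small illustration of the difference (a sketch using the Streams API, which runs parallel work on the common fork/join pool):

```java
import java.util.stream.LongStream;

public class ParallelSum {
    public static void main(String[] args) {
        // Sequential: runs on a single core
        long seq = LongStream.rangeClosed(1, 1_000_000).map(n -> n * n).sum();

        // Parallel: the common fork/join pool spreads the work across cores
        long par = LongStream.rangeClosed(1, 1_000_000)
                             .parallel().map(n -> n * n).sum();

        System.out.println(seq == par); // same answer, more cores used
    }
}
```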
Does a single-threaded (main only) Java application use all available cores for computation?
Not usually, but you can make use of some higher-level APIs in Java that use threads for you, so you're not even using threads directly: most obviously fork/join and executors, and less obviously the new Streams API on collections (i.e. parallelStream).
In general, though, to make use of all cores, you need to do some kind of concurrency. Furthermore, it's really hard to tell what's going on just by watching your OS monitor (especially with only 2 cores): your OS has other things going on (trying to manage itself, running your IDE, running crontab, running a browser to post to Stack Overflow ;).
Finally, just implementing concurrency by itself may not help; you have to do it "right" for your code/algorithm.
A Java thread will run on a single CPU at any given time; to use multiple CPUs, you should have multiple threads.
Imagine that you have to do various tasks using your hands. You will do them slowly using one hand and more efficiently using both hands. Similarly, in Java or in any other language, multi-threading provides the system with many hands. The good news is that you can have many threads doing different tasks. Running operations in a single thread makes the program sluggish and sometimes unresponsive. A good practice is to do long-running tasks in a separate thread. For example, loading large chunks of data from a database should be processed in a separate thread, and downloading data from the internet should also be processed in a separate thread. What happens if you do long-running operations in the main thread? The program hangs and becomes unresponsive until the task completes, and the user will think that there is something wrong. I hope you get it.
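A minimal sketch of that practice (the task body is a stand-in for a database load or a download):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ResponsiveMain {
    public static void main(String[] args) {
        ExecutorService background = Executors.newSingleThreadExecutor();
        // The long-running task runs off the main thread...
        background.submit(() -> {
            try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
            System.out.println("large data load finished");
        });
        // ...so the main thread stays free and responsive
        System.out.println("main thread keeps going");
        background.shutdown();
    }
}
```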
I don't really understand Heroku's dyno and worker process model as it relates to a single-process but multi-threaded Java-based server.
For example: how do I know (for a single dyno) how many processors are available to my background threads? Do I need to use something like RabbitMQ, create a separate process (app) for each background processing task, and communicate between the server and these? That seems a little overkill for some scheduled tasks using cached thread executors. Should all Futures be changed to inter-process Futures?
I guess it comes down to this question: can I no longer write a multi-threaded server and scale the processors available to my server process to accommodate my thread activity? Or do I need to refactor my architecture to use separate processes for concurrency? If the former, do I need workers or just multiple dynos?
Thanks.
Heroku supports multiple concurrency models, so it's really up to you how you would like to architect your application. You have access to the full Java stack, so if something makes more sense to run as multiple threads in your web processes, you can definitely do that; or you can always enqueue jobs on something like RabbitMQ or Redis and process them on separate worker dynos. Multithreading is simpler and makes sense if the amount of work is light and proportional to your web requests, because it will be scaled along with the web dynos; however, if the work is large, not proportional, and/or needs to be scaled independently, then breaking it out into a separate process would be better.
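For instance, a sketch of the in-process option (the class name and interval are made up; light, request-proportional work scheduled inside the web dyno's own JVM):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class InDynoTasks {
    public static void main(String[] args) {
        // Background work living in the web process scales up and down
        // together with the web dynos
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("periodic task ran"),
                0, 30, TimeUnit.SECONDS);
    }
}
```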
Heroku was originally just a Ruby platform, which does not have the same threading capabilities as Java, so the use of separate worker dynos is more important for Ruby and this is reflected in some of the documentation and examples out there, which might have led to your confusion. Luckily, with Java you have more options available to you and can use what's best for the job at hand.