I noticed that some web frameworks, such as Play Framework, allow you to configure multiple thread pools with different sizes (number of threads within each). Let's say we run Play on a single machine with a single core. Wouldn't there be a huge overhead in having multiple thread pools?
For example, a small thread pool assumes asynchronous operations, while a large thread pool indicates many blocking calls, so threads can context-switch. Both cases assume a parallelism factor based on the number of cores in the machine. My concern is that the processor is then shared even further.
How does this work?
Thanks!
Play certainly allows you to configure multiple execution contexts (the equivalent of a thread pool), but that does not mean you should do it, especially if your machine has a single core. By default the pool size should be kept low (close to the number of cores) for high throughput, assuming, of course, that the operations are all non-blocking. If you have blocking operations, the idea is to run them on a separate execution context, as they otherwise lead to the exhaustion of the default ExecutionContext (the request processing pipeline in Play runs on the default ExecutionContext, which is limited to a small number of threads).
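As a rough illustration of the same idea in plain java.util.concurrent (the pool names and sizes here are my own, not Play's API): a small pool close to the core count for non-blocking work, plus a separate, larger pool that quarantines blocking calls so they cannot starve the first one.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSeparation {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // Small pool for CPU-bound / non-blocking work: ~1 thread per core.
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores);

        // Larger pool for blocking calls (JDBC, file IO, ...), so blocked
        // threads do not exhaust the request-processing pool above.
        ExecutorService blockingPool = Executors.newFixedThreadPool(cores * 10);

        cpuPool.submit(() -> System.out.println("fast, non-blocking work"));
        blockingPool.submit(() -> {
            try {
                Thread.sleep(1000); // stands in for a blocking call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("blocking work done");
        });

        cpuPool.shutdown();
        blockingPool.shutdown();
    }
}
```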
As to what happens when you have more threads than cores: that depends heavily on the operations you're running (I/O-bound or not). One thread per core is supposedly optimal if you only do CPU-bound operations. See also this question.
Related
This is what I see in the Oracle documentation, and I would like to confirm my understanding (source):
A computer system normally has many active processes and threads. This is true even in systems that only have a single execution core, and thus only have one thread actually executing at any given moment. Processing time for a single core is shared among processes and threads through an OS feature called time slicing.
Does it mean that on a single-core machine only one thread can be executed at a given moment?
And does it mean that on a multi-core machine multiple threads can be executed at a given moment?
one thread actually executing at any given moment
Imagine that this is a game where 10 people try to sit on 9 chairs in a circle (you might know the game): there aren't enough chairs for everyone, but the entire group of people keeps moving, always. Everyone just sits on a chair for some amount of time (a very simplified version of time slicing).
Thus multiple processes can run on the same core.
But even if you have multiple processors, it does not mean that a certain thread will run only on one processor during its entire lifetime. There are tools to achieve that (even in Java), and it's called thread affinity, where you pin a thread to a particular processor (this is quite handy in some situations). Otherwise a thread can be moved (rescheduled by the OS) to run on a different core, and this context switching to a different CPU is sometimes unwanted.
At the same time, of course, multiple threads can run in parallel on different cores.
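For completeness, here is a minimal sketch of pinning in Java. It assumes the third-party OpenHFT Java-Thread-Affinity library (net.openhft:affinity) on the classpath, since the JDK itself offers no affinity API:

```java
import net.openhft.affinity.AffinityLock;

public class PinnedWorker {
    public static void main(String[] args) {
        new Thread(() -> {
            // Acquire a CPU and keep this thread pinned to it until the
            // lock is released, so the OS will not migrate it between cores.
            AffinityLock lock = AffinityLock.acquireLock();
            try {
                System.out.println("pinned to cpu " + lock.cpuId());
                // ... latency-sensitive work here ...
            } finally {
                lock.release();
            }
        }, "pinned-worker").start();
    }
}
```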
Does it mean that on a single-core machine only one thread can be executed at a given moment?
Nope, you can easily have more threads than processors, assuming they're not doing CPU-bound work. For example, if you have two threads mostly waiting on IO (either from the network or local storage) and another thread consuming the data fetched by the first two, you could certainly run that on a machine with a single core and get better performance than with a single thread.
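A minimal sketch of that three-thread arrangement (all names hypothetical): two threads simulate blocking IO waits while a third consumes their results through a queue, so even a single core stays busy:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class IoOverlap {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

        Runnable fetcher = () -> {
            try {
                for (int i = 0; i < 3; i++) {
                    Thread.sleep(200); // stands in for a blocking IO wait
                    queue.put(Thread.currentThread().getName() + " chunk " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        new Thread(fetcher, "io-1").start();
        new Thread(fetcher, "io-2").start();

        // Consumer: processes whatever the IO threads deliver while they wait.
        new Thread(() -> {
            try {
                for (int i = 0; i < 6; i++) {
                    System.out.println("processed " + queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "consumer").start();
    }
}
```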
And does it mean that on a multi-core machine multiple threads can be executed at a given moment?
Well, yeah, you can execute any number of threads on any number of cores, provided you have enough memory to allocate a stack for each of them. Obviously, if each thread makes intensive use of the CPU, it will stop being efficient once the number of threads exceeds the number of cores.
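On the memory point: a sketch showing how the four-argument Thread constructor lets you suggest a smaller per-thread stack (the size is only a hint the JVM may round or ignore), so that more threads fit in memory:

```java
public class ManyThreads {
    public static void main(String[] args) {
        // Each thread needs its own stack; a smaller stack hint lets more
        // threads coexist before memory runs out.
        for (int i = 0; i < 1_000; i++) {
            Thread t = new Thread(null, () -> {
                try {
                    Thread.sleep(1_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "worker-" + i, 256 * 1024); // 256 KiB stack hint
            t.start();
        }
    }
}
```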
Can we run multiple processes in one JVM, where each process has its own memory quota?
My aim is to start a new process when a new HTTP request comes in and assign separate memory to it, so that each user request has its own memory quota and doesn't disturb other user requests if one request's quota fills up.
How can I achieve this?
Not sure if this is hypothetical.
Short answer: not really.
The Java platform offers you two options:
Threads. And that is the typical answer in many cases: each new incoming request is dealt with by a separate thread (probably taken from a pool, to limit the overall number of thread instances created and used in parallel). But of course: threads live in the same process; there is no such thing as controlling the memory consumption associated with what a thread is doing.
Child processes. You can create a real process and use that to run whatever you intend to run. But of course: then you have an external real process to deal with.
So, in essence, the real answer is: no, you can't apply this idea to Java. The "more" Java solution would be to look into concepts such as application servers, for example Tomcat or WebSphere.
Or, if you insist on doing things manually, you could build your own "load balancer": one client-facing JVM that simply forwards requests to one of many other JVMs, each working independently in its own process, which you could then "micro manage" regarding CPU/memory/... usage.
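If you do go the child-process route, here is a rough sketch (Worker is a hypothetical main class) of giving each request its own JVM and heap quota via -Xmx:

```java
import java.io.IOException;

public class JvmPerRequest {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Launch a child JVM with its own memory quota; if Worker exhausts
        // its 256 MB heap, only this child process dies, not the parent.
        ProcessBuilder pb = new ProcessBuilder(
                "java", "-Xmx256m", "-cp", System.getProperty("java.class.path"),
                "Worker", "request-42");
        pb.inheritIO(); // share the parent's stdout/stderr
        Process child = pb.start();
        System.out.println("worker exited with " + child.waitFor());
    }
}
```

An OutOfMemoryError in the child then stays contained in that process, which is exactly the isolation a single JVM cannot give you per thread.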
The closest concept is the Application Isolation API (JSR-121), which AFAIK has not been implemented; see https://en.wikipedia.org/wiki/Application_Isolation_API.
"The Application Isolation API (JSR 121) provides a specification for isolating and controlling Java application life cycles within a single Java Virtual Machine (JVM) or between multiple JVMs. An isolated computation is described as an Isolate that can communicate and exchange resource handles (e.g. open files) with other Isolates through a messaging facility."
See also https://www.flux.utah.edu/janos/jsr121-internal-review/java/lang/isolate/package-summary.html:
"Informally, isolates are a construct midway between threads and JVMs. Like threads, they can be used to initiate concurrent execution. Like JVMs, they cause execution of a "main" method of a given class to proceed within its own system-level context, independently of any other Java programs that may be running. Thus, isolates differ from threads by guaranteeing lack of interference due to sharing statics or per-application run-time objects (such as the AWT thread and shutdown hooks), and they differ from JVMs by providing an API to create, start, terminate, monitor, and communicate with these independent activities."
When a single user is accessing an application, multiple threads can be used, and they can run in parallel if multiple cores are present. If only one processor exists, then threads will run one after another.
When multiple users are accessing an application, how are the threads handled?
I can talk from a Java perspective, so your question becomes: "when multiple users are accessing an application, how are the threads handled?"
The answer is that it all depends on how you programmed it. If you use a web/app container, it provides a thread pool mechanism, so more than one thread is available to serve user requests. One request is initiated per user and handled by one thread, so if there are 10 simultaneous users, 10 threads handle the 10 requests simultaneously. Nowadays we also have non-blocking IO, where request processing can be offloaded to other threads, allowing fewer than 10 threads to handle 10 users.
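A bare-bones sketch of that container-style behaviour (pool size and names are illustrative, not any container's actual internals): a fixed pool where each simulated user request is served by one pooled thread:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RequestPool {
    public static void main(String[] args) {
        // Container-style pool: each incoming request is handled by one
        // pooled thread; 10 simultaneous users -> up to 10 busy threads.
        ExecutorService pool = Executors.newFixedThreadPool(10);

        for (int user = 1; user <= 10; user++) {
            final int id = user;
            pool.submit(() -> System.out.println("request of user " + id
                    + " handled by " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```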
Now, if you want to know exactly how thread scheduling is done around the CPU cores, that again depends on the OS. One thing is common, though: the thread is the basic unit of allocation to a CPU. Start with green threads here, and you will understand it better.
The incorrect assumption is
If only one processor exists, then threads will run one after another.
How threads are executed is up to the runtime environment.
With Java, there are some guarantees that certain parts of your code will not cause synchronisation with other threads, and thus will not cause (potential) rescheduling of threads.
In general, the OS is in charge of scheduling units of execution. In former days such units were mostly processes; now there may be processes and threads (some systems schedule at the thread level only). For simplicity, let's assume the OS deals with threads only.
The OS may then allow a thread to run until it reaches a point where it can't continue, e.g. waiting for an I/O operation to complete. This is good for that thread, as it can use the CPU to the maximum, but bad for all the other threads that want CPU cycles of their own. (In general there will always be more threads than available CPUs, so the problem is independent of the number of CPUs.) To improve interactive behaviour, an OS might use time slices that allow a thread to run for a certain time. After the time slice expires, the thread is forcibly removed from the CPU and the OS selects a new thread to run (which could even be the one just interrupted).
This allows each thread to make some progress (at the cost of some scheduling overhead). This way, even on a single-processor system, threads may seem to run in parallel.
So for the OS it is not at all important whether a set of threads results from a single user (or even from a single call to a web application) or has been created by a number of users and web calls.
You need to understand the thread scheduler.
In fact, on a single core, the CPU divides its time among multiple threads (so the processing is not exactly sequential). On multiple cores, two (or more) threads can run simultaneously.
Read the thread article on Wikipedia.
I recommend Tanenbaum's OS book.
Tomcat uses Java's multi-threading support to serve HTTP requests.
To serve an HTTP request, Tomcat takes a thread from its thread pool. The pool is maintained for efficiency, as thread creation is expensive.
Refer to the Java documentation about concurrency to read more: https://docs.oracle.com/javase/tutorial/essential/concurrency/
See the Tomcat thread pool configuration for more information: https://tomcat.apache.org/tomcat-8.0-doc/config/executor.html
There are two points to answer your question: thread scheduling and thread communication.
Thread scheduling implementation is specific to the operating system. The programmer has no control in this regard, except for setting a thread's priority.
Thread communication is driven by the program/programmer.
Assume you have multiple processors and multiple threads. Multiple threads can run in parallel on multiple processors, but how the data is shared and accessed is specific to the program.
You can run your threads in parallel, or you can wait for threads to complete execution before proceeding further (join, invokeAll, CountDownLatch, etc.). The programmer has full control over thread life cycle management.
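For instance, a small sketch (the task values are made up) of one of those coordination tools, invokeAll, which blocks the submitting thread until every task has finished:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WaitForAll {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        List<Callable<Integer>> tasks = List.of(() -> 1, () -> 2, () -> 3);

        // invokeAll blocks until every task has completed, giving explicit
        // control over the join point before proceeding further.
        List<Future<Integer>> results = pool.invokeAll(tasks);

        int sum = 0;
        for (Future<Integer> f : results) {
            sum += f.get(); // cannot block here; tasks are already done
        }
        System.out.println("all tasks done, sum = " + sum); // prints 6
        pool.shutdown();
    }
}
```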
There is no difference whether you have one user or several. Threads work according to the logic of your program. The processor runs every thread for a certain amount of time and then moves on to the next one. The time slice is very short, so if there are not too many threads (or other processes) working, the user won't notice it. If the processor uses a 20 ms slice and there are 1000 threads, then every thread may have to wait about 20 seconds for its next turn. Fortunately, current processors, even with just one core, often have two hardware threads (hyper-threading) which can be used to run threads in parallel.
In "classic" implementations, all web requests arriving to the same port are first serviced by the same single thread. However as soon as request is received (Socket.accept returns), almost all servers would immediately fork or reuse another thread to complete the request. Some specialized single user servers and also some advanced next generation servers like Netty may not.
The simple (and common) approach is to pick or reuse a thread for the whole duration of a single web request (GET, POST, etc.). After the request has been served, the thread will likely be reused for another request, which may belong to the same or a different user.
However, it is entirely possible to write custom server code that binds and then reuses a particular thread for the web requests of a logged-in user or IP address. This may be difficult to scale. I think standard servers like Tomcat typically do not do this.
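To make the "classic" pattern above concrete, a bare-bones sketch (port and response are illustrative): one thread accepts connections, and each accepted socket is handed to a pooled thread that is reused afterwards:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ClassicServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(50);
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                // One thread accepts; a pooled thread serves the request
                // and is then reused for a later, possibly different, user.
                Socket socket = server.accept();
                pool.submit(() -> {
                    try (socket; PrintWriter out =
                            new PrintWriter(socket.getOutputStream(), true)) {
                        out.println("HTTP/1.1 200 OK");
                        out.println();
                        out.println("served by " + Thread.currentThread().getName());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
            }
        }
    }
}
```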
Since the JVM creates only one process initially, does creating multiple threads in this process boost CPU performance, assuming you have multiple CPU cores? Since all the threads inside the same process share the resources of that process, is the execution technically sequential?
In other words, unless you create two or more processes and associate threads with each of them, you can't get the full benefit of parallel execution on multiple CPU cores?
Yes, distributing the workload over multiple threads can boost the performance of your program. It also improves responsiveness.
However, there is increased overhead due to communication and synchronization, and not every algorithm can be parallelized.
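As a hedged illustration of the upside, a sketch that splits a CPU-bound sum into one chunk per core; with multiple cores, the chunks genuinely run in parallel within a single JVM process:

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // One chunk per core; each chunk runs on its own thread, so the
        // work is not sequential even though it is all one process.
        long chunk = 1_000_000_000L / cores;
        CompletionService<Long> cs = new ExecutorCompletionService<>(pool);
        for (int i = 0; i < cores; i++) {
            long from = i * chunk, to = from + chunk;
            cs.submit(() -> {
                long s = 0;
                for (long n = from; n < to; n++) s += n;
                return s;
            });
        }

        long total = 0;
        for (int i = 0; i < cores; i++) total += cs.take().get();
        System.out.println("sum = " + total);
        pool.shutdown();
    }
}
```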
I am implementing a worker pool in Java.
This is essentially a whole load of objects which will pick up chunks of data, process the data and then store the result. Because of IO latency there will be significantly more workers than processor cores.
The server is dedicated to this task and I want to wring the maximum performance out of the hardware (but no, I don't want to implement it in C++).
The simplest implementation would be to have a single Java process which creates and monitors a number of worker threads. An alternative would be to run a Java process for each worker.
Assuming, for argument's sake, a quad-core Linux server, which of these solutions would you anticipate being more performant, and why?
You can assume the workers never need to communicate with one another.
One process, multiple threads - for a few reasons.
When context-switching between jobs, it's cheaper on some processors to switch between threads than between processes. This is especially important in an I/O-bound case like this one, with more workers than cores. The more work you do between getting blocked on I/O, the less important this is. Good buffering will pay off with threads or processes, though.
When switching between threads in the same JVM, at least some Linux implementations (x86, in particular) don't need to flush the cache. See Tsuna's blog. Cache pollution between threads will be minimized, since they can share the program cache, are performing the same task, and share the same copy of the code. We're talking savings on the order of hundreds of nanoseconds to several microseconds per switch. If that's small potatoes for you, then read on...
Depending on the design, the I/O data path may be shorter for one process.
The startup and warmup time for a thread is generally much shorter. The OS doesn't have to start a process, Java doesn't have to start another JVM, classloading is done only once, JIT compilation happens only once, and HotSpot optimizations are applied once, and sooner.
Usually, when discussing multiprocessing (with one thread per process) versus multithreading in the same process, the theoretical overhead is bigger in the first case (and thus multiprocessing is theoretically slower than multithreading), but in reality on most modern OSs this is not such a big issue. In the Java context, however, starting a new process is a lot more costly than starting a new thread: it means starting up a new instance of the JVM, which is very expensive, especially in terms of memory. I recommend starting multiple threads in the same JVM.
Moreover, since you say inter-thread communication is not an issue, you can use Java's ExecutorService to get a fixed thread pool of size 2 x (number of available CPUs). The number of available CPUs can be autodetected at runtime via Java's Runtime class. This way you get simple multithreading going quickly, without any boilerplate code.
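A minimal sketch of exactly that setup (the 2x factor is the rule of thumb from this answer, not a hard rule; the job bodies are placeholders):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WorkerPool {
    public static void main(String[] args) {
        // Autodetect the core count and size the pool at 2x, leaving
        // headroom for workers that block on IO.
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService workers = Executors.newFixedThreadPool(2 * cpus);

        for (int i = 0; i < 100; i++) {
            final int job = i;
            workers.submit(() -> {
                // fetch chunk, process it, store the result (placeholder)
                System.out.println("job " + job + " on "
                        + Thread.currentThread().getName());
            });
        }
        workers.shutdown();
    }
}
```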
Actually, if you do this with large-scale tasks, using multiple JVM processes can be way faster than one JVM with multiple threads. At least we never got one JVM running as fast as multiple JVMs.
We do some calculations where each task uses around 2-3 GB of RAM and does some heavy number crunching. If we spawn 30 JVMs and run 30 tasks, they perform around 15-20% better than spawning 30 threads in one JVM. We tried tuning the GC and the various memory regions, and never caught up with the first variant.
We did this on various machines: 14 tasks on a 16-core server, 34 tasks on a 36-core server, etc. Multithreading in Java always performed worse than multiple JVM processes.
It may not make any difference on simple tasks, but on heavy calculations the JVM seems to perform badly with threads.