Multithreading - multiple users - java

When a single user is accessing an application, multiple threads can be used, and they can run in parallel if multiple cores are present. If only one processor exists, then threads will run one after another.
When multiple users are accessing an application, how are the threads handled?

I can talk from a Java perspective, so to your question "when multiple users are accessing an application, how are the threads handled?":
The answer is that it all depends on how you programmed it. If you are using a web/app container, it provides a thread pool mechanism where you can have more than one thread to serve user requests. Each user initiates one request, which in turn is handled by one thread, so if there are 10 simultaneous users there will be 10 threads handling the 10 requests simultaneously. Nowadays we also have non-blocking I/O, where request processing can be offloaded to other threads, allowing fewer than 10 threads to handle 10 users.
Now if you want to know how exactly thread scheduling is done across CPU cores, that again depends on the OS. One thing is common though: the thread is the basic unit of allocation to a CPU. Start with green threads here, and you will understand it better.
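As a minimal sketch of that thread-per-request model (using a plain ExecutorService as a stand-in for a container's pool; the class name and pool size are illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class RequestPoolDemo {
        public static void main(String[] args) {
            // Hypothetical stand-in for a container's request pool.
            ExecutorService pool = Executors.newFixedThreadPool(10);

            // Simulate 10 simultaneous users, each issuing one request.
            for (int user = 1; user <= 10; user++) {
                final int id = user;
                pool.submit(() -> {
                    // Each request is handled by one pool thread.
                    System.out.println("Request from user " + id
                            + " handled by " + Thread.currentThread().getName());
                });
            }

            pool.shutdown();
        }
    }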

The incorrect assumption is:
If only one processor exists, then threads will run one after another.
How threads are executed is up to the runtime environment.
With Java, there are definitions guaranteeing that certain parts of your code will not cause synchronisation with other threads and thus will not cause (potential) rescheduling of threads.
In general, the OS is in charge of scheduling units of execution. In former days such entities were mostly processes; now there may be processes and threads (some OSes do scheduling only at the thread level). For simplicity, let's assume the OS deals with threads only.
The OS may then allow a thread to run until it reaches a point where it can't continue, e.g. waiting for an I/O operation to complete. This is good for that thread, as it can use the CPU to the maximum, but bad for all the other threads that want to get some CPU cycles of their own. (In general there will always be more threads than available CPUs, so the problem is independent of the number of CPUs.) To improve interactive behaviour, an OS might use time slices that allow a thread to run for a certain time. After the time slice expires, the thread is forcibly removed from the CPU and the OS selects a new thread to run (which could even be the one just interrupted).
This allows each thread to make some progress (while adding some scheduling overhead). This way, even on a single-processor system, threads may (seem to) run in parallel.
So for the OS it does not matter at all whether a set of threads results from a single user (or even from a single call to a web application) or has been created by a number of users and web calls.
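A small illustration of that effect (a sketch, not a benchmark; the class name is made up): even if both threads busy-loop and there is only one core, both counters keep growing, because the scheduler rotates the threads on and off the CPU.

    public class TimeSliceDemo {
        // Two counters incremented by two busy-looping threads.
        static volatile long a = 0, b = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { while (true) a++; });
            Thread t2 = new Thread(() -> { while (true) b++; });
            t1.setDaemon(true);
            t2.setDaemon(true);
            t1.start();
            t2.start();

            // Sample the counters a few times; even on a single core both
            // keep growing, because the OS time-slices the CPU between them.
            for (int i = 0; i < 5; i++) {
                Thread.sleep(500);
                System.out.println("a=" + a + " b=" + b);
            }
        }
    }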

You need to understand the thread scheduler.
In fact, on a single core, the CPU divides its time among multiple threads (the process is not exactly sequential). On multiple cores, two (or more) threads can run simultaneously.
Read the thread article on Wikipedia.
I recommend Tanenbaum's OS book.

Tomcat uses Java's multi-threading support to serve HTTP requests.
To serve an HTTP request, Tomcat takes a thread from a thread pool. The pool is maintained for efficiency, as thread creation is expensive.
Refer to the Java documentation on concurrency to read more: https://docs.oracle.com/javase/tutorial/essential/concurrency/
See the Tomcat thread pool configuration for more information: https://tomcat.apache.org/tomcat-8.0-doc/config/executor.html
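Roughly, the kind of pool such a server maintains can be sketched with the JDK's own ThreadPoolExecutor; the numbers below are illustrative, not Tomcat's actual defaults (those live in the executor configuration linked above).

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ServerStylePool {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    10,                   // core threads kept alive when idle
                    200,                  // hard cap on threads under load
                    60, TimeUnit.SECONDS, // surplus threads die after 60 s idle
                    new LinkedBlockingQueue<>(100)); // bounded queue: when it
                                                     // fills, the pool grows
                                                     // toward the maximum

            // Each incoming request is submitted as a task; the pool reuses
            // threads instead of paying thread-creation cost per request.
            pool.submit(() -> System.out.println(
                    "handled by " + Thread.currentThread().getName()));
            pool.shutdown();
        }
    }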

There are two points to answer your question: thread scheduling and thread communication.
Thread scheduling implementation is specific to the operating system. The programmer has no control in this regard, except for setting a thread's priority.
Thread communication is driven by the program/programmer.
Assume that you have multiple processors and multiple threads. Multiple threads can run in parallel on multiple processors, but how the data is shared and accessed is specific to the program.
You can run your threads in parallel, or you can wait for threads to complete execution before proceeding further (join, invokeAll, CountDownLatch, etc.). The programmer has full control over thread life-cycle management.
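For example, a minimal sketch of waiting for worker threads with a CountDownLatch (names and counts are illustrative):

    import java.util.concurrent.CountDownLatch;

    public class LatchDemo {
        public static void main(String[] args) throws InterruptedException {
            int workers = 3;
            CountDownLatch done = new CountDownLatch(workers);

            for (int i = 0; i < workers; i++) {
                final int id = i;
                new Thread(() -> {
                    System.out.println("worker " + id + " finished");
                    done.countDown(); // signal completion
                }).start();
            }

            done.await(); // block until all workers have counted down
            System.out.println("all workers done, proceeding");
        }
    }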

There is no difference whether you have one user or several. Threads work according to the logic of your program. The processor runs every thread for a certain amount of time and then moves on to the next one. The time is very short, so if there are not too many threads (or different processes) running, the user won't notice it. If the processor uses a 20 ms slice and there are 1000 threads, then every thread will have to wait about 20 seconds for its next turn (1000 × 20 ms = 20 000 ms). Fortunately, many current processors, even with just one core, have two logical processing units (hyper-threading) which can be used for parallel threads.

In "classic" implementations, all web requests arriving at the same port are first serviced by the same single thread. However, as soon as a request is received (ServerSocket.accept returns), almost all servers immediately fork or reuse another thread to complete the request. Some specialized single-user servers, and also some advanced next-generation servers like Netty, may not.
The simple (and common) approach is to pick or reuse a thread for the whole duration of a single web request (GET, POST, etc.). After the request has been served, the thread will likely be reused for another request, which may belong to the same or a different user.
However, it is entirely possible to write custom server code that binds a particular thread to the web requests of a logged-in user or IP address and then reuses it. This may be difficult to scale; standard servers like Tomcat typically do not do this.
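A sketch of that classic accept-then-hand-off pattern (error handling is trimmed; the port and pool size are arbitrary):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PooledServer {
        public static void main(String[] args) throws IOException {
            ExecutorService pool = Executors.newFixedThreadPool(10);

            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    // One thread accepts all incoming connections...
                    Socket client = server.accept();
                    // ...then each request is handed to a pool thread.
                    pool.submit(() -> handle(client));
                }
            }
        }

        static void handle(Socket client) {
            try (client) {
                // Read the request and write a response here.
            } catch (IOException ignored) {
            }
        }
    }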

Related

What is the thread breakdown for a Java app running on t2.large EC2

I'm trying to understand just how much performance I could expect from a t2.large, t2.xlarge or t2.2xlarge instance running a Java app.
Suppose I have an app running on a t2.large instance that is receiving x requests per second. How many threads can or will be created to fulfill these requests as fast as possible?
AWS states that a t2.large has 2 vCPUs and that a vCPU is a single thread on an Intel Xeon processor. If that's the case then how many JVM threads can a single Xeon thread handle?
There is no hard limit on how many JVM threads a single core (or virtual core) can execute; multiple threads can be executed on the same core one after another. This does not even necessarily have to slow down your application: e.g. if the current thread is waiting for an I/O operation to return, such as a response from a database query or an HTTP call, the thread can be suspended, as nothing can be executed in it anyway, and another thread can take over the CPU time.
How many threads are reasonable to run on the given 2 vCPUs depends on the type of work your application executes and on the throughput of parallel requests.
The best way to understand the limits is to measure the throughput and response time with different workloads.
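A rough sketch of such a measurement; the workload is a hypothetical stand-in that sleeps to simulate I/O wait, which is why pool sizes well above 2 vCPUs can still raise throughput:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolSizeProbe {
        public static void main(String[] args) throws Exception {
            for (int threads : new int[] {2, 8, 32}) {
                ExecutorService pool = Executors.newFixedThreadPool(threads);
                List<Callable<Void>> tasks = new ArrayList<>();
                for (int i = 0; i < 100; i++) {
                    tasks.add(() -> {
                        Thread.sleep(50); // simulated I/O wait per request
                        return null;
                    });
                }
                long start = System.nanoTime();
                pool.invokeAll(tasks); // same workload at each pool size
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println(threads + " threads: " + ms + " ms");
                pool.shutdown();
            }
        }
    }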

Thread execution on single and multi core

This is what I see in the Oracle documentation and would like to confirm my understanding (source):
A computer system normally has many active processes and threads. This is true even in systems that only have a single execution core, and thus only have one thread actually executing at any given moment. Processing time for a single core is shared among processes and threads through an OS feature called time slicing.
Does it mean that on a single-core machine only one thread can be executed at a given moment?
And does it mean that on a multi-core machine multiple threads can be executed at a given moment?
one thread actually executing at any given moment
Imagine a game where 10 people try to sit on 9 chairs in a circle (you might know the game): there aren't enough chairs for everyone, but the entire group keeps moving; everyone just sits on a chair for some amount of time (a very simplified version of time slicing).
Thus multiple processes can run on the same core.
But even if you have multiple processors, it does not mean that a certain thread will run on the same processor during its entire lifetime. The OS may move a running thread to a different core when it reschedules it; this involves a context switch, and for some applications this migration to a different CPU is unwanted. There are tools to prevent it (even in Java), called thread affinity, where you pin a thread to a particular processor (this is quite handy in some situations).
At the same time, of course, multiple threads can run in parallel on different cores.
Does it mean that on a single-core machine only one thread can be executed at a given moment?
Nope, you can easily have more threads than processors, assuming they're not doing CPU-bound work. For example, if you have two threads mostly waiting on I/O (either from the network or local storage) and another thread consuming the data fetched by the first two, you could certainly run that on a machine with a single core and obtain better performance than with a single thread.
And does it mean that on a multi-core machine multiple threads can be executed at a given moment?
Well, yes, you can execute any number of threads on any number of cores, provided that you have enough memory to allocate a stack for each of them. Obviously, if each thread makes intensive use of the CPU, it will stop being efficient when the number of threads exceeds the number of cores.
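To illustrate the memory point: the JVM will happily start far more threads than cores, as long as each thread's stack fits in memory. A sketch (the thread count and stack-size hint are arbitrary, and your OS may impose its own limits):

    public class ManyThreads {
        public static void main(String[] args) {
            // Each Java thread gets its own stack (often 512 KB-1 MB by
            // default, tunable with -Xss). The fourth constructor argument
            // is a stack-size hint the JVM may treat loosely.
            Runnable idle = () -> {
                try {
                    Thread.sleep(60_000); // mostly idle: no CPU demand
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            };
            for (int i = 0; i < 10_000; i++) {
                new Thread(null, idle, "t" + i, 128 * 1024).start();
            }
            System.out.println("started 10,000 mostly-idle threads");
        }
    }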

Reading from disk and processing in parallel

This is going to be the most basic, maybe even stupid, question here. We talk about using multithreading for better resource utilization; for example, an application reads and processes files from the local file system. Let's say that reading a file from disk takes 5 seconds and processing it takes 2 seconds.
In the above scenario, we say that using two threads, one to read and the other to process, will save time, because even while one thread is processing the first file, the other thread can in parallel start reading the second file.
Question: Is this because of the way CPUs are designed? As in, there is a separate processing unit and a separate read/write unit, so these two threads can work in parallel even on a single-core machine, as they are actually handled by different modules? Or does this need multiple cores?
Sorry for being stupid. :)
On a single processor, multithreading is achieved through time slicing: one thread will do some work, then the processor switches to another thread.
When a thread is waiting on some I/O, such as a file read, it will give up its CPU time slice prematurely, allowing another thread to make use of the CPU.
The result is overall improved throughput compared to a single thread even on a single core.
Key for below:
= Doing work on CPU
- I/O
_ Idle
Single thread:
====--====--====--====--
Two threads:
====--__====--__====--__
____====--__====--__====
So you can see how more gets done in the same time, as the CPU is kept busy where before it would have been left waiting. The storage device is also being used more.
In theory, yes: a single core offers the same concurrency. While one thread waits for a file read (I/O wait), another thread processes a file that was read earlier. The first thread cannot be in the running state until its I/O operation completes, and roughly speaking it consumes no CPU in that state; meanwhile the second thread consumes CPU and completes its task. That said, a multi-core CPU does have better performance.
To start with, there is a difference between concurrency and parallelism. Theoretically, a single-core machine does not support parallelism.
Regarding the question of performance improvement as a result of concurrency (using threads), it is very implementation dependent. Take, for instance, Android or Swing: both of them have a main thread (the UI thread), and doing a large calculation on the main thread will block the UI and make it unresponsive. So from a layman's perspective that would be bad performance.
In your case (I am assuming there is no UI thread), whether you benefit from delegating your processing to another thread depends on a lot of factors, especially on the implementation of your threads; e.g. synchronized threads would not be as good as unsynchronized ones. Your problem statement reminds me of the classic producer-consumer problem, so threads may not really be the best thing for your work, as you need synchronized threads; IMO it's better to do all the reading and processing in a single thread.
Multithreading also has a context-switching cost. It is not as big as a process's context switch, but it's still there. See this link.
[EDIT] You should preferably use a BlockingQueue for such a producer-consumer scenario.
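A minimal sketch of that producer-consumer setup; the file names are placeholders and the sleeps simulate the 5-second read and 2-second processing from the question:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ReadProcessPipeline {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(4);

            Thread reader = new Thread(() -> {
                try {
                    for (int i = 1; i <= 3; i++) {
                        Thread.sleep(5000);    // simulated 5 s disk read
                        queue.put("file" + i); // hand off to the processor
                    }
                    queue.put("EOF");          // poison pill: no more files
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread processor = new Thread(() -> {
                try {
                    String file;
                    while (!(file = queue.take()).equals("EOF")) {
                        Thread.sleep(2000);    // simulated 2 s processing
                        System.out.println("processed " + file);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            reader.start();
            processor.start();
            reader.join();
            processor.join();
        }
    }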

Handling Thread pool isolation?

Goal
I want to understand how to handle two thread pools simultaneously in Java.
Consider a client-server system in which clients send blocking I/O requests to the server (for example, a file server). There is a single ThreadPoolExecutor instance running on the server. Some types of client requests take much longer to process than others; these are called high-I/O-intensity requests, and they hog all the threads and bring down the entire application.
I want to solve this problem with two separate ThreadPoolExecutors.
I create two ThreadPoolExecutor instances, one for high-I/O-intensity requests and another for low-I/O-intensity requests, and through an offline workload procedure I build a lookup table to classify requests. When a request arrives, I first look up its class in the table so that I can hand it over to the corresponding thread pool.
Real Problem.
How do I share processors equally between these two thread pools? Will this be handled by the JVM itself, or do I have to handle it myself at the application level?
Should I make use of a cluster, using another machine that runs an instance of ThreadPoolExecutor to handle the high-I/O-intensity requests?
Kindly give me proper design suggestions.
Generally it is up to the system CPU scheduler how to distribute time between threads. A thread pool has nothing to do with thread scheduling; it manages reuse of threads and synchronization between them.
The only advantage of creating 2 pools instead of 1 is that each pool can use a ThreadFactory different from the standard Executors.defaultThreadFactory(). You could give a different priority to the threads serving your demanding clients; priority is information the scheduler takes into account when distributing CPU time. But then the other clients would suffer even more, or vice versa if you make the demanding ones less important ;)
Maybe you could instead do something like lowering a thread's priority when someone uses too many resources.
Here is some reference on how Microsoft uses priorities to tune threads' CPU consumption.
No, the JVM cannot route work between ThreadPoolExecutors for you. You may implement an external watcher thread that monitors your threads and applies the appropriate policy to them (priority, exception handling and so on).
Take a look at this example:
http://tutorials.jenkov.com/java-multithreaded-servers/thread-pooled-server.html
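For the two-pool design from the question, a sketch of the dispatch step (the request types, pool sizes, and lookup map are hypothetical placeholders):

    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class TwoPoolDispatcher {
        // Built offline from workload analysis, as the question describes.
        static final Map<String, Boolean> HIGH_INTENSITY =
                Map.of("bigFileRead", true, "metadataLookup", false);

        static final ExecutorService highPool = Executors.newFixedThreadPool(4);
        static final ExecutorService lowPool  = Executors.newFixedThreadPool(16);

        static void dispatch(String requestType, Runnable work) {
            // Route the request so long-running I/O cannot starve quick ones.
            boolean high = HIGH_INTENSITY.getOrDefault(requestType, false);
            (high ? highPool : lowPool).submit(work);
        }

        public static void main(String[] args) {
            dispatch("bigFileRead", () -> System.out.println("slow request"));
            dispatch("metadataLookup", () -> System.out.println("fast request"));
            highPool.shutdown();
            lowPool.shutdown();
        }
    }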

How to set an appropriate thread number for my thread pool on the server side?

I just want to ask a rookie question: How to set an appropriate thread number for my thread pool on the server side?
Are there any general rules or formulas I can follow?
What are the issues I have to consider? For example, the number of network requests per second, the number of CPU cores, the CPU and memory usage rate in my application, the hardware I use on my server, etc.
Well, basically the size of the pool should be set to the maximum number of commands that can execute concurrently on your configuration: if you have 4 cores (without Hyper-Threading), you can set it to 4; with Hyper-Threading, you can set it to 8.
There are, however, questions like: what is the expected behaviour of the application if it wants to take a thread from the pool but the pool is empty (say you had 8 threads in the pool, every single one of them working on a video-encoding job for the next 10 minutes, and your manager thread receives a new request)?
You should consider, however, that it is NOT guaranteed that all your threads will be running at every moment, even if your application handles threading perfectly, as other applications (your OS, for example) are running on the computer meanwhile and need CPU as well.
On the other hand, it is also a big question what a thread in your pool actually does. You provided no information about what this thread pool is used for: are the threads used in your own app, or do you want to configure an open-source or commercial app, etc.? Creating and managing threads has serious costs (scheduling, context switching, etc.), which are only worth it if your threads stay alive long enough (i.e. you can give them enough work to do).
For further details, a quite good starting point on this subject could be Google, I guess, with the following keywords: "scheduling, concurrency, threads, java executor service, hyperthreading".
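One common rule of thumb, from Brian Goetz's "Java Concurrency in Practice", sizes the pool as cores × target utilization × (1 + wait time / compute time). A sketch, where the wait/compute ratio is an assumed measurement of your workload:

    public class PoolSizing {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();
            double targetUtilization = 1.0; // fraction of CPU to dedicate
            double waitTime = 50.0;         // assumed ms blocked per task
            double computeTime = 5.0;       // assumed ms of CPU work per task

            // CPU-bound tasks (wait ~ 0) give roughly one thread per core;
            // I/O-heavy tasks justify many more threads.
            int poolSize = (int) (cores * targetUtilization
                    * (1 + waitTime / computeTime));
            System.out.println("suggested pool size: " + poolSize);
        }
    }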
