How to schedule Java Threads

I have read that Java threads are user-level threads, and that one of the differences between user-level and kernel-level threads is that kernel-level threads are scheduled by the kernel (we cannot change that), whereas for user-level threads we can define our own scheduling algorithm.
So how are threads scheduled in Java? At any given time, when multiple threads are ready to be executed, the runtime system chooses the runnable thread with the highest priority for execution. If two threads of the same priority are waiting for the CPU, the scheduler chooses one of them to run in a round-robin fashion. What if I don't want round-robin? Is there a way I can change it, or am I missing something here?

You cannot change the scheduling algorithm; for the JVM this is out of scope. The JVM uses the threads provided by the underlying OS.
So from the Java perspective you cannot change the scheduling algorithm. The scheduling is done automatically.
The only thing you can do in Java is set the priority of a thread. But how this affects the scheduling algorithm is not defined.
You can try to change the scheduling algorithm of the OS your VM is running on, but that is highly dependent on the OS used.
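To make that concrete, here is a minimal sketch of setting priorities; keep in mind that priorities are only hints and the observable effect is OS-dependent:
public class PriorityDemo {
    public static void main(String[] args) {
        Runnable work = () -> {
            double sink = 0;
            // Busy work; which thread gets more CPU time is entirely up to the OS scheduler.
            for (int i = 0; i < 10_000_000; i++) {
                sink += Math.sqrt(i);
            }
            System.out.println(Thread.currentThread().getName() + " finished (" + sink + ")");
        };

        Thread low = new Thread(work, "low-priority");
        Thread high = new Thread(work, "high-priority");

        // Priorities range from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10),
        // but they are only hints; the effect depends on the OS.
        low.setPriority(Thread.MIN_PRIORITY);
        high.setPriority(Thread.MAX_PRIORITY);

        low.start();
        high.start();
    }
}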

For the last 10 years or so, JVM threads have been system-level threads, not user-level ('green') threads. Even with user-level threads, you don't get to manage them (the JVM does).

The JVM Spec does not state how threads are supposed to be scheduled by an implementation. The Hotspot VM (and most likely almost every other implementation as well) use the OS scheduling mechanisms (as Uwe stated). See also What is the JVM Scheduling algorithm?.
A simple, yet most likely not very efficient way to influence scheduling of your application threads would be to have only n runnable threads for the OS to schedule (n being the number of threads you'd actually like to run in parallel).
That could, for example, be your own ExecutorService implementation that makes all threads you don't want the OS to schedule wait until you think they should run.
Of course this way you don't have any influence on other VM threads, let alone other applications or the OS.
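A minimal sketch of that idea, assuming your work can be expressed as Runnable tasks: a fixed-size pool keeps at most n tasks runnable at once, and the rest sit in the pool's queue instead of competing for the OS scheduler.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LimitedParallelism {
    public static void main(String[] args) {
        int n = Runtime.getRuntime().availableProcessors(); // threads we actually want runnable
        ExecutorService pool = Executors.newFixedThreadPool(n);

        // Submit many more tasks than threads; only n are runnable at any moment,
        // the rest wait in the pool's queue rather than competing for the OS scheduler.
        for (int i = 0; i < 100; i++) {
            int taskId = i;
            pool.submit(() -> System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown(); // previously submitted tasks still run to completion
    }
}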
A lot more involved (and not platform independent) would be to change the OS scheduler itself to something more tailored to the needs of a JVM. A quick Google search turned up this abstract, and I guess there's more work done in that field.

In Effective Java, 2nd Ed., Joshua Bloch devotes an item to the discussion of thread scheduling. He goes on at length about how trying to tweak thread scheduling usually only leads to solutions that are JVM implementation dependent, non-portable, and fragile.
If there's a particular scheduling problem that you have, then for new code you should not deal with low-level thread calls anyway. Java has higher level concurrency libraries that simplify many of these tasks. Rather than defining the solution to your problem with threads, you ought to be thinking of Executors and Tasks. There are also higher level facilities that simplify inter-thread communication, such as CountDownLatch.
Low-level thread calls such as wait, notify, and notifyAll can be difficult to use correctly.
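As one example of those higher-level facilities, a CountDownLatch lets a coordinating thread wait for a group of workers to finish without touching wait/notify directly. A minimal sketch:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                try {
                    System.out.println("working on " + Thread.currentThread().getName());
                } finally {
                    done.countDown(); // signal completion, no wait/notify needed
                }
            });
        }

        done.await(); // block until every worker has counted down
        System.out.println("all workers finished");
        pool.shutdown();
    }
}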

You could write your own thread scheduler, analogous to the Quartz job scheduler for batch jobs.
This would allow you to execute threads at various times of the day during the run of your application.
If all you want is to determine the order of your thread execution, execute the code from one master thread.
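A minimal sketch of that last suggestion: a single-threaded executor runs submitted tasks strictly in submission order, which is often all the ordering control you need.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OrderedExecution {
    public static void main(String[] args) {
        // A single-threaded executor runs tasks one at a time, in submission order.
        ExecutorService master = Executors.newSingleThreadExecutor();

        master.submit(() -> System.out.println("step one"));
        master.submit(() -> System.out.println("step two"));
        master.submit(() -> System.out.println("step three"));

        master.shutdown(); // queued tasks still complete, in order
    }
}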

Related

Can a blocked thread be rescheduled to do other work?

If I have a thread that is blocked waiting for a lock, can the OS reschedule that thread to do other work until the lock becomes available?
From my understanding, it cannot be rescheduled, it just sits idle until it can acquire the lock. But it just seems inefficient. If we have 100 tasks submitted to an ExecutorService, and 10 threads in the pool: if one of the threads holds a lock and the other 9 threads are waiting for that lock, then only the thread holding the lock can make progress. I would have imagined that the blocked threads could be temporarily rescheduled to run some of the other submitted tasks.
You said:
I would have imagined that the blocked threads could be temporarily rescheduled to run some of the other submitted tasks.
Project Loom
You happen to be describing the virtual threads (fibers) being developed as part of Project Loom for future versions of Java.
Currently the OpenJDK implementation of Java uses threads from the host OS as Java threads. So the scheduling of those threads is actually controlled by the OS rather than the JVM. And yes, as you describe, on all common OSes when Java code blocks, the code’s thread sits idle.
Project Loom layers virtual threads on top of the “real” platform/kernel threads. Many virtual threads may be mapped to each real thread. Running millions of threads is possible, on common hardware.
With Loom technology, the JVM detects blocking code. That blocked code’s virtual thread is “parked”, set aside, with another virtual thread assigned to that real thread to accomplish some execution time while the parked thread awaits a response. This parking-and-switching is quite rapid with little overhead. Blocking becomes extremely “cheap” under Loom technology.
Blocking is quite common in most run-of-the-mill business-oriented apps. Blocking occurs with file I/O, network I/O, database access, logging, console interactions, GUIs, and so on. Such apps using virtual threads are seeing huge performance gains with the experimental builds of Project Loom. These builds are available now, based on early-access Java 17. The Project Loom team seeks feedback.
Using virtual threads is utterly simple: Switch your choice of executor service.
ExecutorService executorService = Executors.newVirtualThreadExecutor() ;
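Note that this factory method is from the early-access Loom builds; in the Java versions where virtual threads eventually shipped (Java 21+), the equivalent factory is Executors.newVirtualThreadPerTaskExecutor. A minimal sketch:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        // In released Java (21+) the factory is named newVirtualThreadPerTaskExecutor,
        // unlike the early-access Loom builds quoted above.
        try (ExecutorService executorService = Executors.newVirtualThreadPerTaskExecutor()) {
            executorService.submit(() -> System.out.println("running on " + Thread.currentThread()));
        } // close() waits for submitted tasks to complete
    }
}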
Caveat: As commented by Michael, the virtual threads managed by the JVM depend on the platform/kernel threads managed by the host OS. Ultimately, execution is scheduled by the OS even under Loom. Virtual threads are useful for those times when a blocked Java thread would otherwise be sitting idle on a CPU core. If the host computer were overburdened, the Java threads might see little execution time, with or without virtual threads.
Virtual threads are not appropriate for tasks that rarely block, that are truly CPU-bound. For example, encoding video. Such tasks should continue using conventional threads.
For more info, see the enlightening presentations and interviews with Ron Pressler of Oracle, or with other members of the Loom team. Look for the most recent, as Loom has evolved.
I would have imagined that the blocked threads could be temporarily rescheduled to run some of the other submitted tasks.
That's what the other threads are for. If you create X threads and Y are blocked, you have the remaining X-Y threads to do other submitted tasks. Presumably, the number X was chosen specifically to get the number of concurrent tasks that the implementation and/or programmer thought was best.
You are asking why the implementation doesn't ignore this decision. The answer is because it makes more sense to choose the number of threads reasonably than have the implementation ignore that choice.
You are partially right.
In the executor service scenario you described, all nine of those threads will be blocked and only one thread will make progress. That much is true.
The part where you are not quite right is in expecting a combined behaviour from the OS and Java. The concept of a thread exists at both the OS level and the Java level, but the two are different things: there are Java-Threads and there are OS-Threads, and Java-Threads are implemented through OS-Threads.
Imagine it this way: the JVM has (say) 10 Java-Threads in it, some running, some not. Java borrows OS-Threads to implement the running Java-Threads. Now when a Java-Thread gets blocked (for whatever reason), what we know for sure is that the Java-Thread is blocked; we cannot easily observe what happened to the underlying OS-Thread.
The OS could reclaim the OS-Thread and use it for something else, or it could stay blocked; it depends. But even if the OS-Thread is reused, the Java-Thread remains blocked. In your thread pool scenario, the nine Java-Threads will still be blocked, and only one Java-Thread will be working.
If I have a thread that is blocked waiting for a lock, can the OS reschedule that thread to do other work until the lock becomes available? From my understanding, it cannot be rescheduled, it just sits idle until it can acquire the lock. But it just seems inefficient.
I think you are thinking about this entirely wrong. Just because 10 of your 20 threads are "idle" doesn't mean that the operating system (or the JVM) is somehow consuming resources trying to manage these idle threads. Although in general we work on our applications to make sure that our threads are as unblocked as possible so we can achieve the highest throughput, there are tons of times that we write threads where we expect them to be idle most of the time.
If we have 100 tasks submitted to an ExecutorService, and 10 threads in the pool: if one of the threads holds a lock and the other 9 threads are waiting for that lock, then only the thread holding the lock can make progress. I would have imagined that the blocked threads could be temporarily rescheduled to run some of the other submitted tasks.
It is not the threads that are rescheduled, it is the CPU resources of the system. If 9 of your 10 thread-pool threads are blocked, then other threads in your application (the garbage collector, for example) or other processes can be given more CPU resources on your server. This switching between jobs is what modern operating systems are really good at, and it happens many, many times a second. This is all quite normal.
Now, if your question is really "how do I improve the throughput of my application", then you are asking the right question. First off, you should make sure that your locks are as fine-grained as possible so that the thread holding a lock does so for a minimal amount of time. If blocking happens too often, you should consider increasing the number of threads in the pool so that there is a higher likelihood that some of the jobs will run concurrently. Optimizing the number of threads in a thread-pool is very application-specific. See my post here for some more details: Concept behind putting wait(),notify() methods in Object class
Another thing you might consider is breaking your jobs up into pieces to separate the parts that can run concurrently from the ones that need to be synchronized. You could have a pool of 10 threads doing the concurrent work and a single thread doing the operations that require the locks. This is why ExecutorCompletionService was written: so that something downstream can take the results of a thread-pool and act on them as they complete. This will make your program more complicated, and you'll need to worry about your queues if you are dealing with a large number of jobs or results, but you can dramatically improve throughput if you do this right.
A good example of such refactoring is a situation where you have a processing job that has to write the results to a database. If at the end of each job, each thread in the pool needs to get a lock on the database connection then there will be a lot of contention for the lock and less concurrency. If, instead, the processing was done in a thread-pool and there was a single database update thread, it could turn off auto-commit and make updates from multiple jobs in a row between commits which could dramatically increase throughput. Then again, using multiple database connections managed by a connection pool might be a fine solution.
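A hedged sketch of that split, with a plain string standing in for the real job results: the pool does the concurrent processing, while a single consumer drains the ExecutorCompletionService and performs the serialized work (for example, batched database writes).
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PipelineDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        CompletionService<String> completions = new ExecutorCompletionService<>(pool);

        int jobs = 100;
        for (int i = 0; i < jobs; i++) {
            int id = i;
            // Concurrent, lock-free part of the work runs in the pool.
            completions.submit(() -> "result-" + id);
        }

        // A single consumer handles the serialized part (e.g. batched database writes),
        // so the pool threads never contend for that resource.
        for (int i = 0; i < jobs; i++) {
            String result = completions.take().get(); // take() blocks until some job finishes
            System.out.println("persisting " + result);
        }

        pool.shutdown();
    }
}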

Why doesn't Kotlin/Java have an option for a preemptive scheduler?

A heavy CPU-bound task can block its thread and delay other tasks that are waiting for execution. That's because the JVM can't interrupt a running thread; it needs help from the programmer and manual interruption.
So writing CPU-bound tasks in Java/Kotlin requires manual intervention to make things run smoothly, like using a Sequence in Kotlin in the code below.
fun simple(): Sequence<Int> = sequence { // sequence builder
    for (i in 1..3) {
        Thread.sleep(100) // pretend we are computing it
        yield(i) // yield next value
    }
}

fun main() {
    simple().forEach { value -> println(value) }
}
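For comparison, the plain-Java form of that manual cooperation is polling the interrupt flag inside a CPU-bound loop; a minimal, illustrative sketch:
public class CooperativeCancellation {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 1_000_000_000; i++) {
                // Nothing preempts this loop at the task level on our behalf;
                // we have to check for interruption ourselves.
                if (Thread.currentThread().isInterrupted()) {
                    System.out.println("stopping early at i=" + i);
                    return;
                }
                Math.sqrt(i); // pretend we are computing something
            }
        });
        worker.start();

        Thread.sleep(100);
        worker.interrupt(); // only a request, honored because the loop cooperates
        worker.join();
    }
}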
As far as I understand, the reason is that having a preemptive scheduler with the ability to interrupt running threads has a performance overhead.
But wouldn't it be better to have a switch, so you can choose? You could run the JVM with the faster non-preemptive scheduler, or with a slower preemptive one (interrupting and switching the thread after N instructions) that runs things smoothly and doesn't require manual labor.
I wonder why Java/Kotlin doesn't have such a JVM switch that would let you choose which mode you would like.
When you program using Kotlin coroutines or Java virtual threads (after Loom), you get preemptive scheduling from the OS.
Following usual practices, tasks that are not blocked (i.e., they need CPU) are multiplexed over real OS threads in the Kotlin default dispatcher or Java ForkJoinPool. Those OS threads are scheduled preemptively by the OS.
Unlike old-style multithreading, however, tasks are not assigned to a thread when they are blocked waiting for I/O. This makes no difference in terms of preemption, since a task that is waiting for I/O couldn't possibly preempt another running task anyway.
What you don't get when programming with coroutines is preemptive scheduling over a large number of tasks simultaneously. If you have many tasks that require the CPU, then the first N will be assigned to a real thread and the OS will time slice them. The remaining ones will wait in the queue until those are done.
But in real life, when you have 10000 tasks that need to be simultaneously interactive, they are I/O bound tasks. On average, there aren't many that require the CPU at any one time, so the number of real threads you get from the default dispatcher or ForkJoinPool is plenty. In normal operation, the queue of tasks waiting for threads is almost always empty.
If you really had a situation where 10000 CPU-bound tasks needed to be simultaneously interactive, well, then you would be sad anyway, because time slicing would not provide a very smooth experience.
This question is based on a false premise: In the JVM, a preemptive scheduler is your only choice. No modern JVM uses co-operative multitasking.
No modern JVM implements user space threads or a scheduler of its own. JVMs use native operating system threads instead. Native threads are scheduled by the operating system, and operating system schedulers are preemptive.
The fact that JVM threads map 1-to-1 to native operating system threads is a problem for applications that need a high level of concurrency. Threads are relatively scarce and expensive. To address this, Project Loom is investigating adding "virtual threads" that may allow using native threads more sparingly, especially for I/O bound tasks.
Project Loom is under active development and there is no set schedule of when it will become a part of standard Java. Regarding how Project Loom schedules "virtual threads", the latest (May 2020) update from Project Loom claims "virtual threads are preemptive, not cooperative" but then goes on to say "none of the schedulers in the JDK currently employs time-slice-based preemption of virtual threads". It sounds like in its current state the "virtual thread" scheduler in Project Loom is somewhere between fully co-operative and fully pre-emptive. It will be interesting to see how the project develops and what we will get when it is integrated into main stream Java.
In a July 28 Q&A, the Loom project lead Ron Pressler mentioned that you will be able to plug in a scheduler of your own for virtual threads, but did not go into details of how much control you get over the scheduling algorithm.

Manipulate Thread Implementation in JVM

Recently, I've been working on the deployment of concurrent objects onto multicore. In a sample, I use the BlockingQueue.take() method, whose specification mentions that it is blocking. This means the method does not release the enclosing thread's resources so that they could be re-used for other concurrent tasks. Such re-use would be valuable, since the total number of live threads in a JVM instance is limited, and if the application needs thousands of live threads, it is vital to be able to re-use suspended threads. On the other hand, the JVM uses a 1:1 mapping from application-level threads to OS-level threads in Java; i.e. each Java Thread instance becomes an underlying OS-level thread.
The current solution is based on java.util.concurrent in Java 1.5+. Still, we need worker threads that scale to a large number. Now, I am interested in finding the following answers:
Is there any way to replace the implementation of java.lang.Thread in JVM such that I can plug my own Thread implementation?
Is this only possible through tweaking C++ sections of the thread implementation in JVM and recompiling it?
Is there any library to provide a way to replace the classical thread in Java?
Along the same lines, is there a library or a way to control how several Java threads can be mapped to only one OS-level thread?
I also found this discussion of different JVM implementations, and I am not sure whether they could help.
Thanks for your comments and ideas in advance.
If you are creating thousands of threads, you're doing it wrong.
Instead, consider using the Executor framework. (Start with the Executors and ThreadPoolExecutor classes.) They allow you to queue thousands of tasks while having a sane number of threads handling them.
I guess this approach is what you meant by "library to replace the classical threads". I highly recommend you look into executors.
One caveat: Executors, by default, use non-daemon threads. Therefore, you must shut down your executor when you're done with it. You can do this at program exit, if there is a normal way to exit your program that doesn't simply involve waiting for all threads to finish. :-)
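A minimal sketch of that pattern, including the shutdown caveat (doWork here is just a placeholder for your real task):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class QueuedTasksDemo {
    public static void main(String[] args) throws InterruptedException {
        // A sane, fixed number of worker threads handles a large queue of tasks.
        ExecutorService pool = Executors.newFixedThreadPool(8);

        for (int i = 0; i < 10_000; i++) {
            int taskId = i;
            pool.submit(() -> doWork(taskId));
        }

        // The worker threads are non-daemon by default, so shut the pool down
        // explicitly or the JVM will not exit when main() returns.
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    private static void doWork(int taskId) {
        // placeholder for the real task
        System.out.println("task " + taskId + " handled by " + Thread.currentThread().getName());
    }
}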

Understanding Java's native threads and the JVM

I understand that the JVM is itself an application that turns the bytecode of the Java executable into native machine code, but when using native threads I have some questions that I just cannot seem to answer.
Does every thread create its own instance of the JVM to handle its particular execution?
If not, does the JVM have to have some way to schedule which thread it will handle next? If so, wouldn't this render the multi-threaded nature of Java useless, since only one thread can be run at a time?
Does every thread create its own instance of the JVM to handle its particular execution?
No. They execute in the same JVM so that (for example) they can share objects and values of static fields.
If not, does the JVM have to have some way to schedule which thread it will handle next?
There are two kinds of thread implementation in Java. Native threads are mapped onto a thread abstraction which is implemented by the host OS. The OS takes care of native thread scheduling, and time slicing.
The second kind of thread is "green threads". These are implemented and managed by the JVM itself, with the JVM implementing thread scheduling. Java green thread implementations have not been supported by Sun / Oracle JVMs since Java 1.2. (See Green Threads vs Non Green Threads)
If so, wouldn't this render the multi-threaded nature of Java useless since only one thread can be run at a time?
We are talking about green threads now, and this is of historic interest (only) from the Java perspective.
Green threads have the advantage that scheduling and context switching are faster in the non-I/O case. (Based on measurements made with Java on Linux 2.2; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.8.9238)
With pure green threads, N programming language threads are mapped to a single native thread. In this model you don't get true parallel execution, as you noted.
In a hybrid thread implementation, N programming language threads are mapped onto M native threads (where N > M). In this model, the in-process thread scheduler is responsible for the green thread to native thread scheduling AND you get true parallel execution (if M > 1); see https://stackoverflow.com/a/16965741/139985.
But even with pure green threads, you still get concurrency. Control is switched to another thread when a thread blocks on an I/O operation, while acquiring a lock, and so on. Furthermore, the JVM's runtime could implement periodic thread preemption so that a CPU-intensive thread doesn't monopolize the (single) core to the exclusion of other threads.
Does every thread create its own instance of the JVM to handle its particular execution?
No, your application running in the JVM can have many threads that all exist within that instance of the JVM.
If not, does the JVM have to have some way to schedule which thread it will handle next...
Yes, the JVM has a thread scheduler. There are many different algorithms for thread scheduling, and which one is used is JVM-vendor dependent. (Scheduling in general is an interesting topic.)
...if so, wouldn't this render the multi-threaded nature of Java useless since only one thread can be run at a time?
I'm not sure I understand this part of your question. This is kind of the point of threading. You typically have more threads than CPUs, and you want to run more than one thing at a time. Threading allows you to take full(er) advantage of your CPU by making sure it's busy processing one thread while another is waiting on I/O, or is for some other reason not busy.
A Java thread may be mapped one-to-one to a kernel thread, but this need not be the case. There could be n kernel threads running m Java threads, where m may be much larger than n, and n should be larger than the number of processors. The JVM itself starts the n kernel threads, and each one of them picks a Java thread, runs it for a while, then switches to some other Java thread. The operating system picks kernel threads and assigns them to a CPU. So there may be thread scheduling on several levels.
You may be interested in looking at the Go programming language, where thousands of so-called "goroutines" are run by dozens of threads.
Java threads are mapped to native OS threads. They have little to do with the JVM itself.

Threads concept in Java

I have a question.
There are 10 different threads in the runnable state, each having a priority from 1 to 10. How does the CPU schedule or execute these threads?
Thanks,
Ravi
Since when did this place replace Google?
A Google search for "Java thread scheduling" gives as its first result:
http://lass.cs.umass.edu/~shenoy/courses/fall01/labs/talab2.html
Mainstream Java implementations use "native threads", which means that thread scheduling is done through the operating system. Java thread priorities simply map to OS-specific values. You should read your OS documentation to figure out what those levels mean, though. :-)
The OS has a thread scheduler that will (using some algorithm) decide, based on priority and a few other factors, which thread will run next. If you have a multi-core system, then each CPU core can run a thread of its own.
There's also the fact that a thread gets a slot of time, and then gets switched out for another thread and has to wait its turn again.
But thread scheduling is an Operating System function.
I hope that gives you an answer to your question.
It is worth noting that Windows ignores raised priorities unless you are Administrator, and that on Linux all priorities are ignored unless you are root.
Generally, playing with thread priorities is not very useful.
