Java Threading behavior

I saw the following example on the internet:
public class TwoThreads {
    public static class Thread1 extends Thread {
        public void run() {
            System.out.println("A");
            System.out.println("B");
        }
    }

    public static class Thread2 extends Thread {
        public void run() {
            System.out.println("1");
            System.out.println("2");
        }
    }

    public static void main(String[] args) {
        new Thread1().start();
        new Thread2().start();
    }
}
My question is:
It is guaranteed that "A" will be printed before "B" and that "1" will be printed before "2", but is it possible for "1" to be printed twice in a row by different threads? In this piece of code we have at least 3 threads (1 main and 2 created). Can we imagine the scheduler running the thread started by new Thread1().start(), giving up the CPU immediately after a System.out.println call, then running another thread that prints "1" again?
I am using the NetBeans IDE, and running this program always seems to produce the same first result, so it looks like there is some caching involved. From my understanding you deal with that by declaring variables volatile; can that be done here, and how? If not, what is the solution for the caching?
In today's computers we mostly have 2 processor cores, and yet many multi-threading programs on the net use more than 2 threads! Doesn't this make the program heavy and slow to run?

1) There is no guarantee in what order the threads will proceed.
2) The order is not randomized, either, though. So if you run the program under identical (or very similar) conditions, it will probably yield the same thread interleaving. If you need to have a certain behaviour (including randomized behaviour) you need to synchronize things yourself.
3) A CPU with two cores can only run two threads at the same time, but most threads spend most of their time not actually using the CPU, but waiting for something like I/O or user interaction. So you can gain a lot from having more than two threads (only two can concurrently compute, but hundreds can concurrently wait). Take a look at node.js, a recently popular alternative to multi-threaded programming that achieves great throughput for concurrent requests while having only a single thread of execution.
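If you need a guaranteed order, the simplest tool is Thread.join(). A minimal sketch (class name OrderedThreads and the list-collecting demo are mine, not from the question):

```java
import java.util.ArrayList;
import java.util.List;

public class OrderedThreads {
    public static void main(String[] args) throws InterruptedException {
        List<String> out = new ArrayList<>();
        Thread t1 = new Thread(() -> { out.add("A"); out.add("B"); });
        Thread t2 = new Thread(() -> { out.add("1"); out.add("2"); });
        t1.start();
        t1.join();  // wait for t1 to finish before even starting t2
        t2.start();
        t2.join();
        System.out.println(out); // [A, B, 1, 2] - order is now deterministic
    }
}
```

join() also establishes a happens-before edge, so the main thread safely sees everything the joined thread wrote.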

Answer to your questions 1 and 2:
Though the threads run in parallel, the code inside a thread's run method is always executed sequentially.
Answer to your question 3: you can best tune your application when the number of threads equals the number of processors, but this is not the complete truth, since if a thread is waiting on some blocking operation, that leads to unused capacity: during that time another thread could have run.

No. You are not synchronizing your threads in any way, so the exact execution order will be at the mercy of the scheduler. That said, given how your threads are implemented, I don't see how you could ever have "1" (or "A") printed twice by a single thread.
What caching? And what variables? Your example code has no variables, and therefore nothing that would be appropriate to use with the volatile keyword. It's quite likely that on a given machine, running this program will always produce the same result. As noted in #1, you're at the mercy of the scheduler. If the scheduler always behaves the same way, you'll always get the same result. Caching has nothing to do with it.
That depends upon what the threads are doing. If every thread has enough work to load one CPU core to 100%, then yes, having more threads than you have CPU cores is pointless. However, this is very rarely the case. Many threads will spend most of their time sleeping, or waiting for I/O to complete, or otherwise doing things that are not demanding enough to fully load a CPU core. In such a case there's no problem whatsoever with having more threads than CPU cores. In fact, multithreading predates mainstream multicore CPUs, and even back in the days when none of us had more than one CPU core it was still extremely beneficial to be able to have more than one thread.
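To illustrate the "hundreds can concurrently wait" point: ten threads that each block for 100 ms finish together in roughly 100 ms, not one second, regardless of how many cores you have. A minimal sketch (class name ManySleepers is mine; sleep() stands in for blocking I/O):

```java
public class ManySleepers {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread[] ts = new Thread[10];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                try {
                    Thread.sleep(100); // stand-in for blocking I/O
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // all ten waits overlap, so this is ~100 ms, not ~1000 ms
        System.out.println(elapsedMs < 500);
    }
}
```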

Related

How long does a Java thread own a CPU time slice?

On Windows, each Java thread is mapped to a kernel thread, and the time slice for thread switching is 10-100 ms.
However, from the output of the following simple code, it seems the threads switch the CPU between them on every line of code. Why?
class MyThread extends Thread {
    public MyThread(String name) {
        super(name);
    }

    public void run() {
        for (int i = 0; i < 5; i++) {
            System.out.println(this.getName() + ":" + i);
        }
    }

    public static void main(String[] args) {
        Thread p = new MyThread("t1");
        p.start();
        for (int i = 0; i < 5; i++) {
            System.out.println("main:" + i);
        }
    }
}
First of all, you don't tell us what output you are seeing and what you are expecting to see. So it is not possible to explain the former, or why the latter may be off-base.
However, there are a number of factors that could explain why the output is different to what you expect.
The output is likely to depend on the number of cores (or hyperthreads) on your system. And the number of them that the OS makes available to your Java application.
If there are higher priority threads running in the Java application itself or in other parts of the system, that will perturb time slicing. (A low priority thread's time-slice may be cut short if a high priority thread needs to be scheduled.)
Java thread rescheduling doesn't just occur due to time slicing. It can also happen when a thread does blocking I/O, or when it attempts to acquire a Lock or a mutex, or wait(...) or sleep(...) or something similar.
System.out.println has some hidden synchronization going on behind the scenes so that two threads printing at the same time don't cause corruption of the shared buffer data structures.
In short, whatever output you see from that program, it will not be clear evidence of time slicing behavior. (And my guess is that it is not evidence of any significance at all.)
But the answer to your question:
How long does a Java thread own a CPU time slice?
is that it is not possible to predict, and would be very difficult to measure.
System.out.println is a blocking operation that takes a lot of time compared to everything else you're doing. It's also synchronized so that only one thread can print a line at a time.
This causes the alternating behaviour you're seeing. While one thread is printing, the other one has plenty of time to get ready to print, call println, and wait for the first thread to complete. When the first thread finishes printing, the second one gets to print, and the first one will be back waiting for it to finish before it's done.

Is there a way to know all possible places in code where the system may interchange threads

I'm reading a book called "Java Concurrency In Practice" and in the first chapter the following code is demonstrated as thread unsafe
public class UnsafeSequence {
    private int value;

    /** Returns a unique value. */
    public int getNext() {
        return value++;
    }
}
So if two threads run this code we can get unwanted results because they can interleave at different steps, such as reading, modifying, and writing the value. Is this determined only by the OS, or do threads switch between each other on particular "bytecode instructions", for example? Is there any way to know all possible places where threads might switch from one to another, not just for this code but in general?
As several comments note, no. Two things you can do:
Write your classes in a thread-safe manner, so that thread scheduling isn't an issue.
Use concurrency support to prevent issues.
Keep reading the book.
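As an example of "use concurrency support": one standard fix is to make the read-modify-write a single atomic step with AtomicInteger. A minimal sketch (class name SafeSequence and the demo main are mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SafeSequence {
    private final AtomicInteger value = new AtomicInteger();

    /** Returns a unique value; the read-modify-write is one atomic step. */
    public int getNext() {
        return value.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        SafeSequence seq = new SafeSequence();
        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) seq.getNext();
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start();
        a.join(); b.join();
        // 200,000 increments happened, none lost, so the next value is 200001
        System.out.println(seq.getNext());
    }
}
```

With the unsafe int version, two threads can both read the same value and both write back the same incremented result, losing an update; incrementAndGet cannot be interleaved that way.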
Is there any way to know all possible places where threads might switch from one to another, not just for this code but in general?
This question is a bit vague. Let me split it into two parts:
Two threads can wander over the same piece of code and happily interleave, except:
inside atomic operations (including complex operations inside of thread-safe classes)
inside guarded blocks (e.g. using a synchronized block, lock, semaphore, or some other memory fence)
Threads can switch all the time, which is 100% up to the OS. In theory a thread might even never get a chance to be 'scheduled in' again if the OS decides so. Threads may die spuriously (e.g. killed in Process Explorer). You never know when a thread will be stopped in its tracks (suspended), but you do know that if it happens inside an atomic operation, no other thread will enter that code until the suspended thread resumes and completes the operation.
It happens whenever the system scheduler feels like it. It has nothing to do with the JVM when the JVM simply delegates scheduling to the native OS.

Will a thread in a while loop give CPU time to another thread of same type?

If I have the following dummy code:
public static void main(String[] args) {
    TestRunnable test1 = new TestRunnable();
    TestRunnable test2 = new TestRunnable();
    Thread thread1 = new Thread(test1);
    Thread thread2 = new Thread(test2);
    thread1.start();
    thread2.start();
}

public static class TestRunnable implements Runnable {
    @Override
    public void run() {
        while (true) {
            // bla bla
        }
    }
}
In my current program I have a similar structure, i.e. two threads executing the same run() method. But for some reason only thread 1 is given CPU time, i.e. thread 2 never gets a chance to run. Is this because while thread 1 is in its while loop, thread 2 waits?
I'm not exactly sure: if a thread is in a while loop, is it "blocking" other threads? I would think so, but I'm not 100% sure, so it would be nice if anyone could explain what is actually happening here.
EDIT
Okay, just tried to make a really simple example again and now both threads are getting CPU time. However this is not the case in my original program. Must be some bug somewhere. Looking into that now. Thanks to everyone for clearing it up, at least I got that knowledge.
There is no guarantee by the JVM that it will halt a busy thread to give other threads some CPU.
It's good practice to call Thread.yield();, or if that doesn't work call Thread.sleep(100);, inside your busy loop to let other threads have some CPU.
At some point a modern operating system will preempt the current context and switch to another thread - however, it will also (being a rather dumb thing overall) turn the CPU into a toaster: this small "busy loop" could be computing a checksum, and it would be a shame to make that run slow!
For this reason, it is often advisable to sleep/yield manually - even sleep(0)¹ - which will yield execution of the thread before the OS decides to take control. In practice, for the given empty-loop code, this would result in a change from 99% CPU usage to 0% CPU usage when yielding manually. (Actual figures will vary based on the "work" that is done each loop, etc.)
¹ The minimum time of yielding a thread/context varies based on OS and configuration, which is why it isn't always desirable to yield - but then again Java and "real-time" generally don't go in the same sentence.
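The yield advice above can be sketched as follows: two busy loops that call Thread.yield() on each iteration, with the main thread stopping them via interrupt after a while. Both loops make progress even on a single core (class name YieldingLoops is mine):

```java
public class YieldingLoops {
    public static void main(String[] args) throws InterruptedException {
        long[] counts = new long[2];
        Thread[] ts = new Thread[2];
        for (int i = 0; i < 2; i++) {
            final int id = i;
            ts[i] = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    counts[id]++;       // each thread writes only its own slot
                    Thread.yield();     // hint: let the other thread run
                }
            });
            ts[i].start();
        }
        Thread.sleep(200);              // let both loops spin for a while
        for (Thread t : ts) t.interrupt();
        for (Thread t : ts) t.join();   // join gives safe visibility of counts
        System.out.println(counts[0] > 0 && counts[1] > 0);
    }
}
```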
The OS is responsible for scheduling the threads; this changed a couple of years ago (it used to be handled inside the JVM). It differs between operating systems (Windows/Linux etc.) and depends heavily on the number of CPUs and the code being run. If the code does not include some waiting functionality like Thread.yield() or a synchronized block with a wait() call on the monitor, it's likely that the CPU will keep the thread running for a long time.
Having a machine with multiple CPUs will improve the parallelism of your application, but it's bad practice to write code inside a thread's run() method that doesn't let other threads run in a multi-threaded environment.
The actual thread scheduling should be handled by the OS and not Java. This means that each Thread should be given equal running time (although not in a predictable order). In your example, each thread will spin and do nothing while it is active. You can actually see this happening if inside the while loop you do System.out.println(this.toString()). You should see each thread printing itself out while it can.
Why do you think one thread is dominating?

Deadlock in a single threaded java program [duplicate]

This question already has answers here:
Is it possible for a thread to Deadlock itself?
(20 answers)
Closed 9 years ago.
I read that deadlock can happen in a single-threaded Java program. I am wondering how, since there won't be any competition after all. As far as I can remember, books illustrate examples with more than one thread. Can you please give an example of how it can happen with a single thread?
It's a matter of how exactly you define "deadlock".
For example, this scenario is somewhat realistic: a single-threaded application that uses a size-limited queue that blocks when its limit is reached. As long as the limit is not reached, this will work fine with a single thread. But when the limit is reached, the thread will wait forever for a (non-existing) other thread to take something from the queue so that it can continue.
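The size-limited-queue scenario can be sketched with ArrayBlockingQueue. To keep the demo from actually hanging, offer with a timeout stands in for the second put(); a real put() here would block forever, since the only thread that could ever take() is the one that is blocked (class name SelfBlock is mine):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class SelfBlock {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(1);
        q.put("first"); // fills the queue to its capacity of 1
        // A second put() would block forever: no other thread exists to take().
        // offer() with a timeout demonstrates this without hanging the demo.
        boolean accepted = q.offer("second", 100, TimeUnit.MILLISECONDS);
        System.out.println(accepted); // false: the queue never drained
    }
}
```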
Before multicore processors became cheap, all desktop computers had single-core processors. A single-core processor runs only one thread at a time. So how did multithreading work then? The simplest implementation for Java would be:
thread1's code:
doSomething();
yield(); // may switch to another thread
doSomethingElse();
thread2's code:
doSomething2();
yield(); // may switch to another thread
doSomethingElse2();
This is called cooperative multithreading - everything is done with just one OS thread, and this is how multithreading was done in Windows 3.1.
Today's multithreading, called preemptive multithreading, is just a slight modification of cooperative multithreading, where this yield() is effectively called automatically from time to time.
All that may reduce to the following interleavings:
doSomething();
doSomething2();
doSomethingElse2();
doSomethingElse();
or:
doSomething();
doSomething2();
doSomethingElse();
doSomethingElse2();
And so on... We converted multithreaded code to single-threaded code. So yes, if a deadlock is possible in a multithreaded program, it is possible in the single-threaded version as well. For example:
thread1:
queue.put(x);
yield();
thread2:
x = queue.waitAndGet()
yield();
This interleaving is OK:
queue.put(x);
x = queue.waitAndGet()
But here we get deadlock:
x = queue.waitAndGet()
queue.put(x);
So yes, deadlocks are possible in single-threaded programs.
Well I dare say yes
If you try to acquire the same lock twice in a row within the same thread, whether you deadlock depends on the lock type or locking implementation: specifically, whether it checks if the lock is already held by the same thread. If the implementation does not check this, you have a deadlock.
For synchronized this is checked, but I could not find the guarantee for Semaphore.
If you use some other type of lock, you have to check the spec as how it is guaranteed to behave!
Also, as has already been pointed out, you may block (which is different from deadlock) by reading/writing to a restricted buffer. For instance, you write things into a slotted buffer and only read from it on certain conditions. When you can no longer insert, you wait until a slot becomes free, which won't happen since you yourself do the reading.
So I daresay the answer should be yes, albeit not as easy to run into, and usually easier to detect.
hth
Mario
Even if your java stuff is single-threaded there are still signal handlers, which are executed in a different thread/context than the main thread.
So, a deadlock can indeed happen even on single-threaded solutions, if/when java is running on linux.
QED.
-pbr
No, sounds pretty impossible to me.
But you could theoretically lock a system resource while another app locks another one that you're going to request, while that app requests the one you've already locked. Bang, deadlock.
But the OS should be able to sort this out by detecting it and giving both resources to one app at a time. The chances of this happening are slim to none, but any good OS should be able to handle this one-in-a-billion chance.
If you design carefully and only lock one resource at a time, this cannot happen.
No.
Deadlock is a result of multiple threads (or processes) attempting to acquire locks in such a way that neither can continue.
Consider a quote from the Wikipedia article: (http://en.wikipedia.org/wiki/Deadlock)
"When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone."
It is actually quite easy:
BlockingQueue<Object> bq = new ArrayBlockingQueue<>(1);
bq.take();
will deadlock.

bug thread handling in java

public class Test extends Thread {
    public void hello(String s) {
        System.out.println(s);
    }

    public void run() {
        hello("I'm running...");
    } // end of run()

    public static void main(String[] args) {
        Test t = new Test();
        System.out.println("always first");
        t.start();
        System.out.println("always second but why?");
    }
}
I've run that chunk of code 30 times.
Why is "always second but why?" always second on the console? When t.start() is called, we have 2 threads (2 stacks): the main thread and the second thread. So "I'm running..." should sometimes be the second output on the console. When I delete the "always first" output statement, the two remaining outputs behave non-deterministically (the way it should be).
So what is wrong in my thinking? Why does System.out.println("always first"); influence the concurrency?
By writing something out to the console first, you may well be affecting when JIT compilation occurs and possibly even type initialization occurs. I don't find it completely unbelievable that something like this changes the observed ordering. I wouldn't be surprised to see the program behave slightly differently on different systems and with different JVMs.
The thing is, either of those orderings is completely valid. You shouldn't rely on it being one or the other, and it's not a bug if it always happens the same way. (Or rather, it might be - but it doesn't have to be.)
If you want to ensure a particular order, you need to do that explicitly - if you don't mind what order things happen in, then there's no problem :)
I've run that chunk of code 30 times.
Run it another seven billion times on every OS and hardware combination possible and report your findings then. 30 is a very low value of forever.
Why is "always second but why?" always second on the console?
How many cores do you have? Most thread schedulers will favour the currently running thread over a newly spawned one, especially on single cores, and will favour synchronizing threads between cores at as late a point as possible (the thread object and System.out needs to be passed between the OS threads).
Given threading is not deterministic and most OS don't guarantee fairness nor timeliness, it's not in any way a bug that it behaves in this way.
If you want an explicit ordering between the threads, then you should use either syncronized blocks or the more powerful classes in java.util.concurrent. If you want non-deterministic behaviour, but to allow the other thread to run, you can give a hint to the scheduler using Thread.yield().
public static void main(String[] args) {
    Test t = new Test();
    System.out.println("always first");
    t.start();
    Thread.yield();
    System.out.println("always second but why?");
}
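If you want the ordering guaranteed rather than merely hinted at, the java.util.concurrent classes mentioned above can do it; a minimal sketch with CountDownLatch (class name ExplicitOrder and the message strings are mine):

```java
import java.util.concurrent.CountDownLatch;

public class ExplicitOrder {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch firstDone = new CountDownLatch(1);
        Thread t = new Thread(() -> {
            try {
                firstDone.await(); // wait until main has printed its line
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("always second, by construction");
        });
        t.start();
        System.out.println("always first");
        firstDone.countDown(); // release the other thread
        t.join();
    }
}
```

Unlike Thread.yield(), the latch makes the ordering deterministic on every scheduler.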
Why is "always second but why?" always second on the console?
It's not always second. I managed to produce both orderings in about 5 executions of your code. Both orderings are valid, and thread scheduling depends on your OS, and possibly JVM and hardware as well.
so what is wrong in my thinking, why is System.out.println("always first"); influencing the concurrency?
Your thinking is right, your experiments are misleading you ;)
System.out.println("always first") will always come first, because it executes before the second thread starts, so it never affects the concurrency.
Try placing the "always first" statement after t.start(); then you may get what you're expecting :)
