Thread execution ordering by setting priority - java

I have set the threads' priorities in this order: A, then B, then C. But when I run the program below, sometimes B runs before A.
I don't understand this, since I set B's priority lower than A's priority.
public class AThread implements Runnable {
    public void run() {
        System.out.println("In thread A");
    }
}

public class BThread implements Runnable {
    public void run() {
        System.out.println("In thread B");
    }
}

public class CThread implements Runnable {
    public void run() {
        System.out.println("In thread C");
    }
}
public class ThreadPriorityDemo {
    public static void main(String args[]) {
        AThread A = new AThread();
        Thread tA = new Thread(A);
        BThread B = new BThread();
        Thread tB = new Thread(B);
        CThread C = new CThread();
        Thread tC = new Thread(C);

        tA.setPriority(Thread.MAX_PRIORITY);
        tC.setPriority(Thread.MIN_PRIORITY);
        tB.setPriority(tA.getPriority() - 1);

        System.out.println("A started");
        tA.start();
        System.out.println("B started");
        tB.start();
        System.out.println("C started");
        tC.start();
    }
}

Thread priorities are probably not what you think they are.
A thread's priority is a recommendation to the operating system to prefer one thread over another in any scheduling or CPU allocation decision point where these two threads are involved. But how this is implemented depends on the operating system and the JVM implementation.
JavaMex has a nice discussion of thread priorities. The gist is that:
Priorities may have no effect at all.
Priorities are only one part of a calculation that dictates scheduling.
Distinct Java priority values may be translated into the same value in practice (so for example, priority 10 and 9 may be the same).
Each OS makes its own decisions about what to do with priorities, since Java uses the underlying OS's threading mechanism.
Be sure to read the next article after that, which shows you how it's done on Linux and Windows.
I think your problem may stem from the third point above (if you're running on Windows), but it may be any of the other reasons.

If you need to execute threads in an exact order, you can't do it with thread priorities. Use one of the synchronization primitives instead (e.g. locks, semaphores).
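For example, here is a minimal sketch (my own illustration, assuming the AThread/BThread/CThread classes from the question) that guarantees the A-then-B-then-C ordering with Thread.join(); a CountDownLatch or a Semaphore handed from one thread to the next would achieve the same thing:

public class OrderedStartDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread tA = new Thread(new AThread());
        Thread tB = new Thread(new BThread());
        Thread tC = new Thread(new CThread());

        tA.start();
        tA.join();   // block until A has finished, then start B
        tB.start();
        tB.join();   // block until B has finished, then start C
        tC.start();
    }
}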

I think the proper answer is: You cannot reliably order thread start by setting thread priority.
I think your confusion stems from the fact that the documentation states
Threads with higher priority are executed in preference to threads
with lower priority.
While this is true, it only refers to threads that are doing computation (or, on some operating systems, waiting for a shared resource). In such cases, threads with higher priority will receive more CPU time, i.e. will be executed in preference to threads that compete for the same resource.
Even if thread priority did influence the order in which your threads are started (it most likely doesn't), all your threads could run truly in parallel on a modern multi-core CPU, since they don't compete with each other for CPU time.
In fact, the order of execution is determined by something else entirely: the threads don't do any relevant computation, and they spend most of their (really short) execution time waiting for a shared resource, namely System.out.
One has to look at the code behind System.out, which is a PrintStream, to find that it actually performs atomic, synchronized writes:
public void write(byte buf[], int off, int len) {
    try {
        synchronized (this) {
            ensureOpen();
            out.write(buf, off, len);
            if (autoFlush)
                out.flush();
        }
    }
    catch (InterruptedIOException x) {
        Thread.currentThread().interrupt();
    }
    catch (IOException x) {
        trouble = true;
    }
}
So what happens is that the first thread to reach println() blocks all other threads until it is done writing its output. The first thread wins regardless of priority, because you cannot interrupt a synchronized block (that would defeat the purpose of the monitor).
Which thread gets the lock first depends on more factors than just thread priority and maybe even not on the (Java) thread priority at all.


Trying to solve a race condition without using any library in Java

I searched "java race condition" and saw a lot of articles, but none of them is what I am looking for.
I am trying to solve the race condition without using lock, synchronization, Thread.sleep something else. My code is here:
public class Test {
    static public int amount = 0;
    static public boolean x = false;

    public static void main(String[] args) {
        Thread a = new myThread1();
        Thread b = new myThread2();
        b.start();
        a.start();
    }
}

class myThread1 extends Thread {
    public void run() {
        for (int i = 0; i < 1000000; i++) {
            if (i % 100000 == 0) {
                System.out.println(i);
            }
        }
        while (true) {
            Test.x = true;
        }
    }
}

class myThread2 extends Thread {
    public void run() {
        System.out.println("Thread 2: waiting...");
        while (!Test.x) {
        }
        System.out.println("Thread 2: finish waiting!");
    }
}
I expect the output should be:
Thread 2: waiting...
0
100000
200000
300000
400000
500000
600000
700000
800000
900000
Thread 2: finish waiting!
(Terminated normally)
But it actually is:
Thread 2: waiting...
0
100000
200000
300000
400000
500000
600000
700000
800000
900000
(And the program won't terminate)
After I added a statement to myThread2, changed
while (!Test.x) {
}
to
while (!Test.x) {
    System.out.println(".");
}
the program terminates normally and the output is what I expected (except for those ".").
I know that when two threads execute concurrently, the CPU may arbitrarily switch to the other one before fetching the next machine instruction.
I thought it would be fine if one thread reads a variable while another thread writes to it. And I really don't get why the program does not terminate normally. I also tried adding a Thread.sleep statement inside the while loop of myThread1, but the program still does not terminate.
This question has puzzled me for a few weeks; I hope someone can help me.
Try declaring x as volatile:
static public volatile boolean x = false;
Test.x isn't volatile and thus might not be synchronized between threads.
How the print statement in the second loop affects the overall behavior can't be predicted, but apparently in this case it causes x to be synchronized.
In general: if you omit all thread-related features of Java, you can't produce any code that has well-defined behavior. The minimum would be to mark variables that are used by different threads as volatile and to synchronize pieces of code that must not run concurrently.
The shared variable x is being read and written from multiple threads without any synchronisation and hence only bad things can happen.
When you have the following,
while (!Test.x) {
}
The compiler might optimise this into an infinite loop, since x (a non-volatile variable) is not changed inside the while loop, and that would prevent the program from terminating.
Adding a print statement adds visibility, because println() uses a synchronised block to protect System.out; crossing that memory barrier causes the thread to get a fresh copy of Test.x.
You CAN NOT synchronise shared mutable state without using synchronisation constructs.
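As one illustration of such a construct (my sketch, not part of the original answer; a volatile boolean as suggested above works just as well), the flag can be an AtomicBoolean:

import java.util.concurrent.atomic.AtomicBoolean;

public class FlagDemo {
    // AtomicBoolean reads and writes have volatile semantics,
    // so the flag is visible across threads without explicit locking
    static final AtomicBoolean x = new AtomicBoolean(false);

    public static void main(String[] args) {
        Thread reader = new Thread(new Runnable() {
            public void run() {
                System.out.println("Thread 2: waiting...");
                while (!x.get()) {
                    // busy-wait; get() always observes the published value
                }
                System.out.println("Thread 2: finish waiting!");
            }
        });
        Thread writer = new Thread(new Runnable() {
            public void run() {
                x.set(true);   // becomes visible to the reader's next get()
            }
        });
        reader.start();
        writer.start();
    }
}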
Much better would be a lock object on which Thread2 waits and Thread1 sends a notification. You are currently busy-waiting in Thread2 and consuming a lot of CPU resources.
Dummy code example:
public class Main {
    public static void main(String[] args) {
        Object lock = new Object();
        Thread2 t2 = new Thread2(lock);
        t2.start();
        Thread1 t1 = new Thread1(lock);
        t1.start();
    }
}

class Thread1 extends Thread {
    private final Object lock;

    public Thread1(Object lock) {
        this.lock = lock;
    }

    public void run() {
        // ... do the counting work here ...
        synchronized (lock) {
            lock.notifyAll();   // wake up every thread waiting on the lock
        }
    }
} // end class Thread1

class Thread2 extends Thread {
    private final Object lock;

    public Thread2(Object lock) {
        this.lock = lock;
    }

    public void run() {
        System.out.println("Thread 2: waiting...");
        synchronized (lock) {
            try {
                lock.wait();    // releases the lock and sleeps until notified
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        System.out.println("Thread 2: finish waiting!");
    }
}
This construct does not consume CPU cycles while "Thread2" is idle. You can create any number of "Thread2" instances and have them wait until "Thread1" is finished. You should just start all "Thread2" instances before "Thread1"; otherwise "Thread1" may finish (and notify) before the "Thread2" instances have started waiting.
What you are really asking is, "Why does my program work as expected when I add a call to println()?"
Actions performed by one thread aren't generally required to be visible to other threads. The JVM is free to treat each thread as if it's operating in its own, private universe, which is often faster than trying to keep all other threads updated with events in that private universe.
If you have a need for some threads to stay up-to-date with some actions of another thread, you must "synchronize-with" those threads. If you don't, there's no guarantee threads will ever observe the actions of another.
Solving a race condition without a memory barrier is a nonsensical question. There's no answer, and no reason to look for one. Declare x to be a volatile field!
When you call System.out.println(), you are invoking a synchronized method, which, like volatile, acts as a memory barrier to synchronize with other threads. It appears to be sufficient in this case, but in general, even this is not enough to guarantee your program will work as expected. To guarantee the desired behavior, the first thread should acquire and release the same lock, System.out, after setting x to true.
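A minimal sketch of that last suggestion (my illustration, not from the original answer): the writer thread publishes x and then acquires and releases the same lock that the reader's println() uses.

// in myThread1, after the counting loop:
Test.x = true;
synchronized (System.out) {
    // intentionally empty: releasing this lock, which the reader also
    // acquires via its println("."), creates the happens-before edge
    // that makes the write to x visible to the reader
}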
Update:
Eric asks, "I am curious how volatile work, what has it done behind. I thought that everything can be created by addition, subtraction, compare, jumping, and assignment."
Volatile writes work by ensuring that values are written to a location that is accessible to all reading threads, like main memory, instead of something like a processor register or a data cache line.
Volatile reads work by ensuring that values are read from that shared location, instead of, for example, using a value cached in a register.
When Java byte codes are executed, they are translated to native instructions specific to the executing processor. The instructions necessary to make volatile work vary, but the specification of the Java platform requires that, whatever the implementation, certain guarantees about visibility are met.

What is the best way to simulate java.lang.Thread?

I'm developing a transformer for Java 6 1) that performs a kind of partial evaluation, but let's consider, for simplicity, abstract-syntax-tree interpretation of a Java program.
How do I simulate Thread behavior in an interpreted program?
At the moment I have in mind the following:
AstInterpreter should implement java.lang.Runnable. It should also rewrite every new-instance expression of java.lang.Thread (or its sub-classes), replacing the Thread's target (a java.lang.Runnable) with a new AstInterpreter instance:
EDIT: more complex examples provided.
EDIT 2: remark 1.
Target program:
class PrintDemo {
    public void printCount() {
        try {
            for (int i = 5; i > 0; i--) {
                System.out.println("Counter --- " + i);
            }
        } catch (Exception e) {
            System.out.println("Thread interrupted.");
        }
    }
}

class ThreadDemo extends Thread {
    private Thread t;
    private String threadName;
    PrintDemo PD;

    ThreadDemo(String name, PrintDemo pd) {
        threadName = name;
        PD = pd;
    }

    public void run() {
        synchronized (PD) {
            PD.printCount();
        }
        System.out.println("Thread " + threadName + " exiting.");
    }

    public void start() {
        System.out.println("Starting " + threadName);
        if (t == null) {
            t = new Thread(this, threadName);
            t.start();
        }
    }
}

public class TestThread {
    public static void main(String args[]) {
        PrintDemo PD = new PrintDemo();
        ThreadDemo T1 = new ThreadDemo("Thread - 1 ", PD);
        ThreadDemo T2 = new ThreadDemo("Thread - 2 ", PD);
        T1.start();
        T2.start();
        // wait for threads to end
        try {
            T1.join();
            T2.join();
        } catch (Exception e) {
            System.out.println("Interrupted");
        }
    }
}
program 1 (ThreadTest - bytecode interpreted):
new Thread( new Runnable() {
    public void run() {
        ThreadTest.main(new String[0]);
    }
});
program 2 (ThreadTest - AST interpreted):
final com.sun.source.tree.Tree tree = parse("ThreadTest.java");

new Thread( new AstInterpreter() {
    public void run() {
        interpret( tree );
    }

    public void interpret(com.sun.source.tree.Tree javaExpression) {
        //...
    }
});
Does the resulting program 2 simulate the Thread's behavior of the initial program 1 correctly?
1) Currently, source=8 / target=8 scheme is accepted.
I see two options:
Option 1: JVM threads. Every time the interpreted program calls Thread.start you also call Thread.start and start another thread with another interpreter. This is simple, saves you from having to implement locks and other things, but you get less control.
Option 2: simulated threads. Similar to how multitasking is implemented on uniprocessors - using time slicing. You have to implement locks and sleeps in the interpreter, and track the simulated threads to know which threads are ready to run, which have finished, which are blocked, etc.
You can execute instructions of one thread until it blocks or some time elapses or some instruction count is reached, and then find another thread which may run now and switch to running that thread. In the context of operating systems this is called process scheduling - you may want to study this topic for inspiration.
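A minimal sketch of option 1 (my illustration; the method name interpretThreadStart is hypothetical, and it assumes an AstInterpreter with an interpret(Tree) method like the one sketched in the question): whenever the interpreter evaluates a call to Thread.start(), it spawns a real JVM thread that runs a fresh interpreter over the target's run() body.

// Called by the interpreter when it evaluates Thread.start() on an
// interpreted Thread whose target Runnable has the body `runBody`:
void interpretThreadStart(final com.sun.source.tree.Tree runBody) {
    Thread jvmThread = new Thread(new Runnable() {
        public void run() {
            // each simulated thread gets its own interpreter instance,
            // so interpreter-local state is not shared between threads
            new AstInterpreter().interpret(runBody);
        }
    });
    jvmThread.start();   // the host JVM schedules it like any other thread
}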
You can't do partial evaluation sensibly using a classic interpreter that computes with actual values. You need symbolic values.
For partial evaluation, what you want is to compute the symbolic program state at each program point, and then simplify the program point based on the state known at that program point. You start your partial evaluation process by writing down what you know about the state when the program starts.
If you decorated each program point with its full symbolic state and kept them all around at once, you'd run out of memory fast. So a more practical approach is to enumerate all control flow paths through a method using a depth-first search along the control flow paths, computing symbolic state as you go. When this search backtracks, it throws away the symbolic state for the last node on the current path being explored. Now your saved state is linear in the size of the depth of the flow graph, which is often pretty shallow in a method. (When a method calls another, just extend the control flow path to include the call).
To handle runnables, you have to model the interleavings of the computations in the separate runnables. Interleaving the (enormous) state of two threads gets huge fast. The one thing that might save you here is that most state computed by a thread is completely local to that thread, and thus by definition invisible to the other thread, so you don't have to worry about interleaving that part of the state. So we are left with simulating the interleaving of the state seen by both threads, along with simulating the local state of each thread.
You can model this interleaving with implied but simulated parallel forks in the control flow: at each simulated step, either one thread makes one step of progress, or the other does (generalize to N threads). What you get is a new state for each program point for each fork; the actual state at the program point is the disjunction of the states generated by this process.
You can simplify the actual state disjunction by taking "disjunctions" of the properties of individual variables. For instance, if you know that one thread sets x to a negative number at a particular program point, and another sets it to a positive number at that same point, you can summarize the state of x as "not zero". You'll need a pretty rich type system to model possible value characterizations, or you can live with an impoverished one that conservatively computes the disjunction of a variable's properties as "don't know anything".
This scheme assumes that memory accesses are atomic. They often aren't in real code, so you sort of have to model that, too. It is probably best to have the interpreter simply complain that your program has a race condition if you end up with conflicting read and write operations to a memory location from two threads at the "same" step. A race condition doesn't make your program wrong, but only really clever code uses races in ways that aren't broken.
If this scheme is done right, when one thread A makes a call on a synchronized method of an object already in use by another thread B, you can stop interleaving A with B until B leaves the synchronized method.
If there is never interference between threads A and B over the same abstract object, you can remove the synchronized declaration from the object's declaration. I think this was your original goal.
None of this is easy to organize, and it is likely very expensive in time and space to run. Drawing up an example of all this is pretty laborious, so I won't do it here.
Model checkers (https://en.wikipedia.org/wiki/Model_checking) do a very similar thing in terms of generating the "state space", and they have similar time/space troubles. If you want to know more about how to manage the state to do this, I'd read the literature on model checking.

java Multithreading concurrency of two or more threads?

Well,
I'm trying to understand this case. When I create two threads sharing the same Runnable instance, why do I get output in this order?
Hello from Thread t 0
Hello from Thread u 1
Hello from Thread t 2
Hello from Thread t 4
Hello from Thread u 3 <----| this is not in order
Hello from Thread u 6
Hello from Thread t 5 <----| this one too
Hello from Thread t 8
Hello from Thread t 9
Hello from Thread t 10
I'll show you the code that creates the two threads:
public class MyThreads {
    public static void main(String[] args) {
        HelloRunnerShared r = new HelloRunnerShared();
        Thread t = new Thread(r, "Thread t");
        Thread u = new Thread(r, "Thread u");
        t.start();
        u.start();
    }
}
And, concluding, the final question: when I run these threads, I understand that they don't run in a fixed order, but why does a thread print a number out of order?
This is the code for the runnable:
class HelloRunnerShared implements Runnable {
    int i = 0;

    public void run() {
        String name = Thread.currentThread().getName();
        while (i < 300) {
            System.out.println("Hello from " + name + " " + i++);
        }
    }
}
I thought they would be processed in an interleaved fashion. It's just an assumption!
Thanks!
Why do you think threads should be executing in a particular order? It's a nondeterministic phenomenon -- whichever is scheduled first, runs first.
Use ExecutorService.invokeAll if you want things to run in a fixed order, regardless of how the threads are scheduled.
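For example, a minimal sketch (my own illustration): submitting the tasks to a single-threaded executor via invokeAll runs them one after another in submission order.

import java.util.Arrays;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OrderedTasks {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Callable<Void> first  = () -> { System.out.println("task 1"); return null; };
        Callable<Void> second = () -> { System.out.println("task 2"); return null; };
        // with a single worker thread the tasks cannot overlap,
        // so they execute strictly in the order they are submitted
        pool.invokeAll(Arrays.asList(first, second));
        pool.shutdown();
    }
}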
There are several things going on:
The OS scheduler can switch between threads any time it wants. There's no fairness requirement, the scheduler may favor one thread over another (for instance, it could be trying to minimize the amount of context-switching).
The only locking going on is on the PrintStream used by the println method, which keeps the threads from writing to the console simultaneously. Which thread acquires the lock on the PrintStream when depends on the OS scheduler. The locks used are the intrinsic ones used with the synchronized keyword, they are not fair. The scheduler can give the lock to the same thread that took it last time.
++ is not an atomic operation. The two threads can get in each other's way updating i. You could use AtomicInteger instead of an int.
Access to i is not protected by a lock or any other means of enforcing a happens-before boundary, so updates to it may or may not be visible to other threads. Just because one thread updates i doesn't automatically mean the other thread will see the updated value right away, or at all (how forgiving the JVM is about this depends on the implementation). In the absence of happens-before boundaries the JVM can make optimizations like reordering bytecodes or performing aggressive caching.
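To address the last two points, here is a minimal sketch (my own illustration, not from the answer) of the runnable with the counter replaced by an AtomicInteger, which makes the increment atomic and the updates visible to both threads; the interleaving of the output lines is still up to the scheduler.

import java.util.concurrent.atomic.AtomicInteger;

class HelloRunnerShared implements Runnable {
    private final AtomicInteger i = new AtomicInteger(0);

    public void run() {
        String name = Thread.currentThread().getName();
        int current;
        // getAndIncrement() is an atomic read-modify-write,
        // so the two threads can no longer lose or duplicate updates
        while ((current = i.getAndIncrement()) < 300) {
            System.out.println("Hello from " + name + " " + current);
        }
    }
}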

Java - How is processing time split between threads?

I'm new to threading in Java. I've been reading some online tutorials, but haven't found much material that addresses how the processing time is split between threads.
I created a Runnable class:
public class HelloThread implements Runnable {
    public void run() {
        int i = 0;
        while (true) {
            System.out.println("New Thread" + i);
            i++;
        }
    }
}
and I start the new thread in:
public static void main(String[] args) {
    // Start a new thread
    Thread helloThread = new Thread(new HelloThread());
    helloThread.start();

    int i = 0;
    while (true) {
        System.out.println("hello from main thread" + i);
        i++;
    }
}
The output alternates between the message in the helloThread and in the main thread. How is the processing time split between these two threads? I played around with the setPriority() method, but even when I set the helloThread to a priority of 10, the main thread still gets some processing time.
Thanks!
You cannot predict when a thread will run (as you assume). Even if your thread's priority is 10, there is no guarantee that your thread will run at all. Threading depends on the concrete JVM (there are different JVM implementations) as well as on the underlying operating system and even on your hardware (for example, how many cores you have).
In your example, you have two threads (the main thread and your HelloThread), and both are running. So it is completely OK that both threads run and print messages. It is the purpose of threads to be scheduled in alternation and to run in parallel.
It is because they have the same priority. You can add a statement like
System.out.println( Thread.currentThread().getPriority() );
Both threads should have a priority of 5 in your example.
As Thomas Uhrig explained, even with higher priority, the other thread will still get scheduled and executed. It is nondeterministic.

Do threads work with respect to their respective priority number?

Why would the compiler print 2 a and then 2 b (or vice versa) when priority is given to thread a to start first? Shouldn't thread b wait for thread a to finish before starting? Can someone please explain how this works?
public class Test1 extends Thread {
    static int x = 0;
    String name;

    Test1(String n) {
        name = n;
    }

    public void increment() {
        x = x + 1;
        System.out.println(x + " " + name);
    }

    public void run() {
        this.increment();
    }
}
public class Main {
    public static void main(String args[]) {
        Test1 a = new Test1("a");
        Test1 b = new Test1("b");
        a.setPriority(3);
        b.setPriority(2);
        a.start();
        b.start();
    }
}
Giving priorities is not a job for the compiler. It is the OS scheduler to schedule and give CPU time (called quantum) to threads.
The scheduler also tries to run as many threads at once as possible, based on the number of available CPUs. In today's multicore systems, more often than not more than one core is available.
If you want one thread to wait for another, use a synchronization mechanism.
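For example, a minimal sketch (my own illustration, using Thread.join() as one such mechanism, with the Test1 class from the question) that makes b start only after a has finished:

public class Main {
    public static void main(String[] args) throws InterruptedException {
        Test1 a = new Test1("a");
        Test1 b = new Test1("b");
        a.start();
        a.join();    // main blocks here until a's run() has completed
        b.start();   // b starts only afterwards, so it always prints "2 b"
    }
}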
Shouldn't thread b wait for thread a to finish in order to start?
No. The priority does not block the thread execution. It only tells the JVM to execute the thread "in preference to threads with lower priority". This does not imply a wait.
Since your code is so trivial, there is nothing to wait for. Any of the two threads is run.
Why would the compiler print 2 a and then 2 b?
Luck of the draw. "Priority" means different things on different operating systems, but in general it's always part of how the OS decides which thread gets to run and which one must wait when there aren't enough CPUs available to run them both at the same time. If your computer has two or more idle CPUs when you start that program, then both get to run. Priority doesn't matter in that case, and it's just a race to see which one gets to the println(...) call first.
The a thread in your example has an advantage because the program doesn't call b.start() until after the a.start() method returns, but how big that advantage actually is depends on the details of the OS thread scheduling algorithm. The a thread could get a huge head start (like, it actually finishes before b even starts), or it could be a near-trivial head start.
