Java - Synchronized methods cause program to slow down massively

I'm trying to learn about threads and synchronization. I made this test program:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Test {
    static List<Thread> al = new ArrayList<>();

    public static void main(String[] args) throws IOException, InterruptedException {
        long startTime = System.currentTimeMillis();
        al.add(new Thread(() -> fib1(47)));
        al.add(new Thread(() -> fib2(47)));
        for (Thread t : al)
            t.start();
        for (Thread t : al)
            t.join();
        long totalTime = System.currentTimeMillis() - startTime;
        System.out.println(totalTime);
    }

    public static synchronized int fib1(int x) {
        return x <= 2 ? 1 : fib1(x - 2) + fib1(x - 1);
    }

    public static synchronized int fib2(int x) {
        return x <= 2 ? 1 : fib2(x - 2) + fib2(x - 1);
    }
}
This program takes around 273 seconds to finish, but if I remove both of the synchronized it runs in 7 seconds instead. What causes this massive difference?
EDIT:
I'm aware that I'm using a terribly slow algorithm for calculating Fibonacci numbers. I'm also aware that the threads don't share resources, so the methods don't need to be synchronized. However, this is just a test program where I'm trying to figure out how synchronized works, and I chose a slow algorithm on purpose so I could measure the time taken in milliseconds.

Your program does not get stuck - it's just terribly slow.
This is due to two reasons:
1. Algorithm Complexity
As others and you yourself have mentioned, the way you compute the Fibonacci numbers is really slow because it computes the same values over and over again. Using a smaller input will bring the runtime down to a reasonable value. But this is not what your question is about.
2. Synchronized
This slows down your program in 2 ways:
First of all, making the methods synchronized is not necessary, since they do not modify anything outside of themselves. Worse, because the methods are static, they both lock on the same monitor (the Test class), which prevents the two threads from being inside either of them at the same time.
So your code is effectively using only one thread, not two.
Also, synchronized adds significant overhead to the methods, since it requires acquiring a lock when entering the method - or at least checking whether the current thread already holds the lock.
These operations are quite expensive and they have to be done every single time one of the methods is entered. Since - due to the recursion - this happens a lot, it has an extreme impact on the program performance.
Interestingly the performance is much better when you run it with just a single thread - even with the methods being synchronized.
The reason is the runtime optimizations done by the JVM.
If you are using just one thread, the JVM can optimize the synchronized checks away since there cannot be a conflict. This reduces the runtime significantly - but not exactly to the value that it would have without synchronized due to starting with 'cold code' and some remaining runtime checks.
When running with 2 threads on the other hand, the JVM cannot do this optimization, therefore leaving the expensive synchronized operations that cause the code to be so terribly slow.
Btw: fib1 and fib2 are identical; delete one of them.

When you put static synchronized on a method that means that, in order for a thread to execute that method, it first has to acquire the lock for the class (which here is Test). The two static fib methods use the same lock. One thread gets the lock, executes the fib method, and releases the lock, then the other thread gets to execute the method. Which thread gets the lock first is up to the OS.
It was already mentioned that the locks are re-entrant and there's no problem with calling a synchronized method recursively. The thread holds the lock from the time it first calls the fib method; that call doesn't complete until all the recursive calls have completed, so the method runs to completion before the thread releases the lock.
The main thread isn't doing anything but waiting, and only one of the threads calling a fib method can run at a time. It does make sense that removing the synchronized modifier would speed up things, without locking the two threads can run concurrently, possibly using different processors.
The methods do not modify any shared state so there's no reason to synchronize them. Even if they did need to be synchronized there would still be no reason to have two separate fib methods here, because in any case invoking either the fib1 or fib2 method requires acquiring the same lock.
Using synchronized without static means that the object instance, not the class, is used as the lock. The reason all of an object's synchronized methods use the same lock is that the point is to protect shared state: an object might have various methods that modify its internal state, and to protect that state from concurrent modification, no more than one thread should be executing any one of those methods at a time.
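To make the two monitor choices concrete, here is a minimal sketch (not from the original question; the class and field names are illustrative):

class Counter {
    private int count = 0;
    private static int total = 0;

    // Locks on the Counter instance; equivalent to wrapping the
    // body in synchronized (this) { ... }
    public synchronized void increment() {
        count++;
    }

    // Locks on Counter.class, shared by every instance; equivalent
    // to synchronized (Counter.class) { ... }
    public static synchronized void incrementTotal() {
        total++;
    }
}

Two threads can run increment() on different Counter instances concurrently, but no two threads can be inside incrementTotal() at the same time, whichever instances they hold.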

Your program is not deadlocked, and it also isn't appreciably slower because of unnecessary synchronization. Your program appears "stuck" because of the branching factor of your recursive function.
Branching Factor of Recursion
When N >= 4, you recurse twice. In other words, on average, your recursion has a branching factor of two, meaning if you are computing the N-th Fibonacci number recursively, you will call your function about 2^N times. 2^47 is a HUGE number (like, in the hundreds of trillions). As others have suggested, you can cut this number WAY down by saving intermediate results and returning them instead of recomputing them.
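As a sketch of that idea (assumptions: results stored as long, since the 47th Fibonacci number overflows int; the plain HashMap is fine here because each thread would use its own cache, or you could swap in a ConcurrentHashMap for a shared one):

import java.util.HashMap;
import java.util.Map;

public class Fib {
    private static final Map<Integer, Long> cache = new HashMap<>();

    // Each fib(n) is computed once and cached, so the ~2^N call
    // tree collapses to roughly N calls.
    public static long fib(int x) {
        if (x <= 2) return 1;
        Long known = cache.get(x);
        if (known != null) return known;
        long result = fib(x - 2) + fib(x - 1);
        cache.put(x, result);
        return result;
    }
}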
More on synchronization
Acquiring locks is expensive. However, in Java, if a thread has a lock and re-enters the same synchronized block that it already owns the lock for, it doesn't have to reacquire the lock. Since each thread already owns the respective lock for each function they enter, they only have to acquire one lock apiece for the duration of your program. The cost of acquiring one lock is weensy compared to recursing hundreds of trillions of times :)

@MartinS is correct that synchronized is not necessary here because you have no shared state. That is, there is no data that you are trying to prevent from being accessed concurrently by multiple threads.
However, you are slowing your program down by the addition of the synchronized call. My guess is that without synchronized, you should see two cores spinning at 100% for however long it takes to compute this method. When you add synchronized, whichever thread grabs the lock first gets to spin at 100%. The other one sits there waiting for the lock. When the first thread finishes, the second one gets to go.
You can test this by timing your program (start with smaller values to keep it to a reasonable time). The program should take approximately half as long without synchronized as it does with it.

When the fib1 (or fib2) method recurses, it doesn't release the lock. Moreover, it acquires the lock again (though re-acquiring is faster than the initial locking).
Good news is that synchronized methods in Java are reentrant.
You are better off not synchronizing the recursion itself.
Split your recursive methods into two:
one recursive non-synchronized method (it should be private as it is not thread-safe);
one public synchronized method without recursion per se, which calls the second method.
Measure such code and you should get about 14 seconds, because both threads still synchronize on the same lock, Test.class.
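A sketch of that split, applied to the poster's code (the class name is illustrative):

public class Test2 {
    // Public entry point: a thread contends for the Test2.class lock
    // once per top-level call instead of on every recursive step.
    public static synchronized int fib(int x) {
        return fibRecursive(x);
    }

    // The recursion itself is unsynchronized; it is private because
    // it is not thread-safe on its own.
    private static int fibRecursive(int x) {
        return x <= 2 ? 1 : fibRecursive(x - 2) + fibRecursive(x - 1);
    }
}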

The issue you see is because a static synchronized method synchronizes on the Class. So your two Threads spend an extraordinary amount of time fighting over the single lock on Test.class.
For the purposes of this learning exercise, the best way to speed it up would be to create two explicit lock objects. In Test, add
static final Object LOCK1 = new Object();
static final Object LOCK2 = new Object();
and then, in fib1() and fib2(), use a synchronized block on those two objects. e.g.
public static int fib1(int x) {
    synchronized (LOCK1) {
        return x <= 2 ? 1 : fib1(x - 2) + fib1(x - 1);
    }
}

public static int fib2(int x) {
    synchronized (LOCK2) {
        return x <= 2 ? 1 : fib2(x - 2) + fib2(x - 1);
    }
}
Now the first thread only needs to grab LOCK1, with no contention, and the second thread only grabs LOCK2, again with no contention (so long as you only have those two threads). This should run only slightly slower than the completely unsynchronized code.

Related

HashMap synchronized `put` but not `get`

I have the following code snippet and I'm trying to see if it can crash/misbehave at some point. The HashMap is called from multiple threads, with put inside a synchronized block and get not. Is there any issue with this code? If so, what modification do I need to make to see that happen, given that I only use put and get this way, and there are no putAll, clear, or other operations involved?
import java.util.HashMap;
import java.util.Map;

public class Main {
    Map<Integer, String> instanceMap = new HashMap<>();

    public static void main(String[] args) {
        System.out.println("Hello");
        Main main = new Main();
        Thread thread1 = new Thread("Thread 1") {
            public void run() {
                System.out.println("Thread 1 running");
                for (int i = 0; i <= 100; i++) {
                    System.out.println("Thread 1 " + i + "-" + main.getVal(i));
                }
            }
        };
        thread1.start();
        Thread thread2 = new Thread("Thread 2") {
            public void run() {
                System.out.println("Thread 2 running");
                for (int i = 0; i <= 100; i++) {
                    System.out.println("Thread 2 " + i + "-" + main.getVal(i));
                }
            }
        };
        thread2.start();
    }

    private String getVal(int key) {
        check(key);
        return instanceMap.get(key);
    }

    private void check(int key) {
        if (!instanceMap.containsKey(key)) {
            synchronized (instanceMap) {
                if (!instanceMap.containsKey(key)) {
                    // System.out.println(Thread.currentThread().getName());
                    instanceMap.put(key, "" + key);
                }
            }
        }
    }
}
What I have checked out:
Are size(), put(), remove(), get() atomic in Java synchronized HashMap?
Extending HashMap<K,V> and synchronizing only puts
Why does HashMap.get(key) needs to be synchronized when change operations are synchronized?
I somewhat modified your code:
removed System.out.println() from the "hot" loop; it is internally synchronized
increased the number of iterations
changed printing to only print when there's an unexpected value
There's much more we can do and try, but this already fails, so I stopped there. The next step would be to rewrite the whole thing to jcstress.
And voila, as expected, sometimes this happens on my Intel MacBook Pro with Temurin 17:
Exception in thread "Thread 2" java.lang.NullPointerException: Cannot invoke "java.lang.Integer.intValue()" because the return value of "java.util.Map.get(Object)" is null
at com.gitlab.janecekpetr.playground.Playground.getVal(Playground.java:35)
at com.gitlab.janecekpetr.playground.Playground.lambda$0(Playground.java:21)
at java.base/java.lang.Thread.run(Thread.java:833)
Code:
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

public class Playground {
    private record Val(int index, int value) {}

    private static final int MAX = 100_000;
    private final Map<Integer, Integer> instanceMap = new HashMap<>();

    public static void main(String... args) {
        Playground main = new Playground();
        Runnable runnable = () -> {
            System.out.println(Thread.currentThread().getName() + " running");
            Val[] vals = new Val[MAX];
            for (int i = 0; i < MAX; i++) {
                vals[i] = new Val(i, main.getVal(i));
            }
            System.out.println(Stream.of(vals).filter(val -> val.index() != val.value()).toList());
        };
        Thread thread1 = new Thread(runnable, "Thread 1");
        thread1.start();
        Thread thread2 = new Thread(runnable, "Thread 2");
        thread2.start();
    }

    private int getVal(int key) {
        check(key);
        return instanceMap.get(key);
    }

    private void check(int key) {
        if (!instanceMap.containsKey(key)) {
            synchronized (instanceMap) {
                if (!instanceMap.containsKey(key)) {
                    instanceMap.put(key, key);
                }
            }
        }
    }
}
To specifically explain the excellent sleuthing work in the answer by @PetrJaneček:
Every field in Java has an evil coin attached to it. Anytime any thread reads the field, it flips this coin. It is not a fair coin - it is evil. It will flip heads 10,000 times in a row if that's going to ruin your day (for example, you may have code that depends on the coinflips landing a certain way, or it'll fail to work). The coin is evil: you may run into the situation where, just to ruin your day, during all your extensive testing the coin flips heads, and during the first week in production it's all heads flips. And then the big new potential customer demos your app and the coin starts flipping some tails on you.
The coinflip decides which variant of the field is used - because every thread may or may not have a local cache of that field. When you write to a field from any thread? Coin is flipped, on tails, the local cache is updated and nothing more happens. Read from any thread? Coin is flipped. On tails, the local cache is used.
That's not really what happens of course (your JVM does not actually have evil coins nor is it out to get you), but the JMM (Java Memory Model), along with the realities of modern hardware, means that this abstraction works very well: It will reliably lead to the right answer when writing concurrent code, namely, that any field that is touched by more than one thread must have guards around it, or must never change at all during the entire duration of the multi-thread access 'session'.
You can force the JVM to flip the coin the way you want, by establishing so-called Happens-Before relationships. This is explicit terminology used by the JMM. If 2 lines of code have a Happens-Before relationship (one is defined as 'happening before' the other, as per the JMM's list of HB-relationship-establishing actions), then it is not possible (short of a bug in the JVM itself) to observe any side effect of the HA line whilst not also observing all side effects of the HB line. (That is to say: the 'happens before' line happens before the 'happens after' line as far as your code could ever tell, though it's a bit of a Schrödinger's cat situation. If your code doesn't actually look at these fields in a way that would ever let you tell, then the JVM is free to not do it that way. And you can rely on the evil coin being evil: if the JMM grants a 'right', there will be some combination of CPU, OS, JVM release, version, and phase of the moon that combines to use it.)
A small selection of common HB/HA establishing conditions:
The first line inside a synchronized(lock) block is HA relative to the hitting of that block in any other thread.
Exiting a synchronized(lock) block is HB relative to any other thread entering any synchronized(lock) block, assuming the two locks are the same reference.
thread.start() is HB relative to the first line that thread will run.
The 'natural' HB/HA: line X is HB relative to line Y if X and Y are run by the same thread and X is 'before it' in your code. You can't write x = 5; y = x; and have y be set by a version of x that did not witness the x = 5 happening (of course, if another thread is also modifying x, all bets are off unless you have HB/HA with whatever line is doing that).
Writes and reads of a volatile field establish HB/HA, but you usually can't get any guarantees about which direction; a sketch follows this list.
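Here is a minimal sketch combining the volatile rule with the 'natural' program-order rule (the class and field names are illustrative):

class Handoff {
    private int payload;            // plain field
    private volatile boolean ready; // volatile flag

    // Writer thread: the write to payload is HB the volatile write
    // to ready, by the natural (program-order) rule.
    void publish() {
        payload = 42;
        ready = true;
    }

    // Reader thread: if the volatile read observes ready == true,
    // the JMM guarantees it also observes payload == 42.
    void consume() {
        if (ready) {
            System.out.println(payload);
        }
    }
}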
This explains the way your code may fail: The get() call establishes absolutely no HB/HA relationship with the other thread that is calling put(), and therefore the get() call may or may not use locally cached variants of the various fields that HashMap uses internally, depending on the evil coin (which is of course hitting some fields; it'll be private fields in the HashMap implementation someplace, so you don't know which ones, but HashMap obviously has long-lived state, which implies fields are involved).
So why haven't you actually managed to 'see' your code asplode like the JMM says it will? Because the coin is EVIL. You cannot rely on this line of reasoning: "I wrote some code that should fail if the synchronizations I need aren't happening the way I want. I ran it a whole bunch of times, and it never failed, therefore, apparently this code is concurrency-safe and I can use this in my production code". That is simply not ever actually reliable. That's why you need to be thinking: Evil! That coin is out to get me! Because if you don't, you may be tempted to write test code like this.
You should be scared of writing code where more than one thread interacts with the same field. You should be bending over backwards to avoid it. Use message queues. Do your chat between threads by using databases, which have much nicer primitives for this stuff (transactions and isolation levels). Rewrite the code so that it takes a bunch of params up front and then runs without interacting with other threads via fields at all until it is all done, and then returns a result (and then use e.g. the fork/join framework to make it all work). Make your webserver performant and use all the cores simply by relying on the fact that every incoming request will be its own thread, so the only thing that needs to happen for you to use all the cores is for that many folks to hit your server at the same time. If you don't have enough requests, great! Your server isn't busy, so it doesn't matter that you aren't using all the cores.
If you truly decide that interacting with the same field from multiple threads is the right answer, you need to bring NASA-programming-the-Mars-rover levels of care to the lines that interact with those fields, because tests simply cannot be relied upon. It's not as hard as it sounds - especially if you keep the actual interaction with the relevant fields down to a minimum and keep thinking: "Have I established HB/HA?"
In this case, I think Petr figured it out correctly: System.out.println is hella slow and does various synchronizing actions. The JMM is a package deal, and transitive: once HB/HA establishes, everything the HB line changed is observable to the code in the HA line; add in the natural rule, and all code that follows the HA line cannot possibly observe a universe where something any line before the HB line did is not yet visible. In other words, the System.out.println statements HB/HA with each other in some order, but you can't rely on that (System.out is not specced to synchronize. But just about every implementation does. You should not rely on implementation details, and I can trivially write you some Java code that is legal, compiles, runs, and breaks no contracts, because you can replace System.out via System.setOut with a stream that does not synchronize!). The evil coin in this case took the shape of 'accidental' synchronization via intentionally unspecced behaviour of System.out.
The following explanation is more in line with the terminology used in the JMM. It could be useful if you want a more solid understanding of this topic.
2 Actions are conflicting when they access the same address and there is at least 1 write.
2 Actions are concurrent when they are not ordered by a happens-before relation (there is no happens-before edge between them).
2 Actions are in data race when they are conflicting and concurrent.
When there are data races in your program, weird problems can happen like unexpected reordering of instructions, visibility problems, or atomicity problems.
So what makes up the happens-before relation? If a volatile read observes a particular volatile write, then there is a happens-before edge between the write and the read. This means that the read will not only see that write, but everything that happened before that write. There are other sources of happens-before edges, like the release of a monitor and a subsequent acquire of the same monitor. And there is a happens-before edge between A and B when A occurs before B in program order. Note: the happens-before relation is transitive, so if A happens-before B and B happens-before C, then A happens-before C.
In your case, you have get/put operations which are conflicting, since they access the same address(es) and there is at least 1 write.
The put/get actions are concurrent, since there is no happens-before edge between the write and the read: even though the write releases the monitor, the get doesn't acquire it.
Since the put/get operations are concurrent and conflicting, they are in a data race.
The simplest way to fix this problem is to execute the map.get in a synchronized block (using the same monitor). This introduces the desired happens-before edge and makes the actions sequential instead of concurrent; as a consequence, the data race disappears.
A better-performing solution would be to make use of a ConcurrentHashMap. Instead of a single central lock, there are many locks and they can be acquired concurrently to improve scalability and performance. I'm not going to dig into the optimizations of the ConcurrentHashMap because that would create confusion.
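A self-contained sketch of both fixes against the question's Map<Integer, String> version (the class and method names are illustrative):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FixedMain {
    private final Map<Integer, String> instanceMap = new HashMap<>();

    // Fix 1: read under the same monitor that guards the writes,
    // which creates the missing happens-before edge.
    private String getVal(int key) {
        synchronized (instanceMap) {
            if (!instanceMap.containsKey(key)) {
                instanceMap.put(key, "" + key);
            }
            return instanceMap.get(key);
        }
    }

    // Fix 2 (better-scaling): let ConcurrentHashMap do the synchronizing;
    // computeIfAbsent is atomic per key, so no check-then-act is needed.
    private final Map<Integer, String> concurrentMap = new ConcurrentHashMap<>();

    private String getValConcurrent(int key) {
        return concurrentMap.computeIfAbsent(key, k -> "" + k);
    }
}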
[Edit]
Apart from a data-race, your code also suffers from race conditions.

how synchronized keyword works internally

I read the below program and answer in a blog.
int x = 0;
boolean bExit = false;
Thread 1 (not synchronized)
x = 1;
bExit = true;
Thread 2 (not synchronized)
if (bExit == true)
System.out.println("x=" + x);
Is it possible for Thread 2 to print "x=0"?
Ans: Yes (reason: every thread has their own copy of variables).
How do you fix it?
Ans: By making both threads synchronize on a common mutex, or by making both variables volatile.
My doubt is: if we make the 2 variables volatile, then the 2 threads will share the variables through main memory. This makes sense, but in the case of synchronization, how is it resolved, given that both threads have their own copies of the variables?
Please help me.
This is actually more complicated than it seems. There are several arcane things at work.
Caching
Saying "Every thread has their own copy of variables" is not exactly correct. Every thread may have their own copy of variables, and they may or may not flush these variables into the shared memory and/or read them from there, so the whole thing is non-deterministic. Moreover, the very term flushing is really implementation-dependent. There are strict terms such as memory consistency, happens-before order, and synchronization order.
Reordering
This one is even more arcane. This
x = 1;
bExit = true;
does not even guarantee that Thread 1 will first write 1 to x and then true to bExit. In fact, it does not even guarantee that any of these will happen at all. The compiler may optimize away some values if they are not used later. The compiler and CPU are also allowed to reorder instructions any way they want, provided that the outcome is indistinguishable from what would happen if everything was really in program order. That is, indistinguishable for the current thread! Nobody cares about other threads until...
Synchronization comes in
Synchronization does not only mean exclusive access to resources. It is also not just about preventing threads from interfering with each other. It's also about memory barriers. It can be roughly described as each synchronization block having invisible instructions at the entry and exit, the first one saying "read everything from the shared memory to be as up-to-date as possible" and the last one saying "now flush whatever you've been doing there to the shared memory". I say "roughly" because, again, the whole thing is an implementation detail. Memory barriers also restrict reordering: actions may still be reordered, but the results that appear in the shared memory after exiting the synchronized block must be identical to what would happen if everything was indeed in program order.
All that works, of course, only if both blocks use the same locking object.
The whole thing is described in detail in Chapter 17 of the JLS. In particular, what's important is the so-called "happens-before order". If you ever see in the documentation that "this happens-before that", it means that everything the first thread does before "this" will be visible to whoever does "that". This may not even require any locking. Concurrent collections are a good example: one thread puts something there, another one reads it, and that magically guarantees that the second thread will see everything the first thread did before putting that object into the collection, even if those actions had nothing to do with the collection itself!
Volatile variables
One last warning: you better give up on the idea that making variables volatile will solve things. In this case maybe making bExit volatile will suffice, but there are so many troubles that using volatiles can lead to that I'm not even willing to go into that. But one thing is for sure: using synchronized has much stronger effect than using volatile, and that goes for memory effects too. What's worse, volatile semantics changed in some Java version so there may exist some versions that still use the old semantics which was even more obscure and confusing, whereas synchronized always worked well provided you understand what it is and how to use it.
Pretty much the only reason to use volatile is performance because synchronized may cause lock contention and other troubles. Read Java Concurrency in Practice to figure all that out.
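For instance, here is a minimal sketch of the synchronized fix for the original x/bExit example (the class and lock field names are illustrative):

class Shared {
    private int x = 0;
    private boolean bExit = false;
    private final Object lock = new Object();

    // Thread 1
    void writer() {
        synchronized (lock) {
            x = 1;
            bExit = true;
        }
    }

    // Thread 2: because both blocks lock on the same object, a thread
    // that sees bExit == true is guaranteed to also see x == 1.
    void reader() {
        synchronized (lock) {
            if (bExit) {
                System.out.println("x=" + x);
            }
        }
    }
}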
Q & A
1) You wrote "now flush whatever you've been doing there to the shared memory" about synchronized blocks. But will we see only the variables that we accessed in the synchronized block, or all the changes made by the thread that called synchronized (even on the variables not accessed in the synchronized block)?
Short answer: it will "flush" all variables that were updated during the synchronized block or before entering the synchronized block. And again, because flushing is an implementation detail, you don't even know whether it will actually flush something or do something entirely different (or doesn't do anything at all because the implementation and the specific situation already somehow guarantee that it will work).
Variables that weren't accessed inside the synchronized block obviously won't change during the execution of the block. However, if you change some of those variables before entering the synchronized block, for example, then you have a happens-before relationship between those changes and whatever happens in the synchronized block (the first bullet in 17.4.5). If some other thread enters another synchronized block using the same lock object, then it synchronizes-with the first thread exiting the synchronized block, which means that you have another happens-before relationship here. So in this case the second thread will see the variables that the first thread updated prior to entering the synchronized block.
If the second thread tries to read those variables without synchronizing on the same lock, then it is not guaranteed to see the updates. But then again, it isn't guaranteed to see the updates made inside the synchronized block as well. But this is because of the lack of the memory-read barrier in the second thread, not because the first one didn't "flush" its variables (memory-write barrier).
2) In the chapter you posted (of the JLS) it is written: "A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field." Doesn't this mean that when the variable is volatile you will see only changes to it (because it says the write happens-before the read, not happens-before every operation between them)? I mean, doesn't this mean that in the example given in the description of the problem, we can see bExit = true but x = 0 in the second thread if only bExit is volatile? I ask because I found this question here: http://java67.blogspot.bg/2012/09/top-10-tricky-java-interview-questions-answers.html and it is written that if bExit is volatile the program is OK. So will the registers flush only bExit's value, or both bExit's and x's values?
By the same reasoning as in Q1, if you do bExit = true after x = 1, then there is an in-thread happens-before relationship because of the program order. Now since volatile writes happen-before volatile reads, it is guaranteed that the second thread will see whatever the first thread updated prior to writing true to bExit. Note that this behavior is only since Java 1.5 or so, so older or buggy implementations may or may not support this. I have seen bits in the standard Oracle implementation that use this feature (java.concurrent collections), so you can at least assume that it works there.
3) Why does the monitor matter for memory visibility when using synchronized blocks? I mean, when exiting a synchronized block, aren't all variables (those accessed in the block, or all variables in the thread - this relates to the first question) flushed from registers to main memory or broadcast to all CPU caches? Why does the object of synchronization matter? I just cannot imagine what the relations are and how they are made (between the object of synchronization and memory). I know that we should use the same monitor to see these changes, but I don't understand how the memory that should be visible is mapped to objects. Sorry for the long questions, but these are really interesting questions for me and they are related to the question (I would post these questions exactly for this example).
Ha, this one is really interesting. I don't know. Probably it flushes anyway, but Java specification is written with high abstraction in mind, so maybe it allows for some really weird hardware where partial flushes or other kinds of memory barriers are possible. Suppose you have a two-CPU machine with 2 cores on each CPU. Each CPU has some local cache for every core and also a common cache. A really smart VM may want to schedule two threads on one CPU and two threads on another one. Each pair of the threads uses its own monitor, and VM detects that variables modified by these two threads are not used in any other threads, so it only flushes them as far as the CPU-local cache.
See also this question about the same issue.
4) I thought that everything before a write to a volatile will be up to date when we read it (moreover, in Java a volatile read is a memory barrier), but the documentation doesn't say this.
It does:
17.4.5.
If x and y are actions of the same thread and x comes before y in program order, then hb(x, y).
If hb(x, y) and hb(y, z), then hb(x, z).
A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.
If x = 1 comes before bExit = true in program order, then we have happens-before between them. If some other thread reads bExit after that, then we have happens-before between write and read. And because of the transitivity, we also have happens-before between x = 1 and read of bExit by the second thread.
5) Also, if we have a volatile Person p, do we have some dependency when we use p.age = 20 and print(p.age), or do we have a memory barrier in this case (assume age is not volatile)? - I think - No
You are correct. Since age is not volatile, there is no memory barrier, and that's one of the trickiest things. Here is a fragment from CopyOnWriteArrayList, for example:
Object[] elements = getArray();
E oldValue = get(elements, index);
if (oldValue != element) {
    int len = elements.length;
    Object[] newElements = Arrays.copyOf(elements, len);
    newElements[index] = element;
    setArray(newElements);
} else {
    // Not quite a no-op; ensures volatile write semantics
    setArray(elements);
}
Here, getArray and setArray are trivial setter and getter for the array field. But since the code changes elements of the array, it is necessary to write the reference to the array back to where it came from in order for the changes to the elements of the array to become visible. Note that it is done even if the element being replaced is the same element that was there in the first place! It is precisely because some fields of that element may have changed by the calling thread, and it's necessary to propagate these changes to future readers.
6) And is there any happens-before between 2 subsequent reads of a volatile field? I mean, will the second read see all the changes from the thread which read this field before it (of course, we will have changes only if volatile influences the visibility of all changes before it - which I am a little confused about whether it is true or not)?
No, there is no relationship between volatile reads. Of course, if one thread performs a volatile write and then two other threads perform volatile reads, they are guaranteed to see everything at least as up to date as it was before the volatile write, but there is no guarantee of whether one thread will see more up-to-date values than the other. Moreover, there is not even a strict definition of one volatile read happening before another! It is wrong to think of everything happening on a single global timeline. It is more like parallel universes with independent timelines that sometimes sync their clocks by performing synchronization and exchanging data with memory barriers.
It depends on the implementation whether threads will keep a copy of the variables in their own memory. In the case of class-level variables, threads have shared access, and in the case of local variables, threads keep their own copies. I will provide two examples which show this; please have a look.
And in your example, if I understood it correctly, your code should look something like this:
package com.practice.multithreading;

public class LocalStaticVariableInThread {
    static int x = 0;
    static boolean bExit = false;

    public static void main(String[] args) {
        Thread t1 = new Thread(run1);
        Thread t2 = new Thread(run2);
        t1.start();
        t2.start();
    }

    static Runnable run1 = () -> {
        x = 1;
        bExit = true;
    };

    static Runnable run2 = () -> {
        if (bExit == true)
            System.out.println("x=" + x);
    };
}
Output
x=1
I am always getting this output. It is because the threads share the variables, and when one is changed by one thread the other thread can see it. But in real-life scenarios we can never say which thread will start first; since here the threads are not doing much work, we see the expected result.
Now take this example:
Here, if you make the i variable of the for-loop a shared class-level variable instead of a local one, the threads won't keep their own copies of it and you won't see the desired output, i.e. the count value will not be 2000 every time, even if you have synchronized the count increment.
package com.practice.multithreading;

public class RaceCondition2Fixed {
    private int count;
    int i;

    /* Making it synchronized forces the thread to acquire an intrinsic lock on the method,
       and another thread cannot access it until this lock is released after the method is completed. */
    public synchronized void increment() {
        count++;
    }

    public static void main(String[] args) {
        RaceCondition2Fixed rc = new RaceCondition2Fixed();
        rc.doWork();
    }

    private void doWork() {
        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (i = 0; i < 1000; i++) {
                    increment();
                }
            }
        });
        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (i = 0; i < 1000; i++) {
                    increment();
                }
            }
        });
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        /* If we don't use join then count may be printed as 0. When we call t1.start() and t2.start(),
           the threads start updating count in the separate threads; meanwhile the main thread would
           print the value as 0. So we need to wait for the threads to complete. */
        System.out.println(Thread.currentThread().getName() + " Count is : " + count);
    }
}

Java: How exactly do synchronized operations relate to volatility?

Sorry this is such a long question.
I've been doing lots of research lately into multi-threading as I slowly implement it into a personal project. However, probably due to an abundance of slightly incorrect examples, the use of synchronized blocks and volatility in certain situations is still a bit unclear to me.
My core question is this: Are changes to references and primitives automatically volatile (that is, performed on the main memory and not a cache) when a thread is inside a synchronized block, or does the read also have to be synchronized for it to work properly?
If so: What is the purpose of synchronizing a simple getter method? (see Example 1) Also, are ALL changes sent to main memory as long as the thread has synchronized on anything? E.g. if it is sent off to do loads of work all over the place inside a very high-level sync, will every single change it makes then go to main memory, and nothing ever to cache, until it's unlocked again?
If not: Does the change have to be explicitly inside a synchronized block, or can Java actually pick up on, for example, uses of the Lock object? (see Example 3)
If either: Does the synchronized object need to be related to the reference/primitive being changed in any way (e.g. the immediate object that contains it)? Can I write by syncing on one object and read by syncing on another, if it's otherwise safe? (see Example 2)
(please note for the following examples that I know that synchronized methods and synchronized(this) are frowned upon and why, but discussion about that is beyond the scope of my question)
Example 1:
class Counter {
    int count = 0;

    public synchronized void increment() {
        count++;
    }

    public int getCount() {
        return count;
    }
}
In this example, increment() needs to be synchronized since ++ is not an atomic operation. As such, two threads incrementing at the same time may result in an overall increase of only 1 to the count. The count primitive needs to be atomic (e.g. not long/double/reference), and it is, so that's fine.
Does getCount() need to be synchronized here, and why exactly? The explanation I have heard the most is that I will have no guarantee whether the count returned will be the pre- or post-increment. However, this seems like the explanation for something slightly different that's found itself in the wrong place. I mean, if I were to synchronize getCount(), then I still see no guarantee - it's now down to not knowing the locking order, instead of not knowing whether the actual read happens to be before/after the actual write.
Example 2:
Is the following example thread-safe, if you assume that through trickery not shown here none of these methods will ever be called at the same time? Will count increment in the expected way if it is incremented using a random one of these methods each time, and then be read properly, or does the lock have to be the same object? (BTW I fully realise how ridiculous this example is, but I'm more interested in theory than practice.)
class Counter {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();
    private final Object lock3 = new Object();
    int count = 0;

    public void increment1() {
        synchronized (lock1) {
            count++;
        }
    }

    public void increment2() {
        synchronized (lock2) {
            count++;
        }
    }

    public int getCount() {
        synchronized (lock3) {
            return count;
        }
    }
}
Example 3:
Is the happens-before relationship simply a Java concept, or is it an actual thing built into the JVM? Even though I can guarantee a conceptual happens-before relationship for this next example, is Java smart enough to pick it up if it's a built-in thing? I am assuming it is not, but is this example actually thread-safe? If it's thread-safe, what about if getCount() did no locking?
class Counter {
    // Lock is an interface; a ReentrantLock is the usual concrete choice
    private final Lock lock = new ReentrantLock();
    int count = 0;

    public void increment() {
        lock.lock();
        count++;
        lock.unlock();
    }

    public int getCount() {
        lock.lock();
        int count = this.count;
        lock.unlock();
        return count;
    }
}
Yes, the read has to be synchronized as well. This page says:
The results of a write by one thread are guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation.
[...]
An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor
The same page says:
Actions prior to "releasing" synchronizer methods such as Lock.unlock, Semaphore.release, and CountDownLatch.countDown happen-before actions subsequent to a successful "acquiring" method such as Lock.lock
So locks offer the same visibility guarantees as synchronized blocks.
Whether you use synchronized blocks or locks, the visibility is only guaranteed if the reader thread uses the same monitor or lock as the writer thread.
Your Example 1 is incorrect: the getter must be synchronized as well if you want to see the latest value of the count.
Your example 2 is incorrect because it uses different locks to guard the same count.
Your example 3 is OK. If the getter did not lock, you could see an older value of the count. The happens-before is something that is guaranteed by the JVM. The JVM has to respect the rules specified, by flushing caches to the main memory for example.
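To spell out the fix for Example 1, here is a minimal sketch with the getter synchronized on the same monitor:

class Counter {
    int count = 0;

    public synchronized void increment() {
        count++;
    }

    // Synchronized on the same monitor (this) as increment(), so a
    // reader is guaranteed to see the most recent increment.
    public synchronized int getCount() {
        return count;
    }
}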
Try to view it in terms of two distinct, simple operations:
Locking (mutual exclusion),
Memory barrier (cache sync, instruction reordering barrier).
Entering a synchronized block entails both locking and a memory barrier; leaving the synchronized block entails unlocking plus a memory barrier; reading/writing a volatile field entails a memory barrier only. Thinking in these terms, I think you can clarify for yourself all the questions above.
As for Example 1, the reading thread will not have any kind of memory barrier. It's not just a matter of seeing the value before/after the read; the reading thread may never observe any change to the var after it has started.
Example 2 is the most interesting issue you raise. You are indeed given no guarantees by the JLS in this case. In practice you won't be given any ordering guarantees (it's as if the locking aspect wasn't there at all), but you'll still have the benefit of the memory barriers, so you will observe changes, unlike the first example. Basically, this is exactly the same as removing synchronized and tagging the int as volatile (apart from the runtime costs of acquiring locks).
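In other words, Example 2 behaves roughly like the sketch below (an interpretation of the claim above, not code from the question):

class Counter {
    // As if the three distinct locks were erased: no mutual exclusion
    // over count, only the memory-barrier (visibility) effect.
    private volatile int count = 0;

    public void increment1() { count++; } // still a lost-update race
    public void increment2() { count++; } // still a lost-update race

    public int getCount() { return count; }
}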
Regarding Example 3, by "just a Java thing" I feel you have generics with erasure in mind, something that only the static code checking is aware of. This is not like that -- both locks and memory barriers are pure runtime artifacts. In fact, the compiler can't reason about them at all.

Do I need to synchronize methods inside the run method in java when calling start method?

Can anyone tell me if I'm right or not? I have two threads which will run in parallel.
class MyThread extends Thread {
    MyThread() {
    }

    void method1() {
    }

    void method2() {
    }

    void method3() {
    }

    // approach(1):
    public void run() {
        method1();
        method2();
        method3();
    }

    // approach(2):
    // public void run() {
    //     // the code of method1 is here (no method calling)
    //     // the code of method2 is here (no method calling)
    //     // the code of method3 is here (no method calling)
    // }
}

class Test {
    public static void main(String[] args) {
        Thread t1 = new MyThread();
        t1.start();
        Thread t2 = new MyThread();
        t2.start();
    }
}
method1, method2 and method3 don't access global shared data, but their code performs some writes to local variables within the method body, thus I guess I cannot allow overlapped execution within the method body.
Thereby:
in approach(1): I need to make the methods (method1, method2 and method3) synchronized, right?
in approach(2): No need to synchronize the code sections, right?
If I'm right in both approaches, using approach(2) will give better performance, right?
Short answer: you don't need the synchronization. Both approaches are equivalent from a thread safety perspective.
Longer answer:
It may be worthwhile taking a step back and remembering what the synchronized block does. It does essentially two things:
makes sure that if thread A is inside a block that's synchronized on object M, no other thread can enter a block that's synchronized on the same object M until thread A is done with its block of code
makes sure that if thread A has done work within a block that's synchronized on object M, and then finishes that block, and then thread B enters a block that's also synchronized on M, then thread B will see everything that thread A had done within its synchronized block. This is called establishing the happens-before relationship.
Note that a synchronized method is just shorthand for wrapping the method's code in synchronized (this) { ... }.
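That is, the two forms in this sketch are equivalent (the class and method names are illustrative):

class Example {
    public synchronized void doWork() {
        // ...
    }

    // Equivalent form: the instance itself is the monitor.
    public void doWorkExplicit() {
        synchronized (this) {
            // ...
        }
    }
}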
In addition to those two things, the Java Memory Model (JMM) guarantees that within one thread, things will happen as if they had not been reordered. (They may actually be reordered for various reasons, including efficiency -- but not in a way that your program can notice within a single thread. For instance, if you do "x = 1; y = 2" the compiler is free to switch that such that y = 2 happens before x = 1, since a single thread can't actually notice the difference. If multiple threads are accessing x and y, then it's very possible, without proper synchronization, for another thread to see y = 2 before it sees x = 1.)
So, getting back to your original question, there are a couple interesting notes.
First, since a synchronized method is shorthand for putting the whole method inside a "synchronized (this) { ... }" block, t1's methods and t2's methods will not be synchronized against the same reference, and thus will not be synchronized relative to each other. t1's methods will only be synchronized against the t1 object, and t2's will only be synchronized against t2. In other words, it would be perfectly fine for t1.method1() and t2.method1() to run at the same time. So, of those two things the synchronized keyword provides, the first one (the exclusivity of entering the block) isn't relevant. Things could go something like:
t1 wants to enter method1. It needs to acquire the t1 monitor, which is not contended -- so it acquires it and enters the block
t2 wants to enter method1. It needs to acquire the t2 monitor, which is not contended -- so it acquires it and enters the block
t1 finishes method1 and releases its hold on the t1 monitor
t2 finishes method1 and releases its hold on the t2 monitor
As for the second thing synchronization does (establishing happens-before), making method1() and method2() synchronized will basically be ensuring that t1.method1() happens-before t1.method2(). But since both of those happen on the same thread anyway (the t1 thread), the JMM anyway guarantees that this will happen.
So it actually gets even a bit uglier. If t1 and t2 did share state -- that is, synchronization would be necessary -- then making the methods synchronized would not be enough. Remember, a synchronized method means synchronized (this) { ... }, so t1's methods would be synchronized against t1, and t2's would be against t2. You actually wouldn't be establishing any happens-before relationship between t1's methods and t2's.
Instead, you'd have to ensure that the methods are synchronized on the same reference. There are various ways to do this, but basically, it has to be a reference to an object that the two threads both know about.
Assume t1 and t2 both know about the same reference, LOCK. Both have methods like:
method1() {
    synchronized (LOCK) {
        // do whatever
    }
}
Now things could go something like this:
t1 wants to enter method1. It needs to acquire the LOCK monitor, which is not contended -- so it acquires it and enters the block
t2 wants to enter method1. It needs to acquire the LOCK monitor, which is already held by t1 -- so t2 is put on hold.
t1 finishes method1 and releases its hold on the LOCK monitor
t2 is now able to acquire the LOCK monitor, so it does, and starts on the meat of method1
t2 finishes method1 and releases its hold on the LOCK monitor
You are saying your methods don't access global shared data and write only to local variables, so there is no need to synchronize them, because both threads will have their own copies of the local variables. They will not overlap.
This kind of problem is faced in the case of static/class variables: if multiple threads try to change the value of a static variable at the same time, then there is a problem, and there we need to synchronize.
If the methods you're calling don't write to global shared data, you don't have to synchronize them.
In a multithreaded program, each thread has its own call stack. The local variables of each method will be separate in each thread, and will not overwrite one another.
Therefore, approach 1 works fine, does not require synchronization overhead, and is much better programming practice because it avoids duplicated code.
Thread-wise you're OK. Local variables within methods are not shared between threads, as each instance running in a thread will have its own stack.
You won't have any speed improvement between the two approaches; it is just a better organisation of the code (shorter methods are easier to understand).
If each method is independent of the others, you may want to consider whether they belong in the same class. If you want a performance gain, create 3 different classes and execute multiple threads for each method (performance gains depend on the number of available cores, the CPU/IO ratio, etc.).
Thereby: in approach(1): I need to make the methods (method1, method2 and method3) synchronized, right? in approach(2): No need to synchronize the code sections, right?
Invoking in-lined code v/s invoking multiple methods doesn't determine whether a method should be synchronized or not. I'd recommend you read this and then ask for more clarification.
If I'm right in both approach, using the approach(2) will give better performance, right?
At the cost of breaking down methods into a single god method? Sure, but you would be looking at a "very" minuscule improvement compared to the lost code readability, something definitely not recommended.
method1, 2 and 3 won't be executed concurrently so if the variables that they read/write are not shared outside the class with other threads while they're running then there is no synchronization required and no need to inline.
If they modify data that other threads will read at the same time that they're running then you need to guard access to that data.
If they read data that other threads will write at the same time that they're running then you need to guard access to that data.
If other threads are expected to read data modified by method1, 2, or 3, then you need to make the run method synchronized (or wrap the calls in a synchronized block) to set up a gate so that the JVM will set up a memory barrier and ensure that other threads can see the data after m1, 2 and 3 are done.

Synchronized code performs faster than unsynchronized one

I came out with this stunning result which I absolutely do not know the reason for:
I have two methods which are shortened to:
private static final ConcurrentHashMap<Double, Boolean> mapBoolean =
    new ConcurrentHashMap<Double, Boolean>();
private static final ConcurrentHashMap<Double, LinkedBlockingQueue<Runnable>> map =
    new ConcurrentHashMap<Double, LinkedBlockingQueue<Runnable>>();

protected static <T> Future<T> execute(final Double id, Callable<T> call) {
    // where id is the ID number of each thread
    synchronized (id) {
        mapBoolean.get(id); // then do something with the result
        map.get(id);        // then do something with the result
        // ... (rest of the method omitted in the question)
    }
}

protected static <T> Future<T> executeLoosely(final Double id, Callable<T> call) {
    mapBoolean.get(id); // then do something with the result
    map.get(id);        // then do something with the result
    // ... (rest of the method omitted in the question)
}
On profiling with over 500 threads, each thread calling each of the above methods 400 times, I found out that execute(..) performs at least 500 times better than executeLoosely(..), which is weird because executeLoosely is not synchronized and hence more threads can process the code simultaneously.
Any reasons??
The overhead of using 500 threads on a machine which I assume doesn't have 500 cores, using tasks which take about 100-1000x as long as a lookup on a Map, to execute code which the JVM could detect doesn't do anything, is likely to produce a random outcome. ;)
Another problem you could have is that a test which is performed faster with one thread can benefit from using synchronized, because synchronization biases access toward one thread; i.e. it turns your multi-threaded test back into a single-threaded one, which was the fastest in the first place.
You should compare the timings you get with a single thread doing a loop. If this is faster (which I believe it would be), then it's not a useful multi-threaded test.
My guess is that you are running the synchronized code after the unsynchronised code. i.e. after the JVM has warmed up a little. Swap the order you perform these tests and run them many times and you will get different results.
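A sketch of a fairer harness along those lines (runSynchronized/runUnsynchronized are hypothetical stand-ins for the poster's two methods):

public class Benchmark {
    static void runSynchronized()   { /* the synchronized workload */ }
    static void runUnsynchronized() { /* the unsynchronized workload */ }

    static long time(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // Warm up both paths so the JIT compiles them before measuring.
        for (int i = 0; i < 5; i++) {
            time(Benchmark::runSynchronized);
            time(Benchmark::runUnsynchronized);
        }
        // Measure after warm-up; also swap this order across runs.
        System.out.println("sync:   " + time(Benchmark::runSynchronized) + " ns");
        System.out.println("unsync: " + time(Benchmark::runUnsynchronized) + " ns");
    }
}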
In the non-synchronized scenario:
1) wait to acquire the lock on a segment of the map, lock, perform the operation on the map, unlock; then wait to acquire the lock on a segment of the other map, lock, perform the operation on the other map, unlock.
The segment-level locking will be performed only in cases of concurrent writes to the segment, which doesn't look to be the case in your example.
In the synchronized scenario:
1) wait to lock, perform both the operations, unlock.
Could the time taken for context switching have an impact? How many cores does the machine running the test have?
How are the maps structured, same sort of keys?
