I am testing this simple example because I am having trouble with a real multithreaded problem, which is too large to illustrate here. This is a simplified version to showcase the situation.
class Test {
    public static int count = 0;

    class CountThread extends Thread {
        public void run() {
            count++;
        }
    }

    public void add() {
        CountThread a = new CountThread();
        CountThread b = new CountThread();
        CountThread c = new CountThread();

        a.start();
        b.start();
        c.start();

        try {
            a.join();
            b.join();
            c.join();
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
    }

    public static void main(String[] args) {
        Test test = new Test();
        System.out.println("START = " + Test.count);
        test.add();
        System.out.println("END: Account balance = " + Test.count);
    }
}
Because it always prints out '3'. No synchronization needed?
It is not thread safe and you are just getting lucky. If you run this 1000 times, or on different architectures, you will see different output -- i.e. not 3.
I would suggest using an AtomicInteger instead of incrementing a static field with ++, which is not synchronized.
public static AtomicInteger count = new AtomicInteger();
...
public void run() {
count.incrementAndGet();
}
...
It seems to me that count++ finishes quickly enough, before run() is invoked on the next thread, so the increments effectively run sequentially.
But if this were a real-life example, and two different threads were using CountThread in parallel, then yes, you would have a synchronization problem.
To verify that, you can print some test output before and after count++; then you'll see whether the thread started by b.start() enters count++ before the one started by a.start() has finished. Same for c.start().
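A minimal sketch of that tracing approach, assuming the same Test class as in the question (the printed messages are illustrative):
class CountThread extends Thread {
    public void run() {
        // Print around the increment so overlapping executions become visible in the output.
        System.out.println(Thread.currentThread().getName() + " before increment: count = " + count);
        count++;
        System.out.println(Thread.currentThread().getName() + " after increment:  count = " + count);
    }
}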
Consider using AtomicInteger instead, which is preferable to explicit synchronization whenever it fits:
incrementAndGet
public final int incrementAndGet()
Atomically increments by one the current value.
This code is not thread-safe:
public static int count = 0;
class CountThread extends Thread {
public void run()
{
count++;
}
}
You can run this code a million times on one system and it might pass every time. That does not mean it is thread-safe.
Consider a system where the value in count is copied to multiple processor caches. They all might be updated independently before something forces one of the caches to be copied back to main RAM. Consider that ++ is not an atomic operation. The order of reading and writing of count may cause data to be lost.
The correct way to implement this code (using Java 5 and above):
public static java.util.concurrent.atomic.AtomicInteger count =
new java.util.concurrent.atomic.AtomicInteger();
class CountThread extends Thread {
public void run()
{
count.incrementAndGet();
}
}
The output being right does not make it thread-safe. Creating a thread causes a lot of overhead on the OS side, and after that it is to be expected that a single line of code completes within a single timeslice. The code is not thread-safe by any means; there are simply not enough potential conflicts to actually trigger one.
It is not thread safe.
It just happened to be far too short to have a measurable chance of showing the issue. Consider counting to a much higher number (1,000,000?) in run() to increase the chance that operations on multiple threads overlap.
Also make sure your machine is not a single-core CPU...
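A sketch of that suggestion, assuming the same Test class as in the question but with a loop inside run(); on a multi-core machine the final value is usually well below 3,000,000:
class CountThread extends Thread {
    public void run() {
        // Many unsynchronized increments make the lost updates visible.
        for (int i = 0; i < 1000000; i++) {
            count++;
        }
    }
}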
To make the class thread-safe, use AtomicInteger, or rewrite it like this (my preference). Note that making count volatile on its own only forces visibility between threads; it does not make count++ atomic:
class CountThread extends Thread {
private static final Object lock = new Object();
public void run()
{
synchronized(lock) {
count++;
}
}
}
Related
I am totally puzzled by these two samples.
public class VTest {
    private static /*volatile*/ boolean leap = true;

    public static void main(String[] args) throws InterruptedException {
        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                while (leap) {
                }
            }
        });
        t2.start();
        Thread.sleep(3000);
        leap = false;
    }
}
In this case, t2 is not able to stop, as (I assume) leap was stored locally, so t2 can't see the leap updated in the main thread.
public class VTest2 {
    private static int m = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10000; ++i) ++m;
            }
        });
        t2.start();
        for (int i = 0; i < 10000; ++i) ++m;
        Thread.sleep(3000);
        System.out.println(m);
    }
}
But in this case, m always ends up as 20000; why isn't it 10000?
Any answer will be appreciated.
It's not really a matter of "when". Because of the way that m is declared, neither thread has any reason to believe that it needs to consider the value in main memory.
Consider that ++m is not an atomic operation, but is rather:
A read
An increment
A write
Because the thread doesn't know it needs to read from or flush to main memory, there is no guarantee as to how it is executed:
Perhaps it reads from main memory each time, and flushes to main memory each time
Perhaps it reads from main memory just once, and doesn't flush to main memory when it writes
Perhaps it reads from/writes to main memory on some iterations of the loop
(...many other ways)
So, essentially, the answer is that there is no guarantee that the value is read from or written to main memory, ever.
If you declare m as volatile, that gives you some guarantees: that m is definitely read from main memory, and definitely flushed to main memory. However, because ++m isn't atomic, there is no guarantee that you get 20000 at the end (it's possible it could be 2, at worst), because the work of the two threads can intersperse (e.g. both threads read the same value of m, increment it, and both write back the same value m+1).
To do this correctly, you need to ensure that:
++m is executed atomically
The value is guaranteed to be visible.
The easiest way of doing this would be to use an AtomicInteger instead; however, you could mutually synchronize the increments:
synchronized (VTest2.class) {
++m;
}
You then also need to synchronize the final read, in order to ensure you are definitely seeing the last value written by t2:
synchronized (VTest2.class) {
System.out.println(m);
}
In this case, t2 is not able to stop, as (I assume) leap was stored locally, so t2 can't see the leap updated in the main thread.
That's not really the case: the leap variable was not stored "locally" by the thread. It's still a shared static variable. However, because it is not marked as volatile, and there is no synchronization happening whatsoever, the JVM (the JIT in particular) is free to apply optimizations that avoid reloading it. I believe in this case it removes the repeated check on the variable.
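The usual fix for the first sample, as the /*volatile*/ hint in the question suggests, is to declare the flag volatile so that every read sees the shared value:
// With volatile, the write of leap = false in main is guaranteed to become
// visible to t2, so the while loop terminates.
private static volatile boolean leap = true;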
Note: the second example, incrementing m, is still not thread-safe. Try increasing the loops to millions to test that; the result will almost never match the expected sum.
I'm reading a Java multithreading tutorial which says a thread only gives up the lock once it completes the synchronized method. However, when I run the following code (about 20 times):
public class SyncDemo implements Runnable {
    @Override
    public void run() {
        for (int i = 0; i < 10; i++) {
            sync();
        }
    }

    private synchronized void sync() {
        System.out.println(Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        SyncDemo s = new SyncDemo();
        Thread a = new Thread(s, "a");
        Thread b = new Thread(s, "b");
        a.start();
        b.start();
    }
}
it only prints all of a's output followed by all of b's. I expected a mixed sequence, because the current thread releases the lock every time sync() returns inside the loop, giving the other thread a chance to print its name. Why doesn't that happen?
There is nothing in your program that demands a certain execution order, so the runtime schedules the threads in whatever way makes most sense in the current situation. Factors that may influence the order include the number of processors, the current load, and so on.
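If you just want to see interleaving, one option (a sketch only; it still guarantees nothing about ordering) is to invite a context switch between loop iterations, for example with Thread.yield():
@Override
public void run() {
    for (int i = 0; i < 10; i++) {
        sync();
        Thread.yield(); // hint to the scheduler; output may now interleave, but nothing is guaranteed
    }
}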
Let's imagine I have the following Java class:
static class Singleton {
    static Singleton i;

    static Singleton getInstance() {
        if (i == null) {
            i = new Singleton();
        }
        return i;
    }
}
Now, we all know this will work, but it apparently is not thread-safe. I am not actually trying to fix the thread safety; this is more of a demo. My other class is identical but uses a mutex and synchronization, and the unit test will be run against each to show that one is thread-safe and the other is not. What might a unit test that fails when getInstance is not thread-safe look like?
Well, race conditions are by nature probabilistic, so there's no deterministic way to truly generate one. Any test against your current code would need to run many times until the desired outcome is achieved. You can, however, enforce a loose ordering of access to i by making a mock singleton to simulate what a certain interleaving might look like. The rule of thumb with synchronization is that preventative measures beat trying to test and debug after bad code is already mangled into a code base.
static class Singleton {
    static Singleton i;

    static Singleton getInstance(int tid) {
        if (i == null) {
            if (tid % 2 == 0) i = new Singleton();
        }
        return i;
    }
}
So certain threads will write to i, and other threads will read i as if they reached "return i" before the even thread ids were able to check and initialize i (sort of, not exactly, but it simulates the behavior). Still, there's a race between the even threads in this case, because an even thread may write to i after another thread has already read null. To improve on this, you'd need to add synchronization just to force the condition where one thread reads i and gets null while another thread is setting i to new Singleton(), which is exactly the thread-unsafe condition. But at that point you're better off just solving the underlying issue (just make getInstance thread-safe!).
TLDR: there are infinitely many race conditions that can occur in an unsafe function call. You can mock the code to reproduce one specific race condition (say, between just two threads), but it's not feasible to blanket-test for "race conditions" in general.
This code worked for me.
The trick, as other users have said, is that it is probabilistic, so the approach is to run the test a number of times.
public class SingletonThreadSafety {

    public static final int CONCURRENT_THREADS = 4;

    private void single() {
        // Allocate an array for the singletons seen by each thread
        final Singleton[] singleton = new Singleton[CONCURRENT_THREADS];
        // Number of threads remaining
        final AtomicInteger count = new AtomicInteger(CONCURRENT_THREADS);
        // Create the threads
        for (int i = 0; i < CONCURRENT_THREADS; i++) {
            final int l = i; // capture this value for the inner thread class
            new Thread() {
                public void run() {
                    singleton[l] = Singleton.getInstance();
                    count.decrementAndGet();
                }
            }.start();
        }
        // Wait until all threads are done.
        // The sleep(10) keeps the wait cheap (a bare busy loop would burn CPU).
        // A CountDownLatch or similar would be a cleaner way to do this.
        try { Thread.sleep(10); } catch (InterruptedException ex) { }
        while (count.get() >= 1) {
            try { Thread.sleep(10); } catch (InterruptedException ex) { }
        }
        // Every thread must have seen the same instance.
        for (int i = 0; i < CONCURRENT_THREADS - 1; i++) {
            assertTrue(singleton[i] == singleton[i + 1]);
        }
    }

    @Test
    public void test() {
        for (int i = 0; i < 1000; i++) {
            Singleton.i = null; // reset the singleton before each repetition
            single();
            System.out.println(i);
        }
    }
}
This requires a small change to the Singleton design: the instance variable i must be accessible to the test class, so that the singleton can be reset to null every time the test is repeated. The test is then repeated 1000 times (if you have more time, you could make it more; sometimes finding an odd threading problem requires that).
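As the comment in single() already hints, the sleep-and-poll wait can be replaced by a CountDownLatch; a sketch of that variant, under the same assumptions as the test above:
// Wait for the worker threads with a CountDownLatch instead of polling an AtomicInteger.
final java.util.concurrent.CountDownLatch done =
        new java.util.concurrent.CountDownLatch(CONCURRENT_THREADS);
for (int i = 0; i < CONCURRENT_THREADS; i++) {
    final int l = i;
    new Thread() {
        public void run() {
            singleton[l] = Singleton.getInstance();
            done.countDown();
        }
    }.start();
}
// Blocks until every thread has recorded its instance.
try { done.await(); } catch (InterruptedException ex) { }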
In some cases this solution works. Unfortunately, it is hard to write a test that reliably provokes the thread-unsafe behaviour of a singleton.
@Test
public void checkThreadUnSafeSingleton() throws InterruptedException {
    int threadsAmount = 500;
    Set<Singleton> singletonSet = Collections.newSetFromMap(new ConcurrentHashMap<>());
    ExecutorService executorService = Executors.newFixedThreadPool(threadsAmount);
    for (int i = 0; i < threadsAmount; i++) {
        executorService.execute(() -> {
            Singleton singleton = Singleton.getInstance();
            singletonSet.add(singleton);
        });
    }
    executorService.shutdown();
    executorService.awaitTermination(1, TimeUnit.MINUTES);
    // Expects the race to have produced exactly two distinct instances,
    // which is why this only works in some cases.
    Assert.assertEquals(2, singletonSet.size());
}
I tried to create a race condition like this.
class Bankaccount {
private int balance=101;
public int getBalance(){
return balance;
}
public void withdraw(int i){
balance=balance-i;
System.out.println("..."+balance);
}
}
public class Job implements Runnable{
Bankaccount b=new Bankaccount();
public void run(){
if(b.getBalance()>100){
System.out.println("the balanced ammount is"+b.getBalance());
/*try{
Thread.sleep(9000);
}
catch(Exception e){
}*/
makeWithdrawl(100);
}
}
public void makeWithdrawl(int ammount){
b.withdraw(ammount);
System.out.println(b.getBalance());
}
public static void main(String[] args) {
Job x=new Job();
Job y=new Job();
Thread t1=new Thread(x);
Thread t2=new Thread(y);
t1.start();
t2.start();
}
}
I am getting output:
the balanced ammount is101
...1
1
the balanced ammount is101
...1
I was expecting it to go negative, since a withdrawal of 100 happened twice.
What am I missing here? Thanks in advance.
Race conditions appear when multiple threads change shared data. In your example each thread has its own Bankaccount instance. You need to share one instance, like this:
class Job implements Runnable{
Bankaccount b;
Job(Bankaccount b){
this.b = b;
}
public void run(){
if (b != null)
if(b.getBalance()>100){
System.out.println("the balanced ammount is " + b.getBalance());
makeWithdrawal(100);
}
}
public void makeWithdrawal(int ammount){
b.withdraw(ammount);
System.out.println(b.getBalance());
}
public static void main(String[] args) {
// Creating one Bankaccount instance
Bankaccount b = new Bankaccount();
// Passing one instance to different threads
Job x=new Job(b);
Job y=new Job(b);
Thread t1=new Thread(x);
Thread t2=new Thread(y);
// Race conditions may appear
t1.start();
t2.start();
}
}
Unfortunately, this is not enough. Multithreaded programs are non-deterministic, and you can get different results on different executions of the program. For example, thread t1 may manage to make its withdrawal before thread t2 starts checking the balance; t2 will then skip its withdrawal due to the lack of money.
To increase the likelihood of a negative balance, you can insert a delay between checking the balance and withdrawing money, as sketched below.
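A sketch of that delay, based on the run() method of the shared-account Job above (the one-second sleep is arbitrary):
public void run() {
    if (b != null && b.getBalance() > 100) {
        System.out.println("the balanced ammount is " + b.getBalance());
        // Widen the race window: both threads can pass the balance check
        // before either of them performs the withdrawal.
        try { Thread.sleep(1000); } catch (InterruptedException e) { }
        makeWithdrawal(100);
    }
}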
There are several things you need to understand about this.
1) Your particular JVM on your particular system may be impervious to race conditions you are trying to reproduce here.
2) You are not likely to reproduce a race condition with a single run. It's supposed to be non-deterministic, if it gave consistent results it wouldn't be a race condition, but rather an error. To improve your chances make an automated sanity check and run the code 100k times.
3) Using a start barrier (for example a CountDownLatch, as sketched after this list) so that both threads start at the same time increases the chances a race condition will occur. Using a multi-core system helps too.
4) Your code cannot produce a race condition anyway. Look closely: each job is using its own account. For a race condition you need shared state.
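A sketch of the start barrier from point 3, assuming the shared-account Job from the previous answer (the gate and variable names are illustrative):
// Release both threads at (almost) the same instant so they reach the
// balance check together.
final java.util.concurrent.CountDownLatch startGate = new java.util.concurrent.CountDownLatch(1);
Bankaccount b = new Bankaccount();
Job x = new Job(b);
Job y = new Job(b);
Thread t1 = new Thread(() -> {
    try { startGate.await(); } catch (InterruptedException e) { return; }
    x.run();
});
Thread t2 = new Thread(() -> {
    try { startGate.await(); } catch (InterruptedException e) { return; }
    y.run();
});
t1.start();
t2.start();
startGate.countDown(); // open the gate; both threads proceed together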
Your code can't create a race condition as written, but here is some information for you.
Reproducing a race condition reliably is going to be really hard, because multithreaded programs are inherently non-deterministic. This means there is no guaranteed order in which independent commands in independent threads execute.
This discussion has some good information on the topic:
Can a multi-threaded program ever be deterministic?
What I think you mean in your example is that you want the balance to be guaranteed to have a specific value after a thread operates on it. To do that, you will have to use locks to make sure that only one thread accesses the variable in question at a time.
A lock makes sure that any thread which attempts to read the value while some other thread is manipulating it must wait until that thread completes before it can read and use the variable itself.
You will need locks to do what you are trying to do in your example.
Here is the official documentation on using locks to protect variables; it includes a small example:
http://docs.oracle.com/javase/tutorial/essential/concurrency/newlocks.html
This discussion has a good answer about utilizing locks in Java:
Java Thread lock on synchronized blocks
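In the spirit of the linked tutorial, here is a minimal ReentrantLock sketch that guards the check-then-withdraw sequence (it reuses the Bankaccount names from the question; the method name withdrawIfPossible is my own):
import java.util.concurrent.locks.ReentrantLock;

class Bankaccount {
    private int balance = 101;
    private final ReentrantLock lock = new ReentrantLock();

    public void withdrawIfPossible(int amount) {
        lock.lock();
        try {
            // The check and the update happen under the same lock,
            // so two threads can never both see balance > 100 and both withdraw.
            if (balance > 100) {
                balance = balance - amount;
                System.out.println("..." + balance);
            }
        } finally {
            lock.unlock();
        }
    }
}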
Try this code to generate a race condition:
// This class exposes a publicly accessible counter
// to help demonstrate data race problem
class Counter {
public static long count = 0;
}
// This class implements Runnable interface
// Its run method increments the counter three times
class UseCounter implements Runnable {
public void increment() {
// increments the counter and prints the value
// of the counter shared between threads
Counter.count++;
System.out.print(Counter.count + " ");
}
public void run() {
increment();
increment();
increment();
}
}
// This class creates three threads
public class DataRace {
public static void main(String args[]) {
UseCounter c = new UseCounter();
Thread t1 = new Thread(c);
Thread t2 = new Thread(c);
Thread t3 = new Thread(c);
t1.start();
t2.start();
t3.start();
}
}
and try this code to fix it
public void increment() {
// increments the counter and prints the value
// of the counter shared between threads
synchronized(this){
Counter.count++;
System.out.print(Counter.count + " ");
}
}
This code snippet is from the book "Oracle Certified Professional Java SE 7 Programmer Exams 1Z0-804 and 1Z0-805" by SG Ganesh and Tushar Sharma.
I have always thought that synchronizing the run method in a Java class which implements Runnable is redundant. I am trying to figure out why people do this:
public class ThreadedClass implements Runnable {
    //other stuff
    public synchronized void run() {
        while (true) {
            //do some stuff in a thread
        }
    }
}
It seems redundant and unnecessary, since they are obtaining the object's lock for another thread. Or rather, they are making explicit that only one thread at a time has access to the run() method. But since it's the run method, isn't it itself its own thread? Therefore only it can access itself, and it doesn't need a separate locking mechanism?
I found a suggestion online that by synchronizing the run method you could potentially create a de-facto thread queue for instance by doing this:
public void createThreadQueue(){
ThreadedClass a = new ThreadedClass();
new Thread(a, "First one").start();
new Thread(a, "Second one, waiting on the first one").start();
new Thread(a, "Third one, waiting on the other two...").start();
}
I would never do that personally, but it leads to the question of why anyone would synchronize the run method. Any ideas on why one should or should not synchronize the run method?
Synchronizing the run() method of a Runnable is completely pointless unless you want to share the Runnable among multiple threads and you want to sequentialize the execution of those threads. Which is basically a contradiction in terms.
There is in theory another much more complicated scenario in which you might want to synchronize the run() method, which again involves sharing the Runnable among multiple threads but also makes use of wait() and notify(). I've never encountered it in 21+ years of Java.
There is one advantage to using synchronized void blah() over void blah() { synchronized(this) { ... } }: the resulting bytecode will be one byte shorter, since the synchronization becomes part of the method signature instead of a separate operation. This may influence the chance that the JIT compiler inlines the method. Other than that there is no difference.
The best option is to use an internal private final Object lock = new Object() to prevent someone from potentially locking your monitor. It achieves the same result without the downside of the evil outside locking. You do have that extra byte, but it rarely makes a difference.
So I would say no, don't use the synchronized keyword in the signature. Instead, use something like
public class ThreadedClass implements Runnable {
    private final Object lock = new Object();

    public void run() {
        synchronized (lock) {
            while (true) {
                //do some stuff in a thread
            }
        }
    }
}
Edit in response to comment:
Consider what synchronization does: it prevents other threads from entering the same code block. So imagine you have a class like the one below. Let's say the current size is 10. Someone tries to perform an add and it forces a resize of the backing array. While they're in the middle of resizing the array, someone calls a makeExactSize(5) on a different thread. Now all of a sudden you're trying to access data[6] and it bombs out on you. Synchronization is supposed to prevent that from happening. In multithreaded programs you simply NEED synchronization.
class Stack {
    int[] data = new int[10];
    int pos = 0;

    void add(int inc) {
        if (pos == data.length) {
            int[] tmp = new int[pos * 2];
            for (int i = 0; i < pos; i++) tmp[i] = data[i];
            data = tmp;
        }
        data[pos++] = inc;
    }

    int remove() {
        return data[--pos]; // pos points at the next free slot, so decrement first
    }

    void makeExactSize(int size) {
        int[] tmp = new int[size];
        for (int i = 0; i < size; i++) tmp[i] = data[i];
        data = tmp;
    }
}
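For reference, a sketch of the synchronized version that the explanation above calls for; the methods simply take the instance lock, so a resize and a makeExactSize can never interleave:
class SynchronizedStack {
    private int[] data = new int[10];
    private int pos = 0;

    synchronized void add(int inc) {
        if (pos == data.length) {
            int[] tmp = new int[pos * 2];
            for (int i = 0; i < pos; i++) tmp[i] = data[i];
            data = tmp;
        }
        data[pos++] = inc;
    }

    synchronized int remove() {
        return data[--pos];
    }

    synchronized void makeExactSize(int size) {
        int[] tmp = new int[size];
        for (int i = 0; i < size; i++) tmp[i] = data[i];
        data = tmp;
    }
}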
Why? Minimal extra safety and I don't see any plausible scenario where it would make a difference.
Why not? It's not standard. If you are coding as part of a team, when another member sees your synchronized run, he'll probably waste 30 minutes trying to figure out what is so special either about your run or about the framework you are using to run the Runnables.
In my experience, it's not useful to add the synchronized keyword to the run() method. If we need to synchronize multiple threads, or we need a thread-safe queue, we can use more appropriate components, such as ConcurrentLinkedQueue.
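For illustration, a small sketch of handing work between threads through a ConcurrentLinkedQueue instead of synchronizing run() (the task strings are made up):
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueDemo {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<String> tasks = new ConcurrentLinkedQueue<>();

        // offer() and poll() are thread-safe, so no external locking is needed.
        tasks.offer("task-1");
        tasks.offer("task-2");

        // A worker thread drains whatever is in the queue;
        // poll() returns null when the queue is currently empty.
        new Thread(() -> {
            String task;
            while ((task = tasks.poll()) != null) {
                System.out.println("processing " + task);
            }
        }).start();
    }
}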
Well, you could theoretically call the run method itself without a problem (after all, it is public). But that doesn't mean one should do it. So basically there's no reason to synchronize it, apart from adding negligible overhead for the thread calling run(). Well, except if you use the same instance multiple times with new Thread, although I'm a) not sure that's legal with the threading API and b) it seems completely useless.
Also, note how your createThreadQueue behaves: synchronized on a non-static method synchronizes on the instance object (i.e. this), and since all three threads share the same instance a, they execute run() one at a time rather than in parallel (and with while(true) the first thread never releases the lock).
Go through the code comments, then uncomment and run the different blocks to clearly see the difference. Note that synchronization makes a difference only if the same runnable instance is used; if each thread gets a new runnable, it won't make any difference.
class Kat {
    public static void main(String... args) {
        Thread t1;

        // MyUsualRunnable is the usual stuff; only this allows concurrency
        MyUsualRunnable m0 = new MyUsualRunnable();
        for (int i = 0; i < 5; i++) {
            t1 = new Thread(m0); //*imp* all threads created are passed the same runnable instance
            t1.start();
        }

        // run() method is synchronized, concurrency killed
        // uncomment the block below and run to see the difference
        /*
        MySynchRunnable1 m1 = new MySynchRunnable1();
        for (int i = 0; i < 5; i++) {
            t1 = new Thread(m1); //*imp* all threads created are passed the same runnable instance, m1
            // if a new runnable instance were created for each iteration, synchronizing would have no effect
            t1.start();
        }
        */

        // run() method has a synchronized block which locks on the runnable instance, concurrency killed
        // uncomment the block below and run to see the difference
        /*
        MySynchRunnable2 m2 = new MySynchRunnable2();
        for (int i = 0; i < 5; i++) {
            // if a new runnable instance were created for each iteration, synchronizing would have no effect
            t1 = new Thread(m2); //*imp* all threads created are passed the same runnable instance, m2
            t1.start();
        }
        */
    }
}
class MyUsualRunnable implements Runnable {
    @Override
    public void run() {
        try { Thread.sleep(1000); } catch (InterruptedException e) { }
    }
}

class MySynchRunnable1 implements Runnable {
    // implicit synchronization on the runnable instance,
    // as the run() method itself is synchronized
    @Override
    public synchronized void run() {
        try { Thread.sleep(1000); } catch (InterruptedException e) { }
    }
}

class MySynchRunnable2 implements Runnable {
    // explicit synchronization on the runnable instance
    // inside the synchronized block;
    // MySynchRunnable2 is totally equivalent to MySynchRunnable1.
    // Usually we never synchronize on this or synchronize the run() method.
    @Override
    public void run() {
        synchronized (this) {
            try { Thread.sleep(1000); } catch (InterruptedException e) { }
        }
    }
}