I have defined an AtomicInteger variable in a class. The class also extends the Thread class. I have created two threads and I increment the value of the atomic integer in the run method. I was expecting the value 2 after running the two threads, but I am getting the value 1. Please let me know what went wrong in the code below.
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIntergerTest extends Thread {

    AtomicInteger i = new AtomicInteger();
    int j = 0;

    @Override
    public void run() {
        i.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicIntergerTest th1 = new AtomicIntergerTest();
        AtomicIntergerTest th2 = new AtomicIntergerTest();
        th1.start();
        th2.start();
        th1.join();
        th2.join();
        System.out.println(th2.i.get());
    }
}
Because AtomicInteger i = new AtomicInteger(); creates one AtomicInteger per instance, so th1 and th2 each increment their own counter. You wanted:
static AtomicInteger i = new AtomicInteger();
You said:
I am still confused. If I have a static integer, can I get the same data visible across all the threads, and with that achieve thread safety?
You misunderstand. No you can't.
The VM basically works like this:
For any given field, all threads have a local copy of that field, and they all have an unfair coin. They flip it any time they read or write the field. If the coin lands heads, they sync the field and read/write the real value. If the coin lands tails, they read/write their local copy. The coin is unfair in that it will mess with you: it always flips heads during tests and on your machine, and flips tails a few times juuuuust when you're giving a demo to that important customer.
Thus, if we have this code running in one thread:
x = x + 1;
and the same code in another thread, x may only be incremented once, because the second thread read its own local copy and thus the first thread's increment is missed. Or not. Maybe today you tried 100 times and the number ended up +2 every time. And then the next song on your Winamp starts or the phase of the moon changes, and now it seems to reliably add only 1. It can all happen, which is why such code is fundamentally broken. You should not interact with a field from multiple threads unless you've taken great care to synchronize access to it.
You cannot force the coin unless you establish so-called happens-before relationships. You do this by using volatile or synchronized, or by calling code that does that. It gets complicated. Quickly.
AtomicInteger is a different solution. It is, well, atomic. If you have a thread with:
atomicInt.incrementAndGet();
and another one with the same thing, then after they've both run, that atomicInt has been incremented by 2, guaranteed.
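A minimal sketch of that guarantee (the class name is invented for illustration): two threads share a single AtomicInteger, and after both have finished the result is always 2.
import java.util.concurrent.atomic.AtomicInteger;

public class SharedCounterSketch {
    // one counter shared by both threads
    static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> counter.incrementAndGet();
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get()); // always prints 2
    }
}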
Related
I have the following code snippet that I'm trying to see whether it can crash or misbehave at some point. The HashMap is being called from multiple threads; put is inside a synchronized block and get is not. Is there any issue with this code? If so, what modification do I need to make to see that happen, given that I only use put and get this way, and there is no putAll, clear or any other operation involved.
import java.util.HashMap;
import java.util.Map;

public class Main {

    Map<Integer, String> instanceMap = new HashMap<>();

    public static void main(String[] args) {
        System.out.println("Hello");
        Main main = new Main();

        Thread thread1 = new Thread("Thread 1") {
            public void run() {
                System.out.println("Thread 1 running");
                for (int i = 0; i <= 100; i++) {
                    System.out.println("Thread 1 " + i + "-" + main.getVal(i));
                }
            }
        };
        thread1.start();

        Thread thread2 = new Thread("Thread 2") {
            public void run() {
                System.out.println("Thread 2 running");
                for (int i = 0; i <= 100; i++) {
                    System.out.println("Thread 2 " + i + "-" + main.getVal(i));
                }
            }
        };
        thread2.start();
    }

    private String getVal(int key) {
        check(key);
        return instanceMap.get(key);
    }

    private void check(int key) {
        if (!instanceMap.containsKey(key)) {
            synchronized (instanceMap) {
                if (!instanceMap.containsKey(key)) {
                    // System.out.println(Thread.currentThread().getName());
                    instanceMap.put(key, "" + key);
                }
            }
        }
    }
}
What I have checked out:
Are size(), put(), remove(), get() atomic in Java synchronized HashMap?
Extending HashMap<K,V> and synchronizing only puts
Why does HashMap.get(key) needs to be synchronized when change operations are synchronized?
I somewhat modified your code:
removed System.out.println() from the "hot" loop, as it is internally synchronized
increased the number of iterations
changed the printing to only print when there's an unexpected value
There's much more we could do and try, but this already fails, so I stopped there. The next step would be to rewrite the whole thing to jcstress.
And voila, as expected, sometimes this happens on my Intel MacBook Pro with Temurin 17:
Exception in thread "Thread 2" java.lang.NullPointerException: Cannot invoke "java.lang.Integer.intValue()" because the return value of "java.util.Map.get(Object)" is null
at com.gitlab.janecekpetr.playground.Playground.getVal(Playground.java:35)
at com.gitlab.janecekpetr.playground.Playground.lambda$0(Playground.java:21)
at java.base/java.lang.Thread.run(Thread.java:833)
Code:
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

public class Playground {

    private record Val(int index, int value) {}

    private static final int MAX = 100_000;

    private final Map<Integer, Integer> instanceMap = new HashMap<>();

    public static void main(String... args) {
        Playground main = new Playground();
        Runnable runnable = () -> {
            System.out.println(Thread.currentThread().getName() + " running");
            Val[] vals = new Val[MAX];
            for (int i = 0; i < MAX; i++) {
                vals[i] = new Val(i, main.getVal(i));
            }
            System.out.println(Stream.of(vals).filter(val -> val.index() != val.value()).toList());
        };
        Thread thread1 = new Thread(runnable, "Thread 1");
        thread1.start();
        Thread thread2 = new Thread(runnable, "Thread 2");
        thread2.start();
    }

    private int getVal(int key) {
        check(key);
        return instanceMap.get(key);
    }

    private void check(int key) {
        if (!instanceMap.containsKey(key)) {
            synchronized (instanceMap) {
                if (!instanceMap.containsKey(key)) {
                    instanceMap.put(key, key);
                }
            }
        }
    }
}
To specifically explain the excellent sleuthing work in the answer by @PetrJaneček:
Every field in java has an evil coin attached to it. Anytime any thread reads the field, it flips this coin. It is not a fair coin - it is evil. It will flip heads 10,000 times in a row if that's going to ruin your day (for example, you may have code that depends on coinflips landing a certain way, or it'll fail to work. The coin is evil: You may run into the situation that just to ruin your day, during all your extensive testing, the coin flips heads, and during the first week in production it's all heads flips. And then the big new potential customer demos your app and the coin starts flipping some tails on you).
The coinflip decides which variant of the field is used - because every thread may or may not have a local cache of that field. When you write to a field from any thread? Coin is flipped, on tails, the local cache is updated and nothing more happens. Read from any thread? Coin is flipped. On tails, the local cache is used.
That's not really what happens of course (your JVM does not actually have evil coins nor is it out to get you), but the JMM (Java Memory Model), along with the realities of modern hardware, means that this abstraction works very well: It will reliably lead to the right answer when writing concurrent code, namely, that any field that is touched by more than one thread must have guards around it, or must never change at all during the entire duration of the multi-thread access 'session'.
You can force the JVM to flip the coin the way you want, by establishing so-called Happens-Before relationships. This is explicit terminology used by the JMM. If 2 lines of code have a Happens-Before relationship (one is defined as 'happening before' the other, as per the JMM's list of HB-establishing actions), then it is not possible (short of a bug in the JVM itself) to observe any side effect of the HA line whilst not also observing all side effects of the HB line. (That is to say: the 'happens before' line happens before the 'happens after' line as far as your code could ever tell, though it's a bit of a Schrödinger's cat situation. If your code doesn't actually look at these fields in a way that would let you tell the difference, then the JVM is free to not do it. And it won't; you can rely on the evil coin being evil: if the JMM grants the JVM a 'right', there will be some combination of CPU, OS, JVM release, version, and phase of the moon that combines to use it.)
A small selection of common HB/HA establishing conditions:
The first line inside a synchronized(lock) block is HA relative to the hitting of that block in any other thread.
Exiting a synchronized(lock) block is HB relative to any other thread entering any synchronized(lock) block, assuming the two locks are the same reference.
thread.start() is HB relative to the first line that thread will run.
The 'natural' HB/HA: line X is HB relative to line Y if X and Y are run by the same thread and X is 'before it' in your code. You can't write x = 5; y = x; and have y be set by a version of x that did not witness the x = 5 happening (of course, if another thread is also modifying x, all bets are off unless you have HB/HA with whatever line is doing that).
writes and reads of a volatile field establish HB/HA: a write to a volatile field is HB relative to any later read of that same field that observes it, though you usually can't control which write a given read will end up observing (see the sketch below).
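To make the volatile rule concrete, here is a small sketch (class and field names invented): if the reader thread sees ready == true, it is also guaranteed to see data == 42, because the volatile write/read pair establishes HB/HA and the plain write to data comes before the volatile write in program order.
class VolatileHandoff {
    int data;               // plain field
    volatile boolean ready; // volatile field used for the hand-off

    void writer() {          // run in thread A
        data = 42;           // happens before the volatile write below (program order)
        ready = true;        // volatile write
    }

    void reader() {          // run in thread B
        if (ready) {                  // volatile read; if it observes the write above...
            System.out.println(data); // ...this is guaranteed to print 42
        }
    }
}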
This explains the way your code may fail: The get() call establishes absolutely no HB/HA relationship with the other thread that is calling put(), and therefore the get() call may or may not use locally cached variants of the various fields that HashMap uses internally, depending on the evil coin (which is of course hitting some fields; it'll be private fields in the HashMap implementation someplace, so you don't know which ones, but HashMap obviously has long-lived state, which implies fields are involved).
So why haven't you actually managed to 'see' your code asplode like the JMM says it will? Because the coin is EVIL. You cannot rely on this line of reasoning: "I wrote some code that should fail if the synchronizations I need aren't happening the way I want. I ran it a whole bunch of times, and it never failed, therefore, apparently this code is concurrency-safe and I can use this in my production code". That is simply not ever actually reliable. That's why you need to be thinking: Evil! That coin is out to get me! Because if you don't, you may be tempted to write test code like this.
You should be scared of writing code where more than one thread interacts with the same field. You should be bending over backwards to avoid it. Use message queues. Do your chat between threads via databases, which have much nicer primitives for this stuff (transactions and isolation levels). Rewrite the code so that it takes a bunch of params up front and then runs without interacting with other threads via fields at all until it is all done, and then returns a result (and then use e.g. the fork/join framework to make it all work). Make your webserver performant and use all the cores simply by relying on the fact that every incoming request will be its own thread, so the only thing that needs to happen for you to use all the cores is for that many folks to hit your server at the same time. If you don't have enough requests, great! Your server isn't busy, so it doesn't matter that you aren't using all the cores.
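For instance, a rough sketch of the message-queue idea (the class name and numbers are made up): the two threads never touch a shared field themselves; the BlockingQueue does the hand-off and takes care of the HB/HA for you.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueHandoffSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    queue.put(i); // blocks if the queue is full
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    System.out.println("got " + queue.take()); // blocks until something arrives
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}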
If you truly decide that interacting with the same field from multiple threads is the right answer, you need to think like NASA programming Mars rovers on the lines that interact with those fields, because tests simply cannot be relied upon. It's not as hard as it sounds, especially if you keep the actual interaction with the relevant fields down to a minimum and keep asking yourself: "Have I established HB/HA?"
In this case, I think Petr figured it out correctly: System.out.println is hella slow and does various synchronizing actions. The JMM is a package deal, and transitive: once HB/HA is established, everything the HB line changed is observable to the code in the HA line; add in the natural rule, and all code that follows the HA line cannot possibly observe a universe where something any line before the HB line did is not yet visible. In other words, the System.out.println statements HB/HA with each other in some order, but you can't rely on that (System.out is not specced to synchronize; just about every implementation does, but you should not rely on implementation details, and I can trivially write you some Java code that is legal, compiles, runs, and breaks no contracts, yet loses that accidental synchronization, because you can replace System.out via System.setOut with a stream that does not synchronize!). The evil coin in this case took the shape of 'accidental' synchronization via intentionally unspecced behaviour of System.out.
The following explanation is more in line with the terminology used in the JMM. Could be useful if you want a more solid understanding of this topic.
2 Actions are conflicting when they access the same address and there is at least 1 write.
2 Actions are concurrent when they are not ordered by a happens-before relation (there is no happens-before edge between them).
2 Actions are in data race when they are conflicting and concurrent.
When there are data races in your program, weird problems can happen like unexpected reordering of instructions, visibility problems, or atomicity problems.
So what makes up the happens-before relation? If a volatile read observes a particular volatile write, then there is a happens-before edge between the write and the read. This means that the read will not only see that write, but everything that happened before that write. There are other sources of happens-before edges, like the release of a monitor and a subsequent acquire of the same monitor. And there is a happens-before edge between A and B when A occurs before B in the program order. Note: the happens-before relation is transitive, so if A happens-before B and B happens-before C, then A happens-before C.
In your case, you have a get/put operations which are conflicting since they access the same address(es) and there is at least 1 write.
The put/get actions are concurrent, since there is no happens-before edge between the write and the read: even though the put releases the monitor, the get doesn't acquire it.
Since the put/get operations are concurrent and conflicting, they are in data race.
The simplest way to fix this problem is to execute the map.get in a synchronized block (using the same monitor). This introduces the desired happens-before edge and makes the actions sequential instead of concurrent, and as a consequence the data race disappears.
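A sketch of that fix applied to the getVal from the snippet above, keeping check() exactly as it is (intrinsic locks are reentrant, so the nested synchronized block inside check() is harmless):
private int getVal(int key) {
    synchronized (instanceMap) { // same monitor as the put inside check()
        check(key);
        return instanceMap.get(key);
    }
}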
A better-performing solution would be to make use of a ConcurrentHashMap. Instead of a single central lock, there are many locks, and they can be acquired concurrently to improve scalability and performance. I'm not going to dig into the optimizations of ConcurrentHashMap because that would create confusion.
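A sketch of the ConcurrentHashMap variant (assuming the usual java.util.concurrent imports); since computeIfAbsent is atomic per key on a ConcurrentHashMap, the double-checked logic in check() can go away entirely:
private final Map<Integer, Integer> instanceMap = new ConcurrentHashMap<>();

private int getVal(int key) {
    // atomically inserts the value for this key if absent, then returns it
    return instanceMap.computeIfAbsent(key, k -> k);
}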
[Edit]
Apart from a data-race, your code also suffers from race conditions.
This is just a concept :
I want to have an int be updated by a function that is running inside multiple threads.
My function randomly chooses an int and checks if it's bigger than the existing int.
If so it updates the variable.
How do I ensure my values never clash, and that my function isn't looking at an older value that has not yet been updated?
I'm new to threads so any information is valuable.
This a job for AtomicInteger!
AtomicInteger atomic;
...
int newValue = atomic.accumulateAndGet(rand, Math::max);
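Fleshed out into a runnable sketch (the class name, thread count and iteration count are all arbitrary): each thread proposes random candidates, and accumulateAndGet atomically keeps the maximum seen so far.
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class MaxTracker {
    private static final AtomicInteger max = new AtomicInteger(Integer.MIN_VALUE);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) {
                int candidate = ThreadLocalRandom.current().nextInt();
                // atomically replaces the current value with the larger of the two
                max.accumulateAndGet(candidate, Math::max);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("max seen: " + max.get());
    }
}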
How do I ensure my values never clash?
If you are new to threads, then I would say, use a synchronized block:
final Object lock = new Object();
int global_r = 0;
...
while (true) {
int r = random.nextInt();
synchronized(lock) {
if (r > global_r) {
global_r = r;
break;
}
}
}
Only one thread at a time can enter the synchronized(lock)... block. A thread that tries when another thread already is in it will be forced to wait.
One should always try to have threads spend as little time as possible inside a synchronized(...) block, which is why I generate the random number outside of the block, and only do the test and the assignment inside it.
Another answer here suggests that you use AtomicInteger. That works too, but it is not what I would recommend for somebody who is taking their very first steps toward understanding how to use threads.
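If it helps, here is a self-contained version of the synchronized-block idea above (class and field names invented; the original keeps looping until it manages to raise the value, whereas this sketch just tries a fixed number of candidates so it terminates). Note that global_r has to be a field rather than a local variable, otherwise the threads could not share it at all.
import java.util.Random;

public class MaxWithLock {
    private static final Object lock = new Object();
    private static int global_r = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            Random random = new Random();
            for (int i = 0; i < 1_000; i++) {
                int r = random.nextInt(1_000_000); // generated outside the lock
                synchronized (lock) {
                    if (r > global_r) {
                        global_r = r;
                    }
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        synchronized (lock) {
            System.out.println("largest value seen: " + global_r);
        }
    }
}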
I have a question related to accessing individual elements via an AtomicReference.
If I have an int array and an atomic reference to it, will reading and writing individual elements of the array via the AtomicReference variable cause data races?
In the code below, num is an int array and aRnumbers is the atomic reference to it.
In threads 1 and 2, I access aRnumbers.get()[1] and increment it by 1.
I seem to be able to access individual elements via the atomic reference without data races, getting accurate results each time, with 22 as the output of aRnumbers.get()[1] in the main thread after both threads complete.
But since the atomic reference is defined on the array and not on the individual elements, shouldn't there be a data race in this case, leading to 21/22 as the output?
Isn't having data races in this case the motivation for having an AtomicIntegerArray data structure, which provides atomic access to each element?
Please find below the Java code that I am trying to run. Could anyone kindly let me know where I am going wrong?
import java.util.concurrent.atomic.AtomicReference;

public class AtomicReferenceExample {

    private static int[] num = new int[2];
    private static AtomicReference<int[]> aRnumbers;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(new MyRun1());
        Thread t2 = new Thread(new MyRun2());
        num[0] = 10;
        num[1] = 20;
        aRnumbers = new AtomicReference<int[]>(num);
        System.out.println("In Main before:" + aRnumbers.get()[0] + aRnumbers.get()[1]);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("In Main after:" + aRnumbers.get()[0] + aRnumbers.get()[1]);
    }

    static class MyRun1 implements Runnable {
        public void run() {
            System.out.println("In T1 before:" + aRnumbers.get()[1]);
            aRnumbers.get()[1] = aRnumbers.get()[1] + 1;
        }
    }

    static class MyRun2 implements Runnable {
        public void run() {
            System.out.println("In T2 before:" + aRnumbers.get()[1]);
            aRnumbers.get()[1] = aRnumbers.get()[1] + 1;
        }
    }
}
shouldn't there be a data race in this case leading to 21/22 as the output?
Indeed there is. Your threads are so short-lived that most likely they are not running at the same time.
Isn't having data races in this case the motivation for having an AtomicIntegerArray data structure, which provides atomic access to each element?
Yes, it is.
Could anyone kindly let me know where I am going wrong.
Starting a thread takes 1 - 10 milliseconds.
Incrementing a value like this, even without the code being JITed, is likely to take well under 50 microseconds. Once optimised, it would take about 50 - 200 nanoseconds per increment.
As starting a thread takes about 20 - 200x longer than the operation itself, the two increments won't be running at the same time, so the race almost never shows up.
Try incrementing the value a few million times, so that you get a race condition because both threads really are running at the same time.
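A sketch of that change, keeping the names from the question: each run() now does a million unsynchronized increments, and because the read-modify-write is not atomic, the final total usually comes out below the expected 2,000,020.
static class MyRun1 implements Runnable {
    public void run() {
        for (int k = 0; k < 1_000_000; k++) {
            // read-modify-write with no synchronization: increments can be lost
            aRnumbers.get()[1] = aRnumbers.get()[1] + 1;
        }
    }
}
// MyRun2 is modified the same way; after both joins,
// System.out.println(aRnumbers.get()[1]) typically prints less than 2_000_020.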
Incrementing an element consists of three steps:
Reading the value.
Incrementing the value.
Writing the value back.
A race condition can occur. Take an example: Thread 1 reads the value (let's say 20). Task switch. Thread 2 reads the value (20 again), increments it and writes it back (21). Task switch. The first thread increments the value and writes it back (21). So while 2 incrementing operations took place, the final value is still incremented only by one.
The data structure does not help in this case. A thread safe collection helps keeping the structure consistent when concurrent threads are adding, accessing and removing elements. But here you need to lock access to an element during the three steps of the increment operation.
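For contrast, a sketch of the same experiment with AtomicIntegerArray, which makes the increment of each element a single atomic operation, so no updates are lost even with the million-iteration loop:
import java.util.concurrent.atomic.AtomicIntegerArray;

private static final AtomicIntegerArray numbers = new AtomicIntegerArray(new int[] {10, 20});

static class MyRun1 implements Runnable {
    public void run() {
        for (int k = 0; k < 1_000_000; k++) {
            numbers.incrementAndGet(1); // atomic per element
        }
    }
}
// With both threads doing this, numbers.get(1) ends up at exactly 2_000_020.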
I have a class with a getter getInt() and a setter setInt() on a certain field, say field
Integer Int;
of an object of a class, say SomeClass.
The setInt() here is synchronized; getInt() isn't.
I am updating the value of Int from within multiple threads.
Each thread is getting the value Int, and setting it appropriately.
The threads aren't sharing any other resources in any way.
The code executed in each thread is as follows.
public void update(SomeClass c) {
    while (<condition-1>) {   // the conditions here and the calculation of
                              // k below don't have anything to do
                              // with the members of c
        if (<condition-2>) {
            // calculate k here
            synchronized (c) {
                c.setInt(c.getInt() + k);
                // System.out.println("in " + this.toString());
            }
        }
    }
}
The run() method is just invoking the above method on the members updated from within the constructor by the params passed to it:
public void run() { update(c); }
When I run this on large sequences, the threads aren't interleaving much; I see one thread executing for a long time without any other thread running in between.
There must be a better way of doing this.
I can't change the internals of SomeClass, or of the class invoking the threads.
How can this be done better?
TIA.
//=====================================
EDIT:
I'm not after manipulating the execution sequence of the threads. They all have the same priority. It's just that what I see in the outcome suggests that the threads aren't sharing the execution time evenly: once one of them takes over, it keeps executing. However, I can't see why this code should be doing this.
It's just that what I see in the outcome suggests that the threads aren't sharing the execution time evenly
Well, this is exactly what you don't want if you are after efficiency. Yanking a thread from being executed and scheduling another thread is generally very costly. Therefore it's actually advantageous to let one of them, once it takes over, keep executing. Of course, when this is overdone you could see higher throughput but longer response time. In theory. In practice, the JVM's thread scheduling is well tuned for almost all purposes, and you don't want to try changing it in almost all situations. As a rule of thumb, if you are interested in response times on the order of milliseconds, you probably want to stay away from messing with it.
tl;dr: It's not being inefficient, you probably want to leave it as it is.
EDIT:
Having said that, using an AtomicInteger may help performance and is, in my opinion, less error-prone than using a lock (the synchronized keyword). You need to be hitting that variable really very hard in order to get a measurable benefit, though.
The JDK provides a nice solution for multi-threaded int access, AtomicInteger:
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/atomic/AtomicInteger.html
As Enno Shioji has pointed out, letting one thread proceed might be the most efficient way to execute your code in some scenarios.
It depends on how much cost the thread synchronization imposes in relation to the other work of your code (which we don’t know). If you have a loop like:
while (<condition-1>)
    if (<condition-2>) {
        // calculate k here
        synchronized (c) {
            c.setInt(c.getInt() + k);
        }
    }
and the tests for condition-1 and condition-2 and the calculation of k are rather cheap compared to the synchronization cost, the HotSpot optimizer might decide to reduce the overhead by transforming the code into something like this:
synchronized (c) {
    while (<condition-1>)
        if (<condition-2>) {
            // calculate k here
            c.setInt(c.getInt() + k);
        }
}
(or into a rather more complicated structure, performing loop unrolling and spanning the synchronized block over multiple iterations). The bottom line is that the optimized code might block other threads longer but let the one owning the lock finish faster, resulting in an overall faster execution.
This does not mean that a single-threaded execution was the fastest way to handle your problem. It also doesn’t mean that using an AtomicInteger here would be the best option to solve the problem. It would create a higher CPU load and possibly a small acceleration but it doesn’t solve your real mistake:
It is completely unnecessary to update c within the loop at a high frequency. After all, your threads do not depend on seeing updates to c in a timely manner. It even looks like they are not using it at all. So the correct fix would be to move the update out of the loop:
int kTotal = 0;
while (<condition-1>)
    if (<condition-2>) {
        // calculate k here
        kTotal += k;
    }
synchronized (c) {
    c.setInt(c.getInt() + kTotal);
}
Now, all threads can run in parallel (assuming the code you haven’t posted here doesn’t contain inter-thread dependencies) and the synchronization cost is reduced to a minimum. You could still change it to an AtomicInteger as well but that’s not that important anymore.
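For completeness, a sketch of the AtomicInteger variant, assuming you are free to keep the running total in an AtomicInteger of your own rather than inside SomeClass (whose internals you said you cannot change):
static final AtomicInteger total = new AtomicInteger(); // hypothetical shared counter

int kTotal = 0;
while (<condition-1>)
    if (<condition-2>) {
        // calculate k here
        kTotal += k;
    }
total.addAndGet(kTotal); // one atomic update per thread, no lock needed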
Answering this:
i see one thread executing for long without any other thread running in between.
There must be a better way of doing this.
You cannot control how the threads will be executed. The JVM does this for you, and does not like you interfering in its work.
Still, you can look at yield() as an option, but that also does not ensure the same thread will not be picked again.
The java.lang.Thread.yield() method causes the currently executing thread object to temporarily pause and allow other threads to execute.
I've found it better to use wait() and notify() than yield(). Check out this example (seen in a book):
class Q {
    int n;
    boolean valueSet = false;

    synchronized int get() {
        while (!valueSet) {
            try {
                wait();
            } catch (InterruptedException e) {
                System.out.println("InterruptedException caught");
            }
        }
        valueSet = false;
        notify(); // if a thread is waiting in put, it is now notified
        return n;
    }

    synchronized void put(int n) {
        while (valueSet) {
            try {
                wait();
            } catch (InterruptedException e) {
                System.out.println("InterruptedException caught");
            }
        }
        this.n = n;
        valueSet = true;
        notify(); // if a thread is waiting in get, it is resumed now
    }
}
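A short usage sketch for that Q class (thread names invented): one thread puts values, the other gets them, and wait/notify keeps them in lock-step.
Q q = new Q();
Thread producer = new Thread(() -> {
    for (int i = 0; i < 5; i++) {
        q.put(i);
    }
});
Thread consumer = new Thread(() -> {
    for (int i = 0; i < 5; i++) {
        System.out.println("got " + q.get());
    }
});
producer.start();
consumer.start();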
Or you could try using sleep() and joining the threads at the end in main(), but that isn't a foolproof way.
You have a public void update(SomeClass c) method in your code, and this method is an instance method to which you pass the object as a parameter.
synchronized(c) in your code is doing nothing here. Let me show you with an example.
If you make different objects of this class and then run them as different threads, like this:
class A extends Thread {
    SomeClass c; // each A instance has its own c
    public void update(SomeClass c) { }
    public void run() {
        update(c);
    }
    public static void main(String args[]) {
        A t1 = new A();
        A t2 = new A();
        t1.start();
        t2.start();
    }
}
Then t1 and t2 each call update() on their own instance, and the reference variable c that you synchronize on is also different for the two threads: t1 locks its own c and t2 locks its own c. So the synchronization won't coordinate them.
Synchronization will work when you have something common for both the threads.
Something like,
class A extends Thread {
    static SomeClass c; // shared by all A instances
    public void update() {
        synchronized (c) {
            // work on the shared c here
        }
    }
    public void run() {
        update();
    }
    public static void main(String args[]) {
        A t1 = new A();
        A t2 = new A();
        t1.start();
        t2.start();
    }
}
This way the actual concept of synchronization will be applied.
I have a loop that is doing this:
WorkTask wt = new WorkTask();
wt.count = count;
Thread a = new Thread(wt);
a.start();
When the WorkTask runs, it increments count (count++),
but the WorkTask doesn't seem to change the count, and the variable doesn't seem to be shared between the two threads. What did I write wrong? Thanks.
Without seeing the code for WorkTask it's hard to pin down the problem, but most likely you are missing synchronization between the two threads.
Whenever you start a thread, there are no guarantees on whether the original thread or the newly created thread runs first, or how they are scheduled. The JVM/operating system could choose to run the original thread to completion and then start running the newly created thread, run the newly created thread to completion and then switch back to the original thread, or anything in between.
In order to control how the threads run, you have to synchronize them explicitly. There are several ways to control the interaction between threads - certainly too much to describe in a single answer. I would recommend the concurrency trail of the Java tutorials for a broad overview, but in your specific case the synchronization mechanisms to get you started will probably be Thread.join and the synchronized keyword (one specific use of this keyword is described in the Java tutorials).
Make the count variable static (it looks like each thread has its own version of the variable right now) and use a mutex to make it thread-safe (i.e. use the synchronized keyword).
From your description I came up with the following to demonstrate what I perceived to be your issue. This code should output 42, but it outputs 41.
public class Test {

    static class WorkTask implements Runnable {
        static int count;

        @Override
        public void run() {
            count++;
        }
    }

    public static void main(String... args) throws Exception {
        WorkTask wt = new WorkTask();
        wt.count = 41;
        Thread a = new Thread(wt);
        a.start();
        System.out.println(wt.count);
    }
}
The problem is that the print statement runs before the thread has had a chance to run.
To make the current thread (the thread that is going to read the variable count) wait until the new thread finishes, add the following after starting the thread:
a.join();
If you wish to get a result back from a thread, I would recommend you use the Callable interface and an ExecutorService to submit it, e.g.:
Future<Integer> future = Executors.newCachedThreadPool().submit(new Callable<Integer>() {
    int count = 1000;

    @Override
    public Integer call() throws Exception {
        // here go the operations you want to be executed concurrently
        return count + 1; // or whatever the result is
    }
});

// Here go the operations you need before the other thread is done.

System.out.println(future.get()); // Here you retrieve the result from the other thread.
                                  // If the result is not ready yet, the main (current)
                                  // thread will wait for it to finish.
This way you don't have to deal with the synchronization problems and so on.
You can read further about this in the Java documentation:
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/package-summary.html
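As a side note, not part of the original answer: with a lambda and an explicit pool reference you can also shut the executor down when you are done, so its threads don't linger (exception handling omitted, as in the snippet above):
ExecutorService pool = Executors.newCachedThreadPool();
try {
    Future<Integer> future = pool.submit(() -> {
        int count = 1000;
        return count + 1; // the concurrent work goes here
    });
    // do other work on the current thread here
    System.out.println(future.get()); // blocks until the result is ready
} finally {
    pool.shutdown(); // lets the pool's threads exit once the work is done
}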