The reordering explanation of the modified String.hashCode() - java

Refer to this blog and this topic.
It seems the code can be reordered even in a single thread?
public int hashCode() {
    if (hash == 0) { // (1)
        int off = offset;
        char val[] = value;
        int len = count;
        int h = 0;
        for (int i = 0; i < len; i++) {
            h = 31*h + val[off++];
        }
        hash = h;
    }
    return hash; // (2)
}
But it's really confusing to me: why could (2) return 0 while the check at (1) saw a non-zero value?
If I used this code in a single thread it wouldn't even work, so how can this happen?

The first point of java memory model is:
Each action in a thread happens before every action in that thread
that comes later in the program's order.
That's why observable reordering within a single thread is impossible. As long as the code is not synchronized, such guarantees are not provided for multiple threads.
Have a look at the String hashCode implementation. It first loads hash into a local variable and only then performs the check and the return. That's how such reorderings are prevented. But this does not save us from computing the hash multiple times.
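That single-read idiom can be sketched as a standalone class (a sketch for illustration, not the JDK source; the class and field names here are made up, but the loop uses the same polynomial as String.hashCode()):

```java
public class LazyHashDemo {
    // Sketch of the single-read idiom: read the field once into a local,
    // compute if needed, and return the local rather than re-reading the field.
    static class LazyHash {
        private final char[] value;
        private int hash; // defaults to 0; deliberately not volatile

        LazyHash(String s) { this.value = s.toCharArray(); }

        @Override
        public int hashCode() {
            int h = hash;              // single read of the shared field
            if (h == 0) {
                for (char c : value) {
                    h = 31 * h + c;    // same polynomial as String.hashCode()
                }
                hash = h;              // benign race: every thread stores the same value
            }
            return h;                  // never re-reads the field, so it can't return a stale 0
        }
    }

    public static void main(String[] args) {
        LazyHash lh = new LazyHash("abc");
        // Matches String's hash for the same characters.
        System.out.println(lh.hashCode() == "abc".hashCode());
    }
}
```

The worst that can happen here is recomputation (including the case where the hash genuinely is 0), never a wrong result.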

First question:
Will reordering of instructions happen in single threaded execution?
Answer:
Reordering of instructions is a compiler optimization. The order of instructions observed within one thread will be the same no matter how many threads are involved. Or: yes, it happens in single-threaded code, too.
Second question:
Why could this lead to a problem in multi-threading but not with one thread?
Answer:
The rules for this reordering are designed to guarantee that there are no strange effects in single-threaded or correctly synchronized code. That means: if we write code that is neither single-threaded nor correctly synchronized, there might be strange effects, and we have to understand the rules and take care to avoid those effects.
So again, as the author of the original blog said: don't try this unless you are really sure you understand those rules. Every compiler will be tested not to break String.hashCode(), but compilers won't be tested with your code.
Edit:
Third question:
And again what is really happening?
Answer:
As we look at the code, it will deal fine with not seeing changes made by another thread. So the first thing we have to understand is: a method doesn't return a variable, a constant, or a literal. A method returns whatever is on top of the stack when the program counter is reset. That value has to be initialized at some point in time, and it can be overwritten later on. This means it can first be initialized with the content of hash (0 at that moment), then another thread finishes the calculation and sets hash to something non-zero, and only then does the check hash == 0 happen. In that case the return value is never overwritten and 0 is returned.
So the point is: the return value can change independently of the returned variable, because they are not the same thing. Modern programming languages make them look the same to make our lives easier, but this abstraction breaks down when you don't adhere to the rules.

Related

java: If I assign a variable the same value as just before, does it change the memory or does JIT recognize this?

For example:
class Main {
    public boolean hasBeenUpdated = false;

    public void updateMain() {
        this.hasBeenUpdated = true;
        /*
        alternative:
        if (!hasBeenUpdated) {
            this.hasBeenUpdated = true;
        }
        */
    }

    public void persistUpdate() {
        this.hasBeenUpdated = false;
    }
}
Main instance = new Main();
instance.updateMain();
instance.updateMain();
instance.updateMain();
Does instance.hasBeenUpdated get updated 3 times in memory?
The reason I ask this is because I hoped to use a boolean("hasBeenUpdated") as a flag, and this could theoretically be "changed" many, many times, before I call "instance.persistUpdate()".
Does the JVM's JIT see this and perform an optimization?
JIT will collapse redundant statements only when it can PROVE that removing the code will not change the behavior. For example, if you did this:
int i;
i = 1;
i = 1;
i = 1;
The first two assignments are provably redundant, and the JIT could eliminate them. If instead it's
int i;
i = someMethodReturningInt();
i = someMethodReturningInt();
i = someMethodReturningInt();
the JIT has no way of knowing what someMethodReturningInt() does, including whether it has any side effects, so it must invoke the method 3 times. Whether or not it actually stores any but the final value is immaterial, as the code would behave the same either way. (Declaring volatile int i; instead would force it to store each value.)
Of course, if you're doing other things in between the method invocations, then it will be forced to perform the assignments.
The whole topic is part of the more general "happens-before" and "happens-after" concepts documented in the language and JVM specifications.
Optimization is NEVER supposed to change the behavior of a program, except possibly to reduce its runtime. There have been instances where bugs in the optimizer inadvertently did introduce errors, but these have been few and far between. In general you don't need to worry about whether optimization will break your code.
It can perform an optimization, yes.
As a matter of fact, it can issue a single write, or a single call to updateMain. All those three calls will be collapsed to one, only.
But for that to happen, JIT has to prove that nothing else breaks, or more specifically that code does not break the JMM rules. In this specific case, as far as I understand it, it does not.
Given that the choice is between JVM code that implements
move new value to variable
and
compare new value with current value of variable
if not the same
move new value to variable
the JVM would have to be fairly nutty to implement it the latter way. That's a pessimization, not an optimization.
The JVM to a large extent relies on the real machine to do simple operations, and real machines store values in memory when you tell them to store values in memory.

Java volatile loop

I am working on someone's code and came across the equivalent of this:
for (int i = 0; i < someVolatileMember; i++) {
    // Removed for SO
}
Where someVolatileMember is defined like this:
private volatile int someVolatileMember;
If some thread, A, is running the for loop and another thread, B, writes to someVolatileMember then I assume the number of iterations to do would change while thread A is running the loop which is not great. I assume this would fix it:
final int someLocalVar = someVolatileMember;
for (int i = 0; i < someLocalVar; i++) {
    // Removed for SO
}
My questions are:
1. Just to confirm that the number of iterations thread A does can be changed while the for loop is active if thread B modifies someVolatileMember
2. That the local non-volatile copy is sufficient to make sure that when thread A runs the loop, thread B cannot change the number of iterations
Your understanding is correct:
Per the Java Language Specification, the semantics of a volatile field ensure consistency between values seen after updates done between different threads:
The Java programming language provides a second mechanism, volatile fields, that is more convenient than locking for some purposes.
A field may be declared volatile, in which case the Java Memory Model ensures that all threads see a consistent value for the variable (§17.4).
Note that even without the volatile modifier, the loop count is likely to change depending on many factors.
Once a final variable is assigned, its value is never changed so the loop count will not change.
Well, first of all, that field is private (unless you omitted some methods that actually might alter it)...
That loop is a bit of nonsense the way it is written, assuming there are methods that actually might alter someVolatileMember; you might never know when it finishes, or if it does at all. It might even turn out to be a much more expensive loop than one over a non-volatile field, because volatile means invalidating caches and draining store buffers at the CPU level much more often than with ordinary variables.
Your solution of first reading the volatile into a local and using that is actually a very common pattern; it has also given birth to a very common anti-pattern: "check then act"... You read it into a local variable because if it later changes, you don't care; you are working with the freshest copy you had at that moment. So yes, your solution of copying it locally is fine.
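This local-snapshot pattern can be sketched as a small deterministic example (the class and field names are made up for illustration):

```java
public class SnapshotDemo {
    private volatile int limit = 5;

    // Copies the volatile once; later writes to 'limit' cannot change the loop bound.
    int iterationsRun() {
        final int snapshot = limit; // single volatile read
        int count = 0;
        for (int i = 0; i < snapshot; i++) {
            limit = 100;            // even this write can't extend the current loop
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(new SnapshotDemo().iterationsRun()); // always 5
    }
}
```

Had the loop condition read the volatile directly, the write inside the body would make it spin far longer.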
There are also performance implications, since the value of a volatile is never served from the most local cache; additional steps are taken by the CPU to ensure that modifications are propagated (it could be cache-coherence protocols, deferring reads to the L3 cache, or reading from RAM). There are also implications for other variables in the scope where the volatile variable is used (they get synced with main memory too, though I am not demonstrating that here).
Regarding performance, following code:
private static volatile int limit = 1_000_000_000;

public static void main(String[] args) {
    long start = System.nanoTime();
    for (int i = 0; i < limit; i++) {
        limit--; // modifying and reading, otherwise the compiler will optimise the volatile out
    }
    System.out.println(limit + " took " + (System.nanoTime() - start) / 1_000_000 + "ms");
}
... prints 500000000 took 4384ms
Removing volatile keyword from above will result in output 500000000 took 275ms.

Java - cache coherence between successive parallel streams?

Consider the following piece of code (which isn't quite what it seems at first glance).
static class NumberContainer {
    int value = 0;
    void increment() {
        value++;
    }
    int getValue() {
        return value;
    }
}

public static void main(String[] args) {
    List<NumberContainer> list = new ArrayList<>();
    int numElements = 100000;
    for (int i = 0; i < numElements; i++) {
        list.add(new NumberContainer());
    }
    int numIterations = 10000;
    for (int j = 0; j < numIterations; j++) {
        list.parallelStream().forEach(NumberContainer::increment);
    }
    list.forEach(container -> {
        if (container.getValue() != numIterations) {
            System.out.println("Problem!!!");
        }
    });
}
My question is: In order to be absolutely certain that "Problem!!!" won't be printed, does the "value" variable in the NumberContainer class need to be marked volatile?
Let me explain how I currently understand this.
In the first parallel stream, NumberContainer-123 (say) is incremented by ForkJoinWorker-1 (say). So ForkJoinWorker-1 will have an up-to-date cache of NumberContainer-123.value, which is 1. (Other fork-join workers, however, will have out-of-date caches of NumberContainer-123.value - they will store the value 0. At some point, these other workers' caches will be updated, but this doesn't happen straight away.)
The first parallel stream finishes, but the common fork-join pool worker threads aren't killed. The second parallel stream then starts, using the very same common fork-join pool worker threads.
Suppose, now, that in the second parallel stream, the task of incrementing NumberContainer-123 is assigned to ForkJoinWorker-2 (say). ForkJoinWorker-2 will have its own cached value of NumberContainer-123.value. If a long period of time has elapsed between the first and second increments of NumberContainer-123, then presumably ForkJoinWorker-2's cache of NumberContainer-123.value will be up-to-date, i.e. the value 1 will be stored, and everything is good. But what if the time elapsed between the first and second increments of NumberContainer-123 is extremely short? Then perhaps ForkJoinWorker-2's cache of NumberContainer-123.value might be out of date, storing the value 0, causing the code to fail!
Is my description above correct? If so, can anyone please tell me what kind of time delay between the two incrementing operations is required to guarantee cache consistency between the threads? Or if my understanding is wrong, then can someone please tell me what mechanism causes the thread-local caches to be "flushed" in between the first parallel stream and the second parallel stream?
It should not need any delay. By the time you're out of ParallelStream's forEach, all the tasks have finished. That establishes a happens-before relation between the increment and the end of forEach. All the forEach calls are ordered by being called from the same thread, and the check, similarly, happens-after all the forEach calls.
int numIterations = 10000;
for (int j = 0; j < numIterations; j++) {
    list.parallelStream().forEach(NumberContainer::increment);
    // here, everything is "flushed", i.e. the ForkJoinTask is finished
}
Back to your question about the threads: the trick here is that the threads are irrelevant. The memory model hinges on the happens-before relation, and the fork-join task ensures a happens-before relation between the call to forEach and the operation body, and between the operation body and the return from forEach (even if the returned value is Void).
See also Memory visibility in Fork-join
As @erickson mentions in comments,
If you can't establish correctness through happens-before relationships,
no amount of time is "enough." It's not a wall-clock timing issue; you
need to apply the Java memory model correctly.
Moreover, thinking about it in terms of "flushing" the memory is wrong, as there are many more things that can affect you. Flushing, for instance, is trivial: I have not checked, but I can bet that there's just a memory barrier on task completion; but you can still get wrong data because the compiler decided to optimise non-volatile reads away (the variable is not volatile and is not changed in this thread, so it's not going to change, so we can allocate it to a register, et voilà), or to reorder the code in any way allowed by the happens-before relation, etc.
Most importantly, all those optimizations can and will change over time, so even if you went to the generated assembly (which may vary depending on the load pattern) and checked all the memory barriers, it does not guarantee that your code will work unless you can prove that your reads happen-after your writes, in which case Java Memory Model is on your side (assuming there's no bug in JVM).
As for the great pain, it's the very goal of ForkJoinTask to make the synchronization trivial, so enjoy. It was (it seems) done by marking the java.util.concurrent.ForkJoinTask#status volatile, but that's an implementation detail you should not care about or rely upon.
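A scaled-down, runnable version of the example from the question (smaller sizes so it finishes quickly) illustrates the point; the final check never fires, because each forEach establishes the necessary happens-before edges:

```java
import java.util.ArrayList;
import java.util.List;

public class ParallelStreamDemo {
    static class NumberContainer {
        int value = 0; // deliberately not volatile
        void increment() { value++; }
        int getValue() { return value; }
    }

    public static void main(String[] args) {
        List<NumberContainer> list = new ArrayList<>();
        int numElements = 1_000;
        for (int i = 0; i < numElements; i++) {
            list.add(new NumberContainer());
        }
        int numIterations = 100;
        for (int j = 0; j < numIterations; j++) {
            // forEach returns only after every task has completed,
            // establishing happens-before with the next pass and the final check
            list.parallelStream().forEach(NumberContainer::increment);
        }
        // Safe without volatile: this check happens-after all increments.
        boolean problem = list.stream().anyMatch(c -> c.getValue() != numIterations);
        System.out.println(problem ? "Problem!!!" : "OK");
    }
}
```

Within one pass, each element is incremented by exactly one task, so there is no data race inside a pass either.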

How atomicity is achieved in the classes defined in java.util.concurrent.atomic package?

I was going through the source code of java.util.concurrent.atomic.AtomicInteger to find out how atomicity is achieved by the atomic operations provided by the class. For instance AtomicInteger.getAndIncrement() method source is as follows
public final int getAndIncrement() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return current;
    }
}
I am not able to understand the purpose of writing the sequence of operations inside an infinite for loop. Does it serve any special purpose in the Java Memory Model (JMM)? Please help me reach a descriptive understanding. Thanks in advance.
I am not able to understand the purpose of writing the sequence of operations inside a infinite for loop.
The purpose of this code is to ensure that the volatile field gets updated appropriately without the overhead of a synchronized lock. Unless there are a large number of threads all competing to update this same field, this will most likely spin a very few times to accomplish this.
The volatile keyword provides visibility and memory synchronization guarantees but does not in itself ensure atomic operations with multiple operations (test and set). If you are testing and then setting a volatile field there are race-conditions if multiple threads are trying to perform the same operation at the same time. In this case, if multiple threads are trying to increment the AtomicInteger at the same time, you might miss one of the increments. The concurrent code here uses the spin loop and the compareAndSet underlying methods to make sure that the volatile int is only updated to 4 (for example) if it still is equal to 3.
t1 gets the atomic-int and it is 0.
t2 gets the atomic-int and it is 0.
t1 adds 1 to it
t1 atomically tests to make sure it is 0, it is, and stores 1.
t2 adds 1 to it
t2 atomically tests to make sure it is 0, it is not, so it has to spin and try again.
t2 gets the atomic-int and it is 1.
t2 adds 1 to it
t2 atomically tests to make sure it is 1, it is, and stores 2.
Does it serve any special purpose in the Java Memory Model (JMM)?
No, it serves the purpose of the class and method definitions and uses the JMM and the language definitions around volatile to achieve its purpose. The JMM defines what the language does with the synchronized, volatile, and other keywords and how multiple threads interact with cached and central memory. This is mostly about native code interactions with operating system and hardware and is rarely, if ever, about Java code.
It is the compareAndSet(...) method which gets closer to the JMM by calling into the Unsafe class which is mostly native methods with some wrappers:
public final boolean compareAndSet(int expect, int update) {
    return unsafe.compareAndSwapInt(this, valueOffset, expect, update);
}
I am not able to understand the purpose of writing the sequence of operations inside an infinite for loop.
To understand why it is in an infinite loop, I find it helpful to understand what compareAndSet does and how it may return false.
Atomically sets the value to the given updated value if the current
value == the expected value.
Parameters:
expect - the expected value
update - the new value
Returns:
true if successful. False return indicates that the actual value was not
equal to the expected value
So you read the Returns message and ask how is that possible?
If two threads invoke incrementAndGet at close to the same time, they may both enter and see the value current == 1. Both threads will create a thread-local next == 2 and try to set it via compareAndSet. Only one thread will win, as documented, and the thread that loses must try again.
This is how CAS works: you attempt to change the value; if you fail, try again; if you succeed, continue on.
Now, simply declaring the field as volatile will not work, because incrementing is not atomic. So something like this is not safe from the scenario I explained:
volatile int count = 0;

public int incrementAndGet() {
    return ++count; // may return the same number more than once
}
Java's compareAndSet is based on CPU compare-and-swap (CAS) instructions see http://en.wikipedia.org/wiki/Compare-and-swap. It compares the contents of a memory location to a given value and, only if they are the same, modifies the contents of that memory location to a given new value.
In case of incrementAndGet we read the current value and call compareAndSet(current, current + 1). If it returns false it means that another thread interfered and changed the current value, which means that our attempt failed and we need to repeat the whole cycle until it succeeds.
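The same retry loop can be sketched on top of the public AtomicInteger.compareAndSet API (the helper method and demo class here are made up for illustration; the real AtomicInteger.incrementAndGet is implemented differently internally):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Re-implements an increment with the public CAS API: read, compute, CAS, retry on failure.
    static int incrementAndGet(AtomicInteger counter) {
        for (;;) {
            int current = counter.get();
            int next = current + 1;
            if (counter.compareAndSet(current, next)) {
                return next; // CAS succeeded: no other thread changed the value in between
            }
            // CAS failed: another thread won the race; re-read and retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) incrementAndGet(counter);
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();
        System.out.println(counter.get()); // always 40000: no increment is lost
    }
}
```

Unlike the unsafe volatile ++count above, every lost race is detected and retried, so no increment disappears.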

Not sure this usage of volatile makes sense, seems to have the same issue

Reading here:
JLS 8.3.1.4 volatile Fields
Without volatile it says
"then method two could occasionally print a value for j that is greater than the value of i, because the example includes no synchronization and"
class Test {
    static volatile int i = 0, j = 0;
    static void one() { i++; j++; }
    static void two() {
        System.out.println("i=" + i + " j=" + j);
    }
}
With volatile it says
"It is possible, however, that any given invocation of method two might observe a value for j that is much greater than the value observed for i, because method one might be executed many times between the moment when method two fetches the value of i and the moment when method two fetches the value of j."
It behaves 'properly' with synchronization, but I'm confused as to what benefit volatile brings here.
I thought volatile guarantees the order is preserved, so I would have thought that in SOME cases the value of i might be greater than j, but not the other way around, since that would imply the order of incrementing was changed.
Is that a typo in the doc? If not, please explain how j could be greater than i when using volatile.
It is saying that in the middle of method two, method one could run several times, and so the value read for j would be higher than the value read for i:
read i
run method 1
run method 1
read j
The volatile modifier tells the JIT compiler not to perform any optimizations that could affect the ordering of access to that variable. Writes to a volatile variable are always made visible to other threads instead of lingering only in a CPU register or store buffer.
Also two more points:
i++ is not a single operation but three: a) read variable i, b) increment, c) store. This triple operation is not atomic, meaning there is no guarantee that it completes without some other thread observing its inconsistent intermediate state. If you want that, look at AtomicInteger#getAndIncrement().
your method one() is not synchronized, therefore one thread can have completed i++ while a second thread prints, with the first thread only completing the j++ operation afterwards.
From what I understand, volatile guarantees that different threads will read the same variable instead of a cached copy, i.e., if you update a volatile variable in one thread, all others will see the update because they all reference the same memory location. A good example of this can be found at Why Volatile Matters.
The thing about method two is that it isn't atomic. It won't run in only one CPU cycle. You can divide it into different operations as @Sign stated. Even i++ isn't atomic, as it needs to read variable i, increment it, and store it back at i's location in memory.
You got the behavior of volatile right; you just didn't read what you quoted carefully:
because method one might be executed many times between the moment
when method two fetches the value of i and the moment when method two
fetches the value of j
The order is preserved in one(); it's just that in two(), i is fetched and printed, but in the time it takes to print i, both i and j might be incremented many times by calls to one() from other threads, and so the printed value for j will be higher than the printed value of i.
volatile makes reading OR writing thread-safe/memory-consistent. It doesn't make reading AND writing atomic, however. Using volatile is only fine if only one thread will ever update it.
I suggest you use AtomicInteger instead.
In "source code order", increments to i happen-before increments to j. So, a thread invoking method one() will always observe that i >= j. But, when other threads observe those variables, they can see things differently.
There are certain events that establish what the JLS calls "synchronization order." It makes sense to talk about these events (and only these events) as "happening before" others. Writing to a volatile variable is one of these. Without using volatile, it doesn't make any sense to say that i is incremented before j; those writes could be re-ordered, and that re-ordering can be observed by other threads.
A better example for what can happen without volatile would be this:
static void oneAndAHalf() { System.out.println("j=" + j + " i=" + i); }
Even though j appears to be incremented after i, and j is fetched before i, you could still observe j > i because the removal of volatile would permit the operations in one() to be reordered. Add volatile, and oneAndAHalf() will always show i >= j, as you expect.
If you take away volatile, then method two() could print a value for j that is greater than i for either of two reasons: because operations have been reordered, or because i and j are not treated atomically. The current two() method doesn't unambiguously illustrate the utility of volatile. Add volatile, and you'll get the same output, but the only reason is that the operations are not atomic.
To see a consistent view, where i == j, both methods could be synchronized. This would make the increment to both variables appear to be atomic.
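For completeness, a sketch of that synchronized variant (thread counts and iteration numbers are arbitrary): two() now snapshots i and j under the same lock as one(), so a reader always observes i == j.

```java
public class SyncPairDemo {
    static int i = 0, j = 0;

    static synchronized void one() { i++; j++; }

    // Snapshot both counters under the same lock, so the pair is always consistent.
    static synchronized int[] two() { return new int[] { i, j }; }

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            for (int k = 0; k < 100_000; k++) one();
        });
        writer.start();
        boolean consistent = true;
        while (writer.isAlive()) {
            int[] snap = two();       // reads a coherent {i, j} pair
            if (snap[0] != snap[1]) consistent = false;
        }
        writer.join();
        System.out.println(consistent && i == 100_000 && j == 100_000);
    }
}
```

The lock makes each one() atomic with respect to two(), which is exactly the property volatile alone cannot provide here.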
