I have two methods as follows:
class A {
    void method1() {
        someObj.setSomeAttribute(true);
        someOtherObj.callMethod(someObj);
    }

    void method2() {
        someObj.setSomeAttribute(false);
        someOtherObj.callMethod(someObj);
    }
}
where in another place that attribute is evaluated:
class B {
    void callMethod(Foo someObj) {
        if (someObj.getAttribute()) {
            // do one thing
        } else {
            // do another thing
        }
    }
}
Note that A.method1 and A.method2 are updating the attribute of the same object. If those 2 methods are run in 2 threads, will this work or will there be unexpected results?
Will there be unexpected results? Yes, guaranteed: things you would never want to have an impact on your app (the phase of the moon, the current song playing in your winamp, whether your dog is cuddling near the CPU, whether it's the 5th Tuesday of the month, and other such things) may affect its behaviour. Which you don't want.
What you've described is a data race under the Java memory model: the end result is that any Java implementation is free to return any of multiple values and nevertheless be operating properly according to the Java specification, even if it does so seemingly arbitrarily.
As a general rule, each thread gets an unfair coin. Unfair, in that it will try to mess with you: It'll flip correctly every time when you test it out, and then in production, and only when you're giving a demo to that crucial customer, it'll get ya.
Every time it reads from or writes to any field, it will flip this mean coin. On heads, it will use the actual field. On tails, it will use a local copy it made.
That's oversimplifying the model quite a bit, but it's a good start to try to get your head around how this works.
The way out is to force so-called 'happens-before' relationships: Java ensures that what you can observe matches these relationships. If event A has a happens-before relationship with event B, then anything A did will be observed, exactly as is, by B, guaranteed. No more coin flips.
Examples of establishing happens-before relationships involve using volatile, synchronized, and any methods that use these things internally.
NB: Of course, if your setSomeAttribute method, which you did not paste, includes some happens-before-establishing act, then there's no problem here, but as a rule a method called setX will not be doing that.
An example of one that doesn't:
class Example {
    private String attr;

    public void setAttr(String attr) {
        this.attr = attr;
    }
}
Some examples of ones that do:
If B.callMethod is executed on the same thread as method1, then you are guaranteed to observe at least the change method1 made (whether you also see what method2 did is still a coin flip). What you could not observe is the value the attribute had before either method1 or method2 ran, because code running in a single thread has happens-before ordering across its entire run: any statement executed earlier on a thread has a happens-before relationship with any statement executed later on that same thread.
Alternatively, the set method could look like:
class Example {
    private String attr;
    private final Object lock = new Object();

    public void setAttr(String attr) {
        synchronized (lock) {
            this.attr = attr;
        }
    }

    public String getAttr() {
        synchronized (lock) {
            return this.attr;
        }
    }
}
Now the get and set ops lock on the same object; that's one of the ways to establish happens-before. Which thread got to a lock first is observable behaviour: if method1's set got there before B's get, then you are guaranteed to observe method1's set.
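For completeness, here is a minimal sketch of a version that relies on volatile instead; a write to a volatile field happens-before any subsequent read of that field:
class Example {
    // A write to a volatile field happens-before any later read of it,
    // so another thread is guaranteed to observe the new value.
    private volatile String attr;

    public void setAttr(String attr) {
        this.attr = attr;
    }

    public String getAttr() {
        return this.attr;
    }
}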
More generally, sharing state between threads is extremely tricky and you should endeavour not to do so. The alternatives are:
Initialize all state before starting a thread, then do the job, and only when it is finished, relay all results back. Fork/join does this.
Use a messaging system that has great concurrency fundamentals, such as a database (which has transactions) or a message queue library.
If you must share state, try to write things in terms of the nice classes in j.u.concurrent (a small sketch follows).
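For example, a minimal sketch of the flag from this question expressed with java.util.concurrent (the SharedFlag name is purely illustrative):
import java.util.concurrent.atomic.AtomicBoolean;

class SharedFlag {
    private final AtomicBoolean attribute = new AtomicBoolean();

    void set(boolean value) {
        attribute.set(value);          // atomic write, visible to all threads
    }

    boolean get() {
        return attribute.get();        // atomic read
    }

    boolean flipIfCurrently(boolean expected) {
        // compareAndSet makes check-then-act atomic without an explicit lock
        return attribute.compareAndSet(expected, !expected);
    }
}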
I assume what you expected is that when you call A.method1, someObj.getAttribute() will return true in B.callMethod, and when you call A.method2, someObj.getAttribute() will return false in B.callMethod.
Unfortunately, this will not work, because between the call to setSomeAttribute and the call to callMethod, another thread may have changed the value of the attribute.
If you only use the attribute in callMethod, why not just pass the attribute instead of the Foo object? Code as follows:
class A {
    void method1() {
        someOtherObj.callMethod(true);
    }
}

class B {
    void callMethod(boolean flag) {
        if (flag) {
            // do one thing
        } else {
            // do another thing
        }
    }
}
If you must use Foo as the parameter, what you can do is make setSomeAttribute and callMethod atomic as a pair.
The easiest way to achieve this is to make the methods synchronized. Code as follows:
synchronized void method1() {
    someObj.setSomeAttribute(true);
    someOtherObj.callMethod(someObj);
}

synchronized void method2() {
    someObj.setSomeAttribute(false);
    someOtherObj.callMethod(someObj);
}
But this may hurt performance; you can achieve the same thing with a more fine-grained lock.
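For instance, a minimal sketch (reusing the Foo and B types from the question, with field initialization elided; the someObjLock name is my own) that locks on a dedicated object rather than on the whole A instance:
class A {
    private Foo someObj;        // initialization elided, as in the question
    private B someOtherObj;     // initialization elided, as in the question

    // Dedicated lock: only the set-then-call sequence is serialized,
    // rather than every synchronized method of A.
    private final Object someObjLock = new Object();

    void method1() {
        synchronized (someObjLock) {
            someObj.setSomeAttribute(true);
            someOtherObj.callMethod(someObj);
        }
    }

    void method2() {
        synchronized (someObjLock) {
            someObj.setSomeAttribute(false);
            someOtherObj.callMethod(someObj);
        }
    }
}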
Related
Is it just that two synchronised methods can't run concurrently?
For example, someone codes 2 Java classes.
public class A {
    public synchronized void a1() {
        // do something
    }

    public void a2() {
        // do something
    }

    public synchronized int a3() {
        int var = 0;
        // do something changing var
        return var;
    }
}

public class B {
    public void b1() {
        // do something
    }

    public synchronized int b2() {
        int var = 0;
        // do something changing var
        return var;
    }

    public void b3() {
        // do something
    }
}
With a main method,
public static void main(String[] args) {
    A firstA = new A();
    A secondA = new A();
    B firstB = new B();
    // create some threads that perform operations on firstA, secondA and firstB, and start them
}
Assuming they run in parallel and call methods on the created objects, and also assuming that none of these methods are prevented by other mechanisms from running concurrently.
Out of the methods in the example I gave, are there any that cannot be run concurrently if called at the same time by 2 different threads?
By not being able to run concurrently I mean they will be sequenced so that one finishes before the other starts.
Assuming that there is no other synchronization involved, the only calls that will be mutually excluded are:
firstA.a1() vs firstA.a1()
firstA.a1() vs firstA.a3()
secondA.a1() vs secondA.a1()
secondA.a1() vs secondA.a3()
firstB.b2() vs firstB.b2()
This is because synchronized blocks have to use the same lock for them to have an effect on each other. When you mark a whole method as synchronized, the lock object will be the value of this. So methods in firstA will be completely independent of methods of secondA or firstB, because they use different locks.
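In other words, a synchronized instance method locks on this, so for locking purposes it is equivalent to wrapping the body in a synchronized (this) block; a minimal illustration using a1() and a3() from the example:
public class A {
    public synchronized void a1() {
        // do something
    }

    // Locks the same monitor as a1() above: the A instance itself,
    // so a1() and a3() on the same instance exclude each other.
    public int a3() {
        synchronized (this) {
            int var = 0;
            // do something changing var
            return var;
        }
    }
}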
The methods in your code that aren't marked synchronized won't even attempt to acquire any lock at all, they can run whenever they want.
Please note that mutual exclusion is just one aspect of multithreading. For your programme to work correctly, you also have to consider things like visibility and safe publication.
Of course, the answer depends on what "something" you "do" when you "do something", and since you don't show that, no one can answer. That's what happens when you omit the significant information. Don't omit the significant information.
What we can say is that the synchronized methods in A do not use the same lock as the synchronized methods in B.
If multiple threads access the same data, then they must synchronize with each other. The only way for threads to synchronize with each other is to use the same means of synchronization. If that's the use of synchronized, then they must synchronize on the same monitor (lock). So if the A methods and B methods access the same data from different threads (and who can tell whether they do from what you show?), then they won't work as shown.
If any of the unsynchronized methods access the same data from different threads, and the part you didn't show does not have other effective synchronization means, then they won't work.
All this depends on what you decided not to show us.
All of them can run at the same time without interrupting each other,
because each thread handles a different instance; there's no need for any of them to wait on a synchronized method.
First the code fragments...
import java.util.Set;

import com.google.common.collect.Sets;   // Guava

final class AddedOrders {
    private final Set<Order> orders = Sets.newConcurrentHashSet();
    private final Set<String> ignoredItems = Sets.newConcurrentHashSet();
    private boolean added = false;

    public synchronized void clear() {
        added = false;
    }

    public synchronized void add(Order order) {
        added = orders.add(order);
    }

    public synchronized void remove(Order order) {
        if (added) orders.remove(order);
    }

    public synchronized void ban(String item) {
        ignoredItems.add(item);
    }

    public synchronized boolean has(Order order) {
        return orders.contains(order);
    }

    public synchronized Set<Order> getOrders() {
        return orders;
    }

    public synchronized boolean ignored(String item) {
        return ignoredItems.contains(item);
    }
}
private final AddedOrders added = new AddedOrders();
...
boolean subscribed;
int i = 10;
synchronized (added) {
    while (!(subscribed = client.getSubscribedOrders().containsAll(added.getOrders())) && (i > 0)) {
        Helper.out("...order not subscribed yet (try: %d)", i);
        Thread.sleep(200);
        i--;
    }
}
What I'd like to know...
Could someone point out which synchronized keywords are not necessary?
Of course this is not the full code, but assume that in the full project all methods are called, and that some combinations of methods are called in a check-the-value-first, then-modify style.
added (an instance of the class above) is accessed by multiple threads.
client is part of an external server API; I'm not entirely sure whether it is thread-safe yet, but I think it must be.
The concurrent hash set is a Google Guava class, but it is apparently based on ConcurrentHashMap, and the docs say it carries all the same concurrency guarantees.
But I don't completely understand what those guarantees are, even though I did some reading. Namely, I know it's not OK to just check and then set a value in a synchronized HashMap without synchronizing on the map with a synchronized block, but I do not know whether you can do that in a ConcurrentHashMap without an external synchronized block.
The only cases in your code where you really need synchronized are the ones where you test or update the added flag. You need the synchronized block to make sure that changes to the flag are visible across threads, and you also need to make sure that the added flag change is made in step with the change to the orders data structure. The synchronized keyword keeps another thread from barging in and doing something in between checking the flag and changing the data structure (the remove method could be broken like this if you remove the synchronization).
The code toward the end seems problematic, because you're locking on the added object and then not letting go of the lock, so there's no opportunity for any other thread to make the changes the loop is waiting for. Then again, it looks like you're actually waiting for another object to change, so this criticism may be invalid. Sleeping with a lock held seems dangerous, though. This kind of thing is why Object#wait releases the lock it acquired.
Also note that since you're passing references to the orders set out of the class, code outside this class can add orders. You should do something to protect this internal data, like returning it wrapped in an ImmutableSet (or a copy) so callers can't make changes.
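For example, the getOrders() method above could be made defensive along these lines (a sketch using Guava's ImmutableSet, consistent with the Sets usage already in the class):
// requires: import com.google.common.collect.ImmutableSet;
public synchronized Set<Order> getOrders() {
    // Hand out a snapshot copy instead of the live internal set;
    // callers can read it but cannot modify AddedOrders' internal state.
    return ImmutableSet.copyOf(orders);
}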
In general, synchronization is used when you want to impose some granularity on changes: you have two or more changes that must be made together, with no possibility of interleaving. The classic example is a check-then-act sequence, where you make a change based on the value of something else and you don't want another thread to execute in between the check and the action (otherwise the decision to act could be made, then the condition that allowed the action could change, making the action invalid). If individual values change but are unrelated to each other, you can make them volatile or use atomic variables, and reduce the amount of locking you have to do.
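As a concrete sketch of such a check-then-act sequence (the Account and withdraw names are purely illustrative):
class Account {
    private final Object lock = new Object();
    private int balance = 100;

    boolean withdraw(int amount) {
        synchronized (lock) {
            // check ...
            if (balance >= amount) {
                // ... then act: no other thread can change balance between the
                // check above and the update below, because both need `lock`.
                balance -= amount;
                return true;
            }
            return false;
        }
    }
}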
It's a valid point that the synchronized keyword could be removed in cases like the clear method, where the only thing that changes is the added flag, which could be made volatile. The purpose of the added flag continues to elude me, though: anything that adds a value that's already present flips the flag back to false, so it's not apparent that reasoning about any action based on the current value of the flag makes sense while this structure is being modified concurrently.
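For instance, assuming clear() really only needs to touch the flag and the flag need not change in lock-step with the orders set, a sketch:
// Assumes the flag does not have to change in step with the orders set.
private volatile boolean added = false;

public void clear() {
    added = false;   // volatile write: visible to other threads without a lock
}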
Without knowing the exact context it's hard to say, but in general, classes created without considering their being used with multiple threads probably need to be reworked extensively before being used in a concurrent environment.
So I've been reading on concurrency and have some questions along the way (guide I followed - though I'm not sure if it's the best source):
Processes vs. Threads: Is the difference basically that a process is the program as a whole while a thread can be a (small) part of a program?
I am not exactly sure why there is an interrupted() method and an InterruptedException. Why should the interrupted() method even be used? It just seems to me that Java adds an extra layer of indirection.
For synchronization (and specifically the example in that link), how does adding the synchronized keyword even fix the problem? I mean, if Thread A gives back its incremented c and Thread B gives back the decremented c and stores it to some other variable, I am not exactly sure how the problem is solved. This may be answering my own question, but is it supposed to be assumed that after one of the threads returns an answer, it terminates? And if that is the case, why would adding synchronized make a difference?
I read (from some random PDF) that if you start() two threads one after the other, you cannot guarantee that the first thread will run before the second thread. How would you guarantee it, though?
In synchronization statements, I am not completely sure what the point is of adding synchronized within the method. What is wrong with leaving it out? Is it because one expects both to mutate separately, but to be obtained together? Why not just leave the two non-synchronized?
Is volatile just a keyword for variables and is synonymous with synchronized?
In the deadlock problem, how does synchronized even help the situation? What makes this situation different from starting two threads that change a variable?
Moreover, where is the "wait"/lock for the other person to bowBack? I would have thought that bow() was blocked, not bowBack().
I'll stop here because I think if I went any further without these questions answered, I will not be able to understand the later lessons.
Answers:
Yes, a process is an operating system process that has an address space, a thread is a unit of execution, and there can be multiple units of execution in a process.
The interrupt() method and InterruptedException are generally used to wake up threads that are waiting, either to have them do something or to have them terminate (see the sketch after this list).
Synchronizing is a form of mutual exclusion or locking, something very standard and required in computer programming. Google these terms and read up on them and you will have your answer.
True, this cannot be guaranteed; you would have to use some mechanism, involving synchronization, that the threads use to make sure they run in the desired order. This would be specific to the code in the threads (one simple option is sketched after this list).
See answer to #3
Volatile is a way to make sure that a particular variable can be properly shared between different threads. It is necessary on multi-processor machines (which almost everyone has these days) to make sure the value of the variable is consistent between the processors. It is effectively a way to synchronize a single value.
Read about deadlocking in more general terms to understand this. Once you first understand mutual exclusion and locking you will be able to understand how deadlocks can happen.
I have not read the materials that you read, so I don't understand this one. Sorry.
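To illustrate the point about interrupt() (answer 2), a minimal, purely illustrative sketch of waking up a waiting thread:
Thread worker = new Thread(() -> {
    try {
        while (!Thread.currentThread().isInterrupted()) {
            Thread.sleep(1000);   // blocking call; throws InterruptedException when interrupted
            // ... do periodic work ...
        }
    } catch (InterruptedException e) {
        // Woken up while sleeping: restore the interrupt flag and finish.
        Thread.currentThread().interrupt();
    }
});
worker.start();
// later, from another thread:
worker.interrupt();   // asks the worker to stop waiting and terminate
And for the ordering question (answer 4), one simple, illustrative way to guarantee that the first thread runs before the second is to join on it before starting the second:
static void runInOrder() throws InterruptedException {
    Thread first = new Thread(() -> System.out.println("first"));
    Thread second = new Thread(() -> System.out.println("second"));

    first.start();
    first.join();     // blocks until the first thread has finished
    second.start();   // only now does the second thread begin running
}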
I find that the examples used to explain synchronization and volatility are contrived and difficult to understand the purpose of. Here are my preferred examples:
Synchronized:
private Value value;

public void setValue(Value v) {
    value = v;
}

public void doSomething() {
    if (value != null) {
        doFirstThing();
        int val = value.getInt(); // Will throw NullPointerException if another
                                  // thread calls setValue(null)
        doSecondThing(val);
    }
}
The above code is perfectly correct if run in a single-threaded environment. However with even 2 threads there is the possibility that value will be changed in between the check and when it is used. This is because the method doSomething() is not atomic.
To address this, use synchronization:
private Value value;
private final Object lock = new Object();

public void setValue(Value v) {
    synchronized (lock) {
        value = v;
    }
}

public void doSomething() {
    synchronized (lock) { // Prevents setValue being called by another thread.
        if (value != null) {
            doFirstThing();
            int val = value.getInt(); // Cannot throw NullPointerException.
            doSecondThing(val);
        }
    }
}
Volatile:
private boolean running = true;

// Called by Thread 1.
public void run() {
    while (running) {
        doSomething();
    }
}

// Called by Thread 2.
public void stop() {
    running = false;
}
To explain this requires knowledge of the Java Memory Model. It is worth reading about in depth, but the short version for this example is that threads have their own copies of variables, which are only synced to main memory at a synchronized block or when a volatile variable is accessed. The Java compiler (specifically the JIT) is allowed to optimise the code into this:
public void run() {
    while (true) { // Will never end
        doSomething();
    }
}
To prevent this optimisation you can declare the variable volatile, which forces the thread to go to main memory every time it reads the variable. Note that this is unnecessary if you are using synchronized statements, as both keywords force a sync to main memory.
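With that fix applied, the declaration in the example above simply becomes:
private volatile boolean running = true; // writes by stop() are now guaranteed visible to run()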
I haven't addressed your questions directly, since Francis already did so. I hope these examples give you a better idea of the concepts than the examples you saw in the Oracle tutorial.
I have class with 2 synchronized methods:
class Service {
    public synchronized void calc1() { /* ... */ }
    public synchronized void calc2() { /* ... */ }
}
Both take considerable time to execute. The question is whether executing these methods would block each other, i.e. can both methods be executed in parallel in different threads?
No they can't be executed in parallel on the same service - both methods share the same monitor (i.e. this), and so if thread A is executing calc1, thread B won't be able to obtain the monitor and so won't be able to run calc2. (Note that thread B could call either method on a different instance of Service though, as it will be trying to acquire a different, unheld monitor, since the this in question would be different.)
The simplest solution (assuming you want them to run independently) would be to do something like the following using explicit monitors:
class Service {
    private final Object calc1Lock = new Object();
    private final Object calc2Lock = new Object();

    public void calc1() {
        synchronized (calc1Lock) {
            // ... method body
        }
    }

    public void calc2() {
        synchronized (calc2Lock) {
            // ... method body
        }
    }
}
The "locks" in question don't need to have any special abilities other than being Objects, and thus having a specific monitor. If you have more complex requirements that might involve trying to lock and falling back immediately, or querying who holds a lock, you can use the actual Lock objects, but for the basic case these simple Object locks are fine.
Yes, you can execute them in two different threads without messing up your class internals, but no, they won't run in parallel - only one of them will be executed at a time.
No, they cannot be. In this case you might use a synchronized block instead of synchronizing the whole method. Don't forget to synchronize on different objects.
Of the two synchronization strategies below, which one is optimized (in terms of processing and generated bytecode)? Also, in which scenarios should one use each of them?
public synchronized void addName(String name) {
    lastName = name;
    nameCount++;
    nameList.add(name);
}
or
public void addName(String name) {
    synchronized (this) {
        lastName = name;
        nameCount++;
        nameList.add(name);
    }
}
Also, what is the advisable way to handle concurrency:
using java.util.concurrent package
using the above low level methods
using Job or UIJob API (if working in eclipse PDE environment)
Thanks
which one is optimized (as in processing and generated byte code)
According to this IBM DeveloperWorks Article Section 1, a synchronized method generates less bytecode when compared to a synchronized block. The article explains why.
Snippet from the article:
When the JVM executes a synchronized method, the executing thread identifies that the method's method_info structure has the ACC_SYNCHRONIZED flag set, then it automatically acquires the object's lock, calls the method, and releases the lock. If an exception occurs, the thread automatically releases the lock.
Synchronizing a method block, on the other hand, bypasses the JVM's built-in support for acquiring an object's lock and exception handling and requires that the functionality be explicitly written in byte code. If you read the byte code for a method with a synchronized block, you will see more than a dozen additional operations to manage this functionality. Listing 1 shows calls to generate both a synchronized method and a synchronized block:
Edited to address first comment
To give other SOers credit, here is a good discussion about why one would use a sync. block. I am sure you can find more interesting discussions if you search around :)
Is there an advantage to use a Synchronized Method instead of a Synchronized Block?
I personally have not had to use a sync. block to lock on another object other than this, but that is one use SOers point out about sync. blocks.
Your two updated pieces of code are semantically identical. However, using a synchronized block as in the second piece allows you more control, as you could synchronize on a different object or, indeed, not synchronize the parts of the method that don't need to be.
Using java.util.concurrent is very much preferable to using synchronization primitives wherever possible, since it allows you to work at a higher level of abstraction and to use code that was written by very skilled people and tested intensively.
If you're working in the Eclipse PDE, using its APIs is most likely preferable, as it ties in with the rest of the platform.
This totally does not matter from any efficiency point of view.
The point of having blocks is you can specify your own lock. You can choose a lock that is encapsulated within the object, as opposed to using this, with the consequence that you have more control over who can acquire the lock (since you can make that lock inaccessible from outside your object).
If you use this as the lock (whether you put synchronized on the method or use the block), anything in your program can acquire the lock on your object, and it's much harder to reason about what your program is doing.
Restricting access to the lock buys you a massive gain in your ability to reason about the program; that kind of certainty is much more valuable than shaving off a bytecode somewhere.
I know this might be an example, but if you plan on writing such code - think again.
To me it looks like you are duplicating information, and you should not do that unless profiling shows you need it for performance (which you almost never do).
If you really need this to be code that runs in several threads, I'd make the nameList into a synchronized list using Collections.synchronizedList (sketched below).
The last name should be a getter and it could pick the last element in the list.
The nameCount should be the size of the list.
If you do stuff like you have done now, you must also synchronize the access to all of the places where the variables are referenced, and that would make the code a lot less readable and harder to maintain.
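A minimal sketch of that suggestion (the NameList class name is illustrative; it assumes the names are plain Strings, as in the question):
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class NameList {
    private final List<String> nameList =
            Collections.synchronizedList(new ArrayList<String>());

    public void addName(String name) {
        nameList.add(name);
    }

    public int getNameCount() {
        return nameList.size();           // the count is derived from the list
    }

    public String getLastName() {
        // Compound check-then-read: hold the list's own lock so the size
        // cannot change between the isEmpty check and the get.
        synchronized (nameList) {
            return nameList.isEmpty() ? null : nameList.get(nameList.size() - 1);
        }
    }
}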
You could remove all locking:
import java.util.concurrent.atomic.AtomicReference;

class Names {
    private final AtomicReference<Node> names = new AtomicReference<Node>();

    public void addName(final String name) {
        Node old = names.get();
        // Retry until the new node is swapped in atomically (lock-free).
        while (!names.compareAndSet(old, new Node(old, name))) {
            old = names.get();
        }
    }

    public String getName() {
        final Node node = names.get();
        return (node == null) ? null : node.name;
    }

    static class Node {
        final Node parent;
        final String name;

        Node(final Node parent, final String name) {
            this.parent = parent;
            this.name = name;
        }

        int count() {
            int count = 0;
            Node p = parent;
            while (p != null) {
                count++;
                p = p.parent;
            }
            return count;
        }
    }
}
This is basically a Treiber stack implementation. You can get the size, the current name, and you can easily implement an Iterator (albeit in reverse order to the one in your example) over the contents. Alternative copy-on-write containers could be used as well, depending on your needs.
Impossible to say, since the two code snippets aren't equivalent.
The difference (the lack of synchronization of the call to add) may be significant, or it might not be. From what you've given us, it's impossible to say.