C++ (and possibly Java): how are objects locked for synchronization?

When objects are locked in languages like C++ and Java, where (at a low level) does this actually happen? I don't think it has anything to do with the CPU/cache or RAM. My best guesstimate is that it occurs somewhere in the OS. Would it be within the same part of the OS that performs context switching?
I am referring to locking objects, synchronizing on method signatures (Java) etc.
It could be that the answer depends on which particular locking mechanism is used.

Locking involves a synchronisation primitive, typically a mutex. While naively speaking a mutex is just a boolean flag that says "locked" or "unlocked", the devil is in the detail: The mutex value has to be read, compared and set atomically, so that multiple threads trying for the same mutex don't corrupt its state.
But apart from that, instructions have to be ordered properly so that the effects of a read and write of the mutex variable are visible to the program in the correct order and that no thread inadvertently enters the critical section when it shouldn't because it failed to see the lock update in time.
There are two aspects to memory access ordering: One is done by the compiler, which may choose to reorder statements if that's deemed more efficient. This is relatively trivial to prevent, since the compiler knows when it must be careful. The far more difficult phenomenon is that the CPU itself, internally, may choose to reorder instructions, and it must be prevented from doing so when a mutex variable is being accessed for the purpose of locking. This requires hardware support (e.g. a "lock bit" which causes a pipeline flush and a bus lock).
Finally, if you have multiple physical CPUs, each CPU will have its own cache, and it becomes important that state updates are propagated to all CPU caches before any executing instructions make further progress. This again requires dedicated hardware support.
As you can see, synchronisation is a (potentially) expensive business that really gets in the way of concurrent processing. That, however, is simply the price you pay for having one single block of memory on which multiple independent contexts perform work.
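To make the atomic read-compare-set idea concrete, here is a minimal sketch in Java (the TrivialSpinLock class is invented for illustration; real mutexes park waiting threads instead of spinning):

import java.util.concurrent.atomic.AtomicBoolean;

// Illustration only: a naive spinlock built on an atomic flag.
class TrivialSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // compareAndSet is the atomic read-compare-write described above;
        // it also acts as a memory barrier, so writes made inside the
        // critical section become visible to the next locker.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // busy-wait until the flag is free
        }
    }

    void unlock() {
        locked.set(false); // volatile write publishes our changes
    }
}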

There is no concept of object locking in C++. You will typically implement your own on top of OS-specific functions, or use synchronization primitives provided by libraries (e.g. boost::scoped_lock). If you have access to C++11, you can use the locks provided by the threading library, which has an interface similar to Boost's; take a look.
In Java the same is done for you by the JVM.

The java.lang.Object has a monitor built into it; that's what the synchronized keyword locks on. JDK 5 added the java.util.concurrent package, which gives you more fine-grained choices.
This has a nice explanation:
http://www.artima.com/insidejvm/ed2/threadsynch.html
I haven't written C++ in a long time, so I can't speak to how to do it in that language. It wasn't supported by the language when I last wrote it. I believe it was all 3rd party libraries or custom code.
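For concreteness, a small hedged sketch of the two options mentioned in this thread: the built-in per-object monitor via synchronized, and an explicit java.util.concurrent lock (the Counter class is made up for illustration):

import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private final Object monitor = new Object();        // any object can serve as a monitor
    private final ReentrantLock lock = new ReentrantLock();
    private int value;

    void incrementWithMonitor() {
        synchronized (monitor) {   // uses the monitor built into java.lang.Object
            value++;
        }
    }

    void incrementWithLock() {
        lock.lock();               // java.util.concurrent alternative
        try {
            value++;
        } finally {
            lock.unlock();
        }
    }
}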

It does depend on the particular locking mechanism (typically a semaphore), but you cannot be sure, since it is implementation dependent.

All architectures I know of use an atomic compare-and-swap to implement their synchronization primitives. See, for example, AbstractQueuedSynchronizer, which was used in some JDK versions to implement Semaphore and ReentrantLock.
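As a hedged illustration, this is roughly what compare-and-swap looks like when exposed to Java code via java.util.concurrent.atomic (the CasDemo class is made up; the JDK classes mentioned above use the same primitive internally):

import java.util.concurrent.atomic.AtomicInteger;

class CasDemo {
    // 0 = unlocked, 1 = locked; compareAndSet is a single atomic CAS
    private final AtomicInteger state = new AtomicInteger(0);

    boolean tryAcquire() {
        // succeeds only if no other thread changed state since we saw 0
        return state.compareAndSet(0, 1);
    }

    void release() {
        state.set(0);
    }
}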

Related

Why can't the JVM add synchronized/volatile/Lock at runtime?

Since all Java applications are ultimately run by the JVM, why can't the JVM wrap single-threaded code so that it becomes safe for multiple threads at runtime, depending on how many threads are running or accessing a part of the code?
The JVM is surely aware of the number of threads running, and it surely knows which classes are Threads and which parts of the code can be accessed by multiple threads.
What are the reasons this cannot be implemented or what can make this complex?
Simply spraying synchronized/volatile/Lock on anything that's used by multiple threads does not result in correct multi-threaded behavior. How would the runtime know the correct granularity of locks, for example? How would it avoid deadlocks?
The early collection classes, e.g. Vector and Hashtable, were designed with a similarly naive view of concurrency. Everything was synchronized. It turns out that you could still get into trouble quite easily, however. For example, suppose you wanted to check that a Vector contained at least one element, and if so remove one. Each of the calls to the Vector would be synchronized, but another thread could execute between these calls, so you could end up with race condition bugs. (This is what I was referring to when I mentioned granularity of locks, earlier.)
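A hedged sketch of that check-then-act race, and the usual client-side fix of holding the Vector's own monitor across both calls (the class and method names here are invented for illustration):

import java.util.Vector;

class VectorRace {
    private final Vector<String> items = new Vector<>();

    // Broken: each call is synchronized on its own, but another thread can
    // remove the last element between isEmpty() and remove(0).
    String takeUnsafe() {
        if (!items.isEmpty()) {
            return items.remove(0);
        }
        return null;
    }

    // Fixed: hold the Vector's monitor across the check and the act.
    String takeSafe() {
        synchronized (items) {
            return items.isEmpty() ? null : items.remove(0);
        }
    }
}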
Not possible in general for the JVM
Automatically adding synchronization usually does not have a positive effect. Synchronization costs both performance and memory: performance, because the processor must acquire and check the underlying locks, and memory, because the locks must be stored somewhere. If the runtime added locks everywhere, the program would effectively run single-threaded (because every method could only be accessed by one thread at a time), but now with higher CPU costs and a larger memory footprint (because of the lock handling).
The JVM can remove locks automatically
Usually the Java runtime does not have enough information to add locks in a clever way. But it can do the opposite: with so-called "escape analysis" it can check whether an object never escapes a certain code block (and is never shared with another thread). If that is the case, several optimizations are applied; one of them is that the VM removes all synchronization on that object.
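A rough sketch of the kind of code where this applies (whether a particular JVM actually elides the lock is an implementation detail; the class name here is made up):

class LockElisionExample {
    // StringBuffer's methods are synchronized, but sb never escapes this
    // method, so escape analysis lets the JIT drop the locking entirely.
    static String join(String a, String b) {
        StringBuffer sb = new StringBuffer();
        sb.append(a);   // synchronized call; the lock may be elided
        sb.append(b);
        return sb.toString();
    }
}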
Database engines can do it
There are systems that have enough information to automatically apply locks: database management systems. The more sophisticated database engines use a technique called "multi-version concurrency control". With this technique, locks are only needed for writing data, not for reading it. So fewer locks are needed than with a traditional approach, and more code can run in parallel. But this comes at a cost: sometimes the degree of parallelism becomes too high and the system ends up in an inconsistent state. The system then undoes some of the changes and repeats them at a later time.
Automatic locks with STM and Clojure
This approach can be brought to the JVM in a (to some degree) automatic way; it is then called "software transactional memory". This is very close to your idea of automatic locks and still leaves enough room for parallelism to be useful. On the JVM, the language Clojure uses software transactional memory.
So while the JVM cannot add locks automatically in general, Clojure enables this to a certain degree. Try it and see how well it serves you.
I can think of the following reasons:
An application may use static variables, so two applications that partially share the classes they use might interfere with each other by changing shared state.
What you actually want is implemented by a Java EE container that runs several applications simultaneously. It seems that you are suggesting a "Java SE container" (I have no idea whether this term exists). Try suggesting it to Oracle; it could be a cool JSR!

Does the Java Memory Model (JSR-133) imply that entering a monitor flushes the CPU data cache(s)?

There is something that bugs me about the Java memory model (if I even understand everything correctly). If there are two threads A and B, there are no guarantees that B will ever see a value written by A, unless both A and B synchronize on the same monitor.
For any system architecture that guarantees cache coherency between threads, there is no problem. But if the architecture does not support cache coherency in hardware, this essentially means that whenever a thread enters a monitor, all memory changes made before must be committed to main memory, and the cache must be invalidated. And it needs to be the entire data cache, not just a few lines, since the monitor has no information about which variables in memory it guards.
But that would surely impact the performance of any application that needs to synchronize frequently (especially things like job queues with short-running jobs). So can Java work reasonably well on architectures without hardware cache coherency? If not, why doesn't the memory model make stronger guarantees about visibility? Wouldn't it be more efficient if the language required information about what is guarded by a monitor?
As I see it, the memory model gives us the worst of both worlds: the absolute need to synchronize, even if cache coherency is guaranteed in hardware, and on the other hand bad performance on incoherent architectures (full cache flushes). So shouldn't it be more strict (require information about what is guarded by a monitor) or more loose, restricting potential platforms to cache-coherent architectures?
As it is now, it doesn't make too much sense to me. Can somebody clear up why this specific memory model was chosen?
EDIT: My use of "strict" and "loose" was a bad choice in retrospect. I used "strict" for the case where fewer guarantees are made and "loose" for the opposite. To avoid confusion, it's probably better to speak in terms of stronger or weaker guarantees.
the absolute need to synchronize, even if cache coherency is guaranteed in hardware
Yes, but then you only have to reason against the Java Memory Model, not against the particular hardware architecture that your program happens to run on. Plus, it's not only about the hardware: the compiler and the JIT themselves might reorder the instructions, causing visibility issues. Synchronization constructs in Java address visibility and atomicity consistently at all possible levels of code transformation (e.g. compiler/JIT/CPU/cache).
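As a minimal illustration of that visibility problem (the StopFlag class is invented here, not part of the question): without volatile or synchronization there is no happens-before edge, so the reader may never observe the writer's update:

class StopFlag {
    private boolean stop;                  // not volatile, not synchronized

    void requestStop() { stop = true; }    // called by the writer thread

    void runLoop() {                       // executed by the reader thread
        while (!stop) {
            // Without a happens-before edge, the JIT may hoist the read of
            // 'stop' out of the loop, and this loop can spin forever.
        }
    }
}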
and on the other hand bad performance on incoherent architectures (full cache flushes)
Maybe I misunderstood something, but with incoherent architectures you have to synchronize critical sections anyway. Otherwise, you'll run into all sorts of race conditions due to the reordering. I don't see why the Java Memory Model makes the matter any worse.
shouldn't it be more strict (require information what is guarded by a monitor)
I don't think it's possible to tell the CPU to flush any particular part of the cache at all. The best the compiler can do is emit memory fences and let the CPU decide which parts of the cache need flushing - that's still more coarse-grained than what you're looking for, I suppose. Even if finer-grained control were possible, I think it would make concurrent programming even more difficult (it's difficult enough already).
AFAIK, the Java 5 MM (just like the .NET CLR MM) is more "strict" than the memory models of common architectures like x86 and IA64. Therefore, it makes reasoning about it relatively simpler. Yet, it obviously shouldn't offer something closer to sequential consistency, because that would hurt performance significantly, as fewer compiler/JIT/CPU/cache optimizations could be applied.
Existing architectures guarantee cache coherency, but they do not guarantee sequential consistency - the two things are different. Since seq. consistency is not guaranteed, some reorderings are allowed by the hardware and you need critical sections to limit them. Critical sections make sure that what one thread writes becomes visible to another (i.e., they prevent data races), and they also prevent the classical race conditions (if two threads increment the same variable, you need that for each thread the read of the current value and the write of the new value are indivisible).
Moreover, the execution model isn't as expensive as you describe. On most existing architectures, which are cache-coherent but not sequentially consistent, when you release a lock you must flush pending writes to memory, and when you acquire one you might need to do something to make sure future reads will not read stale values - mostly that means just preventing reads from being moved too early, since the cache is kept coherent; but reads must still not be moved.
Finally, you seem to think that Java's Memory Model (JMM) is peculiar, while its foundations are nowadays fairly state-of-the-art and similar to those of Ada, POSIX locks (depending on the interpretation of the standard), and the C/C++ memory model. You might want to read the JSR-133 cookbook, which explains how the JMM is implemented on existing architectures: http://g.oswego.edu/dl/jmm/cookbook.html.
The answer would be that most multiprocessors are cache-coherent, including big NUMA systems, which are almost (?) always ccNUMA.
I think you are somewhat confused as to how cache coherency is accomplished in practice. First, caches may be coherent/incoherent with respect to several other things on the system:
Devices
(Memory modified by) DMA
Data caches vs instruction caches
Caches on other cores/processors (the one this question is about)
...
Something has to be done to maintain coherency. When working with devices and DMA, on architectures whose caches are incoherent with respect to DMA/devices, you would either bypass the cache (and possibly the write buffer), or invalidate/flush the cache around operations involving DMA/devices.
Similarly, when dynamically generating code, you may need to flush the instruction cache.
When it comes to CPU caches, coherency is achieved using a coherency protocol, such as MESI, MOESI, etc. These protocols define messages to be sent between caches in response to certain events (e.g. invalidate requests to other caches when a non-exclusive cache line is modified).
While this is sufficient to maintain (eventual) coherency, it doesn't guarantee ordering, or that changes are immediately visible to other CPUs. Then, there are also write buffers, which delay writes.
So, each CPU architecture provides ordering guarantees (e.g. accesses before an aligned store cannot be reordered after the store) and/or provides instructions (memory barriers/fences) to request such guarantees. In the end, entering/exiting a monitor doesn't entail flushing the cache, but it may entail draining the write buffer and/or stalling until pending reads complete.
The caches that the JVM has direct access to are really just the CPU registers. Since there aren't many of them, flushing them upon monitor exit isn't a big deal.
EDIT: (in general) the memory caches are not under the control of the JVM; the JVM cannot choose to read/write/flush these caches, so forget about them in this discussion.
Imagine each CPU has 1,000,000 registers. The JVM happily exploits them to do crazy fast computations - until it bumps into a monitor enter/exit and has to flush 1,000,000 registers to the next cache layer.
If we lived in that world, either Java would have to be smart enough to analyze which objects aren't shared (the majority of objects aren't), or it would have to ask programmers to do that.
The Java memory model is a simplified programming model that allows average programmers to write OK multithreaded algorithms. By 'simplified' I mean there might be 12 people in the entire world who have really read Chapter 17 of the JLS and actually understood it.

Why do all Java Objects have wait() and notify() and does this cause a performance hit?

Every Java Object has the methods wait() and notify() (and additional variants). I have never used these and I suspect many others haven't. Why are these so fundamental that every object has to have them and is there a performance hit in having them (presumably some state is stored in them)?
EDIT to emphasize the question. If I have a List<Double> with 100,000 elements, then every Double has these methods, since they are inherited from Object. But it seems unlikely that all of them need to know about the threads that manage the List.
EDIT: excellent and useful answers. @Jon has a very good blog post which crystallised my gut feelings. I also agree completely with @Bob_Cross that you should show a performance problem before worrying about it. (Also, by the nth law of successful languages, if it had been a performance hit then Sun or someone would have fixed it.)
Well, it does mean that every object has to potentially have a monitor associated with it. The same monitor is used for synchronized. If you agree with the decision to be able to synchronize on any object, then wait() and notify() don't add any more per-object state. The JVM may allocate the actual monitor lazily (I know .NET does) but there has to be some storage space available to say which monitor is associated with the object. Admittedly it's possible that this is a very small amount (e.g. 3 bytes) which wouldn't actually save any memory anyway due to padding of the rest of the object overhead - you'd have to look at how each individual JVM handled memory to say for sure.
Note that just having extra methods doesn't affect performance (other than very slightly, due to the code obviously being present somewhere). It's not like each object or even each type has its own copy of the code for wait() and notify(). Depending on how the vtables work, each type may end up with an extra vtable entry for each inherited method - but that's still only on a per-type basis, not a per-object basis. That's basically going to get lost in the noise compared with the bulk of the storage, which is for the actual objects themselves.
Personally, I feel that both .NET and Java made a mistake by associating a monitor with every object - I'd rather have explicit synchronization objects instead. I wrote a bit more on this in a blog post about redesigning java.lang.Object/System.Object.
Why are these so fundamental that every object has to have them and is there a performance hit in having them (presumably some state is stored in them)?
tl;dr: They are thread-safety methods and they have small costs relative to their value.
The fundamental realities that these methods support are that:
Java is always multi-threaded. Example: check out the list of Threads used by a process using jconsole or jvisualvm some time.
Correctness is more important than "performance." When I was grading projects (many years ago), I used to have to explain "getting to the wrong answer really fast is still wrong."
Fundamentally, these methods provide some of the hooks to manage per-Object monitors used in synchronization. Specifically, if I have synchronized(objectWithMonitor) in a particular method, I can use objectWithMonitor.wait() to yield that monitor (e.g., if I need another method to complete a computation before I can proceed). In that case, that will allow one other method that was blocked waiting for that monitor to proceed.
On the other hand, I can use objectWithMonitor.notifyAll() to let Threads that are waiting for the monitor know that I am going to be relinquishing the monitor soon. They can't actually proceed until I leave the synchronized block, though.
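To make that concrete, here is a hedged sketch (the SingleSlot class is invented for illustration, not taken from the answer above) of wait() and notifyAll() guarding a one-element hand-off; note that wait() releases the monitor while blocked, and the while loops guard against spurious wake-ups:

class SingleSlot<T> {
    private T value; // the monitor is the SingleSlot instance itself

    synchronized void put(T v) throws InterruptedException {
        while (value != null) {
            wait();            // give up the monitor until a take() happens
        }
        value = v;
        notifyAll();           // wake threads blocked in take()
    }

    synchronized T take() throws InterruptedException {
        while (value == null) {
            wait();            // give up the monitor until a put() happens
        }
        T v = value;
        value = null;
        notifyAll();           // wake threads blocked in put()
        return v;
    }
}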
With respect to specific examples (e.g., long Lists of Doubles) where you might worry that there's a performance or memory hit on the monitoring mechanism, here are some points that you should likely consider:
First, prove it. If you think there is a major impact from a core Java mechanism such as multi-threaded correctness, there's an excellent chance that your intuition is false. Measure the impact first. If it's serious and you know that you'll never need to synchronize on an individual Double, consider using doubles instead.
If you aren't certain that you, your co-worker, a future maintenance coder (who might be yourself a year later), etc., will never ever need fine-grained threaded access to your data, there's an excellent chance that taking these monitors away would only make your code less flexible and maintainable.
Follow-up in response to the question on per-Object vs. explicit monitor objects:
Short answer: @JonSkeet: yes, removing the monitors would create problems: it would create friction. Keeping those monitors in Object reminds us that this is always a multithreaded system.
The built-in object monitors are not sophisticated but they are: easy to explain; work in a predictable fashion; and are clear in their purpose. synchronized(this) is a clear statement of intent. If we force novice coders to use the concurrency package exclusively, we introduce friction. What's in that package? What's a semaphore? Fork-join?
A novice coder can use the Object monitors to write decent model-view-controller code. synchronized, wait and notifyAll can be used to implement naive (in the sense of simple, accessible but perhaps not bleeding-edge performance) thread-safety. The canonical example would be one of these Doubles (posited by the OP) which can have one Thread set a value while the AWT thread gets the value to put it on a JLabel. In that case, there is no good reason to create an explicit additional Object just to have an external monitor.
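A minimal sketch of that canonical case, with made-up names (SharedReading is not from the original post): the worker thread and the AWT thread both synchronize on the holder object itself, so no separate monitor object is needed:

class SharedReading {
    private double value;

    // worker thread calls this
    synchronized void set(double v) { value = v; }

    // AWT/event-dispatch thread calls this before updating the JLabel
    synchronized double get() { return value; }
}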
At a slightly higher level of complexity, these same methods are useful as an external monitoring method. In the example above, I explicitly did that (see objectWithMonitor fragments above). Again, these methods are really handy for putting together relatively simple thread safety.
If you would like to be even more sophisticated, I think you should seriously think about reading Java Concurrency In Practice (if you haven't already). Read and write locks are very powerful without adding too much additional complexity.
Punchline: Using basic synchronization methods, you can exploit a large portion of the performance enabled by modern multi-core processors with thread-safety and without a lot of overhead.
All objects in Java have monitors associated with them. Synchronization primitives are useful in pretty much all multi-threaded code, and it's semantically very nice to synchronize on the object(s) you are accessing rather than on separate "monitor" objects.
Java may allocate the Monitors associated with the objects as needed - as .NET does - and in any case the actual overhead for simply allocating (but not using) the lock would be quite small.
In short: it's really convenient to store objects with their thread-safety support bits, and there is very little performance impact.
These methods are around to implement inter-thread communication.
Check this article on the subject.
Rules for those methods, taken from that article:
wait() tells the calling thread to give up the monitor and go to sleep until some other thread enters the same monitor and calls notify().
notify() wakes up the first thread that called wait() on the same object.
notifyAll() wakes up all the threads that called wait() on the same object. The highest-priority thread will run first.
Hope this helps...

Does the JVM create a mutex for every object in order to implement the 'synchronized' keyword? If not, how?

As a C++ programmer becoming more familiar with Java, it's a little odd to me to see language-level support for locking on arbitrary objects without any kind of declaration that the object supports such locking. Creating mutexes for every object seems like a heavy cost to be automatically opted into. Besides memory usage, mutexes are an OS-limited resource on some platforms. You could spin-lock if mutexes aren't available, but the performance characteristics of that are significantly different, which I would expect to hurt predictability.
Is the JVM smart enough in all cases to recognize that a particular object will never be the target of the synchronized keyword and thus avoid creating the mutex? The mutexes could be created lazily, but that poses a bootstrapping problem that itself necessitates a mutex, and even if that were worked around I assume there's still going to be some overhead for tracking whether a mutex has already been created or not. So I assume if such an optimization is possible, it must be done at compile time or startup. In C++ such an optimization would not be possible due to the compilation model (you couldn't know if the lock for an object was going to be used across library boundaries), but I don't know enough about Java's compilation and linking to know if the same limitations apply.
Speaking as someone who has looked at the way that some JVMs implement locks ...
The normal approach is to start out with a couple of reserved bits in the object's header word. If the object is never locked, or if it is locked but there is no contention, it stays that way. If and when contention occurs on a locked object, the JVM inflates the lock into a full-blown mutex data structure, and it stays that way for the lifetime of the object.
EDIT - I just noticed that the OP was talking about OS-supported mutexes. In the examples that I've looked at, the uninflated mutexes were implemented directly using CAS instructions and the like, rather than using pthread library functions, etc.
This is really an implementation detail of the JVM, and different JVMs may implement it differently. However, it is definitely not something that can be optimized at compile time, since Java links at runtime, and thus it is possible for previously unknown code to get hold of an object created in older code and start synchronizing on it.
Note that in Java lingo, the synchronization primitive is called "monitor" rather than mutex, and it is supported by special bytecode operations. There's a rather detailed explanation here.
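For reference, a synchronized block in source compiles down to exactly those bytecode operations; a small sketch (the class and method names are made up, the bytecode names are the real ones):

class MonitorBytecodes {
    void update(Object guard) {
        synchronized (guard) {  // javac emits a monitorenter on 'guard' here
            // ... critical section ...
        }                       // and a monitorexit here (plus one on the exception path)
    }
}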
You can never be sure that an object will never be used as a lock (consider reflection). Typically every object has a header with some bits dedicated to the lock. It is possible to implement it so that the header is only added as needed, but that gets a bit complicated, and you probably need some header anyway (for the class pointer (the equivalent of a "vtbl" plus the allocation size in C++), the hash code, and garbage collection).
Here's a wiki page on the implementation of synchronisation in the OpenJDK.
(In my opinion, adding a lock to every object was a mistake.)
Can't the JVM use a compare-and-swap instruction directly? Let's say each object has a field lockingThreadId storing the id of the thread that is locking it:
import java.util.concurrent.atomic.AtomicLong;

AtomicLong lockingThreadId = new AtomicLong(0);   // 0 means "unlocked"

long me = Thread.currentThread().getId();
while (!lockingThreadId.compareAndSet(0, me)) {
    // failed, someone else got it:
    // mark this thread as waiting on obj and shelve (park) it
}
// out of the loop: this thread has now locked the object
// ... do the work ...
lockingThreadId.set(0);
// wake up the threads waiting on the obj
This is a toy model, but it doesn't seem too expensive, and it does not rely on the OS.

Java Multi-Threading Beginner Questions

I am working on a scientific application that has readily separable parts that can proceed in parallel. So, I've written those parts to each run as independent threads, though not for what appears to be the standard reason for separating things into threads (i.e., not blocking some quit command or the like).
A few questions:
Does this actually buy me anything on standard multi-core desktops - i.e., will the threads actually run on the separate cores if I have a current JVM, or do I have to do something else?
I have a few objects which are read (though never written) by all the threads. Potential problems with that? Solutions to those problems?
For actual clusters, can you recommend frameworks to distribute the threads to the various nodes so that I don't have to manage that myself (well, if such exist)? CLARIFICATION: by this, I mean either something that automatically converts threads into task for individual nodes or makes the entire cluster look like a single JVM (i.e., so it could send threads to whatever processors it can access) or whatever. Basically, implement the parallelization in a useful way on a cluster, given that I've built it into the algorithm, with the minimal job husbandry on my part.
Bonus: Most of the evaluation consists of set comparisons (e.g., union, intersection, contains) with some mapping from keys to get the pertinent sets. I have some limited experience with FORTRAN, C, and C++ (semester of scientific computing for the first, and HS AP classes 10 years ago for the other two) - what sort of speed/ease of parallelization gains might I find if I tied my Java front-end to an algorithmic back-end in one of those languages, and what sort of headache might my level of experience find implementing those operations in those languages?
Yes, using independent threads will use multiple cores in a normal JVM, without you having to do any work.
If anything is only ever read, it should be fine for it to be read by multiple threads. If you can make the objects in question immutable (to guarantee they'll never be changed), that's even better.
I'm not sure what sort of clustering you're considering, but you might want to look at Hadoop. Note that distributed computing distributes tasks rather than threads (normally, anyway).
Multi-core Usage
Java runtimes conventionally schedule threads to run concurrently on all available processors and cores. I think it's possible to restrict this, but it would take extra work; by default, there is no restriction.
Immutable Objects
For read-only objects, declare their member fields as final, which will ensure that they are assigned when the object is created and never changed. If a field is not final, even if it never changed after construction, there can be some "visibility" issues in a multi-threaded program. This could result in the assignments made by one thread never becoming visible to another.
Any mutable fields that are accessed by multiple threads should be declared volatile, be protected by synchronization, or use some other concurrency mechanism to ensure that changes are consistent and visible among threads.
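A small sketch of both recommendations, with hypothetical class names: final fields for the read-only configuration, and a volatile field for shared mutable state:

class SimulationParameters {
    // Immutable: final fields are safely visible to all threads after construction.
    private final double tolerance;
    private final int maxIterations;

    SimulationParameters(double tolerance, int maxIterations) {
        this.tolerance = tolerance;
        this.maxIterations = maxIterations;
    }

    double tolerance()  { return tolerance; }
    int maxIterations() { return maxIterations; }
}

class ProgressTracker {
    // Mutable and shared: volatile makes each write visible to other threads.
    private volatile long completedTasks;

    // Note: ++ on a volatile is not atomic; use AtomicLong if several threads write.
    void recordCompletion() { completedTasks++; }
    long completedTasks()   { return completedTasks; }
}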
Distributed Computing
The most widely used framework for distributed processing of this nature in Java is called Hadoop. It uses a paradigm called map-reduce.
Native Code Integration
Integrating with other languages is unlikely to be worthwhile. Because of its adaptive bytecode-to-native compiler, Java is already extremely fast on a wide range of computing tasks. It would be wrong to assume that another language is faster without actual testing. Also, integrating with "native" code using JNI is extremely tedious, error-prone, and complicated; using simpler interfaces like JNA is very slow and would quickly erase any performance gains.
As some people have said, the answers are:
Threads on cores - Yes. Java has had support for native threads for a long time. Most OSes have provided kernel threads which automagically get scheduled to any CPUs you have (implementation performance may vary by OS).
The simple answer is it will be safe in general. The more complex answer is that you have to ensure that your Object is actually created & initialized before any threads can access it. This is solved one of two ways:
Let the class loader solve the problem for you using a Singleton (and lazy class loading):
public class MyImmutableObject
{
    private static class MyImmutableObjectInstance {
        private static final MyImmutableObject instance = new MyImmutableObject();
    }

    // static accessor: the holder class is only loaded (and the instance
    // created) on first call, and class loading is thread-safe
    public static MyImmutableObject getInstance() {
        return MyImmutableObjectInstance.instance;
    }
}
Explicitly using acquire/release semantics to ensure a consistent memory model:
static MyImmutableObject foo = null;
static volatile boolean objectReady = false;

// initializer thread:
// ...
// create & initialize the object for use by multiple threads
foo = new MyImmutableObject();
foo.initialize();
// release barrier: the volatile write publishes everything written to foo
objectReady = true;
// start worker threads

// worker threads:
public void run() {
    // acquire barrier: the volatile read
    if (!objectReady)
        throw new IllegalStateException("Memory model violation");
    // start using the immutable object foo
}
I don't recall off the top of my head how you can exploit the memory model of Java to perform the latter case. I believe, if I remember correctly, that a write to a volatile variable is equivalent to a release barrier, while a read from a volatile variable is equivalent to an acquire barrier. Also, the reason for making the boolean volatile as opposed to the object is that access of a volatile variable is more expensive due to the memory model constraints - thus, the boolean allows you to enforce the memory model & then the object access can be done much faster within the thread.
As mentioned, there are all sorts of RPC mechanisms. There's also RMI, which is a native Java approach for running code on remote targets. There are also frameworks like Hadoop which offer a more complete solution and might be more appropriate.
For calling native code, it's pretty ugly - Sun really discourages use by making JNI an ugly complicated mess, but it is possible. I know that there was at least one commercial Java framework for loading & executing native dynamic libraries without needing to worry about JNI (not sure if there are any free or OSS projects).
Good luck.
