Java volatile variable question

Reading this DZone article about Java concurrency I was wondering if the following code:
private volatile List list;
private final Lock lock = new ReentrantLock();

public void update(List newList) {
    ImmutableList l = new ImmutableList().addAll(newList);
    lock.lock();
    list = l;
    lock.unlock();
}

public List get() {
    return list;
}
is equivalent to:
private volatile List list;

public void update(List newList) {
    ImmutableList l = new ImmutableList().addAll(newList);
    list = l;
}

public List get() {
    return list;
}
The try { } finally { } block was omitted for brevity. I assume the ImmutableList class to be a truly immutable data structure that holds its own data, such as the one provided in the google-collections library. Since the list variable is volatile and what's going on is basically a copy-on-write, isn't it safe to just skip using locks?
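For reference, the actual google-collections/Guava ImmutableList has no public constructor, so I assume the copy would in practice be written something like this (sketch only, assuming a List<String> payload):

// assuming com.google.common.collect.ImmutableList
List<String> copy = ImmutableList.copyOf(newList);
// or: ImmutableList.<String>builder().addAll(newList).build();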

In this very specific example, I think you would be OK with no locking on the variable reassignment.
In general, I think you are better off using an AtomicReference instead of a volatile variable as the memory consistency effects are the same and the intent is much clearer.
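A rough sketch of what the AtomicReference version might look like (assuming a List<String> payload; the class and method names are illustrative, not from the question):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

class ListHolder {
    private final AtomicReference<List<String>> ref =
            new AtomicReference<>(Collections.<String>emptyList());

    public void update(List<String> newList) {
        // publish an immutable snapshot; readers never see a half-built list
        ref.set(Collections.unmodifiableList(new ArrayList<>(newList)));
    }

    public List<String> get() {
        return ref.get();
    }
}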

Yes, both of those code samples behave the same way in a concurrent environment. Volatile fields are never cached thread-locally, so once one thread has called update() and replaced the list with a new one, get() on all other threads will return the new list.
But if you have code which uses it like this:
list = get()
list = list.add(something) // returns a new immutable list with the new content
update(list)
then it won't work as expected on either of those code examples (if two threads do that in parallel, then the changes made by one of them may be overwritten by the other). But if only one thread is updating the list, or the new value does not depend on the old value, then no problem.
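If the new value does depend on the old one, one way out (not shown in the question) is a compare-and-set loop on an AtomicReference, so a concurrent update is retried instead of silently overwritten. A sketch, with illustrative names:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

class AppendOnlyList<T> {
    private final AtomicReference<List<T>> ref =
            new AtomicReference<>(Collections.<T>emptyList());

    public void add(T element) {
        while (true) {
            List<T> current = ref.get();
            List<T> updated = new ArrayList<>(current);   // copy, then modify the copy
            updated.add(element);
            List<T> published = Collections.unmodifiableList(updated);
            if (ref.compareAndSet(current, published)) {   // retry if another thread won the race
                return;
            }
        }
    }

    public List<T> snapshot() {
        return ref.get();
    }
}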

After re-reading this, yes, they are equivalent.

If we are talking about timing and memory visibility: a volatile read takes very close to the time of a normal read, so if you are calling get() a lot there is little difference. A volatile write takes roughly a third of the time it takes to acquire and release a lock, so your second suggestion is a bit faster.
The memory visibility, as most people have said, is equivalent: anything written before releasing the lock happens-before anything read after a later acquisition of that same lock, just as anything written before a volatile write happens-before anything read after a subsequent read of that volatile.

The following criteria must be met for volatile variables to provide the desired thread-safety:
Writes to the variable do not depend on its current value.
The variable does not participate in invariants with other variables.
Since both criteria are met here, the code is thread-safe.

I think volatile on its own doesn't give you the mutual exclusion that the ReentrantLock does, so dropping the lock might help with performance. Otherwise, I think it's fine.

Related

Java: Volatile/synchronization on ArrayList

My program looks like this:
public class Main {
    private static ArrayList<T> list;

    public static void main(String[] args) {
        new DataListener().start();
        new DataUpdater().start();
    }

    static class DataListener extends Thread {
        @Override
        public void run() {
            while (true) {
                // Reading the ArrayList and displaying the updated data
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }

    static class DataUpdater extends Thread {
        @Override
        public void run() {
            // Continuously receive data and update ArrayList;
        }
    }
}
In order to use this ArrayList in both threads, I know two options:
To make the ArrayList volatile. However, I read in this article that making a variable volatile is only appropriate if "writes to the variable do not depend on its current value", which I think is violated in this case (because, for example, when you do an add operation on an ArrayList, the contents of the ArrayList after this operation depend on its current contents, or don't they?). Also, the DataUpdater has to remove some elements from the list every now and then, and I also read that editing a volatile variable from different threads is not possible.
To make this ArrayList a synchronized variable. However, my DataUpdater will continuously update the ArrayList, so won't this block the DataListener from reading the ArrayList?
Did I misunderstand any concepts here or is there another option to make this possible?
Volatile won't help you at all. The meaning of volatile is that changes made by thread A to a shared variable are visible to thread B immediately. Usually such changes may be in some cache visible only to the thread that made them, and volatile just tells the JVM not to do any caching or optimization that will result in the value update being delayed.
So it is not a means of synchronization. It's just a means of ensuring visibility of change. Moreover, it's change to the variable, not to the object referenced by that variable. That is, if you mark list as volatile, it will only make any difference if you assign a new list to list, not if you change the content of the list!
Your other suggestion was to make the ArrayList a synchronized variable. There is a misconception here. Variables can't be synchronized. The only thing that can be synchronized is code - either an entire method or a specific block inside it. You use an object as the synchronization monitor.
The monitor is the object itself (actually, it's a logical part of the object that is the monitor), not the variable. If you assign a different object to the same variable after synchronizing on the old value, then you won't have your old monitor available.
But in any case, it's not the object that's synchronized, it's code that you decided to synchronize using that object.
You can therefore use the list as the monitor for synchronizing the operations on it. But you can not have list synchronized.
Supposing you want to synchronize your operations using the list as a monitor, you should design it so that the writer thread doesn't hold the lock all the time: it just grabs it for a single read-update, insert, etc., and then releases it; grabs it again for the next operation, then releases it. If you synchronize the whole method or the whole update loop, the other thread will never be able to read it.
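A sketch of the writer side under that design, using the list as the monitor (the helper methods here are hypothetical placeholders):

// writer thread: lock per operation, not around the whole update loop
while (hasMoreData()) {          // hypothetical "more data available" check
    T item = readNextItem();     // hypothetical blocking read, done outside the lock
    synchronized (list) {
        list.add(item);          // short critical section
    }
}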
In the reading thread, you should probably do something like:
List<T> listCopy;
synchronized (list) {
    listCopy = new ArrayList<>(list);
}
// Use listCopy for displaying the values rather than list
This is because displaying is potentially slow - it may involve I/O, updating GUI etc. So to minimize the lock time, you just copy the values from the list, and then release the monitor so that the updating thread can do its work.
Other than that, there are many types of objects in the java.util.concurrent package etc. that are designed to help in situations like this, where one side is writing and the other is reading. Check the documentation - perhaps a ConcurrentLinkedDeque will work for you.
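For instance, a minimal sketch with ConcurrentLinkedDeque (class and field names here are just illustrative, not from the question):

import java.util.concurrent.ConcurrentLinkedDeque;

class SharedData {
    // shared between the updater (writer) and the listener (reader); no external locking needed
    static final ConcurrentLinkedDeque<String> data = new ConcurrentLinkedDeque<>();
}

// writer thread:
//   SharedData.data.add(value);      // appends without blocking readers
//   SharedData.data.remove(value);   // removal is also thread-safe
// reader thread: iteration is weakly consistent and never throws ConcurrentModificationException:
//   for (String s : SharedData.data) { /* display s */ }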
Indeed, neither of the two solutions is sufficient. You actually need to synchronize the complete iteration over the ArrayList, and every write access to it:
synchronized (list) {
    for (T t : list) {
        ...
    }
}
and
synchronized (list) {
    // read/add/modify the list
}
"make the ArrayList volatile"
You can't make an ArrayList volatile. You can't make any object volatile. The only things in Java that can be volatile are fields.
In your example, list is not an ArrayList.
private static ArrayList<T> list;
list is a static field of the Main class.
The volatile keyword only matters when one thread updates the field, and another thread subsequently accesses the field.
This line updates the list, but does not update the volatile field:
list.add(e);
After executing that line, the list has changed, but the field still refers to the same list object.
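To make the volatile field actually matter, the updater would have to swap in a whole new list on every change, along the lines of this sketch (assuming a single writer thread, as in the question; names are illustrative):

import java.util.ArrayList;
import java.util.List;

class Holder {
    private static volatile List<String> list = new ArrayList<>();

    // writer: build a new list, then a single volatile write publishes it
    // (this read-copy-update is only safe because there is just one writer thread)
    static void append(String newItem) {
        List<String> updated = new ArrayList<>(list);
        updated.add(newItem);
        list = updated;              // volatile write: readers now see the new list
    }

    // reader: a single volatile read; the returned list must not be mutated afterwards
    static List<String> current() {
        return list;
    }
}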

With double-checked locking, does a put to a volatile ConcurrentHashMap have a happens-before guarantee?

So far, I have used double-checked locking as follows:
class Example {
    static Object o;
    volatile static boolean setupDone;

    private Example() { /* private constructor */ }

    public static Object getInstance() {
        if (!setupDone) {
            synchronized (Example.class) {
                if (/* still */ !setupDone) {
                    o = new String("typically a more complicated operation");
                    setupDone = true;
                }
            }
        }
        return o;
    }
} // end of class
Now, because we have groups of threads that all share this class, we changed the boolean to a ConcurrentHashMap as follows:
class Example {
    static ConcurrentHashMap<String, Object> o = new ConcurrentHashMap<String, Object>();
    static volatile ConcurrentHashMap<String, Boolean> setupsDone = new ConcurrentHashMap<String, Boolean>();

    private Example() { /* private constructor */ }

    public static Object getInstance(String groupId) {
        if (!setupsDone.containsKey(groupId)) {
            setupsDone.put(groupId, false);
        }
        if (!setupsDone.get(groupId)) {
            synchronized (Example.class) {
                if (/* still */ !setupsDone.get(groupId)) {
                    o.put(groupId, new String("typically a more complicated operation"));
                    setupsDone.put(groupId, true); // will this still maintain happens-before?
                }
            }
        }
        return o.get(groupId);
    }
} // end of class
My question now is: If I declare a standard Object as volatile, I will only get a happens-before relationship established when I read or write its reference. Therefore writing an element within that Object (if it is e.g. a standard HashMap, performing a put() operation on it) will not establish such a relationship. Is that correct? (What about reading an element; wouldn't that require reading the reference as well and thus establish the relationship?)
Now, with using a volatile ConcurrentHashMap, will writing an element to it establish the happens-before relationship, i.e. will the above still work?
Update: The reason for this question and why double-checked locking is important:
What we actually set up (instead of an Object) is a MultiThreadedHttpConnectionManager, to which we pass some settings, and which we then pass into an HttpClient, that we set up, too, and that we return. We have up to 10 groups of up to 100 threads each, and we use double-checked locking as we don't want to block each of them whenever they need to acquire their group's HttpClient, as the whole setup will be used to help with performance testing. Because of an awkward design and an odd platform we run this on we cannot just pass objects in from outside, so we hope to somehow make this setup work. (I realise the reason for the question is a bit specific, but I hope the question itself is interesting enough: Is there a way to get that ConcurrentHashMap to use "volatile behaviour", i.e. establish a happens-before relationship, as the volatile boolean did, when performing a put() on the ConcurrentHashMap? ;)
Yes, it is correct. volatile protects only that object reference, but nothing else.
No, putting an element to a volatile HashMap will not create a happens-before relationship, not even with a ConcurrentHashMap.
Actually, ConcurrentHashMap does not hold a lock for read operations (e.g. containsKey()). See the ConcurrentHashMap Javadoc.
Update:
Reflecting on your updated question: you have to synchronize on the object you put into the CHM. I recommend using a container object instead of directly storing the Object in the map:
public class ObjectContainer {
    volatile boolean isSetupDone = false;
    Object o;
}

static ConcurrentHashMap<String, ObjectContainer> containers =
        new ConcurrentHashMap<String, ObjectContainer>();

public Object getInstance(String groupId) {
    ObjectContainer oc = containers.get(groupId);
    if (oc == null) {
        // it's enough to sync on the map, don't need the whole class
        synchronized (containers) {
            // double-check not to overwrite the created object
            if (!containers.containsKey(groupId)) {
                oc = new ObjectContainer();
                containers.put(groupId, oc);
            } else {
                // if another thread already created it, then use that
                oc = containers.get(groupId);
            }
        } // leave the map-level sync block
    }
    // here we have a valid ObjectContainer, but it may not have been initialized yet;
    // same double-checking for the object initialization
    if (!oc.isSetupDone) {
        // now syncing on the ObjectContainer only
        synchronized (oc) {
            if (!oc.isSetupDone) {
                oc.o = new String("typically a more complicated operation");
                oc.isSetupDone = true;
            }
        }
    }
    return oc.o;
}
Note that at creation, at most one thread may create an ObjectContainer. But at initialization each group may be initialized in parallel (though at most one thread per group).
It may also happen that Thread T1 will create the ObjectContainer, but Thread T2 will initialize it.
Yes, it is worth keeping the ConcurrentHashMap, because the map reads and writes will happen at the same time. But volatile is not required, since the map object itself will not change.
The sad thing is that the double-check does not always work, since the compiler may generate bytecode that reuses the result of containers.get(groupId) (that's not the case with the volatile isSetupDone). That's why I had to use containsKey for the double-checking.
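As an aside, on Java 8+ the create-if-absent part can also be done atomically with ConcurrentHashMap.computeIfAbsent, which removes the need for the containsKey double-check (the per-group initialization double-check on isSetupDone stays the same):

ObjectContainer oc = containers.computeIfAbsent(groupId, id -> new ObjectContainer());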
Therefore writing an element within that Object (if it is e.g. a standard HashMap, performing a put() operation on it) will not establish such a relationship. Is that correct?
Yes and no. There is always a happens-before relationship when you read or write a volatile field. The issue in your case is that even though there is a happens-before when you access the HashMap field, there is no memory synchronization or mutex locking when you are actually operating on the HashMap. So multiple threads can see different versions of the same HashMap and can create a corrupted data structure depending on race conditions.
Now, with using a volatile ConcurrentHashMap, will writing an element to it establish the happens-before relationship, i.e. will the above still work?
Typically you do not need to mark a ConcurrentHashMap as being volatile. There are memory barriers that are crossed internal to the ConcurrentHashMap code itself. The only time I'd use this is if the ConcurrentHashMap field is being changed often -- i.e. is non-final.
Your code really seems like premature optimization. Has a profiler shown you that it is a performance problem? I would suggest that you just synchronize on the map and be done with it. Having two ConcurrentHashMaps to solve this problem seems like overkill to me.
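For what it's worth, a sketch of that simpler version (reusing the field names from the question, with the map itself as the lock):

static Object getInstance(String groupId) {
    synchronized (o) {                  // one lock for the whole check-and-create
        Object instance = o.get(groupId);
        if (instance == null) {
            instance = new String("typically a more complicated operation");
            o.put(groupId, instance);
        }
        return instance;
    }
}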

Java: How exactly do synchronized operations relate to volatility?

Sorry this is such a long question.
I've been doing lots of research lately into multi-threading as I slowly implement it into a personal project. However, probably due to an abundance of slightly incorrect examples, the use of synchronized blocks and volatility in certain situations is still a bit unclear to me.
My core question is this: Are changes to references and primitives automatically volatile (that is, performed on the main memory and not a cache) when a thread is inside a synchronized block, or does the read also have to be synchronized for it to work properly?
If so: What is the purpose of synchronizing a simple getter method? (see Example 1) Also, are ALL changes sent to main memory as long as the thread has synchronized on anything? E.g. if it is sent off to do loads of work all over the place inside a very high-level sync, will every single change it then makes go to main memory, and nothing ever to cache, until it is unlocked again?
If not: Does the change have to be explicitly inside a synchronized block, or can Java actually pick up on, for example, uses of the Lock object? (see Example 3)
If either: Does the synchronized object need to be related to the reference/primitive being changed in any way (e.g. the immediate object that contains it)? Can I write by syncing on one object and read by syncing on another, if it's otherwise safe? (see Example 2)
(please note for the following examples that I know that synchronized methods and synchronized(this) are frowned upon and why, but discussion about that is beyond the scope of my question)
Example 1:
class Counter {
    int count = 0;

    public synchronized void increment() {
        count++;
    }

    public int getCount() {
        return count;
    }
}
In this example, increment() needs to be synchronized since ++ is not an atomic operation. As such, two threads incrementing at the same time may result in an overall increase of only 1 to the count. The count primitive needs to be one whose reads and writes are atomic (e.g. not long/double/reference), and it is, so that's fine.
Does getCount() need to be synchronized here, and why exactly? The explanation I have heard most often is that I will have no guarantee whether the count returned is the pre- or post-increment value. However, this seems like an explanation for something slightly different that has found itself in the wrong place. I mean, if I were to synchronize getCount(), then I still see no such guarantee: it's now down to not knowing the locking order, instead of not knowing whether the actual read happens before or after the actual write.
Example 2:
Is the following example thread-safe, if you assume that, through trickery not shown here, none of these methods will ever be called at the same time? Will count increment in the expected way if it's incremented via a randomly chosen method each time, and then be read properly, or does the lock have to be the same object? (BTW I fully realise how ridiculous this example is, but I'm more interested in theory than practice.)
class Counter {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();
    private final Object lock3 = new Object();
    int count = 0;

    public void increment1() {
        synchronized (lock1) {
            count++;
        }
    }

    public void increment2() {
        synchronized (lock2) {
            count++;
        }
    }

    public int getCount() {
        synchronized (lock3) {
            return count;
        }
    }
}
Example 3:
Is the happens-before relationship simply a Java concept, or is it an actual thing built into the JVM? Even though I can guarantee a conceptual happens-before relationship for this next example, is Java smart enough to pick it up if it's a built-in thing? I am assuming it is not, but is this example actually thread-safe? If it's thread-safe, what about if getCount() did no locking?
class Counter {
    private final Lock lock = new ReentrantLock();
    int count = 0;

    public void increment() {
        lock.lock();
        count++;
        lock.unlock();
    }

    public int getCount() {
        lock.lock();
        int count = this.count;
        lock.unlock();
        return count;
    }
}
Yes, the read has to be synchronized as well. This page says:
The results of a write by one thread are guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation.
[...]
An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor.
The same page says:
Actions prior to "releasing" synchronizer methods such as Lock.unlock, Semaphore.release, and CountDownLatch.countDown happen-before actions subsequent to a successful "acquiring" method such as Lock.lock.
So locks offer the same visibility guarantees as synchronized blocks.
Whether you use synchronized blocks or locks, the visibility is only guaranteed if the reader thread uses the same monitor or lock as the writer thread.
Your Example 1 is incorrect: the getter must be synchronized as well if you want to see the latest value of the count.
Your example 2 is incorrect because it uses different locks to guard the same count.
Your example 3 is OK. If the getter did not lock, you could see an older value of the count. The happens-before is something that is guaranteed by the JVM. The JVM has to respect the rules specified, by flushing caches to the main memory for example.
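In other words, Example 1 becomes correct once the getter uses the same monitor as the writer, e.g.:

class Counter {
    int count = 0;

    public synchronized void increment() {
        count++;
    }

    // same monitor as increment(): guarantees the latest value is visible
    public synchronized int getCount() {
        return count;
    }
}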
Try to view it in terms of two distinct, simple operations:
Locking (mutual exclusion),
Memory barrier (cache sync, instruction reordering barrier).
Entering a synchronized block entails both locking and memory barrier; leaving the synchronized block entails unlocking + memory barrier; reading/writing a volatile field entails memory barrier only. Thinking in these terms I think you can clarify for yourself all the question above.
As for Example 1, the reading thread does not cross any kind of memory barrier. It's not just a matter of seeing the value before or after the write; the reader may never observe any change to the variable at all after its thread is started.
Example 2 is the most interesting issue you raise. You are indeed given no guarantees by the JLS in this case. In practice you won't be given any ordering guarantees (it's as if the locking aspect weren't there at all), but you'll still have the benefit of the memory barriers, so you will observe changes, unlike in the first example. Basically, this is exactly the same as removing synchronized and tagging the int as volatile (apart from the runtime cost of acquiring the locks).
Regarding Example 3, by "just a Java thing" I feel you have generics with erasure in mind, something that only the static code checking is aware of. This is not like that -- both locks and memory barriers are pure runtime artifacts. In fact, the compiler can't reason about them at all.

The right way to synchronize access to read only map in Java

I'm writing an analogue of the DatabaseConfiguration class, which reads configuration from a database, and I need some advice regarding synchronization.
For example,
public class MyDBConfiguration {
    private Connection cn;
    private String table_name;
    private Map<String, String> key_values = new HashMap<String, String>();

    public MyDBConfiguration(Connection cn, String table_name) {
        this.cn = cn;
        this.table_name = table_name;
        reloadConfig();
    }

    public String getProperty(String key) {
        return this.key_values.get(key);
    }

    public void reloadConfig() {
        Map<String, String> tmp_map = new HashMap<String, String>();
        // read data from database
        synchronized (this.key_values) {
            this.key_values = tmp_map;
        }
    }
}
So I have a couple of questions.
1. Assuming the properties are read-only, do I have to use synchronization in getProperty()?
2. Does it make sense to do this.key_values = Collections.synchronizedMap(tmp_map) in reloadConfig?
Thank you.
If multiple threads are going to share an instance, you must use some kind of synchronization.
Synchronization is needed mainly for two reasons:
It can guarantee that some operations are atomic, so the system stays consistent
It guarantees that every thread sees the same values in memory
First of all, since you made reloadConfig() public, your object does not really look immutable: a truly immutable object is one whose values, once initialized, cannot change (which is a desirable property for objects that are shared between threads).
For the above reason, you must synchronize all the access to the map: suppose a thread is trying to read from it while another thread is calling reloadConfig(). Bad things will happen.
If this is really the case (mutable settings), you must synchronize both reads and writes (for obvious reasons). Threads must synchronize on a single object (otherwise there's no synchronization). The only way to guarantee that all the threads will synchronize on the same object is to synchronize on the object itself or on a properly published, shared lock, like this:
// synchronizes on the instance itself:
class MyDBConfig1 {
    // ...
    public synchronized String getProperty(...) { ... }
    public synchronized void reloadConfig() { ... }
}

// synchronizes on a properly published, shared lock:
class MyDBConfig2 {
    private final Object lock = new Object();
    public String getProperty(...) { synchronized (lock) { ... } }
    public void reloadConfig() { synchronized (lock) { ... } }
}
The proper publication here is guaranteed by the final keyword. It is subtle: it guarantees that the value of this field is visible to every thread after initialization (without it, a thread might see that lock == null, and bad things will happen).
You could improve the code above by using a (properly published) ReentrantReadWriteLock. It might improve concurrency a bit if that's a concern for you.
Supposing your intention was to make MyDBConfig immutable, you do not need to serialize access to the hash map (that is, you don't necessarily need to add the synchronized keyword). You might improve concurrency.
First of all, make reloadConfig() private (this will indicate that, for consumers of this object, it is indeed immutable: the only method they see is getProperty(...), which, by its name, should not modify the instance).
Then, you only need to guarantee that every thread will see the correct values in the hash map. To do so, you could use the same techniques presented above, or you could use a volatile field, like this:
class MyDBConfig {
    private volatile boolean initialized = false;
    public String getProperty(...) { if (initialized) { ... } else { throw ... } }
    private void reloadConfig() { ...; initialized = true; }
    public MyDBConfig(...) { ...; reloadConfig(); }
}
The volatile keyword is very subtle. Volatile writes and volatile reads have a happens-before relationship. A volatile write is said to happen-before a subsequent volatile read of the same (volatile) field. What this means is that all the memory locations that have been modified before (in program order) a volatile write are visible to every other thread after it has executed a subsequent volatile read of the same (volatile) field.
In the code above, you write true to the volatile field after all the values have been set. Then, the method reading values (getProperty(...)) begins by executing a volatile read of the same field. Then this method is guaranteed to see the correct values.
In the example above, if you don't publish the instance before the constructor finishes, it is guaranteed that the exception won't get thrown in the method getProperty(...) (because before the constructor finishes, you have written true to initialized).
Assuming that key_values will not be put to after reloadConfig, you still need to synchronize access to both reads and writes of the map reference. You are violating this by synchronizing only the assignment. You can solve it by removing the synchronized block and declaring key_values as volatile.
Since the HashMap is effectively read-only, I wouldn't use Collections.synchronizedMap but rather Collections.unmodifiableMap (this wouldn't affect the Map itself, it would just prohibit accidental puts from someone else possibly using this class).
Note: Also, you should never synchronize on a field that will change. The results are very unpredictable.
Edit: In regard to the other answers: it is highly suggested that all access to shared mutable data be synchronized, as the effects are otherwise non-deterministic. The key_values field is a shared mutable field, and assignments to it must be synchronized.
Edit: And to clear up any confusion with Bruno Reis: the volatile field would be legal if you still fill the tmp_map and, once it is finished being filled, assign it to this.key_values. It would look like:
private volatile Map<String, String> key_values = new HashMap<String, String>();

// ..rest of class

public void reloadConfig() {
    Map<String, String> tmp_map = new HashMap<String, String>();
    // read data from database
    this.key_values = tmp_map;
}
You still need the same style, or else, as Bruno Reis noted, it would not be thread-safe.
I would say that if you guarantee that no code will structurally modify your map, then there is no need to synchronize it.
If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally.
http://download.oracle.com/javase/6/docs/api/java/util/HashMap.html
The code you have shown provides only read access to the map. Client code cannot make a structural modification.
Since your reload method alters a temporary map and then changes key_values to point to the new map, again I'd say no synchronization is required. The worst that can happen is that someone reads from an old copy of the map.
I'm going to keep my head down and wait for the downvotes now ;)
EDIT
As suggested by Bruno, the fly in the ointment is inheritance. If you cannot guarantee that your class will not be sub-classed, then you should be more defensive.
EDIT
Just to refer back to the specific questions posed by the OP...
Assuming the properties are read-only, do I have to use synchronization in getProperty()?
Does it make sense to do this.key_values = Collections.synchronizedMap(tmp_map) in reloadConfig?
... I am genuinely interested to know if my answers are wrong. So I won't give up and delete my answer for a while ;)

ReentrantReadWriteLock - many readers at a time, one writer at a time?

I'm somewhat new to multithreaded environments and I'm trying to come up with the best solution for the following situation:
I read data from a database once daily in the morning, and store the data in a HashMap in a Singleton object. I have a setter method that is called only when an intra-day DB change occurs (which will happen 0-2 times a day).
I also have a getter which returns an element in the map, and this method is called hundreds of times a day.
I'm worried about the case where the getter is called while I'm emptying and recreating the HashMap, thus trying to find an element in an empty/malformed list. If I make these methods synchronized, it prevents two readers from accessing the getter at the same time, which could be a performance bottleneck. I don't want to take too much of a performance hit since writes happen so infrequently. If I use a ReentrantReadWriteLock, will this force a queue on anyone calling the getter until the write lock is released? Does it allow multiple readers to access the getter at the same time? Will it enforce only one writer at a time?
Is coding this just a matter of...
private final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();
private final Lock read = readWriteLock.readLock();
private final Lock write = readWriteLock.writeLock();

public HashMap getter(String a) {
    read.lock();
    try {
        return myStuff_.get(a);
    } finally {
        read.unlock();
    }
}

public void setter() {
    write.lock();
    try {
        myStuff_ = // my logic
    } finally {
        write.unlock();
    }
}
Another way to achieve this (without using locks) is the copy-on-write pattern. It works well when you do not write often. The idea is to copy and replace the field itself. It may look like the following:
private volatile Map<String, HashMap> myStuff_ = new HashMap<String, HashMap>();

public HashMap getter(String a) {
    return myStuff_.get(a);
}

public synchronized void setter() {
    // create a copy of the original
    Map<String, HashMap> copy = new HashMap<String, HashMap>(myStuff_);
    // populate the copy
    // replace the original with the copy
    myStuff_ = copy;
}
With this, the readers are fully concurrent, and the only penalty they pay is a volatile read on myStuff_ (which is very little). The writers are synchronized to ensure mutual exclusion.
Yes, if the write lock is held by a thread then other threads accessing the getter method would block since they cannot acquire the read lock. So you are fine here. For more details please read the JavaDoc of ReentrantReadWriteLock - http://download.oracle.com/javase/6/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html
You're kicking this thing off at the start of the day... you'll update it 0-2 times a day and you're reading it hundreds of times per day. Assuming that the reading is going to take, say, 1 full second (a looonnnng time) in an 8-hour day (28,800 seconds), you've still got a very low read load. Looking at the docs for ReentrantReadWriteLock, you can tweak the mode so that it will be "fair", which means the thread that's been waiting the longest will get the lock. So if you set it to be fair, I don't think that your write thread(s) are going to be starved.
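The fair mode is just a constructor argument, e.g.:

// true = fair ordering: the longest-waiting thread (reader or writer) acquires the lock next
private final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock(true);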
References
ReentrantReadWriteLock
