Locking mechanism for an object in Java

Let's say a customer has a credit card with a balance of x, and he buys an item valued at y (y < x). Then he goes to buy another item which costs z (y + z > x, but z < x).
Now I am going to simulate this scenario in Java. If all transactions happen sequentially, there is no need to panic: the customer can buy the y-valued item, and then he doesn't have enough credit to buy the other one.
But when we come into a multi-threaded environment, we have to deal with some locking mechanism or strategy, because if some other thread reads the credit card object before the changes made by a previous thread are reflected, serious issues will arise.
As far as I can see, one way is to keep a copy of the original balance and check the current value just before updating the balance. If the value is the same as the original one, we can be sure other threads haven't changed the balance. If the balance is different, we have to undo our calculation.
Java synchronization also looks like a good solution. Now my question is: what would be the best approach to implement in such a scenario?
Additionally, if we look at this in the big picture, synchronization hurts the performance of the system, since it locks the object and other threads have to wait.

I would prefer a ReadWriteLock: it lets you lock separately for reading and for writing, which is nice because you can have a separate read lock and write lock for each resource:
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

ReadWriteLock readWriteLock = new ReentrantReadWriteLock();

readWriteLock.readLock().lock();
try {
    // Multiple readers can enter this section as long as it is not
    // locked for writing and no writers are waiting for the write lock.
} finally {
    readWriteLock.readLock().unlock();
}

readWriteLock.writeLock().lock();
try {
    // Only one writer can enter this section,
    // and only if no threads are currently reading.
} finally {
    readWriteLock.writeLock().unlock();
}
A ReadWriteLock internally keeps two Lock instances: one guarding read access and one guarding write access.
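Applied to the credit-card scenario from the question, a minimal sketch might look like this (the CreditCard class, its field names and the cents-based balance are illustrative assumptions, not code from the question):

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class CreditCard {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private long balanceCents; // hypothetical representation of the balance

    CreditCard(long balanceCents) {
        this.balanceCents = balanceCents;
    }

    long getBalance() {
        lock.readLock().lock();      // many readers may hold this at once
        try {
            return balanceCents;
        } finally {
            lock.readLock().unlock();
        }
    }

    boolean purchase(long costCents) {
        lock.writeLock().lock();     // exclusive: no other readers or writers
        try {
            if (costCents > balanceCents) {
                return false;        // insufficient credit
            }
            balanceCents -= costCents;
            return true;
        } finally {
            lock.writeLock().unlock();
        }
    }
}

The important point is that the balance check and the deduction happen under the same write lock, so the check-then-act race from the question cannot occur.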

Your proposal doesn't fit: you can't be sure that a context switch won't happen between the check and the update.
The only way is synchronization.

What you are talking about sounds like software transactional memory. You optimistically assume that no other threads will modify the data upon which your transaction depends, but you have a mechanism to detect if they have.
The types in the java.util.concurrent.atomic package can help you build a lock-free solution. They implement efficient compare-and-swap operations. For example, an AtomicInteger reference would allow you to do something like this:
import java.util.concurrent.atomic.AtomicInteger;

AtomicInteger balance = new AtomicInteger();
…

void update(int change) throws InsufficientFundsException {
    int original, updated;
    do {
        original = balance.get();
        updated = original + change;
        if (updated < 0)
            throw new InsufficientFundsException();
    } while (!balance.compareAndSet(original, updated));
}
As you can see, such an approach is subject to livelock, where other threads continually change the balance, causing one thread to loop forever. In practice, the specifics of your application determine how likely a livelock is.
Obviously, this approach is complex and loaded with pitfalls. If you aren't a concurrency expert, it's safer to use a lock to provide atomicity. Locking usually performs well enough as long as the code inside the synchronized block doesn't perform any blocking operations, like I/O. If the code in your critical sections has a bounded execution time, you are probably better off using a lock.
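For comparison, a lock-based version of the same update method (a minimal sketch, assuming the same balance field and InsufficientFundsException as above) needs no retry loop:

private int balance; // guarded by 'this'

synchronized void update(int change) throws InsufficientFundsException {
    int updated = balance + change;
    if (updated < 0)
        throw new InsufficientFundsException();
    balance = updated; // check and write happen atomically under the lock
}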

As far as I can see, one way is to keep a copy of the original balance and check the current value just before updating the balance. If the value is the same as the original one, we can be sure other threads haven't changed the balance. If the balance is different, we have to undo our calculation.
Sounds like what AtomicInteger.compareAndSet() and AtomicLong.compareAndSet() do.
An easier-to-understand approach would involve using synchronized methods on your CreditCard class that your code would call to update the balance. (Only one synchronized method on an object can execute at any one time.)
In this case, it sounds like you want a public synchronized boolean makePurchase(int cost) method that returns true on success and false on failure. The goal is that no transaction on your object should require more than one method call: as you've realized, you don't want to make two method calls on CreditCard (getBalance() and later setBalance()) to do the transaction, because of the potential race condition.
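A minimal sketch of such a method (the class shape and field are illustrative):

class CreditCard {
    private int balance; // guarded by 'this'

    public CreditCard(int initialBalance) {
        this.balance = initialBalance;
    }

    public synchronized boolean makePurchase(int cost) {
        if (cost > balance) {
            return false;   // insufficient credit
        }
        balance -= cost;    // check and update form one atomic transaction
        return true;
    }
}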

Is get() a thread-safe operation in Guava's cache?

I found out that the put and get-with-CacheLoader operations use a ReentrantLock under the hood, but why is this not implemented for the getIfPresent operation?
Here is the get method that getIfPresent uses:
@Nullable
V get(Object key, int hash) {
    try {
        if (this.count != 0) {
            long now = this.map.ticker.read();
            ReferenceEntry<K, V> e = this.getLiveEntry(key, hash, now);
            Object value;
            if (e == null) {
                value = null;
                return value;
            }
            value = e.getValueReference().get();
            if (value != null) {
                this.recordRead(e, now);
                Object var7 = this.scheduleRefresh(e, e.getKey(), hash, value, now, this.map.defaultLoader);
                return var7;
            }
            this.tryDrainReferenceQueues();
        }
        Object var11 = null;
        return var11;
    } finally {
        this.postReadCleanup();
    }
}
And here is put:
@Nullable
V put(K key, int hash, V value, boolean onlyIfAbsent) {
    this.lock();
    .....
Is the only thing I can do to achieve thread safety in basic get/put operations to use synchronization on the client?
Even if getIfPresent did use locks, that wouldn't help. It's more fundamental than that.
Let me put that differently: define 'thread-safe'.
Here's an example of what can happen in a non-threadsafe implementation:
You invoke .put on a plain jane j.u.HashMap, not holding any locks.
Simultaneously, a different thread also does that.
The map is now in a broken state. If you iterate through the elements, the first put doesn't show up at all, the second put shows up in your iteration, and a completely unrelated key has disappeared. But calling .get(k) on that map with the second thread's key doesn't find it, even though it is returned in .entrySet(). This makes no sense and breaks all the rules of j.u.HashMap. The spec of HashMap does not explain any of this, other than saying 'I am not threadsafe' and leaving it at that.
That's an example of NOT thread safe.
Here is an example of perfectly fine:
2 threads begin.
Some external event (e.g. a log) shows that thread 1 is very, very slightly ahead of thread 2; but if the notion of 'ahead' is relevant to your code's correctness, your code is broken. That's just not how multicore works.
Thread 1 adds a thing to a concurrency-capable map, and logs that it has done so.
Thread 2 logs that it starts an operation (from the few things you have observed, it seems to be running slightly 'later', so I guess we're 'after' the point where T1 added the thing), then queries for the thing and does not get a result.[1]
That's fine. That's still thread safe. Thread safe doesn't mean every interaction with an instance of that data type can be understood in terms of 'first this thing happened, then that thing happened'. Wanting that is very problematic, because the only way the computer can really give you that kind of guarantee is to disable all but a single core and run everything very very slowly. The point of a cache is to speed things up, not slow things down!
The problem with the lack of guarantees here is that if you run multiple separate operations on the same object, you run into trouble. Here's some pseudocode for a bank ATM machine that will go epically wrong in the long run:
Ask user how much money they want (say, €50,-).
Retrieve account balance from a 'threadsafe' Map<Account, Integer> (maps account ID to cents in account).
Check if the balance is at least €50,-. If not, show an error. If yes...
Spit out €50,- and update the threadsafe map with .put(acct, balance - 5000).
Everything is perfectly threadsafe. And yet this is going to go very, very wrong: if the user uses their card at the ATM at the same time as they are in the bank withdrawing money via the teller, either the bank or the user is going to get very lucky here. I'd hope it's obvious how and why.
The upshot is: if you have dependencies between operations, nothing in 'threadsafe' concepts can possibly fix it; the only way is to actually write code that explicitly marks off these dependencies.
The only way to write that bank code is to use some form of locking. Basic locking or optimistic locking, either way is fine, but locking of some sort. It has to look like this [2]:
start some sort of transaction;
fetch account balance;
deal with insufficient funds;
spit out cash;
update account balance;
end transaction;
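In Java terms, a minimal sketch of that transaction using a plain lock (the Account type, showError and dispenseCash are hypothetical, and real code would need the non-abortable-dispense handling from footnote [2]):

private final Object txLock = new Object(); // hypothetical per-account lock

void withdraw(Account acct, int cents) {
    synchronized (txLock) {                 // start of the "transaction"
        int balance = acct.getBalanceCents();
        if (balance < cents) {
            showError("insufficient funds");
            return;
        }
        dispenseCash(cents);                // not really abortable; see [2]
        acct.setBalanceCents(balance - cents);
    }                                       // end of the "transaction"
}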
Now guava's code makes perfect sense:
There is no such thing as 'earlier' and 'later' unless you explicitly use primitives that establish these things; you need to stop thinking about multicore in that way. The Cache interface does have such primitives. Use the right operation! getIfPresent will get you the cached value if it is possible for your current thread to get at that data. If it is not, it returns null; that's what that call does.
If instead you want this common operation: "Get me the cached value; if it is not available, run this code to calculate the value, cache the result, and return it to me. In addition, ensure that if 2 threads simultaneously end up running this exact operation, only one thread runs the calculation, and the other waits for it (don't say the 'first' one; that's not how you should think about threads) to finish and uses that result instead" - then use the right call for that: .cache.get(key, k -> calculateValueForKey(k)). As the docs explicitly call out, this will wait for another thread that is also 'loading' the value (that's what Guava's cache calls the calculation process).
No matter what you invoke from the Cache API, you can't 'break' it in the sense that I broke that HashMap. The Cache API achieves this partly by using locks (such as a ReentrantLock for mutating operations) and partly by using a ConcurrentHashMap-style segmented structure under the hood.
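For reference, the atomic get-or-load operation on Guava's Cache takes a Callable loader; a minimal usage sketch (expensiveComputation is a hypothetical method):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.ExecutionException;

Cache<String, String> cache = CacheBuilder.newBuilder().maximumSize(1_000).build();

String value;
try {
    // Only one thread runs the loader for a given key;
    // concurrent callers wait for that result instead of recomputing.
    value = cache.get("someKey", () -> expensiveComputation("someKey"));
} catch (ExecutionException e) {
    throw new RuntimeException(e.getCause());
}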
[1] Often log frameworks end up injecting an actual explicit lock into the proceedings, and thus you do often get guarantees in this case, but only 'by accident' because of the log framework. This isn't a guarantee (maybe you're logging to separate log files, for example!), and often what you 'witness' may be a lie. For example, maybe you have 2 log statements that both log to separate files (and don't lock each other out at all), and they log the timestamp as part of the message. The fact that one log line says '12:00:05' and the other says '12:00:06' means nothing: the log thread fetches the current time, creates a string describing the message, and tells the OS to write it to the file. You obviously get absolutely no guarantee that the 2 log threads run at identical speed. Maybe one thread fetches the time (12:00:05), creates the string, wants to write to the disk, but the OS switches to the other thread before the write goes through; the other thread is the other logger, it reads the time (12:00:06), makes the string, writes it out, finishes up, and then the first logger continues and writes its content. Tada: 2 threads where you 'observe' that one thread is 'earlier', but that is incorrect. Perhaps this example further highlights why thinking about threads in terms of which one is 'first' steers you wrong.
[2] This code has the additional complication that you're interacting with systems that cannot be transactional. The point of a transaction is that you can abort it; you cannot abort the user grabbing a bill from the ATM. You solve that by logging that you're about to spit out the money, then spit out the money, then log that you have spit out the money. And finally write to this log that it has been processed in the user's account balance. Other code needs to check this log and act accordingly. For example, on startup the bank's DB machine needs to flag 'dangling' ATM transactions and will have to get a human to check the video feed. This solves the problem where someone trips over the power cable of the bank DB machine juuust as the user is about to grab the banknote from the machine.
It seems that Guava's cache implements the ConcurrentMap API:
class LocalCache<K, V> extends AbstractMap<K, V> implements ConcurrentMap<K, V>
so the base get and put operations should be thread-safe by nature.

Emulating a memory barrier in Java to get rid of volatile reads

Assume I have a field that's accessed concurrently and it's read many times and seldom written to.
public Object myRef = new Object();
Let's say a thread T1 will be setting myRef to another value once a minute, while N other threads will be reading myRef billions of times, continuously and concurrently. I only need that writes to myRef are eventually visible to all threads.
A simple solution would be to use an AtomicReference or simply volatile like this:
public volatile Object myRef = new Object();
However, AFAIK volatile reads do incur a performance cost. I know it's minuscule; this is more something I wonder about than something I actually need. So let's not be concerned with performance, and assume this is a purely theoretical question.
So the question boils down to: is there a way to safely bypass volatile reads for references that are only seldom written to, by doing something at the write site?
After some reading, it looks like memory barriers could be what I need. So if a construct like this existed, my problem would be solved:
Write
Invoke Barrier (sync)
Everything is synced and all threads will see the new value (without a permanent cost at read sites: a read can be stale, or incur a one-time cost as the caches are synced, but after that it's all back to regular field gets until the next write).
Is there such a construct in Java, or in general? At this point I can't help but think that if something like this existed, it would already have been incorporated into the atomic packages by the much smarter people maintaining those. (Maybe a disproportionately frequent read-vs-write ratio wasn't a case worth catering for?) So maybe there is something wrong in my thinking, and such a construct is not possible at all?
I have seen some code samples use volatile for a similar purpose, exploiting its happens-before contract. There is a separate sync field, e.g.:
public Object myRef = new Object();
public volatile int sync = 0;
and at the writing thread/site:
myRef = new Object();
sync += 1; // volatile write to emulate a barrier
I am not sure this works, and some argue it works only on the x86 architecture. After reading the related sections of the JMM, I think it's only guaranteed to work if that volatile write is coupled with a volatile read from the threads that need to see the new value of myRef (so it doesn't get rid of the volatile read).
Returning to my original question: is this possible at all? Is it possible in Java, for instance with the new VarHandle API in Java 9?
So basically you want the semantics of a volatile without the runtime cost.
I don't think it is possible.
The problem is that the runtime cost of volatile is due to the instructions that implement the memory barriers in the writer and the reader code. If you "optimize" the reader by getting rid of its memory barrier, then you are no longer guaranteed that the reader will see the seldom-written new value when it is actually written.
FWIW, some versions of the sun.misc.Unsafe class provide explicit loadFence, storeFence and fullFence methods, but I don't think that using them will give any performance benefit over using a volatile.
Hypothetically ...
what you want is for one processor in a multi-processor system to be able to tell all of the other processors:
"Hey! Whatever you are doing, invalidate your memory cache for address XYZ, and do it now."
Unfortunately, modern ISAs don't support this.
In practice, each processor controls its own cache.
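On the VarHandle part of the question: the Java 9 API does expose weaker access modes (and static fence methods such as VarHandle.fullFence()), so you can trade the full volatile guarantee for acquire/release accesses. A minimal sketch, assuming acquire/release visibility is strong enough for your use case; note that on x86 this mostly relaxes compiler-level reordering constraints rather than hardware cost:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class Holder {
    private Object myRef = new Object();

    private static final VarHandle REF;
    static {
        try {
            REF = MethodHandles.lookup()
                    .findVarHandle(Holder.class, "myRef", Object.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    void write(Object o) {
        REF.setRelease(this, o);     // release store: no full StoreLoad barrier
    }

    Object read() {
        return REF.getAcquire(this); // acquire load instead of a volatile read
    }
}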
I'm not quite sure if this is correct, but I might solve this using a queue.
Create a class that wraps an ArrayBlockingQueue attribute. The class has an update method and a read method. The update method posts the new value onto the queue and removes all values except the last one. The read method returns the result of a peek operation on the queue, i.e. it reads but does not remove. Threads peeking at the element at the front of the queue do so unimpeded. Threads updating the queue do so cleanly.
You can use a ReentrantReadWriteLock, which is designed for the few-writes-many-reads scenario.
You can use a StampedLock, which is designed for the same case of few writes and many reads, but reads can also be attempted optimistically. Example:
private StampedLock lock = new StampedLock();

public void modify() {            // write method
    long stamp = lock.writeLock();
    try {
        modifyStateHere();
    } finally {
        lock.unlockWrite(stamp);
    }
}

public Object read() {            // read method
    long stamp = lock.tryOptimisticRead();
    Object result = doRead();     // try without lock, method should be fast
    if (!lock.validate(stamp)) {  // optimistic read failed
        stamp = lock.readLock();  // acquire read lock and repeat read
        try {
            result = doRead();
        } finally {
            lock.unlockRead(stamp);
        }
    }
    return result;
}
Make your state immutable and allow controlled modification only by cloning the existing object and altering only the necessary properties via a constructor. Once the new state is constructed, you assign it to the reference being read by the many reading threads. This way, reading threads incur almost zero cost.
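A minimal sketch of that copy-on-write idea (the Settings type and its fields are illustrative). The single volatile reference is the publication point, so readers pay only one volatile read of a reference rather than taking a lock:

final class Settings {                 // immutable: all fields final
    final int limit;
    final String endpoint;

    Settings(int limit, String endpoint) {
        this.limit = limit;
        this.endpoint = endpoint;
    }

    Settings withLimit(int newLimit) { // "modify" by cloning
        return new Settings(newLimit, this.endpoint);
    }
}

private volatile Settings current = new Settings(10, "https://example.org");

// writer (rare):
//   current = current.withLimit(20);
// readers (hot path):
//   int limit = current.limit;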
X86 provides TSO; you get [LoadLoad][LoadStore][StoreStore] fences for free.
A volatile read requires acquire semantics:
r1=Y
[LoadLoad]
[LoadStore]
...
As you can see, this is already provided by x86 for free.
In your case, most of the calls are reads, and the cache line will already be in the local cache.
There is a price to pay in compiler-level optimizations, but at the hardware level a volatile read is just as expensive as a regular read.
On the other hand, a volatile write is more expensive because it requires a [StoreLoad] to guarantee sequential consistency (in the JVM this is done using a lock addl $0,(%rsp) or an MFENCE). Since writes are very rare in your situation, this isn't an issue.
I would be careful with optimizations at this level because it is very easy to make the code more complex than is actually needed. It is best to guide your development efforts with benchmarks, e.g. using JMH, and preferably to test on real hardware. There could also be other nasty creatures hiding, like false sharing.

How to know if a method is thread safe

Suppose I have a method that checks for an id in the db and, if the id doesn't exist, inserts a value with that id. How do I know if this is thread-safe, and how do I ensure that it is? Are there any general rules I can use to ensure that it doesn't contain race conditions and is generally thread-safe?
public TestEntity save(TestEntity entity) {
    if (entity.getId() == null) {
        entity.setId(UUID.randomUUID().toString());
    }
    Map<String, TestEntity> map = dbConnection.getMap(DB_NAME);
    map.put(entity.getId(), entity);
    return map.get(entity.getId());
}
This is a 'how long is a piece of string' question...
A method will be thread safe if it uses the synchronized keyword in its declaration.
However, even if your setId and getId methods used the synchronized keyword, the process above of setting the id (if it has not been previously initialized) is not thread-safe... and even then there is an "it depends" aspect to the question. If it is impossible for two threads to ever get the same object with an uninitialised id, then you are thread-safe, because you would never be attempting to concurrently modify the id.
It is entirely possible, given the code in your question, that there could be two calls to the thread-safe getId at the same time for the same object. One by one they get the return value (null) and are immediately pre-empted to let the other thread run. This means both will then run the thread-safe setId method, again one by one.
You could declare the whole save method as synchronized, but if you do that, the entire method will be single-threaded, which defeats the purpose of using threads in the first place. You want to keep the synchronized code to the bare minimum to maximize concurrency.
You could also put a synchronized block around the critical if statement and minimise the single-threaded part of the processing, but then you would also need to be careful if there were other parts of the code that might set the id when it wasn't previously initialized.
Another possibility, which has various pros and cons, is to move the initialization of the id into the getId method and make that method synchronized, or simply to assign the id when the object is created, in the constructor.
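For example, the synchronized-block variant suggested above might look like this (locking on the entity itself is just one possible choice of monitor, and it only helps if every code path that touches the id uses the same monitor):

public TestEntity save(TestEntity entity) {
    synchronized (entity) {           // guards the check-then-set as one unit
        if (entity.getId() == null) {
            entity.setId(UUID.randomUUID().toString());
        }
    }
    Map<String, TestEntity> map = dbConnection.getMap(DB_NAME);
    map.put(entity.getId(), entity);
    return map.get(entity.getId());
}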
I hope this helps...
Edit...
The above talks about Java language features. A few people have mentioned facilities in the Java class libraries (e.g. java.util.concurrent) which also provide support for concurrency. That is a good addition, but there are also whole packages and libraries that address concurrency and related parallel-programming paradigms in various ways.
To complete the list, I would add tools such as Akka and Cats Effect (for concurrency), and more.
Not to mention the books and courses devoted to the subject.
I just reread your question and noted that you are asking about databases. Again, the answer is: it depends. RDBMSs usually let you do this type of operation with record locks, usually in a transaction. Some (like Teradata) use special clauses such as LOCKING ROW FOR WRITE SELECT * FROM some_table WHERE pi_cols = 'somevalues', which locks the row hash to you until you update it (or certain other conditions occur). This is known as pessimistic locking.
Others (notably NoSQL stores) have optimistic locking. Here, when you read the record (as you are implying with getId), there is no opportunity to lock it. Instead you do a conditional update, which is roughly: write the id as x, provided that when you try to do so the id is still null (or whatever the value was when you checked). These types of operations are usually done through an API.
You can also do optimistic locking in an RDBMS with SQL like this:
UPDATE tbl
SET x = 'some value',
    last_update_timestamp = current_timestamp()
WHERE x IS NULL
  AND last_update_timestamp = 'same value as when I last checked'
In this example, the second part of the WHERE clause is the critical bit, which basically says "only update the record if no one else did, and I trust that everyone else will update the last-update timestamp when they do". The "trust" bit can sometimes be replaced by triggers.
These types of database operations (if available) are guaranteed by the database engine to be "thread safe".
Which loops me back to the "how long is a piece of string" observation at the beginning of this answer...
Test-and-set is unsafe
a method that checks for an id in the db and if the id doesn't exist then inserts a value with that id.
Any test-and-set pair of operations on a shared resource is inherently unsafe, vulnerable to a race condition. If the two operations are separate (not atomic), then they must be protected as a pair. While one thread has completed the test but not yet done the set, another thread can sneak in and do both the test and the set. The first thread then completes its set without knowing a duplicate action has occurred.
Providing that necessary protection is too broad a topic for an Answer on Stack Overflow, as others have said here.
UPSERT
However, let me point out an alternative approach: make the test-and-set atomic.
In the context of a database, that can be done using the UPSERT feature, also known as a merge operation. For example, in Postgres 9.5 and later we have the INSERT INTO … ON CONFLICT command. See this explanation for details.
In the context of a Boolean-style flag, a semaphore makes the test-and-set atomic.
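In plain Java, the ConcurrentMap API offers the same atomicity: putIfAbsent performs the test and the set as a single atomic step. A minimal sketch, reusing the TestEntity type and the entity variable from the question:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

ConcurrentMap<String, TestEntity> map = new ConcurrentHashMap<>();

// Atomic test-and-set: exactly one thread wins the insert for a given id.
TestEntity previous = map.putIfAbsent(entity.getId(), entity);
if (previous != null) {
    // another thread inserted first; 'previous' holds its value
}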
In general, we say "a method is thread-safe" when there is no race condition on the internal or external data structures of the object it belongs to. In other words, the ordering of method calls is strictly enforced.
For example, let's say you have a HashMap object and two threads, thread_a and thread_b.
thread_a calls put("a", "a") and thread_b calls put("a", "b").
The put method is not thread-safe (refer to its documentation) in the sense that while thread_a is executing its put, thread_b can also go in and execute its own put.
A put consists of a reading part and a writing part:
thread_a.read("a")
thread_b.read("a")
thread_b.write("a", "b")
thread_a.write("a", "a")
If the above sequence happens, you can say the method is not thread-safe.
The way to make a method thread-safe is to ensure that the state of the whole object cannot be perturbed while the thread-safe method is executing. An easy way is to put the synchronized keyword in the method declaration.
If you are worried about performance, use manual locking with synchronized blocks on a lock object. Further performance improvements can be achieved with well-designed semaphores.
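As a minimal sketch of the synchronized-block idiom with a private lock object (the field names are illustrative):

private final Object lock = new Object(); // private monitor: external code can't lock it

private int balance;

public void deposit(int amount) {
    // non-critical work can stay outside the block
    synchronized (lock) {
        balance += amount;            // only the state mutation is serialized
    }
}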

Java synchronized methods for a single thread

I'm having trouble understanding the synchronized keyword. As far as I know, it is used to make sure that only one thread can access the synchronized method/block at the same time. Then, is there sometimes a reason to make some methods synchronized if only one thread calls them?
If your program is single threaded, there's no need to synchronize methods.
Another case would be that you write a library and indicate that it's not thread safe. The user would then be responsible for handling possible multi-threading use, but you could write it all without synchronization.
If you are sure your class will always be used by a single thread, there is no reason to use any synchronized methods. But the reality is that Java is an inherently multi-threaded environment. At some point in time, somebody will use multiple threads. Therefore, any class that needs thread safety should have adequately synchronized methods or synchronized blocks to avoid problems.
No, you don't need synchronization if there is only a single thread involved.
Always specify thread-safety policy
Actually, you never know how a class written by you is going to be used by others in the future. So it is always better to explicitly state your policy, so that if someone tries to use it in a multi-threaded way in the future, they are aware of the implications.
And the best place to specify the thread-safety policy is in the JavaDocs. Always state in the JavaDocs whether the class you are creating is thread-safe or not.
When two or more threads need access to a shared resource, they need some way to ensure that the resource will be used by only one thread at a time.
A synchronized method is used to guard a shared resource against concurrent access by multiple threads.
So there is no need to apply synchronization for a single thread.
Consider that you are designing a movie-ticket-seller application, and let's drop all the technological conveniences available these days, for the sake of visualizing the problem.
There is only one ticket left for the show, and 5 different counters are selling tickets. Consider that 2 people are trying to buy that last ticket at the counters.
Consider your application workflow to be:
You take in the details of the buyer: his name and his credit card number. (This is a read operation.)
Then you find out how many tickets are left for the show. (This is again a read operation.)
Then you book the ticket with the credit card. (This is the write operation.)
If this logic isn't synchronized, what would happen?
The details of Customer 1 and Customer 2 would both be read up to step 2. Both will try to book the ticket, and both their tickets would be booked.
If it is modified to be:
You take in the details of the buyer: his name and his credit card number. (This is the read operation.)
synchronized {
    Then you find out how many tickets are left for the show. (This is again a read operation.)
    Then you book the ticket with the credit card. (This is the write operation.)
}
then there is no chance of overbooking the show due to a thread race condition.
Now, consider the case where you absolutely know that there will be one and only one person booking tickets. There is no need for synchronization.
The ticket seller here corresponds to the single thread in your application.
I have tried to put this in a very, very simplistic manner. There are frameworks, and constraints you can put on the DB, to avoid such a simple scenario. But the intent of this answer is to explain why thread synchronization exists, not the ways to avoid needing it.
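As a minimal Java sketch of the synchronized workflow above (all names are illustrative):

class TicketSeller {
    private int ticketsLeft = 1;

    // Taking the buyer's details can happen concurrently;
    // only the check-and-book step must be atomic.
    public synchronized boolean book(String buyerName, String cardNumber) {
        if (ticketsLeft == 0) {
            return false;    // sold out: show an error to this buyer
        }
        ticketsLeft--;       // charge the card and issue the ticket here
        return true;
    }
}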

Why does unsynchronization make ArrayList faster and less secure?

I read the following statement:
ArrayLists are unsynchronized and therefore faster than Vector, but less secure in a multithreaded environment.
I would like to know why being unsynchronized can improve speed, and why it is less secure.
I will try to address both of your questions:
Improve speed
If the ArrayList were synchronized and multiple threads were trying to read data out of the list at the same time, the threads would have to wait to get an exclusive lock on the list. By leaving the list unsynchronized, the threads don't have to wait, and the program runs faster.
Unsafe
If multiple threads are reading from and writing to a list at the same time, the threads can have an inconsistent view of the list, and this can cause instability in multi-threaded programs.
The whole point of synchronization is that it means only one thread has access to an object at any given time. Take a box of chocolates as an example. If the box is synchronized (Vector), and you get there first, no one else can take any and you get your pick. If the box is NOT synchronized (ArrayList), anyone walking by can snag a chocolate - It will disappear faster, but you may not get the ones you want.
ArrayLists are unsynchronized and therefore faster than Vector, but less secure in a multithreaded environment.
I would like to know why unsynchronization can improve the speed, and why it will be less secure?
When multiple threads are reading/writing to a shared memory location, the program might compute incorrect results due to the lack of mutual exclusion and of proper visibility. Hence the lack of synchronization is considered "unsafe". This blog post by Jeremy Manson might provide a good introduction to the topic.
When the JVM executes a synchronized method, it makes sure that the current thread has an exclusive lock on the object on which the method is invoked. Similarly, when the method finishes execution, the JVM releases the lock held by the executing thread. Synchronized methods provide mutual exclusion and visibility guarantees, which are important for the "safety" (i.e. guaranteed correctness) of the executing code. But if only one thread is ever accessing the methods of the object, there are no safety issues to worry about. Although JVM performance has improved over the years, uncontended synchronization (i.e. locking/unlocking of objects accessed by only one thread) still takes a non-zero amount of time. For unsynchronized methods, the JVM does not pay this extra penalty; hence they are faster than their synchronized counterparts.
Vectors force their choice on you: all methods are synchronized, and it is difficult to use them incorrectly. But when Vectors are used in a single-threaded context, you pay the price of the extra synchronization unnecessarily. ArrayLists leave the choice to you. When used in a multi-threaded context, it is up to you (the programmer) to correctly synchronize the code; but when used in a single-threaded context, you are guaranteed not to pay any extra synchronization overhead.
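If you want to opt in to synchronization only where you need it, the JDK also lets you wrap the list; note that compound actions such as iteration still need an explicit synchronized block on the wrapper, as the Collections docs point out. A minimal sketch:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

List<String> list = Collections.synchronizedList(new ArrayList<>());
list.add("Foo");              // individual calls are synchronized for you

synchronized (list) {         // iteration is a compound action: hold the lock
    for (String s : list) {
        System.out.println(s);
    }
}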
Also, when a collection is populated initially and only read subsequently, ArrayLists perform better even in a multi-threaded context. For example, consider this method:
public synchronized List<String> getList() {
    List<String> list = new Vector<String>();
    list.add("Foo");
    list.add("Bar");
    return Collections.unmodifiableList(list);
}
A list is created, populated, and an immutable view of it is safely published. Looking at the code above, it is clear that all subsequent uses of this list are reads, which won't need any synchronization even when the list is used by multiple threads: the object is effectively immutable. Using a Vector here incurs synchronization overhead even for the reads, where it is not needed; using an ArrayList instead would perform better.
Data structures that synchronize use locks (or other synchronization constructs) to ensure that their data is always in a consistent state. Oftentimes this requires that one or more threads wait on another thread to finish updating the structure's state, which reduces performance, since a wait has been introduced where before there was none.
Two threads can modify the list at the same time, adding a new item or deleting/modifying the same item simultaneously, because no synchronization (or lock mechanism, if you prefer) exists. So imagine you delete one item of the list while somebody else is trying to work with it, or you modify an item while someone else is using it: that's not very safe.
http://download.oracle.com/javase/1.4.2/docs/api/java/util/ArrayList.html
Read the "Note that this implementation is not synchronized." paragraph; it explains this a bit better.
And I forgot: considering speed, it seems quite easy to see that when you try to control access to data, you add mechanisms that prevent others from accessing it. Thus you add more computation, so it is slower...
Non-blocking data structures will be faster than ones that block, because of that fact. With blocking data structures, if a resource is acquired by some entity, it will take time for another entity to acquire that same resource once it becomes available.
However, this can be less safe in some instances, depending on the situation. The main points of contention are writes. If it can be guaranteed that the data in a data structure will not change after it has been added, and will only be accessed for reading, then there will not be a problem. The issues arise when there is a conflict between a write and a read, or between a write and a write.
