Variable value changed simultaneously by different threads - Java

I have many concurrently running HTTP request-serving threads. Each request creates an object (? extends Object) and saves it in a list.
Advise me on a good data structure to implement this list.
I can't use ArrayList since it is not thread-safe.
I don't want to use Vector - since it is synchronized, it makes the other threads wait while one of the HTTP threads is saving its object.
I also tried LinkedList, but there is data loss due to concurrent updates.
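For reference, a minimal sketch of what this could look like with a lock-free collection from java.util.concurrent (the class and field names here are placeholders, not from the question):
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class RequestLog {
    // Lock-free queue: appends from many HTTP threads don't block each other.
    private final Queue<Object> savedRequests = new ConcurrentLinkedQueue<>();

    void save(Object requestObject) {
        savedRequests.add(requestObject);
    }
}
CopyOnWriteArrayList or Collections.synchronizedList(new ArrayList<>()) are alternatives if List semantics are really needed, but both make writes more expensive than a concurrent queue.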

Your variable would need to be atomic so that it can safely be updated by multiple threads (see java.util.concurrent.atomic). You could also use an AtomicInteger to keep track of the number of times the variable is updated.
But are you sure you want to do this without explicitly controlling the update to the variable?
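A minimal sketch of that idea, assuming the shared value is a simple object reference (the class and field names are made up for illustration):
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

class SharedValue {
    private final AtomicReference<String> value = new AtomicReference<>("initial");
    private final AtomicInteger updateCount = new AtomicInteger();

    // Safe to call from any number of threads: both operations are atomic.
    void update(String newValue) {
        value.set(newValue);
        updateCount.incrementAndGet();
    }

    String current() { return value.get(); }
    int updates() { return updateCount.get(); }
}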

Related

Java in memory data storage thread safety

I'm making a real-time multiplayer game server in Java. I'm storing all data for matches in memory in a HashMap of "match" objects. Each match object contains information about the game and the game state for all players (anywhere from 2 to 5 in one match). The server will pass the same match object to each user's connection to the server.
What I'm a little concerned about is making this thread safe. Connections could be made to different threads in the server, all of which need to access the same match.
The problem with that is there would be a lot of variables/lists in the object, all of which would need to be synchronized. Some of them may need to be used in calculations that affect each other, meaning I would need nested synchronized blocks, which I don't want.
Are synchronized blocks for every variable in the match object my only solution, or can I do something else?
I know SQLite has an in memory mode, but the problem I found was this:
Quote from their website:
SQLite supports an unlimited number of simultaneous readers, but it will only allow one writer at any instant in time. For many situations, this is not a problem. Writers queue up. Each application does its database work quickly and moves on, and no lock lasts for more than a few dozen milliseconds. But there are some applications that require more concurrency, and those applications may need to seek a different solution.
A few dozen milliseconds? That's a long time. Would that be fast enough, or is there another in memory database that would be suited for real time games?
Your architecture is off in this case. You want a set of data to be modified and updated by several threads at once, which might be possible, but is extremely difficult to get right and fast at the same time.
It would be much easier if you change the architecture like follows:
There is one thread that has exclusive access to a single match object. A thread could handle multiple match objects, but a single match object will only ever be handled/guarded by a single thread. Now if anything external wants to change any values, it needs to make a "change request", but cannot change them immediately on its own. And once the change has been applied and the values updated, the thread guarding the match object will send out an update to the clients.
So let's say a player scores a goal; the client thread then calls a function:
void clientScoredGoal(Client client) throws InterruptedException {
    // put() blocks if the queue is bounded and currently full
    actionQueue.put(new GoalScoredEvent(client));
}
Where actionQueue is, for example, a BlockingQueue.
The thread handling the match objects listens on this queue via actionQueue.take() and reacts as soon as a new action arrives. It then applies the change, updates internal values if necessary, and distributes an update package (a "change request" to the clients, if you like).
Also, in general, synchronized should be considered bad practice in Java. There are certain situations where it is a good way to handle synchronization, but in something like 99% of all cases, using features from the java.util.concurrent package is by far the better solution. Notice the complete lack of synchronized in the example code above, yet it is perfectly thread-safe.
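To make the consuming side concrete, here is a rough, hypothetical sketch of the single thread that owns a match and drains the action queue (the GameAction and MatchState types are assumptions for illustration, not from the answer above):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class MatchLoop implements Runnable {
    interface GameAction { void applyTo(MatchState state); }
    static class MatchState { int homeGoals; int awayGoals; }

    private final BlockingQueue<GameAction> actionQueue = new LinkedBlockingQueue<>();
    private final MatchState state = new MatchState();

    // Called from any connection thread; it never touches the state directly.
    void submit(GameAction action) {
        actionQueue.add(action);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                GameAction action = actionQueue.take(); // blocks until a change request arrives
                action.applyTo(state);                  // only this thread ever mutates the match
                // ... distribute an update package to the clients here ...
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}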
The question is very generic, so it is difficult to give specific advice.
I'm making a real time multiplayer game server in Java. I'm storing all data for matches in memory in a HashMap with "match" objects.
If you want to store "match" objects in a Map and then have multiple threads requesting/adding/removing objects from the map, then you have to use a "ConcurrentHashMap".
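For example, something along these lines (the Match type and registry class are placeholders for illustration):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class MatchRegistry {
    static class Match { }

    // Many threads can look up, add and remove matches without external locking.
    private final ConcurrentMap<String, Match> matches = new ConcurrentHashMap<>();

    Match getOrCreate(String matchId) {
        return matches.computeIfAbsent(matchId, id -> new Match());
    }

    void remove(String matchId) {
        matches.remove(matchId);
    }
}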
What I'm a little concerned about is making this thread safe. Connections could be made to different threads in the server, all of which need to access the same match.
The safest and easiest way to handle multithreading is to make each "match" an immutable object; then there is no need to synchronize.
If the "match" information is mutable and accessed simultaneously by many threads, then you will have to synchronize. But in this case the "mutable state" is contained within a "match", so only the match class will need to use synchronization.
I would need nested synchronized blocks, which I don't want.
I haven't ever seen the need for nested synchronized blocks. Perhaps you should refactor your solution before you try to make it thread safe.
Is synchronized blocks for every variable in the match object my only solution, or can I do something else? I know SQLite has an in memory mode
If you have objects with mutable state that are accessed by multiple threads, then you need to make them thread safe. There is no other way (notice that I didn't say "synchronized blocks" are the only option; there are different ways to achieve thread safety). Using an in-memory database is not the solution to your thread-safety problem.
The advantage of using an in-memory database is speeding up access to the information (as you don't have to hit a regular database stored on an HDD), but with the penalty that your application now needs more RAM.
By the way, even faster than using an in-memory database would be to keep all the information you need in objects within your program (which has the same limitation of requiring more RAM).

How to record web requests in a concurrent environment?

We have a web application that receives a few million requests per day. We audit the request counts and response statuses using an interceptor, which in turn calls a class annotated with Spring's @Async annotation; this class basically adds them to a map and persists the map after a configured interval. As we have a fixed set of APIs, we maintain a ConcurrentHashMap keyed by API name, with its count and response-status object as the value. So for every request to an API we check whether it exists in our map; if it exists we fetch the object against it, otherwise we create an object and put it in the map. For example:
class Audit {
    void audit(String apiName) {
        CounterObject counterObject = null;
        if (APIMap.containsKey(apiName)) {
            // fetch existing object
            counterObject = APIMap.get(apiName);
        } else {
            // create a new object and put it in the map
            counterObject = new CounterObject();
            APIMap.put(apiName, counterObject);
        }
        // increment count, note response status and other operations on the CounterObject received
    }
}
Then we perform some calculations on the received object (whether from the map or newly created) and update the counters.
We aggregate the map values for a specific interval and commit them to the database.
This works fine with fewer hits, but under high load we face some issues, like:
1. The first thread gets the object and updates the count, but before it finishes, a second thread comes along and gets the value, which is not the latest one. By this time the first thread has made its changes and committed the value, but the second thread updates the values it fetched earlier and writes them back. Because both threads operate on the same key, the counter is overwritten by whichever thread writes last.
2. I don't want to put the synchronized keyword over the block that has the logic for updating the counter. Even though the processing is async and the user gets a response before we check the API name in the map, the application resources consumed will still be higher under high load if the synchronized keyword is used, which can result in late responses or, in the worst case, a deadlock.
Can anyone suggest a solution that can update the counters concurrently without having to use the synchronized keyword?
Note: I am already using ConcurrentHashMap, but because the lock hold and release happens so quickly under high load across multiple threads, the counters mismatch.
In your case you are right to look for a solution without locking (or at least with very local locking). And as long as you only do simple operations you should be able to pull this off.
First of all you have to make sure only one new CounterObject is created per key, instead of having multiple threads each create their own, with the last one overwriting the earlier objects.
ConcurrentHashMap has a very useful method for this: putIfAbsent. It stores the given object only if there is no value associated with the key yet, and it returns the previous value, or null if there was none, so you have to handle the null case yourself. It works as follows:
CounterObject fresh = new CounterObject();
CounterObject existing = APIMap.putIfAbsent(apiName, fresh);
CounterObject counter = (existing != null) ? existing : fresh;
counter.countStuff();
The downside of the above is that you always create a new CounterObject, even when one is already in the map, which might be expensive. If that is the case you can use the Java 8 computeIfAbsent, which will only call a lambda to create the object if there is nothing associated with the key yet.
Finally you have to make sure your CounterObject is thread-safe, preferably without locking/synchronization (although if you have very many CounterObjects, locking on one of them will be less bad than locking the full map, because fewer threads will try to lock the same object at the same time).
In order to make CounterObject safe without locking, you can look into classes such as AtomicInteger, which can do many simple operations without locking.
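Putting those two pieces together, a minimal sketch (assuming Java 8+ and a plain hit counter standing in for the full CounterObject):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

class ApiHitCounter {
    private final ConcurrentMap<String, AtomicLong> hits = new ConcurrentHashMap<>();

    // No synchronized blocks: the map resolves the create-if-missing race,
    // and AtomicLong resolves the increment race.
    void record(String apiName) {
        hits.computeIfAbsent(apiName, key -> new AtomicLong()).incrementAndGet();
    }

    long count(String apiName) {
        AtomicLong counter = hits.get(apiName);
        return counter == null ? 0L : counter.get();
    }
}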
Note that whenever I say locking here it means either with an explicit lock class or by using synchronize.
The reason for the counter mismatch is that the check-and-put operation in the Audit class is not atomic on the ConcurrentHashMap. You need to use the putIfAbsent method, which performs the check and the put atomically. Refer to the ConcurrentHashMap javadoc for putIfAbsent.

threads accessing non-synchronised methods in Java

Can someone explain to me how threads and synchronisation work in Java?
I want to write a high-performance application. Inside this application, I read data from files into some nested classes, which are basically a wrapper around HashMap.
After the data reading is finished, I start threads which need to go through the data and perform different checks on it. However, the threads never change the data!
If I can guarantee (or at least try to guarantee) that my threads never change the data, can I have them call non-synchronised methods of the objects containing the data?
If multiple threads access a non-synchronised method which does not change any class field but has some internal variables, is it safe?
An artificial example:
public class Data {
    // this hash map is filled before I start threads
    protected Map<Integer, Spike> allSpikes = new HashMap<Integer, Spike>();

    public Map<Integer, Spike> returnBigSpikes() {
        Map<Integer, Spike> bigSpikes = new HashMap<Integer, Spike>();
        for (Integer i : allSpikes.keySet()) {
            if (allSpikes.get(i).spikeSize > 100) {
                bigSpikes.put(i, allSpikes.get(i));
            }
        }
        return bigSpikes;
    }
}
Is it safe to call the NON-synchronised method returnBigSpikes() from multiple threads?
I understand now that such use-cases are potentially very dangerous, because it's hard to guarantee that the data (e.g., the returned bigSpikes) will not be modified. But I have already implemented and tested it like this and want to know if I can use the results of my application now, and change the architecture later...
What happens if I make the methods synchronised? Will the application be slowed down to single-CPU performance? If so, how can I design it correctly and keep the performance?
(I read about 20-40 GB of data (log messages) into main memory and then run threads which need to go through all the data to find correlations in it; each thread gets only a part of the messages to analyse, but for the analysis each thread has to compare every message from its part with many other messages from the data; that's why I first decided to allow threads to read the data without synchronisation.)
Thank you very much in advance.
If allSpikes is populated before all the threads start, you could make sure it isn't changed later by saving it as an unmodifiable map.
Assuming Spike is immutable, your method would then be perfectly safe to use concurrently.
In general, if you have a bunch of threads where you can guarantee that only one thread will modify a resource and the rest will only read it, then access to that resource doesn't need to be synchronised. In your example, each time the method returnBigSpikes() is invoked it creates a new local bigSpikes HashMap; although you're creating a HashMap, it is unique to each invocation, so there are no synchronisation problems there.
As long as everything is practically immutable (e.g. using the final keyword) and you use an unmodifiableMap, everything is fine.
I would suggest the following UnmodifiableData:
public class UnmodifiableData {
    final Map<Integer, Spike> bigSpikes;

    public UnmodifiableData(Map<Integer, Spike> bigSpikes) {
        this.bigSpikes = Collections.unmodifiableMap(new HashMap<>(bigSpikes));
    }
    ....
}
Your plan should work fine. You do not need to synchronize reads, only writes.
If, however, in the future you wish to cache bigSpikes so that all threads get the same map then you need to be more careful about synchronisation.
If you use ConcurrentHashMap, it will do all the synchronization work for you. That's better than doing the synchronization yourself around an ordinary HashMap.
Since allSpikes is initialized before you start the threads, it's safe. Concurrency problems appear only when a thread writes to a resource while others read from it.

How to Ensure Memory Visibility in Java when passing data across threads

I have a producer-consumer-like pattern where some threads are creating data and periodically passing chunks of that data to be consumed by some other threads.
Keeping the Java Memory Model in mind, how do I ensure that the data passed to the consumer thread has full 'visibility'?
I know there are data structures in java.util.concurrent like ConcurrentLinkedQueue that are built specifically for this, but I want to do this as low-level as possible without utilizing those, and have full transparency on what is going on under the covers to ensure the memory-visibility part.
If you want "low level" then look into volatile and synchronized.
To transfer data, you need a field somewhere available to all threads. In your case it really needs to be some sort of collection to handle multiple entries. If you made the field final, referencing, say, a ConcurrentLinkedQueue, you'd pretty much be done. The field could be made public and everyone could see it, or you could make it available with a getter.
If you use an unsynchronized queue, you have more work to do, because you have to manually synchronize all access to it, which means you have to track down all usages; not easy when there's a getter method. Not only do you need to protect the queue from simultaneous access, you must make sure interdependent calls end up in the same synchronized block. For instance:
if (!queue.isEmpty()) obj = queue.remove();
If the whole thing is not synchronized, queue is perfectly capable of telling you it is not empty, then throwing a NoSuchElementException when you try to get the next element. (ConcurrentLinkedQueue's interface is specifically designed to let you do operations like this with one method call. Take a good look at it even if you don't want to use it.)
The simple solution is to wrap the queue in another object whose methods are carefully chosen and all synchronized. The wrapper class, even if it's backed by a LinkedList or ArrayList, will now act (if you do it right) like CLQ, and it can be freely released to the rest of the program.
So you would have what is really a global field with an immutable (final) reference to a wrapper class, which contains a LinkedList (for example) and has synchronized methods that use the LinkedList to store and access data. The wrapper class, like CLQ, would be thread-safe.
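As a rough sketch of such a wrapper (the class name is made up; the point is that every access goes through a synchronized method on the same object):
import java.util.ArrayDeque;
import java.util.Deque;

class SafeQueue<T> {
    private final Deque<T> queue = new ArrayDeque<>();

    public synchronized void put(T item) {
        queue.addLast(item);
    }

    // Folds the isEmpty/remove pair into one atomic operation and
    // returns null instead of throwing when the queue is empty.
    public synchronized T poll() {
        return queue.pollFirst();
    }
}
Because every method synchronizes on the same monitor, the Java Memory Model guarantees that whatever the producer wrote before put() is visible to the consumer after poll().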
Some variants on this might be desirable. It might make sense to combine the wrapper with some other high-level class in your program. It might also make sense to create and make available instances of nested classes: perhaps one that only adds to the queue and one that only removes from it. (You couldn't do this with CLQ.)
A final note: having synchronized everything, the next step is to figure out how to unsynchronize (to keep threads from waiting too much) without breaking thread safety. Work really hard on this, and you'll end up rewriting ConcurrentLinkedQueue.

Java Multithreaded Caching with Single Updater Thread

I have a web service that has ~1k request threads running simultaneously on average. These threads access data from a cache (currently ehcache). When an entry in the cache expires, the thread that hits the expired entry tries to get the new value from the DB, while the other threads that also hit this entry block, i.e., I use the BlockingEhCache decorator. Instead of having the other threads wait on the "fetching thread", I would like them to use the "stale" value corresponding to the "missed" key. Are there any 3rd-party ehcache decorators for this purpose? Do you know of any other caching solutions that have this behavior? Other suggestions?
I don't know EHCache well enough to give specific recommendations for solving your problem, so I'll outline what I would do without EHCache.
Let's assume all the threads access this cache through a service interface, called FooService, and a service bean called SimpleFooService. The service will have the methods required to get the data needed (which is also cached). This way you're hiding the fact that it's cached from the frontend (the HTTP request objects).
Instead of simply storing the data to be cached in a property of the service, we'll make a special object for it. Let's call it FooCacheManager. It will store the cache in a property of FooCacheManager (let's say it's of type Map). It will have getters to access the cache. It will also have a special method called reload(), which will load the data from the DB (by calling a service method to get the data, or through the DAO) and replace the contents of the cache (saved in the property).
The trick here is as follows:
Declare the cache property in FooCacheManager as an AtomicReference (introduced in Java 1.5). This guarantees thread safety when you read from and assign to it. Your read/write actions will never collide or read a half-written value.
The reload() will first load the data into a temporary map, and when it's finished it will assign the new map to the property saved in FooCacheManager. Since the property is an AtomicReference, the assignment is atomic, so you're basically swapping in the new map in an instant, without any need for locking.
TTL implementation - have FooCacheManager implement the Quartz Job interface, making it effectively a Quartz job. In the job's execute method, have it run reload(). In the Spring XML, define this job to run every xx minutes (your TTL), which can also be defined in a property file if you use PropertyPlaceholderConfigurer.
This approach is effective because the reading threads:
- don't block on reads, and
- don't call isExpired() on every read, which would be ~1k calls per second.
Also, the writing thread doesn't block when writing the data.
If this wasn't clear, I can add example code.
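For illustration, a minimal sketch of the FooCacheManager idea described above (the Foo type is a placeholder, and the DAO call and Quartz wiring are stubbed out):
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

class FooCacheManager {
    static class Foo { }

    private final AtomicReference<Map<String, Foo>> cache =
            new AtomicReference<>(Collections.<String, Foo>emptyMap());

    // Readers never block; they just dereference whatever map is current.
    Map<String, Foo> getCache() {
        return cache.get();
    }

    // Called by the scheduled job every TTL interval.
    void reload() {
        Map<String, Foo> fresh = new HashMap<>();
        // ... load the data from the DB via the DAO into 'fresh' ...
        cache.set(Collections.unmodifiableMap(fresh)); // atomic swap, no locking
    }
}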
Since ehcache removes stale data, a different approach can be to refresh data with a probability that increases as the expiration time approaches, and is 0 when the expiration time is "sufficiently" far away.
So, if thread 1 needs some data element, it might refresh it, even though the data is not old yet.
In the meantime, if thread 2 needs the same data, it might use the existing value (while the refresh has not finished yet). It is also possible that thread 2 tries to do a refresh too.
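A hypothetical sketch of that probability curve (numbers and method names are invented; the idea is just that the refresh chance climbs from 0 to 1 over the last part of an entry's lifetime):
import java.util.concurrent.ThreadLocalRandom;

class EarlyRefreshPolicy {
    // Returns true if this reader should refresh the entry ahead of expiry.
    // earlyWindow is the fraction of the TTL during which early refreshes may
    // happen, e.g. 0.2 means the last 20% of the entry's lifetime.
    boolean shouldRefresh(long loadedAtMillis, long ttlMillis, double earlyWindow) {
        long age = System.currentTimeMillis() - loadedAtMillis;
        long earlyStart = (long) (ttlMillis * (1.0 - earlyWindow));
        if (age < earlyStart) {
            return false; // expiry is still "sufficiently" far away
        }
        double probability = (age - earlyStart) / (double) (ttlMillis - earlyStart);
        return ThreadLocalRandom.current().nextDouble() < probability;
    }
}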
If you are working with references (the updater thread loads the object and then simply changes the reference in the cache), then no separate synchronization is required for get and set operations on the cache.
