Design AppServer Interview Discussion - java

I encountered the following question in a recent System Design Interview:
Design an AppServer that interfaces with a Cache and a DB.
I came up with this:
public class AppServer {
    public Database DB;
    public Cache cache;

    public Value get(Key k) {
        Value res = cache.get(k);
        if (res == null) {
            res = DB.get(k);
            cache.set(k, res);
        }
        return res;
    }

    public void set(Key k, Value v) {
        cache.set(k, v);
        DB.set(k, v);
    }
}
This code is fine and works correctly, but the follow-ups to the question are:
What if there are multiple threads?
What if there are multiple instances of the AppServer?
Suddenly AppServer performance degrades badly, and we find out this is because our cache is consistently missing. The cache size is fixed (it is already the largest it can be). How can we prevent this?
Response:
I answered that we can use locks or condition variables. In Java, we can add synchronized to each method to allow for mutual exclusion, but the interviewer mentioned that this isn't very efficient and wanted only the critical parts synchronized.
I thought that we only need to synchronize the two set lines in void set(Key k, Value v) and the one cache.set call in Value get(Key k); however, the interviewer pushed for also synchronizing res = DB.get(k);. I agreed with him in the end, but I don't fully understand why. Don't threads have independent stacks and a shared heap? So when a thread executes get, it stores res in a local variable on its stack frame; even if another thread executes get at the same time, the former thread retains its fetched value, and each thread then sets its respective value.
How can we handle multiple instances of the AppServer?
I came up with a distributed queue solution like Kafka: every time we perform a set/get command, we queue that command. He mentioned that set is OK because the action sets a value in the cache/DB, but how would you return the correct value for a get? Can someone explain this?
Are there also possible solutions using a versioning system or an event system?
Possible solutions:
L1, L2, L3 caches - layers and more caches
Regional / segmentation caches - use different caches for different user groups.
Any other ideas?
Will upvote all insightful responses :)

1
Although JDBC is "supposed" to be thread safe, some drivers aren't, and I'm going to assume that Cache isn't thread safe either (although most caches should be thread safe), so in that case you would need to make the following changes to your code:
Make both fields final
Synchronize the ENTIRE get(...) method
Synchronize the ENTIRE set(...) method
Assuming there is no other way to access those fields, the correctness of your get(...) method depends on two things: first, that updates from the set(...) method are visible, and second, that on a cache miss the value is fetched and stored by only a single thread. You need to synchronize because the idea is to have only one thread perform the expensive DB query when there is a cache miss. If you do not synchronize the entire get(...) method, or you split the synchronized statement, it is possible for another thread to also see a cache miss between the lookup and the insertion.
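Roughly, those three changes applied to the original code would look like this. This is a sketch only: Database, Cache, Key and Value are the poster's types, assumed not to be thread safe themselves.
public class AppServer {
    private final Database db;   // final fields: safe publication
    private final Cache cache;

    public AppServer(Database db, Cache cache) {
        this.db = db;
        this.cache = cache;
    }

    // The whole method is one critical section, so the miss check, the DB
    // read and the cache fill happen as a single atomic step; a second
    // thread that would also miss waits here instead of issuing its own
    // expensive DB query.
    public synchronized Value get(Key k) {
        Value res = cache.get(k);
        if (res == null) {
            res = db.get(k);
            cache.set(k, res);
        }
        return res;
    }

    // Same monitor as get(), so a set() can't interleave with the
    // read-then-fill above and leave the cache and DB disagreeing.
    public synchronized void set(Key k, Value v) {
        cache.set(k, v);
        db.set(k, v);
    }
}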
The way I would answer this question is honestly just to toss the entire thing. I would look at how JCIP (Java Concurrency in Practice) builds its cache and base my answer on that.
2
I think your queue solution is fine.
I believe your interviewer means that if another instance of AppServer did not have cached what was already set(...) by a different instance, it would look up and find the correct value in the DB. This breaks down with multiple threads, because two threads can set(...) conflicting values; the caches would then hold two different values while, depending on the thread safety of your DB, the DB might not even have the value at all.
Ideally, you'd never create more than a single instance of your AppServer.
3
I don't have enough experience to evaluate this question specifically, but perhaps an LRU cache would improve performance somewhat, or a hash ring (consistent hashing). It might be a stretch, but if you wanted to throw something out there, perhaps even using ML to determine the best values to preload or retain at certain times of the day could also work.
If you are always missing values from your cache, there is no way to improve your code; performance will be bounded by your database.

Related

Is get() a thread-safe operation in Guava's cache?

I found out that the put and get-with-CacheLoader operations use a ReentrantLock under the hood, but why is this not implemented for the getIfPresent operation?
Here is get, which is used by getIfPresent:
@Nullable
V get(Object key, int hash) {
    try {
        if (this.count != 0) {
            long now = this.map.ticker.read();
            ReferenceEntry<K, V> e = this.getLiveEntry(key, hash, now);
            Object value;
            if (e == null) {
                value = null;
                return value;
            }
            value = e.getValueReference().get();
            if (value != null) {
                this.recordRead(e, now);
                Object var7 = this.scheduleRefresh(e, e.getKey(), hash, value, now, this.map.defaultLoader);
                return var7;
            }
            this.tryDrainReferenceQueues();
        }
        Object var11 = null;
        return var11;
    } finally {
        this.postReadCleanup();
    }
}
And put:
@Nullable
V put(K key, int hash, V value, boolean onlyIfAbsent) {
    this.lock();
    .....
Is the only way to achieve thread safety for basic get/put operations to use synchronization on the client side?
Even if getIfPresent did use locks, that won't help. It's more fundamental than that.
Let me put that differently: Define 'threadsafe'.
Here's an example of what can happen in a non-threadsafe implementation:
You invoke .put on a plain jane j.u.HashMap, not holding any locks.
Simultaneously, a different thread also does that.
The map is now in a broken state. If you iterate through the elements, the first put doesn't show up at all, the second put shows up in your iteration, and a completely unrelated key has disappeared. But calling .get(k) on that map with the second thread's key doesn't find it, even though it is returned by .entrySet(). This makes no sense and breaks all the rules of j.u.HashMap. The spec of HashMap does not explain any of this, other than saying 'I am not threadsafe' and leaving it at that.
That's an example of NOT thread safe.
Here is an example of perfectly fine:
2 threads begin.
Some external event (e.g. a log) shows that thread 1 is very, very slightly ahead of thread 2 - but if the notion of 'ahead' is relevant to your code, your code is broken. That's just not how multicore works.
Thread 1 adds a thing to a concurrency-capable map, and logs that it has done so.
Thread 2 logs that it starts an operation (from the few things you have observed, it seems to be running slightly 'later', so I guess we're 'after' the point where T1 added the thing), now queries for the thing, and does not get a result. [1]
That's fine. That's still thread safe. Thread safe doesn't mean every interaction with an instance of that data type can be understood in terms of 'first this thing happened, then that thing happened'. Wanting that is very problematic, because the only way the computer can really give you that kind of guarantee is to disable all but a single core and run everything very very slowly. The point of a cache is to speed things up, not slow things down!
The problem with the lack of guarantees here is that if you run multiple separate operations on the same object, you run into trouble. Here's some pseudocode for a bank ATM machine that will go epically wrong in the long run:
Ask user how much money they want (say, €50,-).
Retrieve account balance from a 'threadsafe' Map<Account, Integer> (maps account ID to cents in account).
Check if the balance is at least €50,-. If not, show an error. If yes...
Spit out €50,-, and update the threadsafe map with .put(acct, balance - 5000).
Everything perfectly threadsafe. And yet this is going to go very very wrong - if the user uses their card at the same time they are in the bank withdrawing money via the teller, either the bank or the user is going to get very lucky here. I'd hope it's obvious to see how and why.
The upshot is: If you have dependencies between operations there is nothing you can do with 'threadsafe' concepts that can possibly fix it; the only way is to actually write code that explicitly marks off these dependencies.
The only way to write that bank code is to use some form of locking. Basic locking or optimistic locking, either way is fine, but locking of some sort. It has to look like this: [2]
start some sort of transaction;
fetch account balance;
deal with insufficient funds;
spit out cash;
update account balance;
end transaction;
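As a rough Java rendering of that pseudocode - an illustrative sketch only, where the balances map and dispenseCash are made-up stand-ins, not a real banking API - one lock is held across the entire fetch / check / dispense / update sequence:
import java.util.HashMap;
import java.util.Map;

class Atm {
    private final Map<String, Integer> balances = new HashMap<>(); // account -> cents
    private final Object txLock = new Object();

    boolean withdraw(String account, int cents) {
        synchronized (txLock) {                       // start transaction
            Integer balance = balances.get(account);  // fetch account balance
            if (balance == null || balance < cents) {
                return false;                         // insufficient funds
            }
            dispenseCash(cents);                      // spit out cash (see [2])
            balances.put(account, balance - cents);   // update account balance
            return true;
        }                                             // end transaction
    }

    private void dispenseCash(int cents) {
        // hardware call elided
    }
}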
Now guava's code makes perfect sense:
There is no such thing as 'earlier' and 'later'. You need to stop thinking about multicore in that way, unless you explicitly use primitives that establish these things. The cache interface does have these. Use the right operation! getIfPresent will get you the cached value if it is possible for your current thread to get at that data. If it is not, it returns null; that's what that call does.
If instead you want this common operation: "Get me the cached value. However, if it is not available, then run this code to calculate the value, cache the result, and return it to me. In addition, ensure that if 2 threads simultaneously end up running this exact operation, only one thread runs the calculation, and the other will wait for that one (don't say the 'first' one, that's not how you should think about threads) to finish and use its result instead" - then use the right call for that: cache.get(key, () -> calculateValueForKey(key)). As the docs explicitly call out, this will wait for another thread that is also 'loading' the value (that's what the Guava cache calls the calculation process).
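For illustration, here is a minimal usage sketch of that call. expensiveLoad is a hypothetical loader standing in for your computation, and note that Cache.get(key, Callable) throws ExecutionException:
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.ExecutionException;

class CacheDemo {
    private final Cache<String, String> cache =
            CacheBuilder.newBuilder().maximumSize(1_000).build();

    String getOrLoad(String key) throws ExecutionException {
        // Runs the loader at most once per absent key, even under contention:
        // a second thread asking for the same key blocks until the first
        // loader finishes and then uses its result.
        return cache.get(key, () -> expensiveLoad(key));
    }

    private String expensiveLoad(String key) {
        return "value-for-" + key; // placeholder for the real computation
    }
}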
No matter what you invoke from the Cache API, you can't 'break it', in the sense that I broke that HashMap. The cache API does this partly by using locks (such as ReentrantLock for mutating operations on it), and partly by using a ConcurrentHashMap under the hood.
[1] Often log frameworks end up injecting an actual explicit lock into the proceedings, and thus you do often get guarantees in this case, but only 'by accident' because of the log framework. This isn't a guarantee (maybe you're logging to separate log files, for example!) and often what you 'witness' may be a lie. For example, maybe you have 2 log statements that both log to separate files (and don't lock each other out at all), and they log the timestamp as part of the log. The fact that one log line says '12:00:05' and the other says '12:00:06' means nothing - the log thread fetches the current time, creates a string describing the message, and tells the OS to write it to the file. You obviously get absolutely no guarantee that the 2 log threads run at identical speed. Maybe one thread fetches the time (12:00:05), creates the string, wants to write to the disk, but the OS switches to the other thread before the write goes through; the other thread is the other logger, it reads the time (12:00:06), makes the string, writes it out, finishes up, and then the first logger continues and writes its content. Tada: 2 threads where you 'observe' that one thread is 'earlier', but that is incorrect. Perhaps this example further highlights why thinking about threads in terms of which one is 'first' steers you wrong.
[2] This code has the additional complication that you're interacting with systems that cannot be transactional. The point of a transaction is that you can abort it; you cannot abort the user grabbing a bill from the ATM. You solve that by logging that you're about to spit out the money, then spit out the money, then log that you have spit out the money. And finally write to this log that it has been processed in the user's account balance. Other code needs to check this log and act accordingly. For example, on startup the bank's DB machine needs to flag 'dangling' ATM transactions and will have to get a human to check the video feed. This solves the problem where someone trips over the power cable of the bank DB machine juuust as the user is about to grab the banknote from the machine.
It seems that the Guava cache implements the ConcurrentMap API:
class LocalCache<K, V> extends AbstractMap<K, V> implements ConcurrentMap<K, V>
so the base get and put operations should be thread safe by nature.

Get/Set the value in the cache using the AtomicReference in java

I've already posted this question on the Code Review site (https://codereview.stackexchange.com/questions/158999/get-set-the-value-in-the-cache-using-the-atomicreference-in-java), but I thought of posting it here as well so that it reaches a wider audience and I can get a quicker solution.
I have the below code, which gets and sets data in the cache using a synchronized block, and I want to know if I can optimize it:
private final AtomicReference<Integer> cachedIntRef = new AtomicReference<Integer>();

public int getValue() {
    boolean wasCached = true;
    Integer cachedInt = cachedIntRef.get();
    if (cachedInt == null) {
        synchronized (cachedIntRef) {
            cachedInt = cachedIntRef.get();
            if (cachedInt == null) {
                wasCached = false;
                // Make DB call to get the data and update the cache.
                cachedInt = baseDao.getCloudMaximumWeight();
                cachedIntRef.set(cachedInt);
            }
        }
    }
    return cachedInt;
}
I want to know whether there is any way I can remove the synchronized block and optimize further, or whether this code is already optimal.
EDIT: I'll remove the question from one of the sites if I get an answer on either. Also, when I profile my application, sometimes even with a small number of threads I see threads blocking on the synchronized piece of code, which made me think that since the code is using an AtomicReference, I could somehow get rid of synchronized, or that there is some better way to optimize the code.
I want to know whether there is any way I can remove the synchronized block and optimize further, or whether this code is already optimal.
I assume that optimizing the code means removing the synchronized block. The problem with that thinking is that your DAO call is most likely significantly more expensive than synchronized. Any IO (especially to a remote database) is going to be at least 4+ orders of magnitude more expensive than the locking.
That said, you can remove the synchronized block if you don't mind multiple DAO calls when initializing the cache. If the DAO calls are inexpensive, then having two threads make them may not be a problem. There is a race condition over whose answer ends up in the cache, but chances are their results will be the same anyway. I often do this and accept that as the application starts up, the first couple of calls are going to be more expensive while the cache warms. But are two threads making the same DAO request ever going to be faster than one thread doing it and one waiting for the other to finish?
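A sketch of that lock-free trade-off, assuming a DAO shaped like the poster's (Dao and getCloudMaximumWeight here are stand-ins mirroring the question's code):
import java.util.concurrent.atomic.AtomicReference;

class WeightCache {
    interface Dao {
        Integer getCloudMaximumWeight();
    }

    private final AtomicReference<Integer> cachedIntRef = new AtomicReference<>();
    private final Dao baseDao;

    WeightCache(Dao baseDao) {
        this.baseDao = baseDao;
    }

    int getValue() {
        Integer cached = cachedIntRef.get();
        if (cached == null) {
            // No lock: several threads may each make this DAO call...
            Integer loaded = baseDao.getCloudMaximumWeight();
            // ...but only the first result is published; losers adopt the
            // winner's value so every caller sees the same cached number.
            cachedIntRef.compareAndSet(null, loaded);
            cached = cachedIntRef.get();
        }
        return cached;
    }
}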
If there are a number of different DAO calls, then you can try some sort of lock segregation (lock striping) so that not all cache requests go through the same lock. This would allow some parallelization, which might help. I can't tell if your code is the specific case or just an example of the problem. This is how ConcurrentHashMap works, for example.
But really, I would make sure this section of code has performance problems before worrying too much about it. And even if a profiler says it is a primary time sink, it may just be that the DAO calls are the most expensive part of the equation, so saving a couple with synchronization would be the best way to speed it up anyway. You can take out the DAO calls and replace them with a straight assignment if you need to see whether it is the synchronized or the dao.* calls that are the problem.
Try using a volatile Integer instead. Maybe I am missing something here, but I don't see the use case for the AtomicReference here.

Updating integer atomically over multiple JVMs for every key

We have a requirement where the problem can be narrowed down as follows.
There are multiple keys, and each key maps to an integer.
When a key is received on a JVM, you need to retrieve the int value from the shared memory, increment it, and then put the incremented value back into shared memory.
So when two JVMs or two threads read the same value, the update of one of them should fail consistently, so that you do not lose any increment done by any thread on any JVM.
Once an update fails, you read again from shared memory, increment, and update again, until the update succeeds or you have exhausted some N number of retries.
Right now we are using Infinispan with optimistic locking, but the behavior is not consistent. Please find the link to that thread:
https://developer.jboss.org/message/914490
Is there any other technology that will fit this requirement well?
Synchronizing between threads is easy, but between JVMs is extremely hard, especially if you need to support multiple platforms. I would suggest centralising the update code using one of the following methods, both of which "contract out" the data update task:
Publish a trivial REST API from a single process that knows how to do the update task, and serialize the requests.
Use a relational database to hold the counts, and make sure the client code correctly rolls back transactions when they don't succeed.
Probably not what you wanted to hear, but either method will work well.
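To make the second option concrete, here is a hedged JDBC sketch of optimistic increment-with-retry. The counters table and its column names are assumptions; the trick is that putting the previously read value in the UPDATE's WHERE clause turns the statement into a compare-and-set, so a concurrent increment makes executeUpdate() return 0 and triggers a retry:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class CounterStore {
    // Table assumed: counters(counter_key VARCHAR PRIMARY KEY, counter_value INT).
    static int incrementWithRetry(Connection conn, String key, int maxRetries)
            throws SQLException {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            int current;
            try (PreparedStatement read = conn.prepareStatement(
                    "SELECT counter_value FROM counters WHERE counter_key = ?")) {
                read.setString(1, key);
                try (ResultSet rs = read.executeQuery()) {
                    if (!rs.next()) throw new SQLException("unknown key: " + key);
                    current = rs.getInt(1);
                }
            }
            try (PreparedStatement update = conn.prepareStatement(
                    "UPDATE counters SET counter_value = ? "
                    + "WHERE counter_key = ? AND counter_value = ?")) {
                update.setInt(1, current + 1);
                update.setString(2, key);
                update.setInt(3, current);
                if (update.executeUpdate() == 1) return current + 1; // our CAS won
            }
            // lost the race: another JVM/thread updated first; re-read and retry
        }
        throw new SQLException("increment failed after " + maxRetries + " retries");
    }
}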

Why should we use HashMap in multi-threaded environments?

Today I was reading about how HashMap works in Java. I came across a blog post and I am quoting directly from it. I have gone through this article on Stack Overflow, but I still want to understand the details.
So the answer is: yes, there is a potential race condition while
resizing a HashMap in Java, if two threads at the same time find that
the HashMap needs resizing and they both try to resize it. During
resizing, the elements in a bucket, which are stored in a linked list,
get reversed in order during their migration to the new bucket,
because Java's HashMap doesn't append new elements at the tail;
instead it appends them at the head, to avoid tail traversal.
If the race condition happens, you will end up with an infinite loop.
It states that, as HashMap is not thread-safe, a potential race condition can occur during resizing. I have seen people extensively using HashMaps even in our office projects, knowing they are not thread safe. If it is not thread safe, why should we use HashMap at all? Is it just lack of knowledge among developers, as they might not be aware of structures like ConcurrentHashMap, or is there some other reason? Can anyone shed light on this puzzle?
I can confidently say ConcurrentHashMap is a pretty ignored class. Not many people know about it and not many people care to use it. The class offers a very robust and fast way of synchronizing a Map collection. I have read a few comparisons of HashMap and ConcurrentHashMap on the web. Let me just say that they're totally wrong: there is no way you can compare the two, as one offers synchronized access to a map while the other offers no synchronization whatsoever.
What most of us fail to notice is that while our applications, web applications especially, work fine during the development and testing phases, they usually fall over under heavy (or even moderately heavy) load. This is because we expect our HashMaps to behave a certain way, but under load they usually misbehave. Hashtables do offer concurrent access to their entries, with a small caveat: the entire map is locked to perform any sort of operation.
While this overhead is ignorable in a web application under normal load, under heavy load it can lead to delayed response times and overtaxing of your server for no good reason. This is where ConcurrentHashMaps step in. They offer all the features of Hashtable with performance almost as good as a HashMap. ConcurrentHashMaps accomplish this via a very simple mechanism.
Instead of a map-wide lock, the collection maintains a list of 16 locks by default, each of which guards a segment of the map's buckets. This effectively means that 16 threads can modify the collection at a single time (as long as they're all working on different segments). In fact, no operation performed by this collection locks the entire map.
There are several aspects to this. First of all, most of the collections are not thread safe. If you want a thread-safe collection, you can call Collections.synchronizedCollection or Collections.synchronizedMap, as in the sketch below.
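A minimal sketch of the wrapper in use. The point people forget is that compound actions such as iteration still need manual locking on the wrapper, as the javadoc for Collections.synchronizedMap requires:
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class SyncMapDemo {
    public static void main(String[] args) {
        Map<String, Integer> safe = Collections.synchronizedMap(new HashMap<>());
        safe.put("a", 1); // individual calls are synchronized for you

        // Iteration spans many calls, so the caller must hold the lock
        // across the whole traversal:
        synchronized (safe) {
            for (Map.Entry<String, Integer> e : safe.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}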
But the main point is this: you want your threads to run in parallel, with no synchronization at all - if possible, of course. This is something you should strive for, but of course it cannot be achieved every time you deal with multithreading.
But there is no point in making the default collection/map thread safe, because sharing a map should be an edge case, and synchronization means more work for the JVM.
In a multithreaded environment, you have to ensure that the map is not modified concurrently, or you can corrupt its internal state, because it is not synchronized in any way.
Just check the API; I previously thought the same way.
I thought that the solution was to use the static Collections.synchronizedMap method. I was expecting it to return a better implementation, but if you look at the source code you will realize that all they do in there is wrap the map with synchronized calls on a mutex, which happens to be the same map, not allowing reads to occur concurrently.
In the Jakarta Commons project, there is an implementation called FastHashMap. This implementation has a property called fast. If fast is true, then reads are non-synchronized and writes perform the following steps:
Clone the current structure
Perform the modification on the clone
Replace the existing structure with the modified clone
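A minimal sketch of those three steps (illustrative only, not the actual Commons code): readers take no lock at all, and the volatile field guarantees they always see a fully built copy.
import java.util.HashMap;
import java.util.Map;

class CopyOnWriteCache {
    private volatile Map<String, String> map = new HashMap<>();

    public String get(String key) {
        return map.get(key);                             // unsynchronized read
    }

    public synchronized void put(String key, String value) {
        Map<String, String> clone = new HashMap<>(map);  // 1. clone current structure
        clone.put(key, value);                           // 2. modify the clone
        map = clone;                                     // 3. replace with the modified clone
    }
}
The code below, by contrast, guards one shared map with a ReentrantReadWriteLock instead of cloning: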
public class FastSynchronizedMap<K, V> implements Map<K, V>, Serializable {
    private final Map<K, V> m;
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    .
    .
    .
    public V get(Object key) {
        lock.readLock().lock();
        V value = null;
        try {
            value = m.get(key);
        } finally {
            lock.readLock().unlock();
        }
        return value;
    }

    public V put(K key, V value) {
        lock.writeLock().lock();
        V v = null;
        try {
            v = m.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
        return v;
    }
    .
    .
    .
}
Note that we use a try/finally block: we want to guarantee that the lock is released no matter what problem is encountered in the block.
This implementation works well when you have almost no write operations and mostly read operations.
A HashMap can be used when only a single thread has access to it. However, when multiple threads start accessing the HashMap, there will be 2 main problems:
1. Resizing of the HashMap is not guaranteed to work as expected.
2. ConcurrentModificationException can be thrown. This can also be thrown when it is accessed by a single thread that reads and writes to the HashMap at the same time.
A workaround for using HashMap in a multi-threaded environment is to initialize it with the expected number of entries, hence avoiding the need for resizing, as sketched below.
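A sketch of that presizing, assuming the default load factor of 0.75 (note that this only avoids the resize race; it does not make HashMap thread safe in general):
import java.util.HashMap;
import java.util.Map;

class PresizedMap {
    // With the default load factor of 0.75, a HashMap resizes once it holds
    // more than capacity * 0.75 entries, so size the capacity past that point.
    static Map<String, String> create(int expectedEntries) {
        int capacity = (int) (expectedEntries / 0.75f) + 1;
        return new HashMap<>(capacity);
    }
}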

How to make cache thread safe

I have an instance of an object which performs a very complex operation.
So the first time, I create an instance and save it in my own custom cache.
From then on, whichever thread comes, if it finds that a ready-made object is already present in the cache, it takes it from the cache for the sake of performance.
I was worried about what happens if two threads get the same instance. Is there a chance that the two threads can corrupt each other?
Map<String, SoftReference<CacheEntry<ClassA>>> AInstances= Collections.synchronizedMap(new HashMap<String, SoftReference<CacheEntry<ClassA>>>());
There are many possible solutions:
Use an existing caching solution like EHcache
Use the Spring framework, which has an easy way to cache the results of a method with a simple @Cacheable annotation
Use one of the synchronized maps like ConcurrentHashMap
If you know all keys in advance, you can use lazy init code (a typical shape of it is sketched after this list). Note that everything in such code is there for a reason; change anything in get() and it will break eventually (eventually == "your unit tests will work and it will break after running one year in production without any problem whatsoever").
ConcurrentHashMap is the simplest to set up, but it has no simple way to say "initialize the value of a key once".
Don't try to implement the caching by yourself; multithreading in Java has become a very complex area with Java 5 and the advent of multi-core CPUs and memory barriers.
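The lazy init code mentioned above is not shown here; its typical shape is double-checked locking, sketched below under the assumption of a single cached object. Both the volatile modifier and the second null check are load-bearing:
class LazyCache {
    private volatile Expensive value; // volatile: safe publication of the instance

    Expensive get() {
        Expensive local = value;          // one volatile read
        if (local == null) {
            synchronized (this) {
                local = value;            // re-check: another thread may have won
                if (local == null) {
                    local = new Expensive();
                    value = local;        // publish only the fully built object
                }
            }
        }
        return local;
    }

    static class Expensive { /* costly to construct */ }
}
On Java 8+, ConcurrentHashMap.computeIfAbsent(key, fn) expresses "initialize once per key" directly and is usually the simpler choice.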
[EDIT] yes, this might happen even though the map is synchronized. Example:
SoftReference<...> value = cache.get( key );
if( value == null ) {
    value = computeNewValue( key );
    cache.put( key, value );
}
If two threads run this code at the same time, computeNewValue() will be called twice. The method calls get() and put() are themselves safe - several threads can try to put at the same time and nothing bad will happen - but that doesn't protect you from the problems which arise when you call several methods in succession and the state of the map must not change between them.
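One possible fix, reusing the question's declarations: hold the map's own monitor across the whole check-then-act sequence (the javadoc for Collections.synchronizedMap requires synchronizing on the returned map for compound actions, so this is the right mutex):
SoftReference<CacheEntry<ClassA>> value;
synchronized (cache) {                // same lock the wrapper uses internally
    value = cache.get(key);
    if (value == null) {
        value = computeNewValue(key); // now at most one thread computes it
        cache.put(key, value);
    }
}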
Assuming you are talking about singletons, simply use the initialization-on-demand holder idiom to make sure your "check" works across the whole JVM. This will also make sure all threads which request the same object concurrently wait till the initialization is over and are given back only a valid object instance.
Here I'm assuming you want a single instance of the object. If not, you might want to post some more code.
OK, if I understand your problem correctly, you are worried that two threads changing the state of the shared object will corrupt each other.
The short answer is yes, they will.
If the object is expensive to create but is needed in a read-only manner, I suggest you make it immutable; this way you get the benefit of fast access and thread safety at the same time.
If the state should be writable, but you don't actually need threads to see each other's updates, you can simply load the object once into an immutable cache and just return copies to anyone who asks for it.
Finally, if your object needs to be writable and shared (for reasons other than it just being expensive to create), then, my friend, you need to handle thread safety. I don't know your case, but you should take a look at the synchronized keyword, locks, the Java 5 concurrency features, and atomic types. I am sure one of them will satisfy your need, and I sincerely hope your case is one of the first two :)
If you only have a single instance of the Object, have a quick look at:
Thread-safe cache of one object in java
Otherwise, I can't recommend the Google Guava library enough; in particular, look at the MapMaker class.
