Building a user-level cache in a Spring Boot managed application - Java

I have built a Spring Boot application for the first time. I have about 10-12 user-specific caches and 1 global cache in my application. The user-level caches are HashMap based.
class UserCache1 {
    private Map<Integer, MyData> dataMap = new HashMap<>();

    public Map<Integer, MyData> getDataMap() {
        return dataMap;
    }

    public void setDataMap(Map<Integer, MyData> dataMap) {
        this.dataMap = dataMap;
    }

    public MyData getData(int key) {
        // returns null when the key is absent
        if (dataMap.containsKey(key))
            return dataMap.get(key);
        else
            return null;
    }

    public void setData(int key, MyData data) {
        dataMap.put(key, data);
    }
}
Now there has to be one instance of UserCache1 for each registered user. I have several similar classes, each holding a different kind of data that multiple objects need in order to process user requests. Some of the classes would require 5-10 of these caches, such as UserCache1...UserCache5.
Below are my questions:
How would I tie UserCache1 to userId, which is the primary key of the user of this application? Do I need another class that holds the userId and a UserCache1? Wouldn't that mean creating one such wrapper class for each of these user-level caches just to associate the cache with the userId? Are there better options that I can use?
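For illustration, one direction would be a single registry keyed by userId rather than a wrapper class per cache (the UserCacheRegistry name and its fields are just placeholders, not something that exists in the application yet):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Placeholder sketch: one registry keyed by userId, holding that user's cache instances.
public class UserCacheRegistry {
    private final Map<Long, UserCache1> cache1ByUser = new ConcurrentHashMap<>();

    public UserCache1 cache1For(Long userId) {
        // lazily create the per-user cache on first access
        return cache1ByUser.computeIfAbsent(userId, id -> new UserCache1());
    }
}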
Based on the approach I take in #1, I would like to use DI to inject the required UserCacheX objects into the constructors of the classes that need these caches for their business logic. I don't think a constructor with 5+ such caches is an elegant approach. Would something like the holder class below make the most sense?
public class UserCache {
    private UserCache1 cache1;
    private UserCache2 cache2;
    ...
    private UserCacheN cacheN;

    UserCache(UserCache1 cache1, ..., UserCacheN cacheN) {
        this.cache1 = cache1;
        ....
        this.cacheN = cacheN;
    }

    /*
     use cache1...cacheN as needed in the body of this class
    */
}
In addition to the user-specific caches I have 1 or 2 global caches as well. I understand that Spring provides caching support, which I plan to use once I decide on the high-level design.
All my user-specific caches are HashMap based at this time. The content of most of these caches would NOT change during the lifetime of a user once registered, but I assume I should still expire the caches to manage memory efficiently and optimize performance; that can happen once I have successfully built the application.
I am planning to use Spring Boot and Hibernate to persist these objects to the datastore.
What design options would you recommend for this problem?

Related

Spring - Storing volatile data in memory

I'm developing a Spring Boot web application for managing gaming servers.
I want to have a cron job that queries the servers, checks whether they have crashed, and collects relevant data such as the number of players online. This data needs to be stored and shared among the services that require it. Since this data changes often and becomes invalid when the application stops, I don't want to persist these stats in the database, but in application memory.
Current implementation
Currently, my implementation is pretty naive: I keep a collection as a member field of the corresponding Spring service and store the server statuses there. However, I feel this is a really bad solution, as services should be stateless, and I also don't take concurrency into account.
Example code:
@Service
public class ServersServiceImpl implements ServersService {
    private final Map<Long, ServerStats> stats = new HashMap<>(); // Map server ID -> stats
    ...
    public void startServer(Long id) {
        // ... call service to actually start the server process
        serverStats.setRunning(true);
        stats.put(id, serverStats);
    }
    ...
}
Alternative: Using @Repository classes
I could move the collection with the data to classes annotated with @Repository, which would be semantically more correct. There, I would implement thread-safe logic for storing the data in a Java collection. Then I would inject this repository into the relevant services, as sketched after the repository code below.
@Repository
public class ServerStatsRepository {
    private final Map<Long, ServerStats> stats = new ConcurrentHashMap<>();
    ...
    public ServerStats getServerStats(Long id) {
        return stats.get(id);
    }

    public ServerStats updateServerStats(Long id, ServerStats serverStats) {
        return stats.put(id, serverStats);
    }
    ...
}
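For completeness, a rough sketch (my own illustration, not existing code) of how a service could then delegate to the injected repository and stay stateless itself:

@Service
public class ServersServiceImpl implements ServersService {
    private final ServerStatsRepository statsRepository;

    public ServersServiceImpl(ServerStatsRepository statsRepository) {
        // constructor injection: the mutable state lives in the repository, not the service
        this.statsRepository = statsRepository;
    }

    public void startServer(Long id) {
        // ... call service to actually start the server process
        ServerStats serverStats = new ServerStats(); // assumes a no-arg constructor, for illustration only
        serverStats.setRunning(true);
        statsRepository.updateServerStats(id, serverStats);
    }
}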
Using Redis also came to mind, but I don't want to add too much complexity to the app.
Is my proposed solution a valid approach? Would there be any better option of handling this problem?

Is there a Cacheable in C# similar to Java's?

In Java Spring Boot, I can easily enable caching with the @EnableCaching annotation and make methods cache their results with @Cacheable; that way, any call to a method with the exact same parameters will NOT execute the method body but will immediately return the cached result.
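For reference, the Java side I mean looks roughly like this (the service and cache names are only illustrative):

import java.math.BigDecimal;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching // switches on Spring's caching support
class CachingConfig { }

@Service
class PriceService {
    @Cacheable("prices") // repeated calls with the same productId are served from the "prices" cache
    public BigDecimal computePrice(String productId) {
        // the expensive work runs only on a cache miss
        return new BigDecimal(productId.length());
    }
}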
Is there something similar in C#?
What I did in the past was implement my own caching class and my own data structures, which was a big hassle. I just want an easy way for the program to cache the result and return the exact same result when the input parameters are the same.
EDIT: I don't want to use any third-party stuff, so no Memcached, no Redis, no RabbitMQ, etc. I'm just looking for a very simple and elegant solution like Java's @Cacheable.
Caches
A cache is a relatively small store of data that can be accessed very quickly. It essentially holds information that is likely to be used again. For example, web browsers typically use a cache to make web pages load faster by storing a copy of the webpage files locally on your computer.
Caching
Caching is the process of storing data in a cache. Caching in C# is very easy: System.Runtime.Caching.dll provides the types for working with caches. In this illustration I am using the following classes:
ObjectCache
MemoryCache
CacheItemPolicy
ObjectCache
: The abstract base class that represents an object cache and provides the base methods and properties for accessing it. It lives in System.Runtime.Caching.
MemoryCache
: This class also lives in System.Runtime.Caching and is the built-in implementation of an in-memory cache.
CacheItemPolicy
: Represents a set of eviction and expiration details for a specific cache entry.
.NET provides
System.Web.Caching.Cache - the default caching mechanism in ASP.NET. You can get an instance of this class via the Controller.HttpContext.Cache property, or via the singleton HttpContext.Current.Cache. This class is not meant to be created explicitly, because under the hood it uses another caching engine that is assigned internally. The simplest way to make your code work is the following:
public class DataController : System.Web.Mvc.Controller {
    public System.Web.Mvc.ActionResult Index() {
        List<object> list = new List<object>();
        HttpContext.Cache["ObjectList"] = list;               // add
        list = (List<object>)HttpContext.Cache["ObjectList"]; // retrieve
        HttpContext.Cache.Remove("ObjectList");                // remove
        return new System.Web.Mvc.EmptyResult();
    }
}
System.Runtime.Caching.MemoryCache - this class can be constructed in user code. It has a different interface and more features, such as update/remove callbacks, regions, monitors, etc. To use it you need to reference the System.Runtime.Caching library. It can also be used in an ASP.NET application, but you will have to manage its lifetime yourself.
var cache = new System.Runtime.Caching.MemoryCache("MyTestCache");
cache["ObjectList"] = list; // add
list = (List<object>)cache["ObjectList"]; // retrieve
cache.Remove("ObjectList"); // remove
You can write an extension method with get-or-create functionality: first, try to get the value from the cache; if it doesn't exist, compute it and store it in the cache:
public static class CacheExtensions
{
    public static async Task<T> GetOrSetValueAsync<T>(this ICacheClient cache, string key, Func<Task<T>> function)
        where T : class
    {
        // try to get the value from the cache
        var result = await cache.JsonGet<T>(key);
        if (result != null)
        {
            return result;
        }
        // cache miss: run the function and store the result in the cache
        result = await function();
        await cache.JsonSet(key, result);
        return result;
    }
}
ICacheClient is the interface you're extending. Now you can use:
await _cacheClient.GetOrSetValueAsync(key, () => Task.FromResult(value));

Vaadin 8 - application-wide cache

In my Vaadin 8 project, there are some objects that I keep in main memory for as long as the application is up and running.
For memory efficiency, I'm looking to keep this cache shared across all users, not a separate cache for each user. So the cache should be per application, i.e. a single set of these objects, and NOT per session.
How do I do this in Vaadin 8? Please note there's no Spring in this system, so doing it with Spring is not an option.
Excuse me if this is a naïve question; I'm not yet that savvy on Vaadin/web dev.
For an application-wide cache, just create a class with a public static cache that you can access from everywhere. The static object will be the same for every UI and session.
Since you didn't specify what you want to cache, I suppose you want to cache custom objects of your own, used for some sort of server-side logic.
To do so in plain Java, you can create a simple hash map to use as the cache. A VERY simple example would be:
public class GlobalCache {
    private static ConcurrentHashMap<String, Object> cacheMap = new ConcurrentHashMap<>();

    public static Object getObject(String key, Function<String, Object> creator) {
        return cacheMap.computeIfAbsent(key, creator);
    }
}
That wouldn't be a good cache since it won't invalidate its entries.
If you can add any libraries, you should add Guava. Guava provides a great cache implementation that you can use:
// Add this to your Gradle build:
dependencies {
    implementation group: 'com.google.guava', name: 'guava', version: '24.1-jre'
}

// And this will become your code:
public class GlobalCache {
    private static Cache<String, Object> cache =
            CacheBuilder.newBuilder().maximumSize(100).expireAfterWrite(5, TimeUnit.MINUTES).build();

    public static Object getObject(String key, Callable<? extends Object> creator) throws ExecutionException {
        return cache.get(key, creator);
    }
}
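A usage sketch of the Guava-backed version (the key and the loader call are just illustrative placeholders; the loader only runs on a cache miss):

try {
    // "app-settings" and loadSettingsFromDatabase() are hypothetical placeholders
    Object settings = GlobalCache.getObject("app-settings", () -> loadSettingsFromDatabase());
} catch (ExecutionException e) {
    // the loader threw; handle or rethrow as appropriate
}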

Clean Architecture and Cache Invalidation

I have an app that tries to follow the Clean Architecture and I need to do some cache invalidation but I don't know in which layer this should be done.
For the sake of this example, let's say I have an OrderInteractor with 2 use cases: getOrderHistory() and sendOrder(Order).
The first use case uses an OrderHistoryRepository and the second one uses an OrderSenderRepository. These repositories are interfaces with multiple implementations (MockOrderHistoryRepository and InternetOrderHistoryRepository for the first one). The OrderInteractor only interacts with these repositories through the interfaces in order to hide the real implementation.
The Mock version is a very simple dummy, but the Internet version of the history repository keeps some data in a cache to perform better.
Now, I want to implement the following: when an order is sent successfully, I want to invalidate the history cache, but I don't know exactly where I should perform the actual cache invalidation.
My first guess is to add an invalidateCache() method to the OrderHistoryRepository and call it at the end of the sendOrder() method inside the interactor. In the InternetOrderHistoryRepository, I will just have to implement the cache invalidation and I will be good. But I will also be forced to implement the method inside the MockOrderHistoryRepository, and it exposes to the outside the fact that some cache management is performed by the repository. I think that the OrderInteractor should not be aware of this cache management, because it is an implementation detail of the Internet version of the OrderHistoryRepository.
My second guess would be to perform the cache invalidation inside the InternetOrderSenderRepository when it knows that the order was sent successfully, but that would force this repository to know the InternetOrderHistoryRepository in order to get the cache key used by that repo for its cache management. And I don't want my OrderSenderRepository to have a dependency on the OrderHistoryRepository.
Finally, my third guess is to have some sort of CacheInvalidator (whatever the name) interface, with a Dummy implementation used when the repository is mocked and a Real implementation used when the Interactor is using the Internet repositories. This CacheInvalidator would be injected into the Interactor, and the selected implementation would be provided by a Factory that builds both the repository and the CacheInvalidator. This means I would have a MockedOrderHistoryRepositoryFactory - which builds the MockedOrderHistoryRepository and the DummyCacheInvalidator - and an InternetOrderHistoryRepositoryFactory - which builds the InternetOrderHistoryRepository and the RealCacheInvalidator. But here again, I don't know whether this CacheInvalidator should be used by the Interactor at the end of sendOrder() or directly by the InternetOrderSenderRepository (even though I think the latter is better, because again the interactor should probably not know that there is some cache management under the hood).
What would be your preferred way of architecting this?
Thank you very much.
Pierre
Your second guess is correct, because caching is a detail of the persistence mechanism. E.g. if the repository were file based (say, backed by a local SSD), caching might not be an issue at all.
The interactor (use case) should not know about caching. This also makes it easier to test, because you don't need a real cache or a mock of one for testing.
My second guess would be perform the cache invalidation inside the InternetOrderSenderRepository when it knows that the order was sent successfully but it will force this repository to know the InternetOrderHistoryRepository in order to get the cache key used by this repo for the cache management.
It seems that your cache key is a composite of multiple order properties and therefore you need to encapsulate the cache key creation logic somewhere for reuse.
In this case, you have the following options:
One implementation for both interfaces
You can create one class that acts as both the InternetOrderSenderRepository and the InternetOrderHistoryRepository. In this case, you can extract the cache key generation logic into a private method and reuse it.
Use a utility class for the cache key creation
Simply extract the cache key creation logic into a utility class and use it in both repositories.
Create a cache key class
A cache key is just an arbitrary object, because a cache only has to check whether a key exists, and that means using the equals method every object has. To be more type-safe, most caches use a generic type for the key so that you can define your own.
Thus you can put the cache key logic and validation in a class of its own. This has the advantage that you can easily test that logic.
public class OrderCacheKey {
    private Integer orderId;
    private int version;

    public OrderCacheKey(Integer orderId, int version) {
        this.orderId = Objects.requireNonNull(orderId);
        if (version < 0) {
            throw new IllegalArgumentException("version must not be negative");
        }
        this.version = version;
    }

    public OrderCacheKey(Order order) {
        this(order.getId(), order.getVersion());
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        OrderCacheKey other = (OrderCacheKey) obj;
        if (!Objects.equals(orderId, other.orderId))
            return false;
        return Objects.equals(version, other.version);
    }

    @Override
    public int hashCode() {
        int result = 1;
        result = 31 * result + Objects.hashCode(orderId);
        result = 31 * result + Objects.hashCode(version);
        return result;
    }
}
You can use this class as the key type of your cache: Cache<OrderCacheKey, Order>. Then you can use the OrderCacheKey class in both repository implementations.
Introduce an order cache interface to hide caching details
You can apply the interface segregation principle and hide the complete caching details behind a simple interface. This will also make your unit tests easier, because there is less to mock.
public interface OrderCache {
    public void add(Order order);
    public Order get(Integer orderId, int version);
    public void remove(Order order);
    public void removeByKey(Integer orderId, int version);
}
You can then use the OrderCache in both repository implementations and you can also combine the interface segregation with the cache key class above.
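As a rough sketch of how that could look (the sendOrder and sendOverHttp methods shown here are assumed purely for illustration; only OrderCache comes from the interface above), the sender repository can invalidate through the narrow OrderCache interface after a successful send, without ever knowing about the history repository:

public class InternetOrderSenderRepository implements OrderSenderRepository {
    private final OrderCache orderCache;

    public InternetOrderSenderRepository(OrderCache orderCache) {
        this.orderCache = orderCache;
    }

    public void sendOrder(Order order) {
        sendOverHttp(order);      // the actual remote call, elided in this sketch
        orderCache.remove(order); // drop the stale history entry for this order
    }

    private void sendOverHttp(Order order) {
        throw new UnsupportedOperationException("remote call elided in this sketch");
    }
}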
How to apply
You can use aspect-oriented programming together with one of the options above to implement the caching.
You can also create a wrapper (or delegate) for each repository that applies caching and delegates to the real repository when needed. This is very similar to the aspect-oriented way; you just implement the aspect manually, as sketched below.
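A manual wrapper along those lines might look like this (the getOrder method on OrderHistoryRepository is assumed here purely for illustration):

// Caching decorator: answers from the cache when possible, otherwise delegates
// to the real repository and remembers the result.
public class CachingOrderHistoryRepository implements OrderHistoryRepository {
    private final OrderHistoryRepository delegate;
    private final OrderCache cache;

    public CachingOrderHistoryRepository(OrderHistoryRepository delegate, OrderCache cache) {
        this.delegate = delegate;
        this.cache = cache;
    }

    public Order getOrder(Integer orderId, int version) {
        Order cached = cache.get(orderId, version);
        if (cached != null) {
            return cached;
        }
        Order order = delegate.getOrder(orderId, version);
        cache.add(order);
        return order;
    }
}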

Pre-load values for a Guava Cache

I have a requirement where we are loading static data from a database for use in a Java application. Any caching mechanism should have the following functionality:
Load all static data from the database (once loaded, this data will not change)
Load new data from the database (data present in the database at start-up will not change but it is possible to add new data)
Lazy loading of all the data isn't an option as the application will be deployed to multiple geographical locations and will have to communicate with a single database. Lazy loading the data will make the first request for a specific element too slow where the application is in a different region to the database.
I have been using the MapMaker API in Guava with success but we are now upgrading to the latest release and I can't seem to find the same functionality in the CacheBuilder API; I can't seem to find a clean way of loading all data at start-up.
One way would be to load all keys from the database and load those through the Cache individually. This would work but would result in N+1 calls to the database, which isn't quite the efficient solution I'm looking for.
public void loadData() {
    List<String> keys = getAllKeys();
    for (String s : keys)
        cache.get(s);
}
Or the other solution is to use a ConcurrentHashMap implementation and handle all of the threads and missing entries myself? I'm not keen on doing this as the MapMaker and CacheBuilder APIs provide the key-based thread locking for free without having to provide extra testing. I'm also pretty sure the MapMaker/CacheBuilder implementations will have some efficiencies that I don't know about/haven't got time to investigate.
public Element get(String key) {
    Lock lock = getObjectLock(key);
    lock.lock();
    try {
        Element ret = map.get(key);
        if (ret == null) {
            ret = getElement(key); // database call
            map.put(key, ret);
        }
        return ret;
    } finally {
        lock.unlock();
    }
}
Can anyone think of a better solution to my two requirements?
Feature Request
I don't think pre-loading a cache is an uncommon requirement, so it would be nice if the CacheBuilder provided a configuration option to pre-load the cache. I think providing an Interface (much like CacheLoader) which will populate the cache at start-up would be an ideal solution, such as:
CacheBuilder.newBuilder().populate(new CachePopulator<String, Element>() {
    @Override
    public Map<String, Element> populate() throws Exception {
        return getAllElements();
    }
}).build(new CacheLoader<String, Element>() {
    @Override
    public Element load(String key) throws Exception {
        return getElement(key);
    }
});
This implementation would allow the Cache to be pre-populated with all relevant Element objects, whilst keeping the underlying CustomConcurrentHashMap non-visible to the outside world.
In the short-term I would just use Cache.asMap().putAll(Map<K, V>).
Once Guava 11.0 is released you can use Cache.getAll(Iterable<K>), which will issue a single bulk request for all absent elements.
I'd load all static data from the DB, and store it in the Cache using cache.asMap().put(key, value) (Guava 10.0.1 allows write operations on the Cache.asMap() view).
Of course, this static data might get evicted, if your cache is configured to evict entries...
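A short sketch of that approach, reusing the getAllElements() and getElement(key) database helpers from the question (assumptions: a LoadingCache built with CacheBuilder, Guava 10.0.1 or later for the writable asMap() view, and the usual com.google.common.cache imports):

// Pre-populate the static data in one bulk step, then let the loader
// handle any keys that are added to the database later.
LoadingCache<String, Element> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, Element>() {
            @Override
            public Element load(String key) throws Exception {
                return getElement(key); // single-row database call on a cache miss
            }
        });

cache.asMap().putAll(getAllElements()); // bulk load of all static data at start-up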
The CachePopulator idea is interesting.
