Get age of cache entry in Ehcache 3.x - java

I'm using Ehcache 3.8.0 with my Spring 5.x application for caching long running data aggregation. The aggregated data is displayed in a web frontend.
I'm now looking for a way to get the age of a cached element to display it alongside the data.
Ehcache 2.x stored cache entries as Element (see javadoc), which had the public method getLatestOfCreationAndUpdateTime() to retrieve the timestamp at which the cache entry was created or last updated (i.e. the age/freshness of the cached data).
But in Ehcache 3.x cache entries are stored as Cache.Entry<K, V> (see javadoc), which is nothing more than a key/value tuple and provides nothing like the getLatestOfCreationAndUpdateTime() method.
I know that I could use something like cache event listeners to store the timestamp separately whenever a cache entry is created or updated, but I'd like to know if there's a more direct way to get the timestamp formerly provided by getLatestOfCreationAndUpdateTime().
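For illustration, the listener-based workaround might look like this (a minimal sketch; the side map and class name are assumptions, not Ehcache API):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.ehcache.event.CacheEvent;
import org.ehcache.event.CacheEventListener;

// Hypothetical helper: tracks creation/update timestamps alongside the cache.
public class TimestampTrackingListener implements CacheEventListener<String, Object> {
    private final Map<String, Long> timestamps = new ConcurrentHashMap<>();

    @Override
    public void onEvent(CacheEvent<? extends String, ? extends Object> event) {
        switch (event.getType()) {
            case CREATED:
            case UPDATED:
                timestamps.put(event.getKey(), System.currentTimeMillis());
                break;
            case REMOVED:
            case EXPIRED:
            case EVICTED:
                timestamps.remove(event.getKey());
                break;
        }
    }

    // Returns the epoch millis of the last create/update, or null if unknown.
    public Long getLastModified(String key) {
        return timestamps.get(key);
    }
}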

The following manages to capture the internal epoch timestamp of creation of the Ehcache cache entry:
// Reflectively invoke the internal doGet(...) to obtain the Store.ValueHolder
// (org.ehcache.core.spi.store.Store), which records the entry's creation time.
var method = Ehcache.class.getDeclaredMethod("doGet", Object.class);
method.setAccessible(true);
var valueHolder = (Store.ValueHolder<?>) method.invoke(ehcache, key);
return valueHolder.creationTime(); // epoch millis when the entry was created
The use of reflection is unfortunate but this was sufficient for my debugging purposes.
The org.ehcache.core.Ehcache instance in my specific scenario was retrieved inside a custom org.ehcache.event.CacheEventListener:
@Override
public void onEvent(CacheEvent<? extends String, ? extends Object> cacheEvent) {
    var ehcache = (Ehcache<?, ?>) cacheEvent.getSource();
    ...
}

Related

Implementing a cache on filter APIs in Spring Boot

I am working on a Spring Boot app where I have multiple fetch APIs, which are basically filter APIs taking in params and sending a response from the db.
Under load they are pretty slow. Is there any way I can speed them up with a cache?
Can filter API results be cached at all, given that the filters may be different every time?
Currently I did this:
@Cacheable(value = "sku-info-cache", unless = "#result == null")
public SkuGroupPagedResponseMap fetchSkuGroupsByDatesAndWarehouseId(Integer warehouseId,
                                                                    Integer pageNumber,
                                                                    Integer pageSize,
                                                                    String startDate,
                                                                    String endDate) {
    log.info("fetching from db");
    SkuGroupPagedResponseMap skuGroupPagedResponseMap =
            locationInventoryClientService.fetchSkuGroupsByDatesAndWarehouseId(warehouseId, pageNumber, pageSize, startDate, endDate);
    updateLotDetailsInSkuGroup(skuGroupPagedResponseMap);
    return skuGroupPagedResponseMap;
}
The best way to handle this particular scenario is a smart key. In your case, you can build a composite key from the requested filter parameters, so every combination of the five parameters gets its own cache entry. Those entries can then be invalidated at database-update time using a prefix-deletion strategy. I have tried this approach and found it to be very fast. A sketch is shown below.
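A minimal sketch of such a composite key (the SpEL expression and ':' delimiter are assumptions; the method is the one from the question):
// Each distinct combination of filter parameters becomes its own cache key.
// With a key layout like this, a Redis-backed cache can drop all entries for a
// warehouse at update time by deleting keys sharing the "warehouseId:" prefix.
@Cacheable(value = "sku-info-cache",
           key = "#warehouseId + ':' + #pageNumber + ':' + #pageSize + ':' + #startDate + ':' + #endDate",
           unless = "#result == null")
public SkuGroupPagedResponseMap fetchSkuGroupsByDatesAndWarehouseId(Integer warehouseId,
                                                                    Integer pageNumber,
                                                                    Integer pageSize,
                                                                    String startDate,
                                                                    String endDate) {
    // ... unchanged body ...
}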

How can I cache by method parameter in Spring Boot?

I use Redis for caching and have the following service method:
@Cacheable(value = "productCache")
@Override
public List<ProductDTO> findAllByCategory(Category category) {
    // code omitted
    return productDTOList;
}
When I pass categoryA to this method, the result is cached and kept until it expires. If I pass categoryB, the result is retrieved from the database and then kept in the cache. If I then pass categoryA again, it is retrieved from the cache.
1. I am not sure if this is normal, because I only use the value parameter ("productCache") of the @Cacheable annotation and have no idea how it caches the categoryA and categoryB results separately. Could you please explain how that works?
2. As mentioned on this page, there is also a key parameter. But when using it as shown below, it does not seem to make any difference and works as above. Is that normal, or am I missing something?
@Cacheable(value = "productCache", key = "#category")
@Override
public List<ProductDTO> findAllByCategory(Category category) {
    // code omitted
    return productDTOList;
}
3. Should I get cache via Cache cache = cacheManager.getCache("productCache#" + category); ?
Caches are essentially key-value stores, where – in Spring –
the key is generated from the method parameter(s)
the value is the result of the method invocation
The default key generation algorithm works like this (taken right from Spring docs):
If no params are given, return SimpleKey.EMPTY.
If only one param is given, return that instance.
If more than one param is given, return a SimpleKey that contains all parameters.
This approach works well for most use-cases, as long as parameters have natural keys and implement valid hashCode() and equals() methods. If that is not the case, you need to change the strategy.
So in your example, the category object acts as the key by default (for this to work, the Category class should implement hashCode() and equals() correctly). Writing key="#category" is hence redundant and has no effect. If your Category class had an id property, you could write key="#category.id", however.
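To make that concrete, a minimal sketch (the id field and getter are assumptions about your Category class):
import java.util.Objects;

public class Category {
    private Long id;

    public Long getId() { return id; }

    // A valid equals()/hashCode() pair is what makes Category usable
    // as the default cache key.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Category)) return false;
        return Objects.equals(id, ((Category) o).id);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(id);
    }
}

// Keying by the id property instead of the whole object:
@Cacheable(value = "productCache", key = "#category.id")
public List<ProductDTO> findAllByCategory(Category category) {
    // code omitted
    return productDTOList;
}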
Should I get cache via Cache cache = cacheManager.getCache("productCache#" + category); ?
No, since there is no such cache. You only have one single cache named productCache.
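If you ever need programmatic access, you would look up that one cache and then the entry by its key; a sketch using Spring's CacheManager:
// There is a single cache named "productCache"; the Category is the key within it.
org.springframework.cache.Cache cache = cacheManager.getCache("productCache");
org.springframework.cache.Cache.ValueWrapper wrapper = cache.get(category);
if (wrapper != null) {
    @SuppressWarnings("unchecked")
    List<ProductDTO> products = (List<ProductDTO>) wrapper.get();
}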

Spring cache and retrieve object without permanent store

I'm building a stub in spring that does not have a permanent store but I want to be able to retrieve objects that have been sent to the stub.
Thus when the stub receives a payload I want to add it to a cache, and be able to get that object from the cache when queried.
I've read about Spring's cache annotation implementation (@Cacheable etc.) but I can't work out how to implement this without a permanent store backing the first call to that function.
I think an object can be put into the cache using:
@CachePut(value = "addressCache", key = "#customerId")
public Address cache(Address address, String customerId) {
    return address;
}
Is there a way to retrieve this object from the cache using the key (customerId) without calling that original cache() function?
What would be a way to implement cache for what I need?
What you are looking for is the @Cacheable annotation, which does the look-up from the cache. @CachePut, by contrast, always invokes the method and updates the cache with the result.
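Putting the two together for your stub, a minimal sketch (the retrieve method name and its miss behavior are assumptions):
// @CachePut always runs the body and stores the returned Address under customerId.
@CachePut(value = "addressCache", key = "#customerId")
public Address cache(Address address, String customerId) {
    return address;
}

// @Cacheable checks the cache first; the body only runs on a cache miss.
@Cacheable(value = "addressCache", key = "#customerId")
public Address retrieve(String customerId) {
    // For a stub with no permanent store, a miss simply means "not received yet".
    return null;
}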

Google Dataflow: Write to Datastore without overwriting existing entities

TLDR: Looking for a way to update Datastore entities without overwriting existing data via Dataflow
I'm using dataflow 2.0.0 (beam) to update entities in Google Datastore. My dataflow loads entities from datastore, updates them, and then saves them back into datastore (overwriting existing entities).
However, during the update process I also discover additional entities that may or may not already exist. In order to prevent overwriting existing entities, I previously would load all the entities from Datastore and reduce them (group by key), removing new duplicates.
As the number of entities grows, I want to avoid having to load all entities into Dataflow (instead taking them in batches based on oldest timestamps), but I'm coming across the problem that old entities are getting overwritten when they are not in the current batch.
I'm writing the entities to Datastore using (in two spots, one for existing entities, and one for new entities):
collection.apply(DatastoreIO.v1().write().withProjectId("..."))
It would be really nice if there were something like a DatastoreIO.v1().writeNew() method, but sadly it doesn't exist. Thank you for any help.
If you want to write a new entity that does not exist on Datastore, you just create one with a new key and write it.
List<String> keyNames = Arrays.asList("L1", "L2"); // Suppose these are the new keys to store
PTransform<PCollection<Entity>, ?> write =
        DatastoreIO.v1().write().withProjectId(project_id); // A typical write operation

p.apply("GetInMemory", Create.of(keyNames)).setCoder(StringUtf8Coder.of()) // L1 and L2 are loaded
 .apply("Proc1", ParDo.of(new DoFn<String, Entity>() {
     @ProcessElement
     public void processElement(ProcessContext c) {
         Key.Builder key = makeKey("k2", c.element()); // Generate an entity key
         final Entity entity = Entity.newBuilder()
                 .setKey(key) // Set the key
                 .putProperties("p1", makeValue("test constant value")
                         .setExcludeFromIndexes(true).build())
                 .build();
         c.output(entity);
     }
 }))
 .apply(write); // Write them
p.run();
The entire code can be found in my repository at https://github.com/yiu31802/gcp-project/commit/cc224b34

How to refresh JPA entities when backend database changes asynchronously?

I have a PostgreSQL 8.4 database with some tables and views which are essentially joins on some of the tables. I used NetBeans 7.2 (as described here) to create REST based services derived from those views and tables and deployed those to a Glassfish 3.1.2.2 server.
There is another process which asynchronously updates contents in some of the tables used to build the views. I can directly query the views and tables and see these changes have occurred correctly. However, when pulled from the REST based services, the values are not the same as those in the database. I am assuming this is because JPA has cached local copies of the database contents on the Glassfish server and JPA needs to refresh the associated entities.
I have tried adding a couple of methods to the AbstractFacade class NetBeans generates:
public abstract class AbstractFacade<T> {
    private Class<T> entityClass;
    private String entityName;
    private static boolean _refresh = true;

    public static void refresh() { _refresh = true; }

    public AbstractFacade(Class<T> entityClass) {
        this.entityClass = entityClass;
        this.entityName = entityClass.getSimpleName();
    }

    private void doRefresh() {
        if (_refresh) {
            EntityManager em = getEntityManager();
            em.flush();
            for (EntityType<?> entity : em.getMetamodel().getEntities()) {
                if (entity.getName().contains(entityName)) {
                    try {
                        em.refresh(entity);
                        // log success
                    } catch (IllegalArgumentException e) {
                        // log failure ... typically complains entity is not managed
                    }
                }
            }
            _refresh = false;
        }
    }

    ...
}
I then call doRefresh() from each of the find methods NetBeans generates. What normally happens is that an IllegalArgumentException is thrown, stating something like Can not refresh not managed object: EntityTypeImpl@28524907:MyView [ javaType: class org.my.rest.MyView descriptor: RelationalDescriptor(org.my.rest.MyView --> [DatabaseTable(my_view)]), mappings: 12].
So I'm looking for suggestions on how to correctly refresh the entities associated with the views so they are up to date.
UPDATE: Turns out my understanding of the underlying problem was not correct. It is somewhat related to another question I posted earlier, namely the view had no single field which could be used as a unique identifier. NetBeans required I select an ID field, so I just chose one part of what should have been a multi-part key. This exhibited the behavior that all records with a particular ID field were identical, even though the database had records with the same ID field but the rest of it was different. JPA didn't go any further than looking at what I told it was the unique identifier and simply pulled the first record it found.
I resolved this by adding a unique identifier field (never was able to get the multipart key to work properly).
I recommend adding an @Startup @Singleton class that establishes a JDBC connection to the PostgreSQL database and uses LISTEN and NOTIFY to handle cache invalidation.
Update: Here's another interesting approach, using pgq and a collection of workers for invalidation.
Invalidation signalling
Add a trigger on the table that's being updated that sends a NOTIFY whenever an entity is updated. On PostgreSQL 9.0 and above this NOTIFY can contain a payload, usually a row ID, so you don't have to invalidate your entire cache, just the entity that has changed. On older versions where a payload isn't supported you can either add the invalidated entries to a timestamped log table that your helper class queries when it gets a NOTIFY, or just invalidate the whole cache.
Your helper class now LISTENs on the NOTIFY events the trigger sends. When it gets a NOTIFY event, it can invalidate individual cache entries (see below), or flush the entire cache. You can listen for notifications from the database with PgJDBC's listen/notify support. You will need to unwrap any connection pooler managed java.sql.Connection to get to the underlying PostgreSQL implementation so you can cast it to org.postgresql.PGConnection and call getNotifications() on it.
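A minimal sketch of such a listener (the channel name, DataSource wiring, and invalidateCacheEntry(...) helper are assumptions):
import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class PgInvalidationListener implements Runnable {
    private final DataSource dataSource;
    private volatile boolean running = true;

    public PgInvalidationListener(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void run() {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute("LISTEN entity_changes"); // channel the trigger NOTIFYs on
            PGConnection pgConn = conn.unwrap(PGConnection.class);
            while (running) {
                // getNotifications() does not block, so poll on a short interval
                PGNotification[] notifications = pgConn.getNotifications();
                if (notifications != null) {
                    for (PGNotification n : notifications) {
                        // On PostgreSQL 9.0+ the payload can carry the changed row ID
                        invalidateCacheEntry(n.getParameter());
                    }
                }
                Thread.sleep(500);
            }
        } catch (Exception e) {
            // log and restart the listener as appropriate
        }
    }

    private void invalidateCacheEntry(String rowId) {
        // e.g. entityManagerFactory.getCache().evict(MyView.class, rowId);
    }
}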
As an alternative to LISTEN and NOTIFY, you could poll a change log table on a timer, with a trigger on the problem table appending changed row IDs and change timestamps to the change log table. This approach is portable apart from needing a different trigger per DB type, but it is less efficient and less timely: it requires frequent polling and still has a delay that the listen/notify approach does not. In PostgreSQL you can use an UNLOGGED table to reduce the costs of this approach a little.
Cache levels
EclipseLink/JPA has a couple of levels of caching.
The 1st level cache is at the EntityManager level. If an entity is attached to an EntityManager by persist(...), merge(...), find(...), etc, then the EntityManager is required to return the same instance of that entity when it is accessed again within the same session, whether or not your application still has references to it. This attached instance won't be up-to-date if your database contents have since changed.
The 2nd level cache, which is optional, is at the EntityManagerFactory level and is a more traditional cache. It isn't clear whether you have the 2nd level cache enabled. Check your EclipseLink logs and your persistence.xml. You can get access to the 2nd level cache with EntityManagerFactory.getCache(); see Cache.
@thedayofcondor showed how to flush the 2nd level cache with:
em.getEntityManagerFactory().getCache().evictAll();
but you can also evict individual objects with the evict(java.lang.Class cls, java.lang.Object primaryKey) call:
em.getEntityManagerFactory().getCache().evict(theClass, thePrimaryKey);
which you can use from your @Startup @Singleton NOTIFY listener to invalidate only those entries that have changed.
The 1st level cache isn't so easy, because it's part of your application logic. You'll want to learn about how the EntityManager, attached and detached entities, etc work. One option is to always use detached entities for the table in question, where you use a new EntityManager whenever you fetch the entity. This question:
Invalidating JPA EntityManager session
has a useful discussion of handling invalidation of the entity manager's cache. However, it's unlikely that an EntityManager cache is your problem, because a RESTful web service is usually implemented using short EntityManager sessions. This is only likely to be an issue if you're using extended persistence contexts, or if you're creating and managing your own EntityManager sessions rather than using container-managed persistence.
You can disable caching entirely (see http://wiki.eclipse.org/EclipseLink/FAQ/How_to_disable_the_shared_cache%3F ), but be prepared for a fairly large performance loss.
Otherwise, you can clear the cache programmatically with
em.getEntityManagerFactory().getCache().evictAll();
You can map it to a servlet so you can call it externally. This is better if your database is modified externally only very seldom and you just want to be sure JPA will pick up the new version.
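A minimal sketch of such a servlet (the URL pattern and class name are assumptions):
import java.io.IOException;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/evict-cache")
public class EvictCacheServlet extends HttpServlet {

    @PersistenceUnit
    private EntityManagerFactory emf;

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Flush the whole EclipseLink 2nd level (shared) cache
        emf.getCache().evictAll();
        resp.getWriter().write("2nd level cache evicted");
    }
}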
Just a thought, but how do you receive your EntityManager/Session/whatever?
If you queried the entity in one session, it will be detached in the next one and you will have to merge it back into the persistence context to get it managed again.
Trying to work with detached entities may result in those not-managed exceptions; you should re-query the entity, or try merging it back (or similar methods).
JPA doesn't do any caching by default; you have to configure it explicitly. I believe it's a side effect of the architectural style you have chosen: REST. I think the caching is happening at the web servers, proxy servers etc. I suggest you read this and debug further.
