Infinispan per-entity eviction/lifespan control using Java annotations - java

Infinispan supports eviction policies and lifespans specific to an entity. Based on the question below, we can make the change in persistence.xml:
Infinispan - set per Entity expiration.lifespan
My question is: is there a way to do this with an annotation on that particular entity?

I am not aware of any such config. It is probably absent because Infinispan (and other cache providers) are general-purpose caching frameworks which, in general, are not aware of Hibernate second-level caching specifics.
On the other hand, again in general, Hibernate and java.persistence do not interfere with specific cache provider implementations and APIs. That means a cache provider may not even allow defining an expiration policy while still being perfectly able to serve as a Hibernate L2 cache.
However, you could define your own annotations and set the Infinispan config values programmatically. You could turn it into an interesting open source project, if none exists so far that does something similar. :)
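A minimal sketch of that idea follows, assuming Infinispan's embedded API. The @CacheExpiration annotation and the configurer class are made-up names; ConfigurationBuilder and EmbeddedCacheManager are real Infinispan types, but the cache region naming convention depends on your Hibernate/Infinispan integration version.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.concurrent.TimeUnit;

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.EmbeddedCacheManager;

// Hypothetical annotation carrying per-entity expiration settings.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface CacheExpiration {
    long lifespanMillis() default -1;  // -1 = no expiration
    long maxIdleMillis() default -1;
}

final class CacheExpirationConfigurer {

    // Reads @CacheExpiration from an entity class and defines a matching
    // Infinispan cache region before Hibernate starts using it.
    static void configure(EmbeddedCacheManager manager, Class<?> entityClass) {
        CacheExpiration exp = entityClass.getAnnotation(CacheExpiration.class);
        if (exp == null) {
            return;
        }
        Configuration cfg = new ConfigurationBuilder()
                .expiration()
                    .lifespan(exp.lifespanMillis(), TimeUnit.MILLISECONDS)
                    .maxIdle(exp.maxIdleMillis(), TimeUnit.MILLISECONDS)
                .build();
        // Assumes the region is named after the entity class; the actual
        // convention depends on the Hibernate-Infinispan version in use.
        manager.defineConfiguration(entityClass.getName(), cfg);
    }
}

The configurer would be invoked once at startup for each cached entity class, before Hibernate begins using the corresponding region.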

Will Spring Cache cause a memory leak issue?

I am trying to use Spring Cache (the @Cacheable annotation) at the method level in a Spring Boot application, but unlike Google Guava cache, I have no idea whether Spring Cache will cause a memory leak. Because it has no size limitation or refresh policy, where and for how long would the data be stored in the application? I assume it would be in memory, but will Spring itself clear it automatically? If not, when millions of requests come in hitting the application, will that trigger a memory leak?
My use case is that I have a heavy method per request, and I would like to execute that method only once during the current request; after the request is done there is no need to keep the data in the cache. But how would I ensure my Spring Cache is cleared after each request? I know there is an evict action; however, what if my request errors out before hitting my cache-evict method and returns a 500 directly? That means my last request's data would always sit in the cache memory, and with more and more requests like that, it might cause a memory leak, correct?
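One way to get the per-request cleanup described above, regardless of whether the request succeeds, is to clear the cache in a finally block via a servlet filter. This is a hedged sketch, not from the answers below: the cache name "perRequest" and the filter class are made up, while OncePerRequestFilter and CacheManager are real Spring types.

import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Hypothetical filter: evicts the assumed "perRequest" cache when the
// request completes, whether it succeeded or threw, so a 500 response
// cannot leave stale entries behind.
@Component
public class CacheClearingFilter extends OncePerRequestFilter {

    private final CacheManager cacheManager;

    public CacheClearingFilter(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain)
            throws ServletException, IOException {
        try {
            chain.doFilter(request, response);
        } finally {
            // Runs even when the handler threw and a 500 was returned.
            Cache cache = cacheManager.getCache("perRequest"); // assumed name
            if (cache != null) {
                cache.clear();
            }
        }
    }
}

Note that this clears the named cache globally, which matches "no need to keep the data after the request" but would interfere with concurrent requests sharing the same cache.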
In a Spring application, you can perhaps disable the cache with:
spring.cache.type=NONE
The type of cache is by default automatically detected and configured. However, you can specify which cache type to use by adding spring.cache.type to your configuration. To disable it, set the value to NONE.
As you want to do it for a specific profile, add it to that profile's application.properties; in this case, modify application-dev.properties and add
spring.cache.type=NONE
This will disable caching.
The Spring Framework's caching support and infrastructure, whether consumed through declarative annotation-based caching and demarcation of Spring application components (using either Spring annotations, e.g. @Cacheable, or JSR-107, JCache annotations), or by using Spring's Cache API directly (not common), is simply an "abstraction", hence the Spring Cache Abstraction. There is NO underlying caching provider (implementation of this SPI) by default.
Of course, if you are using Spring Boot (on top of, or to consume the core Spring Framework), and you do not configure an explicit caching provider (see here), such as Redis, then by default, Spring Boot will configure and provide your Spring Boot application with a ConcurrentHashMap caching provider implementation (see here).
When the documentation states:
This is the default if no caching library is present in your application.
It means when no caching library, like Redis (using Spring Data Redis, for instance), is detected on your Spring Boot application classpath.
In general, however, it is a good practice to choose an underlying caching provider implementation, such as Redis, or in your case, Google Guava, which is possible to position as a caching provider implementation in Spring's Cache Abstraction (see here, for example).
Given the Spring Framework's Cache Abstraction is simply a Facade with a caching API/SPI common to multiple caching provider implementations, effectively providing the lowest common denominator for caching functionality (e.g. put, get, evict, invalidate) across caching providers, then to your question, there is no memory leak originating from Spring's "Cache", which really is not even a thing anyway. It is technically the provider implementation's "cache", like Google Guava, Redis, Hazelcast, Apache Geode (VMware GemFire), etc., that would actually be the cause of a memory leak, if a leak existed in your application in the first place.
In other words, if there is any memory leak, then it originates with the caching provider.
You should refer to your caching provider's documentation on configuring critical aspects of the cache, such as memory management, that are explicitly stated to be beyond the control of the Spring Framework's Cache Abstraction.
The reason these aspects are beyond the control of the core Spring Framework is simply that the configuration of these low-level cache features (e.g. memory management) is usually very provider-specific and varies widely from one provider to the next, especially with respect to capabilities and features.
I hope this explanation gives you clarity on the position of the Cache Abstraction provided by Spring and its responsibilities.
By adhering to the abstraction, you effectively make it easier to switch between caching providers if your application requirements, use cases, or SLAs change.
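To make the "abstraction" point concrete, here is a sketch of a service that codes only against Spring's annotations; which provider backs the cache is decided entirely by configuration. The QuoteService class and the "quotes" cache name are made up for illustration.

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// The service sees only the abstraction; whether "quotes" is backed by
// a ConcurrentHashMap, Caffeine, or Redis is a configuration concern.
@Service
public class QuoteService {

    @Cacheable("quotes")
    public String expensiveLookup(String symbol) {
        // Simulates a slow remote call; the result is cached by whatever
        // provider Spring Boot auto-configured (ConcurrentHashMap by default).
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "quote-for-" + symbol;
    }
}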
You can still specify which cache you want. For instance, you can use Caffeine, which
"... provides an in-memory cache using a Google Guava inspired API"
And you can still configure it, for instance with a maximumSize:
spring:
  cache:
    cache-names: instruments, directory
    caffeine:
      spec: maximumSize=500, expireAfterAccess=30s
To enable it you just have to add the dependency and add @EnableCaching in a config class (and of course add @Cacheable to your method).
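For reference, the same limits can be set in plain Java instead of YAML. This is a sketch using CaffeineCacheManager from spring-context-support; the CacheConfig class name is arbitrary.

import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Caffeine;

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Same limits as the YAML spec above, expressed as a Java bean.
@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager manager =
                new CaffeineCacheManager("instruments", "directory");
        manager.setCaffeine(Caffeine.newBuilder()
                .maximumSize(500)
                .expireAfterAccess(30, TimeUnit.SECONDS));
        return manager;
    }
}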
See:
https://github.com/ben-manes/caffeine/
https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#cache-store-configuration-caffeine
https://memorynotfound.com/spring-boot-caffeine-caching-example-configuration/
https://www.baeldung.com/spring-boot-caffeine-cache

how to use another implementation for JPA 2 level 2 cache?

We'd like to use another L2 cache for our big JPA application. We are trying to achieve a shared cache between multiple servers.
We use EclipseLink as the JPA implementation, and some legacy code uses internal EclipseLink APIs, so switching is not an option.
Coherence/TopLink Grid seems too expensive ($4000/CPU?).
Is there a way we could plug in another cache implementation? Is anything specified in JPA 2 (I can't find anything in the spec, but maybe I just misread it)? Proprietary (i.e. EclipseLink-specific) solutions are OK, as long as they are somewhat documented or simple enough (we don't want this to break).
Is there a way we could plug in another cache implementation?
Did you investigate the EclipseLink shared object cache that comes with EclipseLink? Going by the description, the shared object cache is not confined to a single EntityManager alone, and is available across the lifecycles of several EntityManagers, i.e. across several transactions. It is, of course, constrained to the lifecycle of an EntityManagerFactory, which may live as long as the application is running in the container.
The EclipseLink shared object cache is different from Oracle Coherence; I believe it is not licensed and packaged separately, thereby making it available in all containers.
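As a sketch of tuning that shared object cache per entity: the Product class below is made up, while @Cache and its attributes are EclipseLink's own annotation API.

import javax.persistence.Entity;
import javax.persistence.Id;

import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.CacheType;

// EclipseLink's shared object cache, tuned per entity: soft references,
// up to 10,000 instances, entries invalidated after 10 minutes.
@Entity
@Cache(type = CacheType.SOFT, size = 10000, expiry = 600000)
public class Product {

    @Id
    private Long id;

    private String name;
}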
JPA does not specify a pluggable cache interface. I don't know if it ever will, but if it does, my bet is that it won't be until after the resurrected JSR-107 finishes defining a standard API to object caches, which JPA would then be able to use. It might also have to wait for JSR 347, which is defining another cache interface, whose relationship to JCache is somewhat unclear (there is open factional warfare between and within the groups, with some members of the 107 expert group trying to declare 347 an independent republic, and invade Mexico).
So, until then, you're at the mercy of your provider's cache interface. I am not an EclipseLink expert, but the last time I looked, I couldn't see a pluggable second-level cache interface. In fact, I think only Hibernate and, of course, DataNucleus have them.
Most cache implementations are not distributed (other than Coherence), just local.
EclipseLink already supports a shared cache and cache coordination for caching in a cluster.
What cache do you intend to use, and what benefit do you intend to get from it?
EclipseLink does support integration with 3rd-party caches; this API was created for the Coherence integration, although Coherence is the only cache that currently provides an integration.

JPA with Multiple Servers

I am currently working on a project that uses JPA (TopLink, currently) for its persistence. Currently, we are running a single application server, but, for redundancy, we would like to add a load balancer and another application server (and possibly more as it grows).
First, I'm running into the issue of JPA caching. Since two processes will be updating the same database, the JPA cache returns the cached value rather than going to the database. I see how to turn that off, and the database itself implements a level of caching. Is turning off the cache completely the way to go here? I see the ways to tell JPA to always get from the database at a query level, but in a multi-server environment, it seems that you'll always want that to happen.
Along with this specific question, I'm interested in anyone out there who has implemented a JPA solution with multiple application servers and what problems arose during the implementation (and any suggestions you have).
Thanks much.
As you have found, you can disable the shared cache, see http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching or http://wiki.eclipse.org/EclipseLink/FAQ/How_to_disable_the_shared_cache%3F
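For example, the shared cache can be switched off for the whole persistence unit with a single property. A sketch passing it programmatically follows; the unit name "myPU" is a placeholder, and the same property can equally live in persistence.xml.

import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class NoSharedCacheBootstrap {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        // EclipseLink property: turn off the shared (L2) object cache
        // for every entity in the persistence unit.
        props.put("eclipselink.cache.shared.default", "false");

        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("myPU", props);
        // ... use emf as usual; every read now goes to the database
        emf.close();
    }
}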
There are also other options available in EclipseLink depending on your data and requirements.
A list of options includes:
Disable shared cache
Enable cache coordination (see, http://www.eclipse.org/eclipselink/api/2.1/org/eclipse/persistence/config/PersistenceUnitProperties.html#COORDINATION_PROTOCOL)
Set a cache invalidation timeout (see, http://www.eclipse.org/eclipselink/api/2.1/org/eclipse/persistence/annotations/Cache.html#expiry%28%29)
Enable optimistic locking; this ensures that any stale object cannot be updated. When an update on stale data occurs it will fail, and EclipseLink will automatically invalidate the object in the cache.
Investigate the Oracle TopLink integration of EclipseLink and Oracle Coherence to provide a distributed cache.
See also, http://en.wikibooks.org/wiki/Java_Persistence/Caching#Caching_in_a_Cluster
There is no perfect solution; the right one normally depends on the data/class. Normally an application has a set of read-only classes, read-mostly classes and write-mostly classes. Personally, I would enable the cache for the read-only classes with a one-day timeout, enable the cache with cache coordination for the read-mostly classes, and disable the cache for the write-mostly classes, as sketched below.
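A sketch of that per-class split using EclipseLink annotations. The entity names are made up; @Cache, CacheCoordinationType and CacheIsolationType are real EclipseLink types, though the isolation attribute is only available in newer EclipseLink versions.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.CacheCoordinationType;
import org.eclipse.persistence.config.CacheIsolationType;

// Read-only reference data: cache it, invalidate once a day.
@Entity
@Cache(expiry = 24 * 60 * 60 * 1000)
class Country {
    @Id Long id;
    String name;
}

// Read-mostly data: keep the cache, broadcast invalidations to the
// other servers, and use optimistic locking so stale updates fail.
@Entity
@Cache(coordinationType = CacheCoordinationType.INVALIDATE_CHANGED_OBJECTS)
class ProductItem {
    @Id Long id;
    @Version long version; // optimistic locking, as listed above
    String name;
}

// Write-mostly data: no shared cache at all.
@Entity
@Cache(isolation = CacheIsolationType.ISOLATED)
class AuditRecord {
    @Id Long id;
    String payload;
}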

JPA 2.0 support of custom user-types and second-level cache

I'm trying to decide whether to switch from having Hibernate sprinkled all over to using JPA 2.0 and thus be provider-portable.
1. Does JPA 2.0 support custom user types?
2. I'm on the verge of implementing Terracotta as a second-level cache for Hibernate, with its clustering abilities mainly in mind. I would imagine, but I don't actually know, that JPA 2.0 also defines a spec for second-level cache providers. If I'm right, does Terracotta implement it? (If someone could point me to a "getting started with Terracotta and JPA" I'd appreciate it.)
Thanks in advance,
Ittai
Does JPA 2.0 support custom user types?
Nothing beyond @Embedded and @Embeddable (already in JPA 1.0). Depending on the complexity of your needs, they might do the job.
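A quick sketch of how far @Embedded/@Embeddable get you; Money and Invoice are made-up names.

import javax.persistence.Embeddable;
import javax.persistence.Embedded;
import javax.persistence.Entity;
import javax.persistence.Id;

// A value type mapped inline into the owning entity's table -
// JPA's closest standard equivalent to a simple Hibernate UserType.
@Embeddable
class Money {
    String currency;
    long amountInCents;
}

@Entity
class Invoice {
    @Id Long id;

    @Embedded
    Money total; // persisted as CURRENCY and AMOUNTINCENTS columns on INVOICE
}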
I would imagine, but I don't actually know, that JPA 2.0 also defines a spec for second-level cache providers.
JPA 2.0 defines methods on the EntityManagerFactory to access the second-level cache that is maintained by the persistence provider, a Cacheable annotation, and some other things. But the way to plug a cache into your JPA provider is provider-specific. So no, JPA doesn't define a spec for L2 cache providers. And if you want to use Terracotta as the L2 cache provider with Hibernate as the JPA 2.0 implementation, look at the Hibernate integration documentation.
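The provider-neutral part that JPA 2.0 does standardize looks like this. A sketch: the "myPU" unit name and the Product entity are placeholders.

import javax.persistence.Cache;
import javax.persistence.Entity;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
class Product {
    @Id Long id;
}

public class L2CacheDemo {
    public static void main(String[] args) {
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("myPU");

        // javax.persistence.Cache: the standardized window onto the
        // provider's second-level cache (JPA 2.0, sections 3.7 and 7.10).
        Cache cache = emf.getCache();

        System.out.println(cache.contains(Product.class, 42L));
        cache.evict(Product.class, 42L); // evict one entity instance
        cache.evict(Product.class);      // evict all instances of a class
        cache.evictAll();                // clear the entire second-level cache

        emf.close();
    }
}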
References
JPA 2.0 specification
Section 3.7 "Caching"
Section 7.10 "Cache Interface"
Section 11.1.7 "Cacheable Annotation"

how to assign a different concurrency strategy to the same (persistence) entity?

I'm using JPA, and I am using a second-level cache for all the reference entities.
Everything's working well; I can get the entities from the second-level cache if they have already been selected before.
Now, I have two applications, and they both use the same database (so they both use the same tables, values, etc.).
1. The read-only application just reads data from the database; it doesn't modify the database at all. Therefore, I chose the "READ_ONLY" concurrency strategy for the second-level cache, aiming at better performance.
2. The read-write application reads and also writes data; it modifies the database. Consequently, I have to choose the "READ_WRITE" or "NONSTRICT_READ_WRITE" concurrency strategy for the second-level cache.
However, the concurrency strategy is assigned in the annotation of each entity class, so I cannot change it programmatically. (I don't use class mapping files for JPA, so I can't use two mapping files, each with a different concurrency strategy for the same entity class.)
My question is: is there a good way to change the concurrency strategy of the second-level cache on the fly according to my two different applications?
I have not used Hibernate, but at least if you use JPA, it is possible to override even a single annotation with a deployment descriptor file. You should also be able to override any vendor-specific property with the deployment descriptor.
Unfortunately I cannot give you an example, but I hope this helps you.
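One programmatic lever worth checking, as a sketch rather than a confirmed solution: Hibernate's hibernate.cache.default_cache_concurrency_strategy property can be supplied per application at bootstrap, and applies to entities marked cacheable that carry no explicit per-entity strategy. The "myPU" unit name and the readOnlyApp flag are placeholders.

import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class PerAppCacheBootstrap {
    public static void main(String[] args) {
        boolean readOnlyApp = Boolean.getBoolean("app.readonly");

        Map<String, Object> props = new HashMap<>();
        // Applied to cacheable entities that carry no explicit
        // per-entity concurrency strategy annotation.
        props.put("hibernate.cache.default_cache_concurrency_strategy",
                  readOnlyApp ? "read-only" : "read-write");

        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("myPU", props);
        // ... run the application
        emf.close();
    }
}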
Therefore, I think the current solution is to replace all the annotations on each entity with Hibernate mapping files, so that for different deployments (applications as well) we could use different Hibernate mapping files.
