I am trying to use Spring Cache (the @Cacheable annotation) at the method level in a Spring Boot application, but unlike Google Guava's cache, I have no idea whether Spring Cache can cause a memory leak. Since it has no size limit or refresh policy, where and for how long is the data stored in the application? I assume it is in memory, but will Spring clear it automatically? If not, when millions of requests hit the application, will that trigger a memory leak?
My use case is that I have a heavy method per request, and I would like to execute that method only once during the current request; after the request is done there is no need to keep the data in the cache. How would I ensure my Spring Cache is cleared after each request? I know there is an evict action, but what if my request errors out and returns a 500 before it reaches my cache-evict method? That would mean the data from that request stays in the cache, and with more and more such requests this could cause a memory leak, correct?
In a Spring application, you can disable the cache with:
spring.cache.type=NONE
The cache type is automatically detected and configured by default. However, you can specify which cache type to use by adding spring.cache.type to your configuration. To disable caching, set the value to NONE.
As you want to do this for a specific profile, add it to that profile's application.properties; in this case, modify application-dev.properties and add
spring.cache.type=NONE
This will disable caching.
The Spring Framework's caching support and infrastructure, whether consumed through declarative, annotation-based caching and demarcation of Spring application components (using either Spring's annotations, e.g. @Cacheable, or the JSR-107 JCache annotations), or by using Spring's Cache API directly (not common), is simply an "abstraction", hence the Spring Cache Abstraction. There is NO underlying caching provider (implementation of this SPI) by default.
Of course, if you are using Spring Boot (on top of, or to consume, the core Spring Framework) and you do not configure an explicit caching provider (see here), such as Redis, then by default Spring Boot will configure and provide your application with a ConcurrentHashMap-based caching provider implementation (see here).
When the documentation states:
This is the default if no caching library is present in your application.
It means when no caching library, like Redis (using Spring Data Redis, for instance), is detected on your Spring Boot application classpath.
In general, however, it is good practice to choose an underlying caching provider implementation, such as Redis, or in your case Google Guava, which can be positioned as a caching provider implementation in Spring's Cache Abstraction (see here, for example).
Given that the Spring Framework's Cache Abstraction is simply a Facade with a caching API/SPI common to multiple caching provider implementations, effectively providing the lowest common denominator of caching functionality (e.g. put, get, evict, invalidate) across providers, then to your question: there is no memory leak originating from Spring's "Cache", which is not really even a thing on its own. It is technically the provider implementation's "cache", such as Google Guava, Redis, Hazelcast, or Apache Geode (VMware GemFire), that would actually be the source of a memory leak, if a leak existed in your application in the first place.
In other words, if there is any memory leak, then it originates with the caching provider.
You should refer to your caching provider's documentation on configuring critical aspects of the cache, such as memory management, that are explicitly stated to be beyond the control of the Spring Framework's Cache Abstraction.
The reason these aspects are beyond the control of the core Spring Framework is simply that the configuration of these low-level cache features (e.g. memory management) is usually very provider-specific and varies widely from one provider to the next, especially with respect to capabilities and features.
I hope this explanation gives you clarity on the position of the Cache Abstraction provided by Spring and its responsibilities.
By adhering to the abstraction, you effectively make it easier to switch between caching providers if your application requirements, use cases, or SLAs change.
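To make this concrete, here is a minimal sketch of using Spring's Cache API directly through an injected CacheManager, exercising exactly those lowest-common-denominator operations; the QuoteService class and the "quotes" cache name are made up for illustration:

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Service;

@Service
public class QuoteService {

    private final CacheManager cacheManager;

    public QuoteService(CacheManager cacheManager) {
        this.cacheManager = cacheManager; // whichever provider is configured backs this manager
    }

    public void demo() {
        Cache quotes = cacheManager.getCache("quotes"); // provider-backed Cache instance
        quotes.put("AAPL", 42);                         // put
        Cache.ValueWrapper hit = quotes.get("AAPL");    // get (null on a miss)
        quotes.evict("AAPL");                           // evict a single entry
        quotes.clear();                                 // invalidate the whole cache
    }
}

Whether those entries live on the JVM heap, in an off-heap store, or on a remote server is entirely up to the configured provider, which is the point of the abstraction.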
You can still specify which cache you want. For instance, you can use Caffeine, which
"... provides an in-memory cache using a Google Guava inspired API"
And you can still configure it, for instance with a maximumSize:
spring:
  cache:
    cache-names: instruments, directory
    caffeine:
      spec: maximumSize=500, expireAfterAccess=30s
To enable it, you just have to add the dependency and add @EnableCaching in a configuration class (and, of course, add @Cacheable to your method).
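For example, a minimal sketch; the InstrumentService and its lookup are hypothetical, while "instruments" matches one of the cache names declared in the YAML above:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching // enables annotation-driven caching; Boot wires Caffeine from the spec above
class CacheConfig {
}

@Service
class InstrumentService {

    @Cacheable("instruments") // result is cached per isin until the size/expiry policy evicts it
    public String findInstrument(String isin) {
        return expensiveLookup(isin); // executed only on a cache miss
    }

    private String expensiveLookup(String isin) {
        return "instrument-" + isin;
    }
}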
See:
https://github.com/ben-manes/caffeine/
https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#cache-store-configuration-caffeine
https://memorynotfound.com/spring-boot-caffeine-caching-example-configuration/
https://www.baeldung.com/spring-boot-caffeine-cache
Related
If I am supposed to implement caching in an existing Spring application for all web service calls as well as database calls, what would be the best way to implement it? I mean, which design patterns and caching mechanisms can be used, along with anything else that is required?
I would appreciate any suggestions.
Since you are already using the Spring stack, Spring Caching could be an alternative to consider, as it requires very little integration and most things come out of the box. You can take a look at simple examples here and here to get a feel for how it works. However, if you want more control over the actual underlying cache implementation and the code that interacts with it, you can easily roll your own, though that will require writing more code on your end.
If you are using Spring Boot you can simply use @EnableCaching and @Cacheable, since Spring Boot automatically configures a suitable CacheManager to serve as the provider for the relevant cache.
You can find more on https://spring.io/guides/gs/caching/
In addition to Guru's answer.
You can find more info about Spring Boot Caching on https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-caching.html and https://docs.spring.io/spring/docs/4.3.14.RELEASE/spring-framework-reference/htmlsingle/#cache
@EnableCaching enables the cache configuration and @Cacheable triggers caching of a method's result.
I'm using the default caching provided by Spring Boot. I have not configured any cache manager. I am simply using the @Cacheable and @CacheEvict annotations and nothing else.
- According to Spring:
#cache-strategies
Just like other services in the Spring Framework, the caching service is an abstraction (not a cache implementation) and requires the use of an actual storage to store the cache data - that is, the abstraction frees the developer from having to write the caching logic but does not provide the actual stores. This abstraction is materialized by the org.springframework.cache.Cache and org.springframework.cache.CacheManager interfaces.
It also says:
#boot-features-caching-provider-simple
If none of the other providers can be found, a simple implementation using a ConcurrentHashMap as cache store is configured.
My question is:
- What is the actual memory used by the Spring Boot simple cache?
- Does it obey JAVA_OPTS/CATALINA_OPTS memory limits?
In our project we are using Infinispan and Spring's @Cacheable annotation, mainly for caching I/O results.
As we have many concurrent calls, we would like to avoid two threads performing the same data retrieval twice, hence having the second @Cacheable method call (with the same cache key) block until the first one finishes and returns a result.
I am used to Ehcache's SelfPopulatingCache which supports this automatically, but is there a similar feature in Infinispan?
Ideally this should be usable via Spring's @Cacheable so that we avoid boilerplate code. I have noticed that Spring 4.3 now has @Cacheable.sync(), but it is indicated that it is only a hint and depends on the underlying cache provider implementation. Also, we are not on Spring 4.3 yet, so a solution for 4.2 would be better.
If you want that feature out-of-the-box you'll have to upgrade to 4.3. If you are using 4.2, upgrading to 4.3 should be painless anyway (if that's not the case, let us know!)
As Ben already mentioned, you can use the JCache bridge, which has explicit support for such a call (i.e. it will work with any JSR-107 compliant cache library). Infinispan does not have such a native feature yet; I've just submitted a feature request in their tracker.
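For reference, once you are on Spring 4.3 the synchronized lookup is requested directly on the annotation. A small sketch, with a hypothetical CustomerService, assuming the underlying provider honours the hint:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class CustomerService {

    // sync = true asks the provider to compute the value under a lock, so concurrent
    // callers with the same key wait for the first computation instead of repeating the I/O
    @Cacheable(cacheNames = "customers", sync = true)
    public String findCustomer(String id) {
        return slowRemoteCall(id); // executed once per key while other threads block
    }

    private String slowRemoteCall(String id) {
        return "customer-" + id;
    }
}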
Infinispan has eviction policies and an expiration lifespan that can be set per entity. Based on the question below, we can make the change in persistence.xml.
Infinispan - set per Entity expiration.lifespan
My question is: is there a way to do this with an annotation on that particular entity?
I am not aware of any such config. The reason for its absence is probably that Infinispan (and other cache providers) are general-purpose caching frameworks which, in general, are not aware of Hibernate second-level caching specifics.
On the other hand, again in general, Hibernate and java.persistence do not interfere with specific cache provider implementations and APIs. That means a cache provider may not even allow defining an expiration policy while still being perfectly able to serve as a Hibernate L2 cache.
However, you could define your own annotations and set the Infinispan config values programmatically, along the lines of the sketch below. You could turn it into an interesting open source project, if none exists so far that does something similar. :)
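A rough sketch of what that could look like, assuming an embedded Infinispan cache manager; the @CacheLifespan annotation and the region-naming convention are inventions for illustration:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.concurrent.TimeUnit;

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.EmbeddedCacheManager;

@Retention(RetentionPolicy.RUNTIME)
@interface CacheLifespan {
    long seconds();
}

class EntityCacheRegistrar {

    // At startup, read the custom annotation from each entity class and define a
    // per-entity cache region with the requested Infinispan lifespan.
    static void register(EmbeddedCacheManager cacheManager, Class<?> entityClass) {
        CacheLifespan lifespan = entityClass.getAnnotation(CacheLifespan.class);
        if (lifespan == null) {
            return; // entity not annotated, keep the default configuration
        }
        Configuration config = new ConfigurationBuilder()
                .expiration().lifespan(lifespan.seconds(), TimeUnit.SECONDS)
                .build();
        cacheManager.defineConfiguration(entityClass.getName(), config);
    }
}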
We'd like to use another L2 cache for our big JPA application. We are trying to achieve a shared cache between multiple servers.
We use EclipseLink as our JPA implementation, and some legacy code uses internal EclipseLink APIs, so switching is not an option.
Coherence/TopLink Grid seems too expensive ($4,000/CPU?).
Is there a way we could plug in another cache implementation? Is something specified in JPA 2 (I can't find anything in the spec, but maybe I just misread it)? Proprietary (i.e. EclipseLink-specific) solutions are OK, as long as they are somewhat documented or simple enough (we don't want this to break).
Is there a way we could plug another cache implementation?
Did you investigate the use of the EclipseLink shared object cache that comes with EclipseLink? Going by the description, the shared object cache is not confined to a single EntityManager; it is available across the lifecycles of several EntityManagers, i.e. across several transactions. It is, of course, constrained to the lifecycle of an EntityManagerFactory, which may be alive for as long as the application is running in the container.
The EclipseLink shared object cache is different from Oracle Coherence, and I believe it is not licensed and packaged separately, which makes it available in all containers.
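If per-entity tuning of that shared cache is what you are after, here is a minimal sketch using EclipseLink's vendor-specific @Cache annotation; the entity and the numbers are illustrative:

import javax.persistence.Entity;
import javax.persistence.Id;

import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.CacheType;

@Entity
@Cache(type = CacheType.SOFT, // soft references let the cache shrink under memory pressure
       size = 10000,          // target number of cached instances for this entity
       expiry = 600000)       // invalidate entries after 10 minutes (value is in milliseconds)
public class Product {

    @Id
    private Long id;

    private String name;
}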
JPA does not specify a pluggable cache interface. I don't know if it ever will, but if it does, my bet is that it won't be until after the resurrected JSR-107 finishes defining a standard API for object caches, which JPA would then be able to use. It might also have to wait for JSR 347, which is defining another cache interface whose relationship to JCache is somewhat unclear (there is open factional warfare between and within the groups, with some members of the JSR-107 expert group trying to declare 347 an independent republic and invade Mexico).
So, until then, you're at the mercy of your provider's cache interface. I am not an EclipseLink expert, but the last time I looked, I couldn't see a pluggable second-level cache interface. In fact, I think only Hibernate and, of course, DataNucleus have them.
Most cache implementations are not distributed (other than Coherence); they are just local.
EclipseLink already supports a shared cache and cache coordination for caching in a cluster.
What cache do you intend to use, and what benefit do you intend to get from it?
EclipseLink does support integration with third-party caches; this API was created for the Coherence integration, although Coherence is currently the only cache that provides such an integration.