Why I have cache misses in Service using Spring Cache - java

I have configured my cache as follows:
@Configuration
@EnableCaching
public class CacheConfig {

    @Bean(name = "caffeineCachingProvider")
    public CachingProvider caffeineCachingProvider() {
        return Caching.getCachingProvider("com.github.benmanes.caffeine.jcache.spi.CaffeineCachingProvider");
    }

    @Bean(name = "caffeineCacheManager")
    public JCacheCacheManager getSpringCacheManager() {
        CacheManager cacheManager = caffeineCachingProvider().getCacheManager();
        CaffeineConfiguration<String, List<Product>> caffeineConfiguration = new CaffeineConfiguration<>();
        caffeineConfiguration.setExpiryPolicyFactory(FactoryBuilder.factoryOf(
                new AccessedExpiryPolicy(new Duration(TimeUnit.MINUTES, 60))));
        caffeineConfiguration.setCopierFactory(Copier::identity);
        cacheManager.createCache("informerCache", caffeineConfiguration);
        return new JCacheCacheManager(cacheManager);
    }
}
I also have a @Service that uses it in the following way:
@Service
public class InformerService {

    @CacheResult(cacheName = "informerCache")
    public List<Product> getProducts(@CacheKey String category, @CacheKey String countrySign, @CacheKey long townId) throws Exception {
        Thread.sleep(5000);
        // do some work
    }
}
So I observe the following behavior.
Calling the service method the first time takes 5 seconds,
then it does some work, as expected.
Calling the method a second time with the same parameters -> caching works -> the result is returned immediately.
Calling it a third time with the same parameters hits Thread.sleep again.
And so on, all over again.
How can I solve this? Is this an issue with proxying? What did I miss?

As discussed in the comments, this was a bug in the JCache adapter. Thank you for letting me know about this problem. I released version 2.1.0 which includes this fix. That release also includes friendlier initial settings for CaffeineConfiguration which you identified in another post.
While the core library is heavily tested, the JCache adapters relied too heavily on the JSR's TCK (Technology Compatibility Kit). Unfortunately that test suite isn't very effective, so I added tests to help avoid these types of mistakes in the future.
This issue only occurred in JCache because its version of expiration is not supported by Caffeine's core library. Caffeine prefers to use an O(1) design that eagerly cleans up expired entries by using fixed durations. JCache uses per-entry lazy expiration and the specification authors assume a capacity constraint is used to eventually discard expired entries. I added a warning to the documentation about this feature, as it could be error prone. While none of the other JCache implementations go beyond this, a pending task is to decide on a mechanism to help mitigate this JCache design flaw.
Thanks again for reporting this issue. As always, feel free to reach out if you have any other issues or feedback to share.
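To illustrate the fixed-duration design mentioned above, here is a minimal sketch (my own addition, not part of the original answer) of the core library's equivalent of the question's 60-minute access expiry:

```java
// Fragment: Caffeine's core API with a fixed expire-after-access duration,
// which the library can enforce in O(1) time, unlike JCache's per-entry
// lazy expiration.
Cache<String, List<Product>> cache = Caffeine.newBuilder()
        .expireAfterAccess(60, TimeUnit.MINUTES)
        .build();
```

This bypasses the JCache adapter entirely, at the cost of losing the JCache API and annotations.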

Related

How to update cache every 30 minutes in spring?

I have the following declaration:
@Cacheable("books")
public Book findBook(ISBN isbn) {...}
But I want to update the cache every 30 minutes. I understand that I can create a @Scheduled job to invoke a method annotated with @CacheEvict("books").
Also, I suppose that in this case all books will be cleared, but it would be more desirable to update only stale data (entries put in the cache more than 30 minutes ago).
Is there anything in spring that can facilitate implementation?
Cache implementations provide a feature named expire after write or time to live for this task. The different cache implementations vary a lot here, and Spring makes no effort to abstract or generalize the configuration part. Here is an example of programmatic configuration for your cache in Spring, if you would like to use cache2k:
@Configuration
@EnableCaching
public class CachingConfig extends CachingConfigurerSupport {

    @Bean
    public CacheManager cacheManager() {
        return new SpringCache2kCacheManager()
            .addCaches(
                b -> b.name("books").keyType(ISBN.class).valueType(Book.class)
                      .expireAfterWrite(30, TimeUnit.MINUTES)
                      .entryCapacity(5000));
    }
}
More information about this is in the cache2k User Guide - Spring Framework Support. Other cache implementations like EhCache or Caffeine support expiry as well, but the configuration is different.
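For comparison, here is a roughly equivalent sketch with Caffeine (my own illustration, not from the cache2k guide; CaffeineCacheManager is the Caffeine adapter from spring-context-support):

```java
@Configuration
@EnableCaching
public class CaffeineCachingConfig {

    @Bean
    public CacheManager cacheManager() {
        // expire-after-write of 30 minutes and a size bound, mirroring the
        // cache2k configuration above
        CaffeineCacheManager cacheManager = new CaffeineCacheManager("books");
        cacheManager.setCaffeine(Caffeine.newBuilder()
                .expireAfterWrite(30, TimeUnit.MINUTES)
                .maximumSize(5000));
        return cacheManager;
    }
}
```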
If you would like to configure the cache expiry in a "vendor-neutral" way, you can use a cache implementation that supports the JCache/JSR-107 standard, which includes setting an expiry. One way to do it looks like this:
@Configuration
@EnableCaching
public class CacheConfiguration {

    @Bean
    public JCacheCacheManager cacheManager() {
        return new JCacheCacheManager() {
            @Override
            protected Collection<Cache> loadCaches() {
                Collection<Cache> caches = new ArrayList<>();
                caches.add(new JCacheCache(
                    getCacheManager().createCache("books",
                        new MutableConfiguration<ISBN, Book>()
                            .setExpiryPolicyFactory(ModifiedExpiryPolicy.factoryOf(
                                new Duration(TimeUnit.MINUTES, 30)))),
                    false));
                return caches;
            }
        };
    }
}
The catch with JCache is that there are configuration options you need which are not part of the standard. One example is limiting the cache size. For this, you always need to add a vendor-specific configuration. In the case of cache2k (I am the author of cache2k), which supports JCache, the configurations are merged, as described in detail in the cache2k User Guide - JCache. This means that on the programmatic level you do the "logic" part of the configuration, whereas the "operational" part, like the cache size, is configurable in an external configuration file.
Unfortunately, it's not part of the standard how a vendor configuration and a programmatic configuration via the JCache API need to interoperate. So even a 100% JCache-compatible cache might refuse operation and require that you only use one way of configuration.

Is it possible to make the application ignore caching in case the cache server fails?

I have a spring boot application with the following properties:
spring.cache.type: redis
spring.redis.host: <hostname>
spring.redis.port: <hostport>
Right now if the remote host fails the application also fails with a connection error.
Since in this case the cache is not core to my application but is used only for performance, I'd like Spring to simply bypass it and go straight to the database to retrieve its data.
I saw that this could be attained by defining a custom errorHandler method, but in order to do so I have to implement the CachingConfigurer interface... which also forces me to override every method (for example cache manager, cache resolver, etc.).
@Configuration
public class CacheConfiguration implements CachingConfigurer {

    @Override
    public CacheManager cacheManager() {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public CacheResolver cacheResolver() {
        // TODO Auto-generated method stub
        return null;
    }

    ...

    @Override
    public CacheErrorHandler errorHandler() {
        // the only method I need, maybe
        return null;
    }
}
I would like to avoid that... I simply need a way to tell Spring: "the cache crashed, but it's OK: just pretend you have no cache at all".
@Phate - Absolutely! I just answered a related question (possibly) using Apache Geode or Pivotal GemFire as the caching provider in a Spring Boot application with Spring's Cache Abstraction.
In that posting, rather than disabling the cache completely, I switched GemFire/Geode to run in a local-only mode (a possible configuration with GemFire/Geode). However, the same techniques can be applied to disable caching entirely if that is what is desired.
In essence, you need a pre-processing step, before Spring Boot and Spring in general start to evaluate the configuration of your application.
In my example, I implemented a custom Spring Condition that checked the availability of the cluster (i.e. the servers). I then applied the Condition to my @Configuration class.
In the case of Spring Boot, Spring Boot applies auto-configuration for Redis (as a store and a caching provider) when it effectively sees (as well as see here) Redis and Spring Data Redis on the classpath of your application. So, essentially, Redis is only enabled as a caching provider when the "conditions" are true, primarily that a RedisConnectionFactory bean was declared by your application configuration, which is your responsibility.
So, what would this look like?
Like my Apache Geode & Pivotal GemFire custom Spring Condition, you could implement a similar Condition for Redis, such as:
static class RedisAvailableCondition implements Condition {

    @Override
    public boolean matches(ConditionContext conditionContext,
            AnnotatedTypeMetadata annotatedTypeMetadata) {

        // Check the availability of the Redis server, such as by opening a Socket
        // connection to the server node.
        // NOTE: There might be other, more reliable/robust means of checking
        // the availability of a Redis server in the Redis community.

        Socket redisServer = null;

        try {
            Environment environment = conditionContext.getEnvironment();
            String host = environment.getProperty("spring.redis.host");
            Integer port = environment.getProperty("spring.redis.port", Integer.class);
            SocketAddress redisServerAddress = new InetSocketAddress(host, port);
            redisServer = new Socket();
            redisServer.connect(redisServerAddress);
            return true;
        }
        catch (Throwable ignore) {
            System.setProperty("spring.cache.type", "none");
            return false;
        }
        finally {
            // TODO: You need to implement this method yourself.
            safeCloseSocket(redisServer);
        }
    }
}
Additionally, I also set spring.cache.type to NONE, to ensure that caching is rendered a no-op in the case that Redis is not available. NONE is explained in more detail here.
Of course, you could also use a fallback caching option, using some other caching provider (like a simple ConcurrentHashMap, but I leave that as an exercise for you). Onward...
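A minimal sketch of such a fallback (my own addition; the inverse-condition class name is hypothetical): register a plain in-memory cache manager when Redis is unavailable.

```java
@Configuration
@Conditional(RedisNotAvailableCondition.class) // hypothetical inverse of the Redis condition
class FallbackCacheConfiguration {

    @Bean
    CacheManager cacheManager() {
        // Simple ConcurrentHashMap-backed caches from spring-context;
        // note that entries never expire and the caches are unbounded.
        return new ConcurrentMapCacheManager();
    }
}
```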
Then, in your Spring Boot application configuration class where you have defined your RedisConnectionFactory bean (as expected by Spring Boot's auto-configuration), you add this custom Condition using Spring's @Conditional annotation, like so:
@Configuration
@Conditional(RedisAvailableCondition.class)
class MyRedisConfiguration {

    @Bean
    RedisConnectionFactory redisConnectionFactory() {
        // Construct and return a new RedisConnectionFactory
    }
}
This should effectively handle the case when Redis is not available.
DISCLAIMER: I did not test this myself, but is based on my Apache Geode/Pivotal GemFire example that does work. So, perhaps, with some tweaks this will address your needs. It should also serve to give you some ideas.
Hope this helps!
Cheers!
We can use any circuit breaker implementation to use the database as a fallback option in case of a cache failure. The advantage of the circuit breaker pattern is that once your cache is back up, requests are automatically routed back to it, so the switch happens seamlessly.
You can also configure how many times you want to retry before falling back to the database, and how frequently you want to check whether your cache is back online.
Spring Cloud provides out-of-the-box support for the Hystrix and Resilience4j circuit breaker implementations, and it is easy to integrate with Spring Boot applications.
https://spring.io/projects/spring-cloud-circuitbreaker
https://resilience4j.readme.io/docs/circuitbreaker
I have to implement the CachingConfigurer bean... but this also forces me to override every method (for example cache manager, cache resolver, etc.)
Instead of this, you can simply extend CachingConfigurerSupport and only override the errorHandler() method, returning a custom CacheErrorHandler whose method implementations are no-ops. See https://stackoverflow.com/a/68072419/1527469
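A minimal sketch of that approach (assuming the CachingConfigurerSupport and CacheErrorHandler types from spring-context):

```java
@Configuration
public class CacheErrorConfiguration extends CachingConfigurerSupport {

    @Override
    public CacheErrorHandler errorHandler() {
        // Swallow all cache errors so a failing cache behaves like no cache at all;
        // reads simply fall through to the underlying method (e.g. the database).
        return new CacheErrorHandler() {
            @Override public void handleCacheGetError(RuntimeException e, Cache cache, Object key) { }
            @Override public void handleCachePutError(RuntimeException e, Cache cache, Object key, Object value) { }
            @Override public void handleCacheEvictError(RuntimeException e, Cache cache, Object key) { }
            @Override public void handleCacheClearError(RuntimeException e, Cache cache) { }
        };
    }
}
```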

Spring Boot 2 Actuator doesn't publish JVM metrics

I am running a Spring Boot 2 Application and added the actuator spring boot starter dependency. I enabled all web endpoints and then called:
http://localhost:8080/actuator/metrics
result is:
{
"names": ["jdbc.connections.active",
"jdbc.connections.max",
"jdbc.connections.min",
"hikaricp.connections.idle",
"hikaricp.connections.pending",
"hikaricp.connections",
"hikaricp.connections.active",
"hikaricp.connections.creation",
"hikaricp.connections.max",
"hikaricp.connections.min",
"hikaricp.connections.usage",
"hikaricp.connections.timeout",
"hikaricp.connections.acquire"]
}
But I am missing all the JVM stats and other built-in metrics. What am I missing here? Everything I read said that these metrics should be available at all times.
Thanks for any hints.
I want to share my findings with you. The problem was a 3rd-party library (Shiro) and my configuration for it. The bean loading of Micrometer got mixed up, which resulted in a too-late initialization of a needed post-processing bean that configures the MeterRegistry (in my case the PrometheusMeterRegistry).
I don't know if it's wise to do the configuration of the registries via a different bean (a post-processor), which can lead to situations like the one I had... the registries should configure themselves without relying on other beans that might get constructed too late.
In case this ever happens to anybody else:
I had a similar issue (except it wasn't Graphite but Prometheus, and I was not using Shiro).
Basically I only had Hikari and HTTP metrics, nothing else (no JVM metrics like GC).
I banged my head on several walls before finding the root cause: there was a Hikari auto-configure post-processor in Spring Boot autoconfigure that eagerly retrieved a MeterRegistry, so the metric beans didn't have time to initialize first.
And to my surprise, when looking at this code on GitHub I didn't find it. So I bumped my spring-boot-starter-parent version from 2.0.4.RELEASE to 2.1.0.RELEASE and now everything works fine. I correctly get all the metrics.
As I expected, this problem was caused by the loading order of the beans.
I used Shiro in the project.
Shiro's verification method used MyBatis to read data from the database.
I used @Autowired for MyBatis' Mapper file, which caused the Actuator metrics-related beans not to be assembled by Spring Boot (I don't know what the specific reason is).
So I disabled the automatic assembly of the Mapper file in favor of manual assembly.
The code is as follows:
public class SpringContextUtil implements ApplicationContextAware {

    private static ApplicationContext applicationContext;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext)
            throws BeansException {
        SpringContextUtil.applicationContext = applicationContext;
    }

    public static ApplicationContext getApplicationContext() {
        return applicationContext;
    }

    public static Object getBean(String beanId) throws BeansException {
        return applicationContext.getBean(beanId);
    }
}
Then
UserMapper userMapper = (UserMapper) SpringContextUtil.getBean("userMapper");
UserModel userModel = userMapper.findUserByName(name);
The problem can be solved for the time being. This is just a stopgap measure, but at the moment I have no better way.
I could not find process_update_seconds in /actuator/prometheus, so I spent some time solving the problem.
My solution:
Rewrite HikariDataSourceMetricsPostProcessor and MeterRegistryPostProcessor.
The order of HikariDataSourceMetricsPostProcessor is Ordered.HIGHEST_PRECEDENCE + 1:
package org.springframework.boot.actuate.autoconfigure.metrics.jdbc;
...
class HikariDataSourceMetricsPostProcessor implements BeanPostProcessor, Ordered {
    ...
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE + 1;
    }
}
The order of MeterRegistryPostProcessor is Ordered.HIGHEST_PRECEDENCE:
package org.springframework.boot.actuate.autoconfigure.metrics;
...
import org.springframework.core.Ordered;

class MeterRegistryPostProcessor implements BeanPostProcessor, Ordered {
    ...
    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE;
    }
}
In my case I used Shiro and JPA to save the user session id. I found that the order of MeterRegistryPostProcessor and HikariDataSourceMetricsPostProcessor caused the problem: the MeterRegistry did not bind the metrics because of the loading order.
Maybe my solution will help you solve the problem.
I have a working sample with Spring Boot, Micrometer, and Graphite and confirmed the out-of-the-box MeterBinders are working as follows:
{
"names" : [ "jvm.memory.max", "process.files.max", "jvm.gc.memory.promoted", "tomcat.cache.hit", "system.load.average.1m", "tomcat.cache.access", "jvm.memory.used", "jvm.gc.max.data.size", "jvm.gc.pause", "jvm.memory.committed", "system.cpu.count", "logback.events", "tomcat.global.sent", "jvm.buffer.memory.used", "tomcat.sessions.created", "jvm.threads.daemon", "system.cpu.usage", "jvm.gc.memory.allocated", "tomcat.global.request.max", "tomcat.global.request", "tomcat.sessions.expired", "jvm.threads.live", "jvm.threads.peak", "tomcat.global.received", "process.uptime", "tomcat.sessions.rejected", "process.cpu.usage", "tomcat.threads.config.max", "jvm.classes.loaded", "jvm.classes.unloaded", "tomcat.global.error", "tomcat.sessions.active.current", "tomcat.sessions.alive.max", "jvm.gc.live.data.size", "tomcat.servlet.request.max", "tomcat.threads.current", "tomcat.servlet.request", "process.files.open", "jvm.buffer.count", "jvm.buffer.total.capacity", "tomcat.sessions.active.max", "tomcat.threads.busy", "my.counter", "process.start.time", "tomcat.servlet.error" ]
}
Note that the sample is on the graphite branch, not the master branch.
If you could break the sample in the way you're seeing, I can take another look.

Schedule Spring cache eviction?

Is it possible to schedule spring cache eviction to everyday at midnight?
I've read Springs Cache Docs and found nothing about scheduled cache eviction.
I need to evict cache daily and recache it in case there were some changes outside my application.
Try using @Scheduled.
Example:
@Scheduled(fixedRate = ONE_DAY)
@CacheEvict(value = { CACHE_NAME })
public void clearCache() {
    log.debug("Cache '{}' cleared.", CACHE_NAME);
}
You can also use a cron expression with @Scheduled.
If you use @Cacheable on methods with parameters, you should NEVER forget the allEntries = true property on the @CacheEvict; otherwise your call will only evict the key derived from the parameters of the clearCache() method, which is nothing => you will not evict anything from the cache.
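Putting that advice together with the answer above, a sketch of the eviction method with allEntries set (my own combination, not from the original answer):

```java
// ONE_DAY in milliseconds; allEntries = true clears the whole cache instead of
// a single key derived from the (parameterless) clearCache() method.
private static final long ONE_DAY = 24 * 60 * 60 * 1000L;

@Scheduled(fixedRate = ONE_DAY)
@CacheEvict(value = { CACHE_NAME }, allEntries = true)
public void clearCache() {
    log.debug("Cache '{}' cleared.", CACHE_NAME);
}
```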
Maybe not the most elegant solution, but @CacheEvict was not working, so I went directly for the CacheManager.
This code clears a cache called foo via scheduler:
class MyClass {

    @Autowired
    CacheManager cacheManager;

    @Cacheable(value = "foo")
    public int expensiveCalculation(String bar) {
        ...
    }

    @Scheduled(fixedRate = 60 * 1000)
    public void clearCache() {
        cacheManager.getCache("foo").clear();
    }
}
I know this question is old, but I found a better solution that worked for me. Maybe it will help others.
So, it is indeed possible to make a scheduled cache eviction. Here is what I did in my case.
The two annotations @Scheduled and @CacheEvict do not seem to work together.
You must therefore split the scheduling method and the cache eviction method apart.
But since the whole mechanism is based on proxies, only external calls to public methods of your class will trigger the cache eviction. This is because internal calls between methods of the same class do not go through the Spring proxy.
I managed to fix it the same way as Celebes (see comments), but with an improvement to avoid two components.
@Component
class MyClass {

    @Autowired
    MyClass proxiedThis; // store your component inside its Spring proxy.

    // A cron expression for every day at midnight (Spring cron uses six fields)
    @Scheduled(cron = "0 0 0 * * *")
    public void cacheEvictionScheduler() {
        proxiedThis.clearCache();
    }

    @CacheEvict(value = { CACHE_NAME }, allEntries = true)
    public void clearCache() {
        // intentionally left blank. Or add some trace info.
    }
}
Please follow the code below and change the cron expression accordingly. I have set it to every 3 minutes.
Create a class and use the method below inside it.
class A {

    @Autowired
    CacheManager cacheManager;

    @Scheduled(cron = "0 */3 * ? * *")
    public void cacheEvictionScheduler() {
        logger.info("inside scheduler start");
        //clearCache();
        evictAllCaches();
        logger.info("inside scheduler end");
    }

    public void evictAllCaches() {
        logger.info("inside clearcache");
        cacheManager.getCacheNames().stream()
            .forEach(cacheName -> cacheManager.getCache(cacheName).clear());
    }
}
The Spring cache framework is event-driven, i.e. @Cacheable or @CacheEvict are triggered only when the respective methods are invoked.
However, you can leverage the underlying cache provider (remember, the Spring cache framework is just an abstraction and does not provide a cache solution by itself) to invalidate the cache by itself. For instance, EhCache has a property, timeToLiveSeconds, which dictates how long entries stay alive in the cache. But this won't re-populate the cache for you unless the @Cacheable-annotated method is invoked.
So for cache eviction and re-population at a particular time (say midnight, as mentioned), consider implementing a background scheduled service in Spring that triggers the cache eviction and re-population as desired. The expected behavior is not provided out of the box.
Hope this helps.

Spring Couchbase Health Indicator

I'm having trouble setting up a health indicator for my Spring project. Since version 1.4.0 Spring comes with its own CouchbaseHealthIndicator (see http://docs.spring.io/spring-boot/docs/1.4.1.RELEASE/api/org/springframework/boot/actuate/health/CouchbaseHealthIndicator.html) but I cannot make it work. Other health indicators are working fine (disk space in this project, mail and db in other projects).
@ConditionalOnClass({ CouchbaseOperations.class, Bucket.class })
@ConditionalOnBean(CouchbaseOperations.class)
@ConditionalOnEnabledHealthIndicator("couchbase")
These are the conditions for the health indicator being initialized. The mentioned classes are on the classpath.
After the project has started, I let it print the beans it has in its application context; there is a bean 'CouchbaseTemplate', which implements the required CouchbaseOperations.
I also manually enabled the Couchbase health check like this:
management.health.couchbase.enabled = true
Still, it keeps checking disk space only; there is no check for Couchbase.
You can find my project on GitHub: https://github.com/Age15990/CouchbaseHI
Please feel free to download and try. If you have come across this problem before or you have an idea how to solve it I would be happy to read your answer.
Thanks in advance!
There seems to be a defect in spring-boot-starter-actuator. I filed a ticket for that here.
A work-around could be to provide the configuration yourself:
@Configuration
@ConditionalOnClass({ CouchbaseOperations.class, Bucket.class })
//@ConditionalOnBean(CouchbaseOperations.class)
@ConditionalOnEnabledHealthIndicator("couchbase")
public class CouchbaseHealthConfig extends CompositeHealthIndicatorConfiguration<CouchbaseHealthIndicator, CouchbaseOperations> {

    private final Map<String, CouchbaseOperations> couchbaseOperations;

    public CouchbaseHealthConfig(Map<String, CouchbaseOperations> couchbaseOperations) {
        this.couchbaseOperations = couchbaseOperations;
    }

    @Bean
    @ConditionalOnMissingBean(name = "couchbaseHealthIndicator")
    public HealthIndicator couchbaseHealthIndicator() {
        return createHealthIndicator(this.couchbaseOperations);
    }
}
Explanation:
The auto-configuration of the Couchbase health indicator seems to be broken. It works for me when I comment out @ConditionalOnBean(CouchbaseOperations.class) from the original configuration in HealthIndicatorAutoConfiguration.
This means CouchbaseOperations was not initialized properly before the auto-configuration took place.
