I have a class that performs some read operations from a service XXX. These read operations will eventually perform DB reads, and I want to optimize those calls by caching the result of each method in the class under a specified custom key per method.
class A {
    public Output1 func1(Arguments1 ...) {
        ...
    }
    public Output2 func2(Arguments2 ...) {
        ...
    }
    public Output3 func3(Arguments3 ...) {
        ...
    }
    public Output4 func4(Arguments4 ...) {
        ...
    }
}
I am thinking of using Spring caching (the @Cacheable annotation) to cache the results of each of these methods.
However, I want cache invalidation to happen automatically by some mechanism (TTL, etc.). Is that possible in Spring caching? I understand that there is a @CacheEvict annotation, but I want that eviction to happen automatically.
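For illustration, this is roughly the shape I have in mind; the cache names and the SpEL key expressions below are placeholders, not my real code:
import org.springframework.cache.annotation.Cacheable;

class A {

    // each method gets its own cache region and its own custom key
    @Cacheable(value = "func1Cache", key = "#arg.id")     // "id" is a hypothetical field on Arguments1
    public Output1 func1(Arguments1 arg) {
        // ... eventually hits the DB
        return null;
    }

    @Cacheable(value = "func2Cache", key = "#arg.code")   // "code" is a hypothetical field on Arguments2
    public Output2 func2(Arguments2 arg) {
        // ...
        return null;
    }
}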
Any help would be appreciated.
According to the Spring documentation (section 36.8):
How can I set the TTL/TTI/Eviction policy/XXX feature?
Directly through your cache provider. The cache abstraction is...
well, an abstraction not a cache implementation. The solution you are
using might support various data policies and different topologies
which other solutions do not (take for example the JDK
ConcurrentHashMap) - exposing that in the cache abstraction would be
useless simply because there would be no backing support. Such
functionality should be controlled directly through the backing cache,
when configuring it or through its native API.
This means that Spring does not directly expose an API to set Time To Live, but instead relies on the caching provider implementation to set it. This means you either need to set Time To Live through the exposed CacheManager, if the caching provider allows dynamic setup of these attributes, or alternatively configure yourself the cache region that Spring is using with the @Cacheable annotation.
In order to find the name of the cache region that @Cacheable is using, you can use a JMX console to browse the available cache regions in your application.
If you are using EHCache, for example, once you know the cache region you can provide XML configuration like this:
<cache name="myCache"
maxEntriesLocalDisk="10000" eternal="false" timeToIdleSeconds="3600"
timeToLiveSeconds="0" memoryStoreEvictionPolicy="LFU">
</cache>
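For completeness, here is a rough sketch of how such an externally configured EHCache region can be wired into Spring so that @Cacheable picks it up (the class and file names here are only examples):
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

@Configuration
@EnableCaching
public class EhCacheConfig {

    @Bean
    public EhCacheManagerFactoryBean ehCacheManagerFactory() {
        EhCacheManagerFactoryBean factory = new EhCacheManagerFactoryBean();
        // ehcache.xml contains the <cache name="myCache" .../> region shown above
        factory.setConfigLocation(new ClassPathResource("ehcache.xml"));
        factory.setShared(true);
        return factory;
    }

    @Bean
    public CacheManager cacheManager(net.sf.ehcache.CacheManager ehCacheManager) {
        // Spring's cache abstraction delegates to the EHCache regions,
        // including their TTL/TTI settings, through this adapter
        return new EhCacheCacheManager(ehCacheManager);
    }
}
With this in place, @Cacheable("myCache") on your methods should use the region above, and EHCache itself applies the timeToIdleSeconds/timeToLiveSeconds eviction.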
Again, I repeat: all of this configuration is caching-provider specific, and Spring does not expose an interface for dealing with it.
REMARK: The default cache provider that Spring configures if no cache provider is defined is a ConcurrentHashMap. It does not have support for Time To Live. In order to get this functionality you have to switch to a different cache provider (for example EHCache).
Related
I have the following declaration:
@Cacheable("books")
public Book findBook(ISBN isbn) {...}
But I want to update the cache every 30 minutes. I understand that I can create a @Scheduled job to invoke a method annotated with @CacheEvict("books").
Also, I suppose that in this case all books will be cleared, but it is more desirable to update only stale data (entries that were put in the cache more than 30 minutes ago).
Is there anything in Spring that can facilitate such an implementation?
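For reference, a programmatic variant of the @Scheduled eviction idea mentioned above could look roughly like this (the class name, and using the CacheManager directly instead of @CacheEvict, are just one possible shape):
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class BooksCacheEvictionJob {

    private final CacheManager cacheManager;

    public BooksCacheEvictionJob(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    // runs every 30 minutes and clears the whole "books" cache;
    // it does not track how old each individual entry is
    @Scheduled(fixedRate = 30 * 60 * 1000)
    public void evictBooks() {
        Cache books = cacheManager.getCache("books");
        if (books != null) {
            books.clear();
        }
    }
}
(@EnableScheduling must be present on a configuration class for the @Scheduled trigger to fire.)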
Cache implementations provide a feature named expire after write or time to live for this task. The different cache implementations vary a lot here, and Spring makes no effort to abstract or generalize the configuration part either. Here is an example of programmatic configuration for your cache in Spring, if you would like to use cache2k:
@Configuration
@EnableCaching
public class CachingConfig extends CachingConfigurerSupport {

    @Bean
    public CacheManager cacheManager() {
        return new SpringCache2kCacheManager()
            .addCaches(
                b -> b.name("books").keyType(ISBN.class).valueType(Book.class)
                    .expireAfterWrite(30, TimeUnit.MINUTES)
                    .entryCapacity(5000));
    }
}
More information about this is in the cache2k User Guide - Spring Framework Support. Other cache implementations like EHCache or Caffeine support expiry as well, but the configuration is different.
If you would like to configure the cache expiry in a "vendor neutral" way, you can use a cache implementation that supports the JCache/JSR107 standard. The standard includes setting an expiry. A way to do it looks like this:
@Configuration
@EnableCaching
public class CacheConfiguration {

    @Bean
    public JCacheCacheManager cacheManager() {
        return new JCacheCacheManager() {
            @Override
            protected Collection<Cache> loadCaches() {
                Collection<Cache> caches = new ArrayList<>();
                caches.add(new JCacheCache(
                    getCacheManager().createCache("books",
                        new MutableConfiguration<ISBN, Book>()
                            .setExpiryPolicyFactory(ModifiedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30)))),
                    false));
                return caches;
            }
        };
    }
}
The problem with JCache is that there are configuration options that you need which are not part of the standard. One example is limiting the cache size. For this, you always need to add a vendor-specific configuration. In case of cache2k (I am the author of cache2k), which supports JCache, the configurations are merged, which is described in detail in the cache2k User Guide - JCache. This means that on a programmatic level you do the "logic" part of the configuration, whereas the "operational" part, like the cache size, is configurable in an external configuration file.
Unfortunately, it is not part of the standard how a vendor configuration and a programmatic configuration via the JCache API need to interoperate. So even a 100% JCache-compatible cache might refuse operation and require that you only use one way of configuration.
I'm already using Spring Boot and would like to use the cache layer it provides to cache an entire table. But I also need to be able to refresh the cache after a period of time (every 5 minutes).
The standard @Cacheable is easy, but when you try to add refreshAfterWrite, you need to define a LoadingCache.
How does one do that (in a Spring bean way) when your annotated method doesn't have any params?
#Cacheable("all")
public List<User> getAllUsers() {
ResultSet rset;
....
}
And what's the point of using the Spring abstraction if you now need to define implementation-specific objects?
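For what it's worth, the kind of setup I understand would be needed looks roughly like this (assuming Caffeine as the provider; UserRepository and findAllUsers() are placeholder names for my actual loading code):
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CaffeineCacheConfig {

    @Bean
    public CacheManager cacheManager(UserRepository userRepository) { // placeholder repository
        CaffeineCacheManager manager = new CaffeineCacheManager("all");
        manager.setCaffeine(Caffeine.newBuilder()
                .refreshAfterWrite(5, TimeUnit.MINUTES));
        // refreshAfterWrite requires a LoadingCache; the loader tells Caffeine how to
        // reload the value for a key in the background once it becomes stale
        manager.setCacheLoader(key -> userRepository.findAllUsers()); // placeholder load call
        return manager;
    }
}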
My ehcache config starts like this:
<ehcache maxBytesLocalHeap="200M" updateCheck="false">
After experiencing an error like "maxEntriesLocalHeap is not compatible with maxBytesLocalHeap set on cache manager", I looked into the source of spring-context-support:
@SuppressWarnings("deprecation")
public EhCacheFactoryBean() {
    setMaxEntriesLocalHeap(10000);
    setMaxElementsOnDisk(10000000);
    setTimeToLiveSeconds(120);
    setTimeToIdleSeconds(120);
}
The call to setMaxElementsOnDisk is commented like this:
void net.sf.ehcache.config.CacheConfiguration.setMaxElementsOnDisk(int maxElementsOnDisk)
Deprecated. use setMaxEntriesLocalDisk(long) for unclustered caches and setMaxEntriesInCache(long) for clustered caches.
Sets the maximum number elements on Disk. 0 means unlimited.
This property can be modified dynamically while the cache is operating.
Parameters:
maxElementsOnDisk the maximum number of Elements to allow on the disk. 0 means unlimited.
Does this mean that, using Spring 4.1.7 together with its dependency EHCache 2.9.1, a clustered cache is impossible with Spring's EhCacheFactoryBean?
Best regards,
Carsten
Sorry but I fail to see the link between the code/documentation snippets and your conclusion. Can you elaborate?
One thing is that it seems the default cache creation from Spring conflicts with what you want to do at the CacheManager level. But if you do not use default caches, you should have no problem.
The CacheManager and the cache must use the same configuration parameter (maxBytesLocalHeap or maxEntriesLocalHeap), and MaxElements* is deprecated.
I have a Spring web service application with Oracle as the database. Right now I have a datasource created using WebLogic server. I am also using EclipseLink JPA to do both read and write transactions (insert, read and update). Now we want to separate dataSources for read and write (insert or update) transactions.
My current dataSource is as follows:
JNDI NAME : jdbc/POI_DS
URL : jdbc:oracle:thin:@localhost:1521:XE
Using this, I am doing both read and write transactions.
What if I do the following:
JNDI NAME : jdbc/POI_DS_READ
URL : jdbc:oracle:thin:@localhost:1521:XE
JNDI NAME : jdbc/POI_DS_WRITE
URL : jdbc:oracle:thin:@localhost:1521:XE
I know that using an XA datasource we can define multiple dataSources. Can I do the same thing without an XA dataSource? Has anyone tried this kind of approach?
::UPDATE::
Thank you all for your responses. I have implemented the following solution.
I have taken the multiple database approach, where you define multiple transaction managers and entity manager factories. I have taken only a single non-XA dataSource (JNDI) that is referenced in the EntityManagerFactory bean.
You can refer to the following links, which are about multiple dataSources:
Multiple DataSource Approach
defining @Transactional value
I also explored the transaction managers org.springframework.transaction.jta.WebLogicJtaTransactionManager and org.springframework.orm.jpa.JpaTransactionManager.
There is an interesting article about this in Spring docs - Dynamic DataSource Routing. There is an example there, that allows you to basically switch data sources at runtime. It should help you. I'd gladly help you more, if you have any more specific questions.
EDIT: The article says the typical use is connecting to multiple databases via one configuration, but you could also create different configs for one database with different params, as you would need to here.
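To make that concrete, the core of the routing approach from that article is an AbstractRoutingDataSource subclass along these lines (the key names and the ThreadLocal holder are illustrative):
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    // per-thread marker deciding which target datasource to use; illustrative only
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    public static void use(String key) {
        CURRENT.set(key); // e.g. "READ" or "WRITE"
    }

    public static void reset() {
        CURRENT.remove();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // null falls back to the default target datasource, if one is configured
        return CURRENT.get();
    }
}
The two JNDI data sources (jdbc/POI_DS_READ and jdbc/POI_DS_WRITE) would then be registered via setTargetDataSources, keyed for example by "READ" and "WRITE".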
I would suggest using Database "services". Each workload, read-only and read-write, would be using its own service to access the database. That way you can use AWR reports to get statistics for each service. You can also turn off read-write when you keep read-only up and running.
Here is a pointer to the Oracle Database documentation that talks about Services:
https://docs.oracle.com/database/121/ADMIN/create.htm#CIABBCAI
If you're using Spring, you should be able to accomplish this without using two datasources, via Spring's @Transactional with the readOnly property set to true. The reason I suggest this is that you seem to be concerned about transactionality only, and that seems to be catered for by the Spring framework.
I'd suggest something like this for your case:
@Transactional(readOnly = true)
public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) {
        // do something
    }

    // these settings have precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // do something
    }
}
Using this style, you should be able to split read only services from their write counterparts, or even have read and write service methods combined. But both of these do not use 2 datasources.
Code is from the Spring Reference
I am pretty sure that you need to address the problem at the database / connection URL + properties layer.
I would google around for something like read-write replication.
Related to your question about JPA and transactions: you are doomed when you are using multiple datasources, and XA datasources are not really a solution for that either. The only thing they do for you is ensure consistency over multi-datasource operations. XA transactions only span a sort of logical transaction over two transactions (one for each datasource). From the transaction isolation point of view (as long as you're not using READ_UNCOMMITTED), both datasources use their own transaction. This means the read datasource would not see the changes made by the write transaction.
In my project I have to intercept Hibernate L2 cache calls in order to set a lifespan for some selected cached objects. The problem is that Hibernate cache calls never come through the interceptor.
My interceptor (test code):
public class HibernateCacheInterceptor extends BaseCustomInterceptor {
    private static Log log = LogFactory.getLog(HibernateCacheInterceptor.class);

    @Override
    public Object visitPutKeyValueCommand(InvocationContext ctx, PutKeyValueCommand command) throws Throwable {
        log.info(this.getClass().getName() + " intercept.");
        if (command.getValue() instanceof Car) {
            return null;
        } else {
            return invokeNextInterceptor(ctx, command);
        }
    }
}
My cache definition (infinispan.xml):
<namedCache name="mycache">
<customInterceptors>
<interceptor position="FIRST" class="test.HibernateCacheInterceptor">
</interceptor>
</customInterceptors>
</namedCache>
org.infinispan.Cache.put(key, value) calls come to the interceptor, but Hibernate cache calls do not. Does Hibernate use a different API that skips interceptors? How can I intercept Hibernate cache calls?
No, Hibernate cannot skip interceptors - all of the logic of core Infinispan is triggered from interceptors.
My guess is that Hibernate does not use the cache (when you open JConsole, can you see entries there in Infinispan?), uses another cache (without the interceptor) or buffers the entries before inserting to the cache.
You can try to set trace logging on both Hibernate and Infinispan.
There are easier ways to achieve this. As indicated in the Infinispan 2LC documentation (see the advanced configuration section), each entity can be assigned a specific cache where you can tweak the settings declaratively. The easiest thing is to check which Infinispan configuration is used in your application, copy the default cache used for entities, give it a different name and tweak it. Then you need to define something like:
<property name="hibernate.cache.infinispan.com.acme.Person.cfg"
value="person-entity"/>
Where person-entity is the name of the cache for that particular entity.
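Assuming the older XML schema already used in this question, such a dedicated region could look roughly like this (the lifespan/maxIdle values are only examples):
<namedCache name="person-entity">
   <!-- entries expire 30 minutes after write, or after 10 minutes without access -->
   <expiration lifespan="1800000" maxIdle="600000"/>
</namedCache>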
NOTE: Remember that if you're running on Wildfly or EAP, the property name requires indicating the deployment archive and persistence unit name. This is explained in the advanced configuration section.