I use Ehcache (not replicated or distributed) in my application. As far as I know it can only be accessed from within the same JVM, but all the applications in that JVM (e.g. deployed in an app server) can get values from the cache. Am I right?
What I want is that only my application can access the cache and its values. Is that possible somehow? I checked the XML config file but haven't found any setting to control this. Or should I set something when I get the cache from the CacheManager?
This is how I get the cache in code:
private static final String LOCAL_CACHE_NAME = "LOCAL_PROTNEUT_STORE";

private Cache getCache() {
    // the name of the ehcache should be able to be configured in the general config XML
    URL url = getClass().getResource("/protneut-local-ehcache.xml");
    CacheManager manager = CacheManager.create(url);
    Cache cache = manager.getCache(LOCAL_CACHE_NAME);
    return cache;
}
The config file:
<ehcache>
<cache name="LOCAL_PROTNEUT_STORE" maxElementsInMemory="500" eternal="true" memoryStoreEvictionPolicy="LRU" />
</ehcache>
Is it possible to control the access at all?
Thanks for the help!
Regards,
V.
In general, applications don't have access to each other's objects because each one is loaded with a separate classloader (you can read more about it here), so you shouldn't need to worry about it.
You would actually have to make an extra effort to make a plain CacheManager available to all applications (for example by exposing it via JNDI or putting Ehcache in a shared lib).
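The classloader isolation described above can be demonstrated with plain Java. This is an illustrative sketch, not Ehcache API: the same class, defined by two different class loaders, yields two distinct Class objects with independent static state, which is why one web app cannot see another app's in-heap cache.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Demo: two class loaders each get their own copy of a class's static state.
public class ClassLoaderIsolationDemo {

    // Stand-in for a cache holder; each "application" (class loader) gets its own copy.
    public static class Holder {
        public static String value = "initial";
    }

    // Reads the compiled bytes of a class from the classpath.
    static byte[] classBytes(Class<?> c) throws IOException {
        String resource = c.getName().replace('.', '/') + ".class";
        try (InputStream in = c.getClassLoader().getResourceAsStream(resource);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            if (in == null) throw new IOException("class bytes not found: " + resource);
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) > 0; ) out.write(buf, 0, n);
            return out.toByteArray();
        }
    }

    // A loader that defines exactly one class itself and delegates the rest to bootstrap.
    static class IsolatedLoader extends ClassLoader {
        private final String className;
        private final byte[] bytes;
        IsolatedLoader(String className, byte[] bytes) {
            super(null); // no parent: simulates a separate application classpath
            this.className = className;
            this.bytes = bytes;
        }
        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            if (name.equals(className)) return defineClass(name, bytes, 0, bytes.length);
            throw new ClassNotFoundException(name);
        }
    }

    public static boolean staticsAreIsolated() throws Exception {
        String name = Holder.class.getName();
        byte[] bytes = classBytes(Holder.class);
        Class<?> appA = new IsolatedLoader(name, bytes).loadClass(name);
        Class<?> appB = new IsolatedLoader(name, bytes).loadClass(name);
        appA.getField("value").set(null, "changed-by-app-A");
        // "application B" still sees its own untouched copy
        return appA != appB && "initial".equals(appB.getField("value").get(null));
    }
}
```

The same mechanism applies to a CacheManager singleton: it is a singleton per classloader, not per JVM, so as long as the Ehcache jar sits inside your application (not in a shared app-server lib), other applications cannot reach your cache.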
We are trying to use Ehcache as a distributed cache in our application.
Ehcache instances are embedded in our servers and we use a Terracotta cluster.
All our server (and Ehcache) instances successfully connect to this Terracotta cluster.
We can successfully insert, update and get entries in each of our caches.
But we cannot iterate over any cache.
Maybe we have configured our caches the wrong way, but it seems that the iterator method is simply not implemented yet (in org.ehcache.clustered.client.internal.store.ClusteredStore):
@Override
public Iterator<Cache.Entry<K, ValueHolder<V>>> iterator() {
    // TODO: Make appropriate ServerStoreProxy call
    throw new UnsupportedOperationException("Implement me");
}
Our cache configuration looks like the following:
<service>
    <tc:cluster>
        <tc:connection url="terracotta://10.23.69.20:9510/clustered"/>
        <tc:server-side-config auto-create="true">
            <tc:default-resource from="default-resource"/>
        </tc:server-side-config>
    </tc:cluster>
</service>

<cache-template name="CacheTemplate">
    <resources>
        <tc:clustered-dedicated unit="MB">16</tc:clustered-dedicated>
    </resources>
    <tc:clustered-store consistency="strong"/>
</cache-template>

<cache alias="CacheDaemon" uses-template="CacheTemplate">
    <key-type>java.lang.String</key-type>
    <value-type>com.server.cache.DaemonInstance</value-type>
</cache>

<cache alias="CacheProperty" uses-template="CacheTemplate">
    <key-type>java.lang.String</key-type>
    <value-type>java.lang.String</value-type>
</cache>
I haven't found any other way to do this, not even a way to get the list of keys.
So did we make a mistake in the cache configuration?
Or is Ehcache's distributed mode simply incompatible with this method (in which case we won't use Ehcache)?
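Until iterator() is implemented for the clustered store, one possible workaround is to track the key set on the application side. This is an illustrative sketch, not Ehcache API; the ConcurrentMap here stands in for the real clustered cache:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Wrapper that records every key written, so callers can still enumerate
// entries even when the underlying store cannot iterate.
public class KeyTrackingCache<K, V> {
    private final ConcurrentMap<K, V> backing = new ConcurrentHashMap<>(); // stands in for the clustered cache
    private final Set<K> keys = ConcurrentHashMap.newKeySet();

    public void put(K key, V value) {
        backing.put(key, value);
        keys.add(key);
    }

    public V get(K key) {
        return backing.get(key);
    }

    public void remove(K key) {
        backing.remove(key);
        keys.remove(key);
    }

    /** Snapshot of the known keys; iterate this and call get() per key. */
    public Set<K> knownKeys() {
        return Collections.unmodifiableSet(new HashSet<>(keys));
    }
}
```

Note the caveat: the tracked set lives in one client, so keys written by other cluster nodes will not appear in it. It only works if a single writer owns the cache, or if the key set itself is kept in a clustered structure.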
I have a class that performs some read operations against a service XXX. These read operations eventually perform DB reads, and I want to optimize those calls by caching the result of each method in the class under a custom key per method.
class A {
    public Output1 func1(Arguments1 ...) {
        ...
    }

    public Output2 func2(Arguments2 ...) {
        ...
    }

    public Output3 func3(Arguments3 ...) {
        ...
    }

    public Output4 func4(Arguments4 ...) {
        ...
    }
}
I am thinking of using Spring caching (the @Cacheable annotation) to cache the results of each of these methods.
However, I want cache invalidation to happen automatically by some mechanism (TTL, etc.). Is that possible in Spring caching? I understand there is a @CacheEvict annotation, but I want the eviction to happen automatically.
Any help would be appreciated.
According to the Spring documentation (section 36.8):

How can I set the TTL/TTI/Eviction policy/XXX feature?

Directly through your cache provider. The cache abstraction is... well, an abstraction, not a cache implementation. The solution you are using might support various data policies and different topologies which other solutions do not (take for example the JDK ConcurrentHashMap) - exposing that in the cache abstraction would be useless simply because there would be no backing support. Such functionality should be controlled directly through the backing cache, when configuring it or through its native API.
This means that Spring does not directly expose an API to set the time to live, but instead relies on the caching provider implementation for it. So you need to either set the time to live through the exposed CacheManager, if the caching provider allows dynamic setup of these attributes, or configure the cache region that Spring uses with the @Cacheable annotation yourself.
To find the name of the cache region that @Cacheable is using, you can browse the available cache regions of your application in a JMX console.
If you are using EHCache, for example, once you know the cache region you can provide an XML configuration like this:
<cache name="myCache"
maxEntriesLocalDisk="10000" eternal="false" timeToIdleSeconds="3600"
timeToLiveSeconds="0" memoryStoreEvictionPolicy="LFU">
</cache>
Again, I repeat: all of this configuration is caching-provider specific, and Spring does not expose an interface for dealing with it.
REMARK: The default cache provider that Spring configures when none is defined is backed by ConcurrentHashMap, which has no support for time to live. To get this functionality you have to switch to a different cache provider (for example EHCache).
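For completeness, here is a sketch of the wiring between Spring and EHCache, assuming spring-context-support is on the classpath and an ehcache.xml containing the myCache region above sits at the classpath root; the bean ids are illustrative:

```xml
<cache:annotation-driven cache-manager="cacheManager"/>

<!-- Spring CacheManager adapter over the native EHCache manager -->
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager">
    <property name="cacheManager" ref="ehcache"/>
</bean>

<!-- Native EHCache manager, configured by your ehcache.xml (the TTL lives there) -->
<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
    <property name="configLocation" value="classpath:ehcache.xml"/>
</bean>
```

With this in place, @Cacheable("myCache") methods get the timeToLiveSeconds/timeToIdleSeconds behavior declared in ehcache.xml without any eviction code in the application.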
My ehcache config starts like this:
<ehcache maxBytesLocalHeap="200M" updateCheck="false">
After experiencing an error like "maxEntriesLocalHeap is not compatible with maxBytesLocalHeap set on cache manager", I looked into the source of spring-context-support:
@SuppressWarnings("deprecation")
public EhCacheFactoryBean() {
    setMaxEntriesLocalHeap(10000);
    setMaxElementsOnDisk(10000000);
    setTimeToLiveSeconds(120);
    setTimeToIdleSeconds(120);
}
The call to setMaxElementsOnDisk is commented like this:
void net.sf.ehcache.config.CacheConfiguration.setMaxElementsOnDisk(int maxElementsOnDisk)
Deprecated. use setMaxEntriesLocalDisk(long) for unclustered caches and setMaxEntriesInCache(long) for clustered caches.
Sets the maximum number elements on Disk. 0 means unlimited.
This property can be modified dynamically while the cache is operating.
Parameters:
maxElementsOnDisk the maximum number of Elements to allow on the disk. 0 means unlimited.
Does this mean that with Spring 4.1.7 and its Ehcache 2.9.1 dependency, a clustered cache is impossible when using Spring's EhCacheFactoryBean?
Best regards,
Carsten
Sorry, but I fail to see the link between the code/documentation snippets and your conclusion. Can you elaborate?
One thing is clear: the default cache creation done by Spring conflicts with what you want to do at the CacheManager level. But if you do not use default caches, you should have no problem.
A CacheManager and its caches must use the same sizing model, either maxBytesLocalHeap or maxEntriesLocalHeap, and the MaxElements* setters are deprecated.
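For example, here is a sketch of an ehcache.xml that stays within one sizing model (byte-based at the manager level) and sidesteps EhCacheFactoryBean's entry-based defaults by declaring the cache itself; the cache name and timings are illustrative:

```xml
<ehcache maxBytesLocalHeap="200M" updateCheck="false">
    <!-- no maxEntriesLocalHeap here: this cache shares the manager's byte pool -->
    <cache name="myCache"
           timeToLiveSeconds="120"
           timeToIdleSeconds="120"/>
</ehcache>
```

Caches declared this way can then be looked up through EhCacheCacheManager, so EhCacheFactoryBean never applies its conflicting maxEntriesLocalHeap default.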
In my project I have to intercept Hibernate L2 cache calls in order to set a lifespan for some selected cached objects. The problem is that the Hibernate cache calls never come through the interceptor.
My interceptor (test code):
public class HibernateCacheInterceptor extends BaseCustomInterceptor {
    private static Log log = LogFactory.getLog(HibernateCacheInterceptor.class);

    @Override
    public Object visitPutKeyValueCommand(InvocationContext ctx, PutKeyValueCommand command) throws Throwable {
        log.info(this.getClass().getName() + " intercept.");
        if (command.getValue() instanceof Car) {
            return null;
        } else {
            return invokeNextInterceptor(ctx, command);
        }
    }
}
My cache definition (infinispan.xml):
<namedCache name="mycache">
    <customInterceptors>
        <interceptor position="FIRST" class="test.HibernateCacheInterceptor"/>
    </customInterceptors>
</namedCache>
org.infinispan.Cache.put(key, value) calls reach the interceptor, but the Hibernate cache calls do not. Does Hibernate use a different API that skips interceptors? How can I intercept Hibernate cache calls?
No, Hibernate cannot skip interceptors - all of the core Infinispan logic is triggered from interceptors.
My guess is that Hibernate does not use that cache at all (when you open JConsole, do you see entries in it?), uses another cache (without the interceptor), or buffers the entries before inserting them into the cache.
You can try to set trace logging on both Hibernate and Infinispan.
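A log4j fragment for that, assuming log4j is your logging backend (the category names are the usual Infinispan and Hibernate cache ones):

```xml
<logger name="org.infinispan">
    <level value="TRACE"/>
</logger>
<logger name="org.hibernate.cache">
    <level value="TRACE"/>
</logger>
```

The trace output will show which cache names Hibernate actually writes to, which tells you where the interceptor needs to be attached.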
There are easier ways to achieve this. As indicated in the Infinispan 2LC documentation (see the advanced configuration section), each entity can be assigned a specific cache where you can tweak the settings declaratively. The easiest approach is to check which Infinispan configuration file is used in your application, copy the default cache used for entities, give it a different name and tweak it. Then you need to define something like:
<property name="hibernate.cache.infinispan.com.acme.Person.cfg"
value="person-entity"/>
Where person-entity is the name of the cache for that particular entity.
NOTE: Remember that if you're running on Wildfly or EAP, the property name requires indicating the deployment archive and persistence unit name. This is explained in the advanced configuration section.
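Put together, and matching the namedCache style used in the question, the sketch could look like this (the cache name and timings are illustrative):

```xml
<!-- persistence.xml / Hibernate properties: point the entity at its own region -->
<property name="hibernate.cache.infinispan.com.acme.Person.cfg"
          value="person-entity"/>

<!-- infinispan.xml: a copy of the default entity cache with a lifespan added -->
<namedCache name="person-entity">
    <expiration lifespan="60000" maxIdle="30000"/>
</namedCache>
```

This achieves the per-entity lifespan declaratively, with no interceptor needed at all.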
I'm working on the development of a library that depends heavily on XML-based configuration files. These files describe a process workflow that contains variables and references to Java objects with different scopes. An extremely simplified pseudo-configuration would be:
<config>
    <valueProducer name="thingThatProducesAValue" class="org.com.Blah" method="foo" args="arg1, arg2" scope="application" />
    <var name="v" scope="process" value="${thingThatProducesAValue}" />
    <process step="somethingImportant">
        <write value="${v}" to="a_file_somewhere" />
        <write value="${v}" to="a_queue" />
    </process>
</config>
Basically this configuration defines that:
1 - an instance of the class "org.com.Blah" will be created and it will be reused while the application is running (pretty much as if it were a singleton)
2 - a variable named "v", when used somewhere, will be populated with the result of the value producer named "thingThatProducesAValue"
2.1 - the value of "v" will be evaluated once during the execution of the process "somethingImportant", and will be reused subsequently, until the end of the process.
I am looking for a Java-based IoC container that can be configured programmatically and that offers some support for custom management of scoped entities. I had a look at Spring, but it seems very difficult to do anything without using its own configuration file format or annotations. My requirement is to create an engine capable of reading this XML in its peculiar format and performing the value replacements and class/method invocations, but it would be great to implement only the essential parts and build on top of a library that is already available.
Do you have any suggestions?
You can implement a Spring custom namespace handler: extend NamespaceHandlerSupport, register a BeanDefinitionParser per XML element, and declare the handler in META-INF/spring.handlers (plus your XSD in META-INF/spring.schemas). Spring will then parse your elements into bean definitions, and you can map your scope attribute onto Spring's bean scopes.
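Independently of which container you pick, the scoped lookup-and-substitution core of such an engine is small enough to sketch in plain Java. All names here are hypothetical, and real scope handling (application vs. process) would hang off the workflow engine's lifecycle:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of scope-aware ${...} resolution, independent of any IoC
// container. Registered producers play the role of application-scoped beans;
// the processScope map caches each producer's result for one process run.
public class ScopedResolver {
    private final Map<String, Supplier<Object>> producers = new HashMap<>();
    private final Map<String, Object> processScope = new HashMap<>();

    public void registerProducer(String name, Supplier<Object> producer) {
        producers.put(name, producer);
    }

    /** Evaluate a producer at most once per process run ("process" scope). */
    public Object processVar(String producerName) {
        return processScope.computeIfAbsent(producerName,
                n -> producers.get(n).get());
    }

    /** Called when a <process> step finishes, discarding process-scoped values. */
    public void endProcess() {
        processScope.clear();
    }

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)}");

    /** Replace every ${name} in the template with its process-scoped value. */
    public String resolve(String template) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb,
                    Matcher.quoteReplacement(String.valueOf(processVar(m.group(1)))));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```

This mirrors the semantics of the pseudo-configuration above: the producer is registered once for the application, and the variable it feeds is evaluated once per process and reused until endProcess() is called.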