The only way we have found to test cache replication between two server instances is to run a stress test against an nginx address that load-balances requests between the two instances. After the test finishes, we check the number of elements in each cache using the JavaMelody browser interface on both instances, and the caches on the two instances differ. Can somebody tell me whether this is normal Ehcache behavior? If not, what can we do to fix the problem?
P.S.: With single requests, cache replication works fine.
Instance 1 configuration:
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
         updateCheck="false" monitoring="autodetect"
         dynamicConfig="true">
    <cacheManagerPeerListenerFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
        properties="hostName=host1, port=7571, socketTimeoutMillis=2000"/>
    <cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=manual,
                    rmiUrls=//host1:7572/ClientLink_ID|//host1:7572/ClientLink_MainID|//host1:7572/ClientLink_SubID|..."/>
</ehcache>
Instance 2 configuration:
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
         updateCheck="false" monitoring="autodetect"
         dynamicConfig="true">
    <cacheManagerPeerListenerFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
        properties="hostName=host1, port=7572, socketTimeoutMillis=2000"/>
    <cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=manual,
                    rmiUrls=//host1:7571/ClientLink_ID|//host1:7571/ClientLink_MainID|//host1:7571/ClientLink_SubID|..."/>
</ehcache>
Example cache configuration:
<cache name="ClientLink_ID"
eternal="false"
maxEntriesLocalHeap="1000"
timeToIdleSeconds="600"
timeToLiveSeconds="900"
statistics="true">
<persistence strategy="none"/>
<cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
</cache>
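Worth noting: with no properties specified, RMICacheReplicatorFactory defaults to asynchronous replication (batched, by default roughly once per second), so under heavy load the two instances can legitimately report different element counts for a while; independent TTL/TTI expiry and per-node eviction at maxEntriesLocalHeap add to the drift. To rule out asynchronous lag during testing, the replicator can be configured explicitly; a sketch with illustrative values, not a recommendation:
<cacheEventListenerFactory
    class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
    properties="replicateAsynchronously=false, replicatePuts=true,
                replicateUpdates=true, replicateUpdatesViaCopy=true,
                replicateRemovals=true"/>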
Related
Below is the Ehcache configuration we are using. We use JGroups for cache replication.
ehcache.xml
<defaultCache
maxElementsInMemory="10000"
eternal="false"
timeToIdleSeconds="1200"
timeToLiveSeconds="86400"
overflowToDisk="true"
diskSpoolBufferSizeMB="30"
maxElementsOnDisk="10000000"
diskPersistent="false"
diskExpiryThreadIntervalSeconds="120"
memoryStoreEvictionPolicy="LRU">
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true,replicatePuts=true,replicateUpdates=true,replicateUpdatesViaCopy=true,replicateRemovals=true" />
</defaultCache>
jgroups_tcp_config.xml
<?xml version="1.0" encoding="UTF-8"?>
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">
<!--Configure node ip inside bind_addr-->
<TCP bind_addr="host1" bind_port="7831" max_bundle_size="9999999"/>
<!--Configure nodes inside 'initial_hosts' property-->
<TCPPING timeout="3000" initial_hosts="host1[7831],host2[7831]" port_range="1" num_initial_members="3"/>
<FRAG2 frag_size="9999999"/>
<MERGE3 max_interval="30000" min_interval="10000"/>
<FD timeout="3000" max_tries="10"/>
<VERIFY_SUSPECT timeout="1500"/>
<pbcast.NAKACK use_mcast_xmit="false" exponential_backoff="500" discard_delivered_msgs="false"/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
<pbcast.GMS print_local_addr="true" join_timeout="5000" view_bundling="true"/>
</config>
Initially, the logs show that the nodes form a cluster, and we can see messages being replicated across the nodes. But after some time, messages are no longer replicated, resulting in erroneous behavior. Is there any problem with the JGroups configuration we are using?
We also tried NAKACK2, simply replacing NAKACK with NAKACK2 in the configuration above, but then messages are not replicated across the nodes at all. We are not sure where we are going wrong.
We were facing the above issue in the AWS cloud. Ehcache over JGroups TCP will not work there as configured because the cloud network doesn't support multicast, so node discovery never happens. To address this, we use jgroups_s3_config.xml instead of jgroups_tcp_config.xml in AWS. The following jgroups_s3_config.xml configuration resolved the issue:
<?xml version="1.0" encoding="UTF-8"?>
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.1.xsd">
<TCP loopback="true" bind_port="7800"/>
<S3_PING location="s3 bucket name should be in the same region in which app servers are running"
access_key="s3 bucket access key from aws credential file"
secret_access_key="s3 bucket secret access key from aws credential file" timeout="10000" num_initial_members="2"/>
<FRAG2/>
<MERGE2 min_interval="10000" max_interval="30000"/>
<FD_ALL timeout="12000" interval="3000" timeout_check_interval="4000"/>
<VERIFY_SUSPECT timeout="1500"/>
<pbcast.NAKACK2 use_mcast_xmit="false" discard_delivered_msgs="false"/>
<UNICAST2 timeout="300,600,1200"/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="40K"/>
<pbcast.GMS print_local_addr="true" join_timeout="5000" view_bundling="true"/>
</config>
Additionally, we have to set JAVA_OPTS:
export JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
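For reference, the S3 discovery file is plugged into Ehcache the same way as the TCP one, through the JGroups peer provider factory; a sketch, assuming jgroups_s3_config.xml is on the classpath:
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
    properties="file=jgroups_s3_config.xml"/>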
I am trying to integrate Ehcache with my Java Spring MVC web application. I have followed the instructions from the following article:
https://dzone.com/articles/implementing-ehcache-using.
I have added the following dependency to my pom.xml file:
<dependency>
<groupId>net.sf.ehcache</groupId>
<artifactId>ehcache</artifactId>
<version>2.9.0</version>
</dependency>
My ehcache.xml is as follows:
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="ehcache.xsd"
updateCheck="true"
monitoring="autodetect"
dynamicConfig="true">
<diskStore path="java.io.tmpdir" />
<cache name="swcmRestData"
maxEntriesLocalHeap="10000"
maxEntriesLocalDisk="1000"
eternal="false"
diskSpoolBufferSizeMB="20"
timeToIdleSeconds="300" timeToLiveSeconds="600"
memoryStoreEvictionPolicy="LFU"
transactionalMode="off">
<persistence strategy="localTempSwap" />
</cache>
</ehcache>
I have the following entries in my root-context.xml:
<!-- EhCache Configuration -->
<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" p:configLocation="classpath:ehcache.xml" p:shared="true"/>
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager" p:cacheManager-ref="ehcache"/>
And I have a method for which I want to enable Ehcache:
@Cacheable(value="swcmRestData", key="#url")
public <T> T getEntity(String url, java.lang.Class<T> gt) throws RestException
{
T t = restClientService.getEntity(url, gt);
return t;
}
I want the data to be retrieved from Ehcache when the same url is passed to this method. I do not get any errors while running the code, but it looks like the caching is not working. Is there anything I am missing here?
Two things that could be the cause of the problem:
You are missing the Spring configuration that links the defined beans to the caching annotations; see the sketch after this list. Effectively that's point 2 in the article you link, which you do not mention here.
As suggested in the comments, the method you are caching is called from within the same class. That's a limitation of the Spring AOP implementation when using proxies. If you configure Spring to do bytecode weaving, it will work.
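For the first point, a minimal sketch of the missing wiring, assuming XML configuration alongside the beans in root-context.xml above (the cache namespace must be declared in the root element):
<!-- Enables @Cacheable and related annotations via proxies -->
<cache:annotation-driven cache-manager="cacheManager"/>
<!-- For the self-invocation case in the second point, mode="aspectj" plus weaving is one option -->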
If none of the above are the source of error, please provide more information on your setup.
I have a Tomcat server with Ehcache, and a second Tomcat with a servlet. When the second servlet is initialized, it should take the cache contents from #1 and put all the data into its own cache.
Is there a built-in mechanism such as "on start replication" in Ehcache? Or how can I get the serialized Ehcache data and then de-serialize it into Ehcache? I understand that I can read all keys one by one, then all their values, and then serialize, but maybe there's a better way?
Thanks
There is no out-of-the-box support for what you are asking.
Alternatives are:
Ehcache clustering with Terracotta
Ehcache replication
Note that a major difference between the two is that the second option offers NO guarantees with regard to data consistency between the two Ehcache instances.
Disclaimer: I work for Terracotta on Ehcache
Here's my solution using the built-in RMI replication:
Master config:
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="ehcache.xsd"
updateCheck="true"
monitoring="autodetect"
dynamicConfig="true">
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="hostName=localhost, port=40001,
socketTimeoutMillis=2000"/>
<cache name="masterCache"
maxEntriesLocalHeap="10000"
eternal="false"
timeToIdleSeconds="300"
timeToLiveSeconds="600"
memoryStoreEvictionPolicy="LFU"
transactionalMode="off">
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=false, replicateUpdates=false,
replicateUpdatesViaCopy=false, replicateRemovals=false "/>
<persistence strategy="none"/>
</cache>
</ehcache>
Here I turn on the RMI listener for slaves and enable RMI for the cache, but without any replication, because I only need the data on the slave when it starts.
Slave config:
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="ehcache.xsd"
updateCheck="true"
monitoring="autodetect"
dynamicConfig="true">
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=manual,
rmiUrls=//localhost:40001/masterCache"/>
<cache name="masterCache"
maxEntriesLocalHeap="10000"
eternal="false"
timeToIdleSeconds="300"
timeToLiveSeconds="600"
memoryStoreEvictionPolicy="LFU"
transactionalMode="off">
<bootstrapCacheLoaderFactory class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
properties="bootstrapAsynchronously=false"/>
<persistence strategy="none"/>
</cache>
</ehcache>
Here I make the RMI connection to the master node and turn on the cache's bootstrap loader.
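To sanity-check the bootstrap on the slave at startup, something like the following can be used (a sketch; it assumes the slave's ehcache.xml is on the classpath and the cache name matches):
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;

public class BootstrapCheck {
    public static void main(String[] args) {
        // Loads ehcache.xml from the classpath; with bootstrapAsynchronously=false
        // the cache is filled from the master before initialization returns.
        CacheManager manager = CacheManager.newInstance();
        Cache cache = manager.getCache("masterCache");
        System.out.println("Bootstrapped entries: " + cache.getKeys().size());
        manager.shutdown();
    }
}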
I have four nodes in a cluster, and I need to synchronize them. When node BRJGSD309173 tried to send messages through JGroups to BRJGSD333007, the server BRJGSD333007 reported the message below:
11:34:07,759 WARN [org.jgroups.protocols.pbcast.NAKACK] (Incoming-2,maestroCacheManager,BRJGSD333007-24075) BRJGSD333007-24075: dropped message 4787 from BRJGSD309173-7667 (sender not in table [BRJGSD333007-24075]), view=[BRJGSD333007-24075|0] [BRJGSD333007-24075]
The following is the jgroups_tcp.xml configuration:
<?xml version='1.0'?>
<config>
<TCP bind_port="7800"
max_bundle_size="5M" />
<TCPPING timeout="3000"
initial_hosts="brjgsm10.weg.net[7800],brjgsm11.weg.net[7800],brjgsd309173.weg.net[7800],brjgsd333007.weg.net[7800]"
port_range="10"
num_initial_members="5"/>
<VERIFY_SUSPECT timeout="1500" />
<pbcast.NAKACK use_mcast_xmit="false"
retransmit_timeout="300,600,1200,2400,4800"
discard_delivered_msgs="true"/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
<pbcast.GMS print_local_addr="true" join_timeout="5000" view_bundling="true"/>
</config>
And a fragment of ehcache.xml:
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
updateCheck="false" xsi:noNamespaceSchemaLocation="ehcache.xsd" name="maestroCacheManager">
...
<cache
name="objectServiceExecute"
maxEntriesLocalHeap="100000"
eternal="false" >
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=true,
replicateUpdates=true, replicateUpdatesViaCopy=true, replicateRemovals=true" />
</cache>
<diskStore
path="java.io.tmpdir/ehcache" />
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="file=jgroups_tcp.xml"
propertySeparator=";"
/>
<cache
name="org.hibernate.cache.internal.StandardQueryCache"
maxElementsInMemory="10000000"
eternal="true"
memoryStoreEvictionPolicy="LRU" />
<defaultCache
maxElementsInMemory="10000000"
eternal="true"
memoryStoreEvictionPolicy="LRU" >
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=true,
replicateUpdates=true, replicateUpdatesViaCopy=true, replicateRemovals=true" />
</defaultCache>
</ehcache>
This is a very strange JGroups configuration! You're missing failure detection protocols, UNICAST3, MERGE3, etc.!
The error above means that you received a message from a member not in the cluster, so it was consequently dropped. Why the member was not in the cluster is unclear, perhaps it didn't join correctly. Since you don't have any failure detection protocols, it can't have been suspected and expelled.
I suggest copying the tcp.xml shipped with JGroups and replacing its TCPPING section with your TCPPING config. Also make sure you set bind_addr in TCP, so that JGroups binds to the correct interface.
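For illustration, a minimal stack along the lines of the stock tcp.xml, with your discovery settings dropped in (bind_addr and all parameter values here are placeholders to adapt, not recommendations):
<config>
    <TCP bind_addr="10.0.0.1" bind_port="7800"/>
    <TCPPING timeout="3000"
             initial_hosts="brjgsd309173.weg.net[7800],brjgsd333007.weg.net[7800]"
             port_range="10" num_initial_members="4"/>
    <MERGE3 min_interval="10000" max_interval="30000"/>
    <FD_SOCK/>
    <FD_ALL timeout="12000" interval="3000"/>
    <VERIFY_SUSPECT timeout="1500"/>
    <pbcast.NAKACK2 use_mcast_xmit="false"/>
    <UNICAST3/>
    <pbcast.STABLE desired_avg_gossip="50000" max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="5000"/>
    <FRAG2 frag_size="60K"/>
</config>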
Hope this helps,
Cheers
I have configured ehcache for hibernate 2nd level cache to use a Terracotta server. Everything is working fine, except the UpdateTimestampsCache for the query cache is just not showing up in the Dev Console. We are using Hibernate 3.6.10 and ehcache 2.6.0.
I am seeing all the entity, collection, and query caches, as well as the StandardQueryCache, but not org.hibernate.cache.UpdateTimestampsCache. I know the timestamp cache exists and is being used because I can see its stats using the metrics library instrumented in.
Any ideas?
Thanks!
Here's my ehcache.xml config:
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
updateCheck="false"
name="Hibernate-CacheManager"
monitoring="autodetect"
dynamicConfig="true">
<terracottaConfig url="localhost:9510" />
<defaultCache
eternal="false"
overflowToDisk="false"
maxElementsInMemory="50000"
timeToIdleSeconds="7200"
timeToLiveSeconds="0">
<cacheDecoratorFactory
class="com.yammer.metrics.ehcache.InstrumentedEhcacheFactory" />
<terracotta/>
</defaultCache>
<cache
name="org.hibernate.cache.UpdateTimestampsCache"
eternal="false"
overflowToDisk="false"
maxElementsInMemory="500"
timeToIdleSeconds="7200"
timeToLiveSeconds="0">
<cacheDecoratorFactory
class="com.yammer.metrics.ehcache.InstrumentedEhcacheFactory" />
<terracotta/>
</cache>
<cache
name="org.hibernate.cache.StandardQueryCache"
eternal="false"
overflowToDisk="false"
maxElementsInMemory="50000"
timeToIdleSeconds="7200"
timeToLiveSeconds="0">
<cacheDecoratorFactory
class="com.yammer.metrics.ehcache.InstrumentedEhcacheFactory" />
<terracotta/>
</cache>
</ehcache>
Recopying answer from: http://forums.terracotta.org/forums/posts/list/0/7554.page#36815
Hibernate does not maintain statistics for the UpdateTimestampsCache up to Hibernate 4.0.0. This explains why the cache does not show up in the Terracotta Dev Console.
This is filed as bug https://hibernate.onjira.com/browse/HHH-5326.