I am trying to get Hazelcast 3.0.2 working with the Spring cache abstraction; however, it seems the TTL functionality is not working.
I have configured my Spring context in the following way:
<cache:annotation-driven cache-manager="cacheManager" mode="proxy" proxy-target-class="true" />
<bean id="cacheManager" class="com.hazelcast.spring.cache.HazelcastCacheManager">
<constructor-arg ref="hzInstance" />
</bean>
<hz:hazelcast id="hzInstance">
<hz:config>
<hz:group name="instance" password="password" />
<hz:properties>
<hz:property name="hazelcast.merge.first.run.delay.seconds">5</hz:property>
<hz:property name="hazelcast.merge.next.run.delay.seconds">5</hz:property>
<hz:property name="hazelcast.logging.type">slf4j</hz:property>
<hz:property name="hazelcast.jmx">true</hz:property>
<hz:property name="hazelcast.jmx.detailed">true</hz:property>
</hz:properties>
<hz:network port="8995" port-auto-increment="true">
<hz:join>
<hz:tcp-ip enabled="true">
<hz:interface>10.0.5.5</hz:interface>
<hz:interface>10.0.5.7</hz:interface>
</hz:tcp-ip>
</hz:join>
</hz:network>
<hz:map name="somecache"
backup-count="1"
max-size="0"
eviction-percentage="30"
read-backup-data="false"
time-to-live-seconds="120"
eviction-policy="NONE"
merge-policy="hz.ADD_NEW_ENTRY" />
</hz:config>
</hz:hazelcast>
I then made a simple test class with the following method:
#Cacheable("somecache")
public boolean insertDataIntoCache(String data) {
logger.info("Inserting data = '{}' into cache",data);
return true;
}
I also wrote a method to print some information about every map Hazelcast finds, along with the entries inside. Inserting the data and caching seem to work fine; however, the entries never expire, even though I set a TTL of 120 seconds.
When I print the data from the cache, it shows me that there is one map called "somecache" and that the map has a TTL of 120 seconds, but when I loop through the entries, all the ones I inserted have an expirationTime of 0. I am not sure what the expected Hazelcast behaviour is (maybe a map TTL takes precedence over an entry TTL), but in any case the entries just do not expire.
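For reference, the inspection method looks roughly like this (a simplified sketch; hazelcastInstance is the injected hzInstance bean):

import com.hazelcast.core.DistributedObject;
import com.hazelcast.core.EntryView;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public void dumpCaches(HazelcastInstance hazelcastInstance) {
    for (DistributedObject object : hazelcastInstance.getDistributedObjects()) {
        if (object instanceof IMap) {
            @SuppressWarnings("unchecked")
            IMap<Object, Object> map = (IMap<Object, Object>) object;
            // the TTL configured for this map
            int ttl = hazelcastInstance.getConfig().getMapConfig(map.getName()).getTimeToLiveSeconds();
            logger.info("map '{}' has time-to-live-seconds = {}", map.getName(), ttl);
            for (Object key : map.keySet()) {
                // per-entry expiration time; this is where I see 0 for every entry
                EntryView<Object, Object> view = map.getEntryView(key);
                logger.info("  key = {}, expirationTime = {}", key, view.getExpirationTime());
            }
        }
    }
}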
Is anybody aware of any issues with 3.0.2 and Spring cache? I should also mention that I have other applications in the same application server running an older version of Hazelcast; however, they have their own separate config, and my test application seems to be keeping to itself and not conflicting with anything.
Any input is appreciated.
EDIT 1:
It seems to work if I downgrade to HZ 2.6.3, so it looks like there is a bug somewhere in Hazelcast 3 regarding TTL.
I just stumbled on the same thing, and it seems it was fixed about a month ago: https://github.com/hazelcast/hazelcast/commit/602ce5835a7cc5e495b8e75aa3d4192db34d8b1a#diff-d20dd943d2216ab106807892ead44871
Basically, the TTL was overridden when you used the Hazelcast Spring integration.
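Until a release containing that fix is available, one possible workaround (a sketch only, not verified against 3.0.2) is to put entries through the native Hazelcast API instead of the Spring annotation, since IMap.put() accepts an explicit per-entry TTL:

import java.util.concurrent.TimeUnit;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

void putWithTtl(HazelcastInstance hazelcastInstance) {
    // "someKey" and "someValue" are placeholders
    IMap<String, Object> cache = hazelcastInstance.getMap("somecache");
    cache.put("someKey", "someValue", 120, TimeUnit.SECONDS); // entry expires 120 seconds after the put
}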
Related
There is a service that connects to an Oracle DB for reading data; it uses Hibernate 3.6 and Spring Data JPA 1.10.x. Heap dumps are being generated frequently, which results in out-of-memory errors on the hosts.
After analyzing a few heap dumps using Eclipse MAT, I found that the majority of the memory is accumulated in one instance of org.hibernate.engine.StatefulPersistenceContext -> org.hibernate.util.IdentityMap -> java.util.LinkedHashMap.
And the leak suspect report says:
The thread java.lang.Thread # 0x84427e10 ... : 29 keeps local
variables with total size 1,582,637,976 (95.04%) bytes.
The memory is accumulated in one instance of "java.util.LinkedHashMap"
loaded by "".
I searched on Stack Overflow, and the advice there is that the SessionFactory should be a singleton and that session.flush() and session.clear() should be invoked before each call to clear the cache. But the SessionFactory is not explicitly initialized or used in the code.
What's causing the memory leak here (it looks like the result of each query is cached and never cleared), and how can it be fixed?
More info about the Spring Data configuration:
TransactionManager is initialized as:
<tx:annotation-driven mode='proxy' proxy-target-class='true' />
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
....
</bean>
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" depends-on="...">
....
</bean>
To interact with the table, an interface is declared extending the Spring Data Repository and JpaSpecificationExecutor interfaces. Both are typed to the domain class they will handle.
The API activity method has the annotation @Transactional(propagation = Propagation.SUPPORTS, readOnly = true).
From what you describe this is what I expect to be going on:
Hibernate (actually JPA in general) keeps a reference to all entities it loads or saves for the lifetime of the session.
In a typical web application setup, this isn't a problem, because a new session starts with each request, gets closed once the request is finished, and doesn't involve that many entities.
But for your application, it looks like the session keeps growing and growing.
I can imagine the following reasons:
Something runs in an open session all the time without ever closing it, maybe a batch job or a scheduled job which runs at regular intervals.
Hibernate is configured in such a way that it reuses the same session without ever closing it.
In order to find the culprit, enable logging for the opening and closing of sessions. Judging from https://hibernate.atlassian.net/browse/HHH-2425, org.hibernate.impl.SessionImpl should be the right log category, and you probably need trace-level logging.
Now test the various requests to your server and see if there are any sessions that get opened but not closed.
The question contains information about the creation of some beans, but the problem doesn't lie there. The problem is in your code, where you use these beans.
Please check your code. Probably you are loading items in a loop, and the loop is wrapped in a transaction.
Hibernate creates huge intermediate objects, and it doesn't clean these up until the transaction completes (commit/rollback).
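If that is the pattern, a common mitigation is to flush and clear the persistence context periodically inside the loop so the session does not keep a reference to every entity it loads. A minimal sketch, assuming an injected JPA EntityManager; Item and processItem() are hypothetical placeholders:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class BatchProcessor {

    @PersistenceContext
    private EntityManager entityManager;

    public void processAll(List<Item> items) {
        int batchSize = 100;
        for (int i = 0; i < items.size(); i++) {
            processItem(items.get(i)); // hypothetical per-item work that loads/saves entities
            if (i > 0 && i % batchSize == 0) {
                entityManager.flush(); // push pending changes to the database
                entityManager.clear(); // detach managed entities so the persistence context stops growing
            }
        }
    }
}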
I am trying to set up a Caffeine cache using Spring XML bean configuration.
I want to have two different caches:
one to store "id"
one to store "name"
I tried the following:
<bean id="cacheManager" class="org.springframework.cache.caffeine.CaffeineCacheManager">
<property name="cacheNames">
<set>
<value>id</value>
<value>name</value>
</set>
</property>
<property name="cacheSpecification" value="${caffeine.spec}"/>
</bean>
The code where I am using it looks like:
#Cacheable(cacheNames = {"id"})
public String getId(final String key){
System.out.println("no id in cache");
//code
}
#Cacheable(cacheNames = {"name"})
public String getName(final String key){
System.out.println("no name in cache");
//code
}
The getId() method works as per the caffeine.spec values, which in my project are maximumSize=500,expireAfterAccess=5s: if I call the method within 5 seconds it does not print the message, and if I call it after 5 seconds have passed it executes the method body again. But getName() does not work; it prints the message every time.
Has anyone ever tried to set up multiple caches with Caffeine like this?
Just a note for people looking for an answer to the above issue: the configuration above actually works; it must have been some other issue that kept it from working for me at the time.
I had a similar issue with cache configuration. It turned out that there was another cache provider on the classpath (Guava) which was chosen by Spring instead of Caffeine.
You have to specify which cache provider is the default by setting the spring.cache.type=caffeine property. However, you have solved this with a configuration bean.
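For reference, such a configuration bean could look roughly like this (a sketch; note that the single cacheSpecification applies to every cache the manager creates, so both caches share the same size and expiry settings):

import java.util.Arrays;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CaffeineCacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager();
        cacheManager.setCacheNames(Arrays.asList("id", "name"));
        cacheManager.setCacheSpecification("maximumSize=500,expireAfterAccess=5s");
        return cacheManager;
    }
}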
Hope that saves some time for other people.
I have been struggling to find a workaround to dynamically read the polling frequency in a Mule flow. Currently I read it from a file using Spring's property placeholder at startup, and the value remains the same even if the file is changed (as we all know).
Since the poll tag needs to be the first component in the flow, there is not much I can do to read "live" file updates.
Is there any way I could set the polling frequency dynamically, read from a file, without requiring a restart?
For Reference:
<spring:beans>
<context:property-placeholder location="file:///C:/Users/test/config.properties" />
</spring:beans>
<flow name="querying-database-pollingFlow1" doc:name="querying-database-pollingFlow1">
<poll doc:name="Poll3e3">
<fixed-frequency-scheduler frequency="${pollinginterval}"/>
<db:select config-ref="MySQL_Configuration1" doc:name="Perform a query in MySQL">
<db:dynamic-query><![CDATA[select empId,empName from employer where status='active';]]></db:dynamic-query>
</db:select>
</poll>
....</flow>
There is absolutely no issue with <fixed-frequency-scheduler frequency="${pollinginterval}"/>, as you can read the polling frequency from a properties file.
The only thing I am concerned about here is: <context:property-placeholder location="file:///C:/Users/test/config.properties" />
Since you are reading from a properties file outside your classpath, it is better to try the following:
<context:property-placeholder
location="file:C:/Users/test/config.properties" />
One more thing: if you are using Spring beans for the properties file, use the following:
<spring:beans>
<spring:bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<spring:property name="locations">
<spring:list>
<spring:value>file:C:/Users/test/config.properties</spring:value>
</spring:list>
</spring:property>
</spring:bean>
</spring:beans>
There is no clean way to do this with FixedFrequencyScheduler. You could potentially go to the registry, fetch your flow by name, get the MessageSource, cast it to FixedFrequencyScheduler, set the new interval, and stop and restart it; however, if you look at the code you'll see there is no setter for the frequency, and reflection is just too dirty.
My first choice would probably be to leverage a Quartz endpoint and then use Quartz's ability to expose its configuration through JMX/RMI.
I would definitely advise against using hot deployment to solve this problem, especially if you need to change the frequency often; there is a risk that this will lead to PermGen running out of memory.
Instead, you could use a flow with a Quartz endpoint that fires at a relatively short interval, then add a filter that only lets messages through at the required frequency.
The filter can either watch a properties file for changes or expose attributes over JMX to allow you to change the frequency. Something like this (a sketch of the filter class follows the two flows below):
<spring:beans>
<spring:bean id="frequencyFilter" class="FrequencyFilter" />
</spring:beans>
<flow name="trigger-polling-every-second" doc:name="trigger-polling-every-second">
<quartz:inbound-endpoint repeatInterval="1000" doc:name="Quartz" responseTimeout="10000" jobName="poll-trigger">
<quartz:event-generator-job>
<quartz:payload>Scheduled Trigger</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
<filter ref="frequencyFilter" />
<vm:outbound-endpoint path="query-database" />
</flow>
<flow name="query-database">
<vm:inbound-endpoint path="query-database" />
<db:select config-ref="databaseConfig" doc:name="Perform a query in database">
<db:dynamic-query><![CDATA[select empId,empName from employer where status='active']]></db:dynamic-query>
</db:select>
<logger level="ERROR" message="#[payload]"/>
</flow>
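The FrequencyFilter referenced above could look roughly like this (a sketch against the Mule 3.x Filter interface; the properties-file watching or JMX exposure is omitted for brevity):

import org.mule.api.MuleMessage;
import org.mule.api.routing.filter.Filter;

public class FrequencyFilter implements Filter {

    private volatile long intervalMillis = 5000; // desired polling interval; update via JMX or a file watcher
    private long lastAccepted = 0;

    public void setIntervalMillis(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    @Override
    public synchronized boolean accept(MuleMessage message) {
        long now = System.currentTimeMillis();
        if (now - lastAccepted >= intervalMillis) {
            lastAccepted = now; // let this trigger through and remember when
            return true;
        }
        return false; // drop triggers that arrive before the interval has elapsed
    }
}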
I am using the HDIV Web Application Security Framework for a Java web application. Every new web page request generates HDIV-internal security information that is cached and used for security checks.
I have the following scenario:
I have one order page that pops up a confirmation page for 2 seconds when something is added to or removed from the cart.
After 50 popups, the underlying order page is removed from the cache and therefore an error occurs in the app.
Does anybody know how to influence the HDIV cache-removal strategy to keep the base page alive?
One way around this is to increase org.hdiv.session.StateCache.maxSize from 50 to 500, but this would only cure the symptoms, not the underlying cause.
Update:
Using @rbelasko's solution,
I succeeded in using the original org.hdiv.session.StateCache to change the maxSize to 20 and verified in the debug log that cache entries are dismissed after 20 entries.
When I changed it to use my own implementation, it didn't work.
Bean definition
<bean id="cache" class="com.mycompany.session.StateCacheTest" singleton="false"
init-method="init">
<property name="maxSize">
<value>20</value>
</property>
</bean>
My own class
public class StateCacheTest extends StateCache
{
private static final Log log = LogFactory.getLog(StateCacheTest.class);
public StateCacheTest()
{
log.debug("StateCache()");
}
@Override
public void setMaxSize(final int maxSize)
{
super.setMaxSize(maxSize);
if (log.isDebugEnabled())
{
log.debug("setMaxSize to " + maxSize);
}
}
}
There were no entries from StateCacheTest in the debug log.
Any ideas?
Update 2:
While I was not able to load a different IStateCache implementation via Spring, I was able to make this error less likely by using
<hdiv:config ... maxPagesPerSession="200" ... />
The bean-settings definition
<property name="maxSize">
<value>20</value>
</property>
had no effect on the cache size in my system.
You could create a custom IStateCache interface implementation.
Using the HDIV explicit configuration (not HDIV's new custom schema), this is the default configuration for the "cache" bean:
<bean id="cache" class="org.hdiv.session.StateCache" singleton="false"
init-method="init">
<property name="maxSize">
<value>200</value>
</property>
</bean>
You could create your own implementation and implement the strategy that fits your requirements.
Regards,
Roberto
After upgrading from Spring 3.0.0.M4 to 3.0.0.RC1 and Spring Security 3.0.0.M2 to 3.0.0.RC1, I've had to use a security:authentication-manager tag instead of defining an _authenticationManager bean like I used to in M4/M2. I've done my best at defining it and ended up with this:
<security:authentication-manager alias="authenticationManager">
<security:authentication-provider user-service-ref="userService">
<security:password-encoder hash="plaintext"/>
</security:authentication-provider>
</security:authentication-manager>
When I run my unit tests one at a time, this works great, and for most AJAX requests it works fine as well, but seemingly at random I get weird errors in my transactions where my database session seems to get closed midway through the work. The way I can provoke these errors is to send a lot of different AJAX requests to my different controllers from the same client; then at least one of them will fail at random. The next time I try, that one will work and another will fail.
The error happens most frequently in my userDAO, but also quite frequently in other DAOs, and the exceptions include at least the following:
"java.sql.SQLException: Operation not allowed after ResultSet closed"
"org.hibernate.impl.AbstractSessionImpl:errorIfClosed(): Session is closed!"
"java.lang.NullPointerException at com.mysql.jdbc.PreparedStatement.fillSendPacket(PreparedStatement.java:2439)"
"java.util.ConcurrentModificationException at java.util.LinkedHashMap$LinkedHashIterator.nextEntry(Unknown Source)"
"org.hibernate.LazyInitializationException: illegal access to loading collection"
etc...
Before, I used to define an _authenticationManager bean, and the same requests worked like a charm. But with RC1, I'm no longer allowed to define it. It used to look like this:
<bean id="_authenticationManager" class="org.springframework.security.authentication.ProviderManager">
<property name="providers">
<list>
<bean class="org.springframework.security.authentication.dao.DaoAuthenticationProvider">
<property name="userDetailsService" ref="userService"/>
<property name="passwordEncoder">
<bean class="org.springframework.security.authentication.encoding.PlaintextPasswordEncoder" />
</property>
</bean>
</list>
</property>
</bean>
Have I defined my security:authentication-manager incorrectly so that it will share transactions for multiple requests from the same client? Should I define it differently, or should I define some other security: beans?
Is there something I have misunderstood that makes my database sessions close? In my head, each request has its own database connection and transaction. All getters and setters are synchronized methods, so I really shouldn't have any concurrency issues. All the REST controllers that the UI makes requests against handle GET requests and do read-only work. To my knowledge, not a single INSERT/UPDATE/DELETE is issued during any of these requests, and I've inspected the database logs to verify this.
I look forward to hearing your suggestions on how to avoid these race conditions.
Cheers
Nik
PS: I've updated the question to be more specific: the problem seems to be with the security:authentication-manager that I'm forced to use instead of my own _authenticationManager starting with 3.0.0.RC1 (or so it seems to me; if you have tips that it could be something else, that would be great).
PPS: here is the thread that made me understand I could no longer define an _authenticationManager: SpringSource Forum Post
It turns out I had a big problem with database session handling in my DAO, so I've written up my problem and posted the solution in another thread here on Stack Overflow, asking for people's opinions on the solution. I hope it doesn't cause more issues :-)
Cheers
Nik