my ehcache config starts like this:
<ehcache maxBytesLocalHeap="200M" updateCheck="false">
After running into an error like "maxEntriesLocalHeap is not compatible with maxBytesLocalHeap set on cache manager", I looked into the source of spring-context-support:
@SuppressWarnings("deprecation")
public EhCacheFactoryBean() {
    setMaxEntriesLocalHeap(10000);
    setMaxElementsOnDisk(10000000);
    setTimeToLiveSeconds(120);
    setTimeToIdleSeconds(120);
}
The call to setMaxElementsOnDisk is documented like this:
void net.sf.ehcache.config.CacheConfiguration.setMaxElementsOnDisk(int maxElementsOnDisk)
Deprecated. use setMaxEntriesLocalDisk(long) for unclustered caches and setMaxEntriesInCache(long) for clustered caches.
Sets the maximum number elements on Disk. 0 means unlimited.
This property can be modified dynamically while the cache is operating.
Parameters:
maxElementsOnDisk the maximum number of Elements to allow on the disk. 0 means unlimited.
Does this mean that with Spring 4.1.7 and its Ehcache 2.9.1 dependency, a clustered cache is impossible when using Spring's EhCacheFactoryBean?
Best regards,
Carsten
Sorry, but I fail to see the link between the code/documentation snippets and your conclusion. Can you elaborate?
One thing is that Spring's default cache creation seems to conflict with what you want to do at the CacheManager level. But if you do not use the default caches, you should have no problem.
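For illustration, one way to avoid the default cache creation is to declare every cache in ehcache.xml and load the whole configuration through Spring's EhCacheManagerFactoryBean, so EhCacheFactoryBean's maxEntriesLocalHeap default never gets applied. A minimal sketch, assuming the config file sits on the classpath:
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

@Configuration
public class EhCacheConfig {

    // The CacheManager (including maxBytesLocalHeap="200M") comes entirely from
    // ehcache.xml; caches declared there inherit the manager-level byte pool.
    @Bean
    public EhCacheManagerFactoryBean cacheManager() {
        EhCacheManagerFactoryBean factory = new EhCacheManagerFactoryBean();
        factory.setConfigLocation(new ClassPathResource("ehcache.xml"));
        return factory;
    }
}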
The CacheManager and its caches must use the same sizing parameter, either maxBytesLocalHeap or maxEntriesLocalHeap, and the maxElements* setters are deprecated.
I am looking for the right way to set a run-time parameter when a database connection is opened. My run-time parameter is actually a time zone, but I think this should work for an arbitrary parameter.
I've found the following solutions, but I feel like none of them is the right thing.
JdbcInterceptor
Because Spring Boot uses the Apache Tomcat connection pool by default, I can use org.apache.tomcat.jdbc.pool.JdbcInterceptor to intercept connections.
I don't think this interceptor provides a reliable way to execute a statement when a connection is opened. The ability to intercept every statement, which this interceptor provides, is overkill for a parameter that should be set only once.
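To make that concrete, here is a minimal sketch of the interceptor route (TimeZoneInterceptor and the SQL are made up; note that reset(...) fires on every borrow from the pool, not once per physical connection):
import java.sql.SQLException;
import java.sql.Statement;
import org.apache.tomcat.jdbc.pool.ConnectionPool;
import org.apache.tomcat.jdbc.pool.JdbcInterceptor;
import org.apache.tomcat.jdbc.pool.PooledConnection;

public class TimeZoneInterceptor extends JdbcInterceptor {

    // Invoked each time a connection is borrowed from the pool.
    @Override
    public void reset(ConnectionPool parent, PooledConnection con) {
        if (parent == null || con == null) {
            return; // pool shutting down or connection not established yet
        }
        try (Statement statement = con.getConnection().createStatement()) {
            statement.execute("SET TIME ZONE 'UTC'"); // assumed PostgreSQL syntax
        } catch (SQLException e) {
            throw new IllegalStateException("Could not set session time zone", e);
        }
    }
}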
initSQL property
Apache's pooled connection has a built-in ability to initialise itself with a statement provided by the PoolProperties.initSQL parameter. This is executed in the ConnectionPool.createConnection(...) method.
Unfortunately, official support for this parameter has been removed from Spring, and no equivalent functionality has been introduced since then.
I can still use a datasource builder as in the example below and then hack the property into the connection pool, but this is not a good-looking solution.
// Thanks to the property binders used while creating the custom datasource,
// the datasource.initSQL parameter will be passed to the underlying connection pool.
@Bean
@ConfigurationProperties(prefix = "datasource")
public DataSource dataSource() {
    return DataSourceBuilder.create().build();
}
Update
I was testing this in a Spring Boot 1.x application. The statements above no longer hold for Spring Boot 2 applications, because:
The default Tomcat datasource was replaced by Hikari, which supports the spring.datasource.hikari.connection-init-sql property. Its documentation says: Get the SQL string that will be executed on all new connections when they are created, before they are added to the pool.
It seems that a similar property was reintroduced for the Tomcat datasource as spring.datasource.tomcat.init-s-q-l.
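For completeness, the Hikari property binds to HikariConfig#setConnectionInitSql; a standalone sketch of the same thing done programmatically (URL and SQL are placeholders):
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariInitSqlExample {

    public static DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/app"); // placeholder URL
        config.setConnectionInitSql("SET TIME ZONE 'UTC'");        // executed once per new connection
        return new HikariDataSource(config);
    }
}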
ConnectionPreparer & AOP
This is not an actual solution, more of an inspiration. The connection preparer was a mechanism used to initialise Oracle connections in the Spring Data JDBC Extensions project. It has its own problems and is no longer maintained, but it could possibly serve as a base for a similar solution.
If your parameter is actually a time zone, why not set it through a dedicated property?
For example, if you want to store or read a DateTime in a predefined time zone, the right way to do this is to set the property hibernate.jdbc.time_zone in the Hibernate EntityManager configuration, or spring.jpa.properties.hibernate.jdbc.time_zone in application.properties.
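If you are not on Spring Boot, a sketch of the same Hibernate property applied programmatically (the package name and the surrounding wiring are assumptions):
import javax.sql.DataSource;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;

public class JpaTimeZoneConfig {

    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        emf.setPackagesToScan("com.example.domain"); // hypothetical package
        // Hibernate will then bind JDBC timestamps in this zone:
        emf.getJpaPropertyMap().put("hibernate.jdbc.time_zone", "UTC");
        return emf;
    }
}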
I have a class that performs some read operations from a service XXX. These read operations will eventually perform DB reads, and I want to optimize those calls by caching the result of each method in the class under a specified custom key per method.
class A {
public Output1 func1(Arguments1 ...) {
...
}
public Output2 func2(Arguments2 ...) {
...
}
public Output3 func3(Arguments3 ...) {
...
}
public Output4 func4(Arguments4 ...) {
...
}
}
I am thinking of using Spring caching (the @Cacheable annotation) to cache the results of each of these methods.
However, I want cache invalidation to happen automatically by some mechanism (TTL etc.). Is that possible in Spring caching? I understand that we have a @CacheEvict annotation, but I want that eviction to happen automatically.
Any help would be appreciated.
According to the Spring documentation (section 36.8):
How can I set the TTL/TTI/Eviction policy/XXX feature?
Directly through your cache provider. The cache abstraction is...
well, an abstraction not a cache implementation. The solution you are
using might support various data policies and different topologies
which other solutions do not (take for example the JDK
ConcurrentHashMap) - exposing that in the cache abstraction would be
useless simply because there would be no backing support. Such
functionality should be controlled directly through the backing cache,
when configuring it or through its native API.
This means that Spring does not directly expose an API to set the time to live, but instead relies on the caching provider implementation for it. So you need to either set the time to live through the exposed CacheManager, if the caching provider allows dynamic setup of these attributes, or configure yourself the cache region that Spring uses with the @Cacheable annotation.
To find the name of the cache region that @Cacheable is using, you can browse the available cache regions in your application with a JMX console.
If you are using EHCache, for example, once you know the cache region you can provide XML configuration like this:
<cache name="myCache"
maxEntriesLocalDisk="10000" eternal="false" timeToIdleSeconds="3600"
timeToLiveSeconds="0" memoryStoreEvictionPolicy="LFU">
</cache>
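To connect the two sides, the region name is simply the cache name given to the annotation; a sketch (the service class and key expression are invented):
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ReadService {

    // "myCache" must match the <cache name="myCache"> region above; the SpEL key
    // gives the custom per-method key the question asks about.
    @Cacheable(value = "myCache", key = "'func1-' + #id")
    public String func1(long id) {
        return expensiveDbRead(id);
    }

    private String expensiveDbRead(long id) {
        return "result-" + id; // stands in for the real DB-backed read
    }
}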
Again, all of this configuration is caching-provider specific; Spring does not expose an interface for dealing with it.
REMARK: The default cache provider that Spring configures when no provider is defined is ConcurrentHashMap. It has no support for time to live. To get this functionality you have to switch to a different cache provider (for example EHCache).
I use Ehcache (not replicated or distributed) in my application, and as far as I know it can be accessed only from the same JVM, but all the applications in that JVM (e.g. deployed in an app server) can get values from the cache. Am I right?
What I want is that only my application can access the cache and its values. Is that possible somehow? I checked the XML config file but haven't found any setting to control this. Or should I set something when I get the cache from the CacheManager?
This is how I get the cache in code:
private static final String LOCAL_CACHE_NAME = "LOCAL_PROTNEUT_STORE";
private Cache getCache() {
// the name of the ehcache should be able to be configured in the general config XML
URL url = getClass().getResource("/protneut-local-ehcache.xml");
CacheManager manager = CacheManager.create(url);
Cache cache = manager.getCache(LOCAL_CACHE_NAME);
return cache;
}
The config file:
<ehcache>
<cache name="LOCAL_PROTNEUT_STORE" maxElementsInMemory="500" eternal="true" memoryStoreEvictionPolicy="LRU" />
</ehcache>
Is it possible to control the access at all?
Thanks for the help!
Regards,
V.
In general, applications don't have access to each other, as they are loaded with separate classpaths (you can read more about it here), so you shouldn't worry about it.
You would have to make an extra effort to make a single cache manager available to all applications (e.g. expose it via JNDI or put it in a shared lib).
I have a Spring web service application with Oracle as the database. Right now I have a datasource created using the WebLogic server, and I use EclipseLink JPA for both read and write transactions (insert, read and update). Now we want separate datasources for read and write (insert or update) transactions.
My current dataSource is as follows:
JNDI NAME : jdbc/POI_DS
URL : jdbc:oracle:thin:@localhost:1521:XE
Using this, I am doing both read and write transactions.
What if I do the following:
JNDI NAME : jdbc/POI_DS_READ
URL : jdbc:oracle:thin:@localhost:1521:XE
JNDI NAME : jdbc/POI_DS_WRITE
URL : jdbc:oracle:thin:@localhost:1521:XE
I know that using an XA datasource we can define multiple datasources. Can I do the same thing without an XA datasource? Has anyone tried this kind of approach?
::UPDATE::
Thank you all for your responses. I have implemented the following solution.
I have taken the multiple-database approach, where you define multiple transaction managers and entity manager factories. I have taken only a single non-XA dataSource (JNDI), which is referenced in the EntityManagerFactory bean.
You can refer to the following links about multiple dataSources:
Multiple DataSource Approach
defining the @Transactional value
I also explored the transaction managers org.springframework.transaction.jta.WebLogicJtaTransactionManager and org.springframework.orm.jpa.JpaTransactionManager.
There is an interesting article about this in the Spring docs - Dynamic DataSource Routing. It contains an example that lets you switch data sources at runtime, which should help you. I'd gladly help you more if you have any more specific questions.
EDIT: It presents connecting to multiple databases via one configuration as the primary use case, but you could just as well create different configs for one database with different parameters, as you'd need here.
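As a sketch of that routing idea (the lookup keys and the ThreadLocal holder are assumptions; the two JNDI datasources from the question would be registered as the targets):
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> MODE = ThreadLocal.withInitial(() -> "WRITE");

    public static void setMode(String mode) {
        MODE.set(mode); // "READ" or "WRITE"
    }

    // The returned key selects one of the targetDataSources, e.g.
    // "READ" -> jdbc/POI_DS_READ, "WRITE" -> jdbc/POI_DS_WRITE.
    @Override
    protected Object determineCurrentLookupKey() {
        return MODE.get();
    }
}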
I would suggest using database "services". Each workload, read-only and read-write, would use its own service to access the database. That way you can use AWR reports to get statistics for each service. You can also turn off read-write while keeping read-only up and running.
Here is a pointer to the Oracle Database documentation that talks about Services:
https://docs.oracle.com/database/121/ADMIN/create.htm#CIABBCAI
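For example, each datasource could then point at its own service via the thin driver's service-name URL syntax (the service names here are invented):
// Two hypothetical services created on the same XE database:
String readOnlyUrl  = "jdbc:oracle:thin:@//localhost:1521/poi_read";
String readWriteUrl = "jdbc:oracle:thin:@//localhost:1521/poi_write";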
If you're using Spring, you should be able to accomplish this without two datasources, via Spring's @Transactional with the readOnly property set to true. I suggest this because you seem to be concerned only with transactionality, and that is catered for by the Spring framework.
I'd suggest something like this for your case:
@Transactional(readOnly = true)
public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) {
        // do something
    }

    // these settings have precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // do something
    }
}
Using this style, you should be able to split read-only services from their write counterparts, or even combine read and write methods in one service. Neither variant needs two datasources.
The code is from the Spring Reference.
I am pretty sure that you need to address the problem at the database / connection URL + properties layer.
I would google around for something like read-write replication.
Regarding the JPA and transaction part of your question: you are doomed when using multiple datasources. XA datasources are not really a solution for that either. The only thing they do for you is ensure consistency across multi-datasource operations. An XA transaction merely spans a sort of logical transaction over two transactions (one per datasource). From the transaction isolation point of view (as long as you're not using READ_UNCOMMITTED), each datasource still uses its own transaction. This means the read datasource would not see the changes made by the write transaction.
In my project I have to intercept Hibernate L2 cache calls in order to set a lifespan for some selected cached objects. The problem is that Hibernate cache calls never come through the interceptor.
My interceptor (test code):
public class HibernateCacheInterceptor extends BaseCustomInterceptor {
    private static Log log = LogFactory.getLog(HibernateCacheInterceptor.class);

    @Override
    public Object visitPutKeyValueCommand(InvocationContext ctx, PutKeyValueCommand command) throws Throwable {
        log.info(this.getClass().getName() + " intercept.");
        if (command.getValue() instanceof Car) {
            return null;
        } else {
            return invokeNextInterceptor(ctx, command);
        }
    }
}
My cache definition (infinispan.xml):
<namedCache name="mycache">
<customInterceptors>
<interceptor position="FIRST" class="test.HibernateCacheInterceptor">
</interceptor>
</customInterceptors>
</namedCache>
org.infinispan.Cache.put(key, value) calls come to the interceptor, but Hibernate cache calls do not. Does Hibernate use a different API that skips interceptors? How can I intercept Hibernate cache calls?
No, Hibernate cannot skip interceptors - all of the logic of core Infinispan is triggered from interceptors.
My guess is that Hibernate does not use the cache (when you open JConsole, can you see entries in Infinispan?), uses another cache (without the interceptor), or buffers the entries before inserting them into the cache.
You can try to set trace logging on both Hibernate and Infinispan.
There are easier ways to achieve this. As indicated in the Infinispan 2LC documentation (see the advanced configuration section), each entity can be assigned a specific cache where you can tweak the settings declaratively. The easiest thing is to check which Infinispan configuration your application uses, copy the default cache used for entities, give it a different name and tweak it. Then you need to define something like:
<property name="hibernate.cache.infinispan.com.acme.Person.cfg"
value="person-entity"/>
Where person-entity is the name of the cache for that particular entity.
NOTE: Remember that if you're running on WildFly or EAP, the property name requires indicating the deployment archive and persistence unit name. This is explained in the advanced configuration section.
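As a sketch, the dedicated cache can then carry the lifespan the original question was after (the values are examples, following the same <namedCache> schema used above):
<namedCache name="person-entity">
    <!-- entries expire 60s after being stored, or after 30s of inactivity -->
    <expiration lifespan="60000" maxIdle="30000"/>
</namedCache>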