Infinispan custom interceptors not working with Hibernate L2 cache? - java

In my project I have to intercept Hibernate L2 cache calls in order to set a lifespan for some selected cached objects. The problem is that the Hibernate cache calls never come through the interceptor.
My interceptor (test code):
import org.infinispan.commands.write.PutKeyValueCommand;
import org.infinispan.context.InvocationContext;
import org.infinispan.interceptors.base.BaseCustomInterceptor;
import org.infinispan.util.logging.Log;
import org.infinispan.util.logging.LogFactory;

public class HibernateCacheInterceptor extends BaseCustomInterceptor {

    private static final Log log = LogFactory.getLog(HibernateCacheInterceptor.class);

    @Override
    public Object visitPutKeyValueCommand(InvocationContext ctx, PutKeyValueCommand command) throws Throwable {
        log.info(this.getClass().getName() + " intercept.");
        if (command.getValue() instanceof Car) {
            // swallow puts for Car entities (test behaviour)
            return null;
        } else {
            return invokeNextInterceptor(ctx, command);
        }
    }
}
My cache definition (infinispan.xml):
<namedCache name="mycache">
    <customInterceptors>
        <interceptor position="FIRST" class="test.HibernateCacheInterceptor"/>
    </customInterceptors>
</namedCache>
org.infinispan.Cache.put(key, value) calls reach the interceptor, but the Hibernate cache calls do not. Does Hibernate use a different API that skips the interceptors? How can I intercept the Hibernate cache calls?

No, Hibernate cannot skip interceptors - all of the core Infinispan logic is triggered from interceptors.
My guess is that Hibernate does not use the cache at all (when you open JConsole, can you see entries there in Infinispan?), uses another cache (one without the interceptor), or buffers the entries before inserting them into the cache.
You can try setting trace logging on both Hibernate and Infinispan.

There are easier ways to achieve this. As indicated in the Infinispan 2LC documentation (see the advanced configuration section), each entity can be assigned a specific cache where you can tweak the settings declaratively. The easiest thing is to check which Infinispan configuration is used in your application, copy the default cache used for entities, give it a different name and tweak it. Then you need to define something like:
<property name="hibernate.cache.infinispan.com.acme.Person.cfg"
value="person-entity"/>
Where person-entity is the name of the cache for that particular entity.
NOTE: Remember that if you're running on Wildfly or EAP, the property name requires indicating the deployment archive and persistence unit name. This is explained in the advanced configuration section.
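For the original goal (a lifespan for selected cached objects), the per-entity cache is also the natural place to declare expiration instead of writing an interceptor. A minimal sketch against the same namedCache schema used above (the cache name person-entity and the timings are only examples, not values from your setup):

<namedCache name="person-entity">
    <!-- evict entries 10 minutes after they are written, or after 5 minutes of inactivity -->
    <expiration lifespan="600000" maxIdle="300000"/>
</namedCache>

The remaining settings should be copied from the default entity cache in your existing configuration.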

Related

How do I use multiple @Transactional annotated methods?

I have a Spring Boot project with a service bean that has two @Transactional annotated methods.
These methods perform read-only JPA (Hibernate) actions to fetch data from an HSQL file database, using both JPA repositories and lazily loaded getters in entities.
I also have a CLI bean that handles commands (using PicoCLI). From one of these commands I try to call both @Transactional annotated methods, but I get the following error during execution of the second method:
org.hibernate.LazyInitializationException - could not initialize proxy - no Session
at org.hibernate.collection.internal.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:602)
at org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:217)
at org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:581)
at org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:148)
at org.hibernate.collection.internal.PersistentSet.iterator(PersistentSet.java:188)
at java.util.Spliterators$IteratorSpliterator.estimateSize(Spliterators.java:1821)
at java.util.Spliterator.getExactSizeIfKnown(Spliterator.java:408)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
at <mypackage>.SomeImpl.getThings(SomeImpl.java:<linenr>)
...
If I mark the method that calls both @Transactional annotated methods with @Transactional itself, the code works (presumably because there is now only one top-level transaction?).
I just want to find out why I cannot start multiple transactions in a single session, or why the second transaction doesn't start a new session if there is none.
So my questions are:
Does this have to do with how Hibernate starts a session, how transactions close sessions, or anything related to the HSQL database?
Is adding an encompassing transaction the right way to fix the issue, or is this just fighting the symptom?
What would be the best way to use multiple @Transactional annotated methods from one method?
EDIT: I want to make clear that I don't expose the entities outside of the transactional methods, so on the surface it looks to me like the 2 transactional methods should be working independently from one another.
EDIT2: For more clarification: the transactional methods need to be available in an API, and the user of the API should be able to call several of these transactional methods without needing transactional annotations themselves and without getting the LazyInitializationException.
API:
public interface SomeApi {
    List<String> getSomeList();
    List<Something> getThings(String somethingGroupName);
}
Implementation:
public class SomeImpl implements SomeApi {

    @Transactional
    public List<String> getSomeList() {
        return ...; // Do JPA stuff to get the list
    }

    @Transactional
    public List<Something> getThings(String somethingGroupName) {
        return ...; // Do other JPA stuff to get the result from the group name
    }
}
Usage by 3rd party (who might not know what transactionality is):
public void someMethod(String somethingGroupName) {
    ...
    SomeApi someApi = ...; // Get an implementation of the api in some way
    List<String> someList = someApi.getSomeList();
    if (someList.contains(somethingGroupName)) {
        System.out.println(someApi.getThings(somethingGroupName));
    }
    ...
}
It seems that you are accessing some uninitialized (lazily loaded) data from your entities after the transactions have ended. In that case, the persistence provider may throw the LazyInitializationException.
If you need to retrieve information that is not eagerly loaded with the entities, you can use one of two strategies:
annotate the calling method with @Transactional as well, as you did: this does not start a new transaction for each call, but keeps the opened transaction active until your calling method ends, avoiding the exception; or
make the called methods eagerly load the required fields using the JOIN FETCH JPQL idiom (see the sketch below).
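A minimal sketch of the second strategy, assuming a hypothetical SomethingGroup entity that owns the lazy things collection (the entity, field and service names are made up for illustration; adapt the query to your real model):

import java.util.ArrayList;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomethingService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public List<Something> getThings(String somethingGroupName) {
        // JOIN FETCH initializes the lazy collection while the session is still open,
        // so the returned objects are safe to use after the transaction has ended.
        SomethingGroup group = entityManager.createQuery(
                "select g from SomethingGroup g "
                        + "join fetch g.things "
                        + "where g.name = :name", SomethingGroup.class)
                .setParameter("name", somethingGroupName)
                .getSingleResult();
        return new ArrayList<>(group.getThings());
    }
}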
Transaction boundaries require some analysis of your scenario. Please read this answer and look for good books or tutorials to master the topic. Probably only you will be able to define your requirements aptly.
I found that Hibernate, out of the box, doesn't reopen a session and therefore doesn't enable lazy loading after the first transaction has ended, whether or not subsequent JPA statements are inside a transaction. There is, however, a Hibernate property to enable this feature:
spring:
  jpa:
    properties:
      hibernate.enable_lazy_load_no_trans: true
This will make sure that if there is no session, a temporary session is created. I believe it also prevents a session from ending after a transaction, but I don't know this for sure.
Partial credit goes to the following answers from other StackOverflow questions:
http://stackoverflow.com/a/32046337/2877358
https://stackoverflow.com/a/11913404/2877358
WARNING: In hibernate 4.1.8 there is a bug that could lead to loss of data! Make sure that you are using 4.2.12, 4.3.5 or newer versions of hibernate. See: https://hibernate.atlassian.net/browse/HHH-7971.

Automatic cache invalidation in Spring

I have a class that performs some read operations from a service XXX. These read operations will eventually perform DB reads and I want to optimize on those calls by caching the results of each method in the class for a specified custom key per method.
class A {
    public Output1 func1(Arguments1 ...) {
        ...
    }
    public Output2 func2(Arguments2 ...) {
        ...
    }
    public Output3 func3(Arguments3 ...) {
        ...
    }
    public Output4 func4(Arguments4 ...) {
        ...
    }
}
I am thinking of using Spring caching (the @Cacheable annotation) for caching the results of each of these methods.
However, I want cache invalidation to happen automatically by some mechanism (TTL etc.). Is that possible in Spring caching? I understand that there is a @CacheEvict annotation, but I want the eviction to happen automatically.
Any help would be appreciated.
According to the Spring documentation (section 36.8) :
How can I set the TTL/TTI/Eviction policy/XXX feature?
Directly through your cache provider. The cache abstraction is...
well, an abstraction not a cache implementation. The solution you are
using might support various data policies and different topologies
which other solutions do not (take for example the JDK
ConcurrentHashMap) - exposing that in the cache abstraction would be
useless simply because there would be no backing support. Such
functionality should be controlled directly through the backing cache,
when configuring it or through its native API.
This means that Spring does not directly expose an API to set the time to live, but instead relies on the caching provider implementation for it. So you need to either set the time to live through the exposed cache manager, if the caching provider allows dynamic setup of these attributes, or configure yourself the cache region that Spring uses with the @Cacheable annotation.
To find the name of the cache region that @Cacheable uses, you can browse the available cache regions in your application with a JMX console.
If you are using EHCache, for example, once you know the cache region you can provide XML configuration like this:
<cache name="myCache"
maxEntriesLocalDisk="10000" eternal="false" timeToIdleSeconds="3600"
timeToLiveSeconds="0" memoryStoreEvictionPolicy="LFU">
</cache>
Again, I repeat: all of this configuration is caching-provider specific, and Spring does not expose an interface for dealing with it.
REMARK: The default cache provider that Spring configures if no provider is defined is ConcurrentHashMap. It has no support for time to live. To get this functionality you have to switch to a different cache provider (for example EHCache).
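To make the wiring concrete, here is a minimal sketch of how the EHCache region above could be plugged into Spring caching (assuming spring-context-support and the ehcache jar are on the classpath; the service, method and cache names are illustrative):

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        // delegates TTL/eviction entirely to the ehcache.xml shown above
        return new EhCacheCacheManager(net.sf.ehcache.CacheManager.create());
    }
}

@Service
public class ReportService {

    // results land in the "myCache" region; EHCache expires them after
    // timeToIdleSeconds/timeToLiveSeconds without any @CacheEvict call
    @Cacheable("myCache")
    public String expensiveLookup(String id) {
        // stands in for the expensive DB read that we want to cache
        return "result-for-" + id;
    }
}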

Can we have multiple dataSources to a single database?

I have a Spring web service application with Oracle as the database. Right now I have a data source created using WebLogic server, and I am using EclipseLink JPA for both read and write transactions (insert, read and update). Now we want separate data sources for read and write (insert or update) transactions.
My current data source is as follows:
JNDI NAME : jdbc/POI_DS
URL : jdbc:oracle:thin:@localhost:1521:XE
Using this, I am doing both read and write transactions.
What if I do the following:
JNDI NAME : jdbc/POI_DS_READ
URL : jdbc:oracle:thin:@localhost:1521:XE
JNDI NAME : jdbc/POI_DS_WRITE
URL : jdbc:oracle:thin:@localhost:1521:XE
I know that using an XA data source we can define multiple data sources. Can I do the same thing without an XA data source? Has anyone tried this kind of approach?
::UPDATE::
Thank you all for your responses. I have implemented the following solution.
I have taken the multiple-database approach, where you define multiple transaction managers and entity manager factories. I have taken only a single non-XA data source (JNDI), which is referred to in the EntityManagerFactory bean.
You can refer to the following links, which are for multiple data sources:
Multiple DataSource Approach
defining #transactional value
I also explored the transaction managers org.springframework.transaction.jta.WebLogicJtaTransactionManager and org.springframework.orm.jpa.JpaTransactionManager.
There is an interesting article about this in the Spring docs - Dynamic DataSource Routing. There is an example there that lets you switch data sources at runtime. It should help you. I'd gladly help you more if you have any more specific questions.
EDIT: The article describes connecting to multiple databases via one configuration, but you could just as well create different configurations pointing to one database with different parameters, as you need here.
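A minimal sketch of that approach for a read/write split (the lookup keys and the ThreadLocal holder are illustrative, not part of your setup; the two target data sources would be the jdbc/POI_DS_READ and jdbc/POI_DS_WRITE entries looked up from JNDI):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    // per-thread routing key: "READ" or "WRITE", defaulting to "WRITE"
    private static final ThreadLocal<String> KEY = ThreadLocal.withInitial(() -> "WRITE");

    public static void useRead()  { KEY.set("READ"); }
    public static void useWrite() { KEY.set("WRITE"); }

    @Override
    protected Object determineCurrentLookupKey() {
        return KEY.get();
    }
}

The routing data source is then configured with the two JNDI data sources as its target data sources (under the keys "READ" and "WRITE") and injected wherever a plain DataSource is expected.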
I would suggest using database "services". Each workload, read-only and read-write, would use its own service to access the database. That way you can use AWR reports to get statistics for each service. You can also turn off read-write while keeping read-only up and running.
Here is a pointer to the Oracle Database documentation that talks about Services:
https://docs.oracle.com/database/121/ADMIN/create.htm#CIABBCAI
If you're using Spring, you should be able to accomplish this without two data sources, via Spring's @Transactional with the readOnly property set to true. The reason I suggest this is that you seem to be concerned only about transactionality, and that seems to be catered for in the Spring framework.
I'd suggest something like this for your case:
@Transactional(readOnly = true)
public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) {
        // do something
    }

    // these settings have precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // do something
    }
}
Using this style, you can split read-only services from their write counterparts, or even combine read and write service methods in one class. Neither approach uses two data sources.
The code is from the Spring Reference.
I am pretty sure that you need to address the problem at the database / connection URL and properties layer.
I would google around for something like read-write replication.
Related to your question about JPA and transactions: you are doomed when using multiple data sources, and XA data sources are not really a solution for that either. The only thing they do for you is ensure consistency over multi-data-source operations. An XA transaction only spans a sort of logical transaction over two transactions (one for each data source). From the transaction isolation point of view (as long as you're not using READ_UNCOMMITTED), both data sources use their own transaction. This means the read data source would not see the changes made by the write transaction.

JMX MXBean Attributes all UNDEFINED - Spring 3.0.x/Tomcat 6.0

I've been trying to get a sample JMX MXBean working in a Spring-configured webapp, but the basic attributes on the MXBean come up as UNDEFINED when I connect with JConsole.
Java interface/classes:
public interface IJmxBean { // marker interface for spring config, see below
}

public interface MgmtMXBean { // lexical convention for MXBeans - mgmt interface
    public int getAttribute();
}

public class Mgmt implements IJmxBean, MgmtMXBean { // actual JMX bean

    private IServiceBean serviceBean; // service bean injected by Spring
    private int attribute = 0;

    @Override
    public int getAttribute() {
        if (serviceBean != null) {
            attribute = serviceBean.getRequestedAttribute();
        }
        return attribute;
    }

    public void setServiceBean(IServiceBean serviceBean) {
        this.serviceBean = serviceBean;
    }
}
Spring JMX config:
<beans>
    <context:component-scan base-package="...">
        <context:include-filter type="assignable" expression="...IJmxBean" />
    </context:component-scan>
    <context:mbean-export />
</beans>
Here's what I know so far:
The <context:component-scan> element is correctly instantiating a bean named "mgmt". I've got logging in a zero-argument public constructor that indicates it gets constructed.
<context:mbean-export /> is correctly and automatically detecting and registering the MgmtMXBean interface with my Tomcat 6.0 container. I can connect to the MBeanServer in Tomcat with JConsole and drill down to the Mgmt MXBean.
When examining the MXBean, "Attribute" is always listed as UNDEFINED, though JConsole can tell the correct type of the attribute. Further, hitting "Refresh" in JConsole does not actually invoke the getter method of "Attribute" - I have logging in the getter method to indicate whether it is being invoked (similar to the constructor logging, which works) and I see nothing in the logs.
At this point I'm not sure what I'm doing wrong. I've tried a number of things, including constructing an explicit Spring MBeanExporter instance and registering the MXBean by hand, but it either results in the MBean/MXBean not getting registered with Tomcat's MBean server or an Attribute value of UNDEFINED.
For various reasons, I'd prefer not to have to use Spring's @ManagedResource/@ManagedAttribute annotations.
Is there something that I'm missing in the Spring docs or MBean/MXBean specs?
ISSUE RESOLVED: Thanks to prompting by Jon Stevens (above), I went back and re-examined my code and Spring configuration files:
Throwing an exception in the getAttribute() method is a sure way to get "Unavailable" to show up as the attribute's value in JConsole. In my case:
The Spring JMX config file I was using was lacking the default-autowire="" attribute on the root <beans> element;
The code presented above checks whether serviceBean != null. Apparently I write better code on stackoverflow.com than in my test code, since my test code wasn't checking for that. Nor did I have implements InitializingBean or a @PostConstruct check for serviceBean != null like I normally do on almost all the other beans I use (see the sketch after this list);
The code invoking the service bean was before the logging, so I never saw any log messages about getter methods being entered;
JConsole doesn't report when attribute methods throw exceptions;
The NPE did not show up in the Tomcat logs.
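For reference, the missing dependency check can be as small as the following method on the Mgmt bean (a sketch; javax.annotation.PostConstruct is processed by Spring after injection):

@PostConstruct
public void verifyDependencies() {
    // fail fast at startup instead of getting an NPE (and an UNDEFINED attribute) later
    if (serviceBean == null) {
        throw new IllegalStateException("serviceBean was not injected");
    }
}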
Once I resolved the issue with serviceBean == null, everything worked perfectly. Regardless, +1 to Jon for providing a working demo, since there are literally 50 different ways to configure MBeans/MXBeans within Spring.
I've recently built a sample Spring based webapp that very cleanly enables JMX for latest versions of Spring, Hibernate and Ehcache.
It has examples for both EntityManager-based access and DAO access (including transactions!). It also shows how to do annotation-based injection in order to avoid having to use Spring's XML config for beans. There is even a Spring MVC based example servlet using annotations. Basically, this is a Spring-based version of a fairly powerful application server running on top of any servlet engine.
It isn't documented yet, but I'll get to that soon. Take a look at the configuration files and source code and it should be pretty clear.
The motivation behind this is that I got tired of all of the crazy blog posts with 50 different ways to set things up and finally made a single simple source that people can work from. It is up on github so feel free to fork the project and do whatever you want with it.
https://github.com/lookfirst/fallback

How to ensure that methods called from same thread use the same DB session

In our system we have a multi-threaded processing engine. During processing, each thread calls methods that retrieve data from the database. We determined that performance is greatly improved if methods called from the same thread use the same DB session (the sessions come from the pool, of course).
Is there any standard way in Spring to ensure such a thing, or do we have to come up with our own custom solution?
UPDATE: Forgot to mention that the same methods can be called in a different context where they should use the standard way of getting the session from the pool.
I did not see Spring anywhere in your question. So I assume you want a simple utility to do this.
class SessionUtil {

    // one Session per thread; never shared across threads
    private final ThreadLocal<Session> currentSession = new ThreadLocal<>();

    public Session getCurrentSession() {
        if (currentSession.get() == null) {
            Session s = null; // create a new session here (e.g. from the SessionFactory)
            currentSession.set(s);
        }
        return currentSession.get();
    }
}
The Thread local will ensure that within the same thread it is always the same session. If you are using Spring then the classes/utilities mentioned above (in other responses) should be perfect.
Spring has a class called TransactionSynchronizationManager. It stores the current Session in a ThreadLocal. The TransactionSynchronizationManager is not recommended for use by the developer, but you can try using it.
Session session = ((SessionHolder)
TransactionSynchronizationManager.getResource(sessionFactory)).getSession();
(if you are using EntityManager, simply replace "Session" with "EntityManager").
You can have the sessionFactory injected in your bean - it is per-application.
Take a look at this discussion.
Other options, which I think are preferable to manual thread-handling are:
Thread pooling
Spring batch
Spring-JMS integration
Spring 3.0 has a concept of thread-scoped beans (however, this scope is not registered by default; see the docs): 3.5 Bean scopes, 3.5.5.2 Using a custom scope
EDIT: I am referring to this part of the documentation:
Thread-scoped beans: As of Spring 3.0, a thread scope is available, but is not registered by default. For more information, see the documentation for SimpleThreadScope. For instructions on how to register this or any other custom scope, see Section 3.5.5.2, "Using a custom scope".
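Registering that scope can be done with a CustomScopeConfigurer; a minimal sketch (the scope name "thread" is a choice here, not a default):

<bean class="org.springframework.beans.factory.config.CustomScopeConfigurer">
    <property name="scopes">
        <map>
            <entry key="thread">
                <bean class="org.springframework.context.support.SimpleThreadScope"/>
            </entry>
        </map>
    </property>
</bean>

Beans declared with scope="thread" then get one instance per thread, which also keeps any session they hold thread-confined.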
Spring coordinates database sessions, connections and threads through its Transaction Framework (actually, using its TransactionSynchronizationManager - see the description here - but you really don't want to mess with that directly, it's fearsome). If you need to coordinate your threads, then this is by far the simplest way of doing it.
How you choose to use the framework, however, is up to you.
