Implementing multitenancy in Hibernate 4 - Java

I am trying to build multitenancy based on a unique schema for each client.
There would be a single datasource to Oracle in the WebLogic server.
The datasource would have a user which has synonyms/grants to all client schemas.
Let us ignore whether this is good, bad, or horrible design - I just want to find a way to get it to work.
I am defining the datasource as part of my hibernate.cfg.xml file.
The SessionFactory will be created from this hibernate.cfg.xml.
From reading on the internet, it seems I will have to add additional properties to my hibernate.cfg.xml:
<session-factory>
    <property name="connection.datasource">MYSQLDS</property>
    <property name="default_schema">testpage</property>
    <!-- multitenant properties start here -->
    <property name="multi_tenant_connection_provider">multiTenantConnectionProvider</property>
    <property name="multiTenancy">SCHEMA</property>
    <property name="tenant_identifier_resolver">CurrentTenantIdentifierResolverImpl</property>
</session-factory>
Now comes the part that I don't understand.
It seems that for CurrentTenantIdentifierResolverImpl I will have to write a custom class implementing this interface:
import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class SchemaResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        return "master";
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return false;
    }
}
So I am quite confused about how this class and its methods get invoked. I do understand that it is used to determine which schema should be used, based on some identifier unique to a client - a la the user's HTTP session?
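From what I can gather (this is my assumption, not something I have verified), Hibernate calls the resolver itself whenever a session is opened without an explicit tenant, so something like this should hold:

// My (unverified) understanding: with a tenant_identifier_resolver configured,
// this call makes Hibernate ask the resolver for the tenant identifier...
Session session = sessionFactory.openSession();

// ...which should be roughly equivalent to passing the tenant explicitly:
Session session2 = sessionFactory.withOptions()
        .tenantIdentifier("client1") // e.g. looked up from the user's HTTP session
        .openSession();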
The next part is the implementation of MultiTenantConnectionProvider, and again its methods, such as:
public Connection getConnection(String tenantIdentifier) throws SQLException {
RANT-START: this strangely starts looking like good old plain and simple JDBC :) so I am wondering why I have to use Hibernate :( RANT-END
So, related to this method and the class - how do I consume it?
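For what it's worth, here is my best guess at what a schema-switching provider would look like for my single-datasource setup. This is only a sketch: the JNDI name is the datasource from my config above, the package of MultiTenantConnectionProvider differs between Hibernate 4.x versions (org.hibernate.service.jdbc.connections.spi before 4.3, org.hibernate.engine.jdbc.connections.spi after), and ALTER SESSION SET CURRENT_SCHEMA is Oracle-specific:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import org.hibernate.service.jdbc.connections.spi.MultiTenantConnectionProvider;

public class SchemaMultiTenantConnectionProvider implements MultiTenantConnectionProvider {

    private final DataSource dataSource;

    public SchemaMultiTenantConnectionProvider() {
        try {
            // the single WebLogic datasource whose user has grants/synonyms to all client schemas
            dataSource = (DataSource) new InitialContext().lookup("MYSQLDS");
        } catch (Exception e) {
            throw new IllegalStateException("Could not look up datasource", e);
        }
    }

    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }

    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    public Connection getConnection(String tenantIdentifier) throws SQLException {
        Connection connection = getAnyConnection();
        Statement statement = connection.createStatement();
        try {
            // Oracle-specific: point this session at the tenant's schema.
            // NB: tenantIdentifier must come from a trusted source (SQL injection!)
            statement.execute("ALTER SESSION SET CURRENT_SCHEMA = " + tenantIdentifier);
        } finally {
            statement.close();
        }
        return connection;
    }

    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        releaseAnyConnection(connection);
    }

    public boolean supportsAggressiveRelease() {
        return false;
    }

    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    public <T> T unwrap(Class<T> unwrapType) {
        return null;
    }
}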
The frustrating part is that in lots of places you get to see these two classes defined, but with no explanation of how they are invoked.
Is there a complete end-to-end example of how to use these two classes, with a clear example of how they are consumed - not just the two standalone classes, please!

Related

How to make a Spring Application's Business Logic User Configurable?

I am a developer working on a Java web application that is built on the Spring framework.
This application will be deployed to several different customers.
There is a class which contains some business logic that is different for each client.
From a Spring framework point of view, it is enough to simply wire in the appropriate class (as a bean) for each client.
However, the developers are not responsible for deployment of the application; the Operations team is, and having them open up the WAR file and modify the Spring configuration XML for each client deployment is probably too much to ask. Properties files are OK, but modifying internal files - probably not.
Has anyone else come up with a strategy for dealing with this?
Edit:
To give an example of what I'm talking about:
public interface IEngine {
    void makeNoise();
}

public class Car {
    private IEngine engine;

    public void setEngine(IEngine engine) {
        this.engine = engine;
    }
}
Customer A's business logic:
public class HeavyDutyEngine implements IEngine {
    public void makeNoise() {
        System.out.println("VROOOM!");
    }
}
Customer B's business logic:
public class LightWeightEngine implements IEngine {
    public void makeNoise() {
        System.out.println("putputput");
    }
}
In the Spring configuration XML:
For client A, it might look like this:
<bean id="hdEngine" class="HeavyDutyEngine" />
<bean id="lwEngine" class="LightWeightEngine" />
<bean id="car" class="Car">
<property name="engine" ref="hdDngine">
</bean>
For client B, it might look like this:
<bean id="hdEngine" class="HeavyDutyEngine" />
<bean id="lwEngine" class="LightWeightEngine" />
<bean id="car" class="Car">
<property name="engine" ref="lwEngine">
</bean>
To configure Spring for different environments, you can use the concept of Spring "profiles" (introduced in Spring 3.1).
There are different ways to enable/disable profiles, for example via a Java system property.
But because you are using a WAR, and therefore some servlet container, I would recommend putting this configuration in the servlet container.
In Tomcat, for example, you can put this line in context.xml (global or application-specific) to enable a profile:
<Parameter name="spring.profiles.active" value="myProfile"/>
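For completeness, the profile itself would then be declared in the Spring XML, roughly like this (a sketch reusing the engine beans from the question; nested <beans> elements must be the last elements in the file):

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.1.xsd">

    <bean id="hdEngine" class="HeavyDutyEngine" />
    <bean id="lwEngine" class="LightWeightEngine" />

    <beans profile="clientA">
        <bean id="car" class="Car">
            <property name="engine" ref="hdEngine" />
        </bean>
    </beans>

    <beans profile="clientB">
        <bean id="car" class="Car">
            <property name="engine" ref="lwEngine" />
        </bean>
    </beans>
</beans>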
You can move some configuration to the DB, if you are using one, and then use something like this:
ref="${ref-name}"
where ref-name is resolved from a properties file (the default) by configuring a PropertyPlaceholderConfigurer.
Alternatively, you can write your own wrapper over PropertyPlaceholderConfigurer which takes the values from a DB table external to your deployable WAR file.
In one of my projects, we used this method to resolve custom dependencies. The wrapper that looked up the DB took first priority; if the DB did not have the key/value pair, a properties file (bundled in the WAR) was used to resolve the dependency.
This also allows you to change some values externally in the DB, although with IBM WebSphere we needed to recycle the server for the changes to take effect.
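A rough sketch of that wrapper idea (the class, table, and column names here are made up for illustration): extend PropertyPlaceholderConfigurer and override resolvePlaceholder to try the DB first, falling back to the bundled properties file:

import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;
import org.springframework.dao.EmptyResultDataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;

public class DbPropertyPlaceholderConfigurer extends PropertyPlaceholderConfigurer {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    @Override
    protected String resolvePlaceholder(String placeholder, Properties props) {
        try {
            // the config table is external to the WAR and owned by Operations
            return jdbcTemplate.queryForObject(
                    "SELECT cfg_value FROM config WHERE cfg_key = ?", String.class, placeholder);
        } catch (EmptyResultDataAccessException notInDb) {
            // fall back to the properties file bundled in the WAR
            return super.resolvePlaceholder(placeholder, props);
        }
    }
}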

How to setup Hibernate to read/write to different datasources?

Using Spring and Hibernate, I want to write to one MySQL master database, and read from one or more replicated slaves, in a cloud-based Java webapp.
I can't find a solution that is transparent to the application code. I don't really want to have to change my DAOs to manage different SessionFactories, as that seems really messy and couples the code with a specific server architecture.
Is there any way of telling Hibernate to automatically route CREATE/UPDATE queries to one datasource, and SELECT to another? I don't want to do any sharding or anything based on object type - just route different types of queries to different datasources.
An example can be found here: https://github.com/afedulov/routing-data-source.
Spring provides a variation of DataSource, called AbstractRoutingDataSource. It can be used in place of standard DataSource implementations and enables a mechanism to determine which concrete DataSource to use for each operation at runtime. All you need to do is extend it and provide an implementation of the abstract determineCurrentLookupKey method. This is the place to implement your custom logic to determine the concrete DataSource; the returned object serves as a lookup key. It is typically a String or an Enum, used as a qualifier in the Spring configuration (details follow).
package website.fedulov.routing;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class RoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DbContextHolder.getDbType();
    }
}
You might be wondering what that DbContextHolder object is and how it knows which DataSource identifier to return. Keep in mind that the determineCurrentLookupKey method will be called whenever the TransactionManager requests a connection. It is important to remember that each transaction is "associated" with a separate thread; more precisely, the TransactionManager binds the Connection to the current thread. Therefore, in order to dispatch different transactions to different target DataSources, we have to make sure that every thread can reliably identify which DataSource it is meant to use. This makes it natural to use ThreadLocal variables for binding a specific DataSource to a Thread, and hence to a Transaction. This is how it is done:
public enum DbType {
    MASTER,
    REPLICA1,
}

public class DbContextHolder {

    private static final ThreadLocal<DbType> contextHolder = new ThreadLocal<DbType>();

    public static void setDbType(DbType dbType) {
        if (dbType == null) {
            throw new NullPointerException();
        }
        contextHolder.set(dbType);
    }

    public static DbType getDbType() {
        return contextHolder.get();
    }

    public static void clearDbType() {
        contextHolder.remove();
    }
}
As you can see, you can also use an enum as the key, and Spring will take care of resolving it correctly based on the name. The associated DataSource configuration and keys might look like this:
....
<bean id="dataSource" class="website.fedulov.routing.RoutingDataSource">
    <property name="targetDataSources">
        <map key-type="website.fedulov.routing.DbType">
            <entry key="MASTER" value-ref="dataSourceMaster"/>
            <entry key="REPLICA1" value-ref="dataSourceReplica"/>
        </map>
    </property>
    <property name="defaultTargetDataSource" ref="dataSourceMaster"/>
</bean>

<bean id="dataSourceMaster" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="${db.master.url}"/>
    <property name="username" value="${db.username}"/>
    <property name="password" value="${db.password}"/>
</bean>

<bean id="dataSourceReplica" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="${db.replica.url}"/>
    <property name="username" value="${db.username}"/>
    <property name="password" value="${db.password}"/>
</bean>
At this point you might find yourself doing something like this:
@Service
public class BookService {

    private final BookRepository bookRepository;
    private final Mapper mapper;

    @Inject
    public BookService(BookRepository bookRepository, Mapper mapper) {
        this.bookRepository = bookRepository;
        this.mapper = mapper;
    }

    @Transactional(readOnly = true)
    public Page<BookDTO> getBooks(Pageable p) {
        DbContextHolder.setDbType(DbType.REPLICA1); // <----- set ThreadLocal DataSource lookup key
        // all connections from here on will go to REPLICA1
        Page<Book> booksPage = bookRepository.findAll(p);
        List<BookDTO> pContent = CollectionMapper.map(mapper, booksPage.getContent(), BookDTO.class);
        DbContextHolder.clearDbType(); // <----- clear ThreadLocal setting
        return new PageImpl<BookDTO>(pContent, p, booksPage.getTotalElements());
    }

    ...//other methods
}
Now we can control which DataSource will be used and forward requests as we please. Looks good!
...Or does it? First of all, those static method calls to a magical DbContextHolder really stick out. They look like they do not belong in the business logic. And they don't. Not only do they not communicate the purpose, but they seem fragile and error-prone (what about forgetting to clear the dbType?). And what if an exception is thrown between setDbType and clearDbType? We cannot just ignore it. We need to be absolutely sure that we reset the dbType, otherwise the Thread returned to the ThreadPool might be in a "broken" state, trying to write to a replica in the next call. So we need this:
@Transactional(readOnly = true)
public Page<BookDTO> getBooks(Pageable p) {
    Page<Book> booksPage;
    List<BookDTO> pContent;
    try {
        DbContextHolder.setDbType(DbType.REPLICA1); // <----- set ThreadLocal DataSource lookup key
        // all connections from here on will go to REPLICA1
        booksPage = bookRepository.findAll(p);
        pContent = CollectionMapper.map(mapper, booksPage.getContent(), BookDTO.class);
    } catch (Exception e) {
        throw new RuntimeException(e);
    } finally {
        DbContextHolder.clearDbType(); // <----- make sure the ThreadLocal setting is cleared
    }
    return new PageImpl<BookDTO>(pContent, p, booksPage.getTotalElements());
}
Yikes >_<! This definitely does not look like something I would like to put into every read-only method. Can we do better? Of course! This pattern of "do something at the beginning of a method, then do something at the end" should ring a bell. Aspects to the rescue!
Unfortunately this post has already gotten too long to cover the topic of custom aspects. You can follow up on the details of using aspects using this link.
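Still, to give an idea of where this ends up, here is a sketch of such an aspect (my sketch, not the article's code; it assumes AspectJ annotation support is enabled and that this aspect is ordered to run before the transaction interceptor):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.transaction.annotation.Transactional;

@Aspect
public class RouteToReplicaAspect {

    // Wraps every @Transactional method: read-only transactions are routed
    // to the replica, everything else to the master.
    @Around("@annotation(transactional)")
    public Object route(ProceedingJoinPoint pjp, Transactional transactional) throws Throwable {
        try {
            DbContextHolder.setDbType(transactional.readOnly() ? DbType.REPLICA1 : DbType.MASTER);
            return pjp.proceed();
        } finally {
            DbContextHolder.clearDbType(); // never leak the setting back to the thread pool
        }
    }
}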
I don't think that deciding that SELECTs should go to one DB (one slave) and CREATE/UPDATES should go to a different one (master) is a very good decision. The reasons are:
replication is not instantaneous, so you could CREATE something in the master DB and, as part of the same operation, SELECT it from the slave and notice that the data hasn't yet reached the slave.
if one of the slaves is down, you shouldn't be prevented from writing data in the master, because as soon as the slave is back up, its state will be synchronized with master. In your case though, your write operations are dependent on both master and slave.
How would you then define transactionality if you're in fact using 2 dbs?
I would advise using the master DB for all the WRITE flows, with all the instructions they might require (whether they are SELECTs, UPDATE or INSERTS). Then, the application dealing with the read-only flows can read from the slave DB.
I'd also advise having separate DAOs, each with its own methods, so that you'll have a clear distinction between read-only flows and write/update flows.
You could create two session factories and have a BaseDao wrapping the two factories (or the two HibernateTemplates if you use them), and use the get methods with one factory and the saveOrUpdate methods with the other.
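Something along these lines (a rough sketch; the names are made up):

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public abstract class BaseDao {

    private SessionFactory readSessionFactory;  // configured against the slave
    private SessionFactory writeSessionFactory; // configured against the master

    public void setReadSessionFactory(SessionFactory sf) {
        this.readSessionFactory = sf;
    }

    public void setWriteSessionFactory(SessionFactory sf) {
        this.writeSessionFactory = sf;
    }

    protected Session readSession() {
        return readSessionFactory.getCurrentSession(); // for get/find methods
    }

    protected Session writeSession() {
        return writeSessionFactory.getCurrentSession(); // for saveOrUpdate methods
    }
}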
Try this way : https://github.com/kwon37xi/replication-datasource
It works nicely and is very easy to implement, without any extra annotations or code; it requires only @Transactional(readOnly=true|false).
I have been using this solution with Hibernate (JPA), Spring JdbcTemplate, and iBatis.
You can use DDAL to implement writing to the master database and reading from the slave database in a DefaultDDRDataSource, without modifying your DAOs; what's more, DDAL provides load balancing for multi-slave databases. It doesn't rely on Spring or Hibernate. There is a demo project showing how to use it: https://github.com/hellojavaer/ddal-demos - demo1 is exactly the scenario you described.

Can I replace a Spring bean definition at runtime?

Consider the following scenario. I have a Spring application context with a bean whose properties should be configurable, think DataSource or MailSender. The mutable application configuration is managed by a separate bean, let's call it configuration.
An administrator can now change the configuration values, like email address or database URL, and I would like to re-initialize the configured bean at runtime.
Assume that I can't just simply modify the property of the configurable bean above (e.g. created by FactoryBean or constructor injection) but have to recreate the bean itself.
Any thoughts on how to achieve this? I'd be glad to receive advice on how to organize the whole configuration thing as well. Nothing is fixed. :-)
EDIT
To clarify things a bit: I am not asking how to update the configuration or how to inject static configuration values. I'll try an example:
<beans>
    <util:map id="configuration">
        <!-- initial configuration -->
    </util:map>

    <bean id="constructorInjectedBean" class="Foo">
        <constructor-arg value="#{configuration['foobar']}" />
    </bean>

    <bean id="configurationService" class="ConfigurationService">
        <property name="configuration" ref="configuration" />
    </bean>
</beans>
So there's a bean constructorInjectedBean that uses constructor injection. Imagine that the construction of the bean is very expensive, so using a prototype scope or a factory proxy is not an option - think DataSource.
What I want is that every time the configuration is updated (via configurationService), the bean constructorInjectedBean is recreated and re-injected into the application context and its dependent beans.
We can safely assume that constructorInjectedBean is using an interface so proxy magic is indeed an option.
I hope to have made the question a little bit clearer.
Here is how I have done it in the past: running services which depend on configuration that can be changed on the fly implement a lifecycle interface, IRefreshable:
public interface IRefreshable {
    // Refresh the service, having it apply its new values.
    public void refresh(String filter);

    // The service must decide if it wants a cache refresh based on the refresh message filter.
    public boolean requiresRefresh(String filter);
}
Controllers (or services) which can modify a piece of configuration broadcast to a JMS topic that the configuration has changed (supplying the name of the configuration object). A message driven bean then invokes the IRefreshable interface contract on all beans which implement IRefreshable.
The nice thing with spring is that you can automatically detect any service in your application context that needs to be refreshed, removing the need to explicitly configure them:
public class MyCacheSynchService implements InitializingBean, ApplicationContextAware {

    private ApplicationContext m_appCtx;
    private final List<IRefreshable> m_refreshableServices = new ArrayList<IRefreshable>();

    public void setApplicationContext(ApplicationContext appCtx) {
        m_appCtx = appCtx;
    }

    public void afterPropertiesSet() throws Exception {
        Map<String, ?> refreshableServices = m_appCtx.getBeansOfType(IRefreshable.class);
        for (Map.Entry<String, ?> entry : refreshableServices.entrySet()) {
            Object beanRef = entry.getValue();
            if (beanRef instanceof IRefreshable) {
                m_refreshableServices.add((IRefreshable) beanRef);
            }
        }
    }
}
This approach works particularly well in a clustered application where one of many app servers might change the configuration, which all then need to be aware of. If you want to use JMX as the mechanism for triggering the changes, your JMX bean can then broadcast to the JMS topic when any of its attributes are changed.
I can think of a 'holder bean' approach (essentially a decorator), where the holder bean delegates to holdee, and it's the holder bean which is injected as a dependency into other beans. Nobody else has a reference to holdee but the holder. Now, when the holder bean's config is changed, it recreates the holdee with this new config and starts delegating to it.
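A minimal sketch of the idea (the Service interface and the factory are made-up names; constructing the holdee is where the expensive work happens):

import java.util.Properties;

interface Service {
    String process(String input);
}

public class ServiceHolder implements Service {

    private volatile Service holdee; // nobody else ever holds a reference to this

    public ServiceHolder(Service initial) {
        this.holdee = initial;
    }

    // Called by the configuration machinery whenever the config changes:
    // build a fresh holdee with the new values and swap it in.
    public void reconfigure(Properties newConfig) {
        this.holdee = ServiceFactory.create(newConfig); // hypothetical factory
    }

    public String process(String input) {
        return holdee.process(input); // plain delegation
    }
}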
You should have a look at JMX. Spring also provides support for this.
Spring 2.0.x
Spring 2.5.x
Spring 3.0.x
Further updated answer to cover scripted bean
Another approach supported by Spring 2.5.x+ is the scripted bean. You can use a variety of languages for your script - BeanShell is probably the most intuitive given that it has the same syntax as Java, but it does require some external dependencies. The examples here, however, are in Groovy.
Section 24.3.1.2 of the Spring Documentation covers how to configure this, but here are some salient excerpts illustrating the approach which I've edited to make them more applicable to your situation:
<beans>
    <!-- This bean is now 'refreshable' due to the presence of the 'refresh-check-delay'
         attribute, which switches refreshing on with 5 seconds between checks -->
    <lang:groovy id="messenger"
                 refresh-check-delay="5000"
                 script-source="classpath:Messenger.groovy">
        <lang:property name="message" value="defaultMessage" />
    </lang:groovy>

    <bean id="service" class="org.example.DefaultService">
        <property name="messenger" ref="messenger" />
    </bean>
</beans>
With the Groovy script looking like this:
package org.example

class GroovyMessenger implements Messenger {

    private String message = "anotherProperty";

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message
    }
}
As the system administrator wants to make changes then they (or you) can edit the contents of the script appropriately. The script is not part of the deployed application and can reference a known file location (or one that is configured through a standard PropertyPlaceholderConfigurer during startup).
Although the example uses a Groovy class, you could have the class execute code that reads a simple properties file. In that manner, you never edit the script directly, just touch it to change the timestamp. That action then triggers the reload, which in turn triggers the refresh of properties from the (updated) properties file, which finally updates the values within the Spring context and off you go.
The documentation does point out that this technique doesn't work for constructor-injection, but maybe you can work around that.
Updated answer to cover dynamic property changes
Quoting from this article, which provides full source code, one approach is:
* a factory bean that detects file system changes
* an observer pattern for Properties, so that file system changes can be propagated
* a property placeholder configurer that remembers where which placeholders were used, and updates singleton beans’ properties
* a timer that triggers the regular check for changed files
The observer pattern is implemented by the interfaces and classes ReloadableProperties, ReloadablePropertiesListener, PropertiesReloadedEvent, and ReloadablePropertiesBase. None of them are especially exciting, just normal listener handling. The class DelegatingProperties serves to transparently exchange the current properties when properties are updated. We only update the whole property map at once, so that the application can avoid inconsistent intermediate states (more on this later).

Now the ReloadablePropertiesFactoryBean can be written to create a ReloadableProperties instance (instead of a Properties instance, as the PropertiesFactoryBean does). When prompted to do so, the RPFB checks file modification times, and if necessary, updates its ReloadableProperties. This triggers the observer pattern machinery.

In our case, the only listener is the ReloadingPropertyPlaceholderConfigurer. It behaves just like a standard Spring PropertyPlaceholderConfigurer, except that it tracks all usages of placeholders. Now when properties are reloaded, all usages of each modified property are found, and the properties of those singleton beans are assigned again.
Original answer below covering static property changes:
Sounds like you just want to inject external properties into your Spring context. The PropertyPlaceholderConfigurer is designed for this purpose:
<!-- Property configuration (if required) -->
<bean id="serverProperties" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <!-- Identical properties in later files overwrite earlier ones in this list -->
            <value>file:/some/admin/location/application.properties</value>
        </list>
    </property>
</bean>
You then reference the external properties with Ant-syntax placeholders (which can be nested, if you want, from Spring 2.5.5 onwards):
<bean id="example" class="org.example.DataSource">
<property name="password" value="${password}"/>
</bean>
You then ensure that the application.properties file is only accessible to the admin user and the user running the application.
Example application.properties:
password=Aardvark
Or you could use the approach from this similar question and hence also my solution:
The approach is to have beans that are configured via property files and the solution is to either
refresh the entire applicationContext (automatically using a scheduled task or manually using JMX) when properties have changed or
use a dedicated property provider object to access all properties. This property provider will keep checking the properties files for modification. For beans where prototype-based property lookup is impossible, register a custom event that your property provider will fire when it finds an updated property file. Your beans with complicated lifecycles will need to listen for that event and refresh themselves.
You can create a custom scope called "reconfigurable" into the ApplicationContext. It creates and caches instances of all beans in this scope. On a configuration change it clears the cache and re-creates the beans on first access with the new configuration. For this to work you need to wrap all instances of reconfigurable beans into an AOP scoped proxy, and access the configuration values with Spring-EL: put a map called config into the ApplicationContext and access the configuration like #{ config['key'] }.
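A sketch of what such a scope could look like (registration via CustomScopeConfigurer and the AOP scoped proxies are omitted here):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

public class ReconfigurableScope implements Scope {

    private final Map<String, Object> cache = new ConcurrentHashMap<String, Object>();

    public Object get(String name, ObjectFactory<?> objectFactory) {
        Object bean = cache.get(name);
        if (bean == null) {
            bean = objectFactory.getObject(); // built with the current config values
            cache.put(name, bean);
        }
        return bean;
    }

    public Object remove(String name) {
        return cache.remove(name);
    }

    // Call this when the configuration changes; the scoped proxies will
    // recreate the beans on next access.
    public void reconfigure() {
        cache.clear();
    }

    public void registerDestructionCallback(String name, Runnable callback) {
        // destruction callbacks are ignored in this sketch
    }

    public Object resolveContextualObject(String key) {
        return null;
    }

    public String getConversationId() {
        return null;
    }
}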
This is not something I have tried; I am just trying to provide pointers.
Assuming your application context is a subclass of AbstractRefreshableApplicationContext (for example XmlWebApplicationContext or ClassPathXmlApplicationContext), AbstractRefreshableApplicationContext.getBeanFactory() will give you an instance of ConfigurableListableBeanFactory. Check if it is an instance of BeanDefinitionRegistry; if so, you can call the registerBeanDefinition method. This approach will be tightly coupled with the Spring implementation.
Check the code of AbstractRefreshableApplicationContext and DefaultListableBeanFactory (this is the implementation you get when you call AbstractRefreshableApplicationContext.getBeanFactory()).
Option 1:
Inject the configuration bean into the DataSource or MailSender, and always get the configurable values from the configuration bean within these beans.
Inside the configuration bean, run a thread to periodically read the externally configurable properties (from a file, etc.). This way the configuration bean will refresh itself after the admin has changed the properties, and the DataSource will get the updated values automatically.
You need not actually implement the "thread" yourself - see: http://commons.apache.org/configuration/userguide/howto_filebased.html#Automatic_Reloading
Option 2 (bad, I think, but maybe not - it depends on the use case):
Always create new beans for beans of type DataSource / MailSender, using the prototype scope. In the init of the bean, read the properties afresh.
Option 3:
I think @mR_fr0g's suggestion of using JMX might not be a bad idea. What you could do is:
* expose your configuration bean as an MBean (see http://static.springsource.org/spring/docs/2.5.x/reference/jmx.html)
* ask your admin to change the configuration properties on the MBean (or provide an interface in the bean to trigger property updates from their source)
* have the MBean (a new piece of Java code that you will need to write) keep references to the beans you want to change / inject the changed properties into; this should be simple (via setter injection or runtime fetch of bean names/classes)
* when a property on the MBean is changed (or triggered), it must call the appropriate setters on the respective beans. That way, your legacy code does not change and you can still manage runtime property changes.
HTH!
You may want to have a look at the Spring Inspector, a pluggable component that provides programmatic access to any Spring-based application at run-time. You can use JavaScript to change configurations or manage the application behaviour at run-time.
Here is a nice idea: write your own PlaceholderConfigurer that tracks the usage of properties and changes them whenever a configuration change occurs. This has two disadvantages, though:
It does not work with constructor injection of property values.
You can get race conditions if the reconfigured bean receives a changed configuration while it is processing some stuff.
My solution was to copy the original object. First I created an interface:
/**
 * Allows updating data on some object.
 * An alternative to {@link Cloneable} when you cannot
 * replace the original pointer. Ex.: beans
 * @param <T> type of object
 */
public interface Updateable<T>
{
    /**
     * Import data from another object
     * @param originalObject object with the original data
     */
    public void copyObject(T originalObject);
}
To ease the implementation of the function, first create a constructor with all fields, so the IDE can help you a bit. Then you can make a copy constructor that uses the same function, Updateable#copyObject(T originalObject). You can also reuse the code of the constructor created by the IDE for the function you have to implement:
public class SettingsDTO implements Cloneable, Updateable<SettingsDTO>
{
    private static final Logger LOG = LoggerFactory.getLogger(SettingsDTO.class);

    @Size(min = 3, max = 30)
    private String id;

    @Size(min = 3, max = 30)
    @NotNull
    private String name;

    @Size(min = 3, max = 100)
    @NotNull
    private String description;

    @Max(100)
    @Min(5)
    @NotNull
    private Integer pageSize;

    @NotNull
    private String dateFormat;

    public SettingsDTO()
    {
    }

    public SettingsDTO(String id, String name, String description, Integer pageSize, String dateFormat)
    {
        this.id = id;
        this.name = name;
        this.description = description;
        this.pageSize = pageSize;
        this.dateFormat = dateFormat;
    }

    public SettingsDTO(SettingsDTO original)
    {
        copyObject(original);
    }

    @Override
    public void copyObject(SettingsDTO originalObject)
    {
        this.id = originalObject.id;
        this.name = originalObject.name;
        this.description = originalObject.description;
        this.pageSize = originalObject.pageSize;
        this.dateFormat = originalObject.dateFormat;
    }
}
I used it in a Controller for updating the current settings for the app:
if (bindingResult.hasErrors())
{
    model.addAttribute("settingsData", newSettingsData);
    model.addAttribute(Templates.MSG_ERROR, "The entered data has errors");
}
else
{
    synchronized (currentSettingsData)
    {
        currentSettingsData.copyObject(newSettingsData);
        redirectAttributes.addFlashAttribute(Templates.MSG_SUCCESS, "The system configuration has been updated successfully");
        return String.format("redirect:/%s", getDao().getPath());
    }
}
So currentSettingsData, which holds the configuration of the application, will have the updated values from newSettingsData. This method allows updating any bean without high complexity.

Spring, #Transactional and Hibernate Lazy Loading

I'm using Spring + Hibernate. All my Hibernate DAOs use the sessionFactory directly.
I have an application layer -> service layer -> DAO layer, and all collections are lazily loaded.
So the problem is that sometimes in the application layer (which contains the GUI/Swing) I load an entity using a service-layer method (annotated with @Transactional), and I want to use a lazily loaded property of this object, but obviously the session is already closed.
What is the best way to resolve this problem?
EDIT
I am trying to use a MethodInterceptor; my idea is to write an AroundAdvice for all my entities and use an annotation, so for example:
// Custom annotation, says that a session is required for this method
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface SessionRequired {
}

// An AroundAdvice to intercept method calls
public class SessionInterceptor implements MethodInterceptor {

    public Object invoke(MethodInvocation mi) throws Throwable {
        boolean sessionRequired = mi.getMethod().isAnnotationPresent(SessionRequired.class);
        // Begin and commit session only if @SessionRequired
        if (sessionRequired) {
            // begin transaction here
        }
        Object ret = mi.proceed();
        if (sessionRequired) {
            // commit transaction here
        }
        return ret;
    }
}
// An example of an entity
@Entity
public class Customer implements Serializable {

    @Id
    Long id;

    @OneToMany
    List<Order> orders; // this is a lazy collection

    @SessionRequired
    public List<Order> getOrders() {
        return orders;
    }
}
// And finally in the application layer...
public void foo() {
    // Load customer by id; getCustomer is annotated with @Transactional,
    // so this is a lazy load
    Customer customer = customerService.getCustomer(1);
    // Get orders; my interceptor opens and closes the session for me... I hope...
    List<Order> orders = customer.getOrders();
    // Finally, use the orders
}
Do you think this can work?
The problem is: how do I register this interceptor for all my entities without doing it in an XML file?
Is there a way to do it with annotations?
Hibernate recently introduced fetch profiles, which (in addition to performance tuning) are ideal for solving issues like this. They allow you to choose, at runtime, between different loading and initialization strategies.
http://docs.jboss.org/hibernate/core/3.5/reference/en/html/performance.html#performance-fetching-profiles
Edit (added section on how to set the fetch profile using an interceptor):
Before you get started: check that fetch profiles will actually work for you. I haven't used them myself, and I see that they are currently limited to join fetches. Before you waste time implementing and wiring up the interceptor, try setting the fetch profile manually and verify that it actually solves your problem.
There are many ways to set up interceptors in Spring (according to preference), but the most straightforward way would be to implement a MethodInterceptor (see http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop-api.html#aop-api-advice-around). Let it have a setter for the fetch profile you want and a setter for the Hibernate SessionFactory:
public class FetchProfileInterceptor implements MethodInterceptor {

    private SessionFactory sessionFactory;
    private String fetchProfile;

    ... setters ...

    public Object invoke(MethodInvocation invocation) throws Throwable {
        // The transaction interceptor has already opened the session, so this returns it.
        Session s = sessionFactory.getCurrentSession();
        s.enableFetchProfile(fetchProfile);
        try {
            return invocation.proceed();
        } finally {
            s.disableFetchProfile(fetchProfile);
        }
    }
}
Lastly, enable the interceptor in the Spring config. This can be done in several ways, and you probably already have an AOP setup that you can add it to. See http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-schema.
If you're new to AOP, I'd suggest trying the "old" ProxyFactory way first (http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop-api.html#aop-api-proxying-intf) because it's easier to understand how it works. Here's some sample XML to get you started:
<bean id="fetchProfileInterceptor" class="x.y.zFetchProfileInterceptor">
<property name="sessionFactory" ref="sessionFactory"/>
<property name="fetchProfile" ref="gui-profile"/>
</bean>
<bean id="businessService" class="x.y.x.BusinessServiceImpl">
<property name="dao" .../>
...
</bean>
<bean id="serviceForSwinGUI"
class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="proxyInterfaces" value="x.y.z.BusinessServiceInterface/>
<property name="target" ref="businessService"/>
<property name="interceptorNames">
<list>
<value>existingTransactionInterceptorBeanName</value>
<value>fetchProfileInterceptor</value>
</list>
</property>
</bean>
Create a method in the service layer that returns the lazy-loaded object for that entity
Change the fetching to eager :)
If possible, extend your transaction into the application layer
(just while we wait for someone who knows what they are talking about)
You need to rework your session management, unfortunately. This is a major problem when dealing with Hibernate and Spring, and it's a gigantic hassle.
Essentially, what you need is for your application layer to create a new session when it gets your Hibernate object, and to manage that and close the session properly. This stuff is tricky, and non-trivial; one of the best ways to manage this is to mediate the sessions through a factory available from your application layer, but you still need to be able to end the session properly, so you have to be aware of the lifecycle needs of your data.
This stuff is the most common complaint about using Spring and Hibernate in this way; really, the only way to manage it is to get a good handle on exactly what your data lifecycles are.

Short way of making many beans depend-on one bean

When using database migrations, I obviously want none of the DAOs to be usable before the migrations are run.
At the moment I'm declaring a lot of DAOs, all having a depends-on="databaseMigrator" attribute. I find this troubling, especially since it's error-prone.
Is there a more compact way of doing this?
Notes:
the depends-on attribute is not 'inherited' from parent beans;
I am not using Hibernate or JPA, so I can't make the sessionFactory bean depend on the migrator.
You could try writing a class that implements the BeanFactoryPostProcessor interface to automatically register the dependencies for you:
Warning: This class has not been compiled.
public class DatabaseMigratorDependencyResolver implements BeanFactoryPostProcessor {

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        String[] beanNames = beanFactory.getBeanDefinitionNames();
        for (String beanName : beanNames) {
            BeanDefinition beanDefinition = beanFactory.getBeanDefinition(beanName);
            // Your job is here:
            // Feel free to make use of the methods available from the BeanDefinition class
            // (http://static.springframework.org/spring/docs/2.5.x/api/org/springframework/beans/factory/config/BeanDefinition.html)
            boolean isDependentOnDatabaseMigrator = ...;
            if (isDependentOnDatabaseMigrator) {
                beanFactory.registerDependentBean("databaseMigrator", beanName);
            }
        }
    }
}
You could then include a bean of this class alongside all your other beans.
<bean class="DatabaseMigratorDependencyResolver"/>
Spring will automatically run it before it starts initializing the rest of the beans.
I do this on application start-up. The schema version that the application requires is compiled into the application as part of the build process. It is also stored in the database and updated by the database migration scripts.
On application start-up, the app checks that the schema version in the database is what it expects and if not, aborts immediately with a clear error message.
In a normal Java program, this happens right at the start of the main method.
In a webapp, it's performed by the app's ServletContextListener and is the first thing it does when the servlet context is created.
That's saved my (apps') bacon several times.
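In outline, the webapp variant looks something like this (a sketch; the JNDI name and the schema_version table are examples from my setup, not a prescription):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.sql.DataSource;

public class SchemaVersionCheckListener implements ServletContextListener {

    // Compiled into the application as part of the build.
    private static final int EXPECTED_SCHEMA_VERSION = 42;

    public void contextInitialized(ServletContextEvent sce) {
        try {
            DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/appDS");
            Connection con = ds.getConnection();
            try {
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery("SELECT version FROM schema_version");
                if (!rs.next() || rs.getInt(1) != EXPECTED_SCHEMA_VERSION) {
                    throw new IllegalStateException("Expected schema version "
                            + EXPECTED_SCHEMA_VERSION + " - run the migrations first");
                }
            } finally {
                con.close();
            }
        } catch (Exception e) {
            // abort start-up with a clear error message
            throw new RuntimeException("Schema version check failed", e);
        }
    }

    public void contextDestroyed(ServletContextEvent sce) {
    }
}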
I ended up creating a simple ForwardingDataSource class which appears in the context files as:
<bean id="dataSource" class="xxx.ForwardingDataSource" depends-on="databaseMigrator">
<property name="delegate">
<!-- real data source here -->
</property>
</bean>
I find it less elegant than Adam Paynter's solution, but clearer.
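For reference, the delegating class itself is mostly boilerplate - every javax.sql.DataSource method simply forwards to the delegate (a sketch; on Java 7 you would also need to forward getParentLogger):

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ForwardingDataSource implements DataSource {

    private DataSource delegate;

    public void setDelegate(DataSource delegate) {
        this.delegate = delegate;
    }

    public Connection getConnection() throws SQLException {
        return delegate.getConnection();
    }

    public Connection getConnection(String username, String password) throws SQLException {
        return delegate.getConnection(username, password);
    }

    public PrintWriter getLogWriter() throws SQLException {
        return delegate.getLogWriter();
    }

    public void setLogWriter(PrintWriter out) throws SQLException {
        delegate.setLogWriter(out);
    }

    public void setLoginTimeout(int seconds) throws SQLException {
        delegate.setLoginTimeout(seconds);
    }

    public int getLoginTimeout() throws SQLException {
        return delegate.getLoginTimeout();
    }

    public <T> T unwrap(Class<T> iface) throws SQLException {
        return delegate.unwrap(iface);
    }

    public boolean isWrapperFor(Class<?> iface) throws SQLException {
        return delegate.isWrapperFor(iface);
    }
}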
