How to set up Hibernate to read/write to different datasources? - java

Using Spring and Hibernate, I want to write to one MySQL master database and read from one or more replicated slaves in a cloud-based Java webapp.
I can't find a solution that is transparent to the application code. I don't really want to have to change my DAOs to manage different SessionFactories, as that seems really messy and couples the code with a specific server architecture.
Is there any way of telling Hibernate to automatically route CREATE/UPDATE queries to one datasource and SELECT queries to another? I don't want to do any sharding or routing based on object type - just route different types of queries to different datasources.

An example can be found here: https://github.com/afedulov/routing-data-source.
Spring provides a variation of DataSource called AbstractRoutingDataSource. It can be used in place of standard DataSource implementations and enables a mechanism to determine which concrete DataSource to use for each operation at runtime. All you need to do is extend it and provide an implementation of the abstract determineCurrentLookupKey method. This is the place to implement your custom logic to determine the concrete DataSource. The returned Object serves as a lookup key. It is typically a String or an Enum, used as a qualifier in the Spring configuration (details will follow).
package website.fedulov.routing;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class RoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return DbContextHolder.getDbType();
    }
}
You might be wondering what that DbContextHolder object is and how it knows which DataSource identifier to return. Keep in mind that the determineCurrentLookupKey method will be called whenever the TransactionManager requests a connection. It is important to remember that each transaction is "associated" with a separate thread; more precisely, the TransactionManager binds the Connection to the current thread. Therefore, in order to dispatch different transactions to different target DataSources, we have to make sure that every thread can reliably identify which DataSource it is meant to use. This makes it natural to use a ThreadLocal variable to bind a specific DataSource to a thread, and hence to a transaction. This is how it is done:
public enum DbType {
    MASTER,
    REPLICA1,
}

public class DbContextHolder {

    private static final ThreadLocal<DbType> contextHolder = new ThreadLocal<DbType>();

    public static void setDbType(DbType dbType) {
        if (dbType == null) {
            throw new NullPointerException();
        }
        contextHolder.set(dbType);
    }

    public static DbType getDbType() {
        return contextHolder.get();
    }

    public static void clearDbType() {
        contextHolder.remove();
    }
}
As you can see, you can also use an enum as the key, and Spring will take care of resolving it correctly based on its name. The associated DataSource configuration and keys might look like this:
....
<bean id="dataSource" class="website.fedulov.routing.RoutingDataSource">
  <property name="targetDataSources">
    <map key-type="website.fedulov.routing.DbType">
      <entry key="MASTER" value-ref="dataSourceMaster"/>
      <entry key="REPLICA1" value-ref="dataSourceReplica"/>
    </map>
  </property>
  <property name="defaultTargetDataSource" ref="dataSourceMaster"/>
</bean>

<bean id="dataSourceMaster" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
  <property name="url" value="${db.master.url}"/>
  <property name="username" value="${db.username}"/>
  <property name="password" value="${db.password}"/>
</bean>

<bean id="dataSourceReplica" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
  <property name="url" value="${db.replica.url}"/>
  <property name="username" value="${db.username}"/>
  <property name="password" value="${db.password}"/>
</bean>
At this point you might find yourself doing something like this:
@Service
public class BookService {

    private final BookRepository bookRepository;
    private final Mapper mapper;

    @Inject
    public BookService(BookRepository bookRepository, Mapper mapper) {
        this.bookRepository = bookRepository;
        this.mapper = mapper;
    }

    @Transactional(readOnly = true)
    public Page<BookDTO> getBooks(Pageable p) {
        DbContextHolder.setDbType(DbType.REPLICA1); // <----- set ThreadLocal DataSource lookup key
        // all connections from here on will go to REPLICA1
        Page<Book> booksPage = bookRepository.findAll(p);
        List<BookDTO> pContent = CollectionMapper.map(mapper, booksPage.getContent(), BookDTO.class);
        DbContextHolder.clearDbType(); // <----- clear ThreadLocal setting
        return new PageImpl<BookDTO>(pContent, p, booksPage.getTotalElements());
    }

    ...//other methods
Now we can control which DataSource will be used and forward requests as we please. Looks good!
...Or does it? First of all, those static method calls to a magical DbContextHolder really stick out. They look like they do not belong in the business logic, and they don't. Not only do they fail to communicate the purpose, they are fragile and error-prone (what if you forget to clear the dbType?). And what if an exception is thrown between setDbType and clearDbType? We cannot just ignore that possibility. We need to be absolutely sure that the dbType is reset, otherwise the thread returned to the thread pool might be left in a "broken" state, trying to write to a replica on the next call. So we need this:
@Transactional(readOnly = true)
public Page<BookDTO> getBooks(Pageable p) {
    List<BookDTO> pContent;
    Page<Book> booksPage;
    try {
        DbContextHolder.setDbType(DbType.REPLICA1); // <----- set ThreadLocal DataSource lookup key
        // all connections from here on will go to REPLICA1
        booksPage = bookRepository.findAll(p);
        pContent = CollectionMapper.map(mapper, booksPage.getContent(), BookDTO.class);
    } catch (Exception e) {
        throw new RuntimeException(e);
    } finally {
        DbContextHolder.clearDbType(); // <----- make sure the ThreadLocal setting is cleared
    }
    return new PageImpl<BookDTO>(pContent, p, booksPage.getTotalElements());
}
Yikes >_< ! This definitely does not look like something I would like to put into every read-only method. Can we do better? Of course! This pattern of "do something at the beginning of a method, then do something at the end" should ring a bell. Aspects to the rescue!
Unfortunately this post has already gotten too long to cover the topic of custom aspects. You can follow up on the details of using aspects using this link.
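For reference, here is a minimal sketch of what such an aspect could look like using Spring AOP with AspectJ annotations. This is my own illustration, not code from the linked post: the class name and the @Order value are arbitrary choices, and the ordering matters because the aspect must set the lookup key before the transaction advice acquires a connection.

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.core.annotation.Order;
import org.springframework.transaction.annotation.Transactional;

@Aspect
@Order(0) // must run BEFORE the transaction advice, which acquires the connection
public class DataSourceRoutingAspect {

    @Around("@annotation(transactional)")
    public Object route(ProceedingJoinPoint pjp, Transactional transactional) throws Throwable {
        // Read-only methods go to the replica, everything else to the master
        DbContextHolder.setDbType(transactional.readOnly() ? DbType.REPLICA1 : DbType.MASTER);
        try {
            return pjp.proceed();
        } finally {
            DbContextHolder.clearDbType(); // never leak the setting back into the thread pool
        }
    }
}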

I don't think that routing SELECTs to one DB (a slave) and CREATEs/UPDATEs to a different one (the master) is a very good decision. The reasons are:
replication is not instantaneous, so you could CREATE something in the master DB and, as part of the same operation, SELECT it from the slave only to find that the data hasn't yet reached the slave.
if one of the slaves is down, you shouldn't be prevented from writing data to the master, because as soon as the slave is back up its state will be synchronized with the master. In your case, though, your write operations would depend on both the master and the slave.
How would you then define transactionality if you're in fact using 2 DBs?
I would advise using the master DB for all the WRITE flows, with all the statements they might require (whether they are SELECTs, UPDATEs or INSERTs). Then the application dealing with the read-only flows can read from the slave DB.
I'd also advise having separate DAOs, each with its own methods, so that you'll have a clear distinction between read-only flows and write/update flows.

You could create two session factories and have a BaseDao wrapping the two factories (or the two HibernateTemplates, if you use them), and use the get methods with one factory and the saveOrUpdate methods with the other.
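A rough sketch of that idea, assuming two Spring-managed SessionFactory beans (all names here are illustrative, and each factory needs its own transaction management):

import java.io.Serializable;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public abstract class BaseDao {

    private SessionFactory masterSessionFactory;  // writes go here
    private SessionFactory replicaSessionFactory; // reads go here

    public void setMasterSessionFactory(SessionFactory sf) { this.masterSessionFactory = sf; }
    public void setReplicaSessionFactory(SessionFactory sf) { this.replicaSessionFactory = sf; }

    protected <T> T get(Class<T> type, Serializable id) {
        Session session = replicaSessionFactory.getCurrentSession();
        return type.cast(session.get(type, id));
    }

    protected void saveOrUpdate(Object entity) {
        masterSessionFactory.getCurrentSession().saveOrUpdate(entity);
    }
}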

Try this way : https://github.com/kwon37xi/replication-datasource
It works nicely and is very easy to implement, without any extra annotations or code; it requires only @Transactional(readOnly=true|false).
I have been using this solution with Hibernate (JPA), Spring JdbcTemplate and iBatis.

You can use DDAL to write to the master database and read from slave databases with a DefaultDDRDataSource, without modifying your DAOs. What's more, DDAL provides load balancing for multiple slave databases. It doesn't rely on Spring or Hibernate. There is a demo project that shows how to use it: https://github.com/hellojavaer/ddal-demos, and demo1 covers exactly the scenario you describe.

Related

implementing multitenancy in hibernate 4

I am trying to build multitenancy based on a unique schema for each client.
So there would be a single datasource to Oracle in a WebLogic server.
The datasource would have a user which has synonyms/grants to all client schemas.
Let us ignore whether this is good, bad or horrible design - I just want to find a way to get it to work.
I am defining the datasource as part of my hibernate.cfg.xml file.
The SessionFactory will get created from this hibernate.cfg.xml.
From reading on the internet, I will have to add additional properties to my hibernate.cfg.xml:
<session-factory>
  <property name="connection.datasource">MYSQLDS</property>
  <property name="default_schema">testpage</property>
  <!-- multitenant properties start here -->
  <property name="multi_tenant_connection_provider">multiTenantConnectionProvider</property>
  <property name="multiTenancy">SCHEMA</property>
  <property name="tenant_identifier_resolver">CurrentTenantIdentifierResolverImpl</property>
</session-factory>
Now here is the part that I don't understand:
It seems that for CurrentTenantIdentifierResolverImpl I will implement a custom class that implements the interface:
import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class SchemaResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        return "master";
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return false;
    }
}
So I am quite confused about how this class and its methods get invoked. I do understand that it is used to determine which schema to use, based on some identifier unique to a client - à la the user's HTTP session?
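For what it's worth, a common pattern (an assumption on my part, not something I have seen spelled out anywhere) is to back the resolver with a ThreadLocal that a servlet filter or the login code populates per request, roughly like this:

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

// Hypothetical sketch: something request-scoped calls TenantContext.setTenant(...),
// and Hibernate calls resolveCurrentTenantIdentifier() whenever it needs a
// connection for a new session.
class TenantContext {
    private static final ThreadLocal<String> TENANT = new ThreadLocal<String>();
    static void setTenant(String tenant) { TENANT.set(tenant); }
    static String getTenant() { return TENANT.get(); }
    static void clear() { TENANT.remove(); }
}

public class SchemaResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenant = TenantContext.getTenant();
        return tenant != null ? tenant : "master"; // fall back to the default schema
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return false;
    }
}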
The next part is the implementation of MultiTenantConnectionProvider, and again its methods, such as:
public Connection getConnection(String tenantIdentifier) throws SQLException {
RANT-START: This strangely starts looking like good old plain and simple JDBC :) so I am wondering why I have to use Hibernate :(
RANT-END
So, related to this method and the class - how do I consume it?
The frustrating part is the number of places where you see these two classes defined, but no explanation of how they are invoked.
Is there a complete end-to-end example of how to use these two classes, with a clear example of how they are consumed - not just the two standalone classes, please!

Insert external data into persistence.xml

I want my persistence.xml to set some of its properties dynamically, to be specific:
<property name="hibernate.connection.password" value="password"/>
<property name="hibernate.connection.username" value="username"/>
I can build a class that provides the data I need, but I don't know how to set the class up so that it works like this:
<property name="hibernate.connection.password" value="${my.clazz.pass}"/>
<property name="hibernate.connection.username" value="${my.clazz.user}"/>
I have tried to set the class up like this:
public class clazz {
    String pass;
    String user;

    public clazz() {
        //do stuff to set pass and user
    }

    //getter/setter
}
But that does not work. I haven't found a way here or on Google, but I have seen the ${my.clazz.smth} way several times.
So, how can I set that up? :)
Thanks in advance!
So, I resolved this a while ago but never posted the answer:
Anthony Accioly pointed me in the right direction.
I added this to the entityManagerFactory bean in my applicationContext.xml:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitPostProcessors">
<bean class="my.package.SetupDatabase">
</bean>
</property>
//the other stuff
</bean>
The corresponding class (in this case I use Hibernate):
package my.package;

public class SetupDatabase implements PersistenceUnitPostProcessor {

    private String username;
    private String password;
    private String dbserver;

    public SetupDatabase() {
        //do stuff to obtain needed information
    }

    public void postProcessPersistenceUnitInfo(MutablePersistenceUnitInfo pui) {
        pui.getProperties().setProperty("hibernate.connection.username", username);
        pui.getProperties().setProperty("hibernate.connection.password", password);
        pui.getProperties().setProperty("hibernate.connection.url", dbserver);
    }
}
This way the setup is done just once when starting the whole thing up, but the required data can be 'outsourced'.
Thanks again for pointing me in the right direction!
The value placeholder ${my.clazz.smth} that you refer to is usually read from a properties file as opposed to a class directly.
This is done using Spring's PropertyPlaceholderConfigurer.
Here is an example of a project which combines Hibernate & Spring.
If you really need to delay the configuration until runtime (e.g., to obtain the database credentials from an external source such as a Web service), you can do it with Hibernate's programmatic configuration API: Ejb3Configuration for legacy versions of Hibernate, or ServiceRegistryBuilder (4.x)...
But be warned that, to the best of my knowledge, there is no way to dynamically update the username and password of a persistence unit. You will have to build another EntityManagerFactory from a new Configuration instance (a quite expensive operation) every time you need to change its properties.
Anyway, unless you have a really good reason to, do not manage database credentials from your application; delegate that to a JNDI-bound DataSource instead.
Ejb3Configuration cfg = new Ejb3Configuration()
    // Add classes, other properties, etc
    .setProperty("hibernate.connection.password", "xxxx")
    .setProperty("hibernate.connection.username", "yyyy");
EntityManagerFactory emf = cfg.buildEntityManagerFactory();
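And as a sketch of the JNDI route recommended above (the JNDI name is a placeholder for whatever your container actually exposes):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public final class DataSourceLocator {

    // Look up a container-managed DataSource so credentials stay out of the application.
    public static DataSource lookup() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup("java:comp/env/jdbc/MyDataSource");
    }
}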

How to use Pooled Spring beans instead of Singleton ones?

For efficiency reasons, I am interested in limiting the number of threads that simultaneously use the beans of the Spring application context (I don't want an unlimited number of threads processing in my limited memory).
I have found here (Spring documentation) a way to achieve this by pooling the beans EJB-style, as follows:
Declare the target bean as scope "prototype".
Declare a Pool provider that will deliver a limited number of pooled "target" instances.
Declare a "ProxyFactoryBean" which function is not clear to me.
Here is the declaration of this beans:
<bean id="businessObjectTarget" class="com.mycompany.MyBusinessObject"
scope="prototype">
... properties omitted
</bean>
<bean id="poolTargetSource" class="org.springframework.aop.target.CommonsPoolTargetSource">
<property name="targetBeanName" value="businessObjectTarget"/>
<property name="maxSize" value="25"/>
</bean>
<bean id="businessObject" class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="targetSource" ref="poolTargetSource"/>
<property name="interceptorNames" value="myInterceptor"/>
</bean>
My problem is: when I declare another bean that should use pooled instances of "businessObjectTarget", how do I do it? I mean, when I try something like this:
<bean id="clientBean" class="com.mycompany.ClientOfTheBusinessObject">
<property name="businessObject" ref="WHAT TO PUT HERE???"/>
</bean>
What should be the value of the "ref"?
You cannot use properties to get instances of prototypes.
One option is to use lookup methods (see chapter 3.3.7.1).
Another option is to get your bean in code: make your com.mycompany.ClientOfTheBusinessObject implement the ApplicationContextAware interface and then call context.getBean("businessObject") to fetch the pooled proxy.
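A minimal sketch of that second option (bean names follow the XML above; MyBusinessObject and doSomething are placeholders):

import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

public class ClientOfTheBusinessObject implements ApplicationContextAware {

    private ApplicationContext context;

    public void setApplicationContext(ApplicationContext context) throws BeansException {
        this.context = context;
    }

    public void doWork() {
        // Fetch the pooled proxy by name; each invocation is served by a pooled instance.
        MyBusinessObject bo = (MyBusinessObject) context.getBean("businessObject");
        bo.doSomething();
    }
}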
Please note the name of the third bean in the Spring example: "businessObject".
This is the bean through which you are supposed to access the common pool.
For your case, if you need your own client bean, you may have it as follows.
In such a case the businessObject proxy is not required:
<bean id="businessObjectTarget" class="com.mycompany.MyBusinessObject"
scope="prototype">
... properties omitted
</bean>
<bean id="poolTargetSource" class="org.springframework.aop.target.CommonsPoolTargetSource">
<property name="targetBeanName" value="businessObjectTarget"/>
<property name="maxSize" value="25"/>
</bean>
<bean id="clientBean" class="com.mycompany.ClientOfTheBusinessObject">
<property name="poolTargetSource" ref="poolTargetSource"/>
</bean>
Java classes:-
public class ClientOfTheBusinessObject {

    private CommonsPoolTargetSource poolTargetSource;

    // getter and setter for poolTargetSource omitted

    public void methodToAccessCommonPool() throws Exception {
        // The following line gets the object from the pool. If there is nothing left
        // in the pool the thread will block. (The blocking can be replaced with an
        // exception by changing the properties of the CommonsPoolTargetSource bean.)
        MyBusinessObject mbo = (MyBusinessObject) poolTargetSource.getTarget();
        // Do whatever you want to do with mbo
        // The following line puts the object back into the pool
        poolTargetSource.releaseTarget(mbo);
    }
}
I'm pretty sure you can limit the number of simultaneous threads in a less convoluted way. Did you look at the Java concurrency API, specifically at Executors.newFixedThreadPool()?
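For example, a fixed-size executor caps the number of threads touching your (singleton) beans without any pooling machinery; the pool size and task body below are placeholders:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedPoolExample {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(25); // at most 25 concurrent workers
        for (int i = 0; i < 100; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    // invoke the (singleton) business object here
                }
            });
        }
        pool.shutdown();
    }
}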
I used Java configuration to construct a proxy over the interface, with pooling handled by Apache commons-pool, to achieve invocation-level pooling. I did it using annotation-based configuration:
I created my BusinessObject class as a POJO and annotated it this way:
#Component("businessObject")
#Scope("prototype")
public class BusinessObject { ... }
I gave it a specific name and marked it as prototype so that Spring doesn't create a singleton instance for it; every time the bean is required, Spring will create a new instance.
In my @Configuration class (or in the @SpringBootApplication class, if using Spring Boot) I created a CommonsPool2TargetSource instance to hold BusinessObject instances:
@Bean
public CommonsPool2TargetSource pooledTargetSource() {
    final CommonsPool2TargetSource commonsPoolTargetSource = new CommonsPool2TargetSource();
    commonsPoolTargetSource.setTargetBeanName("businessObject");
    commonsPoolTargetSource.setTargetClass(BusinessObject.class);
    commonsPoolTargetSource.setMaxSize(maxPoolSize);
    return commonsPoolTargetSource;
}
Here I'm indicating that the pool will hold BusinessObject instances. Notice that maxPoolSize is set to the maximum number of BusinessObject instances I want the pool to hold.
Finally, I did access my pooled instances this way:
@Autowired
private CommonsPool2TargetSource pooledTargetSource;

void someMethod() throws Exception {
    // First I retrieve one pooled BusinessObject instance
    BusinessObject businessObject = (BusinessObject) pooledTargetSource.getTarget();
    try {
        // Second, I do some logic using the BusinessObject instance
    } catch (SomePossibleException e) {
        // Catch and handle any potential error
    } finally {
        // Finally, after executing my business logic,
        // I release the BusinessObject instance so that it can be reused
        pooledTargetSource.releaseTarget(businessObject);
    }
}
It is very important to always release the BusinessObject borrowed from the pool, regardless of whether the business logic finished successfully or with an error. Otherwise the pool could run empty, with all instances borrowed and never released, and any further requests for instances will block forever.

Can I replace a Spring bean definition at runtime?

Consider the following scenario. I have a Spring application context with a bean whose properties should be configurable, think DataSource or MailSender. The mutable application configuration is managed by a separate bean, let's call it configuration.
An administrator can now change the configuration values, like email address or database URL, and I would like to re-initialize the configured bean at runtime.
Assume that I can't just simply modify the property of the configurable bean above (e.g. created by FactoryBean or constructor injection) but have to recreate the bean itself.
Any thoughts on how to achieve this? I'd be glad to receive advice on how to organize the whole configuration thing as well. Nothing is fixed. :-)
EDIT
To clarify things a bit: I am not asking how to update the configuration or how to inject static configuration values. I'll try an example:
<beans>
<util:map id="configuration">
<!-- initial configuration -->
</util:map>
<bean id="constructorInjectedBean" class="Foo">
<constructor-arg value="#{configuration['foobar']}" />
</bean>
<bean id="configurationService" class="ConfigurationService">
<property name="configuration" ref="configuration" />
</bean>
</beans>
So there's a bean constructorInjectedBean that uses constructor injection. Imagine the construction of the bean is very expensive so using a prototype scope or a factory proxy is not an option, think DataSource.
What I want is that every time the configuration is updated (via configurationService), the bean constructorInjectedBean is recreated and re-injected into the application context and into dependent beans.
We can safely assume that constructorInjectedBean is using an interface so proxy magic is indeed an option.
I hope to have made the question a little bit clearer.
Here is how I have done it in the past: running services which depend on configuration that can be changed on the fly implement a lifecycle interface, IRefreshable:
public interface IRefreshable {
    // Refresh the service, having it apply its new values.
    public void refresh(String filter);

    // The service must decide if it wants a cache refresh based on the refresh message filter.
    public boolean requiresRefresh(String filter);
}
Controllers (or services) which can modify a piece of configuration broadcast to a JMS topic that the configuration has changed (supplying the name of the configuration object). A message-driven bean then invokes the IRefreshable contract on all beans which implement IRefreshable.
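A sketch of what that message-driven piece might look like with plain JMS (wiring and registration omitted; all names are illustrative):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class ConfigChangeListener implements MessageListener {

    private final List<IRefreshable> services = new CopyOnWriteArrayList<IRefreshable>();

    public void register(IRefreshable service) {
        services.add(service);
    }

    public void onMessage(Message message) {
        try {
            String filter = ((TextMessage) message).getText(); // name of the changed configuration
            for (IRefreshable service : services) {
                if (service.requiresRefresh(filter)) {
                    service.refresh(filter);
                }
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}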
The nice thing with Spring is that you can automatically detect any service in your application context that needs to be refreshed, removing the need to explicitly configure them:
public class MyCacheSynchService implements InitializingBean, ApplicationContextAware {

    private ApplicationContext m_appCtx;
    private final List<IRefreshable> m_refreshableServices = new ArrayList<IRefreshable>();

    public void setApplicationContext(ApplicationContext appCtx) {
        m_appCtx = appCtx;
    }

    public void afterPropertiesSet() throws Exception {
        // collect every bean in the context which implements IRefreshable
        Map<String, ?> refreshableServices = m_appCtx.getBeansOfType(IRefreshable.class);
        for (Map.Entry<String, ?> entry : refreshableServices.entrySet()) {
            Object beanRef = entry.getValue();
            if (beanRef instanceof IRefreshable) {
                m_refreshableServices.add((IRefreshable) beanRef);
            }
        }
    }
}
This approach works particularly well in a clustered application where one of many app servers might change the configuration, which all then need to be aware of. If you want to use JMX as the mechanism for triggering the changes, your JMX bean can then broadcast to the JMS topic when any of its attributes are changed.
I can think of a 'holder bean' approach (essentially a decorator), where the holder bean delegates to the holdee, and it's the holder bean which is injected as a dependency into other beans. Nobody else has a reference to the holdee but the holder. Now, when the holder bean's config is changed, it recreates the holdee with the new config and starts delegating to it.
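A bare-bones sketch of the holder idea (MyService stands in for your configurable service interface):

// The holder is what gets injected everywhere; it swaps its delegate when the
// configuration changes, so dependents never hold a stale reference.
interface MyService {
    void doWork();
}

public class MyServiceHolder implements MyService {

    private volatile MyService delegate;

    public MyServiceHolder(MyService initial) {
        this.delegate = initial;
    }

    // Called by the configuration machinery after rebuilding the holdee.
    public void reconfigure(MyService freshlyBuilt) {
        this.delegate = freshlyBuilt;
    }

    public void doWork() {
        delegate.doWork();
    }
}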
You should have a look at JMX. Spring also provides support for this.
Spring 2.0.x
Spring 2.5.x
Spring 3.0.x
Further updated answer to cover scripted bean
Another approach, supported by Spring 2.5.x+, is that of the scripted bean. You can use a variety of languages for your script - BeanShell is probably the most intuitive given that it has the same syntax as Java, but it does require some external dependencies. The examples here are in Groovy.
Section 24.3.1.2 of the Spring Documentation covers how to configure this, but here are some salient excerpts illustrating the approach which I've edited to make them more applicable to your situation:
<beans>
  <!-- This bean is now 'refreshable' due to the presence of the 'refresh-check-delay'
       attribute, which switches refreshing on with 5 seconds between checks -->
  <lang:groovy id="messenger"
               refresh-check-delay="5000"
               script-source="classpath:Messenger.groovy">
    <lang:property name="message" value="defaultMessage" />
  </lang:groovy>

  <bean id="service" class="org.example.DefaultService">
    <property name="messenger" ref="messenger" />
  </bean>
</beans>
With the Groovy script looking like this:
package org.example

class GroovyMessenger implements Messenger {

    private String message = "anotherProperty";

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message
    }
}
When the system administrator wants to make changes, they (or you) can edit the contents of the script appropriately. The script is not part of the deployed application and can reference a known file location (or one that is configured through a standard PropertyPlaceholderConfigurer during startup).
Although the example uses a Groovy class, you could have the class execute code that reads a simple properties file. In that manner, you never edit the script directly, just touch it to change the timestamp. That action then triggers the reload, which in turn triggers the refresh of properties from the (updated) properties file, which finally updates the values within the Spring context and off you go.
The documentation does point out that this technique doesn't work for constructor-injection, but maybe you can work around that.
Updated answer to cover dynamic property changes
Quoting from this article, which provides full source code, one approach is:
* a factory bean that detects file system changes
* an observer pattern for Properties, so that file system changes can be propagated
* a property placeholder configurer that remembers where which placeholders were used, and updates singleton beans’ properties
* a timer that triggers the regular check for changed files
The observer pattern is implemented by the interfaces and classes ReloadableProperties, ReloadablePropertiesListener, PropertiesReloadedEvent, and ReloadablePropertiesBase. None of them are especially exciting, just normal listener handling. The class DelegatingProperties serves to transparently exchange the current properties when properties are updated. We only update the whole property map at once, so that the application can avoid inconsistent intermediate states (more on this later).

Now the ReloadablePropertiesFactoryBean can be written to create a ReloadableProperties instance (instead of a Properties instance, as the PropertiesFactoryBean does). When prompted to do so, the RPFB checks file modification times, and if necessary, updates its ReloadableProperties. This triggers the observer pattern machinery.

In our case, the only listener is the ReloadingPropertyPlaceholderConfigurer. It behaves just like a standard spring PropertyPlaceholderConfigurer, except that it tracks all usages of placeholders. Now when properties are reloaded, all usages of each modified property are found, and the properties of those singleton beans are assigned again.
Original answer below covering static property changes:
Sounds like you just want to inject external properties into your Spring context. The PropertyPlaceholderConfigurer is designed for this purpose:
<!-- Property configuration (if required) -->
<bean id="serverProperties" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="locations">
    <list>
      <!-- Identical properties in later files overwrite earlier ones in this list -->
      <value>file:/some/admin/location/application.properties</value>
    </list>
  </property>
</bean>
You then reference the external properties with Ant-syntax placeholders (which can be nested from Spring 2.5.5 onwards):
<bean id="example" class="org.example.DataSource">
<property name="password" value="${password}"/>
</bean>
You then ensure that the application.properties file is only accessible to the admin user and the user running the application.
Example application.properties:
password=Aardvark
Or you could use the approach from this similar question and hence also my solution:
The approach is to have beans that are configured via property files, and the solution is to either:
refresh the entire applicationContext (automatically, using a scheduled task, or manually, using JMX) when properties have changed, or
use a dedicated property provider object to access all properties. This property provider keeps checking the property files for modification. For beans where prototype-based property lookup is impossible, register a custom event that your property provider fires when it finds an updated property file. Your beans with complicated lifecycles will need to listen for that event and refresh themselves (a sketch follows this list).
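A sketch of that second option (the event class and all names here are my own, purely illustrative):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationEventPublisherAware;

class PropertiesChangedEvent extends ApplicationEvent {
    public PropertiesChangedEvent(Object source) { super(source); }
}

public class PropertyProvider implements ApplicationEventPublisherAware {

    private final File file;
    private long lastModified;
    private Properties properties = new Properties();
    private ApplicationEventPublisher publisher;

    public PropertyProvider(File file) {
        this.file = file;
    }

    public void setApplicationEventPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    // Invoke this from a scheduled task.
    public synchronized void checkForChanges() throws IOException {
        if (file.lastModified() > lastModified) {
            Properties fresh = new Properties();
            FileInputStream in = new FileInputStream(file);
            try {
                fresh.load(in);
            } finally {
                in.close();
            }
            properties = fresh;
            lastModified = file.lastModified();
            // beans with complicated lifecycles listen for this event
            publisher.publishEvent(new PropertiesChangedEvent(this));
        }
    }

    public synchronized String get(String key) {
        return properties.getProperty(key);
    }
}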
You can create a custom scope called "reconfigurable" in the ApplicationContext. It creates and caches instances of all beans in this scope. On a configuration change it clears the cache and re-creates the beans on first access with the new configuration. For this to work you need to wrap all instances of reconfigurable beans in an AOP scoped proxy, and access the configuration values with Spring EL: put a map called config into the ApplicationContext and access the configuration like #{ config['key'] }.
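A minimal sketch of such a scope, under the assumption that a scoped proxy stands in front of each bean (destruction callbacks are ignored here for brevity):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

public class ReconfigurableScope implements Scope {

    private final Map<String, Object> cache = new ConcurrentHashMap<String, Object>();

    public Object get(String name, ObjectFactory<?> objectFactory) {
        Object bean = cache.get(name);
        if (bean == null) {
            bean = objectFactory.getObject(); // rebuilt lazily after a config change
            cache.put(name, bean);
        }
        return bean;
    }

    public Object remove(String name) {
        return cache.remove(name);
    }

    public void registerDestructionCallback(String name, Runnable callback) {
        // omitted in this sketch
    }

    public Object resolveContextualObject(String key) {
        return null;
    }

    public String getConversationId() {
        return null;
    }

    // Call this when the configuration changes.
    public void clearAll() {
        cache.clear();
    }
}

The scope would be registered with a CustomScopeConfigurer, and the reconfigurable beans wrapped with <aop:scoped-proxy/> so that callers always go through the proxy.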
This is not something I tried, I am trying to provide pointers.
Assuming your application context is a subclass of AbstractRefreshableApplicationContext (for example XmlWebApplicationContext or ClassPathXmlApplicationContext): AbstractRefreshableApplicationContext.getBeanFactory() will give you an instance of ConfigurableListableBeanFactory. Check if it is an instance of BeanDefinitionRegistry; if so, you can call the 'registerBeanDefinition' method. This approach will be tightly coupled to the Spring implementation.
Check the code of AbstractRefreshableApplicationContext and DefaultListableBeanFactory (this is the implementation you get when you call AbstractRefreshableApplicationContext.getBeanFactory()).
Option 1:
Inject the configuration bean into the DataSource or MailSender, and always get the configurable values from the configuration bean within those beans.
Inside the configuration bean, run a thread to periodically read the externally configurable properties (from a file, etc.). This way the configuration bean refreshes itself after the admin has changed the properties, and the DataSource gets the updated values automatically.
You need not actually implement the "thread" yourself - read http://commons.apache.org/configuration/userguide/howto_filebased.html#Automatic_Reloading (a sketch follows below).
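A sketch of that Commons Configuration approach (class and key names are placeholders):

import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;

public class ConfigurableBean {

    private final PropertiesConfiguration config;

    public ConfigurableBean(String path) throws ConfigurationException {
        config = new PropertiesConfiguration(path);
        // the file is transparently re-read when its timestamp changes
        config.setReloadingStrategy(new FileChangedReloadingStrategy());
    }

    public String getJdbcUrl() {
        return config.getString("db.url"); // re-checks the file on access
    }
}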
Option 2 (bad, I think, but maybe not - it depends on the use case):
Always create new beans for beans of type DataSource / MailSender, using the prototype scope. In the init of the bean, read the properties afresh.
Option 3 :
I think @mR_fr0g's suggestion of using JMX might not be a bad idea. What you could do is:
expose your configuration bean as an MBean (read http://static.springsource.org/spring/docs/2.5.x/reference/jmx.html)
ask your admin to change the configuration properties on the MBean (or provide an interface in the bean to trigger property updates from their source)
this MBean (a new piece of Java code that you will need to write) must keep references to the beans that you want to inject the changed properties into; this should be simple (via setter injection or a runtime fetch of bean names/classes)
when a property on the MBean is changed (or triggered), it must call the appropriate setters on the respective beans; that way your legacy code does not change, and you can still manage runtime property changes
HTH!
You may want to have a look at the Spring Inspector, a pluggable component that provides programmatic access to any Spring-based application at run-time. You can use JavaScript to change configurations or manage the application behaviour at run-time.
Here is a nice idea: write your own PlaceholderConfigurer that tracks the usage of properties and changes them whenever a configuration change occurs. This has two disadvantages, though:
It does not work with constructor injection of property values.
You can get race conditions if the reconfigured bean receives a changed configuration while it is processing some stuff.
My solution was to copy the original object. First I created an interface:
/**
 * Allows updating data to some object.
 * It's an alternative to {@link Cloneable} when you cannot
 * replace the original pointer. Ex.: Beans
 * @param <T> Type of Object
 */
public interface Updateable<T>
{
    /**
     * Import data from another object
     * @param originalObject Object with the original data
     */
    public void copyObject(T originalObject);
}
To ease the implementation of the function, first create a constructor with all fields, so the IDE can help a bit. Then you can make a copy constructor that uses the same Updateable#copyObject(T originalObject) function. You can also reuse the code of the IDE-generated constructor for the function you have to implement:
public class SettingsDTO implements Cloneable, Updateable<SettingsDTO>
{
    private static final Logger LOG = LoggerFactory.getLogger(SettingsDTO.class);

    @Size(min = 3, max = 30)
    private String id;

    @Size(min = 3, max = 30)
    @NotNull
    private String name;

    @Size(min = 3, max = 100)
    @NotNull
    private String description;

    @Max(100)
    @Min(5)
    @NotNull
    private Integer pageSize;

    @NotNull
    private String dateFormat;

    public SettingsDTO()
    {
    }

    public SettingsDTO(String id, String name, String description, Integer pageSize, String dateFormat)
    {
        this.id = id;
        this.name = name;
        this.description = description;
        this.pageSize = pageSize;
        this.dateFormat = dateFormat;
    }

    public SettingsDTO(SettingsDTO original)
    {
        copyObject(original);
    }

    @Override
    public void copyObject(SettingsDTO originalObject)
    {
        this.id = originalObject.id;
        this.name = originalObject.name;
        this.description = originalObject.description;
        this.pageSize = originalObject.pageSize;
        this.dateFormat = originalObject.dateFormat;
    }
}
I used it in a controller for updating the current settings of the app:
if (bindingResult.hasErrors())
{
    model.addAttribute("settingsData", newSettingsData);
    model.addAttribute(Templates.MSG_ERROR, "The entered data has errors");
}
else
{
    synchronized (currentSettingsData)
    {
        currentSettingsData.copyObject(newSettingsData);
        redirectAttributes.addFlashAttribute(Templates.MSG_SUCCESS, "The system configuration has been updated successfully");
        return String.format("redirect:/%s", getDao().getPath());
    }
}
So currentSettingsData, which holds the configuration of the application, will have the updated values from newSettingsData. This method allows updating any bean without much complexity.

Spring, @Transactional and Hibernate Lazy Loading

I'm using Spring + Hibernate. All my Hibernate DAOs use the SessionFactory directly.
I have application layer -> service layer -> DAO layer, and all collections are lazily loaded.
So, the problem is that sometimes in the application layer (which contains the GUI/Swing) I load an entity using a service layer method (one that carries the @Transactional annotation), and later I want to use a lazy property of this object, but obviously the session is already closed.
What is the best way to resolve this problem?
EDIT
I tried to use a MethodInterceptor; my idea is to write an AroundAdvice for all my entities and use an annotation, so for example:
// Custom annotation, says that a session is required for this method
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface SessionRequired {
}

// An AroundAdvice to intercept method calls
public class SessionInterceptor implements MethodInterceptor {
    public Object invoke(MethodInvocation mi) throws Throwable {
        boolean sessionRequired = mi.getMethod().isAnnotationPresent(SessionRequired.class);
        // Begin and commit session only if @SessionRequired
        if (sessionRequired) {
            // begin transaction here
        }
        Object ret = mi.proceed();
        if (sessionRequired) {
            // commit transaction here
        }
        return ret;
    }
}

// An example of an entity
@Entity
public class Customer implements Serializable {

    @Id
    Long id;

    @OneToMany
    List<Order> orders; // this is a lazy collection

    @SessionRequired
    public List<Order> getOrders() {
        return orders;
    }
}
// And finally in the application layer...
public void foo() {
    // Load customer by id; getCustomer is annotated with @Transactional,
    // so this is a lazy load
    Customer customer = customerService.getCustomer(1);
    // Get orders; my interceptor opens and closes the session for me... I hope...
    List<Order> orders = customer.getOrders();
    // Finally use the orders
}
Do you think this can work?
The problem is: how do I register this interceptor for all my entities without doing it in an XML file?
Is there a way to do it with annotations?
Hibernate recently introduced fetch profiles, which (in addition to performance tuning) are ideal for solving issues like this. They allow you to choose (at runtime) between different loading and initialization strategies:
http://docs.jboss.org/hibernate/core/3.5/reference/en/html/performance.html#performance-fetching-profiles
Edit (added a section on how to set the fetch profile using an interceptor):
Before you get started: check that fetch profiles will actually work for you. I haven't used them myself, and I see that they are currently limited to join fetches. Before you waste time implementing and wiring up the interceptor, try setting the fetch profile manually and see that it actually solves your problem.
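For such a manual smoke test, something along these lines should do (the profile name is hypothetical and would have to be declared with @FetchProfile on the entity):

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class FetchProfileSmokeTest {

    public Customer load(SessionFactory sessionFactory, Long id) {
        Session session = sessionFactory.getCurrentSession();
        session.enableFetchProfile("customer-with-orders");
        try {
            return (Customer) session.get(Customer.class, id);
        } finally {
            session.disableFetchProfile("customer-with-orders");
        }
    }
}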
There are many ways to set up interceptors in Spring (according to preference), but the most straightforward way would be to implement a MethodInterceptor (see http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop-api.html#aop-api-advice-around). Let it have a setter for the fetch profile you want and a setter for the Hibernate SessionFactory:
public class FetchProfileInterceptor implements MethodInterceptor {

    private SessionFactory sessionFactory;
    private String fetchProfile;

    ... setters ...

    public Object invoke(MethodInvocation invocation) throws Throwable {
        // The transaction interceptor has already opened the session, so this returns it.
        Session s = sessionFactory.getCurrentSession();
        s.enableFetchProfile(fetchProfile);
        try {
            return invocation.proceed();
        } finally {
            s.disableFetchProfile(fetchProfile);
        }
    }
}
Lastly, enable the interceptor in the Spring config. This can be done in several ways and you probably already have a AOP setup that you can add it to. See http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-schema.
If you're new to AOP, I'd suggest trying the "old" ProxyFactory way first (http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop-api.html#aop-api-proxying-intf) because it's easier to understand how it works. Here's some sample XML to get you started:
<bean id="fetchProfileInterceptor" class="x.y.zFetchProfileInterceptor">
<property name="sessionFactory" ref="sessionFactory"/>
<property name="fetchProfile" ref="gui-profile"/>
</bean>
<bean id="businessService" class="x.y.x.BusinessServiceImpl">
<property name="dao" .../>
...
</bean>
<bean id="serviceForSwinGUI"
class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="proxyInterfaces" value="x.y.z.BusinessServiceInterface/>
<property name="target" ref="businessService"/>
<property name="interceptorNames">
<list>
<value>existingTransactionInterceptorBeanName</value>
<value>fetchProfileInterceptor</value>
</list>
</property>
</bean>
Create a method in the service layer that returns the lazy-loaded object for that entity
Change the fetch to eager :)
If possible, extend your transaction into the application layer
(just while we wait for someone who knows what they are talking about)
You need to rework your session management, unfortunately. This is a major problem when dealing with Hibernate and Spring, and it's a gigantic hassle.
Essentially, what you need is for your application layer to create a new session when it gets your Hibernate object, and to manage that and close the session properly. This stuff is tricky, and non-trivial; one of the best ways to manage this is to mediate the sessions through a factory available from your application layer, but you still need to be able to end the session properly, so you have to be aware of the lifecycle needs of your data.
This stuff is the most common complaint about using Spring and Hibernate in this way; really, the only way to manage it is to get a good handle on exactly what your data lifecycles are.
