Spring JDBC DAO - java

I'm learning Spring (2 and 3) and I have this method in a ClientDao:
public Client getClient(int id) {
    List<Client> clients = getSimpleJdbcTemplate().query(
        CLIENT_GET,
        new RowMapper<Client>() {
            public Client mapRow(ResultSet rs, int rowNum) throws SQLException {
                Client client = new ClientImpl(); // !! this (1)
                client.setAccounts(new HashSet<Account>()); // !! this (2)
                client.setId(rs.getInt(1));
                client.setName(rs.getString(2));
                return client;
            }
        }, id
    );
    return clients.get(0);
}
and the following Spring wiring:
<bean id="account" class="client.AccountRON" scope="prototype">
<property name="currency" value = "RON" />
<property name="ammount" value="0" />
</bean>
<bean id="client" class="client.ClientImpl" scope="prototype">
<property name="name" value="--client--" />
<property name="accounts">
<set>
</set>
</property>
</bean>
The thing is that I don't like the commented lines of Java code, (1) and (2).
I'll start with (2), which I think is the easy one: is there a way to wire the bean in the .xml file so that Spring instantiates a Set implementation for the 'accounts' set in ClientImpl? That would let me get rid of (2).
Now moving on to (1): what happens if the implementation changes? Do I really need to write another DAO for a different implementation? Do I have to construct a BeanFactory? Or is there another, more elegant solution?
Thanks!

I'm a bit confused here: why have you defined a ClientImpl bean in your XML but aren't using it in your Java code?
You already have most of the solution; you just need to fetch a new ClientImpl from Spring on each iteration through the loop:
@Autowired private BeanFactory beanFactory;
public Client getClient(int id) {
    List<Client> clients = getSimpleJdbcTemplate().query(
        CLIENT_GET,
        new RowMapper<Client>() {
            public Client mapRow(ResultSet rs, int rowNum) throws SQLException {
                Client client = beanFactory.getBean(Client.class);
                client.setId(rs.getInt(1));
                client.setName(rs.getString(2));
                return client;
            }
        }, id
    );
    return clients.get(0);
}
With this approach, the actual construction and initialization of ClientImpl is done by Spring, not your code. And since your 'client' bean definition already injects an empty <set/> into the accounts property, fetching the bean from Spring also removes the need for (2).

Related

wrap spring bean with another bean

I have a service bean, accessible via the identifier someSpecificService, which I need to modify.
Beans are defined in different XML files and are collected together at runtime, so one big XML file is created into which all these XMLs are imported:
context.xml
....
<import resource="spring1.xml" />
<import resource="spring2.xml" />
...
So there is the following configuration:
<!-- definitions from spring1.xml -->
<alias name="defaultSomeSpecificService" alias="someSpecificService" />
<bean id="defaultSomeSpecificService" class="..."/>
....
<!-- definitions from spring2.xml -->
<alias name="myOwnSomeSpecificService" alias="someSpecificService" />
<bean id="myOwnSomeSpecificService" class="..." /> <!-- how to inject previously defined someSpecificService into this new bean? -->
I would like to override someSpecificService from spring1.xml in spring2.xml; however, I need to inject the previously defined bean defaultSomeSpecificService, and all I know is its alias name someSpecificService, which I need to redefine to point at the new bean myOwnSomeSpecificService.
Is it possible to implement this?
One solution would be to avoid trying to override the definition altogether, and instead create a proxy for the service implementation that intercepts all calls to it.
1) For the sake of the example, suppose the service is something like:
public interface Service {
    public String run();
}

public class ExistingServiceImpl implements Service {
    @Override
    public String run() {
        throw new IllegalStateException("Muahahahaha!");
    }
}
2) Implement an interceptor instead of myOwnSomeSpecificService:
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class SomeSpecificServiceInterceptor implements MethodInterceptor {
    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        String status;
        try {
            // allow the original invocation to actually execute
            status = String.valueOf(invocation.proceed());
        } catch (IllegalStateException e) {
            System.out.println("Existing service threw the following exception [" + e.getMessage() + "]");
            status = "FAIL";
        }
        return status;
    }
}
3) In spring2.xml define the proxy creator and the interceptor:
<bean id="serviceInterceptor" class="com.nsn.SomeSpecificServiceInterceptor" />
<bean id="proxyCreator" class="org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator">
<property name="beanNames" value="someSpecificService"/>
<property name="interceptorNames">
<list>
<value>serviceInterceptor</value>
</list>
</property>
</bean>
4) Running a small example such as:
public class Main {
    public static void main(String[] args) {
        Service service = new ClassPathXmlApplicationContext("context.xml").getBean("someSpecificService", Service.class);
        System.out.println("Service execution status [" + service.run() + "]");
    }
}
... instead of the IllegalStateException stacktrace you'd normally expect, it will print:
Existing service threw the following exception [Muahahahaha!]
Service execution status [FAIL]
Please note that in this example the service instance is not injected into the interceptor as you asked, because I had no use for it. However, should you really need it, you can easily inject it via constructor/property/etc., because the interceptor is a Spring bean itself.
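If you did need the original implementation inside the interceptor, a minimal sketch (not part of the original answer; the constructor injection and the Service field are assumptions) would be to give the interceptor a constructor argument and wire it in spring2.xml against the concrete id, e.g. <constructor-arg ref="defaultSomeSpecificService"/>:

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class SomeSpecificServiceInterceptor implements MethodInterceptor {

    // The original bean, injected by its concrete id so the alias redefinition
    // in spring2.xml does not matter.
    private final Service defaultService;

    public SomeSpecificServiceInterceptor(Service defaultService) {
        this.defaultService = defaultService;
    }

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        // ... use defaultService here if the decorating logic needs it ...
        return invocation.proceed();
    }
}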

Handling Hibernate Transactions

Currently I have this code duplicated in each one of my Controller methods:
Transaction transaction = HibernateUtil.getSessionFactory().getCurrentSession().getTransaction();
if (!HibernateUtil.getSessionFactory().getCurrentSession().getTransaction().isActive()) {
transaction.begin();
}
Is this the correct way or is there a better way of doing this, perhaps in a separate class that I can reference? If so, how? Every time I've tried to put it in a separate class and reference it from other classes, it failed.
edit: I'm trying to use as few external libraries as possible. I wouldn't use Hibernate if Java had an ORM/JPA implementation built into the JDK
I've run into this myself many times. Ordinarily my first recommendation would be Spring transaction management, however I understand you are trying to limit the number of third party libraries you are using.
Since you're using a static API in your HibernateUtil class, you may find it helpful to consolidate your transaction logic in a single method and put the 'what you want to do in a transaction' code (which varies from controller to controller) in a callback.
First, define an interface to describe each controller's inTransaction behavior:
public interface TransactionCallback {
void doInTransaction();
}
Now, create a static method in your HibernateUtil class to handle beginning, committing, and if necessary rolling back your transactions:
public class HibernateUtil {
    public static void inTransaction(TransactionCallback tc) {
        Transaction transaction = HibernateUtil.getSessionFactory().getCurrentSession().getTransaction();
        if (!transaction.isActive()) {
            transaction.begin();
        }
        try {
            tc.doInTransaction();
            transaction.commit();
        } catch (RuntimeException e) {
            transaction.rollback();
            throw e;
        }
    }
}
In your controller, you'd use your new method with an anonymous inner class:
....
HibernateUtil.inTransaction(new TransactionCallback() {
    public void doInTransaction() {
        // do stuff for this controller
    }
});
....
This approach should at least take care of the duplication you'd like to eliminate, and there's plenty of room for extending it to handle particular exceptions, return values, etc.; one such variant is sketched below.
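For instance, a variant of the same idea (a sketch, not from the original answer, assuming the same HibernateUtil) adds a result type so that queries can reuse the template:

public interface TransactionCallbackWithResult<T> {
    T doInTransaction();
}

public static <T> T inTransaction(TransactionCallbackWithResult<T> tc) {
    Transaction transaction = HibernateUtil.getSessionFactory().getCurrentSession().getTransaction();
    if (!transaction.isActive()) {
        transaction.begin();
    }
    try {
        T result = tc.doInTransaction();
        transaction.commit();
        return result;
    } catch (RuntimeException e) {
        transaction.rollback();
        throw e;
    }
}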
You have to close the Hibernate session after each transaction (e.g. each controller request).
In this case you will not need
if (!HibernateUtil.getSessionFactory().getCurrentSession().getTransaction().isActive())
and you WILL need to call .close() after each request.
It is better to use code like:
abstract class Controller {
    // ...
    public void commonActionMethod() {
        // begin transaction
        Transaction tx = HibernateUtil.getSessionFactory().getCurrentSession().beginTransaction();
        specificActionMethod();
        // commit; with a thread-bound current session this also closes it
        tx.commit();
    }

    protected abstract void specificActionMethod();
}
Children of this Controller class then implement specificActionMethod().
Code is clean. Transactions are safe. No third-party libs required.
You can very well use JDK proxies to implement your own AOP.
Ref: Link1 Link2
Have a service layer interact with the DAO framework (Hibernate and so on), so that your controller only controls the flow and your service implements the business logic.
Have a ServiceLocator / factory pattern to get hold of your service instances (in other words, return proxies instead of the actual instances).
Define your own annotations to mark which methods require a transaction; where required, handle the transaction around the method call in your proxy handler.
This way you don't need to depend on any library other than the JDK, and you can turn transactions on or off just by using annotations.
Once you start managing the instances (services) yourself, you can do a lot of magic with the combination of the factory pattern + JDK proxies (over actual interfaces) + AOP concepts.
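A minimal sketch of that proxy-handler idea, assuming the HibernateUtil from the question and a thread-bound current session (hibernate.current_session_context_class=thread); the annotation name and the wrap() helper are made up for illustration:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import org.hibernate.Transaction;

// Hypothetical marker annotation: only methods carrying it run inside a transaction.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Transactional {}

class TransactionalInvocationHandler implements InvocationHandler {

    private final Object target;

    TransactionalInvocationHandler(Object target) {
        this.target = target;
    }

    // Returns a proxy for the given service interface; a ServiceLocator/factory
    // would hand this proxy out instead of the real instance.
    @SuppressWarnings("unchecked")
    static <T> T wrap(Class<T> serviceInterface, T target) {
        return (T) Proxy.newProxyInstance(
                serviceInterface.getClassLoader(),
                new Class<?>[] { serviceInterface },
                new TransactionalInvocationHandler(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // Look the annotation up on the implementing class, not the interface.
        Method impl = target.getClass().getMethod(method.getName(), method.getParameterTypes());
        if (!impl.isAnnotationPresent(Transactional.class)) {
            return method.invoke(target, args);
        }
        Transaction tx = HibernateUtil.getSessionFactory().getCurrentSession().beginTransaction();
        try {
            Object result = method.invoke(target, args);
            tx.commit();
            return result;
        } catch (InvocationTargetException e) {
            tx.rollback();
            throw e.getCause(); // rethrow the exception the service actually threw
        }
    }
}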
You can create a separate class for the connection:
public class HibernateUtil {
    private static final SessionFactory sessionFactory = buildSessionFactory();

    @SuppressWarnings("deprecation")
    private static SessionFactory buildSessionFactory() {
        try {
            // Create the SessionFactory from Annotation
            return new AnnotationConfiguration().configure().buildSessionFactory();
        } catch (Throwable ex) {
            // Make sure you log the exception, as it might be swallowed
            System.err.println("Initial SessionFactory creation failed." + ex);
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }
}
On the server side you can write:
Session session = null;
Transaction tx = null;
try {
    session = HibernateUtil.getSessionFactory().openSession();
    tx = session.beginTransaction();
    // ... do your work here, then commit ...
    tx.commit();
} catch (HibernateException e) {
    e.printStackTrace();
} finally {
    if (session != null) {
        session.close();
    }
}
To avoid using additional external libraries, you may wish to supply an interceptor that implements the standard J2EE servlet Filter interface. Such an implementation is sometimes referred to as the Open Session in View pattern. I cite the following from this page:
When an HTTP request has to be handled, a new Session and database transaction will begin. Right before the response is sent to the client, and after all the work has been done, the transaction will be committed, and the Session will be closed.
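A bare-bones Filter along those lines (a sketch only, again assuming the HibernateUtil from the question and thread-bound current sessions) could look like this:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.hibernate.SessionFactory;

public class OpenSessionInViewFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        SessionFactory sf = HibernateUtil.getSessionFactory();
        try {
            // One session/transaction per request: begin before the controllers run...
            sf.getCurrentSession().beginTransaction();
            chain.doFilter(request, response);
            // ...and commit right before the response goes back to the client.
            sf.getCurrentSession().getTransaction().commit();
        } catch (RuntimeException e) {
            // Roll back on any failure; with hibernate.current_session_context_class=thread
            // the current session is closed automatically on commit/rollback.
            sf.getCurrentSession().getTransaction().rollback();
            throw e;
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {}

    @Override
    public void destroy() {}
}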
If you are using Spring in your project, I suggest handling transactions with Spring AOP: you just have to specify the pointcuts for your transactions. Spring AOP will then take care of beginning and committing the transaction based on your pointcut, and it can also roll the transaction back if an exception occurs. Please go through the linked example - here
package com.project.stackoverflow;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HibernateUtil {

    private static final Logger logger = LoggerFactory.getLogger(HibernateUtil.class);

    private static final ThreadLocal<Session> threadSession = new ThreadLocal<Session>();
    private static SessionFactory sessionFactory;

    /**
     * A public method to get the Session.
     *
     * @return Session
     */
    public static Session getSession() {
        Session session = threadSession.get();
        // Open a Session, if this thread has none yet
        if ((null == session) || !session.isOpen()) {
            logger.info("Null Session");
            session = sessionFactory.openSession();
            logger.info("Session Opened");
            threadSession.set(session);
        }
        return session;
    }

    public static void closeSession() {
        Session session = threadSession.get();
        // Close the Session, if this thread has one
        if (null != session) {
            session.close();
            threadSession.set(null);
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }

    public void setSessionFactory(SessionFactory sessionFactory) {
        logger.info("Inside set session Factory");
        HibernateUtil.sessionFactory = sessionFactory;
        logger.info("After set session Factory");
    }

    public static void save(Object obj) {
        getSession().save(obj);
        getSession().flush();
    }

    public static void saveOrUpdate(Object obj) {
        getSession().saveOrUpdate(obj);
        getSession().flush();
    }

    public static void batchUpdate(Object obj) {
        getSession().saveOrUpdate(obj);
        getSession().flush();
    }

    public static void update(Object obj) {
        getSession().update(obj);
        getSession().flush();
    }

    public static void delete(Object obj) {
        getSession().delete(obj);
        getSession().flush();
    }
}
You can probably go for this solution. I have made a separate Java class for Hibernate instantiation and use. You can get the session from it directly, which may suit your needs. Hope it helps :)
I used this technique.
My Servlet context is like this:
<beans:bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close" p:driverClassName="${jdbc.driverClassName}"
p:url="${jdbc.databaseurl}" p:username="${jdbc.username}" p:password="${jdbc.password}" />
<beans:bean id="sessionFactory"
class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<beans:property name="dataSource" ref="dataSource" />
<beans:property name="configLocation">
<beans:value>classpath:hibernate.cfg.xml</beans:value>
</beans:property>
<beans:property name="configurationClass">
<beans:value>org.hibernate.cfg.AnnotationConfiguration</beans:value>
</beans:property>
<beans:property name="hibernateProperties">
<beans:props>
<beans:prop key="hibernate.dialect">${jdbc.dialect}</beans:prop>
<beans:prop key="hibernate.show_sql">true</beans:prop>
</beans:props>
</beans:property>
</beans:bean>
<beans:bean id="transactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<beans:property name="sessionFactory" ref="sessionFactory" />
</beans:bean>
<tx:annotation-driven transaction-manager="transactionManager" />
Then you can simply use
@Autowired
private SessionFactory sessionFactory;
Whenever I want to use a session or do any operations I simply do it like this:
Session session = sessionFactory.openSession();
Transaction transaction = session.beginTransaction();
session.save(userAccount);
transaction.commit();
session.close();
I think it will help.
If you have an application server like GlassFish, it has an embedded EclipseLink JPA/ORM implementation, and you can manage transactions using standard Java EE annotations.
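For example, a container-managed sketch (class and method names are placeholders) needs no explicit transaction code at all:

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// With container-managed transactions (the default for a @Stateless bean) each
// public method runs in a JTA transaction started by the server.
@Stateless
public class ClientRepository {

    @PersistenceContext
    private EntityManager em;

    public void save(Object entity) {
        em.persist(entity); // committed by the container when the method returns
    }
}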

JOOQ & transactions

I've been reading about transactions and jOOQ, but I struggle to see how to implement this in practice.
Let's say I provide jOOQ with a custom ConnectionProvider which happens to use a connection pool with autocommit set to false.
The implementation is roughly:
@Override
public Connection acquire() throws DataAccessException {
    return pool.getConnection();
}

@Override
public void release(Connection connection) throws DataAccessException {
    connection.commit();
    connection.close();
}
How would I go about wrapping two jooq queries into a single transaction?
It is easy with the DefaultConnectionProvider because there's only one connection - but with a pool I'm not sure how to go about it.
jOOQ 3.4 Transaction API
With jOOQ 3.4, a transaction API has been added to abstract over JDBC, Spring, or JTA transaction managers. This API can be used with Java 8 as such:
DSL.using(configuration)
.transaction(ctx -> {
DSL.using(ctx)
.update(TABLE)
.set(TABLE.COL, newValue)
.where(...)
.execute();
});
Or with pre-Java 8 syntax
DSL.using(configuration)
.transaction(new TransactionRunnable() {
@Override
public void run(Configuration ctx) {
DSL.using(ctx)
.update(TABLE)
.set(TABLE.COL, newValue)
.where(...)
.execute();
}
});
The idea is that the lambda expression (or anonymous class) forms the transactional code, which:
Commits upon normal completion
Rolls back upon exception
The org.jooq.TransactionProvider SPI can be used to override the default behaviour, which implements nestable transactions via JDBC using Savepoints.
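For example, two statements can be made atomic simply by running them in the same transactional lambda; any exception escaping the lambda rolls both back (TABLE, OTHER_TABLE, and newValue are placeholders for generated jOOQ code and real values):

DSL.using(configuration)
   .transaction(ctx -> {
       DSL.using(ctx).insertInto(TABLE, TABLE.COL).values(newValue).execute();
       DSL.using(ctx).update(OTHER_TABLE).set(OTHER_TABLE.COL, newValue).execute();
       // throw new IllegalStateException("boom"); // would undo both statements
   });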
A Spring example
The current documentation shows an example when using Spring for transaction handling:
http://www.jooq.org/doc/latest/manual/getting-started/tutorials/jooq-with-spring/
This example essentially boils down to using a Spring TransactionAwareDataSourceProxy
<!-- Using Apache DBCP as a connection pooling library.
Replace this with your preferred DataSource implementation -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
init-method="createDataSource" destroy-method="close">
<property name="driverClassName" value="org.h2.Driver" />
<property name="url" value="jdbc:h2:~/maven-test" />
<property name="username" value="sa" />
<property name="password" value="" />
</bean>
<!-- Using Spring JDBC for transaction management -->
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="transactionAwareDataSource"
class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
<constructor-arg ref="dataSource" />
</bean>
<!-- Bridging Spring JDBC data sources to jOOQ's ConnectionProvider -->
<bean class="org.jooq.impl.DataSourceConnectionProvider"
name="connectionProvider">
<constructor-arg ref="transactionAwareDataSource" />
</bean>
A running example is available from GitHub here:
https://github.com/jOOQ/jOOQ/tree/master/jOOQ-examples/jOOQ-spring-example
A Spring and Guice example
Although I personally wouldn't recommend it, some users have had success replacing part of Spring's DI with Guice and handling transactions with Guice. There is also an integration-tested running example on GitHub for this use-case:
https://github.com/jOOQ/jOOQ/tree/master/jOOQ-examples/jOOQ-spring-guice-example
This is probably not the best way but it seems to work. The caveat is that it is not the release but the commit method which closes the connection and returns it to the pool, which is quite confusing and could lead to issues if some code "forgets" to commit...
So the client code looks like:
final PostgresConnectionProvider postgres =
    new PostgresConnectionProvider("localhost", 5432, params.getDbName(), params.getUser(), params.getPass());
private static DSLContext sql = DSL.using(postgres, SQLDialect.POSTGRES, settings);

//execute some statements here
sql.execute(...);

//and don't forget to commit or the connection will not be returned to the pool
PostgresConnectionProvider p = (PostgresConnectionProvider) sql.configuration().connectionProvider();
p.commit();
And the ConnectionProvider:
public class PostgresConnectionProvider implements ConnectionProvider {
private static final Logger LOG = LoggerFactory.getLogger(PostgresConnectionProvider.class);
private final ThreadLocal<Connection> connections = new ThreadLocal<>();
private final BoneCP pool;
public PostgresConnectionProvider(String serverName, int port, String schema, String user, String password) throws SQLException {
this.pool = new ConnectionPool(getConnectionString(serverName, port, schema), user, password).pool;
}
private String getConnectionString(String serverName, int port, String schema) {
return "jdbc:postgresql://" + serverName + ":" + port + "/" + schema;
}
public void close() {
pool.shutdown();
}
public void commit() {
LOG.debug("Committing transaction in {}", Thread.currentThread());
try {
Connection connection = connections.get();
if (connection != null) {
connection.commit();
connection.close();
connections.set(null);
}
} catch (SQLException ex) {
throw new DataAccessException("Could not commit transaction in postgres pool", ex);
}
}
@Override
public Connection acquire() throws DataAccessException {
LOG.debug("Acquiring connection in {}", Thread.currentThread());
try {
Connection connection = connections.get();
if (connection == null) {
connection = pool.getConnection();
connection.setAutoCommit(false);
connections.set(connection);
}
return connection;
} catch (SQLException ex) {
throw new DataAccessException("Can't acquire connection from postgres pool", ex);
}
}
@Override
//no-op => the connection won't be released until it is committed
public void release(Connection connection) throws DataAccessException {
LOG.debug("Releasing connection in {}", Thread.currentThread());
}
}
The easiest way (I have found) to use Spring transactions with jOOQ is given here: http://blog.liftoffllc.in/2014/06/jooq-and-transactions.html
Basically we implement a ConnectionProvider that uses the org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(ds) method to find and return the DB connection that holds the transaction created by Spring.
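That ConnectionProvider boils down to something like the following sketch (the class name is assumed, not taken from the post); DataSourceUtils hands back the connection bound to the ongoing Spring transaction, if any:

import java.sql.Connection;
import javax.sql.DataSource;
import org.jooq.ConnectionProvider;
import org.jooq.exception.DataAccessException;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class SpringConnectionProvider implements ConnectionProvider {

    private final DataSource dataSource;

    public SpringConnectionProvider(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Connection acquire() throws DataAccessException {
        // getConnection() delegates to doGetConnection(): it returns the connection
        // bound to the current Spring transaction, or a fresh one if none is active.
        return DataSourceUtils.getConnection(dataSource);
    }

    @Override
    public void release(Connection connection) throws DataAccessException {
        // Only really closes the connection when it is not bound to a Spring transaction.
        DataSourceUtils.releaseConnection(connection, dataSource);
    }
}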
Create a TransactionManager bean for your DataSource, example shown below:
<bean
id="dataSource"
class="org.apache.tomcat.jdbc.pool.DataSource"
destroy-method="close"
p:driverClassName="com.mysql.jdbc.Driver"
p:url="mysql://locahost:3306/db_name"
p:username="root"
p:password="root"
p:initialSize="2"
p:maxActive="10"
p:maxIdle="5"
p:minIdle="2"
p:testOnBorrow="true"
p:validationQuery="/* ping */ SELECT 1"
/>
<!-- Configure the PlatformTransactionManager bean -->
<bean
id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
p:dataSource-ref="dataSource"
/>
<!-- Scan for the Transactional annotation -->
<tx:annotation-driven/>
Now you can annotate all the classes or methods that use jOOQ's DSLContext with
@Transactional(rollbackFor = Exception.class)
and while creating the DSLContext object, jOOQ will make use of the transaction created by Spring.
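A hypothetical service showing the annotation in use (the table names and the plain-SQL calls are placeholders; generated jOOQ classes would normally be used instead):

import org.jooq.DSLContext;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BookService {

    @Autowired
    private DSLContext dsl;

    @Transactional(rollbackFor = Exception.class)
    public void renameBook(long id, String newTitle) {
        // Both statements run on the connection bound to the Spring transaction,
        // so the exception below rolls back the update as well.
        int updated = dsl.execute("update book set title = ? where id = ?", newTitle, id);
        if (updated != 1) {
            throw new IllegalStateException("Expected exactly one row, got " + updated);
        }
        dsl.execute("insert into book_history (book_id, title) values (?, ?)", id, newTitle);
    }
}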
Though it's an old question, please look at this link for help configuring jOOQ to use the Spring-provided transaction manager. Your DataSource and DSLContext have to be transaction-aware.
https://www.baeldung.com/jooq-with-spring
You may have to change
@Bean
public DefaultDSLContext dsl() {
    return new DefaultDSLContext(configuration());
}
to
@Bean
public DSLContext dsl() {
    return new DefaultDSLContext(configuration());
}

Property is not found from properties file using @Value

I use a properties file in the Spring framework.
root-context.xml
<context:property-placeholder location="classpath:config.properties" />
<util:properties id="config" location="classpath:config.properties" />
Java code:
@Value("#{config[ebookUseYN]}")
String EBOOKUSEYN;
When using a URL call (@RequestMapping(value="/recommendbooks", method=RequestMethod.GET, produces="application/json;charset=UTF-8")), this works!
But when I use a method call,
public void executeInternal(JobExecutionContext arg0) throws JobExecutionException {
    IndexManageController indexManage = new IndexManageController();
    CommonSearchDTO commonSearchDTO = new CommonSearchDTO();
    try {
        if ("Y".equals(EBOOKUSEYN)) {
            indexManage.deleteLuceneDocEbook();
            indexManage.initialBatchEbook(null, commonSearchDTO);
        }
        indexManage.deleteLuceneDoc(); // <= this point
        indexManage.deleteLuceneDocFacet();
        indexManage.initialBatch(null, commonSearchDTO);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
When the method at 'this point' is called, control passes into the controller, but the properties-file field is not read:
#Value("#{config[IndexBasePath]}")
String IndexBasePath;
#RequestMapping(value="/deleteLuceneDoc" , method=RequestMethod.GET, produces="application/json;charset=UTF-8")
public #ResponseBody ResultCodeMessageDTO deleteLuceneDoc()
throws Exception
{
long startTime = System.currentTimeMillis();
ResultCodeMessageDTO result = new ResultCodeMessageDTO();
System.out.println(IndexBasePath);
}
It doesn't read IndexBasePath
In your code you are creating a new instance of IndexManageController; Spring doesn't know about this instance, and as such it will never be processed:
public void executeInternal(JobExecutionContext arg0) throws JobExecutionException {
IndexManageController indexManage = new IndexManageController();
Instead of creating a new instance, inject the IndexManageController as a dependency so that you use the pre-configured instance constructed and managed by Spring (and remove the line which constructs a new instance of that class):
public class MyJob {
    @Autowired
    private IndexManageController indexManage;
}
Your configuration is also loading the properties twice
<context:property-placeholder location="classpath:config.properties" />
<util:properties id="config" location="classpath:config.properties" />
Both load the config.properties file. Simply wire the config to the property-placeholder element.
<context:property-placeholder properties-ref="config"/>
<util:properties id="config" location="classpath:config.properties" />
Saves you loading twice and saves you another bean.

Populate envers revision tables with existing data from Hibernate Entities

I'm adding Envers to existing Hibernate entities. Everything is working smoothly so far as far as auditing goes; however, querying is a different issue because the revision tables aren't populated with the existing data. Has anyone else already solved this issue? Maybe you've found some way to populate the revision tables from the existing tables? Just thought I'd ask; I'm sure others would find it useful.
We populated the initial data by running a series of raw SQL queries to simulate "inserting" all the existing entities as if they had just been created at the same time. For example:
insert into REVINFO(REV,REVTSTMP) values (1,1322687394907);
-- this is the initial revision, with an arbitrary timestamp
insert into item_AUD(REV,REVTYPE,id,col1,col2) select 1,0,id,col1,col2 from item;
-- this copies the relevant row data from the entity table to the audit table
Note that the REVTYPE value is 0 to indicate an insert (as opposed to a modification).
You'll have a problem in this category if you are using the Envers ValidityAuditStrategy and have data which was created without Envers enabled.
In our case (Hibernate 4.2.8.Final) a basic object update throws "Cannot update previous revision for entity and " (logged as [org.hibernate.AssertionFailure] HHH000099).
Took me a while to find this discussion/explanation so cross-posting:
ValidityAuditStrategy with no audit record
You don't need to.
AuditQuery allows you to get both the revision entity and the data revision by:
AuditQuery query = getAuditReader().createQuery()
.forRevisionsOfEntity(YourAuditedEntity.class, false, false);
This will construct a query which returns a list of Object[3]. The first element is your data, the second is the revision entity, and the third is the type of revision.
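Iterating the result might look like this (a sketch assuming the default revision entity; adjust the cast if you use a custom one):

@SuppressWarnings("unchecked")
List<Object[]> rows = query.getResultList();
for (Object[] row : rows) {
    YourAuditedEntity data = (YourAuditedEntity) row[0];
    DefaultRevisionEntity revision = (DefaultRevisionEntity) row[1]; // or your custom revision entity
    RevisionType type = (RevisionType) row[2];
    System.out.println(revision.getRevisionDate() + " " + type + " -> " + data);
}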
We have solved the issue of populating the audit logs with the existing data as follows:
SessionFactory defaultSessionFactory;
// special configured sessionfactory with envers audit listener + an interceptor
// which flags all properties as dirty, even if they are not.
SessionFactory replicationSessionFactory;
// Entities must be retrieved with a different session factory, otherwise the
// auditing tables are not updated. ( this might be because I did something
// wrong, I don't know, but I know it works if you do it as described above. Feel
// free to improve )
FooDao fooDao = new FooDao();
fooDao.setSessionFactory( defaultSessionFactory );
List<Foo> all = fooDao.findAll();
// cleanup and close connection for fooDao here.
..
// Obtain a session from the replicationSessionFactory here eg.
Session session = replicationSessionFactory.getCurrentSession();
// replicate all data, overwrite data if an entry for that id already exists
// the trick is to let both session factories point to the SAME database.
// By updating the data in the existing db, the audit listener gets triggered,
// and inserts your "initial" data in the audit tables.
for( Foo foo: all ) {
session.replicate( foo, ReplicationMode.OVERWRITE );
}
The configuration of my data sources (via Spring):
<bean id="replicationDataSource"
class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="org.postgresql.Driver"/>
<property name="url" value=".."/>
<property name="username" value=".."/>
<property name="password" value=".."/>
<aop:scoped-proxy proxy-target-class="true"/>
</bean>
<bean id="auditEventListener"
class="org.hibernate.envers.event.AuditEventListener"/>
<bean id="replicationSessionFactory"
class="o.s.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="entityInterceptor">
<bean class="com.foo.DirtyCheckByPassInterceptor"/>
</property>
<property name="dataSource" ref="replicationDataSource"/>
<property name="packagesToScan">
<list>
<value>com.foo.**</value>
</list>
</property>
<property name="hibernateProperties">
<props>
..
<prop key="org.hibernate.envers.audit_table_prefix">AUDIT_</prop>
<prop key="org.hibernate.envers.audit_table_suffix"></prop>
</props>
</property>
<property name="eventListeners">
<map>
<entry key="post-insert" value-ref="auditEventListener"/>
<entry key="post-update" value-ref="auditEventListener"/>
<entry key="post-delete" value-ref="auditEventListener"/>
<entry key="pre-collection-update" value-ref="auditEventListener"/>
<entry key="pre-collection-remove" value-ref="auditEventListener"/>
<entry key="post-collection-recreate" value-ref="auditEventListener"/>
</map>
</property>
</bean>
The interceptor:
import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;
..
public class DirtyCheckByPassInterceptor extends EmptyInterceptor {
public DirtyCheckByPassInterceptor() {
super();
}
/**
* Flags ALL properties as dirty, even if nothing has changed.
*/
@Override
public int[] findDirty( Object entity,
Serializable id,
Object[] currentState,
Object[] previousState,
String[] propertyNames,
Type[] types ) {
int[] result = new int[ propertyNames.length ];
for ( int i = 0; i < propertyNames.length; i++ ) {
result[ i ] = i;
}
return result;
}
}
ps: keep in mind that this is a simplified example. It will not work out of the box but it will guide you towards a working solution.
Take a look at http://www.jboss.org/files/envers/docs/index.html#revisionlog
Basically you can define your own 'revision type' using the @RevisionEntity annotation,
and then implement the RevisionListener interface to insert your additional audit data,
like the current user and the high-level operation. Usually those are pulled from a ThreadLocal context.
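A minimal sketch of that setup (the entity/listener names and the CurrentUser holder are made up for illustration):

import javax.persistence.Entity;
import org.hibernate.envers.DefaultRevisionEntity;
import org.hibernate.envers.RevisionEntity;
import org.hibernate.envers.RevisionListener;

// Custom revision entity carrying the user who made the change.
@Entity
@RevisionEntity(UserRevisionListener.class)
public class UserRevisionEntity extends DefaultRevisionEntity {

    private String username;

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}

// Called by Envers once per new revision; typically reads the current user
// from a ThreadLocal or the security context.
class UserRevisionListener implements RevisionListener {
    @Override
    public void newRevision(Object revisionEntity) {
        ((UserRevisionEntity) revisionEntity).setUsername(CurrentUser.get()); // CurrentUser is a placeholder
    }
}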
You could extend the AuditReaderImpl with a fallback option for the find method, like:
public class AuditReaderWithFallback extends AuditReaderImpl {
public AuditReaderWithFallback(
EnversService enversService,
Session session,
SessionImplementor sessionImplementor) {
super(enversService, session, sessionImplementor);
}
@Override
@SuppressWarnings({"unchecked"})
public <T> T find(
Class<T> cls,
String entityName,
Object primaryKey,
Number revision,
boolean includeDeletions) throws IllegalArgumentException, NotAuditedException, IllegalStateException {
T result = super.find(cls, entityName, primaryKey, revision, includeDeletions);
if (result == null)
result = (T) super.getSession().get(entityName, (Serializable) primaryKey);
return result;
}
}
You could add a few more checks in terms of returning null in some cases.
You might want to use your own factory as well:
public class AuditReaderFactoryWithFallback {
/**
* Create an audit reader associated with an open session.
*
* @param session An open session.
* @return An audit reader associated with the given session. It shouldn't be used
* after the session is closed.
* @throws AuditException When the given required listeners aren't installed.
*/
public static AuditReader get(Session session) throws AuditException {
SessionImplementor sessionImpl;
if (!(session instanceof SessionImplementor)) {
sessionImpl = (SessionImplementor) session.getSessionFactory().getCurrentSession();
} else {
sessionImpl = (SessionImplementor) session;
}
final ServiceRegistry serviceRegistry = sessionImpl.getFactory().getServiceRegistry();
final EnversService enversService = serviceRegistry.getService(EnversService.class);
return new AuditReaderWithFallback(enversService, session, sessionImpl);
}
}
I've checked many ways, but the best one for me was to write a PL/pgSQL script like the one below.
The script below is written for PostgreSQL. I didn't check other vendors, but they should offer a similar feature.
CREATE SEQUENCE hibernate_sequence START 1;
DO
$$
DECLARE
u RECORD;
next_id BIGINT;
BEGIN
FOR u IN SELECT * FROM "user"
LOOP
SELECT NEXTVAL('hibernate_sequence')
INTO next_id;
INSERT INTO revision (rev, user_id, timestamp)
VALUES (next_id,
'00000000-0000-0000-0000-000000000000',
(SELECT EXTRACT(EPOCH FROM NOW() AT TIME ZONE 'utc')) * 1000);
INSERT INTO user_aud(rev,
revend,
revtype,
id,
created_at,
created_by,
last_modified_at,
last_modified_by,
name)
VALUES (next_id,
NULL,
0,
u.id,
u.created_at,
u.created_by,
u.last_modified_at,
u.last_modified_by,
u.name);
END LOOP;
END;
$$;
