I am attempting to write to multiple databases using Hibernate. I have encapsulated a write-only session and a read/write session within a single session object. However, when I save, I get many errors saying the objects are already associated with another session: "Illegal attempt to associate a collection with two open sessions"
Here is my code:
public class MultiSessionObject implements Session {
    private Session writeOnlySession;
    private Session readWriteSession;

    @Override
    public void saveOrUpdate(Object arg0) throws HibernateException {
        readWriteSession.saveOrUpdate(arg0);
        writeOnlySession.saveOrUpdate(arg0);
    }
}
I have tried evicting the object and flushing; however, that causes problems with "Row was updated or deleted by another transaction"... even though both sessions point to different databases.
public class MultiSessionObject implements Session {
    private Session writeOnlySession;
    private Session readWriteSession;

    @Override
    public void saveOrUpdate(Object arg0) throws HibernateException {
        readWriteSession.saveOrUpdate(arg0);
        readWriteSession.flush();
        readWriteSession.evict(arg0);
        writeOnlySession.saveOrUpdate(arg0);
        writeOnlySession.flush();
        writeOnlySession.evict(arg0);
    }
}
In addition to the above, I have also attempted to use Hibernate's replicate functionality, without success.
Has anyone successfully saved an object to two databases that have the same schema?
saveOrUpdate tries to reattach the given entity to the currently running Session, so proxies (LAZY associations) get bound to the Hibernate Session. Try using merge instead of saveOrUpdate, because merge simply copies the detached entity state onto a newly retrieved managed entity. This way, the supplied argument never gets attached to a Session.
Another problem is transaction management. If you use thread-bound transactions, then you need two explicit transactions if you want to update two DataSources from the same thread.
Try to set the transaction boundaries explicitly too:
public class MultiSessionObject implements Session {
    private Session writeOnlySession;
    private Session readWriteSession;

    @Override
    public void saveOrUpdate(Object arg0) throws HibernateException {
        Transaction readWriteSessionTx = null;
        try {
            readWriteSessionTx = readWriteSession.beginTransaction();
            readWriteSession.merge(arg0);
            readWriteSessionTx.commit();
        } catch (RuntimeException e) {
            if (readWriteSessionTx != null && readWriteSessionTx.isActive()) {
                readWriteSessionTx.rollback();
            }
            throw e;
        }
        Transaction writeOnlySessionTx = null;
        try {
            writeOnlySessionTx = writeOnlySession.beginTransaction();
            writeOnlySession.merge(arg0);
            writeOnlySessionTx.commit();
        } catch (RuntimeException e) {
            if (writeOnlySessionTx != null && writeOnlySessionTx.isActive()) {
                writeOnlySessionTx.rollback();
            }
            throw e;
        }
    }
}
As mentioned in other answers, if you are using Session then you probably need to separate the two updates into two different transactions. The detached instance of the entity (after evict) should be reusable in the second update operation.
Another approach is to use StatelessSession, like this (I tried this in a simple program, so I had to handle the transactions myself; I assume you will handle the transactions differently):
public static void main(final String[] args) throws Exception {
    final StatelessSession session1 = HibernateUtil.getReadOnlySessionFactory().openStatelessSession();
    final StatelessSession session2 = HibernateUtil.getReadWriteSessionFactory().openStatelessSession();
    try {
        Transaction transaction1 = session1.beginTransaction();
        Transaction transaction2 = session2.beginTransaction();

        ErrorLogEntity entity = (ErrorLogEntity) session1.get(ErrorLogEntity.class, 1);
        entity.setArea("test");
        session1.update(entity);
        session2.update(entity);

        transaction1.commit();
        transaction2.commit();
        System.out.println("Entry details: " + entity);
    } finally {
        session1.close();
        session2.close();
        HibernateUtil.getReadOnlySessionFactory().close();
        HibernateUtil.getReadWriteSessionFactory().close();
    }
}
The issue with StatelessSession is that it does not use any cache and does not support cascading of associated objects. You need to handle that manually.
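For example, a rough sketch of handling the cascade by hand (the Parent/Child entities here are assumptions, not from the question):

StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();
try {
    Parent parent = new Parent("p1");
    session.insert(parent);        // no cascading: insert the parent first
    for (Child child : parent.getChildren()) {
        child.setParent(parent);
        session.insert(child);     // then insert each associated child by hand
    }
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
} finally {
    session.close();
}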
Yeah, the problem is exactly what it's telling you. The way to achieve this successfully is to treat it as two different saves with two different commits.
Create a composite Dao that holds a Collection<Dao>. Each Dao in the collection is just an instance of your existing code, configured for a different data source. Then, when you call save on the composite Dao, it independently saves to both.
Out-of-band, you said it's best effort. So that's easy enough: use spring-retry to create a pointcut around your individual Dao save methods so that they try a few times and eventually give up.
public interface Dao<T> {
    void save(T type);
}
Create new instances of this using an applicationContext.xml where each instance points to a different database. While you're in there, use spring-retry to place a retry pointcut around your save method. Go to the bottom for the application context example.
public class RealDao<T> implements Dao<T> {
    @Autowired
    private Session session;

    @Override
    public void save(T type) {
        // save to DB here
    }
}
The composite
public class CompositeDao<T> implements Dao<T> {
    // these instances are actually of type RealDao<T>
    private Set<Dao<T>> delegates;

    public CompositeDao(Dao<T>... daos) {
        this.delegates = new LinkedHashSet<>(Arrays.asList(daos));
    }

    @Override
    public void save(T stuff) {
        for (Dao<T> delegate : delegates) {
            try {
                delegate.save(stuff);
            } catch (Exception e) {
                // skip it. Best effort
            }
        }
    }
}
Each 'stuff' is saved in its own separate session, or not. As the session is on the RealDao instances, you know that by the time the first completes, it has either totally saved or failed. Hibernate might want you to have a different ID for them so that hash/equals are different, but I don't think so.
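As a rough sketch of the best-effort retry idea, each delegate save could also be wrapped programmatically in spring-retry's RetryTemplate (the policy values below are assumptions):

RetryTemplate retry = new RetryTemplate();
SimpleRetryPolicy policy = new SimpleRetryPolicy();
policy.setMaxAttempts(3);          // "try a few times, eventually give up"
retry.setRetryPolicy(policy);

retry.execute(context -> {
    realDao.save(stuff);           // retried up to three times before giving up
    return null;
});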
Related
I'm working on a project to create and change users in my MySQL database. I have a UserService (REST) which creates a user, and a GenericDAO class where I can persist users. In my DAO, for each user I begin, persist, and commit a transaction. Creating single users or finding users works perfectly.
Now I am facing the problem of persisting or updating a list of users. In particular, if one user cannot be persisted (e.g. a duplicate), the whole transaction should be rolled back. That doesn't work in my current setup.
My first idea is to move the commit into a separate method. With a loop over all users, I would only persist them; at the end of the loop I would call my method to commit everything. If one or more users fail, I can catch that with the rollback. Is that a good approach?
AbstractDAO (current)
public abstract class GenericDAO<T> implements IGenericDAO<T> {
    @PersistenceContext
    protected EntityManager em = null;
    private CriteriaBuilder cb = null;
    private Class<T> clazz;

    public GenericDAO(Class<T> class1) {
        this.clazz = class1;
        this.em = EntityManagerUtil.getEntityManager();
        this.cb = this.em.getCriteriaBuilder(); // the original discarded this return value
    }

    public final void setClazz(Class<T> clazzToSet) {
        this.clazz = clazzToSet;
    }

    public T create(T entity) {
        try {
            em.getTransaction().begin();
            em.persist(entity);
            em.getTransaction().commit();
            return entity;
        } catch (PersistenceException e) {
            em.getTransaction().rollback();
            return null;
        }
    }

    public T find(int id) {
        return em.find(this.clazz, id);
    }

    public List<T> findAll() {
        return em.createQuery("from " + this.clazz.getName()).getResultList();
    }

    /** Save changes made to a persistent object. */
    public void update(T entity) {
        em.getTransaction().begin();
        em.merge(entity);
        em.getTransaction().commit();
    }

    /** Remove an object from persistent storage in the database. */
    public void delete(T entity) {
        em.getTransaction().begin();
        em.remove(entity);
        em.getTransaction().commit();
    }
}
Wouldn't the most convenient solution be to simply add methods like createAll()/updateAll()?
Adding separate public methods for starting and committing the transaction, like start() and commit(), creates a whole bunch of problems, because it means you suddenly introduce a stateful conversation between the DAO and its clients.
The DAO methods now need to be called in a certain order and, worse still, the state of the EntityManager transaction is retained. If you forget to commit() at the end of one service call using your DAO, a subsequent call will mistakenly assume a transaction has not yet been started, and that call will fail 'for no apparent reason' (not to mention that the original call will appear completed when in reality the transaction was left hanging). This creates bugs that are hard to debug and tricky to recover from.
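For the record, a minimal sketch of such a createAll(), assuming it lives in the same GenericDAO as above and keeps begin and commit inside one method:

public List<T> createAll(List<T> entities) {
    try {
        em.getTransaction().begin();
        for (T entity : entities) {
            em.persist(entity);           // nothing is committed yet
        }
        em.getTransaction().commit();     // one commit for the whole list
        return entities;
    } catch (PersistenceException e) {
        em.getTransaction().rollback();   // a single duplicate rolls back every user
        return null;
    }
}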
EDIT As I already pointed out in the comment below this answer, getting programmatic transaction management right is tricky in a multi-layer application structure, and so, I would recommend to have a look at declarative transaction management.
However, if you insist on managing transactions yourself, I would probably introduce something like a TransactionTemplate:
public class TransactionTemplate {
    private EntityManager em; // populated in a constructor, for instance

    public void executeInTransaction(Runnable action) {
        try {
            em.getTransaction().begin();
            action.run();
            em.getTransaction().commit();
        } catch (RuntimeException e) {
            em.getTransaction().rollback();
            throw e; // rethrow so callers do not mistake a rolled-back transaction for success
        } finally {
            em.clear(); // since you're using an extended persistence context, you might want this line
        }
    }
}
and use it in a service like so:
public class UserService {
    private TransactionTemplate template;
    private RoleDao roleDao;
    private UserDao userDao; // make sure TransactionTemplate and all DAOs use the same EntityManager - for a single transaction, at least

    public void saveUsers(Collection<User> users, String roleName) {
        template.executeInTransaction(() -> {
            Role role = roleDao.findByName(roleName);
            users.forEach(user -> {
                user.addRole(role);
                userDao.create(user);
            });
            // some other operations
        });
    }
}
(Of course, using the above approach means only one layer - the service layer in this case - is aware of transactions, and so DAOs must always be called from inside a service.)
I developed a typical enterprise application that is responsible for provisioning customers to a 3rd-party system. This system has a limitation that only one thread can work on a certain customer. So we added a simple locking mechanism consisting of a @Singleton that contains a Set of the customerIds currently in progress. Whenever a new provisioning request comes in, it first checks this Set. If the customerId is present, it waits; otherwise it adds it to the Set and goes into processing.
Recently it was decided that this application will be deployed in a cluster, which means this locking approach is no longer valid. We came up with a solution to use the DB for locking. We created a table with a single column that contains customerIds (it also has a unique constraint). When a new provisioning request comes in, we start a transaction and try to lock the row with that customerId using SELECT FOR UPDATE (if the customerId does not exist yet, we insert it). After that we start provisioning the customer and, when finished, we commit the transaction.
The concept works, but I have problems with the transactions. Currently we have a class CustomerLock with add() and remove() methods that take care of adding and removing customerIds from the Set. I wanted to convert this class to a stateless EJB with bean-managed transactions. The add() method would start a transaction and lock the row, while the remove() method would commit the transaction and thus unlock the row. But it seems that the start and end of a transaction have to happen in the same method. Is there a way to use the approach I described, or do I have to modify the logic so the transaction starts and ends in the same method?
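For reference, the locking step described above could look roughly like this in JPA (the CustomerLockRow entity and its fields are assumptions):

List<CustomerLockRow> rows = em.createQuery(
        "select r from CustomerLockRow r where r.customerId = :id",
        CustomerLockRow.class)
    .setParameter("id", customerId)
    .setLockMode(LockModeType.PESSIMISTIC_WRITE) // typically issues SELECT ... FOR UPDATE
    .getResultList();
if (rows.isEmpty()) {
    em.persist(new CustomerLockRow(customerId)); // insert if absent; the unique constraint guards against races
}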
CustomerLock class:
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class CustomerLock {

    @Resource
    private UserTransaction tx;

    public void add(String customerId) throws Exception {
        try {
            tx.begin();
            dsApi.lock(); // dsApi is the helper that issues the row lock (declaration omitted in the original)
        } catch (Exception e) {
            throw e;
        }
    }

    public void remove(String customerId) throws Exception {
        try {
            tx.commit();
        } catch (Exception e) {
            throw e;
        }
    }
}
CustomerProvisioner class excerpt:
public abstract class CustomerProvisioner {
    ...
    public void execute(String customerId) {
        try {
            customerLock.add(customerId);
            // ... processing ...
            customerLock.remove(customerId);
        } catch (Exception e) {
            logger.error("Error", e);
        }
    }
    ...
}
StandardCustomerProvisioner class:
@Stateless
public class StandardCustomerProvisioner extends CustomerProvisioner {
    ...
    public void provision(String customerId) {
        // do some business logic
        super.execute(customerId);
    }
}
As @Gimby noted, you should not mix container-managed and bean-managed transactions. Since your StandardCustomerProvisioner has no "@TransactionManagement(TransactionManagementType.BEAN)" annotation, it uses container-managed transactions, with REQUIRED by default.
You have 2 options to make it work:
1) Remove "@TransactionManagement(TransactionManagementType.BEAN)" along with the UserTransaction calls and run with CMT.
2) Add this annotation ("@TransactionManagement(TransactionManagementType.BEAN)") to StandardCustomerProvisioner and issue the transaction demarcation calls from its method, so all the invoked methods use the same transactional context (see the sketch below). The demarcation calls in CustomerLock should be removed anyway.
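A rough sketch of option 2, assuming the provisioning details stay as in the question:

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class StandardCustomerProvisioner extends CustomerProvisioner {

    @Resource
    private UserTransaction tx;

    public void provision(String customerId) throws Exception {
        tx.begin();                    // the transaction now spans lock + processing
        try {
            super.execute(customerId); // CustomerLock methods join this transaction
            tx.commit();               // commit releases the SELECT FOR UPDATE row lock
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }
}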
What I want is to implement the Repository pattern in a JPA/Hibernate application. I have a generic interface that describes the basic contract of my repositories:
public interface EntityRepository<Entity extends Object, EntityId> {
    Entity add(Entity entity);
    Entity byId(EntityId id);
    void remove(Entity entity);
    void removeById(EntityId id);
    void save();
    List<Entity> toList();
}
And here is an implementation of such an interface:
public class EntityRepositoryHibernate<Entity extends Object, EntityId>
        implements Serializable, EntityRepository<Entity, EntityId> {

    private static final long serialVersionUID = 1L;

    @Inject
    protected EntityManager entityManager;

    protected Class<Entity> entityClass;

    public EntityRepositoryHibernate(Class<Entity> entityClass) {
        this.entityClass = entityClass;
    }

    public EntityManager getEntityManager() {
        return entityManager;
    }

    @Override
    public Entity add(Entity entity) {
        entityManager.persist(entity);
        return entity;
    }

    @SuppressWarnings("unchecked")
    @Override
    public Entity byId(EntityId id) {
        // criteriaDAO is an injected helper whose declaration was omitted in the original post
        DetachedCriteria criteria = criteriaDAO.createDetachedCriteria(entityClass);
        criteria.add(Restrictions.eq("id", id));
        return (Entity) criteriaDAO.executeCriteriaUniqueResult(criteria);
    }

    @Override
    public void remove(Entity entity) {
        if (entity == null)
            return;
        entityManager.remove(entity);
    }

    @Override
    public void removeById(EntityId id) {
        remove(byId(id));
    }

    @Override
    public List<Entity> toList() {
        throw new UnsupportedOperationException("toList() not implemented in " + entityClass.getName());
    }

    @Override
    public void save() {
        entityManager.flush();
    }
}
All methods are working fine, except save(), so this is the focus here.
As far as I understand, Hibernate is able to track all changes in any instance returned by a query (the byId() method). So the idea of the save() method is to save any instances that were retrieved and changed; that's why the method does not receive any parameters. It is supposed to save everything that has to be saved (that is, any persistent instance that was retrieved and somehow updated while the repository lives).
In a possible scenario, I could call byId() 10 times to retrieve 10 different instances and change only 4 of them. The idea is that by calling save() once, those 4 instances would be saved in the data server.
The problem is that when I call flush() I receive an exception stating that there is no transaction active. Since I'm using a JTA persistence unit, it's illegal to start the transaction programmatically by calling entityManager.getTransaction().
Considering that, what should I do to fix the code?
First of all, it seems that you are misunderstanding the purpose of the EntityManager.flush method. It doesn't commit any changes managed by the persistence context; it just sends SQL instructions to the database. I mean, within the same JTA transaction, when you retrieve and modify some entity instances, the changes/SQL instructions are cached, waiting to be sent to the database. If the underlying transaction is committed, these changes are flushed to the database along with the commit instruction. If you invoke flush before the transaction is committed, only the changes up to the invocation point are flushed (well, some SQL instructions could have been flushed previously for reasons outside this matter), but no commit instruction is sent.
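To illustrate the difference, a sketch with a RESOURCE_LOCAL unit for clarity (with your JTA unit the container owns the commit instead; SomeEntity is a made-up name):

em.getTransaction().begin();
SomeEntity e = em.find(SomeEntity.class, 1L);
e.setName("changed");
em.flush();  // the UPDATE statement is sent now, inside the still-open transaction
// other statements in this transaction can already see the change...
em.getTransaction().commit(); // ...but only the commit makes it durable and visible to others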
How to fix it?
I suggest you don't mix the Repository pattern with transaction manipulation.
It looks like you are using container-managed transactions (see the Java EE tutorial), so just erase the save method and let the container manage the transactions. This will change your focus: you now have to care about rolling back transactions (by throwing an exception or invoking setRollbackOnly), but you don't need to commit explicitly.
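A minimal sketch of that approach, assuming an EJB service on top of your repository (Order and its methods are illustrative names, not from the question):

@Stateless
public class OrderService {

    @Resource
    private SessionContext context;

    @Inject
    private EntityRepositoryHibernate<Order, Long> orders;

    // The container opens a transaction before this method and commits when it returns;
    // no explicit save()/flush() is needed for the tracked changes.
    public void ship(Long orderId) {
        Order order = orders.byId(orderId);
        order.setStatus("SHIPPED");       // tracked by the persistence context
        if (!order.isShippable()) {
            context.setRollbackOnly();    // mark the JTA transaction for rollback
        }
    }
}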
I have a Hibernate project where a call to update() needs to compare the modified object in memory to the data that has already been saved to the database. For example, my business logic states that if a record is "effective" (the effective date is today or earlier), an update cannot change the effective date. In order to accomplish this, I have the following code (it's a little long and involved):
Manager
public class LogicManager {
    @Autowired
    SessionFactory sessionFactory;

    private Session getSession() {
        return sessionFactory.getCurrentSession();
    }

    public MemberRecord findRecord(Integer id) {
        // << Code to check authorization >>
        return memberRecordDAO.findById(id);
    }

    public void updateRecord(MemberRecord record) {
        getSession().evict(record);
        MemberRecord oldRecord = memberRecordDAO.findById(record.getId());
        Date oldEffectiveDate = oldRecord.getEffectiveDate();
        if (isEffective(oldEffectiveDate) &&
                !oldEffectiveDate.equals(record.getEffectiveDate())) {
            throw new IllegalArgumentException("Cannot change date");
        }
        // << Other data checks >>
        memberRecordDAO.update(record);
    }
}
DAO
public class MemberRecordDAO {
    @Autowired
    private SessionFactory sessionFactory;

    private Session getSession() {
        return sessionFactory.getCurrentSession();
    }

    public MemberRecord findById(Integer id) {
        return (MemberRecord) getSession()
            .getNamedQuery("findMemberById")
            .setInteger("id", id)
            .uniqueResult();
    }
}
Client Code
// ...
public void changeEffectiveDate(Integer recordId, Date newDate) {
    LogicManager manager = getBean("logicManager");
    MemberRecord record = manager.findRecord(recordId); // the manager method is findRecord, not findById
    record.setEffectiveDate(newDate);
    manager.updateRecord(record);
}
Before I added the evict() call in the Manager, I noticed that the manager was behaving in unexpected ways. In order to update a record, I'd first have to get that record by calling findById(), which would put the record into the Session cache. I'd make changes on that object, then call updateRecord() which would call findById() to get the (supposedly) persisted data. I realized that this second call to findById() would not look at the database data, but just pull the object from the cache. This would result in my oldEffectiveDate always being the same as my newly changed date, since record and oldRecord would be the exact same object.
To counteract this, I added the call to evict(), which I understood to mean that the object would be removed from the cache, forcing Hibernate to go to the database to get the MemberRecord. After I made that change, my MemberRecordDAO throws an exception when it calls uniqueResult(), which says AssertionFailed: possible nonthreadsafe access to session. When I run the debugger, I see that both LogicManager and MemberRecordDAO are using the same Session, which is what I thought was correct.
So, my questions:
Is my thinking/algorithm correct? Is evict() the correct thing to do? Is there a better way? I am not too savvy on Sessions, caching or evict(). I want to make sure that this logic is correct before dealing with threading issues.
Why is it that accessing the Session from the DAO is not threadsafe?
The evict() approach will work, but I believe the 'preferred Hibernate way of doing things' would be to use Session.merge(), as in:
public MemberRecord updateRecord(MemberRecord newRecord) {
    MemberRecord oldRecord = memberRecordDAO.findById(newRecord.getId());
    Date oldEffectiveDate = oldRecord.getEffectiveDate();
    if (isEffective(oldEffectiveDate) &&
            !oldEffectiveDate.equals(newRecord.getEffectiveDate())) {
        throw new IllegalArgumentException("Cannot change date");
    } else {
        MemberRecord merged = (MemberRecord) session.merge(newRecord);
        return merged;
    }
}
Just keep in mind that Session.merge() will update all of the fields of oldRecord with the values from newRecord.
This was the solution that passed my tests, but it still seems a little gross to me:
Manager
public void updateRecord(MemberRecord record) {
    MemberRecord oldRecord = record;
    record = record.clone(); // Added a clone() to MemberRecord
    getSession().evict(record);
    getSession().evict(oldRecord);
    getSession().refresh(oldRecord);
    // At this point, record has all of the new values, but none of the Hibernate
    // data attached to it, due to the clone().
    // oldRecord is populated with the data currently in the database.
    Date oldEffectiveDate = oldRecord.getEffectiveDate();
    if (isEffective(oldEffectiveDate) &&
            !oldEffectiveDate.equals(record.getEffectiveDate())) {
        throw new IllegalArgumentException("Cannot change date");
    }
    // << Other data checks >>
    memberRecordDAO.update(record);
}
If this type of thing can be done cleaner, please tell me.
I have two transaction managers and was curious if there is some possibility to get the one that has been used.
To be more concrete: how could underlyingMethod(..) find out which transactionManager was used, without being passed an additional "transactionManagerName/Ref" parameter?
#Transactional("transactionManager1")
public void transactionFromFirstTM() {
someClass.underlyingMethod()
}
#Transactional("transactionManager2")
public void transactionFromSecondTM() {
someClass.underlyingMethod()
}
OK, I have used this to get the Hibernate Session from the actual transaction manager:
protected Session getSession() {
    Map<Object, Object> resourceMap = TransactionSynchronizationManager.getResourceMap();
    Session session = null;
    for (Object value : resourceMap.values()) {
        if (value instanceof SessionHolder) {
            session = ((SessionHolder) value).getSession();
            break;
        }
    }
    return session;
}
I don't think you can, and you shouldn't do anything with the transaction manager anyway. Some actions on the current transaction are available in TransactionSynchronizationManager.
Another useful class is TransactionAspectUtils. But note that both are meant to be used internally, and you should not rely on them in many places in your code.
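For instance, a few read-only things TransactionSynchronizationManager can tell you about the current transaction (the transaction name usually reflects the @Transactional method, not the transaction manager bean):

boolean active   = TransactionSynchronizationManager.isActualTransactionActive();
String  txName   = TransactionSynchronizationManager.getCurrentTransactionName(); // may be null
boolean readOnly = TransactionSynchronizationManager.isCurrentTransactionReadOnly();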