This is not really a question, but I need your opinions on a matter...
I'm putting this post here because I know you are always active, so please don't consider it a bad question, and please share your opinions.
I've used Java dynamic proxies to centralize the JPA code that I use in standalone mode. Here's the dynamic proxy code:
package com.forat.service;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;
import javax.persistence.Persistence;
import com.forat.service.exceptions.DAOException;
/**
* Example of usage :
* <pre>
* OnlineFromService onfromService =
* (OnlineFromService) DAOProxy.newInstance(new OnlineFormServiceImpl());
* try {
* Student s = new Student();
* s.setName("Mohammed");
* s.setNationalNumber("123456");
* onfromService.addStudent(s);
* }catch (Exception ex) {
* System.out.println(ex.getMessage());
* }
*</pre>
* @author mohammed hewedy
*
*/
public class DAOProxy implements InvocationHandler{
private Object object;
private Logger logger = Logger.getLogger(this.getClass().getSimpleName());
private DAOProxy(Object object) {
this.object = object;
}
public static Object newInstance(Object object) {
return Proxy.newProxyInstance(object.getClass().getClassLoader(),
object.getClass().getInterfaces(), new DAOProxy(object));
}
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
EntityManagerFactory emf = null;
EntityManager em = null;
EntityTransaction et = null;
Object result = null;
try {
emf = Persistence.createEntityManagerFactory(Constants.UNIT_NAME);
em = emf.createEntityManager();
Method entityManagerSetter = object.getClass().
getDeclaredMethod(Constants.ENTITY_MANAGER_SETTER_METHOD, EntityManager.class);
entityManagerSetter.invoke(object, em);
et = em.getTransaction();
et.begin();
result = method.invoke(object, args);
et.commit();
return result;
}catch (Exception ex) {
et.rollback();
Throwable cause = ex.getCause();
logger.log(Level.SEVERE, cause.getMessage());
if (cause instanceof DAOException)
throw new DAOException(cause.getMessage(), cause);
else
throw new RuntimeException(cause.getMessage(), cause);
}finally {
em.close();
emf.close();
}
}
}
And here's the link that contains more info (http://m-hewedy.blogspot.com/2010/04/using-dynamic-proxies-to-centralize-jpa.html)
So, please give me your opinions.
Thanks.
So you've encapsulated the transaction demarcation logic in one place and use a dynamic proxy to enhance existing services with transaction management and reduce boilerplate code, right?
That sounds rather OK to me. Actually, what containers such as Spring or EJB do when we speak of declarative transaction demarcation is very similar. Implementation-wise, you can do it with a dynamic proxy, with byte code instrumentation, or even with AspectJ. I did something very similar once for a tiny testing framework. Here is a blog post about it.
The tricky parts that I see are:
1) Rollback only. As per the JPA spec, an entity transaction can be flagged as "rollback only". Such a transaction can never commit. So I feel like you should check for that between these two lines:
result = method.invoke(object, args);
et.commit();
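For illustration, a rough sketch of what that check might look like inside invoke(), using EntityTransaction.getRollbackOnly(); the exception to throw is just an assumption on my part:
result = method.invoke(object, args);
if (et.getRollbackOnly()) {
    // the transaction was flagged rollback-only somewhere downstream: it can never commit
    et.rollback();
    throw new RuntimeException("Transaction was marked rollback-only");
} else {
    et.commit();
}
return result;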
2) Re-entrancy. Most systems that have declarative transactions implement semantics in which a transaction is started only if there isn't one already active (see "Required" in this list of EJB annotations). It looks like you should check that with isActive in your logic.
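A minimal sketch of that guard; note this assumes the entity manager (and thus et) is shared between nested calls, which the current code, creating a new factory per invocation, does not yet do:
et = em.getTransaction();
boolean startedHere = false;
if (!et.isActive()) {
    // REQUIRED semantics: begin only if no transaction is already active
    et.begin();
    startedHere = true;
}
result = method.invoke(object, args);
if (startedHere) {
    // only the outermost invocation commits
    et.commit();
}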
3) Exception handling. Be very careful with exception propagation in a dynamic proxy. The proxy is supposed to be as transparent as possible to the client. If an exception other than DAOException leaks out of the DAO, the proxy will transform it into a RuntimeException, which doesn't sound optimal to me. Also, don't confuse an exception thrown because the reflective invoke itself failed with the exception wrapped by the invocation target, which I think you should re-throw as-is:
catch ( InvocationTargetException e )
{
Throwable nested = e.getTargetException();
throw nested;
}
Conclusion: the idea of using a dynamic proxy in this scenario sounds OK to me. But I suspect there are a few things to double-check in your code (I don't remember all the details of the JPA spec and of exception handling with dynamic proxies, but there are some tricky cases). This kind of code can hide subtle bugs, so it's worth taking the time to make it bullet-proof.
I've used something similar in the past, but coded against the Hibernate API (this was pre-JPA). Data access for most types of DAO was managed by an interface named after the object type, e.g. CustomerPersistence for managing Customer instances. Methods such as findXXX mapped to named queries, with parameter names in the method mapped to parameters in the query.
The implementations of the interfaces were proxies, which used the interface name, method names, parameter names etc. to invoke the appropriate methods of the Hibernate API.
It saves a lot of boilerplate coding, with an intuitive mapping to the underlying data access framework, and makes for very easy mocking of the data access layer.
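For illustration only, here is a minimal sketch of how such an interface-driven proxy could map a findXXX call onto a Hibernate named query; the CustomerPersistence interface, the query-naming convention, and the use of positional parameters are my assumptions, not the original code:
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.util.List;
import org.hibernate.Query;
import org.hibernate.Session;

public interface CustomerPersistence {
    // maps to the named query "Customer.findByLastName" by convention
    List findByLastName(String lastName);
}

class PersistenceInvocationHandler implements InvocationHandler {
    private final Session session;
    private final String entityName; // e.g. "Customer" for CustomerPersistence

    PersistenceInvocationHandler(Session session, String entityName) {
        this.session = session;
        this.entityName = entityName;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) {
        // convention: interface method "findByLastName" -> named query "Customer.findByLastName"
        Query query = session.getNamedQuery(entityName + "." + method.getName());
        if (args != null) {
            for (int i = 0; i < args.length; i++) {
                query.setParameter(i, args[i]); // positional parameters for simplicity
            }
        }
        return query.list();
    }
}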
So, I'm definitely "thumbs up" on using proxies.
Is there any specific reason why, when creating the entity manager for a JPA transaction, the native object is used instead of the proxy?
EntityManagerFactory emf = this.getEntityManagerFactory();
if (emf instanceof EntityManagerFactoryInfo) {
emf =
((EntityManagerFactoryInfo)emf).getNativeEntityManagerFactory();
}
Our requirement is to use the proxy (the created one) instead of the native object; we have created an aspect around the getSession method to add a tenant id (discriminator column) dynamically to queries.
Thanks,
Vishnu
To-the-point explanation:
The reason why the native EntityManagerFactory is unwrapped in JpaTransactionManager's createEntityManagerForTransaction() method is because still using the proxy would later create an extended EntityManager instead of a transaction-scoped one. The latter is the one we want in the case of regular transactional sessions.
What would happen if we still used the proxy:
Because the proxy delegates all calls to ManagedEntityManagerFactoryInvocationHandler, the call to createEntityManager(), from which we hope to obtain a transaction-scoped entity manager, will also be delegated.
If we look at the implementation of the above-mentioned handler's invoke() method, we observe that it calls invokeProxyMethod() from AbstractEntityManagerFactoryBean, where we see that it specifically triggers the creation of an extended entity manager instead of a transaction-scoped one. This is probably just a design choice of the Spring developers.
Object invokeProxyMethod(Method method, @Nullable Object[] args) throws Throwable {
// unrelated code here
if (method.getName().equals("createEntityManager") && args != null && args.length > 0 &&
args[0] == SynchronizationType.SYNCHRONIZED) {
EntityManager rawEntityManager = (args.length > 1 ?
getNativeEntityManagerFactory().createEntityManager((Map<?, ?>) args[1]) :
getNativeEntityManagerFactory().createEntityManager());
// we get an extended entity manager
return ExtendedEntityManagerCreator.createApplicationManagedEntityManager(rawEntityManager, this, true);
}
// more unrelated code here
Object retVal = method.invoke(getNativeEntityManagerFactory(), args);
if (retVal instanceof EntityManager) {
EntityManager rawEntityManager = (EntityManager) retVal;
retVal = ExtendedEntityManagerCreator.createApplicationManagedEntityManager(rawEntityManager, this, false);
}
// in the final path, we also get an extended entity manager
return retVal;
}
Some paths I would look into to circumvent your problem:
You may try to subclass JpaTransactionManager and override createEntityManagerForTransaction() with the minimum changes that would accommodate your mentioned use of aspects later on (see the sketch after this list). However, simply not unwrapping the EntityManagerFactory proxy would probably not be what you intend, due to the above explanation related to extended entity managers.
Another thing you may want to look into is changing the aspects you have mentioned so that they do not require Spring's entityManagerFactory proxy and play fine with the native object. As a last resort, you could try to subclass and override multiple Spring ORM classes' methods mentioned throughout this solution so that you accommodate both Spring's behaviour and your aspects.
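To make the first path concrete, here is a rough, heavily simplified sketch of such a subclass; the exact protected methods available depend on your Spring version, and, as explained above, keeping the proxy means you will get extended entity managers:
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.util.CollectionUtils;

public class ProxyKeepingJpaTransactionManager extends JpaTransactionManager {

    @Override
    protected EntityManager createEntityManagerForTransaction() {
        // deliberately skip the EntityManagerFactoryInfo unwrapping done by the default
        // implementation and create the EntityManager through the (proxied) factory
        EntityManagerFactory emf = getEntityManagerFactory();
        Map<String, Object> properties = getJpaPropertyMap();
        return !CollectionUtils.isEmpty(properties)
                ? emf.createEntityManager(properties)
                : emf.createEntityManager();
    }
}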
I have this class, and I thought of three ways to handle the detached entity state in case of persistence exceptions (which are handled elsewhere):
@ManagedBean
@ViewScoped
public class EntityBean implements Serializable
{
@EJB
private PersistenceService service;
private Document entity;
public void update()
{
// HANDLING 1. ignore errors
service.transact(em ->
{
entity = em.merge(entity);
// some other code that modifies [entity] properties:
// entity.setCode(...);
// entity.setResposible(...);
// entity.setSecurityLevel(...);
}); // an exception may be thrown on method return (rollback),
// but [entity] has already been reassigned with a "dirty" one.
//------------------------------------------------------------------
// HANDLING 2. ensure entity is untouched before flush is ok
service.transact(em ->
{
Document managed = em.merge(entity);
// some other code that modifies [managed] properties:
// managed.setCode(...);
// managed.setResposible(...);
// managed.setSecurityLevel(...);
em.flush(); // an exception may be thrown here (rollback)
// forcing method exit without [entity] being reassigned.
entity = managed;
}); // an exception may be thrown on method return (rollback),
// but [entity] has already been reassigned with a "dirty" one.
//------------------------------------------------------------------
// HANDLING 3. ensure entity is untouched before whole transaction is ok
AtomicReference<Document> reference = new AtomicReference<>();
service.transact(em ->
{
Document managed = em.merge(entity);
// some other code that modifies [managed] properties:
// managed.setCode(...);
// managed.setResposible(...);
// managed.setSecurityLevel(...);
reference.set(managed);
}); // an exception may be thrown on method return (rollback),
// and [entity] is safe, it's not been reassigned yet.
entity = reference.get();
}
...
}
PersistenceService#transact(Consumer<EntityManager> consumer) can throw unchecked exceptions.
The goal is to maintain the state of the entity aligned with the state of the database, even in case of exceptions (prevent entity to become "dirty" after transaction fail).
Method 1. is obviously naive and doesn't guarantee coherence.
Method 2. asserts that nothing can go wrong after flushing.
Method 3. prevents the new entity assignment if there's an exception anywhere in the whole transaction.
Questions:
Is method 3. really safer than method 2.?
Are there cases where an exception is thrown between flush [excluded] and commit [included]?
Is there a standard way to handle this common problem?
Thank you
Note that I'm already able to roll back the transaction and close the EntityManager (PersistenceService#transact will do it gracefully), but I need to deal with the fact that the database state and the business objects get out of sync. Usually this is not a problem. In my case it is the problem, because exceptions are usually generated by the BeanValidator (those on the JPA side, not on the JSF side, for computed values that depend on user input) and I want the user to enter correct values and try again, without losing the values entered before.
Side note: I'm using Hibernate 5.2.1
This is the PersistenceService (CMT):
@Stateless
@Local
public class PersistenceService implements Serializable
{
@PersistenceContext
private EntityManager em;
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void transact(Consumer<EntityManager> consumer)
{
consumer.accept(em);
}
}
@DraganBozanovic
That's it! Great explanation for point 1. and 2.
I'd just love you to elaborate a little more on point 3. and give me some advice on real-world use case.
However, I would definitely not use AtomicReference or similar cumbersome constructs. Java EE, Spring and other frameworks and application containers support declaring transactional methods via annotations: Simply use the result returned from a transactional method.
When you have to modify a single entity, the transactional method would just take the detached entity as parameter and return the updated entity, easy.
public Document updateDocument(Document doc)
{
Document managed = em.merge(doc);
// managed.setXxx(...);
// managed.setYyy(...);
return managed;
}
But when you need to modify more than one in a single transaction, the method can become a real pain:
public LinkTicketResult linkTicket(Node node, Ticket ticket)
{
LinkTicketResult result = new LinkTicketResult();
Node managedNode = em.merge(node);
result.setNode(managedNode);
// modify managedNode
Ticket managedTicket = em.merge(ticket);
result.setTicket(managedTicket);
// modify managedTicket
Remark managedRemark = createRemark(...);
result.setRemark(managedRemark);
return result;
}
In this case, my pain:
I have to create a dedicated transactional method (maybe a dedicated @EJB too)
That method will be called only once (it will have just one caller) - it is a "one-shot", non-reusable public method. Ugly.
I have to create the dummy class LinkTicketResult
That class will be instantiated only once, in that method - is "one-shot"
The method could have many parameters (or another dummy class LinkTicketParameters)
JSF controller actions, in most cases, will just call an EJB method, extract the updated entities from the returned container and reassign them to local fields
My code will be steadily polluted with "one-shotters", too many for my taste.
Probably I'm not seeing something big that's just in front of me, I'll be very grateful if you can point me in the right direction.
Is method 3. really safer than method 2.?
Yes. Not only is it safer (see point 2), but it is conceptually more correct, as you change transaction-dependent state only when you proved that the related transaction has succeeded.
Are there cases where an exception is thrown between flush [excluded] and commit [included]?
Yes. For example:
LockMode.OPTIMISTIC:
Optimistically assume that transaction will not experience contention
for entities. The entity version will be verified near the transaction
end.
It would be neither performant nor practically useful to check for optimistic lock violations during each flush operation within a single transaction.
Deferred integrity constraints (enforced at commit time in db). Not used often, but are an illustrative example for this case.
Later maintenance and refactoring. You or somebody else may later introduce additional changes after the last explicit call to flush.
Is there a standard way to handle this common problem?
Yes, I would say that your third approach is the standard one: Use the results of a complete and successful transaction.
However, I would definitely not use AtomicReference or similar cumbersome constructs. Java EE, Spring and other frameworks and application containers support declaring transactional methods via annotations: Simply use the result returned from a transactional method.
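Applied to your PersistenceService, a minimal sketch of such a result-returning method could look like this; the Function-based overload is my suggestion, not existing code:
@Stateless
@Local
public class PersistenceService implements Serializable
{
    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public <T> T transact(Function<EntityManager, T> work)
    {
        return work.apply(em); // the container commits when this method returns
    }
}

// caller side: the field is reassigned only if no exception propagated,
// i.e. only after a successful commit
entity = service.transact(em ->
{
    Document managed = em.merge(entity);
    // managed.setCode(...);
    return managed;
});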
Not sure if this is entirely to the point, but there is only one way to recover after exceptions: roll back and close the EM. From https://docs.jboss.org/hibernate/entitymanager/3.6/reference/en/html/transactions.html#transactions-basics-issues
An exception thrown by the Entity Manager means you have to rollback
your database transaction and close the EntityManager immediately
(discussed later in more detail). If your EntityManager is bound to
the application, you have to stop the application. Rolling back the
database transaction doesn't put your business objects back into the
state they were at the start of the transaction. This means the
database state and the business objects do get out of sync. Usually
this is not a problem, because exceptions are not recoverable and you
have to start over your unit of work after rollback anyway.
-- EDIT--
Also see http://piotrnowicki.com/2013/03/jpa-and-cmt-why-catching-persistence-exception-is-not-enough/
ps: downvote is not mine.
There are countless questions here about how to solve the "could not initialize proxy" problem: eager fetching, keeping the transaction open, opening another one, OpenEntityManagerInViewFilter, and whatever.
But is it possible to simply tell Hibernate to ignore the problem and pretend the collection is empty? In my case, not fetching it before simply means that I don't care.
This is actually an XY problem with the following Y:
I'm having classes like
class Detail {
@ManyToOne(optional=false) Master master;
...
}
class Master {
@OneToMany(mappedBy="master") List<Detail> details;
...
}
and want to serve two kinds of requests: One returning a single master with all its details and another one returning a list of masters without details. The result gets converted to JSON by Gson.
I've tried session.clear and session.evict(master), but they don't touch the proxy used in place of details. What worked was
master.setDetails(nullOrSomeCollection)
which feels rather hacky. I'd prefer the "ignorance" as it'd be applicable generally without knowing what parts of what are proxied.
Writing a Gson TypeAdapter ignoring instances of AbstractPersistentCollection with initialized=false could be a way, but this would depend on org.hibernate.collection.internal, which is surely no good thing. Catching the exception in the TypeAdapter doesn't sound much better.
Update after some answers
My goal is not "how to get the data loaded instead of the exception", but "how to get null instead of the exception".
Dragan raises a valid point that forgetting to fetch and returning a wrong data would be much worse than an exception. But there's an easy way around it:
do this for collections only
never use null for them in the domain model
let null (rather than an empty collection) indicate unfetched data
This way, the result can never be wrongly interpreted. Should I ever forget to fetch something, the response will contain null, which is clearly invalid.
You could utilize Hibernate.isInitialized, which is part of the Hibernate public API.
So, in the TypeAdapter you can add something like this:
if ((value instanceof Collection) && !Hibernate.isInitialized(value)) {
result = new ArrayList();
}
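Wired into Gson, a minimal sketch (the factory name is assumed) could look like the following; it serializes uninitialized collections as an empty array, or as null if you prefer, per the question's update:
import java.io.IOException;
import java.util.Collection;
import com.google.gson.Gson;
import com.google.gson.TypeAdapter;
import com.google.gson.TypeAdapterFactory;
import com.google.gson.reflect.TypeToken;
import com.google.gson.stream.JsonReader;
import com.google.gson.stream.JsonWriter;
import org.hibernate.Hibernate;

public class UninitializedCollectionAdapterFactory implements TypeAdapterFactory {
    @Override
    public <T> TypeAdapter<T> create(Gson gson, TypeToken<T> type) {
        if (!Collection.class.isAssignableFrom(type.getRawType())) {
            return null; // not a collection: let Gson pick another adapter
        }
        final TypeAdapter<T> delegate = gson.getDelegateAdapter(this, type);
        return new TypeAdapter<T>() {
            @Override
            public void write(JsonWriter out, T value) throws IOException {
                if (value != null && !Hibernate.isInitialized(value)) {
                    out.beginArray(); // unfetched: emit [] (or use out.nullValue() for null)
                    out.endArray();
                } else {
                    delegate.write(out, value);
                }
            }
            @Override
            public T read(JsonReader in) throws IOException {
                return delegate.read(in);
            }
        };
    }
}
You would then register it via new GsonBuilder().registerTypeAdapterFactory(new UninitializedCollectionAdapterFactory()).create().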
However, in my modest opinion your approach in general is not the way to go.
"In my case, not fetching it before simply means that I don't care."
Or it means you forgot to fetch it and now you are returning wrong data (worse than getting the exception; the consumer of the service thinks the collection is empty, but it is not).
I would not like to propose "better" solutions (it is not topic of the question and each approach has its own advantages), but the way that I solve issues like these in most use cases (and it is one of the ways commonly adopted) is using DTOs: Simply define a DTO that represents the response of the service, fill it in the transactional context (no LazyInitializationExceptions there) and give it to the framework that will transform it to the service response (json, xml, etc).
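For example, a tiny hedged sketch of that DTO idea (all names are illustrative):
// filled inside the transactional context, so no lazy loading can happen during serialization
public class MasterDto {
    public long id;
    public String name;
    public List<DetailDto> details; // populated only by the endpoint that needs details
}

// inside the service/transaction:
MasterDto dto = new MasterDto();
dto.id = master.getId();
dto.name = master.getName();
return dto; // Gson serializes plain fields, never a Hibernate proxy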
What you can try is a solution like the following.
Create an interface named LazyLoader:
@FunctionalInterface // Java 8
public interface LazyLoader<T> {
void load(T t);
}
And in your Service
public class Service {
    public List<Master> getWithDetails(LazyLoader<Master> loader) {
        // code to get masterList from the session
        for (Master master : masterList) {
            loader.load(master);
        }
        return masterList; // return the list once the needed lazy parts are loaded
    }
}
And call this service like below
service.getWithDetails(new LazyLoader<Master>() {
public void load(Master master) {
for(Detail detail:master.getDetails()) {
detail.getId(); // This will load detail
}
}
});
And in Java 8 you can use Lambda as it is a Single Abstract Method (SAM).
service.getWithDetails((master) -> {
for(Detail detail:master.getDetails()) {
detail.getId(); // This will load detail
}
});
You can use the solution above with session.clear and session.evict(master)
I have raised a similar question in the past (why a dependent collection isn't evicted when the parent entity is), and it resulted in an answer which you could try for your case.
The solution for this is to use queries instead of associations (one-to-many or many-to-many). Even one of the original authors of Hibernate said that Collections are a feature and not an end-goal.
In your case you can get better flexibility of removing the collections mapping and simply fetch the associated relations when you need them in your data access layer.
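For instance, a minimal sketch of fetching the details on demand with a query instead of navigating the mapped collection (assuming a JPA EntityManager; a Hibernate Session query works the same way):
// only the endpoint that actually needs the details runs this query; the
// "list masters" endpoint never touches a details collection at all
List<Detail> details = em.createQuery(
        "select d from Detail d where d.master = :master", Detail.class)
    .setParameter("master", master)
    .getResultList();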
You could create a Java proxy for every entity, so that every method is surrounded by a try/catch block that returns null when a LazyInitializationException is caught.
For this to work, all your entities would need to implement an interface and you'd need to reference this interface (instead of the entity class) all throughout your program.
If you can't (or just don't want to) use interfaces, then you could try to build a dynamic proxy with javassist or cglib, or even manually, as explained in this article.
If you go by common Java proxies, here's a sketch:
public static <T> T ignoringLazyInitialization(
final Object entity,
final Class<T> entityInterface) {
return (T) Proxy.newProxyInstance(
entityInterface.getClassLoader(),
new Class[] { entityInterface },
new InvocationHandler() {
@Override
public Object invoke(
Object proxy,
Method method,
Object[] args)
throws Throwable {
try {
return method.invoke(entity, args);
} catch (InvocationTargetException e) {
Throwable cause = e.getTargetException();
if (cause instanceof LazyInitializationException) {
return null;
}
throw cause;
}
}
});
}
So, if you have an entity A as follows:
public interface A {
// getters & setters and other methods DEFINITIONS
}
with its implementation:
public class AImpl implements A {
// getters & setters and other methods IMPLEMENTATIONS
}
Then, assuming you have a reference to the entity class (as returned by Hibernate), you could create a proxy as follows:
AImpl entityAImpl = ...; // some query, load, etc
A entityA = ignoringLazyInitialization(entityAImpl, A.class);
NOTE 1: You'd need to proxy collections returned by Hibernate as well (left as an exercise for the reader) ;)
NOTE 2: Ideally, you should do all this proxying stuff in a DAO or in some type of facade, so that everything is transparent to the user of the entities.
NOTE 3: This is by no means optimal, since it creates a stack trace for every access to a non-initialized field.
NOTE 4: This works, but adds complexity; consider if it's really necessary.
I'm writing javadoc for my JSP web application. I have a class (created according to the Command pattern and located in the service layer) called AcceptOrder. This class contains a method execute, which calls the method acceptOrder from the DAO layer.
/**
* Class allows customer order (which was assigned by dispatcher) be accepted by driver.
*
*
*/
public class AcceptOrder implements Command {
private static final String USER_ATTRIBUTE = "user";
private static final String ORDER_ID_ATTRIBUTE = "order_id";
private static final String DAO_COMMAND_EXCEPTION_MESSAGE = "Exception on executing DAO command";
private static final String WRONG_ORDER_ID_EXCEPTION_MESSAGE = "Wrong order ID";
/** {@inheritDoc}
* <p> Accepts a user order which was assigned by the dispatcher.
* @param request request object
* @param response response object
*/
@Override
public String execute(HttpServletRequest request, HttpServletResponse response) throws CommandException {
DriverDao driverDao = MySqlDaoFactory.getInstance().getDriverDao();
try {
User user = (User) request.getSession().getAttribute(USER_ATTRIBUTE);
int userId = user.getId();
int orderId = Integer.valueOf(request.getParameter(ORDER_ID_ATTRIBUTE));
driverDao.acceptOrder(orderId, userId);
} catch (DaoException e) {
throw new CommandException(DAO_COMMAND_EXCEPTION_MESSAGE, e);
} catch (NumberFormatException e) {
throw new CommandException(WRONG_ORDER_ID_EXCEPTION_MESSAGE, e);
}
return PageManager.getInstance().generatePageRequest(CommandName.SHOW_DRIVER_ORDER);
}
}
Also, I have a method in the driver DAO class (in the DAO layer) called acceptOrder, which connects to the database and applies some changes according to the parameters.
@Override
public void acceptOrder(int orderId, int userId) throws DaoException {
ConnectionPool connectionPool = null;
Connection connection = null;
PreparedStatement preparedStatement = null;
ResultSet result = null;
try {
connectionPool = ConnectionPool.getInstance();
connection = connectionPool.takeConnection();
preparedStatement = connection.prepareStatement(SQL_ACCEPT_ORDER);
preparedStatement.setInt(1, userId);
preparedStatement.setInt(2, orderId);
preparedStatement.executeUpdate();
} catch (SQLException e) {
throw new DaoException(STATEMENT_EXCEPTION_MESSAGE, e);
} catch (ConnectionPoolException e) {
throw new DaoException(CONNECTION_POOL_EXCEPTION_MESSAGE, e);
} finally {
connectionPool.closeConnection(connection, preparedStatement, result);
}
}
So the question is: what javadoc should I write for it, and is my javadoc for the command method execute correct? What should be written in the description of both methods? Their descriptions seem to be the same: accept a customer order.
I think you should explain better what the method does, the cases when an exception is thrown, and the return values: what is actually returned and why. Also explain how to call the method, with examples. Common usage, dependencies used, general workflow, validation and modeling you can explain at the class level.
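For instance, a hedged sketch of what a more informative javadoc for the DAO's acceptOrder method could look like (the wording is illustrative, not prescriptive):
/**
 * Marks the order with the given id as accepted by the given driver.
 * <p>
 * The update runs in its own connection taken from the connection pool;
 * nothing is returned because the caller only needs to know whether the
 * acceptance succeeded.
 *
 * @param orderId id of the order assigned by the dispatcher; must refer to an existing order
 * @param userId  id of the driver accepting the order
 * @throws DaoException if the connection pool is unavailable or the SQL update fails
 */
void acceptOrder(int orderId, int userId) throws DaoException;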
I try to follow the rules below.
Does the code you write provide an API for external users? Or is it just an internal implementation hidden behind other interfaces and classes (and they should provide the javadoc)?
When you write the doc, think what you would like to see from the API you don't know
Don't write obvious things (e.g. trivial javadoc for getters and setters; don't just repeat the method name with spaces added, etc.)
Probably you shouldn't share implementation details, as they shouldn't matter to the user of your API. However, sometimes there are details you need to share to warn users so they don't misuse your API. If you need to leave a message for future code maintainers, leave it in a code comment, not in the public javadoc.
Document null handling. Is it accepted as the parameter value? If yes, what meaning does it have? Can the method return null? If yes, under what circumstances?
Document exceptions. Provide useful information for API clients so they can appropriately handle them.
Document any assumptions about the class state. Does the object need to be in a particular state in order to call the method because otherwise it will throw an exception?
Inheritance: is the class designed for extension (otherwise it should be marked final, right)? If yes, should subclasses obey any specific rules?
Thread safety: is the class thread safe? If the class is designed for extension, how subclasses should preserve thread safety?
Depending on your domain, performance might be very important. Maybe you need to document time and space complexity.
And again, always try to think what information you would expect from an external library. I use the Java SDK and Java EE javadoc a lot. Some parts of it are great. From others I would expect information such as whether I can use an object from multiple threads, but there is not a single word about it and I have to refer to the sources (and there is never a guarantee that my findings are correct).
On the other hand, also think about whether you should write a javadoc comment at all. Is it worth it? Will you have external clients of your API (especially ones without access to your source code)? If you do, you probably should also write a reference manual. If you have a small application and a small team, it might be easier to just skim the short method body.
Note that I am not saying you shouldn't write javadoc at all. I am trying to say that javadoc is a tool with a specific purpose. Think if writing a specific snippet of javadoc will help you to fulfill it.
I have code that saves a bean and updates another bean in the DB via Hibernate. This must be done in the same transaction, because if something goes wrong (e.g. an exception is thrown), a rollback must be executed for both operations.
public class BeanDao extends ManagedSession {
public Integer save(Bean bean) {
Session session = null;
try {
session = createNewSessionAndTransaction();
Integer idBean = (Integer) session.save(bean); // SAVE
doOtherAction(bean); // UPDATE
commitTransaction(session);
return idBean;
} catch (RuntimeException re) {
log.error("get failed", re);
if (session != null) {
rollbackTransaction(session);
}
throw re;
}
}
private void doOtherAction(Bean bean) {
Integer idOtherBean = bean.getIdOtherBean();
OtherBeanDao otherBeanDao = new OtherBeanDao();
OtherBean otherBean = otherBeanDao.findById(idOtherBean);
.
. (doing operations)
.
otherBeanDao.attachDirty(otherBean)
}
}
The problem is:
In case session.save(bean) throws an error, I get an AssertionFailure, because the function doOtherAction (which is used in other parts of the project) uses the session after an exception has been thrown.
The first thing I thought of was to extract the code of the function doOtherAction, but then I would have the same code duplicated, which doesn't seem like the best practice.
What is the best way to refactor this?
It's a common practice to manage transactions at one level above DAOs, in services or other business logic classes. That way you can, based on the business/service logic, in one case do two DAO operations in one transaction and, in another case, do them in separate transactions.
I'm a huge fan of Declarative Transaction Management, if you can spare the time to get it working (a piece of cake with an application server such as GlassFish or JBoss, and easy with Spring). If you annotate your business method with @TransactionAttribute(REQUIRED) (it can even be set as the default) and it calls the two DAO methods, you will get exactly what you want: everything gets committed at once or rolled back on an Exception.
This solution is about as loosely coupled as it gets.
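As a rough sketch only (the service class is invented, and the DAOs would then have to use the container-managed persistence context instead of opening their own sessions), declarative demarcation could look like this:
@Stateless
public class BeanService {

    @EJB
    private BeanDao beanDao;
    @EJB
    private OtherBeanDao otherBeanDao;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public Integer saveAndUpdate(Bean bean) {
        Integer idBean = beanDao.save(bean);                          // first operation
        OtherBean otherBean = otherBeanDao.findById(bean.getIdOtherBean());
        // business changes on otherBean go here
        otherBeanDao.attachDirty(otherBean);                          // second operation, same transaction
        return idBean;                                                // committed together, or rolled back on exception
    }
}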
The others are correct in that they take into account what is common practice currently.
But that doesn't really help you with your current practice.
What you should do is create two new DAO methods. Such as CreateGlobalSession and CommitGlobalSession.
What these do is the same thing as your current create and commit routines.
The difference is that they set a "global" session variable (most likely best done with a ThreadLocal). Then you change the current routines so that they check if this global session already exists. If your create detects the global session, then simply return it. If your commit detects the global session, then it does nothing.
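A rough sketch of that idea, reusing the ManagedSession base class from the question; the session factory lookup and the method bodies are guesses about how the existing create/commit routines work:
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public abstract class ManagedSession {

    private static final ThreadLocal<Session> GLOBAL_SESSION = new ThreadLocal<Session>();

    protected SessionFactory sessionFactory; // initialized elsewhere in the project

    protected Session createNewSessionAndTransaction() {
        Session existing = GLOBAL_SESSION.get();
        if (existing != null) {
            return existing;                         // a global session exists: simply return it
        }
        Session session = sessionFactory.openSession();
        session.beginTransaction();
        return session;
    }

    protected void commitTransaction(Session session) {
        if (GLOBAL_SESSION.get() != null) {
            return;                                  // a global session is active: do nothing here
        }
        session.getTransaction().commit();
        session.close();
    }

    public void createGlobalSession() {
        GLOBAL_SESSION.set(createNewSessionAndTransaction());
    }

    public void commitGlobalSession() {
        Session session = GLOBAL_SESSION.get();
        if (session != null) {
            session.getTransaction().commit();
            session.close();
            GLOBAL_SESSION.remove();
        }
    }

    public void rollbackGlobalSession() {
        Session session = GLOBAL_SESSION.get();
        if (session != null) {
            if (session.getTransaction().isActive()) {
                session.getTransaction().rollback();
            }
            session.close();
            GLOBAL_SESSION.remove();
        }
    }
}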
Now when you want to use it you do this:
try {
dao.createGlobalSession();
beanA.save();
beanb.save();
dao.commitGlobalSession();
} finally {
dao.rollbackGlobalSession();
}
Make sure you wrap the process in a try block so that you can reset your global session if there's an error.
While the other techniques are considered best practice and ideally you could one day evolve to something like that, this will get you over the hump with little more than 3 new methods and changing two existing methods. After that the rest of your code stays the same.