UPDATE
Liferay ticket accepted, solution in dev: https://issues.liferay.com/browse/LPS-82954
Situation
My context is a parallel import of Liferay layouts through a Liferay portlet built with Spring. When I execute it in Liferay DXP, the API call to add a Layout throws a StaleObjectStateException. (https://github.com/liferay/liferay-portal/blob/d969e0e839db9ea64267f7bff0a76be93cd26fa0/portal-impl/src/com/liferay/portal/service/impl/LayoutLocalServiceImpl.java)
The exception occurs when the API internally updates the corresponding LayoutSet (incrementing the page count for the Group to which the layout was added a moment earlier).
This does not happen in single-threaded execution!
Actions
First, I synchronized that call, without any improvement.
Meanwhile I read that synchronizing the threads alone won't help, because the transaction itself may not complete inside the synchronized block, so I also added a transactional annotation, again without improvement.
So far I have gained the following insight:
There is a bug in LayoutSetLocalService.updatePageCount(): the updated LayoutSet is not returned, so the MVCC version of the LayoutSet (introduced with Liferay 7/DXP) is not incremented. But this should not affect my situation (https://github.com/liferay/liferay-portal/blob/7eb86ce5f6a7b2c9a405853a20fe81592e639219/portal-impl/src/com/liferay/portal/service/impl/LayoutSetLocalServiceImpl.java).
Can anybody give me a hint on whether there is any way to tackle this?
Or is this a consequence of optimistic locking that I have to live with?
Did I miss a piece of the puzzle when creating the threads? Maybe some Hibernate session configuration detail?
Code Excerpts
-> Test Project Available: https://github.com/andrebiegel/liferay-layout-issue.git
private static final Object layoutCreationLock = new Object();
synchronized (layoutCreationLock) {
newLayout = addLayoutApiCall(pageContext, serviceContext, typeSettings, friendlyURLMap);
}
@Transactional(propagation = Propagation.REQUIRES_NEW)
public Layout addLayoutApiCall(IPageContext pageContext, ServiceContext serviceContext, String typeSettings,
        Map<Locale, String> friendlyURLMap) throws PortalException {
    Layout newLayout = LayoutLocalServiceUtil.addLayout(
        pageContext.getProjectConfiguration().getUserId(),
        pageContext.getProjectConfiguration().getSiteId(),
        pageContext.isPrivatePage(),
        pageContext.getParentLayoutId(),
        pageContext.getNamesMap(),
        pageContext.getTitleMap(),
        pageContext.getDescriptionMap(),
        pageContext.getKeywordsMap(),
        pageContext.getRobotsMap(),
        pageContext.getPageType(),
        typeSettings,
        pageContext.isHiddenPage(),
        friendlyURLMap,
        serviceContext);
    return newLayout;
}
Unfortunately Liferay will not fix this. The ticket has been closed and declared not a use case; the reason given is that the fix produced negative consequences in other use cases. So Liferay seems to have an issue in its transaction management. By the way, I have also seen such exceptions when concurrently adding Expandos.
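For anyone hitting the same wall: the workaround I ended up considering (my own assumption, nothing from the ticket) is to treat the optimistic-lock failure as retryable and simply repeat the add. It does not fix the underlying transaction issue, and how Liferay wraps the Hibernate exception may differ in your version, so this is only a sketch reusing the addLayoutApiCall method shown above:
// Hedged sketch: retry the layout creation when a StaleObjectStateException shows up
// anywhere in the cause chain. Assumption: the failure surfaces as an unchecked exception.
private Layout addLayoutWithRetry(IPageContext pageContext, ServiceContext serviceContext,
        String typeSettings, Map<Locale, String> friendlyURLMap, int maxAttempts)
        throws PortalException {
    for (int attempt = 1; ; attempt++) {
        try {
            return addLayoutApiCall(pageContext, serviceContext, typeSettings, friendlyURLMap);
        }
        catch (RuntimeException e) {
            boolean stale = false;
            for (Throwable t = e; t != null; t = t.getCause()) {
                if (t instanceof org.hibernate.StaleObjectStateException) {
                    stale = true;
                    break;
                }
            }
            if (!stale || attempt >= maxAttempts) {
                throw e;
            }
            try {
                // brief backoff so the competing page-count update can commit
                Thread.sleep(50L * attempt);
            }
            catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw e;
            }
        }
    }
}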
Related
My unit tests are seeing org.hibernate.LazyInitializationException: could not initialize proxy [org.openapitools.entity.MenuItem#5] - no Session. I'm not sure why they expect a session in a unit test. I'm trying to write to an in-memory H2 database for the unit tests of my Controller classes that implement the RESTful APIs. I'm not using any mock objects, because I want to test the actual database transactions. This worked fine when I was using Spring Boot version 1.x, but broke when I moved to version 2. (I'm not sure that's what caused the tests to break, since I made lots of other changes; my point is that my code has already passed these tests.)
My Repositories extend JpaRepository, so I'm using a standard Hibernate interface.
There are many answers to this question on Stack Overflow, but very few describe a solution that I could use with Spring Data.
Addendum: Here's a look at the unit test:
@Test
public void testDeleteOption() throws ResponseException {
MenuItemDto menuItemDto = createPizzaMenuItem();
ResponseEntity<CreatedResponse> responseEntity
= adminApiController.addMenuItem(menuItemDto);
final CreatedResponse body = responseEntity.getBody();
assertNotNull(body);
Integer id = body.getId();
MenuItem item = menuItemApiController.getMenuItemTestOnly(id);
// Hibernate.initialize(item); // attempted fix blows up
List<String> nameList = new LinkedList<>();
for (MenuItemOption option : item.getAllowedOptions()) { // blows up here
nameList.add(option.getName());
}
assertThat(nameList, hasItems("pepperoni", "olives", "onions"));
// ... (more code)
}
My test application.properties has these settings
spring.datasource.url=jdbc:h2:mem:pizzaChallenge;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.username=pizza
spring.datasource.password=pizza
spring.jpa.show-sql=true
This is not standard Hibernate but Spring Data. You have to understand that Hibernate uses lazy loading to avoid loading the whole object graph from the database. If you close the session or the connection to the database, e.g. by ending a transaction, Hibernate can't lazy-load anymore, and apparently your code tries to access state that needs lazy loading.
You can use @EntityGraph on your repository to specify that an association should be fetched, or you can avoid accessing state that isn't initialized outside of a transaction. Maybe you just need to enlarge the transaction scope by putting @Transactional on the method that calls the repository and accesses the state, so that lazy loading works.
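A minimal sketch of both options, using the MenuItem/MenuItemOption types from your question; the repository query name and the service class are made-up illustrations, not existing code:
// MenuItemRepository.java -- Option 1: fetch the association via an entity graph
import java.util.Optional;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;

public interface MenuItemRepository extends JpaRepository<MenuItem, Integer> {

    @EntityGraph(attributePaths = "allowedOptions")
    Optional<MenuItem> findWithAllowedOptionsById(Integer id);
}

// MenuItemService.java -- Option 2: widen the transaction so lazy loading still has a session
import java.util.ArrayList;
import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MenuItemService {

    private final MenuItemRepository repository;

    public MenuItemService(MenuItemRepository repository) {
        this.repository = repository;
    }

    @Transactional(readOnly = true)
    public List<String> optionNames(Integer id) {
        MenuItem item = repository.findById(id)
                .orElseThrow(() -> new IllegalArgumentException("No MenuItem " + id));
        List<String> names = new ArrayList<>();
        for (MenuItemOption option : item.getAllowedOptions()) { // collection initialized inside the transaction
            names.add(option.getName());
        }
        return names;
    }
}
In a test you would then call the service method (or the entity-graph query) instead of touching the lazy collection directly on a detached entity.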
I found a way around this. I'm not sure if this is the best approach, so if anyone has any better ideas, I'd appreciate hearing from them.
Here's what I did. First of all, before reading a value from the lazy-loaded entity, I call Hibernate.initialize(item);
This throws the same exception. But now I can add a property to the test version of application.properties that says
spring.jpa.properties.hibernate.enable_lazy_load_no_trans=true
Now the initialize method will work.
P.S. I haven't been able to find a good reference for Spring properties like this one. If anyone knows where I can see the available properties, I'd love to hear about it. The folks at Spring don't do a very good job of documenting these properties. Even when they mention a specific property, they don't provide a link that might explain it more thoroughly.
I have this class, and I have thought of three ways to handle detached entity state in case of persistence exceptions (which are handled elsewhere):
@ManagedBean
@ViewScoped
public class EntityBean implements Serializable
{
@EJB
private PersistenceService service;
private Document entity;
public void update()
{
// HANDLING 1. ignore errors
service.transact(em ->
{
entity = em.merge(entity);
// some other code that modifies [entity] properties:
// entity.setCode(...);
// entity.setResposible(...);
// entity.setSecurityLevel(...);
}); // an exception may be thrown on method return (rollback),
// but [entity] has already been reassigned with a "dirty" one.
//------------------------------------------------------------------
// HANDLING 2. ensure entity is untouched before flush is ok
service.transact(em ->
{
Document managed = em.merge(entity);
// some other code that modifies [managed] properties:
// managed.setCode(...);
// managed.setResposible(...);
// managed.setSecurityLevel(...);
em.flush(); // an exception may be thrown here (rollback)
// forcing method exit without [entity] being reassigned.
entity = managed;
}); // an exception may be thrown on method return (rollback),
// but [entity] has already been reassigned with a "dirty" one.
//------------------------------------------------------------------
// HANDLING 3. ensure entity is untouched before whole transaction is ok
AtomicReference<Document> reference = new AtomicReference<>();
service.transact(em ->
{
Document managed = em.merge(entity);
// some other code that modifies [managed] properties:
// managed.setCode(...);
// managed.setResposible(...);
// managed.setSecurityLevel(...);
reference.set(managed);
}); // an exception may be thrown on method return (rollback),
// and [entity] is safe, it's not been reassigned yet.
entity = reference.get();
}
...
}
PersistenceService#transact(Consumer<EntityManager> consumer) can throw unchecked exceptions.
The goal is to keep the state of the entity aligned with the state of the database, even in case of exceptions (preventing the entity from becoming "dirty" after a failed transaction).
Method 1. is obviously naive and doesn't guarantee coherence.
Method 2. asserts that nothing can go wrong after flushing.
Method 3. prevents the new entity assignment if there's an exception anywhere in the transaction.
Questions:
Is method 3. really safer than method 2.?
Are there cases where an exception is thrown between flush [excluded] and commit [included]?
Is there a standard way to handle this common problem?
Thank you
Note that I'm already able to roll back the transaction and close the EntityManager (PersistenceService#transact will do it gracefully), but I need to deal with the fact that the database state and the business objects get out of sync. Usually this is not a problem. In my case it is the problem, because exceptions are usually generated by the BeanValidator (on the JPA side, not the JSF side, for computed values that depend on user input), and I want the user to enter correct values and try again, without losing the values entered before.
Side note: I'm using Hibernate 5.2.1
This is the PersistenceService (CMT):
@Stateless
@Local
public class PersistenceService implements Serializable
{
@PersistenceContext
private EntityManager em;
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void transact(Consumer<EntityManager> consumer)
{
consumer.accept(em);
}
}
@DraganBozanovic
That's it! Great explanation for points 1 and 2.
I'd just love you to elaborate a little more on point 3 and give me some advice on a real-world use case.
However, I would definitely not use AtomicReference or similar cumbersome constructs. Java EE, Spring and other frameworks and application containers support declaring transactional methods via annotations: Simply use the result returned from a transactional method.
When you have to modify a single entity, the transactional method just takes the detached entity as a parameter and returns the updated entity. Easy.
public Document updateDocument(Document doc)
{
Document managed = em.merge(doc);
// managed.setXxx(...);
// managed.setYyy(...);
return managed;
}
But when you need to modify more than one entity in a single transaction, the method can become a real pain:
public LinkTicketResult linkTicket(Node node, Ticket ticket)
{
LinkTicketResult result = new LinkTicketResult();
Node managedNode = em.merge(node);
result.setNode(managedNode);
// modify managedNode
Ticket managedTicket = em.merge(ticket);
result.setTicket(managedTicket);
// modify managedTicket
Remark managedRemark = createRemark(...);
result.setRemark(managedRemark);
return result;
}
In this case, my pain:
I have to create a dedicated transactional method (maybe a dedicated @EJB too)
That method will be called from only one place (it has just one caller) - it's a "one-shot", non-reusable public method. Ugly.
I have to create the dummy class LinkTicketResult
That class will be instantiated only once, in that method - it's "one-shot" too
The method could have many parameters (or yet another dummy class, LinkTicketParameters)
JSF controller actions, in most cases, will just call an EJB method, extract the updated entities from the returned container and reassign them to local fields
My code will be steadily polluted with "one-shotters", too many for my taste.
Probably I'm not seeing something big that's just in front of me, I'll be very grateful if you can point me in the right direction.
Is method 3. really safer than method 2.?
Yes. Not only is it safer (see point 2), but it is conceptually more correct, as you change transaction-dependent state only when you proved that the related transaction has succeeded.
Are there cases where an exception is thrown between flush [excluded] and commit [included]?
Yes. For example:
LockMode.OPTIMISTIC:
Optimistically assume that transaction will not experience contention
for entities. The entity version will be verified near the transaction
end.
It would be neither performant nor practically useful to check for optimistic lock violations during each flush operation within a single transaction.
Deferred integrity constraints (enforced at commit time in the DB). Not used often, but they are an illustrative example of this case.
Later maintenance and refactoring. You or somebody else may later introduce additional changes after the last explicit call to flush.
Is there a standard way to handle this common problem?
Yes, I would say that your third approach is the standard one: Use the results of a complete and successful transaction.
However, I would definitely not use AtomicReference or similar cumbersome constructs. Java EE, Spring and other frameworks and application containers support declaring transactional methods via annotations: Simply use the result returned from a transactional method.
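For the single-entity case from the question, the pattern boils down to something like this sketch (DocumentService here is an assumed @Stateless wrapper around the updateDocument method shown earlier, not an existing class):
@EJB
private DocumentService documentService; // assumed transactional EJB exposing updateDocument()

public void update()
{
    // [entity] is reassigned only if the transactional method returned normally,
    // i.e. the transaction committed; on rollback the exception propagates
    // and the field keeps its previous, clean state.
    entity = documentService.updateDocument(entity);
}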
Not sure if this is entirely to the point, but there is only one way to recover after exceptions: rollback and close the EM. From https://docs.jboss.org/hibernate/entitymanager/3.6/reference/en/html/transactions.html#transactions-basics-issues
An exception thrown by the Entity Manager means you have to rollback
your database transaction and close the EntityManager immediately
(discussed later in more detail). If your EntityManager is bound to
the application, you have to stop the application. Rolling back the
database transaction doesn't put your business objects back into the
state they were at the start of the transaction. This means the
database state and the business objects do get out of sync. Usually
this is not a problem, because exceptions are not recoverable and you
have to start over your unit of work after rollback anyway.
-- EDIT--
Also see http://piotrnowicki.com/2013/03/jpa-and-cmt-why-catching-persistence-exception-is-not-enough/
ps: downvote is not mine.
I am using the PostContextCreate part of the life cycle in an e4 RCP application to create the back-end "business logic" part of my application. I then inject it into the context using an IEclipseContext. I now have a requirement to persist some business logic configuration options between executions of my application. I have some questions:
It looks like properties (e.g. accessible from MContext) would be really useful here; a straightforward Map<String,String> sounds ideal for my simple requirements. But how can I get at them in PostContextCreate?
Will my properties persist if my application is being run with clearPersistedState set to true? (I'm guessing not).
If I turn clearPersistedState off then will it try and persist the other stuff that I injected into the context?
Or am I going about this all wrong? Any suggestions would be welcome. I may just give up and read/write my own properties file.
I think the Map returned by MApplicationElement.getPersistedState() is intended to be used for persistent data. This will be cleared by -clearPersistedState.
The PostContextCreate method of the life cycle is run quite early in the startup and not everything is available at this point. So you might have to wait for the app startup complete event (UIEvents.UILifeCycle.APP_STARTUP_COMPLETE) before accessing the persisted state data.
You can always use the traditional Platform.getStateLocation(bundle) to get a location in the workspace .metadata to store arbitrary data. This is not touched by clearPersistedState.
Update:
To subscribe to the app startup complete event:
@PostContextCreate
public void postContextCreate(IEventBroker eventBroker)
{
eventBroker.subscribe(UIEvents.UILifeCycle.APP_STARTUP_COMPLETE, new AppStartupCompleteEventHandler());
}
private static final class AppStartupCompleteEventHandler implements EventHandler
{
@Override
public void handleEvent(final Event event)
{
... your code here
}
}
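Once that event has fired you can read and write the persisted-state map. A rough sketch, assuming you obtain the MApplication (org.eclipse.e4.ui.model.application.MApplication) from the IEclipseContext at that point; the key name is just an example:
private void restoreOptions(MApplication application)
{
    // values stored here survive restarts unless -clearPersistedState is used
    Map<String, String> state = application.getPersistedState();
    String option = state.get("com.example.business.option");
    if (option == null) {
        option = "defaultValue";
    }
    // ... apply the option to the back-end configuration
}

private void saveOptions(MApplication application, String value)
{
    application.getPersistedState().put("com.example.business.option", value);
}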
My application loads entities from a Hibernate DAO, with OpenSessionInViewFilter to allow rendering.
In some cases I want to make a minor change to a field -
Long orderId ...
link = new Link("cancel") {
    @Override public void onClick() {
        Order order = orderDAO.load(orderId);
        order.setCancelledTime(timeSource.getCurrentTime());
    }
};
but such a change is not persisted, as the OSIV doesn't flush.
It seems a real shame to have to call orderDAO.save(order) in these cases, but I don't want to go as far as changing the FlushMode on the OSIV.
Has anyone found any way of declaring a 'request handling' (such as onClick) as requiring a transaction?
Ideally I suppose the transaction would be started early in the request cycle, and committed by the OSIV, so that all logic and rendering would take place in same transaction.
I generally prefer to use an additional "service" layer of code that wraps the basic DAO logic and provides transactions via @Transactional. That gives me better separation of presentation vs. business logic and is easier to test.
But since you already use OSIV, maybe you can just put some AOP interceptor around your code and have it do flush()?
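A rough sketch of the service-layer variant, in case it helps; OrderService is an invented name, and the Order/OrderDAO/TimeSource types are taken from your snippet:
@Service
public class OrderService {

    private final OrderDAO orderDAO;
    private final TimeSource timeSource;

    public OrderService(OrderDAO orderDAO, TimeSource timeSource) {
        this.orderDAO = orderDAO;
        this.timeSource = timeSource;
    }

    // Spring opens a transaction here; the session is flushed and committed on return,
    // so the change to the loaded Order is actually persisted.
    @Transactional
    public void cancelOrder(Long orderId) {
        Order order = orderDAO.load(orderId);
        order.setCancelledTime(timeSource.getCurrentTime());
    }
}
The Wicket onClick then only calls orderService.cancelOrder(orderId) instead of touching the DAO directly.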
Disclaimer: I've never actually tried this, but I think it would work. This may also be a little more code than you want to write. Finally, I'm assuming that your WebApplication subclasses SpringWebApplication. Are you with me so far?
The plan is to tell Spring that we want to run the statements of your onClick method in a transaction. In order to do that, we have to do three things.
Step 1 : inject the PlatformTransactionManager into your WebPage:
@SpringBean
private PlatformTransactionManager platformTransactionManager;
Step 2 : create a static TransactionDefinition in your WebPage that we will later reference:
protected static final TransactionDefinition TRANSACTION_DEFINITION;
static {
TRANSACTION_DEFINITION = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
((DefaultTransactionDefinition) TRANSACTION_DEFINITION).setIsolationLevel(TransactionDefinition.ISOLATION_SERIALIZABLE);
}
Feel free to change the TransactionDefinition settings and/or move the definition to a shared location as appropriate. This particular definition instructs Spring to start a new transaction even if there's already one started and to use the maximum transaction isolation level.
Step 3 : add transaction management to the onClick method:
link = new Link("cancel") {
    @Override
    public void onClick() {
        new TransactionTemplate(platformTransactionManager, TRANSACTION_DEFINITION).execute(new TransactionCallback() {
            @Override
            public Object doInTransaction(TransactionStatus status) {
                Order order = orderDAO.load(orderId);
                order.setCancelledTime(timeSource.getCurrentTime());
                return null; // nothing useful to return; the commit flushes the change
            }
        });
    }
};
And that should do the trick!
I have code that saves a bean and updates another bean in a DB via Hibernate. This must be done in the same transaction, because if something goes wrong (e.g. an exception is thrown), a rollback must be executed for both operations.
public class BeanDao extends ManagedSession {
public Integer save(Bean bean) {
Session session = null;
try {
session = createNewSessionAndTransaction();
Integer idBean = (Integer) session.save(bean); // SAVE
doOtherAction(bean); // UPDATE
commitTransaction(session);
return idBean;
} catch (RuntimeException re) {
log.error("get failed", re);
if (session != null) {
rollbackTransaction(session);
}
throw re;
}
}
private void doOtherAction(Bean bean) {
Integer idOtherBean = bean.getIdOtherBean();
OtherBeanDao otherBeanDao = new OtherBeanDao();
OtherBean otherBean = otherBeanDao.findById(idOtherBean);
.
. (doing operations)
.
otherBeanDao.attachDirty(otherBean);
}
}
The problem is:
If session.save(bean) throws an error, I get an AssertionFailure, because the function doOtherAction (which is used in other parts of the project) uses the session after an exception has been thrown.
The first thing I thought of was to extract the code from the doOtherAction function, but then I would have the same code duplicated, which doesn't seem like the best practice.
What is the best way to refactor this?
It's a common practice to manage transactions at one level above DAOs, in services or other business logic classes. That way you can, based on the business/service logic, in one case do two DAO operations in one transaction and, in another case, do them in separate transactions.
I'm a huge fan of declarative transaction management, if you can spare the time to get it working (a piece of cake with an application server such as GlassFish or JBoss, and easy with Spring). If you annotate your business method with @TransactionAttribute(REQUIRED) (it can even be set as the default) and it calls the two DAO methods, you will get exactly what you want: everything gets committed at once or rolled back on an exception.
This solution is about as loosely coupled as it gets.
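A minimal sketch of what that could look like with CMT; BeanService is an invented name, and the assumption is that the DAOs are reworked to use an injected, container-managed EntityManager instead of opening their own sessions/transactions:
@Stateless
public class BeanService {

    @EJB
    private BeanDao beanDao;

    @EJB
    private OtherBeanDao otherBeanDao;

    // REQUIRED: both DAO calls join the same container-managed transaction,
    // so any unchecked exception rolls back both operations together.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public Integer saveAndUpdate(Bean bean) {
        Integer idBean = beanDao.save(bean);                 // SAVE
        otherBeanDao.updateFor(bean.getIdOtherBean());       // UPDATE (hypothetical method name)
        return idBean;
    }
}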
The others are correct in that they take into account what is currently common practice.
But that doesn't really help you with your current practice.
What you should do is create two new DAO methods, such as createGlobalSession and commitGlobalSession.
What these do is the same thing as your current create and commit routines.
The difference is that they set a "global" session variable (most likely best done with a ThreadLocal). Then you change the current routines so that they check whether this global session already exists: if your create detects the global session, it simply returns it; if your commit detects the global session, it does nothing. (A bare-bones sketch of these helpers appears at the end of this answer.)
Now when you want to use it you do this:
try {
    dao.createGlobalSession();
    beanA.save();
    beanB.save();
    dao.commitGlobalSession();
} finally {
    dao.rollbackGlobalSession();
}
Make sure you wrap the process in a try block so that you can reset your global session if there's an error.
While the other techniques are considered best practice and ideally you could one day evolve to something like that, this will get you over the hump with little more than 3 new methods and changing two existing methods. After that the rest of your code stays the same.
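For reference, a bare-bones sketch of what those ThreadLocal-backed helpers might look like. The createNewSessionAndTransaction/commitTransaction names come from the question's code; openSessionAndBeginTransaction and HibernateUtil are illustrative placeholders for however the project currently opens a Session:
public class BeanDao /* or the shared ManagedSession base class */ {

    // one "global" session per thread
    private static final ThreadLocal<Session> GLOBAL_SESSION = new ThreadLocal<>();

    public void createGlobalSession() {
        if (GLOBAL_SESSION.get() == null) {
            GLOBAL_SESSION.set(openSessionAndBeginTransaction());
        }
    }

    public void commitGlobalSession() {
        Session session = GLOBAL_SESSION.get();
        if (session != null) {
            GLOBAL_SESSION.remove();          // clear first so this call really commits
            session.getTransaction().commit();
            session.close();
        }
    }

    public void rollbackGlobalSession() {
        Session session = GLOBAL_SESSION.get();
        if (session != null) {                // no-op if commitGlobalSession() already ran
            GLOBAL_SESSION.remove();
            session.getTransaction().rollback();
            session.close();
        }
    }

    // the existing create routine checks the ThreadLocal first...
    protected Session createNewSessionAndTransaction() {
        Session global = GLOBAL_SESSION.get();
        return (global != null) ? global : openSessionAndBeginTransaction();
    }

    // ...and the existing commit routine skips committing while the global session is active
    protected void commitTransaction(Session session) {
        if (GLOBAL_SESSION.get() == session) {
            return; // deferred until commitGlobalSession()
        }
        session.getTransaction().commit();
        session.close();
    }

    private Session openSessionAndBeginTransaction() {
        // placeholder: replace with the project's actual session/transaction setup
        Session session = HibernateUtil.getSessionFactory().openSession();
        session.beginTransaction();
        return session;
    }
}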