OptimisticLockException when using JPA merge() - java

I have a rest application where one of the resources can be updated. Below are two methods responsible for achieving this task:
updateWithRelatedEntities(String, Store): receives an id and a new Store object (constructed by deserializing the PUT request entity), sets the version (used for optimistic locking) on the new object, and calls update in a transaction.
public Store updateWithRelatedEntities(String id, Store newStore) {
    Store existingStore = this.get(id);
    newStore.setVersion(existingStore.getVersion());

    em.getTransaction().begin();
    newStore = super.update(id, newStore);
    em.getTransaction().commit();

    return newStore;
}
update(String, T): a generic method for making an update. It checks that the ids match and performs the merge operation.
public T update(String id, T newObj) {
    if (newObj == null) {
        throw new EmptyPayloadException(type.getSimpleName());
    }

    Type superclass = getClass().getGenericSuperclass();
    if (superclass instanceof Class) {
        superclass = ((Class) superclass).getGenericSuperclass();
    }
    Class<T> type = (Class<T>) (((ParameterizedType) superclass).getActualTypeArguments()[0]);

    T obj = em.find(type, id);

    if (!newObj.getId().equals(obj.getId())) {
        throw new IdMismatchException(id, newObj.getId());
    }

    return em.merge(newObj);
}
The problem is that the call T obj = em.find(type, id); triggers an update of the store object in the database, which means we get an OptimisticLockException when the merge is triggered (because the versions are now different).
Why is this happening? What would be the correct way to achieve this?
I'd rather not copy properties from newStore to existingStore and use existingStore for the merge, although I think that would solve the optimistic lock problem.
This code is not running on an application server and I am not using JTA.
EDIT:
If I detach existingStore before calling update, T obj = em.find(type, id); no longer triggers an update of the store object, so this solves the problem. The question still remains, though: why does it trigger the update when the entity is not detached?
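For reference, a minimal sketch of that workaround based on the code above (whether find() causes the managed copy to be flushed depends on the provider's flush and dirty-checking behaviour):

public Store updateWithRelatedEntities(String id, Store newStore) {
    Store existingStore = this.get(id);
    newStore.setVersion(existingStore.getVersion());

    // Detach the managed copy so the persistence context stops tracking it;
    // the flush around em.find() can then no longer push its state (and a
    // version bump) to the database before the merge.
    em.detach(existingStore);

    em.getTransaction().begin();
    newStore = super.update(id, newStore);
    em.getTransaction().commit();
    return newStore;
}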

I can't see your entity in the code you added, but I believe you are missing a key point of optimistic locking: the @Version annotation on the version field.
If you have this field on your entity, then the provider should be able to perform the merge without problems. Please take a look at Optimistic Locking, and also the good article "don't break optimistic locking".
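For illustration, a minimal optimistically locked entity could look like this (the class and field names are just an example, not taken from your code):

@Entity
public class Store {

    @Id
    private String id;

    // The provider compares this value with the row's current version at
    // flush/commit time and increments it on every successful update; a
    // mismatch results in an OptimisticLockException.
    @Version
    private long version;

    // ...
}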

Related

JPA Version Entity merge

I know that there are some questions about this subject already but I think that this one is different.
Let's say I have this class:
@Entity
public class Foo {

    @Id
    @GeneratedValue
    private long id;

    @Version
    private long version;

    private String description;
    ...
}
Then I create some objects and persist them to a DB using JPA add().
Later, I get all of them from the repository using JPA all().
From that list I select one object and change the description.
Then I want to update that object in the repository using JPA merge() (see code).
The problem is that this works the first time I change the description (the version value is then 2).
The second time, an OptimisticLockException is raised saying that the object was changed in the meantime.
I'm using H2 as the DB in embedded mode.
MERGE CODE:
// First: persist is tried; if the object already exists, an exception is raised and then this code is executed
try {
    tx = em.getTransaction();
    tx.begin();
    entity = em.merge(entity);
    tx.commit();
} catch (PersistenceException pex) {
    // Do stuff
}
What can be wrong here?
Thank you.
EDIT (more code)
// Foo b is obtained by getting all objects from the DB using JPA all() and then selecting one object from that list
b.changeDescription("Something new!");
// Call the update method (merge code already posted)
I would assume that you are changing elements in the list from different clients or different threads. This is what causes an OptimisticLockException.
One thread, in its own EntityManager, reads the Foo object and gets the @Version value as of the time of the read.
// select and update AnyEntity
EntityManager em1 = emf.createEntityManager();
EntityTransaction tx1 = em1.getTransaction();
tx1.begin();
AnyEntity firstEntity = em1.createQuery("select a from AnyEntity a", AnyEntity.class).getSingleResult();
firstEntity.setName("name1");
em1.merge(firstEntity);
Another client reads and updates the Foo object at the same time, before the first client has committed its changes to the database:
// select and update AnyEntity from a different EntityManager from a different thread or client
EntityManager em2 = emf.createEntityManager();
EntityTransaction tx2 = em2.getTransaction();
tx2.begin();
AnyEntity secondEntity = em2.createQuery("select a from AnyEntity a", AnyEntity.class).getSingleResult();
secondEntity.setName("name2");
em2.merge(secondEntity);
Now the first client commits its changes to the database:
// commit first change while second change still pending
tx1.commit();
em1.close();
And the second client gets an OptimisticLockException when it updates its changes:
// OptimisticLockException thrown here means that a change happened while AnyEntity was still "checked out"
try {
    tx2.commit();
    em2.close();
} catch (RollbackException ex) {
    Throwable cause = ex.getCause();
    if (cause instanceof OptimisticLockException) {
        System.out.println("Someone already changed AnyEntity.");
    } else {
        throw ex;
    }
}
Reference: Java - JPA - @Version annotation
Are you properly initialising the version field?
It is not supposed to work with null, so try adding a default value to it:
@Version
private Long version = 0L;
Here is a post which explains perfectly when an OptimisticLockException is thrown.
Also, just for future reference: you can make JPA skip this in-memory validation of an entity you are updating, and only apply the change on the DB side at the end of the transaction, by using the detach method of the EntityManager:
em.detach(employee);
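As a rough sketch of what that looks like in practice (the Employee entity and its setter are made up for the example):

em.getTransaction().begin();

Employee employee = em.find(Employee.class, 1L);
em.detach(employee);            // the persistence context stops tracking this instance

employee.setName("changed");    // in-memory change only; nothing is flushed automatically

employee = em.merge(employee);  // re-attach; the version check happens at flush/commit
em.getTransaction().commit();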

GreenDao - Saving an Entity and related - "Entity is detached from DAO context"

I'm trying to save an entity called Hotel with greenDAO. Each hotel has a one-to-many relation with some agreements, and each agreement has got... well, a picture is worth a thousand words.
Now, what I do is the following:
daoSession.runInTx(new Runnable() {
    @Override
    public void run() {
        ArrayList<Hotel> listOfHotels = getData().getAvailability();
        for (Hotel h : listOfHotels) {
            List<HotelAgreement> hotelAgreements = h.getAgreements();
            for (HotelAgreement ha : hotelAgreements) {
                ha.setHotel_id(h.getHotel_id());
                HotelAgreementDeadline hotelAgreementDeadline = ha.getDeadline();
                List<HotelRemark> hr = hotelAgreementDeadline.getRemarks();
                List<HotelAgreementDeadlinePolicies> hadp = hotelAgreementDeadline.getPolicies();

                daoSession.getHotelReportDao().insertOrReplaceInTx(h.getReports());
                daoSession.getHotelPictureDao().insertOrReplaceInTx(h.getPictures());
                daoSession.getHotelRemarkDao().insertOrReplaceInTx(hr);
                daoSession.getHotelAgreementDeadlinePoliciesDao().insertOrReplaceInTx(hadp);
                daoSession.getHotelAgreementDeadlineDao().insertOrReplace(hotelAgreementDeadline);
                daoSession.getHotelAgreementDao().insertOrReplace(ha);
            }
            // daoSession.getHotelReportsDao().insertOrReplace( getData().getReports() );
        }
        daoSession.getHotelDao().insertOrReplaceInTx(listOfHotels);
    }
});
This, of course, does not work. I get an "Entity is detached from DAO context" error on the following line:
HotelAgreementDeadline hotelAgreementDeadline = ha.getDeadline();
I understand this is because I try to get the agreements from a Hotel entity which does not come from the database but from another source (a web service, in this case). But why does this happen with ha.getDeadline() and not with h.getAgreements()?
Now, I have the Hotel object and it includes pretty much all the data: agreements, deadline, policies, remarks, pictures, report. I'd just like to tell greenDAO: save it! And if I can't, and I have to cycle through the tree (which is what I'm trying to do with the code above), how am I supposed to do it?
Here I read that I have to "store/load the object first using a Dao". Pretty awesome, but... how does that work? I read the greenDAO documentation about relations but couldn't find anything.
Thank you to everybody who's willing to help :-)
At some point, when you get the response from the webservice, you are creating new entity objects and filling them with the info. Try inserting each new object into the DB right after that.
If you want, you can insert, for example, all n Agreements for a Hotel using insertOrReplaceInTx, but you shouldn't use any relation before all the involved objects are in the DB.
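A rough sketch of that ordering, reusing the entity and DAO names from the question (agreementsFor and deadlineFor are hypothetical helpers that read from the parsed webservice response, not greenDAO methods):

daoSession.runInTx(new Runnable() {
    @Override
    public void run() {
        for (Hotel h : getData().getAvailability()) {
            // Persist the parent first, so later relation lookups can resolve it.
            daoSession.getHotelDao().insertOrReplace(h);

            // Take agreements/deadlines from the parsed response rather than
            // through relation getters on entities that are not in the DB yet.
            for (HotelAgreement ha : agreementsFor(h)) {
                ha.setHotel_id(h.getHotel_id());
                HotelAgreementDeadline d = deadlineFor(ha);

                daoSession.getHotelAgreementDeadlineDao().insertOrReplace(d);
                daoSession.getHotelAgreementDao().insertOrReplace(ha);
            }
        }
    }
});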
I think the greenDAO team should add the following check to the generated getToOneField() method, like the one that already exists in getToManyList():
if (property == null) {
    // code already generated by the greenDAO plugin
}
return property;
So in your case, in the HotelAgreement class:
@Keep
public DeadLine getDeadLine() {
    if (deadLine == null) {
        long __key = this.deadLineId;
        if (deadLine__resolvedKey == null || !deadLine__resolvedKey.equals(__key)) {
            final DaoSession daoSession = this.daoSession;
            if (daoSession == null) {
                throw new DaoException("Entity is detached from DAO context");
            }
            DeadLineDao targetDao = daoSession.getDeadLineDao();
            DeadLine deadLineNew = targetDao.load(__key);
            synchronized (this) {
                deadLine = deadLineNew;
                deadLine__resolvedKey = __key;
            }
        }
    }
    return deadLine;
}
With the added check
if (deadLine == null) {
    ...
}
an object populated from a REST/JSON response keeps its value: the getter returns the field from the object instead of hitting the database, just like it already does for lists. You can then insert or replace it.
Later, when you load (or load deeply) the object from the DB, the field is null and greenDAO fetches it from the database.

Objectify - does transaction throw ConcurrentModException in case of simultaneous creation of an entity?

Does Objectify throw a ConcurrentModificationException if an entity with the same key (without a parent) is created at the same time (when it did not previously exist) in two different transactions? I have only found information about the case where the entity already exists and is modified, not about the case where it does not yet exist...
ofy().transactNew(20, new VoidWork() {
    @Override
    public void vrun() {
        Key<GameRequest> key = Key.create(GameRequest.class, numberOfPlayers + "_" + rules);
        Ref<GameRequest> ref = ofy().load().key(key);
        GameRequest gr = ref.get();
        if (gr == null) {
            // create new gamerequest and add...
            // <-- HERE
        } else {
            // ...
        }
    }
});
Thanks!
Yes, you will get a CME if anything in that entity group changes, including entity creation and deletion.
The code you show should work fine. Unless you really know what you are doing, you're probably better off just using the transact() method, without trying to limit retries or force a new transaction. 99% of the time, transact() just does the right thing.
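As a sketch, the plain transact() form looks like this (the .now() call assumes a newer Objectify where load().key() returns a LoadResult; in older versions it returns a Ref and you would call get() as in the question):

ofy().transact(new VoidWork() {
    @Override
    public void vrun() {
        Key<GameRequest> key = Key.create(GameRequest.class, numberOfPlayers + "_" + rules);
        GameRequest gr = ofy().load().key(key).now();   // null if the entity does not exist yet

        if (gr == null) {
            // create and save the new GameRequest
        } else {
            // join / update the existing one
        }
    }
});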

How to eagerly load lazy fields with JPA 2.0?

I have an entity class that has a lazy field like this:
@Entity
public class Movie implements Serializable {
    ...
    @Basic(fetch = FetchType.LAZY)
    private String story;
    ...
}
The story field should normally be loaded lazily because it's usually large. Sometimes, however, I need to load it eagerly, but I don't want to write something ugly like movie.getStory() just to force the loading. For a lazy relationship I know a fetch join can force eager loading, but that doesn't work for a lazy basic field. How do I write a query that eagerly loads the story field?
I'd try Hibernate.initialize(movie). But calling the getter (and adding a comment that this forces initialization) is not that wrong.
One possible solution is:
SELECT movie
FROM Movie movie LEFT JOIN FETCH movie.referencedEntities
WHERE...
Another option is to use @Transactional on a method in a managed bean or stateless session bean and access movie.getReferencedEntities().size() to load the association, but this leads to the N+1 problem, i.e. one additional query per relationship, which isn't efficient in many cases.
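For example, something along these lines (a sketch only; getReferencedEntities is a placeholder for whichever lazy association you need, and the transaction is assumed to be container-managed):

@Transactional
public Movie loadMovieWithReferencedEntities(long id) {
    Movie movie = em.find(Movie.class, id);
    // Touching the lazy collection while the persistence context is open forces
    // it to load; doing this per relationship is what creates the N+1 queries.
    movie.getReferencedEntities().size();
    return movie;
}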
You can use the fetch all properties keywords in your query:
SELECT movie
FROM Movie movie FETCH ALL PROPERTIES
WHERE ...
To quote the JPA spec (2.0, 11.1.6):
The LAZY strategy is a hint to the persistence provider runtime that data should be fetched
lazily when it is first accessed. The implementation is permitted to eagerly fetch data for which the
LAZY strategy hint has been specified.
Hibernate only supports what you are trying to do if you use its bytecode enhancement features. There are a few ways to do that. The first is the build-time enhancement tool. The second is (class-)load-time enhancement: in Java EE environments you can enable it for Hibernate JPA with the 'hibernate.ejb.use_class_enhancer' setting (set it to true; false is the default). In Java SE environments you need to enhance the classes as they are loaded, either on your own or by leveraging org.hibernate.bytecode.spi.InstrumentedClassLoader.
If you don't mind having a POJO as the query result, you can use a constructor query. This requires your class to have a constructor with all the needed parameters and a query like this:
select new Movie(m.id, m.story) from Movie m
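For this to work, the class needs a matching constructor, roughly like the following (assuming a numeric id; adjust it to your actual id type). Note that JPA still requires a no-arg constructor on the entity as well.

public Movie(long id, String story) {
    this.id = id;
    this.story = story;
}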
I would suggest traversing the objects using Java reflection, calling all methods starting with "get" and repeating this for every returned object that has an @Entity annotation.
Not the most beautiful way, but it should be a robust workaround. Something like this (not tested yet):
public static <T> void deepDetach(EntityManager emanager, T entity) {
    IdentityHashMap<Object, Object> detached = new IdentityHashMap<Object, Object>();
    try {
        deepDetach(emanager, entity, detached);
    } catch (IllegalAccessException e) {
        throw new RuntimeException("Error deep detaching entity [" + entity + "].", e);
    } catch (InvocationTargetException e) {
        throw new RuntimeException("Error deep detaching entity [" + entity + "].", e);
    }
}

private static <T> void deepDetach(EntityManager emanager, T entity, IdentityHashMap<Object, Object> detached) throws IllegalAccessException, InvocationTargetException {
    if (entity == null || detached.containsKey(entity)) {
        return;
    }
    Class<?> clazz = entity.getClass();
    Entity entityAnnotation = clazz.getAnnotation(Entity.class);
    if (entityAnnotation == null) {
        return; // Not an entity. No need to detach.
    }
    emanager.detach(entity);
    detached.put(entity, null); // value doesn't matter. Using a map, because there is no IdentitySet.

    Method[] methods = clazz.getMethods();
    for (Method m : methods) {
        String name = m.getName();
        if (m.getParameterTypes().length == 0) {
            if (name.length() > 3 && name.startsWith("get") && Character.isUpperCase(name.charAt(3))) {
                Object res = m.invoke(entity, new Object[0]);
                deepDetach(emanager, res, detached);
            }
            // It is actually not needed for searching for lazy instances, but it will load
            // this instance, if it was represented by a proxy
            if (name.length() > 2 && name.startsWith("is") && Character.isUpperCase(name.charAt(2))) {
                Object res = m.invoke(entity, new Object[0]);
                deepDetach(emanager, res, detached);
            }
        }
    }
}

Why does eclipselink/jpa attempt to persist an entity while I don't ask it to?

I'm trying to persist 3 entities (exp, def, meng) in a transaction, and then persist another 2 (def', meng'), where meng' is related to exp.
However, as I attempt to persist meng', EclipseLink/JPA 2 executes:
Call: INSERT INTO EXPRESSION (EXPRESSION, GENDER) VALUES (?, ?)
bind => [exp, null]
which will throw an exception since that row has already been inserted and it's a key.
So apparently persisting the entity meng', which involves updating exp itself, somehow makes EclipseLink think I asked it to persist a new exp.
Here is the test:
@Test
public void testInsertWords() throws MultipleMengsException, Exception {
    final List<String[]> mengsWithSharedExp = new LinkedList<String[]>();
    mengsWithSharedExp.add(mengsList.get(3));
    mengsWithSharedExp.add(mengsList.get(4));

    insertWords(mengsWithSharedExp, null, mengsDB);
}
Here is the problematic code:
public void insertWords(EnumMap<Input, MemoEntity> input) throws MultipleMengsException {
    Expression def = (Expression) input.get(Input.definition);
    Expression exp = (Expression) input.get(Input.expression);

    beginTransaction();
    persistIfNew(def);
    persistIfNew(exp);
    persistNewMeng(null, exp, def);
    commitTransaction();
}

private void persistNewMeng(final MUser usr, Expression exp, final Expression def) throws RuntimeException {
    final Meaning meng = new Meaning(usr, exp, def);
    if (!persistIfNew(meng)) {
        throw new RuntimeException("Meng ." + meng.toString() + " was expected to be new.");
    }
    if (usr != null) {
        usr.addMeng(meng);
    }
}

public <Entity> boolean persistIfNew(final Entity entity) {
    final Object key = emf.getPersistenceUnitUtil().getIdentifier(entity);
    if (em.find(entity.getClass(), emf.getPersistenceUnitUtil().getIdentifier(entity)) != null) {
        return false;
    }
    em.persist(entity);
    return true;
}
You can check out the Maven source code (to test) from here.
Is this expected behavior? If so, why? And most importantly, how do I solve it?
It looks as if
@ManyToMany(cascade = CascadeType.ALL)
private Set<Expression> exps;
in Meaning is the culprit, although I don't understand why it should be. The documentation says:
If the entity is already managed, the persist operation is ignored, although the persist operation will cascade to related entities that have the cascade element set to PERSIST or ALL in the relationship annotation.
Frank is correct. You are not reading in the Expression, so when you call persist on the Meaning, the existing Expressions it references are detached, which causes the transaction to fail. Calling merge will work, or you can remove the cascade persist from the exps relationship; since you seem to persist new Expressions directly anyway, it isn't needed.
Most likely you are running into:
[...] If the entity is detached [...] the transaction commit will fail.
(same source that you are citing)
If you persist a new entity that references an already persistent entity, you must use "merge" instead of "persist". "merge" will persist new entities and update existing ones.
Also beware that the merge operation returns an attached data graph, which must be used for further operations within the same persistence context.
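Putting that together with the question's code, the merge-based variant would look roughly like this (a sketch; beginTransaction/commitTransaction and persistNewMeng are the question's own helpers):

beginTransaction();

// merge() returns the managed copies; keep using those so the existing
// Expression row is reused instead of being re-inserted via cascade persist.
def = em.merge(def);
exp = em.merge(exp);

persistNewMeng(null, exp, def);
commitTransaction();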
