Getting rollback error in JPA when deploying to smartfox [duplicate] - java

I have a Java project with a collection of unit tests that perform simple updates and deletes using JPA 2. The unit tests run without a problem, and I can verify the changes in the database - all good. When I copy/paste the same function into a handler (a SmartFox extension), I receive a rollback exception:
Column 'levelid' cannot be null.
Looking for suggestions as to why this might be. I can perform data reads from within this extension (GetModelHandler), but trying to set data does not work. It's completely baffling.
So in summary -
This works...
@Test
public void Save()
{
    LevelDAO dao = new LevelDAO();
    List levels = dao.findAll();
    int i = levels.size();
    Level l = new Level();
    l.setName("test");
    Layer y = new Layer();
    y.setLayername("layer2");
    EntityManagerHelper.beginTransaction();
    dao.save(l);
    EntityManagerHelper.commit();
}
This fails with a rollback exception:
public class SetModelHandler extends BaseClientRequestHandler
{
    @Override
    public void handleClientRequest(User sender, ISFSObject params)
    {
        LevelDAO dao = new LevelDAO();
        List levels = dao.findAll();
        int i = levels.size();
        Level l = new Level();
        l.setName("test");
        Layer y = new Layer();
        y.setLayername("layer2");
        EntityManagerHelper.beginTransaction();
        dao.save(l);
        EntityManagerHelper.commit();
    }
}
The Level and Layer classes have a @OneToMany and @ManyToOne attribute respectively.
Any ideas appreciated.
Update
Here's the schema
Level
--------
levelid (int) PK
name (varchar)
Layer
--------
layerid (int) 11 PK
layername (varchar) 100
levelid (int)
Foreign Key Name: Level.levelid, On Delete: no action, On Update: no action
When I changed
EntityManagerHelper.beginTransaction();
dao.update(l);
EntityManagerHelper.commit();
to
EntityManagerFactory factory = Persistence.createEntityManagerFactory("bwmodel");
EntityManager entityManager = factory.createEntityManager();
entityManager.getTransaction().begin();
dao.update(l);
entityManager.persist(l);
entityManager.getTransaction().commit();
This performs a save but not an update? I'm missing something obvious here.

The most likely problem I can see would be different database definitions. Tests of EJBs often use an in-memory database that is generated on the fly, whereas in actual production you are using a real database which is probably enforcing constraints.
Try assigning the levelid column a value, or changing the database schema.
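For example, if the mapping follows the usual pattern for this kind of schema, the Layer is the owning side of the foreign key and needs its Level set before committing. A minimal sketch, assuming hypothetical setLevel/getLayers accessors and a cascading save (none of which appear in the original code):
Level l = new Level();
l.setName("test");

Layer y = new Layer();
y.setLayername("layer2");
y.setLevel(l);            // populate the @ManyToOne side, which owns the levelid column
l.getLayers().add(y);     // keep the @OneToMany side in sync

EntityManagerHelper.beginTransaction();
dao.save(l);              // with cascade PERSIST/ALL on the collection, the Layer is inserted with a non-null levelid
EntityManagerHelper.commit();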

Related

OptimisticLockException when using JPA merge()

I have a REST application where one of the resources can be updated. Below are the two methods responsible for achieving this task:
updateWithRelatedEntities(String, Store): receives an id and a new Store object (constructed by deserializing the PUT request entity), sets the version (used for optimistic locking) on the new object, and calls update in a transaction.
public Store updateWithRelatedEntities(String id, Store newStore) {
    Store existingStore = this.get(id);
    newStore.setVersion(existingStore.getVersion());

    em.getTransaction().begin();
    newStore = super.update(id, newStore);
    em.getTransaction().commit();

    return newStore;
}
update(String, T): a generic method for making an update. It checks that the ids match and performs the merge operation.
public T update(String id, T newObj) {
    if (newObj == null) {
        throw new EmptyPayloadException(type.getSimpleName());
    }

    Type superclass = getClass().getGenericSuperclass();
    if (superclass instanceof Class) {
        superclass = ((Class) superclass).getGenericSuperclass();
    }
    Class<T> type = (Class<T>) (((ParameterizedType) superclass).getActualTypeArguments()[0]);

    T obj = em.find(type, id);
    if (!newObj.getId().equals(obj.getId())) {
        throw new IdMismatchException(id, newObj.getId());
    }

    return em.merge(newObj);
}
The problem is that the call T obj = em.find(type, id); triggers an update of the store object in the database, which means that we get an OptimisticLockException when triggering the merge (because the versions are now different).
Why is this happening? What would be the correct way to achieve this?
I kind of don't want to copy properties from newStore to existingStore and use existingStore for merge - which would, I think, solve the optimistic lock problem.
This code is not running on an application server and I am not using JTA.
EDIT:
If I detach existingStore before calling update, T obj = em.find(type, id); doesn't trigger an update of the store object, so this solves the problem. The question still remains though - why does it trigger the update when the entity is not detached?
I can't see your entity in the code you added, but I believe you are missing a key point of optimistic locking: the @Version annotation on the version field.
If you have this field on your entity, then the container should be able to perform the merge without problems. Please take a look at
Optimistic Locking, and also this good article: don't break optimistic locking
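For reference, a minimal sketch of what a versioned entity could look like (the class and field names here are only illustrative, not taken from your code):
@Entity
public class Store {
    @Id
    @GeneratedValue
    private Long id;

    @Version          // JPA increments this on every successful flush/commit and uses it for the optimistic lock check
    private Long version;

    // getters and setters omitted
}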

JPA Version Entity merge

I know that there are some questions about this subject already but I think that this one is different.
Let's say I have this class:
@Entity
public class Foo {
    @Id
    @GeneratedValue
    private long id;

    @Version
    private long version;

    private String description;
    ...
}
Then I create some objects and persist them to a DB using JPA add().
Later, I get them all from the repository using JPA all().
From that list I select one object and change its description.
Then I want to update that object in the repository using JPA merge() (see code).
The problem here is that it works the first time I change the description (the version value is then 2).
The second time, an OptimisticLockException is raised saying that the object was changed in the meantime.
I'm using H2 as the DB, in embedded mode.
MERGE CODE:
// First, persist is tried; if the object already exists, an exception is raised and then this code is executed
try {
    tx = em.getTransaction();
    tx.begin();
    entity = em.merge(entity);
    tx.commit();
} catch (PersistenceException pex) {
    // Do stuff
}
What can be wrong here?
Thank you.
EDIT (more code)
// Foo b is obtained by getting all objects from the DB using JPA all(); one object is then selected from that list
b.changeDescription("Something new!");
// Call the update method (merge code already posted)
I would assume that you are changing elements in the list from different clients or different threads. This is what causes an OptimisticLockException.
One thread, in its own EntityManager, reads the Foo object and gets a @Version value at the time of the read.
// select and update AnyEntity
EntityManager em1 = emf.createEntityManager();
EntityTransaction tx1 = em1.getTransaction();
tx1.begin();
AnyEntity firstEntity = em1.createQuery("select a from AnyEntity a", AnyEntity.class).getSingleResult();
firstEntity.setName("name1");
em1.merge(firstEntity);
Another client reads and updates the Foo object at the same time, before the first client has committed its changes to the database:
// select and update AnyEntity from a different EntityManager from a different thread or client
EntityManager em2 = emf.createEntityManager();
EntityTransaction tx2 = em2.getTransaction();
tx2.begin();
AnyEntity secondEntity = em2.createQuery("select a from AnyEntity a", AnyEntity.class).getSingleResult();
secondEntity.setName("name2");
em2.merge(secondEntity);
Now the first client commits its changes to the database:
// commit first change while second change still pending
tx1.commit();
em1.close();
And the second client gets an OptimisticLockException when it updates its changes:
// An OptimisticLockException thrown here means that a change happened while AnyEntity was still "checked out"
try {
    tx2.commit();
    em2.close();
} catch (RollbackException ex) {
    Throwable cause = ex.getCause();
    if (cause instanceof OptimisticLockException) {
        System.out.println("Someone already changed AnyEntity.");
    } else {
        throw ex;
    }
}
Reference: Java - JPA - @Version annotation
Are you initialising the version field properly?
If not, it is not supposed to work with null; try adding a default value to it:
@Version
private Long version = 0L;
Here is a post which explains perfectly when an OptimisticLockException is thrown.
Also, just for future reference, you can make JPA skip this in-memory validation of the entities you are updating, and only apply the change on the DB side at the end of the transaction, by using the detach method on the EntityManager:
em.detach(employee);
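As a rough sketch of that idea (the entity and field names are only illustrative), detaching keeps later queries in the same persistence context from flushing your in-memory change, and the write happens only when you merge at the end:
Foo foo = em.find(Foo.class, id);
foo.setDescription("changed in memory only");
em.detach(foo);                  // no longer managed, so no automatic dirty-check flush

// ... other queries that would otherwise trigger a flush ...

em.getTransaction().begin();
foo = em.merge(foo);             // the version check happens here
em.getTransaction().commit();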

The NodeEntity caches the value of the Set

I'm using Spring Data Neo4j 4. It seems the "PersistenceContext" of Neo4j caches the values of the Set field.
The Entity
@NodeEntity
public class ServiceStatus implements java.io.Serializable {
    @GraphId
    Long id;

    private Set<String> owners = new HashSet<String>();
}
First, I put the value "ROLE_ADMIN" in owners and save it.
Then I change the value to "ROLE_SYSTEM_OWNER" and call save() again.
In the Neo4j query browser, it only shows "ROLE_SYSTEM_OWNER", which is all correct so far.
However, when I call findAll(), owners has two values: ["ROLE_ADMIN", "ROLE_SYSTEM_OWNER"].
It works fine again when I restart my web server.
[The way to change value]
@Test
public void testSaveServiceStatus() throws OSPException {
    // 1. save
    ServiceStatus serviceStatus = new ServiceStatus();
    serviceStatus.setServiceName("My Name");

    Set<String> owners = new HashSet<String>();
    owners.add("ROLE_SITE_ADMIN");
    serviceStatus.setOwners(owners);

    serviceStatusRepository.save(serviceStatus);
    System.out.println(serviceStatus.getId()); // 262
}

@Test
public void testEditServiceStatus() throws OSPException {
    // 1. findAll() - it seems to cache the set value
    serviceStatusRepository.findAll();

    // 2. simulate the web process behavior
    ServiceStatus serviceStatus = new ServiceStatus();
    serviceStatus.setId(new Long(262));
    serviceStatus.setServiceName("My Name");

    Set<String> owners = new HashSet<String>();
    // change the owner to Requestor
    owners.add("Requestor");
    serviceStatus.setOwners(owners);

    // 3. save the "changed" value
    //    In the Cypher query browser, it shows "Requestor" only
    serviceStatusRepository.save(serviceStatus);

    // 4. retrieve it again
    serviceStatus = serviceStatusRepository.findOne(new Long(262));
    System.out.println(serviceStatus); // ServiceStatus[id=262,serviceName=My Name,owners=[Requestor5, Requestor4]]
}
Your test is effectively working with detached objects. Step one, findAll(), loads these entities into the session, but step two, instead of using the loaded entity, creates a new one which is subsequently saved. The "attached" entity still refers to the earlier version of the entity.
The OGM does not handle this currently.
You're best off loading the entity with findAll() or just findOne(id), modifying it, and saving that same instance back (instead of recreating one by setting the id), as in the sketch below. That will ensure everything stays consistent.
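A sketch of that load-modify-save flow, reusing the repository and id from the test above (the id 262 is just the value from the example):
// Load the attached entity, mutate it, and save the same instance back.
ServiceStatus serviceStatus = serviceStatusRepository.findOne(new Long(262));

Set<String> owners = new HashSet<String>();
owners.add("Requestor");
serviceStatus.setOwners(owners);             // replace the collection on the loaded instance

serviceStatusRepository.save(serviceStatus); // the OGM now sees one consistent instance, no stale copy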

Spring data - insert data depending on previous insert

I need to save data into two tables (an entity and an association table).
I simply save my entity with the save() method from my entity repository.
Then, for performance, I need to insert rows into an association table with native SQL. The rows reference the entity I saved before.
The issue comes here: I get an integrity constraint exception concerning a foreign key. The entity saved first isn't known in this second query.
Here is my code :
The repo :
public interface DistributionRepository extends JpaRepository<Distribution, Long>, QueryDslPredicateExecutor<Distribution> {

    @Modifying
    @Query(value = "INSERT INTO DISTRIBUTION_PERIMETER(DISTRIBUTION_ID, SERVICE_ID) SELECT :distId, p.id FROM PERIMETER p "
            + "WHERE p.id in (:serviceIds) AND p.discriminator = 'SRV' ", nativeQuery = true)
    void insertDistributionPerimeter(@Param(value = "distId") Long distributionId, @Param(value = "serviceIds") Set<Long> servicesIds);
}
The service :
@Service
public class DistributionServiceImpl implements IDistributionService {

    @Inject
    private DistributionRepository distributionRepository;

    @Override
    @Transactional
    public DistributionResource distribute(final DistributionResource distribution) {
        // 1. Entity creation and saving
        Distribution created = new Distribution();
        final Date distributionDate = new Date();
        created.setStatus(EnumDistributionStatus.distributing);
        created.setDistributionDate(distributionDate);
        created.setDistributor(agentRepository.findOne(distribution.getDistributor().getMatricule()));
        created.setDocument(documentRepository.findOne(distribution.getDocument().getTechId()));
        created.setEntity(entityRepository.findOne(distribution.getEntity().getTechId()));
        created = distributionRepository.save(created);

        // 2. Association table
        final Set<Long> serviceIds = new HashSet<Long>();
        for (final ServiceResource sr : distribution.getServices()) {
            serviceIds.add(sr.getTechId());
        }

        // EXCEPTION HERE
        distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);
        // ... (rest of the method, including the return statement, omitted)
    }
}
The two queries seem to be in different transactions even though I set the @Transactional annotation. I also tried to execute the second query with entityManager.createNativeQuery() and got the same result...
Invoke entityManager.flush() before you execute your native queries, or use saveAndFlush instead.
In your specific case I would recommend using
created = distributionRepository.saveAndFlush(created);
Important: your "native" queries must use the same transaction (or you need a different transaction isolation level).
You also wrote:
I don't really understand why the flush action is not done by default
Flushing is handled by Hibernate (it can be configured; the default is "auto"). This means that Hibernate may flush the data at any point in time, but always before you commit the transaction or execute another SQL statement via Hibernate. So normally this is no problem; in your case, however, you bypass Hibernate with your native query, so Hibernate does not know about that statement and therefore does not flush its data.
See also this answer of mine on the topic: https://stackoverflow.com/a/17889017/280244
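As a short sketch of the fix in the service above (only the two relevant lines; the rest of the method stays as posted):
// Flush the pending INSERT so the row exists before the native statement runs,
// otherwise the foreign key in DISTRIBUTION_PERIMETER has nothing to point at.
created = distributionRepository.saveAndFlush(created);
distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);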

Why does EclipseLink/JPA attempt to persist an entity when I don't ask it to?

I'm trying to persist three entities (exp, def, meng) in a transaction, and then persist another two (def', meng'), where meng' is related to exp.
However, as I attempt to persist meng', EclipseLink/JPA 2 issues:
Call: INSERT INTO EXPRESSION (EXPRESSION, GENDER) VALUES (?, ?)
bind => [exp, null]
which will throw an exception, since that row has already been inserted and it's a key.
So apparently persisting the entity meng', which involves updating exp itself, somehow makes EclipseLink think I asked to persist a new exp.
Here is the test:
@Test
public void testInsertWords() throws MultipleMengsException, Exception {
    final List<String[]> mengsWithSharedExp = new LinkedList<String[]>();
    mengsWithSharedExp.add(mengsList.get(3));
    mengsWithSharedExp.add(mengsList.get(4));
    insertWords(mengsWithSharedExp, null, mengsDB);
}
Here is the problematic code:
public void insertWords(EnumMap<Input, MemoEntity> input) throws MultipleMengsException {
    Expression def = (Expression) input.get(Input.definition);
    Expression exp = (Expression) input.get(Input.expression);

    beginTransaction();
    persistIfNew(def);
    persistIfNew(exp);
    persistNewMeng(null, exp, def);
    commitTransaction();
}

private void persistNewMeng(final MUser usr, Expression exp, final Expression def) throws RuntimeException {
    final Meaning meng = new Meaning(usr, exp, def);
    if (!persistIfNew(meng)) {
        throw new RuntimeException("Meng ." + meng.toString() + " was expected to be new.");
    }
    if (usr != null) {
        usr.addMeng(meng);
    }
}

public <Entity> boolean persistIfNew(final Entity entity) {
    final Object key = emf.getPersistenceUnitUtil().getIdentifier(entity);
    if (em.find(entity.getClass(), key) != null) {
        return false;
    }
    em.persist(entity);
    return true;
}
You can check out the Maven source code (to test) from here.
Is this expected behavior? If so, why? And most importantly, how do I solve it?
It looks as if
@ManyToMany(cascade = CascadeType.ALL)
private Set<Expression> exps;
in Meaning is the culprit, although I don't understand why it should be. The documentation says:
If the entity is already managed, the persist operation is ignored, although the persist operation will cascade to related entities that have the cascade element set to PERSIST or ALL in the relationship annotation.
Frank is correct. You are not reading in the Expression, so when you call persist on the Meaning, the existing Expressions it references are detached, which causes the transaction to fail. Calling merge will work, or you can remove the cascade persist from the exps relationship; since you seem to persist new Expressions directly anyway, it's not needed.
Most likely you are running into:
[...] If the entity is detached [...] the transaction commit will fail.
(same source that you are citing)
If you persist a new entity that references an already persistent entity, you must use merge instead of persist. merge will persist new entities and update existing ones.
Also beware of the fact that the merge operation returns an attached data graph, which must be used for further operations within the same persistence context.
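Applied to persistNewMeng, that could look roughly like this (a sketch only; it swaps persistIfNew for merge and keeps working with the managed instance that merge returns):
final Meaning meng = new Meaning(usr, exp, def);
final Meaning managedMeng = em.merge(meng);  // persists the new Meaning and reattaches the detached Expressions it references
if (usr != null) {
    usr.addMeng(managedMeng);                // continue with the managed instance returned by merge
}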
