I have an issue where a freshly created payment transaction has no ID.
@Override
@Transactional("blTransactionManager")
public PaymentTransaction getNewTemporaryOrderPayment(Order cart, PaymentType paymentType) {
    OrderPayment tempPayment = null;
    if (CollectionUtils.isNotEmpty(cart.getPayments())) {
        Optional<OrderPayment> optionalPayment = NmcPaymentUtils.getPaymentForOrder(cart);
        if (optionalPayment.isPresent()) {
            tempPayment = optionalPayment.get();
            invalidateTemporaryPaymentTransactions(tempPayment);
        } else {
            throw new IllegalStateException("Missing payment");
        }
    } else {
        tempPayment = this.orderPaymentService.create();
    }
    tempPayment = this.populateOrderPayment(tempPayment, cart, paymentType);
    // It's necessary to create a new transaction every time because the ID passed to 24pay must be unique
    PaymentTransaction transaction = createPendingTransaction(cart);
    transaction.setOrderPayment(tempPayment);
    tempPayment.addTransaction(transaction);
    tempPayment = orderService.addPaymentToOrder(cart, tempPayment, null);
    orderPaymentService.save(transaction);
    orderPaymentService.save(tempPayment);
    return transaction;
}
Even if I do an explicit save on the returned PaymentTransaction, the ID is still null, although the transaction is correctly persisted and has an ID in the database.
PaymentTransaction paymentTransaction = paymentService.getNewTemporaryOrderPayment(cart, PaymentType.CREDIT_CARD);
orderPaymentService.save(paymentTransaction);
How can I explicitly refresh this entity? Or are there any other suggestions for solving this? I can do something like this to find my pending transaction:
OrderPayment orderPayment = paymentTransaction.getOrderPayment();
Optional<PaymentTransaction> any = orderPayment.getTransactions().stream().filter(t -> t.isActive()).findFirst();
but that seems like an extra step that should not be needed. Any suggestions for solving this in an elegant way?
The transaction object has a null id because that variable is not updated when the order is saved.
Calls to save() methods return a new object, and that new object will have its id set.
Consider the following example:
Transaction transaction1 = createTransaction(...);
Transaction transaction2 = orderPaymentService.save(transaction1);
After this code executes, transaction1 will not have been changed by save(), so its id will still be null; transaction2 will be a new object with its id set.
Therefore, the variable transaction, created with PaymentTransaction transaction = createPendingTransaction(cart);, is never updated with the saved value, so the id is still null at the end.
Further, the save() calls at the end for the transaction and payment probably won't work as you intend. This is because the orderService.addPaymentToOrder(cart, tempPayment, null); will save the order, which should also cascade to save the transaction and payment. I'm pretty sure that calling save again would result in new objects that are not connected to the saved order.
So what do you do about this?
The call to tempPayment = orderService.addPaymentToOrder(cart, tempPayment, null); returns a persisted OrderPayment. Read the transactions from that object to find the one you just created. It is very similar to the extra step you are trying to avoid, but you can at least cut out one line.
OrderPayment persistedPayment = orderService.addPaymentToOrder(cart, tempPayment, null);
Optional<PaymentTransaction> persistedTransaction = persistedPayment.getTransactions().stream().filter(t -> t.isActive()).findFirst();
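Putting that together, a sketch of how the tail end of your method could look, assuming the same helpers you already have (the extra save() calls are dropped because addPaymentToOrder should already persist the payment and cascade to its transactions):

    PaymentTransaction transaction = createPendingTransaction(cart);
    transaction.setOrderPayment(tempPayment);
    tempPayment.addTransaction(transaction);

    // addPaymentToOrder saves the order, which should cascade to the payment and its
    // transactions, so read the persisted transaction back from the returned OrderPayment.
    OrderPayment persistedPayment = orderService.addPaymentToOrder(cart, tempPayment, null);
    return persistedPayment.getTransactions().stream()
            .filter(PaymentTransaction::isActive)
            .findFirst()
            .orElseThrow(() -> new IllegalStateException("Pending transaction was not persisted"));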
Related
I have a REST application where one of the resources can be updated. Below are the two methods responsible for this task:
updateWithRelatedEntities(String, Store): receives the id and a new Store object constructed by deserializing the PUT request entity, sets the version (used for optimistic locking) on the new object, and calls update in a transaction.
public Store updateWithRelatedEntities(String id, Store newStore) {
    Store existingStore = this.get(id);
    newStore.setVersion(existingStore.getVersion());
    em.getTransaction().begin();
    newStore = super.update(id, newStore);
    em.getTransaction().commit();
    return newStore;
}
update(String, T): a generic method for performing an update. It checks that the ids match and performs the merge operation.
public T update(String id, T newObj) {
    if (newObj == null) {
        throw new EmptyPayloadException(type.getSimpleName());
    }
    Type superclass = getClass().getGenericSuperclass();
    if (superclass instanceof Class) {
        superclass = ((Class) superclass).getGenericSuperclass();
    }
    Class<T> type = (Class<T>) (((ParameterizedType) superclass).getActualTypeArguments()[0]);
    T obj = em.find(type, id);
    if (!newObj.getId().equals(obj.getId())) {
        throw new IdMismatchException(id, newObj.getId());
    }
    return em.merge(newObj);
}
The problem is that the call T obj = em.find(type, id); triggers an update of the store object in the database, which means we get an OptimisticLockException when triggering the merge (because the versions are now different).
Why is this happening? What would be the correct way to achieve this?
I would rather not copy properties from newStore to existingStore and use existingStore for the merge, which would, I think, solve the optimistic lock problem.
This code is not running on an application server and I am not using JTA.
EDIT:
If I detach existingStore before calling update, T obj = em.find(type, id); doesn't trigger an update of the store object, so this solves the problem. The question still remains though: why does it trigger the update when the entity is not detached?
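For reference, a minimal sketch of that detach workaround, assuming the same em field and super.update as above:

    public Store updateWithRelatedEntities(String id, Store newStore) {
        Store existingStore = this.get(id);
        newStore.setVersion(existingStore.getVersion());

        // Detach the previously loaded instance so the flush triggered by
        // em.find() inside update() no longer writes it out.
        em.detach(existingStore);

        em.getTransaction().begin();
        newStore = super.update(id, newStore);
        em.getTransaction().commit();
        return newStore;
    }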
I can't see your entity from the code you added, but I believe you are missing a key point of optimistic locking: the @Version annotation on the version field.
If you have this field on your entity, then the container should be able to perform the merge without problems. Please take a look at
Optimistic Locking, and also the good article "don't break optimistic locking".
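For completeness, a minimal sketch of what a version field on the Store entity could look like, assuming the javax.persistence API (getters, setters, and other fields omitted):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Store {

        @Id
        private String id;

        // Incremented by the provider on every successful flush/commit;
        // a version mismatch at merge time raises an OptimisticLockException.
        @Version
        private Long version;

        // other fields, getters and setters omitted
    }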
I know that there are some questions about this subject already but I think that this one is different.
Let's say I have this class:
@Entity
public class Foo {
    @Id
    @GeneratedValue
    private long id;

    @Version
    private long version;

    private String description;
    ...
}
Then I create some objects and persist them to a DB using JPA add().
Later, I get all of them from the repository using JPA all().
From that list I select one object and change the description.
Then I want to update that object in the repository using JPA merge() (see code).
The problem here is that it works the first time I try to change the description (the version value is now 2).
The second time, an OptimisticLockException is raised saying that the object was changed in the meantime.
I'm using H2 as the DB in embedded mode.
MERGE CODE:
// First, persist is tried; if the object already exists, an exception is raised and then this code is executed
try {
    tx = em.getTransaction();
    tx.begin();
    entity = em.merge(entity);
    tx.commit();
} catch (PersistenceException pex) {
    // Do stuff
}
What can be wrong here?
Thank you.
EDIT (more code)
// Foo b is obtained by getting all objects from the DB using JPA all(), then one object is selected from that list
b.changeDescription("Something new!");
// Call the update method (merge code already posted)
I would assume that you are changing elements in the list from different clients or different threads. This is what causes an OptimisticLockException.
One thread, in its own EntityManager, reads the Foo object and gets a @Version value at the time of the read.
// select and update AnyEntity
EntityManager em1 = emf.createEntityManager();
EntityTransaction tx1 = em1.getTransaction();
tx1.begin();
AnyEntity firstEntity = em1.createQuery("select a from AnyEntity a", AnyEntity.class).getSingleResult();
firstEntity.setName("name1");
em1.merge(firstEntity);
Another client reads and updates the Foo object at the same time, before the first client has committed its changes to the database:
// select and update AnyEntity from a different EntityManager from a different thread or client
EntityManager em2 = emf.createEntityManager();
EntityTransaction tx2 = em2.getTransaction();
tx2.begin();
AnyEntity secondEntity = em2.createQuery("select a from AnyEntity a", AnyEntity.class).getSingleResult();
secondEntity.setName("name2");
em2.merge(secondEntity);
Now the first client commits its changes to the database:
// commit first change while second change still pending
tx1.commit();
em1.close();
And the second client gets an OptimisticLockException when it updates its changes:
// An OptimisticLockException thrown here means that a change happened while AnyEntity was still "checked out"
try {
    tx2.commit();
    em2.close();
} catch (RollbackException ex) {
    Throwable cause = ex.getCause();
    if (cause != null && cause instanceof OptimisticLockException) {
        System.out.println("Someone already changed AnyEntity.");
    } else {
        throw ex;
    }
}
Reference: Java - JPA - @Version annotation
Are you properly initialising the version field?
If not, it is not supposed to work with null; try adding a default value to it:
@Version
private Long version = 0L;
Here is a post which explains perfectly when an OptimisticLockException is thrown.
Also, just for future reference, you can make JPA skip this in-memory validation of entities when you update them but only want the change applied on the database side at the end of the transaction, by using the detach method on the EntityManager:
em.detach(employee);
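As a rough sketch of that pattern, assuming an Employee entity with the usual getters and setters (the Employee type, employeeId, and setter used here are placeholders, not your actual code):

    // Load the entity, then detach it so intermediate changes are not flushed.
    Employee employee = em.find(Employee.class, employeeId);
    em.detach(employee);

    // Modify the detached instance freely; nothing is written to the DB yet.
    employee.setDescription("new description");

    // Re-attach and apply the change on the DB side at the end of the transaction.
    em.getTransaction().begin();
    em.merge(employee);
    em.getTransaction().commit();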
I'm using Spring Data Neo4j 4. It seems the "PersistenceContext" of Neo4j caches the values of the "Set" field.
The Entity
@NodeEntity
public class ServiceStatus implements java.io.Serializable {

    @GraphId
    Long id;

    private Set<String> owners = new HashSet<String>();
}
First, I put the value "ROLE_ADMIN" in owners and save it.
Then I edit the value to "ROLE_SYSTEM_OWNER" and call save() again.
In the Neo4j query browser, it only shows "ROLE_SYSTEM_OWNER", which is correct so far.
However, when I call findAll(), owners has two values: ["ROLE_ADMIN", "ROLE_SYSTEM_OWNER"].
It works fine when I restart my web server.
[The way I change the value]
@Test
public void testSaveServiceStatus() throws OSPException {
    // 1. save
    ServiceStatus serviceStatus = new ServiceStatus();
    serviceStatus.setServiceName("My Name");
    Set<String> owners = new HashSet<String>();
    owners.add("ROLE_SITE_ADMIN");
    serviceStatus.setOwners(owners);
    serviceStatusRepository.save(serviceStatus);
    System.out.println(serviceStatus.getId()); // 262
}

@Test
public void testEditServiceStatus() throws OSPException {
    // 1. find all; it seems to cache the set value
    serviceStatusRepository.findAll();

    // 2. simulate the web process behavior
    ServiceStatus serviceStatus = new ServiceStatus();
    serviceStatus.setId(new Long(262));
    serviceStatus.setServiceName("My Name");
    Set<String> owners = new HashSet<String>();
    // change the owner to Requestor
    owners.add("Requestor");
    serviceStatus.setOwners(owners);

    // 3. save the "changed" value
    // In the Cypher query browser, it shows "Requestor" only
    serviceStatusRepository.save(serviceStatus);

    // 4. retrieve it again
    serviceStatus = serviceStatusRepository.findOne(new Long(262));
    System.out.println(serviceStatus); // ServiceStatus[id=262,serviceName=My Name,owners=[Requestor5, Requestor4]]
}
Your test is, in effect, working with detached objects. In step 1, findAll() loads these entities into the session, but step 2, instead of using the loaded entity, creates a new one which is subsequently saved. The "attached" entity still refers to the earlier version of the entity.
The OGM does not handle this currently.
You're best off loading the entity with findAll() or just findOne(id), modifying it, and saving it (instead of recreating one by setting the id). That will keep everything consistent.
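A minimal sketch of that load-modify-save flow, reusing the repository and the id (262) from your test:

    // Load the attached entity instead of recreating it with setId().
    ServiceStatus serviceStatus = serviceStatusRepository.findOne(new Long(262));

    // Modify the loaded instance.
    Set<String> owners = new HashSet<String>();
    owners.add("Requestor");
    serviceStatus.setOwners(owners);

    // Save the same instance the session already knows about.
    serviceStatusRepository.save(serviceStatus);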
I'm trying to persist 3 entities (exp, def, meng) in a transaction, and then persist another 2 (def', meng'), where meng' is related to exp.
However, as I attempt to persist meng', EclipseLink/JPA 2 is being told to:
Call: INSERT INTO EXPRESSION (EXPRESSION, GENDER) VALUES (?, ?)
bind => [exp, null]
which will throw an exception, since that row has already been inserted and it's a key.
So apparently persisting the entity meng', which itself involves updating exp, somehow makes EclipseLink think I asked to persist a new exp.
Here is the test:
@Test
public void testInsertWords() throws MultipleMengsException, Exception {
    final List<String[]> mengsWithSharedExp = new LinkedList<String[]>();
    mengsWithSharedExp.add(mengsList.get(3));
    mengsWithSharedExp.add(mengsList.get(4));
    insertWords(mengsWithSharedExp, null, mengsDB);
}
Here is the problematic code:
public void insertWords(EnumMap<Input, MemoEntity> input) throws MultipleMengsException {
    Expression def = (Expression) input.get(Input.definition);
    Expression exp = (Expression) input.get(Input.expression);
    beginTransaction();
    persistIfNew(def);
    persistIfNew(exp);
    persistNewMeng(null, exp, def);
    commitTransaction();
}

private void persistNewMeng(final MUser usr, Expression exp, final Expression def) throws RuntimeException {
    final Meaning meng = new Meaning(usr, exp, def);
    if (!persistIfNew(meng)) {
        throw new RuntimeException("Meng ." + meng.toString() + " was expected to be new.");
    }
    if (usr != null) {
        usr.addMeng(meng);
    }
}

public <Entity> boolean persistIfNew(final Entity entity) {
    final Object key = emf.getPersistenceUnitUtil().getIdentifier(entity);
    if (em.find(entity.getClass(), key) != null) {
        return false;
    }
    em.persist(entity);
    return true;
}
You can check out the Maven source code (to test) from here.
Is this expected behavior? If so, why? And most importantly, how to solve?
It looks as if
@ManyToMany(cascade = CascadeType.ALL)
private Set<Expression> exps;
in Meaning is the culprit, although I don't understand why it should. The documentation says:
If the entity is already managed, the persist operation is ignored, although the persist operation will cascade to related entities that have the cascade element set to PERSIST or ALL in the relationship annotation.
Frank is correct. You are not reading in the Expression, so when you call persist on the Meaning, the existing Expressions it references are detached, which causes the transaction to fail. Calling merge will work, or you can remove the cascade persist on the exps relationship; since you seem to persist new Expressions directly anyway, it's not needed.
Most likely you run into:
[...] If the entity is detached [...] the transaction commit will fail.
(same source that you are citing)
If you persist a new entity that references an already persistent entity, you must use "merge" instead of "persist". "merge" will persist new entities and update existing entities.
Also be aware that the merge operation returns an attached data graph, which must be used for further operations within the same persistence context.
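A rough sketch using the names from the question, assuming the same transaction helpers and that exp and def may already be persistent (illustrative only, not the exact fix for your code base):

    beginTransaction();
    persistIfNew(def);
    persistIfNew(exp);

    // Use merge instead of persist: it inserts the new Meaning, reuses the
    // already-persistent Expressions it references, and returns the attached graph.
    Meaning meng = new Meaning(null, exp, def);
    Meaning attachedMeng = em.merge(meng);

    // Work with attachedMeng (not meng) for anything else in this persistence context.
    commitTransaction();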
I'm going through the Sharded Counters example in Java:
http://code.google.com/appengine/articles/sharding_counters.html
I have a question about the implementation of the increment method. In Python it explicitly wraps the get() and increment in a transaction; in the Java example it just retrieves the shard and sets it. I'm not sure I fully understand the Datastore and transactions, but it seems like the critical update section should be wrapped in a datastore transaction. Am I missing something?
Original code:
public void increment() {
    PersistenceManager pm = PMF.get().getPersistenceManager();
    Random generator = new Random();
    int shardNum = generator.nextInt(NUM_SHARDS);
    try {
        Query shardQuery = pm.newQuery(SimpleCounterShard.class);
        shardQuery.setFilter("shardNumber == numParam");
        shardQuery.declareParameters("int numParam");
        List<SimpleCounterShard> shards =
                (List<SimpleCounterShard>) shardQuery.execute(shardNum);

        SimpleCounterShard shard;
        // If the shard with the passed shard number exists, increment its count
        // by 1. Otherwise, create a new shard object, set its count to 1, and
        // persist it.
        if (shards != null && !shards.isEmpty()) {
            shard = shards.get(0);
            shard.setCount(shard.getCount() + 1);
        } else {
            shard = new SimpleCounterShard();
            shard.setShardNumber(shardNum);
            shard.setCount(1);
        }
        pm.makePersistent(shard);
    } finally {
        pm.close();
    }
}
Transactional code (I believe you need to run this in a transaction to guarantee correctness under concurrent updates):
public void increment() {
    PersistenceManager pm = PMF.get().getPersistenceManager();
    Random generator = new Random();
    int shardNum = generator.nextInt(NUM_SHARDS);
    try {
        Query shardQuery = pm.newQuery(SimpleCounterShard.class);
        shardQuery.setFilter("shardNumber == numParam");
        shardQuery.declareParameters("int numParam");
        List<SimpleCounterShard> shards =
                (List<SimpleCounterShard>) shardQuery.execute(shardNum);

        SimpleCounterShard shard;
        // If the shard with the passed shard number exists, increment its count
        // by 1. Otherwise, create a new shard object, set its count to 1, and
        // persist it.
        if (shards != null && !shards.isEmpty()) {
            Transaction tx = pm.currentTransaction();
            try {
                tx.begin();
                // I believe that inside a transaction objects need to be loaded by ID
                // (you can't reuse the entity queried outside the transaction)
                Key shardKey = new KeyFactory.Builder(SimpleCounterShard.class.getSimpleName(),
                        shards.get(0).getID()).getKey();
                shard = pm.getObjectById(SimpleCounterShard.class, shardKey);
                shard.setCount(shard.getCount() + 1);
                tx.commit();
            } finally {
                if (tx.isActive()) {
                    tx.rollback();
                }
            }
        } else {
            shard = new SimpleCounterShard();
            shard.setShardNumber(shardNum);
            shard.setCount(1);
        }
        pm.makePersistent(shard);
    } finally {
        pm.close();
    }
}
This section straight out of the docs shows that you are exactly right about needing a transaction:
http://code.google.com/appengine/docs/java/datastore/transactions.html#Uses_For_Transactions
This example demonstrates one use of transactions: updating an entity with a new property value relative to its current value.
Key k = KeyFactory.createKey("Employee", "k12345");
Employee e = pm.getObjectById(Employee.class, k);
e.counter += 1;
pm.makePersistent(e);
This requires a transaction because the value may be updated by another user after this code fetches the object, but before it saves the modified object. Without a transaction, the user's request will use the value of counter prior to the other user's update, and the save will overwrite the new value. With a transaction, the application is told about the other user's update. If the entity is updated during the transaction, then the transaction fails with an exception. The application can repeat the transaction to use the new data.
It's very close to what the sharded example is doing and, like you, I was unable to find any reason why sharded counters would be different.
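Since the quoted docs note that the application can repeat a failed transaction, here is a rough sketch of a retry loop around the transactional increment. NUM_RETRIES is a made-up constant, and the exact exception to catch is an assumption that may differ depending on how your JDO setup surfaces the concurrency failure:

    // Retry the transactional increment a few times if a concurrent update
    // causes the datastore transaction to fail.
    int retries = NUM_RETRIES;
    while (true) {
        try {
            increment();
            break;
        } catch (javax.jdo.JDOException e) {
            if (retries-- == 0) {
                throw e;
            }
            // Another request likely updated the same shard first; try again.
        }
    }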