reading a modified value from bean instead of DB within same transaction - java

I am facing a scenario where I need to update a parameter and then retrieve the modified value within the same transaction.
For example:
@Transactional(propagation = Propagation.REQUIRED)
public void modifyParameter(Object object, BigInteger attributeId) {
    ...
    Attribute attrValue = object.getParameter(attributeId);
    attrValue.setValue("new_value");
    object.setParameter(attributeId, attrValue);
    ...
    object.getParameter(attributeId); // getting the old value instead of the modified value
}
In the above case, it returns the old value. However, when I wrapped the update in a separate transaction, I was able to retrieve the modified value.
My question is: can't we retrieve the modified value from the bean itself within the same transaction, instead of committing the inner (i.e. new) transaction and retrieving it from the DB?

If you already have the value, why fetch it from the DB again? Use the value you already have. That is better design from both a performance and a maintainability perspective.
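A minimal sketch of this suggestion (the Attribute/getParameter names follow the question; keeping a local reference is the only addition):

Attribute attrValue = object.getParameter(attributeId);
attrValue.setValue("new_value");
object.setParameter(attributeId, attrValue);

// Later in the same transaction, reuse the reference you already modified
// instead of re-reading it from the entity or the database:
String current = attrValue.getValue(); // "new_value"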

hibernate java curiosity - after saving the object, both the object to save and the saved one have id set

I have the following simple code:
#Test
public void saveExpense() {
// Create dummy Expense object i.e. { "description": "Short Description", "date": etc }
Expense expenseToSave = ExpenseHelper.createExpense("Short Description", new Date(), user);
Expense savedExpense = expenseService.save(expenseToSave);
// What is strange, is that here, both expenseToSave and savedExpense have id set to 1 for example; after save the expense should have an id;
Expense expected = ExpenseHelper.createExpense("Short Description", new Date(), user);
// Check if expected object is equal to the saved one
Assert.assertTrue(expected.equals(expenseService.findByDescription("Short Description")));
}
Normally I would expect expenseToSave to be without an id and savedExpense to have one, but both have an id after save. Why?
That made another variable necessary and complicated the test.
Thanks.
That's just how the Hibernate Session.save() method is specified. From the documentation:
Persist the given transient instance, first assigning a generated
identifier. (Or using the current value of the identifier property if
the assigned generator is used.) This operation cascades to associated
instances if the association is mapped with cascade="save-update".
IDs are the mechanism by which Hibernate differentiates between persisted and transient objects, and how it identifies specific objects. Therefore the ID is set early in the persistence step, because, for example, cyclic references in an object tree are resolved via IDs while persisting.
What differentiates the returned object vs. the original object is that the returned object is attached to the Hibernate session. For example, with active cascading, contained entities (e.g. in a one-to-many collection) are now persistent instances as well in the returned object.
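A minimal sketch of that behaviour (assuming a plain Hibernate Session and a generated identifier; getId() is an assumed accessor):

Expense expenseToSave = new Expense();
// save() assigns the generated identifier to the very instance you pass in,
// so both the "original" and the "saved" reference see the new id
Serializable id = session.save(expenseToSave);
assert expenseToSave.getId() != null; // same instance, id now set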
Please be aware that
void EntityManager#persist(java.lang.Object entity)
(http://docs.oracle.com/javaee/6/api/javax/persistence/EntityManager.html#persist%28java.lang.Object%29)
persists the given object by changing the object passed in; it does not return a persisted copy. I suspect your ExpenseHelper additionally returns the original object, so that the object you receive via the return value is the same object you passed in.
This follows a common anti-pattern of giving DAOs a kind of unified behaviour, something like
public T create(T entity) {
    this.entityManager.persist(entity);
    return entity;
}
to get a kind of symmetry with saving something:
public T save(T entity) {
    return this.entityManager.merge(entity);
}
Where
<T> T EntityManager#merge(T entity)
does indeed merge and pass you the merged entity.
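A hedged usage sketch of the two DAO methods above (expenseDao is an assumed instance of such a DAO):

Expense toCreate = new Expense();
Expense created = expenseDao.create(toCreate);
// created == toCreate: persist() works on the very instance you pass in

Expense detached = created;                 // imagine this came from an earlier session
Expense merged = expenseDao.save(detached);
// merged may be a different, managed instance returned by merge();
// further changes should be made on 'merged', not on 'detached'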
It can depend on the Hibernate mapping of the Expense entity, or on the implementation of the ExpenseHelper class.
Also, take a look at the Expense.equals() implementation.
Based on this statement:
Expense savedExpense = expenseService.save(expenseToSave);
the value of the savedExpense object will depend on what you are doing in the save method. Usually save methods don't return an object; you already have a reference to the object that you just saved (expenseToSave). You are trying to assert that your expected object equals the object that was saved, which is fine, so I am not sure what the purpose of returning an object from expenseService.save(expenseToSave) is.
Also, note that the id of expenseToSave will have been populated by your ORM (Hibernate, I assume), based on your configuration, when you save it. There is no need to return this object or another object from the save method.

how to use HibernateTemplate.find() for and operator

I am new to Hibernate and Spring. I want to retrieve data with an HQL query made through HibernateTemplate.find(). The query uses an and operator.
When I assign the result of HibernateTemplate.find() to a List, the list size comes out as 0. Below is my code.
public long getMetaDataID(String customerID, String objectID) {
    long metadataID = 0;
    long customerID_l = Long.parseLong(customerID);
    long objectID_l = Long.parseLong(objectID);
    List<RWFieldMetadata> list = template.find(
            "from RWFieldMetadata p where p.customer_id = ? and p.object_id = ?",
            customerID_l, objectID_l);
    for (RWFieldMetadata obj : list) {
        metadataID = obj.getId();
    }
    return metadataID;
}
I know there is also the Criteria API, but I find it difficult and want to keep using HibernateTemplate.find(). Why am I getting a list size of 0? What mistake am I making here?
Usually, if you get an empty list it means that you are not using a transaction. HibernateTemplate doesn't create a transaction itself, and you need a transaction not only to write data, but to read it too.
You need to add a transaction to the getMetaDataID() method. You can do that with the @Transactional annotation, but then you need to get your class from the Spring context.
You can also open a transaction using session.beginTransaction(), but then you need to control it properly.
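A minimal sketch of the first suggestion (MetadataService is an assumed class name; the bean must be obtained from the Spring context and a transaction manager must be configured for the annotation to take effect):

@Service
public class MetadataService {

    @Autowired
    private HibernateTemplate template;

    // The read now runs inside a Spring-managed transaction.
    @Transactional(readOnly = true)
    public long getMetaDataID(String customerID, String objectID) {
        List<?> list = template.find(
                "from RWFieldMetadata p where p.customer_id = ? and p.object_id = ?",
                Long.parseLong(customerID), Long.parseLong(objectID));
        return list.isEmpty() ? 0 : ((RWFieldMetadata) list.get(0)).getId();
    }
}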

JPA transaction handling

I have a Foo entity with fields Name, SecondaryName and Counter.
In the DB I have a unique constraint on (name, secondaryName, counter).
In the service layer I have the following method (where fooRepository is a CrudRepository):
@Transactional(isolation = Isolation.SERIALIZABLE, propagation = Propagation.REQUIRES_NEW)
public void saveFoo(Foo foo) {
    Optional<TestDto> fooWithHighestCounter = fooRepository
            .findTopByNameAndSecondaryNameOrderByCounterDesc(foo.getName(), foo.getSecondaryName());
    if (fooWithHighestCounter.isPresent()) {
        foo.setCounter(fooWithHighestCounter.get().getCounter() + 1);
    } else {
        foo.setCounter(1);
    }
    Foo saved = fooRepository.save(foo);
}
With every call to saveFoo, a new record should be created in the DB with a counter one higher than the existing highest counter. Hence the highest counter must be found first, thus the @Transactional.
However, I constantly get a ConstraintViolationException when multiple threads call the saveFoo method, because every thread finds the same highest counter value.
I assumed that every thread would create a new transaction and that those transactions would run serially, so no two transactions would find the same counter value. (@EnableTransactionManagement is put on the Application.)
What else can I do to achieve the aforementioned behavior?
I think the fooRepository.save(foo) at the end is saving the same values again and again in the database, which is why it is giving a ConstraintViolationException. If you need to update the value of an existing object, just call setCounter but don't call save(); instead call the update method of the repository (if you have one). If it is a new entity that is not present in the database yet, then call save().
If it is done in Hibernate, refer to the following link:
Ref: http://www.objectdb.com/java/jpa/persistence/update

Hibernate same object with different values

I have one class with specific columns, say:
class A {
    private String A;
    private String B;
    private String C;
    // getters and setters of the respective fields
}
Now what happens is that I have the same value for column A and column B; only column C's value changes. So I do something like below:
A a = new A();
a.setA(..);
a.setB(..);
for (int i = 0; i < length; i++) {
    a.setC(..);
    getHibernateTemplate().saveOrUpdate(a);
    // or something like this
    // A a1 = new A();
    // a1 = a;
    // a1.setC(..);
    // getHibernateTemplate().saveOrUpdate(a1);
}
My issue is that it does not store length records (one per iteration); it only updates that single record.
I know the reason: Hibernate treats it as a persistent object, so even if I change a value and save again, it will update the existing record. This can be resolved by creating a new object every time and setting all its values, but I don't want that. Is there any way to tell Hibernate to save the record as a new row instead of updating it?
You haven't described the actual entity details. If you want to save an entity with the same values, set the identifier property to null.
From the documentation:
saveOrUpdate()
if the object is already persistent in this session, do nothing
if another object associated with the session has the same identifier, throw an exception
if the object has no identifier property, save() it
if the object's identifier has the value assigned to a newly instantiated object, save() it
if the object is versioned by a <version> or <timestamp>, and the version property value is the same value assigned to a newly instantiated object, save() it
otherwise update() the object
saveOrUpdateAll()
Save or update all given persistent instances, according to their ids (matching the configured "unsaved-value"?). Associates the instances with the current Hibernate Session.
[If it works, can try this for your other query]
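A minimal illustration of the rules quoted above (assuming A has a generated Long identifier with the default unsaved-value of null; setId is an assumed setter):

A fresh = new A();                             // identifier is null
getHibernateTemplate().saveOrUpdate(fresh);    // no identifier -> save(), i.e. INSERT

A detached = new A();
detached.setId(42L);                           // identifier already assigned
getHibernateTemplate().saveOrUpdate(detached); // identifier present -> update(), i.e. UPDATE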
Edit: It was my oversight; I hadn't checked your code carefully.
You have defined object a outside the for loop, so the same object is being updated in each iteration. Try the code below; it might help.
for (int i = 0; i < length; i++) {
    A a = new A(); // create a new object for each iteration
    a.setA(..);
    a.setB(..);
    a.setC(..);
    getHibernateTemplate().saveOrUpdate(a);
}
Yes. Try save(a) instead of saveOrUpdate(a):
getHibernateTemplate().save(a); // each time a new object is saved

Find or insert based on unique key with Hibernate

I'm trying to write a method that will return a Hibernate object based on a unique but non-primary key. If the entity already exists in the database I want to return it, but if it doesn't I want to create a new instance and save it before returning.
UPDATE: Let me clarify that the application I'm writing this for is basically a batch processor of input files. The system needs to read a file line by line and insert records into the db. The file format is basically a denormalized view of several tables in our schema, so what I have to do is parse out the parent record and either insert it into the db so I can get a new synthetic key, or, if it already exists, select it. Then I can add additional associated records in other tables that have foreign keys back to that record.
The reason this gets tricky is that each file needs to be either totally imported or not imported at all, i.e. all inserts and updates done for a given file should be a part of one transaction. This is easy enough if there's only one process that's doing all the imports, but I'd like to break this up across multiple servers if possible. Because of these constraints I need to be able to stay inside one transaction, but handle the exceptions where a record already exists.
The mapped class for the parent records looks like this:
@Entity
public class Foo {
    @Id
    @GeneratedValue(strategy = IDENTITY)
    private int id;

    @Column(unique = true)
    private String name;
    ...
}
My initial attempt at writing this method is as follows:
public Foo findOrCreate(String name) {
    Foo foo = new Foo();
    foo.setName(name);
    try {
        session.save(foo);
    } catch (ConstraintViolationException e) {
        foo = (Foo) session.createCriteria(Foo.class).add(eq("name", name)).uniqueResult();
    }
    return foo;
}
The problem is that when the name I'm looking for already exists, an org.hibernate.AssertionFailure exception is thrown by the call to uniqueResult(). The full stack trace is below:
org.hibernate.AssertionFailure: null id in com.searchdex.linktracer.domain.LinkingPage entry (don't flush the Session after an exception occurs)
at org.hibernate.event.def.DefaultFlushEntityEventListener.checkId(DefaultFlushEntityEventListener.java:82) [hibernate-core-3.6.0.Final.jar:3.6.0.Final]
at org.hibernate.event.def.DefaultFlushEntityEventListener.getValues(DefaultFlushEntityEventListener.java:190) [hibernate-core-3.6.0.Final.jar:3.6.0.Final]
at org.hibernate.event.def.DefaultFlushEntityEventListener.onFlushEntity(DefaultFlushEntityEventListener.java:147) [hibernate-core-3.6.0.Final.jar:3.6.0.Final]
at org.hibernate.event.def.AbstractFlushingEventListener.flushEntities(AbstractFlushingEventListener.java:219) [hibernate-core-3.6.0.Final.jar:3.6.0.Final]
at org.hibernate.event.def.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:99) [hibernate-core-3.6.0.Final.jar:3.6.0.Final]
at org.hibernate.event.def.DefaultAutoFlushEventListener.onAutoFlush(DefaultAutoFlushEventListener.java:58) [hibernate-core-3.6.0.Final.jar:3.6.0.Final]
at org.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:1185) [hibernate-core-3.6.0.Final.jar:3.6.0.Final]
at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1709) [hibernate-core-3.6.0.Final.jar:3.6.0.Final]
at org.hibernate.impl.CriteriaImpl.list(CriteriaImpl.java:347) [hibernate-core-3.6.0.Final.jar:3.6.0.Final]
at org.hibernate.impl.CriteriaImpl.uniqueResult(CriteriaImpl.java:369) [hibernate-core-3.6.0.Final.jar:3.6.0.Final]
Does anyone know what is causing this exception to be thrown? Does Hibernate support a better way of accomplishing this?
Let me also preemptively explain why I'm inserting first and then selecting if and when that fails. This needs to work in a distributed environment so I can't synchronize across the check to see if the record already exists and the insert. The easiest way to do this is to let the database handle this synchronization by checking for the constraint violation on every insert.
I had a similar batch processing requirement, with processes running on multiple JVMs. The approach I took for this was as follows. It is very much like jtahlborn's suggestion. However, as vbence pointed out, if you use a NESTED transaction, when you get the constraint violation exception, your session is invalidated. Instead, I use REQUIRES_NEW, which suspends the current transaction and creates a new, independent transaction. If the new transaction rolls back it will not affect the original transaction.
I am using Spring's TransactionTemplate but I'm sure you could easily translate it if you do not want a dependency on Spring.
public T findOrCreate(final T t) throws InvalidRecordException {
    // 1) look for the record
    T found = findUnique(t);
    if (found != null)
        return found;
    // 2) if not found, start a new, independent transaction
    TransactionTemplate tt = new TransactionTemplate(
            (PlatformTransactionManager) transactionManager);
    tt.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    try {
        found = tt.execute(new TransactionCallback<T>() {
            public T doInTransaction(TransactionStatus status) {
                try {
                    // 3) store the record in this new transaction
                    return store(t);
                } catch (ConstraintViolationException e) {
                    // another thread or process created this already, possibly
                    // between 1) and 2)
                    status.setRollbackOnly();
                    return null;
                }
            }
        });
        // 4) if we failed to create the record in the second transaction, found will
        //    still be null; however, this would happen only if another process
        //    created the record. let's see what they made for us!
        if (found == null)
            found = findUnique(t);
    } catch (...) {
        // handle exceptions
    }
    return found;
}
You need to use UPSERT or MERGE to achieve this goal.
However, Hibernate does not offer support for this construct, so you need to use jOOQ instead.
private PostDetailsRecord upsertPostDetails(
        DSLContext sql, Long id, String owner, Timestamp timestamp) {
    sql.insertInto(POST_DETAILS)
       .columns(POST_DETAILS.ID, POST_DETAILS.CREATED_BY, POST_DETAILS.CREATED_ON)
       .values(id, owner, timestamp)
       .onDuplicateKeyIgnore()
       .execute();

    return sql.selectFrom(POST_DETAILS)
              .where(field(POST_DETAILS.ID).eq(id))
              .fetchOne();
}
Calling this method on PostgreSQL:
PostDetailsRecord postDetailsRecord = upsertPostDetails(
sql,
1L,
"Alice",
Timestamp.from(LocalDateTime.now().toInstant(ZoneOffset.UTC))
);
Yields the following SQL statements:
INSERT INTO "post_details" ("id", "created_by", "created_on")
VALUES (1, 'Alice', CAST('2016-08-11 12:56:01.831' AS timestamp))
ON CONFLICT DO NOTHING;
SELECT "public"."post_details"."id",
"public"."post_details"."created_by",
"public"."post_details"."created_on",
"public"."post_details"."updated_by",
"public"."post_details"."updated_on"
FROM "public"."post_details"
WHERE "public"."post_details"."id" = 1
On Oracle and SQL Server, jOOQ will use MERGE while on MySQL it will use ON DUPLICATE KEY.
Concurrency is ensured by the row-level locking mechanism the database employs when inserting, updating, or deleting a record.
The code is available on GitHub.
Two solutions come to mind:
That's what TABLE LOCKS are for
Hibernate does not support table locks, but this is the situation where they come in handy. Fortunately, you can use native SQL through Session.createSQLQuery(). For example (on MySQL):
// no access to the table for any other clients
session.createSQLQuery("LOCK TABLES foo WRITE").executeUpdate();

// safe zone
Foo foo = (Foo) session.createCriteria(Foo.class).add(eq("name", name)).uniqueResult();
if (foo == null) {
    foo = new Foo();
    foo.setName(name);
    session.save(foo);
}

// releasing locks
session.createSQLQuery("UNLOCK TABLES").executeUpdate();
This way when a session (client connection) gets the lock, all the other connections are blocked until the operation ends and the locks are released. Read operations are also blocked for other connections, so needless to say use this only in case of atomic operations.
What about Hibernate's locks?
Hibernate uses row-level locking. We cannot use it directly, because we cannot lock non-existent rows. But we can create a dummy table with a single record, map it to the ORM, then use SELECT ... FOR UPDATE style locks on that object to synchronize our clients. Basically we only need to be sure that no other clients (running the same software, with the same conventions) will do any conflicting operations while we are working.
// begin transaction
Transaction transaction = session.beginTransaction();

// blocks while any other client holds the lock
session.load("dummy", 1, LockOptions.UPGRADE);

// virtual safe zone
Foo foo = (Foo) session.createCriteria(Foo.class).add(eq("name", name)).uniqueResult();
if (foo == null) {
    foo = new Foo();
    foo.setName(name);
    session.save(foo);
}

// ends transaction (releasing locks)
transaction.commit();
Your database has to support the SELECT ... FOR UPDATE syntax (Hibernate is going to use it), and of course this only works if all your clients follow the same convention (they need to lock the same dummy entity).
The Hibernate documentation on transactions and exceptions states that all HibernateExceptions are unrecoverable and that the current transaction must be rolled back as soon as one is encountered. This explains why the code above does not work. Ultimately you should never catch a HibernateException without exiting the transaction and closing the session.
The only real way to accomplish this, it would seem, would be to manage the closing of the old session and the opening of a new one within the method itself. Implementing a findOrCreate method that can participate in an existing transaction and is safe within a distributed environment appears to be impossible with Hibernate, based on what I have found.
The solution is in fact really simple. First perform a select using your name value. If a result is found, return it. If not, create a new one. If the creation fails (with an exception), it is because another client added this very same value between your select and your insert statement, so it is only logical that you get an exception. Catch it, roll back your transaction and run the same code again. Because the row already exists, the select statement will find it and you'll return your object.
You can find an explanation of the optimistic and pessimistic locking strategies in Hibernate here: http://docs.jboss.org/hibernate/core/3.3/reference/en/html/transactions.html
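A hedged sketch of that select-then-insert-with-retry flow (findByName and createNew are assumed helpers, each running in its own short transaction):

public Foo findOrCreate(String name) {
    for (int attempt = 0; attempt < 2; attempt++) {
        Foo existing = findByName(name);      // plain select in a fresh transaction
        if (existing != null) {
            return existing;
        }
        try {
            return createNew(name);           // insert in a fresh transaction
        } catch (ConstraintViolationException e) {
            // another client inserted the same name concurrently; the insert
            // transaction was rolled back, so loop and select again
        }
    }
    throw new IllegalStateException("could not find or create Foo named " + name);
}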
A couple of people have mentioned different parts of the overall strategy. Assuming that you generally expect to find an existing object more often than you create a new one:
search for the existing object by name; if found, return it
start a nested (separate) transaction
try to insert the new object
commit the nested transaction
catch any failure from the nested transaction; if it is anything but a constraint violation, re-throw
otherwise search for the existing object by name and return it
Just to clarify, as pointed out in another answer, the "nested" transaction is actually a separate transaction (many databases don't even support true nested transactions).
Well, here's one way to do it - but it's not appropriate for all situations.
In Foo, remove the "unique = true" attribute on name. Add a timestamp that gets updated on every insert.
In findOrCreate(), don't bother checking if the entity with the given name already exists - just insert a new one every time.
When looking up Foo instances by name, there may be 0 or more with a given name, so you just select the newest one.
The nice thing about this method is that it doesn't require any locking, so everything should run pretty fast. The downside is that your database will be littered with obsolete records, so you may have to do something somewhere else to deal with them. Also, if other tables refer to Foo by its id, then this will screw up those relations.
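A hedged sketch of that approach (the createdAt field and the HQL are assumptions, not from the original post):

// In Foo: drop unique = true on name and add a timestamp set on every insert.
@Column
private Timestamp createdAt;

// Always insert a new row, then pick the newest one for a given name:
Foo newest = (Foo) session.createQuery(
        "from Foo f where f.name = :name order by f.createdAt desc")
    .setParameter("name", name)
    .setMaxResults(1)
    .uniqueResult();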
Maybe you should change your strategy:
First find the user with the name, and only if the user does not exist, create it.
I would try the following strategy:
A. Start a main transaction (at time 1)
B. Start a sub-transaction (at time 2)
Now, any object created after time 1 will not be visible in the main transaction. So when you do
C. Create new race-condition object, commit sub-transaction
D. Handle conflict by starting a new sub-transaction (at time 3) and getting the object from a query (the sub-transaction from point B is now out-of-scope).
only return the object's primary key and then use EntityManager.getReference(..) to obtain the object you will be using in the main transaction. Alternatively, start the main transaction after D; it is not totally clear to me how many race conditions you will have within your main transaction, but the above should allow for n repetitions of B-C-D in a 'large' transaction.
Note that you might want to use multi-threading (one thread per CPU), and then you can probably reduce this issue considerably by using a shared static cache for these kinds of conflicts - and point 2 can be kept 'optimistic', i.e. not doing a .find(..) first.
Edit: For a new transaction, you need an EJB interface method call annotated with transaction type REQUIRES_NEW.
Edit: Double check that the getReference(..) works as I think it does.
