Let's suppose we have this situation:
We have Spring Data configured in the standard way: there is a Repository object, an Entity object, and everything works well.
Now, for some complex reasons, I have to use the EntityManager (or JdbcTemplate, or whatever sits at a lower level than Spring Data) directly to update the table associated with my Entity, with a native SQL query. So I'm not using the Entity object, but simply doing a database update manually on the table backing the entity (more precisely, the table from which the entity gets its values; see below).
The reason is that I had to bind my Spring Data Entity to a MySQL view that makes a UNION of multiple tables, not directly to the table I need to update.
What happens is:
In a functional test, I call the "manual" update method (on the table from which the MySQL view is created) as previously described (through the EntityManager), and if I then make a simple Repository.findOne(objectId), I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
Is there a way to "synchronize" objects (or force some refresh) out of the box in Spring Data? Or am I asking for a miracle?
I'm not being ironic; maybe I'm just not expert enough and this is my ignorance. If so, please explain why, and (if you like) share some deeper knowledge about this framework.
If I make a simple Repository.findOne(objectId) I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
The first-level cache is active for the duration of a session. Any entity previously retrieved in the context of a session will be served from the first-level cache unless there is a reason to go back to the database.
Is there a reason to go back to the database after your SQL update? Well, as the book Pro JPA 2 notes (p. 199) regarding bulk update statements (whether via JPQL or SQL):
The first issue for developers to consider when using these [bulk update] statements
is that the persistence context is not updated to reflect the results
of the operation. Bulk operations are issued as SQL against the
database, bypassing the in-memory structures of the persistence
context.
which is what you are seeing. That is why you need to call refresh to force the entity to be reloaded from the database, as the persistence context is not aware of any potential modifications.
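As a minimal sketch of that refresh call (MyEntity is a placeholder; repository, entityManager, and objectId are assumed from the question):

MyEntity entity = repository.findOne(objectId); // may return the stale, cached instance
entityManager.refresh(entity);                  // forces a reload from the database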
The book also notes the following about using Native SQL statements (rather than JPQL bulk update):
CAUTION: Native SQL update and delete operations should not be executed on tables mapped by an entity. The JPQL operations tell the provider what cached entity state must be invalidated in order to remain consistent with the database. Native SQL operations bypass such checks and can quickly lead to situations where the in-memory cache is out of date with respect to the database.
Essentially then, should you have a second-level cache configured, updating any entity currently in the cache via a native SQL statement is likely to result in stale data in the cache.
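If a second-level cache is in play, one hedged option (a sketch; MyEntity and entityId are placeholders) is to evict the affected entries through the standard javax.persistence.Cache API after the native update:

entityManagerFactory.getCache().evict(MyEntity.class, entityId); // drop one stale entry
// or, more bluntly:
entityManagerFactory.getCache().evictAll();                      // drop the whole shared cache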
In Spring Boot's JpaRepository:
If our modifying query changes entities contained in the persistence context, then this context becomes outdated.
To fetch the entities from the database with their latest state, use @Modifying(clearAutomatically = true).
The @Modifying annotation has a clearAutomatically attribute which defines whether the underlying persistence context should be cleared after executing the modifying query.
Example:
@Modifying(clearAutomatically = true)
@Query("UPDATE NetworkEntity n SET n.network_status = :network_status WHERE n.network_id = :network_id")
int expireNetwork(@Param("network_id") Integer network_id, @Param("network_status") String network_status);
Based on the way you described your usage, fetching from the repository should retrieve the updated object without the need to refresh it, as long as the method which used the EntityManager to merge is annotated with @Transactional.
Here's a sample test:
@DirtiesContext(classMode = ClassMode.AFTER_CLASS)
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ApplicationConfig.class)
@EnableJpaRepositories(basePackages = "com.foo")
public class SampleSegmentTest {

    @Resource
    SampleJpaRepository segmentJpaRepository;

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    @Test
    public void test() {
        Segment segment = new Segment();
        ReflectionTestUtils.setField(segment, "value", "foo");
        ReflectionTestUtils.setField(segment, "description", "bar");
        segmentJpaRepository.save(segment);

        assertNotNull(segment.getId());
        assertEquals("foo", segment.getValue());
        assertEquals("bar", segment.getDescription());

        ReflectionTestUtils.setField(segment, "value", "foo2");
        entityManager.merge(segment);

        Segment updatedSegment = segmentJpaRepository.findOne(segment.getId());
        assertEquals("foo2", updatedSegment.getValue());
    }
}
The only link I found that's close to what I am experiencing is this one :
How do you synchronize the id of a java object to its associated db row?
and there's not much of a solution in it.
My problem is that my Java objects aren't updated after being added to my database, despite the .commit():
em.getTransaction().begin();
System.out.println(eleve.getID());
em.persist(eleve);
em.getTransaction().commit();
System.out.println(eleve.getID());
which refers to this class
public class Eleve {
    private String _nom;
    private String _prenom;
    private float _ptsMerite;

    @Id
    private int _IDEleve;

    // ... getters, setters, etc.
}
and yields this output:
0
0
I think I've done everything properly when it comes to persistence, since it does create the object in the database (MySQL) with correct IDs, which I've set to auto-increment.
I am using javax.persistence for everything (annotations and such).
Did you try adding the @GeneratedValue annotation to your ID field?
There are four possible strategies you can choose from:
GenerationType.AUTO: The JPA provider will choose an appropriate strategy for the underlying database.
GenerationType.IDENTITY: Relies on an auto-increment column in your database.
GenerationType.SEQUENCE: Relies on a database sequence
GenerationType.TABLE: Uses a generator table in the database.
More info: https://www.baeldung.com/jpa-strategies-when-set-primary-key
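For the Eleve class from the question, a minimal sketch (IDENTITY is assumed here because the column is a MySQL auto-increment column):

@Entity
public class Eleve {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private int _IDEleve;

    // ... other fields as in the question
}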
If you ever move to a more powerful framework, it is likely to manage your transactions for you (CMT), so you can't (or don't want to) commit every time you want to access the ID of a new entity. In these cases you can use EntityManager#flush to synchronize the EntityManager with the database.
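A sketch of that approach, reusing the eleve example from the question (with IDENTITY generation, the insert, and therefore the ID assignment, is forced by the flush):

em.persist(eleve);
em.flush();                        // synchronizes with the database without committing
System.out.println(eleve.getID()); // the generated ID is now populated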
For example, I have a method in my CRUD interface which deletes a user from the database:
public interface CrudUserRepository extends JpaRepository<User, Integer> {

    @Transactional
    @Modifying
    @Query("DELETE FROM User u WHERE u.id = :id")
    int delete(@Param("id") int id, @Param("userId") int userId);
}
This method will only work with the @Modifying annotation. But why is the annotation needed here? Why can't Spring analyze the query and understand that it is a modifying query?
CAUTION!
Using @Modifying(clearAutomatically = true) will drop any pending updates on the managed entities in the persistence context. Spring states the following:
Doing so triggers the query annotated to the method as an updating
query instead of selecting one. As the EntityManager might contain
outdated entities after the execution of the modifying query, we do
not automatically clear it (see the JavaDoc of EntityManager.clear()
for details), since this effectively drops all non-flushed changes
still pending in the EntityManager. If you wish the EntityManager to
be cleared automatically, you can set the @Modifying annotation’s
clearAutomatically attribute to true.
Fortunately, starting from Spring Boot 2.0.4.RELEASE, Spring Data added a flushAutomatically flag (https://jira.spring.io/browse/DATAJPA-806) to automatically flush any managed entities in the persistence context before executing the modifying query; see the reference documentation: https://docs.spring.io/spring-data/jpa/docs/2.0.4.RELEASE/api/org/springframework/data/jpa/repository/Modifying.html#flushAutomatically
So the safest way to use @Modifying is:
@Modifying(clearAutomatically = true, flushAutomatically = true)
What happens if we don't use those two flags?
Consider the following code:
public interface UserRepository extends JpaRepository<User, Integer> {

    @Modifying
    @Query("DELETE FROM User u WHERE u.active = false")
    void deleteInActiveUsers();
}
Scenario 1: why flushAutomatically

service {
    User johnUser = userRepo.findById(1); // stored in the first-level cache
    johnUser.setActive(false);
    repo.save(johnUser);
    repo.deleteInActiveUsers(); // BAM, it won't delete JOHN right away
    // JOHN still exists, since john with active = false was not
    // flushed to the database when @Modifying kicked in.
    // So if after the `deleteInActiveUsers` line you called a native
    // query or started a new transaction, in both cases john
    // would not have been deleted, which can lead to faulty business logic.
}
Scenario 2: why clearAutomatically

In the following, assume johnUser.active is already false:

service {
    User johnUser = userRepo.findById(1); // stored in the first-level cache
    repo.deleteInActiveUsers(); // you think that john is deleted now
    System.out.println(userRepo.findById(1).isPresent()); // TRUE!!!
    System.out.println(userRepo.count()); // 1 !!!
    // JOHN still exists, since in this transaction's persistence context
    // John's object was not cleared when the @Modifying query executed;
    // it will still be fetched from the first-level cache.
    // `clearAutomatically` takes care of clearing the modified objects
    // from the current transaction's persistence context.
}
So if, in the same transaction, you are working with modified objects before or after the line which does the @Modifying query, then use clearAutomatically and flushAutomatically; if not, you can skip these flags.
BTW, this is another reason why you should always put the @Transactional annotation on the service layer, so that you have a single persistence context for all your managed entities in the same transaction.
Since the persistence context is bound to the Hibernate session, you need to know that a session can contain a couple of transactions; see this answer for more info: https://stackoverflow.com/a/5409180/1460591
The way Spring Data works is that it joins transactions together (known as transaction propagation) into one transaction (default propagation: REQUIRED); see this answer for more info: https://stackoverflow.com/a/25710391/1460591
To tie things together: if you have multiple isolated transactions (e.g. no @Transactional annotation on the service), you have multiple sessions and, following the way Spring Data works, multiple persistence contexts (i.e. first-level caches). That means that, even with flushAutomatically, an entity deleted or modified in one persistence context might already be fetched and cached in another transaction's persistence context, which can lead to wrong business decisions based on stale or unsynchronized data.
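To make that concrete, here is a hedged sketch of a service-level transaction (UserService and deactivateAndPurge are illustrative names, not from the original posts):

@Service
public class UserService {

    private final UserRepository userRepo;

    public UserService(UserRepository userRepo) {
        this.userRepo = userRepo;
    }

    @Transactional // one transaction, hence one persistence context, for everything below
    public void deactivateAndPurge(int id) {
        User john = userRepo.findById(id).orElseThrow(IllegalStateException::new);
        john.setActive(false);
        userRepo.save(john);
        userRepo.deleteInActiveUsers(); // consistent when flushAutomatically/clearAutomatically are set
    }
}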
This will trigger the query annotated to the method as an updating query instead of a selecting one. As the EntityManager might contain outdated entities after the execution of the modifying query, we automatically clear it (see the JavaDoc of EntityManager.clear() for details). This will effectively drop all non-flushed changes still pending in the EntityManager. If you don't wish the EntityManager to be cleared automatically, you can set the @Modifying annotation's clearAutomatically attribute to false.
For further detail you can follow this link:
http://docs.spring.io/spring-data/jpa/docs/1.3.4.RELEASE/reference/html/jpa.repositories.html
Queries that require a @Modifying annotation include INSERT, UPDATE, DELETE, and DDL statements.
Adding the @Modifying annotation indicates that the query is not a SELECT query.
When you use only the @Query annotation, you should use SELECT queries; with the @Modifying annotation on the method, you can use INSERT, DELETE, and UPDATE queries.
I'm trying to calculate the modifications in an entity before updating it.
1. Display HTML form
2. On submit:
   2.1. Start transaction
   2.2. Load entity
   2.3. Apply form data to entity
   2.4. Calculate changes
   2.5. Persist entity
Steps 2.1 through 2.3 are managed by my MVC framework (Spring MVC), so I never actually have a reference to the original entity without the changes.
So what I'm currently trying to do is the following:
// transaction is already started
public Modifications store(Entity updated) {
    Entity original = entityManager.find(Entity.class, updated.getId());
    Modifications mods = calculateModifications(original, updated);
    entityManager.merge(updated);
    return mods;
}
But unfortunately the entity I receive from the entity manager is the same (read ==) as the updated one.
Here's the question: How do I force the entity manager to load the entity again from the database without detaching my updated entity?
I am using Hibernate as my persistence provider, but I would like to keep this as provider-agnostic as possible.
Here is an SO answer similar to your question: Hibernate: comparing current & previous record
Use the evict method to remove the old instance from the session cache, then compare it in calculateModifications against the new instance you get from the find method.
BTW, you can use detach if you are using the EntityManager. The code looks like this:

entityManager.detach(newOne);
Order oldOne = orderRepository.findById(orderId);
// compare newOne and oldOne
...
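Putting that together with the store() method from the question, a provider-agnostic sketch (Entity, Modifications, and calculateModifications are the asker's own types):

public Modifications store(Entity updated) {
    entityManager.detach(updated);  // detach so find() goes back to the database
    Entity original = entityManager.find(Entity.class, updated.getId());
    Modifications mods = calculateModifications(original, updated);
    entityManager.merge(updated);   // re-attach and apply the changes
    return mods;
}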
I have a PostgreSQL 8.4 database with some tables and views which are essentially joins on some of the tables. I used NetBeans 7.2 (as described here) to create REST-based services derived from those views and tables and deployed them to a Glassfish 3.1.2.2 server.
There is another process which asynchronously updates contents in some of the tables used to build the views. I can directly query the views and tables and see that these changes have occurred correctly. However, when pulled from the REST-based services, the values are not the same as those in the database. I am assuming this is because JPA has cached local copies of the database contents on the Glassfish server and needs to refresh the associated entities.
I have tried adding a couple of methods to the AbstractFacade class NetBeans generates:
public abstract class AbstractFacade<T> {
    private Class<T> entityClass;
    private String entityName;
    private static boolean _refresh = true;

    public static void refresh() { _refresh = true; }

    public AbstractFacade(Class<T> entityClass) {
        this.entityClass = entityClass;
        this.entityName = entityClass.getSimpleName();
    }

    private void doRefresh() {
        if (_refresh) {
            EntityManager em = getEntityManager();
            em.flush();
            for (EntityType<?> entity : em.getMetamodel().getEntities()) {
                if (entity.getName().contains(entityName)) {
                    try {
                        em.refresh(entity);
                        // log success
                    }
                    catch (IllegalArgumentException e) {
                        // log failure ... typically complains entity is not managed
                    }
                }
            }
            _refresh = false;
        }
    }

    ...
}
I then call doRefresh() from each of the find methods NetBeans generates. What normally happens is that an IllegalArgumentException is thrown, stating something like Can not refresh not managed object: EntityTypeImpl@28524907:MyView [ javaType: class org.my.rest.MyView descriptor: RelationalDescriptor(org.my.rest.MyView --> [DatabaseTable(my_view)]), mappings: 12].
So I'm looking for suggestions on how to correctly refresh the entities associated with the views so they are up to date.
UPDATE: It turns out my understanding of the underlying problem was not correct. It is somewhat related to another question I posted earlier; namely, the view had no single field which could be used as a unique identifier. NetBeans required that I select an ID field, so I just chose one part of what should have been a multi-part key. This exhibited the behavior that all records with a particular ID field appeared identical, even though the database held records with the same ID field whose remaining fields differed. JPA didn't look any further than what I told it was the unique identifier and simply pulled the first record it found.
I resolved this by adding a unique identifier field (I never was able to get the multi-part key to work properly).
I recommend adding an @Startup @Singleton class that establishes a JDBC connection to the PostgreSQL database and uses LISTEN and NOTIFY to handle cache invalidation.
Update: Here's another interesting approach, using pgq and a collection of workers for invalidation.
Invalidation signalling
Add a trigger on the table that's being updated that sends a NOTIFY whenever an entity is updated. On PostgreSQL 9.0 and above this NOTIFY can contain a payload, usually a row ID, so you don't have to invalidate your entire cache, just the entity that has changed. On older versions where a payload isn't supported you can either add the invalidated entries to a timestamped log table that your helper class queries when it gets a NOTIFY, or just invalidate the whole cache.
Your helper class now LISTENs on the NOTIFY events the trigger sends. When it gets a NOTIFY event, it can invalidate individual cache entries (see below), or flush the entire cache. You can listen for notifications from the database with PgJDBC's listen/notify support. You will need to unwrap any connection pooler managed java.sql.Connection to get to the underlying PostgreSQL implementation so you can cast it to org.postgresql.PGConnection and call getNotifications() on it.
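As a rough sketch of the PgJDBC side (the channel name entity_changed, the polling interval, and the method wrapper are illustrative assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

void listenForInvalidations(String url, String user, String password) throws Exception {
    try (Connection conn = DriverManager.getConnection(url, user, password)) {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("LISTEN entity_changed"); // must match the channel used by the trigger
        }
        PGConnection pgConn = conn.unwrap(PGConnection.class);
        while (!Thread.currentThread().isInterrupted()) {
            PGNotification[] notifications = pgConn.getNotifications();
            if (notifications != null) {
                for (PGNotification n : notifications) {
                    String rowId = n.getParameter(); // payload: changed row ID (PostgreSQL 9.0+)
                    // evict the matching cache entry here (see "Cache levels" below)
                }
            }
            Thread.sleep(500); // simple polling; production code would block or use a timer
        }
    }
}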
As an alternative to LISTEN and NOTIFY, you could poll a change log table on a timer, with a trigger on the problem table appending changed row IDs and change timestamps to the change log table. This approach is portable except for needing a different trigger for each DB type, but it's inefficient and less timely: it requires frequent polling and still has a time delay that the listen/notify approach does not. In PostgreSQL you can use an UNLOGGED table to reduce the costs of this approach a little.
Cache levels
EclipseLink/JPA has a couple of levels of caching.
The 1st level cache is at the EntityManager level. If an entity is attached to an EntityManager by persist(...), merge(...), find(...), etc, then the EntityManager is required to return the same instance of that entity when it is accessed again within the same session, whether or not your application still has references to it. This attached instance won't be up-to-date if your database contents have since changed.
The 2nd level cache, which is optional, is at the EntityManagerFactory level and is a more traditional cache. It isn't clear whether you have the 2nd level cache enabled. Check your EclipseLink logs and your persistence.xml. You can get access to the 2nd level cache with EntityManagerFactory.getCache(); see Cache.
@thedayofcondor showed how to flush the 2nd-level cache with:
em.getEntityManagerFactory().getCache().evictAll();
but you can also evict individual objects with the evict(java.lang.Class cls, java.lang.Object primaryKey) call:
em.getEntityManagerFactory().getCache().evict(theClass, thePrimaryKey);
which you can use from your @Startup @Singleton NOTIFY listener to invalidate only those entries that have changed.
The 1st level cache isn't so easy, because it's part of your application logic. You'll want to learn about how the EntityManager, attached and detached entities, etc work. One option is to always use detached entities for the table in question, where you use a new EntityManager whenever you fetch the entity. This question:
Invalidating JPA EntityManager session
has a useful discussion of handling invalidation of the entity manager's cache. However, it's unlikely that an EntityManager cache is your problem, because a RESTful web service is usually implemented using short EntityManager sessions. This is only likely to be an issue if you're using extended persistence contexts, or if you're creating and managing your own EntityManager sessions rather than using container-managed persistence.
You can either disable caching entirely (see: http://wiki.eclipse.org/EclipseLink/FAQ/How_to_disable_the_shared_cache%3F ), though be prepared for a fairly large performance loss.
Otherwise, you can clear the cache programmatically with:
em.getEntityManagerFactory().getCache().evictAll();
You can map this to a servlet so you can call it externally; this is better if your database is modified externally only seldom and you just want to be sure JPA will pick up the new version.
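A hedged sketch of such a servlet (the URL pattern and class name are made up for illustration):

import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/admin/clear-cache")
public class ClearCacheServlet extends HttpServlet {

    @PersistenceUnit
    private EntityManagerFactory emf;

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        emf.getCache().evictAll(); // drop the whole shared (2nd-level) cache
        resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }
}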
Just a thought, but how do you obtain your EntityManager/Session/whatever?
If you queried the entity in one session, it will be detached in the next one, and you will have to merge it back into the persistence context to get it managed again.
Trying to work with detached entities may result in those not-managed exceptions; you should re-query the entity, or you could try merge (or similar methods).
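For example (a one-line sketch; detachedEntity is a placeholder):

Entity managed = entityManager.merge(detachedEntity); // merge returns the managed copy; keep using the return value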
JPA doesn't do any shared (second-level) caching by default; you have to explicitly configure it. I believe this is a side effect of the architectural style you have chosen: REST. I think the caching is happening at the web servers, proxy servers, etc. I suggest you read this and debug further.
I have a couple of objects that are mapped to tables in a database using Hibernate, BatchTransaction and Transaction. BatchTransaction's table (batch_transactions) has a foreign key reference to transactions, named transaction_id.
In the past I have used a batch runner that used internal calls to run the batch transactions and complete the reference from BatchTransaction to Transaction once the transaction is complete. After a Transaction has been inserted, I just call batchTransaction.setTransaction(txn), so I have a @ManyToOne mapping from BatchTransaction to Transaction.
I am changing the batch runner so that it executes its transactions through a Web service. The ID of the newly inserted Transaction will be returned by the service and I'll want to update transaction_id in BatchTransaction directly (rather than using the setter for the Transaction field on BatchTransaction, which would require me to load the newly inserted item unnecessarily).
It seems like the most logical way to do it is to use SQL rather than Hibernate, but I was wondering if there's a more elegant approach. Any ideas?
Here's the basic mapping.
BatchQuery.java

@Entity
@Table(name = "batch_queries")
public class BatchQuery
{
    @ManyToOne
    @JoinColumn(name = "query_id")
    public Query getQuery()
    {
        return mQuery;
    }
}

Query.java

@Entity
@Table(name = "queries")
public class Query
{
}
The idea is to update the query_id column in batch_queries without setting the "query" property on a BatchQuery object.
Using a direct SQL update, or an HQL update, is certainly feasible.
Not seeing the full problem, it looks to me like you might be making a modification to your domain model that's worth documenting in the model itself: you may be moving to a BatchTransaction that holds just the transaction ID as a member, rather than the full Transaction.
If, in other activities, BatchTransaction will still need to hydrate that Transaction, I'd consider adding a separate mapping for the transaction ID and making that the managing mapping (mark the Transaction association insertable = false, updatable = false).
If BatchTransaction will no longer be concerned with the full Transaction, just remove that association after adding the transaction ID field. A sketch of the dual-mapping approach follows.
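A hedged sketch of that dual mapping, reusing the BatchQuery/Query names from the post (field and accessor names are illustrative):

@Entity
@Table(name = "batch_queries")
public class BatchQuery
{
    // writable scalar mapping: set the FK directly from the web service response
    @Column(name = "query_id")
    private Long queryId;

    // read-only association over the same column, kept for hydration when needed
    @ManyToOne
    @JoinColumn(name = "query_id", insertable = false, updatable = false)
    private Query query;

    public Long getQueryId() { return queryId; }
    public void setQueryId(Long queryId) { this.queryId = queryId; }

    public Query getQuery() { return query; }
}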
As you have written, we can use SQL to solve the problem above, but I would suggest not updating primary keys via SQL.
Since you are changing the key, you are effectively creating an altogether new object. For that, you can first delete the existing object with the previous key, and then insert a new object with the updated key (in your case, transaction_id).