I want to delete all records with a given lineId and then save other records with the same lineId (as a refresh), but after deleting I can't save anything. There is no error, but my records are not in the database.
When I remove the deleting code, everything saves correctly.
public void deleteAndSaveEntities(List<Entity> entities, Long lineId) {
    deleteEntities(lineId);
    saveEntities(entities);
}

private void deleteEntities(Long lineId) {
    List<Entity> entitiesToDelete = entityRepository.findAllByLineId(lineId);
    entityRepository.deleteAll(entitiesToDelete);
}

private void saveEntities(List<Entity> entities) {
    entityRepository.saveAll(entities);
}
Actually, you want to update the entries that have that lineId. Try it as:
First fetch them by find..().
Make the related changes on those entries.
Then save them.
As Thomas mentioned, Hibernate reorders the queries within the transaction for performance reasons and executes the delete after the update.
I would commit the transaction between these two operations.
Add @Transactional over deleteEntities and saveEntities.
But be aware that @Transactional does not work when the method is invoked from within the same object.
You must inject the service into itself and then call the methods on the self reference.
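A minimal sketch of what that could look like (the @Lazy self-injection and Propagation.REQUIRES_NEW are assumptions used to force each step into its own committed transaction; adapt them to your setup):

import java.util.List;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EntityService {

    private final EntityRepository entityRepository;
    private final EntityService self; // proxied self reference, so @Transactional is honored

    public EntityService(EntityRepository entityRepository, @Lazy EntityService self) {
        this.entityRepository = entityRepository;
        this.self = self;
    }

    public void deleteAndSaveEntities(List<Entity> entities, Long lineId) {
        self.deleteEntities(lineId);  // the delete commits before the save starts
        self.saveEntities(entities);
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void deleteEntities(Long lineId) {
        entityRepository.deleteAll(entityRepository.findAllByLineId(lineId));
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void saveEntities(List<Entity> entities) {
        entityRepository.saveAll(entities);
    }
}

Note that the @Transactional methods must be public for the Spring proxy to intercept them.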
I am trying to capture the entity data in the database before the save is executed, for the purpose of creating a shadow copy.
I have implemented the following EntityListener in my Spring application:
public class CmsListener {

    public CmsListener() {
    }

    @PreUpdate
    private void createShadow(CmsModel entity) {
        EntityManager em = BeanUtility.getBean(EntityManager.class);
        CmsModel p = em.find(entity.getClass(), entity.getId());
        System.out.println(entity);
    }
}
The entity parameter does indeed contain the entity object that is to be saved, and injecting the EntityManager using another tool works fine. But for some reason, the entity has already been saved to the database: CmsModel p = em.find(...) returns data identical to what is in entity.
Why is JPA/Hibernate persisting the changes before @PreUpdate is called? How can I prevent that?
I would assume this is because em.find doesn't actually query the database but fetches the object from the cache, so it actually returns the same object that entity refers to (with the changes already applied).
You could check your database log for the query that fetches the data for entity.id to verify this is indeed the case, or you could add a breakpoint in createShadow() and look at the database entry for the entity at the time the method is called, to see for yourself whether the changes have already been applied to the database at that point.
To actually solve your problem and get your shadow copy, you could fetch the object directly from the database via a native query.
Here is an untested example of what this could look like:
public CmsModel fetchCmsModelDirectly(Long id) {
    // a native query bypasses the persistence context and reads the current database state
    Query q = em.createNativeQuery(
            "SELECT cm.id, cm.value_a, cm.value_b FROM CmsModel cm WHERE cm.id = :id",
            CmsModel.class);
    q.setParameter("id", id);
    try {
        return (CmsModel) q.getSingleResult();
    } catch (NoResultException e) {
        return null;
    }
}
Did you check whether the entity is really updated in the database? My suspicion is that the change is only applied to the persistence context (the cache), and when the entity is queried back in the listener, the one from the cache is returned, so they are identical.
This is the default behavior of most ORMs (JPA in this case) to speed up data lookups. The ORM framework takes care of synchronizing the persistence context with the database, usually when the transaction is committed.
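To see that first-level cache behavior in isolation (a minimal sketch outside the listener; em, id, and setValueA are illustrative, not from the original code):

CmsModel managed = em.find(CmsModel.class, id);  // served from the persistence context if already loaded
managed.setValueA("changed");                    // tracked in the cache only, nothing written yet

em.detach(managed);                              // evict the instance from the first-level cache
CmsModel fresh = em.find(CmsModel.class, id);    // this find actually hits the database
// fresh shows the current database state; the change on the detached instance is no longer tracked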
I have an issue where my Spring Boot application's performance is very slow when inserting data.
I am extracting a large subset of data from one database and inserting it into another database.
The following is my entity.
@Entity
@Table(name = "element")
public class Element {

    @Id
    @Column(name = "elementid")
    private long elementid;

    @Column(name = "elementname")
    private String elementname;

    // getter/setter methods...
}
I have configured a JPA repository
public interface ElementRepository extends JpaRepository<Element, Long> {
}
and call the save() method with my objects
@Transactional
public void processData(List<sElement> hostElements) throws DataAccessException {
    List<Element> elements = new ArrayList<Element>();
    for (int i = 0; i < hostElements.size(); i++) {
        Element element = new Element();
        element.setElementid(hostElements.get(i).getElementid());
        element.setElementname(hostElements.get(i).getElementname());
        elements.add(element);
    }
    try {
        elementRepository.save(elements);
    } catch (DataAccessException e) {
        // catch etc...
    }
}
What is happening is that each item takes between 6 and 12 seconds to insert. I have turned on Hibernate trace logging and statistics; when I call the save method, Hibernate performs two queries, a select and an insert, and the select query takes 99% of the overall time.
I have run the select query directly on the database and the result returns in nanoseconds, which leads me to believe it is not an indexing issue; however, I am no DBA.
I have created a load test in my dev environment, and with similar load sizes the overall process time is nowhere near as long as in my prod environment.
Any suggestions?
Instead of creating a list of elements and saving those, save the individual elements. Every now and then do a flush and clear to prevent dirty checking from becoming a bottleneck.
@PersistenceContext
private EntityManager entityManager;

@Transactional
public void processData(List<sElement> hostElements) throws DataAccessException {
    for (int i = 0; i < hostElements.size(); i++) {
        Element element = new Element();
        element.setElementid(hostElements.get(i).getElementid());
        element.setElementname(hostElements.get(i).getElementname());
        elementRepository.save(element);
        if ((i % 50) == 0) {
            entityManager.flush();  // push pending inserts to the database
            entityManager.clear();  // detach them so the persistence context stays small
        }
    }
    entityManager.flush(); // flush the last records
}
You want to flush and clear every x elements (here it is 50, but you might want to find your own best number).
Now, as you are using Spring Boot, you might also want to add some additional properties, like configuring the batch size:
spring.jpa.properties.hibernate.jdbc.batch_size=50
This will, if your JDBC driver supports it, convert 50 single insert statements into 1 large batch insert, i.e. 50 inserts become 1 insert.
See also https://vladmihalcea.com/how-to-batch-insert-and-update-statements-with-hibernate/
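On top of the batch size, two standard Hibernate settings are commonly paired with it so that statements for the same table end up next to each other in the flush queue (the values are just a starting point):

spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true

Also note that Hibernate silently disables JDBC batching for entities whose id uses IDENTITY generation; the entity above assigns its id manually, so batching can apply.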
As @M. Deinum said in a comment, you can improve things by calling flush() and clear() after a certain number of inserts, like below.
int i = 0;
for (Element element : elements) {
    dao.save(element);
    if (++i % 20 == 0) {
        dao.flushAndClear();
    }
}
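Here flushAndClear() is a custom helper on the DAO, not a built-in Spring Data method; a minimal sketch of it, assuming the DAO wraps an EntityManager:

public void flushAndClear() {
    entityManager.flush();  // push the pending inserts to the database
    entityManager.clear();  // detach everything so dirty checking stays cheap
}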
Since loading the entities seems to be the bottleneck and you really just want to do inserts, i.e. you know the entities don't exist in the database yet, you probably shouldn't use the standard save method of Spring Data JPA.
The reason is that it performs a merge, which triggers Hibernate to load an entity that might already exist in the database.
Instead, add a custom method to your repository which does a persist on the entity manager. Since you are setting the id in advance, make sure you have a version property so that Hibernate can determine that this is indeed a new entity.
This should make the select go away.
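A minimal sketch of such a custom repository fragment (ElementInsertRepository and insert are illustrative names, not Spring Data API; Spring Data picks up the Impl class by naming convention):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.transaction.annotation.Transactional;

public interface ElementInsertRepository {
    Element insert(Element entity);
}

public class ElementInsertRepositoryImpl implements ElementInsertRepository {

    @PersistenceContext
    private EntityManager entityManager;

    @Override
    @Transactional
    public Element insert(Element entity) {
        entityManager.persist(entity); // plain INSERT, no merge and no preceding SELECT
        return entity;
    }
}

public interface ElementRepository extends JpaRepository<Element, Long>, ElementInsertRepository {
}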
Other advice given in other answers is worth considering as a second step:
enable batching;
experiment with intermediate flushing and clearing of the session;
save one instance at a time without gathering them in a collection, since the call to merge or persist doesn't actually trigger writing to the database; only the flushing does (this is a simplification, but it shall do for this context).
I am working on a Spring-MVC project in which I am using Hibernate as the ORM and PostgreSQL as our DB. In one of our objects (GroupCanvas) we have a number that is incremented every time the user takes some action; the GroupCanvas object is then updated in the DB, and the number should be unique.
The problem we currently have is that if multiple users take an action in the front-end, some of them get duplicate numbers. We are working on fixing this now, so that later we can implement a sequence and be assured the numbers are unique.
How can I ensure that when I am updating the row, other users wait until the row is updated? I tried LockMode.PESSIMISTIC_WRITE and a few others; none helped.
Code:
@Override
public void incrementNoteCounterForGroupCanvas(int canvasId) {
    Session session = this.sessionFactory.getCurrentSession();
    session.flush();
    Query query = session.createQuery("update GroupCanvas as gc set gc.noteCount=gc.noteCount+1 where gc.mcanvasid=:canvasId");
    query.setParameter("canvasId", canvasId);
    query.executeUpdate();
    session.flush();
}

@Override
public GroupCanvas getCanvasById(int mcanvasid) {
    Session session = this.sessionFactory.getCurrentSession();
    session.flush();
    return (GroupCanvas) session.get(GroupCanvas.class, mcanvasid, LockMode.PESSIMISTIC_WRITE);
}
Both methods are in the DAO, which has the @Transactional annotation; the annotation is present in the service layer as well.
Thank you.
Looking at the method you have posted, the usage of the locking technique is not quite correct. In order for a lock to produce the result you are looking for, the sequence of actions should be similar to the one below (in a nutshell it is similar to double-checked locking, but implemented using DB locks: https://en.wikipedia.org/wiki/Double-checked_locking).
Start the transaction (e.g. @Transactional annotation on your service method).
Retrieve the entity from the database with the PESSIMISTIC_WRITE lock mode (make sure to tell Hibernate to read a fresh copy instead of the one stored in the session cache).
If required, check whether the current value of the target field meets your invariants.
Perform the change/update on the field (e.g. increment its value).
Save the entity (and make sure to flush the value to the DB if you do not want to wait for the auto-flush).
Commit the transaction (done automatically when using @Transactional).
The essential difference of this sequence compared with the posted method is that the update of the property value is performed while your transaction holds a lock on the target entity/DB row, hence preventing other transactions from reading it while your update is in progress.
Hope this helps.
UPDATE:
I believe something like the code snippet below should work as expected:
@Transactional
@Override
public void incrementNoteCounterForGroupCanvas(int canvasId) {
    final Session session = this.sessionFactory.getCurrentSession();
    // acquire a row-level lock; concurrent transactions block here until this one commits
    final GroupCanvas groupCanvas = session.get(GroupCanvas.class, canvasId, LockMode.PESSIMISTIC_WRITE);
    // re-read the state so we do not increment a stale session-cached copy
    session.refresh(groupCanvas);
    groupCanvas.setNoteCount(groupCanvas.getNoteCount() + 1);
    session.saveOrUpdate(groupCanvas);
    session.flush();
}
Normally, if I change an object mapped with @Entity, it is persisted at the end of a transactional method, even if I don't call any save method.
I'm doing a bulk update for performance reasons using JPA's CriteriaUpdate via the EntityManager, but I need to trigger some events in the setters of the objects, so I call the setters without calling any save method.
What I want to know is whether the bulk update is still useful if I change the objects, or whether each object will be persisted individually anyway, even though the bulk update is executed.
PgtoDAO:
public void bulkUpdateStatus(List<Long> pgtos, Long newStatusId) {
    CriteriaBuilder cb = this.manager.getCriteriaBuilder();
    CriteriaUpdate<Pgto> update = cb.createCriteriaUpdate(Pgto.class);
    Root<Pgto> e = update.from(Pgto.class);
    update.set("status", newStatusId);
    update.where(e.get("id").in(pgtos));
    this.manager.createQuery(update).executeUpdate();
}
PgtoService:
@Transactional(readOnly = false)
public int changePgtosStatus(List<Pgto> pgtos, StatusEnum newStatus) {
    ...
    List<Long> pgtoIds = new ArrayList<>();
    for (Pgto pgto : pgtos) {
        // Will Hibernate persist each object here, individually?
        pgto.setStatus(newStatus.id());
        pgtoIds.add(pgto.getId());
    }
    pgtoDao.bulkUpdateStatus(pgtoIds, newStatus.id());
    // I tried setting a different status here on the objects, but it was not persisted
}
Perhaps I should end the connection after the bulk update?
The criteria query and the changed entities are treated separately. The criteria query is just executed, while managed entities (loaded via the entity manager) that were changed are synchronized with the database on transaction commit.
If you would like to prevent this, you have to detach those entities from the entity manager; then the changes will no longer be propagated to the database.
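Based on the loop in changePgtosStatus above, a minimal sketch of that detaching (it assumes the service has an EntityManager injected; entityManager is an assumption, not from the original code):

for (Pgto pgto : pgtos) {
    pgto.setStatus(newStatus.id());  // setters still run, so their events are triggered
    pgtoIds.add(pgto.getId());
    entityManager.detach(pgto);      // no longer managed: no dirty-checking UPDATE on commit
}
pgtoDao.bulkUpdateStatus(pgtoIds, newStatus.id());  // the single bulk UPDATE does all the writing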
I have the following situation:
class Container {
    ...
    String key;
    ...
}

class Item {
    String containerKey;
}
I require a mechanism to automatically delete all items "referencing" a container, something like cascading.
Is there such a mechanism in JPA 2?
No, you'll have to get them all and delete them, or execute a delete query:
delete from Item i where i.containerKey = :containerKey
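For instance, executed through an injected EntityManager (a minimal sketch; the method name is illustrative):

@Transactional
public void deleteItemsForContainer(String containerKey) {
    em.createQuery("delete from Item i where i.containerKey = :containerKey")
      .setParameter("containerKey", containerKey)
      .executeUpdate();
}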
It's not a JPA-related solution, but what I did was create a DB trigger, so that every time a record is deleted from the first table, the deletion from the second one is triggered as well.