Locking in JpaRepository Java Spring Boot

Is it possible in Spring Boot to take a lock on a whole table instead of on individual rows?
So far I have only seen EntityManager.lock(*), but this locks only the given records.
I have a situation in which I have to delete all records of a table and fill it again with records. For this transaction, I want to take a lock on the table so that no other process reads from it.
I am using JpaRepository.

Use entityManager.lock(user, LockModeType.PESSIMISTIC_WRITE);
It will take a lock on the whole table, and no other transaction can read from, update, or delete from the table.
Hope it helps!
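
For context, a minimal sketch of how that call is typically used inside a Spring-managed transaction; the User entity and service are hypothetical, and on Spring Boot 2.x the imports are javax.persistence instead of jakarta.persistence:

import jakarta.persistence.EntityManager;
import jakarta.persistence.LockModeType;
import jakarta.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserLockService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void updateExclusively(Long id) {
        User user = entityManager.find(User.class, id);
        // Typically translates to SELECT ... FOR UPDATE; held until commit or rollback.
        entityManager.lock(user, LockModeType.PESSIMISTIC_WRITE);
        // ... modify user here ...
    }
}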

Related

How to turn off the select before saveAll()?

I need to turn off the select before saveAll() in a Spring Boot application with Hibernate and JPA to boost performance with a high number of records.
I've found a JPQL approach with good performance (delete + save of 10k records in 30s), but I'd like to stay with Hibernate and JPA.
My expectation is that when I run my Java code, I have to deleteAll the records of a table, then saveAll records from another one. When I do that the classic way (deleteAll(), findAll() and then saveAll()), I get low performance during saveAll() because it does a select of all the records in the list before saving them.
I'd like to avoid executing all those selects before saving the records. Is that possible without using EntityManager or EntityManagerFactory?
Code two native queries, a DELETE and an INSERT ... SELECT, using the @Query annotation on your repository interface.
It's the best way to resolve these issues. If you only have to copy records from one table to another, it makes no sense to use JPA and load thousands of objects; using findAll could throw out-of-memory errors.
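
A hedged sketch of that approach; the repository, entity, table, and column names are all hypothetical, and both methods should be called from a single @Transactional service method so the delete and the copy share one transaction:

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;

public interface TargetRepository extends JpaRepository<TargetEntity, Long> {

    @Modifying
    @Query(value = "DELETE FROM target_table", nativeQuery = true)
    void deleteAllNative();

    // Copies the rows entirely inside the database, without loading entities.
    @Modifying
    @Query(value = "INSERT INTO target_table (id, name) SELECT id, name FROM source_table",
           nativeQuery = true)
    void copyFromSource();
}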

Asynchronous inserts in audit table in spring-hibernate

I have a DB table with many columns and associated entities.
Updates are supported on some of the columns. I need to maintain a history of the data that is overwritten on update/delete in a separate table. The options I have considered are below:
1. Hibernate Envers: the easiest to use, but the inserts into the audit table are synchronous and become part of the actual transaction, which is not desirable for my use case.
2. Debezium: while it does make the audit inserts asynchronous, it looks like overkill for my use case, since it requires installing several services (Kafka, ZooKeeper) and introduces multiple points of failure.
3. JPA listeners: I can use these to capture the data being updated/deleted and call an async insert into the history table (a sketch of this option follows below). The only issue I see here is that I'll have to replicate the actual entity classes' code in the history entities.
Please suggest a solution I can go ahead with. Thanks.
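
For reference, a minimal sketch of what option 3 could look like; all class names here are hypothetical, and the static lookup is needed because JPA, not Spring, instantiates entity listeners:

import jakarta.persistence.PreRemove;
import jakarta.persistence.PreUpdate;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

public class AuditEntityListener {

    @PreUpdate
    @PreRemove
    public void capture(Object entity) {
        // SpringContext is a hypothetical static holder for the ApplicationContext.
        SpringContext.getBean(HistoryWriter.class).record(entity);
    }
}

@Service
class HistoryWriter {

    @Async // runs on a separate executor, outside the caller's transaction
    public void record(Object entity) {
        // map the entity to its history row and insert it here
    }
}

The listener would be attached to each audited entity with @EntityListeners(AuditEntityListener.class), and @Async only takes effect if a configuration class declares @EnableAsync.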

A native query to trigger a table lock using SQL (Postgres)

I would like to make sure that the whole table is locked during my JPA transaction.
As far as I could figure out, there is no JPA lock mode that locks the whole table.
My question is: what does a proper locking statement look like, and how can I combine it with the entity manager's merge or persist operations?
Thanks to the comments, the solution was the following statement:
getEntityManager().createNativeQuery("LOCK TABLE schemaname.tablename").executeUpdate();
The lock is released when the transaction (including the one from Hibernate; it is actually the same transaction) is over.
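
A hedged sketch of combining that lock with persist in a Spring-managed transaction; the entity name is hypothetical, and in Postgres a bare LOCK TABLE defaults to ACCESS EXCLUSIVE mode anyway:

import java.util.List;
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TableRewriteService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void replaceAll(List<MyEntity> fresh) {
        // Blocks all other readers and writers until this transaction ends.
        entityManager.createNativeQuery("LOCK TABLE schemaname.tablename IN ACCESS EXCLUSIVE MODE")
                     .executeUpdate();
        entityManager.createQuery("DELETE FROM MyEntity").executeUpdate();
        fresh.forEach(entityManager::persist);
        // the lock is released automatically at commit or rollback
    }
}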

Managing history records in a database

I have a web project that uses a database to store data used to generate tasks; remote machines process those tasks, altering the records and storing new data. My problem is that I have to store all the changes on each table, but I don't need all of that information. For example, a table A could have 5 fields but I only need 2 for historical purposes. Another table B could have 3 fields and I would have to add another one (a date, for example). Also, I don't need the changes made during daily task generation, only the most recent one.
What is the best way to maintain a change history? Someone told me that a good idea is having two tables: the A (B) table and another one called A_history (B_history) with the needed fields. This is actually what I'm doing, using triggers to insert into the history tables, but I don't feel comfortable with this approach. My project uses Spring (Spring Data, Hibernate and JPA), and if I change the DB (currently MySQL) I'd have to migrate the triggers. Is there a good way to manage history records? The tables could be generated with Hibernate/JPA annotations.
If I keep the two-tables approach, can I add a method to the repository to fetch rows from the current table and the history table at once?
For this purpose there is a dedicated project, Hibernate Envers. See the official documentation here. Just configure it, annotate the necessary properties with the @Audited annotation, and that's all. No need for DB triggers.
One pitfall: if you want a record for each delete operation, you need to use the Session.delete(entity) way instead of an HQL "delete ..." statement.
EDIT: Also take a look at the native auditing support of Spring Data JPA.
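
For illustration, a minimal Envers sketch; the entity is hypothetical and the hibernate-envers dependency must be on the classpath:

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.hibernate.envers.Audited;

@Entity
@Audited // every insert/update/delete is versioned in the Customer_AUD table
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
}

Past revisions can then be read back with AuditReaderFactory.get(entityManager) instead of a hand-written history repository.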
I am not a database expert, but what I have seen done boils down to a few approaches.
1) They add a trigger to the transactional table that copies inserts and updates, but not deletes, to a history table. This means any queries that need to include history can be run against the history table alone, since all the current info is there too (see the repository sketch after this list for querying both tables at once).
a) They can tag each entry in the history table with a time and date and keep track of all the states of the original records.
b) They can keep track of only the current state of the original record, which then settles when the original is deleted.
2) They have a periodic task that goes around and copies data marked as deletable into the history table, then deletes that data from the transactional table. Any queries against the transactional table have to make sure to ignore the deletable rows. Any queries that need history have to search both tables and merge the results.
3) If the volume of data isn't too large, they just leave everything in one table and mark some entries as historical. Queries have to ignore historical rows; queries that include history are easy. This may slow down database access as the table grows to include many unused rows, but that can sometimes be ameliorated by clever use of indexes.
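
As for the follow-up question about fetching current and history rows in one repository call: with the two-table approach, a native UNION query works. A hedged sketch, with table, column, and entity names all hypothetical:

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

public interface RecordRepository extends JpaRepository<Record, Long> {

    // Reads the live table and its history table in one round trip.
    @Query(value = "SELECT id, value, modified_at FROM record "
                 + "UNION ALL "
                 + "SELECT id, value, modified_at FROM record_history",
           nativeQuery = true)
    List<Object[]> findCurrentAndHistory();
}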

How to make updates faster with Hibernate when using a huge number of records

I am facing an issue when using Hibernate's update (Session.update()) with a huge number of records: it becomes very slow. There is no issue with the insert (Session.save()) portion. Is there any way to do the updates when we are updating lakhs (hundreds of thousands) of records? Is there any way to tune SQL Server so that the updates become faster? When we add separate indexes to all the primary fields, the delete portion takes time. Is there a better way to tune SQL Server so that it performs well with insert, delete, and update?
Thank you,
Saif.
Do a batch update instead of an individual update for each record. This way you will only hit the database once for all the records.
When you do a save, only the data is written to the database, whereas when you update a record it first has to perform the search operation and then update the record; that is why you are facing issues on update and not on save. When you are handling a huge number of records you can use Hibernate's batch processing to update them. Here is a good link for batch processing in Hibernate from TutorialsPoint:
http://www.tutorialspoint.com/hibernate/hibernate_batch_processing.htm
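
A hedged sketch of that pattern, along the lines of the TutorialsPoint example; the Employee entity and sessionFactory are assumed, and hibernate.jdbc.batch_size should be configured (e.g. to 50) for the batching to take effect:

import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.Session;
import org.hibernate.Transaction;

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

ScrollableResults rows = session.createQuery("FROM Employee").scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
while (rows.next()) {
    Employee employee = (Employee) rows.get(0); // on Hibernate 6 this is rows.get()
    employee.setSalary(employee.getSalary() + 100); // hypothetical change
    if (++count % 50 == 0) {
        session.flush(); // push the batched UPDATE statements to the database
        session.clear(); // detach processed entities so memory stays flat
    }
}

tx.commit();
session.close();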
There may be other solutions to this, but one way I know is:
Whenever you save or update an instance through the session (e.g. session.save(), session.update(), session.saveOrUpdate(), etc.), Hibernate also updates the instance's FK associations.
So if your POJO has multiple FK associations, it will fire queries on those tables as well.
So instead of updating the instance this way, I would suggest using HQL (if it applies to your requirement) to save or update the instance.
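
A hedged sketch of such an HQL bulk update, with hypothetical entity and property names; it executes as a single UPDATE statement and never loads the entities or their associations:

// assumes an open Session and an active transaction
int updated = session.createQuery(
        "UPDATE Employee e SET e.salary = e.salary + :raise WHERE e.department = :dept")
    .setParameter("raise", 100)
    .setParameter("dept", "SALES")
    .executeUpdate();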
