How does pessimistic locking work in Hibernate?

I'm currently using Hibernate 6 and H2. I want to safely increment the count field of an Entity instance, using more than one thread at a time just to make sure that the transaction actually locks my entity. But when I run this code, the resulting count column in H2 isn't 10, but some random number under 10. What am I missing about pessimistic locking?
for (int a = 0; a < 5; a++) {
    executorService.execute(() -> {
        Session innerSession = sessionFactory.openSession();
        Transaction innerTransaction = innerSession.beginTransaction();
        Entity entity = innerSession.get(Entity.class, id, LockMode.PESSIMISTIC_WRITE);
        entity.setCount(entity.getCount() + 1);
        innerSession.flush();
        innerTransaction.commit();
        innerSession.close();
    });
    executorService.execute(() -> {
        Session innerSession = sessionFactory.openSession();
        Transaction innerTransaction = innerSession.beginTransaction();
        Entity entity = innerSession.get(Entity.class, id, LockMode.PESSIMISTIC_WRITE);
        entity.setCount(entity.getCount() + 1);
        innerSession.flush();
        innerTransaction.commit();
        innerSession.close();
    });
}
Entire method:
Long id;
SessionFactory sessionFactory;
Session session;
Transaction transaction;
ExecutorService executorService = Executors.newFixedThreadPool(4);
Properties properties = new Properties();
Configuration configuration = new Configuration();
properties.put(AvailableSettings.URL, "jdbc:h2:tcp://localhost/~/test");
properties.put(AvailableSettings.USER, "root");
properties.put(AvailableSettings.PASS, "root");
properties.put(AvailableSettings.DIALECT, H2Dialect.class.getName());
properties.put(AvailableSettings.SHOW_SQL, true);
properties.put(AvailableSettings.HBM2DDL_AUTO, Action.CREATE.getExternalHbm2ddlName());
// classes are provided by another library
entityClasses.forEach(configuration::addAnnotatedClass);
sessionFactory = configuration.buildSessionFactory(new StandardServiceRegistryBuilder().applySettings(properties).build());
session = sessionFactory.openSession();
transaction = session.beginTransaction();
// initial value of count field is 0
id = (Long) session.save(new Entity());
transaction.commit();
for (int a = 0; a < 5; a++) {
    executorService.execute(() -> {
        Session innerSession = sessionFactory.openSession();
        Transaction innerTransaction = innerSession.beginTransaction();
        Entity entity = innerSession.get(Entity.class, id, LockMode.PESSIMISTIC_WRITE);
        entity.setCount(entity.getCount() + 1);
        innerSession.flush();
        innerTransaction.commit();
        innerSession.close();
    });
    executorService.execute(() -> {
        Session innerSession = sessionFactory.openSession();
        Transaction innerTransaction = innerSession.beginTransaction();
        Entity entity = innerSession.get(Entity.class, id, LockMode.PESSIMISTIC_WRITE);
        entity.setCount(entity.getCount() + 1);
        innerSession.flush();
        innerTransaction.commit();
        innerSession.close();
    });
}
executorService.shutdown();
executorService.awaitTermination(5, TimeUnit.SECONDS);
session.clear(); // prevent reading from cache
System.out.println(session.get(Entity.class, id).getCount()); // printed result doesn't match 10, same for reading from H2 browser interface
session.close();

The answer was simple: I just needed to upgrade Hibernate to 6.0.0.Alpha9 (higher versions require Java 11 to compile, and I'm using 8). It seems there was a bug in 6.0.0.Alpha6, which I was using previously; there was no problem with H2 1.4.200. From the Hibernate SQL logs I understood that the main problem in 6.0.0.Alpha6 was an incorrect select query for a transaction with a pessimistic lock: it was just a regular select, while 6.0.0.Alpha9 already uses select ... for update, which prevents other transactions from reading this row until commit.
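As a side note, the same increment can be made atomic without relying on entity-level locking at all, by pushing the read-modify-write into the database as a single bulk update. A minimal sketch, assuming the same Entity mapping and id as in the question:

Session s = sessionFactory.openSession();
Transaction tx = s.beginTransaction();
// A single UPDATE statement is atomic on the database side,
// so no pessimistic lock on the entity row is needed for a plain counter.
s.createQuery("update Entity e set e.count = e.count + 1 where e.id = :id")
        .setParameter("id", id)
        .executeUpdate();
tx.commit();
s.close();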

Related

Batch insert using spring data

I have 60K records to insert, and I want to commit them in batches of 100.
Below is my code:
for (int i = 0; i < 60000; i++) {
    entityRepo.save(entity);
    if (i % 100 == 0) {
        entityManager.flush();
        entityManager.clear();
        LOG.info("Committed = " + i);
    }
}
entityManager.flush();
entityManager.clear();
I keep checking the database whenever I see the log line, but I don't see the records getting committed. What am I missing?
It is not enough to call flush() and clear(); you need a reference to the Transaction and to call .commit() on it (from the reference guide):
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer(.....);
    session.save(customer);
}
tx.commit();
session.close();
I can see two ways to do this. One is to define the transaction declaratively and call it from an external method.
Parent:
List<Domain> domainList = new ArrayList<>();
for (int i = 0; i < 60000; i++) {
    domainList.add(domain);
    if (i % 100 == 0) {
        child.saveAll(domainList);
        domainList.clear();
    }
}
Child:
@Transactional
public void saveAll(List<Domain> domainList) {
    // persist the batch here, e.g. with a repository's saveAll()
}
This calls the declarative method at regular intervals as defined by the parent.
The other one is to manually begin and end the transaction and close the session.
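A minimal sketch of the manual variant, assuming a plain Hibernate SessionFactory is available (the 60000/100 numbers follow the question):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 1; i <= 60000; i++) {
    session.save(entity);
    if (i % 100 == 0) {
        session.flush();  // push the pending inserts to the database
        session.clear();  // detach entities so the session stays small
        tx.commit();      // make this batch visible
        tx = session.beginTransaction();
    }
}
tx.commit();              // commit the final partial batch
session.close();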

Hibernate - the second query gives Unknown service requested

I'm trying to better understand how Hibernate works, and I have a problem I cannot resolve.
When the application starts, it runs a query:
Session session = HibernateUtil.getSessionFactory().getCurrentSession();
session.beginTransaction();
int result;
String query = "SELECT count(*) as posti_disponibili from occupazione t inner join ";
query += "(select id_posto_park, max(date_time) as MaxDate from occupazione group by id_posto_park) tm on ";
query += "t.id_posto_park = tm.id_posto_park and t.date_time = tm.Maxdate and t.isOccupied = 0";
BigInteger bi = (BigInteger) session.createSQLQuery(query).uniqueResult();
result = bi.intValue();
HibernateUtil.shutdown();
At the end I close the current session.
Then I have a second query to run, so I open a new session (the first one was closed by HibernateUtil.shutdown()):
Session session = HibernateUtil.getSessionFactory().openSession();
session.beginTransaction();
Client client = new Client();
client.setIdClient(clientId);
String queryString ="from it.besmart.models.Client where clientId = :c)";
List<?> list = session.createQuery(queryString).setProperties(client).list();
but now I get:
org.hibernate.service.UnknownServiceException: Unknown service requested [org.hibernate.cache.spi.RegionFactory]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:184)
at org.hibernate.cfg.Settings.getRegionFactory(Settings.java:300)
at org.hibernate.internal.SessionFactoryImpl$SessionBuilderImpl.openSession(SessionFactoryImpl.java:1322)
at org.hibernate.internal.SessionFactoryImpl.openSession(SessionFactoryImpl.java:677)
at it.besmart.parkserver.SocketClientHandler.run(SocketClientHandler.java:78)
at java.lang.Thread.run(Thread.java:744)
I cannot understand why; I closed the first session, but then opened a new one.
Is it correct to close the session after each query?
EDIT
I'm still trying to solve this problem, with no result so far.
Now I have the first select query, which goes well; it runs at the startup of the application.
try {
    Session session = HibernateUtil.getSessionFactory().getCurrentSession();
    session.beginTransaction();
    String query = "SELECT count(*) as posti_disponibili from occupazione t inner join ";
    query += "(select id_posto_park, max(date_time) as MaxDate from occupazione group by id_posto_park) tm on ";
    query += "t.id_posto_park = tm.id_posto_park and t.date_time = tm.Maxdate and t.isOccupied = 0";
    BigInteger bi = (BigInteger) session.createSQLQuery(query).uniqueResult();
    result = bi.intValue();
}
I do not commit or flush it.
Then, later in the application, I have the second query, so I get the current session and try to run the select:
Session session = HibernateUtil.getSessionFactory().getCurrentSession();
session.beginTransaction();
Client client = new Client();
client.setIdClient(clientId);
String queryString ="from it.besmart.models.Client c where c.clientId = :c";
logger.debug(queryString);
// logger.debug(session);
Query theQuery = session.createQuery(queryString).setProperties(client);
List<?> list = theQuery.list();
The application stops and nothing comes out; I don't know what's going on, also because I cannot set up Hibernate logging with pi4j...
Is there something wrong with how I use Hibernate sessions?
If you use sessionFactory.getCurrentSession(), you'll obtain a "current session" which is bound to the lifecycle of the transaction and will be automatically flushed and closed when the transaction ends (commit or rollback).
If you decide to use sessionFactory.openSession(), you'll have to manage the session yourself and to flush and close it "manually".
For more info go to Hibernate transactions.
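A short sketch of the difference, assuming a configured SessionFactory (getCurrentSession() also requires the current_session_context_class property to be set):

// 1. Current session: bound to the transaction, closed automatically.
Session current = sessionFactory.getCurrentSession();
current.beginTransaction();
// ... run queries ...
current.getTransaction().commit(); // flushes and closes the session

// 2. Opened session: the caller owns the lifecycle.
Session owned = sessionFactory.openSession();
try {
    owned.beginTransaction();
    // ... run queries ...
    owned.getTransaction().commit();
} finally {
    owned.close(); // must be closed explicitly
}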

What is the use of Hibernate batch processing

I am new to Hibernate and I have a doubt about Hibernate batch processing. I read some tutorials on it, and they said:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    Employee employee = new Employee(.....);
    session.save(employee);
}
tx.commit();
session.close();
Hibernate will cache all the persisted objects in the session-level cache, and ultimately your application will fall over with an OutOfMemoryError somewhere around the 50,000th row. You can resolve this problem by using batch processing with Hibernate, like this:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    Employee employee = new Employee(.....);
    session.save(employee);
    if (i % 50 == 0) { // same as the JDBC batch size
        // flush a batch of inserts and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
My doubt is: instead of initializing the session outside the loop, why can't we initialize it inside the for loop, like this?
Session session = null;
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    session = sessionFactory.openSession();
    Employee employee = new Employee(.....);
    session.save(employee);
}
tx.commit();
session.close();
Is this the correct way or not? Can anyone suggest the correct way?
No. Don't initialize the session in the for loop; every time you open a new session you're starting a new batch (so your way has a batch size of one, i.e. it is non-batching), and it would also be much slower. That is why the first example has
if (i % 50 == 0) {
    // flush a batch of inserts and release memory:
    session.flush();
    session.clear();
}
that is what "flush a batch of inserts and release memory" was for.
Batch processing in Hibernate means dividing a huge task into smaller ones.
When you call session.save(obj), Hibernate caches that object in memory (the object is not yet written to the database) and saves it to the database when you commit your transaction, i.e. when you call transaction.commit().
Let's say you have millions of records to insert: caching them all via session.save(obj) would consume a lot of memory and eventually result in an OutOfMemoryError.
Solution: create smaller batches and save them to the database as you go.
if (i % 50 == 0) {
    // flush a batch of inserts and release memory:
    session.flush();
    session.clear();
}
Note: in the code above, session.flush() actually writes the objects to the database, and session.clear() releases the memory occupied by those objects, for each batch of size 50.
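It is also worth noting that flush() and clear() only control session memory; for Hibernate to actually group the inserts into JDBC batches, the batch size has to be configured as well, for example (in the Hibernate configuration properties):

// Without this setting Hibernate still sends one INSERT at a time,
// even when flushing in groups of 50.
properties.put("hibernate.jdbc.batch_size", "50");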
Batch processing allows you to optimize writing data.
However, the usual advice of flushing and clearing the Hibernate Session is incomplete.
You need to commit the transaction at the end of each batch; otherwise you get a long-running transaction, which can hurt performance, and if the last item fails, rolling back all the changes will put a lot of pressure on the DB.
Therefore, this is how you should do batch processing:
int entityCount = 50;
int batchSize = 25;

EntityManager entityManager = entityManagerFactory().createEntityManager();
EntityTransaction entityTransaction = entityManager.getTransaction();

try {
    entityTransaction.begin();
    for (int i = 0; i < entityCount; i++) {
        if (i > 0 && i % batchSize == 0) {
            entityTransaction.commit();
            entityTransaction.begin();
            entityManager.clear();
        }
        Post post = new Post(
            String.format("Post %d", i + 1)
        );
        entityManager.persist(post);
    }
    entityTransaction.commit();
} catch (RuntimeException e) {
    if (entityTransaction.isActive()) {
        entityTransaction.rollback();
    }
    throw e;
} finally {
    entityManager.close();
}

Multithreading with Embedded ObjectDB

I need an atomic counter with ObjectDB, but the following code doesn't work as I expected:
final EntityManagerFactory emf = Persistence.createEntityManagerFactory("test.odb");
EntityManager em = emf.createEntityManager();
Point p = new Point(0, 0);
em.getTransaction().begin();
em.persist(p);
em.getTransaction().commit();
em.close();
final CountDownLatch l = new CountDownLatch(100);
for (int i = 0; i < 100; i++) {
    Thread t = new Thread(new Runnable() {
        @Override
        public void run() {
            EntityManager em = emf.createEntityManager();
            em.getTransaction().begin();
            Query query = em.createQuery("UPDATE Point SET x = x + 1");
            query.executeUpdate();
            em.getTransaction().commit();
            em.close();
            l.countDown();
        }
    });
    t.start();
}
l.await();
em = emf.createEntityManager();
TypedQuery<Point> myquery = em.createQuery("SELECT p from Point p", Point.class);
List<Point> results = myquery.getResultList();
System.out.println("X coordinate is: " + results.get(0).getX());
em.close();
It should have printed "X coordinate is: 100", but in reality it doesn't.
What is wrong with my code?
You can fix your code in one of the following ways:
Synchronize your update queries, so they are executed sequentially rather than concurrently:
synchronized (lock) {
    em.createQuery("UPDATE Point SET x = x + 1").executeUpdate();
}
Your lock object must be a single object that is shared by all the threads.
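For example, a minimal sketch (the LOCK field name is mine):

// One object shared by all worker threads, e.g. a static field:
private static final Object LOCK = new Object();

// Inside each thread's run() method:
synchronized (LOCK) {
    em.createQuery("UPDATE Point SET x = x + 1").executeUpdate();
}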
Or, use ObjectDB / JPA locking, by setting a pessimistic locking timeout, e.g. by:
Map<String, Integer> properties =
    Collections.singletonMap("javax.persistence.lock.timeout", 1000);
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory(
        "objectdb:$objectdb/db/test.tmp;drop", properties);
and then replacing the UPDATE query with a retrieval with a lock and an update:
Point point = em.find(Point.class, 1, LockModeType.PESSIMISTIC_WRITE);
point.setX(point.getX() + 1);
Better to create a DAO class for Point and wrap the persist(), merge(), ... functions in synchronized methods instead.
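A minimal sketch of that idea, assuming the EntityManagerFactory from the question (the class and method names here are mine):

public class PointDao {
    private final EntityManagerFactory emf;

    public PointDao(EntityManagerFactory emf) {
        this.emf = emf;
    }

    // synchronized serializes all increments within this (embedded, single-JVM) process
    public synchronized void incrementX(Object id) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            Point p = em.find(Point.class, id);
            p.setX(p.getX() + 1);
            em.getTransaction().commit();
        } finally {
            em.close();
        }
    }
}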
Alternatively, an AtomicInteger with getAndIncrement() should solve your problem without the need for synchronization and locks.
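A sketch of that approach; note that the counter then lives in memory only and is not persisted unless you write it back to the Point entity yourself:

import java.util.concurrent.atomic.AtomicInteger;

// Shared across all threads; getAndIncrement() is atomic without any locks.
private static final AtomicInteger counter = new AtomicInteger();

// Inside each thread's run() method:
int next = counter.getAndIncrement();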

Hibernate is not deleting my objects. Why?

I have just set up a test that checks that I am able to insert entries into my database using Hibernate. The thing that drives me crazy is that Hibernate does not actually delete the entries, although it reports that they are gone!
The test below runs successfully, but when I check my DB afterwards, the entries that were inserted are still there! I even try to check it using assert (yes, I have -ea as a VM parameter). Does anyone have a clue why the entries are not deleted?
public class HibernateExportStatisticDaoIntegrationTest {
    HibernateExportStatisticDao dao;
    Transaction transaction;

    @Before
    public void setUp() {
        assert numberOfStatisticRowsInDB() == 0;
        dao = new HibernateExportStatisticDao(HibernateUtil.getSessionFactory());
    }

    @After
    public void deleteAllEntries() {
        assert numberOfStatisticRowsInDB() != 0;
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        for (PersistableStatisticItem item : allStatisticItemsInDB()) {
            session.delete(item);
        }
        session.flush();
        assert numberOfStatisticRowsInDB() == 0;
    }

    @Test
    public void exportAllSavesEntriesToDatabase() {
        int expectedNumberOfStatistics = 20;
        dao.exportAll(StatisticItemFactory.createTestStatistics(expectedNumberOfStatistics));
        assertEquals(expectedNumberOfStatistics, numberOfStatisticRowsInDB());
    }

    private int numberOfStatisticRowsInDB() {
        return allStatisticItemsInDB().size();
    }

    @SuppressWarnings("unchecked")
    private List<PersistableStatisticItem> allStatisticItemsInDB() {
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        transaction = session.beginTransaction();
        Query q = session.createQuery("FROM PersistableStatisticItem item");
        return q.list();
    }
}
The console is filled with
Hibernate: delete from UPTIME_STATISTICS where logDate=? and serviceId=?
but nothing has been deleted when I check it.
I guess it's related to inconsistent use of transactions (note that beginTransaction() in allStatisticItemsInDB() is called several times without corresponding commits).
Try to manage transactions in proper way, for example, like this:
Session session = HibernateUtil.getSessionFactory().getCurrentSession();
Transaction tx = session.beginTransaction();
for (PersistableStatisticItem item :
        session.createQuery("FROM PersistableStatisticItem item").list()) {
    session.delete(item);
}
session.flush();
assert session.createQuery("FROM PersistableStatisticItem item").list().size() == 0;
tx.commit();
See also:
13.2. Database transaction demarcation
I had the same problem, although I was not using a transaction at all. I was using a named query like this:
Query query = session.getNamedQuery(EmployeeNQ.DELETE_EMPLOYEES);
int rows = query.executeUpdate();
session.close();
It was returning 2 rows, but the database still had all the records. Then I wrapped the above code in a transaction:
Transaction transaction = session.beginTransaction();
Query query = session.getNamedQuery(EmployeeNQ.DELETE_EMPLOYEES);
int rows = query.executeUpdate();
transaction.commit();
session.close();
Then it started working fine. I was using SQL Server, but I think that with H2 the above code (without a transaction) would also work fine.
One more observation: for inserting and reading records a transaction is not mandatory, but for deleting records we have to use a transaction (only tested on SQL Server).
Can you post your DB schema and HBM or Fluent maps? One thing that got me a while back was that I had a ReadOnly() in my Fluent map. It never threw an error, and I too saw the "delete from blah where blahblah=..." in the logs.
