According to the Hibernate documentation, the best way to do a bulk insert is this:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for ( int i=0; i<100000; i++ ) {
    Customer customer = new Customer(.....);
    session.save(customer);
    if ( i % 20 == 0 ) { //20, same as the JDBC batch size
        //flush a batch of inserts and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
Actually, I'm using Hibernate with Spring (@Transactional), so I don't use the Session or the flush() and clear() methods directly.
I have a major performance problem: 1 hour to insert 7,400 rows...
I suspect Spring is mismanaging the session and disconnecting from the database between calls to the DAO methods.
How can I check that?
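For reference, with Spring you can still reach the underlying persistence context by injecting the EntityManager and flushing/clearing it inside the @Transactional method. The following is only a rough sketch under assumed names (the CustomerBatchInserter service and the batchSize value are hypothetical, not taken from the question):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CustomerBatchInserter {

    @PersistenceContext
    private EntityManager entityManager;

    // Sketch: persist everything in one transaction, flushing and clearing
    // every batchSize entities so the persistence context stays small.
    @Transactional
    public void insertAll(List<Customer> customers) {
        int batchSize = 20; // should match hibernate.jdbc.batch_size
        for (int i = 0; i < customers.size(); i++) {
            entityManager.persist(customers.get(i));
            if (i > 0 && i % batchSize == 0) {
                entityManager.flush();
                entityManager.clear();
            }
        }
    }
}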
I'm currently using Hibernate 6 and H2. I want to safely increment the count field of my Entity class, using more than one thread at a time just to make sure that the transaction is actually locking my entity. But when I run this code, the resulting count column in H2 isn't 10, but some random number below 10. What am I missing about pessimistic locking?
for (int a = 0; a < 5; a++) {
    executorService.execute(() -> {
        Session innerSession = sessionFactory.openSession();
        Transaction innerTransaction = innerSession.beginTransaction();
        Entity entity = innerSession.get(Entity.class, id, LockMode.PESSIMISTIC_WRITE);
        entity.setCount(entity.getCount() + 1);
        innerSession.flush();
        innerTransaction.commit();
        innerSession.close();
    });
    executorService.execute(() -> {
        Session innerSession = sessionFactory.openSession();
        Transaction innerTransaction = innerSession.beginTransaction();
        Entity entity = innerSession.get(Entity.class, id, LockMode.PESSIMISTIC_WRITE);
        entity.setCount(entity.getCount() + 1);
        innerSession.flush();
        innerTransaction.commit();
        innerSession.close();
    });
}
Entire method:
Long id;
SessionFactory sessionFactory;
Session session;
Transaction transaction;
ExecutorService executorService = Executors.newFixedThreadPool(4);
Properties properties = new Properties();
Configuration configuration = new Configuration();
properties.put(AvailableSettings.URL, "jdbc:h2:tcp://localhost/~/test");
properties.put(AvailableSettings.USER, "root");
properties.put(AvailableSettings.PASS, "root");
properties.put(AvailableSettings.DIALECT, H2Dialect.class.getName());
properties.put(AvailableSettings.SHOW_SQL, true);
properties.put(AvailableSettings.HBM2DDL_AUTO, Action.CREATE.getExternalHbm2ddlName());
// classes are provided by another library
entityClasses.forEach(configuration::addAnnotatedClass);
sessionFactory = configuration.buildSessionFactory(new StandardServiceRegistryBuilder().applySettings(properties).build());
session = sessionFactory.openSession();
transaction = session.beginTransaction();
// initial value of count field is 0
id = (Long) session.save(new Entity());
transaction.commit();
for (int a = 0; a < 5; a++) {
    executorService.execute(() -> {
        Session innerSession = sessionFactory.openSession();
        Transaction innerTransaction = innerSession.beginTransaction();
        Entity entity = innerSession.get(Entity.class, id, LockMode.PESSIMISTIC_WRITE);
        entity.setCount(entity.getCount() + 1);
        innerSession.flush();
        innerTransaction.commit();
        innerSession.close();
    });
    executorService.execute(() -> {
        Session innerSession = sessionFactory.openSession();
        Transaction innerTransaction = innerSession.beginTransaction();
        Entity entity = innerSession.get(Entity.class, id, LockMode.PESSIMISTIC_WRITE);
        entity.setCount(entity.getCount() + 1);
        innerSession.flush();
        innerTransaction.commit();
        innerSession.close();
    });
}
executorService.shutdown();
executorService.awaitTermination(5, TimeUnit.SECONDS);
session.clear(); // prevent reading from cache
System.out.println(session.get(Entity.class, id).getCount()); // printed result doesn't match 10, same for reading from the H2 browser interface
session.close();
The answer was simple: I just needed to upgrade Hibernate to 6.0.0.Alpha9. Higher versions require Java 11 to compile (I'm using Java 8). It seems this was a bug in 6.0.0.Alpha6, which I used previously; there was no problem with H2 1.4.200. From the Hibernate SQL logs I understood that the main problem in 6.0.0.Alpha6 was an incorrect select query for the transaction with a pessimistic lock: it issued just a regular select, whereas 6.0.0.Alpha9 already uses select ... for update, which prevents other transactions from reading the row.
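As a side note, if you only need an atomic increment rather than the loaded entity, a bulk HQL update sidesteps the entity-loading and locking path entirely. This is a minimal sketch, assuming the entity is mapped as Entity with a numeric count property; it is an alternative technique, not the fix described above:

// Let the database perform the increment atomically; no pessimistic lock
// on a loaded object is needed for this case.
Session s = sessionFactory.openSession();
Transaction tx = s.beginTransaction();
s.createQuery("update Entity e set e.count = e.count + 1 where e.id = :id")
 .setParameter("id", id)
 .executeUpdate();
tx.commit();
s.close();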
I would like to improve my insert performance with Hibernate in my Java EJB application deployed on WildFly 10.0.
I want to perform batch inserts with Hibernate, but with my code the insert is slower than without batching.
Here is the method which performs the insert. It gets a list of customers and should persist it. The if clause makes the difference.
public List<Customer> insertCustomerList(List<Customer> cusList)
{
    try {
        int batchSize = 25;
        for ( int i = 0; i < cusList.size(); ++i ) {
            em.persist( cusList.get(i) );
            if ( i > 0 && i % batchSize == 0 ) {
                System.out.println("FLUSHFG");
                //flush a batch of inserts and release memory
                em.flush();
                em.clear();
            }
        }
    } catch (RuntimeException e) {
        throw e;
    }
    return cusList;
}
In my opinion it should be faster with the flush and clear, but it is much slower than without!
In my WildFly container I cannot open a new session or a new transaction, because I get an error.
Can you tell me how I can manage batch inserts with WildFly so that inserting many large entities becomes faster rather than slower?
In my persistence xml I have this property:
<property name="hibernate.jdbc.batch_size" value="25"/>
Thanks!
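As an aside (not part of the original question): batching usually only pays off when Hibernate can actually group the JDBC statements. The following persistence.xml fragment shows the settings that are commonly combined for that; the values are illustrative:

<property name="hibernate.jdbc.batch_size" value="25"/>
<!-- group statements per table so they can share a JDBC batch -->
<property name="hibernate.order_inserts" value="true"/>
<property name="hibernate.order_updates" value="true"/>

Also worth knowing: Hibernate silently disables JDBC insert batching for entities that use the IDENTITY identifier generator, which is a common reason the batch_size setting appears to have no effect.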
I am new to Hibernate and I have a doubt about Hibernate batch processing. I read some tutorials on batch processing and they said:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for ( int i=0; i<100000; i++ )
{
    Employee employee = new Employee(.....);
    session.save(employee);
}
tx.commit();
session.close();
Hibernate will cache all the persisted objects in the session-level cache, and ultimately your application would fall over with an OutOfMemoryException somewhere around the 50,000th row. You can resolve this problem by using batch processing with Hibernate, like this:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for ( int i=0; i<100000; i++ )
{
    Employee employee = new Employee(.....);
    session.save(employee);
    if( i % 50 == 0 )
    {   // Same as the JDBC batch size
        //flush a batch of inserts and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
My doubt is: instead of initializing the session outside, why can't we initialize it inside the for loop, like this?
Session session = null;
Transaction tx = session.beginTransaction();
for ( int i=0; i<100000; i++ )
{
    session = sessionFactory.openSession();
    Employee employee = new Employee(.....);
    session.save(employee);
}
tx.commit();
session.close();
Is this the correct way or not? Can anyone suggest the correct approach?
No. Don't initialize the session inside the for loop; every time you open a new session you start a new batch, so your version effectively has a batch size of one, i.e. it does not batch at all. It would also be much slower. That is why the first example has
if( i % 50 == 0 ) {
    //flush a batch of inserts and release memory:
    session.flush();
    session.clear();
}
which is what "flush a batch of inserts and release memory" refers to.
Batch processing in Hibernate means dividing one huge task into a number of smaller tasks.
When you call session.save(obj), Hibernate actually caches that object in memory (the object is not yet written to the database) and saves it to the database when you commit your transaction, i.e. when you call transaction.commit().
Let's say you have millions of records to insert; caching all of them via session.save(obj) would consume a lot of memory and eventually result in an OutOfMemoryException.
Solution:
Create batches of a smaller size and save them to the database.
if( i % 50 == 0 ) {
    //flush a batch of inserts and release memory:
    session.flush();
    session.clear();
}
Note:
In the code above, session.flush() flushes, i.e. actually writes the objects to the database, and session.clear() releases the memory occupied by those objects, for each batch of size 50.
Batch processing allows you to optimize writing data.
However, the usual advice of flushing and clearing the Hibernate Session is incomplete.
You also need to commit the transaction at the end of each batch; otherwise you get a long-running transaction, which can hurt performance, and if the last item fails, undoing all the changes puts a lot of pressure on the database.
Therefore, this is how you should do batch processing:
int entityCount = 50;
int batchSize = 25;
EntityManager entityManager = entityManagerFactory().createEntityManager();
EntityTransaction entityTransaction = entityManager.getTransaction();
try {
    entityTransaction.begin();
    for (int i = 0; i < entityCount; i++) {
        if (i > 0 && i % batchSize == 0) {
            entityTransaction.commit();
            entityTransaction.begin();
            entityManager.clear();
        }
        Post post = new Post(
            String.format("Post %d", i + 1)
        );
        entityManager.persist(post);
    }
    entityTransaction.commit();
} catch (RuntimeException e) {
    if (entityTransaction.isActive()) {
        entityTransaction.rollback();
    }
    throw e;
} finally {
    entityManager.close();
}
I have code that looks like this:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
int i = 0;
try {
    for ( Customer customer: customers ) {
        i++;
        session.update(customer);
        if ( i % 200 == 0 ) { //200, same as the JDBC batch size
            //flush a batch of inserts and release memory:
            session.flush();
            session.clear();
        }
    }
} catch (RuntimeException e) {
    //TODO want to know customer id here!
}
tx.commit();
session.close();
Say at some point session.flush() raises a DataException because one of the fields did not fit into its database column size, for one of the customers in that batch of 200. Nothing wrong with that; the data can be corrupted, that's acceptable in this case. BUT I really need to know the id of the customer that failed. The database returns a meaningless error message that does not state the parameters of the statement, and the caught exception also does not say which customer failed, only the SQL statement text, which looks like 'update Customer set name=?'.
Can I somehow determine it using the Hibernate session? Does it store the information about the last entity it tried to write anywhere?
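One workable approach is to remember the customers of the current batch and, when the flush fails, re-try them one by one in a fresh session to pinpoint the offending row. This is only a sketch extending the snippet above (the currentBatch list, the probe session, and the getId() getter are assumptions, not from the original post):

List<Customer> currentBatch = new ArrayList<>();
for (Customer customer : customers) {
    currentBatch.add(customer);
    session.update(customer);
    if (currentBatch.size() == 200) {
        try {
            session.flush();
            session.clear();
        } catch (RuntimeException e) {
            // Re-try each entity of the failed batch individually in a new
            // session; the one that throws again is the culprit.
            for (Customer suspect : currentBatch) {
                Session probe = sessionFactory.openSession();
                Transaction probeTx = probe.beginTransaction();
                try {
                    probe.update(suspect);
                    probe.flush();
                    probeTx.rollback(); // we only want to identify the bad row
                } catch (RuntimeException inner) {
                    System.err.println("Failed customer id: " + suspect.getId());
                } finally {
                    probe.close();
                }
            }
            throw e;
        }
        currentBatch.clear();
    }
}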
I have just set up a test that checks that I am able to insert entries into my database using Hibernate. The thing that drives me crazy is that Hibernate does not actually delete the entries, although it reports that they are gone!
The test below runs successfully, but when I check my DB afterwards the entries that were inserted are still there! I even check it using assert (yes, I have -ea as a VM parameter). Does anyone have a clue why the entries are not deleted?
public class HibernateExportStatisticDaoIntegrationTest {

    HibernateExportStatisticDao dao;
    Transaction transaction;

    @Before
    public void setUp(){
        assert numberOfStatisticRowsInDB() == 0;
        dao = new HibernateExportStatisticDao(HibernateUtil.getSessionFactory());
    }

    @After
    public void deleteAllEntries(){
        assert numberOfStatisticRowsInDB() != 0;
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        for(PersistableStatisticItem item:allStatisticItemsInDB()) {
            session.delete(item);
        }
        session.flush();
        assert numberOfStatisticRowsInDB() == 0;
    }

    @Test public void exportAllSavesEntriesToDatabase(){
        int expectedNumberOfStatistics = 20;
        dao.exportAll(StatisticItemFactory.createTestStatistics(expectedNumberOfStatistics));
        assertEquals(expectedNumberOfStatistics, numberOfStatisticRowsInDB());
    }

    private int numberOfStatisticRowsInDB() {
        return allStatisticItemsInDB().size();
    }

    @SuppressWarnings("unchecked")
    private List<PersistableStatisticItem> allStatisticItemsInDB(){
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        transaction = session.beginTransaction();
        Query q = session.createQuery("FROM PersistableStatisticItem item");
        return q.list();
    }
}
The console is filled with
Hibernate: delete from UPTIME_STATISTICS where logDate=? and serviceId=?
but nothing has been deleted when I check it.
I guess it's related to the inconsistent use of transactions (note that beginTransaction() in allStatisticItemsInDB() is called several times without corresponding commits).
Try to manage the transactions properly, for example like this:
Session session = HibernateUtil.getSessionFactory().getCurrentSession();
Transaction tx = session.beginTransaction();
for(PersistableStatisticItem item:
        session.createQuery("FROM PersistableStatisticItem item").list()) {
    session.delete(item);
}
session.flush();
assert session.createQuery("FROM PersistableStatisticItem item").list().size() == 0;
tx.commit();
See also:
13.2. Database transaction demarcation
I had the same problem, although I was not using a transaction at all. I was using a named query like this:
Query query = session.getNamedQuery(EmployeeNQ.DELETE_EMPLOYEES);
int rows = query.executeUpdate();
session.close();
It was returning 2 rows, but the database still had all the records. Then I wrapped the above code like this:
Transaction transaction = session.beginTransaction();
Query query = session.getNamedQuery(EmployeeNQ.DELETE_EMPLOYEES);
int rows = query.executeUpdate();
transaction.commit();
session.close();
Then it started working fine. I was using SQL Server, but I think that with H2 the above code (without a transaction) would also work fine.
One more observation: for inserting and fetching records the use of a transaction is not mandatory, but for deleting records we had to use a transaction (only tested on SQL Server).
Can you post your DB schema and HBM or Fluent maps? One thing that got me a while back was that I had a ReadOnly() in my Fluent map. It never threw an error, and I too saw the "delete from blah where blahblah=..." in the logs.