Multithreading Exception : "Illegal attempt to associate a collection with two open sessions"? - java

So I was trying to implement proper multithreading for save and update operations in Hibernate. Each of my threads uses one newly created session with its own transaction.
I always get an "Illegal attempt to associate a collection with two open sessions" exception, because I'm using multiple sessions at the same time (I also tested it with a single shared one, which did not work either). The problem here is that I need multiple sessions; that's the way I was told Hibernate multithreading works.
This code basically gets executed on multiple threads at the "same" time.
var session = database.openSession(); // sessionFactory.openSession()
session.beginTransaction();
try {
    // Save my entities
    for (var identity : identities)
        session.update(identity);
    session.flush();
    session.clear();
    session.getTransaction().commit();
} catch (Exception e) {
    GameExtension.getInstance().trace("Save");
    GameExtension.getInstance().trace(e.getMessage());
    session.getTransaction().rollback();
}
session.close();
So how do I solve this issue? How do I keep multiple sessions and prevent that exception at the same time?

The issue you are facing is that you load an entity with a collection in session 1 and then somehow this object is passed to session 2 while it is still associated with the open session 1. This simply does not work, even in a single-threaded environment.
Either detach the object from session 1 or reload the object in session 2.
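As a rough sketch of the second option (not the poster's actual code; Identity and touch() are stand-ins for whatever entity and change is involved), each worker thread can reload its entities by id inside its own session, so no object is ever attached to two open sessions at once:
void saveInOwnSession(SessionFactory sessionFactory, List<Long> identityIds) {
    try (var session = sessionFactory.openSession()) {
        session.beginTransaction();
        for (var id : identityIds) {
            // loaded by this session, so it is attached to this session only
            var identity = session.get(Identity.class, id);
            identity.touch(); // hypothetical modification; dirty checking flushes it on commit
        }
        session.getTransaction().commit();
    }
}
The first option works too: call session1.evict(identity) (or session1.clear()) before any other session calls update() on the same instance.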

Related

Can Hibernate first-level cache results be stale when two separate applications consume the same database?

I am quite new to Hibernate and I was learning about first-level caching. I have a concern about first-level cache consistency.
Imagine I have two separate web applications which can read from and write to the same database. Both applications use Hibernate. The first application contains the following code segment.
//First Application code
//Open the hibernate session
Session session = HibernateUtil.getSessionFactory().openSession();
session.beginTransaction();
//fetch the client entity from database first time
Client client = (Client) session.load(Client.class, new Integer(77869));
System.out.println(client.getName());
//execute some code which runs for several minutes
//fetch the client entity again
client = (Client) session.load(Client.class, new Integer(77869));
System.out.println(client.getName());
session.getTransaction().commit();
The second application consists of the following code.
//Second Application code
//Open the hibernate session
Session session = HibernateUtil.getSessionFactory().openSession();
session.beginTransaction();
//update the client's name directly in the database
String hql = "UPDATE Client set name = :name WHERE id = :client_id";
Query query = session.createQuery(hql);
query.setParameter("name", "Kumar");
query.setParameter("client_id", new Integer(77869));
int result = query.executeUpdate();
System.out.println("Rows affected: " + result);
session.getTransaction().commit();
Let's say the first application creates the session at 10.00 AM and keeps that session object alive for 10 minutes. Meanwhile, at 10.01 AM, the second application does an update to the database (update CLIENT set name = 'Kumar' where id = 77869).
So the first-level cache in the first application is outdated after 10.01 AM, am I right? If so, is there any method to avoid this scenario?
There is no implicit way for your first application to know about the underlying changes that were triggered by the second application.
One of the possible ways to handle this situation could be the following:
1) Once one of the applications does an update/insert, it saves a flag of some sort (in the database, for example) indicating that the data has been changed.
2) In the other application, just after you start a new session, check whether that flag is set and therefore whether the data has been altered.
If so, you need to set the CacheMode of the session accordingly:
session.setCacheMode(CacheMode.REFRESH);
This ensures that during this session the queried entities are not taken from the cache but from the DB (updating the cache in the process). Most likely this will not pick up every change in the second-level cache, so you would need to set that session attribute periodically.
Keep in mind that second-level caching of anything other than dictionary-style entities that rarely change is tricky in terms of consistency.
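A minimal sketch of that flag check (dataChangeFlagDao and isDataChanged() are hypothetical; the only Hibernate API used is Session.setCacheMode):
Session session = HibernateUtil.getSessionFactory().openSession();
// hypothetical lookup of the "data has changed" flag written by the other application
if (dataChangeFlagDao.isDataChanged()) {
    // re-read queried entities from the DB instead of the cache,
    // refreshing the cache as they are loaded
    session.setCacheMode(CacheMode.REFRESH);
}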

When to call flush() and commit() in Hibernate?

I have the following case:
openSession()
tx = session.beginTransaction();
try
{
    ...
    session.saveOrUpdate(obj_1);
    ...
    session.saveOrUpdate(obj_2);
    ...
    session.saveOrUpdate(obj_3);
    session.flush();
    tx.commit();
}
catch (Exception e)
{
    tx.rollback();
}
finally
{
    session.close();
}
The first call, saveOrUpdate(obj_1), will fail due to a duplicate-entry error. But this error does not actually surface until the database is accessed at the end, in session.flush().
Is there a way to make this kind of error happen earlier and give me more of a chance to handle it correctly?
I also don't want to cut the transaction up too small, because that makes it hard to roll back.
My Hibernate entities are reverse-engineered from an existing database schema.
I have disabled the cache in my configuration file:
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>
EDIT:
My question is how I can get the error as early as possible, if there is one. The error itself doesn't matter to me.
Right now the error shows up at the end, even though it was actually caused by the first access.
You can execute session.flush() after each session.saveOrUpdate() to force Hibernate to execute the update or insert on your database. Without the flush the actual database operations are deferred and will only be processed immediately before the transaction ends.
This way you don't have to change the size of your transaction.
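As a sketch against the code in the question, an early flush surfaces the duplicate-entry error at the statement that caused it rather than at the end (ConstraintViolationException here is Hibernate's org.hibernate.exception.ConstraintViolationException):
tx = session.beginTransaction();
try
{
    session.saveOrUpdate(obj_1);
    session.flush(); // a duplicate-entry error now surfaces here, not at commit time
    session.saveOrUpdate(obj_2);
    session.flush();
    session.saveOrUpdate(obj_3);
    session.flush();
    tx.commit();
}
catch (ConstraintViolationException e)
{
    // handle the failing object here, then roll back
    tx.rollback();
}
finally
{
    session.close();
}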

How can I check if the db is locking between transactions

I am using Hibernate to implement the DAO layer (Sybase DB) in a web application running on JBoss 5.
The problem I am facing is that when the client/UI makes multiple simultaneous HTTP calls - which in turn call the DAO insert method - there is a race condition which causes both calls to run the DAO insert method at nearly the same time. What I actually want is:
The 1st request calls the DAO method
1st request reads the current db value
Check if new data is valid based on current db value
If valid, insert the new value
AND then the 2nd request to read the current db value
Check if new data is valid
If valid, insert the value...and so on if there are more calls
My DAO layer code looks like so:
@Override
@Transactional
public Set<PartyData> insertPartyData(final Set<PartyData> pData) throws DataServiceException
{
    sessionFactory.getCurrentSession().getTransaction().begin();
    //code to read the current db value
    //validation code to check if new value can be inserted based on what's currently in db
    sessionFactory.getCurrentSession().save(pData);
    sessionFactory.getCurrentSession().getTransaction().commit();
    return pData;
}
Question?
How can I make sure that the db locks the table for the duration of one transaction so that any other request waits until the previous transaction is complete?
While locking the table will work - the process of locking an entire table just doesn't seem right. There must be other ways to accomplish what you're trying to do, like unique constraints for example.
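For illustration, a unique constraint declared on the entity mapping (the table and column names below are made up) lets the database itself reject the second concurrent insert, so the race surfaces as a ConstraintViolationException instead of requiring a table lock:
@Entity
@Table(name = "party_data",
       uniqueConstraints = @UniqueConstraint(columnNames = {"party_id", "effective_date"})) // hypothetical columns
public class PartyData {
    // ... fields and mappings ...
}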
IMO this is best done at the container level rather than the application level (I don't know if it could even be done at the application level).
Based on this article, you can do what you like by configuring two attributes in the Connector.
Set maxThreads to 1 (maxThreads: The maximum number of request processing threads to be created by this connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200.)
This will make sure that you run exactly one request at a time.
Increase acceptCount (acceptCount: The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 10.)
This should be set relatively high so that you do not deny connections to your service, but rather add them to a queue until another request finishes execution.
* ANSWER *
OK, I have tried the solution below, which seems to have worked so far.
What seems to happen is that each request creates a new session, and that session runs in its own transaction. So no two requests seem to interfere with each other and each transaction runs in one go.
I am not an expert in Hibernate, so please correct me if this is not the right way to do it.
@Override
@Transactional
public Set<PartyData> insertPartyData(final Set<PartyData> pData) throws DataServiceException
{
    final Session session = sessionFactory.openSession();
    Transaction tx;
    try {
        tx = session.beginTransaction();
        // read the current db value and validate the new data
        // (throw an exception if validation fails, otherwise continue)
        session.save(pData);
        tx.commit();
    }
    catch (final Exception e) {
        throw new DataServiceException(e);
    }
    finally {
        session.close();
    }
    return pData;
}

Temporarily forcing LazyInitializationExceptions

Traditionally, we try to avoid LazyInitializationExceptions. However, I need to temporarily allow them to be thrown. This is the pseudocode of what I'm trying to do:
Session session = ...;
Customer customer = (Customer) session.get(Customer.class, 56);
// All eagerly fetched data is now in memory.
disconnectSession(session);
// Attempts to access lazy data throw LazyInitializationExceptions.
// I want to take advantage of this fact to *only* access data that
// is currently in memory.
magicallySerializeDataCurrentlyInMemory(customer);
// Now that the in-memory data has been serialized, I want to re-connect
// the session.
reconnectSession(session);
The magicallySerializeDataCurrentlyInMemory method recursively attempts to serialize in-memory data of customer and its related entities, absorbing LazyInitializationExceptions along the way.
Attempt #1: session.disconnect / session.reconnect
In this attempt, I used this pattern:
Connection connection = session.disconnect();
magicallySerializeDataCurrentlyInMemory(customer);
session.reconnect(connection);
Unfortunately, it didn't throw LazyInitializationExceptions.
Attempt #2: session.close / session.reconnect
In this attempt, I used this pattern:
Connection connection = session.close();
magicallySerializeDataCurrentlyInMemory(customer);
session.reconnect(connection);
Unfortunately, this rendered the session useless after session.reconnect(connection).
How can I temporarily force LazyInitializationExceptions?
There isn't a way to temporarily close the session. What you can do is remove the Entity from the session and then put it back.
session.evict(customer);
//do something; accessing uninitialized lazy data here will throw LazyInitializationException
session.refresh(customer);
disconnect and reconnect just manually manage which particular JDBC connection is in use. A Hibernate Session is a larger concept than a single JDBC connection; the connection is just a resource to it. It can go through many different physical database connections throughout its lifecycle and not care one bit. If you disconnect and then ask the Session to do something, it will just go get another connection.
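Putting that together, a minimal usage sketch (magicallySerializeDataCurrentlyInMemory is the poster's pseudocode method):
Customer customer = (Customer) session.get(Customer.class, 56);

session.evict(customer); // detached: uninitialized lazy data now throws LazyInitializationException
magicallySerializeDataCurrentlyInMemory(customer);
session.refresh(customer); // re-associate with the session and re-read its state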

How to write a fallback mechanism that saves entities that weren't successfully saved the first time

I am trying to write a mechanism that will manage my save-to-DB operation.
I send the server a list of objects; it iterates over them and saves each one.
Now, if any of them fail for some strange reason (an exception), they are saved to another list that
a timer scans every 5 seconds, trying to re-save them.
I then have a locking problem, which I can solve with another boolean.
My function that saves my lost object is:
private void saveLostDeals() {
    synchronized (unsavedDeals) {
        // iterate with an explicit Iterator so entries can be removed while iterating
        // (removing inside a for-each loop would throw ConcurrentModificationException)
        Iterator<DealBean> iterator = unsavedDeals.iterator();
        while (iterator.hasNext()) {
            DealBean unsavedDeal = iterator.next();
            boolean successfullySaved = reportDeal(unsavedDeal, false);
            if (successfullySaved) {
                iterator.remove();
            }
        }
    }
}
And my reportDeal() method, which is called both for regular reports and for lost-deal reports:
try {
    ...
} catch (HibernateException e) {
    ...
    if (fallback) {
        synchronized (unsavedDeals) {
            unsavedDeals.add(deal);
        }
    }
    session.getTransaction().rollback();
} finally {
    ....
}
Now, when a lost deal is saved - if an exception occurs - the synchronized block will stop it.
What do you have to say about this save fallback mechanism? Are there better design patterns to deal with this common issue?
I would suggest using either a proxy or aspects to handle the rollback/retry mechanism. The proxy could use something like the strategy pattern for advice on what action to take.
If, however, you don't want to retry immediately but, say, in 5 seconds as you propose, I would consider building that into the contract of your database layer by providing asynchronous routines to begin with, something like dao.scheduleStore(o) or dao.asyncStore(o).
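A rough sketch of such an asynchronous routine, using a single-threaded ScheduledExecutorService for the 5-second retry (asyncStore and the executor field are illustrative, not an existing API; reportDeal is the poster's existing save method):
private final ScheduledExecutorService retryExecutor =
        Executors.newSingleThreadScheduledExecutor();

public void asyncStore(DealBean deal) {
    retryExecutor.submit(() -> storeWithRetry(deal));
}

private void storeWithRetry(DealBean deal) {
    boolean saved = reportDeal(deal, false); // the poster's existing synchronous save
    if (!saved) {
        // re-schedule just this deal instead of keeping a shared "unsaved" list
        retryExecutor.schedule(() -> storeWithRetry(deal), 5, TimeUnit.SECONDS);
    }
}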
It depends
For example,
Request to save entity ---> exception occurs ---> DB connection problem ---> in the exception block, retry saving the entity to a fallback DB ---> return the response to the request
Request to save entity ---> exception occurs ---> DB connection problem ---> in the exception block, retry saving the entity to an in-memory store in the application ---> return the response to the request
Request to save entity ---> exception occurs ---> unknown exception ---> in the exception block, save the entity to an XML file store [serialize to XML] ---> return a response noting that the data is temporarily saved and will be persisted to the DB later
Timer ---> checks the file store for any serialized XML ---> updates the DB
Points to watch out for
Async calls are better in such scenarios rather than making the requesting client wait.
In the case of in-memory saving, watch out for the amount of data held in memory during a prolonged DB failure; it might bring down the whole application.
Transactions: decide whether you want to roll back or save the intermediate state.
Consistency of the data needs to be watched.
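For the XML-file fallback in the third example, a minimal sketch using java.beans.XMLEncoder/XMLDecoder (this assumes DealBean is a JavaBean with a public no-arg constructor and getters/setters; the file path, deal.getId() and the surrounding error handling are illustrative only):
// serialize a deal that could not be saved to the DB (error handling elided)
try (XMLEncoder encoder = new XMLEncoder(new BufferedOutputStream(
        new FileOutputStream("unsaved-deals/" + deal.getId() + ".xml")))) {
    encoder.writeObject(deal);
}

// later, the timer task reads each stored file back and retries the DB save
try (XMLDecoder decoder = new XMLDecoder(new BufferedInputStream(
        new FileInputStream(file)))) {
    DealBean recovered = (DealBean) decoder.readObject();
    reportDeal(recovered, false);
}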
