Temporarily forcing LazyInitializationExceptions - java

Traditionally, we try to avoid LazyInitializationExceptions. However, I need to temporarily allow them to be thrown. This is the pseudocode of what I'm trying to do:
Session session = ...;
Customer customer = (Customer) session.get(Customer.class, 56);
// All eagerly fetched data is now in memory.
disconnectSession(session);
// Attempts to access lazy data throw LazyInitializationExceptions.
// I want to take advantage of this fact to *only* access data that
// is currently in memory.
magicallySerializeDataCurrentlyInMemory(customer);
// Now that the in-memory data has been serialized, I want to re-connect
// the session.
reconnectSession(session);
The magicallySerializeDataCurrentlyInMemory method recursively attempts to serialize the in-memory data of customer and its related entities, absorbing LazyInitializationExceptions along the way.
Attempt #1: session.disconnect / session.reconnect
In this attempt, I used this pattern:
Connection connection = session.disconnect();
magicallySerializeDataCurrentlyInMemory(customer);
session.reconnect(connection);
Unfortunately, it didn't throw LazyInitializationExceptions.
Attempt #2: session.close / session.reconnect
In this attempt, I used this pattern:
Connection connection = session.close();
magicallySerializeDataCurrentlyInMemory(customer);
session.reconnect(connection);
Unfortunately, this rendered the session useless after session.reconnect(connection).
How can I temporarily force LazyInitializationExceptions?

There isn't a way to temporarily close the session. What you can do is remove the entity from the session and then put it back:
session.evict(customer);
// accessing lazy data here will throw LazyInitializationException
session.refresh(customer);
connect and reconnect just manually manage which particular JDBC connection is in use. A Hibernate Session is a larger concept than a single JDBC connection; the connection is just a resource to it. A Session can go through many different physical database connections over its lifecycle and not care one bit. If you disconnect and then ask the Session to do something, it will just go and get another connection.
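Putting the pieces together for the original goal, a minimal sketch (assuming the Customer entity and the magicallySerializeDataCurrentlyInMemory method from the question):

Customer customer = (Customer) session.get(Customer.class, 56);
// Detach the entity: lazy accesses now throw LazyInitializationException.
session.evict(customer);
try {
    // Serialize only what is already in memory; the serializer absorbs
    // LazyInitializationExceptions for uninitialized associations.
    magicallySerializeDataCurrentlyInMemory(customer);
} finally {
    // Re-associate the entity with the session and reload its state.
    session.refresh(customer);
}

Note that refresh re-reads the entity's state from the database, so this is somewhat heavier than the disconnect/reconnect that was originally attempted.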

Related

Multithreading Exception : "Illegal attempt to associate a collection with two open sessions"?

I was trying to implement proper multithreading for save and update operations in Hibernate. Each of my threads uses one newly created session with its own transaction.
I always get an "Illegal attempt to associate a collection with two open sessions" exception because I'm using multiple sessions at the same time (I also tested it with a shared session, which did not work either). The problem is that I need multiple sessions; that is how I was told Hibernate multithreading works.
This code basically gets executed on multiple threads at the "same" time.
var session = database.openSession(); // sessionFactory.openSession()
session.beginTransaction();
try {
    // Save my entities
    for (var identity : identities)
        session.update(identity);
    session.flush();
    session.clear();
    session.getTransaction().commit();
} catch (Exception e) {
    GameExtension.getInstance().trace("Save");
    GameExtension.getInstance().trace(e.getMessage());
    session.getTransaction().rollback();
}
session.close();
So how do I solve this issue? How do I keep multiple sessions and prevent that exception at the same time?
The issue you are facing is that you load an entity with a collection in session 1, and then this object is somehow passed to session 2 while it is still associated with the open session 1. That simply does not work, even in a single-threaded environment.
Either detach the object from session 1 or reload it in session 2, as sketched below.
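A minimal sketch of both options, based on the code from the question (the Identity type and the getId accessor are assumptions for illustration):

// Option 1: detach the entity from session 1 before another thread touches it
session1.evict(identity);        // identity and its collections are now detached
// ... hand identity over to the thread that owns session2 ...
session2.update(identity);       // re-attach it to session 2

// Option 2: pass only the identifier and reload the entity inside session 2
var id = identity.getId();
var reloaded = session2.get(Identity.class, id);  // loaded by, and owned by, session 2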

Using Galera MariaDB with JDBC

I've created a MariaDB cluster, and I'm trying to get a Java application to fail over to another host when one of them dies.
I've created an application that creates a connection with "jdbc:mysql:sequential://host1,host2,host3/database?socketTimeout=2000&autoReconnect=true". The application makes a query in a loop every second. If I kill the node where the application is currently executing the query (Statement.executeQuery()), I get a SQLException because of a timeout. I can catch the exception and re-execute the statement, and I see that the request is sent to another server, so failover works in that case. But I was expecting that executeQuery() would not throw an exception and would silently retry on another server automatically.
Am I wrong in assuming that I shouldn't have to handle an exception and explicitly retry the query? Is there something more I need to configure for that to happen?
It is dangerous to auto reconnect for the following reason. Let's say you have this code:
BEGIN;
SELECT ... FROM tbl WHERE ... FOR UPDATE;
(line 3)
UPDATE tbl ... WHERE ...;
COMMIT;
Now let's say the server crashes at (line 3). The transaction will be rolled back. In my fabricated example, that only involves releasing the lock on tbl.
Now let's say that some other connection succeeds in performing the same transaction on the same row while you are auto-reconnecting.
Now, with auto-reconnect, the first thread is oblivious to the fact that the first half of the transaction was rolled back, and it proceeds to do the UPDATE based on data that is now out of date.
You need to get an exception so that you can go back to the BEGIN and stay "transaction safe".
You need this anyway: with Galera, even with no crashes, a similar thing can happen. Two threads perform that transaction on two different nodes at the same time; each succeeds until it gets to the COMMIT, at which point the Galera magic happens and one of the COMMITs is told to fail. The 'right' response is to replay the entire transaction on the server that was chosen for failure.
Note that Galera, unlike non-Galera, requires checking for errors on COMMIT.
More Galera tips (aimed at devs and dbas migrating from non-Galera)
Failover doesn't mean that the application doesn't have to handle exceptions.
The driver will try to reconnect to another server when the connection is lost.
If the driver fails to reconnect to another server, a SQLNonTransientConnectionException will be thrown; pools will automatically discard those connections.
If the connection is recovered, there are some marginal cases where relaunching the query is safe: when the query is not in a transaction and the connection is currently in read-only mode (using Spring @Transactional(readOnly = true), for example). In those cases the MariaDB Java connector will relaunch the query automatically, no exception is thrown, and failover is transparent.
The driver cannot re-execute the current query during a transaction.
Even without a transaction, if the query is an UPDATE command, the driver cannot know whether the last request was received by the database server and executed.
The driver will then throw an SQLException (with an SQLState beginning with "25" = INVALID_TRANSACTION_STATE), and it's up to the application to handle those cases.
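A hedged sketch of such a replay loop in plain JDBC (the dataSource, table, SQL, and retry limit are illustrative, not from the question):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

static void runTransactionWithReplay(DataSource dataSource, long id) throws SQLException {
    final int maxRetries = 3;
    for (int attempt = 1; ; attempt++) {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);                          // BEGIN
            try (PreparedStatement sel = con.prepareStatement(
                    "SELECT qty FROM tbl WHERE id = ? FOR UPDATE")) {
                sel.setLong(1, id);
                sel.executeQuery();
                // ... compute new values from the locked row ...
            }
            try (PreparedStatement upd = con.prepareStatement(
                    "UPDATE tbl SET qty = qty - 1 WHERE id = ?")) {
                upd.setLong(1, id);
                upd.executeUpdate();
            }
            con.commit();                                      // may be told to fail under Galera
            return;                                            // success
        } catch (SQLException e) {
            String state = e.getSQLState();
            boolean replayable = state != null
                    && (state.startsWith("25")                 // invalid transaction state
                        || state.startsWith("08"));            // connection exception
            if (!replayable || attempt >= maxRetries) {
                throw e;                                       // give up, let the caller decide
            }
            // otherwise loop: replay the entire transaction from BEGIN
        }
    }
}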

How can I check if the db is locking between transactions

I am using Hibernate to implement the DAO layer (Sybase DB) in a web application running on JBoss 5.
The problem I am facing is that when the client/UI makes multiple simultaneous HTTP calls, which in turn call the DAO insert method, there is a race condition that causes both calls to run the DAO insert method at nearly the same time. What I actually want is:
The 1st request calls the DAO method
1st request reads the current db value
Check if new data is valid based on current db value
If valid, insert the new value
AND then the 2nd request reads the current db value
Check if new data is valid
If valid, insert the value...and so on if there are more calls
My DAO layer code looks like so:
@Override
@Transactional
public Set<PartyData> insertPartyData(final Set<PartyData> pData) throws DataServiceException
{
    sessionFactory.getCurrentSession().getTransaction().begin();
    // code to read the current db value
    // validation code to check if the new value can be inserted based on what's currently in db
    sessionFactory.getCurrentSession().save(pData);
    sessionFactory.getCurrentSession().getTransaction().commit();
    return pData;
}
Question?
How can I make sure that the db locks the table for the duration of one transaction so that any other request waits until the previous transaction is complete?
While locking the table would work, locking an entire table just doesn't seem right. There must be other ways to accomplish what you're trying to do, such as unique constraints.
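A hedged sketch of the unique-constraint approach (the table and column names are illustrative, not from the question): declare the invariant in the schema and let the database reject the duplicate insert, instead of serializing all requests.

import javax.persistence.*;

@Entity
@Table(name = "party_data",
       uniqueConstraints = @UniqueConstraint(columnNames = { "party_id", "valid_from" }))
public class PartyData {
    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "party_id", nullable = false)
    private Long partyId;

    @Column(name = "valid_from", nullable = false)
    private java.util.Date validFrom;

    // getters and setters omitted
}

With this in place, the second of two racing inserts fails with a ConstraintViolationException, which the DAO can catch and translate into a validation error, with no table lock required.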
IMO this is best done at the container level rather than the application level (I don't know whether it could even be done at the application level).
Based on this article, you can do what you want by configuring two attributes on the Connector.
Set maxThreads to 1. (maxThreads: The maximum number of request processing threads to be created by this connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200.)
This will make sure that exactly one request runs at a time.
Increase the acceptCount. (acceptCount: The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 10.)
This should be set relatively high so that you do not refuse connections to your service, but rather queue them until another request finishes execution. A sketch of the resulting Connector configuration follows.
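A hedged sketch of what that looks like in Tomcat's server.xml (the port and protocol are illustrative):

<!-- illustrative values; see the article above for the attribute definitions -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="1"
           acceptCount="100" />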
* ANSWER *
OK, I have tried the solution below, which seems to have worked so far.
What seems to happen is that each request creates a new session, and that session runs in its own transaction. So no two requests seem to interfere with each other, and each transaction runs in one go.
I am not an expert in Hibernate, so please correct me if this is not the right way to do it.
@Override
@Transactional
public Set<PartyData> insertPartyData(final Set<PartyData> pData) throws DataServiceException
{
    final Session session = sessionFactory.openSession();
    Transaction tx = null;
    try {
        tx = session.beginTransaction();
        // read the current db value and do the validation with the new data
        // (throw an exception if validation fails, else continue)
        session.save(pData);
        tx.commit();
    }
    catch (final Exception e) {
        if (tx != null) {
            tx.rollback();
        }
        throw new DataServiceException(e);
    }
    finally {
        session.close();
    }
    return pData;
}

Hibernate check connection to database in Thread ( every time period )

What is the best/optimal way to monitor the connection to the database?
I'm writing a Swing application. What I want is to check the connection with the database at regular intervals. I've tried something like this:
org.hibernate.Session session = null;
try {
    System.out.println("Check session!");
    session = HibernateUtil.getSessionFactory().openSession();
} catch (HibernateException ex) {
    // connection problem? (the exception is silently swallowed here)
} finally {
    if (session != null) {
        session.close();
    }
}
But that doesn't work.
The second question that comes to mind is how this session closing will affect other queries.
Use a connection pool like c3p0 or DBCP. You can configure such a pool to monitor the connections it holds: before passing a connection to Hibernate, after receiving it back, or periodically. If a connection is broken, the pool will transparently close it, discard it, and open a new one, without you noticing.
Database connection pools are better suited to multi-user, database-heavy applications where several connections are open at the same time, but I don't think a pool is overkill here. Pools work just fine when bound to a maximum of 1 connection.
UPDATE: every time you try to access the database, Hibernate asks the DataSource (connection pool) for a connection. If no active connection is available (e.g. because the database is down), this will throw an exception. If you want to know in advance when the database is unavailable (even when the user is not doing anything), you unfortunately need a background thread checking the database once in a while.
However, merely opening a session might not be enough in some configurations. You had better run some dummy, cheap query like SELECT 1 (in raw JDBC).
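A minimal sketch of such a background check, assuming the HibernateUtil class from the question (the 30-second interval and the UI-notification hooks are illustrative):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.hibernate.Session;

ScheduledExecutorService checker = Executors.newSingleThreadScheduledExecutor();
checker.scheduleAtFixedRate(() -> {
    Session session = null;
    try {
        session = HibernateUtil.getSessionFactory().openSession();
        // a cheap dummy query in raw JDBC; fails fast if the database is down
        session.doWork(connection -> {
            try (java.sql.Statement st = connection.createStatement()) {
                st.execute("SELECT 1");
            }
        });
        // database reachable: notify the UI here (illustrative hook)
    } catch (Exception e) {
        // database down: notify the UI here (illustrative hook)
    } finally {
        if (session != null) {
            session.close();
        }
    }
}, 0, 30, TimeUnit.SECONDS);

Because each check opens its own short-lived session, closing it does not affect queries running on other sessions.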

How to write a fallback mechanism that saves entities that weren't successfully saved the first time

I am trying to write a mechanism that will manage my save-to-DB operation.
I send the server a list of objects; it iterates over them and saves each one.
Now, if one fails for some strange reason (an exception), it is saved to another list, which a timer scans every 5 seconds, trying to re-save the objects.
I then have a locking problem, which I can solve with another boolean.
My function that saves my lost objects is:
private void saveLostDeals() {
    synchronized (unsavedDeals) {
        // iterate with an explicit Iterator so entries can be removed while
        // iterating (removing inside a for-each loop would throw a
        // ConcurrentModificationException)
        for (Iterator<DealBean> it = unsavedDeals.iterator(); it.hasNext(); ) {
            DealBean unsavedDeal = it.next();
            boolean successfullySaved = reportDeal(unsavedDeal, false);
            if (successfullySaved) {
                it.remove();
            }
        }
    }
}
And here is my reportDeal() method, which is called both for regular reports and for lost-deal reports:
try {
    ...
} catch (HibernateException e) {
    ...
    if (fallback) {
        synchronized (unsavedDeals) {
            unsavedDeals.add(deal);
        }
    }
    session.getTransaction().rollback();
} finally {
    ....
}
Now, when a lost deal is saved and an exception occurs, the synchronized block will stop it.
What do you have to say about this save-fallback mechanism? Are there better design patterns for dealing with this common issue?
I would suggest using either a proxy or aspects to handle the rollback/retry mechanism. The proxy could use something like the strategy pattern to decide what action to take.
If you don't want to retry immediately, but in, say, 5 seconds as you propose, I would consider building that into the contract of your database layer by providing asynchronous routines to begin with, something like dao.scheduleStore(o); or dao.asyncStore(o); a sketch follows below.
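A minimal sketch of what such an asynchronous contract could look like, backed by a ScheduledExecutorService (the DealBean type is from the question; the class name, the 5-second delay, and the reportDeal stub are illustrative):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class RetryingDealDao {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Store asynchronously; on failure, transparently retry in 5 seconds.
    void scheduleStore(DealBean deal) {
        scheduler.execute(() -> storeWithRetry(deal));
    }

    private void storeWithRetry(DealBean deal) {
        if (!reportDeal(deal, false)) {
            // No shared list and no synchronized block: the scheduler's queue
            // holds the unsaved deals, and a single thread serializes retries.
            scheduler.schedule(() -> storeWithRetry(deal), 5, TimeUnit.SECONDS);
        }
    }

    private boolean reportDeal(DealBean deal, boolean fallback) {
        // the question's existing save routine would be called here
        return true;
    }
}

The single-threaded scheduler replaces both the timer and the synchronized list from the question, which removes the locking problem entirely.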
It depends.
For example:
Request to save entity ---> exception occurs ---> DB connection problem ---> in the exception block, retry saving the entity to a fallback DB ---> return the response to the request
Request to save entity ---> exception occurs ---> DB connection problem ---> in the exception block, retry saving the entity to an in-memory store in the application ---> return the response to the request
Request to save entity ---> exception occurs ---> unknown exception ---> in the exception block, save the entity to an XML file store [serialize to XML] ---> return a response to the request noting that the data is temporarily saved and will be updated later
Timer ---> checks the file store for any serialized XML ---> updates the DB
Points to watch out for:
Async calls are better in such scenarios, rather than making the requesting client wait.
In the case of in-memory saving, watch the amount of data held in memory during a prolonged DB failure; it might bring down the whole application.
Transactions: decide whether you want to roll back or save their intermediate state.
Data consistency needs to be watched.
