I have the following case:
session = openSession();
tx = session.beginTransaction();
try
{
    ...
    session.saveOrUpdate(obj_1);
    ...
    session.saveOrUpdate(obj_2);
    ...
    session.saveOrUpdate(obj_3);
    session.flush();
    tx.commit();
}
catch (Exception e)
{
    tx.rollback();
}
finally
{
    session.close();
}
The first call of saveOrUpdate(obj_1) will fail due to a duplicate entry error, but the error does not actually surface until the database is accessed at the end of session.flush().
Is there a way to make this kind of error happen earlier, so I have a better chance to handle it correctly?
I also don't want to make the transaction too small, because that makes rollback hard.
My Hibernate entities are reverse-engineered from an existing database schema.
I have disabled the cache in my configuration file:
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>
EDIT:
My question is how I can get the error as early as possible, if there is one. The error itself doesn't matter to me.
Right now the error surfaces at the end, but it was actually caused at the first access.
You can execute session.flush() after each session.saveOrUpdate() to force Hibernate to execute the update or insert against your database. Without the flush call, the actual database operations are deferred and are only processed immediately before the transaction ends.
This way you don't have to change the size of your transaction.
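Applied to the asker's pseudocode, the approach looks roughly like this (a sketch; ConstraintViolationException is Hibernate's wrapper for errors such as the duplicate entry, and the variable names follow the question):

```java
Transaction tx = session.beginTransaction();
try {
    session.saveOrUpdate(obj_1);
    session.flush(); // the duplicate-entry error surfaces here, right after obj_1
    session.saveOrUpdate(obj_2);
    session.flush();
    session.saveOrUpdate(obj_3);
    session.flush();
    tx.commit(); // still one transaction: a failure anywhere rolls everything back
} catch (org.hibernate.exception.ConstraintViolationException e) {
    tx.rollback(); // nothing was committed, so the rollback undoes all three saves
} finally {
    session.close();
}
```

Flushing only sends the SQL; it does not commit, so the transaction boundary stays exactly where it was.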
So I was trying to implement proper multithreading for save and update operations in Hibernate. Each of my threads uses one newly created session with its own transaction.
I always get an "Illegal attempt to associate a collection with two open sessions" exception, because I'm using multiple sessions at the same time (I also tested it with a shared one; that did not work either). The problem here is that I need multiple sessions; that's the way I was told Hibernate multithreading would work.
This code basically gets executed on multiple threads at the "same" time.
var session = database.openSession(); // sessionFactory.openSession()
session.beginTransaction();
try {
// Save my entities
for (var identity : identities)
session.update(identity);
session.flush();
session.clear();
session.getTransaction().commit();
} catch (Exception e){
GameExtension.getInstance().trace("Save");
GameExtension.getInstance().trace(e.getMessage());
session.getTransaction().rollback();
}
session.close();
So how do I solve this issue? How do I keep multiple sessions and prevent that exception at the same time?
The issue you are facing is that you load an entity with a collection in session 1, and then this object is somehow passed to session 2 while it is still associated with the open session 1. This simply does not work, even in a single-threaded environment.
Either detach the object from session 1 or reload it in session 2.
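Both options can be sketched like this (session1, session2 and identity are illustrative names; assume identity was loaded in session1 but must be saved in session2):

```java
// Option 1: detach the entity from session1 first, then update it elsewhere.
session1.evict(identity);   // identity (and its collections) is now detached
session2.update(identity);  // session2 can safely re-associate it

// Option 2: don't move the managed instance at all; merge its state instead.
Object managed = session2.merge(identity); // returns a copy managed by session2
```

merge() leaves the original instance detached and gives you back a new managed copy, which is usually the safer choice when objects cross thread boundaries.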
My hibernate config:
Properties properties = new Properties();
properties.put("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
properties.put("hibernate.hbm2ddl.auto", "validate");
properties.put("hibernate.show_sql", "true");
properties.put("hibernate.id.new_generator_mappings", "false");
properties.put("hibernate.connection.autocommit", "true");
properties.put("hibernate.connection.driver_class", "com.mysql.jdbc.Driver");
properties.put("hibernate.connection.url", DBConnection.url);
properties.put("hibernate.connection.username", DBConnection.username);
properties.put("hibernate.connection.password", DBConnection.password);
Code example:
// pattern 1
Session s = sessionFactory.openSession();
ObjectA A = s.load(ObjectA.class, pk);
A.setAttr("abc");
s.update(A);
s.close();
// pattern 2
Session s = sessionFactory.openSession();
s.beginTransaction();
ObjectA A = s.load(ObjectA.class, pk);
A.setAttr("abc");
s.update(A);
s.close();
// pattern 3
Session s = sessionFactory.openSession();
Transaction tx = s.beginTransaction();
ObjectA A = s.load(ObjectA.class, pk);
A.setAttr("abc");
s.update(A);
tx.commit();
s.close();
Please ignore my compilation errors. I am using Hibernate in a web application (without Spring) and without using transactions, because I am using a MySQL database and MySQL autocommit is true; in turn, in Hibernate, I set autocommit to true as well. (I am using mysql-connector-java-5.1.23-bin.jar too.)
Of the three patterns, I am only able to get pattern 3 to work. I am totally confused now. I have a few questions:
1) I can't understand why pattern 1 is not working: all my selects (via Hibernate CriteriaBuilder or load) and inserts (via session.save) work, but only updates don't.
2) OK, then I tried using a transaction as in pattern 2. My Hibernate autocommit is true, so I assumed that when I close the session the transaction would auto-commit, but it doesn't work. Why?
3) Pattern 3 works, but why do I need transaction management here? I want JDBC to execute each single query in its own transaction (one SQL statement per transaction). I don't worry about performance, but I still have to include a transaction here. Why?
For patterns 1 and 2, I found that the update statement is not even generated (based on the Hibernate log), so the problem is not that the statement is generated and the commit fails. I don't understand why. Please help...
PS:
Just to wrap up some points for future reference after some trial and error:
1) Hibernate only generates the SQL statements when session.flush() is called, not on tx.commit(), and session.flush() has to be called inside a transaction block; without a transaction it leads to an exception. An explicit flush is not needed if the flush mode is auto, since commit() triggers a flush.
2) A Hibernate Transaction is not equivalent to a database transaction. After some tries I found that if Hibernate autocommit is false, they are functionally equivalent and a corresponding begin-transaction statement is generated and sent to the database via JDBC (my guess only). If Hibernate autocommit is true, no transaction is actually started even though we declare it with Transaction tx = s.beginTransaction(); every query is auto-committed and rollback does not work.
3) The reason session.save() (and also selects) work without a Transaction in my case is a bit special: save() has to be executed immediately in order to obtain the table identifier (primary key), so its SQL is generated even without a flush.
4) For pattern 2, I misunderstood: autocommit doesn't mean auto-commit upon session close; its true meaning is auto-commit as each SQL statement reaches the database. So pattern 2 does not work because there is no tx.commit(), meaning there is no flush, so no SQL is generated. (Whether tx.commit() is called automatically upon session.close() depends on the vendor implementation; some will roll back instead.)
Conclusion: a Transaction block is needed in Hibernate no matter what.
I think you have a bit of confusion. The Hibernate Transaction (org.hibernate.Transaction) is not exactly a DB transaction.
Such objects are used by Hibernate when you flush the session (Session.flush()) to bundle the statements into a single DB transaction. In other words, do not confuse the Hibernate Session with a DB session; likewise, do not confuse the Hibernate Session with a DB connection.
Most importantly, by specification Hibernate generates SQL only for work enclosed in a Hibernate transaction. That's why patterns 1 and 2 don't work and don't generate SQL. More specifically, the auto-commit in pattern 2 has no influence, since the SQL is never generated. Moreover, according to Hibernate best practices, you should remember to open and close a transaction even for a simple select instruction. A select may work without a transaction, but you may run into trouble.
To better understand the concept, we can summarize the architecture:
hibernate session: a container which holds your Hibernate objects and your DB operations as Java objects, among many other things.
hibernate transaction: a transaction object referring to a Hibernate session.
db connection: your connection to the DB.
connection pool: a set of DB connections.
What happens when a session is flushed can be summarized in the following steps:
for each committed transaction in your session, a DB connection is taken from the pool,
the SQL commands are generated and sent to the DB,
and the DB connection is put back in the pool.
This is just a small recap, but I hope it helps.
I've created a mariadb cluster and I'm trying to get a Java application to be able to failover to another host when one of them dies.
I've created an application that creates a connection with "jdbc:mysql:sequential://host1,host2,host3/database?socketTimeout=2000&autoReconnect=true". The application makes a query in a loop every second. If I kill the node where the application is currently executing the query (Statement.executeQuery()) I get a SQLException because of a timeout. I can catch the exception and re-execute the statement and I see that the request is being sent to another server, so failover in that case works ok. But I was expecting that executeQuery() would not throw an exception and silently retry another server automatically.
Am I wrong in assuming that I shouldn't have to handle an exception and explicitly retry the query? Is there something more I need to configure for that to happen?
It is dangerous to auto reconnect for the following reason. Let's say you have this code:
BEGIN;
SELECT ... FROM tbl WHERE ... FOR UPDATE;
(line 3)
UPDATE tbl ... WHERE ...;
COMMIT;
Now let's say the server crashes at (line 3). The transaction will be rolled back. In my fabricated example, that only involves releasing the lock on tbl.
Now let's say that some other connection succeeds in performing the same transaction on the same row while you are auto-reconnecting.
Now, with auto-reconnect, the first thread is oblivious that the first half of the transaction was rolled back and proceeds to do the UPDATE based on data that is now out of date.
You need to get an exception so that you can go back to the BEGIN so that you can be "transaction safe".
You need this anyway. With Galera, and no crashes, a similar thing could happen: two threads performing that transaction on two different nodes at the same time... Each succeeds until it gets to the COMMIT, at which point the Galera magic happens and one of the COMMITs is told to fail. The 'right' response is to replay the entire transaction on the node whose COMMIT was chosen to fail.
Note that Galera, unlike non-Galera, requires checking for errors on COMMIT.
More Galera tips (aimed at devs and dbas migrating from non-Galera)
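The replay-on-commit-failure pattern described above can be sketched generically. The class and method names here are illustrative, not part of any driver API; the body is expected to begin and commit its own transaction on every attempt:

```java
import java.util.function.Supplier;

// Illustrative helper: runs the *whole* transaction body again when it fails,
// e.g. when Galera rejects one of two conflicting COMMITs.
final class TransactionReplay {
    static <T> T withReplay(int maxAttempts, Supplier<T> transactionBody) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // Each call runs BEGIN ... COMMIT from scratch.
                return transactionBody.get();
            } catch (RuntimeException e) {
                last = e; // COMMIT (or anything before it) failed: replay from BEGIN
            }
        }
        throw last;
    }
}
```

Because each retry re-runs the body from BEGIN, the SELECT ... FOR UPDATE is re-executed too, so the UPDATE is never based on stale data.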
Failover doesn't mean the application doesn't have to handle exceptions.
The driver will try to reconnect to another server when the connection is lost.
If the driver fails to reconnect to another server, a SQLNonTransientConnectionException is thrown; pools will automatically discard those connections.
If the connection is recovered, there are some marginal cases where relaunching the query is safe: when the query is not in a transaction and the connection is currently in read-only mode (using Spring @Transactional(readOnly = true), for example). In those cases the MariaDB Java connector will relaunch the query automatically; no exception is thrown and the failover is transparent.
The driver cannot re-execute the current query during a transaction.
Even without a transaction, if the query is an UPDATE command, the driver cannot know whether the last request was received by the database server and executed.
The driver will then throw a SQLException (with a SQLState beginning with "25" = INVALID_TRANSACTION_STATE), and it is up to the application to handle those cases.
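That application-side check can be sketched as a small helper (the class and method names are illustrative; SQLState class "25" is the standard invalid-transaction-state class):

```java
import java.sql.SQLException;

// Sketch: classify a SQLException by its SQLState class. Class "25"
// (invalid transaction state) means the driver could not safely replay
// the statement, so the application must decide how to recover.
final class FailoverErrors {
    static boolean isInvalidTransactionState(SQLException e) {
        String state = e.getSQLState();
        return state != null && state.startsWith("25");
    }
}
```

On a true result the application would typically replay the whole transaction, as in the Galera answer above; other SQLState classes (e.g. "08" connection errors) need different handling.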
I am attempting to upgrade our code base from Hibernate 3.6 to 4.0 (just as a first step toward getting us more up to date). I am hitting an issue where a COMMIT is not being issued even though we call commit() after making sure the transaction isActive().
When running some queries successfully against the Postgres database, I see this in the Postgres logs:
2014-12-30 20:09:39 CST LOG: execute <unnamed>: SET extra_float_digits=3
2014-12-30 20:09:39 CST LOG: execute S_1: BEGIN
2014-12-30 20:09:39 CST LOG: execute <unnamed>: -- This script........
Notice the BEGIN there, and then here's an example of the simple commit call:
if (sf.getCurrentSession().getTransaction().isActive()) {
sf.getCurrentSession().getTransaction().commit();
}
I have debugged down into AbstractTransactionImpl and am looking at a Jdbc4Connection commit() method, and see that the actual COMMIT call is being skipped.....but I don't know why. Here's the if statement that this is failing on (it is in AbstractJdbc2Connection).
if (protoConnection.getTransactionState() != ProtocolConnection.TRANSACTION_IDLE)
executeTransactionCommand(commitQuery);
So, apparently our transactionstate is == ProtocolConnection.TRANSACTION_IDLE. However, I'm not entirely sure what that entails, and why are we getting this issue when the transaction says it isActive()?
NOTE: this same exact code worked for us on Hibernate 3.6.
Thanks.
UPDATE: I debugged further, and it looks like there are many ProtocolConnectionImpl objects being constructed. Does that indicate a problem on our software side, that it is doing something it shouldn't? Like, does it indicate that we're opening connections that are just hanging around? Thanks for any more insight.
So it turns out I was doing something incorrect (surprise). There's a good reason that the TransactionState was showing as TRANSACTION_IDLE and not issuing a commit.
I had seen in multiple places on the internetz where people were getting access to a JDBC connection in the new Hibernate 4.x land by getting access to the underlying ConnectionProvider (sort of like this: https://stackoverflow.com/a/21271019/115694).
However, the problem with doing that is that you are getting a BRAND NEW connection....instead of the connection associated with the current session. This is a pretty big no-no, or at least for us it was.
You do have to in fact do what the person says in this answer:
https://stackoverflow.com/a/3526626/115694
Basically, you MUST use the Work API.
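A minimal sketch of the Work approach (the sf variable follows the question; the SQL is a placeholder for whatever script was being run):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import org.hibernate.Session;
import org.hibernate.jdbc.Work;

Session session = sf.getCurrentSession();
session.doWork(new Work() {
    @Override
    public void execute(Connection connection) throws SQLException {
        // This is the connection bound to the *current* session, not a fresh
        // one from the ConnectionProvider, so the statement joins the
        // session's transaction and is committed by its commit().
        try (Statement st = connection.createStatement()) {
            st.execute(sql); // 'sql' stands for the script you are running
        }
    }
});
```

Because the work runs on the session's own connection, the transaction state the session tracks matches what the driver sees, and commit() is no longer skipped as TRANSACTION_IDLE.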
I have a method that returns the entity manager for a particular DB. When I use the method for the first time to get an entity manager, everything works fine: I can save data into any of the tables A, B, C using the entity manager. Now say I get an exception while saving into table B.
When I then try to perform any operation on the DB after getting the exception above, the next time I run the same code it fails when updating table A itself. I can see the following exception:
<openjpa-1.2.2-SNAPSHOT-r422266:778978M-OPENJPA-975 nonfatal user error> org.apache.openjpa.persistence.InvalidStateException: The factory has been closed. The stack trace at which the factory was closed is available if Runtime=TRACE logging is enabled.
at org.apache.openjpa.kernel.AbstractBrokerFactory.assertOpen(AbstractBrokerFactory.java:673)
at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:182)
at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:142)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:192)
at ..
Somewhere in your code you (or a framework) close your EntityManagerFactory. Make sure you didn't call close() on it.
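A common shape for avoiding this is to keep one long-lived factory and close only the EntityManager instances it creates (a sketch assuming the standard JPA bootstrap; the unit name is illustrative):

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

final class PersistenceUnit {
    // One factory for the whole application; closing it invalidates
    // every EntityManager it has created.
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("myUnit"); // illustrative unit name

    static EntityManager newEntityManager() {
        return EMF.createEntityManager(); // close *this* per unit of work, not EMF
    }

    // Call only at application shutdown, never in per-request error handling.
    static void shutdown() {
        if (EMF.isOpen()) EMF.close();
    }
}
```

If an exception handler somewhere calls close() on the factory instead of the EntityManager, every later createEntityManager() fails with exactly the "factory has been closed" error shown above.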