I have a question regarding the @Transactional annotation.
Nothing special is defined, so as I understand it the propagation is PROPAGATION_REQUIRED.
Let's say I have the @Transactional annotation on both the service and the DAO layer.
Service
@Transactional
public long createStudentInDB(Student student) {
    final long id = addStudentToDB(student);
    addStudentToCourses(id, student.getCourseIds());
    return id;
}

private long addStudentToDB(Student student) {
    StudentEntity entity = new StudentEntity();
    convertToEntity(student, entity);
    long id = 0;
    try {
        id = dao.create(entity);
    } catch (Exception e) {
        //
    }
    return id;
}

private void addStudentToCourses(long studentId, List<String> courseIds) {
    // add the student to the given courses
    if (courseIds != null) {
        List<StudentCourseEntity> studentCourses = new ArrayList<>();
        for (String courseId : courseIds) {
            StudentCourseEntity entity = new StudentCourseEntity();
            entity.setCourseId(courseId);
            entity.setStudentId(studentId);
            studentCourses.add(entity);
        }
        anotherDao.saveAll(studentCourses);
    }
}
DAO
@Transactional
public long create(StudentEntity entity) {
    if (entity == null) {
        throw new IllegalArgumentException(); // …
    }
    getCurrentSession().save(entity);
    return entity.getId();
}
ANOTHER DAO:
@Transactional
public void saveAll(Collection<StudentCourseEntity> studentCourses) {
    if (studentCourses != null) {
        for (StudentCourseEntity studentCourse : studentCourses) {
            if (studentCourse != null) {
                save(studentCourse);
            }
        }
    }
}
Leaving aside the fact that this is not optimal, it seems to be causing deadlocks.
Let’s say I have max 2 connections to the database.
And I am using 3 different threads to run the same code.
Thread-1 and thread-2 receive a connection, thread-3 is not getting any connection.
More than that, it seems that thread-1 becomes stuck when trying to get a connection at the DAO level, and the same happens to thread-2, causing a deadlock.
I was sure that by using PROPAGATION_REQUIRED this would not happen.
Am I missing something?
What's the recommendation for something like this? Is there a way I can have @Transactional on both layers? If not, which layer is preferred?
Thanks
Fabrizio
As the DAO method is expected to be invoked from within other transactions, I would suggest configuring it as:
@Transactional(propagation = Propagation.REQUIRES_NEW)
Thanks to that, the transaction which invokes this method will be suspended until the one started with REQUIRES_NEW has finished.
I am not sure whether this is the fix for your particular deadlock, but your example fits this set-up.
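For illustration only, a minimal sketch of that suggestion applied to the create method from the question (assuming Spring's org.springframework.transaction.annotation.Transactional and Propagation are imported; the surrounding class and session handling are left out):

// The DAO method is forced into its own physical transaction; the caller's
// transaction is suspended while this one runs and commits independently.
@Transactional(propagation = Propagation.REQUIRES_NEW)
public long create(StudentEntity entity) {
    if (entity == null) {
        throw new IllegalArgumentException();
    }
    getCurrentSession().save(entity);
    return entity.getId();
}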
You are right, Propagation.REQUIRED is the default. But that also means that the second (nested) invocation on the DAO joins/reuses the transaction created at the service level. So there is no need to create another transaction for the nested call.
In general, Spring (when used at a high level) should manage resource handling by forwarding it to the underlying ORM layer:
The preferred approach is to use Spring's highest level template based persistence integration APIs or to use native ORM APIs with transaction-aware factory beans or proxies for managing the native resource factories. These transaction-aware solutions internally handle resource creation and reuse, cleanup, optional transaction synchronization of the resources, and exception mapping. Thus user data access code does not have to address these tasks, but can be focused purely on non-boilerplate persistence logic.
Even if you handle it on your own (on low level API usage) the connections should be reused:
When you want the application code to deal directly with the resource types of the native persistence APIs, you use these classes to ensure that proper Spring Framework-managed instances are obtained, transactions are (optionally) synchronized, and exceptions that occur in the process are properly mapped to a consistent API.
...
If an existing transaction already has a connection synchronized (linked) to it, that instance is returned. Otherwise, the method call triggers the creation of a new connection, which is (optionally) synchronized to any existing transaction, and made available for subsequent reuse in that same transaction.
Source
Maybe you have to find out what is happening down there.
Each Session / Unit of Work will be bound to a thread and released (together with the assigned connection) after the transaction has ended. Of course when your thread gets stuck it won't release the connection.
Are you sure that this 'deadlock' is caused by the nesting? Maybe it has another reason. Do you have some test code for this example, or a thread dump or something?
@Transactional works by keeping ThreadLocal state, which can be accessed by the (Spring-managed) proxy EntityManager. If you are using Propagation.REQUIRED (the default), and you have a non-transactional method which calls two different DAOs (or two transactional methods on the same DAO), you will get two transactions and two calls to acquire a pooled connection. You may get the same connection twice or two different connections, but you should only use one connection at a time.
If you call two DAOs from a @Transactional method, there will only be one transaction, as the DAOs will find and join the existing transaction held in the ThreadLocal state; again, you only need one connection from the pool.
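A minimal sketch of the two situations just described (Spring imports omitted; the DAO types and method names here are made up for illustration, each DAO assumed to have @Transactional methods of its own):

@Service
public class EnrollmentService {

    @Autowired
    private StudentDao studentDao;   // hypothetical DAO beans
    @Autowired
    private CourseDao courseDao;

    // No annotation here: each DAO call below runs in its own transaction
    // and asks the pool for a connection, one after the other.
    public void withoutOuterTransaction(long studentId) {
        studentDao.loadStudent(studentId);
        courseDao.loadCourses(studentId);
    }

    // With REQUIRED on the outer method, both DAO calls join this single
    // transaction and reuse the one connection bound to the current thread.
    @Transactional
    public void withOuterTransaction(long studentId) {
        studentDao.loadStudent(studentId);
        courseDao.loadCourses(studentId);
    }
}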
If you get a deadlock then something is very wrong, and you may want to debug when your connections and transactions are created. A transaction is started by calling Connection.setAutoCommit(false); in Hibernate this happens in org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor#begin(). Connections are managed by a class extending org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor, so these are good places to put a breakpoint and trace the call stack back to your code to see which lines created connections.
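If you want to see from your own code when a physical connection is actually handed out, one low-tech option (just a sketch; dedicated proxy libraries exist for this) is to wrap the pool's DataSource in Spring's DelegatingDataSource and break or dump the stack in getConnection():

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DelegatingDataSource;

public class TracingDataSource extends DelegatingDataSource {

    public TracingDataSource(DataSource target) {
        super(target);
    }

    @Override
    public Connection getConnection() throws SQLException {
        // Put a breakpoint here, or dump the stack, to see which caller
        // triggered the acquisition of a pooled connection.
        new Exception("connection acquired by").printStackTrace();
        return super.getConnection();
    }
}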
Related
I have a Spring Boot project where I have a service bean with 2 @Transactional annotated methods.
These methods do read-only JPA (Hibernate) actions to fetch data from an HSQL file database, using both JPA repositories and lazily loaded getters in entities.
I also have a CLI bean that handles commands (using PicoCLI). From one of these commands I try to call both @Transactional annotated methods, but I get the following error during execution of the second method:
org.hibernate.LazyInitializationException - could not initialize proxy - no Session
at org.hibernate.collection.internal.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:602)
at org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:217)
at org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:581)
at org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:148)
at org.hibernate.collection.internal.PersistentSet.iterator(PersistentSet.java:188)
at java.util.Spliterators$IteratorSpliterator.estimateSize(Spliterators.java:1821)
at java.util.Spliterator.getExactSizeIfKnown(Spliterator.java:408)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
at <mypackage>.SomeImpl.getThings(SomeImpl.java:<linenr>)
...
If I mark the method that calls both @Transactional annotated methods with @Transactional itself, the code seems to work (due to there now being only one top-level transaction, I presume?).
I just want to find out why I cannot start multiple transactions in a single session, or why the second transaction doesn't start a new session if there is none.
So my questions are:
Does this have to do with how Hibernate starts a session, how transactions close sessions, or anything related to the HSQL database?
Is adding an encompassing transaction the right way to fix the issue,
or is this just fighting the symptom?
What would be the best way to be able to use multiple @Transactional annotated methods from one method?
EDIT: I want to make clear that I don't expose the entities outside of the transactional methods, so on the surface it looks to me like the 2 transactional methods should work independently from one another.
EDIT2: For more clarification: the transactional methods need to be available in an API, and the user of the API should be able to call multiple of these transactional methods without needing to use transactional annotations and without getting the LazyInitializationException.
Api:
public interface SomeApi {
    List<String> getSomeList();
    List<Something> getThings(String somethingGroupName);
}
Implementation:
public class SomeImpl implements SomeApi {

    @Transactional
    public List<String> getSomeList() {
        return ...; // Do JPA stuff to get the list
    }

    @Transactional
    public List<Something> getThings(String somethingGroupName) {
        return ...; // Do other JPA stuff to get the result from the group name
    }
}
Usage by 3rd party (who might not know what transactionality is):
public void someMethod(String somethingGroupName) {
    ...
    SomeApi someApi = ...; // Get an implementation of the api in some way
    List<String> someList = someApi.getSomeList();
    if (someList.contains(somethingGroupName)) {
        System.out.println(someApi.getThings(somethingGroupName));
    }
    ...
}
It seems that you are accessing data in your entities that has not been initialized, after the transactions have ended. In that case, the persistence provider may throw the LazyInitializationException.
If you need to retrieve some information not eagerly loaded with the entities, you may use one of two strategies:
annotate the calling method also with the @Transactional annotation, as you did: it does not start a new transaction for each call, but keeps the opened transaction active until your calling method ends, avoiding the exception; or
make the called methods eagerly load the required fields using the JOIN FETCH JPQL idiom (a sketch follows below).
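For the second strategy, a hedged sketch of what such a fetch could look like, assuming a Spring Data repository for the Something entity and an s.items lazy collection (both of which are assumptions, not the asker's real mapping; the Long id type is assumed as well):

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface SomethingRepository extends JpaRepository<Something, Long> {

    // JOIN FETCH initializes the lazy collection inside the query itself,
    // so the returned entities can be used after the transaction has ended.
    @Query("select distinct s from Something s "
         + "join fetch s.items "
         + "where s.groupName = :groupName")
    List<Something> findByGroupNameWithItems(@Param("groupName") String groupName);
}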
Defining transaction boundaries requires some analysis of your scenario. Please read this answer and look for good books or tutorials to master the subject; probably only you will be able to define your requirements aptly.
I found that Hibernate, out of the box, doesn't reopen a session and therefore doesn't enable lazy loading after the first transaction has ended, whether or not subsequent JPA statements are in a transaction. There is, however, a property in Hibernate to enable this feature:
spring:
  jpa:
    properties:
      hibernate.enable_lazy_load_no_trans: true
This will make sure that if there is no session, a temporary session will be created. I believe that it will also prevent a session from ending after a transaction, but I don't know this for sure.
Partial credit goes to the following answers from other StackOverflow questions:
http://stackoverflow.com/a/32046337/2877358
https://stackoverflow.com/a/11913404/2877358
WARNING: In hibernate 4.1.8 there is a bug that could lead to loss of data! Make sure that you are using 4.2.12, 4.3.5 or newer versions of hibernate. See: https://hibernate.atlassian.net/browse/HHH-7971.
I am trying to develop a forum system as part of a university assignment,
which contains a database connectivity part.
I am writing the server in Java and decided to use Hibernate as an ORM tool to save and load from the database.
My question is not about the syntax, but about the design of the entire system regarding the database.
Who should be in charge of creating sessions and committing transactions? Should I make a singleton which receives an object and saves it to the database, and use this singleton in every setter/constructor of the different classes?
Should each change to an in-memory object be committed directly, or should I commit only every few changes? (And if so, what is the best way to do this?)
Ideally, a framework should do that for you. Using Spring, for example, you could simply mark methods of Spring beans as transactional using an annotation, and the session and transaction handling would be done by Spring:
@Autowired
private SessionFactory sessionFactory;

@Transactional
public void foo() {
    Session session = sessionFactory.getCurrentSession();
    // do some work with the session
}
instead of
private SessionFactory sessionFactory;

public void foo() {
    Session sess = sessionFactory.openSession();
    Transaction tx = null;
    try {
        tx = sess.beginTransaction();
        // do some work with the session
        tx.commit();
    }
    catch (RuntimeException e) {
        if (tx != null) {
            tx.rollback();
        }
        throw e;
    }
    finally {
        sess.close();
    }
}
You should neither commit after each change, nor commit every few changes. You should commit atomic, coherent changes. That's the whole point of transactions: being able to go from one coherent state of your database to another.
For example, if posting a message to a forum topic consists of
persisting a message instance
incrementing the messageCount field of the forum
creating a notification for the topic poster
then these three changes should be in a single transaction, to make sure you don't end up with the message being persisted but the messageCount not being incremented.
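As an illustration only, here is a rough sketch of how those three changes might be grouped into one transactional service method (the DAO, entity and method names are invented placeholders, and Spring imports are omitted):

@Service
public class ForumService {

    @Autowired
    private MessageDao messageDao;          // hypothetical DAOs
    @Autowired
    private ForumDao forumDao;
    @Autowired
    private NotificationDao notificationDao;

    // All three changes commit together or roll back together.
    @Transactional
    public void postMessage(long forumId, long topicPosterId, Message message) {
        messageDao.save(message);                                        // 1. persist the message
        forumDao.incrementMessageCount(forumId);                         // 2. bump the counter
        notificationDao.save(new Notification(topicPosterId, message));  // 3. notify the poster
    }
}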
I suggest you leave creating and committing transactions to the container, unless you are sure you want to do it yourself. If you leave it to the container, you can use JTA and Container Managed Transactions (CMT). Also remember to avoid the entitymanager-per-operation antipattern. To learn more, read about Transactions and Concurrency and decide for yourself what suits you best.
Take a look at the DAO (Data Access Object) pattern. It provides a pattern that will help you encapsulate your data access.
http://en.wikipedia.org/wiki/Data_access_object
Also, this page does a great job:
Data access object (DAO) in Java
As for the second question, the answer entirely depends on your requirements. Usually you want to commit at least once per user interaction. That might be the result of several in-memory changes. It is definitely easier to commit after each entity is changed.
I am using the following code:
public class TestDAO {

    Session session = null;

    public TestDAO() {
        this.session = HibernateUtil.getSessionFactory().getCurrentSession();
    }

    // ...more code: create, update ...
    // each method starts a transaction using "tx = session.beginTransaction();"
}
1) Now, should I commit the transaction using tx.commit() for a fetch operation too, or only for save/update operations?
2) Should I create a separate instance of TestDAO every time I need one, or should I create a singleton class that returns a single DAO instance every time? Will this cause a problem?
You don't need tx.commit() for a fetch operation; that is only needed for save, update or delete. Close the session after fetching the data.
If your application connects to only one database, then using a single DAO is better. The Spring framework encourages this. You will find more details at the following link:
Don't repeat the DAO!
Transactions should not be the responsibility of the DAO; they really need to be controlled at a higher level. A DAO should be something that runs queries and updates without being aware of the bigger picture. Calls to DAOs can be grouped within an object like a Spring service or an EJB session bean, which is responsible for deciding what needs to go together in a transaction. This makes your DAO code more reusable, since it doesn't have to know as much about the context in which it's operating.
Look at how Spring does it (in the sample applications, like the petstore, that come with Spring), or better, look at the King/Bauer Hibernate-JPA book, which has a chapter on creating DAOs.
Thanks for reading this.
I have 2 MySQL databases: a master for writes and a slave for reads. The perfect scenario I imagine is that my app uses a connection to the master for readOnly=false transactions and a connection to the slave for readOnly=true transactions.
In order to implement this I need to provide a valid connection depending on the type of the current transaction. My data service layer should not know what type of connection it uses and should just use the injected SqlMapClient (I use iBatis) directly. This means that (if I get it right) the injected SqlMapClients should be proxied and the delegate should be chosen at runtime.
public class MyDataService {

    private SqlMapClient sqlMap;

    @Autowired
    public MyDataService(SqlMapClient sqlMap) {
        this.sqlMap = sqlMap;
    }

    @Transactional(readOnly = true)
    public MyData getSomeData() {
        // an instance of sqlMap connected to the slave should be used
    }

    @Transactional(readOnly = false)
    public void saveMyData(MyData myData) {
        // an instance of sqlMap connected to the master should be used
    }
}
So the question is - how can I do this?
Thanks a lot
It's an interesting idea, but you'd have a tough job on your hands. The readOnly attribute is intended as a hint to the transaction manager and isn't really consulted anywhere meaningful. You'd have to rewrite or extend multiple Spring infrastructure classes.
So unless you're hell-bent on getting this working the way you want, your best option is almost certainly to inject two separate SqlMapClient objects into your DAO and have the methods pick the appropriate one. The @Transactional annotations would also need to indicate which transaction manager to use (assuming you're using DataSourceTransactionManager rather than JpaTransactionManager), taking care to match the transaction manager to the DataSource used by the SqlMapClient.
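A rough sketch of that suggestion, where the bean qualifiers, transaction manager names and statement ids are all assumptions rather than something from the question:

import java.sql.SQLException;
import com.ibatis.sqlmap.client.SqlMapClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.transaction.annotation.Transactional;

public class MyDataService {

    private final SqlMapClient masterSqlMap;
    private final SqlMapClient slaveSqlMap;

    @Autowired
    public MyDataService(@Qualifier("masterSqlMapClient") SqlMapClient masterSqlMap,
                         @Qualifier("slaveSqlMapClient") SqlMapClient slaveSqlMap) {
        this.masterSqlMap = masterSqlMap;
        this.slaveSqlMap = slaveSqlMap;
    }

    // Read-only work goes to the slave, using the transaction manager
    // bound to the slave DataSource.
    @Transactional(value = "slaveTxManager", readOnly = true)
    public MyData getSomeData() throws SQLException {
        return (MyData) slaveSqlMap.queryForObject("getSomeData");
    }

    // Writes go to the master, using the master's transaction manager.
    @Transactional("masterTxManager")
    public void saveMyData(MyData myData) throws SQLException {
        masterSqlMap.insert("saveMyData", myData);
    }
}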
All I know about this exception is from Spring's documentation and some forum posts with frustrated developers pasting huge stack traces, and no replies.
From Spring's documentation:
Thrown when an attempt to commit a transaction resulted in an unexpected rollback
I want to understand, once and for all:
Exactly what causes it?
Where did the rollback occur: in the app server code or in the database?
Was it caused by a specific underlying exception (e.g. something from java.sql.*)?
Is it related to Hibernate? Is it related to the Spring transaction manager (non-JTA in my case)?
How to avoid it? Is there any best practice for avoiding it?
How to debug it? It seems to be hard to reproduce; are there any proven ways to troubleshoot it?
I found this to be answering the rest of the question: https://jira.springsource.org/browse/SPR-3452
I guess we need to differentiate between 'logical' transaction scopes and 'physical' transactions here...

What PROPAGATION_REQUIRED creates is a logical transaction scope for each method that it gets applied to. Each such logical transaction scope can individually decide on rollback-only status, with an outer transaction scope being logically independent from the inner transaction scope. Of course, in case of standard PROPAGATION_REQUIRED behavior, they will be mapped to the same physical transaction. So a rollback-only marker set in the inner transaction scope does affect the outer transaction's chance to actually commit. However, since the outer transaction scope did not decide on a rollback itself, the rollback (silently triggered by the inner transaction scope) comes unexpected at that level - which is why an UnexpectedRollbackException gets thrown.

PROPAGATION_REQUIRES_NEW, in contrast, uses a completely independent transaction for each affected transaction scope. In that case, the underlying physical transactions will be different and hence can commit or roll back independently, with an outer transaction not affected by an inner transaction's rollback status.

PROPAGATION_NESTED is different again in that it uses a single physical transaction with multiple savepoints that it can roll back to. Such partial rollbacks allow an inner transaction scope to trigger a rollback for its scope, with the outer transaction being able to continue the physical transaction despite some operations having been rolled back. This is typically mapped onto JDBC savepoints, so will only work with JDBC resource transactions (Spring's DataSourceTransactionManager).

To complete the discussion: UnexpectedRollbackException may also be thrown without the application ever having set a rollback-only marker itself. Instead, the transaction infrastructure may have decided that the only possible outcome is a rollback, due to constraints in the current transaction state. This is particularly relevant with XA transactions.

As I suggested above, throwing an exception at the inner transaction scope, then catching that exception at the outer scope and translating it into a silent setRollbackOnly call there should work for your scenario. A caller of the outer transaction will never see an exception then. Since you only worry about such silent rollbacks because of special requirements imposed by a caller, I would even argue that the correct architectural solution is to use exceptions within the service layer, and to translate those exceptions into silent rollbacks at the service facade level (right before returning to that special caller).

Since your problem is possibly not only about rollback exceptions, but rather about any exceptions thrown from your service layer, you could even use standard exception-driven rollbacks all the way throughout your service layer, and then catch and log such exceptions once the transaction has already completed, in some adapting service facade that translates your service layer's exceptions into UI-specific error states.

Juergen
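To make that facade-level translation concrete, here is a hedged sketch of catching the service exception and silently marking the surrounding transaction rollback-only; all class names (OrderFacade, OrderService, OrderResult, OrderRejectedException) are invented placeholders and Spring imports are omitted, except TransactionAspectSupport from org.springframework.transaction.interceptor:

@Service
public class OrderFacade {

    @Autowired
    private OrderService orderService;   // hypothetical inner service

    @Transactional
    public OrderResult placeOrder(OrderRequest request) {
        try {
            return orderService.placeOrder(request);
        } catch (OrderRejectedException e) {
            // Translate the service exception into a silent rollback: the
            // caller gets a normal return value instead of an
            // UnexpectedRollbackException bubbling up at commit time.
            TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
            return OrderResult.rejected(e.getMessage());
        }
    }
}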
Scroll back a little further in the log (or increase its buffer size) and you will see what exactly caused the exception.
If it happens not to be there, check the getMostSpecificCause() and getRootCause() methods of UnexpectedRollbackException - they might be useful.
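A small, hedged example of using those accessors at some outer boundary (the service call and the SLF4J log field are placeholders assumed to be in scope):

try {
    someTransactionalService.doWork();   // placeholder for the call that fails
} catch (UnexpectedRollbackException e) {
    // Both accessors come from NestedRuntimeException, which
    // UnexpectedRollbackException extends; getRootCause() may be null.
    log.error("Most specific cause: " + e.getMostSpecificCause());
    log.error("Root cause: " + e.getRootCause());
    throw e;
}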
Well, I can tell you how to reproduce the UnexpectedRollbackException. I was working on my project and got this UnexpectedRollbackException in the following situation. I have controller, service and DAO layers in my project.
What I did in my service layer class is:
class SomeServiceClass {

    void outerTransaction() {
        // some lines of code
        innerTransaction();
        // AbstractPlatformTransactionManager tries to commit, but UnexpectedRollbackException
        // is thrown - not here, but in the controller layer class that uses SomeServiceClass.
    }

    void innerTransaction() {
        try {
            // some DAO method or DAO operation that throws an exception
        } catch (Exception e) {
            // When the exception is caught, the transaction is rolled back,
            // but the outer transaction does not know about it.
            // I found the point where the inner transaction gets rolled back when I set
            // HibernateTransactionManager.setFailEarlyOnGlobalRollbackOnly(true).
            // FYI: the following second DAO access is wrong,
            try {
                // again some DAO operation
            } catch (Exception e1) {
                throw e1;
            }
        }
    }
}
I had this problem, with this exception, using Spring Framework 4 and Java 1.8.
I solved it.
😊
I know that runtime exceptions cause rollbacks.
But in my case, the project does not use runtime exceptions.
For example, we have this code:
@Transactional
@Service
public class UserFacadeImpl implements UserFacade {

    @Autowired
    private UserService userService;

    // throws UnexpectedRollbackException instead of NotRuntimeEx
    @Override
    public void saveUser(User user) throws NotRuntimeEx {
        userService.saveUser(user);
    }
}
I solved it by adding @Transactional(rollbackFor = NotRuntimeEx.class) to the facade.
It works because Spring uses a proxy for the facade and a proxy for the service.
When the transaction is rolled back, the service proxy passes the rollback state on to the facade.
But the facade proxy does not understand why the transaction was rolled back.
@Transactional(rollbackFor = NotRuntimeEx.class)
@Service
public class UserServiceImpl implements UserService {

    @Override
    public void saveUser(User user) throws NotRuntimeEx {
        throw new NotRuntimeEx(user);
    }
}