I'm looking for a way to call multiple DAO functions in a transaction, but I am NOT using Spring or any such framework. What we actually have is a database-API-style .jar which gets initialized with the datasource in use. What I want to achieve is to have my business-logic-level code do something like:
Connection conn = datasource.getConnection();
conn.setAutoCommit(false);
DAOObject1.query1(params, conn);
DAOObject2.query4(params, conn);
conn.commit();
conn.setAutoCommit(true);
However, I want to avoid passing the connection object to every single function, since this is not the correct way to do it. Right now, in the few transactions we have, we do use this approach, but we are looking for a way to stop passing the connection object to the database layer, or even to create it outside of it. I'm looking for something along the lines of:
//Pseudocode
try {
    Datasource.startTransactionLogic();
    DAO1.query(params);
    DAO2.query(params);
    Datasource.endAndCommitTransactionLogic();
} catch (SQLException e) {
    Datasource.rollbackTransaction();
}
Could I achieve this through EJBs? Right now we're not using DAOs through injection; we're creating them by hand, but we're about to migrate to EJBs and start using them via the container. I've heard that all queries executed by EJBs are transactional, but how do they know what to roll back to? Through savepoints?
EDIT:
Let me point out that, right now, each DAO object's method obtains its own connection object. Here is an example of what our DAO classes look like:
public class DAO {

    public DTO exampleQueryMethod(Integer id) throws SQLException {
        DTO object = null;
        String sql = "SELECT * FROM TABLE_1 WHERE ID = ?";
        try (
            Connection connection = datasourceObject.getConnection();
            PreparedStatement statement = connection.prepareStatement(sql)
        ) {
            statement.setInt(1, id);
            try (ResultSet resultSet = statement.executeQuery()) {
                if (resultSet.next()) {
                    object = DAO.map(resultSet);
                }
            }
        }
        return object;
    }
}
Right now, what we're doing for methods that need to run inside a transaction is keeping a second copy of them that receives a Connection object:
public void exampleUpdateMethod(DTO object, Connection connection) {
    // table update logic
}
What we want is to avoid having such methods in our 'database api' .jar, and instead be able to define the beginning and the commit of a transaction in our business logic layer, as in the pseudocode above.
What I have done in the past is to create a Repository object that takes the database API, opens a connection, and keeps that connection as a member variable (along with the datasource reference as well).
I then hang all the business-layer calls off this Repository object as methods, for the caller's convenience.
This way you can call, mix, and match any calls, use the underlying connection, and perform rollbacks, commits, etc.:
Repository myr = new Repository(datasource); // let the constructor create the connection
myr.setAutoCommit(false);
myr.daoObject1(params); // method wrapper
myr.daoObject2(params); // method wrapper
myr.commitWork(); // Repository method that commits the underlying connection
We then took this new object, created a pool of them, primed and managed in a separate thread, and the application just requested a new Repository from the pool, and off we went.
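A minimal sketch of that Repository idea, assuming the DAO methods accept a Connection as in the question (the method names daoObject1/daoObject2/commitWork and the Object parameter type are illustrative, not a real API):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch only: wraps one Connection and exposes the DAO calls as methods.
public class Repository implements AutoCloseable {

    private final Connection connection;

    public Repository(DataSource datasource) throws SQLException {
        this.connection = datasource.getConnection(); // constructor creates the connection
    }

    public void setAutoCommit(boolean autoCommit) throws SQLException {
        connection.setAutoCommit(autoCommit);
    }

    // Method wrappers: pass the shared connection so both calls join one transaction.
    public void daoObject1(Object params) throws SQLException {
        DAOObject1.query1(params, connection);
    }

    public void daoObject2(Object params) throws SQLException {
        DAOObject2.query4(params, connection);
    }

    public void commitWork() throws SQLException {
        connection.commit();
    }

    public void rollbackWork() throws SQLException {
        connection.rollback();
    }

    @Override
    public void close() throws SQLException {
        connection.close(); // return the connection to the pool when done
    }
}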
@JBNizet's comment was correct, but... please think twice about whether you really need to migrate to EJBs. Even transactions are not very intuitive there: having your exception wrapped in a javax.ejb.EJBException is neither flexible nor readable. Not to mention other problems, like startup time or integration testing.
Judging from your question, it seems that all you need is a dependency injection framework with support for interceptors. So possible ways to go:
Spring is definitely the most popular in this area
CDI (Weld or OpenWebBeans), which has been available since the Java EE 6 release, but can be used entirely without a Java EE application server (I'm using this approach right now, and it works nicely).
Guice also comes with its own com.google.inject.persist.Transactional annotation.
All three above frameworks are equally good for your use case, but there are other factors that should be considered, like:
which one you and your team are familiar with
learning curve
your application's future possible needs
framework's community size
framework's current development speed
etc.
Hope it helps you a little bit.
EDIT: to clarify your doubts:
You can create your own Transaction class, which would wrap a Connection fetched from datasource.getConnection(). Such a transaction should be a @RequestScoped CDI bean and contain methods like begin(), commit(), and rollback(), which would call connection.commit()/connection.rollback() under the hood. Then you can write a simple interceptor like this one, which would use the mentioned transaction and start/commit/rollback it wherever needed (of course with auto-commit disabled).
It is doable, but keep in mind that it should be carefully designed. That is why interceptors for transactions are already provided in almost every DI platform/framework.
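For illustration only, such an interceptor could look roughly like this (the @Transactional annotation here is assumed to be a custom @InterceptorBinding you define yourself, and Transaction is the @RequestScoped wrapper described above; neither is standard API):

import javax.inject.Inject;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@Transactional // hypothetical custom @InterceptorBinding, not javax.transaction.Transactional
@Interceptor
public class TransactionalInterceptor {

    @Inject
    private Transaction transaction; // the @RequestScoped wrapper around the Connection

    @AroundInvoke
    public Object manageTransaction(InvocationContext ctx) throws Exception {
        transaction.begin(); // setAutoCommit(false) under the hood
        try {
            Object result = ctx.proceed();
            transaction.commit(); // connection.commit()
            return result;
        } catch (Exception e) {
            transaction.rollback(); // connection.rollback()
            throw e;
        }
    }
}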
EDIT:
After accumulating a few more years of experience, I'd like to point out that the simplest and most correct answer to this question was to use ThreadLocal objects to hold the Connection (since a request is served by a single thread, each request gets its own connection). Unfortunately, at the time I didn't know such a construct existed.
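A bare-bones sketch of that ThreadLocal approach (the class and method names are mine, not from any framework):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Holds one Connection per thread, so DAOs can share it without argument passing.
public final class ConnectionHolder {

    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();

    public static void open(DataSource dataSource) throws SQLException {
        Connection connection = dataSource.getConnection();
        connection.setAutoCommit(false);
        CURRENT.set(connection);
    }

    public static Connection get() {
        return CURRENT.get(); // DAOs call this instead of datasource.getConnection()
    }

    public static void commitAndClose() throws SQLException {
        Connection connection = CURRENT.get();
        try {
            connection.commit();
        } finally {
            connection.close();
            CURRENT.remove(); // avoid leaking the connection into pooled threads
        }
    }

    public static void rollbackAndClose() throws SQLException {
        Connection connection = CURRENT.get();
        try {
            connection.rollback();
        } finally {
            connection.close();
            CURRENT.remove();
        }
    }
}

The business layer would call ConnectionHolder.open(...) at the start of the request and commitAndClose()/rollbackAndClose() at the end, while every DAO fetches the same connection via ConnectionHolder.get().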
@G. Demecki has the right idea, but I followed a different implementation. Interceptors couldn't solve the problem (at least from what I saw) because they need to be attached to each function that is supposed to use them. Also, once an interceptor is attached, calling the function will always have it intercepted, which is not my goal. I wanted to be able to explicitly define the beginning and end of a transaction, and have every SQL statement executed between those 2 statements be part of the SAME transaction, without the code having access to the database-related objects (like the connection, transaction, etc.) through argument passing. The way I was able to achieve this (quite elegantly, in my opinion) is the following:
I created a ConnectionWrapper object like so:
@RequestScoped
public class ConnectionWrapper {

    @Resource(lookup = "java:/MyDBName")
    private DataSource dataSource;

    private Connection connection;

    @PostConstruct
    public void init() throws SQLException {
        this.connection = dataSource.getConnection();
    }

    @PreDestroy
    public void destroy() throws SQLException {
        this.connection.close();
    }

    public void begin() throws SQLException {
        this.connection.setAutoCommit(false);
    }

    public void commit() throws SQLException {
        this.connection.commit();
        this.connection.setAutoCommit(true);
    }

    public void rollback() throws SQLException {
        this.connection.rollback();
        this.connection.setAutoCommit(true);
    }

    public Connection getConnection() {
        return connection;
    }
}
My DAO objects themselves follow this pattern:
@RequestScoped
public class DAOObject implements Serializable {

    private Logger LOG = Logger.getLogger(getClass().getName());

    @Inject
    private ConnectionWrapper wrapper;

    private Connection connection;

    @PostConstruct
    public void init() {
        connection = wrapper.getConnection();
    }

    public void query(DTOObject dto) throws SQLException {
        String sql = "INSERT INTO DTO_TABLE VALUES (?)";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, dto.getName());
            statement.executeUpdate();
        }
    }
}
Now I can easily have a JAX-RS resource which @Injects these objects and starts and commits a transaction, without having to pass any Connection or UserTransaction around.
#Path("test")
#RequestScoped
public class TestResource {
#Inject
ConnectionWrapper wrapper;
#Inject
DAOObject dao;
#Inject
DAOObject2 dao2;
#GET
#Produces(MediaType.TEXT_PLAIN)
public Response testMethod() throws Exception {
try {
wrapper.begin();
DTOObject dto = new DTOObject();
dto.setName("Name_1");
dao.query(dto);
DTOObject2 dto2 = new DTOObject2();
dto2.setName("Name_2");
dao2.query2(dto2);
wrapper.commit();
} catch (SQLException e) {
wrapper.rollback();
}
return Response.ok("ALL OK").build();
}
}
And everything works perfectly. No interceptors or digging around InvocationContext, etc.
There are only 2 things bothering me:
I have not yet found a way to have a dynamic JNDI name in @Resource(lookup = "java:/MyDBName"), and this bothers me. In our app server we have defined many datasources, and the one used by the application is chosen dynamically according to an .xml resource file packaged with the war. This means that I can't know the datasource JNDI name at compile time. There is the option of obtaining a datasource through an InitialContext() environment lookup, but I'd love to be able to get it as a resource from the server. I could also create a @Produces producer method and inject it that way, but still.
I'm not really sure why ConnectionWrapper's @PostConstruct gets called BEFORE the DAOObject's @PostConstruct. It is the correct and desired behavior, but I haven't understood why. I'm guessing that since DAOObject @Injects a ConnectionWrapper, the wrapper's @PostConstruct takes precedence because injection must be complete before the DAOObject's own @PostConstruct can run, but this is just a guess.
Related
I have a question regarding the @Transactional annotation.
Nothing special is defined, so as I understand it, propagation is PROPAGATION_REQUIRED.
Let's say I have the @Transactional annotation on both the service and the DAO layer.
Service
@Transactional
public long createStudentInDB(Student student) {
    final long id = addStudentToDB(student);
    addStudentToCourses(id, student.getCourseIds());
    return id;
}
private long addStudentToDB(Student student) {
    StudentEntity entity = new StudentEntity();
    convertToEntity(student, entity);
    long id = 0;
    try {
        id = dao.create(entity);
    } catch (Exception e) {
        //
    }
    return id;
}
private void addStudentToCourses(long studentId, List<String> coursesIds) {
    // add the student to each course
    if (coursesIds != null) {
        List<StudentCourseEntity> studentCourses = new ArrayList<>();
        for (String coursesId : coursesIds) {
            StudentCourseEntity entity = new StudentCourseEntity();
            entity.setCourseId(coursesId);
            entity.setStudentId(studentId);
            studentCourses.add(entity);
        }
        anotherDao.saveAll(studentCourses);
    }
}
DAO
@Transactional
public long create(StudentEntity entity) {
    if (entity == null) { throw new IllegalArgumentException(/*…*/); }
    getCurrentSession().save(entity);
    return entity.getId();
}
ANOTHER DAO:
@Transactional
public void saveAll(Collection<StudentCourseEntity> studentCourses) {
    if (studentCourses != null) {
        for (StudentCourseEntity studentCourse : studentCourses) {
            if (studentCourse != null) {
                save(studentCourse);
            }
        }
    }
}
Despite the fact that this is not optimal, it seems to be causing deadlocks.
Let’s say I have max 2 connections to the database.
And I am using 3 different threads to run the same code.
Thread-1 and thread-2 each receive a connection; thread-3 does not get one.
More than that, it seems that thread-1 becomes stuck when trying to get a connection at the DAO level, and the same happens to thread-2, causing a deadlock.
I was sure that by using PROPAGATION_REQUIRED this would not happen.
Am I missing something?
What's the recommendation for something like that? Is there a way I can have @Transactional on both layers? If not, which is preferred?
Thanks
Fabrizio
As the dao.doSomeStuff method is expected to be invoked from within other transactions, I would suggest configuring it as:
@Transactional(propagation = Propagation.REQUIRES_NEW)
Thanks to that, the transaction which invokes this method will be suspended until the one with REQUIRES_NEW has finished.
Not sure if this is the fix for your particular deadlock case but your example fits this particular set-up.
You are right, Propagation.REQUIRED is the default. But that also means that the second (nested) invocation on the DAO joins/reuses the transaction created at the service level. So there is no need to create another transaction for the nested call.
In general, Spring (at high-level usage) should manage resource handling by forwarding it to the underlying ORM layer:
The preferred approach is to use Spring's highest level template based persistence integration APIs or to use native ORM APIs with transaction-aware factory beans or proxies for managing the native resource factories. These transaction-aware solutions internally handle resource creation and reuse, cleanup, optional transaction synchronization of the resources, and exception mapping. Thus user data access code does not have to address these tasks, but can be focused purely on non-boilerplate persistence logic.
Even if you handle it on your own (using the low-level API), the connections should be reused:
When you want the application code to deal directly with the resource
types of the native persistence APIs, you use these classes to ensure
that proper Spring Framework-managed instances are obtained,
transactions are (optionally) synchronized, and exceptions that occur
in the process are properly mapped to a consistent API.
...
If an existing transaction already has a connection synchronized
(linked) to it, that instance is returned. Otherwise, the method call
triggers the creation of a new connection, which is (optionally)
synchronized to any existing transaction, and made available for
subsequent reuse in that same transaction.
Source
Maybe you have to find out what is happening down there.
Each Session / Unit of Work will be bound to a thread and released (together with the assigned connection) after the transaction has ended. Of course when your thread gets stuck it won't release the connection.
Are you sure that this 'deadlock' is caused by this nesting? Maybe that has another reason. Do you have some test code for this example? Or a thread dump or something?
@Transactional works by keeping ThreadLocal state, which can be accessed by the (Spring-managed) proxy EntityManager. If you are using Propagation.REQUIRED (the default) and you have a non-transactional method which calls two different DAOs (or two transactional methods on the same DAO), you will get two transactions and two calls to acquire a pooled connection. You may get the same connection twice or two different connections, but you should only use one connection at a time.
If you call two DAOs from a @Transactional method, there will only be one transaction, as the DAO will find and join the existing transaction found in the ThreadLocal state; again, you only need one connection from the pool.
If you get a deadlock then something is very wrong, and you may want to debug when your connections and transaction are created. A transaction is started by calling Connection.setAutoCommit(false), in Hibernate this happens in org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor#begin(). Connections are managed by a class extending org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor so these are some good places to put a break point, and trace the call-stack back to your code to see which lines created connections.
Just like in the title: I need to call a specific function on the database which will define a few values assigned to the transaction. If possible, I want to make it a global configuration tied to a specific profile (there are a few requests for which I do not want to call that function).
The project is built on Java SE 1.7, Spring Boot 1.1.7, and it connects to a PostgreSQL database.
Requests are built on 3 layers: SomeClassController (Controller), SomeClassService (Service), SomeClassDB (Repository). In SomeClassDB it connects to the database using Spring's JdbcTemplate and performs CRUD operations. Before any of those operations I want to call that function. And as I mentioned, I don't want a method that will do the job; I need something like a global configuration on the TransactionManager.
Maybe I should use TransactionSynchronization with beforeCommit method? But I don't know how to use it globally.
EDIT1: What I can do, but don't want to ;):
@Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
@Service
public class SessionService {

    @Autowired
    JdbcTemplate jdbcTemplate;

    @Value("${appVersion}") String appVersion;
    @Value("${appArtifactId}") String appArtifactId;

    private boolean flag;

    public SessionService() {
        flag = false;
    }

    public void addSession() {
        if (!flag) {
            jdbcTemplate.execute("SELECT add_ses('" + appArtifactId + "','" + appVersion + "')");
            flag = true;
        }
    }

    public void deleteSession() {
        jdbcTemplate.execute("SELECT del_ses()");
    }
}
And now I can just call those two methods at the start and end of the 2nd-layer class, with this class @Autowired. But I really don't want to do it that way: someone, someday, will forget about it when extending that 2nd-layer SomeClassService class, and I want to avoid that.
I hope that will get you closer to my problem.
This could be a typical use case for aspects. Have you tried writing an aspect for this? An aspect can be configured to execute code before and after calls to methods in a specific package, or to methods decorated with a specific annotation. Spring has very nice support for aspects.
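For instance, a rough Spring AOP sketch (the pointcut package com.example.app.db is an assumption; the add_ses call and @Value properties follow the question's code):

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class AddSessionAspect {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Value("${appVersion}") private String appVersion;
    @Value("${appArtifactId}") private String appArtifactId;

    // Runs before every public repository-layer method; adjust the package to your layout.
    @Before("execution(public * com.example.app.db..*.*(..))")
    public void addSession() {
        // Bind parameters instead of concatenating strings into the SQL.
        jdbcTemplate.queryForObject("SELECT add_ses(?, ?)", Object.class, appArtifactId, appVersion);
    }
}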
Another way is to create a proxy object (using java.lang.reflect.Proxy) which calls your initialization code and then delegates to the proxied object.
Let's say there are @Service and @Repository interfaces like the following:
@Repository
public interface OrderDao extends JpaRepository<Order, Integer> {
}

public interface OrderService {
    void saveOrder(Order order);
}

@Service
public class OrderServiceImpl implements OrderService {

    @Autowired
    private OrderDao orderDao;

    @Override
    @Transactional
    public void saveOrder(Order order) {
        orderDao.save(order);
    }
}
This is part of working application, everything is configured to access single database and everything works fine.
Now I would like to have the possibility to create a stand-alone working instance of OrderService with an auto-wired OrderDao using pure Java, with the jdbcUrl specified in Java code, something like this:
final int tenantId = 3578;
final String jdbcUrl = "jdbc:mysql://localhost:3306/database_" + tenantId;
OrderService orderService = someMethodWithSpringMagic(appContext, jdbcUrl);
As you can see I would like to introduce multi-tenant architecture with tenant per database strategy to existing Spring-based application.
Please note that I was able to achieve this quite easily before with self-implemented jdbcTemplate-like logic, with JDBC transactions working correctly as well, so this is a very valid task.
Please also note that I need quite simple transaction logic: start a transaction, perform several requests in the service method in the scope of that transaction, and then commit it, or roll back on exception.
Most solutions on the web regarding multi-tenancy with Spring propose specifying concrete persistence units in the XML config and/or using annotation-based configuration, which is highly inflexible, because in order to add a new database URL the whole application has to be stopped, the XML config/annotation code changed, and the application started again.
So basically I'm looking for a piece of code which is able to create a @Service just like Spring creates it internally after properties are read from the XML configs/annotations. I'm also looking into using ProxyFactoryBean for that, because Spring uses AOP to create service instances (so I guess simple good old reusable OOP is not the way to go here).
Is Spring flexible enough to allow this relatively simple case of code reuse?
Any hints will be greatly appreciated and if I find complete answer to this question I'll post it here for future generations :)
Hibernate has out-of-the-box support for multi-tenancy; check that out before rolling your own. Hibernate requires a MultiTenantConnectionProvider and a CurrentTenantIdentifierResolver, for which there are default implementations out of the box, but you can always write your own. If it is only a schema change, it is actually pretty simple to implement (execute a query before returning the connection); else, hold a map of datasources and get an instance from that, or create a new instance.
About 8 years ago we already wrote a generic solution, which was documented here, and the code is here. It isn't specific to Hibernate and could be used with basically anything you need to switch around. We used it for DataSources and also for some web-related things (theming, amongst others).
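To illustrate the database-per-tenant variant: the Hibernate SPI types below exist as named (Hibernate 4.1+/5.x), but the map-based wiring, class names, and the ThreadLocal resolver are assumptions, not Hibernate defaults:

import java.util.Map;
import javax.sql.DataSource;
import org.hibernate.context.spi.CurrentTenantIdentifierResolver;
import org.hibernate.engine.jdbc.connections.spi.AbstractDataSourceBasedMultiTenantConnectionProviderImpl;

// Chooses a DataSource per tenant from a pre-built map (one pool per tenant).
public class MapBasedConnectionProvider
        extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl {

    private final Map<String, DataSource> tenants;
    private final DataSource fallback;

    public MapBasedConnectionProvider(Map<String, DataSource> tenants, DataSource fallback) {
        this.tenants = tenants;
        this.fallback = fallback;
    }

    @Override
    protected DataSource selectAnyDataSource() {
        return fallback; // used when no tenant is known yet (e.g. at bootstrap)
    }

    @Override
    protected DataSource selectDataSource(String tenantIdentifier) {
        return tenants.getOrDefault(tenantIdentifier, fallback);
    }
}

// Tells Hibernate which tenant the current thread belongs to.
class ThreadLocalTenantResolver implements CurrentTenantIdentifierResolver {

    static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    @Override
    public String resolveCurrentTenantIdentifier() {
        return CURRENT.get();
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return false;
    }
}

Both classes would then be registered via the hibernate.multi_tenant_connection_provider and hibernate.tenant_identifier_resolver settings, with hibernate.multiTenancy set to DATABASE.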
Creating a transactional proxy for an annotated service is not a difficult task, but I'm not sure that you really need it. To choose a database for a tenantId, I guess you only need to concentrate on the DataSource interface.
For example, with a simple driver managed datasource:
public class MultitenancyDriverManagerDataSource extends DriverManagerDataSource {

    @Override
    protected Connection getConnectionFromDriverManager(String url, Properties props)
            throws SQLException {
        Integer tenant = MultitenancyContext.getTenantId();
        if (tenant != null) {
            url += "_" + tenant; // e.g. jdbc:mysql://localhost:3306/database_3578
        }
        return super.getConnectionFromDriverManager(url, props);
    }
}
public class MultitenancyContext {

    private static final ThreadLocal<Integer> tenant = new ThreadLocal<>();

    public static Integer getTenantId() {
        return tenant.get();
    }

    public static void setTenantId(Integer value) {
        tenant.set(value);
    }
}
Of course, if you want to use a connection pool, you need to elaborate on it a bit, for example by using a connection pool per tenant.
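For example, a lazily populated pool-per-tenant registry might look like this (HikariCP is just one possible pool and is an assumption here; the URL pattern follows the question):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariDataSource;

// One connection pool per tenant, created on first use and then reused.
public class TenantDataSourceRegistry {

    private final ConcurrentMap<Integer, DataSource> pools = new ConcurrentHashMap<>();

    public DataSource forTenant(Integer tenantId) {
        return pools.computeIfAbsent(tenantId, this::createPool);
    }

    private DataSource createPool(Integer tenantId) {
        HikariDataSource ds = new HikariDataSource();
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/database_" + tenantId);
        ds.setMaximumPoolSize(5); // per-tenant pool size; tune for your workload
        return ds;
    }
}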
Suppose I want to create a service layer for my web application which uses servlets. How should I go about this? (I am not using a web app framework, so please bear with me.) Should I implement it as a listener? The service is meant to do database access; that is, I should be able to call from my servlet:
class MyServlet {
    ...
    doPost(...) {
        ...
        MyEntity entity = dbAccessService.getMyEntity(someId);
        ...
    }
}
where the dbAccessService deals with the Hibernate session, transactions, etc. Previously I used to do all this inside DAO methods, but I was advised that was not a good idea.
Any suggestions welcome
thanks
mark
A sample code snippet is given below:
class DBAccessServiceImpl {
    ...
    private MyEntity getMyEntity(Long id) {
        Transaction tx = null;
        Session session = HibernateUtil.getCurrentSession();
        try {
            tx = session.beginTransaction();
            MyEntity me = entitydao.findEntityById(id);
            tx.commit();
            return me;
        } catch (RuntimeException e) {
            if (tx != null) {
                tx.rollback();
            }
            logger.info("problem occurred while calling findEntityById()");
            throw e;
        }
    }
    ...
}
Then create a listener to instantiate DBAccessService
class MyAppListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent ctxEvent) {
        ServletContext sc = ctxEvent.getServletContext();
        DBAccessService dbservice = new DBAccessServiceImpl();
        sc.setAttribute("dbAccessService", dbservice);
    }
}
In web.xml, add the listener:
...
<listener>
<listener-class>myapp.listeners.MyAppListener</listener-class>
</listener>
...
Assuming you do not want to introduce a framework, two options make sense (in my opinion):
define your service layer using stateless EJB session beans. You need an EJB container.
do it as usual in OO languages: create an interface and a corresponding implementation.
Define an interface
public interface BusinessService {
    BusinessObject performSomeOperation(SomeInput input);
}
And an implementation
public class BusinessServiceImpl implements BusinessService {

    public BusinessObject performSomeOperation(SomeInput input) {
        // some logic here...
    }
}
You have several options for instantiating the service. If you start from scratch with a small application it may be sufficient to simply instantiate the service inside your web application:
BusinessService service = new BusinessServiceImpl();
service.performSomeOperation(...);
BTW: At a later time you may want to refactor and introduce some abstractions around the service instantiation (factory pattern, dependency injection, etc.). Furthermore, in large systems there is a chance that you will have to host the service layer on its own infrastructure for scalability, so that your webapp communicates with the service layer via an open protocol, be it RESTful or Web Services.
However the future looks, having a well-defined interface for your business functions in place allows you to "easily" move forward if the application grows.
Response to your update:
I would not implement the service itself as a listener; that does not make sense. Nevertheless, your sample code seems reasonable, but you must distinguish between the service (in this case DBAccessService) and the way you instantiate/retrieve it (the listener). The listener you've implemented in fact plays the role of a ServiceLocator, which is capable of finding certain services. If you store the instance of your service in the servlet context, you have to remember that the service implementation must be thread-safe.
You have to be careful not to over-engineer your design: keep it simple as long as you cannot foresee further, complex requirements. If it's not yet complex, I suggest encapsulating the instantiation behind a simple static factory method:
public final class ServiceFactory {

    public static DBAccessService getDBAccessService() {
        return new DBAccessServiceImpl();
    }
}
Complex alternatives for implementing the ServiceFactory are available, and nowadays some call it an anti-pattern, but as long as you do not want to start with dependency injection (etc.) it is still a valid solution. The service implementation DBAccessServiceImpl is accessed in one place only (the factory). As I mentioned before, keep an eye on multi-threading... hope this helps!
What you're suggesting is really no different from doing the session and transaction handling in a DAO. After all, your service class calls the DAO; to the client code, there is no difference.
Rather, I suspect that whoever told you not to put the session handling in the DAO was thinking that you should instead use the Open Session In View pattern. Very simply, in its usual form, that involves writing a Filter which opens a session and starts a transaction before passing the request down the chain, and then commits the transaction (or rolls it back if necessary) and closes the session after the request completes. That means that within any one request, all access to persistent objects happens in a single transaction and a single session, which is usually the right way to do it (it's certainly the fastest way to do it).
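In skeleton form, such a filter might look like this (reusing the HibernateUtil from the question's code; whether commit also closes the session depends on your current_session_context_class setting):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.hibernate.Session;
import org.hibernate.Transaction;

// One session and one transaction per request; rolls back on any failure.
public class OpenSessionInViewFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        Session session = HibernateUtil.getCurrentSession();
        Transaction tx = session.beginTransaction();
        try {
            chain.doFilter(request, response); // servlets and services run inside the tx
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}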
Thanks for reading this.
I have 2 MySQL databases: a master for writes and a slave for reads. The perfect scenario I imagine is that my app uses a connection to the master for readOnly=false transactions and a connection to the slave for readOnly=true transactions.
In order to implement this I need to provide a valid connection depending on the type of the current transaction. My data service layer should not know what type of connection it uses and should just use the injected SqlMapClient (I use iBatis) directly. This means that (if I get it right) the injected SqlMapClients should be proxied, with the delegate chosen at runtime.
public class MyDataService {

    private SqlMapClient sqlMap;

    @Autowired
    public MyDataService(SqlMapClient sqlMap) {
        this.sqlMap = sqlMap;
    }

    @Transactional(readOnly = true)
    public MyData getSomeData() {
        // an instance of sqlMap connected to the slave should be used
    }

    @Transactional(readOnly = false)
    public void saveMyData(MyData myData) {
        // an instance of sqlMap connected to the master should be used
    }
}
So the question is - how can I do this?
Thanks a lot
It's an interesting idea, but you'd have a tough job on your hands. The readOnly attribute is intended as a hint to the transaction manager, and isn't really consulted anywhere meaningful. You'd have to rewrite or extend multiple Spring infrastructure classes.
So unless you're hell-bent on getting this working the way you want, your best option is almost certainly to inject two separate SqlMapClient objects into your DAO and have the methods pick the appropriate one. The @Transactional annotations would also need to indicate which transaction manager to use (assuming you're using DataSourceTransactionManager rather than JpaTransactionManager), taking care to match each transaction manager to the DataSource used by its SqlMapClient.
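A sketch of that two-client setup (the qualifier names, statement ids, and the two transaction manager bean names are assumptions; you would define the matching beans in your Spring config):

import java.sql.SQLException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import com.ibatis.sqlmap.client.SqlMapClient;

@Service
public class MyDataService {

    private final SqlMapClient masterSqlMap; // wired to the master DataSource
    private final SqlMapClient slaveSqlMap;  // wired to the slave DataSource

    @Autowired
    public MyDataService(@Qualifier("masterSqlMapClient") SqlMapClient masterSqlMap,
                         @Qualifier("slaveSqlMapClient") SqlMapClient slaveSqlMap) {
        this.masterSqlMap = masterSqlMap;
        this.slaveSqlMap = slaveSqlMap;
    }

    // Reads go to the slave, using the transaction manager bound to its DataSource.
    @Transactional(value = "slaveTxManager", readOnly = true)
    public MyData getSomeData() throws SQLException {
        return (MyData) slaveSqlMap.queryForObject("MyData.select");
    }

    // Writes go to the master, using its own transaction manager.
    @Transactional("masterTxManager")
    public void saveMyData(MyData myData) throws SQLException {
        masterSqlMap.insert("MyData.insert", myData);
    }
}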