Before Spring, we used a version of HibernateUtil that cached the SessionFactory instance only once a successful raw JDBC connection had been made, and threw SQLException otherwise. This let us recover when the initial setup of the SessionFactory was "bad" due to authentication or server connection issues.
We moved to Spring and wired things up in a more or less classic way, with the LocalSessionFactoryBean, the C3P0 DataSource, and various DAO classes that have the SessionFactory injected.
Now, if the SQL server appears to be down when the web app starts, the web app never recovers. All access to the DAO methods blows up because a null SessionFactory gets injected. (Once the SessionFactory is built properly, the connection pool mostly handles the up/down status of the SQL server fine, so recovery is possible.)
Now, the DAO beans are wired as singletons by default, and we could change them to prototype scope. I don't think that will fix the matter, though - I believe the LocalSessionFactoryBean is now "stuck" and caches the null reference (I haven't tested this yet, I'll shamefully admit).
This has to be an issue that concerns people.
I tried the proxy as suggested below -- this failed.
First of all, I had to ignore the suggestion (which frankly seemed wrong from a decompile) to call LocalSessionFactoryBean.buildSessionFactory - it isn't visible.
Instead I tried a modified version as follows:
override newSessionFactory(); at the end, return a proxy of the SessionFactory pointing to the invocation handler listed below.
This failed too.
org.hibernate.HibernateException: No local DataSource found for configuration - 'dataSource' property must be set on LocalSessionFactoryBean
Now, if newSessionFactory() is changed to simply
return config.buildSessionFactory() (instead of a proxy), it works, but of course it no longer exhibits the desired proxy behavior.
// imports needed: java.lang.reflect.InvocationHandler, java.lang.reflect.Method,
//                 org.hibernate.SessionFactory, org.hibernate.cfg.Configuration
public static class HibernateInvocationHandler implements InvocationHandler {

    private final Configuration config;
    private SessionFactory realSessionFactory;

    public HibernateInvocationHandler(Configuration config) {
        this.config = config;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (false) proxy.hashCode(); // no-op reference to the otherwise unused 'proxy' parameter
        System.out.println("Proxy for SessionFactory called");
        synchronized (this) {
            // lazily build the real factory on first use; keep retrying until it succeeds
            if (this.realSessionFactory == null) {
                SessionFactory sf = null;
                try {
                    System.out.println("Gonna BUILD one or die trying");
                    sf = this.config.buildSessionFactory();
                } catch (RuntimeException e) {
                    System.out.println(ErrorHandle.exceptionToString(e));
                    log.error("SessionFactoryProxy", e);
                    closeSessionFactory(sf);
                    System.out.println("FAILED to build");
                    sf = null;
                }
                if (sf == null) throw new RetainConfigDataAccessException("SessionFactory not available");
                this.realSessionFactory = sf;
            }
            return method.invoke(this.realSessionFactory, args);
        }
    }
}
The proxy creation in newSessionFactory() looks like this:
SessionFactory sfProxy = (SessionFactory) Proxy.newProxyInstance(
        SessionFactory.class.getClassLoader(),
        new Class[] { SessionFactory.class },
        new HibernateInvocationHandler(config));
and one can return this proxy (which fails), or config.buildSessionFactory(), which works but doesn't solve the initial issue.
An alternate approach has been suggested by bozho, using getObject(). Note the fatal flaw in d) below: buildSessionFactory is not visible.
a) if this.sessionFactory is non-null, there is no need for a proxy; just return it
b) if it is null, build a proxy which...
c) should contain a private reference to a SessionFactory and, each time it is called, check whether that reference is null. If so, build a new factory and, if successful, assign it to the private reference and return it from then on.
d) Now, state how you would build that factory from getObject(). Your answer should involve calling buildSessionFactory... but you CAN'T. One could create the factory oneself, but you would risk breaking Spring that way (look at the buildSessionFactory code).
You shouldn't worry about this. Starting the app is something you will rarely do in production, and in development - well, you need the DB server anyway.
You should worry if the application doesn't recover if the db server stops while the app is running.
What you can do is extend LocalSessionFactoryBean and override the getObject() method, and make it return a proxy (via java.lang.reflect.Proxy or CGLIB/javassist) in case the sessionFactory is null. That way a SessionFactory will be injected. The proxy should hold a reference to a bare SessionFactory, which would initially be null. Whenever the proxy is asked to connect, if the sessionFactory is still null, you call buildSessionFactory() (of the LocalSessionFactoryBean) and delegate to it; otherwise throw an exception. (Then, of course, map your new factory bean instead of the current one.)
Thus your app will be available even if the db isn't available on startup. But I myself wouldn't bother with this.
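To make the shape of that suggestion concrete, here is a rough sketch (class and field names are mine; note the question above reports that buildSessionFactory() is not visible in its Spring version, so that call is exactly the contested part):
// Sketch only - assumes a buildSessionFactory()-style method is reachable from the subclass,
// which the question above disputes for its Spring/Hibernate version.
public class LazySessionFactoryBean extends LocalSessionFactoryBean {

    private SessionFactory real;

    @Override
    public SessionFactory getObject() {
        SessionFactory built = (SessionFactory) super.getObject();
        if (built != null) {
            return built; // the factory came up normally at startup
        }
        // otherwise hand out a proxy that builds the real factory on first use
        return (SessionFactory) Proxy.newProxyInstance(
                SessionFactory.class.getClassLoader(),
                new Class[] { SessionFactory.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        synchronized (LazySessionFactoryBean.this) {
                            if (real == null) {
                                real = buildSessionFactory(); // the contested call
                            }
                            return method.invoke(real, args);
                        }
                    }
                });
    }
}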
Related
My web application uses Neo4j as a data storage, and it uses Spring Data Neo4j 4 framework.
As suggested in the tutorial, my neo4j session is bound to my HTTP session:
@Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
public Session getSession() throws Exception {
    return super.getSession();
}
I have an endpoint which runs a time-consuming query and sends the result offline. I'd like to move this method to an @Async thread, but obviously I cannot access my Neo4j session from that thread.
What is the best practice for accessing Neo4j repositories outside of the HTTP session, without changing the scope of the "main" session bean?
I'm not sure about best practice, but can't you just create another session via the sessionFactory#openSession() method, and pass that new session to another instance of Neo4jOperations (or @Override the existing bean if you are not using it), thus avoiding the proxy-scoped Neo4jConfiguration#getSession() method?
like so:
// note: the code below assumes you are extending Neo4jConfiguration
// ...
// passing in your own non-proxy-scoped session.

// @Override the existing neo4jTemplate @Bean, passing in your own session
@Bean
@Override
public Neo4jOperations neo4jTemplate() throws Exception {
    return new Neo4jTemplate(getSessionFactory().openSession());
}

// or create another Neo4jTemplate instance that avoids the proxy-scoped getSession() method in its constructor
@Bean("nonProxyScopedNeo4jOperations")
public Neo4jOperations nonProxyScopedNeo4jTemplate() throws Exception {
    return new Neo4jTemplate(getSessionFactory().openSession());
}
// ...
and use the custom Neo4jOperations bean to perform your @Async logic.
See Neo4jConfiguration.
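For completeness, a rough sketch of how the non-proxy-scoped bean could then be consumed from the @Async method (ReportService, MyEntity, and the loadAll call are illustrative assumptions, not from the original):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.data.neo4j.template.Neo4jOperations;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    // the non-proxy-scoped template from above, not the session-bound one
    @Autowired
    @Qualifier("nonProxyScopedNeo4jOperations")
    private Neo4jOperations neo4jOperations;

    @Async
    public void buildReportOffline() {
        // runs on a task-executor thread, outside the HTTP session
        Iterable<MyEntity> results = neo4jOperations.loadAll(MyEntity.class);
        // ... build and send the result offline ...
    }
}
The key point is that nothing in this code path touches the session-scoped proxy, so it can safely run on the async thread.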
I ended up moving my Neo4j session to a thread scope. As our application is stateless, our session only spans one request, and as every request is handled in a separate thread, thread scope was the easiest way.
I'd like to thank the developer of https://github.com/devbury/spring-boot-starter-threadscope - it made my life easier. :)
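If pulling in a starter isn't an option, plain Spring can register an equivalent scope itself. A rough sketch using Spring's built-in SimpleThreadScope (the configuration class and the scope name "thread" are my own illustration, not taken from the starter):
import java.util.Collections;

import org.springframework.beans.factory.config.CustomScopeConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.SimpleThreadScope;

@Configuration
public class ThreadScopeConfig {

    // registers a custom "thread" scope backed by Spring's SimpleThreadScope
    @Bean
    public static CustomScopeConfigurer threadScopeConfigurer() {
        CustomScopeConfigurer configurer = new CustomScopeConfigurer();
        configurer.setScopes(
                Collections.<String, Object>singletonMap("thread", new SimpleThreadScope()));
        return configurer;
    }
}
The session bean could then be declared with @Scope(value = "thread", proxyMode = ScopedProxyMode.TARGET_CLASS). Note that SimpleThreadScope never destroys its beans on its own; as I understand it, handling that kind of cleanup is part of what the starter provides.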
I'm looking for a way to call multiple DAO functions in a transaction, but I am NOT using Spring or any such framework. What we actually have is a database-API-type .jar which gets initialized with the datasource in use. What I want to achieve is to have my business-logic-level code do something like:
Connection conn = datasource.getConnection();
conn.setAutoCommit(false);
DAOObject1.query1(params, conn);
DAOObject2.query4(params, conn);
conn.commit();
conn.setAutoCommit(true); // restore auto-commit after the transaction
However, I want to avoid passing the connection object into every single function, since this is not the correct way to do it. Right now, in the few transactions we have, we use this approach, but we are looking for a way to stop passing the connection object to the database layer, or even to create it outside of it. I'm looking for something along the lines of:
//Pseudocode
try {
    Datasource.startTransactionLogic();
    DAO1.query(params);
    DAO2.query(params);
    Datasource.endAndCommitTransactionLogic();
} catch (SQLException e) {
    Datasource.rollbackTransaction();
}
Could I achieve this through EJBs? Right now we're not using DAOs through injection; we're creating them by hand, but we're about to migrate to EJBs and start using them via the container. I've heard that all queries executed by EJBs are transactional, but how does it know what to roll back to? Through savepoints?
EDIT:
Let me point out that, right now, each DAO object's method obtains its own connection object. Here is an example of what our DAO classes look like:
public class DAO {
    public DTO exampleQueryMethod(Integer id) throws SQLException {
        DTO object = null;
        String sql = "SELECT * FROM TABLE_1 WHERE ID = ?";
        try (
            Connection connection = datasourceObject.getConnection();
            PreparedStatement statement = connection.prepareStatement(sql)
        ) {
            statement.setInt(1, id);
            try (ResultSet resultSet = statement.executeQuery()) {
                if (resultSet.next()) {
                    object = DAO.map(resultSet);
                }
            }
        }
        return object;
    }
}
Right now, what we're doing for methods that need to be in a transaction is to keep a second copy of them that receives a Connection object:
public void exampleUpdateMethod(DTO object, Connection connection) {
    // table update logic
}
What we want is to avoid having such methods in our 'database api' .jar, and instead be able to define the beginning and commit of a transaction in our business-logic layer, as in the pseudocode above.
What I have done in the past is to create a Repository object that takes the datasource, creates a connection, and saves that connection as a member variable (along with the datasource reference).
I then hang all the business-layer calls off this Repository object as methods, for the caller's convenience.
This way you can call, mix, and match any calls and use the underlying connection to perform rollbacks, commits, etc.
Repository myr = new Repository(datasource); // let the constructor create the connection
myr.setAutoCommit(false);
myr.DAOObject1(parms); // method wrapper
myr.DAOObject2(parms); // method wrapper
myr.commitwork(); // method in Repository that calls endAndCommitTransactionLogic
We then took this new object, created a pool of them, primed and managed in a separate thread, and the application simply requested a new Repository from the pool... and off we went.
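A rough sketch of what such a Repository wrapper could look like, wired against the DAO calls from the question (Params is a placeholder type; the method names follow the pseudo-calls above):
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Illustrative shape only: one Connection held for the life of the Repository,
// with the DAO calls exposed as instance methods that reuse it.
public class Repository {

    private final DataSource datasource;
    private final Connection connection;

    public Repository(DataSource datasource) throws SQLException {
        this.datasource = datasource;
        this.connection = datasource.getConnection(); // constructor creates the connection
    }

    public void setAutoCommit(boolean autoCommit) throws SQLException {
        connection.setAutoCommit(autoCommit);
    }

    // method wrappers: every DAO call reuses the single held connection
    public void DAOObject1(Params params) throws SQLException {
        DAOObject1.query1(params, connection);
    }

    public void DAOObject2(Params params) throws SQLException {
        DAOObject2.query4(params, connection);
    }

    public void commitwork() throws SQLException {
        connection.commit();            // the endAndCommitTransactionLogic equivalent
        connection.setAutoCommit(true);
    }

    public void rollbackwork() throws SQLException {
        connection.rollback();
    }
}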
@JBNizet's comment was correct, but... please think twice about whether you really need to migrate to EJBs. Even transactions are not very intuitive there: having your exceptions wrapped into javax.ejb.EJBException is neither flexible nor readable. Not to mention other problems, like startup time or integration testing.
Judging from your question, it seems that all you need is a dependency injection framework with support for interceptors. So possible ways to go:
Spring is definitely the most popular in this area
CDI (Weld or OpenWebBeans), available since the Java EE 6 release - but it can be used entirely without a Java EE application server (I'm using this approach right now, and it works nicely).
Guice also comes with its own com.google.inject.persist.Transactional annotation.
All three of the above frameworks are equally good for your use case, but there are other factors that should be considered, like:
which one you and your team are familiar with
learning curve
your application's future possible needs
framework's community size
framework's current development speed
etc.
Hope it helps you a little bit.
EDIT: to clarify your doubts:
You can create your own Transaction class, which would wrap a Connection fetched from datasource.getConnection(). Such a transaction should be a @RequestScoped CDI bean and contain methods like begin(), commit(), and rollback(), which would call connection.commit()/rollback() under the hood. Then you can write a simple interceptor like this one, which would use the mentioned transaction and begin/commit/rollback it wherever needed (of course with auto-commit disabled).
It is doable, but keep in mind that it should be carefully designed. That is why interceptors for transactions are already provided in almost every DI platform/framework.
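For illustration only, a bare-bones version of such an interceptor pair, assuming the @RequestScoped Transaction wrapper described above (the binding name @InTransaction and the class names are mine):
// InTransaction.java - a custom interceptor binding (hypothetical name)
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.interceptor.InterceptorBinding;

@InterceptorBinding
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface InTransaction {
}

// TransactionInterceptor.java - drives the request-scoped Transaction wrapper
import javax.inject.Inject;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@InTransaction
@Interceptor
public class TransactionInterceptor {

    @Inject
    private Transaction transaction; // the @RequestScoped wrapper around the Connection

    @AroundInvoke
    public Object manage(InvocationContext ctx) throws Exception {
        transaction.begin();            // setAutoCommit(false) under the hood
        try {
            Object result = ctx.proceed();
            transaction.commit();
            return result;
        } catch (Exception e) {
            transaction.rollback();
            throw e;
        }
    }
}
// remember to enable the interceptor in beans.xml (or via @Priority on CDI 1.1+)
Any CDI bean method annotated with @InTransaction would then run inside one transaction on the shared request-scoped connection, with commit and rollback handled in a single place.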
EDIT:
After accumulating a few more years of experience, I'd like to point out that the simplest and most correct answer to this question was to use a ThreadLocal object to hold the Connection (since the work is request scoped, only a single thread executes it). Unfortunately, at the time I didn't know such a construct existed.
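A minimal sketch of that ThreadLocal construct (the class and method names are illustrative):
import java.sql.Connection;

// Illustrative helper: keeps the current request's Connection on the executing thread,
// so DAOs can reach it without it being passed as an argument.
public final class ConnectionHolder {

    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<Connection>();

    private ConnectionHolder() {
    }

    public static void set(Connection connection) {
        CURRENT.set(connection);
    }

    public static Connection get() {
        return CURRENT.get();
    }

    public static void clear() {
        CURRENT.remove(); // important: servlet threads are pooled and reused
    }
}
Whatever opens the transaction (a filter, an interceptor, or the resource method itself) would set() the connection at the start of the request and clear() it in a finally block, and the DAOs would simply call ConnectionHolder.get() instead of receiving a Connection argument.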
@G. Demecki has the right idea, but I followed a different implementation. Interceptors couldn't solve the problem (at least from what I saw), because they need to be attached to each function that is supposed to use them. Also, once an interceptor is attached, every call to the function will be intercepted, which is not my goal. I wanted to be able to explicitly define the beginning and end of a transaction, and have every SQL statement executed between those two statements be part of the SAME transaction, without the code having access to the database-related objects (like the connection, transaction, etc.) through argument passing. The way I was able to achieve this (and quite elegantly, in my opinion) is the following:
I created a ConnectionWrapper object like so:
@RequestScoped
public class ConnectionWrapper {

    @Resource(lookup = "java:/MyDBName")
    private DataSource dataSource;

    private Connection connection;

    @PostConstruct
    public void init() throws SQLException {
        this.connection = dataSource.getConnection();
    }

    @PreDestroy
    public void destroy() throws SQLException {
        this.connection.close();
    }

    public void begin() throws SQLException {
        this.connection.setAutoCommit(false);
    }

    public void commit() throws SQLException {
        this.connection.commit();
        this.connection.setAutoCommit(true);
    }

    public void rollback() throws SQLException {
        this.connection.rollback();
        this.connection.setAutoCommit(true);
    }

    public Connection getConnection() {
        return connection;
    }
}
My DAO objects themselves follow this pattern:
@RequestScoped
public class DAOObject implements Serializable {

    private Logger LOG = Logger.getLogger(getClass().getName());

    @Inject
    private ConnectionWrapper wrapper;

    private Connection connection;

    @PostConstruct
    public void init() {
        connection = wrapper.getConnection();
    }

    public void query(DTOObject dto) throws SQLException {
        String sql = "INSERT INTO DTO_TABLE VALUES (?)";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, dto.getName());
            statement.executeUpdate();
        }
    }
}
Now I can easily have a JAX-RS resource which @Injects these objects and starts and commits a transaction, without having to pass any Connection or UserTransaction around.
#Path("test")
#RequestScoped
public class TestResource {
#Inject
ConnectionWrapper wrapper;
#Inject
DAOObject dao;
#Inject
DAOObject2 dao2;
#GET
#Produces(MediaType.TEXT_PLAIN)
public Response testMethod() throws Exception {
try {
wrapper.begin();
DTOObject dto = new DTOObject();
dto.setName("Name_1");
dao.query(dto);
DTOObject2 dto2 = new DTOObject2();
dto2.setName("Name_2");
dao2.query2(dto2);
wrapper.commit();
} catch (SQLException e) {
wrapper.rollback();
}
return Response.ok("ALL OK").build();
}
}
And everything works perfectly. No interceptors or poking around in InvocationContext, etc.
There are only 2 things bothering me:
I have not yet found a way to have a dynamic JNDI name on @Resource(lookup = "java:/MyDBName"), and this bothers me. In our app server we have defined many datasources, and the one used by the application is chosen dynamically according to an .xml resource file packaged with the war, which means I can't know the datasource JNDI name at compile time. There is the option of obtaining a datasource through an InitialContext() environment variable, but I'd love to be able to get it as a resource from the server. I could also create a @Produces producer and inject it that way, but still.
I'm not really sure why ConnectionWrapper's @PostConstruct gets called BEFORE the DAOObject's @PostConstruct. It is the correct and desired behavior, but I haven't understood why. I'm guessing that since DAOObject @Injects a ConnectionWrapper, the wrapper's @PostConstruct takes precedence, since it has to have finished before the DAOObject's can even start - but this is just a guess.
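On the first point, the @Produces route mentioned above might look roughly like this (the producer class and the readConfiguredJndiName() helper are hypothetical; the idea is only to move the JNDI name from an annotation constant into runtime code):
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Illustrative producer: resolve the JNDI name at runtime instead of hard-coding it in @Resource.
@ApplicationScoped
public class DataSourceProducer {

    @Produces
    @ApplicationScoped
    public DataSource dataSource() throws NamingException {
        String jndiName = readConfiguredJndiName();
        return (DataSource) new InitialContext().lookup(jndiName);
    }

    private String readConfiguredJndiName() {
        // hypothetical helper: parse the .xml resource packaged with the war
        // and return something like "java:/MyDBName"
        return "java:/MyDBName";
    }
}
ConnectionWrapper could then simply @Inject the DataSource instead of using @Resource with a hard-coded lookup.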
I removed static from the DAO methods and the sessionFactory. Now the IDE makes me switch back to using static DAO methods, because it says Non-static method updatePrice(long) cannot be referenced from a static context. Neither of the classes includes the static keyword. What's wrong? How do I fix it?
ServiceActionDAO
@Transactional
public class ServiceActionDAO {

    @Autowired
    SessionFactory sessionFactory;

    public void insert(ServiceActionEntity paramServiceAction) {
        Transaction localTransaction = null;
        try {
            Session localSession = sessionFactory.getCurrentSession();
            localSession.save(paramServiceAction);
            localSession.getTransaction().commit();
            ServiceOrderDAO.updatePrice(paramServiceAction.getServiceOrderFk().longValue()); // error
        } catch (Exception localException) {
            if (localTransaction != null) {
                localTransaction.rollback();
            }
        }
    }
}
UPDATE
I found a quick way to solve this problem by replacing the error line with:
new ServiceOrderDAO().updatePrice(paramServiceAction.getServiceOrderFk().longValue());
Now it's not a static call.
UPDATE 2
I have a lot of DAO classes and a number of controllers. I have to find a quick fix with minimum code changes, taking the Spring architecture into account. I have DAOs that call one or more other DAOs to perform some complex queries.
As was noted before, creating a new instance of a DAO would lead to unpredictable Spring session behavior.
It appears that my controllers also call DAO classes.
What is the easiest way (with minimum code changes) to fix this problem?
UPDATE 3
Ended up injecting DAOs into other DAOs and controllers. It seems like a quick fix, but from a conceptual point of view I doubt this is the best solution...
You can either
a) inject a reference to your ServiceOrderDAO into the ServiceActionDAO and call the method on the injected DAO instance, or
b) you can introduce a service layer that calls both DAOs in the same transaction, where each DAO is injected into the service.
Either way, you have to make both of the DAOs Spring-managed beans.
If you have a situation where you need to call one DAO from another DAO, introducing a service seems like an appropriate solution.
Also, the commit and rollback are unnecessary and even counterproductive here. When using Spring, you should be able to remove this code without a problem.
Making a new instance of the DAO is not a great solution, because it's not a Spring-managed bean: if it has autowired properties, those won't get set, and if it's using its own SessionFactory, different from the autowired one, you will get strange behavior because it will be using a different session than the Spring-managed DAOs.
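A rough sketch of option (a), assuming both DAOs are Spring-managed and transaction handling is left to @Transactional as suggested above (the @Repository stereotype is my addition):
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

// Sketch: the second DAO is injected rather than created with new,
// and Spring commits or rolls back the surrounding transaction.
@Repository
@Transactional
public class ServiceActionDAO {

    @Autowired
    private SessionFactory sessionFactory;

    @Autowired
    private ServiceOrderDAO serviceOrderDAO; // injected, not created with new

    public void insert(ServiceActionEntity serviceAction) {
        sessionFactory.getCurrentSession().save(serviceAction);
        serviceOrderDAO.updatePrice(serviceAction.getServiceOrderFk().longValue());
    }
}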
There's one thing I don't understand, and I hope someone can explain it to me. I have a Maven enterprise project developed with GlassFish.
I use the Insert Code NetBeans function (right click) to call a bean in a servlet, and in particular the annotation
@EJB
I don't understand why, when I call a stateful session bean through the Insert Code function in NetBeans, the bean is obtained through a JNDI lookup. Here is what I mean:
private BookingBeanInterface lookupBookingBeanLocal() {
    try {
        Context c = new InitialContext();
        return (BookingBeanInterface) c.lookup("java:global/it.volaconnoi_volaconnoi-webapp-ear_ear_1.0-SNAPSHOT/it.volaconnoi_volaconnoi-webapp-ejb_ejb_1.0-SNAPSHOT/BookingBean!it.volaconnoi.logic.BookingBeanInterface");
    } catch (NamingException ne) {
        Logger.getLogger(getClass().getName()).log(Level.SEVERE, "exception caught", ne);
        throw new RuntimeException(ne);
    }
}
The above function wasn't written by me.
Can't I inject a stateful session bean through @EJB?
Here is the solution to the problem:
As you probably already know, a single servlet instance is used to handle multiple requests from multiple clients, so the stateful EJB should not be injected directly into the servlet and kept as an instance property, or we will face obvious thread-safety issues. In our case we fetch it from JNDI inside the doGet method and store it in the HTTP session, so each user will have its own stateful EJB instance.
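In servlet terms, that pattern might look roughly like this (the servlet class, the attribute key, and the reuse of the generated lookupBookingBeanLocal() helper are illustrative):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Sketch of the pattern described above.
public class BookingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        HttpSession httpSession = request.getSession();

        // one stateful bean per user, kept in the HTTP session rather than in a servlet field
        BookingBeanInterface booking =
                (BookingBeanInterface) httpSession.getAttribute("bookingBean");
        if (booking == null) {
            booking = lookupBookingBeanLocal(); // the generated JNDI lookup shown in the question
            httpSession.setAttribute("bookingBean", booking);
        }

        // ... use booking for this user's conversation ...
    }
}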
I am learning Hibernate now and I need help understanding how Sessions work. I have some methods in a class, which I have given below.
I see there is a getCurrentSession() method in the SessionFactory class, so it seems that only one Session can be "active" inside a SessionFactory. Is the SessionFactory like a queue of transactions where the transactions are completed in order? If yes, is it possible to promote a transaction to a higher or lower priority?
private static SessionFactory factory;

// Get a Hibernate session.
public static Session getSession() {
    if (factory == null) {
        Configuration config = HibernateUtil.getConfiguration();
        factory = config.buildSessionFactory();
    }
    Session hibernateSession = factory.getCurrentSession();
    return hibernateSession;
}

public static void commitTransaction() {
    HibernateUtil.getSession().getTransaction().commit();
}

public static void rollbackTransaction() {
    HibernateUtil.getSession().getTransaction().rollback();
}
And some more methods that use getTransaction().
SessionFactory's job is to hide the session creation strategy. For example, in a web application you probably want the SessionFactory to create a Session the first time getCurrentSession() is called on a thread, and then return that same Session from that point forward for the duration of the request. (You probably want to load customer data in that session, then maybe modify the customer's account in that same session.) Other times, you may want the SessionFactory to create a brand new session every time you call getCurrentSession(). By hiding this decision behind the SessionFactory API, you simply write code that gets the Session from the factory and operates on it.
The Session is what handles transactions. As you probably expect, transactions are started in a Session and then either committed or rolled back. There is really no way to prioritize them, since once a transaction is started you are committed to either rolling it back or committing it.
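As a small illustration, the typical pattern with the HibernateUtil methods shown above would be something like this (sketch; assumes getCurrentSession() is configured to bind the session to the current thread):
// Typical use of the helper methods above: one unit of work per transaction.
Session session = HibernateUtil.getSession();
session.beginTransaction();
try {
    // ... load and modify entities through `session` ...
    HibernateUtil.commitTransaction();
} catch (RuntimeException e) {
    HibernateUtil.rollbackTransaction();
    throw e;
}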