Should I open it at the beginning of the HTTP request and close it at the end, given that each HTTP request is handled in a separate thread?
Or maybe save all sessions in a HashMap and access them statically?
Any information explaining how Hibernate sessions work (or what they really are) would be helpful.
If beginning of request/end of request means the HTTP request, then this is usually done by a servlet filter that opens/closes the session for you. This design pattern is called Open Session In View (Filter). You can get details here.
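A bare-bones sketch of such a filter is shown below. This is only an illustration under assumptions, not a drop-in implementation: it presumes a HibernateUtil helper exposing the SessionFactory and hibernate.current_session_context_class=thread, so the Session is bound to the request thread and closed when the transaction ends.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.hibernate.Session;

public class OpenSessionInViewFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        // One thread-bound Session and transaction per HTTP request.
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        session.beginTransaction();
        try {
            chain.doFilter(request, response); // views may still lazy-load here
            session.getTransaction().commit(); // with "thread" context this also closes the Session
        } catch (IOException | ServletException | RuntimeException e) {
            session.getTransaction().rollback();
            throw e;
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}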
This pattern is useful only if your application is rendered in the same JVM where the Hibernate Session exists. If your data access tier resides in a different JVM than your view rendering tier, you will have to (eagerly) fetch all the required model beans before dispatching the request for rendering the view.
If you are using Spring (or EJB3), you can have the Session (EntityManager) injected into your data access classes, so you won't need to open and close the session manually.
Ideally, you should not need to open/close the session or transaction manually (because that leaves room for missing a session.close() or tx.commit() and the like). Instead, use the container-provided JPA EntityManager or let Spring manage it for you.
There are multiple patterns for using the session, but the most common (and usually the proper) one is to open and close it on each request (= thread = unit of work).
In a Java EE environment you would normally make use of JPA. So use Hibernate through the EntityManager, which can be injected into components (like EJBs or CDI managed beans) with @PersistenceContext.
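For illustration, a container-managed variant could look like this minimal sketch (the Customer entity and repository names are hypothetical):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CustomerRepository {

    // The container injects an EntityManager whose underlying Hibernate Session
    // is opened and closed per transaction; no manual session handling needed.
    @PersistenceContext
    private EntityManager em;

    public Customer find(Long id) {
        return em.find(Customer.class, id);
    }

    public void save(Customer customer) {
        em.persist(customer);
    }
}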
Usually a session is opened when access to the data store is needed (e.g. a transaction begins). When to close it follows different patterns and approaches: you could keep the session open in the views (JSPs), but you don't have to.
For example, one of our projects doesn't allow use of the OpenSessionInView filter, so the session was closed after the transaction ended, and all the data (value objects, basically) that needed to be sent to the view was loaded before dispatching.
Related
We have a web application that is using Spring Boot (1.5) with Vaadin (7.7), and is using Apache Shiro (1.4.0) for security.
The application is configured to use DefaultWebSessionManager to let Shiro handle the session management instead of the servlet container.
We are using the official Vaadin Spring integration (1.2.0), and after some configuration it all works as intended. The VaadinSession contains a wrapped ShiroHttpSession internally.
We want to achieve session replication, by configuring Shiro to use a SessionDAO that is backed by an external Cache, which means the sessions get (de)serialized.
As soon as we start using this SessionDAO, Vaadin crashes and stops working. When we replace the external cache with an in-memory Map for the sake of debugging, it works again.
It seems this is caused by the SpringVaadinServlet, as it stores the VaadinSession as a session attribute. VaadinSession is Serializable and the Javadoc shows:
Everything inside a VaadinSession should be serializable to ensure compatibility with schemes using serialization for persisting the session data.
Inside the VaadinSession there are some fields that are not Serializable, for example a Lock, and the wrapped HTTP session inside is also marked as transient.
Because of this, the session that Vaadin uses breaks as soon as it is distributed, resulting in a lot of crashes.
So it turns out the VaadinSession is not actually usable for session replication? Why is this, and how can we work around it?
Note: we also have a version of the application that is using Vaadin 8, and here the same thing happens. It seems that the issue is caused by the Vaadin Spring integration.
Inside the VaadinSession there are some fields that are not Serializable, for example a Lock, and the wrapped HTTP session inside is also marked as transient.
The wrapped HTTP session is not part of the Vaadin session; it is the HTTP session itself, which is why it is transient. The same can be said about the Lock, whose instance is stored in the HTTP session.
In order to implement session serialization correctly, you need to hook into the serialization events and update the transients when the session is being deserialized. The VaadinSession should be loaded with VaadinService#loadSession, which calls VaadinSession#refreshTransients.
Everything inside a VaadinSession should be serializable to ensure compatibility with schemes using serialization for persisting the session data.
This statement does not imply that you can serialize your application out of the box. It just means that, if your application is serializable as well, you can serialize the whole thing with careful engineering.
For example, Vaadin does not update the session attribute at every possible occasion, for performance reasons. There is the method VaadinService#storeSession for that, so you need to either override the right method or set up a request filter; e.g. you could do this at VaadinService#endRequest.
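As a rough illustration only (plain Vaadin 7/8 servlet API; with the Spring integration you would extend the corresponding Spring service class, and the exact hook method can differ between versions), overriding the service could look like this:

import com.vaadin.server.DeploymentConfiguration;
import com.vaadin.server.ServiceException;
import com.vaadin.server.VaadinRequest;
import com.vaadin.server.VaadinResponse;
import com.vaadin.server.VaadinServlet;
import com.vaadin.server.VaadinServletService;
import com.vaadin.server.VaadinSession;

public class ReplicatingVaadinServletService extends VaadinServletService {

    public ReplicatingVaadinServletService(VaadinServlet servlet,
            DeploymentConfiguration configuration) throws ServiceException {
        super(servlet, configuration);
    }

    @Override
    public void requestEnd(VaadinRequest request, VaadinResponse response,
            VaadinSession session) {
        super.requestEnd(request, response, session);
        if (session != null) {
            // Write the (possibly modified) VaadinSession back into the wrapped
            // HTTP session so an external SessionDAO sees the latest state.
            storeSession(session, request.getWrappedSession());
        }
    }
}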
Note that you need to use sticky sessions to get this working with a moderate amount of effort. If your session is deserialized on a different machine, the re-entrant lock instances won't be valid. If you want to be able to deserialize the session on a different machine, your infrastructure has to offer a distributed lock that you can use instead of Java's re-entrant Lock, and you need to override Vaadin's getSessionLock and setSessionLock methods to use it.
Valuable sources of further info:
Generic notes from Vaadin's CTO
https://vaadin.com/blog/session-replication-in-the-world-of-vaadin
Testimonial from developer who did it with one stack
https://vaadin.com/learn/tutorials/hazelcast
Thoughts from another senior developer
https://mvysny.github.io/vaadin-14-session-replication/
This may be more of a conceptual than a technical question; however, I hope you can provide me some advice on how to proceed.
We are developing a large Java EE 7 application that is stateless and receives requests from clients. Each request contains a session ID, and each session contains a large number of domain objects that are session specific.
We created a RequestScoped class that contains all the producer methods for our domain objects. When a request comes in with a session ID, we call a setter method on the producer to set the session ID in the producer CDI bean.
Now if one of the RequestScoped classes along the chain needs one of the domain objects, it has an @Inject definition at the beginning of the class to get the domain object from the producer. The producer itself has a connection to an in-memory DB to retrieve the domain objects from there and keeps them in a local variable for future use in this request.
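To make the setup concrete, here is a rough sketch of the two classes involved (DomainObjectX, InMemoryDbGateway and BeanA are hypothetical names standing in for our actual classes, and would of course live in separate files):

import javax.enterprise.context.RequestScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;

@RequestScoped
public class DomainObjectProducer {

    @Inject
    private InMemoryDbGateway db;   // assumed gateway to the in-memory DB

    private String sessionId;
    private DomainObjectX cachedX;  // kept for the rest of this request

    public void setSessionId(String sessionId) {
        this.sessionId = sessionId;
    }

    @Produces
    @RequestScoped
    public DomainObjectX produceX() {
        if (cachedX == null) {
            cachedX = db.load(DomainObjectX.class, sessionId);
        }
        return cachedX;
    }
}

@RequestScoped
public class BeanA {

    @Inject
    private DomainObjectX x;   // CDI injects a client proxy to the contextual instance

    public void changeSomething() {
        // The call goes through the proxy to the single request-scoped instance.
        x.setSomeProperty("changed");
    }
}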
Now to the question: say bean A injects domain object X and changes some properties on X. Do I have to call an "update" method in my producer and pass domain object X as a parameter, or is it updated automatically in the context?
Upon injection in the request scope, the CDI container creates a proxy to access the actual bean. Would this proxy be usable just like a regular reference? E.g., if I call a method on my injected bean, does it update the bean behind the proxy?
I know this will probably get me downvoted, but I'll answer anyway because I'm hoping it'll be valuable to you. It sounds like you guys have put the cart a mile in front of the horse.
The Producer itself has a connection to an in-memory DB to retrieve the domain objects from there and keeps them in a local variable for future use in this request.
You're trying to re-invent what's called replicated, distributed sessions. Don't do this. Use @SessionScoped beans, keep the business logic in your app, and let your infrastructure handle the application state. Imagine yourself years from now looking at this application, when your boss wants a UI refresh and your customers are demanding new features. You're going to have to maintain not only the application, but also an entire mess of a buggy distributed framework you built :(
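A minimal sketch of that approach (names are hypothetical): a Serializable @SessionScoped bean holds the per-user state, and the container or data grid takes care of replicating it.

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import javax.enterprise.context.SessionScoped;

@SessionScoped
public class SessionDomainObjects implements Serializable {

    private static final long serialVersionUID = 1L;

    // Everything stored here must itself be Serializable for replication to work.
    private final Map<String, Object> domainObjects = new HashMap<>();

    public void put(String key, Object value) {
        domainObjects.put(key, value);
    }

    public Object get(String key) {
        return domainObjects.get(key);
    }
}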
Instead, you can use a distributed in-memory DB to hold your session state and cache it locally! Apache Tomcat/TomEE has great support for this (I'm not sure which application server you are using).
Take a look at:
https://github.com/magro/memcached-session-manager (use Couchbase, Redis, Memcached, Hazelcast, GridGain, or Apache Geode)
http://community.gemstone.com/display/gemfire/Setting+Up+GemFire+HTTP+Session+Management+for+Tomcat (Specific to Gemfire)
We use the first with great success. If the Tomcat instance encounters a session ID it doesn't have locally, it pulls it from the data grid. When it's done processing the request, it publishes the session changes back to the data grid. This is extremely fast and scales beautifully.
If your application server does not have the ability to do this, then instead of writing the application in this painful manner, I would concentrate your efforts on writing a session replicator like memcached-session-manager. Good luck!
Based on this post http://www.adam-bien.com/roller/abien/entry/ejb_3_1_killed_the I use a @Named @Stateless bean in my app for communication with the database (injecting an EntityManager here) and for displaying information on a JSF page. It's a great simplification compared to Java EE 5, but I have one question.
Is it safe to use such beans for maintaining a user session (shopping cart etc.)? I read a book about EJB 3.0, and I know that the same stateless bean can be used by many clients.
What's the best approach to using a managed bean with all the EJB features (transactions, thread safety, etc.)? I mean, is there any other way than managed bean + EJB interface with implementation + EJB injection, as in Java EE 5?
I use GlassFish 3.1 Web Profile.
Adding to duffymo's advice: there are some additional considerations when using stateful session beans vs. the HTTP session.
The HTTP session basically has a map-like structure. It's directly available to all threads (requests) that are part of the session, which makes manipulating several items a relatively unsafe operation. It's possible to synchronize on the session itself, but this is a risky operation that can potentially deadlock your entire application. The HTTP session does allow you to declare event listeners, which fire upon any kind of modification of the HTTP session.
The stateful session bean, of course, has a bean structure. It has a kind of auto-synchronization feature, as only one thread can be active in the bean at a time. Via annotations you can declare whether other threads wait (and if so, for how long) or immediately throw an exception in the face of concurrent access.
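For example, the EJB 3.1 @AccessTimeout annotation controls this behaviour; a small sketch (bean and method names are hypothetical):

import java.util.concurrent.TimeUnit;
import javax.ejb.AccessTimeout;
import javax.ejb.Stateful;

@Stateful
public class CartBean {

    // Concurrent callers wait up to 5 seconds for the bean to become free;
    // @AccessTimeout(0) would make them fail immediately instead.
    @AccessTimeout(value = 5, unit = TimeUnit.SECONDS)
    public void addItem(String item) {
        // ... mutate conversational state ...
    }
}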
Where there is normally only one HTTP session per user, a single user can make use of multiple stateful session beans at the same time. A particular advantage of stateful session beans is that they have a mechanism to passivate their state after some timeout, which can free up your server's memory (at the cost of disk space, of course). Stateful session beans do not directly have the kind of event listeners that the HTTP session has.
I think that originally the "session" aspect of stateful session beans was meant to maintain a session with remote non-web clients (Swing, another AS, etc.), much like the HTTP session was created to maintain a session with remote web clients. Since a non-web client can request and hold on to multiple proxies for stateful session beans, the web analogy is actually closer to the recently introduced conversation scope.
In the case of remote web clients talking to a server, where the server internally talks to a stateful session bean, the concepts greatly overlap. The remote web client only knows about the HTTP session (via the JSESSIONID) and nothing about the stateful session bean's session. So if the HTTP session were lost, you typically would not be able to connect the remote client with the specific stateful session bean again. The HTTP session in this case is thus always leading, and you might as well store your shopping cart items inside a single (HTTP) session scoped bean.
There is one specific case where stateful session beans come in handy for internal communication, and that's when you need JPA's extended persistence context. This can be used if, e.g., locks on entities need to last between requests (which can be handy for a shopping cart if you have limited stock and don't want to confront the user with an "out of stock" message only when he actually checks out).
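A small sketch of such an extended persistence context (CheckoutBean and Product are hypothetical names; Java EE 6+/JPA 2.0 API):

import javax.ejb.Remove;
import javax.ejb.Stateful;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceContextType;

@Stateful
public class CheckoutBean {

    // The persistence context spans multiple calls to this stateful bean,
    // so entities loaded here stay managed between requests.
    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    private Product product;

    public void pickProduct(Long productId) {
        product = em.find(Product.class, productId); // stays managed afterwards
    }

    public void decreaseStock() {
        product.setStock(product.getStock() - 1);    // no merge needed later
    }

    @Remove
    public void checkout() {
        // Changes to the still-managed entity are flushed when this
        // transactional method completes and the bean is removed.
    }
}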
Stateless beans cannot maintain shopping carts or sessions; that's what "stateless" means.
You need either a stateful EJB or to handle it in the web tier. Those are the only places where session state is maintained.
We've got a Spring-based web application that uses Hibernate to load and store its entities in the underlying database.
Since it's a backend application, we want to allow not only our UI but also third-party tools to manually initiate DB transactions. That's why the callers need to:
Call a StartTransaction method and in return get an ID that they can refer to
Make all DB-relevant calls (e.g. creating, modifying, deleting) referring to this ID, to make clear which operations belong to the started transaction
Call the CommitTransaction method to signal to our backend that the transaction can be committed now (or, in the negative case, call RollbackTransaction)
So, keeping in mind that all database handling is done internally by the Java persistence annotations, how can we expose transaction management to our UI, which behaves like a third-party application that has no direct access to the backend entities and deals only with data transfer objects?
From the Spring Reference: Programmatic transaction management
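For illustration, programmatic transaction management with Spring's TransactionTemplate looks roughly like the sketch below (OrderService, OrderDao and Order are hypothetical names). Note that this alone does not span multiple client calls; it only shows the programmatic API the reference chapter describes.

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

public class OrderService {

    private final TransactionTemplate txTemplate;
    private final OrderDao orderDao;

    public OrderService(PlatformTransactionManager txManager, OrderDao orderDao) {
        this.txTemplate = new TransactionTemplate(txManager);
        this.orderDao = orderDao;
    }

    public void saveOrder(final Order order) {
        // Committed on normal return, rolled back if the callback throws.
        txTemplate.execute(status -> {
            orderDao.save(order);
            return null;
        });
    }
}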
I think this can be done, but it would be a royal pain to implement and verify. You would basically need a transaction manager that is not bound by the per-thread-transaction definition but spans multiple invocations from the same client.
JTA + Stateful session beans might be something you would want to have a look at.
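A hedged sketch of that idea: a stateful session bean with bean-managed transactions may leave a JTA transaction open between calls, so StartTransaction/CommitTransaction style methods could map onto it (class and method names are hypothetical; whether this is practical depends on your container and timeouts):

import javax.annotation.Resource;
import javax.ejb.Stateful;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateful
@TransactionManagement(TransactionManagementType.BEAN)
public class ClientTransactionBean {

    @Resource
    private UserTransaction utx;

    public void startTransaction() throws Exception {
        // A stateful BMT bean keeps the transaction associated with the
        // bean instance across multiple client invocations.
        utx.begin();
    }

    public void commitTransaction() throws Exception {
        utx.commit();
    }

    public void rollbackTransaction() throws Exception {
        utx.rollback();
    }
}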
Why don't you build services around your back-end application, for example a SOAP interface or a REST interface?
With this strategy you can manage your transactions in the backend.
I am using Turbine 2.3.2 with Hibernate 3. My problem is that the Hibernate session is not active when my (Velocity 1.6.4) template is executed, and I am accessing data from the database for which Hibernate needs lazy initialization. Therefore I get a LazyInitializationException ("no Session") error.
Since I want my Hibernate session to be alive while a Velocity template executes, I would like to have a class that is executed before and after the Velocity template. This way I could open and close my Hibernate session in one place. (Disabling lazy initialization in Hibernate is not an option for me.) Are there any possibilities, related to Turbine or not, to write a kind of listener or filter (I am not sure what to call it) that would execute right before and after a Velocity template has been executed? Or maybe the servlet container could filter requests... Which option would you recommend?
Try looking at Spring's OpenSessionInViewFilter. It opens the Hibernate Session and assigns it to a ThreadLocal; that way, you can pick it up in your data access layer and use it.
Open Session in View is not a clean solution. You can configure in your Criteria (if you use them) which association paths Hibernate has to fetch eagerly.
If you use HQL, just "touch" the association while the session is still open.
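For illustration, both approaches might look like the sketch below (Hibernate 3 API; the Project entity, its lazy "tasks" collection and the DAO are hypothetical examples):

import org.hibernate.Criteria;
import org.hibernate.FetchMode;
import org.hibernate.Hibernate;
import org.hibernate.Session;
import org.hibernate.criterion.Restrictions;

public class ProjectDao {

    // Option 1 (Criteria): ask Hibernate to fetch the association eagerly.
    public Project loadWithTasks(Session session, Long id) {
        return (Project) session.createCriteria(Project.class)
                .add(Restrictions.idEq(id))
                .setFetchMode("tasks", FetchMode.JOIN)
                .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY)
                .uniqueResult();
    }

    // Option 2 (HQL): load the entity and "touch" the lazy association
    // while the Session is still open.
    public Project loadAndTouch(Session session, Long id) {
        Project project = (Project) session
                .createQuery("from Project p where p.id = :id")
                .setParameter("id", id)
                .uniqueResult();
        Hibernate.initialize(project.getTasks()); // or simply project.getTasks().size()
        return project;
    }
}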
Your question seems to be about the (in)famous Open Session In View (OSIV) pattern.
Have a look at the Open Session in View page on the JBoss wiki; you'll find a filter-based implementation there (not Spring-based).