Guice Provider<EntityManager> vs EntityManager - java

I was trying to get simple webapp working with Guice and JPA on Jetty, using the persistence and servlet guice extensions.
I have written this Service implementation class:
public class PersonServiceImpl implements PersonService {

    private EntityManager em;

    @Inject
    public PersonServiceImpl(EntityManager em) {
        this.em = em;
    }

    @Override
    @Transactional
    public void savePerson(Person p) {
        em.persist(p);
    }

    @Override
    public Person findPerson(long id) {
        return em.find(Person.class, id);
    }

    @Override
    @Transactional
    public void deletePerson(Person p) {
        em.remove(p);
    }
}
And this is my servlet (annotated with @Singleton):
@Inject
PersonService personService;

@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    String name = req.getParameter("name");
    String password = req.getParameter("password");
    String email = req.getParameter("email");
    int age = Integer.valueOf(req.getParameter("age"));

    Person p = new Person();
    p.setAge(age);
    p.setName(name);
    p.setEmail(email);
    p.setPassword(password.toCharArray());

    logger.info("saving person");
    personService.savePerson(p);
    logger.info("saved person");

    logger.info("extracting person");
    Person person = personService.findPerson(p.getId());
    resp.getWriter().print("Hello " + person.getName());
}
When I run this it appears to work and the name is sent back to the client, but the log shows no DML generated for the insert, and querying PostgreSQL directly returns no rows, which means the entity was never really persisted.
I debugged through the code and I saw that JpaLocalTxnInterceptor called txn.commit().
Then I changed PersonServiceImpl to use Provider<EntityManager> instead of EntityManager directly, and it worked as expected. I don't really understand why, probably because I don't yet understand the idea behind Provider.
On the Guice wiki page it says:
Note that if you make MyService a @Singleton, then you should inject Provider instead.
However, my PersonServiceImpl is not a @Singleton, so I am not sure why this applies; perhaps it's because of the servlet?
I would really appreciate if you could clear this out for me.

You need Provider<EntityManager> because Guice's built-in persistence and servlet extensions expect EntityManager to be request-scoped. By injecting a request-scoped EntityManager into a service held by a singleton servlet, you are making a scope-widening injection, and the stale, mismatched EntityManager you end up holding onto will not persist your data.
Providers
Provider is a one-method interface that exposes a get() method. If you inject a Provider<Foo> and then call get(), it will return an instance created the same way as if you had injected Foo directly. However, injecting the Provider allows you to control how many objects are created, and when they are created. This can be useful in a few cases:
only creating an instance if it's actually needed, especially if the creation takes lots of time or memory
creating two or more separate instances from within the same component
deferring creation to an initialization method or separate thread
mixing scopes, as described below
For a binding of X, Provider<X>, or @Provides X, Guice will automatically allow you to inject either X or Provider<X>. You can use Providers without adjusting any of your bindings, and they work fine with binding annotations.
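For illustration, a minimal sketch (Foo and FooUser are made-up types; any binding behaves the same way):

import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;
import com.google.inject.Provider;

public class ProviderDemo {

    // Hypothetical type; Guice can build it just-in-time via its @Inject constructor.
    public static class Foo {
        @Inject
        public Foo() {}
    }

    public static class FooUser {
        @Inject Foo foo;                   // one Foo, created when FooUser is injected
        @Inject Provider<Foo> fooProvider; // a handle for obtaining Foo instances later

        public void doWork() {
            Foo a = fooProvider.get();     // created on demand
            Foo b = fooProvider.get();     // unscoped binding: a separate, independent instance
            System.out.println(a != b);    // prints true
        }
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(); // no module needed for this sketch
        injector.getInstance(FooUser.class).doWork();
    }
}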
Scopes and scope-widening injections
Broadly speaking, scopes define the lifetime of the object. By default, Guice creates a new object for every injection; by marking an object #Singleton, you instruct Guice to inject the same instance for every injection. Guice's servlet extensions also support #RequestScoped and #SessionScoped injections, which cause the same object to be injected within one request (or session) consistently but for a new object to be injected for a different request (or session). Guice lets you define custom scopes as well, such as thread scope (one instance per thread, but the same instance across injections in the same thread).
@Singleton
public class YourClass {
    @Inject HttpServletRequest request; // BAD IDEA
}
What happens if you inject a request-scoped object directly from within a #Singleton component? When the singleton is created, it tries to inject the instance relevant to the current request. Note that there might not be a current request, but if there is one, the instance will be saved to a field in the singleton. As requests come and go, the singleton is never recreated, and the field is never reassigned--so after the very first request your component stops working properly.
Injecting a narrow-scope object (#RequestScoped) into a wide scope (#Singleton) is known as a scope-widening injection. Not all scope-widening injections show symptoms immediately, but all may introduce lingering bugs later.
How Providers help
PersonService isn't itself annotated with @Singleton, but because an instance of it is injected into and stored in a @Singleton servlet, it effectively behaves like a singleton. That means the EntityManager it holds gets singleton behavior too, for the same reason.
According to the page you quoted, EntityManager is meant to be short-lived, existing only for the duration of a session or request. That lets Guice commit the transaction automatically when the session or request ends, but reusing the same EntityManager is likely what prevents your data from being stored after the first request. Switching to a Provider keeps the scope narrow by obtaining a fresh EntityManager on every request.
(You could also inject a Provider<PersonService> into the servlet, which would likely also solve the problem, but I think it's better to follow Guice's best practices and keep EntityManager's scope explicitly narrow with a Provider.)
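For reference, a minimal sketch of your service with that change applied (assuming your existing Person and PersonService types and guice-persist's @Transactional):

import com.google.inject.Inject;
import com.google.inject.Provider;
import com.google.inject.persist.Transactional;
import javax.persistence.EntityManager;

public class PersonServiceImpl implements PersonService {

    // Inject a Provider instead of the EntityManager itself, so every call
    // obtains the EntityManager belonging to the current request.
    private final Provider<EntityManager> em;

    @Inject
    public PersonServiceImpl(Provider<EntityManager> em) {
        this.em = em;
    }

    @Override
    @Transactional
    public void savePerson(Person p) {
        em.get().persist(p);
    }

    @Override
    public Person findPerson(long id) {
        return em.get().find(Person.class, id);
    }

    @Override
    @Transactional
    public void deletePerson(Person p) {
        em.get().remove(p);
    }
}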

Related

How to pass data from EJB Interceptor to Interceptor in Async EJB

I have 2 Stateless EJBs StatelessA and StatelessB, both of them have interceptors InterceptorA and InterceptorB respectively. Also, StatelessB has Asynchronous methods. Something like this:
@Stateless
@Interceptors(InterceptorA.class)
public class StatelessA { ...

@Stateless
@Asynchronous
@Interceptors(InterceptorB.class)
public class StatelessB { ...
When calling a method on StatelessA, it calls several StatelessB methods and returns a value.
I am trying to develop two interceptors to store the total time and the subtotal times of the StatelessB calls; this is the whole purpose of the interceptors.
I need InterceptorA to be able to see InterceptorB's data, so that I can store a single value in the DB containing the total time (of SLSB A) and the subtotal times (of SLSB B).
I tried using a ThreadLocal variable (containing a list of times, something like long[]), which works fine if StatelessB is not asynchronous.
The problem is that when it is asynchronous, the variable is not available, since it is running in a different thread (AFAIK).
I also tried injecting EJBContext or using the InvocationContext, but none of them works.
Can someone point out what other alternatives I have?
Thanks in advance.
I was thinking this over and over, and arrived at a solution: using the security context to pass data.
The solution involves using the only data propagated in an asynchronous invocation, as specified in EJB 3.1:
4.5.4 Security: Caller security principal propagates with an asynchronous method invocation. Caller security principal propagation behaves exactly the same for asynchronous method invocations as it does for synchronous session bean invocations.
In JBoss, one can access the security context and use a data map in it to pass the values from InterceptorA to InterceptorB, as follows:
In InterceptorA:
SecurityContext securityContext = SecurityContextAssociation.getSecurityContext();
securityContext.getData().put("interceptorAData",data);
In InterceptorB:
SecurityContext securityContext = SecurityContextAssociation.getSecurityContext();
Object data = securityContext.getData().get("interceptorAData");
I tested it and it works great in JBoss EAP 6.1.
This solution implies coupling the interceptors to the server implementation (JBoss AS), but the principle should work for other servers.
The advantage is that it decouples the application logic from the interceptors, which was the first objective.
I appreciate any comments.
Would it work to store the information you need in an @Entity object and then use the @PersistenceContext annotation to inject an EntityManager into the beans to persist and find the data? Something like:
@PersistenceContext
EntityManager entityManager;
...
method() {
    MyEntityTimer met = new MyEntityTimer(getCurrentTime(), id);
    entityManager.persist(met);
}
...
elsewhere:
MyEntityTimer met = entityManager.find(MyEntityTimer.class, id);
and:
@Entity
@Table(name = "TABLE")
public class MyEntityTimer {

    @Id
    @Column(name = "ID")
    private int id;
    ...
}
I'll answer my own question with what I ended up doing.
The only way I found to pass a variable from InterceptorA to InterceptorB was adding an extra parameter to the EJBs A and B, something like this:
@Stateless
@Interceptors(InterceptorA.class)
public class StatelessA {
    public void methodA(Object reserved, ...other params)

@Stateless
@Asynchronous
@Interceptors(InterceptorB.class)
public class StatelessB {
    public void methodB(Object reserved, ...other params)
This way, when InterceptorA is called, I set the reserved parameter to the data I need to share with InterceptorB.
InterceptorB can then read that data from the method parameters without any issue.
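Roughly, the interceptors look something like this (a simplified sketch; the timing logic here is just for illustration, and it assumes methodA forwards the same reserved object when calling methodB):

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

// Both interceptors would normally live in their own files; shown together for brevity.
class InterceptorA {

    @AroundInvoke
    public Object setUpSharedData(InvocationContext ctx) throws Exception {
        // Place the shared data structure into the reserved first parameter of methodA.
        // The business code of StatelessA is expected to pass the same object
        // as the reserved parameter when it calls methodB.
        Object[] args = ctx.getParameters();
        args[0] = new long[1];
        ctx.setParameters(args);
        return ctx.proceed();
    }
}

class InterceptorB {

    @AroundInvoke
    public Object recordSubtotal(InvocationContext ctx) throws Exception {
        // Read the shared structure that travelled in the reserved parameter.
        long[] shared = (long[]) ctx.getParameters()[0];
        long start = System.nanoTime();
        try {
            return ctx.proceed();
        } finally {
            shared[0] += System.nanoTime() - start; // accumulate the subtotal for this call
        }
    }
}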
The downside of this solution is that the dummy parameters are needed, which couples the EJBs to the interceptors to some extent.

CDI and pooling

Does CDI allow pooling in some way? I thought this was a feature of EJB beans, but Adam Bien says in this screencast that the container chooses whether to create a new instance of the class through reflection or to use an existing one. So if I have, for example, these two beans:
@RequestScoped
public class RequestBean {
    public void doIt() {
    }
}

@SessionScoped
public class SessionBean {

    @Inject
    private RequestBean bean;

    public void doSomething() {
        bean.doIt();
    }
}
the question is: is a new instance of RequestBean created every time doSomething is called, or does the CDI container somehow manage instances in a pool?
The first one is scoped to the request, so a new instance is created for each request. The second one is scoped to the session, so a new one is created for each session.
CDI doesn't pool and recycle the objects, because it has no idea if the objects are stateful or not, and you don't want, in a request, to get back the state that a bean had in a previous request. That would ruin the whole point of the request/session scope.
Unless beans are really costly to create (because they start a new connection or something like that), pooling them doesn't bring any advantage. Short-lived objects are very fast to create and garbage collect nowadays. And if the bean is really expensive to create, then it should probably be a singleton.
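If you want to see this for yourself, a quick check (just a sketch, with a throwaway log line) is to log from a @PostConstruct callback; even though the SessionBean lives across requests, a fresh RequestBean will be logged for every request:

import javax.annotation.PostConstruct;
import javax.enterprise.context.RequestScoped;

@RequestScoped
public class RequestBean {

    @PostConstruct
    void created() {
        // Logged once per request: the session-scoped bean holds a client proxy,
        // and the proxy delegates to a new instance for each request.
        System.out.println("RequestBean instance: " + System.identityHashCode(this));
    }

    public void doIt() {
    }
}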

Mixing declarative and programmatic transactions with Spring and JPA listeners

I'm using a JPA EntityListener to do some additional audit work and am injecting a Spring-managed AuditService into my AuditEntryListener using @Configurable. The AuditService generates a collection of AuditEntry objects. The AuditService is itself a singleton-scoped bean, and I'd like to gather all the AuditEntry objects under a common key that can then be accessed by the outermost service layer (the one that invoked the persist call which in turn triggered the EntityListener).
I'm looking at using Spring's TransactionSynchronizationManager to set a specific transaction name (using UID() or some other unique strategy) at the beginning of the transaction, and then using that name as a key within the AuditService that will allow me to group all AuditEntry objects created within that transaction.
Does mixing declarative and programmatic transaction management have the potential for trouble? (Though I'm doing nothing more than setting the transaction name.) Is there a better way to associate the generated AuditEntry objects with the current transaction? This solution does work for me, but given that the TransactionSynchronizationManager isn't intended for application use, I'd like to make sure that my use of it won't cause some unforeseen problems.
Related Question
Finally, a related but not immediately pertinent question: I know that the documentation for JPA EntityListeners cautions against using the current EntityManager, but if I did want to use it to diff an object against its persisted self, would I be safe using a @Transactional(propagation = REQUIRES_NEW) annotation around my preUpdate() method?
Prototype Code:
Service Class
@Transactional
public void create(MyEntity e) {
    TransactionSynchronizationManager.setCurrentTransactionName(new UID().toString());
    this.em.persist(e);
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
        @Override
        public void afterCommit() {
            Set<AuditEntry> entries = auditService.getAuditEntries(TransactionSynchronizationManager.getCurrentTransactionName());
            if (entries != null) {
                for (AuditEntry entry : entries) {
                    // do some stuff....
                    LOG.info(entry.toString());
                }
            }
        }
    });
}
JPA EntityListener
@Configurable
public class AuditEntryListener {

    @Autowired
    private AuditService service;

    @PreUpdate
    public void preUpdate(Object entity) {
        service.auditUpdate(TransactionSynchronizationManager.getCurrentTransactionName(), entity);
    }

    public void setService(AuditService service) {
        this.service = service;
    }

    public AuditService getService() {
        return service;
    }
}
AuditService
@Service
public class AuditService {

    private Map<String, Set<AuditEntry>> auditEntryMap = new HashMap<String, Set<AuditEntry>>();

    public void auditUpdate(String key, Object entity) {
        // do some audit work
        // add audit entries to map
        this.auditEntryMap.get(key).add(ae);
    }
}
@Filip
As far as I understand, your requirement is:
Have a unique token generated within each transaction (database transaction, of course)
Keep this unique token easily accessible across all layers
So naturally you're thinking about the TransactionSynchronizationManager provided by Spring as a facility to store the unique token (in this case, a UID)
Be very careful with this approach: the TransactionSynchronizationManager is the main storage helper that manages all of the @Transactional processing for Spring. Under the @Transactional hood, Spring creates an appropriate EntityManager and an appropriate Synchronization object and attaches them to a thread local using TransactionSynchronizationManager.
In your service class code, inside a @Transactional method, you are tampering with that Synchronization object, which can end up in undesirable behavior.
I've done an in-depth analysis of how @Transactional works here, have a look: http://doanduyhai.wordpress.com/2011/11/20/spring-transactional-explained/
Now back to your needs. What you can do is:
Add a ThreadLocal to the AuditService containing the unique token; set it when entering the @Transactional method and clear it when exiting the method. Within this method call, you can access the unique token from any layer. An explanation of ThreadLocal usage can be found here: http://doanduyhai.wordpress.com/2011/12/04/threadlocal-explained/
Create a new annotation, say @Auditable(uid="AuditScenario1"), to annotate methods that need to be audited, and use Spring AOP to intercept these method calls and manage the ThreadLocal processing for you
Example:
Modified AuditService
@Service
public class AuditService {

    public static final ThreadLocal<String> uidThreadLocal = new ThreadLocal<String>();
    ...
    ...
}
Auditable annotation
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface Auditable
{
    String uid();
}
Usage of #Auditable annotation
@Auditable(uid = "AuditScenario1")
@Transactional
public void myMethod()
{
    // Something
}
Spring AOP part
@Around("execution(public * *(..)) && @annotation(auditableAnnotation)")
public Object manageAuditToken(ProceedingJoinPoint jp, Auditable auditableAnnotation)
{
    ...
    ...
    AuditService.uidThreadLocal.set(auditableAnnotation.uid());
    ...
}
Hope this will help.
You can come up with a solution using the TransactionSynchronizationManager. We register a "TransactionInterceptorEntityListener" with JPA as an entity listener. What we wanted was the ability to listen to CRUD events with a Spring-managed "listener" whose lifecycle is tied to the current transaction (i.e., Spring-managed but one instance per transaction). We subclass JpaTransactionManager and, in the prepareSynchronization() method, introduce a hook to set up a "TransactionInterceptorSynchronizer." We use the same hook to allow code (in programmatic transactions) to associate and retrieve arbitrary objects with the current transaction and to register jobs that run before/after transaction commit.
The overall code is complex, but definitely doable. If you use JpaTemplate for programmatic transactions, it is tough to achieve this, so we rolled our own template that simply calls the JPA template after taking care of the interceptor work. We plan to open-source our JPA library (written on top of Spring's classes) soon.
You can see a pattern of adding custom transactions and hooks with Spring managed transactions in the following library for Postgresql

Spring session-scoped beans (controllers) and references to services, in terms of serialization

A standard case: you have a controller (@Controller) with @Scope("session").
Classes put in the session are usually expected to implement Serializable, so that they can be stored physically, for example in case the server is restarted.
If the controller implements Serializable, this means all services (other Spring beans) it refers to will also be serialized. They are often proxies with references to transaction managers, entity manager factories, etc.
It is not unlikely that some service, or even the controller itself, holds a reference to the ApplicationContext by implementing ApplicationContextAware, so this can effectively mean that the whole context is serialized. And given that it holds many connections, i.e. things that are not serializable by nature, it would be restored in a corrupt state.
So far I've mostly ignored these issues. Recently I thought of declaring all my Spring dependencies transient and getting them back in readResolve() via static utility classes like WebApplicationContextUtils and friends, which hold the request/ServletContext in a ThreadLocal. This is tedious, but it guarantees that when the object is deserialized, its dependencies will be "up to date" with the current application context.
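To make it concrete, this is roughly what I have in mind, as a sketch only (MyService stands in for any Spring dependency, and it assumes the current web ApplicationContext can be reached statically, here via ContextLoader, at deserialization time):

import java.io.Serializable;

import org.springframework.context.ApplicationContext;
import org.springframework.web.context.ContextLoader;

// Placeholder for whatever Spring-managed dependency the controller needs.
class MyService {}

public class MyController implements Serializable {

    private static final long serialVersionUID = 1L;

    // Spring dependencies are marked transient so they are not dragged into the session...
    private transient MyService myService;

    // ...and are looked up again from the current application context on deserialization.
    private Object readResolve() {
        ApplicationContext ctx = ContextLoader.getCurrentWebApplicationContext();
        this.myService = (MyService) ctx.getBean("myService"); // assumed bean name
        return this;
    }
}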
Is there any accepted practice for this, or any guidelines for serializing parts of the Spring context?
Note that in JSF, managed beans (~controllers) are stateful (unlike action-based web frameworks). So perhaps my question stands more for JSF, than for spring-mvc.
In this presentation (around 1:14) the speaker says that this issue is resolved in spring 3.0 by providing a proxy of non-serializable beans, which obtains an instance from the current application context (on deserialization)
It appears the bounty didn't attract a single answer, so I'll document my limited understanding:
@Configuration
public class SpringConfig {

    @Bean
    @Scope(proxyMode = ScopedProxyMode.TARGET_CLASS)
    MyService myService() {
        return new MyService();
    }

    @Bean
    @Scope("request")
    public IndexBean indexBean() {
        return new IndexBean();
    }

    @Bean
    @Scope("request")
    public DetailBean detailBean() {
        return new DetailBean();
    }
}

public class IndexBean implements Serializable {

    @Inject MyService myService;

    public void doSomething() {
        myService.sayHello();
    }
}

public class MyService {
    public void sayHello() {
        System.out.println("Hello World!");
    }
}
Spring will then not inject the naked MyService into IndexBean, but a serializable proxy to it. (I tested that, and it worked).
However, the spring documentation writes:
You do not need to use the <aop:scoped-proxy/> in conjunction with beans that are scoped as singletons or prototypes. If you try to create a scoped proxy for a singleton bean, the BeanCreationException is raised.
At least when using Java-based configuration, the bean and its proxy can be instantiated just fine, i.e. no exception is thrown. However, it looks like using scoped proxies to achieve serializability is not their intended use. As such I fear Spring might fix that "bug" and prevent the creation of scoped proxies through Java-based configuration, too.
Also, there is a limitation: the class name of the proxy is different after a restart of the web application (the class name of the proxy is based on the hashcode of the advice used to construct it, which in turn depends on the hashCode of an interceptor's class object; Class.hashCode does not override Object.hashCode, which is not stable across restarts). Therefore the serialized sessions cannot be used by other VMs or across restarts.
I would expect to scope controllers as 'singleton', i.e. once per application, rather than in the session.
Session-scoping is typically used more for storing per-user information or per-user features.
Normally I just store the 'user' object in the session, and maybe some beans used for authentication or such. That's it.
Take a look at the spring docs for configuring some user data in session scope, using an aop proxy:
http://static.springsource.org/spring/docs/2.5.x/reference/beans.html#beans-factory-scopes-other-injection
Hope that helps
I recently combined JSF with Spring. I use RichFaces and the @KeepAlive feature, which serializes the JSF bean backing the page. There are two ways I have gotten this to work.
1) Use @Component("session") on the JSF backing bean
2) Get the bean from the ELContext whenever you need it, something like this:
@SuppressWarnings("unchecked")
public static <T> T getBean(String beanName) {
    return (T) FacesContext.getCurrentInstance().getApplication().getELResolver()
            .getValue(FacesContext.getCurrentInstance().getELContext(), null, beanName);
}
After trying all the different alternatives suggested, all I had to do was add <aop:scoped-proxy/> to my bean definition and it started working.
<bean id="securityService"
      class="xxx.customer.engagement.service.impl.SecurityContextServiceImpl">
    <aop:scoped-proxy/>
    <property name="identityService" ref="identityService" />
</bean>
securityService is injected into my managed bean, which is view-scoped. This seems to work fine. According to the Spring documentation this is supposed to throw a BeanCreationException, since securityService is a singleton; however, this does not seem to happen and it works fine. I'm not sure whether this is a bug or what the side effects would be.
Serialization of dynamic proxies works well, even between different JVMs, e.g. as used for session replication.
@Configuration
public class SpringConfig {

    @Bean
    @Scope(proxyMode = ScopedProxyMode.INTERFACES)
    MyService myService() {
        return new MyService();
    }
    .....
You just have to set the id of the ApplicationContext before the context is refreshed (see: org.springframework.beans.factory.support.DefaultListableBeanFactory.setSerializationId(String))
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
// all other initialisation part ...
// before! refresh
ctx.setId("portal-lasg-appCtx-id");
// now refresh ..
ctx.refresh();
ctx.start();
Works fine on Spring-Version: 4.1.2.RELEASE

Possible to inject same stateful session bean instance into multiple other session beans?

Is it possible to make the container inject the same stateful session bean instance into multiple other stateful session beans?
Given the following classes:
@Stateful
public class StatefulTwoBean implements StatefulTwo {

    @EJB
    private StatefulOne statefulOne;
}

@Stateful
public class StatefulThreeBean implements StatefulThree {

    @EJB
    private StatefulOne statefulOne;
}
In the above example, StatefulTwoBean and StatefulThreeBean each get injected their own instance of StatefulOneBean.
Is it possible to make the container inject the same instance of StatefulOneBean into both StatefulTwoBean and StatefulThreeBean?
The problem is this: stateful beans' instances are allocated by differentiating the clients that call them. GlassFish (and perhaps others) doesn't propagate this differentiation to injected beans. The EJB specification, as far as I remember, isn't clear about this.
So your solution is to implement the differentiation yourself. I'm not pretending this is the most beautiful solution, but it worked for us: we put a facade (an EJB itself; I'm calling it a facade although it does not entirely follow the facade pattern) in front of all our EJBs, with the following code:
public Object call(Object bean,
                   String methodName,
                   Object[] args,
                   Class[] parameterTypes,
                   UUID sessionId) throws Throwable {
    // find the session
    SessionContext sessionContext = SessionRegistry.getSession(sessionId);
    // set it as current
    SessionRegistry.setLocalSession(sessionContext);
    .....
}
The important parameter is sessionId; this is something both the client and the server know about, and it identifies the current session between them.
On the client we used a dynamic proxy to call this facade. So the calls look like this:
getBean(MyConcreteEJB.class).someMethod(), and the getBean method creates the proxy, so that callers don't have to know about the facade bean.
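Something along these lines, as a rough sketch (SessionFacade stands in for the facade bean's business interface; details like exception handling and the facade lookup are omitted):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.UUID;

// Minimal stand-in for the facade EJB's business interface shown above.
interface SessionFacade {
    Object call(Object bean, String methodName, Object[] args,
                Class<?>[] parameterTypes, UUID sessionId) throws Throwable;
}

final class ClientProxies {

    @SuppressWarnings("unchecked")
    static <T> T getBean(Class<T> beanInterface, SessionFacade facade, UUID sessionId) {
        InvocationHandler handler = (proxy, method, args) ->
                // Every call on the proxy goes through the facade, carrying the session id
                // so the server side can look up and restore the matching SessionContext.
                facade.call(beanInterface, method.getName(), args,
                            method.getParameterTypes(), sessionId);
        return (T) Proxy.newProxyInstance(beanInterface.getClassLoader(),
                new Class<?>[] { beanInterface }, handler);
    }
}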
The SessionRegistry had
private static ThreadLocal<SessionContext> localSessionContext = new
ThreadLocal<SessionContext>();
And the SessionContext was simply a Map providing set(key, value) and get(key)
So now, instead of using @Stateful beans to store your state, you could use the SessionContext.
In EJB 3.1 you can create your StatefulOne bean as a singleton (using the @Singleton annotation), giving you the desired semantics. JBoss should already support this annotation (they wrote the standard).
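For example, a minimal sketch (assuming the state held by StatefulOne really is meant to be shared by all of its clients):

import javax.ejb.Singleton;

// One container-managed instance for the whole application, so StatefulTwoBean
// and StatefulThreeBean both talk to the same StatefulOne.
@Singleton
public class StatefulOneBean implements StatefulOne {
    // ...
}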
