Does CDI allow pooling in some way? I thought this was a feature of EJB beans, but Adam Bien says in this screencast that the container chooses whether to create a new instance of the class through reflection or reuse an existing one. So if I have, for example, these two beans:
@RequestScoped
public class RequestBean {

    public void doIt() {
    }
}

@SessionScoped
public class SessionBean {

    @Inject
    private RequestBean bean;

    public void doSomething() {
        bean.doIt();
    }
}
The question is: is there always a new instance of RequestBean created when doSomething is called, or does the CDI container somehow manage instances in a pool?
The first one is scoped to the request, so a new instance is created for each request. The second one is scoped to the session, so a new one is created for each session.
CDI doesn't pool and recycle the objects, because it has no idea if the objects are stateful or not, and you don't want, in a request, to get back the state that a bean had in a previous request. That would ruin the whole point of the request/session scope.
Unless beans are really costly to create (because they start a new connection or something like that), pooling them doesn't bring any advantage. Short-lived objects are very fast to create and garbage collect nowadays. And if the bean is really expensive to create, then it should probably be a singleton.
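If you want to see this for yourself, here is a minimal sketch (not from the question, purely for illustration) that adds lifecycle logging to the request-scoped bean; every HTTP request that touches it prints a fresh creation/destruction pair:

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.enterprise.context.RequestScoped;

@RequestScoped
public class RequestBean {

    @PostConstruct
    void created() {
        // Printed once per request that uses the bean.
        System.out.println("RequestBean created: " + System.identityHashCode(this));
    }

    @PreDestroy
    void destroyed() {
        // Printed at the end of that same request.
        System.out.println("RequestBean destroyed: " + System.identityHashCode(this));
    }

    public void doIt() {
    }
}

The reference injected into the session-scoped bean is a CDI client proxy, so each call made from SessionBean is routed to the contextual instance belonging to the current request.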
I'm reading the Spring documentation and found this:
One possible way to get the Spring container to release resources used
by prototype-scoped beans is through the use of a custom bean
post-processor which would hold a reference to the beans that need to
be cleaned up.
But if the bean post-processor holds a reference to the prototype bean, won't the garbage collector be unable to clean it up, so that prototype beans and their resources stay on the heap until the ApplicationContext is closed?
Could you clarify this, please?
Spring has an interface you can implement called DestructionAwareBeanPostProcessor. Implementations are first asked whether a bean needs destruction, via the requiresDestruction() method. If you return true, you will eventually be called back with that bean, via postProcessBeforeDestruction(), when it is about to be destroyed.
This gives you a chance to clean up that bean's resources. For example, if your bean holds a reference to a File, you could close any streams you have open. The important point is that your post-processor does not hold a reference to the bean that is about to be destroyed, or you will keep it from being garbage collected, as you pointed out.
To define such a post-processor, you would do something like this:

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.DestructionAwareBeanPostProcessor;
import org.springframework.stereotype.Component;

@Component
public class MyDestructionAwareBeanPostProcessor implements DestructionAwareBeanPostProcessor {

    @Override
    public boolean requiresDestruction(final Object bean) {
        // Only beans matching this check are passed to postProcessBeforeDestruction().
        return bean instanceof MyResourceHolder;
    }

    @Override
    public void postProcessBeforeDestruction(final Object bean, final String beanName) throws BeansException {
        // Clean up the bean's resources here; do not store a reference to it.
        ((MyResourceHolder) bean).cleanup();
    }
}
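For completeness, a hypothetical MyResourceHolder (only the name comes from the snippet above, the rest is illustrative) might look like this; the post-processor calls cleanup(), which closes the stream, and nothing retains a reference to the bean afterwards:

import java.io.IOException;
import java.io.InputStream;

// Hypothetical prototype-scoped bean whose resource the post-processor releases.
public class MyResourceHolder {

    private final InputStream stream;

    public MyResourceHolder(InputStream stream) {
        this.stream = stream;
    }

    public void cleanup() {
        try {
            stream.close();
        } catch (IOException e) {
            // Log and move on; cleanup should not break bean destruction.
        }
    }
}

Because the post-processor only receives the bean as a method parameter and never stores it in a field, the prototype instance stays eligible for garbage collection once nothing else references it.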
I've written a stateful session bean:
@Stateful
public class SessionBean {

    List<Integer> list = new ArrayList<>();

    public void addItem(int s) {
        list.add(s);
    }

    public int getItemsCount() {
        return list.size();
    }
}
and use it in my servlet:
@WebServlet("/add")
public class AddServlet extends HttpServlet {

    @Inject
    SessionBean sessionBean;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        int i = sessionBean.getItemsCount();
        resp.getWriter().write(i + " ");
        sessionBean.addItem(i + 1);
    }
}
It works as expected: the list keeps its state and I can use it in the next request.
But when I change @Stateful to @Stateless, I expect the bean not to store state and to get a clean list on each request; instead it keeps the state from the previous request and shows a new number. So what is the difference between stateless and stateful? How can I see it? As far as I can tell, they work the same.
I want to see an example that shows something like: here we use stateful and it saves the state, and here we switch to stateless and it behaves differently and does not save the state. Please show me the differences.
But when I change @Stateful to @Stateless, I expect the bean not to store state and to get a clean list on each request; instead it keeps the state from the previous request and shows a new number.
The Stateful vs. Stateless distinction is foremost declarative, not functional. If you declare a session bean to be stateless, then it is your responsibility to ensure that it does not actually retain any state between method invocations.
So what is the difference between stateless and stateful? How can I see it? As far as I can tell, they work the same.
One of the more important differences is that if you declare a bean stateless then you afford the container the option of serving different requests with different bean instances. If it opted to do so then that could make a declared-stateless bean that in fact retains state appear not to retain state after all, at least to a limited extent. But the container is not required to do that, so if your stateless bean violates its contract by retaining state, then that will probably be visible to clients sooner or later.
There is more (rather a lot more, in fact -- read the specifications), but the most important thing is what I led off with. To put it another way, the correct declaration of a session bean as stateless vs. stateful is part of its developer's contract with the container in which it is deployed. A bean does not behave differently than its code demands just for being declared stateless.
Stateless session beans are not expected to carry any form of state, e.g. instance variables. Therefore, containers often maintain a pool of stateless beans so they can be reused by different clients.
The Oracle doc mentions the following about session beans: http://docs.oracle.com/javaee/5/tutorial/doc/bnbly.html
Clients may, however, change the state of instance variables in pooled stateless beans, and this state is held over to the next invocation of the pooled stateless bean.
This phenomenon is likely what you are experiencing, and why the list is not empty.
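One way to make the container's behavior visible is the following sketch (the logging is added for illustration and is not part of the original code): print the instance identity inside the bean. With @Stateless you will see calls land on whichever pooled instance the container picks, together with whatever state that instance still carries.

import java.util.ArrayList;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.ejb.Stateless;

@Stateless
public class SessionBean {

    private final List<Integer> list = new ArrayList<>();

    @PostConstruct
    void created() {
        // Logged once per instance the container creates for its pool.
        System.out.println("new SessionBean instance: " + System.identityHashCode(this));
    }

    public void addItem(int s) {
        // Shows which pooled instance actually served this call.
        System.out.println("addItem on instance " + System.identityHashCode(this));
        list.add(s);
    }

    public int getItemsCount() {
        return list.size();
    }
}

Hitting the servlet from several browsers or in parallel makes the difference easier to observe: a @Stateful reference always talks to the same instance, whereas with @Stateless any leftover list contents are just an artifact of which pooled instance happened to serve the call.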
I was trying to get a simple webapp working with Guice and JPA on Jetty, using the Guice persistence and servlet extensions.
I have written this service implementation class:
public class PersonServiceImpl implements PersonService {

    private EntityManager em;

    @Inject
    public PersonServiceImpl(EntityManager em) {
        this.em = em;
    }

    @Override
    @Transactional
    public void savePerson(Person p) {
        em.persist(p);
    }

    @Override
    public Person findPerson(long id) {
        return em.find(Person.class, id);
    }

    @Override
    @Transactional
    public void deletePerson(Person p) {
        em.remove(p);
    }
}
And this is my servlet (annotated with @Singleton):
@Inject
PersonService personService;

@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    String name = req.getParameter("name");
    String password = req.getParameter("password");
    String email = req.getParameter("email");
    int age = Integer.valueOf(req.getParameter("age"));

    Person p = new Person();
    p.setAge(age);
    p.setName(name);
    p.setEmail(email);
    p.setPassword(password.toCharArray());

    logger.info("saving person");
    personService.savePerson(p);
    logger.info("saved person");

    logger.info("extracting person");
    Person person = personService.findPerson(p.getId());

    resp.getWriter().print("Hello " + person.getName());
}
When I run this it works and the name is sent back to the client, but in the log I see no DML generated for the insert, and querying PostgreSQL directly returns no results, which means the entity wasn't really persisted.
I debugged through the code and I saw that JpaLocalTxnInterceptor called txn.commit().
Then I changed PersonServiceImpl to use Provider<EntityManager> instead of EntityManager directly, and it worked as expected. I don't really understand why, probably because I don't really understand the idea behind Provider.
On the Guice wiki page it says:
Note that if you make MyService a @Singleton, then you should inject Provider instead.
However, my PersonServiceImpl is not a @Singleton, so I am not sure why this applies; perhaps it's because of the servlet?
I would really appreciate it if you could clear this up for me.
You need Provider<EntityManager> because Guice's built-in persistence and servlet extensions expect EntityManager to be request-scoped. By injecting a request-scoped EntityManager into a service held by a singleton servlet, you are making a scope-widening injection, and a stale, mismatched EntityManager won't store your data.
Providers
Provider is a one-method interface that exposes a get() method. If you inject a Provider<Foo> and then call get(), it will return an instance created the same way as if you had injected Foo directly. However, injecting the Provider allows you to control how many objects are created, and when they are created. This can be useful in a few cases:
only creating an instance if it's actually needed, especially if the creation takes lots of time or memory
creating two or more separate instances from within the same component
deferring creation to an initialization method or separate thread
mixing scopes, as described below
For any binding of X (whether bound directly, to a Provider<X>, or via an @Provides method), Guice will automatically allow you to inject either X or Provider<X>. You can use Providers without adjusting any of your bindings, and Providers work fine with binding annotations.
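As a minimal sketch (Foo and FooConsumer are placeholder names), injecting a Provider instead of the object itself looks like this:

import com.google.inject.Inject;
import com.google.inject.Provider;

public class FooConsumer {

    private final Provider<Foo> fooProvider;

    @Inject
    public FooConsumer(Provider<Foo> fooProvider) {
        this.fooProvider = fooProvider;
    }

    public void doWork() {
        // Each get() resolves Foo according to its binding and scope:
        // unscoped bindings yield a new instance per call,
        // @RequestScoped bindings yield the instance for the current request, and so on.
        Foo foo = fooProvider.get();
        // use foo ...
    }
}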
Scopes and scope-widening injections
Broadly speaking, scopes define the lifetime of the object. By default, Guice creates a new object for every injection; by marking an object #Singleton, you instruct Guice to inject the same instance for every injection. Guice's servlet extensions also support #RequestScoped and #SessionScoped injections, which cause the same object to be injected within one request (or session) consistently but for a new object to be injected for a different request (or session). Guice lets you define custom scopes as well, such as thread scope (one instance per thread, but the same instance across injections in the same thread).
@Singleton
public class YourClass {
    @Inject HttpServletRequest request; // BAD IDEA
}
What happens if you inject a request-scoped object directly from within a #Singleton component? When the singleton is created, it tries to inject the instance relevant to the current request. Note that there might not be a current request, but if there is one, the instance will be saved to a field in the singleton. As requests come and go, the singleton is never recreated, and the field is never reassigned--so after the very first request your component stops working properly.
Injecting a narrow-scope object (#RequestScoped) into a wide scope (#Singleton) is known as a scope-widening injection. Not all scope-widening injections show symptoms immediately, but all may introduce lingering bugs later.
How Providers help
PersonService isn't annotated with @Singleton, but because you're injecting and storing an instance in a @Singleton servlet, it might as well be a singleton itself. For the same reason, the EntityManager it holds also behaves like a singleton.
According to the page you quoted, the EntityManager is meant to be short-lived, existing only for the session or request. That allows Guice to auto-commit the transaction when the session or request ends, but reusing the same EntityManager likely prevents data from being stored any time after the first request. Switching to a Provider lets you keep the scope narrow by obtaining a fresh EntityManager for every request.
(You could also inject PersonService itself through a Provider, which would likely solve the problem as well, but I think it's better to follow Guice's best practices and keep the EntityManager's scope explicitly narrow with a Provider.)
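Concretely, the change described in the question amounts to something like the following sketch, where every method asks the Provider for the EntityManager bound to the current request:

import javax.persistence.EntityManager;

import com.google.inject.Inject;
import com.google.inject.Provider;
import com.google.inject.persist.Transactional;

public class PersonServiceImpl implements PersonService {

    // A Provider keeps the scope narrow: get() returns the current request's EntityManager.
    private final Provider<EntityManager> em;

    @Inject
    public PersonServiceImpl(Provider<EntityManager> em) {
        this.em = em;
    }

    @Override
    @Transactional
    public void savePerson(Person p) {
        em.get().persist(p);
    }

    @Override
    public Person findPerson(long id) {
        return em.get().find(Person.class, id);
    }

    @Override
    @Transactional
    public void deletePerson(Person p) {
        em.get().remove(p);
    }
}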
A standard case: you have a controller (@Controller) with @Scope("session").
Classes put in the session are usually expected to implement Serializable so that they can be stored physically, for example in case the server is restarted.
If the controller implements Serializable, this means all the services (other Spring beans) it refers to will also be serialized. These are often proxies holding references to transaction managers, entity manager factories, etc.
It is not unlikely that some service, or even the controller itself, holds a reference to the ApplicationContext by implementing ApplicationContextAware, so this can effectively mean that the whole context is serialized. And given that it holds many connections, i.e. things that are inherently not serializable, it will be restored in a corrupt state.
So far I've mostly ignored these issues. Recently I thought of declaring all my Spring dependencies transient and getting them back in readResolve() via static utility classes like WebApplicationContextUtils, which hold the request/ServletContext in a ThreadLocal. This is tedious, but it guarantees that when the object is deserialized, its dependencies are "up to date" with the current application context.
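In code, that idea looks roughly like the following sketch (MyController and MyService are illustrative names; I use ContextLoader here, but any static lookup of the current WebApplicationContext would do):

import java.io.Serializable;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Controller;
import org.springframework.web.context.ContextLoader;
import org.springframework.web.context.WebApplicationContext;

@Controller
@Scope("session")
public class MyController implements Serializable {

    // Not serialized; re-resolved after deserialization.
    @Autowired
    private transient MyService myService;

    public void doSomething() {
        myService.sayHello();
    }

    // Invoked by the serialization machinery when the object is read back in:
    // fetch the dependency from the *current* application context.
    private Object readResolve() {
        WebApplicationContext ctx = ContextLoader.getCurrentWebApplicationContext();
        this.myService = ctx.getBean(MyService.class);
        return this;
    }
}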
Is there any accepted practice for this, or any guidelines for serializing parts of the Spring context?
Note that in JSF, managed beans (~controllers) are stateful (unlike in action-based web frameworks), so perhaps my question applies more to JSF than to Spring MVC.
In this presentation (around 1:14) the speaker says that this issue is resolved in Spring 3.0 by providing a proxy of non-serializable beans, which obtains an instance from the current application context (on deserialization).
It appears that the bounty didn't attract a single answer, so I'll document my limited understanding:
@Configuration
public class SpringConfig {

    @Bean
    @Scope(proxyMode = ScopedProxyMode.TARGET_CLASS)
    MyService myService() {
        return new MyService();
    }

    @Bean
    @Scope("request")
    public IndexBean indexBean() {
        return new IndexBean();
    }

    @Bean
    @Scope("request")
    public DetailBean detailBean() {
        return new DetailBean();
    }
}

public class IndexBean implements Serializable {

    @Inject MyService myService;

    public void doSomething() {
        myService.sayHello();
    }
}

public class MyService {

    public void sayHello() {
        System.out.println("Hello World!");
    }
}
Spring will then not inject the naked MyService into IndexBean, but a serializable proxy to it. (I tested that, and it worked).
However, the Spring documentation says:
You do not need to use the <aop:scoped-proxy/> in conjunction with beans that are scoped as singletons or prototypes. If you try to create a scoped proxy for a singleton bean, the BeanCreationException is raised.
At least when using Java-based configuration, the bean and its proxy can be instantiated just fine, i.e. no exception is thrown. However, it looks like using scoped proxies to achieve serializability is not their intended use. As such, I fear Spring might fix that "bug" and prevent the creation of scoped proxies through Java-based configuration, too.
Also, there is a limitation: The class name of the proxy is different after restart of the web application (because the class name of the proxy is based on the hashcode of the advice used to construct it, which in turn depends on the hashCode of an interceptor's class object. Class.hashCode does not override Object.hashCode, which is not stable across restarts). Therefore the serialized sessions can not be used by other VMs or across restarts.
I would expect to scope controllers as 'singleton', i.e. once per application, rather than in the session.
Session-scoping is typically used more for storing per-user information or per-user features.
Normally I just store the 'user' object in the session, and maybe some beans used for authentication or such. That's it.
Take a look at the spring docs for configuring some user data in session scope, using an aop proxy:
http://static.springsource.org/spring/docs/2.5.x/reference/beans.html#beans-factory-scopes-other-injection
Hope that helps
I recently combined JSF with Spring. I use RichFaces and the #KeepAlive feature, which serializes the JSF bean backing the page. There are two ways I have gotten this to work.
1) Use @Component("session") on the JSF backing bean
2) Get the bean from the ELContext whenever you need it, something like this:
@SuppressWarnings("unchecked")
public static <T> T getBean(String beanName) {
    return (T) FacesContext.getCurrentInstance().getApplication().getELResolver()
            .getValue(FacesContext.getCurrentInstance().getELContext(), null, beanName);
}
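Usage is then as simple as this (JsfSpringUtil and the bean name are illustrative, assuming the helper above lives in such a utility class):

// Hypothetical names: JsfSpringUtil holds the helper, "myBackingBean" is the Spring bean name.
MyBackingBean backingBean = JsfSpringUtil.getBean("myBackingBean");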
After trying all the different alternatives suggested, all I had to do was add <aop:scoped-proxy/> to my bean definition and it started working.
<bean id="securityService"
class="xxx.customer.engagement.service.impl.SecurityContextServiceImpl">
<aop:scoped-proxy/>
<property name="identityService" ref="identityService" />
</bean>
securityService is injected into my managed bean, which is view-scoped. This seems to work fine. According to the Spring documentation this should throw a BeanCreationException, since securityService is a singleton. However, that does not seem to happen, and it works fine. I am not sure whether this is a bug or what the side effects might be.
Serialization of dynamic proxies works well, even between different JVMs, e.g. as used for session replication.
@Configuration
public class SpringConfig {

    @Bean
    @Scope(proxyMode = ScopedProxyMode.INTERFACES)
    MyService myService() {
        return new MyService();
    }
    .....
You just have to set the id of the ApplicationContext before the context is refreshed (see: org.springframework.beans.factory.support.DefaultListableBeanFactory.setSerializationId(String))
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
// all other initialisation part ...
// before! refresh
ctx.setId("portal-lasg-appCtx-id");
// now refresh ..
ctx.refresh();
ctx.start();
Works fine on Spring-Version: 4.1.2.RELEASE
Is it possible to make the container inject the same stateful session bean instance into multiple other stateful session beans?
Given the following classes:
@Stateful
public class StatefulTwoBean implements StatefulTwo {

    @EJB
    private StatefulOne statefulOne;
}

@Stateful
public class StatefulThreeBean implements StatefulThree {

    @EJB
    private StatefulOne statefulOne;
}
In the above example, StatefulTwoBean and StatefulThreeBean each get injected their own instance of StatefulOneBean.
Is it possible to make the container inject the same instance of StatefulOneBean into both StatefulTwoBean and StatefulThreeBean?
The problem is this: stateful beans' instances are allocated by differentiating between the clients that call them. GlassFish (and perhaps other containers) does not propagate this distinction to injected beans. The EJB specification, as far as I remember, isn't clear about this.
So your solution is to implement the differentiation yourself. How to achieve this? I'm not claiming it is the most beautiful solution, but it worked for us: we put a facade (an EJB itself; I call it a facade although it does not entirely follow the facade pattern) in front of all our EJBs, with the following code:
public Object call(Object bean,
                   String methodName,
                   Object[] args,
                   Class[] parameterTypes,
                   UUID sessionId) throws Throwable {
    // find the session
    SessionContext sessionContext = SessionRegistry.getSession(sessionId);
    // set it as current
    SessionRegistry.setLocalSession(sessionContext);
    .....
}
The important parameter is sessionId: this is something both the client and the server know about, and it identifies the current session between them.
On the client we used a dynamic proxy to call this facade, so the calls look like this:
getBean(MyConcreteEJB.class).someMethod(), and the getBean method created the proxy, so that callers didn't have to know about the facade bean.
The SessionRegistry had:

private static ThreadLocal<SessionContext> localSessionContext =
        new ThreadLocal<SessionContext>();

And the SessionContext was simply a Map providing set(key, value) and get(key).
So now, instead of using @Stateful beans to store your state, you could use the SessionContext.
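For illustration, a rough sketch of the SessionRegistry and SessionContext described above could look like this (the class and method names match the snippets above, everything else is assumed):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Keeps one SessionContext per client-supplied session id, plus the context
// of the session currently being served on this thread.
public final class SessionRegistry {

    private static final Map<UUID, SessionContext> SESSIONS = new ConcurrentHashMap<>();
    private static final ThreadLocal<SessionContext> LOCAL = new ThreadLocal<>();

    public static SessionContext getSession(UUID sessionId) {
        return SESSIONS.computeIfAbsent(sessionId, id -> new SessionContext());
    }

    public static void setLocalSession(SessionContext context) {
        LOCAL.set(context);
    }

    public static SessionContext getLocalSession() {
        return LOCAL.get();
    }
}

// Simple per-session key/value store used instead of stateful bean fields.
class SessionContext {

    private final Map<String, Object> values = new ConcurrentHashMap<>();

    public void set(String key, Object value) {
        values.put(key, value);
    }

    public Object get(String key) {
        return values.get(key);
    }
}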
In EJB 3.1 you can make your StatefulOne bean a singleton (using the @Singleton annotation), giving you the desired semantics. JBoss should already support this annotation (they wrote the standard).
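A minimal sketch of that alternative (assuming EJB 3.1 or newer) is simply:

import javax.ejb.Singleton;

// One shared instance; both StatefulTwoBean and StatefulThreeBean see the same state.
@Singleton
public class StatefulOneBean implements StatefulOne {
    // state previously kept per-client in the stateful bean would live here
}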