I want to secure my @Stateless EJB with the DeltaSpike API.
@Stateless
@Remote(UserServiceRemote.class)
public class UserService implements UserServiceRemote
At method level I have a custom annotation "@Support":
@Support
public void doSomething() {}
For that I wrote the custom annotation "@Support":
@Retention(value = RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.METHOD })
@Documented
@SecurityBindingType
public @interface Support {
}
My custom authorizer looks like this:
@Secures
@Support
public boolean doAdminCheck(Identity identity, IdentityManager identityManager, RelationshipManager relationshipManager)
        throws Exception {
    return hasRole(relationshipManager, identity.getAccount(), getRole(identityManager, "Support"));
}
In my "beans.xml" file i included:
<interceptors>
<class>org.apache.deltaspike.security.impl.extension.SecurityInterceptor</class>
</interceptors>
But after I log in to my application and call the "doSomething" method via a remote call, the "@Support" annotation is ignored, no matter whether I have the role or not.
What am I doing wrong? Thanks for all suggestions!
EJB and CDI are two different concepts. A stateless session bean and a CDI managed bean are managed by different containers, so you cannot use DeltaSpike security on a stateless session bean.
If you want to use DeltaSpike security, use a named CDI bean instead and use a different remoting strategy.
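A minimal sketch of that alternative, assuming a CDI-managed bean (the class name is a placeholder):
import javax.inject.Named;

@Named
public class UserServiceBean {

    @Support   // the DeltaSpike security binding from the question
    public void doSomething() {
        // business logic secured by the custom authorizer
    }
}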
In my case I had to make sure that the module (JAR) containing the service I wanted to secure with the annotation had a beans.xml file with the DeltaSpike interceptor in it (previously I was adding the file only to the module containing the security code itself, which was the problem).
I also found out that I had to separate the business logic service from the SOAP endpoint declaration itself.
This custom @Stateless (or other) EJB service can be @Inject-ed into the SOAP endpoint, and the security annotations (here @Support) will then work on it.
In my opinion, separating the endpoint declaration from the business code is good design anyway, as we may have multiple interfaces invoking the same business logic (and it's easier to unit test, etc.).
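A rough sketch of that separation, assuming a JAX-WS endpoint (all names are placeholders):
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jws.WebMethod;
import javax.jws.WebService;

@Stateless
public class UserBusinessService {

    @Support   // DeltaSpike security binding
    public void doSomething() {
        // secured business logic
    }
}

@WebService
public class UserEndpoint {

    @Inject
    private UserBusinessService userBusinessService;   // injected, so the security interceptor applies

    @WebMethod
    public void doSomething() {
        userBusinessService.doSomething();
    }
}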
We know that DI frameworks such as Spring and Guice sometimes create proxies instead of plain beans. To compare these proxies in equals and hashCode methods we should use the instanceof operator, because their class is no longer the same as the original one.
Also, these proxies may (perhaps) be created in some uninitialized state, just like Hibernate proxies (that's just my guess).
I know only one case in which Spring creates a proxy of a bean: when you annotate it with @Configuration.
Are there any other situations like that?
Does Spring create uninitialized proxies which only initialize their fields once those fields are accessed?
I have found a similar question: When does Spring create proxies in the bean's lifecycle?, but please note that it relates to the AOP use case. I am asking about simple DI usage without AOP involved.
Same question for Guice!
Spring uses proxies every time you use annotations like @Transactional or @Cacheable. Its AOP has nothing to do with the complex AOP that requires a post-compile step or weaving.
Anyway, be aware that if you've autowired a repository into one of your services, and inside the service you create an object to which you pass that repository, what gets passed around is automatically the proxy (if there's an annotation that requires it).
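For example, here is a small sketch (with made-up names, and assuming transaction management is enabled) that makes the proxying visible via Spring's AopUtils:
import org.springframework.aop.support.AopUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class OrderRepository {

    @Transactional
    public void save(Object order) {
        // persist the order (details omitted)
    }
}

@Service
public class OrderService {

    @Autowired
    private OrderRepository orderRepository;

    public void inspect() {
        // true: the injected reference is a Spring AOP proxy, not the raw OrderRepository
        System.out.println(AopUtils.isAopProxy(orderRepository));
        // prints a CGLIB proxy class, not OrderRepository itself
        System.out.println(orderRepository.getClass());
    }
}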
About Guice, the answer is in the comment of @Oliver.
In HK2, whether or not proxies are made for injection points depends on the scope/context of the bean being injected. In particular, in HK2 a scope can be annotated with Proxiable. You can control with the Proxiable annotation whether or not proxies should be generated for injections of beans into other beans of the same scope.
You can further control whether beans in non-proxiable scopes DO get proxied, or beans in proxiable scopes should NOT be proxied, with the annotations Unproxiable and UseProxy. There are equivalent verbs in the EDSL as well (for example ServiceBindingBuilder.proxyForSameScope).
In addition, proxies will be generated by HK2 if AOP is in play.
Assisted injection in Guice might be a use case that could be of interest to you. Long story very short:
Factory interface:
public interface PaymentFactory {
    public Payment create(Date startDate, Money amount);
}
Payment implementation:
public class RealPayment implements Payment {
    @Inject
    public RealPayment(
            CreditService creditService,
            AuthService authService,
            @Assisted Date startDate,
            @Assisted Money amount) {
        ...
    }
}
Binding:
install(new FactoryModuleBuilder()
        .implement(Payment.class, RealPayment.class)
        .build(PaymentFactory.class));
Guice will then generate the PaymentFactory implementation for you.
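A short usage sketch (the consumer class PaymentProcessor is made up for illustration):
import java.util.Date;
import com.google.inject.Inject;

public class PaymentProcessor {

    private final PaymentFactory paymentFactory;

    @Inject
    public PaymentProcessor(PaymentFactory paymentFactory) {
        this.paymentFactory = paymentFactory;   // Guice-generated factory implementation
    }

    public Payment charge(Money amount) {
        // CreditService and AuthService are supplied by Guice;
        // startDate and amount are the @Assisted parameters supplied by the caller.
        return paymentFactory.create(new Date(), amount);
    }
}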
More details and the complete example are available in the wiki. Note: this is a Guice extension. I'm not aware of more Guice use cases except those mentioned by Olivier Grégoire in his comment.
I have a Spring AOP aspect used for logging, where a method can be included for logging by adding an annotation to it, like this:
#AspectLogging("do something")
public void doSomething() {
...
}
I've been using this on Spring beans and it's been working just fine. Now I wanted to use it on a REST service, but I ran into some problems. So, I have:
#Path("/path")
#Service
public class MyRestService {
#Inject
private Something something;
#GET
#AspectLogging("get some stuff")
public Response getSomeStuff() {
...
}
}
and this setup works just fine. The REST service that I'm trying to add the logging to now has an interface, and somehow that messes things up. As soon as I add the @AspectLogging annotation to one of the methods, no dependencies are injected into the bean, and also, the aspect is never called!
I've tried adding an interface to the REST service that works, and it gets the same error.
How can having an interface lead to this type of problem? The aspect logger works on classes with interfaces elsewhere; it seems to be a problem only when it's a REST service.
See the Spring documentation below (paragraph 2):
To enable AspectJ annotation support in the Spring IoC container, you only have to define an empty XML element aop:aspectj-autoproxy in your bean configuration file. Then, Spring will automatically create proxies for any of your beans that are matched by your AspectJ aspects.
For cases in which interfaces are not available or not used in an application's design, it's possible to create proxies by relying on CGLIB. To enable CGLIB, you need to set the attribute proxy-target-class="true" in aop:aspectj-autoproxy.
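In XML configuration that would look roughly like this:
<aop:aspectj-autoproxy proxy-target-class="true"/>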
If your class implements an interface, a JDK dynamic proxy will be used. However, if your class does not implement any interface, then a CGLIB proxy will be created. You can achieve this with @EnableAspectJAutoProxy. Here is a sample:
@Configuration
@EnableAspectJAutoProxy
public class AppConfig {

    @Bean
    public LoggingAspect loggingAspect() {
        return new LoggingAspect();
    }
}
@Component
@Aspect
public class LoggingAspect {
    ...
    ...
}
In my opinion, what you are actually trying to do is to add Spring annotations to a class maintained by Jersey. As a result, you are getting a proxy of a proxy of a proxy of something. I don't think this is a good idea or that it will work without problems. I had a similar issue when I tried to implement bean-based validation. For some reason, when @PathParam and @Valid annotations were in the same place, the validation annotations were not visible. My advice is to move your logging to the @Service layer instead of the @Controller.
I have the following configuration:
1 EAR on one GlassFish server containing 2 EJB-JARs with EJB components.
1 WAR on another GlassFish server (i.e. another JVM) containing web components that access the EJB components.
I have 2 EJB business services in each EJB-JAR of my EAR, and they are all developed like this:
@Remote
public interface ServiceAItf {
    ...
}

@Stateless
@Local
public class ServiceAImpl implements ServiceAItf {
    ...
}
In my WAR, I access the EJB components via an explicit "InitialContext.lookup" on the remote interface.
In my EAR, I am quite confused about the best practice for injection, in terms of performance, architecture, and so on...
I have the following questions:
As you can see, I have declared the "@Local" annotation on the service implementation without defining a local interface. Is that correct? At least I get no error at deployment. But maybe I should use the "@LocalBean" annotation instead? I suppose that the "@LocalBean" annotation simply allows the implementation to be invoked directly as a "local" EJB, but then you have to reference the implementation class in your code like this:
@Stateless
@Local
public class ServiceBImpl implements ServiceBItf {
    @EJB
    private ServiceAImpl serviceA;
    ...
}
What is the best way to inject one EJB into another one?
It works like this:
@Stateless
@Local
public class ServiceBImpl implements ServiceBItf {
    @EJB
    private ServiceAItf serviceA;
    ...
}
But from what I noticed, the "serviceA" injected is the remote proxy, even though it is in the same JVM within the same EAR. So I suppose there will be a performance impact. That's why I tried to inject the service like this:
@Stateless
@Local
public class ServiceBImpl implements ServiceBItf {
    @Inject
    private ServiceAItf serviceA;
    ...
}
But it doesn't work in GlassFish; I get the following exception:
WELD-001408 Unsatisfied dependencies for type [...] ...
I then tried to create a local interface, and the injection via the "@Inject" annotation works when both services are in the same EJB-JAR.
However, even if I create a local interface like this, the service is not injected via the "@Inject" annotation but remains null:
@Local
public interface ServiceALocalItf {
    ...
}
I read a lot of articles in which it is highly recommended to use "@Inject" instead of "@EJB" for local invocation. That leads me to the following question: in which cases is the "@Local" EJB invocation recommended (or simply used)?
After all this analysis, I come to the following conclusion:
For each service, I create a "@Local" and a "@Remote" interface.
From the WAR to an EJB-JAR of the EAR, make a JNDI lookup to the remote interface.
From EJB-JAR to EJB-JAR, inject via "@EJB" using the local interface.
For two services within the same EJB-JAR, inject via "@Inject" using the local interface.
What do you think about it? Is it correct?
As you can see, I have declared the annotation "#Local" on the service
implementation without defining the local interface. Is it correct?
With EJB 3.1, the requirement for local interfaces was dropped. No need to write them unless you explicitly need them.
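For example, a minimal no-interface EJB and its injection could look like this (a sketch with placeholder names):
import javax.ejb.EJB;
import javax.ejb.Stateless;

@Stateless
public class ServiceAImpl {
    public void doSomething() {
        // business logic
    }
}

@Stateless
public class ServiceBImpl {

    @EJB
    private ServiceAImpl serviceA;   // no-interface (local) view, no interface needed
}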
What is the best way to inject one EJB into another one?
A couple of things to note here:
With Java EE 6, Java Enterprise has changed. A new JSR defines a so-called managed bean (not to be confused with JSF managed beans) as a sort of minimal component that can still benefit from the container in terms of dependency injection and lifecycle management. This means: if you have a component and "just" want to use DI and let the container control its lifecycle, you do not need to use EJBs for it. You'll end up using EJBs if, and only if, you explicitly need EJB functionality like transaction handling, pooling, passivation and clustering.
This makes the answer to your question come in three parts:
Use @Inject over @EJB: the concept of CDI (a) works for all managed beans (this includes EJBs) and (b) is stateful and therefore far superior to pure @EJB DI.
Are you sure that you need EJBs for your components?
It's definitely worthwhile to have a look at the CDI documentation.
The @Inject annotation is used for Java beans (POJOs), while the @EJB annotation is used for enterprise Java beans. When a container injects an EJB via the @EJB annotation into another bean, it also controls that EJB's lifecycle, performs pooling for stateless beans, and so on (when the bean to be injected is not deployed, the reference will be null). If you use the @Inject annotation, the CDI mechanism simply finds and creates an instance of the injected resource, much like the new operator (if no implementation of the interface to be injected exists, the reference will be null). You can use qualifiers with the @Inject annotation to choose a different implementation of the injected interface.
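For illustration, a small sketch (with made-up names) of using a qualifier with @Inject to pick one implementation:
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import javax.inject.Inject;
import javax.inject.Qualifier;

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
public @interface Premium {}

public interface BillingService {
    void bill();
}

@Premium
public class PremiumBillingService implements BillingService {
    public void bill() { /* premium billing */ }
}

public class DefaultBillingService implements BillingService {
    public void bill() { /* default billing */ }
}

public class OrderProcessor {

    @Inject
    @Premium
    private BillingService billingService;   // the @Premium implementation is injected
}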
I have a ServiceImpl which is annotated with Spring's @Service stereotype and has two methods, each annotated with a custom annotation that is intercepted by Spring.
@Service
public class ServiceImpl implements Service {

    @CustomAnnotation
    public void method1() {
        ...
    }

    @AnotherCustomAnnotation
    public void method2() {
        this.method1();
        ...
    }
}
Now, Spring uses a proxy-based AOP approach, and hence, since I'm calling this.method1(), the interceptor for @CustomAnnotation will not be able to intercept this call. We used to inject this service into another factory class, and that way we were able to get the proxy instance, like this:
@AnotherCustomAnnotation
public void method2() {
    someFactory.getService().method1();
    ...
}
I'm now using Spring 3.0.x; what is the best way to get the proxy instance?
The other alternative is to use AspectJ and @Configurable.
Spring seems to be favoring that approach these days.
I would look into it if you are using Spring 3, as it is faster (performance-wise) and more flexible than proxy-based AOP.
Both methods are inside the same proxy, whereas the AOP functionality only enriches calls coming from the outside (see Understanding AOP Proxies). There are three ways for you to deal with that restriction:
Change your design (that's what I would recommend).
Change the proxy type from JDK proxy to proxy-target-class (CGLIB-based subclassing). Nope, that doesn't help; see @axtavt's comment, it would have to be static AspectJ compilation.
Use ((Service) AopContext.currentProxy()).method1() (works, but is an awful violation of AOP; see the end of Understanding AOP Proxies, and the sketch right after this list).
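A minimal sketch of that third option, reusing the classes from the question; note that exposing the proxy must be enabled, e.g. via @EnableAspectJAutoProxy(exposeProxy = true) on recent Spring versions, or the expose-proxy attribute in the XML AOP configuration on older ones:
import org.springframework.aop.framework.AopContext;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;
import org.springframework.stereotype.Service;

@Configuration
@EnableAspectJAutoProxy(exposeProxy = true)   // stores the current proxy in a ThreadLocal
public class AppConfig {
}

@Service
public class ServiceImpl implements Service {

    @CustomAnnotation
    public void method1() {
        // intercepted logic
    }

    @AnotherCustomAnnotation
    public void method2() {
        // go through the proxy so the @CustomAnnotation interceptor fires
        ((Service) AopContext.currentProxy()).method1();
    }
}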
You could make your ServiceImpl class implement the BeanFactoryAware interface and look itself up via the provided bean factory. But this is not dependency injection anymore.
The best solution is to put method1 in another service bean, which is injected into your existing service bean and to which your existing service bean delegates (see the sketch below).
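A minimal sketch of that delegation (bean names are made up):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class Method1Service {

    @CustomAnnotation
    public void method1() {
        // intercepted logic
    }
}

@Service
public class ServiceImpl implements Service {

    @Autowired
    private Method1Service method1Service;

    @AnotherCustomAnnotation
    public void method2() {
        // the call crosses a bean boundary, so it goes through the proxy
        // and the @CustomAnnotation interceptor fires
        method1Service.method1();
    }
}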
A standard case: you have a controller (@Controller) with @Scope("session").
Classes put in the session are usually expected to implement Serializable so that they can be stored physically, in case the server is restarted, for example.
If the controller implements Serializable, this means all the services (other Spring beans) it refers to will also be serialized. They are often proxies, with references to transaction managers, entity manager factories, etc.
It is not unlikely that some service, or even the controller, holds a reference to the ApplicationContext by implementing ApplicationContextAware, so this can effectively mean that the whole context is serialized. And given that it holds many connections, i.e. things that are not serializable by nature, it will be restored in a corrupt state.
So far I've mostly ignored these issues. Recently I thought of declaring all my Spring dependencies transient and getting them back in readResolve() via static utility classes like WebApplicationContextUtils that hold the request/ServletContext in a ThreadLocal. This is tedious, but it guarantees that, when the object is deserialized, its dependencies will be "up to date" with the current application context.
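A rough sketch of that idea (placeholder names; it assumes deserialization happens while a request is bound to the current thread, so the ServletContext can be reached via Spring's request holder):
import java.io.Serializable;
import javax.servlet.ServletContext;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import org.springframework.web.context.support.WebApplicationContextUtils;

public class MyController implements Serializable {

    private transient MyService myService;   // excluded from serialization

    private Object readResolve() {
        // re-resolve the dependency from the current web application context
        ServletContext servletContext = ((ServletRequestAttributes)
                RequestContextHolder.currentRequestAttributes()).getRequest().getServletContext();
        this.myService = WebApplicationContextUtils
                .getRequiredWebApplicationContext(servletContext)
                .getBean(MyService.class);
        return this;
    }
}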
Is there any accepted practice for this, or any guidelines for serializing parts of the Spring context?
Note that in JSF, managed beans (~controllers) are stateful (unlike in action-based web frameworks). So perhaps my question applies more to JSF than to Spring MVC.
In this presentation (around 1:14) the speaker says that this issue is resolved in Spring 3.0 by providing a proxy for non-serializable beans, which obtains an instance from the current application context (on deserialization).
It appears that the bounty didn't attract a single answer, so I'll document my limited understanding:
@Configuration
public class SpringConfig {

    @Bean
    @Scope(proxyMode = ScopedProxyMode.TARGET_CLASS)
    MyService myService() {
        return new MyService();
    }

    @Bean
    @Scope("request")
    public IndexBean indexBean() {
        return new IndexBean();
    }

    @Bean
    @Scope("request")
    public DetailBean detailBean() {
        return new DetailBean();
    }
}
public class IndexBean implements Serializable {
    @Inject MyService myService;

    public void doSomething() {
        myService.sayHello();
    }
}
public class MyService {
    public void sayHello() {
        System.out.println("Hello World!");
    }
}
Spring will then not inject the naked MyService into IndexBean, but a serializable proxy to it. (I tested that, and it worked).
However, the Spring documentation says:
You do not need to use the <aop:scoped-proxy/> in conjunction with beans that are scoped as singletons or prototypes. If you try to create a scoped proxy for a singleton bean, the BeanCreationException is raised.
At least when using Java-based configuration, the bean and its proxy can be instantiated just fine, i.e. no exception is thrown. However, it looks like using scoped proxies to achieve serializability is not the intended use of such proxies. As such, I fear Spring might fix that "bug" and prevent the creation of scoped proxies through Java-based configuration, too.
Also, there is a limitation: the class name of the proxy is different after a restart of the web application (because the class name of the proxy is based on the hashCode of the advice used to construct it, which in turn depends on the hashCode of an interceptor's Class object; Class does not override Object.hashCode, which is not stable across restarts). Therefore the serialized sessions cannot be used by other VMs or across restarts.
I would expect to scope controllers as 'singleton', i.e. once per application, rather than in the session.
Session-scoping is typically used more for storing per-user information or per-user features.
Normally I just store the 'user' object in the session, and maybe some beans used for authentication or such. That's it.
Take a look at the Spring docs for configuring some user data in session scope, using an AOP proxy:
http://static.springsource.org/spring/docs/2.5.x/reference/beans.html#beans-factory-scopes-other-injection
Hope that helps
I recently combined JSF with Spring. I use RichFaces and the @KeepAlive feature, which serializes the JSF bean backing the page. There are two ways I have gotten this to work:
1) Use @Component("session") on the JSF backing bean.
2) Get the bean from the ELContext whenever you need it, something like this:
#SuppressWarnings("unchecked")
public static <T> T getBean(String beanName) {
return (T) FacesContext.getCurrentInstance().getApplication().getELResolver().getValue(FacesContext.getCurrentInstance().getELContext(), null, beanName);
}
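Usage then looks something like this (assuming a Spring bean named "myService" with a doSomething() method; both names are placeholders):
MyService myService = getBean("myService");
myService.doSomething();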
After trying all the different alternatives suggested, all I had to do was add <aop:scoped-proxy/> to my bean definition and it started working.
<bean id="securityService"
class="xxx.customer.engagement.service.impl.SecurityContextServiceImpl">
<aop:scoped-proxy/>
<property name="identityService" ref="identityService" />
</bean>
securityService is injected into my managed bean, which is view-scoped. This seems to work fine. According to the Spring documentation this is supposed to throw a BeanCreationException since securityService is a singleton. However, this does not seem to happen and it works fine. Not sure whether this is a bug or what the side effects would be.
Serialization of dynamic proxies works well, even between different JVMs, e.g. as used for session replication.
@Configuration
public class SpringConfig {

    @Bean
    @Scope(proxyMode = ScopedProxyMode.INTERFACES)
    MyService myService() {
        return new MyService();
    }
    .....
You just have to set the id of the ApplicationContext before the context is refreshed (see: org.springframework.beans.factory.support.DefaultListableBeanFactory.setSerializationId(String))
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
// all other initialisation part ...
// before! refresh
ctx.setId("portal-lasg-appCtx-id");
// now refresh ..
ctx.refresh();
ctx.start();
Works fine on Spring version 4.1.2.RELEASE.