I am using Struts 2 and Spring autowiring. Right now, the default strategy is set to by-name, but usually we use the constructor and the fallback works to autowire in properties when only one implementing class is available.
There is one property, however, that I'd like to wire into an action class that has several implementing classes, so I made the Action a Java bean, with the properties as fields that can be set. Unfortunately, the only way these will be used (apparently) is if they have public getters/setters, which also exposes them to the type converter at request time. In other words, if a client submits form fields or request parameters with matching names, Struts will attempt to write those values to the properties.
So my question is, is it actually possible to use by-name autowiring without exposing properties like that (which may or may not be a security hazard), or am I better off just using XML and defining the Action as an object with scope prototype?
I did eventually track down the documentation for the ParametersInterceptor which actually lists three ways you can limit what parameters are set by the interceptor.
1. Configuring excludeParams in the parameter configuration, which is a global regex applied to all actions (not what I want, and also possibly deprecated, as it is no longer described in the most recent class docs).
2. Setting excludeMethods (does the same as the previous; the preferred method for global excludes).
3. Implementing ParameterNameAware, which is the closest to what I wanted: here you can whitelist which parameters are accepted (see the sketch after this list).
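For reference, option 3 looks roughly like the sketch below. This is a minimal, hedged example: the allowed parameter names and the SomeService collaborator are made up for illustration, but acceptableParameterName is the actual ParameterNameAware callback.

import com.opensymphony.xwork2.ActionSupport;
import com.opensymphony.xwork2.interceptor.ParameterNameAware;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class MyAction extends ActionSupport implements ParameterNameAware {

    // Hypothetical request parameters this action actually expects.
    private static final Set<String> ALLOWED = new HashSet<String>(Arrays.asList("id", "name"));

    // Spring-wired collaborator (SomeService is a placeholder interface); its setter exists for
    // by-name autowiring but is unreachable through the request because it is not whitelisted.
    private SomeService someService;

    public boolean acceptableParameterName(String parameterName) {
        // ParametersInterceptor only sets parameters for which this returns true.
        return ALLOWED.contains(parameterName);
    }

    public void setSomeService(SomeService someService) {
        this.someService = someService;
    }

    public String execute() {
        return SUCCESS;
    }
}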
In the end, defining the action as a prototype object in the normal Spring configuration seemed the most prudent. Letting the action manage its own parameters means yet another place where parameters need to be explicitly whitelisted every time a change is made.
My questions are about the lifecycle of controllers in the Play framework for Java: whether controllers are stateful instances or stateless with static methods, and how to use dependency injection in controller code.
Is each web request handled by a new instance of a Play controller class, i.e. can a controller store state in fields such as services injected into the controller constructor?
(where in the documentation is it explained?)
Has the Play framework changed since earlier versions (and if so, at what version?) regarding if controllers are stateful instances or stateless controllers with static methods?
Where can you see code examples about how the framework injects services into a controller instance when stateful controller is used and example of how to inject services into a static controller method?
Regarding the latter, i.e. injection into a static method, I suppose that would either have to be a parameter to the method which the framework adds, or, if that is not possible, you may instead have to use a service locator from within the method, e.g. instantiate a Guice module class and then use "injector.getInstance" from within the static controller method.
This subject is touched on in the section "Dependency injecting controllers" on the following page:
https://www.playframework.com/documentation/2.4.x/JavaDependencyInjection
However, it does not show in code how to actually inject services into a controller instance (though probably the same way as for other "components", i.e. with the @Inject annotation), and it certainly does not currently show how to use DI with a static controller method.
I am confused about these things because I have not found documentation that is clear about my questions, and I have also read in a Play book (from 2013) that controller methods should be programmed as stateless and that controller methods should be static.
However, when now using activator for generating a Play application for Java with the latest Play version (2.4.6) I can see that the generated Controller method (Application.index) is NOT static.
Also, at the following documentation page, the controller method is NOT static:
https://www.playframework.com/documentation/2.4.x/JavaActions
This is confusing, and since it is VERY fundamental to understand whether or not each request is handled by a new controller instance (i.e. whether state can be used), I think this should be documented better on the Controllers/Actions page than it currently is; the page linked above does not explain it.
The documentation about dependency injection touches on the subject of static and non-static methods in the section "Dependency injecting controllers", mentioning the "static routes generator", but I think it should be explained better, including code examples.
If someone on the Play team is reading this question, then please add some information to the pages linked above. For example, please do mention (if my understanding is correct) that in previous versions of Play the controller methods were static, and for those versions you should never store state in fields, but in later versions (beginning from version x?) each request is handled by an instance of a controller and can therefore use state (e.g. constructor parameters injected by the framework).
Please also provide code examples about injection used with static controller methods and injection into stateful controller instances with one instance per request.
The section "Component lifecycle" in the dependency injection page only mentions "components" but I think it should also be explicit about the controller lifecycle and its injection, since it is such a fundamental and important knowledge to communicate clearly to all developers to avoid bugs caused by misunderstandings about being stateful or not.
Is each web request handled by a new instance of a Play controller class, i.e. can a controller store state in fields such as services injected into the controller constructor? (where in the documentation is it explained?)
As far as I can tell, controllers are by default singleton objects. This is not clearly documented, but it is implied that controller instances are reused. See the migration guide for Playframework 2.4:
The injected routes generator also supports the # operator on routes, but it has a slightly different meaning (since everything is injected), if you prefix a controller with #, instead of that controller being directly injected, a JSR 330 Provider for that controller will be injected. This can be used, for example, to eliminate circular dependency issues, or if you want a new action instantiated per request.
Also, check this comment made by James Roper (Play core committer) about whether controllers are singletons or not:
Not really - if using Guice, each time the controller is injected into something, a new instance will be created by default. That said, the router is a singleton, and so by association, the controllers it invokes are singleton. But if you inject a controller somewhere else, it will be instantiated newly for that component.
This suggests that the default is to reuse controller instances when responding to requests and, if you want a new action per request, you need to use the syntax described in the migration guide. But... since I'm more inclined to prove and try things instead of just believe, I've created a simple controller to check that statement:
package controllers

import play.api._
import play.api.mvc._

class Application extends Controller {
  def index = Action {
    println(this)
    Ok(views.html.index("Your new application is ready."))
  }
}
Doing multiple requests to this action prints the same object identity for all the requests made. But, if I use the # operator on my routes, I start to get different identities for each request. So, yes, controllers are (kind of) singletons by default.
Has the Play framework changed since earlier versions (and if so, at what version?) regarding if controllers are stateful instances or stateless controllers with static methods?
By default, Play has always advocated stateless controllers, as you can see on the project homepage:
Play is based on a lightweight, stateless, web-friendly architecture.
That has not changed. So, you should not use controllers' fields/properties to keep data that changes over time/requests. Instead, use controllers' fields/properties only to keep references to other components/services that are also stateless.
Where can you see code examples about how the framework injects services into a controller instance when stateful controller is used and example of how to inject services into a static controller method?
Regarding code examples, the Lightbend templates repository is the place to go. Here are some examples that use dependency injection at the controller level:
https://github.com/adrianhurt/play-api-rest-seed
https://github.com/knoldus/playing-reactive-mongo
https://github.com/KyleU/boilerplay
Dependency injection with static methods is not supported, which is why Play still offers the older APIs for use with static methods. The rule of thumb here is: choose between DI and static methods. Trying to use both will just bring complexity to your application.
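For completeness, here is a minimal sketch of constructor injection into a Play 2.4 Java controller. WordService is a made-up component; any class that Guice can provide works the same way, and this matches the injected routes generator setup discussed above, where the router creates the controller instance for you.

package controllers;

import javax.inject.Inject;

import play.mvc.Controller;
import play.mvc.Result;

public class Application extends Controller {

    // Injected, stateless collaborator (hypothetical).
    private final WordService wordService;

    @Inject
    public Application(WordService wordService) {
        this.wordService = wordService;
    }

    // Non-static action method, so it can use the injected field.
    public Result index() {
        return ok(wordService.getWord());
    }
}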
Ok, thank you marcospereira.
I have now also confirmed that you indeed get different instances (different toString values which can be printed/logged in a controller method) of the controller for each request.
For those who are interested, the solution (to get different instances of controller class for each request) is to use for example the following:
GET / #controllers.Application.index()
instead of the following:
GET / controllers.Application.index()
in the file "conf/routes"
AND to also use the following:
routesGenerator := InjectedRoutesGenerator
instead of the following:
routesGenerator := StaticRoutesGenerator
in the file "build.sbt"
Regarding the statement that Play has a "stateless" architecture:
Maybe I am wrong, but as far as I understand the terminology, "stateless" means that the web server does not store any state between requests?
The word "stateless" does not mean that a controller instance can not use fields, e.g. injected into the constructor.
If an injected object is stored as a field in a controller, then that field is a "state" of the controller.
Therefore, even if you use "InjectedRoutesGenerator" and the "#" prefix to get "stateful" controller instances, that injected "state" is only stored within one request, so you can still say that the framework itself is "stateless" since the server does not store any state between multiple requests.
Please do correct me if I have misunderstood something about Play being stateless.
I am using Spring 3.2 and I am looking for a way to force controllers to specify which attributes are allowed to be bound, so malicious users cannot inject values into bound objects.
Spring recommends using setAllowedFields() to white-list / setDisallowedFields() to black-list.
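For reference, the manual whitelist Spring recommends looks like this (a typical sketch; the field names are just examples):

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.InitBinder;

@Controller
public class UserController {

    @InitBinder
    public void initBinder(WebDataBinder binder) {
        // Only these request parameters may be bound to the form-backing object.
        binder.setAllowedFields("firstName", "lastName", "email");
    }
}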
Instead of maintaining this whitelist manually, I want to build it dynamically, so that only the attributes visible on the form are bound.
So is it possible to get this white-list? Is there any way that I can get the visible attributes on the form?
Thanks.
You could implement a RequestDataValueProcessor, in particular the method processFormFieldValue. There you could construct a collection of allowed field names and store it in the session.
Next, you would extend ConfigurableWebBindingInitializer and override the initBinder method, which would retrieve the collection and pre-configure the WebDataBinder.
And finally you would need some configuration to wire everything together.
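A rough sketch of that idea follows. The class names are made up, the session is just one possible place to keep the collected field names, and the RequestDataValueProcessor signatures shown are the Spring 3.1/3.2 ones (later versions added an httpMethod parameter to processAction).

import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import javax.servlet.http.HttpServletRequest;

import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.support.ConfigurableWebBindingInitializer;
import org.springframework.web.context.request.WebRequest;
import org.springframework.web.servlet.support.RequestDataValueProcessor;

// Records every field name rendered by the Spring form tags in the user's session.
class FormFieldCollectingValueProcessor implements RequestDataValueProcessor {

    static final String ATTR = "allowedFields";

    public String processFormFieldValue(HttpServletRequest request, String name, String value, String type) {
        @SuppressWarnings("unchecked")
        Set<String> fields = (Set<String>) request.getSession().getAttribute(ATTR);
        if (fields == null) {
            fields = new HashSet<String>();
            request.getSession().setAttribute(ATTR, fields);
        }
        fields.add(name);
        return value;
    }

    public String processAction(HttpServletRequest request, String action) {
        return action;
    }

    public String processUrl(HttpServletRequest request, String url) {
        return url;
    }

    public Map<String, String> getExtraHiddenFields(HttpServletRequest request) {
        return Collections.emptyMap();
    }
}

// Pre-configures every WebDataBinder with the collected whitelist before binding happens.
class AllowedFieldsBindingInitializer extends ConfigurableWebBindingInitializer {

    @Override
    public void initBinder(WebDataBinder binder, WebRequest request) {
        super.initBinder(binder, request);
        @SuppressWarnings("unchecked")
        Set<String> fields = (Set<String>) request.getAttribute(
                FormFieldCollectingValueProcessor.ATTR, WebRequest.SCOPE_SESSION);
        if (fields != null) {
            binder.setAllowedFields(fields.toArray(new String[fields.size()]));
        }
    }
}

The processor needs to be registered as a bean named "requestDataValueProcessor", and the initializer set as the webBindingInitializer on the RequestMappingHandlerAdapter; that is the wiring configuration mentioned above.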
Links
RequestDataValueProcessor javadoc
ConfigurableWebBindingInitializer javadoc
I've read the documentation, but there is no definition of the main purpose of a Dynamic Bean. I understand how to implement one, but I don't know why this approach is so good.
So could someone describe a situation when it's good to use a Dynamic Bean?
Thanks
Dynamic beans typically allow you to get and set fields which may not be explicit members. The most direct comparison is a map: maps allow you to get and set fields without defining them beforehand. However, a dynamic bean conforms to standard Java idioms (getters/setters).
Unlike a hash map, however, DynaBeans can enforce constraints more readily (and they hide the underlying data structure implementation, so they can be lazy, make data connections when being set, etc.). For example, you can easily add an explicit getter or setter to your DynaBean, and the code would read very idiomatically and interact cleanly with other bean APIs.
public int getCost() {
    // The underlying dynamic property may be absent; fall back to a sentinel value.
    Object cost = get("cost");
    if (cost == null) {
        return -1;
    }
    return Integer.parseInt(cost.toString());
}
The most useful part about dynamic beans in ATG is providing additional DynamicPropertyMapper classes for classes that aren't already covered by it. First, note that you can use the DynamicBeans.setPropertyValue(object, property, value) and DynamicBeans.getPropertyValue(object, property) static methods to set or get properties on an object that don't necessarily correspond with Java bean properties. If the object you're using isn't registered with dynamic beans, it'll try to use Java bean properties by default. Support is provided out of the box to do that with repository items (properties correspond to repository item properties; also applies to the Profile object, naturally), DynamoHttpServletRequest objects (correspond to servlet parameters), maps/dictionaries (correspond to keys), and DOM Node objects (correspond to element attributes followed by the getters/setters of Node).
To add more classes to this, you need to create classes that extend DynamicPropertyMapper. For instance, suppose you want to make HttpSession objects work similarly, using attributes with a fallback to the getters and setters of HttpSession. Then you'd implement the three methods from DynamicPropertyMapper, and the getBeanInfo(object) method can be easily implemented using DynamicBeans.getBeanInfo(object) if you don't have any custom BeanInfo or DynamicBeanInfo classes for the object you're implementing this for.
Once you have a DynamicPropertyMapper, you can register it with DynamicBeans.registerPropertyMapper(mapper). Normally this would be put into a static initialization block for the class you're writing the property mapper for. However, if you're making a property mapper for another class out of your control (like HttpSession), you'll want to make a globally-scoped generic service that simply calls the register method in its doStartService(). Then you can add that service to your initial services.
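As a rough sketch of that last wiring step (both class names here are hypothetical; only registerPropertyMapper and doStartService come from the description above, so check the atg.beans and atg.nucleus javadoc for the exact signatures):

import atg.beans.DynamicBeans;
import atg.nucleus.GenericService;
import atg.nucleus.ServiceException;

// Globally-scoped Nucleus service whose only job is to register the property mapper at startup,
// since HttpSession is not a class we control and so cannot carry the static initializer itself.
// HttpSessionPropertyMapper stands for the class you would write by implementing the three
// DynamicPropertyMapper methods described above.
public class HttpSessionPropertyMapperRegistrar extends GenericService {

    public void doStartService() throws ServiceException {
        DynamicBeans.registerPropertyMapper(new HttpSessionPropertyMapper());
    }
}

Add this service to your initial services so it starts with the Nucleus.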
Anyone know of any other custom spring scopes than Servlet Context Scope and ThreadScope ?
If you've made some closed-source custom scope I'd really also be interested in hearing what it does and how it worked out for you. (I'd imagine someone would make a WindowScope in a desktop app ?)
I'm open to all use cases, I'm looking to expand my horizon here.
We implemented our own custom Spring scope. A lot of our code works at a relatively low level, close to the database, and we maintain a conceptual level on top of that with its own object model of data sources, links, attributes etc.
Anyway, a lot of beans require a so-called StorageDictionary (an encapsulation of this object graph) to do their work. When we make non-trivial changes to the object graph, the dictionary sometimes needs to be blown away and recreated. Consequently, we implemented a custom scope for objects that were dictionary scoped, and part of the invalidation of a given dictionary involves clearing this custom scope. This lets Spring handle a nice form of automatic caching for these objects. You get the same object back every time up until the dictionary is invalidated, at which point you get a new object.
This helps not only with consistency but also allows the objects themselves to cache references to entities within the dictionary, safe within the knowledge that the cache will be valid for as long as they themselves are retrievable by Spring. This in turn lets us build these as immutable objects (so long as they can be wired via constructor injection), which is a very good thing to do anyway wherever possible.
This technique won't work everywhere and does depend heavily on the characteristics of the software (e.g. if the dictionary was modified regularly this would be horribly inefficient, and if it was updated never this would be unnecessary and slightly less efficient than direct access). However, it has definitely helped us pass off this management of lifecycle to Spring in a way that is conceptually straightforward and in my opinion quite elegant.
In my company we've created two custom scopes, one that will use Thread or Request and another that will use either Thread or Session. The idea is that a single scope can be used for scoped beans without having to change configuration based on the execution environment (JUnit or Servlet container). This also really comes in handy for when you run items in Quartz and no longer have a Request or Session scope available.
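For readers who haven't written one: the general shape of such a thread-bound custom scope is quite small. This is a minimal sketch, not the implementation described above (the class name is made up), and a real version would add destruction-callback handling and cleanup at the end of the request:

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

public class ThreadBackedScope implements Scope {

    // One bean map per thread; acts as the per-"conversation" cache.
    private final ThreadLocal<Map<String, Object>> beans =
            ThreadLocal.withInitial(HashMap::new);

    public Object get(String name, ObjectFactory<?> objectFactory) {
        // Return the instance cached for this thread, creating it on first use.
        return beans.get().computeIfAbsent(name, key -> objectFactory.getObject());
    }

    public Object remove(String name) {
        return beans.get().remove(name);
    }

    public void registerDestructionCallback(String name, Runnable callback) {
        // Not supported in this sketch.
    }

    public Object resolveContextualObject(String key) {
        return null;
    }

    public String getConversationId() {
        return Thread.currentThread().getName();
    }
}

The scope is then registered under a name, e.g. with a CustomScopeConfigurer bean or ConfigurableBeanFactory.registerScope("thread", new ThreadBackedScope()), and beans simply declare scope="thread".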
Background:
I work on a single web app that runs 4 different web sites under the same servlet context. Each site has its own domain name, e.g. www.examplesite1.com, www.examplesite2.com, etc.
Problem:
Sites sometimes require their own customised instance of a bean from the app context (usually for customised display of messages or formatting of objects).
For example, say sites 1 and 2 both use the "standardDateFormatter" bean, site 3 uses the "usDateFormatter" bean and site 4 uses the "ukDateFormatter" bean.
Solution:
I'm planning on using a "site" scope.
We have a Site enum like this:
enum Site {
    SITE1, SITE2, SITE3, SITE4;
}
Then we have a filter that stores one of these Site values in the request's thread using a ThreadLocal. This is the site scope's "conversation id".
Then in the app context there'd be a bean named "dateFormatter", with 'scope="site"'. Then, wherever we want to use a date formatter, the correct one for the user's current site will be used.
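A minimal sketch of the filter part (the names and the host-to-site mapping are illustrative, and the scope's getConversationId() would read SiteContextHolder.get(); see the sample code link below for a full implementation):

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Holds the current Site for the duration of one request, on the request's thread.
final class SiteContextHolder {

    private static final ThreadLocal<Site> CURRENT = new ThreadLocal<Site>();

    static void set(Site site) { CURRENT.set(site); }

    static Site get() { return CURRENT.get(); }

    static void clear() { CURRENT.remove(); }
}

public class SiteResolvingFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Hypothetical mapping from the requested host name to a Site value.
        SiteContextHolder.set(request.getServerName().contains("examplesite1") ? Site.SITE1 : Site.SITE2);
        try {
            chain.doFilter(request, response);
        } finally {
            SiteContextHolder.clear();
        }
    }

    public void init(FilterConfig filterConfig) { }

    public void destroy() { }
}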
Added later:
Sample code here:
http://github.com/eliotsykes/spring-site-scope
Oracle Coherence has implemented a datagrid scope for Spring beans. To sum it up:
A Data Grid Bean is a proxy to a java.io.Serializable Bean instance that is stored in a non-expiring Coherence Distributed Cache (called near-datagridbeans).
Never used them myself but they seem cool.
Apache Orchestra provides SpringConversationScope.
In a Spring Batch application, we have implemented an item scope.
Background
We have lots of @Service components which compute something based on the current batch item. Many of them need the same workflow:
1. Determine relevant item parts.
2. Init stuff based on the item.
3. For each item part, compute something (using stuff).
We moved the workflow into a base class template method, so the subclasses implement only findItemParts(Item) (doing 1 and 2) and computeSomething(ItemPart) (doing 3). So they became stateful (stuff initialized in findItemParts is needed in computeSomething), and that state must be cleared before the next item.
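In code, the shared workflow looks roughly like the following template method (a simplified sketch; Item and ItemPart here are just type parameters standing in for the real types):

import java.util.List;

// Base class capturing the shared workflow; subclasses implement only the two hooks.
abstract class AbstractItemComputation<Item, ItemPart> {

    // Steps 1 and 2: determine the relevant parts and initialise per-item state in fields.
    protected abstract List<ItemPart> findItemParts(Item item);

    // Step 3: compute something for a single part, using the state initialised above.
    protected abstract void computeSomething(ItemPart part);

    public final void processItem(Item item) {
        for (ItemPart part : findItemParts(item)) {
            computeSomething(part);
        }
        // findItemParts typically stores state in fields, and that state must be cleared
        // before the next item -- which is exactly what the item scope below provides.
    }
}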
Some of those services also involve injected Spring beans which are also derived from the current item and must be removed afterwards.
Design
We implemented an AbstractScopeRegisteringItemProcessor which registers the item and allows subclasses to register derived beans. At the end of its process method, it removes the item from its scope context and destroys the derived beans using DefaultSingletonBeanRegistry.destroySingleton.
How it worked out
It works, but has the following problems:
We did not manage to get the derived beans cleaned up without registration (just based on their @Scope). The concrete processor must create and register them.
AbstractScopeRegisteringItemProcessor would have been nicer using composition and dynamically implementing all interfaces of the underlying processor. But then the resulting @StepScope bean is a proxy for the declared return type (i.e. AbstractScopeRegisteringItemProcessor or ItemProcessor) without the required callback interfaces.
EDIT
With the aid of @Eliot Sykes's solution and shared code plus @Cheetah's BeanDefinition registration, I was able to get rid of the registration as singleton beans. Instead, ItemScopeContext (the storage used by both the processor and the Scope implementation; Java-configured via a static @Bean method) implements BeanDefinitionRegistryPostProcessor. It registers a FactoryBean whose getObject() returns the current item or throws an exception if there is none. Now, a @Component annotated with @Scope(scopeName = "Item", proxyMode = ScopedProxyMode.TARGET_CLASS) can simply inject the item and need not be registered for end-of-scope cleanup.
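A minimal sketch of that factory bean idea (the ItemScopeContext accessor is a guess at the shape, not the actual code):

import org.springframework.beans.factory.FactoryBean;

public class CurrentItemFactoryBean implements FactoryBean<Object> {

    public Object getObject() {
        // Hypothetical accessor on the storage shared by the processor and the Scope.
        Object item = ItemScopeContext.getCurrentItem();
        if (item == null) {
            throw new IllegalStateException("No item is currently in scope");
        }
        return item;
    }

    public Class<?> getObjectType() {
        return Object.class;
    }

    public boolean isSingleton() {
        // A fresh lookup for every injection, resolved through the scoped proxy.
        return false;
    }
}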
So in the end, it did work out well.
A Spring locale scope based on the user's locale within a web application.
See related wiki page
In my company, we have also implemented a Spring custom scope. We have a multi-tenant system where every customer can customize settings. This instance-based scope of ours caches the beans which are customer-specific. So each time a user of a customer logs in, these settings are cached and reused when other users of the same customer sign in.
I once used a kind of conversation scope to store some objects in the session scope, in order to keep them when re-entering the same page, but limited to a single page to avoid leaving useless objects in the session. The implementation just stored the page URL and cleared the conversation scope on each page change.
I have a pretty big web flow definition, which I do not want to copy/paste in order to reuse it. There are references to action beans in the XML, which is kind of natural.
I want to use the same flow definition twice, the second time with the actions configured differently (injecting a different implementation of a service into them).
Is there an easy way to do this?
The problem is that I want to use the same flow with different beans at once, in the same app. Copy/paste is bad, but I don't see another solution for now.
You could try creating a new flow that extends the "pretty big one" and adding flowExecutionListeners to it.
The interface "FlowExecutionListener"defines methods for the following events in flow execution:
requestSubmitted
requestProcessed
sessionCreating
sessionStarting
sessionStarted
eventSignaled
transitionExecuting
stateEntering
viewRendered
viewRendering
stateEntered
paused
resuming
sessionEnding
sessionEnded
exceptionThrown
You can write a handler that injects the required resources into your flow (and use different handlers with different flows) by storing them in the RequestContext, where you can access them in your flow definition.
Note that in that case you would still have to modify the "pretty big flow" to use those resources instead of referencing the beans directly.
I'm in the same fix that you're in... I have different subclasses which have corresponding action beans, but a lot of the flow is the same. In the past we have just copied and pasted... not happy with that!
I have some ideas I am going to try out using the expression language. First, I came up with an action bean factory that will return the right action bean to use for a given class; then I can call that factory to set a variable that I can use instead of the hard-coded bean name.
Here's part of the flow:
<action-state id="checkForParams">
    <on-entry>
        <set name="flowScope.clientKey" value="requestParameters.clientKey"/>
        <set name="flowScope.viewReportBean"
             value="reportActionFactory.getViewBean(reportUnit)"/>
    </on-entry>
    <evaluate expression="viewReportBean"/>
The evaluate in the last line would normally refer directly to a bean, but now it refers to the result of the "set" I just did.
Good news--the right bean gets called.
Bad news--anything in the flow scope needs to be Serializable, so I get a NotSerializableException--arggh!
I can try setting something in a very short-lived scope, in which case it will need to be called all the time... or I can figure out some kind of proxy which holds the real bean in a field declared "transient".
BTW, I am using Spring 2.5.6 and webflow 2.0.7. Later versions may have better ways of handling this; in particular, EL's have gotten some attention, it seems. I'm still stuck with OGNL, which is the Spring 1.x EL.
I'm sure some webflow guru knows other ways of doing things in a less clunky fashion...
I don't think you can use the same webflow definition with the actions configured in two different ways.
If you want to use different actions you'll either have to reconfigure your action beans then redeploy your app or create a separate webflow definition with the differently configured beans.
This is a great Spring WebFlow resource.
Try to refactor the common configurable part into a subflow, and call the subflow from the different main flows where you want to reuse it.
Pass parameters to the subflow to configure it in any way needed, using the Spring Expression Language to pass different Spring beans, etc.
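A rough sketch of what that can look like in the flow XML (the ids, bean names and the end-state event are illustrative):

<!-- In a main flow: call the shared subflow and hand it the site-specific bean. -->
<subflow-state id="runSharedReport" subflow="shared-report-flow">
    <input name="reportService" value="usReportService"/>
    <transition on="reportDone" to="nextState"/>
</subflow-state>

<!-- In shared-report-flow: declare the input and use it in expressions instead of a hard-coded bean. -->
<input name="reportService"/>
<evaluate expression="reportService.buildReport(flowScope.reportUnit)"/>

As noted earlier in this thread, whatever ends up in flow scope must be serializable, so this works best if the beans passed in are serializable or are re-resolved per use.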