I have two classes, Foo and Bar, which implement Managed.
I am using 'dropwizard-guice' with enableAutoConfig (Dropwizard Guice) to automatically add bundles and managed objects. But AutoConfig adds the managed objects in random order.
In my case, though, I am injecting a singleton Foo instance into Bar, and I always want Foo to be created and added first, and destroyed after Bar. Is there a way to achieve the required ordering?
Looking at the code, managed objects are simply added to a list. This means that the order you add them in is the order they are executed in. There might be subtleties that trip you up, though, so I would not rely on that.
The lifecycle in DW is handled by Jetty, so the functionality that starts/stops your beans lives there.
I would implement a custom solution, and since you are using Guice this will be fairly straightforward and easy.
Add a new managed interface "MyManaged"
This will enable you to have two different types of managed objects. MyManaged can also implement Comparable or whatever you need to create an order, so that you can control execution order exactly.
Add a new Container "MyManagedContainer"
This one will be responsible for your MyManaged classes. It must implement Managed and will be handled by DW. So basically you wrap your own managed objects in a single Managed object, so that you have control over what to do.
In MyManagedContainer's start/stop, you simply delegate to your own objects' start/stop methods.
Create everything in Guice.
Guice offers you MultiBindings: https://github.com/google/guice/wiki/Multibindings
So, you create your Foo and Bar; they both implement MyManaged and some sort of ordering.
You bind them and inject them as a Set into MyManagedContainer, and MyManagedContainer you add to the Managed lifecycle of Dropwizard.
Tada, you now have exactly controlled execution order.
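A minimal sketch of how the pieces could fit together (the class names, the getOrder() method and the wiring are my own illustrative assumptions, not tested code; here Foo and Bar would implement MyManaged instead of Managed):

import com.google.inject.AbstractModule;
import com.google.inject.Inject;
import com.google.inject.multibindings.Multibinder;
import io.dropwizard.lifecycle.Managed;

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Set;

// Your own lifecycle interface, with an explicit order.
public interface MyManaged {
    int getOrder(); // lower values start first and stop last
    void start() throws Exception;
    void stop() throws Exception;
}

// The single Dropwizard Managed wrapper that drives MyManaged beans in order.
public class MyManagedContainer implements Managed {
    private final List<MyManaged> ordered;

    @Inject
    public MyManagedContainer(Set<MyManaged> managed) {
        this.ordered = new ArrayList<>(managed);
        this.ordered.sort(Comparator.comparingInt(MyManaged::getOrder));
    }

    @Override
    public void start() throws Exception {
        for (MyManaged m : ordered) {
            m.start(); // Foo (order 0) starts before Bar (order 1)
        }
    }

    @Override
    public void stop() throws Exception {
        for (int i = ordered.size() - 1; i >= 0; i--) {
            ordered.get(i).stop(); // reverse order: Bar stops before Foo
        }
    }
}

// Guice module: bind Foo and Bar into the MyManaged set.
public class LifecycleModule extends AbstractModule {
    @Override
    protected void configure() {
        Multibinder<MyManaged> mb = Multibinder.newSetBinder(binder(), MyManaged.class);
        mb.addBinding().to(Foo.class);
        mb.addBinding().to(Bar.class);
    }
}

You would then register the container once, e.g. environment.lifecycle().manage(injector.getInstance(MyManagedContainer.class)), instead of letting AutoConfig pick up Foo and Bar individually.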
I have not in fact implemented this exact setup myself, so treat the sketch above as a starting point only. I also use guicey (which has internal support for multibindings and much, much more) instead of guice.
Let me know if you need more help with this.
Thanks,
Artur
This question is about a specific usage of the callback pattern. By callback I mean an interface through which I can define method(s) that are optionally called (= with a default of 'do nothing', thanks Java 8) from a lower layer in my application. My "application" is in fact a product which may change a lot between client projects, so I need to separate some things in order to reuse what won't change (technical code, integration of technologies) from the rest (model, rules).
Let's take an example:
I developed a search service which is based upon Apache CXF JAX-RS Search.
This service parses a FIQL query, which can only handle AND/OR conditions with =/</>/LIKE/... operators, to create a JPA criteria query; I can't use a condition like 'isNull'.
Using a specific interface, I can define a callback that will be called when I get the criteria query back from the Apache CXF layer in my search service, and that adds my conditions to the existing ones before the query is executed. These conditions are defined in the upper layer of my search service (the RestController). This is in order to reduce code duplication, like returning a criteria query and finalizing it in every method where I need it, and because using @Transactional in a CXF JAX-RS controller does not work well with the way Spring proxies and CXF interact (some JAX-RS annotations are ignored).
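For illustration, here is a sketch of what I mean (the names SearchCallback/beforeExecute, the Customer entity and the deletedAt field are just examples):

import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

// Callback with a Java 8 default method: callers only override when needed.
public interface SearchCallback<T> {
    default void beforeExecute(CriteriaBuilder cb, CriteriaQuery<T> query, Root<T> root) {
        // default: do nothing
    }
}

// In the RestController: add a condition FIQL cannot express.
SearchCallback<Customer> notDeleted = new SearchCallback<Customer>() {
    @Override
    public void beforeExecute(CriteriaBuilder cb, CriteriaQuery<Customer> query, Root<Customer> root) {
        // keep the FIQL-derived predicate, then add the isNull condition
        query.where(cb.and(query.getRestriction(), cb.isNull(root.get("deletedAt"))));
    }
};

The search service would invoke beforeExecute(...) on the callback just before executing the query it built from the FIQL string.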
First question: does this example seem to be a good idea in terms of design?
Now another example: I have an object whose basic fields are set by a service layer. But I want to be able to set other non-nullable fields, not related to the service's process, before the entity is persisted. These fields may vary from one project to another, so I'd like not to have to change the signature of my service's method every time we add or remove columns. So again I'm considering using a callback pattern to be able to set those fields within the same transaction, before the object is persisted by the service layer.
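Again a sketch of the idea (OrderService, the Order entity and EntityEnricher are hypothetical names):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hook with a 'do nothing' default, so most projects pass nothing extra.
public interface EntityEnricher<T> {
    default void enrich(T entity) {
        // default: do nothing
    }
}

@Service
public class OrderService {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public Order create(String customerId, EntityEnricher<Order> enricher) {
        Order order = new Order();
        order.setCustomerId(customerId); // fields the service is responsible for
        enricher.enrich(order);          // project-specific non-nullable fields
        em.persist(order);               // everything persisted in one transaction
        return order;
    }
}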
Second question: what about this example?
Global question: apart from the classic usage of callbacks for events, is it good practice to use this pattern for such specific cases, or is there a better way to handle them?
If you need some code samples, ask me and I'll make some (I can't post my current code).
I wouldn't say that what you've described is a very specific usage of "an interface from which I can define method(s) that are optionally called from a lower layer". I think it is a reasonable and also quite common solution.
Your doubts may be due to the naming. I'd rather use the term command pattern here; it seems to me that it is less confusing. Your approach also resembles the strategy pattern, i.e. you provide (inject) an object which performs some calculations, and depending on the context you inject objects that behave in different ways (for example, add different conditions to a query).
To sum up, callbacks/commands are not only used for events; I'd even say that events are a specific usage of them. The command/callback pattern is used whenever we need to encapsulate an operation within an object and transfer/pass it somehow (by the way, in Java there is no other way to do so, but in C++, for example, there are pointers to methods, and in C# there are delegates...).
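To illustrate, a minimal, made-up command in Java:

// The operation is encapsulated in an object that can be stored and passed around.
public interface Command {
    void execute();
}

public class DeactivateUser implements Command {
    private final String userId;

    public DeactivateUser(String userId) {
        this.userId = userId;
    }

    @Override
    public void execute() {
        System.out.println("deactivating user " + userId);
    }
}

public class Demo {
    public static void main(String[] args) {
        // The receiver only sees "a Command", not the concrete operation.
        Command command = new DeactivateUser("42");
        command.execute();
    }
}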
As to your second example, I'm not sure I understand it correctly. Why can't you simply populate all the required fields of the object before calling the service?
I have a Java model which is effectively a tree of Java beans. Different areas of my application can change different beans of the model. When finished, I want to save the model, which should be able to work out which beans have actually changed and call their save() methods.
I know I can implement save(), isDirty() and setDirty() methods in all the beans, and have each setter check whether there is a change and call setDirty(). But ideally I don't want to have to do this programmatically for each setter. I want to just be able to add new properties to the beans with no additional coding.
I'm also aware of PropertyChangeListeners, but again I would have to programmatically fire a change in each setter.
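For illustration, this is the kind of per-setter boilerplate I'd like to avoid (the bean and property are just examples):

import java.beans.PropertyChangeSupport;
import java.util.Objects;

public class PersonBean {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private boolean dirty;
    private String name;

    // Every setter needs this compare, fire and flag by hand.
    public void setName(String name) {
        if (!Objects.equals(this.name, name)) {
            pcs.firePropertyChange("name", this.name, name);
            this.name = name;
            this.dirty = true;
        }
    }

    public boolean isDirty() {
        return dirty;
    }
}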
Can anyone recommend a pattern/aspect/annotation that I might be able to use to make my life easier? I don't think what I'm trying to achieve is anything new or groundbreaking, so I'm hoping there's something out there I can use.
Note that I'm coding in basic Java, so no fancy frameworks to fall back on (except Spring for bean management, outside of my model).
Thanks in advance.
I have a layered web application driven by Spring, JPA and Hibernate, and I'm now trying to integrate elasticsearch (a search engine).
What I want to do is capture all postInsert/postUpdate events and send those entities to elasticsearch so that it will reindex them.
The problem I'm facing is that my "dal-entities" project will have a runtime dependency on the "search-indexer", while the "search-indexer" will have a compile dependency on "dal-entities", since it needs to do different things for different entities.
I thought about making the "search-indexer" part of the DAL (since it can be argued that it operates on the data), but even then it should be part of the DAO section.
I think my question can be rephrased as: how can I have logic in a Hibernate event listener that cannot be encapsulated solely in an entities project (since it's not its responsibility)?
Update
The reason the dal-entities project is dependent on the indexer is that I need to configure the listener in the Spring configuration file which is responsible for the JPA context (and which obviously resides in dal-entities).
The dependency is not compile-time scope but runtime scope (since at runtime the Hibernate context will need that listener).
The answer is Interfaces.
Rather than depend on the various classes directly (in either direction), you can instead depend on interfaces that surface the capabilities you need. This way you are not directly dependent on the classes but on the interfaces; the interfaces required by the dal-entities can, for example, live in the same package as the dal-entities, and the indexer simply implements them.
This doesn't fully remove the dependency, but it does give you much looser coupling and makes your application a bit more flexible.
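A sketch of the idea (the interface and class names are mine, not from your code):

// Lives in the dal-entities project: the entities module only knows this interface.
public interface EntityIndexer {
    void index(Object entity);
}

// Lives in the search-indexer project: the only piece that knows elasticsearch.
public class ElasticSearchIndexer implements EntityIndexer {
    @Override
    public void index(Object entity) {
        // convert the entity and push it to elasticsearch here
    }
}

Spring then wires an EntityIndexer implementation into the listener at runtime, so dal-entities never needs a compile-time dependency on the indexer classes.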
If you are still worried about things being too tightly coupled, or if you really don't want the two pieces to be circularly dependent at all, then I would suggest you rethink your application design. Asking another question here on SO with more details about some of your code and how it could be better structured would likely get you some good advice on how to improve the design.
Hibernate supports PostUpdateEventListener and PostInsertEventListener.
Here is a good example that might suit your case.
The main concept is being able to detect when your entity has changed and act on it, as shown here:
import org.hibernate.event.spi.PostUpdateEvent;
import org.hibernate.event.spi.PostUpdateEventListener;

public class ElasticSearchListener implements PostUpdateEventListener {

    @Override
    public void onPostUpdate(PostUpdateEvent event) {
        if (event.getEntity() instanceof ElasticSearchEntity) {
            callSearchIndexerService(event.getEntity());
            // or
            // injectedClass.act(event.getEntity());
            // or
            // callWebService(InjectedClassUtility.modifyData(event.getEntity()));
            // ...
        }
    }
}
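To wire the listener in, one way looks roughly like this (a sketch using the Hibernate 4.x event API; adapt to your version):

import org.hibernate.SessionFactory;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;

public class ListenerRegistration {
    // Call this once, after the SessionFactory has been built.
    public static void register(SessionFactory sessionFactory) {
        SessionFactoryImplementor sfi = (SessionFactoryImplementor) sessionFactory;
        EventListenerRegistry registry =
                sfi.getServiceRegistry().getService(EventListenerRegistry.class);
        registry.appendListeners(EventType.POST_UPDATE, new ElasticSearchListener());
    }
}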
Edit
You might consider injecting the class that holds the logic you want to isolate from the project, using Spring.
Another option might be calling an external web service that does not depend on your code, passing it either your original project object or one modified by a utility to fit elasticsearch.
Does anyone know of any custom Spring scopes other than the Servlet Context Scope and ThreadScope?
If you've made some closed-source custom scope, I'd really also be interested in hearing what it does and how it worked out for you. (I'd imagine someone would make a WindowScope in a desktop app?)
I'm open to all use cases, I'm looking to expand my horizon here.
We implemented our own custom Spring scope. A lot of our code works at a relatively low level, close to the database, and we maintain a conceptual level on top of that with its own object model of data sources, links, attributes etc.
Anyway, a lot of beans require a so-called StorageDictionary (an encapsulation of this object graph) to do their work. When we make non-trivial changes to the object graph, the dictionary sometimes needs to be blown away and recreated. Consequently, we implemented a custom scope for dictionary-scoped objects, and part of invalidating a given dictionary involves clearing this custom scope. This lets Spring handle a nice form of automatic caching for these objects: you get the same object back every time, up until the dictionary is invalidated, at which point you get a new object.
This helps not only with consistency but also allows the objects themselves to cache references to entities within the dictionary, safe in the knowledge that the cache will be valid for as long as they themselves are retrievable by Spring. This in turn lets us build these as immutable objects (so long as they can be wired via constructor injection), which is a very good thing to do anyway wherever possible.
This technique won't work everywhere and depends heavily on the characteristics of the software (e.g. if the dictionary were modified regularly this would be horribly inefficient, and if it were never updated it would be unnecessary and slightly less efficient than direct access). However, it has definitely helped us pass off this lifecycle management to Spring in a way that is conceptually straightforward and, in my opinion, quite elegant.
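To give an idea of the shape of such a scope, here is a minimal sketch (not our actual code; the class name and the clear() hook are illustrative):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

public class DictionaryScope implements Scope {

    private final Map<String, Object> beans = new ConcurrentHashMap<>();

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        // Same instance every time, until the scope is cleared.
        return beans.computeIfAbsent(name, n -> objectFactory.getObject());
    }

    @Override
    public Object remove(String name) {
        return beans.remove(name);
    }

    @Override
    public void registerDestructionCallback(String name, Runnable callback) {
        // omitted in this sketch
    }

    @Override
    public Object resolveContextualObject(String key) {
        return null;
    }

    @Override
    public String getConversationId() {
        return "dictionary";
    }

    // Called as part of invalidating the StorageDictionary.
    public void clear() {
        beans.clear();
    }
}

The scope is registered once via ConfigurableBeanFactory.registerScope("dictionary", theScopeInstance), after which beans declared with that scope name are cached and invalidated as described above.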
In my company we've created two custom scopes: one that will use Thread or Request and another that will use either Thread or Session. The idea is that a single scope can be used for scoped beans without having to change the configuration based on the execution environment (JUnit or servlet container). This also comes in really handy when you run items in Quartz and no longer have a Request or Session scope available.
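A sketch of the delegation idea for the Thread-or-Request variant (assuming Spring's built-in RequestScope and SimpleThreadScope; the real implementation may differ):

import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;
import org.springframework.context.support.SimpleThreadScope;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.RequestScope;

public class ThreadOrRequestScope implements Scope {

    private final Scope requestScope = new RequestScope();
    private final Scope threadScope = new SimpleThreadScope();

    // Use the request scope inside a web request, the thread scope otherwise.
    private Scope delegate() {
        return RequestContextHolder.getRequestAttributes() != null ? requestScope : threadScope;
    }

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        return delegate().get(name, objectFactory);
    }

    @Override
    public Object remove(String name) {
        return delegate().remove(name);
    }

    @Override
    public void registerDestructionCallback(String name, Runnable callback) {
        delegate().registerDestructionCallback(name, callback);
    }

    @Override
    public Object resolveContextualObject(String key) {
        return delegate().resolveContextualObject(key);
    }

    @Override
    public String getConversationId() {
        return delegate().getConversationId();
    }
}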
Background:
I work on a single web app that runs 4 different web sites under the same servlet context. Each site has its own domain name, e.g. www.examplesite1.com, www.examplesite2.com, etc.
Problem:
Sites sometimes require their own customised instance of a bean from the app context (usually for customised display of messages or formatting of objects).
For example, say sites 1 and 2 both use the "standardDateFormatter" bean, site 3 uses the "usDateFormatter" bean and site 4 uses the "ukDateFormatter" bean.
Solution:
I'm planning on using a "site" scope.
We have a Site enum like this:
enum Site {
    SITE1, SITE2, SITE3, SITE4;
}
Then we have a filter that stores one of these Site values in the request's thread using a ThreadLocal. This is the site scope's "conversation id".
Then in the app context there'd be a bean named "dateFormatter", with 'scope="site"'. Then, wherever we want to use a date formatter, the correct one for the user's current site will be used.
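A rough sketch of the filter/ThreadLocal part (simplified; the real sample code is linked below, and the domain-to-Site mapping here is illustrative only):

// Holds the current request's Site; populated by a servlet filter.
public final class SiteHolder {
    private static final ThreadLocal<Site> CURRENT = new ThreadLocal<>();

    public static void set(Site site) { CURRENT.set(site); }
    public static Site get() { return CURRENT.get(); }
    public static void clear() { CURRENT.remove(); }
}

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class SiteFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            SiteHolder.set(resolveSite(request.getServerName())); // map domain name -> Site
            chain.doFilter(request, response);
        } finally {
            SiteHolder.clear(); // don't leak the value into a pooled thread
        }
    }

    private Site resolveSite(String host) {
        return host.contains("examplesite2") ? Site.SITE2 : Site.SITE1; // illustrative only
    }

    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }
}

The site scope's get() then looks up SiteHolder.get() and keeps one instance of each scoped bean per Site.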
Added later:
Sample code here:
http://github.com/eliotsykes/spring-site-scope
Oracle Coherence has implemented a datagrid scope for Spring beans. To sum it up:
A Data Grid Bean is a proxy to a java.io.Serializable Bean instance that is stored in a non-expiring Coherence Distributed Cache (called near-datagridbeans).
Never used them myself but they seem cool.
Apache Orchestra provides SpringConversationScope.
In a Spring Batch application, we have implemented an item scope.
Background
We have lots of @Service components which compute something based on the current batch item. Many of them need the same workflow:
1. Determine the relevant item parts.
2. Init stuff based on the item.
3. For each item part, compute something (using that stuff).
We moved the workflow into a base class template method, so the subclasses implement only findItemParts(Item) (doing 1 and 2) and computeSomething(ItemPart) (doing 3). So they became stateful (stuff initialized in findItemParts is needed in computeSomething), and that state must be cleared before the next item.
Some of those services also involve injected Spring beans which are also derived from the current item and must be removed afterwards.
Design
We implemented an AbstractScopeRegisteringItemProcessor which registers the item and allows subclasses to register derived beans. At the end of its process method, it removes the item from its scope context and destroys the derived beans using DefaultSingletonBeanRegistry.destroySingleton.
How it worked out
It works, but has the following problems:
We did not manage to get the derived beans cleaned up without registration (just based on their @Scope). The concrete processor must create and register them.
AbstractScopeRegisteringItemProcessor would have been nicer using composition and dynamically implementing all interfaces of the underlying processor. But then the resulting @StepScope bean is a proxy for the declared return type (i.e. AbstractScopeRegisteringItemProcessor or ItemProcessor) without the required callback interfaces.
EDIT
With the aid of @Eliot Sykes's solution and shared code, plus @Cheetah's BeanDefinition registration, I was able to get rid of the registration as singleton beans. Instead, ItemScopeContext (the storage used by both the processor and the Scope implementation; Java-configured via a static @Bean method) implements BeanDefinitionRegistryPostProcessor. It registers a FactoryBean whose getObject() returns the current item or throws an exception if there is none. Now, a @Component annotated with @Scope(scopeName = "Item", proxyMode = ScopedProxyMode.TARGET_CLASS) can simply inject the item and need not be registered for end-of-scope cleanup.
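The FactoryBean part could look roughly like this (a sketch; Item and the ItemScopeContext accessor are the hypothetical pieces):

import org.springframework.beans.factory.FactoryBean;

// Exposes "the current item" as an injectable bean.
public class CurrentItemFactoryBean implements FactoryBean<Item> {

    @Override
    public Item getObject() {
        Item item = ItemScopeContext.getCurrentItem(); // hypothetical accessor
        if (item == null) {
            throw new IllegalStateException("No item is currently in scope");
        }
        return item;
    }

    @Override
    public Class<?> getObjectType() {
        return Item.class;
    }

    @Override
    public boolean isSingleton() {
        return false; // a fresh lookup per injection point / proxy call
    }
}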
So in the end, it did work out well.
A Spring locale scope based on the user's locale within a web application.
See the related wiki page.
In my company, we have also implemented a custom Spring scope. We have a multi-tenant system where every customer can customize settings. Our instance-based scope caches the beans that are customer-specific. So when a user of a customer logs in, these settings are cached and reused when other users of the same customer sign in.
I once used a kind of conversation scope to store some objects in the session scope, in order to keep them when re-entering the same page, but limited to a single page to avoid leaving useless objects in the session. The implementation just stored the page URL and cleared the conversation scope on each page change.