Hello, I am trying to build a framework of IAction objects that execute in a sequence. Each IAction implementation executes its processAction() method as it was implemented to do. This method returns an IAction, which in most cases is itself, but in some cases may be a reference to another IAction in the list. The IActionIterator interface is created to manage that movement through the list.
Here are the interfaces:
public interface IAction {
    public IAction processAction();
}
public interface IActionIterator {
    public IAction getFirstAction();
    public IAction getNextAction( IAction action );
}
My framework will obtain a List and will loop through the list, executing the processAction() of each IAction. Here is what the loop will look like:
IActionIterator iter = ... // created somehow
IAction action = iter.getFirstAction();
do {
    IAction newAction = action.processAction();
    if( action.equals( newAction ) )
        action = iter.getNextAction( action );
    else
        action = newAction;
} while( action != null );
So each IAction has its own implementation to execute, and some IActions have business logic that will return another IAction in the list instead of letting the next one in the list execute.
I am anticipating some IAction classes whose results will be needed by the next IAction in the list. For example, one IAction executes an SQL query and the results are pertinent to the next IAction in the list.
So my question is: how would or should I implement this passing of information from IAction to IAction in my framework?
Sounds like you are trying to represent a state-transition graph. Can you use an FSM rather than trying to roll your own?
Change the return signature of your getFirstAction() / getNextAction() to be a simple holder object:
public interface IActionResponse {
    List getResultList();
    IAction getReturnAction();
}
I have never done this, but perhaps you could use the Serializable interface to pass generic information between actions? Then the actions would expect certain kinds of data and know what to do with them.
Hope this helps.
I would put it in the interface itself, if only the result of the previous action is needed. Otherwise, if the results of earlier actions are also required, it is better to store the results in some external class instance that gets populated during the cycle; based on the action id you can then pull the results.
For the simpler case where only the result of the previous action is needed:
public interface IActionIterator {
    public IAction getFirstAction();
    public Object getFirstActionResult();
    public IAction getNextAction( IAction action );
}
This is the kind of circumstance where the following sort of interface does really well:
public interface IAction {
void invoke(IActionRequest req, IActionResponse res);
}
Each action gets its inputs -- which are open-ended, could be anything -- from the 'req' object. In turn, each action has the chance to communicate outputs -- again, open-ended -- using the 'res' object.
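For illustration, here is a minimal sketch of what those request/response types might look like; the attribute-map shape and the MapBackedContext class are assumptions, not something this answer prescribes (each type would live in its own file):

import java.util.HashMap;
import java.util.Map;

public interface IActionRequest {
    Object getAttribute(String name);
}

public interface IActionResponse {
    void setAttribute(String name, Object value);
}

// A trivial map-backed holder the framework could pass along the chain,
// acting as both the request for one action and the response of the previous one.
public class MapBackedContext implements IActionRequest, IActionResponse {
    private final Map<String, Object> attributes = new HashMap<>();

    public Object getAttribute(String name) { return attributes.get(name); }
    public void setAttribute(String name, Object value) { attributes.put(name, value); }
}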
And FWIW, the framework you're describing sounds very similar to some existing frameworks. Check out the Cernunnos project for an example of one that's designed pretty close to what you're proposing.
You could consider simply passing the actions a context/state object (which is essentially a map). Have the action populate the context object with the elements that it needs to pass onto other actions along the chain. Subsequent actions can then use those elements and manipulate the context as necessary. You need to make sure that you sequence things correctly as this is like managing global state. You may need to have multiple interfaces that only expose a subset of the context to particular actions. Your controller can examine the context between action invocations as necessary.
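A rough sketch of that idea follows; the ActionContext name and its methods are assumptions for illustration, and the processAction signature shown here is a variation of the asker's original interface:

import java.util.HashMap;
import java.util.Map;

// A shared, map-like context passed to every action in the chain.
public class ActionContext {
    private final Map<String, Object> entries = new HashMap<>();

    public void put(String key, Object value) { entries.put(key, value); }
    public Object get(String key) { return entries.get(key); }
    public boolean contains(String key) { return entries.containsKey(key); }
}

// A possible variation of IAction where the context travels with the call.
public interface IAction {
    IAction processAction(ActionContext context);
}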
I think what you need is the combination of two design patterns: Command and Chain of Responsibility.
With commands you can encapsulate parameters for actions and pass data between actions. You can record the processing of parameters and even achieve undo functionality if needed. In effect, each command object will be something like a context for the current run of actions.
Actions can pass control to each other using the chain of responsibility, where each action has a link to the next action. After doing its business logic, an action calls the next action, providing the command object to it.
To make this easily extensible to different action classes, you can also use the Template Method pattern when implementing the chain of responsibility. Each action class will then be a subclass of some super action that defines a template method in which some pre-processing of the command takes place, such as checking arguments. Then the subclass is called, and after that the template method passes control to the next action in the chain.
When using these patterns you will not need a list of actions and a general loop to process them. You just construct a chain of actions and then call the process method on the first action, providing a command object to it. When constructing the chain you also have the freedom to order the actions any way you want; there is no dependency on the order of actions in a list.
So, basically you'll need the following classes and interfaces:
public interface IAction {
    public void process(Command command);
}
public abstract class SuperAction implements IAction {
    private IAction nextAction;

    public void process(Command command) {
        // Check/validate the command object here (template method pre-processing)
        processAction(command);
        if (nextAction != null)
            nextAction.process(command);
    }

    public abstract void processAction(Command command);

    public void setNextAction(IAction action) { nextAction = action; }
}

public class ActionConstructor {
    public IAction constructActionChain() {
        // construct the chain and return its first action
        return null; // placeholder
    }
}
And to complete this, you'll have to define the Command object, either with an execute method or without one. Without an execute method, Command will just be a container for the parameters that are passed between actions.
Some general remarks: you are creating a finite state machine, which could be the beginning of a workflow engine. It's worth taking a look at other people's attempts:
http://sujitpal.blogspot.com/2008/03/more-workflow-events-and-asynchronous.html
http://micro-workflow.com/PDF/toolsee01.pdf
http://www.bigbross.com/bossa/overview.shtml
Workflow engines often operate on a standard type of workflow item. If this is the case for you, then you could pass a (collection of) workflow items (objects implementing your WorkflowItem interface) into the first action, and pass it on to each next action. This allows the engine to treat common workflow concerns by referring to the WorkflowItem interface. Specific manipulation would require detecting the subtype.
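As a hedged illustration (this answer only names the WorkflowItem interface; the methods below are assumptions):

// Common contract every workflow item implements; the engine relies only on
// these methods, while specific actions may downcast for item-specific data.
public interface WorkflowItem {
    String getId();
    String getState();
    void setState(String state);
}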
One topic I have not seen anybody mention here is a typical aspect of workflows: asynchronous processing. Often you want to be able to perform actions up to a certain point and then persist the workflow state. The workflow could then be triggered to continue by some event (e.g. a user logging in and making a decision). To do this you would need a data layer that can persist the generic workflow state separately from the application-specific data.
I recently built a full-fledged workflow engine for a project I was working on. It is doable (especially if you don't try to make it endlessly generic, but match it to the problem at hand), but it is a lot of work. The reason we decided to do this was that the existing frameworks (jBPM et al.) seemed too inflexible and heavyweight for our needs. And I had a lot of fun doing it!
Hope this helps.
Related
I'm kind of new to Java and have a rather simple question:
I have an interface, with a method:
public interface Interface_Updatable {
    public void updateViewModel();
}
I implement this interface in several classes. Each class then of course has that method updateViewModel.
Edit: I instantiate these classes in a main function. Here I need code that calls updateViewModel for all objects that implement the interface.
Is there an easy way to call them all in one go? I don't want to call the method on every object instance separately and keep that list of calls updated by hand; doing so might lead to errors in the long run.
The short form is: no, there's no simple way to "call this method on all instances of classes that implement this interface".
At least not in a way that's sane and maintainable.
So what should you do instead?
In reality you almost never want to just "call it on all instances", but you have some kind of relation between the thing that should trigger the update and the instances for which it should be triggered.
For example, the naming of the method suggests that instances of Interface_Updatable are related to the view model. So if they "care" about changes to the view model, they could register themselves as interested parties by calling something like theViewModel.registerForUpdates(this). The view model could hold a list of all objects that registered this way and then loop over those instances, calling updateViewModel on each one (of course you would also need to make sure that unregistration happens where appropriate).
This is the classical listener pattern at work.
But the high-level answer is: you almost never want to call something on "all instances", instead the instances you want to call it on have some relation to each other and you would need to make that relation explicit (via some registration mechanism like the one described above).
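As a minimal sketch of that listener-style registration (the ViewModel class and method names here are assumptions for illustration):

import java.util.ArrayList;
import java.util.List;

public class ViewModel {
    // Objects that registered interest in view-model changes.
    private final List<Interface_Updatable> listeners = new ArrayList<>();

    public void registerForUpdates(Interface_Updatable listener) {
        listeners.add(listener);
    }

    public void unregisterForUpdates(Interface_Updatable listener) {
        listeners.remove(listener);
    }

    // Call this whenever the view model changes.
    public void fireUpdate() {
        for (Interface_Updatable listener : listeners) {
            listener.updateViewModel();
        }
    }
}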
There is no easy way to call this method on all classes that implement this interface. The problem is that you would need to somehow keep track of all the instances of classes that implement it.
A possible object-oriented way to do this is to pass a list of objects whose classes implement the Interface_Updatable interface to a method, and then call updateViewModel on each object in that list:
public void updateViewModels(List<Interface_Updatable> instances) {
    for (var instance : instances) {
        instance.updateViewModel();
    }
}
Going through the Command design pattern, I understand that we create a Command by setting the context via the constructor and then calling the execute method to perform some action on that context. Example:
public class Command implements ICommand {
    Device device;

    public Command(Device device) {
        this.device = device;
    }

    public void execute() {
        this.device.turnOn();
    }
}
I was wondering: with this approach we would be required to create a new Command object for every Device object we create. Would it be okay to pass the context and some parameters to the execute method instead? I am looking for something like:
public class Command implements ICommand {
    public void execute(Device device) {
        device.turnOn();
    }
}
Are there any issues with this approach?
The idea behind the Command pattern is that it should encapsulate all the information needed to perform an action. This lets you do things like delay the execution of the action until a later time, or even undo the action after it has been executed.
For a concrete example, consider the "Undo" feature in a word processor.
Each time you type in the document, the application uses the Command pattern to record the action.
If you hit "Undo", the text you typed disappears.
Then, when you hit "Redo", the typed text reappears, without the application needing to ask for the input again. It does this by replaying the command that it stored in step 1, which contains all the information about the text you typed.
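As a rough sketch of how such a stored command might look (TypeTextCommand and the Document type with insert/delete methods are assumptions here, and undo() goes beyond the ICommand interface shown above):

// Captures everything needed to replay - and revert - one typing action.
public class TypeTextCommand implements ICommand {
    private final Document document; // assumed document abstraction
    private final String text;
    private final int position;

    public TypeTextCommand(Document document, String text, int position) {
        this.document = document;
        this.text = text;
        this.position = position;
    }

    public void execute() {
        document.insert(position, text);
    }

    public void undo() {
        document.delete(position, text.length());
    }
}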
If your command object requires additional parameters in order to perform the action, it doesn't really implement the Command pattern. It loses most of the advantages of the Command pattern, because the caller can't execute the action without additional information. In your example, the caller would need to know which device is being turned on.
However, that doesn't mean that you have to stick rigidly to the pattern. If it's more useful in your case for the execute method to accept a Device parameter, that's what you should do! If you do that, you should consider renaming the interface, though. Referring to it as a Command pattern when it doesn't follow the pattern exactly could confuse other readers of the code.
When deciding whether to take an object in as a method parameter or a constructor parameter, one of the things I find most helpful is to consider how I'm going to test the application. Objects that form part of the initial setup of the test get passed in as constructor parameters, while objects that form the inputs of the test, or the test vector, are method parameters. I find that following that guideline helps to produce maintainable code.
I'm implementing a project using CQRS and Event Sourcing. I realized that my commands and my events are nearly always the same.
Let's say I have a command CreatePost:
public class CreatePost implements Command {
    private final String title;
    private final String content;
}
The event fired from this command is the same:
public class PostCreated implements Event {
    private final String title;
    private final String content;
}
How do you handle that in your applications?
EDIT: Of course I'm aware of basic OOP techniques. I could create an abstraction holding the common fields, but this question needs to be considered in the CQRS/ES context.
How to avoid repeating fields between command and event?
I wouldn't -- not until I absolutely can't stand it.
Fundamentally, commands and events aren't objects, they are messages - representations of state that cross boundaries. I think it's important that your in memory representation not lose sight of that.
One of the characteristics of message schemas is that they evolve over time, so you need to be aware of compatibility. And here's the kicker: events and commands evolve on different time scales.
Command messages are how your domain model communicates with other processes; changes to that part of the API are driven by exposing/deprecating functionality.
But in an event sourced world, events are messages from previous versions of the domain to the current version. They are part of the support we need to deploy new models that resume work from where the old model left off.
So I would keep commands and events separate from one another - they are different things.
If you are seeing a lot of duplication in the fields, that may be a hint that there's some value type that you haven't yet made explicit.
CreatePost
{ Post
{ Title
, Contents
}
}
PostCreated
{ Post
{ Title
, Contents
}
}
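In Java, making that value type explicit might look roughly like this (a sketch; the exact shape of Post is an assumption):

// The shared value type, made explicit.
public final class Post {
    private final String title;
    private final String contents;

    public Post(String title, String contents) {
        this.title = title;
        this.contents = contents;
    }

    public String getTitle() { return title; }
    public String getContents() { return contents; }
}

public class CreatePost implements Command {
    private final Post post;
    public CreatePost(Post post) { this.post = post; }
    public Post getPost() { return post; }
}

public class PostCreated implements Event {
    private final Post post;
    public PostCreated(Post post) { this.post = post; }
    public Post getPost() { return post; }
}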
Simply implement a model for your Post, i.e.:
public class PostModel {
    private String title;
    private String content;
    // Add get/set methods
}
Then re-use this in both your events and commands.
Just compiling this answer from the discussion we had in comments.
Compose, don't inherit
I would definitely not use inheritance in a situation like this, because it just adds unnecessary complexity; besides, there is no behavior to inherit here.
Another option is to have a well-defined contract for your commands and events, that is, two interfaces, IPost and IEvent, and implement those in the commands and events.
Regarding naming: we all know that naming is hard, so you should choose names wisely, according to your business or technical language/vocabulary requirements.
Why split into two interfaces?
Because a command usually carries more information for its handler than an event carries for its event handler, and event handlers should be kept as thin as possible, it is better for each message to carry only the payload it needs.
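A cautious sketch of that contract idea (the interface names come from this answer; their members and the example command are assumptions):

// Shared post contract that both the command and the event can expose.
public interface IPost {
    String getTitle();
    String getContent();
}

// Marker contract for events; could also carry metadata such as a timestamp.
public interface IEvent {
}

public class CreatePost implements Command, IPost {
    private final String title;
    private final String content;

    public CreatePost(String title, String content) {
        this.title = title;
        this.content = content;
    }

    public String getTitle() { return title; }
    public String getContent() { return content; }
}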
Closing words
Separating commands and events is a must, since commands represent an operation that is happening now, whereas events represent actions that happened in the past. Events are usually the outcome of a command, indicating to the outside world, from the viewpoint of a bounded context, that something happened inside your current BC.
How to avoid repeating fields between command and event?
Just don't. The cost of the dependency plus the risk of sharing the wrong things is higher than the maintenance gain. You can live with that duplication, just as you probably already live with duplication between your domain model, view model, query model, etc.
You can use whatever you want as long as it is just an implementation detail.
In PHP I use traits a lot for this kind of reuse. You could even use inheritance, but the clients (the code that uses those classes) should not depend on the base class; it would be best if they never even find out that your event and command classes share something. I don't have enough Java experience to tell you the best way to do that in Java.
P.S. I would not go with creating interfaces; as I said above, this should be just an implementation detail.
I've run into this, and almost universally I've not found a case where the event needed different properties than the command for a particular domain action. I definitely find the menial copy/paste duplication of property getters/equals/hashCode/toString pretty annoying. If I could go back, I'd define a marker interface Action and then
interface Command<T extends Action> {
    T getAction();
    // other properties common to commands of all action types...
}

abstract class AbstractCommand<T extends Action> implements Command<T> {
    public T getAction() { ... }
    // other properties...
}

interface Event<T extends Action> {
    T getAction();
    // other properties common to events of all action types...
}

abstract class AbstractEvent<T extends Action> implements Event<T> {
    public T getAction() { ... }
    // other properties...
}
Then for each domain action, define concrete implementations.
class ConcreteAction implements Action {
    // properties COMMON to the command and event(s)...
}
class ConcreteCommand extends AbstractCommand<ConcreteAction> { ... }
class ConcreteEvent extends AbstractEvent<ConcreteAction> { ... }
If the command and event action properties need to diverge for some reason, I'd put just those particular properties in the ConcreteCommand or ConcreteEvent classes.
The inheritance model here is very simple. You will rarely need to do anything more than extend the abstract classes, with nothing more to implement than the common Action. And when an action has no properties at all, just define a class like EmptyAction implements Action to use in those kinds of commands and events, as sketched below.
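For completeness, a sketch of that empty-action case (the concrete command and event names here are invented for illustration):

// No per-action properties are needed, so the Action implementation is empty.
class EmptyAction implements Action {
}

class PingCommand extends AbstractCommand<EmptyAction> { }
class PingEvent extends AbstractEvent<EmptyAction> { }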
I am developing an editor following the general MVC design: the model holds the data, the controller knows how to change it, and the view calls the controller to query and modify the model. For the sake of argument, let us use:
public interface Model {
    public Object getStuff(String id);
    public void setStuff(String id, Object value);
}
public interface Controller {
    public void execute(Command command);
    public void redo();
    public void undo();
    public void save();
    public void load(File f);
}
The class that actually implements the controller holds a reference to the model; commands need to access it too, so they all provide a void execute(Model m) method that grants them this access only when needed.
However, views generally need access to the model - when building themselves and, later on, to listen for changes and refresh themselves accordingly. I am afraid that adding a "Model getModel()" call to the Controller will result in a great temptation to bypass the execute() mechanism; and I am not the only developer working on the project. Given this scenario, how would you enforce an "all changes go through the controller" policy?
Two alternatives I am considering:
An interface called "ReadOnlyModel", returned by the getModel() call instead of the real model, that catches any such attempts (see the sketch after this list).
Lots of comments to clue incoming developers in on the Correct Way of Doing Things.
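A minimal sketch of the ReadOnlyModel alternative (the exact split is an assumption; the point is that views only ever see the read-only type):

// Read-only view of the model handed to views; no setters are exposed.
public interface ReadOnlyModel {
    Object getStuff(String id);
}

// The full model extends the read-only view; only the controller and the
// commands it executes ever see this type.
public interface Model extends ReadOnlyModel {
    void setStuff(String id, Object value);
}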
I recommend modeling access to the model as a set of command classes. For example, if the view needs to modify a customer's attributes, there would be a ModifyCustomerCommand class that has as properties all of the information needed to perform the update. The view would construct an instance of the class with values and pass it to the controller, which, in turn, would pass the command on to the model for the actual update.
A benefit from this approach is that each of these model access commands can implement undo behavior. If the controller keeps an ordered collection of these commands as they get sent back to the model, the controller can back off the changes, one at a time, by invoking the undo method on the most recently executed command.
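A hedged sketch of such a command (ModifyCustomerCommand comes from this answer; the fields and the undo bookkeeping are assumptions):

// Carries everything needed to apply one change to the model and to revert it later.
public class ModifyCustomerCommand implements Command {
    private final String customerId;
    private final String newName;
    private Object previousValue; // captured on execute so undo can restore it

    public ModifyCustomerCommand(String customerId, String newName) {
        this.customerId = customerId;
        this.newName = newName;
    }

    public void execute(Model model) {
        previousValue = model.getStuff(customerId);
        model.setStuff(customerId, newName);
    }

    public void undo(Model model) {
        model.setStuff(customerId, previousValue);
    }
}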
What if you take a look at the Observer pattern (http://en.wikipedia.org/wiki/Observer_pattern)? Your view would then only listen for events from the model.
Hope it helps.
My current application has a JFrame with about 15 actions stored as fields within the JFrame. Each of the actions is an anonymous class and some of them are pretty long.
Is it common to break actions into their own classes possibly within a sub-package called actions?
If not, how's this complexity usually tamed?
Thanks
If it is possible that your actions could be reusable (e.g., from keyboard shortcuts, other menus, other dialogs, etc.) and especially if they can work directly on the underlying model (rather than on the UI), then it is generally better not to have them as anonymous classes.
Rather, create a separate package, and create classes for each.
Often, it also makes sense not to instantiate these directly but rather to have some sort of manager that defines constants, initializes the actions, and returns sets of them, so that you could, for example, offer different action sets in different versions or enable certain actions only for internal releases (a sketch follows at the end of this answer).
Finally, check whether your actions can be refactored into a class hierarchy. They often can, which saves code replication, and also helps you add robustness (e.g., check for certain conditions before letting the action execute).
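Building on the manager idea above, here is a hedged sketch of what such an action manager might look like (the class name, keys, and methods are illustrative assumptions):

import java.util.LinkedHashMap;
import java.util.Map;
import javax.swing.Action;

// Single place that owns all actions: one lookup point, one place to
// enable/disable them together.
public class ActionManager {
    public static final String OPEN = "open";
    public static final String SAVE = "save";

    private final Map<String, Action> actions = new LinkedHashMap<>();

    public void register(String key, Action action) {
        actions.put(key, action);
    }

    public Action get(String key) {
        return actions.get(key);
    }

    public void setAllEnabled(boolean enabled) {
        for (Action action : actions.values()) {
            action.setEnabled(enabled);
        }
    }
}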
That's typically how I do it. Each action gets its own class, which has a reference to the "app" object so it can get to the resources it needs. I usually have an action manager that holds all the actions, so there's one place to access them as well as one place to update their enablement and such.
Eventually this also becomes unmanageable at which point you should start thinking about using an app framework like Eclipse RCP, the NetBeans framework, JIDE, etc. This is especially true if you want to support user-defined keymaps and stuff like that.
What I do is create a package (a package tree, actually) for action classes, then instantiate each class according to context. Almost all of my action classes are abstract, with abstract methods to get the context (à la Spring).
import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;

public abstract class CalcAndShowAction extends AbstractAction {
    //initialization code - set up icons, label, key shortcuts, but not context.

    public void actionPerformed(ActionEvent e) {
        //abstract method since it needs ui context
        String data = getDataToCalc();
        //the actual action - implemented in this class,
        // along with any user interaction inherent to this action
        String result = calc(data);
        //abstract method since it needs ui context
        putResultInUI(result);
    }

    //abstract methods, static helpers, etc...
    abstract String getDataToCalc();
    abstract void putResultInUI(String result);

    //the shared calculation itself (details elided)
    String calc(String data) {
        return data; // placeholder
    }
}
//actual usage
//...
button.setAction(new CalcAndShowAction() {
    String getDataToCalc() {
        return textField.getText();
    }

    void putResultInUI(String result) {
        textField.setText(result);
    }
});
//...
(sorry for any mistakes, I've written it by hand in this text box, not in an IDE).