Going through the Command design pattern, I understand that we create a Command by setting the context via the constructor and then calling the execute method to perform some action on that context. Example:
public class Command implements ICommand {
    private Device device;

    public Command(Device device) {
        this.device = device;
    }

    public void execute() {
        this.device.turnOn();
    }
}
I was wondering: with this approach, we are required to create a new Command object for every device object we create. Would it be OK to pass the context and some parameters to the execute method instead? I am looking for something like:
public class Command implements ICommand {
    public void execute(Device device) {
        device.turnOn();
    }
}
Are there any issues with this approach?
The idea behind the Command pattern is that it should encapsulate all the information needed to perform an action. This lets you do things like delay the execution of the action until a later time, or even undo the action after it has been executed.
For a concrete example, consider the "Undo" feature in a word processor.
1. Each time you type in the document, the application uses the Command pattern to record the action.
2. If you hit "Undo", the text you typed disappears.
3. Then, when you hit "Redo", the typed text reappears, without the application needing to ask for the input again. It does this by replaying the command that it stored in step 1, which contains all the information about the text you typed.
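As a minimal sketch of that undo/redo mechanism (all class and method names here are hypothetical, not from the question):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical document type the commands act on.
class Document {
    private final StringBuilder text = new StringBuilder();
    void append(String s) { text.append(s); }
    void deleteLast(int n) { text.delete(text.length() - n, text.length()); }
    String getText() { return text.toString(); }
}

interface UndoableCommand {
    void execute();
    void undo();
}

// Captures everything needed to replay or reverse the action:
// the target document and the text that was typed.
class TypeTextCommand implements UndoableCommand {
    private final Document doc;
    private final String typed;
    TypeTextCommand(Document doc, String typed) {
        this.doc = doc;
        this.typed = typed;
    }
    public void execute() { doc.append(typed); }
    public void undo() { doc.deleteLast(typed.length()); }
}

class Editor {
    private final Deque<UndoableCommand> undoStack = new ArrayDeque<>();
    private final Deque<UndoableCommand> redoStack = new ArrayDeque<>();
    void perform(UndoableCommand c) { c.execute(); undoStack.push(c); redoStack.clear(); }
    void undo() { UndoableCommand c = undoStack.pop(); c.undo(); redoStack.push(c); }
    void redo() { UndoableCommand c = redoStack.pop(); c.execute(); undoStack.push(c); }
}
```

Because the command carries its own parameters, the editor can replay it on redo without asking for the input again.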
If your command object requires additional parameters in order to perform the action, it doesn't really implement the Command pattern. It loses most of the advantages of the Command pattern, because the caller can't execute the action without additional information. In your example, the caller would need to know which device is being turned on.
However, that doesn't mean that you have to stick rigidly to the pattern. If it's more useful in your case for the execute method to accept a Device parameter, that's what you should do! If you do that, you should consider renaming the interface, though. Referring to it as a Command pattern when it doesn't follow the pattern exactly could confuse other readers of the code.
When deciding whether to take an object in as a method parameter or a constructor parameter, one of the things I find most helpful is to consider how I'm going to test the application. Objects that form part of the initial setup of the test get passed in as constructor parameters, while objects that form the inputs of the test, or the test vector, are method parameters. I find that following that guideline helps to produce maintainable code.
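To illustrate that guideline with a small, hypothetical example: the collaborator below is part of the setup, so it arrives via the constructor, while the value that varies per test case arrives as a method parameter.

```java
// Hypothetical classes for illustration only.
class TemperatureSensor {
    private final int reading;
    TemperatureSensor(int reading) { this.reading = reading; }
    int read() { return reading; }
}

class Thermostat {
    // Setup: a fixed collaborator, injected through the constructor.
    private final TemperatureSensor sensor;
    Thermostat(TemperatureSensor sensor) { this.sensor = sensor; }

    // Input: varies with each call, so it is a method parameter.
    boolean shouldHeat(int targetTemperature) {
        return sensor.read() < targetTemperature;
    }
}
```

A test can then construct the object once with its setup and probe it with different inputs.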
Related
What would be the correct way to verify that one behaviour is triggered when an argument is passed, and another behaviour is triggered when no arguments are passed, when running a Java app from the command line?
Since the main method is static, it's a little tricky to verify, but I also feel that introducing PowerMock is a bit over the top just for that.
Basically, I want to create an object using a no-argument constructor if there are no command-line arguments, and using a String-argument constructor if arguments are passed to the app.
I do not see your code, so I can only imagine what it looks like.
I imagine that within the main method some logic is triggered, which results in one or another event.
I suggest moving the processing of the arguments into another class (ArgumentProcessor), which can be fed a builder or factory object in its constructor and could have a process(String[] args) method that returns a Runnable, or whatever you want to achieve.
If you then feed the ArgumentProcessor a stubbed builder/factory, it should be possible to check whether the logic was processed in the right way.
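A sketch of that suggestion (the factory interface and its method names are hypothetical):

```java
// Hypothetical factory the processor is fed through its constructor.
interface WidgetFactory {
    Runnable create();            // used when there are no arguments
    Runnable create(String arg);  // used when an argument is passed
}

// Testable in isolation from main(): main() only has to call
// new ArgumentProcessor(realFactory).process(args).run();
class ArgumentProcessor {
    private final WidgetFactory factory;
    ArgumentProcessor(WidgetFactory factory) { this.factory = factory; }

    Runnable process(String[] args) {
        return (args.length == 0) ? factory.create() : factory.create(args[0]);
    }
}
```

With a stubbed factory, a test can assert which constructor path was taken without touching the static main method at all.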
I want to run MapReduceIndexerTool from Java.
Right now I do it from the command line using hadoop jar, as you can see here, but I want to check its status (to see if it's finished, in progress, etc.) from Java code.
So basically I want to run it from Java in order to be able to check its status from Java. Or is there a way to run it from the command line and check its status from Java?
Also, is there a way to make MapReduce send an event (on a callback, for example) when a job is completed? Something like a webhook?
As far as I know, the Tool interface exposes only an int run(String[] args) method, so in general you would create a new instance, form a proper argument array, and call that method.
On the other hand, MapReduceIndexerTool has an int run(Options options) method that could be used to run it without forming shell-style arguments. However, this method is protected, so the calling class would need to be created in the same package as MapReduceIndexerTool.
I have a ReloadableWeapon class like this:
public class ReloadableWeapon {
    private int numberofbullets;

    public ReloadableWeapon(int numberofbullets) {
        this.numberofbullets = numberofbullets;
    }

    public void attack() {
        numberofbullets--;
    }

    public void reload(int reloadBullets) {
        this.numberofbullets += reloadBullets;
    }
}
with the following interface:
public interface Command {
    void execute();
}
and use it like so:
public class ReloadWeaponCommand implements Command {
    private int reloadBullets;
    private ReloadableWeapon weapon;

    // Is it okay to specify the number of bullets?
    public ReloadWeaponCommand(ReloadableWeapon weapon, int bullets) {
        this.weapon = weapon;
        this.reloadBullets = bullets;
    }

    @Override
    public void execute() {
        weapon.reload(reloadBullets);
    }
}
Client:
ReloadableWeapon chargeGun = new ReloadableWeapon(10);
Command reload = new ReloadWeaponCommand(chargeGun,10);
ReloadWeaponController controlReload = new ReloadWeaponController(reload);
controlReload.executeCommand();
I was wondering: in the examples of the command pattern I've seen, there are no parameters other than the object the command acts on.
This example alters the execute method to allow for a parameter.
Another example, closer to what I have here, puts parameters in the constructor.
Is it bad practice/code smell to include parameters in the command pattern, in this case the constructor with the number of bullets?
I don't think adding parameters to execute is bad design or violates the command pattern.
It depends entirely on how you want to use the command object: singleton or prototype scope.
If you use prototype scope, you can pass command parameters in the constructor; each command instance then has its own parameters.
If you use singleton scope (a shared/reused instance), you can pass command parameters to the execute method. The singleton command should be thread safe in this case. This solution is also a good fit for IoC/DI frameworks.
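A sketch of the two scopes side by side (the Light/dim classes are hypothetical, chosen only to keep the example small):

```java
// Hypothetical receiver.
class Light {
    private int brightness;
    void setBrightness(int b) { brightness = b; }
    int getBrightness() { return brightness; }
}

// Prototype scope: each instance carries its own parameters,
// fixed at construction time.
class DimCommand {
    private final Light light;
    private final int level;
    DimCommand(Light light, int level) {
        this.light = light;
        this.level = level;
    }
    void execute() { light.setBrightness(level); }
}

// Singleton scope: one shared, stateless instance; parameters are
// passed on every call, so it is trivially thread safe.
class SharedDimCommand {
    static final SharedDimCommand INSTANCE = new SharedDimCommand();
    void execute(Light light, int level) { light.setBrightness(level); }
}
```

The prototype form can be queued or undone later without further input; the singleton form trades that away for fewer allocations.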
The very purpose of this pattern is to let you define actions and execute them later, once or several times.
The code you provide is a good example of this pattern: you define the action "reload", which charges the gun with 10 rounds of ammunition.
Now, if you decide to modify this code to pass bullets as a parameter of execute, you completely lose the purpose of the pattern, because you will have to supply the amount of ammunition every time.
IMHO, you can keep your code as it is. You will have to define several ReloadWeaponCommand instances with different values of bullets. Then you may use another pattern (such as Strategy) to switch between the commands.
Consider a case where you have 95 bullets in hand at the start, and you have made 9 commands with 10 bullets each and 1 command with 5 bullets. You have submitted these commands to the invoker; now the invoker doesn't have to worry about how many bullets are left. It just executes the commands. On the other hand, if the invoker had to provide the number of bullets at run time, the supplied number of bullets might not actually be available.
My point is that the invoker must not have to worry about any extra information needed to execute the command. As the wiki says, "an object is used to encapsulate all information needed to perform an action or trigger an event at a later time".
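That scenario can be sketched with the question's own classes; the Invoker class and the getBullets accessor are hypothetical additions for illustration:

```java
import java.util.ArrayDeque;
import java.util.Queue;

class ReloadableWeapon {
    private int bullets;
    ReloadableWeapon(int bullets) { this.bullets = bullets; }
    void reload(int n) { bullets += n; }
    int getBullets() { return bullets; }
}

interface Command { void execute(); }

class ReloadWeaponCommand implements Command {
    private final ReloadableWeapon weapon;
    private final int amount;
    ReloadWeaponCommand(ReloadableWeapon weapon, int amount) {
        this.weapon = weapon;
        this.amount = amount;
    }
    public void execute() { weapon.reload(amount); }
}

// The invoker runs whatever it is handed; the bullet counts were
// decided when the commands were created, not at execution time.
class Invoker {
    private final Queue<Command> queue = new ArrayDeque<>();
    void submit(Command c) { queue.add(c); }
    void runAll() { while (!queue.isEmpty()) queue.poll().execute(); }
}
```

Nine 10-bullet commands plus one 5-bullet command reload exactly the 95 bullets in hand, and the invoker never needs to know that number.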
Using the Command Pattern with Parameters
Consider the related 'Extension Patterns' in order to hold to a Top-Down Control paradigm 'Inversion of Control'.
This pattern, the Command Pattern, is commonly used in concert with the Composite, Iterator, and Visitor Design Patterns.
Commands are first-class objects, so it is critical that the integrity of their encapsulation is protected. Also, inverting control from top-down to bottom-up violates a cardinal principle of object-oriented design, though I see people suggesting it all the time.
The Composite pattern will allow you to store Commands, in iterative data structures.
Before going any further, and while your code is still manageable, look at these Patterns.
There are some reasonable points made in this thread. @Loc has it closest, IMO. However, if you consider the patterns mentioned above, then regardless of the scope of your project (it appears that you intend to make a game, no small task), you will be able to remain in control of lower-level dependencies. As @Loc pointed out, with dependency injection, lower-level objects should be kept 'in the dark' when it comes to any specific implementation, in terms of the data consumed by them; this is (or should be) reserved for the top of the hierarchy. 'Program to interfaces, not implementations.'
It seems that you have a notion of this. Let me just point out where I see a likely mistake at this point. Actually a couple already: you are focused on grains of sand, i.e. "bullets". You are not at the point where trivialities like that serve any purpose, except as a cautionary sign that you are about to lose control of higher-level dependencies.
Whether you can see it yet or not, granular parts can and should be dealt with at higher levels. I will make a couple of suggestions. @Loc already mentioned the best practice, 'constructor injection', loosely qualified; better to look up the term 'dependency injection'.
Take the bullets, for example, since they have already appeared in your scope. The Composite pattern is designed to deal with many differing yet related first-class objects, e.g. commands. Between the Iterator and Visitor patterns, you are able to store all of your pre-instantiated commands, and future instantiations as well, in a dynamic data structure like a linked list or even a binary search tree. At this point, forget about the Strategy pattern: a few possible scenarios is one thing, but it makes no sense to be writing adaptive interfaces at the outset.
Another thing: I see no indication that you are spawning projectiles (bullets, I mean) from a class. However, even if it were just a matter of keeping track of weapon configurations and capacities (I'm only guessing that is the cause of the changes in projectile counts), use a stack structure or, depending on the actual scenario, a circular queue. If you are actually spawning projectiles from a factory, or decide to in the future, you are then ready to take advantage of object pooling, which, as it turns out, was motivated by this very consideration.
Not that anyone here has done this, but I find it particularly asinine for someone to suggest that it is OK to mishandle or disregard the motivation behind any established (especially GoF) design pattern. If you find yourself having to modify a GoF design pattern, you are probably using the wrong one. Just sayin'.
P.S. If you absolutely must, why not use a template solution instead, rather than alter an intentionally specific interface design?
I have a project that is using CQRS and Dependency Injection. The Query side of the system is fine.
For the command side of the system I have chosen to use a queue:
BlockingQueue<Command> commandQueue;
This stores the commands as they are received along with their arguments from multiple threads. The commands all implement a common interface with an execute method:
public interface Command extends Serializable {
    void execute();
}
The arguments for the Commands are stored as data in the concrete implementations of the Command interface. The types and potentially number of arguments will vary depending on which command it represents, using this structure means that this detail is all encapsulated away from the command queue logic.
The idea is that the commands are later executed in sequence by a worker thread which calls execute() on each Command in turn without caring about which command it is under the hood.
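A minimal sketch of that queue-and-worker arrangement, with a hypothetical RecordCommand standing in for a real command (its payload illustrates arguments stored as data inside the command):

```java
import java.io.Serializable;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

interface Command extends Serializable { void execute(); }

// Hypothetical command: its argument lives inside the object,
// so the queue logic never needs to know about it.
class RecordCommand implements Command {
    static final List<String> LOG = new CopyOnWriteArrayList<>();
    private final String payload;
    RecordCommand(String payload) { this.payload = payload; }
    public void execute() { LOG.add(payload); }
}

// Worker that drains the queue, executing commands in arrival order.
class CommandWorker implements Runnable {
    private final BlockingQueue<Command> queue;
    CommandWorker(BlockingQueue<Command> queue) { this.queue = queue; }
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                queue.take().execute();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
    }
}
```

The worker calls execute() polymorphically; which concrete command sits behind the interface is invisible to it.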
The commands only require injection once they are taken off the queue, ready for execution (this is mostly because I would like to be able to serialize commands, but also because the execution of commands needs different modules from the part of the application that receives and queues them).
My problem is this:
Because the commands need to wait until they are taken off the queue to get their dependencies, I end up passing a lightly wrapped Injector to their 'execute' method so they can create themselves an object graph. This feels more like the Service Locator pattern than Dependency Injection.
public interface Command extends Serializable {
    void execute(ExecutorLocator locator);
}
Is there something I'm missing or is it inevitable that DI has to look like a service locator at some point in the stack?
It's been a while since I laid my hands on Java code but good architecture and design is not bound to a language.
First: ServiceLocator is an anti-pattern.
Second: Tell, don't ask. If anything, build up the commands from the outside and don't let them ask a locator for their dependencies.
Third: I would create handlers that are registered for the commands and know how to handle the information encapsulated in them. That way you would not need to inject into or build up your commands at all. Set up your handlers and make sure your commands get there.
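A sketch of that third suggestion (the command, handler, and bus types are hypothetical): commands are plain serializable data, and handlers, which receive their dependencies at setup time, are registered per command type.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical command: pure data, no execute method, trivially serializable.
class ReserveSeatCommand {
    final String seatId;
    ReserveSeatCommand(String seatId) { this.seatId = seatId; }
}

interface Handler<C> {
    void handle(C command);
}

// Routes each command to the handler registered for its type.
class CommandBus {
    private final Map<Class<?>, Handler<?>> handlers = new HashMap<>();

    <C> void register(Class<C> type, Handler<C> handler) {
        handlers.put(type, handler);
    }

    @SuppressWarnings("unchecked")
    <C> void dispatch(C command) {
        ((Handler<C>) handlers.get(command.getClass())).handle(command);
    }
}
```

The handlers are wired up with their dependencies once, at composition-root time, so nothing downstream of the queue ever touches an injector or locator.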
Hello, I am trying to build a framework of IAction objects that execute in a sequence. Each IAction implementation executes its processAction() method as it was implemented to do. This method returns an IAction, which in most cases is itself, but in some cases may be a pointer to another IAction in the list. The IActionIterator interface is created to manage that movement in the list.
Here is the interface
public interface IAction {
    public IAction processAction();
}

public interface IActionIterator {
    public IAction getFirstAction();
    public IAction getNextAction( IAction action );
}
My framework will obtain a List and will loop through it, executing the processAction() of each IAction class. Here is what the loop will look like:
IActionIterator iter = ... // created somehow
IAction action = iter.getFirstAction();
while (action != null) {
    IAction newAction = action.processAction();
    if (action.equals(newAction)) {
        action = iter.getNextAction(action);
    } else {
        action = newAction;
    }
}
So each IAction has its implementation to execute, and some IActions have business logic that returns another IAction in the list instead of the next one.
I am anticipating some IAction classes whose results the next IAction in the list will need. For example, one IAction executes an SQL query whose results are pertinent to the next IAction in the list.
So my question is: how should I implement this passing of information from IAction to IAction in my framework?
Sounds like you are trying to represent a state-transition graph. Can you use an FSM rather than trying to roll your own?
Change the return signature of your getFirstAction() / getNextAction() to a simple holder object:
public interface IActionResponse {
    List getResultList();
    IAction getReturnAction();
}
I have never done this, but perhaps you could use the Serializable interface to pass generic information between actions? Then the actions would expect certain kinds of data and know what to do with them.
Hope this helps.
I would put it in the interface itself if only the result of the previous action is needed. Otherwise, if the results of earlier actions are also required, it is better to store the results in some external class instance that gets populated during the cycle; based on the action id you can then pull the results.
For the simpler case, where only the result of the previous action is needed:
public interface IActionIterator {
    public IAction getFirstAction();
    public Object getFirstActionResult(); // return type was missing; Object is a placeholder
    public IAction getNextAction( IAction action );
}
This is the kind of circumstance where the following sort of interface does really well:
public interface IAction {
    void invoke(IActionRequest req, IActionResponse res);
}
Each action gets its inputs -- which are open-ended, could be anything -- from the 'req' object. In turn, each action has the chance to communicate outputs -- again, open-ended -- using the 'res' object.
And FWIW, the framework you're describing sounds very similar to some existing frameworks. Check out the Cernunnos project for an example of one that's designed pretty close to what you're proposing.
You could consider simply passing the actions a context/state object (which is essentially a map). Have the action populate the context object with the elements that it needs to pass onto other actions along the chain. Subsequent actions can then use those elements and manipulate the context as necessary. You need to make sure that you sequence things correctly as this is like managing global state. You may need to have multiple interfaces that only expose a subset of the context to particular actions. Your controller can examine the context between action invocations as necessary.
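A sketch of that context-object approach (the two actions and the context keys are hypothetical): each action publishes results into a shared map for later actions to consume.

```java
import java.util.HashMap;
import java.util.Map;

interface Action {
    void process(Map<String, Object> context);
}

// Hypothetical action that produces a result for later actions.
class QueryAction implements Action {
    public void process(Map<String, Object> context) {
        // pretend this row count came from a database query
        context.put("rowCount", 42);
    }
}

// Hypothetical action that consumes the earlier result.
class ReportAction implements Action {
    public void process(Map<String, Object> context) {
        int rows = (Integer) context.get("rowCount");
        context.put("report", "processed " + rows + " rows");
    }
}
```

The controller owns the map, so it can inspect or narrow (via restricted interfaces) the state between invocations, as described above.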
I think what you need is the combination of two design patterns: Command and Chain of Responsibility.
In commands you can encapsulate parameters for actions and pass data between actions. You can record processing of parameters and even achieve an undo functionality if needed. In fact, each command object will be something like a context for the current processing of actions.
Actions can pass control to each other using chain of commands, where each action has link to the next action. After doing its business logic an action calls next action to process providing command object to it.
To make this extensible to different action classes, you can also use the Template Method pattern when implementing the chain of responsibility. Each action class will be a subclass of some super action that defines a template method where some pre-processing of the command takes place, like checking arguments. Then the subclass will be called, and after that the template method will pass control to the next action in the chain.
When using these patterns you will not need a list of actions and a general loop to process them. You just construct a chain of actions and then call the process method on the first action, providing a command object to it. When constructing the chain you also have the additional freedom to mix the order in which actions are processed; there is no dependency on the order of actions in a list.
So, basically you'll need the following classes and interfaces:
public interface IAction {
    public void process(Command command);
}

public abstract class SuperAction implements IAction {
    private IAction nextAction;

    public void process(Command command) {
        // Check command object
        processAction(command);
        if (nextAction != null)
            nextAction.process(command);
    }

    public abstract void processAction(Command command);

    public void setNextAction(IAction action) {
        nextAction = action;
    }
}

public class ActionConstructor {
    public IAction constructActionChain() {
        // construct the chain and return its first action
        return null; // placeholder
    }
}
And to complete this, you'll have to define Command object, either with execute method or not. If without execute method, then Command will just be a container of parameters that are passed between actions.
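A runnable sketch of the whole arrangement, taking the "Command as a plain parameter container" option; the two concrete actions and the trace field are hypothetical:

```java
// Command as a plain container of data passed between actions.
class Command {
    final StringBuilder trace = new StringBuilder();
}

// Template method: process() handles the chaining, subclasses
// supply only their business logic in processAction().
abstract class SuperAction {
    private SuperAction nextAction;

    public void process(Command command) {
        processAction(command);
        if (nextAction != null) nextAction.process(command);
    }

    public abstract void processAction(Command command);

    public void setNextAction(SuperAction action) { nextAction = action; }
}

class ValidateAction extends SuperAction {
    public void processAction(Command command) { command.trace.append("validate>"); }
}

class ExecuteAction extends SuperAction {
    public void processAction(Command command) { command.trace.append("execute"); }
}
```

The caller only touches the first action; the chain itself decides what runs next, which is exactly what removes the need for the external list-and-loop.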
Some general remarks: you are creating a finite state machine, which could be the beginning of a workflow engine. It's worth looking at other people's attempts:
http://sujitpal.blogspot.com/2008/03/more-workflow-events-and-asynchronous.html
http://micro-workflow.com/PDF/toolsee01.pdf
http://www.bigbross.com/bossa/overview.shtml
Workflow engines often operate on a standard type of workflow item. If this is the case for you, then you could pass a (collection of) workflow items (objects implementing your WorkflowItem interface) into the first action, and pass it on to each next action. This allows the engine to treat common workflow concerns by referring to the WorkflowItem interface. Specific manipulation would require detecting the subtype.
One topic I have not heard anybody mention here is a typical aspect of workflows: asynchronous processing. Often you want to be able to perform actions up until a certain point and then persist the workflow state. The workflow could then be triggered to continue by some event (e.g. a user logging in and making a decision). To do this you need a data layer which can persist the generic workflow state separately from the application-specific data.
I recently built a full-fledged workflow engine for a project I was working on. It is doable (especially if you don't try to make it endlessly generic, but match it to the problem at hand) but a lot of work. The reason we decided to do this was that the existing frameworks (jBPM et al.) seemed too inflexible and heavy-handed for our needs. And I had a lot of fun doing it!
Hope this helps.