Does this "call-inversion" pattern have a name? - java

I just answered another question (Select method based on field in class), and I was wondering if the pattern had a name.
You call action.applyX(a), where X depends on some property of a (e.g. type in the example below), so instead you call a.apply(action) and let a (or its Type) call the appropriate applyX.
Is there a name for that?
public enum Type {
    INTEGER {
        @Override
        public void apply(Action action, A a) {
            action.applyInteger(a);
        }
    },
    STRING {
        @Override
        public void apply(Action action, A a) {
            action.applyString(a);
        }
    };

    public abstract void apply(Action action, A a);
}
public interface Action {
    public void applyInteger(A a);
    public void applyString(A a);
}
public class A {
    private Type type;
    ...

    public void apply(Action action) {
        this.type.apply(action, this);
    }
}
Update
The above is just an example, and using type as the selector is not the important part.
The selection criteria for deciding which X method to call can be anything. In a dice game, X could be 'Odd' or 'Even' and class A could be 'Dice' with a 1-6 int value.
The example uses abstract enum methods as a way to avoid a switch statement (less error prone). The per-constant implementations are effectively the switching mechanism, i.e. the way the appropriate X is chosen.
Update 2
This question is about the pattern used to avoid switch statements for doing "action" logic outside of the class (A), not about changing the behavior of A (strategy/policy), where the "switch choices" are well defined, e.g. as a type enum (example above), or by well-known subclasses of A.
As an example, A could define a table column. The class should not be tightly coupled to implementation code, but there will be many different implementation methods ("Actions") that must process column types differently.
Actions might call the appropriate getXxx method on ResultSet, call the appropriate setXxx method on PreparedStatement, format the value for display, render it to XML or JSON, parse the value, ...
All these methods would either need a switch statement, or they could implement an interface with the "typed" methods, and ask the class "please call the right one for me".
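For example, a "read from a ResultSet" Action might look roughly like this (just a sketch; it assumes A has a getName() accessor for the column name, which isn't shown above):
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical Action that copies a column value out of a ResultSet;
// each A (column) asks it to call the method matching the column's own type.
public class ResultSetReadAction implements Action {

    private final ResultSet resultSet;
    private Object value;

    public ResultSetReadAction(ResultSet resultSet) {
        this.resultSet = resultSet;
    }

    @Override
    public void applyInteger(A column) {
        try {
            value = resultSet.getInt(column.getName());   // getName() is assumed, not shown in A above
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public void applyString(A column) {
        try {
            value = resultSet.getString(column.getName());
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }

    public Object getValue() {
        return value;
    }
}
It would be used as column.apply(new ResultSetReadAction(rs)) instead of switching on the column's type at every call site.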
This question is becoming pretty long. Sorry if I'm not stating the pattern clearly.

This resembles the Visitor pattern, because you are basically adding new operations to A without changing it (you are externalizing its operations to a separate class).
this.type.apply(action, this);
plays the role of:
visitor.visit(this);
If a new action is added (say applyBoolean), you would need to change the code for A if you used a switch statement. However, in your implementation you would just add a new visitor subclass (a new Type enum constant) which implements the code that would otherwise be placed in A.
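For comparison, a minimal sketch of the classic Visitor shape (shape names invented for illustration); A.apply(action) corresponds to accept(visitor), and action.applyX(a) corresponds to visitor.visitX(element):
interface Visitor {
    void visitCircle(Circle circle);
    void visitSquare(Square square);
}

abstract class Shape {
    // the "please call the right one for me" hook
    abstract void accept(Visitor visitor);
}

class Circle extends Shape {
    @Override
    void accept(Visitor visitor) {
        visitor.visitCircle(this);
    }
}

class Square extends Shape {
    @Override
    void accept(Visitor visitor) {
        visitor.visitSquare(this);
    }
}

// A new operation is a new Visitor implementation; Shape itself never changes.
class AreaPrinter implements Visitor {
    @Override
    public void visitCircle(Circle circle) {
        System.out.println("area of a circle");
    }

    @Override
    public void visitSquare(Square square) {
        System.out.println("area of a square");
    }
}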

This is the beginnings of a Strategy Pattern
It's a Behavioral Pattern, as opposed to the more common Structural Patterns.
In your example you're not taking full advantage of the pattern, since you use a single interface for all your strategies.
In computer programming, the strategy pattern (also known as the policy pattern) is a software design pattern that enables an algorithm's behavior to be selected at runtime.
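For reference, a minimal Strategy sketch (names invented) showing the runtime selection the quote describes; note that here the algorithm is injected into the context object, rather than the element dispatching back to the caller as in your example:
interface PricingStrategy {
    double price(double basePrice);
}

class RegularPricing implements PricingStrategy {
    @Override
    public double price(double basePrice) {
        return basePrice;
    }
}

class HappyHourPricing implements PricingStrategy {
    @Override
    public double price(double basePrice) {
        return basePrice * 0.5;
    }
}

class Checkout {
    private final PricingStrategy pricing;   // behavior chosen at runtime by whoever constructs Checkout

    Checkout(PricingStrategy pricing) {
        this.pricing = pricing;
    }

    double total(double basePrice) {
        return pricing.price(basePrice);
    }
}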

I think the name for this is just the name of the language feature used, polymorphism.

Looks like the type decomposition you would find in functional languages

I would say this is an example of using the strategy pattern. The example in the Wikipedia article (written in C#) also uses an enum.

Related

How do I avoid breaking the Liskov substitution principle with a class that implements multiple interfaces?

Given the following class:
class Example implements Interface1, Interface2 {
    ...
}
When I instantiate the class using Interface1:
Interface1 example = new Example();
...then I can call only the Interface1 methods, and not the Interface2 methods, unless I cast:
((Interface2) example).someInterface2Method();
Of course, to make this runtime safe, I should also wrap this with an instanceof check:
if (example instanceof Interface2) {
    ((Interface2) example).someInterface2Method();
}
I'm aware that I could have a wrapper interface that extends both interfaces, but then I could end up with multiple interfaces to cater for all the possible permutations of interfaces that can be implemented by the same class. The Interfaces in question do not naturally extend one another so inheritance also seems wrong.
Does the instanceof/cast approach break LSP as I am interrogating the runtime instance to determine its implementations?
Whichever implementation I use seems to have some side-effect either in bad design or usage.
I'm aware that I could have a wrapper interface that extends both interfaces, but then I could end up with multiple interfaces to cater for all the possible permutations of interfaces that can be implemented by the same class
I suspect that if you're finding that lots of your classes implement different combinations of interfaces then either: your concrete classes are doing too much; or (less likely) your interfaces are too small and too specialised, to the point of being useless individually.
If you have good reason for some code to require something that is both a Interface1 and a Interface2 then absolutely go ahead and make a combined version that extends both. If you struggle to think of an appropriate name for this (no, not FooAndBar) then that's an indicator that your design is wrong.
Absolutely do not rely on casting anything. It should only be used as a last resort and usually only for very specific problems (e.g. serialization).
My favourite and most-used design pattern is the decorator pattern. As such most of my classes will only ever implement one interface (except for more generic interfaces such as Comparable). I would say that if your classes are frequently/always implementing more than one interface then that's a code smell.
If you're instantiating the object and using it within the same scope then you should just be writing
Example example = new Example();
Just so it's clear (I'm not sure if this is what you were suggesting), under no circumstances should you ever be writing anything like this:
Interface1 example = new Example();
if (example instanceof Interface2) {
    ((Interface2) example).someInterface2Method();
}
Your class can implement multiple interfaces fine, and it is not breaking any OOP principles. On the contrary, it is following the interface segregation principle.
It is confusing why you would have a situation where something of type Interface1 is expected to provide someInterface2Method(). That is where your design is wrong.
Think about it in a slightly different way: Imagine you have another method, void method1(Interface1 interface1). It can't expect interface1 to also be an instance of Interface2. If it was the case, the type of the argument should have been different. The example you have shown is precisely this, having a variable of type Interface1 but expecting it to also be of type Interface2.
If you want to be able to call both methods, you should have the type of your variable example set to Example. That way you avoid the instanceof and type casting altogether.
If your two interfaces Interface1 and Interface2 are not that loosely coupled, and you will often need to call methods from both, maybe separating the interfaces wasn't such a good idea, or maybe you want to have another interface which extends both.
In general (although not always), instanceof checks and type casts often indicate some OO design flaw. Sometimes the design would fit for the rest of the program, but you would have a small case where it is simpler to type cast rather than refactor everything. But if possible you should always strive to avoid it at first, as part of your design.
You have two different options (I bet there are a lot more).
The first is to create your own interface which extends the other two:
interface Interface3 extends Interface1, Interface2 {}
And then use that throughout your code:
public void doSomething(Interface3 interface3) {
    ...
}
The other way (and in my opinion the better one) is to use generics per method:
public <T extends Interface1 & Interface2> void doSomething(T t) {
    ...
}
The latter option is in fact less restrictive than the former, because the generic type T is inferred at each call site and thus leads to less coupling (a class doesn't have to implement a specific grouping interface, as in the first example).
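A usage sketch of the second option (the interfaces are restated here with invented method names, and the bodies are placeholders):
interface Interface1 {
    void interface1Method();
}

interface Interface2 {
    void someInterface2Method();
}

class Example implements Interface1, Interface2 {
    @Override
    public void interface1Method() { /* ... */ }

    @Override
    public void someInterface2Method() { /* ... */ }
}

class Client {
    // accepts anything that is both an Interface1 and an Interface2, with no grouping interface
    static <T extends Interface1 & Interface2> void doSomething(T t) {
        t.interface1Method();
        t.someInterface2Method();
    }

    public static void main(String[] args) {
        doSomething(new Example());   // T is inferred as Example; no cast, no instanceof
    }
}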
The core issue
Slightly tweaking your example so I can address the core issue:
public void DoTheThing(Interface1 example)
{
    if (example instanceof Interface2)
    {
        ((Interface2) example).someInterface2Method();
    }
}
So you defined the method DoTheThing(Interface1 example). This is basically saying "to do the thing, I need an Interface1 object".
But then, in your method body, it appears that you actually need an Interface2 object. Then why didn't you ask for one in your method parameters? Quite obviously, you should've been asking for an Interface2.
What you're doing here is assuming that whatever Interface1 object you get will also be an Interface2 object. This is not something you can rely on. You might have some classes which implement both interfaces, but you might as well have some classes which only implement one and not the other.
There is no inherent requirement whereby Interface1 and Interface2 need to both be implemented on the same object. You can't know (nor rely on the assumption) that this is the case.
Unless you define the inherent requirement and apply it.
interface InterfaceBoth extends Interface1, Interface2 {}

public void DoTheThing(InterfaceBoth example)
{
    example.someInterface2Method();
}
In this case, you've required an InterfaceBoth object to implement both Interface1 and Interface2. So whenever you ask for an InterfaceBoth object, you can be sure to get an object which implements both Interface1 and Interface2, and thus you can use methods from either interface without even needing to cast or check the type.
You (and the compiler) know that this method will always be available, and there's no chance of this not working.
Note: You could've used Example instead of creating the InterfaceBoth interface, but then you would only be able to use objects of type Example and not any other class which would implement both interfaces. I assume you're interested in handling any class which implements both interfaces, not just Example.
Deconstructing the issue further.
Look at this code:
ICarrot myObject = new Superman();
If you assume this code compiles, what can you tell me about the Superman class? That it clearly implements the ICarrot interface. That is all you can tell me. You have no idea whether Superman implements the IShovel interface or not.
So if I try to do this:
myObject.SomeMethodThatIsFromSupermanButNotFromICarrot();
or this:
myObject.SomeMethodThatIsFromIShovelButNotFromICarrot();
Would you be surprised if I told you this code compiles? You should be, because this code doesn't compile.
You may say "but I know that it's a Superman object which has this method!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman variable.
You may say "but I know that it's a Superman object which implements the IShovel interface!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman or IShovel variable.
Knowing this, let's look back at your code.
Interface1 example = new Example();
All you've said is that you have an Interface1 variable.
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
It makes no sense for you to assume that this Interface1 object also happens to implement a second, unrelated interface. Even if this code works on a technical level, it is a sign of bad design: the developer is expecting some inherent correlation between two interfaces without actually having created this correlation.
You may say "but I know I'm putting an Example object in, the compiler should know that too!" but you'd be missing the point that if this were a method parameter, you would have no way of knowing what the callers of your method are sending.
public void DoTheThing(Interface1 example)
{
    if (example instanceof Interface2)
    {
        ((Interface2) example).someInterface2Method();
    }
}
When other callers call this method, the compiler is only going to stop them if the passed object does not implement Interface1. The compiler is not going to stop someone from passing an object of a class which implements Interface1 but does not implement Interface2.
Your example does not break LSP, but it seems to break SRP. If you encounter such a case where you need to cast an object to its second interface, the method that contains such code can be considered to be doing too much.
Implementing 2 (or more) interfaces in a class is fine. Deciding which interface to use as its data type depends entirely on the context of the code that will use it.
Casting is fine, especially when changing context.
class Payment implements Expirable, Limited {
    /* ... */
}

class PaymentProcessor {
    // Using Payment here because I'm working with payments.
    public void process(Payment payment) {
        boolean expired = expirationChecker.check(payment);
        boolean pastLimit = limitChecker.check(payment);
        if (!expired && !pastLimit) {
            acceptPayment(payment);
        }
    }
}

class ExpirationChecker {
    // This is the `Expirable` world, so I'm using Expirable here
    public boolean check(Expirable expirable) {
        // code
    }
}

class LimitChecker {
    // This class is about checking limits, that's why I'm using `Limited` here
    public boolean check(Limited limited) {
        // code
    }
}
Usually, having many client-specific interfaces is fine, and is somewhat part of the Interface Segregation Principle (the "I" in SOLID). Some more specific points, on a technical level, have already been mentioned in other answers.
Particularly that you can go too far with this segregation, by having a class like
class Person implements FirstNameProvider, LastNameProvider, AgeProvider ... {
    @Override String getFirstName() {...}
    @Override String getLastName() {...}
    @Override int getAge() {...}
    ...
}
Or, conversely, that you have an implementing class that is too powerful, as in
class Application implements DatabaseReader, DataProcessor, UserInteraction, Visualizer {
    ...
}
I think that the main point in the Interface Segregation Principle is that the interfaces should be client-specific. They should basically "summarize" the functions that are required by a certain client, for a certain task.
To put it that way: The issue is to strike the right balance between the extremes that I sketched above. When I'm trying to figure out interfaces and their relationships (mutually, and in terms of the classes that implement them), I always try to take a step back and ask myself, in an intentionally naïve way: Who is going to receive what, and what is he going to do with it?
Regarding your example: When all your clients always need the functionality of Interface1 and Interface2 at the same time, then you should consider either defining an
interface Combined extends Interface1, Interface2 { }
or not have different interfaces in the first place. On the other hand, when the functionalities are completely distinct and unrelated and never used together, then you should wonder why the single class is implementing them at the same time.
At this point, one could refer to another principle, namely Composition over inheritance. Although it is not classically related to implementing multiple interfaces, composition can also be favorable in this case. For example, you could change your class to not implement the interfaces directly, but only provide instances that implement them:
class Example {
    Interface1 getInterface1() { ... }
    Interface2 getInterface2() { ... }
}
It looks a bit odd in this Example (sic!), but depending on the complexity of the implementation of Interface1 and Interface2, it can really make sense to keep them separated.
Edited in response to the comment:
The intention here is not to pass the concrete class Example to methods that need both interfaces. A case where this could make sense is rather when a class combines the functionalities of both interfaces, but does not do so by directly implementing them at the same time. It's hard to make up an example that does not look too contrived, but something like this might bring the idea across:
interface DatabaseReader { String read(); }
interface DatabaseWriter { void write(String s); }

class Database {
    DatabaseConnection connection = create();
    DatabaseReader reader = createReader(connection);
    DatabaseWriter writer = createWriter(connection);

    DatabaseReader getReader() { return reader; }
    DatabaseWriter getWriter() { return writer; }
}
The client will still rely on the interfaces. Methods like
void create(DatabaseWriter writer) { ... }
void read (DatabaseReader reader) { ... }
void update(DatabaseReader reader, DatabaseWriter writer) { ... }
could then be called with
create(database.getWriter());
read (database.getReader());
update(database.getReader(), database.getWriter());
respectively.
With the help of various posts and comments on this page, a solution has been produced, which I feel is correct for my scenario.
The following shows the iterative changes to the solution to meet SOLID principles.
Requirement
To produce the response for a web service, key + object pairs are added to a response object. There are lots of different key + object pairs that need to be added, each of which may have unique processing required to transform the data from the source to the format required in the response.
From this it is clear that whilst the different key / value pairs may have different processing requirements to transform the source data to the target response object, they all have a common goal of adding an object to the response object.
Therefore, the following interface was produced in solution iteration 1:
Solution Iteration 1
public interface ResponseObjectProvider<T, S> {
    void addObject(T targetObject, S sourceObject, String targetKey);
}
Any developer that needs to add an object to the response can now do so using an existing implementation that matches their requirement, or add a new implementation for a new scenario.
This is great, as we have a common interface which acts as a contract for this common practice of adding response objects.
However, one scenario requires that the target object should be taken from the source object given a particular key, "identifier".
There are options here; the first is to add an implementation of the existing interface as follows:
public class GetIdentifierResponseObjectProvider<T extends Map, S extends Map> implements ResponseObjectProvider<T, S> {

    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get("identifier"));
    }
}
This works; however, this scenario could be required for other source object keys ("startDate", "endDate", etc.), so this implementation should be made more generic to allow for reuse in this scenario.
Additionally, other implementations may require more context information to perform the addObject operation... So a new generic type should be added to cater for this
Solution Iteration 2
public interface ResponseObjectProvider<T, S, U> {
    void addObject(T targetObject, S sourceObject, String targetKey);
    void setParams(U params);
    U getParams();
}
This interface caters for both usage scenarios: the implementations that require additional params to perform the addObject operation and the implementations that do not.
However, considering the latter of the usage scenarios, the implementations that do not require additional parameters will break the SOLID Interface Segregation Principle, as these implementations will override the getParams and setParams methods but not implement them, e.g.:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S, U> {

    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(getParams()));
    }

    public void setParams(U params) {
        // unimplemented method
    }

    public U getParams() {
        return null; // unimplemented method
    }
}
Solution Iteration 3
To fix the Interface Segregation issue, the getParams and setParams interface methods were moved into a new Interface:
public interface ParametersProvider<T> {
    void setParams(T params);
    T getParams();
}
The implementations that require parameters can now implement the ParametersProvider interface:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S>, ParametersProvider<U> {

    private U params;

    public void setParams(U params) {
        this.params = params;
    }

    public U getParams() {
        return this.params;
    }

    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(params));
    }
}
This solves the Interface Segregation issue but causes two more issues... If the calling client wants to program to an interface, i.e:
ResponseObjectProvider responseObjectProvider = new GetObjectBySourceKeyResponseObjectProvider<>();
Then the addObject method will be available to the instance, but NOT the getParams and setParams methods of the ParametersProvider interface... To call these a cast is required, and to be safe an instanceof check should also be performed:
if (responseObjectProvider instanceof ParametersProvider) {
    ((ParametersProvider) responseObjectProvider).setParams("identifier");
}
Not only is this undesirable it also breaks the Liskov Substitution Principle - "if S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program"
i.e. if we replaced an implementation of ResponseObjectProvider that also implements ParametersProvider with an implementation that does not implement ParametersProvider, then this could alter some of the desirable properties of the program... Additionally, the client needs to be aware of which implementation is in use in order to call the correct methods.
An additional problem is the usage for calling clients. If the calling client wanted to use an instance that implements both interfaces to perform addObject multiple times, the setParams method would need to be called before addObject... This could cause avoidable bugs if care is not taken when calling.
Solution Iteration 4 - Final Solution
The interfaces produced from Solution Iteration 3 solve all of the currently known usage requirements, with some flexibility provided by generics for implementation using different types. However, this solution breaks the Liskov Substitution Principle and has a non-obvious usage of setParams for the calling client
The solution is to have two separate interfaces, ParameterisedResponseObjectProvider and ResponseObjectProvider.
This allows the client to program to an interface, and to select the appropriate interface depending on whether the objects being added to the response require additional parameters or not.
The new interface was first implemented as an extension of ResponseObjectProvider:
public interface ParameterisedResponseObjectProvider<T, S, U> extends ResponseObjectProvider<T, S> {
    void setParams(U params);
    U getParams();
}
However, this still had the usage issue, where the calling client would first need to call setParams before calling addObject, which also made the code less readable.
So the final solution has two separate interfaces defined as follows:
public interface ResponseObjectProvider<T, S> {
    void addObject(T targetObject, S sourceObject, String targetKey);
}

public interface ParameterisedResponseObjectProvider<T, S, U> {
    void addObject(T targetObject, S sourceObject, String targetKey, U params);
}
This solution solves the breaches of Interface Segregation and Liskov Substitution principles and also improves the usage for calling clients and improves the readability of the code.
It does mean that the client needs to be aware of the different interfaces, but since the contracts are different this seems to be a justified decision especially when considering all the issues that the solution has avoided.
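A usage sketch of the two final interfaces (types simplified to maps, and lambdas used only to keep the sketch short; real implementations would be named classes as in the iterations above):
import java.util.HashMap;
import java.util.Map;

class ResponseBuilderDemo {
    public static void main(String[] args) {
        Map<String, Object> source = new HashMap<>();
        source.put("identifier", "abc-123");

        Map<String, Object> response = new HashMap<>();

        // No extra parameters needed: the client programs to ResponseObjectProvider.
        ResponseObjectProvider<Map<String, Object>, Map<String, Object>> simple =
                (target, src, targetKey) -> target.put(targetKey, src);
        simple.addObject(response, source, "payload");

        // Extra parameter needed (the source key): the client programs to
        // ParameterisedResponseObjectProvider, and the parameter travels with the call.
        ParameterisedResponseObjectProvider<Map<String, Object>, Map<String, Object>, String> byKey =
                (target, src, targetKey, params) -> target.put(targetKey, src.get(params));
        byKey.addObject(response, source, "id", "identifier");
    }
}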
The problem you describe often comes about through over-zealous application of the Interface Segregation Principle, encouraged by languages' inability to specify that members of one interface should, by default, be chained to static methods which could implement sensible behaviors.
Consider, for example, a basic sequence/enumeration interface and the following behaviors:
Produce an enumerator which can read out the objects if no other iterator has yet been created.
Produce an enumerator which can read out the objects even if another iterator has already been created and used.
Report how many items are in the sequence
Report the value of the Nth item in the sequence
Copy a range of items from the object into an array of that type.
Yield a reference to an immutable object that can accommodate the above operations efficiently with contents that are guaranteed never to change.
I would suggest that such abilities should be part of the basic sequence/enumeration interface, along with a method/property to indicate which of the above operations are meaningfully supported. Some kinds of single-shot on-demand enumerators (e.g. an infinite truly-random sequence generator) might not be able to support any of those functions, but segregating such functions into separate interfaces will make it much harder to produce efficient wrappers for many kinds of operations.
One could produce a wrapper class that would accommodate all of the above operations, though not necessarily efficiently, on any finite sequence which supports the first ability. If, however, the class is being used to wrap an object that already supports some of those abilities (e.g. access the Nth item), having the wrapper use the underlying behaviors could be much more efficient than having it do everything via the second function above (e.g. creating a new enumerator, and using that to iteratively read and ignore items from the sequence until the desired one is reached).
Having all objects that produce any kind of sequence support an interface that includes all of the above, along with an indication of what abilities are supported, would be cleaner than trying to have different interfaces for different subsets of abilities, and requiring that wrapper classes make explicit provision for any combinations they want to expose to their clients.
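A rough Java sketch of what this answer describes (all names invented): one broad sequence interface whose members have sensible defaults, plus a way to report which operations are meaningfully supported:
import java.util.Iterator;
import java.util.Set;

interface Sequence<T> {

    enum Capability { RESTARTABLE_ITERATION, KNOWN_COUNT, RANDOM_ACCESS, RANGE_COPY, IMMUTABLE_SNAPSHOT }

    // which of the operations below are meaningfully supported
    Set<Capability> capabilities();

    // always available, possibly as a single-shot enumerator
    Iterator<T> iterator();

    // the rest default to "not supported"; implementations override what they can do efficiently
    default int count() {
        throw new UnsupportedOperationException();
    }

    default T itemAt(int index) {
        throw new UnsupportedOperationException();
    }

    default void copyRange(int from, T[] target, int targetOffset, int length) {
        throw new UnsupportedOperationException();
    }

    default Sequence<T> immutableSnapshot() {
        throw new UnsupportedOperationException();
    }
}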

Decorator Pattern - Adding functionality not defined in interface

Are there any hard and fast rules on the use of this pattern or is it solely intended as a way to achieve additional functionality within method calls without using inheritance?
I have amended the example below that I took from a SO post to demonstrate what I am considering.
public interface Coffee {
    public double getCost();
    public String getIngredients();
}

public class SimpleCoffee implements Coffee {

    @Override
    public double getCost() {
        return 1;
    }

    @Override
    public String getIngredients() {
        return "Coffee";
    }
}
public class CoffeeDecorator implements Coffee {

    protected final Coffee decoratedCoffee;

    public CoffeeDecorator(Coffee c) {
        this.decoratedCoffee = c;
    }

    @Override
    public double getCost() {
        // you can add extra functionality here.
        return decoratedCoffee.getCost();
    }

    @Override
    public String getIngredients() {
        // you can add extra functionality here.
        return decoratedCoffee.getIngredients();
    }

    public boolean methodNotDefinedInInterface() {
        // do something else
        return true;
    }
}
So with the example above in mind, is it viable to:
a) use the simple Coffee whenever you see fit without decorating it
b) Add additional functionality that is not defined in the Coffee interface to decorator objects such as the methodNotDefinedInInterface()
Could someone also explain where the composition comes into this pattern, as the SimpleCoffee is something that can exist in its own right, but it seems to be the decorator that actually 'owns' any object.
Although without the SimpleCoffee class (or some concrete implementation of Coffee) the decorator doesn't have any purpose, so aggregation doesn't seem to be what is occurring here.
The description of the pattern includes intent which makes it pretty clear what the pattern is for:
The decorator pattern can be used to extend (decorate) the functionality of a certain object statically, or in some cases at run-time, independently of other instances of the same class, provided some groundwork is done at design time.
As for "hard and fast rules" - I generally don't think that there are "hard and fast rules" in patterns at all. Like, if you don't implement it exactly as GoF described, there will be no "pattern police" punishing you. The only point is that if you follow the classic guidelines, other developers will have less problems recognizing patterns in your code.
Your example is quite OK from my point of view.
SimpleCoffee is not a decorator, so no composition there. CoffeeDecorator has decoratedCoffee as a component (here you have your composition)
a) use the simple Coffee whenever you see fit without decorating it
Yes, of course.
b) Add additional functionality that is not defined in the Coffee interface to decorator objects such as the methodNotDefinedInInterface()
You can add more methods, just like adding new methods to the SimpleCoffee class, but note that you would need to use those additional methods somewhere in the decorator class.
Personally, I find this pattern useful when someone gives you an instance of Coffee (i.e. you didn't instantiate it). If you need to change its behavior at runtime, the only way is to wrap it inside another object of Coffee type. This is when you can throw it into the decorator class. The decorator can expose some of the original behavior while providing some new behaviors.
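As a usage sketch of that last point, a hypothetical MilkDecorator built on the CoffeeDecorator from the question:
public class MilkDecorator extends CoffeeDecorator {

    public MilkDecorator(Coffee c) {
        super(c);
    }

    @Override
    public double getCost() {
        return decoratedCoffee.getCost() + 0.5;   // new behavior layered on the wrapped instance
    }

    @Override
    public String getIngredients() {
        return decoratedCoffee.getIngredients() + ", Milk";
    }
}

// Elsewhere, given a Coffee you did not instantiate yourself:
//     Coffee coffee = obtainCoffeeFromSomewhere();   // hypothetical
//     coffee = new MilkDecorator(coffee);            // behavior changed without touching the original class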

Factory Pattern or Extended Method?

I am studying Design Patterns and I have a situation where I am not sure what would be the better practice:
I have a class "Category" which has several fields: a name, 2 kinds of URLs, and a list of related objects. There is a method toHtml() which basically generates some HTML from instances of that class.
There are 4 different types of 'Categories' which have exactly the same fields, but the toHtml() method should give a different result for each one.
I am not sure if I should pass a parameter "type" and use a series of ifs / a switch statement to generate the different HTML, or should I make the Category class abstract and create several subclasses that override the toHtml() method, and then use a CategoryFactory class to create them? In both cases I need to pass a 'type' parameter.
I tried to think about the 'closed for modification, open for extension' OOP rule. But in this case, if I want to add a fifth category type that generates different HTML: for the first solution I need to modify only the toHtml method (adding one more if), while for the second solution I need to create an additional subclass AND modify the CategoryFactory class.
What would be better practice? Is there any extra rule I should follow when I have similar kind of dilemma?
First, I believe you are referring to the Factory Method, and not the Abstract Factory Pattern.
The main difference being, in the former you define a common template for a single product, whereas in the latter you define a template for a family of products. For more information, you could look here.
In your case, you wish to define a template for Category. With this assumption, here is what your group of classes would look like:
abstract class Category {
    public void doSomething() {
        // behavior common to every category can live here
    }

    abstract void toHtml();
}

class Category1 extends Category {
    @Override
    public void toHtml() {
        ... // do something here
    }
}

class Category2 extends Category {
    @Override
    public void toHtml() {
        ... // do something else here
    }
}
It is true that this is certainly a lot of code, and it could easily be represented like this:
class Category {
    public void toHtml(Integer param) {
        if (param == 1) {
            // do something for Category1
        } else {
            // do something for Category2
        }
    }
}
At the end of the day, it really is a design decision. There are some factors you can consider. Is this going to be a constantly changing class? Is this going to be declared global for the customer to use? How do you want the customer to be able to use this?
The easier thing at this point would be to take the path of least resistance. Having one class to service all categories certainly results in lesser code and in Salesforce, less code is always a better thing. But consider this: Abstracting your functionality into separate classes makes for more maintainable code. You may find it easier to write a class and a wall of if statements, but tomorrow when you're not around and there's a critical failure and someone has to look through your code to figure out exactly which if caused the problem, they'll curse you for it.
Keep in mind that inheritance is an all or nothing mechanism. You may find it particularly useful to use if you have some common functionality, in which case you can choose to abstract that out into the parent class and have your children take care of the specifics.
If you create a subclass of Category and override the toHtml() method, why do you need a factory pattern at all? The toHtml() method of the runtime-resolved class will be called if you call it through a Category reference. This implies that if you add a new Category subclass, you override the toHtml() method there and it should work fine.
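A sketch of what this answer means, reusing the Category classes from the earlier answer; the call site depends only on the abstract Category, with no factory and no type switch:
class CategoryRenderer {

    static void render(Category category) {
        // the runtime type of 'category' decides which toHtml() runs
        category.toHtml();
    }

    public static void main(String[] args) {
        render(new Category1());   // Category1.toHtml()
        render(new Category2());   // Category2.toHtml()
    }
}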

Is it good to have enum implementation with processing logic?

In the example below, enums do the amount of processing that a class would normally do.
enum TriggerHandlerType {
    DASHBOARD {
        @Override
        TriggerHandler create() {
            return new DashboardTriggerHandler();
        }
    },
    COMPONENT_HANDLER {
        //...
    };

    abstract TriggerHandler create();
}

private static TriggerContext getTriggerContext(TriggerHandlerType triggerHandlerType) throws TriggerHandlerException {
    return new TriggerContext(triggerHandlerType.create());
}
Enums are usually used for type-safe storage of constants, whereas in this case they return varying values based on processing logic. In a way it seems to be a comprehensive technique, as the enums here do the state determination themselves, which eases the processing in the classes that use them. Also, since the return values are a subset of finite values, it seems to make some sense to have the processing handled by the enums themselves.
I do see a problem here: this will break the Open-Closed Principle in SOLID, and the class will grow in lines of code whenever more enum constants get added. Could anyone share their thoughts on this?
I had an enum like this, doing operations like OR, AND, SEQ and so on.
With Java 8 and just one overridden method, you could also give the enum a constructor that takes a functional interface as a parameter.
enum TriggerHandlerType {
    DASHBOARD(DashboardTriggerHandler::new),
    COMPONENT_HANDLER(() -> { ... });

    // Fun is assumed to be a functional interface along the lines of:
    // interface Fun { TriggerHandler apply(); }
    private final Fun fun;

    private TriggerHandlerType(Fun fun) {
        this.fun = fun;
    }

    public TriggerHandler create() {
        return fun.apply();
    }
}
In another case I did not use this technique, in order to decouple classes and keep clear tiers of classes: the enum was an "early" class that did not yet depend on classes defined later.
A Map from enum to handler would be OO too, as sketched below. A unit test might check that the created map has a size equal to the number of enum constants.
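A sketch of that Map-based alternative (TriggerHandlerType, TriggerHandler and DashboardTriggerHandler are the ones from the question):
import java.util.EnumMap;
import java.util.Map;
import java.util.function.Supplier;

class TriggerHandlerRegistry {

    private static final Map<TriggerHandlerType, Supplier<TriggerHandler>> HANDLERS =
            new EnumMap<>(TriggerHandlerType.class);

    static {
        HANDLERS.put(TriggerHandlerType.DASHBOARD, DashboardTriggerHandler::new);
        // HANDLERS.put(TriggerHandlerType.COMPONENT_HANDLER, ...);
    }

    static TriggerHandler create(TriggerHandlerType type) {
        return HANDLERS.get(type).get();
    }

    // a unit test could assert HANDLERS.size() == TriggerHandlerType.values().length
}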
Needless to say, such an enum is an artificial coupling. Fixed number of elements or not, one could also make separate classes/singletons.
So it depends.
The real answer is: it depends on what the enum is defined for.
You can define enums that provide, as syntax sugar, functionality your API clients would otherwise implement over and over.
A good example of that is Java's TimeUnit, where every constant is effectively a final class that can be used to calculate time conversions:
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/TimeUnit.html#toHours(long)
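For example (TimeUnit is part of the JDK):
import java.util.concurrent.TimeUnit;

class TimeUnitDemo {
    public static void main(String[] args) {
        System.out.println(TimeUnit.MINUTES.toSeconds(2));   // 120
        System.out.println(TimeUnit.DAYS.toHours(1));        // 24
    }
}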
Other enums implement the Comparator interface and provide, in their constants, predefined sort criteria.
All of them are valid, since they are syntax sugar.

How to write a function in Java generalized to take in a map parameter that has different key types?

I have two enums:
enum Country { US, etc }
enum Language { EN, etc }
I want to be able to write a function that takes in a map that has either enum as the key:
checkMap(new HashMap<Language, Long>());
checkMap(new HashMap<Country, Long>());
The only ways I have figured out how to do it are the following:
1. private void checkMap(Map<? extends Enum, Long> mapParam) {...}
2. private <T> void checkMap(Map<T, Long> mapParam) {...}
3. private void checkMap(Map mapParam) {...}
None of these are super specific about the parameters I let in. (1) does the best by requiring some subclass of Enum, but it complicates much of the logic (which I am greatly simplifying here). With (3) I have to do a ton of downstream casting, and I feel it's just generally bad practice.
I feel like I am missing something fairly obvious here.
I also know that I can write two separate method declarations with the different parameters, but there is so much repeat logic and I want to abstract that logic into a function and avoid duplicate code.
I use your option 2: use a generic type parameter T. In your example the methods are private so you have complete control over which methods can delegate to checkMap, and so do not need to be so concerned about delegations using inappropriate key types.
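If you want the signature itself to reject non-enum keys while keeping one shared implementation, a slightly tighter variant of option 2 is possible (a sketch, using the Country and Language enums from the question):
import java.util.HashMap;
import java.util.Map;

class MapChecker {

    // bounding K to Enum<K> keeps one shared implementation but rejects non-enum keys at compile time
    private <K extends Enum<K>> void checkMap(Map<K, Long> mapParam) {
        // shared logic here
    }

    void demo() {
        checkMap(new HashMap<Language, Long>());   // OK
        checkMap(new HashMap<Country, Long>());    // OK
        // checkMap(new HashMap<String, Long>());  // does not compile
    }
}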
You can use an interface - make Country and Language implement some made-up interface and use it in the method declaration:
interface Dummy {}
enum Country implements Dummy {...}
enum Language implements Dummy {...}

// the wildcard is needed so that both Map<Country, Long> and Map<Language, Long> are accepted
private void checkMap(Map<? extends Dummy, Long> map) {...}
