When you use the Visitor pattern and you need to get a variable out of a visitor method, how do you proceed?
I see two approaches. The first one uses an anonymous class:
// need a wrapper to get the result (which is just a String)
final StringBuilder result = new StringBuilder();
final String concat = "Hello ";
myObject.accept(new MyVisitor() {
@Override
public void visit(ClassA o)
{
// the concatenation here is deliberately simplified; normally, the concat
// var is a complex object (like a hashtable) used to create the result variable
// (I know that concatenation inside a StringBuilder.append() is ugly, but this is just an example!)
result.append(concat + "A");
}
@Override
public void visit(ClassB o)
{
result.append(concat + "B");
}
});
System.out.println(result.toString());
Pros & Cons:
Pros: you do not need to create a class file for this little piece of behavior.
Cons: I don't like the "final" keyword in this case: the anonymous class is less readable because it reaches out to external variables, and you need a wrapper to get the requested value (because the variable is final, you can't reassign it).
The other way is to write an external visitor class:
public class MyVisitor
{
private String result;
private String concat;
public MyVisitor(String concat)
{
this.concat = concat;
}
@Override
public void visit(ClassA o)
{
result = concat + "A";
}
@Override
public void visit(ClassB o)
{
result = concat + "B";
}
public String getResult()
{
return result;
}
}
MyVisitor visitor = new MyVisitor("Hello ");
myObject.accept(visitor);
System.out.println(visitor.getResult());
Pros & Cons:
Pros: all variables are defined in a clean scope, and you don't need a wrapper to hold the requested value.
Cons: you need an external file, and getResult() must be called after accept(); this is quite ugly because you need to know the call order to use the visitor correctly.
What's your approach in this case? A preferred method? Another idea?
Well, both approaches are valid, and IMO it really depends on whether you would like to reuse the code or not. By the way, your last 'con' is not totally valid, since you do not need an 'external file' to declare a class. It might very well be an inner class...
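For illustration, the nested variant might look like this (a minimal sketch; SomeService is a placeholder, and I'm assuming MyVisitor is an interface with the two visit overloads):
public class SomeService {
    // nested visitor: no separate file, but still a named, reusable class
    private static final class ConcatVisitor implements MyVisitor {
        private final StringBuilder result = new StringBuilder();
        private final String concat;

        ConcatVisitor(String concat) { this.concat = concat; }

        @Override public void visit(ClassA o) { result.append(concat).append("A"); }
        @Override public void visit(ClassB o) { result.append(concat).append("B"); }

        String getResult() { return result.toString(); }
    }
}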
That said, the way I use Visitors is like this:
public interface IVisitor<T extends Object> {
public T visit(ClassA element) throws VisitorException;
public T visit(ClassB element) throws VisitorException;
}
public interface IVisitable {
public <T extends Object> T accept(final IVisitor<T> visitor) throws VisitorException;
}
public class MyVisitor implements IVisitor<String> {
private String concat;
public MyVisitor(String concat) {
this.concat = concat;
}
public String visit(ClassA classA) throws VisitorException {
return this.concat + "A";
}
public String visit(ClassB classB) throws VisitorException {
return this.concat + "B";
}
}
public class ClassA implements IVisitable {
public <T> T accept(final IVisitor<T> visitor) throws VisitorException {
return visitor.visit(this);
}
}
public class ClassB implements IVisitable {
public <T> T accept(final IVisitor<T> visitor) throws VisitorException {
return visitor.visit(this);
}
}
// no return value needed?
public class MyOtherVisitor implements IVisitor<Void> {
public Void visit(ClassA classA) throws VisitorException {
return null;
}
public Void visit(ClassB classB) throws VisitorException {
return null;
}
}
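Usage would then look something like this (assuming the sketches above compile as written):
IVisitable myObject = new ClassA();
try {
    String result = myObject.accept(new MyVisitor("Hello "));
    System.out.println(result); // prints "Hello A"
    myObject.accept(new MyOtherVisitor()); // no result needed
} catch (VisitorException e) {
    // the visitor signalled failure
}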
That way, the visited objects are ignorant of what the visitor wants to do with them, yet they do return whatever the visitor wants to return. Your visitor can even 'fail' by throwing an exception.
I wrote the first version of this a few years ago and so far, it has worked for me in every case.
Disclaimer: I just hacked this together, quality (or even compilation) not guaranteed. But you get the idea... :)
I do not see an interface being implemented in your second example, but I believe it is meant to be there. I would add a getResult() method to that interface (or make a sub-interface that has one).
That would help both examples 1 and 2. You would not need a wrapper in 1, because you can define the getResult() method to return the result you want. In example 2, because getResult() is part of your interface, there is no method call order that you 'need to know'.
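Something like this, perhaps (a sketch; the sub-interface name is my own invention):
public interface ResultVisitor extends MyVisitor {
    String getResult();
}
Example 1 could then assign the anonymous ResultVisitor to a variable, pass it to accept(), and read the result back through that same reference, with no wrapper variable.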
My preference would be to create a new class, unless each variation of the class is only going to be used once. In which case I would inline it anonymously.
From the perspective of a cleaner design, the second approach is preferable, for exactly the reasons you've already stated.
In a normal TDD cycle I would start off with an anonymous class and refactor it out a bit later. However, if the visitor is only needed in that one place and its complexity matches what you've shown in the example (i.e. not complex), I would leave it inline and refactor it into a separate class later if needed (e.g. another use case appears, or the complexity of the visitor or surrounding class increases).
I would recommend the second approach. Having the visitor as a full-fledged class also serves the purposes of documentation and clean code. I do not agree with the cons you mention for that approach: say you have an ArrayList, you don't add any element to it, and you do a get; surely you will get a null, but that doesn't mean the design is necessarily wrong.
One of the points of the visitor pattern is to allow for multiple visitor types. If you create an anonymous class, you are kind of breaking the pattern.
You should change your accept method to be
public void accept(Visitor visitor) {
visitor.visit(this);
}
Since you pass this into the visitor, this being the object that is visited, the visitor can access the object's properties according to the standard access rules.
Related
I am defining a type Option<T> in Java that should behave as much as possible as Rust's equivalent.
It has a method, Option::flatten, that is only implemented if the inner T is some other Option<T>. I am thinking of something like this:
public class Option<T> {
/* fields, constructors, other methods */
@Bound(T=Option<U>)
public <U> Option<U> flatten() {
if (isNone()) return None();
else return this.unwrap();
}
}
But the syntax is of course completely fictional. Is there some way to make this work in Java? I know static methods are an option, but they can't be called like a normal method, which is the whole point of this type.
This is not supposed to be a standalone thing, but rather a part of a larger Java implementation of Rust iterators I'm currently working on.
The problem with trying to come up with a non-static method such as flatten is that in Java one cannot conditionally add more methods to a class based on whether the type parameter of the class fulfills a certain constraint.
You can, however, make it a static method and constrain its arguments to whatever you need.
class Option<T> {
// ...
public static <U> Option<U> flatten(Option<Option<U>> option) {
if (option.isNone()) return None();
return option.unwrap();
}
}
Which would work for valid implementations of None, isNone and unwrap.
A more complete example follows.
public static class Option<T> {
private final T value;
private Option(T x) {
this.value = x;
}
public static <T> Option<T> of(T x) {
java.util.Objects.requireNonNull(x);
return new Option<>(x);
}
public static <T> Option<T> None() {
return new Option<>(null);
}
public T unwrap() {
java.util.Objects.requireNonNull(this.value);
return this.value;
}
public boolean isNone() {
return this.value == null;
}
public static <U> Option<U> flatten(Option<Option<U>> option) {
if (option.isNone()) return Option.None();
return option.unwrap();
}
@Override
public String toString() {
if (this.isNone()) {
return "None";
}
return "Some(" + this.value.toString() + ")";
}
}
Usage:
var myOption = Option.of(Option.of(5));
System.out.println("Option: " + myOption);
System.out.println("Flattened: " + Option.flatten(myOption));
Output:
Option: Some(Some(5))
Flattened: Some(5)
I think the way you want to handle this is not to actually have a flatten() method, but have different handling in your constructor. Upon being created, the constructor should check the type it was handed. If that type is Option, it should try and unwrap that option, and set its internal value to the same as the option it was handed.
Otherwise, there isn't really a way for an object to 'flatten' itself, because it would have to change the type it was bounded over in the base case. You could return a new object from a static method, but are otherwise stuck.
I want to point out some of the potential headaches and issues regarding this re-implementation of Optional<T>.
Here's how I would initially go about it:
public class Option<T> {
/* fields, constructors, other methods */
public <U> Option<U> flatten() {
if (isNone()) return None();
T unwrapped = this.unwrap();
if (unwrapped instanceof Option) {
return (Option<U>) unwrapped; //No type safety!
} else {
return (Option<U>) this;
}
}
}
However, this code is EVIL. Note the signature of <U> Option<U> flatten() means that the U is going to be type-inferenced into whatever it needs to be, not whatever a potential nested type is. So now, this is allowed:
Option<Option<Integer>> opt = /* some opt */;
Option<String> bad = opt.flatten();
Option<Option<?>> worse = opt.<Option<?>>flatten();
You will face a CCE upon using this for the other operations, but it allows a type of failure which I would say is dangerous at best. Note that any Optional<Optional<T>> can have #flatMap unwrap for you: someOpt.flatMap(Function.identity());, however this again begs the question of what caused you to arrive at a wrapped optional to begin with.
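For reference, the flatMap trick with the JDK's own java.util.Optional looks like this:
import java.util.Optional;
import java.util.function.Function;

Optional<Optional<Integer>> nested = Optional.of(Optional.of(5));
// identity() "maps" the inner Optional to itself, so flatMap unwraps one level
Optional<Integer> flat = nested.flatMap(Function.identity());
System.out.println(flat); // Optional[5]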
Another answer (by @NathanielFord) notes the constructor as an option, which seems viable as well, but will still face the runtime check upon construction (it is simply moved into the constructor):
public class Option<T> {
/* fields, constructors, other methods */
public Option(T someValue) { ... }
public Option(Option<T> wrapped) {
this(wrapped.isNone() ? EMPTY_OBJECT : wrapped.unwrap());
}
public Option<T> flatten() {
return this; //we're always flattened!
}
}
Note as well that the re-creation of Optional<T> by @E_net4thecommentflagger has the potential for a nasty future bug: Optional.ofNullable(null).isNone() would return true! This may not be what you want for some potential use-cases, and should #equals be implemented in a similar manner, you'd end up with Optional.ofNullable(null).equals(Optional.None()), which seems very counter-intuitive.
All of this to say, that while Rust may require you to deal with these nested optionals, you are writing code for Java, and many of the potential restrictions you faced before have changed.
Let's say that I have several Creature subclasses, and that they each have some sort of getGroup() method that returns a List<Creature>.
What I mean by "some sort of" getGroup() method is that the name of this function varies between subclasses. For instance, Wolves travel in packs, so they have a getPack() member. Fish travel in schools, so they have a getSchool() member, Humans have a getFamily() member, and so on.
getGroup() does not exist in Creature, and it cannot be added to the interface. None of these classes can be edited.
I'm writing a method to print the number of Creatures in their group. How would I do this?
Essentially, I'm looking to condense these two functions into the same thing:
public void PrintSchoolSize(Fish dory) {
System.out.print(dory.getSchool().size());
}
public void PrintHiveSize(Bee bee) {
System.out.print(bee.getColony().size());
}
...into the following function:
public void printGroupSize( Class<? extends Creature> cree,
                            FunctionThatReturnsList getGroup ) {
    System.out.print(cree.getGroup().size());
}
I'd imagine I need to pass in a second argument (a function pointer?) to printGroupSize. Any help is very appreciated, thanks!
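Something like this, maybe, using java.util.function (an untested sketch of what I imagine):
import java.util.List;
import java.util.function.Function;

public <T extends Creature> void printGroupSize(T creature,
        Function<? super T, List<Creature>> getGroup) {
    // the caller supplies the subclass-specific getter
    System.out.print(getGroup.apply(creature).size());
}

// usage, passing the getter as a method reference:
// printGroupSize(dory, Fish::getSchool);
// printGroupSize(bee, Bee::getColony);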
EDIT Thank you all for the help. This is just a simplification of the real problem I'm trying to solve. Long, overly complex questions are tougher to answer, so I posed this simpler scenario.
The only answer lies in using a generic function (if such a thing exists). The classes I'm actually working with don't share a common interface, but they all have a function that returns a List.
What you describe in your question is not much related to Java's sense of "generic methods". You could implement it with reflection (see Class.getMethod()), but I promise you that you really don't want to go there.
It would be better for Creature to declare a possibly-abstract method getGroup() that each subclass would override. You may do that in addition to providing methods with subclass-specific names, if you wish. Code that wants to obtain the group (or its size) without knowing the specific type of creature would invoke that creature's getGroup() method. That's an application of polymorphism, which seems to be what you're actually after.
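If the classes could be edited, that would look something like this (a sketch of the polymorphic approach described above; the question rules it out, but it shows what to aim for):
public interface Creature {
    List<Creature> getGroup();
}

public class Fish implements Creature {
    @Override
    public List<Creature> getGroup() {
        return getSchool(); // delegate to the subclass-specific name
    }

    // existing subclass-specific method, unchanged
    public List<Creature> getSchool() { /* ... */ return new ArrayList<>(); }
}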
If getGroup() cannot be added to the Creature interface, why not add another interface to your creatures?
public interface HasGroup {
Group getGroup();
}
Would mean you can create the method:
public void printGroupSize(HasGroup cree) {
System.out.print(cree.getGroup().size());
}
The simplest way would be to add a getGroup() method to the Creature interface and implement it in each subclass, but it seems you cannot do that.
If you can modify the subclasses, I would actually create a new interface CreatureGroupable with a getGroupSize() and/or getGroup(). Each subclass of Creature shall implement this interface, e.g.
public interface CreatureGroupable {
CreatureGroup getGroup();
}
public enum CreatureGroup {
WOLF_PACK("pack", 30),
GEESE_FLOCK("flock", 20),
FISH_SCHOOL("school", 1000),
HUMAN_FAMILY("family", 4),
...
private final String name;
private final int size;
private CreatureGroup(String name, int size) {
this.name = name;
this.size = size;
}
public String getName() { return name; }
public int getSize() { return size; }
}
public class Wolf implements Creature, CreatureGroupable {
// methods from Creature, constructor, ...
public CreatureGroup getGroup() {
return CreatureGroup.WOLF_PACK;
}
}
This way, if you have a List<Creature> you can access the group of each one and do whatever you have to do, e.g.
public void printGroups(List<Creature> creatures) {
for (Creature c : creatures) {
CreatureGroup group = c.getGroup();
System.out.println("A " + group.getName() +
" has roughly " group.getSize() +
" individuals.");
}
}
If you want more flexibility, you may not use an enum and just a standard interface and class hierarchy for the groups.
Thanks to everyone for the help. Since I'm not allowed to edit any of the aforementioned classes/interfaces (I can only write external functions), I wrote the following function
public List<? extends Creature> getGroup(Object obj) {
if(obj.getClass() == Bee.class)
return ((Bee)obj).getColony();
if(obj.getClass() == Fish.class)
return ((Fish) obj).getSchool();
/* repeat for the other classes */
return null;
}
...and used it here, as so:
public void printGroupSize( Creature cree ) {
System.out.print(getGroup(cree).size());
}
I have verified that this solution does indeed work, since all of the get*****() functions return a List<Creature>. This solution also shrinks my codebase significantly and is easier to maintain than the current structure.
With the introduction of generics, I am reluctant to use instanceof or casting any more than necessary. But I don't see a way around it in this scenario:
for (CacheableObject<ICacheable> cacheableObject : cacheableObjects) {
ICacheable iCacheable = cacheableObject.getObject();
if (iCacheable instanceof MyObject) {
MyObject myObject = (MyObject) iCacheable;
myObjects.put(myObject.getKey(), myObject);
} else if (iCacheable instanceof OtherObject) {
OtherObject otherObject = (OtherObject) iCacheable;
otherObjects.put(otherObject.getKey(), otherObject);
}
}
In the above code, I know that my ICacheables should only ever be instances of MyObject, or OtherObject, and depending on this I want to put them into 2 separate maps and then perform some processing further down.
I'd be interested if there is another way to do this without my instanceof check.
Thanks
You could use double invocation. No promises it's a better solution, but it's an alternative.
Code Example
import java.util.HashMap;
public class Example {
public static void main(String[] argv) {
Example ex = new Example();
ICacheable[] cacheableObjects = new ICacheable[]{new MyObject(), new OtherObject()};
for (ICacheable iCacheable : cacheableObjects) {
// depending on whether the object is a MyObject or an OtherObject,
// the .put(Example) method will double dispatch to either
// the put(MyObject) or put(OtherObject) method, below
iCacheable.put(ex);
}
System.out.println("myObjects: "+ex.myObjects.size());
System.out.println("otherObjects: "+ex.otherObjects.size());
}
private HashMap<String, MyObject> myObjects = new HashMap<String, MyObject>();
private HashMap<String, OtherObject> otherObjects = new HashMap<String, OtherObject>();
public Example() {
}
public void put(MyObject myObject) {
myObjects.put(myObject.getKey(), myObject);
}
public void put(OtherObject otherObject) {
otherObjects.put(otherObject.getKey(), otherObject);
}
}
interface ICacheable {
public String getKey();
public void put(Example ex);
}
class MyObject implements ICacheable {
public String getKey() {
return "MyObject"+this.hashCode();
}
public void put(Example ex) {
ex.put(this);
}
}
class OtherObject implements ICacheable {
public String getKey() {
return "OtherObject"+this.hashCode();
}
public void put(Example ex) {
ex.put(this);
}
}
The idea here is that - instead of casting or using instanceof - you call the iCacheable object's .put(...) method which passes itself back to the Example object's overloaded methods. Which method is called depends on the type of that object.
See also the Visitor pattern. My code example smells because the ICacheable.put(...) method is incohesive - but using the interfaces defined in the Visitor pattern can clean up that smell.
Why can't I just call this.put(iCacheable) from the Example class?
In Java, overriding is always bound at runtime, but overloading is a little more complicated: dynamic dispatching means that the implementation of a method will be chosen at runtime, but the method's signature is nonetheless determined at compile time. (Check out the Java Language Specification, Chapter 8.4.9 for more info, and also check out the puzzler "Making a Hash of It" on page 137 of the book Java Puzzlers.)
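A tiny example of that compile-time overload selection, reusing the Example class from above:
Example ex = new Example();
ICacheable iCacheable = new MyObject();

// would NOT compile: overload resolution uses the static type, and
// there is no Example.put(ICacheable) signature to bind to
// ex.put(iCacheable);

// compiles: the static type MyObject selects put(MyObject)
ex.put(new MyObject());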
Is there no way to combine the cached objects in each map into one map? Their keys could keep them separated, so you could store them in one map. If you can't do that, then you could have a
Map<Class,Map<Key,ICacheable>>
then do this:
Map<Class, Map<Key, ICacheable>> cache = ...;

public void cache( ICacheable cacheable ) {
    if( !cache.containsKey( cacheable.getClass() ) ) {
        cache.put( cacheable.getClass(), new HashMap<Key, ICacheable>() );
    }
    cache.get( cacheable.getClass() ).put( cacheable.getKey(), cacheable );
}
You can do the following:
1. Add a method to your ICacheable interface that will handle placing the object into one of two Maps, given as arguments to the method (sketched below).
2. Implement this method in each of your two implementing classes, having each class decide which Map to put itself in.
3. Remove the instanceof checks in your for loop, and replace the put method with a call to the new method defined in step 1.
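A sketch of steps 1 and 2 (the method name addToCache is my own; the map types follow the question's code):
public interface ICacheable {
    String getKey();
    void addToCache(Map<String, MyObject> myObjects,
                    Map<String, OtherObject> otherObjects);
}

class MyObject implements ICacheable {
    public String getKey() { return "MyObject" + hashCode(); }
    public void addToCache(Map<String, MyObject> myObjects,
                           Map<String, OtherObject> otherObjects) {
        myObjects.put(getKey(), this); // this class knows which map it belongs in
    }
}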
This is not a good design, however, because if you ever have another class that implements this interface, and a third map, then you'll need to pass another Map to your new method.
I've run into a situation in which I need to extend the functionality of a given class, but I'm not sure of the best way to go about this. I started by invoking functionality "upwards" and have now switched to "downwards", but I see issues with both. Let me explain what I mean. First, the "upwards" approach:
public class ParentValidator
{
public void validate() {
// Some code
}
}
public class ChildValidator extends ParentValidator
{
@Override
public void validate() {
super.validate();
// Some code
}
}
public class GrandchildValidator extends ChildValidator
{
@Override
public void validate() {
super.validate();
// Some code
}
}
This functions perfectly well, but it requires that I always remember to place super.validate() in my method body or the logic in the parent class(es) won't be executed. In addition, extension in this manner can be considered "unsafe" due to the fact that a child class could actually replace/modify the code defined in the parent class. This is what I call invoking methods "upwards" because I'm invoking methods from higher level classes as I go.
To counter these shortfalls, I decided to make ParentValidator.validate() final and have it invoke a different method. Here's what my code was modified to:
public class ParentValidator
{
public final void validate() {
// Some code
subValidate();
}
protected void subValidate() {}
}
public class ChildValidator extends ParentValidator
{
@Override
public final void subValidate() {
// Some code
subSubValidate();
}
protected void subSubValidate() {}
}
public class GrandchildValidator extends ChildValidator
{
@Override
public void subSubValidate() {
// Some code
subSubSubValidate();
}
protected void subSubSubValidate() {}
}
This is what I was referring to when I say that I'm calling downwards as each class invokes methods on classes "down" the inheritance chain.
Using this approach, I can be guaranteed that the logic in the parent class(es) will be executed, which I like. However, it doesn't scale well. The more layers of inheritance I have, the uglier it gets. At one level, I think this is very elegant. At two levels, it starts to look shoddy. At three or more, it's hideous.
In addition, just as I had to remember to invoke super.validate() as the first line of any of my children's validate methods, I now have to remember to invoke some "subValidate" method at the end of any of my parent's validate methods, so that didn't seem to get any better.
Is there a better way to do this type of extension that I haven't even touched on? Both of these approaches have some serious flaws, and I'm wondering if there's a better design pattern I could be using.
In what you describe as your first approach you are using simple inheritance, your second approach is closer to what the Gang of Four [GoF] called a Template Method Pattern because your parent class is using the so-called Hollywood Principle: "don't call us, we'll call you".
However, you could benefit from declaring the subValidate() method as abstract in the parent class, and by this make sure all subclasses are forced to implement it. Then it would be a true template method.
public abstract class ParentValidator
{
public final void validate() {
//some code
subValidate();
}
protected abstract void subValidate();
}
Depending on what you are doing, there are other patterns that could help you do this in a different manner. For instance, you could use a Strategy pattern to perform the validations, thereby favoring composition over inheritance, as suggested before, but a consequence is that you will need more validation classes.
public abstract class ParentValidator
{
private final ValidatorStrategy validator;
protected ParentValidator(ValidatorStrategy validator){
this.validator = validator;
}
public final void validate() {
//some code
this.validator.validate();
}
}
Then you can provide specific validation strategies for every type of Validator that you have.
If you want to get the best of both worlds you might considering implementing the solution as a Decorator Pattern where subclasses can extend the functionality of a parent class and still stick to a common interface.
public abstract class ValidatorDecorator implements Validator
{
private final Validator validator;
protected ValidatorDecorator(Validator validator){
this.validator = validator;
}
public final void validate() {
//some code
this.validator.validate(); // delegate to the wrapped validator
}
}
All patterns have consequences and advantages and disadvantages that you must consider carefully.
I'd prefer to 1) program against interfaces, and 2) opt for composition over inheritance. This is how I have done. Some people like it, some do not. It works.
// java pseudocode below, you'll need to work the wrinkles out
/**
 * Defines a rule or set of rules under which an instance of T
 * is deemed valid or invalid.
 **/
public interface ValidationRule<T>
{
    /**
     * @return a String describing the invalidation condition, or null
     * (indicating that parameter t is valid)
     **/
    String apply(final T t);
}
/**
* Utility class for enforcing a logical conjunction
* of zero or more validation rules on an object.
**/
public final class ValidatorEvaluator
{
/**
* evaluates zero or more validation rules (as a logical
* 'AND') on an instance of type T.
**/
static <T> String apply(final T t, ValidationRule<T> ... rules)
{
for(final ValidationRule<T> v : rules)
{
String msg = v.apply(t);
if( msg != null )
{
return msg; // t is not valid
}
}
return null;
}
}
// arbitrary dummy class that we will test for
// i being a positive number greater than zero
public class MyFoo
{
int i;
public MyFoo(int n){ i = n; }
///
}
public class NonZeroValidatorRule implements ValidationRule<MyFoo>
{
public String apply(final MyFoo foo)
{
return foo.i == 0 ? "foo.i is zero!" : null;
}
}
// test for being positive using NonZeroValidatorRule and an anonymous
// validator that tests for negatives
String msg = ValidatorEvaluator.apply( new MyFoo(1),
new NonZeroValidatorRule(),
new ValidationRule<MyFoo>()
{
public String apply(final MyFoo foo)
{
return foo.i < 0 ? "foo.i is negative!" : null;
}
}
);
if( msg == null )
{
// yay!
...
}
else
{
// nay...
someLogThingie.log("error: myFoo no workie. reason=" + msg );
}
More complex, non-trivial evaluation rules can be implemented this way.
The key here is that you should not use inheritance unless there exists an is-a relationship. Do not use it just to recycle or encapsulate logic. If you still feel you need inheritance, then don't go overboard trying to make sure that every subclass executes the validation logic inherited from the superclass. Have the implementation of each subclass make an explicit call to super:
public class ParentValidator
{
public void validate() { // notice that I removed the final you originally had
// Some code
}
}
public class ChildValidator extends ParentValidator
{
@Override
public void validate() {
// Some code
super.validate(); // explicit call to inherited validate
// more validation code
}
}
Keep things simple, and don't try to make it impossible or fool-proof. There is a difference between coding defensively (a good practice) and coding against stupid (a futile effort.) Simply lay out coding rules on how to subclass your validators. That is, put the onus on the implementors. If they cannot follow the guidelines, no amount of defensive coding will protect your system against their stupidity. Ergo, keep things clear and simple.
I prefer composition over inheritance if your subSubSubValidate is really general functionality. You can extract a new class, move the logic there, and then use it without inheritance in the other classes.
There is also: "Favor 'object composition' over 'class inheritance'." (Gang of Four 1995:20)
Maybe a look at the Visitor pattern will help you develop your own pattern.
Here is some information on it: http://en.wikipedia.org/wiki/Visitor_pattern
If I have two interfaces, both quite different in their purposes but with the same method signature, how do I make a class implement both without being forced to write a single method that serves both interfaces, containing convoluted logic that checks which type of object the call is being made for and invokes the proper code?
In C# , this is overcome by what is called as explicit interface implementation. Is there any equivalent way in Java ?
No, there is no way to implement the same method in two different ways in one class in Java.
That can lead to many confusing situations, which is why Java has disallowed it.
interface ISomething {
void doSomething();
}
interface ISomething2 {
void doSomething();
}
class Impl implements ISomething, ISomething2 {
public void doSomething() {} // There can only be one implementation of this method.
}
What you can do is compose a class out of two classes that each implement a different interface. Then that one class will have the behavior of both interfaces.
class CompositeClass {
ISomething class1;
ISomething2 class2;
void doSomething1(){class1.doSomething();}
void doSomething2(){class2.doSomething();}
}
There's no real way to solve this in Java. You could use inner classes as a workaround:
interface Alfa { void m(); }
interface Beta { void m(); }
class AlfaBeta implements Alfa {
private int value;
public void m() { ++value; } // Alfa.m()
public Beta asBeta() {
return new Beta(){
public void m() { --value; } // Beta.m()
};
}
}
Although it doesn't allow for casts from AlfaBeta to Beta, downcasts are generally evil anyway. If an Alfa instance can often be expected to have a Beta aspect too, and for some reason (usually optimization is the only valid reason) you want to be able to convert it to Beta, you could make a sub-interface of Alfa with a Beta asBeta() method in it.
If you are encountering this problem, it is most likely because you are using inheritance where you should be using delegation. If you need to provide two different, albeit similar, interfaces for the same underlying model of data, then you should use a view to cheaply provide access to the data using some other interface.
To give a concrete example of the latter case, suppose you want to implement both Collection and MyCollection (which does not inherit from Collection and has an incompatible interface). You could provide Collection getCollectionView() and MyCollection getMyCollectionView() methods which return light-weight implementations of Collection and MyCollection, using the same underlying data.
For the former case... suppose you really want an array of integers and an array of strings. Instead of inheriting from both List<Integer> and List<String>, you should have one member of type List<Integer> and another member of type List<String>, and refer to those members, rather than try to inherit from both. Even if you only needed a list of integers, it is better to use composition/delegation over inheritance in this case.
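A minimal sketch of the view idea (the class and field names here are my own):
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.List;

public class Numbers {
    private final List<Integer> data = new ArrayList<>();

    // cheap read-only view exposing the standard List interface
    public List<Integer> getListView() {
        return new AbstractList<Integer>() {
            @Override public Integer get(int i) { return data.get(i); }
            @Override public int size() { return data.size(); }
        };
    }
}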
The "classical" Java problem also affects my Android development...
The reason seems to be simple:
More frameworks/libraries you have to use, more easily things can be out of control...
In my case, I have a BootStrapperApp class inherited from android.app.Application,
whereas the same class should also implement a Platform interface of a MVVM framework in order to get integrated.
Method collision occurred on a getString() method, which is announced by both interfaces and should have differenet implementation in different contexts.
The workaround (ugly..IMO) is using an inner class to implement all Platform methods, just because of one minor method signature conflict...in some case, such borrowed method is even not used at all (but affected major design semantics).
I tend to agree C#-style explicit context/namespace indication is helpful.
The only solution that comes to my mind is using reference objects to the one you want to implement multiple interfaces.
e.g. supposing you have 2 interfaces to implement:
public interface Framework1Interface {
void method(Object o);
}
and
public interface Framework2Interface {
void method(Object o);
}
you can enclose them into two Facador objects:
public class Facador1 implements Framework1Interface {
private final ObjectToUse reference;
public static Framework1Interface Create(ObjectToUse ref) {
return new Facador1(ref);
}
private Facador1(ObjectToUse refObject) {
this.reference = refObject;
}
@Override
public boolean equals(Object obj) {
if (obj instanceof Framework1Interface) {
return this == obj;
} else if (obj instanceof ObjectToUse) {
return reference == obj;
}
return super.equals(obj);
}
@Override
public void method(Object o) {
reference.methodForFrameWork1(o);
}
}
and
public class Facador2 implements Framework2Interface {
private final ObjectToUse reference;
public static Framework2Interface Create(ObjectToUse ref) {
return new Facador2(ref);
}
private Facador2(ObjectToUse refObject) {
this.reference = refObject;
}
@Override
public boolean equals(Object obj) {
if (obj instanceof Framework2Interface) {
return this == obj;
} else if (obj instanceof ObjectToUse) {
return reference == obj;
}
return super.equals(obj);
}
@Override
public void method(Object o) {
reference.methodForFrameWork2(o);
}
}
In the end, the class you wanted should look something like this:
public class ObjectToUse {
private Framework1Interface facFramework1Interface;
private Framework2Interface facFramework2Interface;
public ObjectToUse() {
}
public Framework1Interface getAsFramework1Interface() {
if (facFramework1Interface == null) {
facFramework1Interface = Facador1.Create(this);
}
return facFramework1Interface;
}
public Framework2Interface getAsFramework2Interface() {
if (facFramework2Interface == null) {
facFramework2Interface = Facador2.Create(this);
}
return facFramework2Interface;
}
public void methodForFrameWork1(Object o) {
}
public void methodForFrameWork2(Object o) {
}
}
You can now use the getAs* methods to "expose" your class.
You can use the Adapter pattern to make this work. Create one adapter for each interface and use those. It should solve the problem.
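For instance, a bare-bones sketch reusing the Alfa/Beta interfaces from the earlier answer (AlfaBetaImpl with alfaM()/betaM() methods is an assumed target class):
// each adapter implements one interface and forwards to the shared object
class AlfaAdapter implements Alfa {
    private final AlfaBetaImpl target;
    AlfaAdapter(AlfaBetaImpl target) { this.target = target; }
    @Override public void m() { target.alfaM(); }
}

class BetaAdapter implements Beta {
    private final AlfaBetaImpl target;
    BetaAdapter(AlfaBetaImpl target) { this.target = target; }
    @Override public void m() { target.betaM(); }
}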
All well and good when you have total control over all of the code in question and can implement this upfront.
Now imagine you have an existing public class used in many places with a method
public class MyClass{
private String name;
MyClass(String name){
this.name = name;
}
public String getName(){
return name;
}
}
Now you need to pass it into the off the shelf WizzBangProcessor which requires classes to implement the WBPInterface... which also has a getName() method, but instead of your concrete implementation, this interface expects the method to return the name of a type of Wizz Bang Processing.
In C# it would be trivial:
public class MyClass : WBPInterface{
private String name;
String WBPInterface.getName(){
return "MyWizzBangProcessor";
}
MyClass(String name){
this.name = name;
}
public String getName(){
return name;
}
}
In Java, though, you are going to have to identify every point in the existing deployed code base where you need to convert from one interface to the other. Sure, the WizzBangProcessor company should have used getWizzBangProcessName(), but they are developers too; in their context getName was fine. Actually, outside of Java, most other OO-based languages support this. Java is rare in forcing all interfaces to be implemented with the same method NAME.
Most other languages have a compiler that is more than happy to take an instruction saying "this method in this class, which matches the signature of this method in this implemented interface, is its implementation". After all, the whole point of defining interfaces is to allow the definition to be abstracted from the implementation. (Don't even get me started on default methods in Java interfaces, let alone default overriding... because sure, every component designed for a road car should be able to get slammed into a flying car and just work; hey, they are both cars. I'm sure the default functionality of, say, your sat nav will not be affected by default pitch and roll inputs, because cars only yaw!)