Is it necessary to parametrize the entire interface for this scenario, even though Bar is only being used in a single method?
public interface IFoo<T> {
    void method1(Bar<T> bar);
    // Many other methods that don't use Bar...
}
public class Foo1 implements IFoo<Yellow> {
    public void method1(Bar<Yellow> bar) { ... }
    // Many other methods that don't use Bar...
}
public class Foo2 implements IFoo<Green> {
    public void method1(Bar<Green> bar) { ... }
    // Many other methods that don't use Bar...
}
No, it's not necessary from a syntactic standpoint. You can also do this:
public interface IFoo {
    <T> void method1(Bar<T> bar);
    /* Many other methods that don't use Bar… */
}
Or this:
public interface IFoo {
    void method1(Bar<?> bar);
    /* Many other methods that don't use Bar… */
}
The correct choice depends on the semantics of IFoo and what its implementations are likely to do with the Bar instances they receive through method1.
I would frame the question differently, because asking whether it is "necessary" suggests a cost that isn't actually there. It doesn't really matter whether the type is used in one method or in several.
When you make several calls on the same instance, how does the type parameter vary?
If it is constant once the instance is created, parameterize the entire interface.
If it may differ on each call, parameterize the method.
That way, the placement of the type parameter conveys real information about the code, improving its meaning and clarity.
Edit: an example. If the type parameter sometimes varies from call to call on the same instance, it has to be a method-level parameter.
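A minimal self-contained sketch of the contrast; PerInstance, PerCall, and Demo are illustrative names, not from the question:
// Stand-ins for the question's types, to make the sketch compile on its own.
class Bar<T> {}
class Yellow {}
class Green {}

// T is fixed once the instance exists: parameterize the interface.
interface PerInstance<T> {
    void method1(Bar<T> bar);
}

// T may differ on each call: parameterize the method.
interface PerCall {
    <T> void method1(Bar<T> bar);
}

class Demo {
    static void use(PerInstance<Yellow> fixed, PerCall flexible) {
        fixed.method1(new Bar<Yellow>());    // only Bar<Yellow> compiles here
        flexible.method1(new Bar<Yellow>()); // T inferred as Yellow for this call
        flexible.method1(new Bar<Green>());  // T inferred as Green for this call
    }
}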
You're not implementing the interface. Is that deliberate? You can do this:
public class Foo2 implements IFoo<Green> {
    public void method1(Bar<Green> bar) { ... }
}
Just doing this:
public class Foo<Green> {
    void method1(Bar<Green> bar);
}
won't compile: in a class header, <Green> declares a new type variable that merely shadows the Green class, and a class method needs a body in any case.
I have a couple interfaces to support our post processing of entities:
WorkFlowProcessor
public interface WorkFlowProcessor {
    void PostProcess(List<WorkFlowStrategy> strategies);
}
WorkFlowAction
public class WorkFlowAction implements WorkFlowProcessor {
    ...
    ...
    public void PostProcess(List<WorkFlowStrategy> strategies) {
        for (WorkFlowStrategy strategy : strategies) {
            strategy.process(this);
        }
    }
}
WorkFlowStrategy
public interface WorkFlowStrategy {
    <T extends WorkFlowProcessor> void process(T itemToProcess);
}
TicketWorkFlowStrategy
public class TicketWorkFlowStrategy implements WorkFlowStrategy {
    ...
    ...
    @Override
    public void process(WorkFlowAction action) { // must override or implement a supertype method
        // do a lot of processing
    }
}
I'm trying to figure out why I cannot get it to compile with the WorkFlowAction class. Normally this works just fine. Any thoughts on how I can get this to run correctly?
That's because you've got to declare it with the same signature as the method in the interface:
<T extends WorkFlowProcessor> void process(T itemToProcess)
Declaring the method like this in the interface doesn't mean you can specialize implementations of it for more specific parameter types. The method has to accept any WorkFlowProcessor.
Because of that, the type variable here is pretty useless: you may as well declare it like this in the interface, which makes it cleaner to implement too:
void process(WorkFlowProcessor itemToProcess);
Method-level type variables aren't actually useful unless you're doing one or more of the following (see the sketch after this list):
Returning the same type as a non-generic parameter
Constraining a generic parameter to be related either to another parameter or the return type.
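For illustration, a hedged sketch of both cases; the method names are made up, and WorkFlowProcessor is assumed to be the interface from the question:
import java.util.List;

interface Examples {
    // 1. Return type tied to a parameter's type.
    <T extends WorkFlowProcessor> T firstOf(List<T> processors);

    // 2. Two parameters constrained to be related to each other.
    <T> void copy(List<? super T> dest, List<? extends T> src);
}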
If you want to specialize the process method for a particular subclass of WorkflowProcessor, you have to put this on the interface:
public interface WorkFlowStrategy<T extends WorkFlowProcessor> {
    void process(T itemToProcess);
}
Then:
public class TicketWorkFlowStrategy implements WorkFlowStrategy<WorkFlowAction> {
    @Override
    public void process(WorkFlowAction action) {
        // ...
    }
}
The implication being made by the process method as defined in the WorkFlowStrategy interface is that implementations should be able to accept any WorkFlowProcessor as an argument to the method. The generic definition you added to this method does nothing to change this.
In your case, the generic definition probably belongs on the interface level, not on the method level. You can then be explicit about what types can be supported.
In Java 8, if I have two interfaces with different (but compatible) return types, reflection tells me that one of the two methods is a default method, even though I haven't actually declared the method as default or provided a method body.
For instance, take the following code snippet:
package com.company;
import java.lang.reflect.Method;
interface BarInterface {}

class Bar implements BarInterface {}

interface FooInterface {
    public BarInterface getBar();
}

interface FooInterface2 extends FooInterface {
    public Bar getBar();
}

class Foo implements FooInterface2 {
    public Bar getBar() {
        throw new UnsupportedOperationException();
    }
}

public class Main {
    public static void main(String[] args) {
        for (Method m : FooInterface2.class.getMethods()) {
            System.out.println(m);
        }
    }
}
Java 1.8 produces the following output:
public abstract com.company.Bar com.company.FooInterface2.getBar()
public default com.company.BarInterface com.company.FooInterface2.getBar()
This seems odd, not only because both methods are present, but also because one of the methods has suddenly and inexplicably become a default method.
Running the same code in Java 7 yields something a little less unexpected, albeit still confusing, given that both methods have the same signature:
public abstract com.company.Bar com.company.FooInterface2.getBar()
public abstract com.company.BarInterface com.company.FooInterface.getBar()
Java definitely doesn't support multiple return types, so this result is still pretty strange.
The obvious next thought is: "Okay, so maybe this is a special behavior that only applies to interfaces, because these methods have no implementation."
Wrong.
class Foo2 implements FooInterface2 {
    public Bar getBar() {
        throw new UnsupportedOperationException();
    }
}

public class Main {
    public static void main(String[] args) {
        for (Method m : Foo2.class.getMethods()) {
            System.out.println(m);
        }
    }
}
yields
public com.company.Bar com.company.Foo2.getBar()
public com.company.BarInterface com.company.Foo2.getBar()
What's going on here? Why is Java enumerating these as separate methods, and how has one of the interface methods managed to become a default method with no implementation?
It's not a default method you provided but a bridge method. In the parent interface you have defined:
public BarInterface getBar();
and you must have a method which can be called which implements this.
e.g.
FooInterface fi = new Foo();
BarInterface bi = fi.getBar(); // calls BarInterface getBar()
However, you also need to be able to call the override through its covariant return type:
FooInterface2 fi = new Foo();
Bar bar = fi.getBar(); // calls Bar getBar()
These are effectively the same method; the only difference is that one calls the other and casts the return value. The one that appears to have a default implementation is the bridge method on the interface that does this.
Note: if you have multiple levels of interfaces/classes, each with a different return type, the number of bridge methods accumulates.
The reason this works is that the JVM allows multiple methods that differ only in return type, because at the bytecode level the return type is part of the method descriptor. In other words, the caller has to state which return type it expects, and the JVM itself doesn't understand covariant return types.
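You can verify this with reflection: Method.isBridge() identifies the compiler-generated variant. A small sketch, assuming the classes above are on the classpath:
import java.lang.reflect.Method;

public class BridgeCheck {
    public static void main(String[] args) {
        for (Method m : FooInterface2.class.getMethods()) {
            // The BarInterface-returning variant should report bridge=true.
            System.out.println(m + "  bridge=" + m.isBridge());
        }
    }
}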
Is there some type of @annotation, or other mechanism, in Java to enforce method extension instead of overriding?
To be specific, let's say I have a class Foo:
class Foo {
public void bar(Thing thing) {
// ...
}
}
Is there a way I can enforce, at compile time, that any class X that extends Foo, and also overrides bar, makes a call to super.bar(thing) first?
No, you have to write it explicitly.
Side note for constructors: the superclass's no-argument constructor is implicitly called at the start of a subclass constructor, however many parameters the subclass constructor has, unless you explicitly invoke super(...) or this(...).
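A minimal sketch of that constructor behavior (the class names are illustrative):
class Base {
    Base() { System.out.println("Base()"); }
}

class Derived extends Base {
    Derived(int x) {
        // An implicit super() runs here first, because no explicit
        // super(...) or this(...) call is written.
        System.out.println("Derived(" + x + ")");
    }
}

// new Derived(42) prints "Base()" and then "Derived(42)".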
You could declare bar to be final, then call an abstract method from bar, which would force subclasses to implement an "extension".
abstract class Foo {
    public final void bar(Thing thing) {
        barImpl(thing);
        overrideMe(thing);
    }

    private void barImpl(Thing thing) {
        // Original implementation of "bar" here.
    }

    protected abstract void overrideMe(Thing thing);
}
EDIT
I've changed overrideMe from public to protected so users of Foo can't just call overrideMe instead of bar.
Generally, what you can do is create a final method that calls an extendable one:
abstract class Foo {
    public final void bar(Thing thing) {
        // super code comes here
        this.doBar(thing);
    }

    public abstract void doBar(Thing thing);
}
When you call
foo.bar(thing);
your super code runs first, then the code from the child class.
This way you can protect your full bar logic and allow only certain parts to be extended or reimplemented.
It also lets you post-process the result, or break the work into subtasks.
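A subclass then looks something like this (ChildFoo is a made-up name):
class ChildFoo extends Foo {
    @Override
    public void doBar(Thing thing) {
        // child-specific logic; Foo.bar's fixed code has already run
    }
}

// foo.bar(thing) runs Foo's code first, then ChildFoo.doBar(thing).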
While you cannot force code to call up to its superclass at compile time, it's not too hard to detect at run time when code does not call up to the superclass.
class Foo {
    private boolean baseCalled;

    public final void bar(Thing thing) {
        baseCalled = false;
        barImp(thing);
        if (!baseCalled) {
            throw new RuntimeException("super.barImp() not called");
        }
    }

    protected void barImp(Thing thing) {
        baseCalled = true;
        // ... base class implementation of bar
    }
}
Note that this extends to multiple levels of inheritance without further elaboration. The method works particularly well for methods that are called from within Foo; in those cases, you can often forgo the final qualifier and redirection to an implementation method, and just define the base class method to set the flag. Clearing the flag would be done at each point of invocation.
The above pattern is used extensively in the Android framework. It doesn't guarantee that super.barImp was called as the first thing in subclass overrides; just that it was called.
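To illustrate, two hypothetical subclasses, one compliant and one not:
class GoodFoo extends Foo {
    @Override
    protected void barImp(Thing thing) {
        super.barImp(thing); // sets the flag, so bar() completes normally
        // subclass additions here
    }
}

class BadFoo extends Foo {
    @Override
    protected void barImp(Thing thing) {
        // forgot super.barImp(thing): bar() throws at run time
    }
}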
You can try the @AroundInvoke annotation if you are using EJBs.
Using reflection, you can find the same method in the parent class, and you can invoke it with the same parameters the original method was called with.
Note that in this case you must avoid super.bar(thing) calls, otherwise the parent logic would run twice.
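For reference, a hedged sketch of the interceptor shape only; the reflective call to the parent implementation is deliberately left out, since Method.invoke dispatches virtually and would hit the override again:
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class SuperFirstInterceptor {
    @AroundInvoke
    public Object aroundInvoke(InvocationContext ctx) throws Exception {
        // ctx.getMethod() and ctx.getParameters() describe the intercepted
        // call; any "parent first" logic would go here.
        return ctx.proceed(); // then continue to the intercepted method
    }
}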
What is the best way to handle different subtypes of an abstract supertype as an argument, for instance when handling events?
The situation is as follows:
The supertype:
public interface MySuperInterface {
}
A subtype
public class SubclassA implements MySuperInterface {
}
Another subtype
public class SubclassB implements MySuperInterface {
}
Some class that should be able to handle any subtype of MySuperInterface
public class MySuperHandler {
public void handle(MySuperInterface mysuper) {
//do it
}
}
My different approaches are:
a switch/case statement in the handler method (which I don't like);
a method receive(MySuperHandler) in the interface and a dispatch to this method inside the handle method: mysuper.receive(this) (which means the interface knows the handler class);
adding a handle method for every subtype in the MySuperHandler class (this does not ensure that every subtype can be handled).
For the mentioned reasons I'm not content with these solutions.
Are there any other options to handle this situation?
Thanks.
One approach is to use the Visitor Pattern. It would look something like this:
public interface MySuperInterface {
    <T> T acceptVisitor(MySuperInterfaceVisitor<T> visitor);
}
public interface MySuperInterfaceVisitor<T> {
    T visitA(SubclassA a);
    T visitB(SubclassB b);
}
public class SubclassA implements MySuperInterface {
    public <T> T acceptVisitor(MySuperInterfaceVisitor<T> visitor) {
        return visitor.visitA(this);
    }
}
public class SubclassB implements MySuperInterface {
    public <T> T acceptVisitor(MySuperInterfaceVisitor<T> visitor) {
        return visitor.visitB(this);
    }
}
public class MySuperHandler implements MySuperInterfaceVisitor<Foo> {
    public Foo visitA(SubclassA a) {
        // construct a Foo from the SubclassA instance
    }
    public Foo visitB(SubclassB b) {
        // construct a Foo from the SubclassB instance
    }
}
This is a bit like your #2, except the interface (and the subclasses) don't need to know about the handler. They just need to know about the visitor interface. This is good if you don't want MySuperInterface and its implementations to know about your specific handlers.
BTW, instead of calling:
myHandler.handle(myImpl);
you'd call:
myImpl.acceptVisitor(myHandler);
This approach is nice if you want to ensure that every handler can handle every implementation of your interface, yet still keep the implementations from knowing about all of the "handlers" that exist.
If you add another implementation of your interface (MySuperInterface) the compiler will force you to add an acceptVisitor method. This method can either use one of the existing visit* methods, or you'll have to go and add a new one to the visitor interface. If you do the latter, you must then update all of the visitor (aka "handler") implementations. This ensures that every subtype can be handled, going forward.
This approach is more complex than the one in assylias's answer, and only really makes sense if you either want to break the coupling between the implementations of MySuperInterface and your handler code, or you have a strong desire to organize your handler code such that all of the code for a particular type of handling is "together".
One common use of the visitor pattern is rendering objects in different ways. Suppose you want to be able to convert an object into PDF or HTML. You could have a toHTML and a toPDF method in your interface. The downside of this approach is that your classes now depend on your HTML and PDF generation libraries. Also, if someone later wants to add a new type of output, they need to modify these core classes, which may be undesirable. With the visitor pattern, only the visitor classes need to know about the PDF or HTML libraries, and new visitors can be added without modifying the core classes. (But again, adding new core classes means you either need to have them reuse an existing visit* method, or you'll have to modify all of the visitor implementations.)
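As a hedged sketch of that rendering example (HtmlRenderer and its output strings are made up):
public class HtmlRenderer implements MySuperInterfaceVisitor<String> {
    public String visitA(SubclassA a) { return "<div>A</div>"; }
    public String visitB(SubclassB b) { return "<div>B</div>"; }
}

// Usage: String html = myImpl.acceptVisitor(new HtmlRenderer());
// Adding this visitor required no changes to SubclassA or SubclassB.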
Your description is a bit vague, but if you have several subclasses, some of which share a common "handle" behavior, this could work; if you only have two subclasses and don't plan to have more in the future, the abstract step is probably unnecessary:
public interface MySuperInterface {
    void handle();
}
public abstract class AbstractMySuperInterface implements MySuperInterface {
    public void handle() {
        // implement default behavior
    }
}
public class SubclassA extends AbstractMySuperInterface {
    // nothing here, just use the default behavior
}
public class SubclassB extends AbstractMySuperInterface {
    @Override
    public void handle() {
        // implement another behavior
    }
}
public class MySuperHandler {
    public void handle(MySuperInterface mysuper) {
        mysuper.handle();
    }
}
This isn't exactly the definition of implicit type conversion, but I'm curious how many standards I'm breaking with this one...
I'm creating an abstract class in Java that basically casts its variables depending on a string passed into the constructor.
For example:
public abstract class MyClass {
    Object that;

    public MyClass(String input) {
        if ("test1".equals(input)) {
            that = new Test1();
        } else {
            that = new Test2();
        }
    }

    public void doSomething() {
        if (that instanceof Test1) {
            // specific Test1 method or variable
        } else if (that instanceof Test2) {
            // specific Test2 method or variable
        } else {
            // something horrible happened
        }
    }
}
You see what I'm getting at? The problem I run into is that my compiler wants me to explicitly cast that into Test1 or Test2 in the doSomething method, which I understand: the compiler won't assume a particular object type even though the if statements pretty much guarantee it.
I guess what I'm getting at is, is this a valid solution?
I have other classes that all basically do the same thing but use two different libraries depending on a simple difference and figure this class can help me easily track and make changes to all of those other objects.
You are right. This is a horrible way to achieve polymorphism in design. Have you considered using a factory? A strategy object? It sounds like what you are trying to achieve can be implemented in a more loosely-coupled way using a combination of these patterns (and perhaps others).
For the polymorphism of doSomething, for example:
interface Thing {
    public void doThing();
}

class Test1 implements Thing {
    public void doThing() {
        // specific Test1 behavior
    }
}

class Test2 implements Thing {
    public void doThing() {
        // specific Test2 behavior
    }
}

class MyClass {
    Thing _thing;

    public void doSomething() {
        _thing.doThing(); // polymorphic dispatch picks the right implementation,
                          // effectively eliminating usage of `instanceof`
    }
}
Of course, you need to unify the behaviors of Test1 and Test2 (and other concrete Thing classes, present and planned) under a set of common interface(s).
PS: This design is commonly known as the Strategy Pattern.
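One way to supply _thing is constructor injection; the sketch below assumes a MyClass(Thing) constructor, which the snippet above doesn't show:
class Demo {
    public static void main(String[] args) {
        MyClass a = new MyClass(new Test1()); // assumes a MyClass(Thing) constructor
        a.doSomething();                      // dispatches to Test1.doThing()

        MyClass b = new MyClass(new Test2());
        b.doSomething();                      // dispatches to Test2.doThing()
    }
}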
I would create a separate class file for each piece. So you would have something like this:
1. Your abstract MyClass: within MyClass, define an abstract method called doSomething; this forces the specific implementation of the method onto its subclasses.
2. Test1 would be an implementation of MyClass containing the implementation of the doSomething method.
3. A utility class that does the instanceof check; that check should not be in the constructor, it belongs in another class.
So in the end you would have three class files: an abstract class, an implementation of the abstract class, and a class that does the instanceof check. I know this sounds like a lot, but it's the proper way to design for what I think you are attempting to do. You should pick up a design patterns book; I think it would help you a lot with questions like these.
The Open-Closed principle would be better satisfied by moving the object creation outside of this class.
Consider changing the constructor to accept an object that implements an interface.
public class MyClass {
    private final ITest m_tester;

    public MyClass(ITest tester) { m_tester = tester; }

    public void doSomething() { m_tester.doTest(); }
}
This makes it possible to change the behavior of the class (open to extension) without modifying its code (closed to modification).
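For example, later behavior changes only require a new implementation of the interface; PdfTester is a made-up name, and ITest is assumed to declare doTest():
class PdfTester implements ITest {
    public void doTest() {
        // new behavior added without touching MyClass
    }
}

// MyClass itself stays unchanged:
// new MyClass(new PdfTester()).doSomething();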
The better way to do this is to create an interface which will specify a set of methods that can be guaranteed to be called on the object.
Here's an example:
public interface TestInterface
{
void doTest();
}
Now you can write your classes to implement this interface. This means that you need to provide a full definition for all methods in the interface, in this case doTest().
public class Test implements TestInterface
{
public void doTest()
{
// do Test-specific stuff
}
}
public class Test1 implements TestInterface
{
public void doTest()
{
// do Test1-specific stuff
}
}
Looks really boring and pointless, right? Lots of extra work, I hear you say.
The true value comes in the calling code...
public abstract class MyObject
{
    TestInterface that;

    // [...]

    public void doSomething()
    {
        that.doTest();
    }
}
No if statements, no instanceof, no ugly blocks, nothing. That's all moved to the class definitions, in the common interface method(s) (again, here that is doTest()).
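A concrete subclass then only has to supply the field (hypothetical wiring, assuming the field is accessible to the subclass):
public class ConcreteObject extends MyObject
{
    public ConcreteObject()
    {
        that = new Test1();
    }
}

// new ConcreteObject().doSomething(); // runs Test1.doTest()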