Optional method in Java Interface

I have an interface with several method definitions, and I would like some of them not to be required.
Is this possible? If so, how can I implement this?
I have tried adding an @Optional annotation, but this doesn't seem to work.
Do I have to define the Optional annotation somewhere?

There is no @Optional annotation in Java. One thing you can do is to create an interface, and then create an abstract class that provides stub implementations. Your classes can then extend this base class and override only the methods they are interested in.
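For example, a minimal sketch of that approach (the interface and method names here are made up for illustration):

interface Worker {
    void start();
    void pause();   // "optional" for most implementors
    void stop();
}

// Abstract base class providing stub implementations of everything
abstract class WorkerAdapter implements Worker {
    @Override public void start() { }
    @Override public void pause() { }
    @Override public void stop() { }
}

// Concrete classes override only the methods they care about
class SimpleWorker extends WorkerAdapter {
    @Override
    public void start() {
        System.out.println("started");
    }
}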

You can have an abstract class that implements this interface with empty method implementations, and then extend from that abstract class.
Having said that, I would question why you need to do this. Maybe you should split your interface into multiple smaller ones and, for each class, implement only the ones you need.

Although I agree with the other answers, one should note that such optional methods exist in the JDK. For example, List.add() is optional. Implementations must throw an UnsupportedOperationException if they don't want to implement this method.
If you want to be able to know whether the optional method is implemented or not, you could add another method (not optional):

/**
 * Returns true if optionalOperation() is supported and implemented, false otherwise.
 */
boolean isOptionalOperationSupported();

/**
 * Implements the foobar operation. Optional. If not supported, this method must throw
 * UnsupportedOperationException, and isOptionalOperationSupported() must return false.
 */
void optionalOperation();
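A caller would then use the pair of methods roughly like this (a minimal sketch; the service variable stands for any implementation of the interface):

if (service.isOptionalOperationSupported()) {
    service.optionalOperation();
} else {
    // fall back to some other behaviour, or skip this step entirely
}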

"Conceptually, what good is an interface if you cannot rely on the contract it provides" said Erik.
That's true but there is an other consideration: One can expect objects of different classes conforming to some properties or methods included in an interface to securely process with them by testing which properties or methods are implemented.
This approach can be frequently meet under Objective-C or Swift Cocoa for which the “protocol” — equiv of “interface” — allows to defined as “optional” a property or a method.
Instance of objects can be tested to check if they conform to a dedicated protocol.
// Objective-C
[instance conformsToProtocol:@protocol(ProtocolName)] => BOOL

// Swift (uses optional chaining and the "if-let" mechanism to check conformance)
if let ref = instance as? ProtocolName { … } // succeeds only if instance conforms to ProtocolName

The implementation of a method (including a getter or setter) can also be checked.

// Objective-C
[instance respondsToSelector:@selector(methodName)] => BOOL

// Swift (uses optional chaining to check the implementation)
if let result = instance?.method…
This principle makes it possible to call methods on otherwise unknown objects, depending on which ones they actually implement, as long as they conform to the protocol.
// Objective-C example
if ([self.delegate respondsToSelector:@selector(methodA:)]) {
    res = [self.delegate methodA:param];
} else if ([self.delegate respondsToSelector:@selector(methodB)]) {
    res = [self.delegate methodB];
} …

// Swift example
if let val = self.delegate?.methodA?(param) {
    res = val
} else if let val = self.delegate?.methodB?() {
    res = val
} …
Java does not allow an item in an interface to be marked "optional", but it allows something very similar thanks to interface extension:
interface ProtocolBase {}

interface PBMethodA extends ProtocolBase {
    type methodA(type param);
}

interface PBMethodB extends ProtocolBase {
    type methodB();
}

// Classes can then implement one or the other.
class Class1 implements PBMethodA {
    public type methodA(type param) {
        …
    }
}

class Class2 implements PBMethodB {
    public type methodB() {
        …
    }
}
Then instances can be tested with instanceof: first against ProtocolBase, to see whether the object conforms to the "general protocol", and then against one of the "sub-protocols", to selectively execute the right method.
Since delegate is an instance of Class1 or Class2, it is an instance of ProtocolBase and of either PBMethodA or PBMethodB. So:
if (delegate instanceof PBMethodA) {
    res = ((PBMethodA) delegate).methodA(param);
} else if (delegate instanceof PBMethodB) {
    res = ((PBMethodB) delegate).methodB();
}
Hope this helps!

Related

How to access base class method?

Simplified demo code to show my problem.
class Base {
    public String toString() { return "Base"; }
}

class A extends Base {
    public String toString() { return "A"; }
}

class Test {
    public void test1() {
        A a = new A();
        Base b = (Base) a;                  // cast is fine, but b is the same instance as a
        System.out.println(b.toString());   // want "Base", but get "A"
    }

    private String testB(Base b) {
        return b.toString();                // this should return "Base"
    }

    public void test2() {
        System.out.println(testB(new A())); // want "Base", but get "A"
    }
}
I tried the cast approach (test1) and the helper-method approach (test2).
So far, the only way I have found is to add a copy constructor to Base so that I can create a real Base object.
Is there a way that does not need a duplicate object?
Some background info:
I get an instance of A, and I know its base class has a nice method which I'd like to use instead of the overridden version. I'd prefer not to modify class A or Base (although a copy constructor would be a good enhancement anyway ;) )
From class A directly, you can use super.toString(); to execute toString() on Base.
However, from outside class A, you can't call the superclass implementation in this way; doing so would break encapsulation. If you want to expose the superclass implementation, you still can, but you have to provide a separate method on A that exposes it directly.
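For example, a minimal sketch of such an accessor (the method name baseToString is made up here):

class A extends Base {
    @Override
    public String toString() { return "A"; }

    // Explicitly exposes the superclass implementation to callers
    public String baseToString() {
        return super.toString();
    }
}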
Even with a trivial reflection-based approach, you still won't be able to access it, because the Javadoc for Method.invoke() states:
"If the underlying method is an instance method, it is invoked using dynamic method lookup."

System.out.println(Base.class.getMethod("toString").invoke(new A())); // prints "A"
...and using MethodHandles.lookup().findSpecial won't work from outside the child class either, as it has to be invoked where you have private access (otherwise you'll just get an IllegalAccessException).
I concede that there may well be some weird and wonderful way of doing it directly in Java that I haven't thought of, short of bytecode manipulation, but suffice to say that even if you can do it that way, you certainly shouldn't for anything but a quirky technical demonstration.
You need to create a real Base instance (copy constructor); if you are using the A instance you will always get "A", no matter whether you cast it or not.
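A minimal sketch of that copy-constructor approach (assuming Base has fields worth copying):

class Base {
    Base() { }
    Base(Base other) {
        // copy any relevant fields from 'other' here
    }
    public String toString() { return "Base"; }
}

// ...
Base b = new Base(a);   // a genuine Base object, not an A
System.out.println(b);  // prints "Base"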

Purpose of Functional Interfaces in Java8

I've come across many questions regarding Java 8's built-in functional interfaces, including this, this, this and this. But they all ask "why only one method?" or "why do I get a compilation error if I do X with my functional interface" and the like. My question is: what is the existential purpose of these new functional interfaces, when I can use lambdas anyway with my own interfaces?
Consider the following example code from oracle documentation:
// Approach 6: print using a predicate
public static void printPersonsWithPredicate(List<Person> roster,
                                             Predicate<Person> tester) {
    for (Person p : roster) {
        if (tester.test(p)) {
            System.out.println(p);
        }
    }
}
OK, great, but this is achievable with their own example just above (an interface with a single method is nothing new):
// Approach 5:
public static void printPersons(List<Person> roster,
                                CheckPerson tester) {
    for (Person p : roster) {
        if (tester.test(p)) {
            System.out.println(p);
        }
    }
}

interface CheckPerson {
    boolean test(Person p);
}
I can pass a lambda to both methods.
The 1st approach saves me one custom interface. Is that it?
Or are these standard functional interfaces (Consumer, Supplier, Predicate, Function) meant to serve as a template for code organization, readability, structure, [other]?
Obviously you can skip using these new interfaces and roll your own with better names. There are some considerations, though:
You will not be able to use your custom interface with other JDK APIs unless it extends one of the built-ins.
If you always roll your own, at some point you will come across a case where you can't think of a good name. For example, I'd argue that CheckPerson isn't really a good name for its purpose, although that's subjective.
Most built-in interfaces also define some additional API. For example, Predicate defines or(Predicate), and(Predicate) and negate();
Function defines andThen(Function) and compose(Function), etc.
It's not particularly exciting, until it is: using the non-abstract methods on these types allows for easier composition, strategy selection and more, such as (using the style suggested in this article):
Before:
class PersonPredicate {
    public Predicate<Person> isAdultMale() {
        return p ->
            p.getAge() > ADULT
            && p.getSex() == SexEnum.MALE;
    }
}
Might just become this, which is more reusable in the end:
class PersonPredicate {
    public Predicate<Person> isAdultMale() {
        return isAdult().and(isMale());
    }

    public Predicate<Person> isAdultFemale() {
        return isAdult().and(isFemale());
    }

    public Predicate<Person> isAdult() {
        return p -> p.getAge() > ADULT;
    }

    public Predicate<Person> isMale() {
        return isSex(SexEnum.MALE);
    }

    public Predicate<Person> isFemale() {
        return isSex(SexEnum.FEMALE);
    }

    public Predicate<Person> isSex(SexEnum sex) {
        return p -> p.getSex() == sex;
    }
}
Although you ask "Is that it?", it's very nice that we don't have to write a new interface every time we want a type for a lambda.
Ask yourself, if you're reading an API, which is easier for a programmer to use:
public void processUsers(UserProcessor userProcessor);
... or ...
public void processUsers(Consumer<User> userProcessor);
With the former, I have to go and take a look at UserProcessor to find out what one is, and how I could create one; I don't even know it could be implemented as a lambda until I go and find out. With the latter, I know immediately that I can type u -> System.out.println(u) and I'll be processing users by writing them to stdout.
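For example, with the Consumer-based signature a caller can simply write (a minimal sketch):

processUsers(u -> System.out.println(u));
// or, equivalently, a method reference:
processUsers(System.out::println);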
Also the author of the library didn't need to bloat their library with Yet Another Type.
In addition, if I coerce a lambda to a Functional Type, I can use that type's composition methods, for example:
candidates.filter( personPredicates.IS_GRADUATE.negate());
That gives you the Predicate methods and(), or() and negate(), and the Function methods compose() and andThen() -- which your custom type would not have unless you implemented them.
The Java API provides many built-in functional interfaces, and most of the time we can simply use them. There are, however, two reasons to use a custom functional interface.
Use a custom functional interface to describe explicitly what it does.
Let's say you have a class User with a name parameter on its constructor. When you use a built-in functional interface to refer to the constructor, the code looks like this:
Function<String, User> userFactory = User::new;
If you want to describe it more clearly, you can introduce your own functional interface, e.g. UserFactory:
UserFactory userFactory = User::new;
Another reason to prefer a custom functional interface is that the built-in ones can be ambiguous. When you see a parameter of type Function<String, User>, does it create a new user, query a user from the database, or remove the user by name and return it? If you use a precise functional interface you know what it does: a UserFactory creates a user from a username string.
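A minimal sketch of such an interface (the name and method here are illustrative):

@FunctionalInterface
interface UserFactory {
    User create(String name);
}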
Use a custom functional interface to handle checked exceptions, which the built-in functional interfaces cannot declare.
Because the built-in functional interfaces cannot throw checked exceptions, a problem arises when the code in a lambda expression may throw one: you end up wrapping it in try/catch, which makes the lambda hard to read. Instead, you can define your own functional interface that declares the checked exception and adapt it to a built-in one when calling the Java API. The code looks like this:
// If the bars function throws a checked exception, you must catch it inside the lambda
Stream.of(foos).map(t -> {
    try {
        return bars.apply(t);
    } catch (Exception ex) {
        // handle the exception
        return null;
    }
});
You can also define your own functional interface that throws a checked exception and then adapt it to a built-in Function. Let's say it is a Mapping interface; the code is below:
interface Mapping<T, R> {
    R apply(T value) throws Exception;
}

private <T, R> Function<T, R> reportsErrorWhenMappingFailed(Mapping<T, R> mapping) {
    return it -> {
        try {
            return mapping.apply(it);
        } catch (Exception ex) {
            handleException(ex);
            return null;
        }
    };
}

Stream.of(foos).map(reportsErrorWhenMappingFailed(bars));

Generate lambda class for generic type variable using reflection

I am attempting to use Java interfaces as mixins in some high-level wrapper for type D.
interface WrapsD {
    D getWrapped();
}

interface FeatureA extends WrapsD {
    default ...
}

interface FeatureB extends WrapsD {
    default ...
}

abstract class DWrapperFactory<T extends WrapsD> {
    protected T doWrap(D d) {
        return () -> d; // <- does not work
    }
}

interface FeatureAB extends FeatureA, FeatureB {
}

class ProducingDWithFeatureAB extends DWrapperFactory<FeatureAB> {
    protected FeatureAB doWrap(D d) {
        return () -> d; // <- has to repeat this
    }
}
As seen in ProducingDWithFeatureAB, doWrap has to be implemented in each sub-class even though the body is identical. (One more example of why Java generics are really broken.)
Since I already need to create concrete classes like ProducingDWithFeatureAB for other reasons, and code exists in the JRE to synthesize lambda classes, it should be possible to write doWrap only once using reflection. I want to know how it can be done.
(doWrap used to be implemented using anonymous inner classes implementing the interface, which is even more boilerplate.)
This has nothing to do with generics; your generic example is just obfuscating the real issue.
Here's the core of the issue: lambda expressions need a target type that is a functional interface, and that target type must be statically known to the compiler. Your code doesn't provide that. For example, the following code would get the same error, for the same reason:
Object o = arg -> expr;
Here, Object is not a functional interface, and lambda expressions can only be used in a context whose type is a (compatible) functional interface.
The use of generics makes it more confusing (and I think you're also confusing yourself about how generics work), but ultimately this is going to be where this bottoms out.
The first thing you have to understand is that a method of the form

public Function<X,Y> fun() {
    return arg -> expr;
}

is desugared to the equivalent of:

public Function<X,Y> fun() {
    return DeclaringClass::lambda$fun$0;
}

private static Y lambda$fun$0(X arg) {
    return expr;
}

where the types X and Y are derived from the functional signature of your target interface. While the actual instance of the functional interface is generated at runtime, you need a materialized target method to be executed, which is generated by the compiler.
You can generate instances of different interfaces for a single target method reflectively, but it still requires that all these functional interfaces have the same functional signature, e.g. mapping from X to Y, which reduces the usefulness of a dynamic solution.
In your case, where all target interfaces indeed have the same functional signature, it is possible, but I have to emphasize that the whole software design looks questionable to me.
For implementing the dynamic generation, we have to desugar the lambda expression as described above and add the captured variable d as an additional argument to the target method. Since your specific function has no arguments, it makes the captured d the sole method argument:
protected T doWrap(D d) {
    Class<T> type = getActualT();
    MethodHandles.Lookup l = MethodHandles.lookup();
    try {
        MethodType fType = MethodType.methodType(D.class);
        MethodType tType = fType.appendParameterTypes(D.class);
        return type.cast(LambdaMetafactory.metafactory(l, "getWrapped",
            tType.changeReturnType(type), fType,
            l.findStatic(DWrapperFactory.class, "lambda$doWrap$0", tType), fType)
            .getTarget().invoke(d));
    }
    catch (RuntimeException | Error t) { throw t; }
    catch (Throwable t) { throw new IllegalStateException(t); }
}

private static D lambda$doWrap$0(D d) {
    return d;
}
You have to implement the method getActualT() which ought to return the right class object, which is possible if the actual subclass of DWrapperFactory is a proper reifiable type, as you stated. Then, the method doWrap will dynamically generate a proper instance of T, invoking the desugared lambda expression’s method with the captured value of d—all assuming that the type T is indeed a functional interface, which cannot be proven at compile time.
Note that even at runtime, the LambdaMetafactory won’t check whether the invariants hold, you might get errors thrown at a later time if T isn’t a proper functional interface (and subclass of WrapsD).
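One possible sketch of getActualT(), assuming the concrete subclass (e.g. ProducingDWithFeatureAB) extends DWrapperFactory directly with a concrete type argument, is to read the type argument off the generic superclass:

@SuppressWarnings("unchecked")
private Class<T> getActualT() {
    // Works only when the direct superclass is DWrapperFactory<SomeConcreteType>
    java.lang.reflect.ParameterizedType superType =
        (java.lang.reflect.ParameterizedType) getClass().getGenericSuperclass();
    return (Class<T>) superType.getActualTypeArguments()[0];
}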
Now compare to just repeating the method
protected SubtypeOfWrapsD doWrap(D d) {
return () -> d;
}
in each reifiable type that has to exist anyway…

Good practices regarding downcasting in Java

I have a set of operations. Every operation is a sequence of 2 steps. So I have a base class which executes these two steps, and all the operations extend this base class and provide the actual implementations for the two steps. E.g.:
class Base {
    Step1 step1;
    Step2 step2;

    B execute() {
        A a = step1.perform();
        B b = step2.perform(a);
        return b;
    }
    // Set methods...
}
Here Step1 and Step2 are interfaces and one can change the implementations for them to do different things.
I have the following questions:
Every implementation of step2 takes an instance of A as input, which may actually be a derived type of A. So I need to do a downcast. Is it OK to do a downcast in this case, or is there a better way to achieve this?
Some implementations of step2 may not return any value. Is it ok if we have an empty class just for the type hierarchy and other classes extend this class?
Question 1
Yes, that is OK. Every class which extends the class A or implements the interface A (whatever A is) is "an instance of A". So it is perfectly OK to pass it to a method which needs an object of type A. Nothing to worry about. This is how you should use interfaces and inheritance: there are different kinds of "specializations" of the same super-class.
Question 2
This is a question of your API design. If you want this method to be able to return null, you can do that. But you should document it very well!
A fairly new possibility in Java 8 is the so-called Optional. You can use it if a method could return null and you want to force callers to keep that in mind. That would be the cleanest (and recommended) way. You can find an example and a description at http://java.dzone.com/articles/optional-java-8-cheat-sheet. Basically you would declare that the perform method of Step2 returns an Optional instead of the bare type:
interface Step2 {
    public Optional<B> perform(A a);
}

// The Optional wraps the actual result, which may be absent (since Java 8)
Optional<B> b = step2.perform(a);
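A minimal usage sketch of how a caller might consume that result:

step2.perform(a).ifPresent(result -> {
    // runs only when step2 actually produced a value
    System.out.println("Got: " + result);
});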
It sounds like you should use generics:
interface Step1<T extends A> {
    T perform();
}

interface Step2<T extends A, U extends B> {
    U perform(T a);
}

class Base<T extends A, U extends B> {
    Step1<T> step1;
    Step2<T, U> step2;

    U execute() {
        T a = step1.perform();
        U b = step2.perform(a);
        return b;
    }
    // Set methods...
}
Regarding returning "nothing", the best way is to return null.

How to check if object implements particular interface, but not the descendants of this interface, in Java

I have a tree structure where some nodes must contain only objects implementing a particular interface. But there are interfaces extending that interface, and objects implementing those should not be contained in these nodes.
So I need to check whether an object implements strictly that particular interface.
public interface IProcessCell {...}
public interface IMethodCell extends IProcessCell {...}
IProcessCell processInstance = new IProcessCell() {...}
IMethodCell methodInstance = new IMethodCell() {...}
/** Method implementing desired check */
public boolean check(IProcessCell instance) {...}
The check method must return true for processInstance, but false for methodInstance.
You can use http://docs.oracle.com/javase/6/docs/api/java/lang/Class.html#getInterfaces()
but to me the thing you are trying to do is like patching up a badly written app. To me, the better way is creating a new interface (which only the desired objects will implement) and making the "tree structure" nodes require that particular interface.
You can get the list of implemented interfaces using getInterfaces.
Assuming you have already cast your instance to the desired interface, you just have to test that yourInstance.getClass().getInterfaces().length == 1
Class implements the getInterfaces() method. It returns a Class[]. Using this you could iterate and do a comparison until found or not found.
http://docs.oracle.com/javase/6/docs/api/java/lang/Class.html#getInterfaces()
You can do a getClass().getInterfaces() on the node, then iterate through the returned classes and check how many are assignable from the particular interface you care about. So if you have:
interface A {
}
interface B extends A {
}
class NodeA implements A {
}
class NodeB implements B {
}
If you're looking for a node instance that just implements A you should be able to do:
new NodeA().getClass().getInterfaces();
new NodeB().getClass().getInterfaces();
and in each case check that 1) One of the interfaces is A, and 2) A.class.isAssignableFrom(interface) returns false for the other returned interfaces.
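Putting the above together, a minimal sketch of the check method from the question (it only inspects the interfaces declared directly on the object's class, which matches the anonymous-class instances shown in the question):

public boolean check(IProcessCell instance) {
    boolean implementsIProcessCell = false;
    for (Class<?> iface : instance.getClass().getInterfaces()) {
        if (iface == IProcessCell.class) {
            implementsIProcessCell = true;            // the exact interface we want
        } else if (IProcessCell.class.isAssignableFrom(iface)) {
            return false;                             // a sub-interface such as IMethodCell
        }
    }
    return implementsIProcessCell;
}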
