Delegate vs Callback in Java

I have some misunderstanding about terms of delegates and callbacks in Java.
class MyDriver {
    public static void main(String[] argv){
        MyObject myObj = new MyObject();
        // definition of HelpCallback omitted for brevity
        myObj.getHelp(new HelpCallback() {
            @Override
            public void call(int result) {
                System.out.println("Help Callback: " + result);
            }
        });
    }
}

class MyObject {
    public void getHelp(HelpCallback callback){
        // do something
        callback.call(OK); // OK would be some int constant defined elsewhere
    }
}
Is this a callback or a delegate (are delegates and callbacks the same thing, or just similar)?
And how would I then implement the other one?

This is a callback. According to Wikipedia:
In computer programming, a callback is a reference to a piece of executable code that is passed as an argument to other code.
So let's look at the executable code:
public void getHelp(HelpCallback callback){
    //do something
    callback.call(OK);
}
Here, the callback argument is a reference to an object of type HelpCallback. Since that reference is passed in as an argument, it is a callback.
An example of delegation
Delegation is done internally by the object - independent of how the method is invoked. If, for example, the callback variable wasn't an argument, but rather an instance variable:
class MyDriver {
    public static void main(String[] argv){
        // definition of HelpStrategy omitted for brevity
        MyObject myObj = new MyObject(new HelpStrategy() {
            @Override
            public void getHelp() {
                System.out.println("Getting help!");
            }
        });
        myObj.getHelp();
    }
}

class MyObject {
    private final HelpStrategy helpStrategy;
    public MyObject(HelpStrategy helpStrategy) {
        this.helpStrategy = helpStrategy;
    }
    public void getHelp(){
        helpStrategy.getHelp();
    }
}
... then it would be delegation.
Here, MyObject uses the strategy pattern. There are two things to note:
The invocation of getHelp() doesn't involve passing a reference to executable code. i.e. this is not a callback.
The fact that MyObject.getHelp() invokes helpStrategy.getHelp() is not evident from the public interface of the MyObject object or from the getHelp() invocation. This kind of information hiding is somewhat characteristic of delegation.
Also of note is the lack of a // do something section in the getHelp() method. When using a callback, the callback does not do anything relevant to the object's behavior: it just notifies the caller in some way, which is why a // do something section was necessary. When using delegation, however, the actual behavior of the method depends on the delegate. So we could really end up needing both, since they serve distinct purposes:
public void getHelp(HelpCallback callback){
    helpStrategy.getHelp(); // perform logic / behavior; "do something" as some might say
    if(callback != null) {
        callback.call(OK); // invoke the callback, to notify the caller of something
    }
}

I'd argue that "callback" is a name for a generic pattern where you provide the module you're calling with a way for said module to call your code. A C# delegate, or an ObjC delegate object (these two being entirely different beasts), or a Java class-implementing-the-callback-interface are different, platform-specific ways of implementing the callback pattern. (They can also themselves be considered patterns.) Other languages have more or less subtly different ways of doing so.
The above concept of "delegate" is also similar to the Strategy pattern, where the delegate can be thought of as one. Similarly, a Visitor is also a type of callback. (A visitor is also a strategy for processing each visited item.)
All this is using definitions that are intuitive to me, and might not be to anyone else, because neither "callback" nor "delegate" is a formal term, and it makes little sense to discuss them without referring to a previous definition that's valid in your context. Consequently it makes little sense to ask what the definition is since, to the best of my knowledge, there isn't an authoritative one. Witness the fact that other answers to this question are likely to say something entirely different.
My recommendation would be to focus on the merits of your design – whether it achieves what you need, doesn't introduce tight coupling, etc. – rather than on minutiae of semantics. When two design patterns appear similar, they probably can be used to achieve similar goals equally well.

What you want to achieve is bidirectional communication between the original caller and a service, while avoiding having the service depend on the client. The pattern you use for that goal often depends on the restrictions of your language. You use function pointers, closures or, if you have none of these, callback objects (which might also be seen as closures).
And then there are often lots of different names for the same or a very similar pattern.
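To make that concrete, here is a minimal sketch (hypothetical names) of the callback-object variant in Java; with Java 8 a lambda plays the role of the closure:
class Demo {
    // the callback contract: the service never learns the client's type
    interface Listener {
        void onDone(int status);
    }

    static class Service {
        void run(Listener listener) {
            // ... do the work, then notify whoever is listening
            listener.onDone(0);
        }
    }

    public static void main(String[] args) {
        // the lambda is the "closure" flavour of the callback object
        new Service().run(status -> System.out.println("done with status " + status));
    }
}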

Related

Visitor Pattern and Double Dispatch

I know this is well trodden territory but I have a specific question... I promise.
Having spent very little time in the statically typed, object oriented world, I recently came across this design pattern while reading Crafting Interpreters. While I understand this pattern allows for extensible behavior (methods) on a set of well defined existing types (classes), I don't quite get the characterization of it as a solution to the double dispatch problem, at least not without some additional assumptions. I see it more as making a tradeoff to the expression problem, where you trade closed types for open methods.
In most of the examples I've seen, you end up with something like this (shamelessly stolen from the awesome Clojure Design Patterns)
public interface Visitor {
    void visit(Activity a);
    void visit(Message m);
}

public class PDFVisitor implements Visitor {
    @Override
    public void visit(Activity a) {
        PDFExporter.export(a);
    }
    @Override
    public void visit(Message m) {
        PDFExporter.export(m);
    }
}

public abstract class Item {
    abstract void accept(Visitor v);
}

class Message extends Item {
    @Override
    void accept(Visitor v) {
        v.visit(this);
    }
}

class Activity extends Item {
    @Override
    void accept(Visitor v) {
        v.visit(this);
    }
}
Item i = new Message();
Visitor v = new PDFVisitor();
i.accept(v);
Here we have a set of types (Message and Activity) which are presumably closed or infrequently changing, and a set of methods which we want to be open for extension (the Visitors). Now where I get confused is that in most examples, they will show how you can implement other visitors without touching existing classes, e.g. something like this:
public class XMLVisitor implements Visitor {
    @Override
    public void visit(Activity a) {
        XMLExporter.export(a);
    }
    @Override
    public void visit(Message m) {
        XMLExporter.export(m);
    }
}
and then make some hand-wavy allusion to this being "double dispatch", which it is not.
Here accept dynamically dispatches on the subtype of Item, but within accept the visit overload is selected statically, at compile time, via method overloading on the passed-in visitor. So we have single dispatch on Item, and the "second" static dispatch within accept is really about selecting a behavior (method) to call with that Item type. There is only one "type" being dispatched on, not two - the second is a behavior.
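To see the static half of that in code (a sketch against the classes above):
Item i = new Message();
Visitor v = new PDFVisitor();
// v.visit(i);   // would not compile: there is no visit(Item) overload, and the
//               // overload is chosen at compile time from the static type of i
i.accept(v);     // dynamic dispatch selects Message.accept, whose body calls
                 // v.visit(this) with this statically typed as Message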
When I think of double dispatch, I think of a function that dispatches on the type of two arguments. One behavior, two types.
export(Activity,XML)
export(Activity,PDF)
export(Message,XML)
export(Message,PDF)
To me this is subtly different to the visitor pattern which allows any set of behaviors to be extended to existing classes, but those behaviors don't necessarily all represent the same behavior like in the four export examples above - they can be anything. If we add another Visitor it may represent exporting, but it could just as well not. From the API layer you're just calling accept methods and trusting that the passed in Visitor does what you want, whatever that may be.
Am I looking at this the wrong way?
The comment from @user207421 is spot on. If a language does not natively support double dispatch, no design pattern can alter the language to make it so. A pattern merely provides an alternative which may solve some of the problems that double dispatch would be applied to in another language.
People learning the Visitor Pattern who already have an understanding of double dispatch may be assisted by explanations such as, "Visitor solves a similar set of problems to those solved by double dispatch". Unfortunately, that explanation is often reduced to, "Visitor implements double dispatch" which is not true.
The fact you've recognized this means you have a solid understanding of both concepts already.

How do I avoid breaking the Liskov substitution principle with a class that implements multiple interfaces?

Given the following class:
class Example implements Interface1, Interface2 {
...
}
When I instantiate the class using Interface1:
Interface1 example = new Example();
...then I can call only the Interface1 methods, and not the Interface2 methods, unless I cast:
((Interface2) example).someInterface2Method();
Of course, to make this runtime safe, I should also wrap this with an instanceof check:
if (example instanceof Interface2) {
    ((Interface2) example).someInterface2Method();
}
I'm aware that I could have a wrapper interface that extends both interfaces, but then I could end up with multiple interfaces to cater for all the possible permutations of interfaces that can be implemented by the same class. The interfaces in question do not naturally extend one another, so inheritance also seems wrong.
Does the instanceof/cast approach break LSP as I am interrogating the runtime instance to determine its implementations?
Whichever implementation I use seems to have some side-effect either in bad design or usage.
I'm aware that I could have a wrapper interface that extends both interfaces, but then I could end up with multiple interfaces to cater for all the possible permutations of interfaces that can be implemented by the same class
I suspect that if you're finding that lots of your classes implement different combinations of interfaces then either: your concrete classes are doing too much; or (less likely) your interfaces are too small and too specialised, to the point of being useless individually.
If you have good reason for some code to require something that is both a Interface1 and a Interface2 then absolutely go ahead and make a combined version that extends both. If you struggle to think of an appropriate name for this (no, not FooAndBar) then that's an indicator that your design is wrong.
Absolutely do not rely on casting anything. It should only be used as a last resort and usually only for very specific problems (e.g. serialization).
My favourite and most-used design pattern is the decorator pattern. As such most of my classes will only ever implement one interface (except for more generic interfaces such as Comparable). I would say that if your classes are frequently/always implementing more than one interface then that's a code smell.
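For illustration, a minimal decorator sketch (hypothetical names): each class implements exactly one interface and wraps another implementation of it:
interface Greeter {
    String greet(String name);
}

class PlainGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

// Decorator: same single interface, composing another Greeter.
class ShoutingGreeter implements Greeter {
    private final Greeter delegate;
    ShoutingGreeter(Greeter delegate) { this.delegate = delegate; }
    public String greet(String name) {
        return delegate.greet(name).toUpperCase() + "!";
    }
}
Usage is then Greeter g = new ShoutingGreeter(new PlainGreeter()); and callers only ever see the Greeter interface.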
If you're instantiating the object and using it within the same scope then you should just be writing
Example example = new Example();
Just so it's clear (I'm not sure if this is what you were suggesting), under no circumstances should you ever be writing anything like this:
Interface1 example = new Example();
if (example instanceof Interface2) {
    ((Interface2) example).someInterface2Method();
}
Your class can implement multiple interfaces fine, and it is not breaking any OOP principles. On the contrary, it is following the interface segregation principle.
It is confusing why you would have a situation where something of type Interface1 is expected to provide someInterface2Method(). That is where your design is wrong.
Think about it in a slightly different way: Imagine you have another method, void method1(Interface1 interface1). It can't expect interface1 to also be an instance of Interface2. If it was the case, the type of the argument should have been different. The example you have shown is precisely this, having a variable of type Interface1 but expecting it to also be of type Interface2.
If you want to be able to call both methods, you should have the type of your variable example set to Example. That way you avoid the instanceof and type casting altogether.
If your two interfaces Interface1 and Interface2 are not that loosely coupled, and you will often need to call methods from both, maybe separating the interfaces wasn't such a good idea, or maybe you want to have another interface which extends both.
In general (although not always), instanceof checks and type casts often indicate some OO design flaw. Sometimes the design would fit for the rest of the program, but you would have a small case where it is simpler to type cast rather than refactor everything. But if possible you should always strive to avoid it at first, as part of your design.
You have two different options (I bet there are a lot more).
The first is to create your own interface which extends the other two:
interface Interface3 extends Interface1, Interface2 {}
And then use that throughout your code:
public void doSomething(Interface3 interface3){
    ...
}
The other way (and in my opinion the better one) is to use generics per method:
public <T extends Interface1 & Interface2> void doSomething(T t){
    ...
}
The latter option is in fact less restricted than the former, because the generic type T is inferred at the call site and thus leads to less coupling (a class doesn't have to implement a specific grouping interface, like in the first example).
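A usage sketch (assuming the hypothetical Example class from the question implements both interfaces):
class Example implements Interface1, Interface2 {
    // ...
}

Example e = new Example();
doSomething(e); // T is inferred as Example; no grouping interface and no cast needed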
The core issue
Slightly tweaking your example so I can address the core issue:
public void DoTheThing(Interface1 example) {
    if (example instanceof Interface2) {
        ((Interface2) example).someInterface2Method();
    }
}
So you defined the method DoTheThing(Interface1 example). This is basically saying "to do the thing, I need an Interface1 object".
But then, in your method body, it appears that you actually need an Interface2 object. Then why didn't you ask for one in your method parameters? Quite obviously, you should've been asking for an Interface2.
What you're doing here is assuming that whatever Interface1 object you get will also be an Interface2 object. This is not something you can rely on. You might have some classes which implement both interfaces, but you might as well have some classes which only implement one and not the other.
There is no inherent requirement whereby Interface1 and Interface2 need to both be implemented on the same object. You can't know (nor rely on the assumption) that this is the case.
Unless you define the inherent requirement and apply it.
interface InterfaceBoth extends Interface1, Interface2 {}
public void DoTheThing(InterfaceBoth example) {
    example.someInterface2Method();
}
In this case, you've required any InterfaceBoth object to implement both Interface1 and Interface2. So whenever you ask for an InterfaceBoth object, you can be sure to get an object which implements both Interface1 and Interface2, and thus you can use methods from either interface without even needing to cast or check the type.
You (and the compiler) know that this method will always be available, and there's no chance of this not working.
Note: You could've used Example instead of creating the InterfaceBoth interface, but then you would only be able to use objects of type Example and not any other class which would implement both interfaces. I assume you're interested in handling any class which implements both interfaces, not just Example.
Deconstructing the issue further.
Look at this code:
ICarrot myObject = new Superman();
If you assume this code compiles, what can you tell me about the Superman class? That it clearly implements the ICarrot interface. That is all you can tell me. You have no idea whether Superman implements the IShovel interface or not.
So if I try to do this:
myObject.SomeMethodThatIsFromSupermanButNotFromICarrot();
or this:
myObject.SomeMethodThatIsFromIShovelButNotFromICarrot();
Should you be surprised if I told you this code compiles? You should, because this code doesn't compile.
You may say "but I know that it's a Superman object which has this method!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman variable.
You may say "but I know that it's a Superman object which implements the IShovel interface!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman or IShovel variable.
Knowing this, let's look back at your code.
Interface1 example = new Example();
All you've said is that you have an Interface1 variable.
if (example instanceof Interface2) {
    ((Interface2) example).someInterface2Method();
}
It makes no sense for you to assume that this Interface1 object also happens to implement a second, unrelated interface. Even if this code works on a technical level, it is a sign of bad design: the developer is expecting some inherent correlation between two interfaces without actually having created this correlation.
You may say "but I know I'm putting an Example object in, the compiler should know that too!" but you'd be missing the point that if this were a method parameter, you would have no way of knowing what the callers of your method are sending.
public void DoTheThing(Interface1 example) {
    if (example instanceof Interface2) {
        ((Interface2) example).someInterface2Method();
    }
}
When other callers call this method, the compiler is only going to stop them if the passed object does not implement Interface1. The compiler is not going to stop someone from passing an object of a class which implements Interface1 but does not implement Interface2.
Your example does not break LSP, but it seems to break SRP. If you encounter a case where you need to cast an object to its second interface, the method that contains such code can be considered to be doing too much.
Implementing 2 (or more) interfaces in a class is fine. Deciding which interface to use as its data type depends entirely on the context of the code that will use it.
Casting is fine, especially when changing context.
class Payment implements Expirable, Limited {
    /* ... */
}

class PaymentProcessor {
    // Using Payment here because I'm working with payments.
    public void process(Payment payment) {
        boolean expired = expirationChecker.check(payment);
        boolean pastLimit = limitChecker.check(payment);
        if (!expired && !pastLimit) {
            acceptPayment(payment);
        }
    }
}

class ExpirationChecker {
    // This is the Expirable world, so I'm using Expirable here.
    public boolean check(Expirable expirable) {
        // code
    }
}

class LimitChecker {
    // This class is about checking limits, that's why I'm using Limited here.
    public boolean check(Limited limited) {
        // code
    }
}
Usually, many client-specific interfaces are fine, and are somewhat part of the Interface Segregation Principle (the "I" in SOLID). Some more specific points, on a technical level, have already been mentioned in other answers.
Particularly that you can go too far with this segregation, by having a class like
class Person implements FirstNameProvider, LastNameProvider, AgeProvider ... {
    @Override String getFirstName() {...}
    @Override String getLastName() {...}
    @Override int getAge() {...}
    ...
}
Or, conversely, that you have an implementing class that is too powerful, as in
class Application implements DatabaseReader, DataProcessor, UserInteraction, Visualizer {
    ...
}
I think that the main point in the Interface Segregation Principle is that the interfaces should be client-specific. They should basically "summarize" the functions that are required by a certain client, for a certain task.
To put it that way: The issue is to strike the right balance between the extremes that I sketched above. When I'm trying to figure out interfaces and their relationships (mutually, and in terms of the classes that implement them), I always try to take a step back and ask myself, in an intentionally naïve way: Who is going to receive what, and what is he going to do with it?
Regarding your example: When all your clients always need the functionality of Interface1 and Interface2 at the same time, then you should consider either defining an
interface Combined extends Interface1, Interface2 { }
or not have different interfaces in the first place. On the other hand, when the functionalities are completely distinct and unrelated and never used together, then you should wonder why the single class is implementing them at the same time.
At this point, one could refer to another principle, namely Composition over inheritance. Although it is not classically related to implementing multiple interfaces, composition can also be favorable in this case. For example, you could change your class to not implement the interfaces directly, but only provide instances that implement them:
class Example {
    Interface1 getInterface1() { ... }
    Interface2 getInterface2() { ... }
}
It looks a bit odd in this Example (sic!), but depending on the complexity of the implementation of Interface1 and Interface2, it can really make sense to keep them separated.
Edited in response to the comment:
The intention here is not to pass the concrete class Example to methods that need both interfaces. A case where this could make sense is rather when a class combines the functionalities of both interfaces, but does not do so by directly implementing them at the same time. It's hard to make up an example that does not look too contrived, but something like this might bring the idea across:
interface DatabaseReader { String read(); }
interface DatabaseWriter { void write(String s); }
class Database {
    DatabaseConnection connection = create();
    DatabaseReader reader = createReader(connection);
    DatabaseWriter writer = createWriter(connection);
    DatabaseReader getReader() { return reader; }
    DatabaseWriter getWriter() { return writer; }
}
The client will still rely on the interfaces. Methods like
void create(DatabaseWriter writer) { ... }
void read (DatabaseReader reader) { ... }
void update(DatabaseReader reader, DatabaseWriter writer) { ... }
could then be called with
create(database.getWriter());
read (database.getReader());
update(database.getReader(), database.getWriter());
respectively.
With the help of various posts and comments on this page, a solution has been produced, which I feel is correct for my scenario.
The following shows the iterative changes to the solution to meet SOLID principles.
Requirement
To produce the response for a web service, key + object pairs are added to a response object. There are lots of different key + object pairs that need to be added, each of which may have unique processing required to transform the data from the source to the format required in the response.
From this it is clear that whilst the different key / value pairs may have different processing requirements to transform the source data to the target response object, they all have a common goal of adding an object to the response object.
Therefore, the following interface was produced in solution iteration 1:
Solution Iteration 1
public interface ResponseObjectProvider<T, S> {
    void addObject(T targetObject, S sourceObject, String targetKey);
}
Any developer who needs to add an object to the response can now do so using an existing implementation that matches their requirement, or add a new implementation given a new scenario.
This is great, as we have a common interface which acts as a contract for this common practice of adding response objects.
However, one scenario requires that the target object should be taken from the source object given a particular key, "identifier".
There are options here, the first is to add an implementation of the existing interface as follows:
public class GetIdentifierResponseObjectProvider<T extends Map, S extends Map> implements ResponseObjectProvider<T, S> {
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get("identifier"));
    }
}
This works; however, this scenario could be required for other source object keys ("startDate", "endDate", etc.), so this implementation should be made more generic to allow for reuse in this scenario.
Additionally, other implementations may require more context information to perform the addObject operation... so a new generic type should be added to cater for this.
Solution Iteration 2
public interface ResponseObjectProvider<T, S, U> {
    void addObject(T targetObject, S sourceObject, String targetKey);
    void setParams(U params);
    U getParams();
}
This interface caters for both usage scenarios: the implementations that require additional params to perform the addObject operation, and the implementations that do not.
However, considering the latter of the usage scenarios, the implementations that do not require additional parameters will break the SOLID Interface Segregation Principle, as these implementations will override the getParams and setParams methods but not implement them, e.g.:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S, U> {
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(getParams()));
    }
    public void setParams(U params) {
        // unimplemented method
    }
    public U getParams() {
        return null; // unimplemented method
    }
}
Solution Iteration 3
To fix the Interface Segregation issue, the getParams and setParams interface methods were moved into a new Interface:
public interface ParametersProvider<T> {
    void setParams(T params);
    T getParams();
}
The implementations that require parameters can now implement the ParametersProvider interface:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S>, ParametersProvider<U> {
    private U params;
    public void setParams(U params) {
        this.params = params;
    }
    public U getParams() {
        return this.params;
    }
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(params));
    }
}
This solves the Interface Segregation issue but causes two more issues... If the calling client wants to program to an interface, i.e.:
ResponseObjectProvider responseObjectProvider = new GetObjectBySourceKeyResponseObjectProvider<>();
Then the addObject method will be available to the instance, but NOT the getParams and setParams methods of the ParametersProvider interface... To call these a cast is required, and to be safe an instanceof check should also be performed:
if (responseObjectProvider instanceof ParametersProvider) {
    ((ParametersProvider) responseObjectProvider).setParams("identifier");
}
Not only is this undesirable it also breaks the Liskov Substitution Principle - "if S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program"
i.e. if we replaced an implementation of ResponseObjectProvider that also implements ParametersProvider with an implementation that does not implement ParametersProvider, then this could alter some of the desirable properties of the program... Additionally, the client needs to be aware of which implementation is in use to call the correct methods.
An additional problem is the usage for calling clients. If the calling client wanted to use an instance that implements both interfaces to perform addObject multiple times, the setParams method would need to be called before addObject... This could cause avoidable bugs if care is not taken when calling.
Solution Iteration 4 - Final Solution
The interfaces produced from Solution Iteration 3 solve all of the currently known usage requirements, with some flexibility provided by generics for implementation using different types. However, this solution breaks the Liskov Substitution Principle and has a non-obvious usage of setParams for the calling client
The solution is to have two separate interfaces, ParameterisedResponseObjectProvider and ResponseObjectProvider.
This allows the client to program to an interface, and would select the appropriate interface depending on whether the objects being added to the response require additional parameters or not
The new interface was first implemented as an extension of ResponseObjectProvider:
public interface ParameterisedResponseObjectProvider<T, S, U> extends ResponseObjectProvider<T, S> {
    void setParams(U params);
    U getParams();
}
However, this still had the usage issue, where the calling client would first need to call setParams before calling addObject, and it also made the code less readable.
So the final solution has two separate interfaces defined as follows:
public interface ResponseObjectProvider<T, S> {
    void addObject(T targetObject, S sourceObject, String targetKey);
}

public interface ParameterisedResponseObjectProvider<T, S, U> {
    void addObject(T targetObject, S sourceObject, String targetKey, U params);
}
This solution solves the breaches of Interface Segregation and Liskov Substitution principles and also improves the usage for calling clients and improves the readability of the code.
It does mean that the client needs to be aware of the different interfaces, but since the contracts are different this seems to be a justified decision especially when considering all the issues that the solution has avoided.
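A usage sketch (hypothetical maps and keys; since each final interface happens to have exactly one method, lambdas work here):
Map<String, Object> source = new HashMap<>();
source.put("identifier", 42);
Map<String, Object> response = new HashMap<>();

// Plain provider: no extra parameters needed.
ResponseObjectProvider<Map<String, Object>, Map<String, Object>> simple =
        (target, src, key) -> target.put(key, src.get(key));

// Parameterised provider: the source key is passed per call, so there is no
// stateful setParams to remember to call first.
ParameterisedResponseObjectProvider<Map<String, Object>, Map<String, Object>, String> byKey =
        (target, src, key, sourceKey) -> target.put(key, src.get(sourceKey));

simple.addObject(response, source, "identifier");
byKey.addObject(response, source, "start", "startDate");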
The problem you describe often comes about through over-zealous application of the Interface Segregation Principle, encouraged by languages' inability to specify that members of one interface should, by default, be chained to static methods which could implement sensible behaviors.
Consider, for example, a basic sequence/enumeration interface and the following behaviors:
Produce an enumerator which can read out the objects if no other iterator has yet been created.
Produce an enumerator which can read out the objects even if another iterator has already been created and used.
Report how many items are in the sequence
Report the value of the Nth item in the sequence
Copy a range of items from the object into an array of that type.
Yield a reference to an immutable object that can accommodate the above operations efficiently with contents that are guaranteed never to change.
I would suggest that such abilities should be part of the basic sequence/enumeration interface, along with a method/property to indicate which of the above operations are meaningfully supported. Some kinds of single-shot on-demand enumerators (e.g. an infinite truly-random sequence generator) might not be able to support any of those functions, but segregating such functions into separate interfaces will make it much harder to produce efficient wrappers for many kinds of operations.
One could produce a wrapper class that would accommodate all of the above operations, though not necessarily efficiently, on any finite sequence which supports the first ability. If, however, the class is being used to wrap an object that already supports some of those abilities (e.g. access the Nth item), having the wrapper use the underlying behaviors could be much more efficient than having it do everything via the second function above (e.g. creating a new enumerator, and using that to iteratively read and ignore items from the sequence until the desired one is reached).
Having all objects that produce any kind of sequence support an interface that includes all of the above, along with an indication of what abilities are supported, would be cleaner than trying to have different interfaces for different subsets of abilities, and requiring that wrapper classes make explicit provision for any combinations they want to expose to their clients.
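A rough sketch of that shape (hypothetical names; unsupported abilities throw, and capabilities() reports which ones are meaningful):
interface Sequence<T> extends Iterable<T> {
    enum Capability { RESTARTABLE, SIZED, INDEXED, SNAPSHOT }

    // Reports which of the optional operations below are meaningfully supported.
    java.util.Set<Capability> capabilities();

    default int size() { throw new UnsupportedOperationException(); }
    default T get(int index) { throw new UnsupportedOperationException(); }
    default Sequence<T> snapshot() { throw new UnsupportedOperationException(); }
}
A single-shot random generator would report an empty capability set; a wrapper over a list could support all four, delegating to the list rather than re-iterating from the start.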

Java, differing object fields in child classes and how to work with them in the least painful way

(Edited down from a big wall o' text to better sum up) The problem is thus:
I have a collection of AbstractMyClass. There are a number of concrete child classes in this collection. Each child class will also contain a field for a particular type of object. There could be a dozen different children of AbstractMyClass, each with their own object field. Each of these objects likely has no shared parent class outside of Object.
For example, MyClassA may have a String, MyClassB may have an Integer, MyClassC may have a MyCustomClass, etc. The different objects are an unfortunate necessity in the way this is done.
The thing is, this collection needs to be evaluated and, given the right set of conditions, the object(s) within the AbstractMyClass must be extracted, examined, manipulated, stored, set aside, etc for later operations. There are a variety of potential operations based on conditions as well as the MyClass/object type within, operations such that handling the data within the MyClass may not be viable as other, more centralized classes (ie, a class managing a thread pool) may need to deal with them. That leaves me with the need to handle some very disparate object types. This can certainly be done, but I cannot think of any reasonably clean or dynamic way to handle it. Sure, you could try the following:
Use generics, which can eliminate some child classes, but outside of that class you don't know what the object T is or how to handle it without more muckery.
Typecheck everything, which makes a rather lengthy and ugly conditional, and must be maintained if new object types are introduced.
Delegates within AbstractMyClass, but that means even more classes to build up to handle each instance, and delegates may not be able to handle all of the necessary functions.
A wrapper object with a field for every object type. Yay, let's nullcheck everything.
You can see the predicament. Is there any "good" way to handle this sort of thing, or is it just one of those issues that Java can't directly handle as it may not have enough info at runtime and everyone just works around it in varying ways?
It's hard to answer without a concrete example and without knowing exactly what you must do with the response, but there are basically two clean, OO solutions that I see.
First solution: Good old polymorphism:
public void handleResponse(AbstractResponse response) {
    response.handle();
}
In short, the Response is itself responsible for its handling when it has been received. The response knows its own type, and has access to its own data.
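A minimal sketch of what that could look like (hypothetical fields):
abstract class AbstractResponse {
    abstract void handle();
}

class ResponseA extends AbstractResponse {
    private final String payload = "...";

    @Override
    void handle() {
        // ResponseA knows its own field types, so no casts are needed
        System.out.println("Handling A: " + payload);
    }
}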
Second solution: the visitor pattern, which allows externalizing the response handling without doing instanceof checks and casts to know the type of the response:
public interface ResponseVisitor {
    void visitA(ResponseA responseA);
    void visitB(ResponseB responseB);
}

public abstract class AbstractResponse {
    public abstract void accept(ResponseVisitor visitor);
    ...
}

public class ResponseA extends AbstractResponse {
    @Override
    public void accept(ResponseVisitor visitor) {
        visitor.visitA(this);
    }
}

public class ResponseB extends AbstractResponse {
    @Override
    public void accept(ResponseVisitor visitor) {
        visitor.visitB(this);
    }
}

public class TheResponseVisitorImplementation implements ResponseVisitor {
    @Override
    public void visitA(ResponseA responseA) {
        ...
    }
    @Override
    public void visitB(ResponseB responseB) {
        ...
    }
}

...

public void handleResponse(AbstractResponse response) {
    ResponseVisitor visitor = new TheResponseVisitorImplementation();
    response.accept(visitor);
}

@MustOverride annotation?

In .NET, one can specify a "mustoverride" attribute to a method in a particular superclass to ensure that subclasses override that particular method.
I was wondering whether anybody has a custom Java annotation that could achieve the same effect. Essentially what I want is to push for subclasses to override a method in a superclass that itself has some logic that must be run through. I don't want to use abstract methods or interfaces, because I want some common functionality to be run in the super method, but more or less produce a compiler warning/error denoting that derived classes should override a given method.
I don't quite see why you would not want to use the abstract modifier -- it is intended for forcing implementation by a subclass, and only needs to be used for some methods, not all. Or maybe you are thinking of C++-style "pure abstract" classes?
But one other thing that many Java developers are not aware of is that it is also possible to override non-abstract methods and declare them abstract; like:
public abstract String toString(); // force re-definition
so that even though java.lang.Object already defines an implementation, you can force sub-classes to define it again.
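A small sketch of that re-abstraction trick:
abstract class Base {
    // Object already supplies toString(), but subclasses of Base must now redefine it.
    @Override
    public abstract String toString();
}

class Concrete extends Base {
    @Override
    public String toString() { return "Concrete"; } // omitting this is a compile error
}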
Ignoring abstract methods, there is no such facility in Java. Perhaps it's possible to create a compile-time annotation to force that behaviour (and I'm not convinced it is), but that's it.
The real kicker is "override a method in a superclass that itself has some logic that must be run through". If you override a method, the superclass's method won't be called unless you explicitly call it.
In these sort of situations I've tended to do something like:
abstract public class Worker implements Runnable {
    @Override
    public final void run() {
        beforeWork();
        doWork();
        afterWork();
    }
    protected void beforeWork() { }
    protected void afterWork() { }
    abstract protected void doWork();
}
to force a particular logic structure over an interface's method. You could use this, for example, to count invocations without having to worry about whether the user calls super.run(), etc.
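A hypothetical usage sketch: a subclass fills in doWork() and cannot bypass the surrounding structure, because run() is final:
class PrintingWorker extends Worker {
    @Override
    protected void doWork() {
        System.out.println("working");
    }
}

new Thread(new PrintingWorker()).start(); // the before/after hooks run automatically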
... and if declaring a base class abstract is not an option you can always throw an UnsupportedOperationException
class BaseClass {
    void mustOverride() {
        throw new UnsupportedOperationException("Must implement");
    }
}
But this is not a compile-time check of course...
I'm not sure which attribute you're thinking about in .NET.
In VB you can apply the MustOverride modifier to a method, but that's just the equivalent to making the method abstract in Java. You don't need an attribute/annotation, as the concept is built into the languages. It's more than just applying metadata - there's also the crucial difference that an abstract method doesn't include any implementation itself.
If you do think there's such an attribute, please could you say which one you mean?
Android has a new annotation out, as announced at Google I/O 2015:
@CallSuper
More details here:
http://tools.android.com/tech-docs/support-annotations
If you need some default behaviour, but for some reason it should not be used by specializations (like an implementation of some logic in a non-abstract Adapter class, there just for ease of prototyping but not meant for production), you could encapsulate that logic and log a warning that it is being used, without actually having to run it.
The base class constructor could check if the variable holding the logic points to the default one. (writing in very abstract terms as I think it should work on any language)
It would be something like this (uncompiled, untested and incomplete) Java (up to 7) example:
public interface SomeLogic {
    void execute();
}

public class BaseClass {
    // ...private stuff and the logging framework of your preference...
    private static final SomeLogic MUST_OVERRIDE = new SomeLogic() {
        public void execute() {
            // do some default naive stuff
        }
    };
    protected SomeLogic getLogic() { return MUST_OVERRIDE; }
    // The method that probably would be marked as MustOverride if the option existed
    // in the language (the keyword exists in VB, with the same objective as the
    // abstract keyword in Java).
    public void executeLogic() {
        getLogic().execute();
    }
    public BaseClass() {
        if (getLogic() == MUST_OVERRIDE) {
            log.warn("Using default logic for the important SomeLogic.execute method, but it is not intended for production. Please override getLogic to return a proper implementation ASAP");
        }
    }
}

public class GoodSpecialization extends BaseClass {
    public SomeLogic getLogic() {
        // returns a proper implementation to do whatever was specified for the execute method
    }
    // do some other specialized stuff...
}

public class BadSpecialization extends BaseClass {
    // do lots of specialized stuff but doesn't override getLogic...
}
Some things could be different depending on the requirements, and clearly simpler, especially for languages with lambda expressions, but the basic idea would be the same.
Without the feature built in, there is always some way to emulate it. In this example you get a runtime warning in a log file from a home-made, pattern-like solution; only your needs can tell whether that is enough, or whether more hardcore bytecode manipulation, IDE plugin development, or some other wizardry is needed.
I've been thinking about this.
While I don't know of any way to require it with a compile error, you might try writing a custom PMD rule to raise a red flag if you forgot to override.
There are already loads of PMD rules that do things like reminding you to implement hashCode if you choose to override equals. Perhaps something could be done like that.
I've never done this before, so I'm not the one to write a tutorial, but a good place to start would be this link http://techtraits.com/programming/2011/11/05/custom-pmd-rules-using-xpath/ In this example, he basically creates a little warning if you decide to use a wildcard in an import package. Use it as a starting point to explore how PMD can analyze your source code, visit each member of a hierarchy, and identify where you forgot to implement a specific method.
Annotations are also a possibility, but you'd have to figure out your own way to implement the navigation through the class path. I believe PMD already handles this. Additionally, PMD has some really good integration with IDEs.
https://pmd.github.io/

Java Delegates?

Does the Java language have delegate features, similar to how C# has support for delegates?
Not really, no.
You may be able to achieve the same effect by using reflection to get Method objects you can then invoke. The other way is to create an interface with a single 'invoke' or 'execute' method, and then instantiate it to call the method you're interested in (i.e. using an anonymous inner class).
You might also find this article interesting / useful: A Java Programmer Looks at C# Delegates (blueskyprojects.com)
Depending precisely what you mean, you can achieve a similar effect (passing around a method) using the Strategy Pattern.
Instead of a line like this declaring a named method signature:
// C#
public delegate void SomeFunction();
declare an interface:
// Java
public interface ISomeBehaviour {
    void SomeFunction();
}
For concrete implementations of the method, define a class that implements the behaviour:
// Java
public class TypeABehaviour implements ISomeBehaviour {
    public void SomeFunction() {
        // TypeA behaviour
    }
}

public class TypeBBehaviour implements ISomeBehaviour {
    public void SomeFunction() {
        // TypeB behaviour
    }
}
Then wherever you would have had a SomeFunction delegate in C#, use an ISomeBehaviour reference instead:
// C#
SomeFunction doSomething = SomeMethod;
doSomething();
doSomething = SomeOtherMethod;
doSomething();
// Java
ISomeBehaviour someBehaviour = new TypeABehaviour();
someBehaviour.SomeFunction();
someBehaviour = new TypeBBehaviour();
someBehaviour.SomeFunction();
With anonymous inner classes, you can even avoid declaring separate named classes and almost treat them like real delegate functions.
// Java
public void SomeMethod(ISomeBehaviour pSomeBehaviour) {
    ...
}
...
SomeMethod(new ISomeBehaviour() {
    @Override
    public void SomeFunction() {
        // your implementation
    }
});
This should probably only be used when the implementation is very specific to the current context and wouldn't benefit from being reused.
And then of course in Java 8, these do become basically lambda expressions:
// Java 8
SomeMethod(() -> { /* your implementation */ });
Short story: no.
Introduction
The newest version of the Microsoft Visual J++ development environment
supports a language construct called delegates or bound method
references. This construct, and the new keywords delegate and
multicast introduced to support it, are not a part of the Java™
programming language, which is specified by the Java Language
Specification and amended by the Inner Classes Specification included
in the documentation for the JDK™ 1.1 software.
It is unlikely that the Java programming language will ever include
this construct. Sun already carefully considered adopting it in 1996,
to the extent of building and discarding working prototypes. Our
conclusion was that bound method references are unnecessary and
detrimental to the language. This decision was made in consultation
with Borland International, who had previous experience with bound
method references in Delphi Object Pascal.
We believe bound method references are unnecessary because another
design alternative, inner classes, provides equal or superior
functionality. In particular, inner classes fully support the
requirements of user-interface event handling, and have been used to
implement a user-interface API at least as comprehensive as the
Windows Foundation Classes.
We believe bound method references are harmful because they detract
from the simplicity of the Java programming language and the
pervasively object-oriented character of the APIs. Bound method
references also introduce irregularity into the language syntax and
scoping rules. Finally, they dilute the investment in VM technologies
because VMs are required to handle additional and disparate types of
references and method linkage efficiently.
Have you read this:
Delegates are a useful construct in event-based systems. Essentially
Delegates are objects that encode a method dispatch on a specified
object. This document shows how java inner classes provide a more
generic solution to such problems.
What is a Delegate? Really it is very similar to a pointer to member
function as used in C++. But a delegate contains the target object
along with the method to be invoked. Ideally it would be nice to be
able to say:
obj.registerHandler(ano.methodOne);
...and that the method methodOne would be called on ano when some specific event was received.
This is what the Delegate structure achieves.
Java Inner Classes
It has been argued that Java provides this
functionality via anonymous inner classes and thus does not need the additional
Delegate construct.
obj.registerHandler(new Handler() {
    public void handleIt(Event ev) {
        methodOne(ev);
    }
});
At first glance this seems correct, but at the same time a nuisance, because for many event processing examples the simplicity of the Delegate syntax is very attractive.
General Handler
However, if event-based programming is used in a more
pervasive manner, say, for example, as a part of a general
asynchronous programming environment, there is more at stake.
In such a general situation, it is not sufficient to include only the
target method and target object instance. In general there may be
other parameters required, that are determined within the context when
the event handler is registered.
In this more general situation, the Java approach can provide a very
elegant solution, particularly when combined with use of final
variables:
void processState(final T1 p1, final T2 dispatch) {
    final int a1 = someCalculation();
    m_obj.registerHandler(new Handler() {
        public void handleIt(Event ev) {
            dispatch.methodOne(a1, ev, p1);
        }
    });
}
final * final * final
Got your attention?
Note that the final variables are accessible from within the anonymous
class method definitions. Be sure to study this code carefully to
understand the ramifications. This is potentially a very powerful
technique. For example, it can be used to good effect when registering
handlers in MiniDOM and in more general situations.
By contrast, the Delegate construct does not provide a solution for
this more general requirement, and as such should be rejected as an
idiom on which designs can be based.
I know this post is old, but Java 8 has added lambdas, and the concept of a functional interface, which is any interface with exactly one abstract method. Together these offer similar functionality to C# delegates. See here for more info, or just google Java Lambdas.
http://cr.openjdk.java.net/~briangoetz/lambda/lambda-state-final.html
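For example, with nothing but the standard library, java.util.function.Consumer plays the role a void-returning, one-argument C# delegate would:
java.util.function.Consumer<String> print = s -> System.out.println(s);
print.accept("hello");              // invoke the "delegate"
print = s -> System.err.println(s); // reassign it, as with a C# delegate variable
print.accept("world");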
No, but they're fakeable using proxies and reflection:
public static class TestClass {
    public String knockKnock() {
        return "who's there?";
    }
}

private final TestClass testInstance = new TestClass();

@Test public void
can_delegate_a_single_method_interface_to_an_instance() throws Exception {
    Delegator<TestClass, Callable<String>> knockKnockDelegator = Delegator.ofMethod("knockKnock")
            .of(TestClass.class)
            .to(Callable.class);
    Callable<String> callable = knockKnockDelegator.delegateTo(testInstance);
    assertThat(callable.call(), is("who's there?"));
}
The nice thing about this idiom is that you can verify that the delegated-to method exists, and has the required signature, at the point where you create the delegator (though not at compile time, unfortunately; a FindBugs plug-in might help here), then use it safely to delegate to various instances.
See the karg code on github for more tests and implementation.
Yes and no, but the delegate pattern in Java could be thought of this way. This video tutorial is about data exchange between an activity and fragments, and it captures the essence of a delegate-like pattern using interfaces.
I have implemented callback/delegate support in Java using reflection. Details and working source are available on my website.
How It Works
There is a principal class named Callback with a nested class named WithParms. The API which needs the callback will take a Callback object as a parameter and, if necessary, create a Callback.WithParms as a method variable. Since a great many of the applications of this object will be recursive, this works very cleanly.
With performance still a high priority to me, I didn't want to be required to create a throwaway object array to hold the parameters for every invocation - after all in a large data structure there could be thousands of elements, and in a message processing scenario we could end up processing thousands of data structures a second.
In order to be threadsafe the parameter array needs to exist uniquely for each invocation of the API method, and for efficiency the same one should be used for every invocation of the callback; I needed a second object which would be cheap to create in order to bind the callback with a parameter array for invocation. But, in some scenarios, the invoker would already have the parameter array for other reasons. For these two reasons, the parameter array does not belong in the Callback object. Also the choice of invocation (passing the parameters as an array or as individual objects) belongs in the hands of the API using the callback, enabling it to use whichever invocation is best suited to its inner workings.
The WithParms nested class, then, is optional and serves two purposes, it contains the parameter object array needed for the callback invocations, and it provides 10 overloaded invoke() methods (with from 1 to 10 parameters) which load the parameter array and then invoke the callback target.
What follows is an example using a callback to process the files in a directory tree. This is an initial validation pass which just counts the files to process and ensures none exceeds a predetermined maximum size. In this case we just create the callback inline with the API invocation. However, we reflect the target method out as a static value so that the reflection is not done every time.
static private final Method COUNT = Callback.getMethod(Xxx.class, "callback_count", true, File.class, File.class);
...
IoUtil.processDirectory(root,new Callback(this,COUNT),selector);
...
private void callback_count(File dir, File fil) {
    if(fil != null) { // file is null for processing a directory
        fileTotal++;
        if(fil.length() > fileSizeLimit) {
            throw new Abort("Failed", "File size exceeds maximum of " + TextUtil.formatNumber(fileSizeLimit) + " bytes: " + fil);
        }
    }
    progress("Counting", dir, fileTotal);
}
IoUtil.processDirectory():
/**
 * Process a directory using callbacks. To interrupt, the callback must throw an (unchecked) exception.
 * Subdirectories are processed only if the selector is null or selects the directories, and are done
 * after the files in any given directory. When the callback is invoked for a directory, the file
 * argument is null.
 * <p>
 * The callback signature is:
 * <pre> void callback(File dir, File ent);</pre>
 * <p>
 * @return The number of files processed.
 */
static public int processDirectory(File dir, Callback cbk, FileSelector sel) {
    return _processDirectory(dir, new Callback.WithParms(cbk, 2), sel);
}
static private int _processDirectory(File dir, Callback.WithParms cbk, FileSelector sel) {
    int cnt = 0;
    if(!dir.isDirectory()) {
        if(sel == null || sel.accept(dir)) { cbk.invoke(dir.getParent(), dir); cnt++; }
    }
    else {
        cbk.invoke(dir, (Object[]) null);
        File[] lst = (sel == null ? dir.listFiles() : dir.listFiles(sel));
        if(lst != null) {
            for(int xa = 0; xa < lst.length; xa++) {
                File ent = lst[xa];
                if(!ent.isDirectory()) {
                    cbk.invoke(dir, ent);
                    lst[xa] = null;
                    cnt++;
                }
            }
            for(int xa = 0; xa < lst.length; xa++) {
                File ent = lst[xa];
                if(ent != null) { cnt += _processDirectory(ent, cbk, sel); }
            }
        }
    }
    return cnt;
}
This example illustrates the beauty of this approach - the application-specific logic is abstracted into the callback, and the drudgery of recursively walking a directory tree is tucked nicely away in a completely reusable static utility method. And we don't have to repeatedly pay the price of defining and implementing an interface for every new use. Of course, the argument for an interface is that it is far more explicit about what to implement (it's enforced, not simply documented) - but in practice I have not found it to be a problem to get the callback definition right.
Defining and implementing an interface is not really so bad (unless you're distributing applets, as I am, where avoiding creating extra classes actually matters), but where this really shines is when you have multiple callbacks in a single class. Not only is being forced to push them each into a separate inner class added overhead in the deployed application, but it's downright tedious to program and all that boiler-plate code is really just "noise".
It doesn't have an explicit delegate keyword as C# does, but you can achieve something similar in Java 8 by using a functional interface (i.e. any interface with exactly one abstract method) and a lambda:
private interface SingleFunc {
    void printMe();
}

public static void main(String[] args) {
    SingleFunc sf = () -> {
        System.out.println("Hello, I am a simple single func.");
    };
    SingleFunc sfComplex = () -> {
        System.out.println("Hello, I am a COMPLEX single func.");
    };
    delegate(sf);
    delegate(sfComplex);
}

private static void delegate(SingleFunc f) {
    f.printMe();
}
Every new object of type SingleFunc must implement printMe(), so it is safe to pass it to another method (e.g. delegate(SingleFunc)) to call the printMe() method.
With safety-mirror on the classpath you get something similar to C#'s delegates and events.
Examples from the project's README:
Delegates in Java!
Delegate.With1Param<String, String> greetingsDelegate = new Delegate.With1Param<>();
greetingsDelegate.add(str -> "Hello " + str);
greetingsDelegate.add(str -> "Goodbye " + str);

DelegateInvocationResult<String> invocationResult =
        greetingsDelegate.invokeAndAggregateExceptions("Sir");

invocationResult.getFunctionInvocationResults().forEach(funInvRes ->
        System.out.println(funInvRes.getResult()));
// prints: "Hello Sir" and "Goodbye Sir"
Events
//Create a private Delegate. Make sure it is private so only *you* can invoke it.
private static Delegate.With0Params<String> trimDelegate = new Delegate.With0Params<>();
//Create a public Event using the delegate you just created.
public static Event.With0Params<String> trimEvent = new Event.With0Params<>(trimDelegate);
See also this SO answer.
While it is nowhere near as clean, you could implement something like C# delegates using a Java Proxy.
No, but it has similar behavior, internally.
In C#, delegates are used to create a separate entry point, and they work much like a function pointer.
In Java there is no such thing as a function pointer (at a surface level), but internally Java needs to do the same thing in order to achieve these objectives.
For example, creating threads in Java requires a class extending Thread or implementing Runnable, because a class object variable can be used as a reference to the code to run, much like a function pointer.
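For example (Java 8 syntax), the Runnable object is effectively the "function pointer" the answer alludes to:
Runnable task = () -> System.out.println("running in " + Thread.currentThread().getName());
new Thread(task).start(); // the object reference stands in for a function pointer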
No, Java doesn't have that amazing feature. But you could create it manually using the observer pattern. Here is an example:
Write C# delegate in java
The code described offers many of the advantages of C# delegates. Methods, either static or dynamic, can be treated in a uniform manner. The complexity in calling methods through reflection is reduced, and the code is reusable, in the sense of requiring no additional classes in the user code. Note we are calling an alternate convenience version of invoke, where a method with one parameter can be called without creating an object array. Java code below:
class Class1 {
    public void show(String s) { System.out.println(s); }
}

class Class2 {
    public void display(String s) { System.out.println(s); }
}

// allows static methods as well
class Class3 {
    public static void staticDisplay(String s) { System.out.println(s); }
}

public class TestDelegate {
    public static final Class[] OUTPUT_ARGS = { String.class };
    public static final Delegator DO_SHOW = new Delegator(OUTPUT_ARGS, Void.TYPE);

    public static void main(String[] args) {
        Delegate[] items = new Delegate[3];
        items[0] = DO_SHOW.build(new Class1(), "show");
        items[1] = DO_SHOW.build(new Class2(), "display");
        items[2] = DO_SHOW.build(Class3.class, "staticDisplay");
        for(int i = 0; i < items.length; i++) {
            items[i].invoke("Hello World");
        }
    }
}
Java doesn't have delegates and is proud of it :). From what I read here, I found in essence two ways to fake delegates:
1. reflection;
2. inner classes.
Reflection is slow! Inner classes do not cover the simplest use case: a sort function. I do not want to go into details, but the solution with inner classes is basically to create one wrapper class for an array of integers to be sorted in ascending order and another class for an array of integers to be sorted in descending order.
