Anything wrong with instanceof checks here?

With the introduction of generics, I try to avoid instanceof checks and casting as much as possible. But I don't see a way around it in this scenario:
for (CacheableObject<ICacheable> cacheableObject : cacheableObjects) {
    ICacheable iCacheable = cacheableObject.getObject();
    if (iCacheable instanceof MyObject) {
        MyObject myObject = (MyObject) iCacheable;
        myObjects.put(myObject.getKey(), myObject);
    } else if (iCacheable instanceof OtherObject) {
        OtherObject otherObject = (OtherObject) iCacheable;
        otherObjects.put(otherObject.getKey(), otherObject);
    }
}
In the above code, I know that my ICacheables should only ever be instances of MyObject or OtherObject, and depending on which, I want to put them into two separate maps and then perform some processing further down.
I'd be interested if there is another way to do this without my instanceof check.
Thanks

You could use double dispatch (sometimes called double invocation). No promises it's a better solution, but it's an alternative.
Code Example
import java.util.HashMap;

public class Example {

    public static void main(String[] argv) {
        Example ex = new Example();
        ICacheable[] cacheableObjects = new ICacheable[]{new MyObject(), new OtherObject()};
        for (ICacheable iCacheable : cacheableObjects) {
            // depending on whether the object is a MyObject or an OtherObject,
            // the .put(Example) method will double dispatch to either
            // the put(MyObject) or put(OtherObject) method, below
            iCacheable.put(ex);
        }
        System.out.println("myObjects: " + ex.myObjects.size());
        System.out.println("otherObjects: " + ex.otherObjects.size());
    }

    private HashMap<String, MyObject> myObjects = new HashMap<String, MyObject>();
    private HashMap<String, OtherObject> otherObjects = new HashMap<String, OtherObject>();

    public Example() {
    }

    public void put(MyObject myObject) {
        myObjects.put(myObject.getKey(), myObject);
    }

    public void put(OtherObject otherObject) {
        otherObjects.put(otherObject.getKey(), otherObject);
    }
}

interface ICacheable {
    public String getKey();
    public void put(Example ex);
}

class MyObject implements ICacheable {
    public String getKey() {
        return "MyObject" + this.hashCode();
    }

    public void put(Example ex) {
        ex.put(this);
    }
}

class OtherObject implements ICacheable {
    public String getKey() {
        return "OtherObject" + this.hashCode();
    }

    public void put(Example ex) {
        ex.put(this);
    }
}
The idea here is that - instead of casting or using instanceof - you call the iCacheable object's .put(...) method, which passes itself back to the Example object's overloaded methods. Which overload is called depends on the compile-time type of this inside each implementing class.
See also the Visitor pattern. My code example smells because the ICacheable.put(...) method is incohesive - but using the interfaces defined in the Visitor pattern can clean up that smell.
Why can't I just call this.put(iCacheable) from the Example class?
In Java, overriding is always bound at runtime, but overloading is a little more complicated: dynamic dispatching means that the implementation of a method is chosen at runtime, but the method's signature is nonetheless determined at compile time. (Check out the Java Language Specification, section 8.4.9, for more info, and also check out the puzzler "Making a Hash of It" on page 137 of the book Java Puzzlers.)
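To illustrate that point (a minimal, hypothetical sketch, not from the original answer; it assumes the Example, ICacheable and MyObject classes defined above, in the same package):

// Hypothetical sketch: why Example cannot simply call this.put(iCacheable).
// Overload resolution picks a signature at compile time, based on the static type of the argument.
public class OverloadDemo {
    public static void main(String[] args) {
        Example ex = new Example();
        ICacheable iCacheable = new MyObject();   // static type: ICacheable

        // ex.put(iCacheable);                    // would not compile: there is no put(ICacheable) overload
        ex.put((MyObject) iCacheable);            // compiles, but brings back the cast we wanted to avoid
        iCacheable.put(ex);                       // double dispatch: inside MyObject, 'this' has static type MyObject,
                                                  // so the call binds to Example.put(MyObject) at compile time
    }
}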

Is there no way to combine the cached objects in each map into one map? Their keys could keep them separated, so you could store them in one map. If you can't do that, then you could have a
Map<Class<?>, Map<String, ICacheable>>
and then do this:
Map<Class<?>, Map<String, ICacheable>> cache = ...;

public void cache(ICacheable cacheable) {
    if (!cache.containsKey(cacheable.getClass())) {
        cache.put(cacheable.getClass(), new HashMap<String, ICacheable>());
    }
    cache.get(cacheable.getClass()).put(cacheable.getKey(), cacheable);
}
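On Java 8 and later the same idea can be written without the containsKey check, using computeIfAbsent. A minimal sketch; the CacheByClass class name and lookup helper are illustrative, not from the answer:

import java.util.HashMap;
import java.util.Map;

// Sketch only: a per-class cache keyed first by runtime class, then by getKey().
public class CacheByClass {
    private final Map<Class<?>, Map<String, ICacheable>> cache = new HashMap<>();

    public void cache(ICacheable cacheable) {
        cache.computeIfAbsent(cacheable.getClass(), c -> new HashMap<>())
             .put(cacheable.getKey(), cacheable);
    }

    public ICacheable lookup(Class<?> type, String key) {
        Map<String, ICacheable> byKey = cache.get(type);
        return byKey == null ? null : byKey.get(key);
    }
}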

You can do the following:
Add a method to your ICacheable interface that will handle placing the object into one of two Maps, given as arguments to the method.
Implement this method in each of your two implementing classes, having each class decide which Map to put itself in.
Remove the instanceof checks in your for loop, and replace the put method with a call to the new method defined in step 1.
This is not a good design, however, because if you ever have another class that implements this interface, and a third map, then you'll need to pass another Map to your new method.
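A rough sketch of the approach described above (the addTo method name is made up, and the question's types are re-declared here purely for illustration):

import java.util.Map;

// Sketch only: each implementing class decides which map it belongs in.
interface ICacheable {
    String getKey();
    void addTo(Map<String, MyObject> myObjects, Map<String, OtherObject> otherObjects);
}

class MyObject implements ICacheable {
    public String getKey() { return "MyObject" + hashCode(); }

    public void addTo(Map<String, MyObject> myObjects, Map<String, OtherObject> otherObjects) {
        myObjects.put(getKey(), this);  // MyObject puts itself in the MyObject map
    }
}

class OtherObject implements ICacheable {
    public String getKey() { return "OtherObject" + hashCode(); }

    public void addTo(Map<String, MyObject> myObjects, Map<String, OtherObject> otherObjects) {
        otherObjects.put(getKey(), this);
    }
}

// The loop from the question then becomes:
// for (CacheableObject<ICacheable> cacheableObject : cacheableObjects) {
//     cacheableObject.getObject().addTo(myObjects, otherObjects);
// }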

Related

Call method of unknown object

I have two ArrayLists - ArrayList1 and ArrayList2. Each of them is filled with objects - Object1 and Object2, respectively.
Both of these objects have method 'getText'.
Object1:
public String getText() { return "1";}
Object2:
public String getText() { return "2";}
At certain point I would like to loop through each of these lists using the same method (just with different parameter).
loopThroughList(1)
loopThroughList(2)
What is the syntax if I want to call a method, but I don't know which object it is going to be? This is the code I have so far:
for (Object o : lists.getList(listNumber)) {
    System.out.println(o.getText());
}
It says Cannot resolve method getText. I googled around and found another solution:
for (Object o : lists.getList(listNumber)) {
    System.out.println(o.getClass().getMethod("getText"));
}
But this gives me a NoSuchMethodException, even though the getText method is public.
EDIT: To get the correct list, I am calling the method 'getList' of a different object (lists) that returns either ArrayList1 or ArrayList2 (depending on the provided parameter).
class Lists {
    public ArrayList<?> getList(int list) {
        if (list == 1) {
            return ArrayList1;
        } else if (list == 2) {
            return ArrayList2;
        }
        return null;
    }
}
Define an interface for the getText method
public interface YourInterface {
    String getText();
}
Implement the interface on the respective classes
public class Object1 implements YourInterface {
    @Override
    public String getText() {
        return "1";
    }
}

public class Object2 implements YourInterface {
    @Override
    public String getText() {
        return "2";
    }
}
Modify your getList method to return List<YourInterface>
public static List<YourInterface> getList(int list) {
    List<YourInterface> result = new ArrayList<>();
    if (list == 1) {
        // your initial type
        List<Object1> firstList = new ArrayList<>();
        result.addAll(firstList);
    } else {
        // your initial type
        List<Object2> secondList = new ArrayList<>();
        result.addAll(secondList);
    }
    return result;
}
Declaration for loopThroughList
public static void loopThroughList(List<YourInterface> list) {
    list.forEach(yourInterface -> System.out.println(yourInterface.getText()));
}
Sample usage.
public static void main(String[] args) {
    loopThroughList(getList(1));
    loopThroughList(getList(2));
}
Interfaces work great here, but there are a couple of other options if you're dealing with legacy code and cannot use interfaces.
First would be to cast the list items into their respective types:
for (Object o : lists.getList(listNumber)) {
    if (o instanceof Object1) {
        Object1 o1 = (Object1) o;
        System.out.println(o1.getText());
    } else if (o instanceof Object2) {
        Object2 o2 = (Object2) o;
        System.out.println(o2.getText());
    } else {
        System.out.println("Unknown class");
    }
}
You can also use reflection to see if the object has a getText method and then invoke it:
for (Object o : lists.getList(listNumber)) {
    try {
        System.out.println(o.getClass().getDeclaredMethod("getText").invoke(o));
    } catch (Exception e) {
        System.out.println("Object doesn't have getText method");
    }
}
This is awful. Can you elaborate on what specifically you are trying to do? Java is strongly typed by design, and you are trying to get around it. Why? Instead of Object, use the specific class, or an interface as previously suggested. If that's not possible, and you must use lists of Objects, use instanceof and casting, e.g.:
for (Object o : lists.getList(listNumber)) {
    if (o instanceof Object1) {
        Object1 o1 = (Object1) o;
        System.out.println(o1.getText());
    } else if (o instanceof Object2) {
        Object2 o2 = (Object2) o;
        System.out.println(o2.getText());
    }
}
This is where interfaces come in.
interface HasText {
    String getText();
}

class Object1 implements HasText {
    @Override
    public String getText() {
        return "1";
    }
}

class Object2 implements HasText {
    @Override
    public String getText() {
        return "2";
    }
}

private void test() {
    List<HasText> list = Arrays.asList(new Object1(), new Object2());
    for (HasText ht : list) {
        System.out.println(ht.getText());
    }
}
If one of your objects is not under your control, you can use a wrapper class.
class Object3DoesNotImplementHasText {
    public String getText() {
        return "3";
    }
}

class Object3Wrapper implements HasText {
    final Object3DoesNotImplementHasText it;

    public Object3Wrapper(Object3DoesNotImplementHasText it) {
        this.it = it;
    }

    @Override
    public String getText() {
        return it.getText();
    }
}

private void test() {
    List<HasText> list = Arrays.asList(new Object1(), new Object2(), new Object3Wrapper(new Object3DoesNotImplementHasText()));
    for (HasText ht : list) {
        System.out.println(ht.getText());
    }
}
Just to add a bit more to this answer and give you something more to think about (I'll try to keep it simple and informal): using interfaces is the proper way of doing this kind of operation. However, I want to dwell on the "bad idea":
for (Object o : lists.getList(listNumber)) {
    System.out.println(o.getClass().getMethod("getText"));
}
What you are doing here, is using a mechanism called Reflection:
Reflection is a feature in the Java programming language. It allows an executing Java program to examine or "introspect" upon itself, and manipulate internal properties of the program. For example, it's possible for a Java class to obtain the names of all its members and display them.
What you actually attempted is using that mechanism to retrieve the method through a Class reflection object instance of your class (sounds weird, doesn't it?).
From that perspective, you need to think that, if you want to invoke your method, you now have, in a sense, a meta-Class instance to manipulate your objects. Think of it like an Object that is one step above your Objects (Similarly to a dream inside a dream, in Inception). In that sense, you need to retrieve the method, and then invoke it in a different (meta-like) way:
java.lang.reflect.Method m = o.getClass().getMethod("getText");
m.invoke(o);
Using that logic, you could possibly iterate through the object list, check if method exists, then invoke your method.
This is though a bad, BAD idea.
Why? Well, the answer lies in reflection itself: reflection works at runtime - i.e. while the program executes - bypassing the compile-time world.
In other words, by doing this you are bypassing Java's compile-time error checking, allowing such errors to surface at runtime instead. This can lead to unstable behaviour of the program while executing - apart from the performance overhead of using reflection, which I will not analyze here.
Side note: while using reflection forces you to handle checked exceptions, that still does not make this a good idea - you are effectively duct-taping a bad solution.
On the other hand, you can follow the inheritance mechanism of Java through classes and interfaces - define an interface with your method (let's call it Textable), make sure that your classes implement it, and then use it as your base object in your list declaration (@alexrolea has implemented this in his answer, as has @OldCurmudgeon).
This way, your program will still make the method-call decision at runtime (via a mechanism called late binding), but you will not bypass Java's compile-time error checking. Think about it: what would happen if you declare a class as Textable without providing the getText method? A compile error! And what if you put a non-Textable object into the list of Textables? Guess what! A compile error again. And the list goes on....
In general, avoid using Reflection when you are able to do so. Reflection is useful in some cases that you need to handle your program in such a meta-way and there is no other way of making such things. This is not the case though.
UPDATE: As suggested by some answers, you can use instanceof to check if you have a specific Class object instance that contains your method, then invoke respectively. While this seems a simple solution, it is bad in terms of scaling: what if you have 1000 different classes that implement the same method you want to call?
Your objects have to implement a common interface.
interface GetTextable {
    String getText();
}

class One implements GetTextable {
    private final String text;

    public One(final String text) {
        this.text = text;
    }

    public String getText() {
        return this.text;
    }
}

class Two implements GetTextable {
    private final String text;

    public Two(final String text) {
        this.text = text;
    }

    public String getText() {
        return this.text;
    }
}

@Test
public void shouldIterate() throws Exception {
    List<GetTextable> toIterate = Arrays.asList(new One("oneText"), new Two("twoText"));
    for (GetTextable obj : toIterate) {
        System.out.println(obj.getText());
    }
}

Java design pattern to avoid duplication

I have the following classes
public class MyCustomFactory extends SomeOther3rdPartyFactory {

    // Return our custom behaviour for the 'string' type
    @Override
    public StringType stringType() {
        return new MyCustomStringType();
    }

    // Return our custom behaviour for the 'int' type
    @Override
    public IntType intType() {
        return new MyCustomIntType();
    }

    // same for boolean, array, object etc
}
Now, for example, the custom type classes:
public class MyCustomStringType extends StringType {

    @Override
    public void enrichWithProperty(final SomePropertyObject prop) {
        super.enrichWithProperty(prop);

        if (prop.getSomeAttribute("attribute01")) {
            this.doSomething();
            this.doSomethingElse();
        }

        if (prop.getSomeAttribute("attribute02")) {
            this.doSomethingYetAgain();
        }

        // other properties and actions
    }
}
But each custom type class like the string one above might have exactly the same if (prop.getSomeAttribute("blah")) { // same thing; }
Suppose I were to add another attribute: is there a nice way to avoid duplicating the if statements in each custom type class that needs them? I could move each if statement to a utility class, but I would still need to add the call to the utility method in each type class. I think we can do better.
You can create a Map<String, Consumer<MyCustomStringType>>, where the key is the attribute name and the value is the method call.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class MyCustomStringType extends StringType {

    private final Map<String, Consumer<MyCustomStringType>> map = new HashMap<>();

    {
        map.put("attribute01", o -> { o.doSomething(); o.doSomethingElse(); });
        map.put("attribute02", MyCustomStringType::doSomethingYetAgain);
        // other properties and actions
    }

    @Override
    public void enrichWithProperty(final SomePropertyObject prop) {
        super.enrichWithProperty(prop);
        map.entrySet().stream()
           .filter(entry -> prop.getSomeAttribute(entry.getKey()))
           .forEach(entry -> entry.getValue().accept(MyCustomStringType.this));
    }
}
Depending on how you initialise this class (and whether this map is always the same), you might be able to turn it into a static final immutable map.
I would also recommend naming it better, but a lot here depends on your domain and what this map and loop actually do.
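If the map really is the same for every instance, a static immutable version might look like this (a sketch assuming Java 9+ for Map.of; the ACTIONS field name is illustrative, and the StringType methods are assumed to be accessible as in the question):

import java.util.Map;
import java.util.function.Consumer;

// Sketch only: the same attribute-to-action table as a shared immutable constant.
public class MyCustomStringType extends StringType {

    private static final Map<String, Consumer<MyCustomStringType>> ACTIONS = Map.of(
            "attribute01", o -> { o.doSomething(); o.doSomethingElse(); },
            "attribute02", MyCustomStringType::doSomethingYetAgain
            // other properties and actions
    );

    @Override
    public void enrichWithProperty(final SomePropertyObject prop) {
        super.enrichWithProperty(prop);
        ACTIONS.entrySet().stream()
               .filter(entry -> prop.getSomeAttribute(entry.getKey()))
               .forEach(entry -> entry.getValue().accept(this));
    }
}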

Using enum to implement multitons in Java

I would like to have a limited fixed catalogue of instances of a certain complex interface. The standard multiton pattern has some nice features such as lazy instantiation. However it relies on a key such as a String which seems quite error prone and fragile.
I'd like a pattern that uses enum. They have lots of great features and are robust. I've tried to find a standard design pattern for this but have drawn a blank. So I've come up with my own but I'm not terribly happy with it.
The pattern I'm using is as follows (the interface is highly simplified here to make it readable):
interface Complex {
    void method();
}

enum ComplexItem implements Complex {
    ITEM1 {
        protected Complex makeInstance() { return new Complex() { ... }; }
    },
    ITEM2 {
        protected Complex makeInstance() { return new Complex() { ... }; }
    };

    private Complex instance = null;

    private Complex getInstance() {
        if (instance == null) {
            instance = makeInstance();
        }
        return instance;
    }

    protected Complex makeInstance() {
        return null;
    }

    public void method() {
        getInstance().method();
    }
}
This pattern has some very nice features to it:
the enum implements the interface which makes its usage pretty natural: ComplexItem.ITEM1.method();
Lazy instantiation: if the construction is costly (my use case involves reading files), it only occurs if it's required.
Having said that it seems horribly complex and 'hacky' for such a simple requirement and overrides enum methods in a way which I'm not sure the language designers intended.
It also has another significant disadvantage. In my use case I'd like the interface to extend Comparable. Unfortunately this then clashes with the enum implementation of Comparable and makes the code uncompilable.
One alternative I considered was having a standard enum and then a separate class that maps the enum to an implementation of the interface (using the standard multiton pattern). That works but the enum no longer implements the interface which seems to me to not be a natural reflection of the intention. It also separates the implementation of the interface from the enum items which seems to be poor encapsulation.
Another alternative is to have the enum constructor implement the interface (i.e. in the pattern above remove the need for the 'makeInstance' method). While this works it removes the advantage of only running the constructors if required). It also doesn't resolve the issue with extending Comparable.
So my question is: can anyone think of a more elegant way to do this?
In response to comments I'll tried to specify the specific problem I'm trying to solve first generically and then through an example.
There are a fixed set of objects that implement a given interface
The objects are stateless: they are used to encapsulate behaviour only
Only a subset of the objects will be used each time the code is executed (depending on user input)
Creating these objects is expensive: it should only be done once and only if required
The objects share a lot behaviour
This could be implemented with separate singleton classes for each object using separate classes or superclasses for shared behaviour. This seems unnecessarily complex.
Now an example. A system calculates several different taxes in a set of regions, each of which has its own algorithm for calculating the taxes. The set of regions is expected to never change, but the regional algorithms will change regularly. The specific regional rates must be loaded at run time via a remote service, which is slow and expensive. Each time the system is invoked it will be given a different set of regions to calculate, so it should only load the rates of the regions requested.
So:
interface TaxCalculation {
    float calculateSalesTax(SaleData data);
    float calculateLandTax(LandData data);
    ....
}

enum TaxRegion implements TaxCalculation {
    NORTH, NORTH_EAST, SOUTH, EAST, WEST, CENTRAL .... ;

    private void loadRegionalDataFromRemoteServer() { .... }
}
Recommended background reading: Mixing-in an Enum
Seems fine. I would make initialization threadsafe like this:
enum ComplexItem implements Complex {
    ITEM1 {
        protected Complex makeInstance() {
            return new Complex() { public void method() { } };
        }
    },
    ITEM2 {
        protected Complex makeInstance() {
            return new Complex() { public void method() { } };
        }
    };

    private volatile Complex instance;

    private Complex getInstance() {
        if (instance == null) {
            createInstance();
        }
        return instance;
    }

    protected abstract Complex makeInstance();

    protected synchronized void createInstance() {
        if (instance == null) {
            instance = makeInstance();
        }
    }

    public void method() {
        getInstance().method();
    }
}
The modifier synchronized appears only on the createInstance() method, but wraps the call to makeInstance() - conveying thread-safety without putting a bottleneck on calls to getInstance() and without the programmer having to remember to add synchronized to each makeInstance() implementation.
This works for me - it's thread-safe and generic. The enum must implement the Creator interface but that is easy - as demonstrated by the sample usage at the end.
This solution breaks the binding you have imposed where it is the enum that is the stored object. Here I only use the enum as a factory to create the object - in this way I can store any type of object and even have each enum create a different type of object (which was my aim).
This uses a common mechanism for thread-safety and lazy instantiation using ConcurrentMap of FutureTask.
There is a small overhead of holding on to the FutureTask for the lifetime of the program but that could be improved with a little tweaking.
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

/**
 * A Multiton where the keys are an enum and each key can create its own value.
 *
 * The create method of the key enum is guaranteed to only be called once.
 *
 * Probably worth making your Multiton static to avoid duplication.
 *
 * @param <K> - The enum that is the key in the map and also does the creation.
 */
public class Multiton<K extends Enum<K> & Multiton.Creator> {
    // The map to the future.
    private final ConcurrentMap<K, Future<Object>> multitons = new ConcurrentHashMap<K, Future<Object>>();

    // The enums must create
    public interface Creator {
        public abstract Object create();
    }

    // The getter.
    public <V> V get(final K key, Class<V> type) {
        // Has it run yet?
        Future<Object> f = multitons.get(key);
        if (f == null) {
            // No! Make the task that runs it.
            FutureTask<Object> ft = new FutureTask<Object>(
                    new Callable<Object>() {
                        public Object call() throws Exception {
                            // Only do the create when called to do so.
                            return key.create();
                        }
                    });
            // Only put if not there.
            f = multitons.putIfAbsent(key, ft);
            if (f == null) {
                // We replaced null so we successfully put. We were first!
                f = ft;
                // Initiate the task.
                ft.run();
            }
        }
        try {
            /**
             * If code gets here and hangs due to f.status = 0 (FutureTask.NEW)
             * then you are trying to get from your Multiton in your creator.
             *
             * Cannot check for that without unnecessarily complex code.
             *
             * Perhaps could use get with timeout.
             */
            // Cast here to force the right type.
            return type.cast(f.get());
        } catch (Exception ex) {
            // Hide exceptions without discarding them.
            throw new RuntimeException(ex);
        }
    }

    enum E implements Creator {
        A {
            public String create() {
                return "Face";
            }
        },
        B {
            public Integer create() {
                return 0xFace;
            }
        },
        C {
            public Void create() {
                return null;
            }
        };
    }

    public static void main(String args[]) {
        try {
            Multiton<E> m = new Multiton<E>();
            String face1 = m.get(E.A, String.class);
            Integer face2 = m.get(E.B, Integer.class);
            System.out.println("Face1: " + face1 + " Face2: " + Integer.toHexString(face2));
        } catch (Throwable t) {
            t.printStackTrace(System.err);
        }
    }
}
In Java 8 it is even easier:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class Multiton<K extends Enum<K> & Multiton.Creator> {
    private final ConcurrentMap<K, Object> multitons = new ConcurrentHashMap<>();

    // The enums must create
    public interface Creator {
        public abstract Object create();
    }

    // The getter.
    public <V> V get(final K key, Class<V> type) {
        return type.cast(multitons.computeIfAbsent(key, k -> k.create()));
    }
}
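Usage is the same as before. A small self-contained sketch; the Suit enum here is made up purely for illustration:

// Hypothetical usage sketch for the Java 8 Multiton above.
public class MultitonDemo {
    enum Suit implements Multiton.Creator {
        HEARTS {
            public String create() { return "red"; }
        },
        SPADES {
            public String create() { return "black"; }
        };
    }

    public static void main(String[] args) {
        Multiton<Suit> colours = new Multiton<>();
        // create() runs at most once per key; later calls return the cached value
        System.out.println(colours.get(Suit.HEARTS, String.class));
        System.out.println(colours.get(Suit.SPADES, String.class));
    }
}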
One thought about this pattern: the lazy instantiation isn't thread safe. This may or may not be okay, it depends on how you want to use it, but it's worth knowing. (Considering that enum initialisation in itself is thread-safe.)
Other than that, I can't see a simpler solution that guarantees full instance control, is intuitive and uses lazy instantiation.
I don't think it's an abuse of enum methods either, it doesn't differ by much from what Josh Bloch's Effective Java recommends for coding different strategies into enums.
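For reference, the eager variant mentioned in the question (construct the instance in the enum constructor) is inherently thread-safe, because enum constants are initialised during class initialisation; the trade-off is that laziness is lost. A minimal sketch, assuming the Complex interface from the question; the EagerComplexItem name is made up:

// Sketch only: eager initialisation in the enum constructor.
// Thread-safe by virtue of class initialisation, but every instance is
// built as soon as the enum class is loaded, even if it is never used.
enum EagerComplexItem implements Complex {
    ITEM1(new Complex() { public void method() { /* ... */ } }),
    ITEM2(new Complex() { public void method() { /* ... */ } });

    private final Complex instance;

    EagerComplexItem(Complex instance) {
        this.instance = instance;
    }

    public void method() {
        instance.method();
    }
}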

Anonymous or real class definition when using visitor pattern?

When you use the Visitor pattern and you need to get a variable inside a visitor method, how do you proceed?
I see two approaches. The first one uses an anonymous class:
// need a wrapper to get the result (which is just a String)
final StringBuilder result = new StringBuilder();
final String concat = "Hello ";

myObject.accept(new MyVisitor() {
    @Override
    public void visit(ClassA o)
    {
        // this concatenation is expected here because I've simplified the example
        // normally, the concat var is a complex object (like a hashtable)
        // used to create the result variable
        // (I know that concatenation using StringBuilder is ugly, but this is an example!)
        result.append(concat + "A");
    }

    @Override
    public void visit(ClassB o)
    {
        result.append(concat + "B");
    }
});

System.out.println(result.toString());
Pros & Cons :
Pros : you do not need to create a class file for this little behavior
Cons : I don't like the "final" keyword in this case: the anonymous class is less readable because it captures external variables, and you need a wrapper to get the requested value (because with the keyword final, you can't reassign the variable)
Another way is to use an external visitor class:
public class MyVisitor
{
    private String result;
    private String concat;

    public MyVisitor(String concat)
    {
        this.concat = concat;
    }

    @Override
    public void visit(ClassA o)
    {
        result = concat + "A";
    }

    @Override
    public void visit(ClassB o)
    {
        result = concat + "B";
    }

    public String getResult()
    {
        return result;
    }
}

MyVisitor visitor = new MyVisitor("Hello ");
myObject.accept(visitor);
System.out.println(visitor.getResult());
MyVisitor visitor = new MyVisitor("Hello ");
myObject.accept(visitor);
System.out.println(visitor.getResult());
Pros & Cons :
Pros : all variables are defined in a clean scope, you don't need a wrapper to encapsulate the requested variable
Cons : needs an external file, and the getResult() method must be called after the accept method; this is quite ugly because you need to know the call order to use the visitor correctly
You, what's your approach in this case ? Preferred method ? another idea ?
Well, both approaches are valid and imo, it really depends on whether you would like to reuse the code or not. By the way, your last 'Con' point is not totally valid since you do not need an 'external file' to declare a class. It might very well be an inner class...
That said, the way I use Visitors is like this:
public interface IVisitor<T extends Object> {
    public T visit(ClassA element) throws VisitorException;
    public T visit(ClassB element) throws VisitorException;
}

public interface IVisitable {
    public <T extends Object> T accept(final IVisitor<T> visitor) throws VisitorException;
}

public class MyVisitor implements IVisitor<String> {
    private String concat;

    public MyVisitor(String concat) {
        this.concat = concat;
    }

    public String visit(ClassA classA) throws VisitorException {
        return this.concat + "A";
    }

    public String visit(ClassB classB) throws VisitorException {
        return this.concat + "B";
    }
}

public class ClassA implements IVisitable {
    public <T> T accept(final IVisitor<T> visitor) throws VisitorException {
        return visitor.visit(this);
    }
}

public class ClassB implements IVisitable {
    public <T> T accept(final IVisitor<T> visitor) throws VisitorException {
        return visitor.visit(this);
    }
}

// no return value needed?
public class MyOtherVisitor implements IVisitor<Void> {
    public Void visit(ClassA classA) throws VisitorException {
        return null;
    }

    public Void visit(ClassB classB) throws VisitorException {
        return null;
    }
}
That way, the visited objects are ignorant of what the visitor wants to do with them, yet they do return whatever the visitor wants to return. Your visitor can even 'fail' by throwing an exception.
I wrote the first version of this a few years ago and so far, it has worked for me in every case.
Disclaimer: I just hacked this together, quality (or even compilation) not guaranteed. But you get the idea... :)
I do not see an interface being implemented in your second example, but I believe it is there. I would add to your interface (or make a sub-interface) a getResult() method.
That would help both examples 1 and 2. You would not need a wrapper in example 1, because you can define the getResult() method to return the result you want. In example 2, because getResult() is part of your interface, there is no call order that you 'need to know'.
My preference would be to create a new class, unless each variation of the class is only going to be used once. In which case I would inline it anonymously.
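A sketch of that sub-interface idea (it assumes the question's MyVisitor type is, or has, an interface; the ResultVisitor and ConcatVisitor names are made up):

// Sketch only: expose the result through the visitor's own type.
interface ResultVisitor extends MyVisitor {
    String getResult();
}

// The second example's visitor class would then declare:
//     public class ConcatVisitor implements ResultVisitor { ... }
// and calling code works against ResultVisitor without knowing the concrete class:
//     ResultVisitor visitor = new ConcatVisitor("Hello ");
//     myObject.accept(visitor);
//     System.out.println(visitor.getResult());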
From the perspective of a cleaner design, the second approach is preferrable for the same exact reasons you've already stated.
In a normal TDD cycle I would start off with an anonymous class and refactor it out a bit later. However, if the visitor were only needed in that one place and its complexity matched that of the example (i.e. not complex), I would leave it as is and refactor to a separate class later if needed (e.g. another use case appears, or the complexity of the visitor/surrounding class increases).
I would recommend using the second approach. Having the visitor as a full-fledged class also serves the purposes of documentation and clean code. I do not agree with the cons you have mentioned for that approach: say you have a map, you never put anything into it, and then call get; you will surely get null back, but that doesn't mean the design is necessarily wrong.
One of the points of the visitor pattern is to allow for multiple visitor types. If you create an anonymous class, you are kind of breaking the pattern.
You should change your accept method to be
public void accept(Visitor visitor) {
    visitor.visit(this);
}
Since you pass this into the visitor - this being the object that is visited - the visitor can access the object's properties according to the standard access rules.

Array of function pointers in Java [duplicate]

This question already has answers here:
How to call a method stored in a HashMap? (Java) [duplicate]
I have read this question and I'm still not sure whether it is possible to keep pointers to methods in an array in Java. If anyone knows if this is possible (or not), it would be a real help. I'm trying to find an elegant solution of keeping a list of Strings and associated functions without writing a mess of hundreds of if statements.
Cheers
Java doesn't have a function pointer per se (or "delegate" in C# parlance). This sort of thing tends to be done with anonymous subclasses.
public interface Worker {
    void work();
}

class A {
    void foo() { System.out.println("A"); }
}

class B {
    void bar() { System.out.println("B"); }
}

A a = new A();
B b = new B();

Worker[] workers = new Worker[] {
    new Worker() { public void work() { a.foo(); } },
    new Worker() { public void work() { b.bar(); } }
};

for (Worker worker : workers) {
    worker.work();
}
You can achieve the same result with the functor pattern. For instance, having an abstract class:
abstract class Functor
{
    public abstract void execute();
}
Your "functions" would be in fact the execute method in the derived classes. Then you create an array of functors and populate it with the apropriated derived classes:
class DoSomething extends Functor
{
    public void execute()
    {
        System.out.println("blah blah blah");
    }
}

Functor[] myArray = new Functor[10];
myArray[5] = new DoSomething();
And then you can invoke:
myArray[5].execute();
It is possible: you can use an array of Method objects, grabbing them via the Reflection API (edit: they're not functions, since they're not standalone and have to be associated with a class instance, but they'd do the job -- just don't expect something like closures).
Java does not have pointers (only references), nor does it have functions (only methods), so it's doubly impossible for it to have pointers to functions. What you can do is define an interface with a single method in it, have your classes that offer such a method declare they implement said interface, and make a vector with references to such an interface, to be populated with references to the specific objects on which you want to call that method. The only constraint, of course, is that all the methods must have the same signature (number and type of arguments and returned values).
Otherwise, you can use reflection/introspection (e.g. the Method class), but that's not normally the simplest, most natural approach.
I found the reflection approach the cleanest -- I added a twist to this solution since most production classes have nested classes, and I didn't see any examples that demonstrate this (but I didn't look for very long either). My reason for using reflection is that my "updateUser()" method below had a bunch of redundant code and just one line in the middle that changed (for every field in the user object) to update the user object:
NameDTO.java
public class NameDTO {
    String first, last;

    public String getFirst() {
        return first;
    }

    public void setFirst(String first) {
        this.first = first;
    }

    public String getLast() {
        return last;
    }

    public void setLast(String last) {
        this.last = last;
    }
}
UserDTO.java
public class UserDTO {
    private NameDTO name;
    private Boolean honest;

    public UserDTO() {
        name = new NameDTO();
        honest = new Boolean(false);
    }

    public NameDTO getName() {
        return name;
    }

    public void setName(NameDTO name) {
        this.name = name;
    }

    public Boolean getHonest() {
        return honest;
    }

    public void setHonest(Boolean honest) {
        this.honest = honest;
    }
}
Example.java
import java.lang.reflect.Method;

public class Example {
    public Example() {
        UserDTO dto = new UserDTO();
        try {
            Method m1 = dto.getClass().getMethod("getName", null);
            NameDTO nameDTO = (NameDTO) m1.invoke(dto, null);

            Method m2 = nameDTO.getClass().getMethod("setFirst", String.class);
            updateUser(m2, nameDTO, "Abe");

            m2 = nameDTO.getClass().getMethod("setLast", String.class);
            updateUser(m2, nameDTO, "Lincoln");

            m1 = dto.getClass().getMethod("setHonest", Boolean.class);
            updateUser(m1, dto, Boolean.TRUE);

            System.out.println(dto.getName().getFirst() + " " + dto.getName().getLast() + ": honest=" + dto.getHonest().toString());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void updateUser(Method m, Object o, Object v) {
        // lots of code here
        try {
            m.invoke(o, v);
        } catch (Exception e) {
            e.printStackTrace();
        }
        // lots of code here -- including a retry loop to make sure the
        // record hadn't been written since my last read
    }

    public static void main(String[] args) {
        Example mp = new Example();
    }
}
You are right that there are no pointers in Java: a reference variable is similar to the & (reference) syntax in C/C++ in that it refers to an object, but there is no raw *, because the JVM may relocate objects on the heap as needed, which would otherwise leave a pointer dangling and cause a crash. But a method is just a function that lives inside a class object, so you are wrong to say there are no functions: a method is simply a function encapsulated inside an object.
As far as function pointers go, the Java team endorses the use of interfaces and nested classes, which is all fine and dandy, but being a C++/C# programmer who uses Java from time to time, I use a Delegate class I made for Java because I find it more convenient when I need to pass a function and only want to declare the return type of the method delegate.
It all depends on the programmer.
I read the white paper on why delegates are not supported, but I disagree and prefer to think outside the box on that topic.
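For completeness, on Java 8+ the standard functional interfaces cover much of what a hand-rolled Delegate class provides. A sketch only; the author's actual Delegate class is not shown, and the DelegateDemo name and hello() method are made up:

import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

// Sketch only: a delegate-like construct using java.util.function (Java 8+).
// Supplier<R> plays the role of a delegate that only declares a return type.
public class DelegateDemo {
    static String hello() {
        return "hello";
    }

    public static void main(String[] args) {
        // A generic array of Supplier<String> would need an unchecked cast,
        // so a List is the idiomatic container here.
        List<Supplier<String>> delegates = Arrays.asList(
                DelegateDemo::hello,   // method reference
                () -> "world"          // lambda
        );
        for (Supplier<String> d : delegates) {
            System.out.println(d.get());
        }
    }
}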
