I am studying Design Patterns and I have a situation where I am not sure what would be the better practice:
I have a class "Category" which has several fields: a name, two kinds of URLs, and a list of related objects. There is a method 'toHtml()' which basically generates some HTML from instances of that class.
There are 4 different types of 'Categories' which have exactly the same fields, but the 'toHtml()' method should give a different result for each one.
I am not sure if I should pass a "type" parameter and use a series of ifs/a switch statement to generate the different HTML, or make the Category class abstract and create several subclasses that override the toHtml() method, and then use a CategoryFactory class to generate them. In both cases I need to pass a 'type' parameter.
I tried to think about the 'Closed for modification, open for extension' OOP rule. But in this case, if I want to add a 'fifth' category type that generates different HTML, for the first solution I only need to modify the toHtml() method (adding one more if), while for the second solution I need to create an additional subclass AND modify the CategoryFactory class.
What would be the better practice? Is there any other rule I should follow when I have this kind of dilemma?
First, I believe you are referring to the Factory Method, and not the Abstract Factory Pattern.
The main difference being, in the former you define a common template for a single product, whereas in the latter you define a template for a family of products. For more information, you could look here.
In your case, you wish to define a template for Category. With this assumption, here is what your group of classes would look like:
abstract class Category {
    // Common behaviour shared by every category type lives here.
    public void doSomething() {
        Foo f = makeFoo();
        f.whatever();
    }

    // Each subclass supplies its own implementations.
    protected abstract Foo makeFoo();
    public abstract void toHtml();
}

class Category1 extends Category {
    public override void toHtml() {
        // do something here
    }
    // makeFoo() override omitted for brevity
}

class Category2 extends Category {
    public override void toHtml() {
        // do something else here
    }
    // makeFoo() override omitted for brevity
}
It is true that this is certainly a lot of code, and it could easily be represented like this:
class Category {
    public void toHtml(Integer param) {
        if (param == 1) {
            // do something for Category1
        } else {
            // do something for Category2
        }
    }
}
At the end of the day, it really is a design decision. There are some factors you can consider. Is this going to be a constantly changing class? Is this going to be declared global for the customer to use? How do you want the customer to be able to use this?
The easier thing at this point would be to take the path of least resistance. Having one class to service all categories certainly results in less code, and in Salesforce less code is always a better thing. But consider this: abstracting your functionality into separate classes makes for more maintainable code. You may find it easier to write one class and a wall of if statements, but tomorrow when you're not around and there's a critical failure and someone has to look through your code to figure out exactly which if caused the problem, they'll curse you for it.
Keep in mind that inheritance is an all or nothing mechanism. You may find it particularly useful to use if you have some common functionality, in which case you can choose to abstract that out into the parent class and have your children take care of the specifics.
If you create a subclass of Category and override the toHtml() method, why do you need a factory pattern at all? The toHtml() method of the runtime-resolved class will be called if you invoke it through a base-class reference. This implies that if you add a new Category subclass and override the toHtml() method in it, it should work fine.
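For example (a minimal sketch using the subclass names from the code above): once the caller holds a Category reference, the runtime type alone decides which toHtml() runs.
Category category = new Category2(); // or Category1, or any future subclass
category.toHtml();                   // runs Category2's implementation; no factory or if/else needed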
I am reading a Java book and I am stuck again, this time thinking about what this whole paragraph actually means:
Interfaces are designed to support dynamic method resolution at run time. Normally, in order for a method to be called from one class to another, both classes need to be present at compile time so the Java compiler can check to ensure that the method signatures are compatible. This requirement by itself makes for a static and nonextensible classing environment. Inevitably in a system like this, functionality gets pushed up higher and higher in the class hierarchy so that the mechanisms will be available to more and more subclasses. Interfaces are designed to avoid this problem. They disconnect the definition of a method or set of methods from the inheritance hierarchy. Since interfaces are in a different hierarchy from classes, it is possible for classes that are unrelated in terms of the class hierarchy to implement the same interface. This is where the real power of interfaces is realized.
First question: what does the author mean by saying from one class to another? Does he mean that those classes are related in terms of the hierarchy? I mean, assigning subclass object reference to its superclass type variable and then calling a method?
Second question: what does the author again mean by saying This requirement by itself makes for a static and nonextensible classing environment? I don't understand the meaning of "makes for" (English is not my main language) and why the environment is called static and nonextensible.
Third question: what does he mean by saying functionality gets pushed up higher and higher? Why does it get pushed up higher and higher? What functionality? Also, mechanisms will be available to more and more subclasses. What mechanisms? Methods?
Fourth question: Interfaces are designed to avoid this problem. What problem???
I know the answers must be obvious, but I don't know them, maybe mainly because I don't understand some of the English phrases. Please help me understand what this whole paragraph is saying.
Between any two classes. If your code contains a call to String.substring() for example, the String class and its substring() method must be available at compile time.
As said, "makes for" means the same as "creates". The environment is non-extensible because everything you may want to use must be available at compile time. (This isn't 100% true though. Abstract classes and methods provide extension points even when no interfaces are present, but they aren't very flexible as we're going to see.)
Imagine that you have two classes: Foo and Bar. Both classes extend the class Thingy. But then you want to add a new functionality, let's say you want to display both in HTML on a web page. So you add a method to both that does that.
The basic problem
abstract class Thingy { ... }

class Foo extends Thingy {
    ...
    public String toHTMLString() {
        ...
    }
}

class Bar extends Thingy {
    ...
    public String toHTMLString() {
        ...
    }
}
This is great but how do you call this method?
public String createWebPage( Thingy th ) {
    ...
    if (th instanceof Foo)
        return ((Foo)th).toHTMLString();
    if (th instanceof Bar)
        return ((Bar)th).toHTMLString();
    ...
}
Clearly this way isn't flexible at all. So what can you do? Well, you can push toHTMLString() up into their common ancestor, Thingy. (And this is what the book is talking about.)
A naive attempt to resolve it
abstract class Thingy {
    ...
    public abstract String toHTMLString();
}

class Foo extends Thingy {
    ...
    public String toHTMLString() {
        ...
    }
}

class Bar extends Thingy {
    ...
    public String toHTMLString() {
        ...
    }
}
And then you can call it like this:
public String createWebPage( Thingy th ) {
    ...
    return th.toHTMLString();
}
Success! Except now you've forced every class extending Thingy to implement a toHTMLString() method, even if it doesn't make sense for some of them. Even worse, what if the two objects do not extend anything explicitly, they're completely unrelated? You'd have to push the method up all the way into their common ancestor, which is java.lang.Object. And you can't do that.
Solution with interfaces
So what can we do with interfaces?
abstract class Thingy { ... }

interface HTMLPrintable {
    public String toHTMLString();
}

class Foo extends Thingy implements HTMLPrintable {
    ...
    public String toHTMLString() {
        ...
    }
}

class Bar extends Thingy implements HTMLPrintable {
    ...
    public String toHTMLString() {
        ...
    }
}

// We've added another class that isn't related to all of the above but is still HTMLPrintable;
// with interfaces we can do this.
class NotEvenAThingy implements HTMLPrintable {
    public String toHTMLString() {
        ...
    }
}
And the calling code will be simply
public String createWebPage( HTMLPrintable th ) {
    ...
    return th.toHTMLString(); // "implements HTMLPrintable" guarantees that this method exists
}
What are interfaces then?
There are many metaphors used to understand interfaces, the most popular is probably the idea of a contract. What it says to the caller is this: "If you need X done, we'll get it done. Don't worry about how, that's not your problem." (Although the word "contract" is often used in a more general sense, so be careful.)
Or in another way: if you want to buy a newspaper, you don't care if it's sold in a supermarket, a newsagent's or a small stall in the street, you just want to buy a newspaper. So NewspaperVendor in this case is an interface with one method: sellNewsPaper(). And if someone later decides to sell newspapers online or door-to-door, all they need to do is implement the interface and people will buy from them.
But my favourite example is the little sticker in shop windows that says "we accept X,Y and Z credit cards". That's the purest real-world example of an interface. The shops could sell anything (they may not even be shops, some might be restaurants), the card readers they use are different too. But you don't care about all of that, you look at the sign and you know you can pay with your card there.
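As a hedged sketch of the newspaper metaphor above (all of these class names are made up for illustration, only the interface and method name come from the text):
class Newspaper { }

interface NewspaperVendor {
    Newspaper sellNewsPaper();
}

class Supermarket implements NewspaperVendor {
    public Newspaper sellNewsPaper() { return new Newspaper(); }
}

class OnlineShop implements NewspaperVendor {
    public Newspaper sellNewsPaper() { return new Newspaper(); }
}

// The buyer depends only on the interface, not on who is doing the selling.
class Buyer {
    Newspaper buyFrom(NewspaperVendor vendor) {
        return vendor.sellNewsPaper();
    }
}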
The key to the paragraph is "classes need to be present at compile time" in its second sentence. Classes are more concrete, while interfaces are abstract.
Because classes are concrete, the designer and programmer need to know the whole class structure and how the methods are implemented. Interfaces are more abstract: they contain abstract methods only, so the programmer only needs to know what methods an interface has and the signatures of those methods, not the details of how they are implemented.
Thus using interfaces is easier and better when making subclasses; you only need to know the method signatures of the interface.
With a concrete class we have to implement a method's functionality high in the class hierarchy, while using an interface avoids this problem. (There is a related concept, polymorphism, that you will probably learn later.)
Consider that we have a Car object. The acceleration and braking features are implemented using the strategy pattern. But what if we want to introduce a nitro gas feature to an existing car object? What design pattern can I use?
I want to add the nitro feature (attribute) after creating the car object.
You can check the Decorator pattern; it can be used to dynamically add functionality to an existing object.
The Decorator pattern can add different functionalities to objects dynamically, but these functionalities have to be implemented in a concrete decorator. The developer can decide what functionalities to add at run time.
In statically typed languages you cannot add methods to an object at runtime. The compiler, when it encounters a statement like car.nitroAccelerate(), checks whether the car object implements any interface that has a nitroAccelerate method. If you could add (or remove) methods during runtime, such checks would be impossible.
Dynamic languages allow adding methods at runtime. But this has a drawback: when you put car.nitroAccelerate() in the code, you need to carefully analyze whether the car object at this point actually has such a method.
You can use a decorator to modify existing methods at runtime, but in doing so you are not modifying an existing object, just creating a new one that wraps the old one.
So if you do something like:
Car fasterCar = new CarWithNitro(car);
and some piece of your code still holds a reference to the original car, this original car would not be faster, because the act of wrapping does not modify the original.
If you want to add new methods, you need to create a new subclass and/or use delegation. This will be necessary if the "nitro" feature requires an explicit method call to activate.
If however all you want to do is add to existing functionality without adding methods, Decorator is a good bet. Let's say that interface "Car" has a method called floorIt(). In that case, you can add a "nitro kick" to floorIt with a Decorator without having to add to the Car interface.
Of course, there's a middle ground. If you use runtime type discovery and/or multiple interfaces, you can both use Decorator and add methods to the resulting object.
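A rough sketch of that idea, assuming a simple Car interface with floorIt() as used in this discussion (BasicCar, CarWithNitro and the printed messages are illustrative):
interface Car {
    void floorIt();
}

class BasicCar implements Car {
    public void floorIt() {
        System.out.println("Flooring it...");
    }
}

// Decorator: wraps an existing Car and adds behaviour to floorIt()
// without touching the Car interface or the wrapped object.
class CarWithNitro implements Car {
    private final Car wrapped;

    CarWithNitro(Car wrapped) {
        this.wrapped = wrapped;
    }

    public void floorIt() {
        wrapped.floorIt();                         // existing behaviour
        System.out.println("Nitro kick engaged!"); // added behaviour
    }
}

class NitroDemo {
    public static void main(String[] args) {
        Car fasterCar = new CarWithNitro(new BasicCar());
        fasterCar.floorIt(); // base acceleration, then the nitro kick
    }
}
Note that anything still holding the original BasicCar reference keeps the old behaviour, which is exactly the caveat described above.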
I'll show you two methods of creating it; one is neither good nor bad and the other one is better.
Method 1
You can use generics and an interface for the new feature. The advantage is that you don't need to wrap the object in a new one just to add a new field or attribute.
Create a Feature interface.
public interface Feature<T> {
    T getConcreteClass();
    void setConcreteClass(T concreteClass);
}
Add it to the concrete class.
public class Car<ReturnType> {
    // acceleration and braking features are added using the strategy pattern
    public Feature<ReturnType> newFeature;

    public Car(Feature<ReturnType> features) {
        this.newFeature = features;
    }

    public Feature<ReturnType> getNewFeature() {
        return newFeature;
    }

    public void setNewFeature(Feature<ReturnType> newFeature) {
        this.newFeature = newFeature;
    }
}
Problem: The problem is that the new feature can be NULL. It also breaks the Single Responsibility Principle because the car has the responsibility of adding the feature. And it would propagate more generics into some abstract classes later on.
Car<Void> car1 = new Car<>(null);
car1.getNewFeature(); //catch Exception here!
And you also have to catch the NullPointerException for that. Even with the Visitor design pattern, its accept(Visitor v) method is always there even if there's no visitor yet.
Advantage: So, is there anything good about it? Yes, there is!
Now, we're gonna create an AbstractFeature class first.
public abstract class AbstractFeature<T> implements Feature<T> {
    protected T concreteClass;

    @Override
    public T getConcreteClass() {
        return concreteClass;
    }

    @Override
    public void setConcreteClass(T concreteClass) {
        this.concreteClass = concreteClass;
    }
}
Now we're gonna create whatever kind of new feature class we want. Let's say we wanna add NitrogenGas to the car. Let's create it first. Note: it's a standalone class; you can add as many fields as you want here. We'll use this class later.
public class NitrogenGas {
//some methods
public String fillGas() {
return "Filling NitrogenGas";
}
}
And let's add a feature later like this! Just use the AbstractFeature class. That's the reusability.
public class NitrogenGasFeature extends AbstractFeature<NitrogenGas> {
}
Now you can add a new feature to the Car class like this! The NitrogenGasFeature instance holds the concrete class we want to use.
Car<NitrogenGas> car2 = new Car<>(new NitrogenGasFeature()); //Feature added
car2.getNewFeature().setConcreteClass(new NitrogenGas()); //Concrete Class added
car2.getNewFeature().getConcreteClass().fillGas(); //use its method.
Let's try and use a wrapper class like Boolean, just in case you need it for backtracking or something (BooleanFeature would simply extend AbstractFeature<Boolean>, like NitrogenGasFeature above).
Car<Boolean> car2 = new Car<>(new BooleanFeature());
car2.getNewFeature().setConcreteClass(new Boolean(true));
System.out.println(car2.getNewFeature().getConcreteClass().booleanValue());
FYI, we could create BacktrackingFeature and Backtracking classes for readability, but that would add more code. So, use it wisely.
This is method 1. It breaks the Single Responsibility Principle and adds more dependencies and more code, but it increases reusability and readability. Sometimes, when you think hard about the code, even inheritance breaks the Single Responsibility Principle, and that's why we use composition with an interface. Rules are sometimes bent for the greater good.
Method 2
The DTO pattern: you can do it like the other answer suggests. Just create a new object and wrap the existing one, adding any fields or attributes you want. That's the better way; it doesn't break the Single Responsibility Principle, and it is more meaningful OOP. But the decision is up to you.
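A minimal sketch of that wrapping approach, assuming a plain Car class (NitroCar and its members are illustrative names, not from the original code):
class NitroCar {
    private final Car car;        // the original, unmodified car
    private boolean nitroEnabled; // the new attribute

    NitroCar(Car car) {
        this.car = car;
    }

    Car getCar() {
        return car;
    }

    void enableNitro() {
        nitroEnabled = true;
    }

    boolean isNitroEnabled() {
        return nitroEnabled;
    }
}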
I'm working on a legacy Java application, that deals with "fruits" and "vegetables", let's say, for the sake of the question.
They are treated as different things internally, because they don't have all methods/properties in common, but a lot of things are DONE very similarly to both of them.
So, we have a ton of methods doSomethingWithAFruit(Fruit f) and doSomethingWithAVegetable(Veg v), that use the proper doOtherStuffWithAFruit(Fruit f) / doOtherStuffWithAVeg(Veg v). And those are very similar, except that methods that do things with fruits only call the methods that do things with fruits, and the same thing for vegetables.
I want to refactor this to reduce the duplication, but I'm not sure what is the best way to accomplish that. I've read a bit about some design patterns, but I don't know if it has made anything clearer to me. (I can recognize some patterns in the code I use, but I don't really know when I should be applying a pattern to improve things. Maybe I should be reading more about refactoring itself...)
I was thinking of these two options:
1. Creating a class that can have an instance of either a Fruit or a Vegetable and pass it around to the methods, trying to minimize the duplication. It would go like this:
public void doSomething(Plant p) {
    // do the stuff that is common, and then...
    if (p.hasFruit()) {
        doThingWithFruit(p.getFruit());
    } else {
        doThingWithVegetable(p.getVegetable());
    }
}
This would get things a bit better, but I don't know... it still feels wrong.
2. The other alternative I thought was to put an interface in Fruit and Vegetable with the stuff that is common to them, and use that to pass it around. I feel this is the cleaner approach, although I will have to use instanceof and cast to Fruit/Vegetable when it needs stuff that is specific to them.
So, what more can I do here? And what are the shortcomings of these approaches?
UPDATE: Note that the question is a bit simplified; I'm looking for a way to do things WITH the "plants", that is, code that mostly "uses" them instead of doing things TO them. Having said that, those similar methods I refer to cannot be inside the "Plants" classes, and they usually have another argument, like:
public void createSomethingUsingFruit(Something s, Fruit f);
public void createSomethingUsingVegetable(Something s, Vegetable v);
Namely, those methods have other concerns besides Fruits/Vegetables, and aren't really appropriate to be in any Fruit/Vegetable class.
UPDATE 2: Most code in those methods only reads state from the Fruit/Vegetable objects, creates instances of other classes according to the appropriate type, stores them in the database and so on -- from my answer to a question in the comments that I think is important.
I think the 2nd approach would be better. Designing to an interface is always a better way to design; that way you can switch implementations easily.
And if you use interfaces, you won't need to typecast, as you can easily exploit polymorphism: you will have a base class reference pointing to a subclass object.
But if you want to keep only the methods common to fruits and vegetables in your interface, and the specific implementation in your implementation classes, then typecasting would be required.
So, you can have a generic method at the interface level, and more specific methods at the implementation level.
public interface Food {
    public void eat(Food food);
}

public class Fruit implements Food {
    // Can have an interface reference as the parameter, but will take a Fruit object
    public void eat(Food food) {
        /** Fruit specific task **/
    }
}

public class Vegetable implements Food {
    // Can have an interface reference as the parameter, but will take a Vegetable object
    public void eat(Food food) {
        /** Vegetable specific task **/
    }
}
public class Test {
    public static void main(String args[]) {
        Food fruit = new Fruit();
        fruit.eat(new Fruit()); // Invoke Fruit version

        Food vegetable = new Vegetable();
        vegetable.eat(new Vegetable()); // Invoke Vegetable version
    }
}
OK, I have modified the code to make the eat() method take a parameter of type Food. That does not make much of a difference: you can pass a Vegetable object to a Food reference.
Another option you can use, or perhaps include it as part of your solution, is to ask the consumer if it can manage the object that you are passing it. At this point, it becomes the consumer's responsibility to ensure it knows how to handle the object you are sending it.
For instance, if your consumer is called Eat, you would do something like:
Consumer e = new Eat();
Consumer w = new Water();
if( e.canProcess( myFruit ) )
e.doSomethingWith( myFruit );
else if ( w.canProcess( myFruit ) )
w.doSomethingWith( myFruit );
.... etc
But then you end up with a lot of if/else clauses, so you create yourself a Factory which determines which consumer you want. Your Factory basically does the if/else branching to determine which consumer can handle the object you pass, and returns that consumer to you.
So it looks something like
public class Factory {
    public static Consumer getConsumer( Object o ){
        Consumer e = new Eat();
        Consumer w = new Water();
        if( e.canProcess( o ) )
            return e;
        else if ( w.canProcess( o ) )
            return w;
        // no consumer can handle this object
        throw new IllegalArgumentException( "No consumer for " + o );
    }
}
Then your code becomes:
Consumer c = Factory.getConsumer( myFruit );
c.doSomethingWith( myFruit );
Of course, in the canProcess method of the consumer, it would basically be an instanceof check or some other function you derive to determine if it can handle your class.
public class Eat implements Consumer{
public boolean canProcess(Object o ){
return o instanceof Fruit;
}
}
So you end up shifting the responsibility from your class to a Factory class to determine which objects can be handled. The trick, of course, is that all Consumers must implement a common interface.
I realize that my pseudo-code is very basic, but it is just to point out the general idea. This may or may not work in your case and/or become overkill depending on how your classes are structured, but if well designed, it can significantly improve the readability of your code and truly keep all logic for each type self-contained in its own class, without instanceof and if/then scattered everywhere.
If you have functionality that is specific to fruits and vegetables respectively, and a client using both types has to distinguish between them (using instanceof), that is a cohesion vs. coupling problem.
Maybe consider whether said functionality is not better placed near Fruit and Vegetable themselves instead of with the client. The client may then somehow be referred to the functionality (through a generic interface) without caring what instance it is dealing with. Polymorphism would be preserved, at least from the client's perspective.
But that is theoretical and may not be practical or be over-engineered for your use case. Or you could end up actually just hiding instanceof somewhere else in your design. instanceof is going to be a bad thing when you start having more inheritance siblings next to fruits and vegetables. Then you would start violating the Open Closed Principle.
I would create an abstract base class (let's say Food, but I don't know your domain, something else might fit better) and start to migrate methods to it one after another.
In case you see that 'doSomethingWithVeg' and 'doSomethingWithFruit' are slightly different, create the 'doSomething' method in the base class and use abstract methods for only the parts that are different (I guess the main business logic can be unified, and only minor issues like writing to a DB/file are different).
When you have one method ready, test it. After you're sure it's OK, go on to the next one. When you are done, the Fruit and Veg classes shouldn't have any methods but the implementations of the abstract ones (the tiny differences between them).
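A rough sketch of what that migration could end up looking like, assuming the differing part is something like persistence (the method names here are illustrative, not from the original code):
abstract class Food {
    // The shared business logic lives once, in the base class.
    public void doSomething() {
        // ... common steps ...
        persist(); // only the part that actually differs is delegated
    }

    // Each subclass supplies just the part that differs.
    protected abstract void persist();
}

class Fruit extends Food {
    @Override
    protected void persist() {
        // fruit-specific write to DB/file
    }
}

class Vegetable extends Food {
    @Override
    protected void persist() {
        // vegetable-specific write to DB/file
    }
}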
Hope it helps..
This is a basic OOD question, since fruits and vegetables are both types of Plant.
I would suggest:
interface Plant {
    void doSomething();
}

class Vegetable implements Plant {
    public void doSomething() {
        // ....
    }
}
and the same with Fruit.
It seems to me that the doOtherStuff methods should be private to the relevant class.
You can also consider having them both implement multiple interfaces instead of just one. That way you code against the most meaningful interface according to the circumstances, which will help avoid casting. Kind of what Comparable<T> does: it helps methods (like the ones that sort objects) that don't care what the objects are, where the only requirement is that they are comparable. E.g. in your case both can implement an interface called Edible, and you can then pass both of them as Edible wherever an Edible is expected.
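For instance (Edible, getCalories() and MealPlanner are hypothetical names, not taken from your code):
interface Edible {
    int getCalories();
}

// A method that only needs "something edible" accepts the narrow interface,
// so both Fruit and Vegetable (each implementing Edible) can be passed
// without any casting.
class MealPlanner {
    int totalCalories(java.util.List<? extends Edible> items) {
        int total = 0;
        for (Edible e : items) {
            total += e.getCalories();
        }
        return total;
    }
}
The design choice is that each interface stays small and focused, so adding a new sibling type later only requires implementing the interfaces that actually apply to it.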
I’m currently facing a design problem and would appreciate advice on how I could resolve it:
The problem
I will use an example to illustrate my problem (note this is just an example):
Suppose you have an interface called Pass with methods listed:
public interface Pass {
public boolean hasPassedA();
public boolean hasPassedB();
public boolean hasPassedC();
}
Suppose you have a class which implement this interface called Assessor:
public class Assessor implements Pass{
// how should I implement this class ??
}
Finally Student class:
public class Student {
// some code that defines student behaviour not important.
}
The question is then how can I make the interaction between the Assessor and the student object a lot more flexible?
What I noticed is that an Assessor object should be something that is abstract because in reality there is no such thing as an Assessor, but instead you have different types of assessors such as a Math Assessor or English Assessor etc, which in turn will allow me to create different types of Assessor objects e.g.
MathAssessor extends Assessor
EnglishAssessor extends Assessor
The concept is that a Student can pass if all the methods declared in the Pass interface return true and all additional methods in the subjectAssessor classes return true.
What do I do in the Assessor class? I have read about the adapter design pattern but I haven't fully grasped the notion, or whether it even applies to this situation.
To start, the Pass interface you have is not very flexible, which could make for difficulties. For example, what if one implementation of Pass only needs to have hasPassedA, or you have an implementation which needs hasPassedA, hasPassedB, hasPassedC and hasPassedD. Then the various types of assessors will need to figure out which pass conditions to check.
A more flexible way might be something like the following: rather than having a Pass interface, use something like a Condition interface (the names of the classes/interfaces should be changed to make sense for your domain).
public interface Condition {
    // true means the condition passed, false means it did not
    boolean evaluate();
}
Now you could have a single Assessor class (I'm not sure if this is exactly how your assessor would work, but it's just a guideline):
public class Assessor {
    boolean assess(Collection<Condition> conditions) {
        for (Condition c : conditions) {
            if (!c.evaluate()) {
                return false;
            }
        }
        // all conditions passed
        return true;
    }
}
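For instance, a usage sketch (MathCondition and the pass mark are hypothetical examples, not part of your domain):
class MathCondition implements Condition {
    private final int score;

    MathCondition(int score) {
        this.score = score;
    }

    public boolean evaluate() {
        return score >= 50; // the pass mark is just an example
    }
}

class AssessmentDemo {
    public static void main(String[] args) {
        Assessor assessor = new Assessor();
        boolean passed = assessor.assess(java.util.Arrays.<Condition>asList(
                new MathCondition(72), new MathCondition(48)));
        System.out.println("Passed: " + passed); // false: one condition failed
    }
}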
Hopefully this helps for your problem.
First off, to answer your question about the adapter pattern, it doesn't apply here. You use the adapter pattern to add a layer between 2 incompatible systems to allow them to pass data back and forth.
Using your example, I would recommend writing default implementations of the hasPassed_() methods in Assessor, even if the implementation is nothing more than throwing a new UnsupportedOperationException (to answer the question about a particular Assessor only needing a subset of the hasPassed_() methods: you can just override the ones you need). You can modify the subject assessors' (e.g. MathAssessor, EnglishAssessor, etc.) Pass methods to be more specific or to provide additional checks before calling super.hasPassed_() (depending on your specific implementation).
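A hedged sketch of that suggestion, where the subject assessor overrides only what it actually assesses (the check itself is a placeholder):
public class Assessor implements Pass {
    public boolean hasPassedA() {
        throw new UnsupportedOperationException("Not assessed here");
    }

    public boolean hasPassedB() {
        throw new UnsupportedOperationException("Not assessed here");
    }

    public boolean hasPassedC() {
        throw new UnsupportedOperationException("Not assessed here");
    }
}

public class MathAssessor extends Assessor {
    public boolean hasPassedA() {
        // math-specific check, possibly combined with additional subject checks
        return true;
    }
}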
A little background first. I am looking into the possibility of implementing Ruby's ActiveRecord in Java as cleanly and succinctly as possible. To do this I would need to allow for the following type of method call:
Person person = Person.find("name", "Mike");
Which would resolve to something like:
ActiveRecord.find(Person.class, "name", "Mike");
The plan is to have Person extend ActiveRecord, which would have a static find method with two parameters (column, value). This method would need to know it was called via Person.find and not another domain class like Car.find and call the find(Class, String, Object) method to perform the actual operation.
The problem I am running into is finding out via which child class of ActiveRecord the two-parameter static find method was called. The following is a simple test case:
public class A {
    public static void testMethod() {
        // need to know whether A.testMethod(), B.testMethod(), or C.testMethod() was called
    }
}

public class B extends A { }

public class C extends A { }

public class Runner {
    public static void main(String[] args) {
        A.testMethod();
        B.testMethod();
        C.testMethod();
    }
}
Solutions found so far are load-time or compile-time weaving using AspectJ. This would involve placing a call interceptor on testMethod() in A and finding out what signature was used to call it. I am all for load-time weaving, but setting this up (via VM args) is a bit complex.
Is there a simpler solution?
Is this at all possible in Java, or would it need to be done in something like Groovy/Ruby/Python?
Would the approach of using something like ActiveRecord.find for static loads and Person.save for instances be better overall?
You cannot override static methods in Java, so any calls to the static method via a subclass will be bound to the base class at compile time. Thus a call to B.testMethod() will be bound to A.testMethod() before the application is ever run.
Since you are looking for the information at runtime, it will not be available through normal Java operations.
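A small demonstration of why this blocks the approach in the question (the return value is just for illustration):
class A {
    public static String testMethod() {
        // No runtime dispatch happens for static methods: whether the source
        // said A.testMethod() or B.testMethod(), this same code runs and it
        // has no way of telling which name was used at the call site.
        return "resolved against A";
    }
}

class B extends A { }

class StaticBindingDemo {
    public static void main(String[] args) {
        System.out.println(A.testMethod()); // prints "resolved against A"
        System.out.println(B.testMethod()); // prints "resolved against A" too
    }
}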
As others have noted, I don't think the problem is solvable in Java as you pose it. A static method is not really inherited in the same way that a non-static method is. (Excuse me if I'm not using the terminology quite right.)
Nevertheless, it seems to me there are many ways you could accomplish the desired result if you're willing to modify your interface a little.
The most obvious would be to just make the call using the parent class. What's wrong with writing
Person person=(Person)ActiveRecord.find(Person.class, "name", "Mike");
?
Alternatively, you could create an instance of the record type first and then do a find to fill it in. Like
Person person=new Person();
person.find("name", "Mike");
At that point you have a Person object, and if you need to know its class from within a function in the supertype, you just do this.getClass().
Alternatively, you could create a dummy Person object to make the calls against, just to let you do the getClass() when necessary. Then your find would look something like:
Person dummyPerson=new Person();
Person realPerson=dummyPerson.find("name", "Mike");
By the way, seems to me that any attempt to have a generic ActiveRecord class is going to mean that the return type of find must be ActiveRecord and not the particular record type, so you'll probably have to cast it to the correct type upon return from the call. The only way to beat that is to have an explicit override of the find in each record object.
I've had plenty of times that I've written some generic record-processing code, but I always avoid creating Java objects for each record type, because that invariably turns into writing a whole bunch of code. I prefer to just keep the Record object completely generic and have field names, indexes, whatever all be internal data and names. If I want to retrieve the "foo" field from the "bar" record, my interface will look something like this:
Record bar=Record.get(key);
String foo=bar.get("foo");
Rather than:
BarRecord bar=BarRecord.get(key);
String foo=bar.getFoo();
Not as pretty and it limits compile-time error-checking, but it's way less code to implement.
You would not do this in Java. You would probably do something more like:
public interface Finder<T, RT, CT>
{
    T find(RT colName, CT value);
}

public class PersonFinder
    implements Finder<Person, String, String>
{
    public Person find(String nameCol, String name)
    {
        // code to find a person
        return null; // placeholder
    }
}

public class CarFinder
    implements Finder<Car, String, Integer>
{
    public Car find(String yearCol, Integer year)
    {
        // code to find a car
        return null; // placeholder
    }
}
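Usage would then look something like this (the column names and values are just examples); the concrete type comes from the finder you instantiate, so no static-call trickery is needed:
Person person = new PersonFinder().find("name", "Mike");
Car car = new CarFinder().find("year", 2009);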
It is possible but it is expensive.
If you can find a way to only call it once then you're set.
You can create a new exception and look at the first frame, and then you'll know who called it. Again, the problem is that it is not performant.
For instance with this answer it is possible to create a logger like this:
class MyClass {
private static final SomeLogger logger = SomeLogger.getLogger();
....
}
And have that logger create a different instance depending on who called it.
So, in the same fashion, you could have something like:
class A {
public static void myStatic() {
// find out who call it
String calledFrom = new RuntimeException()
.getStackTrace()[1].getClassName();
}
}
This is fine for a one-time initialization, but not for 1,000 calls. Although I don't know if a good VM may inline this for you.
I would go for the AspectJ path.
My theory on this, having built something similar, is to use a code generation strategy to create a delegate for each class which contains the method. You can't hide quite as much code in Java; it's probably not worth the effort as long as you generate something reasonable. If you really want to hide it, you could do something like...
public class Person extends PersonActiveRecord
{
}

// generated class, do not touch
public class PersonActiveRecord extends ActiveRecord
{
    public Person find(Map params)
    {
        return (Person) ActiveRecord.find(Person.class, params);
    }
}
But it tends to mess up your inheritance hierarchy too much. I say just generate the classes and be done with it. Not worth it to hide the find method.
You can do it very manually by creating a hackish constructor.
A example = new B(B.class);
And have the superclass constructor store the class that's passed to it.
I don't think the thrown exception above would work, but if you'd ever want to do something like that without creating an exception...
Thread.currentThread().getStackTrace()
You may be able to do it much more smoothly with meta-programming and javassist.
I suppose you want to implement ActiveRecord in Java. When I decided to do the same, I hit the same problem. This is a hard one for Java, but I was able to overcome it.
I recently released an entire framework called ActiveJDBC here:
http://code.google.com/p/activejdbc/
If interested, you can look at the sources to see how this was implemented. Look at the Model.getClassName() method.
This is how I solved getting a class name from a static method. The second problem was to actually move all the static methods from a super class to subclasses (this is a kludgy form of inheritance, after all!). I used Javassist for this. The two solutions allowed me to implement ActiveRecord in Java completely.
The byte code manipulation was originally done dynamically when classes loaded, but I ran into some class loading problems in Glassfish and Weblogic and decided to implement static bytecode manipulation. This is done by the http://activejdbc.googlecode.com/svn/trunk/activejdbc-instrumentation/ Maven plugin.
I hope this provides an exhaustive answer to your question.
Enjoy,
Igor