I have written a program with both an Advanced Mode and a Beginner Mode, so the user can get the hang of the program before using it for real. The Advanced Mode is almost identical to the Beginner Mode apart from one or two methods that need to be replaced, so I have decided to create a general Mode class for both the Advanced Mode and Beginner Mode classes to use instead of just copying code. Here is the class structure in case my explanation isn't very clear:
GUI Class
General Mode Class
Beginner Mode
Advanced Mode
Let's say that the General Mode class has the following code:
public class GeneralMode {

    private int range;
    private String id;

    public GeneralMode() {
    }

    public int getRange() {
        return range;
    }

    public String getID() {
        return id;
    }

    public void doStuff() {
    }
}
The GeneralMode class is where all the work gets done for the program. Now, I would like to make it so that the Advanced Mode class can take the input from the GUI class and use it in the same way as the GeneralMode class does.
Thank you for all your help!
Make the GeneralMode class an abstract class, with abstract methods that have to be implemented by the concrete Advanced and Beginner classes.
The functionality that both modes have in common can be implemented in the GeneralMode class.
Then, in your GUI class, instantiate the correct concrete class and assign it to a GeneralMode variable. You can then use it without having to know whether the program is running in beginner mode or in advanced mode.
pseudocode:
GeneralMode mode = useAdvancedMode ? new AdvancedMode() : new BeginnerMode();
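For illustration, a minimal sketch of that idea; splitting doStuff() out as the abstract method is an assumption based on the snippets above, while getRange() and getID() stay in the shared base:
public abstract class GeneralMode {
    private int range;
    private String id;

    public int getRange() {   // shared implementation
        return range;
    }

    public String getID() {   // shared implementation
        return id;
    }

    public abstract void doStuff();   // differs per mode
}

class BeginnerMode extends GeneralMode {
    @Override
    public void doStuff() {
        // beginner-specific behaviour
    }
}

class AdvancedMode extends GeneralMode {
    @Override
    public void doStuff() {
        // advanced-specific behaviour
    }
}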
Making GeneralMode an abstract class definitely is the way to go to get the polymorphic behaviour straight (as correctly explained by Frederik Gheysels).
One other important OO principle is to
favor composition over inheritance (Item 14, 'Effective Java' by Josh Bloch)
If your bulleted list represents your current inheritance hierarchy (ignore my comment if it doesn't...), I would strongly encourage you to change it so that your GUI class is composed of a mode, rather than the mode being an extension of it (the classical "is a" vs "has a" question). Extracting the relevant GUI settings into a parameter object, which you then pass to the modes to do their work, would reduce the coupling even further, as in the sketch below.
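A rough sketch of that composition. GuiSettings is an invented name for the parameter object, and the doStuff(GuiSettings) signature is an assumption; your real settings and methods will differ:
// Hypothetical parameter object holding whatever the GUI collects.
class GuiSettings {
    final int range;
    final String id;

    GuiSettings(int range, String id) {
        this.range = range;
        this.id = id;
    }
}

// The GUI *has a* mode instead of *being* one.
class GUI {
    private final GeneralMode mode;

    GUI(GeneralMode mode) {
        this.mode = mode;
    }

    void onUserAction() {
        GuiSettings settings = new GuiSettings(10, "user-1"); // gathered from the widgets
        mode.doStuff(settings); // assumes doStuff was changed to accept the settings
    }
}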
Just to add to Frederik's answer, GeneralMode could also be an interface with BeginnerMode and AdvancedMode implementing this interface.
Use an abstract class if you want to share logic across all subclasses, i.e. if all subclasses will have the same implementation of getID and any other methods that are common.
If you want to leave the implementation of these methods up to the implementing class, then use an interface.
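A rough sketch of the interface variant, reusing the method names from the GeneralMode snippet in the question (the bodies are placeholders):
public interface GeneralMode {
    int getRange();
    String getID();
    void doStuff();
}

class BeginnerMode implements GeneralMode {
    public int getRange() { return 10; }          // placeholder values
    public String getID() { return "beginner"; }
    public void doStuff() { /* beginner behaviour */ }
}

class AdvancedMode implements GeneralMode {
    public int getRange() { return 100; }
    public String getID() { return "advanced"; }
    public void doStuff() { /* advanced behaviour */ }
}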
Another possibility is to use the Strategy pattern. I'd prefer that one in your case, because I see more flexibility, e.g. when changing the mode at run time. In that case you won't need to swap out your whole model instance (the mode object), but only its behaviour, by loading a different strategy. So you won't lose the state of your context when switching the mode.
public class GeneralContext { // was GeneralMode in your code example

    public static final int EXPERT_MODE = 0;
    public static final int BEGINNER_MODE = 1;

    private Mode myMode = new BeginnerMode();

    public void doStuff() {
        myMode.doStuff();
    }

    public void changeStrategy(int mode) {
        switch (mode) {
            case EXPERT_MODE:   myMode = new ExpertMode();   break;
            case BEGINNER_MODE: myMode = new BeginnerMode(); break;
            default: throw new IllegalArgumentException("No such mode: " + mode);
        }
    }

    // ...

    private interface Mode {
        void doStuff();
    }

    private class ExpertMode implements Mode {
        public void doStuff() {
            // expert behaviour...
        }
    }

    private class BeginnerMode implements Mode {
        public void doStuff() {
            // beginner behaviour...
        }
    }
}
Further reading: GoF-Book (see wikipedia), Pages 315 ff.
You're on the right track. You just need to modify the doStuff() method to take the input parameters you need from the GUI. Then the GUI can call this method on the mode object it holds and pass it the appropriate parameters.
I'm learning about SOLID principles, and ISP states that:
Clients should not be forced to depend upon interfaces that they do
not use.
Does using default methods in interfaces violate this principle?
I have seen a similar question, but I'm posting here with an example to get a clearer picture of whether my example violates ISP.
Say I have this example:
public interface IUser {
    void handleUser();
    String getID();

    default void closeSession() {
        System.out.println("Client Left");
    }

    default void readRecords() {
        System.out.println("User requested to read records...");
        System.out.println("Printing records....");
        System.out.println("..............");
    }
}
With the following classes implementing IUser Interface
public class Admin implements IUser {

    public String getID() {
        return "ADMIN";
    }

    public void handleUser() {
        boolean sessionIsOpen = true;
        while (sessionIsOpen) {
            switch (Integer.parseInt(in.readLine())) {
                case 1 -> addNewUser();
                case 2 -> sessionIsOpen = false;
                default -> System.out.println("Invalid Entry");
            }
        }
        closeSession();
    }

    private void addNewUser() {
        System.out.println("Adding New User...");
    }
}
Editor Class:
public class Editor implements IUser {

    public String getID() {
        return "EDITOR";
    }

    public void handleUser() {
        boolean sessionIsOpen = true;
        while (sessionIsOpen) {
            switch (Integer.parseInt(in.readLine())) {
                case 1 -> addBook();
                case 2 -> readRecords();
                case 3 -> sessionIsOpen = false;
                default -> System.out.println("Invalid Entry");
            }
        }
        closeSession();
    }

    private void addBook() {
        System.out.println("Adding New Book...");
    }
}
Viewer Class
public class Viewer implements IUser {

    public String getID() {
        return "Viewer";
    }

    public void handleUser() {
        boolean sessionIsOpen = true;
        while (sessionIsOpen) {
            switch (Integer.parseInt(in.readLine())) {
                case 1 -> readRecords();
                case 2 -> sessionIsOpen = false;
                default -> System.out.println("Invalid Entry");
            }
        }
        closeSession();
    }
}
Since the Editor and Viewer classes use the readRecords() method and the Admin class doesn't use that method, I implemented it as a default method in the IUser interface to minimize code repetition (the DRY principle).
Am I violating the interface segregation principle in the above code by using default methods in IUser, given that the Admin class does not use the readRecords() method?
Can someone please explain? I think I'm not forcing the Admin class to depend on methods/interfaces that it does not use.
does using default methods in interfaces violate the principle?
No, not if they're used correctly. In fact, they can help to avoid violating ISP (see below).
Does your example of using default methods violate ISP?
Yes! Well, probably. We could have a debate about exactly how badly it violates ISP, but it definitely violates a bunch of other principles, and isn't good practice with Java programming.
The problem is that you're using a default method as something for the implementing class to call. That's not their intent.
Default methods should be used to define methods that:
users of the interface will likely wish to call (i.e. not implementers)
provide aggregate functionality
have an implementation that is likely to be the same for most (if not all) implementers of the interface
Your example appears to break several conditions.
The first condition is there for a simple reason: all inheritable methods on Java interfaces are public, so they always can be called by users of the interface. To give a concrete example, the below code works fine:
Admin admin = new Admin();
admin.closeSession();
admin.readRecords();
Presumably, you don't want this to be possible, not just for Admin, but for Editor and Viewer too? I would argue that this is a sort-of violation of ISP, because you are depending on users of your classes not calling those methods. For the Admin class, you could make readRecords() 'safe' by overriding it and giving it a no-op implementation, but that just highlights a much more direct violation of ISP. For all other methods/implementations, including the classes that do make use of readRecords(), you're screwed. Rather than thinking of this in terms of ISP, I'd call it API or implementation leakage: it allows your classes to be used in ways that you didn't intend (and may wish to break in the future).
The second condition I stated might need further explanation. By aggregate functionality, I mean that the methods should probably call (either directly or indirectly) one or more of the abstract methods on the interface. If they don't do that, then the behaviour of those methods can't possibly depend on the state of the implementing class, and so could probably be static, or moved into a different class entirely (i.e. see the Single-responsibility principle). There are examples and use cases where it's OK to relax this condition but they should be thought about very carefully. In the example you give, the default methods are not aggregate, but it looks like sanitized code for the sake of Stack Overflow, so maybe your "real" code is fine.
It's debatable whether 2/3 implementers counts as "most" with regards to my third condition. However, another way to think about it is that you should know, in advance of writing the implementing classes, whether or not they should have that method with that functionality. How certain can you be that, if you need to create a new class of User in the future, it will require the functionality of readRecords()? Either way, it's a moot point, as this condition only really needs to be thought about if you haven't violated the first two.
A good use of default methods
There are examples in the standard library of good uses of default methods. One would be java.util.function.Function, with its andThen(...) and compose(...) methods. These are useful pieces of functionality for users of Functions, they (indirectly) make use of the Function's abstract apply(...) method, and importantly, it's highly unlikely that an implementing class would ever wish to override them, except maybe for efficiency in some highly specialized scenarios.
These default methods do not violate ISP, as classes that implement Function have no need to call or override them. There may be many use-cases where concrete instances of Function never have their andThen(...) method called, but that's fine – you don't break ISP by providing useful but non-essential functionality, as long as you don't encumber all those use-cases by forcing them to do something with it. In the case of Function, providing these methods as abstract rather than default would violate ISP, as all implementing classes would have to add their own implementations, even when they know it's unlikely to ever be called.
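As a small, self-contained illustration of those defaults in action (the lambdas here are my own toy examples):
import java.util.function.Function;

public class FunctionDefaultsDemo {
    public static void main(String[] args) {
        Function<Integer, Integer> doubleIt = x -> x * 2;
        Function<Integer, Integer> addOne = x -> x + 1;

        // andThen(...) and compose(...) are default methods that build on the
        // abstract apply(...) method, so implementers never need to override them.
        System.out.println(doubleIt.andThen(addOne).apply(5));   // (5 * 2) + 1 = 11
        System.out.println(doubleIt.compose(addOne).apply(5));   // (5 + 1) * 2 = 12
    }
}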
How can you achieve DRY without breaking 'the rules'?
Use an abstract class!
Abstract classes have been pooh-poohed a lot in discussions about good Java practice, because they were frequently misunderstood, misused and abused. It wouldn't surprise me if at least some programming best-practice guides like SOLID were published in reaction to this misuse. A very frequent issue I've seen is having an abstract class provide a "default" implementation for tons of methods that is then overridden almost everywhere, often by copy-pasting the base implementation and changing one or two lines. Essentially, this breaks my third condition on default methods above (which also applies to any method on an intended-to-be-subclassed type), and it happens A LOT.
However, in this scenario, abstract classes are probably just what you need.
Something like this:
interface IUser {
    // Add all methods here intended to be CALLED by code that holds
    // instances of IUser
    // e.g.:
    void handleUser();
    String getID();

    // If some methods only make sense for particular types of user,
    // they shouldn't be added.
    // e.g.:
    // NOT void addBook();
    // NOT void addNewUser();
}
abstract class AbstractUser implements IUser {
    // Add methods and fields here that will be USEFUL to most or
    // all implementations of IUser.
    //
    // Nothing should be public, unless it's an implementation of
    // one of the abstract methods defined on IUser.
    //
    // e.g.:
    protected void closeSession() { /* etc... */ }
}

abstract class AbstractRecordReadingUser extends AbstractUser {
    // Add methods here that are only USEFUL to a subset of
    // implementations of IUser.
    //
    // e.g.:
    protected void readRecords() { /* etc... */ }
}

final class Admin extends AbstractUser {
    @Override
    public void handleUser() {
        // etc...
        closeSession();
    }

    public void addNewUser() { /* etc... */ }
}

final class Editor extends AbstractRecordReadingUser {
    @Override
    public void handleUser() {
        // etc...
        readRecords();
        // etc...
        closeSession();
    }

    public void addBook() { /* etc... */ }
}

final class Viewer extends AbstractRecordReadingUser {
    @Override
    public void handleUser() {
        // etc...
        readRecords();
        // etc...
        closeSession();
    }
}
Note: Depending on your situation, there may be better alternatives to abstract classes that still achieve DRY:
If your common helper methods are stateless (i.e. don't depend on fields in the class), you could use an auxiliary class of static helper methods instead (see here for an example).
You might wish to use composition instead of abstract class inheritance. For example, instead of creating the AbstractRecordReadingUser as above, you could have:
final class RecordReader {
    // Fields relevant to the readRecords() method
    public void readRecords() { /* etc... */ }
}

final class Editor extends AbstractUser {
    private final RecordReader r = new RecordReader();

    @Override
    public void handleUser() {
        // etc...
        r.readRecords();
        // etc...
    }
}

// Similar for Viewer
This avoids the problem that Java doesn't allow multiple inheritance, which would become an issue if you tried to have multiple abstract classes containing different pieces of optional functionality, and some final classes needed to use several of them. However, depending on what state (i.e. fields) the readRecords() method needs to interact with, it might not be possible to separate it out into a separate class cleanly.
You could just put your readRecords() method in AbstractUser and avoid having the additional abstract class. The Admin class isn't obliged to call it, and as long as the method is protected, there's no risk that anyone else will call it (assuming you have your packages properly separated). This doesn't violate ISP as even though Admin can interact with readRecords(), it isn't forced to. It can pretend that method doesn't exist, and everyone is fine!
I believe this is a violation of the ISP. But you don't have to strictly follow all the SOLID principles, as doing so can complicate development.
I'm currently working on an Android SDK with essentially two variations, base variation (for third-party developers) and privileged variation (to be used internally).
The privileged SDK just adds additional functionality that the third-party developers do not have direct access to.
My idea was to use macros to selectively remove functionality but Java does not support that.
My next idea was to take the base variation and just extend classes and interfaces in there to produce the privileged variation.
My current issue with the inheritance approach is as follows (it has produced a code smell that indicated to me that there is probably a better solution):
An instance of BaseAPI has an instance of BaseInterface, which in some of its methods use BaseDevice as parameters.
The privileged SDK has an instance of PrivilegedAPI, PrivilegedInterface, and PrivilegedDevice.
The problem comes with the idea of wanting the interfaces to take either instances of either BaseDevice or PrivilegedDevice.
I would ideally like this BaseInterface:
public interface BaseInterface {
    void deviceConnected(BaseDevice device);
}
And this PrivilegedInterface:
public interface PrivilegedInterface extends BaseInterface {
    // overrides deviceConnected in BaseInterface with PrivilegedDevice
    @Override
    void deviceConnected(PrivilegedDevice device);
}
But I cannot override deviceConnected with a different parameter of PrivilegedDevice in PrivilegedInterface.
Another idea I had was to utilize build flavors to hide functionality but this didn't seem to fit either.
Any ideas?
Create an extra method in the interface (this is the concept of method overloading):
public interface BaseInterface {
    void deviceConnected(BaseDevice baseDevice);
    void deviceConnected(PrivilegedDevice privilegedDevice);
}
I think you need to abstract away the device class.
public interface Device {
    void deviceConnected();
}

public class BaseDevice implements Device {

    BaseDevice() {
    }

    @Override
    public void deviceConnected() {
    }
}

public class PrivilegedDevice implements Device {

    PrivilegedDevice() {
    }

    @Override
    public void deviceConnected() {
    }
}
This potentially does not solve the problem for people experiencing the same issue in the future, but the overall issue had to do with the hierarchy of my classes and some of their internals.
I was able to move some things into the BaseAPI, get rid of a class that was used inside BaseDevice, and basically restructure things in a more logical way. The end result is a few fewer classes and a more logical structure for my classes.
The code smell was a result of having a few over-complicated pieces in my code.
First of all... sorry for this post. I know that there are many, many posts on Stack Overflow discussing multiple inheritance. But I already know that Java does not support multiple inheritance, and I know that using interfaces is supposed to be the alternative. I just don't get it, so here is my dilemma:
I have to make changes to a very, very large and complex tool written in Java. In this tool there is a data structure built from many different class objects with a linked member hierarchy. Anyway...
I have one class, Tagged, which has multiple methods and returns an object tag depending on the object's class. It needs member and static variables.
And a second class, called XMLElement, allows objects to be linked and, in the end, generates an XML file. I also need member and static variables here.
Finally, I have these many, many data classes, nearly all of which should extend XMLElement, and some of them Tagged as well.
OK, this won't work, since it's only possible to extend one class. I read very often that everything is fine with Java and there is no need for multiple inheritance. I believe that, but I don't see how an interface can replace inheritance.
It makes no sense to put the real implementation in all data classes since it is the same every time but this would be necessary with interfaces (I think).
I don't see how I could change one of my inheritance classes to an interface. I have variables in here and they have to be exactly there.
I really don't get it, so can somebody please explain how to handle this?
Actually, I have no good answer other than Java SHOULD have Multiple Inheritance. The whole point that interfaces should be able to replace the need for Multiple Inheritance is like the big lie that when repeated enough times becomes true.
The argument is that Multiple Inheritance causes all these problems (la-di-dah), yet I keep hearing those arguments from Java developers who have never used C++. I also don't EVER remember C++ programmers saying "Gee, I love C++, but if they would only get rid of Multiple Inheritance, it would become a great language". People used it when it was practical and didn't when it wasn't.
Your problem is a classic case of where Multiple Inheritance would be appropriate. Any suggestion to refactor the code is really telling you how to work around the PROBLEM that Java has no Multiple Inheritance.
Also all the discussion that "oh, delegation is better, la-di-dah" is confusing religion with design. There is no right way. Things are either more useful or less useful and that is all.
In your case Multiple Inheritance would be more useful and a more elegant solution.
As far as refactoring your code into a less useful form to satisfy all the religious people who have never used Multiple Inheritance and believe "Multiple Inheritance is bad", I guess you will have to downgrade your code because I don't see Java "improving" in that way any time soon. There are too many people repeating the religious mantra to the point of stupidity that I can't see it ever being added to the language.
Actually, my solution for you would be "x extends Tagged, XMLElement" and that would be all.
...but as you can see from the solutions provided above, most people think that such a solution would be WAY TOO COMPLEX AND CONFUSING!
I would prefer to venture into the "x extends a,b" territory myself, even if it is a very frightening solution that might overwhelm the abilities of most Java programmers.
What is even more amazing about the solutions suggested above is that everyone here who suggested that you refactor your code into "delegation" because Multiple Inheritance is bad would, if they were confronted with the very same problem, solve it by simply doing "x extends a,b" and be done with it, and all their religious arguments about "delegation vs inheritance" would disappear. The whole debate is stupid, and it is only being advanced by clueless programmers who only demonstrate how well they can recite out of a book and how little they can think for themselves.
You are 100% correct that Multiple Inheritance would help, and no, you are not doing anything wrong in your code if you think Java should have it.
You should probably favor composition (and delegation) over inheritance:
public interface TaggedInterface {
    void foo();
}

public interface XMLElementInterface {
    void bar();
}

public class Tagged implements TaggedInterface {
    // ...
    public void foo() { /* ... */ }
}

public class XMLElement implements XMLElementInterface {
    // ...
    public void bar() { /* ... */ }
}

public class TaggedXmlElement implements TaggedInterface, XMLElementInterface {
    private TaggedInterface tagged;
    private XMLElementInterface xmlElement;

    public TaggedXmlElement(TaggedInterface tagged, XMLElementInterface xmlElement) {
        this.tagged = tagged;
        this.xmlElement = xmlElement;
    }

    public void foo() {
        this.tagged.foo();
    }

    public void bar() {
        this.xmlElement.bar();
    }

    public static void main(String[] args) {
        TaggedXmlElement t = new TaggedXmlElement(new Tagged(), new XMLElement());
        t.foo();
        t.bar();
    }
}
Similar to what Andreas_D suggested but with the use of inner classes. This way you indeed extend each class and can override it in your own code if desired.
interface IBird {
    public void layEgg();
}

interface IMammal {
    public void giveMilk();
}

class Bird implements IBird {
    public void layEgg() {
        System.out.println("Laying eggs...");
    }
}

class Mammal implements IMammal {
    public void giveMilk() {
        System.out.println("Giving milk...");
    }
}

class Platypus implements IMammal, IBird {
    private class LayingEggAnimal extends Bird {}
    private class GivingMilkAnimal extends Mammal {}

    private LayingEggAnimal layingEggAnimal = new LayingEggAnimal();
    private GivingMilkAnimal givingMilkAnimal = new GivingMilkAnimal();

    @Override
    public void layEgg() {
        layingEggAnimal.layEgg();
    }

    @Override
    public void giveMilk() {
        givingMilkAnimal.giveMilk();
    }
}
First it makes no sense to put the real implementation in all data classes since it is the same every time but this would be necessary with interfaces (I think).
How about using aggregation for the tags?
Rename your Tagged class to Tags.
Create a Tagged interface:
interface Tagged {
    Tags getTags();
}
Let each class that needs to be "tagged", implement Tagged and let it have a tags field, which is returned from getTags.
Second I don't see how I could change one of my inheritance classes to an interface. I have variables in here and they have to be exactly there.
That's right, interfaces can't have instance variables. The data structures storing the tags, however, shouldn't necessarily (IMO) be part of the classes that are tagged. Factor the tags out into a separate data structure.
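A rough sketch of that aggregation; the contents of Tags and the SomeDataClass name are placeholders for your actual code:
// Holds whatever your old Tagged class stored (members, static state, ...).
class Tags {
    // ...
}

// One of your data classes: it still extends XMLElement, and is "tagged"
// by holding a Tags object instead of inheriting from a Tagged base class.
class SomeDataClass extends XMLElement implements Tagged {
    private final Tags tags = new Tags();

    @Override
    public Tags getTags() {
        return tags;
    }
}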
I'd solve it this way: extract interfaces from the Tagged and XMLElement classes (maybe you don't need all methods in the public interface). Then implement both interfaces, and have the implementing class hold a Tagged (your actual concrete Tagged class) and an XMLElement (your actual concrete XMLElement class):
public class MyClass implements Tagged, XMLElement {
    private Tagged tagged;
    private XMLElement xmlElement;

    public MyClass(/* ... */) {
        tagged = new TaggedImpl();
        xmlElement = new XMLElementImpl();
    }

    @Override
    public void someTaggedMethod() {
        tagged.someTaggedMethod();
    }
}

public class TaggedImpl implements Tagged {
    @Override
    public void someTaggedMethod() {
        // do what has to be done
    }
}

public interface Tagged {
    public void someTaggedMethod();
}
(and the same for XMLElement)
One possible way:
1- Create base class(es) for common functionality; make them abstract if you don't need to instantiate them.
2- Create interfaces and implement those interfaces in those base class(es). If a specific implementation is needed, make the method abstract; each concrete class can then have its own implementation.
3- Extend the abstract base class in your concrete class(es) and implement specific interfaces at this level as well. A rough sketch of this layout follows.
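One way the three steps above could look in code; all class and method names here are placeholders, not the asker's actual classes:
interface TaggedBehaviour {
    String getTag();
}

interface XmlBehaviour {
    String toXml();
}

abstract class AbstractElement implements XmlBehaviour {
    // common functionality shared by all data classes (step 2)
    public String toXml() {
        return "<element/>";
    }

    // specific implementation left to each concrete class
    public abstract String name();
}

class ConcreteDataClass extends AbstractElement implements TaggedBehaviour {
    // step 3: extends the abstract base and adds the extra interface it needs
    @Override
    public String name() { return "concrete"; }

    @Override
    public String getTag() { return "TAG"; }
}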
Just wondering if one could not simply use inner (member) classes (LRM 5.3.7)?
E.g. like this (based on the first answer above):
// original classes:
public class Tagged {
    // ...
}

public class XMLElement {
    // ...
}

public class TaggedXmlElement {
    public/protected/private (static?) class InnerTagged extends Tagged {
        // ...
    }

    public/protected/private (static?) class InnerXmlElement extends XMLElement {
        // ...
    }
}
This way you have a class TaggedXmlElement which actually contains all elements from the two original classes and within TaggedXmlElement you have access to non-private members of the member classes. Of course one would not use "super", but call member class methods.
Alternatively one could extend one of the classes and make the other a member class.
There are some restrictions, but I think they can all be worked around.
Well, using interfaces and a single base class you are simply stating:
A) An object can be of only one type (which is true in real life if you think about it: a pigeon is a bird, a Toyota is a car, etc. A pigeon is also an animal, but every bird is an animal anyway, so Animal sits hierarchically above the Bird type, and in your OOP design the Animal class should be the base of the Bird class if you need to represent it)
and
B) it can do many different things (a bird can sing and can fly; a car can run, can stop, etc.), which also fits real-life objects.
In a world where objects can be of multiple types (horizontally), say a dolphin is a mammal and also a sea animal, multiple inheritance would make more sense, and it would be easier to represent such a case using it.
Using composition would be the way to go, as another developer suggested. The main argument against multiple inheritance is the ambiguity created when you're extending two classes with the same method declaration (same method name and parameters). Personally, however, I think that's a load of crap. A compilation error could easily be thrown in this situation, which wouldn't be much different from defining multiple methods of the same name in a single class. Something like the following code snippet could easily solve this dilemma:
public class MyExtendedClass extends ClassA, ClassB {
    public Object duplicateMethodName() {
        return ClassA.duplicateMethodName();
    }
}
Another argument against multiple inheritance is that Java was trying to keep things simple so that amateur developers don't create a web of interdependent classes that could create a messy, confusing software system. But as you see in your case, it also complicates and confuses things when it's not available. Plus, that argument could be used for a 100 other things in coding, which is why development teams have code reviews, style checking software, and nightly builds.
In your particular situation though, you'll have to settle for composition (see Shojaei Baghini's answer). It adds a bit of boilerplate code, but it emulates the same behavior as multiple inheritance.
I ran into a similar problem on Android. I needed to extend a Button and a TextView (both inheriting from View) with additional functions. Due to not having access to their superclass, I needed to find another solution. I've written a new class which encapsulates all the implementations:
class YourButton extends Button implements YourFunctionSet {
    private Modifier modifier;

    public YourButton(Context context) {
        super(context);
        modifier = new Modifier(this);
    }

    public YourButton(Context context, AttributeSet attrs) {
        super(context, attrs);
        modifier = new Modifier(this);
    }

    public YourButton(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        modifier = new Modifier(this);
    }

    @Override
    public void generateRandomBackgroundColor() {
        modifier.generateRandomBackgroundColor();
    }
}

class Modifier implements YourFunctionSet {
    private View view;

    public Modifier(View view) {
        this.view = view;
    }

    @Override
    public void generateRandomBackgroundColor() {
        /**
         * Your shared code
         *
         * ......
         *
         * view.setBackgroundColor(randomColor);
         */
    }
}

interface YourFunctionSet {
    void generateRandomBackgroundColor();
}
The problem here is that your classes need the same superclass. You can also try to use different classes, but then you have to check which type the wrapped object is, for example:
public class Modifier {
    private View view;
    private AnotherClass anotherClass;

    public Modifier(Object object) {
        if (object instanceof View) {
            this.view = (View) object;
        } else if (object instanceof AnotherClass) {
            this.anotherClass = (AnotherClass) object;
        }
    }

    public void generateRandomBackgroundColor() {
        if (view != null) {
            // ...do
        } else if (anotherClass != null) {
            // ...do
        }
    }
}
So basically my Modifier class is the class which encapsulates all the implementations.
Hope this helps someone.
What's the best way of partitioning a class when its functionality needs to be externally accessed in different ways by different classes? Hopefully the following example will make the question clear :)
I have a Java class which accesses a single location in a directory allowing external classes to perform read/write operations to it. Read operations return usage stats on the directory (e.g. available disk space, number of writes, etc.); write operations, obviously, allow external classes to write data to the disk. These methods always work on the same location, and receive their configuration (e.g. which directory to use, min disk space, etc.) from an external source (passed to the constructor).
This class looks something like this:
public class DiskHandler {

    public DiskHandler(String dir, int minSpace) {
        ...
    }

    public void writeToDisk(String contents, String filename) {
        int space = getAvailableSpace();
        ...
    }

    public int getAvailableSpace() {
        ...
    }
}
There's quite a bit more going on, but this will suffice.
This class needs to be accessed differently by two external classes. One class needs access to the read operations; the other needs access to both read and write operations.
public class DiskWriter {
    DiskHandler diskHandler;

    public DiskWriter() {
        diskHandler = new DiskHandler(...);
    }

    public void doSomething() {
        diskHandler.writeToDisk(...);
    }
}

public class DiskReader {
    DiskHandler diskHandler;

    public DiskReader() {
        diskHandler = new DiskHandler(...);
    }

    public void doSomething() {
        int space = diskHandler.getAvailableSpace(...);
    }
}
At this point, both classes use the same DiskHandler class, but the class which should only read still has access to the write methods.
Solution 1
I could break this class into two. One class would handle read operations, and the other would handle writes:
// NEW "UTILITY" CLASSES
public class WriterUtil {
    private ReaderUtil diskReader;

    public WriterUtil(String dir, int minSpace) {
        ...
        diskReader = new ReaderUtil(dir, minSpace);
    }

    public void writeToDisk(String contents, String filename) {
        int space = diskReader.getAvailableSpace();
        ...
    }
}

public class ReaderUtil {
    public ReaderUtil(String dir, int minSpace) {
        ...
    }

    public int getAvailableSpace() {
        ...
    }
}
// MODIFIED EXTERNALLY-ACCESSING CLASSES
public class DiskWriter {
    WriterUtil diskWriter;

    public DiskWriter() {
        diskWriter = new WriterUtil(...);
    }

    public void doSomething() {
        diskWriter.writeToDisk(...);
    }
}

public class DiskReader {
    ReaderUtil diskReader;

    public DiskReader() {
        diskReader = new ReaderUtil(...);
    }

    public void doSomething() {
        int space = diskReader.getAvailableSpace(...);
    }
}
This solution prevents classes from having access to methods they should not, but it also breaks encapsulation. The original DiskHandler class was completely self-contained and only needed config parameters via a single constructor. By breaking apart the functionality into read/write classes, they both are concerned with the directory and both need to be instantiated with their respective values. In essence, I don't really care to duplicate the concerns.
Solution 2
I could implement an interface which only provisions read operations, and use this when a class only needs access to those methods.
The interface might look something like this:
public interface Readable {
    int getAvailableSpace();
}
The Reader class would instantiate the object like this:
Readable diskReader;

public DiskReader() {
    diskReader = new DiskHandler(...);
}
This solution seems brittle, and prone to confusion in the future. It doesn't guarantee that developers will use the correct interface going forward. Any changes to the implementation of DiskHandler might also require updating the interface as well as the accessing classes. I like it better than the previous solution, but not by much.
Frankly, neither of these solutions seems perfect, but I'm not sure if one should be preferred over the other. I really don't want to break the original class up, but I also don't know if the interface buys me much in the long run.
Are there other solutions I'm missing?
I'd go with the interface, combined with a little bit of Dependency Injection - you don't instantiate a new DiskHandler directly inside your reader or writer classes, they accept an object of the appropriate type in their constructors.
So your DiskReader would accept a Readable, and your DiskWriter would get a ReadWrite (or a DiskHandler directly, if you don't want to make an interface for the read-write mode, although I'd suggest otherwise - via interface ReadWrite extends Readable or similar). If you consistently inject it using the appropriate interface, you won't have to worry about incorrect usage.
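A rough sketch of that wiring, reusing the Readable interface from Solution 2 and assuming a DiskHandler that implements both sides (the constructor arguments in the wiring comment are made up):
// The read/write interface suggested above, extending the Readable from Solution 2.
public interface ReadWrite extends Readable {
    void writeToDisk(String contents, String filename);
}

// DiskHandler would then declare: public class DiskHandler implements ReadWrite { ... }

public class DiskReader {
    private final Readable disk;

    public DiskReader(Readable disk) {   // injected; only read operations are visible
        this.disk = disk;
    }

    public void doSomething() {
        int space = disk.getAvailableSpace();
        // ...
    }
}

public class DiskWriter {
    private final ReadWrite disk;

    public DiskWriter(ReadWrite disk) {  // injected; full read/write access
        this.disk = disk;
    }

    public void doSomething() {
        disk.writeToDisk("contents", "file.txt");
    }
}

// Wiring, somewhere at startup:
// DiskHandler handler = new DiskHandler("/some/dir", 1024);
// DiskReader reader = new DiskReader(handler);
// DiskWriter writer = new DiskWriter(handler);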
I think the interface is also the most object-oriented approach here. The first approach basically refactors your collection of semantically-related methods into a bunch of little utility functions: not what you want.
The second solution allows users of your class to express exactly why they are using it. In the same way that good Java code typically declares List, Set, and NavigableMap rather than ArrayList, HashSet, and TreeMap, users of your class can declare a variable to be only a Readable or Writeable, rather than declaring a dependency on any concrete subclass.
Obviously, someone still needs to call new at some point, but as tzaman pointed out, this can be handled with setters and dependency injection. If you need an unknown number of them at runtime, inject a factory instead.
I am curious: why do you think that any changes to the implementation of DiskHandler would result in changes to the classes that use Reader? If Reader can be defined to be a stable interface, the interface should clearly spell out its semantic contract (in the Javadoc). If users code against that interface, the implementation can be changed behind the scenes without their knowledge. Sure, if the interface itself changes they have to change, but how is that different from the first solution?
One more thing to think about: Let's say you have multiple threads, most of which need a Reader, but some of which need a Writer, all to the same file. You could have DiskHandler implement both Reader and Writer and inject a single instance to all threads. Concurrency could be handled internally to this object by having appropriate ReadWriteLocks and synchronizeds where they need to go. How would this be possible in your first solution?
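For the multi-threaded case, a minimal sketch of how DiskHandler could guard its operations internally (assuming the ReadWrite interface from the sketch above; the method bodies are placeholders):
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DiskHandler implements ReadWrite {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public DiskHandler(String dir, int minSpace) { /* ... */ }

    @Override
    public int getAvailableSpace() {
        lock.readLock().lock();           // many readers may hold this at once
        try {
            return 0; // ...compute usage stats...
        } finally {
            lock.readLock().unlock();
        }
    }

    @Override
    public void writeToDisk(String contents, String filename) {
        lock.writeLock().lock();          // writers get exclusive access
        try {
            // ...write the file...
        } finally {
            lock.writeLock().unlock();
        }
    }
}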
In .NET, one can specify a "mustoverride" attribute to a method in a particular superclass to ensure that subclasses override that particular method.
I was wondering whether anybody has a custom Java annotation that could achieve the same effect. Essentially what I want is to push subclasses to override a method in a superclass that itself has some logic that must be run through. I don't want to use abstract methods or interfaces, because I want some common functionality to be run in the super method, but more or less produce a compiler warning/error denoting that derived classes should override a given method.
I don't quite see why you would not want to use the abstract modifier -- it is intended for forcing implementation by a sub-class, and only needs to be used for some methods, not all. Or maybe you are thinking of C++-style "pure abstract" classes?
But one other thing that many Java developers are not aware of is that it is also possible to override non-abstract methods and declare them abstract; like:
public abstract String toString(); // force re-definition
so that even though java.lang.Object already defines an implementation, you can force sub-classes to define it again.
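For example (NamedThing and Widget are made-up names to illustrate the trick):
public abstract class NamedThing {
    // Object already provides toString(), but re-declaring it abstract forces
    // every concrete subclass to supply its own version.
    @Override
    public abstract String toString();
}

class Widget extends NamedThing {
    @Override
    public String toString() {
        return "Widget";
    }
}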
Ignoring abstract methods, there is no such facility in Java. Perhaps its possible to create a compile-time annotation to force that behaviour (and I'm not convinced it is) but that's it.
The real kicker is "override a method in a superclass that itself has some logic that must be run through". If you override a method, the superclass's method won't be called unless you explicitly call it.
In these sorts of situations I've tended to do something like:
abstract public class Worker implements Runnable {
    @Override
    public final void run() {
        beforeWork();
        doWork();
        afterWork();
    }

    protected void beforeWork() { }

    protected void afterWork() { }

    abstract protected void doWork();
}
to force a particular logic structure over an interface's method. You could use this, for example, to count invocations without having to worry about whether the user calls super.run(), etc.
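For instance, a hypothetical subclass could count invocations like this; the final run() guarantees beforeWork() executes no matter what callers do:
import java.util.concurrent.atomic.AtomicInteger;

public class CountingWorker extends Worker {
    private static final AtomicInteger invocations = new AtomicInteger();

    @Override
    protected void beforeWork() {
        invocations.incrementAndGet();   // runs on every call to run()
    }

    @Override
    protected void doWork() {
        // the actual task goes here
    }

    public static int invocationCount() {
        return invocations.get();
    }
}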
... and if declaring a base class abstract is not an option you can always throw an UnsupportedOperationException
class BaseClass {
    void mustOverride() {
        throw new UnsupportedOperationException("Must implement");
    }
}
But this is not a compile-time check of course...
I'm not sure which attribute you're thinking about in .NET.
In VB you can apply the MustOverride modifier to a method, but that's just the equivalent to making the method abstract in Java. You don't need an attribute/annotation, as the concept is built into the languages. It's more than just applying metadata - there's also the crucial difference that an abstract method doesn't include any implementation itself.
If you do think there's such an attribute, please could you say which one you mean?
Android has a new annotation, announced at Google I/O 2015:
@CallSuper
More details here:
http://tools.android.com/tech-docs/support-annotations
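A minimal sketch of how it is used; BaseScreen and HomeScreen are invented names, and the check is performed by Lint rather than the compiler:
import android.support.annotation.CallSuper;   // androidx.annotation.CallSuper in newer projects

public class BaseScreen {
    @CallSuper
    public void setUp() {
        // common initialisation that every subclass is expected to keep
    }
}

public class HomeScreen extends BaseScreen {
    @Override
    public void setUp() {
        super.setUp();   // Lint flags the override if this call is missing
        // screen-specific initialisation
    }
}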
If you need some default behaviour, but for some reason it should not be used by specializations (for instance, an implementation of some logic in a non-abstract Adapter class, there just for ease of prototyping but not meant for production), you could encapsulate that logic and log a warning that the default is being used, without actually having to run it to find out.
The base class constructor could check whether the variable holding the logic still points to the default one. (I'm writing in very abstract terms, as I think this should work in any language.)
It would be something like this (uncompiled, untested and incomplete) Java (up to 7) example:
public interface SomeLogic {
    void execute();
}

public class BaseClass {
    // ...private stuff and the logging framework of your preference...

    private static final SomeLogic MUST_OVERRIDE = new SomeLogic() {
        public void execute() {
            // do some default naive stuff
        }
    };

    protected SomeLogic getLogic() { return MUST_OVERRIDE; }

    // the method that probably would be marked as MustOverride if the option existed
    // in the language (maybe with another name, as this exists in VB), with the same
    // objective as the abstract keyword in Java
    public void executeLogic() {
        getLogic().execute();
    }

    public BaseClass() {
        if (getLogic() == MUST_OVERRIDE) {
            log.warn("Using default logic for the important SomeLogic.execute method, but it is not intended for production. Please override getLogic to return a proper implementation ASAP");
        }
    }
}

public class GoodSpecialization extends BaseClass {
    public SomeLogic getLogic() {
        // returns a proper implementation to do whatever was specified for the execute method
    }

    // do some other specialized stuff...
}

public class BadSpecialization extends BaseClass {
    // do lots of specialized stuff but doesn't override getLogic...
}
Some things could be different depending on the requirements, and clearly simpler, especially for languages with lambda expressions, but the basic idea would be the same.
Without the feature built into the language, there is always some way to emulate it. In this example you get a runtime warning in a log file from a home-made, pattern-like solution; only your needs can tell whether that is enough, or whether more hardcore bytecode manipulation, IDE plugin development, or whatever other wizardry is needed.
I've been thinking about this.
While I don't know of any way to require it with a compile error, you might try writing a custom PMD rule to raise a red flag if you forgot to override.
There are already loads of PMD rules that do things like reminding you to implement hashCode if you choose to override equals. Perhaps something similar could be done here.
I've never done this before, so I'm not the one to write a tutorial, but a good place to start would be this link http://techtraits.com/programming/2011/11/05/custom-pmd-rules-using-xpath/ In this example, he basically creates a little warning if you decide to use a wildcard in an import package. Use it as a starting point to explore how PMD can analyze your source code, visit each member of a hierarchy, and identify where you forgot to implement a specific method.
Annotations are also a possibility, but you'd have to figure out your own way to implement the navigation through the class path. I believe PMD already handles this. Additionally, PMD has some really good integration with IDEs.
https://pmd.github.io/