Let's say we have class A in package A and class B in package B. If an object of class A holds a reference to class B, the two classes are said to be coupled.
To address the coupling, it is recommended to define an interface in package A which is implemented by a class in package B. Then the object of class A can refer to the interface in package A. This is often given as an example of "dependency inversion".
Is this an example of "decoupling two classes at the interface level"? If so, how does it remove the coupling between the classes while retaining the same functionality they had when coupled?
Let us create a contrived example of two classes A and B.
Class A in package packageA:
package packageA;

import packageB.B;

public class A {
    private B myB;

    public A() {
        this.myB = new B();
    }

    public void doSomethingThatUsesB() {
        System.out.println("Doing things with myB");
        this.myB.doSomething();
    }
}
Class B in package packageB:
package packageB;

public class B {
    public void doSomething() {
        System.out.println("B did something.");
    }
}
As we can see, A depends on B. Without B, A cannot be used. We say that A is tightly coupled to B. What if we want to replace B in the future with a BetterB? For this, we create an interface Inter within packageA:
package packageA;

public interface Inter {
    public void doSomething();
}
To utilize this interface, we add import packageA.Inter; to B, let B implement Inter, and replace all occurrences of B within A with Inter.
The result is this modified version of A:
package packageA;

public class A {
    private Inter myInter;

    public A() {
        this.myInter = ???; // What to do here?
    }

    public void doSomethingThatUsesInter() {
        System.out.println("Doing things with myInter");
        this.myInter.doSomething();
    }
}
We can already see that the dependency from A to B is gone: the import packageB.B; is no longer needed. There is just one problem: we cannot instantiate an interface. But inversion of control comes to the rescue: instead of instantiating something of type Inter within A's constructor, the constructor will demand something that implements Inter as a parameter:
package packageA;

public class A {
    private Inter myInter;

    public A(Inter myInter) {
        this.myInter = myInter;
    }

    public void doSomethingThatUsesInter() {
        System.out.println("Doing things with myInter");
        this.myInter.doSomething();
    }
}
With this approach we can now change the concrete implementation of Inter within A at will. Suppose we write a new class BetterB:
package packageB;

import packageA.Inter;

public class BetterB implements Inter {
    @Override
    public void doSomething() {
        System.out.println("BetterB did something.");
    }
}
Now we can instantiate As with different Inter implementations:
Inter b = new B();
A aWithB = new A(b);
aWithB.doSomethingThatUsesInter();
Inter betterB = new BetterB();
A aWithBetterB = new A(betterB);
aWithBetterB.doSomethingThatUsesInter();
And we did not have to change anything within A. The code is now decoupled and we can change the concrete implementation of Inter at will, as long as the contract(s) of Inter are satisfied. Most notably, we can support code that will be written in the future and implements Inter.
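For instance, an implementation written long after A was compiled can still be plugged in (a minimal sketch; SilentInter is a hypothetical name):

package packageB;

import packageA.Inter;

// A possible future implementation, e.g. a no-op stub for tests:
public class SilentInter implements Inter {
    @Override
    public void doSomething() {
        // intentionally does nothing
    }
}

An A constructed as new A(new SilentInter()) works without any change to A.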
Addendum
I wrote this answer in 2015. While being overall satisfied with the answer, I always thought that something was missing and I think I finally know what it was. The following is not necessary to understand the answer, but is meant to spark interest in the reader, as well as provide some resources for further self-education.
In the literature, this approach is known as the Dependency inversion principle and belongs to the SOLID principles. There is a nice talk from Uncle Bob on YouTube (the interesting bit is about 15 minutes long) showing how polymorphism and interfaces can be used to let the compile-time dependency point against the flow of control (viewer's discretion is advised, Uncle Bob will mildly rant about Java). This, in turn, means that the high-level implementation does not need to know about lower-level implementations when they are segregated through interfaces. Thus lower levels can be swapped at will, as we have shown above.
Imagine that the functionality of B is to write a log to some database. The class B depends on the functionality of the class DB and provides some interface for its logging functionality to other classes.
Class A needs the logging functionality of B, but it does not care where the log is written. It does not care about DB, but since it depends on B, it also depends on DB. This is not very desirable.
So what you can do is split the class B into two parts: an abstraction L describing the logging functionality (and not depending on DB), and the implementation depending on DB.
Then you can decouple the class A from B, because now A will only depend on L. B now also depends on L; that is why it is called dependency inversion, because B provides the functionality defined by L.
Since A now depends on just a lean L, you can easily use it with other logging mechanisms that do not depend on DB. E.g. you can create a simple console-based logger implementing the interface defined in L.
But since A now does not depend on B but (in its sources) only on the abstract interface L, at run time it has to be set up to use some specific implementation of L (B, for instance). So there needs to be somebody else that tells A to use B (or something else) at runtime. And that is called inversion of control, because before, A decided to use B, but now somebody else (e.g. a container) tells A to use B at runtime.
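A minimal sketch of that split, assuming L is realized as an interface (ConsoleLogger and the method signatures are assumptions, not from the original):

public interface L {
    void log(String message);
}

public class B implements L {               // the DB-backed implementation
    @Override
    public void log(String message) {
        // write the message to some database (omitted here)
    }
}

public class ConsoleLogger implements L {   // an alternative with no DB dependency
    @Override
    public void log(String message) {
        System.out.println(message);
    }
}

public class A {
    private final L logger;

    public A(L logger) {                    // somebody else decides which L to use
        this.logger = logger;
    }
}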
The situation you describe removes the dependence that class A has on the specific implementation of class B and replaces it with an interface. Now class A can accept any object that is of a type that implements the interface, instead of only accepting class B. The design retains the same functionality because class B is made to implement that interface.
This is where DI (Dependency Injection) frameworks really shine.
When you are building interfaces, you are actually building out contracts for implementation. Your calling services will only interact with the contract and the promise that a service interface will always provide the methods that it has specified.
For example...
Your ServiceA will build its logic around ServiceB's interface and does not have to worry about what happens under ServiceB's hood.
This allows you to create multiple implementations of ServiceB without having to change any logic in ServiceA.
For the sake of example
interface ServiceB { void doMethod(); }
You can interact with ServiceB in ServiceA without knowing what goes on under the hood of ServiceB.
class ServiceAImpl {
    private final ServiceB serviceB;

    public ServiceAImpl(ServiceB serviceB) { // depend on the interface, not on an implementation
        this.serviceB = serviceB;
    }

    public void doSomething() {
        serviceB.doMethod(); // calls ServiceB interface method.
    }
}
Now because you have built ServiceA using the contract specified in ServiceB, you are able to change out the implementation as you please.
You can mock the service, create different connection logic to different databases, create different runtime logic. All of these can change and will not at all affect the way ServiceA interacts with ServiceB.
Thus, loose coupling is achieved with IoC (Inversion of Control). You now have a modular and focused codebase.
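For example, a test might swap in a mock without touching ServiceA (a sketch; MockServiceB is a hypothetical name):

class MockServiceB implements ServiceB {
    @Override
    public void doMethod() {
        System.out.println("mock: no real work done");
    }
}

// Wiring it up, e.g. in a test:
ServiceAImpl serviceA = new ServiceAImpl(new MockServiceB());
serviceA.doSomething(); // prints "mock: no real work done"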
Related
I was just curious if they are treated any differently.
For example if we have:
The interface:
public interface Test {
    public void method();
}
And the abstract class:
public abstract class Test {
    public abstract void method();
}
Will the JVM treat these two any differently? Which of the two takes up more disk space, which one uses more runtime memory, and which one performs better?
This question isn't about when to use interfaces or abstract classes.
Yes, they are different.
With an interface, clients can implement it as well as extend a class:
class ClientType implements YourInterface, SomeOtherInterface { //can still extend other types
}
With a class, clients will be able to extend it, but not extend any other type:
class ClientType extends YourClass { //can no longer extend other types
}
Another difference arises when the interface or abstract class has only a single abstract method declaration, and it has to do with anonymous functions (lambdas).
As @AlexanderPetrov said, an interface with one method can be used as a functional interface, allowing us to create functions "on-the-fly" wherever a functional interface type is specified:
//the interface
interface Runnable {
    void run();
}

//where it's specified
void execute(Runnable runnable) {
    runnable.run();
}

//specifying argument using lambda
execute(() -> { /* code here */ });
This cannot be done with an abstract class.
So you cannot use them interchangeably. The difference comes in the limitations of how a client can use it, which is enforced by the semantics of the JVM.
As for differences in resource usage, it's not something to worry about unless it's causing your software problems. The idea of using a memory-managed language is to not worry about such things unless you are having problems. Don't optimize prematurely; the difference is negligible. And even if there is a difference, it should only matter if it may cause a problem for your software.
If your software is having resource problems, profile your application. If one of the two does cause memory issues, you will be able to see it, as well as how many resources each one consumes. Until then, you shouldn't worry about it. You should prefer the feature that makes your code easier to manage, as opposed to the one that consumes the least amount of resources.
JVM internals and memory representation
It will be almost the same for the JVM. My statement is based on Chapter 4 - The class File Format. As seen from the attached documentation, the JVM distinguishes between a class and an interface by the access_flags. If you have a simple interface with just one method and a simple abstract class with just one method, most of the fields in this format will be the same (empty) and the main difference will be the access_flags.
Default constructor generation for abstract classes
As @Holger pointed out, another small difference between an interface and an abstract class is that ordinary classes require a constructor. The Java compiler will generate a default constructor for the abstract class, which will be invoked for each of its subclasses. In that sense the abstract class definition will be slightly bigger compared to the interface.
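Conceptually (a sketch; the comment shows the equivalent of what the compiler inserts when no constructor is declared):

public abstract class Simple {
    // Generated by the compiler when absent:
    // public Simple() { super(); }
    public abstract void doWork();
}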
https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-4.html
ClassFile {
    u4 magic;
    u2 minor_version;
    u2 major_version;
    u2 constant_pool_count;
    cp_info constant_pool[constant_pool_count-1];
    u2 access_flags;
    u2 this_class;
    u2 super_class;
    u2 interfaces_count;
    u2 interfaces[interfaces_count];
    u2 fields_count;
    field_info fields[fields_count];
    u2 methods_count;
    method_info methods[methods_count];
    u2 attributes_count;
    attribute_info attributes[attributes_count];
}
Besides the multiple inheritance of interfaces, another difference is that in Java 8 an abstract class with only one method is not a functional interface.
@FunctionalInterface
public interface SimpleFuncInterface {
    public void doWork();
}

void execute(SimpleFuncInterface function) {
    function.doWork();
}

execute(() -> System.out.println("Did work"));
The same cannot be achieved with an abstract class.
Interfaces - lack of "Openness to extension".
Up to Java 8, interfaces were criticized for their lack of extensibility: if you changed the interface contract, you needed to refactor all implementers of the interface.
One example that comes to mind is the Java MapReduce API for Hadoop, which was changed in the 0.20.0 release to favour abstract classes over interfaces, since they are easier to evolve. This means a new method can be added to an abstract class (with a default implementation) without breaking old implementations of the class.
With the introduction of Java 8 interface default methods, this lack of extensibility has been addressed.
public interface MyInterface {
    int method1();

    // default method, providing default implementation
    default String displayGreeting() {
        return "Hello from MyInterface";
    }
}
With Java 8, new methods can be added both to interfaces and abstract classes without breaking the contract with the client classes.
http://netjs.blogspot.bg/2015/05/interface-default-methods-in-java-8.html
They are different in implementation:
- You can extend only one abstract class; on the other hand, you can implement multiple interfaces at the same time.
- You need to implement every method present in an interface; an abstract class may provide default implementations, so you are free to choose whether to override a method or just use the default implementation.
If you are talking about performance: some time ago there was an opinion that interfaces are slower than abstract classes. But now the JIT makes no difference between them.
Please, see this answer for more details.
In production code I often see classes defined as follows:
public interface SomeComponent { /* Some methods */ }
public class SomeComponentImpl implements SomeComponent { /* Some methods */ }
public interface SomeComponentV2 extends SomeComponent { /* Some methods */ }
public class SomeComponentV2Impl extends SomeComponentImpl implements SomeComponentV2 { /* Some methods */ }
Why in this case we want to separate the interface and its implementation?
Or, to put it another way, why is it bad to simply have one base class and let V2 extend/override V1 as follows:
public class SomeComponent { /* Some methods */ }
public class SomeComponentV2 extends SomeComponent
{
    // Override methods for reimplementation
    // Add new methods for new features.
}
It is good practice to separate the interface from the implementation of a class because you can easily swap out classes.
Imagine you want to test an application which depends on a web-service that bills you for every request. In addition to having a class which performs real requests to this web-service, you could build a class which implements the same interface but returns fake data, to avoid generating costs for every request.
Every time you inherit from a base-class there is a chance that you inherit behaviour you simply don't want to inherit. An interface is a pure contract and gives you the freedom to let you choose a base-class independently of the described advantage.
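A minimal sketch of that testing idea (all names and the method signature are assumptions):

public interface BillingService {
    int charge(String account, int cents);
}

public class RealBillingService implements BillingService {
    @Override
    public int charge(String account, int cents) {
        // performs a real, billed request to the web-service (omitted here)
        return 0;
    }
}

public class FakeBillingService implements BillingService {
    @Override
    public int charge(String account, int cents) {
        return 0; // fake data; no cost incurred per request
    }
}

Code under test depends only on BillingService, so either class can be swapped in.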
Separating interface from implementation allows you to make full use of polymorphism.
In this way SomeComponentV2Impl will have three types: its own, the base class, and the interface.
Here you may just use the interface without caring about its implementation in further classes. For example:
public void methodInOuterClass(SomeComponent smCmp){
    smCmp.runInterfaceMethods();
}
[Edit: this question appeared in the OP's question before edits]
Why don't we use one base class for them all?
Because SomeComponentV2Impl is distinct from SomeComponentImpl.
But if they implement the same interface, you will be able to call their implementations through the interface's reference.
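For example (a sketch, assuming the declarations above):

SomeComponent v1 = new SomeComponentImpl();
SomeComponent v2 = new SomeComponentV2Impl();
// the same call site works against either implementation through the interface reference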
Say I have an interface A and a class B that implements it.
Now, I also have some class C which extends class D (which means that it can't also extend B), but I also need the functionality of interface A there.
The solution I know is to have a member of type A in C, instantiated as a B (so that C implements A), and when implementing the functions of A, call the matching function on that A member.
Is there any way to create some connection between the functions of A and the member inside C? (So that Java will know that every time it needs to call a function from A, it should directly run the matching function on the A member, without me needing to write the code for it for every function of A.)
A big thank you awaits each one of the helpers...
No. As already stated, delegation must be implemented manually.
Having said that, you have a few options to simplify this: If you're working with Eclipse, select Source|Generate Delegate Methods... and select your member variable. Eclipse will then generate all the delegate methods for you. I don't know about other IDEs, but I would be surprised, if NetBeans et al. would not have a similar feature.
Another option, if you actually want to decorate existing collection classes, is to consider Google Guava's collection helpers.
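For example, Guava's ForwardingList lets you override only the methods you care about while all other calls delegate automatically (a sketch; LoggingList is a hypothetical name):

import com.google.common.collect.ForwardingList;
import java.util.List;

public class LoggingList<E> extends ForwardingList<E> {
    private final List<E> delegate;

    public LoggingList(List<E> delegate) {
        this.delegate = delegate;
    }

    @Override
    protected List<E> delegate() {
        return delegate; // every non-overridden method forwards here
    }

    @Override
    public boolean add(E element) {
        System.out.println("adding " + element);
        return super.add(element);
    }
}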
Last, but not least, you could consider restructuring your code and decorating your classes using advices. Advices stem from Aspect-Oriented Programming (AOP) and typically use a proxying mechanism to enrich the original target classes. This is a rather advanced technique, but if you are determined to go down this road, have a look at Spring's AOP support.
So to sum up, here are your class hierarchies:
package common;

public interface A
{
    void doStuff();
}

package common.impl;

public class B implements A
{
    public void doStuff() {}
}

package real.service;

public class D
{
    void doSomeRealStuff() {}
}

package real.service;

public class C extends D
{
    void doSomeRealStuffForGood() {}
}
Assuming that each class is declared in its own source file.
Just to recall from the OP: I assume you need B's stuff in C and not really A's stuff, because A is nothing but a contract, and you then need the real implementing class to be fetched inside your C class in order to call the declared methods on it.
In such a case, you may need to use the inversion of responsibility approach: you declare an instance of type B inside your C class, then you wrap each method from B with one having the same signature that does nothing but delegate the real call to the member instance:
package real.service;

import common.A;
import common.impl.B;

public class C extends D
{
    private A delegate;

    public C()
    {
        delegate = new B();
    }

    void doStuff()
    {
        delegate.doStuff(); // Call the real delegate method when doStuff is called on an instance of C.
    }

    void doSomeRealStuffForGood() {}
}
Note that this is a legal OO concept, since you are following a HAS-A relationship, even though some could consider it high coupling.
Otherwise, if you are not tied to the B class and could swap its methods for others, you can declare an inner class that implements the A interface the way you need.
Edit:
Java does not support multiple inheritance. Though you have provided a common contract in your A interface, if you need all those methods (behavior) to be available in your C class, it would be better to implement the interface directly and override all its methods.
I want to create a class that does not implement any method of an interface, but extends any implementation of A with its own methods.
Let's assume we have the following:
public interface A {
    public void a();
}
and
public class B implements A {
    @Override
    public void a() {
        System.out.println("a");
    }
}
I now want to create a class C that also implements A and takes another random implementation of A:
public class C implements A {
    public C(A a) {
        //what do I need to do with a here?
    }

    public void c() {
        System.out.println("c");
    }
}
Now if I have the following:
A b = new B();
A c = new C(b);
c.a();
The output should be "a".
I can't just
public class C extends B {
...
as C is supposed to be able to work with any implementation of A, not just B.
I also can't
public class C implements A {
    private A a;

    public C(A a) {
        this.a = a;
    }

    @Override
    public void a() {
        a.a();
    }

    public void c() {
        System.out.println("c");
    }
}
since that would mean that I have to redirect every single interface method and rewrite C whenever something changes with A.
Is there any way to handle that problem in Java?
For another example, replace A: List; B: ArrayList; C: FooList; a(): size()
What you're looking for is a dynamic proxy, which automatically implements all the methods of an interface by delegating to a concrete implementation of this interface. That's not trivial, but not so complex to do either, using Java's Proxy class.
A concrete example of such a proxy, which "adds" methods to any instance of PreparedStatement by wrapping it, can be found at https://github.com/Ninja-Squad/ninja-core/blob/master/src/main/java/com/ninja_squad/core/jdbc/PreparedStatements.java
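A minimal sketch of such a dynamic proxy using java.lang.reflect.Proxy (the Delegates helper and its name are assumptions):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public final class Delegates {
    // Returns an object implementing iface that forwards every call to target.
    @SuppressWarnings("unchecked")
    public static <T> T delegatingProxy(Class<T> iface, T target) {
        InvocationHandler handler =
                (proxy, method, args) -> method.invoke(target, args);
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}

// Usage with the classes from the question:
A proxied = Delegates.delegatingProxy(A.class, new B());
proxied.a(); // prints "a", with no hand-written delegation

The proxy covers only the interface methods; C's own additions would still need a class of their own.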
Unfortunately, there's no way to do it in Java other than your last code snippet. Various IDEs will help you with the code generation, though, and marking all methods @Override will mean that you'll get a warning or an error if your implementation of C doesn't exactly match A's interface.
For Eclipse (and, apparently, IntelliJ), see the "Generate Delegate Methods" command.
This is probably not going to immediately help you, but if you used Java 8, you could solve this with defender methods, which are methods implemented in the interface itself.
You would then, for each existing implementation class, add your own class which extends that class and implements your additional interface with the defender methods. The methods would be "mixed into" your class.
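A sketch of that mixin idea with default methods (CBehavior and CB are hypothetical names, reusing A and B from the question):

// The extra behavior lives in the interface itself:
interface CBehavior extends A {
    default void c() {
        System.out.println("c");
    }
}

// One thin subclass per existing implementation mixes it in:
class CB extends B implements CBehavior { }

A CB instance then offers both a() (inherited from B) and c() (from the default method), with no delegation code.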
Java 8 is just around the corner, though, so it is not a far-off solution. Oracle has promised it will release it by the end of this quarter, meaning in less than a month and a half at the latest.
Is there any way to handle that problem in Java?
Basically, no.
What you are describing is a wrapper class that delegates calls to the wrapped object. The only way you can implement that (in regular Java) is to implement all of the methods and have them make the calls.
Another alternative would be to use the Proxy class ... which will effectively generate a dynamic proxy. The problem is that this requires an InvocationHandler that will (I guess) use reflection to make the call to the wrapped object. It is complicated and won't be efficient.
If your goal is simply to avoid writing code, I think this is a bad idea. If your goal is to avoid writing the same code over and over (e.g. because you have lots of examples of C for a given A), then consider coding an abstract class for the C classes that deals with the wrapping / delegation.
It would also be possible to generate the wrapper class C from scratch, using the BCEL library or similar. But that's an even worse idea (IMO).
I am a novice in Java, so the below question may look trivial.
Background:
Scenario 1:
I have an abstract base class C1 in Java with, say, N interface member functions.
The client uses this class as a package and implements Client1, the client business logic.
So Client1, built on the package Package1 (which contains the C1 class definition), is able to work with the jar JAR1.
Scenario 2:
I want to understand the impact of adding new member functions to class C1. The class C1 with additional members (say we call it C2) contains N+M member functions (which we assume the client does not use) and is shipped in the jar JAR2.
Now there can exist multiple deployment combinations:
Client1 (built on Package1) runs in environment of JAR1
Client1 (built on Package1) runs in environment of JAR2
etc
I am basically from a C++ background, and there the concept of the vptr and its impact would be studied in detail when a new interface is added to a class which is exposed to clients.
Question:
a. How do these extensions need to be analyzed and implemented in the case of Java (any material on the same would be very helpful)?
b. If this is a "safe option" in Java, what other considerations do we need to handle in this type of situation?
To answer both questions:
a) This article describes how the JVM loads and links class files: http://java.sun.com/docs/books/jvms/second_edition/html/ConstantPool.doc.html.
b) As long as the signatures of methods/public members don't change, calling code will still work. Changing these will result in runtime exceptions when the class is loaded.
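A sketch of why merely adding a concrete method is safe for old clients (the version labels are illustrative):

// Version 1 (JAR1), what Client1 was compiled against, declared only:
//     public abstract void existing();
// Version 2 (JAR2) adds a concrete member:
public abstract class C1 {
    public abstract void existing();

    public void added() { /* new in JAR2; old clients never reference it */ }
}

Old clients link fine because they never mention added(). Adding a new abstract method, by contrast, would break existing subclasses of C1 (an AbstractMethodError when the missing method is invoked).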
If I have understood it right, your question is about the implications of modifying the interface.
Implementing an interface allows a class to become more formal about the behavior it promises to provide. Interfaces form a contract between the class and the outside world, and this contract is enforced at build time by the compiler.
interface Turns
{
    public void turnLeft();
    public void turnRight();
}

class Device implements Turns
{
    public void turnLeft()
    {
        //implementation
    }

    public void turnRight()
    {
        //implementation
    }
}
Now, if we need to modify the interface, what we do is extend the interface:
interface TurnsAllWays extends Turns
{
    public void turnsBack();
}
So now Device can continue as it was or be modified if necessary.
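New code that needs the extra behavior targets the extended contract, while existing devices compile unchanged (a sketch; AdvancedDevice is a hypothetical name):

class AdvancedDevice implements TurnsAllWays
{
    public void turnLeft()
    {
        //implementation
    }

    public void turnRight()
    {
        //implementation
    }

    public void turnsBack()
    {
        //implementation
    }
}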