How can I 'provide' a void method in Dagger - java

I'm trying to migrate a project from Spring and starting to play with Dagger 2 by following a sample GitHub repo.
In Spring I can use an annotation at the class level to expose all of the methods of a class, including void methods. I want to do the same in Dagger 2, i.e. I want to 'provide' a void method.
In the code example below maybe I should move the printRandomUUID method to the Printer interface. However, I'm doing this little exercise with the goal of migrating a classic Spring @Component or @Service.
What is the correct approach? Is it possible to provide a void method at component or module level?
public class Main {
    interface Printer {
        void printMsg(String msg);
    }

    static class ConsolePrinter implements Printer {
        @Override
        public void printMsg(String msg) {
            System.out.println(msg);
        }
    }

    @Singleton
    @Component(modules = ConsoleModule.class)
    interface HelloWorldApp {
        Printer getPrinter();

        // this doesn't compile -> java.lang.IllegalArgumentException: not a valid component method:
        void printRandomUUID();
    }

    @Module
    static class ConsoleModule {
        @Provides
        Printer providePrinter() {
            return new ConsolePrinter();
        }

        // this doesn't compile -> @Provides methods must return a value (not void)
        @Provides
        void printRandomUUID() {
            System.out.println(UUID.randomUUID().toString());
        }
    }

    public static void main(String[] args) {
        HelloWorldApp app = DaggerMain_HelloWorldApp.create();
        app.getPrinter().printMsg("Hello");
        System.out.println("-");
        // app.printRandomUUID(); // here I want a void method to be consumed, from the component.
    }
}

This isn't possible (yet). Unlike Spring, Dagger is configured solely by inspecting the @Component interface and its @Module classes. This means that Dagger component methods have to match one of the method shapes listed in the @Component documentation, and there is currently no way of providing arbitrary code or methods that delegate to other instances.
That's not to say components can't have void methods: they can, but those are required to be single-parameter methods that populate the @Inject-annotated methods and fields of externally-created instances. These are known as members-injection methods; instead of void they may also return the type they accept, for ease of chaining.
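For illustration, a members-injection method on the question's component might look like this. This is only a sketch (it requires the Dagger annotation processor to compile); `Greeter` is a hypothetical class, while `Printer` and `ConsoleModule` come from the question:

```java
// Hypothetical class whose @Inject members the component fills in.
class Greeter {
    @Inject Printer printer;

    void greet() { printer.printMsg("Hello from Greeter"); }
}

@Singleton
@Component(modules = ConsoleModule.class)
interface HelloWorldApp {
    Printer getPrinter();

    // A members-injection method: exactly one parameter, void return
    // (or Greeter, for chaining); it populates Greeter's @Inject members.
    void inject(Greeter greeter);
}
```

Usage would be `component.inject(new Greeter())` on an instance you created yourself, after which its `printer` field is populated.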
From a different standpoint, I would argue that combining arbitrary business logic with your Dagger-created component is a bad idea for reasons of simplicity and correctness:
To do so may be violating SRP or separation-of-concerns: One of the stated advantages of dependency injection is the separation of object-creation logic from other business logic. Allowing the addition of a business method on an object-creation Component should feel as improper as the use of new in a business component. (Whether or not every single object should be provided through a DI graph is a contentious topic for another day.)
If you hold to best practices and avoid side effects and other "heavy lifting" in constructors/factories/providers, then you should be able to reason cleanly about what can and can't happen from within a Component method. Allowing arbitrary methods, particularly void methods, on a Component would be antithetical to that practice.
If your application uses separate granular libraries instead of a monolithic compilation step, then consuming a Component from within its own object graph may make it hard to build without introducing a dependency cycle. Of course, Dagger does allow for the injection of a component within its own graph, but doing so recklessly may cause cycle problems later.
Similar use-cases are so easy to represent using existing structures, such as making a Runnable available through the graph as Louis Wasserman commented, or injecting a similar single-purpose object to hold the method, that keeping arbitrary implementations off of the Component seems to result in no big loss of functionality or readability. At worst, you need one extra method call to get to a class you define.
If I were migrating as you are, I would make a class adjacent to HelloWorldApp called HelloWorldMethods, and shift all of the methods I would put on HelloWorldApp onto that instead. If this is a common pattern in your Spring migration, you might even define a local convention for it (FooComponent comes with FooMethods or FooUtil, for instance). Finally, if you wanted to hide the Dagger implementation details (as in an external API) you could also write your own class that wraps and consumes your Component, delegating important methods to the inner Component and providing whichever arbitrary implementations you need.
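A minimal sketch of that adjacent-class convention (`HelloWorldMethods` is the hypothetical name suggested above; `Printer` is redeclared here so the example stands alone):

```java
import java.util.UUID;

// Arbitrary business methods live next to the component, not on it.
public class HelloWorldMethods {
    interface Printer {
        void printMsg(String msg);
    }

    private final Printer printer; // obtained from the component, e.g. app.getPrinter()

    public HelloWorldMethods(Printer printer) {
        this.printer = printer;
    }

    public void printRandomUUID() {
        printer.printMsg(UUID.randomUUID().toString());
    }

    public static void main(String[] args) {
        // In the real app: new HelloWorldMethods(app.getPrinter())
        new HelloWorldMethods(msg -> System.out.println(msg)).printRandomUUID();
    }
}
```

The component stays a pure object-creation graph, and the one extra call (`new HelloWorldMethods(app.getPrinter())`) is the only cost.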

Related

Java - correct way to delegate methods

My program gets information from an external source (can be a file, a database, or anything else I might decide upon in the future).
I want to define an interface with all my data needs, and classes that implement it (e.g. a class to get the data from a file, another for DB, etc...).
I want the rest of my project to not care where the data comes from, and not need to create any object to get the data, for example to call "DataSource.getSomething();"
For that I need DataSource to contain a variable of the type of the interface and initialize it with one of the concrete implementations, and expose all of its methods (that come from the interface) as static methods.
So, let's say the interface name is K, and the concrete implementations are A, B, C.
The way I do it today is:
public class DataSource {
    private static K myVar = new B();

    // For **every** method in K I do something like this:
    public static String getSomething() {
        return myVar.getSomething();
    }
    ...
}
This is very bad since I need to copy all the methods of the interface and make them static just so I can delegate them to myVar, among many other obvious problems.
What is the correct way to do it? (maybe there is a design pattern for it?)
**Note - since this will be the backbone of many many other projects and I will use these calls from thousands (if not tens of thousands) code lines, I insist on keeping it simple like "DataSource.getSomething();", I do not want anything like "DataSource.getInstance().getSomething();" **
Edit :
It was suggested here that I use a DI framework like Guice. Does this mean I will need to add the DI code at every entry point (i.e. "main" method) in all my projects, or is there a way to do it once for all projects?
The classes using your data source should access it via an interface, with the correct instance provided to each class at construction time.
So first of all make DataSource an interface:
public interface DataSource {
    String getSomething();
}
Now a concrete implementation:
public class B implements DataSource {
    public String getSomething() {
        //read a file, call a database whatever..
    }
}
And then your calling class looks like this:
public class MyThingThatNeedsData {
    private DataSource ds;

    public MyThingThatNeedsData(DataSource ds) {
        this.ds = ds;
    }

    public String doSomethingRequiringData() {
        String something = ds.getSomething();
        //do whatever with the data
        return something;
    }
}
Somewhere else in your code you can instantiate this class:
public class Program {
    public static void main(String[] args) {
        DataSource ds = new B(); // Here we've picked the concrete implementation
        MyThingThatNeedsData thing = new MyThingThatNeedsData(ds); // And we pass it in
        String result = thing.doSomethingRequiringData();
    }
}
You can do the last step using a Dependency Injection framework like Spring or Guice if you want to get fancy.
Bonus points: In your unit tests you can provide a mock/stub implementation of DataSource instead and your client class will be none the wiser!
I want to focus in my answer on one important aspect of your question; you wrote:
Note - I insist on keeping it simple like "DataSource.getSomething();", I do not want anything like "DataSource.getInstance().getSomething();"
Thing is: simplicity is not measured in number of characters. Simplicity comes out of good design; and good design comes out of following best practices.
In other words: if you think that DataSource.getSomething() is "easier" than something that uses (for example) dependency injection to "magically" provide you with an object that implements a certain interface, then you are mistaken!
It is the other way round: those are separate concerns. On the one hand, you declare an interface that describes the functionality you need. On the other hand, you have client code that needs an object of that interface. That is all you should be focusing on. The step of "creating" that object and making it available to your code might look more complicated than just calling a static method; but I guarantee you: following the answer from Paolo will make your product better.
It is sometimes easy to do the wrong thing!
EDIT: one pattern that I am using:
interface SomeFunc {
    void foo();
}

class SomeFuncImpl implements SomeFunc {
    ...
}

enum SomeFuncProvider implements SomeFunc {
    INSTANCE;

    private final SomeFunc delegatee = new SomeFuncImpl();

    @Override
    public void foo() { delegatee.foo(); }
}
This pattern allows you to write client code like
class Client {
    private final SomeFunc func;

    Client() { this(SomeFuncProvider.INSTANCE); }
    Client(SomeFunc func) { this.func = func; }
}
Meaning:
There is a nice (singleton-correct) way of accessing an object that gives you your functionality
The impl class is completely unit-testable
Client code uses dependency injection, and is therefore also fully unit-testable
My program gets information from an external source (can be a file, a database, or anything else I might decide upon in the future).
This is the thought behind patterns such as Data Access Object (short DAO) or the Repository pattern. The difference is blurry. Both are about abstracting away a data source behind a uniform interface. A common approach is having one DAO/Repository class per business- or database entity. It's up to you if you want them all to behave similarly (e.g. CRUD methods) or be specific with special queries and stuff. In Java EE the patterns are most often implemented using the Java Persistence API (short JPA).
For that I need DataSource to contain a variable of the type of the
interface and initialize it with one of the concrete implementations,
For this initialization you don't want to know or define the type in the using classes. This is where Inversion of Control (short IOC) comes into play. A simple way to achieve this is putting all dependencies into constructor parameters, but this way you only move the problem one stage up. In a Java context you'll often hear the term Contexts and Dependency Injection (short CDI), which is basically an implementation of the IOC idea. Specifically, in Java EE there's the CDI package, which enables you to inject instances of classes based on their implemented interfaces. You basically do not call any constructors anymore when using CDI effectively. You only define your class' dependencies using annotations.
and expose all of its methods (that come from the interface)
This is a misconception. You want it to expose the interface-defined methods ONLY. All other public methods on the class are irrelevant and only meant for testing, or for rare cases where you want to use specific behavior.
as static methods.
Having stateful classes with static methods only is an antipattern. Since your data source classes must hold a reference to the underlying data source, they have state; thus, the class needs a private field. This makes usage through static methods impossible. Additionally, static classes are very hard to test and do not behave nicely in multi-threaded environments.
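To make the testability point concrete, here is a small sketch contrasting the two shapes (names `K`, `StaticDataSource`, `InstanceDataSource` are hypothetical, loosely following the question's naming):

```java
interface K {
    String getSomething();
}

// Hard to test: the implementation is fixed at class-load time
// and every caller is welded to it through static calls.
class StaticDataSource {
    private static final K IMPL = () -> "from the real source";

    static String getSomething() { return IMPL.getSomething(); }
}

// Easy to test: the implementation is a constructor argument,
// so a stub can be swapped in without touching production wiring.
class InstanceDataSource {
    private final K impl;

    InstanceDataSource(K impl) { this.impl = impl; }

    String getSomething() { return impl.getSomething(); }
}

public class DataSourceDemo {
    public static void main(String[] args) {
        System.out.println(StaticDataSource.getSomething());
        System.out.println(new InstanceDataSource(() -> "stub").getSomething());
    }
}
```

The instance-based version is what the constructor-injection answer above arrives at; the static facade only looks simpler at the call site.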

Why should I use an interface when there is only one implementation class?

I'm new at programming and I'm learning Java.
I was just wondering why I should use an interface when there is only one implementation class?
You do this to prevent others from accessing your implementing type. For example, you could hide your implementing type inside a library, give the type package access, and return an instance of your interface to the users of your library:
// This is what the users of your library know about the class
// that does the work:
public interface SomeInterface {
    void doSomethingUseful();
    void doSomethingElse();
}

// This is the class itself, which is hidden from your clients
class MyImplementation implements SomeInterface {
    private SomeDependency dependency = new SomeDependency();

    public void doSomethingUseful() {
        ...
    }

    public void doSomethingElse() {
        ...
    }
}
Your clients obtain objects like this:
public class MyFactory {
    public static SomeInterface make() {
        // MyFactory can see MyImplementation
        return new MyImplementation();
    }
}
This trick becomes useful when the implementation uses lots of libraries. You effectively decouple the interface of your library from its implementation, so that the user doesn't have to know about the dependencies internal to your library.
One reason is to maintain the open/closed principle, which states that your code should be open for extension, but closed for modification. Although you only have one implementing class now, chances are that you will need another, differing implementation class with the passing of time. If you extract the implementation into an interface beforehand, you just have to write another implementing class, i.e. you don't have to modify a perfectly working piece of code, eliminating the risk of introducing bugs.
You shouldn't do anything without thinking and reasoning.
There might be cases where you might want to add an interface even for a single implementation ... but IMO that's an OBSOLETE PRACTICE coming from old EJB times, one that people use and enforce without proper reasoning and reflection.
... and Martin Fowler, Adam Bien, and others have been saying so for years.
https://martinfowler.com/bliki/InterfaceImplementationPair.html
https://www.adam-bien.com/roller/abien/entry/service_s_new_serviceimpl_why
To respect the Interface Segregation Principle.
The decision to create an interface should not be based on the number of implementing classes, but rather on the number of different ways that object is used. Each way the object is used is represented by an interface, defined with the code that uses it. Say your object needs to be stored in memory, in collections that keep objects in order. The same object also needs to be stored in some persistent storage.
Say you implement persistence first. What is needed by the storage system is a unique identifier for the persisted objects. You create an interface, say Storable, with a method getUniqueId. You then implement the storage.
Then, you implement the collection. You define what the collection needs from stored objects in an interface, like Comparable, with a method compareTo. You can then implement the collection with dependency on Comparable.
The class you want to define would implement both interfaces.
If the class you are defining implements a single interface, that interface would have to represent the needs of both the collection and the storage system. That would cause, for example:
unit tests for the collection would have to be written with objects that implement Storable, adding a level of complexity.
if the need arises later to display the object, you would have to add the methods needed by the display code to the single interface, and modify the tests for collection and storage to also implement the methods needed for display.
I talk about the impact on test code here. The problem is larger if other production-level objects need storage and not display. The larger the project, the larger the issue created by not respecting the interface segregation principle becomes.
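The two narrow interfaces described above can be sketched as follows. `Storable` is the name from the text, `Comparable` is the standard library interface, and `Employee` is a hypothetical class implementing both:

```java
// What the storage system needs from a stored object.
interface Storable {
    String getUniqueId();
}

// One class serves both clients, each through its own interface.
class Employee implements Storable, Comparable<Employee> {
    private final String id;
    private final String name;

    Employee(String id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public String getUniqueId() {          // the storage system's view
        return id;
    }

    @Override
    public int compareTo(Employee other) { // the ordered collection's view
        return name.compareTo(other.name);
    }
}

public class IspDemo {
    public static void main(String[] args) {
        Employee a = new Employee("1", "Alice");
        Employee b = new Employee("2", "Bob");
        System.out.println(a.getUniqueId());
        System.out.println(a.compareTo(b) < 0);
    }
}
```

The collection's unit tests can use any `Comparable`, and the storage tests any `Storable`, without either depending on the other's interface.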
It can give you the flexibility to add more implementations in the future without changing the client code which references the interface.
Another example of when it can be useful is to simulate multiple inheritance in Java when it is needed. For example, suppose you have an interface MyInterface and an implementation:
public interface MyInterface {
    void aMethod1();
    void aMethod2();
}

class MyInterfaceImpl implements MyInterface {
    public void aMethod1() {...}
    public void aMethod2() {...}
}
You also have an unrelated class with its own hierarchy:
public class SomeClass extends SomeOtherClass {
    ...
}
Now you want to make SomeClass be of type MyInterface but you also want to inherit all the code that is already existing in MyInterfaceImpl. Since you cannot extend both SomeOtherClass and MyInterfaceImpl, you can implement the interface and use delegation:
public class SomeClass extends SomeOtherClass implements MyInterface {
    private MyInterface myInterface = new MyInterfaceImpl();

    public void aMethod1() {
        myInterface.aMethod1();
    }

    public void aMethod2() {
        myInterface.aMethod2();
    }
    ...
}
I see a lot of good points being made in this post. Also wanted to add my 2 cents to this collection of knowledge.
Interfaces encourage parallel development in a team environment. There can be 2 classes A and B, with A calling B's API, and 2 developers simultaneously working on A and B. While B is not ready, A can still go about its own implementation by integrating against B's interface.
Interfaces serve as a good ground for establishing API contracts between different layers of code.
It's good to have a separation of concerns, with interfaces doubling as implicit API documentation. It's super easy to refer to one and figure out which APIs are accessible for clients to call.
Lastly, it's better to practice using interfaces as a standard in a project than to use them on a case-by-case basis (only where you need multiple implementations). This ensures consistency in your project.
For the art that Java code is, interfaces make it even more beautiful :)
Interfaces can be implemented by multiple classes. There is no rule that only one class can implement them. Interfaces provide abstraction in Java.
http://www.tutorialspoint.com/java/java_interfaces.htm
You can get more information about interfaces from this link.

Why adding a new method to the Java interface breaks the clients that depend on old version?

In Java when you add a new method to an interface, you break all your clients. When you have an abstract class, you can add a new method and provide a default implementation in it. All the clients will continue to work.
I wonder why the interface is designed this way?
All the old methods are still there, so it seems like there is no backward compatibility issue. (Of course there would need to be certain exceptions, but I think enabling the addition of new methods to Java interfaces without breaking clients could be a really good idea...)
I'd appreciate your comments.
There are a few possible breaks I can see
you assume that clients will use a new overloaded method, but they don't because their code hasn't been recompiled.
you add a method which the client already had which does something different.
you add a method which means their subclasses break when recompiled. IMHO This is desirable.
you change the return type, method name or parameters types which will cause an Error at runtime.
you swap parameters of the same type. This is possibly the worst and most subtle bug. ;)
IMHO, it's the subtle problems which are more likely to cause you grief. However, I wouldn't assume that simply adding a method will break the code, nor would I assume that a client's code running without errors means they are using the latest version of anything. ;)
If I compile this code
public interface MyInterface {
    void method1();
    // void method2();
}

public class Main implements MyInterface {
    @Override
    public void method1() {
        System.out.println("method1 called");
    }

    public static void main(String... args) {
        new Main().method1();
    }
}
it prints
method1 called
I then uncomment method2() and recompile just the interface. This means the interface has a method that Main doesn't implement. Yet when I run it without recompiling Main I get
method1 called
If I have
public class Main implements MyInterface {
    public void method1() {
        System.out.println("method1 called");
    }

    public void method2() {
        System.out.println("method2 called");
    }

    public static void main(String... args) {
        new Main().method1();
    }
}
and I run with // method2() commented out, I don't have a problem.
An interface is like a template for a class. When you have an object whose class implements a certain interface and you cast to that interface, you can only access that object (and its methods) through the interface. Thus, your client will always see all the methods provided by the interface, and not only those that are in fact implemented by the class.
Your suggestion would result in you wondering whether the object you are handling at any given moment really has an implementation for the method you are trying to call.
Of course, in your scenario that would not happen for the legacy clients, until you want to update them some time and you rely on your objects having an implementation for all the methods your IDE shows you. :)
The fact with abstract classes is (exactly as you have mentioned) that you provide a default implementation and can thus, on the client side, rely on your object having the methods implemented.
Hope this helps to clear things up.
Regards
An interface represents the set of ALL methods available from that interface to its users, not SOME of the methods to which other methods could be added. It's a contract of being no less and no more.
The best thing about this design is that there is no ambiguity.
Java is a statically typed language and needs to know completely which methods are available from the interface declaration. Inside the JVM there is only one representation of an interface class, and it needs to contain the complete set of abstract methods available.
When you have an abstract class with some implemented methods, its semantics is that of a class which cannot be instantiated but whose sole purpose is to be extended and have its abstract methods implemented.
An abstract class enforces IS-A relation whereas Interface enforces BEHAVES-AS
The role of interfaces is more than just declaring methods. They form the very basis for contract-based/contract-first development. These contract APIs define what goes in as input and what comes out as output. And like any CONTRACT, they are valid only when all conditions are met. That is the reason it is mandatory for a class to implement all the APIs of the implemented interface.
There are design patterns that define the way to handle versions and new developments: Factory/Abstract Factory patterns, which should be used when instantiating the implementations of these interfaces (so that they can internally validate which version of the implementation is to be used).
The flexibility you are asking for can be achieved by proper design, instead of expecting it from the programming language. For a programming tool, plugging a new-version implementation into an old interface is a programming error. But that is different from a backward compatibility break.
The availability of a newer version of a component doesn't necessarily mean there will be a backward compatibility break. A mature product or component always supports older versions as long as they don't become a real pain to maintain. So in most cases users don't have to worry about the availability of a newer version unless there are new features they would like to make use of.
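Worth noting as a hedged aside (the answers above predate it): since Java 8, an interface method can carry a default body, which lets an interface evolve without breaking existing implementors:

```java
interface MyInterface {
    void method1();

    // Added later; old implementors inherit this body unchanged,
    // so neither recompilation nor runtime linkage breaks.
    default String method2() {
        return "default method2";
    }
}

public class DefaultMethodDemo implements MyInterface {
    @Override
    public void method1() {
        System.out.println("method1 called");
    }

    public static void main(String... args) {
        DefaultMethodDemo m = new DefaultMethodDemo();
        m.method1();
        System.out.println(m.method2()); // inherited default implementation
    }
}
```

Implementors may still override `method2()`; the default only fills the gap for those that don't.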

Write annotation to guard a method from being called when parameters are invalid

Can I have an annotation on a method such that, when its parameters are null, the method just does nothing, effectively as if it were never invoked?
Probably the easiest way to do this is using interfaces and dynamic proxies. Annotations do nothing other than add metadata. You're going to have to add code to act based on the annotation.
You'd have to do a few things:
Create an interface
public interface IService {
    @ValidateNull // Your custom annotation
    public void yourMethod(String s1);
}
When using the implementation, instantiate it as a JDK Proxy.
IService myService = (IService) java.lang.reflect.Proxy.newProxyInstance(
        ServiceImpl.class.getClassLoader(),
        ServiceImpl.class.getInterfaces(),
        new YourProxy(new ServiceImpl()));
Now you can, via reflection, capture all invocations of your method in the YourProxy class.
public class YourProxy implements InvocationHandler {
    private final Object target;

    public YourProxy(Object target) {
        this.target = target;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (method.isAnnotationPresent(ValidateNull.class)) {
            // Check args; if any are null, return without delegating.
        }
        return method.invoke(target, args); // otherwise delegate to the wrapped instance
    }
}
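Putting the pieces together, here is a self-contained sketch of the whole approach. The annotation, interface, and handler names are the hypothetical ones from this answer (`ValidateNull`, `IService`, `YourProxy`); the annotation must have runtime retention or the proxy cannot see it:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class NullGuardDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface ValidateNull {}

    interface IService {
        @ValidateNull
        void yourMethod(String s1);
    }

    static class ServiceImpl implements IService {
        static final List<String> CALLS = new ArrayList<>();

        public void yourMethod(String s1) {
            CALLS.add(s1); // records that the real method actually ran
        }
    }

    static class YourProxy implements InvocationHandler {
        private final Object target;

        YourProxy(Object target) { this.target = target; }

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            if (method.isAnnotationPresent(ValidateNull.class) && args != null) {
                for (Object arg : args) {
                    if (arg == null) {
                        return null; // guard: skip the real call entirely
                    }
                }
            }
            return method.invoke(target, args);
        }
    }

    static IService create() {
        return (IService) Proxy.newProxyInstance(
                IService.class.getClassLoader(),
                new Class<?>[] { IService.class },
                new YourProxy(new ServiceImpl()));
    }

    public static void main(String[] args) {
        IService service = create();
        service.yourMethod("hello"); // reaches ServiceImpl
        service.yourMethod(null);    // silently skipped by the guard
        System.out.println(ServiceImpl.CALLS); // only the non-null call got through
    }
}
```

Note the annotation is looked up on the interface method (the `Method` handed to `invoke`), which is why it must be declared on `IService`, not only on the implementation.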
If you don't want to do this, then you're looking at more heavyweight frameworks such as AspectJ / Spring AOP.
Annotations in and of themselves are nothing but data. They don't "do" anything. There are a number of different ways you can have a run time framework that interprets annotations and accomplishes the functionality you're looking for. The most common technique would be what's called Aspect Oriented Programming, where you alter the behaviour of program components based on metadata.
AspectJ is a full-featured library that allows you to change the behaviour of just about anything! You wouldn't even technically need an annotation with full AspectJ; there are lots of different ways to 'match' the methods whose behaviour you want to alter.
Spring AOP is a more limited subset of the functionality provided by AspectJ, but as a consequence it is also easier to use and add to your project!

How do I intercept a method invocation with standard java features (no AspectJ etc)?

I want to intercept all method invocations to some class MyClass to be able to react on some setter-invocations.
I tried to use dynamic proxies, but as far as I know, this only works for classes implementing some interface. But MyClass does not have such an interface.
Is there any other way, besides implementing a wrapper class that delegates all invocations to a member (an instance of MyClass), or besides using AOP?
As you note, you cannot use JDK dynamic proxies (no interface), but using Spring and CGLIB (JAR included with Spring), you can do the following:
public class Foo
{
    public void setBar()
    {
        throw new UnsupportedOperationException("should not go here");
    }

    public void redirected()
    {
        System.out.println("Yiha");
    }
}
Foo foo = new Foo();
ProxyFactory pf = new ProxyFactory(foo);
pf.addAdvice(new MethodInterceptor()
{
    public Object invoke(MethodInvocation mi) throws Throwable
    {
        if (mi.getMethod().getName().startsWith("set"))
        {
            Method redirect = mi.getThis().getClass().getMethod("redirected");
            redirect.invoke(mi.getThis());
            return null;
        }
        return mi.proceed(); // let non-setter calls through to the target
    }
});

Foo proxy = (Foo) pf.getProxy();
proxy.setBar(); // prints "Yiha"
If you are prepared to do something really ugly, have a look at:
http://docs.oracle.com/javase/7/docs/technotes/guides/jpda/
Basically the debugger interface ought to allow you to attach like a debugger, and hence intercept calls. Bear in mind I think this is a really bad idea, but you asked if it was possible.
Java doesn't have any actual language features for method interception (I'm not sure any static language does).
I kinda like Nick's idea of using the debugger interface, that's just mean.
I think the short answer you need is: No there isn't a way of intercepting a method call in Java without actually replacing the class using a proxy or wrapper.
Note: The AOP libraries just make this happen automatically.
Some of the Java gurus might frown upon this but I've had some good success with avoiding primitive types and setters altogether. My class looks like this:
class Employee extends SmartPojo {
    public SmartString name;
    public SmartInt age;
}
You'll notice two things: 1. everything is public. 2. No constructor.
The magic happens in SmartPojo, which searches for any field implementing the "Smart" interface and initializes it. Since this is no primitive (and not a final class), I can add set() and get() methods for all fields anywhere in my model in a single place. So no more setter/getter waste, it's stunningly simple to add notification (also in a single place), etc.
True, this is no POJO anymore and it's not a Bean in most ways but I've found that these old ideas limit me more than they help. YMMV.
I just developed a small framework for this purpose.
You can check it out at: http://code.google.com/p/java-interceptor/ (use svn to check out).
There isn't a lot of magic in AspectJ. You can write your own agent. http://java.sun.com/javase/6/docs/api/java/lang/instrument/package-summary.html seems to be a good starting point.
Why can't your class implement an interface? You could just extract an interface from it containing all the methods you want to intercept and easily use the dynamic proxy mechanism. It's also good programming practice to code against interfaces rather than classes.
You could use the Spring framework with Spring AOP capabilities (which use dynamic proxies internally) to do it. You will just have to define your class as a Spring bean in the configuration file, and clients of your class will have to either get its instance from the Spring application context or receive it as a dependency automatically (by defining a setMyClass(MyClass mc) method, for instance). From there you can easily go on to defining an aspect that intercepts all the method calls to this class.
