Understanding Dagger 2 app architecture - Java

I'm trying to get my head around Dagger 2 and dependency injection. I figured a good way of doing that was to take a look at the official coffee example. I also read through the official documentation on the GitHub page, but I found it rather confusing for newcomers.
As you can see in the image below, I stripped down all the classes and colored them to understand what's going on. But I still have some doubts.
My questions:
1) I've heard that we use DI because passing the dependencies in the constructor makes our code verbose. Well, the amount of code and classes in this example enormously exceeds what it would have taken to just supply these two parameters in a constructor. Furthermore, the code would be much more human-friendly. What is the advantage of doing it this way?
2) PumpModule declares a provider that takes the thing it is supposed to provide as a parameter... that is a tad counterintuitive. What's really going on there?
3) I really got lost here (CoffeApp.java)
DaggerCoffeApp_Coffe.builder().build()
What is that doing? Where's DaggerCoffeApp_Coffe? Android Studio says it can't find it anywhere.

Answer 1
"the amount of code and classes on this example enormously exceeds
what it would have taken to just supply these two parameters in a
constructor"
In example code or a very small app - yes. In a normal-sized or big app you will have to provide "these two parameters in a constructor" hundreds if not thousands of times.
One of the best advantages of DI is that it allows (and to some extent maybe forces) you to create modular apps, where the modules can be developed and tested in isolation. This may not sound like a big deal, but again, when the app grows bigger it becomes really hard to add new changes without breaking things. When you develop a module you can isolate yourself from the rest of the app by defining interfaces that provide the needed functionality and defining @Injects for these interfaces. That way, if you later (months, next version?) decide that you need to change/extend/rewrite some module, other modules will not be affected as long as you don't change its interface. You will be able to write your replacement module and then just 'switch' to it in your @Provides method.
The other big advantage of DI is that it allows you to easily feed mock objects into your unit tests. For example: let's say you have an Activity that uses the GPS location provider to detect the location. If you want to test it without DI, you will have to run your app in the emulator in debug mode, manually provide some "fake" locations, and at some breakpoint inspect whether the activity is in the expected state. With DI you can easily feed a mock location provider into your Activity that simulates GPS location updates with values you predefine. Of course you may again run your app manually in the emulator (or a real device), but you can also run it automatically as part of the unit tests, or even in a continuous integration server process like Jenkins. That way, each time you change your code you can run the tests and immediately see whether the changes broke something. The other value is that automatic tests save your time. In this example you will probably need at least 2 minutes for the manual test; an automatic test will take seconds and, more importantly, it will run without needing your attention/input while running.
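A minimal sketch of that idea (the class and interface names here are made up for illustration, not taken from any Android API):

interface LocationProvider {
    double[] lastKnownLocation(); // {latitude, longitude}
}

class LocationTracker {
    private final LocationProvider provider;

    // The dependency is passed in, so a test can supply a fake provider
    // instead of the real GPS-backed one.
    LocationTracker(LocationProvider provider) {
        this.provider = provider;
    }

    boolean isNearEquator() {
        return Math.abs(provider.lastKnownLocation()[0]) < 10.0;
    }
}

// In a plain unit test, no emulator or real GPS is needed:
// LocationTracker tracker = new LocationTracker(() -> new double[] {0.0, 0.0});
// assert tracker.isNearEquator();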
For more info I recommend this great video by Jake Wharton:
https://www.parleys.com/tutorial/5471cdd1e4b065ebcfa1d557/
Here are the slides for the video:
https://speakerdeck.com/jakewharton/dependency-injection-with-dagger-2-devoxx-2014
Answer 2
"PumpModule declares a provider that takes the thing it is supposed to
provide as a parameter"
That provider provides the interface, not the concrete class. That is the whole point. When you migrate your app to DI you have to create an interface for each class that you want injected. As explained in Answer 1, that way you will be able to easily replace the concrete implementation with a mock object for testing, or with a new, better implementation in later versions of the app.
For example: at some point you decide that you need a rotary vane pump instead of a thermosiphon. You write your RotaryVanePump class and then simply change your method to @Provides Pump providePump(RotaryVanePump pump) {.
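For illustration, a sketch of what the changed module could look like (reusing the names from this answer; the module body is simplified and not the exact code from the official example):

import dagger.Module;
import dagger.Provides;

@Module
class PumpModule {
    // Before the switch this method took a Thermosiphon; now the Pump
    // interface is backed by RotaryVanePump instead.
    @Provides
    Pump providePump(RotaryVanePump pump) {
        return pump;
    }
}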
How this works ((over)simplified explanation):
The DI graph is built by DaggerCoffeApp_Coffe.builder().build() (please see Answer 3 first).
At some point Dagger finds @Inject Pump mMyPump; in your code.
Dagger sees that you need a Pump injected and searches the DI graph for a way to provide it.
It finds the @Provides Pump providePump() method. Dagger sees that it requires a RotaryVanePump object.
Dagger searches the DI graph for a way to provide RotaryVanePump.
There is no provide method in any module, but RotaryVanePump does not need one because it has an @Inject-annotated parameterless constructor, so Dagger can instantiate it itself.
A new object of type RotaryVanePump is instantiated.
Dagger feeds this object into providePump() as the actual parameter.
providePump() returns that object as its return value.
The RotaryVanePump is injected into the @Inject Pump mMyPump field.
And all of this is done automatically; you don't have to care about it.
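A minimal sketch of the pieces those steps refer to (the names are illustrative; only the annotations matter):

import javax.inject.Inject;

interface Pump {
    void pump();
}

// No @Provides method is needed for this class: the @Inject-annotated
// parameterless constructor is enough for Dagger to instantiate it.
class RotaryVanePump implements Pump {
    @Inject
    RotaryVanePump() {}

    @Override
    public void pump() { /* ... */ }
}

class CoffeeMaker {
    // Dagger finds this request and satisfies it via the @Provides Pump method.
    @Inject Pump mMyPump;
}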
Answer 3
DaggerCoffeApp_Coffe is generated by Dagger. You will have to use the android-apt plugin (linked below) in order for Android Studio to "see" the generated classes.
https://bitbucket.org/hvisser/android-apt
"What is that doing?
That is doing the whole magic :-). It builds the dependency graph and checks at compile time that all dependencies are satisfied. That "at compile time" differentiates Dagger from other DI frameworks, which are unable to validate the graph at compile time; if you fail to define some dependency there, you get an ugly runtime error.
In order for all of your @Injects to work, you will first have to build the graph using a call like this.
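A sketch of such a call, assuming the component is declared as an inner interface Coffe of CoffeApp (that is what the generated name DaggerCoffeApp_Coffe suggests) and exposes a maker() provision method like the official coffee example does:

public class CoffeApp {
    public static void main(String[] args) {
        // Builds the dependency graph; missing bindings would already have
        // failed at compile time when DaggerCoffeApp_Coffe was generated.
        CoffeApp.Coffe coffee = DaggerCoffeApp_Coffe.builder().build();

        // From here on, everything reachable from the component is injected.
        coffee.maker().brew();
    }
}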

Related

Specify JDBI default plugin (SqlObjectPlugin)

Is there a way to tell JDBI I want to use a specific plugin (SqlObjectPlugin in my case) throughout my entire application, without having to re-specify upon each use? Throughout my application code I have the following:
var jdbi = Jdbi.create(dataSource);
jdbi.installPlugin(new SqlObjectPlugin()); // <== I want to make this part DRY
I'm tired of having that second boilerplate line repeated all over my code.
To the JDBI devs: I totally love the product! thx! :)
You have two options; which of them is best depends on the code of your application:
Instantiate Jdbi once and reuse it everywhere (using DI, a static singleton, etc.) - a sketch follows below.
Use org.jdbi.v3.core.Jdbi#installPlugins, which, according to the documentation, uses the ServiceLoader API to detect and install plugins automagically. Some people consider this feature dangerous; some consider it essential -- use at your own risk. But you would still need to call this method every time.
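A sketch of the first option (the holder class name is made up; Jdbi.create and installPlugin are the standard JDBI 3 calls):

import javax.sql.DataSource;

import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.sqlobject.SqlObjectPlugin;

// Create Jdbi once, install the plugin once, and hand the same instance
// to everything that needs it (via your DI container or this static holder).
public final class JdbiProvider {
    private static volatile Jdbi jdbi;

    private JdbiProvider() {}

    public static void init(DataSource dataSource) {
        Jdbi instance = Jdbi.create(dataSource);
        instance.installPlugin(new SqlObjectPlugin()); // the boilerplate line, now in one place
        jdbi = instance;
    }

    public static Jdbi get() {
        return jdbi;
    }
}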

Why use enums when it creates dependency across teams?

I know enums are used when we are expecting only a set of values to be passed. We don't want the caller to pass anything other than the well-defined set.
And this works very well inside a project, because you know what you have to pass.
But consider two projects: I am using the models of the 1st project in the 2nd.
Second project has a method like this.
public void updateRefundMode(RefundMode refundMode)
enum RefundMode("CASH","CARD","GIFT_VOUCHER")
Now, I realise RefundMode can also be PHONEPE, so if I start passing this to the 1st project, it would fail at their end (unable to deserialize enum PHONEPE), although I've added this enum value at my end.
Which is fine, because if my first project doesn't know about "PHONEPE", then it doesn't know how to handle it, so they have to update their models too.
But my problem is: let's imagine a complex object I am trying to pass, which also takes this RefundMode. When I pass a new RefundMode, shouldn't just this field become null or be ignored at their end, rather than the whole object being rejected and the entire flow/request breaking?
Is there a way I can tell Jackson (via JSON properties) to just ignore that field if an unknown value is passed? Curious to know. (Although in that case I am breaking the rule of the enum.) So, why not keep a String, which solves all the problems?
It's all about contracts.
When you are in a client/server situation, being a mobile app and a web server, or a Java library (jar) and another Java project, you have to keep the contracts in mind.
As you observed, a change in contracts needs to be propagated to both parties: the client and the server (supplier).
One way of working with this is to use versioning. You may say: "Version 1: these are the refund modes." Then the mobile app may call the web server by specifying the contract version in the URL: /api/v1/refund?mode=CASH
When the contract needs to be changed, you need to consider what to do with the clients. In the case of mobile apps, the users might not have updated their app to the latest version, so their app may still be calling /api/v1 (and not supporting new refund modes). In that case, you may want to support both /api/v1 and /api/v2 (with the new refund mode) in your web server.
As your example shows, it is not always possible to transparently adapt one contract version to another (in your example, there is no good equivalent to PHONEPE in the original enum). If you have to deal with contract updates, I suggest explicitly writing code to them (you can use dedicated JSON schemas, classes and services) instead of trying to bridge the gaps. Think of what would happen with a third, fourth version.
Edit: to answer your last question, you can ignore unknown fields in JSON by following this answer (with the caveats explained above): https://stackoverflow.com/a/59307683/2223027
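If the goal is specifically to keep deserialization from failing on an unknown enum constant, a sketch along these lines may help (it uses Jackson's READ_UNKNOWN_ENUM_VALUES_AS_NULL feature; the Refund class here is invented for the example):

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class RefundModeDemo {
    enum RefundMode { CASH, CARD, GIFT_VOUCHER }

    static class Refund {
        public RefundMode mode;
        public int amount;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper()
                // Unknown constants such as "PHONEPE" become null instead of
                // failing the whole request.
                .enable(DeserializationFeature.READ_UNKNOWN_ENUM_VALUES_AS_NULL);

        Refund refund = mapper.readValue("{\"mode\":\"PHONEPE\",\"amount\":100}", Refund.class);
        System.out.println(refund.mode);   // null
        System.out.println(refund.amount); // 100 - the rest of the object is still populated
    }
}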
Edit 2: in general, using Enums is a form of strong typing. Sure, you could use Strings, or even bits, but then it would be easier to make mistakes, like using GiftVoucher instead of GIFT_VOUCHER.

How to recover from legacy stuff in large system step by step?

Our team has been given a legacy system for further maintenance and development.
As this is true "legacy" stuff, there is a really, really small number of tests, and most of them are crap. This is an app with a web interface, so there are both container-managed components and plain Java classes (not tied to any framework, etc.) which are "new-ed" here and there, wherever you want.
As we work with this system, every time we touch a given part we try to break that stuff into smaller pieces, discover and refactor dependencies, and push dependencies instead of pulling them in code.
My question is how to work with such a system to break dependencies, make the code more testable, etc. When to stop, and how to deal with this?
Let me show you an example:
public class BillingSettingsAction {
    private TelSystemConfigurator configurator;
    private OperatorIdDao dao;

    public BillingSettingsAction(String zoneId) {
        configurator = TelSystemConfiguratorFactory.instance().getConfigurator(zoneId);
        dao = IdDaoFactory.getDao();
        ...
    }

    // methods using configurator and dao
}
This constructor definitely does too much. Also to test this for further refactoring it requires doing magic with PowerMock etc. What I'd do is to change it into:
public BillingSettingsAction(String zone, TelSystemConfigurator configurator, OperatorIdDao dao) {
    this.configurator = configurator;
    this.dao = dao;
    this.zone = zone;
}
or provide a constructor setting only the zone, with setters for the dependencies.
The problem I see is that if I provide the dependencies in the constructor, I still need to provide them somewhere. So it is just moving the problem one level up. I know I can create a factory for wiring all the dependencies, but touching different parts of the app will result in a different factory for each. I obviously can't refactor the whole app at once and introduce e.g. Spring there.
Exposing setters (maybe with default implementations provided) is similar; moreover, it is like adding code just for the tests.
So my question is: how do you deal with that? How do you make the dependencies between objects better, more readable, and testable without doing it in one go?
I just recently started reading "Working Effectively with Legacy Code" by Michael Feathers.
The book is basically an answer to your very question. It presents very actionable "nuggets" and techniques to incrementally bring a legacy system under test and progressively improve the code base.
The navigation can be a little confusing, as the book references itself by pointing to specific techniques, almost from page 1, but I find that the content is very useful so far.
I am not affiliated with the author or anything like that, it's just that I am facing similar situations and found this a very interesting resource.
HTH
I'd try to establish a rule like the boy scout rule: whenever you touch a file you have to improve it a little, apart from implementing whatever you wanted to implement.
In order to support that you can
Agree on a fixed time budget for such improvements, e.g. for 2 hours of feature work we allow 1 hour of clean-up.
Have metrics visible showing the improvement over time. Often simple things like average file size and test coverage are sufficient.
Have a list of things you want to change, at least for the bigger stuff, like "Get rid of TelSystemConfiguratorFactory". Track which tasks you are already working on, and prefer working on things that have already started over new things.
In any case make sure management agrees to your approach.
On the more technical side: the approach you showed is good. In many cases I would keep a second constructor that obtains all the dependencies itself but delegates to the new constructor with the parameters. Mark that additional constructor deprecated. When you touch clients of that class, make them use the new constructor.
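A sketch of that intermediate step, reusing the class names from the question (the old static factories survive only inside the deprecated constructor):

public class BillingSettingsAction {
    private final String zone;
    private final TelSystemConfigurator configurator;
    private final OperatorIdDao dao;

    // New constructor: dependencies are pushed in, so the class is easy to test.
    public BillingSettingsAction(String zone, TelSystemConfigurator configurator, OperatorIdDao dao) {
        this.zone = zone;
        this.configurator = configurator;
        this.dao = dao;
    }

    // Old constructor kept for existing callers; it just delegates.
    @Deprecated
    public BillingSettingsAction(String zone) {
        this(zone,
             TelSystemConfiguratorFactory.instance().getConfigurator(zone),
             IdDaoFactory.getDao());
    }
}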
If you are going with Spring (or some other DI framework) you can start by replacing calls to static factories with lookups in the Spring context, as an intermediate step before the class is actually created by Spring with all its dependencies injected.
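One common shape for that intermediate step is a small bridge that exposes the Spring context to legacy code (the class name is made up; ApplicationContextAware and getBean are standard Spring APIs):

import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

// Gives legacy, non-Spring-managed code access to Spring beans until the
// callers themselves are created (and injected) by Spring.
@Component
public class SpringContextBridge implements ApplicationContextAware {
    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) {
        context = applicationContext;
    }

    public static <T> T getBean(Class<T> type) {
        return context.getBean(type);
    }
}

// Legacy call sites can then replace
//   TelSystemConfiguratorFactory.instance().getConfigurator(zoneId)
// with
//   SpringContextBridge.getBean(TelSystemConfigurator.class)
// until proper constructor injection is in place.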

Dummy data and unit testing strategies in a modular application stack

How do you manage dummy data used for tests? Keep them with their respective entities? In a separate test project? Load them with a Serializer from external resources? Or just recreate them wherever needed?
We have an application stack with several modules depending on one another, each containing entities. Each module has its own tests and needs dummy data to run with.
Now a module that has a lot of dependencies will need a lot of dummy data from the other modules. Those, however, do not publish their dummy objects because they are part of the test resources, so all modules have to set up all the dummy objects they need again and again.
Also: most fields in our entities are not nullable so even running transactions against the object layer requires them to contain some value, most of the time with further limitations like uniqueness, length, etc.
Is there a best practice way out of this or are all solutions compromises?
More Detail
Our stack looks something like this:
One Module:
src/main/java --> gets jarred (.../entities/*.java contains the entities)
src/main/resources --> gets jarred
src/test/java --> contains dummy object setup, will NOT get jarred
src/test/resources --> not jarred
We use Maven to handle dependencies.
module example:
Module A has some dummy objects
Module B needs its own objects AND the same as Module A
Option a)
A test module T can hold all dummy objects and provide them in test scope (so the loaded dependencies don't get jarred) to the tests in all modules. Will that work? Meaning: if I load T in A and run install on A, will it NOT contain references introduced by T, especially not B? However, A will then know about B's data model.
Option b)
Module A provides the dummy objects somewhere in src/main/java../entities/dummy allowing B to get them while A does not know about B's dummy data
Option c)
Every module contains external resources which are serialized dummy objects. They can be deserialized by the test environment that needs them, because it has a dependency on the module to which they belong. This will require every module to create and serialize its dummy objects, though, and how would one do that? If with another unit test, it introduces dependencies between unit tests, which should never happen; if with a script, it will be hard to debug and not flexible.
Option d)
Use a mock framework and assign the required fields manually for each test as needed. The problem here is that most fields in our entities are not nullable and thus will require setters or constructors to be called, which would land us back at the start again.
What we don't want
We don't want to set up a static database with static data, as the required objects' structure will constantly change. A lot right now, a little later. So we want Hibernate to set up all tables and columns and fill them with data at unit-testing time. Also, a static database would introduce a lot of potential errors and test interdependencies.
Are my thoughts going in the right direction? What's the best practice to deal with tests that require a lot of data? We'll have several interdependent modules that will require objects filled with some kind of data from several other modules.
EDIT
Some more info on how we're doing it right now in response to the second answer:
So for simplicity, we have three modules: Person, Product, Order.
Person will test some manager methods using a MockPerson object:
(in person/src/test/java:)
public class MockPerson {
    public Person mockPerson(parameters...) {
        return mockedPerson;
    }
}

public class TestPerson {
    @Inject
    private MockPerson mockPerson;

    public void testCreate() {
        Person person = mockPerson.mockPerson(...);
        // Asserts...
    }
}
The MockPerson class will not be packaged.
The same applies for the Product Tests:
(in product/src/test/java:)
public class MockProduct { ... }

public class TestProduct {
    @Inject
    private MockProduct mockProduct;
    // ...
}
MockProduct is needed but will not be packaged.
Now the Order Tests will require MockPerson and MockProduct, so now we currently need to create both as well as MockOrder to test Order.
(in order/src/test/java:)
These are duplicates and will need to be changed every time Person or Product changes
public class MockProduct { ... }
public class MockPerson { ... }

This is the only class that should be here:

public class MockOrder { ... }

public class TestOrder {
    @Inject
    private order.MockPerson mockPerson;
    @Inject
    private order.MockProduct mockProduct;
    @Inject
    private order.MockOrder mockOrder;

    public void testCreate() {
        Order order = mockOrder.mockOrder(mockPerson.mockPerson(), mockProduct.mockProduct());
        // Asserts...
    }
}
The problem is, that now we have to update person.MockPerson and order.MockPerson whenever Person is changed.
Isn't it better to just publish the Mocks with the jar so that every other test that has the dependency anyway can just call Mock.mock and get a nicely setup object? Or is this the dark side - the easy way?
This may or may not apply - I'm curious to see an example of your dummy objects and the related setup code (to get a better idea of whether it applies to your situation). But what I've done in the past is not introduce this kind of code into the tests at all. As you describe, it's hard to produce, debug, and especially package and maintain.
What I've usually done (and AFAICT in Java this is the best practice) is to use the Test Data Builder pattern, as described by Nat Pryce in his Test Data Builders post.
If you think this is somewhat relevant, check these out:
Does a framework like Factory Girl exist for Java?
make-it-easy, Nat's framework that implements this pattern.
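A minimal sketch of the pattern (the Person entity and its fields are invented for the example):

// Entity with non-nullable fields, as described in the question.
class Person {
    final String name;
    final String email;

    Person(String name, String email) {
        this.name = name;
        this.email = email;
    }
}

class PersonBuilder {
    // Sensible defaults satisfy the not-null constraints, so tests only
    // override the fields they actually care about.
    private String name = "Default Name";
    private String email = "default@example.com";

    PersonBuilder withName(String name) { this.name = name; return this; }
    PersonBuilder withEmail(String email) { this.email = email; return this; }

    Person build() { return new Person(name, email); }
}

// In a test:
// Person person = new PersonBuilder().withName("Alice").build();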
Well, I carefully read everything written so far, and it is a very good question. I see the following approaches to the problem:
Set up a (static) test database;
Each test has its own setup code that creates (dynamic) test data prior to running the unit tests;
Use dummy or mock objects. All modules know all dummy objects; this way there are no duplicates;
Reduce the scope of the unit test;
The first option is pretty straightforward but has many drawbacks: somebody has to reproduce the database once in a while when the unit tests "mess it up"; if there are changes in the data module, somebody has to introduce the corresponding changes to the test data - a lot of maintenance overhead. Not to mention that generating this data in the first place may be tricky. See also the second option.
Second option: you write test code that, prior to the testing, invokes some of your "core" business methods to create your entities. Ideally, your test code should be independent from the production code, but in this case you will end up with duplicate code that you have to maintain twice. Sometimes it is good to split your production business method in order to have an entry point for your unit test (I make such methods private and use reflection to invoke them; a remark on the method is needed, and refactoring becomes a bit tricky). The main drawback is that if you must change your "core" business methods, it suddenly affects all of your unit tests and you can't test. So developers should be aware of this and not make partial commits to the "core" business methods unless they work. Also, with any change in this area, you should keep in mind "how will this affect my unit tests". Sometimes it is also impossible to reproduce all the required data dynamically (usually because of third-party APIs; for example, you call another application with its own DB from which you are required to use some keys. These keys, with the associated data, are created manually through the third-party application). In such a case, this data, and only this data, should be created statically. For example, you create 10000 keys starting from 300000.
The third option should be good. Options a) and d) sound pretty good to me. For your dummy objects you can use a mock framework or not; the mock framework is only there to help you. I don't see a problem with all of your units knowing all your entities.
The fourth option means that you redefine what a "unit" is in your unit test. When you have a couple of modules with interdependencies, it can be difficult to test each module in isolation. This approach says that what we originally tested was an integration test, not a unit test. So we split our methods and extract small "units of work" that receive all their dependencies on other modules as parameters. These parameters can (hopefully) be easily mocked. The main drawback of this approach is that you don't test all of your code, but only, so to say, the "focal points". You need to do integration testing separately (usually by the QA team).
I'm wondering if you couldn't solve your problem by changing your testing approach.
Unit Testing a module which depends on other modules and, because of that, on the test data of other modules is not a real unit test!
What if you injected a mock for each of the dependencies of your module under test, so you can test it in complete isolation? Then you don't need to set up a complete environment where each depended-on module has the data it needs; you only set up the data for the module you're actually testing.
If you imagine a pyramid, then the base would be your unit tests, above that you have functional tests and at the top you have some scenario tests (or as Google calls them, small, medium and big tests).
You will have a huge amount of Unit Tests that can test every code path because the mocked dependencies are completely configurable. Then you can trust in your individual parts and the only thing that your functional and scenario tests will do is test if each module is wired correctly to other modules.
This means that your module test data is not shared by all your tests but only by a few that are grouped together.
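As a sketch of what that looks like with a mocking library such as Mockito (the Order/Person types here are invented stand-ins for your modules):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Collaborator that, in the real stack, would live in the Person module.
interface PersonDirectory {
    String nameOf(long personId);
}

// The unit under test depends only on the interface, so no Person dummy data is needed.
class OrderService {
    private final PersonDirectory persons;

    OrderService(PersonDirectory persons) {
        this.persons = persons;
    }

    String describeBuyer(long personId) {
        return "Order for " + persons.nameOf(personId);
    }
}

class OrderServiceTest {
    void describesTheBuyer() {
        PersonDirectory persons = mock(PersonDirectory.class);
        when(persons.nameOf(42L)).thenReturn("Alice");

        assert new OrderService(persons).describeBuyer(42L).equals("Order for Alice");
    }
}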
The Builder Pattern as mentioned by cwash will definitely help in your functional tests.
We are using a .NET Builder that is configured to build a complete object tree and generate default values for each property so when we save this to the database all required data is present.

Can Java self-modify via user input?

I'm interested in an executed script allowing user input to modify the process and corresponding source.
What precedents exist to implement such a structure?
Yes, depending on what is meant.
Consider such projects as ObjectWeb ASM (see the ASM 2.0 tutorial for a general rundown).
Trying to emit the (would-need-to-be-decompiled) Java source code is another story: if this is the goal, then perhaps the source should be edited, re-compiled, and somehow loaded in/over. (This is possible as well; consider tools like JRebel.)
Happy coding.
You should not be able to modify existing classes. But if you implement a ClassLoader then you can dynamically load classes from non-traditional sources: network, XML file, user input, random number generator, etc.
There are probably other, better ways.
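A rough sketch of that idea using a URLClassLoader (the directory and class name are hypothetical):

import java.net.URL;
import java.net.URLClassLoader;

public class DynamicLoadDemo {
    public static void main(String[] args) throws Exception {
        // Directory containing freshly compiled .class files, e.g. produced at runtime.
        URLClassLoader loader = new URLClassLoader(new URL[] { new URL("file:///tmp/classes/") });

        // This class did not have to exist when DynamicLoadDemo itself was compiled.
        Class<?> clazz = Class.forName("com.example.Generated", true, loader);
        Object instance = clazz.getDeclaredConstructor().newInstance();
        System.out.println(instance);
    }
}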
Maybe the Java scripting API is what you're looking for:
http://docs.oracle.com/javase/6/docs/api/javax/script/package-summary.html
http://docs.oracle.com/javase/6/docs/technotes/guides/scripting/programmer_guide/index.html
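A small sketch of the scripting API (looking the engine up by the name "JavaScript" works on JDKs that still bundle a JavaScript engine; on very recent ones you may have to add one):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptingDemo {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");

        // The script text could just as well come from user input at runtime.
        String userScript = "1 + 2 * 3";
        Object result = engine.eval(userScript);
        System.out.println(result); // 7
    }
}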
I wrote an app once that used reflection to allow tests to be driven by a text file. For instance, if you had a class like this:
class Tuner {
    Tuner(String channel) { ... }
    public void tune() { ... }
    public void play() { ... }
    public void stop() { ... }
}
You could execute methods via code like:
tuner=Channel 1
tune tuner
play tuner
stop tuner
It had some more capabilities (You could pass objects into other objects, etc), but mostly I used it to drive tests on a cable box where a full write/build/deploy in order to test took on the order of a half hour.
You could create a few reusable classes and tie them together with this test language to make some very complex and easy to create tests.
THAT is a DSL, not monkeying around with your loose-syntax language by eliminating parentheses and adding underscores and dots in random locations to make it look like some strange semi-English.
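A stripped-down sketch of the reflection part (assuming the Tuner class shown above; the real tool read the class, method names, and arguments from the text file):

import java.lang.reflect.Method;

public class ReflectionDriver {
    public static void main(String[] args) throws Exception {
        // "tuner=Channel 1"
        Object tuner = new Tuner("Channel 1");

        // "play tuner" - the method name comes from the script, not from compiled code.
        Method play = tuner.getClass().getMethod("play");
        play.invoke(tuner);
    }
}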
