Is there a way to tell JDBI I want to use a specific plugin (SqlObjectPlugin in my case) throughout my entire application, without having to re-specify upon each use? Throughout my application code I have the following:
var jdbi = Jdbi.create(dataSource);
jdbi.installPlugin(new SqlObjectPlugin()); // <== I want to make this part DRY
I'm tired of having that second boilerplate line repeated all over my code.
To the JDBI devs: I totally love the product! thx! :)
You have two options; which of them is best depends on the code of your application:
Instantiate Jdbi once and reuse it everywhere (using DI, a static singleton, etc.); see the sketch after this list.
Use org.jdbi.v3.core.Jdbi#installPlugins, which, according to the documentation, uses the ServiceLoader API to detect and install plugins automagically. Some people consider this feature dangerous; some consider it essential -- use at your own risk. But you would still need to call this method every time.
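A minimal sketch of option 1, assuming a hand-rolled holder class (ApplicationContext is a hypothetical name; a DI container would do the same job):

import javax.sql.DataSource;
import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.sqlobject.SqlObjectPlugin;

public final class ApplicationContext {
    private final Jdbi jdbi;

    public ApplicationContext(DataSource dataSource) {
        // configure once at startup; installPlugin returns the Jdbi instance
        this.jdbi = Jdbi.create(dataSource).installPlugin(new SqlObjectPlugin());
    }

    public Jdbi jdbi() {
        return jdbi;
    }
}

Every caller then asks the context for the shared instance instead of creating its own.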
Related
There may be some related questions, but I think my situation is peculiar enough to justify a question on its own.
I'm working on a historically grown, huge Java project (far over one million LOC; for unrelated reasons we're still bound to Java 6 at the moment), where reflection is used to display data in tables. Reflection is not used for dynamically changing the displayed data, but just as a kind of shortcut in the code. A simplified part of the code looks like this:
TableColumns taco = new TableColumns(Bean.class);
taco.add(new TableColumn("myFirstMember"));
taco.add(new TableColumn("mySecondMember"));
...
List<Bean> dataList = getDataFromDB(myFilterSettings);
taco.displayTable(dataList);
So the values of the table cells of each row are stored in an instance of Bean. The value for the first cell comes from calling itemOfDataList.getMyFirstMember() (this is where the reflection comes in). The rendering of the table cells is done depending on the return type of itemOfDataList.getMyFirstMember().
This way, it's easy to add new columns to the table, getting them rendered in a standard way without caring about any details.
Problem of this approach: when the getter name changes, the compiler doesn't notice and there will be an exception at runtime in case Bean.getMyFirstMember() was renamed to Bean.getMyFirstMemberChanged().
While reflection is used to determine which getter is called, the needed info is in fact available at compile time; no variables are used for the column info.
My goal: having a validator that will check at compile time whether the needed getter methods in the Bean class do exist.
Possible solutions:
modifying the code (using more specific info, writing an adapter, using annotations or whatever else the compiler can check at compile time). I explicitly don't want a solution of this kind, due to the huge code base. I just need to guarantee that the reflection won't fail at runtime.
writing a custom validator: I guess this shouldn't be too complex, but I have no real idea how to start. We use Eclipse as our IDE, so it should be possible to write such a custom validator - any hints for a good starting point?
The validator should show a warning in Eclipse if the parameter in TableColumn(parameter) isn't final (it should be a literal or constant). The validator should show an error in Eclipse if the TableColumn is added to TableColumns and the corresponding Bean.getParameter() doesn't exist.
as we use SonarQube for quality checking, we could also implement a custom rule checking whether the methods exist - not completely sure if such a custom rule is possible (probably yes)
maybe other solutions that will give fast feedback within Eclipse that some tables won't render correctly after some getter methods were renamed
What I'm asking for:
what will be easier in this situation: writing a custom validator for Eclipse or writing a custom rule for SonarQube?
hints on where to start with either approach
hints for other solutions
Thanks for your help.
Some alternatives:
You could migrate to more modern Java for this pattern; it is a prime candidate for method references. Then your IDE of choice can automatically take care of the problem when you refactor/rename. This can be done bit by bit as the opportunity/necessity arises (see the sketch after this list).
You could write your own custom annotations:
Which you can probably get SonarQube to scan for
Which could allow you to take advantage of javax.validation.* goodies, so your code may look/feel more like 'standard' Java EE code.
Annotations can be covered by a processor during the build step; various build tools have ways to hook this up -- and the processor can do more advanced/costly introspection, so you can push the validation to compile time as opposed to run time.
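To illustrate the method-reference variant: a hypothetical typed column that captures the getter directly, so renaming Bean.getMyFirstMember() in the IDE updates the call site, and anything missed becomes a compile error (TypedTableColumn and valueFor are made-up names, not part of the existing framework):

import java.util.function.Function;

class TypedTableColumn<B, V> {
    private final Function<B, V> getter;

    TypedTableColumn(Function<B, V> getter) {
        this.getter = getter;
    }

    V valueFor(B row) {
        return getter.apply(row); // no reflection; checked by the compiler
    }
}

Usage would then look like taco.add(new TypedTableColumn<>(Bean::getMyFirstMember)); instead of passing the getter name as a string.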
I'm trying to get my head around Dagger 2 and dependency injection. I figured a good way of doing it was to take a look at the official coffee example. I also read through the official documentation on the GitHub page, but I found it rather confusing for newcomers.
As you can see in the image below, I stripped down all the classes and colored them to understand what's going on. But I still have some doubts.
My questions:
1) I've heard that we use DI because passing the dependencies into the constructor makes our code verbose. Well, the amount of code and classes in this example enormously exceeds what it would have taken to just supply these two parameters in a constructor. Furthermore, the code would be much more human-friendly. What is the advantage of doing it this way?
2) PumpModule declares a provider that takes the thing it is supposed to provide as a parameter... that is a tad counterintuitive. What's really going on there?
3) I really got lost here (CoffeApp.java)
DaggerCoffeApp_Coffe.builder().build()
What is that doing? Where's DaggerCoffeApp_Coffe? Android Studio says it can't find it anywhere.
Answer 1
"the amount of code and classes on this example enormously exceeds
what it would have taken to just supply these two parameters in a
constructor"
In example code or a very small app - yes. In a normal-sized or big app you will have to provide "these two parameters in a constructor" hundreds if not thousands of times.
One of the best advantages of DI is that it allows (and to some extent maybe forces) you to create modular apps, where the modules can be developed and tested in isolation. This may not sound like a big deal, but again, when the app grows bigger it becomes really hard to add new changes without breaking things. When you develop a module you can isolate yourself from the rest of the app by defining interfaces that provide the needed functionality and defining @Injects for these interfaces. That way, if you later (months from now, next version?) decide that you need to change/extend/rewrite some module, other modules will not be affected as long as you don't change its interface. You will be able to write your replacement module and then just 'switch' to it in your @Provides method.
The other big advantage of DI is that it lets you easily feed mock objects into your unit tests. For example: let's say you have an Activity that uses the GPS location provider to detect the location. If you want to test it without DI, you have to run your app in the emulator in debug mode, manually provide some "fake" locations, and at some breakpoint inspect whether the Activity is in the expected state. With DI you can easily feed a mock location provider into your Activity that simulates GPS location updates with values you predefine. Of course you may again run your app manually in the emulator (or on a real device), but you can also run it automatically as part of the unit tests, or even in a continuous integration server process like Jenkins. That way, each time you change your code you can run the tests and immediately see if the changes broke something. The other value is that automatic tests save your time: the manual test in the example would probably need at least 2 minutes, while the automatic test takes seconds and, more importantly, runs without needing your attention/input.
For more info I recommend this great video by Jake Wharton:
https://www.parleys.com/tutorial/5471cdd1e4b065ebcfa1d557/
Here the slides for the video:
https://speakerdeck.com/jakewharton/dependency-injection-with-dagger-2-devoxx-2014
Answer 2
"PumpModule declares a provider that takes the thing it is supposed to
provide as a parameter"
That provider provides the interface, not the concrete class. That is the whole point. When you migrate your app to DI, you have to create an interface for each class that you want injected. As explained in Answer 1, that way you will be able to easily replace the concrete implementation with a mock object for testing, or with a new, better implementation in a later version of the app.
For example: at some point you decide that you need a rotary vane pump instead of a Thermosiphon. You write your RotaryVanePump class and then simply change your method to @Provides Pump providePump(RotaryVanePump pump) {...}.
How this works ((over)simplified explanation):
The DI graph is built by DaggerCoffeApp_Coffe.builder().build() (please see Answer 3 first).
At some point Dagger finds @Inject Pump mMyPump; in your code.
Dagger sees that you need a Pump injected and searches the DI graph for a way to provide it.
It finds the @Provides Pump providePump() method. Dagger sees that it requires a RotaryVanePump object.
Dagger searches the DI graph for a way to provide a RotaryVanePump.
There is no provide method for it in any module, but RotaryVanePump does not need one because it has a parameterless (@Inject-annotated) constructor, so Dagger can instantiate an object itself.
A new object of type RotaryVanePump is instantiated.
Dagger feeds this object into providePump() as the actual parameter.
providePump() returns that object as its return value.
The RotaryVanePump is injected into the @Inject Pump mMyPump field.
And all of this is done automatically; you don't have to care about it.
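Expressed in code, a minimal sketch of that swap (RotaryVanePump is the hypothetical replacement pump from the example; Pump is the interface from the coffee sample):

import javax.inject.Inject;
import dagger.Module;
import dagger.Provides;

class RotaryVanePump implements Pump {
    @Inject
    RotaryVanePump() {} // @Inject constructor lets Dagger build it without a @Provides method
}

@Module
class PumpModule {
    @Provides
    Pump providePump(RotaryVanePump pump) {
        return pump; // Dagger constructs the RotaryVanePump and hands it in here
    }
}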
Answer 3
DaggerCoffeApp_Coffe is generated by Dagger. You will have to use the android-apt plugin in order for Android Studio to "see" the generated classes.
https://bitbucket.org/hvisser/android-apt
"What is that doing?
That is doing the whole magic :-). It builds the dependency graph and checks at compile time that all dependencies are satisfied. That "at compile time" differentiates Dagger from all other DI frameworks, which are unable to validate the graph at compile time; with those, if you forget to define some dependency you will get an ugly runtime error.
For all of your @Injects to work, you will first have to build the graph using a call like this.
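For reference, a sketch of how that call typically sits in the coffee example (class and component names keep the question's spelling; Dagger derives DaggerCoffeApp_Coffe from a component interface Coffe nested in CoffeApp, and DripCoffeeModule/CoffeeMaker come from the sample):

import javax.inject.Singleton;
import dagger.Component;

public class CoffeApp {
    @Singleton
    @Component(modules = DripCoffeeModule.class)
    public interface Coffe {
        CoffeeMaker maker();
    }

    public static void main(String[] args) {
        // builder().build() constructs and validates the whole graph
        Coffe coffe = DaggerCoffeApp_Coffe.builder().build();
        coffe.maker().brew();
    }
}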
I'm new to FIT and FitNesse, and I'm wondering if it is possible to cascade method calls without defining special fixtures.
Background: we are testing our web-based GUI with Selenium WebDriver. I have created a framework based on the PageObject pattern to decouple the HTML from the page logic; this framework is used in our JUnit tests. The framework is implemented in a fluent-API style with a grammar.
Something like this:
boolean connectionTest =
connectionPage
.databaseHost( "localhost" )
.databaseName( "SOME-NAME" )
.instanceNameConnection()
.instanceName("SOME-INSTANCE-NAME")
.windowsAuthentication()
.apply()
.testConnection();
Some testers want to create acceptance tests but aren't developers, so I had a look at FIT. Would it be possible to use my framework with FIT as is, without developing special fixtures?
I don't believe you can use the existing code with 'plain-vanilla' Fit; it would at least require a special fixture class to be defined. Maybe 'SystemUnderTest' could help?
Otherwise, FitNesse's Slim test system might be a way to get it to work for you.
As a side note: I've put a FitNesse baseline installation on GitHub, including features to do website testing with (almost) no Java code. In my experience its BrowserTest allows non-developers to create/modify/maintain tests easily, and lets you integrate those tests with your continuous integration process (if you have one). I would suggest you (or your testers) also have a look at that.
I know you asked about Java, but in case any .NET developers see this: it's possible with the .NET implementation, fitSharp:
|with|new|connection page|
|with|database host|localhost|
|with|database name|some-name|
etc.
See http://fitsharp.github.io/Fit/WithKeyword.html
I have solved my problem by writing a generic fixture which receives the target methods and their arguments from the FitNesse table and uses Java reflection to invoke the appropriate framework methods.
So I have one fixture for all the different page objects that are returned from the framework.
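A rough sketch of such a generic fixture (the class and method names here are assumptions, not the poster's actual code). Each table row names a framework method plus one string argument; reflection invokes it on the current page object, and fluent-API return values become the new target so calls can cascade:

import java.lang.reflect.Method;

public class GenericPageFixture {
    private Object target; // the page object the next call is dispatched to

    public GenericPageFixture(Object startPage) {
        this.target = startPage;
    }

    // invoked from a FitNesse row such as | call | databaseHost | localhost |
    public void call(String methodName, String argument) throws Exception {
        Method method = target.getClass().getMethod(methodName, String.class);
        Object result = method.invoke(target, argument);
        if (result != null) {
            target = result; // follow the fluent chain into the returned object
        }
    }
}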
Our team has been given a legacy system for further maintenance and development.
As this is true "legacy" stuff, there is a really, really small number of tests, and most of them are crap. This is an app with a web interface, so there are both container-managed components and plain Java classes (not tied to any framework etc.) which are "new-ed" here and there, wherever anyone felt like it.
As we work with this system, every time we touch a given part we try to break the code into smaller pieces, discover and refactor dependencies, and push dependencies in instead of pulling them in from the code.
My question is how to work with such a system, to break dependencies, make the code more testable, etc. When to stop, and how to deal with all this?
Let me show you an example:
public class BillingSettingsAction {
private TelSystemConfigurator configurator;
private OperatorIdDao dao;
public BillingSettingsAction(String zoneId) {
configurator = TelSystemConfiguratorFactory.instance().getConfigurator(zoneId);
dao = IdDaoFactory.getDao();
...
}
// methods using configurator and dao
}
This constructor definitely does too much. Also, testing it ahead of further refactoring requires magic with PowerMock etc. What I'd do is change it into:
public BillingSettingsAction(String zone, TelSystemConfigurator configurator, OperatorIdDao dao) {
this.configurator = configurator;
this.dao = dao;
this.zone = zone;
}
or provide a constructor setting only the zone, with setters for the dependencies.
The problem I see is that if I take the dependencies in the constructor, I still need to provide them somewhere, so it is just moving the problem one level up. I know I can create a factory for wiring all the dependencies, but touching different parts of the app will leave me with a different factory for each. I obviously can't refactor the whole app at once and introduce e.g. Spring there.
Exposing setters (maybe with default implementations provided) is similar; moreover, it is like adding code for tests only.
So my question is: how do you deal with that? How do you make the dependencies between objects better, more readable, and more testable without doing it all in one go?
I just recently started reading "Working Effectively with Legacy Code" by Michael Feathers.
The book is basically an answer to your very question. It presents very actionable "nuggets" and techniques to incrementally bring a legacy system under test and progressively improve the code base.
The navigation can be a little confusing, as the book references itself by pointing to specific techniques, almost from page 1, but I find that the content is very useful so far.
I am not affiliated with the author or anything like that, it's just that I am facing similar situations and found this a very interesting resource.
HTH
I'd try to establish a rule like the Boy Scout Rule: whenever you touch a file, you have to improve it a little, apart from implementing whatever you wanted to implement.
In order to support that you can:
Agree on a fixed time budget for such improvements, e.g. for 2 hours of feature work we allow 1 hour of clean-up.
Have metrics visible showing the improvement over time. Often simple things like average file size and test coverage are sufficient.
Have a list of things you want to change, at least for the bigger stuff, like "Get rid of TelSystemConfiguratorFactory". Track which tasks are already in progress, and prefer finishing things already started over beginning new ones.
In any case make sure management agrees to your approach.
On the more technical side: the approach you showed is good. In many cases I would keep a second constructor that obtains the dependencies itself, as before, but implement it by delegating to the new constructor with the parameters. Make that additional constructor deprecated. When you touch clients of the class, make them use the new constructor.
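A sketch of that intermediate step, using the classes from the question: the old single-argument constructor stays for existing clients but is deprecated and delegates to the new, fully injectable one.

public class BillingSettingsAction {
    private final String zone;
    private final TelSystemConfigurator configurator;
    private final OperatorIdDao dao;

    /** @deprecated use the constructor that takes explicit dependencies */
    @Deprecated
    public BillingSettingsAction(String zoneId) {
        this(zoneId,
             TelSystemConfiguratorFactory.instance().getConfigurator(zoneId),
             IdDaoFactory.getDao());
    }

    public BillingSettingsAction(String zone, TelSystemConfigurator configurator,
                                 OperatorIdDao dao) {
        this.zone = zone;
        this.configurator = configurator;
        this.dao = dao;
    }
}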
If you are going with Spring (or some other DI framework), you can start by replacing calls to static factories with lookups from the Spring context, as an intermediate step before actually creating the objects via Spring and injecting all the dependencies.
I'm interested in a running script that allows user input to modify the process and the corresponding source.
What precedents exist for implementing such a structure?
Yes, depending on what is meant.
Consider such projects as ObjectWeb ASM (see the ASM 2.0 tutorial for a general rundown).
Trying to emit the (would-need-to-be-decompiled) Java source code is another story: if that were the goal, then perhaps the source should be edited, re-compiled, and somehow loaded in/over. (This is possible as well; consider tools like JRebel.)
Happy coding.
You should not be able to modify existing classes. But if you implement a ClassLoader, then you can dynamically load classes from non-traditional sources: the network, an XML file, user input, a random number generator, etc.
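A minimal sketch of that idea: define a class at runtime from raw bytes, which could come from any of the sources above (DynamicLoader is a hypothetical name):

public class DynamicLoader extends ClassLoader {
    // classBytes must hold valid class-file bytes, e.g. generated with ASM
    public Class<?> defineFromBytes(String className, byte[] classBytes) {
        return defineClass(className, classBytes, 0, classBytes.length);
    }
}

// usage (assuming 'bytes' holds the class file for com.example.Generated):
// Class<?> c = new DynamicLoader().defineFromBytes("com.example.Generated", bytes);
// Object instance = c.newInstance();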
There are probably other, better ways.
Maybe the Java scripting API is what you're looking for:
http://docs.oracle.com/javase/6/docs/api/javax/script/package-summary.html
http://docs.oracle.com/javase/6/docs/technotes/guides/scripting/programmer_guide/index.html
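For instance, a small sketch of the scripting-API route: evaluate user-supplied code inside a running JVM (the JavaScript engine ships with the JDK; ScriptDemo is a made-up name):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptDemo {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        engine.put("x", 21);                       // expose a variable to the script
        System.out.println(engine.eval("x * 2"));  // user input could be eval'd here
    }
}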
I wrote an app once that used reflection to allow tests to be driven by a text file. For instance, if you had a class like this:
class Tuner {
    Tuner(String channel) { ... }
    void tune() { ... }
    void play() { ... }
    void stop() { ... }
}
You could execute methods via code like:
tuner=Channel 1
tune tuner
play tuner
stop tuner
It had some more capabilities (you could pass objects into other objects, etc.), but mostly I used it to drive tests on a cable box where a full write/build/deploy cycle just to run a test took on the order of half an hour.
You could create a few reusable classes and tie them together with this test language to make some very complex yet easy-to-create tests.
THAT is a DSL, not monkeying around with your loose-syntax language by eliminating parentheses and adding underscores and dots in random locations to make it look like some strange semi-English.