Our team has been given a legacy system for further maintenance and development.
As this is true "legacy" stuff, there are very few tests, and most of them are worthless. This is an app with a web interface, so there are both container-managed components and plain Java classes (not tied to any framework) which are "new-ed" here and there, wherever convenient.
As we work with this system, every time we touch a given part we try to break it into smaller pieces, discover and refactor dependencies, and push dependencies in from the outside instead of pulling them in from inside the code.
My question is: how do you work with such a system to break dependencies, make the code more testable, etc.? When do you stop, and how do you deal with this?
Let me show you an example:
public class BillingSettingsAction {
    private TelSystemConfigurator configurator;
    private OperatorIdDao dao;

    public BillingSettingsAction(String zoneId) {
        // hidden dependencies: both collaborators are pulled in via static factories
        configurator = TelSystemConfiguratorFactory.instance().getConfigurator(zoneId);
        dao = IdDaoFactory.getDao();
        ...
    }

    // methods using configurator and dao
}
This constructor definitely does too much. Also, testing it as a prerequisite for further refactoring requires doing magic with PowerMock etc. What I'd do is change it into:
public BillingSettingsAction(String zone, TelSystemConfigurator configurator, OperatorIdDao dao) {
    this.configurator = configurator;
    this.dao = dao;
    this.zone = zone;
}
or provide a constructor that sets only the zone, plus setters for the dependencies.
The problem I see is that if I provide the dependencies in the constructor, I still need to provide them somewhere, so it just moves the problem one level up. I know I can create a factory for wiring all the dependencies, but touching different parts of the app will leave me with a different factory for each. I obviously can't refactor the whole app at once and introduce e.g. Spring there.
Exposing setters (maybe with default implementations provided) is similar; moreover, it feels like adding code just for the tests.
So my question is: how do you deal with that? How do you make the dependencies between objects better, more readable and more testable without doing it all in one go?
I just recently started reading "Working effectively with legacy code" by Michael Feathers.
The book is basically an answer to your very question. It presents very actionable "nuggets" and techniques to incrementally bring a legacy system under test and progressively improve the code base.
The navigation can be a little confusing, as the book references itself by pointing to specific techniques, almost from page 1, but I find that the content is very useful so far.
I am not affiliated with the author or anything like that, it's just that I am facing similar situations and found this a very interesting resource.
HTH
I'd try to establish a rule like the boy scout rule: whenever you touch a file you have to improve it a little, apart from implementing whatever you wanted to implement.
In order to support that you can
Agree on a fixed time budget for such improvements, e.g. for 2 hours of feature work we allow 1 hour of clean-up.
Have metrics visible that show the improvement over time. Often simple things like average file size and test coverage are sufficient.
Have a list of things you want to change, at least for the bigger stuff, e.g. "Get rid of TelSystemConfiguratorFactory". Track which tasks are already in progress, and prefer finishing those over starting new ones.
In any case make sure management agrees to your approach.
On the more technical side: the approach you showed is good. In many cases I would keep the old constructor as a second one, but implement it by delegating to the new constructor that takes all the dependencies as parameters. Mark the old constructor deprecated. When you touch clients of that class, make them use the new constructor.
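A minimal sketch of that idea, reusing the names from the question (the old constructor stays source-compatible while new code and tests use the explicit one):

public class BillingSettingsAction {
    private final String zone;
    private final TelSystemConfigurator configurator;
    private final OperatorIdDao dao;

    // New constructor: dependencies are pushed in, so tests can pass fakes.
    public BillingSettingsAction(String zone, TelSystemConfigurator configurator, OperatorIdDao dao) {
        this.zone = zone;
        this.configurator = configurator;
        this.dao = dao;
    }

    // Old constructor, kept so existing callers keep compiling.
    // It only delegates; migrate the callers over time and then remove it.
    @Deprecated
    public BillingSettingsAction(String zoneId) {
        this(zoneId,
             TelSystemConfiguratorFactory.instance().getConfigurator(zoneId),
             IdDaoFactory.getDao());
    }

    // methods using configurator and dao
}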
If you are going with Spring (or some other DI framework), you can start by replacing calls to static factories with a lookup in the Spring context, as an intermediate step before actually creating the object via Spring and injecting all its dependencies.
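A hedged sketch of that intermediate step; the SpringContextHolder bridge below is an assumption for illustration, not an existing class in the app or in Spring:

import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

// Declared once as a bean so Spring hands it the context; legacy code that
// cannot be constructor-injected yet can then look beans up statically.
public class SpringContextHolder implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        context = applicationContext;
    }

    public static <T> T getBean(Class<T> type) {
        return context.getBean(type);
    }
}

Inside the legacy constructor, dao = IdDaoFactory.getDao(); could then become dao = SpringContextHolder.getBean(OperatorIdDao.class); until the class is fully converted to constructor injection.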
Related
Is there a way to tell JDBI I want to use a specific plugin (SqlObjectPlugin in my case) throughout my entire application, without having to re-specify upon each use? Throughout my application code I have the following:
var jdbi = Jdbi.create(dataSource);
jdbi.installPlugin(new SqlObjectPlugin()); // <== I want to make this part DRY
I'm tired of having that second boilerplate line repeated all over my code.
To the JDBI devs: I totally love the product! thx! :)
You have two options; which of them is best depends on the code of your application:
Instantiate Jdbi once and reuse it everywhere (using DI, a static singleton, etc.) - see the sketch after this list.
Use org.jdbi.v3.core.Jdbi#installPlugins which, according to the documentation, uses the ServiceLoader API to detect and install plugins automagically. Some people consider this feature dangerous; some consider it essential -- use at your own risk. But you would still need to call this method every time.
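A minimal sketch of the first option, assuming a single shared DataSource; the holder class and its name are illustrative, not part of JDBI:

import javax.sql.DataSource;
import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.sqlobject.SqlObjectPlugin;

// Application-level holder: the plugin is installed exactly once.
public final class JdbiProvider {

    private static Jdbi jdbi;

    private JdbiProvider() {
    }

    public static synchronized void init(DataSource dataSource) {
        if (jdbi == null) {
            jdbi = Jdbi.create(dataSource);
            jdbi.installPlugin(new SqlObjectPlugin());
        }
    }

    public static Jdbi get() {
        return jdbi;
    }
}

Application code then calls JdbiProvider.get() instead of creating and configuring Jdbi in every place; with a DI container you would register the configured Jdbi as a singleton bean instead.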
Consider this snippet of code from a JSP page:
function clearCart(){
    cartForm.action="cart_clear?method=clear";
    cartForm.submit();
}
Clearly it's trying to call a method on the back end to clear the cart. My question is: how does the server (Tomcat, most likely; correct me if I'm wrong) hosting the site that contains this snippet know how and where to find that method, and how does it "index" it with string values, etc.? In my Java file, the clear method is defined as:
public String clear()
{
    this.request = ServletActionContext.getRequest();
    this.session = this.request.getSession();
    logger.info("Cart is clearing...");
    Cart cart = (Cart) this.session.getAttribute(Constants.SESSION_CART);
    cart.clear();
    for (Long id : cart.getCartItems().keySet())
    {
        Item it = cart.getCartItems().get(id);
        System.out.println(it.getProduct().getName() + " " + it.getNumber());
    }
    return "cart";
}
By which module/what mechanism does Tomcat know how to locate precisely that method? By copying online tutorials and textbooks I know how to write this code, but I want to get a bit closer to the bottom of it all, or at least to something very basic.
Here's my educated (or not so much) guess: since I'm basing my entire project on Struts, Hibernate and Spring, I've inadvertently/invariably configured the build path and dependencies in such a way that when I hit the "compile" button, all the "associating" and "navigating" is done by these frameworks. In other words, as long as I correctly configure the project and get Spring etc. "involved" (sorry, I can't think of the technical jargon that's on the tip of my tongue), and as long as I inherit a class or implement an interface, the compiler will expose these Java methods to the JSP script at compile time - it's partly the work of the compiler and partly the work of the people who wrote the Spring framework. Or, using a really bad analogy, consider a C++ project where you use a third-party library that came in compiled binary form: all you have to do is include the right .h/.hpp file and call the right function, and you'll get that function at run time - note that this really is a bad analogy.
Is that how it is done or am I overthinking it? For example it's all handled by Tomcat?
Sorry for all the verbosity; things get lengthy when you need to express slightly more complicated and nuanced ideas. Also - please go deep and low-level, but not too deep: by that I mean you are free to lecture on how Hibernate and Spring etc. work and how their code is run on a server, but try not to touch the Java virtual machine, byte code, C++ pointers etc., unless of course it is helpful.
Thanks in advance!
Tomcat doesn't do much except obey the Servlet specification. Spring tells Tomcat that all requests to http://myserver.com/ should be directed to Spring's DispatcherServlet, which is the main entry point.
Then it's up to Spring to further direct those requests to the code that handles them. There are different strategies for mapping a specific URL to the code that handles the request, but it's not set in stone and you could easily create your own strategy that would allow you to use whatever kind of URLs you want. For a simple (and stupid) example you could have http://myserver.com/1 that would execute the first method in a single massive handler class, http://myserver.com/2 would execute the second, etc.
The example is with Spring, but it's the same general idea with other frameworks: you have a mapper that maps a URL to the handler code.
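As a rough illustration with Spring MVC annotations (the controller and URL below are made up, not taken from the question's Struts setup):

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

// Spring scans for classes like this and builds a URL-to-method map, so a
// request to /cart/clear ends up in clearCart() without Tomcat knowing
// anything about the method itself.
@Controller
public class CartController {

    @RequestMapping("/cart/clear")
    @ResponseBody
    public String clearCart() {
        // clear the cart here
        return "cart cleared";
    }
}

Struts does the same job with its own mapping: roughly speaking, the action name and method in cart_clear?method=clear are resolved against the Struts configuration rather than annotations.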
These days it's all hidden under layers of abstraction so you don't have to care about the specifics of the mapping and can develop quickly and concentrate on the business code.
I'm trying to get my head around Dagger 2 and dependency injection. I figured a good way of doing it was to take a look at the official coffee example. I also read through the official documentation on the GitHub page, but I found it rather confusing for newcomers.
As you can see in the image below, I stripped down all the classes and colored them to understand what's going on. But I still have some doubts
My questions:
1) I've heard that we use DI because passing the dependencies in the constructor makes our code verbose. Well, the amount of code and classes in this example enormously exceeds what it would have taken to just supply these two parameters in a constructor. Furthermore, the code would be much more human-friendly. What is the advantage of doing it this way?
2) PumpModule declares a provider that takes the thing it is supposed to provide as a parameter... that is a tad counterintuitive. What's really going on there?
3) I really got lost here (CoffeApp.java)
DaggerCoffeApp_Coffe.builder().build()
What is that doing? Where's DaggerCoffeApp_Coffe? Android Studio says it can't find it anywhere.
Answer 1
"the amount of code and classes on this example enormously exceeds
what it would have taken to just supply these two parameters in a
constructor"
In example code or a very small app - yes. In a normal-sized or big app you will have hundreds if not thousands of places where you "provide these two parameters in a constructor".
One of the best advantages of DI is that it allows (and to some extent maybe forces) you to create modular apps, where the modules can be developed and tested in isolation. This may not sound like a big deal but, again, when the app grows bigger it becomes really hard to add new changes without breaking things. When you develop a module you can isolate yourself from the rest of the app by defining interfaces that provide the needed functionality and defining @Injects for these interfaces. That way, if you later (months from now, the next version?) decide that you need to change/extend/rewrite some module, the other modules will not be affected as long as you don't change its interface. You will be able to write your replacement module and then just 'switch' to it in your @Provides method.
The other big advantage of DI is that it allows you to easily feed mock objects into your unit tests. For example: let's say you have an Activity that uses the GPS location provider to detect the location. If you want to test it without DI, you will have to run your app in the emulator in debug mode, manually provide some "fake" locations and, at some breakpoint, inspect whether the activity is in the expected state. With DI you can easily feed a mock location provider into your Activity that simulates GPS location updates with values you predefine. Of course you may again run your app manually in the emulator (or a real device), but you can also run it automatically as part of the unit tests, or even in a continuous integration server process like Jenkins. That way, each time you change your code you can run the tests and immediately see if the changes broke something. The other value is that automatic tests save your time: in the example you will probably need at least 2 minutes for the manual test, while the automatic test will take seconds and, more importantly, will run without needing your attention/input.
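A minimal, framework-agnostic sketch of that idea (the interface and class names below are made up for illustration; they are not from the Dagger coffee example):

// Abstraction over the real GPS provider.
interface LocationProvider {
    double latitude();
    double longitude();
}

// Production code depends only on the interface, injected via the constructor.
class TrackingPresenter {
    private final LocationProvider locations;

    TrackingPresenter(LocationProvider locations) {
        this.locations = locations;
    }

    String describePosition() {
        return locations.latitude() + "," + locations.longitude();
    }
}

// In a plain JUnit test, a fake stands in for the real GPS.
class FakeLocationProvider implements LocationProvider {
    public double latitude()  { return 52.52; }
    public double longitude() { return 13.40; }
}

A test can then do new TrackingPresenter(new FakeLocationProvider()) and assert on describePosition() without an emulator or a real device.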
For more info I recommend this great video by Jake Wharton:
https://www.parleys.com/tutorial/5471cdd1e4b065ebcfa1d557/
Here are the slides for the video:
https://speakerdeck.com/jakewharton/dependency-injection-with-dagger-2-devoxx-2014
Answer 2
"PumpModule declares a provider that takes the thing it is supposed to
provide as a parameter"
That provider provides the interface, not the concrete class. That is the whole point. When you migrate your app to DI you have to create an interface for each class that you want injected. As explained in Answer 1, that way you will be able to easily replace the concrete implementation with a mock object for testing, or with a new, better implementation in later versions of the app.
For example: at some point you decide that you need a RotaryVanePump instead of the Thermosiphon. You write your RotaryVanePump class and then simply change your provider method to @Provides Pump providePump(RotaryVanePump pump) {.
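A hedged sketch of what that module could then look like (Pump is from the coffee example; RotaryVanePump is the hypothetical replacement discussed here):

import dagger.Module;
import dagger.Provides;

@Module
class PumpModule {

    // Dagger constructs the RotaryVanePump itself (see the steps below) and
    // hands it to this method, which exposes it to the graph as the Pump interface.
    @Provides
    Pump providePump(RotaryVanePump pump) {
        return pump;
    }
}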
How this works ((over)simplified explanation):
The DI graph is built by DaggerCoffeApp_Coffe.builder().build() (please see Answer 3 first).
At some point Dagger finds in your code @Inject Pump mMyPump;
Dagger sees that you need a Pump injected and looks in the DI graph for a way to provide it.
It finds the @Provides Pump providePump() method and sees that it requires a RotaryVanePump object.
Dagger looks in the DI graph for a way to provide a RotaryVanePump.
There is no provider method for it in any module, but RotaryVanePump does not need one because it has a parameterless constructor, so Dagger can instantiate it.
A new object of type RotaryVanePump is instantiated.
Dagger feeds this object into providePump() as the actual parameter.
providePump() returns with that object as its return value.
The RotaryVanePump is injected into the @Inject Pump mMyPump field.
And all this is done automatically; you don't have to care about it.
Answer 3
DaggerCoffeApp_Coffe is generated by Dagger. You will have to use this plugin in order for Android Studio to "see" the generated classes:
https://bitbucket.org/hvisser/android-apt
"What is that doing?
That is doing the whole magic :-). It builds the dependency graph and checks at compile time that all dependencies are satisfied. That "at compile time" differentiates Dagger from all the other DI frameworks, which are unable to validate the graph at compile time; there, if you forget to define some dependency, you will get an ugly runtime error.
In order for all of your @Injects to work, you will first have to build the graph with a call like this.
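A sketch of what that can look like; the component is assumed to be an interface Coffe nested in CoffeApp (matching the generated name DaggerCoffeApp_Coffe from the question), and maker() is the entry-point method from the official coffee example, so the exact names may differ in your code:

public class CoffeApp {

    public static void main(String[] args) {
        // Build the graph once; all the wiring is validated at compile time.
        CoffeApp.Coffe coffe = DaggerCoffeApp_Coffe.builder().build();

        // Asking the component for an object triggers the injection chain
        // described in Answer 2.
        coffe.maker().brew();
    }
}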
I have a layered web application driven by spring-jpa-hibernate and I'm now trying to integrate Elasticsearch (a search engine).
What I want to do is capture all postInsert/postUpdate events and send those entities to Elasticsearch so that it can reindex them.
The problem I'm facing is that my "dal-entities" project will have a runtime dependency on the "search-indexer", while the "search-indexer" will have a compile dependency on "dal-entities", since it needs to do different things for different entities.
I thought about making the "search-indexer" part of the DAL (since it can be argued that it operates on the data), but even then it should be part of the DAO section.
I think my question can be rephrased as: how can I have logic in a Hibernate event listener that cannot be encapsulated solely in the entities project (since it's not its responsibility)?
Update
The reason the dal-entities project depends on the indexer is that I need to configure the listener in the Spring configuration file that is responsible for the JPA context (which obviously resides in dal-entities).
The dependency is not compile-time scope but runtime scope (since at runtime the Hibernate context will need that listener).
The answer is Interfaces.
Rather than depending on the various classes directly (in either direction), you can depend on interfaces that surface the capabilities you need. This way you are not directly dependent on the classes but on the interfaces; the interfaces required by the "dal-entities" can, for example, live in the same package as the dal-entities, and the indexer simply implements them.
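A minimal sketch of that idea; the EntityIndexer name is an assumption for illustration, not from the question:

// Lives in the dal-entities project, next to the entities and the listener configuration.
public interface EntityIndexer {
    void index(Object entity);
}

// Lives in the search-indexer project. dal-entities never sees this class, only the
// interface; the concrete instance is wired in at runtime (e.g. by Spring), so the
// compile-time cycle disappears.
public class ElasticSearchEntityIndexer implements EntityIndexer {
    @Override
    public void index(Object entity) {
        // convert the entity and send it to Elasticsearch here
    }
}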
This doesn't fully remove the dependency, but it gives you a much looser coupling and makes your application a bit more flexible.
If you are still worried about things being too tightly coupled, or if you really don't want the two pieces to be circularly dependent at all, then I would suggest you rethink your application design. Asking another question here on SO with more details about some of your code and how it could be better structured would likely get you some good advice on how to improve the design.
Hibernate supports PostUpdateEventListener and PostInsertEventListener.
Here is a good example that might suit your case.
The main concept is being able to detect when your entity has changed and act on it, as shown here:
public class ElasticSearchListener implements PostUpdateEventListener {

    @Override
    public void onPostUpdate(PostUpdateEvent event) {
        if (event.getEntity() instanceof ElasticSearchEntity) {
            callSearchIndexerService(event.getEntity());
            // or
            // InjectedClass.act(event.getEntity());
            // or
            // callWebService(InjectedClassUtility.modifyData(event.getEntity()));
            ........
        }
    }
}
Edit
You might consider injecting the class that you want to isolate from the project (the one that holds the logic) using Spring.
Another option might be calling an outside web service that is not dependent on your code,
passing it either your original project object or one modified by a utility to fit Elasticsearch.
I plan to create 3 editions of a piece of software in Java from an existing code base. What's the best practice? Should I create 3 different projects, one for each edition? Also, are there any tools for managing editions?
My first thought would be to keep a single codebase if at all possible and use some kind of flag for switching.
If that's not possible, I'd try to keep as much as possible the same and have the larger project use the smaller project as a sub-project, if possible automatically enabling some features - for instance, each of the projects could have its own main, and the different builds may just call the different mains, which set flags to enable features.
If your flags are final, it should even avoid pulling unnecessary code into your project at all.
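A tiny sketch of that final-flag idea (the flag and class names are made up): because the flag is a compile-time constant, javac omits statements guarded by a false value from the generated bytecode.

public final class Edition {

    // Compile-time constant; flip it (or generate this class) per build.
    public static final boolean PAY_EDITION = false;

    private Edition() {
    }
}

class MenuBuilder {
    java.util.List<String> buildMenu() {
        java.util.List<String> items = new java.util.ArrayList<>();
        items.add("Open");
        if (Edition.PAY_EDITION) {
            items.add("Export invoices"); // pay-only feature, compiled away in the demo build
        }
        return items;
    }
}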
Finally, worst case, 3 branches in Subversion.
Edit:
You made me think a bit more on this and I think I found a better solution.
I think I'd separate this into four projects, combining all the "common" stuff into a base project and spreading the stuff that differs across the other three projects - so let's say you have base, demo, pay and business projects.
Where the three might differ, you would have an object from one of the demo/pay/business classes provide the functionality. For instance, if the pay version has new menu items, you might have a getMenuItems method in one of your objects. It would return a menu tree that you could place in your menu bar. The demo version would have fewer items.
In this way your "base" never knows which version it's running; it just uses objects.
To get the objects, I'd have a factory. The factory would look something like this:
private String[] availablePackages = {"business", "pay", "demo"};

public Object getMenuClass() throws Exception {
    for (String pkg : availablePackages) {   // "package" is a reserved word, so use another name
        Class<?> c = null;
        try {
            c = Class.forName("com.meh.myapp." + pkg + ".MenuClass");
        } catch (ClassNotFoundException e) {
            // No jar providing that package is on the classpath - try the next one.
        }
        if (c != null) {
            return c.newInstance();
        }
    }
    // Something went wrong, no implementation found.
    return null;
}
The upshot is that it will try to instantiate com.meh.myapp.business.MenuClass first, then ...pay.MenuClass, and finally ...demo.MenuClass.
This should allow you to change your configuration by simply shipping different jars - if you decide to only ship the demo and main jars, you'll get a demo app; by shipping the pay jar you'll get the pay app.
Note that it's pretty likely that you'll want business to delegate much of its work to "pay", and "pay" to delegate a bunch of its work to "demo", but demo knows NOTHING of business or pay, and "main" only knows of the other three reflectively.
This would be a nice solution because there is no configuration required - in fact, you'd only have to ship the pay jar to upgrade from demo to pay; the main and demo jars are reused and stay right where they are.
The ideal way would be dividing your app into "core", "additional stuff" and "more additional stuff" and combining these dependencies. Depending on the code base you have, though, this would probably be a lot of work.
If you use SVN for source code management, you can create 3 branches, one for each edition, from the existing code base. Generally people try to avoid that because, for example, if you need to fix a common bug across these branches, you will need to fix it in one branch and then merge the fix into the remaining branches. Maybe other source repositories handle that kind of situation better, but for SVN, I guess this is the only way.
As for managing versions, we use Maven. If you take the "core", "additional stuff", "more additional stuff" approach, Maven can help because it can track the version of each component pretty cleanly (using POMs).
EDIT: Bill's suggestion is probably the most practical. If you use a final flag, yes, the compiler should throw the unreachable code away.