Adoption of TDD on an older Java application

I've got a problem and I'm asking for your help.
I've started working on a web application that has no tests, is based on Spring 2.5 and Hibernate 3.2, and is not very well modularized, with classes of up to 5k lines. JSP is used as the view technology all over the place, with quite a lot of duplication (such as many similar search forms with very few differences but not many shared parts).
The application works well; everything is running just fine. But when there is a need to add or change some functionality, it is really slow and not very convenient.
Is there any possibility of employing TDD at this point? Or what would you recommend? I don't think I can keep developing it the way it is now; it is just getting messier all the time.
Thank you for your answers.

I would start by picking up a copy of Michael Feathers' book Working Effectively with Legacy Code - this is pure gold.
Once you learn techniques for refactoring and breaking apart your application at its logical seams, you can work on integrating TDD in newer modules, sprout classes and methods, etc.
Case in point, we recently switched to a TDD approach for a ten year old application written in almost every version of our framework, and while we're still struggling with some pieces, we've made sure that all of our new work is abstracted out, and all of the new code is under test.
So absolutely doable - just a bit more challenging, and the book above can be a tremendous help in getting started.

First, welcome to the club of good programmers who have to fix the crimes of their worse colleagues. :(
I've had that experience. In this case, one recommended practice is to develop tests for new features. You cannot stop now and develop tests for the whole application. What you can do is, every time you have to write a new feature, develop tests for that feature as well. If the feature requires changes in some sensitive places, start with tests for those places.
Refactoring is a big problem. Ideally, if you want to split a 5k-line class into 10 normal-sized classes, you should first develop test case(s) for the big class, then perform the refactoring, and then run the tests again to validate that you have not broken anything. This is very hard in practice, because when you change the design you change the interface, and therefore you cannot run exactly the same tests. So each time you have to make the hard decision about the best way forward and the minimal set of test cases that covers your ass.
For example, sometimes I performed a six-phase refactoring:
1. developed tests for the bad big class
2. developed new, well-designed code and changed the old class to be a facade over the new code
3. ran the test cases developed in #1 to validate that everything still works
4. developed new tests verifying that each new (small) submodule works well
5. refactored the code, i.e. removed all references to the big old class (which had become a lightweight facade)
6. removed the old class and its tests.
But this is the worst-case scenario. I had to use it when the code I was changing was extremely sensitive.
In short, good luck with your hard job. Prepare to work overnight and then receive 20 bug reports from QA and an angry email from your boss. :( Be strong. You are on the right track!
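The facade step (phase 2 above) might look like the following minimal sketch. All class and method names here are invented for illustration; the point is that the old public API survives while the body delegates to new, testable code.

```java
import java.util.Locale;

// New, well-designed code extracted from the legacy class.
class ReportFormatter {
    String format(String name, double total) {
        return name + ": " + String.format(Locale.ROOT, "%.2f", total);
    }
}

// The old 5k-line class, reduced to a thin facade.
public class LegacyReportService {
    // Existing callers keep compiling against the old API;
    // the implementation now just delegates to the new class.
    private final ReportFormatter formatter = new ReportFormatter();

    public String buildReportLine(String name, double total) {
        return formatter.format(name, total);
    }

    public static void main(String[] args) {
        // The test written in phase 1 still passes after the change (phase 3).
        LegacyReportService service = new LegacyReportService();
        if (!service.buildReportLine("Widgets", 3.5).equals("Widgets: 3.50"))
            throw new AssertionError("facade changed behaviour");
        System.out.println("phase 3: legacy tests still pass");
    }
}
```

Once all callers have been migrated to `ReportFormatter` directly (phase 5), the facade can be deleted (phase 6).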

If you feel like you can't make any changes for fear of breaking stuff, then you have answered your own question: you need to do something.
The first rule of holes is: If you are stuck in a hole, stop digging.
You need to institute a policy such that if code is committed without a test, that is the exception and not the rule. Use continuous integration and force people to keep the build passing.
I recommend starting by capturing the core functionality of the app in tests, both unit and integration. These tests should be a baseline that shows the necessary functionality is working.
You mentioned there is a lot of code duplication. That's the next place to go. Put a test around an area with duplicated code. You will be testing two or more items here, since there is duplication. Then refactor and see if the tests still pass.
Once you knock one domino down, the rest will follow.
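As a hypothetical illustration of testing around duplication before removing it: two near-identical helpers get pinned down by the same expectations, then collapse into one shared method while the old entry points survive. All names below are invented.

```java
public class DuplicationDemo {

    // After the refactor, both former copies delegate to this shared method.
    static String formatCustomerName(String first, String last) {
        return last.toUpperCase(java.util.Locale.ROOT) + ", " + first;
    }

    // The two legacy entry points are kept so existing callers don't break.
    static String formatForInvoice(String first, String last) {
        return formatCustomerName(first, last);
    }

    static String formatForReport(String first, String last) {
        return formatCustomerName(first, last);
    }

    public static void main(String[] args) {
        // These tests were written against BOTH duplicates before the
        // refactor; they must still pass afterwards.
        if (!formatForInvoice("Ada", "Lovelace").equals("LOVELACE, Ada"))
            throw new AssertionError();
        if (!formatForReport("Ada", "Lovelace").equals("LOVELACE, Ada"))
            throw new AssertionError();
        System.out.println("duplicates behave identically");
    }
}
```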

Yes there is definitely a place for TDD, but it is only a part of the solution.
You need to refactor this application before you can make any changes. Refactoring requires test coverage to be in place. Take small portions of obviously substandard code and write characterisation tests for them. This means you test all the variations possible through that code. You will probably find bugs doing this. Raise the bugs via your QA system and keep the buggy behaviour for now (lock the bugs in with your characterisation tests as other parts of the system might, for now, be relying on the buggy behaviour).
If you have very long and complex methods, you may call upon your IDE to extract small portions to separate methods where appropriate. Then write characterisation tests for those methods. Attack big methods in this way, bit by bit, until they are well-partitioned. Finally, once you have tests in place, you can refactor.
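A minimal sketch of the extract-and-characterise idea. The method, its names, and the discount rule (including its odd behaviour for negative quantities) are all invented; a characterisation test records what the code currently does, it does not judge it.

```java
public class PricingCharacterisation {

    // Extracted from a huge legacy calculate() method via the IDE's
    // "extract method" refactoring; behaviour deliberately unchanged.
    static double discountedPrice(double unitPrice, int quantity) {
        double total = unitPrice * quantity;
        if (quantity >= 10) {
            total *= 0.9; // legacy bulk discount, preserved as-is
        }
        return total;
    }

    public static void main(String[] args) {
        // Characterisation tests: pin down current behaviour before refactoring.
        if (discountedPrice(2.0, 5) != 10.0) throw new AssertionError();
        if (discountedPrice(2.0, 10) != 18.0) throw new AssertionError();
        // Possibly a bug, but locked in for now; other code may rely on it:
        if (discountedPrice(2.0, -1) != -2.0) throw new AssertionError();
        System.out.println("characterisation tests pass");
    }
}
```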
Integration tests can be useful in this circumstance to highlight happy-day scenarios or a few major error scenarios. But usually in this circumstance the application is far too complex to write a complete integration test suite, which means you might never be 100% protected against side effects using integration tests alone. That is why I prefer to 'extract method' and characterise.
Now that your application is protected from side-effects, you may add new features using TDD.

My approach would be to start adding tests piece by piece. If there is a section you know you're going to have to update in the near future, start getting some good coverage on that section. Then when you need to update/refactor, you have your regression tests. From the sounds of it, it will be a major undertaking to establish a comprehensive test suite, but it will most likely pay off in the end. I would also suggest using one of the various code coverage tools available to see how much your tests are actually covering.
Cheers!

You probably can't do test driven development at this point, except if you happen to add functionality that is easy to isolate from the rest of the system (which is unlikely).
However, you can (and should) certainly add automated tests of your core functionality. Those are at first not going to be real unit tests in the sense of testing small units of code in isolation, but IMO the importance of those is often overstated. Integration tests may not run as fast or help you pinpoint the cause of bugs as quickly, but they still help tremendously in protecting you against side effects of changes. And that's something you really need when you refactor the code to make future changes easier and real unit tests possible.
In general, go for the low hanging but juicy fruit first: write tests for parts of the code that can be tested easily, or break easily, or cause the most problems when they break (and where tests are thus most valuable), or ideally all of these together. This gives you real value quickly and helps convince reluctant developers (or managers) that this is a path worth pursuing.
A continuous build server is a must. A test suite that people have to remember to run manually to get its benefit means that you're wasting most of its benefit.

Related

Generating test data for unit test cases for nested objects

When mocking dependent services to write unit tests for any enterprise-grade Java service, I find setting up the data for the tests a huge pain. Most of the time this is the single most compelling reason for developers not to write unit tests, and to write integration-style tests instead. If the service depends on a couple of other services (which depend on their respective DAOs) and a DAO of its own, generating the when/thenReturn clauses for a reasonably nested object becomes quite an effort, and developers end up taking the easy route: loading the entire Spring context and sourcing their data from the direct sources, which may not always give data that traverses all the required code paths. With this in the background, a colleague of mine suggested: why not run a sample integration test and, using aspects, capture all the relevant data points and serialize them to an XML representation, which could then be used to materialize test data for the unit tests? To our pleasant surprise we found a framework on GitHub called TestDataCaptureJ which was very similar to this. It uses aspects to capture the data points and generates the Java code to create the objects.
The motivation stated on the site seemed very apt, and I was wondering if there are any other alternatives offering similar features. Also, it would be great if the experts could critique this overall approach.
Also, the project is about two years old and has a few bugs which we had to fix; we are hoping to give the fixes back as a mavenized GitHub fork. Just checking to ensure that there is no other similar initiative from one of the well-known stables.
Thanks in advance!
I have two critiques of that approach... and please bear in mind that my knowledge of your context is almost nil, which means that what I suggest here might not work for you.
I've only once experienced a problem like the one you mention, and it was a symptom of too much coupling between the objects, because their responsibilities were far too broad. Since then I have used a Domain-Driven Design approach and I haven't had this problem again.
I prefer to use Test Data Builders to create test data. This approach lets me have a template of what I want to build and just replace the bits I'm interested in for the test. If you decide to go this way, I strongly suggest you use a tiny library called Make-It-Easy that simplifies the creation of these builders.
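A hand-rolled version of the Test Data Builder pattern (the kind of builder Make-It-Easy helps you write more concisely) might look like this. The Customer domain here is purely hypothetical; the point is the defaults-plus-overrides shape.

```java
// A tiny, hypothetical domain object.
class Customer {
    final String name;
    final String country;
    Customer(String name, String country) {
        this.name = name;
        this.country = country;
    }
}

// Test Data Builder: sensible defaults, so a test overrides only
// the fields it actually cares about.
class CustomerBuilder {
    private String name = "Alice";
    private String country = "US";

    static CustomerBuilder aCustomer() { return new CustomerBuilder(); }
    CustomerBuilder named(String name) { this.name = name; return this; }
    CustomerBuilder from(String country) { this.country = country; return this; }
    Customer build() { return new Customer(name, country); }
}

public class BuilderDemo {
    public static void main(String[] args) {
        // The test reads as a statement of intent, not object plumbing.
        Customer c = CustomerBuilder.aCustomer().from("DE").build();
        if (!c.name.equals("Alice")) throw new AssertionError();  // default kept
        if (!c.country.equals("DE")) throw new AssertionError();  // only this overridden
        System.out.println(c.name + " / " + c.country);
    }
}
```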
And two suggestions, if you have some time:
Watch the presentation The Deep Synergy Between Testability and Good Design by Michael Feathers; part of the talk is about something very similar to what you're experiencing.
Read the book Growing Object-Oriented Software, Guided by Tests (aka GOOS); it has all sorts of insights about how to write simple, amazing, testable code.

How to write a unit test framework?

How to write a unit test framework?
Can anyone suggest some good reading?
I wish to work on basic building blocks that we use as programmers, so I am thinking of working on developing a unit test framework for Java.
I don't intend to write a framework that will replace junit;
my intention is to gain some experience by doing a worthy project.
There are several books that describe how to build a unit test framework. One of those is Test-Driven Development: By Example (TDD) by Kent Beck. Another book you might look at is xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
Why do you want to build your own unit test framework?
Which ones have you tried and what did you find that was missing?
If (as your comments suggest) your objective is to learn about the factors that go into making a good unit test framework by doing it yourself, then chapters 18-24 (Part II: The xUnit Example) of the TDD book show how it can be done in Python. Adapting that to Java would probably teach you quite a lot about Python, unit testing frameworks and possibly Java too.
It will still be valuable to you to have some experience with some unit test framework so that you can compare what you produce with what others have produced. Who knows, you might have some fundamental insight that they've missed and you may improve things for everyone. (It isn't very likely, I'm sorry to say, but it is possible.)
Note that the TDD people are quite adamant that TDD does not work well with databases. That is a nuisance to me as my work is centred on DBMS development; it means I have to adapt the techniques usually espoused in the literature to accommodate the realities of 'testing whether the DBMS works does mean testing against a DBMS'. I believe that the primary reason for their concern is that setting up a database to a known state takes time, and therefore makes testing slower. I can understand that concern - it is a practical problem.
Basically, it consists of three parts:
preparing set of tests
running tests
making reports
Preparing the set of tests means that your framework should collect all the tests you want to run. You can specify these tests (usually classes with test methods that satisfy some naming convention, are marked with a certain annotation, or implement a marker interface) in a separate file (Java or XML), or you can find them dynamically (by searching the classpath).
If you choose dynamic searching, you'll probably have to use a library that can analyse Java bytecode. Otherwise you'll have to load all the classes on your classpath, which a) takes a lot of time and b) will execute all static initializers of the loaded classes and can cause unexpected test results.
Running tests can vary significantly depending on the features of your framework. The simplest way is just calling the test methods inside a try/catch block, then analysing and saving the results (you have to handle two situations: an assertion exception was thrown, or it was not).
Making reports is all about printing analyzed results in xml/html/wiki or whatever else format.
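The three parts described above can be sketched in a few lines of Java. This toy runner collects test methods by naming convention, runs each inside a try/catch, and prints a small report; all class names are invented for illustration.

```java
import java.lang.reflect.Method;

public class TinyRunner {

    // A hypothetical test class, discovered by the "test" naming convention.
    public static class SampleTests {
        public void testAddition() { if (1 + 1 != 2) throw new AssertionError(); }
        public void testFailing()  { throw new AssertionError("expected failure"); }
    }

    public static int[] run(Class<?> testClass) throws Exception {
        int passed = 0, failed = 0;
        for (Method m : testClass.getMethods()) {
            // 1. Collect: any public method whose name starts with "test".
            if (!m.getName().startsWith("test")) continue;
            Object instance = testClass.getDeclaredConstructor().newInstance();
            try {
                m.invoke(instance);                 // 2. Run the test method.
                passed++;
            } catch (Exception | Error e) {
                failed++;                           // assertion thrown -> failure
            }
        }
        // 3. Report.
        System.out.println(passed + " passed, " + failed + " failed");
        return new int[] { passed, failed };
    }

    public static void main(String[] args) throws Exception {
        int[] result = run(SampleTests.class);
        if (result[0] != 1 || result[1] != 1) throw new AssertionError();
    }
}
```

A real framework adds annotations, setup/teardown, richer reporting, and classpath scanning, but this is the skeleton.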
The Cook's Tour was written by Kent Beck (I believe; it's not attributed) and describes the thought process that went into writing JUnit. I would suggest reading it and considering how you might choose an alternate line of development.

Java: Convenient way to refactor the application

We have an agile enterprise application built on JSP and Servlets without any design strategy.
The application was built in early 2002 and designed for around 1,000 users. Since 2002, we have received lots of requests from our marketing partners.
Currently the application has a lot of spaghetti code with lots of ifs and elses. One class has more than 20,000 lines of code, with a huge body of functions and no abstraction.
Now we need to support billions of records,
so what do we need to do, immediately and gradually?
Do we have to refactor the application?
Which framework do we need to use?
How will using the framework help the end users?
How to convince the leaders to do the refactoring?
How do we get faster response times compared to the current system?
Here is how I would approach this if I had appropriate company resources at my disposal (yeah right):
Get a good QA process going, with automated regression testing set up before making significant changes. I don't care how good you are, you can't put a system like that under unit test and reasonably control for regressions.
Map out interdependencies, see how much an individual class can be tested as a unit.
How do you eat an elephant? One bite at a time. Take a given piece of required functionality (preferably something around the increase load requirements) and refactor the parts of the class or classes that can be worked on in isolation.
Learn how to do step 3 above by reading Working Effectively with Legacy Code.
Convenient way to refactor the application.
There are no "convenient" or "easy" ways to refactor an existing codebase, especially if the codebase looks like spaghetti.
... what do we need to do, immediately and gradually?
That's impossible to answer without understanding your system's current architecture.
Do we have to refactor the application?
On the one hand, the fact that you have a lot of poorly designed / maintained code would suggest that it needs some refactoring work.
However, it is not clear that it will be sufficient. It could be that a complete rewrite would be a better idea ... especially if you need to scale up by many orders of magnitude.
Which framework do we need to use?
Impossible to answer without details for your application.
How will using the framework help the end users?
It might reduce response times. It might improve reliability. It might allow more online users simultaneously. It might do none of the above.
Using a framework won't magically fix a problem of bad design.
How to convince the leaders to do the refactoring?
You need to convince them that the project is going to give a good return on investment (ROI). You / they also need to consider the alternatives:
what happens if you / they do nothing, or
is a complete rewrite likely to give a better outcome.
How do we get faster response times compared to the current system?
Impossible to answer without understanding why the current system is slow.
The bottom line is that you probably need someone from outside your immediate group (e.g. an external consultant) to do a detailed review of your current system and report on your options for fixing it. It sounds like your management doesn't trust your recommendations.
These are big, big questions. Too broad for one answer really.
My best advice is this: start small, if you can. Refactor piece by piece. And most importantly, before touching the code, write automated tests against the current codebase, so you can be relatively sure you haven't broken anything when you do refactor it.
It may not be possible to write these tests, as the code may not be testable in its current form. But you should still make this one of your main goals.
By definition refactoring shouldn't show any difference to the users. It's only to help developers work on the code. It sounds like you want to do a more extensive rewrite to modernize the application. Moving to something like JSF will make life a lot easier for developers and will give you access to web component libraries to improve the user experience.
It is a question which needs a lengthy answer. To start with I would suggest that the application is tested well and is working as per the specification. This means there are enough unit, integration and functional tests. The functional tests also have to be automated. Once these are in place, a step by step refactoring can take place. Do you have enough tests to start?

Our project contains 2600 class files - where and how should we start writing junit tests?

Our project contains 2600 class files and we have decided to start using automated tests.
We know we should have started this 2,599 class files ago, but how and where should a large project start writing tests?
Pick a random class and just go?
What's important to know? Are there any good tools to use?
Write a unit test before you change something, and for every bug you encounter.
In other words, test the functionality you are currently working on. Otherwise, it is going to take a lot of time and effort to write tests for all classes.
Start writing tests for each bug that is filed (Write the test, watch it fail, fix the bug, test again). Also test new features first (they are more likely to have errors). It will be slow in the beginning, but as your test infrastructure grows, it will become easier.
If you are using Java 5 or higher, use JUnit 4.
Learn about the difference of unit tests, integration tests and acceptance tests. Also have a look at mocking.
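The test-first bug-fixing loop (write the test, watch it fail, fix the bug, test again) might look like the sketch below. The bug report and method names are hypothetical, and plain runtime checks stand in for JUnit assertions so the example stays self-contained.

```java
public class BugFirstDemo {

    // Hypothetical bug report: "search treats 'Smith ' (trailing space) as a
    // different name from 'Smith'". The original code lacked the trim() calls;
    // this is the fixed version the regression test was written against.
    static boolean sameName(String a, String b) {
        return a.trim().equalsIgnoreCase(b.trim());
    }

    public static void main(String[] args) {
        // The regression test written straight from the bug report
        // (this failed against the old, un-trimmed code):
        if (!sameName("Smith ", "smith")) throw new AssertionError();
        // Behaviour that already worked, pinned down to guard against regressions:
        if (!sameName("Smith", "Smith")) throw new AssertionError();
        if (sameName("Smith", "Jones")) throw new AssertionError();
        System.out.println("bug stays fixed");
    }
}
```

Because the test reproduces the bug before the fix, it doubles as permanent proof the bug cannot silently return.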
Other answers have given useful advice, but I miss a clear articulation of the underlying principle: strive to maximize the benefit from your efforts. Covering a large legacy codebase with unit tests to a significant extent takes a lot of time and effort. You want to maximize the outcome of your effort from the start. This not only gives valuable feedback early on, but helps convincing / keeping up the support of both management and fellow developers that the effort is worth it.
So
start with the easiest way to test the broadest functionality, which is typically system/integration tests,
identify the critical core functionality of the system and focus on this,
identify the fastest changing/most unstable part(s) of the system and focus on these.
Don't try unit tests first. Do system tests (end-to-end-tests) that cover large areas of code. Write unit tests for all new code.
This way you stabilize the old code with your system regression tests. As more and more new code comes in, the fraction of code without unit tests begins to fade away. Writing unit tests for old code without the system tests in place will likely break the code, and it will be too much work to justify, as the code was not written with testability in mind.
You may find Michael Feathers' book Working Effectively with Legacy Code useful.
You're in a rough spot now, but write tests that bolster the most critical code you have. For example, if you have code that allows functionality based upon users' rights, then that's a biggy: test that. The routine that camel-cases a name and writes it to a log file? Not so much.
"If this code broke, how much would it suck" is a good litmus test.
"Our internal maintenance screens would look bad on IE6" is one answer.
"We'd send 10,000,000 emails to each of our customers" is another answer.
Which classes would you test first, hehe.
You might find this book relevant and interesting. The author explains how to do exactly what you ask for here.
http://my.safaribooksonline.com/0131177052
Oh, and one more thing - having an insufficient number of unit tests is far better than having none. Add a few at a time if that's all you can do. Don't give up.

Testing methodologies

What is the most commonly used approach used for testing in Java projects (iterative development) ?
My suggestion is that you should have a healthy mix of automated and manual testing.
AUTOMATED TESTING
Unit Testing
Use JUnit to test your classes, methods and the interaction between them.
https://junit.org/
Automated Functional Testing
If possible, you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a tool for it. If you are developing web sites/applications, you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to treat as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
Personal experience would suggest the most popular approach is none at all.
I've worked with TDD (Test Driven Development) before, and my feelings towards it are mixed. Essentially, you write your tests before you write your code, and write your code to satisfy the requirements of the test. TDD forces you to have an extremely clear idea about your requirements prior to starting. An added benefit is that, once you are done development, assuming you followed closely to TDD procedures, you'd have a complete set of test suites to go with the code. The down side is that it takes an extremely long time, and at times, you'd just want to skip a couple of steps (e.g. maybe writing code before tests like a sane person would prefer to do).
More can be read here (wiki link)
Unit testing?
Contract-based programming, a la Eiffel?
Waterfall model?
Different shops do different things. If there were one method to rule them all, you wouldn't be asking this question.
On the premise of doing testing at all, I would say that testing with JUnit is the common approach in Java.
Although most tests are written with JUnit, they tend to be integration tests rather than unit tests (meaning they don't test one thing in isolation, but several things together).
Additionally, tests are mostly not written test-first, but in parallel with, or after, a specific feature has been implemented.
If you join a team that makes more advanced use of testing, you will probably find a CI server (CruiseControl, Hudson) running the tests at least once a day during a nightly build.
In order of the most commonly used approach:
1. no tests at all
2. manual tests: running the app, clicking or providing input, checking results
3. trying to write some JUnit tests, forgetting about them, sliding back to 2 and 1
4. starting with TDD, seeing that it's hard, then sliding back to 3, 2 and 1
On the theoretical side, there are loads of ways to properly test code.
If you are looking for something practical, take a look at the
Clean Code Talks. Watch the whole series, about 5 talks (can't post more than one link).
My suggestion for testing a Java project is to keep it simple.
Steps:
1. Manual testing: achieve a stable product.
2. Automation testing: maintain the quality of the product.
3. Report generation and reporting: let people know the quality of the product.
4. Continuous integration: make it a completely automated, continuous process.
When a developer commits functionality, start testing it module by module. Compare the actual output with the expected output and log issues against that.
When the developer has resolved the issues, start integration testing, retest the resolved issues, and check whether any regressions occurred due to the fixes.
Finally, when the product becomes stable, start automating the modules.
You can also approach automation step by step:
1. Automate the modules.
2. Generate reports and send mail for a product health check.
3. Run continuous integration and automated testing on a private server on a local machine.
