How to write a unit test framework?
Can anyone suggest some good reading?
I want to work on the basic building blocks that we use as programmers, so I am thinking of developing a unit test framework for Java.
I don't intend to write a framework that will replace JUnit;
my intention is to gain some experience by doing a worthwhile project.
There are several books that describe how to build a unit test framework. One of those is Test-Driven Development: By Example (TDD) by Kent Beck. Another book you might look at is xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
Why do you want to build your own unit test framework?
Which ones have you tried and what did you find that was missing?
If (as your comments suggest) your objective is to learn about the factors that go into making a good unit test framework by doing it yourself, then chapters 18-24 (Part II: The xUnit Example) of the TDD book show how it can be done in Python. Adapting that to Java would probably teach you quite a lot about Python, unit testing frameworks and possibly Java too.
It will still be valuable to you to have some experience with some unit test framework so that you can compare what you produce with what others have produced. Who knows, you might have some fundamental insight that they've missed and you may improve things for everyone. (It isn't very likely, I'm sorry to say, but it is possible.)
Note that the TDD people are quite adamant that TDD does not work well with databases. That is a nuisance to me as my work is centred on DBMS development; it means I have to adapt the techniques usually espoused in the literature to accommodate the realities of 'testing whether the DBMS works does mean testing against a DBMS'. I believe that the primary reason for their concern is that setting up a database to a known state takes time, and therefore makes testing slower. I can understand that concern - it is a practical problem.
Basically, it consists of three parts:
preparing the set of tests
running the tests
making reports
Preparing the set of tests means that your framework should collect all the tests you want to run. You can specify these tests (usually classes whose test methods follow some naming convention, are marked with a certain annotation, or implement a marker interface) in a separate file (Java or XML), or you can find them dynamically (by searching the classpath).
If you choose dynamic searching, you will probably have to use a library that can analyse Java bytecode. Otherwise you will have to load every class on your classpath, which a) takes a lot of time and b) executes the static initializers of all loaded classes and can cause unexpected test results.
Running the tests can vary significantly depending on the features of your framework. The simplest way is to call each test method inside a try/catch block and analyse and save the result (you have to distinguish two situations: the assertion exception was thrown, or it was not).
Making reports is all about printing the analysed results in XML, HTML, wiki or whatever other format you need.
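As a rough sketch of how these three parts could hang together (the @MyTest annotation, TinyRunner and ExampleTest are all made up for illustration; this is not how JUnit is implemented):

    import java.lang.annotation.*;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.util.List;

    // Hypothetical marker annotation used to recognise test methods.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface MyTest {}

    public class TinyRunner {
        public static void main(String[] args) throws Exception {
            // 1. Preparing the set of tests: listed explicitly here; a real framework
            //    would scan the classpath or read a configuration file instead.
            List<Class<?>> testClasses = List.of(ExampleTest.class);

            int passed = 0, failed = 0, errors = 0;
            for (Class<?> testClass : testClasses) {
                for (Method method : testClass.getDeclaredMethods()) {
                    if (!method.isAnnotationPresent(MyTest.class)) continue;
                    Object instance = testClass.getDeclaredConstructor().newInstance();
                    method.setAccessible(true);
                    // 2. Running the tests: invoke inside try/catch and classify the outcome.
                    try {
                        method.invoke(instance);
                        passed++;
                    } catch (InvocationTargetException e) {
                        if (e.getCause() instanceof AssertionError) failed++; else errors++;
                    }
                }
            }
            // 3. Making reports: a console summary here; XML/HTML output is the same idea.
            System.out.printf("passed=%d failed=%d errors=%d%n", passed, failed, errors);
        }
    }

    // Example test class the runner would pick up.
    class ExampleTest {
        @MyTest
        void additionWorks() {
            if (1 + 1 != 2) throw new AssertionError("1 + 1 should be 2");
        }
    }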
The Cook's Tour is written by Kent Beck (I believe; it's not attributed), and describes the thought process that went into writing JUnit. I would suggest reading it and considering how you might choose an alternate line of development.
Related
I'd like to ask what the practical difference is between Cucumber and JUnit. I haven't worked with Cucumber at all; I found some documentation, but I'd greatly appreciate feedback from someone who has worked with both (I'm interested in a high-level overview).
To break it down, here is what I'm interested in (I'll be using Selenium, not Protractor):
Are there any things that Cucumber can't do compared with JUnit?
What's easier to use (coding, how fast you can write the tests)?
Both work with Page Objects?
Some things that I need to get done:
Test CSS styling
Test page responsiveness
Standard operations on WebElements (clicking, getting data, etc.)
Asserts.
Anything in addition to this is more than welcomed. Greatly appreciate your answer on this, thank you!
JUnit and Cucumber aim at different goals. They complement each other rather than replace each other.
Are there any things that Cucumber can't do compared with JUnit?
There isn't anything you can do with JUnit that you can't do with Cucumber. And the other way around.
The difference is that while JUnit aims at tests, Cucumber aims at collaboration with non-technical people. Non-technical people will not understand what a unit test does. They will, however, be able to understand and validate an example written in Gherkin.
What's easier to use (coding, how fast you can write the tests)?
There is more overhead when you use Cucumber. You will have to implement each step as a method and not just one test method as you would do if you used JUnit. The readability you gain from expressing examples using plain text is sometimes worth the extra work.
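A rough sketch of that difference, assuming the cucumber-java bindings (step annotations in io.cucumber.java.en) and a made-up login feature with a hypothetical LoginPage helper:

    // Gherkin scenario (login.feature) that non-technical people can read and validate:
    //
    //   Scenario: Successful login
    //     Given a registered user "alice"
    //     When she logs in with a valid password
    //     Then she sees her dashboard

    import static org.junit.Assert.assertTrue;

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;

    // Glue code: one method per step, instead of the single @Test method JUnit would need.
    public class LoginSteps {

        private final LoginPage loginPage = new LoginPage();  // hypothetical page object

        @Given("a registered user {string}")
        public void aRegisteredUser(String name) {
            loginPage.openFor(name);
        }

        @When("she logs in with a valid password")
        public void sheLogsInWithAValidPassword() {
            loginPage.login("a-valid-password");
        }

        @Then("she sees her dashboard")
        public void sheSeesHerDashboard() {
            assertTrue(loginPage.dashboardIsVisible());
        }
    }

Written directly in JUnit this would be a single test method; the Gherkin version costs extra glue code but gives you a description the whole team can read and validate.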
Both work with Page Objects?
Page Objects are an abstraction for the web page you are verifying. It is a class you write as developer/tester. The Page Objects can be used by both JUnit and Cucumber. In fact, there is no difference between the tools from that perspective.
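Continuing the made-up login example, a minimal Page Object sketch using Selenium's Java bindings (URL and locators are placeholders; in real code the driver is usually injected or shared rather than created here). Either a plain JUnit test or the Cucumber steps above could call it:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // Page Object: wraps the page's structure so tests express intent, not HTML details.
    public class LoginPage {

        private final WebDriver driver = new FirefoxDriver();  // any WebDriver implementation works

        public void openFor(String userName) {
            driver.get("https://example.com/login?user=" + userName);  // made-up URL
        }

        public void login(String password) {
            driver.findElement(By.id("password")).sendKeys(password);  // made-up locators
            driver.findElement(By.id("submit")).click();
        }

        public boolean dashboardIsVisible() {
            return !driver.findElements(By.cssSelector(".dashboard")).isEmpty();
        }
    }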
The choice to use JUnit or Cucumber is a matter of granularity and audience.
A workflow that works well is to mix the tools. Define examples of how the application should work using BDD (Cucumber, Gherkin). Implement these scenarios using Cucumber. Then use JUnit to work out details that are important but not necessarily interesting to the business stakeholders at a high level. Think of corner cases that matter but are too detailed for your stakeholders.
An image that describes this mix is available here: https://cucumber.io/images/home/bdd-cycle.png
I wrote a blog post a while back where I talk about the right tool for the job: http://www.thinkcode.se/blog/2016/07/25/the-right-tool-for-the-job
The right tool may be Cucumber. It can also be JUnit. It all depends on your audience.
Simply put, these two work on completely different levels of abstraction.
JUnit is mainly an automation framework; it gives you the ability to rapidly write test cases in the Java programming language. It provides annotations that make it easy to declare: "this method over here is a JUnit test". It was intended as a framework for unit tests, but many people also use it to drive full-scale "integration" or "functional" tests.
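For instance, a minimal JUnit 4 test (the Calculator class under test is made up):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class CalculatorTest {

        // The @Test annotation is all JUnit needs to treat this method as a test.
        @Test
        public void addsTwoNumbers() {
            assertEquals(4, new Calculator().add(2, 2));
        }
    }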
Cucumber, on the other hand, works at a much higher level of abstraction. You start by writing "test descriptions" in plain text. That leads to probably the key difference: you don't need to know Java to write a Cucumber test (you just need a Java programmer to provide the "glue code" that allows Cucumber to turn your text input into an executable piece of code).
In that sense, you are somewhat asking us to compare apples and turnips here, as one would use these two toolsets for different kinds of problems. But as outlined above, you can also use JUnit to drive "bigger" tests, so the main differentiation between the two tools is the level of abstraction you are dealing with.
EDIT: your comment is correct; as these tools are for different "settings", you shouldn't expect that a non-technical person alone will be able to use Cucumber to write good tests covering everything. Cucumber is a nice way to enable non-technical participation in creating tests, but in the end you are solving technical (Java-related) problems, so you need Java programming expertise at some point: either "within the same person" or at least spread across different people in your team.
Cucumber makes things look more user friendly, but I don't think business analysts really care about that. Ultimately developers have to write the unit tests, integration tests and Cucumber tests, so Cucumber adds little for a developer who has already written unit and integration tests, and the business analysts don't care because they have already specified what they want.
When mocking dependent services to write unit test cases for an enterprise-grade Java service, I find setting up the data for the unit test cases a huge pain. Most of the time this is the single most compelling reason for developers not to write unit test cases and to write integration-style test cases instead. If the service depends on a couple of other services (which depend on their respective DAOs) and a DAO of its own, generating the when/thenReturn clauses for a reasonably nested object becomes quite an effort. Developers then take the easy route: they load the entire Spring context and source their data from the real sources, which may not always give data that can traverse all the required code paths.
With this in the background, a colleague of mine suggested running a sample integration test and, using aspects, capturing all of the relevant data points and serializing them to an XML representation that could be used to materialize test data for the unit test cases. To our pleasant surprise we found a framework called TestDataCaptureJ on GitHub which was very similar to this. It uses aspects to capture the data points and generates the Java code to create the objects.
The motivation stated on the site seemed very apt and I was wondering if there are any other alternatives that can give similar features. Also, it would be great if the experts can critique this overall approach.
Also, the project is about 2 years old and has a few bugs which we had to fix; we are hoping to contribute the fixes back as a mavenized GitHub fork. I'm just checking to ensure that there is no other similar initiative from one of the well-known stables.
Thanks in advance!
I have two critiques to that approach... and please bear in mind that my knowledge of your context is almost nil, which means that what I suggest here might not work for you.
I've only once experienced a problem like the one you mention, and it was a symptom that there was too much coupling between the objects because their responsibilities were way too broad. Since then I have used a Domain-Driven Design approach and I haven't had this problem again.
I prefer to use Test Data Builders to create test data. This approach allows me to have a template of what I want to build and replace only the bits the test is interested in. If you decide to go this way, I strongly suggest you use a tiny library called Make-It-Easy that simplifies the creation of these builders.
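A hand-rolled sketch of the idea with a made-up Order domain; Make-It-Easy mostly removes the boilerplate of writing such builders yourself:

    // Test Data Builder: the test states only what matters, everything else gets a default.
    public class OrderBuilder {

        private String customerName = "any customer";  // sensible defaults
        private int quantity = 1;
        private boolean expressDelivery = false;

        public static OrderBuilder anOrder() {
            return new OrderBuilder();
        }

        public OrderBuilder withQuantity(int quantity) {
            this.quantity = quantity;
            return this;
        }

        public OrderBuilder withExpressDelivery() {
            this.expressDelivery = true;
            return this;
        }

        public Order build() {
            return new Order(customerName, quantity, expressDelivery);  // made-up domain class
        }
    }

    // In a test, only the relevant detail is spelled out:
    //   Order order = anOrder().withQuantity(5).build();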
And two suggestions, if you have some time:
Watch the presentation The Deep Synergy Between Testability and Good Design by Michael Feathers; part of the talk is about something very similar to what you're experiencing.
Read the book Growing Object-Oriented Software, Guided by Tests (aka GOOS); it has all sorts of insights about how to write simple, amazing, testable code.
I've got a problem and I'm asking you for help.
I've started working on a web application that has no tests, is based on Spring 2.5 and Hibernate 3.2, is not very well modularized, has classes of up to 5k lines, and uses JSP as the view technology all over the place, with quite a lot of duplication (like many similar search forms with very few differences but not many shared parts).
The application works, but when there is a need to add or change some functionality, progress is really slow and inconvenient.
Is there any possibility of employing TDD at this point? Or what would you recommend? I don't think I can keep developing it the way it is now; it is just getting messier all the time.
Thank you for your answers.
I would start by picking up a copy of Michael Feathers' book Working Effectively with Legacy Code - this is pure gold.
Once you learn techniques for refactoring and breaking apart your application at its logical seams, you can work on integrating TDD in newer modules, sprout classes and methods, etc.
Case in point, we recently switched to a TDD approach for a ten year old application written in almost every version of our framework, and while we're still struggling with some pieces, we've made sure that all of our new work is abstracted out, and all of the new code is under test.
So absolutely doable - just a bit more challenging, and the book above can be a tremendous help in getting started.
First, welcome to the club of poor good programmers who have to fix crimes committed by their worse colleagues. :(
I have had that experience. In this case one of the recommended practices is to develop tests for new features. You cannot stop now and write tests for the whole application. What you can do is, every time you write a new feature, develop tests for that feature as well. If the feature requires changes in some sensitive places, start writing tests for those places too.
Refactoring is a big problem. Ideally, if you want to split a 5k-line class into 10 normal-sized classes, you should first develop test case(s) for the big class, then perform the refactoring, and then run the tests again to validate that you have not broken anything. It is very hard in practice, because when you change the design you change the interface, and therefore you cannot run exactly the same tests. So each time you have to make the hard decision about what the best way is and what the minimal set of test cases is that covers your ass.
For example, I once performed a refactoring in phases:
1. developed tests for the bad big class
2. developed new, well-designed code and changed the old class into a facade over the new code (see the sketch below)
3. ran the tests developed in #1 to validate that everything still works
4. developed new tests that verify that each new (small) submodule works well
5. refactored the code, i.e. removed all references to the big old class (which had become a lightweight facade)
6. removed the old class and its tests.
But this is the worst-case scenario; I had to use it when the code I was changing was extremely sensitive.
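A rough sketch of the facade phase, with made-up class names: the old god class keeps its public methods but only delegates to the newly extracted classes:

    // New, well-designed classes extracted from the old monster (names are made up).
    class PriceCalculator {
        double priceFor(String productId, int quantity) { /* new, tested logic */ return 0.0; }
    }

    class InvoicePrinter {
        String print(String customer, double total) { return customer + ": " + total; }
    }

    // The old 5k-line class, reduced to a facade so existing callers keep working
    // while they are migrated to the new classes one by one.
    public class LegacyOrderService {

        private final PriceCalculator calculator = new PriceCalculator();
        private final InvoicePrinter printer = new InvoicePrinter();

        public String invoice(String customer, String productId, int quantity) {
            double total = calculator.priceFor(productId, quantity);
            return printer.print(customer, total);
        }
    }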
In short, good luck with your hard job. Be prepared to work overnight and then receive 20 bug reports from QA and an angry email from your boss. :( Be strong. You are on the right track!
If you feel like you can't make any changes for fear of breaking stuff, then you have answered your own question: you need to do something.
The first rule of holes is: If you are stuck in a hole, stop digging.
You need to institute a policy such that if code is committed without a test, that is the exception and not the rule. Use continuous integration and force people to keep the build passing.
I recommend starting by capturing the core functionality of the app in tests, both unit and integration. These tests should be a baseline that shows the necessary functionality is working.
You mentioned there is a lot of code duplication. That's the next place to go. Put a test around an area with duplicated code. You will be testing two or more items here, since there is duplication. Then do a refactor and see if the tests still pass.
Once you knock one domino down, the rest will follow.
Yes there is definitely a place for TDD, but it is only a part of the solution.
You need to refactor this application before you can make any changes. Refactoring requires test coverage to be in place. Take small portions of obviously substandard code and write characterisation tests for them. This means you test all the variations possible through that code. You will probably find bugs doing this. Raise the bugs via your QA system and keep the buggy behaviour for now (lock the bugs in with your characterisation tests as other parts of the system might, for now, be relying on the buggy behaviour).
If you have very long and complex methods, you may call upon your IDE to extract small portions to separate methods where appropriate. Then write characterisation tests for those methods. Attack big methods in this way, bit by bit, until they are well-partitioned. Finally, once you have tests in place, you can refactor.
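A characterisation test simply pins down what the code does today, bugs included; a small sketch with a made-up DiscountCalculator:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DiscountCalculatorCharacterisationTest {

        // Not asserting what the code *should* do, only locking in what it
        // *currently* does, so refactoring cannot silently change behaviour.
        @Test
        public void zeroQuantityCurrentlyReturnsFullPrice() {
            // Surprising, possibly a bug, but it is today's behaviour: keep it for now
            // and raise it through the QA system.
            assertEquals(100.0, new DiscountCalculator().priceFor(0, 100.0), 0.001);
        }

        @Test
        public void tenItemsGetTenPercentOff() {
            assertEquals(90.0, new DiscountCalculator().priceFor(10, 100.0), 0.001);
        }
    }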
Integration tests can be useful in this circumstance to highlight happy-day scenarios or a few major error scenarios. But usually in this circumstance the application is far too complex to write a complete integration test suite. This means you might never be 100% protected against side effects using integration tests alone. That is why I prefer 'extract method' and characterise.
Now that your application is protected from side-effects, you may add new features using TDD.
My approach would be to start adding tests piece by piece. If there is a section you know you're going to have to update in the near future, start getting some good coverage on that section. Then when you need to update/refactor, you have your regression tests. From the sounds of it, it will be a major undertaking to establish a comprehensive test suite, but it will most likely pay off in the end. I would also suggest using one of the various code coverage tools available to see how much your tests are actually covering.
Cheers!
You probably can't do test driven development at this point, except if you happen to add functionality that is easy to isolate from the rest of the system (which is unlikely).
However, you can (and should) certainly add automated tests of your core functionality. Those are at first not going to be real unit tests in the sense of testing small units of code in isolation, but IMO the importance of those is often overstated. Integration tests may not run as fast or help you pinpoint the cause of bugs as quickly, but they still help tremendously in protecting you against side effects of changes. And that's something you really need when you refactor the code to make future changes easier and real unit tests possible.
In general, go for the low hanging but juicy fruit first: write tests for parts of the code that can be tested easily, or break easily, or cause the most problems when they break (and where tests are thus most valuable), or ideally all of these together. This gives you real value quickly and helps convince reluctant developers (or managers) that this is a path worth pursuing.
A continuous build server is a must. A test suite that people have to remember to run manually to get its benefit means that you're wasting most of its benefit.
Our project contains 2600 class files and we have decided to start using automated tests.
We know we should have started this 2,599 class files ago, but how and where should large projects start to write tests?
Pick a random class and just go?
What's important to know? Are there any good tools to use?
Write a unit test before you change something, and for every bug you encounter.
In other words, test the functionality you are currently working on. Otherwise, it is going to take a lot of time and effort to write tests for all classes.
Start writing tests for each bug that is filed (Write the test, watch it fail, fix the bug, test again). Also test new features first (they are more likely to have errors). It will be slow in the beginning, but as your test infrastructure grows, it will become easier.
If you are using Java 5 or higher, use JUnit 4.
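For example, a regression test for a filed bug, written before the fix (the OrderTotals class and the bug number are made up):

    import static org.junit.Assert.assertEquals;

    import java.util.Collections;

    import org.junit.Test;

    public class OrderTotalsBug1234Test {

        // Bug #1234: empty orders were reported with a total of -1 instead of 0.
        // Written first and watched to fail, then the fix made it pass; it now
        // guards against the bug coming back.
        @Test
        public void emptyOrderHasZeroTotal() {
            assertEquals(0, new OrderTotals().totalFor(Collections.emptyList()));
        }
    }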
Learn about the difference of unit tests, integration tests and acceptance tests. Also have a look at mocking.
Other answers have given useful advice, but what I miss is a clear articulation of the underlying principle: strive to maximize the benefit from your efforts. Covering a large legacy codebase with unit tests to a significant extent takes a lot of time and effort, so you want to maximize the outcome of that effort from the start. This not only gives valuable feedback early on, but also helps convince management and fellow developers that the effort is worth it, and keeps their support up.
So
start with the easiest way to test the broadest functionality, which is typically system/integration tests,
identify the critical core functionality of the system and focus on this,
identify the fastest changing/most unstable part(s) of the system and focus on these.
Don't try unit tests first. Do system tests (end-to-end-tests) that cover large areas of code. Write unit tests for all new code.
This way you stabilize the old code with your system regression tests. As more and more new code comes in, the fraction of code without unit tests begins to shrink. Writing unit tests for old code without the system tests in place will likely break the code and will be too much work to justify, as the code was not written with testability in mind.
You may find Michael Feathers' book Working Effectively with Legacy Code useful.
You're fairly dorked now, but write tests that bolster the most critical code you have. For example if you have code that allows functionality based upon users' rights, then that's a biggy - test that. The routine that camelcases a name and writes it to a log file? Not so much.
"If this code broke, how much would it suck" is a good litmus test.
"Our internal maintenance screens would look bad on IE6" is one answer.
"We'd send 10,000,000 emails to each of our customers" is another answer.
Which classes would you test first, hehe.
You might find this book relevant and interesting. The author explains how to do exactly what you ask for here.
http://my.safaribooksonline.com/0131177052
Oh, and one more thing - having an insufficient number of unit tests is far better than having none. Add a few at a time if that's all you can do. Don't give up.
What is the most commonly used approach used for testing in Java projects (iterative development) ?
My suggestion is that you should have a healthy mix of automated and manual testing.
AUTOMATED TESTING
Unit Testing
Use a unit testing framework (NUnit in .NET, JUnit in Java) to test your classes, functions and the interaction between them.
http://www.nunit.org/index.php
Automated Functional Testing
If possible you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a dedicated tool. If you are developing web sites/applications you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
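A minimal sketch of such an automated functional check using Selenium's Java bindings (the URL and locators are placeholders):

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class SearchSmokeTest {

        @Test
        public void searchReturnsResults() {
            WebDriver driver = new FirefoxDriver();  // any WebDriver implementation works
            try {
                driver.get("https://example.com");                // placeholder URL
                driver.findElement(By.name("q")).sendKeys("unit testing");
                driver.findElement(By.name("search")).click();    // placeholder locator
                assertTrue(driver.getPageSource().contains("results"));
            } finally {
                driver.quit();
            }
        }
    }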
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to treat as pass/fail. A human can use their intelligence to find faults and raise questions that come up while testing something else.
Exploratory Testing
ET is a very low cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
Personal experience would suggest the most popular approach is none at all.
I've worked with TDD (Test-Driven Development) before, and my feelings towards it are mixed. Essentially, you write your tests before you write your code, and write your code to satisfy the requirements of the tests. TDD forces you to have an extremely clear idea about your requirements before starting. An added benefit is that, once development is done, assuming you followed TDD procedures closely, you have a complete test suite to go with the code. The downside is that it takes an extremely long time, and at times you just want to skip a couple of steps (e.g. writing code before tests, like a sane person would prefer to do).
More can be read here (wiki link)
Unit testing?
Contract-based programming, a la Eiffel?
Waterfall model?
Different shops do different things. If there were one method to rule them all, you wouldn't be asking this question.
Assuming testing is done at all, I would say that JUnit is the most common approach to testing in Java.
Although most tests are written with JUnit, they tend to be integration tests rather than unit tests (meaning they do not test one thing in isolation but several things together).
Additionally, tests are mostly not written test-first but in parallel with, or after, a specific feature has been implemented.
If you join a team that makes more advanced use of testing, you will probably find a CI server (CruiseControl, Hudson) running the tests at least once a day during a nightly build.
In the order of most commonly used approach:
1. no tests at all
2. manual tests: running the app, clicking or providing input, checking results
3. try to write some JUnits, forget about them, slide to 2 and 1
4. start with TDD, see that it's hard, then slide to 3, 2 and 1
On the theoretical side there are loads of ways to properly test the code.
If you are looking for something practical, take a look at the Clean Code Talks. Watch the whole series, about 5 talks (I can't post more than one link).
My suggestion for testing a Java project is to keep it simple.
Steps:
Manual testing: achieve a stable product.
Automation testing: maintain the quality of the product.
Report generation and reporting: let people know the quality of the product.
Continuous integration: make it a completely automated, continuous process.
When a developer commits functionality, start testing it module by module. Compare the actual output with the expected output and log the issues found.
When the developer has resolved the issues, start with integration testing, retest the resolved issues, and check whether any regressions occur due to the fixes.
Finally, when the product becomes stable, start automating the modules.
You can also approach automation step by step:
1. Automate the modules.
2. Generate reports and send a mail with the product health check.
3. Run continuous integration and automated testing on a private server or a local machine.