I've got two questions about organising unit tests.
Do I have to put tests in the same package as the tested class, or can I organise tests in different packages?
For example, if I have validity and other tests, is it correct to split them into different packages, even if they are for the same class?
What about mock and stub classes? Shall I separate them from packages containing only tests, or put them together?
The way we do our JUnit test cases is to put them in the same package, but in a different root directory. Since we use Maven, we just use the standard locations making the structure similar to the following.
src/main/java/com/foo/Bar.java
src/test/java/com/foo/BarTest.java
Obviously there's more to the structure, but this lets us build the tests separately from the mainline code while still accessing protected classes and the like. With respect to different types of tests, this is very subjective. When we started our testing effort (which unfortunately began after development), I tried to keep things pretty isolated. Unfortunately, it quickly became a nightmare when we got to the 500+ test case point. I've since tried to do more consolidation, which led to a reduced amount of code to maintain. As I said, though, it's very subjective.
As far as test-only code, we keep it in a separate com.foo.test package that resides only in the src/test/java tree.
I too tend to put my tests in the same package but under a different root directory. This allows me to test package-private classes, or to access package-private classes while testing something else in the package. The tests are kept in a separate directory tree to allow excluding them from the deployed result (in particular to ensure that test code doesn't accidentally end up in production code). What matters most, however, is what works for your situation.
In terms of how many test classes per production class, the theory I've seen is that you write one test class per fixture, that is, per setup structure. In many cases that is the same (or close enough) as one test class per production class, but I have sometimes written more test classes for a given production class (in particular, equality tests tend to be separated), and occasionally one test class for a group of related production classes (say, for testing the Strategy pattern).
Mostly, I don't worry too much about the theory, but rework the tests as needed to keep duplication to an absolute minimum.
Keeping tests in the same package allows you to use package-private visibility for code that is intended to be accessed only via the tests.
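For example (a minimal sketch with hypothetical names), a test in the same package can exercise a package-private method directly:

// src/main/java/com/foo/Widget.java
package com.foo;

public class Widget {
    // package-private: visible to same-package tests, hidden from other packages
    int internalCount() {
        return 42;
    }
}

// src/test/java/com/foo/WidgetTest.java
package com.foo;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class WidgetTest {
    @Test
    public void canCallPackagePrivateMethod() {
        assertEquals(42, new Widget().internalCount());
    }
}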
Regarding separate root directories, that is a good practice. It has an added advantage for us: since we use IDEA, the IDE recognizes that production code cannot reference test code.
In terms of keeping them separate, there is great power in having one, and only one, test class per production class at the unit level. Of course, some classes get created in production as part of refactoring and have no test classes at all, and that is fine, but when you want to know which test covers a certain class, a convention that says ClassNameTest contains the tests for ClassName is very helpful.
TestNG is much friendlier to this paradigm than JUnit, though.
Test classes should rather be in different packages; that makes it easier to separate them from the production code when you package it for release. I usually keep lots of test fluff in those packages: all sorts of mocks, configurations, scenarios... But when you build, none of it gets included. In some situations it's even a good idea to keep your testing stuff in a different project. It depends.
I am currently writing Selenium WebDriver tests for a variety of websites each using the same proprietary framework.
Because of this, there are many test cases that can be quite similar across different websites. As such I have made my test classes as generic as possible and made it so that every XPath/CSS Selector/ID used to locate elements is defined in a Constants class, which is unique to every project.
But in some cases the code for a given test can be identical across different websites, since I wrote the tests generically.
In addition, each test is a direct or indirect extension of a BasicTest class, which contains code likely to be reused by different tests (e.g. the WebDriver instance declaration).
The way I thought about setting my test structure was the following:
one generic project that is "reused" by each subsequent project;
one project per website, with its own definition of the Constants class and a TestSuite class that it can use to run both the generic tests and tests specific to itself.
This would allow me not to have copies of these generic tests in each of my test projects.
The problem is that I don't really know how to set this up. The GenericProject is going to contain tests that require values from Constants, but it makes no sense to have generic constants. Plus, will I be able to call those tests inside my website-specific TestSuites? If I redefine Constants in each specific project, will those constants be used by the generic tests defined in GenericProject?
How can I even set it up so that I can reuse Project A's classes inside of Project B, C, D... etc?
Extract your constants to a properties file which exists in each module as src/test/resources/common.properties.
Use org.apache.commons:commons-configuration2 PropertiesConfiguration to read this file. It will handle nested properties just fine.
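A minimal sketch of the loading code (the file location and class name here are just examples):

import java.io.File;

import org.apache.commons.configuration2.Configuration;
import org.apache.commons.configuration2.builder.fluent.Configurations;
import org.apache.commons.configuration2.ex.ConfigurationException;

public class TestConfig {
    // Loads the per-module constants from the common properties file
    public static Configuration load() throws ConfigurationException {
        return new Configurations()
                .properties(new File("src/test/resources/common.properties"));
    }
}

Tests can then look up values with, say, config.getString("login.button") instead of referencing a Constants class.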
Common code can be shared by depending on your GenericModule. Official instructions for the two models of doing this (extracting common tests to a new module, or using a test-jar) are here.
In general, in order to reuse code across projects, you would create a library containing the reusable code. To do so, you'd need to think about a suitable API for the library.
This contains decisions about:
How will functionality be called from dependent code?
How will dependent code provide required data?
If you are using constants, e.g. for CSS selectors, that are different per site but have the same semantics, e.g.:
root frame
side panel
main area
...
you might want to define an interface that the dependent code can provide. This could look like:
interface CssSelectors {
    String rootFrame();
    String sidePanel();
    // ...
}
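Each website project then supplies its own implementation, for instance (the class name and selector values are made up):

public class FooSiteCssSelectors implements CssSelectors {
    @Override
    public String rootFrame() {
        return "#root";
    }

    @Override
    public String sidePanel() {
        return ".side-panel";
    }
}

The generic tests depend only on the CssSelectors interface, and each concrete project passes in its own implementation.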
If you are building this for tests you might also want to use features of your test framework (e.g. Rules in JUnit).
When reusing code in tests you also should consider another aspect:
If someone reads the tests written with your library, will she be able to understand what is happening behind the boundary of the library well enough to grasp what the test is all about? This matters a lot more for test code than for production code: for test coverage and the validity of tests, how a setup or verification is done often matters much more than it does in production code.
I am new to JUnit and I am finding it difficult to test the small project I'm working on. All the examples of JUnit testing seem to involve math, whereas what I wrote is a simple application that takes information via a Scanner, creates various objects, and stores them in an ArrayList. One of my thoughts was to test whether the ArrayList is empty, but I obviously cannot access it from a separate class, and the standard seems to be to separate the tests from the code. So I am not sure what to do.
You could provide a protected accessor for the array so that it can be used by the unit tests (which should be in the same package).
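A minimal sketch of that idea, with hypothetical names (a package-private accessor works just as well as a protected one when the tests live in the same package):

import java.util.ArrayList;
import java.util.List;

public class Inventory {
    private final List<String> entries = new ArrayList<>();

    public void add(String entry) {
        entries.add(entry);
    }

    // test-only accessor; package-private keeps it out of the public API
    List<String> entries() {
        return entries;
    }
}

A same-package test can then assert, for example, that entries() is no longer empty after feeding the application some input.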
The typical well established practices are:
Unit test and production code goes into different files living in different projects but using the same package names.
Unit tests should not rely on internal state of production code. You don't want to write a test that needs to know about a field within the class under test. Because that means that your test can break when you change the production code to solve the problem differently.
The real answer here: you should share pieces of your code with us, to receive really helpful feedback.
We are considering using Cucumber on our project for acceptance testing.
When we write a scenario in a Cucumber feature, we write a list of Given, When and Then statements.
As we use the cucumber-jvm project, the Given, When and Then statements are mapped to Java methods in (JUnit) classes.
I want to know what the best organisation is for the code related to Given / When / Then in the project structure. My main concern is the maintenance of the Cucumber tests on a big project, where the number of scenarios is quite large, especially regarding the items that are shared between features.
I can see at least 2 main approaches:
Each feature is related to its own JUnit class. So if I have a foo/bar/baz.feature Cucumber file, I will find the related foo.bar.Baz JUnit class with the adequate @Given, @When and @Then annotated methods.
Separate @Given, @When and @Then methods into "thematic" classes and packages. For example, if my Cucumber scenario has a statement Given user "foo" is logged, then the @Given("^user \"([^\"]*)\" is logged$") annotated method will be located in a foo.user.User class method, but potentially the @When method used later in the same scenario will be in a different Java class and package (say, foo.car.RentCar).
For me, the first approach seems good in that I can easily see the relation between my Cucumber features and my Java code. But the drawback is that I can have a lot of redundancy or code duplication. Also, it may be hard to find a possibly existing @Given method, to avoid recreating it (the IDE can help, but here we are using Eclipse, and it does not seem to give a list of existing Given statements?).
The other approach seems better essentially when you have Given conditions shared among several Cucumber features, and thus I want to avoid code duplication. The drawback here is that it can be hard to make the link between the @Given Java method and the Given Cucumber statement (maybe, again, the IDE can help?).
I'm quite new to Cucumber, so maybe my question is not a good one, and with time and experience the structure will become self-evident, but I want to get good feedback on its usage...
Thanks.
I would suggest grouping your code according to the objects it refers to, similar to option #2 you presented in your question. The reasons being:
Structuring your code based on how and where it's being used is a big no-no. It's actually creating coupling between your feature files and your code.
Imagine such a thing in your product's code- the SendEmail() function wouldn't be in a class called NewEmailScreenCommands, would it? It would be in EmailActions or some such.
So the same applies here; structure your code according to what it does, and not who uses it.
The first approach would make it difficult to re-organize your feature files; You'd have to change your code files whenever you change your feature files.
Keeping code grouped by theme makes DRYing it much easier; you know exactly where all the code dealing with the user entity is, so it's easier for you to reuse it.
On our project we use that approach (e.g. a BlogPostStepDefinitions class), further separating the code, if a class gets too large, by type of step (e.g. BlogPostGivenStepDefinitions).
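As an illustration, such a thematic class might look like the following (hypothetical names; depending on your cucumber-jvm version, the annotations live in io.cucumber.java.en or cucumber.api.java.en):

import io.cucumber.java.en.Given;

public class UserStepDefinitions {

    @Given("^user \"([^\"]*)\" is logged$")
    public void userIsLogged(String username) {
        // log the given user in, e.g. via a shared helper or page object
    }
}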
We have also started using Cucumber-JVM for acceptance testing and have similar problems with organising code. We have opted to have one step definition class for each feature. At the moment this is fine, as the features we are testing aren't very complex and are quite separate; there is very little overlap between our features.
The second approach you mentioned would be better I think, but it is often challenging to tie together several different step definition classes for a single scenario. I think the best project structure will become clearer once you start adding more features and refactor as normal.
In the meantime, here is an Eclipse plugin for Cucumber:
https://github.com/matthewpietal/Eclipse-Plugin-for-Cucumber
It has syntax highlighting as well as a list of the existing available steps when writing a feature.
On the current project I am taking part in, we asked ourselves the very same question.
After fiddling a bit with the possibilities, what we opted for was a mix of the two solutions you described.
Steps grouped in theme-centric common step classes:
app-start steps
security check steps
[place random feature concern here] steps
And classes of scenario-specific (and in some cases even feature-specific) steps.
This gives us, at the same time, grouping of the factored-out code that is easy to identify and locate, yet it avoids cluttering those common classes with overly specific code.
The wiring between all these classes is handled by Spring (with cucumber-spring, which does a great job once you get the hang of it).
I'm currently consulting on an existing system, and I suspect the right next step is to add unit tests because of the types of exceptions that are occurring (null pointers, null lists, invalid return data). However, an employee who has a "personal investment" in the application insists on integration tests, even though the problems being reported are not related to specific use cases failing. In this case is it better to start with unit or integration tests?
Typically, it is very difficult to retrofit an untested codebase to have unit tests. There will be a high degree of coupling and getting unit tests to run will be a bigger time sink than the returns you'll get. I recommend the following:
Get at least one copy of Working Effectively With Legacy Code by Michael Feathers and go through it together with people on the team. It deals with this exact issue.
Enforce a rigorous unit testing (preferably TDD) policy on all new code that gets written. This will ensure new code doesn't become legacy code and getting new code to be tested will drive refactoring of the old code for testability.
If you have the time (which you probably won't), write a few key focused integration tests over critical paths of your system. This is a good sanity check that the refactoring you're doing in step #2 isn't breaking core functionality.
Integration tests have an important role to play, but unit tests are central to testing your code.
In the beginning, you will probably be forced to do integration tests only. The reason is that your code base is most likely very heavily coupled (just a wild guess, since there are no unit tests). Tight coupling means that you cannot create an instance of an object for a test without creating a lot of related objects first. This makes any test an integration test by definition. It is crucial that you write these integration tests, as they will serve as baselines for your bug-finding/refactoring efforts.
Write tests that document the bug.
Fix the bug so all the newly created tests are green.
It is time to be a good boy scout (leave the campsite/code in better order than it was when you entered): write tests that document the functionality of the class that contained the bug.
As part of your boy scout efforts, you start to decouple the class from others. Dependency injection is THE tool here. The idea is that no class should construct other classes internally; collaborators should be injected as interfaces instead.
Finally, when you have decoupled the class, you can decouple the tests as well. Now that you inject interfaces instead of creating concrete instances inside the tested class, you can substitute stubs/mocks. Suddenly your tests have become unit tests!
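A minimal sketch of that transition, with hypothetical names:

// The collaborator is referenced through an interface ...
public interface PaymentGateway {
    boolean charge(long amountInCents);
}

public class OrderService {
    private final PaymentGateway gateway;

    // ... and injected, rather than constructed inside the class
    public OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    public boolean checkout(long amountInCents) {
        return gateway.charge(amountInCents);
    }
}

// In a unit test, a stub replaces the real (slow, external) implementation
class AlwaysApprovesGateway implements PaymentGateway {
    public boolean charge(long amountInCents) {
        return true;
    }
}

Now new OrderService(new AlwaysApprovesGateway()) can be exercised without touching any external system.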
You can create integration tests as well, where you inject concrete classes instead of stubs and mocks. Just remember to keep them far away from the unit tests, preferably in another module. Unit tests should be able to run all the time and run very fast; don't let them be slowed down by slow integration tests.
The answer to the question depends on the context in which it is being asked. If you are taking over an existing codebase and considering rewriting or replacing large portions of the code, then it will be more valuable to design a comprehensive set of integration tests around the components you wish to rewrite or replace. On the other hand, if you are taking responsibility for an existing system that needs to be supported and maintained, you might want to start with unit tests to make sure that your more focused changes do not introduce errors.
I'll put it another way. If someone sends you an old car, take a look at it. If you are going to replace all of the components right away, then don't bother testing the minute performance characteristics of the fuel injector. If, on the other hand, you are going to be maintaining the car, as is, go ahead and write targeted unit tests around the components you are going to be fixing.
General rule: code without unit tests is brittle; systems without integration tests are brittle. If you are going to be focused on low-level code changes, write unit tests first. If you are going to be focused on system-level changes, write integration tests.
And, also, make sure to ignore everything you read on sites like this. No one here knows the specifics of your project.
Choosing between integration tests and unit tests is highly subjective. It depends on various metrics of the codebase, most notably cohesion and coupling of the classes.
The generic advice I would give is that if the classes are loosely coupled, then test setup will consume less time, and hence it will be much easier to start writing unit tests (especially against the more critical classes in the codebase).
On the other hand, in the event of high coupling, you might be better off writing integration tests against the more critical code paths, starting with a class that is loosely coupled (and resides much higher up in the execution stack). At the same time, attempt to refactor the classes involved to reduce coupling (while using the integration tests as a safety net).
I'm new to unit testing using JUnit (and Java development in general). When creating unit tests for private methods of classes, it looks as though the test file must be in the same package as the class being tested. What is the typical way of avoiding exporting the APIs of the unit tests? Can I make the classes/test methods package-private? Or do developers typically have a separate build for release that excludes unit test files?
I can tell IntelliJ or Ant not to package JUnit tests in the deployment. I have tests in a separate directory from the source code, which is what makes it possible.
Don't mingle source and test classes together. Keep them separate to make it easier for the tool/script you use to deploy.
The test file does not necessarily have to be in the same package as the class being tested. In fact, it is a good practice to have the test files in a completely separate package, allowing them to test the public API without being concerned with package-level implementation details.
Alternatively, you can set up your build script (e.g. Ant) to ignore files containing "Test" when you build your release executable.
Personally my approach is only to test exposed functionality, so you end up testing well encapsulated parts only.
This usually leads my design to contain small classes with well defined functionality, which are easier to test.
Generally, when unit testing you shouldn't be concerned with the internals of what you're testing, so I find this is the best way to approach it.
I also agree it's best to separate test and production code.
Keep test source code out of application source code. In general, only test exposed functionality. If you really need to test non-public behavior, create a test object that extends the real class and gives public access to the protected behavior.
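A minimal sketch of that technique (hypothetical names):

// Production class with protected behavior
public class Parser {
    protected String normalize(String input) {
        return input.trim().toLowerCase();
    }
}

// Test-only subclass that widens the access so tests can call it
public class TestableParser extends Parser {
    @Override
    public String normalize(String input) {
        return super.normalize(input);
    }
}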
I think it's a mistake to move your test code out of the package of the CUT (Class Under Test). At some point you may want to test a protected method or class, and having your test code in another package makes that hard or impossible.
A better solution is to create a separate directory for your test code that simply mirrors the package structure of your production code. Here's what I do:
src/main/java/com/example/Foo.java
src/test/java/com/example/FooTest.java
Then your build script can very simply ignore src/test/** when it comes time for packaging and deployment.