I've written a number of tests, divided not only into separate classes but, depending on what area of my application they're testing, into separate sub-packages. So, my package structure looks a bit like this:
my.package.tests
my.package.tests.four
my.package.tests.one
my.package.tests.three
my.package.tests.two
In the my.package.tests package I have a parent class that all tests in the sub-packages one through four extend. Now, I would like to choose the order in which the tests are run; not within the classes (which seems to be possible with the FixMethodOrder annotation) but the order of the classes or sub-packages themselves (so those in sub-package one first, then those in two, etc.). Some of the test classes use the Parameterized runner, just in case that makes a difference. Choosing the order is not required for the tests to succeed; they are independent of each other. It would, however, be helpful to sort them to reflect the order in which the various parts of the program are normally used, as it makes the analysis a bit easier.
Now, ideally I would like to have some configuration file which tells JUnit which order the tests should be executed in; I'm guessing however that it won't be so easy. Which options do I have here and what would be the easiest one? I'd also prefer to only have to list the sub-packages, not the various classes in the packages or even the test functions in the classes.
I'm using Java 1.6.0_26 (I don't have a choice here) and JUnit 4.11.
You can do this with test suites (you can "nest" these, too):
@RunWith(Suite.class)
@SuiteClasses({SuiteOne.class, SuiteTwo.class})
public class TopLevelSuite {}

@RunWith(Suite.class)
@SuiteClasses({Test1.class, Test2.class})
public class SuiteOne {}

@RunWith(Suite.class)
@SuiteClasses({Test4.class, Test3.class})
public class SuiteTwo {}
... And so on. Use the TopLevelSuite as an entry point, and the tests will execute in the order in which you define the @SuiteClasses array.
As for the methods within a class, you can specify the order they execute in with @FixMethodOrder (as you have already mentioned in your question).
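For the layout in your question, a sketch of how the top level could look; the per-sub-package suite names below are placeholders, and each of them would itself be a @RunWith(Suite.class) class listing the test classes of its sub-package, just like SuiteOne and SuiteTwo above:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

// Entry point: running this class executes the per-sub-package suites in this order.
@RunWith(Suite.class)
@SuiteClasses({SubPackageOneSuite.class, SubPackageTwoSuite.class,
               SubPackageThreeSuite.class, SubPackageFourSuite.class})
public class AllTestsSuite {}

In my experience, Parameterized test classes can be listed in @SuiteClasses like any other class, since the suite runs each class with its own runner.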
Related
Currently I have one Java class (StepDefinitions.java) that contains all of the following:
@Before, @After, @Given, @When, @Then
I want to split these annotations into four different classes: one class for @Before/@After, one class for @Given, one class for @When, and one class for @Then.
All 4 classes will be under the same package. Can this be done? Do I have to change anything in the runner class? Are there any other references I have to make to these separate classes, or should this just work when I call the Gherkin in my feature file?
It will work out of the box. As long as the code is in the package structure given to the glue option of @CucumberOptions, it will be loaded and executed. All the step definition and hook code is loaded for each scenario, so it doesn't matter which class the code lives in.
That said, it would be better to separate them according to the parts of the application they deal with.
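For reference, a minimal runner sketch; the feature and glue paths are illustrative, and the exact package of the Cucumber annotations depends on your Cucumber-JVM version:

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",  // where the .feature files live
        glue = "com.example.steps")                 // package scanned for step definitions and hooks
public class RunCucumberTest {
}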
Is there a unit testing annotation or feature flag that I can insert in my Java source code so that a method will only be triggered when running unit tests?
For Example:
public class A {
@Annotation
public void fooA() { }
public void fooB() { }
}
My Unit test class:
public class TestA {
...
}
In this scenario, when I run my unit test class TestA, I want to execute fooA() and fooB().
But if I run my prod source code that includes class A, then only fooB() will be executed and not fooA().
In general: don't go there.
Your production code has one responsibility, and one responsibility only: to do its production job. You do not factor in complex test-related aspects.
Occasionally, it might be required or helpful to have a constructor that takes more arguments (for dependency injection). Then just make that constructor package-private, and put an informal "/** unit test only */" comment on it.
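A small sketch of that idea (the class and the injected dependency are purely illustrative):

public class ReportGenerator {

    private final java.util.Random random;

    // Production code uses this constructor.
    public ReportGenerator() {
        this(new java.util.Random());
    }

    /** unit test only */
    ReportGenerator(java.util.Random random) { // package-private: tests in the same package inject a seeded Random
        this.random = random;
    }
}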
What I mean is: there is no such annotation, and it also wouldn't make sense to have one. You describe your external interface with clear Javadoc, and you write your classes so that it becomes obvious that a user should only use/call fooB().
Thing is: the last thing you want to happen is that some test-only artifact in your code causes an issue in your production environment. And the best way to avoid that risk: don't create such artifacts.
You can declare your test package to be the same as your unit-under-test (UUT).
package com.example.fubar.arglebargle;
public class A {
// package-private, TEST ONLY!
void fooA() { }
public void fooB() { }
}
Unit test class:
package com.example.fubar.arglebargle;
public class TestA {
// etc...
}
My IDE does this automatically with JUnit testing. All my tests are declared in the same package as my UUT (although they're obviously in a completely different source tree; don't put tests and production source in the same sub-dir!).
This is handy for accessing some internals that wouldn't be available otherwise. It also seems to be standard practice, so I think there's nothing against doing it.
This won't stop everyone from calling fooA(), but it will limit potential problems to classes in the same package as your class. That's a much smaller set of code to worry about, and often you can rely on people modifying code to read the comments in other classes in the same package carefully.
I think some test frameworks may be able to access private methods (maybe by disabling the security manager?) but I don't know off hand. If it's important, I'd research test frameworks.
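For what it's worth, plain reflection with setAccessible(true) is usually what such frameworks rely on; a minimal sketch, assuming fooA() were declared private rather than package-private:

import java.lang.reflect.Method;

public class InvokePrivateMethodExample {
    public static void main(String[] args) throws Exception {
        A instance = new A();
        // Look up the declared (non-public) method and bypass the access check.
        Method fooA = A.class.getDeclaredMethod("fooA");
        fooA.setAccessible(true); // can be blocked by a strict SecurityManager
        fooA.invoke(instance);
    }
}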
Why should code that is used only for tests be available in production?
1) It doesn't validate the application code, since the application doesn't use it.
2) It is error-prone for clients of the class that use it.
3) It makes the code harder to maintain and to read.
For example, a developer could wonder why this method is present if it has no caller in the application code.
Considering it dead code, he could remove the method and also remove the unit test that uses it.
To solve your problem, I think that this method, which is required only in non-production environments, should be included in the packaging of the application for non-production environments but excluded from the packaging for the production environment.
Tools such as Maven and Gradle handle this requirement very well.
Actually, you cannot do that as it stands, because the method is part of a class that is required in production. So a first step would be to extract the method into a class dedicated to testing purposes.
That way, filtering it out becomes possible.
OK, I have multiple classes in my TestNG suite, and all the methods inside those classes have a priority defined. When I run a single class, all the methods run in the order defined by the priority, but when I run multiple classes, TestNG doesn't run them in sequential order. First it runs a single method from Class A, then it runs another method from Class B, and then another method from Class C.
I want TestNG to run all the methods from Class A and then move on to the other classes, running all the methods from each class before moving to the next. Is there any way to achieve this?
This post should cover how to retain the order specified for classes while you use priority.
How do you manage dummy data used for tests? Keep them with their respective entities? In a separate test project? Load them with a Serializer from external resources? Or just recreate them wherever needed?
We have an application stack with several modules depending on another with each containing entities. Each module has its own tests and needs dummy data to run with.
Now a module that has a lot of dependencies will need a lot of dummy data from the other modules. Those, however, do not publish their dummy objects because they are part of the test resources, so all modules have to set up all the dummy objects they need again and again.
Also: most fields in our entities are not nullable so even running transactions against the object layer requires them to contain some value, most of the time with further limitations like uniqueness, length, etc.
Is there a best practice way out of this or are all solutions compromises?
More Detail
Our stack looks something like this:
One Module:
src/main/java --> gets jarred (.../entities/*.java contains the entities)
src/main/resources --> gets jarred
src/test/java --> contains dummy object setup, will NOT get jarred
src/test/resources --> not jarred
We use Maven to handle dependencies.
module example:
Module A has some dummy objects
Module B needs its own objects AND the same as Module A
Option a)
A test module T can hold all dummy objects and provide them in test scope (so the loaded dependencies don't get jarred) to all tests in all modules. Will that work? Meaning: if I load T in A and run install on A, will it NOT contain references introduced by T, especially not to B? Then, however, A will know about B's data model.
Option b)
Module A provides the dummy objects somewhere in src/main/java../entities/dummy allowing B to get them while A does not know about B's dummy data
Option c)
Every module contains external resources which are serialized dummy objects. They can be deserialized by the test environment that needs them, because it has the dependency on the module to which they belong. This will require every module to create and serialize its dummy objects, though, and how would one do that? If with another unit test, it introduces dependencies between unit tests, which should never happen; if with a script, it will be hard to debug and not flexible.
Option d)
Use a mock framework and assign the required fields manually for each test as needed. The problem here is that most fields in our entities are not nullable and thus will require setters or constructors to be called, which would put us right back where we started.
What we don't want
We don't want to set up a static database with static data, as the required objects' structure will constantly change: a lot right now, a little later. So we want Hibernate to set up all tables and columns and fill them with data at unit-testing time. Also, a static database would introduce a lot of potential errors and test interdependencies.
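For concreteness, a rough sketch of what we mean by letting Hibernate set up the schema at test time; the in-memory H2 settings and the entity class are illustrative, and the exact Configuration API varies a bit between Hibernate versions:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class TestSessionFactoryProvider {

    public static SessionFactory create() {
        return new Configuration()
                .setProperty("hibernate.connection.driver_class", "org.h2.Driver")
                .setProperty("hibernate.connection.url", "jdbc:h2:mem:testdb")
                .setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect")
                .setProperty("hibernate.hbm2ddl.auto", "create-drop") // create the tables for the test run, drop them afterwards
                .addAnnotatedClass(SomeEntity.class) // stand-in for one of our entity classes
                .buildSessionFactory();
    }
}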
Are my thoughts going in the right direction? What's the best practice to deal with tests that require a lot of data? We'll have several interdependent modules that will require objects filled with some kind of data from several other modules.
EDIT
Some more info on how we're doing it right now in response to the second answer:
So for simplicity, we have three modules: Person, Product, Order.
Person will test some manager methods using a MockPerson object:
(in person/src/test/java:)
public class MockPerson {
public Person mockPerson(parameters...) {
return mockedPerson;
}
}
public class TestPerson {
@Inject
private MockPerson mockPerson;
@Test
public void testCreate() {
Person person = mockPerson.mockPerson(...);
// Asserts...
}
}
The MockPerson class will not be packaged.
The same applies for the Product Tests:
(in product/src/test/java:)
public class MockProduct { ... }
public class TestProduct {
@Inject
private MockProduct mockProduct;
// ...
}
MockProduct is needed but will not be packaged.
Now the Order Tests will require MockPerson and MockProduct, so now we currently need to create both as well as MockOrder to test Order.
(in order/src/test/java:)
These are duplicates and will need to be changed every time Person or Product changes
public class MockProduct { ... }
public class MockPerson { ... }
This is the only class that should be here:
public class MockOrder { ... }
public class TestOrder {
@Inject
private order.MockPerson mockPerson;
@Inject
private order.MockProduct mockProduct;
@Inject
private order.MockOrder mockOrder;
@Test
public void testCreate() {
Order order = mockOrder.mockOrder(mockPerson.mockPerson(), mockProduct.mockProduct());
// Asserts...
}
}
The problem is that now we have to update person.MockPerson and order.MockPerson whenever Person is changed.
Isn't it better to just publish the Mocks with the jar so that every other test that has the dependency anyway can just call Mock.mock and get a nicely setup object? Or is this the dark side - the easy way?
This may or may not apply - I'm curious to see an example of your dummy objects and the setup code related. (To get a better idea of whether it applies to your situation.) But what I've done in the past is not even introduce this kind of code into the tests at all. As you describe, it's hard to produce, debug, and especially package and maintain.
What I've usually done (and AFAICT in Java this is the best practice) is try to use the Test Data Builder pattern, as described by Nat Pryce in his Test Data Builders post.
If you think this is somewhat relevant, check these out:
Does a framework like Factory Girl exist for Java?
make-it-easy, Nat's framework that implements this pattern.
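As a rough sketch of what that pattern looks like for something like the Person entity from your question (the fields and the Person constructor are made up):

public class PersonBuilder {

    // Defaults satisfy the "not nullable" constraints; tests override only what they care about.
    private String name = "Default Name";
    private String email = "default@example.com";

    public PersonBuilder withName(String name) { this.name = name; return this; }
    public PersonBuilder withEmail(String email) { this.email = email; return this; }

    public Person build() { return new Person(name, email); }
}

Usage in a test: Person alice = new PersonBuilder().withName("Alice").build();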
Well, I have read all the answers so far carefully, and it is a very good question. I see the following approaches to the problem:
Set up a (static) test database;
Each test has its own setup that creates (dynamic) test data prior to running the unit tests;
Use dummy or mock objects. All modules know all dummy objects, so there are no duplicates;
Reduce the scope of the unit test.
The first option is pretty straightforward and has many drawbacks: somebody has to recreate the data once in a while when the unit tests "mess it up", and if there are changes in the data module, somebody has to introduce corresponding changes to the test data, which is a lot of maintenance overhead. Not to mention that generating this data in the first place may be tricky. See also the second option.
For the second option, you write test code that, prior to testing, invokes some of your "core" business methods to create your entities. Ideally, your test code should be independent from the production code, but in this case you will end up with duplicate code that you have to support twice. Sometimes it is good to split your production business method in order to have an entry point for your unit test (I make such methods private and use reflection to invoke them; a remark on the method is also needed, and refactoring becomes a bit trickier). The main drawback is that if you must change your "core" business methods, it suddenly affects all of your unit tests and you can't test. So developers should be aware of this and not make partial commits to the "core" business methods unless they work. Also, with any change in this area, you should keep in mind how it will affect your unit tests. Sometimes it is also impossible to reproduce all the required data dynamically (usually because of third-party APIs; for example, you call another application with its own DB from which you are required to use some keys, and these keys, with their associated data, are created manually through the third-party application). In such a case, this data, and only this data, should be created statically. For example, you create 10,000 keys starting from 300,000.
The third option should be good. Options a) and d) sound pretty good to me. For your dummy objects you can use a mock framework or not; the mock framework is only there to help you. I don't see a problem with all of your modules knowing all of your entities.
The fourth option means that you redefine what a "unit" is in your unit test. When you have a couple of modules with interdependencies, it can be difficult to test each module in isolation. This approach says that what we originally tested was an integration test, not a unit test. So we split our methods and extract small "units of work" that receive all their interdependencies with other modules as parameters. These parameters can (hopefully) easily be mocked. The main drawback of this approach is that you don't test all of your code, but only, so to say, the "focal points". You need to do integration testing separately (usually done by a QA team).
I'm wondering if you couldn't solve your problem by changing your testing approach.
Unit Testing a module which depends on other modules and, because of that, on the test data of other modules is not a real unit test!
What if you injected a mock for all of the dependencies of your module under test so you can test it in complete isolation? Then you don't need to set up a complete environment where each depending module has the data it needs; you only set up the data for the module you're actually testing.
If you imagine a pyramid, then the base would be your unit tests, above that you have functional tests and at the top you have some scenario tests (or as Google calls them, small, medium and big tests).
You will have a huge number of unit tests that can test every code path because the mocked dependencies are completely configurable. Then you can trust in your individual parts, and the only thing that your functional and scenario tests will do is test whether each module is wired correctly to the other modules.
This means that your module test data is not shared by all your tests but only by a few that are grouped together.
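A minimal sketch of that idea, assuming Mockito on the test classpath; OrderService, PersonRepository and the other names are placeholders for your own modules:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class OrderServiceTest {

    // Hypothetical collaborator from the Person module.
    interface PersonRepository { String findNameById(long id); }

    // Hypothetical unit under test from the Order module.
    static class OrderService {
        private final PersonRepository persons;
        OrderService(PersonRepository persons) { this.persons = persons; }
        String describeOrderFor(long personId) { return "Order for " + persons.findNameById(personId); }
    }

    @Test
    public void usesThePersonModuleThroughAMock() {
        // No Person test data needed: the dependency is mocked and configured per test.
        PersonRepository persons = mock(PersonRepository.class);
        when(persons.findNameById(42L)).thenReturn("Alice");

        OrderService service = new OrderService(persons);

        assertEquals("Order for Alice", service.describeOrderFor(42L));
        verify(persons).findNameById(42L);
    }
}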
The Builder Pattern as mentioned by cwash will definitely help in your functional tests.
We are using a .NET Builder that is configured to build a complete object tree and generate default values for each property so when we save this to the database all required data is present.
I am fairly new to Java. I have constructed a single JUnit test class and inside this file are a number of test methods. When I run this class (in NetBeans) it runs each test method in the class in order.
Question 1: How can I run only a specific sub-set of the test methods in this class?
(Potential answer: Write @Ignore above @Test for the tests I wish to ignore. However, if I want to indicate which test methods I want to run rather than those I want to ignore, is there a more convenient way of doing this?)
Question 2: Is there an easy way to change the order in which the various test methods are run?
Thanks.
You should read about TestSuites. They allow you to group and order your unit test methods. Here's an extract from this article:
"JUnit test classes can be rolled up
to run in a specific order by creating
a Test Suite.
EDIT: Here's an example showing how simple it is:
public static Test suite() {
TestSuite suite = new TestSuite("Sample Tests");
suite.addTest(new SampleTest("testmethod3"));
suite.addTest(new SampleTest("testmethod5"));
return suite;
}
This answer tells you how to do it. Randomizing the order the tests run is a good idea!
Like the comment from dom farr states, each unit test should be able to run in isolation. There should be no residue after and no preconditions before a test run. All your unit tests should pass when run in any order, or as any subset.
It's not a terrible idea to have or generate a map of Test Case --> List of Test and then randomly execute all the tests.
There are a number of approaches to this, but it depends on your specific needs. For example, you could split up each of your test methods into separate test classes and then arrange them in different test suites (which would allow for overlap of methods between the suites if desired). Or, a simpler solution would be to make your test methods normal class methods, with one test method in your class that calls them in your specific order, as sketched below. Do you want them to be dynamically called?
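A bare-bones version of that second suggestion (the step methods are placeholders):

import org.junit.Test;

public class OrderedScenarioTest {

    // One real @Test method that drives plain helper methods in a fixed order.
    @Test
    public void runStepsInOrder() {
        stepOne();
        stepTwo();
        stepThree();
    }

    private void stepOne()   { /* assertions for the first step */ }
    private void stepTwo()   { /* assertions for the second step */ }
    private void stepThree() { /* assertions for the third step */ }
}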
I've not been using Java that long either, but as far as I've seen there isn't a convenient way of marking methods to execute rather than ignore. Instead, I think this could be achieved using the IDE. When I want to do that in Eclipse, I can use the JUnit view to run individual tests by clicking on them. I imagine there is something similar in NetBeans.
I don't know of an easy way to reorder the test execution. Eclipse has a button to rerun tests with the failing tests first, but it sounds like you want something more versatile.